Prof. Charles-Hubert Born, Prof. Geertrui Van Overwalle, Prof. Delphine Misonne, Prof. An Cliquet, Isabelle Durant, Koen Van Den Bossche, Arianna Broggiato, Arul Scaria, Anne Liesse, Heike Rämer, Caroline Van Schendel, Tom Dedeurwaerdere (email: tom.dedeurwaerdere@uclouvain.be)

EXECUTIVE SUMMARY

General recommendations

 Both Prior Informed Consent (PIC) and benefit-sharing should be established as general legal principles in Belgium.
 A phased approach should be adopted for the implementation of the Nagoya Protocol, making it possible to benefit from the implementation of the basic principles in a timely manner and to deal with more fine-grained choices at a later stage.

Specific recommendations

 Alongside the designation of Competent National Authorities (CNAs), a centralized input system to the CNAs should be established.
 With regard to compliance measures, sanctions should be provided for cases of non-compliance with the PIC and Mutually Agreed Terms (MAT) requirements set out by the provider country. When checking the content of MAT, a provision in the Code of Private International Law should provide for reference to provider country legislation, with Belgian law as a fallback option.
 At this stage of the implementation, the monitoring of the utilization of genetic resources and traditional knowledge by a checkpoint should be done on the basis of the PIC available in the ABS Clearing-House.
 With regard to access to Belgian genetic resources, it is recommended to refine the existing legislation relevant to protected areas and protected species, combined with a general notification requirement for access to other genetic resources. Later stages of implementation can then include the refinement of additional relevant legislation, as well as having ex-situ collections process the other access requests.
 At this stage of the implementation, and apart from the general obligation to share benefits, no specific benefit-sharing requirements should be imposed for the Mutually Agreed Terms. A combination of more specific requirements, including the possibility to use standard agreements, can be considered at a later stage of the implementation.
 The Royal Belgian Institute of Natural Sciences should be mandated to fulfill the information-sharing tasks on Access and Benefit-sharing under the Nagoya Protocol, through the ABS Clearing-House.

This study aims to contribute to the ratification and the implementation in Belgium of the Nagoya Protocol on Access and Benefit-sharing (ABS), thereby contributing to the conservation of biological diversity and the sustainable use of its components. This is in support of the overall goal of implementing the Convention on Biological Diversity (CBD), since the 2010 Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization is a protocol to the CBD.

The CBD is the main international framework for the protection of biodiversity. It has three objectives: (1) the conservation of biological diversity, (2) the sustainable use of its components and (3) the fair and equitable sharing of benefits arising from the utilization of genetic resources. The Nagoya Protocol therefore delineates the means of implementation of the third objective of the CBD. ABS potentially encompasses a large range of issues extending far beyond environmental matters alone, including market regulation and access, international trade, agriculture, health, development cooperation, research & development, and innovation. As a consequence, the future implementation of the Nagoya Protocol could be relevant to several departments and several levels of competence in Belgium.
Access and Benefit-sharing (ABS) in Belgium

Following successive transfers of competences since 1970, the federated entities bear the greatest responsibility for ABS-related issues, including environmental policy, agricultural policy, research and development, and economic and industrial policy. However, within these matters, the Federal Government possesses reserved and residual competences, relevant examples of which include, among others, the export, import and transit of non-indigenous plant varieties and animal species, industrial and intellectual property, and scientific research necessary for the exercise of its own competences. The large range of issues also implies an extended administrative distribution of ABS-related competences within each level of power. The implementation of the Nagoya Protocol, as a "double mixed treaty", will thus call upon competences of both the federal and the federated entities and require extensive inter- and intra-departmental coordination.

Access to genetic resources, as understood in the Nagoya Protocol, is not as such yet regulated by Belgian public law measures. Nevertheless, existing public and private law provisions already regulate related matters such as property rights, physical access to (genetic material in) protected areas and protected species, or the modification and transformation of natural environments. Several of these existing provisions could be used as a basis for the implementation of the Nagoya Protocol in Belgium. In order to fully understand the usefulness of these existing measures, four important preliminary remarks need to be made.

First, throughout this study, access to and utilization of genetic resources and traditional knowledge are analyzed within the framework of the Nagoya Protocol. The Protocol covers genetic resources and traditional knowledge that are provided by Parties from where such resources originate or by Parties that have acquired them in accordance with the Convention on Biological Diversity. Hence, this report covers:
 genetic resources possessed by a country in in-situ conditions and over which that country holds sovereign rights; and
 genetic resources possessed by a country in ex-situ collections and which have been acquired after the entry into force of the Nagoya Protocol and/or in accordance with the obligations of the Convention on Biological Diversity.

Second, the CBD distinguishes "genetic material" (i.e. any material of plant, animal, microbial or other origin containing functional units of heredity) from "genetic resources" (i.e. genetic material of actual or potential value).

Third, a distinction has to be made between the question of the legal ownership of genetic resources in their quality as material goods, on the one hand, and the regulation of the access to and utilization of genetic resources according to the Nagoya Protocol as an exercise of a sovereign right, on the other. The Belgian State holds sovereign rights over its genetic resources and can thus regulate the utilization of these resources by public law measures, as long as these are justified. However, physical access to and use of genetic material are already regulated by property law and by the liability and redress options made available under both civil and criminal procedures related to the enforcement of property rights.

Fourth, it is important to remember that while genetic resources can be seen as biophysical entities (e.g. a plant specimen, a microbial strain, an animal, etc.), they also include an "informational component" (i.e.
the genetic code). Access to genetic resources therefore relates to the physical component, to the informational component, or to both.

Taking the above into account, the currently available national provisions relevant to the legal status of genetic resources in Belgium mainly relate to the question of the legal ownership of genetic material. Flowing from the central tenets of the right to property found in the Civil Code, the conditions and rules surrounding the legal ownership of genetic material, as a biophysical entity, follow from those governing the ownership of the organism in which this material is found. Ownership of an organism means that the proprietor possesses the rights to use the specimen, to collect the benefits it yields and to alienate it.

Furthermore, any legal measure regulating access to genetic resources could benefit from building upon existing legislation on physical access to and use of genetic material. The rules regulating physical access to and use of genetic material depend upon the type of ownership (movable, immovable or res nullius), the existence of restrictions on ownership such as specific protection regimes (protected species, protected areas, forests or marine environments), and the location of the genetic material (all four Authorities apply their own rules).

As opposed to their physical components, the informational components of genetic resources may constitute a res communis: "things owned by no one and subject to use by all". While access to such informational components is not covered by subject-specific legislation, the exercise of some use rights can nevertheless be limited through intellectual property rights that have been recognized on portions, functions or uses of biological material resulting from innovations on these materials. These intellectual property rights can take the form of patents, plant variety rights or geographical indications.

Alongside these principles surrounding the legal status of genetic resources, a number of rules found in civil, criminal and private international law offer prospects of liability and redress in cases where an illicit acquisition of genetic resources is established. Their application differs according to whether the genetic resources are physical specimens or informational goods, and also according to where the illicit acquisition has taken place.

Finally, there are no contemporary legal provisions in Belgium explicitly governing the concepts of "traditional knowledge", "traditional knowledge associated with genetic resources" and "indigenous and local communities". However, concerns over traditional knowledge and the rights of indigenous and local communities have been addressed in some international instruments to which Belgium is a Party, such as the 1957 International Labor Organization (ILO) Convention No. 107 on Indigenous and Tribal Populations, the ILO Convention No. 169 on Indigenous and Tribal Peoples, and the United Nations Declaration on the Rights of Indigenous Peoples.

Preliminary recommendations for the options for the implementation of the Nagoya Protocol

While the Nagoya Protocol is a recent protocol, it is nonetheless the further implementation of the third objective of the CBD, which contains basic principles and ABS-related provisions such as the sovereignty of States over their natural wealth and resources, the fair and equitable sharing of benefits, and the importance of indigenous and local communities and their traditional knowledge.
Many Parties to the CBD throughout the world have therefore already implemented a series of measures on ABS, which can serve as useful first-hand experience for the implementation of the Nagoya Protocol. Based on these experiences, two sets of preliminary recommendations were established in this study with regard to the available options for the implementation of the Nagoya Protocol in Belgium. The first set of recommendations relates to the instruments required for the implementation of the core obligations emanating from the Protocol. The second set of recommendations relates to additional measures which are important elements to be taken into account during the implementation of the obligations, but which go beyond the core obligations.

With regard to the core obligations, the following is recommended:
 Clarify access conditions: As it holds sovereign rights over its genetic resources, Belgium can choose whether or not to require users to obtain Prior Informed Consent through the competent authority for access to genetic resources under its jurisdiction.
 Determine the format of the Mutually Agreed Terms: Once the Nagoya Protocol enters into force in Belgium, users operating on its territory will be required to share the benefits arising from the utilization of genetic resources. Such sharing shall be based upon MAT. However, the Nagoya Protocol does not impose a specific format for MAT, which can be left to the discretion of stakeholders or flow from guidelines and/or mandatory measures imposed by the State.
 Ensure ABS serves the conservation and sustainable use of biodiversity: It should be made sure that the implementation of the Nagoya Protocol supports the other two objectives of the CBD: the conservation of biodiversity and the sustainable use of its components. This can be done, for instance, by linking PIC to mandatory conditions on the sharing of benefits or by establishing a "benefit-sharing" fund which redirects the benefits towards the conservation and sustainable use of biodiversity.
 Facilitate access for biodiversity-related research: In order to foster biodiversity-related research and to avoid placing too heavy a burden on non-commercial research utilizing genetic resources, measures could be developed to facilitate access to genetic resources for non-commercial biodiversity-related research.
 Establish a Competent National Authority: Each Party has to designate a Competent National Authority that grants access, issues written evidence that access requirements have been met, and advises users on the applicable procedures and requirements for obtaining access to genetic resources. Given the institutional reality in Belgium, more than one CNA can be established. It should be noted that this task is of the highest priority, as Belgium needs to notify the CBD Secretariat of the contact information of its Competent National Authority or Authorities (and of its national focal point, which is already appointed) no later than the date of entry into force of the Protocol.
 Give binding effect to the domestic legislation of provider countries regarding PIC and MAT: As part of the implementation of the Protocol, the basic obligations with which domestic users have to comply when utilizing genetic resources in Belgium will have to be laid out. This obligation comes down to giving binding effect to the provider country's PIC and MAT.
This could be done by establishing an obligation in Belgian legislation to comply with the provider country's legislation regarding PIC and MAT, or by establishing a self-standing obligation in Belgian legislation to have PIC and MAT if so required by the provider country.
 Designate checkpoint(s) for the monitoring of the utilization of genetic resources: In order to comply with the Nagoya Protocol, at least one institution has to be designated to function as a checkpoint which monitors, and enhances transparency about, the utilization of genetic resources. This can be a new or an existing institution.

With regard to additional measures, the following issues are to be taken into consideration: a) specifying benefit-sharing requirements for the MAT; b) establishing a clear and transparent access procedure; c) clarifying additional rights and duties of the Competent National Authorities; d) establishing a monitoring system; e) creating incentives for users to comply; and f) encouraging the development of model clauses, codes of conduct and guidelines.

Selected options for the implementation of the Nagoya Protocol

In light of the preliminary recommendations for the options for the implementation of the Nagoya Protocol described above, six measures, each including several policy options, were discussed at the first stakeholder meeting on 29 May 2012. Based on the results of that meeting, they were selected by the Steering Committee of this study for an in-depth analysis of their environmental, social, economic and procedural impacts.

Prior to implementing these measures, it should be decided whether to establish both Prior Informed Consent and benefit-sharing as general legal principles in Belgium. While the latter is necessary to comply with the Nagoya Protocol, the former flows from the sovereign rights Belgium holds over its genetic resources and is not necessary for compliance.

If Prior Informed Consent is established as a general principle, a procedure needs to be established for access to Belgium's own genetic resources (measure 1). This can be done by modifying existing legislation, by relying upon qualified ex-situ collections, by requiring prior notification, or by a combination of these instruments.

Measure 1: operationalizing access to genetic resources
0. Option 0 - No Prior Informed Consent: Prior Informed Consent is not required for the utilization of genetic resources and traditional knowledge in Belgium.
1. Option 1 - The bottleneck model:
a. For protected genetic resources: access is made possible through a refinement of existing legislation relevant for protected areas and protected species;
b. For unprotected genetic resources: access is provided for through the Belgian ex-situ collections.
2. Option 2 - The baseline fishing net model:
a. For protected genetic resources: access is made possible through a refinement of existing legislation relevant for protected areas and protected species;
b. For unprotected genetic resources: access is accorded upon notification to the competent authority.
3. Option 3 - The modified fishing net model:
a. For protected genetic resources and genetic resources already covered by specific GR-relevant legislation: access is made possible through a refinement of existing legislation;
b. For unprotected genetic resources: access is accorded upon notification to the competent authority.

If benefit-sharing is established as a general principle, the conditions for the specific benefit-sharing requirements through the Mutually Agreed Terms need to be clarified (measure 2). The specific benefit-sharing requirements can be left to the discretion of users and providers (option 1), or be imposed by the State with more or less standardization (options 2 and 3).

Measure 2: specifying the benefit-sharing requirements for Mutually Agreed Terms
0. Option 0: No requirement of benefit-sharing for the utilization of genetic resources and traditional knowledge in Belgium.
1. Option 1: No specific benefit-sharing requirements are imposed by the competent authorities for the MAT. Users and providers are free to decide jointly on the content.
2. Option 2: Specific benefit-sharing requirements are imposed, including through standard formats for the MAT for certain uses, which are differentiated depending on the finality of access.
3. Option 3: Specific benefit-sharing requirements are imposed, but without standard formats for the MAT. While taking the benefit-sharing requirements into account, the MAT are tailored on a case-by-case basis by the users and providers. The benefit-sharing requirements are differentiated depending on the finality of access.

In order to comply with the Nagoya Protocol, one or several competent national authorities will need to be established (measure 3). Their task is to grant access, to issue written evidence that access requirements have been met, and to advise users on the applicable procedures and requirements for obtaining access to genetic resources. To fulfill these tasks, the competent national authorities will need to establish entry points for users of genetic resources. This can be done separately, with each authority having its own entry point (option 1), or jointly, with a single entry point for the different authorities (option 2).

Once the Nagoya Protocol enters into force in Belgium, compliance measures will need to be set up to make sure that genetic resources and traditional knowledge utilized on Belgian territory have been accessed in accordance with the law of the provider country (measure 4). This can be achieved by referring back to the legislation of the provider country in question and opening review of the content of MAT in accordance with provider country legislation, with Belgian law as a fall-back option (option 1), or by setting up a self-standing obligation under Belgian law (option 2). In the latter option, Belgian legislation would only refer to the specific obligation of requiring PIC and MAT by the provider country, without referring to the actual ABS legislation of the provider country.

Measure 4: setting up compliance measures
0. Option 0: No legal provisions on compliance with the Nagoya Protocol are introduced under Belgian law.
1. Option 1: A general criminal provision is created that refers back to the legislation regarding PIC and MAT of the provider country. The State enacts a general prohibition on utilizing genetic resources and traditional knowledge accessed in violation of the law of the provider country. Review of the content of MAT by judges is subject to provider country legislation, with Belgian law as a fall-back option.
2. Option 2: A provision is created containing an obligation to have PIC and MAT from the provider country for the utilization in Belgium of foreign genetic resources, if this is required by the legislation of the provider country.

In order to comply with the Nagoya Protocol, at least one checkpoint needs to be created to monitor the utilization of genetic resources and traditional knowledge in Belgium (measure 5). If Belgium decides to introduce checkpoints, their implementation could take place in several phases. In order to respect the political commitment to a timely ratification of the Nagoya Protocol, the first phase could consist of a minimal implementation requiring the establishment of a single checkpoint.
Two options seem relevant for the first phase, namely monitoring the PIC obtained by users, as available in the ABS Clearing-House (option 1), and upgrading the existing patent disclosure obligation (option 2). As options 1 and 2 are not mutually exclusive, a joint implementation could be envisaged.

Finally, a Belgian component of, or entry point to, the ABS Clearing-House will be created to support the exchange of information on specific ABS measures within the framework of the Nagoya Protocol (measure 6). Even though the discussions on the exact modalities of the ABS Clearing-House are still ongoing internationally, three possible candidates have been identified: the Royal Belgian Institute of Natural Sciences (option 1), the Belgian Federal Science Policy Office (option 2), and the Scientific Institute for Public Health (option 3).

Impact of the selected options for the implementation of the Nagoya Protocol

The evaluation of the possible consequences of implementing the above options was conducted through a detailed comparative multi-criteria analysis. This analysis also made it possible to identify the potentially affected stakeholders.

For the operationalization of access to genetic resources (measure 1), the bottleneck option (option 1) and the modified fishing net option (option 3) came out very close. The preference for these options can be explained by the fact that they are expected to provide more legal certainty, to have a better environmental impact and to correspond better to current practices than the other two options. These two options first require establishing, as a general legal principle, that access to Belgian genetic resources requires Prior Informed Consent.

For the specification of benefit-sharing requirements for Mutually Agreed Terms (measure 2), the two options under which specific benefit-sharing requirements are imposed by the Belgian State (options 2 and 3) both ranked better than the option where no specific benefit-sharing requirements are imposed (option 1). This is due to their good economic, environmental and procedural performance (option 2 also has a good social performance). Choosing these options requires adopting benefit-sharing as a general legal principle in Belgium.

Alongside the establishment of the Competent National Authorities, a centralized input system clearly came out as the recommended option (option 2 of measure 3). This option scores best on all the criteria and is strictly better on legal certainty and effectiveness for users and providers of genetic resources, at low cost.

For the setting up of compliance measures (measure 4), the option of referring back to provider country legislation, with Belgian law as a fallback option, is the recommended option resulting from this analysis. This can be explained by the closer conformity of this option with existing practices (mainly under the Belgian Code of Private International Law).

For the designation of one or more checkpoints (measure 5), the option of monitoring PIC in the ABS Clearing-House stands out as the recommended option. It scores at least as well as the alternatives on all criteria and has a better social and procedural performance.

Finally, for the sharing of information through the Clearing-House (measure 6), the preference goes to appointing the Royal Belgian Institute of Natural Sciences (RBINS), which performs better than the other options on most of the analyzed criteria.
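As an illustration of the method referred to above, the sketch below shows how a simple weighted-sum multi-criteria comparison of the measure 1 options could be computed. It is only a schematic reconstruction: the study does not publish numeric weights or scores, so the criteria weights and option scores used here are hypothetical placeholders chosen solely to demonstrate the ranking mechanics.

```python
# Illustrative weighted-sum multi-criteria analysis (MCA) sketch.
# All weights and scores are hypothetical placeholders; the study does not
# publish numeric values, so this only demonstrates the ranking mechanics.

CRITERIA_WEIGHTS = {
    "legal_certainty": 0.25,
    "economic": 0.20,
    "environmental": 0.20,
    "social": 0.15,
    "procedural": 0.20,
}

# Hypothetical scores (0-10) for the options of measure 1
# (operationalizing access to genetic resources).
OPTION_SCORES = {
    "option 0 (no PIC)": {
        "legal_certainty": 2, "economic": 4, "environmental": 2,
        "social": 3, "procedural": 5,
    },
    "option 1 (bottleneck)": {
        "legal_certainty": 8, "economic": 6, "environmental": 7,
        "social": 6, "procedural": 6,
    },
    "option 2 (baseline fishing net)": {
        "legal_certainty": 6, "economic": 6, "environmental": 6,
        "social": 6, "procedural": 6,
    },
    "option 3 (modified fishing net)": {
        "legal_certainty": 8, "economic": 7, "environmental": 7,
        "social": 5, "procedural": 6,
    },
}

def weighted_score(scores):
    """Aggregate one option's per-criterion scores into a single value."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

if __name__ == "__main__":
    # Rank the options from best to worst aggregate score.
    ranking = sorted(OPTION_SCORES.items(),
                     key=lambda kv: weighted_score(kv[1]),
                     reverse=True)
    for option, scores in ranking:
        print(f"{option}: {weighted_score(scores):.2f}")
```

Under such a model, two options "coming out very close", as reported for the bottleneck and modified fishing net models, simply means that their aggregate scores are nearly equal, while the "0" baseline ranks last on virtually every criterion.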
Recommendations resulting from the impact assessment

Two general recommendations result from the impact analysis of the study, along with a set of more specific recommendations for each of the measures.

First, the analysis shows that the no-policy-change baseline (the "0" option for each measure) clearly has the worst performance. This result leads to a first general recommendation, which is to implement both Prior Informed Consent and benefit-sharing as general legal principles in Belgium.

Second, the analysis confirmed the validity of a phased approach to the implementation of the Protocol. A phased approach will make it possible to benefit from the implementation of the basic principles in a timely manner and to deal with more fine-grained choices at a later stage. Moreover, the phased approach will be necessary in order to ratify the Nagoya Protocol in time and allow Belgium to participate as a Party to the Nagoya Protocol at the first COP/MOP in October 2014.

Finally, the impact assessment has led to a set of specific recommendations on each of the six measures described above:
1. Alongside the designation of Competent National Authorities (CNAs), a centralized input system to the CNAs should be established.
2. With regard to compliance measures, sanctions should be provided for cases of non-compliance with the PIC and MAT requirements set out by the provider country. When checking the content of MAT, a provision in the Code of Private International Law should provide for reference to provider country legislation, with Belgian law as a fallback option.
3. At this stage of the implementation, the monitoring of the utilization of genetic resources and traditional knowledge by a checkpoint should be done on the basis of the PIC available in the ABS Clearing-House.
4. With regard to access to Belgian genetic resources, it is recommended to refine the existing legislation relevant to protected areas and protected species, combined with a general notification requirement for access to other genetic resources. Later stages of implementation can then include the refinement of additional relevant legislation, as well as having ex-situ collections process the other access requests.
5. At this stage of the implementation, and apart from the general obligation to share benefits, no specific benefit-sharing requirements should be imposed for the Mutually Agreed Terms. A combination of more specific requirements, including the possibility to use standard agreements, can be considered at a later stage of the implementation.
6. The Royal Belgian Institute of Natural Sciences should be mandated to fulfill the information-sharing tasks on Access and Benefit-sharing under the Nagoya Protocol, through the ABS Clearing-House.

Implementation of the recommendations

To implement these recommendations, the phased approach could be organized as a three-step process:
1. In the first step, a political agreement should be concluded by the competent authorities, with a clear statement of the general legal principles to be adopted, along with some specification of the actions to be undertaken by the federal and the federated entities to establish these principles and put them into practice. These should include:
a. Establishment of benefit-sharing as a general legal principle in Belgium.
b. Establishment, as a general legal principle, that access to Belgian genetic resources requires PIC.
c. Establishment of the general principle concerning the designation of four Competent National Authorities.
d. Commitment that legislative measures will be taken to provide that genetic resources utilized within Belgian jurisdiction have been accessed with PIC and MAT, as required by provider country legislation, and to address situations of non-compliance.
e. Designation of the Belgian CBD Clearing-House Mechanism, managed by the Royal Belgian Institute of Natural Sciences, as the Belgian contribution to the ABS Clearing-House, for dealing with the information exchange on ABS under the Nagoya Protocol.

The reason for recommending such a political agreement is twofold. On the one hand, such an agreement provides a clear political commitment to the core obligations of the Nagoya Protocol, as it specifies the intentions of the competent authorities, within the limits of the decisions already taken at the international and European level at the time of the agreement. On the other hand, it does not prejudge the political decisions to be taken by the different authorities and thus allows sufficient flexibility to further adjust the implementation process at a later stage. The latter is especially important given the many questions that are still undecided at the present stage, both at the EU and at the international level, as mentioned and taken into account in this report.

2. In a second step, the specified actions should subsequently be implemented, for example through a cooperation agreement and/or by adding provisions to the relevant legislation, such as the environmental codes of the federated entities and the federal government, along with other possible requirements.

3. In a third step, additional actions can be undertaken once there is more clarity from the negotiations at the EU and international level.
Par conséquent, la future mise en oeuvre du Protocole de Nagoya pourrait être pertinente pour plusieurs départements et plusieurs niveaux de compétence en Belgique. L'Accès et le Partage des Avantages (APA) en Belgique Suite aux transferts successifs de compétences depuis 1970, les entités fédérées ont la responsabilité première pour les questions liées à l'Accès et au Partage des Avantages (APA), parmi lesquelles la politique environnementale, la politique agricole, la recherche et le développement, et la politique économique et industrielle. Cependant, le gouvernement fédéral détient dans ces domaines des compétences réservées et résiduelles, s'appliquant entre autres à l'importation, de l'exportation et du transit des espèces végétales et animales non indigènes, à la propriété industrielle et intellectuelle, et à la recherche scientifique nécessaire à l'exercice de ses propres compétences. La grande diversité des questions traitées nécessite aussi une distribution administrative étendue des compétences relatives à l'APA au sein de chaque niveau de pouvoir. La mise en oeuvre du Protocole de Nagoya, en tant que « traité mixte »4 , exigera donc des compétences à la fois de l'Etat fédéral et des entités fédérées, et requerra une coordination inter-et intra-départementale approfondie. L'accès aux ressources génétiques, tel que défini dans le Protocole de Nagoya, n'est pas encore régis en tant que tel par le droit public belge. Néanmoins, des dispositions existantes en droit public et privé réglementent déjà des cas apparentés, tels que les droits de propriété, l'accès physique aux (matériel génétique dans les) régions protégées et aux espèces protégées, ou encore la modification et la transformation des environnements naturels. Plusieurs de ces dispositions existantes pourraient servir de base pour la mise en oeuvre du Protocole de Nagoya en Belgique. Pour comprendre pleinement l'utilité de ces mesures existantes, il y a lieu de faire quatre remarques préliminaires importantes. Premièrement, tout au long de cette étude, l'accès et l'utilisation des ressources génétiques et du savoir traditionnel sont analysés dans le cadre du Protocole de Nagoya. Le Protocole traite des ressources génétiques et du savoir traditionnel qui sont fournis par les Parties qui sont les pays d'origine de ces ressources ou par les Parties qui les ont acquises conformément à la Convention sur la Diversité Biologique. Par conséquent, ce rapport traite:  des ressources génétiques qu'un pays possède dans des conditions in-situ et sur lesquelles il exerce un droit de souveraineté; et  des ressources génétiques qu'un pays possède dans des collections ex-situ et qui ont été acquises après l'entrée en vigueur du Protocole de Nagoya et/ou en accord avec les obligations de la Convention sur la Diversité Biologique. Deuxièmement, la Convention sur la Diversité Biologique distingue "matériel génétique" (c.-à-d. tout matériel végétal, animal, microbien ou de tout autre origine contenant des unités fonctionnelles d'hérédité) des "ressources génétiques" (c.-à-d. matériel génétique de valeur réelle ou potentielle). Troisièmement, il faut distinguer d'une part, la question de la propriété légale de ressources génétiques en leur qualité de biens matériels, et, d'autre part, la réglementation de l'accès et de l'utilisation des ressources génétiques en conformité avec le Protocole de Nagoya en tant qu'exercice d'un droit souverain. 
L'Etat belge détient des droits souverains sur ses ressources génétiques et peut donc réglementer l'utilisation de ces ressources par des mesures de droit public, pour autant que celles-ci soient justifiées. Cependant, l'accès physique au matériel génétique et leur utilisation sont déjà réglementés par la loi sur la propriété et par les options de responsabilité et de réparation accessibles dans les procédures civiles et pénales relatives au renforcement des droits de propriété. Quatrièmement, il est important de rappeler que si les ressources génétiques peuvent être considérées comme des entités biophysiques (par exemple, un spécimen végétal, une souche microbienne, un animal, etc.), elles comprennent un "composant informationnel" (c.-à-d. le code génétique). L'accès aux ressources génétiques concerne à la fois le composant physique et/ou le composant informationnel. intellectuelle peuvent prendre la forme de brevets, de protection des obtentions végétales ou d'indications géographiques. En parallèle de ces principes directeurs régissant le statut légal des ressources génétiques, le droit civil, pénal, et international privé contiennent des règles et procédures en matière de responsabilité et de réparation relatives à l'acquisition illicite de ressources génétiques. Leur application est différente selon que les ressources génétiques sont des spécimens physiques ou des biens d'information, mais aussi selon l'endroit où l'acquisition illicite a eu lieu. Enfin, il n'y a pas actuellement de dispositions légales en Belgique qui régissent explicitement les concepts de « connaissances traditionnelles », de « connaissances traditionnelles associées à des ressources génétiques » et de « communautés autochtones et locale ». En ce qui concerne les mesures supplémentaires, les questions suivantes doivent être prises en considération: a) spécifier les conditions pour les conditions convenues d'un commun accord (MAT); b) instaurer une procédure claire et transparente pour l'accès aux ressources génétiques; c) clarifier les droits et devoirs supplémentaires de(s) l'autorité(s) nationale(s) compétente(s); d) instaurer un système de surveillance (monitoring); e) créer des incitants à se conformer à l'adresse des utilisateurs ; f) encourager le développement de clauses contractuelles types, de codes de conduite et de lignes directrices. Options sélectionnées pour la mise en oeuvre du Protocole de Nagoya A la lumière des recommandations préliminaires relatives aux options pour la mise en oeuvre du Protocole de Nagoya décrites plus haut, six mesures, chacune comprenant plusieurs options politique, ont été discutées à la première réunion des parties prenantes le 29 mai 2012. Sur base des résultats de cette réunion, elles ont été sélectionnées par le Comité de Pilotage de l'étude pour une analyse en profondeur des impacts environnementaux, sociaux, économiques et procéduraux. Avant de mettre en oeuvre ces mesures, il doit être décidé s'il faut inscrire le consentement préalable donné en connaissance de cause (PIC) et le partage des avantages comme principes juridiques généraux en Belgique. Si ce dernier est indispensable pour se conformer au Protocole de Nagoya, le premier (PIC) découle des droits souverains que la Belgique exerce sur ses ressources génétiques et n'est pas indispensable pour la conformité avec le Protocole. 
Si le consentement préalable en connaissance de cause est effectivement inscrit comme principe général, il faut instaurer une procédure pour l'accès aux ressources génétiques belges (mesure 1). Cela peut être fait en modifiant la législation existante, en s'appuyant sur les collections ex-situ autorisées, en exigeant une notification préalable ou au travers d'une combinaison de ces dispositifs. Impact des options sélectionnées pour la mise en oeuvre du Protocole de Nagoya L'évaluation des conséquences possibles de l'application des options décrites ci-dessus a été conduite par une analyse comparative détaillée à critères multiples. Cette analyse a également permis d'identifier les parties prenantes qui pourraient être affectées. Pour l'opérationnalisation de l'accès aux ressources génétiques (mesure 1), le modèle « bottleneck » (option 1) et le modèle « fishing net » modifié (option 3) ont des performances très similaires. La préférence pour ces options peut être expliquée par le fait qu'elles sont supposés apporter une plus grande sécurité juridique, qu'elles auront un meilleur impact environnemental, et qu'elles correspondent mieux aux pratiques actuelles que les deux autres options. Ces deux options requièrent d'abord l'instauration du consentement informé préalable (PIC) pour l'accès aux ressources génétiques belges comme principe juridique général. En ce qui concerne la spécification des dispositions pour les conditions convenues d'un commun accord (MAT) (mesure 2), les deux options qui imposent des dispositions spécifiques par l'Etat belge (options 2 et 3) se classent mieux que l'option sans dispositions spécifiques (option 1). Cela s'explique par une meilleure performance économique, environnementale, et procédurale (l'option 2 présente aussi une bonne performance sociale). Choisir ces 2 options impose d'établir le 'partage d'avantage' comme principe juridique général en Belgique. En plus de l'instauration d'autorités nationales compétentes, l'option privilégiant un point d'entrée commun est clairement apparue comme l'option recommandée (option 2 de la mesure 3). Cette option a une bonne performance sur tous les critères, offre un meilleure sécurité juridique et est plus efficace pour les utilisateurs et les fournisseurs de ressources génétiques, à bas coût. Pour l'instauration des mesures de mise en conformité (mesure 4), l'option créant une disposition pénale générale se référant à la législation du pays fournisseur, avec la loi belge comme option de rechange, obtient le meilleur résultat. En effet, cette option présente une meilleure adéquation aux pratiques existantes (dans le Code de droit international privé). Quant à la désignation d'un ou plusieurs points de contrôle (mesure 5), l'option contrôlant le consentement préalable en connaissance de cause (PIC) de l'utilisateur dans le Centre d'Echange pour l'APA, est l'option recommandée. Cette option présente d'aussi bons résultats sur tous les critères que les autres options et présente un meilleur score sur le plan de la performance sociale et procédurale. Enfin, en ce qui concerne le partage de l'information par l'intermédiaire du Centre d'échange pour l'APA (mesure 6), la préférence va à la nomination de l'Institut Royal des Sciences Naturelles de Belgique, qui récolte de meilleurs résultats que les autres options sur la plupart des critères. 
Recommandations résultant de l'évaluation d'impact Deux recommandations générales résultent de l'analyse d'impact, en même temps qu'un ensemble de recommandations plus spécifiques pour chacune des mesures. D'abord, l'analyse montre que les options n'envisageant pas de changements de politique (les options « 0 » de chaque mesure) obtiennent clairement le résultat le moins bon. Ce score conduit à une première recommandation générale, qui est de mettre en oeuvre à la fois le 'Consentement informé préalable' (PIC) et le 'partage des avantages' (benefit-sharing) comme principes juridiques généraux en Belgique. Ensuite, l'analyse a confirmé la validité d'une approche par étapes pour la mise en oeuvre du Protocole. Une approche par étapes permettra de mettre en place les principes de base dans les temps requis et de traiter les options plus précises à un stade ultérieur. De plus, l'approche par étapes est nécessaire pour être en mesure de ratifier le Protocole de Nagoya dans les temps requis et de permettre à la Belgique de participer comme Partie au Protocole à la première Conférence des Parties (COP/MOP1) en octobre 2014. Enfin, l'évaluation d'impact a conduit à un ensemble de recommandations spécifiques pour chacune des six mesures : Implémentation des recommandations Pour réaliser ces recommandations, l'approche par étapes pourrait être organisée en par un processus en trois étapes : SAMENVATTING Algemene aanbevelingen  Zowel voorafgaande geïnformeerde toestemming (Prior Informed Consent, PIC) als de verdeling van voordelen (benefit-sharing) moeten worden ingevoerd als algemene vereisten in België.  Een gefaseerde aanpak moet worden gevolgd voor de implementatie van het Protocol van Nagoya. Op die manier kan voordeel worden gehaald uit de tijdige invoering van basisprincipes en kunnen specifiekere keuzes in een later stadium worden gemaakt. Specifieke aanbevelingen  Naast de oprichting van de Bevoegde Nationale Instanties, moet ook een gecentraliseerd aanspreekpunt worden gecreëerd voor deze instanties.  Wat de maatregelen inzake naleving van wet-of regelgeving (compliance) betreft, moeten sancties worden voorzien voor situaties van vaststelling van niet-naleving van de PIC en van de Onderling Overeengekomen Voorwaarden (Mutually Agreed Terms, MAT), zoals opgelegd door het oorsprongsland. Voor het controleren van de inhoud van de MAT zou een bepaling in het Wetboek van internationaal privaatrecht moeten verwijzen naar de wetgeving van het oorsprongsland, met de Belgische wetgeving als een eventuele terugvaloptie.  In de eerste uitvoeringsfase zou het controleren van het gebruik van genetische rijkdommen en traditionele kennis moeten gebeuren op basis van de PIC die beschikbaar is via het ABS Clearing-House mechanisme.  Met betrekking tot de toegang tot Belgische genetische rijkdommen, is het aanbevolen de bestaande relevante wetgeving inzake beschermde natuurgebieden en beschermde soorten te verfijnen, in combinatie met een algemene notificatievereiste voor de toegang tot andere genetische rijkdommen. In latere uitvoeringsfasen kan bijkomende relevante wetgeving dan eveneens worden verfijnd, en kan het verwerken van toegangsaanvragen voor andere genetische rijkdommen overgelaten worden aan ex-situ collecties.  In de eerste uitvoeringsfase, en los van de algemene verplichting om de voordelen te verdelen, zouden er geen specifieke vereisten moeten opgelegd worden voor het opstellen van Onderling Overeengekomen Voorwaarden (Mutually Agreed Terms). 
Een combinatie van meer specifieke vereisten, met de mogelijkheid om standaardakkoorden te gebruiken, kan in een latere uitvoeringsfase worden overwogen.  Het Koninklijk Belgisch Instituut voor Natuurwetenschappen zou moeten gemandateerd worden om de informatieuitwisselingstaken in verband met toegang en verdeling van de voordelen in het kader van het Protocol van Nagoya te vervullen, via het ABS Clearing-House. Het doel van deze studie is bij te dragen tot de ratificatie en implementatie in België van het Protocol van Nagoya inzake toegang en verdeling van voordelen (Access and Benefit-sharing, ABS), welke op haar beurt moet bijdragen tot het behoud van de biologische diversiteit en het duurzame gebruik van bestanddelen daarvan. De implementatie van het Protocol van Nagoya inzake "toegang tot genetische rijkdommen en de eerlijke en billijke verdeling van voordelen voortvloeiende uit hun gebruik" (2010), past in de algemene doelstelling die de implementatie van het Verdrag inzake biologische diversiteit (VBD) beoogt, daar het een protocol is bij het VBD. Het VBD is het voornaamste internationale instrument voor de bescherming van de biodiversiteit. Het heeft drie doelstellingen: (1) het behoud van de biologische diversiteit, (2) het duurzame gebruik van bestanddelen daarvan en (3) de eerlijke en billijke verdeling van de voordelen voortvloeiende uit het gebruik van genetische rijkdommen. Het Protocol van Nagoya bepaalt hoe de derde doelstelling gerealiseerd kan worden. ABS kan een breed scala van gerelateerde aangelegenheden omvatten die veel verder gaan dan louter milieuaangelegenheden, zoals regulering van en toegang tot de markt, internationale handel, landbouw, gezondheid, ontwikkelingssamenwerking, onderzoek & ontwikkeling, en innovatie. Bijgevolg zal de toekomstige implementatie van het Protocol relevant zijn voor verschillende departementen en verschillende beleidsniveaus in België. Toegang en verdeling van voordelen in België Na de opeenvolgende staatshervormingen sinds 1970, ligt de verantwoordelijkheid voor ABSaangelegenheden vooral bij de deelstaten, zoals het milieubeleid, landbouwbeleid, onderzoek en ontwikkeling, en het economisch en industriebeleid. Binnen die domeinen heeft de federale overheid echter gereserveerde en residuaire bevoegdheden. Voorbeelden hiervan zijn o.a. de in-, uit-en doorvoer van inheemse planten-en diersoorten, industriële en intellectuele eigendom, en wetenschappelijk onderzoek dat nodig is voor de uitoefening van haar eigen bevoegdheden. Het brede scala aan gerelateerde aangelegenheden veronderstelt ook een ruime administratieve verdeling van ABS-bevoegdheden binnen elk bevoegdheidsniveau. Voor de implementatie van het Protocol van Nagoya, als een "dubbel gemengd vedrag"6 , spelen de bevoegdheden van zowel de federale overheid als de deelstaten dus een belangrijke rol, en zal een uitgebreide inter-en intradepartementale samenwerking nodig zijn. De toegang tot genetische rijkdommen, zoals die in het Protocol van Nagoya is vastgelegd, is als dusdanig nog niet gereguleerd door Belgische publiekrechtelijke maatregelen. Toch worden gerelateerde aangelegenheden zoals het eigendomsrecht, de toegankelijkheid van (genetisch materiaal in) beschermde natuurgebieden en beschermde soorten, of het wijzigen van vegetatie, al gereguleerd door bestaande publiek-en privaatrechtelijke bepalingen. Deze bestaande bepalingen kunnen als basis worden gebruikt voor de implementatie van het Protocol van Nagoya in België. 
Om het nut van deze bestaande maatregelen volkomen te begrijpen moeten vier belangrijke, voorafgaande opmerkingen worden gemaakt. Ten eerste wordt in deze studie de toegang tot en het gebruik van genetische rijkdommen en traditionele kennis onderzocht in het kader van het Protocol van Nagoya. Het Protocol betreft genetische rijkdommen en traditionele kennis die worden verschaft door Partijen die het land van oorsprong van deze rijkdommen en/of kennis zijn of door Partijen die genetische rijkdommen in overeenstemming met het VBD hebben verworven. Bijgevolg betreft dit rapport:  genetische rijkdommen die een land bezit onder in-situ omstandigheden en waarop dat land soevereine rechten heeft; en  genetische rijkdommen die een land bezit in ex-situ collecties en die verworven werden na de inwerkingtreding van het Protocol van Nagoya en/of overeenkomstig de verplichtingen uit het Verdrag inzake Biologische Diversiteit. Ten tweede maakt het VBD een onderscheid tussen "genetisch materiaal" (m.a.w. alle materiaal van plantaardige, dierlijke, microbiële of andere oorsprong dat functionele eenheden van de erfelijkheid bevat) en "genetische rijkdommen" (m.a.w. genetisch materiaal van feitelijke of potentiële waarde). Ten derde moet een onderscheid worden gemaakt tussen het juridisch eigendom van genetische rijkdommen als materiële goederen enerzijds, en het reguleren van de toegang tot en het gebruik van genetische rijkdommen overeenkomstig het Protocol van Nagoya in het kader van de uitoefening van een soeverein recht, anderzijds. De Belgische Staat heeft als soevereine staat het recht om het gebruik van haar genetische rijkdommen te reguleren door middel van publiekrechtelijke maatregelen, op voorwaarde dat die maatregelen gerechtvaardigd zijn. De fysieke toegang tot en het gebruik van genetisch materiaal wordt echter al gereguleerd door het eigendomsrecht en door de aansprakelijkheids-en schadeloosstellingsmogelijkheden van de burgerlijke en strafrechtelijke procedures die gebruikt kunnen worden voor het afdwingen van eigendomsrechten. Ten vierde is het belangrijk te onderlijnen dat genetische rijkdommen, ook al kunnen ze als biofysische entiteiten worden beschouwd (e.g. een plantenspecimen, een bacteriële stam, een dier, enz.), ook een "informationele component" bevatten (i.e. hun genetische code). Gelet op het voorgaande zijn de geldende nationale bepalingen met betrekking tot het wettelijke statuut van genetische rijkdommen in België vooral te vinden in het eigendomsrecht van genetisch materiaal. Het juridisch eigendom van genetisch materiaal als biofysische entiteit vloeit voort uit de voorwaarden en regels die de eigendom regelen van het organisme waarin dit materiaal kan worden gevonden, welke vastgelegd zijn door de basisprincipes van het eigendomsrecht in het burgerlijk wetboek. De eigendom op een organisme betekent dat de eigenaar het recht heeft om het organisme te gebruiken, ervan te genieten en er materieel en juridisch over te beschikken. Bovendien zou elke wettelijke maatregel waarin de regulering van de toegang tot genetische rijkdommen wordt overwogen, voordeel kunnen halen uit de bestaande wetgeving die de toegankelijkheid en het gebruik van genetisch materiaal reguleert. 
Deze wetgeving varieert naargelang het soort eigendom van het materiaal (roerend, onroerend of res nullius), het bestaan van beperkingen op het eigendomsrecht zoals een specifieke bescherming (beschermde soorten, beschermde natuurgebieden, bossen of mariene omgevingen) en de locatie van het genetische materiaal (de vier bevoegde instanties passen elk hun eigen regels toe). In tegenstelling tot de fysieke componenten kunnen de informationele componenten van de genetische rijkdommen aanzien worden als res communis: "zaken die niemands eigendom zijn en door iedereen gebruikt mogen worden". De toegang tot dergelijke informationele componenten valt niet onder een specifieke wetgeving, maar de uitoefening van bepaalde gebruiksrechten kan wel beperkt worden door het intellectuele eigendom dat werd toegestaan op uitvindingen die betrekking hebben op een voortbrengsel dat uit biologisch materiaal bestaat of dit bevat, of op een werkwijze waarmee biologisch materiaal wordt verkregen, bewerkt of gebruikt. Deze intellectuele eigendomsrechten kunnen de vorm aannemen van octrooien, bescherming van kweekproducten of geografische indicaties. Naast deze principes met betrekking tot het wettelijke statuut van genetische rijkdommen bieden enkele burgerrechtelijke, strafrechtelijke en internationale privaatrechtelijke regels ook aansprakelijkheids-en schadeloosstellingsmogelijkheden voor gevallen waarin een illegale verwerving van genetische rijkdommen wordt vastgesteld. Hun toepassing varieert naargelang de aard van het goed (fysieke goederen of informationele goederen), maar ook naargelang de plaats waar de illegale verwerving gebeurt. Tot slot zijn er in België momenteel geen wettelijke bepalingen waarin de concepten "traditionele kennis", "traditionele kennis met betrekking tot genetische rijkdommen" en "inheemse en lokale gemeenschappen" uitdrukkelijk zijn vastgelegd. Traditionele kennis en de rechten van inheemse en lokale gemeenschappen werden echter wel aangekaart in enkele internationale akkoorden waarbij België partij is, zoals het Verdrag nr. 107 van de Internationale Arbeidsorganisatie (IAO) betreffende inheemse en in stamverband levende volken uit 1957, het Verdrag nr. 169 van de IAO betreffende inheemse en in stamverband levende volken, en de VN-verklaring over de rechten van inheemse volken. Voorbereidende aanbevelingen met betrekking tot de opties voor de implementatie van het Protocol van Nagoya Hoewel het Protocol van Nagoya een recent protocol is, is het niettemin de verdere uitvoering van de derde doelstelling van het VBD, welke basisprincipes en ABS aanverwante bepalingen bevat, zoals de soevereine rechten van Staten op hun natuurlijke rijkdommen, de eerlijke en billijke verdeling van voordelen en het belang van inheemse en lokale gemeenschappen en hun traditionele kennis. Verschillende Partijen bij het VBD wereldwijd hebben daarom ABS-maatregelen ingevoerd, welke nuttige ervaringen opleveren voor de implementatie van het Protocol. Op basis van deze ervaringen werden twee groepen voorbereidende aanbevelingen uitgewerkt in deze studie, die betrekking hebben tot de beschikbare opties voor de implementatie van het Protocol in België. De eerste groep aanbevelingen houdt verband met de vereiste instrumenten voor de naleving van de kernverplichtingen die voortvloeien uit het Protocol7 . De tweede groep aanbevelingen houdt verband met belangrijke bijkomende maatregelen waarmee rekening moet worden gehouden bij de naleving van de verplichtingen, maar die verder gaan dan de kernverplichtingen. 
Voor het implementeren van de kernverplichtingen worden de volgende aanbevelingen gedaan:  De toegangsvoorwaarden verduidelijken: dankzij haar soevereine rechten op de genetische rijkdommen kan België kiezen of gebruikers al dan niet de voorafgaande geïnformeerde toestemming (Prior Informed Consent, PIC) van de bevoegde instantie moeten verkrijgen om toegang te krijgen tot de genetische rijkdommen die onder haar bevoegdheid vallen.  De format van de onderling overeengekomen voorwaarden bepalen: Eenmaal het Protocol van Nagoya in werking treedt in België, moeten gebruikers die op Belgisch grondgebied actief zijn de voordelen die voortvloeien uit het gebruik van genetische rijkdommen verdelen. Die verdeling moet gebeuren op basis van onderling overeengekomen voorwaarden (Mutually Agreed Terms, MAT). Het Protocol van Nagoya legt echter geen specifiek format op voor deze onderling overeengekomen voorwaarden. Deze kunnen worden overgelaten aan het goeddunken van belanghebbenden of voortvloeien uit richtlijnen en/of verplichte maatregelen die door de Staat worden opgelegd.  Ervoor zorgen dat ABS bijdraagt aan behoud en duurzaam gebruik van biodiversiteit: men moet ervoor zorgen dat de implementatie van het Protocol bijdraagt tot de twee andere doelstellingen van het VBD: het behoud van de biologische diversiteit en het duurzame gebruik van bestanddelen daarvan. Dit is bijvoorbeeld mogelijk door aan de PIC verplichte voorwaarden te koppelen voor het verdelen van voordelen of door een "voordelenverdelingsfonds" op te richten waarbij de voordelen voor behoud en duurzaam gebruik van biodiversiteit worden bestemd.  De toegang faciliteren voor biodiversiteit gerelateerd onderzoek: om onderzoek naar biodiversiteit te stimuleren en om niet-commercieel onderzoek met genetische rijkdommen niet te overbelasten, kunnen maatregelen worden uitgewerkt om de toegang tot genetische rijkdommen te faciliteren voor niet-commercieel en biodiversiteit gerelateerd onderzoek.  Een Bevoegde Nationale Instantie oprichten: elke Partij moet een Bevoegde Nationale Instantie (Competent National Authority) aanstellen. Deze instantie is verantwoordelijk voor het verlenen van toegang, of, indien van toepassing, voor de afgifte van schriftelijk bewijs dat voldaan is aan de vereisten voor toegang en voor advisering over de toepasselijke procedures en vereisten voor het toegang krijgen tot genetische rijkdommen. Gelet op de institutionele realiteit in België kan meer dan één Bevoegde Nationale Instantie worden aangesteld. Deze aanstelling is van de hoogste prioriteit, aangezien België uiterlijk op de datum van inwerkingtreding van het Protocol het VBD Secretariaat in kennis moet stellen van de contactgegevens van haar bevoegde nationale instantie of instanties (en van haar nationale contactpunt, dat reeds is aangesteld).  De wetgeving van oorsprongslanden bindend maken: als onderdeel van de implementatie van het Protocol moeten de basisverplichtingen worden vastgelegd waaraan gebruikers moeten voldoen bij het gebruik van genetische rijkdommen in België. Deze verplichting komt neer op het bindend maken van de wetgeving van het oorsprongsland inzake PIC en MAT. Dit zou kunnen gebeuren door in de Belgische wetgeving te verwijzen naar de ABS-wetgeving van het oorsprongsland, of door een op zichzelf staande verplichting vast te leggen in de Belgische wetgeving die PIC en MAT oplegt, indien vereist door het oorsprongsland. 
• Designate checkpoint(s) to monitor the utilization of genetic resources: to comply with the Nagoya Protocol, at least one institution must be designated to serve as a checkpoint for monitoring the utilization of genetic resources and for enhancing transparency about their utilization. This can be a new or an existing institution.

As regards additional measures, the following should be considered: a) clarifying the requirements for the MAT; b) developing a clear and transparent access procedure; c) clarifying additional rights and duties of the Competent National Authorities; d) introducing a monitoring system; e) providing incentive measures for user compliance with legislative or regulatory requirements; and f) encouraging the development of model contractual clauses, codes of conduct and guidelines.

Selected options for the implementation of the Nagoya Protocol

In light of the preliminary recommendations described above regarding the available options for the implementation of the Protocol, six measures, each comprising several policy options, were discussed at the first stakeholder meeting on 29th May 2012 8. On the basis of this meeting, the Steering Committee of the study selected these measures for a more in-depth analysis of the environmental, social, economic and procedural impacts of their implementation. Before introducing these measures, it must be decided whether PIC and benefit-sharing should apply as general requirements in Belgium. While the latter is necessary for compliance with the Protocol, the former follows from the sovereign rights that Belgium holds over its genetic resources and is not necessary for compliance with the Protocol. If PIC is adopted as a general principle, a procedure must be developed for access to Belgium's own genetic resources (measure 1). This can be done by adapting existing legislation, by relying on qualified ex-situ collections, by requiring prior notification, or by a combination of these instruments.

Measure 1: operationalizing access to genetic resources
Option 0 – No prior informed consent: prior informed consent is not required for the utilization of genetic resources and traditional knowledge in Belgium.
Option 1 – The "bottleneck" model: a. for protected genetic resources, access is operationalized by refining the existing legislation relevant to protected nature areas and protected species; b. for non-protected genetic resources, access is operationalized through Belgian ex-situ collections.
Option 2 – The "fishing net" model: a. for protected genetic resources, access is operationalized by refining the existing legislation relevant to protected nature areas and protected species; b. for non-protected genetic resources, access is granted upon notification to the competent authority.
Option 3 – The adapted "fishing net" model: a. for protected genetic resources and genetic resources already covered by specific relevant legislation, access is operationalized by refining the existing legislation; b. for non-protected genetic resources, access is granted upon notification to the competent authority.

If benefit-sharing is adopted as a general requirement, the specific requirements for the establishment of the Mutually Agreed Terms (MAT) must be specified (measure 2).
The determination of these requirements can be left to the users and providers (option 1) or be imposed by the State in a more or less standardized manner (options 2 and 3).

Measure 2: specifying the requirements for the establishment of Mutually Agreed Terms
Option 0 – No benefit-sharing for the utilization of genetic resources and traditional knowledge in Belgium.
Option 1 – The competent authorities do not impose any specific requirements for the establishment of the MAT. Users and providers are free to decide jointly on their content.
Option 2 – Specific requirements for the establishment of MAT are imposed, including through model contractual clauses that differ according to the purpose of the access.
Option 3 – Specific requirements for the establishment of MAT are imposed, but without model contractual clauses. These specific requirements differ according to the purpose of the access. They form the basis for the case-by-case negotiation of MAT by the users and providers of genetic resources.

With a view to compliance with the Nagoya Protocol, one or more Competent National Authorities must be designated (measure 3). They must grant access, provide written evidence that the access requirements have been met and/or advise users on the applicable procedures and requirements for obtaining access to genetic resources. To perform these tasks, the Competent National Authorities must provide input points for the users of genetic resources. Such input points can be provided separately, with each authority having its own input point (option 1), or jointly, with a single input point for the different authorities (option 2).

Measure 3: designating one or more Competent National Authorities

With a view to user compliance with the Nagoya Protocol, at least one checkpoint must be provided for the monitoring of the utilization of genetic resources and traditional knowledge in Belgium (measure 5). If Belgium decides to introduce checkpoints, it can do so in several phases. Given the political commitment to the timely ratification of the Nagoya Protocol, a minimal introduction could be considered in the first phase, with the establishment of a single checkpoint. For this first phase, two options seem relevant, namely monitoring the PIC of the user, as available through the ABS Clearing-House (option 1), and upgrading the existing obligation to mention the geographical origin in patent applications (option 2). As options 1 and 2 are not mutually exclusive, their joint introduction can be considered.

Measure 5: designating one or more checkpoints
Option 0 – Belgium does not provide any checkpoints for the monitoring of the utilization of genetic resources and traditional knowledge.
Option 1 – The PIC of the user, as available through the ABS Clearing-House, is monitored.
Option 2 – The patent office is used as a checkpoint for the monitoring of the utilization of genetic resources and traditional knowledge.

Finally, a Belgian component of, or contact point for, the ABS Clearing-House will be provided to support the exchange of information on specific ABS measures under the Nagoya Protocol (measure 6). Although the precise modalities of the ABS Clearing-House are still under discussion at the international level, the following three candidates have already been identified: the Royal Belgian Institute of Natural Sciences (option 1), the Federal Science Policy Office (option 2) and the Scientific Institute of Public Health (option 3).

Impact of the selected options for the implementation of the Nagoya Protocol

The possible impacts of introducing the options listed above were evaluated by means of a comparative multi-criteria analysis. This analysis also made it possible to identify the stakeholders potentially concerned. As regards the operationalization of access to genetic resources (measure 1), the "bottleneck" model (option 1) and the adapted "fishing net" model (option 3) came out of the analysis as the best options. The preference for these options can be explained by the fact that they are expected to offer more legal certainty, to have a more positive impact on the environment and to fit better with existing practices than the other two options. These options first require introducing, as a general requirement, that prior informed consent is needed for access to Belgian genetic resources. As regards the specification of the requirements for the establishment of Mutually Agreed Terms (measure 2), the two options under which specific requirements are determined in Belgium (options 2 and 3) scored better than the option under which no specific requirements are imposed (option 1). The reason for this is their good economic, environmental and procedural performance (option 2 also offers good social performance). Choosing either of these options requires introducing benefit-sharing as a general requirement in Belgium. Alongside the designation of the Competent National Authorities, the establishment of a centralized input point was clearly the recommended option (option 2 of measure 3). As regards the elaboration of compliance measures (measure 4), the option of referring back to the legislation of the country of origin (option 1), with Belgian legislation as a fallback option, scores best. This is mainly explained by the consistency of this option with existing practices (in accordance with the Belgian Code of Private International Law). As regards the designation of one or more checkpoints (measure 5), monitoring, in the ABS Clearing-House, the prior informed consent obtained by users is the recommended option. This option scores at least as well on all criteria and offers better social and procedural performance. Finally, as regards the exchange of information through the ABS Clearing-House (measure 6), the preferred option is the designation of the Royal Belgian Institute of Natural Sciences (RBINS), which performs better than the other options on most of the criteria examined.
Recommendations following the impact analysis

The impact analysis of this study gives rise to two general recommendations, as well as a number of more specific recommendations for each of the measures listed above. First, the analysis shows that the options involving no policy change (the "0" option for each measure) clearly perform worst. This results in a first general recommendation, namely that both Prior Informed Consent (PIC) and benefit-sharing should be introduced as general requirements in Belgium. Second, the analysis showed the added value of a phased approach to the implementation of the Protocol. Such an approach makes it possible to benefit from a timely introduction of the basic principles, while leaving more specific choices to a later stage. Moreover, a phased approach is needed in order to ratify the Nagoya Protocol in time and to allow Belgium to participate as a Party at the first Meeting of the Parties to the Nagoya Protocol in October 2014. Finally, the impact analysis yielded a number of specific recommendations for each of the six measures:
1. Alongside the designation of the Competent National Authorities, a centralized input point to these authorities should be developed.
2. As regards compliance measures, sanctions should be provided for where non-compliance with the PIC and MAT requirements set out by the country of origin is established. For checking the content of the MAT, a provision in the Code of Private International Law should refer to the legislation of the country of origin, with Belgian law as a fallback option.
3. In the first phase of implementation, the monitoring of the utilization of genetic resources and traditional knowledge should be done on the basis of the PIC available through the ABS Clearing-House.
4. With regard to access to Belgian genetic resources, it is recommended to refine the existing relevant legislation on protected nature areas and protected species, combined with a general notification requirement for access to other genetic resources. In later phases of implementation, additional relevant legislation can then be refined and the processing of other access requests can be entrusted to ex-situ collections.
5. In the first phase of implementation, and apart from the general obligation to share benefits, no specific requirements should be imposed for the establishment of Mutually Agreed Terms. A combination of more specific requirements, including the possibility to use standard agreements, can be considered in a later phase of implementation.
6. The Royal Belgian Institute of Natural Sciences should be mandated to fulfill the information-sharing tasks on access and benefit-sharing under the Nagoya Protocol, through the ABS Clearing-House.

Implementation of the recommendations

To implement these recommendations, the phased approach can follow a three-step process:
1. As a first step, a political agreement can be concluded between the competent authorities, setting out the general requirements and listing the actions that the Federal Government and the federated entities must take to put these principles into practice.
These actions include, among others:
a. Introducing benefit-sharing as a general requirement in Belgium.
b. Introducing a general principle that PIC is required for access to Belgian genetic resources.
c. Determining that four Competent National Authorities will be established.
d. Providing legislative measures ensuring that the utilization of genetic resources under Belgian jurisdiction is subject to Prior Informed Consent (PIC) and Mutually Agreed Terms (MAT), as required by the legislation of the country of origin. These measures must also provide for addressing non-compliance with these rules.
e. Designating the Belgian node of the CBD Clearing-House Mechanism, managed by RBINS, as the Belgian participation in the ABS Clearing-House under the Nagoya Protocol.
The rationale for using such a political agreement is twofold. On the one hand, it provides a clear political commitment to the core obligations of the Nagoya Protocol, as it states the intentions of the competent authorities, within the limits of the decisions already taken at the international and European level at the time of the agreement. On the other hand, such an agreement does not prejudge the political decisions still to be taken by the competent authorities and is therefore flexible enough to allow further adjustment of the implementation process at a later stage. The latter point is particularly important given the many questions currently left unanswered at both the European and the international level, which were identified and addressed in the evaluation report.
2. In the second step, the specific actions should be implemented, for instance by means of a cooperation agreement and/or by adding provisions to relevant legislation, such as the environmental legislation of the federated entities and the Federal Government, among other possible requirements.
3. As a third step, additional actions can be taken once there is more clarity at the international and European level.

INTRODUCTION

This study aims to contribute to the ratification and the implementation in Belgium of the Nagoya Protocol (NP) on Access and Benefit-sharing (ABS) of the Convention on Biological Diversity 9. The need for this study was decided by the Interministerial Conference on the Environment of 31st March 2011, in order to allow for an early ratification of the NP by Belgium. The objective of the study is to identify and evaluate the possible consequences for Belgian national legislation and regulation, as well as for Belgian stakeholders, resulting from the implementation of the NP in Belgium. The study involves four phases of work:
• Phase 1: Analysis of the regulatory framework of ABS in Belgium
• Phase 2: Identification of options and recommendations for possible measures and instruments (legal and non-legal) for the implementation of the NP in Belgium
• Phase 3: Impact analysis of the selected options
• Phase 4: Conclusions and recommendations

Background to ABS and the Nagoya Protocol

The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization is a protocol to the UN Convention on Biological Diversity (CBD) 10.
The objective of the NP is expressed as follows:

The objective of this Protocol is the fair and equitable sharing of the benefits arising from the utilization of genetic resources, including by appropriate access to genetic resources and by appropriate transfer of relevant technologies, taking into account all rights over those resources and to technologies, and by appropriate funding, thereby contributing to the conservation of biological diversity and the sustainable use of its components. (Article 1 NP)

The CBD is the main international framework for the protection of biodiversity. It has three objectives: (1) the conservation of biological diversity, (2) the sustainable use of its components and (3) the fair and equitable sharing of benefits arising from the utilization of genetic resources (GR), including through access. With currently 193 Parties, the CBD has almost universal membership. Belgium has been a Party to the CBD since 1996, as are the EU and its other Member States. The 2002 Johannesburg Plan of Implementation, adopted at the World Summit on Sustainable Development, called upon States to:
• 44(n) Promote the wide implementation of and continued work on the Bonn Guidelines on Access to Genetic Resources and Fair and Equitable Sharing of Benefits arising out of their Utilization, as an input to assist the Parties when developing and drafting legislative, administrative or policy measures on access and benefit-sharing as well as contract and other arrangements under mutually agreed terms for access and benefit-sharing; and
• 44(o) Negotiate within the framework of the CBD, bearing in mind the Bonn Guidelines, an international regime to promote and safeguard the fair and equitable sharing of benefits arising out of the utilization of genetic resources 13.
This led to the granting of a detailed negotiating mandate for the ABSWG by the CBD COP7, and negotiations were undertaken at CBD COP8 in March 2006. Guided by the Bonn Roadmap (adopted at COP8), Parties committed themselves to complete negotiations at the earliest possible time before CBD COP10 in October 2010. Formal agreement on the textual basis for the final negotiations was only achieved in July 2010, following numerous negotiation meetings between COP9 and COP10 14. On 30th October 2010, the final plenary of CBD COP10 successfully adopted the Nagoya Protocol on "Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization". The NP elaborates on and implements the basic principles laid down in the CBD. Of relevance are its Articles 15 and 8(j), in particular:
• Sovereign rights of States over their natural resources
Article 15(1) of the CBD recognizes the sovereign right of States over their natural resources and that the authority to determine access to GR rests with the national governments and is subject to national legislation.
• Fair and equitable sharing of benefits arising from GR utilization
Pursuant to Article 15(7) of the CBD, the results of research and development and the benefits arising from the commercial and other utilization of GR must be shared in a fair and equitable way with the Contracting Party providing such resources, on Mutually Agreed Terms (MAT).
• Role and importance of indigenous and local communities (ILCs) and their traditional knowledge (TK)
Article 8(j) of the CBD lays down that each contracting Party must, as far as possible and as appropriate and subject to its national legislation, respect, preserve and maintain knowledge, innovations and practices of ILCs embodying traditional lifestyles relevant for the conservation and sustainable use of biological diversity.
With the approval and involvement of the holders of such knowledge, innovations and practices, wider application should be promoted and the equitable sharing of the benefits arising from the utilization of such knowledge, innovations and practices should be encouraged.
• Adoption and entry into force of the NP
The text of the NP was formally adopted on 30th October 2010 15 and the NP was opened for signature from 2nd February 2011 until 1st February 2012 16. Only Parties to the CBD could sign the NP, and only States and Regional Economic Integration Organizations (REIOs) that signed the NP while it was open for signature can proceed to ratify it 17. Others will have to accede to the Protocol. Signature in itself does not establish consent to be bound, hence the necessity of an act of ratification 18 or accession 19. The NP will enter into force on the ninetieth day after the date of deposit of the 50th instrument of ratification, acceptance, approval or accession by States or REIOs that are Parties to the Convention 20. The Secretary-General of the UN serves as the Depositary of the Protocol 21. Fifty ratifications or equivalent instruments are thus needed for the NP to enter into force. Consequently, there will be one single date of entry into force for the first 50 ratifying Parties, i.e. 90 days after deposit of the 50th instrument 22. The ratifying Parties will be bound by the treaty obligations upon entry into force. Another date of entry into force will apply for any Party depositing its instrument of accession after the date of deposit of the 50th instrument (i.e. 90 days after deposit of its own instrument 23). At the time of writing, 15 States had ratified the NP 24. The entry into force of the NP also determines the date of the 1st Meeting of the Parties to the Nagoya Protocol (NP COP/MOP) and consequently also the decision-making capacity of this organ. COP/MOP1 is expected to be held in 2014, concurrently with CBD COP12. Annex 1 of this report contains an analysis of the legal obligations emanating from the NP that was provided, together with the terms of reference of this study, by the four Belgian environmental administrations that commissioned it. This list serves as the background for this study.
15 COP 10 Decision X/1, Access to genetic resources and the fair and equitable sharing of benefits arising from their utilization. Available at: http://www.cbd.int/decision/cop/?id=12267. Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from Their Utilization to the Convention on Biological Diversity, Nagoya, 29th October 2010, available at http://www.cbd.int/cop10/doc/ (accessed 30th December 2010).
16 Ibid. 91 States and 1 regional economic integration organization (REIO), i.e. the EU, have signed the NP.
17 The NP was open for signature by Parties to the CBD. See NP, op. cit., Article 32.
18 Ratification requires the deposit of a formal instrument following completion of internal procedures, as determined by the constitutional law of each Party.
19 The NP remains open to accession for Parties who have not signed it during the time when it is open for signature.
20 See NP, op. cit., Article 34(1).
21 COP 10 Decision X/1, op. cit.
22 It should be noted that the EU instrument of approval is not to be counted as additional to the ratification instruments deposited by the EU Member States, since the NP falls within an area of shared competences. See NP, op. cit., Article 34(3).
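Since the two-branch timing rule of Article 34 is easy to misread, it can be illustrated as a simple date computation. The snippet below is a minimal sketch for illustration only, not part of any implementation measure; the function name and data layout are assumptions made for the example.

```python
from datetime import date, timedelta

NINETY_DAYS = timedelta(days=90)  # "ninetieth day after the date of deposit"

def entry_into_force(all_deposits: list[date], own_deposit: date) -> date:
    """Date on which the NP enters into force for one Party (Article 34)."""
    fiftieth = sorted(all_deposits)[49]  # date of deposit of the 50th instrument
    if own_deposit <= fiftieth:
        # The first 50 ratifying Parties share a single entry-into-force date
        return fiftieth + NINETY_DAYS
    # Parties depositing later: 90 days after their own deposit
    return own_deposit + NINETY_DAYS
```

The sketch deliberately ignores the nuance of footnote 22, i.e. that the EU instrument of approval is not counted towards the 50-instrument threshold.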
Ratification process in the European Union

The EU and eleven Member States signed the NP on 23rd June 2011. Eleven more did so during July-September 2011. Five Member States have not signed it (but can still accede to the Protocol) 25. The ratification procedure is laid down in Article 218 of the Treaty on the Functioning of the European Union (TFEU). The expression of EU consent to be bound requires a Council Decision to "conclude" the NP with the consent of the European Parliament (EP). The procedure is triggered by a Commission proposal for a decision, which is submitted to the Council and the EP. The EP expresses its consent in a legislative Resolution, but not through the ordinary legislative procedure (the procedure does not involve successive readings; the EP can only give or withhold its consent). It is for the Council to formally adopt the decision by means of Qualified Majority Voting. As required by Article 34 of the CBD, a declaration of competence is to be included in the instrument of approval, meaning that the EU must declare the extent of its competences with respect to matters governed by the NP. Negotiations are currently on-going at EU level on the basis of a proposal from the Commission to implement the NP in the Union. The ratification of the NP by the EU is also being prepared.

Structure of the report

Chapters 2 to 5 analyze the current state of the art of ABS in Belgium. Chapter 2 takes stock of the current political and administrative distribution of ABS-related competences in Belgium. Chapter 3 analyzes how genetic resources and traditional knowledge are currently addressed in Belgian law, including the legal implications of their ownership, access and use. Chapter 4 describes currently existing policy measures and other initiatives in Belgium which are directly relevant to the implementation of the Nagoya Protocol, and chapter 5 discusses the conformity of the current situation with the obligations of the NP. Chapter 6 then takes stock of existing measures and instruments (legal and non-legal) used for the implementation of ABS throughout the world. This allows, in chapter 7, for the establishment of preliminary sets of legal, institutional and administrative measures which could be implemented in Belgium. The recommended measures are divided into two separate sets: the first containing actions to be taken in case of minimal implementation of the core obligations stemming from the NP, and the second containing measures in case of additional implementation. The core obligations reflect the obligations identified in the terms of reference of this study as requiring special attention. Chapter 8 presents and describes the different options for the minimal implementation of core measures stemming from the NP. Those options were discussed at the first stakeholder meeting on the 29th of May 2012 26. Based on the results of the stakeholder meeting, the options to be further examined were selected by the Steering Committee of this study and submitted to an in-depth analysis of environmental, social, economic and procedural impacts. Chapter 9 analyzes the implementation modalities of the options described in chapter 8, taking into account the existing legal and institutional situation in Belgium described in chapters 2 to 5. Chapter 10 then analyzes the potential impact of the selected options and compares them through a multicriteria analysis using the set of evaluation criteria described below. A ranking of the options is also established.
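To give a concrete sense of how such a multicriteria ranking can be derived, the sketch below scores hypothetical options against the four impact families used in this study. It is a deliberately simplified weighted-sum illustration: the option names, scores, equal weights and the aggregation rule are all assumptions made for the example and do not reproduce the actual method or data of chapter 10.

```python
# Illustrative multicriteria ranking of policy options (hypothetical data).
CRITERIA = ["environmental", "social", "economic", "procedural"]
WEIGHTS = dict.fromkeys(CRITERIA, 0.25)  # assumption: equal weighting

# Scores from 1 (poor) to 5 (good) per criterion, in CRITERIA order.
OPTIONS = {
    "Option 0 (no policy change)": [1, 2, 2, 3],
    "Option 1": [4, 3, 3, 4],
    "Option 2": [4, 4, 3, 3],
}

def weighted_score(scores):
    """Aggregate one option's criterion scores into a single value."""
    return sum(WEIGHTS[c] * s for c, s in zip(CRITERIA, scores))

# Rank options from best to worst aggregate score.
ranking = sorted(OPTIONS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranking:
    print(f"{name}: {weighted_score(scores):.2f}")
```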
Finally, chapter 11 outlines some recommendations for a set of instruments and measures (legal or non-legal) for the implementation of the Protocol in Belgium.

Scope of the study

In order to realize the objectives of the Convention on Biological Diversity and the Nagoya Protocol, this study aims to contribute to the ratification and implementation of the Nagoya Protocol in Belgium. It is based on the list of legal obligations emanating from the NP (Annex 1) provided, together with the terms of reference of this study, by the four Belgian environmental administrations that commissioned it. For this study, access to and utilization of GR are analyzed in the context of the scope of the Nagoya Protocol. The Protocol applies to GR that are provided by Contracting Parties that are countries of origin of such resources or by Parties that have acquired the GR in accordance with the Convention on Biological Diversity (Article 15.3, CBD). Countries of origin are countries that possess those GR in in-situ conditions (Article 2, CBD). For Belgium, this means that these GR exist within ecosystems and natural habitats in Belgium, or, in the case of domesticated or cultivated species, in the surroundings in Belgium where they have developed their distinctive properties (Article 2, CBD). The status of GR in ex-situ conditions that were acquired before the entry into force of the Nagoya Protocol is still under discussion. Therefore, this report only considers:
• GR that a provider country possesses in in-situ conditions; and
• GR in ex-situ collections acquired after the entry into force of the Nagoya Protocol and/or in accordance with the obligations of the Convention on Biological Diversity.
It is further important to highlight the provisional nature of the findings presented in this document, as the on-going discussions around the implementation of the Nagoya Protocol in international and European fora will further influence the application of the results of this study.

THE DISTRIBUTION OF ABS-RELATED COMPETENCES IN BELGIUM

In Belgium, competences relating to ABS are divided between the federal level, the Regions and the Communities. This distribution results from successive transfers of competences to federated entities through the five state reforms since 1970 27, the general contours of the sixth state reform having been enacted in 2011 28. As a general principle, federated collectivities possess full competence for the matters that have been attributed to them, while the Federal State possesses those competences that have been reserved on its behalf by the Constitution or by legislation enacted with special voting quorums, as well as those residual competences that have not been otherwise attributed to other entities 29. ABS potentially encompasses a large range of issues extending far beyond sole environmental matters, including market regulation and access, international trade, agriculture, health, development cooperation, research & development and innovation. Consequently, several departments and several levels of competence could be responsible for the future implementation of the NP, at federal, regional and community level 30. It should however be noted that in 1995 the Regions and the Federal Government concluded a cooperation agreement on international environmental matters.
This cooperation agreement provides inter alia for an intra-Belgian coordination framework, supplied by the Coordination Committee on International Environmental Policy 31, that is used for preparing the implementation of the Nagoya Protocol in Belgium.

The political distribution of ABS-related competences

Environmental policy

The main principle pertaining to the distribution of competences with regard to environmental policy and nature conservation is laid out in Article 6, §1, II and III of the special law (SL) on institutional reform of 8th August 1980, which provides for the so-called exclusive regional "competence block" in accordance with Article 39 of the Constitution. This Article has been modified numerous times, notably in 1993, when the competences attributed to the Regions were significantly strengthened. Today it is the three Regions (Flemish Region, Walloon Region and Brussels Capital Region) that are competent for overall environmental policy, and they thus bear the greatest responsibility in biodiversity-related issues. However, the applicable legislation also reserves a number of competences to the Federal State, as an "exception" to the general competence of the Regions on environmental policy and nature conservation. When reading the text through the lens of ABS issues, it becomes clear that the Regions are inter alia responsible for the following environmental matters 32:
• the protection of the environment, notably of the soil, subsoil, water and air against pollution (…);
• nature conservation;
• the protection and conservation of nature;
• green area zones, park zones, green areas;
• forests;
• fluvial fishing and fish farming;
• non-navigable waterways, including verges, and polders.
Although environmental matters are in principle a regional competence, the Federal Government has retained some reserved competences on the following ABS-related environmental matters in accordance with the special law of 8/8/80, as an exception to the general regional competence on environmental matters:
• Article 6, §1, II, indent 2 of the SL 8/8/80: the establishment, for purposes of environmental protection, of product norms for market access (the regional governments need to be consulted when drafting these norms);
• Article 6, §1, III, 2° of the SL 8/8/80: the export, import and transit of non-indigenous plant varieties as well as non-indigenous animal species and their cadavers.
32 Article 6, §1, II, 1° and Article 6, §1, III, 2°, 3°, 4°, 6° and 7° of the SL 8/8/80; see also Geeraerts K, Bursens P, Leroy P (2004) Vlaams milieubeleid steekt de grenzen over. De Vlaamse betrokkenheid bij de totstandkoming van Europees en multilateraal milieubeleid. Steunpunt Milieubeleidswetenschappen.
As the Belgian territorial sea is not considered part of the territory of (one of) the Regions, the exercise of environmental and nature conservation competences within the Belgian territorial sea is considered to fall under the residual competence of the Federal Government. Having specific regard to the potential changes in the distribution of competences triggered by the current sixth reform of the State, competences regarding ABS-related environmental policy are not expected to change significantly 33.

Agricultural policy and maritime fishery

Agricultural policy, including the application of the European CAP measures, is a regional competence in accordance with Article 6, §1, V of the SL 8/8/80.
However, the Regions are not responsible for the standardization and monitoring of the quality of raw materials and plant material, or for the standardization and monitoring of animal welfare in order to ensure the security of the food chain, as these are reserved federal competences. The agreement of the regional governments should be sought with regard to animal welfare measures affecting agricultural policy. It should be noted that animal welfare legislation will be transferred to the Regions in the near future, in accordance with the terms of the 2011 institutional agreement establishing the framework for the sixth State reform. Furthermore, quality or origin labels that possess a regional or local character (such as geographical indications) fall within the realm of the regional competences (Article 6, §1, VI, alinea 5, 4° of the SL 8/8/80, which excludes these measures from the competences reserved to the federal level).

Research and development

Before the third state reform of 1988, the Federal Government was responsible for virtually all research and development (R&D) related activities. With these amendments, major research-related competences were transferred to the federated entities. Fundamental research and higher education, as well as the regulation of researchers' funding and the management of research institutions, were transferred to the French and the Flemish Communities, as exclusively cultural subject-matters falling under the scope of Article 127 of the Constitution and Article 4 of the special law of 8/8/80. The 1993 state reform confirmed this evolution by making the federated entities the prime responsible authorities in matters of R&D 34.
33 However, it might be relevant to note that the botanical garden located in Meise is mentioned in the transfers of competences that the reform would operate. This transfer is subject to the ratification of a cooperation agreement, the so-called "Peeters-Demotte" plan enacted in 2008 but not yet adopted. The agreement states that the botanical garden's estate and management would fall within the federated competences (of the Flemish Region), under specific conditions. Indeed, the current collections would remain under federal ownership, as these would be considered as "leased" to the Flemish Region and the Flemish Community for a limited period, and access to the collections would be open and free of charge to "all researchers", while "mainstream collections" would be accessed at the same price by all visitors.
34 Wautrequin J. (2011), Nouveaux Transferts de Compétences en Matière de Politique Scientifique? Critères d'appréciation. Intervention au colloque 'Paroles de chercheurs. Etats des lieux et solutions', 4 mars 2011; Goux C. (1996), La recherche scientifique dans la Belgique fédérale: examen de la répartition des compétences, Série Faculté de droit de Namur, Centre de droit régional, La Charte, Bruges.
With the insertion of Article 6bis into the special law of 8/8/80, the Communities and the Regions (and thus not only the federated entities falling under the scope of Article 127 of the Constitution) have become "competent with regard to scientific research within the framework of their respective competences, including research carried out in execution of international agreements or acts". The Communities and Regions thus became competent in the field of research related to the exercise of their respective competences.
As for the Regions, they are notably responsible for R&D activities in the following fields 35:
• economically oriented and industrial research, i.e. research or critical investigation aimed at discovering knowledge and skills to develop new products, processes or services, or a significant improvement of products, processes or services;
• support for R&D and innovation;
• research for technological development;
• knowledge diffusion in the industrial sector;
• research related to the exercise of other Regional competences.
Finally, the Federal Government nonetheless remains "competent for scientific research that is necessary to the execution of its own competences, including research carried out in execution of international agreements or acts" (Article 6bis, §2). In accordance with Article 6bis, §2, the federal level also remains competent with regard to 36:
• the implementation and organization of data exchange networks between scientific institutions at the national and international level;
• the scientific and cultural federal institutions, including their research and public service activities;
• the programs and actions requiring a homogeneous implementation at the national and international level in the fields and according to the modalities set out by the cooperation agreement referred to in Article 92bis, §1 of the special law;
• the holding of a permanent inventory of the scientific potential of the country;
• the participation of Belgium in the activities of international research organizations according to the modalities set out by the cooperation agreements referred to in Article 92bis, §1 of the special law.
Moreover, the Federal Government can take initiatives, establish structures and provide financial resources for scientific research in matters that fall within regional or community competence, but that are related to national or international agreements to which Belgium is a Party, or to actions and programs exceeding the interest of a single Region or Community. In that case, the Federal Authority must first submit a proposal for cooperation to the Regions and Communities.
35 Ibid.
36 Ibid.
The sixth state reform contains a number of measures that might influence the vertical distribution of ABS-related competences, as the interuniversity attraction poles and the technological attraction poles would be transferred to the Communities and the Regions respectively.

Economic and industrial policy

The second state reform of 1980 granted economic and industrial competences to the Regions 37. Viewed in the ABS context, the relevant subject-matters of these exclusive regional competences are listed in Article 6, §1, VI of the SL 8/8/80 and include (without being limited to):
• economic policy (Article 6, §1, VI, indent 1°);
• export policy, without prejudice to federal competences in terms of both the granting of guarantees against risks of import, export and investment, and multilateral trade policy (Article 6, §1, VI, indent 3°b); the Federal Authority moreover holds full competence over the control and monitoring of the import and export of goods and services;
• natural resources (Article 6, §1, VI, indent 5°).
Furthermore, Article 6, §1, VI, alineas 4 and 5 designates some reserved competences of the Federal Government. With specific regard to the regulation of ABS-related economic matters, the Federal Authority is competent for the general rules related to the organization of business (Article 6, §1, VI, alinea 4, 3°).
It also retains full competence for the following matters:
• competition law and trade practices, excluding the assignment of quality labels and designations of origin of a regional or local character, which are attributed to the Regions (Article 6, §1, VI, alinea 5, 4°);
• industrial and intellectual property (Article 6, §1, VI, alinea 5, 7°);
• quotas and permits for the import and export of industrial and agricultural products (Article 6, §1, VI, alinea 5, 8°).

Foreign policy and development cooperation

Since the 1993 revision of the Constitution, the regulation of international relations has been divided according to the principle 'in foro interno, in foro externo': the Federal Government, the Communities and the Regions are all responsible for the foreign policy related to their respective material competences 38. Currently, development cooperation is a competence shared between the Federal Authority, the Regions and the Communities. In this framework, the Federal Authority holds a general competence, whose scope is thus not limited to the other federal material competences. The Regions and Communities, for their part, are only competent for matters related to their material competences 39.

The administrative distribution of ABS-related competences

At the federal level

The main public service at the federal level competent for the implementation of ABS is the Federal Public Service for Health, Food Chain Safety and Environment (FPS Health, Food Chain Safety and Environment). The Directorate-General for the Environment (DG5) is involved in the negotiation and follow-up of a number of international environmental treaties related to its competences. In order to establish the Belgian position at EU and international level, a coordination process with other federal departments and with the federated entities has been in place since 1995 through the Belgian Coordination Committee on International Environmental Policy (CCIEP). The DG5 is also responsible for the protection of the North Sea and deals with trade in animals and plants under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Two of its civil servants currently serve respectively as the "Belgian Focal Point for Access and Benefit-sharing" to the CBD and as the Belgian Focal Point for Genetically Modified Organisms (GMO) issues related to the Cartagena Protocol. The DG Animal, Plant and Food (DG4) of the FPS Health, Food Chain Safety and Environment is responsible for the protection against plant diseases, the standardization of food and cosmetic products and part of the regulation of GMOs. ABS measures might however encompass much more than environmental competences. Hence, many other federal services and administrations might need to contribute to the implementation of ABS in Belgium. The Federal Public Planning Service Science Policy (BELSPO) is in charge of the scientific aspects of sustainable development at the federal level and of the implementation of the international obligations under the CBD. It manages long-term scientific support schemes for the federal sustainable development policy. It ensures the financing of research activities, makes funds available for CBD implementation and oversees several scientific institutions, including the Royal Belgian Institute of Natural Sciences (RBINS) and the Royal Museum for Central Africa (RMCA), which are major players in Belgian scientific expertise in the field of biodiversity.
RBINS serves as the Belgian National Focal Point (NFP) to the CBD and to the Clearing-House Mechanism (CHM). BELSPO also supports the Federal Council for Sustainable Development (CFDD-FRDO). This council advises the Federal Government on its sustainable development policy. Particular attention is given to the implementation of international obligations, such as those under the Convention on Biological Diversity. The Interministerial Conference on Science Policy serves as the consultative body between the Federal Government and the federated entities. Another relevant public service is the Federal Public Service for Economy, SMEs, Middle Classes and Energy (FPS Economy), which is responsible for the overall functioning of markets and the commercialization of (biodiversity-related) goods and services, as well as for the regulation of their market approval. Its Directorate-General for Market Regulation and Organization (E3) is responsible for the functioning of the markets for goods and services. Its mission is to create a legal and regulatory environment favorable to businesses, and to promote effective and fair competition among them. Through its "Immaterial Economy" Service, E3 covers the legal and regulatory framework of intellectual property rights. This service is also responsible for the dissemination of information on these rights and on the technical information to be found in patents. The Directorate-General for Economic Potential (E4) is competent for the follow-up and monitoring of key economic sectors which fall under the competences of the Federal Government. It represents and coordinates Belgian efforts in international economic institutions such as the WTO. This DG is also responsible for granting or denying the right to import and export goods and services. The Federal Public Service for Finances, through the Administration of Customs and Excises, is responsible for the collection of excise duties on imported products, for the monitoring of trade in exotic and endangered plant and animal species under CITES and for the monitoring of timber imports under the Forest Law Enforcement, Governance and Trade (FLEGT) scheme. The Federal Public Service for Foreign Affairs, Foreign Trade and Development Co-operation regroups different sections whose work is directly related to the CBD and ABS. Foreign Affairs is responsible for the diplomatic aspects of international negotiations such as those under the CBD. It ensures that the Belgian position is consistent across the different international forums. Foreign Trade is in charge of the regulation of international trade in goods and commodities. This service also coordinates the Belgian representation for multilateral trade policy (WTO, OECD) and European trade policy. For foreign economic missions, the FPS is assisted by the parastatal Belgian Foreign Trade Agency, which coordinates federal and regional efforts in this matter. The Directorate-General for Development Cooperation (DGD) implements Belgian development cooperation projects. Together with associated research institutions, it manages and provides funding and structural assistance for research, capacity building and awareness-raising projects under the CBD. The DGD also manages the Belgian financial contribution to the CBD and the Global Environment Facility (GEF). It is supported by the Belgian Technical Cooperation (BTC), which is exclusively responsible for implementing the direct bilateral cooperation set up by the DGD, including biodiversity-related cooperation.
The Federal Public Service for Justice has various missions, which include the preparation of legislation for the Minister of Justice, the supervision of operational support to the judiciary and the monitoring of the execution of administrative and judicial decisions.

At the regional level

The regional environmental administrations, Bruxelles Environnement/Leefmilieu Brussel (IBGE-BIM), the Departement Leefmilieu, Natuur en Energie (LNE) of the Flemish government and the Direction générale opérationnelle Agriculture, Ressources naturelles et Environnement (DGARNE) of the Service public de Wallonie, are the main official authorities for the conservation and sustainable use of biodiversity and genetic resources. These administrations are generally flanked by specialized public agencies, such as the Flemish Instituut voor Natuur- en Bosonderzoek (INBO) or the Flemish Agency for Nature and Forest (ANB) (http://www.natuurenbos.be/), or by specific internal administrative units, like the Walloon Département de l'Etude du milieu naturel et agricole (DEMNA), among many others. These levels of government also display a strong (and heterogeneous) horizontal division of ABS-related competences. In Flanders, environmental matters are separated from agricultural matters, for which the Departement Landbouw en Visserij is responsible. This is not the case in Wallonia, while in Brussels agriculture is managed by the economic administration (Bestuur Economie en Werkgelegenheid, BEW/Administration de l'Economie et de l'Emploi, AEE). The administrations responsible for foreign trade on the one hand, and for innovation and research policy on the other, could play a major role in the regulation and monitoring of non-public research activities, as they are responsible for economically oriented, industrial and innovation research. The following administrations are responsible for these competences:
• in Flanders, the Departement Economie, Wetenschap en Innovatie (EWI);
• in Wallonia, the Direction générale opérationnelle Economie, Emploi et Recherche (DGO 6);
• in Brussels, the BEW/AEE.
Each Region has its own foreign policy administration, as well as agencies in charge of development cooperation:
• in Wallonia, Wallonie-Bruxelles International (WBI);
• in Flanders, the Departement Internationaal Vlaanderen and the Vlaams Agentschap voor Internationale Samenwerking (VAIS);
• in Brussels, foreign policy is taken care of by the Secretariat General of the Region.
Partly linked to these foreign policy administrations are the regional bodies promoting foreign trade. The Agence wallonne à l'Exportation et aux Investissements étrangers (AWEX), Flanders Investment and Trade (FIT) and Brussels Export are responsible for the management of the international entrepreneurship of regional companies and the accommodation of international companies in Belgium. No specific administration is assigned to research in the German-speaking Community, but if necessary the Ministry of the German-speaking Community could take the required administrative steps to exercise this competence.

The inter- and intra-level coordination of the exercise of ABS-related competences

The conclusion of international agreements that fall under the competences of both the federal and the federated entities is regulated by the cooperation agreement for mixed treaties.
This agreement distinguishes three types of international treaties in Belgium 40: (1) treaties under the exclusive competence of the federal level, (2) treaties under the exclusive competence of the Regions and/or Communities, which are concluded and ratified by the regional and/or community Governments, and (3) "mixed" treaties, where the agreement covers the competences of both the federal and the federated entities. The first two types of treaties do not necessarily require coordination between federal and regional authorities. A "mixed" treaty, however, must be concluded through a special procedure, agreed on by all the Governments concerned, and must also be approved by all the competent parliaments. The different levels of government coordinate their environmental policy in cross-departmental ways through the Belgian Coordination Committee on International Environmental Policy (CCIEP), for which the secretariat is provided by the Federal Public Service for Environment. The CCIEP assigns specific issues to a dedicated coordination body (e.g. an existing CCIEP Steering Committee, an ad hoc Steering Committee or a coordination group) and to appointed experts of the different relevant governments 41. Considering the distribution of competences described previously, the CBD and the NP are clearly "mixed" treaties. The federal and federated governments coordinate issues related to the CBD and the NP through the Biodiversity Steering Committee of the CCIEP. For ABS-related matters, a specific ABS contact group was created under the CCIEP Biodiversity Steering Committee. In cases in which no consensus can be reached through the CCIEP, contentious or political issues can be referred to the Interministerial Conference on the Environment.

LEGAL STATE OF THE ART REGARDING ABS IN BELGIUM

Access and use of genetic resources under national jurisdiction in Belgium

For the analysis below, some important preliminary distinctions have to be highlighted. First, a distinction has to be made between the question of the legal ownership of genetic resources in their quality as material goods on the one hand, and the regulation of the access to and use of genetic resources under the Nagoya Protocol as an exercise of a sovereign right (pursuant to Article 15.1 CBD) on the other. Second, it is important to recall the definitions included in the text of the CBD. Article 2 clearly distinguishes between "genetic material", defined as any material of plant, animal, microbial or other origin containing functional units of heredity, on the one hand, and "genetic resources", defined as genetic material of actual or potential value, on the other. These definitions make it clear that "genetic resources" are a subset of "genetic material". The distinction between the two terms on the basis of whether or not the material is "of actual or potential value" seems to signify that genetic material only becomes a genetic resource when a use can be or is likely to be ascribed to it 42. The Belgian State holds sovereign rights over its genetic resources and as such can regulate the access to and use of these resources through public law measures, as long as these are justified (in this case, in particular, in the context of the objectives of the CBD and the Nagoya Protocol) and are proportionate to those objectives. Access to genetic resources in their generality (as genetic material having actual or potential value) is as such not yet regulated by Belgian public law measures.
However, physical access to genetic material is regulated through various private law provisions and through the public regulation of access to genetic materials in national parks and to protected species. As this might be relevant when enacting public law measures on genetic resources, physical access to genetic material, including the question of legal ownership over genetic materials as biophysical entities, is briefly discussed hereafter, as well as the legal stakes related to the "informational component" of genetic resources.

Legal status of genetic resources under Belgian legislation

The major part of the currently available national provisions addressing the status of and access to genetic resources relates to the regulation of physical access to the genetic material itself, as found in property law and in the liability and redress options made available under both civil and criminal procedures related to the enforcement of property rights. The conditions and rules surrounding the legal ownership of genetic material follow from those governing the ownership of the organism as a whole. Restrictions on use can be placed upon the informational component of genetic resources through intellectual property rights. Violations of both property rights and intellectual property rights are sanctioned through criminal and civil liability procedures, mainly directed at theft charges. Unauthorized access to the informational component of genetic resources is as such today not sanctioned by legislation pertaining to property rights; redress should rather be sought under the umbrella of concealment or breach of trust proceedings.

Physical access to genetic material subject to property law

Belgium is a civil law country, with a property regime centered on the exercise of three categories of prerogatives that follow from the legal ownership of goods: the right to use the good (usus), to perceive its benefits and fruits (fructus) and to alienate it (abusus). The central tenets of the right to property established by Articles 544 to 546 of the civil code are as follows 43:
• Ownership of the soil includes ownership of what is above and beneath it (Article 552 of the civil code), limited in its concrete application by laws and regulations pertaining, for instance, to the exploitation of mines (such as the Decree of the Walloon Regional Council of 7th June 1988, M.B., 27th January 1989).
• Ownership extends to all the fruits and products generated by the material good (Article 546 of the civil code), except when the production is the result of a third party's activity, in which case the proprietor has to reimburse the costs of labor and seeds borne by the third party, in accordance with the theory of unjust enrichment ("enrichissement sans cause") of Article 548 of the civil code.
• Ownership of the soil extends to everything that is united with and incorporated into it, i.e. to everything that constitutes its accessory, through the mechanisms known as natural or artificial accession, regulated by Article 546 of the civil code 44.
Therefore, the conditions and rules surrounding the legal ownership of the genetic material as a biophysical entity (such as a plant specimen, a microbial strain, an animal, etc.) follow from those governing the ownership of the organism as a whole:
• If the organism as a whole is res nullius: the bona fide possession of the organism or the specimen leads to legal ownership of the genetic material.
o Example: bees as governed by Article 14 of the rural code, which states that when a swarm is at liberty, it is res nullius, until it settles in a specific beehive, where it becomes the property of the person who owns the land to which the hive is attached; also fish in rivers, wild animals, etc.
- If the organism as a whole is personal property, which is by definition movable: then the legal ownership of the genetic material is a consequence of the legal ownership of the organism as a whole.
o Example: flowers bought on the market
- If the organism as a whole is real property (immovable) by incorporation or destination falling under the realm of full private property: then the legal ownership of the genetic material is a consequence of the legal ownership of the organism as a whole. The holder of this private property can be the state (if the good is on state land) or a private person (if the good is on private land)
o Example: Article 524 of the civil code governing domestic animals in cages; trees; etc.
Property over a specimen and/or its genetic code means that the proprietor possesses, in accordance with the central tenets of Belgian national law, the rights to use, perceive the benefits of and alienate the specimen.

Access to the informational component of genetic resources

In today's increasingly digitalized world, access to the informational, rather than the biophysical, component of genetic resources can be provided quite easily, yet is difficult to control. As opposed to genetic resources' physical specimens, these resources' informational components may constitute a res communis, viewed as "things (as light, air, the sea, running water) incapable of entire exclusive appropriation, thereby owned by no one and subject to use by all". However, these resources might also be viewed within a property regime parallel to that of the material components of GR, for reasons of clarity and legal coherence. As such, access to such informational components is today not covered by subject-specific legislation, as it does not fall under property laws. The exercise of some use rights can however be limited through intellectual property rights that have been recognized on portions, functions, or uses of biological material resulting from innovations on these materials (thus excluding the material or the information as it is directly found in nature).

Genetic resources subject to intellectual property law

In the context of the discussion on the relevant legislation on intellectual property law, it is important to remember the scope of this study. This report only considers the genetic resources that a provider country possesses in in-situ conditions or has acquired in accordance with the obligations of the Convention on Biological Diversity. Moreover, for these resources, it considers possible measures for implementing the Protocol in relation to the exercise of national sovereignty of States over these resources in their generality. Therefore, the discussion on intellectual property rights (IPR) is relevant insofar as it relates to the further downstream utilization of genetic resources. This discussion will be particularly useful for evaluating the best available options for the monitoring process: e.g. a patent application might be an indication of commercial interest in the genetic resource, and an upgraded patent application could potentially be used as a checkpoint.
The competence pertaining to intellectual property rights is reserved to the federal level, as a formal exception to the attributed competence of the regions in terms of economic policy (Article 6 §1 VI, indent 4, 7° of SL8/8/80). However, protection tools which constitute designations of origin with a regional or local character fall under regional competence (Article 6 §1 VI, indent 4, 4° of SL8/8/80). In this framework, three categories of IPR protection can be distinguished: patents, plant variety rights and geographical indications.
In Belgium, patents are regulated mainly by the law of 28th March 1984. A patent is an "exclusive and temporary right to exploit any novel invention that also implies an inventive step while being susceptible of industrial application" (Article 2). The law states that "inventions are patentable even when they relate to biological material or contain a process that enables the production, treatment or use of the biological material" (Article 2). Furthermore, "a biological material isolated from its natural environment can be subject to patent protection, even when it pre-existed under its natural state". Patents are for instance quite often granted for molecular markers that are developed to assist plant breeders in the identification of interesting genetic sequences. Recent European case-law has however reduced the possibilities surrounding the patentability of so-called "native traits" and of "conventional breeding techniques"45.
However, a general research exemption to the rights granted by patents is provided by the law. These rights do not extend to "acts accomplished in a private environment and for non-commercial purposes, nor to acts accomplished for scientific purposes on and with the object of the patented invention" (Article 28 §1 (indents 1 and 2) of the 1984 law, as amended by the law of 28th May 2005). The exact scope of "research on and with" has been defined in the "travaux préparatoires" of the 2005 amendments to the law, indicating that "research on" relates to "acts accomplished for experimental reasons that verify the function, the efficiency or the operational nature of the patented object", while "research with" relates to "acts accomplished for experimental reasons where the patented invention is used to research something else, as a tool or instrument"46. Scientific purposes should in this regard be understood in a broad sense.
Following obligations stemming from the CBD (particularly its Articles 8(j), 15 and 16), the patent law has been amended to include a (qualified) origin indication requirement, applicable if the origin of the material is known (Article 15 §1(6))47. In order for the patent application to be admissible, the filing must contain a statement regarding the geographical origin of the biological material that has been used as a basis for the invention, if known48.
Plant variety right protection is granted to new, distinct, stable and uniform plant varieties. A variety is defined in Article 2 of the law of 10th January 201149 as "a plant grouping within a single botanical taxon of the lowest known rank, which grouping, irrespective of whether the conditions for the grant of a breeders' rights are fully met, can be:
- defined by the expression of the characteristics resulting from a given genotype,
- distinguished from any other plant grouping by the expression of at least one of the said characteristics and
- considered as a unit with regard to its suitability for being propagated unchanged".
As a consequence, the production, reproduction, conditioning for the purpose of propagation, sale, marketing, import, export or stocking of such a variety requires the authorization of the breeder (Article 12 of the law of 10th January 2011), with the exception of certain specific prerogatives granted for research on the material and breeding with the variety, as well as of certain flexibilities recognized towards small farmers (Articles 14 and 15). Plant variety rights also enjoy research and breeding exemptions: they do not extend to "acts accomplished in a private capacity and for non-commercial purposes, acts accomplished in an experimental capacity or acts accomplished in view of creating or discovering and breeding new varieties" (Article 15 of the law of 10th January 2011 on plant variety rights). Plant variety rights were formerly regulated in Belgium by the law of 20th May 1975, which has recently been repealed and replaced by the law of 10th January 2011. The law of 10th January 2011 has not yet entered into force50, but nonetheless provides the necessary general framework to bring Belgium into conformity with the provisions of the 1991 UPOV Convention (International Union for the Protection of New Varieties of Plants).
Geographical Indications (GI) are names used to describe a specific agricultural product or a foodstuff that is protected due to its regional and local nature, within general agricultural quality policies. GI's are usually distinguished between protected designation of origin (PDO), protected geographical indication (PGI) and traditional specialty guaranteed (TSG) in the European Union51.
47 [...] common appreciation of the relationship between intellectual property rights and the relevant provisions of the TRIPS Agreement and the Convention on Biological Diversity, in particular on issues relating to technology transfer and conservation and sustainable use of biological diversity and the fair and equitable sharing of benefits arising out of the use of genetic resources, including the protection of knowledge, innovations and practices of indigenous and local communities embodying traditional lifestyles relevant for the conservation and sustainable use of biological diversity.
48 This requirement is much narrower than the first proposed Bill, which stated that non-compliance with CBD provisions would be considered as contrary to public order and morality; the Council of State declared that such an obligation would deviate from the initial objective of the transposition measures and run counter to the objective of achieving effective harmonization throughout the European Union. See Van Overwalle G. (2006), Implementation of the Biotechnology Directive in Belgium and its After-Effects. International Review of IP and Competition Law, 37:8, pp. 889-1008 (especially at pp. 895-897)
49 Loi du 10 janvier 2011 sur la protection des obtentions végétales
50 See Article 72 of the law for the conditions of its entry into force, which render the mandatory force of the text conditional on the adoption of a royal decree, which has to this day not been adopted. As long as the required Royal Decree has not been adopted, the relevant legal framework remains the law of 1975.
GI's may relate to ABS since the product specification includes a description of the product, comprising the raw materials (and, if appropriate, the principal physical and microbiological characteristics of such material), and might later be stacked onto the bundle of property rights that surround one particular genetic resource if it is used to produce foodstuff protected by a GI.

Liability and redress opportunities in cases of illicit acquisition of genetic resources (material and informational components)

Alongside the above legal principles surrounding the legal status of genetic resources, there are a number of rules found in civil, criminal and private international law that are relevant for the regulation of ABS in cases where an illicit acquisition of genetic resources is established. These legal provisions would indeed be of importance when read in concordance with the obligations related to compliance in the Nagoya Protocol. Liability and redress prospects should be analyzed both for GR as physical specimens and for GR as informational goods, through a national lens and in an international context.

Liability and redress for illicit acquisition of GR as physical specimen

As with the discussion on the existing legislation on physical access to genetic material as biophysical entities, this legislation concerns access to biophysical specimens and is therefore not directly relevant for the regulation of access to and utilization of genetic resources under the Nagoya Protocol. Nonetheless, the discussion on this legislation might be useful when assessing possible overlap and/or inconsistency with the measures that would be proposed for implementing the compliance provisions of the Nagoya Protocol. When assessing which legal principles should address the issues of liability and redress when facing illicit acquisitions of genetic resources as physical entities, it should first be noted that most conflicts will bear an international dimension, so that the actually applicable law and the competent authorities must be determined before any analysis of the applicable legal principles. This assessment is made in accordance with the principles of private international law that are favored by the country where litigation is brought.
51 Regulations 510/2006 on the protection of geographical indications and designations of origin for agricultural products and foodstuffs, JO L 93, 31.3.2006, p. 12-25, and 509/2006 on agricultural products and foodstuffs as traditional specialties guaranteed, JO L 93, 31.3.2006, p. 1-11;
- A "designation of origin" refers to the name of a region, a specific place or, in exceptional cases, a country, used to describe an agricultural product or a foodstuff originating in that region, specific place or country, the quality or characteristics of which are essentially or exclusively due to a particular geographical environment with its inherent natural and human factors, and the production, processing and preparation of which take place in the defined geographical area.
- A "geographical indication" refers to the name of a region, a specific place or, in exceptional cases, a country, used to describe an agricultural product or a foodstuff originating in that region, specific place or country, which possesses a specific quality, reputation or other characteristics attributable to that geographical origin, and the production and/or processing and/or preparation of which take place in the defined geographical area.
If Belgian law is deemed applicable to the conflict, then liability and redress opportunities will depend on whether or not a contractual relationship exists; in the absence of a contract, the extra-contractual liability schemes of both civil and criminal law should be analyzed.
A. Contractual breach
If a contract has been concluded between the user and the provider of the genetic material, then any conflict, whether of a national or an international dimension, will be settled in accordance with the clauses set out by the parties with regard to dispute settlement. A number of national and European legislative texts govern the cases where no applicable law has been chosen by the parties. In Belgian national law, Article 98 of the private international law code refers to Regulation (EC) No. 593/2008 of 17th June 2008 on the law applicable to contractual obligations (Rome I) (replacing the 1980 Rome Convention), under which, where the contract is silent, the law of the country of habitual residence of the party who is to effect the characteristic performance of the contract applies.
B. Extra-contractual liability and redress (absence of contract)
If no contract has been signed by the user and the provider of the genetic material, then positive law will come in to fill the void and establish the terms governing dispute settlement, if Belgian law is found to be applicable to the conflict in accordance with the principles of either Belgian private international law (if the case is filed in Belgium) or another country's rules on conflicts of laws and the designation of applicable legislation (if the case is filed in another country)52.
In the absence of a contract, the illicit appropriation of material goods may qualify as a "simple theft" (in accordance with Articles 461 al. 1 and 463 of the criminal code), thereby triggering both criminal and civil liability vis-à-vis the perpetrator. The proprietor of the material good can respectively:
(1) seek an injunction against conduct that is judged to be in contradiction with the social order as a violation of property rights (CRIMINAL PROCEEDINGS). In accordance with Article 461 of the criminal code, an act corresponding to an "unauthorized/fraudulent removal of the material good that belongs to a third party" shall qualify as a theft, a criminal offense that shall be punished under criminal law53. The concealment of such objects by third parties knowing of their illegal acquisition is also punished, through the concealment offense (Article 505 of the criminal code)54. Criminal law is regulated by separate provisions which determine under which circumstances Belgian courts have jurisdiction to hear cases over the alleged infringement of Belgian criminal law. The effectiveness of judgments can be complicated in an ABS context by lack of resources and the priorities of criminal prosecution, as well as by issues regarding the execution of judgments55.
(2) and/or seek compensation for the damage caused by the loss of the material good or by the fault of the person having wrongfully appropriated the good (CIVIL PROCEEDINGS). According to the Belgian Court of Cassation, the res nullius character of material goods cannot exempt the perpetrator from repairing the damage resulting from illicit acts56.
The physical or legal person who is the legal owner of the material goods can therefore also seek civil compensation/damages ("actions en dommages et intérêts") in parallel to the criminal case being prosecuted ("constitution de partie civile", in accordance with Articles 63 and 70 of the criminal instruction code)57, or start civil proceedings before criminal jurisdictions if the prosecutor has dropped the case (in accordance with Article 162 of the criminal instruction code). Both intentional and non-intentional torts engage the extra-contractual responsibility of the perpetrator, when the constitutive elements of civil liability are proven, i.e. the fault, the damage, and the causal link between the fault and the damage.
- With fraudulent intention, an illicit appropriation of genetic resources would qualify as an intentional tort or offense ("délit"), triggering delictual liability under Article 1382 of the civil code.
- Without fraudulent intention, an illicit appropriation would qualify as a non-intentional tort ("quasi-délit"), a tort/offense committed by imprudence or negligence, triggering civil liability. This would lead to a civil procedure concerned with the attribution of compensatory damages under Article 1383 of the civil code.
C. Specificity of the ABS context: an omnipresent international dimension in conflicts
The illicit acquisition of material goods, whether with fraudulent intent or not, can have an international dimension. In an ABS context where the actors would most probably be of different nationalities, and where the contentious access or use of genetic resources might occur in a different country than the country where the alleged owner of the resource is established, it is useful to study extra-contractual liability58 through the lens of private international law, which would apply "in default of particular rules" adopted by the legislator in this regard.
54 Concealment will be further analyzed in part 3.1.2.2. of this section.
55 Aside from the complex issues of competence and applicable law dealt with by private international law, criminal proceedings might also be hindered and further complexified, due to the international nature of the conflict brought before the courts, at the stage of decision implementation. Indeed, extradition procedures would in principle need to be initiated in order to execute the judgment against the person convicted of theft, or there would need to be control over his property in order to execute the judgment against that property. These procedures would be expedited depending on the international conventions that have been adhered to by the States concerned (CASTIAUX, J., "Extradition en Belgique", in Chome P., Klees O., Lorent A. (eds.), Droit pénal et Procédure pénale, Kluwer, Malines, 2011, p. 155). For instance, the Second Protocol to the 1959 European Convention on mutual assistance in criminal matters provides for transboundary observation when there are suspicions of aggravated theft (Article 17).
56 Cass., 28 janvier 2009, Amén., 2009, p. 309 (in this case, damage caused by beavers)
57 The State could also directly start civil proceedings before civil courts; however, it would need to wait for the criminal verdict, in accordance with Article 4 of the code of criminal procedure ("le criminel tient le civil en l'état").
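Before turning to private international law, the constitutive logic of Articles 1382 and 1383 outlined above can be restated as a short illustrative sketch. The code below (in Python, chosen purely for illustration) is our own simplified model: the function name, its parameters and the returned labels are assumptions made for this example only and carry no legal authority.

    def civil_liability(fault: bool, damage: bool, causal_link: bool,
                        fraudulent_intent: bool) -> str:
        """Simplified, illustrative model of Articles 1382-1383 of the
        Belgian civil code as described in the text; not legal advice."""
        # All three constitutive elements must be proven for
        # extra-contractual liability to arise.
        if not (fault and damage and causal_link):
            return "no extra-contractual liability established"
        if fraudulent_intent:
            # Intentional tort ("délit"): delictual liability, Article 1382.
            return "delictual liability (Article 1382 of the civil code)"
        # Non-intentional tort ("quasi-délit") committed by imprudence or
        # negligence: compensatory damages under Article 1383.
        return "quasi-delictual liability (Article 1383 of the civil code)"

    # Example: negligent (non-fraudulent) appropriation with proven damage
    print(civil_liability(fault=True, damage=True, causal_link=True,
                          fraudulent_intent=False))

Such a sketch only captures the skeleton of the test: in practice, each element (fault, damage, causal link) is a matter of judicial appreciation on the facts of the case.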
Private international law determines both the rules pertaining to conflicts of laws and those pertaining to conflicts of jurisdiction, respectively determining the legal rules that apply to the case and the judiciary that is competent to rule on the subject-matter in civil and commercial matters. A number of specific legal provisions of the private international law code59 govern material goods and the case of their theft. It is in this framework that private international law reveals itself relevant for regulating the illicit acquisition and use of foreign genetic material. The private international law principles can in particular contribute to upholding the conditions specified in private law access agreements, in situations where the procedures for mutually agreed terms established by the country of origin include private law contracts.
However, even if these principles are a useful contribution, they are certainly insufficient. In particular, in the ABS context, utilization of GR often bears on the informational components (the DNA code, published research results, databases, etc.). Moreover, utilization is often based on the use of a copy of the GR (a clone of the entire biological material or a clone/reproduction of a component of it), even when the GR is not situated in Belgium. These frequent cases of research on, or utilization of, GR that are not physically in Belgium are not covered by the legal dispositions of the private international law code, which does not explicitly refer to the use of GR under the Nagoya Protocol in its current scope60. In addition, compliance with PIC obligations will involve public law requirements and/or administrative acts in the country of origin of the GR, which fall outside the scope of private international law. Therefore, additional measures might be needed to comply with the obligations under Articles 15, 16 and 18.
- Conflict of Jurisdictions (Which jurisdiction is competent?)
Article 85 of the code of private international law states that the Belgian judiciary is competent to rule on disputes involving physical access to a material good "if the good is located in Belgium at the time the claim is made". However, the application of this Article to the situations covered by the Nagoya Protocol is quite limited. Indeed, as stated above, utilization often involves the informational component of GR and/or physical components (copies/clones) of GR of which the original is not situated in Belgium.
- Conflict of Laws (Which laws to apply?)
Property rights related to a material good are governed by the laws of the State where the good is situated at the time the claim is made, in accordance with Article 87- §1 of the code of private international law. The acquisition and loss of property rights are established by the laws of the State where the good was situated at the moment these acts or facts occurred.
o If the good is an integral part of an ensemble of goods affected to a particular use, it is presumed to be situated in the State that has the strongest ties to the patrimony, in accordance with Article 8- §2 of the code of private international law.
Specific provisions exist for stolen material goods, which could possibly be applied in the ABS framework in cases where potential users of genetic resources come to possess resources that have not been obtained through a legal means of property or possession transfer, pursuant to Article 92 of the code of private international law.
o The "native" proprietor has the choice to have the case ruled by
- either the laws of the State where the material good was situated at the moment of its disappearance,
- or the laws of the State where the material good is located at the moment of the claim.
However, in the first scenario, if a possessor in good faith is not protected by the internal legal order of that State, he may invoke the protection offered by the laws of the State where the material good is located at the moment of the claim.

Liability and redress for illicit acquisition of GR as informational goods

A. Contractual breach
As is the case with physical specimens of GR, contractual provisions will prevail in terms of liability and redress if such a contract exists. In the absence of any contractual relationship, tort law and criminal law will apply.
B. Extra-contractual liability and redress (absence of contract)
Theft of information is not a qualified infraction under Belgian law, and would most probably have to be fought through provisions related to breach of trust if the informational component is accessed by third parties without the transfer of actual material possession of the specimen. The use of informational components of genetic resources without PIC or MAT will most probably not be covered by the remedies addressing theft. Indeed, if the informational component of genetic resources is viewed as res communes, the usage of which is common to all, such a component may not be subject to theft as long as it is not appropriated61. Furthermore, theft provisions apply solely to corporeal objects. However, there exists prominent jurisprudence regarding the theft of computer programs, where these have been considered corporeal because of their economic value and because they constitute an element of the patrimony of the original software's proprietor62. Neither the doctrine nor the jurisprudence is nonetheless unanimous on this issue, as the fraudulent copying of software has also been ruled not to constitute a theft or a breach of trust due to its incorporeal nature, precluding the possibility to cede its ownership63. These controversies have in this instance led to the drafting of Article 504quater of the criminal code on informatics fraud.
Other possibilities of redress recognized in Belgian criminal law may be exploited besides. A first option that might be envisaged is the concealment offense, which normally only applies to corporeal objects. An offense punished through Article 505 of the criminal code, concealment punishes the act of a third party fraudulently concealing a contentious good, knowing that such good has been acquired through a crime or infraction. Concealment therefore implies the preliminary recognition of a crime. It could therefore only be relevant in the ABS context to genetic resources viewed as informational goods if the criminal code were amended so as to constitute the "use of the informational component of genetic resources in contradiction to PIC and MAT" as a criminal offense.
Indeed, concealment proceedings require that the author of the infraction possesses the good materially or legally, knowing of its illicit acquisition; both the existence of possession and of such knowledge are appreciated by the judiciary64. Another possible -but non-exclusive -option would be the breach of trust. As an infraction against property rights, the breach of trust is enshrined in Article 491 of the criminal code, which punishes whoever diverts or dissipates goods of any kind from the initial usage or determined use that had been agreed upon, with a prison sentence of one month to five years and a fine from 26 to 55 EUR. This provision could for instance be applied in an ABS context with regard to the exceptions that ought to be provided for research purposes (Article 8a NP), but most importantly against uses of genetic resources contrary to MAT, or in the absence of MAT, in countries where the NP has been ratified and MAT is required by national legislation. The turning point for the constitution of this infraction is considered to be the moment when the user can no longer restore the genetic resources, or use them in a manner consistent with their initial destination65.
All of these approaches require an important stretch from the currently applicable legislation so as to address specifically the use of informational components of genetic resources without PIC or MAT. However, breach of trust may be adequately used in cases of a change of intent in the use of GR. In order to achieve a high level of dissuasion, the opportunity of addressing "information theft" or "genetic resources theft" should be assessed by law-makers, drawing perhaps on the experience acquired with regard to software. Civil proceedings drawing on Articles 1382 and 1383 of the civil code might also be envisaged, provided that the existence of damage, fault (negligence or imprudence) and a causal link is adequately proven.
C. Specificity of the ABS context: an omnipresent international dimension in conflicts
With regard to the international dimension of ABS conflicts and the determination of competent jurisdictions and applicable law vis-à-vis informational components of GR, since property rights are not recognized over such components, Articles 87 and 92 of the private international law code are not applicable. Answers may be found in the provisions of the aforementioned code on contractual and extra-contractual obligations, especially Articles 103 and 104, dealing with conflicts of jurisdiction and laws with regard to torts and liability deriving from a damaging act.

Legal consequences for access to genetic material

Under the current legislation in Belgium, access to genetic resources for their utilization is not subject to a Prior Informed Consent. However, any legal measure that would consider introducing Prior Informed Consent could benefit from building upon existing legislation on physical access to and use of genetic material. That is why the legal consequences of physical access to genetic material are investigated in some more detail in this section. The legislation relevant to physical access depends upon the type of ownership (private, public or res nullius), the existence of restrictions to the ownership, such as specific protection (protected species, protected areas, forests or marine environments), and the location (all four authorities apply their own rules) of the genetic material.

Private ownership or res nullius

In case of private ownership or res nullius (cf. chapter 3.1.1), access to the territory on which the genetic material (i.e.
the specimen) is situated requires the consent of the legal owner of the territory to enter it for the purpose of physically accessing the genetic material (i.e. the specimen). If a disagreement arose ex post on the consent, the legal property rights would prevail in the absence of proof of the consent (for example in the absence of a written contract).
As for access to the genetic material (i.e. the specimen) itself:
- If it is res nullius (e.g. a bee swarm at liberty): then by law no access permits or contracts are needed. Moreover, if you take possession (that is, material deeds of controlling the good for exclusive use), then you automatically become the legal owner of the specimen (Article 2279 of the Civil Code)
- If it is on territory in the private ownership of an individual or a non-state organization: then you need a contract with the private owner, except if special restrictions apply to the legal ownership, which is the case for protected species (cf. discussion below)
- If it is on territory in state ownership: then an access permit is needed (cf. discussion below on protected areas and on territory in the public domain)

Protected species

Protected species in the Flemish Region

In the Flemish Region, the protection of species is regulated by the 'Soortenbesluit'66 of 13th August 2009. Under this act, it is forbidden to:
- deliberately capture specimens of protected animal species, or to collect their eggs (Article 10- §1)
- deliberately pick, collect, cut, uproot, destroy or transplant specimens of protected plant species or other types of organisms (Article 10- §2)
- transport, sell or exchange, or offer for sale or exchange, specimens of protected animal species, of protected plant species or of other types of organisms (Article 12)
- take away nests of protected birds and breeding sites or resting places of protected animals other than birds (Article 14- §1)
The act specifies that, if no other satisfactory solutions exist and if it does not affect the conservation of these species, exceptions can be made for purposes related to research or education, repopulation or reintroduction, and for the necessary breeding (Article 20- §1), as well as for reasons of an economic, social or cultural nature (Article 20- §2). Requests for exceptions need to be addressed to and approved by the "Agentschap voor Natuur en Bos" of the Flemish authorities (Article 22).

Protected species in the Walloon Region

The protection of species in the Walloon Region is regulated by the nature conservation law of 12th July 197367, which contains a general prohibition to:
- capture, kill, detain or transport protected animal species (Article 2 for birds, with a number of exceptions according to the species; Articles 2bis to 2sexies for other animals)
- collect, pick up, cut, uproot, detain or transport specimens or portions of specimens belonging to the plant species listed in Annex 6 of the law (Article 3). Management and maintenance activities do not fall under this prohibition.
For partially protected species, the prohibition is attenuated by Article 3bis, which states that the "aerial parts of the specimens of the plant species listed in Annex 7 can be collected, picked up or cut in small quantities", but they cannot be sold or intentionally destroyed. Derogations from the general prohibition can be awarded in accordance with Articles 5 and 5bis of the 1973 law.
These derogations are in principle unique (individual, personal and non-transferable), but annual derogations can be awarded to physical or moral persons conducting research on one or more biological groups over the entire territory of the Walloon Region (with additional conditions in Article 5bis- §3). Derogations with regard to birds can only be awarded if there is no other satisfactory solution and if they do not endanger the population concerned (Article 5 §2), and only for reasons of public health and security, research and education, protection of wild animal or plant species, air security, or the prevention of important damage to cultures, farm animals, forests or waters, as well as for allowing the capture, detention or other sound exploitation of small quantities of certain birds, selectively and under strictly controlled conditions68. Similarly, derogations from the general prohibition with regard to mammals, amphibians, reptiles, fish and wild invertebrates, as well as wild plant species (Article 5 §3)69, can only be awarded if there is no other satisfactory solution and if such a derogation does not harm the maintenance of the favorable conservation status of the populations concerned in their natural area of distribution. These derogations can only be obtained for reasons of protection of wild animal or plant species, prevention of important damage to cultures, farm animals, forests or waters, research and education, as well as for allowing the taking or detention of certain specimens listed in Annex 2 point A, selectively and to a limited extent.
Article 4 of the same law on nature conservation also mandates the Walloon government to regulate the modalities of collection and analysis of biological information on the wild animal or plant varieties and the natural habitats falling under the scope of the law. An administrative order was adopted on 24th July 200370, stating that the agents of the "Centre71" and their collaborators are authorized to enter private property, with prior notification of the owner, to proceed to operations that are indispensable to the collection of biological information (Article 4).

Protected species in the Brussels-Capital Region

The protection of species in the Brussels-Capital Region is regulated by the Ordinance of 1st March 2012 regarding nature conservation72.
With regard to animal species, this act awards strict protection to the animal species listed in its Annex II.2.1° throughout the Region's territory, and to the species cited in Annex II.3 part 1A throughout the protected zones established in the Region (Article 6- §1 of the Ordinance). Such protection implies the interdiction, amongst other acts, to hunt or capture specimens, transport them, pick up their eggs, sell them, or expose them in public spaces (Article 68- §1), except if these acts fall within the scope of management activities foreseen in the protected zone's management plan (Article 68- §3). Exceptions are made for the import, export or transit of non-indigenous species, which is a federal competence (see chapter 2.1).
With regard to plant species, the Ordinance awards strict protection to the plant species listed in its Annex II.2.2° throughout the Region's territory, and to the species cited in its Annex II.3 part 1 B and II.3 part 2 throughout the protected zones established in the Region (Article 70- §1 of the Ordinance).
Such protection implies the interdiction, amongst other acts, to pick up, cut, uproot, unplant or harm these species in their natural areas of distribution or within zones where they benefit from active protection, and to detain, transport or sell specimens collected within these active protection zones (Article 70- §2), except if these acts fall within the scope of management activities foreseen in the protected zone's management plan (Article 70 §3). Exceptions are made for the import, export or transit of non-indigenous species, which is a federal competence (see chapter 2.1).
For species presenting a regional or community interest, active protection zones can be set out in accordance with Article 72 of the Ordinance. The measures adopted may for instance include prescriptions restricting the access to certain zones, preserving reproduction or resting areas, or regulating the periods, zones or methods of the sampling and exploitation of Annex II.3 specimens outside protected areas (Article 72- §1, 4°).
Special dispensations from the above interdictions can be awarded in accordance with Article 83- §1 of the Ordinance; the rationales include imperative reasons of major public interest (whether of a social or economic nature) entailing primarily beneficial consequences for environmental protection, as well as research or educational purposes. The Article also states that derogations might be granted in order to permit the capture and detention of a limited and specified number of specimens determined by the competent authorities, in a strictly controlled, selective and limited fashion. The violation of these rules is punished by imprisonment from 10 days to 1 year and/or an administrative fine from 150 EUR to 150,000 EUR.
The 2012 Ordinance on nature conservation in the Brussels-Capital Region also contains an Article on the sampling and exploitation of specimens in nature as a whole, stating that the Government is empowered to take the measures necessary to ensure that the sampling and exploitation of the species listed in Annex II.5 are compatible with their maintenance in a favorable conservation status, including measures pertaining to the interdiction of their capture, detention, transport or sale (Article 82).

Protected areas and forests

Protected areas and forests in the Flemish Region

Nature conservation in the Flemish Region is regulated through the "Natuurdecreet" of 21st October 199773, through which the Flemish Government can take all necessary measures for nature conservation, regardless of the type of area. This includes regulating access (Article 13- §1, 6°) and prohibiting certain activities or subjecting them to conditions (Article 13- §3, 6°). These conditions and activities may require a permit. A permit is required for the transformation74 of the vegetation75 or the modification of all or part of small landscape elements or their vegetation in the following areas: green areas; park areas; buffer areas; forest areas; nature development areas; valley areas; source areas; agricultural areas with ecological importance or value; and agricultural areas of special value or similar areas designated as such in spatial implementation plans (Article 13). However, not all types of vegetation may be changed, nor do all actions producing change require a permit76. The prospecting of GR is not included in the actions requiring a permit.
If a permit is delivered, the competent authority shall ensure that no avoidable damage to nature arises, by imposing reasonable conditions to prevent the damage, to minimize it or, where this is not possible, to restore it (Article 16). Certain areas in the Flemish Region enjoy a "special" status, where different rules apply. In the Flemish Ecological Network (Vlaams Ecologisch Netwerk, VEN) it is forbidden to change the vegetation, including perennial crops or small landscape elements. In the nature reserves (natuurreservaten) it is forbidden to deliberately pick, collect, cut, uproot or destroy plants (Article 35). It should be noted that public servants working in relation to matters governed by the "Natuurdecreet" (i.e. nature conservation) may access real property (excluding houses and buildings intended for private or business use) to take measurements and to conduct research (Article 57bis).
Forest areas in the Flemish Region are regulated by the "Bosdecreet" of 13th June 1990. Although it provides for public access for social and educational purposes77, forests can only be accessed via the forest roads ('boswegen'). The Flemish Government can however decide to allow access to the forests outside of the roads for other activities (Article 10- §2). Physical access cannot lead to any reduction of the surface covered by the forest (Article 11). Access is regulated through an "access regulation" ("toegankelijkheidsregeling") for forests for which a management plan ("beheersplan") is required78. Forests for which no management plan is needed do not need an "access regulation" (Article 12). Parts of these forest areas can be designated by the Flemish Government as protected "forest reserves" ("bosreservaten") because of the ecological or scientific function these parts fulfil (Article 22). In these "forest reserves" it is not allowed to remove plants or parts of plants (Article 30.1) or to extract material from the soil or from the substrate (Article 30.2). Violation of this provision is punishable by a fine of 50 to 200 Euros (Article 30).
74 Changes of small landscape elements and vegetation are all acts or works that go beyond normal maintenance. The actions to be considered as normal maintenance are described in Annex 1 of Omzendbrief LNW/98/01 betreffende algemene maatregelen inzake natuurbehoud en wat de voorwaarden voor het wijzigen van vegetatie en kleine landschapselementen betreft volgens het besluit van de Vlaamse regering van 23 juli 1998 tot vaststelling van nadere regels ter uitvoering van het decreet van 21 oktober 1997 betreffende het natuurbehoud en het natuurlijk milieu
75 Vegetation is to be understood as the natural and semi-natural vegetation, with all spontaneously established herb, bush and forest covers, independently of any possible influence of the abiotic environment by humans (Omzendbrief LNW/98/01)
76 This has been regulated by: Besluit van de Vlaamse Regering betreffende de vergoeding van wildschade of van schade door beschermde soorten en tot wijziging van hoofdstuk IV van het besluit van de Vlaamse Regering van 23 juli 1998 tot vaststelling van nadere regels ter uitvoering van het decreet van 21 oktober 1997 betreffende het natuurbehoud en het natuurlijke milieu.
77 The social and educational function of the forest includes the accessibility of the forest to the public for the purpose of recreation or education.
78 A management plan is required for all public forests and for private forests of at least five hectares.
Further provisions on physical access to both forest and nature reserves in the Flemish Region are provided by a specific Executive Order79, which applies only to pedestrians, cyclists, horse riders, fishermen, swimmers, skaters, divers, kayakers, sailors, rowers and windsurfers (Article 5- §2).

Protected areas and forests in the Walloon Region

In the Walloon Region there is a general obligation to request a permit (for land planning) for acts that consist of "clearing the ground or transforming the vegetation of a zone that is judged by the government to be in need of protection, with the exception of the specific management plan of national and aggregated natural reserves", in accordance with Article 84- §1, 12° of the Walloon code for urban and land planning. Furthermore, Article 136 of the same code states that the execution of acts may be "either prohibited or subject to specific conditions for the protection of persons, goods or the environment when those acts relate to national natural reserves, a humid zone of biological interest, an underground cavity of scientific interest, a Natura 2000 site or a forest reserve (Article 452/27)".
In natural reserves and national natural reserves, physical access is regulated by Article 12 of the 1973 nature conservation law, pursuant to which the ministerial decree of 23rd October 197580 has been enacted. Access to the non-protected material found in these zones is regulated by Article 11 of the nature conservation law, which states that it is forbidden to take out, cut, destroy or harm trees or the vegetative soil as such, or to modify the soil. For national natural reserves, in addition to the acts prohibited by Article 11 of the nature conservation law, it is also forbidden to "take out plants or vegetal parts, notably moss; or to pick up blueberries or cranberries with the help of a hairbrush", in accordance with Article 5 of the ministerial decree.
In humid zones of biological interest, in accordance with Articles 2 and 3 of the Walloon Government decree of 8th June 198981 regulating humid zones of biological interest, it is "forbidden at all times to pick up, unplant, harm or destroy all indigenous species of the flora growing in a wild state in the humid zone". For fauna, it is forbidden to hunt, kill, destroy, capture or disturb all indigenous species, except those for which hunting or fishing is authorized and those listed in the Annex of the decree.
In underground cavities of scientific interest, in accordance with Article 3 of the Walloon Government Decree of 26th January 199582, it is the ministerial decrees establishing the specific protected zone that regulate both the physical access and the conditions for research on or other utilization of GR. In general83, these decrees state that access to the site is only authorized for management and scientific follow-up operations mandated by the managing committee. Scientific and speleological research can be done with the consent of the managing committee, with due respect for the integrity of the cavity and the scientific follow-up measures.
83 The texts of these ministerial decrees may be found on http://environnement.wallonie.be/legis/consnat.htm; for an example, see the decree of 18th September 2001 on the Ivoz-Ramet Vegetation grotto, http://environnement.wallonie.be/legis/cavites%20souterraines/cavite041.htm
In natural parks, regulated by the Decree of 16th July 198584, the particular terms of access are managed by the Managing commission set up in accordance with Articles 11 and 12. In accordance with the interpretation made by the high administrative authority, that is the Council of State, the Walloon code for urban planning defines the acts that are subject to a permit in natural parks as those that are susceptible of having a significant impact on the landscape and the environment85.
In forest reserves, in accordance with Article 20 of the Walloon forest code of 15th July 200886, the access of pedestrians is forbidden outside roads and resting areas. However, access can be granted by the agents designated by the Walloon Government (in accordance with Article 92 of the forest code), under the conditions set out by these agents, for medical, pedagogic, scientific, cultural or nature conservation purposes. In accordance with Articles 32 and 34, it is forbidden to cut out, take out or tear down trees, or to take out their sap, without the authorization of the owner. Furthermore, Article 50 states that no sampling of any product of the forest can be undertaken without the consent of the owner and without respecting the conditions that could be adopted by the government (implying that such conditions may not have been adopted). The fine for violation ranges between 25 and 100 Euros (Article 102).
What about those acts that do not require permits? The establishment and prescription of protected zones is considered to be a "servitude légale d'utilité publique", restricting the use and affectation of a specific portion of land. The notion of "acts and works" should be understood as covering those activities characterized by a physical link to the soil or the vegetation, or causing a physical modification of the soil or the vegetation87. Therefore, utilization of GR as such may in certain cases not be considered as a modification or transformation of the ecosystemic balance set out by the protected zone. However, if this is the case, this needs to be specified in the general access rules of the protected zone or in the permit. Further, within this understanding of passive obligations, those acts that are normally not subject to a permit might, according to doctrinal and jurisprudential thought, still have to respect the destination of the zone; otherwise they would fall under administrative sanctions88.

Protected areas and forests in the Brussels-Capital Region

Access to natural areas (both protected and non-protected) in Brussels is regulated by the Ordinance of 1st March 2012 regarding nature conservation. In non-protected areas, the Government may regulate public access and behavior applicable to the regional parks, gardens, squares, green areas and unoccupied land managed by the Region and publicly available (Article 66- §2). There is no general prohibition or permit requirement on the collection of natural resources in these areas. According to Article 82 of the Ordinance, the Government has to take the necessary measures to make sure that the prospecting and use of specimens of the species listed under Annex II of the Ordinance are compatible with the conservation of these species. Measures include the prohibition or limitation of their capture, detention, transportation and sale.
In protected areas89 it is forbidden to:
- pick, remove, collect, cut, uproot, transplant, damage or destroy native plant species and bryophytes, lichens and macro-fungi, or destroy, damage or transform the vegetation (Article 27- §1, 1°);
- leave the roads and paths open to public traffic (Article 27- §1, 10°).
If no other satisfactory solutions exist and if it does not affect the conservation of native species, derogations from Article 27 can be made for purposes related to research or education, repopulation or reintroduction, and for the necessary breeding (Article 83). Requests for derogations, including information on the purposes of the request, need to be addressed to and approved by the Brussels Institute for Environmental Management (IBGE/BIM), which delivers a permit (Article 84). Non-compliance with Article 27 is punishable by imprisonment from 10 days to 1 year and a fine of 150 EUR to 150,000 EUR (Article 93).
89 Applies to all protected areas found in the Brussels Region:

Marine environment

There are two main legal sources regarding the protection of the marine environment: the so-called "MMM" Law of 20th January 1999 and the "EEZ" Law of 22nd April 199990. The first one establishes a general regime of protection of animal and plant species. The second one specifies the rights Belgium holds over the exclusive economic zone and the territorial sea.

The "MMM" Law

The law of 20th January 1999 defines the legal principles to be respected in order to preserve the Belgian part of the North Sea against marine pollution, and to conserve and develop its natural environment91. To this end, the law of 20th January 1999 integrates within the Belgian legal order the different general principles of environmental law: the prevention principle, the precaution principle, the «polluter-pays» principle, etc.
The "MMM" Law sets up a general regime of protection of natural resources and marine areas. In this regard, the Federal Government can take all the necessary measures concerning the protection of the marine spaces92, including -amongst others -the obligations resulting from the CBD (Article 6). The Federal Government can also create protected marine areas (Article 7). The law thus organizes and determines the categories and physical borders of the protected zones. It moreover introduces a categorization of the different potentially concerned zones:
- In the integral and "directed" marine areas, any activity is forbidden, except for the activities specified in Article 893. However, some specific activities are authorized in exceptional cases in the "directed" marine areas94.
- A general authorization is given for the special protection zones and special conservation zones, even if some activities can be punctually forbidden. Thus, the 2003 federal Masterplan led to the delineation of five marine zones designed to specifically protect animal species. The access to and use of these zones are submitted to specific conditions determined by the various users of the North Sea for specific periods of the calendar year.
The Federal Government establishes a list of protected species in the marine areas95, which benefit from a strict prohibition regime forbidding the capture, killing, detention or transport of the protected animal species, and the collection, picking up, cutting, uprooting, detention, transport or intentional destruction of specimens or portions of specimens belonging to the plant species listed as protected (Article 10 §1).
Derogations from the general prohibition can nonetheless be awarded for the needs of public health, scientific research, education, and the restocking or reintroduction of these species (Article 10 §2). Lastly, the deliberate introduction of non-indigenous organisms is forbidden unless otherwise stated by the Government, as is the deliberate introduction of GMOs (Article 11).
Finally, the law stipulates that any construction activity or industrial, commercial and advertising activity taking place in the marine spaces requires a license (Article 25- §1). The granting of this license depends on an environmental impact assessment of the expected activity (Article 28)96. It should however be noted that some activities remain excluded from the scope of Article 25- §1, such as professional fishing or marine scientific research, whose implementation is regulated in the EEZ law described hereafter (Article 25- §3).
92 See Article 2 §1. The marine spaces are defined as «the territorial sea, the exclusive economic zone and the continental shelf covered by the Law of 13th June 1969 on the continental shelf of Belgium»
93 The following activities are accepted in the marine areas: (i) surveillance and control; (ii) monitoring and scientific research carried out for or with the consent of the authority; (iii) sailing; (iv) professional fishing, notwithstanding the restrictions or prohibitions imposed by the Government; (v) nature conservation and development activities; (vi) military activities (Article 8)
94 For an example of a directed marine area, see: Executive order of 5th March 2006 créant une réserve marine dirigée dans les espaces marins sous juridiction de la Belgique et modifiant l'Arrêté royal du 14th October 2005 créant des zones de protection spéciales et des zones de conservation spéciales
95 See Annex I of the Government Executive Order of 21st December 2001 aiming at the protection of species in the marine areas under the jurisdiction of Belgium (M.B., 14th February 2002)
96 The specific devices organising the license granting process are defined in the executive order of 7th September 2003 establishing the granting procedure of the permits and authorizations required for some activities carried out in marine areas, M.B., 17th September 2003

The "EEZ" Law

The law of 22nd April 199997 specifies the legal status of the territorial sea and broadens the sovereign rights of Belgium to a maritime zone located beyond the territorial sea and adjacent to it: the Exclusive Economic Zone (EEZ). The regulation of the exclusive economic zone concerns the exploration and exploitation of the natural resources of the waters superjacent ("surjacentes") to the marine soils, as well as of the marine soils themselves and their subsoil98. Belgium has sovereign rights for the exploration and exploitation, conservation and management of the natural, biological and non-biological resources found within the EEZ, as well as for other activities tending to the exploration and exploitation of the zone for economic ends (Article 4- §1). Belgium also has jurisdiction with regard to the establishment and utilization of artificial islands, installations and construction works, to marine scientific research and to the protection and preservation of the marine environment (Article 4- §2). In this framework, any scientific research in the territorial sea and in the exclusive economic zone must be submitted for the consent of the Minister of Foreign Affairs, who then has to consult the different ministers involved (Article 40)99.
Such consent is deemed to be given if Belgium is part of the institutional organization or of the bilateral agreement on the basis of which the scientific research project is developed, unless Belgium objects to it within the two months following the official research request. Finally, scientific research carried out by foreign ships in the territorial sea and the exclusive economic zone is subject to the Belgian law relating to the protection and conservation of the marine environment (Article 42).
In the territorial sea, the exclusive economic zone and the high sea, the Federal Government can take the necessary measures to ensure the conservation of biological resources (Article 1 §1, al. 1 of the law of 12th April 1957 entitling the King to prescribe measures in order to conserve the marine biological resources, as modified by Article 6 of the law of 22nd April 1999). Fishing in the territorial sea and in the exclusive economic zone is forbidden for foreign fishing boats (Articles 10 and 17), except where allowed by rights deriving from the Treaty on European Union and the applicable rules of international law. In this framework, the Federal Government can take the necessary measures to ensure respect of this general prohibition 100. Finally, Belgium exercises sovereignty over the territorial sea and holds sovereign rights over the continental shelf as regards the exploration and exploitation of mineral and non-living resources (Article 27).
97 M.B., 20th July 1999
98 The limits of the exclusive economic zone are fixed through different bilateral agreements: Agreement between the Government of the Kingdom of Belgium and the Government of the United Kingdom of Great Britain and Northern Ireland relating to the delimitation of the continental shelf between the two countries, signed in Brussels on 29th May 1991 and approved by the Law of 17th February 1993 (M.B., 1st December 1993); Agreement between the Government of the Kingdom of Belgium and the Government of the French Republic relating to the delimitation of the continental shelf between the two countries, signed in Brussels on 8th October 1990 and approved by the law of 17th February 1993 (M.B., 18th December 1993); Agreement between the Government of the Kingdom of Belgium and the Government of the Netherlands relating to the delimitation of the continental shelf between the two countries and Annexes, signed in Brussels on 18th December 1996 and approved by the law of 10th August 1998 (M.B., 19th June 1999).
99 For the general regulation of the matter, see Articles 40-44 of the law of 22nd April 1999
100 See Articles 10 and following; 17 and following
Access to state-owned land outside protected zones
Access to genetic material on state-owned land outside protected zones also requires the authorization of the competent state authority, except if the land is explicitly designated as public domain. In the latter case, under the current legislation it is still unclear how access to genetic material is regulated. In general, the public domain encompasses "the goods specifically assigned for public use or arranged with a view to realizing a public service objective". The specificity of the destination of such goods requires "a specific legal protection and therefore the application of a specific administrative legal regime" 101. Access to genetic material is not explicitly mentioned in the current legal framework applicable to public domain goods.
However, each public entity has its own public domain that it regulates in accordance with the competences attributed or granted by the Belgian legal order. For instance, with regard to the public domain at the municipal level, the regulation of the administrative police of Gesves in the province of Namur states in its Article 1 that it is forbidden to pick the flowers found on the public domain, as well as to take out grass, soil, rocks or materials belonging to the public domain without prior authorization. In the absence of such specific regulation by the competent authority, access to genetic material in the public domain remains a grey legal zone. This question certainly deserves further clarification.
101 Willieme C., Boland M., Simon V. (2010), Valorisation des biens situés dans le domaine public : Quel encadrement juridique ?. Rec. gén. enr. not., 2010, pp. 253-271 (at p. 256).
The status of traditional knowledge associated with genetic resources under national legislation in Belgium
Traditional knowledge in the context of the CBD is usually understood as "knowledge, innovations and practices of indigenous and local communities" that embody "traditional lifestyles relevant for the conservation and sustainable use of biological diversity" (Article 8(j) of the CBD). Traditional knowledge is "developed from experience gained over the centuries and adapted to the local culture and environment" and "transmitted orally from generation to generation". Moreover, traditional knowledge "tends to be collectively owned and takes the form of stories, songs, folklore, proverbs, cultural values, beliefs, rituals, community laws, local language, and agricultural practices, including the development of plant species and animal breeds" 102.
There are no contemporary legal provisions in Belgium explicitly governing the concepts of "traditional knowledge", "traditional knowledge associated with genetic resources" and "indigenous and local communities". One might argue that some types of knowledge could be qualified as "knowledge, innovations and practices" that "embody traditional lifestyles relevant for the conservation and sustainable use of biological diversity". One example would be the knowledge involved in the conservation and use of old seed varieties by farmers. However, this knowledge is not related to specific local communities and their traditional lifestyles as understood by the CBD. Nevertheless, concerns over traditional knowledge and the rights of indigenous and local communities have been addressed in some international instruments, especially in the area of development cooperation and sustainable development, to which Belgium is a Party 103. Three international instruments broach the rights of indigenous and local communities and recognize the importance of traditional knowledge:
 the United Nations Convention to Combat Desertification 103;
 the 1957 ILO Convention No. 107 on Indigenous and Tribal Populations;
 the United Nations Declaration on the Rights of Indigenous Peoples.
103 United Nations Convention to Combat Desertification in those Countries Experiencing Serious Drought and/or Desertification, particularly in Africa, done at Paris on 17 June 1994, BS: 10-12-1997; Annex II (Annex on regional implementation for Asia) to the United Nations Convention to Combat Desertification in those Countries Experiencing Serious Drought and/or Desertification, particularly in Africa, done at Paris on 17 June 1994, BS: 10-12-1997; Annex III (Annex on regional implementation for Latin America and the Caribbean) to the United Nations Convention to Combat Desertification in those Countries Experiencing Serious Drought and/or Desertification, particularly in Africa, done at Paris on 17 June 1994.
The UN Declaration on the Rights of Indigenous Peoples might have a practical and a political interest as it is explicitly "noted" in the preamble of the Nagoya Protocol and might therefore provide a framework for the further elaboration of decisions under the Nagoya Protocol relevant to the rights of indigenous and local communities. It nonetheless remains a non-binding instrument, whose provisions do not create any legal obligations.
A fourth instrument of relevance is Agenda 21, adopted at the United Nations Conference on Environment and Development (UNCED) held in Rio de Janeiro, Brazil, from 3rd to 14th June 1992. Its chapter 26 focuses on the role of indigenous people and their communities. It provides that such communities possess a unique knowledge of their environment and its natural characteristics. Consequently, indigenous people and their communities should acquire the right of self-determination, manage their own resources and participate in the decision-making on development programs affecting them 104. This instrument is not legally binding and merely addresses issues of potential future action.
As pointed out in the fourth National Report on the implementation of the Convention on Biological Diversity in and by Belgium (2009), certain policy initiatives have been adopted or identified in order to support actions 105 of indigenous and local communities situated in developing countries. The ratification of ILO Convention 169 (Indigenous and Tribal Peoples Convention) was also put on the agenda. Bilateral official cooperation provides limited direct support to indigenous and local communities, since this is seldom taken up as a priority by the partner countries, either in their national development and poverty reduction policies or in their policy dialogue with donor countries. Belgium ratified the ILO Convention No. 107 106 but not the ILO Convention No. 169.
The 1957 ILO Convention No. 107 on Indigenous and Tribal Populations
The 1957 Indigenous and Tribal Populations Convention (No. 107) was a first attempt to codify the international obligations of States in respect of indigenous and tribal populations. It was the first international convention on the subject, and was adopted by the International Labour Organization.
Relevant provisions of the ILO Convention No. 107
The Convention does not avail itself of the concept of indigenous and local communities; rather, it applies to indigenous tribal or semi-tribal populations in independent countries whose social and economic conditions are at a less advanced stage than the stage reached by the other sections of the national community, and whose status is regulated wholly or partially by their own customs or traditions or by special laws or regulations (Article 1).
This convention entails certain obligations incumbent on Belgium which have not yet been addressed. Three provisions are, however, of particular relevance for the implementation of the NP by Belgium. These concern:
 Article 7(1): In defining the rights and duties of the populations concerned regard shall be had to their customary laws.
 Article 11: The right of ownership, collective or individual, of the members of the populations concerned over the lands which these populations traditionally occupy shall be recognized.
 Article 13: 1. Procedures for the transmission of rights of ownership and use of land which are established by the customs of the populations concerned shall be respected, within the framework of national laws and regulations, in so far as they satisfy the needs of these populations and do not hinder their economic and social development. 2. Arrangements shall be made to prevent persons who are not members of the populations concerned from taking advantage of these customs or of lack of understanding of the laws on the part of the members of these populations to secure the ownership or use of the lands belonging to such members.
EXISTING ABS-RELATED POLICY MEASURES AND OTHER INITIATIVES IN BELGIUM
Measures resulting from coordination between the three regions and the federal level
In 2006, Belgium adopted its National Biodiversity Strategy 2006-2016 107, which established 15 strategic objectives and 78 operational objectives to reduce and prevent the causes of biodiversity loss. The 6th strategic objective aims to contribute to an equitable access to and sharing of benefits arising from the use of genetic resources. This objective is projected to be realized mainly through capacity building of national ABS stakeholders and further implementation of the Bonn Guidelines on ABS.
In 2006, a study on the awareness of Belgian users of GR concerning the CBD and on the level of implementation of ABS provisions and the Bonn Guidelines in their activities revealed mixed knowledge within stakeholder groups 108. The Convention seemed to be better known in upstream activities (e.g. fundamental research) than in downstream activities (e.g. commercial products). The collections and research sectors, both private and public, have a good understanding of the CBD, while other sectors, predominantly composed of private actors, have little or no knowledge. Concerning the implementation of ABS provisions, the report showed that PIC-related provisions seem to be relatively widespread, whereas benefit-sharing provisions are nearly nonexistent 109.
Other operational objectives of the National Biodiversity Strategy include the enhancement of synergies between actors for addressing ABS, the protection of local communities and their traditional knowledge and the establishment of an international regime on ABS. However, these seem to be general goals the government wants to strive for, rather than specifically delineated strategic actions. The strategy was evaluated at the end of 2011 and is currently under review in order to bring it into line with the new multilateral and European biodiversity objectives (the Biodiversity Strategic Plan 2011-2020 and its Aichi Targets, the EU Biodiversity Strategy and other national and international commitments) and subsequently to extend the reviewed strategy until 2020. As part of the present impact study, two stakeholder workshops were organized.
The aim of the workshops was to identify the wide range of stakeholders concerned by the implementation of the Protocol in Belgium, to make them aware of the content of the Protocol and its obligations, and to give stakeholders the opportunity to exchange views and provide input on the options for, and consequences of, the implementation of the Protocol 110.
Federal measures
The National Biodiversity Strategy followed the Second Federal Plan for Sustainable Development 2004-2008 111, which calls for a coherent national position on access and benefit-sharing. A third Federal Plan for Sustainable Development, calling for an "equitable distribution of the commercial exploitation of biological resources", was drafted for the period 2009-2012 but never adopted. The second plan was instead extended until 2012.
The two plans above are partly concretized by the Federal Plan for the integration of biodiversity in four key sectors, adopted by the Federal Government in 2010. Three of these key sectors are particularly relevant for ABS implementation: the economy, development cooperation and science policy. For each of these sectors a separate and detailed action plan for the integration of biodiversity has been developed, including several ABS-related measures. For the economic sector the plan mainly focuses on awareness-raising and capacity building of the private sector and calls for a proactive participation of the Federal Government in the establishment of an international ABS regime. The plan also calls for an increased participation of the customs administration in biodiversity policy, albeit not directly linked to ABS. This stronger understanding of biodiversity-related issues within the customs administration could however be beneficial for, and facilitate, the implementation of the NP. TEMATEA is a web-based capacity-building utility that supports the coherent implementation of international and regional biodiversity-related conventions and provides an overview of national obligations regarding ABS, as derived from several international agreements.
In the science policy field, the first proposed action of the Federal Plan for the integration of biodiversity in four key sectors is particularly relevant to ABS as it calls for an inventory of the national collection of plant germplasm. This objective will directly benefit from existing projects and initiatives. For instance, BELSPO, together with Ghent University, developed straininfo.net 115, a pilot project using bioinformatics tools (web crawlers and search engines) to access and make available data and information stored in 60 biological resource centers worldwide. A standard format allowing culture collection catalogue information to be exchanged easily has also been developed. PLANTCOL 116
Regional measures
The Regions each have separate biodiversity policy plans, mostly as part of a broader environmental strategy, in which ABS measures could be taken up. Although these plans all explicitly refer to the CBD as guidance for biodiversity policy, none of them contains ABS-related provisions. In its recently released Environmental Policy Plan 2011-2015 (MINA-4), as well as in the latest Flemish Strategy for Sustainable Development 118, the Flemish Government also refers to the 10th COP of the CBD as an important watershed moment, but without identifying or emphasizing the need for ABS-related actions.
Research institutions' and private initiatives and policies on ABS
In 1997, the Belgian Coordinated Collections of Micro-organisms (BCCM) launched the Micro-organisms Sustainable Use and Access Regulation International Code of Conduct (MOSAICC) initiative. MOSAICC is a voluntary code of conduct to facilitate access to microbial genetic resources in line with the CBD, the TRIPS Agreement and other applicable national and international law, and to ensure that the transfer of material takes place under appropriate agreements between partners and is monitored to secure benefit-sharing. It aims, in particular, to develop an integrated conveyance system that has reliable tools to evaluate the economic value of microbiological resources; that provides validated model documents with standard provisions to enable tracking via an uncomplicated procedure widely applied by microbiologists; and that combines valuation and tracking in one system for the trading of microbiological resources, with balanced benefit-sharing for those entitled to be rewarded for the services and products they provide to society.
BCCM uses a standard BCCM Material Transfer Agreement (MTA) for access to the genetic resources of its public collection. If necessary, the MTA can be amended with additional conditions already attached to the biological material. The resources are distributed for a fee covering expenses. The MTA stipulates that anyone seeking to access genetic resources held by the BCCM has the responsibility to obtain any intellectual property licenses necessary for its use and agrees, in advance of such use, to negotiate in good faith with the intellectual property rights owner(s) to establish the terms of a commercial license.
The National Botanic Garden of Belgium (NBGB) is a member of the International Plant Exchange Network (IPEN), a network of botanic gardens that organizes the exchange of living plant specimens. IPEN's members have adopted a Code of Conduct regarding access to genetic resources and benefit-sharing. The NBGB only supplies seed material to other IPEN members, unless the "Agreement on the supply of living plant material for non-commercial purposes leaving the International Plant Exchange Network" is signed by authorized staff.
Although not explicitly linked with ABS, stakeholder conferences and workshops were organized in 2010 by the Association for Forests in Flanders (Vereniging voor Bos in Vlaanderen) on the importance of preserving autochthonous genetic bush and tree material. This initiative led to the Plant van Hier project, which included the development of study material 119 and the creation of a product label 120 encouraging the commercialization of native bushes and trees.
Existing ABS-related EU instruments and other initiatives
Implementation of the Bonn Guidelines
The EU Biodiversity Action Plan (BAP) lays down the political commitment to promote full implementation of the CBD Bonn Guidelines on ABS and other agreements relating to ABS such as the FAO International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA). With regard to the implementation of Article 8(j) of the CBD, the EU BAP put forward the political commitment to apply from 2006 onwards the principle of PIC when commercially using TK relating to biodiversity and to encourage the equitable sharing of benefits arising from the use of such knowledge.
Member States were therefore encouraged to implement the relevant aspects of the Bonn Guidelines when granting access to TK relating to biodiversity. In particular, Member States were encouraged to raise the awareness of stakeholders so that they can effectively participate in and contribute to EU preparations for international ABS negotiations, and to effectively contribute to the ongoing negotiations of the Standard Material Transfer Agreement (SMTA) under the International Treaty on Plant Genetic Resources for Food and Agriculture. In order to assess the status of ABS within Europe, the European Commission undertook to calculate the percentage of European patent applications for inventions based on GR. Indicators were to be developed under the lead of the joint Secretariat of the Pan-European Biological and Landscape Diversity Strategy (PEBLDS) with the assistance of the European Patent Office and the World Intellectual Property Organization.
In 2010, in the context of its reporting obligations to the EU, Belgium qualitatively monitored the implementation of BAP actions and the achievement of targets. It was noted that over the period 2006-2009:
 Belgium did not provide funding for the ABS Working Group;
 no national legislation implementing the CBD Bonn Guidelines on Access and Benefit-sharing existed;
 no national activities raising awareness of the CBD Bonn Guidelines had been implemented;
 no national legislation implementing the MTA of the ITPGRFA existed;
 no national activities raising awareness of the MTA of the ITPGRFA had been implemented.
The EU funds the BIOPAMA project for ABS capacity building in ACP countries (EUR 20 million) in order to enhance existing institutions and networks by building their capacity to strengthen policy and to implement well-informed decisions on biodiversity conservation and protected areas management. The project has two components. The first enhances the effective planning and management of protected areas in ACP countries through the intensive use of scientific and policy information accessible from appropriate database reference systems combined in one information tool, and through the establishment of a "Centre for Protected Areas and Biodiversity" in each of the three regions. The second component aims to contribute to the Access and Benefit-sharing Capacity Development Initiative. This initiative aims to further build the capacities of stakeholders in access and benefit-sharing in each of the three ACP regions and is implemented through a trust fund managed by the German Cooperation Agency (GIZ).
Proposal for a Regulation on ABS
In October 2012, the European Commission proposed a Regulation on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization in the Union 121. The proposal was based on two previously conducted impact assessments 122.
CONFORMITY OF THE EXISTING NATIONAL LEGISLATION AND MEASURES WITH THE OBLIGATIONS OF THE NAGOYA PROTOCOL
To the best of our knowledge, no existing national legislation or measures are in contradiction with the obligations under the Protocol. However, existing legislation that addresses physical access to genetic material and instruments regulating benefit-sharing between users and providers of genetic resources need to evolve and be complemented by additional instruments in order to implement the obligations of the Protocol. As indicated above, this analysis is based on the list of legal obligations summarized in annex 1 to this report.
Conformity of existing instruments in Belgium that already address obligations of the Protocol
Articles 6.1 + 6.3
Under the current legislation in Belgium, access to GR is not subject to Prior Informed Consent (PIC) by the Belgian State as a Party to the NP (that is, PIC based on a written decision by a Competent National Authority (CNA) on access and benefit-sharing). Although this is not compulsory under the Nagoya Protocol, the Belgian State can decide that access is subject to PIC if it so wishes and take the necessary legislative, administrative or policy measures, as appropriate, to provide for access permits delivered by one or more Competent National Authorities and to establish the mutually agreed terms for these access permits.
Articles 13.1, 13.2 and 13.4
The ABS national focal point already exists. Belgium nominated a civil servant of DG Environment of the FPS Environment who currently fulfills the function of national focal point on ABS. However, the obligations related to the CNA still have to be implemented.
Articles 15.1 and 16.1
Under the current legislation in Belgium (more specifically the provisions of private international law), the acquisition and the loss of property rights over genetic materials are established by the laws of the State where the good was situated at the moment these acts or facts occurred (that is, at the moment of acquisition). However, as discussed in chapter 3, even if these principles are a useful contribution to compliance with private law contracts over genetic materials, they are certainly insufficient for the Nagoya Protocol, as compliance with PIC obligations involves public law requirements and compliance with administrative acts of the country of origin of the GR, which fall outside the scope of private international law. Furthermore, at present, "use of GR under the Nagoya Protocol" is not explicitly mentioned within the scope of the Belgian code of private international law. In particular, as stated above, utilization of GR often bears on the informational components (the DNA code, published research results, databases, etc.) or might be based on the use of a copy of the GR (a clone of the entire biological material or a clone/reproduction of a component of it), even when the GR is not situated in Belgium. These frequent cases of research on, or utilization of, GR that is not physically in Belgium are not covered by the provisions of the code of private international law. Therefore, additional measures will be needed to comply with the obligations under Articles 15 and 16.
Article 17.1
One measure to monitor the use of genetic resources has already been taken, namely the disclosure of information on the country of origin in patent applications under Belgian law, whenever this information is available (cf. detailed discussion in 3.1.1). However, this measure still needs to be complemented by other measures in order to comply with Article 17.1, as it is neither organized nor designated as a formal checkpoint.
Articles 18.2 and 18.3
Regarding the concrete measures linked to the international ABS regime, three main issues would have to be addressed: (a) determining the jurisdiction that is internationally competent to deal with disputes raised within ABS agreements; (b) determining the law applicable to ABS-related disputes; (c) recognizing and enforcing in another country, party to the NP, judgments rendered by a jurisdiction in the ABS context.
The first two points (a) and (b) are related to Articles 18.1 and 18.2 of the Protocol, whose provisions seem to state the obvious and have little added value. Most if not all countries in the world with a legal system provide for an opportunity to seek recourse in cases of breach of contract, and have established specific provisions regulating lawsuits involving a "foreign" law element. See chapter 3.1.2 on the existing Belgian private law and international private law provisions regarding contractual breach, amongst which the EC Regulation 44/2001 (Brussels I) and the Rome Convention on contractual obligations (as well as the Council Regulation "Rome I") 125.
The third point (c) relates to Article 18.3, a strict reading of which suggests that a Party could demonstrate compliance by proving ratification -or any effort leading to it -of certain international legal arbitration instruments. First, as convincingly put forward by the IUCN Explanatory Guide to the Nagoya Protocol on Access and Benefit-sharing, "it is important to note that it is not for the Parties jointly to take the measures referred to […] it is for "each Party" to enact such measures at the domestic level. Second, the measures shall be taken (only) if it is judged by the Party "appropriate" to do so" 126.
125 More specifically, Article 18.1 does not need to be analyzed under "existing legislation" as it refers to MAT between Parties to the NP; let us also note that Article 18.1 only "encourages" providers and users of genetic resources to include dispute resolution provisions. Article 18.2, however, sets an obligation for each Party at the domestic level to ensure that recourse is available under its legal system if a dispute arises in the framework of a contractual obligation such as the one established by MAT. Moreover, Article 18.2 is drafted in such a way that it does not mention whether the opportunity shall also be granted to foreign citizens. It makes clear though that such recourse has to be consistent with the applicable jurisdictional requirements of the Party concerned, leaving this issue to national legislation.
126
Article 20.1
Existing measures that deserve to be mentioned are the codes of conduct of IPEN and MOSAICC. These will be further discussed in the action cards below under section 6.2.
127 Extradition procedures would in principle need to be initiated in order to execute the judgment against the person convicted of theft, or control over his property would be needed in order to execute the judgment against his property. These procedures would be expedited depending on the international conventions adhered to by the States concerned (Castiaux J. (2011), Extradition en Belgique. In Chome P., Klees O., Lorent A. (red.), Droit pénal et procédure pénale, Mechelen: Kluwer, p. 155). Here, the Second Protocol to the 1959 European Convention on mutual assistance in criminal matters more specifically provides for transboundary observation when there are suspicions of aggravated theft (Article 17).
128 This convention, negotiated at the European Union level, requires user countries to take effective measures to ensure that provider countries have recourse to their legal system to obtain redress. It includes an obligation to provide access to administrative or judicial procedures to challenge breaches of national law in a similar way as provided for by Article 18(2) of the Protocol.
Obligations of the Nagoya Protocol currently not addressed by legal or non-legal instruments in Belgium
To the best of our knowledge, no other obligations of the Nagoya Protocol are explicitly and specifically addressed by existing legal or non-legal instruments in Belgium. Therefore, additional instruments will be needed to implement these obligations. These possible legal and non-legal measures will be considered in a systematic manner in the next section. For the purpose of the analysis, a distinction is made, however, between the Articles that need to be considered most urgently, because of their core nature in the implementation of the Protocol, and additional measures that are important elements in the implementation of the obligations, but that are less urgent. The core measures that are considered are the measures specified in the terms of reference of this study ("measures requiring special attention"):
REVIEW OF EXISTING MEASURES AND INSTRUMENTS ON ABS IN OTHER COUNTRIES
In the next section, a brief overview of measures adopted in other countries is presented. It is based on a review of primary and secondary information related to existing ABS regulations in other countries. In order to provide a clear and structured overview, they are grouped under the following broad themes: access, benefit-sharing, conservation activities and biodiversity research, Competent National Authority, and user compliance and monitoring. Under each theme, a number of issues found in the consulted information and relevant for the discussion on implementation in Belgium are listed, with a particular focus on the measures listed in the Bonn Guidelines on Access to Genetic Resources and the Fair and Equitable Sharing of the Benefits arising out of their Utilization. Whenever possible, detailed reference is made to how these issues have been solved in other countries, in existing legislation or in detailed assessments of possible legislation. Relevant actions for the implementation of the NP in Belgium are summarized for each theme, and a distinction is made between actions which are relevant in case of minimal implementation of the core obligations and actions which are relevant in case of additional implementation (beyond the minimal implementation of the core obligations and beyond the core obligations).
This overview of existing ABS measures is by no means exhaustive, nor does it imply anything for the implementation at the Belgian level. It rather serves as a basis for reflection on the identification of possible implementation measures for Belgium. Therefore, issues which have been identified as being already present in Belgium (e.g. the designation of a NFP) or which have already been discussed previously are not repeated in this section, even if they will be used in the further assessment of the measures. Furthermore, it has to be noted that most of the options identified below are not mutually exclusive. If necessary and/or desirable, a combination of the options also represents a possible outcome.
Access
First, as part of the core obligations, each Party to the Protocol will have to determine whether access to GR will be subject to PIC by the State or not, and, if requiring PIC for access, will have to take legislative, administrative or policy measures, as appropriate, containing minimum requirements for access rules and procedures (Article 6 of the NP).
To determine the applicable access rules for GR, legal ownership of GR under national legislation will need to be fleshed out in order to decide which access conditions and procedures can and need to be implemented in relation to the prior informed consent requirements. In most countries that have ABS legislation in place, ownership of GR is derived from ownership of natural resources, which is defined by the Constitution or the civil code, or by common law 129. This ownership applies to the physical component of these resources. In the exceptional case where patents are already attached to genetic components of natural resources at the moment of accessing a natural resource in its in-situ environment (because the same genetic sequence exists in the organism that is accessed and in another organism that was accessed earlier in relation to the patent), an additional layer of ownership rights can be claimed on the genetic information, but only in relation to the specific genetic component as used for the specific industrial use claimed in the patent. In all other cases, under the current Belgian legal framework it seems that no legal ownership could be claimed on the informational component, due to its nature (cf. the analysis in chapter 3.1.1).
As shown by a study of national legislation in selected countries 130, two ownership systems are generally in place with regard to natural resources: they can fall under private and/or communal property, or be property of the state; both systems can also coexist. In both situations, the national legislation in place determines how these property rights relate to the genetic components of these natural resources. In some countries, these rights directly derive from the ownership of the natural resources. Other countries have decided to explicitly create legal measures to limit the extent of private ownership of natural resources and to place all GR under state ownership. This option is not an obvious one. In particular, according to some scholars, it would require a modification of the property rights system which could infringe on the existing system for regulating private ownership rights 131.
Second, improperly conceived access legislation can be a major cause of legal uncertainty and/or may "scare off" potential applicants. The measures to be created should hence establish a predictable and clear situation. As such, the following measures are considered in countries with ABS legislation:
 the conditions under which access will be granted: Most countries with access legislation in place require both PIC from the providers of GR and proof of MAT to grant access to prospective users. However, in order to ease the access procedure, some countries have "decoupled" the access requirement and the benefit-sharing requirement. The South African Government amended its 2004 Biodiversity Act with a distinction between the "discovery phase" and the "commercialization phase" of utilization of GR. As such, it acknowledges the unpredictability of the scientific process and allows for benefit-sharing agreements to be made at a later stage in the research process, once results are clearer and potential value is easier to evaluate. The discovery phase only requires a notification to the relevant Minister, while prospective "commercial users" need to apply for a permit, linked to a BS agreement, before entering the commercialization phase 132.
 the types of utilization requiring an access permit/PIC: Although the distinction is not always easy to make, some countries have tried to differentiate between access for commercial and non-commercial purposes, in order to facilitate access for the latter. This has been done through different approaches. In Brazil, the Genetic Patrimony Management Council (CGEN), responsible for granting access to the country's GR, established a list of the types of research and scientific activities exempted from access requirements 133. In Australia, access for non-commercial purposes such as taxonomy is free, while the permit fee for commercial purposes is AUD 50 134. In Costa Rica, biodiversity-related research conducted in public universities has been left out of the ABS law's scope, except if it has commercial purposes 135. Some countries also established differential treatment depending on the type of commercial purpose 136.
 the actors required to have an access permit: access requirements can be different for domestic and foreign users. Three approaches are used in this regard. In India, access requirements only apply to foreign individuals, institutions or companies or any Indian organization which has "non-Indian participation in its share capital or management" 137. In South Africa, foreign nationals can only apply for access jointly with a juristic person registered in terms of South African law 138. Most countries, however, do not make this distinction. Moreover, when countries do not require access permits for domestic researchers and companies, they still expect these actors to comply with BS 139.
 the access procedure: a transparent and non-arbitrary procedure needs to be set up in order to provide legal certainty to users. The main steps of an access procedure include: (1) the …; (4) approval or denial of the application; (5) appeal 140. Some countries have chosen to enshrine the procedure in a legal act in order to enhance legal certainty. This is the case in Costa Rica for example, where a "General Access Procedure" was developed as a bylaw to the Biodiversity Law 141.
132 Sections 29, 38 and 39 of the National Environmental Laws Amendment Act, Government Gazette No. 14 of 2009, Republic of South Africa.
Table 1 - Summary of relevant measures for access
Benefit-sharing
An important debate in the literature concerns the benefit-sharing requirements laid down in the mutually agreed terms. This is also clearly mentioned as a core measure in the "Bonn Guidelines on Access to Genetic Resources and the Fair and Equitable Sharing of the Benefits arising out of their Utilization" (Articles 41 to 50). In the context of developing straightforward legislation and providing legal clarity, the following set of issues is addressed in the literature:
 the format of MAT: Most countries tend to have an ad-hoc approach to MAT, where the content of the MAT is negotiated on a case-by-case basis. However, some countries have also striven to go further by providing indicative sets of guidelines for the establishment of MAT 142: the Australian Government has thus published two model agreements on benefit-sharing for public and privately owned material 143. Finally, some countries opt for a more coercive approach. In Australia, the Environment Protection and Biodiversity Conservation Regulations 2000 (Regulation 8A.10) impose various substantive and procedural conditions on the MAT. Two model agreements are provided, one for publicly owned areas and the other for privately owned lands.
These models serve as guidelines: parties to a contract are free to set up their own format, based on bilateral negotiations 144. The third approach imposes a standard model to be used by all users. In South Africa, the Biodiversity Act lays down the mandatory content of the MAT, composed of a benefit-sharing agreement (BSA) and a material transfer agreement (MTA). A prescribed format is provided by the Competent National Authority for both the BSA and the MTA 145.
 the types of utilization of GR leading to BS: BS could be claimed for all types of access, notwithstanding the prospects of utilization (commercial and non-commercial) flowing from this access. However, in order to avoid putting too much of a burden on non-commercial research, some countries have limited their benefit-sharing requirements to those utilization activities with prospects for commercial use 146. The access application, however, generally includes a "return clause", obliging researchers to return to the negotiation table and settle benefit-sharing terms if and when they enter into a commercialization phase 147.
 the moment in the procedure at which BS agreements need to be settled: In 2007, Brazil amended its domestic ABS legislation to allow users and providers to set up a benefit-sharing contract at a later stage than the moment of access. The aim was to make the results of the planned research clearer and allow for an easier evaluation of the value generated by the GR 148.
 the type of benefits to be shared: Some countries set out the types of benefits to be shared. These include participation of domestic institutions, joint ownership of patents, royalties, technology transfer, etc. In India, the National Biodiversity Authority (NBA), responsible for benefit-sharing, determines possible benefit-sharing options 149.
 making BS fair and equitable: Whether shared benefits are fair and equitable is up for debate between the stakeholders agreeing on MAT. However, to avoid unequal bargaining power between users and providers, some countries have set up minimum benefit-sharing criteria, such as the creation of a minimum royalty rate 150. Other countries have created a trust fund which collects all money arising from benefit-sharing agreements 151. Although a promising solution to guarantee an equitable distribution of benefits between stakeholders, in some cases the fund only serves to channel benefits to the involved stakeholders in accordance with the provisions of the BS agreement 152. In India, only those types of benefits determined by the National Biodiversity Authority can be considered as being fair and equitable 153.
142 Burton G (2009)
Conservation activities and biodiversity research
Creating conditions to foster biodiversity-related research and making sure ABS serves a conservation purpose and encourages sustainable use of natural resources is a transversal objective. Most of the issues addressed in the literature in relation to this objective do not concern stand-alone measures, but a set of measures listed under various obligations of the NP that contribute to conservation activities and biodiversity research.
The following measures are considered in relation to other objectives to make sure they serve the national biodiversity interest:
 Ownership: If the ownership of GR is vested in the state and the state collects all benefits arising from their use, it is much easier to make sure resources are accessed in a sustainable way and that benefits are redirected towards conservation activities 154.
 Geographical scope: The management of ABS and of protected areas/natural parks presents interesting synergies which could be promoted. Protected areas play a crucial role in the conservation of biodiversity as they host unique habitats, species and genetic resources. These could be of interest to users. As such, linking protected areas and ABS could be an innovative funding source for biodiversity conservation 155.
 Benefit-sharing: Several initiatives have been taken to redirect benefit-sharing towards conservation and sustainable use. Both Costa Rica and Peru require a fixed percentage (10%) of the value of gross sales before tax (Peru) or of the research budget (Costa Rica) to be invested in conservation activities or capacity building initiatives for indigenous communities 156. In South Africa, ABS regulations stipulate that any surplus generated by the benefit-sharing fund should be directed towards conservation and capacity building initiatives 157. An additional measure could be to allow the administrations responsible for the management of nature and/or biodiversity to handle the sharing of benefits. This would establish a link between biodiversity conservation activities and the use of benefits. In Costa Rica, the National Commission for the Management of Biodiversity (CONAGEBIO), for example, is responsible for both the development and coordination of the national strategy concerning biodiversity conservation and the management of the utilization process 158.
 Access: Access conditions can be a major lever for the sustainable use of GR and for the encouragement of biodiversity-related research. Firstly, non-commercial biodiversity-related research could be exempted from any access requirements, or its access requirements could be simplified, as is explicitly foreseen in Article 8(a) of the NP. Secondly, the granting of access permits could, for example, be subjected to a mandatory environmental impact assessment for users, as is currently being done in several countries having ABS legislation in place 159. In Kenya, for example, the holders of an access permit are required to provide reports on the environmental impacts of the collection of genetic resources or their intangible components 160.
157 However, it is unlikely that the fund generates any significant surplus as it is no more than a conduit for money due to stakeholders; see Wynberg R, Taylor M (2009), op. cit.
158 Article 5 of the General Rules for the Access of Genetic and Biochemical Elements and Resources of Biodiversity, Executive Decree No. 31514, Republic of Costa Rica; see also http://www.conagebio.go.cr/quienes/Funciones.html
Table 3 - Summary of relevant measures for conservation activities and biodiversity research
Competent National Authority
The CNA is the official institution that grants access, issues written evidence that access requirements have been met and advises users on applicable procedures and requirements to get access to GR 161. In order to do so, the CNA has to be designated on behalf of the Party and given powers to fulfill its tasks as listed in the NP.
The following two issues are addressed in the literature:
 Designation: Two approaches are considered in the literature with regard to the establishment of the CNA. The first consists of the creation of a new institution, which is then designated as the CNA, and possibly also as the authority fulfilling other related tasks, as is the case with CONAGEBIO in the national ABS legislation of Costa Rica 162. The second approach, as used in South Africa, is the designation of an existing institution as CNA, in this case the Ministry of Environmental Affairs and Tourism 163. According to the wording of Article 13 of the NP, a Party may also designate more than one CNA. The African Model Legislation, for example, includes the possibility for other institutions to take over the role of the CNA 164.
 Empowering: In order to provide users with legal certainty, the role and mandate of the CNA should be clearly defined. Under current ABS measures, the CNA takes on three types of roles. First, it can function as a "one-stop shop", i.e. as a single point of contact for potential users applying for an access permit, granting the ABS permit, but also channeling the applications for other related permits to the competent authorities and the outcome of the procedure back to the user 165. Secondly, in addition to the roles under the first option, the CNA could exercise general responsibility for coordinating/facilitating the access procedure, including the coordination of the procedures of ABS and other ABS-related permits, such as the granting of related environmental (non-ABS) permits. Third, it could be the authority responsible not only for channeling, coordinating or facilitating the application and permit delivery for ABS-related permits, but also the competent authority for directly granting all ABS and ABS-related permits. This latter option might require an in-depth integration with other power levels and processes. Additionally, it might provide an opportunity to create synergies between the access-granting authority and the authority responsible for compliance monitoring 166. Another important issue to be solved is the clarification of the powers of the CNA in relation to the confidential treatment of data supplied during the access procedure. The Andean Community's model law, for example, describes the conditions under which such confidential data can be treated 167.
It is worth recalling that the first upcoming task of the existing NFP and/or of the newly designated CNA will be to comply with the obligation to notify the contact information of the NFP and the CNA and, in case of the designation of more than one CNA, the relevant information on the division of responsibilities between them.
Table 4 - Summary of relevant measures for the Competent National Authority
Relevant measures for the minimal implementation of core obligations
Establish CNA (Article 13)
 Option 1: Designate an existing institution as CNA
 Option 2: Establish and designate a new institution as CNA
 Option 3: Establish more than one CNA
Relevant measures for additional implementation
Additional legal rights and duties for the CNA
 Option 1: CNA responsible for the access permit, acting as a single stop shop for all ABS-related permits, which it channels through to the competent authorities for granting these permits;
 Option 2: CNA has full and sole responsibility for the access application (as in option 1), but also coordinates/facilitates the procedure and the granting of ABS-related permits;
 Option 3: CNA directly grants all ABS and ABS-related permits.
Compliance
Efficient compliance measures, in particular through monitoring the use of GR, are key to a successful implementation of the NP. The following issues are considered in the literature for the design of compliance measures and monitoring systems:
 Giving binding effect to the domestic legislation of the provider country: A critical step in the regulation of ABS is to lay out the basic obligations domestic users have to comply with when importing and/or utilizing genetic resources. This obligation comes down to giving binding effect to the provider country's PIC and MAT. A first approach would be to consider that the prime responsibility for regulating ABS lies with the provider country. In such a case, a Party would only require users under its jurisdiction to act in accordance with the foreign legislation. A second option would be to establish a self-standing obligation in the legislation of the user country. As such, the legislation does not refer to the actual ABS legislation of the provider country, but only to the specific obligation of requiring PIC and MAT for access to its GR 168, if so required by the provider country.
 Monitoring the utilization of GR: Very few monitoring systems for ABS are operational yet. The following issues need to be addressed to serve as a basis for implementing a monitoring system:
o Checkpoints: at least one institution has to be designated by each Party to function as a checkpoint to monitor the use of PIC and MAT during the valorization process of GR. This can be an existing institution, such as the IPR office amongst others, or a newly created institution. The wording of Article 17.1(a) suggests that more than one checkpoint can be designated/created.
o Monitoring system: The heterogeneity of utilization activities makes it very difficult to establish a "one-size-fits-all" monitoring system. Three approaches are generally considered for the implementation of such a system. The least stringent is the establishment of a "voluntary monitoring system" where users would be required to report to the checkpoint(s) on a voluntary basis. It requires a strong commitment to and understanding of ABS by private users as well as close collaboration between the monitoring authority and these users. In Australia, a "Biodiscovery Industry Panel" was established to foster this type of collaboration 169. The second option is the so-called "due diligence" monitoring system 170. This is a self-monitoring system requiring that users make sure they are using GR that has been accessed in compliance with the national and/or foreign ABS legislation.
This type of system can be particularly relevant when GR are being transferred to third parties during the valorization process. A cost-effective way to support such an approach has been set up in Australia through the creation of the Genetic Resources Information Data Base (GRID), where all existing ABS agreements are freely viewable online 171. It creates a transparent system, allowing any prospective investor to verify at no cost the legal status of genetic resources acquired on Australian territory. A third approach would rely on monitoring by previously established checkpoints at specific stages of the valorization chain. Particularly relevant here is the choice of the moment at which the right to use the GR should be controlled. Possible stages are the granting of research funds, the granting of patents, the market access authorization and the moment of import into the country 172. This choice would also influence the type and number of checkpoints to be established.
 Fostering compliance among users: The strength of users' motivation to comply is likely to be a determining factor of the regime's effectiveness. Therefore, the state might want to create incentives and motivations for its users to comply. This could be done by offering complying users financial benefits (e.g., tax deductions, rebates, and other rights), opportunities (e.g., special priority for other filings, permits or opportunities, access to special materials or programs that cannot be accessed by others) and positive publicity 173. The latter measures are also clearly mentioned in Article 51 of the "Bonn Guidelines on Access to Genetic Resources and the Fair and Equitable Sharing of the Benefits arising out of their Utilization".
Table 5 - Summary of relevant measures for compliance
Relevant measures for the minimal implementation of core obligations
Give binding effect to domestic legislation of provider country (Articles 15, 16, 18)
 Option 1: Leave responsibility to the provider country
 Option 2: Create a self-standing obligation
Designate checkpoints (Article 17.1(a))
 Option 1: Designate an existing institution as checkpoint
 Option 2: Establish and designate a new institution as checkpoint
 Option 3: Establish more than one checkpoint
Relevant measures for additional implementation
171 https://apps5a.ris.environment.gov.au/grid/public/perrep.jsp
172 Kamau EC, Fedder B and Winter G (2010), op. cit.
173 Young T (2006) Covering ABS: Addressing the Need for Sectoral, Geographical, Legal and International Integration in the ABS Regime. IUCN Environmental Policy and Law Paper No. 67/5
RECOMMENDATIONS FOR LEGAL, INSTITUTIONAL AND ADMINISTRATIVE MEASURES IN BELGIUM
Based on the preliminary assessment of existing ABS measures (chapter 6) and the legal gap analysis (chapter 5), this section lists a set of recommendations to support the identification of policy options for the implementation of the NP in Belgium. Each recommendation is presented as an "action card", including different options for implementation. For each implementation option, a number of advantages and disadvantages are identified, which form the basis of a first selection of the recommended measures for the impact assessment and which can serve as guidelines for a more in-depth discussion. Most of these implementation options are not exclusive: they should be combined in order to achieve an efficient implementation of the NP.
Each action card also provides a short description of the rationale behind the action to be taken and briefly states some of the existing Belgian measures which are relevant for the action. The action cards have been divided into two groups.
 The first set of actions comprises measures related to the core obligations for the implementation of the NP in Belgium, as specified above (cf. chapter 5.2). They form the basis of compliance with the NP and represent a case of 'minimal implementation' for Belgium (addressing the minimal implementation of the core obligations).
 The second set comprises additional measures which are important elements during implementation of the obligations, but which are less urgent (going beyond the minimal implementation of the core obligations or going beyond the core obligations).
For each of the action cards a preliminary recommendation is provided, based on the arguments advanced and organized according to the following categories:
 recommended measure
 preferred measure, potentially interesting and meriting further analysis
 more than one of the suggested measures potentially interesting and meriting further analysis
 not recommended for a particular reason

Recommendations for actions to be taken in case of minimal implementation of the core obligations

As indicated above, this section analyzes and evaluates a set of legal and non-legal instruments for the minimal implementation of the core obligations of the Protocol codified under Articles 5 to 9, Article 13 and Articles 15 to 18. This list is by no means exhaustive, but contains a set of recommendations resulting from the analysis in previous chapters, which supported the selection of the options whose impact was analyzed in this study.

Priority of the measures
The below-mentioned action cards have been assigned a priority score according to the following scale: 

Action card - Determine format of MAT
Description: Under the NP, Belgian users are required to share benefits upon MAT. These MAT should hence be given binding effect under Belgian law. The NP, however, does not impose a format for MAT, which can be left to the discretion of stakeholders or flow from (mandatory) measures.
Related Article of the NP: 5, 18
Nature of the measure: Legal
Priority for Belgium: 
Relevant existing measures in Belgium
 BCCM's MOSAICC
Option 1 - Leave full discretion on how to execute the BS obligation to users and providers of genetic material
Possible advantages:
 high flexibility for users and providers to agree on specific benefit-sharing
 might be less of a burden for large company users, as they will choose to conclude MAT that generate the least cost
 might represent a low-cost measure for public authorities, as no additional resources are needed concerning MAT.
Possible disadvantages:
 does not allow the state to control the benefit-sharing procedure
 does not allow the state to make sure benefits are shared in a fair and equitable way
 does not allow the state to make sure that benefits contribute to the conservation of biological diversity and the sustainable use of its components
Option 2 - Develop mandatory MAT terms and conditions and/or default MAT provisions
Possible advantages:
 might provide stronger legal clarity to all stakeholders involved
 allows the state to control the content of the MAT and to make sure benefits are shared according to principles of fairness and equity
 might also smooth the negotiation process between commercial users and providers
Possible disadvantages:
 might offer less flexibility to stakeholders
EVALUATION
The 2 options are potentially interesting and deserve further analysis.

Action card - Clarify access conditions
Description: Holding sovereign rights over its genetic resources, Belgium can choose whether or not to require bioprospectors to obtain Prior Informed Consent from the competent authority for access to genetic resources under its jurisdiction
Related Article of the NP: 6, 7
Nature of the measure: Legal
Priority for Belgium: 
Examples of relevant existing measures in Belgium
 There are no existing legal measures on PIC to access GR in Belgium
Option 1 - Require PIC from the Belgian State as a Party to the Protocol
Possible advantages:
 Contributes to the implementation of the Nagoya Protocol and its objective of promoting the conservation and sustainable use of genetic materials, as well as the fair sharing of the benefits arising from their use
 Allows keeping track of accessed Belgian GR
 Allows for access statistics to be kept
 Provides legal certainty for users, by clarifying the legal situation on access in Belgium and through the possibility of providing them with a PIC/international certificate of compliance
Possible disadvantages:
 Need to develop access rules and procedures
 Depending on the implemented procedure, could create some additional administrative burden for users
Option 2 - Do not require PIC from the Belgian State as a Party to the Protocol
Possible advantages:
 No additional legal measures needed for establishing an access procedure
 Lower administrative burden for users at the time of access (as they would not need to go through an access procedure)
Possible disadvantages:
 Does not allow keeping track of accessed Belgian GR
 No information on which to base a potential policy review
 Does not allow for access statistics to be kept
 Legal measures would still be needed in order to clarify the current legal status of access to GR in Belgium
 Would not provide legal certainty for users of Belgian GR (e.g. use in third countries, subsequent use, etc.)
EVALUATION
Option 1: recommended measure; contributes to the implementation of the Nagoya Protocol by assuring more legal certainty

Action card - Ensure ABS serves conservation and sustainable use of biodiversity
Description: Alongside the aim to share benefits in a fair and equitable way, the implementation of the Nagoya Protocol should serve the broader goal of the CBD: conservation of biodiversity and sustainable use of its components
Related Article of the NP: 9
Nature of the measure: Administrative and/or legal
Priority for Belgium: 
Examples of relevant existing measures in Belgium
 Article 16, Flemish Natuurdecreet: If a permit for access is delivered, the competent authority shall ensure that no avoidable damage to nature may arise, by imposing reasonable conditions
 Article 20, Flemish Soortenbesluit: Access to protected species can only be allowed if it does not affect the conservation of these species
Option 1 - Link access permit to mandatory conditions on the use of benefits
Possible advantages:
 Could be a way to ensure that at least a minimal part of the benefits flows directly to conservation/sustainable use
 Institutions could be reluctant to implement new tasks, as these might not be considered as 'core tasks'
 Could be ineffective if the institution(s) do(es) not have sufficient know-how, related experience and resources
 One centralized CNA is not in line with the current division of competences for environmental issues, which are mainly situated at the level of the Regions (cf. chapter 2)
Option 2 - Establish and designate a new institution as CNA
Possible advantages:
 Could establish very efficient procedures, as it would be the 'core task' of the new institution
 Possibility to create synergies between CNA and NFP, providing more process certainty for users
Possible disadvantages:
 Could have a high financial and transaction cost, as a new structure needs to be established
 Some necessary information might be confidential and/or difficult to access
 One centralized CNA is not in line with the current division of competences for environmental issues, which are mainly situated at the level of the Regions (cf. chapter 2)
Option 3 - Designate more than one CNA
Possible advantages:
 Would better fit the Belgian institutional framework, considering the current division of competences
Possible disadvantages:
 Could create uncertainty for potential users as to the competent CNA in certain specific cases of mixed competences (however, in most cases access will clearly be granted in one of the regions). Coordination mechanisms (such as a web-based centralized input system for access requests) might then be required
EVALUATION
If PIC is required, option 3 seems the most straightforward, as access is mostly clearly granted in one of the 3 regions and as the access requirements for indigenous species in the Regions are part of the regional competences. … would only be an addition to existing tasks (but depending on workload, institutional set-up, possible synergies, …)

Action card - Give binding effect to domestic legislation of provider country
Possible disadvantages:
 Institutions could be reluctant to implement checkpoint tasks, as these are not considered as "core tasks"
 Could be ineffective, if these institutions do not have sufficient relevant knowledge, know-how, related experience, etc.
 Could represent a high administrative burden, if these institutions have to create a whole new "section" for GR monitoring, with few synergies
Option 2 - Establish and designate a new institution as general checkpoint
Possible advantages:
 Could establish very efficient monitoring, as it would be the "core task" of the new institution
 Possibility to create synergies with CNA and NFP, providing more process certainty for users
Possible disadvantages:
 Could have a high financial and transaction cost, as a new structure needs to be established
 Some necessary related information might be confidential and difficult to access for such a "monitoring" institution (compared to existing institutions that have already acquired data, the confidence of stakeholders and users, etc.)
Option 3 - Establish more than one checkpoint
Possible advantages:
 Would better fit a monitoring system to support compliance if transparency is created and monitoring is done at specific stages of the valorization chain
 Could be better adapted to the institutional reality in Belgium
 Could be more cost-effective than one centralized institution
 Allows exploiting existing institutional capacity to address the monitoring requirements
 If based on existing institutions, these could benefit from more confidence of stakeholders and users
Possible disadvantages:
 Might need more time to install an effective set of checkpoints, and might incur a higher financial, legislative and administrative cost compared to option 1, but lower than option 2
 Might increase the complexity of the monitoring process
 Might need additional coordination mechanisms amongst the checkpoints
EVALUATION
Recommended measure: Option 2: the necessary combination of technical, scientific and administrative competences will probably require a new structure to be effective. Could be combined with option 3, if needed.

Recommendations for actions to be taken in case of additional implementation

This section presents a list of recommendations for measures which go beyond the minimal implementation of the core obligations and/or beyond the core obligations, as explained above.

Priority of the measures
The below-mentioned action cards have been assigned a priority score according to the following scale:    

Action card - Set additional specifications for benefit-sharing upon MAT
Description: Alongside the basic mandatory conditions for access (PIC and MAT), Belgium can set additional specifications on BS upon MAT: as a mandatory access condition, in general terms (e.g. established in a legislative instrument/in standard PIC conditions/…) for all uses or for types of uses, or as specific terms (e.g. in the PIC) for the particular use(s) for which access is requested; as default access conditions (in case not provided for otherwise in the terms of the PIC); or as a mandatory condition on the use, in general terms (e.g. through a legislative instrument) for all uses or for certain uses, or in particular terms for the use envisaged (established in particular terms for the use, probably through an approval…)
Related Article of the NP: 6, 8
Nature of the measure: Legal
Priority for Belgium: 
 Might be difficult to establish a finite list
EVALUATION
Option 1 is not recommended because of a lack of flexibility; Option 3 is not recommended as it could be illegal. Options 2 and 4 are equally potentially interesting and are recommended for further analysis.
Option 5 is also potentially interesting and recommended for further analysis, and could be envisioned in combination with options 2 and 4.
Examples of relevant existing measures in Belgium

Action card - Establish clear and transparent access procedure
Description: The application and approval procedure should be made clear, including identifying the required action to be taken and the consecutive steps of the application, setting time limits for the decision-making process and providing a clear record of the final decision. Difficulties can arise when the application process is not clearly defined and/or stated in law, or when the law leaves too much discretion to the competent access authority.
Possible disadvantages:
 requires a strong commitment to and understanding of ABS by private users
 requires close collaboration between the monitoring authority and these users
 Could be constraining for non-commercial research
 Depending on how it is implemented, could still impose a considerable financial and administrative burden on users and administrations
Option 3 - Monitoring by checkpoints at specific stages of the valorization chain
Possible advantages:
 Could easily be combined with the establishment of certain actions (e.g. patenting, commercialization) as triggers for BS, instead of access
 Allows more in-depth monitoring than voluntary-based methods
Possible disadvantages:
 Depending on how it is implemented, could be a high(er)-cost option for users and administrations
 If existing institutions: institutions could be reluctant to implement new tasks, as these are not considered as 'core tasks'
 If existing institutions: could be ineffective if the institution does not have sufficient knowledge and know-how
EVALUATION
These options should be assessed in combination with other action cards. If there is only 1 checkpoint, options 1 and 2, in combination with user incentives, are potentially interesting and deserve further analysis. If the option of more than one checkpoint is considered, option 3 is potentially interesting (given the potential cost-effectiveness) and deserves further analysis.

Action card - Create incentives for users to comply
EVALUATION
Option 2 is recommended (it can build upon existing practices and has proven its effectiveness). Option 3 might impose a higher burden on authorities. Option 1 gives more responsibility to the sectors, but might lack effectiveness.

DEFINING THE POLICY OPTIONS AND PRELIMINARY ANALYSIS OF THEIR EXPECTED IMPACTS

As in previous chapters, this chapter mainly focuses on the core measures specified in the terms of reference of this study as requiring special attention (see chapter 5.2).
Building upon chapters 6 and 7, this chapter presents the policy options discussed at the first stakeholder meeting and, based on the discussion with stakeholders, selected by the Steering Committee of this study for the implementation of six core measures that are needed for the minimal implementation of the Nagoya Protocol in Belgium.

It is important to remember that at least one of the legal provisions (designation of the National Competent Authorities and the National Focal Point, Article 13.4) needs to be implemented no later than the entry into force of the Nagoya Protocol for each Party (that is, the ninetieth day after the date of deposit of the 50th instrument of ratification if the Party ratified before the deposit of the 50th instrument, or the ninetieth day after the date of deposit of its instrument of ratification if the Party ratifies after the deposit of the 50th instrument). Therefore, Article 13 and the core obligations directly related to that Article (such as Article 6, which has a direct impact on the tasks of the Competent National Authority) deserve special, urgent attention.

Further, in line with the EU guidelines, the general principle of the impact assessment is to assess the impact of policy options as net changes compared to the "no policy change" baseline. For this purpose, a general description of the "no policy change" baseline is given and, for each measure, the particular expressions of this baseline are specified. A distinction is also made between a general default "no policy change over all the options" and a specific "0" option for each section, which considers "no policy change" for a specific obligation, in a situation where some other measures could nevertheless have been taken. The description of the options and the preliminary analysis of their expected impacts are based on the discussions held and the comments received during the first stakeholder meeting on 29th May 2012174.

Description and discussion of the general "0" option

The general "0" option represents a situation where "no policy change" takes place for any of the items considered below, that is, if none of the options discussed below are implemented. This would lead to a non-ratification of the Nagoya Protocol. However, in this situation, Belgium would still have to comply with the international obligations pertaining to GR and TK, mainly Articles 8(j) and 15 of the CBD (for GR and TK associated with GR) and Articles 7(1), 11 and 13 of ILO Convention No. 107 (for TK), both of which Belgium has ratified. In particular, Belgium would still have to take measures to clarify access to GR for their utilization, which may or may not be covered under existing legislation, and to take potential compliance measures with the aim of sharing the benefits from the utilization of GR and TK in a fair and equitable way with the countries of origin of these resources. These measures related to existing international obligations would have to be taken under any of the specific "0" options discussed below as well. Moreover, after the entry into force of the Nagoya Protocol (in this case, without ratification by Belgium), there would be a need to clarify (reinterpret/amend) the current Belgian legal framework in the light of the adoption of the Nagoya Protocol.
For example, there would be a need to clarify whether existing requirements on access also apply to access within the meaning of the Nagoya Protocol, and to set up a framework to enable dealing with transactions of GR from/to countries that have ratified the Nagoya Protocol. An obvious disadvantage of the general "0" option is the failure to create legal certainty and transparency, both prominent aspects of the implementation of the Nagoya Protocol, thereby potentially increasing transaction and litigation costs for users and providers. Moreover, as stated above, even in case of non-ratification, Belgium needs to take a set of legal measures on access and benefit-sharing. These would nevertheless be different from the set of measures required for implementing the Nagoya Protocol, creating a confusing situation for users and providers (i.e. the existence of many different legal regimes for the same issue). Furthermore, non-ratification would lead to complex relations of Belgium as a non-Party with Parties to the Nagoya Protocol. It would also probably lead to a loss of Belgian credibility and trust in international fora, with a risk of straining multilateral relations and a loss of exchanges (research, development, collections, industry, …) and hence of opportunities for Belgian individuals and institutions (e.g. in relation to Parties to the Protocol).

Defining the policy options for the core measures and their expected impacts

Access to GR and Benefit-sharing

As a first preliminary finding of the study on the implementation in Belgium of the Nagoya Protocol, it is recommended to establish Prior Informed Consent (PIC) and Benefit-sharing (BS) as general legal principles in Belgium in order to implement Article 5 (on benefit-sharing) and Article 6 (on access) of the Nagoya Protocol. As a general principle, the operationalization of PIC and BS should be phased, flexible and based on the subsidiarity principle. This operationalization of PIC and BS can then be divided into two interrelated implementation components: the operationalization of PIC (first component) and the specification of the Mutually Agreed Terms (second component).

Description of the options

 The specific "0" option: benefit-sharing component
The specific "0" option on BS would consider taking no measures on benefit-sharing in Belgium (such as establishing benefit-sharing as a general legal principle). This would lead to a non-ratification of the Nagoya Protocol, and would still require taking the measures specified in the general "0" option. Moreover, it is unclear whether this would not amount to non-compliance by Belgium with Article 15 of the Convention on Biological Diversity.
 The specific "0" option: access component
This specific "0" option on access would consider no PIC requirement, with or without benefit-sharing as a horizontal principle. This would not necessarily lead to a non-ratification of the Nagoya Protocol (if benefit-sharing is established as a horizontal principle). If it leads to a non-ratification, this "0" option would still require taking the measures specified in the general "0" option above.
However, in both cases (with or without benefit-sharing as a horizontal principle), this "0" option would create less legal certainty for users of Belgian GR, would not allow the delivery of an international certificate of compliance to such users (which serves as evidence that the GR it covers have been accessed in accordance with PIC and that MAT have been established) and could lead to a lack of data on the use of Belgian GR for evaluating policy and promoting research and development.
 General ABS option 1: No Prior Informed Consent required, but benefit-sharing as a horizontal principle
Under option 1, no PIC would be required, but BS would be established as a general legal principle in Belgium in order to implement Article 5 (on benefit-sharing) and Article 6 (on access) of the Nagoya Protocol (which specifies that a Party may determine not to require PIC). However, even if no PIC is established, the current legal framework on access will still need to be clarified, in a way that allows compliance with the obligations of the Nagoya Protocol and the options for implementing the core measures discussed in this report.
 General ABS option 2: Prior Informed Consent and benefit-sharing as horizontal principles
Under option 2, PIC and BS would be established as general legal principles in Belgium in order to implement Article 5 (on benefit-sharing) and Article 6 (on access) of the Nagoya Protocol. As a general principle, the subsequent operationalization of this general obligation through PIC and MAT should be phased, flexible and based on subsidiarity.

Expected impacts

 General ABS option 1: No Prior Informed Consent (PIC) required, but benefit-sharing as a horizontal principle
This option might seem easily implementable, as it would not require any additional legal measures to be taken and could imply a relatively low administrative burden, since the requirements for operationalizing PIC would be avoided (this possible advantage will only be important if the options chosen below imply a heavy administrative burden). However, this option would still require clarifying the current legal framework on access, in a way that would not only take into account the adoption of the Nagoya Protocol (see general baseline), but would also allow compliance with the obligations of the Nagoya Protocol and the options for implementing the core measures discussed further in this section. Furthermore, it is unclear how this option could provide legal clarity for users after access, in particular since it does not allow the State to offer users a proof of legal access such as an international certificate. Nor would it allow post-access tracking and/or monitoring of the utilization of genetic resources and the collection of data, which could result in missing out on important input of valuable data for research, innovation and conservation policy. In other words, under this option, the Belgian State would not give itself the means to obtain information on access to its GR or to monitor/control the use of its own genetic resources. It could lose out on an important incentive to promote the conservation and sustainable use of its own GR.
 General ABS option 2: Prior Informed Consent (PIC) and benefit-sharing as horizontal principles
The Nagoya Protocol has contributed to turning the debate about PIC and BS around.
Whereas previously it was considered more interesting for users to access genetic resources in states having the least regulation in place, users might now prefer states with public and transparent access and benefit-sharing legislation, in order to optimize legal certainty for the subsequent utilization of these resources. A major advantage of option 2 is that it paves the way for the delivery of an internationally recognized certificate of compliance to users by the Belgian State, hence increasing transparency and legal certainty. It could further allow more efficient and effective monitoring and tracking of the use of Belgian GR. Keeping track of access to GR will also give a better view of the available genetic resources, and facilitate the collection of data and statistics which are useful for biodiversity policy in general and for the further implementation of the Nagoya Protocol in particular. To be functional, this option however needs additional legal access rules and a clearly defined access procedure. Depending on the further operationalization, it could create an administrative burden, both for users and for the public authorities involved in administering PIC.

Further operationalizing general option 2 on PIC

If both Prior Informed Consent and benefit-sharing are established as horizontal principles (general ABS option 2 above), two additional interrelated measures should be implemented: the operationalization of PIC (first component) and the specification(s) of the possible requirement of, and conditions for, the Mutually Agreed Terms (second component). The first implementation component could consider the operationalization of PIC through a notification/registration/approval requirement175 to the Competent National Authority or authorities. In the second component, implementation measures related to the content of the mutually agreed terms of the access agreements, including as specified in the notification/registration/approval procedure, should be considered. In line with Articles 4 and 8 of the Nagoya Protocol, these measures should have due regard for the particular features of certain sectors, species or areas and, in line with Articles 1 and 9, they should contribute to the objectives of conservation and sustainable use of biodiversity.

Description of the options

The options to establish and operationalize PIC are built up in two parts:

Starting point: limit administrative burdens by building on existing legislation
Two reasons make a preliminary analysis of the existing legislation necessary for the study of the different PIC "sub-options". First, situations should be avoided where different permits from different administrations would have to be obtained for accessing the same material: the superposition of different requirements and procedures for the same material would furthermore complicate the administrative follow-up and increase the administrative burden, in particular if the same data had to be resubmitted to different, unrelated permit databases. Second, protected areas (PA) and protected species (PS) contain GR which are important for the conservation and sustainable use of biodiversity and may be of actual/potential (high) value. The first step in the implementation of the PIC and BS requirements could (a) consider refining existing PA and PS relevant legislation in order to include more specific regulation of the access to GR for utilization, as defined under the Nagoya Protocol, and
(b) beyond refining PA and PS relevant legislation, potentially include other relevant categories of GR with e.g. actual or potential value, by also considering other existing legislation relevant for the access to GR to build upon, with a view to further operationalizing PIC.

175 "Notification" and "registration" refer to an easy and less burdensome permit requirement: the permit is automatically provided/generated if the applicant provides certain data and complies with certain general conditions. "Approval" refers to a permit requirement that demands an individual assessment of each application and that, apart from general permit conditions, might also imply the imposition of permit-specific conditions.

Default option to complement the starting point (cf. building on existing legislation)
Additionally, for all the GR which are not covered through PA or PS legislation, a default rule could be adopted. This could be done (c) by only allowing such access from/through Belgian collections, or (d) by allowing access from anywhere, provided the user has registered with/notified the Competent National Authority (CNA).

When combining the above, the assessment of the impacts of the following three options seems to be the most relevant:
 Option 1 - The bottleneck model: refining existing PS/PA relevant legislation & measures + only access to GR through ex-situ collections as default rule
 Option 2 - The fishing net model: refining existing PS/PA relevant legislation & measures + access from everywhere, but with registration as default rule
 Option 3 - The modified fishing net model: refining existing PA/PS relevant legislation & measures + other specific GR relevant legislation/measures + access from everywhere, but with registration as default rule. This option combines an enlarged approach to refining existing legislation relevant for GR with a default rule that GR not covered by such modified legislation can be accessed from anywhere, provided the user has registered with/notified the Competent National Authority.

Expected impacts

 Option 1: The bottleneck model: only existing PS/PA relevant legislation & measures + only access to GR through ex-situ collections as default rule
Possible Advantages: This option would allow the collections to keep a copy of each accessed GR in Belgium whenever this is feasible at a minor cost. The existing scientific and administrative infrastructure of the culture collections could foster ex-post follow-up. Existing databases and standard information could be used. The newly encoded information could contribute to biodiversity research, such as taxonomic research. Finally, and as a general remark, the cost of access through the collections should be kept as low as possible; to this end, the high number of transactions handled by the collections should be taken into account. In case part of the benefits arising from the utilization of GR were directed to the conservation activities of the collections, benefit-sharing could generate additional financial support for the collections. This option would not necessarily lead to heavy transaction costs for the collections, as most collections already have standard Material Transfer Agreements (MTA) in place which could easily be adapted, on the condition that these are in line with CBD provisions, including the Bonn Guidelines.
Possible Disadvantages: A lot of the relevant GR might be situated outside the collections, a configuration which would thus require additional resources for the handling of access requests. For these GR, two situations can be distinguished. (a) The collection decides to keep a copy of the GR (for example, when it is feasible for the collection to keep the GR at a minor cost and whenever it is scientifically relevant). In that case, no additional resources are required for handling the access request, as it is part of the standard procedures of the collections (including encoding in databases, handling of MTAs, etc.).
However, additional financial resources might be needed to bear the cost of handling the access request and storing information or samples that would not otherwise have passed through the collections (e.g. depending on whether the access concerns physical samples or only information). (b) The collection decides not to keep a copy of the GR (e.g. because it is expensive/beyond its capacity/scientifically not relevant/technically not possible). In that case, if information has to be kept on the access to the genetic resource, it would require the extension of the database infrastructure beyond the ex-situ holdings, to include documentation on access provided to in-situ resources through the collections. However, this might not represent an important additional cost, as it is possible to build upon the existing infrastructure. This second sub-option could also require the handling of MTA for the in-situ resources accessed through the collections but not kept in the collection. Furthermore, the relation between the culture collections and the CNA, and the specific access-related powers of the collections, will need to be clarified, as the CNA is the final authority able to grant access for utilization in the context of the NP. This could lead to an additional step in the access procedure and could create an additional administrative burden for users wishing to access GR. However, this barrier is not necessarily higher than under the other options, as it would be based on a division of labor in the PIC procedure over the different entities.
 Option 2: The fishing net model: only existing PS/PA relevant legislation & measures + access from everywhere, but with registration as default rule
Possible Advantages: For the default rule, this option could strongly encourage utilization, as the administrative burden for users would be low. Financial and transaction costs for the State could also be relatively low, as the notification obligation could easily be set up through a standardized system. Moreover, the notification/registration obligation would (1) provide data on the type of users of the genetic resource and (2) facilitate possible policy review.
Possible Disadvantages: Under this option, the default rule could prove to be ineffective or even create a loophole in the basic rule, if cases where species are found only within protected areas prove to be rare, and/or if most species within protected areas can also be found outside of these areas. Furthermore, this option would not make it as easy to obtain a copy of accessed GR for Belgian collections (whenever feasible), and it might be harder to coordinate with the existing databases of the ex-situ collections, which already contain information on previous accesses and utilization of Belgian GR. Moreover, the default rule under this option might need to be limited to non-commercial use only.
 Option 3: Existing PA/PS relevant legislation & measures + other specific GR relevant legislation/measures + access from everywhere, but with registration as default rule
Possible Advantages: It can be expected that this option would mainly have the same positive impacts as option 2 (PA/PS legislation + access from everywhere as default access to GR). It will however have a bigger impact, as it would apply requirements similar to those for PA and PS to a broader set of GR and integrate the new regulation with a broader set of related legislation. This option thus allows coping with cases where access to genetic resources is not limited to PA and PS.
Microorganisms with potential value for research and development, for example, are generally found where natural selection has taken a different path, i.e. in extreme environments that do not necessarily coincide with the PA/PS category. This option thus allows the extension of the further operationalized PIC requirements to the broadest range of potentially interesting GR and reduces the amount of GR falling under the default category.
Possible Disadvantages: Similar disadvantages as for option 2. Furthermore, the amount of existing legislation relevant to GR beyond the PS/PA related legislation, but also the amount of areas/material in Belgium beyond PA/PS that are of particular importance for biodiversity, will determine whether or not this option has any added value beyond option 2.

Specification of the Mutually Agreed Terms

If PIC were required in Belgium, it should also be clarified whether MAT are required and under what conditions (e.g. as a condition to obtain PIC). Given that a phased approach would allow fine-tuning the measures as more feedback is gathered, the initial MAT requirements could be further developed over time after a rather limited first implementation phase. This section therefore further describes the "sub-options" considered in the case where both Prior Informed Consent and benefit-sharing are established as horizontal principles (general ABS option 2 above).

Description of the options

In order to develop an idea of the possible impact, 3 types of MAT are proposed for further exploration:
 Option 1: No specific BS requirements imposed for the MAT
A first type where, in the exercise of its sovereign rights over its GR, the Belgian State decides not to impose any specific benefit-sharing requirements on users in MAT (apart from the general legal obligation to share benefits and the structural benefits arising from the working of the future Belgian ABS system).
 Option 2: Specific BS requirements imposed, through standard agreements, depending on the purpose of access
For the second type, specific benefit-sharing requirements are imposed through standard formats for the MAT (e.g. a limited number of standard MAT agreements), depending on the purpose of the access. This could imply that no specific BS requirements are imposed in the MAT if no commercial utilization of the GR is planned, while more specific BS requirements are imposed if commercial purposes are envisaged (e.g. the collection of revenues from that use or the sale of the GR itself). The related MAT for non-commercial utilization would include a renegotiation requirement in case of a change of intent towards commercial use.
 Option 3: Specific BS requirements imposed, but their implementation negotiated on a case-by-case basis, depending on the purpose of access
Under option 3, specific benefit-sharing requirements are developed by the Belgian authorities for each access request. These requirements can be of a different nature (e.g. a general regulatory obligation, a specific condition as a PIC conditionality, etc.) and will be differentiated according to the purpose of access.

Expected impacts

 Option 1: No specific BS requirements imposed for the MAT
Possible Advantages: This option provides high flexibility for users and providers to agree on specific benefit-sharing, depending on the specificities of the exchange of GR.
It thus probably represents less of a burden for large company users, as they will choose to conclude MAT that generate the least costs, but it might be burdensome for non-commercial and small company users to negotiate individual MAT (e.g. if no standard MAT are available/applied in their sector). For the authorities, this option also represents a low-cost measure, as no additional resources are needed concerning MAT.
Possible Disadvantages: This option does not allow the Belgian State to control the benefit-sharing procedure and to make sure benefits are shared in a fair and equitable way, or that benefits contribute to the conservation of biological diversity and the sustainable use of its components. According to paragraph 45 of the Bonn Guidelines, fair and equitable benefit-sharing varies "in light of the circumstances", and a third, independent stakeholder (i.e. the state) might be needed to identify these circumstances.
 Option 2: Standardized formats for BS requirements, depending on the purpose of access
Possible Advantages: This option provides strong legal clarity to all stakeholders involved. It also allows the Belgian State to control the content of the MAT and to make sure benefits are shared according to principles of fairness and equity. It might also smooth the negotiation process between commercial users and providers, as it could offer standard formats containing guidelines/default rules/requirements to follow, while providing security to providers that changes of intent will require renegotiation.
Possible Disadvantages: This option offers less flexibility to commercial users that already have their own systems or prefer a more flexible approach.
 Option 3: Specific BS requirements, depending on the purpose of access
Possible Advantages: This option provides strong legal clarity to all stakeholders involved. It also allows the Belgian State to control the content of the MAT and to make sure benefits are shared according to principles of fairness and equity. It furthermore provides much flexibility to fine-tune the BS requirements to cover the concerns of both users and providers, including the contribution to conservation and sustainable use.
Possible Disadvantages: Non-commercial users and small commercial users might suffer from this option, as they might not necessarily possess the resources needed to negotiate and fulfill the specific BS requirements.

Establishing one or more Competent National Authorities

Description of the options

The designation of one or more Competent National Authorities needs to be implemented no later than the entry into force of the Protocol for each Party. Therefore, this measure deserves special attention. Based on the options for the operationalization of PIC, the choice of the Competent National Authority would in the first place be based on the relevant competent authorities under the existing legislation and measures concerning protected areas and/or protected species. This means four Competent National Authorities would be needed: one for each of the three Regions and a federal one, flowing from the current division of competences in Belgium. The difference between the proposed options lies in the way users might have to request access to GR.
 Specific "0" option for the CNA
The specific "0" option on the Competent National Authority would consider not creating a Competent National Authority under the Nagoya Protocol.
This would lead to a non-ratification of the Nagoya Protocol, and would still require taking the measures specified in the general "0" option.
 Option 1: Decentralized input
Each authority would have a separate entry-point, and users of genetic resources would need to request access through separate entry-points depending, for example, on the kind of GR or where they are found.
 Option 2: Single entry-point
Under this option, the four responsible authorities could agree on a centralized input system. Users would request access through a single point of contact, independently of where/which types of GR are accessed.

Expected impacts

 Option 1: Decentralized input
Possible Advantages: Flowing from the current division of competences in Belgium, this option could provide more liberty to the federated entities to independently organize their biodiversity and/or genetic resources access policy.
Possible Disadvantages: Having four different Competent National Authorities might strongly complicate the access procedure, not least for foreign users. Additional efforts would be needed in order to clarify the access procedure, e.g. providing users with a clear overview of which of the four Competent National Authorities is responsible for handling access requests, depending on where/which GR are accessed. This might result in a higher administrative burden for both users and administrations. Moreover, a decentralized input system for the data generated might lead to additional data coordination and exchange problems.
 Option 2: Single entry-point
Possible Advantages: A uniform or harmonized process could increase legal and procedural clarity for users. This might result in a lower administrative burden related to the search for information on access procedures and requirements under the Nagoya Protocol in Belgium. Furthermore, some economies of scale could be possible here for the public authorities concerned. Depending on the scope of these economies of scale, it might be decided to opt for more or less coordination through the single entry-point.
Possible Disadvantages: This option potentially has a higher initial administrative burden and transaction cost. A common system needs to be established and close coordination between the different authorities needs to be ensured.

Setting up compliance measures

Description of the options

The options for compliance in order to fulfill the obligations of Articles 15, 16 and 18 of the Nagoya Protocol depend both on the sufficiency of the existing relevant provisions contained, inter alia, in the criminal code and the code of civil procedure, and on the implementation of PIC in Belgium. A general criminal provision covering situations where PIC and MAT are required by the provider country is considered. In situations where a civil judge has to consider the contents of MAT, an extension of the field of application of Article 15176 of the Code of Private International Law is envisaged. The granting of PIC on access to genetic resources within the context of the Nagoya Protocol pertains to the country of origin of the GR applying its sovereign rights. Therefore, compliance with PIC involves public law and administrative acts, which fall outside the scope of private international law. To contribute to the implementation of Articles 15, 16 and 18, the following options are proposed:
 Specific "0" option for the compliance measures
The specific "0" option on compliance measures would consider not introducing any legal provision on compliance.
This would lead to a non-ratification of the Nagoya Protocol, and would still require taking the measures specified in the general "0" option. Moreover, even if these measures were to be taken in order to comply with the obligations of the CBD and ILO Convention No. 107, users and providers would not be able to benefit from the clarified legal framework that the compliance measures envisioned under the Nagoya Protocol would entail. This might lead to increased litigation and transaction costs (for clarifying exactly what compliance with the CBD implies in a situation of no additional measures).
 Option 1: Ensuring compliance with provider country legislation regarding PIC and MAT, with Belgian law as fall-back option
Under this option, a general criminal provision is created that refers back to the PIC and MAT obligations as specified in the legislation of the provider country, while the Code of Private International Law would determine that provider country legislation is applicable to disputes regarding compliance with the PIC and MAT. Sanctions would be provided for cases of non-compliance with PIC and MAT requirements set out by the provider country. When checking the content of MAT, a provision in the Code of Private International Law would provide for reference to the provider country's legislation, with Belgian law as a fall-back option. The state would enact a general prohibition on the use of GR/TK accessed in violation of the law of the provider country, by specifying that the reference to foreign law in the Belgian Code of Private International Law also applies to the use of GR within the context of the Nagoya Protocol177. The sanctions for violation could in that case be a fine and confiscation. The state could act ex officio to enforce this criminal provision, which is usually taken up on the basis of complaints by individuals. The fact that a violation of foreign law would be considered a violation of national, Belgian law, and could be prosecuted and sanctioned as such, would also make it easier for providers to subsequently claim damages under civil law. A provision in the Code of Private International Law would determine that provider country legislation is applicable to disputes regarding compliance with PIC and MAT. If it is impossible to determine the content of the foreign law in due time, Belgian law should be applied178.
 Option 2: Self-standing obligation in the Belgian legislation to have PIC and MAT if so required by the provider country
Under this option, a provision is created containing an obligation to have PIC from the provider country and MAT for the utilization in Belgium of foreign genetic resources, if the legislation of the provider country requires PIC and MAT for access to its GR. As such, Belgian legislation would not refer to the legislation of the provider country regarding PIC and MAT, but only to the specific obligation of requiring PIC and MAT for access to its GR.

Expected impacts

 Option 1: Ensuring compliance with provider country legislation regarding PIC and MAT, with Belgian law as fall-back
Possible Advantages: This option could serve as a strong measure to support compliance by Belgian users with the entire ABS legislation of the provider country.
Possible Disadvantages: The option relies upon the assumption that the legislation of the country of origin properly implements the Nagoya Protocol provisions and that it is clear enough and acceptable for enforcement. If not, the option might entail legal uncertainty and unpredictability for users.
This disadvantage is attenuated to a certain extent through the fall-back clause in the Code of Private International Law (cf. the description of the option above).
 Option 2: Self-standing obligation in the Belgian legislation to have PIC and MAT if so required by the provider country
Possible Advantages: It could create less legal complexity for users and enforcement authorities in Belgium.
Possible Disadvantages: It might be a less stringent measure for acting against potential illegal utilization of GR by Belgian users, although the criminal provision could later be extended to encompass other elements.

Designating one or more checkpoints

Description of the options

Belgium could consider not introducing checkpoints as envisioned under the Nagoya Protocol, within the general "0" option. If Belgium does decide to introduce checkpoints, their implementation could take place in several phases. In order to respect the political commitment to ratify the Nagoya Protocol in a timely manner, the first phase could look at a minimal implementation requiring the establishment of a single checkpoint. Two possible options seem relevant for the first phase, namely PIC ("Option 1") and an upgraded patent disclosure ("Option 2"). In subsequent phases, more effective checkpoints might need to be developed in order to monitor the utilization of GR. Possible checkpoints to be explored at a later stage could include public research funding, ex-situ collections or intellectual property related checkpoints other than the patent authorities, such as authorities assessing applications for geographical indications of origin. Working with different phases could allow for a fast start with limited resources, to prepare for an early ratification of the Nagoya Protocol. It also provides time to better identify concrete problems and to learn from the experience of others. However, it might take longer to arrive at the most effective and/or relevant checkpoints for the situation on the ground. Therefore, care should be taken not to delay addressing existing and known problem areas.
 The specific "0" option on checkpoints
This option would consider not introducing checkpoints as envisioned under the Nagoya Protocol (whether through an integrated PIC requirement, through upgrading the disclosure requirements in patent applications or through any other means). This would lead to a non-ratification of the Nagoya Protocol, but would still require taking the measures specified in the general "0" option. Moreover, in the implementation of the obligations of the CBD and ILO Convention No. 107, it would lead to a lack of monitoring of the requirements under these conventions (and therefore a lack of transparency and data), compared to a situation where the compliance provisions of the Nagoya Protocol would have been implemented. In particular, this might create a lack of legal certainty through the absence of checkpoints (where they support compliance) which, if established, would clarify the relevant obligations.
 Option 1: Monitoring PIC in the ABS Clearing-House as a checkpoint
For this option, PIC might need to comply with more specific information collection and transfer obligations for checkpoints (irrespective of, e.g., the obligation to make permits available to the ABS CH (Article 14.2(c) of the NP) or the obligations linked to the obtaining of an internationally recognized certificate of compliance (Article 17.2-4 of the NP)), which may to a certain extent overlap.
Using the ABS CH as a checkpoint will depend on further policy decisions taken regarding the CNA and the ABS CH.
 Option 2: Using the patent office as a checkpoint
Legislation is already in place for the disclosure of origin in patent applications (whenever the information is available): a logical step in this first phase could thus be that the patent office would function as a checkpoint. This might be made possible by an upgrade of the disclosure requirement in patent applications, to include both information related to the country of origin (as under the current legislation) and information on PIC from the country of origin. However, as Article 17 of the NP refers to "relevant information related to PIC, to the source of GR, to the establishment of MAT, and/or to the utilization of GR, as appropriate", an upgrade might not even be necessary in order for the patent office to qualify as a checkpoint. Further clarification on the necessity to comply with the obligation to provide for "appropriate, effective and proportionate measures to address situations of non-compliance" is under negotiation in other multilateral fora (WTO).

Expected impacts

 Option 1: Using the ABS Clearing-House as a checkpoint
Possible Advantages: This option could lead to very few additional obligations in case general ABS option 2 (PIC as a general legal principle) were adopted, except for linking the PIC approval to the information obligations to the Clearing-House, and would therefore be sufficient to help respect the political commitment to ratify the Nagoya Protocol in a timely manner. Moreover, if appropriately linked to the Clearing-House, the PIC could constitute an internationally recognized certificate of compliance under the Nagoya Protocol and thereby contribute to the objective of increasing overall legal certainty and transparency.
Possible Disadvantages: Some extra administrative burden, as the PIC approval would need to be linked to the information obligations under the Clearing-House (which, however, will probably not be a heavy obligation).
 Option 2: Using the patent office as a checkpoint
Possible Advantages: This option could lead to very few additional information exchange obligations, and hence little additional administrative burden for the patent authorities and users, as the information on the country of origin of the GR, which has to be provided by the users in the patent application, is already available. The microbial ex-situ collections that are recognized as international depositary authorities (IDA) also already keep records of such information in the current situation.
Possible Disadvantages: The Belgian patent office currently covers only a very small proportion of the transactions concerned by the Nagoya Protocol. A legal change could be required to upgrade the patent disclosure in order to be able to use it as a checkpoint within the framework of the Nagoya Protocol. In particular, the information on PIC should be included, wherever applicable, and a link with the information obligations under the Clearing-House should be made.

Sharing information through the Clearing-House

As the discussions on the exact modalities of the ABS CH are still ongoing internationally, it remains unclear whether a separate Belgian ABS Clearing-House (ABS CH) component or only a Belgian entry-point will be required. The "0" option would therefore consist in not taking any steps regarding such a component or entry-point, nor providing ABS-specific information to the central ABS CH.
This would lead to a non-ratification of the Nagoya Protocol, but would still require taking the measures specified in the general "0" option. In particular, this "0" option would still need to comply with the obligations concerning the Belgian Clearing-House Mechanism under the Convention on Biological Diversity (CBD CHM), which also concerns information exchange on ABS, as explained below.

Description of the options beyond the "0" option

A distinction needs to be made between two separate functions of a Clearing-House component for ABS:
1. Information exchange on ABS, including on the Nagoya Protocol, within the framework of the CBD
 This is ongoing and can be further strengthened by integrating more relevant material into the Belgian CBD CHM managed by the Royal Belgian Institute of Natural Sciences (RBINS). This obligation flows from the CBD and is therefore independent of the future ratification of the NP.
2. Support for the exchange of information on specific ABS measures within the framework of the Nagoya Protocol
 Measures are needed to organize the technical information to be provided according to the Nagoya Protocol (for example on PIC, checkpoints and the ABS CH), as well as other information to be decided upon at international level by the NP COP/MOP.

The modalities of a separate Belgian ABS Clearing-House (ABS CH) component therefore still depend on the ongoing multilateral negotiations. In this context, it remains unclear whether a Belgian CHM component or only a Belgian information entry-point will be required. If such a component/entry-point is required, it is clear that the generated information will be useful for Belgian research and development, as well as for the objectives of conservation and sustainable use of biodiversity. Depending on the decision regarding the exact ABS CH modalities, three options could be explored. Three institutions could be potential candidates to support a Belgian component/entry-point of the ABS Clearing-House, if required. The strengths of these different options can be summarized as follows:

Expected impacts

These will depend highly on the decisions taken on the exact role and technical specifications of the Clearing-House. However, in general, the following points could be expected under these three options:
 Option 1: Royal Belgian Institute of Natural Sciences (RBINS) as ABS Clearing-House
Possible Advantages: Interesting synergies could be created under this option. The RBINS already hosts the National Focal Point (NFP) to the CBD and ensures the Belgian component of the CBD Clearing-House Mechanism. The RBINS also runs several biodiversity-related research units which could directly benefit from the generated information. Additionally, the RBINS fulfills an important awareness-building mission towards the broader public through its Museum of Natural Sciences. It operates the Belgian Clearing-House Mechanism for the CBD, with a strong focus on awareness raising, education and communication. It has development projects running (in collaboration with DGD) on establishing CHMs in partner countries. Through these capacity-building activities with the partner countries, it could play an important role in supporting developing countries with their obligations under the Nagoya Protocol with regard to the ABS CH. Furthermore, the administrative burden for the RBINS could be relatively low if additional information obligations related to the ABS CH could build upon the experience of the RBINS with the general CBD CHM.
Possible Disadvantages: Nevertheless, the CHM only has a general communication and information-sharing approach and does not handle specific scientific or technical data, contrary to the WIV with the Biosafety Clearing-House (BCH) (see option 3). Its applicability will therefore heavily depend on the level of technical requirements for the ABS CH.

- Option 2: Belgian Federal Science Policy Office (Belspo) as ABS Clearing-House
Possible Advantages: This option would be ideal for biodiversity research that contributes to sustainable development, as Belspo already hosts the Biodiversity Platform, whose main task is to foster such research. It has several collection databases that could support the working of PIC/checkpoints/ABS CH. Belspo also hosts several other consultative bodies linking scientific and policy analysis and is involved at international level in the digitalization of collection databases.
Possible Disadvantages: Compared to the other options, the administrative burden might be heavier, as Belspo does not currently have any information obligations towards the Secretariat of the CBD.

- Option 3: Scientific Institute for Public Health (WIV-ISP) as ABS Clearing-House
Possible Advantages: As it hosts the Belgian component of the CBD's Biosafety Clearing-House (BCH), the WIV-ISP is used to exchanging scientific and technical data with the CBD Secretariat. It also runs several health-related research units which could directly benefit from the generated information. Furthermore, the administrative burden for the WIV-ISP could be relatively low if additional information obligations related to the ABS CH could build upon the experience of the WIV-ISP with the BCH.
Possible Disadvantages: The current BCH is very disconnected from the CBD CHM, which would be a disadvantage for the ABS CHM, where the link between the three objectives is a prime requisite for any implementation option. It also might have little added value regarding awareness raising, capacity building, etc. Its relevance will therefore strongly depend on the extent to which the BCH is taken into account at international level as the example for developing the ABS CH.

Target Groups and Stakeholders for Which Potential Impact is Assessed
For the purpose of the impact assessment of the recommended options in chapter 10, a list of categories of target groups and stakeholders that could be affected by the proposed measures is established.

Users and providers of genetic resources
Land owners
- Protected areas: both public and private areas managed for conservation purposes (as providers (not necessarily in the meaning of the NP) of potentially valuable GR).
- Other land owners: any public/private land owner might become a provider of GR (not necessarily within the meaning of the NP) with potential interest for R&D.

Agriculture sector
The agricultural sector includes a variety of public and private organizations working in the fields of crop and animal selection/improvement, horticulture, fisheries, forestry and biological control. It is an important sector, given Belgium's share in the world's exports of agricultural products.
Several types of genetic resources are used by the Belgian agricultural sector, including animal genetic resources for food and agriculture (AnGR), fisheries and aquatic genetic resources for food and agriculture (AqGR), forest genetic resources for food and agriculture (FGR), plant genetic resources for food and agriculture (PGR), microbial genetic resources for food and agriculture (MiGR) and genetic resources relevant for biological control and crop protection.

Healthcare sector
In the context of this study, the healthcare sector includes the pharmaceutical industries, the care and cosmetics industries, so-called 'soft' natural medicines and in vitro diagnostic companies/laboratories. In the healthcare sector, private-sector industries generally play a predominant role. This sector is made up of both major multinationals and small family-style firms. Belgium hosts around twenty multinational companies in this sector. The SME sector is much more developed, with almost one hundred companies in Belgium179. The country is the world's third largest importer of biopharmaceutical products and the world's second largest exporter180. The pharmaceutical sector is thus a major player in the Belgian economy. The sector claims to provide the country with more than 30,000 jobs and to account for up to 40% of private R&D funding181.

IMPLEMENTATION OF THE OPTIONS WITHIN THE EXISTING LEGAL SITUATION IN BELGIUM
This chapter analyzes the implementation modalities of the policy options described in chapter 8, taking into account the existing legal and institutional situation in Belgium described in chapters 2 to 5. The structure of the chapter is based on the six core measures used in chapter 8.

Operationalizing PIC
Two different components of these options need to be compared in the assessment of the options for operationalizing PIC:
- First, for the GR which are not PS/PA, comparing the bottleneck option to the fishing net option (access through ex-situ collections, compared to access from everywhere).
- Second, comparing the "baseline" fishing net model, which envisions the refinement of existing PA/PS relevant legislation, to the "modified" fishing net model, where, in addition, other existing GR relevant legislation would be refined.
The impacts identified below are an aggregate of the impacts likely to occur along these two components that are present in options 1, 2 and 3.

Under the specific "0" option for access, only the situation where BS has been adopted as a horizontal principle is considered, as the situation where BS is not addressed as a horizontal principle will be assessed under the measures for BS as the specific "0" option for MAT (chapter 9.3). As such, the specific "0" option for PIC considered here is equivalent to the general ABS option 1 (BS, but no PIC).

Summary of the selected options for the operationalization of PIC
0. Specific "0" option (access component): the specific "0" option on access would consider no PIC requirement, with benefit-sharing as a horizontal principle
1. Option 1 - The bottleneck model: refining existing PS/PA relevant legislation & measures + only access to GR through ex-situ collections as default rule
2. Option 2 - The baseline fishing net model: refining existing PA/PS relevant legislation & measures + access to GR from everywhere but with registration as default rule
3. Option 3 - Modified fishing net model: potentially enlarged refinement of existing PA/PS relevant legislation & measures + refinement of other specific GR relevant legislation/measures + access to GR from everywhere but with registration as default rule
For a detailed description of the options please refer to chapter 8.2.

IMP 1.0 - Implementation of the specific "0" option for operationalizing PIC
Under the "0" option, benefit-sharing would still be established as a general legal principle in Belgium, which is not currently the case (cf. chapter 5). In addition, the European Commission's proposal for a Regulation on ABS182, which is currently under discussion, encourages benefit-sharing but, in its current form, does not establish benefit-sharing as a general legal principle.

Given the division of competences in Belgium, this general legal principle should be firmly anchored in the environmental competences of the Regions and the Federal Government. Indeed, as argued in section 3.2 of chapter 3, any legal measure that would consider introducing Prior Informed Consent could benefit from building upon existing legislation on physical access to and use of genetic material. Under the current regulations, the rules regulating physical access depend upon the type of ownership (private, public or res nullius), the existence of restrictions to the ownership, such as specific protection (protected species, protected areas, forests or marine environments), and the location (all four authorities apply their own rules) of the genetic material. As these regulations are currently part of the environmental competences of the Regions and the Federal Government, such anchorage seems the most logical way forward. The implementation and the subsequent operationalization of this general principle would be phased, based on subsidiarity, and flexible. Moreover, as for the implementation of other multilateral environmental agreements such as the Kyoto Protocol183 and the Cartagena Protocol184, and considering the need for a minimum level of harmonization of the implementation procedure in Belgium, a cooperation agreement between the Regions and the Federal Government may be necessary.

On this basis, the implementation of option "0" could be based on three components:
(1) A political agreement from the competent governments to establish benefit-sharing as a general legal principle, to be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations, such as the basic environmental code of the three Regions and at the federal level.
(2) The subsequent or parallel implementation of this general principle through a cooperation agreement and/or analogous provisions in relevant legislations, such as the basic environmental code of the three Regions and at the federal level 185.
(3) The subsequent operationalization of the general principle by the respective governments at the regional (through executive orders) and federal level (through royal orders), establishing rules and procedures for further implementation of the benefit-sharing provision as envisioned in the other options considered below.

IMP 1.1 - Implementation of option 1 for operationalizing PIC
The default access rule under option 1 would be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental code of the three Regions and at the Federal level (which would not necessarily be part of the first implementation step, cf. considerations in chapter 11).
This access rule would specify that access to Belgian GR that are not covered by PA/PS relevant legislation would need to be sought and processed through qualified Belgian collections (which are equipped for the deposit of data and/or samples).
(2) Subsequent or parallel implementation of this general principle, for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental code of the three Regions and at the federal level, which deal with the establishment of the Competent National Authorities and the rules and procedures for processing access requests by these Authorities (cf. IMP 3.1. below).

IMP 1.2 - Implementation of option 2 for operationalizing PIC
Similarly to option 1, the implementation of option 2 for operationalizing PIC can be broken down into four subsequent steps. The first three implementation steps are identical to the first three implementation steps of option 1 (IMP 1.1.1; IMP 1.1.2; IMP 1.1.3). The fourth implementation step of option 2 (IMP 1.2.4) is similar to the fourth step of option 1 (IMP 1.1.4), with the exception that the default access rule would specify that PIC would minimally require a registration/notification to the Competent National Authority. As also discussed below, a combination of IMP 1.2.4 and IMP 1.1.4, as a general principle in a cooperation agreement, can also be envisioned. However, for the purposes of the assessment under this section, at this stage these options are considered separately.

IMP 1.3 - Implementation of option 3 for operationalizing PIC
The implementation components of IMP 1.3 are the same as under IMP 1.2, except that it would also include the refinement of other existing legislation relevant to access to GR187, analogous to the refinement of the access provisions under the PA/PS relevant legislation.

IMP 1.3.1 - Further refinement of GR legislation
(1) Amendment of other existing legislation relevant for access to GR, establishing that any access in that context not only concerns physical access but also access within the meaning of the Nagoya Protocol, and that such access also automatically amounts to PIC under the implementation of the principle established under IMP 1.1.2 (through a Decree/Ordinance of the Regions).

187 An example is the legislation on conservation varieties, where the current "admission to use" could be considered also as a PIC under the Nagoya Protocol, in further refinement of the legislation. Conservation varieties do not fall under Annex 1 of the ITPGRFA Treaty and are currently regulated by the following legislation: Ministerieel besluit van 2 juni 2009 tot vaststelling van bepaalde afwijkingen voor de toelating van landrassen en rassen in de landbouw die zich op natuurlijke wijze hebben aangepast aan de lokale en regionale omstandigheden en die door genetische erosie worden bedreigd (Vlaams Gewest); Arrêté du Gouvernement wallon introduisant certaines dérogations pour l'admission des variétés de légumes traditionnellement cultivées dans des régions spécifiques ou sans valeur commerciale (Région Wallonne); Arrêté ministériel du 10 décembre 2010 introduisant certaines dérogations pour l'admission des races primitives et variétés de légumes traditionnellement cultivées dans des localités et régions spécifiques et menacées d'érosion génétique (Région Bruxelloise).

Specification of MAT
Given that a phased approach would allow fine-tuning the measures as more feedback is gathered, the initial MAT requirements analyzed hereunder could be further developed over time after a rather limited first implementation phase.
IMP 2.0 - Implementation of the specific "0" option for the specification of MAT
The specific "0" option under MAT would lead to a non-ratification of the Nagoya Protocol. However, as discussed in the preliminary assessment above, it is unclear how the specific "0" option would still allow the Belgian State to comply with the BS obligations of the CBD and the obligations under ILO 107, and what implementation steps would result from this alternative scenario.

IMP 2.1 - Implementation of option 1 for the specification of MAT
The implementation of option 1 would require establishing the general principle of benefit-sharing (cf. IMP 1.0. above). However, as option 1 considers no specific regulation in addition to the general BS principle, no additional implementation steps are needed.

IMP 2.2 - Implementation of option 2 for the specification of MAT
As under IMP 2.1, the implementation of option 2 is part of the subsequent operationalization of the general principle of benefit-sharing under IMP 1.0., as envisioned in step 3 of IMP 1.0. However, in this case, specific requirements on MAT are considered. Therefore, to assess this option, we will consider the following implementation component:
- The subsequent operationalization of the general principle formulated under IMP 1.0. by the respective governments at the regional (through executive orders) and federal level (through royal orders), establishing specific requirements on MAT, including the use of standard agreements, depending on the finality of use.

Summary of the selected options for the specification of MAT
0. Specific "0" option: No benefit-sharing
1. Option 1: No specific benefit-sharing requirements imposed for the MAT
2. Option 2: Standard agreements with specific benefit-sharing requirements, depending on finality of access
3. Option 3: Specific benefit-sharing requirements, negotiated on a case by case basis, depending on finality of access
For a detailed description of the options please refer to chapter 8.2.

IMP 2.3 - Implementation of option 3 for the specification of MAT
Idem as IMP 2.2., but the specification of the BS requirements in the general rules does not impose the use of standard agreements. In this option, the implementation of the specific BS requirements would always be negotiated on a case by case basis. Therefore, to assess this option, we will consider the following implementation component:
- The subsequent operationalization of the general principle formulated under IMP 1.0. by the respective governments at the regional (through executive orders) and federal level (through royal orders), establishing specific benefit-sharing requirements, negotiated on a case by case basis.

IMP 3.2 - Implementation of option 2 on the Competent National Authorities
The implementation of option 2 would be very similar to the implementation of option 1 (IMP 3.1). The choice of the Competent National Authority would in the first place be based on the relevant competent authorities and the division of competences (IMP 3.2.1). The main difference is that option 2 would provide for a centralized input system for access requests, which are then referred to one of the 4 CNAs and their respective rules and procedures. This would require the establishment of a single entry-point (such as a web portal) and the specification of rules and procedures for the single entry-point (IMP 3.2.2).
Therefore this assessment will consider the following implementation components of option 2:
- A political agreement from the competent governments to establish a single entry-point for access requests (including the specification of its rules and procedures) through a cooperation agreement between the Regions and the federal level
- Subsequent or parallel implementation through a cooperation agreement, which would include the rules and procedures for requesting access through a single entry-point, as this would avoid differences between the Regions and the federal level

Setting up compliance measures
As highlighted in chapter 8.2, the Belgian national law enacting the code of private international law states in its Article 15 that, if a foreign law needs to be applied to a case that is examined by a Belgian judge, the content of such applicable law should be identified by the judge, according to interpretations received in the "country of origin" (sic). Collaboration can be required if the content cannot be established clearly by the Belgian judge. If it is "impossible to determine the content of foreign law in due time", Belgian law should be applied (art. 15 §2, al. 2). Therefore, the implementation of option 1 would only entail a minimal amendment to this code, by including explicit reference to the use of GR within the context of the Nagoya Protocol as being part of the scope of this code.

At the same time, private international law gives the legislator the possibility to enact "mandatory laws", which rule out the application of the foreign law even though it would have been applicable according to the usual rules of private international law (for instance in cases where the foreign applicable law is nonexistent). The "mandatory law" is applied, with a broad interpretation, if the State reckons that a national application is necessary. The criteria defining a "mandatory law" are not clear-cut in jurisprudence and doctrine, and thus provide the legislator with a certain political margin: that is the ground upon which this report envisages option 2.

Summary of the selected options on compliance
0. Specific "0" option: not introducing any legal provision on compliance
1. Option 1: Ensuring compliance with provider country legislation regarding PIC and MAT, with Belgian law as a fall-back
2. Option 2: Self-standing obligation in the Belgian legislation to have PIC and MAT if so required by the provider country.
For a detailed description of the options please refer to chapter 8.2.

Finally, and independently of the options chosen, the effectiveness of the ABS compliance regime will largely depend on the effectiveness of both the national focal points and the Clearing-Houses 188. This is particularly true when referring back to provider country legislation, as these two institutions are responsible for the channeling of information. Some of the assumptions made in the following part could differ in light of the disparity between provider countries. The amount of legal certainty under option 1, for instance, could greatly differ depending on whether one is dealing with a provider country that effectively relays information to the Clearing-House or with one that does not.

IMP 4.0 - Implementation of the specific "0" option on compliance
Idem as under IMP 2.0

IMP 4.1 - Implementation of option 1 on compliance
The implementation of the compliance provisions of the Nagoya Protocol is explicitly addressed in the EC's proposal for a Regulation on ABS 189.
Therefore, given the important harmonization effort at the EU level concerning compliance, and the ongoing discussions on the proposal, a phased approach to the implementation of the compliance obligations is indicated. Moreover, at present, it is unclear to what extent the proposed Regulation will be sufficient to implement the core obligations on compliance and/or what additional compliance measures will be needed in case the Regulation is not sufficient. On the basis of these considerations, the assessment of option 1 on compliance will consider the following implementation components:
(1) A political agreement from the competent governments to express the commitment that legislative measures will be taken to provide that GR utilized within Belgian jurisdiction have been accessed with PIC and MAT as required by provider country legislation, and to address situations of non-compliance. This political agreement would be executed in a later stage of the implementation, as soon as sufficient clarity is provided at the EU level.
(2) Implementation of this general principle for MAT through referring back to the provider country legislation, with Belgian law as a fall-back. As these two elements are currently already part of the Belgian code of international private law, such implementation would minimally entail only amending this code by including explicit reference to the use of GR within the context of the Nagoya Protocol as being part of the scope of this code.
(3) Implementation of a criminal provision on complying with provider country legislation regarding PIC and MAT. Due to the ongoing EU negotiations, it is premature at this stage to provide a detailed analysis of criminal sanctions. This will be evaluated once there is relative certainty on the types of behavior concerned and the level of sanctions.

188 Tvedt, Fauchald (2011), op. cit., p. 398
189 EC (2012b), op. cit.

IMP 4.2 - Implementation of option 2 on compliance
Idem as IMP 4.1, except that the implementation of the general principle of compliance would be based on a self-standing obligation, which requires Belgian users to have PIC and MAT from the provider country (as part of Belgian law), insofar as the legislation of the provider country requires PIC and MAT for access to its GR.

IMP 5.1 - Implementation of option 1 on checkpoints
The implementation of the monitoring obligations under option 1 is closely related to the establishment of the ABS Clearing-House, considered below in section 9.6. Indeed, information regarding uses of GR in Belgium, as obtained from the CNAs of the provider countries, will be made available through the Clearing-House Mechanism of the Nagoya Protocol. If PIC is provided within this information, it will be considered as an internationally recognized certificate of compliance and be usable for the checkpoint function. The use of PIC as a checkpoint could therefore be organized by ensuring that PIC for GR accessed and/or used in Belgium is available in the Belgian node of the Clearing-House Mechanism. No other implementation components are therefore currently required for the timely ratification of the Protocol, in addition to the implementation components considered under the establishment of the Clearing-House (see chapter 9.6 below).
IMP 5.2 - Implementation of option 2 on checkpoints
The Belgian legislation implementing recital 27 of Directive 98/44/EC of 6 July 1998 on the legal protection of biotechnological inventions, which has due regard to the obligations stemming from the CBD with specific regard to its Articles 8(j), 15 and 16, has included a (qualified) origin indication requirement (if the origin of the material is known) in its Article 15 §1(6). In order for the patent application to be admissible, the filing must contain a statement regarding the geographical origin of the biological material that has been used as a basis for the invention, if known. This provision would need to be amended to allow its use as a checkpoint under the Nagoya Protocol, specifying that the patent application should contain relevant information related to prior informed consent, to the source of the genetic resource, to the establishment of mutually agreed terms, and/or to the utilization of genetic resources, as appropriate (NP Art. 17).

For a detailed description of the options please refer to chapter 8.2.

The discussions on the exact modalities of the ABS Clearing-House (CH) are still ongoing internationally and decisions will only be taken at the NP COP/MOP1 (earliest: October 2014). In the meantime, it remains unclear whether a separate Belgian ABS CH component or only a Belgian entry-point will be required. Moreover, the impact of the CH will highly depend on the decisions taken on the exact role and technical specifications of the Clearing-House. Therefore, the impact assessment of this implementation provision is still tentative and will need to be refined in the future.

9.6.1 IMP 6.0 - Implementation of the specific "0" option for the CH
Idem as IMP 2.0

IMP 6.1 - Implementation of option 1 for the CH
Given the still ongoing discussions at the international level and the uncertainty regarding the obligations of Belgium under the Nagoya Protocol, the implementation of this option will benefit from a phased approach. As it is likely that the information tasks under the ABS CH will need to be implemented in Belgium in any case, in a first phase a CH could be established that specifically deals with the information tasks. In a second phase, collaboration between this CH and other institutions/databases could be established, if required, to implement the more technical tasks of an ABS CH. Given the existing CBD CHM at the RBINS and the strong Belgian preference to ensure coherence between the different Clearing-Houses under the CBD, it seems logical to start this exercise at the RBINS by extending the current ABS part of the CBD CHM. Therefore, two implementation components will be considered in the assessment of this option:
(1) Specify in the cooperation agreement that the RBINS will be appointed as the ABS CH for dealing with the information exchange on ABS under the Nagoya Protocol, and indicate that further development of the ABS CH in terms of more technical or specific tasks related to the implementation of the NP will be undertaken after the first COP/MOP of the NP.

IMPACT ANALYSIS
Methodology of the impact analysis
The evaluation of the possible consequences of the implementation of the NP is conducted through a detailed comparative impact analysis (IA) related to the options described in chapters 8 and 9. The IA has three main objectives:
1. Identifying the possible effects of the options
2. Identifying the affected stakeholders
3. Comparing the different options
In this framework, the IA is conducted through a multi-criteria analysis (MCA). MCA has been developed as an alternative to conventional cost-benefit analysis (CBA). CBA assumes value commensurability between the different objectives (i.e. the possibility to measure them through a common monetary metric, which supposes that it makes sense to construct monetized proxies of all criteria and that information is available to do so) and compensability (i.e. the assumption that a loss observed in one attribute or good can be compensated in quantitative terms by a gain in another, which supposes, for example, that one can quantitatively compare through a common metric such things as the loss of biodiversity conservation benefits, profits for industry relating to facilitated access to resources, or administrative costs). However, there is a wide literature showing that, from an environmental, social and economic perspective, these assumptions are clearly not substantiated for sustainability impact assessments190. Nevertheless, quantitative monetary values are not to be dismissed completely from the evaluation in an MCA: wherever possible, the quantification of certain advantages and disadvantages is a crucial input component of the MCA, as shown below, even if there is no commensurability or equivalent compensation across all the criteria. But unlike CBA, MCA allows comparing impacts represented both qualitatively and quantitatively. The goal of the IA is thus to identify qualitative elements, in addition to the quantitative elements, that can form the basis for a comparison amongst the options for the implementation of the Nagoya Protocol, rather than to calculate a specific quantitative threshold of aggregated monetary benefits in a common metric able to justify the expected aggregated costs.

The evaluation of the impact is conducted against a set of evaluation criteria described below, which leads to a performance score per criterion for each of the options. Using the performance scores, a dominance analysis and an outranking analysis are performed to compare and rank the alternatives, based on pre-defined weighting (cf. description below). A sensitivity analysis is then conducted to describe the "behavior" of the outcome when changing the weighting (cf. sensitivity analysis, paragraph 10.1.3, Step 4). Figure 1 gives an overview of the different steps of the MCA.

The formulation of the set of evaluation criteria has been obtained through two overarching questions:
1. To what objectives is the implementation of the Nagoya Protocol seeking to contribute?
2. How would a good option be distinguished from a bad option, given the decision-making context?
Although no clear rules exist on the definition of criteria and their number, it is generally considered that the number of criteria should be kept as low as is operationally desirable (i.e. the model should be as simple as possible). Different economic, social, environmental and procedural criteria were considered, checking them against the preferences of stakeholders and against quality requirements.
- Stakeholder preferences: Analysis of the preferences of stakeholders, expressed both during the first stakeholder workshop and during the interviews, helped to refine a first set of criteria derived from the above questions.
Examples of these preferences include flexibility, continuity, knowledge improvement, legal certainty, non-redundancy and cost-effectiveness in the establishment of the regulatory framework of the Nagoya Protocol.
- Quality requirements: the criteria were then checked against a range of qualities such as value relevance (relation with the overall objective), cognitive relevance (shared understanding of concepts), measurability (some form of measurement or judgment, objective or subjective191) and non-redundancy (avoiding several indicators measuring the same factor).

This selection process allowed identifying four criteria to assess the impacts of the proposed options, which are described below. The assessment of the environmental and social impacts is based on two individual sub-criteria (S1 and M1), while the assessment of the economic impact is composed of three sub-criteria (E1, E2, E3). Four procedural sub-criteria (G1 to G4) have also been added to reflect the overall policy process. The different sub-criteria for the economic and the procedural impact have been created for analytical ease, as the assessment would have been too complex if grouped into one single criterion. Having more sub-criteria does not confer more importance on a particular impact. For this report, the social and economic impacts are considered to have the same weight (i.e. they are considered of equal importance), while the environmental impact is slightly more important, given the objectives of the Protocol and the CBD to contribute to conservation and sustainable use of biodiversity. The procedural impact has the lowest importance and serves mainly to help refine the preference for an option in case the difference is not clear enough when using the substantive criteria. If the total weight of the impacts represents 100%, the weighting is distributed based on the following basic allocation key: environmental impact (37.5%), social impact (25%), economic impact (25%) and procedural impact (12.5%) (see also the sensitivity analysis below).

Economic impact
E1 Legal certainty and effectiveness for users and providers of GR, at low cost
Four indicators are taken into account to evaluate this criterion. Legal certainty refers to the consistency and predictability of the rules and the process in place. Effectiveness of the legal framework refers to a set of indicators including:
- Enforceability: the level to which an option allows the ABS regulation to be enforced.
- Redundancy: relates to existing legislation regulating related obligations.
- Proximity with other international agreements.
When combining these indicators, an option will be preferred when it increases, at an equivalent cost, legal certainty, allows better enforceability, reduces redundancy and does not conflict with obligations under other existing international treaties. In addition, an option with a similar level of legal certainty and effectiveness, compared to another option, will be considered preferable if it leads to lower legal costs (such as the cost of drafting new legislation and the cost of seeking legal advice).

E2 Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs
Extensive research on private sector returns from public and private investment in research infrastructures involving genetic resources shows a clear correlation between improved conditions for R&D and an increased likelihood of the development of innovative products and services.
Options that maximize research and development opportunities for users and providers of GR are therefore considered preferable. These benefits will be assessed while taking into account the changes in research costs that stakeholders incur for the necessary steps they need to take in order to allow for research that complies with the NP. Such costs include, among others, costs involved in the negotiation of the ABS agreement, the acquisition of genetic resources, and transaction costs related to the transfer of the GR.

E3 Minimizing implementation costs
Implementation costs are costs related to obligations flowing from the implementation of the Nagoya Protocol. They include, for example, the administrative costs related to keeping track of the ABS agreements, the financial costs of the creation of new institutions (if needed), the costs of seeking legal advice in the course of the implementation, or the cost of monitoring utilization. An option having a lower cost is considered preferable over another with a higher cost for an equivalent level of produced benefits. In addition, an option leading to a one-time expense is preferred over an option which generates recurring expenses.

Social impact
S Achievement of social objectives
Innovation resulting from R&D with GR is expected to contribute to the achievement of important social objectives, be it health, nutrition, food security, or others. Options that maximize opportunities for the users in socially relevant fields are therefore considered preferable over options that create fewer such opportunities. Options contributing to the transfer of knowledge and technologies to developing countries and to job creation/preservation in the sectors utilizing genetic resources, both in developing and developed countries, are also considered preferable. A particular social aspect is the contribution to the effective protection of the rights of indigenous and local communities over their traditional knowledge associated with GR. Options that effectively protect or advance indigenous rights are preferable over options that do not achieve this aim.

Environmental impact
M Promotion of conservation and sustainable use of biodiversity
Options are preferable when they enhance conservation and sustainable use of biodiversity, inter alia through improving its knowledge base (e.g. by enhancing taxonomic research), enhancing capacity building and technology transfer, channeling benefit-sharing to conservation and sustainable use, improving the management of protected areas and protected species, and raising awareness.

Procedural impact
G1 Flexibility to accommodate sectorial differences
The implementation of the Nagoya Protocol will impact different types of actors, using GR under different conditions, in many different ways and at varying moments in the development process. Therefore it seems important that implementing measures offer some flexibility to accommodate differences between the diverse sectors utilizing genetic resources. An option will be considered preferable if it better balances the need for clear and certain rules with flexibility to accommodate sectorial differences. An inflexible "one-size-fits-all" regime might have negative effects and might contradict the objectives of the CBD192.

G2 Temporal flexibility to allow for future policy and adjustments
The boundaries and needs of the utilization of GR evolve continuously, with new resources being discovered every day.
The political and socio-economic context of the NP also changes rapidly, as ABS is a relatively new field. This evolving reality creates the need for a flexible and adaptable implementation over time, in particular in light of implementation measures taken by other Parties to the Protocol and in light of future sectorial initiatives. An option is considered advantageous if it leaves space for adaptation of the implementation and for future policy and adjustments over time. In addition, an option providing such temporal flexibility at lower costs will be considered preferable over another option.

G3 Improving knowledge on the exchange of GR and existing ABS agreements for future policy development and evaluation
Currently, little data exists on the exchange of GR and existing ABS agreements. Increasing this knowledge is essential to designing efficient rules addressing the needs of the different stakeholders involved. Furthermore, ABS has a clear link with the conservation and sustainable use of biodiversity. Improving the understanding and knowledge of their interlinkage is an important part of the efforts to halt the erosion of biodiversity. An option is preferred when it allows increasing the knowledge in these two fields.

G4 Correspondence with existing practices
Previous research stresses the importance of relying upon previously established relationships and existing practices of genetic resource use for the success of ABS agreements. For example, Täuber et al. show that strengthening existing research capacities and existing relationships fosters understanding and mutual trust, attracts users and lowers transaction costs 193. Options building upon existing practices will therefore generally be considered preferable. An option that would require a significant change in practice, or which would run against the basic economic model of a practice, will be considered less preferable.

Data collection for the indicators
Due to the scarcity of data and knowledge on the flows of GR and on current ABS practices in Belgium, three types of sources needed to be triangulated.
1. Primary sources such as internal documents, activity reports and policy documents/reports, inasmuch as these were available and shared, have been collected and analyzed. The list of documents can be found in the bibliography of this report.
2. Existing literature on the economics of genetic resource use has been consulted and integrated whenever possible. For a complete list of references see the footnote references and the bibliography.
3. In-depth interviews have been conducted to collect information and data specific to the Belgian situation. These are discussed below.

It should be noted that relevant data for the measurement of indicators was not always existent, available or shared. Quantitative data, especially, was very scarce. An evaluation of the most relevant quantitative costs has nevertheless been attempted and applied towards the fine-tuning of choices amongst closely ranked options (cf. also the tables in annex 2 and annex 3). For the data gathering based on these three sources, a list of general indicators was used, as indicated in Table 6.

Interviews
29 interviewees, pertaining to groups of potentially impacted stakeholders, were selected in a non-random way based on their proven relevant experience and knowledge of the subject. 17 out of 29 accepted the request for an interview; the interviews were conducted between 23 July and 20 August 2012.
A complete list of interviewees can be found at the end of this report (annex 5). 12 others declined or did not reply to the requests for an interview. However, this did not result in an overall unbalanced representation of certain sectors, as stakeholders from all sectors were interviewed. The decline by some contacted persons could point to a lack of knowledge, understanding and/or interest in the NP by certain persons within the Belgian stakeholder groups. Any form of future implementation will need to address this by setting up targeted capacity-building activities. A full list of the contacted persons has been sent to the accompanying committee of the study.

Most interviews were conducted face to face. Interviewees were briefly introduced to the objectives and progression of the study, if needed. Two sets of structured questionnaires were used for the interviews: one for users of genetic resources and one for providers. Some specific additional questions were also prepared for specific profiles of interviewees which were neither users nor providers. These sets addressed both the quantitative and the qualitative evaluation of the options through two distinct parts. Questions related to quantitative data aimed at collecting objective figures related to the access, the distribution and the sharing of benefits related to genetic resources, in order to try to map the flows of GR in Belgium. Questions included, inter alia, the number of accesses made/received and their related costs, the patenting and commercialization rates of acquired GR, and the costs of managing collections. Questions related to qualitative data were used to further elucidate stakeholder preferences observed during the first stakeholder workshop and were mostly open-ended and behavior-based questions (e.g. "If you have the choice between options 1 and 2, which one would you choose and why?"). The questionnaires can be found in annexes 2 and 4.

As can be seen in the correspondence table between the criteria and the indicators in annex 3, the majority of the indicators are related to the economic and the environmental criteria. As can be seen in the table, the indicators for the environmental criteria refer both to quantitative aspects (gaps in biodiversity research, incentive for conservation by potential increased use of Belgian GR, etc.) and to more qualitative aspects of the environmental criterion (as these are more difficult to capture in a quantitative indicator). These qualitative aspects, such as increased awareness of biodiversity issues and education for example, were also discussed during the interviews, and the results of the discussion on these qualitative elements have also been included in our discussion below. The same comment applies to the social aspects, such as the promotion of indigenous and local communities and the social impact through capacity building, which were also discussed both in a quantitative and a qualitative manner with the interviewees.

Comparing the alternatives
The general principle of the impact analysis is to assess the impact of several policy options as net changes compared to a no-policy-change baseline ("0" options) and to compare the impacts of the options amongst each other. The overall goal is to establish a ranking amongst the options. To this purpose, under each section the proposed options were contrasted with each other and with the specific "0" options.
In this exercise, it is important to state from the outset that the evaluations do not give any absolute figures/values for each of the criteria, but give a set of values that allow seeing which option would, comparatively, score higher or lower on each of the criteria. As for the comparison with the general "0" option ("no policy change" over PIC, BS, CNA, compliance, etc.), this can be done through an indirect method, based on the aggregated effects of the specific "0" options. If all the specific "0" options rank lower than the list of proposed options under the several measures, then the general "0" option (which is the sum of all the specific "0" options, i.e. no policy change at all) will a fortiori rank lower than the list of the proposed options under these several measures. Therefore, this issue is addressed after having assessed the impacts of all the specific "0" options and seen what consequences can be drawn from an aggregation of all the specific decisions not to act on a certain measure.

Step 1: Performance of the options
Each option is thus analyzed in relation to the others and described in an accompanying text divided per individual criterion. The impact on stakeholders is described for each individual group of stakeholders (land owners, agriculture sector, healthcare sector, biotechnology and processing industry sector, governmental research institutions, collections, university research sector, and other; as described in chapter 8.3, the agriculture, healthcare and biotechnology sectors are evaluated jointly, except when there are major differences in impact that justify treating them separately). The economic, social and environmental assessments are then represented in a separate impact grid, indicating whether the impact is positive or negative, the likelihood of the occurrence, and the magnitude of the impact, as well as a general score. The score ranges from [- - -] (most negative) to [+ + +] (most positive). Neutral and unimportant impacts are indicated with a "0". Table 6 offers an overview of the scoring system. Reading and interpretation of the impact grids is to be done with caution, as some assessments are based on assumptions that are justified in the text. Also, some options have a different subject-matter (see the IA of the establishment of the CNA for example), or represent an aggregate of different possible scenarios (see the IA of the operationalization of PIC for example). The procedural sub-criteria (G1 to G4), which were outside the Terms of Reference of the study194, are not represented in an impact grid. They are submitted to an assessment of their contribution to the overall quality and effectiveness of the policy process in the following steps of the MCA, instead of a likelihood/magnitude analysis, which is less appropriate for these criteria.

Step 2: Visual dominance analysis
Options and (sub-)criteria are then compared on the basis of a performance chart. The performance chart visually represents the differences between the options and allows for a dominance analysis to be made. The goal here is to identify whether there exists an ideal point: an option that dominates all others. An option dominates another if it scores at least as well on all criteria and strictly better on at least one. However, having an ideal point is rare: only three of our cases present such an ideal point. To allow for this dominance analysis to be based on all criteria, the procedural sub-criteria are included in this visual analysis 195.
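To make the dominance test concrete, the following minimal sketch (illustrative only, not part of the study's calculations) encodes the impact-grid scale [- - -] to [+ + +] as integers -3 to +3 and checks whether an ideal point exists; the option names and scores are hypothetical.

```python
# Illustrative sketch: dominance analysis over hypothetical impact-grid scores,
# encoded from the scale [- - -]..[+ + +] as integers -3..+3.

def dominates(scores_a, scores_b):
    """Option a dominates option b if it scores at least as well on all
    criteria and strictly better on at least one."""
    return (all(a >= b for a, b in zip(scores_a, scores_b))
            and any(a > b for a, b in zip(scores_a, scores_b)))

def ideal_point(options):
    """Return the name of the option dominating all others, if one exists."""
    for name, scores in options.items():
        if all(dominates(scores, other)
               for other_name, other in options.items() if other_name != name):
            return name
    return None  # no ideal point: fall back on the outranking analysis

# Hypothetical scores on four criteria (environmental, social, economic, procedural)
options = {"option 1": [2, 1, -1, 1],
           "option 2": [1, 1, 1, 0],
           "option 3": [1, 0, 0, 1]}
print(ideal_point(options))  # None here, so a ranking method is needed (Step 3)
```

Because an ideal point is rare, a None result is the typical outcome, which is precisely what motivates the outranking method described in Step 3.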
Step 3: Ranking the alternatives
If no ideal point can be identified, a ranking of the alternatives can nonetheless be made based on their performance. As the scores are the result of comparisons between the options within the criteria, and not of comparisons amongst the criteria (cf. introduction), the results for one criterion cannot simply be added up to the results for another. Therefore, the "Preference Ranking Organization Method for Enrichment of Evaluations" (PROMETHEE) was applied, which allows building an outranking relation on the set of alternatives (called "options" in this report). An outranking relation allows building an ordering of the alternatives through a series of pairwise comparisons of these same alternatives 196. The basic principle of this method is that an option outranks another if that option outweighs all the other options over a larger number of (sub-)criteria than any other option. PROMETHEE uses a preference function for each of the alternatives, which allows identifying the intensity of preference. The intensity of preference represents the importance of the difference between two alternatives when comparing them. The values of the preference function (i.e. the different levels of intensities) lie in an interval from zero to one, within which a higher value of the preference function corresponds to a better alternative. In other words, when option 1 outranks option 2 for a certain criterion, the size of the difference between option 1 and option 2 determines the intensity of the preference for option 1 on that criterion: the larger the difference, the higher the intensity of the preference 197.

A preference index can then be set up for one option over the other. The preference index is the weighted average of the preferences on the individual criteria:

$$\pi(\text{option}_1, \text{option}_2) = \frac{\sum_{k} W_k \, P_k(\text{option}_1, \text{option}_2)}{\sum_{k} W_k}$$

where $P_k(\text{option}_1, \text{option}_2)$ represents the intensity of the preference of option 1 over option 2 for criterion k, and $W_k$ represents the weight of criterion k. In this analysis, a Usual preference function is used (Figure 2) for all the alternatives, which is best suited for qualitative criteria with a small number of levels on the criteria scale 198. With this function, the values of the intensity of the preference can only be 0 or 1. In other words, the size of the difference (d in Figure 2) does not matter: preference is given to the alternative which has the higher value on the criterion.

Figure 2 - Usual preference function

The preference indices of the pairwise comparisons between the alternatives are then summed up to create two indices: the positive outranking flow and the negative outranking flow. The positive outranking flow represents the strength of an alternative when compared to all others (i.e. the extent to which it outranks all others). The negative outranking flow represents the weakness of an alternative when compared to all others (i.e. the extent to which it is outranked by all others). For a set A of n alternatives, these flows are defined as follows:

Positive outranking flow for option 1: $\phi^{+}(\text{option}_1) = \frac{1}{n-1} \sum_{x \in A} \pi(\text{option}_1, x)$

Negative outranking flow for option 1: $\phi^{-}(\text{option}_1) = \frac{1}{n-1} \sum_{x \in A} \pi(x, \text{option}_1)$

The positive and negative flows allow calculating the net flow of each alternative, by which a complete pre-order of the alternatives can be established:

$$\phi(\text{option}_1) = \phi^{+}(\text{option}_1) - \phi^{-}(\text{option}_1)$$

Option 1 then outranks option n if the net flow of option 1 is higher than the net flow of option n. For each subset of proposed policy options, the performance of the options will be evaluated and presented in the impact grid along with the explanatory text. A visual dominance analysis is then performed, followed by a first ranking of the alternatives.
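As an illustration of how this outranking computation can be carried out, the following minimal sketch (not part of the study; the option names, scores and the numeric encoding of the impact-grid scale are hypothetical assumptions) implements the preference index, the outranking flows and the net-flow ranking with the Usual preference function. Rerunning it with the weight vectors of the two sensitivity scenarios described in Step 4 below reproduces the logic of the sensitivity analysis: the ranking is robust if it does not change across scenarios.

```python
# Illustrative sketch: PROMETHEE ranking with the Usual preference function.
# Hypothetical scores on four criteria (environmental, social, economic,
# procedural), encoded from the impact-grid scale [- - -]..[+ + +] as -3..+3.

def usual_preference(a, b):
    """Usual preference function: intensity is 1 if a strictly beats b, else 0.
    The size of the difference d does not matter."""
    return 1 if a > b else 0

def preference_index(scores_a, scores_b, weights):
    """Weighted average of the per-criterion preferences of a over b."""
    return sum(w * usual_preference(sa, sb)
               for sa, sb, w in zip(scores_a, scores_b, weights)) / sum(weights)

def net_flows(options, weights):
    """Net flow = positive outranking flow minus negative outranking flow."""
    n = len(options)
    flows = {}
    for a, scores_a in options.items():
        phi_plus = sum(preference_index(scores_a, scores_b, weights)
                       for b, scores_b in options.items() if b != a) / (n - 1)
        phi_minus = sum(preference_index(scores_b, scores_a, weights)
                        for b, scores_b in options.items() if b != a) / (n - 1)
        flows[a] = phi_plus - phi_minus
    return flows

options = {"option 1": [2, 1, -1, 1],
           "option 2": [1, 1, 1, 0],
           "option 3": [1, 0, 0, 1]}

# Basic allocation key: environmental 37.5%, social 25%, economic 25%, procedural 12.5%.
basic = [0.375, 0.25, 0.25, 0.125]
# Sensitivity scenarios (Step 4): equalized and economic weighting.
equalized = [0.25, 0.25, 0.25, 0.25]
economic = [0.25, 0.25, 0.375, 0.125]

for weights in (basic, equalized, economic):
    ranking = sorted(net_flows(options, weights).items(), key=lambda kv: -kv[1])
    print(ranking)  # higher net flow = higher rank in the complete pre-order
```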
In this first approximation, as indicated earlier, the environmental impact is considered more important than both the social and economic impacts, which in turn are weighted more than the procedural impact (used for fine-tuning the choice amongst closely ranked options). If the total weight of the impacts represents 100%, the weighting is distributed based on the following basic allocation key:
- environmental impact: 37.5%
- social impact: 25%
- economic impact: 25%
- procedural impact: 12.5%

Step 4: Sensitivity analysis
In addition to the analysis based on the predefined weight distribution of the criteria, a sensitivity analysis has been performed by changing the weighting amongst the criteria and analyzing the impact on the ranking of the options. The sensitivity analysis is used to test the robustness of the outcome of the ranking. It allows assessing how sensitive the outcome is to changes in the problem definition. To perform the sensitivity analysis, two additional weighting scenarios are compared with the basic allocation scenario, to see if there is a reasonably low threshold of change in these criteria that leads to a change in choice amongst the options:
1. The equalized weighting scenario equalizes the importance of the impacts: an equal weighting (25%) is applied to all four groups of criteria (environmental, social, economic and procedural).
2. The economic weighting scenario puts a stronger focus on the economic impact, which becomes the most important one (37.5%), while social and environmental impacts are considered of equal importance and the procedural impact remains unchanged.
Wherever a change in ranking occurs, a ranking of the alternatives based on these new weights has been presented in addition to the environmental weighting scenario, in order to be able to compare the weighting choices amongst each other.

This option might be problematic in cases where the GR is not yet accurately known at the point of access, as it will be very hard to control the accuracy of the provided notification and its adequacy for later monitoring if the GR is of an uncertain nature. On the other hand, ex-situ culture collections have all the necessary technical and scientific expertise for the appropriate identification (e.g. genetic profiling) of the accessed GR, providing additional information on the GR beyond the information available with the PIC (compared to a PIC issued, for example, for an in-situ resource of uncertain nature). Also, if utilized in combination with a post-access self-monitoring system (e.g. a due diligence system), the bottleneck option will guarantee that only well-identified GR enters the value chain of "legally acquired GR", creating strong legal certainty and easing self-monitoring by users 199.

The modified fishing net model would lead to some increase in legal certainty compared to the baseline fishing net model, as it also considers refining legislation pertaining to GR that are outside PA/PS 200. However, such an additional refinement would imply an additional legislative cost compared to the two other options. As shown in chapter 9.1 (IMP 1.3), while the implementation of all three options implies the same legislative cost for the amendment of PA/PS relevant legislation, option 3 also includes the identification and refinement of other relevant legislation. The latter additional cost, which is specific to option 3, would only be worthwhile if the GR covered by that legislation were of potential or actual value not found elsewhere.
However, in spite of this uncertainty and the cost, the impact of the increase in legal certainty for users of GR can still be rated of medium magnitude. At this stage it seems difficult to go beyond an illustration of what the modified fishing net option could entail. The case of the conservation varieties, cited in chapter 9.1, however, provides a plausible illustration of such legislation that is different from the PA/PS legislation and where the current legislation on the "admission to use" could also be considered as a PIC under the Nagoya Protocol, in further refinement of the legislation (cf. references provided in the footnote in that section).

By establishing both PIC and BS as legal principles and by refining existing legislation (see chapter 9.1, IMP 1.1, 1.2 and 1.3), it can be argued that options 1, 2 and 3 increase legal certainty, which is likely to consolidate or increase the use of Belgian GR and is therefore expected to lead to more economic innovation and product development, in particular through higher R&D benefits, compared to the situation under the specific "0" option. Conversely, under the specific "0" option, the legal uncertainty generated by the obligation to share benefits combined with the absence of any proof of PIC (see chapter 9.1, IMP 1.0) is likely to lead to less use of Belgian GR and could therefore be an obstacle to innovation and development.

Giving ex-situ collections a central role in the PIC process might strongly foster an increase of deposits, as collections customarily hold a physical copy of the GR they work with201. This could imply an important financial cost for the collections storing those resources that are accessed for utilization outside PA/PS and that are usually not deposited in an ex-situ collection. If it is assumed that both access situations (through the fishing net and through the bottleneck) lead to an equivalent increase in economic benefits, then the fishing net is to be preferred over the bottleneck under this criterion, as resources are not deposited under the fishing net model. Measuring this cost is difficult, as it strongly differs depending on the type of resource being deposited. The costs of storing GR range from a few euros for herbaria, over 100 to 250€ for plant collections and microbes, up to 40,000€ for animal breeds202. Moreover, users and providers could decide to deposit only the information, only the physical resource, or both, which would lead to different price tags. These costs can also be nuanced in light of the positive effects the storing of GR can have for other research users (not intended by the users accessing the GR), such as further taxonomic research in case the deposited GR is of a yet unknown taxonomic nature.

The additional cost of organizing access to materials for research under the bottleneck and the fishing net models, compared to the specific "0" option, is likely to be low in both cases. It would be limited to the working hours for administrative requirements such as the establishment of the agreement, including settling the specifications of use of the material, the scope of the agreement and the drafting. Under the bottleneck option, this effort will be shared between users and providers. Under the fishing net, these costs do not arise, as the sole obligation is that users notify the CNA of the access to a GR.
Some time investment will nevertheless be required for this notification obligation, but if a centralized notification system is established, whether digital or physical (cf. E3 below), the working time is expected to be low. These additional costs are estimated to range from 70 to 140€ per transaction, with the fishing net model having the lowest additional costs (between 1 and 54€) 203. The baseline and the modified fishing net models do not show any difference on this criterion (they both equally promote economic innovation and there is no expected difference in research costs).

 Impact on stakeholders:
o Coll.: Financial impact if deposits increase under option 1 (but the magnitude of the impact is unclear). Bear part of the costs for accessing material under option 1. No impact under options 2 and 3.
o Gov. Res.: Impact in terms of working hours for complying with the administrative requirements under options 1, 2 and 3. Indirect positive impact from possible additional storage under option 1.
o Ag., health and biotech: Impact in terms of working hours for complying with the administrative requirements under options 1, 2 and 3. Indirect positive impact from possible additional storage under option 1.
o Univ.: Impact in terms of working hours for complying with the administrative requirements under options 1, 2 and 3. Indirect positive impact from possible additional storage under option 1.
o Land: No impact
o Other: No impact

E3 - Minimizing implementation costs
Implementation costs for the access procedure are mostly administrative costs for the later follow-up of the process, such as drafting the PIC notification/registration/approval, handling the ABS agreements, the genetic profiling and the storage of a track record of the exchange in a centralized database (e.g. the ABS Clearing-House). These costs are shared between users and providers, but they are small (between 1 and 24€ per transaction) and, with the exception of the costs for drafting, occur equally in options 1, 2 and 3 (except for the genetic profiling, which is not applicable to option 2) 204.

Implementation costs for the public administration will occur under all options, related to the notification structure that will be set up for the PIC. Notification could be done through a digital access portal where notifications are made directly by users, or through a physical access point for input by an administrative agent. Such a structure could also build synergies with existing services in the collections. However, as Belgium has around 150 different collections 205, the need for operability and transparency could necessitate the centralization of access requests in a few qualified collections 206. The expected increase in access requests could then possibly lead to some increase in administrative costs for these collections, even though this could be shared between the collections and the users requesting access (e.g. through a fee). Under option 3, the public administration for PA/PS could also incur some additional costs, as it will have to handle more access requests due to the inclusion of GR in a refined legislation relevant to PA/PS. Overall, in the various structures that could be set up, the additional administrative costs for the implementation of the PIC can be considered equivalent between the three options, but potentially incurred by different stakeholders.
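To give an order of magnitude of how these per-transaction figures add up, the sketch below multiplies the cost ranges cited above by a purely hypothetical yearly volume of access transactions; no official figure for this volume exists for Belgium, so the number used is an assumption for illustration only.

```python
# Back-of-the-envelope aggregation of the per-transaction cost ranges cited
# above. N_TRANSACTIONS is a hypothetical assumption, not an official figure.

N_TRANSACTIONS = 1000  # hypothetical yearly number of access transactions

# €/transaction ranges taken from the text above
cost_ranges = {
    "access procedure, upper range (bottleneck-type)": (70, 140),
    "access procedure, fishing net model": (1, 54),
    "administrative follow-up (all options)": (1, 24),
}

for label, (low, high) in cost_ranges.items():
    print(f"{label}: {low * N_TRANSACTIONS:,}-{high * N_TRANSACTIONS:,} € per year")
```

Even under such a rough assumption, the recurrent administrative burden remains modest compared to the one-off costs of setting up the notification structure discussed above.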
It should also be noted that, as indicated in chapter 9.1, the impact generated by a growing number of access requests will depend upon the relationship with other institutions created for the implementation of the NP, such as the CNA and the ABS CH. The impact of the specific "0" option is unclear, as this option still implies organizing BS, which is highly likely to also lead to implementation costs for the users and providers of the GR. However, the costs would probably be lower than under a systematic PIC requirement.

Social impact
S - Achievement of social objectives
It is likely that the overall contribution to economic innovation and product development of options 1, 2 and 3 (cf. criterion E2 above) will also have (at least indirect) positive effects on socially important sectors such as food security, health and nutrition, albeit with a difference between option 1 and options 2/3 as discussed above. This contribution to the R&D sector is also expected to contribute to job creation in the overall economy, and in public and private research institutions in particular. Requiring PIC for Belgian genetic resources might improve the knowledge base on ABS in Belgium and could therefore contribute to education activities and help to build capacity. However, making a causal link between the requirement of PIC in Belgium and the fulfillment of specific social objectives, especially concerning the impact on transfer of technology or on the ILCs and TK in developing countries, would require more in-depth and long-term data on the effects of the PIC requirement, which will only be available once the implementation is in place.

Environmental impact
M - Promotion of conservation and sustainable use of biodiversity
In spite of the higher costs of the bottleneck option, its environmental benefits can be expected to be higher than under the fishing net model. Indeed, by obliging users to come back to the ex-situ collections for each new acquisition, the quality and the accurate documentation of the exchanged resources can be guaranteed. Furthermore, as stressed earlier, giving the ex-situ collections a central role in the PIC process will increase deposits, which will eventually support further biodiversity research and improve the knowledge base for conservation and sustainable use of biodiversity. Even though all the options will lead to increased awareness of biodiversity conservation, sustainable use and access and benefit-sharing, option 1 is expected to have a larger impact in terms of increased awareness, in particular through increased attention to the use of biodiversity and increased availability of resources for research. If the bottleneck model allows for a more efficient follow-up of the accessed resources (see E1), the benefit-sharing is likely to be more easily monitored and channeled to conservation and sustainable use activities. As argued here, these benefits are likely to be important (generating increased awareness, knowledge and documentation, amongst others). However, the importance of the expected benefits for biodiversity conservation can be considered quite similar under options 1, 2 and 3, with some advantage of option 1 over options 2 and 3. In any case, these benefits are much broader than the specific category of agreed-upon monetary benefits from possible returns on commercial profits from the utilization of GR, which would be low under all the options 207.
The modified fishing net model offers another form of environmental benefit over options 1 and 2 in that, alongside the amendments to PA/PS relevant legislation, it also refines other legislation and thus allows covering a broader range of genetic resources (see also chapter 9.1, IMP 1.3). Furthermore, it could remedy situations where the default rule of option 1 proves to be ineffective or even creates a loophole. This could happen in cases where species found exclusively within protected areas prove to be rare, and/or if most species within protected areas can also be found outside these areas. As the overall contribution to economic innovation and product development is positive, the research benefits for knowledge on biodiversity can also be expected to be positive, but this applies equally under the three options.

Procedural impact
G1 - Flexibility to accommodating sectorial differences
As indicated in chapter 9.1, the assessment only applies to the establishment of the principle of PIC for access to Belgian GR, which is a necessary condition for the implementation of all three options. As this will be done through, for example, a cooperation agreement or provisions in relevant legislation, flexibility to accommodate sectorial differences will be fully preserved. The choice of the default access procedure (whether through ex-situ collections or through a notification requirement) is not a crucial step for the implementation of the NP and could be taken at a later stage of the implementation. Therefore, all three options can be implemented, and adapted accordingly, once sectorial specificities have clearly been identified. The same goes for the refinement of the existing legislation relevant to PA/PS, which applies to all three options. The specific "0" option does not lead to sector-specific differences.

G2 - Temporal flexibility to allow for future policy and adjustments
As the assessment only applies to the establishment of the general legal principle of PIC for access to Belgian GR (see chapter 9.1), all options allow for temporal flexibility and can be adjusted to integrate future developments, in particular by integrating elements from the other options.

G3 - Improving knowledge for future policy development and evaluation
The possible increase in deposits under the bottleneck option, if feasible at reasonable cost (cf. criterion E2), will strongly foster overall understanding and knowledge of the type of GR that is being exchanged and valued. Furthermore, ex-situ collections host an important part of the Belgian biodiversity heritage, which is currently underexploited or unknown. Allowing the collections to play a more important role will help to increase knowledge on currently unknown specimens. The bottleneck component is thus preferable under this criterion. The baseline and modified fishing net models would both generate information from all types of use sectors and uses (according to the content of the registration), so there is no difference on this criterion. The specific "0" option scores very badly on this criterion. Indeed, in contrast to all the other options, it will not generate systematic data on notification/registration, on user requests for information on PIC when accessing GR, or other knowledge generated in the operationalization of the PIC.
G4 - Correspondence with existing practices
For the reasons discussed under G1, the bottleneck model might require some changes in practices from those sectors that acquire their GR from outside PA/PS and that do not rely on the public culture collections (i.e. acquisition in situ outside PA/PS and acquisition through informal exchanges with in-house collections, often without systematic track record and documentation). However, as explained above, these practices only concern some sectors and some uses. Indeed, in Belgium, as several interviewees have pointed out, most utilized GR come from ex-situ collections, while the use of in-situ GR presents a diminishing trend. Consequently, ex-situ facilities act as important agents in the production chain: studies show that the use of ex-situ collections for new material is larger than both in-situ acquisition and induced mutation 208. The requested changes would essentially reinforce this existing practice and growing trend of relying upon ex-situ collections for the exchange of GR. Moreover, the ex-situ collections already have a practice of documenting GR and dealing with CBD requirements. So overall, one can say that discordance with existing practices is not very likely to occur under the bottleneck model and that for most access situations the correspondence is very high. Under the fishing net component, the GR can be accessed from everywhere, but the introduction of the notification/registration requirement for GR outside PA/PS would require a change in practices, although with an expected minor impact on these practices, as the intent is to have a light notification/registration requirement. Comparing the baseline fishing net to the modified fishing net model, one can conclude that the modified fishing net model has a slight advantage on this criterion, as it relies on additional pieces of existing legislation covering GR. In particular, and although counter-intuitive, there is no evidence that only protected areas contain interesting genetic material. Refining GR legislation beyond the focus on PA and PS only could therefore increase the correspondence with the existing practices of utilization of GR.

Visual dominance analysis
No ideal point can be identified in the performance chart (Figure 3).

Ranking the alternatives
A preference can be observed for options 1 and 3. This can be explained by the fact that option 2 is dominated by both options 1 and 3 for legal certainty (E1), for the environmental impact (M) and for correspondence to current practice (G4). A reasonable change in the weighting amongst the criteria does not allow changing this result. It should be noted that the social impact is not accounted for in this analysis, as the performance of the options is unclear (see description of performances above). Additional data could alter the outcome of the ranking, given that the social criterion is of substantial importance in all three weighting scenarios. As for the difference between options 1 and 3, a further analysis by changing the weighting can refine the analysis, as this difference is not very high. Option 3 scores comparatively better on the economic innovation criterion (E2). Option 1 has advantages over option 3 due to the gain in legal certainty (E1), overall environmental benefits (M), and knowledge gathering for future policy making (G3).

The sharing of benefits for the exchange or the utilization of GR in Belgium is currently self-regulated by the sector, each provider institution proposing its own rules and standard agreements.
Summary of the selected options for the specification of MAT
0. Specific "0" option: No benefit-sharing
1. Option 1: No specific benefit-sharing requirements imposed for the MAT
2. Option 2: Standard agreements with specific benefit-sharing requirements, depending on finality of access
3. Option 3: Specific benefit-sharing requirements, negotiated on a case-by-case basis, depending on finality of access
For a detailed description of the options please refer to chapter 8.2 and chapter 9.

In this context, option 1 does not impose important legal costs, as it would simply rely on the same model. However, this option does not allow the Belgian State to specify the circumstances of the benefit-sharing procedure and to make sure benefits are shared in a fair and equitable way. In addition, both option 1 and the specific "0" option would not allow users to benefit from the advantages in terms of legal certainty and effectiveness provided by options 2 and 3. Indeed, from a perspective of legal effectiveness and legal certainty, working with model contractual clauses (option 2) or tailoring the BS agreements to the specificities of each new transaction (option 3) encompasses various advantages for users, providers and public authorities. First, the development of standard agreements could eliminate variations between ABS regimes, hence providing legal certainty, facilitating transaction initiation, and suppressing information gaps created by extraneity factors 209. Second, it could give more incentives to respect the rules already in place, insofar as the actors of the private sector currently prefer to trade with informal private collections that do not follow BS standards 210. Third, using contracts will facilitate enforcement between providers and users: "a contract would be binding as long as it is not found to be void, and could, depending on the dispute settlement clause included in the contract, be brought to arbitration" 211. Fourth, standardizing the negotiation and/or the agreement allows overcoming unbalanced bargaining power resulting from asymmetries in information, knowledge, negotiation, skills and capacity 212, which is a barrier to fair and equitable benefit-sharing. Fifth, and related to the fourth point, it allows the State to control whether benefits arising from potentially high-value resources are being shared according to their value and used in accordance with the objectives of the Protocol and the Convention. Option 2 might smooth the negotiation process between users and providers, as it offers guidelines while providing security to providers that changes of intent will be renegotiated.

209 Täuber et al. (2011), op. cit. The term "extraneity" is used when a legal issue confronts two or more different national legal systems and thus requires settling a conflict of laws or jurisdictions. Envisaged here is the situation where the identity of the physical provider of a genetic resource (e.g. a public collection) differs from the identity of the owner or the original provider of this resource.
210 Laird S., Wynberg R. (2012), Bioscience at a Crossroads: Implementing the Nagoya Protocol on Access and Benefit-sharing in a Time of Scientific, Technological and Industry Change, CBD.
Nonetheless, the legal set-up of option 2 and option 3 (i.e. the inclusion of specific BS requirements in the provisions of the environmental code, cf. chapter 9.2) has yet to overcome the difficulty of delineating in practice how divergent finalities of access can be distinguished from each other. Failing to specify the nature of different types of utilization, especially commercial utilization, and the correlated adequate BS, is likely to deprive the Belgian State of possible benefit-sharing which might contribute to conservation and sustainable use of biodiversity. However, this report has identified examples of how to deal with this distinction (cf. chapters 6.1 and 6.2). Hence, these practical difficulties do not seem to outweigh the benefits offered by options 2 and 3 through legal certainty and effectiveness. In particular, in the case of option 3, even if the legal costs are likely to be substantially higher, the benefits for effectiveness discussed above could be higher as well.

 Impact on stakeholders:
o Coll.: Benefit from higher legal certainty under options 2 and 3.
o Gov. Res.: Benefit from higher legal certainty under options 2 and 3.
o Ag., health and biotech: Benefit from higher legal certainty under options 2 and 3.
o Univ.: Benefit from higher legal certainty under options 2 and 3.
o Land: Benefit from higher legal certainty under options 2 and 3.
o Other: No impact

E2 - Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs
Option "0" would also lead to non-ratification of the Nagoya Protocol as well as to non-compliance with the BS obligations of Belgium in the framework of the CBD. Conversely, the adoption of BS as a horizontal principle envisioned in options 1, 2 and 3 is likely to maximize economic innovation in the future. In general, if BS is adopted, the increase in legal certainty under options 2 and 3 will spur the utilization of Belgian GR while leading to the lowest level of transaction costs for the different categories of users of GR. Option 1 could also prove to serve economic innovation and product development, as it offers the advantage of agreeing upon benefits that generate the least costs. But this advantage mainly applies to users, while providers risk investing substantial resources to make sure the benefits shared with them reflect a fair and equitable share of the benefits. Under option 3, this flexibility for users (for instance by encompassing an ex-post renegotiation process once the "non-commercial" utilization of the resource finds a commercial application) can be preserved, while offering a basis of negotiation for providers through specific requirements. Option 3 thus has the advantage of both providing a certain level of certainty for providers and small users (through a predefined set of requirements adapted for this sector) and leaving a certain level of flexibility for bigger commercial users (through the case-by-case negotiation). Furthermore, options 2 and 3 will streamline the utilization of GR for both formally and informally organized providers (such as research laboratories that distribute GR they collected in situ, or official ex-situ collections that work in the context of public-private partnerships). These advantages seem strongest in option 3 as compared to option 2, where both public and private sector research might feel hindered by the lack of flexibility of a fully standardized set of BS requirements. This could rebalance the role of public collections, which sometimes lack the resources/bargaining power to impose the appropriate rules.
 Impact on stakeholders:
o Coll.: As users, might be disadvantaged under option 3, as resources are generally limited to negotiate specific BS requirements.
o Gov. Res.: Might be disadvantaged under option 3, as resources could be limited to negotiate specific BS requirements.
o Ag., health and biotech: Bigger users might prefer option 1 or 3, due to the flexibility to accommodate existing ways of functioning. Small commercial users might be disadvantaged under option 3, as resources could be limited to negotiate specific BS requirements.
o Univ.: Might be disadvantaged under option 3, as resources are generally limited to negotiate specific BS requirements.
o Land: No impact on economic innovation and product development of land owners.
o Other: Economic innovation more likely if benefit-sharing is adopted as a general principle.

E3 - Minimizing implementation costs
The impact of the specific "0" option on minimizing implementation costs is unclear, as it is unclear how the specific "0" option would still allow the Belgian State to comply with the BS obligations of the CBD (cf. chapter 8) and what implementation costs would result from this alternative scenario. The implementation costs of options 1, 2 and 3 will be different for stakeholders on the one hand and for the public administration on the other. Stakeholders will be impacted by the negotiation costs they incur to agree upon the sharing of benefits. The level of these costs is inversely proportional to the standardization of (the process to establish) MAT 214. When BS is not specified, negotiations can include agreeing on the types of benefits to share, the time-frame of the benefit-sharing, the distribution of the benefits between the different stakeholders involved, etc. The less the process is standardized and/or facilitated by pre-existing requirements, the more time stakeholders will spend on negotiating terms they both agree upon. This cost is also likely to differ depending on the moment negotiations take place. They can take place before the exchange (ex-ante negotiation), most likely at the moment of access, or after the exchange (ex-post negotiation), when the agreement specifies that benefit-sharing terms are to be settled at a later stage of the development chain (e.g. the patent stage, the commercialization stage, etc.) or when the terms of a project need to be renegotiated. It is assumed that the cost of ex-post negotiation is substantially larger than the cost of agreeing on BS ex-ante, because of the relationship-specific investment related to an already pre-developed product 215. This is also voiced by some interviewees, who fear that deciding on the amount of benefits to share at a later stage than the moment of access will create higher expectations, resulting in difficult negotiations between users and providers. Taking the above into account, the negotiation cost could range from no costs at all for a fully standardized procedure to more than 1000€ per transaction for ex-post negotiation in a fully flexible context 216. For the public administration, option 1 leads to the least implementation costs, while option 2 might lead to high set-up and follow-up costs for implementation, and option 3 might lead to high implementation costs due to the recurring need to adapt the BS requirements to every new transaction (including legal advice).
At the same time, the legislative costs for the drafting of the different options are not necessarily very high, depending on how the standardized mutually agreed terms are specified in the implementation provisions of the Nagoya Protocol in Belgium. In particular, such implementation provisions can draw lessons from the practices with existing standardized MTA (Material Transfer Agreement) clauses already put into practice by the collections, which show the benefits of using standardized material acquisition and transfer arrangements.

214 Täuber et al. (2011), op. cit.
215 O.E. Williamson (1985), The Economic Institutions of Capitalism, New York: Free Press. Relationship-specific investments are investments whose return depends on the continuation of the relationship (see V.P. Crawford (1990), "Relationship-Specific Investment", The Quarterly Journal of Economics, 105(2), pp. 561-574). In other words, the return on the investment made by users of GR in developing a product depends upon the continuation of the relationship (the ABS agreement) they have with the provider.
216 These figures are based on a quantitative evaluation, by the study team, of the negotiation costs related to the MAT under the various options. This evaluation is based on the data generated in the interviews (especially indicators IND 3.1 and 3.2, data collected for the various stakeholder groups).

Combining these two contrasting impacts when comparing options 2 and 3, it can be considered that one-time set-up costs weigh less heavily than recurrent costs such as transaction and negotiation costs (cf. E1 and E2). Under options 2 and 3, negotiation and transaction costs are borne on a regular basis by the users and providers of GR. However, the set-up costs of the standardized formats for option 2 are incurred only once and lead to lower recurrent negotiation costs. As a result, it might be said that the overall impact of option 2 on minimizing implementation costs is better than that of option 3, which leads to higher recurrent costs for all stakeholders. Option 1 is hard to evaluate as it could be either minimalistic or very extensive, even though it has lower set-up costs for the State compared to options 2 and 3.

Social impact
S - Achievement of social objectives
It is likely that the overall contribution to economic innovation and product development of the adoption of BS as a horizontal principle might also have (at least indirect) positive effects on socially important sectors such as food security, health and nutrition. The sharing of (monetary and/or non-monetary) benefits could take the form of management support, educational programs, technology transfer, institutional capacity building, collaboration among companies, etc., which are expected to support social objectives. A concurrent contribution to the R&D sector is also expected to contribute to job creation in the overall economy, and in public and private research institutions in particular. Conversely, the contribution to socially important sectors under option 0 can be considered negative. However, the different options might have contrasting effects on the various sectors of use, impacting their capacity to innovate, create jobs and contribute to social objectives. Option 2, and to a lesser extent option 3, could at least partially contribute to overcoming problems of unbalanced bargaining power between the actors.
Small commercial users and non-commercial users might suffer from option 3 if they do not have sufficient capacity to negotiate the case-by-case agreements, while at the same time this option could potentially better take into account the specificities of small commercial and non-commercial users and providers. Option 1 is hard to evaluate as it could be either minimalistic or very extensive. Overall, options 2 and 3, compared to option 1, offer a better opportunity for the Belgian authorities to control the types of benefits being shared and to monitor whether the use made of them by stakeholders serves social objectives. Combining these effects (specific BS requirements and public control over benefits), it seems that option 2, and to a lesser extent option 3, offer an advantage on this criterion over option 1.

Procedural impact
G1 - Flexibility to accommodating sectorial differences
The utilization of genetic resources is very heterogeneous, ranging across different sectors of the biotechnology industry and including different types of users. BS agreements could reflect heterogeneous uses of genetic resources, in different sectors and different production chains. The possibility to accommodate sectorial differences will clearly be highest in option 1 and, to a lesser extent, in option 3. At first glance, options 1 and 3 will provide some advantage, as the BS conditions will be specified for each transaction on a case-by-case basis. In the case of option 1, this will be without having to follow specific BS requirements; in the case of option 3, there will be specific BS requirements, but these will leave a large degree of flexibility to accommodate sector specificities. However, option 1 might entail some hidden costs of accommodating flexibility for the public authorities implementing the Nagoya Protocol, as they have to specify the circumstances under which a certain type of benefit-sharing can be considered fair and equitable. Therefore, an option such as option 3, leaving sufficient room for flexibility while providing such indications of "appropriate circumstances" through some standardization, might have a lower cost for organizing flexibility. Overall, for the authorities, both option 2 and option 3 represent a low-cost measure for organizing flexibility with sectorial differences, while providing clear and certain rules for benefit-sharing. Comparing the impact on the various stakeholders, it can be said that the lack of flexibility regarding sectorial differences in option 2 might be disadvantageous for larger commercial users, while it could benefit non-commercial and small commercial users who do not have the same amount of resources for the detailed negotiations under option 3 as larger commercial users. The impact of the specific "0" option on accommodating sectorial differences is unclear, as it is unclear how the specific "0" option would still allow the Belgian State to comply with the CBD BS obligations (cf. chapters 8 and 9) and how flexibility would be built into this alternative scenario.

G2 - Temporal flexibility to allow for future policy and adjustments
Overall, options 1, 2 and 3 allow for some temporal flexibility, as they could provide for gradual fine-tuning, or possibly even for a gradual increase of the level of requirements (1->2->3), for example when more knowledge about the use of GR becomes available.
However, options requiring a substantial investment to put into place (such as options 2 and 3, where substantial effort is needed to define the specific requirements) will be more difficult to change at a later stage (and will meet more resistance). The specific "0" option is likely to have a contrasting effect on temporal flexibility. On the one hand, it would lead to non-ratification of the Nagoya Protocol and also to a default on the implementation of the CBD BS obligations. This would still require moving towards a better implementation of the CBD at a later stage, and could lead to ratification of the Nagoya Protocol in a second step, by adopting option 1, 2 or 3 at that later stage. However, postponing the implementation of the BS obligations to the future is likely to create comparatively higher costs in the future than the costs envisioned now (for example under option 1). Indeed, not ratifying the Nagoya Protocol would still require legal action, as a non-Party, to clarify the relationship with Parties to the Protocol and to deal with implementation measures in other countries when distributing GR to these countries (for example if these countries put a due diligence system in place, requiring clarification of the legal provenance of GR from Belgium). Reversing, at a later stage, legal actions taken outside the system of the Protocol, and reverting to the implementation of BS under the Nagoya Protocol in a way that is consistent with the legal developments in other countries, would probably lead to additional costs that outweigh the temporal flexibility gained by postponing the implementation.

G3 - Improving knowledge for future policy development and evaluation
The absence of a horizontal BS requirement (specific "0" option) would comparatively generate less information than the options requiring BS. Indeed, options 1, 2 and 3 would all generate information on the way actors deal with benefit-sharing obligations that could prove relevant for future policy making.

G4 - Correspondence with existing practices
The exchange of GR in Belgium is currently self-regulated, and most current exchanges of GR already include a benefit-sharing clause based on semi-standardized or standardized BS terms used between user and provider, many of which distinguish between commercial and non-commercial use purposes. Option 1 could therefore easily build upon existing practice, but there would be no decisive difficulty for those providers to adapt themselves to options 2 or 3 if the specific conditions imposed by the State were sufficiently flexible. As for the private actors, small companies could prefer a certain use of standard models, given their possible lack of direct legal expertise and/or resources for extensive negotiations. In contrast, the specific "0" option would move away from the existing trends and practices of the Belgian stakeholders.

Visual dominance analysis
No ideal point can be identified in the performance chart (Figure 5).

Ranking the alternatives
With our basic allocation key, option 2 stands out as the preferred solution (Figure 6): it performs better than, or at least as well as, the other options on all the criteria except criterion E2. However, the differences with option 3, which comes second, are rather small, as option 3 scores well on the economic (E1 and E2), environmental (M) and most procedural criteria.
The leading position of option 2 is maintained throughout the sensitivity analysis, but is slightly attenuated. In light of this analysis, option 1 and option 0 are not valuable alternatives.

Option 2 will generate lower transaction costs in accessing GR, due to a simplified one-stop access procedure for users, but the difference in costs with option 1 is unlikely to have a major impact on economic innovation and product development. Moreover, this effect will be stronger for foreign users of GR and more nuanced for Belgian users, as most Belgian actors already function in the strongly decentralized Belgian system. Therefore, option 1 and option 2 cannot clearly be differentiated along this sub-criterion. With its high process uncertainty as well as legal uncertainty (due to the non-ratification of the Nagoya Protocol), option 0 represents the least favored option for this sub-criterion.

E3 - Minimizing implementation costs
Under option 1, both users and public administrations will be faced with higher implementation costs. For public administrations, the cost of establishing the entry-point is fourfold higher under option 1, as four separate entry-points will have to be created and manned. As indicated earlier, for users (especially foreign users), identifying the competent entry-point is likely to require more working time than under option 2. Option 2 will have a higher coordination cost, at least in the initial phase of the implementation, as internal mechanisms and procedures will have to be established to deal with the different CNAs in line with their internal legislations and procedures. However, an initial higher set-up cost is to be preferred over continuing higher operating costs. The impact of the specific 0 option along this criterion is unclear, as this option would lead to non-ratification of the Nagoya Protocol, and the impact would therefore depend on the alternative measures taken to clarify the access requests.
 Impact on stakeholders: public administrations (for setting-up and operating costs), users (for search and input costs).

Social impact
S - Achievement of social objectives
Choosing between a single entry-point and four different entry-points to the CNAs is unlikely to have any significant impact on any social objective. With its high process uncertainty as well as legal uncertainty (due to the non-ratification of the Nagoya Protocol), option 0 clearly does not benefit any social objective and therefore represents the least favored option for this criterion.

Procedural impact
G1 - Flexibility to accommodating sectorial differences
As indicated in chapter 9.3, the choice of the entry-point, as dealt with in this analysis, is independent from the establishment of the four CNAs (one for each of the regional + federal authorities). The latter applies in any situation; hence this analysis is only concerned with the entry-points to the CNAs. In this context, both options 1 and 2 potentially offer the same level of sectorial flexibility in the implementation of the NP. The impact of this criterion on the specific 0 option is unclear, as this option would lead to non-ratification of the Nagoya Protocol, and the impact would depend on the alternative measures taken to clarify the access procedures to Belgian GR.

G2 - Temporal flexibility to allow future policy and adjustment
The specific 0 option would lead to non-ratification of the Nagoya Protocol. This could preserve more flexibility at present, but is likely to lead to less flexibility later on, as explained above (cf. analysis under criterion G2 for specifying MAT).
The higher set-up costs of a centralized entry-point would partially lead to less flexibility to allow future changes (due to higher resistance to change and lower willingness to renegotiate) or to change modalities that are inherent to the institutional set-up. However, it is important to note that this partial difference in flexibility only concerns the difference in the entry-point to the CNAs and can therefore be considered minor (the establishment of a CNA by each relevant authority applies in any situation, see chapter 9.3).

G3 - Improving knowledge for future policy development and evaluation
Option 0 would strongly inhibit the gathering of information, as no PIC would be granted and no information would be kept about previous access requests in case of the non-establishment of the CNAs. It is unclear which of options 1 or 2 would provide better opportunities to improve the knowledge base, but it might be argued that a centralized system would avoid redundant knowledge acquisition, improve the consistency of the data generated on the various access requests, and hence be more efficient.

G4 - Correspondence with existing practices
Both option 1 and option 2 build upon existing practices to a certain extent. On the one hand, the decentralized input of access requirements proposed under option 1 clearly corresponds to the current exercise of competences over GR, where there is no coordination between the authorities in charge of dealing with access to GR in PA/PS. From this perspective, establishing increased coordination would introduce a change to the existing practices, albeit at a low cost, as it can be implemented through a one-stop digital portal. A notable exception to this is the longstanding coordination within the BCCM consortium for the culture collections, where information on GR and access procedures is available through a common portal which redirects users to the decentralized member collections. On the other hand, there is already an established practice of coordination amongst the federated entities and the Federal Government on matters pertaining to ABS policy. For example, issues related to the CBD and the NP are coordinated through the Biodiversity Steering Committee of the CCIEP, for which the secretariat is provided by the Federal Public Service for Environment. The Belgian Clearing-House Mechanism, managed by the focal point to the CBD, is a central access point for information and awareness-raising pertaining to the CBD. Hence, establishing a single entry-point for the facilitation/channeling of requests/advice (option 2) would also, to a certain extent, correspond with existing practices. The impact of the specific 0 option under this criterion is unclear, as this option would lead to non-ratification of the Nagoya Protocol and the impact would therefore depend on the alternative measures taken to clarify the access requirements.

Visual dominance analysis
Option 2 is the dominant alternative: compared to options 0 and 1, it scores at least as well on all criteria and is strictly better on one economic sub-criterion (legal certainty). However, this dominance has little relative value. In light of the preceding analysis of the performance and the impact on the stakeholders, this chart shows that little difference can be observed between the impact of establishing a single entry-point and the impact of establishing four separate entry-points.
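Since each of the rankings and visual dominance analyses in this chapter rests on the same weighted-sum aggregation (Steps 3 and 4 above) and the same pairwise dominance test, a minimal computational sketch of both is given below for illustration. All option scores in the sketch are hypothetical placeholders; they are not the values underlying Figures 3 to 9.

```python
# Minimal sketch of the weighted-sum ranking (Steps 3 and 4) and the
# pairwise dominance test used in the visual dominance analyses.
# All option scores are hypothetical placeholders (scale 0-10).

SCENARIOS = {
    "basic":     {"env": 0.375, "soc": 0.25, "eco": 0.25,  "proc": 0.125},
    "equalized": {"env": 0.25,  "soc": 0.25, "eco": 0.25,  "proc": 0.25},
    "economic":  {"env": 0.25,  "soc": 0.25, "eco": 0.375, "proc": 0.125},
}

# Hypothetical group scores per option (environmental, social, economic, procedural)
options = {
    "option 0": {"env": 2, "soc": 3, "eco": 4, "proc": 5},
    "option 1": {"env": 7, "soc": 5, "eco": 6, "proc": 6},
    "option 2": {"env": 7, "soc": 6, "eco": 5, "proc": 7},
}

def weighted_score(scores, weights):
    """Aggregate the four criteria-group scores with the scenario weights."""
    return sum(weights[g] * v for g, v in scores.items())

def dominates(a, b):
    """a dominates b if it scores at least as well on every group and
    strictly better on at least one (the 'ideal point' test)."""
    return all(a[g] >= b[g] for g in a) and any(a[g] > b[g] for g in a)

for name, weights in SCENARIOS.items():
    ranked = sorted(options, key=lambda o: weighted_score(options[o], weights),
                    reverse=True)
    print(f"{name:>9} weighting: {ranked}")

print("option 2 dominates option 0:",
      dominates(options["option 2"], options["option 0"]))
```

Re-running the ranking under the equalized and economic scenarios reproduces the sensitivity analysis: a change in the leading option between scenarios signals that the outcome is sensitive to the chosen weights.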
Even if the measures under the general "0" option were taken in order to comply with the obligations of the CBD and ILO Convention 107, users and providers would not be able to benefit from the clarified legal framework provided by the compliance measures envisioned under the NP. This would not create a sufficient level playing field for stakeholders. Therefore, both options 1 and 2 are preferable over option 0 under this criterion.

Summary of the selected options on compliance
0. Specific "0" option: not introducing any legal provision on compliance
1. Option 1: Ensuring compliance with provider country legislation regarding PIC and MAT, with Belgian law as a fall-back option
2. Option 2: Self-standing obligation in the Belgian legislation to have PIC and MAT if so required by the provider country
For a detailed description of the options please refer to chapter 8.2 and chapter 9.

Option 1 refers back to the legislation of the provider country: the private international law code would determine that provider country legislation is applicable to disputes regarding compliance with the MAT. If it is impossible to determine the content of the foreign law in due time, Belgian law should be applied. This option therefore relies on the assumption that the legislation of the country of origin properly implements the NP provisions and is clear enough and acceptable for enforcement based on the provider country legislation. Instructing courts and authorities to directly apply the terms set by the provider country could create a level of uncertainty for Belgian users and public authorities, "given that access legislation will vary among countries, creating legal uncertainty as to whether and how each country's provider-side law will affect rights and obligations of users" 217. However, the latter disadvantage is attenuated by the fall-back clause of the Belgian code of private international law, which specifies that if it is "impossible to determine the content of foreign law in due time, Belgian law should be applied" (art. 15 §2 al. 2). In addition, now that Belgium has a national code of private international law, the application and control of foreign law is quite common 218. There are today more than 300 judgments in the most-used database (Jura) with the keyword "applicable law to contracts in situations of international private law", and at least 50 cases regarding "the application of foreign law by a Belgian judge". This is therefore not a new phenomenon at all, and it would not create an additional burden on the judicial system. Nevertheless, in this perspective, the passing of a self-standing obligation as envisioned under option 2 could instead create less complexity for users, courts and enforcement authorities in Belgium. However, the obvious disadvantage is that it would respect to a lesser degree the political options and the legal requirements of the provider country pertaining to its national ABS legislation.

217 Tvedt, Fauchald (2011), op. cit., p. 386.

E2 - Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs
The 0 option would lead to non-implementation of the Nagoya Protocol, which would likely lead to increased difficulty for Belgian users to acquire foreign GR for research and development and result in a barrier to economic innovation.
As stated in chapter 3, there are already various substantive (material rules) and formal (private international law) provisions that could be applied to the contractual and extra-contractual conflicts related to GR benefit-sharing. However, these are not fully adapted to the NP playing field (e.g. the absence of the notion of "informational component"). In the field of research and development, collaborations with users in foreign countries which are Parties to the Protocol might be hampered, which will also hinder obtaining internationally recognized standards for proof of good legal provenance of GR. In comparison, options 1 and 2 would allow implementing the Nagoya Protocol and thereby safeguard, or even extend, the level of trust in the research sector. The expected positive impact from the implementation of the Nagoya Protocol will however differ between options 1 and 2. Indeed, under option 1, giving priority to the law of the provider country within the Belgian legal system could entail significant transaction costs. The extent of these transaction costs will of course largely depend on the effectiveness of the ABS Clearing-House in providing detailed and up-to-date information. On the other hand, under option 2 it would be less complex for users to comply with the provider country requirements regarding the existence of PIC and MAT, which would promote the use of GR for economic innovation and product development.

 Impact on stakeholders:
o Coll.: As users, incurring higher costs under option 1
o Gov. Res.: Incurring higher costs under option 1
o Ag., health and biotech: Incurring higher costs under option 1
o Univ.: Incurring higher costs under option 1
o Land: No impact as providers
o Other: No impact

E3 - Minimizing implementation costs
The impact of the 0 option under this criterion is unclear. The 0 option would lead to non-ratification of the Nagoya Protocol, and the impact on implementation costs will therefore depend on the alternative measures taken to comply with the obligations of the CBD and ILO Convention 107, which are both ratified by Belgium. ABS disputes would relate to disagreements about the implementation of provisions included in the MAT. In this context, the absence of related jurisprudence will create a challenge for the initial cases under all the considered options, which might be reduced only by a legislative draft that is as precise as possible. This remark applies equally to options 1 and 2.

Social impact
S - Achievement of social objectives
The envisioned implementation of both options 1 and 2 would contain a firm commitment under Belgian law to compliance with the PIC and MAT of the provider countries, both for GR and for TK associated with GR (IMP 4.1 (1)). However, contrary to option 2, under option 1 the actual provisions for the PIC and MAT, and compliance with those provisions, would also be considered by the courts in the context of the relevant legislation of the provider country. If this legislation covers social objectives, and these are included in the MAT, then option 1 could have a better performance. The 0 option is likely to have negative effects on the social objectives. As discussed earlier, option 0 could hinder access, due to the lack of trust and of a level playing field, and thus undermine the social objectives of possible BS provisions. Furthermore, it would impede R&D in Belgium, which could be a barrier to innovation in the health, nutrition or food security sectors.
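Before turning to the environmental impact, the choice-of-law cascade underlying option 1 (discussed under E1 above) can be summarized schematically. The sketch below has no legal value; its function name and inputs are hypothetical constructs used purely for illustration.

```python
# Schematic sketch (not a legal instrument) of the choice-of-law cascade
# under option 1: priority to provider country legislation, with Belgian
# law as the fall-back when the content of the foreign law cannot be
# determined in due time (cf. art. 15 §2 al. 2 of the Belgian code of
# private international law).

from typing import Optional

def applicable_law(provider_country_law: Optional[str]) -> str:
    """Return the law governing an ABS dispute over PIC/MAT under option 1.

    provider_country_law is None when the content of the foreign law
    cannot be determined in due time.
    """
    if provider_country_law is not None:
        return provider_country_law  # priority to the provider country's ABS legislation
    return "Belgian law"             # fall-back clause

# Illustrative calls with hypothetical inputs
print(applicable_law("provider country ABS law"))  # -> provider country ABS law
print(applicable_law(None))                        # -> Belgian law
```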
Environmental impact
M - Promotion of conservation and sustainable use of biodiversity, including biodiversity research
The implementation of the Nagoya Protocol is expected to have a positive impact on conservation activities and biodiversity research. Therefore, options 1 and 2 are to be preferred over option 0 on this criterion. The difference between option 1 and option 2 is difficult to assess, as the environmental benefits would depend in both cases on the Mutually Agreed Terms specified in the provider country legislation and/or the clauses negotiated on a case-by-case basis upon access to the GR. However, contrary to option 2, under option 1 the actual provisions for the PIC and MAT, and compliance with those provisions, would also be subject to review by the courts in the context of the relevant legislation of the provider country. If this legislation includes provisions on conservation and sustainable use of biodiversity, and these are included within the MAT, then option 1 would better address conservation and sustainable use of biodiversity.

Procedural impact
G1 - Flexibility to accommodating sectorial differences
N/a (the three options are neutral as regards the specificities of the various user sectors). All options can be adapted to sectorial and utilization differences.

G2 - Temporal flexibility to allow for future policy and adjustments
The impact of options 1 and 2 on temporal flexibility is probably quite similar, as both options would still leave room for further adjustments. Shifting from one option to another, or combining the options, still seems possible at further points in time, even though it would always imply a legislative cost. The specific 0 option would lead to the non-implementation of the Nagoya Protocol. This could preserve more flexibility at present, but is likely to lead to less flexibility later on, as explained above (cf. analysis under criterion G2 for specifying MAT).

G3 - Improving knowledge for future policy development and evaluation
Both options could generate knowledge through court decisions. Under option 1, this information could also serve provider countries.

G4 - Correspondence with existing practices
As indicated in the introduction to this section on compliance, there is currently no reference to GR in the scope of the Belgian code of private international law. Both options 1 and 2 would therefore require a change compared to current practices (as would the specific 0 option, because of the obligations under the CBD and ILO Convention 107 that still need to be implemented). However, option 1 is clearly closer to the existing practices, as it basically extends the existing code of private international law in order to explicitly address disputes on the content of MAT. The impact of option 0 is unclear, as it depends on the way the obligations under the CBD and ILO Convention 107 would be implemented in a situation of non-ratification of the Nagoya Protocol.

Visual dominance analysis
No ideal point can be identified in the performance chart (Figure 8).

Ranking the alternatives
With our basic allocation key, option 1 stands out as the preferred solution (Figure 9): it performs better than, or at least as well as, the other options on all the criteria except criterion E1. However, the difference with option 2, which comes second, is so small that the ranking is of very little relative value. As can be observed in the performance chart, the only significant difference between options 1 and 2 is that option 1 clearly corresponds better to existing practices.
Option 0 is clearly the least preferred option.

Coming only at the end of the development chain, option 2 does not incentivize early users (if their utilization never makes it to the patent stage) to acquire GR legally, increasing the legal uncertainty of end users 219. Furthermore, the patent office currently covers only a very small proportion of the transactions concerned by the Nagoya Protocol. In order to be effective in preventing the misappropriation of GR, this option would need to be complemented with other checkpoints. But the small number of transactions covered by option 2 could provide better opportunities for the enforcement procedures for those GR it would cover 220. By linking the monitoring and the patenting process, it could be easier for the authorities to ensure the monitoring of GR likely to have high(er) commercial value. For users, option 2 also makes it possible to combine patenting and ABS obligations, hence limiting the redundancy of obligations. As indicated in chapter 9.5, option 2 will require amending the existing patent law, whereas option 1 does not require any extra legal drafting beyond what can be foreseen under the obligations regarding PIC and the ABS Clearing-House. Hence, option 2 is likely to generate higher legal costs than option 1.

 Impact on stakeholders:
o Coll.: As users, higher legal and process certainty with option 1 and limited obligation redundancy under option 2 (for users using patents)
o Gov. Res.: Higher legal and process certainty with option 1 and limited obligation redundancy under option 2 (for users using patents)
o Ag., health and biotech: Higher legal and process certainty with option 1 and limited obligation redundancy under option 2 (for users using patents)
o Univ.: Higher legal and process certainty with option 1 and limited obligation redundancy under option 2 (for users using patents)
o Land: Option 2 allows more effective enforcement and monitoring of the use of genetic resources
o Other: No impact

E2 - Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs
Option 0 would lead to non-ratification of the Nagoya Protocol, which would likely result in greater distrust on the part of provider countries and have a negative impact on the acquisition of GR from foreign countries, and thereby on the overall capacity of the Belgian GR sector to innovate. Options 1 and 2, on the contrary, are expected to increase trust and have a positive overall impact. Under option 2, users acquiring GR from third parties will face additional financial and administrative costs and efforts to make sure GR have been legally acquired, in order to avoid complications and unforeseen costs at the moment of patenting. However, the exact cost is unclear, as it will vary strongly depending on the type of user, their utilization of GR, the moment in the development chain at which they acquire GR and the possible combination with a self-monitoring scheme. On the other hand, for users acquiring GR mostly directly from developing provider countries, option 2 could also prove to be positive for the collection of GR. Being "the option favored by developing countries in the negotiations on the Protocol" 221, this measure could foster trust with partner countries and thus facilitate access to GR in these countries. No additional research costs are expected with options 0 and 1.
 Impact on stakeholders:
o Coll.: As users, under option 2, higher costs to make sure GR have been acquired legally (for users using patents)
o Gov. Res.: Under option 2, higher costs to make sure GR have been acquired legally (for users using patents)
o Ag., health and biotech: Under option 2, higher costs to make sure GR have been acquired legally (for users using patents)
o Univ.: Under option 2, higher costs to make sure GR have been acquired legally (for users using patents)
o Land: No impact
o Other: No impact

E3 - Minimizing implementation costs
The impact of option 0 under this criterion is unclear, as it will depend on the other measures taken by Belgium to comply with the CBD and ILO Convention 107. Options 1 and 2 are roughly equal in terms of costs related to the establishment of new institutions. Both options require an additional monitoring authority to be created. Although this new service would be hosted in the existing patent office under option 2, option 1 will most probably make use of the ABS CH (cf. chapter 9.5). Exact costs for the monitoring tasks are difficult to determine, as they will also depend on the interpretation of the term "monitoring", on possible requirements at EU level and on how other information-exchange measures, such as the ABS Clearing-House, are implemented. If the task of the checkpoint is understood as being limited to the collection and transfer of information, as is currently the case (see chapter 9.5), the cost is roughly limited to the cost of storing and handling the information in a database. This cost is likely to be very small under option 1, as the ABS CH will already be used for the collection of information regarding the implementation of the NP, including on Prior Informed Consent. If, on the other hand, the provided information is to be effectively monitored and verified by the entity collecting the information, the cost may be substantially higher. While monitoring costs for the patent office are likely to be reasonable 222, costs related to the monitoring of the very large amount of GR used in the country (full development of option 1) will be much higher. In the absence of specific figures on the utilization of GR in Belgium, it is difficult to assess this financial cost. The latter will also depend on the way the option is implemented in Belgium (cf. discussion on the phased approach above). Implementation costs for users will be distributed differently depending on which option is chosen. Whereas under option 1 the cost of providing the information is incurred by users utilizing GR on Belgian territory, under option 2 this cost is incurred by users wishing to patent a (semi-)finished product. However, the cost of providing the information will be small for users using legally acquired GR.

Social impact
S - Achievement of social objectives
Options 1 and 2 have no direct major impact on the social circumstances of the access to or benefit-sharing of GR, nor on related research, and these options are therefore not expected to have a substantial impact on socially relevant objectives. Option 0 could seriously threaten R&D, as access to GR in provider countries will be much more difficult if Belgium does not ratify the NP. With regard to the protection of traditional knowledge, in light of the current level of detail of the options, both options 1 and 2 offer the same possibilities to protect the TK of local communities in third countries.
Social impact
S -Achievement of social objectives
Options 1 and 2 have no direct major impact on social circumstances with regard to the access to or benefit-sharing of the GR, nor on related research; these options are therefore not expected to have a substantial impact on socially relevant objectives. Option 0 could seriously threaten R&D, as access to GR in provider countries will be much more difficult if Belgium does not ratify the NP. With regard to the protection of traditional knowledge, in light of the current level of detail of the options, both options 1 and 2 offer the same possibilities to protect the TK of local communities in third countries. However, option 1 could offer more opportunities: if all relevant information concerning GR utilized in the country could be monitored (not just collected), this would offer a higher level of protection against fraudulent use of TK, including in non-commercial settings. But as indicated earlier, this is difficult and costly to enforce, so this impact will depend on the way option 1 is implemented, should it be selected. Option 0 does not lead to the protection of TK.
Environmental impact
M -Promotion of conservation and sustainable use of biodiversity, including biodiversity research
Options 1 and 2 have no direct major impact on conservation or sustainable use of biodiversity. Better monitoring of the use of GR, and in particular of the PIC and MAT obligations under the Nagoya Protocol, is likely to lead to increased benefit-sharing effectively reaching provider countries, which could in turn benefit activities related to the conservation and sustainable use of biodiversity and raise awareness of the value of biodiversity for research and innovation. Moreover, both options 1 and 2, being in line with the demands of provider countries if properly implemented, could facilitate cooperation activities focusing on conservation and sustainable use of biodiversity, while option 0 could create the opposite effect. Option 0 could seriously threaten the generation of benefits for biodiversity conservation and sustainable use, as access to GR in provider countries will be much more difficult if Belgium does not ratify the NP.
Procedural impact
G1 -Flexibility to accommodate sectorial differences
The 0 option would lead to non-implementation of the Nagoya Protocol. The impact of this option on criterion G1 is unclear, as it would depend on what alternative measures would be taken to comply with the obligations under the CBD and ILO Convention 107, and on how Belgium would ensure legal consistency between these measures (taken as a non-Party) and the measures taken by other countries that are Party to the Protocol. Even though options 1 and 2 potentially apply equally to all sectors, adopting option 2 instead of option 1 could impose a relatively higher duty on users that are more heavily involved in commercial activities involving patenting. Hence, option 2 is less flexible in accommodating sectorial differences. However, as the upgrading of the patent disclosure procedure is not expected to impose any significant costs other than the amendment of the patent law (cf. chapter 9.5), the actual magnitude of this impact can be considered very low.
G2 -Temporal flexibility to allow for future policy adjustments
The 0 option would lead to non-implementation of the Nagoya Protocol. As discussed under G2 above (section G2 under MAT), this might lead to some additional flexibility in the short term, but would probably lead to higher adaptation costs at a later point in time. The implementation of the monitoring obligations under option 2 would require an amendment of the federal law transposing Directive 98/44/EC on the legal protection of biotechnological inventions to include such a new provision, while option 1 does not require any additional action (cf. chapter 9.5). Therefore, option 2 will be less flexible for future adjustments.
G3 -Improving knowledge for future policy development and evaluation
The 0 option would lead to non-ratification of the Nagoya Protocol and would not allow benefiting from the measures, in particular from the information generated by the PIC procedures, which would provide knowledge on the flow and the utilization of GR. If efficient, the broad-scale collection/reception of information related to PIC under option 1 could strongly contribute to the knowledge on the utilization of GR in Belgium, as most GR accessed after the entry into force of the NP would theoretically be covered. However, this has to be nuanced: the implementation of this measure would probably be phased, and the contribution to the knowledge base will therefore be incomplete in the first implementation phases, especially if the checkpoint's tasks are limited to collecting the information, without verification. Knowledge improvement under option 2 will be weak, as it only covers a relatively small subset of GR accessed for R&D and would not bring more knowledge on the transactions of GR than the existing disclosure of origin requirement in the patent legislation. However, upgrading the patent disclosure to a checkpoint recognized under the NP could lead to improved knowledge gathering. Indeed, in its current form the disclosure obligation is reported to be ineffective. At the same time, the patent authority is not competent to verify the correctness of the information provided by the user. Moreover, some stakeholders complain about the difficult implementation of the current disclosure obligation.
G4 -Correspondence with existing practices
The 0 option would lead to non-ratification of the Nagoya Protocol, which would go against the existing policy and stakeholder efforts to comply with the internationally adopted environmental commitments under the CBD and its Protocol. There are no existing practices on monitoring the use of GR; hence both option 1 and option 2 represent a change compared to existing practices. However, the requirement for the disclosure of origin is already included in the patent law. While it would still require amending the patent law, option 2 thus corresponds more closely than option 1 to existing practices, due to its link with the patent authority, which is already a necessary step for users wishing to patent their products.
Visual dominance analysis
Option 1 is the dominant alternative: compared to options 0 and 2, it scores at least as well on all criteria and is strictly better on the social criterion (note: E3 is undefined). However, this dominance has little relative value. In light of the preceding analysis of the performance and the impact on the stakeholders, what this chart shows is that little difference can be observed between the impact of using the ABS Clearing-House as a checkpoint and that of using the patent authority as a checkpoint. Furthermore, as mentioned earlier, these two options are not mutually exclusive: in a phased implementation approach, it can be envisioned to implement both. In this context, the procedural criteria G1, G2 and G3 give an advantage to option 1 over option 2 for earlier implementation.
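The dominance comparisons and the net flows reported in the figures of this chapter are produced with a PROMETHEE-type multi-criteria method (cf. the MCA methodology described earlier). As a purely illustrative sketch of how such net flows are computed, the following Python fragment applies PROMETHEE II with the "usual" preference function to three hypothetical alternatives; all criterion scores and weights below are invented for illustration and do not reproduce the study's actual impact grid.

```python
# Illustrative PROMETHEE II net-flow computation (hypothetical scores and
# weights, not the study's actual impact grid). Uses the "usual" preference
# function: preference is 1 as soon as alternative a outscores b on a criterion.

criteria = ["E1", "E2", "E3", "S", "M", "G1", "G2", "G3", "G4"]
weights = {c: 1 / len(criteria) for c in criteria}  # equalized weighting scenario

# Hypothetical impact-grid scores per alternative and criterion.
scores = {
    "option 0": {"E1": -2, "E2": -1, "E3": 0, "S": -1, "M": -1, "G1": 0, "G2": 0, "G3": -1, "G4": -1},
    "option 1": {"E1": 1, "E2": 1, "E3": 0, "S": 1, "M": 1, "G1": 1, "G2": 1, "G3": 1, "G4": 0},
    "option 2": {"E1": 1, "E2": 1, "E3": 0, "S": 0, "M": 1, "G1": 0, "G2": 0, "G3": 0, "G4": 1},
}

def usual_preference(diff: float) -> float:
    """'Usual' preference function: strict preference as soon as diff > 0."""
    return 1.0 if diff > 0 else 0.0

def net_flows(scores, weights):
    """Return the PROMETHEE II net flow (phi) of each alternative."""
    alts = list(scores)
    n = len(alts)
    phi = {}
    for a in alts:
        positive = negative = 0.0
        for b in alts:
            if a == b:
                continue
            # Weighted aggregated preference of a over b, and of b over a.
            positive += sum(weights[c] * usual_preference(scores[a][c] - scores[b][c]) for c in weights)
            negative += sum(weights[c] * usual_preference(scores[b][c] - scores[a][c]) for c in weights)
        phi[a] = (positive - negative) / (n - 1)
    return phi

for alt, flow in sorted(net_flows(scores, weights).items(), key=lambda kv: -kv[1]):
    print(f"{alt}: net flow = {flow:+.2f}")
```

Re-running `net_flows` with a different `weights` dictionary is, in essence, what the sensitivity analysis does: if the ranking of the alternatives is stable under alternative weighting scenarios, the resulting recommendation can be considered robust.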
Sharing of information through the ABS Clearing-House
Economic impact
E1 -Maximizing legal certainty and effectiveness for users and providers of GR
The 0 option would lead to non-ratification of the Nagoya Protocol and have a major negative impact on legal certainty and effectiveness, as Belgium would not benefit from the transparency and legal clarity advantages of the Protocol. Options 1, 2 and 3 would contribute equally to the objectives of legal certainty and effectiveness, albeit at a different cost, depending on the final requirements of the ABS CH, as indicated above. Indeed, RBINS is expected to be most cost-effective for the more general information tasks (as these are in line with its existing practices and expertise), while BELSPO and WIV-ISP are expected to be most cost-effective for the coordination of the more technical information (as this is in line with their existing practices and expertise).
E2 -Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs
Option 0 would lead to non-ratification of the Nagoya Protocol, which would likely result in higher distrust with the provider countries and have a negative impact on the acquisition of GR from foreign countries, and thereby on the overall capacity of users to innovate. Options 1, 2 and 3, on the contrary, as they contribute to the implementation of the Protocol, are expected to increase trust and have a positive overall economic impact. In general, the information tasks of the Belgian input point/component of the ABS CH are expected to generate information on exchanges of GR and on-going innovation activities with GR that is useful for R&D in Belgium. The cost-efficient contribution to research will however be higher if the chosen option also generates more information integration. From that perspective, options 1 and 2 are preferable over option 3, as they favor integration of the information handled by the CH with more existing biodiversity initiatives (option 1) or within existing powerful database infrastructures (option 2).
E3 -Minimizing implementation costs
The impact of option 0 under this criterion is positive, as it would lead to no additional costs. However, this option would lead to non-ratification of the Nagoya Protocol and would therefore still entail information obligations related to the alternative measures taken by Belgium to comply with the CBD and the ILO 107 Convention in the absence of ratification. All three options would potentially offer cost-effective solutions for the CH, as they all host some expertise relevant to the CH's tasks. The main criterion for comparing the cost-effectiveness of implementation amongst the options is the possible synergy with existing infrastructures and/or existing tasks under the CBD. For the general information tasks, this is likely to give an advantage to options 1 and 3 over option 2. Indeed, in terms of software, both RBINS and ISP/WIV already have an online portal for their respective tasks with the Clearing-House. Hence, creating an additional ABS CH portal might not be such a big additional cost. The system at RBINS has the additional advantage of being designed for easy replication, as it is used to set up national Clearing-House nodes in partner countries. For the coordination of the technical information, both BELSPO and ISP/WIV have existing infrastructure for the management of technical data that could support the information on the working of the PIC/checkpoints/ABS CH. Given this analysis, and pending the final outcome of the international negotiations on the ABS CH, it would seem logical to propose a collaboration of all three, as this would create the highest level of synergy with existing infrastructures.
Social impact
S -Achievement of social objectives
The impact of option 0 is likely to be negative on socially relevant objectives. On the contrary, the impact of options 1 and 2 is positive, while that of option 3 is neutral.
A particular social impact deserves to be highlighted. RBINS is running development projects on establishing CBD CHMs in partner countries. Entrusting RBINS with additional information tasks pertaining to NP issues could promote interesting synergies and additional capacity building in developing countries that are Party to the Protocol, for handling NP and CBD requirements in a coherent and efficient way.
Environmental impact
M -Promotion of conservation and sustainable use of biodiversity, including biodiversity research
The 0 option is likely to have a negative impact on conservation and sustainable use of biodiversity. On the contrary, the impact of options 1 and 2 is positive, while that of option 3 is neutral. RBINS is running development projects on conservation and sustainable use of biodiversity, including capacity building, biodiversity research and technology transfer, in partner countries. RBINS also organizes educative and communication actions towards stakeholders and broader awareness-raising campaigns on conservation and sustainable use of biodiversity. Reinforcing the role of RBINS in ABS would create synergies between ABS and the conservation/sustainable use initiatives, both nationally and internationally. Option 2 would be ideal for biodiversity research that contributes to sustainable development, as BELSPO already hosts the Biodiversity Platform, whose main task is to foster such research. BELSPO also hosts several other consultative bodies linking scientific and policy analysis and is involved at the international level in the digitalization of collection databases. ISP-WIV has little connection with conservation or sustainable use of biodiversity and would therefore be the least preferable option for this criterion.
Overall, options 1 and 2 are the preferred alternatives (Figure 12). They both perform similarly on most of the criteria, but option 1 has a better social impact. This outcome is confirmed in the equalized weighting scenario. However, in line with what can be observed from the visual dominance analysis, the difference between the two options should be nuanced, the preference for them being almost equal. As RBINS and BELSPO are related institutions, a combination of these two alternatives could produce an ideal outcome. Again, as for the checkpoints, it is important to note that both options 1 and 2 rank substantially better than the "0" option and option 3. An important note of caution is in order here: although this ranking might usefully inform decision-making on the choice between the options in Belgium, the final evaluation of the most appropriate mechanism will mainly depend on the decisions still to be taken globally on the ABS CH.
RECOMMENDATIONS ON INSTRUMENTS AND MEASURES RESULTING FROM THE IMPACT ASSESSMENT
As explained in chapters 5 and 8, the impact assessment in this study considers policy options for implementing 6 core measures that are minimally needed to implement the Nagoya Protocol in Belgium:
 Operationalizing Prior Informed Consent;
 Specification of the Mutually Agreed Terms;
 Establishment of the Competent National Authorities;
 Setting up compliance measures;
 Designation of one or more checkpoints;
 Sharing of information through the Clearing-House.
The impacts of the selected options for each of these measures were assessed in chapter 10 on a double comparative basis. First, the impacts of the options were compared to the impacts of the "no policy change" baseline (the "0" options).
Second, for each measure, the impacts of the options were compared amongst each other. Two general recommendations result from the analysis in chapter 10, along with a set of more specific recommendations for each of the measures.
First, the analysis shows that the "no policy change" baseline (the "0" option) for each measure clearly has the worst performance. Amongst other reasons, this is due to the lack of legal clarity that "no policy change" would entail for users in Belgium, and to the absence of the environmental benefits that implementing the Protocol would bring. This result leads to a first general recommendation, which is to implement both PIC and benefit-sharing as general legal principles in Belgium.
Second, the analysis confirmed the validity of a phased approach to the implementation of the Protocol, which is the second general recommendation. As seen throughout the impact assessment, a phased approach will make it possible to benefit from the implementation of the basic principles in a timely manner and to deal with more fine-grained choices at a later stage. These more fine-grained choices can then be based on the experience the administrations and/or users will gather on the utilization of Belgian and foreign genetic resources, through the operation of the Competent National Authorities, the checkpoints and the Clearing-House, amongst others. Moreover, the phased approach will be necessary in order to ratify the Nagoya Protocol before June 2014 and thus be able to participate as a Party in the next COP/MOP in October 2014. The phased approach could be organized as a 3-step process, consisting of:
1. In the first step, a political agreement in the form of a declaration of intent from the competent governments on the general legal principles, along with some specification of the actions to be undertaken by the federal and the federated entities to establish these principles and put them into practice.
2. In a second step, the specified actions would subsequently be implemented, for example through a cooperation agreement and/or by adding provisions to the relevant legislations, such as the environmental codes of the federated entities and the Federal Government, along with other possible requirements.
3. In a third step, additional actions can be undertaken once there is more clarity from the negotiations at the EU and the international level.
The impact assessment has led to a set of specific recommendations on each of the 6 measures that were analyzed. Not all the options have a clear preferred ranking in the basic weighting scenario, in part because of the ongoing discussions under the Nagoya Protocol, in particular regarding the modalities of the ABS Clearing-House. This result did not change when adjusting the weighting scenario through the sensitivity analysis. For these measures, the study recommends combining features of the best options that came out of the assessment. For 3 of the 6 measures, a clear first-best ranking came out of the impact assessment:
 For the establishment of the Competent National Authorities, a centralized input system clearly came out as the recommended option. This option scores best on all the criteria and is strictly better on legal certainty and effectiveness for users and providers of GR, at low cost.
 For the setting up of compliance measures, the option to refer back to provider country legislation regarding PIC and MAT, with Belgian law as a fallback, is the recommended option that comes out of this analysis.
This can be explained by the closer conformity of this option with existing practices (under the Belgian code of private international law).
 For the designation of one or more checkpoints, the option of using the PIC available in the Access and Benefit-sharing Clearing-House as a checkpoint stands as the recommended option. It allows timely ratification, while additional checkpoint systems could evolve from there, in particular by adding other checkpoints to further collect or receive, as appropriate, relevant information related to PIC, to the source of the genetic resource, to the establishment of MAT, and/or to the utilization of genetic resources (such as through an upgraded patent disclosure or monitoring of PIC upon public grants for research).
For the other 3 measures, more than one remaining best option came out of the assessment, or the remaining best options were very close:
 For the operationalization of PIC, the bottleneck option and the refined fishing net option came out very close. These options require establishing as a general legal principle that access to Belgian GR requires PIC. This could be included in a political agreement from the competent governments, expressing the intent to establish such a principle while specifying that this would be implemented afterwards, for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental codes. The two options also have a common component, namely the refinement of the PA/PS relevant legislation. Under this refinement, access to specimens under PA/PS relevant legislation223 would also be considered as PIC in the context of the Nagoya Protocol by the Belgian federated entities. This general principle would be included in the analogous provisions of the relevant legislation of the three Regions and at the federal level. The actual refinement could then be implemented in the third step (additional actions for further implementation), for example through executive acts specifying exactly which access provisions are considered as PIC224. Therefore, the recommendation that comes out of the analysis of the operationalization of PIC is to proceed with such a refinement of the PA/PS relevant legislation in the third step. As the two first-best options rank very close (and in a contrasted way on different criteria), the best way forward when considering GR beyond PA/PS might be to combine these options in a phased manner. Therefore, the recommendation resulting from the analysis is to implement first the 'fishing net' approach, with a general registration/notification requirement to the Competent National Authorities for GR outside PA/PS. In a later stage, the 'bottleneck' approach, through which access requests are processed through qualified Belgian ex-situ collections in conformity with the Nagoya Protocol, could be organized through a set of administrative arrangements between the Competent National Authorities and the collections. In addition, in this later stage the adjustment of other GR relevant legislation can be implemented as envisioned under the refined fishing net model225.
 For the specification of the Mutually Agreed Terms, the two options that impose specific BS requirements by the Belgian State both ranked better than the option where no specific BS requirements are imposed. Nevertheless, as the specification of Mutually Agreed Terms is not a prerequisite for ratification, this can be done in the third step of the implementation.
The choice between these two options can therefore be part of a later phase. The recommendation is therefore not to take action on this point before ratification (and thus, by default, to implement the "no specific benefit-sharing requirements" option) and to consider, at a later stage, a combination of the options introducing specific benefit-sharing requirements to further implement the Protocol. As indicated in chapter 9.2, this further specification would entail specifying rules for the specific BS requirements in relevant legislation, for example in the provisions of the environmental codes of the three Regions and at the federal level, including rules for the use of standard agreements for some types of uses if needed. It is considered under this option that the implementation of these rules will be done through executive orders of the federated entities.
 Finally, for the sharing of information through the ABS Clearing-House, the assessment makes a distinction between the basic information sharing tasks on Access and Benefit-sharing by the Clearing-House and the more technical tasks related to the organization of the technical information to be provided to the ABS Clearing-House mechanism. As the first task is already ongoing at the Royal Belgian Institute of Natural Sciences (RBINS), the recommendation is to further mandate the RBINS to fulfill the information sharing tasks on Access and Benefit-sharing under the Nagoya Protocol.
In line with the 3-step process outlined above, these recommendations could be phased as follows:
1. A political agreement from the competent governments on the general legal principles, including amongst others:
c. Establishment of the general principle concerning the designation of four Competent National Authorities, which will be implemented for example through a cooperation agreement and/or relevant legislations. This would be done by the respective authorities dealing with legislations and measures related to protected areas and protected species at the Regional level, and by the respective authority dealing with environmental issues at the federal level227 (IMP 3.1.1 (1)).
d. Commitment that legislative measures will be taken to provide that GR utilized within Belgian jurisdiction have been accessed with PIC and MAT as required by provider country legislation, and to address situations of non-compliance (IMP 4.1 (1); IMP 5.1).
e. The CBD CHM, managed by the RBINS, will be considered as the Belgian contribution to the ABS CH for dealing with the information exchange on ABS under the Nagoya Protocol; if required, further steps will be taken after the first COP/MOP to develop the correct modalities for the ABS CH (IMP 6.1 (1)).
2. Subsequent implementation of the principles stated in the political agreement, for example through a cooperation agreement and/or the introduction of analogous provisions in relevant legislations such as the environmental codes of the three Regions and at the federal level (cf. footnote 226) (IMP 1.1.1 (2); IMP 1.1.2 (2); IMP 3.1.1 (2)).
3. Subsequent legal and policy measures as soon as more clarity is provided at EU level and at the global level, and more practical experience is gained with the implementation of the NP. This will especially apply to the measures on compliance (after conclusion of the discussions on compliance at EU level and at COP/MOP1), the subsequent measures on PIC and MAT, and the administrative agreements to further implement the ABS Clearing-House provisions of the Protocol.
These subsequent measures might imply, at a later stage, the need for a second cooperation agreement (IMP 1.1.3, possibly with IMP 1.3.1 in addition; IMP 1.1.4/IMP 1.2.4 combined; IMP 2.2/IMP 2.3 combined; IMP 3.2.2; IMP 4.1 (2); IMP 6.1 (2)).
227 That is, as stipulated above, the "Agentschap voor Natuur en Bos" in the Flemish Region, the "Division de la nature et des forêts" in the Walloon Region, the "Institut Bruxellois pour la gestion de l'environnement" in the Brussels Region, and one authority to be established at the federal level, probably at the Directorate-General Environment of the Federal Public Service "Health, Food Chain Safety and Environment" (for GR that are not under the competences of the federated entities, such as marine GR and ex-situ GR held at federal institutions).
CONCLUSIONS
This study addresses the implementation in Belgium of the Nagoya Protocol on Access and Benefit-sharing to the Convention on Biological Diversity. For an appropriate understanding of the recommendations presented in this study, it is important to recall the various steps and the intermediary conclusions that have led to these recommendations. The report proceeds through four core phases. Chapters 2 to 5 analyze the current state of the art of ABS law and policy in Belgium (phase 1). Chapters 6 to 9 analyze, present and describe the different options for the minimal implementation of core measures stemming from the NP (phase 2). The argument in these chapters served as a basis for the choice of a set of options, selected by the Steering Committee and discussed with relevant stakeholders, for further study. Chapters 10 and 11 conduct a multi-criteria impact assessment of the selected options (phase 3) and conclude with a set of recommendations for the implementation of the Nagoya Protocol in Belgium (phase 4).
Conclusion of the first phase of the study
The main conclusion of the first step is that the existing legislation addressing physical access to genetic material and the instruments regulating benefit-sharing between users and providers of genetic resources need to evolve and be complemented by additional instruments in order to implement the obligations of the Protocol. In particular, under the current legislation in Belgium, access to GR is not subject to Prior Informed Consent (PIC) by the Belgian State as a Party to the NP (that is, based on a written decision by a Competent National Authority (CNA) on access and benefit-sharing). Even if it is not compulsory for compliance with the Nagoya Protocol, the Belgian State can nonetheless decide to subject access to its GR to a PIC requirement and take the necessary legislative, administrative or policy measures, as appropriate, to provide for access permits issued by one or more Competent National Authorities. Alongside the requirement for PIC, Belgium needs to require its users to share the benefits arising from the utilization of GR and TKaGR, based on mutually agreed terms. Further, for the implementation of the core obligations on compliance, monitoring through checkpoints and the ABS Clearing-House, additional legal measures need to be put into place. While the Belgian code of private international law already contains a set of principles that can be directly used for the implementation of the compliance provisions, these principles are insufficient to comply with the Nagoya Protocol.
In particular, the "utilization of GR under the Nagoya Protocol" is not explicitly mentioned within the current scope of the Belgian code of private international law. For the monitoring obligations, the Belgian patent law already requires the disclosure of information on the country origin of biological material in patent applications. However, this measure still needs to be completed by other measures in order to comply with Article 17.1 of the NP, as it is not organized nor designated as a formal checkpoint. Finally, a dedicated ABS Clearing-House for information sharing under the Nagoya Protocol will need to be put into place, whether simply as a node of the international ABS Clearing-House or as a separate Belgian ABS Clearing-House. It is important to highlight the provisional nature of these findings, as the on-going discussions on the implementation of the Nagoya Protocol in international and European fora will further influence the results of this analysis. This is particularly relevant for the issue of compliance, some aspects of which will be addressed in the EU regulation on the Implementation of the Nagoya Protocol, and the issue of information sharing through the ABS Clearing-House, as the international mechanism still needs to be clarified. Conclusion of the second phase of the study The main conclusion of the second step is the importance of a phased approach to the implementation, which would first address a set of options for minimal implementation. As such, the analysis lead to distinguish two categories of actions to be undertaken for the implementation: The detailed analysis of the first set of actions, has led to the formulation of a set of options for 6 implementation measures that were the basis of the multi-criteria impact assessment in the third step: 1. Operationalizing Prior Informed Consent 2. Specification of the Mutually Agreed Terms 3. Establishment of the Competent National Authorities 4. Setting up compliance measures 5. Designation of one or more checkpoints 6. Sharing of information through the Clearing-House 5. For the specification of the Mutually Agreed Terms, the two options that impose specific BS requirements by the Belgian State both ranked better than the option where no specific BS requirements are imposed. Nevertheless, as the specification of Mutually Agreed Terms can be done in the third step of the implementation, the choice between these two options can be part of a later phase. 6. Finally, for the sharing of information through the ABS Clearing-House, the assessment makes a distinction between the basic information sharing tasks on Access and Benefitsharing by the Clearing-House and the more technical tasks related to the organization of the technical information to be provided to the ABS Clearing-House mechanism, amongst others. The first task is already ongoing at the Royal Belgian Institute of Natural Sciences (RBINS). The recommendation from the analysis is therefore to further mandate the RBINS to fulfill the information sharing tasks on Access and Benefit-sharing under the Nagoya Protocol. In a second stage, administrative arrangements between this Clearing-House and other relevant institutions could be put into place to extend the tasks, as soon as more clarity is provided by the international negotiations. 
Option 3: Scientific Institute for Public Health (ISP/WIV) as ABS Clearing-House Figure 1 - 1 Figure 1 -Steps of the MCA 197 Belton V., Stewart T.J. (2002), Multiple criteria decision analysis: an integrated approach. Kluwer Academic Publishers; Podvezko V.,Podviezko A. (2010), Use and choice of preference functions for evaluation of characteristics of socioeconomical processes. 6th International Scientific Conference, 13th-14th May 2010, Vilnius, Lithuania; Brans J.P., Marechal B. (2002), Prométhée-Gaia. Une méthodologie d'aide à la décision en présence de critères multiples. Editions de l'Universite Libre de Bruxelles..198 The PROMETHEE-GAIA FAQ "How to choose the right preference function?"; http://www.promethee-gaia.net/faqpro/?action=Article&cat_id=003002&id=4&lang=; Another preference function can however be applied easily if needed. Figure 3 - 3 Figure 3 -Performance chart of the options for the operationalization of PIC Figure 4 - 4 Figure 4 -Net flows of the alternatives for operationalizing PIC (basic weighting scenario) certainty and effectiveness for users and providers of GR, at low cost Figure 5 - 5 Figure 5 -Performance chart of the options for the specification of MAT Figure 6 - 6 Figure 6 -Net flows of the alternatives for specification of MAT (basic weighting scenario) Figure 7 - 7 Figure 7 -Performance chart for the establishment of the CNA Figure 8 - 8 Figure 8 -Performance chart for the options setting up compliance measures Figure 9 - 9 Figure 9 -Net flows of the alternatives for setting up compliance measures (basic weighting scenario) Figure 10 - 10 Figure 10 -Performance chart for the options designating checkpoints  Impact on stakeholders: o Coll.: indirect benefit from increased legal certainty and effectiveness o Gov. Res.: indirect benefit from increased legal certainty and effectiveness o Ag., health and biotech: indirect benefit from increased legal certainty and effectiveness o Univ.: indirect benefit from increased legal certainty and effectiveness o Land: indirect benefit from increased legal certainty and effectiveness o Other: No impact E2 -Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs  Impact on stakeholders: o Coll.: indirect benefit from increased legal certainty and effectiveness o Gov. Res.: indirect benefit from increased legal certainty and effectiveness o Ag., health and biotech: indirect benefit from increased legal certainty and effectiveness o Univ.: indirect benefit from increased legal certainty and effectiveness o Land: indirect benefit from increased legal certainty and effectiveness o Other: No impact  Impact on stakeholders: o Coll.: No impact o Gov. Res.: No impact o Ag., health and biotech: No impact o Univ.: No impact o Land: No impact o Other: No impact Figure 11 - 11 Figure 11 -Performance chart of the alternatives for the ABS CH Figure 12 - 12 Figure 12 -Net flows of the alternatives for the ABS CH (basic weighting scenario) 1. A first set of actions, which form the basis of compliance with the NP and address the core obligations for the implementation of the NP in Belgium, including :  The establishment of National Competent Authorities and the National Focal Points (Article 13)  Conformity with the national legislation of the provider country and the contractual rules (Articles 15,16,17 and 18)  Access to genetic resources and traditional knowledge (Articles 6, 7 and 8). 
1. A first set of actions, which form the basis of compliance with the NP and address the core obligations for the implementation of the NP in Belgium, including:
- The establishment of the Competent National Authorities and the National Focal Points (Article 13)
- Conformity with the national legislation of the provider country and the contractual rules (Articles 15, 16, 17 and 18)
- Access to genetic resources and traditional knowledge (Articles 6, 7 and 8)
- Benefit-sharing (Articles 5 and 9)
- Monitoring of the use of genetic resources and the designation of one or several checkpoints (Article 17)
- Compliance with the legislations or the requirements of the provider country (Articles 15 and 16)
- Compliance with the Mutually Agreed Terms (MAT) (Article 18)
2. A second set of additional measures which are important elements during implementation of the obligations, but that are less urgent (going beyond the core obligations).

Overview of the relevant NP obligations (recovered fragments):
- Each Party: take measures, as appropriate, with the aim of ensuring that TK is accessed with PIC and MAT of the ILC holding TK. Applies to TK.
- Each Party: create conditions to promote and encourage biodiversity research, including simplified measures on access for non-commercial research; pay due regard to cases of present and imminent emergencies that threaten or damage human, animal or plant health; encourage users and providers to direct benefits towards conservation of biological diversity and sustainable use of its components. Applies to GR.
- Parties: consider the need for and modalities of a global multilateral benefit-sharing mechanism for 1) GR and TK that occur in transboundary situations or 2) for which it is not possible to grant or obtain PIC. Applies to GR+TK.
- Article 11 -a. subject: Each Party; b. obligation: endeavour to cooperate in instances where the same GR are found in situ within the territory of more than one Party. a. subject: Parties; b. obligation: as far as possible cooperate in cases of alleged violation of provider country legislation.
- Each Party: adoption of legislative, administrative or policy measures to provide that TK utilized within its jurisdiction has been accessed in accordance with PIC and MAT, in accordance with the legislation of the country where the ILCs are located; adoption of measures to address situations of non-compliance (Article 16).
- Each Party: adoption of measures to monitor and enhance transparency about the utilization of GR, which shall include a) the adoption of one or more checkpoints, b) encouraging the inclusion of provisions on the sharing of information on the implementation in MAT, c) encouraging the use of cost-effective communication tools and systems.

INBO Instituut voor Natuur- en Bosonderzoek
IPEN International Plant Exchange Network
IPR Intellectual Property Rights
ITPGRFA International Treaty on Plant Genetic Resources for Food and Agriculture
ICE Interministerial Conference on Environment
IUCN International Union for Conservation of Nature
LNE Department Leefmilieu, Natuur en Energie of the Flemish government
MAT Mutually Agreed Terms
MOP Meeting of the Parties
MOSAICC Micro-organisms Sustainable Use and Access Regulation International Code of Conduct
MS Member State
MTA Material Transfer Agreement
NBGB National Botanic Garden of Belgium
NFP National Focal Point
NP Nagoya Protocol
OECD Organization for Economic Co-operation and Development
PDO Protected Designation of Origin
PGI Protected Geographical Indication
PIC Prior Informed Consent
PROMETHEE Preference Ranking Organization Method for Enrichment of Evaluations
R&D Research and Development
RBINS Royal Belgian Institute of Natural Sciences
REIO Regional Economic Integration Organization
RMCA Royal Museum for Central Africa
SL Special Law
SMTA Standard Material Transfer Agreement
TFEU Treaty on the Functioning of the European Union
TK Traditional Knowledge
TKaGR Traditional Knowledge associated with Genetic Resources
TRIPS Trade-Related Aspects of Intellectual Property Rights
TSG Traditional Speciality Guaranteed
UN United Nations
UNCED United Nations Conference on Environment and Development
UNCLOS United Nations Convention on the Law of the Sea
UNEP United Nations Environment Programme
VAIS Vlaams Agentschap voor Internationale Samenwerking
WBI Wallonie-Bruxelles International
WHO World Health Organization
WIPO World Intellectual Property Organization
WTO World Trade Organization
WSSD World Summit on Sustainable Development

However, certain concerns regarding traditional knowledge and the rights of indigenous and local communities have been addressed in international instruments to which Belgium is a party, such as International Labour Organization (ILO) Convention No. 107 on Indigenous and Tribal Populations, ILO Convention No. 169 on Indigenous and Tribal Peoples, and the United Nations Declaration on the Rights of Indigenous Peoples.

By virtue of its sovereign rights over genetic resources, Belgium can choose whether or not to require users to obtain Prior Informed Consent (PIC) from the competent authority in order to access genetic resources within its jurisdiction.

Preliminary recommendations on the options for the implementation of the Nagoya Protocol
Although the Nagoya Protocol is recent, it implements the third objective of the CBD, which contains basic principles and provisions related to ABS, such as the sovereignty of States over their natural wealth and resources, the fair and equitable sharing of benefits, and the importance of local communities, indigenous populations and their traditional knowledge. Many Parties to the Convention around the world have therefore implemented a range of ABS measures, which can serve as useful experience for the implementation of the Nagoya Protocol. Based on the analysis of these experiences, two groups of preliminary recommendations could be established in this study regarding the options available for implementing the Nagoya Protocol in Belgium. The first group of recommendations concerns the instruments needed to implement the core obligations arising from the Protocol 5. The second group concerns additional measures to be taken into account during the implementation of the Protocol's obligations, but which go beyond the core obligations.
Recommendations concerning the core obligations:
- Clarify the conditions for access: as integral parts of the Protocol, the core obligations with which national users must comply when utilizing genetic resources in Belgium must be clearly set out.
- Determine the format of the Mutually Agreed Terms (MAT): once the Nagoya Protocol enters into force, users operating on Belgian territory will be under the obligation to share the benefits arising from the utilization of genetic resources. Such sharing will be based on Mutually Agreed Terms (MAT). However, the Nagoya Protocol does not impose a specific format for these MAT, which may be left to the discretion of the stakeholders or derive from guidelines and/or mandatory measures imposed by the State.
- Ensure that ABS contributes to the conservation and sustainable use of biodiversity: the implementation of the Nagoya Protocol should serve the two other objectives of the CBD, the conservation of biodiversity and the sustainable use of its components. This can be achieved, for example, by making the granting of PIC subject to mandatory benefit-sharing conditions or by setting up a "benefit-sharing fund" which redirects benefits towards conservation and sustainable use.
- Facilitate access for biodiversity-related research: in order to support and promote biodiversity-related research and to reduce the regulatory burden for non-commercial research using genetic resources, measures could be put in place to facilitate access to genetic resources for non-commercial biodiversity-related research.
- Establish Competent National Authorities (CNA): each Party must designate one or more competent national authorities responsible for granting access or, as applicable, issuing written evidence that access requirements have been met, and for advising on applicable procedures and requirements for access to genetic resources. Given the institutional reality in Belgium, more than one competent national authority may be established. This task is of the highest priority, since Belgium must communicate to the Secretariat of the Convention, no later than the date of entry into force of the Protocol for Belgium, the contact details of its national focal point and of its competent national authority or authorities.
- Give binding force to the provider country's legislation concerning PIC and MAT:
This obligation consists of giving binding force to the Mutually Agreed Terms (MAT) and to the Prior Informed Consent (PIC) of the provider country. This could be done by requiring, in Belgian legislation, compliance with the provider country's legislation concerning PIC and MAT, or by establishing an overriding mandatory rule in Belgian legislation requiring that prior informed consent be obtained and mutually agreed terms be concluded, where required by the provider country.

Measure 1: operationalizing access to genetic resources
0. Option 0 -No prior informed consent: no requirement of Prior Informed Consent (PIC) for the utilization of genetic resources and traditional knowledge in Belgium.
1. Option 1 -"Bottleneck" model: a. For protected genetic resources: access possible by refining the existing legislation relevant to protected areas and protected species. b. For non-protected genetic resources: access is allowed through the ex-situ collections.
2. Option 2 -"Fishing net" model: a. For protected genetic resources: access possible by refining the existing legislation relevant to protected areas and protected species. b. For non-protected genetic resources: access granted upon prior notification to the competent authority.
3. Option 3 -Modified "fishing net" model: a. For protected genetic resources and genetic resources already governed by existing relevant legislation: access possible by refining the existing legislation. b. For other genetic resources: access granted upon prior notification to the competent authority.

While benefit-sharing is enshrined as a general principle, the specific benefit-sharing provisions to be included in the Mutually Agreed Terms (MAT) must be specified (measure 2). These specific provisions can be left to the discretion of users (option 1), or imposed by the State with more or less standardization (options 2 and 3).

Measure 2: specifying the provisions for the Mutually Agreed Terms (MAT)
0. Option 0: no benefit-sharing for the utilization of genetic resources and traditional knowledge in Belgium.
1. Option 1: no specific provisions imposed by the competent authorities for the Mutually Agreed Terms. Users and providers are free to decide jointly on their content.
2. Option 2: imposition of specific provisions, including through standardized formats, for the Mutually Agreed Terms for certain uses, differentiated according to the purpose of access.
3. Option 3: imposition of specific provisions but without standardized formats for the Mutually Agreed Terms. While taking into account the required benefit-sharing conditions, the MAT are defined case by case by users and providers. The provisions are differentiated according to the purpose of access.

To comply with the Nagoya Protocol, one or more national authorities will have to be established (measure 3). Their task will be to grant access or, as applicable, to issue written evidence that access requirements have been met, and to advise on applicable procedures and requirements for access to genetic resources. To fulfill these tasks, the competent national authorities will also have to establish an entry point for users of genetic resources. This can be done separately, with each authority setting up its own entry point (option 1), or jointly, with a single entry point for the different authorities (option 2).
Measure 3: establishing one or more competent authorities
0. Option 0: no competent national authority in Belgium.
1. Option 1: establishment of competent national authorities, with a separate entry point for each authority.
2. Option 2: the competent national authorities are established, with a common entry point.

Once the Nagoya Protocol has entered into force in Belgium, it will be essential to put in place compliance measures to ensure that the genetic resources and traditional knowledge utilized on Belgian territory have been acquired in accordance with the law of the provider country (measure 4). This can be achieved by referring to the legislation of the provider country concerned and checking the content of the MAT on the basis of that same legislation, with Belgian law as a fallback option (option 1), or by establishing an internal overriding mandatory rule in Belgian law (option 2). Under this second option, Belgian legislation would refer only to the specific PIC and MAT obligations, as set by the provider country, without referring to the legislation in force in the provider country.

Measure 4: establishing compliance measures
0. Option 0: no introduction of legal provisions on compliance into Belgian law.
1. Option 1: a general criminal provision is created which refers to the provider country's legislation concerning PIC and MAT. The State enacts a general prohibition on utilizing genetic resources and traditional knowledge obtained in violation of the law of the provider country. Judicial review of the content of the MAT is carried out on the basis of the provider country's legislation, with Belgian law as a fallback option.
2. Option 2: a provision is created establishing the obligation to have obtained PIC and MAT from the provider country for the utilization in Belgium of foreign genetic resources, where these are required by the legislation of the provider (origin) country.

To comply with the Nagoya Protocol, at least one checkpoint must be created to monitor the utilization of genetic resources and traditional knowledge (measure 5). If Belgium decides to introduce checkpoints, their implementation could be carried out in several stages. To respect the political commitment to a rapid ratification of the Protocol, a first stage could consist of a minimal implementation requiring the creation of a single checkpoint. Two possible options seem relevant for this first stage, namely checking the users' Prior Informed Consent (PIC), which is available through the ABS Clearing-House (option 1), and/or strengthening the obligation to mention the geographical origin of biological material in patents (option 2). As options 1 and 2 are not mutually exclusive, a combined implementation could be envisaged.

Measure 5: designating one or more checkpoints
0. Option 0: no checkpoint is established to monitor the utilization of genetic resources and traditional knowledge.
1. Option 1: checking the user's Prior Informed Consent (PIC), which is available through the ABS Clearing-House.
2. Option 2: the patent office is called upon as checkpoint to monitor the utilization of genetic resources and traditional knowledge.

Finally, a Belgian component of or entry point to the ABS Clearing-House will be created to support the exchange of information on specific access and benefit-sharing measures under the Nagoya Protocol (measure 6). Even though discussions on the exact modalities of the ABS Clearing-House are still ongoing at the international level, three possible candidates have been identified: the Royal Belgian Institute of Natural Sciences (option 1), the Belgian Federal Science Policy Office (BELSPO) (option 2), and the Scientific Institute for Public Health (ISP-WIV) (option 3).

1. The establishment of Competent National Authorities should be accompanied by a centralized input system for the different authorities.
2. With regard to compliance measures, sanctions should be provided for in cases of non-compliance with the PIC and MAT requirements set by the provider country. For checking the content of the MAT, a provision in the Code of Private International Law should refer to the legislation of the provider country, with Belgian law as a fallback option.
3. At this stage of the implementation, the monitoring of the utilization of genetic resources and traditional knowledge by a checkpoint should be done on the basis of the PIC available in the ABS Clearing-House.
4. With regard to access to Belgian genetic resources, it is recommended, on the one hand, to refine the existing legislation relevant to protected areas and protected species and, on the other hand, to establish a general notification requirement for access to other genetic resources. Later stages of implementation can then introduce appropriate additional provisions and provide for other access requests to be processed by the ex-situ collections.
5. At this stage of the implementation, and apart from the general obligation to share benefits, no specific benefit-sharing provisions should be imposed for the Mutually Agreed Terms (MAT). A set of more standardized rules, including the possibility of using model agreements, can be envisaged at a later stage of implementation.
6. The Royal Belgian Institute of Natural Sciences should be mandated to fulfill the information-sharing tasks through the ABS Clearing-House, as required by the Nagoya Protocol.

The reason why such a political agreement is recommended is twofold. On the one hand, it offers a clear political commitment regarding the core obligations of the Nagoya Protocol: it specifies the intentions of the competent authorities, within the limits of the decisions already taken at European and international level at the time of the agreement. On the other hand, it does not prejudge the policy decisions to be taken by the different authorities and thus offers sufficient flexibility to adjust the implementation process at a later stage. This last point is particularly important given the many questions still pending at the current stage, both at European and international level, as indicated and taken into account in this report.
1. In the first step, a political agreement should be concluded between the competent authorities, comprising a clear statement of the general legal principles to be put in place, together with certain specifications of the actions to be undertaken by the federal State and the federated entities to give effect to these principles. This agreement should include:
a. Establishment of benefit-sharing as a general legal principle in Belgium.
b. Establishment of a general legal principle according to which access to Belgian genetic resources requires Prior Informed Consent (PIC).
c. Establishment of a general legal principle concerning the creation of four Competent National Authorities.
d. Commitment that legislative measures will be taken to ensure that genetic resources utilized under Belgian jurisdiction have been acquired with PIC and MAT, as set by the legislation of the provider country, and to respond to situations of non-compliance.
e. Designation of the Belgian CBD Clearing-House Mechanism, managed by the Royal Belgian Institute of Natural Sciences, as ABS Clearing-House, handling the exchange of information on access and benefit-sharing under the Nagoya Protocol.
2. In a second step, the specified actions will have to be implemented, for example by means of a cooperation agreement and/or by adding provisions to the relevant legislation, such as the environmental codes of the federated entities and the federal State, in addition to other possible requirements.
3. In a third step, additional actions can be undertaken once there is more clarity at European and international level.

Measure 6: sharing information through the ABS Clearing-House
This option scores best on all criteria; strictly speaking, it also scores better in terms of legal certainty and efficiency for the users and providers of genetic resources, at lower cost.
4. Option 0: no Belgian component of or entry point to the Clearing-House is created.
5. Option 1: the Royal Belgian Institute of Natural Sciences (RBINS) is designated as Clearing-House.
6. Option 2: the Belgian Federal Science Policy Office (BELSPO) is designated as Clearing-House.
7. Option 3: the Scientific Institute for Public Health (WIV) is designated as Clearing-House.

States are sovereign over their natural wealth and resources.
For a more complete historical account of the latest months prior to the adoption of the NP, see Chiarolla C. (2010), Making Sense of the Draft Protocol on Access and Benefit-sharing for COP 10. Idées pour le débat, Institut du Développement Durable et des Relations Internationales (IDDRI).
11 See Decision VI/24. Available at: http://www.cbd.int/decision/cop/?id=7198
12 World Summit on Sustainable Development. Plan of Implementation. Available at: http://www.johannesburgsummit.org/html/documents/summit_docs/2309_planfinal.htm (accessed 26th March 2013)
13 Paragraph 44(o) of the Johannesburg Plan of Implementation.
14 The FPS Economy also hosts the National Institute of Statistics, which is in charge of compiling the Belgian data on biodiversity, as well as the Belgian Office for Intellectual Property (DIE/OPRI).
The DIE/OPRI manages the attribution of industrial property titles, informs users about intellectual property, advises the Belgian governments and represents Belgium at the WIPO. The Office is advised by field experts and specialists gathered in thematic councils. Relevant councils include the Council for Plant Variety Rights and the Council for Intellectual Property.
79 Besluit van de Vlaamse Regering betreffende de toegankelijkheid van de bossen en de natuurreservaten, 05/12/2008
80 Arrêté ministériel du 23 octobre 1975 établissant le règlement relatif à la surveillance, la police et la circulation dans les réserves naturelles domaniales, en dehors des chemins ouverts à la circulation publique (M.B., 31 décembre 1975)
81 Arrêté de l'Exécutif régional wallon du 8 juin 1989 relatif à la protection des zones humides d'intérêt biologique (M.B., 12.09.1989)
82 Arrêté du Gouvernement wallon du 26 janvier 1995 organisant la protection des cavités souterraines d'intérêt scientifique (M.B., 18 mars 1995)
Regional Natural Reserves, Certified Natural Reserves, Forest Reserves, Natura 2000 Reserves.
90 Law of 20th January 1999 aiming to protect the marine environment falling under the jurisdiction of Belgium, M.B. 12th March 1999; Law of 22nd April 1999, M.B., 20 July 1999
91 Law of 20th January 1999 aiming to protect the marine environment falling under the jurisdiction of Belgium, M.B. 12th March 1999
102 CBD Secretariat, Traditional Knowledge and the Convention on Biological Diversity, available at http://www.cbd.int/doc/publications/8j-brochure-en.pdf
103 Overeenkomst tot wijziging van de partnerschapsovereenkomst tussen de leden van de groep van Afrika, het Caribisch gebied en de Stille Oceaan, enerzijds, en de Europese Gemeenschap en haar lidstaten, anderzijds, ondertekend te Cotonou op 23 juni 2000, BS: 30-04-2008; Overeenkomst inzake politieke dialoog en samenwerking tussen de Europese Gemeenschap en haar Lidstaten, enerzijds, en de Andesgemeenschap en haar Lidstaten (Bolivia, Colombia, Ecuador, Peru en Venezuela), anderzijds, en met de Bijlage, gedaan te Rome op 15 december 2003, BS: 03-06-2008; Internationaal Verdrag inzake plantgenetische hulpbronnen voor voeding en landbouw, gedaan te Rome op 6 juni 2002, BS: 21-12-2007

These instruments include:
- the 1957 International Labour Organization (ILO) Convention No. 107 on Indigenous and Tribal Populations;
- the ILO Convention No. 169 on Indigenous and Tribal Peoples; and
- the United Nations Declaration on the Rights of Indigenous Peoples.
Convention No. 107 is a broad development instrument, covering a wide range of issues such as land; recruitment and conditions of employment; vocational training, handicrafts and rural industries; social security and health; and education and means of communication. In particular, the provisions of Convention No. 107 with regard to land, territories and resources have a wide coverage and are similar to those of Convention No. 169. Convention No. 107 was ratified by 27 countries. It was revised during 1988-1989, through the adoption of Convention No. 169. Although Convention No. 107 is no longer open for ratification since the adoption of Convention No. 169, it is still in force for 28 States, including Belgium, a number of which have significant populations of indigenous peoples, and it remains a useful instrument in these cases as it covers many areas that are key for indigenous and local communities.
107 CCIEP (2006), Belgium's National Biodiversity Strategy 2006-2016. Belgian Coordination Committee for International Environment Policy, Directorate-General for the Environment.
The process of drafting the National Biodiversity Strategy was initiated by the Interministerial Conference for the Environment in June 2000. The Strategy was elaborated by a team representing the major actors in the field of biodiversity in Belgium. It acted as a contact group under the "Biodiversity Convention" Steering Committee. This Steering Committee was established under the Belgian Coordination Committee for International Environment Policy (CCIEP) under the auspices of the Interministerial Conference for the Environment, which endorsed the strategy on 26th October 2006.
108 C. Frison and T. Dedeurwaerdere (2006), Infrastructures publiques et régulations sur l'accès aux ressources génétiques et le partage des avantages qui découlent de leur utilisation pour l'innovation de la recherche des sciences de la vie. Accès, conservation et utilisation de la diversité biologique dans l'intérêt général. Enquête Fédérale Belge. Centre de Philosophie du Droit, Université Catholique de Louvain.
109 Ibid.

…is another similar Belgian initiative, taken by the Association of Botanical Gardens and Arboreta. It has developed a navigation system for sharing plant information from different databases in a common format. It is also worth noting that a Belgian Biodiversity Platform 117 was created by BELSPO in 2003, in the context of the Second Multi-annual Scientific Support Plan for a Sustainable Development Policy. The Platform functions as an interface between providers and users of biodiversity information. Other proposed ABS-related actions in this field closely relate to those in the development cooperation field, including capacity-building initiatives in Central Africa and the promotion of ex-situ conservation. In accordance with COP Decision V/26 of the CBD, a civil servant of the DG Environment of the FPS Environment currently ensures the function of national focal point on ABS. At federal level, a "long-term strategic vision for sustainable development to 2050" is currently under development. ABS concerns should be included.

The proposal was discussed during the first Environment Council of the European Union under the Irish Presidency, on 21st March 2013 123, as well as during a workshop on Access to Genetic Resources and Fair and Equitable Sharing of Benefits held on the 19th March 2012 in the European Parliament. Negotiations on the Regulation are still ongoing in the Council's Working Party on Environment. The European Parliament committee vote is scheduled for July 2013. Preceding the impact assessments, from October 2011 to December 2011, the European Commission also held a public consultation on the implementation and ratification of the Nagoya Protocol, with the aim of exploring the possible effects of the Protocol and of gathering concrete proposals on the practical challenges of the implementation. Results of this public consultation are publicly available on the European Commission website 124.
121 EC (2012b), Proposal for a Regulation of the European Parliament and of the Council on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization in the Union. COM(2012) 576 final
122 EC (2012), Impact Assessment accompanying the document "Proposal for a Regulation of the European Parliament and of the Council on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization in the Union", European Commission Staff Working Document, COM(2012) 576 final; IEEP, Ecologic and GHK (2012), Study to analyze legal and economic aspects of implementing the Nagoya Protocol on ABS in the European Union, Final report for the European Commission, DG Environment. Institute for European Environmental Policy, Brussels and London, April 2012
123 Council of the European Union (2013), Press Release of the 3233rd Council meeting Environment. Brussels, 21st March 2013
124 http://ec.europa.eu/environment/consultations/abs_en.htm

However, taking a more comprehensive understanding of Article 18.3, the recognition and enforcement of decisions on civil and commercial matters are ruled by EC Regulation 44/2001 (Brussels I) as well as the 2007 Lugano Convention on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters. The 2005 Convention on Choice of Court Agreements, adopted in the framework of the Hague Conference on Private International Law, is also a useful tool in this regard, as it sets rules for when a court must take jurisdiction or refuse to do so, where commercial parties have entered into an exclusive choice of court agreement. The Convention also provides for the recognition and enforcement of resulting judgments, with an option for States Parties to agree on a reciprocal basis to recognize judgments based on a choice of court agreement that was not exclusive. Moreover, various conventions could act as "effective measures regarding access to justice" (Article 18.3.a). Regarding the investigation procedure, Belgium did not ratify the 1970 Hague Convention on the taking of evidence abroad in civil or commercial matters. This convention mainly refers to "commissions rogatoires", through which a judge delegates his investigation powers through a limited mandate allowing another judge or judicial officer to execute an investigation act on his behalf in another jurisdiction. Nonetheless, Belgium ratified, amongst other applicable conventions, the Second Protocol to the 1959 European Convention on mutual assistance in criminal matters 127 and the 1965 Hague Convention on notification and communication abroad of judicial and extrajudicial acts in civil or commercial matters. Taking an extensive definition of "access to justice", it is relevant to mention that Belgium also ratified the Aarhus Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters 128.

Greiber T., Peña Moreno S., Åhrén M., Carrasco J.N., Kamau E.C., Cabrera Medaglia J., Julia Oliva M., Perron-Welch F. in cooperation with Ali N. and Williams C. (2012), An Explanatory Guide to the Nagoya Protocol on Access and Benefit-sharing. IUCN, Gland, Switzerland, op. cit.
143 See: http://www.environment.gov.au/biodiversity/science/access/model-agreements/index/html
144 Burton G. (2009), op. cit.
145 Wynberg R., Taylor M. (2009), op. cit.
146 Santili J. (2009), op. cit.
147 Burton G. (2009), op. cit.
148 Santili J. (2009), op. cit.
Ley que Establece el Régimen de Protección de los Conocimientos Colectivos de los Pueblos Indígenas vinculados a los Recursos Biológicos, 2002. Ley No 27811, Comisión Permanente del Congreso de la República del Perú; Article 76 of the Biodiversity Law, No 7788, Legislative Assembly of the Republic of Costa Rica, 30th April 1998
154 Wynberg R., Taylor M. (2009), op. cit.
155 UNEP, Natural Justice and IUCN (2011), Report of the International Experts Meeting on Access & Benefit-sharing and Protected Areas, Gland, Switzerland, 6th-8th July 2011
156 Article 8 of the African Model Legislation for the Protection of the Rights of Local Communities, Farmers and Breeders, and for the Regulation of Access to Biological Resources, 2000
161 Article 13(2) of the Nagoya Protocol
162 Article 14 of the Biodiversity Law, No 7788, Legislative Assembly of the Republic of Costa Rica, 30th April 1998
163 Article 6 of the Regulations on Bio-Prospecting, Access and Benefit-sharing. Government Gazette No. 30739, 8th February 2008, Republic of South Africa
164 Article 7(1) of the African Model Legislation for the Protection of the Rights of Local Communities, Farmers and Breeders, and for the Regulation of Access to Biological Resources, 2000
165 For example in Kenya, the National Environment Management Authority (NEMA) collects all the necessary permits, issued by other authorities, before granting the access permit. See Kamau E.C., Winter G. (2009), Streamlining Access Procedures and Standards. In Kamau E.C. and Winter G. (Eds.), Genetic Resources, Traditional Knowledge & the Law. Solutions for Access & Benefit-sharing. London: Earthscan
166 Young T.R. (2009), op. cit.

Option 2 -Require an environmental impact assessment of collection, prior to access
Possible advantages:
- Has already a legal basis for certain types of access
Possible disadvantages:
- High administrative and financial burden/cost for users
- May be ineffective, as collecting a sample probably does not have a major environmental impact
Option 3 -Establish a "benefit-sharing" fund or other mechanism which redirects the benefits
Possible advantages:
- Allows for in-depth monitoring of distribution of benefits
Possible disadvantages:
- Institutional burden
Option 4 -Integrate ABS in biodiversity policies
Possible advantages:
- Links the 3 objectives of the CBD together
- ABS could benefit from more political attention through biodiversity policy
- Could generate synergies between policies/actions/actors/administrations
Possible disadvantages:
- ?
EVALUATION
Options 1 and 4 recommended. The other options are only recommended if they do not lead to a disproportionately high institutional burden.

7.1.4 Action card -Facilitate access for biodiversity-related research
Description: In order to foster biodiversity-related research, Belgium could develop additional measures to facilitate access to GR
Related Article of the NP: 8
Nature of the measure: Administrative and/or legal
Priority for Belgium: yes
Examples of relevant existing measures in Belgium: for example, in Flanders, Article 57bis of the Natuurdecreet allows access to real property for research conducted by public servants and related to nature conservation
Option 1 -Exempt biodiversity-related research by certain actors from any access requirements
Possible advantages:
- Has already a legal basis in (part of) Belgium (cf. example of existing measure)
Possible disadvantages:
- Does not allow for post-access monitoring
- Difficult to impose on private-sector legal owners
Option 2 -Facilitated access measures for non-commercial biodiversity-related research (with a return clause before entering a commercial phase)
Possible advantages:
- Allows to settle BS for the commercial phase at a later stage, based on a clearer view of the potential value of the GR
- Lowers the administrative burden for non-commercial research at the time of access
Possible disadvantages:
- Requires efficient monitoring of utilization
- Requires a return clause to make sure users do come back when entering the commercialization phase
EVALUATION
Options 1 and 2: both are recommended measures if they would lead to simplifying the access procedure (the relevance of such an option would thus depend on the complexity of the proposed default procedures).
They contribute to a core objective of the Nagoya Protocol and can build upon existing legal measures.

7.1.5 Action card -Establish CNA
Description: Each Party has to designate a CNA that grants access, issues written evidence that access requirements have been met and advises users on applicable procedures and requirements to get access to GR
Related Article of the NP: Article 13
Nature of the measure: Institutional
Priority for Belgium: yes
Examples of relevant existing measures in Belgium: Article 22, Flemish Soortenbesluit: access to protected species needs to be approved by the "Agentschap voor Natuur en Bos" of the Flemish government
Option 1 -Designate one existing institution as CNA
Possible advantages:
- Low institutional cost, as institution(s) already exist
- Low financial cost, as tasks would only be an addition to existing tasks

Description: Incentives might be efficient complementary tools to the enforcement mechanism.
Related Article of the NP: 15
Nature of the measure: Administrative
Priority for Belgium: yes
Option 1 -Set up financial incentives (tax reductions, rebates, …) for complying users
Possible advantages:
- Could foster greater compliance motivation among (private) users
Possible disadvantages:
- Would have to transfer part of the extra cost arising out of BS to the state
- Could favor important users who can more easily share benefits
Option 2 -Set up structural incentives (e.g., special priority for other filings, permits or opportunities, (facilitated) access to special materials, programs, funds, …) for complying users
Possible advantages:
- Could foster greater compliance motivation among (private) users
- Lower financial cost than financial incentives
Possible disadvantages:
- Could favor important users who can more easily share benefits
Option 3 -Set up positive publicity measures (e.g. a label) for complying users
Possible advantages:
- Could foster greater compliance motivation among (private) users
Possible disadvantages:
- Labels need to be established and monitored
- Could favor important users who can more easily share benefits
EVALUATION
The 3 options are potentially interesting and deserve further analysis.
7.2.6 Action card -Encourage the development of model clauses, codes of conduct and guidelines
Description: Encourage the development of model clauses, codes of conduct and guidelines, to help stakeholders develop appropriate agreements when exchanging GR
- Guidelines: non-mandatory provisions aiming to facilitate the exchange of GR and generalize best practices
- Code of conduct: set of rules outlining the responsibilities of stakeholders when exchanging GR (e.g. IPEN)
- Model contractual clauses: specific clauses to be included in an ABS contract (e.g. ECCO core MTA)
Related Articles of the NP: 19 and 20
Nature of the measure: Administrative
Priority for Belgium: yes
Examples of relevant existing measures in Belgium: BCCM's MOSAICC; ECCO core MTA; IPEN Code of conduct
Option 1 -Rely upon current ABS practices of stakeholders
Possible advantages:
- Low administrative burden
- Gives responsibility to the sectors, taking into consideration sector-specific aspects
Possible disadvantages:
- Effectiveness might be doubtful
- Does not allow to address differences in bargaining power between stakeholders
Option 2 -Develop ABS guidelines
Possible advantages:
- Could build on existing measures (MOSAICC)
- Part of the stakeholders already use it
Possible disadvantages:
- Difficult to impose on private-sector users
Option 3 -Develop model contractual clauses or a mandatory code of conduct
Possible advantages:
- Increases control by the state on the content of ABS agreements
- Could build on existing measures (IPEN, ECCO core MTA)
- Could combine mandatory and non-mandatory provisions
Possible disadvantages:
- Provides less flexibility for users
- Difficult to establish one model that fits all types of utilization
- Could conflict with models of the contracting Party
- Higher administrative burden for the authority

Table 6 -List of indicators
Criteria | Selected indicators for assessment
E1 Legal certainty and effectiveness for users and providers of GR, at low cost | Legal certainty: IE1: consistency and predictability of the rules and the process in place. Effectiveness of the legal framework: IE2: enforceability (the level with which an option allows the ABS regulation to be enforced); IE3: limiting redundancy (if existing legislations regulate related obligations); IE4: proximity with other international agreements.
E2 Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs | IE5: maximize research and development opportunities for users and providers of GR; IE6: allow economic and research stakeholders to comply with the NP at reasonable costs (negotiation costs, costs related to acquisition/transfer of GR, etc.)
E3 Minimizing implementation costs | IE7: minimize administrative costs related to keeping track of the ABS agreements (including monitoring costs); IE8: minimize financial costs for the creation of / changes in institutions (including costs for asking for legal advice)
S Achievement of social objectives | IS1: job creation/preservation in the sectors utilizing genetic resources (including through management support, collaboration programs amongst companies, educational programs, etc.)
S Achievement of social objectives (continued) | IS2: maximize research and innovation opportunities in socially relevant fields such as health, nutrition and food security; IS3: support to small and medium enterprises; IS4: transfer of knowledge and technologies to developing countries; IS5: effective protection of the rights of indigenous and local communities over their traditional knowledge associated with GR
M Promotion of conservation and sustainable use of biodiversity, including biodiversity research | IM1: helping ensure fair and equitable benefit-sharing; IM2: more predictable conditions for access (including through creating greater legal certainty for users/providers of GR); IM3: encouraging advancement of research on GR and biodiversity; IM4: creating incentives to conservation and sustainable use of GR (for ex. through recognizing their value and through benefit-sharing, through capacity building and technology transfer); IM5: enhancing the contribution of biodiversity to development

Table 7 -Scoring system of the impact grid
Likelihood | Magnitude | If positive effect | If negative effect
High | Strong | +++ | ---
Medium | Strong | ++ | --
High | Medium | ++ | --
Medium | Medium | + | -
High | Weak | + | -
Low | Strong | 0 | 0
Medium | Weak | 0 | 0
Low | Medium | 0 | 0
Low | Weak | 0 | 0

199 EC (2012), Impact Assessment accompanying the document "Proposal for a Regulation of the European Parliament and of the Council on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization in the Union", European Commission Staff Working Document, COM(2012) 576 final
200 As mentioned under IMP 1.3, the refined fishing net would consider any other legislation where notification/registration/permit exist and specify that such notification/registration/permit is also considered as a PIC under the Nagoya Protocol. This option is part of the later implementation steps (step 3 of the implementation, cf. chapter 11).
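The Table 7 rule is deterministic and can be expressed compactly. The sketch below (in the same language as the earlier sketch; the function and dictionary names are illustrative, not taken from the study) maps a likelihood/magnitude pair and the direction of the expected effect to the symbolic score used in the impact tables that follow.

```python
# Sketch of the Table 7 scoring rule: (likelihood, magnitude) -> score level,
# with the sign taken from the direction of the expected effect.
SCORE_LEVELS = {
    ("High", "Strong"): 3,
    ("Medium", "Strong"): 2, ("High", "Medium"): 2,
    ("Medium", "Medium"): 1, ("High", "Weak"): 1,
    ("Low", "Strong"): 0, ("Medium", "Weak"): 0,
    ("Low", "Medium"): 0, ("Low", "Weak"): 0,
}

def impact_score(positive: bool, likelihood: str, magnitude: str) -> str:
    level = SCORE_LEVELS[(likelihood, magnitude)]
    if level == 0:
        return "0"
    return ("+" if positive else "-") * level

# Matches, e.g., the Table 8 entry for E1 / Option 1 (Positive, High, Strong):
print(impact_score(True, "High", "Strong"))      # +++
print(impact_score(False, "Medium", "Medium"))   # -
```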
E1 -Legal certainty and effectiveness for users and providers of GR, at low cost
Impact on stakeholders:
o Coll.: impacted under options 0, 1, 2 and 3, depending on level of legal certainty
o Gov. Res.: impacted under options 0, 1, 2 and 3, depending on level of legal certainty
o Ag., health and biotech: impacted under options 0, 1, 2 and 3, depending on level of legal certainty
o Univ.: impacted under options 0, 1, 2 and 3, depending on level of legal certainty
o Land: impacted under options 0, 1, 2 and 3, depending on level of legal certainty
o Other: None
E2 -Maximizing economic innovation and product development (in particular through its contribution to R&D) at reasonable financial and administrative costs 199 200
Impact on stakeholders:
o Coll.: Limited administrative costs per transaction under option 1 (as providers) and under options 2 and 3 (as users). If centralization of access in qualified collections (without prejudice to CNAs), possible increase of costs.
o Gov. Res.: Limited administrative costs per transaction under options 1, 2 and 3.
o Ag., health and biotech: Limited administrative costs per transaction under options 1, 2 and 3.
o Univ.: Limited administrative costs per transaction under options 1, 2 and 3.
o Land: Limited administrative costs per transaction under options 1, 2 and 3 (as providers).
o Other: No impact

Table 8 -Economic impact of the options for the operationalization of PIC
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
E1 | Option 0 | Negative | High | Strong | ---
E1 | Option 1 | Positive | High | Strong | +++
E1 | Option 2 | Positive | Medium | Medium | +
E1 | Option 3 | Positive | High | Medium | ++
E2 | Option 0 | Negative | Medium | Medium | -
E2 | Option 1 | / | Medium | Weak (*) | 0
E2 | Option 2 | Positive | Medium | Medium | +
E2 | Option 3 | Positive | Medium | Medium | +
E3 | Option 0 | Negative | High | Weak | -
E3 | Option 1 | Negative | High | Medium | --
E3 | Option 2 | Negative | High | Medium | --
E3 | Option 3 | Negative | High | Medium | --
(*) The main reason for ranking "weak" instead of "medium" for this option is the possible financial cost of storage (cf. discussion under E2).

Impact on stakeholders:
o Coll.: Increasing R&D could trigger job creation
o Gov. Res.: Increasing R&D could trigger job creation
o Ag., health and biotech: Increasing R&D could trigger job creation
o Univ.: Increasing R&D could trigger job creation
o Land: Could help to build institutional capacity with small land owners, in particular on ABS issues related to use of biodiversity for R&D
o Other: Indirect positive effects for society as a whole through innovation and new available products in the field of food security, health and nutrition

Table 9 -Social impact of the options for the operationalization of PIC
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
S | Option 0 | Negative | Unclear | |
S | Option 1 | Positive | Unclear | |
S | Option 2 | Positive | Unclear | |
S | Option 3 | Positive | Unclear | |

Impact on stakeholders:
o Coll.: More opportunities for own (taxonomic) research due to increased deposits under option 1
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: Could increase awareness on ABS related to use of biodiversity for R&D
o Other: Indirect positive effects for society through increase of the knowledge base and closer monitoring of benefits under option 1 and through refined PA/PS under option 3

Table 10 -Environmental impact of the options for the operationalization of PIC
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
M | Option 0 | Negative | Medium | Medium | -

207 Yun M. (2005), Economic Valuation of Biodiversity, Final Report for the MOSAICS Project. Korea Institute for International Economic Policy; Rausser G.C., Small A. (2000), Valuing Research Leads: Bioprospecting and the Conservation of Genetic Resources. Journal of Political Economy, 108(1), pp. 173-206; Biber-Klemm S., Martinez S.I., Jacob A. (2010), Access to Genetic Resources and Sharing of Benefits -ABS Program 2003-2010, Swiss Academy of Sciences, Bern, Switzerland; Martin B.R. and Tang P. (2007), The benefits from publicly funded research. SPRU Working Paper: 161; Stromberg P., Dedeurwaerdere T., Pascual U. (2007), An empirical analysis of ex-situ conservation of microbial diversity. Presentation at the 9th International BIOECON Conference, Kings College Cambridge, 19th-20th September 2007; Täuber et al. (2011), op. cit.
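For feeding tables such as Table 8 into the multi-criteria computation sketched earlier, the symbolic scores can be converted back to integers. The snippet below is again only an illustration: the decision to map "+++" to +3 (and so on) and the option labels as dictionary keys are assumptions, not part of the study's published method.

```python
# Converting the symbolic scores of the impact tables into integers so that
# they can feed the PROMETHEE sketch above (the +3..-3 mapping is assumed).
def to_numeric(symbol: str) -> int:
    symbol = symbol.strip()
    if symbol in ("", "0"):
        return 0
    return len(symbol) if symbol.startswith("+") else -len(symbol)

# Rows of Table 8 (economic impact of the options for operationalizing PIC):
table8 = {
    "E1": {"Option 0": "---", "Option 1": "+++", "Option 2": "+",  "Option 3": "++"},
    "E2": {"Option 0": "-",   "Option 1": "0",   "Option 2": "+",  "Option 3": "+"},
    "E3": {"Option 0": "-",   "Option 1": "--",  "Option 2": "--", "Option 3": "--"},
}

for criterion, row in table8.items():
    print(criterion, {opt: to_numeric(s) for opt, s in row.items()})
```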
The resulting legal uncertainty (see also E1) is likely to lead to less utilization of Belgian GR in research and development and thus potentially hamper economic innovation and product development substantially 211 212. The expected impact of this hypothetical situation for the Belgian providers and users can be illustrated with the historical example of the legal vacuum, between 1992 and 1994, of the international network of the CGIAR collections 213.
211 Tvedt M.W., Fauchald O.K. (2011), Implementing the Nagoya Protocol on ABS: A Hypothetical Case Study on Enforcing Benefit-sharing in Norway. The Journal of World Intellectual Property, 14: p. 392
212 Täuber et al. (2011), op. cit.
213 CGIAR: Consortium for International Agricultural Research. As documented in the literature, this legal vacuum led to a temporary, but spectacular, decrease by over 50% of the use of the GR of these collections; see Byerlee D. and Dubin J. (2010), Crop improvement in the CGIAR as a global success story of open access and international collaboration. International Journal of the Commons, 4(1).

Impact on stakeholders:
o Coll.: Incurring less implementation costs under option 2 and, to a lesser extent, option 3, both as users and providers.
o Gov. Res.: Incurring less implementation costs under option 2 and, to a lesser extent, option 3.
o Ag., health and biotech: Incurring less implementation costs under option 2 and, to a lesser extent, option 3. More flexibility under option 3.
o Univ.: Incurring less implementation costs under option 2 and, to a lesser extent, option 3.
o Land: Incurring less implementation costs under option 2 and, to a lesser extent, option 3.
o Other: No impact

Table 11 -Economic impacts of the options for the specification of MAT
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
E1 | Option 0 | Negative | High | Strong | ---
E1 | Option 1 | / | Medium | Weak | 0
E1 | Option 2 | Positive | Medium | Medium | +
E1 | Option 3 | Positive | Medium | Medium | +
E2 | Option 0 | Negative | High | Medium | --
E2 | Option 1 | Positive | Medium | Medium | +
E2 | Option 2 | Positive | Medium | Medium | +
E2 | Option 3 | Positive | High | Medium | ++
E3 | Option 0 | Unclear | | |
E3 | Option 1 | Positive | Medium | Weak | 0
E3 | Option 2 | Positive | Medium | Medium | +
E3 | Option 3 | / | Medium | Weak | 0

Impact on stakeholders:
o Coll.: As provider, more opportunity to contribute to social objectives with BS as horizontal principle.
o Gov. Res.: Contribution to R&D can lead to job creation and contribute to social objectives
o Ag., health and biotech: Contribution to R&D can lead to job creation and contribute to social objectives
o Univ.: Contribution to R&D can lead to job creation and contribute to social objectives
o Land: As provider, more opportunity to contribute to social objectives with BS as horizontal principle
o Other: the adoption of BS as a horizontal principle might also have (at least indirect) positive effects on socially important sectors

Table 12 -Social impacts of the options for the specification of MAT
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
S | Option 0 | Negative | Medium | Medium | -
S | Option 1 | / | Medium | Weak | 0
S | Option 2 | Positive | High | Medium | ++
S | Option 3 | Positive | High | Weak | +

Environmental impact
M -Promotion of conservation and sustainable use of biodiversity
It is likely that BS, as a horizontal principle, would have positive effects on conservation and sustainable use of biodiversity, in particular when non-monetary and monetary benefits, as included in the MAT, are directed towards the objectives of conservation and sustainable use of biodiversity. Conversely, the contribution of option 0 can be considered negative. As for the social objectives, options 3 and 2 offer a better opportunity for the Belgian authorities to control the types of benefits being shared and to monitor whether they contribute to conservation and sustainable use of biodiversity.

Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: Option 1 offers less possibility to channel benefits towards conservation and sustainable use
o Other: BS as a horizontal principle is expected to offer positive environmental effects for society as a whole
Table 13 -Environmental impacts of the options for the specification of MAT
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
M | Option 0 | Negative | Medium | Medium | -
M | Option 1 | Positive | Medium | Weak | 0
M | Option 2 | Positive | Medium | Medium | +
M | Option 3 | Positive | Medium | Medium | +

Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: Foreign users can benefit from lower implementation costs under option 2
o Univ.: Foreign universities can benefit from lower implementation costs under option 2
o Land: No impact
o Other: No impact

Table 14 -Economic impacts of the establishment of the CNA
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
E1 | Option 0 | Negative | High | Strong | ---
E1 | Option 1 | / | Medium | Weak | 0
E1 | Option 2 | Positive | Medium | Medium | +
E2 | Option 0 | Negative | High | Strong | ---
E2 | Option 1 | / | Low | Medium | 0
E2 | Option 2 | / | Low | Medium | 0
E3 | Option 0 | Unclear | | |
E3 | Option 1 | Negative | High | Medium | --
E3 | Option 2 | Negative | High | Medium | --

Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: No impact
o Other: No impact

Table 15 -Social impacts of the establishment of the CNA

M -Promotion of conservation and sustainable use of biodiversity, including biodiversity research
Option 0 would not allow keeping track of the access requests and analyzing the ways in which the resources are accessed, missing a major opportunity to enhance knowledge which could be used for the improvement of conservation and sustainable use. Options 1 and 2 do not lead to a different impact on the environment.
Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: No impact
o Other: No impact

Table 16 -Environmental impacts of the establishment of the CNA
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
M | Option 0 | Negative | High | Strong | ---
M | Option 1 | / | Low | Weak | 0
M | Option 2 | / | Low | Weak | 0

Table 17 -Economic impacts of the compliance measures
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
E1 | Option 0 | Negative | High | Strong | ---
E1 | Option 1 | Positive | Medium | Medium | +
E1 | Option 2 | Positive | Medium | Medium | +
E2 | Option 0 | Negative | High | Medium | --
E2 | Option 1 | Positive | Medium | Weak | 0
E2 | Option 2 | Positive | Low | Weak | 0
E3 | Option 0 | Unclear | / | / | /
E3 | Option 1 | Unclear | / | / | /
E3 | Option 2 | Unclear | / | / | /

Impact on stakeholders:
o Coll.: As users, possible additional costs
o Gov. Res.: Possible additional costs
o Ag., health and biotech: Possible additional costs
o Univ.: Possible additional costs
o Land: No impact as providers
o Other: No impact

Table 18 -Social impacts of the compliance measures
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
S | Option 0 | Negative | High | Medium | --
S | Option 1 | Positive | Medium | Medium | +
S | Option 2 | Positive | Medium | Medium | +

Impact on stakeholders:
o Coll.: Capacity for social innovation is limited.
o Gov. Res.: Capacity for social innovation is limited.
o Ag., health and biotech: Capacity for social innovation is limited.
o Univ.: Capacity for social innovation is limited.
o Land: No impact.
o Other: Options 1 and 2 both offer opportunities for increasing protection of ILCs and TK in provider countries
Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: A better level playing field, within an effective regime, would benefit awareness raising on biodiversity issues more generally, and in in situ environments in particular.
o Other: capacity building and technology transfer to third countries for conservation and sustainable use is easier if the NP is implemented

Table 19 -Environmental impacts of the compliance measures
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
M | Option 0 | Negative | High | Medium | --
M | Option 1 | Positive | Medium | Medium | +
M | Option 2 | Positive | Medium | Medium | +

Impact on stakeholders:
o Coll.: As users, impact dependent upon type of utilization and moment of acquiring GR
o Gov. Res.: impact dependent upon type of utilization and moment of acquiring GR
o Ag., health and biotech: impact dependent upon type of utilization and moment of acquiring GR
o Univ.: impact dependent upon type of utilization and moment of acquiring GR
o Land: No impact
o Other: No impact

Table 20 -Economic impacts of the options for designating checkpoint(s)
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
E1 | Option 0 | Negative | High | Strong | ---
E1 | Option 1 | Positive | Low | Medium | +
E1 | Option 2 | Positive | Low | Medium | +
E2 | Option 0 | Negative | Medium | Medium | -
E2 | Option 1 | / | Medium | Weak | 0
E2 | Option 2 | / | Medium | Weak | 0
E3 | Option 0 | Unclear | | |
E3 | Option 1 | Unclear | | |
E3 | Option 2 | Unclear | | |

Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: No impact
o Other: Higher level of protection of TK of communities in third countries under option 1

Table 21 -Social impacts of the options for designating checkpoint(s)
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
S | Option 0 | Negative | Medium | Medium | -
S | Option 1 | Positive | Medium | Medium | +
S | Option 2 | / | Medium | Weak | 0

Table 22 -Environmental impacts of the options for designating checkpoint(s)
Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score
M1 | Option 0 | Negative | Medium | Medium | -
M1 | Option 1 | Positive | Medium | Medium | +
M1 | Option 2 | Positive | Medium | Medium | +

Impact on stakeholders:
o Coll.: biodiversity research could be hindered under option 0
o Gov. Res.: biodiversity research could be hindered under option 0
o Ag., health and biotech: No impact
o Univ.: biodiversity research could be hindered under option 0
o Land: Both options 1 and 2 will better contribute to awareness raising, which can contribute to biodiversity conservation activities in general, and in in situ environments in particular
o Other: Generation of benefits for biodiversity conservation and sustainable use in provider countries hindered under option 0

Impact on stakeholders:
o Coll.: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: No impact
o Other: No impact
o Gov. Res.: No impact
o Ag., health and biotech: No impact
o Univ.: No impact
o Land: No impact
o Other: No impact

Table 23 - Economic impacts of the options for the ABS CH
| Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score |
| E1 | Option 0 | Negative | High | Strong | --- |
| E1 | Option 1 | Positive | Medium | Medium | + |
| E1 | Option 2 | Positive | Medium | Medium | + |
| E1 | Option 3 | / | Medium | Weak | 0 |
| E2 | Option 0 | Negative | High | Strong | --- |
| E2 | Option 1 | Positive | Medium | Medium | + |
| E2 | Option 2 | Positive | Medium | Medium | + |
| E2 | Option 3 | / | Medium | Weak | 0 |
| E3 | Option 0 | Positive | High | Medium | ++ |
| E3 | Option 1 | Positive | Medium | Medium | + |
| E3 | Option 2 | Positive | Medium | Medium | + |
| E3 | Option 3 | Positive | Medium | Medium | + |

Table 24 - Social impacts of the options for the ABS CH
| Selection criteria | Option | Pos/Neg | Likelihood of occurrence | Effect magnitude | Score |
| S | Option 0 | Negative | High | Strong | --- |
| S | Option 1 | Positive | High | Medium | ++ |
| S | Option 2 | Positive | Medium | Medium | + |
| S | Option 3 | / | Medium | Weak | 0 |

The Nagoya Protocol has been declared a "double mixed treaty" by the Working Group on Mixed Treaties of the Interministerial Conference on Foreign Policy on 22/11/2010. This means that the federal State, the Regions and the Communities need to give their consent in order for Belgium to be able to ratify the Protocol.
The core obligations are the obligations specified in the terms of reference of this study as requiring special attention: access to genetic resources and traditional knowledge; benefit-sharing; the Competent National Authorities and the National Focal Points; conformity with the national legislation of the provider country and the contractual rules; and compliance and monitoring.
The report of the stakeholder meeting is available here: http://www.biodiv.be/implementation/cross-cuttingissues/abs/workshop-np-20120529/20120529-nagoya-stakeholder-workshopreport-final.pdf
In light of the preceding remarks, the national provisions currently available governing the legal status of genetic resources in Belgium mainly concern the question of legal ownership of genetic material. It follows from the fundamental principles of property law found in the Civil Code that the conditions and rules relating to the legal ownership of genetic material, as a biophysical entity, derive from those governing ownership of the organism in which that material is found. Ownership of an organism means that the owner holds the rights to use it, to enjoy it and to dispose of it legally and materially. Moreover, any legal measure intended to regulate access to genetic resources could build on the existing legislation on physical access to and use of genetic material.
The laws regulating physical access to and use of genetic material depend on the type of property (movable, immovable, or res nullius), on the existence of restrictions on ownership such as a specific protection regime (protected species, protected areas, forests or marine environments) and on the geographical location of the genetic material (the four authorities apply their own rules). Unlike its physical components, the informational components of genetic resources can constitute a res communis: a thing that belongs to no one but is subject to use by all. While access to such informational components is not covered by specific legislation, the exercise of certain rights of use may nevertheless be limited by intellectual property rights covering parts, functions or uses of biological material resulting from innovations made on those materials.
The following 'acts' express the consent of a State to be bound by a treaty: ratification, accession, approval and acceptance. The legal implications, i.e. the binding nature, of ratification, accession, approval and acceptance are the same.
Status of signature, and ratification, acceptance, approval or accession, available at http://www.cbd.int/abs/nagoyaprotocol/signatories/default.shtml (accessed 26 March 2013).
Estonia, Latvia, Malta, Slovakia and Slovenia.
Belgian State reforms were performed in 1970, 1980, 1988, 1993 and 2001. The main provisions pertaining to these reforms are to be found in the "special law" dated 8th August 1980 related to the general institutional reforms, and the special law of 12th January 1989 pertaining to the institutions of the Brussels Region. The sixth reform will follow in the footsteps of the institutional agreement adopted on 11th October 2011 and operate additional transfers of competences towards the federated entities, especially the Regions.
However, this reform, which has not yet been officially transcribed into applicable legislative texts, is not expected to significantly affect the distribution of competences that may be linked with ABS.
This principle applies notwithstanding the future entry into force of Article 35 of the Constitution.
The Nagoya Protocol has been declared a double mixed treaty by the Working Group on Mixed Treaties on 22/11/2010. A double mixed treaty indicates that the competent entities for its implementation are the Governments of the Regions (Flemish, Walloon and Brussels-Capital Region), the Governments of the Communities (Flemish, French and German Community) and the Federal Government (see also the analysis in chapter 2 on the distribution of ABS related competences in Belgium).
Accord de coopération du 5 avril 1995 entre l'Etat fédéral, la Région flamande, la Région wallonne et la Région de Bruxelles-Capitale relatif à la politique internationale de l'environnement / Samenwerkingsakkoord van 5 april 1995 tussen de Federale Staat, het Vlaamse Gewest, het Waalse Gewest en het Brussels Hoofdstedelijk Gewest met betrekking tot het internationaal milieubeleid.
Article 6, § 1er, VI de la loi spéciale du 8 août 1980 de réformes institutionnelles / Artikel 6, § 1, VI van de wet van 8 augustus 1980 tot hervorming der instellingen.
Duran, Een vergelijkend onderzoek naar en bestedingsanalyze van het buitenlands beleid en de diplomatieke representatie van regio's met wetgevende bevoegdheid en kleine staten. Rapport, Antwerpen: Steunpunt Buitenlands Beleid, 418 p.
As for the development cooperation field, the State Reform of 2001 intended to further clarify the distribution of competence between the federal and federated entities through the new Article 6ter of the SL8/8/80, which reads: "certain fragments of development cooperation will be transferred on 1st January 2004, to the extent which they concern competences attributed to Communities and Regions" (inserted by Article 41 of the special law of 13th July 2001 (M.B., 3rd August 2001), which entered into force on 1st January 2002). A specific working group was constituted to propose a list of subject-matters concerning Community and Regional competences at the latest on 31st December 2002. Such a working group was created in 2004 to solve the issue but has not yet led to any conclusion.
Paquin, Paradiplomatie identitaire et diplomatie en Belgique fédérale : le cas de la Flandre. Canadian Journal of Political Science / Revue canadienne de science politique (2003), 36: pp. 621-642.
Geeraerts, Vlaams milieubeleid steekt de grenzen over. De Vlaamse betrokkenheid bij de totstandkoming van Europees en multilateraal milieubeleid, Steunpunt Milieubeleidswetenschappen, Antwerp, UA.
Glowka, L., Burhenne-Guilmin, F., Synge, H., in collaboration with McNeely, J.A. and Gündling, L. (1994), A guide to the convention on biological diversity, Environmental Policy and Law Paper No. 30, IUCN Environmental Law Center.
Van den Haselkamp-Hansenne, L'étendue de la propriété immobilière. In X., Guide de droit immobilier, 2011, liv.
64, Kluwer, Waterloo, sections I.5.-1 to I.5.3.-4.
Hansenne, L'accession immobilière, in X., Guide de droit immobilier, 2011, liv. 64, Kluwer, Waterloo, sections I.8.-1 to I.8.4.-1.
Indeed, according to the European Patent Office, a process for plant production that contains steps of crossing the entire genome of plants followed by the selection of obtained plants is not patentable. These steps should be seen as "essentially biological", as mentioned in Article 53(b) of the 1973 Convention on the European patent; see Den Hartog, J. (2011), "Interpretatie van Article 53(b) EOV; werkwijzen van wezenlijke biologische aard", BIE, pp. 20-23.
Projet de loi modifiant la loi du 28 mars 1984 sur les brevets d'invention, en ce qui concerne la brevetabilité des inventions biotechnologiques, Rapport fait au nom de la Commission des Finances et Affaires Economiques par Mme Zrihen, Doc. Sénat, sess. 2004-2005, no. 3-1088/3, p. 3. See also Van Overwalle, G. (2004), Van groene muizen met rode oortjes: de EU-Biotechnologierichtlijn en het Belgisch wetsontwerp van 21 september 2004. IRDI.
This clause is a transposition of Directive 98/44/EC of 6th July 1998 on the legal protection of biotechnological inventions, which takes Articles 8(j) and 15 of the CBD into consideration. Its preamble notes that in case an invention is based on biological material of plant or animal origin or if such material is used, the patent application should, where appropriate, include information on the geographical origin of such material, if known. The Directive furthermore stresses that Member States must give particular weight to Article 8(j) of the CBD when bringing into force the laws, regulations and administrative provisions necessary to comply with this Directive.
The determination of applicable law and juridical competence will be studied in greater detail in part C of this section, devoted to the implementation of the route taken by Belgium with regard to private international law.
There are three constitutive elements to "theft": « soustraction, chose d'autrui et intention frauduleuse ».
Indeed, by virtue of the principle of "pacta sunt servanda" ("principe de la convention-loi", Article 1134 al. 1 of the Civil Code), the procedure following breaches of contract and the compensation for the violation shall be determined by the contractual clauses themselves.
Law of 16th July 2004 enacting the code of private international law, M.B., 27th July 2004, p. 57344.
Concerns can also be raised about the lack of reference in these legal provisions to important issues of "access to justice" addressed in the Nagoya Protocol, such as the legal standing of ILCs before Belgian courts.
Liège, 25 avr. 1991, Rev. dr. pén., 1991, p. 1013.
Cass. (2e ch.) RG P.98.0082.N, 5 octobre 1999 (Indestege).
Cass. RG 2941, 9 avril 1991 (Marchand / Strubbe).
One can for instance situate the starting point of a breach of trust at the change of nature of the recipient institution, for instance when it turns from a public non-profit organisation into a commercial structure.
Besluit van 15 mei 2009 van de Vlaamse Regering met betrekking tot soortenbescherming en soortenbeheer (B.S., 13 augustus 2009).
Loi du 12 juillet 1973 sur la conservation de la nature: Région wallonne (M.B., 11 septembre 1973).
Decreet betreffende het natuurbehoud en het natuurlijk milieu.
See: Belgian Senate, parl. sess.
1995-1996, Bulletin 1-2, 18th June 1996, vraag n° 43 (Mrs. V. Dua), 26th March 1996; Goux, La recherche scientifique dans la Belgique fédérale: examen de la répartition des compétences.
See: Federal Plan for Sustainable Development (2000-2004).
Convention concerning the Protection and Integration of Indigenous and Other Tribal and Semi-Tribal Populations in Independent Countries, 26th June 1957, Geneva, ILO, ratified by Belgium on 19th November 1958 (M.B., 6th December 1958), entered into force on 2nd June 1959.
CIDD/ICDO (2008), Federaal plan inzake duurzame ontwikkeling 2004-2008 / Plan Fédéral de Développement Durable 2004-2008. Interdepartmental Commission for Sustainable Development.
More information on http://www.biodiv.be/info0405/activities/
More information on http://www.africamuseum.be/museum/about-us/cooperation/index_html
More information on http://www.tematea.org
More information on http://straininfo.net
More information on http://www.plantcol.be/
More information on http://www.biodiversity.be
Samen Grenzen Ver-Leggen. Vlaamse strategie duurzame ontwikkeling, Vlaamse Regering, 2011. Available at http://do.vlaanderen.be/sites/default/files/VSDO2_3.pdf
VBV (2011), Plant van Hier. Praktisch vademecum met oog op het behoud en de promotie van autochtone planten. Available on http://www.vbv.be/projecten/plantvanhier/Vademecum_PlantVanHier_web.pdf
http://www.plantvanhier.be/
CBD, Report on the Legal Status of Genetic Resources in National Law, Including Property Law, where applicable, in a Selection of Countries. UNEP/CBD/WG-ABS/5/1.
Ibid.
Wynberg, R., Taylor, M. (2009), Finding a Path Through the ABS Maze - Challenges of Regulating Access and Benefit-sharing in South Africa. In Kamau, E.C. and Winter, G. (Eds.), Genetic Resources, Traditional Knowledge & the Law. Solutions for Access & Benefit-sharing. London: Earthscan.
Article 19 of Decision 391 on the Common Regime on Access to Genetic Resources, Cartagena Agreement Official Gazette No. 213 of 17th July 1996.
The Belgian national law enacting the code of private international law states in its Article 15 that, if a foreign law needs to be applied to a case that is examined by a Belgian judge, the content of such applicable law should be identified by the judge, according to interpretations received in the "country of origin" (sic). Collaboration can be required if the content cannot be established clearly by the Belgian judge. If it is "impossible to determine the content of foreign law in due time, Belgian law should be applied" (art. 15 §2 al. 2).
If it is "impossible to determine the content of foreign law in due time, Belgian law should be applied" (art.15 §2al2)" cf. supra, previous footnote. Frison C., Dedeurwaerdere T. (2006), op. cit. UN Comtrade (2010) Medicinal and pharmaceutical products (SITC 54) Available at http://comtrade.un.org/. Figures from Pharma.be, http://www.pharma.be/newsitem.aspx?nid=2174 EC (2012b), op. cit. 19 th February 2007 -Accord de coopération entre l'Autorité fédérale, la Région flamande, la Région wallonne et la Région de Bruxelles-Capitale relatif à la mise en oeuvre de certaines dispositions du Protocole de Kyoto 25th April 1997 -Accord de coopération entre l'Etat fédéral et les Régions relatif à la coordination administrative et scientifique en matière de biosécurité (M.B. 14.07.1998) The provisions IMP 1.0 (2); IMP 1.1.1 (2); IMP 2.2 ; IMP 2.3 would require a Federal Law and Decrees of the Federated Entities, to amend the basic environmental codes of the Regions and the Federal State : Natuurdecreet, 21 st of October 1997 (Vlaams Gewest) ; Loi sur la Conservation de la Nature, 12 th of July 1973 (Région Wallonne) ; Ordonnance sur la conservation de la nature, 1 st of March 2012 (Région Bruxelloise) ; Law on the protection of the Marine Environment, 20th January 1999. For a detailed description of these laws, cf. above section 3.1 of the study on the "Access and use of genetic resources under national jurisdiction in Belgium". The refined fishing net would consider any other legislation where notification/registration/permit exist and specify that such notification/registration/permit is also considered as a PIC under the Nagoya Protocol. The case of the conservation varieties, cited for illustration only, shows such a legislation that is different from the PA/PS legislation and where the current legislation on the "admission to use" could be considered also as a PIC under the Nagoya Protocol, in further For an overview see[START_REF] Vatn | Institutions and the Environment[END_REF], Institutions and the Environment. Edward Elgar Pub Due to the scarcity of quantitative data, most performances of criteria in this report reflect a subjective assessment and evaluation based on the various available input and data. Nevertheless, quantitative figures have been included and detailed whenever possible and reliable. See C. Correa (2000), Implications of national access legislation for germplasm flows, In Strengthening partnerships in agricultural research for development in the context of globalization, proceedings of the GFAR Conference, 21-23 May 2000, Dresden, Germany. FAO/Global forum on agricultural research; K.D. Prathapan, R. Dharma, T.C. Priyadarsanan and Narendran, C.A. Viraktamath, N.A. Aravind, J. Poorani, J. (2008) Death sentence on taxonomy in India. Current Science, 94 (2). pp. 170-171. S. Täuber, K. Holm-Müller, T. Jacob, U. Feit (2011), An Economic Analysis of new Instruments for Access and Benefitsharing under the CBD -Standardisation options for ABS Transactions. Research project of the Federal Agency for Nature Conservation, Germany. Terms of reference No. DG5/AMSZ/11008 Procedural sub-criteria have been dichotomized through a Boolean expression, as these criteria were not attributed performance in the previous particle. Options positively addressing the issues raised by a general criterion were assigned a true (1) status, whereas others were considered as false (0). 
This Boolean expression has been broadened somewhat in the course of the analysis, as it quickly became apparent that some options could be both true and false. Therefore, a third category (0.5) was created in some cases. In order to allow for software-based comparison, scores have been quantified on a scale from 1 to 7: the "---" score corresponds to value 1 and the "+++" score to value 7. The software used for the PROMETHEE calculus is Visual Promethee, version 1.0.10.1 (an illustrative sketch of this scoring and outranking logic is given after these notes).
For some sectors, the rate of deposits of material used for research outside of the collections is very low. According to one interviewee, for the microbial GR, for instance, less than 1% of the GR serving for research outside of the collections is currently being deposited in an ex-situ collection.
M.M. Watanabe, Innovative Roles of Biological Resource Centers. Proceedings of the Tenth International Congress for Culture Collections, Japan Society for Culture Collections; World Federation for the Culture Collections, pp. 435-438; B. Koo, P.G. Pardey, B.D. Wright and others (2004), Saving Seeds: The economics of conserving crop genetic resources ex situ in the future harvest centre of CGIAR, CABI Publishing, p. 45; interviews.
These figures are based on a quantitative evaluation, by the study team, of the additional costs for accessing materials under the options for the operationalization of PIC. This evaluation is based on the data generated in the interviews (especially indicators IND 3.1 and 3.2, data collected for the various stakeholder groups, and IND 8.1 to 8.5 for the collections) and existing models from the literature for assessing the implementation costs (in particular Täuber et al. (2007); Eaton, A Moving Target: Genetic Resources and Options for Tracking and Monitoring their International Flows; CBD Bonn Guidelines (2002); Visser, B., Transaction costs of germplasm exchange under bilateral agreements, op. cit.).
Ibid.
Data from the address database of Belgian users of GR, acquired for the 2006 awareness study on access and benefit-sharing (Frison, Infrastructures publiques et régulations sur l'Accès aux ressources génétiques et le Partage des Avantages qui découlent de leur utilisation pour l'innovation dans la recherche des sciences de la vie : Accès, conservation et utilisation de la diversité biologique dans l'intérêt général).
This does not imply that these key collections acquire the authority to decide whether or not to grant access, as this task is reserved to the CNA.
Marchal, Le droit comparé dans la Jurisprudence de la Cour de Cassation. Revue de Droit International et de Droit Comparé, liv. 2-3, 2008, p. 418.
IEEP, Ecologic and GHK (2012), Study to analyze legal and economic aspects of implementing the Nagoya Protocol on ABS in the European Union, Final report for the European Commission, op. cit.
Even if the efficiency depends upon additional training of the patent office, or a close contribution of a more technically skilled monitoring office.
IEEP, Ecologic and GHK (2012), op. cit., p. 139.
Eaton, D., Visser, B. (2007), op. cit.
Cf. the detailed analysis in section 3.1 on "Access and use of genetic resources under national jurisdiction in Belgium".
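To make the scoring and outranking logic described in the notes above more concrete, the following minimal sketch shows how the qualitative "---" to "+++" scores map onto the 1-7 scale and how PROMETHEE II net outranking flows are computed with the usual (strict) preference function. It is written in Python; the option names, criteria, scores and equal weights are placeholder values chosen for the example only, since the study itself performed this calculus with Visual Promethee.

```python
# Illustrative sketch of the 1-7 scoring and the PROMETHEE II net-flow calculus.
# Options, criteria, scores and weights below are placeholders for the example.

SCALE = {"---": 1, "--": 2, "-": 3, "0": 4, "+": 5, "++": 6, "+++": 7}

# Qualitative scores per option and criterion (placeholder values).
scores = {
    "Option 0": {"E1": "---", "M": "--", "S": "--"},
    "Option 1": {"E1": "+",   "M": "+",  "S": "+"},
    "Option 2": {"E1": "+",   "M": "+",  "S": "+"},
}
weights = {"E1": 1.0, "M": 1.0, "S": 1.0}  # equal weights, for illustration

def preference(a, b, criterion):
    """Usual (strict) preference function: 1 if a outperforms b, else 0."""
    d = SCALE[scores[a][criterion]] - SCALE[scores[b][criterion]]
    return 1.0 if d > 0 else 0.0

def pi(a, b):
    """Weighted aggregated preference index of option a over option b."""
    total = sum(weights.values())
    return sum(w * preference(a, b, c) for c, w in weights.items()) / total

options = list(scores)
n = len(options)
for a in options:
    phi_plus = sum(pi(a, b) for b in options if b != a) / (n - 1)   # leaving flow
    phi_minus = sum(pi(b, a) for b in options if b != a) / (n - 1)  # entering flow
    print(f"{a}: net flow = {phi_plus - phi_minus:+.2f}")
```

On these placeholder inputs, the 0 option receives a strongly negative net flow while options 1 and 2 tie, which mirrors the general pattern of the tables above in which the 0 options are consistently outranked.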
Non-exhaustive list of examples of legislation covering PA/PS which do not expressly cover access to GM or GR for utilization as defined under the NP: Ordonnance du 1 mars 2012 relative à la conservation de la nature; Besluit van de Vlaamse Regering van 15 mei 2009 met betrekking tot soortenbescherming en soortenbeheer; Besluit van de Vlaamse Regering van 5 december 2008 betreffende de toegankelijkheid van de bossen en de natuurreservaten; Décret du 15 juillet 2008 relatif au Code forestier; Decreet van 21 oktober 1997 betreffende het natuurbehoud en het natuurlijk milieu; Arrêté du Gouvernement wallon du 26 janvier 1995 organisant la protection des cavités souterraines d'intérêt scientifique; Decreet van 13 juni 1990 Bosdecreet; Arrêté de l'Exécutif régional wallon du 8 juin 1989 relatif à la protection des zones humides d'intérêt biologique, modifié par l'arrêté du 10 juillet 1997; Loi du 12 juillet 1973 sur la conservation de la nature.
For examples of other relevant legislation please refer to footnote 187.
The provisions under (1) a, b and c would require a Federal Law and Decrees of the Federated Entities, to amend the basic environmental codes of the Regions and the Federal State: Natuurdecreet, 21st October 1997 (Vlaams Gewest); Loi sur la Conservation de la Nature, 12th July 1973 (Région wallonne); Ordonnance sur la conservation de la nature, 1st March 2012 (Région bruxelloise); Law on the protection of the Marine Environment, 20th January 1999. For a detailed description of these laws, cf. above section 3.1 of the study on the "Access and use of genetic resources under national jurisdiction in Belgium".
The interpretation of this analysis, especially the results of the outranking flow calculus of the PROMETHEE method, is to be done with care and in light of both the context described in the evaluation of the performance of the options (step 1 of the IA, as described above) and the analysis of the relationship between the options as presented in the visual dominance analysis (step 2 of the IA).
ABS - Access and Benefit-sharing
ABS CH - ABS Clearing-House
ABSWG - Ad Hoc Open-ended Working Group on Access and Benefit-sharing
ANB - Flemish Agency for Nature and Forest
AWEX - Agence wallonne à l'Exportation et aux Investissements étrangers
BAP - EU Biodiversity Action Plan
BCCM - Belgian Co-ordinated Collections of Micro-organisms
BCH - Biosafety Clearing-House
BELSPO - Belgian Federal Science Policy Office
BEW/AEE - Economy and employment administration of the Brussels-Capital Region
BS - Benefit-sharing
BTC - Belgian Technical Cooperation
CAP - Common Agricultural Policy
CBD - Convention on Biological Diversity
CCIEP - Coordinating Committee for International Environment Policy
CFDD - Conseil Fédéral du Développement Durable
CHM - Clearing-House Mechanism to the CBD
CITES - Convention on International Trade in Endangered Species of Wild Fauna and Flora
CNA - Competent National Authority
COP - Conference of the Parties
COP/MOP - Conference of the Parties serving as the Meeting of the Parties
DIE-OPRI - Dienst voor Intellectueel Eigendom / Office belge de la Propriété intellectuelle
DG - Directorate-General
DG4 - DG Animal, Plant and Food of the FPS Health, Food Chain Safety and Environment
DG5 - DG Environment of the FPS Health, Food Chain Safety and Environment
DGARNE - Wallonia's Operational Directorate-General for Agriculture, Natural Resources and the Environment
DGD/DGOS - DG Development Cooperation of the FPS Foreign Affairs, Foreign Trade and Development Co-operation
DGO6 - Wallonia's Operational Directorate-General for Economy, Employment and Research
E3 - DG Market Regulation and Organization of the FPS Economy, SMEs, Middle Classes and Energy
E4 - DG Economic Potential of the FPS Economy, SMEs, Middle Classes and Energy
EC - European Commission
EP - European Parliament
EU - European Union
EWI - Department Economie, Wetenschap en Innovatie of the Flemish government
FLEGT - Forest Law Enforcement Governance and Trade
FIT - Flanders Investment and Trade
FPS - Federal Public Service
GEF - Global Environment Facility
GMO - Genetically Modified Organism
GI - Geographical Indications
GR - Genetic Resources
IA - Impact Assessment
ILCs - Indigenous and Local Communities
ILO - International Labour Organization

Establish monitoring system
- Option 1: Voluntary monitoring system
- Option 2: "Due-diligence" monitoring system
- Option 3: Monitoring by checkpoints at specific stages of the valorization chain

Create incentives for users to comply
- Option 1: Set up financial incentives (tax reductions, rebates, ...)
- Option 2: Set up structural incentives (e.g., special priority for other filings, permits or opportunities, access to special materials or programs that cannot be accessed by others)
- Option 3: Set up positive publicity measures (e.g. label)

Option 2 - Self-standing obligation in the Belgian legislation to have PIC and MAT, if so required by the provider country.
Possible advantages:
- could create less legal complexity for users and enforcement authorities in Belgium
Possible disadvantages:
- might be a less stringent measure for acting against potential illegal utilization of GR by Belgian users
EVALUATION
The 2 options are potentially interesting and deserve further analysis.
Action card - Designate checkpoints
Description: At least one institution has to be designated by Belgium to function as a checkpoint to monitor and enhance transparency about the utilization of GR
Related Article of the NP: Article 17

Option 2 - CNA is the single point of contact for the user on all ABS related permits, but serves as coordination/facilitation body between the NFP and other ABS or non-ABS related access granting authorities
Possible advantages:
- More suited for Belgium, given shared competences
Possible disadvantages:
- Could lead to lower process certainty for the user
- Probably longer application process
- Would still need a degree of harmonization and integration of the different permits
EVALUATION
Option 2 is potentially interesting (especially for coordination between various permits/contracts if multiple permits/contracts are requested) and deserves further analysis.

Action card - Establish monitoring system
Description: Measures should be taken to ensure an efficient monitoring
Related Article of the NP: Article 17

Biotechnologies and processing industry sector
The sector is made up of the remaining stakeholders active in the field of biotechnologies, but not active in the healthcare sector. It covers, amongst others, the following fields: energy, materials, biocatalysts, and chemical industries. The processing industry sector in Belgium is mainly focused on food industries and the animal feed industry.

Ex-situ collections of genetic resources
The ex-situ collections sector in Belgium includes over 300 organizations and covers botanical gardens, zoos, aquariums, museums, herbaria, gene banks, collections of micro-organisms/cells and collections of dead material, both in public and private collections.

Governmental research institutions
Researchers in governmental institutions are accessing genetic resources, as well as traditional knowledge associated with genetic resources, on a regular basis for research purposes. In this category we only consider the specificities of public and academic research, while private research is dealt with under the other stakeholder categories. Key players may include:
- The Royal Belgian Institute of Natural Sciences (RBINS)
- The Royal Museum for Central Africa (RMCA)
- The Walloon Agricultural Research Center (http://cra.wallonie.be)
- The Veterinary and Agrochemical Research Center (http://www.coda-cerva.be)

University research sector
Key players targeted are the departments of Belgian universities that deal in particular with life science and engineering, as well as chemical, agricultural, environmental and health research, etc.

Other possible stakeholders
8.3.2.1 Civil society
Civil society organizations (advocacy NGOs, interest groups, etc.) do not seem to be directly impacted by the Nagoya Protocol as such. However, they might be consulted if they have gathered relevant information on the provision and/or use of genetic resources that can contribute to the impact assessment (for example if they have developed a certain expertise, or have privileged access to information from a main user/provider of GR).

Citizens and consumers
Same comment as for civil society.

IMP 1.1 - Implementation of option 1 for operationalizing PIC
The implementation of option 1 for operationalizing PIC can be broken down into four subsequent components:

IMP 1.1.1 - The establishment of BS as a general legal principle
Option 1 also includes (as options 2 and 3) the establishment of BS as a general legal principle in Belgium.
For this implementation part, the same components are considered as for IMP 1.0:
(1) A political agreement from the competent governments to establish benefit-sharing as a general legal principle, to be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental codes of the three Regions and at the federal level.
(2) Subsequent or parallel implementation of this general principle through a cooperation agreement and/or analogous provisions in relevant legislation such as the basic environmental codes of the three Regions and at the federal level.
(3) Subsequent operationalization of the general principle by the respective governments at the regional (through executive orders) and federal level (through royal orders), establishing rules and procedures for further implementation of the benefit-sharing provision as envisioned in the other options considered below.

IMP 1.1.2 - Establishing a general legal principle to require PIC for access to Belgian GR
In addition, option 1 would require establishing as a general legal principle that access to Belgian GR requires PIC. For the implementation of this principle, the same considerations as those considered for IMP 1.0 apply. Therefore, the same three phased components are considered as for IMP 1.0:
(1) A political agreement from the competent governments to establish PIC as a general legal principle for access to Belgian GR, with the specification that this would be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental codes of the three Regions and at the federal level.
(2) Subsequent or parallel implementation of this general principle through a cooperation agreement and/or analogous provisions in relevant legislations such as the basic environmental codes of the three Regions and at the federal level.
(3) Subsequent operationalization of the general principle by the respective governments at the regional (through executive orders) and federal level (through royal orders), establishing rules and procedures for further implementation of the general PIC provision as envisioned in the other options considered below.

IMP 1.1.3 - Refinement of relevant legislation for Protected Areas (PA) and Protected Species (PS)
Option 1 could be implemented by refining existing PA/PS relevant legislation to establish that access provisions to PA/PS not only concern physical access but also access within the meaning of the Nagoya Protocol, and that such access would also amount to prior informed consent from the Belgian State. Once more information becomes available over time regarding experience with the implementation of that general provision, and taking into account ongoing discussions and/or practices at international and Party level, the modalities for executing this general principle/provision could be further refined. In addition, IMP 1.1.3 is a further operationalization of IMP 1.1.2, which in itself already provides a sufficient legal basis for establishing PIC as a general principle in the context of the ratification of the Nagoya Protocol.
Therefore, any further specification can be made at a later stage after IMP 1.1.1 and IMP 1.1.2, as soon as more experience is available, and for the implementation of this aspect of option 1, the assessment only considers the following component:
(1) Amendment of existing legislation relevant for PA/PS to establish that access provisions to PA/PS not only concern physical access but also access within the meaning of the Nagoya Protocol, and that such access also automatically amounts to PIC under the implementation of the principle established under IMP 1.1.2 (through a Decree/Ordinance of the Regions).

IMP 1.1.4 - Establishing the default access rule (from qualified Belgian collections only)
Option 1 would specify that, for GR outside PA/PS, access to Belgian GR would need to be sought and processed as much as possible through qualified Belgian collections (which are equipped for the deposit of data and/or samples). Once IMP 1.1.2 is established, IMP 1.1.4 is a further operationalization that is part of the specification of the procedures for processing access requests by the Competent National Authorities, including the designation of the qualified collections by the Regions and the Federal Government. Therefore, under IMP 1.1.4, only the establishment of the general principle is considered, while the detailed operationalization will be considered under IMP 3.1 below.
(1) A political agreement from the competent governments to establish a default access rule from qualified Belgian collections between the Regions and the Federal Government.
As explained in the detailed analysis in section 3.2, the current access provisions are regulated by various legal measures, depending on the nature of the material, the region and the environmental competence. Therefore, it would probably be most effective (and efficient) to implement this first through a general provision in the basic environmental codes and second through specific amendments in the other applicable codes, as discussed in chapter 3. The second step could build upon several specific existing access regulations (for example whenever access is given for research); see the explanations on existing PA/PS relevant legislation (chapters 3.2.2-3.2.5).

Establishing one or more Competent National Authorities
The choice of the Competent National Authority would in the first place be based on the relevant competent authorities for the existing legislation and measures related to GR (that is, PA/PS and possibly other existing legislation on GR). This means four Competent National Authorities would be needed: one for each of the three Regions and a federal one, flowing from the actual division of competences in Belgium. These CNAs would thereby build upon existing institutions and be responsible for granting the access permits. Given this institutional context, the options do not reflect the number of CNAs to be established but rather the ways in which users can request access (i.e. directly through one of the CNAs vs. through a centralized entry-point). In particular, under a centralized access system, the CNAs would coordinate through channeling, facilitating and/or advising on the access requests. This has consequences for the level of comparability of the proposed options. Whereas options 1 and 2 focus on different scenarios for organizing the ways in which a user requests access, option 0 focuses on the non-establishment of the CNA.
The latter would lead to a non-implementation of the Nagoya Protocol, but would still require the Belgian State to clarify the access procedures to Belgian GR, as discussed in chapter 8.

IMP 3.0 - Implementation of the specific "0" option on the Competent National Authorities
Idem to IMP 2.0.

IMP 3.1 - Implementation of option 1 on the Competent National Authorities
The implementation of option 1 on the CNA implies two distinct steps.

Summary of the selected options on the Competent National Authority
0. Specific "0" option: non-establishment of the CNAs
1. Option 1: Decentralized input to the CNAs
2. Option 2: Single entry-point to the CNAs
For a detailed description of the options please refer to chapter 8.2.

IMP 3.1.1 - Establishing the four Competent National Authorities
The choice of the Competent National Authorities should take into consideration the division of competences in Belgium on environmental issues, and the objective of the Nagoya Protocol to contribute to the conservation of biological diversity and the sustainable use of its components. The choice of the four authorities competent for the existing legislations and measures related to protected areas and protected species, or for other existing legislation on access to GR, would seem logical. Therefore, option 1 and option 2 consider the logical situation where the CNAs would be established in the respective authorities (cf. section 3 of this report), that is the "Agentschap voor Natuur en Bos" in the Flemish Region, the "Division de la nature et des forêts" in the Walloon Region, the "Institut Bruxellois pour la gestion de l'environnement" in the Brussels-Capital Region and one authority to be established at the federal level, probably at the Directorate-General for the Environment of the Federal Public Service "Health, Food Chain Safety and Environment" (for GR that are not under the competences of the federated entities, such as marine GR and ex-situ GR held at federal institutions).
Considering that both options 1 and 2 would benefit from the additional legal clarity that will be provided through a timely ratification of the Nagoya Protocol (in particular through the decisions at the first COP/MOP to the NP), three phased implementation components for this option are considered in this assessment:
(1) A political agreement from the competent governments to establish four Competent National Authorities, to be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the environmental codes of the three Regions and at the federal level.
(2) Subsequent or parallel implementation, for example through a cooperation agreement and/or the analogous provisions of relevant legislations such as the basic environmental codes of the three Regions and at the federal level. The specification of the rules and procedures for processing access requests by these Authorities would be done to the maximum possible extent through executive orders of the governments of the Federated Entities.
(3) Administrative arrangements could be established between designated ex-situ collections and the CNAs for processing access requests (under option 1 for PIC) or for the management of the notification/registration procedures by the four CNAs (as envisioned under options 2 and 3 for PIC). Such administrative arrangements would not require any additional legal measures (legislative or executive), but could be supported by policy guidance (advice, provision of technical information).
IMP 3.1.2 - Decentralized input system
A decentralized input system would not require any additional implementation measures beyond the measures under IMP 3.1.1.

Designating one or more checkpoints
Summary of the selected options on checkpoints
0. Specific "0" option: no checkpoints would be introduced as envisioned under the Nagoya Protocol
1. Option 1: Monitor PIC in the ABS Clearing-House
2. Option 2: Using the patent office as a checkpoint
For a detailed description of the options please refer to chapter 8.2.

The provision of Article 17 of the NP is of a binding nature. Therefore, as discussed in chapter 8, option 0 (introducing no checkpoints) would lead to a non-ratification of the Nagoya Protocol. Option 1 envisions using the ABS Clearing-House, which monitors the PIC established in the implementation of the Nagoya Protocol, as checkpoint. However, the choice of the ABS CH as checkpoint depends among others on the options chosen for the operationalization of PIC and the ABS CH (cf. the discussion on operationalizing PIC above and the ABS CH below). Under option 2, it is the patent office that would function as such a checkpoint. In any case, the tasks of the institution handling the checkpoint should be further specified, so that it can address, as appropriate, the monitoring of GR and TKaGR used in Belgium, both for Belgian GR and for GR and TKaGR acquired from other countries. The further specification of these tasks can be done gradually, as part of the phased implementation of the Nagoya Protocol, but would include in the first instance the collection and transfer of relevant information on prior informed consent. "PIC as checkpoint" can thus be understood as the collection/reception (by a yet to be defined authority) of the proof of PIC from the provider country (whether that is Belgium or not) as a condition for the utilization of GR in Belgium (see the illustrative sketch below). Options 1 and 2 are not mutually exclusive and, in a phased implementation approach, it can be envisioned to implement both options together.
Checkpoints are monitoring services collecting or receiving information at different stages of the development chain: before utilization (activities such as collecting, identifying and storing GR), during utilization (basic and applied research, and research for product development) or after utilization (e.g. commercial sale). Options 1 and 2 respectively represent early and later stages of this development chain. The difference is therefore less about what is monitored than about when the monitoring takes place. With its focus on the early steps of the development chain, it is assumed that monitoring under option 1 will cover the broadest possible amount of GR and its users in order to be effective. Option 2, on the contrary, would only focus on the specific situations where the utilization of GR is part of a patent application procedure.

IMP 5.0 - Implementation of the specific "0" option on checkpoints
Idem as under IMP 2.0.

IMP 6.1 - Implementation of option 1 for the CH
(1) Appointment, in a first phase, of the RBINS for contributing to the information tasks of the CH.
(2) Subsequent implementation by establishing cooperation between this CH and other institutions, through appropriate administrative arrangements between all the players involved.
Importantly, in this assessment, for comparative purposes, we consider options 1, 2 and 3 separately. However, in practice it is likely that, based on the assessment of the respective strengths and weaknesses of the players, the first step will only involve the RBINS, while a combination of the options might be considered for the full implementation of the CH obligations.
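As a purely illustrative complement to the "PIC as checkpoint" reading above, the sketch below (in Python) shows the kind of record a checkpoint could collect or receive before utilization proceeds. All field and function names are hypothetical and do not correspond to any existing ABS Clearing-House or patent office interface; the fields are loosely modeled on the elements that Article 17.4 of the Protocol lists for the internationally recognized certificate of compliance.

```python
# Hypothetical sketch only: field names and the check below are illustrative
# and do not describe any existing ABS Clearing-House or checkpoint system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComplianceCertificate:
    """Record loosely modeled on the elements listed in Art. 17.4 NP."""
    issuing_authority: str          # authority of the provider country
    date_of_issuance: str
    provider: str
    unique_identifier: str          # permit/certificate identifier
    pic_grantee: str                # person or entity to whom PIC was granted
    subject_matter: str             # genetic resources covered
    pic_obtained: bool              # confirmation that PIC was obtained
    mat_established: bool           # confirmation that MAT were established
    commercial_use: Optional[bool] = None  # commercial and/or non-commercial

def checkpoint_issues(cert: ComplianceCertificate) -> list[str]:
    """Return the problems a checkpoint would flag before utilization."""
    issues = []
    if not cert.pic_obtained:
        issues.append("no confirmation that PIC was obtained")
    if not cert.mat_established:
        issues.append("no confirmation that MAT were established")
    if not cert.unique_identifier:
        issues.append("missing unique identifier")
    return issues
```

In such a scheme the checkpoint does not itself grant access, which remains the task of the CNA; it merely collects the proof of PIC (and, where relevant, MAT) and flags gaps before utilization proceeds, consistent with the monitoring role described above.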
IMP 6.2 - Implementation of option 2 for the CH
Idem as under IMP 6.1, except that BELSPO would be appointed in the first phase for contributing to the information tasks of the CH.

IMP 6.3 - Implementation of option 3 for the CH
Idem as under IMP 6.1, except that WIV-ISP would be appointed in the first phase for contributing to the information tasks of the CH.

Operationalizing PIC
Summary of the selected options for the operationalization of PIC
0. Specific "0" option (access component): the specific "0" option on access would consider no PIC requirement, with benefit-sharing as a horizontal principle

Performance of the options
Economic impact
E1 - Legal certainty and effectiveness for users and providers of GR, at low cost
Option 0 is the least preferred option for this criterion: it does not provide any legal certainty to the user, due to the fact that it does not establish proof of legal access; nor is it enforceable, as it does not allow any post-access tracking and monitoring to take place; and it makes responsibility so diffuse that no Party can be held accountable. Moreover, it would not allow issuing an internationally recognized certificate of compliance, which is one of the main contributions of the Nagoya Protocol for increasing legal certainty and transparency of exchanges of GR.
Under the fishing net model, PIC is operationalized through a simple notification obligation at the point of access. Therefore, this model provides users with a high level of process certainty and legal certainty, at an early stage of the ABS application process (before identification and storage procedures in the public ex-situ collections or in research laboratories). The simplified nature of the model also allows for a good overview of this process (compared to the sometimes lengthy and complex laboratory operations required before a GR can enter an ex-situ collection).
Option 2 has, compared to option 1, a set of advantages pertaining to legal certainty and effectiveness. Indeed, this option is likely to increase legal clarity, as it would lead to a more standardized input system for access requests, and reduce the redundancy in information provision on access procedures. In addition, it can reduce costs related to the search for adequate information by users, as only one input system will be in place. The set-up costs of a single entry-point to the CNAs are to be borne only once (and might benefit from some economies of scale), while the operating costs are likely to remain low once implemented (for example through a single digital portal as entry-point to the CNAs). These effects would probably be different for foreign users and for Belgian users. If accessed resources are to be used mainly by Belgian users, the impact on users has to be nuanced, as Belgian users are accustomed to the decentralized Belgian system. In contrast, for foreign users, the choice between 4 different entry-points could indeed create confusion and thereby lead to higher legal uncertainty and/or ineffectiveness. Option 0 clearly represents the least favored option, as the non-implementation will create high process as well as legal uncertainty for users. It will also make it impossible to keep track of legally accessed resources and hence prevent the public authority from enforcing any obligations at a later stage of the utilization.
Performance of the options
Economic impact
E1 - Legal certainty and effectiveness for users and providers of GR, at low cost
Option 0 clearly represents the least favored option, as the non-ratification will create high process and legal uncertainty for users. Options 1 and 2 will allow ratifying the Nagoya Protocol and thereby allow users and providers to benefit from the higher legal certainty and transparency created by the Protocol. In addition, the monitoring measures put into place under options 1 and 2 are envisioned as an important contribution to promoting transparency and compliance. In a phased implementation of these measures, it is expected that this additional contribution would only be minor in a first phase (as they would cover a sub-set of GR and/or GR on which information on PIC is already available), but this impact is expected to increase in the later implementation stages. However, creating a common level playing field would provide substantial benefits from the outset.
Under ideal conditions, option 1 would be looking at the available information at the beginning of the development chain, thereby providing users and providers with the possibility to review whether all genetic resources utilized in Belgium have been acquired in compliance with the PIC provisions of the provider country. However, much will depend upon the effectiveness of the ABS Clearing-House(s) (internationally, but also in both the provider country and Belgium, cf. chapter 9.5). In case of ineffective transfer of information between the provider country and Belgium, users may face situations of uncertainty. Furthermore, the enforceability of the option is very doubtful, as it will prove hard to systematically control the high quantity of GR being utilized in Belgium from a high variety of sources and by very different users. However, a phased implementation might be a possible answer to these concerns.
Option 2 provides both providers and users with less possibility to monitor the correct use of the GR in Belgium than option 1. The patent stage is an advanced stage in the development process. Collecting proof of compliance at this late stage could generate uncertainty for users using GR that have transited through third parties, by putting the burden of proof at the end of the development chain.

Procedural impact
G1 - Flexibility in accommodating sectorial differences
The general information exchange tasks and the organization of the technical information do not differentiate amongst the sectors. Therefore, the options can be considered neutral with regard to this criterion.
G2 - Temporal flexibility to allow for future policy adjustments
The 0 option would lead to non-implementation of the Nagoya Protocol. As discussed under G2 above (section G2 under MAT), this might lead to some additional flexibility in the short term, but would probably lead to higher adaptation costs at a later point in time. The temporal flexibility of the other 3 options will highly depend on the initial set-up costs of the various obligations. If these are high, this might lead to less flexibility to change the options later on. In addition, if a high level of technical expertise were required, this might lead to less temporal flexibility in changing the options, as the same expertise would need to be acquired again by other actors. Both these arguments favor options that build upon existing practices and expertise over options that do so less. This applies equally to all three options.
G3 - Improving knowledge for future policy developments and evaluation
Option 0 would lead to non-implementation of the Nagoya Protocol and would not allow the generation of the information on PIC, checkpoints, etc. that is useful for informing future policy developments. In contrast, the other options would allow improving and systematizing this knowledge base. As in the case of criterion G2, solutions that build upon existing practices and expertise would probably give a better guarantee of knowledge quality and integration, compared to solutions that do so less.
G4 - Correspondence with existing practices
The impact of option 0 on this criterion is unclear. Indeed, it would depend on the other measures taken by Belgium to comply with the CBD and ILO Convention 107. Options 1, 2 and 3 all build upon existing practices for information coordination on ABS issues in Belgium and/or biodiversity policy matters. The RBINS already hosts the Belgian component of the CBD Clearing-House Mechanism (CHM) and the NFP to the CBD. RBINS is, together with BELSPO, part of the Belgian Biodiversity Platform (BBP), while the ISP/WIV hosts the Belgian Biosafety Clearing-House (BCH).
Visual dominance analysis
No ideal point can be identified in the performance chart.
The information-sharing tasks include, amongst others, the technical information to be provided to the central ABS Clearing-House. The first task is already ongoing at the Royal Belgian Institute of Natural Sciences (RBINS) within the framework of the CBD CHM. The recommendation from the analysis is therefore to further strengthen the RBINS to fulfill the information-sharing tasks on Access and Benefit-sharing under the Nagoya Protocol. In a second stage, based on the modalities to be determined at COP/MOP1, administrative arrangements between this Clearing-House and other relevant institutions might be necessary to extend the tasks.
Before summarizing the final outlook of the phased approach based on these recommendations, it is worthwhile to clarify that the three-step approach to the implementation presented here is based on a concern for maximum legal clarity for all parties concerned and compliance as a Party with the core obligations of the NP, while at the same time allowing a timely ratification. The proposed approach is therefore to start with a political agreement which would include, in general terms, the principles on which the federated entities and the Federal State will take subsequent actions. The reason for recommending such a political agreement with a specification of the actions that will have to be taken to implement the Protocol in Belgium is twofold. On the one hand, such an agreement provides for a clear political commitment to the core obligations of the NP, as it specifies the intentions of the competent authorities, within the limits of the decisions already taken at the international and European level at the time of the agreement. On the other hand, it does not prejudge the political decisions to be taken by the different authorities and thus allows for sufficient flexibility to further adjust the implementation process at a later stage. The latter is especially important given the many questions that are still undecided at the present stage, both at the EU and international level, as mentioned and taken into account in the assessment report. Based on the above considerations, the recommended phased approach for implementation of the Protocol that results from this study can be summarized as follows.
1. A political agreement by the competent authorities with a clear statement on the general legal principles for a minimal implementation of the Nagoya Protocol in Belgium that came out of the study:
a. Establishment of benefit-sharing as a general legal principle in Belgium, which will be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the environmental codes of the three Regions and at the federal level (IMP 1.1.1 (1))
b. Establishing as a general legal principle that access to Belgian GR requires PIC, which will be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the environmental codes of the three Regions and at the federal level (IMP 1.1.2 (1))

Conclusion of the third and fourth phase of the study
The main conclusions of the third step have been presented in detail in chapters 10 and 11 of the report and resulted in two general recommendations, along with a set of more specific recommendations for each of the 6 implementation measures.
First, the analysis shows that the "no policy change" baseline for each measure clearly has the worst performance. This result has led to a first general recommendation, which is to implement both Prior Informed Consent (PIC) and benefit-sharing as general legal principles in Belgium.
Second, the analysis confirmed the validity of a phased approach to the implementation of the Protocol, which is the second general recommendation and could be organized through a 3-step implementation process:
1. In the first implementation step, through a political agreement by the competent authorities which would include a clear statement on the general legal principles, along with the specification of the actions to be undertaken by the federal and the federated entities to put these principles into practice.
2. In a second implementation step, the specified actions would be subsequently implemented, for example through a cooperation agreement and/or analogous provisions in the relevant legislations such as the environmental codes of the federated entities and the Federal Government, along with other possible requirements.
3. In a third implementation step, additional actions can be undertaken once there is more clarity at the EU and the international level.
Finally, a set of specific recommendations on each of the 6 measures arises from the third step of this study:
1. For the establishment of the Competent National Authorities, a centralized input system clearly came out as the recommended option.
2. For the setting up of compliance measures, the option to refer back to provider country legislation, with Belgian law as fallback option, is the recommended option that comes out of this analysis.
3. For the designation of one or more checkpoints, the option of using the PIC of users available in the international ABS Clearing-House (and therefore also through the Belgian node and/or the Belgian ABS CH), in the first step of the implementation, stands as the recommended option.
4. For the operationalization of PIC, the bottleneck option and the refined fishing net option came out very close.
a. First, both these options require establishing as a general legal principle that access to Belgian GR requires PIC. This could be implemented for example through a cooperation agreement and/or analogous provisions in relevant legislations such as the environmental codes of the three Regions and at the federal level.
Second, additional measures should be envisioned afterwards, the most important of which are the refinement to existing PA/PS relevant legislation and the general registration/notification requirements to the Competent National Authorities for GR outside PA/PS. ANNEX 1 -OVERVIEW OF ARTICLES OF THE NAGOYA PROTOCOL THAT CONTAIN LEGAL OBLIGATIONS FOR A PARTY/PARTIES This list contains an analysis of the legal obligations emanating from the NP that has been provided with the terms of reference of this study, by the four Belgian environmental administrations that commissioned this study. This list serves as the background for this study.
Dr Phillip Lee MP, Chair

Foreword

We are in tough economic times and there is no doubt that the overall British approach and our short-term expectations with regard to environmental policy will have to be revised. At the same time, as the impact of climate change becomes apparent, we do have to address the fundamental problems and act swiftly to protect the environment. Britain has a dual approach to European and international climate change policy: to work with key countries, partners and sectors to demonstrate the potential of low carbon growth, and at the same time to make progress with a legally-binding global deal curbing emissions to tackle climate change. Our focus will now have to be on building a global roadmap to an agreement by 2015 and we also need a strategic and viable approach to renewable energy. I am delighted that, under the UN Framework, both developed and developing countries were able to join together and agree on the strategy at the recent Durban Climate Change Conference; European leadership played a large part in this. In addition, a second commitment period of the Kyoto Protocol was agreed and I am proud of the role that the UK played in galvanising support for both agreements. Europe is an essential global player in shaping overall targets and leading the way on environmental policy. It is a hugely complex area and so I am pleased to present this report. I hope that you will find it useful in guiding you through the maze of proposals that cover the wide range of issues that need to be tackled. These issues are certainly very significant, but it is clear that we have the opportunity to make real progress if we act now.

Executive summary

EU environmental policy is facing a new and challenging context. The current economic and financial preoccupations in Europe are unlikely to fade away quickly. It is difficult to forecast when instabilities in financial markets, uncertainties over economic and job prospects and pressure to maintain austerity regimes will end. The crisis in the Eurozone has led to bigger questions concerning the role of regulation and aspects of the EU project itself, particularly but not exclusively in the UK, where political tensions have been brought to the fore in recent months. Details of a new inter-governmental agreement on the economic governance of the Eurozone are currently being negotiated. Most existing EU policies, including those concerning the environment, are not likely to be affected by this agreement. However, the political repercussions and dynamics of the new economic governance structure are yet to unfold and may spread beyond the arenas of fiscal and budgetary policy. For these reasons, conditions for the further development of a proactive EU environmental policy may not look favourable. Nonetheless, several environmental challenges call for a response as a matter of urgency, both within Europe and on a global scale. Many of these issues need to be addressed at a European level and there is a clear link to the single market as well as the ecological integrity of the continent. The current economic situation also offers a number of opportunities for promoting the environmental policy agenda, particularly in view of fostering an efficiency revolution. It has given an impetus to concepts such as the green economy, green growth and resource efficiency, which are increasingly reflected in mainstream political discourse both in the EU and domestically.
Thus, even in a period of economic recession and political upheaval, the environmental perspective should remain a cornerstone of strategies for the future economy. Whilst far from perfect, environmental policy is certainly one of the success stories of the EU and is an area in which one can clearly see the benefits of the Union, both on the ground and internationally. Over the past four decades, a range of key pressures on the environment have been reduced and several aspects of Europe's environment have improved. Major progress has been made, including reductions in overall air and water pollution, improvements in the preservation of the natural environment and efforts in relation to waste and resource use. EU policy has played a very significant role in achieving these results. A particular strength of EU policy is that it addresses the environmental agenda rather systematically and is less affected by short-term political and budgetary shocks than most national governments. This has provided the conditions for a longer-term view which is particularly valuable in environmental policy.

In the next two to three years, safeguarding jobs and stimulating growth are likely to remain over-riding political priorities. Thus securing support for new environmental measures will mean convincing leaders of the costs of inaction and the cost-effectiveness of action. Strong arguments and solid evidence will be at a premium. At the same time, policy is moving in new directions. The rise of emerging economies is dramatically changing the international landscape and the role of the EU therein. Moreover, the nature of contemporary environmental challenges is such that many cannot be addressed by environmental policy alone. Rather, they require wider economic and social changes, with implications for a suite of policies, ranging from trade and international relations to industrial policy, research and development, and fisheries.

Although relatively comprehensive already, the body of EU environmental policy remains dynamic and is constantly being updated, subjected to scrutiny and potential modifications or roll-back. It is now at a critical point, with a number of important policy processes and strategic discussions taking place in various areas. Many of the key areas of policy development are reviewed in Chapter 4 of this report, while Chapter 5 provides an overview of the wider strategic context. The Annexes of the report provide an overview of forthcoming strategic events, EU targets and legislative proposals awaiting adoption. The following are amongst some of the key issues and policy processes that will be prominent on the agenda in the next two to three years:

• Climate change concerns have infiltrated the main political discourse and there are currently several issues on the agenda. A shift to a 30 per cent EU greenhouse gas emission reduction target remains possible, as well as desirable, despite the reduced impetus from global negotiations. There are also specific proposals to promote energy efficiency more effectively. Other issues on the agenda include addressing emissions from the transport sector and the decarbonisation of transport fuels, as well as securing finance for climate-related investments within the EU and externally.

• A new approach to resource use in Europe is signalling efforts in the next eight years to improve resource efficiency and, more tentatively, to reduce resource use, while linking to strategies to promote green growth. This emerging agenda needs to be converted into concrete actions at EU and national level, starting with the development of concrete targets and indicators for reducing resource consumption.

• With regard to the natural environment, the valuation of natural capital and ecosystem services is increasingly recognised. However, it needs to be translated into concrete measures to protect biodiversity in practice, including adequate funding for Natura 2000 and a revised approach in the CAP.

• Comprehensive reviews of existing legislation in a number of important areas of EU environmental policy are underway. A 'Blueprint to safeguard Europe's waters' is expected to be presented in November 2012, addressing the broad scope of EU water policy and making recommendations for improvements. These might include legislative changes and initiatives to improve implementation, which has been slow in the case of the Water Framework Directive, for example. A review of the EU's approach to regulating the production and use of chemicals through the REACH Regulation is also expected in 2012. A review of EU air quality policy should conclude in 2013, with the presentation of a new clean air package, updating existing policies and directives.

• In addition to specific legislative developments, there are also a host of strategies and roadmaps which set out where the EU is heading on the economy, energy policy, climate, innovation and the environment itself. Particularly significant in 2012 will be the emergence of a proposal for the 7th Environment Action Programme, which is expected to set the direction for EU environmental policy for the coming years.

• Funding for the environment will be a frontline issue, with the EU budget for 2014-2020 in principle being agreed during the year. Reforms of the Common Agricultural, Cohesion and Common Fisheries Policies, all with large environmental components, will run through 2012 and beyond. The investment needed to achieve EU environmental objectives and to support the transition to a low-carbon, resource-efficient economy is substantial. Rising public debts in several Member States and flailing capital markets have dented the ability to invest in the critical infrastructure and innovative technologies and services. Therefore, securing adequate financing to support environmental commitments in the main EU funds will be an important test of the commitment to environmental progress.

The outcomes of these processes will have an important influence on the context and scope of EU environmental policy to 2020 and beyond. In broad terms, the main environmental challenges ahead include reducing the intensity of natural resources used for economic activity, decreasing the negative environmental impacts associated with the use of natural resources, preserving and restoring natural capital and ecosystem services, and improving human well-being and quality of life. The inter-linkages and trade-offs between different thematic areas such as climate change, biodiversity and natural resources, as well as between environmental policy and sectoral policies such as agriculture, energy or transport, will need to be addressed more vigorously. Improving the implementation of existing environmental policy has been and remains a key challenge. It requires a more honest alignment of aspirations, regulatory means and implementation capacity with the political realities of a Union of 27 Member States.
Maintaining sufficient institutional and administrative capacities for good governance and regulatory foresight in the face of fiscal austerity and pressures for budgetary cuts will be an important challenge for national authorities as well as the EU institutions. However, recent history suggests that well designed and effectively implemented environmental policy can provide some of the foundations for long term prosperity, as well as steering us towards a more sustainable society.

Introduction

The European Union (EU) is currently embroiled in significant turmoil. The crisis in the Eurozone has not only had serious economic, political and social ramifications, but has also led to bigger questions concerning the EU project itself. It has exposed some of the weaknesses of the Union within a rapidly changing global context, and has highlighted the overwhelming need for reform in the Eurozone, with possible implications beyond fiscal policy. Growing scepticism about the EU has been voiced in a number of Member States, including the UK, where a skirmish over a referendum on the UK's membership of the EU in the autumn brought some of the underlying political tensions to the fore. A veto exercised by the British Prime Minister at a December Council meeting of EU leaders created further furore, both among other Member States and domestically, and may have marked a significant moment in the history of the UK's relationship with its European partners. The details of the subsequent inter-governmental agreement are currently being finalised. Although many existing EU policies, including those concerning the environment, are not likely to be affected by the agreement, the new economic governance structure can be expected to create a momentum of its own, and the political repercussions and dynamics of this are yet to unfold. Given this rather volatile climate, some are likely to question the value of having a discussion on EU environmental policy at all. However, it would be a mistake to underestimate the significance of environmental policy on account of the current situation. Whilst far from perfect, environmental policy is certainly one of the success stories of the EU and is an area in which one can clearly see the benefits of the Union, both on the ground and internationally. In the short- to medium-term, the EU is likely to remain the primary source of environmental policy within Europe, driving most national decisions in the same way it does today. Many environmental issues need to be addressed at a European level and there is a clear link to the single market as well as the ecological integrity of the continent. Moreover, while there is a preoccupation with economic performance at present, this needs to be linked to a green agenda, both in the EU and domestically. Concepts such as the green economy, green growth and resource efficiency are increasingly reflected in mainstream political discourse. A relevant example is the Europe 2020 Strategy, which has an explicit environmental dimension to it. Thus, even in a period of economic recession and political upheaval, the environmental perspective should remain a key component of strategies to stimulate growth.
Despite progress in a number of areas, the overall state of Europe's environment and the EU's impacts on the environment in other parts of the world still require further action (EEA). Inter-linkages between thematic areas such as climate change, biodiversity, natural resources and environment and health, and their links with sectoral policies such as agriculture, energy or transport, are far from being adequately addressed. As in the UK, the EU's environment and climate change agenda is now covered by different departments within the European Commission, thus further increasing the need for coherence and coordination. Improving implementation of the substantial body of law already agreed has been, and still is, a key challenge. There are also a number of remaining gaps in the coverage of EU environmental policy, such as tackling climate change adaptation, addressing water scarcity, and issues related to nanotechnologies and the cocktail effects of chemicals, where new measures may still be developed. Markets and consumers continue to get distorted price signals that do not fully account for the cost of environmental damage, and significant knowledge gaps exist for several environmental problem areas. Addressing these and other remaining challenges will be a priority in the coming years.

Although relatively comprehensive already, the body of EU environmental policy remains dynamic and is constantly being updated, subjected to scrutiny and potential modifications or roll-back. It is now at a critical point, with a number of important policy processes and strategic discussions underway. They include the adoption of new strategic plans, including in relation to future biodiversity policy, and comprehensive reviews of existing legislation in important areas of EU environmental policy such as water, air and chemicals. Substantial discussions on the future EU budget are currently taking place, as are efforts to take forward the EU's new economic strategy, the so-called Europe 2020 Strategy and related Flagship Initiatives. Alongside this, there are also a number of reform processes in sectors with a large environmental impact such as agriculture, transport and energy. These different processes are taking place against the backdrop of a difficult economic and political climate and will have an important influence on the context and scope of EU environmental policy to 2020 and beyond.

This report provides an update to the 2009 IEEP report for the All-Party Parliamentary Environment Group on 'The Future of EU Environment Policy: Challenges and Opportunities'. It begins with a brief review of the development of EU environment policy and the changing nature of environmental issues addressed. It then goes on to provide a brief review of the key environmental challenges facing the EU in a number of different thematic areas. This is followed by an overview of the main policy and strategic discussions currently underway. The report concludes with a discussion on some of the prospects and key challenges for the future.

Setting the scene

Over the past 40 years, the EU has set up a relatively comprehensive and dense body of environmental legislation (the environmental acquis) which is credited with reducing several pressures on the environment and improving environmental standards in the majority of its Member States, including the UK.
The number of items of EU environmental legislation has increased rapidly over the years (see Figure 1) and in the UK it is estimated that over 80 per cent of environmental legislation originates in the EU. The same is probably true of most other EU Member States. Major progress has been made, including in relation to reductions in overall air and water pollution, improvements in the preservation of the natural environment and efforts in relation to waste and resource use. A number of factors have contributed to these achievements including strong networks of environmental actors, political will, and the effective use of 'windows of opportunity'. While there has been a good pace in the adoption of EU policies relating to the environment, progress has not been linear and has tended to be sensitive to wider economic and political cycles (Hey, 2005). The focus of EU environmental policy has shifted from an initial emphasis on addressing issues within Europe to growing consideration of the international dimension. The approach has evolved from an emphasis on controlling pollution from point sources, mainly through 'end of pipe' legislation towards a more holistic, integrated approach aimed at tackling the underlying causes of environmental damage, particularly in key economic sectors such as agriculture, transport and energy. Actor constellations have changed accordingly. The formulation and implementation of EU environmental policy now takes place in a complex system of multi-level governance involving not only environment Ministries and the Commission's Directorate-General (DG) for the Environment, but also sectoral Ministries and other DGs within the Commission, different levels of government, and a growing number of non-state actors. The European Parliament has also played an increasingly active and assertive role in relation to the development of EU environmental policy over the years. Today's environmental challenges are no longer distinct, independent and straightforward. The increasing complexity of inter-linkages between policies on climate change, biodiversity, natural resources, and environment and health has become ever more apparent in recent years. For example, the link between water and climate change is increasingly evident as large parts of Southern Europe are affected by water scarcities while other parts of Europe suffer from a rise in the frequency of major floods and related damage. Over the past few years, the focus of environmental policy has also shifted towards resource inputs in the economy and their environmental impacts. This agenda is taken forward not only by focussed environmental initiatives but also by economic policy drivers and their offshoots, including the Europe 2020 Strategy and related Flagship Initiatives. In particular, the 2011 resource efficiency Flagship Initiative has spawned a number of relevant strategies including the low carbon Roadmap and the resource efficiency Roadmap and has led to the re-conceptualisation of a number of environmental issues to relate them to the resource efficiency agenda. The web of different policies and strategies and the links between them is becoming more complex and sometimes less transparent. For example, feedback mechanisms between policies can lead to unintended negative side effects from well-intended measures, such as the case of indirect land use change impacts of biofuels (Bowyer 2010). 
Moreover, in a highly interdependent world, many key drivers of environmental pressures operate on a global scale and are likely to unfold over decades. Changes in one region can trigger a cascade of impacts that also affect other regions. The World Economic Forum has warned that a comprehensive set of interlinked global risks (see Figure 2) is evolving and that domestic governance systems lack the capacity to deal with them effectively (WEF 2011). Against this challenging backdrop, the political difficulty of agreeing different response options has increased markedly. In the current context of an extended economic crisis, many of the enabling conditions which have helped advance the development of EU environmental policy in the past are no longer fully present. How to ensure sustainability and promote environmental objectives in times of austerity is thus a critical question as we move forward. Policy is moving in new directions and a major expansion of the EU environmental acquis no longer seems likely in the coming decade. Moreover, the nature of environmental challenges faced today is such that they cannot be addressed by environmental policy alone. Rather, they require wider economic and social changes and relate to a suite of policies from trade and international relations to industrial policy, research and development and fisheries. Over the past four decades, a range of key pressures on the environment have been reduced and several aspects of Europe's environment have improved. EU policy has played a very significant role in achieving these results. However, a number of serious challenges remain; some in areas with a long history of policy efforts (e.g. managing waste, biodiversity) and those where efforts have been more limited to date (e.g. aviation, marine environment, transport). The EU's impact on the environment in other parts of the world, our so-called 'footprint', continues to grow. Further action is still required. The 2010 review by the European Environment Agency on the state of the European environment and outlook (EEA 2010c) identified a number of future priorities. These included the need to improve implementation and the management of natural capital and ecosystem services, to further integrate environmental considerations in sectoral policy domains, such as the CAP, and to achieve the transformation to a green economy. These will be amongst the principal challenges in the years ahead. However, it is less clear how we are going to address them. Given the current economic and political climate, there is a growing sense that the EU cannot proceed by regulation alone. There are active debates underway on the pros and cons of different types of intervention, their effectiveness, the associated administrative burden, costs entailed, etc. Issues of competitiveness and growth have been brought to the forefront of political priorities, creating tensions with environmental objectives. These questions are not purely theoretical; they are feeding into contemporary decisions over the next generation of policies. In the following sections we offer an overview of how these issues are being addressed in a range of themes of particular environmental significance. They include a summary of the current EU policy response and future options being explored. It is not intended as a comprehensive account of all areas of EU environmental policy, but is deliberately selective. 
Climate change and energy

Despite being among some of the largest emitters of greenhouse gases (GHGs) in the world, EU Member States are also among the most active in seeking to address the issue. At the international level, the EU negotiates as a bloc within the UN Framework Convention on Climate Change (UNFCCC) and its Kyoto Protocol. The Kyoto Protocol commits the EU-15 to reducing average GHG emissions by 8 per cent below 1990 levels between 2008 and 2012. The EU has developed an array of internal policies to implement its international commitments and also to achieve the more ambitious overarching objective of limiting global warming to two degrees Celsius above average pre-industrial temperature levels. In 2009, the EU adopted a package of climate and energy measures to implement the so-called '20-20-20' targets agreed by EU leaders in 2007 as the centrepiece of EU climate policy. The targets are to reduce GHG emissions by 20 per cent, increase the share of renewable energy by 20 per cent, and reduce energy consumption by 20 per cent, all by 2020. Although a number of Member States, including the UK, continue to support a move to strengthen the EU emission reduction target from 20 to 30 per cent, political efforts have yet to bear fruit in this regard. While overall figures offer some encouragement (see Box 1 above), major differences remain between Member States (see Figure 3).

A number of Member States are on track to meet their emission targets (including Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Sweden and the UK) (EEA 2011). Some, including Austria, Italy and Spain, are significantly off track. Moreover, these figures do not cover Europe's emissions effectively imported through the substantial trade of goods and services with third countries. Net emission transfers from third countries, particularly from newly advanced economies such as China, have increased continuously since 1990 and in total may offset Europe's emission reductions (Peters et al. 2011). Progress towards the achievement of the other 20-20-20 targets is mixed. Despite progress under several energy saving policies, estimates suggest that the projected impact of these policies would need to triple in order to meet the target of saving 20 per cent of energy use by 2020 (Ecofys and Fraunhofer Institute 2010). By contrast, the share of renewable energy sources continues to increase steadily (Figure 4 overleaf). Several Member States (Austria, Bulgaria, Czech Republic, Denmark, Germany, Greece, Spain, France, Lithuania, Malta, Netherlands, Slovenia and Sweden) are forecast to surpass their own targets, putting the EU on track to meet (or even exceed) its target of increasing the share of renewable energies in the overall energy mix to 20 per cent by 2020 (EC 2011d). The UK has a challenging target under the renewable energy Directive which it aims to meet mainly by a sizeable increase in wind power.

A number of studies have demonstrated that more ambitious climate mitigation policies are needed in Europe and are technically and economically feasible. They point to the economic benefits of an ambitious climate policy which could function as a motor for modernising the EU economy and its infrastructure, create jobs and enhance competitiveness in fast growing global markets for low-carbon goods and services.
Such a proactive approach is supported by many Member States including the UK, and a substantial group of commercial as well as environmental stakeholders are arguing for the adoption of a 30 per cent emissions reduction target as soon as possible (see Box 2 overleaf). However, there continue to be some Member States which are reluctant to take this step and the debate over moving to a 30 per cent target remains central to EU climate policy. Whilst it is important to resolve this, the eight year period to 2020 is only the near horizon. Looking beyond 2020, current information indicates that existing and planned measures on their own are not likely to be sufficient to bring the EU on a pathway to achieve its long-term emission reduction objective of 80-95 per cent by 2050 compared to 1990 levels (EEA 2011).

In the near future, new climate measures will be coming into force and others are in the pipeline. These include progressive reinforcement of the EU Emissions Trading System (ETS), which is entering its third phase in 2013 and has a pivotal role in mitigation policy, although some are questioning this in the light of the low carbon price at present. The ETS has been extended to cover additional greenhouse gases and to include aviation from January 2012 (see next section on transport). Energy efficiency has been less prominent in EU policy but is the topic of considerable action at present. The background is provided by the Commission's energy efficiency Plan presented in March 2011, which aims to ensure the EU delivers on existing policy commitments and goes beyond this to achieve a 25 per cent overall GHG emission reduction by 2020 (EC 2011e). The public sector is allocated a key role in driving change, in particular through the market power of public spending. Complementary to this are the November 2010 energy 2020 strategy and the December 2011 energy Roadmap 2050, which explores different scenarios for the decarbonisation of the energy system (EC 2011h). The Roadmap concludes that the decarbonisation of the energy system is technically and economically feasible and that the overall costs of transforming the energy system are similar in all scenarios. The Roadmap maintains that while energy prices will rise until 2030 or so, new energy systems can lead to lower prices after that. A major programme of new investments will pay off in terms of growth, employment, greater energy security and lower fuel costs. The Roadmap is to be followed by initiatives in specific energy policy areas, starting with proposals on the internal market, renewable energy and nuclear safety in 2012.

The EU has been prominent in international climate change negotiations, playing a leading role in the establishment of the Kyoto Protocol and in maintaining a future for it after 2012. To some extent it has been willing to accept greater emission reduction commitments than most other major players, such as the USA and Canada, although these have not been very demanding. However, it is now signalling that further European action depends on the willingness of others to commit. The latest round of UNFCCC negotiations in Durban, South Africa led to an agreement among all parties to draft a new protocol, legal instrument or an agreed outcome with 'legal force' by 2015. This was a key condition for the EU to commit to new binding emission reduction targets under a second commitment period of the Kyoto Protocol from 2013.
Several countries, including Norway, Iceland and Switzerland, will also be part of this second commitment period; although others, including Canada, Russia and Japan, will be notably absent. The targets and time span of this new scheme will be discussed and finalised next year. The EU remains pivotal to the future of the Kyoto Protocol and after a disappointing performance at the 2009 conference in Copenhagen seems to have somewhat re-established its leadership on climate change on the global stage. This is to be welcomed and has provided a much needed boost to EU morale in this area. However, a significant amount of work lies ahead to deliver on the agreed deal and to mobilise the resources that have been committed to developing countries to help them respond to the climate challenge. This will be a priority for the EU and for the more active national governments, such as the UK.

Globally or domestically, it will only be possible to adequately address the climate agenda by building a low carbon economy. This is a very substantial undertaking but the EU has started to move in this direction. Climate change concerns have begun to infiltrate the economic discourse and are now reflected in the EU's core economic strategy and in its spending priorities to 2020. Despite the difficult economic context, efforts are being made to incorporate climate change concerns in relevant policies such as energy, transport and regional policy, and are leading to the gradual acceptance of a new approach. Key challenges for the EU in the next decade include the following:

• Consolidating, reinforcing and strengthening the existing agenda (e.g. extending action on energy efficiency through mandatory and other measures) so that it is in line with the trajectory set out in the EU's 2050 low carbon roadmap.

• Building climate concerns more tightly into economic policy, including measures aimed at innovation and research and at key sectors such as energy supply and transport.

• Making more progress with securing adequate finance for climate-related investments both within the EU and externally.

• Improving coherence between climate change and other environment policy areas so as to ensure an integrated approach in which climate change considerations are sufficiently embedded in other policies and to address any potential negative environmental aspects of climate measures, as exemplified by the biofuels debate.

• Securing an appropriate global framework for addressing climate issues more urgently.

Transport

The transport sector continues to be a source of significant environmental pressure in the EU. Emissions from transport are a major source of the EU's GHG emissions (Figure 5 below). In 2010 transport was responsible for more than a fifth of GHG emissions from the EU (EEA 2011c). The increasing demand for transport has offset potential gains from improvements in the energy efficiency of new vehicles. Transport emissions also exacerbate problems with poor air quality and noise, particularly in urban areas. Additionally, transport infrastructure and its users constitute one of the main drivers of pressure on Europe's ecosystems and biodiversity, particularly with regard to the fragmentation of landscapes and ecosystems, and account for the use of large quantities of raw materials. Given its contribution to EU GHG emissions, the transport sector has become an increasingly important target of the EU's climate change and energy policy.
Under the 2009 renewable energy Directive, a minimum of 10 per cent of the final energy consumption of transport is to come from renewable sources by 2020 in each Member State. A 2009 amendment to the fuel quality Directive included a target for the reduction of lifecycle GHG emissions from liquid transport fuels of at least 6 per cent by the end of 2020. In March 2011, for the first time, the Commission proposed a specific GHG emission reduction target for the European transport sector. In the White Paper on transport, the target is to reduce GHG emissions from transport by 60 per cent by 2050; in addition there are a number of goals for a competitive and resource efficient transport system (EC 2011i).

Policy on vehicle emissions is now an important means of pursuing progress in the sector, albeit relatively slowly. Fuel efficiency requirements for new passenger cars are established in the passenger car CO2 Regulation. This sets an average target of 130 gCO2/km to be met by manufacturers by 2015 and an average target for 2020 of 95 gCO2/km. Another Regulation aims to reduce average CO2 emissions from new vans to 175 gCO2/km by 2017 and 147 gCO2/km by 2020 (Skinner). A report on progress towards the targets set out in the passenger car CO2 Regulation suggests that in 2010, the average new car in the EU-27 had CO2 emissions of 140.3 g/km. This is an improvement of 3.7 per cent on the 2009 figure. In the UK, average CO2 emissions of new cars declined from 150 g/km to 144 g/km (Transport and Environment 2011). These figures provide some initial indications of the effectiveness of the regulatory measures introduced following the failure to deliver significant emission reductions under an earlier voluntary agreement between the Commission and car manufacturers.

A revised euro-vignette Directive on road charging of heavy goods vehicles (HGVs) agreed in 2011 allows Member States to levy additional charges on HGVs to cover the cost of the air and noise pollution they create, in addition to existing infrastructure charges. Although Member States are currently not obliged to introduce such charges, the measure paves the way for a more intelligent approach both to taking account of external costs in charges and achieving greater European harmonisation, and could ultimately enhance incentives for the use of cleaner vehicles.

Concrete measures have also been introduced in relation to aviation. In 2008, a new Directive was adopted to expand the scope of the EU ETS to include the aviation sector. However the Directive, which requires all flights landing and taking off from EU airports to be covered by the ETS from 2012, therefore levying costs on airlines, has been condemned by a number of developed and emerging economies which do not have equivalent measures themselves. Just how controversial the measure has been is evident in the actions taken over the past few months, which include the approval by the US House of Representatives in October 2011 of a draft law that, if passed, would ban US airlines from participating in the ETS; threats of legal action by the Chinese Air Transport Association and the Indian government; and a legal challenge brought before the High Court of Justice of England and Wales by a group of US airlines.
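As a back-of-the-envelope illustration of the pace implied by the passenger car targets above (this calculation is an illustration, not part of the report's analysis, and assumes, purely for the sake of the arithmetic, a constant proportional annual improvement in the fleet average), moving from the 2010 figure of 140.3 gCO2/km to the 95 gCO2/km target for 2020 requires an annual reduction rate r such that:

140.3 × (1 − r)^10 = 95, so 1 − r = (95/140.3)^(1/10) ≈ 0.962, giving r ≈ 3.8 per cent per year.

This is marginally above the 3.7 per cent year-on-year improvement recorded in 2010, suggesting that the recent pace of reduction would need to be at least sustained throughout the decade.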
Aside from the question of emissions, the transport sector is the principal consumer of oil-based fuels on which it is almost wholly dependent. Currently there is a major push to encourage the use of biofuels and to accelerate the process of electrifying road vehicles. There are, however, many challenges to ensuring that these alternative fuels and energy sources are sustainable and in fact low carbon. Efforts to decarbonise the transport sector are inevitably linked to efforts to decarbonise energy supply (European Expert Group 2011), as the use of low carbon electricity in the transport sector requires the decarbonisation of the electricity supply industries. Moreover, the GHG benefits from biofuels vary according to the feedstock used, direct land use change and associated emissions from planting feedstock, emissions from processing, transport and the use of the by-products, as well as the potential impact of indirect land use change (Bowyer 2010). In addition, EU and other public spending will need to be shifted to the creation of new low-carbon infrastructure (such as high-speed rail, new bus and rapid urban transit systems in cities) and charging facilities for electric vehicles, as well as initiatives to reduce the need to travel.

In October 2011, the Commission set out its proposal for the funding mechanism for EU infrastructure priorities in the transport, energy and telecommunications sectors in the 2014-2020 period: the Connecting Europe Facility (CEF) (EC 2011j). A proposed budget of €31.7 billion is to be invested in transport infrastructure. However the total amount of financing available may be significantly larger with the use of the new EU financial instrument, the Project Bond Initiative, which will be used to attract additional private finance to EU priority projects (Withana et al.). These are potentially powerful levers for change if they can be directed into the most appropriate areas of investment.

The transport sector is one in which GHG emissions continue to rise; policy interventions remain quite controversial and in some cases are relatively recent. There are a number of important issues on the horizon, including the following:

• The adoption of mandatory targets for vehicle emissions has proved quite successful in reducing emissions in the last year, although there remains some way to go. A proposal to amend legislation on CO2 from cars and vans is due in 2012 and may lead to the development of targets beyond 2020. Similar legislation for CO2 emission standards for vehicles in other modes could be developed in the coming years, as called for in the 2011 White Paper on transport.

• We are now part way through the process of extending provisions in the EU ETS to the aviation sector. Although this expansion has been agreed by the EU, it has proved to be particularly controversial among international players and is under heavy fire from the US in particular. It is nonetheless important if rising emissions from the sector are to be tackled effectively, and addressing related concerns will be a key challenge in the coming year.

• The issue of shipping emissions is still at an early stage in development and the Commission is expected to put forward proposals in this regard in 2012 in the absence of international action.
• Another key challenge relates to the decarbonisation of transport fuels and ensuring the sustainability of alternative fuels and energy sources, as well as addressing issues arising from the use of 'unconventional' sources such as oil sands which have the potential to increase the carbon content of transport fuel. Here the fuel quality Directive is pivotal (Skinner).

• At the same time, the need for new approaches to transport infrastructure is increasingly recognised, at least in principle; including the weight of investment in rail, the provision of charging points for electric cars etc. Whether this will be translated into actual results on the ground remains to be seen.

Water

Over the years, the discharge of pollutants to fresh and coastal waters has fallen in much of Europe, leading to improvements in freshwater quality, for example in relation to inland bathing waters. However, pollution levels remain significant in several European rivers which show a mix of different pollutants, including nutrients, biocides, industrial and household chemicals or pharmaceuticals. The quality of coastal waters is also affected (EEA 2010c). The concentration of nitrates in rivers and ground waters remains a persistent challenge. Diffuse pollution from agriculture is the main source of nitrate pollution, contributing to eutrophication in coastal and marine waters and pollution of drinking water, particularly where ground waters are contaminated.

There are also growing problems in relation to water quantity. Although water abstraction rates have fallen in the majority of EU Member States, particularly in the eastern Member States, overexploitation remains a challenge in many parts of Europe. While water scarcity is a growing problem, in particular in the south of Europe (see Figure 6), other parts of Europe, including many urban areas in England, are suffering from a rise in the frequency of major floods (see Figure 7) and related flood damage. The costs of floods have increased markedly as a consequence (EEA 2010c). European water bodies have also been altered through physical modifications, leading to changes in water flows, habitat fragmentation and obstructions to species migration.

The water framework Directive (WFD), adopted in 2000, provides the overall policy framework for preserving and restoring the quality of European water bodies. It creates a timeframe for policy action that should bring all water bodies in the EU to good status by 2027. The WFD is widely appraised as a good example of integrated approaches to environmental policy-making, highlighting the ecological assessment of ecosystems and the focus on river-basin management, full cost-recovery and water pricing. It is complemented by daughter directives on groundwater and on environmental quality standards, and is linked to the emission-oriented approach to water protection set out in the urban wastewater treatment Directive and the nitrates Directive. Due to its broad character, the WFD lacks clarity in detail and leaves a lot of room for diverging interpretation by Member States of the actions required. Implementation remains a major challenge, with many Member States making slow progress towards their obligations.
The recent publication of Member States' river basin management plans (RBMPs), which are required by the Directive, has made clear that a variety of emissions of hazardous substances continue to pose a threat to the quality of Europe's surface water (EEA 2011g). For these reasons the achievement of the targets under the Directive is uncertain. Further action is needed, particularly with regard to using water in agriculture and buildings more efficiently. Progress by Member States in introducing economic instruments such as water pricing needs to be accelerated, while the principle of cost-recovery remains controversial.

The scope of EU policy was expanded to flood risk management by the 2007 floods Directive. The Directive aims to establish a framework for the assessment and management of flood risks and is strongly linked to the WFD implementation process. The Directive requires Member States to assess flood risks for each of their river basins and associated coastal zones, develop flood hazard maps, and produce flood risk management plans. By the end of 2011, Member States are to undertake a preliminary assessment to identify the river basins and associated coastal areas at risk of flooding; flood risk maps are to be developed by 2013. The mapping phase will provide a major improvement in information available. By 2015, Member States are to establish flood risk management plans focused on prevention, protection and preparedness. This requires a particular effort by the legally competent authorities within Member States to get the new system in place. Drafting these plans would benefit from a stronger link to issues of land management, including agriculture. The approach taken so far has tended to be rather reactive, in terms of better preparation for floods, rather than seeking to mitigate their causes.

EU water policy provides a comprehensive legislative framework that aims to address issues related to water quality as well as water demand and availability. However a number of challenges remain, including the following:

• EU Member States have considerable autonomy and flexibility with regard to meeting the objectives of the WFD, for example in relation to adequate pricing of water use. Many are proceeding slowly, so implementation continues to be a significant challenge.

• Economic instruments focusing on efficiency in water supply are not widely used in Europe, while the principle of cost-recovery and water pricing remain controversial.

• An effective approach to better integrating water concerns in key sectoral policies is missing, particularly with regard to increasing the efficiency of water use in agriculture and buildings. For example, the introduction of efficiency standards for water use in buildings offers significant potential for future savings.

• Despite some progress in addressing the potential of water savings in different sectors, the widespread and potentially growing challenges of water scarcity and droughts are largely outside the current policy framework. There is currently no consensus on whether future regulatory action on droughts is needed, although there is widespread agreement on the need for increased policy coordination in the area.

• There is also a need to improve the quality and availability of information and data on water issues, together with related institutional capacity, even more so to help improve understanding of, and responses to, climate change in future (Deloitte and IEEP, 2011).

The year 2012 will be important for EU water policy.
The Commission is currently undertaking a 'fitness check' of EU water law, covering the WFD, groundwater, environmental quality standards for water, urban waste water treatment, nitrates and floods Directives and the Communication on water scarcity and droughts. The aim of such 'fitness checks' (which are part of the EU's better/smart regulation agenda) is to identify excessive burdens, overlaps, gaps, inconsistencies and/or obsolete measures which may have appeared over time. The conclusions of the fitness check, together with a report on implementation of the WFD, an assessment of the RBMPs and the vulnerability of water resources to climate change, will feed into a 'Blueprint to safeguard Europe's waters'. This is expected to be presented in November 2012. The Blueprint is due to address the broad scope of core EU water policy, making recommendations for improvements, which might include legislative changes.

Air

Forty years of EU policy on air pollution have resulted in an absolute decoupling of direct air emissions from economic growth (ETC/SCP 2011). In the period 1990-2009, levels of several air pollutants in Europe dropped significantly, in particular of sulphur dioxide (SO2), nitrogen oxides (NOx) and lead (Pb) (EEA 2011d). However, a reduction of emissions does not equate to a reduction of ambient concentrations, in particular for ground-level ozone (O3) and particulate matter (PM), concentrations of which have not decreased despite a reduction in respective emissions. It is estimated that 17 per cent of the EU urban population lives in areas where the EU ozone target value set by the air quality framework Directive is not met (Figure 8 opposite). The WHO has estimated that a large majority of the European urban population breathes air that exceeds its recommendations for PM10 (WHO/JRC, 2011), causing a decrease in life expectancy and a rise in respiratory and cardiovascular problems among others. Past emission reductions have not always produced a corresponding drop in atmospheric concentrations because of complex linkages between the two, making it more of a challenge for policy-makers to move trends downwards. Poor air quality is also linked to the increase in transport volumes in the EU (see section on transport). Moreover, while action taken to improve air quality is expected to yield benefits for climate change and vice versa, in certain cases the use of some new technologies to reduce CO2 emissions could be counter-productive to efforts to improve air quality. For example, a 2011 report by the EEA points to the risk of increasing air pollutant emissions if carbon capture and storage (CCS) technology is applied widely in power and industrial plants in the EU (EEA 2011e).

The last strategic review of the EU air policy framework resulted in the Thematic Strategy on air pollution in 2005, which set objectives for air quality for the period up to 2020 related to impacts on, and risks to, human health (e.g. a 47 per cent reduction in loss of life expectancy) and the environment (e.g. a 43 per cent reduction in areas or ecosystems exposed to eutrophication). To reach these objectives, sector-specific priorities for EU action were identified and revisions of the three main air pollution directives called for. These were the air quality framework Directive, the national emission ceilings (NEC) Directive and the integrated pollution prevention and control (IPPC) Directive.
The air quality framework Directive was subsequently revised with some modernisation in assessment and monitoring and changes to air limit values, including the introduction of new air quality limit values for fine particulate matter (PM2.5). The IPPC Directive was recast as the industrial emissions Directive, incorporating other industry-related legislation, including the large combustion plants Directive. This should lead to tighter controls on industrial air pollution emissions, both in terms of conditions set in permits and enforcement of those conditions. However, no proposal to revise the NEC Directive has emerged, despite repeated rumours that it would be forthcoming. Various reasons for this delay have been cited over the years, including first the need to develop GHG emission targets, then the need to take account of the revision of the IPPC Directive and, now, the need to take account of the economic crisis. Furthermore, measures taken to reduce GHG emissions, such as those contained in the climate and energy package, are also expected to lead to important reductions in pollutant emissions.

The implementation of EU air quality policy continues to be a major challenge. Issues relating to compliance and enforcement in the Member States were identified as problems associated with the previous IPPC Directive. The new industrial emissions Directive was designed to address some of these issues and progress can be expected over time. However, given that implementation problems were significant with the preceding legislation, it will be necessary to pay close attention to implementation of the new Directive on the ground. This will be an important factor in the functioning and performance of the measure, particularly where limit values are more stringent or where installations are included for the first time. Moreover, the value of the air quality Directive depends on the performance and functioning of measures introduced at the European level to reduce emissions at source and on the implementation of national, regional and local measures to ensure air quality limit values are met.

Several Member States, including the UK, have requested and been granted derogations from meeting their obligations under the air quality framework Directive. A 2011 report from the UK Environmental Audit Committee on air quality noted that the UK is still failing to meet European targets for safe air pollution limits across many parts of the country. The report found that 30,000 deaths in the UK were linked to air pollution in 2008, with 4,000 in London alone, and that poor air quality is shortening the life expectancy of people in the UK by an average of seven to eight months, costing society up to £20 billion per year (EAC 2011). Given the scale of the challenge and the on-going threat to health, especially in urban areas, the continued problems of non-compliance, in particular for particulate matter and ozone, need to be addressed and the legislative framework brought up to speed. Although economic concerns need to be taken into account, they should not derail the process entirely. Moreover, interactions with other policy developments (such as on agriculture, transport, biodiversity etc.) will need to be taken into account more fully. Such issues could be among those taken up in the review of EU air quality policy currently underway. This is expected to conclude in 2013 with the presentation of an EU clean air package, updating existing policies and directives including the NEC Directive.
The review is expected to propose stricter emission ceilings for 2020 and potentially see the introduction of a ceiling for fine particulate matter. To support this, a broad consultation process has been launched, which includes an online public consultation, the establishment of a stakeholder group, the organisation of dedicated workshops and events, and dialogue with international organisations (such as the WHO and UNECE) (EC 2011).

Chemicals

The production, use and disposal of chemicals have been linked to a range of environmental and health related problems. Human exposure to chemicals takes place through multiple sources like water, air, food, consumer products and indoor dust. Of particular concern are persistent and bio-accumulative compounds, endocrine-disrupting chemicals and heavy metals used in plastics, textiles, cosmetics, dyestuffs, pesticides, electronic goods and food packaging (EC 2010b). Chemicals in consumer goods may also be of concern when products become waste and chemicals migrate to the environment and can be found in wildlife, ambient air, indoor dust, wastewater and sludge. There is also growing attention to the possible combined effects of exposure to a mixture of chemicals found at low levels in the environment or in consumer goods, especially among vulnerable young children (EEA).

The cornerstone of the EU's approach to regulating the production and use of chemical substances is the Regulation on the registration, evaluation, authorisation and restriction of chemicals (REACH). The Regulation entered into force in 2007 following a major lobbying offensive by industry groups, consumer, health and environmental organisations. Under REACH, all chemical substances manufactured or imported in quantities of 1 tonne or more must be registered by the manufacturer/importer with the European Chemicals Agency (ECHA). The registration contains a dossier with information to enable the substance to be used safely. The ECHA can evaluate dossiers and substances. Downstream users are to contribute to the dossier. Substances of very high concern are not to be used unless authorised. Companies will be required to make efforts to find safer substitutes as part of the authorisation procedure; and the manufacture, marketing and use of substances can be restricted. The Regulation is based on the principle that it is for manufacturers, importers and downstream users to ensure that they manufacture, place on the market or use only those substances that do not adversely affect human health or the environment.

REACH is an important piece of legislation, not least since it combines traditional regulation with other approaches, notably enhanced producer responsibility. It is also one of the more ambitious and complex environmental regulations, involving a lengthy implementation process. As part of the REACH procedure, substances of very high concern (SVHCs) are identified as part of a process to phase out the use of the most hazardous substances. So far, 53 SVHCs have been identified and included in the candidate list. From this candidate list, priority substances are recommended by the ECHA and their inclusion in the so-called Annex XIV list is decided through comitology.
Once a substance is added to this list, any manufacturer, importer or downstream user of that substance must apply for an authorisation from the Commission or they will not be permitted to use it after a certain deadline (the 'sunset date'). In September 2010, the first six SVHCs were added to the authorisation list, with substance-specific sunset dates ranging from 2014 to 2015. In December 2011, ECHA added twenty SVHCs to the candidate list. Nineteen of these substances are classified as carcinogenic and/or toxic for reproduction. In addition, for the first time, one substance was identified as an SVHC because of its endocrine-disrupting properties, which give rise to an 'equivalent level of concern' due to likely serious effects on the environment (ECHA 2011).

Specific legislation on the authorisation and use of pesticides has also seen considerable development in recent years. A new Regulation concerning the placing of plant protection products on the market and a Directive establishing a framework for EU action on the sustainable use of pesticides were adopted in 2009. The new rules on pesticides are based on hazard-based criteria for granting authorisations and apply tougher controls or a ban on several SVHCs. The Regulation aims to harmonise rules for placing pesticides on the market, while the Directive establishes a framework addressing agricultural practices by promoting the use of integrated pest management (IPM) and alternative approaches or techniques, such as non-chemical alternatives to pesticides. Under the Directive, Member States are required to adopt national action plans setting quantitative objectives, targets, measures and timetables to reduce the risks and impacts of pesticide use on human health and the environment, and to encourage the development and introduction of IPM and alternative approaches to reduce dependency on pesticides in farming. Member States are also required to ensure that the use of pesticides is minimised or prohibited in certain specific areas.

Implementation of the REACH Regulation remains a critical challenge, as is evident in the slow pace of application to date. Implementation is likely to be further complicated by the fact that many binding dates under REACH lie quite far in the future and by the complex procedures for authorisation and restriction. Moreover, REACH places administrative and procedural burdens not only on public authorities, but also on ECHA, which must compel industry to discharge its responsibilities. Resource constraints and the need for prioritisation have thus meant that there is still a long way to go before appropriate risk management measures are actually taken for many 'phase-in' substances. Implementation of the new pesticides rules will also prove challenging.

There remains a need to improve knowledge on the environmental and health impacts of chemicals, particularly with regard to the effects of low doses and multiple exposures, methods for risk assessment of endocrine disruptors, cumulative risk assessment, the effects of chemical cocktails etc. Planned reviews of relevant legislation to be presented in 2012 may help to address some of these challenges. The forthcoming review of the REACH Regulation is expected to assess the operation of the Regulation to date, identify lessons learnt, in particular with regard to costs and administrative burden, review the scope and potential overlaps of REACH with other EU legislation on chemicals, and review ECHA.
Although the review is more likely to focus on enforcing existing rules than on a major overhaul of the legislation, sensitivities concerning the competitiveness of the European chemical industry can be expected to come to the fore yet again. A revision of the EU strategy on endocrine disruptors is also expected from the Commission in late 2012.

Waste

In 2011, the EU economy generated around six tonnes of waste per person. Although Europe has become more efficient in managing material resources, in absolute terms the consumption of materials continues to increase and a consistent downward trend in waste generation has not (yet) been achieved. The EU has a long policy tradition and track record in waste management, with the emphasis shifting over time from disposal to recycling and prevention. Reducing waste generation, however, has not proved easy to achieve on the ground and there remain substantial differences in the results achieved by Member States (EEA 2010). On the positive side, waste policy has contributed to higher recycling rates, now reaching 60 per cent for packaging waste and 39 per cent for municipal waste (ETC/SCP 2011). However, half of the total waste generated is still sent to landfill, with large differences between Member States (see Figure 9). Waste legislation continues to suffer from sub-optimal implementation and enforcement in many Member States, which has meant less change on the ground than implied by the legislation in place.

The philosophy behind current EU waste policy was set out in 2005 in the Thematic Strategy on waste prevention and recycling. This advocated a lifecycle approach, a shift towards a materials-based approach to recycling, a new focus on the prevention of waste and a transition towards more flexible mechanisms of policy making and standard setting at the EU level. In 2008, the revised waste framework Directive reset the baselines for much of EU waste management, redefined key terms and concepts, reinforced the waste hierarchy and set the EU's first ever sector-wide targets for re-use and recycling, complementing existing product-based action.

However, progress to date has been disappointing. A 2009 Commission report on the implementation of EU waste legislation from 2004-2006 underlined a series of weaknesses in many countries, including a lack of waste treatment infrastructure and of separate waste collection, and missed recycling and recovery targets (EC 2009). While 90 per cent of hazardous waste is estimated to be treated within the EU-15, the monitoring of illegal shipments of waste still needs improvement. Volumes of electronic waste have grown rapidly and exports need to be better regulated to avoid the potential environmental burdens arising from such waste (which contains hazardous substances) being treated in third countries with less stringent environmental standards than those in the EU. This requires a recast of the WEEE Directive, which was first proposed in 2008 but divided the European Parliament and the Council until agreement was reached in December 2011.

The 2011 review of the Thematic Strategy on waste (EC 2011a) found that significant progress has been made in the improvement and simplification of legislation, the establishment and diffusion of key concepts such as the waste hierarchy and lifecycle thinking, an increasing focus on waste prevention, efforts to improve knowledge, and new European collection and recycling targets.
In terms of waste management performance, recycling rates have improved, the amount of waste going to landfill has decreased, the use of hazardous substances in some waste streams has been reduced, and the relative environmental impacts per tonne of waste treated have decreased. These achievements are, however, offset by the negative environmental impacts caused by the increase in overall waste generation. The review concluded that the Thematic Strategy has played an important role in guiding policy development, but that the EU is still some way from being a 'recycling society'. More impetus is now needed in a number of areas to inter alia:
• Properly implement and enforce existing EU waste legislation. In this regard, the Commission suggests the development of a 'proactive verification procedure' and an early warning system on compliance with key EU targets, based on national waste management plans,
• Define new and more ambitious (material-specific) prevention and recycling targets,
• Improve the knowledge base on waste and resources,
• Support national actions on waste prevention,
• Increase coordination of national inspection activities,
• Promote combinations of economic and legal instruments for waste management,
• Improve the competitiveness of EU recycling industries and develop markets for secondary raw materials,
• Improve measures to prevent illegal waste exports,
• Improve stakeholder participation and raise public awareness on waste, and
• Promote lifecycle thinking (e.g. through more consistency between waste and product design policies).
The Commission is expected to set out proposals to address some of these issues in 2012.

Resource use

The overall environmental impacts of EU natural resource use, in and beyond Europe, are growing. Europe is struggling to achieve absolute decoupling of resource use from economic growth, despite a range of efforts to improve resource efficiency over the years. Growth in resource productivity has been significantly lower than growth in labour productivity. While many products are gradually becoming more energy-efficient, efficiency gains are often offset by changing consumption patterns. Some forms of material use are expanding significantly and the ecological footprint of the average European citizen exceeds 4.5 global hectares per capita (ETC/SCP 2011).

A new approach to resource use in Europe is increasingly seen as a central plank of the EU environmental agenda for the next decade. There is growing recognition that Europe cannot continue to consume more than its share of global resources and an understanding that the problem goes well beyond the issue of industrial raw materials. This is not a new theme in EU policy; there has been a history of efforts to achieve sustainable consumption and production (SCP), for example, as well as more sectoral initiatives concerned with water, waste etc. However, a much stronger link is now being made to the EU's economic strategy (see section 5). Improving resource efficiency and, more tentatively, reducing resource use, is seen as part of a strategy for green growth, and the economic benefits and scope for win-wins for business and also for consumers are being emphasised.

In January 2011, the Commission published a Flagship Initiative on resource efficiency (EC 2011b) as part of the Europe 2020 Strategy. It provides a long-term framework for actions in several areas to support the shift towards a resource-efficient, low-carbon economy aiming at sustainable growth.
Policy is to be developed in a set of roadmaps, namely on the decarbonisation of Europe's economy, improving resource efficiency and initiating a long-term energy transition, as well as in more sectoral plans such as the Blueprint for water policy and the 2020 biodiversity Strategy. The Flagship Initiative lacks ambition in tackling the ever-increasing trends in European consumption, which are one of the main roots of the problem. Greater efficiency alone will not lead to a decrease in absolute resource use, and some voices in the EU are now calling for a 'resource-intelligent' Europe, combining a less materialistic approach to well-being with new economic models (e.g. a goods-as-services based economy).

The Commission's thinking is developed considerably further in the Roadmap to a resource-efficient Europe published in September 2011 (EC 2011c). Natural resources, ecosystem services and natural capital concerns are linked with resource use, which is understood in a broad sense to include biodiversity. The Roadmap postulates an ambitious long-term policy vision, including no net land-take by 2050 and a sustainable use of resources within planetary boundaries. These are coupled with more detailed milestones of varying levels of ambition, including the phasing out of environmentally harmful subsidies by 2020. However, there are no overarching targets, such as those that apply in the area of climate change policy, and agreement on concrete and detailed indicators to guide action by policy-makers, business and investors is deferred to 2013. Although this may delay critical progress, the Roadmap does refer to the principal areas of European resource consumption, namely buildings, transport and food, and while it is cautious in approaching the topic of absolute reductions in consumption, the implications of inaction are not disguised.

The main question now is how far the Roadmap will be converted into concrete proposals at national as well as EU level over the next eight years. A potentially large range of measures could be introduced over the next decade or so if the Roadmap is to become a concrete action plan, although this would require a step change in the level of determination, starting with the adoption of targets and indicators. Future measures could include:
• The review and tightening of existing standards and targets, not least to increase recycling rates for metals and other materials; current targets for collection rates under the batteries Directive, for example, are 25 per cent in 2012 and 45 per cent in 2016,
• Stimulating changes in the design and use of certain products, for example by extending the scope of the ecodesign Directive,
• More focussed efforts to apply existing measures, such as realistic pricing for water under the WFD and respect for the principle of Maximum Sustainable Yield for commercial fish stocks under the CFP,
• The wider application of economic instruments, including appropriate incentives for recycling and refund systems, the selective use of levies and taxes (generating funds that can be applied to complementary uses), the withdrawal of environmentally harmful subsidies, greater use of green public procurement etc.,
• An initiative to bring a land-use dimension to EU policy, globally as well as domestically, and
• Investment in innovation, research and development, education and more innovative approaches to addressing consumption levels.
The resource efficiency agenda extends well beyond the sensitive issue of reducing dependence on raw materials, such as rare earths, and must embrace the health of ecosystems more broadly. The Roadmap is a good foundation for building a new generation of policies that grasp the opportunity to make resource efficiency central to the creation of a greener economy.

Soil

Soil has received much less attention than other environmental media in EU policy, even though soil degradation is accelerating in many parts of Europe. European soils continue to face multiple threats such as erosion, organic matter decline, contamination, compaction, salinisation, landslides, sealing and biodiversity decline (EEA 2010c). Although a number of sectoral EU policies have an impact on soil management practices, including measures taken under industrial emissions policy as well as water, waste and agricultural policy, they provide only a patchy level of protection.

The development of specific EU legislation addressed primarily to issues of soil protection and the prevention of land degradation has been limited by concerns among many Member States that soil protection is an area of national competence and potentially costly, particularly with respect to contaminated land restoration. The last serious EU-level initiative in this area was the Commission's 2006 Thematic Strategy on soil protection, intended to pave the way for future policy. The Thematic Strategy aimed to develop a new approach to the management and protection of Europe's soils and was built around four pillars for action: the integration of soil protection into national and Community policies; closing recognised knowledge gaps; increasing public awareness; and the development of framework legislation aimed at the protection and sustainable use of soils.

The Commission's 2006 proposal for a soil framework Directive was meant to have a significant impact on soil protection and the retention of soil functions in Europe. In its current form, it would require the identification of soils at the greatest risk of degradation and actions to address this, with the precise obligations on Member States being a matter of controversy. The UK, Germany, France, Austria and the Netherlands have established a blocking minority, resisting the adoption of the proposal in its current form for the reasons cited above. Many other Member States support the proposal quite strongly.

Some progress has, however, been made under the other pillars of the Thematic Strategy. Under the first pillar, efforts have been made to integrate aspects of soil protection into relevant EU policies: for example, the requirements of the IPPC Directive to ensure the protection of soil when an industrial operation is discontinued were too vague to enforce changes in actual practices, and they have been clarified in the new industrial emissions Directive. Under the second pillar, a number of studies have considerably improved the existing body of knowledge in the area of soil protection, although a continuing concern relates to the lack of harmonised information at EU level on soil conditions. Under the third pillar, the adoption of the Thematic Strategy led to several EU-wide stakeholder conferences on soil-related issues, attended by scientists, Member State representatives, civil society and other stakeholders.
This rising level of awareness has been one of the factors deepening stakeholder engagement in the debate on future policy in an area which has received much less attention than others such as air and water. There has also been growing awareness of soil issues linked to the climate change debate (e.g. carbon sequestration) and of the role of soils in delivering ecosystem services. There is now greater understanding of soil interactions with other priorities such as the need to sequester carbon, manage land in a way that enables adaptation to climate change and ensure the protection of water, both in terms of quality and quantity. It is clear that soil conservation needs more priority in both agricultural and environmental policy. However, it is not yet clear whether the adoption of a new framework Directive will be the way forward in this regard.

There are a number of issues on the horizon which could contribute to soil protection. They include a Commission technical document on soil sealing expected to be published in early 2012. Also relevant will be the outcome of deliberations on CO2 emissions associated with land-use change, discussions on carbon credits for the protection of forests (and other terrestrial carbon sinks), the evolution of criteria for the sustainable production of biofuels, cross compliance in future EU agriculture policy, and the EU climate adaptation strategy which is expected to appear in early 2013. Soil management certainly should attract more attention within the CAP in the future.

Biodiversity

Despite a fairly wide-ranging regulatory framework and considerable efforts to establish a European network of protected areas (Natura 2000), the loss of Europe's biodiversity remains a persistent problem. On a global scale, biodiversity is still threatened by increases in the five principal pressures: habitat change, overexploitation, pollution, invasive alien species and climate change (CBD 2010). In Europe, species decline is particularly marked in agricultural and grassland ecosystems, mainly due to intensified farming and unsustainable land management. The fate of wildlife-rich habitats varies: some face very considerable pressures, whilst others, such as European forests, are growing in coverage despite being affected by acidification, eutrophication, forest fires and other regional pressures (Forest Europe, UNECE and FAO 2011).

In 2001, EU leaders committed to halting the decline of biodiversity in the EU by 2010 and to restoring habitats and natural systems. Despite some progress in various areas, such as the extension of the Natura 2000 network, the EU failed to achieve this target. This is due to continuing negative trends in key pressures, such as changes in agricultural systems, pollution of freshwater, land abandonment and habitat fragmentation. There have been particular problems related to the implementation of the nature Directives and the biodiversity action plan, including slow or incomplete identification and designation of Natura 2000 sites (especially in the marine environment), inadequate management of habitats and species within Natura sites and especially in the wider environment, as well as problems related to relevant sectoral policies.

Over the last two years there has been increasing recognition in the policy process of the economic value of biodiversity and ecosystem services (such as healthy soils, clean water and carbon sequestration).
This has been driven in part by developments in the knowledge base, inter alia the TEEB (The Economics of Ecosystems and Biodiversity) initiative, which has helped to raise the political profile of biodiversity issues in recent years. A new biodiversity target was agreed by the Council in March 2010: to halt the loss of biodiversity and the degradation of ecosystem services in the EU by 2020 and restore them in so far as feasible, while stepping up the EU contribution to averting global biodiversity loss. The explicit addition of ecosystem services to the target reflects the increased recognition of the value of biodiversity to society and the need to broaden concern for biodiversity across society and sectoral interests (see Box 4).

Box 4. Business and biodiversity - making the case for a lasting solution
A 2010 report by the United Nations Environment Programme (UNEP) set out a case for incorporating biodiversity in business models. The report looks at a broad spectrum of businesses across different sectors and for each provides examples of companies that have taken action to reduce their impacts on biodiversity. The report outlines the case for businesses to examine their impact on biodiversity, including expanding market opportunities, brand advantage, opportunities for new business ideas, and the potential of new green technologies. The report argues that a failure to address biodiversity issues could affect the supply of resources, access to markets, reputation, licences to operate and access to finance. In addition, companies could face a consumer backlash, as an increasing number of customers demand sustainably produced products and services.
Source: UNEP-WCMC and UNEP-DTIE, 2010

A new EU biodiversity Strategy to 2020 was subsequently produced in May 2011, setting out six main targets, 20 actions and 36 measures (EC 2011k). The targets relate to full implementation of the birds and habitats Directives, maintaining and restoring ecosystems and their services, increasing the contribution of agriculture and forestry to maintaining and enhancing biodiversity, ensuring the sustainable use of fisheries resources, combating invasive alien species and helping to avert global biodiversity loss.

Biodiversity remains a contested area and, despite being one of the oldest areas of EU environmental policy, implementation remains a major challenge, including in several older Member States such as the UK. The EU's 2010 biodiversity target proved very difficult to achieve, particularly given the scale of the task and insufficient political support and action by Member States. It is not clear whether there will be sufficient commitment to the action now required and whether the necessary mechanisms are in place to deliver the new EU biodiversity objectives for 2020. Key challenges going forward include how to secure Member States' commitment to full implementation of existing measures, the development of new measures to address gaps in the coverage of EU policies (e.g. on invasive alien species and green infrastructure), how to 'mainstream' biodiversity policy in other relevant policy areas, and how to secure adequate financing for the implementation of biodiversity commitments, including expansion of the Natura 2000 network.

Marine environment and fisheries

The European marine environment and the ecosystem services it provides are under considerable pressure.
The majority of pollutants in freshwater bodies (described in the section on water above) are ultimately discharged to coastal waters. Runoff of fertilisers and pesticides from land-based sources has led to oxygen depletion and to ecosystem collapses (e.g. in the Black and Baltic Seas) (EEA 2010c). While some marine protected areas have been established under the Natura 2000 network, marine sites currently only account for around 6 per cent of Sites of Community Importance (SCIs) and 10 per cent of Special Protection Areas (SPAs) (EEA 2010a). Other concerns relate to the threat of invasive species, marine plastic litter and the future impact of climate change.

The fisheries sector has a major impact on the overall state of the marine environment. Despite changes made during and since the 2002 reform of the Common Fisheries Policy (CFP), overexploitation of marine fisheries remains a major problem and has led to a situation where 26 per cent of fish stocks are below safe biological limits (Sissenwine). Despite an apparent improvement in the current state of stocks, there is also pressure to reduce levels of bycatch, eliminate discards of non-target species, and avoid damage to habitats from several types of fishing gear (Lutchman).

In 2008, the EU adopted the marine strategy framework Directive (MSFD), under which Member States are required to take measures to achieve or maintain good environmental status in the marine environment by 2020. To this end, marine strategies are to be developed and implemented to protect and preserve the marine environment, prevent its deterioration or, where practicable, restore marine ecosystems, and prevent and reduce inputs into the marine environment. Working groups have been established to support the interpretation and practical application of parts of the MSFD. In some countries the structure and responsibilities for implementation of the MSFD are clear. However, some Member States are still discussing how the Directive should be implemented, while others are still working on a process for the identification of potential programmes and/or parameters for good environmental status. When fully implemented, the Directive can be expected to make a significant contribution to improving the state of the marine environment. However, there are some limitations to its use and a number of issues it cannot address, which will have to be confronted through other instruments such as the CFP.

The MSFD is the environmental pillar of the EU's integrated maritime policy (IMP), which aims to provide a framework for the development of policies affecting maritime areas. The IMP has resulted in, or influenced, a number of subsequent policy documents covering substantive issues (e.g. maritime spatial planning and integrated maritime surveillance) as well as sectoral and regional policies (e.g. offshore wind energy and maritime transport). A 2009 report on implementation of the IMP (EC 2009b) highlights a number of positive developments at Member State level in integrating maritime governance, including the UK's Marine Bill. A key challenge in the years ahead will be the issue of integration, not only in terms of making marine spatial planning work, but also in integrating it with the separate domain of fisheries policy.
It will become increasingly necessary to address the conflicting policy objectives which will inevitably arise. The pressure to manage fisheries sustainably and responsibly is growing, and the current reform of the Common Fisheries Policy (CFP) has highlighted the shortcomings of the current approach and the need for critical changes. The reform is to be completed by 2012 and is likely to include major changes to the principal CFP Regulation, including a ban on discards, the introduction of transferable quotas, the decentralisation of some decision-making powers to the regions, measures to move towards multi-species fisheries management, and the introduction of market-based quota management. The most significant proposed change to the general objectives of the CFP is the aim of reaching maximum sustainable yield of commercial fish stocks by 2015. Another new objective is that the CFP shall integrate the requirements of EU environmental legislation.

The Commission's proposals for fisheries funding under the future EU budget also appear to be moving in the direction of sustainability, not just of fisheries but of the broader marine environment, with the introduction of a new European Maritime and Fisheries Fund (EMFF) (replacing the current European Fisheries Fund). As proposed, it should support fishing which is more selective, produces no discards and does less damage to marine ecosystems, as well as the science that supports these activities. The extent to which these provisions are taken up in final legislation will depend on the outcome of on-going negotiations between the European Parliament and the Council.

The 2010 BP Deepwater Horizon oil spill in the Gulf of Mexico and the observed shortcomings of response strategies prompted a review of EU practices and provisions covering off-shore oil and gas exploitation. Developments in technologies and in oil and gas exploration techniques have also rendered current legislation obsolete or ineffective. An assessment of the EU approach concluded that there is insufficient coverage of environmental protection, disaster prevention and response, and that the industry's liability for environmental degradation is not clearly defined. Moreover, only segments of the EU's environmental liability Directive, habitats Directive and birds Directive directly apply to off-shore petroleum activities (EC 2010). The Commission thus proposed new rules for the safety of off-shore oil and gas prospection, exploration and production activities in October 2011. The proposal extends the environmental liability Directive to cover all EU marine waters within 370 kilometres of the coast and sets rules that cover the lifecycle of all exploration and production activities. Given the sensitivity of the proposal in terms of its encroachment on an area of traditional national competence, discussions on its finalisation are likely to be contentious.

Whatever the outcome of current efforts to address the Eurozone crisis, the current economic and financial preoccupations in Europe are unlikely to fade away rapidly. Instabilities in financial markets, uncertainties over growth and job prospects and pressure to maintain austerity regimes could continue in some form for several years. Safeguarding jobs and stimulating growth is thus likely to remain an overriding political priority, and the added value of EU policies will frequently be measured against this yardstick.
Similarly, political support for specific environmental measures is likely to be achieved only by convincing leaders of the costs of inaction and the cost-effectiveness of action. At the same time, the current economic situation also offers a number of opportunities for promoting the environmental policy agenda, particularly in view of fostering an efficiency revolution.

There are currently a number of strategic and sectoral processes underway in the EU which will affect the general context and scope for environmental policy action to 2020 and beyond. These include taking forward the EU's 10-year economic strategy known as the Europe 2020 Strategy, promoting eco-innovation, and discussions on the EU budget for the 2014-2020 period and on the future strategic framework for EU environmental policy under the 7th Environment Action Programme (7th EAP). Improving implementation of the substantial body of environmental law already agreed has been, and still is, a key challenge. This section provides a brief overview of some of these wider strategic issues and the challenges and opportunities they offer for future EU environmental policy.

The Europe 2020 Strategy: promoting smart, sustainable and inclusive growth

In 2010, the EU adopted a new medium-term growth strategy known as the 'Europe 2020 Strategy' (EC 2010). The Strategy aims to turn the EU into a smart (based on knowledge and innovation), sustainable (promoting resource-efficient, greener and more competitive growth) and inclusive (high employment, delivering economic, social and territorial cohesion) economy. These priorities are linked to five headline targets for employment, social inclusion, education, innovation, and climate change and energy, which are to be reached by 2020 (see Table 1). These targets are common goals to be achieved through a mix of European and national action, and have been translated by Member States into corresponding national objectives and measures reflecting their own geographic, socio-economic and political situations.

The Europe 2020 Strategy also identified 'Flagship Initiatives' in seven areas within which EU and national authorities should coordinate their efforts (see Box 5). The Flagship Initiatives were presented by the Commission in 2010/2011 and have led to the adoption of a series of subsequent strategies, roadmaps and measures. Of particular relevance to the environmental agenda is the resource efficiency Flagship Initiative, which has spawned a number of strategies and roadmaps (see the section on environmental challenges) and has led to the re-conceptualisation of a number of environmental issues so as to relate them to the resource efficiency agenda.

Progress in implementing the Europe 2020 Strategy at both EU and Member State level is pursued via the EU's new cycle of economic and fiscal policy coordination (the 'European Semester') and is to be closely followed by EU leaders. The six-month cycle includes the preparation of an annual growth survey by the Commission, the assessment of Member States' stability and convergence programmes and national reform programmes, and the adoption of country-specific recommendations. The first cycle of the European Semester ended in June 2011 and it is still too early to judge the results.
Some initial assessments suggest that, although the topics of energy efficiency, environmentally harmful subsidies and the reduction of GHG emissions in key sectors are highlighted in Member State reports, the treatment of climate change concerns and the overall performance of the Strategy have been relatively weak. A report for the Greens/EFA in the European Parliament concluded that the priorities of the 2011 Annual Growth Survey do not cover all the agreed headline targets, with particular gaps for those concerned with climate change, and that national recommendations do not appear to be based on Member State progress towards the respective goals but rather focus on fiscal consolidation needs (Derruine).

The Europe 2020 Strategy aims to be central to economic policy in the EU and is strongly supported by the Commission and its President. It has thus become a key strategic document directing EU action across the spectrum, including EU spending (see section below). Most recent Commission policy documents have been linked to the priorities of the Europe 2020 Strategy and framed accordingly. The Strategy underlines the need to combat climate change and increase Europe's resource efficiency, thus placing these objectives high on the overall EU policy agenda. However, given its overarching priority of stimulating economic growth, its primary focus is on 'win-win' environmental options (i.e. those that can bring financial gains, improve competitive advantage and, in some cases, reduce dependence on foreign resources). This narrow focus ignores other key policy objectives that are nonetheless firmly embedded in the EU environmental acquis. For example, issues such as biodiversity and the broader notion of ecosystems and their services, which are of central relevance to human well-being and economic performance, are side-lined. Too strong a focus on 'win-wins' also creates the risk of developing policies that do not take into account the inter-linkages and trade-offs between different areas.

Promoting eco-innovation

Environmental challenges and resource constraints have led to growing demand for greener technologies, products and services and have facilitated the emergence of new types of manufacturing and services. Their development creates huge market opportunities as well as new challenges and pressures for companies. The global market for eco-industries is estimated to stand at roughly €600 billion a year, with over one third of this stemming from the EU; the US and Japan account for a large part of the remaining global turnover (ECORYS 2009). The EU's comparative advantage and niche markets are in the areas of renewable power generation technologies (with over 40 per cent of global market share) and waste management and recycling technologies (with 50 per cent of global market share). Although an established market player in certain segments, the EU's eco-industry sector is facing increasing competition from Japanese, US, Taiwanese and Chinese players (ECORYS 2009). There has been some investigation of the factors affecting the level of eco-innovation in the EU.
While environmental regulation can help to promote eco-innovation, a number of negative factors persist, including under-investment in the knowledge base (where countries such as the US and Japan are out-investing the EU and China is rapidly catching up); unsatisfactory framework conditions, such as poor access to finance, high costs of intellectual property rights and ineffective use of public procurement; and fragmentation and duplication of efforts (EC 2010c).

Eco-innovation represents a key area of synergy between environmental and economic objectives and is an important part of delivering smart, sustainable and inclusive growth. The Commission estimates that European eco-industries currently have an annual turnover of €319 billion, or about 2.5 per cent of EU GDP, and have recently been growing by 8 per cent each year. The main sub-sectors deal with waste management (30 per cent), water supply (21 per cent), wastewater management (13 per cent) and recycled materials (13 per cent). The sector directly employs 3.4 million people, with around 600,000 additional jobs created between 2004 and 2008 (EC 2011d). The annual growth rate in employment across all sub-sectors between 2000 and 2008 was roughly 7 per cent (see Figure 11). Germany (24 per cent), France (20 per cent) and the UK (17 per cent) have the highest numbers of eco-industry jobs (WIFO 2006). When also taking into account people employed in jobs that depend on a good quality environment as an input, such as organic agriculture, sustainable forestry and tourism, the number of people employed in the sector in 2008 is estimated at 5.6 million (ECORYS 2009).

The EU can help to accelerate eco-innovation through well-targeted policies and actions such as regulatory initiatives, voluntary agreements, incentives, private and public procurement, and standards and performance targets, all of which can help to create stronger and more stable markets for eco-innovation. The EU can also help to mobilise additional funding for investment in eco-innovation and policy measures to lower and manage risks for entrepreneurs and private investors (EC 2011d).

Over the years, the EU has introduced various measures that seek to promote eco-innovation. Recent developments have been closely related to the Europe 2020 Strategy. In October 2010, the Commission presented a Flagship Initiative on the Innovation Union with the aim of improving conditions and access to finance for research and innovation, ensuring that innovative ideas can be turned into products and services that create growth and jobs (EC 2010c). This was followed in December 2011 by a new Eco-innovation Action Plan (Eco-AP) (EC 2011d), the successor to the EU's Environmental Technologies Action Plan (ETAP) launched in 2004. The new plan has a broader remit than its predecessor and includes a variety of measures intended to overcome the barriers preventing the development and spread of eco-technologies, particularly among SMEs. Proposed actions include the use of environmental policy and legislation as a driver to promote eco-innovation, supporting demonstration projects and partnering to bring promising technologies to the market, developing new standards which will boost eco-innovation, mobilising financial instruments and support services for SMEs, and supporting the development of emerging skills, jobs and training programmes.
Of the total proposed budget of €87bn for the Horizon 2020 research and innovation programme, the Commission proposes that at least 60 per cent should support sustainable development objectives, of which around 35 per cent should be climate change related. More specifically, it is proposed that €4.7bn be used to secure sufficient supplies of safe and high quality food and other bio-based products by developing productive and resource-efficient primary production systems; €6.5bn be allocated to the transition to a reliable, sustainable and competitive energy system; €7.7bn be allocated to a resource-efficient, environmentally-friendly and safe transport system; and €3.5bn support the objective of achieving a resource-efficient and climate change resilient economy, protected ecosystems, biodiversity and a sustainable supply of raw materials (EC 2011e). The Horizon 2020 package is expected to be adopted by the end of 2013 with a view to entering into force on 1 January 2014. The Commission has also included a strong innovation component in its proposals for the 2014-2020 Cohesion Policy (EC 2011d).

The EU budget: financing environmental policy in times of austerity

The growth-focussed agenda of the Europe 2020 Strategy is met by an increasingly vigorous push for financial austerity in many Member States. This tension underlies the on-going discussions on Europe's next long-term budget (the so-called multi-annual financial framework (MFF)), which will set out EU spending priorities for the 2014-2020 period. From an environmental perspective there are nevertheless a number of opportunities to re-focus significant elements of EU spending so as to support the transition to a low-carbon, resource-efficient economy. Raising further revenues from environmental fiscal reform may also be possible. In both ways a greener budget could contribute positively to the wider political objectives of promoting economic recovery and creating jobs.

Reforming EU spending for environmental purposes

Public expenditure through the EU budget, albeit relatively small in size, remains an important source of financing for the environment and can exert a strong influence on patterns of investment and related policies in Member States. Since the launch of the review of EU spending and resources in September 2007, there has been growing recognition of the need to reform the EU budget to reflect new and emerging challenges, such as climate change. The Commission Communication on the EU budget review (EC 2010b), presented in October 2010, argued that the future budget should be closely aligned to the Europe 2020 Strategy and play a key role in its delivery. The need to address climate change, resource efficiency and energy security is highlighted and the case for ensuring the necessary investments in green technologies, services and jobs is clearly made.

In June 2011, the Commission formally tabled its proposals for the 2014-2020 MFF under the title 'A Budget for Europe 2020' (EC 2011). The Commission proposes an overall increase for the period to €1,025 billion; at 1.05 per cent of GNI, this is in fact a slight decrease from the current budget, which represents 1.12 per cent of GNI. The CAP (€372 billion) remains a sizeable element of the overall budget but is now to account for a fractionally smaller share than Cohesion Policy (€376 billion). A key function of the budget is to provide a means of responding to persistent and emerging challenges that require a common, pan-European approach, such as environmental protection and climate change.
With a relatively small sum (€3.2 billion) allocated to the future environment funding instrument (LIFE), 'mainstreaming' is put forward as the principal mechanism for financing environment and climate change priorities. Most notable is a requirement that at least 20 per cent of the EU budget be allocated to climate change financing. If mainstreaming is applied rigorously it could help to shift investment patterns in several sectors, aiding energy conservation, the growth in renewables, a greener transport infrastructure etc. Making mainstreaming effective requires policy shifts in the main spending areas embodied in EU regulations, and willingness in the Member States where the money is spent to take such considerations into account in their planning and decision-making. While climate change features prominently in the proposals, issues of resource efficiency and biodiversity etc. feature less prominently; more balance is required to take into account the wider suite of environmental issues.

Several governments are now arguing for an effective freezing of the future budget (including the UK, Denmark and the Netherlands), while others, such as France and most central and eastern European countries, aim to defend the traditional spending blocs of agriculture and cohesion (Medarova). This positioning contrasts with that of the European Parliament, which supports inter alia a 5 per cent increase in the overall EU budget and the abolition of all rebates and correction mechanisms. Whilst there is no environmentally optimal size of budget, one that is squeezed down may well lose key environmental elements, as almost occurred earlier in 2011 with an attack on rural development spending. Despite its small size, the EU budget can have significant multiplier effects in important policy areas such as energy and transport and can build institutional capacity at a European scale. However, Member State positions indicate that traditional issues (e.g. the total size of the MFF, the share of the CAP and Cohesion Policy, national rebates and new sources of revenue) could dominate the debate. The risk is that the 'greener' elements of the proposals will be watered down or lost (Medarova).

Exploring new revenue sources

The difference between national contributions to the EU budget and national receipts is a matter of significant contention and underlies the position of many Member States on the EU budget. The UK has negotiated a sizeable national 'rebate' to reduce the size of its net contribution and defends it as far as it can during the decision-making process; some other Member States have smaller rebates. The Commission's proposals for the 2014-2020 MFF aim to simplify Member State contributions, introduce a new system of own resources and reform 'correction mechanisms' (including a review of the UK rebate). With these proposals, the Commission aims to move towards a system in which revenue flows directly to the EU budget, thus reducing reliance on national contributions. The most controversial proposals in this respect concern the introduction of a new EU financial transaction tax (FTT) and a new EU VAT resource. The FTT in particular has been criticised by several Member States, including the UK.
Given the need for unanimity in the Council for the adoption of any fiscal measures, the future of these proposals is uncertain. Given continued constraints on public budgets, the Commission is also proposing to increase the use of 'innovative financial instruments' as a means of attracting additional public and private financing to projects of EU interest. Most of the proposals continue existing financial instruments, with some modifications or extensions. These include risk-sharing instruments (e.g. the Risk-Sharing Finance Facility (RSFF) for investments in research, development and innovation), financial engineering and technical assistance under Cohesion Policy, guarantees and venture capital for SMEs under the Competitiveness and Innovation framework Programme (CIP), and equity instruments such as the Marguerite Fund. One new proposal under the Connecting Europe Facility is the EU project bond initiative, which focuses on securing investment for strategic infrastructure projects in the energy, transport and ICT sectors (Withana).

The current focus on fiscal issues in the EU raises the question of whether there should be a shift towards environmental fiscal reform in the years ahead. In principle, shifting part of the current national tax base from labour to environmentally damaging activities through environmental tax reform (ETR) could bring about an improvement in both the environment (by properly pricing externalities) and the economy as a whole (e.g. by making the cost of labour cheaper and therefore encouraging employment). The reform and/or phasing out of environmentally harmful subsidies (EHS) could also help to release additional financial resources, including for the environment. However, following a significant increase in the use of environmental taxes among EU countries in the 1990s, levels have remained stable and in some countries have decreased over the past decade. Moreover, despite various EU and international commitments to reforming EHS, progress has been slow. Fiscal issues have always been a particularly sensitive area, as can be seen in the on-going discussion on the Commission's proposal to revise the energy taxation Directive to introduce a carbon element reflecting the environmental impact of various types of fuel. This proposal has met with resistance from several Member States, including the UK, and from the European Parliament. Nevertheless, in the current economic and financial crisis, reforms in this area could play a significant role in the restructuring of EU finances and could also contribute to achieving wider environmental and climate change objectives.

Developing the future strategic framework for EU environmental policy: the 7th Environment Action Programme

Since 1973, the Commission has periodically issued Environment Action Programmes (EAPs) setting out forthcoming initiatives, legislative proposals, broader approaches and principles for EU environmental policy. The sixth Environment Action Programme (6th EAP), adopted in July 2002, establishes a ten-year framework for EU action on the environment, focusing on four thematic areas: climate change, nature and biodiversity, environment and health, and natural resources and waste. It also outlines governance mechanisms to improve the environmental policy-making process in the EU.
More detailed measures to meet the objectives of the Programme were set out in seven Thematic Strategies covering soil protection, the marine environment, pesticides, air pollution, the urban environment, natural resources and waste. As the Programme nears its end, there has been much discussion of its achievements and shortcomings, as well as of its successor. In August 2011, the Commission presented its final assessment of the 6th EAP (EC 2011a), which concluded that, on balance, the Programme has been helpful in providing an overarching framework for EU environmental policy. The 6th EAP acted as an important reference point for Member States, local and regional authorities and other stakeholders. In some areas it helped to build political will for action (e.g. on the marine environment, soil, the urban environment and resources), while in others it focused on revising existing measures and addressing specific gaps (air, pesticides, waste prevention). However, a number of shortcomings were also recognised. The large number of actions (156 in total) and the absence of a longer-term vision were seen to have compromised its capacity to deliver a clear, coherent message. Inadequate implementation and enforcement of EU environmental legislation was another concern, although this problem is not attributable to the EAP itself.

There has been considerable debate and uncertainty about the need for, and political added value of, a successor EAP, particularly given the current plethora of strategic documents such as the resource efficiency Roadmap. This debate has now been resolved and the Commission has formally announced its intention to present a proposal for a seventh Environment Action Programme (7th EAP) in October 2012. The 7th EAP is expected to set out strategic orientations for EU environmental policy for the short-to-medium term and a longer-term vision, bringing together action to protect natural capital and ecosystems, encourage resource efficiency and improve implementation. It is also expected to build on proposals in the resource efficiency Roadmap and deal with a number of challenging issues including changing consumer behaviour, improving policy coherence, examining environmental determinants for improving public health, the international dimension of environmental policy, and securing better financing (EC 2011b). The development of the 7th EAP is still in its early stages and will be subject to consultation in 2012, thus providing an opportunity to feed strategic thinking into discussions on the future framework for environmental policy in Europe.

The implementation challenge

There are implementation gaps across most of the main topics of environmental law and in almost all Member States. At the end of 2009, Spain had the highest number of on-going infringement cases (40), most of them relating to nature legislation (14) and water legislation (10). Italy and Ireland had more than 30 open infringements each, and the Czech Republic, France and the UK had 26 each. In the UK, the majority of infringements related to water, air and environmental impact assessments (see Figure 13). Although a number of Member States have received significant fines for poor compliance, the problem persists. A recent study for the Commission (COWI 2011) attempted to estimate the cost of not implementing the EU environmental acquis.
The report notes that the lack of full implementation of the acquis could have negative effects on eco-industries, as uncertainty about environmental measures may hamper investment in new environmental technologies. Uneven implementation can also distort competition across Member States and lead to higher administrative costs where standards vary across countries. Overall, the study suggests that the current cost of not fully implementing key EU environmental legislation in the fields of water, air, nature and biodiversity, waste, chemicals and noise may represent about €50 billion per year, and that missing future environmental targets could cost up to €250 billion per year. Although the study is only a first order-of-magnitude estimate, it shows that the lack of full implementation can have real economic impacts.

Implementation of the environmental acquis is in the first place the responsibility of EU Member States. However, action at EU level can also help to improve the situation. The Commission has made various efforts over the years to guide Member States in implementing EU law, such as issuing guidance documents interpreting specific matters of EU law, sharing good practices and setting up early 'package meetings' to discuss transposition difficulties with national administrations. Current EU efforts build strongly on a preventative approach, seeking close cooperation with Member States before taking enforcement action via the court. In 2008, an 'EU Pilot' project was launched which aims to correct infringements of EU law at an early stage, without recourse to infringement proceedings, through closer collaboration between the Commission and Member States. At the end of 2010 the initiative covered 18 Member States, including the UK, and it has reportedly contributed to a reduction in the number of infringement proceedings among participating Member States (EC 2011c).

Different factors explain implementation failures, including an unwillingness to accept costs, insufficient administrative capacities, and a lack of political priority for environmental inspections with associated limited resources for inspection authorities at Member State level. A critical barrier remains the lack of political will for real action. Efforts to improve implementation are also not helped by the lengthy nature of litigation procedures, which - depending on the procedure - can take several years; this can prevent the Commission from taking action before damage has occurred that cannot be repaired (IEEP 2011). Full implementation of environmental legislation is not only an issue of credibility but also has economic and social implications. Improving implementation of EU environmental law is a key priority of the current environment Commissioner, Janez Potočnik, and is expected to form an important part of the upcoming 7th Environment Action Programme (see previous section).

While the difficult economic conditions clearly require attention, strategic environmental priorities are in danger of being neglected or watered down in the face of concerns about streamlining legislation and reducing administrative burdens. Often this is based more on ingrained assumptions than on a clear appraisal of the evidence. This more cautious approach has led to a narrower focus on those environmental initiatives that provide win-win solutions and are backed by broader economic interests.
Rising public debts in several Member States and flailing capital markets have dented the ability to invest in the critical infrastructure and innovative technologies and services necessary for the transition to a low-carbon, resource-efficient economy. Moreover, the rise of emerging economies is dramatically changing the international landscape and the role of the EU within it.

The crisis also provides EU environmental policy with a number of new opportunities. The current economic downturn has made it cheaper and thus easier to achieve certain policy objectives, such as the EU's 2020 climate change objectives. Climate change concerns have infiltrated the main political discourse and are now increasingly reflected in the EU's Europe 2020 Strategy, as well as in spending priorities under the future EU budget. The EU has started to develop a policy agenda on resource efficiency that may in the future lead to concrete targets and indicators for reducing resource consumption, thus addressing the key driver behind many of the environmental challenges faced today. Discussions on greater economic convergence among some Member States could provide the conditions for extended efforts on green fiscal reform.

Preconceptions are also gradually changing, with growing recognition among policy-makers and business actors that the current model of economic growth is inherently unsustainable and cannot be pursued indefinitely. The inter-connections and inter-dependencies between different economic, political, social, cultural, technological and environmental systems are being re-appraised. Developments in other regions are increasingly affecting Europe and vice-versa. Consequently, there is a need for a more holistic, integrated perspective that looks at the coherence and trade-offs of different policies and points to a new focus on a green economy. Addressing the inter-linkages and trade-offs between different thematic areas such as climate change, biodiversity, natural resources, and environment and health, as well as between environmental policy and sectoral policies such as agriculture, energy or transport, will also be important.

In broad terms, the challenges ahead include reducing the intensity of resources used for economic activity (resource decoupling), reducing the negative environmental impacts from the use of natural resources (impact decoupling), preserving and restoring natural capital, and improving human well-being and quality of life. Improving the implementation of EU environmental policy will remain a key challenge and requires a more honest alignment of aspirations, regulatory means and implementation capacities with the political realities of a Union of 27 Member States. In the face of fiscal austerity and pressures for budgetary cuts across the board, there is a need to defend administrative capacities for good governance and regulatory foresight. The investment needs for achieving EU environmental objectives and supporting the transition to a low-carbon, resource-efficient economy are substantial, and securing adequate financing to support environmental commitments will be another key challenge. Additional financial resources will need to be mobilised through new approaches complementing traditional grant funding, while proper take-up of the proposed mainstreaming approach in the future EU budget could have major implications for investment patterns in Member States.
In the current context of economic austerity, there is a tendency to see environmental regulation as a brake on growth without necessarily considering the evidence at hand. Recent history suggests that well designed and implemented environmental policy, regulatory or not, can provide some of the foundations for long-term prosperity as well as steering us towards a more sustainable society. Thus, even in a period of economic recession and political upheaval, EU environmental policy is likely to remain dynamic and relevant, offering a number of opportunities and avenues to help move forward from the current stasis.

Annex II: key EU legislative proposals awaiting adoption

CLIMATE AND ENERGY

Energy taxation (COM(2011)169): Proposal to revise the existing energy taxation Directive 2003/96/EC, under which taxation would be split into two components: a minimum tax rate of €20 per tonne of CO2 and minimum rates for energy based on the energy content of a fuel rather than volumes. The proposed CO2 tax rate will apply to all sectors not subject to the EU ETS, namely transport, households, agriculture and small industries. Provisions for some derogations are included.

Energy efficiency (COM(2011)370): Proposal for the establishment of a common framework for promoting energy efficiency in the EU to ensure the target of 20 per cent primary energy savings by 2020 is met.

Placing on the market and use of biocidal products (COM(2009)267): The proposal will repeal and replace Directive 98/8/EC on the placing of biocidal products on the market. The proposal aims to address weaknesses identified in the implementation report on the Directive, such as the costs of compiling a dossier in support of the inclusion of an active substance. For the first time, the proposal identifies which active substances may not be used in biocidal products.

Control of major-accident hazards involving dangerous substances (COM(2010)781): Proposal to revise the Directive on major-accident hazards. The main proposed changes are: to align Annex I of the Directive to changes to the EU system of classification of dangerous substances; to include mechanisms to adapt Annex I in the future to deal with changing situations; to strengthen the provisions relating to public access to safety information, participation and access to justice; and to introduce stricter standards for inspections.

SOIL

Soil protection (COM(2006)232): Proposal for the establishment of a framework for the protection of soil, defining seven key functions of soil and introducing EU rules on soil condition monitoring, soil erosion, decline in organic matter, and contamination. The Directive would oblige sellers and buyers to provide a soil status report for any transaction of land where a potentially contaminating activity has taken, or is taking, place.

The All-Party Parliamentary Environment Group

One of the larger all-party groups in Parliament, the All-Party Parliamentary Environment Group was set up twelve years ago to strengthen the influence of Parliamentarians on public policy and public debate on the environment. The Group also aims to assist Parliamentarians by improving their access to specialist information through regular group meetings and contact with senior environmental managers and directors from industry and NGOs, written briefings and special reports such as this one. The Group has over 150 Members of Parliament and the House of Lords, and some 180 associate member companies and organisations.
It holds regular meetings and receptions at the House of Commons, with talks by leading British and international politicians and captains of industry on key environmental issues. A newsletter and briefing sheet is produced after each meeting. Over the years the Group has played host to a number of British Ministers, including David Miliband, Margaret Beckett and Michael Meacher, the Dutch, German and Danish Environment Ministers, senior Brussels officials including EU Commissioner Margot Wallstrom, and many others from government, business and the campaign groups both in the UK and abroad. The Group meets 5 or 6 times a year at the Houses of Parliament and membership is by invitation. If you would be interested in joining the Group as an associate member, please contact the membership office shown opposite with details of your company or organisation.

The Institute for European Environmental Policy

The Institute for European Environmental Policy (IEEP) is an independent research organisation working on policies affecting the environment in Europe and beyond. Our aim is to analyse and present policy options and to disseminate knowledge about Europe and the environment. Our research work involves both pressing short-term policy issues and long-term strategic studies, drawing on more than thirty years of experience. Our project portfolio varies from year to year, but we are committed to being at the forefront of thinking about the environmental aspects of EU policies and keeping an open dialogue with policymakers and stakeholders. We have research programmes in several different fields and produce the Manual of European Environmental Policy (http://www.europeanenvironmentalpolicy.eu/). We work closely with the full range of policy actors, from international agencies and the EU institutions to national government departments, NGOs and academics. IEEP has an interdisciplinary staff with experience in several European countries and a wider network of partners throughout the EU. We work closely with universities, specialist institutes and consultancy organisations. The London office of IEEP was founded in 1980 and the Brussels office in 2001. For further information, please see our website: www.ieep.eu

Figure 1: Number of items of EU environmental legislation adopted each year, 1962-2008. Source: IEEP 2011

Figure 2: Risk interconnections - complexities, interactions and synergies. Source: WEF 2011

Figure 3: Gap between average 2008-2010 emissions and Kyoto targets in sectors not covered by the EU ETS. Source: EEA 2011

Box 3. Adaptation to climate change

Adaptation has been another issue on the EU policy map, although less prominently and with less specific measures. The Commission White Paper on adaptation in 2009 (EC 2009a) proposed more than 30 concrete actions in a number of areas, such as the development of a knowledge base and the integration of adaptation into other EU policies. It called on the EU and Member States, inter alia, to explore the possibility of making climate impact assessment a condition for public and private investment, and to develop indicators to better monitor the impact of climate change (including vulnerability impacts) and the progress on adaptation. A follow-up EU adaptation Strategy is expected in 2013.
Figure 5: Total GHG emissions from transport, 1990-2008. Source: Annual European Community GHG inventory 1990-2008 and inventory report 2010, Submission to the UNFCCC Secretariat, EEA Technical Report No 6/2010, EEA

Figure 6: Main drought events in Europe (2000-2009). Source: EEA 2010c

Figure 7: Occurrence of floods in Europe (1998-2009). Source: EEA 2010c

Figure 8: Percentage of the EU urban population potentially exposed to air pollution exceeding acceptable EU air quality standards. Source: EEA 2011d

Figure 9: Waste management performance/waste treatment in EEA countries in 2010. Source: Derived from information on Eurostat waste data centre 2010, http://epp.eurostat.ec.europa.eu/portal/page/portal/waste/data/sectors/municipal_waste

Figure 10: Use of material resources and material productivity for the EU-15 and EU-12. Note: Domestic material consumption (DMC) is an aggregate of materials (excluding water and air) which are actually consumed by a national economy, calculated based on domestic extraction and physical imports (mass weight of imported goods) minus exports (mass weight of exported goods). Source: EEA 2010c

Europe 2020 headline targets:
Employment
• 75 per cent of 20-64 year olds to be employed
Innovation
• 3 per cent of the EU's GDP (public and private combined) to be invested in R&D/innovation
Climate change and energy
• Reduction in EU GHG emissions to at least 20 per cent below 1990 levels (reduction of 30 per cent if conditions are right)
• 20 per cent of EU energy consumption to come from renewable resources
• 20 per cent increase in energy efficiency
Education
• Reduce school dropout rates below 10 per cent
• At least 40 per cent of 30-34 year olds should complete the third level of education
Poverty and social exclusion
• Reduce the number of people in or at risk of poverty and social exclusion by at least 20 million
Flagship initiatives include:
• An industrial policy for the globalisation era
• An agenda for new skills and jobs
• European platform against poverty
Source: European Commission 2010

Figure 11: Employment in various sub-sectors of the EU eco-industry sector. Source: ECORYS 2009

The EU also has several sources of funding to support the research and deployment of environmental technologies. As part of the 2007-2013 multi-annual financial framework, the Commission supports research, development and demonstration of eco-innovative technologies and their market penetration within the 7th Framework Programme for Research and Technological Development (FP7) under the 'environmental and climate change' (€1.8 billion) and the 'energy' (€2.3 billion) themes; the Competitiveness and Innovation Framework Programme (CIP), which has a budget of €3.6 billion for the same period; as well as the Eco-innovation First Application and Market Replication Projects, the European Eco-innovation Platform, and the environmental pillar of the LIFE+ Programme. Moreover, Member States and regions can also draw on funding under Cohesion Policy for the further deployment and replication of eco-innovation (EC 2011d). For the 2014-2020 period, the Commission has proposed a new Horizon 2020 Framework Programme for Research and Innovation which brings together all existing EU research and innovation funding. Horizon 2020 is expected to strengthen the role of eco-innovation and provide financing for the implementation of the Eco-AP.
The bulk of EU funds is currently spent on two EU policies: the Common Agricultural Policy (CAP) and 'Cohesion Policy', which is devoted to regional development, social priorities, infrastructure and aid to poorer parts of the EU (see Figure 12). A relatively minor fund is dedicated to environmental objectives (LIFE). However, environmental spending also takes place through a number of other funds, including the Structural and Cohesion Funds and the Rural Development element of the CAP.

Figure 12: EU budget composition, 2007-2013 MFF. Source: CEPS 2009

Figure 13: Infringements of EU environmental legislation by Member State and by sector (as of the end of 2009). Source: EC 2010

A further proposal aims to simplify the existing legal framework, decrease the share of emissions from two- or three-wheel vehicles in overall transport emissions, and improve aspects of vehicle functional safety. Another proposal would add new provisions to the WEEE Directive (2002/96/EC), including new measures on registration and reporting requirements, a broader scope, new collection and recycling targets, minimum inspection rules, and making producers financially responsible for household collection.

CHEMICALS

Export and import of dangerous chemicals (COM(2011)245): Proposal to recast the Regulation on the export and import of dangerous chemicals, changing and clarifying some definitions and aspects of the consent procedure and transferring certain tasks to ECHA.

The All-Party Parliamentary Environment Group

Chair: Phillip Lee MP
Vice chairs: Jack Dromey MP, Therese Coffey MP, Martin Horwood MP, Mike Weir MP, Baroness Young

The All-Party Parliamentary Environment Group was set up to strengthen the influence of Parliamentarians on public policy and public debate on the environment. The Group also aims to assist Parliamentarians by improving their access to specialist information. It currently has 126 Members of Parliament, 27 Members of the House of Lords and some 180 affiliated member companies, environmental groups and organisations.

For more details, please contact:
Secretariat: 45 Weymouth Street, London W1N 3LD; tel 0207 935 1689; fax 0207 486 3455; email awilkes@apeg.co.uk
Membership & accounts: 4 Vine Place, Brighton BN1 3HE; tel 01273 720305; fax 0845 299 1092; email info@apeg.co.uk

Box 1. EU progress in reducing GHG emissions

Since 2008, progress towards the EU's 20-20-20 targets has been aided by the economic downturn and the burgeoning development of renewable energies. In 2009, the EU's total GHG emissions decreased by 7 per cent. However, the combination of economic growth in some countries and a cold winter led to a rise in emissions in 2010 of 2.4 per cent. Overall, emissions in the EU-15 had fallen by 10.7 per cent by the end of 2010 as a result of domestic emission cuts and activities in the sphere of land use, land use change and forestry (LULUCF), assisted by the use of flexible mechanisms. Hence, the EU-15 is expected to go beyond its very modest Kyoto targets. Although it has no specific target under the Kyoto Protocol, the EU-27 countries as a block follow a similar trend and by the end of 2010 had reduced emissions by 15.5 per cent (not taking LULUCF into account). The EU-27 does not have a Kyoto target, since the Protocol was ratified before 12 other countries joined the EU. Most Member States that joined the EU since 2004 have the same 8 per cent reduction target, with the exception of Hungary and Poland, which have a reduction target of 6 per cent. Cyprus is a non-Annex-I Party to the Convention and thus does not have a target.
Source: EEA 2010b; EEA 2011; EEA 2011b

On an individual basis, only 16 of those EU Member States with a Kyoto target are currently on track to meet their individual objectives (including Bulgaria, the Czech Republic, Estonia, Finland, France, Germany and Greece).

The Plan was followed by a proposal for an energy efficiency Directive, currently under scrutiny by the European Parliament and the Council (EC 2011f). This sets out a number of measures for energy-using sectors and for the European energy supply sector, as well as other proposals to remove barriers and overcome market failures that impede efficiency. As it stands, the draft Directive requires the Commission to assess in 2014 whether the EU can achieve the current EU energy savings target and, if appropriate, to propose binding legislation with mandatory national targets for 2020, which do not exist at present.

Looking further ahead, several strategic documents relating to future climate and energy policy have appeared over the past year. In March 2011, as well as the energy efficiency plan, the Commission presented the 'Roadmap for moving to a low-carbon economy by 2050', as part of the Europe 2020 resource efficiency Flagship (EC 2011g). The overarching objective is to reduce GHG emissions by 80-95 per cent by 2050. The Roadmap also sets out the percentage reductions that would have to be achieved in key sectors (power, transport, the built environment, industry, agriculture and forestry) by 2030 and 2050 respectively. Investment needs are estimated to be, on average, around €270 billion annually over the next 40 years. Substantial investment needs have also been identified in relation to sustainable energy and transport infrastructure, low-carbon technologies, research and development, and adaptation (see Medarova et al, 2011). The question of securing financing for climate change related action is reflected quite prominently in the Commission's proposals for the future EU budget, although the sums available directly from this source are limited (see section 5).

Box 2. European companies support more ambitious EU climate policy

In June 2011, 72 leading European and global companies signed a declaration calling on the EU to increase its current target of reducing GHG emissions to 30 per cent by 2020. Together the signatories account for more than 3.8 million employees with an annual turnover of more than €1 trillion. The companies call for an ambitious EU policy framework that can spur innovation and investment, notably in renewables and energy efficiency, lead to the creation of new jobs and enable Europe to maintain its leadership position in a global low-carbon economy. Source: WWF 2011

Figure 4: Primary energy production in the EU, by fuel, EU-27 (Mtoe). Source: Eurostat May 2011

The first step is in the political spotlight at present. New draft regulations for the CAP, Cohesion Policy and Research Policy, among others, are being negotiated in 2012-13. At present, the main environmental concerns addressed in the Commission's draft regulations relate to climate change and energy, while Member States' record of implementing a large part of EU environmental legislation remains poor. At the end of 2010, environmental infringement procedures accounted for approximately one fifth of all open cases for non-communication, non-conformity or bad application of EU law in the 27 Member States. A large number of cases relate to waste, nature and water matters (see Figure 13).
The number of judgements of the European Court of Justice (ECJ) in environmental matters has increased continuously over the years, as has the number of cases of non-compliance by Member States (EU-15) with ECJ judgements (IEEP 2011). Deadlines set in EU environmental legislation are regularly missed by a large number of Member States, with transposition of the environmental liability Directive, for example, proving a particularly problematic case. These issues are normally resolved after the launch of infringement proceedings, and protracted delays only occur in a minority of Member States (EC 2011c). A new communication on implementing EU environmental law and policy is also under development by the Commission. The Communication is expected to explore practical avenues to close current gaps in implementation, examining issues of improving coherence, enhancing compliance, strengthening inspections and enhancing the role of national judges in supporting implementation of EU legislation (EC 2010).

6 Conclusions: priorities for the future development of EU environmental policy

EU environmental policy is facing a new and challenging context. Political attention is currently focused on the EU's economic and financial crisis, leaving little appetite for major new legislative initiatives in other areas. The crisis in the Eurozone has led to bigger questions concerning the EU project itself, and growing scepticism about the EU has been voiced in a number of Member States, including the UK, where political tensions have been brought to the fore in recent months. Details of a new inter-governmental agreement on the economic governance of the Eurozone are currently being negotiated. Many existing EU policies, including those concerning the environment, are not likely to be affected by this agreement. However, the political repercussions and dynamics of the new economic governance structure are yet to unfold.

Annex II (continued)

Energy efficiency (COM(2011)370, continued): Provisions for end-use sectors, for the energy supply sector, and other measures are included. The proposal requires the Commission to assess in 2014 whether the EU can achieve the current energy savings target and, if appropriate, to propose legislation with mandatory national targets for 2020.

TRANSPORT

Connecting Europe Facility (COM(2011)456): The Connecting Europe Facility (CEF) is a new integrated instrument for investing in EU infrastructure priorities in the transport, energy and telecommunications sectors. The proposed budget is €50 billion, of which €31.7 billion will be invested in transport infrastructure, €9.1 billion in energy infrastructure and €9.2 billion in broadband networks and digital services. The proposed Europe 2020 Project Bond Initiative will be one of a number of risk-sharing instruments upon which the CEF may draw to attract private finance.

AIR QUALITY

Recreational craft (COM(2007)851): The proposal aims to harmonise recreational craft and personal watercraft with stricter emission limits for NOx, hydrocarbons and particulate matter.
Transport within the EU is heavily dependent on imported oil and oil products, which account for more than 96 per cent of the sector's energy needs (EC (2011) Roadmap to a Single European Transport Area, Facts and Figures, http://ec.europa.eu/transport/strategies/facts-and-figures/putting-sustainability-at-the-heart-of-transport/index_en.htm).

MARINE ENVIRONMENT AND FISHERIES

Detergents (COM(2010)597): The proposal aims to extend the scope of Regulation 648/2004 to introduce a limitation on the content of phosphates and other phosphorous compounds in household laundry detergents. The proposal will set a phosphorous content limit of 0.5 per cent of the total weight of the product in all laundry detergents on the EU market.

Safety of offshore oil and gas operations (COM(2011)688): Proposal for a Regulation on the safety of offshore oil and gas prospection, exploration and production activities, which establishes minimum requirements for industry and national authorities involved in offshore oil and gas operations. The Regulation aims to reduce the risks of a major accident in EU waters, and to limit the consequences should such an accident occur.

Oil pollution (COM(2000)802, amended by COM(2002)313): Proposal for a Regulation concerning oil pollution in European waters. The proposal, part of the Erika package, would establish a fund (COPE) to compensate for oil pollution damage in European waters and to complement the existing international two-tier regime on liability and compensation for oil pollution damage by oil tankers.

Sulphur content of marine fuels (COM(2011)439): Proposal to amend Directive 1999/32/EC as regards the sulphur content of marine fuels. If adopted, the Directive would transpose into EU law global limits on the sulphur content of marine fuels adopted in 2008 by the modification of the MARPOL Agreement of the IMO. By 2020, the limit for sulphur in marine fuels would be lowered from 4.5 per cent to 0.5 per cent. Alternative compliance methods are introduced, such as exhaust gas cleaning systems. The strengthening of the EU monitoring and enforcement regime is also proposed; for example, the Commission would be allowed to specify the frequency of sampling, the sampling methods and the definition of a sample representative of the fuel examined.
143,254
[ "878323", "17592" ]
[ "126879", "527713", "300877" ]
01743995
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01743995/file/RR-9162.pdf
Davide Frey, Marc X. Makkes, Pierre-Louis Roman, François Taïani, Spyros Voulgaris

Dietcoin: shortcutting the Bitcoin verification process for your smartphone

Keywords: blockchain, sharding, UTXO, distributed ledger, cryptocurrency, mobile computing

Abstract: Blockchains have a storage scalability issue. Their size is not bounded and they grow indefinitely as time passes. As of August 2017, the Bitcoin blockchain is about 120 GiB big, while it was only 75 GiB in August 2016. To benefit from Bitcoin's full security model, a bootstrapping node has to download and verify the entirety of the 120 GiB. This poses a challenge for low-resource devices such as smartphones. Thankfully, an alternative exists for such devices, which consists of downloading and verifying just the header of each block. This partial block verification enables devices to reduce their bandwidth requirements from 120 GiB to 35 MiB. However, this drastic decrease comes with a safety cost implied by a partial block verification. In this work, we enable low-resource devices to fully verify subchains of blocks without having to pay the onerous price of a full chain download and verification; a few additional MiB of bandwidth suffice. To do so, we propose the design of diet nodes that can securely query full nodes for shards of the UTXO set, which is needed to perform full block verification and can otherwise only be built by sequentially parsing the chain.

Résumé (translated): Blockchains such as Bitcoin scale poorly, in particular because of their substantial storage requirements. The storage needs of a typical blockchain are not bounded and grow indefinitely. For example, in August 2017, the data contained in the Bitcoin blockchain represented about 120 GiB, against 75 GiB a year earlier, in August 2016. To benefit from the full security guarantees provided by Bitcoin, a node joining the network must download and verify the entire 120 GiB of data. This requirement poses a challenge for low-resource devices such as smartphones. Fortunately, an alternative exists for such devices, which consists of downloading and verifying only the header of each block of the blockchain. This partial verification of blocks allows devices to reduce their bandwidth needs from 120 GiB to 35 MiB, but this drastic decrease only allows a partial verification of blocks, and greatly reduces the security guarantees offered to the nodes that use it. In this work, we propose an approach that allows low-resource devices to fully verify subchains of blocks without having to pay the onerous price of a full download and verification of the chain; a few additional MiB of bandwidth suffice. To do so, we propose to introduce Dietcoin nodes that can securely query nodes running the full protocol for shards of a set called the UTXO set, which is necessary for the full verification of blocks.
1 Trustless Bitcoin

Within a decade, blockchains have become extremely popular, and have been used to implement several widely-used cryptocurrencies [Nakamoto, Bitcoin: A peer-to-peer electronic cash system] and smart-contract services [Dickerson et al., Adding concurrency to smart contracts]. A blockchain implements a tamper-proof distributed ledger in which public transactions can be recorded in a close-to-irrevocable manner. Recorded transactions are stored in blocks, which are then incrementally linked (or chained) to form an append-only list. The irrevocability of this chaining mechanism relies on cryptographic primitives and peer-to-peer exchanges. This combination makes it in principle inconceivably hard for individual participants to revoke past transactions (due to the computational cost involved), while it remains possible for any participant to verify the validity of a blockchain's entire history.

Verifying a blockchain remains, however, a particularly costly process. The verifying node must first download the entire blockchain, which in many cases has reached a size beyond the communication capabilities of many mobile devices. The Bitcoin blockchain, for instance, had grown to 120 GiB as of August 2017 (Figure 1), and follows an exponential growth, implying the problem can only become more acute. Once the blockchain has been downloaded, the verifying node must then check its consistency block by block, a lengthy process that can take hours on high-end machines.

The exorbitant price of a full chain verification makes it unrealistic for low-resource devices to fully implement a blockchain protocol. Some blockchain systems, such as Bitcoin, therefore enable nodes to perform varying degrees of verification: full nodes verify everything, while lightweight nodes only verify a small fraction of the data. In the case of Bitcoin, this lightweight verification is known as Simplified Payment Verification (SPV for short). SPV nodes only download and verify a much reduced version of the Bitcoin blockchain, comprised only of its block headers, which today weigh only 35 MiB (a reduction by three orders of magnitude). This summary version however only contains the chaining information making up the blockchain, not the recorded transactions. This information is sufficient for SPV nodes to verify that the chain's structure is valid (and hence very unlikely to have been created by malicious nodes), but not that a past transaction exists in the chain. As a result, SPV nodes are vulnerable to attacks in which an attacker leads an SPV node to believe a transaction t has occurred, while t is later on rejected by the system because the funds transferred by t have in fact already been spent (known as a double-spend attack).

To protect themselves against double-spend attacks, full nodes keep track of unspent funds in a structure known as the set of Unspent TransaCTion Outputs (UTXO set). The UTXO set is unfortunately costly to construct (as this construction requires the entire blockchain), to exchange (currently weighing 1.9 GiB, see Figure 1), and to maintain, which explains why SPV nodes do not use it. In this report, we propose to bridge the gap between full nodes and SPV nodes by introducing diet nodes and their associated protocol, Dietcoin.
Dietcoin strengthens the security guarantees of SPV nodes by bringing them close to those of full nodes. Dietcoin enables low-resource nodes to verify the transactions contained in a block without constructing a full-fledged UTXO set. In our protocol, diet nodes download from full nodes only the parts of the UTXO set they need in order to verify a transaction of interest. This selective download mechanism must, however, be realized with care. Diet nodes must be able to detect any tampering with the UTXO set itself, at a cost that remains affordable for low-resource devices, both in terms of communication and computing overhead.

The rest of this report is structured as follows. We first present the Bitcoin protocol in more detail (Section 2), and explain the workings of full and SPV nodes. We then detail the design of Dietcoin and diet nodes and discuss the security guarantees they provide (Section 3). Finally, we present related work (Section 4), and conclude (Section 5).

2 The Bitcoin system

A blockchain is a decentralized ledger composed of blocks containing transactions. The transactions, the blocks, and the resulting chain obey a few core rules that ensure the system remains tamper-proof. Great care is required when modifying these rules, as even minor changes might break the blockchain's properties and its security guarantees. In the following, we first detail the default workings of the Bitcoin blockchain and its rationale. (Blockchains with closed membership or different consensus protocols are not discussed in this section.) We then build upon these explanations to introduce and justify the changes we are proposing.

2.1 Overview

In a blockchain system such as Bitcoin, the blockchain proper ((B_k)_{k ≥ 0}, label 1 in Figure 2) is maintained by a peer-to-peer network of miners. Each block B_k links to the previous block B_{k-1} by including in its header a cryptographic hash that is (i) easy to verify, but (ii) particularly costly to create (this second point is one of the central elements of blockchains with open membership, which we discuss in detail just below). The leftmost block B_0 is known as the Genesis Block: it is the first and oldest block in the blockchain, and it is the only block with no predecessor.

Recording a new transaction

To transfer 8 bitcoins from herself to Bob, the user Alice must first create a valid transaction (label 2) that contains information proving she actually owns the 8 bitcoins (with a cryptographic signature using asymmetric keys), and encode the resulting transaction output with Bob's public key (such that, in turn, only Bob will be able to demonstrate ownership of the transaction's output). Alice then broadcasts this new transaction to the network of miners (label 3), in order for it to be included in the blockchain.

Before adding Alice's transaction to the blockchain, Miner A first verifies that the transaction is valid (label 4 in Figure 3; details on the transaction verification process follow in Section 2.2.1, check BV2). Miner A then includes Alice's transaction, together with other transactions received in parallel, in a new block (B_3, label 5), and attempts to link it to the current tip of the blockchain. This linkage operation requires Miner A to solve a probabilistically difficult cryptopuzzle (label 6) that regulates the frequency at which blocks are created (or mined) by the whole network. (In Bitcoin, this periodicity is set to one block every 10 minutes.)
If Miner A succeeds, the new block B_3 is then broadcast through the network. The new block ultimately reaches Bob (label 9), who can check that the transaction has been properly recorded (and can then, for example, sell some goods to Alice).

Figure 3: Alice's transaction is included in block B_3, which is appended to the blockchain (B_0, B_1, B_2, B_3) and eventually reaches Bob.

Irrevocability of deep blocks

Because blocks are produced at a limited rate, such that all miners receive block B_k before they can successfully mine a concurrent block B'_k, honest miners are highly likely to extend the chain when producing a new block, ensuring a consistent system state with high probability. The views of individual miners may however diverge in problematic cases, causing branches to appear. When a branch occurs, miners resolve the divergence by choosing as valid the branch that was the most difficult to create (details on block difficulty follow in Section 2.2.1, check BV1). Blocks that are left out of the chain are said to be orphan. The risk of being made orphan decreases exponentially as a block lies deeper in the chain, ensuring the practical irrevocability of deep blocks and the transactions they contain. This is illustrated in Figure 4: consider an attacker who wishes to revoke a block B_{n-k} (the targeted block) that lies k blocks away from the chain's tip B_n. For this attack to succeed, the attacker must produce an alternative subchain (B'_{n-k}, ..., B'_n, B'_{n+1}) that is more difficult to create than the current chain. Producing this subchain is however extremely costly, and takes time, which increases the odds that the legitimate chain grows (with a block B_{n+1}, thus requiring an even more difficult attack subchain) before the attacker succeeds. When the computing power of the attacker is less than half of that of the rest of the network, his probability of success drops exponentially with k.

Figure 4: A hijacking attempt: the attacker forks the current blockchain at the targeted block B_{n-k} and must outgrow the legitimate chain.

2.2 Transactions, Blocks, and UTXO set

To benefit from the full security of Bitcoin, Bob should verify the validity of the new block that contains Alice's payment to him (label 9 in Figure 3), in addition to verifying the validity of Alice's transaction. This is because Alice could collude with a miner (or run a miner herself) and produce an invalid block that she would advertise to Bob. Bitcoin relies on a number of built-in validity checks on blocks and transactions to conduct this verification. However, whereas full nodes exploit all of these checks, Simple Payment Verification nodes (SPV nodes) only perform a limited verification. In the following, we describe the details of these validity checks, we discuss the role of an intermediary set known as the set of Unspent TransaCTion Outputs (UTXO set), and we explain the shortcomings that SPV nodes incur through the limited verification they perform.

2.2.1 Checking block validity

A block is valid if and only if it meets the following two conditions:
• (BV1) Its header respects the blockchain's Proof-of-Work predicate.
• (BV2) It only contains valid transactions (which we discuss further below).

BV1: The Proof-of-Work predicate makes it very difficult for malicious actors to alter the blockchain in an attempt to edit the ledger. The Proof-of-Work predicate is used as a lock-in mechanism to anchor blocks in the chain. It is enforced on each block header, whose simplified structure is shown in Figure 5. The header of each block B_k points both to the header of the previous block B_{k-1} (using a hash function, label 1) and to the transactions contained in the current block B_k (label 2). To fulfill the Proof-of-Work predicate (label 4), a header must contain a nonce (label 3) such that the hash of the header is less than a difficulty target. The difficulty target is set so that a new block is created every ten minutes by the miners as a whole, regardless of their computation power (the target is regularly adjusted to cope with changes in that power). Finding a nonce respecting the difficulty target is computationally very expensive, as every miner competes to create blocks. This computing cost prevents attackers from easily tampering with the chain, as they have to recompute fresh nonces for the blocks they wish to replace.
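Check BV1 reduces to a single integer comparison once the header is hashed. The Python sketch below illustrates the principle; the header layout is deliberately simplified (a real Bitcoin header also serializes a version, a timestamp and the encoded difficulty bits, in a fixed little-endian layout), so this is an illustration rather than a wire-compatible implementation.

import hashlib

def block_hash(prev_hash: bytes, merkle_root: bytes, nonce: int) -> bytes:
    # Simplified header: previous-block hash, Merkle root of the
    # transactions, and the nonce the miner is free to choose.
    header = prev_hash + merkle_root + nonce.to_bytes(8, "big")
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def satisfies_pow(prev_hash: bytes, merkle_root: bytes,
                  nonce: int, target: int) -> bool:
    # BV1: the header hash, read as an integer, must be below the target.
    return int.from_bytes(block_hash(prev_hash, merkle_root, nonce), "big") < target

def mine(prev_hash: bytes, merkle_root: bytes, target: int) -> int:
    # Brute-force search for a valid nonce: cheap to verify, costly to
    # produce, which is what anchors blocks in the chain.
    nonce = 0
    while not satisfies_pow(prev_hash, merkle_root, nonce, target):
        nonce += 1
    return nonce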
To establish a secure and verifiable link between the header of B_k and the corresponding block B_k, the pointer to B_k's transactions (label 2) consists of the root of a Merkle tree. A Merkle tree is a hierarchical hashing mechanism for sets that enables a verifier to efficiently test whether an item (here a transaction) belongs to the set by reconstructing the root of the Merkle tree. Each leaf node in a Merkle tree consists of the hash of an item, while each internal node (including the root) consists of the hash of its children. This makes it possible to reconstruct the root, and thus verify set membership, using only a logarithmic number of intermediate hashes. In Figure 6 for example, a node can verify the presence of transaction A in the set by (i) downloading the root from a secured communication channel (e.g., the blockchain), (ii) downloading A and the three intermediate hashes H_B, H_CD and H_EFGH, and (iii) reconstructing the root from the downloaded hashes. The reconstructed root should match the downloaded one.

Figure 6: A Merkle tree over items A to H, showing which nodes are reconstructed, downloaded, or unneeded when verifying the presence of item A.

BV2: In addition to the Proof-of-Work predicate (BV1), all the transactions included in a block must also be valid for the overall block to be valid. Figure 7 shows the validity mechanisms included in a typical Bitcoin transaction. In this example, Alice uses 3 coins she owns (the transaction's inputs, label 1) to pay 7 bitcoins to Bob and 4 to Tux (the transaction's outputs, label 2). Only coins created in earlier transactions may be spent: each of Alice's inputs therefore points back to the output of an earlier transaction (label 3). To ensure that only the recipients (Bob and Tux) are able to spend the outputs, each new coin contains an ownership challenge (a hashed public key) that must be solved to spend this coin (label 4).

Alice's transaction is only valid if the following three conditions are met:
• (TV1) The inputs exist, and Alice owns them. She can prove her ownership of the inputs by providing a public key matching their ownership challenges (label 5), and by signing the new transaction with the corresponding private key (label 6);
• (TV2) No money is created in the transaction. In effect, the total value of the transaction's inputs must be greater than or equal to that of its outputs: ∑_{in ∈ inputs(t)} value(in) ≥ ∑_{out ∈ outputs(t)} value(out). The difference ∑ value(in) − ∑ value(out) is given as a fee to the miner of the block containing the transaction;
• (TV3) The transaction's inputs (tx_ID_i, index_j) have not been spent yet (i.e., they do not appear as inputs of any earlier transaction; spending the same output twice is an attack known as a double spend).
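To make the membership test concrete, the following sketch builds a Merkle tree over eight items and verifies one of them with log2(8) = 3 sibling hashes, mirroring the Figure 6 example. It assumes a power-of-two number of leaves and a single SHA-256 per node, whereas Bitcoin double-hashes and duplicates the last element of odd-sized levels.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [sha256(x) for x in leaves]
    while len(level) > 1:
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes from leaf to root for the leaf at `index`."""
    proof, level = [], [sha256(x) for x in leaves]
    while len(level) > 1:
        sibling = index ^ 1                       # neighbour within the pair
        proof.append((level[sibling], sibling < index))
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_merkle_proof(item: bytes, proof: list, root: bytes) -> bool:
    node = sha256(item)
    for sibling, sibling_is_left in proof:
        node = sha256(sibling + node) if sibling_is_left else sha256(node + sibling)
    return node == root   # root obtained from a secure channel (the header)

# Figure 6 example: proving membership of item A among 8 items with
# three sibling hashes (H_B, H_CD, H_EFGH).
items = [bytes([c]) for c in b"ABCDEFGH"]
root = merkle_root(items)
assert verify_merkle_proof(items[0], merkle_proof(items, 0), root)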
2.2.2 The set of Unspent TransaCTion Outputs (UTXO set)

While checking the validity of a block's header (BV1) only requires access to the current block B_k and to the header of its predecessor B_{k-1}, verifying transactions (BV2) requires much more information. Verifying the ownership challenges of input coins (TV1) and their amounts (TV2) requires access to the transactions recorded in earlier blocks. Worse, verifying that input coins have not yet been spent (TV3) potentially requires parsing and verifying the entire blockchain. To avoid performing such a costly operation for each new block, nodes that verify transactions maintain an intermediary set known as the set of Unspent TransaCTion Outputs (UTXO set). The UTXO set contains all the coins that have been created in the chain but not spent in later transactions; it thus contains all the spendable coins. A node verifying transactions can prevent double spends (TV3) by simply ensuring that all the inputs of a transaction appear in its UTXO set. The UTXO set evolves as new correct blocks are added to the chain: transaction outputs are removed from the set when they are spent, and the outputs of new transactions are added to the set.

2.2.3 The limitations of SPV nodes

In spite of its benefits, constructing a local UTXO set is costly: in order to obtain the set, a node must first download the entire chain (120 GiB as of August 2017, see Figure 1) and validate it (a lengthy process that can take hours on high-end machines), even if only the latest block is relevant to its interest. Because of this cost, Bitcoin supports several levels of verification. Miners and users running full nodes construct the UTXO set and check both block headers (BV1) and transactions (TV1, TV2, TV3, and as a result BV2). By performing all the possible checks, full nodes benefit from the maximum security that the Bitcoin system has to offer.

By contrast, Simple Payment Verification nodes (SPV nodes) do not construct the UTXO set. Instead, they only download the chain's block headers (rather than full blocks), and verify that these headers are valid (BV1). With the headers only, SPV nodes are able to verify the well-formedness of the blockchain, including the crucial Proof-of-Work predicate that seals the links of the chain. However, because this verification is only partial, SPV nodes are unable to detect whether a new block contains an invalid transaction. Exploiting this weakness is probabilistically difficult for an attacker, but it remains a genuine vulnerability of SPV nodes. To circumvent it, SPV nodes typically wait until miners have created subsequent blocks extending the chain containing a block of interest, which implies that these miners have performed a full verification of this block and consider it valid. The need for SPV nodes to wait makes it particularly problematic to use Bitcoin on limited devices (e.g., mobile phones) for everyday transactions. SPV nodes are not even able to check whether the inputs used in a transaction exist. The partial verification also limits the ability of SPV nodes to detect faulty transactions as early as possible, which is an important usability feature of modern payment systems. In this work, we propose to overcome the inherent limitations of SPV nodes with Dietcoin. Dietcoin enables nodes with limited resources (diet nodes) to benefit from a security level that is close to that of full nodes, at a fraction of the cost required to run full security checks.
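Before turning to Dietcoin proper, the UTXO bookkeeping described above can be summarized in a few lines. The sketch below applies a block's transactions to a UTXO set held as a plain dictionary, enforcing TV2 and TV3; ownership checks (TV1), script evaluation and coinbase transactions are deliberately left out, and the transaction encoding is an assumption made for the example.

def apply_block(utxo: dict, transactions: list) -> None:
    """Apply a block's transactions to `utxo`, enforcing TV2 and TV3.

    `utxo` maps an outpoint (tx_id, output_index) to the coin's value.
    Each transaction is a dict {"id": bytes, "inputs": [outpoint, ...],
    "outputs": [value, ...]} -- a simplified encoding for this sketch.
    """
    for tx in transactions:
        in_value = 0
        for outpoint in tx["inputs"]:
            if outpoint not in utxo:          # TV3: unknown coin or double spend
                raise ValueError(f"input {outpoint} is not spendable")
            in_value += utxo[outpoint]
        if sum(tx["outputs"]) > in_value:     # TV2: no money may be created
            raise ValueError("outputs exceed inputs")
        for outpoint in tx["inputs"]:         # spent coins leave the set
            del utxo[outpoint]
        for index, value in enumerate(tx["outputs"]):
            utxo[(tx["id"], index)] = value   # new coins enter the set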
3 The Dietcoin system

To address the vulnerabilities of SPV nodes and to improve the confidence mobile users can have in recent transactions, we propose Dietcoin, an extension to Bitcoin-like blockchains. Although our proposal can be applied to most existing Proof-of-Work blockchains using the UTXO model for coins, we describe Dietcoin in the context of the Bitcoin system as presented in Section 2. The core of Dietcoin consists of a novel class of nodes, called diet nodes, which provide low-power devices with the ability to perform full block verification with minimal bandwidth and storage requirements. Instead of having to download and process the entire blockchain to build their own copy of the UTXO set, diet nodes query the UTXO set of full nodes and use it to verify the legitimacy of the transactions they are interested in (as described in Section 2.2.1, check BV2) and the correctness of the blocks that contain them. This gives diet nodes security properties that sit in between those of full nodes and those of Bitcoin's SPV nodes.

Consider a user wishing to verify a transaction for the sale of some goods. The user's diet node initially proceeds like a standard SPV node. It contacts a full node to obtain the header of the block that supposedly contains the transaction, as well as the corresponding branch of the transaction Merkle tree, to verify that the transaction is indeed included in the block. But while an SPV node would stop at this inclusion check, the diet node continues by verifying both the inclusion and the correctness of all the transactions in the block. To make this possible, we introduce the possibility for diet nodes to access the state of the UTXO set of full nodes corresponding to the instant right before the block they want to verify. Since downloading the entire UTXO set would result in prohibitive bandwidth overhead, as shown in Figure 1, Dietcoin-enabled full nodes split their UTXO set into small shards, enabling diet nodes to download only the shards that are relevant to the transactions in the block. To prevent diet nodes from trusting maliciously forged shards, the shard hashes are used as the leaves of a Merkle tree whose root is stored in each block. Having the Merkle root stored in blocks enables the UTXO shards to benefit from the same Proof-of-Work protection as transactions.

In addition to the full verification of the block of interest B_k, diet nodes can increase their trust in B_k by fully verifying the blocks that precede it. By doing so, diet nodes ensure that none of the l verified blocks contains illegal transactions or an erroneous UTXO Merkle root. To make a diet node trust a forged transaction in block B_k, an attacker has to counterfeit the whole subchain of l + 1 blocks (B_{k-l}, ..., B_k). Thanks to the Proof-of-Work protection, the cost of this attack increases exponentially as l increases linearly.

In the following, we detail the operation of Dietcoin. We first describe how full nodes can provide diet nodes with verifiable UTXO shards. We then discuss how miners link blocks to the state of their UTXO set, and explain how diet nodes can extend the verification process from one block to a subchain of any length. Finally, we detail the operation of diet nodes when verifying transactions.

3.1 Sharding the UTXO set

To enable the operation of diet nodes, Dietcoin-enabled full nodes need (i) to provide diet nodes with shards of the UTXO set, while (ii) enabling them to verify that these shards are authentic.
To satisfy (i), Dietcoin-enabled full nodes store the UTXO set resulting from the application of the transactions in each block in the form of shards with a predefined maximum size of 1 KiB (on average, across all shards). The use of shards enables diet nodes to download only the relevant parts of the UTXO set, and also limits the storage requirements at full nodes, which only need to store the modified shards for each block to let diet nodes query older versions of the shards. Similarly, the 1 KiB limit on the size of each shard bounds the bandwidth employed by diet nodes in the verification process.

To satisfy (ii), full nodes also maintain a Merkle tree that indexes all the shards of the UTXO set. Using shards also proves advantageous with respect to this Merkle tree. If nodes were to index UTXO entries directly, the continuous changes in the UTXO set would cause the Merkle tree to become quickly unbalanced, leading to performance problems or requiring a potentially costly self-balancing tree. The use of shards, combined with the right sharding strategy, gives the UTXO Merkle tree a relatively constant structure, enabling shards to be updated in place most of the time. Moreover, it makes it possible to predict the size of the UTXO Merkle tree and its incurred overhead, which enables us to better control and balance the storage overhead for full nodes and the bandwidth requirements of diet nodes.

A number of sharding strategies satisfy the requirement of a fixed number of shards. In this work, we use the simplest approach, which consists of indexing UTXO entries by their first k bits. This strategy resembles a random assignment, since the first bits of a UTXO entry are the transaction hash it references, whose value is expected to be uniformly distributed thanks to the uniformity property of the SHA-256 hash function. This strategy comes with the added advantages of producing shards of homogeneous size and a full binary Merkle tree with 2^k leaves. Keeping in mind the size cap of 1 KiB per shard, k can be adapted locally by each node to cope with the growth of the UTXO set: when the average shard size breaches the 1 KiB cap, k is incremented by one, which (i) halves the average shard size and (ii) adds one layer to the Merkle tree, doubling its storage footprint in the process. A minimal sketch of this sharding strategy follows.
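Under the simplifying assumptions that each serialized UTXO entry starts with the transaction hash it references and that k ≤ 32, the first-k-bits strategy and the resulting full binary Merkle tree can be sketched as follows (sha256 is the helper defined earlier).

def shard_id(entry: bytes, k: int) -> int:
    # First k bits of the entry, i.e. of the transaction hash it references.
    return int.from_bytes(entry[:4], "big") >> (32 - k)

def build_shards(utxo_entries: list, k: int) -> list:
    """Group serialized UTXO entries into 2^k shards by their first k bits."""
    shards = [[] for _ in range(1 << k)]
    for entry in utxo_entries:
        shards[shard_id(entry, k)].append(entry)
    # A canonical (sorted) serialization makes shard hashes reproducible.
    return [b"".join(sorted(bucket)) for bucket in shards]

def utxo_merkle_root(shards: list) -> bytes:
    """Root of the full binary Merkle tree whose leaves are the shard hashes."""
    level = [sha256(shard) for shard in shards]
    while len(level) > 1:
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]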
3.2 Linking blocks with the UTXO set

With reference to Figure 8, consider a Dietcoin-enabled miner that is mining block B_k, and let UTXO_k be the state of the UTXO set after applying all the transactions in B_k. The miner stores the root of the UTXO Merkle tree associated with UTXO_k as an unspendable output in the first transaction of block B_k before trying to solve the Proof-of-Work, as shown in Figure 8. Storing the root of the UTXO Merkle tree in the first transaction of the block does not require any modification to the structure of Bitcoin's blocks. It is thus possible for Dietcoin-enabled full nodes and miners to co-exist with their legacy Bitcoin counterparts. Dietcoin-enabled full nodes verify the value of the UTXO Merkle root against their own local copy when they verify a block, while legacy Bitcoin nodes simply ignore it.

The UTXO Merkle tree provides a computationally efficient way for diet nodes to verify whether the shards they download during the verification process are legitimate and correspond to the current state of the ledger. Still referring to Figure 8, let us consider a diet node d that wishes to verify a transaction in block B_k. Node d needs to obtain: block B_k, the UTXO Merkle root in block B_{k-1}, the shards of the UTXO set between the two blocks, and the elements of the associated Merkle tree that are required to verify their legitimacy. It can then verify the shards using the root stored in block B_{k-1}'s first transaction, and use them to verify the correctness of the transactions in B_k. We observe that storing the Merkle root referring to the state after the block into the block itself forces diet nodes to download two blocks to verify transactions. This makes it inherently harder for an attacker to provide a diet node with a fake block B_k, because the attacker would need to forge not only block B_k but also block B_{k-1}.

3.3 Extended verification

Diet nodes have the ability to extend their confidence in a block by iterating the verification process towards its previous blocks. By doing so, diet nodes ensure the correctness of the UTXO Merkle root present in block B_{k-1}, which is used to verify the correctness of block B_k. The extended verification can be performed on a subchain of any length l, provided that the verifying diet node can query UTXO shards of any age. A diet node fully verifying the subchain (B_{k-l+1}, ..., B_k) can only be tricked into trusting a malicious transaction in block B_k if the attacker manages to counterfeit the l + 1 successive blocks starting from B_{k-l}, which becomes exponentially more costly as l increases linearly.

The block B_{k-l} contains the UTXO Merkle root that serves as the basis for the verification of the subsequent blocks. Since the block B_{k-l} is not itself verified, it is trusted by the diet node. A comparison can be made with full nodes, for which the first block of the chain, the genesis block, is hard-coded and therefore trusted. By shifting the trust from the genesis block to the block B_{k-l}, Dietcoin effectively shortcuts the verification process for diet nodes.

Picking the value of l involves a trade-off between security and verification costs. On the one hand, choosing a large l draws the behavior of the diet node closer to that of a full node, while on the other hand choosing a small l draws it closer to that of an SPV node. The user can base her decision on which block depth l is, in her opinion, large enough that all blocks prior to B_{k-l} are unlikely to be counterfeited. For instance, a diet node user can choose l such that the trusted block has a block depth of 6 or greater, since it is the de facto standard in Bitcoin to consider blocks of depth 6 or greater as secure. In the case depicted in Figure 8, assuming l = 2, diet node d downloads block B_{k-1} to verify its transactions and its UTXO Merkle root, further increasing its confidence in B_k. To verify B_{k-1}, the diet node uses the UTXO Merkle root in B_{k-2}. A sketch of this iterated verification follows.
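Putting the previous sketches together, the following shows the per-block check and the extended verification loop a diet node could run. It reuses sha256, apply_block and shard_id from the sketches above, and it simplifies the partial UTXO Merkle tree described in the text: here the node downloads the hashes of all 2^k shards (a few KiB for moderate k) rather than only the siblings of the touched shards. serialize_shard (the canonical shard encoding), the fetch callback, and the utxo_root and transactions attributes of blocks are assumptions of this sketch, not part of any real client API.

def root_from_hashes(level: list) -> bytes:
    while len(level) > 1:
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_block_with_shards(block, prev_block, shard_hashes, touched, kbits):
    """Fully verify block B_k against the UTXO root committed in B_{k-1}.

    `shard_hashes`: hashes of all 2^kbits shards, indexed by shard id, as
    of right before `block`; `touched`: shard id -> {outpoint: value} for
    every shard the block spends from or pays into (outputs included).
    """
    # 1. The downloaded state must match the root committed in B_{k-1}.
    if root_from_hashes(list(shard_hashes)) != prev_block.utxo_root:
        raise ValueError("shard hashes do not match the UTXO root of B_k-1")
    for sid, shard in touched.items():
        if sha256(serialize_shard(shard)) != shard_hashes[sid]:
            raise ValueError(f"downloaded shard {sid} is forged")
    # 2. Transactions may only spend coins present in those shards (TV2, TV3).
    utxo = {op: v for shard in touched.values() for op, v in shard.items()}
    apply_block(utxo, block.transactions)
    # 3. The updated shards must lead to the root committed in B_k itself.
    new_hashes = list(shard_hashes)
    for sid in touched:
        updated = {op: v for op, v in utxo.items() if shard_id(op[0], kbits) == sid}
        new_hashes[sid] = sha256(serialize_shard(updated))
    if root_from_hashes(new_hashes) != block.utxo_root:
        raise ValueError("transactions do not lead to the UTXO root of B_k")

def extended_verify(chain, k, l, fetch, kbits):
    """Verify the subchain (B_{k-l+1}, ..., B_k); B_{k-l} is trusted."""
    for i in range(k - l + 1, k + 1):
        shard_hashes, touched = fetch(i)   # assumed query to a full node
        verify_block_with_shards(chain[i], chain[i - 1], shard_hashes,
                                 touched, kbits)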
Detailed Operation

Equipped with the knowledge of Dietcoin's basic mechanisms, we can now detail the verification process carried out by diet nodes. Algorithm 1 depicts the actions taken by a diet node when its user starts the application using Dietcoin, and compares them with those taken by legacy SPV clients. Black dotted lines [•] are specific to diet nodes, while hollow dotted lines [◦] are common to both diet and SPV nodes.

The algorithm begins when the application using Dietcoin starts and updates its view of the blockchain. In this first part of the algorithm, a diet node behaves exactly like an SPV node. First, it issues a query containing the latest known block hash and an obfuscated representation of its own public keys in the form of a Bloom filter (lines 4-5). A full node responds to this query by sending a list of all the block headers that are still unknown to the SPV/diet node, together with the transactions matching the SPV/diet node's Bloom filter, and the information from the transaction Merkle tree that is needed to confirm their presence in their blocks. Using this information, the SPV/diet node verifies the received headers (including Proof-of-Work verification) and updates its view of the blockchain (line 6). Since the response from the full node might contain false positives due to the use of a Bloom filter for public-key obfuscation, the next verification step consists in ensuring that the received transactions match one of the user's public keys (lines 8-9). Once false positives are discarded, the SPV/diet node verifies that the received transactions are indeed in the blocks (lines 10-12). At this point, a standard SPV node simply returns the received transactions to the application (line 14). A diet node, on the other hand, continues the verification process.

To this end, the diet node first computes which blocks to fully verify, ensuring that (i) no block is fully verified twice (line 17), (ii) old blocks considered by the user as secure enough are not fully verified (parameter maxDepth_p, line 17), and (iii) only a subchain of limited length l is fully verified (parameter maxLength_p, line 19). If no block is selected for full verification, the diet node falls back to SPV mode (lines 20-21). To bootstrap the full verification process, the diet node must first download the UTXO Merkle root present in the block prior to the first block to verify (line 23). For each of the selected blocks B_k, the diet node downloads from Dietcoin-enabled full nodes (i) the block B_k itself (line 25), (ii) the state, before B_k, of the UTXO shards associated with both the inputs and the outputs of the block's transactions (line 26), and (iii) the partial UTXO Merkle tree required to prove the integrity of the downloaded shards (line 26).

Once it has all the data, the diet node proceeds with the verification process. For each block B_k, it first verifies that the downloaded UTXO shards match the UTXO Merkle root from the previous block B_{k-1} (lines 27-29). It then verifies that the transactions in each block use only available inputs from these shards (lines 32-33), and computes the new state of these shards based on the transactions in B_k (lines 34 and 36). Finally, it verifies that the updated shards lead to the UTXO Merkle root contained in B_k (lines 37-39). Once the verification process has terminated, the diet node returns the transactions associated with the local user to the application (line 14).
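The core of the per-block check (lines 27-39 of Algorithm 1) can be condensed into the following Python sketch. It is a simplified illustration rather than the actual Dietcoin code: the proof-of-ownership check is elided, and the root is recomputed from all shards for brevity, whereas Algorithm 1 updates a partial Merkle tree in place.

```python
import hashlib

def dbl_sha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Root of a full binary Merkle tree over a power-of-two list of leaves."""
    level = [dbl_sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [dbl_sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_block_against_shards(block, shards, shard_key, serialize):
    """Apply the block's transactions to the downloaded shards (lines 30-36),
    then check the resulting root against the UTXO Merkle root the miner
    committed in the block itself (lines 37-39). `shards` maps shard keys to
    sets of UTXO identifiers; `serialize` turns a shard into bytes."""
    for tx in block["transactions"]:
        for utxo_in in tx["inputs"]:
            shard = shards[shard_key(utxo_in)]
            # Lines 32-33: the spent coin must still be unspent
            # (proof-of-ownership checking is elided in this sketch).
            assert utxo_in in shard, "transaction spends an unavailable input"
            shard.remove(utxo_in)                       # line 34
        for utxo_out in tx["outputs"]:
            shards[shard_key(utxo_out)].add(utxo_out)   # line 36
    leaves = [serialize(shards[k]) for k in sorted(shards)]
    assert merkle_root(leaves) == block["utxo_merkle_root"]
```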
Related work

Making the UTXO set available for queries between nodes has been discussed several times in the Bitcoin community over the past few years. Bryan Bishop published a comprehensive list of such proposals [START_REF] Bryan | bitcoin-dev] Protocol-Level Pruning[END_REF] that share some of the following goals: (i) enabling faster node bootstrap, (ii) strengthening the security guarantees of lightweight nodes, and (iii) scaling the UTXO set to reduce its storage cost. The primary goal of Dietcoin is to strengthen the security of lightweight nodes. Dietcoin's strongest feature is the ability for diet nodes to efficiently perform subchain verification, as described in Section 3.3, which offers stronger security guarantees than the proposals referenced above. Moreover, even though we focus in this report on the security of lightweight nodes, Dietcoin also makes it possible to bootstrap full nodes faster.

Sharing similar goals with Dietcoin, Andrew Miller [START_REF] Miller | Storing UTXOs in a Balanced Merkle Tree (zero-trust nodes with O(1)-storage)[END_REF] suggests storing in blocks the root of a self-balancing Merkle tree built on top of the UTXO set. In such a system, lightweight nodes only download the UTXOs they need, which results in a lower bandwidth consumption than with shards as we propose, but at the cost of a greater storage overhead, since the stored Merkle tree is larger. Moreover, Dietcoin combines a full, and thus always balanced, Merkle tree with 2^k shards that can each be updated in place as blocks are appended to the chain. This combination of tree stability and updatable shards enables efficient subchain verification, as described in Section 3.3, and adapting this feature to a system using a self-balancing Merkle tree does not seem trivial.

Vault [START_REF] Leung | Vault: Fast bootstrapping for cryptocurrencies[END_REF] also proposes to use Merkle trees to securely record the state of the distributed ledger in recent blocks, and shards this state across nodes to reduce storage costs. Contrary to Dietcoin, however, Vault targets balance-based schemes, such as the one introduced by Ethereum, in blockchains relying on Proof-of-Stake consensus. Vault further stores individual accounts in the Merkle trees, rather than UTXO shards as we do. This represents a different and to some extent orthogonal trade-off to that of Dietcoin, in that Vault chooses to increase the size of the Merkle tree witnesses that must be included in transactions, but removes the need for lightweight nodes to download UTXO shards.

Whereas we focus on sharding the resulting state of the blockchain, other systems propose to shard the verification process. Both Elastico [START_REF] Luu | A Secure Sharding Protocol For Open Blockchains[END_REF] and OmniLedger [START_REF] Kokoris-Kogias | OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding[END_REF] propose a permissionless distributed ledger that runs multiple classical PBFT consensus protocols, each executing within a subset (shard) of nodes. Elastico limits the number of shards a malicious node may join (under different identities) by tying the shard of a node to the result of a Proof-of-Work puzzle. OmniLedger [START_REF] Kokoris-Kogias | OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding[END_REF] extends the ideas proposed by Elastico [START_REF] Luu | A Secure Sharding Protocol For Open Blockchains[END_REF] to increase the size of the shards (and thus reduce the probability of failures), and allows for cross-shard transactions thanks to a Byzantine shard-atomic commit protocol called Atomix. Both Elastico and OmniLedger use sharding to increase the transaction processing power of a distributed ledger, rather than to improve access to, and verification of, the UTXO set, as we do.
Chainiac [START_REF] Nikitin | CHAINIAC: Proactive Software-Update Transparency via Collectively Signed Skipchains and Verified Builds[END_REF] combines the ideas of skip lists and blockchains to realize skipchains, an authenticated log with both backward and forward long-distance links, in order to implement a distributed authenticated software-release ledger. Chainiac relies on digital collective signatures to implement forward links, which are not available in permissionless Proof-of-Work chains such as Bitcoin. Long-distance links are particularly well adapted to navigating a well-identified subset of a blockchain (such as a package's releases). They were, however, not directly designed to handle the kind of dependencies captured by the UTXO model.

An entirely different approach to scaling blockchains for lightweight nodes is the use of Non-Interactive Proofs of Proof-of-Work (NIPoPoWs) [START_REF] Kiayias | Non-interactive proofs of proof-of-work[END_REF], which enable constant-size queries. NIPoPoWs strive for a minimal cost of proving the inclusion of a transaction in a chain, thus reducing the bandwidth requirements of lightweight nodes to a minimum. NIPoPoWs, however, do not aim at offering improved security for lightweight nodes, as we do with Dietcoin.

Conclusion

In this report, we have presented the design of Dietcoin, which introduces a new type of Bitcoin node that strengthens the security guarantees of lightweight SPV nodes by bringing them closer to those of full Bitcoin nodes. The Dietcoin protocol enables low-resource nodes to verify the transactions contained in blocks without constructing a full-fledged UTXO set. In our protocol, diet nodes download from full nodes the parts of the UTXO set they need in order to verify a block, or a subchain of blocks, of interest. Diet nodes are able to detect any tampering with the UTXO set itself, at a cost that remains affordable for low-resource devices, both in terms of communication and computation overhead. In our approach, Dietcoin-enabled full nodes split their UTXO set into small shards and enable diet nodes to download only the shards that are relevant to the transactions in a block, while verifying that these shards do indeed correspond to the state of the UTXO set for the block being verified.

Figure 1: Both the Bitcoin blockchain and the UTXO set have almost tripled in size in the past two years.

Figure 2: A blockchain is formed of a sequence of blocks containing transactions.
The current state of the blockchain (here (B_0, B_1, B_2)) is stored by each individual miner.

Figure 3: To add a new transaction to the current blockchain, a miner first verifies the validity of the transaction. It must then solve a costly cryptopuzzle to encapsulate this transaction in a new block (here B_3), before disseminating this block to other miners.

Figure 4: Revoking the content of a block B_{n-k} deep in the chain requires constructing a better alternative subchain, which becomes exponentially harder as the block lies deeper.

Figure 5: Content of a block header (simplified), including hash(B_{k-1}.header) and merkle_h(B_k.transactions).

Figure 6: Example of a Merkle tree root reconstruction that only needs log(n) hashes.

Figure 7: Structure of a transaction.

Figure 8: The UTXO set is updated every time a block is validated. For a counterfeited block to be validated by diet nodes, a malicious node has to forge at least two consecutive blocks: the first block B_{k-1} containing a fake Merkle root of UTXO_{k-1}, and the second block B_k spending fake coins validated by the fake UTXO_{k-1}.

Footnote: Bitcoin uses a scripting language to encode challenges and proofs of ownership, enabling more complex schemes, but for ease of exposition we limit ourselves to the typical case.

Acknowledgments: This work has been partially funded by the Region of Brittany, France, by the Doctoral school of the University of Brittany Loire (UBL), by the French National Research Agency (ANR) project SocioPlug under contract ANR-13-INFR-0003 (http://socioplug.univ-nantes.fr) and by the SIDN Fonds contract 172027.

Helper functions used in Algorithm 1:
- bloomFilter(keys): compute a Bloom filter from keys
- buildMRoot(hashes): compute the Merkle root from hashes
- getShardKey(txId): apply the sharding algorithm to txId
- height(header): index of header starting from the genesis block
- updateMTreeInPlace(MTree, dataset): update the hashes of MTree with the new value of dataset
- verifyHeaders(headers): add headers to the chain, return the hash of the new chain tip

Algorithm 1 (lines 4-39): verification process at diet node p
4:  filter ← bloomFilter(pubKeys_p)
5:  ({header, txMTree, txs}) ← send queryMerkleBlocks(tipId_p, filter)
6:  tipId_p ← verifyHeaders((header))
7:  for all {header, txMTree, txs} ∈ ({header, txMTree, txs}) do
8:      if ∀k ∈ pubKeys_p : k ∉ {tx.inputs ∪ tx.outputs} then
9:          continue                          ▷ Ignore Bloom filter false positives
10:     assert(∀tx ∈ txs : HASH(tx) ∈ txMTree)
11:     builtTxMRoot ← buildMRoot(txMTree)
12:     assert(builtTxMRoot = header.txMRoot)
13:     verifyBlocksUpTo(height(header))
14:     callback(txs, header)                 ▷ Callback to app
15: procedure verifyBlocksUpTo(last)
16:     ▷ Do not verify blocks twice or below maxDepth_p
17:     first ← max(highestVerified_p, height(tipId_p) - maxDepth_p)
18:     ▷ Verify up to maxLength_p blocks
19:     first ← max(first, last - maxLength_p)
20:     if first ≥ last then
21:         return                            ▷ Fall back to SPV mode
22:     ▷ The first UTXO Merkle root is not verified
23:     utxoMRoot ← send queryUtxoMRoot(HASH(headerStore_p[first]))
24:     for all blockId of height ∈ [first + 1, last] do
25:         block ← send queryBlock(blockId)
26:         {shards, utxoMTree} ← send queryUtxos(blockId)
27:         assert(∀shard ∈ shards : HASH(shard) ∈ utxoMTree)
28:         builtUtxoMRoot ← buildMRoot(utxoMTree)
29:         assert(builtUtxoMRoot = utxoMRoot)
30:         for all tx ∈ block.transactions do
31:             for all i ∈ tx.inputs do
32:                 shard ← shards[getShardKey(i)]
33:                 assert(i ∈ shard ∧ valid proof of ownership of i)
34:                 shard.remove(i)
35:             for all o ∈ tx.outputs do
36:                 shards[getShardKey(o)].add(o)
37:         utxoMTree ← updateMTreeInPlace(utxoMTree, shards)
38:         utxoMRoot ← buildMRoot(utxoMTree)
39:         assert(utxoMRoot = block.utxoMRoot)
Farouk Belkadi, Ravi Kumar Gupta, Ekaterini Vlachou, Alain Bernard, Dimitris Mourtzis

Linking modular product structure to suppliers' selection through PLM approach: A Frugal innovation perspective

Keywords: PLM, co-evolution, Modular, Supplier selection, frugal innovation

To maintain market share, frugal innovation is a key solution for competitive enterprises seeking to meet customers' needs in different regional markets. The co-evolution of product and production network aims to manage the local production sites of the OEM and several collaborative relations between the OEM and supplier companies, for better management of project resources in the regional market. Supplier selection and evaluation are among the main issues to be resolved at an early stage to guarantee successful results from any OEM-supplier collaboration. This paper discusses the potential of using a modular-based approach as a kernel methodology to support the co-evolution of product structure and production network definition, especially in the case of supplier selection from a frugal innovation perspective. The application of a PLM approach to manage the interconnected data describing the co-evolution of the product structure and the production network is also discussed.

Introduction

In a context of fierce competition and economic pressure, companies need to reach new markets (both emerging and mature) by further sharpening their strategic focus on what customers really need. To this end, frugal innovation theory has been introduced to explain new market trends and to propose new solutions supporting these evolutions [START_REF] Zeschky | Frugal Innovation in Emerging markets: The case of Mettler Toledo[END_REF]. For Tiwari and Herstatt [START_REF] Tiwari | Frugal Innovation: A Global Networks' Perspective[END_REF], frugal innovation refers to innovative products and services that "seek to minimize the use of material and financial resources in the complete value chain (from development to disposal) with the objective of reducing the cost of ownership while fulfilling or even exceeding certain pre-defined criteria of acceptable quality standards". This theory gives rise to a new category of products, named "frugal", defined as an aggregation of the following attributes:
• Functional: answering the exact customer need by focusing on key product features;
• Robust: integrating recent technologies, ease of maintenance and a long service life;
• User-friendly: offering simple and easy-to-use functions and interfaces;
• Growing: relying on large production volumes enabling economies of scale;
• Affordable: offering customers good "value for money" through prices adapted to their socio-economic context;
• Local: proposing products mainly tailored to local requirements, but also built using production facilities (i.e. suppliers) and components from the targeted market [START_REF] Berger | Frugal Products: Study Results[END_REF].

To develop such products, companies should adopt a customer-driven design process and an optimal co-evolution of the production strategy to reduce manufacturing and logistics costs, taking into consideration the capabilities, constraints and resources available in the targeted market.
A design strategy respecting the frugal attributes can be conducted through the following design actions:
• designing new specific modules or modifying the features of existing ones;
• reusing existing solutions developed by the company in previous projects;
• using standard modules developed by external suppliers for several products.

The co-evolution of the production network to support the adaptation of an existing product to a new market implies changes to the production process or to the suppliers, to cope with modifications in the product structure (the change of one module, or the modification of some module features). In parallel, the modification of the product structure can be a consequence of using a new module (or technology) proposed by a supplier in the local market, which results in the co-evolution of the product structure with the production strategy [START_REF] Arndt | Customer-driven Planning and Control of Global Production Networks -Balancing Standardisation and Regionalisation[END_REF]. The big challenge is then to achieve an efficient collaboration between the OEM company and all its suppliers. The co-evolution of the production system should be extended to the production network level, which aims to create several collaborative relations between the OEM (Original Equipment Manufacturer) and supplier companies for better management of their distinctive skills and resources in the whole production process [START_REF] Hochdörffer | Evaluation of global manufacturing networks -a matter of perspective[END_REF]. Thus, supplier selection and evaluation are among the main issues to be resolved at an early stage to guarantee successful results from any OEM-supplier collaboration [START_REF] Cheraghi | Critical Success Factors for Supplier Selection: An Update[END_REF].

Our study shows that an emerging market's needs can be met by adapting an existing product and production system, well established in Western and European markets, to fulfil the requirements of regional customers (a new market), especially when the customers belong to emerging markets (e.g. India, Africa and China). This adaptation of an existing product to an emerging market is investigated through frugal innovation. A few companies have adopted frugal innovation along these lines, but they took a long time to launch their products in the markets. The research in ProRegio [7] aims to fill these gaps. One part of this research is supplier selection respecting both the requirements of regional markets and the company's policies. In this context, this paper discusses the potential of using a modular-based approach as a kernel methodology to support the co-evolution of product structure and production network definition, especially in the case of supplier selection in the context of frugal innovation. This approach is mainly suitable for design strategies based on the use of standard modules or the reuse of existing solutions (with possible adaptation). A smart algorithm is used to generate and evaluate the alternative supplier networks.
2 PLM approach for supplier selection

Frugal innovation and suppliers' selection

With respect to the frugal innovation principle, the global modular-based approach should support an easy interpretation of customer requirements and the identification of only the modules concerned by the customization process ([START_REF] Du | Product Families for Mass Customization: Understanding the Architecture[END_REF]; [START_REF] Mourtzis | The evolution of manufacturing systems: From craftsmanship to the era of customization. Design and Management of Lean Production Systems[END_REF]). The concept of module represents a physical or conceptual grouping of product components forming a consistent unit that can easily be identified and replaced within a product architecture, in order to increase product variety and flexible adaptability [START_REF] Jiao | Product Family Design and Platform-based Product Development: State-of-the-art Review[END_REF]. In the proposed frugal innovation process (Figure 1), the designer can propose several alternatives of product modules with specific features to cope with a set of customer requirements. These features concern technical characteristics used from the engineering perspective as well as useful inputs for building the related production network. Each module is associated with all possible production capabilities or suppliers able to provide it with the desired characteristics. The selection of the best alternatives of production systems or suppliers is carried out by taking into consideration the different facilities and constraints in the local market to build the global production network. The selection of the best module solutions can then be obtained as a consequence of selecting the related production systems or suppliers.

Fig 1. Co-definition of product structure and production network

Production system refers to the technological elements (machines and tools), organizational behavior and resources managed within the OEM, whereas supplier refers to an external provider of product modules and related support. A supplier can be local or international, depending on local requirements and product module characteristics. By fixing the different production systems and suppliers (for final assembly and for the production of modules), the structure of the production network is defined as a combination of the selected items. The expected behavior of the network is obtained through the definition of the global planning and of all the collaborative processes supporting information and material exchange among these production systems and suppliers. Thus, the assembly process of the whole product structure is obtained according to the global production planning at network level.

Several collaborative relations between the OEM and suppliers are identified in the literature, based on the level of integration of the supplier in the final project of the OEM [START_REF] Calvi | How to manage early supplier involvement (ESI) into the new product development process (NPDP): Several lessons from a French study[END_REF]. At the lowest levels, suppliers are assimilated to simple executors of detailed specifications from the OEM. At higher collaboration levels, the supplier is more involved in the development project and participates in the definition of the product architecture (first-rank suppliers). This type of collaboration can provide serious advantages for the deployment of a frugal innovation strategy in new markets.
In this case the OEM can exploit interesting and innovative solutions proposed by suppliers to design a new frugal product or to adapt an existing one to a specific market. The development process thereby follows a concurrent path in which the selection of the best product modules can be obtained from the identification of the best suppliers respecting the frugal requirements. The supplier selection strategy contributes principally to the improvement of the "Robust", "Affordable" and "Local" attributes, since it gives the company the possibility to use new solutions that can enhance product quality with respect to the target market's standards, and to moderate costs by sourcing from competitive suppliers, especially when those suppliers come from the target market.

The concept of product module can be used to connect the product structure to the different outputs of the suppliers involved in the related production network. Thus, product module features, combined with additional information from the final assembly process planning and the logistic constraints in the local market, can provide relevant requirements for the evaluation of KPIs (Key Performance Indicators), useful for obtaining an impartial assessment of potential suppliers' capacities before considering them in the production network. The mapping of product module features (from the product configuration view) to KPIs (from the supplier selection view) is presented in Figure 2. The product modules are the products of the suppliers. The decision making for supplier selection and product module selection is based on the mapping between requirements (product module features) and the real values of KPIs for the suppliers and their products.

Fig 2. Global PLM approach for supplier selection

Global PLM approach for supplier selection

The role of the mapping is to classify all available suppliers according to the matching level between their KPI values (representing a real assessment) and the product features (representing requirements). Sometimes, when few suppliers can cover the requested module, the mapping table can also be used to adapt, if necessary, some product features according to the capacities of the related supplier. The PLM approach can support the concurrent design process of frugal products and the supplier network through a smart management of product modules and related supplier alternatives for several product configurations addressed to various markets. This is ensured through an optimal integration, storage and connection of numerous data coming from several applications, including supplier selection tools. Indeed, the implementation of the PLM approach results from the integration of heterogeneous IT systems such as ERP (Enterprise Resource Planning), PDM (Product Data Management), SCM (Supply Chain Management), etc. [START_REF] Bosch-Mauchand | Knowledge based assessment of manufacturing process performance: integration of product lifecycle management and value chain simulation approaches[END_REF]. In the proposal, the supplier selection process is achieved based on data stored in different business tools. These data are identified and managed in the PLM system as specific features connected to the different product module alternatives. Each module is analyzed to generate supplying requirements. The final decision is the identification of the optimal "modules-suppliers" combination, as a fragment of the co-definition of the best product structure and supplier network with respect to the frugal attributes.
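As an illustration of this matching step, the small sketch below scores suppliers by how well their normalized KPI values cover a module's feature requirements. The feature names, values and scoring rule are made-up assumptions of ours, not the actual ProRegio mapping tables.

```python
# Illustrative requirement levels for one product module (cf. Table 1 features)
module_requirements = {"performance": 0.8, "interfacing": 0.6, "criticality": 0.9}

# Made-up normalized KPI values (0-1, higher is better) for candidate suppliers
supplier_kpis = {
    "supplier_A": {"performance": 0.9, "interfacing": 0.5, "criticality": 0.95},
    "supplier_B": {"performance": 0.7, "interfacing": 0.8, "criticality": 0.6},
}

def matching_level(requirements, kpis):
    """Score a supplier by how well its KPIs cover the module requirements:
    a KPI at or above the required level counts fully, below it proportionally."""
    ratios = [min(kpis[f] / required, 1.0) for f, required in requirements.items()]
    return sum(ratios) / len(ratios)

ranking = sorted(supplier_kpis,
                 key=lambda s: matching_level(module_requirements, supplier_kpis[s]),
                 reverse=True)
print(ranking)  # supplier_A covers the critical features best in this toy example
```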
Product features for supplier selection

To illustrate the proposed principle of using modularity for the supplier selection problem, an example of the concurrent design process of a frugal product and the identification of possible suppliers for each module is presented in Figure 3. This scenario deals with the case of adapting an existing product architecture to a new market based on the capabilities of existing suppliers in the targeted market.

Fig 3. Supplier selection strategy based on the global modular product design approach

The process starts with the analysis of new market requirements (including specific customer requirements), followed by a search for existing product structures covering these needs. The analysis of module functions and features, taken as requirements, is used to generate a list of possible suppliers. The list of supplier alternatives in turn provides a list of possible module alternatives matching the functions of the originally designed product. Several alternatives to the original product structure (developed in older projects) can then be obtained as combinations of the product modules proposed by the selected suppliers. The performance assessment of each alternative helps to fix the best product structure and, consequently, the best product modules replacing the original ones. Finally, the list of suppliers is fixed as those providing the selected modules.

To perform these decision-making processes, additional categories of information should be embedded in the product module concept. Defining the product module through features serves to connect the different views of product design and development. Product module features are defined to translate the regional customer requirements into product design and to connect the product design to production planning and supplier network design. The product module features address three categories of objectives as inputs to frugal decision-making problems, in line with the objectives of the global modular product design approach:
• analyzing customer/market requirements and linking product modules to the requested functions in a specific product architecture;
• defining the modules' parameters and interfaces to support the design of a new product (and related alternatives) as a consistent combination of product modules;
• responding to the production strategy (production system and network definition) through a consistent connection of modules to optimal production capabilities.

These three categories of features are used to identify requirements for supplying properties and production planning, as well as to refine the selection of product modules. From these categories, Table 1 lists the relevant features to be used as requirements in the supplier selection process.

Table 1. List of module features for the supplier selection process
- Performance: acceptable standardization and tolerance level for the product module performance values, regarding the global product performance.
- Interfacing: capacity of one module to be interfaced with other ones.
- Interchangeability: capacity of one module to be replaced by one or more other modules, from other suppliers, providing similar functions.
- Customization: requested possibility to change some properties of the module, and the level of options proposed in the supplied module.
- Process position: connection of the module to the different steps of the final assembly process and the level of dependency with the connected modules.
- Criticality: the importance of the related function regarding its added value to the final product structure.
- Supplying tolerance: level of demand on the supplier in terms of cost, delivery time and confidence level, as a consequence of the previous module features.

Figure 4 illustrates an example of using product module features to analyze the product life cycle and generate additional supplying tolerance features, to be considered as requirements and decision criteria for the selection of the best suppliers from a frugal innovation perspective. These supplying decision criteria and/or the related KPIs can be used for production network design.

Fig 4. Product module features as requirements for supplying decision criteria

Through the concurrent design of the product structure and the supplier network, conflicts can appear from contradictions between the analysis of product module features on one side and supplying requirements on the other. For example, a module can have high performance but a real risk of late delivery given the supplier's capacities, especially if the module is requested early in the production process. This would imply significant delays in the final assembly process of the product, seriously affecting the affordable and growing attributes.

Figure 5 illustrates this kind of conflict between product modules "as designed" and real modules "as produced by suppliers". The product module "PM1" has the product module feature "Criticality" set to "High". PM1.1 is the "best" implementation and PM1.2 the "worst" one according to the related suppliers' KPIs. However, PM1.1 presents a low level of interfacing with PM2.1, which is the only possible implementation of PM2. The conflict is that if the best product module with high criticality (PM1.1) is selected, then the supply of the other product module (PM2) is not possible. There are two possibilities to resolve this type of situation: either (i) select the modules to be supplied as per the requested module features, and redesign (or select a new supplier for) the product modules that cannot be supplied by any existing supplier; or (ii) relax the feature values of the conflicting product module in the requested product. Thus, product module features and the KPIs for production network design should be connected together through the PLM approach to evaluate all possible product-supplier combinations in the production network.

Fig 5. Conflict possibilities in the co-definition of product structure and supplier network

This connection provides real positive impacts on the frugal attributes. For instance, choosing the optimal supplier based not only on its own properties but also on the criticality of the related modules allows managers to balance product cost and quality by favoring reputable suppliers for the critical modules (high functional criticality requiring high quality) even if the price is high, and obtaining the less critical modules (with less added value for the target market) from low-cost suppliers.
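The PM1/PM2 situation above boils down to filtering module combinations by interface compatibility before any performance ranking, as in this toy sketch (the compatibility data are invented for the example):

```python
# Invented interface-compatibility data for the PM1/PM2 example:
compatible = {
    ("PM1.1", "PM2.1"): False,  # best-rated PM1 implementation, poor interfacing
    ("PM1.2", "PM2.1"): True,   # lower-rated PM1 implementation, compatible
}

def feasible_combinations(pm1_alternatives, pm2_alternatives):
    """Keep only the module combinations whose interfaces match; an empty
    result signals a conflict requiring redesign or relaxed feature values."""
    return [(a, b)
            for a in pm1_alternatives
            for b in pm2_alternatives
            if compatible.get((a, b), False)]

combos = feasible_combinations(["PM1.1", "PM1.2"], ["PM2.1"])
print(combos or "conflict: relax feature values or redesign")  # [('PM1.2', 'PM2.1')]
```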
Supplier selection and network design method

The supplier selection and network design method should be capable of generating alternative product-supplier combinations and evaluating their performance on multiple and conflicting criteria, based on the constraints and suggestions provided by the designer during the generation of the different product structures. One of the main challenges during supplier selection and supplier network design is to select the optimum suppliers based on their suitability and their availability. Therefore, the proposed work uses a smart search algorithm capable of exploring a subset of the total number of alternative manufacturing network configurations by utilizing three adjustable parameters [START_REF] Chryssolouris | Manufacturing Systems: Theory and Practice[END_REF]. The three control parameters are the maximum number of alternatives (MNA), which controls the breadth of the search; the decision horizon (DH), which controls the depth of the search; and the sampling rate (SR), which guides the search towards the high-quality branches of the tree of alternatives [START_REF] Mourtzis | A Multi-Criteria Evaluation of Centralized and Decentralized Production Networks in a Highly Customer-Driven Environment[END_REF]. The optimum values of these parameters are obtained through a statistical Design of Experiments (SDoE) [START_REF] Phadke | Quality Engineering Using Robust Design[END_REF].

The main inputs of the proposed algorithm are the various product features generated by the product configuration, the pre-filtered list of suppliers based on their compatibility and suitability, and the bill of processes needed to produce the product modules and, in general, the final product. The decision-making procedure followed by the supplier selection and network design algorithm is based on resource-task assignment decisions: to produce a part/module of the product, a task should be performed by a resource, and the resource belongs to a plant or to a supplier. In the case of suppliers, the task is directly connected with the supplier. The main steps of the proposed algorithm are the formalization of the alternatives, the determination of the criteria satisfying the objectives, the definition of the criteria weights, the calculation of the criteria values and, last, the selection of the optimum alternative based on performance (Figure 6) ([START_REF] Mourtzis | A Multi-Criteria Evaluation of Centralized and Decentralized Production Networks in a Highly Customer-Driven Environment[END_REF]; [START_REF] Mourtzis | Design and Operation of Manufacturing Networks for Mass Customisation[END_REF]). Multiple and conflicting criteria are considered and calculated by the smart search algorithm during the decision-making procedure. Production and transportation cost, quality, lead time, total energy consumption and CO2 emissions are among the main criteria considered [START_REF] Mourtzis | F A Toolbox for the Design, Planning and Operation of Manufacturing Networks in a Mass Customisation Environment[END_REF]. To address the needs of frugal innovation in the context of the supplier selection method, the locality of the suppliers is also considered as a main criterion. Locality expresses how close the supplier is to the targeted market. This contributes to enhancing the affordable and local attributes by reducing logistics costs and using existing standards in the target market, since the providers come from that same market.
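As a rough conceptual stand-in for the evaluation step just described (the actual tool relies on the smart search controlled by MNA, DH and SR, tuned through a statistical design of experiments), the sketch below enumerates module-to-supplier assignments and ranks them with a weighted sum over normalized criteria; all names and values are made up.

```python
import itertools

# Hypothetical per-supplier criteria, normalized to [0, 1], higher is better
# (cost and lead time are assumed to be inverted beforehand).
candidates = {
    "moduleA": {"sup1": {"cost": 0.7, "quality": 0.9, "lead_time": 0.6, "locality": 1.0},
                "sup2": {"cost": 0.9, "quality": 0.6, "lead_time": 0.8, "locality": 0.3}},
    "moduleB": {"sup3": {"cost": 0.8, "quality": 0.7, "lead_time": 0.7, "locality": 0.9}},
}
weights = {"cost": 0.3, "quality": 0.3, "lead_time": 0.2, "locality": 0.2}

def utility(kpis):
    """Weighted-sum utility of one supplier's normalized criteria vector."""
    return sum(weights[c] * kpis[c] for c in weights)

def best_network(candidates, max_alternatives=100):
    """Enumerate module-to-supplier assignments (capped by max_alternatives,
    loosely playing the role of the MNA parameter) and keep the assignment
    with the highest average utility."""
    modules = sorted(candidates)
    space = itertools.product(*(candidates[m].items() for m in modules))
    best, best_score = None, float("-inf")
    for combo in itertools.islice(space, max_alternatives):
        score = sum(utility(kpis) for _, kpis in combo) / len(modules)
        if score > best_score:
            best = {m: supplier for m, (supplier, _) in zip(modules, combo)}
            best_score = score
    return best, best_score

print(best_network(candidates))
```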
Moreover, through the consideration of locality as a criterion, local suppliers can be favored, increasing the market share of local markets. The total network performance is calculated by measuring the defined KPIs. At that stage, the KPIs suggested at the product configuration stage are also considered. The final results for the selected network are sent back to the PLM tool in order to fix the different modules of the product and finalize its design.

Fig 6. Supplier selection and network design algorithm (smart search algorithm)

Conclusion

... constraints and resources. The global modular approach has been used to support the design and development of frugal products. The application of the module concept from a frugal innovation perspective showed the need to rethink the definition of the product model and its integration in future generations of PLM systems. New features should be considered to represent production requirements from the frugal product side and to connect product design to production and supplying strategies. Considering product features as decision criteria for the supplier selection strategy will contribute to enhancing the product's frugal attributes, especially the affordable, robust and local ones. This is achieved through an optimal balance between cost, quality and delivery time requests, assessed separately for each product module according to its criticality in the target market, its impact on global performance and its implication in the final assembly process. The logic is that for critical modules, strong trust in the suppliers' performance is required, whereas less important modules, or modules with high interchangeability, allow more flexibility in the selection strategy for cost reduction. The proposed methodology can provide a great advantage to companies moving towards the frugal innovation concept, by providing local, low-cost and affordable products that take local customers' requirements into account. Moreover, through the consideration of locality as a criterion in the proposed supplier selection and network design tool, local suppliers can be considered, increasing the market share of local markets. This paper focuses on the conceptual solution rather than on the final implementation for an industrial use case, which is under development. The developed methodology has been well received by industrial as well as academic partners.

Acknowledgments. The presented results were conducted within the project "ProRegio", entitled "customer-driven design of product-services and production networks to adapt to regional market requirements". This project has received funding from the European Union's Horizon 2020 program under grant agreement no. 636966. The authors would like to thank the industrial partners involved in this research.
Cécile Tannier

About fractal models in urban geography and planning: refuting the aesthetics and the universal norm

Fractal models have been used in urban geography for forty years. Their main applications have been the analysis of urban forms, but they have also been used to simulate urban growth. Research in the field has significantly contributed to better characterising the local and global shape of cities and to better understanding their evolution. Yet concomitantly, one can deplore the circulation of some myths about the interpretation of fractal analysis results and about the possible usage of fractal models for urban analysis and planning. I propose here to undermine some of these myths on the basis of scientific publications in the field.1

1. Cities are not fractal.

Indeed, measures of fractal dimension vary in space: the fractal dimension estimated for a whole city differs from the fractal dimension estimated for its neighbourhoods, each having its own dimension (see e.g. [START_REF] Thomas | Comparing the fractality of European urban neighbourhoods: do national contexts matter[END_REF]). Moreover, considering a given built pattern, the slope of the curve that represents the number of counted elements with respect to the size of the counting window may exhibit local variations (Frankhauser 1998; [START_REF] Frankhauser | Comparing the morphology of urban patterns in Europe. A fractal approach[END_REF]; [START_REF] Tannier | Fractals in Urban Geography: A Theoretical Outline and an Empirical Example[END_REF]; [START_REF] Thomas | Clustering patterns of urban built-up areas with curves of fractal scaling behaviour[END_REF]). Very early on, pioneering works in geography insisted on the fact that the fractal dimension is not expected to be constant in reality [START_REF] Goodchild | Fractals and the Accuracy of Geographical Measures[END_REF]; most often, it is constant over a limited range of scales but varies somewhat over successive ranges of scales ([START_REF] Lam | On the Issues of Scale, Resolution, and Fractal Analysis in the Mapping Sciences[END_REF]; [START_REF] White | Urban Systems Dynamics and Cellular Automata: Fractal Structures between Order and Chaos[END_REF]). Consequently, properties of scale invariance and statistical self-similarity are locally specific but not universal, and concern limited scale ranges. Thus, by definition, cities are not fractal.

Nevertheless, measures of fractal dimension are interesting for geographers as they enable the characterisation of highly heterogeneous spatial distributions. Indeed, as the fractal dimension is determined by counting the elements of a spatial distribution at several nested spatial resolutions, it informs us about the systematic variations of a given geographical fact across scales. Such variations are neither proportional nor linear, and would not be detected with other spatial concentration indexes such as density [START_REF] François | Villes, densité et fractalité. Nouvelles représentations de la répartition de la population[END_REF]. In practice with fractal analysis, the starting point is to set the hypothesis that a spatial distribution is scale-invariant (or statistically self-similar). Deviations from scale invariance are then studied. Such deviations may appear for some scale ranges but not others. Deviations may also vary in space, which allows the identification of spatial differentiations.
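The measurement principle just described can be made concrete with a short box-counting sketch; this is our own minimal illustration, not the procedure used in the cited studies.

```python
import numpy as np

def box_counting_dimension(grid, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary built-up map `grid`
    (a square 2-D numpy array of 0/1) as the slope of log N(s) against
    log(1/s), where N(s) counts the boxes of side s containing built cells."""
    counts = []
    for s in sizes:
        n = grid.shape[0] // s
        trimmed = grid[:n * s, :n * s]
        boxes = trimmed.reshape(n, s, n, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy check: a fully built square plane yields a dimension of 2.
grid = np.ones((64, 64), dtype=int)
print(round(box_counting_dimension(grid), 2))  # ~2.0
```

In empirical work, the interesting information is precisely where and over which scale ranges the log-log curve departs from a straight line.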
2. Evolution of the shape of cities does not comply with a unique model that would finally end in a fractal order state.

For P. Frankhauser [START_REF] Frankhauser | Comparing the morphology of urban patterns in Europe. A fractal approach[END_REF] and N. Salingaros [START_REF] Salingaros | A universal rule for the distribution of sizes[END_REF], the emergence of fractal urban patterns originates in the combination of two types of processes:
• bottom-up processes resulting from individual actions (for instance, the choice of households to become owners of an individual house in a suburban area) or from collective actions (for instance, actions of local pressure groups following the Nimby (Not In My Backyard) logic);
• top-down processes (i.e. urban and regional planning).

For F. Schweitzer [START_REF] Schweitzer | Analysis and Computer Simulation of Urban Cluster Distributions[END_REF], fractal urban patterns emerge from the interaction of contradictory actions at the individual level only, i.e. residential location choices that minimize the distance to both the city centre and the urban boundary (countryside). Underlying those assumptions is the idea of self-organisation. In a self-organised system, a meso- or macroscopic order emerges from interactions at a microscopic level and in return constrains the future evolutions of the system.2 Accordingly, F. Schweitzer [START_REF] Schweitzer | Analysis and Computer Simulation of Urban Cluster Distributions[END_REF] set the hypothesis that the rank-size distribution of urban built clusters changes in the course of time towards a Pareto distribution. If this hypothesis were confirmed, the conclusion would be that the Pareto exponent indicates the development stage of a city and that deviations from a Pareto distribution indicate potentials for future urban developments. Yet some empirical research results have refuted this hypothesis. In particular, L. Benguigui et al. [START_REF] Benguigui | The Dynamics of the Tel Aviv Morphology[END_REF] have shown that the rank-size distribution of built clusters of the city of Tel Aviv (Israel) followed a Zipf's law from 1935 to 1964, then gradually deviated from it between 1974 and 2000.

Several scholars have also studied the evolution of the fractal dimension of cities in the course of time. Although the comparison of fractal dimension values obtained in different studies is not possible, since the data used and the calculation methods differ, similar general tendencies can be noted. P. Frankhauser [START_REF] Frankhauser | Aspects fractals des structures urbaines[END_REF] calculated an increase of the fractal dimension of Berlin (Germany) over time: 1.43 in 1875, 1.54 in 1920 and 1.69 in 1945. Shen [START_REF] Shen | Fractal Dimension and Fractal Growth of Urbanized Areas[END_REF] calculated an increase of the fractal dimension of Baltimore (USA), from 1.015 in 1822 to 1.722 in 1992. An increase of the fractal dimension of the built surface has also been shown for the metropolitan area of Basel (Switzerland, France, Germany) between 1882 and 1994 [START_REF] Tannier | Fractals in Urban Geography: A Theoretical Outline and an Empirical Example[END_REF] as well as for the metropolitan area of Lisbon (Portugal) between 1960 and 2004 [START_REF] Encarnação | Fractal cartography of urban areas[END_REF]. Yet if the shape of cities evolved according to a fractal growth process, their fractal dimension should not change in the course of time, which the studies quoted above contradict.
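Both tests mentioned in this section, the rank-size hypothesis and the evolution of fractal dimensions, come down to fitting a power-law exponent on a log-log curve. A small sketch of the rank-size variant, with made-up cluster areas:

```python
import numpy as np

def zipf_exponent(cluster_sizes):
    """Fit the rank-size exponent b in size ~ rank^(-b) by ordinary
    least squares on the log-log rank-size curve."""
    sizes = np.sort(np.asarray(cluster_sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(sizes) + 1)
    b, _ = np.polyfit(np.log(ranks), np.log(sizes), 1)
    return -b

# Made-up built-cluster areas; an exponent near 1 would indicate a Zipf law.
areas = [5000, 2400, 1700, 1250, 980, 820, 700, 610, 560, 500]
print(round(zipf_exponent(areas), 2))
```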
An objection can be made that these studies consider each city within a spatial extent that is fixed in the course of time and that comprises, on the one hand, the urban area itself, which expands gradually, and, on the other hand, its periphery. That is why other studies have considered cities within a study area that expands along with the urbanisation process. For instance, L. Benguigui, D. Czamanski, M. Marinov and J. [START_REF] Benguigui | When and Where Is a City Fractal? Environment and Planning B: Planning and Design[END_REF] have analysed the evolution of Tel Aviv's metropolitan area from 1935 to 1991, taking into account three nested study areas. They have shown that growth differed in each part of the metropolitan area: the fractal dimension increased at a different speed in each part; some parts became "fractal" (i.e. statistically self-similar over a given scale range) earlier than others; and the whole metropolitan area only became "fractal" in the mid-1980s.

The fact is that a city does not evolve for centuries according to a single fractal growth process that would go on until the achievement of a final "maturity" stage of the urban form. First, the rules that determine the location and the shape of new urban developments change in the course of time. Second, the urban sprawl process results in the gradual integration of peripheral built areas (villages, hamlets, diffuse suburban settlements) into the inner city. Third, an urban built pattern can be deeply modified through destruction and reconstruction (for instance, the re-shaping of Paris during the 19th century, the reconstruction of cities after massive destruction resulting from bombing or natural disasters, or the massive destruction of old built neighbourhoods and the construction of large buildings and skyscrapers in contemporary Chinese cities). Last, new towns can be created ex nihilo on the periphery of large cities.

3. Nothing proves that fractal urban forms are optimal by nature.

Starting from the statement that a fractal order can emerge at a meso- or macroscopic level from self-organising processes, fractality is sometimes seen as a desirable equilibrium state. « Multifractality represents optimal structure of human geographical systems because a fractal object can occupy its space in the most efficient way. Using the ideas from multifractals to design or plan urban and rural terrain systems, we can make the best of human geographical space » (Chen 2016).3 Thus, if self-organised fractal forms are satisfying (even optimal), urban planning becomes useless (and even annoying), because top-down constraints may engender deviations from fractality (Genre-Grandpierre 2017). Subsequently, deviations from fractality can be seen as signs of dysfunction. For Y. Chen [START_REF] Chen | Multifractal Characterization of Urban Form and Growth: The Case of Beijing[END_REF], for instance, deviations from a multifractal structure of Beijing's urban area denote its decline and the degeneration of the inner city. Yet we have seen above that the shapes of cities and urban built patterns are not fractal; nonetheless, they are not all in decline, nor are they all degenerated.

Adopting an organicist point of view, other scholars support the idea that urban planning and design should aim at the creation of fractal forms because such forms exist in nature (in which they spontaneously emerge) and are thus "by nature" virtuous and optimal.
Fractals are then raised to a universal aesthetic principle, see e.g. [START_REF] Jiang | A New Kind of Beauty Out of the Underlying Scaling of Geographic Space[END_REF]. Besides the fact that such a principle falls under belief more than science, its adoption most often leads to giving more importance to emerging forms than to their generative mechanisms. The resulting models are essentially structural: they take into account neither the real behaviours of individuals (resulting from a combination of aspirations, constraints and available means) nor the strong emergence that characterises social systems.4

The fact is that the functional advantages of realistic fractal urban developments (starting from an existing urban pattern) with respect to non-fractal developments are still poorly known, because they have rarely been studied until now. However, a shape is not intrinsically optimal: it is optimal only with regard to the processes (i.e. behaviours, practices) that this shape allows to optimise. Moreover, the absolute does not exist in the case of spatial distributions of human settlements, because the diversity of contexts (social, political, economic, natural, etc.) results in a high diversity of human spatial behaviours and practices. The "good" fractal dimension for urban planning does not exist. A same value of fractal dimension can characterise very different urban shapes and can result from generative processes that are qualitatively and quantitatively very different [START_REF] Pumain | Commentaire sur le chapitre 3 -Les fractales doivent-elles guider l'aménagement urbain ? In G. Dupuy (dir) Villes, réseaux et transport. Le défi fractal[END_REF]. As a ...

Notes

1. Most ideas exposed in this blog post are taken from the dissertation entitled "Analysis and simulation of the concentration and the dispersion of human settlements from local to regional scale. Multi-scale and trans-scale models" [in French], chap. 3, "Variation de la concentration et de la dispersion des implantations humaines à travers les échelles : modèles mono- et multi-fractals", pp. 114-175, C. [START_REF] Tannier | Analyse et simulation de la concentration et de la dispersion des implantations humaines -Modèles multi-échelles et trans-échelles[END_REF]. https://tel.archives-ouvertes.fr/tel-01668615v1

2. Additionally, scholars commonly introduce in self-organising models a limiting factor that often corresponds to a maximum city size, which cannot be overcome [START_REF] Schweitzer | Analysis and Computer Simulation of Urban Cluster Distributions[END_REF], or to a maximum urbanisation rate [START_REF] Chen | Defining urban and rural regions by multifractal spectrums of urbanization[END_REF].

3. Y. [START_REF] Chen | Defining urban and rural regions by multifractal spectrums of urbanization[END_REF] goes further, as he suggests that the urbanisation process should ideally stop at the stage at which the urbanisation rate L equals 0.618 and the urban-rural ratio (called "golden ratio" in the quoted paper) equals 1.618.

4. It is possible to distinguish weak emergence, where the macroscopic structures resulting from microscopic behaviours can be observed by an external observer identifying a particular regularity in the observed process, from strong emergence, where the microscopic entities themselves observe the macroscopic structures they have produced and adapt their behaviours accordingly [START_REF] Livet | Ontology, as a mediator for agent-based modeling in social science[END_REF]. Social systems are characterised by strong emergence: through their membership of a social group or a place, individuals participate in the creation of collective references that refer to this group or place (weak emergence). In return, the adoption (or not) of these collective references by the group or the individual influences its behaviour (strong emergence).
Eric Charmes, Roger Keil

THE POLITICS OF POST-SUBURBAN DENSIFICATION IN CANADA AND FRANCE

This debate specifically focuses on densification as a particular dimension of (post-)suburbanization. In the introduction, we discuss densification, along with 'compactness' and 'intensification', conceptual terms that have become buzzwords within urban planning. Objectives associated with these tend to be presented in the literature within a normative framework, structured by a critique of the negative effects attributed to sprawl. The perspective here is different. It is not normative but critical, and articulated around the analysis of political and social issues related to the transformation of wider metropolitan space. Three main themes are developed: (1) the politics of densification (the environmental arguments favouring densification are highly plastic, and are thus often used to defend projects or initiatives which are actually determined by other agendas); (2) why morphology matters (a similar number of houses or square metres can be established in many different ways, and those different ways have political and social meaning); (3) the diversity of suburban densification regimes (it is not only the landscapes of the suburbs that are diverse, but also the local bodies governing them--between the small residential municipalities of the Paris periurbs and the large inner suburbs of Toronto lies a broad spectrum).

Introduction

While newly developed areas of North America and Western Europe continue to be filled with detached single-family dwellings (see Figure 1), many older suburbs are changing ([START_REF] Harris | Meaningful types in a world of suburbs[END_REF]; Hamel and Keil, 2015). This process has been described as post-suburbanization (Phelps and Wu, 2011a). It does not affect all suburbs, and resistance to change is strong (Filion, 2015, this issue), but the changes are significant in many places. It is important to state that we are not talking here about a distinct typology--suburbs versus post-suburbs--but rather about a historical change in direction: a process of dedensification (classical suburbanization) is partly converted, inverted or subverted into a process that involves densification, complexification and diversification of the suburbanization process (see Figure 2). Los Angeles, for example, long viewed as the ultimate suburban city, is really one of the densest metropolitan areas in the United States, an 'inverted city' where suburbanization has begun to fold in on itself as areas traditionally considered sprawling, mono-cultural and mono-functional have increasingly become denser, more multicultural and mixed in use. In the European case, post-suburbanization involves a slight shift in focus from the discourse on the traditional (dense, centralized, politically integrated) European city--as argued masterfully by Patrick Le Galès [START_REF] Galès | European Cities: Social Conflicts and Governance[END_REF]--to a model that acknowledges the dissolution posited in the literature on the 'in-between city' or zwischenstadt (Sieverts, 2003).
In the Canadian case, we have witnessed the turn away from the classical North American model, with its clear separation of the classical nineteenth-century industrial inner city and twentieth-century suburbanization, towards 'in-between' or 'post-suburban' forms of peripheral urbanization: in this process, the post-second world war 'inner suburbs' with their particular assemblage of issues--decaying high-rise housing stock, concentrations of poverty, mobility imbalances, often racialized social segregation, a dramatic lack of services, etc.--have become a focus of attention among researchers (Hulchanski, 2010; Keil, 2011; Keil and Young, 2015; Poppe and Young, 2015, this issue). As the growing literature on post-suburbs shows, such processes are by no means ignored. The most comprehensive and influential recent contribution to the debate on post-suburbanization is the introduction to a recent collection of studies on the subject by Phelps and Wu (2011a). In this view, post-suburbanization refers primarily to an era involving a double re-definition of classical suburbia: a 'maturation' of the suburbs and a host of new influences changing the nature of those areas. Some interventions, like the extensive literature on Los Angeles since the 1980s, have gone so far as to claim an epochal shift in urbanization patterns. More broadly, however, post-suburbia entails the notion of a reversal of the linearity of historical processes, as traditional geographical typologies of ordered concentric segmentation have given way to a more splintered or fragmented urbanism (as encountered under broader processes of neoliberalization). Post-suburbanization does not refer to a complete and featureless dissolution but to a reconsolidation of the urban fabric, even a rebalancing, and a rejection of classical functional or conceptual dichotomies such as live-work. This is particularly the case in the technoburbs that have been associated with the process. Post-suburbanization also entails a profound re-scaling of the relations and modes of governance that have traditionally regulated the relationships between centre and periphery in the suburban model (Phelps et al., 2010; Phelps and Wu, 2011b; Hamel and Keil, 2015). In this contribution to Debates & Developments in IJURR, we specifically focus on densification as a particular dimension of (post-)suburbanization. This introduction and the four essays that follow make no larger claims about suburbanization in France and Canada or even their comparative trajectories. Yet we do think that these essays have significance in engaging critically with the strategic and normative preference for density and compactness in a sustainability paradigm that often remains unchallenged on both sides of the Atlantic. In this context the Canadian and French examples, and their comparison through the work presented here, are valuable. Low-density morphological patterns have not only been at the heart of the original programme of suburban utopia with its setting of pastoral life (Fishman, 1987), but they are also key to understanding the recent transformation of suburbs.
Yet in both national contexts under review here, Canada and France, higher-density peripheral development has also been part of the development regime since as far back as the 1960s. More recently (and most relevant to the debate presented here), densification, along with 'compactness' and 'intensification', has become a buzzword in urban planning. The objectives associated with densification tend to be presented in the literature within a normative framework, structured by a critique of the negative effects attributed to sprawl. There is a wealth of propositions under the banners of New Urbanism or Smart Growth in North America and Sustainable Cities in Europe, around the 'battle against urban sprawl' and the need to increase the density of cities to make them more resilient and sustainable. Our perspective here is different. It is not normative but critical, and articulated around the analysis of the political and social issues related to the transformation of wider metropolitan space. The aim of this debate is to document with empirical surveys the issues at stake in densification policies. These transformations serve certain interests yet neglect many others. They often lead to the displacement of less well-off populations. More broadly, their geography is uneven, predominantly targeting low-income and working-class suburbs. The success of the themes of densification and of the battle against urban sprawl should also be related to the fact that those themes converged with the interests of urban planners (under the buzzword of growth control), of politicians in core urban areas (who welcome new residents and activities) and of developers (who can exploit the rent gap, new opportunities, etc.). Moreover, the need for dense cities is a convenient argument for overcoming the strong local resistance to urban development (see below). Such a critical perspective has of course already been developed in the literature. Among others, John Logan and Harvey Molotch (2007 [1987]: xx) noted (in the second edition of their book Urban Fortunes) that densification not only serves environmental interests but also helps defend the 'same old growth machine'. The emergence of a consensual discourse on sustainable development is often accompanied by a de-politicization of the issues at stake (Béal et al., 2011). The gravitas gained by sustainable development ideology contributes to silencing debates on political and social issues. Sustainability itself becomes the stand-in for better (sub)urbanization and is not usually exposed to critical scrutiny (Keil, 2007). For example, the website of Richard Rogers' planning and architecture practice (one of the leading proponents of compactness) proclaims that 'Compact polycentric cities are the only sustainable form of development' (Rogers Stirk Harbour + Partners, n.d.). If the survival of humanity is at stake, there is no point in debating whether to increase the density (which is key to compactness) of cities. With this debate, however, we propose to discuss this normative ideal. Three main themes are developed. The first is the politics of densification. All the essays address some of the political, social and economic stakes of densification.
Within this introduction, we will stress the plasticity of the environmental arguments favouring densification. In fact, the idea that the dense city is more sustainable than the low-density city can be contested on environmental grounds. This plasticity of the environmental discourse makes it all the more necessary to consider densification as a political process favouring some interests while disadvantaging others. In any case, smart growth and new urbanist models are eagerly supported by many land developers, builders and local political elites favouring growth. The second dominant theme of this debate is how and why morphology matters. For reasons that will be developed in this introduction and in the essays, discussing density will not suffice to qualify the changes related to densification. Density is a poor predictor of urban forms; a development consisting of terraced houses may, for example, have the same density as a modernist estate of tower blocks. The third and last dominant theme is the diversity of suburban densification regimes. It is not only the landscapes of the suburbs that are diverse, but also the local bodies governing them. Between the small residential municipalities of the Paris periurbs and the large inner suburbs of Toronto lies a broad spectrum. The debate presented here develops these considerations from four case studies, two in Canada and two in France. This is fortuitous, the result of encounters between participants in two research projects (a French project on urban change in low-density residential areas and the Global Suburbanisms project, a globally scaled Canadian study) [2]. Moreover, in focusing on Canada and France, we are not arguing that such a comparison is entirely new, nor are we ignoring the wealth of literature that exists in each country on processes of suburban diversification and change. Yet the comparison of France and Canada is highly relevant, especially in a context where US case studies have largely dominated the Anglophone (and indeed other) literatures. What the US tells us about suburbs and post-suburbs is relevant for other countries, including Canada and France. Moreover, global dynamics exist and we can in fact talk about a phenomenon of 'global sprawl' (Keil, 1994). Yet each national context gives a specific flavour to suburbanization. And since the focus of this debate is on the politics of densification specifically, Canada and France are two interesting cases to discuss and to compare with the US case, as the societal conditions differ significantly and the variegations of (neoliberal) social formations matter in terms of (post-)suburban outcomes. Indeed, we can detect a Wacquantian landscape of difference: 'In this sense, Canada/Toronto is located in the mid-range of a scale in which France/Paris and USA/Chicago are extremes. This has to do as much with the traditionally mixed social and capitalist economy in Canada as with the particular nature of neoliberalization in that country' (Young and Keil, 2014: 1594). These differing political histories translate into different representations of the suburbs. Low-density residential suburbs are numerous in France, for example, and combating sprawl is a central focus of public policies and debates. Yet the dominant image of the suburb is not associated with individual detached houses.
In France, the word banlieues (which translates as suburbs in English) evokes images of apartment towers and barred windows (see Figures 3 and 4) rather than a grid-like alignment of detached single-family dwellings, and images of marginalized immigrant populations rather than a white middle-class community fully integrated into the economy. As explained by Max Rousseau (2015, this issue) in the first part of his essay, this image can be explained by the particular role played by the French state in the production of the city. In Canada too the suburbs are not homogeneous neighbourhoods of single-family homes, but instead largely comprise higher-density morphological forms marked by comprehensive socio-economic and ethno-cultural diversity (see below). The contributors to this debate build on a robust literature in Canada that has specifically explained the country's suburbanization through its historical-geographical diversity (see e.g. Harris, 1996; 2004; Walks, 2006; 2007; Addie et al., 2015; Keil et al., 2015). Finally, the four essays to follow in this debate focus on transformations in residential space. This choice is, above all, practical: by limiting the diversity of cases, we facilitate significant comparisons (although the terrains under investigation were limited to two countries). That said, this choice is not meant to reduce the suburbs to their residential status. As is well documented, the suburbs do not consist of housing alone. For quite some time, they have brought together employment, commerce, cultural organizations, infrastructural and logistical facilities, ecological spaces (parks and greenbelts) and large-scale institutions such as hospitals and universities. These changes are at the very heart of the move from suburbanization to post-suburbanization.

The politics of densification: towards sustainable cities or the new guise of growth coalitions?

The debate presented here focuses on densification. Densification is by no means the only morphological change that affects suburbs and post-suburbs, but it is certainly among the most discussed strategies within the planning community. The densification of residential suburbs is commonly considered a key objective. It is evidenced in France by the so-called 'Grenelle 1 et 2 de l'environnement' legislation, enacted in 2009 and 2010. Grenelle 2, for example, allows local plans to impose a minimum level of density, especially in areas close to public transport links. Such provisions are almost unopposed, either from the left or from the right (at least at the national level). In the next section of this introduction, we present the politics behind this consensus in favour of densification. As we will see, much recent research points towards a questioning of the relationship between density and sustainability. A re-politicization of the anti-sprawl discourse within urban planning can be expected from this turn in the debate.

The density turn within the environmentalist discourse

In the 1970s, an environmentalist was often someone who escaped the city and its pollution; in the 1980s, however, a reversal occurred with respect to the environmental discourse and the city.
This reversal can be symbolized by the success of the work of Peter Newman and Jeffrey Kenworthy (1999), who popularized the simple equation that a dense city is a sustainable city (Charmes, 2010a). Living in a lower-density environment, attractive for its greenness, usually means being far from the concentrated resources of a city and depending on the car for even the shortest trips. More energy is consumed and this lifestyle therefore has a negative effect on the environment. This critique of car dependency reinforced previously expressed critiques. In both France and Canada, leapfrog development has since the 1970s been perceived as a threat to agriculture. A related but different criticism focuses on the excessive land consumption driven by individual housing. Lastly, a more economistic critique (yet one often integrated with ecological discourse) highlights the cost of sprawl, since the more spread out the city is, the longer its networks and infrastructures need to be (see Sewell, 2009 for a typical summary of those arguments in the Canadian case; and Jaglin, 2010 in respect of France). Fostered by these arguments, the war against sprawl mobilizes most urbanists and planners in both France and Canada, and most of them now ideologically favour dense and compact cities (planning history in the twentieth century shows that this was not always the case--see Touati, 2010). Critics of low-density residential spaces foster the diffusion of now well-established planning norms such as: densification of residential neighbourhoods (see Figure 5); infill on brownfield sites (see Figures 6 and 7); functional diversification (with the development of businesses and employment within a polycentric pattern); and concentration of urban development around train stations and public transport nodes. In any case, the anti-sprawl discourse questions the traditional suburban pastoral ideal of subdivisions and detached single-family dwellings. Living in a low-rise residential space is no longer perceived as being about getting closer to the countryside; rather, it is deemed unfriendly to the environment. Yet the arguments behind this reasoning are debatable. There is insufficient space here for a comprehensive discussion, but recent research has shown that many of the arguments presenting the dense city as more sustainable are at best questionable (see e.g. Echenique et al., 2012). In their assessment of energy consumption in transportation, Newman and Kenworthy (1999) only take daily trips into consideration, ignoring long-distance trips (for pleasure or business). Yet the latter increase with higher densities, due among other things to the need for respite from the noise and stress of dense city centres (Holden and Norland, 2005; Nessi, 2012). More research needs to be undertaken to evaluate the impact of such compensatory trips, but it is significant and severely reduces the presumed advantages of density regarding energy consumption in transportation. Transportation is not the only source of energy consumption in cities. Buildings are also a major one.
Yet while the energy consumption of a group of buildings tends to decrease with density, pre-existing detached houses are much more adaptable than high-rise buildings, and it is easier to reduce the energy consumption of the former (by installing heat pumps, wells or solar panels). Regarding land consumption, countries like France and Canada do not lack space for urbanization. In France, if every household were to live in a detached house on a 1,000 m2 lot, only about 10% of the nation's land mass would be urbanized (Charmes, 2013). For the preservation of natural land and agriculture, the issue is less about limiting sprawl, and more about organizing it and controlling so-called leapfrog development. We could go much further. By raising these points of discussion, our intent is certainly not to advocate sprawl, as, for example, Robert Bruegmann (2005) did. Nor do we fall in line with the conventional suburb-boosting arguments, emanating usually from American libertarian scholars and pundits (Cox, Kotkin and Richardson, for example) who identify classical suburbanization with the promises of the 'American dream'. Density has many other advantages beyond environmental ones, especially for urbanity, serendipity, creativity and so on. Our intent is rather to point to the normativity of the environmental discourse in favour of density. In a context of energy scarcity and global climate change, density appears to be a non-disputable issue, a 'given' beyond political debate. Yet it is not, and density should be re-politicized.

Density = sustainability: the new motto of growth coalitions?

These scientific uncertainties about the real environmental benefits of density show how important it is for research on the transformation of the suburbs to distance itself from the planning discourses on sprawl. This is all the more important at the metropolitan level, where the environmental benefits of densification and of the limitation of sprawl are most disputable. Even if the dense city proves to have some climate change virtues, these may only be apparent at the global level. Worse, the inhabitants of the densified areas will suffer from increased exposure to local pollution (Echenique et al., 2012). At the metropolitan scale, other than an uncertain improvement of the local environment, the stakes of urban redevelopment reside, for example, in: the mobilization of scarce land resources for new construction; improving the image of socially distressed neighbourhoods (an image that looms over the city itself); and the growth of the metropolis. Indeed, the density turn within environmental discourse is especially convenient for promoting the projects of growth coalitions or (more fundamentally) urban growth, and often serves to override local resistance. In many suburbs, residents expect their representatives to protect the environment, preserve quality of life and limit growth, rather than attract new jobs or homes. Of course, everything depends on the context, but the general trend is one of decline in positions favourable to growth. In the light of that, the density turn helps to counter the main arguments used by movements opposed to urban growth, especially in recent cases of new-build gentrification.
It helps to dismiss any desire to preserve the original programme of suburban utopias, such as low-density built-up areas, as a local expression of selfishness. And it helps to discredit and delegitimize local mobilizations by designating them as nimbyism (Wolsink, 2006). More broadly, the density turn helps to weaken local opposition to growth from its own environmental perspective. By referring to the future of the planet, one can justify the densification of suburbs in the name of sustainable development and denounce opposition to local intensification projects as self-serving. Such disqualification of local opposition to growth is questionable. In fact, behind the motivations associated with sustainable development, and more specifically behind the equation that density equals a sustainable city, often hides the old conflict between exchange value and use value (Logan and Molotch, 2007 [1987]). The defensive politics of suburbanites vis-à-vis continuous development is not just a defensive stance of private interests; 'such a politics also recognizes the constraints of the jumbled, anaesthetic environments [of post-suburban in-between cities] as the true playgrounds of a new and potentially productive politics of the urban region' (Keil and Young, 2011: 77). Within that politics, citizens have learned to suspect other special interests behind the positions put forward. Thus, the idea that growth is good for a local community is often obliterated by the notion that this growth serves the interests of developers (growth is greed, to put it bluntly). From this point of view, defending one's suburban 'castle' against higher density is perceived by observers to be a defence of 'the little guy' against the greed of developers and those financing their projects. This local perspective appears all the more legitimate since there is a growing interest in local policymaking processes, in conjunction with the rise of participatory forms of democracy. Last but not least, local opposition to growth is successful precisely because it is not reduced to expressions of local selfishness, and is able to make its views resonate with issues that go beyond the local (Keil and Ronneberger, 1994). For that matter, such opposition is often associated with the use of environmental arguments: new construction is not only bad for house values, it is also bad for the environment. Without necessarily adhering to those ideologies, many citizens leverage environmental concerns to serve their local interests. All this confirms that environmental arguments are particularly plastic. And two of the central terms of the debate below, sustainability and density, are themselves 'chaotic terms'. They can be used to support growth as much as to justify stopping or limiting growth. They can justify the fight against urban sprawl, while also justifying the purchase of a house with a large garden on the outskirts of a city. This plasticity, coupled with the ability of environmental arguments to generate consensus, creates legitimizations that are often used to defend projects or initiatives which are actually determined by other agendas. It is therefore necessary to analyse and critique these issues.
In short, the morphological changes of suburbs must be analysed as much as those faced by urban centres. These transformations bring with them political and social issues.

Morphology matters: deconstructing densification

If suburbs are ever less frequently called suburbs, but zwischenstadt, in-between cities or post-suburbs instead, it is primarily because many so-called suburbs do not look like dormitory towns with their agglomeration of detached houses. Yet those morphological changes are rarely considered in and of themselves, but instead as signs or symptoms of something else, like structural modifications of daily mobility, changes in suburban politics, the redefinition of metropolitan centrality and so on (Phelps and Wu, 2011b). Within this framework, a new multi-storey building in a low-density suburb manifests, among other things, the evolution of the position of that suburb within a metropolitan system. More generally, it manifests that some quintessential attributes of centrality are now to be found within suburbs. Yet density matters not only as a signifier or as a symbol, but as an important component of the production of the city. Through the various forms it may take, density reveals power relations. It also mediates between different interests, favouring some and disadvantaging others. As Figure 8 shows, a comparable number of houses or square metres can take many different forms. And those different forms have political and social meaning. Thus (and this is what we are debating here), it remains to be understood why a multi-storey building emerges in one particular suburb and not in another; why the densification process should take the form of a multi-storey building and not of infill semi-detached or terraced houses. These questions are discussed below by Anastasia Touati (2015, this issue), who contrasts hard and soft densification (see also Touati, 2013). She makes the illuminating statement that soft densification can be a compromise between exchange value and use value. In a low-density residential neighbourhood, the addition of individual houses through infill is a way of reconciling the economic interests emerging from urban growth with the interests of the inhabitants, since the residential image of the neighbourhood is preserved. Soft densification can indeed overcome resistance from inhabitants, while hard densification may trigger strong opposition. Yet soft densification may not be sufficient to sustain a strategy seeking to establish a suburb as a metropolitan sub-centre. In this sense, the type of densification is revealing of power relations, particularly between local and metropolitan interests. Other interesting questions addressed in this debate are which social groups densification policies aim to attract, and who actually comes to inhabit the new buildings. Indeed, as shown by Max Rousseau (this issue), densification may (depending upon local context) be part of a process of social downgrading as well as a process of upgrading. In an upscale residential suburb, the construction of a multi-storey building is perceived as a threat, both from the perspective of landscape conservation and from the perspective of social engineering.
The same building on a derelict modernist-era housing estate (the so-called grands ensembles) is, on the contrary, perceived as a way of rehabilitating the place and attracting middle-class households. In this debate, two out of the four essays--those of Will Poppe and Douglas Young and of Max Rousseau--focus on large modernist housing estates. As stated above, in France the image of the suburb (banlieue) is less associated with the single-family home (pavillon) and more with the grands ensembles, and thus with verticality and concrete rather than horizontality and greenery. This image of the suburbs contrasts sharply with the dominant one in North America, even if Canadian cities are renowned for high-rise neighbourhoods that are often found at the urban periphery. Particularly relevant to the debate presented here, the French grands ensembles are the object of a policy focusing on morphology. Between 1954 and 1973, millions of new homes were built in grands ensembles. For various reasons we cannot present in detail here (see Subra, 2010), the grands ensembles are today the places where France's urban crisis is concentrated, places like Clichy-sous-Bois and Montfermeil, municipalities in the Paris suburbs that were at the epicentre of the 2005 riots (Dikeç, 2007; see Figure 3). These incidents drive public intervention, and significant public funds have been mobilized. This intervention is largely morphological: it targets the towers and housing blocks that 'disfigure' the landscape, focusing on the image of 'concrete neighbourhoods', and it reconstructs a habitat of more traditional (low-rise) urban forms, which is supposed to appeal to the middle classes. It is also said that the grands ensembles should be 'de-densified'. Indeed, the French grands ensembles are often associated with hyper-density, and their repugnant image is regularly mobilized by those opposing densification (many French grands ensembles technically have the same density as a town core made up of terraced houses with small gardens, but what matters is the size of individual buildings). This focus on morphology may be related to the French state's poor capacity to trust the people, especially those from immigrant backgrounds (Bacqué and Sintomer, 2004; Baudin and Genestier, 2006). The comparison with Canada is telling. While Canada shares with France a built environment of peripheral high-rise housing estates that have become concentrations of poor, immigrant, non-white tenant populations, the (Anglo-)Canadian urban experience also includes communitarian recognition, through the institutions of a multicultural society which provide safeguards against some of the problems associated with the exclusionary tendencies embedded in the republican tradition. While no federal urban policy exists, the Canadian (local) state has institutionalized integrationist measures, some of them place-based, allowing for targeted interventions through a variety of mechanisms: schools, community welfare, culture, etc. [3] In the case of Toronto, the renewal of residential areas in the inner suburbs has focused in recent years on what has been termed tower renewal (see Figure 4).
Discussed at greater length in the essay below by Will Poppe and Douglas Young, this process has been seen as an attempt not only to apply architectural and energy retrofits to existing concrete towers, but also to re-engineer entire tower neighbourhoods. This has been part of a general place-based strategy targeting 13 so-called 'priority neighbourhoods', selected (for both socio-demographic and built-form reasons) as concentration points for social policy interventions. While the discourse around concrete towers and the communities inhabiting them resembles the negative practices found in France, there has also been considerable movement by urban specialists and residents alike to rehabilitate rather than demonize these high-rise suburbs. Many initiatives and projects specifically seek to empower residents of these neighbourhoods and overcome the negative images associated with their environment. In any case, morphological changes do not happen easily in the suburbs. They can be hindered by many factors. Some have already been mentioned, but a fact often overlooked outside the inner circle of urban designers and planners is that one of the biggest obstacles to change in suburban housing estates (be they high-rise buildings or detached houses) is their functional specialization. Suburbs were built as dormitory areas. As documented in Pierre Filion's essay below, functional specialization is one of the main causes of inertia in suburban landscapes, however dated that specialization may seem. What remains prevalent is the morphological structure of the suburban street and road network that became predominant in the second half of the twentieth century. Throughout that period, all the planning manuals recommended the environmental area model, to preserve low-density residential areas from the nuisance of through traffic. According to the creator of the concept, Colin Buchanan (1963), environmental areas should be designed so as to have no extraneous traffic--no drifting through of traffic without business in the area--and should be accessible from arterial or distributor roads at one intersection only (see Figure 9). These environmental areas are highly favourable to functional specialization: there is only one type of user, the resident. Those who access the space do so either as residents or as visitors of the residents. Of course, functional specialization is not inherent to environmental areas: mono-functional zoning is a planning decision that can be made relatively independently of the design of the road system. Yet a comparison of how residential suburbs structured along a gridded street system evolve, and how suburbs structured along pods develop, shows that the former are much more open to functional mixing than the latter. In fact, the dense and multifunctional city is facilitated where there are flows of pedestrians, cars, etc. (Mangin, 2004; Charmes, 2010b). The organization of the urban fabric into a collection of enclaves from which all through traffic is excluded prevents this activation of land by circulatory flows. Since the road and street networks, as well as underlying infrastructures such as water and sewerage, exert a very strong inertia on cities, circulation flows in many suburbs will remain separated from urban life for generations to come.
And the transformation of the suburban landscape must happen within that framework, which means that densification often happens and will happen without functional diversity or, more accurately, with limited functional diversity. The low-rise office park perhaps changes into a high-rise office park, not into a dense multifunctional urban centre like those of Lyon, Toronto or Paris. This is one of the major reasons why the densification processes occurring in many suburbs produce a landscape that is very different from that found in older urban centres. This is why many suburbs are turning into post-suburbs and not into urban centres. Of course, suburbs include older towns and cities which can become both denser and more functionally diverse, but those towns and cities do not constitute the major part of suburban landscapes.

Densification regimes

Factors hindering or promoting densification are present in almost all circumstances. However, they exist in varying degrees. Variations occur from one country to another depending on the particular urban history, attitudes towards nature and density, and systems of local government. For example, the idea that density or, more broadly, that the dense city is collectively desirable may historically have been a more accepted notion in France than in Canada (yet it is also obvious that there are convergences here: Canadians have learned to accept the diktats of a climate-change-driven push towards greater compactness; and French suburbanites have learned to escape the grands ensembles, gravitating towards the lotissements of pavillons on the outskirts of not only large conurbations but also many smaller towns and villages, generating a pervasive pattern of leapfrog development similar to that found in many North American cities). Variations are also observed within a single city. This is illustrated well in the essay by Max Rousseau (this issue): the capacity of households living in low-income areas to protect their quality of life is not the same as that of affluent households (primarily because low-income households are less able to mobilize legal or other means to challenge development projects). In addition, as stated above, municipalities with poor populations are often motivated to transform the urban landscape, whereas richer municipalities do not have this same desire. These are only a few cases from a broad variety. The diversity of reactions to the dynamics of change in suburban morphology reflects the diversification of governmental regimes in suburbs and post-suburbs, with each being more or less favourable to certain coalitions and particular morphological changes (Phelps et al., 2010). These suburban regimes are formed among actors with varying levels of status and intervention capacity. We propose an overview of that diversity in Tables 1 and 2, which should help readers make the fullest sense of the four essays comprising this debate (especially those based on case studies). The tables are based on the knowledge of the authors, acquired both through fieldwork and from the literature. Due to space constraints, our accompanying commentary is brief.
The suburban political game is played out through the central modalities of suburban governance on the terrain of local communities, where the state (at various scales), capital accumulation (land development) and authoritarian governmentalities (articulated through politicized class interests) interact (Ekers et al., 2012). In France, the public actor predominates (see Table 1). This corresponds with the prevailing image of France abroad. Yet contrary to that image, the state has lost many of its prerogatives. Remnants of its former power may be glimpsed in places like La Défense or in the new towns around Paris, but today the state remains largely in the background (working through national regulations or project funding). The image of a highly centralized country that remains prevalent around the world does not reflect the highly fragmented nature of France's public actors. Municipalities play much more of a key role in suburban regimes than the national state does. Moreover, municipal authorities are extremely fragmented, especially in periurban (or exurban) areas. Inter-municipal cooperation has developed since the turn of the millennium but, as Max Rousseau's essay (this issue) demonstrates, power remains largely in the hands of the municipalities themselves. This results in a strong emphasis on local (sometimes very local) perspectives, focused on the defence of quality of life and residential interests more generally (Charmes, 2011). Everywhere, residents mobilize to preserve the landscape and maintain certain social qualities of their local communities. Such resistance has become very important, both in North America and in Europe, as evidenced by the literature on nimbyism and no-growth coalitions (Subra, 2010). This resistance represents an extension, from the home to the local environment, of the domain over which people consider that they have property rights. Householders do not only buy a home, but also the local environment that comes with it. Through that process, the relationship to the neighbourhood becomes more and more similar to one of co-ownership, a process we have described as 'clubbisation' (Charmes, 2009). This process is one of the main driving forces powering not only residents' movements, but also the development of private residential neighbourhoods and gated communities. And in the case of France this process has a significant effect on municipalities. The fragmentation of the French municipal fabric gives residents' movements a singular ability to influence planning regulations. A metropolitan region like Lyon is composed of 514 municipalities with a total population of 2.1 million, within which are about 380 periurban municipalities each with an average population of 1,560 inhabitants. In such municipalities, extremely local issues prevail in residents' concerns. At the same time, the prerogatives of those municipalities are far-reaching, and include planning (which allows suburbanites to control types of construction, lot size, etc.). The resultant conservationist agenda acts as a significant barrier to redevelopment projects.
This type of planning is often exclusionary too, because it usually limits (or even halts) urbanization, which not only prevents population increase but also raises prices and thus restricts access to those who can afford the cost of entry (see Figure 10). In Canada, suburban municipalities also play an important role, but they are much larger (see Table 2). And they (e.g. Surrey, BC; Mississauga, Ontario; Markham, Ontario; Laval, Quebec; Brossard, Quebec) challenge, rival and sometimes supersede the political centrality of the core city as they attempt to redefine their (sub)urban future in the context of more general calls for more sustainable (and now increasingly resilient) forms of development. In any case, suburban politics is not only about the residential qualities of places. The suburbs have historically often lacked employment. Gradually, however, the work commute has been reconfigured: the suburb-to-centre transportation flow has lessened as more people commute from suburb to suburb, or even from centre to suburb, following employment opportunities in Fordist factories and post-Fordist manufacturing, logistics and office locations, as well as commercial and entertainment enterprises (see Figures 7, 11 and 12). In parallel, during the decades following the second world war, governments sought to organize suburban growth by creating new settlements and satellite cities. While this policy had relatively limited impact in the US, the creation of the Municipality of Metropolitan Toronto in the 1950s was specifically linked to the siting of multi-density housing developments away from the centre (Young and Keil, 2014). In France, an ambitious policy of new town construction was launched in the late 1960s, and the five new towns created around Paris at that time now constitute major sub-centres (see Figure 12). Likewise, edge cities have developed around highway interchanges serving shopping malls in the US and Canada (Garreau, 1991). Large cities also include neighbouring smaller cities within their orbit, with all their shops, jobs, facilities and services. Finally, apart from edge cities, one notices a dissemination of employment and commerce to the outskirts, to what have been named 'edgeless cities' (Lang, 2003). Within that context, in both France and Canada (albeit in different guises), suburban regimes that were historically formed in contradistinction to the central city have recently been freeing themselves from the inside-outside duality traditionally characterizing their political frame. The increasing recognition of regional and 'in-between' issues, particularly in policy sectors such as transportation, welfare, ecology and housing, has led to an increase in rhetoric (if not action) regarding cooperation between suburbs and the central city on the one hand, and coordination in a competitive regional environment among suburban municipalities in decentralizing regions on the other (Lehrer). These logics articulate themselves in variable configurations depending on context.
In France, for example, while inner- and middle-ring suburbs clearly have an urban identity, and are often integrated into metropolitan communities (communautés urbaines) in which the core municipality cooperates with its neighbouring municipalities, periurban areas retain a more rural identity and tend to adopt a defensive stance against the city. Again, space constraints prevent us from exploring in detail all the regimes presented in Tables 1 and 2. But the examples above, and the cases discussed in the essays, show the importance of considering densification and, more broadly, the morphological transformations of suburbs in their respective contexts--considering the different scales that interact. Suburbs are also extremely diverse, to a degree that makes it difficult to talk about suburbs or post-suburbs in general. From that perspective, comparison between cities and between countries is very helpful. It helps to disentangle the contingent from the structural. It also helps to identify the various factors determining a suburban regime.

The politics of post-suburban densification

There are differences, but also similarities, in the formulaic morphology of post-second world war metropolitan landscapes, and in the crisis besetting those landscapes. The archetypes of the concrete residential tower and the single-family pavillon are just the external markers of that dialectic of difference and convergence. Beyond that dialectic, in the debate we present here, we can note certain convergences. Both French and Canadian metropolitan suburbs and periurban areas are caught up in the frantic pace of modernization, although collective actors in both settings continue to mobilize around the conservation of present scales, forms and communities. On both sides of the Atlantic, we detect strong conflicts between local and metropolitan perspectives, private and public interests, community and corporate actors. And we have noted the multifaceted nature of the suburban theatre of collective action, which cannot be dismissed as mere nimbyism. The suburban political realm is not static, nor is it dormant. There are strong and growing mobilizations around issues of everyday suburbanism, but also around long-term planning and policy in regional matters. There are expectations in both Canada and France that there will be (decisive) state action when needed. While Canadian jurisdiction lies mostly with provincial governments (of which local communities are mere creatures), in France the post-1981 decentralization has led to a strong dialectic of central and local political action. These essays speak of multiple rationalized projects (the social, the environmental, the economic), around which actors play a political game in which they are conscious and deliberate participants who do not just react, but also set new boundaries and rules. In this sense, all the essays pay tribute to Logan and Molotch's (2007 [1987]) initial formulation, which sees suburbanization as a process of complex interrelationships of individual decisions and firm structures, located in a political universe of use- and exchange-value decisions by individual and institutional actors. They speak of multiple scales, recognize the metropolitan significance of change in place (most notably in the case of Rousseau, this issue) and take seriously the actors that are involved.
Suburbs are not just treated as objects of external planning and policy, but also as places subject to endogenous, maybe even autonomous, agency. All the essays presented here discuss different suburban forms (tower blocks, houses, pavillons, grands ensembles) and demonstrate an ability to discern the differences between those morphologies, the consequences they have for the political process and their engagement with the modalities of suburban governance. And finally, they all elaborate upon both the ideological and material aspects of the relationships between urban form and social structures, an important part of the new debate on suburbs and suburbanization to which we hope to contribute here. The French essays (more than the Canadian essays) display a comparative perspective that involves an excellent recognition of the current state of debate on post-suburban politics. North American-European differences are critically acknowledged and productive solutions are found. The Canadian essays present two very different views of the changeability of suburban form (tower renewal versus stasis of the suburban morphology). They also engage different scales of suburbanization, one local and place-based, the other regional and trans-jurisdictional. The concepts we suggest readers of this debate might contemplate may be summarized in the terms 'soft' and 'hard', which are most forcefully introduced by Anastasia Touati, who discusses densification in rather different suburban environments. Poppe and Young demonstrate that Toronto's peripheral concrete tower neighbourhoods, for which reformers seek renewal, operate on the plane of both the hard material retrofit and the soft social engineering of inventing a new contextual fit for communities in a changing post-suburban landscape. Rousseau discusses the hard and soft edges of the urban region depending on alternative socio-economic and political structures. And finally Filion's essay questions whether the suburban morphologies of Canadian urban regions are hard or soft, and likely to resist pressure for change. We can conclude that post-suburbia has well and truly arrived, and we may propose that we need to accept that post-suburbia is now ubiquitous. No new frontiers are part of this particular set of case studies; their view is directed towards the inside. In all of this, there is some clear transatlantic convergence but also plenty of diversity, both internally and between the French and Canadian cases. The debate also highlights the fact that comparative studies of this nature are now more important than ever in order to create productive conversations about what needs to be done. The choice of Canada and France for such a comparison was productive, as it allowed the authors of the individual essays (as well as us as editors and editorializers of the case studies) to nod politely to the 'classical' US case, but then to break free of it, liberating innovative and new modes of thinking that engage with the shared realities and divergent idiosyncrasies of both cases. The cases and the comparison leave us with the insight that all measures are inevitably socio-ecological and socio-economic, as well as politically negotiated. Despite clear path dependencies (in morphology, institutions, ideology and political process), political choices and options remain available in our post-suburban futures.
Figure 1: An artist's view of an archetypical outer-suburban development in France (Jean-Pierre Attal, intra-muros 12, 74x100 cm, 2008, www.jeanpierreattal.com)
Figure 2: In-between city Toronto: York University campus at the northwestern edge of Toronto looking south (photo by Roger Keil)
Figure 3: A barre in Montfermeil: the building is a condominium and a process of acquisition by individual residents is underway (photo by Eric Charmes)
Figure 4: Toronto tower neighbourhood: Thorncliffe Park (photo by Roger Keil)
Figure 5: New urbanist development in Markham, Ontario (photo by Roger Keil)
Figure 6: Infill at former industrial site in eastern Toronto inner suburb of Scarborough (photo by Roger Keil)
Figure 7: Redevelopment of former industrial land in Plaine Saint-Denis, close to Paris city limits (photo by Eric Charmes)
Figure 8: Morphological modulations of density. In all three cases, the density of construction is the same (source: Institut d'aménagement et d'urbanisme de l'Île-de-France, 2005; Appréhender la densité, Note Rapide, 383)
Figure 9: The 'environmental area' principle, as conceived by Colin Buchanan (1963: 69)
Figure 10: Driving out of a small periurban commune (a territory governed by a municipality) in the first periurban ring of Paris (photo by Eric Charmes)
Figure 11: Industrial-residential mix: industrial plant in eastern Toronto suburb with encroaching high-rise and single-family-home residential development (photo by Roger Keil)
Figure 12: The Carré Sénart, south of Paris, in Île-de-France: this 'shopping parc' was designed by the planners of Sénart new town to be the centre of the whole development (photo by Eric Charmes)

Table 1: Diversity of post-suburban regimes in France
(each entry gives: population type / dominant politics / main actors, with municipality sizes in inhabitants / morphological change)

Inner and middle-ring suburbs (the farther from the core city, the less intense the changes):
- Upper class / exclusionary zoning; occasional local redevelopment projects / municipality (10,000s up to 100,000) / stability with sporadic changes
- Diverse with a domination of middle classes / redevelopment projects of various sizes / municipality (10,000s up to 100,000); metropolitan community (including core city)*; developers / densification (mostly soft); brownfield redevelopment; new offices; sporadic transformations
- Diverse with a domination of lower and lower middle classes / redevelopment through gentrification (including new-build gentrification) / municipality (10,000s up to 100,000); metropolitan community (including core city)*; developers / densification (soft to hard); brownfield redevelopment; new commercial infrastructure; new offices
- Poor (with many immigrants) / urban renewal through partial demolition / municipality (10,000s); metropolitan community (including core city)*; national State; large developers / from modernist to neotraditional

Edge cities:
- New towns (mostly around Paris and Lyon): diverse, without upper middle or upper classes / extensive growth from the 1960s; redevelopment from the 1990s / national State; several municipalities (100,000s); large developers / extensive growth ending or slowing down; renewal in some neighbourhoods
- Old urban centres: preserving their influence over their surroundings / municipality (10,000s); developers / renewal in some neighbourhoods; extension of the urbanized area through subdivisions; business parks; shopping strips

Periurbs:
- Residential municipalities (first periurban rings): middle to upper middle classes with a marked homogeneity at the municipal level / no growth; exclusionary zoning; clubbisation / municipality (typically 1,500) / stability with occasional redevelopment of the old village core
- Towns (about 10% of the periurban municipalities): diverse (within middle classes) / extensive growth and/or redevelopment of the town centre (the closer to the core city, the less extensive) / municipality (typically between 3,000 and 10,000); developers / subdivisions; new business parks; new suburban shopping strips; densification of the town centre

Table 2: Diversity of post-suburban regimes in Canada
(each entry gives: population type / dominant politics / main actors / morphological change)

Inner and middle-ring suburbs (in-between cities; third city):
- Elite uses in educational institutions like universities; little residential use by upper classes / institutional redevelopment projects; capital expenditure into prestige infrastructures; highways; in Toronto specifically: conservative upper-class appeal through populist politics to lower and middle classes that feel excluded from inner-city political power perceived as elitist; end to the 'war on the car' / universities; state institutions; hospitals / rapid and large-scale change
- Middle class households; continuing suburbanization through single-family homes, townhouses, high-rise condominiums; some gentrification along emerging transit lines (subway extension in Toronto) / some infill; some larger-scale transit / developers; private transit agencies; park and sports organizations / densification (mostly soft); brownfield redevelopment; new offices and warehouses; conversions of industrial spaces into places of worship; conversions of places of worship into condominiums; sporadic transformations
- Lower to lower middle class / redevelopment of tower neighbourhoods and strip malls through state action; hesitant gentrification effects / local state; planning and architecture professionals; public housing agencies (major landlords in tower neighbourhoods); school boards (agents aiming for ...) / retrofits (environmental, aesthetic and structural); some new commercial infrastructure, new offices; community centres in priority neighbourhoods

Notes
1. We are extremely grateful to Imelda Nurwisah for translating the original draft of this introduction from French into English. The Canadian part of the research presented here was supported by the Social Sciences and Humanities Research Council of Canada through funding from the Major Collaborative Research Initiative 'Global Suburbanisms: Governance, Land and Infrastructure in the 21st Century' (2010-17). The French part of the research was supported by the French National Research Agency (ANR) within the framework of the 'Sustainable City' research programme.
2. See http://www.yorku.ca/suburbs.
3. Canada and France have an unfortunate shared experience in terms of their marginalized suburban populations.
61,357
[ "181005" ]
[ "145345", "310230" ]
01744252
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01744252/file/RR-9164.pdf
Teddy Furon

The illusion of group testing

Keywords: Group testing, hypothesis testing, identification, information theory

This report challenges the assumptions usually made in non-adaptive group testing. The test is usually modelled as a probabilistic mechanism prone to false positive and/or false negative errors. However, the models are still too optimistic because the performances of these non-ideal tests are assumed to be independent of the size of the groups. Without this condition, the report shows that the promises of group testing (a number of tests and a decoding complexity scaling as c log N) do not hold.

Introduction

Group testing has recently received a surge of research interest, mainly due to its connection to binary compressed sensing [START_REF] Lam | Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms[END_REF][START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF][START_REF] Scarlett | Converse bounds for noisy group testing with arbitrary measurement matrices[END_REF][START_REF] Scarlett | Phase transitions in group testing[END_REF] or to traitor tracing [START_REF] Meerwald | Group testing meets traitor tracing[END_REF][START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF]. The usual setup is often described in terms of clinical screening, as it was the first application of group testing [START_REF] Dorfman | The detection of defective members of large populations[END_REF]. Among a population of N individuals, there are c infected people, with c much smaller than N. Screening the whole population by individual blood tests is too costly. However, it is possible to mix blood samples from several persons and to perform a single test. Ideally, the test is negative if none of these persons is infected, and positive if at least one of them is infected. The applications of group testing nowadays include DNA screening [START_REF] Ngo | A survey on combinatorial group testing algorithms with applications to DNA library screening[END_REF], signal processing [START_REF] Gilbert | Recovering simple signals[END_REF] and machine learning [START_REF] Zhou | Parallel feature selection inspired by group testing[END_REF]. Indeed, group testing may be a solution to any 'needles in a haystack' problem, i.e. aiming at identifying, among a large collection, the few 'items' sharing a particular property detectable by a test, provided that this test can be performed on groups of several items. In this paper, we use the terminology of items and defective items.

The dominant strategy nowadays is called non-adaptive group testing [START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF][START_REF] Lam | Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms[END_REF]. A first stage pools items into groups and performs the tests. A second stage, the so-called decoding, analyses the results of these tests to identify the defective items. Tests and decoding are sequential. If the number of tests M is sufficiently large, the decoding stage has enough information to identify the defective items. In a nutshell, the groups are overlapping in the sense that one item is involved in several tests. Decoding amounts to finding the smallest subset of items which would trigger the observed positive tests.
The promises of group testing are extremely appealing. First, the theoretical number of tests M asymptotically scales as O(c log N) as N goes to infinity [START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF][START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF][START_REF] Scarlett | Converse bounds for noisy group testing with arbitrary measurement matrices[END_REF]. This result holds even if c increases with N, but at a lower rate [START_REF] Scarlett | Converse bounds for noisy group testing with arbitrary measurement matrices[END_REF][START_REF] Scarlett | Phase transitions in group testing[END_REF]. Second, recent papers propose practical schemes not only achieving this efficiency (or almost, i.e. O(c log c log N)) but also doing so within a decoding complexity of O(c log N) (or almost, i.e. O(c log c log N)) [START_REF] Cai | Grotesque: Noisy group testing (quick and efficient)[END_REF][START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF].

This paper takes a completely opposite point of view. The number c of defective items is fixed, and we do not propose a more efficient design. On the contrary, we show that these promises hold only for some specific probabilistic models. These models are well known in the literature of group testing. They do take into account some imperfection in the test process; however, they are somewhat optimistic. As group testing becomes popular, people applying this technique to their 'needles in a haystack' problems might be disappointed. The promises of group testing (a number of tests in O(c log N) together with a computational complexity of O(c log N)) fade away for applications not compliant with these models. The goal of this paper is to investigate what is specific in these models and to better understand the conditions necessary for achieving the promises of group testing.

This paper has the following structure. Section 2 describes the recent approaches achieving both the minimum asymptotic number of tests and the minimal decoding complexity. The usual models are introduced together with an information-theoretic justification that these approaches are sound. Section 3 introduces some more general models and shows that the total number of tests no longer scales as O(c log N) in most cases.

Previous works

A typical paper about non-adaptive group testing proposes a scheme composed of a pooling design and a decoding algorithm. The pooling is the way M groups are composed from a collection of N items. The decoding receives the M binary test results (positive or negative) to infer the defective items. Under a definition of a successful decoding and some probabilistic models, the paper then shows how the necessary number of tests asymptotically scales. For instance, if the decoding aims at identifying all the defective items, the authors show how M should scale as N → ∞ to make the probability of success converge to one. The best asymptotic scaling has been proven to be O(c log N) in theoretical analyses [START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF][START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF].

Notations and models

The assumptions of the proof of a typical group testing paper concern the distribution of the defective items in the collection and the model of the test.
Denote by x a binary vector of dimension N encoding which items are defective: $x_i = 1$ if the i-th item is defective, 0 otherwise. X is the random variable associated with this indicator vector. We assume that there is a fixed number c of defectives, s.t. $P[X = x] = \binom{N}{c}^{-1}$ if $|x| = c$, and 0 otherwise.

As for the test, the models define the probabilistic behavior of its output. Suppose a group $G_i$ of n items, let $0 \le K_i \le \min(n, c)$ be the random variable encoding the number of defectives in this group, and denote by $Z_i$ the binary r.v. s.t. $Z_i = 1$ if $K_i > 0$, 0 otherwise, and by $Y_i \in \{0,1\}$ the r.v. modeling the output of the test performed on this group. There are four well-known models:
1. Noiseless test: $Y_i = Z_i$. The test is positive if and only if there is at least one defective in the group.
2. Noisy test: $Y_i = Z_i \oplus N_i$, with the $N_i$ independent and identically distributed as Bernoulli $B(\epsilon)$ and $\oplus$ the XOR operator.
3. Dilution: $Y_i = \vee_{j\in G_i} [X_j \wedge W_{i,j}]$, where $\wedge$ and $\vee$ are the AND and OR operators and $W_{i,j}$ is a binary r.v. modeling the detectability of the j-th item in the i-th group. These random variables are independent (both along i and j) and identically distributed: $W_{i,j} \sim B(1-\upsilon)$. For a given defective and test, the probability of being diluted (i.e. not detectable) is υ.
4. Threshold: $Y_i = 0$ if $K_i \le \kappa_L$ and $Y_i = 1$ if $K_i \ge \kappa_U$. There are plenty of variants describing what happens for $\kappa_L < K_i < \kappa_U$ [START_REF] Cheraghchi | Improved constructions for non-adaptive threshold group testing[END_REF].
Note that some models can be 'concatenated': we can witness a dilution phenomenon of parameter υ followed by a noise channel of parameter ε.

Another way to model a test is through the c + 1 parameters $(\theta_0, \dots, \theta_c)$ defined as the following probabilities:
$$\theta_k := P[Y_i = 1 \mid K_i = k]. \qquad (1)$$
Parameter $\theta_0$ is thus the probability of a false positive, whereas $1 - \theta_k$ for $0 < k \le c$ are the probabilities of a false negative when k defectives pertain to the test group. For the models mentioned above, we have the equivalent formulation:
1. Noiseless test: $\theta_0 = 0$ and $\theta_k = 1$ for $0 < k \le c$.
2. Noisy test: $\theta_0 = \epsilon$ and $\theta_k = 1 - \epsilon$ for $0 < k \le c$.
3. Dilution: $\theta_k = 1 - \upsilon^k$, with the convention that $x^0 = 1$, $\forall x \in \mathbb{R}_+$.
4. Threshold: $\theta_k = 0$ if $0 \le k \le \kappa_L$, $\theta_k = 1$ if $\kappa_U \le k \le c$.
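To make the $(\theta_0, \dots, \theta_c)$ parametrization concrete, here is a minimal Python sketch of the four models. This is our illustration, not code from the report, and the behavior chosen for the threshold gap $\kappa_L < k < \kappa_U$ (a linear ramp) is only one of the possible variants mentioned above.

```python
import numpy as np

def theta(model, c, eps=0.1, upsilon=0.2, kappa_L=1, kappa_U=3):
    """Return (theta_0, ..., theta_c), with theta_k = P[Y=1 | k defectives in the group]."""
    k = np.arange(c + 1)
    if model == "noiseless":
        return (k > 0).astype(float)
    if model == "noisy":                 # BSC(eps) applied to the noiseless output
        return np.where(k > 0, 1.0 - eps, eps)
    if model == "dilution":              # each defective detected w.p. 1 - upsilon
        return 1.0 - upsilon ** k        # upsilon**0 = 1, so theta_0 = 0
    if model == "threshold":             # one variant: linear ramp inside the gap
        return np.clip((k - kappa_L) / (kappa_U - kappa_L), 0.0, 1.0)
    raise ValueError(model)

print(theta("dilution", c=4, upsilon=0.2))   # [0., 0.8, 0.96, 0.992, 0.9984]
```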
Probabilistic group testing

The next step is to create the binary design matrix $A \in \{0,1\}^{M\times N}$. This matrix indicates which items belong to which groups: $A_{i,j} = 1$ if item j is involved in test i, and 0 if not. There are constructions which are deterministic (up to a permutation over the N items), such as those relying on disjunct matrices [START_REF] Indyk | Efficiently Decodable Non-adaptive Group Testing[END_REF][START_REF] Ngo | A survey on combinatorial group testing algorithms with applications to DNA library screening[END_REF]. Another popular method is the probabilistic construction where $A_{i,j}$ is set to one depending on a coin flip: $P[A_{i,j} = 1] = p$. These coin flips are independent w.r.t. indices i (groups) and j (items). The sequence $(A_{1,j}, \dots, A_{M,j})$ is often called the codeword of item j. We shall focus on this last construction.

Theoretical studies [START_REF] Scarlett | Phase transitions in group testing[END_REF][START_REF] Scarlett | Converse bounds for noisy group testing with arbitrary measurement matrices[END_REF][START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF] show that there is a phase transition: it is possible to identify all defectives with an asymptotically vanishing probability of error (as N → ∞) if
$$M \ge \max_{\ell\in\{1,\dots,c\}} \frac{\log_2 \binom{N}{\ell}}{I(Y; A_{G_{\mathrm{dif}}} \mid A_{G_{\mathrm{eq}}})}\,(1+\eta); \qquad (2)$$
whereas the error probability converges to one for any decoding scheme if
$$M \le \max_{\ell\in\{1,\dots,c\}} \frac{\log_2 \binom{N}{\ell}}{I(Y; A_{G_{\mathrm{dif}}} \mid A_{G_{\mathrm{eq}}})}\,(1-\eta). \qquad (3)$$
The sets of items $(G_{\mathrm{dif}}, G_{\mathrm{eq}})$ compose a partition of the set of defective items such that $|G_{\mathrm{dif}}| = \ell$ and $|G_{\mathrm{eq}}| = c-\ell$, and $A_{G_{\mathrm{dif}}}$ (resp. $A_{G_{\mathrm{eq}}}$) denotes the codewords of the items in $G_{\mathrm{dif}}$ (resp. $G_{\mathrm{eq}}$). These theoretical results are extremely powerful since they still hold when c is not fixed but slowly increasing with N. They are, however, only weakly related to practical decoding schemes. For instance, equation (2) comes from a genie-aided setup: a genie reveals to the decoder some defective items $G_{\mathrm{eq}}$, and the number of tests needed to identify the remaining ones, i.e. $G_{\mathrm{dif}}$, is evaluated. This is done for different sizes of $G_{\mathrm{eq}}$, from 0 to c-1. A decoder without any genie needs more than the supremum of all these quantities. In the sequel, we consider simpler expressions of the total number of tests, related to practical (or almost practical) decoders.

Joint decoder

The joint decoder computes a score per tuple of c items. It spots the tuple of defective items (identifying all of them) with probability at least $1-\alpha_J$, and it incorrectly points to a tuple of non-defective items with probability $\beta_J$. Denote $\gamma_J := \log(\alpha_J)/\log\big(\beta_J/\binom{N}{c}\big)$. T. Laarhoven [START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF] showed that a sufficient and necessary number of tests is at least:
$$M_J = \frac{c\log_2 N}{\max_{p\in(0,1)} I_J(p)}\,\big(1 + O(\sqrt{\gamma_J})\big), \qquad (4)$$
where $I_J(p) = I(Y_i; (A_{i,j_1},\dots,A_{i,j_c}) \mid p)$ is the mutual information between the output of the test and the codeword symbols of the defectives $\{j_1,\dots,j_c\}$. In other words, this corresponds to the case where the genie reveals no information: $G_{\mathrm{eq}} = \emptyset$ [START_REF] Scarlett | Phase transitions in group testing[END_REF]. Since $\lim_{N\to\infty}\gamma_J = 0$ for fixed $(\alpha_J, \beta_J)$, this allows stating that $M_J$ scales as $M_J \approx c\log_2 N / I_J(p_J)$ with $p_J = \arg\max_{p\in(0,1)} I_J(p)$. For the equivalent model $(\theta_0,\dots,\theta_c)$, this amounts to finding the maximizer of the following function:
$$I_J(p) := h(P(p)) - \sum_{k=0}^{c} \pi_k\,h(\theta_k), \qquad (5)$$
with
$$P(p) := P[Y_i = 1 \mid p] = \sum_{k=0}^{c} \pi_k\,\theta_k, \qquad (6)$$
$$\pi_k := \binom{c}{k} p^k (1-p)^{c-k}, \quad \forall\, 0 \le k \le c, \qquad (7)$$
and h(x) the entropy in bits of a binary r.v. distributed as B(1, x). Laarhoven gives the expressions of $p_J$ for large c and for the usual models [START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF]. The maximizer and the maximum are functions of c and of the parameters of the test model (for example, ε or υ for the noisy or dilution model). The drawback is that the decoding is exhaustive: it scans the $\binom{N}{c}$ possible subsets of size c from a set of N items. Therefore its complexity is in $O(N^c)$. This is called a joint decoder as it jointly considers a subset of c items. The joint decoder is mainly of theoretical interest since its complexity is hardly tractable. Some schemes propose approximations of a joint decoder with manageable complexity, resorting to Markov Chain Monte Carlo [START_REF] Knill | Interpretation of pooling experiments using the Markov chain Monte Carlo method[END_REF], Belief Propagation [START_REF] Sejdinovic | Note on noisy group testing: asymptotic bounds and belief propagation reconstruction[END_REF] or iterative joint decoders [START_REF] Meerwald | Group testing meets traitor tracing[END_REF].
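Equations (5)-(7) are straightforward to evaluate numerically. The following sketch (ours; the grid search is a naive stand-in for the maximization over p) computes $I_J(p)$ and its maximizer:

```python
import numpy as np
from scipy.stats import binom

def h(x):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def I_joint(p, theta):
    """I_J(p) = h(P(p)) - sum_k pi_k h(theta_k), eq. (5)-(7)."""
    c = len(theta) - 1
    pi = binom.pmf(np.arange(c + 1), c, p)   # pi_k, eq. (7)
    return h(pi @ theta) - pi @ h(theta)     # P(p) = pi @ theta, eq. (6)

theta = np.array([0.05, 0.95, 0.95, 0.95])   # noisy model, c = 3, eps = 0.05
grid = np.linspace(0.01, 0.99, 981)
vals = [I_joint(p, theta) for p in grid]
p_J = grid[int(np.argmax(vals))]
print(p_J, max(vals))                        # M_J ~ c log2(N) / I_J(p_J)
```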
Single decoder

The single decoder analyses the likelihood that a single item is defective. It correctly identifies a defective item with probability $1-\alpha_S$ while incorrectly suspecting a non-defective item with probability less than $\beta_S$. Denote $\gamma_S = \log(\beta_S)/\log(\alpha_S/N)$. Laarhoven [START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF] showed that a sufficient and necessary number of tests is at least:
$$M_S = \frac{\log_2 N}{\max_{p\in(0,1)} I_S(p)}\,\big(1 + O(\gamma_S)\big) \qquad (8)$$
where $I_S(p) = I(Y_i; A_{i,j_1} \mid p)$ is the mutual information between the output of the test and the symbol of the codeword of one defective, say $j_1$. Again, since $\lim_{N\to\infty}\gamma_S = 0$ for fixed $(\alpha_S, \beta_S)$, this allows stating that $M_S$ scales as $M_S \approx \log_2 N / I_S(p_S)$ with $p_S = \arg\max_{p\in(0,1)} I_S(p)$. For the equivalent model $(\theta_0,\dots,\theta_c)$, this amounts to finding the maximizer of the following function:
$$I_S(p) := h(P(p)) - p\,h(P_1(p)) - (1-p)\,h(P_0(p)) \qquad (9)$$
with
$$P_1(p) := P[Y_i = 1 \mid A_{i,j_1} = 1, p] = \sum_{k=1}^{c} \binom{c-1}{k-1} p^{k-1}(1-p)^{c-k}\,\theta_k \qquad (10)$$
$$P_0(p) := P[Y_i = 1 \mid A_{i,j_1} = 0, p] = \sum_{k=0}^{c-1} \binom{c-1}{k} p^{k}(1-p)^{c-1-k}\,\theta_k \qquad (11)$$
Laarhoven gives the expressions of $p_S$ for large c and for the usual models [START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF]. It always holds that $I_J(p) \ge c\,I_S(p)$ for any $p \in [0,1]$. This yields an $M_S$ inherently bigger than $M_J$ [START_REF] Laarhoven | Asymptotics of fingerprinting and group testing: Tight bounds from channel capacities[END_REF]: both total numbers of tests scale as c log N, but with a bigger multiplicative constant for $M_S$. The single decoder computes a score for each item; therefore its complexity is linear, in O(N).
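A matching sketch (again ours) for the single decoder of equations (9)-(11); combined with the previous block, the rule $I_J(p) \ge c\,I_S(p)$ can be checked numerically:

```python
import numpy as np
from scipy.stats import binom

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def I_single(p, theta):
    """I_S(p), eq. (9): mutual information seen by the per-item (single) decoder."""
    theta = np.asarray(theta, dtype=float)
    c = len(theta) - 1
    w = binom.pmf(np.arange(c), c - 1, p)    # binom(c-1, k) p^k (1-p)^(c-1-k)
    P1 = w @ theta[1:]                       # eq. (10), index shifted by one
    P0 = w @ theta[:-1]                      # eq. (11)
    P = (1 - p) * P0 + p * P1                # equals P(p) of eq. (6)
    return h(P) - p * h(P1) - (1 - p) * h(P0)

print(I_single(0.3, [0.05, 0.95, 0.95, 0.95]))
```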
Divide and Conquer

Papers [START_REF] Cai | Grotesque: Noisy group testing (quick and efficient)[END_REF][START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF] have recently proposed schemes meeting the promises of group testing as listed in the introduction: optimal scaling both in the total number of tests and in the decoding complexity. Both of them deploy a 'Divide and Conquer' approach. Identifying c defectives among a collection of N items is too complex. Their strategy splits this problem into S simpler problems. The collection is randomly split into S subsets, S being chosen such that any subset likely contains at most one defective. Indeed, their proof selects S big enough s.t., with high probability, each defective belongs to at least one subset where it is the only defective. Assume that it is possible to detect whether a subset has no, one or more defectives. Then, a group testing approach is applied on each subset containing a single defective (a so-called 'singleton' subset in [START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF]): the decoding identifies this defective thanks to the results of tests performed on groups composed of items of that subset. It turns out that identifying defectives in a collection is much simpler when knowing there is only one. In a non-adaptive framework, all group tests are performed in the first stage, but the decoding is only run on subsets deemed 'singletons'. We detail here our own view of this 'Divide and Conquer' approach. Papers [START_REF] Cai | Grotesque: Noisy group testing (quick and efficient)[END_REF][START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF] slightly differ in the way subsets are created.

More formally, each subset $S_k$, $1 \le k \le S$, is composed independently by randomly picking $N_S$ items from the collection of N items. Denote by π the probability that an item belongs to a given subset: $\pi = N_S/N$. Subset $S_k$ is not useful for identifying a given defective if:
• it doesn't belong to subset $S_k$, with probability 1 - π,
• else, if it is not the only defective in this subset, with probability $1 - H(0; N, c, N_S)$, where $H(k; N, c, N_S)$ is the hypergeometric distribution,
• else, if the decoding over this subset misses its identification, with probability denoted by α.
Overall, this event happens with probability
$$g(N_S) := (1-\pi) + \pi\big((1 - H(0;N,c,N_S)) + H(0;N,c,N_S)\,\alpha\big) \qquad (12)$$
which is minimized by selecting $N_S = (N+1)/(c+1)$, because $g(N_S) - g(N_S-1) \le 0$ iff $N_S \le (N+1)/(c+1)$ (we assume that c + 1 divides N + 1). The probability that $S_k$ is useless for identifying a given defective simplifies to:
$$g(N_S) = 1 - N_S\,\frac{(N-c-1)!}{N!}\,\frac{(N-N_S)!}{(N-N_S-c)!}\,(1-\alpha) \qquad (13)$$
$$= 1 - \left(\frac{c}{c+1}\right)^{c}\frac{1-\alpha}{c+1}\,\big(1+O(1/N)\big), \qquad (14)$$
where we use the fact that $\Gamma(N+a)/\Gamma(N+b) = N^{a-b}\big(1 + (a+b-1)(a-b)/(2N) + O(1/N^2)\big)$ [16].

Suppose that the goal of the decoding is to identify, on expectation, a fraction $1-\alpha_S$ of the defectives. The probability of missing a given defective because none of the subsets is useful for identifying it equals $\alpha_S$:
$$P[\text{not identifying a given defective}] = g(N_S)^S = \alpha_S. \qquad (15)$$
Since $(c/(c+1))^c \ge 1/e$, it is safe to choose $S = \lceil (c+1)\,e\,(-\log\alpha_S)/(1-\alpha)\rceil$.

Suppose now that the goal is to identify all the defectives with probability $1-\alpha_J$. The probability of identifying them all is given by:
$$P[\text{identifying all of them}] = \big(1 - g(N_S)^S\big)^{c} = 1-\alpha_J. \qquad (16)$$
This can be achieved with $S \ge \lceil e(c+1)\log(c/\alpha_J)/(1-\alpha)\rceil$.

The point of this 'Divide and Conquer' approach is that $m = \Theta(\log_2 N_S)$ tests are needed for identifying a unique defective item in a subset of size $N_S$ with a fixed probability of error α (see Sec. 2.6). Since the sizes of the subsets are all roughly equal to N/c, the total number of tests scales as $M_{DC} = O(c\log c\,\log(N/c))$ to identify all defectives with high probability, which is almost the optimal scaling. In [START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF], the authors show that the decoding can also exploit subsets containing two defectives (so-called 'doubletons'), which reduces S to O(c) for identifying all the defectives. To discover a fraction of the defectives, the total number of tests scales as $M_{DC} = O(c\log_2(N/c))$.
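The subset sizing above can be turned into a few lines of code. The following sketch (ours; the constants simply follow the bounds quoted above, and the output is illustrative, not a guaranteed test count):

```python
import math

def dac_design(N, c, alpha=0.05, alpha_S=0.01):
    """Subset size and count for the 'Divide and Conquer' strategy (Sec. 2.5).
    alpha: per-subset decoding error; alpha_S: tolerated miss rate per defective."""
    N_S = (N + 1) // (c + 1)                       # minimizer of g(N_S), eq. (12)
    S = math.ceil((c + 1) * math.e * (-math.log(alpha_S)) / (1 - alpha))
    m = math.ceil(math.log2(N_S))                  # tests per subset, m = Theta(log2 N_S)
    return N_S, S, S * m                           # total tests ~ S * m

print(dac_design(N=10**6, c=10))
```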
This ends up in the optimal scaling achieved by GROTESQUE [START_REF] Cai | Grotesque: Noisy group testing (quick and efficient)[END_REF] and the 'singleton'-only version of SAFFRON [START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF]. These schemes also have the following advantages:
• The decoding complexity scales like $O(cm) = O(c\log_2(N/c))$ if a deterministic construction is used as in [START_REF] Cai | Grotesque: Noisy group testing (quick and efficient)[END_REF] and [START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF]. We decode O(c) 'singleton' subsets in total. In the noiseless setup, decoding a 'singleton' amounts to reading the outputs of the tests because they exactly correspond to the codeword of the unique defective of that subset. If the setup is not noiseless, the outputs are a noisy version of this codeword. An error correcting code whose decoding is in O(m) gets rid of these wrong outputs. For instance, the authors of [START_REF] Lee | SAFFRON: A fast, efficient, and robust framework for group testing based on sparse-graph codes[END_REF] use a spatially-coupled LDPC error correcting code.
• The decoding complexity scales like $O(cmN_S) = O(N\log_2(N/c))$ if a probabilistic construction is used per subset. We decode O(c) 'singleton' subsets. Decoding a singleton amounts to computing the likelihood scores for $N_S$ items and identifying the defective as the item with the biggest score. The likelihood is a weighted sum of the m test outputs. The next section shows that finding the optimal parameter p of the probabilistic construction is also simple.
The main drawback of the 'Divide and Conquer' strategy is that it doesn't apply when $\theta_1 = \theta_0$. This typically corresponds to the 'threshold' model where one unique defective is not enough to trigger the output of a test. Likewise, if $\theta_1 \approx \theta_0$, any efficient error correcting decoder will fail and the only option is the exhaustive maximum likelihood decoder. At that point, a probabilistic construction is preferable.

Identifying the unique defective in a 'singleton' subset

This 'Divide and Conquer' approach greatly simplifies the model [START_REF] Atia | Boolean compressed sensing and noisy group testing[END_REF]. Since there is a single defective, we only need the parameters $\theta_0$ and $\theta_1$. By the same token, there is no need for joint decoding since the defective is unique. The mutual information in (8) takes a simple expression:
$$I_{DC}(p) = H(Y_i\mid p) - H(Y_i\mid A_{i,j}, p) = h(\theta_0 + p(\theta_1-\theta_0)) - (1-p)\,h(\theta_0) - p\,h(\theta_1), \qquad (17)$$
which is strictly positive on (0, 1) if $\theta_1 \neq \theta_0$ and whose maximization is simpler than for the single and joint decoders. This concave function has a null derivative for:
$$p_{DC} = \frac{1}{\theta_1-\theta_0}\left(\frac{1}{2^{\frac{h(\theta_1)-h(\theta_0)}{\theta_1-\theta_0}}+1} - \theta_0\right). \qquad (18)$$
This gives the following application to the usual models:
1. Noiseless test: $\theta_0 = 1 - \theta_1 = 0$, so that $p_{DC} = 1/2$ and $I_{DC}(p_{DC}) = 1$.
2. Noisy test: $\theta_0 = 1 - \theta_1 = \epsilon$, so that $p_{DC} = 1/2$ and $I_{DC}(p_{DC}) = 1 - h(\epsilon)$.
3. Dilution: $\theta_0 = 0$ and $\theta_1 = 1 - \upsilon$, so that
$$p_{DC}(\upsilon) = \frac{1}{1-\upsilon}\cdot\frac{1}{2^{h(\upsilon)/(1-\upsilon)}+1}, \qquad (19)$$
$$I_{DC}(p_{DC}(\upsilon)) = h\big((1-\upsilon)\,p_{DC}(\upsilon)\big) - p_{DC}(\upsilon)\,h(\upsilon). \qquad (20)$$
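The closed form (18) is easy to validate against a brute-force maximization of (17). A minimal sketch of ours, here with the dilution values $\theta_0 = 0$, $\theta_1 = 1-\upsilon$:

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def p_dc(theta0, theta1):
    """Closed-form maximizer of I_DC, eq. (18)."""
    d = (h(theta1) - h(theta0)) / (theta1 - theta0)
    return (1.0 / (2.0 ** d + 1.0) - theta0) / (theta1 - theta0)

def I_dc(p, theta0, theta1):
    """Eq. (17)."""
    return h(theta0 + p * (theta1 - theta0)) - (1 - p) * h(theta0) - p * h(theta1)

t0, t1 = 0.0, 0.8                      # dilution with upsilon = 0.2
p_star = p_dc(t0, t1)
grid = np.linspace(0.001, 0.999, 999)
assert abs(p_star - grid[np.argmax(I_dc(grid, t0, t1))]) < 2e-3
print(p_star, I_dc(p_star, t0, t1))
```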
Denote $f(\upsilon) = 1/p_{DC}(\upsilon)$. We have:
$$f'(\upsilon) = -1 - 2^{\frac{h(\upsilon)}{1-\upsilon}}\left(1 + \frac{\ln\upsilon}{1-\upsilon}\right) \quad \text{for } \upsilon\in[0,1]. \qquad (21)$$
Since $\ln(\upsilon) \le \upsilon - 1 - (1-\upsilon)^2/2$, we have on one hand $2^{h(\upsilon)/(1-\upsilon)} \ge 2^{\upsilon}/(1-\upsilon)$ and on the other hand $\big(1+\frac{\ln\upsilon}{1-\upsilon}\big)/(1-\upsilon) \le -1/2$. This shows that $f'(\upsilon) \ge 0$. Function f is increasing; therefore $p_{DC}$ is a decreasing function of the dilution factor υ. As υ → 1, $h(\upsilon)/(1-\upsilon) = 1/\ln(2) - \log_2(1-\upsilon) + O(1-\upsilon)$, s.t. $p_{DC} \to 1/e \approx 0.37$.

The number of tests for a subset is given by (8), which, multiplied by the number of subsets, gives
$$M_{DC} \approx \frac{c\,e\,(-\log\alpha_S)}{I_{DC}(p_{DC})}\,\log_2(N/c) \qquad (22)$$
for identifying a fraction $1-\alpha_S$ of the defectives on expectation.

Less optimistic models

We would like to warn the reader that the promises of group testing are due to the simplicity of the models described in Sec. 2.1. These models are not naive, since they do encompass the imperfection of the test over a group. The output of the test is modeled as a random variable. Yet the statistics of the test only depend on the number of defective items inside the group, not on the size of the group. Consider the noisy setup with parameter ε modelling the imperfection of the test. The optimal setting is $p_{DC} = 1/2$ for the 'Divide and Conquer' scheme. This means that if N = 200, then the size of the groups is around 100; if $N = 2\cdot 10^9$, the groups are composed of a billion items and, still, the reliability of the test is not degraded. There are some chemical applications where tests can detect the presence of one single particular molecule among billions. But this is certainly not the case for all 'needles in a haystack' problems.

Our proposed model

We believe there are many applications where the reliability of the test degrades as the size n of the group increases. Indeed, when the size of the group grows to infinity, the test might become purely random. For the noisy setup, ε should be denoted as $\epsilon_n$ s.t. $\lim_{n\to+\infty}\epsilon_n = 1/2$ if the test gets asymptotically random. For the dilution model, υ should be denoted as $\upsilon_n$ s.t. $\lim_{n\to+\infty}\upsilon_n = 1$. This captures the fact that the defectives get completely diluted in groups whose size grows to infinity. Instead of coping with the noisy or dilution setups, we prefer to consider the equivalent model where the probabilities $(\theta_{0,n},\dots,\theta_{c,n})$ now depend on the size of the group. We make the following assumptions:
• For all n, $\theta_{0,n} \le \dots \le \theta_{c,n}$. Having more defective items in a group increases the probability that the test is positive.
• $\theta_{0,n}$ is a non-decreasing function of n. Parameter $\theta_{0,n}$ is the probability of a false positive (the test is positive whereas there is no defective in the group). Increasing the size of the group will not help decrease the probability of this kind of error.
• For $0 < k \le c$, $\theta_{k,n}$ is a non-increasing function. Again, $1-\theta_{k,n}$ is the probability of a false negative (the test is negative whereas there are k defective items in the group). Increasing the size of the group will not help decrease the probability of this kind of error.
• These probabilities are bounded monotonic functions; therefore they admit a limit as n → +∞, denoted $\tilde\theta_k := \lim_{n\to+\infty}\theta_{k,n}$. A test is deemed asymptotically random if $\tilde\theta_0 = \dots = \tilde\theta_c$, whatever the value of this common limit.
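As an illustration of such a size-dependent model (our own toy instance, not one advocated by the report), take a noisy test whose flipping probability drifts to 1/2 polynomially in n, so that $\delta_{1,n} = \theta_{1,n} - \theta_{0,n} = n^{-a}$:

```python
import numpy as np

def theta_noisy_n(c, n, a=0.5):
    """Size-dependent noisy test: eps_n = (1 - n**(-a)) / 2, so eps_n -> 1/2
    and the test becomes asymptotically random as the group size n grows."""
    eps_n = 0.5 * (1.0 - n ** (-a))
    k = np.arange(c + 1)
    return np.where(k > 0, 1.0 - eps_n, eps_n)

for n in (10, 100, 1000, 10000):
    th = theta_noisy_n(c=2, n=n)
    print(n, th[0], th[1], th[1] - th[0])   # delta_{1,n} = n**(-a) shrinks
```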
Application to group testing designs

We consider the three schemes mentioned above: single, joint and 'Divide and Conquer'. As described in Sec. 2.5, the 'Divide and Conquer' design builds S subsets by randomly picking $N_S$ items out of N. The optimal size $N_S$ of a subset grows linearly with N. The three schemes then compose random groups from a population whose size $\mathcal{N}$, be it $\mathcal{N} = N$ (single and joint schemes) or $\mathcal{N} = N_S$ ('Divide and Conquer' design), grows to infinity.

We denote the size of the test groups by $n(\mathcal{N})$ to investigate different choices as $\mathcal{N}$ goes to infinity. Note that $n(\mathcal{N}) \le \mathcal{N}$. When we pick at random $n(\mathcal{N})$ items, the probability p that this group contains a given defective is $p = n(\mathcal{N})/\mathcal{N}$. We analyze two choices concerning the asymptotic size of the test groups: either $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = +\infty$ or $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) < +\infty$. We derive the mutual information at stake for the three schemes ('Divide and Conquer', single, and joint) and deduce the asymptotic scaling of the total number of tests.

We introduce the non-increasing functions $\delta_{k,n} := \theta_{k,n} - \theta_{0,n} \ge 0$. For the 'Divide and Conquer' scheme, the decoding is performed on singleton subsets. Therefore $\delta_{1,n}$ shows the speed at which the test gets closer to a random test. For the single and joint decoders, there are up to c defective items in a group (we suppose that c < n) and $\delta_{c,n}$ shows the speed at which the test gets closer to a random test. The most important factor is the limit $\tilde\delta_k := \lim_{n\to\infty}\delta_{k,n}$.

For the 'Divide and Conquer' scheme, $\tilde\delta_1 > 0$ means that the two probabilities $\theta_{0,n}$ and $\theta_{1,n}$ have distinct limits, and Sect. 2.6 gives the optimum choice replacing $\theta_0$ and $\theta_1$ by their limits $\tilde\theta_0$ and $\tilde\theta_1$. When there is a single defective in a subset, Eq. (8) shows that the number of tests to identify it is $\Theta(\log_2 N_S)$, which in turn is $\Theta(\log_2(N/c))$. Because the number of subsets S used by the 'Divide and Conquer' strategy in Sect. 2.5 asymptotically gets independent of N, the total number of tests scales as $\Theta(c\log_2(N/c))$ (identification of a fraction of the defective items) or $\Theta(c\log c\log_2 N)$ (identification of all the items).

For the single and joint decoders, $\tilde\delta_c > 0$ means that the test is not asymptotically random. Packing more and more items in groups always provides informative tests. The strategy of selecting n(N) s.t. $\lim_{N\to\infty} n(N)/N = p^\star$ will deliver a non-null mutual information. Parameter $p^\star$ is derived from Sec. 2 replacing $(\theta_0,\dots,\theta_c)$ by their limits $(\tilde\theta_0,\dots,\tilde\theta_c)$. There might be other ways giving a lower total number of tests, but at least this strategy delivers the promises of group testing with a total number of tests scaling as $\Theta(c\log_2 N)$.

The next sections investigate our main concern: the case where the test is asymptotically random.

4 First strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = n$

The first strategy makes the size of the test groups converge to the finite value $n := \lim_{\mathcal{N}\to\infty} n(\mathcal{N})$, for which the test is not random: suppose $\theta_{0,n} < \theta_{1,n}$. On the other hand, the probability of an item being in a given test group vanishes as $p = n/\mathcal{N}$.

4.1 The case where $\theta_{0,n} \neq 0$

Assume first that $\theta_{0,n} \neq 0$; Taylor series give the following asymptotics (see Appendices A.1, B.1, and C.1):
$$I_J(p) \approx c\,\frac{n}{N}\,\Delta h_{0,1}, \qquad (23)$$
$$I_S(p) \approx \frac{n}{N}\,\Delta h_{0,1}, \qquad (24)$$
$$I_{DC}(p) \approx \frac{n}{N_S}\,\Delta h_{0,1}, \qquad (25)$$
with $\Delta h_{0,1} := (\theta_{1,n}-\theta_{0,n})\,h'(\theta_{0,n}) + h(\theta_{0,n}) - h(\theta_{1,n})$. These three mutual informations only depend on the first two parameters of the model, which is unusual for the joint and the single schemes. As the probability p vanishes, the tests are positive for a unique reason: there is a single defective in the group. More formally, thanks to L'Hôpital's rule, the probability that there is a single defective, knowing that there is at least one, converges to one:
$$\lim_{p\to 0}\frac{\pi_1}{1-\pi_0} = \lim_{p\to 0}\frac{1-(c-1)p}{1-p} = 1 \qquad (26)$$
It is therefore quite normal that $I_{DC}(p)$ and $I_S(p)$ coincide (except that $N_S$ is replaced by N).
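A quick numeric sanity check (ours) of the small-p asymptotic (24), with a fixed group size n and a growing N:

```python
import numpy as np
from scipy.stats import binom

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def hprime(x):
    return np.log2((1 - x) / x)

def I_single(p, theta):
    theta = np.asarray(theta, float)
    c = len(theta) - 1
    w = binom.pmf(np.arange(c), c - 1, p)
    P1, P0 = w @ theta[1:], w @ theta[:-1]
    return h((1 - p) * P0 + p * P1) - p * h(P1) - (1 - p) * h(P0)

theta = [0.1, 0.8, 0.9, 0.95]                     # theta_{0,n} != 0, fixed n
t0, t1 = theta[0], theta[1]
dh01 = (t1 - t0) * hprime(t0) + h(t0) - h(t1)     # Delta h_{0,1}
for N in (10**3, 10**5, 10**7):
    p = 5.0 / N                                   # fixed n = 5, so p = n / N
    print(N, I_single(p, theta) / (p * dh01))     # ratio -> 1, eq. (24)
```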
However, the 'Divide and Conquer' scheme runs a group testing procedure per subset, s.t. it asymptotically needs $e(-\log\alpha_S)$ more tests than the single decoder (compare (22) with (8)). $I_J(p)$ is exactly c times bigger than $I_S(p)$, which in the end offers the same scaling for the total number of tests: $M_J \approx M_S$ (compare (4) with (8)). This signifies that the joint decoding doesn't perform better than the single decoding. Indeed, the score computed for a tuple of c items by the joint decoder becomes asymptotically equal to the sum of the scores of the c items as computed by the single decoder. The three schemes need a total number of tests scaling as O(N log N). It is surprising that it doesn't depend on c, but the most important point is that this is much less appealing than the promise in O(c log N).

4.2 The case where $\theta_{0,n} = 0$

When $\theta_{0,n} = 0$, the expressions above are no longer correct because $\lim_{x\to 0} h'(x) = \infty$. New Taylor series give the following asymptotics (see Appendices A.1, B.1, and C.1):
$$I_J(p) \approx c\,\theta_{1,n}\,\frac{n}{N}\,\log_2\frac{N}{n}, \qquad (27)$$
$$I_S(p) \approx \theta_{1,n}\,\frac{n}{N}\,\log_2\frac{N}{n}, \qquad (28)$$
$$I_{DC}(p) \approx \theta_{1,n}\,\frac{n}{N_S}\,\log_2\frac{N_S}{n}. \qquad (29)$$
The same comments as above hold, except that this time the schemes provide a better total number of tests, scaling as O(N). The explanation is that a test s.t. $\theta_{0,n} = 0$ has the advantage of being positive if and only if there is at least one defective in the group. Indeed, there is a single defective in a positive group: exactly in the 'Divide and Conquer' scheme and asymptotically for the joint and single decoders. This certainty greatly eases the decoding. These mutual informations show that the multiplicative factor of this scaling is $1/(n\theta_{1,n})$ for the single and joint decoders. We thus need to select n s.t. $n\theta_{1,n} > 1$ in order to be, asymptotically at least, preferable to an exhaustive search testing items separately. This raises an even more stringent condition for the 'Divide and Conquer' scheme, because we need $n\theta_{1,n} > e(-\log\alpha_S)$.

5 Second strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = \infty$

The second strategy makes the size of the test groups increase as $\mathcal{N} \to \infty$. Therefore, the rate at which the test becomes random matters. This is reflected by the speed at which $\delta_{1,n}$ ('Divide and Conquer') or $\delta_{c,n}$ (joint and single) converge to zero. Once again, we make the distinction between tests s.t. $\tilde\theta_0 > 0$ and those for which $\tilde\theta_0 = 0$.

5.1 The case where $\tilde\theta_0 \neq 0$

Assume first that $\tilde\theta_0 \neq 0$; Taylor series give the following asymptotics (see Appendices A.2.1, B.2.1 and C.2.1):
$$I_J(p) \approx -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\mathrm{Var}[\theta_{K,n}], \qquad (30)$$
$$I_S(p) \approx -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\frac{1}{c^2\,p(1-p)}\,\mathrm{Cov}(K, \theta_{K,n})^2, \qquad (31)$$
$$I_{DC}(p) \approx -\tfrac{1}{2}\,h''(\tilde\theta_0)\,p(1-p)\,\delta_{1,n}^2, \qquad (32)$$
with $K \sim B(c, p)$. Since $\mathrm{Cov}(K, \theta_{K,n})^2 \le \mathrm{Var}[K]\,\mathrm{Var}[\theta_{K,n}]$ and $\mathrm{Var}[K] = cp(1-p)$, these series comply with the rule that $I_S(p) \le I_J(p)/c$. We can also check that if c = 1, these three series are equal. Another remark: if $\theta_{K,n} = K\delta_{1,n} + \theta_{0,n}$, $\forall 0 \le K \le c$, then $I_{DC}(p) = I_S(p) = I_J(p)/c$ and the three schemes provide the same scaling of the total number of tests. Note however that the probability p of belonging to a group equals n/N in the first two expressions, whereas it equals $n/N_S$ in the last expression.

Application to noisy group testing: in this setup, $\theta_{0,n} = 1 - \theta_{k,n} = \epsilon_n \to 1/2$. This simplifies the expressions above as follows:
$$I_J(p) \approx \tfrac{2}{\ln 2}\,(1-p)^c\big(1-(1-p)^c\big)\,\delta_{1,n}^2 \qquad (33)$$
$$I_S(p) \approx \tfrac{1}{\ln 2}\, p\,(1-p)^{2c-1}\,\delta_{1,n}^2 \qquad (34)$$
$$I_{DC}(p) \approx \tfrac{2}{\ln 2}\, p(1-p)\,\delta_{1,n}^2. \qquad (35)$$
If we suppose that $\delta_{1,n} = O(n^{-a})$ with a > 0 and we increase the size of the groups s.t.
$n \propto \mathcal{N}^b$ with $0 \le b \le 1$ (or $n \propto N_S^b$ for the 'Divide and Conquer' scheme), then the three schemes offer a mutual information of the same order:
• If $0 < a \le 1/2$, the best option is to choose b = 1, i.e. to fix the value of p, to achieve $I = O(\mathcal{N}^{-2a})$, which ends up in a total number of tests scaling as $\Omega(N^{2a}\log N)$.
• If $a > 1/2$, the best option is to set b = 0, i.e. to fix the size of the groups, to achieve $I = O(\mathcal{N}^{-1})$, which ends up in a total number of tests scaling as $\Omega(N\log N)$. We rediscover here the results of Sect. 4.
These scalings are much bigger than the promised $\Theta(c\log N)$. Yet, if the test smoothly becomes random as n increases, i.e. when $a < 1/2$, the situation is actually not that bad, since the scaling of the total number of tests is slower than $\Theta(N)$, i.e. the scaling of the exhaustive screening (yet, we need a setup where the Ω becomes a Θ).

The appendices give upper bounds on the mutual informations of the single and joint decoders in the general case:
$$I_J(p) \le -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\big(1-(1-p)^c\big)\,\delta_{c,n}^2, \qquad (36)$$
$$I_S(p) \le -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\frac{p}{1-p}\,\delta_{c,n}^2 \qquad (37)$$
These two upper bounds share the same decrease in $O(\mathcal{N}^{-2a})$ if $\delta_{1,n} = O(n^{-a})$ with $0 < a \le 1/2$. Now, to get $M = O((\log N)^d)$ we need $I = \Omega((\log N)^{1-d})$ and therefore, for a fixed p, $\delta_{c,n}$ (or $\delta_{1,n}$ for the 'Divide and Conquer' scheme) being $\Omega((\log n)^{(1-d)/2})$. The point of this chapter is to consider that $\delta_{c,n}$ converges to zero, therefore d > 1. We are getting closer to the promise of group testing for tests becoming random at a very low speed.

5.2 The case where $\tilde\theta_0 = 0$

The appendices A.2.2, B.2.2 and C.2.2 show that:
$$I_J(p) \le (-\theta_{c,n}\log_2\theta_{c,n})(1-\pi_0-\pi_c) + \theta_{c,n}\big(-(1-\pi_0)\log_2(1-\pi_0)\big) + o(\theta_{c,n}) \qquad (38)$$
$$I_S(p) \le (-\theta_{c,n}\log_2\theta_{c,n})(1-\pi_0-\pi_c) + \theta_{c,n}\big(-(1-\pi_0)\log_2(1-\pi_0) + (c-1)p^c\log_2 p\big) + o(\theta_{c,n}) \qquad (39)$$
$$I_{DC}(p) = \theta_{1,n}\,(-p\log_2 p) + o(\theta_{1,n}), \qquad (40)$$
with $\pi_0 = (1-p)^c$ and $\pi_c = p^c$. For a fixed p, the mutual informations of the joint and single decoders are dominated by $-\theta_{c,n}\log_2\theta_{c,n}$. If $\theta_{c,n} = O(n^{-a})$, a > 0, the total number of tests scales as $\Omega(N^a)$. This does not hold for the 'Divide and Conquer' scheme: if $\theta_{1,n} = O(n^{-a})$, the total number of tests scales as $\Omega(N^a\log N)$. If $n \propto \mathcal{N}^b$, $0 \le b \le 1$, so that $p \propto \mathcal{N}^{b-1}$, then the mutual informations of the joint and single decoders are $O(N^{b(1-a)-1}\log N)$. If the test slowly converges to a random test, i.e. a < 1, then we should set b = 1 and we are back to the option of freezing p. Otherwise, it is better to set b = 0, so that the total number of tests scales as Ω(N), and we are back to the first strategy, fixing n. The same comment holds for the 'Divide and Conquer' scheme. Again, this case is preferable to the case $\tilde\theta_0 \neq 0$: $M = \Omega(N^a\log N)$ and not $\Omega(N^{2a}\log N)$, and for a longer range $0 < a \le 1$ (and not $0 < a \le 1/2$). Last but not least, for the 'Divide and Conquer' scheme, to get $M = O((\log N)^d)$ we need $\theta_{1,n} = \Omega((\log N)^{1-d})$ with d > 1 to make $\theta_{1,n}$ vanish as n = pN increases (p is fixed). With the same setup, $M = O((\log N)^d/\log\log N)$ for the single and joint decoders.

Conclusion

The point of this chapter is not to find the best choice concerning the asymptotic size of the test groups. We just show that whatever this choice, group testing fails to deliver the promise of a total number of tests scaling as O(c log N). The condition of utmost importance for such an appealing scaling is to have a test which doesn't become purely random as the size of the group grows to infinity.
However, group testing almost keeps its promise, i.e. a total number of tests scaling as a power of log N, for setups where the test converges to randomness very slowly, i.e. at a rate in $\Omega(1/\log^g n)$ with g > 0. For this kind of setup, it is better to fix p, which means that the sizes of the groups are proportional to N. However, if the test becomes random too rapidly, i.e. as fast as $O(n^{-a})$ with $a \ge 1/2$, it is useful to switch from a fixed-p strategy to a fixed-n strategy. Setups where there is no false positive ($\theta_{0,n} = 0$) or no false negative ($\theta_{k,n} = 1$ for k > 0) lead to better performances: the total number of tests is lower and the transition from fixed p to fixed n occurs at a higher rate, i.e. for a = 1.

A 'Divide and Conquer'

A.1 First strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = n$

The first strategy makes the size of the test groups converge to the finite value $n := \lim_{N_S\to\infty} n(N_S)$ for which the test is not random, i.e. $\theta_{0,n} < \theta_{1,n}$. On the other hand, the probability of an item being in a given test group vanishes as $p = n/N_S$. Assume that $\theta_{0,n} \neq 0$; a Taylor series of (17) gives the following asymptotic:
$$I_{DC}(p) = \frac{n}{N_S}\big[(\theta_{1,n}-\theta_{0,n})\,h'(\theta_{0,n}) + h(\theta_{0,n}) - h(\theta_{1,n})\big] + o(N_S^{-1}). \qquad (41)$$
If $\theta_{0,n} = 0$, the result above does not hold because $\lim_{x\to 0} h'(x) = +\infty$. A new Taylor series gives the following asymptotic:
$$I_{DC}(p) = \theta_{1,n}\,\frac{n}{N_S}\,\log_2\frac{N_S}{n} + o\!\left(\frac{\log N_S}{N_S}\right). \qquad (42)$$

A.2 Second strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = \infty$

A.2.1 When $\tilde\theta_0 \in\, ]0, 1[$

The assumption here is that $\tilde\theta_1 = \tilde\theta_0$, which lies in ]0, 1[. We denote $\eta_n := \theta_{0,n} - \tilde\theta_0$ and $\delta_{1,n} := \theta_{1,n} - \theta_{0,n}$. With these notations, we have
$$I_{DC}(p) = h(\tilde\theta_0 + \eta_n + p\,\delta_{1,n}) - (1-p)\,h(\tilde\theta_0+\eta_n) - p\,h(\tilde\theta_0+\eta_n+\delta_{1,n}). \qquad (43)$$
Note that $\eta_n \le \eta_n + p\,\delta_{1,n} \le \eta_n + \delta_{1,n}$ because $0 \le p \le 1$, which implies that
$$|\eta_n + p\,\delta_{1,n}| \le \max(|\eta_n|, |\eta_n+\delta_{1,n}|). \qquad (44)$$
Both $|\eta_n|$ and $|\delta_{1,n}|$ converge to 0, so that, for ε > 0, there exists $n_0$ big enough s.t. $\forall n \ge n_0$, $\max(|\eta_n|, |\eta_n+\delta_{1,n}|) \le \epsilon$. We then apply the following Taylor development for $\tilde\theta_0 \in\, ]0,1[$:
$$h(\tilde\theta_0+\epsilon) = h(\tilde\theta_0) + \epsilon\,h'(\tilde\theta_0) + \tfrac{\epsilon^2}{2}\,h''(\tilde\theta_0) + o(\epsilon^2), \qquad (45)$$
on the three terms of (43) to simplify it to:
$$I_{DC}(p) = -\tfrac{1}{2}\,\delta_{1,n}^2\,h''(\tilde\theta_0)\,p(1-p) + o(\epsilon^2). \qquad (46)$$
Since we assume in the text that $\theta_{0,n}$ is non-decreasing, it has to converge to $\tilde\theta_0$ from below, s.t. $\eta_n$ is non-positive. In the same way, $\theta_{1,n}$ converges to $\tilde\theta_0$ from above, s.t. $\eta_n+\delta_{1,n}$ is non-negative. This shows that $\epsilon \le \delta_{1,n} \le 2\epsilon$ and therefore $\delta_{1,n} = \Theta(\epsilon)$. This allows replacing $o(\epsilon^2)$ by $o(\delta_{1,n}^2)$ in (46).

A.2.2 When $\tilde\theta_0 \in \{0, 1\}$

We detail the case for $\tilde\theta_1 = \tilde\theta_0 = 0$. With the same notations as in App. A.2.1, this case implies that $\eta_n = 0$ because $\theta_{0,n}$ is non-decreasing and non-negative. The mutual information in this context equals:
$$I_{DC}(p) = h(p\,\delta_{1,n}) - p\,h(\delta_{1,n}). \qquad (47)$$
For ε > 0, there exists n big enough for which $\delta_{1,n} = \epsilon$, and where $h(\epsilon) = -\epsilon\log_2(\epsilon) + \epsilon/\ln 2 + o(\epsilon)$. Applying this development, we obtain:
$$I_{DC}(p) = \delta_{1,n}\,(-p\log_2 p) + o(\delta_{1,n}). \qquad (48)$$
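The quadratic approximation (46) can be checked numerically. Below is a small sketch of ours, using the size-dependent noisy toy model introduced earlier ($\tilde\theta_0 = 1/2$, $\delta_{1,n} = n^{-a}$):

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def I_dc(p, t0, t1):
    return h(t0 + p * (t1 - t0)) - (1 - p) * h(t0) - p * h(t1)

p, a = 0.3, 0.5
hpp = -1.0 / (np.log(2) * 0.25)             # h''(1/2) = -4 / ln 2
for n in (10, 100, 1000, 10000):
    eps_n = 0.5 * (1 - n ** (-a))
    delta = 1 - 2 * eps_n                    # delta_{1,n} = n**(-a)
    exact = I_dc(p, eps_n, 1 - eps_n)
    approx = -0.5 * hpp * p * (1 - p) * delta ** 2   # eq. (46)
    print(n, exact / approx)                 # ratio -> 1
```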
B Joint decoder

We assume that the size of a group is always larger than c. Therefore, (c + 1) parameters $(\theta_{0,n},\dots,\theta_{c,n})$ define the test.

B.1 First strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = n$

The mutual information for the joint decoder (5) has the following Taylor series when $p = n/N \to 0$, for $\theta_{0,n} > 0$:
$$I_J(p) = c\,\frac{n}{N}\big[(\theta_{1,n}-\theta_{0,n})\,h'(\theta_{0,n}) + h(\theta_{0,n}) - h(\theta_{1,n})\big] + o(1/N), \qquad (49)$$
and, for $\theta_{0,n} = 0$ and $\theta_{1,n} > 0$:
$$I_J(p) = c\,\theta_{1,n}\,\frac{n}{N}\,\log_2(N) + o(N^{-1}\log N). \qquad (50)$$
We have supposed that n is chosen s.t. $\delta_{1,n} > 0$. It is possible to relax this constraint, and the first non-null parameter $\delta_{k,n}$ will then appear in the above equations. Yet, we also get a decay in $N^{-k}$ instead of $N^{-1}$, whence choosing n s.t. $\delta_{1,n} = 0$ should be avoided if possible. This is a real issue for the threshold group testing model, where $\theta_{1,n} = \theta_{0,n}$ (see Sect. 2.1).

B.2 Second strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = \infty$

Now suppose that the parameters of the model vary with n s.t. $\theta_{0,n} \le \theta_{1,n} \le \dots \le \theta_{c,n}$, and that $\delta_{c,n} = \theta_{c,n} - \theta_{0,n}$ vanishes to 0 as n increases. The analysis is made for a fixed 0 < p < 1. We recall that $\pi_k = \binom{c}{k}p^k(1-p)^{c-k}$, $\forall 0 \le k \le c$, is the distribution of the binomial B(c, p).

B.2.1 When $\tilde\theta_0 \in\, ]0, 1[$

As in the previous section, $\eta_n = \theta_{0,n} - \tilde\theta_0$ and $\delta_{k,n} = \theta_{k,n} - \theta_{0,n}$ converge to zero. For any ε > 0, there exists n large enough for which $\max(|\eta_n|, \eta_n + \delta_{c,n}) = \epsilon$. This implies that $|\eta_n + \delta_{k,n}|$, $\forall 0 \le k \le c$, and $|\eta_n + P(p) - \theta_{0,n}|$ are smaller than ε. For $0 < \tilde\theta_0 < 1$, we apply the development (45) to h(P(p)) and $h(\theta_{k,n})$:
$$I_J(p) = -\tfrac{1}{2}\,h''(\tilde\theta_0)\left[\sum_{k=0}^{c}\pi_k\,\theta_{k,n}^2 - P(p)^2\right] + o(\epsilon^2) \qquad (51)$$
$$= -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\mathrm{Var}[\theta_{K,n}] + o(\epsilon^2). \qquad (52)$$
Note that $\mathrm{Var}[\theta_{K,n}] \le E[(\theta_{K,n}-\theta_{0,n})^2] \le \delta_{c,n}^2\,(1-(1-p)^c) < 4\epsilon^2$. On the other hand,
$$\mathrm{Var}[\theta_{K,n}] \ge \pi_0\,(\theta_{0,n}-P(p))^2 + \pi_c\,(\theta_{c,n}-P(p))^2 \ge \frac{\pi_0\pi_c}{\pi_0+\pi_c}\,\delta_{c,n}^2. \qquad (53)$$
Since $\delta_{c,n} \ge \epsilon$, this shows that $\mathrm{Var}[\theta_{K,n}] = \Theta(\epsilon^2)$, so that we can replace $o(\epsilon^2)$ by $o(\mathrm{Var}[\theta_{K,n}])$.

B.2.2 When $\tilde\theta_0 \in \{0, 1\}$

We start by applying the development $h(x) = -x\log_2(x) + x/\ln 2 + o(x)$ to h(P(p)) and $h(\theta_{k,n})$:
$$I_J(p) = \big(-P(p)\log_2 P(p)\big) - \sum_{k=0}^{c}\pi_k\big(-\theta_{k,n}\log_2\theta_{k,n}\big) + o(\theta_{c,n}). \qquad (54)$$
The function $x \mapsto -x\log_2(x)$ is increasing over [0, 1/e), and $P(p) \le \theta_{c,n}(1-\pi_0)$. On the other hand, $\sum_{k=0}^{c}\pi_k(-\theta_{k,n}\log_2\theta_{k,n}) \ge \pi_c(-\theta_{c,n}\log_2\theta_{c,n})$. The following inequality stems from these arguments:
$$I_J(p) \le (-\theta_{c,n}\log_2\theta_{c,n})(1-\pi_0-\pi_c) + \theta_{c,n}\big(-(1-\pi_0)\log_2(1-\pi_0)\big) + o(\theta_{c,n}). \qquad (55)$$
C Single decoder

We assume that the size of a group is always larger than c. Therefore, (c + 1) parameters $(\theta_{0,n},\dots,\theta_{c,n})$ define the test.

C.1 Asymptotic analysis for $\lim_{N_S\to\infty} n(N_S) < \infty$

Assume that $\theta_{0,n} \neq 0$; a Taylor series of (9) gives the following asymptotic:
$$I_S(p) = \frac{n}{N}\big[(\theta_{1,n}-\theta_{0,n})\,h'(\theta_{0,n}) + h(\theta_{0,n}) - h(\theta_{1,n})\big] + o(1/N). \qquad (56)$$
If $\theta_{0,n} = 0$, a Taylor series gives the following asymptotic:
$$I_S(p) = \theta_{1,n}\,\frac{n}{N}\,\log_2\frac{N}{n} + o(\log(N)/N). \qquad (57)$$
If $\theta_{1,n} = \theta_{0,n}$, the Taylor series will show the role of the first non-null coefficient $\delta_{k,n}$, but the fraction n/N must be replaced by $(n/N)^k$. This should be avoided if possible.

C.2 Second strategy: $\lim_{\mathcal{N}\to\infty} n(\mathcal{N}) = \infty$

We first present some relations between P(p), $P_1(p)$ and $P_0(p)$:
$$P_1(p) = P(p) + (1-p)\,P'(p)/c, \qquad (58)$$
$$P_0(p) = P(p) - p\,P'(p)/c, \qquad (59)$$
$$p(1-p)\,P'(p) = \sum_{k=0}^{c}\pi_k\,\theta_{k,n}\,(k-cp). \qquad (60)$$
From (10) and (11), it is clear that $\theta_{0,n} \le P_0(p)$ and $P_1(p) \le \theta_{c,n}$. We also have
$$p(1-p)\,P'(p) = E[(K-cp)\,\theta_{K,n}] = E[K\theta_{K,n}] - E[K]\,E[\theta_{K,n}] = \mathrm{Cov}(K, \theta_{K,n}), \qquad (61)$$
with $K \sim B(c, p)$, because $E[K] = cp$. We introduce $K' \sim B(c, p)$ independent of K. Then, on one hand:
$$\mathrm{Cov}(K-K', \theta_{K,n}-\theta_{K',n}) = \mathrm{Cov}(K, \theta_{K,n}) + \mathrm{Cov}(K', \theta_{K',n}) = 2\,\mathrm{Cov}(K, \theta_{K,n}), \qquad (62)$$
while, on the other hand,
$$\mathrm{Cov}(K-K', \theta_{K,n}-\theta_{K',n}) = E[(K-K')(\theta_{K,n}-\theta_{K',n})] - E[K-K']\,E[\theta_{K,n}-\theta_{K',n}] = \sum_{0\le k,k'\le c}\pi_k\pi_{k'}\,(k-k')(\theta_{k,n}-\theta_{k',n}). \qquad (63)$$
Since $\theta_{k,n}$ is increasing with k, the summands in the last equation are all non-negative. We can also lower bound this sum by only keeping the terms $|k-k'| = c$. This shows that
$$0 \le \delta_{c,n}\,c\,p^{c-1}(1-p)^{c-1} \le P'(p) = \frac{1}{p(1-p)}\,\mathrm{Cov}(K, \theta_{K,n}). \qquad (64)$$
An upper bound is given by noting that $\sum_k \pi_k\theta_{k,n}k \le cp\,\theta_{c,n}$ and $\sum_k \pi_k\theta_{k,n} \ge \theta_{0,n}$, s.t. $p(1-p)P'(p) \le \delta_{c,n}\,cp$. This proves that $P'(p) = \Theta(\delta_{c,n})$. Since $P'(p) \ge 0$, we have
$$\theta_{0,n} \le P_0(p) \le P(p) \le P_1(p) \le \theta_{c,n}. \qquad (65)$$
These five probabilities converge to $\tilde\theta_0$ as n → ∞.

C.2.1 When $\tilde\theta_0 \in\, ]0, 1[$

For ε > 0, there exists n large enough s.t. $\max(|\eta_n|, \delta_{c,n}+\eta_n) = \epsilon$. The Taylor series of (9) leads to:
$$I_S(p) = -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\frac{p(1-p)}{c^2}\,P'(p)^2 + o(\epsilon^2) \qquad (66)$$
$$= -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\frac{1}{c^2\,p(1-p)}\,\mathrm{Cov}(K, \theta_{K,n})^2 + o(\epsilon^2). \qquad (67)$$
Now, $\delta_{c,n} = \delta_{c,n} + \eta_n - \eta_n$, s.t. $\epsilon \le \delta_{c,n} \le 2\epsilon$ and $P'(p) = \Theta(\epsilon)$. This allows replacing $o(\epsilon^2)$ by $o(P'(p)^2)$ or $o(\mathrm{Cov}(K, \theta_{K,n})^2)$ in the equations above. The upper bound on $P'(p)$ is used to bound the mutual information:
$$I_S(p) \le -\tfrac{1}{2}\,h''(\tilde\theta_0)\,\frac{p}{1-p}\,\delta_{c,n}^2.$$

C.2.2 When $\tilde\theta_0 \in \{0, 1\}$

Following the same rationale as in Sect. B.2.2, we get the bound (39).
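As a closing sanity check (our sketch, with an arbitrary non-decreasing profile for $\theta_{k,n}$ and a finite-difference estimate of P'(p)), the covariance identity (61) is easy to verify numerically:

```python
import numpy as np
from scipy.stats import binom

c, p = 4, 0.3
theta = np.array([0.1, 0.4, 0.6, 0.7, 0.75])      # any non-decreasing theta_k

def P(q):                                          # P(q) = sum_k pi_k theta_k, eq. (6)
    return binom.pmf(np.arange(c + 1), c, q) @ theta

dP = (P(p + 1e-6) - P(p - 1e-6)) / 2e-6            # P'(p) by central difference

pi = binom.pmf(np.arange(c + 1), c, p)
k = np.arange(c + 1)
cov = pi @ (k * theta) - (pi @ k) * (pi @ theta)   # Cov(K, theta_K), K ~ B(c, p)
print(p * (1 - p) * dP, cov)                       # the two values match, eq. (61)
```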
46,349
[ "3087" ]
[ "491438" ]
01744266
en
[ "chim" ]
2024/03/05 22:32:07
2018
https://hal.sorbonne-universite.fr/hal-01744266/file/JABER_Maguy.pdf
Pollyana Trigueiro Silvia Pedetti Baptiste Rigaud Sebastien Balme Jean-Marc Janot Ieda M G Dos Santos Régis Gougeon Maria G Fonseca Thomas Georgelin Maguy Jaber email: maguy.jaber@upmc.fr

Going through the wine fining: intimate dialogue between organics and clays

Keywords: adsorption, resveratrol, BSA, NMR, Fluorescence spectroscopy, wine, clay

Wine chemistry inspires and challenges with its complexity and intriguing composition. In this context, composites based on the use of a model protein, a polyphenol of interest and montmorillonite in a model hydroalcoholic solution have been studied. A set of experimental characterization techniques highlighted the interactions between the organic and the inorganic parts in the composite. The amount of the organic part was determined by ultraviolet-visible (UV-VIS) and thermal analysis. X-ray diffraction (XRD) and transmission electron microscopy (TEM) informed about the stacking/exfoliation of the layers in the composites. Vibrational and nuclear magnetic resonance spectroscopy methods pointed to the formation of a complex between the protein and the polyphenol before adsorption on the clay mineral. The mobility/rigidity of the organic parts was determined by time-resolved fluorescence spectroscopy. Changes in the secondary structure of the protein occurred upon complexation with the polyphenol on the clay mineral, due to strong interactions. Although not faithfully representing enological conditions, these results highlight the range and nature of mechanisms possibly involved in wine fining.

Introduction

Proteins are present in wine in low concentrations and contribute weakly to its nutritive characteristics. Nevertheless, their presence in high proportion may affect the quality of the product, being responsible for the turbidity and time-instability of white wines. The proteins of wine are generally tolerant to the pH of the wine (pH = 3-3.6) [START_REF] Jaeckels | Influence of bentonite fining on protein composition in wine[END_REF][START_REF] Van Sluyter | Wine protein haze: Mechanisms of formation and advances in prevention[END_REF][START_REF] Moreno-Arribas | Analytical methods for the characterization of proteins and peptides in wines[END_REF]. Naturally occurring proteins of grapes, and in particular pathogenesis-related (PR) proteins, have been shown to cause turbidity or haze formation in white wines, where the instability can be related to factors such as pH of the medium, ethanol content, ionic strength, temperature or the concentration of organic acids, tannins and polyphenolic compounds [START_REF] Dordoni | Effect of bentonite characteristics on wine proteins, polyphenols, and metals under conditions of different pH[END_REF][START_REF] Vincenzi | Study of combined effect of proteins and bentonite fining on the wine aroma loss[END_REF][START_REF] Esteruelas | Phenolic compounds present in natural haze protein of Sauvignon white wine[END_REF]. Therefore, it is necessary in the fining treatment to eliminate the risk of protein precipitation; the protein content can vary from 10 to more than 260 mg.L-1 [START_REF] Cilindre | It's time to pop a cork on champagne's proteome![END_REF][START_REF] Wigand | Analysis of Protein Composition of Red Wine in Comparison with Rosé and White Wines by Electrophoresis and High-Pressure Liquid Chromatography-Mass Spectrometry (HPLC-MS)[END_REF]. It must be noted, though, that in Champagne proteins can have a positive role in foamability.
Polyphenols are responsible for the differences between white and red wine, especially in the determination of the color, body, flavor and structure of red wine [START_REF] Pérez-Magariño | Polyphenols and colour variability of red wines made from grapes harvested at different ripeness grade[END_REF]. Among the diversity of polyphenolic compounds, stilbenes and derivatives have been largely studied; in particular resveratrol (RESV), which has antioxidant, bactericidal, anti-inflammatory and vitamin properties, related to its protective effect against cardiovascular diseases [START_REF] Cao | Non-covalent interaction between dietary stilbenoids and human serum albumin: Structure-affinity relationship, and its influence on the stability, free radical scavenging activity and cell uptake of stilbenoids[END_REF]. Even though it can also be found in food products and beverages such as peanut butter, mulberries and grape juice, red wine is believed to be the main source of resveratrol in the human diet [START_REF] Lu | Resveratrol, a natural product derived from grape, exhibits antiestrogenic activity and inhibits the growth of human breast cancer cells[END_REF][START_REF] Wenzel | Metabolism and bioavailability of trans-resveratrol[END_REF]. RESV (Figure 1 SI, in Supplementary information) is a stilbenoid natural pigment, which can be present in high concentration in grape skin but not in grape flesh, so red wine contains significant amounts of resveratrol compared to white wine, with concentrations ranging from 0.1 to 3.0 mg.L-1 depending on the grape variety and the vinification process [START_REF] Minussi | Phenolic compounds and total antioxidant potential of commercial wines[END_REF][START_REF] Lucas-Abellán | Cyclodextrins as resveratrol carrier system[END_REF]. It has been reported in the literature that the complexation of resveratrol with macromolecules can modulate its bioavailability and stability as well as its antioxidant effect [START_REF] Lucas-Abellán | Cyclodextrins as resveratrol carrier system[END_REF][START_REF] Richard | Recognition characters in peptide-polyphenol complex formation[END_REF].

Different fining agents in winemaking are reported in the literature, but clay minerals rich in montmorillonite, such as bentonite, remain heavily used for white wines. Bentonite behaves as a stabilizer by adsorbing wine proteins, which typically have an isoelectric point (IP) between 3 and 9 and a MW between 20-70 kDa [START_REF] Achaerandio | Protein Adsorption by Bentonite in a white wine model solution: Effect of Protein Molecualr Weight and Ethanol Concentration[END_REF]. Montmorillonite displays a layered structure of the 2:1 type, ideally formed by two sheets of tetrahedrally coordinated silicon linked through a sheet of octahedrally coordinated aluminum. Al3+ and Mg2+ generally occupy the octahedral sites, whereas hydrated exchangeable Na+ or Ca2+ cations are present in the interlayers to balance the negative layer charge [START_REF] Jaber | Synthesis, characterization and applications of 2:1 phyllosilicates and organophyllosilicates: Contribution of fluoride to study the octahedral sheet[END_REF][START_REF] Brigatti | Chapter 2 Structures and Mineralogy of Clay Minerals[END_REF][START_REF] Jaber | Mercaptopropyl Al-Mg phyllosilicate: synthesis and characterization by XRD, IR, and NMR[END_REF].
Adsorption of proteins onto clay minerals has been widely reported in the literature [START_REF] Assifaoui | Structural Studies of Adsorbed Protein (Betalactoglobulin) on Natural Clay (Montmorillonite)[END_REF][START_REF] Yu | Adsorption of proteins and nucleic acids on clay minerals and their interactions: A review[END_REF][START_REF] Servagent-Noinville | Conformational Changes of Bovine Serum Albumin Induced by Adsorption on Different Clay Surfaces: FTIR Analysis[END_REF]. It may involve several types of physical and chemical interactions, such as cation exchange, electrostatic forces or hydrogen bonding, on the interlayer surface of clay minerals, on the edges, or both. The positive charge of the protein when the pH is below the IP allows adsorption on the negatively charged surface of montmorillonite due to electrostatic forces. Various possible mechanisms can describe the interaction between proteins and clays: intercalation, exfoliation or both [START_REF] Assifaoui | Structural Studies of Adsorbed Protein (Betalactoglobulin) on Natural Clay (Montmorillonite)[END_REF]. Extensive penetration of protein chains into the interlayer space of clays can lead to exfoliation or delamination of the silicate layers [START_REF] Kumar | Effect of Type and Content of Modified Montmorillonite on the Structure and Properties of Bio-Nanocomposite Films Based on Soy Protein Isolate and Montmorillonite[END_REF][START_REF] Gopakumar | Influence of clay exfoliation on the physical properties of montmorillonite / polyethylene composites[END_REF][START_REF] Cai | Adsorption of DNA on clay minerals and various colloidal particles from an Alfisol[END_REF].

The use of clay minerals for the fining treatment of wine may induce not only adsorption of the protein but also of other molecules. As reported in the literature, there may be a decrease in the amount of resveratrol in the wine due to interactions with fining agents [START_REF] Donovan | Effects of Small-Scale Fining on the Phenolic Composition and Antioxidant Activity of Merlot Wine[END_REF][START_REF] Threlfall | Effects of fining agents on transresveratrol concentration in wine[END_REF], or an interaction of resveratrol with the protein due to electrostatic forces, hydrophobic interactions and/or hydrogen bonds between these two compounds [START_REF] Liang | Interaction of b-lactoglobulin with resveratrol and its biological implications[END_REF][START_REF] N′ Soukpoé-Kossi | Resveratrol Binding to Human Serum Albumin[END_REF].

Bovine serum albumin (BSA, Figure 2 SI in Supplementary information), with a MW of 66.5 kDa, is used as the model protein for this study. It is characterized by a globular structure and is the most abundant soluble plasma protein, with a typical circulating concentration of 0.6 mmol.L-1. In addition, it is extensively used in biochemical studies due to its wide availability and high structural resemblance with human serum albumin [START_REF] Bourassa | Binding sites of resveratrol, genistein, and curcumin with milk α-And β-caseins[END_REF][START_REF] Carter | Structure of serum albumins[END_REF]. Resveratrol has a strong interaction with albumin. Indeed, several molecules can be bound in the hydrophobic pocket of albumin with an affinity constant around 2x10^4 M-1 or 5x10^4 M-1.
This binding occurs via both hydrophilic and hydrophobic interactions and/or hydrogen bonding [START_REF] Bourassa | Binding sites of resveratrol, genistein, and curcumin with milk α-And β-caseins[END_REF][START_REF] Nair | Biology Spectroscopic study on the interaction of resveratrol and pterostilbene with human serum albumin[END_REF][START_REF] Jiang | Study of the interaction between trans-resveratrol and BSA by the multi-spectroscopic method[END_REF]. The aim of the present work is to study the co-adsorption of a model protein, BSA, and a polyphenol of interest, resveratrol, onto a synthetic montmorillonite (Mt) in a hydroalcoholic solution, in order to probe, via a multi-technique approach, the presence and the nature of the interactions between the clay mineral, the protein and the polyphenolic compound present in wine.

2. Experimental part

Materials

Bovine Serum Albumin (>99%) and resveratrol (>99%) were purchased from Sigma-Aldrich. Potassium phosphate monobasic (>99%) buffer (0.1 M) and ethanol (solvent system) were used to solubilize the protein and the polyphenol. An approximation of a model wine solution was made by mixing 25% distilled water, 50% phosphate buffer and 25% ethanol (v/v ratio) at pH = 4.5 (solvent system used for the BSA and RESV solutions). For the synthesis of the montmorillonite, the following reagents were used: Aerosil 130 (Evonik Industries) as the silica source, boehmite (AlOOH, 74% Al2O3, Sasol Germany), magnesium acetate tetrahydrate (Sigma-Aldrich), sodium acetate (Sigma-Aldrich) and hydrofluoric acid.

Synthesis of montmorillonite

For the synthesis of sodium montmorillonite, the reagents were mixed in the following order: deionized water, hydrofluoric acid and the sources of the interlamellar cation: sodium acetate, magnesium acetate, alumina and silica. The hydrogels were aged under stirring at room temperature for 2 h and then autoclaved at 220 °C for 72 h. The autoclaves were cooled to room temperature and the products were washed thoroughly with distilled water and centrifuged. The solids were then dried at 50 °C for 24 h [START_REF] Jaber | Selectivities in Adsorption and Peptidic Condensation in the (Arginine and Glutaminic Acid)/Montmorillonite Clay System[END_REF][START_REF] Tangaraj | Adsorption and photophysical properties of fluorescent dyes over montmorillonite and saponite modified by surfactant[END_REF].

Adsorption of proteins

Adsorption experiments were carried out by adding dropwise a BSA solution, dissolved in the above-described solvent system, to an aqueous suspension of the clay mineral (1.2 mg.mL-1). The initial concentration of BSA (1.0 mg.mL-1) was determined before addition of montmorillonite. The mixture was stirred at room temperature for 4 hours. After that, the suspension was centrifuged at 2700 rpm for 8 minutes. Both supernatants and solids were collected and analyzed. Solids were washed with aqueous solution to remove weakly adsorbed BSA and dried at room temperature. The amount of protein remaining in the supernatant was determined by UV-Vis spectroscopy, considering the absorption maximum at 281 nm. All measurements were performed in triplicate. The concentration of adsorbed protein on montmorillonite was determined by:
$$\Gamma_t = \frac{V\,(C_0 - C_t)}{m} \qquad (1)$$
where V is the total volume of the protein solution, C0 is the initial concentration of protein, Ct is the concentration of protein in the supernatant at time t, and m is the mass of the clay mineral. With V (mL), C0 and Ct (mg.mL-1), the interfacial concentration Γt of protein is given in mg of protein per mg of clay mineral (mg.mgMt-1).
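A minimal sketch of the mass-balance calculation of equation (1); this is our illustration, and the Beer-Lambert calibration slope and all numbers are placeholders, not values from this study:

```python
def interfacial_concentration(V_mL, C0, Ct, m_mg):
    """Gamma_t = V (C0 - Ct) / m, eq. (1); result in mg of protein per mg of clay."""
    return V_mL * (C0 - Ct) / m_mg

# hypothetical numbers for illustration only
slope = 0.65                      # calibration: absorbance per (mg/mL) at 281 nm
A281 = 0.26                       # measured supernatant absorbance
Ct = A281 / slope                 # Beer-Lambert, assuming the linear range
print(interfacial_concentration(V_mL=10.0, C0=1.0, Ct=Ct, m_mg=12.0))
```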
2.4. Adsorption of resveratrol onto sodium clay
The polyphenol adsorption experiments were performed as described for the protein. The initial concentration of RESV (0.5 mg.mL⁻¹) was determined before the addition of montmorillonite. The amount of resveratrol remaining in the supernatant was determined by UV-Vis spectroscopy at the absorption maximum of 306 nm.

2.5. Simultaneous adsorption of polyphenol and BSA onto sodium clay
BSA (1.0 mg.mL⁻¹) was dissolved in the solvent system described above. Separately, resveratrol (0.5 mg.mL⁻¹) was dissolved in the same solvent system. The solution containing RESV was then added dropwise to the BSA solution, which was kept under constant stirring for 2 hours. The resulting solution was added dropwise to the aqueous suspension of the clay mineral (concentration of 1.2 mg.mL⁻¹) under constant stirring for 2 hours. The resulting mixture was stirred for 4 hours at room temperature. The resulting slurry was washed and centrifuged to remove the excess of protein and polyphenol. The solid was separated by centrifugation and dried at room temperature.

2.6. UV-Visible spectroscopy
Ultraviolet-visible (UV-Vis) absorption spectra were recorded using an Ocean Optics DH-2000-BAL setup. The light source was a deuterium and halogen lamp, equipped with 400 µm diameter optical fibers coupled to a CUV 1 cm cuvette holder. Spectra were acquired with the OceanView spectroscopy software. The absorbance wavelength range was from 250 to 900 nm.

2.7. X-ray diffraction (XRD)
Powder X-ray diffractograms were recorded using a D8 Advance Bruker-AXS powder X-ray diffractometer with Cu Kα radiation (λ = 1.5405 Å). The diffraction patterns were measured between 5 and 70° (2θ) with a scan rate of 0.5 deg.min⁻¹. The active area of the detector was limited as much as possible in order to reduce the background scattering at low angle, between 1 and 10° (2θ). Samples were kept under controlled humidity for 72 h before the measurements.

2.8. Transmission electron microscopy (TEM)
The TEM study of the samples was performed on a JEOL 2010 microscope (200 kV, LaB6) coupled to an Orius camera (Gatan). Samples in the form of bulk powders were suspended in ethanol and then deposited on 400 mesh copper grids covered with an ultrathin carbon membrane of 2-3 nm thickness.

2.9. Attenuated Total Reflectance (ATR) infrared spectroscopy
Fourier transform infrared spectroscopy (FTIR) of the solid samples was performed on an Agilent Cary 630 FTIR spectrometer using the Agilent diamond attenuated total reflectance (ATR) technique. Spectra were acquired over the 4000-650 cm⁻¹ range and processed with the Microlab FTIR software (Agilent Technologies).

2.10. Thermal analysis
Thermal analyses were carried out using a TA Instruments SDT Q600 apparatus (Q600-1694) and the TA Universal Analysis 2000 software; the heating rate was 5 °C.min⁻¹ from 25 °C to 1000 °C, with a flow rate of 10 mL.min⁻¹, in air atmosphere.

2.11. Solid-state ¹³C CP-MAS NMR
Solid-state cross-polarization magic-angle spinning carbon-13 nuclear magnetic resonance (¹³C CP-MAS NMR) [START_REF] Minussi | Phenolic compounds and total antioxidant potential of commercial wines[END_REF] spectra were obtained on a Bruker Avance 500 spectrometer operating at 500 MHz (¹H) and 125 MHz (¹³C) with a 4 mm H-X MAS probe.
Chemical shifts were calibrated using the CH₂ signal of adamantane (38.52 ppm) as an external standard. The CP spectra were acquired with a MAS rate of 14 kHz, an acquisition time of 40 ms, a ramped CP contact time of 1 ms, a recycle delay of 1 s and a ¹H SPINAL-64 decoupling sequence. The number of scans used to obtain the spectra depended on the S/N obtained for each sample. Spectra were processed with a zero-filling factor of 2 and with an exponential decay corresponding to 25 Hz line broadening. Only spectra with the same line broadening were directly compared. The decomposition of the spectra was performed using the Dmfit-2015 software.

2.12. Fluorescence spectroscopy
Steady-state and time-resolved fluorescence spectra were obtained by the time-correlated single-photon counting technique. The excitation was achieved using a SuperK EXTREME laser (NKT Photonics, model EXR-15) as a pulsed continuum source combined with a SuperK EXTEND-UV supercontinuum unit (NKT Photonics, model DUV); the wavelength was selected by coupling to a monochromator (Jobin-Yvon H10). The repetition rate was set to 19.4 MHz; the excitation pulse duration on this device is around 6 ps (full width at half maximum, FWHM). The fluorescence emission was detected, after passing through a polarizer oriented at the magic angle (54.73°) with respect to the polarization of the excitation, through a double monochromator (Jobin-Yvon DH10) on a hybrid PMT detector (HPM-100-40, Becker & Hickl). The instrumental response function of the equipment was measured using a dilute suspension of polystyrene nanospheres in water (70 nm in diameter) as a scattering solution; it was typically about 130-160 ps FWHM. Decays were collected at a maximum counting rate of 17 kHz into 4096 channels using an SPC-730 acquisition card (Becker & Hickl). This limiting count rate was achieved by dilution of the sample in water and after sedimentation of the suspension, in order to minimize as much as possible the scattering by the particles. The time per channel was set to around 6 ps ch⁻¹ in order to fit a full decay in the experimental time window. Decay analysis was performed using a Levenberg-Marquardt algorithm. For the analysis, the fluorescence decay law at the magic angle, IM(t), was assumed to be a sum of exponentials. We assumed a Poisson distribution of counts in the calculation of the χ² criterion; the residual profiles and autocorrelation function, as well as the Durbin-Watson and skewness factors, were used to estimate the quality of the fit. The number of exponentials used for the fit was increased until all the statistical criteria were improved. All details about the calculation of both lifetimes are given elsewhere [START_REF] Balme | Highly efficient fluorescent label unquenched by protein interaction to probe the avidin rotational motion[END_REF].

3. Results and Discussion

3.1. BSA and RESV adsorption onto sodium clay
The UV-Vis spectra of BSA and RESV before and after adsorption on montmorillonite are shown in Figure 1. The intensity of the band maximum of BSA at 281 nm (Figure 1(a)) decreases after adsorption on montmorillonite. RESV shows a band maximum at 306 nm (Figure 1(b)) and follows the same trend as BSA. The influence of the contact duration was also investigated. The adsorption kinetics curves (C0 = 1.0 mg.mL⁻¹) are reported in Figure 3 SI in the Supplementary information; similar behavior was observed for both the protein and resveratrol. In the present study, the Langmuir isotherm model adequately describes the adsorption data, as widely reported for protein adsorption on clay minerals [START_REF] Bajpai | Study on the adsorption of hemoglobin onto bentonite clay surfaces[END_REF]. The pseudo-second-order kinetic model can be linearized according to the equation:

    t/Γt = 1/(k₂Γe²) + t/Γe    (2)

where Γt and Γe are the interfacial concentrations (mg.mgMt⁻¹) at time t and at equilibrium, respectively, and k₂ is the equilibrium rate constant of pseudo-second-order adsorption (mg.mgMt⁻¹.min⁻¹). The pseudo-second-order equation was used to analyze the adsorption kinetic behavior of the protein and the polyphenol on montmorillonite, as reported in Figure 4 SI in the Supplementary information. The slope and intercept of the plot of t/Γt versus t were used to calculate Γe and the rate constant k₂. The values of Γe (maximum interfacial concentration) and k₂ (adsorption kinetic constant) are reported in Table I. The k₂ value indicates the affinity of the protein or the polyphenol for the inorganic support; in the present case, the value of k₂ = 0.16 obtained for BSA demonstrates the high affinity of the protein for the surface of the montmorillonite [START_REF] Assifaoui | Structural Studies of Adsorbed Protein (Betalactoglobulin) on Natural Clay (Montmorillonite)[END_REF][START_REF] Lepoitevin | BSA and lysozyme adsorption on homoionic montmorillonite: Influence of the interlayer cation[END_REF].
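In practice, Γe and k₂ are read off a straight-line fit of the linearized form of Eq. (2): plotting t/Γt against t gives a slope of 1/Γe and an intercept of 1/(k₂Γe²). The sketch below illustrates the procedure on made-up data points; the experimental values are those reported in Table I below.

```python
import numpy as np

# Hypothetical kinetics data (t in min, Gamma_t in mg per mg of clay).
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 240.0])
gamma = np.array([0.35, 0.52, 0.66, 0.76, 0.80, 0.84, 0.86])

# Linearized pseudo-second-order model: t/Gamma_t = 1/(k2*Ge**2) + t/Ge.
slope, intercept = np.polyfit(t, t / gamma, 1)
Ge = 1.0 / slope             # equilibrium interfacial concentration
k2 = slope**2 / intercept    # from intercept = 1/(k2 * Ge**2)
print(f"Gamma_e = {Ge:.2f} mg/mgMt, k2 = {k2:.2f} mg/mgMt/min")
```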
Table I. Kinetic parameters obtained from the pseudo-second-order fit for BSA and resveratrol adsorption on montmorillonite.

    Parameter                 Resveratrol   BSA
    Γe (mg.mgMt⁻¹)            0.32          0.87
    k₂ (mg.mgMt⁻¹.min⁻¹)      1.90          0.16

3.2. Localization of the organic molecules in the clay mineral
XRD patterns of the montmorillonite and of all the hybrid composites are reported in Figure 2. The X-ray pattern of the raw montmorillonite shows reflections typical of a dioctahedral smectite (d060 = 0.149 nm) [START_REF] Lepoitevin | BSA and lysozyme adsorption on homoionic montmorillonite: Influence of the interlayer cation[END_REF][START_REF] Reinholdt | Synthesis and characterization of montmorillonite-type phyllosilicates in a fluoride medium[END_REF]. The d001 value is about 1.26 nm before adsorption, corresponding to the thickness of the layer and the presence of hydrated sodium in the interlayer space. Upon adsorption of the protein, the d001 peak disappears while the other characteristic reflection peaks of montmorillonite remain intact; hence the hypothesis of a possible partial exfoliation or delamination can be considered [START_REF] Assifaoui | Structural Studies of Adsorbed Protein (Betalactoglobulin) on Natural Clay (Montmorillonite)[END_REF]. After adsorption of resveratrol, the interbasal spacing of the clay mineral cannot be detected, which makes it difficult to evaluate the degree of intercalation of the polyphenol, possibly non-uniformly distributed between the montmorillonite layers. After adsorption of BSA and RESV together, it is likewise not possible to measure the interbasal spacing d001, which may suggest partial or total exfoliation of the montmorillonite layers upon inclusion of the protein and the polyphenol. These studies are complemented by the TEM analysis below. The presence of a crystalline phase of resveratrol is common: characteristic peaks of the drug at 2θ = 16.5, 22.6, 23.8 and 30.1° were observed in the RESV-Mt sample, and at 16.4, 22.5 and 28.5° (2θ) in BSA-RESV-Mt. The shift of the d(001) reflection in the RESV-Mt sample is probably due to a different hydration state of the interlayer space. In the pattern of the BSA-Mt sample, the (001) reflection is not visible, probably owing to heterogeneity in the layer stacking or to a delamination of the layers [START_REF] Bertacche | Host-guest interaction study of resveratrol with natural and modified cyclodextrins[END_REF][START_REF] Popova | Preparation of resveratrol-loaded nanoporous silica materials with different structures[END_REF]. These results will be confirmed by TEM.

3.3. Transmission electron microscopy (TEM)
TEM micrographs show layered structures with alternating dark and bright fringes, with a repeat length of 1.26 nm for the raw clay (the synthetic clay without modification). As shown in Figure 3(a) and 3(b), after BSA adsorption on Mt the measured d001 value reaches 3.2 nm, indicating the intercalation and partial delamination of the protein in the interlayer space of the Mt. After resveratrol adsorption, the d001 value varies from 1.5 to 1.85 nm, attesting to the heterogeneous incorporation of resveratrol (Figure 3(c) and 3(d)), in agreement with the XRD results. The sample containing both resveratrol and BSA presents different populations of layers: some are exfoliated and others are intercalated, with a d-spacing of the BSA-RESV-Mt composite varying between 2.96 and 3.9 nm. From these results, the incorporation of BSA and RESV together is supposed to cause exfoliation of the clay mineral, as shown in Figure 3(e) and 3(f).
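The d-spacings discussed above and the low-angle window of Figure 2 are related by Bragg's law, d = λ/(2 sin θ), with the Cu Kα wavelength given in Section 2.7; the sketch below is a quick sanity check of where the (001) reflections are expected.

```python
from math import asin, degrees, radians, sin

lam = 0.15405  # Cu K-alpha wavelength in nm (Section 2.7)

def two_theta_from_d(d_nm):
    """Bragg's law with n = 1: returns the 2-theta position in degrees."""
    return 2.0 * degrees(asin(lam / (2.0 * d_nm)))

def d_from_two_theta(tt_deg):
    return lam / (2.0 * sin(radians(tt_deg / 2.0)))

print(f"d001 = 1.26 nm -> 2theta = {two_theta_from_d(1.26):.1f} deg")  # ~7.0
print(f"d001 = 3.20 nm -> 2theta = {two_theta_from_d(3.20):.1f} deg")  # ~2.8
print(f"2theta = 7.0 deg -> d = {d_from_two_theta(7.0):.2f} nm")       # ~1.26
```

Both reflections fall within the 1-10° (2θ) window of the low-angle insert of Figure 2, which is why the active area of the detector was limited to reduce the background scattering in that range.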
3.4. Spectroscopic characterization
The conformation of BSA and RESV before and after adsorption onto the clay mineral was studied using ATR infrared spectroscopy (Figure 4). The physisorbed water of montmorillonite is observed through the bending vibration at 1634 cm⁻¹ [START_REF] Georgelin | Inorganic phosphate and nucleotides on silica surface: Condensation, dismutation, and phosphorylation[END_REF], while the characteristic peak at 1015 cm⁻¹ is due to stretching vibrations of the Si-O groups [START_REF] Madejová | FTIR techniques in clay mineral studies[END_REF]. According to the literature [START_REF] Bourassa | Resveratrol, genistein, and curcumin bind bovine serum albumin[END_REF][START_REF] Roy | Spectroscopic and docking studies of the binding of two stereoisomeric antioxidant catechins to serum albumins[END_REF][START_REF] Liu | Molecular Modeling and Spectroscopic Studies on the Interaction of Transresveratrol with Bovine Serum Albumin[END_REF], the most intense protein peaks occur in the 1700-1600 cm⁻¹ and 1600-1500 cm⁻¹ regions, assigned to the amide I and amide II vibrations, respectively. The infrared absorption of resveratrol shows three characteristic intense bands (Figure 4(b)): at 1606 cm⁻¹, corresponding to aromatic C-C double-bond stretching; at 1585 cm⁻¹, assigned to olefinic C-C stretching; and at 1381 cm⁻¹, corresponding to ring C-C stretching [START_REF] Popova | Preparation of resveratrol-loaded nanoporous silica materials with different structures[END_REF][START_REF] Billes | Vibrational spectroscopy of resveratrol[END_REF]. Conformational changes of the BSA structure upon confinement onto montmorillonite were evidenced: shifts of the amide I and amide II bands from 1640 to 1643 cm⁻¹ and from 1521 to 1525 cm⁻¹ were observed (Figure 4(a)). This is consistent with a change in the secondary structure of the protein [START_REF] Servagent-Noinville | Conformational Changes of Bovine Serum Albumin Induced by Adsorption on Different Clay Surfaces: FTIR Analysis[END_REF][START_REF] Della Porta | RSC Advances Conformational analysis of bovine serum albumin adsorbed on halloysite nanotubes and kaolinite : a Fourier transform infrared spectroscopy study[END_REF]. Upon adsorption of RESV onto Mt, the characteristic RESV bands shift from 1606 to 1608 cm⁻¹, from 1585 to 1589 cm⁻¹ and from 1381 to 1386 cm⁻¹ (Figure 4(b)). The significant shift of this last band, corresponding to ring C-C stretching, suggests an interaction of RESV with the Mt surface [START_REF] Popova | Preparation of resveratrol-loaded nanoporous silica materials with different structures[END_REF]. The absorption bands characteristic of the BSA-RESV-Mt composite are shown in Figure 4(c). Shifts from 1640 to 1648 cm⁻¹ and from 1521 to 1524 cm⁻¹, as well as an increase in intensity, were observed for the amide I and amide II bands of the protein, respectively. Shifts from 1585 to 1588 cm⁻¹ and from 1381 to 1388 cm⁻¹ of the characteristic resveratrol bands after interaction with BSA and montmorillonite were also observed. This could be attributed to interactions of the protein and the polyphenol with montmorillonite, or it may suggest co-adsorption of resveratrol on the protein before interaction with the clay mineral. This information is complemented by the fluorescence analysis below.

3.5. Thermogravimetric analyses
The TG curve of the montmorillonite and its derivative are reported in Figure 5 SI in the Supplementary information. The curve of montmorillonite alone is characterized by three mass-loss steps associated with endothermal events. The first one, at 82 °C with a weight loss of 8.6%, corresponds to the release of water physically adsorbed on the surface; the weight loss of 1.4% at 184 °C is due to the departure of the interlayer water molecules. Finally, two peaks at 428 °C and 645 °C are due to dehydroxylation, with a weight loss of about 3.5% [START_REF] Xie | Thermal characterization of organically modified montmorillonite[END_REF]. Thermogravimetric analyses of BSA and RESV alone and of the BSA-RESV-Mt composite are reported in Figures 6 SI and 7 SI in the Supplementary information. The DTG curve of free BSA exhibits a broad mass loss at 61 °C related to dehydration, while two peaks observed at 275 °C and 316 °C are likely due to the thermal decomposition of the polypeptide chains of the protein, corresponding to a 55% mass loss.
In addition, a second thermal decomposition step of the organic matter is observed between 450 and 650 °C (mass loss of around 40%), with maxima at 493 °C and 617 °C, and includes the decomposition of the hard residues of the protein [START_REF] Duce | Loading of halloysite nanotubes with BSA, α-Lac and β-Lg: A Fourier transform infrared spectroscopic and thermogravimetric study[END_REF]. Free RESV is thermally stable under air flow up to about 260 °C, and its degradation shows two maxima, at 310 °C and 549 °C, with a total mass loss of around 90%. After adsorption of the protein on Mt, an increase of the decomposition temperature of BSA was observed: decomposition occurs at 308 °C and 537 °C (mass loss 22%). This is likely due to changes in the conformational structure of the protein upon interaction with the clay mineral. After adsorption of RESV on Mt, an increase in the thermal stability of the polyphenol was observed, with a maximum decomposition temperature of 428 °C and a mass loss of 14%. This may suggest intercalation of resveratrol or interactions with the external and/or internal surfaces of the clay. The peak observed at 632 °C, with a mass loss of 2.6%, is attributed to matrix dehydroxylation. The thermal analysis of the BSA-RESV-Mt composite shows mass-loss peaks shifted with respect to the single-component composites (Figure 7 SI in the Supplementary information). The differential peaks at 46 °C (mass loss 18%) and 219 °C (mass loss 3%) correspond to the removal of physisorbed water and to the dehydration of interlayer water, respectively. The progressive decomposition of the organic matter (mass loss 37%) is indicated by peak maxima at 316 °C and 489 °C. The significant enhancement in the thermal stability of the composites can be attributed to the exfoliation of the silicate layers, with intercalation of the protein-polyphenol complex preventing fast decomposition of the products. The results obtained are similar to those observed in the adsorption tests.
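That last comparison can be made quantitative with a rough back-of-the-envelope calculation: the organic loading implied by the TGA of the composite, under the crude assumptions that the 37% organic mass loss is entirely adsorbate and that the final residue is essentially clay, can be set against the equilibrium uptakes of Table I.

```python
# Rough consistency check (illustrative; assumptions stated above).
water_loss = 0.18 + 0.03    # physisorbed + interlayer water (composite TGA)
organic_loss = 0.37         # organic matter decomposition (composite TGA)
clay_residue = 1.0 - water_loss - organic_loss

tga_loading = organic_loss / clay_residue   # mg of organics per mg of clay
ads_loading = 0.87 + 0.32                   # Gamma_e(BSA) + Gamma_e(RESV), Table I
print(f"TGA estimate: {tga_loading:.2f} mg/mgMt vs adsorption: {ads_loading:.2f}")
```

The two figures agree within a factor of about 1.4, which is reasonable given that the co-adsorption experiment need not reach both single-component maxima and that the residue also contains partially dehydroxylated clay.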
3.6. ¹³C CP-MAS NMR
Figure 5(a) exhibits the spectra of the protein, the polyphenol and the composites prepared with the clay mineral. Upon adsorption onto Mt, the carbonyl peaks of BSA are shifted to 178.5, 175.8 and 173.0 ppm; for the BSA-RESV-Mt composite, a shift to 178.6, 175.7 and 173.7 ppm is observed. Figure 5(b) exhibits the decomposition of the spectra for BSA, BSA-Mt and BSA-RESV-Mt, as illustrated by the components attributed to the carbonyl peaks. It must also be noted that resveratrol is significantly co-adsorbed with BSA, as illustrated by the corresponding intense peaks in the spectrum of the BSA-RESV-Mt composite. Together, these results indicate that an interaction between the inorganic surface and the two organic molecules probably takes place, leading to a modification of the secondary structure of the protein.

3.7. Fluorescence spectroscopy
Fluorescence is an interesting method to characterize the interaction between resveratrol and the protein [START_REF] N′ Soukpoé-Kossi | Resveratrol Binding to Human Serum Albumin[END_REF][START_REF] Nair | Biology Spectroscopic study on the interaction of resveratrol and pterostilbene with human serum albumin[END_REF][START_REF] Jiang | Study of the interaction between trans-resveratrol and BSA by the multi-spectroscopic method[END_REF][START_REF] Balme | Structure, orientation and stability of lysozyme confined in layered materials[END_REF]. BSA contains two Trp residues: Trp212 is located within a hydrophobic binding pocket of the protein, and Trp134 is located on the surface of the molecule [START_REF] Bourassa | Resveratrol, genistein, and curcumin bind bovine serum albumin[END_REF][START_REF] Poklar | Interactions of different polyphenols with bovine serum albumin using fluorescence quenching and molecular docking[END_REF]. BSA and resveratrol thus exhibit different photophysical properties. The resveratrol absorbance spectrum covers the emission of BSA. This induces fluorescence quenching of BSA in the presence of resveratrol due to non-radiative energy transfer [START_REF] Xiao | Probing the interaction of trans-resveratrol with bovine serum albumin: A fluorescence quenching study with tachiya model[END_REF][START_REF] Cao | Interaction between trans-resveratrol and serum albumin in aqueous solution[END_REF]. The adsorption mechanism is not totally elucidated, since two assumptions can be formulated: (i) BSA and resveratrol are adsorbed independently, or (ii) BSA binds resveratrol. Under excitation at 290 nm, the fluorescence emission spectrum of the BSA-RESV-Mt composite is similar to that of RESV-Mt. At the maximum emission wavelength (390 nm), the average fluorescence lifetime (2.254 ns) is longer than that of RESV-Mt (under excitation at 340 nm). The emission band of BSA has disappeared, meaning that the Trp fluorescence is quenched by the polyphenol. In addition, we observe a decrease of the fluorescence lifetime, to 1.302 ns measured at 340 nm. The fluorescence quenching of BSA is explained by non-radiative energy transfer. The lifetime values of BSA-RESV-Mt (τ1 = 2.587 ns, τ2 = 0.680 ns, τ3 = 0.03 ns) differ from those of BSA-Mt, suggesting that the conformation of the adsorbed BSA is different in the presence of RESV. Both the fluorescence quenching and the different fluorescence lifetimes of BSA indicate that RESV can interact with the protein and quench its intrinsic fluorescence, and also confirm that a complex between resveratrol and BSA is formed [START_REF] Liu | Molecular Modeling and Spectroscopic Studies on the Interaction of Transresveratrol with Bovine Serum Albumin[END_REF]. Thus the most likely scenario is a binding of resveratrol by BSA before the adsorption onto montmorillonite.
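The average lifetimes quoted here and reported in Table II are consistent with a mean of the individual components weighted by their relative contributions, τav = Σᵢ (Yᵢ/100)·τᵢ. Since the exact estimator is not stated in the text, the sketch below simply checks that this definition reproduces the reported BSA and BSA-Mt values.

```python
# Weighted average lifetime, tau_av = sum(Y_i * tau_i) / 100.
# Component values are those of Table II; the weighting is an assumption.
samples = {
    "BSA":    ([6.287, 2.359, 0.189], [88.8, 10.7, 0.6]),  # reported: 5.833 ns
    "BSA-Mt": ([4.556, 1.553, 0.427], [71.8, 24.6, 3.7]),  # reported: 3.667 ns
}
for name, (taus, weights) in samples.items():
    tau_av = sum(t * y for t, y in zip(taus, weights)) / 100.0
    print(f"{name}: tau_av = {tau_av:.3f} ns")
```

For RESV-Mt the reported value (1.193 ns) deviates from this simple weighting, so a different estimator may have been used for that sample.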
Figure 1. UV-Vis absorption spectra of the supernatant solution during the adsorption of BSA (a) and RESV (b) on montmorillonite. Experimental conditions: BSA (1.0 mg.mL⁻¹), RESV (1.0 mg.mL⁻¹), Mt (1.0 mg.mL⁻¹) in phosphate buffer solution at pH 4.5 with a 25% ethanol ratio.
Figure 2. XRD diffractograms, with an insert at low angle (1-10° 2θ), of the montmorillonite (black), BSA adsorbed onto the clay (violet), RESV adsorbed onto the clay (pink) and the BSA-RESV-Mt composite (green).
Figure 3. Transmission electron micrographs (TEM) of BSA-Mt (a, b), RESV-Mt (c, d) and BSA-RESV-Mt (e, f). Experimental conditions: BSA (1.0 mg.mL⁻¹), RESV (0.5 mg.mL⁻¹), Mt (1.2 mg.mL⁻¹) in phosphate buffer solution at pH 4.5 with a 25% ethanol ratio.
Figure 4. ATR-IR spectra of (a) free BSA and BSA upon adsorption on Mt; (b) free RESV and RESV upon adsorption onto Mt; and (c) free BSA, free RESV and the BSA-RESV-Mt composite.
Figure 5(a). ¹³C CP-MAS NMR spectra (MAS frequency of 14 kHz) of free RESV, free BSA and the BSA-Mt, RESV-Mt and BSA-RESV-Mt composites.
Figure 5(b). Deconvoluted spectra for BSA alone, BSA upon adsorption on montmorillonite, and the BSA-RESV-Mt hybrid, as illustrated by the components attributed to the carbonyl peaks.
Figure 6(a). Fluorescence emission spectra of free BSA (black) and the RESV-Mt (green), BSA-Mt (red) and BSA-RESV-Mt (blue) hybrids.
Figure 6(b). Fluorescence decay of the BSA-Mt (red), RESV-Mt (green) and BSA-RESV-Mt (blue and violet) hybrids.
4. Conclusion
The use of montmorillonite as a fining agent in winemaking adsorbs proteins, thus decreasing turbidity or haze formation, and can modulate the polyphenol concentration of the wine. The results suggest a non-denatured intercalation of the protein in the interlayer space of montmorillonite by electrostatic forces and hydrogen bonding, causing delamination and/or partial exfoliation of the layers of the clay mineral, as observed by XRD and TEM. The direct interaction of resveratrol with montmorillonite can also be observed; it can be located on the surface and at the edges of the clay mineral. Extensive penetration of the protein and resveratrol together causes exfoliation of the lamellar structure of the montmorillonite. Time-resolved fluorescence experiments highlight the strong interaction between resveratrol and BSA and the changes in the environment of the amino acid residues due to energy transfer from Trp to resveratrol. This is the first example of the co-adsorption of a polyphenol and a protein in the presence of montmorillonite. Studies are currently in progress to transpose this work to a real wine solution.

Table II. Time-resolved fluorescence results for free BSA and the BSA-Mt, RESV-Mt and BSA-RESV-Mt hybrids.

    Sample    λEx (nm)  λEm (nm)  τ1 (ns) (Y1 %)  τ2 (ns) (Y2 %)  τ3 (ns) (Y3 %)  τav (ns)  χ²
    BSA       290       340       6.287 (88.8)    2.359 (10.7)    0.189 (0.6)     5.833     1.01
    BSA-Mt    290       340       4.556 (71.8)    1.553 (24.6)    0.427 (3.7)     3.667     1.18
    RESV-Mt   340       390       1.517 (33.4)    0.808 (42.4)    0.123 (24.2)    1.193     0.93

Acknowledgements
This work was supported by the Capes/Cofecub project (N 835/15), with the participation of the Molecular and Structural Archaeology Laboratory and the National School of Chemistry of Montpellier, in France, and of the Fuel and Material Laboratory and the Fast Solidification Laboratory, in Brazil.

Supplementary information:
Étienne Miquey

A Classical Sequent Calculus with Dependent Types

Dependent types are a key feature of the proof assistants based on the Curry-Howard isomorphism. It is well known that this correspondence can be extended to classical logic by enriching the language of proofs with control operators. However, control operators are known to misbehave in the presence of dependent types, unless dependencies are restricted to values. Moreover, while sequent calculi naturally support continuation-passing style interpretations, there is no such presentation of a language with dependent types. The main achievement of this paper is to give a sequent calculus presentation of a call-by-value language with a control operator and dependent types, and to justify its soundness through a continuation-passing style translation. We start from the call-by-value version of the λµ μ-calculus. We design a minimal language with a value restriction and a type system that includes a list of explicit dependencies to maintain type safety. We then show how to relax the value restriction and introduce delimited continuations to directly prove the consistency by means of a continuation-passing style translation. Finally, we relate our calculus to a similar system by Lepigre, and present a methodology to transfer properties from this system to our own.

1 INTRODUCTION

Control operators and dependent types

Originally created to deepen the connection between programming and logic, dependent types are now a key feature of numerous functional programming languages. From the point of view of programming, dependent types provide more precise types, and thus more precise specifications, for existing programs. From a logical perspective, they permit definitions of proof terms for statements like the full axiom of choice. Dependent types are provided by Coq and Agda, two of the most actively developed proof assistants. They both rely on constructive type theories: the calculus of inductive constructions for Coq [START_REF] Coquand | Inductively defined types[END_REF], and Martin-Löf's type theory for Agda [START_REF] Martin-Löf | Constructive mathematics and computer programming[END_REF]. Yet both systems lack support for classical logic, and more generally for side effects, which makes them impractical as programming languages. In practice, effectful languages give the programmer more explicit access to low-level control (that is, to the way the program is executed on the available hardware), and make some algorithms easier to implement. Common effects, such as the explicit manipulation of memory, the generation of random numbers and input/output facilities, are available in most practical programming languages (e.g., OCaml, C++, Python, Java). In 1990, Griffin discovered that the control operator call/cc (short for call with current continuation) could be typed by Peirce's law ((A → B) → A) → A [START_REF] Griffin | A formulae-as-type notion of control[END_REF], thus extending the formulas-as-types interpretation. Indeed, Peirce's law is known to imply, in an intuitionistic framework, all the other forms of classical reasoning (excluded middle, reductio ad absurdum, double negation elimination, etc.). This discovery opened the way for a direct computational interpretation of classical proofs, using control operators and their ability to backtrack.
Several calculi were born from this idea, for example Parigot's λµ-calculus [START_REF] Parigot | Proofs of strong normalisation for second order classical natural deduction[END_REF], Barbanera and Berardi's symmetric λ-calculus [START_REF] Barbanera | A symmetric lambda calculus for classical program extraction[END_REF], Krivine's λc-calculus [START_REF] Krivine | Realizability in classical logic. In interactive models of computation and program behaviour[END_REF] and Curien and Herbelin's λµ μ-calculus [START_REF] Curien | The duality of computation[END_REF]. Nevertheless, dependent types are known to misbehave in the presence of control operators and to lead to logical inconsistencies [START_REF] Herbelin | On the degeneracy of sigma-types in presence of computational classical logic[END_REF]. Since the same problem arises with a wider class of effects, it seems that we are facing the following dilemma: either we choose an effectful language (allowing us to write more programs) while accepting the lack of dependent types, or we choose a dependently typed language (allowing us to write finer specifications) and give up effects. Many works have tried to fill the gap between effectful programming languages and logic by accommodating weaker forms of dependent types with computational effects (e.g., divergence, I/O, local references, exceptions). Amongst other works, we can cite the recent works by Ahman et al. [START_REF] Danel Ahman | Dependent Types and Fibred Computational Effects[END_REF], by Vákár [START_REF] Vákár | A framework for dependent types and effects[END_REF][START_REF] Vákár | In Search of Effectful Dependent Types[END_REF] or by Pédrot and Tabareau, who proposed a systematic way to add effects to type theory [START_REF] Pédrot | An effectful way to eliminate addiction to dependence[END_REF]. Side effects (that is, impure computations in functional programming) are often interpreted by means of monads. Interestingly, control operators can be interpreted similarly through the continuation monad, but the continuation monad generally lacks the properties necessary to fit these frameworks. Although dependent types and classical logic have been deeply studied separately, the problem of accommodating both features in one and the same system has not found a completely satisfying answer yet. Recent works by Herbelin [START_REF] Herbelin | A constructive proof of dependent choice, compatible with classical logic[END_REF] and Lepigre [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF] proposed some restrictions on dependent types to make them compatible with a classical proof system, while Blot [START_REF] Blot | Hybrid realizability for intuitionistic and classical choice[END_REF] designed a hybrid realizability model where dependent types are restricted to an intuitionistic fragment.

Call-by-value and value restriction

In languages enjoying the Church-Rosser property (like the λ-calculus or Coq), the order of evaluation is irrelevant, and any reduction path will ultimately lead to the same value. In particular, the call-by-name and call-by-value evaluation strategies will always give the same result. However, this is no longer the case in the presence of side effects. Indeed, consider the simple case of a function applied to a term producing some side effects (for instance, increasing a reference).
In call-by-name, the computation of the argument is delayed until the time of its effective use, while in call-by-value the argument is reduced to a value before performing the application. If, for instance, the function never uses its argument, the call-by-name evaluation will not generate any side effect, and if it uses it twice, the side effect will occur twice (and the reference will have its value increased by two). On the contrary, in both cases the call-by-value evaluation generates the side effect exactly once (and the reference has its value increased by one). In this paper, we present a language following the call-by-value reduction strategy, which is as much a design choice as a goal in itself. Indeed, when considering a language with control operators (or other kinds of side effects), soundness often turns out to be subtle to preserve in call-by-value. The first issues with call-by-value in the presence of side effects were related to references [START_REF] Wright | Simple imperative polymorphism[END_REF] and polymorphism [START_REF] Harper | Polymorphic type assignment and CPS conversion[END_REF]. In both cases, a simple solution (but often an unnecessarily restrictive one in practice [START_REF] Garrigue | Relaxing the value restriction[END_REF][START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]) consists in the introduction of a value restriction for the problematic cases, thus restoring a sound type system. Recently, Lepigre presented a proof system providing dependent types and a control operator [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF], whose consistency is preserved by means of a semantical value restriction defined for terms that behave as values up to observational equivalence. In the present work, we will rather use a syntactic restriction to a fragment of proofs that allows slightly more than values. As we will see, the restriction that arises naturally coincides with the negative-elimination-free fragment of Herbelin's dPAω system [START_REF] Herbelin | A constructive proof of dependent choice, compatible with classical logic[END_REF].

A sequent calculus presentation

The main achievement of this paper is to give a sequent calculus presentation of a call-by-value language with classical control and dependent types, and to justify its soundness through a continuation-passing style translation. Our calculus is an extension of the λµ μ-calculus [START_REF] Curien | The duality of computation[END_REF] with dependent types. Amongst other motivations, such a calculus is close to an abstract machine, which makes it particularly suitable for defining CPS translations or for serving as an intermediate language for compilation [START_REF] Downen | Sequent calculus as a compiler intermediate language[END_REF]. As a matter of fact, the original motivation for this work was the design of a program translation for Herbelin's dPAω system (which already encompasses control operators and dependent types) to justify its soundness. However, this calculus was presented in a natural deduction style, making such a translation hard to obtain. We thus developed the framework presented in this paper to have at our disposal an intermediate language more suitable for a continuation-passing style translation. Additionally, while we consider in this paper the specific case of a calculus with classical logic, the sequent calculus presentation itself is responsible for another difficulty.
As we will see, the usual call-by-value strategy of the λµ μ-calculus causes subject reduction to fail, and this would already happen in an intuitionistic type theory. We claim that the solutions we give in this paper also work in the intuitionistic case. In particular, the system we develop might be a first step towards the adaptation of the well-understood continuation-passing style translations for ML to the design of a (dependently) typed compilation of a system with dependent types such as Coq.

Delimited continuations and CPS translation

The main challenge in designing a sequent calculus with dependent types lies in the fact that the natural relation of reduction one would expect in such a framework is not safe with respect to types. As we will discuss in Section 2.6, the problem can be understood as a desynchronization of the type system with respect to the reduction. A simple solution, presented in Section 2, consists in the addition of an explicit list of dependencies in typing derivations. This has the advantage of leaving the computational part of the original calculus unchanged. However, it is not suitable for obtaining a continuation-passing style translation. We thus present a second way to solve this issue, by introducing delimited continuations [START_REF] Ariola | A type-theoretic foundation of delimited continuations[END_REF], which are used to force the purity needed for dependent types in an otherwise non-purely functional language. It also justifies the relaxation of the value restriction and leads to the definition of the negative-elimination-free fragment (Section 3). In addition, it allows for the design, in Section 4, of a continuation-passing style translation that preserves dependent types and permits us to prove the soundness of our system. Finally, it also provides us with a way to embed our calculus into Lepigre's calculus [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF], as we shall see in Section 5. This embedding has in particular the benefit of furnishing us with a realizability interpretation for free.

Contributions of the paper

Our main contributions in this paper can be listed as follows:
• We soundly combine dependent types and control operators by means of a syntactic restriction to the negative-elimination-free fragment;
• We give a sequent calculus presentation and solve the type-soundness issues it raises in two different ways;
• Our first solution simply relies on a list of dependencies that is added to the type system;
• Our second solution uses delimited continuations to ensure consistency with dependent types, and provides us with a CPS translation (carrying dependent types) to a calculus without control operators;
• We relate our system to Lepigre's calculus, which gives us a realizability interpretation for free and offers an additional way of proving the consistency of our system.
This paper is an extended and revised version of the article presented at ESOP 2017 [START_REF] Miquey | A classical sequent calculus with dependent types[END_REF].

2 A MINIMAL CLASSICAL LANGUAGE WITH DEPENDENT TYPES

2.1 A short primer to the λµ μ-calculus

We recall here the spirit of the λµ μ-calculus; for further details and references, please refer to the original article [START_REF] Curien | The duality of computation[END_REF]. The syntax and reduction rules (parameterized over a subset of proofs V and a subset of evaluation contexts E) are given in Figure 1, where μa.c can be read as a context let a = [ ] in c.
A command ⟨p||e⟩ can be understood as a state of an abstract machine, representing the evaluation of a proof p (the program) against a co-proof e (the stack) that we call a context. The µ operator comes from Parigot's λµ-calculus [START_REF] Parigot | Proofs of strong normalisation for second order classical natural deduction[END_REF]: µα binds a context to a context variable α, in the same way that μa binds a proof to a proof variable a. The λµ μ-calculus can be seen as a proof-as-program correspondence between sequent calculus and abstract machines. Right introduction rules correspond to typing rules for proofs, while left introduction rules are seen as typing rules for evaluation contexts. In contrast with Gentzen's original presentation of sequent calculus, the type system of the λµ μ-calculus explicitly identifies at any time which formula is being worked on. In a nutshell, this presentation distinguishes between three kinds of sequents:
(1) sequents of the form Γ ⊢ p : A | ∆ for typing proofs, where the focus is put on the (right) formula A;
(2) sequents of the form Γ | e : A ⊢ ∆ for typing contexts, where the focus is put on the (left) formula A;
(3) sequents of the form c : (Γ ⊢ ∆) for typing commands, where no focus is set.
In a right (resp. left) sequent Γ ⊢ p : A | ∆, the singled out formula A reads as the conclusion "where the proof shall continue" (resp. hypothesis "where it happened before"). For example, the left introduction rule of implication can be seen as a typing rule for pushing an element q onto a stack e, leading to the new stack q • e:

    Γ ⊢ q : A | ∆    Γ | e : B ⊢ ∆
    ─────────────────────────────── (→l)
       Γ | q • e : A → B ⊢ ∆

As for the reduction rules, we can see that there is a critical pair if V and E are not restricted enough:

    c[α := μx.c′]  ←  ⟨µα.c || μx.c′⟩  →  c′[x := µα.c]

The difference between call-by-name and call-by-value can be characterized by how this critical pair is solved, by defining V and E such that the two rules do not overlap. Defining the subcategory V as the set of values while letting E contain every context gives priority to the µ rule and corresponds to call-by-value, whereas taking V to be all proofs and restricting E to co-values gives priority to the μ rule and corresponds to call-by-name (see the sketch below).
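To make this concrete, here is a small Python sketch (not from the paper) that implements just these two reduction rules over a toy term representation and shows the two strategies resolving the same critical pair differently; names are assumed globally distinct, so the naive substitutions below cannot capture.

```python
# Toy commands <p || e>: proofs are ("pvar", x) or ("mu", alpha, cmd);
# contexts are ("cvar", a) or ("mut", x, cmd) for a mu-tilde binder.

def subst_ctx(cmd, alpha, e):           # c[e/alpha]
    p, ctx = cmd
    return (p, e if ctx == ("cvar", alpha) else ctx)

def subst_proof(cmd, x, q):             # c[q/x]
    p, ctx = cmd
    return (q if p == ("pvar", x) else p, ctx)

def step(cmd, strategy):
    p, e = cmd
    # CBV: mu fires against any context; CBN: only against co-values.
    mu_fires = p[0] == "mu" and (strategy == "cbv" or e[0] != "mut")
    if mu_fires:                        # <mu a.c || e>  ~>  c[e/a]
        return subst_ctx(p[2], p[1], e)
    if e[0] == "mut":                   # <p || mut x.c>  ~>  c[p/x]
        return subst_proof(e[2], e[1], p)
    return cmd                          # no redex

# The critical pair <mu a.<y||b> || mut x.<z||g>> of Section 2.1:
c = (("mu", "a", (("pvar", "y"), ("cvar", "b"))),
     ("mut", "x", (("pvar", "z"), ("cvar", "g"))))
print(step(c, "cbv"))   # (('pvar', 'y'), ('cvar', 'b'))  -- mu wins
print(step(c, "cbn"))   # (('pvar', 'z'), ('cvar', 'g'))  -- mu-tilde wins
```

The value/co-value side conditions are all encoded in the single mu_fires test; everything else about the calculus is elided.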
Remark 2.1 (Application). The reader unfamiliar with the λµ μ-calculus might be puzzled by the absence of a syntactic construction for the application of proof terms. Intuitively, the usual application p q of the λ-calculus is replaced by the application of the proof p to a stack of the shape q • e, as in an abstract machine. The usual application can thus be recovered through the following shorthand: p q ≜ µα.⟨p||q • α⟩.

The typing rules of the calculus (Figure 1(c)) are the following:

    Γ ⊢ t : A | ∆    Γ | e : A ⊢ ∆
    ─────────────────────────────── (Cut)
          ⟨t||e⟩ : (Γ ⊢ ∆)

    (a : A) ∈ Γ                      (α : A) ∈ ∆
    ─────────────── (Axr)            ─────────────── (Axl)
    Γ ⊢ a : A | ∆                    Γ | α : A ⊢ ∆

    Γ, a : A ⊢ p : B | ∆             Γ ⊢ p : A | ∆    Γ | e : B ⊢ ∆
    ──────────────────────── (→r)    ─────────────────────────────── (→l)
    Γ ⊢ λa.p : A → B | ∆             Γ | p • e : A → B ⊢ ∆

    c : (Γ ⊢ ∆, α : A)               c : (Γ, a : A ⊢ ∆)
    ─────────────────── (µ)          ─────────────────── ( μ)
    Γ ⊢ µα.c : A | ∆                 Γ | μa.c : A ⊢ ∆

Finally, it is worth noting that the µ binder is a control operator, since it allows for catching evaluation contexts and backtracking further in the execution. This is the key ingredient that makes the λµ μ-calculus a proof system for classical logic. To illustrate this, let us draw the analogy with the call/cc operator of Krivine's λc-calculus [START_REF] Krivine | Realizability in classical logic. In interactive models of computation and program behaviour[END_REF]. Let us define the following proof terms:

    call/cc ≜ λa.µα.⟨a||kα • α⟩        ke ≜ λa′.µβ.⟨a′||e⟩

The proof ke can be understood as a proof term in which the context e has been encapsulated. As expected, call/cc is a proof of Peirce's law (see Figure 2), which is known to imply the other forms of classical reasoning (e.g., the law of excluded middle, double negation elimination).

Fig. 2. Proof term for Peirce's law: a derivation of ⊢ λa.µα.⟨a||λa′.µβ.⟨a′||α⟩ • α⟩ : ((A → B) → A) → A (where • is used to shorten useless parts of typing contexts).

Let us observe the behavior of call/cc (in a call-by-name evaluation strategy, as in Krivine's λc-calculus): in front of a context of the shape q • e with e of type A, it catches the context e thanks to the µα binder and reduces as follows:

    ⟨λa.µα.⟨a||kα • α⟩||q • e⟩ → ⟨q|| μa.⟨µα.⟨a||kα • α⟩||e⟩⟩ → ⟨µα.⟨q||kα • α⟩||e⟩ → ⟨q||ke • e⟩

We notice that the proof term ke = λa′.µβ.⟨a′||e⟩ on top of the stack (which, if e was of type A, is of type A → B, see Figure 2) contains a second binder µβ. In front of a stack q′ • e′, this binder will now catch the context e′ and replace it with the former context e:

    ⟨λa′.µβ.⟨a′||e⟩||q′ • e′⟩ → ⟨q′|| μa′.⟨µβ.⟨a′||e⟩||e′⟩⟩ → ⟨µβ.⟨q′||e⟩||e′⟩ → ⟨q′||e⟩

This computational behavior corresponds exactly to the usual reduction rules for call/cc in the Krivine machine [START_REF] Krivine | Realizability in classical logic. In interactive models of computation and program behaviour[END_REF]:

    call/cc ⋆ t • π ≻ t ⋆ kπ • π
    kπ ⋆ t • π′ ≻ t ⋆ π
2.2 Inconsistency of classical logic with dependent types

The simultaneous presence of classical logic (i.e., of a control operator) and dependent types is known to cause a degeneracy of the domain of discourse. Let us briefly recap the argument of Herbelin highlighting this phenomenon [START_REF] Herbelin | On the degeneracy of sigma-types in presence of computational classical logic[END_REF]. Let us adopt here a stratified presentation of dependent types, by syntactically distinguishing terms, which represent mathematical objects, from proof terms, which represent mathematical proofs. In other words, we syntactically separate the categories corresponding to witnesses and proofs in dependent sum types. Consider a minimal logic of strong existentials and equality, whose formulas, terms (representing only natural numbers) and proofs are defined as follows:

    Formulas  A, B ::= t = u | ∃x^N.A
    Terms     t, u ::= n | wit p          (n ∈ N)
    Proofs    p, q ::= refl | subst p q | (t, p) | prf p

Let us explain the different proof terms by presenting their typing rules. First of all, the pair (t, p) is a proof of an existential formula ∃x^N.A, where t is a witness for x and p is a certificate for A[t/x]. This implies that both formulas and proofs are dependent on terms, which is usual in mathematics. What is less usual in mathematics is that, as in Martin-Löf's type theory, dependent types also allow terms (and thus formulas) to depend on proofs, by means of the constructors wit p and prf p. Typing rules are given with separate typing judgments for terms, which can only be of type N:

    Γ ⊢ p : A(t)    Γ ⊢ t : N            Γ ⊢ p : ∃x^N.A
    ───────────────────────── (∃I)       ─────────────────────── (prf)
    Γ ⊢ (t, p) : ∃x^N.A                  Γ ⊢ prf p : A[wit p/x]

    Γ ⊢ p : ∃x^N.A                       n ∈ N
    ──────────────── (wit)               ─────────
    Γ ⊢ wit p : N                        Γ ⊢ n : N

Then, refl is a proof term for equality, and subst p q allows us to use a proof of an equality t = u to convert a formula A(t) into A(u):

    t → u                                Γ ⊢ p : t = u    Γ ⊢ q : B[t]
    ───────────────── (refl)             ─────────────────────────────── (subst)
    Γ ⊢ refl : t = u                     Γ ⊢ subst p q : B[u]

The reduction rules for this language, which are safe with respect to typing, are then:

    wit (t, p) → t        prf (t, p) → p        subst refl p → p

Starting from this (sound) minimal language, Herbelin showed that its classical extension with the control operators call/cck and throwk (which are similar to those presented in the previous section) permits the derivation of a proof of 0 = 1 [START_REF] Herbelin | On the degeneracy of sigma-types in presence of computational classical logic[END_REF]. The call/cck operator, which is a binder for the variable k, is intended to catch its surrounding evaluation context. On the contrary, throwk discards the current context and restores the context captured by call/cck. The addition to the type system of the typing rules for these operators:

    Γ, k : ¬A ⊢ p : A                    Γ, k : ¬A ⊢ p : A
    ─────────────────────                ───────────────────────────
    Γ ⊢ call/cck p : A                   Γ, k : ¬A ⊢ throwk p : B

allows the definition of the following proof:

    p0 ≜ call/cck (0, throwk (1, refl)) : ∃x^N.x = 1

Intuitively, such a proof catches the context, gives 0 as witness (which is incorrect), and provides a certificate that will backtrack and give 1 as witness (which is correct) with a proof of the equality. If, besides, the following reduction rules are added:

    wit (call/cck p) → call/cck (wit (p[k(wit { })/k]))
    call/cck t → t        (k ∉ FV(t))

then we can formally derive a proof of 1 = 0. Indeed, the term wit p0 reduces to call/cck 0, which itself reduces to 0. The proof term refl is thus a proof of wit p0 = 0, and we obtain a proof of 1 = 0 as follows: from ⊢ p0 : ∃x^N.x = 1, the (prf) rule gives ⊢ prf p0 : wit p0 = 1; since wit p0 → 0, the (refl) rule gives ⊢ refl : wit p0 = 0; and the (subst) rule then yields ⊢ subst (prf p0) refl : 1 = 0. The bottom line of this example is that the same proof p0 behaves differently in different contexts thanks to the control operators, causing an inconsistency between the witness and its certificate. The easiest and usual approach (in natural deduction) to prevent this is to impose a restriction to values (which are already reduced) for proofs appearing inside dependent types and within the operators wit and prf, together with a call-by-value discipline. In the present example, this would prevent us from writing wit p0 and prf p0.
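The context-dependence of p0 can be mimicked in any language with an escape mechanism. The following Python sketch (an illustration, not the paper's calculus) uses an exception as a crude one-shot continuation: projecting the witness alone yields 0, while forcing the certificate backtracks to the pair (1, refl), exactly the mismatch exploited above.

```python
class Throw(Exception):
    """Stands in for 'throw k': jump back to the captured context."""
    def __init__(self, value):
        self.value = value

def p0(force_certificate):
    """call/cc_k (0, throw_k (1, refl)), with try/except playing call/cc."""
    def certificate():
        raise Throw((1, "refl"))   # backtrack with the correct pair
    try:
        pair = (0, certificate)
        if force_certificate:
            certificate()          # inspecting 'prf p0' triggers the jump
        return pair
    except Throw as t:
        return t.value

print(p0(False)[0])   # 0            -- 'wit p0' sees the wrong witness
print(p0(True))       # (1, 'refl')  -- forcing the proof restores 1
```

The value restriction below rules this out by only allowing wit and prf on values, which cannot backtrack.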
2.3 A minimal language with value restriction

In this section, we focus on value restriction in a similar framework, and show that the obtained proof system is coherent. We will then see, in Section 3, how to relax this constraint. We follow here the stratified presentation from the previous section. We place ourselves in the framework of the λµ μ-calculus, to which we add:
• a language of terms containing an encoding of the natural numbers,
• proof terms (t, p) to inhabit the strong existential ∃x^N.A, together with the first and second projections, called respectively wit (for terms) and prf (for proofs),
• a proof term refl for the equality of terms and a proof term subst for the convertibility of types over equal terms.
For simplicity, we only consider terms of type N throughout this paper. We address the question of extending the domain of terms in Section 6.2. The syntax of the corresponding system, which we call dL, is given by:

    Terms         t ::= x | n | wit V                              (n ∈ N)
    Proof terms   p ::= V | µα.c | (t, p) | prf V | subst p q
    Proof values  V ::= a | λa.p | λx.p | (t, V) | refl
    Contexts      e ::= α | p • e | t • e | μa.c
    Commands      c ::= ⟨p||e⟩

The formulas are defined by:

    Formulas  A, B ::= ⊤ | ⊥ | t = u | ∀x^N.A | ∃x^N.A | Πa:A B

Note that we included a dependent product Πa:A B at the level of proof terms, but in the case where a ∉ FV(B) this amounts to the usual implication A → B.

2.4 Reduction rules

As explained in Section 2.2, a backtracking proof might give rise to different witnesses and proofs according to the context of reduction, leading to inconsistencies [START_REF] Herbelin | On the degeneracy of sigma-types in presence of computational classical logic[END_REF]. Substituting, at different places, a proof which can backtrack, as the call-by-name evaluation strategy does, is thus an unsafe operation. On the contrary, the call-by-value evaluation strategy forces a proof to reduce first to a value (thus furnishing a witness) and to share this value amongst all the commands. In particular, this maintains the value restriction along reduction, since only values are substituted. The reduction rules, defined in Figure 3 (where t → t′ denotes the reduction of terms and c ⇝ c′ the reduction of commands), follow the call-by-value evaluation principle. In particular, one can see that whenever a command is of the shape ⟨C[p]||e⟩, where C[p] is a proof built on top of p which is not a value, it reduces to ⟨p|| μa.⟨C[a]||e⟩⟩, opening the construction to evaluate p. Additionally, we denote by A ≡ B the transitive-symmetric closure of the relation A ▷ B, defined as a congruence over term reduction (i.e., if t → t′ then A[t] ▷ A[t′]) and by the rules:

    0 = 0 ▷ ⊤        0 = S(u) ▷ ⊥        S(t) = 0 ▷ ⊥        S(t) = S(u) ▷ t = u

Among the rules of Figure 3, one finds in particular ⟨µα.c||e⟩ ⇝ c[e/α].

2.5 Typing rules

As we explained before, in this section we limit ourselves to the simple case where dependent types are restricted to values, to make them compatible with classical logic. But even with this restriction, defining the type system in the most naive way leads to a system in which subject reduction fails. Having a look at the β-reduction rule gives us an insight into what happens. Let us imagine that the type system of the λµ μ-calculus has been extended to allow dependent products instead of implications, and consider a proof λa.p : Πa:A B in front of a context q • e : Πa:A B. A typing derivation of the corresponding command would be of the form:

    Πp: Γ, a : A ⊢ p : B | ∆, hence Γ ⊢ λa.p : Πa:A B | ∆    (→r)
    Πq: Γ ⊢ q : A | ∆ and Πe: Γ | e : B[q/a] ⊢ ∆, hence Γ | q • e : Πa:A B ⊢ ∆    (→l)
    ⟨λa.p||q • e⟩ : Γ ⊢ ∆    (Cut)

while this command would reduce as follows: ⟨λa.p||q • e⟩ ⇝ ⟨q|| μa.⟨p||e⟩⟩. On the right-hand side, we see that p, whose type is B[a], is now cut with e, whose type is B[q]. Consequently, we are no longer able to derive a typing judgment for this command:

    Πq: Γ ⊢ q : A | ∆
    Γ, a : A ⊢ p : B[a] | ∆    against    Γ, a : A | e : B[q] ⊢ ∆
    ⟨p||e⟩ : Γ, a : A ⊢ ∆    (mismatch between B[a] and B[q])
    Γ | μa.⟨p||e⟩ : A ⊢ ∆    ( μ)
    ⟨q|| μa.⟨p||e⟩⟩ : Γ ⊢ ∆    (Cut)

The intuition is that in the full command, a has been linked to q at a previous level of the typing judgment. However, the command is still safe, since the head reduction imposes that the command ⟨p||e⟩ will not be executed before the substitution of a by q is performed, and by then the problem would be solved.
This phenomenon can be seen as a desynchronization of the typing process with respect to computation. The synchronization can be re-established by making explicit a list of dependencies σ in the typing rules, which links μ variables (here a) to the associated proof term on the left-hand side of the command (here q). We can now obtain the following typing derivation:

    Πq: Γ ⊢ q : A | ∆
    Πp: Γ, a : A ⊢ p : B[a] | ∆
    Πe: Γ, a : A | e : B[q] ⊢ ∆; σ{a|q}{•|p}
    ⟨p||e⟩ : Γ, a : A ⊢ ∆; σ{a|q}    (Cut)
    Γ | μa.⟨p||e⟩ : A ⊢ ∆; σ{•|q}    ( μ)
    ⟨q|| μa.⟨p||e⟩⟩ : Γ ⊢ ∆; σ    (Cut)

Formally, we denote by D the set of proofs we authorize in dependent types, and define it for the moment as the set of values: D ≜ V. We define a list of dependencies σ as a list binding pairs of proof terms:

    σ ::= ε | σ{p|q}

and we define A_σ as the set of types that can be obtained from A by replacing all (or none) of the occurrences of p by q for each binding {p|q} in σ such that q ∈ D:

    A_ε ≜ {A}
    A_σ{p|q} ≜ A_σ ∪ (A[q/p])_σ    if q ∈ D
    A_σ{p|q} ≜ A_σ                 otherwise

The list of dependencies is filled while going up in the typing tree, and it can be used when typing a command ⟨p||e⟩ to resolve a potential inconsistency between their types:

    Γ ⊢ p : A | ∆; σ    Γ | e : B ⊢ ∆; σ{•|p}    B ∈ A_σ
    ────────────────────────────────────────────────────── (Cut)
    ⟨p||e⟩ : Γ ⊢ ∆; σ

Remark 2.2. The reader familiar with explicit substitutions [START_REF] Fridlender | Pure type systems with explicit substitutions[END_REF] can think of the list of dependencies as a fragment of the substitution that is available when a command c is reduced. Another remark is that the design choice for the (Cut) rule is arbitrary, in the sense that we chose to check whether B is in A_σ. We could equivalently have checked whether the condition σ(A) = σ(B) holds, where σ(A) refers to the type A in which, for each binding {p|q} ∈ σ with q ∈ D, all the occurrences of p have been replaced by q. Furthermore, when typing a stack with the (→l) and (∀l) rules, we need to drop the open binding in the list of dependencies. We introduce the notation Γ | e : A ⊢ ∆; σ{•|†} to denote that the dependency to be produced is irrelevant and can be dropped. This trick spares us from defining a second kind of sequent to type contexts when dropping the (open) binding {•|p}. Alternatively, one can think of † as any proof term not in D, which is the same with respect to the list of dependencies. The resulting set of typing rules is given in Figure 4, among which are for instance:

    (a : A) ∈ Γ                      (α : A) ∈ ∆
    ──────────────── (Axr)           ───────────────────── (Axl)
    Γ ⊢ a : A | ∆; σ                 Γ | α : A ⊢ ∆; σ{•|p}

    c : (Γ ⊢ ∆, α : A; σ)
    ────────────────────── (µ)
    Γ ⊢ µα.c : A | ∆; σ

In Figure 4, we assume that every variable bound in the typing context is bound only once (proofs and contexts are considered up to α-conversion). Note that we work with two-sided sequents here to stay as close as possible to the original presentation of the λµ μ-calculus [START_REF] Curien | The duality of computation[END_REF]. In particular, this means that a type in ∆ might depend on a variable previously introduced in Γ and vice versa, so that the split into two contexts makes us lose track of the order of introduction of the hypotheses. In the sequel, to be able to properly define a typed CPS translation, we consider that we can unify both contexts into a single one that is coherent with respect to the order in which the hypotheses have been introduced. Example 2.3.
Example 2.3. The proof p 1 ≜ subst (prf p 0 ) refl which was of type 1 = 0 in Section 2.2 is now incorrect since the backtracking proof p 0 , defined by µα .⟨(0, µ_.⟨(1, refl)||α⟩)||α⟩ in our framework, is not a value in D. The proof p 1 should rather be defined by µα .⟨p 0 || μa.⟨subst (prf a) refl||α⟩⟩, which can only be given the type 1 = 1.

Subject reduction We start by giving a few technical lemmas that will be used for proving subject reduction. First, we will show that typing derivations allow weakening on the lists of dependencies. For this purpose, we introduce the notation σ ⇛ σ ′ to denote that whenever a judgment is derivable with σ as list of dependencies, then it is derivable using σ ′ :

σ ⇛ σ ′ ≜ ∀c ∀Γ ∀∆.(c : (Γ ⊢ ∆; σ ) ⇒ c : (Γ ⊢ ∆; σ ′ )).

This clearly implies that the same property holds when typing evaluation contexts, i.e. if σ ⇛ σ ′ then σ can be replaced by σ ′ in any typing derivation for any context e.

Lemma 2.4 (Dependencies weakening). For any list of dependencies σ we have:
1. ∀V .(σ {V |V } ⇛ σ )
2. ∀σ ′ .(σ ⇛ σσ ′ )

Proof. The first statement is obvious. The proof of the second one is straightforward from the fact that for any p and q, by definition A σ ⊂ A σ {p |q } . □

As a corollary, we get that † can indeed be replaced by any proof term when typing a context.

Corollary 2.5. If σ ⇛ σ ′ , then for any p, e, Γ, ∆: Γ | e : A ⊢ ∆; σ {•| †} ⇒ Γ | e : A ⊢ ∆; σ ′ {•|p}.

Proof. Assume that e is of the form μa.c (other cases are trivial), then we have c : Γ ⊢ ∆; σ {a| †}. By definition of † and from the hypothesis, we get that σ {a| †} ⇛ σ ′ , i.e. that c : Γ ⊢ ∆; σ ′ is derivable. By applying the previous Lemma, we get that c : Γ ⊢ ∆; σ ′ {a|p} is derivable for any proof p, whence the result. □

We then state the usual lemmas that guarantee the safety of term (resp. value, context) substitution.

Lemma 2.6 (Safe term substitution). If Γ ⊢ t : N | ∆; ε then:
(1) c : (Γ, x : N, Γ ′ ⊢ ∆; σ ) ⇒ c[t/x] : (Γ, Γ ′ [t/x] ⊢ ∆[t/x]; σ [t/x]),
(2) Γ, x : N, Γ ′ ⊢ q : B | ∆; σ ⇒ Γ, Γ ′ [t/x] ⊢ q[t/x] : B[t/x] | ∆[t/x]; σ [t/x],
(3) Γ, x : N, Γ ′ | e : B ⊢ ∆; σ ⇒ Γ, Γ ′ [t/x] | e[t/x] : B[t/x] ⊢ ∆[t/x]; σ [t/x],
(4) Γ, x : N, Γ ′ ⊢ u : N | ∆; σ ⇒ Γ, Γ ′ [t/x] ⊢ u[t/x] : N | ∆[t/x]; σ [t/x].

Lemma 2.7 (Safe value substitution). If Γ ⊢ V : A | ∆; ε then:
(1) c : (Γ, a : A, Γ ′ ⊢ ∆; σ ) ⇒ c[V /a] : (Γ, Γ ′ [V /a] ⊢ ∆[V /a]; σ [V /a]),
(2) Γ, a : A, Γ ′ ⊢ q : B | ∆; σ ⇒ Γ, Γ ′ [V /a] ⊢ q[V /a] : B[V /a] | ∆[V /a]; σ [V /a],
(3) Γ, a : A, Γ ′ | e : B ⊢ ∆; σ ⇒ Γ, Γ ′ [V /a] | e[V /a] : B[V /a] ⊢ ∆[V /a]; σ [V /a],
(4) Γ, a : A, Γ ′ ⊢ u : N | ∆; σ ⇒ Γ, Γ ′ [V /a] ⊢ u[V /a] : N | ∆[V /a]; σ [V /a].

Lemma 2.8 (Safe context substitution). If Γ | e : A ⊢ ∆; ε then:
(1) c : (Γ ⊢ ∆, α : A, ∆ ′ ; σ ) ⇒ c[e/α] : (Γ ⊢ ∆, ∆ ′ ; σ ),
(2) Γ ⊢ q : B | ∆, α : A, ∆ ′ ; σ ⇒ Γ ⊢ q[e/α] : B | ∆, ∆ ′ ; σ ,
(3) Γ | e ′ : B ⊢ ∆, α : A, ∆ ′ ; σ ⇒ Γ | e ′ [e/α] : B ⊢ ∆, ∆ ′ ; σ ,
(4) Γ ⊢ u : N | ∆, α : A, ∆ ′ ; σ ⇒ Γ ⊢ u : N | ∆, ∆ ′ ; σ .

Proof. The proofs are done by induction on typing derivations. □

We can now prove the preservation of typing through reduction, using the previous lemmas for rules which perform a substitution, and the list of dependencies to resolve local desynchronizations for dependent types.

Theorem 2.9 (Subject reduction). If c, c ′ are two commands of dL such that c : (Γ ⊢ ∆; ε) and c ⇝ c ′ , then c ′ : (Γ ⊢ ∆; ε).

Proof.
The proof is done by induction on the typing derivation of c : (Γ ⊢ ∆; ε), assuming that for each typing proof, the conversion rules are always pushed down and right as much as possible. To save some space, we sometimes omit the list of dependencies when empty, writing c : Γ ⊢ ∆ instead of c : Γ ⊢ ∆; ε, and we denote the composition of consecutive (≡ l ) rules as:

Γ | e : B ⊢ ∆; σ
Γ | e : A ⊢ ∆; σ (≡ l )

• Case ⟨λa.p||q • e⟩ ⇝ ⟨q|| μa.⟨p||e⟩⟩. Starting from a typing derivation of the command on the left-hand side, we can build the following derivation:

Π q
Γ ⊢ q : A ′ | ∆
Γ ⊢ q : A | ∆ (≡ l )
Π p
Γ, a : A ⊢ p : B | ∆
Γ, a : A ⊢ p : B ′ | ∆ (≡ r )
Π e
Γ, a : A | e : B ′ q ⊢ ∆; {a|q}{•|p}    B ′ q ∈ B ′ {a |q }
⟨p||e⟩ : Γ, a : A ⊢ ∆; {a|q} (Cut)
Γ | μa.⟨p||e⟩ : A ⊢ ∆; {•|q} ( μ)
⟨q|| μa.⟨p||e⟩⟩ : Γ ⊢ ∆ (Cut)

using Corollary 2.5 to weaken the dependencies in Π e .

• Case ⟨µα .c ||e⟩ ⇝ c[e/α]. A typing proof for the command on the left-hand side is of the form:

Π c
c : Γ ⊢ ∆, α : A
Γ ⊢ µα .c : A | ∆ (µ)
Π e
Γ | e : A ⊢ ∆; {•|µα .c}
⟨µα .c ||e⟩ : Γ ⊢ ∆ (Cut)

We get a proof that c[e/α] : Γ ⊢ ∆ is valid by Lemma 2.8.

• Case ⟨V || μa.c⟩ ⇝ c[V /a]. A typing proof for the command on the left-hand side is of the form:

Π V
Γ ⊢ V : A | ∆
Π c
c : Γ, a : A ′ ⊢ ∆; {a|V }
Γ | μa.c : A ′ ⊢ ∆; {•|V } ( μ)
Γ | μa.c : A ⊢ ∆; {•|V } (≡ l )
⟨V || μa.c⟩ : Γ ⊢ ∆ (Cut)

We first observe that we can derive the following proof:

Π V
Γ ⊢ V : A | ∆
Γ ⊢ V : A ′ | ∆ (≡ l )

and we get a proof for c[V /a] : Γ ⊢ ∆; {V |V } by Lemma 2.7. We finally get a proof for c[V /a] : Γ ⊢ ∆ by Lemma 2.4.

• Case ⟨(t, p)||e⟩ ⇝ ⟨p|| μa.⟨(t, a)||e⟩⟩, with p ∉ V . A proof of the command on the left-hand side is of the form:

Π t
Γ ⊢ t : N | ∆
Π p
Γ ⊢ p : A[t/x] | ∆
Γ ⊢ (t, p) : ∃x N .A | ∆ (∃ r )
Π e
Γ | e : ∃x N .A ⊢ ∆; {•|(t, p)}
⟨(t, p)||e⟩ : Γ ⊢ ∆ (Cut)

We can build the following derivation:

Π p
Γ ⊢ p : A[t/x] | ∆
Π (t,a)
Γ, a : A[t/x] ⊢ (t, a) : ∃x N .A | ∆ (∃ r )
Π e
Γ | e : ∃x N .A ⊢ ∆; {a|p}{•|(t, a)}
⟨(t, a)||e⟩ : Γ, a : A[t/x] ⊢ ∆; {a|p} (Cut)
Γ | μa.⟨(t, a)||e⟩ : A[t/x] ⊢ ∆; {•|p} ( μ)
⟨p|| μa.⟨(t, a)||e⟩⟩ : Γ ⊢ ∆ (Cut)

where Π (t,a) is as expected, observing that since p ∉ D, the binding {•|(t, p)} is the same as {•| †}, and we can apply Corollary 2.5 to weaken dependencies in Π e .

• Case ⟨prf (t, V )||e⟩ ⇝ ⟨V ||e⟩. This case is easy, observing that a derivation of the command on the left-hand side is of the form:

Π t
Γ ⊢ t : N | ∆
Π V
Γ ⊢ V : A(t) | ∆
Γ ⊢ (t, V ) : ∃x N .A(x) | ∆ (∃ r )
Γ ⊢ prf (t, V ) : A(wit (t, V )) | ∆ (prf )
Π e
Γ | e : A(wit (t, V )) ⊢ ∆; {•| †}
⟨prf (t, V )||e⟩ : Γ ⊢ ∆ (Cut)

Since by definition we have A(wit (t, V )) ≡ A(t), we can derive:

Π V
Γ ⊢ V : A(t) | ∆
Π e
Γ | e : A(wit (t, V )) ⊢ ∆; {•|V }
Γ | e : A(t) ⊢ ∆; {•|V } (≡ l )
⟨V ||e⟩ : Γ ⊢ ∆ (Cut)

• Case ⟨subst refl q||e⟩ ⇝ ⟨q||e⟩. This case is straightforward, observing that for any terms t, u, if we have refl : t = u, then A[t] ≡ A[u] for any A.

• Case ⟨subst p q||e⟩ ⇝ ⟨p|| μa.⟨subst a q||e⟩⟩. This case is similar to the case ⟨(t, p)||e⟩.

• Case c[t] ⇝ c[t ′ ] with t → t ′ . Immediate by observing that by definition of the relation ≡, we have A[t] ≡ A[t ′ ] for any A. □

Soundness We here give a proof of the soundness of dL with a value restriction. The proof is based on an embedding into the λµ μ-calculus extended with pairs, whose syntax and rules are given in Figure 5. A more interesting proof through a continuation-passing translation is presented in Section 4. We first show that typed commands of dL normalize by translation to the simply-typed λµ μ-calculus with pairs (i.e. extended with proofs of the form (p 1 , p 2 ) and contexts of the form μ(a 1 , a 2 ).c).
We do not consider here a particular reduction strategy, and take ↣ to be the contextual closure of the rules given in Figure 5. The translation essentially consists in erasing the dependencies in types, turning the dependent products into arrows and the dependent sums into pairs. The erasure procedure is defined by:

(∀x N .A) * ≜ N → A *    ⊤ * ≜ N → N
(∃x N .A) * ≜ N ∧ A *    ⊥ * ≜ N → N
(Π a:A B) * ≜ A * → B *    (t = u) * ≜ N → N

and the corresponding translation for terms, proofs, contexts and commands is given below.

Proofs p ::= V | µα .c | (p 1 , p 2 )
Values V ::= a | λa.p | (V 1 , V 2 )
Contexts e ::= α | p • e | μa.c | μ(a 1 , a 2 ).c
Commands c ::= ⟨p||e⟩
(a) Syntax

Γ ⊢ p 1 : A 1 | ∆    Γ ⊢ p 2 : A 2 | ∆
Γ ⊢ (p 1 , p 2 ) : A 1 ∧ A 2 | ∆ (∧ r )

c : Γ, a 1 : A 1 , a 2 : A 2 ⊢ ∆
Γ | μ(a 1 , a 2 ).c : A 1 ∧ A 2 ⊢ ∆ (∧ l )
(b) Typing rules

⟨µα .c ||e⟩ ↣ c[e/α]    ⟨λa.p||q • e⟩ ↣ ⟨q|| μa.⟨p||e⟩⟩
⟨p|| μa.c⟩ ↣ c[p/a]    ⟨(p 1 , p 2 )|| μ(a 1 , a 2 ).c⟩ ↣ c[p 1 /a 1 ][p 2 /a 2 ]
µα .⟨p||α⟩ ↣ p    μa.⟨a||e⟩ ↣ e
(c) Reduction rules

⟨p||e⟩ * ≜ ⟨p * ||e * ⟩    α * ≜ α    (t • e) * ≜ t * • e *    (q • e) * ≜ q * • e *    ( μa.c) * ≜ μa.c *
x * ≜ x    n * ≜ n    (wit p) * ≜ π 1 (p * )    a * ≜ a    refl * ≜ λx .x
(λa.p) * ≜ λa.p *    (λx .p) * ≜ λx .p *    (µα .c) * ≜ µα .c *    (prf p) * ≜ π 2 (p * )
(t, p) * ≜ µα .⟨p * || μa.⟨(t * , a)||α⟩⟩
(subst V q) * ≜ µα .⟨q * ||α⟩
(subst p q) * ≜ µα .⟨p * || μ_ .⟨µα .⟨q * ||α⟩||α⟩⟩ (p ∉ V )

where π i (p) ≜ µα .⟨p|| μ(a 1 , a 2 ).⟨a i ||α⟩⟩. The term n is defined as any encoding of the natural number n with its type N * , the encoding being irrelevant here as long as n ∈ V . Note that we translate differently subst V q and subst p q to simplify the proof of Proposition 2.12. We first show that the erasure procedure is adequate with respect to the previous translation.

Lemma 2.10. The following holds for any types A and B:
(1) For any terms t and u, (A[t/u]) * = A * .
(2) For any proofs p and q, (A[p/q]) * = A * .
(3) If A ≡ B then A * = B * .
(4) For any list of dependencies σ , if A ∈ B σ , then A * = B * .

Proof. Straightforward: (1) and (2) are direct consequences of the erasure of terms (and thus of proofs) from types. (3) follows from (1), (2) and the fact that (t = u) * = ⊤ * = ⊥ * . (4) follows from (1) and (2). □

We can extend the erasure procedure to typing contexts, and show that it is adequate with respect to the translation of proofs.

Proposition 2.11. The following holds for any contexts Γ, ∆ and any type A:
(1) For any command c, if c : Γ ⊢ ∆; σ , then c * : Γ * ⊢ ∆ * .
(2) For any proof p, if Γ ⊢ p : A | ∆; σ , then Γ * ⊢ p * : A * | ∆ * .
(3) For any context e, if Γ | e : A ⊢ ∆; σ , then Γ * | e * : A * ⊢ ∆ * .

Proof. By induction on typing derivations. The fourth item of the previous lemma shows that the list of dependencies becomes useless: since A ∈ B σ implies A * = B * , it is no longer needed. We only detail the key case of (t, p), whose translation is typed as follows:

Γ * | μa.⟨(t * , a)||α⟩ : A * ⊢ ∆ * , α : N∧A * ( μ)
⟨p * || μa.⟨(t * , a)||α⟩⟩ : Γ * ⊢ ∆ * , α : N∧A * (Cut)
Γ * ⊢ µα .⟨p * || μa.⟨(t * , a)||α⟩⟩ : N ∧ A * | ∆ * (µ) □

We can then deduce the normalization of dL from the normalization of the λµ μ-calculus [START_REF] Polonovski | Strong normalization of lambda-bar-mu-mu-tilde-calculus with explicit substitutions[END_REF], by showing that the translation preserves normalization in the sense that if c does not normalize, then neither does c * .

Proposition 2.12. If c is a command such that c * normalizes, then c normalizes.

Proof.
We prove this by contraposition, by showing that if c does not normalize (i.e. if it admits an infinite reduction path), then c * does not normalize either. We will actually prove a slightly more precise statement, namely that each step of reduction is reflected into at least one step through the translation:

∀c 1 , c 2 , (c 1 ⇝ c 2 ⇒ ∃n ≥ 1, (c 1 ) * ↣ n (c 2 ) * )

(one step on the left, n steps on the right). Assuming this holds, we get from any infinite reduction path (for ⇝) starting from c another infinite reduction path (for ↣) from c * . Thus, the normalization of c * implies the one of c. We shall now prove the previous statement by case analysis of the reduction c 1 ⇝ c 2 .

• Case wit (t, V ) → t:
(wit (t, V )) * = π 1 (µα .⟨V * || μa.⟨(t * , a)||α⟩⟩) ↣ π 1 (µα .⟨(t * , V * )||α⟩) ↣ π 1 (t * , V * ) = µα .⟨(t * , V * )|| μ(a 1 , a 2 ).⟨a 1 ||α⟩⟩ ↣ µα .⟨t * ||α⟩ ↣ t *

• Case ⟨µα .c ||e⟩ ⇝ c[e/α]:
(⟨µα .c ||e⟩) * = ⟨µα .c * ||e * ⟩ ↣ c * [e * /α] = (c[e/α]) *

• Case ⟨V || μa.c⟩ ⇝ c[V /a]:
(⟨V || μa.c⟩) * = ⟨V * || μa.c * ⟩ ↣ c * [V * /a] = (c[V /a]) *

• Case ⟨prf (t, V )||e⟩ ⇝ ⟨V ||e⟩:
(⟨prf (t, V )||e⟩) * = ⟨π 2 (µα .⟨V * || μa.⟨(t * , a)||α⟩⟩)||e * ⟩ ↣ ⟨π 2 (µα .⟨(t * , V * )||α⟩)||e * ⟩ ↣ ⟨π 2 (t * , V * )||e * ⟩ = ⟨µα .⟨(t * , V * )|| μ(a 1 , a 2 ).⟨a 2 ||α⟩⟩||e * ⟩ ↣ ⟨(t * , V * )|| μ(a 1 , a 2 ).⟨a 2 ||e * ⟩⟩ ↣ ⟨V * ||e * ⟩ = (⟨V ||e⟩) *

The remaining cases are proved similarly. □

Theorem 2.13 (Normalization). If c : (Γ ⊢ ∆; ε), then c normalizes.

Proof. By contradiction: if c does not normalize, then by Proposition 2.12 neither does c * . However, by Proposition 2.11 we have that c * : Γ * ⊢ ∆ * . This is absurd since any well-typed command of the λµ μ-calculus normalizes [START_REF] Polonovski | Strong normalization of lambda-bar-mu-mu-tilde-calculus with explicit substitutions[END_REF]. □

Using the normalization, we can finally prove the soundness of the system.

Theorem 2.14 (Soundness). For any p ∈ dL, we have ⊬ p : ⊥ .

Proof. We actually start by proving by contradiction that a command c ∈ dL cannot be well-typed with empty contexts. Indeed, let us assume that there exists such a command c : (⊢). By normalization, we can reduce it to c ′ = ⟨p ′ ||e ′ ⟩ in normal form and for which we have c ′ : (⊢) by subject reduction. Since c ′ cannot reduce and is well-typed, p ′ is necessarily a value and cannot be a free variable. Thus, e ′ cannot be of the shape μa.c ′′ and every other possibility is either ill-typed or admits a reduction, which are both absurd.

We can now prove the soundness by contradiction. Assuming that there is a proof p such that ⊢ p : ⊥, we can form the well-typed command ⟨p||⋆⟩ : (⊢ ⋆ : ⊥) where ⋆ is any fresh α-variable. The previous result shows that p cannot drop the context ⋆ when reducing, since it would give rise to the command c : (⊢). We can still reduce ⟨p||⋆⟩ to a command c in normal form, and see that c has to be of the shape ⟨V ||⋆⟩ (by the same kind of reasoning, using the fact that c cannot reduce and that c : (⊢ ⋆ : ⊥) by subject reduction). Therefore, V is a value of type ⊥. Since there is no typing rule that can give the type ⊥ to a value, this is absurd. □

Toward a continuation-passing style translation The difficulties we encountered while defining our system mostly came from the interaction between classical control and dependent types. Removing one of these two ingredients leaves us with a sound system in both cases. Without dependent types, our calculus amounts to the usual λµ μ-calculus. And without classical control, we would obtain an intuitionistic dependent type theory that we could easily prove sound. To prove the correctness of our system, we might be tempted to define a translation to a subsystem without dependent types, or without classical control.
We will discuss later in Section 5 a solution to handle the dependencies. We will focus here on the possibility of removing the classical part from dL, that is to define a translation that gets rid of the classical control. The use of continuationpassing style translations to address this issue is very common, and it was already studied for the simply-typed λµ μ-calculus [START_REF] Curien | The duality of computation[END_REF]. However, as it is defined to this point, dL is not suitable for the design of a CPS translation. Indeed, in order to fix the problem of desynchronization of typing with respect to the execution, we have added an explicit list of dependencies to the type system of dL. Interestingly, if this solved the problem inside the type system, the very same phenomenon happens when trying to define a CPS translation carrying the type dependencies. Let us consider, as discussed in Section 2.5, the case of a command ⟨q|| μa.⟨p||e⟩⟩ with p : B[a] and e : B[q]. Its translation is very likely to look like: q μa.⟨p||e⟩ = q (λa.( p e )), where p has type (B[a] → ⊥) → ⊥ and e type B[q] → ⊥, hence the sub-term p e will be ill-typed. Therefore, the fix at the level of typing rules is not satisfactory, and we need to tackle the problem already within the reduction rules. We follow the idea that the correctness is guaranteed by the head-reduction strategy, preventing ⟨p||e⟩ from reducing before the substitution of a was made. We would like to ensure that the same thing happens in the target language (that will also be equipped with a head-reduction strategy), namely that p cannot be applied to e before q has furnished a value to substitute for a. This would correspond informally to the term 16 : ( q (λa. p )) e . Assuming that q eventually produces a value V , the previous term would indeed reduce as follows: ( q (λa. p )) e → ((λa. p ) V ) e → p [ V /a] e Since p [ V /a] now has a type convertible to (B[q] → ⊥) → ⊥, the term that is produced in the end is well-typed. The first observation is that if q, instead of producing a value, was a classical proof throwing the current continuation away (for instance µα .c where α FV (c)), this would lead to the unsafe reduction: (λα . c (λa. p )) e → c e . Indeed, through such a translation, µα would only be able to catch the local continuation, and the term would end in c e instead of c . We thus need to restrict ourselves at least to proof terms that could not throw the current continuation. The second observation is that such a term suggests the use of delimited continuations 17 to temporarily encapsulate the evaluation of q when reducing such a command: ⟨λa.p||q • e⟩ ⇝ ⟨µ t p.⟨q|| μa.⟨p|| t p⟩⟩||e⟩. Under the guarantee that q will not throw away the continuation 18 μa.⟨p|| t p⟩, this command is safe and will mimic the aforedescribed reduction: ⟨µ t p.⟨q|| μa.⟨p|| t p⟩⟩||e⟩ ⇝ ⟨µ t p.⟨V || μa.⟨p|| t p⟩⟩||e⟩ ⇝ ⟨µ t p.⟨p[V /a]|| t p⟩||e⟩ ⇝ ⟨p[V /a]||e⟩. This will also allow us to restrict the use of the list of dependencies to the derivation of judgments involving a delimited continuation, and to fully absorb the potential inconsistency in the type of t p. In Section 3, we will extend the language according to this intuition, and see how to design a continuation-passing style translation in Section 4. EXTENSION OF THE SYSTEM Limits of the value restriction In the previous section, we strictly restricted the use of dependent types to proof terms that are values. 
In particular, even though a proof term might be computationally equivalent to some value (say µα .⟨V ||α⟩ and V for instance), we cannot use it to eliminate a dependent product, which is unsatisfactory. We will thus relax this restriction to allow more proof terms within dependent types.

16 We will see in Section 4.4 that such a term could be typed by turning the type A → ⊥ of the continuation that q is waiting for into a (dependent) type Π a:A R[a] parameterized by R. This way we could have q : ∀R .(Π a:A R[a] → R[q]) instead of q : ((A → ⊥) → ⊥). For R[a] := (B(a) → ⊥) → ⊥, the whole term is well-typed. Readers familiar with realizability will also note that such a term is realizable, since it eventually terminates on a correct term p[q/a] e .
17 We stick here to the presentations of delimited continuations in [START_REF] Ariola | A type-theoretic foundation of delimited continuations[END_REF][START_REF] Herbelin | An approach to call-by-name delimited continuations[END_REF], where t p is used to denote the top-level delimiter.
18 Otherwise, this could lead to an ill-formed command ⟨µ t p.c ||e⟩ where c does not contain t p.

Proofs p ::= · · · | µ t p.c
nef fragment: p N ::= V | (t, p N ) | µ⋆.c N | prf p N | subst p N q N
c N ::= ⟨p N ||e N ⟩
e N ::= ⋆ | μa.c N
(a) Language

⟨µα .c ||e⟩ ⇝ c[e/α]
⟨λa.p||q • e⟩ ⇝ ⟨µ t p.⟨q|| μa.⟨p|| t p⟩⟩||e⟩ (q ∈ nef)
⟨λa.p||q • e⟩ ⇝ ⟨q|| μa.⟨p||e⟩⟩
⟨λx .p||V t • e⟩ ⇝ ⟨p[V t /x]||e⟩
⟨V p || μa.c⟩ ⇝ c[V p /a]
⟨(V t , p)||e⟩ ⇝ ⟨p|| μa.⟨(V t , a)||e⟩⟩ (p ∉ V )
⟨prf (V t , V p )||e⟩ ⇝ ⟨V p ||e⟩
⟨prf p||e⟩ ⇝ ⟨µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩||e⟩
⟨subst p q||e⟩ ⇝ ⟨p|| μa.⟨subst a q||e⟩⟩ (p ∉ V )
⟨subst refl q||e⟩ ⇝ ⟨q||e⟩
⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩
c → c ′ ⇒ ⟨µ t p.c ||e⟩ ⇝ ⟨µ t p.c ′ ||e⟩
wit p → t ⇐ ∀α, ⟨p||α⟩ ⇝ ⟨(t, p ′ )||α⟩
t → t ′ ⇒ c[t] ⇝ c[t ′ ]

where:
V t ::= x | n
V p ::= a | λa.p | λx .p | (V t , V p ) | refl
c[t] ::= ⟨(t, p)||e⟩ | ⟨λx .p||t • e⟩
(b) Reduction rules

We can follow several intuitions. First, we saw at the end of the previous section that we could actually allow any proof term as long as its CPS translation uses its continuation and uses it only once. We do not have such a translation yet, but syntactically, these are the proof terms that can be expressed (up to α-conversion) in the λµ μ-calculus with only one continuation variable (that we write ⋆ in Figure 6), and which do not contain application (indeed, λa.p is a value for any p, hence proofs like µα .⟨λa.p||q • α⟩ can drop the continuation in the end once p becomes the proof in active position). We insist on the fact that this defines a syntactic subset of proofs. Indeed, ⋆ is only a notation and any proof defined with only one continuation variable is α-convertible to denote this continuation variable with ⋆. For instance, µα .⟨µβ .⟨V ||β⟩||α⟩ belongs to this category since:

µα .⟨µβ .⟨V ||β⟩||α⟩ = α µ⋆.⟨µ⋆.⟨V ||⋆⟩||⋆⟩

Interestingly, this corresponds exactly to the so-called negative-elimination-free (nef) proofs of Herbelin [START_REF] Herbelin | A constructive proof of dependent choice, compatible with classical logic[END_REF]. To interpret the axiom of dependent choice, he designed a classical proof system with dependent types in natural deduction, in which the dependent types allow the use of nef proofs.

Second, Lepigre defined in recent work [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF] a classical proof system with dependent types, where the dependencies are restricted to values. However, the type system allows derivations of judgments up to an observational equivalence, and thus any proof computationally equivalent to a value can be used.
In particular, any proof in the nef fragment is observationally equivalent to a value, and hence is compatible with the dependencies of Lepigre's calculus.

From now on, we consider the system dL of Section 2 extended with delimited continuations, which we call dL t p , and we define the fragment of negative-elimination-free (nef) proof terms. The syntax of both categories is given in Figure 6; the proofs in the nef fragment are considered up to α-conversion for the context variables. The reduction rules, given in Figure 6, are slightly different from the rules of Section 2. In the case ⟨λa.p||q • e⟩ with q ∈ nef (resp. ⟨prf p||e⟩), a delimited continuation is now produced during the reduction of the proof term q (resp. p) that is involved in the list of dependencies. As terms can now contain proofs which are not values, we enforce the call-by-value reduction by requiring that proof values only contain term values. We elude the problem of reducing terms by defining meta-rules for them. We add standard rules for delimited continuations [START_REF] Ariola | A type-theoretic foundation of delimited continuations[END_REF][START_REF] Herbelin | An approach to call-by-name delimited continuations[END_REF], expressing the fact that when a proof µ t p.c is in active position, the current context is temporarily frozen until c is fully reduced.

Delimiting the scope of dependencies Regarding the typing rules, which are given in Figure 7, we extend the set D to be the nef fragment: D ≜ nef, and we now distinguish two modes. The regular mode corresponds to a derivation without dependency issues, whose typing rules are the same as in Figure 4 without the list of dependencies, plus the new rule ( t p I ) for the introduction of delimited continuations. The dependent mode is used to type commands and contexts involving t p, and we use the symbol ⊢ d to denote these sequents. There are three rules: one to type t p, which is the only one where the list of dependencies is used; one to type contexts of the form μa.c (the rule is the same as the former rule for μa.c in Section 2); and a last one to type commands ⟨p||e⟩, where we observe that the premise for p is typed in regular mode.

Additionally, we need to extend the congruence to make it compatible with the reduction of nef proof terms (which can now appear in types), so we add the rules:

A[p] ▷ A[q] if ∀α (⟨p||α⟩ ⇝ ⟨q||α⟩)
A[⟨q|| μa.⟨p||⋆⟩⟩] ▷ A[⟨p[q/a]||⋆⟩] with p, q ∈ nef

Due to the presence of nef proof terms (which contain a delimited form of control) within types and lists of dependencies, we need the following technical lemma to prove subject reduction.

Lemma 3.1. For any contexts Γ, ∆, any type B and any e, µ⋆.c:

⟨µ⋆.c ||e⟩ : Γ ⊢ d ∆, t p : B; ε ⇒ c[e/⋆] : Γ ⊢ d ∆, t p : B; ε.

Proof. By definition of the nef proof terms, µ⋆.c is of the general form:

µ⋆.c = µ⋆.⟨p 1 || μa 1 .⟨p 2 || μa 2 .⟨. . .|| μa n-1 .⟨p n ||⋆⟩⟩⟩⟩.
For simplicity reasons, we will only give the proof for the case n = 2, so that a derivation of the hypothesis contains sub-derivations Π 1 and Π 2 for p 1 and p 2 , together with a derivation Π e for e. Thus, we have to show that we can turn Π e into a derivation Π ′ e of Γ | e : A ⊢ d ∆ t p ; {a 1 |p 1 }{•|p 2 } with ∆ t p ≜ ∆, t p : B, since this would allow us to build the following derivation:

Π 1
Γ ⊢ p 1 : A 1 | ∆
Π 2
Γ, a 1 : A 1 ⊢ p 2 : A | ∆
Π ′ e
· · · | e : A ⊢ d ∆ t p ; {a 1 |p 1 }{•|p 2 }
⟨p 2 ||e⟩ : Γ, a 1 : A 1 ⊢ ∆ t p ; {a 1 |p 1 } (Cut)
Γ | μa 1 .⟨p 2 ||e⟩ : A 1 ⊢ d ∆ t p ; {•|p 1 } ( μ)
⟨p 1 || μa 1 .⟨p 2 ||e⟩⟩ : Γ ⊢ d ∆ t p ; ε (Cut)

It suffices to prove that if the list of dependencies is used in Π e to type t p, we can still give a derivation with the new one. In practice, it corresponds to showing that for any variable a and any list of dependencies σ :

{a|µ⋆.c}σ ⇛ {a 1 |p 1 }{a|p 2 }σ .

For any A ∈ B σ , by definition we have:

A[µ⋆.⟨p 1 || μa 1 .⟨p 2 ||⋆⟩⟩/a] ≡ A[µ⋆.⟨p 2 [p 1 /a 1 ]||⋆⟩/a] ≡ A[p 2 [p 1 /a 1 ]/a] = A[p 2 /a][p 1 /a 1 ].

Hence for any A ∈ B {a |µ⋆.c }σ , there exists A ′ ∈ B {a 1 |p 1 }{a |p 2 }σ such that A ≡ A ′ , and we can derive:

A ′ ∈ B {a 1 |p 1 }{a |p 2 }σ
Γ | t p : A ′ ⊢ d ∆, t p : B; {a 1 |p 1 }{a|p 2 }σ    A ≡ A ′
Γ | t p : A ⊢ d ∆, t p : B; {a 1 |p 1 }{a|p 2 }σ (≡ l ) □

We can now prove subject reduction for dL t p .

Theorem 3.2 (Subject reduction). If c, c ′ are two commands of dL t p such that c : (Γ ⊢ ∆) and c ⇝ c ′ , then c ′ : (Γ ⊢ ∆).

Proof. Actually, the proof is slightly easier than for Theorem 2.9, because most of the rules do not involve dependencies. We only give some key cases.

• Case ⟨λa.p||q • e⟩ ⇝ ⟨µ t p.⟨q|| μa.⟨p|| t p⟩⟩||e⟩ with q ∈ nef. A typing derivation for the command on the left is of the form:

Π p
Γ, a : A ⊢ p : B | ∆
Γ ⊢ λa.p : Π a:A B | ∆ (→ r )
Π q
Γ ⊢ q : A | ∆
Π e
Γ | e : B[q/a] ⊢ ∆
Γ | q • e : Π a:A B ⊢ ∆ (→ l )
⟨λa.p||q • e⟩ : Γ ⊢ ∆ (Cut)

from which we can build a derivation for the command on the right-hand side, typing the delimited continuation in dependent mode.

• Case ⟨µ t p.c ||e⟩ ⇝ ⟨µ t p.c ′ ||e⟩ with c ⇝ c ′ . This case corresponds exactly to Theorem 2.9, except for the rule ⟨µα .c ||e⟩ ⇝ c[e/α], since µα .c is a nef proof term (remember we are inside a delimited continuation), but this corresponds precisely to Lemma 3.1. □

Remark 3.3. Interestingly, we could have already taken D ≜ nef in dL and still be able to prove the subject reduction property. The only difference would have been for the case ⟨µα .c ||e⟩ ⇝ c[e/α] when µα .c is nef. Indeed, we would have had to prove that such a reduction step is compatible with the list of dependencies, as in the proof for dL t p , which essentially amounts to Lemma 3.1. This shows that the relaxation to the nef fragment is valid even without delimited continuations.

To sum up, the restriction to nef is sufficient to obtain a sound type system, but is not enough to obtain a calculus suitable for a continuation-passing style translation. As we will now see, delimited continuations are crucial for the soundness of the CPS translation. Observe that they also provide us with a type system in which the scope of dependencies is more delimited.

A CONTINUATION-PASSING STYLE TRANSLATION We shall now see how to define a continuation-passing style translation from dL t p to an intuitionistic type theory, and use this translation to prove the soundness of dL t p . Continuation-passing style translations are indeed very useful to embed languages with classical control into purely functional ones [START_REF] Curien | The duality of computation[END_REF][START_REF] Griffin | A formulae-as-type notion of control[END_REF].
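Before turning to the translation itself, the mechanism can be illustrated by a standard, minimal OCaml sketch (the names cps, ret, bind and callcc are ours, and the example is generic rather than the translation of dL t p ): once computations are written in continuation-passing style, classical control operators become ordinary functional programs.

    (* A computation of type 'a, with answer type 'r, is a function
       expecting a continuation for 'a. *)
    type ('a, 'r) cps = ('a -> 'r) -> 'r

    let ret (a : 'a) : ('a, 'r) cps = fun k -> k a

    let bind (m : ('a, 'r) cps) (f : 'a -> ('b, 'r) cps) : ('b, 'r) cps =
      fun k -> m (fun a -> f a k)

    (* call/cc: the captured continuation k can be invoked later to
       discard the local context and jump back. *)
    let callcc (f : ('a -> ('b, 'r) cps) -> ('a, 'r) cps) : ('a, 'r) cps =
      fun k -> f (fun a _ -> k a) k

The type of callcc is a CPS rendering of Peirce's law ((A → B) → A) → A, the typical classical principle: the continuation k is duplicated and may be dropped, which is precisely the behaviour that the nef restriction rules out inside dependent types.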
From a logical point of view, they generally amount to negative translations that allow us to embed classical logic into intuitionistic logic [START_REF] Ferreira | On various negative translations[END_REF]. Yet, we know that removing classical control (i.e. classical logic) from our language leaves us with a sound intuitionistic type theory. We will now see how to design a CPS translation for our language which will allow us to prove its soundness. Target language We choose the target language to be an intuitionistic theory in natural deduction that has exactly the same elements as dL t p , except the classical control. The language distinguishes between terms (of type N) and proofs, it also includes dependent sums and products for types referring to terms, as well as a dependent product at the level of proofs. As is common for CPS translations, the evaluation follows a head-reduction strategy. The syntax of the language and its reduction rules are given by Figure 8. The type system, also presented in Figure 8, is defined as expected, with the addition of a secondorder quantification that we will use in the sequel to refine the type of translations of terms and nef proofs. As in dL t p , the type system has a conversion rule, where the relation A ≡ B is the symmetric-transitive closure of A ▷ B, defined once again as the congruence over the reduction -→ and by the rules: 0 = 0 ▷ ⊤ 0 = S(u) ▷ ⊥ S(t) = 0 ▷ ⊥ S(t) = S(u) ▷ t = u. Translation of proofs and terms We can now define the continuation-passing style translation of terms, proofs, contexts and commands. The translation is given in Figure 9, in which we tag some lambdas with a bullet λ • for technical reasons. The translation of delimited continuations follows the intuition we presented in Section 2.8, and the definition for stacks t • e and q • e (with q nef) inlines the reduction producing a command with a delimited continuation. All the other rules are natural in the sense that they Γ ⊢ q : t = u Γ ⊢ q : A[t] Γ ⊢ subst p q : A[u] (subst) Γ ⊢ p : A A ≡ B Γ ⊢ p : B (CONV) (c) Type system reflect the reduction rule ⇝, except for the translation of pairs (t, p): (t, p) p ≜ λk. p p ( t t (λxa.k (x, a))) The natural definition would have been λk. t t (λu. p p λq.k (u, q)), however such a term would have been ill-typed (while the former definition is correct, as we will see in the proof of Lemma 4.9). Indeed, the type of p p depends on t, while the continuation (λq.k (u, q)) depends on u, but both become compatible once u is substituted by the value return by t t . This somewhat strange definition corresponds to the intuition that we reduce t t within a delimited continuation 22 , in order to guarantee that we will not reduce p p before t t has returned a value to substitute for u. The complete translation is given in Figure 9. Before defining the translation of types, we first state a lemma expressing the fact that the translations of terms and nef proof terms use the continuations they are given once and only once. In particular, it makes them compatible with delimited continuations and a parametric return type. This will allow us to refine the type of their translation. Lemma 4.1. The translation satisfies the following properties: (1) For any term t in dL t p , there exists a term t + such that for any k, we have t t k → * β k t + . (2) For any nef proof p N , there exists a proof p + N such that for any k, we have p N p k → * β k p + N . wit p t ≜ λk. p p (λ • q.k (wit q)) V t V t ≜ λk.k V t a V ≜ a λa.p V ≜ λ • a. 
p p (V t , V p ) V ≜ ( V t V t , V V ) V p ≜ λk.k V V µα .c p ≜ λ • α . c c prf p p ≜ λ • k.( p p (λ • qλk ′ .k ′ (prf q))) k (t, p) p ≜ λ • k. p p ( t t (λxλ • a.k (x, a))) subst V q p ≜ λk. q p (λ • q ′ .k (subst V V q ′ ))) subst p q p ≜ λk. p p (λ • p ′ . q p (λ • q ′ .k (subst p ′ q ′ ))) (p V ) α e ≜ α t • e e ≜ λp.( t t (λ • v.p v)) e e q N • e e ≜ λp.( q N p (λ • v.p v)) e e (q N ∈ nef) q • e e ≜ λ • p. q p (λ • v.p v e e ) ( q n V t ≜ n x V t ≜ x refl V ≜ refl λx .p V ≜ λ • x . p p µ t p.c p ≜ λk. c t p k μa.c e ≜ λ • a. c c ⟨p|| t p⟩ t p ≜ p p μa.c e t p ≜ λ • a. c t p Fig. 9. Continuation-passing style translation In particular, we have : t t λx .x → * β t + and p N p λa.a → * β p + N Proof. Straightforward mutual induction on the structure of terms and nef proofs, adding similar induction hypothesis for nef contexts and commands. The terms t + and proofs p + are given in Figure 10. We detail the case (t, p) with p ∈ nef to give an insight of the proof. Proof. Simple proof by induction on the reduction rules for ⇝, using Lemma 4.1 for cases involving a term t. □ (t, p) p k → β p p ( t t (λxa.k (x, a))) → β ( t t (λxa.k (x, a))) p + → β (λxa.k (x, a)) t + p + → β (λa.k (t + , a)) p + → β k (t + , p + ) ( by x + ≜ x n + ≜ n (wit p) + ≜ wit p + a + ≜ a refl + ≜ refl (λa.p) + ≜ λa. p p (λx .p) + ≜ λx . p p (t, p) + ≜ (t + , p + ) (prf p) + ≜ prf p + (subst p q) + ≜ subst p + q + (µ⋆.c) + ≜ c + (µ t p.c) + ≜ c + (⟨p||⋆⟩) + ≜ p + (⟨p|| t p⟩) + ≜ p + (⟨p|| μa.c t p ⟩) + ≜ c + [p + /a] Normalization of dL t p We can in fact prove a finer result to show that normalization is preserved through the translation. Namely, we want to prove that any infinite reduction sequence in dL t p is responsible for an infinite reduction sequence through the translation. Using the preservation of typing (Proposition 4.10) together with the normalization of the target language, this will give us a proof of the normalization of dL t p for typed proof terms. To this purpose, we roughly proceed as follows: (1) we identify a set of reduction steps in dL t p which are directly reflected into a strictly positive number of reduction steps through the CPS; (2) we show that the other steps alone can not form an infinite sequence of reductions; (3) we deduce that every infinite sequence of reductions in dL t p gives rise to an infinite sequence through the translation. The first point corresponds thereafter to Proposition 4.5, the second one to the Proposition 4.6. As a matter of fact, the most difficult part is somehow anterior to these points. It consists in understanding how a reduction step can be reflected through the translation in a way that is sufficient to ensure the preservation of normalization (that is the third point). Instead of stating the result directly and giving a long and tedious proof of its correctness, we will rather sketch its main steps. First of all, we split the reduction rule → β into two different kinds of reduction steps: • administrative reductions, that we denote by -→ a , which correspond to continuationpassing and computationally irrelevant (w.r.t. to dL t p ) reduction steps. These are defined as the β-reduction steps of non-annotated λs. • distinguished reductions, that we denote by -→ • , which correspond to the image of a reduction step through the translation. These are defined as every other rules, that is to say the β-reduction steps of annotated λ • 's plus the rules corresponding to redexes formed with wit, prf and subst . 
In other words, we define two deterministic reductions -→ • and -→ a , such that the usual weak-head reduction → β is equal to the union -→ • ∪ -→ a . Our goal will be to prove that every infinite reduction sequence in dL t p is reflected in the existence of an infinite reduction sequence for -→ • .

Second, let us assume for a while that we can show that for any reduction c ⇝ c ′ , through the translation we have:

c c *-→ β t 0 -→ • t 1 *-→ β t 2 *←- a c ′ c

Then by induction, it implies that if a command c 0 produces an infinite reduction sequence c 0 ⇝ c 1 ⇝ c 2 ⇝ . . ., it is reflected through the translation by the reduction scheme where, for every i:

c i c *-→ β t i0 -→ • t i1 *-→ β t i2 *←- a c i+1 c

Using the fact that all reductions are deterministic, and that the arrow from c 1 c to t 02 (and c 2 c to t 12 and so on) can only contain steps of the reduction -→ a , the previous scheme in fact ensures us that we have:

c 0 c *-→ β t 00 -→ • t 01 *-→ β t 02 *-→ β t 10 -→ • t 11 *-→ β t 12 *-→ β t 20 -→ • t 21 *-→ β · · ·

This directly implies that c 0 c produces an infinite reduction sequence and thus is not normalizing. This would be the ideal situation, and if the aforementioned steps were provable as such, the proof would be over. Yet, our situation is more subtle, and we need to refine our analysis to tackle the problem. We shall briefly explain now why we can actually consider a slightly more general reduction scheme, while trying to remain concise on the justification. Keep in mind that our goal is to preserve the existence of an infinite sequence of distinguished steps.

The first generalization consists in allowing distinguished reductions for redexes that are not in head positions. The safety of this generalization follows from this proposition:

Proposition 4.3. Let t[] be a term with a hole (of the shape defined hereafter). If u -→ • u ′ and t[u ′ ] does not normalize, then t[u] does not normalize either.

Proof. By induction on the structure of t, a very similar proof can be found in [START_REF] Joachimski | Short proofs of normalization for the simply-typed λ-calculus, permutative conversions and gödel's t[END_REF]. □

Following this idea, we define a new arrow ?-→ • by:

u -→ • u ′ ⇒ t[u] ?-→ • t[u ′ ]    where t[] ::= [] | t ′ (t[]) | λx .t[],

expressing the fact that a distinguished step can be performed somewhere in the term. We denote by -→ β + the extended reduction relation defined as the union -→ β ∪ ?-→ • , which is not deterministic. Coming back to the thread scheme we described above, we can now generalize it with this arrow. Indeed, as we are only interested in getting an infinite reduction sequence from c 0 c , the previous proposition ensures us that if t 02 (t 12 , etc.) does not normalize, it is enough to have an arrow t 01 *-→ β + t 02 (t 11 *-→ β + t 12 , etc.) to deduce that t 01 does not normalize either. Hence, it is enough to prove that we have the following thread scheme, where we took advantage of this observation: for every i,

c i c *-→ β t i0 -→ • t i1 *-→ β + t i2 *←- a c i+1 c

In the same spirit, if we define = a to be the congruence over terms induced by administrative reductions -→ a , we can show that if a term has a redex for the distinguished relation in head position, then so does any (administratively) congruent term.

Proposition 4.4. If t -→ • u and t = a t ′ , then there exists u ′ such that t ′ -→ • u ′ and u = a u ′ .

Proof. By induction on t, observing that an administrative reduction can neither delete nor create redexes for -→ • . □

In other words, as we are only interested in the distinguished reduction steps, we can take the liberty to reason modulo the congruence = a .
Notably, we can generalize one last time our reduction scheme, replacing the left (administrative) arrow from c i c by this congruence: for every i,

c i c *-→ β t i0 -→ • t i1 *-→ β + t i2 = a c i+1 c

For all the reasons explained above, such a reduction scheme ensures that there is an infinite reduction sequence from c 0 c . Because of this guarantee, by induction, it is enough to show that for any reduction step c 0 ⇝ c 1 , we have:

c 0 c *-→ β t 0 -→ • t 1 *-→ β + t 2 = a c 1 c (1)

In fact, as explained in the preamble of this section, not all reduction steps can be reflected this way through the translation. There are indeed 4 reduction rules, that we identify hereafter, that might only be reflected into administrative reductions, and produce a scheme of this shape (which subsumes the former):

c 0 c *-→ β + t = a c 1 c (2)

This allows us to give a more precise statement about the preservation of reduction through the CPS translation.

Proposition 4.5 (Preservation of reduction). Let c 0 , c 1 be two commands of dL t p . If c 0 ⇝ c 1 , then it is reflected through the translation into a reduction scheme (1), except for the rules:

⟨subst p q||e⟩ ⇝ ⟨p|| μa.⟨subst a q||e⟩⟩ (p ∉ V )
⟨subst refl q||e⟩ ⇝ ⟨q||e⟩
⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩
c[t] ⇝ c[t ′ ]

which are reflected into the reduction scheme (2).

Proof. The proof is done by induction on the reduction ⇝ (see Figure 6). To ease the notations, we will often write λ • v.(λ • x . p p ) v -→ • λ • x . p p where we perform α-conversion to identify λ • v. p p [v/x] and λ • x . p p . Additionally, to facilitate the comprehension of the steps corresponding to the congruence = a , we use an arrow ?-→ a to denote the possibility of performing an administrative reduction not in head position, defined by:

u -→ a u ′ ⇒ t[u] ?-→ a t[u ′ ]

We write -→ a + the union -→ a ∪ ?-→ a .

• Case ⟨µα .c ||e⟩ ⇝ c[e/α]: We have:
⟨µα .c ||e⟩ c = (λ • α . c c ) e e -→ • c c [ e e /α] = c[e/α] c

• Case ⟨λa.p||q • e⟩ ⇝ ⟨q|| μa.⟨p||e⟩⟩: We have:
⟨λa.p||q • e⟩ c = (λk.k (λ • a. p p )) (λ • p. q p (λ • v.p v e e ))
-→ a (λ • p. q p (λ • v.p v e e )) (λ • a. p p )
-→ • q p (λ • v.(λ • a. p p ) v e e )
?-→ • q p (λ • a. p p e e ) = ⟨q|| μa.⟨p||e⟩⟩ c

• Case ⟨λa.p||q N • e⟩ ⇝ ⟨µ t p.⟨q N || μa.⟨p|| t p⟩⟩||e⟩ (q N ∈ nef): We know by Lemma 4.1 that q N being nef, it will use, and use only once, the continuation it is applied to. Thus, we know that if k -→ • k ′ , we have that:

q N p k *-→ β k q + N -→ • k ′ q + N β ←- q N p k ′

and we can legitimately write q N p k -→ • q N p k ′ in the sense that it corresponds to performing now a reduction that would have been performed in the future. Using this remark, we have:

⟨λa.p||q N • e⟩ c = (λk.k (λ • a. p p )) (λp.( q N p (λ • v.p v)) e e )
-→ a -→ a ( q N p (λ • v.(λ • a. p p ) v)) e e
-→ • ( q N p (λ • a. p p )) e e
a ←- (λk.( q N p (λ • a. p p )) k) e e = ⟨µ t p.⟨q N || μa.⟨p|| t p⟩⟩||e⟩ c

• Case ⟨λx .p||V t • e⟩ ⇝ ⟨p[V t /x]||e⟩: Since V t is a value (i.e. x or n), we have V t t = λk.k V t V t . In particular, it is easy to deduce that p[V t /x] p = p p [ V t V t /x], and then we have:

⟨λx .p||V t • e⟩ c = (λk.k (λ • x . p p )) (λp.( V t t (λ • v.p v)) e e )
-→ a -→ a ( V t t (λ • v.(λ • x . p p ) v)) e e
-→ a ((λ • v.(λ • x . p p ) v) V t V t ) e e
-→ • ((λ • x . p p ) V t V t ) e e
-→ • ( p p [ V t V t /x]) e e = p[V t /x] p e e = ⟨p[V t /x]||e⟩ c

• Case ⟨V || μa.c⟩ ⇝ c[V /a]: Similarly to the previous case, we have V p = λk.k V V and thus c[V /a] c = c c [ V V /a]. We have:

⟨V || μa.c⟩ c = (λk.k V V ) (λ • a. c c ) -→ a (λ • a. c c ) V V -→ • c c [ V V /a] = c[V /a] c
• Case ⟨(V t , p)||e⟩ ⇝ ⟨p|| μa.⟨(V t , a)||e⟩⟩ (p ∉ V ): We have:

⟨(V t , p)||e⟩ c = (λ • k. p p ( V t t (λxλ • a.k (x, a)))) e e
-→ • p p ( V t t (λxλ • a. e e (x, a)))
-→ a + p p ((λxλ • a. e e (x, a)) V t V t )
-→ a + p p (λ • a. e e ( V t V t , a))
a +←- p p (λ • a. (V t , a) p e e )
a +←- (λk. p p (λ • a. (V t , a) p k)) e e = ⟨p|| μa.⟨(V t , a)||e⟩⟩ c

• Case ⟨prf p||e⟩ ⇝ ⟨µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩||e⟩: We have:

⟨prf p||e⟩ c = (λ • k.( p p (λ • aλk ′ .k ′ (prf a))) k) e e
-→ • ( p p (λ • a.λk ′ .k ′ (prf a))) e e
a ←- (λk.( p p (λ • a.λk ′ .k ′ (prf a))) k) e e = ⟨µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩||e⟩ c

• Case ⟨prf (V t , V p )||e⟩ ⇝ ⟨V p ||e⟩: We have:

⟨prf (V t , V p )||e⟩ c = (λ • k.((λk.k ( V t V , V p V )) (λ • qλk ′ .k ′ (prf q))) k) e e
-→ • ((λk.k ( V t V , V p V )) (λ • qλk ′ .k ′ (prf q))) e e
-→ a ((λ • qλk ′ .k ′ (prf q)) ( V t V , V p V )) e e
-→ • (λk ′ .k ′ (prf ( V t V , V p V ))) e e
-→ a e e (prf ( V t V , V p V ))
?-→ • e e V p V a ←- ⟨V p ||e⟩ c

• Case ⟨subst p q||e⟩ ⇝ ⟨p|| μa.⟨subst a q||e⟩⟩ (p ∉ V ): We have:

⟨subst p q||e⟩ c = (λk. p p (λ • a. q p (λ • q ′ .k (subst a q ′ )))) e e
-→ a p p (λ • a. q p (λ • q ′ . e e (subst a q ′ )))
? a ←- p p (λ • a.(λk. q p (λ • q ′ .k (subst a q ′ ))) e e ) = ⟨p|| μa.⟨subst a q||e⟩⟩ c

• Case ⟨subst refl q||e⟩ ⇝ ⟨q||e⟩: We have:

⟨subst refl q||e⟩ c = (λk. q p (λ • q ′ .k (subst refl q ′ ))) e e
-→ a q p (λ • q ′ . e e (subst refl q ′ ))
?-→ • q p (λ • q ′ . e e q ′ )
?-→ • q p e e = ⟨q||e⟩ c

• Case ⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩: We have:

⟨µ t p.⟨p|| t p⟩||e⟩ c = (λk. p p k) e e -→ a p p e e = ⟨p||e⟩ c

• Case c ⇝ c ′ ⇒ ⟨µ t p.c ||e⟩ ⇝ ⟨µ t p.c ′ ||e⟩: Using the induction hypothesis for c t p , we get:

⟨µ t p.c ||e⟩ c = (λk. c t p k) e e -→ a c t p e e *-→ β + t = a c ′ t p e e a ←- (λk. c ′ t p k) e e = ⟨µ t p.c ′ ||e⟩ c

• Case t → t ′ ⇒ c[t] ⇝ c[t ′ ]: As such, the translation does not allow an analysis of this case, mainly because we did not give an explicit small-step semantics for terms, and defined terms reduction through a big-step semantics:

∀α, ⟨p||α⟩ *⇝ ⟨(t, q)||α⟩ ⇒ wit p → t

However, we claim that we could have extended the language of dL t p with commands for terms c t ::= ⟨t ||e t ⟩ (together with the corresponding term contexts e t ), with the reduction rules:

⟨V t || μx .c t ⟩ ⇝ c t [V t /x]
⟨wit (V t , V p )||e t ⟩ ⇝ ⟨V t ||e t ⟩
⟨V p || μ ť p.⟨ ť p||e⟩⟩ ⇝ ⟨V p ||e⟩
c ⇝ c ′ ⇒ ⟨p|| μ ť p.c⟩ ⇝ ⟨p|| μ ť p.c ′ ⟩

It is worth noting that these rules simulate the big-step definitions we had before while preserving the global call-by-value strategy. Defining the translation for terms in the extended syntax:

wit V t t ≜ λk.k (wit V t V t )
wit p t ≜ λk. p p (λ • q.k (wit q))
μ ť p.c t t ≜ c t t
μx .c t ≜ λ • x . c c
⟨t ||e t ⟩ t ≜ t t e t t
ť p p ≜ λ • k.k

We can then prove that each reduction rule satisfies the expected scheme.

Case ⟨λx .p||t • e⟩ ⇝ ⟨µ t p.⟨t || μx .⟨p|| t p⟩⟩||e⟩: We have:

⟨λx .p||t • e⟩ c = (λk.k (λ • x . p p )) (λp.( t t (λ • v.p v)) e e )
-→ a -→ a ( t t (λ • v.(λ • x . p p ) v)) e e
?-→ • ( t t (λ • x . p p )) e e
a ←- (λk.( t t (λ • x . p p )) k) e e = ⟨µ t p.⟨t || μx .⟨p|| t p⟩⟩||e⟩ c

Case ⟨wit (V t , V p )||e t ⟩ ⇝ ⟨V t ||e t ⟩: We have:

wit (V t , V p ) t e t t = (λk.k (wit ( V t V t , V p V ))) e t t
-→ a e t t (wit ( V t V t , V p V ))
-→ • e t t V t V t
a ←- (λk.k V t V t ) e t t = V t t e t t

Case ⟨V t || μx .c t ⟩ ⇝ c t [V t /x]: We have:

V t t μx .c t = (λk.k V t V t ) (λ • x . c c ) -→ a (λ • x . c c ) V t V t -→ • c c [ V t V t /x] = c[V t /x] c

Case ⟨V || μ ť p.⟨ ť p||e⟩⟩ ⇝ ⟨V ||e⟩: We have:

V p μ ť p.⟨ ť p||e⟩ e = (λk.k V V ) ((λk.k) e e ) -→ a ((λk.k) e e ) V V -→ a e e V V a ←- (λk.k V V ) e e = ⟨V ||e⟩ c

Case c ⇝ c ′ ⇒ ⟨V || μ ť p.c⟩ ⇝ ⟨V || μ ť p.c ′ ⟩: This case is similar to the case for delimited continuations proved before, we only need to use the induction hypothesis for c c to get:

V p μ ť p.c e = (λk.k V V ) c c -→ a c c V V *-→ β + t V V = a c ′ c V V a +←- (λk.k V V ) c ′ c = V p μ ť p.c ′ e □

Proposition 4.6.
There is no infinite sequence only made of reductions: (1) ⟨subst p q||e⟩ p V ⇝ ⟨p|| μa.⟨subst a q||e⟩⟩ (2) ⟨subst refl q||e⟩ ⇝ ⟨q||e⟩ (3) ⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩ (4) c[t] ⇝ c[t ′ ] Proof. It is sufficient to observe that if we define the following quantities: (1) the quantity of subst p q with p not a value within a command, (2) the quantity of subst within a command, (3) the quantity of t p within a command, (4) the quantity of wit terms within a command. then the rule (1) makes quantity (1) decrease while preserving the others. Likewise, (2) decreases quantity (2) preserves the other, and so on. All in all, we have a bound on the maximal number of steps for the reduction restricted to these four rules. □ Proposition 4.7 (Preservation of normalization). If c c normalizes, then c is also normalizing. Proof. Reasoning by contraposition, let us assume that c is not normalizing. Then in any infinite reduction sequence from c, according to the previous proposition, there are infinitely many steps that are reflected through the CPS into at least one distinguished step (Proposition 4.5). Thus, there is an infinite reduction sequence from c c too. □ Proof. Using the preservation of typing that we shall prove in the next section (Proposition 4.10), we know that if c is typed in dL t p , then its image c c is also typed. Using the fact that typed terms of the target language are normalizing, we can finally apply the previous proposition to deduce that c normalizes. □ Translation of types We can now define the translation of types in order to show further that the translation p p of a proof p of type A is of type A * . The type A * is the double-negation of a type A + that depends on the structure of A. Thanks to the restriction of dependent types to nef proof terms, we can interpret a dependency in p (resp. t) in dL t p by a dependency in p + (resp. t + ) in the target language. Lemma 4.1 indeed guarantees that the translation of a nef proof p will eventually return p + to the continuation it is applied to. The translation is defined by: A * ≜ ( A + → ⊥) → ⊥ t = u + ≜ t + = u + ∀x N .A + ≜ ∀x N . A * ⊤ + ≜ ⊤ ∃x N .A + ≜ ∃x N . A + ⊥ + ≜ ⊥ Π a:A B + ≜ Π a: A + B * N + ≜ N Observe that types depending on a term of type T are translated to types depending on a term of the same type T , because terms can only be of type N. As we shall discuss in Section 6.2, this will no longer be the case when extending the domain of terms. To extend the translation for types to the translation of contexts, we consider that we can unify left and right contexts into a single one that is coherent with respect to the order in which the hypotheses have been introduced. We denote this context by Γ ∪ ∆, where the assumptions of Γ remain unchanged, while the former assumptions (α : A) in ∆ are denoted by (α : A ⊥ ⊥ ). The translation of unified contexts is given by: Γ, a : A ≜ Γ + , a : A + Γ, x : N ≜ Γ + , x : N Γ, α : A ⊥ ⊥ ≜ Γ + , α : A + → ⊥. As explained informally in Section 2.8 and stated by Lemma 4.1, the translation of a nef proof term p of type A uses its continuation linearly. In particular, this allows us to refine its type to make it parametric in the return type of the continuation. From a logical point of view, it amounts to replacing the double-negation (A → ⊥) → ⊥ by Friedman's translation [START_REF] Friedman | Classically and intuitionistically provably recursive functions[END_REF]: ∀R.(A → R) → R. 
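The computational counterpart of this refinement can be sketched in OCaml (bot, dneg, param, ret and extract are our own illustrative names): a proof whose translation is parametric in the return type must call its continuation, so that a witness is extracted by passing the identity, exactly as stated by Lemma 4.1.

    type bot = |                                      (* empty type, playing ⊥ *)
    type 'a dneg = ('a -> bot) -> bot                 (* (A -> ⊥) -> ⊥ *)
    type 'a param = { run : 'r. ('a -> 'r) -> 'r }    (* ∀R.(A -> R) -> R *)

    (* A value of type 'a param cannot forge an 'r on its own, hence it
       must call its continuation; applying the identity then extracts
       the underlying value, mirroring t_t (fun x -> x) →* t+. *)
    let ret (a : 'a) : 'a param = { run = (fun k -> k a) }
    let extract (p : 'a param) : 'a = p.run (fun a -> a)

By contrast, nothing can be extracted in general from a value of type 'a dneg, whose answer type is fixed to ⊥.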
It is worth noticing the correspondences with the continuation monad [START_REF] Filinski | Representing monads[END_REF]. Also, we make plain use here of the fact that the nef fragment is intuitionistic, so to speak. Indeed, it would be impossible to attribute this type 23 to the translation of a (really) classical proof. Moreover, we can even make the return type of the continuation dependent on its argument (that is a type of the shape Π a:A R(a)), so that the type of p p will correspond to the elimination rule: ∀R.(Π a:A R(a) → R(p + )). This refinement will make the translation of nef proofs compatible with the translation of delimited continuations. Lemma 4.9 (Typing translation for nef proofs). The following holds: (1) For any term t, if Γ ⊢ t : N | ∆ then Γ ∪ ∆ ⊢ t t : ∀X .(∀x N .X (x) → X (t + )). (2) For any nef proof p, if Γ ⊢ p : A | ∆ then Γ ∪ ∆ ⊢ p p : ∀X .(Π a: A + X (a) → X (p + ))). (3) For any nef command c, if c : (Γ ⊢ ∆, ⋆ : B) then Γ ∪ ∆ , ⋆ : Π b:B + X (b) ⊢ c c : X (c + )). Proof. The proof is done by induction on typing derivations. We only give the key cases of the proof. • Case (µ). For this case, we could actually conclude directly using the induction hypothesis for c. Rather than that, we do the full proof for the particular case µ⋆.⟨p|| μa.⟨q||⋆⟩⟩, which condensates the proofs for µ⋆.c and the two possible cases ⟨p N ||e N ⟩ and ⟨p N ||⋆⟩ of nef commands. This case corresponds to the following typing derivation in dL t p : Π p Γ ⊢ p : A | ∆ Π q Γ, a : A ⊢ q : B | ∆ • • • | ⋆ : B ⊢ ∆, ⋆ : B ⟨q||⋆⟩ : Γ, a : A ⊢ ∆, ⋆ : B the reduction rules for the fragment of Lepigre's calculus we use, for the type system we refer the reader to [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]: Values Terms Stacks Processes Formulas v, w ::= x | λx .t | {l 1 = v 1 , l 2 = v 2 } t, u ::= a | v | t u | µα .t | p | v.l i π , ρ ::= α | v • π | [t]π p, q ::= t * π A, B ::= X n (t 1 , . . . , t n ) | A → B | ∀a.A | ∃a.A | ∀X n .A | {l 1 : A 1 , l 2 : A 2 } | t ∈ A The reduction ≻ is defined as the smallest relation satisfying: t u * π ≻ u * [t]π v * [t]π ≻ t * v • π λx .t * v • π ≻ t[x := v] * π µα .t * π ≻ t[α := π ] * π p * π ≻ p (v 1 , v 2 ).l i ≻ v i It is worth noting that the call-by-value strategy is obtained via the construction [t]π which allows to evaluate the argument of t to a value before pushing it onto the stack. Even though records are only defined for values, we can define pairs and projections as syntactic sugar: ( t 1 , t 2 ) ≜ (λv 1 v 2 .{l 1 = v 1 , l 2 = v 2 }) t 1 t 2 fst(t) ≜ (λx .(x .l 1 )) t snd(t) ≜ (λx .(x .l 2 )) t A 1 ∧ A 2 ≜ {l 1 : A 1 , l 2 : A 2 } Similarly, only values can be pushed on stacks, but we can define processes 25 with stacks of the shape t • π as syntactic sugar: t * u • π ≜ tu * π We first define the translation for types (extended for typing contexts) where the predicate Nat(x) is defined 26 as usual in second-order logic: Nat(x) ≜ ∀X .(X (0) → ∀y.(X (y) → X (S(y))) → X (x)) and t t is the translation of the term t given in Figure 11. 
(∀x N .A) * ≜ ∀x .(Nat(x) → A * ) (∃x N .A) * ≜ ∃x .(Nat(x) ∧ A * ) (t = u) * ≜ ∀X .(X ( t t ) → X ( u t )) ⊤ * ≜ ∀X .(X → X ) ⊥ * ≜ ∀XY .(X → Y ) (Π a:A B) * ≜ ∀a.((a ∈ A * ) → B * ) (Γ, x : N) * ≜ Γ * , x : Nat(x) (Γ, a : A) * ≜ Γ * , a : A * (Γ, α : A ⊥ ⊥ ) * ≜ Γ * , α : ¬A * Note that the equality is mapped to Leibniz equality, and that the definitions of ⊥ * and ⊤ * respectively correspond to (0 = 1) * and (0 = 0) * in order to make the conversion rule admissible through the translation. The translation for terms, proofs, contexts and commands of dL t p , given in Figure 11 is almost straightforward. We only want to draw the reader's attention on a few points: • the equality being translated as Leibniz equality, refl is translated as the identity λa.a, which also matches with ⊤ (α). By hypothesis, σ realizes Γ, α : A ⊥ ⊥ from which we obtain ασ = σ (α) ∈ A ⊥ σ . ( * ). We need to show that tσ * πσ ∈ B ⊥⊥ σ , so we take ρ ∈ B ⊥ σ and show that (tσ * πσ ) * ρ ∈ ⊥ ⊥. By anti-reduction, it is enough to show that (tσ * πσ ) ∈ ⊥ ⊥. This is true by induction hypothesis, since tσ ∈ A ⊥⊥ σ and πσ ∈ A ⊥ σ . (µ). The proof is the very same as in [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Theorem 6]. (∀ l ). By induction hypothesis, we have that πσ ∈ A[x := t] ⊥ σ . We need to show the inclusion A[x := t] ⊥ σ ⊆ ∀x .A ⊥ σ , which follows from ∀x .A σ = t ∈Λ A[x := t] σ ⊆ A[x := t] σ . (⇒ l ). If t is a value v, by induction hypothesis, we have that vσ ∈ A σ and πσ ∈ B ⊥ σ , and we need to show that vσ • πσ ∈ A ⇒ B ⊥ σ . The proof is already done in the case (⇒ e ) (see [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Theorem 6]). Otherwise, by induction hypothesis, we have that tσ ∈ A ⊥⊥ σ and πσ ∈ B ⊥ σ , and we need to show that tσ • πσ ∈ A ⇒ B ⊥ σ . So we consider λx .u ∈ A ⇒ B σ , and show that λx .u * tσ • πσ ∈ ⊥ ⊥. We can take a reduction step, and prove instead that tσ * [λx .u]πσ ∈ ⊥ ⊥. This amounts to showing that [λx .u]π ∈ A ⊥ σ , which is already proven in the case (⇒ e ). (let). We need to show that for all v ∈ A σ , v * [tσ ]πσ ∈ ⊥ ⊥. Taking a step of reduction, it is enough to have tσ * v •πσ ∈ ⊥ ⊥. This is true since by induction hypothesis, we have tσ ∈ A ⇒ B ⊥⊥ Proof. The proof is an easy induction on the typing derivation Γ ⊢ p : A | ∆. Note that in a way, the translation of a delimited continuation decompiles it to simulate in a natural deduction fashion the reduction of the applications of functions to stacks (that could have generated the same delimited continuations in dL t p ), while maintaining the frozen context (at top-level) outside of the active command (just like a delimited continuation would do). This trick allows us to avoid the problem of dependencies conflict in the typing derivation. For instance, assuming that q 1 p (resp. q 2 p ) reduces to a value V 1 (resp. V 2 ) . p p [V 1 /a 1 ]] e e ≻ * p p [ V 1 p /a 1 ][ V 2 p /a 2 ] * e e * ≺ q 2 p * [λa 2 . p p [V 1 /a 1 ]] e e * ≺ q 1 p * [λa 1 a 2 . p p )] q 2 p • e e * ≺ (λa 1 a 2 . p p ) * q 1 p • q 2 p • e e = ⟨λa 1 λa 2 .p||q 1 • q 2 • e⟩ c where we observe that e e is always kept outside of the computations, and where each command ⟨q i || μa i .c t p ⟩ is decompiled into (µα . q i p * [λa i . c t p t p ].α) * e e , simulating the (natural deduction style) reduction of λa i . c t p t p * q i p • e e . These terms correspond somehow to the translations of former commands typable without types dependencies. 
□ As a corollary we get a proof of the adequacy of dL t p typing rules with respect to Lepigre's realizability model. This immediately implies the soundness of dL t p : Theorem 5.4 (Soundness). For any proof p in dL t p , we have: ⊬ p : ⊥. Proof. By contradiction, if we had a closed proof p of type ⊥, it would be translated as a realizer of ⊤ → ⊥. Therefore, p p λx .x would be a realizer of ⊥, which is impossible. □ Furthermore, the translation clearly preserves normalization (in the sense that for any c, if c does not normalize then neither does c c ), and thus the normalization of dL t p is a consequence of adequacy. It is worth noting that without delimited continuations, we would not have been able to define an adequate translation, since we would have encountered the same problem 27 than with a naive CPS translation (see Section 2.8). FURTHER EXTENSIONS As we explained in the preamble of Section 2, we defined dL and dL t p as small languages containing all the potential sources of inconsistency we wanted to mix: classical control, dependent types, and a sequent calculus presentation. It had the benefit to focus our attention on the difficulties inherent to the issue, but on the other hand, the language we obtain is far from being as expressive as other usual proof systems. We claimed our system to be extensible, thus we shall now discuss this matter. Intuitionistic sequent calculus There is not much to say on this topic, but it is worth mentioning that dL and dL t p could be easily restricted to obtain an intuitionistic framework. Indeed, just like for the passage from LK to LJ, it is enough to restrict the syntax of proofs to allow only one continuation variable (that is one conclusion on the right-hand side of sequent) to obtain an intuitionistic calculus. In particular, in such a setting, all proofs will be nef, and every result we obtained will still hold. Extending the domain of terms Throughout the paper, we only worked with terms of a unique type N, hence it is natural to wonder whether it is possible to extend the domain of terms in dL t p , for instance with terms in the simply-typed λ-calculus. A good way to understand the situation is to observe what happens through the CPS translation. We saw that a term t of type T = N is translated into a proof t * which is roughly of type T * = ¬¬T + = ¬¬N, from which we can extract a term t + of type N. However, if T was for instance the function type N → N (resp. T → U ), we would only be able to extract a proof of type T + = N → ¬¬N (resp. T + → U * ). There is no hope in general to extract a function f : N → N from such a term, since such a proof could be of the form λx .p, where p might backtrack to a former position, for instance before it was extracted, and furnish another proof. Such a proof is no longer a witness in the usual sense, but rather a realizer of f ∈ N → N in the sense of Krivine classical realizability. This accounts for a well-know phenomenon in classical logic, where witness extraction is limited to formulas in the Σ 1 0 -fragment [START_REF] Miquel | Existential witness extraction in classical realizability and via a negative translation[END_REF]. It also corresponds to the type we obtain for the image of a dependent product Π a:A B, that is translated to a type ¬¬Π a:A + B * where the dependence is in a proof of type A + . 
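The situation can be summarized by a small OCaml sketch (again with our own illustrative names): the image of the type N → N is a function whose codomain is double-negated, so each application yields a computation still waiting for a continuation, which a classical proof may use to backtrack.

    type bot = |
    type 'a dneg = ('a -> bot) -> bot

    (* Image of N -> N through the translation: applying the "function"
       does not return an integer but a computation expecting a
       continuation for one, so no plain int -> int can be read off. *)
    type classical_fun = int -> int dneg

    let apply (f : classical_fun) (n : int) (k : int -> bot) : bot = f n k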
This phenomenon is not surprising and was already observed for other CPS translations for type theories with dependent types [START_REF] Barthe | CPS translations and applications: The cube and beyond[END_REF]. Nevertheless, if the extraction is not possible in the general case, our situation is more specific. Indeed, we only need to consider proofs that are obtained as translation of terms, which can only contains nef proofs in dL t p . In particular, such proofs cannot drop continuations (remember that this was the whole point of the restriction to the nef fragment). Therefore, we could again refine the translation of types, similarly to what we did in Lemma 4.9. Once more, this refinement would also coincide with a computational property similar to Lemma 4.1, expressing the fact that the extraction can be done simply by passing the identity as a continuation 28 . This witnesses the fact that for any function t in the source language, there exists a term t + in the target language which represents the same function, even though the translation of t is a proof t . To sum up, this means that we can extend the domain of terms in dL t p (in particular, it should affect neither the subject reduction property nor the soundness), but the stratification between terms and proofs is to be lost through a CPS translation. If the target language is a non-stratified type theory (most of the presentations of type theory correspond to this case), then it becomes possible to force the extraction of terms through the translation. Another solution would consist in the definition of a separate translation for terms. Indeed, as it was reflected by Lemma 4.1, since neither terms nor nef proofs may contain continuations, they can be directly translated. The corresponding translation is actually an embedding which maps every pure term (without wit p) to itself, and which performs the reduction of nef proofs p to proofs p + so as to eliminate every µ binder. Such a translation would intuitively reflect an abstract machine where the reduction of terms (and the nef proofs inside) is performed in an external machine. If this solution is arguably a bit ad hoc, it is nonetheless correct and it is maybe a good way to take advantage of the stratified presentation. Adding expressiveness From the point of view of the proof language (that is of the tools we have to build proofs), dL t p only enjoys the presence of a dependent sum and a dependent product over terms, as well as a dependent product at the level of proofs (which subsumes the non-dependent implication). If this is obviously enough to encode the usual constructors for pairs (p 1 , p 2 ) (of type A 1 ∧ A 2 ), injections ι i (p) (of type A 1 ∨ A 2 ), etc..., it seems reasonable to wonder whether such constructors can be directly defined in the language of proofs. In fact, this is the case, and we claim that is possible to define the constructors for proofs (for instance (p 1 , p 2 )) together with their destructors in the contexts (in that case μ(a 1 , a 2 ).c), with the appropriate typing rules. In practice, it is enough to: • extend the definitions of the nef fragment according to the chosen extension, • extend the call-by-value reduction system, opening if needed the constructors to reduce them to a value, • in the dependent typing mode, make some pattern-matching within the list of dependencies for the destructors. 
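Following this recipe, and before the case of pairs is worked out in detail just below, here is a sketch of what the analogous extension with binary sums (the injections ι i (p) mentioned earlier) could look like. The case-analysis context µ̃[a1.c1 | a2.c2] is our own illustrative notation, not part of dL t p :

    p ::= ··· | ι1(p) | ι2(p)        e ::= ··· | µ̃[a1.c1 | a2.c2]
    ⟨ι i (p)||e⟩ ⇝ ⟨p|| µ̃a.⟨ι i (a)||e⟩⟩   (p ∉ V)        ⟨ι i (V)|| µ̃[a1.c1 | a2.c2]⟩ ⇝ c i [V/a i ]

In the dependent typing mode, the destructor would pattern-match within the list of dependencies against {a i | p}, exactly as prescribed by the third point above.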
The soundness of such extensions can be justified either by extending the CPS translation, or by defining a translation to Lepigre's calculus (which already allows records and pattern-matching over general constructors) and proving the adequacy of the translation with respect to the realizability model. For instance, for the case of the pairs, we can extend the syntax with: We then need to add the corresponding typing rules (plus a third rule to type μ(a 1 , a 2 ).c in regular mode): Γ ⊢ p 1 : A 1 | ∆ Γ ⊢ p 2 : A 2 | ∆ Γ ⊢ ( )||e⟩⟩⟩ ⟨(V 1 , V 2 )|| μ(a 1 , a 2 ).c⟩ ⇝ c[V 1 /a 1 , V 2 /a 2 ] We let the reader check that these rules preserve subject reduction, and suggest the following CPS translations: (p 1 , p 2 ) p ≜ λ • k. p 1 p (λ • a 1 . p 2 p (λ • a 2 .k (a 1 , a 2 ))) (V 1 , V 2 ) V ≜ λ • k.k ( V 1 V , V 2 V ) μ(a 1 , a 2 ) .c e ≜ λp. split p as (a 1 , a 2 ) in c c which allow us to prove that the calculus remains correct with these extensions. We claim that this methodology furnishes a good approach to handle the question "Can I extend the language with ... ?". In particular, it should be enough to get closer to a realistic programming language and extend the language with inductive fixed point operators 29 . CONCLUSION Several directions remain to be explored. We plan to investigate possible extensions of the syntactic restriction we defined, and its connections with notions such as Fürhmann's thunkability [START_REF] Führmann | Direct models for the computational lambda calculus[END_REF] or Munch-Maccagnoni's linearity [START_REF] Munch-Maccagnoni | Models of a Non-associative Composition[END_REF]. Moreover, it might be of interest to check whether this restriction could make dependent types compatible with other side effects, in presence of classical logic or not. More generally, we would like to better understand the possible connections between our calculus and the categorical models for dependently typed theory. On a different perspective, the continuation-passing style translation we defined is at the best of our knowledge a novel contribution, even without considering the classical part. In particular, our translation allows us to use computations (as in the call-by-push value terminology) within dependent types with a call-by-value evaluation strategy, and without any thunking construction. It might be the case that this translation could be adapted to justify extensions of other dependently typed calculi, or provide typed translations between them. Last but not least, we plan to present an application of dL t p to solve the problem that was our original motivation to design such a calculus. In [START_REF] Miquey | Classical realizability and side-effects[END_REF]Chapter 8], we present a sequent calculus equivalent to Herbelin's dPA ω [START_REF] Herbelin | A constructive proof of dependent choice, compatible with classical logic[END_REF], whose presentation is inspired from dL t p . This leads to the definition of a realizability model inspired from Lepigre's construction and from another technique developed with Herbelin [START_REF] Miquey | Realizability interpretation and normalization of typed call-by-need λ-calculus with control[END_REF] to give a realizability interpretation to calculi with laziness and memory sharing (two features of dPA ω ). As a consequence, we deduce the normalization and the soundness of our system. 
Proofs      p ::= a | λa.p | µα.c          ⟨p|| µ̃a.c⟩ → c[a := p]   (p ∈ V)
Contexts    e ::= α | p · e | µ̃a.c          ⟨µα.c||e⟩ → c[α := e]   (e ∈ E)
Commands    c ::= ⟨p||e⟩                     ⟨λa.p||u · e⟩ → ⟨u|| µ̃a.⟨p||e⟩⟩
(a) Syntax                                   (b) Reduction rules
Fig. 1. The λµ µ̃-calculus
Fig. 4. Typing rules of dL
Fig. 5. λµ µ̃-calculus with pairs
Theorem 2.13. If c : (Γ ⊢ ∆; ε), then c normalizes.
[Proof fragments displaced by extraction; the recoverable cases concern the simulation of reduction through (·)*:]
… ⟨a2||e*⟩ ↣ ⟨V*||e*⟩ = (⟨V||e⟩)*
• Case ⟨subst refl q||e⟩ ⇝ ⟨q||e⟩: (⟨subst refl q||e⟩)* = ⟨µα.⟨q*||α⟩||e*⟩ ↣ ⟨q*||e*⟩ = (⟨q||e⟩)*
• Case ⟨subst p q||e⟩ ⇝ ⟨p|| µ̃a.⟨subst a q||e⟩⟩ (with p ∉ V): (⟨subst p q||e⟩)* = ⟨µα.⟨p*|| µ̃_.⟨µα.⟨q*||α⟩||α⟩⟩||e*⟩ ↣ ⟨p*|| µ̃_.⟨µα.⟨q*||α⟩||e*⟩⟩ ↣ ⟨µα.⟨q*||α⟩||e*⟩ = (⟨subst a q||e⟩)* □
Fig. 6. dL t p : extension of dL with delimited continuations
Fig. 8. Target language
Moreover, we can verify that the translation preserves the reduction:
Proposition 4.2. If c, c′ are two commands of dL t p such that c ⇝ c′, then c c =β c′ c .
Fig. 10. Linearity of the translation for nef proofs
Proposition 4.3. If u →• u′ and t[u′] does not normalize, then neither does t[u].
Proposition 4.4. [Statement truncated in extraction; the proof proceeds by cases on the reduction rules, among which:]
• Case ⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩: ⟨µ t p.⟨p|| t p⟩||e⟩ c = (λk. p p k) e e →a p p e e = ⟨p||e⟩ c .
• Case c ⇝ c′ ⇒ ⟨µ t p.c||e⟩ ⇝ ⟨µ t p.c′||e⟩: by induction hypothesis, c c →*β+ t =a c′ c for some term t; therefore ⟨µ t p.c||e⟩ c = (λk. c c k) e e →a c c e e →*β+ t e e =a c′ c e e ←a (λk. c′ c k) e e = ⟨µ t p.c′||e⟩ c .
Theorem 4.8 (Normalization). If c : Γ ⊢ ∆, then c normalizes.
… σ and πσ ∈ B ⊥ σ, thus v · πσ ∈ A ⇒ B ⊥ σ. □
It only remains to show that the translation defined in Figure 11 preserves typing to conclude the proof of Proposition 5.3.
Lemma 5.2. If Γ ⊢ p : A | ∆ (in dL t p ), then (Γ ∪ ∆)* ⊢ p p : A* (in Lepigre's extended system). The same holds for contexts, and if c : Γ ⊢ ∆ then (Γ ∪ ∆)* ⊢ c c : ⊥.
Proposition 5.3 (Adequacy). If Γ ⊢ p : A | ∆ and σ is a substitution realizing (Γ ∪ ∆)*, then p p σ ∈ A* ⊥⊥ σ.
Syntax of the extension with pairs and delimited continuations (bodies of Figs. 5-6):
p ::= ··· | (p1, p2)        e ::= ··· | µ̃(a1, a2).c | µ t p.c t p
Delimited continuations    c t p ::= ⟨p N ||e t p⟩ | ⟨p|| t p⟩        e t p ::= µ̃a.c t p        e t ::= µ̃x.c[t]        c[ ] ::= ⟨([ ], p)||e⟩ | ⟨λx.p||[ ] · e⟩
with dual operators ťp / µ̃ťp for (co-)delimited continuations, allowing a small-step definition of the reduction of terms:
⟨λx.p||t · e⟩ ⇝ ⟨µ t p.⟨t|| µ̃x.⟨p|| t p⟩⟩||e⟩        ⟨wit p||e t⟩ ⇝ ⟨p|| µ̃a.⟨wit a||e t⟩⟩        ⟨(t, p)||e⟩ ⇝ ⟨p|| µ̃ťp.⟨t|| µ̃x.⟨ťp|| µ̃a.⟨(x, a)||e⟩⟩⟩⟩
[The detailed computations checking the cases ⟨λx.p||t · e⟩, ⟨(t, p)||e⟩, ⟨wit p||e t⟩ and ⟨wit (V t , V p )||e t⟩ through the translation are garbled in extraction.]
Translation of stacks:  q · e e ≜ q p · e e        t · e e ≜ t t · e e        µ̃a.c e ≜ [λa. c c ]·
(•). By definition, we have ⊥ σ = ∀X.X σ = ∅, thus for any stack π, we have π ∈ ⊥ ⊥ σ = Π.
In particular, • ∈ ⊥ ⊥ σ . x t ≜ x (t, p) p ≜ ( t t , p p ) q n t ≜ λzs.s n (z) µα .c p ≜ µα . c c wit p t ≜ π 1 ( p p ) prf p p ≜ π 2 ( p p ) a p ≜ a refl p ≜ λa.a λa.p p ≜ λa. p p subst p q p ≜ p p q p λx .p p ≜ λx . p p α e ≜ α * , we have: ⟨µ t p.⟨q 1 || μa 1 .⟨q 2 || μa 2 .⟨p|| t p⟩⟩⟩||e⟩ c = µα .(µα .( q 1 p * [λa 1 . ⟨q 2 || μa 2 .⟨p|| t p⟩⟩ t p ]α) * α) * e e ≻ µα .( q 1 p * [λa 1 . ⟨q 2 || μa 2 .⟨p|| t p⟩⟩ t p ]α) * e e ≻ q 1 p * [λa 1 . ⟨q 2 || μa 2 .⟨p|| t p⟩⟩ t p ] e e ≻ * q 2 p * [λa 2 p 1 , p 2 ) : (A 1 ∧ A 2 ) | ∆ ∧ r c : Γ, a 1 : A 1 , a 2 : A 2 ⊢ d ∆, t p : B; σ {(a 1 , a 2 )|p} Γ | μ(a 1 , a 2 ).c : (A 1 ∧ A 2 ) ⊢ d ∆, t p : B; σ {•|p} ∧ land the reduction rules:⟨(p 1 , p 2 )||e⟩ ⇝ ⟨p 1 || μa 1 .⟨p 2 || μa 2 .⟨(a 1 , a 2 Aside from strictly logical considerations as in[START_REF] Herbelin | A constructive proof of dependent choice, compatible with classical logic[END_REF], there are motivating examples of programs that could only be written and specified in such a setting. Consider for instance the infinite tape lemma that states that from any infinite sequence of natural numbers, one can extract either an infinite sequence of odd numbers, or an infinite sequence of even numbers. Its proof deeply relies on classical logic, and the corresponding program (which, given as input a stream of integers, returns a stream that consists either only of odd integers or only of even ones) can only be written in a classical setting and requires dependent types to be specified. See[START_REF] Lepigre | Semantics and Implementation of an Extension of ML for Proving Programs[END_REF] Section 7.8] for more details. In the sense of a formulas-as-types interpretation of a sequent calculus à la Hilbert (as Curien-Herbelin's λµ μ-calculus[START_REF] Curien | The duality of computation[END_REF] or Munch-Maccagnoni's system L[START_REF] Munch-Maccagnoni | Focalisation and Classical Realisability[END_REF]), as opposed to traditional type systems given in a natural deduction style. This formula is often referred to as the formula in the stoup, a terminology due to Girard. Observe that this critical pair can be also interpreted in terms of non-determinism. Indeed, we can define a fork instruction by ⋔≜ λab .µα . ⟨µ_⟨a ||α ⟩ || μ_. ⟨b ||α ⟩ ⟩, which verifies indeed that ⟨⋔||p 0 • p 1 • e ⟩ → ⟨p 0 ||e ⟩ and ⟨⋔||p 0 • p 1 • e ⟩ → ⟨p 1 ||e ⟩. To pursue the analogy with the λ-calculus, the rest of the stack e can be viewed as a context C e [ ] surrounding the application p q, the command ⟨p ||q • e ⟩ thus being identified with the term C e [p q]. Similarly, the whole stack can be seen as the context C q•e [ ] = C e [[ ]q], whence the terminology. Technically this requires to extend the language to authorize the construction of terms call/cc k t and of proofs throw t . The first rule expresses that call/cc k captures the context wit { } and replaces every occurrence of throw k t with throw k (wit t ). The second one just expresses the fact that call/cc k can be dropped when applied to a term t which does not contain the variable k . This design choice is usually a matter of taste and might seem unusual for some readers. However, it has the advantage of exhibiting the different treatments for terms and proofs through the CPS in the next sections. The nature of the representation is irrelevant here as we will not compute over it. We can for instance add one constant for each natural number. 
The reader might recognize the rule (ς ) of Wadler's sequent calculus[START_REF] Wadler | Call-by-value is dual to call-by-name[END_REF]. Observe that the problem here arises independently of the value restriction (that is whether we consider that q is a value or not), and is peculiar to the sequent calculus presentation. Note that even if we were not restricting ourselves to values, this would still hold: if at some point the command ⟨p ||e ⟩ is executed, it is necessarily the case that q has produced a value to substitute for a. (µ) c : (Γ, a : A ⊢ ∆; σ {a|p}) Γ | μa.c : A ⊢ ∆; σ {•|p} ( μ) Γ, a : A ⊢ p : B | ∆; σ Γ ⊢ λa.p : Π a:A B | ∆; σ (→ r ) Γ ⊢ q : A | ∆; σ Γ | e : B[q/a] ⊢ ∆; σ {•| †} q D → a FV (B) Γ | q • e : Π a:A B ⊢ ∆; σ {•|p} (→ l ) Γ, x : N ⊢ p : A | ∆; σ Γ ⊢ λx .p : ∀x N .A | ∆; σ (∀ r ) Γ ⊢ t : N ⊢ ∆; σ Γ | e : A[t/x] ⊢ ∆; σ {•| †} Γ | t • e : ∀x N .A ⊢ ∆; σ {•|p} (∀ l ) Γ ⊢ t : N | ∆; σ Γ ⊢ p : A(t) | ∆; σ Γ ⊢ (t, p) : ∃x N .A(x) | ∆; σ (∃ r ) Γ ⊢ p : ∃x N .A(x) | ∆; σ p ∈ D Γ ⊢ prf p : A(wit p) | ∆; σ prf Γ ⊢ p : A | ∆; σ A ≡ B Γ ⊢ p : B | ∆; σ (≡ r ) Γ | e : A ⊢ ∆; σ A ≡ B Γ | e : B ⊢ ∆; σ (≡ l ) Γ ⊢ p : t = u | ∆; σ Γ ⊢ q : B[t/x] | ∆; σ Γ ⊢ subst p q : B[u/x] | ∆; σ (subst) Γ ⊢ t : N | ∆; σ Γ ⊢ refl : t = t | ∆; σ (refl) Γ, x : N ⊢ x : N | ∆; σ (Ax t ) n ∈ N Γ ⊢ n : N | ∆; σ(Ax n ) Γ ⊢ p : ∃x .A(x) | ∆; σ p ∈ D Γ ⊢ wit p : N | ∆; σ (wit) In practice we will only bind a variable with a proof term, but it is convenient for proofs to consider this slightly more general definition. It is easy to convince ourselves that when typing a command ⟨p ||q • μa .c ⟩ with {• |p }, the "correct" dependency within c should be {a |µα ⟨p ||q • α ⟩ }, where the right proof is not a value. Furthermore, this dependency is irrelevant since there is no way to produce such a command where a type adjustment with respect to a needs to be made in c. That is to say let a = p 0 in subst (prf a) refl in natural deduction. (≡ l )where the hypothesis A ≡ B is implicit.• Case ⟨λx .p||t • e⟩ ⇝ ⟨p[t/x]||e⟩.A typing proof for the command on the left-hand side is of the form:Π p Γ, x : N ⊢ p : A | ∆ Γ ⊢ λx .p : ∀x N .A | ∆ (∀ r ) Π t Γ ⊢ t : N | ∆ Π e Γ | e : B[t/x] ⊢ ∆; {•| †} Γ | t • e : ∀x N .B ⊢ ∆; {•|λx .p} (∀ l ) Γ | t • e : ∀x N .A ⊢ ∆; {•|λx .p} (≡ l ) ⟨λx .p||t • e⟩ : Γ ⊢ ∆ (Cut)We first deduce A[t/x] ≡ B[t/x] from the hypothesis ∀x N .A ≡ ∀x N .B. Then, using the fact that Γ, x : N ⊢ p : A | ∆ and Γ ⊢ t : N | ∆, by Lemma 2.6 and the fact that∆[t/x] = ∆, we get a proof Π ′ p of Γ ⊢ p[t/x] : A[t/x] | ∆.We can thus build the following derivation:Π ′ p Γ ⊢ p[t/x] : A[t/x] | ∆ Π e Γ | e : B[t/x] ⊢ ∆; {•|p[t/x]} Γ | e : A[t/x] ⊢ ∆; {•|p[t/x]} (≡ l ) ⟨p[t/x]||e⟩ : Γ ⊢ ∆ (Cut)using Corollary 2.5 to weaken the binding to p[t/x] in Π e .• Case ⟨λa.p||q • e⟩ ⇝ ⟨q|| μa.⟨p||e⟩⟩.A typing proof for the command on the left-hand side is of the form:Π p Γ, a : A ⊢ p : B | ∆ Γ ⊢ λa.p : Π a:A B | ∆ (→ r ) Π q Γ ⊢ q : A ′ | ∆ Π e Γ | e : B ′ [q/a] ⊢ ∆; {•| †} Γ | q • e : Π a:A ′ B ′ ⊢ ∆; {•|λa.p} Γ | q • e : Π a:A B ⊢ ∆; {•|λa.p} (≡ l ) ⟨λa.p||q • e⟩ : Γ ⊢ ∆ (Cut)If q D, we define B ′ q ≜ B ′ which is the only type in B ′ {a |q } . Otherwise, we define B ′ q ≜ B ′ [q/a] which is a type in B ′ {a |q } . In both cases, we can build the following derivation: for the (cut)-rule. Consequently, it can also be dropped for all the other cases. The case of the conversion rule is a direct consequence of the third case. 
For refl, we have by definition that refl * = λx .x : N * → N * .The only non-direct cases are subst p q, with p not a value, and (t, p). To prove the former with p V , we have to show that if:Γ ⊢ p : t = u | ∆; σ Γ ⊢ q : B[t/x] | ∆; σ Γ ⊢ subst p q : B[u/x] | ∆; σ (subst)then subst p q * = µα .⟨p * || μ_ .⟨µα .⟨q * ||α⟩||α⟩⟩ : B[u/x] * . According to Lemma 2.10, we have that B[u/x] * = B[t/x] * = B * . By induction hypothesis, we have proofs of Γ * ⊢ p * : N * → N * | ∆ * and of Γ * ⊢ q * : B | ∆ * . Using the notation η q * ≜ µα .⟨q * ||α⟩, we can derive:Γ * ⊢ p * : N * → N * | ∆ * Γ * ⊢ q * : B * | ∆ * Γ * ⊢ η q * : B * | ∆ * α : B * ⊢ α : B * ⟨η q * ||α⟩ : Γ ⊢ ∆ * , α : B * (Cut) Γ * | μ_ .⟨η q * ||α⟩ : B * ⊢ ∆ * , α : B * ( μ) ⟨p * || μ_ .⟨η q * ||α⟩⟩ : Γ * ⊢ ∆ * , α : B * (Cut) Γ * ⊢ µα .⟨p * || μ_ .⟨η q * ||α⟩⟩ : B * | ∆ * (µ)The case subst V q is easy since (subst V q) * = q p has type B * by induction. Similarly, the proof for the case (t, p) corresponds to the following derivation: Γ * ⊢ p * :A * | ∆ * Γ * ⊢ t * : N | ∆ * a : A * ⊢ a : A * Γ * , a : A * ⊢ (t * , a) : N∧ A * | ∆ * (∧ r ) α : N∧ A * ⊢ α : N∧ A * ⟨(t * , a)||α⟩ : Γ, a : A * ⊢ ∆ * , α : N∧A * (Cut) We actually even consider α -conversion for delimited continuations t p, to be able to insert such terms inside a type. Even though this might seem strange at first sight, this will make sense when proving subject reduction. Everything works as if when reaching a state where the reduction of a term is needed, we had an extra abstract machine to reduce it. Note that this abstract machine could possibly need another machine itself for reducing proofs embedded in terms, etc. We could actually solve this by making the reduction of terms explicit, introducing for instance commands and contexts for terms with the appropriate typing rules. However, this is not necessary from a logical point of view and it would significantly increase the complexity of the proofs, therefore we rather chose to stick to the actual presentation. (µ) c : (Γ, a : A ⊢ ∆) Γ | μa.c : A ⊢ ∆ ( μ) Γ, a : A ⊢ p : B | ∆ Γ ⊢ λa.p : Π a:A B | ∆ (→ r ) Γ ⊢ q : A | ∆ Γ | e : B[q/a] ⊢ ∆ q D ⇒ a FV (B) Γ | q • e : Π a:A B ⊢ ∆ (→ l ) Γ, x : N ⊢ p : A | ∆ Γ ⊢ λx .p : ∀x N .A | ∆ (∀ r ) Γ ⊢ t : N ⊢ ∆ Γ | e : A[t/x] ⊢ ∆ Γ | t • e : ∀x N .A ⊢ ∆ (∀ l ) Γ ⊢ t : N | ∆ Γ ⊢ p : A(t) | ∆ Γ ⊢ (t, p) : ∃x N .A(x) | ∆ (∃ r ) Γ ⊢ p : ∃x N .A(x) | ∆ p ∈ D Γ ⊢ prf p : A(wit p) | ∆ prf Γ ⊢ p : A | ∆ A ≡ B Γ ⊢ p : B | ∆ (≡ r ) Γ | e : A ⊢ ∆ A ≡ B Γ | e : B ⊢ ∆ (≡ l ) Γ ⊢ p : t = u | ∆ Γ ⊢ q : B[t/x] | ∆ (→ I ) Γ ⊢ p : Π a:A B Γ ⊢ q : A Γ ⊢ p q : B[q/a] (→ E ) Γ, x : N ⊢ p : A Γ ⊢ λx .p : ∀x N A (∀ 1 I ) Γ ⊢ p : ∀x N .A Γ ⊢ t : N Γ ⊢ p t : A[t/x] (∀ 1 E ) Γ ⊢ p : A X FV (Γ) Γ ⊢ p : ∀X .A (∀ 2 I ) Γ ⊢ p : ∀X .A Γ ⊢ p : A[P/X ] (∀ 2 E ) Γ ⊢ t : N Γ ⊢ p : A[u/x] Γ ⊢ (t, p) : ∃x N A (∃ I ) Γ ⊢ p : ∃x N A Γ ⊢ prf p : A(wit p) (prf ) Γ ⊢ p : ∃x N A Γ ⊢ wit p : N (wit) Γ ⊢ refl : x = x (refl) In fact, we could define it formally, which would require a kind of co-delimited continuation. A classical proof might backtrack, thus it translation might use a former continuation. The return type of continuations thus need to be uniform (usually ⊥) and can not be parametrized by ∀R. (Cut) Γ | μa.⟨q||⋆⟩ : A ⊢ ∆, ⋆ : B ( μ) ⟨p|| μa.⟨q||⋆⟩⟩ : Γ | ∆, ⋆ : B (Cut) Γ ⊢ µ⋆.⟨p|| μa.⟨q||⋆⟩ | ∆⟩ : B (µ)We want to show that for any X we can derive:Γ ∪ ∆ ⊢ λk. p p (λa. 
q p k) : Π b:B X (b) → X (q + [p + /a]).By induction, we have:Γ ∪ ∆ ⊢ p p : ∀Y .(Π a:A + Y (a) → Y (p + )) Γ ∪ ∆ , a : A + ⊢ q t : ∀Z .(Π b:B + Z (b) → Z (q + )),so that by choosing Z (b) ≜ X (b) and Y (a) ≜ X (q + ), we get the expected derivation:Γ ∪ ∆ ⊢ p p : . . . Γ ∪ ∆ , a : A + ⊢ q p : . . . k : Π b:B X (b) ⊢ k : k : Π b:B X (b) Γ ∪ ∆ , k : Π b:B X (b), a : A + ⊢ q p k : X (q + ) (→ E ) In particular, Lepigre's semantical restriction is so permissive that it is not decidable, while it is easy to decide whether a proof term of dL t p is in nef. This will allow us to ease the definition of the translation to handle separately proofs and contexts. Otherwise, we would need formally to define ⟨p ||q • e ⟩ c all together by p p q p * e e . Where 0 is defined as λzs .z and S (t ) as (λzs .s(t zs)), i.e. as the translation of the corresponding 0 and successor from dL t p . ⟨p||e⟩ c ≜ p p * e e µ t p.c p ≜ µα . c t p ⟨p|| t p⟩ t p ≜ p p ⟨p|| μa.c⟩ t p ≜ (µα . p p * [λa. c t p ]α) * α Fig. 11. Translation of proof terms into Lepigre's calculusΓ ⊢ t : A Γ ⊢ π : A ⊥ ⊥ Γ ⊢ t * π : B * Γ ⊢ • : ⊥ ⊥ ⊥ • Γ, α : A ⊥ ⊥ ⊢ α : A ⊥ ⊥ α Γ, α : A ⊥ ⊥ ⊢ t : A Γ ⊢ µα .t : A µ Γ ⊢ π : (A[x := t]) ⊥ ⊥ That is, the translation q p * [λa . p p * e e ]• of a command ⟨q || μa . ⟨p ||e ⟩ ⟩ (where e is of type B[q] and p of type B[a]) would have been ill-typed (because p p * e e is). To be precise, for each arrow in the type, a double-negation (or its refinement) would be inserted. For instance, to recover a function of type N → N from a term t : ¬¬(N → ¬¬N) (where ¬¬A is in fact more precise, at least ∀R .(A → R) → R), the continuation needs to be forced at each level: λx .t I x I : N → N. We do not want to enter into to much details on this here, as it would lead us to much more than a paragraph to define the objects formally, but we claim that we could reproduce the results obtained for terms of type N in a language with terms representing arithmetic functions in finite types. The interested reader could see for instance[START_REF] Miquey | Classical realizability and side-effects[END_REF] Chapter 8] where a similar language with pairs, patter-matching, inductive and coinductive fixed points is defined. Acknowledgments. The author wishes to thank Pierre-Marie Pédrot for a discussion that led to the idea of using delimited continuations, Gabriel Scherer for his accurate observations and the constant interest he showed for this work, Hugo Herbelin who provided valuable help all along the writing of this paper, Rodolphe Lepigre for the example of the infinite tape lemma, as well as Alexandre Miquel and anonymous referees of this paper for their constructive remarks. Regular mode: We can thus build the following derivation for the command on the right: • Case ⟨prf p||e⟩ ⇝ ⟨µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩||e⟩. We prove it in the most general case, that is when this reduction occurs under a delimited continuation. A typing derivation for the command on the left to be of the form: The proof p being nef, so is µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩, and by definition of the reduction for types, we have for any type A that: so that we can prove that for any b: σ {b|prf p} ⇛ σ {b|µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩}. Thus, we can turn Π e into Π ′ e a derivation of the same sequent except for the list of dependencies that is changed to σ {•|µ t p.⟨p|| μa.⟨prf a|| t p⟩⟩}. We conclude the proof of this case by giving the following derivation: • Case ⟨µ t p.⟨p|| t p⟩||e⟩ ⇝ ⟨p||e⟩. 
This case is trivial, because in a typing derivation for the command on the left, t p is typed with an empty list of dependencies, thus the type of p, e and t p coincides. prf (t, p) → β p subst refl q → β q (a) Language and formulas (b) Reduction rules • Case (wit). In dL t p the typing rule for wit p is the following: We want to show that: By induction hypothesis, we have: hence, it amounts to showing that for any X we can build the following derivation: • Case (∃ I ). In dL t p the typing rule for (t, p) is the following: Hence, we obtain by induction: and we want to show that for any Z : So we need to prove that: We let the reader check that such a type is derivable by using X (x) ≜ Π a:A(x ) Z (x, a) in the type of t p , and using Y (a) ≜ Z (t + , a) in the type of p p : □ Using the previous Lemma, we can now prove that the CPS translation is well-typed in the general case. Proposition 4.10 (Preservation of typing). The translation is well-typed, i.e. the following holds: (1 Proof. The proof is done by induction on the typing derivation, distinguishing cases according to the typing rule used in the conclusion. It is clear that for the nef cases, Lemma 4.9 implies the result by taking X (a) = ⊥. The rest of the cases are straightforward, except for delimited continuations that we detail hereafter. We consider a command ⟨µ t p.⟨q|| μa.⟨p|| t p⟩⟩||e⟩ produced by the reduction of the command ⟨λa.p||q • e⟩ with q ∈ nef. Both commands are translated by a proof reducing to ( q p (λa. p p )) e e . The corresponding typing derivation in dL t p is of the form: A Classical Sequent Calculus with Dependent Types 1:39 By induction hypothesis for e and p we obtain: Applying Lemma 4.9 for q ∈ nef we can derive: We can thus derive that: Γ ∪ ∆ ⊢ q p (λa. p p ) : B[q + ] * , and finally conclude that: Γ ∪ ∆ ⊢ ( q p (λa. p p )) e e : ⊥ . □ We can finally deduce the correctness of dL t p through the translation: Theorem 4.11 (Soundness). For any p ∈ dL t p , we have: ⊬ p : ⊥. Proof. Any closed proof term of type ⊥ would be translated in a closed proof of (⊥ → ⊥) → ⊥. The correctness of the target language guarantees that such a proof cannot exist. □ EMBEDDING INTO LEPIGRE'S CALCULUS In a recent paper [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF], Lepigre presented a classical system allowing the use of dependent types with a semantic value restriction. In practice, the type system of his calculus does not contain a dependent product Π a:A B strictly speaking, but it contains a predicate a ∈ A allowing the decomposition of the dependent product into ∀a.((a ∈ A) → B) as it is usual in Krivine's classical realizability [START_REF] Krivine | Realizability in classical logic. In interactive models of computation and program behaviour[END_REF]. In his system, the relativization a ∈ A is restricted to values, so that we can only type V : V ∈ A: However, typing judgments are defined up to observational equivalence, so that if t is observationally equivalent to V , one can derive the judgment t : t ∈ A. Interestingly, as highlighted through the CPS translation by Lemma 4.1, any nef proof p : A is observationally equivalent to some value p + , so that we could derive p : (p ∈ A) from p + : (p + ∈ A). The nef fragment is thus compatible with the semantical value restriction. The converse is obviously false, observational equivalence allowing us to type realizers that would be untyped otherwise 24 . 
We shall now detail an embedding of dL t p into Lepigre's calculus, and explain how to transfer normalization and correctness properties along this translation. Additionally, this has the benefits of providing us with a realizability interpretation for our calculus. While we do not use it in the current paper, we take advantage of this interpretation (and in particular of the interpretation of dependent types) in [START_REF] Miquey | Classical realizability and side-effects[END_REF]Chapter 8] to prove the normalization of dLPA ω , the sequent calculus which originally motivated this work and whose construction relies on dL t p . Actually, his language is more expressive than ours, since it contains records and patternmatching (we will only use pairs, i.e. records with two fields), but it is not stratified: no distinction is made between a language of terms and a language of proofs. We only recall here the syntax and Γ ⊢ π : • the strong existential is encoded as a pair, hence wit (resp. prf ) is mapped to the projection π 1 (resp. π 2 ). In [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF], the coherence of the system is justified by a realizability model, and the type system does not allow us to type stacks. Thus, we cannot formally prove that the translation preserves typing, unless we extend the type system in which case this would imply the adequacy. We might also directly prove the adequacy of the realizability model (through the translation) with respect to the typing rules of dL t p . We will detail here a proof of adequacy using the former method. We then need to extend Lepigre's system to be able to type stacks. In fact, his proof of adequacy [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Theorem 6] suggests a way to do so, since any typing rule for typing stacks is valid as long as it is adequate with the realizability model. We denote by A ⊥ ⊥ the type A when typing a stack, in the same fashion we used to go from a type A in a left rule of two-sided sequent to the type A ⊥ ⊥ in a one-sided sequent (see the remark at the end of Section 2.5). We also add a distinguished bottom stack • to the syntax, which is given the most general type ⊥ ⊥ ⊥ . Finally, we change the rule ( * ) of the original type system in [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF] and add rules for stacks, whose definitions are guided by the proof of the adequacy [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Theorem 6] in particular by the (⇒ e )-case. These rules are given in Figure 12. We shall now show that these rules are adequate with respect to the realizability model defined in [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Section 2]. Proposition 5.1 (Adeqacy). Let Γ be a (valid) context, A be a formula with FV (A) ⊂ dom(Γ) and σ be a substitution realizing Γ. The following statements hold: Proof. The proof is done by induction on typing derivations, we only need to do the proof for the rules we defined above (all the other cases correspond to the proof of [START_REF] Lepigre | A classical realizability model for a semantical value restriction[END_REF]Theorem 6]). 
A fully sequent-style dependent calculus
While the aim of this paper was to design a sequent-style calculus embedding dependent types, we only presented the Π-type in sequent style. Indeed, we wanted to make sure, above all, that it was possible to define a sound sequent calculus with the key ingredients of dependent types (i.e., dependent pairs and dependently-typed functions). In particular, rather than having left-rules (as in sequent calculi) for every syntactic constructor, we presented the existential type and the equality type with elimination rules in the style of natural deduction: from Γ ⊢ p : ∃x N.A(x) | ∆ with p ∈ D we derive Γ ⊢ prf p : A(wit p) | ∆ (the (prf) rule), and from Γ ⊢ p : t = u | ∆ and Γ ⊢ q : B[t/x] | ∆ we derive Γ ⊢ subst p q : B[u/x] | ∆ (the (subst) rule).
However, it is now easy to replace both elimination rules (and thus the corresponding destructors) by equivalent left-rules (and thus syntactic constructors for contexts). For instance, we could rather have contexts of the shape µ̃(x, a).c (dual to proofs (t, p)) and µ̃=.c (dual to refl). We could then define the corresponding typing rules [left-rule displays lost in extraction] and define prf p and subst p q as syntactic sugar:

    prf p ≜ µ t p.⟨p|| µ̃(x, a).⟨a|| t p⟩⟩        subst p q ≜ µα.⟨p|| µ̃=.⟨q||α⟩⟩

Observe that prf p is now only definable if p is a nef proof term. Since for any p ∈ nef and any variables a, α, the formula A(wit p) belongs to A(wit (x, a)) {(x,a)|p}, this allows us to derive the admissibility of the former (prf) rule. As for the reduction rules, we can define the following (call-by-value) reductions, in the spirit of the rule for pairs given in Section 6:

    ⟨(V t , V)|| µ̃(x, a).c⟩ ⇝ c[V t /x][V/a]        ⟨refl|| µ̃=.c⟩ ⇝ c

and check that they advantageously 30
122,550
[ "6597" ]
[ "495900" ]
01744382
en
[ "spi" ]
2024/03/05 22:32:07
2013
https://hal.science/hal-01744382/file/tex00000397.pdf
J C Morel J E Aubert Y Millogo E Hamard A Fabbri
Some observations about the paper "Earth construction: Lessons from the past for future eco-efficient construction" by F. Pacheco-Torgal and S. Jalali
As the starting point of this discussion, we would like to congratulate the authors for this interesting review [START_REF] Pacheco-Torgal | Earth construction: Lessons from the past for future ecoefficient construction[END_REF], which defends the use of earth as a building material. Indeed, while this is one of the oldest building materials in the world, it is also one of the least studied by the scientific community, and thus one of the least understood. However, as stated by the two authors of this review, the number of scientific studies on this subject has increased dramatically in recent years. Research on earth as a building material is mainly motivated by the growing demand of masons and construction companies for scientific data and evidence to evaluate and improve the health, the hygrothermal comfort and the seismic resistance of earth constructions. First of all, we share the approach proposed by the authors, which consists of connecting the past and the present (and even the future). This point is well illustrated by the first paragraph of the paper and its attractive title. Indeed, we think that comparisons between the characteristics of modern earthen materials and existing ones, which have proven their effectiveness over the decades, are a major key to improving our understanding of this multi-scale composite material. The authors wish here to compare their views with those of Pacheco-Torgal and Jalali and to highlight the following three main points of disagreement that justify this discussion.
The present and the future of unstabilised earth constructions
Throughout the article, the authors seem to postulate that the stabilisation technique (e.g., the addition of hydraulic binders) is a compulsory step for earth construction. This leads to quite surprising conclusions about cost and environmental impacts and their assumed direct link with the nature and amount of the binder used. These conclusions become even stranger if we consider cement stabilisation, which can be irrelevant from environmental, economic and technical perspectives. Indeed, while this stabilisation is efficient in the case of kaolinitic clay materials containing an appreciable amount of sand [START_REF] Millogo | Microstructural characterization and mechanical properties of cement stabilised adobes[END_REF], the same is not necessarily true for raw clay materials rich in montmorillonite [START_REF] Molard | Study of the extrusion and stabilization with cement of monomineralogic clay[END_REF][START_REF] Amor | Cold stabilization of montmorillonitebased materials using Portland cement[END_REF][START_REF] Temimi | Making building products by extrusion and cement stabilization: limits of the process with montmorillonite clay[END_REF]. Given this partial view of stabilisation, we can strongly question the consistency displayed by the authors in linking traditional earth constructions to the modern use of soil as a building material. Indeed, the former mostly consists of structures made of unstabilised earth, even in areas subject to heavy rains (Northern Europe).
As a consequence, while it is true that in some countries the temptation to accelerate the strengthening of the material by the addition of hydraulic binders can be justified by industrial production rates [START_REF] Ciancio | Experimental investigation on the compressive strength of cored and molded cement-stabilized rammed earth samples[END_REF] or by maintenance purposes [START_REF] Millogo | Microstructural characterization and mechanical properties of cement stabilised adobes[END_REF][START_REF] Reddy | Cement stabilised rammed earth. Part B: Compressive strength and stress-strain characteristics[END_REF][START_REF] Reddy | Structural behavior of story-high cementstabilized rammed-earth walls under compression[END_REF], a comprehensive review should not overshadow the research that is ongoing on unstabilised earth construction. An illustration of the importance of taking both stabilised and unstabilised materials into consideration is given by Germany, which is regularly used as a reference by the authors in their review. Indeed, after updating its professional rules, the Dachverband Lehm wrote a draft standard on earth-based bricks that considers only unstabilised bricks (except for plant fibres, which can be considered a stabiliser in some cases). Based on the premise that unstabilised earth constructions were only useful in the past, the majority of this review loses its relevance and contradicts the title, which suggests that we can build the future using knowledge from the past. This contradiction becomes particularly annoying in the discussions of economic and environmental impacts. Unstabilised earth is the only material that can be returned to its initial state (as a soil) without any "waste" of energy, simply by wetting. Moreover, it is possible to reuse the material, with the same embodied energy, to build again. It is the only material, together with dry-stone masonry, able to do so. Using cement or lime stabilisation increases the embodied energy of the material. The authors are not at all against stabilisation, particularly when it is done with all three dimensions of sustainable building in mind, for example when it enables the use of local materials and develops local skills and employment; but they are simply aware that using earth is not in itself sufficient to be sustainable.
Material properties
The presentation of the material properties of the soils used for earth construction is interesting but definitely lacks a discussion of compressive strength, a feature that has been extensively studied by various researchers since the 1980s. Indeed, this characteristic is currently required by all parties involved in construction as a proof of durability. However, there is a paradox between this parameter and the observation of existing earth constructions, which have long shown sufficient durability. Thus, despite years of research, there is still no consensus on how to measure this characteristic [START_REF] Morel | Compressive strength testing of compressed earth blocks[END_REF]. As an example, similarly to what is observed for other building materials such as concrete, the Anglo-Saxon culture advocates the measurement of the "confined" strength, whereas in the French culture we remain attached to "unconfined" measurements. Many discussions about the extent and relevance of this feature continue to animate the scientific debate within the community working on this subject.
Moreover, as rightly stated by the authors, the compressive strength depends on the sample shape (and this is where the main problem with the compression test lies). However, to echo the previous discussion, it is important to underline here that stabilisation also significantly affects the fracture behaviour of the test sample. Indeed, by rigidifying the material, stabilisation induces the behaviour commonly observed in brittle materials such as concrete or stone. In contrast, the unstabilised material is likely to be closer to the conventional behaviour of soils. In this case, elastoplastic soil-behaviour models are a priori better suited to earthen materials [START_REF] Nowamooz | Finite element modelling of a rammed earth wall[END_REF][START_REF] Jaquin | The strength of unstabilised rammed earth materials[END_REF]. Thus, any comparison between the compressive strengths of stabilised and unstabilised earth samples should be made with care.
Hygrothermal properties
Finally, a similar discussion can be undertaken regarding the hygrothermal properties of earthen materials and their impact on comfort and indoor air quality. One of the main assets used to promote earth construction is its role in controlling moisture and indoor air quality. To our knowledge, however, there are few studies that demonstrate what the influence of stabilisation on the hygroscopic behaviour might be. It has been well known since [START_REF] Olivier | Le matériau terre: Essai de compactage statique pour la fabrication de briques de terres compressées[END_REF] that, for materials made of the same soils and manufactured at the optimum water content, stabilisation increases the volume fraction of porosity of the material. The consequence is that the sorptivity of cement-stabilised samples is higher than that of unstabilised samples [START_REF] Hall | Moisture Ingress in Rammed Earth: Part 3 -The Sorptivity and the Surface Inflow Velocity[END_REF]. Yet recent studies on moisture buffering in buildings clearly stated that vapour transfer is reduced by stabilisation with lime or cement [START_REF] Mcgregor | The effect of stabilisation on humidity buffering of earth walls[END_REF][START_REF] Eckermann | Auswirkung von Lehmbaustoffen auf die Raumluftfeuchte[END_REF]. However, predictions of this phenomenon that integrate the measured physical properties into coupled heat and moisture transfer models are extremely rare [START_REF] Allinson | Hygrothermal analysis of a stabilised rammed earth test building in the UK[END_REF][START_REF] Hall | Chapter 2: Hygrothermal behaviour and thermal comfort in modern earth buildings[END_REF][START_REF] Hall | Analysis of the Hygrothermal Functional Properties of Stabilised Rammed Earth Materials[END_REF] and should be pursued in further studies.
Conclusions
We believe that it is necessary to continue discussions among scientists on the use of this material in modern "green buildings". Moreover, it is quite relevant, as suggested by Pacheco-Torgal and Jalali, to study existing earth constructions for, at least, the transmission of cultural know-how. However, the existence of these structures is, by itself, evidence of the durability of these types of constructions, which have remained intact for decades. It will be necessary to increase our knowledge of this material to renovate it properly (including from the energy point of view, for example in the countries of Northern Europe). Finally, the question of stabilisation remains open.
While it is entirely appropriate for some applications (for example, low-cost buildings in India subjected to the monsoon, where it avoids having to rebuild every year), its routine use in industrialised countries can be questioned: some local soils are known to exhibit sufficient mechanical characteristics without any amendment.
10,683
[ "21807", "1326335" ]
[ "698", "58114", "218084", "222000", "698" ]
01744426
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01744426/file/TACO_HAL.pdf
Oleksandr Zinenko Stéphane Huot Cédric Bastoul
CCS Concepts: • Human-centered computing → Human computer interaction (HCI); • Software and its engineering → Compilers
Keywords: polyhedral model, direct manipulation
Parallelism is one of the key performance sources in modern computer systems. When heuristics-based automatic parallelization fails to improve performance, a cumbersome and error-prone manual transformation is often required. As a solution, we propose an interactive visual approach building on the polyhedral model that visualizes exact dependences and parallelism; decomposes and replays a complex automatically-computed transformation step by step; and allows for directly manipulating the visual representation as a means of transforming the program with immediate feedback. User studies suggest that our visualization is understood by experts and non-experts alike, and that it may favor an exploratory approach.
Visual Program Manipulation in the Polyhedral Model
OLEKSANDR ZINENKO, Inria and University Paris-Saclay
STÉPHANE HUOT, Inria
CÉDRIC BASTOUL, University of Strasbourg and Inria
INTRODUCTION
Large-scale adoption of heterogeneous parallel architectures requires efficient solutions to exploit the available parallelism from applications. Despite significant effort in simplifying parallel programming through new languages, high-level language extensions, frameworks and libraries, manual parallelization may still be required, although it is often ruled out as time-consuming and error-prone. Thus, programmers mostly rely on automatic optimization tools, such as those based on the polyhedral model, to improve program performance. The polyhedral model [START_REF] Feautrier | Polyhedron Model[END_REF] has been the cornerstone of loop-level program transformation in the last two decades [START_REF] Bastoul | Code Generation in the Polyhedral Model Is Easier Than You Think[END_REF][START_REF] Bondhugula | The Pluto+ Algorithm: A Practical Approach for Parallelization and Locality Optimization of Affine Loop Nests[END_REF][START_REF] Feautrier | Some Efficient Solutions to the Affine Scheduling Problem. Part II. Multidimensional Time[END_REF]. It features exact iteration-wise dependence analysis and optimization for both parallelism and locality. However, automatic polyhedral compilation is based on imprecise heuristics [START_REF] Bondhugula | The Pluto+ Algorithm: A Practical Approach for Parallelization and Locality Optimization of Affine Loop Nests[END_REF][START_REF] Bondhugula | A Practical Automatic Polyhedral Parallelizer and Locality Optimizer[END_REF]. Polyhedral compilers give the user some (limited) control over the optimization process, which requires understanding their internal operation anyway. Furthermore, they are applicable globally and do not allow for finer-grain control, e.g., affecting only one loop nest. Visual interfaces for configuring the polyhedral compiler [START_REF] Papenhausen | PUMA-V: An Interactive Visual Tool for Code Optimization and Parallelization Based on the Polyhedral Model[END_REF] partially mitigate these issues by making polyhedral compiler blocks discoverable, but still require a deep understanding of the internal operation of a compiler.
Semi-automatic approaches provide the user with a set of predefined program transformations, typically exposed as compiler directives [START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF][START_REF] Kelly | A Unifying Framework for Iteration Reordering Transformations[END_REF][START_REF] Yuki | Alphaz: A System for Design Space Exploration in the Polyhedral Model[END_REF]. They shift the expertise requirements from heuristics to loop-level code transformations. They also require program transformation to be performed from scratch (as polyhedrally-transformed code is barely readable) while offering little
for (i = 0; i < N; i++)
  for (j = 0; j < N; j++)
S:  z[i+j] += x[i] * y[j];
(a) Original
#pragma omp parallel for private(t2)
for (t1 = 0; t1 <= 2*N-2; t1++)
  for (t2 = max(0, t1-N+1); t2 <= min(t1, N-1); t2++)
S:  z[t1] += x[t1-t2] * y[t2];
(b) Transformed
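Why the transformed loop nest in (b) may legally be marked parallel can already be read off the dependences of this kernel. The only conflicting accesses are those to z, so two iterations are dependent exactly when they touch the same element, i.e., when they share the value of i + j (our own transcription of the instance-wise analysis formalized below; ≪ denotes lexicographic precedence):

    δS = {(i, j)T → (i′, j′)T | 0 ≤ i, j, i′, j′ < N ∧ i + j = i′ + j′ ∧ (i, j) ≪ (i′, j′)}

Under the skewing t1 = i + j, all dependent instances share t1: the outer t1 loop carries no dependence and can run in parallel, while the inner t2 loop executes the dependent instances in their original order.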
Semi-automatic approaches provide the user with a set of prede ned program transformations, typically exposed as compiler directives [START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF][START_REF] Kelly | A Unifying Framework for Iteration Reordering Transformations[END_REF][START_REF] Yuki | Alphaz: A System for Design Space Exploration in the Polyhedral Model[END_REF]. ey shi the expertise requirements from heuristics to loop-level code transformations. ey also require program transformation to be performed from scratch (as polyhedrally-transformed code is barely readable) while o ering li le for (i = 0; i < N ; i ++) for (j = 0; j < N ; j ++) S : z[i+j] += x[i ] * y [ j ]; (a) Original #pragma omp parallel for private ( t2 ) for ( t1 = 0; t1 <= 2* N -2; t1 ++) for ( t2 = max (0 , t1 -N +1); t2 <= min ( t1 , N -1); t2 ++) S : z where arr corresponds to the name of accessed array and a 1 corresponds to its only dimension. Program Transformation and Schedules. Changing the relative execution order of statement instances transforms the program. We can de ne a scheduling relation to map iteration domain points to logical execution dates. If these dates are multidimensional, statement instances are executed following the lexicographical order of their dates. Scheduling relations are expressive enough to encode a complex composition of program transformations including, e.g., loop interchange, fusion, ssion, skewing, tiling, index-set-spli ing, etc. [START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF]. For example, loop tiling [START_REF] Irigoin | Supernode Partitioning[END_REF] for the polynomial multiply can be expressed by the schedule θ S (N ) = {(i, j) T → (t 1 , t 2 , t 3 , t 4 ) T | (32t 1 ≤ t 3 ≤ 32t 1 +31) ∧ (32t 2 ≤ t 4 ≤ 32t 2 +31) ∧ t 3 = i ∧ t 4 = j}, where 32 is the tile size. Note that t 3 and t 4 are de ned explicitly by equalities while t 1 and t 2 are de ned implicitly by bounding inequalities, which correspond to integer division. Schedule relations can be constructed manually or using high-level frameworks [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF][START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF][START_REF] Kelly | A Unifying Framework for Iteration Reordering Transformations[END_REF]. Automatic optimizers directly construct a scheduling relation with certain properties, including minimal reuse distances, tilability and inner/outer parallelism [START_REF] Bondhugula | e Pluto+ Algorithm: A Practical Approach for Parallelization and Locality Optimization of A ne Loop Nests[END_REF][START_REF] Bondhugula | Automatic Transformations for Communication-Minimized Parallelization and Locality Optimization in the Polyhedral Model[END_REF]. However, they may fail to improve performance when achieving di erent properties requires contradictory transformations, for example exploiting spatial locality may be detrimental for parallelism [START_REF] Shirako | Oil and Water Can Mix: An Integration of Polyhedral and AST-Based Transformations[END_REF]. 2.1.3 Encoding Lexical Order. roughout this paper, we use the so called (2d + 1) structure of scheduling relations. 
It introduces (d + 1) auxiliary dimensions to the scheduling relation [START_REF] Kelly | A Unifying Framework for Iteration Reordering Transformations[END_REF] to represent lexical order. They are referred to as β-dimensions [START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF], as opposed to the α-dimensions that represent the execution order of the d loops. Zero-based contiguous constant values of βi enforce the relative order between different objects (loops or statements) at depth i. They express code motion transformations such as loop fusion and fission. For example, the (2d + 1) form of the identity scheduling relation for the polynomial multiply is θS(N) = {(i, j)T → (β1, α1, β2, α2, β3)T | β1 = 0 ∧ α1 = i ∧ β2 = 0 ∧ α2 = j ∧ β3 = 0}. Given that β-dimensions are constant, they can be concisely rewritten as a vector β⃗ = (0, 0, 0)T. β-vectors uniquely identify statements since no two statements can have the same lexical position. Prefixes of β-vectors (β-prefixes) uniquely identify loops, with their length corresponding to the nesting depth. Statements that share d loops have identical β-prefixes of length d.
2.1.4 Program Analysis and Parallelism. The key power of the polyhedral model is its ability to compute exact instance-wise dependences [START_REF] Feautrier | Dataflow Analysis of Array and Scalar References[END_REF]. Two statement instances are dependent if they access the same array element and at least one of them writes to it. For a program transformation to preserve the original program semantics, it is sufficient that pairs of dependent instances are executed in the same order as before the transformation [START_REF] Kennedy | Optimizing Compilers for Modern Architectures: A Dependence-Based Approach[END_REF]. A dependence relation maps statement instances (dependence sources) to the instances that must be executed after them (dependence sinks). If a transformation inverts the execution order of dependent instances or assigns them the same logical execution time, the dependence becomes violated and the transformation is illegal. The polyhedral model provides means to verify the legality of a scheduling relation [START_REF] Bastoul | Mapping Deviation: A Technique to Adapt or to Guard Loop Transformation Intuitions for Legality[END_REF][START_REF] Feautrier | Dataflow Analysis of Array and Scalar References[END_REF][START_REF] Pugh | The Omega Test: A Fast and Practical Integer Programming Algorithm for Dependence Analysis[END_REF][START_REF] Vasilache | Violated Dependence Analysis[END_REF]. Groups of instances, including loops, that do not transitively depend on each other may be executed in an arbitrary order, including in parallel. Loop-level parallelism is expressed by attaching a "parallel" mark to an α-dimension, which requires the code generator to issue a parallel loop.
Code Generation. After a scheduling relation is defined, code generation is a matter of building a program that scans the iteration domain with respect to the schedule [START_REF] Ancourt | Scanning Polyhedra with DO Loops[END_REF]. Modern code generators rely on a generalized change of basis that combines the iteration domain and the scheduling relation and puts the scheduling dimensions in the foremost positions before creating loops from all dimensions.
Several efficient algorithms and tools exist for that purpose, including CLooG [START_REF] Bastoul | Code Generation in the Polyhedral Model Is Easier Than You Think[END_REF], CodeGen+ [START_REF] Chen | Polyhedra Scanning Revisited[END_REF] and ppcg [START_REF] Grosser | Polyhedral AST Generation Is More Than Scanning Polyhedra[END_REF]. For example, given the schedule TS = {(i, j)T → (t1, t2)T | t1 = i + j ∧ t2 = j} that implements loop skewing for the polynomial multiply kernel, and a parallel mark for dimension t1, CLooG may generate the code in Fig. 1b.
Transformation Directives
Even though polyhedral and syntactic approaches can be combined in an automatic tool [START_REF] Shirako | Oil and Water Can Mix: An Integration of Polyhedral and AST-Based Transformations[END_REF], the polyhedral optimizer does not operate in syntactic terms and provides only little control over its parameters through compiler flags. Recently, Bagnères et al. proposed the Clay transformation set, which expresses a large number of syntactic loop transformations as structured changes to scheduling relations and relies on β-prefixes to identify targets [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF]. They also proposed the Chlore algorithm, which identifies a sequence of Clay primitives that would transform any given scheduling relation into another scheduling relation. For example, the aforementioned loop skewing transformation is expressed as a dimension substitution: Skew(ρ⃗, i, k): ∀θS : β⃗S,1..dim ρ⃗ = ρ⃗, αdim ρ⃗ → αdim ρ⃗ + k · αi. Any occurrence of the output dimension αdim ρ⃗ is replaced by a linear combination of itself with another output dimension αi. Thus, the schedule TS from the previous section is obtained from the identity schedule by Skew(ρ⃗ = (0)T, i = 2, k = 1), where ρ⃗ = (β1)T identifies the outer i loop. Reshape is similar to Skew except that it uses a linear combination of the input rather than the output dimensions. Transformations of the lexical order are encoded as modifications of β-vectors. For example, fusing two subsequent loops is expressed as FuseNext(ρ⃗): ∀θS : β⃗S,1..dim ρ⃗−1 = ρ⃗1..dim ρ⃗−1 ∧ β⃗S,dim ρ⃗ = ρ⃗dim ρ⃗ + 1, β⃗S,1..dim ρ⃗ ← ρ⃗, β⃗S,dim ρ⃗+1 ← β⃗S,dim ρ⃗+1 + max{β⃗T,dim ρ⃗+1 : β⃗T,1..dim ρ⃗ = ρ⃗}, where dim ρ⃗ encodes the fusion depth. This transformation assigns equal β values up to the given depth, which corresponds to fusion, and updates the remaining ones to maintain uniqueness and contiguity. Clay transformations are applicable to unions of scheduling relations such that the entire union (but not necessarily individual relations) is left-total and injective. Internally, Clay operates on a matrix representation of systems of linear inequalities and supports arbitrarily complex transformations as long as the properties of the union are preserved. The Chlore algorithm builds on matrix decompositions to identify sequences of Clay primitives that transform one set of matrices into another set. More information and the full specification of transformations is available in [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF]. Although Clay and Chlore enable interaction with a polyhedral engine using syntactic terms, they face several challenges in application. (1) Target selection: β-prefixes are required for each transformation, yet they are not easily accessible in the source code.
(2) Target consistency: the generated code may have a different structure than the original code, for example due to loop separation [START_REF] Bastoul | Code Generation in the Polyhedral Model Is Easier Than You Think[END_REF], resulting in a mismatch between β-vectors and loop nesting. (3) Effect separation: even if Chlore produces a sequence of primitive transformations, it is difficult to evaluate the (potentially negative) effects of an individual transformation by reading the polyhedrally transformed code. We address these challenges with Clint, a new interactive tool based on a graphical representation of SCoPs which: simplifies target selection to directly choosing a visualization of a transformation target; maintains target consistency by matching the visualization to the original (often simpler) code; and replays primitive "steps" of a transformation to separate their effects, supporting further interactive modification.
DIRECTLY MANIPULATING POLYHEDRAL VISUALIZATIONS
To reduce the burden of code editing and transformation primitive application, we propose Clint, an interactive loop-level transformation assistant based on the polyhedral model. It leverages the geometric nature of the model by presenting SCoPs in a directly manipulable [START_REF] Shneiderman | Direct Manipulation: A Step Beyond Programming Languages[END_REF] visualization that combines scatter plots of iteration domains and node-link diagrams of instance-wise dependences. This approach is similar to the one commonly used in the polyhedral compilation community to illustrate iteration domains. Clint goes beyond these static views by allowing program transformation to be initiated directly from the visualization, and provides an animation-based visual explanation of an automatically computed program transformation. Animated transitions correspond to program transformations that, when applied, would change the program to obtain the final visualization. The user can replicate the action by directly manipulating the visualization similarly to the transition, or in a more elaborate way. The set of interactive manipulations builds on the geometry-related vocabulary of classical loop transformations, such as skewing or shifting, which is expected to give the user supplementary intuition about the transformation effects and to support exploration and learning. The design of Clint is motivated by the need for (1) a single and consistent visual interface to bridge the gap between dependence analysis and subsequent program transformation; (2) an efficient way to explore multiple alternative loop transformations without rewriting the code; and (3) explaining the code modifications yielded by an automatic optimization. Although built around the complete Clay transformation set [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF], it can be extended to support different transformations as long as the effects of any transformation can be undone by (a sequence of) other transformations. Clint seamlessly combines loop transformations to support reasoning about execution order and dependences rather than loop bounds and branch conditions. The interactive visual approach reduces parallelism extraction to visual pattern recognition [START_REF] Ware | Information Visualization: Perception for Design[END_REF] and code transformation to geometrical manipulations, giving even non-expert programmers a way to manage the complexity of the underlying model [START_REF] Norman | Living with Complexity[END_REF].
Finally, it brings insight into the code-level effects of the polyhedral optimization by decomposing a complex program transformation into primitive steps and providing a step-by-step visual replay, independent of how an automatic optimizer operates internally.

Structure of the Visualization

Clint visualizes scheduled iteration domains, e.g., statement instances mapped by the scheduling relation to new coordinates in logical time space; see Fig. 2 for an example of a simple code and its corresponding visualization. The main graphical elements are as follows.

Points and Polygons. Our visualization consists of polygons containing points on the integer lattice. Each point represents a statement instance, positioned using the values of the α dimensions. Points are linked together by arrows that depict instance-wise dataflow between them. The polygon delimits the loop bounds in the iteration space and is computed as the convex hull of the points it includes. The space itself is displayed as a coordinate system where axes correspond to loop iteration variables.

Color Coding. Statements are color coded to ensure matching between code and visual representations. A transformation, such as peeling or index-set splitting, may result in sets of instances of the same statement being executed in different loop nests. We refer to this case as multiple occurrences of the statement. Different occurrences of the statement share the same color coding. For reference, the skewed polynomial multiply code of Fig. 2 reads:

for (i = 0; i < 2*N-1; ++i)
  for (j = max(0, i-N+1); j < min(i+1, N); ++j)
    z[i] += x[i-j] * y[j];

Coordinate Systems. Each coordinate system is at most two-dimensional. The horizontal axis represents the outer loop, and the vertical axis represents the inner loop. Statement occurrences enclosed in both loops are displayed in the same coordinate system, with an optional slight displacement to discern them (see Fig. 4). Statement occurrences that share only the outer loop are placed into different coordinate systems, vertically aligned so that they visually share the horizontal axis. We refer to this structure as a pile (see Fig. 8b). Finally, statement occurrences not sharing loops are displayed as a sequence of piles (see Fig. 8a), arranged to follow the lexical order. We use β-vectors internally to arrange polygons and coordinate systems. Statements with identical β-prefixes of length d share a coordinate system if d is the depth of the inner loop, and a pile if d is the depth of the outer loop. Consequently, coordinate systems and piles are uniquely identified by a β-prefix.

Execution Order. Statement instances are executed bottom to top, then left to right, crossing the bounds of coordinate systems in both cases. Multiple instances sharing a loop iteration are executed in the order of increasing displacement. Arrows point at the instance executed second.

Tiling. Tiled domains are displayed as polygons with wide lines inside to delimit tile shapes. All dimensions that are implicitly defined (see Section 2.1.2) are considered as tile loops and serve to build the tile shapes, creating scatterplots with nested axes [START_REF] Rey Heer | A Tour Through the Visualization Zoo[END_REF]. Tiling makes the execution order two-level: entire tiles are executed following the previously described order; instances inside each tile are executed bottom to top, then left to right, without crossing tile boundaries.
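For intuition, this two-level order is the one produced by classical rectangular tiling. A minimal sketch, assuming a 4×4 tile size and a placeholder statement S:

#define min(a, b) ((a) < (b) ? (a) : (b))

/* Tiles run in the usual lexicographic order; iterations inside a tile
 * stay within its boundaries, matching the wide-line tile shapes. */
for (int ti = 0; ti < N; ti += 4)          /* implicitly defined tile loops */
  for (int tj = 0; tj < N; tj += 4)
    for (int i = ti; i < min(ti + 4, N); ++i)   /* point loops inside a tile */
      for (int j = tj; j < min(tj + 4, N); ++j)
        S(i, j);                           /* S: placeholder statement body */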
Multiple Projections. The overall visualization is a set of two-dimensional projections, where loops that are not matched to the axes are ignored. As the goal of Clint is program transformation, we only display projections on the schedule α-dimensions, which coincide with iteration domain dimensions before transformation. For a single statement occurrence, they may be ordered in a scatterplot matrix as in Fig. 5a. The points are displayed with different intensities of shade depending on how many multidimensional instances were projected onto this point. We motivate the choice of 2D projections over a 3D visualization by easier direct manipulation with a standard 2D input device (e.g., mouse) [START_REF] Beaudouin-Lafon | Designing Interaction, Not Interfaces[END_REF][START_REF] Cockburn | 3D or Not 3D?: Evaluating the Effect of the Third Dimension in a Document Management System[END_REF] and by the consistency of the visualization for even higher dimensionality.

Dependences and Parallelism. Dependences between points in the same coordinate system are shown as arrows pointing from source to sink. By default, only direct (i.e., non transitively-covered) dependences are shown. When hovering a point, all its dependences are visualized. Dependences between vertically or horizontally adjacent coordinate systems are aggregated into large dots (Fig. 7b). Finally, dependences between points in distant coordinate systems are only visualized when either their source or sink is being manipulated, to avoid visual cluttering. Arrows and dots turn red if the dependence is violated. The transformation legality check is performed parametrically. If a legality violation exists for parameter values other than those currently selected, the polygon contour turns red instead of the arrows. Generally, parallel dependence arrows imply some parallelism is present in the loops: e.g., if they are orthogonal to an axis, the loop corresponding to that axis features DOALL parallelism. Clint highlights "parallel" axes in green to simplify parallelism identification (see Fig. 2).

Parametric Domains. Domains whose bounds involve parametric expressions are visualized for a fixed value of the parameters. By default, all parameters are assigned identical values computed as follows. Clint computes the dependence distance sets from the dependence relations by subtracting the relation's range from its domain. It then takes the maximum non-parametric absolute value across all dimensions. Finally, it takes the minimum of this value and a predefined constant. We selected this constant as 6 from our preliminary studies, observing that it is sufficient to represent the majority of dependence patterns in our test suite. The user can dynamically modify the values of individual parameters and the visualization will be automatically updated.
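A minimal sketch of this default-value heuristic; the function and argument names are illustrative, not Clint's actual API:

#include <stdlib.h>

/* Pick a default value for all parameters from per-dimension dependence
 * distances (range minus domain); cap the result at a predefined constant. */
int default_parameter_value(const int *distances, int ndims) {
  const int CAP = 6;                 /* constant chosen in preliminary studies */
  int max_abs = 0;
  for (int d = 0; d < ndims; ++d) {  /* non-parametric distances only */
    int a = abs(distances[d]);
    if (a > max_abs) max_abs = a;
  }
  return max_abs < CAP ? max_abs : CAP;
}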
Directly Manipulable Visual Objects

Since program transformations in the polyhedral model correspond to changes of the statement instance order, they can be performed on the visual representation of that order. In Clint, the execution dates are mapped to point positions. Therefore, moving points corresponds to program transformations. Visual marks such as points and polygons afford direct manipulation, i.e., they can be dragged and dropped directly to the desired position. Because many of the visual elements are mapped from the underlying SCoP properties, manipulation should be structured so as to maintain those properties. For example, point coordinates should remain integer to properly map to counted for loops. Furthermore, the polyhedral model represents parametric iteration domains—having constant yet unknown sizes—making it technically impossible to schedule each instance separately. Therefore, we only enable structured point manipulation that can be mapped to similarly structured program transformations as expressed in, e.g., the Clay framework. Visually, we use polygons and coordinate systems as manipulation substrates [START_REF] Klokmose | Webstrates: Shareable Dynamic Media[END_REF] that mediate interaction with groups of points while ensuring structure preservation. We refer to polygons and coordinate systems as point containers. They can be seen as a persistent selection of points manipulable together and sharing a common property: representing instances of the same statement or being enclosed in the same loops. Polygons and coordinate systems also allow to reify the conventional target selection and make it a first-class interactive object [START_REF] Beaudouin-Lafon | Reification, Polymorphism and Reuse: Three Principles for Designing Visual Interfaces[END_REF]. The user no longer needs an explicit (and sometimes cumbersome) selection step, by either clicking or lassoing the objects with the cursor, before starting the manipulation.

Mapping Interactions to Loop Transformations

As motivated above, we center the manipulation around polygons. We augment the polygon with handles at its corners and borders, similarly to a conventional graphical editor. They appear when the polygon is hovered and support many transformations without using any instruments or modes. We rely on the structured scheduling relation modifications of the Clay framework, most of which were inspired by well-known "classical" loop transformations [START_REF] Joseph | High Performance Compilers for Parallel Computing[END_REF]. Some of them map directly to the Clint visualization (e.g., Shift), while others do not (e.g., Interchange) or, even worse, can be mapped in a misleading way (Skew). Therefore, instead of trying to map Clay transformations, we rather follow an interaction-centered approach by mapping the possible graphical actions to sequences of Clay transformations. Fig. 3 lists the graphical actions and the corresponding program transformations. The action parameters correspond to the attributes of the object being manipulated or to properties of the manipulation:

- Drag polygon between coordinate systems (β⃗, x, y, ρ⃗): alternate Reorder(β_{1..y}, put last) and Distribute(β_{1..y}) steps up to dimension x to extract the statement from its loops; then alternate Reorder(β_{1..x}, put after ρ_x) and FuseNext(β_{1..x}) steps down to dimension y to insert it at the target position; finish with Reorder(β_{1..y}, put last).
- Drag corners away from the center (β⃗, x, y, dx, dy, sx, sy): Reshape(β⃗, y, x, dx/sy), then Reshape(β⃗, x, y, dy/sx); Skew is used when possible.
- Drag corners towards the center (β⃗, x, dx, sx; the y axis is used if dy > dx): Interchange(β⃗, x, y) if dx/sx mod 2 = 1; Reverse(β_{1..x}) if 1 ≤ dx/sx mod 4 ≤ 2; Reverse(β_{1..y}) if dx/sx mod 4 ≥ 2.
- Drag border (x, dx, sx): Densify(β⃗); Reverse(β⃗) if dx < 0; Grain(β⃗, dx/sx).
- Click on a rectangular selection of points (β⃗, x, y, tx, ty): Interchange(β_{1..y+2}, y, y+1) if y is implicitly defined; Linearize(β_{1..y+1}) if y+1 is implicitly defined; Linearize(β_{1..x}) if x is implicitly defined; then StripMine(β_{1..x}, tx), StripMine(β_{1..y+1}, ty), and Interchange(β_{1..y+2}, y, y+1).
- Select points and move (β⃗, ρ⃗, selection shape {f_i(x, y) ≥ 0}): ∀i, IndexSetSplit(β⃗, f_i) if ρ⃗ = ∅; Collapse(ρ⃗) otherwise.

Fig. 3. Mapping between interactive polygon manipulations and Clay transformations.
In Fig. 3, β⃗ identifies the statement occurrence corresponding to the polygon; ρ⃗ identifies the β-prefix of the coordinate system; x and y are the loop depths corresponding to the horizontal and vertical axes, respectively; dx and dy are the cursor offsets from its position when the manipulation started; sx and sy are the sizes of the polygon; tx and ty are the sizes of the selection. Offsets and sizes are expressed in coordinate system units, i.e., iterations. For example, dragging a polygon along one of the axes directly corresponds to the Shift transformation. However, dragging it to a different coordinate system corresponds to a complex sequence of Clay directives that performs code motion (see "Drag polygon between coordinate systems" in Fig. 3). Transformations that result in an identical schedule are omitted; for example, no Reorder is applied before Distribute if the statement occurrence is already the last in the loop.

Polymorphic Actions. The coordinate system can be automatically extended to fit the polygon being dragged. We leverage the equivalence property of transformations to stop the automatic extension. Shifting past the largest bound does not change the relative execution order. In such cases, the polygon goes outside the coordinate system, which is shrunk to fit only the remaining polygons.

Parametric Transformations. Transforming a parametrically-bounded domain may result in parametric transformations. In particular, we look for a parametric bound closest to the mouse cursor at the end of the manipulation. For example, the amount of Shift is computed with respect to the closest bound of the polygon other than the one being shifted. Alternatively, the conditions for IndexSetSplit are (first) computed as affine expressions of the closest bound. If there is no such expression, they are computed without using parameters.

Skew and Reshape. By default, the graphical action of skewing corresponds to the Reshape transformation, and not the Skew transformation. The latter transforms the loop with respect to the current expression for the other loop rather than the original iterator. This makes the Skew transformation combine badly: if the x loop is skewed by y to become (x + y), it becomes impossible to skew by x as it no longer appears independently of y. The graphical intuition behind loop skewing does not hold for combinations of skews. However, when a Reshape is identical to a Skew, Clint will perform a Skew since it is one of the well-known classical transformations.¹

Targeting Individual Statements. Many Clay transformations operate on β-prefixes, that is, loops rather than statements. We circumvent this by distributing away the target statement, applying the desired transformation to a loop nest with only this statement, and then fusing everything back.

Manipulating Multiple Statements. If multiple polygons are selected within a coordinate system, transformations are applied to all of them in inverse lexicographical order of their respective β-vectors. Inversion prevents transformations from modifying β-vectors used to target subsequent transformations. If a user manipulates a pile (or a coordinate system), the action is propagated to all the polygons it contains, making the pile an implicit selector for the polygons it contains.

Manipulating Groups of Points. Individual points or groups thereof can be manipulated by turning them into a polygon first. Selecting a group of points and dragging it away from an existing polygon separates it into two parts, mapping to the IndexSetSplit transformation. It creates a new statement occurrence that can be manipulated separately.
Dropping this polygon on top of another polygon that represents a different occurrence of the same statement is mapped to the Collapse transformation. In cases of selections that are not adjacent to borders and/or not convex, multiple IndexSetSplit transformations are performed. Each of the two resulting parts may correspond to multiple occurrences of the statement, but is visualized and manipulated as a whole.

Cross-Projection Selections. When multiple projections are used, the selection of statement instance points is combined from the different projections. The overall multidimensional selection is an intersection of the constraints imposed by each separate two-dimensional selection. An empty selection in a projection is thus equivalent to selecting everything.

Decoupling Visualization from Code. In Clint, we keep the visualization consistent with the original program structure unless the user manually modifies the code. This allows for manipulating multiple statement occurrences together, for example in the case of shifting one statement with respect to another inside the loop, which may result in loop separation as in Fig. 4.

Fig. 4. Manipulation for Transformation: the darker polygon is dragged right so that dependence arrows become vertical without spanning between different iterations on i. The visualization is then decoupled from the code structure, and both statements can still be manipulated as if they were not split between two loops.

Transformation Legality Feed-Forward. Clint graphical interactions are structured so that it is possible to identify the transformation before it is completed. For example, dragging a corner of a polygon away from its center corresponds to a Reshape; the dragging direction and distance define the transformation parameters. Since they are typically expressed in units of iteration steps through division, we can use ceil instead of normal rounding to obtain the parameters earlier. Hence Clint can perform a transformation before the end of the corresponding user interaction. This allows to provide feed-forward about the transformation, i.e., its effects (in particular dependence violation) are visualized during the interaction, guiding the user in their choice. In addition, this approach allows Clint to hint the user about the state of the visualization if they finish the manipulation immediately, using a grayed-out preview shape (see Fig. 8).
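As a small illustration of this early-parameter trick (the names are hypothetical, not Clint's API): taking the ceiling fixes the parameter as soon as the cursor crosses an iteration boundary, so feedback can be shown mid-drag:

#include <math.h>

/* Derive a transformation parameter from an ongoing drag: cursor offset
 * divided by the on-screen size of one iteration step, rounded up. */
int drag_parameter(double offset_px, double iteration_step_px) {
  return (int)ceil(offset_px / iteration_step_px);
}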
Mapping Loop Transformations to Animated Transitions

The Clint visualization enables the illustration of the step-by-step execution of a Clay transformation script, either constructed manually or translated from a compiler-computed schedule using Chlore [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF]. Instead of providing a one-to-one mapping between individual transformations and animated transitions, we take a generalized approach based on the structure of the transformations. They can be divided based on the scheduling relation dimensions they affect: (1) only α, (2) only β, or (3) both α and β. The first group contains all transformations except FuseNext, Distribute and Reorder, which belong to the second group, and StripMine, Linearize, IndexSetSplit and Collapse, which belong to the third group. This classification allows us to limit the animation scope. Transformations that do not modify β-dimensions may only affect points inside one container, while points cannot be moved between containers. Furthermore, only the projections on iterators involved in the transformation should be updated. Transformations that only modify β-dimensions affect entire containers without modifying the point positioning inside them.

Within-Container Transformations. Transformations of the first group are animated by simultaneously moving individual points to their new positions. During the transition, polygonal shapes are updated to match the convex hull of the respective points. Thus the Shift transformation moves all points simultaneously in one direction and corresponds to visual displacement, while the Reshape transformation moves rows (or columns) of points by different lengths and results in shape skewing.

Multiple Projections. Several transformations operate on two dimensions, for example Reshape and Interchange. For these cases, we consider the projection on both of these dimensions as the main one, and the projections on one of the dimensions as auxiliary ones. In the main projection, the one-to-one point transition remains applicable. On the other hand, in the auxiliary ones, points may be created or deleted. For example, an auxiliary projection retains its rectangular shape after a Reshape but becomes larger as some points are projected onto new coordinates (see Fig. 5). Clint handles this by introducing a temporary third axis, orthogonal to the screen plane. This axis corresponds to the dimension present in the transformation, but not in the projection. Points and arrows are then re-projected on three dimensions. The extra objects become visible only during the animated transition and create a pseudo-3D effect. After the transition, the third axis is deleted while the projected points remain in place (see Fig. 5b). This technique is analogous to ScatterDice [START_REF] Elmqvist | Rolling the Dice: Multidimensional Visual Exploration Using Scatterplot Matrix Navigation[END_REF], but without axis switching.

Between-Container Transformations. As transformations of the second group affect entire polygons only, we can translate them into motion of polygons. If all polygons of a container are moved, the entire container is moved instead. Target containers are identified using β-prefixes.

Container Creation and Deletion. Transformations of the third group may result in containers being created or deleted. However, without points, a polygon would correspond to a statement occurrence that has no instances and thus is never executed. Therefore, it must be impossible to create empty containers. The only way to create a container in Clint is by splitting an existing container into multiple parts. This exactly corresponds to the IndexSetSplit transformation if the container is a polygon. It also maps to the Distribute transformation when the container is a coordinate system or a pile. Conversely, the Collapse and FuseNext transformations correspond to visually joining two containers.

Clint Interface

Clint combines three editable and synchronized representations (see Fig. 5a): (1) the interactive visualization; (2) a navigable and editable transformation history view based on Clay scripts; and (3) the source code editor. A consistent color scheme is used between the views to match code statements to the visualization. Transformation directives corresponding to graphical actions are immediately appended to the history view. The user can then navigate through the history by selecting an entry, which will update the visualization to the corresponding previous state, or edit it directly using the Clay syntax. As the target code tends to become complex and unreadable after several manipulations, the user has the option to keep the original code visible instead of the transformed one. Finally, when the code is edited, the visualization is updated, thus making Clint a dynamic visualizer for polyhedral code.
USE SCENARIOS

Clint can be used as a stand-alone program transformation tool or in conjunction with an automatic optimizer. In the first case, the user must decide on the transformation to perform. In the second case, Clint proposes a sequence of primitive transformations equivalent to the automatically computed one, letting the user complement or modify it independently from the optimizer. In both cases, the user may reason in terms of an instance-wise dependence graph rather than in terms of loop transformations or parameters of the optimization algorithm. Our approach does not impose a particular transformation heuristic. Instead, we suggest to build intuition by visualizing (optimized) programs that perform well and identifying visual patterns. For an optimization expert, these patterns may eventually lead to a novel heuristic. We provide two end-to-end illustrative examples, in which we attempt to make dependence arrows short to improve reuse and orthogonal to axes to exploit parallelism.

Assisted Semi-Automatic Transformation

Clint can be used as a tool for applying loop-level transformations that provides instant legality feedback and generates transformed code automatically. Let us continue with the polynomial multiplication kernel example (see Fig. 1a) to demonstrate how a long sequence of transformations can be applied. The default representation of the kernel, with parameters set to 4, is shown in Fig. 6a. The loop j features parallelism and is marked accordingly. Inner parallelism is often less desirable as it would incur barrier synchronization cost on every iteration of the outer loops. Therefore, observing that the dependence arrows are diagonal, the user may decide to make them orthogonal to the i loop to make it parallel. They can do so by dragging the top right handle of the polygon right (Fig. 6a). However, such a transformation is illegal, as indicated by the red arrows that point in the direction opposite to the j access. This dependence violation can be removed by switching the direction of the arrows, which is achieved by dragging the top right handle left to rotate the polygon around its center (Fig. 6b). The combined transformation sequence is now legal yet potentially inefficient: different iterations of the parallel loop i execute different numbers of statement instances. Observing the symmetry of the polygon, the user selects a triangular-shaped group of points on the right (Fig. 6c) and drags it to the empty space on the left (Fig. 6d), until the balanced, rectangular shape is reconstructed (Fig. 6e). The final transformation corresponds to loop skewing, followed by two loop reversals and shifts, then by index-set splitting, and finally by shifting. However, at no time during the transformation must the user be aware of the particular loop transformations, their legality or the transformed code. They can operate on an instantiation of the instance-wise dependence graph, as opposed to directive-based approaches where, even with visualization, they would have to find the transformation directive that would result in a desired visual shape.

Understanding, Improving and Rectifying Automatic Transformation

Manual program transformation, even with efficient support tools, may require significant effort from the programmer. Fully automated program optimizers are designed to yield decent performance in most cases. However, they are based on imprecise heuristics, which may fail to improve performance or even degrade it. Polyhedral optimizers are essentially source-to-source black boxes offering little control over the optimization process.
Clint relies on Chlore [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF] to find a sequence of primitive directives equivalent to the automatically computed optimization and lets the user replay and modify it, independently of the optimization algorithm. The user does not have to know or understand the internal operation of the optimizer and its configuration. Consider the Multi-Resolution Analysis Kernel code, available in the doitgen benchmark of the PolyBench/C 4.2 suite [START_REF] Pouchet | PolyBench/C 4.2. Polyhedral Benchmark Suite[END_REF] and presented in Fig. 7a. A sequential version of this kernel runs in 0.83s on our test machine.² We applied the Pluto³ polyhedral compiler [START_REF] Bondhugula | A Practical Automatic Polyhedral Parallelizer and Locality Optimizer[END_REF] to extract parallelism from this code. We also requested Pluto to tile the transformed code, which is likely to improve performance thanks to data locality and to expose wavefront parallelism. A simplified version of the resulting code is presented in Fig. 7c. It indeed contains tiled and parallelized loops. Yet this code executes in 0.91s, a 10% slowdown compared to the sequential version (the untiled parallel version executes in 50.1s, a 62× slowdown). Without any further suggestion from Pluto, the user may either stick with a non-transformed sequential version or with an inefficient parallel one. The code was transformed so aggressively that the user is unlikely to attempt code modifications or even to understand the transformation that was applied. Comparing the Clint visualizations before (Fig. 7b) and after (Fig. 8a) the transformation suggests loop fission took place, which can also be inferred from the generated code. Step-by-step replay confirms this and also demonstrates loop tiling followed by skewing. It also shows that inner loops were parallelized, which is known to result in large barrier synchronization overheads. A fat dot between coordinate systems indicates there is some reuse between loops, but it is unclear whether Pluto performed fission to ensure the legality of skewing and tiling, or because of its fusion heuristic. To discover that, the user may undo the fission by fusing the loops back together (Fig. 8a). While they drag the polygon, legality feed-forward appears in the shape of gray arrows that indicate that the transformation would be legal and would preserve parallelism. Motivated by the success and observing the remaining reuse, the user may decide to fuse the remaining loop as well. This transformation would be illegal, as indicated by red arrows appearing while the polygon is being dragged. The user can still finish the manipulation, and then use a conventional "undo" command. The final manually retouched version runs in 0.67s, a (modest) 25% speedup. Without step-by-step replay and direct manipulation, it would be hard to experiment with different fusion structures using a general trial-and-error strategy. Although loop fusion is often implemented as a separate optimization problem in polyhedral optimizers, it is no easier to control externally. Clint allows users to understand and directly modify the fusion/fission structure, instead of reasoning about how a particular heuristic would behave.

The particular use cases of the previous section illustrate well the potential benefits of the tool in specific cases, but they do not help evaluating and understanding its overall usability in more general cases and with different users. Therefore, as is commonly done in Human-Computer Interaction, we conducted a series of user studies considering more abstract tasks that assess the usability of Clint.

Understanding the Visualization

Although similar visualizations have already been used for descriptive or pedagogical purposes, there is no empirical evidence of their appropriateness for conveying program structures. We designed an experiment to assess the suitability of our visual representation.
In particular, we test whether both experts in the polyhedral model and non-expert programmers can establish a bidirectional mapping between the Clint visualization and code.

Participants. We recruited 16 participants (aged 18-53) from our organizations. All of them had experience in programming using imperative languages with C-like syntax and a basic understanding of the polyhedral model and its limitations. Six participants reported having manually constructed similar visualizations from scratch and were therefore considered Experts. Because participants were asked to construct visualizations following given rules, previous exposure to these rules is a more relevant criterion of expertise than familiarity with the polyhedral model.

Procedure. Our experiment is a [3 × 2] mixed design having two factors:
• Task: mapping direction (between participants)
  - Visualization to Code (VC): writing a code snippet corresponding to a given visualization using a C-like language featuring loops and branches with affine conditions;
  - Code to Visualization (CV): drawing an iteration domain visualization given the corresponding code.
• Difficulty: problems may be (within participants)
  - Simple: two-dimensional with constant bounds;
  - Medium: multi-dimensional with constant bounds;
  - Hard: two-dimensional with mutually-dependent bounds and branches.

We divided participants into two groups with an equal number of experts. Group 1 performed the VC task, group 2 performed the CV task. This between-participants factor allowed us to present the same problems to all participants while avoiding a learning effect. Both tasks were performed on paper, with squared graph support for the CV task. Participants were instructed about the visualization and performed two practice tasks before the session. They were asked to work as accurately as possible without time limit and were allowed to withdraw from a task. Expected solutions were shown at the end of the experiment. Each session lasted about 20 minutes.

Data Collection. For each trial, we measured Completion Time, Error and Abandon rates. The errors were split in two categories: Parameter Errors, when the shape of the resulting polyhedron was drawn correctly but its linear sizes or position were wrong; Shape Errors, when the shape of the polyhedron was incorrect. Codes describing the same iteration domain were considered equivalent (e.g., i <= 4 and i < 5). Upon completion, participants filled out a demographics questionnaire.

Data Processing and Analysis. We performed a log-transformation of the Completion Time to compensate for the positive skew of its distribution, resulting in asymmetric confidence intervals. Due to concerns over the limits of null hypothesis significance testing in various research fields [START_REF] Cumming | The New Statistics: Why and How[END_REF][START_REF] Dragicevic | Fair Statistical Communication in HCI[END_REF], our analyses are based on estimation [START_REF] Cumming | Inference by Eye: Confidence Intervals and How to Read Pictures of Data[END_REF]. We report symmetric effect sizes on means—es = 2(m₁ − m₂)/(m₁ + m₂), where m₁, m₂ are the means—and 95% confidence intervals (CIs).
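Restated as code for clarity (illustrative only), the symmetric effect size used throughout the results is:

/* Symmetric effect size on two means: positive when m1 exceeds m2,
 * normalized by their average so it is comparable across conditions. */
double effect_size(double m1, double m2) {
  return 2.0 * (m1 - m2) / (m1 + m2);
}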
Results. We did not observe a significant order effect on the Error Rate or Completion Time, meaning that there was neither a learning nor a fatigue effect along the experiment.

Completion Time. We discarded 7 trials in which participants produced erroneous code. Task did not strongly affect the Completion Time: VC took 182s (95%CI = [127s, 262s]) on average while CV took 215s (95%CI = [156s, 296s]) on average, resulting in an effect size of 16.3% (95%CI = [-39.2, 50.9]). Despite Experts being familiar with similar representations, we observed no interaction between expertise and Task. Experts performed 56.7% (95%CI = [26.8, 98.8]) faster than Non-Experts for Hard tasks. Both performed similarly on Easy and Medium tasks. In general, Completion Time is more consistent across Non-Expert participants than across Expert participants (Fig. 9a). These results suggest that our representation is suitable for both Experts and Non-Experts if the complexity of the task remains limited. They also confirm our assessment of task difficulty.

Errors and Abandons. Participants performed the tasks with very low error rates: 8.3% (95%CI = [-3.6%, 20.3%]) for VC tasks and 4.2% (95%CI = [-4.5%, 12.8%]) for CV. Non-Experts proposed wrong code for Hard VC tasks, equally split between Parameter and Shape Errors. Experts made Parameter Errors for some Medium tasks. We observed only two withdrawals during a trial, both from Non-Experts on a Hard task, one in VC and one in CV, and after more than 500s (Fig. 9b). Overall, such low error rates make it difficult to conclude on the causes of the errors, but suggest that both expert and non-expert users can reliably map the Clint visual representation to the code and vice versa.

Interactive Manipulation

After assessing the visualization approach, we focused on interactive program transformation with Clint. We conducted a preliminary usability study with users already familiar with the visualization. In order to separate the effect of direct manipulation from individual differences in expertise, participants were not allowed to use any automatic parallelizing compiler that would help experts achieve better performance. We also decided not to use the Clay syntax directly as it is little-known and was designed as an intermediate representation for graphical manipulation. No one attempted to use other directive-based tools.

Protocol. Participants. We recruited 8 participants (aged 23-47) by direct email to the participants of the previous study. Since they all were familiar with Clint, our expertise criterion does not apply.

Apparatus. The study was conducted with a prototype of Clint running on a 15" MacBook Pro. Participants were interacting with the laptop keyboard and a standard Apple mouse.

Procedure. The task consisted in transforming a program part so that the maximum number of loops becomes parallelizable. Participants had to transform the program, but not to include parallelism-specific constructs, e.g., OpenMP pragmas, in order to avoid bias from individual expertise differences. The experiment has a [3 × 3] within-subject design with two factors:
• Technique used in the trial:
  - Code: writing code in an editor of the user's choice, no visualization available;
  - Viz: direct manipulation, no code visible;
  - Choice: full interface, with direct manipulation and source code editing.
• Difficulty of the task:
  - Easy: two-dimensional case with at most two transformations;
  - Medium: two- or three-dimensional case with rectangular bounds and at most three transformations;
  - Hard: two- or three-dimensional case with mutually-dependent bounds and at least two transformations.

Trials were grouped in three blocks by Technique. The Code and Viz blocks were presented first. Their order was counterbalanced across participants.
Choice was always presented last in order to assess participants' preference in using code editing or direct manipulation after having used both. In each block, participants were presented with one task of each difficulty level in random order. Tasks were randomly picked into different blocks across participants. They were drawn from real-world program examples and polyhedral benchmarks. Trials were not limited in time and participants were asked to explicitly end the trial by pushing an on-screen button. Prior to the experiment, participants were instructed about source code transformations and the corresponding direct manipulation techniques. They also practiced 4 trials of medium difficulty for each technique before the experiment and were allowed to perform two "recall" practice trials before each Technique block. Each session lasted about 60 minutes. The study was completed by a demographics questionnaire.

Data Collection. For each trial, we measured:
• the overall trial Completion Time;
• First Change Time, the amount of time from the start to the first change in the program structure (code edited or visualization manipulated);
• Success Rate, the ratio between the number of loops made parallel by the transformation and the total number of possibly parallel loops.

We recorded both the final state and all intermediary transformations to the program. During the analysis, we performed a log-transform of the Completion Time and First Change Time.

Results and Discussion. Because this experiment was conducted with a small sample, we mostly report results graphically in order to illustrate general trends. We did not observe any ordering effect of Technique or Difficulty on Completion Time and Success Rate.

Accuracy and Efficiency. Fig. 10a suggests, despite large variability, that participants were in general more successful in transforming the program with direct manipulation than with code editing. Effect sizes reach 40% and 44% for Easy and Medium tasks. However, for Hard tasks, the success rates are identical. This suggests that finding a multi-step transformation is a key difficulty. Fig. 10b suggests that, for successful trials, participants performed the transformation consistently faster in the Viz condition. The difference in variability between Code and Viz suggests that direct manipulation compensates for individual expertise differences. Similar Completion Times for failed trials can be explained, after analyzing the transformations, by participants "abandoning" the trial if their first attempt did not expose parallelism and submitting a non-parallelizable version.

Strategy and Exploration. Participants at least tried to perform a transformation in 76% of cases with Code and 94% with Viz, suggesting that the visualization engages participants by changing the perception of task difficulty. We computed the ratio First Change Time/Completion Time as a measure of "engagement" (Fig. 10c). It increases with difficulty for Code, but drastically decreases for Viz, suggesting that participants were more likely to adopt an exploratory trial-and-error strategy supported by the interactive visualization as opposed to code. In the Choice condition, the ratio remains stable, as participants spent time choosing which representation to use.

Choice between Code Editing and Direct Manipulation. In the Choice condition, only 3 participants interacted with the code. They made edits during the first 30s and then switched to the visualization.
After the experiment, they explained to have modified the code for the sake of analysis, e.g., to see whether a dependence was triggered by a particular access they temporarily removed. We observed that most participants were examining the code, but not selecting it. This observation suggests that, although they see the limitations of the code representation, participants may need it to relate to the conventional program editing that better corresponds to their expertise.

Preference for Code or Visualization

Our last experiment investigates the use of textual and visual representations for SCoPs. We relied on eye tracking technology in order to precisely measure visual attention between code and visualization when both were available. We expect that, given sufficient training, users will prefer visualization to code analysis if there is a meaningful task-relevant mapping between the two. This experiment required a pair of small program analysis tasks such that either code or visualization supports each of them better, but never both. Participants had to answer a binary question, with positive or negative formulation to avoid bias. The study was structured as the previous one.

Protocol. Participants. We recruited 12 participants (aged 21-34, mean=27) through mailing lists. They did not participate in the previous studies and had a self-reported experience in programming of 5 to 15 years. All had normal uncorrected vision.

Apparatus. The experimental setup consisted of a 15" MacBook Pro with a 2880 × 1800 screen at 220 ppi connected to the SMI-ETG v1 eye-tracking system.⁴ The participant was seated 70 cm away from the screen, which resulted in a gaze position accuracy of 27.7px in screen space. The tracking system outputs a 30 FPS video stream from its frontal camera. We placed bright-colored tokens on the screen corners to locate it in the video and compensate for perspective distortion. These tokens were tracked by a custom OpenCV-based script that generated gaze positions in screen coordinates through linear interpolation with perspective correction. We ensured that the sizes of both representations were identical across conditions, with the content centered in each of them. Unused space was filled with neutral gray to avoid distraction. When visible, the multiple representations were 60 px apart (twice the gaze accuracy) so that gaze could be attributed to one of them.

Procedure. The study is a [3 × 3 × 2] within-participants experiment with 4 repetitions per participant and the following factors:
• Representation used in the trial, one of visual representation (Viz), source code (Code) or both simultaneously (Choice);
• Difficulty of the question, one of Easy, a loop nest with constant conditions, Medium, a loop nest with at least 3 non-constant conditions, or Hard, a loop nest with a branch inside and at least 5 non-constant conditions;
• a binary Question asked to the user, which either concerns the textual form of loop bounds (Bounds) or a statement instance being executed or not inside a loop (Execution).

Bounds questions were targeted at Code, where the answer is immediately visible, while Execution questions were targeted at Viz. We refer to these conditions as matching questions, and to the other conditions as mismatching questions. In total, we collected data for 12 · 3 · 3 · 2 · 4 = 864 trials. Trials were first blocked by Representation and then by repetition. Representation blocks are ordered identically to the previous study. Each of them comprises 4 repetition blocks, each of which has 6 trials with different Question and Difficulty in a randomized order.
Representation blocks were preceded by a practice session with 4 trials of Medium difficulty. After each trial in the Choice condition, participants were asked about their preferred representation for this question. Blocks featuring only Code or Viz were conducted without eye tracking. Participants were wearing the eye-tracking glasses for the third block, after we performed a 3-point calibration with 30px tokens and checked that the glasses did not affect their vision by performing a read-aloud test. Participants started the trial by clicking the "start" button and ended it by clicking the answer button. They could abandon the trial after at least 15s, to avoid immediate abandons for Hard tasks with mismatching questions. The software provided the correct answer after each trial. One session lasted 50 minutes on average and was complemented by a demographic questionnaire.

Data Collection and Processing. We collected the following data:
• Completion Time of the trial;
• Correctness of the answer;
• Preference between Representations for the last block;
• Gaze from the eye-tracking glasses for the last block.

Given the gaze position in screen coordinates, we identified the widget in the focus of attention as one out of three: Code Widget, Viz Widget or Question Widget. Outside any of the widget areas, the gaze was considered Off Screen. We randomly sampled 10 frames from each video and verified manually that the script provides exact classification. Completion Time was log-transformed to compensate for the positive skew of its distribution.

Results and Discussion. Ordering effects. We observed a slight decrease in Completion Time between the first blocks, effect size -13.6% (95%CI = [-37.7, 6.1]), but the large variability does not allow to conclude on the presence of a learning effect. Correctness did not vary substantially between blocks.

Completion Time. Mismatching questions required substantially more time to complete the trial than matching questions, except for Easy tasks, as shown in Fig. 11a. With Code, participants spent 14% (95%CI = [-22, 40]) more time on Easy Execution questions, and respectively 132% (95%CI = [108, 146]) and 134% (95%CI = [111, 147]) more time on Medium and Hard Execution questions than on the Bounds questions of the same difficulty. Similarly, with the Viz representation, they answered Execution questions 9% (95%CI = [-20, 49]), 40% (95%CI = [1, 102]), and 57% (95%CI = [10, 129]) faster for increasing Difficulty. This result supports the definition of mismatching questions, suggesting that a representation not adapted to the question slows participants down. The smaller increase of Completion Time with Viz compared to Code suggests that the Viz representation allows reasoning about mismatching questions more easily than Code. The Choice condition shows Completion Times close to those for the matching representation. For Bounds questions, it took on average 6% (95%CI = [-27, 56]), -3% (95%CI = [-40, 56]), and 7% (95%CI = [-33, 70]) more time compared to Code for increasing Difficulty. For Execution questions, it took 5% (95%CI = [-27, 53]), -14% (95%CI = [-54, 54]) and -21% (95%CI = [-63, 58]) more time than Viz for increasing Difficulty. These results suggest that, given two representations, participants are likely to choose the matching one. Although they do not spend more time on average, the variability is larger for the Choice condition. It suggests that participants only effectively use one representation, but consider both. We illustrate this later with the eye tracking data.

Correctness. The participants succeeded to answer the majority of the questions, with 93% (95%CI = [90, 95]) of correct results on average, as shown in Fig. 11b. Abandoned trials were considered as incorrect answers. Overall trends are similar to Completion Time. Given Choice, participants had a high success rate overall, except for Easy Execution questions with a mean Success Rate of 89.5% (95%CI = [79%, 100%]). This may be explained by choosing the mismatching Code representation due to visible task simplicity. Due to low error rates, we did not perform any further analyses. Only 4 trials were abandoned, all featuring mismatching questions, 3 of which with Code. Abandons took place after 91s on average whereas the mean trial duration is 13.7s.

Representation Choice. Our analyses are built on the following metrics, defined prior to the study. Visual Preference, VP: the total duration of gaze on the Viz Widget divided by the total duration of gaze on the Viz or Code Widget. Values close to 1 indicate the participant looking more at the visualization. Representation Uncertainty, RU: the measure of attention distribution, computed as RU = 1 − 2·|VP − 0.5|. High values mean attention was distributed evenly between representations, low values that only one representation was used.
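Both metrics in code form (illustrative names; RU is written so that 1 means evenly split attention and 0 means a single representation was used, matching the description above):

#include <math.h>

/* Share of representation-gaze time spent on the visualization. */
double visual_preference(double viz_gaze_s, double code_gaze_s) {
  return viz_gaze_s / (viz_gaze_s + code_gaze_s);
}

/* 1.0 when attention is split evenly between the two representations,
 * 0.0 when only one of them is used. */
double representation_uncertainty(double vp) {
  return 1.0 - 2.0 * fabs(vp - 0.5);
}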
We expect Completion Time to increase with Representation Uncertainty, as the participant uses two representations where one would suffice. At the same time, it may increase even more for lower values of Representation Uncertainty and a high Visual Preference for the unadapted representation. Fig. 12a shows the Visual Preference for different conditions, the center line corresponding to an equal distribution of visual attention. For Medium and Hard tasks, participants spent more time on matching representations; the effect sizes reach 66.6% (95%CI = [4.8, 128.5]) and 81.2% (95%CI = [16.7, 145.7]), respectively. For Easy tasks, they relied on the Code independently of the Question. The reported Preference, depicted in Fig. 12b, shows the same tendency. The preference for Code drops from 56% in Easy Execution tasks to 6% in Medium and Hard Execution tasks. Since we asked which representation they found "most useful", the difference between the reported Preference and the Visual Preference suggests that participants tend to look at both representations even though they do not find one of them useful. Nevertheless, we observed a positive correlation between reported Preference and Visual Preference, r = 0.41 (95%CI = [0.20, 0.57]), suggesting that participants tend to use more the representation they find useful. Overall, we observed a correlation between the Representation Uncertainty rate and Completion Time, r = 0.41 (95%CI = [0.19, 0.58]), as well as a negative correlation between the Representation Uncertainty rate and Correctness, r = -0.27 (95%CI = [-0.47, -0.04]): the more participants' attention was distributed between representations, the less correct answers they gave. Although correlation does not imply causality, the connection between the simultaneous use of different representations and the total trial duration suggests that one matching representation should be preferred to two.

RELATED WORK

Interactive Program Parallelization. Program editors supporting interactive program parallelization date back to the wide adoption of parallelism for scientific programming. We review those specifically targeting loop-level optimizations. The ParaScope editor [START_REF] Kennedy | Interactive Parallel Programming Using the ParaScope Editor[END_REF] provided dependence analysis and interactive loop transformation for High-Performance Fortran (HPF). It reported the dependence analysis results and allowed the user to perform various loop transformations, including parallelization. The D Editor interacted with a distributed HPF compiler to report optimization choices regarding data distribution and parallelization [START_REF] Hiranandani | The D Editor: A New Interactive Parallel Programming Tool[END_REF]. SUIF Explorer took a different approach, collecting dynamic execution and dependence data to suggest loops (or parts thereof, thanks to program slicing [START_REF] Weiser | Program Slicing[END_REF]) for parallelization [START_REF] Liao | SUIF Explorer: An Interactive and Interprocedural Parallelizer[END_REF]. Similarly, DECO records traces of the memory accesses along with cache hit information and uses pattern recognition algorithms to suggest memory optimizations [START_REF] Tao | An Interactive Graphical Environment for Code Optimization[END_REF]. NaraView provides a navigable 3D visualization of loop-level access patterns [START_REF] Sasakura | NaraView: An Interactive 3D Visualization System for Parallelization of Programs[END_REF].
Contrary to these tools, Clint uses the polyhedral model with its instance-wise dependence analysis and static guarantees of loop transformation legality. It also allows for transforming the program using its visualization. Chlore-based transformation replay is not tied to particular compiler transformations.

Semi-Automatic Polyhedral Transformations. User-assisting tools based on the polyhedral model emerged as a means to express "classical" loop transformations [START_REF] Joseph | High Performance Compilers for Parallel Computing[END_REF] in the model, with the Unified Transformation Framework (UTF) stemming from the first such approach [START_REF] Kelly | A Unifying Framework for Iteration Reordering Transformations[END_REF]. URUK was proposed to improve loop transformation composability and enable automated traversal of a transformation search space [START_REF] Girbal | Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies[END_REF], delaying the legality analysis until code generation. Loop Transformation Recipes combine loop transformations, mapping to accelerators and code generation directives from CHiLL [START_REF] Chen | CHiLL: A Framework for Composing High-Level Loop Transformations[END_REF] with the POET [START_REF] Yi | POET: Parameterized Optimizations for Empirical Tuning[END_REF] language for auto-tuning specification. AlphaZ focuses on equational programming and enables complex memory mapping and management [START_REF] Yuki | Alphaz: A System for Design Space Exploration in the Polyhedral Model[END_REF]. Clay is arguably the first complete set of directives for polyhedral program transformations [START_REF] Bagnères | Opening Polyhedral Compiler's Black Box[END_REF]. Clint uses visualization and direct manipulation to address the challenges of directive-based approaches, such as identifying a promising transformation, targeting it at a program entity or evaluating its effects.

Visualizations for the Polyhedral Model. The literature on the polyhedral model heavily relies on scatterplot-like visualizations of iteration domains. Polyhedral libraries include components for visualization, including VisualPolylib [START_REF] Loechner | PolyLib: A library for manipulating parameterized polyhedra[END_REF] for PolyLib and islplot [START_REF] Grosser | islplot: Library to Plot Sets and Maps[END_REF] for isl [START_REF] Verdoolaege | Isl: An Integer Set Library for the Polyhedral Model[END_REF]. LooPo was arguably the first tool to visualize the polyhedral dependence analysis information during program transformation [START_REF] Griebl | The Loop Parallelizer LooPo—Announcement[END_REF]. Tulipse integrates polyhedral visualization into the Eclipse IDE [START_REF] Wong | Tulipse: A Visualization Framework for User-Guided Parallelization[END_REF]. Clint goes beyond static visualization by enabling direct manipulation to transform the program. The 3D iteration space visualizer lets the user interactively request loop parallelization through a visual representation [START_REF] Yu | Loop Parallelization Using the 3D Iteration Space Visualizer[END_REF]. Polyhedral Playground [START_REF] Grosser | PollyLabs Polyhedral Playground[END_REF] augments a web-based polyhedral calculator with domain and dependence visualizations.
PUMA-V provides a set of visualizations that expose the internal operation of the R-Stream compiler [START_REF] Papenhausen | PUMA-V: An Interactive Visual Tool for Code Optimization and Parallelization Based on the Polyhedral Model[END_REF][START_REF] Papenhausen | Polyhedral User Mapping and Assistant Visualizer Tool for the R-Stream Auto-Parallelizing Compiler[END_REF]. It allows the user to control the optimization-related compiler options from the visualization. Clint builds on Clay as an intermediate abstraction and does not require the user to control or even understand the operation of a compiler.

CONCLUSION

Clint addresses the issues of directive-based approaches in the polyhedral model: target identification is made direct without exposing polyhedral-specific concepts; transformation legality and effects are visible immediately during manipulation; reading polyhedrally-transformed code is no longer necessary. It makes loop optimization accessible, interactive and independent of a particular algorithm. Our approach enables a human-machine partnership where an automatic framework performs heuristic-driven transformation and provides feedback on demand, while a user brings in domain knowledge to tweak the transformation without modifying the heuristics. Such domain knowledge may be unavailable to framework designers and differ between use cases. Experiments suggest that visualizations lower the expertise necessary to perform aggressive program restructuring and decrease the time necessary for program analysis. Semi-automatic transformation decreases the time of program transformation. In our studies, the visual semi-automatic approach to program transformation doubled the success rate and decreased the required time by a factor of 5 for some program structures. We also contribute to the discussion on visualization acceptance, suggesting that its perceived utility increases with the relative complexity of the task.

Limitations. As Clint was designed using a set of polyhedral test cases with a small number of statements nested in shallow loops, it may be subject to cluttering for larger program parts. Long blocks of interdependent statements may result in a profusion of dependence arrows. Visual replay may become distracting when multiple projections are rendered for deep loops. However, program parts amenable to the polyhedral model are typically small yet require aggressive transformation.

Future Work. Drawing from the eye-tracking study conclusions and the existing limitations, the visual approach seems promising yet restricted for difficult cases. We plan to address those by interleaving visual representations and code fragments and by proposing a zoomable interface with different levels of detail. At the same time, the visualization may be beneficial for learning, which can be supported with a smooth transition between code and visual representation. Visual cluttering can be addressed by only displaying salient parts. They can be identified directly by the users, or inferred from their behavior. On the other hand, a polyhedral compiler may provide additional feedback on, e.g., dependences that prevent parallel execution. Finally, Clint visualizations may be used conjointly with performance models and runtime evaluators, and integrated into a larger development environment in order to account for program parallelization all along the development process.

Fig. 1. Polynomial Multiply computation kernel:
for (i = 0; i < N; ++i)
  for (j = 0; j < N; ++j)
    z[i+j] += x[i] * y[j];
Fig. 2. Performing a skew transformation to parallelize the polynomial multiplication loop by deforming the polygon. The code is automatically transformed from its original form (left) to the skewed one (right).

Code of the shifting example (cf. Fig. 4), original and transformed:
for (i = 0; i < N; ++i)
  for (j = 0; j < N; ++j) {
    A[i+1][j+1] += 0.5 * A[i+1][j];
    B[i+1][j+1] += A[i][j];
  }

for (j = 0; j < N; ++j)
  B[1][j+1] += A[0][j];
#pragma omp parallel for private(j)
for (i = 0; i < N-1; ++i)
  for (j = 0; j < N; ++j) {
    A[i+1][j+1] += 0.5 * A[i+1][j];
    B[i+2][j+1] += A[i+1][j];
  }
for (j = 0; j < N; ++j)
  A[N][j+1] += 0.5 * A[N][j];

Fig. 5. Clint displays multiple projections for deep loop nests. (a) The Clint interface includes: (1) interactive visualization with multiple projections, (2) editable history view of transformations, and (3) source code editor, all coordinated with each other. (b) When the main projection is manipulated, auxiliary projections are updated simultaneously.

Fig. 6. Users can directly manipulate the visual representation of SCoPs and have the transformed program generated automatically. Dragging the corner away from the center performs loop skewing; towards the center, reversal; selecting points and dragging them performs index-set splitting followed by loop shifting. Dependence arrows orthogonal to axes enable parallel execution.

Fig. 8. Using the visual representation to re-adjust an automatically computed transformation with immediate feed-forward on semantics preservation. Dependences violated by the intended transformation turn red; lines within shapes depict tiles. Shaded shapes are positions before manipulation.

Fig. 9. (a) Completion Times increase with task difficulty but less so for Experts. Results are similar between Experts and Non-Experts. Error bars are 95% confidence intervals. (b) The overall Error Rate is low. Experts are more successful but fail at simpler tasks; Non-Experts may abandon.

Fig. 10. (a) Success Rate is higher with Viz, except for Hard tasks. (b) Completion Time is lower with Viz, especially in successful trials. (c) Ratio First Change Time / Completion Time; the change in trend between Code and Viz may be due to users adopting an exploratory strategy. Error bars are 95% CIs.

Fig. 11. (a) Mismatching questions required up to 4× more time. (b) Medium and Hard questions with a mismatching representation result in more incorrect answers. Completion Times and Correctness Ratios for Choice are close to those for the matching representation. Dots are means, error bars are 95% CIs, vertical density plots show underlying distributions.
For Bounds questions, it took on average 6% (95% CI = [-27, 56]), -3% (95% CI = [-40, 56]), and 7% (95% CI = [-33, 70]) more time compared to Code for increasing Difficulty. For Execution questions, it took 5% (95% CI = [-27, 53]), -14% (95% CI = [-54, 54]), and -21% (95% CI = [-63, 58]) more time than Viz for increasing Difficulty.
Fig. 12. (a) Matching representations are used more for Medium and Hard tasks, but Code for Easy tasks. (b) Reported preference demonstrates a similar trend. Dots are means, error bars are 95% CIs.
In fact, we created the R transformation in Clay to address the skew combination problem. It was the last missing transformation that enabled completeness of the set.
Experimental setup: 4× Intel Xeon E5-2630 (Sandy Bridge, 6 cores, 15 MB L3 cache), 64 GB RAM, running CentOS Linux 7.2.1511, compiled with GCC 4.9.3 with -O3 -march=native flags, benchmark size LARGE, NQ = 140, NR = 150, NP = 160. The average of 12 runs is reported, kernel execution time only, using high-resolution CPU timers. Pluto 0.11.4 with --parallel --tile, as available at https://github.com/bondhugula/pluto/releases/tag/0.11.4
Eye tracker: http://www.eyetracking-glasses.com/
Code fragment (truncated):
for (r = 0; r < NR; r++)
  for (q = 0; q < NQ; q++) {
    for (p = 0; p < NP; p++) {
      sum[p] = 0.0;
      for (s = 0; s < NP; s++)
        sum ...
The final manually retouched version runs in 0.67 s with a (modest) 25% speedup. Without step-by-step replay and direct manipulation, it would be hard to experiment with different fusion structures using a general trial-and-error strategy. Although loop fusion is often implemented as a separate optimization problem in polyhedral optimizers, it is no easier to control externally. Clint allows users to understand and directly modify the fusion/fission structure, instead of reasoning about how a particular heuristic would behave.
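As a hypothetical illustration of the fusion/fission trade-off discussed above (the arrays and loop bodies here are invented for the sketch and are not the doitgen kernel from the text), the two structures below compute the same result but differ in locality and in how they constrain further transformation:

#include <stdio.h>

#define N 8

int main(void) {
    double a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) b[i] = (double)i;

    /* Fissioned structure: two passes over the data; for large N,
     * a[] leaves cache between the two loops, but each loop can be
     * transformed (e.g., parallelized) independently. */
    for (int i = 0; i < N; ++i) a[i] = 2.0 * b[i];
    for (int i = 0; i < N; ++i) c[i] = a[i] + b[i];

    /* Fused structure: one pass; a[i] is reused immediately, improving
     * locality, but the longer body can increase register pressure and
     * couple the two statements for later transformations. */
    for (int i = 0; i < N; ++i) {
        a[i] = 2.0 * b[i];
        c[i] = a[i] + b[i];
    }

    printf("c[N-1] = %g\n", c[N - 1]);
    return 0;
}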
78,059
[ "1844", "5285" ]
[ "454670", "525217", "233382", "217648" ]
01744451
en
[ "shs" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01744451/file/Kwok%20Diana_JWB%20PDW2.pdf
Keywords: mergers and acquisitions, interpersonal trust, boundary spanning, emerging economies, Malaysia
How does trust develop in new subordinate-leader relationships during post-acquisition integration? Does boundary spanning facilitate or hamper trust? Cultural differences between the acquirer and acquired firm add to integration uncertainty, occurring at multiple, interconnected levels (national, organizational, functional cultures). Multicultural societies add another layer of complexity. This study compares a domestic M&A with a South African cross-border acquisition in the Malaysian financial industry. The analysis reveals that boundary spanning by acquirer and acquired-firm managers facilitated subordinate-leader trust development. I thus posit that boundary spanning mitigates uncertainty and cultural differences during integration. Further and paradoxically, integrating domestic rather than cross-border acquisitions can be more complex when intra-national culture differences are accounted for. This paper offers insights for advancing Western-developed theories and for more successful integration. INTRODUCTION Mergers and acquisitions (M&As) are burdened by high failure rates. At best only half succeed [START_REF] Cartwright | The role of culture compatibility in successful organizational marriage[END_REF][START_REF] Haspeslagh | Managing acquisitions: Creating value through corporate renewal[END_REF]. Post-acquisition integration is a vital process characterized by complexity, uncertainty and unpredictability [START_REF] Cording | Reducing causal ambiguity in acquisition integration: Intermediate goals as mediators of integration decisions and acquisition performance[END_REF] (Graebner et al., 2017). Birkinshaw et al.'s (2000) seminal paper argues that successful human integration facilitates the effectiveness of task integration, although both processes determine acquisition success. Cultural differences between the acquirer and acquired firm add to integration uncertainty and complexity, and have long been blamed for unsuccessful domestic and cross-border M&As [START_REF] Buono | When cultures collide: The anatomy of a merger[END_REF][START_REF] Cartwright | The role of culture compatibility in successful organizational marriage[END_REF][START_REF] Chatterjee | Cultural differences and shareholder value in related mergers: linking equity and human capital[END_REF] (Nahavandi & Malekzadeh, 1988). Culture is a multifaceted construct incorporating shared assumptions, practices, values, artifacts, and rituals [START_REF] Taras | Half a century of measuring culture: Review of approaches, challenges, and limitations based on the analysis of 121 instruments for quantifying culture[END_REF] (2016). Cultural differences often lead to complex trust dynamics and relationships between acquirer and acquired-firm personnel [START_REF] Buono | The human side of mergers and acquisitions[END_REF][START_REF] Stahl | Do cultural differences matter in mergers and acquisitions? A tentative model and examination[END_REF][START_REF] Teerikangas | The culture-performance relationship in M&A: From yes/no to how[END_REF]. Cultural challenges occur at multiple, interconnected levels including national, industrial, organizational, functional and professional cultures in M&As [START_REF] Teerikangas | The culture-performance relationship in M&A: From yes/no to how[END_REF].
Scholars have yet to address this interconnectivity adequately (ibid.), despite pinpointing national and organizational culture differences as the main integration challenges [START_REF] Brannen | Merging without alienating: Interventions promoting cross-cultural organizational integration and their limitations[END_REF][START_REF] Chatterjee | Cultural differences and shareholder value in related mergers: linking equity and human capital[END_REF][START_REF] Shimizu | Theoretical foundations of cross-border mergers and acquisitions: A review of current research and recommendations for the future[END_REF]. Moreover, the M&A cultural 'baggage' [START_REF] Teerikangas | The culture-performance relationship in M&A: From yes/no to how[END_REF] can be weighed down further by within-country cultural diversity. Recent studies highlight the complexities of multifaith and multilingual societies in cross-border M&As [START_REF] Cuypers | The effects of linguistic distance and lingua franca proficiency on the stake taken by acquirers in cross-border acquisitions[END_REF][START_REF] Dow | The effects of within-country linguistic and religious diversity on foreign acquisitions[END_REF][START_REF] Kroon | Explaining employees' reactions towards a cross-border merger: The role of English language fluency[END_REF][START_REF] Kwok | CEOs we trust: Religious homophily and crossborder acquisitions in multifaith Asian emerging economies[END_REF]. In particular, [START_REF] Dow | The effects of within-country linguistic and religious diversity on foreign acquisitions[END_REF] found that multifaith and multilingual societies can add complexity to behavioral uncertainty and information asymmetry between foreign acquirers and local acquired firms. Trust is a complex, multilevel [START_REF] Currall | A multilevel approach to trust in joint ventures[END_REF] and multifaceted construct [START_REF] Mayer | An integrative model of organizational trust[END_REF][START_REF] Rousseau | Not so different after all: A cross-discipline view of trust[END_REF]. It refers to positive expectations when the trustor is vulnerable to the trustee's actions (ibid.). Trust is essential in M&As [START_REF] Graebner | Caveat venditor: Trust asymmetries in acquisitions of entrepreneurial firms[END_REF][START_REF] Lander | Boarding the aircraft: Trust development amongst negotiators of a complex merger[END_REF][START_REF] Maguire | Citibankers' at Citigroup: a study of the loss of institutional trust after a merger[END_REF][START_REF] Stahl | Trust dynamics in acquisitions: A case survey[END_REF] (2012). During post-acquisition integration, trust facilitates cooperation, job performance, resource sharing, and knowledge transfer [START_REF] Bauer | Speed of acquisition integration: Separating the role of human and task integration[END_REF][START_REF] Stahl | Trust in mergers and acquisitions[END_REF] (2010). However, unanswered questions remain about how trust develops during integration, especially in subordinate-leader relationships (Graebner et al., 2017).
Inspired by calls for a multilayered, multifaceted and contextual view of culture in multiple domains (e.g., [START_REF] Dietz | Unravelling the complexities of trust and culture[END_REF][START_REF] Leung | Culture and international business: Recent advances and their implications for future research[END_REF] Tung, 2008), this study addresses the following research question: How does trust develop in new subordinate-leader relationships amidst the uncertainty and cultural differences of post-acquisition integration? The research is contextualized in Malaysia, a multicultural, middle-ranking emerging economy in Southeast Asia for cross-border M&As (UNCTAD, 2017). Emerging-economy firms actively engage in both domestic and cross-border M&As [START_REF] Aybar | Cross-border acquisitions and firm value: An analysis of emerging-market multinationals[END_REF][START_REF] Bandeira-De-Mello | Theoretical and empirical implications for research on South-South and South-North expansion strategies[END_REF][START_REF] Lebedev | Mergers and acquisitions in and out of emerging economies[END_REF]. Interestingly, Malaysia's M&A volume and value increased in 2017 despite the decline in global deals [START_REF] Duff | Transaction Trail: Annual Issue[END_REF]. Equally, this paper addresses another gap in M&A scholarship, on the difference between integrating domestic vs. international acquisitions. Most researchers contend that integrating cross-border M&As is riskier and more complicated than domestic deals [START_REF] Angwin | Strategic perspectives on European cross-border acquisitions: A view from top European executives[END_REF][START_REF] Barkema | Foreign entry, cultural barriers, and learning[END_REF][START_REF] Olie | Shades of culture and institutions in international mergers[END_REF]. Yet, some demonstrate that domestic M&A integration can be more challenging: despite the merging firms' shared national culture, organizational culture differences can hinder the integration process [START_REF] Véry | A cross-national assessment of acculturative stress in recent European mergers[END_REF] (1997). In fact, the differences between domestic and cross-border M&As are still poorly understood [START_REF] Bris | The value of investor protection: Firm evidence from crossborder mergers[END_REF][START_REF] Gregory | Do cross border and domestic acquisitions differ? Evidence from the acquisition of UK targets[END_REF][START_REF] Reynolds | The international experience in domestic mergers-Are purely domestic M&A a myth[END_REF]. I apply a comparative two-case research design in the Malaysian financial services industry, with a domestic M&A and a South African cross-border acquisition. South Africa and Malaysia are newly developed mid-range emerging economies [START_REF] Hoskisson | Emerging multinationals from mid-range economies: The influence of institutions and factor markets[END_REF]. The domestic M&A combines potentially conflicting cultures: two government-linked corporations (GLCs) with majority ethnic Malay personnel, and two entrepreneurial firms with mainly ethnic Chinese personnel. Using an abductive approach [START_REF] Timmermans | Theory construction in qualitative research: From grounded theory to abductive analysis[END_REF][START_REF] Welch | Theorising from case studies: Towards a pluralist future for international business research[END_REF], my analysis reveals how boundary spanning by both acquirer and acquired-firm managers facilitated subordinate-leader trust development.
I had found puzzling the apparent mismatch between theory-led expectations (complexity of integrating domestic vs. cross-border acquisitions) and interview narratives. Post-acquisition integration had been relatively smooth in both cases except the domestic M&A's first year, when it had been "absolute chaos" and "very stressed and a challenging environment for all the people". How did integration uncertainty and cultural differences dissipate? Moreover, other similar mergers in Malaysia had involved "a lot of infighting… intensified lobbying [and] backstabbing". Abduction led to "a re-description or re-contextualization of the phenomenon" (Welch et al., 2011, p. 748). The systematic analysis revealed boundary spanning behavior by acquirer and acquired-firm senior and middle managers. This indicates the widening locus of M&A leadership. The managers' relating, scouting, persuading and empowering behaviors [START_REF] Druskat | Managing from the boundary: The effective leadership of self-managing work teams[END_REF] (Druskat & Wheeler, 2003) spanned horizontal, vertical, stakeholder, demographic and geographical boundaries (Palus et al., 2013), and encompassed cultural differences at national (between-country), intra-national (within-country), organizational and functional levels [START_REF] Teerikangas | The culture-performance relationship in M&A: From yes/no to how[END_REF]. Boundary spanning overcame the 'us and them' mentality between acquirer/acquired-firm subordinates and acquired-firm/acquirer leaders by reducing uncertainty and cultural differences, thereby enabling the development of trust. Thus, I posit that boundary spanning mitigates uncertainty and cultural differences during post-acquisition integration. This paper makes two other contributions to M&A scholarship and offers managerial insights for more successful post-acquisition integration. First, it extends the role of boundary spanning in M&As, beyond the pre-M&A negotiation phase [START_REF] Lander | Boarding the aircraft: Trust development amongst negotiators of a complex merger[END_REF] to post-acquisition integration. This complements the process perspective [START_REF] Jemison | Corporate acquisitions: A process perspective[END_REF] and elucidates on the temporality of integrative actions [START_REF] Birkinshaw | Managing the post-acquisition integration process: How the human integration and task integration processes interact to foster value creation[END_REF][START_REF] Froese | Integration management of Western acquisitions in Japan[END_REF][START_REF] Teerikangas | Structure first! Temporal dynamics of structural and cultural integration in cross-border acquisitions[END_REF] from the perspective of South-South M&As. Second, it contributes to the literature on the complexities of multilingual societies in M&As [START_REF] Cuypers | The effects of linguistic distance and lingua franca proficiency on the stake taken by acquirers in cross-border acquisitions[END_REF][START_REF] Dow | The effects of within-country linguistic and religious diversity on foreign acquisitions[END_REF][START_REF] Kroon | Explaining employees' reactions towards a cross-border merger: The role of English language fluency[END_REF]. The domestic M&A was unable to establish a lingua franca for all personnel despite the possibility of speaking in two languages (English and the national language) at meetings.
This demonstrates that when intra-national culture differences are taken into consideration, integrating domestic rather than cross-border M&As can be more challenging [START_REF] Véry | A cross-national assessment of acculturative stress in recent European mergers[END_REF] (1997). LITERATURE REVIEW Subordinate-leader trust and M&As The paper adopts the widely quoted definition of trust from [START_REF] Rousseau | Not so different after all: A cross-discipline view of trust[END_REF], as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another" (p. 395). This definition captures two essential concepts: positive expectations of trustworthiness, including beliefs and perceptions of being able to rely on the trustee; and, willingness to accept vulnerability or a suspension of uncertainty [START_REF] Colquitt | Trust, trustworthiness, and trust propensity: a meta-analytic test of their unique relationships with risk taking and job performance[END_REF][START_REF] Ferrin | Can I trust you to trust me? A theory of trust, monitoring, and cooperation in interpersonal and intergroup relationships[END_REF]. In M&As individual trust can be directed at: a leader, subordinate, or between peers; a group including top management team (TMT) and headquarters (HQ); and, an organization, e.g., during M&A negotiations [START_REF] Currall | A multilevel approach to trust in joint ventures[END_REF]. Interpersonal trust refers to trust between individuals. In one of the most influential trust models [START_REF] Burke | Trust in leadership: A multilevel review and integration[END_REF][START_REF] Lewicki | Models of interpersonal trust development: Theoretical approaches, empirical evidence, and future directions[END_REF], [START_REF] Mayer | An integrative model of organizational trust[END_REF] proposes three main characteristics for trusting work relationships. Ability refers to the trustee's skills and domain-specific competence. Benevolence represents the trustor's belief that the trustee wants to "do good to the trustor" (p. 718). Integrity refers to the perception that the trustee "adheres to a set of principles that the trustor finds acceptable" (p. 719). These trust antecedents juxtapose with the trust constructs introduced by McAllister (1995): cognition-based trust based on rational information on the trustee's competence, reliability and credibility; and, affect-based trust which refers to an emotional attachment or concern for the other party's interests and welfare. Cognition-based trust precedes affect-based trust in organizations [START_REF] Mcallister | Affect-and cognition-based trust as foundations for interpersonal cooperation in organizations[END_REF][START_REF] Rousseau | Not so different after all: A cross-discipline view of trust[END_REF], and increases job satisfaction and performance [START_REF] Yang | Examining the effects of trust in leaders: A bases-and-foci approach[END_REF]. Studies of subordinate-leader trust in the context of M&As emphasize subordinate trust in leaders, and studies set in emerging economies are still relatively rare [START_REF] Kwok | CEOs we trust: Religious homophily and crossborder acquisitions in multifaith Asian emerging economies[END_REF][START_REF] Stahl | Does national context affect target firm employees' trust in acquisitions? A policy-capturing study[END_REF].
Trust between subordinates and leaders is not necessarily mutual or reciprocal [START_REF] Brower | A model of relational leadership: The integration of trust and leader-member exchange[END_REF][START_REF] Korsgaard | It Isn't Always Mutual: A Critical Review of Dyadic Trust[END_REF][START_REF] Schoorman | An integrative model of organizational trust: Past, present, and future[END_REF]. Their antecedents differ. Leaders emphasize the subordinate's receptivity, availability, and discreteness; whereas, subordinates emphasize the leader's availability, competence, discreteness, integrity, and openness [START_REF] Brower | A closer look at trust between managers and subordinates: Understanding the effects of both trusting and being trusted on subordinate outcomes[END_REF]. This asymmetry may lead to longer-term implications for trust dynamics and post-acquisition integration. Boundaries, cultural differences, boundary spanning and M&As Boundaries separate an organization's internal and external environments. Boundaries exist within organizations [START_REF] Palus | Boundary-Spanning Leadership in an Interdependent World[END_REF][START_REF] Schotter | Boundary spanning in global organizations[END_REF]. Managers often face five types of boundaries: horizontal, vertical, stakeholder, demographic, and geographic [START_REF] Palus | Boundary-Spanning Leadership in an Interdependent World[END_REF]. Horizontal boundaries separate functions and units by specialized expertise, and include functional, occupational and professional culture differences. During post-acquisition integration, horizontal boundaries often stem from organizational culture differences (pre-M&A legacies). Vertical boundaries are found across hierarchical levels and define title, rank and power. Stakeholder boundaries refer to shareholders, Boards of Directors, customers, partners, governments, and other communities. Stakeholder boundaries can overlap with organizational culture differences in M&As, and with national cultural differences in cross-border M&As. Demographic boundaries separate people by gender, age, ethnicity, religion, political ideology, etc., and include within-country cultural differences. Geographic boundaries are defined by physical location including countries, regions and East/West, and include national and regional culture differences. Boundaries are linked to identities (i.e., who we are and how we define ourselves; ibid.). Identity has attracted the attention of M&A researchers [START_REF] Clark | Transitional identity as a facilitator of organizational identity change during a merger[END_REF][START_REF] Drori | One out of many? Boundary negotiation and identity formation in postmerger integration[END_REF][START_REF] Maguire | Citibankers' at Citigroup: a study of the loss of institutional trust after a merger[END_REF]. Aldrich and Herker (1977) describe boundary spanning in organizations in terms of information processing: to selectively interpret, filter, summarize, and facilitate the transmission of information; in other words, "uncertainty absorption" (p. 219). More recently, [START_REF] Schotter | Boundary spanning in global organizations[END_REF] offers a multidimensional definition of boundary spanning, as "a set of communication and coordination activities performed by individuals within an organization and between organizations to integrate activities across multiple cultural, institutional and organizational contexts" (p. 404).
In bridging and mediating across one or multiple contexts, boundary spanners overcome the 'us and them' mentality, thus allowing trust to develop [START_REF] Schotter | Boundary spanning in global organizations[END_REF][START_REF] Yip | The nexus effect: when leaders span group boundaries[END_REF]. Boundary spanners can perform a specific function or responsibility such as public relations, union representation, acquiring/disposing resources and relationship management (Aldrich & Herker, 1977), or improvise their actions to get the job done [START_REF] Yagi | Boundary work: An interpretive ethnographic perspective on negotiating and leveraging cross-cultural identity[END_REF]. Boundary spanning contexts include expatriates (Au & Fukuda, 2002), bicultural managers [START_REF] Yagi | Boundary work: An interpretive ethnographic perspective on negotiating and leveraging cross-cultural identity[END_REF], and global vendor-client partnerships [START_REF] Søderberg | Boundary Spanners in Global Partnerships: A Case Study of an Indian Vendor's Collaboration with Western Clients[END_REF]. In M&As, [START_REF] Lander | Boarding the aircraft: Trust development amongst negotiators of a complex merger[END_REF] studied the Air France-KLM merger where chief negotiators acted as "boundary role persons" [START_REF] Currall | Measuring trust between organizational boundary role persons[END_REF], enabling interorganizational collaboration and trust. This contrasts with [START_REF] Drori | One out of many? Boundary negotiation and identity formation in postmerger integration[END_REF]'s analysis of the process of boundary creation/re-creation in shaping the post-merger identity of an organization, and allowing managers and personnel to maintain key aspects of their previous identities. Anecdotal evidence suggests that leaders engage in boundary spanning during post-acquisition integration, for example, in the Lenovo-IBM merger [START_REF] Stahl | Lenovo-IBM: Bridging Cultures, Languages, and Time Zones-Integration Challenges (B)[END_REF][START_REF] Yip | The nexus effect: when leaders span group boundaries[END_REF]. Outstanding boundary spanning leaders constantly engage in internally (team) and externally (organization) oriented initiatives for their teams [START_REF] Druskat | Managing from the boundary: The effective leadership of self-managing work teams[END_REF]. From studying the interactions between external leaders and their team members and managers in a large manufacturing firm, Druskat and Wheeler inductively categorized eleven boundary spanning behaviors into four functional clusters. Table 1 summarizes this typology.
------------------------------------------- INSERT TABLE 1 ABOUT HERE -------------------------------------------
RESEARCH METHODS AND DATA Empirical design and context This research applies a comparative case-study design and focuses on trust development in new subordinate-leader relationships arising from M&As in multicultural emerging economies. Case study research is particularly suited for in-depth probing into complex concepts such as trust and culture, which quantitative approaches cannot easily reveal [START_REF] Yin | Case study research: Design and methods[END_REF]. I compare a domestic and a cross-border acquisition from Malaysia's financial services industry. Malaysia is multiethnic, multifaith, and multilingual.
There are three main ethnic groups (Bumiputra, comprising Malay and other indigenous groups, 63%; Chinese 25%; and Indian 7%), and four main religions (Muslim 61%, Buddhist 20%, Christian 9%, Hindu 6%; Malaysian Department of Statistics, 2010). Generally the Malays are Muslims, the Chinese are Buddhists or Christians, and the Indians are Hindus, Muslims or Christians. Most Malaysians are bilingual in Bahasa Malaysia and English, the national and business languages, but many are trilingual (in Chinese or Indian languages) or quadrilingual [START_REF] Fontaine | Cross-cultural research in Malaysia[END_REF]. The financial services industry was selected to control for environmental variation and to provide a transparently observable phenomenon of interest [START_REF] Eisenhardt | Building theories from case study research[END_REF]. The Malaysian financial industry comprises banking intermediaries, insurance firms and capital market intermediaries (IMF, 2014). As Malaysia's seventh-largest sector, it accounts for 4.5% of nominal GDP (IHS, 2016) and has undergone substantial consolidation and rationalization subsequent to the 1997-1998 Asian financial crisis (IMF, 2014). In 2015, the financial industry ranked fourth in terms of emerging economies' M&A activity (Thomson Reuters, 2016) and accounted for 19% of global M&A volume (Bloomberg, 2016). Further, I have worked in the Malaysian financial services industry and am familiar with its characteristics. Case selection and description This paper compares a domestic M&A, "DoeMez", with a South African acquisition of a Malaysian firm, "Cross". Purposive sampling was applied during case selection and care was taken to match the cases on contextual appropriateness [START_REF] Poulis | The role of context in case study selection: An international business perspective[END_REF][START_REF] Yin | Case study research: Design and methods[END_REF]. Given South Africa's cultural diversity and history of apartheid, the two cases provide an intriguing comparison of cross-cultural trust between acquired-firm and acquirer personnel. In Malaysia, the Bumiputra have enjoyed special rights since 1971 including in employment, resulting in sensitive although peaceful ethnic relations [START_REF] Bhopal | Ethnicity as a management issue and resource: Examples from Malaysia[END_REF][START_REF] Haque | The role of the state in managing ethnic tensions in Malaysia: A critical discourse[END_REF] (IHS, 2016). South Africa introduced a comparable majority-favoring regime in 1998 to eliminate post-apartheid employment discrimination against the 91% black population [START_REF] Thomas | Employment equity in South Africa: Lessons from the global school[END_REF][START_REF] Lee | Affirmative Action in Malaysia and South Africa: Contrasting Structures, Continuing Pursuits[END_REF]. Bumiputra and blacks are increasingly represented in high-level occupations, mainly in the public sector [START_REF] Lee | Affirmative Action in Malaysia and South Africa: Contrasting Structures, Continuing Pursuits[END_REF]. Cross is a Malaysian firm acquired by "SAF", a South African multinational corporation with operations in approximately 40 emerging economies. DoeMez is a domestic acquisition of two sister firms (Mez1 and Mez2) which were then merged with two subsidiaries of the acquirer Doe to create DoeMez1 and DoeMez2.
The acquirer and its subsidiaries are majority controlled by GLCs (IMF, 2014) and employ 80% Malay personnel, whereas the acquired firms were founded by an ethnic Chinese entrepreneur, had a "Chinaman entrepreneurial" culture, and employed 80% ethnic Chinese personnel. Table 2 summarizes the main features of the cases.
------------------------------------------- INSERT TABLE 2 ABOUT HERE -------------------------------------------
Both acquisitions were friendly deals, had similar motives (business expansion and scaling up), involved experienced acquirers, and were completed approximately two years prior to data collection. To gain access to the firms, personalized invitation letters were sent to the CEOs of the acquired firms and their acquirers/parent firms, together with an offer to share my findings with them. The participating firms and informants were promised complete confidentiality and anonymity. Data collection The analysis was based mainly on 35 semi-structured interviews with TMT, senior and middle managers, and supplemented by 2 conference calls with dealmakers, participatory observations, and archival data. To capture new acquired-firm/acquirer subordinate-acquirer/acquired-firm leader relationships arising from the M&A, I sought individuals who had reported to different managers pre- and post-M&A. The interviewees were selected by their firms-23 from DoeMez and 12 from Cross-and originated from the acquired firm, acquirer, or were 'neutrals' recruited during the focal M&As, as detailed in Table 3. The interviews lasted an hour on average, transpired face-to-face except for one (by conference call), and were recorded with permission and transcribed. All communication was in English except for local expressions sprinkled in some interviews.
------------------------------------------- INSERT TABLE 3 ABOUT HERE -------------------------------------------
The interview questions covered: organizational conditions around the time of the acquisition including the pre-acquisition mood at the individual's firm and the impact of the acquisition on the participant; post-acquisition working relationships with his/her new direct manager and subordinates; and, interactions with the new TMT. An interview protocol was developed beforehand and refined through pilot interviews with 7 Malaysians who were either M&A legal advisors or professionals with M&A experience. Due to practical constraints, the interviews were conducted in a tandem arrangement: DoeMez in mid-November 2016, then Cross in late November/mid-December 2016. At DoeMez especially, 23 interviews in 5 consecutive days left little time for reflection. However, since the interviews were held at the firms' premises, I was able to engage in informal conversations and observe the work environment and personnel interactions. Further, I participated in one of Cross's day-long workshops in December 2016 when the acquirer's values and culture were introduced. The workshop was facilitated by an external consultant and attended by approximately 100 Cross personnel, its TMT, and 2 senior executives from the acquirer's HQ. Data analysis My analysis applied abduction, which is partly deductive (theory-driven) and partly inductive (data-driven) [START_REF] Timmermans | Theory construction in qualitative research: From grounded theory to abductive analysis[END_REF][START_REF] Welch | Theorising from case studies: Towards a pluralist future for international business research[END_REF].
Abduction facilitates the discovery of new variables and relationships by moving back-and-forth between framework, data sources and analysis [START_REF] Timmermans | Theory construction in qualitative research: From grounded theory to abductive analysis[END_REF]. Abduction has been used in studies on boundary spanning [START_REF] Søderberg | Boundary Spanners in Global Partnerships: A Case Study of an Indian Vendor's Collaboration with Western Clients[END_REF][START_REF] Yagi | Boundary work: An interpretive ethnographic perspective on negotiating and leveraging cross-cultural identity[END_REF], M&As [START_REF] Monin | Giving sense to and making sense of justice in postmerger integration[END_REF][START_REF] Reynolds | The international experience in domestic mergers-Are purely domestic M&A a myth[END_REF], and international business [START_REF] Barron | Exploring the performance of government affairs subsidiaries: A study of organisation design and the social capital of European government affairs managers at Toyota Motor Europe and Hyundai Motor Company in Brussels[END_REF][START_REF] Maitland | Managerial cognition and internationalization[END_REF]. Initially, I built individual case descriptions by triangulating the multiple sources of evidence [START_REF] Graebner | Caveat venditor: Trust asymmetries in acquisitions of entrepreneurial firms[END_REF]. Next, the interview transcripts were analyzed iteratively in groups: acquirer, acquired-firm and neutral managers; TMT and department heads vs. managers; and, subordinate-leader dyads where possible. For DoeMez's transcripts, within- and between-division analysis was also conducted since its integration involved three 'standards': one division followed Doe's practices, a second division followed Mez's practices, while the third division adopted a mix of both. Multiple examples of TMT and manager boundary spanning were identified. Finally, I categorized the examples according to the [START_REF] Druskat | Managing from the boundary: The effective leadership of self-managing work teams[END_REF] typology and identified the boundaries spanned, to reveal the intent or purpose behind each boundary spanning behavior and allow clearer linkage to trust development. FINDINGS The post-acquisition integration approaches of DoeMez and Cross are described first, followed by the boundary spanning actions of the acquirer and acquired-firm managers and my interpretation of how this behavior facilitated subordinate-leader trust development. Integration approaches Cross' integration by SAF corresponds with the light-touch integration approach [START_REF] Liu | Light-Touch Integration of Chinese Cross-Border M&A: The Influences of Culture and Absorptive Capacity[END_REF] of Asian acquirers [START_REF] Kale | Don't integrate your acquisitions, partner with them[END_REF][START_REF] Liu | Light-Touch Integration of Chinese Cross-Border M&A: The Influences of Culture and Absorptive Capacity[END_REF][START_REF] Marchand | Do All Emerging-Market Firms Partner with Their Acquisitions in Advanced Economies? A Comparative Study of 25 Emerging Multinationals' Acquisitions in France[END_REF]. In DoeMez's case, the integration approach resembles symbiosis [START_REF] Haspeslagh | Managing acquisitions: Creating value through corporate renewal[END_REF]. Table 4 compares their integration approaches.
------------------------------------------- INSERT TABLE 4 ABOUT HERE -------------------------------------------
Boundary spanning behaviors My analysis identified boundary spanning in the domestic and the cross-border acquisition cases, by TMT and managers of both the acquirers and acquired firms. The boundary spanning behaviors are described below according to the [START_REF] Druskat | Managing from the boundary: The effective leadership of self-managing work teams[END_REF] clusters. Feedback on the behaviors is provided where possible. Horizontal, vertical, stakeholder, demographic, and geographic boundaries [START_REF] Palus | Boundary-Spanning Leadership in an Interdependent World[END_REF] were spanned, representing national, intra-national, organizational and functional culture differences. Tables 5 and 6 show boundary spanning examples from Cross and DoeMez, the boundaries and cultural differences spanned, and trust attributes.
------------------------------------------- INSERT TABLES 5 AND 6 ABOUT HERE -------------------------------------------
Cluster 1: Relating Relating involves building relationships within the team and organization while showing social and political awareness. This first function of managerial boundary spanning includes a set of three behaviors: social and political awareness, building subordinate trust in leaders, and caring about subordinates. Cross managers' social and political awareness spanned all five boundaries and differences in national, organizational, functional and personal cultures (see Table 5). A senior manager enthused: "[SAF are] totally different from shareholders who are American or British. They're very respectful of local cultures and very keen on sharing their knowledge… They don't dictate that they know better than we do, which is a total contrast to where I come from." DoeMez managers' social and political awareness spanned all boundaries, organizational culture differences and especially intra-national culture differences. A Mez leader said, "I go the extra mile to make sure the [Doe] colleagues feel very welcome." Particular attention was paid to cater for halal food preferences at work, on business trips, annual dinners, and a company trip overseas. Chinese colleagues were reminded to eat non-halal food outside the workplace. Personnel were instructed to speak in English and the national language only during meetings, for everyone to understand. One senior manager commented, "I heard some departments had [a language-related] problem, [like] immediate separation between [Doe and Mez people who] tend to not mingle [and] seldom talk... There's quite a lot of staff in [X] Department who were from Chinese schools. Their main language… is Cantonese and Mandarin... Sometimes in meetings… it's an issue." Another manager, whose team spanned two locations, introduced a regular weekly conference call and persevered against Asian reticence to seek team members' negative feedback. Other examples are provided in Table 6. In developing subordinate trust in leaders, Cross leaders showed transparency and willingness to span all boundaries, and national, intra-national and organizational culture differences. Group TMT visited Cross' offices four times in the first eighteen months from the acquisition announcement. Personnel were informed of SAF's intentions and future plans, which dispelled a lot of uncertainty. Managers received regular CEO and CFO performance updates to disseminate in their departments/units. In his first townhall with Cross personnel, the CEO presented his short-term goal and personal interest in being assigned to Malaysia. The CEO's trust-building approach is detailed in Table 5. DoeMez's leaders developed subordinate trust across intra-national and organizational culture differences.
Like Cross, this was initiated from early post-acquisition integration and a personal approach was used sometimes (see Table 6). Cross and DoeMez managers demonstrated their care for subordinates on numerous occasions, with noteworthy examples in Tables 5 and 6. Other caring behavior at DoeMez included a department head covering for subordinates who had exceeded their transaction limits, and a senior manager's insistence that the whole team leave work by 6:30 pm. Subordinate-leader bonding at DoeMez and Cross transpired over coffee, drinks and personal chats. Two DoeMez senior managers also organized weekly/daily lunches and outings for within-team bonding. Cluster 2: Scouting Scouting involves seeking information within the organization to clarify the manager's understanding of team and organizational needs, and to solve problems. This second function of managerial boundary spanning includes a set of three behaviors: seeking information from managers, peers and specialists, diagnosing subordinate behavior, and investigating problems systematically. Seeking information helped to span national, intra-national and organizational culture differences at both Cross and DoeMez, as evidenced from the examples pertaining to incoming expatriates and the expatriate CEO's efforts to develop sensitivity to Malaysian cultural nuances (Table 5), and solving a religion-related problem at DoeMez (Table 6). Subordinate behavior diagnosis spanned mainly vertical boundaries, and organizational culture differences at Cross and intra-national culture differences at DoeMez (Tables 5 and 6). Cross and DoeMez managers investigated problems involving horizontal and vertical boundaries. At Cross, this was related to 'silos' where subordinates performed their functions with very little understanding of what happened in other areas of the business. Table 5 provides an example where intra-national and functional culture differences were involved. A DoeMez department manager recounted a similar problem with personnel who withheld information, while another manager mediated such a situation involving intra-national culture differences (Table 6). Cluster 3: Persuading Persuading involves managers seeking resources from within the organization to support their teams' needs, and influencing subordinates' priorities to support organizational objectives. Successful persuasion enables a manager to align his/her team's objectives with organizational objectives. The data reveals that DoeMez leaders influenced their subordinates more than Cross leaders. In seeking organizational-level resources for their teams, Cross managers spanned mainly stakeholder boundaries with the firm's two shareholders and Board of Directors. The CEO had monthly engagements and weekly video conferences with various people at HQ, while department managers had a dedicated support person at HQ (e.g., IT coordinator for the IT head, HR coordinator for the HR head). Cross managers were formally and informally linked with Group TMT, e.g., unit heads who also reported to the group's department heads, or the COO and his department manager who liaised regularly with the Group COO. The CEO also recognized the potential of Cross' minority shareholder and its deep local market knowledge (see Table 5). DoeMez managers spanned vertical and horizontal boundaries in seeking their leaders' intervention on critical occasions. Table 6 presents two examples including one with organizational and intra-national culture differences in the attitudes of certain subordinates.
In terms of influencing subordinates, Cross managers did so less than DoeMez managers, perhaps reflecting its light-touch integration approach (cf. DoeMez's symbiotic approach). One Cross senior manager said, "[The TMT] don't bulldoze things. They will [talk to everybody] and see people's acceptance. When a few fellows don't accept, they will try to convince the persons." The TMT encouraged subordinates to span organizational and national culture differences by debating and differing in opinion from their leaders. For example: "I am used to seriously being challenged at [HQ]… whereas here when I make statements, I must say 'You are allowed to disagree. Please disagree if you feel so, talk to me, tell me this is a wrong assumption or this is a wrong statement or whatever.'" The TMT concurred that their people were respectful and uncomfortable with discord, reflecting their Asian culture. Nevertheless, two expatriates noted that subordinates, from watching the TMT challenge one another, began doing so themselves. DoeMez managers influenced their subordinates at organizational-and team-levels through the organization's four core values which were enacted from the outset of the M&A. First, the core values shaped changes in culture and thought processes across the organization. As one dealmaker/leader explained, "We initiated [culture change] by the new brand. We launched an idea behind that brand, these disciplines, this other culture we want to emulate... We started building new thought processes." Second, the core values guided managers to focus/refocus their teams. For example, one leader reminded his team, "No, no, our core value is to collaborate. Guys, we need to collaborate…" and "We need to be entrepreneurial. That means we need to think how to solve the problem." The core values spanned organizational and intra-national culture differences as seen in the same manager's comment: "Without the core values, everybody will define their own… I think without that, this merger would be a nightmare. I honestly tell you that." Department heads influenced their subordinates to accept incoming members from the acquirer/acquired firm and fuse horizontal boundaries. One entire team was instructed to treat the newcomers well regardless of the circumstances and to refer back to the manager if any issue arose. Another manager instructed his unit heads similarly. Managers also emphasized the amount of time the two sides would spend together and the need for a pleasant work atmosphere. One subordinate remarked that human integration was "very well-driven… We're all no longer ex-Mez or ex-Doe people, we're all DoeMez." Cluster 4: Empowering Empowering involves delegating authority to subordinates and coaching to improve subordinates' capabilities. This final function of managerial boundary spanning includes a set of three behaviors that span horizontal, vertical and demographic boundaries: delegating authority, flexible on team decisions, and coaching subordinates. My data reveals that the most frequent empowering behavior was coaching subordinates, with some coaching actions implemented through the HR departments. Delegating was observed in DoeMez and Cross. One DoeMez department head spanned organizational and intra-national culture differences through delegation: "When I first took [the Doe people] in, they weren't comfortable doing a lot of things... They weren't given a lot of freedom in the old days… I found that a lot of them are quite capable... 
They did make a lot of mistakes and I do hold them accountable for it, but [no] penalties..." For one Cross expatriate, delegating authority overlaps with the 'flexible on team decisions' behavior: "If [you] have to change approaches, then we change if it makes sense... [The initial approach is] not necessarily right. It's only right if it makes sense to you also." The leader explained, "They'll say something works like this. Then I'll ask 'Why?'… 'Do you think it's right?'… I'm starting to ask questions [but] not telling them what to do." I did not observe any 'flexible on team decisions' behavior in DoeMez. Subordinate coaching, the most prominent behavior in this cluster, was implemented both through the HR departments and independently. First, HR mobilized and trained managers to be relays: Cross managers were coached to coach their subordinates and thus support them more effectively, while DoeMez senior managers were trained to counsel their teams on how to deal with a different organizational culture, new processes and new workflows. If each HR department had provided support to personnel on an organizational level, more resources would have been required (e.g., more HR personnel, outsourcing). One Cross expatriate introduced a 360-degree feedback system through the HR department, where an individual's leader, peers, and direct subordinates provide confidential and anonymous feedback on "This is what I'd like to see more of you, in…" Second, as a personal initiative, a Cross expatriate coached department managers who had been working in silos on the impact of this behavior on team outcomes, and on how to deliver as a team. At DoeMez, several managers gave indirect answers to subordinates' questions, for example, a department head: "I say 'I want to hear from you first. Come back to me, say, in 2-3 hours' time and see what you have.'… I put pressure to them to think through properly… Otherwise they will never learn. Always remember, don't give the man fish all the time. You must teach him how to fish." DISCUSSION The objective of this study was to explore how trust develops in new subordinate-leader relationships in the M&A nucleus, amidst the uncertainty and cultural differences of integration. Three bodies of literature are connected to understand the trust development process: M&A, boundary spanning, and trust. The paper's main contribution is to posit that boundary spanning mitigates the uncertainty and cultural differences arising from post-acquisition integration. Managerial boundary spanning overcame the 'us and them' mentality between acquirer/acquired-firm subordinates and acquired-firm/acquirer leaders, reduced uncertainty and cultural differences, and thus catalyzed subordinate-leader trust development. Focusing on a domestic and a cross-border acquisition of Malaysian firms, my systematic analysis reveals that both acquirer and acquired-firm managers engaged in boundary spanning behavior. The managers ranged from the TMT and integration managers [START_REF] Teerikangas | Integration managers' value-capturing roles and acquisition performance[END_REF] to middle managers in diverse functions. This indicates the widening locus of M&A leadership (Graebner, 2004; [START_REF] Junni | The role of leadership in mergers and acquisitions: A review of recent empirical studies[END_REF]) and suggests more widespread managerial influence for smoother and more successful integration.
Second, this research extends the ambit of managerial boundary spanning beyond the pre-M&A negotiation phase [START_REF] Lander | Boarding the aircraft: Trust development amongst negotiators of a complex merger[END_REF] to post-acquisition integration, and adds evidence that effective leaders engage in boundary spanning behavior [START_REF] Druskat | Managing from the boundary: The effective leadership of self-managing work teams[END_REF]. This paper also complements the process perspective of M&As [START_REF] Jemison | Corporate acquisitions: A process perspective[END_REF] and elucidates on the temporality of integrative actions. Birkinshaw et al.'s (2000) influential study argues for a two-phased implementation of effective integration. First, human integration is emphasized while the acquirer and acquired units achieve acceptable performance; task integration is then revisited so that the existing success of human integration facilitates task integration across units. Other researchers suggest closer interrelation between these elements of integration. [START_REF] Froese | Integration management of Western acquisitions in Japan[END_REF] find that human integration and organizational integration occur simultaneously, from the beginning of post-acquisition integration. [START_REF] Teerikangas | Structure first! Temporal dynamics of structural and cultural integration in cross-border acquisitions[END_REF] observe that cultural integration begins once structural integration is underway, as long as both are implemented in complementarity. In this study, DoeMez implemented extensive organizational integration and human integration rapidly, similar to the Renault-Nissan acquisition [START_REF] Froese | Integration management of Western acquisitions in Japan[END_REF]. In contrast, SAF selectively implemented Cross' organizational integration over time and to a lesser extent, with human integration from the third year post-acquisition. Both deals are considered successful, as are SAF's other similarly integrated South-South acquisitions. Third, the study's focus on South-South acquisitions extends M&A scholarship which is heavily focused on acquisitions to and from developed economies [START_REF] Lebedev | Mergers and acquisitions in and out of emerging economies[END_REF], and contributes to the literature on the complexities of multilingual societies in M&As [START_REF] Cuypers | The effects of linguistic distance and lingua franca proficiency on the stake taken by acquirers in cross-border acquisitions[END_REF][START_REF] Dow | The effects of within-country linguistic and religious diversity on foreign acquisitions[END_REF][START_REF] Kroon | Explaining employees' reactions towards a cross-border merger: The role of English language fluency[END_REF]. DoeMez exposes that domestic rather than cross-border M&As can be more challenging to integrate [START_REF] Véry | A cross-national assessment of acculturative stress in recent European mergers[END_REF] (1997), especially when the intra-national culture differences of a multicultural emerging economy are considered. The language diversity among Malaysian personnel was more problematic for DoeMez than for Cross, and there was no lingua franca for all DoeMez personnel. Malaysian personnel's proficiency in English varies significantly, particularly in the younger generation [START_REF] Darmi | English language in the Malaysian education system: Its existence and implications[END_REF][START_REF] Yin | Case study research: Design and methods[END_REF].
Some ethnic Chinese personnel were not sufficiently fluent in English or Bahasa, despite the possibility of using both languages in meetings. Malaysian Chinese can be differentiated by their schooling: those with Chinese-school education relate strongly to traditional Chinese values, while those with non-Chinese-school education identify weakly with such values [START_REF] Fontaine | Cross-cultural research in Malaysia[END_REF][START_REF] Ong | Chinese ethnicity: Its relationship to some selected aspects of consumer behaviour[END_REF]. This paradoxical finding draws attention to the intricacy of cultural challenges in M&As [START_REF] Teerikangas | The culture-performance relationship in M&A: From yes/no to how[END_REF], while extending our knowledge of the differences between integrating domestic and cross-border acquisitions [START_REF] Bris | The value of investor protection: Firm evidence from crossborder mergers[END_REF][START_REF] Gregory | Do cross border and domestic acquisitions differ? Evidence from the acquisition of UK targets[END_REF][START_REF] Reynolds | The international experience in domestic mergers-Are purely domestic M&A a myth[END_REF]. More broadly, my research adds to the literature on multilingual organizations including multinational corporations [START_REF] Bordia | Employees' willingness to adopt a foreign functional language in multilingual organizations: The role of linguistic identity[END_REF][START_REF] Brannen | The multifaceted role of language in international business: Unpacking the forms, functions and features of a critical challenge in MNC theory and performance[END_REF][START_REF] Feely | Language management in multinational companies[END_REF], and to the conversation that "context matters". The Malaysian context diversifies the geographic focus of emerging market studies away from China [START_REF] Jormanainen | International activities of emerging market firms[END_REF] and offers tentative insights to advance Western-developed theories in strategy and management [START_REF] Jormanainen | International activities of emerging market firms[END_REF][START_REF] Wright | Strategy research in emerging economies: Challenging the conventional wisdom -Introduction[END_REF][START_REF] Xu | Linking theory and context: 'Strategy research in emerging economies' after Wright et al[END_REF]. The findings support the argument that national cultures are more heterogeneous than homogeneous [START_REF] Shenkar | Cultural distance revisited: Towards a more rigorous conceptualization and measurement of cultural differences[END_REF][START_REF] Taras | Does country equate with culture? Beyond geography in the search for cultural boundaries[END_REF][START_REF] Tung | The cross-cultural research imperative: The need to balance crossnational and intra-national diversity[END_REF][START_REF] Tung | Beyond Hofstede and GLOBE: Improving the quality of cross-cultural research[END_REF]. Fourth, the paper contributes to trust scholarship by diversifying from its reliance (Fulmer & Gelfand, 2012) on WEIRD samples (Western, educated, industrialized, rich, and democratic; [START_REF] Henrich | Most people are not WEIRD[END_REF]). While Mayer et al.'s (1995) model has become one of the most well-known and influential trust models, it remains unclear whether each component of trustworthiness (ability, benevolence, integrity) is equally important in trust outcomes [START_REF] Burke | Trust in leadership: A multilevel review and integration[END_REF].
My analysis of the linkage between boundary spanning and subordinate-leader trust shows that benevolence and integrity are more prominent than ability in facilitating Malaysian subordinate-leader trust in M&As. This complements the findings of [START_REF] Poon | Effects of benevolence, integrity, and ability on trust-in-supervisor[END_REF] on the significance of benevolence for Malaysian personnel to trust their leaders. The precedence of affect-based trust over cognition-based trust is consistent with other emerging economies where personal relationships are important, e.g., China [START_REF] Chua | Guanxi vs networking: Distinctive configurations of affect-and cognition-based trust in the networks of Chinese vs American managers[END_REF][START_REF] Jiang | Effects of cultural ethnicity, firm size, and firm age on senior executives' trust in their overseas business partners: Evidence from China[END_REF], Singapore and Turkey [START_REF] Tan | Understanding interpersonal trust in a Confucian-influenced society: An exploratory study[END_REF][START_REF] Wasti | Cross-cultural measurement of supervisor trustworthiness: An assessment of measurement invariance across three cultures[END_REF]. The importance of building personal relationships was evident in DoeMez and Cross, where informal social activities facilitated bonding between subordinates, peers and leaders. Managerial implications This research shows that a wider leadership can influence and smooth M&A processes, outcomes and cultural differences. Regardless of their hierarchical positions, managers from both the acquirer and the acquired firm can contribute to reducing post-acquisition uncertainty among personnel, thereby facilitating subordinate-leader trust during integration. These managers should be empowered to play a more central role in human integration from early post-acquisition integration, for example through awareness-building workshops on where boundaries and cultural differences may lie, with illustrative boundary spanning behaviors. Moreover, since boundary spanning behaviors are often improvised by managers in doing their jobs [START_REF] Yagi | Boundary work: An interpretive ethnographic perspective on negotiating and leveraging cross-cultural identity[END_REF], integration and HR managers should develop ways to formalize and share such experiences with other managers in the organization so that larger-scale benefits can be drawn. This research highlights how language diversity can be an issue not only in cross-border acquisitions but also in domestic acquisitions in a multicultural society. Boundary spanning behaviors to address this divide include asking colleagues to translate or summarize the essential points, and cascading information downwards through department/unit managers so that subordinates have a more accessible point of reference for clarification. In the longer term, personnel should have training and opportunities to overcome their "language anxiety" [START_REF] Darmi | English language in the Malaysian education system: Its existence and implications[END_REF][START_REF] Yin | Case study research: Design and methods[END_REF]. Limitations and future research directions The study's findings should be interpreted with caution. Despite exposing the positive effects of managerial boundary spanning in post-acquisition integration, further research is required to ascertain the conditions under which boundary spanning influences successful integration.
First, the analysis compares two M&A case studies within a single industry in Malaysia. Malaysia falls under the South Asia cluster of cultural value orientations of [START_REF] Schwartz | A theory of cultural value orientations: Explication and applications[END_REF], which characterizes hierarchical and collective societies. Future studies should examine M&As in the same cluster (e.g., Indonesia, India) and in other multicultural emerging economies (e.g., Turkey, Nigeria, Chile), as well as vertical acquisitions and acquisitions of financially-troubled firms. Moreover, the integration approach adopted may have a bearing on the extent of boundary spanning. It would be useful to study not only other M&As applying symbiosis and light-touch integration like DoeMez and Cross, but also preservation and absorption [START_REF] Haspeslagh | Managing acquisitions: Creating value through corporate renewal[END_REF]. Second, the managerial boundary spanning behaviors may be related to the individual's job function, level of experience, seniority in the organization, and previous acquisition experience. Analyzing this requires a larger sample. Third, the study relied mostly on interview data to analyze trust development retrospectively. Despite my efforts to reduce social desirability bias, some interviewees may not have been entirely forthcoming since they were selected by their firms. Ethnographic research or a longitudinal study would be more robust. Looking beyond, it would be fruitful to enquire into how boundary spanning affects trust between peers during post-acquisition integration, and trust between the acquired firm and HQ personnel. The linkage between boundary spanning, interpersonal trust and identity formation is another interesting research area. I look forward to more active scholarly pursuit along this path. - INSERT TABLES 5 AND 6 ABOUT HERE - Cluster 1: Relating. Relating involves building relationships within the team and organization while showing social and political awareness. This first function of managerial boundary spanning includes a set of three behaviors: social and political awareness, building subordinate trust in leaders, and caring about subordinates.
TABLE 1: Typology of boundary spanning actions (adapted from Druskat & Wheeler, 2003). Columns: Function; Behavior; Examples.
Relating
- Social and political awareness: Building rapport by understanding the concerns and interests of specific groups, appreciating power relationships, politicking
- Build subordinate trust in leaders: Showing that the leader is reliable, fair and honest and focused on the team's best interests
- Care about subordinates: Caring actions towards a team or individual
Scouting
- Seek information from managers, peers and specialists at organizational level: Making the effort to go beyond the boundary spanner's knowledge or understanding by referring to someone in the organization
- Diagnose subordinate behavior: Analyzing a subordinate or team's verbal or nonverbal behavior to understand their needs
- Investigate problems systematically: Solving a problem by breaking it down, systematically tracing the cause of a problem
Persuading
- Obtain organizational support for their teams: Persuading external groups so that they assist or support the needs of the leader's teams
- Influence subordinates: Encouraging subordinates to make choices that support team or organizational goals
Empowering
- Delegate authority: Giving responsibility, control or authority to subordinates
- Flexible on team decisions: When the leader is open-minded about how subordinates fulfil a role, assignment, etc.
- Coach subordinates: Developing subordinates' knowledge and skills

TABLE 2: The research cases (Cross | DoeMez; the table body was not recovered in extraction).

TABLE 3: Dealmakers and interviewees, by position and firm-of-origin (Cross | DoeMez).
- Dealmakers: 2 | 3
  - Acquirer: 2 | 2 (a)
  - Acquired firm: - | 1
- Interviewees: 12 | 23
  - TMT: Acquirer 4 (b) | 2 (c); Acquired firm - | 3; Neutral - | 1
  - Department heads: Acquirer - | 3; Acquired firm 4 | 2; Neutral 3 | 1
  - Other managers: Acquirer - | 5; Acquired firm 1 | 4; Neutral - | 2
(a) Includes a TMT. (b) Non-Malaysians. (c) Includes a non-Malaysian.

TABLE 4: The integration approaches (Cross | DoeMez).
- Integration: Financial controls; HR, operations and processes were improved or cleaned up | Integration of business activities and operations, policies, procedures, controls and processes; 3 offices were converged into 2
- Corporate rebranding: Former shareholder's name was removed; Cross endorsed as "A member of the SAF Group" | Launched "DoeMez" (new name, logo, identity, image); new corporate HQ
- TMT: 4 expatriates from SAF | Acquirer, acquired firm and neutral TMT
- Cultural integration: Initiated during third year post-acquisition | None
- Integration approach; Speed: Light-touch integration; Slow | Symbiosis; Fast

TABLE 5: Cross boundary spanning examples (HOD = head of department; SM = senior manager). Columns: Behavior; Boundary; Cultural difference; Trust attribute; Selective quotes.
Relating behaviors
- Social and political awareness; Vertical, horizontal, stakeholder; National, organizational; Benevolence, integrity, ability: "[P]art of my job is… to cushion off all [issues and]… complement my staff… so everyone works in harmony [with the boss] or SAF… I'll do a lot of [consulting], very frequent advice to my staff that this is how [the key people] work." (HOD)
- Social and political awareness; Vertical, stakeholder, geographic; National, organizational; Ability, integrity, benevolence: "[The CFO] is trying to [develop more informal interaction with HQ] through conf calls, updates… We always have this difficulty to align [their] understanding of certain topics because their environment, perspective, legal and operating environments are different." (SM)
- Social and political awareness; Horizontal, vertical; Personal, functional; Benevolence, integrity, ability: "Introverts… tend to be more comfortable on email.
Very often [after a] meeting,… I'll email [a summary]… Extroverts are normally on their feet and more direct [but] you have IT people, [actuaries and accountants] who are normally introverts." (Expatriate)
- Build subordinate trust in leaders; Vertical, horizontal, demographic; Intra-national, national; Benevolence, integrity, ability: "You will build trust easier with your senior people than the rest of the company [because of more frequent face-to-face interaction]. The rest of the company will be my communications… On [festive occasions] I send a nice message… You must do those things repeatedly, then people will see 'He genuinely means he embraces diversity, he doesn't favor any specific groups.' When we have a Divali celebration, I dress like an Indian and those types of things. People like that. They start thinking 'Yeah, [he] can walk in my shoes, I can trust him.'" (CEO)
- Care about subordinates; Vertical; Personal; Benevolence, integrity: "People must come and tell me [when problems arise]… then try to fix it themselves… I concentrate on the fixing, not on the blame." (Expatriate) "I would draft a speech and say to my secretary-she's a Muslim-and my HR person [who] is Chinese, 'Here is my speech. Please look at it. Am I using the wrong words? Have I said the wrong thing?'..."

TABLE 6: DoeMez boundary spanning examples (HOD = head of department; SM = senior manager). Columns: Behavior; Boundary; Cultural difference; Trust attribute; Illustrative quotes. Extraction scrambled most row-to-quote pairings; the recoverable rows and quotes follow.
Relating behaviors
- Social and political awareness; Horizontal, vertical; Organizational, intra-national; Benevolence, integrity: "It's not for the acquiring company to reach out to us. Because I'm the acquirer, I felt it was important for me to reach out to them." (Leader)
- Social and political awareness; Vertical, stakeholder; Organizational, intra-national; Benevolence, integrity (quote not recovered)
- Build subordinate trust in leaders: "I take pains to explain… and remind [my team] what we're going through… [Doe people] have to learn [Mez's] system and now 'I have to lead you.'" (DoeMez HOD)
- Care about subordinates; Vertical, horizontal, demographic, stakeholder; Organizational, intra-national; Benevolence, integrity, ability: "I spent a lot of time trying to tell my acquiree employees that… certain aspects of [their] perception [of our corporate culture]… may not be the reality… Obviously we are a government-linked company [with] Malay mentality and… mindset… What can I dispel? … I can certainly dispel that the management in Doe does not have interference of any sort from the government shareholding that we have, that we are not a racial-biased entity, that we are predominantly merit-based." (Leader)
- Further recovered rows: Care about subordinates; Vertical, horizontal, demographic; Organizational, intra-national; Benevolence, integrity, ability; and Vertical; Personal; Benevolence, integrity (these precede the Scouting behaviors rows in the original table)
Additional quotes recovered from Table 6 (row assignments lost in extraction):
- "People no longer should have that notion of 'I used to manage client A, it's my client'. Today, client A is the firm's clients and is being serviced by a team of people from Doe and Mez... By doing that we avoided a fight, people trying to fight for accounts, for clients, because naturally people want the bigger accounts." (HOD)
- "[The big boss]… was really pushing for [the seller] to accept Doe's bid… because he knows that [if another bidder] were to acquire us, then most of the staff will be jobless." (DoeMez HOD)
- "…Muslim [senior management] colleagues together and we brainstormed [on the religion-related problem]… I said 'These are the issues that we have. How should we manage it?' … [The Head of Doe] said, 'Let me talk to [the Muslim staff].' … He helped ease the whole process." (Leader)
- "[Some of my people], they are very hierarchical 'I'm the boss, you guys take instructions'… [Other people] are waiting for instructions [or] order-takers. [Another] set of people [are] cookie-cutters,… only can do one thing... [and no more]." (HOD)
- "You have pockets of people who… hold [the] organization to ransom. They know but… don't volunteer the information. For example, 'do we need to do this?' They will say yes. But they never tell you the consequence… I said [to my team], 'You need to probe them further to make sure we don't [miss any details].'" (HOD)
- On two subordinates doing the same job and withholding information from each other: "The Chinese guy [from Mez] asked the Malay guy [from Doe] 'Have you done this...?' This Malay guy [felt] 'Why do I have to report to you?'... Then same thing happened [the other way around]." (Manager)
- "…[the Group COO] and give her heads-up how to fix the problem… She gave me a lot of air-cover." (HOD)
- "…[the] different [corporate] culture [and personal attitude] that I have a problem [with in some of my team]… I discussed it with [my boss]… I think he did talk to the team. Things are better now." (SM)
- "We initiated [culture change] by the new brand. We launched an idea behind that brand, these disciplines, this other culture we want to emulate... We started building new thought processes." (Leader)
- "You have me telling the team that 'you better treat [the Mez] people nicely; I don't care what they do to you, you just treat them nicely. Any [issues], you just come to me.'" (HOD)
- "When I first took [them] in, they weren't comfortable doing a lot of things... They weren't given a lot of freedom in the old days… I found that a lot of them are quite capable... They did make a lot of mistakes and I do hold them accountable for it, but [no] penalties..." (HOD)
- "I want to hear from you first. Come back to me, say, in 2-3 hours' time and see what you have.'… I put pressure to them to think through properly… Otherwise they will never learn. Always remember, don't give the man fish all the time. You must teach him how to fish." (HOD)
- "HR Department… arranged training on mergers and acquisitions, post-mergers, and things like that for the senior management team. Then we will go back to our own department and advise or counsel [our teams] on that different culture… the new flow, new process, new way of doing things." (SM)
ENDNOTES
i The terms "M&As" and "acquisitions" are used interchangeably in this paper, as in extant literature.
ii Malaysian GLCs are usually Malay-owned, controlled and managed, with substantial ethnic Malay managers and personnel (Lee, 2016). GLC shareholding in Malaysia's financial industry is high: four of eight banking groups have direct and indirect government shareholding of 40%-60% (IMF, 2014).
iii This paper refers to Malaysians of Chinese ancestry as ethnic Chinese/Malaysian Chinese to distinguish them from Chinese who are nationals of China; idem for Malaysians of Indian descent.
71,076
[ "1029831" ]
[ "108098" ]
00174446
en
[ "chim" ]
2024/03/05 22:32:07
2007
https://hal.science/hal-00174446/file/nedelec_revised.pdf
C Mansuy, Dr J M Nedelec, C Dujardin, R Mahiou
Concentration effect on the scintillation properties of sol-gel derived LuBO3 doped with Eu3+ and Tb3+
Keywords: 78.55.Hx, 81.20.Fw, 29.40.Mc, 73.61.Tm; scintillators, borate, sol-gel, medical imaging, X-ray conversion
Lu(1-x)EuxBO3 and Lu(1-x)TbxBO3 powders have been prepared by a sol-gel process with 0 < x < 0.15 for Eu3+ and 0 < x < 0.05 for Tb3+. The purity of the powders has been verified by X-ray diffraction and the results confirm that all the materials adopt the vaterite type, even though the calcination was performed at 800 °C. Furthermore, the solid solution for LuBO3 vaterite is observed up to x = 0.15 and x = 0.05 for europium and terbium ions, respectively, so doping with Eu3+ or Tb3+ ions does not affect the structure. These materials have also been analyzed by Fourier transform infrared spectroscopy. The morphology of the powders has been studied by scanning electron microscopy and shows a homogeneous morphology of small spherical particles with a narrow size distribution. Optical properties have then been studied to confirm the effective substitution of Eu3+ or Tb3+ for Lu3+ ions and to determine the scintillation performances of the materials. The optima, in terms of scintillation yield, are obtained for Eu3+ and Tb3+ concentrations of x = 0.05 in both cases. The afterglows have also been measured and confirm the potential of these materials as scintillators.
Introduction
Nowadays, research directed towards scintillating materials is in constant development. These materials, which convert high-energy radiation into UV-visible light, are used in various applications: medical imaging, high-energy physics, airport security and industrial control. The use of these scintillating materials in medical equipment (X-rays, γ-rays, positron emission, ...) requires improvement of their properties, in particular their conversion yield. Soft chemistry routes, and in particular the sol-gel process, offer an attractive alternative for the production of efficient scintillators with control of the optical properties. Rare-earth-activated lutetium orthoborates, which present a high density due to lutetium, appear to be good scintillators [1,[START_REF] Moses | Procceding of the International Conference on Inorganic Scintillators and Their Applications[END_REF]. We consequently decided to prepare LuBO3:Eu3+ and LuBO3:Tb3+ powders by an original sol-gel route. Indeed, the major advantage of this soft chemistry process is that it allows control of the morphology and texture of the materials and also the preparation of the material as thin films [START_REF] Schmidt | [END_REF][START_REF] Nedelec | [END_REF]5]. Moreover, the sol-gel route allows the elaboration of materials of different compositions, easily doped with different ions at various concentrations, and sol-gel derived materials are synthesized at lower temperatures than those elaborated by classical solid-state synthesis. The powders have been characterized using various techniques. The purity and the morphology have been respectively analysed by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The scintillation properties of LuBO3:Eu3+ and LuBO3:Tb3+ powders have also been studied.
Experimental section
Powders preparation
LuBO3:Eu3+ and LuBO3:Tb3+ powders have been prepared by a sol-gel process described elsewhere [6].
In a first step, lutetium and Ln (Ln = Eu or Tb) chlorides are dissolved in the required amounts in isopropanol for 2 hours. The rare earth chlorides and solvents used were anhydrous, and all the experiments were carried out under an inert argon atmosphere to prevent any influence of air moisture. Potassium isopropoxide was prepared by reacting metallic potassium (Aldrich) with anhydrous 2-propanol (Acros). After dissolution of the chlorides, the solution of potassium isopropoxide is added; the Lu/Ln chloride mixture then reacts with the potassium alcoholate, which substitutes for chloride, leading to the formation of the rare earth alkoxides and the immediate precipitation of KCl according to:
LnCl3 + 3 K+ + 3 iOPr- → Ln(OiPr)3 + 3 KCl↓
This mixture was then refluxed for 2 h at 85 °C in order to complete the formation of the rare earth alkoxides. Secondly, an alcoholic solution of boron tri-isopropoxide is added. A homogeneous solution is obtained after a 4 h reflux at 85 °C. After cooling down to room temperature, centrifugation is performed in order to separate KCl from the sol. Hydrolysis of the sol with distilled water yields a gel, which is dried at 80 °C to obtain a white xerogel. This xerogel is then fired at 800 °C for 18 h in order to obtain Lu(1-x)LnxBO3 (Ln = Eu or Tb) crystalline powders. Samples with 0.005 < x < 0.15 and 0.002 < x < 0.05 have been prepared for LuBO3:Eu3+ and LuBO3:Tb3+, respectively.
Characterization
All powders were checked by X-ray diffraction on a Siemens D501 diffractometer working in the Bragg-Brentano configuration with Cu-Kα radiation (λ = 1.5406 Å). Infrared spectra were recorded on a Perkin Elmer 2000 FTIR spectrometer using the KBr pellet technique. Thermogravimetric analysis was performed using a Mettler Toledo 851 apparatus. Samples were heated in air at a rate of 1 °C min-1. Micrographs were recorded using a Cambridge StereoScan 360 SEM operating at 20 kV. Samples were prepared by depositing a small quantity of powder on an adhesive carbon film before coating the surface with gold. The excitation spectra of all the powders, doped with Eu3+ or Tb3+, were recorded at room temperature using a xenon lamp as a continuous excitation source and a Triax 320 monochromator coupled with a CCD detector. The scintillation spectra were recorded with a Jobin-Yvon Triax 320 monochromator coupled with a CCD camera after excitation of the samples with a tungsten X-ray tube working at 35 kV and 15 mA. The signal was collected near the sample with an optical fiber. For relative conversion yield estimation, the samples were placed in a quartz tube with a fixed position throughout the measurements. Commercial polycrystalline Gd2O2S:Tb3+ powder supplied by Riedel de Haën was used as a standard for the scintillation yield measurements. The setup was kept constant between the measurements (excitation and detection), simply changing the sample tube. Equivalent masses of samples were used for the measurements and all samples were milled in similar conditions in order to keep the granulometry as constant as possible. Reproducibility was tested on the Gadox sample, yielding values within a 5% deviation range. The afterglow measurements were performed at room temperature on the samples corresponding to the concentration optima. The excitation was performed during 10 s with an X-ray source working at 40 kV and 35 mA. Gd2O2S:Tb3+ powder was used as a reference.
Results and discussion
Characterisation
X-Ray diffraction
Figure 1 presents the X-ray diffraction patterns recorded for LuBO3 powders heated at 800 °C for 18 h and doped with Eu3+ (Fig. 1a) or Tb3+ (Fig. 1b) ions. Orthoborate LuBO3 presents two crystalline types depending on the thermal treatment.
As mentioned in [7], the vaterite form is obtained at 800 °C, instead of the calcite form obtained when the material is prepared by solid-state reaction. However, all the recorded diffractograms are identical and show exclusively the vaterite form of LuBO3, with no evidence for LuBO3 calcite, EuBO3 or TbBO3 phases. Refinement of the powder X-ray diffraction patterns allows deriving the cell parameters for the vaterite phase of LuBO3. The plot of the cell volume as a function of Eu3+ content, shown in Figure 2, clearly exhibits a linear evolution, as expected from Vegard's law for a solid solution. Rare earth ions (Eu3+, Tb3+) thus substitute for the Lu3+ cation in the LuBO3 vaterite structure, and the solid solution is observed up to 15% and 5% for europium and terbium ions, respectively. The solid solution limits have not been determined and higher concentrations might be possible while keeping the vaterite structure and a monophasic material.
FTIR Spectroscopy
Fourier transform infrared spectroscopy has been carried out on the different samples doped with Eu3+ or Tb3+ ions. All recorded spectra, displayed in Figure 3, are similar: no significant change is observed upon doping with Eu3+ (Fig. 3a) or Tb3+ (Fig. 3b) ions. All the bands observed in the range 500-1200 cm-1 correspond to B-O group vibration modes and their positions agree well with published results [START_REF] Nedelec | [END_REF]8].
Thermal analysis
The evolution of LuBO3 from the amorphous to the crystalline form has been studied by thermal analysis, which gives information on the crystallisation of the material. A thermogravimetric analysis has been carried out on LuBO3 powder elaborated by the sol-gel process and the resulting thermogram is presented in Figure 4. The first derivative is also shown in order to clearly identify the temperatures associated with the different weight losses. A first weight loss is observed around 100 °C and can be attributed to the elimination of adsorbed species such as alcohol or water molecules. A second significant weight loss, observed at about 150 °C, corresponds to the condensation of the material: the alkoxy and hydroxy groups condense, with subsequent alcohol or water elimination. Some residual organic compounds can also be directly pyrolyzed. At this temperature, the inorganic skeleton is formed. A last weight loss is observed at 700 °C and corresponds to the crystallisation temperature. From this temperature, crystalline growth occurs and no further weight loss is observed. This last loss is characteristic of the reorganization of the mineral network and of the final elimination of residual OH groups. The total weight loss is approximately 17%. The thermal behavior appears very consistent with former results concerning sol-gel derived oxides.
Scanning Electron Microscopy
To complete the study, scanning electron microscopy has been performed on vaterite LuBO3 powders synthesized by the sol-gel process and treated at 800 °C for 18 h. The micrograph recorded at 30 000 x magnification, given in Figure 5, indicates that the LuBO3 powders are homogeneous and constituted of small spherical particles of about 200 nm. The size distribution of these particles is uniform, which is a usual consequence of using the sol-gel process.
Excitation and emission spectra
Excitation spectra
The excitation spectra of LuBO3:Eu3+ and LuBO3:Tb3+ vaterite powders were recorded for different concentrations of Eu3+ (0.05 < x < 0.15) and Tb3+ (0.002 < x < 0.05).
Figure 6 shows the excitation spectra recorded at room temperature by fixing the emission wavelength at 591 nm and 541 nm, respectively, for the optima (Lu0.95Eu0.05BO3 and Lu0.95Tb0.05BO3). The excitation spectrum recorded for the LuBO3:Eu3+ powder consists of lines corresponding to 4f-4f transitions. The line observed at about 470 nm is attributed to the 7F0 → 5D2 transition and the ones situated in the range 300-430 nm correspond to the 7F0 → 5F2, 5HJ, 5D4, 5GJ, 5L8, 5L6, 5D3 transitions. The excitation band located below 250 nm is assigned to the charge-transfer absorption [9]. The excitation lines observed for the LuBO3:Tb3+ powder in the range 300-500 nm are characteristic of 4f-4f transitions. They correspond to the 7F6 → 5H6, 5H7, 5L8, 5L9, 5D2, 5G5, 5L10, 5G6, 5D3 and 5D4 transitions [10].
Emission spectra
The emission spectra recorded at room temperature under X-ray excitation for Eu3+ and Tb3+ doped LuBO3 vaterite, with different concentrations of Eu3+ and Tb3+, are presented in Figure 7. The Gadox (Gd2O2S:Tb) emission spectrum has also been recorded in order to calculate the scintillation yields of the Eu- and Tb-doped materials. In the case of LuBO3:Eu3+ (Fig. 7a), the spectrum consists of lines corresponding to the 5D0 → 7FJ (J = 0-4) transitions of Eu3+ ions. The spectral distribution of the Eu3+ doped materials results in a global orange-red emission. The LuBO3:Tb3+ emission spectrum (Fig. 7b) exhibits, in the range 475-650 nm, several lines characteristic of the 5D4 → 7FJ (J = 3-6) transitions of Tb3+ ions. The 5D4 → 7F5 transition is the most intense and confers to the materials an overall green emission.
Scintillation yields
Scintillation yields have been calculated for all the powders by comparing the integrated areas of the emission spectra of the sample and of Gadox. The scintillation yield of Gadox under γ-ray excitation is 78 000 photons/MeV [11]. The yields of our materials were calculated from reference values obtained under γ-ray excitation; since our measurements were performed under X-ray excitation, the yields quoted for γ-ray excitation might be under-estimated. Yields have been calculated for Eu3+ and Tb3+ ion concentrations of 0.005 < x < 0.15 and 0.002 < x < 0.05, respectively. The scintillation yields for all the samples and their evolution as a function of the doping ion concentration are presented in Figure 8. For Eu3+ doped LuBO3, the optimum is obtained for a Eu concentration of 5%, with a scintillation yield of about 8923 photons/MeV. This scintillation yield is 11% of that of Gadox, which is a good value. In the case of Tb3+ doped LuBO3 powders, LuBO3:Tb3+ (5%) presents the highest scintillation yield, equal to about 4398 photons/MeV.
Afterglow
Precise knowledge of the afterglow is required for practical applications. Afterglow measurements were recorded at room temperature using an X-ray source operated at 40 kV with an intensity of 25 mA. The material was excited for 10 s and the signal was collected using a photomultiplier. The afterglow behaviours of the Eu3+ and Tb3+ doped LuBO3 powders are presented in Figure 9. The afterglow of Gadox (Gd2O2S:Tb3+) was also measured as a reference. The afterglow values, measured 1 s after X-ray turn-off, are 1%, 0.2% and 0.007% for Lu0.95Eu0.05BO3, Lu0.95Tb0.05BO3 and Gadox, respectively.
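As an aside, the relative-yield estimate described above reduces to a ratio of integrated emission spectra scaled by the Gadox reference value. The short Python sketch below illustrates the arithmetic; the two Gaussian "spectra" are synthetic placeholders, not the measured data, while the 78 000 photons/MeV reference is the value quoted from [11].

import numpy as np

def integrated_intensity(wavelength_nm, counts):
    """Integrate a background-subtracted emission spectrum over wavelength."""
    return np.trapz(counts, wavelength_nm)

# Placeholder spectra: single Gaussian-like emission bands (not measured data)
wl = np.linspace(450, 750, 601)
sample = 900 * np.exp(-0.5 * ((wl - 591) / 5) ** 2)   # e.g. an Eu3+ 5D0->7F1 line
gadox = 8000 * np.exp(-0.5 * ((wl - 545) / 5) ** 2)   # Gd2O2S:Tb reference band

GADOX_YIELD = 78_000  # photons/MeV under gamma excitation [11]
relative = integrated_intensity(wl, sample) / integrated_intensity(wl, gadox)
print(f"relative yield = {relative:.1%} of Gadox "
      f"=> ~{relative * GADOX_YIELD:.0f} photons/MeV")

With identical sample masses, granulometry and geometry (as in the protocol above), the area ratio is the relative conversion yield.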
Conclusion
The sol-gel process has been proven to be a good technique for the preparation of scintillating materials doped with different rare earth ions, both as powders and as thin films. The advantages of this method compared to traditional syntheses are the lower treatment temperature, the good crystallinity and purity of the samples, the control of the morphology and a homogeneous particle size distribution. The scintillation properties of Eu3+ and Tb3+ doped LuBO3 powders were studied for different concentrations of doping ion and good scintillation yields were obtained. Sol-gel derived LuBO3:Eu3+ and LuBO3:Tb3+ appear to be promising scintillators.
Figure captions (captions for Figures 2, 4, 5 and 6 were not recovered)
Figure 1: X-ray diffraction patterns recorded for (a) Lu0.85Eu0.15BO3 and (b) Lu0.95Tb0.05BO3, with the corresponding ASTM reference patterns (dotted lines).
Figure 3: FTIR spectra of (a) LuBO3:Eu3+ and (b) LuBO3:Tb3+ powders of vaterite form heated at 800 °C.
Figure 7: Emission spectra recorded, at room temperature, under X-ray excitation on (a) LuBO3:Eu3+ and (b) LuBO3:Tb3+ of vaterite form, with different concentrations of Eu3+ and Tb3+ ions.
Figure 8: Relative scintillation yields of (a) LuBO3:Eu3+ and (b) LuBO3:Tb3+ powders.
Figure 9: Afterglow measurement on Lu0.95Eu0.05BO3, Lu0.95Tb0.05BO3 and Gadox under X-ray excitation.
Acknowledgements
The authors would like to thank the French FRT for financial support under project LuminiX (RNTS-01B262).
16,271
[ "1219020", "6039", "174838", "738659" ]
[ "13918", "13918", "174", "13918" ]
01744529
en
[ "sdu", "sde" ]
2024/03/05 22:32:07
2018
https://insu.hal.science/insu-01744529/file/1-s2.0-S1352231018301857-main-1.pdf
Dandan Li, Likun Xue (email: xuelikun@sdu.edu.cn), Wen Liang, Xinfeng Wang, Tianshu Chen, Abdelwahid Mellouki (email: mellouki@cnrs-orleans.fr), Jianmin Chen, Wenxing Wang
Characteristics and sources of nitrous acid in an urban atmosphere of northern China: Results from 1-yr continuous observations
Keywords: Nitrous acid, Seasonal variation, Heterogeneous conversion, Atmospheric oxidizing capacity, North China Plain
Nitrous acid (HONO) is a key reservoir of the hydroxyl radical (OH) and plays a central role in atmospheric chemistry. To understand the sources and impact of HONO in the polluted atmosphere of northern China, continuous measurements of HONO and related parameters were conducted from September 2015 to August 2016 at an urban site in Ji'nan, the capital city of Shandong province. HONO showed well-defined seasonal and diurnal variation patterns with clear wintertime and nighttime concentration peaks. Elevated HONO concentrations (e.g., over 5 ppbv) were frequently observed, with a maximum value of 8.36 ppbv. The HONO/NOX ratios of direct vehicle emissions varied in the range of 0.29%-0.87%, with a mean value of 0.53%. An average NO2-to-HONO nighttime conversion frequency (khet) of 0.0068±0.0045 h-1 was derived from 107 HONO formation cases. A detailed HONO budget analysis suggests an unexplained daytime missing source of 2.95 ppb h-1 in summer, which is about seven times larger than the source from the homogeneous reaction of NO with OH. The effect of HONO on OH production was also quantified: HONO photolysis was the predominant source of local OH radicals throughout the daytime. This study provides a year-round picture of HONO and its sources in a polluted urban atmosphere of northern China.
Introduction
Nitrous acid (HONO) is a key precursor of the hydroxyl radical (OH), one of the main tropospheric oxidants in the gas phase. Numerous field and modeling studies have shown that HONO photolysis contributes significantly to the OH sources, not only in the early morning but also during the rest of the daytime (Acker et al., 2006b;[START_REF] Kleffmann | Daytime formation of nitrous acid: A major source of OH radicals in a forest[END_REF][START_REF] Xue | Oxidative capacity and radical chemistry in the polluted atmosphere of Hong Kong and Pearl River Delta region: analysis of a severe photochemical smog episode[END_REF]). This is mainly ascribed to the unexpectedly high concentrations of HONO observed during daytime, which would otherwise be kept at lower levels by its rapid photolysis (R1). Therefore, knowledge of the characteristics and sources of HONO is critical for a better understanding of tropospheric oxidation chemistry.
HONO + hν → OH + NO (320 nm < λ < 400 nm) (R1)
So far, field observations of HONO have been carried out in remote, rural and urban areas. The reported ambient concentrations range from several pptv up to 15 ppbv (e.g., [START_REF] Beine | Surprisingly small HONO emissions from snow surfaces at Browning Pass, Antarctica[END_REF][START_REF] Elshorbany | Oxidation capacity of the city air of Santiago, Chile[END_REF][START_REF] Zhou | Snowpack photochemical production of HONO: A major source of OH in the Arctic boundary layer in springtime[END_REF]). However, the potential sources that could explain the observed elevated daytime HONO are still a matter of debate.
The well-accepted HONO sources include direct emissions from vehicle exhaust [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF] and the homogeneous gas-phase reaction of NO with OH (R2) [START_REF] Pagsberg | Kinetics of the gas phase reaction OH + NO(+M) →HONO(+M) and the determination of the UV absorption cross sections of HONO[END_REF].
NO + OH + M → HONO + M (R2)
Heterogeneous reactions of NO2 occurring on wet surfaces (R3) have also been proposed as an important source of HONO according to both laboratory studies and field observations (e.g., [START_REF] Finlayson-Pitts | The heterogeneous hydrolysis of NO 2 in laboratory systems and in outdoor and indoor atmospheres: An integrated mechanism[END_REF]). Nonetheless, the source strength of reaction (R3) has not been accurately quantified; it depends on the NO2 concentration, the surface area density and the water content [START_REF] Finlayson-Pitts | The heterogeneous hydrolysis of NO 2 in laboratory systems and in outdoor and indoor atmospheres: An integrated mechanism[END_REF]. These reactions can occur on various types of surfaces, including ground, buildings, vegetation, and aerosol surfaces [START_REF] Liu | Evidence of aerosols as a media for rapid daytime HONO production over China[END_REF][START_REF] Vandenboer | Understanding the role of the ground surface in HONO vertical structure: High resolution vertical profiles during NACHTT-11[END_REF]. Up to now, the contribution of ground surfaces to the overall production of HONO is still under discussion and subject to intensive research activity [START_REF] Wong | Vertical profiles of nitrous acid in the nocturnal urban atmosphere of Houston, TX[END_REF][START_REF] Zhang | Potential Sources of Nitrous Acid (HONO) and Their Impacts on Ozone: A WRF-Chem study in a Polluted Subtropical Region[END_REF].
2 NO2 + H2O → HONO + HNO3(ads) (R3)
In addition, the heterogeneous reduction of NO2 on soot particles, mineral dust, and surfaces containing organic substrates has also been proposed as a source of HONO (R4) [START_REF] Ammann | Heterogeneous production of nitrous acid on soot in polluted air masses[END_REF]2005;[START_REF] Ma | SO 2 initiates the efficient conversion of NO 2 to HONO on MgO surface[END_REF], and these processes can be further photo-enhanced during the daytime [START_REF] George | Photoenhanced uptake of gaseous NO 2 on solid organic compounds: a photochemical source of HONO?[END_REF][START_REF] Monge | Light changes the atmospheric reactivity of soot[END_REF][START_REF] Ndour | Photoenhanced uptake of NO 2 on mineral dust: Laboratory experiments and model simulations[END_REF][START_REF] Stemmler | Photosensitized reduction of nitrogen dioxide on humic acid as a source of nitrous acid[END_REF]. Although the heterogeneous NO2 conversion on soot surfaces has a high potential to produce HONO, its reactivity decreases rapidly with aging, so it is usually regarded as less important for ambient HONO formation [START_REF] Han | Heterogeneous photochemical aging of soot by NO 2 under simulated sunlight[END_REF].
NO2 + HCred → HONO + HCox (R4)
Besides, some other HONO sources have also been proposed, including soil emissions [START_REF] Su | Soil nitrite as a source of atmospheric HONO and OH radicals[END_REF], photolysis of adsorbed nitric acid (HNO3) and nitrate (NO3-) at UV wavelengths around 300 nm [START_REF] Zhou | Snowpack photochemical production of HONO: A major source of OH in the Arctic boundary layer in springtime[END_REF], and the homogeneous reaction of NO2 with HO2·H2O [START_REF] Li | Missing gas-phase source of HONO inferred from Zeppelin measurements in the troposphere[END_REF]. Despite the above-mentioned significant progress, the 'missing' daytime source(s) of atmospheric HONO is still under exploration. In comparison with the sources, the sink pathways of HONO are relatively well established. The chemical losses of HONO include photolysis (R1) and the reaction with OH radicals (R5). Moreover, HONO can also be removed through dry deposition on ground surfaces. Budget analysis of HONO sources and sinks has proved to be a robust method to examine the unknown sources and quantify their strength [START_REF] Sörgel | Quantification of the unknown HONO daytime source and its relation to NO 2[END_REF]Su et al., 2008b).
HONO + OH → H2O + NO2 (R5)
A number of field studies have been conducted to measure ambient HONO in the polluted urban and rural atmospheres of China during the last decade. High concentration levels and strong potential missing source(s) of HONO have been reported in some metropolises (e.g., Beijing, Shanghai and Guangzhou) and surrounding regions (e.g., [START_REF] Bernard | Measurements of nitrous acid (HONO) in urban area of Shanghai, China[END_REF][START_REF] Qin | An observational study of the HONO-NO 2 coupling at an urban site in Guangzhou City, South China[END_REF][START_REF] Tong | Exploring the nitrous acid (HONO) formation mechanism in winter Beijing: direct emissions and heterogeneous production in urban and suburban areas[END_REF]). However, most of these studies were based on short-term intensive observations. While long-period measurements are necessary to support a holistic investigation of the characteristics and sources of HONO, they are very scarce [START_REF] Hendrick | Four years of ground-based MAX-DOAS observations of HONO and NO 2 in the Beijing area[END_REF]. In the present study, we carried out 1-yr continuous observations of HONO and related parameters at an urban site of Ji'nan city, which is located almost in the center of the North China Plain (NCP), the most polluted region of China, with a dense population and intensive industry. The large amount of observational data and HONO formation cases provided an opportunity for a thorough examination of the temporal variations, sources and impacts of HONO in this polluted urban atmosphere of northern China. In the following sections, we first show the seasonal and diurnal variations of HONO and related species. Then, several sources of HONO are explored, including vehicle emissions, nighttime heterogeneous formation and potential unknown daytime sources. We finally evaluate the impact of HONO photolysis on the primary OH sources and hence on the atmospheric oxidizing capacity.
Experimental
Site description
The measurements were conducted from September 1st 2015 to August 31st 2016 at an urban site of Ji'nan, the capital city of Shandong Province, with approximately 7 million inhabitants. A detailed description of the study site can be found elsewhere [START_REF] Wang | HONO and its potential source particulate nitrite at an urban site in North China during the cold season[END_REF].
Measurement techniques
HONO was measured by a commercial LOPAP instrument (long path absorption photometer, QUMA GmbH, Germany). The LOPAP is a wet-chemistry-based real-time measurement device, with which HONO is sampled in an external sampling unit as a stable diazonium salt and is subsequently detected photometrically, after conversion into an azo dye, in a long-path absorption tube of 2.4 m Teflon AF. The LOPAP is conceived as a two-channel system to correct for potential interferences: in channel 1, HONO as well as possible interfering gases are determined, while in channel 2 only the interfering gases are quantified. The difference between the two channels yields the HONO concentration. A detailed description of the LOPAP instrument is given by [START_REF] Heland | A new instrument to measure gaseous nitrous acid (HONO) in the atmosphere[END_REF]. In the present study, the sampling gas flow and the peristaltic pump velocity were set to 1 L min-1 and 20 r min-1 during the whole measurement period. With these settings, the HONO collection efficiency was kept above 99.99%. Zero-air calibration with ultrapure nitrogen (purity of 99.999%) was performed automatically for 30 min at a time interval of 12 h 30 min. In addition, the instrument was calibrated manually twice per 8-day experimental cycle using a nitrite (NO2-) standard solution of known concentration. The detection limit of our measurements was 3 pptv at a time resolution of 30 s, with an accuracy of 10% and a precision of 1%. We note that although the LOPAP instrument may collect data in 30 s (or 1 min) intervals, the physical time resolution of the instrument is longer, ca. 3 min. The other trace gases and meteorological parameters were measured with the techniques described in [START_REF] Xue | Source of surface ozone and reactive nitrogen speciation at Mount Waliguan in western China: new insights from the 2006 summer study[END_REF].
Results and discussion
Data overview
Figure 2 shows an overview of the measured HONO, NO, NO2, PM2.5, PM1.0, JHONO, and meteorological parameters in the present study. During the 1-yr measurement period, the prevailing winds were from the east and southwest sectors, indicating the general influence of industrial emissions on the study site (see Fig. 1). The air temperature ranged from -15 °C to 39 °C with a mean value (±standard deviation) of 16±11 °C, and the relative humidity showed a clear seasonal variation pattern with higher levels in winter and summer. Markedly poor air quality was observed, as expected. Throughout the 1-yr period, 137 haze episodes occurred with daily mass concentrations of PM2.5 exceeding the National Ambient Air Quality Standard (Class II: 75 µg m-3), including 9 severely polluted haze episodes with daily average PM2.5 concentrations above 250 µg m-3. In addition, elevated levels of NOX, i.e., up to 350 ppbv of NO and 108 ppbv of NO2, were also frequently recorded, possibly as a result of intensive vehicle emissions near the study site. Overall, these observations highlight the nature of our measurement station as a typical polluted urban environment in North China.
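As a side note, the exceedance statistics quoted above amount to a simple daily aggregation. A minimal Python sketch of the counting logic follows; the synthetic PM2.5 series and random seed are placeholders, while the 75 and 250 µg m-3 thresholds are the values cited in the text.

import numpy as np
import pandas as pd

# Synthetic daily PM2.5 record standing in for the 1-yr observations
days = pd.date_range("2015-09-01", "2016-08-31", freq="D")
rng = np.random.default_rng(42)
pm25 = pd.Series(rng.gamma(shape=2.0, scale=45.0, size=len(days)), index=days)

# NAAQS Class II daily limit (75 ug m-3) and the severe-pollution threshold
haze_days = (pm25 > 75).sum()      # days above the Class II standard
severe_days = (pm25 > 250).sum()   # severely polluted days
print(f"haze days: {haze_days}, severely polluted days: {severe_days}")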
Table 1 documents the measured levels of HONO and NOx and the HONO/NOx ratios, together with a comparison with results obtained previously elsewhere. The measured HONO mixing ratios in Ji'nan ranged from 17 pptv to 8.36 ppbv with a mean (±SD) value of 1.15±1.07 ppbv. Elevated HONO concentrations were frequently observed during the measurement period, with daily maximum values exceeding 2 ppbv and 4 ppbv on 156 and 50 days, respectively (see Fig. 2). The maximum hourly value of 7.39 ppbv was recorded in the early morning of 6 December 2015. Such high levels of ambient HONO indicate intense sources of HONO and a potentially strong atmospheric oxidizing capacity in urban Ji'nan. The nighttime average (18:00-06:00 LT) HONO concentration was 1.28±1.16 ppbv, compared to a daytime average (06:00-18:00 LT) of 0.99±0.95 ppbv. In particular, the mean HONO mixing ratio around noontime (11:00-13:00 LT) was as high as 0.76±0.61 ppbv, which is among the highest levels ever recorded in urban atmospheres, and about 27% of the noontime HONO data were above 1.00 ppbv during the measurement period. This implies the existence of strong daytime sources of HONO in the atmosphere of Ji'nan, which will be further discussed in Section 3.4. The seasonal variations of ambient HONO and related parameters are depicted in Fig. 3. The highest concentrations of HONO occurred in winter (i.e., December-January), followed by spring (i.e., April-May), summer (especially August) and autumn, with seasonal mean (±SD) values of 1.71±1.62, 1.16±0.90, 1.12±0.93 and 0.78±0.60 ppbv, respectively. Overall, the seasonal variation of HONO coincided with that of NO2, an important precursor of HONO. This seasonal pattern of HONO is different from those measured in Hong Kong [START_REF] Xu | Nitrous acid (HONO) in a polluted subtropical atmosphere: Seasonal variability, direct vehicle emissions and heterogeneous production at ground surface[END_REF] and Beijing [START_REF] Hendrick | Four years of ground-based MAX-DOAS observations of HONO and NO 2 in the Beijing area[END_REF], where the highest levels were found in autumn. The wintertime peak of ambient HONO in Ji'nan should result from the lower boundary layer height, the weaker photolysis, and the enhanced heterogeneous production of HONO given the more abundant NO2. The relatively higher springtime HONO mixing ratios might be related, to some degree, to more intense heterogeneous reactions of NO2 on the surface of mineral particles [START_REF] Nie | Asian dust storm observed at a rural mountain site in southern China: chemical evolution and heterogeneous photochemistry[END_REF], as indicated by the coincident higher concentrations of NO2 and PM2.5 (note that PM10 was not measured in the present study). Indeed, the air quality of Ji'nan in the spring of 2016 was characterized by high levels of mineral dust. The diurnal profiles of HONO and related supporting parameters are shown in Figure 4. Overall, the diurnal variations of HONO in different seasons were similar: concentrations dropped rapidly after sunrise, reached a minimum at around 15:00 LT, and then increased and peaked during the morning rush hours (an exception is the winter case, which showed a concentration peak at midnight).
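For readers who want to reproduce this kind of analysis, the seasonal mean diurnal profiles of Fig. 4 reduce to a group-by over season and hour of day. A minimal sketch is given below; the synthetic hourly series is a placeholder for the real HONO and NO2 records.

import numpy as np
import pandas as pd

# Placeholder hourly series standing in for the measured HONO and NO2
idx = pd.date_range("2015-09-01", "2016-08-31 23:00", freq="h")
rng = np.random.default_rng(0)
df = pd.DataFrame({"HONO": rng.gamma(2.0, 0.6, len(idx)),   # ppbv
                   "NO2": rng.gamma(3.0, 8.0, len(idx))},   # ppbv
                  index=idx)
df["HONO_NO2"] = df["HONO"] / df["NO2"]

# Map each timestamp to a meteorological season
season_of = {12: "winter", 1: "winter", 2: "winter",
             3: "spring", 4: "spring", 5: "spring",
             6: "summer", 7: "summer", 8: "summer",
             9: "autumn", 10: "autumn", 11: "autumn"}
season = df.index.month.map(season_of)

# Mean diurnal cycle per season (cf. Fig. 4 and the HONO/NO2 panels)
diurnal = df.groupby([season, df.index.hour])[["HONO", "HONO_NO2"]].mean()
print(diurnal.loc["winter"].head())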
The diurnal variation trend of HONO was similar to that of NO, owing to a variety of chemical and physical processes, and the similar nighttime profiles suggest that vehicle emissions may have a significant effect on the measured HONO levels. A similar nighttime pattern was also found at Tung Chung, Hong Kong [START_REF] Xu | Nitrous acid (HONO) in a polluted subtropical atmosphere: Seasonal variability, direct vehicle emissions and heterogeneous production at ground surface[END_REF], at a roadside site in Houston, U.S. [START_REF] Rappenglück | Radical precursors and related species from traffic as observed and modeled at an urban highway junction[END_REF] and in a tunnel in Wuppertal, Germany [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF]. The average diurnal profiles of the HONO/NO2 ratio are also shown in Fig. 4f. The HONO/NO2 ratio generally decreased after sunrise, due to the increase of HONO photolysis, and then increased during the nighttime. An interesting finding was the second peak of HONO/NO2 at around noontime in the spring, summer and winter seasons. If the HONO sources during nighttime were the same as those at daytime, the minimum HONO/NO2 ratios should be found at noon due to the strong photolysis of HONO. Thus, the higher ratios at noontime indicate the existence of additional daytime sources of HONO. Moreover, the HONO/NO2 ratios increased with solar radiation (e.g., JHONO), implying that the additional sources may be related to the solar radiation intensity. We will further discuss the potential daytime sources of HONO in Section 3.4.
Contribution of vehicle emissions
As our study site is close to several major roads with heavy traffic, it is necessary to evaluate the contribution of vehicle emissions to the measured HONO concentrations. The HONO/NOX ratio is usually used to derive the emission factor of HONO in freshly emitted plumes [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF]. In order to ensure that only fresh air masses were considered, strict selection criteria were applied to the plume data. Table 2 summarizes the estimated HONO/NOx emission factors for the 12 vehicular emission plumes. The average ∆NO/∆NOX ratio of the selected plumes was 94%, indicating that the air masses were indeed freshly emitted. The correlation coefficients (r2) of HONO with NOx varied case by case and were in the range of 0.58-0.96, which may be due to the inevitable mixing of vehicle plumes with other air masses and/or heterogeneous conversion of NO2 on soot particles and the ground surface. The derived ∆HONO/∆NOX ratios varied in the range of 0.19%-0.87%, with an average value (±SD) of 0.53%±0.20%. This is comparable to the emission factors obtained in Santiago, Chile (0.8%; [START_REF] Elshorbany | Oxidation capacity of the city air of Santiago, Chile[END_REF]) and Wuppertal, Germany (0.3-0.8%; [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF]), but is substantially lower than those derived in Guangzhou (with a minimum value of 1.4%; [START_REF] Qin | An observational study of the HONO-NO 2 coupling at an urban site in Guangzhou City, South China[END_REF]) and Houston, U.S. (1.7%; Rappengluck et al., 2013). The emission factors should depend on the types of vehicle engines, fuels and catalytic converters [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF]. The variance in the HONO/NOx ratios derived from different metropolitan areas highlights the necessity of examining the vehicular emission factors of HONO in the target city in future studies. In the present study, the average HONO/NOx value of 0.53% was adopted as the emission factor in urban Ji'nan and was used to estimate the contribution of traffic emissions via Eq. (E1), with the rough assumption that the observed NOx at our site was mainly emitted from vehicles.
[HONO]emis = [NOx] × 0.0053 (E1)
where [HONO]emis is the HONO concentration arising from direct vehicle emissions. The calculated [HONO]emis levels contributed on average 12%, 15%, 18% and 21% of the measured nighttime HONO concentrations in urban Ji'nan in spring, summer, autumn and winter, respectively.
Heterogeneous conversion of NO2
Nighttime cases of heterogeneous HONO formation were screened using several criteria, including the requirement that the meteorological conditions, especially surface winds, be stable. Figure 5 presents an example of a heterogeneous HONO formation case occurring on 6-7 September 2015. In this case, the HONO mixing ratio increased rapidly after sunset, from 0.08 ppbv to 0.78 ppbv. Since the HONO concentration and the HONO/NO2 ratio increased almost linearly throughout the night, the slope fitted by least-squares linear regression of the HONO/NO2 ratio against time can be taken as the NO2-to-HONO conversion frequency (khet; also referred to as CHONO in other studies). During the 1-yr period, a total of 107 cases were finally selected. Such a large set of cases facilitated a more robust statistical analysis of the heterogeneous formation of HONO. As our study site is close to traffic roads, it is necessary to subtract the contribution from direct vehicle emissions. The emission ratio of HONO/NOX derived in Section 3.2 was
The variance in the HONO/NOx ratios derived from different metropolitan areas highlights the necessity of examining the vehicular emission factors of HONO in the target city in the future studies. In the present study, the average HONO/NOx value of 0.53% was adopted as the emission factor in urban Ji'nan, and was used to estimate the contributions of traffic 1) with a rough assumption that the observed NOx at our site was mainly emitted from vehicles. HONO = NO × 0.0053 (E1) Where, HONO emis is the HONO concentration arising from the direct vehicle emissions. The calculated HONO emis levels contributed on average 12%, 15%, 18% and 21% of the whole measured nighttime HONO concentrations in urban Ji'nan in spring, summer, autumn and winter, respectively. (3) the meteorological conditions, especially surface winds, should be stable. Figure 5 presents an example of the heterogeneous HONO formation case occurring on 6-7 September, 2015. In this case, the HONO mixing ratios increased rapidly after sunset from 0.08 ppbv to 0.78 ppbv. Since the HONO concentrations and HONO/NO 2 almost increased linearly throughout the night, the slope fitted by the least linear regression for HONO/NO 2 ratios against time can be taken as the conversion frequency of NO 2 -to-HONO (k het ; also referred to as C HONO in other studies). During the 1-yr period, a total of 107 cases were finally selected. Such a large set of cases facilitated a more robust statistical analysis of the heterogeneous formation of HONO. Heterogeneous conversion of NO As our study site is close to the traffic roads, it is necessary to subtract the contribution from direct vehicle emissions. The emission ratio of HONO/NO X derived in Section 3.2 was M A N U S C R I P T A C C E P T E D ACCEPTED MANUSCRIPT then used to adjust the HONO concentrations by Eq. ( 2). The NO 2 -to-HONO conversion frequency can be computed by Eq. ( 3), by assuming that the increase of HONO/NO 2 ratio was caused by the heterogeneous conversion (Su et al., 2008a;[START_REF] Xu | Nitrous acid (HONO) in a polluted subtropical atmosphere: Seasonal variability, direct vehicle emissions and heterogeneous production at ground surface[END_REF]. HONO = HONO -NO × 0.0053 (E2) k = [ ! "#$$ ](& ' ) [! ' ](& ' ) ( [ ! "#$$ ](& ' ) [! ' ](& ) ) ( ' ( ) ) (E3) The k het values derived from the 107 cases showed a large variability, from 0.0013 h -1 to 0.0194 h -1 , with a mean value of 0.0068±0.0045 h -1 . These results are well within the range of k het obtained previously from other urban areas. For example, the k het in Ji'nan is comparable to that derived at an urban site of Shanghai (0.007 h -1 ; [START_REF] Wang | Long-term observation of atmospheric nitrous acid (HONO) and its implication to local NO 2 levels in Shanghai, China[END_REF], and less than those in Guangzhou (0.016 h -1 ; [START_REF] Qin | An observational study of the HONO-NO 2 coupling at an urban site in Guangzhou City, South China[END_REF], Milan (0.012 h -1 ; [START_REF] Alicke | Impact of nitrous acid photolysis on the total hydroxyl radical budget during the Limitation of Oxidant Production/Pianura Padana Produzione di Ozono study in Milan[END_REF] and Kathmandu (0.014 h -1 ; [START_REF] Yu | Observations of high rates of NO 2 -HONO conversion in the nocturnal atmospheric boundary layer in Kathmandu, Nepal[END_REF]. Figure 6 provides the seasonal variation of the NO 2 -to-HONO conversion rate in Ji'nan. Clearly, the largest average k het was found in winter with a value of 0.0073±0.0044 h -1 . 
The winter maximum should be ascribed to the higher surface-to-volume (S/V) density within the shallower wintertime boundary layer. Moreover, only a weak correlation (R = 0.07) between khet and the aerosol surface density was found, which suggests that the efficient heterogeneous formation of HONO may be independent of the aerosol surface. The uptake coefficient of NO2 on various surfaces to yield HONO (γNO2→HONO) is a key parameter, with large uncertainty, in air quality models simulating HONO and OH radicals [START_REF] Zhang | Potential Sources of Nitrous Acid (HONO) and Their Impacts on Ozone: A WRF-Chem study in a Polluted Subtropical Region[END_REF]. The overall γNO2→HONO on the bulk surface of ground and particles can be estimated from Eq. (E4):
γNO2→HONO = 4 khet / [ cNO2 × (S/Va + S/Vg) ] (E4)
where cNO2 is the mean molecular velocity of NO2 (370 m s-1), and S/Va and S/Vg are the surface-area-to-volume ratios (m-1) for aerosol and ground, respectively. Considering the land use of the study site, the ground was treated as an uneven surface, and a factor of 2.2 per unit ground surface measured by [START_REF] Voogt | Complete urban surface temperatures[END_REF] was adopted to calculate the total active surface. Hence, S/Vg can be calculated by Eq. (E5):
S/Vg = 2.2 / H (E5)
where H is the mixing layer height, obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF, ERA-Interim; http://apps.ecmwf.int/datasets/data/interim-full-daily/levtype=sfc/). The calculated uptake coefficients for the 107 cases varied over a wide range, from 6.1×10-8 to 1.7×10-5, whilst the majority (5%-95% percentiles) fell within a narrower range of 1.1×10-7 to 4.5×10-6. The mean γNO2→HONO value was (1.4±2.4)×10-6. Laboratory studies have reported γNO2→HONO in the range of 10-6 to 10-5 on ground surfaces [START_REF] Kurtenbach | Investigations of emissions and heterogeneous formation of HONO in a road traffic tunnel[END_REF][START_REF] Vandenboer | Understanding the role of the ground surface in HONO vertical structure: High resolution vertical profiles during NACHTT-11[END_REF] and of 10-7 to 10-5 on aerosol surfaces [START_REF] Ndour | Photoenhanced uptake of NO 2 on mineral dust: Laboratory experiments and model simulations[END_REF][START_REF] Wong | Vertical profiles of nitrous acid in the nocturnal urban atmosphere of Houston, TX[END_REF]. Obviously, uptake coefficients differing by orders of magnitude would lead to very different assessments of the importance of heterogeneous HONO sources [START_REF] Li | Impacts of HONO sources on the photochemistry in Mexico City during the MCMA-2006/MILAGO Campaign[END_REF]. The average uptake coefficient obtained from such a large set of samples could serve as a reference for modeling studies simulating ambient HONO and atmospheric oxidation processes in the urban atmospheres of North China. Furthermore, the total area of the ground surface is much larger than the reactive surface provided by aerosols, suggesting that the heterogeneous reactions of NO2 on the ground surface may play a dominant role. It should be noted that the exact uptake coefficients of NO2 on ground and aerosol surfaces are variable and should differ; the present analysis simplifies this process by treating the ground and aerosol surfaces identically. Our derived uptake coefficients can therefore be regarded as an equivalent γNO2 on the bulk surface of ground and particles.
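As a numerical illustration of the reconstructed Eqs. (E4)-(E5): with the campaign-mean khet, an assumed aerosol surface density and an assumed nocturnal mixing height, the equivalent uptake coefficient lands near the reported mean of 1.4×10-6. The S/V and H values below are placeholders, not the campaign data.

C_NO2 = 370.0  # mean molecular velocity of NO2, m s-1 (from the text)

def uptake_coefficient(k_het_per_h, sv_aerosol_per_m, mixing_height_m):
    """Equivalent gamma(NO2->HONO) on the bulk ground + aerosol surface,
    per the reconstructed Eqs. (E4)-(E5)."""
    sv_ground = 2.2 / mixing_height_m   # (E5), uneven-surface factor 2.2
    k_het = k_het_per_h / 3600.0        # convert h-1 to s-1
    return 4.0 * k_het / (C_NO2 * (sv_aerosol_per_m + sv_ground))  # (E4)

gamma = uptake_coefficient(k_het_per_h=0.0068,    # campaign mean
                           sv_aerosol_per_m=1e-3,   # placeholder aerosol S/V
                           mixing_height_m=150.0)   # placeholder nocturnal H
print(f"gamma(NO2->HONO) ~ {gamma:.1e}")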
Daytime HONO budget analysis

In this section, we examine the potential unknown source(s) of daytime HONO by a detailed budget analysis. Equation (E6), reconstructed here from the terms used below, summarizes the main factors governing the ambient HONO concentration:

d[HONO]/dt = P_emis + P_OH+NO + P_unknown - L_phot - L_OH+HONO - L_dep + T_transport   (E6)

where the P terms denote production from vehicle emissions, the homogeneous OH + NO reaction and any unknown source, the L terms denote losses via photolysis, reaction with OH and dry deposition, and T_transport is the transport contribution. The transport term can be estimated following the dilution parameterization of [START_REF] Dillon | Chemical evolution of the Sacramento urban plume: Transport and oxidation[END_REF], T_V = k_dilution([HONO] - [HONO]_background). Taking a mean noontime [HONO] level of 1 ppbv, a value of 6x10-5 ppb s-1 was derived [START_REF] Sörgel | Quantification of the unknown HONO daytime source and its relation to NO 2[END_REF], which is much smaller than L_phot (1x10-3 ppb s-1); transport was therefore neglected. In addition, the average daytime wind speed in Ji'nan was 1.7 m s-1, so that for a HONO lifetime of about 15 min at noontime, any horizontal transport had to originate within 1.6 km of the site, a radius containing no large-scale pollution sources.

The noontime data (11:00-14:00 LT), with the strongest solar radiation, were chosen to calculate the unknown HONO source strength based on Eq. (E7). Here d[HONO]/dt was approximated by dHONO/dt, the difference of the measured HONO concentrations every 10 minutes [START_REF] Sörgel | Quantification of the unknown HONO daytime source and its relation to NO 2[END_REF].

P_unknown = L_phot + L_OH+HONO + L_dep + d[HONO]/dt - P_OH+NO - P_emis
          = [HONO] x J_HONO + k_OH+HONO[OH][HONO] + (v_dep/H) x [HONO] + d[HONO]/dt - k_OH+NO[OH][NO] - (d[NOx] x 0.53%)/dt   (E7)

where J_HONO and J_NO2 are the photolysis frequencies of HONO and NO2 (s-1), respectively. Direct measurements of J_HONO and J_NO2 were made in this study, except for the period from December 2015 to May 2016. For the reaction of HONO with OH, a rate constant k_OH+HONO of 6.0x10-12 cm3 molecule-1 s-1 was taken from [START_REF] Atkinson | Evaluated kinetic and photochemical data for atmospheric chemistry: Volume I -gas phase reactions of Ox, HOx, NOx and SOx species[END_REF]. The OH mixing ratios were expressed in terms of the NO2 concentrations and the photolysis frequencies of O3 and NO2, as shown in Eq. (E8) [START_REF] Alicke | Impact of nitrous acid photolysis on the total hydroxyl radical budget during the Limitation of Oxidant Production/Pianura Padana Produzione di Ozono study in Milan[END_REF]:

[OH] = a x (J_O1D)^alpha x (J_NO2)^beta x (b[NO2] + 1) / (c[NO2]^2 + d[NO2] + 1)   (E8)

with alpha = 0.83, beta = 0.19, a = 4.1x10^9, b = 140, c = 0.41 and d = 1.7. In the present study, the calculated daily peak OH concentrations were in the range of 0.3-2x10^7 molecules cm-3, comparable to those measured in the polluted atmospheres of northern China [START_REF] Lu | Missing OH source in a suburban environment near Beijing: observed and modelled OH and HO 2 concentrations in summer 2006[END_REF]. Nonetheless, it should be noted that the OH calculation from such an empirical equation may be subject to some uncertainty. L_dep was calculated by assuming a daytime HONO dry deposition velocity of 2 cm s-1 and an effective mixing height of 200 m; owing to the rapid photolysis of HONO at daytime, most HONO cannot reach heights above 200 m [START_REF] Alicke | Impact of nitrous acid photolysis on the total hydroxyl radical budget during the Limitation of Oxidant Production/Pianura Padana Produzione di Ozono study in Milan[END_REF]. k_OH+NO is the rate constant for the reaction of OH with NO, taken as 9.8x10-12 cm3 molecule-1 s-1 from [START_REF] Atkinson | Evaluated kinetic and photochemical data for atmospheric chemistry: Volume I -gas phase reactions of Ox, HOx, NOx and SOx species[END_REF]. The emission source strength was estimated from the HONO/NOx emission ratio of 0.53% determined in Sec. 3.2. Figure 7 shows the average contributions of all the source and sink terms to the HONO budget in August 2016, when accurate J value observations were available and elevated daytime HONO was observed.
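A minimal Python sketch of this budget calculation (Eqs. E7-E8) follows; the function names are ours, and the example inputs are illustrative placeholders, not the campaign data.

K_OH_HONO = 6.0e-12   # cm3 molecule-1 s-1 (Atkinson et al.)
K_OH_NO   = 9.8e-12   # cm3 molecule-1 s-1
V_DEP     = 0.02      # HONO dry deposition velocity, m s^-1
H_MIX     = 200.0     # effective mixing height, m
ER        = 0.0053    # vehicle HONO/NOx emission ratio

def oh_empirical(j_o1d, j_no2, no2_ppb):
    # E8 with alpha=0.83, beta=0.19, a=4.1e9, b=140, c=0.41, d=1.7
    a, b, c, d, alpha, beta = 4.1e9, 140.0, 0.41, 1.7, 0.83, 0.19
    return (a * j_o1d**alpha * j_no2**beta *
            (b * no2_ppb + 1.0) / (c * no2_ppb**2 + d * no2_ppb + 1.0))

def p_unknown(hono, no, no2, j_hono, j_o1d, j_no2, dhono_dt, dnox_dt):
    """Mixing ratios in ppb, J values in s^-1, time derivatives in ppb s^-1 (E7)."""
    oh = oh_empirical(j_o1d, j_no2, no2)       # molecule cm^-3
    l_phot = hono * j_hono                     # photolysis loss
    l_oh   = K_OH_HONO * oh * hono             # OH + HONO loss
    l_dep  = (V_DEP / H_MIX) * hono            # dry-deposition loss
    p_nooh = K_OH_NO * oh * no                 # OH + NO production
    p_emis = dnox_dt * ER                      # direct emission
    return l_phot + l_oh + l_dep + dhono_dt - p_nooh - p_emis

# Illustrative noontime values (assumed, not measured):
print(p_unknown(hono=1.0, no=2.0, no2=15.0, j_hono=1.5e-3,
                j_o1d=2.0e-5, j_no2=8.0e-3, dhono_dt=0.0, dnox_dt=0.0) * 3600,
      "ppb/h")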
An unknown source, P_unknown, was clearly the dominant term, accounting for over 80% of the HONO production. An average P_unknown value of 2.95 ppb h-1 was derived, which is more than 7 times the homogeneous formation rate (P_OH+NO, 0.40 ppb h-1). The major loss pathway of HONO was photolysis, with a mean L_phot value of 2.80 ppb h-1, followed by dry deposition (L_dep, 0.49 ppb h-1); L_OH+HONO was very small, almost always less than 3% of L_phot. The unknown daytime HONO source strength in Ji'nan is higher than those derived in Santiago, Chile (1.69 ppb h-1; [START_REF] Elshorbany | Oxidation capacity of the city air of Santiago, Chile[END_REF]), Beijing (1.83 ppb h-1; [START_REF] Hou | Comparison of atmospheric nitrous acid during severe haze and clean periods in Beijing, China[END_REF]) and Houston, US (0.61 ppb h-1; Wong et al., 2012). Some studies have reported much lower P_unknown values at a rural site in Guangzhou, China (0.76 ppb h-1; [START_REF] Li | Exploring the atmospheric chemistry of nitrous acid (HONO) at a rural site in Southern China[END_REF]), a mountain site at Hohenpeissenberg, Germany (0.40 ppb h-1; Acker et al., 2006), and a forest site in Julich, Germany (0.50 ppb h-1; [START_REF] Kleffmann | Daytime formation of nitrous acid: A major source of OH radicals in a forest[END_REF]).

We further explored the potential unknown daytime source(s) of HONO based on our measurement data. According to laboratory studies, heterogeneous reactions of NO2 on wet surfaces should be an important contributor to the ambient HONO concentrations, and the reaction rate is first order in NO2. It has also been proposed that these heterogeneous reactions can be photo-enhanced [START_REF] Stemmler | Photosensitized reduction of nitrogen dioxide on humic acid as a source of nitrous acid[END_REF]. Thus, if the heterogeneous reactions were the major HONO sources, the strength of the unknown HONO source can be expressed by Equation (E9):

P_unknown proportional to J_NO2 x [NO2] x (S/V)   (E9)

Correlation analysis between P_unknown and related parameters has been widely adopted to diagnose the potential HONO sources (e.g., Su et al., 2008b). The NO2 concentration ([NO2]) is usually used as an indicator of the heterogeneous reactions on the ground surface, since the ground surface-to-volume ratio (S/V_g) can be assumed constant for the well-mixed boundary layer at noontime, whilst [NO2] x [S/V_a] can be taken as a proxy for HONO formation on the aerosol surface. J_NO2 x [NO2] and J_NO2 x [NO2] x [S/V_a] can be used to infer the photo-enhanced heterogeneous reactions on the ground and aerosol surfaces, respectively. Figure 8 shows the scatter plots of the calculated P_unknown versus these four indicators for the summer case (i.e., August 2016), when accurate J value measurements were available. P_unknown showed a moderate correlation with [NO2], with a correlation coefficient (R) of 0.55, which improved markedly when J_NO2 x [NO2] was considered (R = 0.76). When the aerosol surface density was taken into account, however, the correlations became weaker, with R values of 0.40 and 0.43. This suggests that the photo-enhanced heterogeneous reaction of NO2 on the ground surface played a major role in the daytime HONO formation in Ji'nan in summer.
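A minimal sketch of this diagnostic, computing the Pearson correlation of P_unknown with the four proxies, might look as follows; the synthetic arrays below merely stand in for the observed noontime series.

import numpy as np

def proxy_correlations(p_unknown, no2, j_no2, sv_aerosol):
    """Pearson R between P_unknown and the four heterogeneous-source proxies."""
    proxies = {
        "NO2":              no2,
        "NO2*(S/V)a":       no2 * sv_aerosol,
        "J_NO2*NO2":        j_no2 * no2,
        "J_NO2*NO2*(S/V)a": j_no2 * no2 * sv_aerosol,
    }
    return {name: np.corrcoef(p_unknown, x)[0, 1] for name, x in proxies.items()}

# Placeholder noontime series (replace with the August 2016 observations):
rng = np.random.default_rng(0)
no2 = rng.uniform(5, 30, 50)           # ppb
j_no2 = rng.uniform(4e-3, 9e-3, 50)    # s^-1
sv_a = rng.uniform(5e-4, 2e-3, 50)     # m^-1
p_unk = 0.2 * j_no2 * no2 + rng.normal(0, 2e-3, 50)
print(proxy_correlations(p_unk, no2, j_no2, sv_a))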
Impact on the primary OH sources

Photolysis of HONO presents an important primary source of OH in the atmosphere. We compared the OH production rate from HONO photolysis with that from O3 photolysis (O1D + H2O), another important OH source, based on the concurrent observations of HONO, O3, J_HONO and J_O1D in summer. The other primary OH sources, such as photolysis of peroxides and ozonolysis of alkenes, are generally not very important in urban areas, especially at daytime, and were not considered in the present study; nor were the primary sources of HO2 and RO2 radicals (such as photolysis of OVOCs) considered, owing to the lack of measurement data for these radical precursors. The OH production rate from O3 photolysis can be calculated by Equation (E12) (Su et al., 2008b):

P_OH(O3) = 2 x J_O1D x [O3] / (1 + k_M[M]/k_H2O[H2O])   (E12)

where k_M and k_H2O are the rate constants for the reactions of O(1D) with M and with H2O, respectively. Figure 9 shows the daytime profiles of the OH production rates from photolysis of HONO and of O3 in the summer period (i.e., August 2016), when accurate J value measurements were available. Clearly, photolysis of HONO dominated the daytime OH production in urban Ji'nan. The mean P_OH(HONO) was 1.88 ppb h-1, almost 3 times higher than P_OH(O3). Furthermore, in contrast to most earlier studies, which suggested that the contribution of HONO photolysis was mainly concentrated in the early morning and negligible at noon, photolysis of HONO was the dominant OH contributor throughout the daytime at our study site; even at noontime in summer, high contributions of HONO photolysis were still found. These results demonstrate the significant role of HONO in the atmospheric oxidizing capacity of the urban atmosphere of Ji'nan.

Summary

Highly time-resolved continuous field observations of HONO, related air pollutants and meteorological parameters were performed at an urban site of Ji'nan in North China for a year, from September 2015 to August 2016. The measured mean concentration of HONO was 1.15 ppbv, with a maximum of 8.36 ppbv. The ambient HONO concentrations presented a seasonal variation, with the highest level in winter as well as elevated concentrations in spring (April-May) and summer (August). Well-defined diurnal cycles of HONO, with concentration peaks in the early morning and valleys in the afternoon, were found in all four seasons. Direct emissions from vehicle exhaust made a large contribution to the ambient HONO, with an average emission ratio dHONO/dNOx of 0.53%. During the nighttime, the heterogeneous conversion of NO2 on the ground surface is an important source of HONO; the average NO2-to-HONO conversion frequency derived from over a hundred cases was 0.0068 h-1. At daytime, a missing HONO source with an average strength of 2.95 ppb h-1 was derived in summer, about seven times larger than the gas-phase production. Our results suggest that the photo-enhanced heterogeneous reaction of NO2 on the ground surface may be a major source of daytime HONO in summer. Photolysis of HONO presents the predominant OH contributor not only in the early morning but also throughout the daytime in urban Ji'nan, and hence plays a vital role in the atmospheric oxidation and ozone formation in the polluted urban atmosphere of northern China.

Highlights. One-year continuous measurements of HONO were made at a typical urban site in northern China. Seasonal and diurnal variations, vehicle emission factors, heterogeneous formation and daytime sources were examined based on a large observational data set. A strong missing daytime source was needed to explain the measured HONO concentrations. HONO photolysis is the dominant OH source throughout the daytime.

The measurements were made in Ji'nan, a densely populated city with 1.6 million automobiles. The site is located in the central campus of Shandong University (36º40'N, 117º03'E), a typical urban area surrounded by massive buildings and a dense population, and close to several main traffic roads (Fig. 1). Large-scale industries in the suburban areas, including steel plants, thermal power plants, cement plants, oil refineries and chemical plants, are the major industrial emission sources of local air pollution in Ji'nan, and are mainly distributed to the northeast and southwest of the site. From the spikes of sulfur dioxide (SO2) concentrations under northeasterly and/or southwesterly winds, we know that the site was affected by these local industrial emission sources. All the measurements were carried out on the rooftop of a six-floor teaching building, around 22 m above the ground. The site was influenced by several dust storms and urban dust (http://www.sdein.gov.cn/dtxx/hbyw/201605/t20160512_294758.html).
Besides, the elevated HONO levels in August, under conditions of intense solar radiation, suggest the presence of strong HONO sources as well as an important contribution of HONO, as a potential source of OH radicals, to the atmospheric oxidation chemistry.

To quantify the vehicle emission factor, the following criteria were adopted to select the cases: (a) only data in the morning rush hours (6:00-8:30 LT) in winter (i.e., November-February) were used; (b) NO/NOx > 0.7; (c) good correlation between HONO and NOx; and (d) short duration of the plumes (< 2 hours). Rush hours are the period of strongest traffic emission and thus of the greatest contribution of vehicle exhaust to HONO concentrations. Furthermore, during the winter early-morning rush hours, when the solar radiation is weak and the boundary layer height is relatively stable, the derived HONO/NOx ratio is least affected by atmospheric photochemical reactions and by mixing with the air masses aloft. Criterion (b) was used as an indicator for identifying freshly emitted plumes, while criteria (c) and (d) further ensured that the increase of HONO was mainly attributable to direct emissions rather than to heterogeneous reactions of NO2. The slopes of the scatter plots of HONO versus NOx can then be taken as the emission ratios. With such strict selection criteria, a total of twelve cases were screened out to estimate the vehicle emission factor of HONO in urban Ji'nan (Table 2).
The net OH production rate from HONO photolysis (P_OH(HONO)_net) was calculated as the source strength minus the sink terms due to reactions (R2) and (R5) (Equations E10 and E11):

P_OH(HONO) = J_HONO x [HONO]   (E10)

P_OH(HONO)_net = P_OH(HONO) - k_OH+NO[NO][OH] - k_OH+HONO[HONO][OH]   (E11)

Figure 1. Locations of Ji'nan and the sampling site. The left map is color-coded by the anthropogenic NOx emissions (Zhang et al., 2009), while the right is color-coded by the geographical height. The large industrial sources are labeled with different colors, including steel plants (light blue), thermal power plants (pink), cement plants (red), oil refineries (grey) and chemical plants (green).

Figure 2. Time series of HONO, NO, NO2, PM2.5, PM1.0, J_HONO, temperature (T), relative humidity (RH) and surface wind in Ji'nan from September 2015 to August 2016. The data gap is mainly due to the maintenance of the instruments.

Figure 6. Seasonal variation of the NO2-to-HONO conversion frequency (k_het) in Ji'nan.

Figure 8. Scatter plots of the unknown daytime HONO source strength (P_unknown) with (a) NO2, (b) NO2*(S/V)a, (c) NO2*J_NO2, and (d) NO2*J_NO2*(S/V)a during August 2016.

Table 1. Overview of the measured HONO and NOx levels in urban Ji'nan and comparison with other studies. N: nighttime (18:00-06:00, LT); D: daytime (06:00-18:00, LT).

Location (setting) | Time | HONO (ppb) N/D | NO2 (ppb) N/D | NOx (ppb) N/D | HONO/NO2 N/D | HONO/NOx N/D | Ref.
Santiago, Chile (urban) | Mar-Jun 2005 | 3.00/1.50 | 30.0/20.0 | 200.0/40.0 | 0.100/0.075 | 0.015/0.038 | 1
Rome, Italy (urban) | May-Jun 2001 | 1.00/0.15 | 27.2/4.0 | 51.2/4.2 | 0.037/0.038 | 0.020/0.024 | 2
Kathmandu, Nepal (urban) | Jan-Feb 2003 | 1.74/0.35 | 17.9/8.6 | 20.1/13.0 | 0.097/0.041 | 0.087/0.027 | 3
Shanghai, China (urban) | Oct 2009 | 1.50/1.00 | 41.9/30.0 | - | 0.038/0.032 | - | 4
Guangzhou, China (urban) | Jun 2006 | 3.5/2.00 | 20.0/30.0 | - | 0.175/0.067 | - | 5
Beijing, China (urban) | Oct-Nov 2014 | 1.75/0.93 | 37.6/35.3 | 94.5/53.4 | 0.047/0.026 | 0.019/0.017 | 6
Xinken, China (suburban) | Oct-Nov 2004 | 1.30/0.80 | 34.8/30.0 | 37.8/40.0 | 0.037/0.027 | 0.034/0.020 | 7
Milan, Italy (suburban) | May-Jun 1998 | 0.92/0.14 | 33.2/18.3 | 117.5/23.4 | 0.028/0.008 | 0.008/0.006 | 8
Backgarden, China (rural) | Jul 2006 | 0.95/0.24 | 16.5/4.5 | 20.9/5.5 | 0.057/0.053 | 0.045/0.043 | 9
Ji'nan, China (urban) | Sep 2015-Aug 2016 | 1.28/0.99 | 31.0/25.8 | 46.4/40.6 | 0.079/0.056 | 0.040/0.035 | 10
Ji'nan (autumn) | Sep-Nov 2015 | 0.87/0.66 | 25.4/23.2 | 38./37.5 | 0.049/0.034 | 0.034/0.022 | 10
Ji'nan (winter) | Dec 2015-Feb 2016 | 2.15/1.35 | 41.1/34.6 | 78.5/64.8 | 0.056/0.047 | 0.034/0.031 | 10
Ji'nan (spring) | Mar-May 2016 | 1.24/1.04 | 35.8/25.8 | 47.3/36.0 | 0.046/0.052 | 0.035/0.041 | 10
Ji'nan (summer) | Jun-Aug 2016 | 1.20/1.01 | 22.5/19.0 | 29.1/25.8 | 0.106/0.079 | 0.060/0.049 | 10

References: 1: Elshorbany et al. (2009); 2: Acker et al. (2006a); 3: Yu et al. (2009); 4: Bernard et al. (2016); 5: Qin et al. (2009); 6: Tong et al. (2015); 7: Su et al. (2008a); 8: Alicke et al. (2002); 9: Li et al. (2012); 10: this study.

Table 2. The emission ratios dHONO/dNOx of fresh vehicle plumes.
Date | Local Time | dNO/dNOx | R2 | dHONO/dNOx (%)
11/03/2015 | 06:08-07:45 | 0.89 | 0.91 | 0.29
11/05/2015 | 06:00-07:30 | 0.92 | 0.84 | 0.63
11/17/2015 | 06:00-07:30 | 1 | 0.95 | 0.75
11/20/2015 | 06:00-07:15 | 0.92 | 0.72 | 0.59
12/06/2015 | 06:00-07:48 | 0.83 | 0.75 | 0.87
12/08/2015 | 06:02-07:30 | 0.95 | 0.61 | 0.58
12/25/2015 | 06:00-07:30 | 0.94 | 0.72 | 0.30
12/31/2015 | 06:00-07:30 | 0.96 | 0.94 | 0.47
01/02/2016 | 06:00-07:30 | 1 | 0.61 | 0.46
01/20/2016 | 06:44-07:48 | 0.92 | 0.96 | 0.71
01/21/2016 | 06:26-07:56 | 0.94 | 0.58 | 0.54
01/27/2016 | 07:00-08:12 | 0.89 | 0.77 | 0.19

Acknowledgments. We thank Chuan Yu, Ruihan Zong, Dr. Zheng Xu and Dr. Long Jia for their contributions to the field study. We are grateful to the European Centre for Medium-Range Weather Forecasts for sharing the boundary layer height data and to the National Center for Atmospheric Research for providing the TUV model. This work was funded by the National Natural Science Foundation of China (No. 41505111 and 91544213), the National Key Research and Development Programme of the Ministry of Science and Technology of China (No. 2016YFC0200500), the Natural Science Foundation of Shandong Province (ZR2014BQ031), the Qilu Youth Talent Program of Shandong University, and the Jiangsu Collaborative Innovation Center for Climate Change.
46,119
[ "773861", "181905", "770416" ]
[ "474247", "26886", "474247", "474247", "474247", "474628", "474247", "197066", "246207", "474247", "474247" ]
00174455
en
[ "chim" ]
2024/03/05 22:32:07
2007
https://hal.science/hal-00174455/file/Nedelec_et_al.pdf
Keywords: thermoporosimetry, confinement, crystallisation, nanoporous materials

The thermal behaviour of carbon tetrachloride confined in silica gels of different porosity was studied by Differential Scanning Calorimetry (DSC). Both the melting and the low-temperature phase transition were measured and found to be closely dependent upon the degree of confinement. The amount of solvent was varied through two sets of experiments, sequential addition and an original progressive-evaporation procedure, allowing the measurement of the DSC signals for the various transitions as a function of the amount of CCl4. These experiments allowed the determination of the transition enthalpies in the confined state, which in turn allowed the determination of the exact quantities of solvent undergoing the transitions. A clear correlation was found between the amounts of solvent undergoing the two transitions (both free and confined), demonstrating that the formation of the adsorbed layer t does not interfere with the second transition. The thickness of this layer and the porous volumes of the two silica samples were measured and found to be in very close agreement with the values determined by gas sorption.

Crystallization of carbon tetrachloride in confined geometries
Adil Meziane 1, Jean-Pierre E. Grolier 2, Mohamed Baba 2 and Jean-Marie Nedelec 3*

Introduction

The peculiar behaviour of liquids in confined geometry has attracted a lot of interest, in particular during the past ten years. A comprehensive review was published in 2001 [1]. The case of water [2,3] is particularly relevant because of the numerous works dealing with water and also because of obvious practical applications. The revival of interest in transitions in confined geometry undoubtedly comes from the considerable progress in the preparation of nanoporous materials with controlled pore size and with spatially controlled pore distribution and connectivity. In this context, the discovery of MCM-type materials [4] has played an important role. The use of organized molecular systems (surfactants) to spatially limit the condensation of alkoxide precursors is now common and has been extended to various systems and various pore organizations. The availability of such porous materials with controlled, and to some extent tuneable, porosity has led to an increased interest in the study of crystallisation in confined geometry. Liquids in porous materials are also of widespread practical interest, oil recovery being a major example, and the chemistry of water in clouds is also greatly affected by confinement effects. More importantly, the research devoted to the preparation of nanocrystals with good control of both crystal size and size distribution has expanded tremendously over the last twenty years [5]. In particular, semi-conducting nanocrystals, or quantum dots, have been the subject of many research papers [6,7] owing to the possible observation in these materials of a direct quantum effect correlated with the size of the crystals. Porous materials appeared to be ideal candidates for the preparation of such nanocrystals, utilizing the pores as nanoreactors where the crystallization of the desired material can be confined. Abundant examples concerning MCM-41 and SBA-15 mesoporous silica templates can be found in the literature; see [8,9] for instance. Another very interesting example of crystallization in confined geometries is biomineralization [10,11].
Biomineralization is a complex process in which the solution conditions, organic template, and crystal confinement coordinate to yield nanostructured composite materials with controlled morphology and mechanical and structural properties. Over the past few decades, research has examined various aspects of this mineralization process, both by characterizing minerals found in nature and by creating synthetic composites. Another field in which crystallization in confined geometries plays a major role is polymer science. Numerous examples demonstrate how confinement can modify the kinetics of crystallization of polymers and also the morphology of the crystals [12]. All these selected examples demonstrate how crucial it is to obtain information concerning crystallization in confined geometries. In particular, the energetics of crystallization in confining media is not well documented. The well-known modification of the freezing-point temperature of liquids in confined geometry has led to the development of characterization techniques for the measurement of porosity in solids. Such techniques are based upon the Gibbs-Thomson equation [START_REF] Gibbs | Collected works[END_REF][START_REF] Thomson | [END_REF], which relates the shift dT of the crystallization temperature to the pore size of the confining material according to [15]:

dT = T0 - Tp = 2 sigma_SL T0 cos(theta) / (Delta_H_m rho_S R_p) ~ k / (Delta_H_m R_p)   (Equation 1)

where T_p is the melting temperature of a liquid confined in a pore of radius R_p, T0 is the normal melting temperature of the liquid, sigma_SL is the surface energy of the solid/liquid interface, theta the contact angle, Delta_H_m the melting enthalpy, rho_S the density of the solid, and k a constant. The measurement of dT by calorimetry or by an NMR technique leads to thermoporosimetry [16] and NMR cryoporometry [17], respectively. The advantages of both techniques have been discussed extensively [18]. As first proposed by Kuhn [19] in the 1950s, thermoporosimetry can also be of great value for the characterization of soft networks such as polymeric gels [20]. In this case, the confinement is created by the meshes defining the 3-dimensional polymer network. The study of polymer architecture modification by thermoporosimetry requires knowledge of the behaviour of liquids able to swell these organic materials. We recently developed reference porous materials for the calibration of thermoporosimetry with various solvents [21,22]. In our systematic work, we observed that some solvents presenting a low-temperature phase transition in the solid state offered even more interest [23]. Indeed, this transition is also affected by the confinement and is an interesting alternative to the use of the liquid-to-solid transition, since it is usually much more energetic. From a practical point of view, the use of these transitions does not change the procedure, which requires the calibration of the technique with samples of known porosity. But from a fundamental point of view, this observation raises some questions about the underlying thermodynamics. The objective of this paper is to discuss the transitions of carbon tetrachloride in confined geometry, because CCl4 is an effective solvent for polymer swelling and also presents this solid-state phase transition, as observed before [24,25].

Theoretical considerations

According to Equation (1), the shift of the transition temperature of a confined liquid, dT, is inversely proportional to the radius of the pore in which it is confined.
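As a numerical illustration of this relation, the short Python sketch below converts a measured transition-temperature shift into a pore radius using the calibrated form R_p = k_cal/dT + t (Equation 2 below); the calibration constant k_cal, which lumps together sigma_SL, T0, cos(theta), Delta_H_m and rho_S, and the layer thickness t are purely illustrative values, not the calibration of this work.

# Thermoporosimetry sketch: pore radius from the transition-temperature shift.
def pore_radius_nm(delta_t_K, k_cal=100.0, t_layer=1.0):
    """R_p (nm) from the shift delta_t_K (> 0); k_cal in nm*K, t_layer in nm."""
    return k_cal / delta_t_K + t_layer

for dT in (5.0, 10.0, 20.0):
    print(f"dT = {dT:4.1f} K  ->  R_p = {pore_radius_nm(dT):5.1f} nm")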
In fact, it is well known that not all the solvent takes part in the transition and that a significant part of it remains adsorbed on the surface of the pore. The state of this adsorbed layer has been discussed extensively in the case of water. Consequently, the radius measured by application of the Gibbs-Thomson equation should be written R = R_p - t, where t is the thickness of the adsorbed layer, leading to a reformulation [7] of Equation 1 as

R_p = k / (Delta_H_m dT) + t   (Equation 2)

The value of t can be determined by the calibration procedure using materials of various pore sizes, and this is the traditionally adopted procedure. The problem in doing so is that the underlying hypothesis is that the thickness of the adsorbed layer t does not vary with pore size. For small pores, the error on t can lead to large errors in the measurement of R_p. We proposed an alternative method to measure t by adding sequentially various amounts of liquid to the porous material [26]. As stated before, this layer t represents the part of the solvent which does not crystallize. For solvents like CCl4, which exhibit a further transition at low temperature, the behaviour of this adsorbed layer is an open question. Does this solvent participate in the second transition? Is a new adsorbed layer created on top of the first one? To gain further insight into these questions, we studied the behaviour of CCl4 in mesoporous silica gels, as described in the following section.

Experimental section

Mesoporous silica gels. Mesoporous monolithic silica gels (2.5 mm x 5.6 mm diameter cylinders) were prepared by the acid-catalysed hydrolysis and condensation of a silicon alkoxide, following procedures reviewed elsewhere [START_REF] Hench | Sol-gel silica : processing, properties and technology transfer[END_REF]. Careful control of the aging time, performed at 900°C, allowed the production of samples with controlled textural properties. In this study, two samples (A and B) with different textural properties (Specific Surface Area (SSA), total pore volume (V_p) and pore size distribution (PSD)) were used. The textural characteristics of the samples were determined by N2 sorption.

Gas sorption measurements. Textural data of the silica gels were determined on a Quantachrome Autosorb 1 apparatus. The instrument permits a volumetric determination of the isotherms by a discontinuous static method at 77.4 K. The adsorptive gas was nitrogen with a purity of 99.999%. The cross-sectional area of the adsorbate was taken to be 0.162 nm2 for SSA calculation purposes. Prior to N2 sorption, all samples were degassed at 100°C for 12 h under reduced pressure. The masses of the degassed samples were used to estimate the SSA. The BET [START_REF] Brunauer | [END_REF] SSA was determined by taking at least 4 points in the 0.05 < P/P0 < 0.3 relative pressure range. The pore volume was obtained from the amount of nitrogen adsorbed on the samples up to a partial pressure taken in the range 0.994 < P/P0 < 0.999. Pore size distributions were calculated from the desorption isotherm by the BJH method [29]. The mean pore radius R_av was calculated according to

R_av = 2 V_p / S_BET   (Equation 3)

corresponding to a cylindrical shape of the pores, which is also the underlying hypothesis in Equation (1). Textural data for the two samples are displayed in Table 1. In this table, the modal pore radius R_p is also shown.
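The consistency check of Equation (3) can be reproduced directly from the Table 1 data, as in this short Python sketch:

# R_av = 2 Vp / S_BET (Equation 3), with Vp in cm3/g and SSA in m2/g.
def r_av_nm(vp_cm3_per_g, ssa_m2_per_g):
    # 1 cm3/m2 corresponds to 1e3 nm, hence the conversion factor below
    return 2.0 * vp_cm3_per_g / ssa_m2_per_g * 1.0e3

for sample, vp, ssa in (("A", 1.327, 183.0), ("B", 0.991, 166.0)):
    print(sample, round(r_av_nm(vp, ssa), 1), "nm")  # gives 14.5 and 11.9 nm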
This value fairly matches the R_av derived from the S_BET measurement with the cylindrical-shape assumption, thus confirming the validity of the hypothesis on the pore shape.

Table 1. Porous characteristics of the silica gel samples.
Sample | SSA (m2/g) | Vp (cm3/g) | Rav (nm) | Rp (nm)
A | 183 | 1.327 | 14.5 | 14.25
B | 166 | 0.991 | 11.9 | 8.7

DSC measurements. A Mettler-Toledo DSC821 instrument, calibrated (both for temperature and enthalpy) with metallic standards (In, Pb, Zn) and with n-heptane, was used to record the thermal curves. It was equipped with an intracooler set allowing a temperature scanning range between -70 and 600 °C. About 10 or 20 mg of the studied material was introduced into an aluminium DSC pan to undergo an appropriate temperature program. To allow the system to be in an equilibrium state, a slow freezing rate is required [30]; a rate of -0.7 °C/min was chosen. Other, slower cooling rates were tested and did not show any significant discrepancy. CCl4 (Aldrich) of HPLC quality was used without any supplementary purification.

Results and discussion

Thermal behavior of free CCl4. Bulk CCl4 was studied before, and its thermal phase transitions were well characterized [24,25]. It exhibits a complex system of thermal transitions, as shown in Figure 1. [Figure 1: DSC thermogram of pure CCl4, Heat Flow (mW.g-1) versus T (°C), showing the Liquid, FCC, R and M phases.] As it is cooled down, liquid CCl4 crystallizes into a Face-Centered-Cubic phase (FCC), which undergoes a phase transition upon further cooling to a Rhombohedral one (R), which in turn transforms to a Monoclinic crystalline structure (M) around -48°C. Heating the (M) phase leads back to (R) in a reversible way, but upon heating (R) melts directly without transforming into the (FCC) phase. Considering the transition heat values (Figure 1), it can be pointed out that the R-to-liquid transition releases an enthalpy (13.6 J/g) equivalent to the total heat liberated by the liquid-to-FCC (9.6 J/g) and FCC-to-R (3.8 J/g) transitions together. Takei et al. [24] showed that both the solid-to-solid and liquid-to-solid transitions of CCl4 were strongly dependent on the average pore size of the material in which the liquid is confined. In particular, they demonstrated that the FCC-to-R transition is no longer observed when the pore radius is smaller than 16.5 nm, which is the case for our silica samples (see Table 1). Because of the complex behaviour of CCl4 upon cooling, we chose to work on heating, limiting the study to the M-to-R and R-to-liquid transitions. The two transitions were studied for CCl4 confined in the two porous samples A and B.

Thermal behaviour of CCl4 confined in sample A. The objective is to obtain quantitative information on the solvent undergoing both transitions (both confined and free solvent). To do so, we performed sequential additions of precise quantities of CCl4 to the sample, as described in [26]. Briefly, a known mass of silica gel (about 20 mg) is placed in the DSC pan, which is then sealed. A small hole is drilled in the cover, allowing further injections of known masses of carbon tetrachloride. This procedure allows a precise control of the added mass of solvent. After each thermal cycle, a new injection is performed. For the first time to our knowledge, we also performed some experiments in the reverse way, by progressively evaporating the solvent starting from a large excess. This was performed by inert-gas flushing in the DSC pan at 25 °C, the subsequent evaporation of the solvent being controlled by the flushing time. Obviously, in this case we do not know the remaining mass of CCl4, but we can calculate it from the measured enthalpies.
The thermograms recorded for various quantities of CCl4 added to sample A are shown in Figure 2. In this figure, 4 peaks can be observed, labelled from 1 to 4 from low temperature to room temperature. The assignment of all peaks is presented in Table 2. In Figure 2, it can be seen that for small quantities of added CCl4, no transition is observed. This first step corresponds to the creation of the adsorbed layer t on the surface of the porous silica gel. For higher amounts of CCl4, peak 3 appears, at a temperature shifted with respect to the normal melting temperature of solid CCl4. At about the same time, peak 1 also appears, corresponding to the M-to-R transition of the confined solvent. The intensities of these two peaks increase upon further addition of solvent until they become constant, coinciding with the appearance of peaks 2 and 4, which correspond to the excess free solvent. Figure 3 presents the DSC curves recorded upon desorbing the CCl4 by gas flushing. A plot of the heats corresponding to peaks 3 and 4 (H3 and H4) as a function of the mass of CCl4 added (m_CCl4) is presented in Figure 4. The different steps are clearly observable. The point where H3 becomes non-zero corresponds to the end of the creation of the adsorbed layer, allowing the determination of the quantity of solvent (m_t) participating in the formation of this layer. Beyond a given point, H3 remains constant; this point corresponds to the total filling of the pores (H3_Max), thus allowing the determination of the porous volume of the sample (see Section 3.4). To measure precisely the amounts of CCl4 involved in each transition, we need to know the transition enthalpy at the given temperature. These values are known for the free solvent, transiting at the regular temperatures, but not for the confined solvent, which undergoes transitions at lower temperature. From Figure 4, we can measure H3_Max at the point where all pores are filled, in the constant part of the curve. In this case the enthalpy corresponds to a mass of solvent equal to the total mass added (m_vp) minus the mass required for the creation of the adsorbed layer (m_t), namely m = m_vp - m_t. We can then deduce the enthalpy of melting per gram for the confined solvent: Delta_H3 = 13.67 J.g-1. For the M-to-R transition, the situation is different. Because of the overlapping of peaks 1 and 2, we can only use the sum H1 + H2. If we plot the evolution of H3 and H4 as a function of (H1 + H2), we obtain the curves presented in Figure 5. It is worth noting that the points corresponding to the desorption experiments nicely complete the points corresponding to sequential additions of solvent (empty and full symbols, respectively). The point where H4 becomes non-zero corresponds to the H1_Max value, i.e. to the totality of the confined solvent undergoing the transition (at this point H2 = 0). The enthalpy of transition Delta_H1 can then be deduced: Delta_H1 = 27.22 J.g-1. Knowing Delta_H1 and Delta_H3, we can now calculate the masses of solvent which undergo the various transitions for all points. Figure 6 presents the correlation between these masses, M3 and M1 (the indexes correspond to the different peaks), for the progressive filling (full symbols) and desorption (empty symbols) experiments; the line y = x is also plotted. A clear correlation is observed between the confined solvent which undergoes the R-to-liquid transition and that which undergoes the M-to-R transition. This correlation is observed both for the addition and for the evaporation experiments. This clearly confirms that all solvent undergoing the first transition also undergoes the second one.
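A minimal Python sketch of this bookkeeping follows, using the specific enthalpies derived above and the expression for M2 given below (Equation 4); the heat values in the example call are invented, not the recorded DSC data.

# Masses of CCl4 undergoing each transition in sample A, from measured heats
# (J) and the specific enthalpies (J/g) derived in the text.
DH1 = 27.22   # M-to-R, confined
DH2 = 46.6    # M-to-R, free
DH3 = 13.67   # R-to-liquid, confined
DH4 = 25.07   # R-to-liquid, free

def masses(h1_plus_h2, h3, h4, h1_max):
    # Peaks 1 and 2 overlap in sample A, so only the sum H1+H2 is measured;
    # once the pores are full, H1 = H1_Max and the excess goes into H2.
    m3 = h3 / DH3                       # confined, R-to-liquid
    m4 = h4 / DH4                       # free, R-to-liquid
    m2 = (h1_plus_h2 - h1_max) / DH2    # free, M-to-R (Equation 4 below)
    m1 = h1_max / DH1                   # confined, M-to-R (pores full)
    return m1, m2, m3, m4

# Illustrative heats (J) for a single thermogram:
print(masses(h1_plus_h2=0.9, h3=0.35, h4=0.15, h1_max=0.6))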
This observation is further confirmed by the plot of Figure 7, showing the correlation between M2 and M4, the masses of free solvent which undergo transitions 2 and 4. M2 is determined through the following equation:

M2 = (H1 + H2 - H1_Max) / Delta_H2   (Equation 4)

where H1_Max is the enthalpy of the M-to-R transition of the liquid totally filling the pores (see Figure 5) and Delta_H2 = 46.6 J.g-1 is the specific enthalpy of the M-to-R transition of free CCl4. Once again, a clear correlation is observed between the two quantities, confirming that all the solvent which has crystallised outside the pores undergoes the second transition at the regular temperature (no confinement). These conclusions also demonstrate that the layer t remains adsorbed and does not participate in the low-temperature transition.

Thermal behaviour of CCl4 confined in sample B. The same experiments and calculations were applied to sample B, which presents smaller pores, i.e. a higher degree of confinement. The thermograms recorded for sample B filled with CCl4 upon progressive evaporation are displayed in Figure 8 (heat flow versus temperature between -60 and -20 °C). Because of the higher degree of confinement, the two peaks 1 and 2 are well resolved and can be discriminated. Following the same procedure, we can plot the evolution of H3 and H4 as a function of m_CCl4, as shown in Figure 9. The plot of H1 and H2 as a function of m_CCl4 (not shown here) can also be made. From these curves, Delta_H1 and Delta_H3 for sample B can be derived (Delta_H1 = 22.19 J.g-1 and Delta_H3 = 10.13 J.g-1). Together with the known values of Delta_H2 (46.6 J.g-1) and Delta_H4 (25.07 J.g-1), they allow the calculation of the quantities of solvent undergoing the different transitions in the various experiments.

Calculation of porous volumes and thicknesses of adsorbed layers. From the curves of Figures 4 and 9, we can measure the mass of solvent corresponding to the total filling of the pores, m_vp. The porous volume of the gel, per mass of silica m_SiO2, can then be calculated according to

V_p = m_vp / (rho x m_SiO2)

rho being the density of CCl4; we took the value at -20°C (rho = 10.85 kmol.m-3, i.e. about 1.67 g.cm-3) [31]. From the same figures, we can also measure the mass of the adsorbed layer, m_t; the thickness of this layer can be calculated according to

t = m_t / (rho x SSA x m_SiO2)

SSA being the specific surface area of the silica sample given in Table 1. The results are summarized in Table 3, which compares, for each sample, V_p (cm3.g-1) with the value V_N2 (cm3.g-1) obtained from nitrogen sorption. The calculated values of V_p are in very good agreement with the values measured by nitrogen sorption; the error is less than 2%. The values of t determined for samples A and B are also in good agreement with the average value given in [25] after the calibration procedure with samples of various pore sizes. All calculations were performed with a constant value of rho_CCl4 measured at -20°C. Obviously, no information can be found in the literature for the density of carbon tetrachloride at lower temperatures, since it is usually solid at these temperatures; nevertheless, using the value at -20°C, the error must be small. Furthermore, with the validity of such an approach demonstrated, we can now use the exact porous volume to calculate the exact density of the confined solvent at various low temperatures.
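The two relations above reduce to a few lines of Python; the masses in the example call are illustrative values for a 20 mg gel aliquot, not the measured ones.

# Porous volume and adsorbed-layer thickness per gram of dry silica.
RHO_CCL4 = 1.67      # g cm^-3 at -20 C (10.85 kmol m^-3 x 153.8 g mol^-1)

def vp_cm3_per_g(m_vp_g, m_gel_g):
    return m_vp_g / (RHO_CCL4 * m_gel_g)

def t_nm(m_t_g, m_gel_g, ssa_m2_per_g):
    # layer volume / total surface; 1 cm3 per m2 corresponds to 1e3 nm
    return (m_t_g / (RHO_CCL4 * m_gel_g * ssa_m2_per_g)) * 1.0e3

print(vp_cm3_per_g(0.044, 0.020), "cm3/g;", t_nm(0.010, 0.020, 183.0), "nm")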
Conclusions

The thermal behavior of carbon tetrachloride confined in two mesoporous silica gels of different porosity was studied. The two transitions (solid-to-liquid and Monoclinic-to-Rhombohedral) were measured and are both affected by the confinement. The enthalpies of these two transitions were determined for the first time at the temperatures corresponding to the confined solvent. Using these enthalpies, a clear correlation has been shown between the solvent undergoing the first and the second transitions. Consequently, the adsorbed layer created during the intrusion of CCl4 inside the porosity of the silica gels remains constant and does not participate in the two transitions. The thickness of this layer was measured for both samples and was found to be only slightly dependent on the pore radius. Finally, the porous volumes of the silica gels were measured, and the values agree very closely with those derived from the nitrogen sorption isotherms. It has been demonstrated that using porous samples of known porosity (measured by mercury intrusion porosimetry or gas sorption analysis) allows the measurement of thermodynamic data of confined liquids (enthalpy of transition, density, etc.).

Figure 1: DSC thermogram of pure CCl4 showing the different transitions.
Table 2: Labelling of the different peaks observed in the DSC curves.
Figure 2: Thermograms recorded for various amounts of CCl4 added to sample A.
Figure 3: Thermograms recorded for various flushing times for sample A filled with CCl4.
Figure 4: Evolution of H3 (circles) and H4 (squares) as a function of the mass of CCl4.
Figure 5: Evolution of H3 (circles) and H4 (squares) for progressive filling (full symbols) and desorption (empty symbols).
Figure 6: Evolution of the mass of confined solvent undergoing the M-to-R transition (M1) as a function of the mass undergoing the R-to-liquid transition (M3).
Figure 7: Correlation between the masses of free solvent undergoing the two transitions (M2 and M4).
Figure 8: Thermograms recorded for various flushing times for sample B filled with CCl4.
Figure 9: Evolution of H3 (circles) and H4 (squares) as a function of the mass of CCl4.
Figure 10: Correlation between M3 and M1 (circles) and M2 and M4 (squares).
Table 1: Porous characteristics of the silica gel samples.

1 Laboratoire de Photochimie Moléculaire et Macromoléculaire, UMR CNRS 6005; 2 Laboratoire de Thermodynamique des Solutions et des Polymères, UMR CNRS 6003; 3 Laboratoire des Matériaux Inorganiques, UMR CNRS 6002, TransChiMiC. Ecole Nationale Supérieure de Chimie de Clermont-Ferrand & Université Blaise Pascal, 24 Avenue des Landais, 63177 Aubière Cedex, France.

Acknowledgements. Financial support from the French ANR under project Nanothermomécanique (ACI Nanosciences N°108) is gratefully acknowledged. The authors would like to thank A. Gordon and Pr S. Turrell for careful reading of the paper. Contact author for correspondence and return of proofs: e-mail: j-marie.nedelec@univ-bpclermont.fr
23,905
[ "6039" ]
[ "899", "789", "13918", "899" ]
01744623
en
[ "sdv" ]
2024/03/05 22:32:07
2018
https://hal.sorbonne-universite.fr/hal-01744623/file/AAA_Deuve%202018%20version%20ultime.pdf
Thierry Deuve email: <deuve@mnhn.fr>

What is the epipleurite? A contribution to the subcoxal theory as applied to the insect abdomen

Keywords: Arthropoda, Hexapoda, Insecta, morphology, morphogenesis, segment, limb, pleurite, sternite, subcoxa, precoxa, ectodermal genitalia, wings

The epipleurites were first described by Hopkins in 1909 on the imago and larva of a beetle, and the term was subsequently widely used in insect morphology to designate sclerites of the pleural region, especially in larvae. They were recently interpreted as tergopleural (i.e. pleural but not strictly appendicular) by Deuve in 2001, but a study of embryonic development by Kobayashi et al. in 2013 has shown that they are instead eupleural (i.e. appendicular) and correspond to a dorsal part of the subcoxa. Their presence in the abdominal segments of insects illustrates the fundamental importance of the subcoxa in segmental structure, with a function of anchoring and supporting the appendage when the latter is present. However, the epipleurites are normally separated and functionally dissociated from the coxosternum, which integrates the ventral component of the subcoxa. In females, the epipleurite of segment IX of the abdomen corresponds to the gonangulum, as already pointed out by Deuve in 1994 and 2001, and it is involved in gonopod articulation. At segments VIII and IX of both males and females of holometabolans, the formation process of the genital ducts leads to an internalisation of the whole subcoxosternum (i.e. the coxosternum with the exception of the coxal and telopodal territories), and it is the two flanking epipleurites that ventrally close the abdomen in relation to the rearward displacement of the gonopore. This model may be generalised, in its broad lines, to a large part of the hemimetabolans. The body plan of the insect abdomen underlines the morphological and functional importance of the subcoxa in its fundamental structure, but the study of the Hexapoda in general also indicates the presence of a more proximal segment, the precoxa, which would belong to the groundplan but is more cryptic because it is often closely associated with the subcoxa and/or the paranotal lobe. Its location, which is sometimes on the ventral flank of the paranotal lobe, is in line with the hypothesis of a dual origin of the pterygote wing.

Résumé. What is the epipleurite? A contribution to the subcoxal theory as applied to the insect abdomen. The epipleurites were first described by Hopkins in 1909 on the imago and larva of a beetle. The term was then widely used in insect morphology to designate sclerites of the pleural region, especially for larvae. They were recently interpreted as tergopleural (i.e. pleural but not strictly appendicular) by Deuve in 2001, but a study of embryonic development by Kobayashi et al. in 2013 showed that they are in fact eupleural (i.e. appendicular) and correspond to the dorsal part of the subcoxa. Their presence in the abdominal segments of insects illustrates the fundamental importance of the subcoxa in segmental architecture, with a function of anchoring and supporting the appendage when the latter is present. However, the epipleurites are usually separate and functionally dissociated from the coxosternum, which integrates the ventral component of the subcoxa. In females, the epipleurite of abdominal segment IX corresponds to the gonangulum, as already indicated by Deuve in 1994 and 2001, and takes part in the articulation of the gonopod.
At segments VIII and IX of male and female holometabolan insects, the morphogenesis of the genital ducts causes an internalisation of the whole subcoxosternum (i.e. the coxosternum minus the coxal and telopodal territories), and it is the two adjacent epipleurites that close the abdomen ventrally, accompanying the rearward displacement of the gonopore. This model can be extended, in its broad lines, to a large part of the hemimetabolan insects. The structural plan of the insect abdomen underlines the morphological and functional importance of the subcoxa in the fundamental architecture, but the study of all the Hexapoda also indicates the presence of a more proximal segment of the appendage, the precoxa, which would belong to the groundplan but is more cryptic because it is often closely associated with the subcoxa and/or the paranotal lobe. Its position, often on the ventral flank of the paranotal lobe, is consistent with the hypothesis of a dual origin of the pterygote wing.

Introduction

For more than a century, the question of the structure of the arthropod segment along its dorsoventral axis has been the subject of intensive debate. In theory, this structure corresponds to a simple model: a dorsal tergum is separated from a ventral sternum by lateral pleura [START_REF] Audouin | Recherches anatomiques sur le thorax des animaux articulés et celui des insectes hexapodes en particulier[END_REF][START_REF] Milne-Edwards | Introduction à la zoologie générale, ou, considérations sur les tendances de la[END_REF]. In practice, however, it is often difficult to delimit these respective territories if we consider the morphological diversity of arthropods and the adaptive differentiations that respond to mechanical and functional constraints. Sclerified areas and membranous areas delineate visible territories that do not always correspond to the fundamental morphological fields. Sclerites of different origins may merge, or, conversely, a given sclerite may be reduced or fragmented and partially replaced by a membranous area with blurred or indefinable limits. Invaginations of some skeletal territories to form internal ducts or apophyses for muscle attachment have also been reported. Several approaches make it possible to search for the identity and limits of the original territories: the classical methods of comparative anatomy, including the study of muscles, whose insertions can act as landmarks; those of descriptive and comparative embryology; and the more recent ones of developmental genetics, using molecular markers of anatomical territories. Over the past few decades, the combined use of these methods has illustrated the complexity of the issues involved, but has also clarified our understanding of the fundamental organisation of the skeleton. Pleural regions present the majority of problems, because their boundaries are ipso facto those of the tergal and sternal regions. In addition, they include the limbs, and a subsequent question is whether the pleura are entirely appendicular in nature or whether some parts of the pleura would instead be 'non-appendicular' or 'formerly appendicular'. Hypothetical homologies with a prearthropodan ancestor may even be sought. Added to this is the complex structure of the arthropodal limb itself, of a pleural nature, whose fundamental organisation needs to be elucidated in order to uncover the true pattern of its diversification within the different arthropod lineages and, for a given organism, according to the metamere considered.
The pleura are lateral areas, but the insertion of the arthropodal limb has shifted to a more lateroventral location. This arrangement and functional necessities lead to a differentiation of the most proximal segments of the appendage (precoxa and coxa in crustaceans, precoxa and subcoxa in hexapods): on the ventral side, these segments tend to merge with the sternum or even to replace it [START_REF] Börner | Die Gliedmassen der Arthropoden[END_REF][START_REF] Weber | Die Gliederung der Sternopleuralregion des Lepidopterenthorax. Eine vergleichende morphologische Studie zur Subcoxaltheorie[END_REF][START_REF] Ferris | Some general considerations[END_REF](Ferris, 1940a); on the dorsal side, they tend to be embedded into the body wall, or even to merge with it [START_REF] Börner | Die Gliedmassen der Arthropoden[END_REF]. In the dorsal region of the pleuron, there is indeed a peculiar area, situated between the apparent base of the appendage and the tergum, whose interpretation is problematic. Respiratory organs (the spiracles in Hexapoda and Myriapoda) are located in this laterodorsal region. [START_REF] Crampton | Notes on the thoracic sclerites of winged insects[END_REF] proposed the name eupleurites for the pleural sclerites belonging to the limb, and [START_REF] Prell | Das Chitinskelett von Eosentomon[END_REF] introduced the term tergopleurites for those which, being 'not appendicular', are located between the base of the limb and the genuine tergum (the paranotal lobes could probably be included among these). In the Hexapoda, Snodgrass first named some laterodorsal sclerites "tergopleurites" (1927), then "paratergites" (1931) and finally "laterotergites" (1935a), because, from his final point of view, they belonged not to the limb but to the tergum. He then drew a boundary, called the "dorsopleural line" (1931, 1935a, 1958), which would separate the pleural and tergal regions. This "pleural line" was the so-called "pleural suture" of [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF] with respect to the abdomen. It should be noted that in Snodgrass' diagrams (ibid.), the spiracles are more dorsal than this boundary and therefore located in the tergal or, more precisely, 'laterotergal' area. This led me to rename the "laterotergites" of Snodgrass (1935a) "epipleurites", using the terminology of [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF] created for both the imago and larva of Coleoptera, because they seemed to me to be pleural rather than tergal in nature [START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF](Deuve 2001a). In my view, and following [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF], they were tergopleurites in the sense of [START_REF] Prell | Das Chitinskelett von Eosentomon[END_REF]. It can also be noted that Snodgrass (1935a[START_REF] Snodgrass | Evolution of arthropod mechanisms[END_REF] later identified as "epipleurites" or "epipleural sclerites" the subalar and basalar sclerites of the pterygote thorax, lying between the limb-base and the notum.
In a recent work on the embryonic development of a carabid beetle, [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] correctly note that I misplaced the dorsopleural or longipleural fold separating the epipleurite from the subcoxa [START_REF] Deuve | Les sternites VIII et IX de l'abdomen sont-ils visibles chez les imagos des Coléoptères et des autres Insectes Holométaboles ?[END_REF][START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF](Deuve 2001a), confusing it in the thorax of a Carabus larva with the paracoxal furrow separating the catepimerite and the anepimerite. Consequently, the anepimerite and the epipleurite were confused in the metathoracic segment, the latter losing its subcoxal identity. However, I had only reproduced the successive diagrams of Snodgrass, which drew the "dorsopleural line" on larvae of Silphidae [START_REF] Snodgrass | Morphology of the insect abdomen. Part I. General structure of the abdomen and its appendages[END_REF], "Fig. 3B") and Carabidae (Snodgrass 1927, "Fig. 25"; 1935a, "Fig. 139B"; 1958, "Fig. 8G"). Indeed, Snodgrass (1935a) interpreted this larval thoracic anepimerite as a laterotergite or, later (1958), as a pleurite more dorsal than the anapleurite. It must be noted in this respect that Snodgrass himself reproduced an error of Hopkins (1909, "Fig. 3"), who, on an imago of Coleoptera, confused under a single name, "pleural suture", the true thoracic pleural furrow and the abdominal dorsopleural furrow. Snodgrass (1931, p. 11) subsequently corrected this error of homonomy for the imago, but he did not correct it for the larva, as can be seen from his figures cited above. This misinterpretation of the larval thoracic segments led to a consecutive error in the abdominal segments by serial comparison. The understanding of this repeated error now makes it possible to correct the general interpretation of the abdominal epipleurites (abdominal laterotergites sensu Snodgrass 1935a) by giving them an identity more coherent with the model of the hexapodan segment according to the general subcoxal theory. In his fundamental work on the skeletal morphology of a scolytid beetle, [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF] placed great importance on the longitudinal fold which, in the abdominal pleural region, separates a dorsal epipleurite from a ventral hypopleurite. As mentioned above, [START_REF] Snodgrass | Morphology and mechanism of the insect thorax[END_REF][START_REF] Snodgrass | Morphology of the insect abdomen. Part I. General structure of the abdomen and its appendages[END_REF]Snodgrass ( , 1935a) named this fold the "dorsopleural line" and for this reason considered the sclerites located above this line as tergal and those below it as pleural (subcoxal). Thus the "epipleurites" of [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF] and [START_REF] Böving | An illustrated synopsis of the principal larval forms of the order Coleoptera[END_REF] became "paratergites" or "laterotergites". Returning to the conception of Hopkins, I adopted the term epipleurite and named the dorsopleural line of Snodgrass the "longipleural furrow" [START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF].
It should be noted, however, that the epipleurite of Hopkins, described for the imago of Dendroctonus (Scolytidae), is located around the spiracle and not below it, as in the campodeiform larva of Carabus (Carabidae) that served as my model. The two territories are not exactly homologous, and [START_REF] Böving | On the abdominal structure of certain beetle larvae of the campodeiform type. A study of the relation between the structure of the integument and the muscles[END_REF] made a useful distinction between the abdominal "pleural suture" in the sense of Hopkins and an "antipleural suture" located above it. This antipleural furrow was named "tergopleural suture" by [START_REF] Craighead | The determination of the abdominal and thoracic areas of the cerambycid larvae as based on a study of the muscles[END_REF], and "dorsolateral suture" by [START_REF] Böving | An illustrated synopsis of the principal larval forms of the order Coleoptera[END_REF], adding to the confusion. I actually used the term epipleurite for all beetle larvae in the classical sense generalised by [START_REF] Böving | An illustrated synopsis of the principal larval forms of the order Coleoptera[END_REF], as well as by [START_REF] Snodgrass | Morphology of the insect abdomen. Part I. General structure of the abdomen and its appendages[END_REF]Snodgrass ( , 1935a) under the name "paratergite" or "laterotergite". The use of the term epipleurite continues today in most works on larval morphology, but not without maintaining some confusion: as will be seen below, the meaning of this term may refer, depending on the case, to subcoxal or to subcoxal + precoxal territories. In my previous paper (Deuve 2001a), presented at a symposium on the origin of the Hexapoda held in Paris in January 1999, I hesitated over whether to interpret the "epipleural field" as a precoxal (appendicular) or tergopleural (non-appendicular) territory, finally opting for the second hypothesis, but only after expressing my doubts. In a clumsy way, I gave too much importance to this uncertain choice, including in the title, which had the effect of misleading the reader and diverting attention from the best-supported part of my demonstration, i.e. the presence of some peculiar sclerites in the groundplan of the pterygote abdominal segments and their key involvement in the functioning of the external female genitalia after internalisation of the subcoxosternal plate. The denomination and interpretation of these sclerites ("epipleurites, hypopleurites, tergopleurites, laterocoxites, subcoxae, precoxae") are secondary but not minor issues. The results of [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] show that the epipleurites have a subcoxal identity, and a readjustment must therefore be made. But this does not change the broad lines of my previous study, as will be discussed below. If it is considered that the fundamental organisation of the arthropod metamere also includes a longitudinal field that was formerly appendicular but is not strictly part of the present arthropodal appendage, a field located between the primary tergum and the base of the limb and possibly including the paranotal lobe, then the terminology of [START_REF] Prell | Das Chitinskelett von Eosentomon[END_REF] could be used, and the term "tergopleural field" should be applied instead of the inappropriate "epipleural field".
There are also, especially in certain Chilopoda, some sclerites named "dorsal sclerites" by [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF], located above the limb and not belonging to it, which I marginally included in my former "epipleural field" concept. They must not now be confused with the epipleurites and, as far as we know, they would be paranotal and/or tergopleural in nature. However, these sclerites deserve special attention in reference to their possible relationship with precoxal extensions (we have in mind the origin of insect wings). In addition, Bäcker et al. point out that species showing these dorsal sclerites are often burrowing forms or species of the edaphon, a milieu that requires functional adaptations of the skeleton with losses (i.e. transformations) of the projecting paranotal lobes.

The subcoxal theory

The idea that a proximal segment of the appendage, named subcoxa by [START_REF] Heymons | Beiträge zur Morphologie und Entwicklungsgeschichte der Rhynchoten[END_REF], would make up most, if not all, of the pleural and ventral regions is called the "subcoxal theory". Following the embryological observations of Heymons on Heteroptera, [START_REF] Börner | Die Gliedmassen der Arthropoden[END_REF] promoted and extended this idea to many arthropods, distinguishing (p. 690) between subcoxae that are free, pleural, or integrated into a subcoxosternum. He wrote very explicitly: "It is not uncommon to observe a more or less deep fusion of the subcoxa with the sternum or tergum, as is the case with many crustaceans and hexapods, whereas a fusion with the coxa seems to have occurred only rarely" (p. 656, translated from German). [START_REF] Snodgrass | Morphology and mechanism of the insect thorax[END_REF] and [START_REF] Weber | Die Gliederung der Sternopleuralregion des Lepidopterenthorax. Eine vergleichende morphologische Studie zur Subcoxaltheorie[END_REF] later discussed, employed and developed this theory. In insects, the subcoxal theory could only really be applied to the thoracic segments, where the limbs are particularly developed as legs. Snodgrass described in pterygotes a pleural wall, of a subcoxal nature, split dorsoventrally by an oblique fold called the pleural furrow or pleural sulcus. This fold serves for muscle attachment and is reinforced in the thoracic tagma, whose locomotory function is predominant. The presence of meso- and metathoracic wings further reinforces the mechanical importance of these parietal pleura. However, if the subcoxa is a proximal segment of the limb, it should surround the base of the more distal coxa and therefore, if embedded, it would also have some anterior, posterior and ventral components. [START_REF] Börner | Die Gliedmassen der Arthropoden[END_REF], [START_REF] Weber | Die Gliederung der Sternopleuralregion des Lepidopterenthorax. Eine vergleichende morphologische Studie zur Subcoxaltheorie[END_REF][START_REF] Weber | Morphologie, Histologie und Entwicklungsgeschichte der Articulaten[END_REF], [START_REF] Ferris | Some general considerations[END_REF] and Ferris (1940a) have shown that the subcoxa is ventrally associated or integrated with the sternum in formations whose interpretation is complex and sometimes rather speculative.
Recent observations of embryonic development have confirmed the preponderant role of some ventral subcoxal formations (particularly the basisternum) in the formation of the secondary sternum, which is in fact a subcoxosternum (e.g. [START_REF] Uchifune | Embryonic development of Galloisiana yuasai Asahina, with special reference to external morphology (Insecta: Grylloblattodea)[END_REF]). In addition, developmental genetic techniques have definitively confirmed the existence of the subcoxa as a proximal segment of the appendage of the Hexapoda, not only in the thorax, but also in the cephalic segments [START_REF] Coulcher | Molecular developmental evidence for a subcoxal origin of pleurites in insects and identity of the subcoxa in the gnathal appendages[END_REF]. Certainly, this theory may be extended to more or less all segments, including those of the abdomen. Despite the bitter opposition of [START_REF] Hansen | Studies on Arthropoda II[END_REF], the subcoxal theory prevailed in the following decades, before Snodgrass ([START_REF] Snodgrass | A textbook of arthropod anatomy[END_REF][START_REF] Snodgrass | Evolution of arthropod mechanisms[END_REF][START_REF] Snodgrass | A contribution toward an encyclopedia of insect anatomy[END_REF]) changed his own mind, and [START_REF] Bekker | Evolution of the leg in Tracheata. Part 1. Subcoxal theory and a critique of it[END_REF], [START_REF] Sharov | Basic arthropodan stock[END_REF] and [START_REF] Manton | The evolution of arthropodan locomotory mechanisms. Part 10. Locomotory habits, morphology and evolution of the hexapod classes[END_REF] criticised it in quite different ways. [START_REF] Bekker | Evolution of the leg in Tracheata. Part 1. Subcoxal theory and a critique of it[END_REF] criticised Heymons' observations and Börner's old interpretations of the articulation of the limb on the pleuron, but he seems to have been unaware of the fundamental works of Snodgrass and Weber, not citing any bibliographical reference after 1913. For [START_REF] Sharov | Basic arthropodan stock[END_REF], the pleuron of the pterygotes was essentially precoxal in nature ("pleuron" or "basal joint corresponding to the crustacean precoxopodite", p. 186-187; see also the present Figure 2), the true subcoxa being reduced to the trochantin (the latter would then be a genuine segment, the subcoxa, and not a fragment of the coxa as Snodgrass claimed in his subcoxal theory). [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF] adopted the broad lines of this interpretation. In his final model, Snodgrass ([START_REF] Snodgrass | A textbook of arthropod anatomy[END_REF][START_REF] Snodgrass | Evolution of arthropod mechanisms[END_REF][START_REF] Snodgrass | A contribution toward an encyclopedia of insect anatomy[END_REF]) considered the whole pleural region of arthropods as resulting from differential sclerotisation of the body-wall or from fragmentation of the coxa. In doing so, he totally abandoned his former subcoxal theory. [START_REF] Manton | The evolution of arthropodan locomotory mechanisms. Part 10. Locomotory habits, morphology and evolution of the hexapod classes[END_REF], who gave more importance to functional aspects than to problems of identity and homology, followed Snodgrass' later conceptions and considered the pleurites of arthropods as so many adaptations to different modes of life. For Manton, there is no subcoxa, and it is futile to try to establish such homologies.
Today, however, there seems to be a consensus in favour of the subcoxal theory. It implies the existence of a segment more proximal than the coxa, named the subcoxa. Dorsally, located between the coxa and the tergum, the subcoxa is divided in pterygote insects by the pleural furrow into an anterior episternite and a posterior epimerite. The subcoxa should not be confused with the trochantin, a basal sclerite of the appendage known for a long time, which according to Snodgrass, as well as [START_REF] Carpentier | Quelques remarques concernant la morphologie thoracique des Collemboles (Aptérygotes)[END_REF], would be genuinely coxal in nature (Sharov 1966, as well as [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF], rejected this interpretation with serious arguments). Some collembolans, such as Tetrodontophora and Orchesiella species, are known to have two well-defined segments, more proximal than the coxa, interpreted as precoxa (or "pretrochantin") and subcoxa ("trochantin") [START_REF] Hansen | Studies on Arthropoda II[END_REF], also sometimes referred to respectively as "subcoxa 1" and "subcoxa 2" [START_REF] Denis | Sous-classe des Aptérygotes[END_REF][START_REF] Deharveng | Morphologie évolutive des Collemboles Neanurinae en particulier de la lignée néanurienne[END_REF]. These two segments have also been observed as buds during embryonic development [START_REF] Bretfeld | Zur anatomie und Embryologie der Rumpfmuskulatur und der abdominalen Anhänge der Collembolen[END_REF]. More generally, these are two arciform sclerites that can be observed in some collembolans and that [START_REF] Willem | Recherches sur les Collemboles et les Thysanoures[END_REF] considered as true "precoxal segments", i.e. true segments more proximal than the coxa. [START_REF] Hansen | Studies on Arthropoda II[END_REF] clearly described these two segments in Nicoletia, an exobasal zygentoman (note that only an ancestor or a node in a phylogenetic tree may be said to be 'basal'; a basally branched extant clade is 'exobasal'). This observation was taken up by [START_REF] Sharov | Basic arthropodan stock[END_REF], who observed these two segments in both Nicoletia and Tricholepidon. As [START_REF] Barlet | Le thorax des Japygides[END_REF] reported, these two segments were also observed in the Diplura as supracoxal rings, while in the Protura they have also been described as "subcoxa 1" and "subcoxa 2" by [START_REF] Denis | Sous-classe des Aptérygotes[END_REF]. These supracoxal arches, named anapleurite (subcoxa 1) and catapleurite (subcoxa 2) by Barlet and Carpentier, are separated from each other by the "paracoxal fold" [START_REF] Matsuda | Morphology and evolution of the insect thorax[END_REF]. [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF] interpreted their presence as forming part of the hexapodan groundplan and named them respectively "eupleurite" and "trochantinopleurite". We could also use Hansen's terminology ("pretrochantin" and "trochantin") or, more simply and definitively, the terms precoxa and subcoxa. This problem of terminology may seem a secondary question, but it has crucial theoretical implications.
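Because these two supracoxal rings have accumulated so many competing names, it may help to fix the correspondences in one place. The following minimal sketch (in Python, used here purely as a compact notation; the variable names and the helper function are mine, not part of any cited work) simply restates the synonymies listed above and adds nothing to them.

```python
# Synonymies of the two supracoxal (basalmost) rings of the hexapodan
# limb, as compiled from the discussion above. Keys are the terms
# retained here; each value maps author(s) to the equivalent name.
SUPRACOXAL_SYNONYMS = {
    "precoxa": {                       # the more proximal ring
        "Hansen": "pretrochantin",
        "Denis; Deharveng": "subcoxa 1",
        "Barlet & Carpentier": "anapleurite",
        "Bäcker et al.": "eupleurite",
    },
    "subcoxa": {                       # the more distal of the two rings
        "Hansen": "trochantin",
        "Denis; Deharveng": "subcoxa 2",
        "Barlet & Carpentier": "catapleurite",
        "Bäcker et al.": "trochantinopleurite",
    },
}

def segment_for(term: str) -> str:
    """Return the term retained here for an author-specific synonym."""
    for segment, synonyms in SUPRACOXAL_SYNONYMS.items():
        if term in synonyms.values():
            return segment
    raise KeyError(term)

assert segment_for("anapleurite") == "precoxa"
assert segment_for("trochantinopleurite") == "subcoxa"
```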
Although Bäcker et al.'s argumentation against the terminology employed in the works of Barlet and Carpentier, as well as in those of [START_REF] Snodgrass | Evolution of arthropod mechanisms[END_REF][START_REF] Snodgrass | A contribution toward an encyclopedia of insect anatomy[END_REF], is relevant given their interpretation, it should be noted that the terms anepimeron, anepisternum, catepimeron and catepisternum had been proposed long before. They were introduced by [START_REF] Crampton | A contribution to the comparative morphology of the thoracic sclerites of insects[END_REF] for the pterygote thorax. Later, the terms anepisternite, anepimerite, catepisternite and catepimerite were widely used by Ferris and his school (of which Matsuda was a young member) to describe the subdivisions of the so-called subcoxa of the pterygotes (e.g. [START_REF] Rees | The morphology of Tipula reesi Alexander (Diptera: Tipulidae)[END_REF]; Ferris 1940b), and it is this model that was later adopted and developed by Matsuda. The anapleurite and catapleurite were meticulously studied by Carpentier and Barlet, who clearly observed them in Collembola, Archaeognatha, Zygentoma and Diplura [START_REF] Carpentier | Sur la valeur morphologique des pleurites du thorax des Machilides (Thysanoures)[END_REF][START_REF] Carpentier | Quelques remarques concernant la morphologie thoracique des Collemboles (Aptérygotes)[END_REF][START_REF] Barlet | Remarques sur la musculature thoracique des Machilides (Insectes Thysanoures)[END_REF][START_REF] Barlet | La question des pièces pleurales du thorax des Machilides (Thysanoures)[END_REF][START_REF] Carpentier | Les sclérites pleuraux du thorax de Campodea (Insectes, Aptérygotes)[END_REF][START_REF] Barlet | Le thorax des Japygides[END_REF]. In addition, [START_REF] François | Le squelette thoracique des Protoures[END_REF][START_REF] François | Squelette et musculature thoraciques des Protoures[END_REF] described them accurately in the Protura (Figure 1). This arrangement has also been reported in most orders of pterygote insects by many authors. In this latter clade, the pleural fold distinctly separates the anepisternite from the anepimerite and the catepisternite from the catepimerite. [START_REF] Matsuda | Morphology and evolution of the insect thorax[END_REF][START_REF] Matsuda | Morphology and evolution of the insect abdomen[END_REF] gave an excellent theoretical synthesis of the main works on the fundamental pattern of the thorax of hexapods, following the subcoxal theory. In a previous paper, I retained homologies between the anapleuron and the precoxa, and between the catapleuron and the subcoxa (Deuve 2001a). In this model, two segments more proximal than the coxa are then present at the base of the hexapodan appendage. More recently, [START_REF] Kobayashi | Formation of subcoxae-1 and 2 in insect embryos: the subcoxal theory revisited[END_REF] showed that the subdivision of the subcoxa into two segments can be observed during the embryonic development of Coleoptera, Megaloptera, Neuropterida and Trichoptera. In hemimetabolan orders, the differentiation of the paracoxal fold is more difficult to detect (e.g. [START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF]), but it has been observed at the imaginal stage in several orders [START_REF] Duporte | The lateral and ventral sclerites of the insect thorax[END_REF] and confirmed for the embryonic stage by Y. Kobayashi (personal communication).
However, partly taking into account the remarks of [START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF], Y. Kobayashi (personal communication) considers that both the anapleural and catapleural rings are subdivisions of the "subcoxa 2", and thus correspond to the true subcoxa, whereas the "subcoxa 1", as observed in several insect embryos, is a true proximalmost segment of the limb, located at the border with the tergum or paranotum. This model of three basal segments of the hexapodan appendage (i.e. from most proximal to most distal: precoxa, subcoxa and coxa) fits perfectly with the theoretical interpretation of the crustacean protopodite, which would also be three-segmented (from proximal to distal: precoxa, coxa and basis), bearing in mind that the crustacean coxa is homologous with the hexapodan subcoxa and the basis is homologous with the hexapodan coxa [START_REF] Hansen | Zur Morphologie der Gliedmassen und Mundtheile bei Crustaceen und Insekten[END_REF] (Figure 2). Hansen (1893, p. 198) was the first author to describe, in Argulus, a trisegmented protopodite for the biramous swimming limb of Maxillopoda. In addition, the precoxa seems to belong to both the crustacean groundplan [START_REF] Hansen | Studies on Arthropoda II[END_REF][START_REF] Boxshall | The evolution of arthropod limbs[END_REF] and the hexapodan groundplan [START_REF] Sharov | Basic arthropodan stock[END_REF][START_REF] Boxshall | The evolution of arthropod limbs[END_REF]. [START_REF] Boxshall | Comparative limb morphology in major crustacean groups: the coxa-basis joint in postmandibular limbs[END_REF] reported it in Remipedia and Maxillopoda. It was long considered to be present in Malacostraca (e.g. [START_REF] Hansen | Studies on Arthropoda II[END_REF]) and in Stomatopoda [START_REF] Balss | Stomatopoda[END_REF]. [START_REF] Vandel | Embranchement des Arthropodes (Arthropoda, Siebold et Stannius 1845). Généralités. Composition de l'Embranchement[END_REF] pointed out that this trisegmentation is not primitive, but derived from secondary divisions of a primordial coxa. Recent studies of embryonic development confirm this view, but do not call into question the fundamentally trisegmented nature of the protopodite, probably common to all arthropods, although this hypothesis is still disputable [START_REF] Bitsch | The hexapod appendage: basic structure, development and origin[END_REF]. In a previous work (Deuve 1994, p. 203-204), I had already speculated that the paranotal lobes of the Arthropoda were pleural in nature, and I compared those of the campodeiform larvae of certain Coleoptera to those of amphipods or isopods such as Saduria entomon. In the same vein, Y. Kobayashi (personal communication) recently drew my attention to the coxal plates of some amphipods and isopods, whose embryonic development has been accurately studied. In the late embryo of Orchestia (Peracarida), these plates are flap-like and mobile along the margin of the tergum, and they seem to form part of the limb [START_REF] Ungerer | External morphology of limb development in the amphipod Orchestia cavimana (Crustacea, Malacostraca, Peracarida)[END_REF]. In Porcellio (Oniscidea), they are still distinct but merge with the tergum and retain a shape like a 'paranotal lobe' [START_REF] Wolff | The embryonic development of the malacostracan crustacean Porcellio scaber (Isopoda, Oniscidea)[END_REF].
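The segment-by-segment correspondence just outlined can be stated compactly. The short sketch below (again Python used as a notation; it merely restates the alignment after Hansen 1893 as cited above and is not an independent morphological claim) pairs the two protopodites position by position.

```python
# Positional homologies of the trisegmented protopodite, following the
# alignment discussed above (after Hansen 1893): the crustacean coxa is
# homologous with the hexapodan subcoxa, the crustacean basis with the
# hexapodan coxa, and the precoxae correspond directly.
HEXAPOD_PROTOPODITE = ("precoxa", "subcoxa", "coxa")     # proximal -> distal
CRUSTACEAN_PROTOPODITE = ("precoxa", "coxa", "basis")    # proximal -> distal

HOMOLOGY = dict(zip(HEXAPOD_PROTOPODITE, CRUSTACEAN_PROTOPODITE))

def crustacean_homolog(hexapod_segment: str) -> str:
    """Crustacean protopodite segment homologous with a hexapodan one."""
    return HOMOLOGY[hexapod_segment]

assert crustacean_homolog("subcoxa") == "coxa"
assert crustacean_homolog("coxa") == "basis"
print(HOMOLOGY)  # {'precoxa': 'precoxa', 'subcoxa': 'coxa', 'coxa': 'basis'}
```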
In the light of these different morphologies observed in crustaceans, the narrow anatomical relations between the precoxa and the paranotal lobe will have to be more accurately analysed in Hexapoda in order to understand the tergopleural territories and their respective links with the base of the limb and the true tergum. The pleural regions of Myriapoda and Hexapoda have often been compared (e.g. Füller 1963a,b) and they offer evident similarities, related to terrestrial life, that might be homologies. In particular, two concentric sclerite rings at the base of the locomotory limb seem to belong to the myriapodal groundplan and would correspond to the two proximal true segments, which [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF] named eupleurite and trochantinopleurite, partly following the terminology of [START_REF] Crampton | Notes on the thoracic sclerites of winged insects[END_REF] and the view of [START_REF] Weber | Lehrbuch der Entomologie[END_REF]. In the same vein, [START_REF] Wesener | Sternites and spiracles -The unclear homology of ventral sclerites in the basal millipede order Glomeridesmida (Myriapoda, Diplopoda)[END_REF] pointed out that in Diplopoda the spiracle-bearing plates previously interpreted as sternites were not of a sternal nature, but were instead subcoxal sclerites associated with the apparent limb base. [START_REF] Bäcker | A forgotten homology supporting the monophyly of Tracheata: the subcoxa of insects and myriapods re-visited[END_REF] gave a reasoned description of a possible evolution of pleural structures in Myriapoda and Hexapoda.

The subcoxal theory applied to abdominal segments

Because the abdomen of the Hexapoda does not bear multisegmented locomotory limbs, it is more difficult to interpret its skeletal pattern. Few studies refer to the presence of the subcoxa and precoxa on this tagma. However, ventral sclerites (= ventrites) are classically interpreted as 'coxosternites', referring to the integration of some eupleural elements merged with the genuine sternum, and some abdominal pleurites have often been described, more or less associated with the spiracles (see e.g. the pleurites in Figure 11, named "epipleurites"). Bitsch (1994, p. 120) rightly interpreted the pleural sclerites of the abdomen as "arising from a basal appendicular segment (subcoxa) secondarily incorporated into the body wall". In this interpretation, we can stress a fundamental dissociation between the ventral component of the subcoxa, which is part of the coxosternum, and the more isolated dorsal component, which is embedded into the more laterodorsal body-wall and is often named 'pleurite'. A distinction is also made between the pregenital abdomen (segments I-VII), the appendages of which are rudimentary or absent, and the genital/post-genital abdomen (segments VIII-XI), some segments of which may carry visible appendages in the form of complex external genitalia (segments VIII and IX of the female, segment X of the male) and cerci [START_REF] Bitsch | Morphologie abdominale des Insectes[END_REF]. In particular, the genital segments often carry a pair of gonopods, which are sometimes connected to the tergum by an articular pleurite. The interpretation of these structures is easier in females, whose ovipositor apparatus has sometimes retained its original metameric arrangement and its original connections with the tergum, which is not the case in males.
Recently, embryological studies have made it possible to identify the first buds of limbs and hence to determine homonomies between thoracic and abdominal segments by serial comparison. Thus, [START_REF] Komatsu | Embryonic development of a whirligig beetle, Dineutus mellyi, with special reference to external morphology (Insecta: Coleoptera, Gyrinidae)[END_REF] and [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] have identified in Coleoptera the "subcoxa 1" and "subcoxa 2", which can respectively be interpreted as the precoxa and the subcoxa, more dorsal than the coxa, on the thoracic segments and on abdominal segments I-VIII (Figure 3). These authors correctly presented their results as a contribution to the subcoxal theory (see also [START_REF] Kobayashi | Formation of subcoxae-1 and 2 in insect embryos: the subcoxal theory revisited[END_REF]). [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] demonstrated that the epipleurite of the campodeiform Carabus larva corresponds to the subcoxa (subcoxa 2), whereas the juxta- or peri-spiracular area corresponds to the precoxa (subcoxa 1). In so doing, they showed that the 'epipleural field' I had identified and illustrated (Deuve 2001a) is actually a subcoxal territory (or, in some cases, a subcoxal + precoxal one). I fully agree with this interpretation, which makes my scheme of the hexapodan abdominal segments more coherent. Just as the dorsal parts of both the subcoxa and the precoxa are integrated into the lateral wall of the thoracic segments, where they serve to anchor the appendage to the body, the epipleurite is also integrated into the wall of the abdominal segments; it remains distinct from the coxosternum (with which it may, however, merge in some cases) on the pregenital segments, and it retains the same appendage-anchoring function on the genital segments. By homonomy, these territories have the same identity, as well as the same functional specialisation. In general, the skeletal organisation of the thoracic and abdominal segments is thus homogeneous. A study of the cephalic segments will probably show a similar pattern, with dorsal precoxal and subcoxal areas embedded into the body-wall. The precoxa and subcoxa tend to lose their appendicular morphology, but remain functionally associated with the protruding part of the limb, for which they provide anchoring, support and articulation. In addition, we must keep in mind that if the epipleural areas are subcoxal in nature, they are truly appendicular and must therefore have a corresponding ventral part that is associated with the formation of the secondary sternum. This is precisely what [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] observed in the embryo. In my 2001 paper, I showed that the epipleural areas participate in the specialisation of the genital segments by serving as a substitute for the internalised subcoxosternum (formation of the vaginal ducts) through a medio-ventrad shift.
This process of ventral closure of the genital segments, starting from the primitive seripleural type, leads to the formation of a sympleural abdomen (the epipleurites, of a subcoxal nature, join and merge ventrally) (Figure 4). In a way, this is the formation of a 'tertiary sternum'. In the 'sympleural-I' type, the eighth epipleurites merge and participate in the formation of a 'subgenital plate', while gonopods VIII move rearwards to form a functional ovipositor in association with gonopods IX. In the 'sympleural-II' type, the eighth epipleurites form the first subgenital plate, but the ninth epipleurites also merge together to form a second subgenital plate, while the gonopods are no longer visible. The resulting secondary gonopore is then displaced to the rear of segment IX. The primitive (plesiomorphic) seripleural type is present in Archaeognatha, Zygentoma, Dictyoptera and Dermaptera, as well as in some exceptional coleopterans belonging to the genus Eustra, which have retained a neotenic abdominal structure ([START_REF] Deuve | L'abdomen et les genitalia des femelles de Coléoptères Adephaga[END_REF]; Deuve 2001b) (Figure 5). The sympleural-I type is present in Odonata, orthopteroids, Hymenoptera, Neuropterida, Coleoptera and Diptera; the sympleural-II type occurs in Trichoptera, Lepidoptera and Mecoptera (Deuve 2001a). However, it should be noted that the condition in Mecoptera is peculiar. The most plesiomorphic type of external genitalia occurs in Nannochoristidae, apparently corresponding to the sympleural-II type, but the female gonopore is located at the rear of segment VIII rather than at the rear of segment IX [START_REF] Mickoleit | Die Genital-und Postgenital Segmente der Mecoptera-Weibchen. I. Das Exoskelet[END_REF]. In the other families, an apparent second subgenital plate is produced by elements that would belong to the eighth segment ("coxosternites VIII") but are apparently located on segment IX, while the true epipleurites IX ("coxosternites IX") regress and finally disappear [START_REF] Mickoleit | Die Genital-und Postgenital Segmente der Mecoptera-Weibchen. I. Das Exoskelet[END_REF]. [START_REF] Bitsch | Morphologie abdominale des Insectes[END_REF], however, did not adopt this interpretation. In addition, a "genital chamber" is formed at the level of segments VIII and IX, which contains vestiges of the gonopods on its inner surface [START_REF] Grell | Der Genitalapparat von Panorpa communis L. Zoologische Jahrbücher[END_REF], as discussed and criticised by [START_REF] Mickoleit | Die Genital-und Postgenitalsegmente der Mecoptera-Weibchen (Insecta, Holometabola). II. Das Dach der Genitalkammer[END_REF]. The best argument in favour of this seripleural and sympleural-type model is that there are no combined structures: the seripleural type, with retained metamerism, precludes the presence of a genuine subgenital plate when the vaginal duct is formed; the sympleural-II type, with a second subgenital plate, is incompatible with the presence of a true ovipositor. However, a problematic aspect of my interpretation was the structure of the genital segments in some females of Mecoptera. [START_REF] Mickoleit | Die Genital-und Postgenital Segmente der Mecoptera-Weibchen. I. Das Exoskelet[END_REF] recognised the absence of the genuine sternites VIII and IX as a result of the morphogenesis of the genital chamber.
He also described the presence of a small "laterotergite" lying near the spiracle of the pregenital segments, but also on segment VIII, where it is adjacent to the subgenital plate. [START_REF] Kristensen | Heterobathmia valvifer n. sp.: a moth with large apparent 'ovipositor valves' (Lepidoptera: Heterobathmiidae)[END_REF] considered this to be an argument against my interpretation of the epipleural nature of the ventral sclerites VIII. The same difficulty arises with Embioptera, which also have "laterotergites" in addition to the epipleurites [START_REF] Klass | The female genitalic region and gonoducts of Embioptera (Insecta), with general discussions on female genitalia in insects[END_REF]. However, as I have previously explained (Deuve 2001a), the 'laterotergite' of some Mecoptera is located beside the spiracle and not below it, as is usually the case for an epipleurite. In fact, if we consider that the epipleurite represents the dorsal part of the subcoxa, it is likely that the small so-called 'laterotergite' of segment VIII in some Mecoptera is precoxal, occupying the same juxtaspiracular location as this sclerite in the larva of Carabus [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF]. A similar situation is found in Neuropterida. [START_REF] Liu | Homology of the genital sclerites of Megaloptera (Insecta: Neuropterida) and their phylogenetic relevance[END_REF] admitted that "the predominant sternite-like sclerite of the female abdominal segment VIII represents the fused gonocoxites VIII" (it is actually formed from the merged epipleurites VIII). However, this subgenital plate is separated from the tergite by a fairly large membranous area containing the spiracle (Liu et al. 2016, "Fig. 12"). This area would represent a large part of the dorsal precoxal territory. The same situation can be observed in Coleoptera, but the dorsal parts of the subcoxa (epipleurite) and of the precoxa may merge together, or merge with the tergum and even with the coxosternum or the subcoxosternum, as in segment VIII of certain Scarabaeoidea [START_REF] Ritcher | Spiracles of adult Scarabaeoidea (Coleoptera) and their phylogenetical significance. 1. The abdominal spiracles[END_REF], or on the contrary become subdivided to allow increased mobility of the pregenital abdomen, as in Staphylinidae [START_REF] Naomi | The lateral sclerites of the pregenital abdominal segments in Coleoptera (Arthropoda: Hexapoda)[END_REF]. It should be noted that a sclerification does not necessarily follow the limits of the different anatomical fields. For example, among beetles, the epipleurite is located below the spiracle in Carabidae larvae, but the spiracle lies in the centre of a large sclerite (epipleurite + precoxite?) in certain larvae of Lampyridae (Deuve 2001a, "Fig. 6 and 7"). It is also well known that spiracles may move, some may disappear and yet others may appear, in the evolution of hexapods, especially in the most exobasal orders, such as the Diplura (see discussion in [START_REF] Kristensen | The groundplan and basal diversification of the hexapods[END_REF] and in Klass & Kristensen [START_REF] Klass | The ground plan and affinities of hexapods: recent progress and open problems[END_REF]). Their location is certainly less variable in the pterygotes, where the location of the spiracular line can reasonably be used as a landmark.
[START_REF] Ritcher | Spiracles of adult Scarabaeoidea (Coleoptera) and their phylogenetical significance. 1. The abdominal spiracles[END_REF] documented the migration of abdominal spiracles in some Scarabaeoidea (Coleoptera), but it is mainly the areas of sclerification of the abdominal wall that differ, depending on whether they are covered and protected by the elytra or, in contrast, exposed to the environment. As Snodgrass (1935b, "Fig. 2") clearly illustrated, the location of the spiracular line can appear to be variable in different orthopteran families, but it is actually the areas of sclerification of the abdominal segments that have moved: in Tettigidea and Rhipipteryx the spiracle lies in the pleural membrane, whereas in Melanoplus it is located on the lateral margin of the so-called 'tergite', which apparently includes the tergum + the merged precoxae. Recently, [START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF] showed that in Gryllus bimaculatus the subcoxa remains undivided during embryonic development and the spiracle appears on the lateral margin of the so-called tergum. Thus, the precoxa cannot be seen; there is only a 'paratergal furrow', a simple and rather indistinct linear depression that delimits the paranotal lobe bearing the spiracle. It is questionable whether the differences between the observations of [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] on the Carabus embryo and those of [START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF] on a Gryllus embryo are due to artefacts produced by the techniques used, or whether they are instead real differences between the organisms. In fact, the buds and territories observed on a given embryo are already partly dependent on the future morphology of the imago, being precursors of the latter; that is a very general rule in any developmental process. In many orthopterans, the spiracles are located on the apparent tergite (e.g. see above concerning the abdominal segments in Melanoplus). In this case, it is not unexpected that the same structure can be observed at an earlier embryonic stage of development. Alternatively, one might try to retain the hypothesis of [START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF] and consider a tergal (or, more exactly, tergopleural) origin of the spiracle-bearing territory. In this case, the "subcoxa 1" described by [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] in Carabus, apparently homologous with the 'tergal' territory observed in a Gryllus embryo, would also be of a tergopleural nature and could not be interpreted as the precoxa. The problem with this hypothesis is that it contradicts the formation of this "subcoxa 1" through a division of the primordial subcoxa, observed by [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] during embryonic development.
It must be recognised that the problem of the anatomical relationships between the spiracle and the lateral margins of the 'tergum' in hexapods is not yet resolved. For example, the anatomical studies of [START_REF] François | Squelette et musculature thoraciques des Protoures[END_REF] on proturans clearly indicate the presence of a spiracle much more dorsal than the anapleurite and located on the lateral margin of the so-called tergum (Figure 1). Another example amongst many others is that of the embryo of Pedetontus (Archaeognatha), in which the spiracles appear to be located on the margins of the so-called tergum [START_REF] Machida | External features of embryonic development of a jumping bristletail, Pedetontus unimaculatus Machida (Insecta, Thysanura, Machilidae)[END_REF]. The informative work of [START_REF] Niwa | Evolutionary origin of the insect wing via integration of two developmental modules[END_REF] on the latter illustrates the complex genetic relationships controlling the morphogenesis of both the paratergal and the limb territories. In papers on Hemiptera, Sweet noted the presence of a double row of abdominal sclerites, which he named "laterotergites" [START_REF] Sweet | The external morphology of the pre-genital abdomen and its evolutionary significance in the order Hemiptera (Insecta)[END_REF], more or less following the terminology of [START_REF] Dupuis | Données nouvelles sur la morphologie abdominale des Hémiptères Hétéroptères et en particulier des Pentatomoides[END_REF][START_REF] Dupuis | Les Rhopalidae de la faune française (Hemiptera, Heteroptera). Caractères généraux -Tableaux de détermination -Données monographiques sommaires[END_REF]. Later, he interpreted them as genuine pleurites, which he called hypopleurites and epipleurites [START_REF] Sweet | Comparative external morphology of the pregenital abdomen of the Hemiptera[END_REF], using the terminology of Hopkins (1909) also adopted by [START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF]. According to Sweet, these pleurites would form part of the fundamental organisation of the insect abdomen. It should be noted that in Heteroptera it is the 'hypopleurite' in Sweet's sense, not the epipleurite, that bears the spiracle. This in turn raises new questions. The fundamental pattern of the insect abdomen has been illustrated by diagrams that explain the specialisation of the female genital segments with internalisation of the subcoxosternal plates (Deuve 2001a). These diagrams are reproduced here with slight modifications. Figure 4 illustrates the three abdominal types occurring in female insects: seripleural, sympleural-I and sympleural-II. A precoxite (i.e. a precoxal sclerite) is figured next to the spiracle. In my first attempt to introduce a particular sclerite, the "epipleurite", into the groundplan of the hexapodan segment, I mentioned the following homology: "to be cited as an example among other structures, the 'gonangulum' of [START_REF] Scudder | Reinterpretation of some basal structures in the insect ovipositor[END_REF], which would correspond to the epipleurite of the IXth abdominal segment" (Deuve 1994, p. 203, translated from French). Indeed, the interpretative model proposed there makes it possible to understand the nature of the gonangulum, which corresponds to the epipleurite IX ([START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF]; Deuve 2001a, p. 210-211).
In the sympleural-I type, the gonangulum is a more or less triangular sclerite that has retained its articular function, connecting the ovipositor (gonopods VIII and IX) to the ninth tergite. In the sympleural-II type, the gonangulum does not have this articular function, because the epipleurite IX is a component of the second subgenital plate. This explains why [START_REF] Scudder | Comparative morphology of insect genitalia[END_REF] pointed out the so-called 'absence' of a gonangulum in Mecoptera, Diptera, Trichoptera, Lepidoptera and Coleoptera. In the case of Coleoptera, the abdomen of females indeed belongs to the sympleural-I type, but gonopods VIII are reduced and closely associated with gonopods IX, so that there is no typical ovipositor and the epipleurites IX (gonangula) are, in contrast, strongly developed and do not look at all like 'small triangular articular sclerites'. The gonangulum has sometimes been confused by coleopterists with the lateral extremities of the expansions of the so-called "tergum IX" [START_REF] Bils | Das abdomenende weiblicher terrestrische lebender Adephaga une seine Bedeutung für die Phylogenie[END_REF][START_REF] Burmeister | Der Ovipositor der Hydradephaga (Coleoptera) und seine phylogenetische Bedeutung unter besondere Berücksichtigung der Dytiscidae[END_REF], following the interpretation of [START_REF] Mickoleit | Ueber Ovipositor der Neuropteroidea und Coleoptera und seine phylogenetische Bedeutung[END_REF]. More recently, Hünefeld et al. (2012) interpreted the holometabolan groundplan as lacking the gonangulum, but I do not share this view. Simply put, this epipleurite IX no longer has the morphological and functional appearance of a gonangulum as described by Scudder. In a recent and meticulous work on the gonangulum, [START_REF] Klass | The gonangulum: a reassessment of its morphology, homology, and phylogenetic significance[END_REF] have presented an interpretation that is, in its broad lines, the same as the one I had previously developed ([START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF]; Deuve 2001a), but using the term laterocoxite (Bitsch 1973b, [START_REF] Bitsch | Morphologie abdominale des Machilides (Thysanura) -II. Squelette et musculature des segments génitaux femelles[END_REF]) instead of epipleurite. In a study of the blattarian ovipositor, Klass (1998) had rightly compared the structure observed with that known in Archaeognatha, which led him to compare the gonangula, visible in the eighth and ninth segments, with the laterocoxites described by Bitsch in machilids. Later, in successive works on the abdomen of other exobasal hemimetabolan orders, notably Dermaptera, [START_REF] Klass | The female abdomen of the viviparous earwig Hemimerus vosseleri (Insecta : Dermaptera : Hemimeridae), with a discussion of the postgenital abdomen of Insecta[END_REF][START_REF] Klass | The female genitalic region in basal earwigs (Insecta: Dermaptera: Pygidicranidae (s.l.)[END_REF] was progressively led to generalise the term laterocoxite.
Bitsch gave the name "laterocoxite" to a small angular sclerite (sometimes visible in the genital segments of certain Archaeognatha, more rarely in the pregenital segments, and sometimes merged with the coxa) that was already known as the "subcoxa" [START_REF] Bekker | K stroyeniyu i proiskhozhdeniyu naruzhnykh polovykh pridatkov Thysanura I Hymenoptera [Structure and origin of the external genital appendages of Thysanura and Hymenoptera[END_REF], the "laterosternite" [START_REF] Gustafson | The origin and evolution of the geniatlia of the Insecta[END_REF] or the "laterotergite IX" [START_REF] Livingstone | On the morphology and bionomics of Tingis. Duddleidae Drake (Heteroptera: Tingidae). Part III -Functional morphology of the abdomen, male and female genitalia and abdominal scent glands[END_REF], and which [START_REF] Smith | Evolutionary morphology of external insect genitalia. 1. Origin and relationships to other appendages[END_REF] had correctly interpreted as the subcoxa, independently of the older study by [START_REF] Bekker | K stroyeniyu i proiskhozhdeniyu naruzhnykh polovykh pridatkov Thysanura I Hymenoptera [Structure and origin of the external genital appendages of Thysanura and Hymenoptera[END_REF]. Bitsch proposed the identity of this sclerite with the piece named gonangulum by Scudder ([START_REF] Scudder | Reinterpretation of some basal structures in the insect ovipositor[END_REF], 1961a, b, 1964, [START_REF] Scudder | Comparative morphology of insect genitalia[END_REF]) and, at the same time, suggested that it probably has a subcoxal nature. I also wrote about the laterocoxite as described in female machilids: "If this 'laterocoxite' corresponds to Scudder's gonangulum, as Bitsch suggested, it would be an epipleurite" (Deuve 2001a, p. 219). Thus, the link is established and we reach a consensus: the laterosternite of [START_REF] Gustafson | The origin and evolution of the geniatlia of the Insecta[END_REF] (described on the genital segments of female machilids, but misinterpreted because it is non-sternal), the gonangulum of [START_REF] Scudder | Reinterpretation of some basal structures in the insect ovipositor[END_REF] (described only in segment IX of the abdomen of some female insects), Bitsch's (1973b) laterocoxite (described on the genital segments of female machilids and correctly interpreted) and Hopkins' (1909) epipleurite (described on the abdominal segments of imagos and larvae of both sexes in Coleoptera) are one and the same fundamental sclerite of the hexapodan segment, of a subcoxal nature. In this context, it should be noted that [START_REF] Bekker | K stroyeniyu i proiskhozhdeniyu naruzhnykh polovykh pridatkov Thysanura I Hymenoptera [Structure and origin of the external genital appendages of Thysanura and Hymenoptera[END_REF] was the first author to correctly interpret the epipleurite IX as the subcoxa in machilids (Archaeognatha), lepismatids and nicoletiids (Zygentoma), as well as in Gryllidea (Bekker 1932a,b). This consensus might be extended further: Kukalová-Peck ([START_REF] Kukalová-Peck | Origin of the insect wing articulation from the arthropodan leg[END_REF][START_REF] Kukalová-Peck | New Carboniferous Diplura, Monura, and Thysanura, the hexapod ground plan, and the role of thoracic side lobes in the origin of wings of Insecta[END_REF], 1992, 1997, 2008) proposed an original model of the structure of the insect segment and limb based on the study of some Carboniferous fossil hexapods.
I published a severe critique of this model (Deuve 2001a, p. 220-221) because, in my opinion, Kukalová-Peck confused the epipleurite with the subcoxa. However, if we now consider that the epipleurite does indeed have a subcoxal nature, then Kukalová-Peck's model would regain part of its relevance. For this author, the hexapodan limb is composed of several segments integrated into the lateral wall, these being (from the most dorsal to the most distal): epicoxa (archipleuron), subcoxa, coxa and trochanter, followed by the multisegmented telopodite. Kukalová-Peck's model, based on compression fossils, the study of which is very delicate, can still be criticised. Shortcomings worth noting are the following: the subalar and basalar thoracic sclerites are claimed to be subcoxal (1983, p. 1652); the segment ("epicoxa") described as the most proximal is confused with the paranotal lobe (for this reason, the "epicoxa" of Kukalová-Peck, located above the spiracle, is not really synonymous with the precoxa); the subcoxa described by Kukalová-Peck on the abdomen lies near the spiracle and resembles the precoxa; and the trochanter is dissociated from the telopodite. In fact, it is clear that in most of her illustrations, notably the diagrammatic representation of the typical segment of a protoinsect (Kukalová-Peck 1983, "Fig. 4") and the representations of a dipluran ("Fig. 7") and a paleodictyopteroid ("Fig. 1"), this author displaces all the respective segments of the appendage. This results in the "epicoxa" at the location of the paranotal lobe, the "subcoxa" at the location of the precoxa, the "coxa" at the location of the subcoxa, and the "trochanter" at the location of the coxa. It should also be noted that the term "epicoxite" can hardly be adopted for the precoxa or the paranotal lobe, because it has already been used by [START_REF] Becker | Zur morphologischen Bedeutung der Pleuren bei Ateloceraten[END_REF] and Füller (1963a) in a completely different sense: it designates a sclerite of a subcoxal nature in Myriapoda. The more precise study by [START_REF] Kobayashi | Embryonic development of Carabus insulicola (Insecta, Coleoptera, Carabidae) with special reference to external morphology and tangible evidence for the subcoxal theory[END_REF] on a campodeiform Carabus larva demonstrates the location of the precoxa ("subcoxa 1") on the ventral flank of the paranotal lobe, next to the spiracle (Figure 6a). This close association of the precoxa and the paranotal lobe evokes the model of [START_REF] Rasnitsyn | A modified paranotal theory of insect wing origin[END_REF] of a dual origin, paranotal and appendicular, of the pterygote wing (see also [START_REF] Niwa | Evolutionary origin of the insect wing via integration of two developmental modules[END_REF][START_REF] Prokop | Paleozoic nymphal wing pads support dual model of insect wing origins[END_REF][START_REF] Elias-Neto | Tergal and pleural structures contribute to the formation of ectopic prothoracic wings in cockroaches[END_REF][START_REF] Mashimo | Embryological evidence substantiates the subcoxal theory on the origin of pleuron in insects[END_REF]). Kukalová-Peck (1978), inspired by the paper of [START_REF] Wigglesworth | Evolution of insect wings and flight[END_REF], considered the wing as an exite of her "epicoxa". This idea of a wing originating from an exite (of the precoxa?) was taken up in developmental genetic studies (Averof & Cohen 1997).
In a recent study of the postabdomen of Odonata, [START_REF] Klass | The female abdomen of ovipositor-bearing Odonata (Insecta:Pterygota)[END_REF] pointed out the presence of a gonangulum (epipleurite IX) divided into two distinct sclerites, which he named "antelaterocoxa" and "postlaterocoxa". He homologised the former with the "precoxite" of [START_REF] Bitsch | Morphologie abdominale des Machilides (Thysanura) -II. Squelette et musculature des segments génitaux femelles[END_REF] and the latter with Bitsch's "laterocoxite". It should be noted in this respect that Bitsch (1974, p. 102, footnote) had taken care to specify that his term "precoxite" was purely descriptive and in no way referred to a presumption of precoxal nature. [START_REF] Emeljanov | The evolutionary role and fate of the primary ovipositor in insects[END_REF] used Klass' observation to give the gonangulum a fundamentally dual nature, with "sternal" and "pleural or pleuroparatergal" components. The question of the identity of the antelaterocoxa in Odonata still deserves scrutiny, but Klass' (2008) interpretation of a gonangulum in either a one-piece or a bipartite condition seems to correspond to the nature of the epipleurite, of either subcoxal or subcoxal + precoxal identity. While awaiting precise analyses, it is important to assume that an epipleurite, in its current broad sense, may include a precoxal component, often merged with the subcoxal one. This is consistent with the delineation of the epipleurite as initially given by [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF]. The present interpretation of the limb, with a protopodite composed of three segments (precoxa, subcoxa and coxa), seems coherent. A subsidiary problem is presented by the coxa of the female gonopods, which sometimes appears to be bi-segmented. In Coleoptera, the gonopod IX is often (primitively?) composed of two segments, followed by the so-called stylus, which is presumably of a telopodal nature [START_REF] Deuve | L'abdomen et les genitalia des femelles de Coléoptères Adephaga[END_REF] (Figure 6b,c). Following [START_REF] Mickoleit | Ueber Ovipositor der Neuropteroidea und Coleoptera und seine phylogenetische Bedeutung[END_REF] and [START_REF] Bils | Das abdomenende weiblicher terrestrische lebender Adephaga une seine Bedeutung für die Phylogenie[END_REF], Hünefeld et al. (2012) reported the existence of these two joints as "a cranial element of gonocoxite IX articulating with the laterotergite IX [what I named epipleurite IX], and a caudal element bearing a stylus". In his most recent studies on adephagans, Ball names these joints "gonocoxite 1" and "gonocoxite 2", i.e. coxa 1 and coxa 2 respectively (e.g. [START_REF] Ball | Taxonomic review of the Tribe Melaenini (Coleoptera: Carabidae), with observations on morphological, ecological and chorological evolution[END_REF]). The question arises as to whether coxa 2 might be the trochanter, which has become well developed and adapted for oviposition in soil. In this case, the more distal so-called stylus of coleopterans would be the second segment of the telopodite, not the first. However, this idea is still speculative and lacks support.
One might think that, quite simply, the coxal segment is dimeric in some coleopterans, showing a kind of annulation found in the limb structure of many arthropods [START_REF] Boxshall | The evolution of arthropod limbs[END_REF], but it is noticeable that some muscles connect the basal joint to the distal one [START_REF] Bils | Das abdomenende weiblicher terrestrische lebender Adephaga une seine Bedeutung für die Phylogenie[END_REF]. In addition, this bi-segmented structure is known in Hymenoptera and, according to [START_REF] Mickoleit | Ueber Ovipositor der Neuropteroidea und Coleoptera und seine phylogenetische Bedeutung[END_REF], it would belong to the groundplan of the Holometabola. It should also be noted that the gonopods VIII of certain Pygidicranidae (Dermaptera) have been described as bi-jointed (Deuve 2001a, "Fig. 28"; Klass 2003, "Fig. 68-69"), but they would be gonapophyses.

Rearward displacement of the gonopore

Epitopy is the apparent rearward displacement of the gonopore as a result of the internalisation of the subcoxosternal areas involved in the formation of the ectodermal genital ducts. This term means that it is not, strictly speaking, a rearward migration of the gonopore, but rather an 'epitopic location' of it, i.e. peripheral in relation to the whole invaginated area forming the genital pouch, the protovagina and finally the vaginal duct. In females, a location of the secondary gonopore at the rear of segment VIII, or even at the rear of segment IX, is observed in most pterygotes. In contrast, gonopore orthotopy corresponds to the primitive condition, in which the primary female gonopore lies at the posterior margin of segment VII, corresponding to its original metameric position. True orthotopy has been reported in Archaeognatha, Zygentoma and, very exceptionally, in partially neotenic Coleoptera belonging to the genus Eustra ([START_REF] Deuve | L'abdomen et les genitalia des femelles de Coléoptères Adephaga[END_REF]; Deuve 2001a). I have mentioned elsewhere (Deuve 2001b) that a subterranean life may lead to partial neoteny, in relation to environmental stability [START_REF] Gould | Ontogeny and phylogeny[END_REF], and this may reveal some primitive patterns by prematurely stopping the morphogenetic process of the organ concerned. In the case of the genus Eustra, it is the internalisation process of the subcoxosternal plates VIII and IX that has been prematurely halted during development. The apparent morphogenetic shift of the female gonopore during pre-imaginal development has been studied by several authors, especially Singh-Pruthi (1924), [START_REF] Heberdey | Zur Entwicklungsgeschichte vergleichende Anatomie und Physiologie der weiblichen Geschletsausführwege der Insekten[END_REF] and Metcalfe (1932a,b). [START_REF] Snodgrass | Morphology of the insect abdomen, Part II. The genital ducts and the ovipositor[END_REF], [START_REF] Weber | Lehrbuch der Entomologie[END_REF][START_REF] Weber | Grundriss der Insektenkunde[END_REF] and [START_REF] Vandel | Embranchement des Arthropodes (Arthropoda, Siebold et Stannius 1845). Généralités. Composition de l'Embranchement[END_REF] published useful overviews, but they used the diagrams of Heberdey, which are misleading. Indeed, there is no internal connection between the three genital pouches, which are located, respectively, at the rear of sterna VII, VIII and IX, but there is a concomitant invagination of the subcoxosternal areas VIII and IX.
[START_REF] Styš | Reinterpretation of the theory on the origin of the pterygote ovipositor and notes on the terminology of the female ectodermal genitalia of insects[END_REF] correctly understood this formation of a "gynatrium" by "invagination of the ventral portions of the VIIIth and IXth urites". Figure 7 illustrates the location of the gonopore ("gon.") and its epitopy as a result of the ventral connection of the epipleurites VIII and the epipleurites IX in the sympleural types I and II, respectively. It should be noted that under these conditions the invaginated 'sternal' areas correspond to the secondary sternum and necessarily include some ventral components of the subcoxa and precoxa (but not the coxal ones, which correspond to the gonopods). These are therefore subcoxosternal areas (and even, as it should be written, precoxal-subcoxosternal ones). In the case of the sympleural-II type, the gonopods themselves have regressed and could have part of their territory included in this invagination. Thus, the secondary genital ducts (vaginal ducts) of insects are not only sternal, but subcoxosternal or sometimes coxosternal in nature. These considerations need to be taken into account in any study of the musculature supporting the ectodermal genitalia. On the other hand, the anatomical delimitation of these internalised territories is hardly possible in the current state of our knowledge (Figure 8). However, on the vaginal duct or bursa copulatrix of some Adephaga (Coleoptera), [START_REF] Bils | Das abdomenende weiblicher terrestrische lebender Adephaga une seine Bedeutung für die Phylogenie[END_REF] was able to distinguish territories connected by muscles to segment VIII and others connected by muscles to segment IX: the former, ventral, are connected to the epipleurites VIII; the latter, dorsal, lateral or caudal, are connected to the epipleurites IX or, more rarely, to the tergite IX or the coxae of the ninth segment. It is interesting to note that the formation of the arthropod hypopharynx, which is ectodermal, follows a similar process, with the participation of several cephalic segments and the invagination of their respective subcoxosternal or sternal areas. The formation of the stomodeum, which is possibly comparable, has been interpreted in a variety of different ways (see review in Bitsch 1973a). In the same vein, the internalisation of the primary sternum in the thoracic segments was clearly described and illustrated by [START_REF] Weber | Die Gliederung der Sternopleuralregion des Lepidopterenthorax. Eine vergleichende morphologische Studie zur Subcoxaltheorie[END_REF] in the context of the nascent "subcoxal theory". It can also be noted that, just as in the thorax the coxa maintains an articular function and remains associated with the prominent appendage while dissociating itself from the subcoxal territories embedded in the body-wall, so at the level of the genital segments the coxa remains associated with the gonopod, of which it is a part, while dissociating itself from the so-called coxosternal plate, which is in reality subcoxosternal. This subcoxosternal plate includes only the ventral component of the subcoxa, not the dorsal component corresponding to the epipleurite, which is not internalised. In the case of the insect abdomen, a crucial question is whether the whole subcoxosternum is internalised, or whether its lateral margins could instead remain in their external position and, in some cases, participate partly in the formation of the final subgenital plate.
Just as the thoracic subcoxosternal territories are more internalised in holometabolans than in hemimetabolans (Weber; Ferris), it can be supposed that the same is true for the subcoxosternal plates of the genital segments, the two processes probably being, in genetic terms, parallel. For example, Klass (2008) considers the ventrite VIII of the Odonata to be of a 'coxosternal' nature, but with inclusion of the epipleurites (which he names "laterocoxites"). This would therefore be, more exactly, an epipleuro-subcoxosternum, because the coxae are excluded. The same author indicates that the embiopteran subgenital plate is atypical, being complex in nature and subject to interpretation in various hypothetical ways (Klass & Ulbricht 2009). In Coleoptera, and certainly also in Neuropterida, which have very similar abdominal structures (Mickoleit; Liu), it is clear that the totality of the subcoxosternal area is internalised, because the two well-delimited epipleurites are connected medio-ventrally to each other, are juxtaposed, and often even merge to form a kind of 'tertiary sternum'. In segment VIII, this merging of the epipleurites is sometimes so perfect that the resulting subgenital plate (of a strictly sympleural nature), named ventrite VIII, is still often confused by most coleopterists with 'sternite VIII' or 'coxosternite VIII', which is incorrect. The reality of the connection of the epipleurites is particularly well illustrated in the imaginal stage of certain Paussidae (Caraboidea), in which all intermediate steps can be observed between an orthotopic state (with the subcoxosternum still present between the epipleurites and the gonopore located at the rear of segment VII) and an epitopic state (with complete suturing of the epipleurites) (Figure 9). Another argument is the presence, in the genus Luperca (Caraboidea Siagonidae), of a membranous septum that connects the internal margins of the two epipleurites VIII to the gonopods VIII, which are associated with the ninth segment (Deuve 1988; 1993, p. 145, "Fig. 219"). The plesiomorphic presence of the gonopods VIII in their primitive location, between the two epipleurites VIII, is also observed in some Hydradephaga (Deuve 2001a, "Fig. 14"), whereas in other beetles they have become closely associated with the gonopods IX to form an ovipositor and then regress. Concomitantly, the previously lateral epipleurites VIII become connected to each other ventromedially. [The following errors in Deuve (2001a) need to be noted: the references to figures "18" and "19" in the text (p. 207-208) should be corrected to 20 and 21, and those to figures "21" and "22" in the text (p. 208) should be corrected to 18 and 19.]
In many beetles, the merging of the epipleurites VIII is so perfect that the resulting ventral sclerite is identical in appearance to a coxosternite. In particular, strong longitudinal muscles are observed that connect ventrite VII to ventrite VIII. This point deserves special attention because it has often been used as an argument to interpret ventrite VIII as being homonomous with coxosternite VII. While it is true that muscle insertions can often be used as landmarks to delimit anatomical territories, this principle should not be applied dogmatically. For example, it was the presence of muscles directly connecting the tergum to the coxa that led Snodgrass (1952, 1958) to reject the subcoxal theory. In his study on the abdomen of female machilids, Bitsch (1973b) refers to the theoretical possibility of "a secondary displacement of the anterior attachment of the muscle 71" (translated from French). It is known that muscle insertions can actively migrate during development, especially during metamorphosis (Williams). Also, neoformations of muscles adapted to a new specialised function are frequent in insects. For example, Hünefeld et al. (2012) report the neoformation of a transverse muscle between the appendages VIII of the Antliophora. The postabdominal musculature of the Adephaga (Coleoptera) was carefully studied by Bils and by Burmeister (1976). In Hygrobia, there are muscles connecting epipleurite VIII to gonopod IX and other muscles connecting gonopod VIII to gonopod IX. These are functional specialisations related to the complex movements of the ovipositor. Likewise, the muscles connecting the epipleurites VIII to coxosternite VII in beetles are similar in shape and arrangement to the strong, ventral, antagonistic, longitudinal muscles that connect coxosternite VII to coxosternite VI, or coxosternite VI to coxosternite V. These are functional necessities related to the mobility of the whole abdomen and its ability to retract. The analogy of musculatures therefore cannot be used to assert that ventrite VIII is homonomous with ventrite VII. A study of the abdominal muscles of female Paussidae (Caraboidea) would have to be undertaken in the future to show, step by step, the muscular homologies and rearrangements between the partially neotenic species of the genus Eustra, which have an orthotopic gonopore and a seripleural abdomen, and those belonging to other genera of the tribe Ozaenini, in which the epipleurites VIII move closer to each other until they become connected and finally merge, thereby accompanying the gonopore epitopy and closing the abdominal venter (Figure 9). In Coleoptera, I have shown that ventrites VIII and IX are formed by the suture or merging of the epipleurites in males as well (Deuve 1988).
The differences are that the gonopods IX of males are not visible on the ninth ventrite and that the aedeagus is mainly formed from segment X (Dupuis). The study of a gynandromorph of Cetoniidae (Coleoptera) showed the co-existence in the same specimen of the female (segment IX) and male (segment X) ectodermal genitalia, the latter located behind the former (Deuve 1992), thus providing evidence that they are not homologous. Ventrite IX, which has the misleading shape of an ordinary coxosternite in males (Figure 10f), is 'feminised' in this gynandromorph and shows some subdivisions that have the appearance of epipleurites and rudimentary gonopods (Figure 10e). While it seems clear that ventrite VIII of Coleoptera (see Dupuis, for Scarabaeoidea) and Neuropterida has arisen through the juxtaposition or merging of just the two epipleurites to form the first subgenital plate, without any subcoxosternal component, it is crucial to consider whether the same pattern can be generalised to all insects showing an abdomen of the sympleural type. Indeed, it is conceivable that in this type only a medial part of the subcoxosternum has been invaginated to form the genital ducts and that some lateral parts of it remain closely associated with the epipleurites. However, while the bipartite composition of ventrite VIII has often been documented in various orders, an obvious tripartite arrangement (i.e. with a median relictual subcoxosternal area) has never been reported. In pterygote insects, it appears that the coxosternum of each abdominal segment forms an undivided ventral plate (= ventrite), with only a few known examples of fragmentation. It can therefore be hypothesised that the entire genital subcoxosternal plate (i.e. in segments VIII and IX) is internalised when the vaginal ducts are formed. In contrast, the epipleurites are often well separated from the coxosternum. In many beetles, such as Stictotarsus or Systolosoma (Figure 11), the epipleurites are contiguous with the lateral margins of the coxosternite, but are not merged with it. Such a merging occurs in many pterygotes, and we can then refer to a resulting epipleuro-coxosternum occupying the ventral side of all the pregenital segments of the abdomen. It appears important to insist on this anatomical and functional dissociation between the epipleurite on one side and the coxosternite on the other, on all abdominal segments. On the genital segments, the gonopods are dissociated from the subcoxosternum and therefore the epipleurites must a fortiori be dissociated. This dissociation is fundamental in the pterygote groundplan. In female Coleoptera, Verhoeff had already pointed out the presence of a "bipartite sternite 8" and Tanner (1927) had written: "in some cases it [the VIIIth sternite] is shield-shaped and divided into two parts on the middle by a small strip of membrane".
In female Mecoptera, the symmetrical bipartite nature of ventrite VIII was noted by Mickoleit (1975), who admits the absence of a true sternum. Indeed, it is certainly in Mecoptera that this bipartite nature of ventrite VIII is most clearly visible. This author generalised the observation, writing: "Furthermore, we can assume that a larger sternal plate is lacking in the groundplan of the genital segments of the pterygotes" (Mickoleit 1975, p. 102, translated from German). However, in Mickoleit's model, which differs significantly from mine on this point, only the genuine primary sternum is internalised and the two remaining components of ventrite VIII retain the name "coxosternite VIII". Some students working with him, such as Bils and Burmeister, also continued to use "coxosternite VIII" to designate the ventral sclerite of the adephagan beetles, whereas the homonomous ventrites IX are named by them, as by Mickoleit, "tergites IX" or "laterotergites IX". Mickoleit (ibid.) stated precisely: "Therefore, a true sternal plate would not exist at all at the venter of the 8th segment of the Pterygota. Whether and to what extent the subgenital plate of the pterygote groundplan is involved in the formation of the sclerites known as gonocoxosternites is not determined" (translated from German). After studying the postabdominal musculature in a nannochoristid (Mecoptera), Hünefeld and colleagues finally followed Mickoleit and concluded: "The ventral sclerotised parts of segments VIII and IX of mecopterid females (and endopterygote females in general) are derivatives of the genital appendages of these segments and definitively not true sternal sclerotisations". These so-called appendicular formations are subcoxal in nature and correspond to the epipleurites VIII and IX. In addition, this model can most likely be generalised to endopterygote males, as shown for coleopterans (Deuve 1988). But in my model, not only the primary sternum but the whole subcoxosternum is internalised and is therefore absent from the external skeleton of the genital segments in both males and females of the endopterygotes. Although the subgenital plate has often been interpreted as resulting from a rearward expansion of segment VII (see review in Bitsch), Kristensen (1999) wrote concerning the subgenital plate of a heterobathmiid (Lepidoptera): "While a derivation from the venter VII territory remains a possibility, it could also be a fusion product of the segment VIII appendage bases, as hypothesized for similar formations in the Mecoptera and Trichoptera
(Mickoleit 1975; Nielsen)". Although by 1999 the interpretation of these sclerites as being formed from only appendicular components was beginning to be accepted, Kristensen (1999) still referred to Mickoleit's model, with internalisation of only the primary sternum. It is also important to clearly distinguish between two different types of 'subgenital plates', which are not homologous and should not be confused. Posterior expansions of coxosternite VII that protect or support the genital segments are present in many insects and have sometimes been referred to collectively as the 'subgenital plate'. In Lepismatida, Rousset described a "languette" (small tongue-like lobe), which is a prominent membrane fold belonging to segment VII. In Coleoptera, ventrite VII is usually dilated rearwards to support the retracted postabdomen. This is not the typical subgenital plate of the sympleural-type abdomen, which consists of the two joined epipleurites VIII. Among hemimetabolan insects, the Dermaptera have a seripleural-type abdomen (Deuve 2001a; Klass), whereas the sympleural-I type is seen in orthopteroids. Their subgenital plate has been the subject of several studies without clear conclusions, probably because the two kinds of 'subgenital plates' (expansion of coxosternum VII versus ventral junction of the epipleurites VIII) have often been confused under this name. Qadri, however, interprets the subgenital plate of Tettigonioidea as resulting from the fusion of lateroventral formations of the "sternite VIII" (read "ventrite VIII"), which might be epipleurites. Snodgrass (1935b) indicates that the vaginal ducts of Locustana are formed from a medial groove along the entire length of the eighth ventrite during organogenesis. These observations are consistent, at least in part, with the hypothesis of a subgenital plate formed by the secondary juxtaposition of the epipleurites VIII. The abdomen of female Odonata has been well studied by Klass (2008) in a work that takes into account musculature and innervation. This author essentially agrees with my interpretation of the female genital abdominal segments (Deuve 2001a) by recognising in the same way the presence of epipleurite IX, which he names "laterocoxite IX", but with an anterior and a posterior component. Regarding segment VIII, Klass (2008, p. 137) admits that the ventral plate VIII resembles the coxosternum of the pregenital segments, but that, in contrast to these, the coxae VIII are not included. In such a case, it is reasonable to assume that the subcoxosternum VIII is not included either.
However, in the same work, Klass (2008, p. 117) rejects my interpretation of the ventral plate VIII as resulting from the medial merging of the epipleurites VIII (his "laterocoxites VIII"), for the sole reason that I named "coxosternum VII" the ventral sclerite of the preceding segments, which in my model would then be "exclusively non-homonomous". That was not my opinion: in many pterygotes the epipleurites of the pregenital segments are indeed merged with the lateral margins of the coxosternum to form a single ventral sclerite. This is certainly the case in Mecoptera, judging by the musculature described by Hünefeld and colleagues for the pregenital and genital segments, and probably also in Odonata. I agree that, in this precise case of the Zygoptera, it would have been more accurate to use the full term 'epipleuro-coxosternum VII' rather than 'coxosternum VII'. Instead, I opted for a simpler term, as was usual at that time. In addition, the rearrangements of the musculature of the genital segments during metamorphosis are significant. A study by Matushkina on the transformation of the abdominal musculature of an aeshnid during metamorphosis shows the absence of several muscles of segment VIII in the imago, mainly "intersegmental sternal" and "intersegmental pleuro-sternal" muscles, in relation to the development of the internal and external genitalia. Conversely, these muscles remain present in the pregenital segments. There is therefore a considerable rearrangement of the ventral musculature of segment VIII, and even more so of segment IX. In a subsequent paper, Klass & Ulbricht (2009) explicitly admitted a process of internalisation of the primary sternum of the genital segments in the introduction to their study of Embioptera: "the ventral plates of both segments 8 and 9 probably lack a sternal component, yet we call them coxosternites; and in cases of absence of discrete laterocoxites it is often disputable whether laterocoxal sclerotisations are absent or included in the coxosternite, and how large this portion is". However, I cannot follow these authors when they write a few lines earlier: "the laterotergites and perhaps also the pleurites fall into the category of epipleural sclerotisations as defined by Deuve (e.g. 2001)" (Klass & Ulbricht 2009, p. 120). In fact, it is rather the "laterocoxite", as described in detail by Klass since his work on Dermaptera, that most precisely corresponds to the epipleurite as I characterised and illustrated it in the fundamental structure of the insect abdomen in both sexes (Deuve 2001a, "Fig. 10 to 31"), its subcoxal identity being another question.
The "laterotergites and pleurites" described by [START_REF] Klass | The female genitalic region and gonoducts of Embioptera (Insecta), with general discussions on female genitalia in insects[END_REF] in female embiopterans probably correspond to tergopleural and/or precoxal formations. Moreover, it is precisely this fundamental distinction between the epipleurite and a small sclerite (precoxite?) lying next to the spiracle that I pointed out (Deuve 2001a) in response to an objection from [START_REF] Kristensen | Heterobathmia valvifer n. sp.: a moth with large apparent 'ovipositor valves' (Lepidoptera: Heterobathmiidae)[END_REF] in their study of the subgenital plate of a heterobathmiid (Lepidoptera) (see above). That said, additional studies are still needed to assess the importance and limitations of the internalisation of all, or part, of the eighth subcoxosternum in Odonata and in other orders of hemimetabolan insects having a sympleural postabdomen. The question is whether the eighth ventrite consists only of the two connected epipleurites VIII-as I presume-or whether it may also retain some traces of the former subcoxosternum and, if so, in what proportion. However, if coxae VIII are not included in the eighth ventral plate of the Odonata [START_REF] Klass | The female abdomen of ovipositor-bearing Odonata (Insecta:Pterygota)[END_REF]) and if a sternal component is also lacking in the Embioptera [START_REF] Klass | The female genitalic region and gonoducts of Embioptera (Insecta), with general discussions on female genitalia in insects[END_REF], as in all pterygote insects [START_REF] Mickoleit | Die Genital-und Postgenital Segmente der Mecoptera-Weibchen. I. Das Exoskelet[END_REF], almost only the epipleurites VIII remain 'available' to form the subgenital plate. Coxal territories being absent (gonopods), the lateral margins of a subcoxosternite would theoretically be of a strictly subcoxal nature (ventral parts of the subcoxae). Therefore, at most, we could imagine some subcoxal remnants of the coxosternum which would then merge with the epipleurites to form a resulting tripartite or quadripartite subgenital plate that is totally subcoxal in nature. Yet nothing like this has ever been observed and it is not the simplest hypothesis. Except for a few details and the semantic replacement of the term epipleurite by laterocoxite, I finally see no fundamental difference between Klass's model and mine. Conclusions We have arrived at a fairly consensual model to describe the fundamental structure of the insect skeleton. In particular, the importance of the subcoxal area-not only on the thorax, but also on the abdomen-no longer seems to be in doubt. Named epipleurite [START_REF] Hopkins | Contributions toward a monograph of the scolytid beetles. I. The genus Dendroctonus[END_REF][START_REF] Böving | An illustrated synopsis of the principal larval forms of the order Coleoptera[END_REF][START_REF] Deuve | Sur la présence d'un "épipleurite" dans le plan de base du segment des Hexapodes[END_REF][START_REF] Sweet | Comparative external morphology of the pregenital abdomen of the Hemiptera[END_REF]), subcoxa [START_REF] Bekker | K stroyeniyu i proiskhozhdeniyu naruzhnykh polovykh pridatkov Thysanura I Hymenoptera [Structure and origin of the external genital appendages of Thysanura and Hymenoptera[END_REF](Bekker , 1932a,b;,b;[START_REF] Smith | Evolutionary morphology of external insect genitalia. 1. 
(in reality, only the dorsal part of the subcoxa, the ventral part being integrated into the secondary sternum), laterosternite (Gustafson; Matsuda), gonangulum (Scudder) or laterocoxite (Bitsch 1973b; Klass), a sclerite of a subcoxal nature has been identified on all segments of the abdomen, and in both sexes. The same applies to the cephalic segments, for which developmental genetic techniques have recently revealed subcoxal territories at the base of the gnathal appendages (Coulcher et al.). Also, I have stressed the dissociation, in anatomical and probably also genetic terms, between the dorsal part of the subcoxa, which corresponds to the epipleurite, and the ventral part, which is more closely associated with the primary sternum to form a coxosternum or a subcoxosternum. It is noteworthy how much the existence of a subcoxa has been debated for more than a century, and that it has only recently been recognised in the mandible or on all segments of the abdomen. One might conclude from this that this segment is cryptic, but that is not the case. On the contrary, it actually occupies a large area and has important functions in the patterning of the hexapodan skeleton. In the thorax, the subcoxa plays a major role in the formation of the secondary sternum and of the lateral wall lying between the coxa and the tergum. In the abdomen, the subcoxa plays a similar role in the formation of a secondary sternum (coxosternum), but it also plays this role in the pleural regions below the spiracle and, on the genital segments, in the support and articulation of the gonopods and/or in the formation of a new ventrite (subgenital plate) that is a sort of 'tertiary sternum'. Whereas the coxa has acquired a predominant articular function during evolution, the subcoxa (and probably also the precoxa that originates from it) has become specialised for fixing the limb to the body wall and carrying the appendage, while maintaining its relative mobility. This can be observed on the abdominal and thoracic segments, whose general architecture is ultimately quite similar. There is not only structural but also functional homonomy and identity. The presence or absence of a precoxa at the base of the hexapodan appendage still remains a subject of discussion. There are good arguments to support the existence of this proximalmost segment, resulting from a subdivision of the embryonic subcoxal bud, in agreement with the tri-segmented protopodite model of the crustaceans. The hypothesis of a precoxal segment is obviously crucial, particularly for understanding the anatomical origin of the wings of the Pterygota. Some points still remain to be clarified.
Whereas the hypothesis of the internalisation of the whole subcoxosternum in the genital segments of Holometabola seems to be firmly established, the possibility in hemimetabolan insects that some marginal subcoxosternal elements might remain external and, if so, in what proportions, still needs to be evaluated. Also, we have yet to identify the nature of certain small enigmatic sclerites, such as the "tergopleurites" described by Prell and the "laterotergites" described by François in Protura and by Bäcker et al. in some Chilopoda. More importantly, the problem of the origin of wings in pterygotes needs to be clarified, in relation to the identity of the axillary sclerites. In a recent paper, Mashimo stressed the difficulty in delineating the crucial anatomical boundary between the tergum and the arthropod appendage. The question of the fundamental location of the spiracle, on the lateral margins of the so-called tergum or at the very base of the appendage, remains wide open. These wing and spiracle problems are reflected in the abundant literature dealing with the pleura of arthropods for over a century. It may be asked whether the intrinsic difficulty in locating this tergal/pleural boundary is related to an earlier appendicular nature of the lateral margins of the so-called tergum itself. As has already been pointed out elsewhere (Deuve), the dorsum of Euarthropoda is fundamentally trilobed (Lauterbach) and it is consistent to suppose that the paranotal lobes are tergopleural in nature, rather than strictly tergal formations. Given that the ancestor of Euarthropoda was probably a type of marine worm, it seems likely that the ventral part of its pleura became specialised for locomotion on the seabed, whereas the dorsal part became specialised for respiration and gill protection. This same pleural origin of the base of the arthropodal limb and of the paranotal lobe would make it easier to understand their close functional (and genetic?) association, as well as the 'dual' origin of the wings of pterygote insects. I hope this speculative idea will be read with indulgence; after more than a century of debate and some probably excessive attempts at schematisation, it must be kept in mind that there is still much work to be done to fully clarify our understanding of the arthropod pleura.

Glossary of the main terms used:

Appendage: Eupleural set corresponding to a specialised organ and consisting of several successive segments; in hexapods these are the precoxa, subcoxa, coxa and telopodites. (Synonym: limb)

Coxosternum: Ventral sclerite corresponding to the merging of the true sternum, the coxae and some ventral components of the subcoxae. In theory, ventral components of the precoxae would also be included.
The coxosternum, often termed "secondary sternum" in the context of the subcoxal theory, or simply ventrite, is clearly visible as the only ventral sclerite of the pterygote pregenital abdominal segments. (Plural: coxosterna)

Dorsum: The dorsal face of the body (Audouin 1820). In theory, a sclerite of the dorsum would be a "dorsite", but this term is rarely used.

Epipleurite: (Hopkins). A separate sclerite clearly visible in the skeleton of many arthropods, especially in pterygotes. It corresponds to the dorsal part of the subcoxa, which dissociates itself from the ventral part to remain in a pleural position between the base of the functional appendage and some more dorsal sclerites. It may also have a precoxal component. Conversely, the ventral part of the subcoxa tends to associate with the sternum to form a subcoxosternum or a coxosternum. However, the epipleurite can in some cases merge with the coxosternum. The epipleurite has often been referred to as a "laterotergite" by Snodgrass, especially in his famous book "Principles of insect morphology" (1935a). The gonangulum of Scudder is the abdominal epipleurite IX (Deuve 2001a). The laterocoxite (Bitsch 1973b) might be a synonym of epipleurite.

Epipleuro-coxosternum: Ventral sclerite resulting from the merging of the epipleurites and the coxosternum. It is often observable on the pregenital abdominal segments of insects.

Eupleurite: (Crampton). Pleural sclerite forming part of the arthropodal appendage.

Exobasal: (Deuve 2013, p. 5, footnote). In a phylogenetic tree, only an ancestor or a node (i.e. a cladogenetic event) or, at the limit, an entire lineage may be said to be 'basal'; an extant clade branched basally is exobasal, but its most recent common ancestor is basal in the phylogeny of the group. Evidently, the sister-group of an exobasal clade is also exobasal. For example, Archaeognatha is an exobasal order of Hexapoda, which is not the case for Lepidoptera.

Paranotal lobe: Often considered as a lateral lobe of the tergum, the paranotal lobes may be tergopleural formations (Deuve). In the theory of the dual origin of the pterygote wing (Rasnitsyn), they have become associated with basal elements of the appendage (eupleural elements) to form the functional wing.

Pleuron: Lateral area located between the tergum and the sternum. A sclerite of the pleuron is a pleurite. (Plural: pleura)

Precoxa: The basalmost segment of the arthropodal appendage. Its existence is still under debate. It has been observed in crustaceans, myriapods and hexapods. It would correspond to the 'subcoxa 1'. A sclerite of the precoxa is a precoxite. (Plural: precoxae)

Sternum: Area located ventrally between the pleura, i.e. between the appendages. A sclerite of the sternum is a sternite.
(Plural: sterna)

Subcoxa: Basal segment of the appendage (homologue of the coxa of crustaceans) located between the precoxa and the coxa. It corresponds to the 'subcoxa 2'. It is often cryptic, being embedded in the body wall of arthropods (subcoxal theory), with a function of anchoring and supporting the appendage. The epipleurite corresponds to its dorsal component, which retains a pleural location and remains more or less isolated from the coxosternal plate (coxosternum). A sclerite of the subcoxa is a subcoxite. (Plural: subcoxae)

Subcoxosternum: Ventral plate resulting from the merging of the true sternum and some ventral components of both the subcoxae and precoxae. The coxae are not integrated in it. It should be noted that the 'secondary sternum' of the pterygote thoracic segments is a subcoxosternum rather than a coxosternum, the two coxae being separated from it and functioning as articular segments of the appendages. Although they are often confused by authors, the distinction between subcoxosternum and coxosternum is essential, especially at the level of the abdomen, in order to fully understand the structure of the sclerites of the pregenital segments on the one hand (with a coxosternum) and that of the genital segments on the other (with internalisation of the subcoxosternum). (Plural: subcoxosterna)

Tergopleurite: (Prell). Pleural sclerite not strictly belonging to the arthropodal appendage, located between the base of the appendage and the true tergum.

Tergum: Area located dorsally between the pleura. A sclerite of the tergum is a tergite. (Plural: terga)

Venter: The ventral face of the body. A sclerite of the venter is a ventrite. (It should be noted that Snodgrass (1935a) considered the venter as synonymous with the sternum, and Styš the ventrite as synonymous with the epipleuro-coxosternum, which he named "zygosternum".)

Figure 1. Mesothoracic segment of proturans (after François 1964, modified). a. Lateral view of the external morphology in Acerentomon.
There is a so-called laterotergite, more dorsal than the anapleurite and located on the lateral margin of the tergum. It might be a tergopleurite. b. Section of the appendage and of more dorsal areas in Eosentomon. Note the position of the spiracle above the anapleurite, on the lateral margin of the tergum, in a probably tergopleural area.

Figure 2. Homologies of the respective segments of the appendage of an Anaspides sp. (Crustacea, Syncarida) (a), a Palaeozoic monuran (b) and a machilid (Archaeognatha) (c) (after Sharov 1966). The precoxa of the machilid was named "pleurite" by Sharov. It is noteworthy that the hexapodan subcoxa corresponds to the coxa of the crustaceans and that the hexapodan coxa corresponds to the basis of the crustaceans.

Figure 3. Carabus insulicola (Coleoptera, Carabidae). Embryo fixed at 60% DT (percentage of total developmental time, from oviposition to hatching), showing the thoracic segments and four abdominal segments (after Kobayashi et al. 2013, modified). The different colours refer to the apparent longitudinal fields; blue: precoxal, red: subcoxal, yellow: coxal and telopodal. The discoid organ lying in the coxal area of the first abdominal segment is the pleuropodium.

Figure 4. Diagrams of the three theoretical abdominal types in female ectognathan hexapods: seripleural, sympleural-I and sympleural-II types. Note that the borders of the coloured regions for the coxosternal plates are arbitrary, being intended only to indicate the existence of coxal, subcoxal and sternal components; for simplicity, no precoxal components are represented. In the seripleural type, the original metameric arrangement is maintained; the primary gonopore lies behind segment VII. In the sympleural-I type, the gonopods VIII join the gonopods IX to form an ovipositor; the epipleurites IX correspond to gonangula with an articular function; the epipleurites VIII join ventrally to form the first subgenital plate that closes the abdomen; the secondary gonopore is apparently located at the rear of segment VIII. In the sympleural-II type, the gonopods are regressed; the epipleurites IX join ventrally to form a second subgenital plate; the secondary gonopore is apparently located at the rear of segment IX. Colours refer to the different limb segments; blue: precoxal, red: subcoxal, yellow: coxal and telopodal.

Figure 5. Female ectodermal genitalia of Eustra leclerci (Coleoptera, Paussidae), external views (a, c) and internal views (b, d) of the ventral side. a and b. Segment IX: gonopods and subcoxosternal plate. The protovagina is shaped, with formation of the spermatheca, accessory gland and vaginal apophysis, but the whole is still separated from the oviductal pouch. c and d. Segments VIII and IX. There is a large subcoxosternal plate VIII between the epipleurites VIII. The oviductal pouch is located anterior to this plate, in the form of a membranous funnel into which the common oviduct opens.

Figure 6. Carabus sp. (Coleoptera, Carabidae). a. Abdominal segment of a larva, lateral view. Note the presence of a precoxal territory on the ventral flank of the paranotal lobe, near the spiracle. b. Distal extremity of the female abdomen. Note the distinctly bipartite morphology of ventrite VIII, formed by the juxtaposition of the two epipleurites. Gonopod IX is dimeric. c. Idem, scanning electron micrograph. The different colours indicate the longitudinal fields; blue: precoxal, red: subcoxal, yellow: coxal and telopodial.
Note that the borders of the coloured regions for the coxosternal plates are arbitrary, being intended only to indicate the existence of coxal, subcoxal and sternal components; for simplicity, no precoxal components are represented.

Figure 7. Diagrams illustrating the rearward displacement of the gonopore in a female insect. Note that the colours of the coxosternal plates are here arbitrary, intended only to indicate the existence of coxal, subcoxal and sternal components; for simplicity, no precoxal component is represented. In the orthotopic stage (a. seripleural type), the metameric organisation is maintained and the gonopore is located at the rear of segment VII. Note the separation of the primary gonopore (oviductal pouch) and the protovagina. The dashed line indicates the area that will be internalised to form the epitopic gonopore, located at the rear of segment VIII (b. sympleural-I type) or at the rear of segment IX (c. sympleural-II type). Note the anatomical and functional dissociation of the epipleurites and the coxosternal plates.

Figure 8. Female ectodermal genitalia of Pachyteles granulatus (Coleoptera, Paussidae). The dashed line separates the oviductal components (segment VII) from the vaginal components (segments VIII and IX).

Figure 9. Illustration of the abdominal closure by juxtaposition of the epipleurites VIII in various female Paussidae (Coleoptera). a. Eustra lebretoni, in which the epipleurites VIII are in a lateral position, still separated from each other by the presence of the subcoxosternal plate VIII. Some elements of subcoxosternum IX are still visible between the gonopods. The oviductal pouch is separated from the protovaginal formations. b. Sphaerostylus punctatostriatus, in which the epipleurites VIII are still in a lateral position, but the subcoxosternal areas VIII and IX are internalised and the oviduct opens into the vaginal duct. c. Tachypeles pascoei, in which the epipleurites VIII and IX have joined and are juxtaposed to form a new ventral plate or 'tertiary sternum'. A ligula basalis can be observed, perhaps representing a vestige of the primary gonopore.

Figure 10. Internal (a-c) and external (d-f) genitalia in Cotinis mutabilis (Coleoptera, Cetoniidae) (after Deuve 1992): a and d, female; b and e, gynandromorph; c and f, male. a. In the female, the ectodermal genital ducts are well shaped, with oviduct, bursa copulatrix, spermatheca and spermathecal gland. b. In the gynandromorph, the female components are vestigial, with a poorly shaped bursa copulatrix but without oviduct or spermatheca; the male components are fully formed, with a complete aedeagus, but its rotation is incomplete (90° instead of 180°). c. In the male, the aedeagus is complete, with 180° rotation. d. In the female, the external genitalia show the lateral epipleurites IX (subcoxae) and developed gonopods VIII (coxae). e. In the gynandromorph, ventrite IX is decomposed into distinct epipleural and coxal elements, but the VIII-IX costal area shows a 'genital ring' and a normally shaped spiculum gastrale. f. In the male, ventrite IX (a synsclerite) would be formed from totally merged epipleurites IX.

Figure 11. Distal extremity of the abdomen in beetles, illustrating the dissociation of the epipleurites and coxosternites. a. Stictotarsus duodecimpustulatus (Dytiscidae), lateral view (after Burmeister 1976, modified).
It is noteworthy that epipleurite VIII acquires mobility following its involvement in the ventral closure of segment VIII, which follows the internalisation of coxosterna VIII and IX. b. Systolosoma breve (Trachypachidae). Ventral and dorsal views, the tergites having been cut along the midline. The alignment of the epipleurites is clearly visible. Subcoxosternal plates VIII and IX are entirely internalised. Note that the borders of the coloured regions for the coxosternal plates are arbitrary, being intended only to indicate the existence of coxal, subcoxal and sternal components; for simplicity, no precoxal components are represented. c. Systolosoma breve, lateral view.

Abbreviations used in the illustrations:
A1, A2 … An: abdominal segments 1 to N
acc.gl.: accessory gland
aed.: aedeagus
ana: anapleurite
b.c.: bursa copulatrix
ba: basis
ca: carpus
cata: catapleurite
cx: coxa
cxst: coxosternum
d: dactylus
def.gl.: defensive gland
ej.duct: ejaculatory duct
epl.: epipleurite
eppd: epipodite
exo: exite
fm: femur
fun.: funnel
gen.ring: genital ring
gon.: gonopore
gpd.: gonopod
is: ischium
lig.bas.: ligula basalis
lpnt: paranotal lobe
ltg: laterotergite
mb: membrane
mer: merus
oss: subapical setose organ (including the so-called stylus)
ov: oviduct
pcx: precoxa
pl: pleurite
pp: periproct
pr.: proctodeum
pro: propodus
prtvg: protovagina
ptar: pretarsus
scx: subcoxa
scxst: subcoxosternum
sp.: spiracle
sp.gastr.: spiculum gastrale
spm.: spermatheca
sty: stylus
subgen.pl.: subgenital plate
synscl.: synsclerite
T1, T2, T3: thoracic segments 1-3
tar: tarsus
tb: tibia
tg: tergum or tergite
tlpd: telopodite
tr: trochanter
v.: ventrite
vg: vaginal duct
vg.ap.: vaginal apophysis

Acknowledgements

I am very grateful to the two reviewers, whose corrections and constructive comments were helpful for improving the manuscript, to Mark Judson (MNHN), who kindly corrected the English text, and to Jocelyne Guglielmi (MNHN), who found for me the little-known articles of Ernest Becker (or "Bekker").
122,231
[ "1240975" ]
[ "519585" ]
01741085
en
[ "chim" ]
2024/03/05 22:32:07
2018
https://hal.sorbonne-universite.fr/hal-01741085/file/MATCHEMPHYS-D-17-02747R1%281%29_sans%20marque.pdf
Sana Ben Moussa, Afef Mehri, Michel Gruselle, Patricia Beaunier, Guylène Costentin, Béchir Badraoui

Combined effect of magnesium and amino glutamic acid on the structure of hydroxyapatite prepared by the hydrothermal method

Keywords: hydrothermal method, apatite surface, glutamic acid, hybrid compound

Introduction

Among the minerals of economic interest, apatites, mostly hydroxy- (CaHAp) and fluoro-apatites (FAp), are of considerable importance in numerous research areas (Elliot; Govindaraj; Gomez-Morales). Apatites are used in several applications, such as sorbents, catalysts and biomaterials (Ben Moussa; Gruselle). They are also components of bones and teeth (Wang; Kanwal). Apatites belong to the phosphate family of compounds and have the general formula M10(PO4)6Y2, where M is a divalent cation (Ca, Sr, Ba, Pb…) and Y a hydroxyl (OH) or halide (F, Cl) ion (Bulina; Lim). Apatites may accommodate several different substituents in their structure. This ability to trap substituents leads to the formation of total or partial solid solutions. The substitution processes are controlled by crystallographic rules related mainly to ionic radius, charge, electronegativity and polarisability (Hamad; Ben Moussa).
Previous works (Kaur; Badraoui; Aissa) have shown that steric hindrance related to cations bigger in size than calcium ions plays an important role in limiting the cationic substitution processes. Magnesium is undoubtedly one of the most important bivalent ions associated with biological apatites (Geng). It has been verified that, in calcified tissues, the amount of magnesium associated with the apatitic phase is higher at the beginning of the calcification process and decreases as calcification proceeds (Gu; Bigi; Burnell). This interest is reinforced by the ability of such mixed apatites to form hybrid organic-inorganic materials by reaction with amino acids (Li; Yu; Hong; Padilla; Sanchez-Salcedo; Zhou; Carrodeguas). The expected benefit of introducing an amino acid such as glutamic acid into magnesium-modified apatites stems from the ability of magnesium to coordinate amino acids more strongly than calcium ions do, and consequently to bind proteins more firmly and in greater proportion on the apatite surface. To this end, we have carried out a structural, morphological and chemical investigation of the combined effect of magnesium and glutamic acid on the hydroxyapatite structure. We have also studied the interaction between glutamic acid and the apatite surface. The results show that the acid forms a complex on the apatite surface.
As glutamic acid functionalization should modify the electrostatic interactions, the corresponding change in the surface charge of the powder was monitored by zeta-potential measurements and by the [H+] consumed as a function of pH.

Experimental and methods

Synthesis

The mixed Mg/CaHAp of general formula Ca(10-x)Mgx(PO4)6(OH)2 (x = 0, 0.5 and 1.0), named Ca(10-x)MgxHAp, were synthesized using the hydrothermal method (Bigi). A demineralised-water solution (14 mL, 0.75 M) of a mixture of the two nitrates Ca(NO3)2·4H2O and Mg(NO3)2·6H2O in the desired proportions is added to an aqueous (NH4)2HPO4 solution (25 mL, 0.25 M). The pH of the final solution is adjusted to 10 by adding an NH4OH solution (d = 0.89, purity = 28%). The final solution is transferred to an autoclave and the mixture is maintained at 120°C for 12 hours. After filtration and washing with hot demineralised water, the mineral is dried at 120°C overnight. The hybrid materials were prepared according to the same experimental protocol, with the addition of a quantity of the organic reagent glutamic acid (GA) to the phosphate solution before pH adjustment (Bigi). The samples are named Ca(10-x)MgxHAp-GA(n), where n is the value of the glutamic acid/CaHAp molar ratio (n = 10 and 20).

Powder characterization

N2 adsorption-desorption isotherms were recorded at 77 K using a Micromeritics ASAP 2000 instrument. The Brunauer-Emmett-Teller equation was used to calculate the specific surface area (SBET). X-ray diffraction (XRD) analyses were carried out on a PANalytical X'Pert Pro diffractometer using Cu-Kα radiation (λ = 1.5418 Å, θ-θ geometry), equipped with an X'Celerator solid-state detector and a Ni filter. The 2θ range was from 20 to 70° with a step size Δ2θ = 0.0167°. The experimental patterns were compared to standards compiled by the Joint Committee on Powder Diffraction and Standards (JCPDS cards) using the X'Pert HighScore Plus software [29]. The infrared (IR) absorption spectra of the samples were obtained using a Spectrum Two 104462 IR spectrophotometer equipped with a diamond ATR setup, in the range 4000-400 cm-1. Nitrogen sorption isotherms of the dried powders were recorded at 77 K using an EMS-53 sorptometer and KELVIN 1040/1042 (Costech International). The points of zero charge (PZC) and isoelectric points (IEP) of the samples were determined by zeta-potential measurements using a Malvern Nano ZS. Suspensions of each powder were prepared in aqueous solution using NaCl (0.1 M) as a background electrolyte, starting in an alkaline medium and stopping at pH = 4, under N2 at 25°C (Wu). The titrations were carried out on suspensions of the different apatite samples, obtained by adding 0.15 g of apatite to 30 mL of electrolyte (NaCl) and then 1.5 mL of 0.1 M NaOH. The titrant was 0.1 M hydrochloric acid prepared from 1 M HCl, brought to the same ionic strength as the electrolyte by the addition of NaCl. The phosphorus and calcium contents were obtained by ICP-OES on a Horiba Jobin Yvon Activa instrument. The thermal analysis of the carbon content was carried out using a SETARAM SETSYS 1750 instrument. Heating was performed in a platinum crucible under air flow at a rate of 10°C/min up to 800°C.
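Since the tables report molar ratios while ICP-OES returns elemental mass fractions, a small conversion is involved. The following sketch, written for illustration only (the function name and the wt% values are ours, not taken from Table 1), shows how a measured composition translates into the (Ca+Mg)/P ratio discussed below.

# Illustrative sketch (hypothetical wt% values): converting ICP-OES
# mass fractions into the (Ca+Mg)/P molar ratio of an apatite sample.
M_CA, M_MG, M_P = 40.078, 24.305, 30.974  # molar masses in g/mol

def cation_to_p_ratio(wt_ca, wt_mg, wt_p):
    """(Ca+Mg)/P molar ratio from elemental mass fractions (wt%)."""
    return (wt_ca / M_CA + wt_mg / M_MG) / (wt_p / M_P)

# Example with invented mass fractions for a Ca9Mg1HAp-type powder:
print(f"{cation_to_p_ratio(wt_ca=35.1, wt_mg=2.4, wt_p=18.0):.2f}")  # ~1.68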
For transmission electron microscopy (TEM) investigations, samples were prepared by dispersing the powders in dry ethanol and depositing the slurry on a copper grid covered with a thin carbon film. High-resolution transmission electron microscopy (HRTEM) observations were performed on a JEOL JEM 2010 transmission electron microscope equipped with a LaB6 filament and operating at 200 kV. The images were collected with a 4008 × 2672 pixel CCD camera (Gatan Orius SC1000). Circular dichroism (CD) experiments were performed in the solid state using 5 mg of powder dispersed in nujol between NaCl pellets (Castiglioni). The measurements were performed on a JASCO J-815 spectropolarimeter. The scans were recorded over the 190-300 nm wavelength range with the following parameters: 0.5 data pitch, 2 nm bandwidth and 100 nm/min scanning speed, and are the result of 3 accumulations.

Results and discussion

Elemental analysis

The results of the chemical analysis of the mixed CaMgHAp samples prepared with glutamic acid are reported in Table 1. The CaHAp sample shows a Ca/P molar ratio very close to the targeted stoichiometric value of 1.67. Across the apatite series, the (Ca+Mg)/P ratio decreases from 1.68 for the starting CaHAp to 1.55 for Ca9Mg1HAp-GA(20). The presence of the organic anion in the precipitated material is attested and quantified by the total carbon analysis. The carbon amount increases with the Mg concentration; this greater affinity of glutamic acid for the mixed CaMgHAp could be explained by the higher electronegativity of magnesium (χCa = 1.00, χMg = 1.31) (Pauling), in agreement with the results previously reported for CaCuHAp modified by polyaspartic acid (Othmani) and CaZnHAp modified by tartaric acid (Turki). The larger uptake of carbon in the CaMgHAp-GA samples would explain their loss of stoichiometry. This indicates that our samples are indeed hydroxyapatite-glutamic acid composites. For the samples CaHAp, Ca9.5Mg0.5HAp and Ca9Mg1HAp, the increase in carbon amount with the Mg concentration is explained by the disorder induced by the magnesium substitution, which promotes the incorporation of carbonates.

Thermal analysis

The thermogravimetric (TG) curves of the Ca9Mg1HAp, Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20) samples are reported in figure 1. The thermal decomposition shows a first weight loss between 50°C and 200°C, assigned to the removal of physisorbed water. The second one, between 200°C and 500°C, corresponds to the elimination of the organic matter of the glutamic acid. The weight loss associated with this second process allows the relative amount of glutamic acid in the composite hybrids to be evaluated. The values obtained for carbon, expressed as wt% of the solid product, are reported in Table 2. The relative amount of glutamic acid increases with its increasing concentration in the reaction with CaHAp.
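For readers who want to reproduce such numbers from their own thermograms, the organic content follows directly from the masses read at the plateaus bounding the 200-500°C combustion step. The minimal sketch below uses made-up masses (the real values are read from the curves in figure 1); the function name is ours.

# Minimal sketch (hypothetical masses): organic content from the
# 200-500 C weight-loss step of a TG curve, as used for Table 2.
def organic_wt_percent(m_200, m_500, m_initial):
    """Weight loss between the 200 C and 500 C plateaus, in wt% of the sample."""
    return 100.0 * (m_200 - m_500) / m_initial

# Example with invented masses in mg (real values come from figure 1):
print(f"{organic_wt_percent(m_200=9.65, m_500=9.15, m_initial=10.0):.1f} wt%")  # 5.0 wt%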
Figure 2 reports the differential thermal analysis (DTA) curves of the samples. These curves display an unexpected endothermic effect associated with water desorption. This effect was already reported for Mg-modified hydroxyapatites [Yasukawa et al.; Diallo-Garcia et al.]. It was assigned to the fact that heating in the presence of physisorbed water first induces a structuring effect. The latter may be related both to the surface relaxation at the solid-water interface upon water release [Ben Osman et al.] and to the polarization process of the OH groups of the columns, known to occur at 200°C, which initiates proton mobility inside the columns [Nakamura et al.]. An exothermic effect is observed in the temperature range 200-500°C, with a peak top at 300°C, for the Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20) samples. This peak, which is absent in the DTA plot of non-modified Ca9Mg1HAp, corresponds to the combustion of the organic material. Furthermore, the intensity of these peaks increases with increasing grafted amount. Their presence confirms that the prepared samples correspond to hydroxyapatite-glutamic acid composites, similar to what was previously found for hydroxyapatite modified with glycine and sarcosine [Ben Moussa et al.].

Table 2. Results of TG analysis of ungrafted and grafted apatites (weight losses; see Figure 1).

Infrared investigation

The IR spectra recorded for apatites prepared with or without glutamic acid are shown in Figure 4. The vibrations in the range 1800-1200 cm⁻¹ are summarized in Table 3. In particular, the bands observed at 1598, 1445 and 1261 cm⁻¹ (Fig. 4b, 4c), which are not present in the spectrum of Ca9Mg1HAp (Fig. 4a), can be attributed to sorbed glutamic acid [Navarrete et al.]. The vibration located at 1635 cm⁻¹ is attributed to adsorbed water [Garcia-Ramos et al.]. The higher water content in the presence of glutamic acid is consistent with the decrease in the (Ca+Mg)/P ratio. In order to rule out the hypothesis of a simple mechanical mixture between glutamic acid and apatite, further data were recorded on a mechanical mixture of Ca9Mg1HAp and glutamic acid containing the same relative amounts as the Ca9Mg1HAp-GA(20) sample. The FT-IR spectrum of this mixture, reported in Fig. 4d, displays a number of bands due to the two discrete components of the mixture, and indicates the absence of specific interaction between the carboxylic groups of the glutamic acid and the calcium ions of hydroxyapatite.
In contrast, the spectra of the grafted samples (Fig. 4b and 4c) also show a small shift of the carboxylate stretching band to lower wavenumbers, in agreement with an increase of the C-O bond length; these new bands can therefore be attributed to organic moieties grafted on calcium or magnesium atoms at the surface of the hydroxyapatite, in agreement with the results previously reported for CaHAp modified by amino acids [Ben Moussa et al.; Bachoua et al.].

Table 3. FTIR spectral data (±5 cm⁻¹) of Ca9Mg1HAp-GA(10), Ca9Mg1HAp-GA(20), and pure glutamic acid [Navarrete et al.]: tentative assignments and wavenumbers (cm⁻¹). Asym.: asymmetrical; str.: stretching.

X-ray analysis

The X-ray powder diffractograms of CaHAp and Ca9Mg1HAp synthesized with or without glutamic acid are shown in Figure 5. Table 4 reports the sizes of the apatite crystallites as a function of magnesium and glutamic acid content for the (002) and (310) reflections. For all samples, a unique apatitic phase belonging to the P63/m space group (ICDD-PDF no. 9-432) is observed. The grafted apatites Ca8.5Mg1.5HAp-GA(n) could not be prepared: all our attempts failed. Several authors have shown that this particular behavior can be explained by the fact that Ca8.5Mg1.5HAp is a non-stoichiometric CaMgHAp already made up of a mixture of crystalline CaMgHAp and amorphous whitlockite [Ca(3-y)Mgy(HPO4)z(PO4)2-2z/3], the latter crystallizing under heat treatment at 900°C [Chaudhry et al.; Correia et al.]. The broadening of the diffraction lines increases with the concentration of magnesium and glutamic acid. The crystallite sizes were calculated from the broadening of the (002) and (310) reflections using the Scherrer equation [Bigi et al.]:

$$D_{hkl} = \frac{K\lambda}{\beta_{1/2}\cos\theta}$$

where θ is the diffraction angle, λ the wavelength, K a constant depending on the crystal (chosen as 0.9 for apatite crystallites), and β1/2 the full width at half maximum (FWHM) of a given reflection. The line broadening of the (002) and (310) reflections was used to evaluate the crystallite size along the c-axis and along a direction perpendicular to it. The crystallinity Xc is defined as the fraction of the crystalline apatite phase in the investigated volume of powdered sample. An empirical relation between Xc and β1/2 was used [Ren et al.]:

$$X_c = \left(\frac{K_A}{\beta_{1/2}}\right)^3$$

where KA is a constant set to 0.24 and β1/2 is the FWHM of the (002) reflection; the corresponding values are reported in Table 4. The crystallite size and the crystallinity decrease with increasing magnesium and amino acid concentrations.
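Both relations are straightforward to apply to measured FWHM values. The Python sketch below reproduces the Table 4 entries for Ca9Mg1HAp; the Bragg positions assumed for the (002) and (310) reflections (2θ ≈ 25.9° and 39.8°) are standard hydroxyapatite values and are our assumption, since Table 4 does not list them.

```python
import numpy as np

K, lam = 0.9, 1.5418        # Scherrer constant; Cu-Kalpha wavelength (angstrom)
K_A = 0.24                  # empirical crystallinity constant (beta in degrees)

def scherrer(beta_deg, two_theta_deg):
    """Crystallite size (angstrom) from the FWHM of a reflection."""
    beta = np.radians(beta_deg)
    theta = np.radians(two_theta_deg / 2)
    return K * lam / (beta * np.cos(theta))

# FWHM values (degrees 2theta) measured for Ca9Mg1HAp (Table 4)
beta_002, beta_310 = 0.264, 0.712
# Assumed Bragg positions of the (002) and (310) reflections of hydroxyapatite
two_theta_002, two_theta_310 = 25.9, 39.8

print(f"D(002) = {scherrer(beta_002, two_theta_002):.0f} A")   # ~309 A
print(f"D(310) = {scherrer(beta_310, two_theta_310):.0f} A")   # ~119 A
print(f"Xc     = {(K_A / beta_002)**3:.3f}")                   # ~0.749
```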
From Table 4, it can also be deduced that the crystallites have nanometric sizes and that the size reduction is more pronounced in the (310) direction than in the (002) one. Such an observation was previously reported for other organic moieties grafted onto apatite surfaces and can be explained by a better interaction of the glutamic acid with the faces parallel to the c-axis [Boanini et al.; Othmani et al.]. The individual effects of magnesium and glutamic acid on the crystallite size can be observed in Figure 6. The crystallite sizes D(002) and D(310) decrease slowly with the magnesium concentration, whereas the addition of glutamic acid induces a larger change in crystallinity. Glutamic acid thus appears to be the component mainly responsible for the loss of crystallinity, which could be explained by the presence of COO⁻ groups on the surface of the materials. No shift of the peak positions is observed between the CaHAp and Ca9Mg1HAp diffractograms; the main phenomenon here is the broadening of the peaks, which has already been observed upon substitution of calcium by another divalent ion. When glutamic acid is present, no difference is observed between the Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20) diffractograms. In contrast, peak shifts are observed with respect to the CaHAp and Ca9Mg1HAp patterns, in particular for the (211) and (222) peaks. This phenomenon, which we have already described, was attributed to a better interaction of the glutamic acid with the faces parallel to the c-axis during crystal growth [Ben Moussa et al.].

TEM observations

TEM micrographs of the samples are shown in Figure 7A. The micrographs show that the precipitated apatite particles, prepared with or without glutamic acid, are nanometric. CaHAp consists of well-dispersed plate-shaped crystals about 40-150 nm long and about 30 nm wide. A small addition of magnesium induces a decrease in size: 30-60 nm long and about 15-20 nm wide for Ca9.5Mg0.5HAp. The simultaneous presence of magnesium and glutamic acid in the starting solution completely modifies the morphology: large bundles of CaMgHAp-GA fibers (about 300 nm long and 80 nm wide) are obtained. The HRTEM images (Fig. 7B) reveal that these fibers are thin (about 15 nm wide) and stacked against each other. The analysis of the FFT patterns shows that the growth of the particles always occurs in the (002) direction, indicating that they grow along the c-axis.

Functionalization of CaHAp with glutamic acid

Speciation of the apatite-solution interface

The influence of the glutamic acid amount on the textural properties of CaHAp was examined (Table 5).
The specific surface area is 43 m²/g for Ca9Mg1HAp and, after functionalization, 30 m²/g and 24 m²/g for Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20), respectively. The decrease in the specific surface area of the apatite samples is related to the structural arrangement of the 2-aminopentanedioic acid (glutamic acid) on the surface of the solid. These results are in good agreement with those obtained by Oberto Da Silva et al. [Silva et al.]. The initial pH (pHi) measured in aqueous suspension increases with the glutamic acid ratio: 8.9 and 9.3 for Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20), respectively, against only 8.4 for Ca9Mg1HAp. The zeta potentials of the different apatite samples are presented in Figure 8, and the corresponding points of zero charge (PZC) and iso-electric points (IEP) are given in Table 5. In a basic medium, the Ca9Mg1HAp sample is characterized by a strongly negative zeta potential, indicating a deficit of positive surface charge, with values lower than those of the Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20) apatites. This result indicates an increase in the number of positively charged surface sites on the amino-acid-grafted apatites [Kollath et al.], whereas the pH_IEP value of unmodified Ca9Mg1HAp is only 7.1 (Table 5).

The total number of protons consumed during the titration of the different samples is determined using the following equation (Figure 8(B)):

$$[\mathrm{H}^+]_{consumed} = \frac{(C_a V_a - C_b V_b)_{susp} - (C_a V_a - C_b V_b)_{blank} - V\left(\dfrac{10^{-pH}}{\gamma_{H^+}} - \dfrac{10^{\log K_e + pH}}{\gamma_{OH^-}}\right)}{S_{exp}}$$

where S_exp is the surface area exposed in the suspension (from S_BET and the powder mass), V the total volume of solution, Ca and Cb the HCl and NaOH concentrations used for the titration, Ke the dissociation constant of water, and γH+ and γOH- the activity coefficients of H+ and OH-, calculated with the Debye-Hückel law (A = 0.507, B = 0.328 × 10⁻⁸, with åi the effective ion diameter). This quantity of protons is deduced from the difference between the amount of H+ added to the suspension and the amount of free H+ in solution, the latter being calculated directly from the measured pH; the amount of added protons is corrected by the amount of hydroxyl ions initially added to the suspension. The proton quantities consumed by the apatite samples show the same evolution: in acidic medium, [H+] lies between 9 and 12 μmol/m² at pH = 5 for Ca9Mg1HAp, Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20), respectively; it then decreases regularly and becomes negative in basic medium, reaching about -8 to -10 μmol/m² at pH = 11.
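The conversion from raw titration data to consumed protons is a direct transcription of this balance. The sketch below shows the computation for a single titration point; all the volumes and pH values are illustrative placeholders (not measured data), and the activity coefficients are approximated as 1, a reasonable simplification at 0.1 M ionic strength for an order-of-magnitude check.

```python
def protons_consumed(Va, Va_bl, pH, Vb=1.5e-3, Ca=0.1, Cb=0.1,
                     V=0.0315, S=6.45, pKe=13.8):
    """Protons consumed by the surface (mol/m^2) at a given pH.

    Va, Va_bl : HCl volumes (L) needed to reach `pH` for the suspension
    and for the blank electrolyte; Vb : initial NaOH volume (L);
    S : exposed surface area (m^2), here 0.15 g x 43 m^2/g for Ca9Mg1HAp.
    Activity coefficients are approximated as 1 (0.1 M NaCl background)."""
    added_susp  = Ca * Va    - Cb * Vb
    added_blank = Ca * Va_bl - Cb * Vb
    free = V * (10**(-pH) - 10**(pH - pKe))   # net free protons (mol)
    return (added_susp - added_blank - free) / S

# Illustrative point: the suspension needs more acid than the blank to
# reach pH 5, the difference being consumed by the apatite surface.
h = protons_consumed(Va=1.2e-3, Va_bl=0.8e-3, pH=5.0)
print(f"[H+] consumed ~ {h * 1e6:.1f} umol/m^2")   # ~6 umol/m^2, illustrative
```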
Proposed glutamic acid sorption mechanism

The IR spectra provide information about the ionization state of the carboxylate groups grafted onto the apatite surface. The presence of bands characteristic of -COO⁻ groups and the absence of bands attributed to COOH groups (1700 cm⁻¹) indicate the carboxylate form. This shows that the interaction is mainly due to electrostatic attraction between the -COO⁻ groups of the glutamic acid and the Ca²⁺/Mg²⁺ ions of the hydroxyapatite. Interactions between COO⁻ and surface P-OH groups cannot be excluded either. The fixation results from the simultaneous presence of -COO⁻/Ca²⁺/Mg²⁺ electrostatic interactions and of H-bonds between NH3⁺ protons and surface oxygen atoms of the PO4 groups [Almora-Barrios et al.; Rimola et al.]. The nature of the interactions between the apatite surface and glutamic acid depends on the pH of the medium. Since the reaction is carried out at a pH above 9.5, glutamic acid can be considered to exist in aqueous solution as a carboxylate ion, the amino group being neutral [57]. Under the same conditions, the apatite surface is considered to be negatively charged, and some authors therefore consider the electrostatic interactions between the surface and the amino acid to be very weak [Palazzo et al.; Brown et al.]. In the present study, the glutamic acid molecules are present as carboxylate ions, which can form calcium complexes that participate in the building of the crystalline edifice. The IR spectra, which show vibrations characteristic of carboxylate salts, lead us to conclude that the amino acid in this carboxylate ionic form exchanges with a hydroxyl ion. The grafting mechanisms proposed on the basis of these results are shown in Scheme 1.

Scheme 1. Formation of a Ca carboxylate salt leading to the grafting of glutamic acid on the CaHAp surface.

Conclusion

In conclusion, we have successfully synthesized hydroxyapatite-glutamic acid composites with different glutamic acid contents using the hydrothermal method. The presence of glutamic acid and/or magnesium in the reaction solution does not change the apatite structure, but reduces the crystallinity and the crystallite sizes. According to IR spectroscopy, the new vibrations observed after adsorption can be attributed to organic moieties grafted on calcium or magnesium atoms at the surface of the apatite. TEM images confirm the reduction of the crystallite sizes and reveal the change in morphology.

Fig. 1. TG plots: (a) Ca9Mg1HAp, (b) Ca9Mg1HAp-GA(10) and (c) Ca9Mg1HAp-GA(20).
Fig. 2. DTA plots: (a) Ca9Mg1HAp, (b) Ca9Mg1HAp-GA(10) and (c) Ca9Mg1HAp-GA(20).
Fig. 3. (Left) Circular dichroism spectra and (right) UV absorption curves of (a) glutamic acid and (b) Ca9Mg1HAp-GA(20).
Fig. 4. FT-IR spectra of: (a) Ca9Mg1HAp, (b) Ca9Mg1HAp-GA(10), (c) Ca9Mg1HAp-GA(20), (d) mechanical mixture of Ca9Mg1HAp and glutamic acid, and (e) glutamic acid.
Fig. 5. X-ray diffractograms of CaHAp and Ca9Mg1HAp, ungrafted and grafted.
Fig. 6. Effect of the Mg and GA concentrations on the crystallite sizes D(002) (A) and D(310) (B), relative to the CaHAp values taken as references.
Fig. 7A. TEM images (scale bars = 50 nm) of CaHAp, Ca9.5Mg0.5HAp, Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20). Fig. 7B. HRTEM images of the CaMgHAp-GA fibers.
Fig. 8. (A) Zeta potential (mV) as a function of pH for the as-prepared Ca9Mg1HAp-GA(n) (n = 10 and 20) and Ca9Mg1HAp samples. (B) Number of protons consumed by the surface (μmol/m²) during 2 h of immersion in a 0.1 M NaCl + HCl or NaOH solution, as a function of pH, for the same samples.

Table 1. Chemical composition (wt%, ±0.02) of the grafted mixed CaMgHAp samples.

Samples                %Ca    %Mg   %P     %C    (Ca+Mg)/P
CaHAp                  39.31  -     18.09  0.12  1.68
Ca9.5Mg0.5HAp          38.14  0.98  18.66  0.15  1.65
Ca9Mg1HAp              35.28  1.82  18.01  0.18  1.64
CaHAp-GA(10)           39.05  -     18.15  0.72  1.66
Ca9.5Mg0.5HAp-GA(10)   37.94  1.08  18.63  0.87  1.64
Ca9Mg1HAp-GA(10)       34.05  1.89  17.98  1.12  1.59
CaHAp-GA(20)           38.97  -     18.23  1.25  1.65
Ca9.5Mg0.5HAp-GA(20)   37.23  1.13  18.58  1.49  1.63
Ca9Mg1HAp-GA(20)       32.51  2.03  17.88  2.58  1.55

Table 4. Evolution of the apatite crystallite sizes with magnesium and glutamic acid for the (002) and (310) reflections.

Samples                β1/2(002)  D(002)(Å)  β1/2(310)  D(310)(Å)  Crystallinity (Xc)
CaHAp                  0.191(1)   427        0.477(9)   177        1.981
Ca9.5Mg0.5HAp          0.207(2)   394        0.640(1)   132        1.554
Ca9Mg1HAp              0.264(2)   309        0.712(1)   119        0.749
CaHAp-GA(10)           0.230(4)   354        0.710(4)   119        1.042
Ca9.5Mg0.5HAp-GA(10)   0.283(4)   288        0.931(3)   91         0.607
Ca9Mg1HAp-GA(10)       0.364(5)   224        1.286(3)   66         0.285
CaHAp-GA(20)           0.233(1)   350        0.761(2)   111        1.029
Ca9.5Mg0.5HAp-GA(20)   0.322(3)   253        1.084(1)   75         0.413
Ca9Mg1HAp-GA(20)       0.389(2)   210        1.385(1)   61         0.234

Table 5. Surface area and pH values of the different apatite samples.

Samples             S_BET (m²/g)  IEP  PZC  pHi (aqueous solution)
Ca9Mg1HAp           43            7.1  8.2  8.4
Ca9Mg1HAp-GA(10)    30            8.2  8.6  8.9
Ca9Mg1HAp-GA(20)    24            8.6  8.8  9.3

PZC: point of zero charge, determined by acid-base titration equilibrated for 16 h.
IEP: iso-electric point (mV), determined by zeta-metry.

As shown in Table 5 and Figure 8(B), the PZC of the Ca9Mg1HAp sample is around 8.2; the same value is found in the literature [Bell et al.; Attia et al.; Saleeb et al.; Wu et al.]. This result can be explained by the presence of initial charge and specific adsorption at the surface of the apatite.
After glutamic acid functionalization, the zeta-potential curve is shifted towards alkaline pH values, with pH_IEP of 8.2 and 8.6 for Ca9Mg1HAp-GA(10) and Ca9Mg1HAp-GA(20), respectively. This result, confirmed by the PZC measurements (from 8.6 to 8.8), demonstrates the functionalization of the Ca9Mg1HAp surface by glutamic acid.

Acknowledgments

This research was carried out with the financial support of the University of Monastir (Tunisia), the University Pierre and Marie Curie and the CNRS (France).
Validation of registration techniques applied to XRD signals for stress evaluations in titanium alloys

Benoit Voillot (voillot@lmt.ens-cachan.fr), Jean-Lou Lebrun, René Billardon, François Hild

Keywords: DIC, in-situ test, integrated methods, stress analyses, XRD

To estimate stresses near specimen surfaces, X-ray diffraction (XRD) is applied to titanium alloys. Some of these alloys are difficult to study since they are composed of several phases of different proportions, shapes and scales. For millimetric probed volumes, such multi-phase microstructures induce shallow and noisy diffraction signals. Two peak registration techniques are introduced and validated thanks to tensile tests performed on two titanium alloy samples.

1 Introduction

High-performance titanium alloys are used in the aeronautical industry, especially for large forgings (e.g., landing gears). These critical parts of an aircraft are subjected to various heat and mechanical treatments [Hill et al.]. Among the consequences are changes in microstructure, roughness and residual stresses [Cox et al.; Aeby-Gautier et al.]. Each of these parameters has an impact on the global in-service mechanical behavior of the whole structure, in particular in fatigue [Lütjering; Guillemot et al.; Souto-Lebel et al.]. It is therefore of high interest to be able to characterize the effect of these treatments in several ways (e.g., microstructure studies, roughness measurements). Another way to quantify these treatments is via residual stress analyses. Various methods exist to estimate residual stresses. Most of them are destructive (e.g., incremental hole drilling [ASTM E837], contour method [Prime]). To carry out analyses on in-service structures, nondestructive methods are needed. One of the most popular nondestructive stress analysis techniques is X-ray diffraction (XRD) [Reed; Fitzpatrick et al.; Fréour et al.; EN 15305; Cullity; Hauk]. Various post-processing algorithms are used to evaluate stresses from XRD measurements [Pfeiffer et al.; EN 15305].
Among them, the centered barycenter method is common [EN 15305; Cullity; Hauk]. It consists in determining the position of the diffraction peak as the sliding center of gravity of all the points in its neighborhood. This method is very powerful for well-defined peaks. However, stress analyses are known to be challenging for titanium alloys, especially multi-phase grades [Lefebvre et al.; Suominen et al.]. This difficulty comes from high levels of fluorescence and high noise-to-signal ratios, which make it difficult to accurately locate the diffraction peaks [Withers]. Other post-processing methods are also in current use. Peak-to-peak registration is utilized in some cases [EN 15305]. A prerequisite is a good similarity between peaks, which is not necessarily satisfied for two-phase materials or for materials whose fluorescence varies with the measurement angle; the latter phenomenon is observed in two-phase titanium alloys [Cullity; Lütjering]. All these methods are implemented in most commercial codes (e.g., Stressdiff [Stressdiff] or Leptos by Bruker [Bruker]). The last method used in existing commercial codes consists in modelling the peaks with a mathematical function (e.g., Lorentz, Pearson VII, pseudo-Voigt or Gauss distributions [EN 15305]). This post-processing method (i.e., a model with a known function) will be used to benchmark the integrated approach proposed herein. Stress analyses are most of the time benchmarked with stress-free configurations (i.e., a powder of the material [EN 15305; Noyan]) or standard coupons [Lefebvre et al.]. One of the issues with the former is finding a powder whose lattice structure is close to that of the material of interest. In some cases, such as multi-phase materials, it is hard or impossible to obtain a powder of the real material. An alternative route is to monitor a mechanical test with XRD means. Tools used for evaluating residual (i.e., ex-situ) stresses can also determine applied (i.e., in-situ) stresses [Geandier et al.] during mechanical tests.
Either the testing machine is put within the goniometer [Geandier et al.] or the goniometer is mounted inside the testing machine [Rekik et al.]. This methodology will be used to validate the stress estimates reported herein. Further, it will also be combined with 2D digital image correlation [Sutton et al.] to monitor surface displacements. Such a combination has been used to validate XRD measurements and to determine stress/strain curves in biaxial experiments on thin films [Djaziri et al.]. To address the issue of the high noise-to-signal ratio associated with titanium alloys, it is proposed to express the peak shifts directly in terms of the quantities of interest, namely the sought elastic strains (or stresses). When this constraint is embedded in the peak registration procedure, it corresponds to an integrated approach, as used in digital image correlation [Hild et al.; Roux et al.; Leclerc et al.; Réthoré et al.], stereocorrelation [Réthoré et al.; Beaubier et al.; Dufour et al.] and digital volume correlation [Hild et al.]. The outline of the paper is as follows. First, the studied alloys and the experimental setup are presented. Then the two registration techniques used herein are introduced. Last, the results obtained on the two titanium alloys are analyzed and validated.

2 Studied alloys

Two two-phase titanium alloys are studied herein. First, the Ti64 alloy is selected for benchmark purposes because it is well known and its microstructure is favorable for XRD measurements [Lefebvre et al.; Suominen et al.]. Second, Ti5553 is a two-phase material used by Safran Landing Systems for landing gears [Boyer]. The composition of Ti64 is given in Table 1.
It is composed of 6 wt% aluminum and 4 wt% vanadium. It is a quasi-pure α alloy (i.e., 95 wt% of α-phase and only 5 wt% of β-phase). The α-phase is hexagonal close-packed (HCP) and the β-phase is body-centered cubic (BCC).

Table 1: Chemical composition of Ti64.

The alloying elements of Ti5553 mostly consist of 5 wt% aluminum, 5 wt% vanadium, 5 wt% molybdenum, and 3 wt% chromium (Table 2). It is a quasi β-metastable material containing 60 wt% of α-phase and 40 wt% of β-phase. Such metastable alloys are of interest for aeronautical applications thanks to their high specific strength. Only the α-phase will be considered in the XRD measurements, and the same lattice family (i.e., {213} planes) will be studied for both alloys. The proportion of β-phase in Ti64 can be neglected; this is not the case for Ti5553, for which the stress evaluation will therefore be incomplete. Figure 1 shows micrographs of both alloys. For Ti64, the α-grains are observed as well as the former β-grains. For Ti5553, the matrix of the primary β-phase appears in gray and contains primary and secondary α-nodules (a few µm in diameter) and lamellae (at sub-µm scale) in black [ASTM E112]. The coexistence of these two phases is an issue for XRD analyses [Lu; Voillot et al.]. Figure 3 shows that the β-grains are millimetric. This dimension is comparable to the extent of the volume probed by the X-ray beam (i.e., 1 mm in diameter for a depth of 5 µm, see Section 3). For a single XRD analysis, only a few grains of the β-phase are probed; as a result, the β-phase lattices of the probed volume are not always in Bragg condition, so that the β-phase is not adequate for stress analyses. The EBSD picture also reveals that there is no significant texture: the orientations of the β-grains are not correlated with one another, and the α-phase, made of nodules and lamellae, does not show any preferential orientation with respect to the β-grain in which it is nested [Duval]. This is of prime importance for XRD measurements since a random orientation of the α-grains in the interaction volume ensures that Bragg's condition will be satisfied for the α-phase in any probed direction. A complete stress estimation is thus possible for this phase with a lab goniometer [EN 15305; Hauk]. The two-phase microstructure of Ti5553 leads to shallower and noisier diffraction signals than for Ti64 (Figures 3 and 4). Consequently, it is expected that the peak positions are more difficult to determine for Ti5553 than for Ti64, which will presumably have an impact on the stress evaluations [Withers; Voillot et al.]. The aim of the following analyses is to assess the feasibility of XRD analyses under such challenging conditions.
3 Experimental setup

The experimental setup consists of a biaxial testing machine [Bertin et al.] mounted in the X-ray goniometer (Figure 6). The advantage of such a testing machine, which can also be used in an SEM chamber, is that two actuators are used to load the sample: in the chosen control mode, the center of the sample is motionless. It was checked (a posteriori) with DIC analyses that the maximum motion was less than 10 µm, which is negligible with respect to the probed surface (i.e., 1 mm²). The testing machine is moved out of the goniometer and relocated at each loading step in order to acquire images for DIC purposes. The goniometer used in the present work follows the χ-method [EN 15305], see Figure 7. The radius of the goniometer is 150 mm. It is equipped with a mobile head composed of the X-ray source (X) and a single linear position sensor (LPS) that turns about the motionless coupon to be analyzed. The LPS sensor gives a 1D measurement of the diffracted intensity. S1, S2, S3 are the basis axes attached to the analyzed sample S. Here, as the coupon is motionless during all the measurements, this basis corresponds to the goniometer reference axes. 2θ is the diffraction angle between the incident and diffracted beams. For diffraction in titanium alloys, a copper source was selected. The use of a collimator gives a probed volume of the order of 1 mm² × 5 µm. A nickel (Kβ) filter is added in front of the LPS sensor so that only the Kα rays of the copper source are analyzed. The duration of X-ray exposure (i.e., 3 min per diffractogram) and the tube intensity and voltage (i.e., 20 mA and 40 kV) are a compromise between measurement duration and diffraction peak quality in the present conditions (Figure 5). To have access to various zones of the diffraction sphere, rotations are made possible in the Eulerian cradle of the goniometer, as illustrated in Figure 8. Rotations about the S3 axis (i.e., normal to the sample surface and vertical axis of the goniometer) give access to various angles φ, which correspond to the main direction of measurement. The rotation about the S1 axis allows various angles χ (or ψ) to be probed. For one single stress analysis, thirteen diffractograms at angular positions ψ ranging from -50° to 50° are acquired [EN 15305]. It is worth noting that the goniometer does not allow for oscillations during measurements. When the moving head is turning within the goniometer, the center of the surface area of the volume probed by the X-ray beam moves by less than 0.1 mm. This observation ensures that the same volume of material is impacted by X-rays at each measurement angle. To check the errors due to the use of the goniometer, an analysis on a stress-free powder made of pure titanium was carried out before and after each measurement campaign: all evaluated stress components varied about 0 MPa with a standard deviation of 10 MPa. Further, a height change of the coupon of 50 µm led to a stress error of about 4 MPa. The stability of the beam interaction with the probed surface and the verification of stress-free evaluations on the Ti powder are indications of good working conditions for stress analyses.
In order to validate the stress analyses performed on titanium alloys, in particular the new integrated method, dog-bone samples (Figure 9) were machined. The coupon was designed to ensure a controllable and homogeneous stress and strain state in its central area, where the X-ray analyses are performed. The aim is to carry out in-situ tensile tests, which is an alternative way of benchmarking stress analyses via XRD. Usual validations are carried out with standard coupons [Lefebvre et al.] or powders assumed to be stress-free. For pure titanium, powders exist and were proven to yield good results for the goniometer used herein [EN 15305]. However, the lattice parameters of pure titanium are slightly different from those of Ti5553 or Ti64 alloys, and the position and quality of the peaks also vary (Figure 5). Consequently, reliable results on titanium powders do not necessarily ensure trustworthy estimates for titanium alloys. The Ti64 grade will be the reference material since it is more suitable for XRD analyses. The Ti5553 alloy will be studied subsequently, once the registration procedures have been validated. In-situ uniaxial tensile tests are carried out under load control. The direction φ is chosen to coincide with the longitudinal direction S1 of the coupon (Figures 7 and 9). To measure the total strains on the sample surface, 2D DIC is used. The sample surface is coated with black and white paints in order to create a speckle pattern (Figure 9(b)); to avoid any bias, the zone analyzed by XRD is not covered. When the targeted load level is reached, the sample and testing machine are moved out of the goniometer to acquire pictures. A telecentric lens is mounted on the digital camera to minimize as much as possible the spurious effects associated with out-of-plane motions. The actuators of the testing machine induce stress variations of less than 16 MPa (standard deviation) over the investigated stress range. An isostatic set-up allows the testing machine to be repositioned very precisely in the goniometer: the standard angular uncertainty in relocation is less than 2°, which corresponds to a small error in the final stress estimations, and the spatial repositioning error is less than 0.1 mm, a value ten times smaller than the diameter of the probed surface.

4 Stress extraction

When a material is loaded or contains residual stresses, the crystal lattices deform. XRD procedures measure the corresponding variations of inter-reticular distances by analyzing diffraction peaks in (poly)crystalline materials [Hauk; Noyan; Cullity; Lu].
In the case of titanium alloys analyzed with a copper source, the peak commonly used to carry out stress analyses diffracts at an angle 2θ ≈ 140° [10] and corresponds to the {213} family of the α-phase, which is found in both Ti64 and Ti5553 alloys [Lefebvre et al.; Boyer; Aeby-Gautier et al.]. The inter-reticular distance for this angle is given by the dimensions of the HCP crystal structure, where the edge of the hexagon is a = 0.295 nm and the height of the lattice is c = 0.468 nm [Leyens]. The link between the diffraction angle and the inter-reticular distance is given by Bragg's law

$$2 d_{\{213\}} \sin\theta = \lambda_{K_\alpha Cu} \quad (1)$$

where d_{213} is the inter-reticular distance of the {213} planes of the α-phase, θ the diffraction angle, and λ_{KαCu} the wavelength of the X-ray beam (copper source, λ = 0.154 nm). The peak registrations presented in the following do not dissociate the Kα1 and Kα2 rays. Elastic stresses near the sample surface are evaluated in the α-phase via XRD by estimating peak shifts [EN 15305; Hauk].

From channels to stresses

Figure 10 describes the whole procedure, from the acquisition of diffractograms by the goniometer equipped with a linear position sensor (LPS), which delivers an information expressed in channels x, to the stress estimations. The relationship between channel and angular position depends on the geometry of the goniometer (Figure 11)

$$2\theta(x) = 2\theta_{ref} + \tan^{-1}\left[\frac{\ell}{L}\,\frac{x - x_{ref}}{x_{max}}\right] \quad (2)$$

with

$$x_{ref} = \frac{x_{max} + 1}{2} \quad (3)$$

where 2θ_ref is the angle corresponding to the central channel of the linear sensor (this angle is set to 140° thanks to the use of the Ti powder). The ratio ℓ/L is a geometric parameter of the goniometer (i.e., it depends on the length of the LPS sensor, ℓ = 50 mm, and on the distance between the analyzed surface and the sensor, L = 150 mm). The angles φ and ψ define the position of the goniometer for one single measurement, φ being the main measurement direction and ψ the incidence angle of the X-ray source and sensor (see Figures 7 and 8). The lattice strain reads

$$\epsilon_{\phi\psi} = \ln\left(\frac{d_{\phi\psi}}{d_0}\right) \quad (4)$$

where d_0 is the inter-reticular distance for the undeformed configuration, and 2θ_0 the corresponding diffraction angle. To get an estimate of the stress tensor in one direction, several measurements (i.e., angular positions ψ) are needed for each point of analysis [EN 15305]. The inter-reticular distances d_φψ and corresponding angles θ_φψ depend on the main measurement direction φ but also on the incidence angle ψ of the X-ray source. As the measurements are carried out very close to the sample surface, the normal stress component perpendicular to the surface (along the axis S3, see Figure 7) is assumed to vanish (σ33 = 0), in order to be consistent with a traction-free surface. The relationship between stress and elastic strain components is then given by

$$\tilde\epsilon_{\phi\psi} = -\frac{1}{2} S_2^{\{213\}}\, \sigma_\phi \sin^2\psi - \frac{1}{2} S_2^{\{213\}}\, \tau_\phi \sin 2\psi + \tilde\epsilon_{\psi=0} \quad (5)$$

where $\tilde\epsilon_{\phi\psi} = \ln(\sin\theta_{\phi\psi})$, and the X-ray elasticity constants $S_1$ and $\frac{1}{2}S_2^{\{213\}}$ are invariant for any probed angle of the α-phase.
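The chain from sensor channels to stresses (Equations (2) to (5)) is linear once the peak positions are known, and can be sketched in a few lines of Python. In the sketch below, the geometric constants match those quoted above, but the peak channels and the number of sensor channels (here 640) are illustrative placeholders; the least-squares solve anticipates the minimization of Equation (6) introduced next.

```python
import numpy as np

# Goniometer geometry (Equations (2)-(3))
two_theta_ref, ell, L = 140.0, 50.0, 150.0   # degrees, mm, mm
x_max = 640                                   # number of LPS channels (illustrative)
x_ref = (x_max + 1) / 2

def two_theta(x):
    """Channel-to-angle conversion, Equation (2) (degrees)."""
    return two_theta_ref + np.degrees(np.arctan(ell / L * (x - x_ref) / x_max))

# Illustrative peak channels found at each tilt angle psi (degrees)
psi = np.linspace(-50, 50, 13)
x0 = 320.5 + 35.0 * np.sin(np.radians(psi))**2   # synthetic sin^2(psi) trend

eps_t = np.log(np.sin(np.radians(two_theta(x0) / 2)))   # eps-tilde = ln(sin theta)

# Linear least squares for (sigma_phi, tau_phi, eps_tilde_0), Equations (5)-(6)
S2_half = 11.9e-6   # 1/2 S2 for the alpha phase (MPa^-1), value used in the paper
A = np.column_stack([-S2_half * np.sin(np.radians(psi))**2,
                     -S2_half * np.sin(np.radians(2 * psi)),
                     np.ones_like(psi)])
(sigma, tau, eps0), *_ = np.linalg.lstsq(A, eps_t, rcond=None)
print(f"sigma_phi = {sigma:.0f} MPa, tau_phi = {tau:.0f} MPa")
```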
It is worth noting that, with the present setting, the knowledge of θ_0 is not needed when using $\tilde\epsilon_{\phi\psi}$ instead of $\epsilon_{\phi\psi}$. However, to probe the consistency of the results, it will be checked a posteriori when analyzing the tensile tests (in particular, the value of θ_0). In the present case, the X-ray elasticity parameters

$$S_1 = -\frac{\nu_\alpha}{E_\alpha} \qquad \frac{1}{2} S_2 = \frac{1 + \nu_\alpha}{E_\alpha}$$

depend on the Poisson's ratio ν_α and Young's modulus E_α of the α-phase (i.e., ½S₂ = 11.9 × 10⁻⁶ MPa⁻¹ and S₁ = -2.64 × 10⁻⁶ MPa⁻¹ [Fréour et al.; Bruno et al.]). These values correspond to E_α = 109 GPa and ν_α = 0.3. The stress components σ_φ and τ_φ are determined, together with the composite strain $\tilde\epsilon_{\psi=0}$, by the least-squares minimization

$$\min_{\sigma_\phi,\,\tau_\phi,\,\tilde\epsilon_{\psi=0}} \sum_\psi \left[\tilde\epsilon_{\phi\psi} + \frac{1}{2}S_2\,\sigma_\phi \sin^2\psi + \frac{1}{2}S_2\,\tau_\phi \sin 2\psi - \tilde\epsilon_{\psi=0}\right]^2 \quad (6)$$

Registration of diffraction peaks

The previous section has shown that the sought stresses can be related to the strains (Equation (5)) or, equivalently, to the channel position of the diffraction peak (Equations (2) to (4)). In both cases, the location θ_φψ of the peaks has to be determined. One of the standard approaches consists in registering the measured diffractogram with an a priori chosen function 0 ≤ g(x) ≤ 1 whose maximum location is known (i.e., g(x = x_0) = 1). The registration procedure consists in finding the location x_0 of the peak, in addition to the other parameters needed to model the measured signal, by minimizing the sum of squared differences

$$\min_{x_0(\psi),\,\xi_0,\,\Delta I,\,I_b,\,D} \sum_x \eta^2(x;\, x_0(\psi), \xi_0, \Delta I, I_b, D) \quad (7)$$

with

$$\eta(x;\, x_0(\psi), \xi_0, \Delta I, I_b, D) = \frac{f(x;\psi) - I_b - Dx}{\Delta I} - g\!\left(\frac{x - x_0(\psi)}{\xi_0}\right) \quad (8)$$

where f is the raw signal measured by XRD at the angular position ψ, x the channel position on the LPS sensor, ξ_0 the width of the function, which is proportional to the full width at half maximum (FWHM), and ΔI = I_max - I_bgl the peak intensity above the background line (bgl). In the present case, the background intensity is modelled with a linear function I_bgl(x) = I_b + Dx. Consequently, there are five unknowns to be determined, namely x_0, ξ_0 and ΔI, which characterize the diffraction peak, in addition to I_b and D, which describe the background line (Figure 12). Each diffractogram is analyzed independently (i.e., for each considered angle ψ) with a registration technique that can be referred to as diffraction signal correlation, or DSC (i.e., the one-dimensional version of (2D) digital image correlation or (3D) digital volume correlation [Sutton et al.; Hild et al.]). A Gauss-Newton algorithm is implemented to minimize the sum of squared differences (7) for each considered angle ψ. The covariance matrix associated with the evaluated parameters, which are gathered in the column vector {p} = {x_0, ξ_0, ΔI, I_b, D}†, reads [Hild et al.]

$$[\mathrm{Cov}_p] = \frac{\gamma^2}{\Delta I^2}\,[M]^{-1} \quad (9)$$

where γ is the standard deviation of the acquisition noise, assumed to be white and Gaussian, and [M] the Hessian used in the minimization scheme

$$[M] = [m]^\dagger [m] \quad (10)$$

with

$$[m] = \begin{bmatrix} \dfrac{\partial\eta}{\partial x_0}(x) & \dfrac{\partial\eta}{\partial \xi_0}(x) & \dfrac{\partial\eta}{\partial \Delta I}(x) & \dfrac{\partial\eta}{\partial I_b}(x) & \dfrac{\partial\eta}{\partial D}(x) \\ \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix} \quad (11)$$
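The resolution analysis of Equations (9) to (11) can be checked numerically: building the Jacobian of the residual η with respect to the five parameters and inverting the resulting Hessian gives the standard resolution of each parameter. The sketch below does this for the Gaussian peak model of Equation (20), evaluating the Jacobian at the solution where (f - I_b - Dx)/ΔI ≈ g; all peak parameters and the noise level are illustrative, not measured values.

```python
import numpy as np

x = np.arange(640, dtype=float)          # LPS channels (illustrative)
x0, xi0, dI = 320.0, 25.0, 400.0         # illustrative peak parameters
gamma = 15.0                             # acquisition noise std (counts)

u = (x - x0) / xi0
g = np.exp(-u**2)                        # Gaussian shape, Equation (20)

# Jacobian of eta with respect to {x0, xi0, dI, Ib, D}, Equation (11)
m = np.column_stack([
    -2 * u / xi0 * g,          # d(eta)/d(x0)
    -2 * u**2 / xi0 * g,       # d(eta)/d(xi0)
    -g / dI,                   # d(eta)/d(dI), at the solution
    -np.ones_like(x) / dI,     # d(eta)/d(Ib)
    -x / dI,                   # d(eta)/d(D)
])
M = m.T @ m                                   # Hessian, Equation (10)
cov = gamma**2 / dI**2 * np.linalg.inv(M)     # Equation (9)
print(f"peak position resolution ~ {np.sqrt(cov[0, 0]):.3f} channel")
```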
    (11) This covariance matrix allows the resolution 1 of the registration technique to be assessed [START_REF] Hild | Digital Image Correlation[END_REF]. The standard resolution of the peak position is then equal to the diagonal term of [Cov p ] corresponding to x 0 , provided the other parameters do not change. The influence of the peak position uncertainty on stress resolutions is easily computed since the stress extraction is a linear least squares problem in terms of ˜ φψ (see Equation ( 6)). The covariance matrix of the stress extraction technique reads [Cov σ ] = [C] -1 [c] † [Cov ][c][C] -1 (12) with [C] = [c] † [c] (13) and [c] =     1 2 S 2 sin 2 ψ 1 2 S 2 sin 2ψ 1 . . . . . . . . .     (14) 1 The resolution of a measuring system is the "smallest change in a quantity being measured that causes a perceptible change in the corresponding indication" [START_REF]International Vocabulary of Metrology -Basic and General Concepts and Associated Terms, VIM. International Organization for Standardization[END_REF]. Commercial softwares (e.g., Stressdiff [START_REF]Stressdiff[END_REF] or Leptos [START_REF] By | [END_REF]) use peak positions issued from such minimization to end up with so-called sin 2 ψ methods [START_REF]Non-destructive Testing, Test Method for Residual Stress analysis by X-ray Diffraction[END_REF][START_REF] Cullity | Elements of x-ray diffraction[END_REF][START_REF] Lütjering | Titanium[END_REF]. This procedure will serve as benchmark for the validation of the following integrated procedure. Given the fact that the peak positions are parameterized in terms of the sought stresses and composite strain (i.e., x 0 = x 0 (σ φ , τ φ , ˜ ψ=0 ), see Equations ( 2) and ( 4)), the previous minimization can be performed over the whole set of diffractograms min σ φ ,τ φ ,˜ ψ=0 ,ξ 0 ,∆ I ,I b ,D ψ x η 2 I (x, ψ; σ φ , τ φ , ˜ ψ=0 , ξ 0 , ∆ I , I b , D) (15) with η I (x, ψ; σ φ , τ φ , ˜ ψ=0 , ξ 0 , ∆ I , I b , D) = f (x, ψ) -I b -Dx ∆I -g x -x 0 (σ φ , τ φ , ˜ ψ=0 ) ξ 0 (16) This approach corresponds to integrated DSC for which the sought quantities are directly determined from the registration procedure and do not need an additional minimization step. A Gauss-Newton scheme is also implemented and the initial guess of the sought parameters comes from a first non integrated analysis. The covariance matrix associated with the evaluated parameters, which are gathered in the column vector {p I } = {σ φ , τ φ , ˜ ψ=0 , ξ 0 , ∆ I , I b , D} † , reads [Cov p I ] = γ 2 ∆I 2 [M I ] -1 (17) where [M I ] is the approximate Hessian used in the global minimization scheme [M I ] = [m I ] † [m I ] (18) with [m I ] =     ∂η I ∂σ φ (x, ψ) ∂η I ∂τ φ (x, ψ) ∂η I ∂˜ ψ=0 (x, ψ) ∂η I ∂ξ 0 (x, ψ) ∂η I ∂∆I (x, ψ) ∂η I ∂I b (x, ψ) ∂η I ∂D (x, ψ) . . . . . . . . . . . . . . . . . . . . .     (19) Tests have been made by selecting various distributions (i.e., Gauss, Pearson VII, Lorentz). They give the same types of results without any significant improvement for any of them in terms of registration residuals. In the following, a Gaussian distribution was selected g ≡ g G (x) = exp - x -x 0 ξ 0 2 (20) A preliminary step is performed to calibrate the LPS sensor in terms of local offset corrections. The offset of the detector is determined by measuring the intensity during diffraction on a material that does not diffract at an angle close to that of {213} planes of the α-phase. Amorphous glass was chosen. 
A preliminary step is performed to calibrate the LPS sensor in terms of local offset corrections. The offset of the detector is determined by measuring the intensity diffracted by a material that does not diffract at an angle close to that of the {213} planes of the α-phase; amorphous glass was chosen. The diffractogram on glass is obtained with a measurement lasting at least half an hour in order to collect enough signal. After rescaling, this offset is subtracted from each diffractogram used in the stress analyses. Figure 13 shows that most of the spatial variations are thereby erased and that the diffraction peak appears more clearly.

Fig. 13: Subtraction of the detector offset.

For the sake of convenience, the residuals reported hereafter refer to the raw acquisitions, namely

$$\rho(x;\psi) = f(x;\psi) - \left[\Delta I\, g\!\left(\frac{x - x_0(\psi)}{\xi_0}\right) + I_b + Dx\right] \quad (21)$$

and are therefore expressed in counts. Any deviation from random noise is an indication of model error (i.e., associated with the choice of g, or with the elasticity assumption). The final outputs of the two registration algorithms are σ_φ, τ_φ, and $\tilde\epsilon_{\psi=0} = \ln(\sin\theta_0) - S_1\,\mathrm{tr}(\boldsymbol\sigma)$.

5 Results of stress analyses

Validation of the registration techniques

Six loading steps ranging from 0 to 660 MPa were applied to the Ti64 sample (Figure 14). For this type of alloy, the maximum stress level is lower than the yield stress (≈ 900 MPa [Lütjering; Duval]). The applied stress corresponds to the applied load divided by the cross-sectional area of the sample ligament. Interestingly, the integrated DSC residuals (i.e., 39 counts), which are expected to be higher since fewer degrees of freedom are available, remain close to those of the non-integrated approach. The similarity of the residual levels between the integrated and non-integrated approaches validates the assumption of elasticity in the probed volume. These two results also show that the registration was successful with both DSC and integrated DSC. Stress analyses can also be performed via integrated DIC [Leclerc et al.]. The displacement field is first measured with FE-based DIC, in which the kinematic field is parameterized with the nodal displacements of an unstructured mesh (see Figure 16) made of three-noded triangles (with linear interpolation of the shape functions). Even though the central part of the sample was not speckled, the DIC code could converge (i.e., helped by the convergence in the speckled area) and yields results within the zone measured by XRD. From these measurements, only those corresponding to the two extreme transverse rows of elements are considered; they are prescribed as Dirichlet boundary conditions to an elastic FE analysis under the plane-stress assumption, which allows the stress field to be evaluated everywhere. The macroscopic elasticity constants are a Young's modulus of 109 GPa and a Poisson's ratio of 0.3. Figure 16 shows the longitudinal stress field for each loading step. In the whole zone probed by XRD, the stress is virtually uniform and the corresponding standard deviation is less than 10 MPa at the highest load level. From this analysis, the mean longitudinal stress is reported and compared to the other stress evaluations.

Fig. 16: Longitudinal stress field (expressed in MPa) evaluated via integrated DIC at the last loading step. Online version: history of the six analyzed steps (Figure 14).

Figure 17 compares the various stress analyses; in this plot, the reference (i.e., horizontal axis) is the applied stress.
Integrated DIC and DIC give the same stress estimate, which corresponds to the mean longitudinal stress averaged over the XRD zone. These levels are very close to the applied stress. For DIC, the measured strains are assumed to be elastic and the corresponding stresses are evaluated, as would be done in XRD analyses (i.e., without any equilibrium requirement), using isotropic elasticity under the plane-stress assumption. The two DSC approaches yield results that are in very good agreement with each other and with the evaluations based on integrated DIC and on the load measurements. Interestingly, the stress estimates based on both DIC results, for which the strains are averaged over the zone of interest of the XRD analyses, are consistent with each other. This observation proves that the assumption of linear elasticity is satisfied. The fact that the DSC results are consistent with the IDSC, integrated DIC and applied-stress estimates also validates the choice of the X-ray elasticity constants for the XRD measurements in the Ti64 alloy. Figure 18 shows the error quantifications for DSC and integrated DSC with respect to the applied stress. The stress resolutions of both DSC methods are also shown; they account for the effect of acquisition noise only (see Equations (12) and (17)). The standard resolution of the two DSC techniques is very low, of the order of 3 MPa (i.e., 2.7 MPa for integrated DSC and 3.3 MPa for regular DSC). Thanks to integration, the standard resolution is decreased by 20 %. The root mean square error between the applied stress and the DSC or IDSC estimates is equal to 37 MPa and 41 MPa, respectively. This level corresponds to an upper bound, given the fact that load fluctuations occur (inducing stress variations of the order of 16 MPa in the present case). The standard uncertainty of the applied stress is equal to 16 MPa.

The last output of the DSC codes is the composite strain $\tilde\epsilon_{\psi=0} = \ln(\sin\theta_0) - S_1\,\mathrm{tr}(\boldsymbol\sigma)$. Given the fact that the trace of the stress tensor is equal to the longitudinal stress in a uniaxial tensile test, it is possible to evaluate 2θ_0, the diffraction angle of the {213} planes of the α-phase for zero stress. The mean value is 2θ_0 = 140.16°, which is close to the expected value for pure titanium (i.e., 140°), see Figure 20. The corresponding standard deviation is equal to 0.019° for DSC and 0.018° for IDSC, i.e., 5 % lower for the latter. With Bragg's law, it is concluded that this angular uncertainty divided by tan(θ_0) is an estimate of the longitudinal strain uncertainty (i.e., ≈ 1.2 × 10⁻⁴ for both methods), which corresponds to a longitudinal stress uncertainty of ≈ 13 MPa. This level is of the same order as the stress fluctuations induced by the tensile stage. Thanks to the reported consistency of the stress estimates, both the DSC and IDSC algorithms are now considered as validated and will be used to study the Ti5553 grade.
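This propagation from the angular scatter to the strain and stress uncertainties is a one-line application of the differentiated Bragg's law. The sketch below reproduces the figures quoted above; the conversion from strain to stress through the α-phase Young's modulus alone is our simplification for a uniaxial state.

```python
import numpy as np

two_theta_0 = 140.16          # mean stress-free diffraction angle (degrees)
d_two_theta = 0.019           # standard deviation of 2*theta_0 (degrees, DSC)
E_alpha = 109e3               # alpha-phase Young's modulus (MPa)

# Differentiated Bragg's law: d(eps) = d(2theta) / tan(theta_0)
theta_0 = np.radians(two_theta_0 / 2)
d_eps = np.radians(d_two_theta) / np.tan(theta_0)
d_sigma = E_alpha * d_eps     # uniaxial conversion (simplification)

print(f"strain uncertainty ~ {d_eps:.1e}")        # ~1.2e-4
print(f"stress uncertainty ~ {d_sigma:.0f} MPa")  # ~13 MPa
```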
Application to Ti5553 alloy

For the specimen made of Ti5553 alloy, 9 loading steps and one globally unloaded step were applied and analyzed via DIC and DSC (Figure 21). The lowest stress level is 0 MPa and the highest 1100 MPa (the yield stress of this alloy is of the order of 1250 MPa [START_REF] Martin | Simulation numerique multi-echelles du comportement mecanique des alliages de titane betametastable Ti5553 et Ti17[END_REF]). However, during electropolishing the geometry of the sample was roughened: the thickness of the coupon was reduced from 0.5 mm to 0.45 mm and some imperfections were created on the edges. As a result, plasticity may occur during the test, especially at high load levels. Further, the estimation of the applied stress is more delicate and thus will not be reported; the so-called DIC stresses are preferred instead. The occurrence of plasticity is confirmed in Figure 22, in which the stresses are evaluated from the DIC strain fields under the assumption of elasticity. For high load levels, very high (and non-physical) stress levels are observed in one area of the sample, which was not probed via XRD. It is worth noting that the last step corresponds to total unloading, which does not result in vanishing strains and stresses, thereby confirming the presence of plastic strains in the lower part of the coupon. However, as will be illustrated in Figure 25, in the area impacted by the X-rays the local stress state is believed to remain elastic and homogeneous during the whole experiment. The standard stress uncertainty corresponding to the probed area is less than 30 MPa via the DIC estimations.

Fig. 22: Longitudinal stress (expressed in MPa) field evaluated via DIC (assuming elasticity) at the last loading step (Figure 21). Online version: history of the ten analyzed steps

Figure 23 shows the root mean square residuals for both DSC approaches. In the present case, the two residuals are virtually coincident for all investigated stress levels. This remarkable agreement between the two DSC techniques enables the hypothesis of elasticity to be fully validated in the present case. This observation is consistent with the fact that the α-phase is finely distributed within the probed volume (i.e., of the order of 1 mm² × 5 µm, see Figure 2). When compared to Ti64, the mean residuals of DSC and IDSC are lower (e.g., 25 counts instead of 39 counts for IDSC in Ti64); however, the signal levels were also significantly lower (Figure 5).

Due to the coexistence of two phases in the studied alloy (Figures 3 and 4), the results will first focus on elastic strains. Figure 24 displays the estimations of elastic strain by the various techniques, namely integrated DIC in addition to DSC and integrated DSC. In the present case, the abscissa corresponds to the mean longitudinal strain in the XRD zone measured by DIC. The fact that the DIC and integrated DIC results coincide proves that the hypothesis of elasticity was fulfilled during the whole experiment and that plasticity was confined to an area that did not impact the zone probed by XRD (as expected from Figure 22). The elastic strains measured in the α-phase by DSC and IDSC are, however, lower than the macroscopic strains. This effect cannot be attributed to plasticity, since the linear relationships between the DIC and integrated DIC results would then not be observed. The explanation comes from the fact that the DIC analyses are performed at the macroscale (i.e., on the two-phase alloy), whereas the DSC analyses only consider the α-phase. In the present case, it is shown that the mean elastic strains evaluated in the α-phase are about 73 % of those of the alloy (i.e., at the macroscopic level).
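A ratio of this kind can be extracted by fitting ε_α = a · ε_macro through the origin by least squares over the loading steps. The following sketch uses placeholder data standing in for the measurements of Figure 24 (the actual values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: macroscopic strain (DIC) and alpha-phase elastic
# strain (DSC) at each of the ten analyzed steps
eps_macro = np.linspace(0.0, 9.0e-3, 10)
eps_alpha = 0.73 * eps_macro + rng.normal(0.0, 5e-5, 10)

# Least-squares slope of eps_alpha = a * eps_macro (line through origin)
a = (eps_alpha @ eps_macro) / (eps_macro @ eps_macro)
print(f"alpha-phase / macroscopic strain ratio ~ {a:.2f}")  # ~0.73
```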
This difference is due to the presence of two phases in Ti5553 whose elastic properties differ [START_REF] Hauk | Structural and Residual Stress Analysis by Non Destructive Methods: Evaluation, Application, Assessment[END_REF][START_REF] Fréour | Influence of a two-phase microstructure on XEC and XRD stress analysis[END_REF][START_REF] Sylvain Fréour | Determining ti-17 β-phase singlecrystal elasticity constants through x-ray diffraction and inverse scale transition model[END_REF][START_REF] Martin | Simulation numerique multi-echelles du comportement mecanique des alliages de titane betametastable Ti5553 et Ti17[END_REF][START_REF] Herbig | D short fatigue crack investigation in beta titanium alloys using phase and diffraction contrast tomography[END_REF][START_REF] Duval | Mechanical Properties and Strain Mechanisms Analysis in Ti5553 Titanium Alloy[END_REF]. Contrary to Ti64, which is mostly composed of the α-phase, the presence of 40 wt% β-phase induces a deviation in the determination of macroscopic stresses when X-ray elasticity constants of pure titanium are used; the deviation is estimated at 27 % in the present case.

Figure 25 shows the stress estimates for Ti5553. In this case, the DIC stresses (i.e., the macroscopic stresses) are the reference. Independent tensile tests on the same alloy provided a Young's modulus of 115 GPa and a Poisson's ratio of 0.35. The DIC and integrated DIC stresses are close, which is consistent with the observations of Figure 24. The longitudinal stresses obtained with both DSC approaches are also in good agreement. Using the same elastic parameters for the α-phase as those considered for Ti64, the mean stress in the α-phase amounts to about 69 % of the macroscopic stress. This stress ratio is again due to the difference in elastic properties of the α- and β-phases in this alloy [START_REF] Hauk | Structural and Residual Stress Analysis by Non Destructive Methods: Evaluation, Application, Assessment[END_REF][START_REF] Fréour | Influence of a two-phase microstructure on XEC and XRD stress analysis[END_REF][START_REF] Sylvain Fréour | Determining ti-17 β-phase singlecrystal elasticity constants through x-ray diffraction and inverse scale transition model[END_REF][START_REF] Martin | Simulation numerique multi-echelles du comportement mecanique des alliages de titane betametastable Ti5553 et Ti17[END_REF][START_REF] Herbig | D short fatigue crack investigation in beta titanium alloys using phase and diffraction contrast tomography[END_REF][START_REF] Duval | Mechanical Properties and Strain Mechanisms Analysis in Ti5553 Titanium Alloy[END_REF]. The standard DIC stress uncertainty is equal to 30 MPa.

The shear stresses are again very small (Figure 26); the mean level is 13.5 MPa for IDSC and 15 MPa for DSC. There is therefore a small bias in the present case, which may be due to small misalignments of the goniometer (whose standard uncertainty was estimated to be 2°).

Conclusion

In the first registration route (DSC), the diffractograms are registered for each tilt angle ψ to measure the peak positions; from this information, the stress state is subsequently determined. The second route consists in merging both steps into a single analysis, referred to as integrated diffraction signal correlation (IDSC). It is shown that the latter leads to lower stress resolutions than the former. For Ti64, a very good agreement is observed between the stresses estimated with both DSC techniques, integrated DIC and regular DIC.
For Ti5553, it is shown that the elastic strains in the α-phase are about 73 % of the macroscopic strain, which is due to the difference in elastic properties of the two phases. Further, the fact that the two DSC techniques yield virtually identical results validates the hypothesis of elasticity at the level of the α-phase. For both materials, it is observed that the diffraction angle of the unstressed state is very close to 140° (i.e., the reference level for pure titanium). Stress resolutions and uncertainties have been systematically analyzed for both alloys. In particular, it is shown that the uncertainty associated with the knowledge of the unstressed configuration has a limited impact on the overall levels. With the implemented registration techniques, there is only a 20 % increase of the overall stress uncertainties for Ti5553 with respect to Ti64, even though the peak height is decreased by a factor of 4. This result validates the two registration techniques and shows their robustness, even for Ti5553. Having been validated, the present registration techniques can now be used in other configurations to study, for instance, the surface integrity [START_REF] Lütjering | Titanium[END_REF][START_REF] Guillemot | Prediction of the endurance limit taking account of the microgeometry after finishing milling[END_REF][START_REF] Souto-Lebel | Characterization and influence of defect size distribution induced by ball-end finishing milling on fatigue life[END_REF] associated with different milling conditions of Ti5553 [START_REF] Cox | The effect of finish milling on the surface integrity and surface microstructure in Ti-5Al-5[END_REF]. They may also be used in benchmarks with other materials and procedures [START_REF] Lefebvre | External reference samples for residual stress analysis by x-ray diffraction[END_REF][START_REF] Suominen | Residual Stress Measurement of Ti-Metal Samples by Means of XRD with Ti and Cu Radiation[END_REF].

Fig. 1: Micrographs of Ti64 (a) and Ti5553 (b) alloys
Fig. 2: (a) EBSD orientation map of the Ti64 sample (α-indexation). The black box is enlarged in sub-figure (b)
Fig. 3: (a) EBSD orientation map of the Ti5553 sample (β-indexation). The black rectangle is enlarged in (b). The boxed detail is shown in Figure 4
Fig. 4: EBSD orientation map of the Ti5553 sample (zoom of Figure 3(b)). (a) α-indexation, and (b) β-indexation
Fig. 5: Diffractograms of the two studied alloys and of pure titanium powder (the same dynamic range is used for all plots)
Fig. 6: In-situ tensile test enabling XRD stress analyses
Fig. 7: Sketch of the linear position sensor (LPS), X-ray source (X) and analyzed sample S in the goniometer (after Ref. [12])
Fig. 8: Sketch of a rotation of ψ = −50° around the χ axis and locations reached thanks to the Eulerian cradle of the χ-method goniometer (after Ref. [12])
Fig. 9: Coupon for in-situ tests. (a) Dog-bone geometry (ligament width: 6 mm, thickness: 0.5 mm, radius of the hour-glass shape: 65 mm). (b) Speckle pattern used for the DIC procedures, with an unpainted area to allow for XRD measurements. The area probed by the X-ray beam is depicted in orange
Fig. 10: General procedure used in stress analyses
Fig. 11: Linear position sensor (LPS) and goniometer geometry

½S₂^{213} and S₁^{213} are the X-ray elasticity constants depending on the diffracting planes, and ε̃_{ψ=0} = ln(sin θ₀) − S₁^{213} tr(σ) will be referred to as the composite strain. The normal and shear stresses are denoted by σ_φ ≡ σ_φφ and τ_φ ≡ σ_φ3, respectively. In the present case, elastic isotropy and homogeneity are assumed, since the anisotropy in the {213} planes of the α lattice is relatively small.

Fig. 12: Peak model and notations
Fig. 14: Loading history during the in-situ tensile test on the Ti64 alloy. The standard stress uncertainty is equal to 16 MPa
Fig. 15: Root mean square residual for the Ti64 alloy for the two DSC analyses
Fig. 17: Longitudinal stress σ_φ in the Ti64 alloy coupon measured with four different evaluation techniques. The error bars depict the root mean square difference between the DSC or IDSC estimates and the applied stress. The standard applied stress uncertainty is equal to 16 MPa
Fig. 18: Stress resolutions and errors for the Ti64 alloy coupon. The standard applied stress uncertainty is equal to 16 MPa
Fig. 20: Estimation of 2θ₀ for the Ti64 alloy from the analysis of the composite strain. The standard applied stress uncertainty is equal to 16 MPa
Fig. 21: Loading history during the in-situ tensile test on the Ti5553 alloy. The standard DIC stress uncertainty is equal to 30 MPa
Fig. 23: Root mean square residual for the Ti5553 alloy for the two DSC analyses. The standard DIC stress uncertainty is equal to 30 MPa
Fig. 24: Longitudinal elastic strains for the 10 loading steps with the four techniques
Fig. 25: Longitudinal stresses σ_φ in the Ti5553 alloy evaluated via different techniques. The error bars depict the root mean square difference between the DSC or IDSC estimates and the DIC stresses
Fig. 26: Shear stress τ_φ in the Ti5553 alloy coupon evaluated with DSC and IDSC. The error bars depict the root mean square difference between the DSC or IDSC estimates and a vanishing shear stress. The standard DIC stress uncertainty is equal to 30 MPa
(The captions of Figs. 13, 16 and 22 appear in the body text; the captions of Figs. 19, 27 and 28 were not recovered.)

Table 2: Chemical composition of Ti5553

Acknowledgements
This work was funded by Safran Landing Systems. The authors acknowledge Pierre Mella for providing the Ti64 alloy and for useful discussions on XRD analyses. The authors also thank Adam Cox for providing the Ti5553 alloy samples and Thierry Bergey for electropolishing them.
49,967
[ "9087" ]
[ "247321", "211916", "469296", "247321" ]
01744748
en
[ "sde" ]
2024/03/05 22:32:07
2018
https://amu.hal.science/hal-01744748/file/garcia.pdf
Ana Paula García-Nieto, Ilse R. Geijzendorffer, Francesc Baró, Philip K. Roche, Alberte Bondeau, Wolfgang Cramer

Impacts of urbanization around Mediterranean cities: changes in ecosystem service supply

Keywords: Land cover, population, urban, rural, spatial analysis, trend, nature's contributions to people

Urbanization is an important driver of changes in land cover in the Mediterranean Basin and it is likely to impact the supply and demand of ecosystem services (ES). The most significant land cover changes occur in the peri-urban zone, but little is known about how these changes affect ES supply. For eight European and four North African cities, we have quantified changes in peri-urban land cover over periods of sixteen years (1990-2006) for the North African cities and twenty-two years (1990-2012) for the European cities, respectively. Using an expert-based method, we derived quantitative estimates of the dynamics in the supply of twenty-seven ES. The nature of land cover changes differed slightly between European and North African Mediterranean cities, but overall urban land cover increased and agricultural land decreased. The capacity of the peri-urban areas of Mediterranean cities to supply ES generally decreased over the last 20-30 years. For nine ES the potential supply actually increased, for all four North African cities and three out of the eight European cities. Across all cities, the supply of the ES timber, wood fuel, and religious and spiritual experience increased. Given the expected increase of the urban population in the Mediterranean Basin and the current knowledge of ES deficits in urban areas, the overall decrease in the ES supply capacity of peri-urban areas is a risk for human well-being in the Mediterranean and poses a serious challenge for the Sustainable Development Goals in the Mediterranean Basin.

Introduction

Approximately two thirds of the world's population (i.e., 6.4 billion people in the median projection), and 84% in Europe, will be living in urban areas by 2050. In 2014, more than half of the global population was already urban, while in Europe this share was 70% ([START_REF] Kabisch | Diversifying European agglomerations: evidence of urban population trends for the 21st century[END_REF]; United Nations, 2015a, 2014). The increase in total population entails a corresponding increase in the demand for natural resources [START_REF] Ma | MA) Millennium Ecosystem Assessment, 2005. Ecosystems and Human Well-being: Synthesis[END_REF], particularly for energy and water; the demand for water is expected to increase by 55% between 2000 and 2050 (United Nations World Water Assessment Programme, 2014). The effect of urban population growth on peri-urban landscapes is expected to be particularly prominent, since urban land cover increases even faster than would be expected from demographic pressure alone, resulting in substantial land use conversions [START_REF] Angel | The dimensions of global urban expansion: Estimates and projections for all countries, 2000-2050[END_REF][START_REF] Seto | A Global Outlook on Urbanization[END_REF][START_REF] Seto | Global forecasts of urban expansion to 2030 and direct impacts on biodiversity and carbon pools[END_REF][START_REF] Seto | A Meta-Analysis of Global Urban Land Expansion[END_REF]. Urban populations in the countries around the Mediterranean Sea increased from 152 million to 315 million between 1970 and 2010 (an average rate of 1.9 % per year) (UNEP/MAP, 2012).
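That average rate follows directly from the two endpoints; a quick worked check (a computation added here, not from the paper):

```python
# Implied average annual growth: 152 -> 315 million over 1970-2010
r = (315 / 152) ** (1 / 40) - 1
print(f"{100 * r:.1f}% per year")  # ~1.8%, consistent with the quoted 1.9%
```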
By 2030, the Mediterranean Basin will be the global biodiversity hotspot with the highest percentage of urban land (5%) [START_REF] Elmqvist | History of Urbanization and the Missing Ecology[END_REF]. Urbanization rates have been accelerated by environmental change; for example, intense drought conditions contributed to a rural exodus in Morocco between 1980 and 1990, and in Algeria and Tunisia in 1999 (FAO, 2001; [START_REF] Hervieu | Rethinking rural development in the Mediterranean[END_REF]). Tourism and housing development have led to the development of infrastructure close to coastal areas and near culturally important cities [START_REF] Eea | Biogeographical regions in Europe. The Mediterranean biogeographical regionlong influence from cultivation, high pressure from tourists, species rich, warm and drying[END_REF][START_REF] Houimli | The factors of resistance and fragility of the littoral agriculture in front of the urbanization: the case of the region of North Sousse in Tunisia[END_REF]. Mediterranean cities are considered attractive places to settle for retirees from northern Europe [START_REF] Membrado-Tena | Costa Blanca: Urban Evolution of a Mediterranean Region through GIS Data[END_REF], and for return migrants to the Maghreb countries [START_REF] Cassarino | Return migrants to the Maghreb Countries: Reintegration and development challenges[END_REF]. The growth of urban areas often takes place at the expense of agricultural land, which can potentially lead to environmental degradation and socio-economic challenges [START_REF] Orgiazzi | Global soil biodiversity atlas[END_REF]. Although probably the most studied, the direct conversion of agricultural land into urban land is only one of the many impacts of urbanization on the structure and function of ecosystems and their services [START_REF] Mcdonnell | The use of gradient analysis studies in advancing our understanding of the ecology of urbanizing landscapes: current status and future directions[END_REF][START_REF] Modica | Spatio-temporal analysis of the urban-rural gradient structure: an application in a Mediterranean mountainous landscape[END_REF]. Examples of other impacts of urbanization include changes in demand patterns [START_REF] Bennett | Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability[END_REF][START_REF] García-Nieto | Mapping forest ecosystem services: From providing units to beneficiaries[END_REF][START_REF] Schulp | Uncertainties in Ecosystem Service Maps: A Comparison on the European Scale[END_REF], the construction of infrastructure for water distribution facilities, energy plants and internet connections [START_REF] Kasanko | Are European cities becoming dispersed?: A comparative analysis of 15 European urban areas[END_REF][START_REF] Seto | Global forecasts of urban expansion to 2030 and direct impacts on biodiversity and carbon pools[END_REF], agricultural land abandonment [START_REF] Hasse | Land resource impact indicators of urban sprawl[END_REF][START_REF] Hervieu | Rethinking rural development in the Mediterranean[END_REF], and the protection of traditional landscapes with the aim of maintaining aesthetic quality [START_REF] Baró | Mapping ecosystem service capacity, flow and demand for landscape and urban planning: a case study in the Barcelona metropolitan region[END_REF].
Urbanization may also affect the diversity of the landscape, with agricultural land being managed by hobby farmers rather than for commercial production [START_REF] Jarosz | The city in the country: Growing alternative food networks in Metropolitan areas[END_REF][START_REF] Zasada | Multifunctional peri-urban agriculture-A review of societal demands and the provision of goods and services by farming[END_REF]. As these examples show, the influence of urban areas on ecosystems extends well beyond the urban boundaries [START_REF] Lead | Drivers of change in ecosystem condition and services[END_REF], but it is unclear how the changes in peri-urban landscapes affect human well-being. A growing number of studies of human well-being and quality of life in urban areas focus on the benefits provided by natural elements within cities, so-called urban ecosystem services [START_REF] Bolund | Ecosystem services in urban areas[END_REF][START_REF] Kremer | Key insights for the future of urban ecosystem services research[END_REF]. Many of these studies have found that the natural elements currently present in cities often do not provide ecosystem services (ES) in sufficient quantities compared to the demand for these services [START_REF] Folke | Ecosystem Appropriation by Cities[END_REF][START_REF] Jansson | Reaching for a sustainable, resilient urban future using the lens of ecosystem services[END_REF]. [START_REF] Baró | Mismatches between ecosystem services supply and demand in urban areas: A quantitative assessment in five European cities[END_REF] have recently shown mismatches between ES supply and demand for five European cities. These mismatches may depend on many factors, e.g. differences in the spatial distribution of goods and needs, or access restrictions to resources for particular groups, such as women [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. Some of these factors can be addressed through governance or land use management [START_REF] Jansson | Reaching for a sustainable, resilient urban future using the lens of ecosystem services[END_REF]. Land cover changes around cities potentially affect ES supply, and these changes may therefore reduce or enhance ES deficits within cities. The objective of this study is to assess how growing urban areas in the Mediterranean Basin modify peri-urban landscapes and, consequently, ES supply. With the current urban population expected to increase to 385 million people by 2025 (UNEP/MAP, 2012) and the objective of improved human well-being of the Sustainable Development Goals (United Nations, 2015b), an increase of ES is required for this growing urban population. There is therefore a particular need to assess the recent dynamics in ES supply, both within cities and in their peri-urban areas. The European Mediterranean areas are estimated to be particularly vulnerable with respect to ES supply, mostly due to climate and land use change [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF].
Although similar studies for the North African Mediterranean countries are missing [START_REF] Nieto-Romero | Exploring the knowledge landscape of ecosystem services assessments in Mediterranean agroecosystems: insights for future research[END_REF], it is highly likely that these countries are subject to similar, if not higher, anthropogenic pressures, experience more rapid population increases and are undergoing significant landscape changes. The importance of this knowledge gap goes beyond its implications for regional assessments, as it additionally increases the uncertainty of supra-regional assessments of sustainable futures. The need to evaluate land use and land cover changes and their impacts for future conditions is increasingly being recognized [START_REF] Eea | Biogeographical regions in Europe. The Mediterranean biogeographical regionlong influence from cultivation, high pressure from tourists, species rich, warm and drying[END_REF][START_REF] Fichera | GIS and Remote Sensing to Study Urban-Rural Transformation During a Fifty-Year Period[END_REF]. To inform improvements in land management, land use and land cover assessments should take into account spatial and temporal patterns along urban-rural gradients [START_REF] Kroll | Rural-urban gradient analysis of ecosystem services supply and demand dynamics[END_REF]. Previous land use and land cover assessments in the Mediterranean have focused on areas where spatial data was available and, as a consequence, the North African region has received little attention [START_REF] Haase | A quantitative review of urban ecosystem service assessments: concepts, models, and implementation[END_REF][START_REF] Luederitz | A review of urban ecosystem services: six key challenges for future research[END_REF]. Also, most studies to date focus on single-city case studies and are limited to the dense urban fabric. A multi-city analysis therefore fills an important knowledge gap by allowing for a comparison of the impacts of urbanization on peri-urban land and its consequences for ES supply in the Mediterranean Basin. For this study we selected both European and North African Mediterranean cities: eight Mediterranean European cities (Lisbon, Madrid, Barcelona, Marseille, Florence, Rome, Athens, Thessaloniki) and four North African cities (Nabeul, Sfax, Tunis, Rabat).

Material and methods

For this study, we analyzed: 1) whether land cover changes around cities differed significantly from trends at the national level; 2) whether specific land cover conversions over time were common to groups of cities; and 3) whether the spatio-temporal patterns of ES supply over the period 1990-2012 were shared among these Mediterranean cities, depending on data availability. The assessment was carried out in six steps. First, we selected twelve major Mediterranean cities as case studies. We used a systematic approach to define the peri-urban area for each city. Based on time series of available land cover maps (Fig. 1), we assessed land cover changes in each peri-urban area and compared them with national dynamics per country. In addition, we identified the main patterns in land cover changes across all peri-urban areas. Finally, we combined the identified changes in land cover with expert-based estimates of ES supply [START_REF] Stoll | Assessment of ecosystem integrity and service gradients across Europe using the LTER Europe network[END_REF] and searched for specific or general dynamics in ES supply.
Data were available for the period 1990-2006 for the North African cities, and for 1990-2012 for the European cities (Table 1). These periods allowed for the analysis of important dynamics and they correspond to the expert-based ES estimates used (see below).

-Insert Fig. 1 around here -

Selection of Mediterranean cities

In the selection of cities we aimed to achieve a geographical distribution across the Mediterranean biogeographical region [START_REF] Olson | Terrestrial Ecoregions of the World: A New Map of Life on Earth A new global map of terrestrial ecoregions provides an innovative tool for conserving biodiversity[END_REF] (Fig. 2), with special attention to including cities on both the northern and southern Mediterranean shores. An additional criterion was that land cover data should be available for at least two moments in time. These criteria allowed for the selection of twelve cities in total, four in North Africa (Nabeul, Sfax, Tunis, Rabat) and eight in Southern Europe (Lisbon, Madrid, Barcelona, Marseille, Florence, Rome, Athens, Thessaloniki). Spatial land cover data is available for the entire Mediterranean Basin, but the categories, spatial resolution and time series differ (Fig. 1). CORINE Land Cover (CLC) [START_REF] Feranec | European landscape dynamics: CORINE land cover data[END_REF] is a spatial database with a resolution of 100 m; it is available for all European countries for the years 1990, 2000, 2006 and 2012 (Table 1). For the North African countries, CLC is available only for 1990. We used GlobCorine Land Cover (GLC) [START_REF] Bontemps | GlobCorine-A joint EEA-ESA project for operational land dynamics monitoring at pan-European scale[END_REF] to include another point in time (2006) for these countries. The GLC land cover map was developed by the European Environmental Agency and the European Space Agency with the aim of ensuring compatibility with CLC (Appendix 1).

-Insert Fig. 2 around here -

Defining the peri-urban areas

There are many different approaches to defining the urban and peri-urban areas of a city [START_REF] Orgiazzi | Global soil biodiversity atlas[END_REF]. For our study we searched for a simple, yet objective delineation of urban areas that could be adapted to include peri-urban areas. We defined the peri-urban area as the rural area located around, and in proximity to, the urban area. In addition, the delineation method had to be able to deal with the differences in data resolution between the European and North African Mediterranean countries. The approach published by [START_REF] Kasanko | Are European cities becoming dispersed?: A comparative analysis of 15 European urban areas[END_REF] assumes a fixed relationship to estimate the boundary of the urban area, separating it into the urban core area (A) and the adjacent urban area (W_u) (equation not shown). For our study, we used this method to additionally define the peri-urban areas (W_p). To parameterize the equation of [START_REF] Kasanko | Are European cities becoming dispersed?: A comparative analysis of 15 European urban areas[END_REF] for the boundary of the peri-urban area (W_p), we used the peri-urban estimate published by [START_REF] Kroll | Rural-urban gradient analysis of ecosystem services supply and demand dynamics[END_REF], obtaining a general equation for the peri-urban areas (equation not shown; see Fig. 3). By using these criteria, the widths of the adjacent urban area and of the peri-urban area are assumed to depend only on the area of the urban core of each city. Table 2 shows the resulting urban core areas and the corresponding W_u and W_p areas for 2006. The urban core area (A) was computed for the Mediterranean cities using the urban land cover determined by CLC and GLC, with 2006 as the reference time period. In the case of CLC, from the 44 land cover classes (Appendix 2) we selected the polygons belonging to continuous and discontinuous urban fabric (categories 111 and 112) whose centroid was inside the administrative boundary of the city. From the fourteen land cover classes of GLC (Appendix 1), class 10 (urban and associated areas) was used to determine the urban core area. In a second step, we repeated this process including those polygons whose centroid was within a radius of 1 km of the already selected urban polygons, to calculate the final size and location of the urban core area (A).

-Insert Fig. 3 around here -
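The core-delineation steps just described can be sketched with standard GIS tooling. The snippet below is an illustrative reconstruction, not the authors' ArcGIS workflow: the file names and the "code" column are placeholders, a metric CRS is assumed, and the Kasanko/Kroll width equations (not reproduced above) are left out.

```python
import geopandas as gpd

clc = gpd.read_file("clc_2006.gpkg")                 # hypothetical file names
boundary = gpd.read_file("city_boundary.gpkg").geometry.iloc[0]

# CLC polygons of continuous/discontinuous urban fabric (classes 111, 112)
# whose centroid lies inside the administrative boundary
urban = clc[clc["code"].isin(["111", "112"])]
core = urban[urban.centroid.within(boundary)]

# Second pass: also include urban polygons whose centroid lies within
# 1 km of the polygons already selected
near = urban[urban.centroid.distance(core.unary_union) <= 1000]
core = urban.loc[core.index.union(near.index)]

A_km2 = core.area.sum() / 1e6                        # urban core area A (km2)
print(f"urban core area A = {A_km2:.1f} km2")
```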
Identification of land cover changes

Focusing the analysis on the peri-urban area (W_p), we identified land cover changes from 1990 to 2012 for the European cities and from 1990 to 2006 for the North African cities. Spatial land cover data was extracted for the different time periods for each city and its country, considering the boundaries of the Mediterranean biogeographical region [START_REF] Olson | Terrestrial Ecoregions of the World: A New Map of Life on Earth A new global map of terrestrial ecoregions provides an innovative tool for conserving biodiversity[END_REF]. The land cover data (area by land cover category) for each year in each W_p area was normalized by the size of that peri-urban area, to make it comparable across the different case studies. The same normalization was made for each country based on the proportion of that country classified as Mediterranean; in the remainder of this text we refer to this area as the "national area". The spatial information was analyzed using ArcGIS 10.2.2 (ESRI, 2013). To allow for the comparison between land cover categories from CLC (1990) and GLC (2006) in North Africa, we developed weighing factors using Andalusia and Sicily as the sampling sites most closely representative of the North African Mediterranean setting. The spatial information of CLC and GLC in 2006 was intersected and extracted to measure the contribution of each CLC category to each GLC class. To transform the spatial GLC information of the North African cities into information on each of the 44 CLC classes, we defined X_x as the area of a specific GLC category and computed weighing factors (W_x) based on how the GLC categories in Andalusia and Sicily were composed of the different CLC categories. We applied these weighing factors in all calculations on North African areas to transform all GLC data into CLC data, allowing multiplication with the Stoll capacity matrix. This means that the surface of each CLC category (Y_x) is equal to the surface (km²) of each GLC category (X_x) multiplied by the weighing factors (W_x) (Equation 1).

Equation 1: Y_x = W_x · X_x

For the statistical analysis, the CLC categories were summarized into 7 different CLC groups (Table 3). To obtain the urbanization trends in the Mediterranean cities, we estimated the total standardized surface per CLC group in W_p and compared it with the total standardized land cover of the respective Mediterranean parts of the countries (Portugal, Spain, France, Italy and Greece; Tunisia and Morocco) over the different periods.
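The per-category transformation of Equation 1 amounts to a crosswalk-matrix multiplication. A minimal sketch with an illustrative 3×3 excerpt (the real matrix maps 14 GLC classes onto 44 CLC classes; all values below are placeholders):

```python
import numpy as np

# W[g, c]: share of GLC class g attributed to CLC class c, as calibrated
# on Andalusia and Sicily (each row sums to 1)
W = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

X_glc = np.array([120.0, 45.0, 300.0])  # km2 per GLC class (placeholder)

Y_clc = X_glc @ W                       # Equation 1: km2 per CLC class
print(Y_clc, Y_clc.sum())               # total area is preserved
```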
Assessments for Europe and North Africa followed the same methods, but were applied separately because the uncertainties and the methods behind the input data are different.

Conversion of land cover changes into ecosystem services supply dynamics

Following the approach developed by [START_REF] Burkhard | Landscapes' capacities to provide ecosystem services-a concept for land-cover based assessments[END_REF][START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF], we related land cover data to expert-based values of the capacity for ES supply. In a recent study, [START_REF] Stoll | Assessment of ecosystem integrity and service gradients across Europe using the LTER Europe network[END_REF] developed an ES supply capacity matrix based on CLC types combined with expert-based estimates of the supply capacity for thirty-one ES for European countries. Capacity estimates for ES supply range from 0 (no relevant capacity of the land cover type to provide this particular ecosystem service) to 5 (very high capacity). To assess how land cover changes around the case study cities influence ecosystem service supply over time, we translated our land cover changes into ES dynamics using the ES supply matrix published by [START_REF] Stoll | Assessment of ecosystem integrity and service gradients across Europe using the LTER Europe network[END_REF]. Estimates for each ES (ES_n) at the different time periods in every peri-urban area were calculated by multiplying the area of each CLC class (X_x) by the corresponding value from the ES matrix (ESstoll_xn) and summing over the classes (Equation 2). The resulting ES assessment included twenty-seven ES supplied by peri-urban landscapes.

Equation 2: ES_n = Σ_x X_x · ESstoll_xn

Statistical analysis

The statistics chosen in this study reflect the type of variables, the sampling distribution and the scientific objectives: 1) analysis of land cover changes around cities and comparison with trends at the national level; 2) identification of specific land cover conversions common among cities over time; and 3) assessment of spatio-temporal patterns of ecosystem service supply over the period 1990-2012 in Mediterranean peri-urban areas. For each objective, a group of statistical analyses was conducted. To assess whether land cover change patterns around cities differ from trends at the national level, the standardized total surface of the land cover groups was statistically compared over time. For this purpose, we selected the non-parametric Wilcoxon test and the parametric two-sample t-test. Given the sampling distribution of CLC and the paired groups, the non-parametric Wilcoxon test was suitable for permanent crops, complex cultivation patterns, and shrub and/or herbaceous vegetation. Based on the assumptions of a normal sampling distribution and paired groups, the parametric two-sample t-test was conducted for urban, non-irrigated and irrigated arable land, and forest land cover groups. Multivariate analyses were conducted to identify land cover changes over time and to detect spatio-temporal trends in ES supply around the Mediterranean cities. Variables included land cover data for the European cities (for the years 1990, 2000, 2006 and 2012, see Appendix 3) and for the North African cities (for 1990 and 2006, see Appendix 4). The data for the European cities was assessed using Within-class Correspondence Analysis (WCA) [START_REF] Benzécri | Analyse de l'inertie intraclasse par l'analyse d'un tableau de correspondance[END_REF][START_REF] Chessel | Méthodes K-tableaux[END_REF] through the "within.coa" function in the R package ade4 [START_REF] Rstudio | RStudio: Integrated development environment for R (Version 0.96.122[END_REF]. Data for the European and North African cities was analyzed separately due to the differences in the input data (as discussed in section 2.3).
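The matrix step of Equation 2 can be illustrated with a small excerpt. The sketch below uses placeholder scores (the actual capacity matrix covers 44 CLC classes and, here, twenty-seven ES, with scores from 0 to 5):

```python
import numpy as np

# ES_stoll[x, n]: expert capacity score (0..5) of CLC class x for ES n
ES_stoll = np.array([[0, 1, 0],
                     [5, 2, 1],
                     [2, 4, 5]])           # illustrative 3x3 excerpt

# Area (km2) of each CLC class in one peri-urban ring for one year
X = np.array([120.0, 310.0, 95.0])

ES = X @ ES_stoll  # Equation 2: area-weighted supply estimate per ES
print(ES)
```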
Changes in ES supply in the peri-urban areas over time were estimated by conducting a Within-class Principal Component Analysis (WCP) [START_REF] Benzécri | Analyse de l'inertie intraclasse par l'analyse d'un tableau de correspondance[END_REF][START_REF] Chessel | Méthodes K-tableaux[END_REF], evaluating the ES estimates for the European and North African peri-urban areas separately (see Appendices 5 and 6). WCA and WCP are similar to a standard Correspondence Analysis with a single constraining factor removed [START_REF] Chessel | Méthodes K-tableaux[END_REF]. As strong differences between cities may mask patterns over time, we used these analyses to compare the spatio-temporal variations of land cover distributions and ES supply, alternately removing the effect of the city and of the year variable as constraining factors.

Results

Land cover change under urbanization

Common patterns emerge for all selected Mediterranean cities when comparing land cover patterns over time between the selected cities and the trends in their respective countries (Fig. 4A and 4B). Overall, changes were more pronounced in the peri-urban areas than at the national level. As expected, all peri-urban areas showed a significant increase of urban fabric. Around the European cities this took place mostly at the expense of complex cultivation patterns, non-irrigated and irrigated arable land, and shrub and/or herbaceous vegetation and pastures, from 1990 to 2012. The parametric two-sample t-test revealed significant differences over time between the peri-urban areas of the selected cities and the countries in Europe for urban, non-irrigated and irrigated arable land, and forest (p-value = 0.05) (Fig. 4A). In the North African peri-urban areas, the increase of urban area from 1990 to 2006 occurred in parallel with an increase in irrigated arable land, permanent crops, complex cultivation patterns, and shrublands and/or herbaceous vegetation and pastures, at the expense of non-irrigated arable land and forest, both in the peri-urban areas and at the national level (Fig. 4B).

-Insert Fig. 4 around here -

Mediterranean peri-urban areas: spatio-temporal dynamics in land cover

Within-class Correspondence Analyses (WCA) were performed separately to assess how land cover changes differed over time in the peri-urban areas of the Mediterranean European cities (from 1990 to 2012) and of the North African cities (from 1990 to 2006) (Appendix 7). Results for the European cities (with the city variable removed) showed that 4.56% of the variation was due to temporal patterns in land use, whereas when the WCA was based on the time period or year (enhancing the differences between cities), 97.87% of the variation was due to the differences between cities and peri-urban land uses. This means that the differences in surrounding land use between cities are a dominant pattern that masks common trends if it is not removed beforehand. In the case of the European cities (Fig. 5A), different dynamics of land cover change over time occurred in the peri-urban areas. A clear pattern of change from non-irrigated arable land in 1990 (negative scores on F1) to urban in 2012 (positive scores on F1) was identified in Madrid and Thessaloniki. Barcelona, Lisbon and Athens showed transitions from complex cultivation patterns, shrublands and pastures (negative scores on F2) to forest (positive scores on F2) in 1990, and to urban in 2012 (positive scores on F1). The peri-urban areas of Marseille and Florence were characterized by a transition from irrigated arable land (negative scores on F1) towards urban. Land cover change patterns in the peri-urban area of Rome are less pronounced, but mostly correspond to permanent crops, shrublands and pastures in 1990 transforming into urban land in 2012. In conclusion, for the European cities the temporal gradient was dominated by an increase of urban areas and marginal changes in agricultural land uses.

Results from the WCA for the North African cities (with the city variable removed) showed that 62.86% of the variation was due to temporal patterns in land use, whereas when the year variable was removed, 82.18% of the variation was due to differences between cities and land uses. The North African peri-urban areas (Fig. 5B) in 1990 were characterized by a clear pattern from non-irrigated arable land and forest (negative scores on F1) to an increase in 2006 of urban land, shrubs and associated herbaceous vegetation and pastures (positive scores on F2).
The peri-urban areas of Sfax also showed transitions from permanent crops (positive scores on F1) to urban land, shrubs and associated herbaceous vegetation and pastures (positive scores on F2), with an increase of irrigated arable land.

-Insert Fig. 5 around here -

Ecosystem services supply: spatio-temporal patterns

The supply of ES (Appendices 5 and 6) was estimated through multivariate within-class Principal Component Analysis (WCP), considering the European and North African cities over their respective periods. Trends in ES supply differed between the EU cities (Fig. 6).

-Insert Fig. 7 around here -

All Mediterranean peri-urban areas show increases in the supply of air quality regulation, timber, wood fuel, and religious and spiritual experience (Fig. 8). The European peri-urban areas show small or negative trends for the other ES. The North African peri-urban areas show much larger changes than those found in the European peri-urban areas, but this may be caused by the differences between the CLC and GLC data. In general, the North African peri-urban areas showed stronger increases of ES supply capacities than the European peri-urban areas. In addition to the previously mentioned ES, the supply capacity of pollination, livestock, and religious and cultural heritage increased in the North African peri-urban areas. The peri-urban areas do not show the same patterns of ES supply over time (Fig. 9). In some peri-urban areas the supply of regulating and cultural ES was more important than that of provisioning ES (Lisbon, Barcelona, Marseille, Florence, Athens and Sfax), or the inverse in the case of Thessaloniki. In other cases, the supply of regulating ES was more important than that of provisioning and cultural ES (Madrid and Rome). Cultural ES increased around Nabeul, Rabat and Tunis.

-Insert Fig. 8 around here -

Discussion

Changes in land cover

Land cover changes in the European Mediterranean peri-urban areas showed an expansion of urban and forested areas at the expense of agricultural land, similar to that described by d'[START_REF] Amour | Future urban land expansion and implications for global croplands[END_REF] and [START_REF] Depietri | The urban political ecology of ecosystem services: The case of Barcelona[END_REF]. Especially irrigated (Marseille and Florence) and non-irrigated arable land (Madrid and Thessaloniki) were reduced. Around Barcelona, Lisbon and Athens, complex cultivation patterns, shrublands and pastures were more commonly abandoned. In Rome's peri-urban area, where the urban area increased less, general land cover change patterns are also less pronounced. The peri-urban areas of all North African cities showed the same general pattern of increases in urban land, but this coincided with increases of agricultural land, herbaceous associated vegetation and pastures, while irrigated agriculture and forest areas were reduced around Rabat, Nabeul and Tunis. In the peri-urban area of Sfax the area of irrigated arable land increased, while complex cultivation systems were reduced in area. This expansion of both urban and agricultural areas in North Africa had been observed earlier [START_REF] Bouraoui | L'agriculture, nouvel instrument de la construction urbaine? Étude de deux modèles agri-urbains d'aménagement du territoire: le plateau de Saclay, à Paris, et la plaine de Sijoumi[END_REF]. Previous studies focused on Europe demonstrated an expansion of woodlands on abandoned and marginal agricultural land [START_REF] Eea | Biogeographical regions in Europe.
The Mediterranean biogeographical region - long influence from cultivation, high pressure from tourists, species rich, warm and drying[END_REF][START_REF] Zanchi | Afforestation in Europe final version 26/01/07[END_REF]. Indeed, our results showed an increase in forest and shrublands in Europe at both the peri-urban and national level. For the North African cities and countries, however, it is particularly shrublands that increased, while the forest area showed a relatively slight decline.

-Insert Fig. 9 around here -

Implications of ES trends in peri-urban areas

The Mediterranean Basin faces some serious challenges in advancing on the global Sustainable Development Goals (UNEP/MAP, 2016), to which the ES supplied by peri-urban areas could positively contribute. In general, the ES supply capacity of peri-urban Mediterranean areas decreased over time, in particular for provisioning and regulating ES. If we consider the ES that are of most immediate concern for ensuring human well-being, i.e. the supply of food, water and protection from hazards, then the European and North African peri-urban regions showed decreasing trends for the supply of food and protection from hazards. The supply of freshwater in Europe remained constant, but the North African peri-urban areas showed an increased supply of freshwater. This is linked to a growing surface of continental water bodies, since several dams and reservoirs have been built to address issues related to water scarcity and the strong interannual variability of precipitation [START_REF] Tramblay | Future water availability in North African dams simulated by high-resolution regional climate models[END_REF]. However, as the ES of water purification decreased (Fig. 8), caution is warranted as to the use of this water for all purposes. The total area for crop production decreased over time in the peri-urban areas, but this does not necessarily mean that total food production also decreased: changes in farm management may have increased the productivity of the remaining agricultural land. Also, since these Mediterranean cities already rely heavily on food imports [START_REF] Lead | Drivers of change in ecosystem condition and services[END_REF][START_REF] Soulard | Peri-urban agro-ecosystems in the Mediterranean: diversity, dynamics, and drivers[END_REF], a reduction in locally produced food is not likely to lead immediately to a food deficit. However, an increased dependence on global food market prices does render countries more vulnerable to potential food crises. Urbanization often implies the expansion of impermeable surfaces, leading to an increase of surface water runoff and, consequently, an increased risk of flooding [START_REF] Gómez-Baggethun | Urban Ecosystem Services[END_REF]. Our results show that the regulation of natural hazards has been decreasing over the years in the Mediterranean peri-urban areas, which poses a particular threat to people living in and around urban and peri-urban areas. Depending on whether the supply of an ES needs to be local in order to provide benefits, the peri-urban area can supply ES to the urban area.
Recent studies on mismatches between ES supply and demand in urban areas have predominantly indicated deficits in local climate regulation (carbon sequestration, urban cooling), air quality regulation, and recreation and nature tourism [START_REF] Baró | Mapping ecosystem service capacity, flow and demand for landscape and urban planning: a case study in the Barcelona metropolitan region[END_REF][START_REF] Baró | Mismatches between ecosystem services supply and demand in urban areas: A quantitative assessment in five European cities[END_REF]. Of these deficits, peri-urban land could provide air quality regulation, reducing this deficit in the future. Although urban areas represent a relatively small area at the global scale, their increase could negatively impact local climate [START_REF] Foley | Global Consequences of Land Use[END_REF][START_REF] Verburg | Challenges in using land use and land cover data for global change studies[END_REF]. Our results indicate that the capacity of Mediterranean peri-urban areas to supply climate regulation has been decreasing. The steady trend, or even increase, of the potential supply of some cultural ES by peri-urban areas (religious and spiritual experience) was not considered in previous assessments of ES around the Mediterranean [START_REF] Nieto-Romero | Exploring the knowledge landscape of ecosystem services assessments in Mediterranean agroecosystems: insights for future research[END_REF][START_REF] Runting | Incorporating climate change into ecosystem service assessments and decisions: a review[END_REF], and this study therefore offers a first reflection on these trends. There are ES supplied by the peri-urban areas which may not actually reach the inhabitants of urban areas; for instance, trees only provide shade locally. We can assume, however, that for other ES for which distances are less relevant (e.g. global climate regulation), or for which people are likely to travel short distances (e.g. recreation and nature tourism), ES supply by peri-urban areas can be considered relevant for urban areas. For instance, the increased potential of cultural ES in peri-urban areas that we found (due predominantly to an increase in non-irrigated agricultural and forest areas) is increasingly relevant for people seeking to spend their leisure time outside urban areas. As the urban population grows, the demand for cultural services, notably in the nearby surroundings of cities, may well increase. Determining whether or not the identified increase in supply will be able to meet this presumed increase in demand would require a more detailed study. Our use of the capacity matrix developed by [START_REF] Stoll | Assessment of ecosystem integrity and service gradients across Europe using the LTER Europe network[END_REF] has several limitations, namely: 1) the capacity matrix is based on expert estimates and only reflects the potential ES supply, which can differ from the actual supply; 2) land cover information does not incorporate the type of management of arable lands and forests, prohibiting estimations of effects due to changes in land use intensity; 3) the matrix essentially represents a European perspective on land use and potential ES supply, so a capacity matrix adapted to the Mediterranean biome would be required to obtain more accurate estimates.
Despite those limitations, we consider that our approach allows the assessment of changing patterns in land cover and ES across Mediterranean peri-urban areas based on openly available data, identifying potential influences on ES supply. The multi-city approach used for this study allowed us to address the complexity of landscapes and management around the Mediterranean Basin, reflected in the different evolution of the supplied ES over time. European peri-urban areas evolved from a bundle of ES provided mainly by agro-ecosystems to one provided by forest and natural vegetation ecosystems. Meanwhile, North African peri-urban areas supplied a bundle of ES from agro-ecosystems and natural vegetation ecosystems that shifted towards rangelands over the years. A first step to improve the estimates presented in this paper would be to take into account land management and its diversity, which is likely to reveal a larger diversity of trends in ES supply than can be obtained by focusing on land cover information only. In a second, future step, the identified trends in ES supply should be confronted with the trends in demand for ES, to evaluate in a quantified manner how the forecasted population increases in cities around the Mediterranean will affect ES deficits.

Conclusion

Mediterranean peri-urban areas can play an important role in contributing to the supply of some ES to nearby urban areas (air quality regulation, timber, wood fuel, and religious and spiritual experience). However, general trends indicated a decrease of ES supply due to land cover changes in Mediterranean peri-urban areas, induced by nearby urbanization.

Figure 1. Land cover information. A: spatial information for 1990 (Corine Land Cover, green); B: spatial information for 2006 (GlobCorine, blue); the spatial information used for 2000, 2006 and 2012 covers only Europe (Corine Land Cover, green).
Figure 2. Mediterranean biogeographical region (Olson et al. 2001) and selected study sites.
Figure 3. Boundaries definition concept and applied example.
Figures 4 and 5: captions not recovered.
Figure 6. Biplots of the Within-class Principal Component Analysis (WCP) for the most statistically significant ecosystem services and their relationship with the European peri-urban areas (i.e., Lisbon, Madrid, Barcelona, Marseille, Florence, Rome, Athens, Thessaloniki). A: variables, B: observations.
Figures 7, 8 and 9: captions not recovered.

Table 1. Land cover data.
Database | Years | Spatial resolution | Cities | LC categories
Corine Land Cover | 1990, 2000, 2006, 2012 | 100 m | Lisbon (Portugal), Madrid (Spain), Barcelona (Spain), Marseille (France), Florence (Italy), Rome (Italy), Athens (Greece), Thessaloniki (Greece) | 44
Corine Land Cover | 1990 | 250 m | Rabat (Morocco), Tunis (Tunisia), Sfax (Tunisia), Nabeul (Tunisia) | 44
GlobCorine | 2006 | 300 m | Rabat (Morocco), Tunis (Tunisia), Sfax (Tunisia), Nabeul (Tunisia) | 14

Table 2. Urban and surrounding areas (km²) for 2006.
City | Urban core | Adjacent urban area Wu | Peri-urban area Wp
Lisbon | 103.39 | 348.61 | 1412.00
Madrid | 175.20 | 431.82 | 2409.41
Barcelona | 74.07 | 219.95 | 1000.45
Marseille | 104.01 | 441.96 | 1849.99
Florence | 43.50 | 145.01 | 657.24
Rome | 296.05 | 1483.68 | 5031.76
Athens | 163.07 | 452.86 | 2107.68
Thessaloniki | 44.77 | 135.46 | 640.91
Rabat | 251.89 | 410.34 | 3451.95
Tunis | 427.33 | 790.00 | 5715.11
Sfax | 149.85 | 209.41 | 1884.03
Nabeul | 23.72 | 43.37 | 350.25

Table 3. CLC categories summarized into CLC groups (only the Urban group was recovered: CLC classes 111, 112, 121, 122, 123, 124, 131, 132, 133, 141, 142).

Appendix 3. Land cover standardized surface in peri-urban European cities from 1990 to 2012 (data table not reproduced).
Appendix 4. Land cover standardized surface in peri-urban North African cities from 1990 to 2006 (data table not reproduced).
Appendix 5. Ecosystem service capacity provision estimated for European peri-urban areas from 1990 to 2012 (data table not reproduced).
Appendix 6. Ecosystem service capacity provision estimated for North African peri-urban areas from 1990 to 2006 (data table not reproduced).
Acknowledgements
This work has received support from the European Union FP7 projects OPERAs (Contract No. 308393, to APGN and WC), EU BON (Contract No. 308454, to IG and WC) and the ECOPOTENTIAL project (Contract No. 641762, to IG). The authors acknowledge Labex OT-Med (ANR-11-LABX-0061), funded by the French Government Investissements d'Avenir program of the French National Research Agency (ANR) through the A*MIDEX project (ANR-11-IDEX-0001-02). Special thanks for valuable advice go to Berta Martín-López, Violeta Hevia-Martín, Claude Napoleone, Marina Cantabrana and Benjamin Mary.
We also thank three anonymous reviewers for their constructive comments. OTHERS CATEGORIES LISBON 10,739 4,419 0,830 0,129 27,795 7,248 11,590 37,250 18,272 2,955 0,513 0,889 23,866 5,928 10,585 36,992 19,499 2,857 0,548 0,865 23,133 5,798 10,263 37,037 19,998 2,649 1,682 0,624 21,883 6,040 10,005 37,119 MADRID 9,969 41,753 4,952 2,057 7,805 7,353 25,769 0,342 16,950 36,069 4,250 1,910 7,135 7,456 25,801 0,430 21,689 33,182 3,873 1,699 6,335 7,247 25,522 0,454 24,829 31,340 3,432 1,755 5,637 13,383 18,864 0,760 BARCELONA 19,749 8,023 4,481 1,261 4,204 15,084 8,750 38,447 22,411 7,078 3,820 1,232 3,768 14,851 8,356 38,483 25,470 6,233 3,082 1,180 3,411 14,914 7,378 38,332 25,990 4,807 2,566 1,294 1,545 17,633 7,888 38,278 MARSEILLE 10,020 2,773 0,000 2,252 9,017 17,166 14,777 43,994 11,291 2,566 0,000 2,144 8,871 17,120 15,056 42,953 11,541 2,530 0,000 2,135 8,738 16,552 15,328 43,176 12,172 2,268 0,000 2,136 8,441 16,364 15,975 42,644 FLORENCE 8,216 15,634 0,000 24,596 15,808 32,859 2,401 0,485 9,658 14,518 0,000 24,552 15,282 33,217 2,166 0,608 10,138 14,168 0,000 24,516 15,210 33,317 2,136 0,516 11,205 12,872 0,000 25,276 14,493 33,694 1,963 0,497 ROME 4,206 21,181 0,000 11,299 16,693 13,739 3,386 29,495 4,378 21,506 0,000 10,200 17,522 13,750 3,052 29,593 4,569 21,341 0,000 10,223 17,476 13,656 3,115 29,620 4,569 21,341 0,000 10,223 17,476 13,656 3,115 29,620 LISBON 104,945 146,094 52,712 76,263 93,706 83,958 104,976 91,634 97,049 165,774 154,361 136,308 93,783 99,078 48,168 90,359 48,780 69,098 113,234 73,984 9,905 186,355 186,888 189,761 71,990 174,361 140,182 81,677 118,387 46,108 66,024 63,712 53,603 91,949 63,890 90,640 126,148 114,791 120,533 81,791 86,971 46,470 75,527 43,413 62,703 88,290 56,307 9,135 165,358 153,388 153,729 88,236 153,579 105,086 91,473 123,161 45,036 64,557 83,133 74,155 89,405 79,142 89,591 143,037 135,594 118,531 89,452 88,420 44,721 73,541 41,897 60,720 108,176 62,028 9,083 189,302 178,143 174,508 91,149 164,917 120,896 79,364 115,634 47,099 65,324 62,740 53,385 92,397 61,996 88,735 121,708 111,654 116,718 84,229 83,894 50,448 73,522 44,639 62,422 91,044 54,868 9,705 162,858 151,809 150,777 92,415 152,091 102,510 MADRID 137,797 199,918 56,863 108,995 111,356 121,370 132,650 110,338 142,564 135,914 148,838 274,137 195,064 135,37 5 90,761 135,202 50,567 79,411 162,637 127,094 4,652 201,209 225,817 261,502 89,716 250,232 148,864 130,689 186,171 57,627 103,281 111,629 121,891 131,774 103,995 140,643 127,803 141,433 242,262 178,412 124,30 3 85,941 118,698 49,981 79,663 158,859 121,162 4,575 204,671 228,131 253,178 101,874 244,149 147,914 123,451 174,879 56,463 98,327 105,735 114,796 128,026 98,353 136,908 122,045 137,988 223,268 169,395 119,79 8 85,268 108,876 48,865 78,175 153,020 113,923 4,896 202,429 221,640 242,784 105,976 236,262 141,380 131,432 188,827 82,577 102,726 118,983 126,120 148,076 106,672 145,994 129,522 144,785 208,887 169,671 127,08 9 78,080 93,980 82,616 101,141 160,224 129,146 5,733 221,928 228,805 232,985 113,354 238,321 152,210 BARCEL ONA 104,597 139,139 88,391 65,725 107,207 106,103 105,470 84,387 109,993 114,706 115,719 90,618 93,764 46,019 43,271 52,729 68,616 85,006 111,394 85,945 3,829 170,836 189,225 175,857 117,118 176,564 136,369 90,790 126,253 86,867 62,620 94,903 93,991 102,600 75,901 107,380 103,571 104,711 81,508 86,375 40,758 38,859 46,318 67,230 82,956 100,715 76,276 3,512 160,800 178,281 163,086 119,090 168,926 124,043 95,075 125,564 86,586 59,491 100,586 99,067 98,178 76,513 105,049 104,425 105,097 72,774 82,363 39,385 
32,015 39,924 66,829 81,916 103,691 78,496 3,458 163,040 179,719 161,798 117,983 165,671 125,924 93,834 130,124 100,189 66,236 106,400 105,671 117,187 80,173 115,280 101,974 103,284 58,125 76,210 37,244 28,351 26,481 77,559 92,762 101,322 83,260 1,991 165,372 183,837 153,062 120,179 164,404 133,256 MARSEI LLE 111,411 157,355 106,099 82,843 131,348 131,976 133,360 108,164 135,612 133,547 131,249 67,212 72,312 45,323 23,159 32,200 73,884 107,622 120,187 114,603 1,850 189,120 215,081 184,777 113,303 172,073 183,176 106,295 151,825 107,475 82,517 126,013 127,594 133,514 107,041 138,397 131,674 128,779 65,940 70,985 44,775 21,833 32,220 73,202 108,238 118,007 116,612 1,643 186,608 215,665 182,889 117,600 172,633 180,830 105,661 149,557 104,906 81,011 124,507 126,091 130,315 105,637 136,935 130,942 128,995 65,374 70,968 44,759 22,872 32,533 70,935 106,469 118,177 114,907 1,635 185,909 214,075 182,918 117,106 171,136 178,250 103,419 146,747 104,621 80,854 122,703 124,202 129,828 104,603 138,177 129,288 128,160 63,504 70,686 43,747 24,031 31,800 70,143 106,866 117,562 113,602 1,643 186,138 214,294 182,850 120,036 171,013 177,062 FLOREN CE 193,919 258,570 195,260 128,613 196,728 192,867 215,729 138,292 177,815 215,029 198,635 241,838 150,413 113,10 9 15,457 75,417 236,077 256,816 189,705 173,841 5,022 339,559 321,765 305,446 182,269 289,536 266,320 193,160 256,651 196,341 127,204 197,859 194,424 216,145 137,630 177,685 213,766 197,500 234,689 147,249 110,60 6 13,717 72,073 236,730 256,974 189,730 173,045 4,846 340,556 322,355 303,497 184,748 288,627 266,316 193,097 256,172 196,780 126,939 198,312 194,873 216,556 137,464 177,697 213,674 197,462 232,615 146,659 109,85 4 13,635 71,143 237,083 257,356 189,882 172,910 4,874 340,651 322,917 303,481 185,496 288,566 266,537 193,283 254,591 199,142 126,494 200,625 197,478 218,644 137,548 178,392 212,681 195,734 226,819 145,902 107,36 4 12,309 66,968 237,373 257,985 190,031 173,269 4,195 345,555 324,019 301,760 187,129 288,851 266,849 ATHENS 113,068 146,677 85,671 82,287 114,243 110,934 129,314 92,356 134,158 148,577 139,569 96,617 78,479 63,284 46,926 65,878 60,262 101,040 120,086 105,884 5,750 181,755 208,424 201,081 18,556 61,988 39,373 92,633 126,313 75,064 77,396 92,753 89,640 117,404 80,931 129,405 132,619 127,237 89,870 74,853 58,148 51,204 63,888 49,300 94,418 105,749 92,215 5,618 164,652 191,603 188,511 18,317 58,793 34,432 110,525 142,303 94,250 77,242 121,486 118,819 129,790 91,032 132,630 139,484 131,413 74,723 74,514 57,621 36,942 44,376 67,672 106,038 121,562 109,146 4,423 190,175 211,711 189,463 19,716 59,197 38,499 88,793 125,752 81,091 74,093 101,116 98,565 120,289 81,288 121,935 124,459 111,326 74,026 68,138 55,001 38,090 42,742 58,631 93,117 105,010 96,477 4,395 180,025 197,139 172,515 19,749 57,520 35,183 THESSA LONIKI 117,510 178,528 55,660 100,109 91,191 94,486 100,697 101,271 116,576 144,519 131,552 240,847 167,882 125,32 8 52,279 129,664 42,002 63,560 134,987 107,776 5,141 162,875 186,291 213,034 318,720 597,785 561,152 100,224 165,622 49,362 95,119 69,034 70,128 91,844 92,801 114,360 133,498 125,396 238,606 165,581 121,46 2 61,864 134,154 37,186 61,732 115,665 97,525 4,977 146,246 169,680 201,209 311,819 555,889 495,746 102,450 155,985 49,606 91,497 81,958 84,854 92,892 90,083 110,168 138,786 123,525 201,948 146,025 113,64 6 51,704 119,170 35,295 61,316 121,027 96,828 4,753 155,449 185,303 203,661 383,423 574,958 543,387 88,737 147,750 49,704 91,528 68,256 71,084 93,205 84,663 110,227 130,477 115,218 201,783 143,016 110,94 6 51,495 
118,924 35,482 61,417 110,027 88,646 4,798 144,593 174,359 192,603 349,169 530,562 499,209
55,631
[ "799075", "19228", "18543" ]
[ "188653", "188653", "522913", "116622", "98227", "302049", "188653", "188653" ]
01718234
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01718234v2/file/main.pdf
Alejandro Gómez-Boix email: alejandro.gomez-boix@inria.fr Pierre Laperdrix email: pierre.laperdrix@inria.fr Benoit Baudry email: baudry@kth.se

Hiding in the Crowd: an Analysis of the Effectiveness of Browser Fingerprinting at Large Scale

Keywords: browser fingerprinting, privacy, software diversity

Browser fingerprinting is a stateless technique, which consists in collecting a wide range of data about a device through browser APIs. Past studies have demonstrated that modern devices present so much diversity that fingerprints can be exploited to identify and track users online. With this work, we want to evaluate if browser fingerprinting is still effective at uniquely identifying a large group of users when analyzing millions of fingerprints over a few months. We analyze 2,067,942 browser fingerprints collected from one of the top 15 French websites. The observations made on this novel dataset shed a new light on the ever-growing browser fingerprinting domain. The key insight is that the percentage of unique fingerprints in this dataset is much lower than what was reported in the past: only 33.6% of fingerprints are unique, as opposed to over 80% in previous studies. We show that non-unique fingerprints tend to be fragile: if some features of the fingerprint change, it is very probable that the fingerprint will become unique. We also confirm that the current evolution of web technologies is benefiting users' privacy significantly, as the removal of plugins substantially brings down the rate of unique desktop machines.

INTRODUCTION

Web browsers share device-specific information with servers to improve online user experience. When a web browser requests a webpage from a server, by knowing the platform or the screen resolution, the server can adapt its response to take full advantage of the capabilities of each device. In 2010, through the data collected by the Panopticlick website, Eckersley showed that this information is so diverse and stable that it can be used to build what is called a browser fingerprint to track users online [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF]. By collecting information from HTTP headers, JavaScript and installed plugins, he was able to uniquely identify most of the browsers. With the gathered data, Eckersley not only showed that there exists an incredible diversity of devices around the world, but he also highlighted that this very same diversity could be used as an identification mechanism on the web.
Since this study, researchers have looked at new ways to collect even more information [13, 14, 18, 24-26, 32, 34, 36], measure the adoption of these techniques on the Internet [START_REF] Acar | The Web Never Forgets: Persistent Tracking Mechanisms in the Wild[END_REF][START_REF] Acar | FPDetective: dusting the web for fingerprinters[END_REF][START_REF] Englehardt | Online Tracking: A 1-million-site Measurement and Analysis[END_REF][START_REF] Nikiforakis | Cookieless Monster: Exploring the Ecosystem of Web-Based Device Fingerprinting[END_REF], propose defense mechanisms [START_REF] Baumann | Disguised Chromium Browser: Robust Browser, Flash and Canvas Fingerprinting Protection[END_REF]17,[START_REF] Fiore | Countering Browser Fingerprinting Techniques: Constructing a Fake Profile with Google Chrome[END_REF][START_REF] Laperdrix | FPRandom: Randomizing core browser objects to break advanced device fingerprinting techniques[END_REF][START_REF] Laperdrix | Mitigating browser fingerprint tracking: multi-level reconfiguration and diversification[END_REF][START_REF] Nikiforakis | PriVaricator: Deceiving Fingerprinters with Little White Lies[END_REF], and track devices over long periods of time [START_REF] Vastel | FP-STALKER: Tracking Browser Fingerprint Evolutions[END_REF]. In 2016, a study conducted by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] with the AmIUnique website confirmed Eckersley's findings. The authors noted a shift in the most discriminating attributes with the addition of new APIs like Canvas and the progressive removal of browser plugins. They also demonstrated that fingerprinting mobile devices is possible, but with a lower degree of success.

Tracking users with fingerprinting is a reality. If a device presents the slightest difference compared to other ones, it can be identified and followed on different websites. While Panopticlick and AmIUnique proved that tracking is possible, one problem arises when looking at both datasets: their bias. First, both websites are dedicated to fingerprinting, so people who visited them were likely interested in the topic of online tracking. This limits the scope of their studies. Then, looking at the general statistics page of the AmIUnique website from July 2017, we can clearly see a bias as 57% of visitors are on Windows, 15% on Linux, 13% on Mac, 5% on Android and 4% on iOS. The latest statistics from StatCounter for the month of July 2017 reveal that the OS market share is dominated by Android with a percentage around 40%, followed by Windows at 36%, iOS at 13%, Mac at 5% and Linux under 1% [START_REF]Operating System Market Share Worldwide -StatCounter[END_REF]. One can then wonder about the impact of such a big difference on the effectiveness of browser fingerprinting.

In this paper, we investigate whether tracking can be extended to websites that target a broad audience. We analyze 2,067,942 fingerprints collected from one of the top 15 French websites, and we investigate whether browser fingerprinting techniques are still effective in identifying users by collecting the same attributes reported in the literature. Our first two research questions are related to this issue: RQ 1. How uniquely identifiable are the fingerprints in our data? RQ 2. Can non-unique fingerprints become unique if some value changes? The other questions are related to the characteristics of the dataset and the possible impact of the evolution of web technologies: RQ 3.
Can the circumstances under which fingerprints are collected affect the obtained results? RQ 4. Does the evolution of web technologies limit the effectiveness of browser fingerprinting?

Where previous studies reported above 80% of unique fingerprints, we obtained a surprising number: 33.6% of unique fingerprints. This gap can be explained by the targeted audience, as our study looks at fingerprints collected from the global population, not necessarily biased towards users interested in online privacy. The difference is even more noticeable when looking at the 251,166 fingerprints coming from mobile devices: only 18.5% of them are unique, which is in direct contradiction with the 81% observed by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]. These results show another aspect of browser fingerprinting and its tracking capabilities with the current evolution of web technologies. Here, we extend the analyses carried out by Eckersley [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF] and Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] by putting the browser fingerprinting domain in a different light. Our key contributions are:

• We explore the current state of browser fingerprinting with the analysis of 2,067,942 fingerprints composed of 17 different attributes. We also provide the first large-scale study of JavaScript font probing and we measure its real-life effectiveness.
• We show that by collecting these attributes and targeting a much broader audience, browser fingerprinting is not as effective as was reported in the literature. While previous studies reported above 80% of unique fingerprints, we obtained 33.6%.
• We compare our dataset with the ones from Panopticlick and AmIUnique and we explain in detail the numerous differences that can be observed.
• We provide a discussion on the future of browser fingerprinting and what these results mean for the domain and for future applications of this technique.

The paper is organized as follows. Section 2 introduces our new dataset along with the ones from Panopticlick and AmIUnique. Section 3 analyzes the diversity of browser fingerprints in our data and compares the three datasets by providing detailed statistics to help explain the differences. Section 4 discusses the impact of our results on the domain and simulates possible technical evolutions to gain insight into future applications of this technique. Finally, Section 5 concludes this paper.

DATASET

This section introduces the three different datasets that form the basis of the comparison in the next section. First, we give a short description of the two available large-scale browser fingerprinting studies. Then, we describe the attributes collected to form the browser fingerprints analyzed here.

Previous studies

2.1.1 Panopticlick. In 2010, Peter Eckersley launched the Panopticlick website with the goal of collecting device-specific information via a script that runs in the browser [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF]. The script collected values for 10 different features of the web browser and its execution platform. Features were collected from three different sources: HTTP protocol, JavaScript and Flash API.
Eckersley collected 470,161 fingerprints from January 27th to February 15th, 2010. Data obtained by Panopticlick is "representative of the population of Internet users who pay enough attention to privacy" [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF], so in this sense the data is quite biased. In the study performed by Eckersley, the list of fonts (collected through the Flash API) and the list of plugins (collected via JavaScript) were the most distinguishable attributes.

2.1.2 AmIUnique. With the aim of performing an in-depth analysis of web browser fingerprints, the AmIUnique website was launched in November 2014. Collected fingerprints are composed of 17 features (among them, those proposed by Eckersley [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF]). These fingerprints include recent technologies, such as the HTML5 canvas element and the WebGL API. In the study conducted by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF], 118,934 fingerprints collected between November 2014 and February 2015 were analyzed. The authors validated Eckersley's findings with Panopticlick and provided the first extensive analysis of fingerprints collected from mobile devices. Data collected on this website is biased towards users who care about privacy and their digital footprint.

The dataset

The fingerprints used in this study have been collected through a script deployed in collaboration with the b<>com Institute of Research and Technology (IRT) on one of the top 15 French websites (according to the Alexa traffic rank) on two specific web pages: a weather forecast page and a political news page. The script ran for a six-month period, from December 7th, 2016 to June 7th, 2017. To be compliant with the European directives 2002/58/CE and 2009/136/CE, and with the French data protection authority (CNIL), only visitors who consented to the use of cookies, and thus the use of fingerprinting techniques, were fingerprinted. When users first connect to one of these two pages, we set a 6-month cookie in their browser. This supports the identification of returning visitors. Compared to the other two detailed studies, the website used to collect this dataset covers a wide range of topics and is not dedicated to browser fingerprinting. According to the Hawthorne effect [START_REF] Mccarney | The Hawthorne Effect: a randomised, controlled trial[END_REF], individuals who are aware that they are being studied tend to modify an aspect of their behavior in response to their awareness of being observed. In our case, this means that the fingerprints in this dataset are more representative of those found in the wild, since users are not enticed to play with their browsers to change their configuration and produce different fingerprints.

Fingerprinted attributes. In order to compare ourselves with previous studies, we rely on the same attributes found in the study conducted by Laperdrix et al. in 2016 [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]. The complete list of attributes is given in the 'Attribute' column of Table 2.
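Most of these attributes can be read directly from standard browser APIs. The following TypeScript sketch illustrates how a few of the Table 2 attributes could be collected in a browser context; it is an illustration only, not the collection script deployed in this study, and the object keys are our own naming.

```typescript
// A minimal sketch of collecting a few fingerprint attributes in a
// browser context; attribute names are illustrative.
const jsAttributes = {
  userAgent: navigator.userAgent,
  platform: navigator.platform,
  contentLanguage: navigator.language,
  cookiesEnabled: navigator.cookieEnabled,
  // doNotTrack is non-standard; a cast keeps the TypeScript compiler happy.
  doNotTrack: (navigator as any).doNotTrack ?? "unspecified",
  timezoneOffset: new Date().getTimezoneOffset(),
  screenResolution: `${screen.width}x${screen.height}x${screen.colorDepth}`,
  localSessionStorage:
    typeof window.localStorage !== "undefined" &&
    typeof window.sessionStorage !== "undefined",
  plugins: Array.from(navigator.plugins).map((p) => p.name).join(";"),
};
```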
However, to reflect recent technological trends, we made the following modifications to our script:

List of fonts. Fonts are usually collected through the Flash plugin. With a few lines of code, one can get access to the entire list of fonts installed on the user's system. However, for security and stability reasons, plugins are being deprecated in modern browsers in favor of a feature-rich HTML5 environment [START_REF] Schuh | Saying Goodbye to Our Old Friend NPAPI[END_REF]. Flash is expected to disappear entirely, as Adobe announced the end of life of its solution for 2020 [START_REF]Flash & The Future of Interactive Content -Adobe[END_REF]. All major web browsers like Chrome, Firefox, Edge and Safari already block Flash content or have removed support for it. This means that fingerprinting scripts must turn to another mechanism to get access to the list of fonts. Nikiforakis et al. revealed that it is possible to probe for the existence of fonts through JavaScript [START_REF] Nikiforakis | Cookieless Monster: Exploring the Ecosystem of Web-Based Device Fingerprinting[END_REF]. A script can ask to render a string with a specific font in a div element. If the font is present on the device, the browser will use it. If not, the browser will use what is called a fallback font. By measuring the dimensions of the div element, one can know if the requested font is used or if the fallback font took its place. The biggest difference between these two gathering methods is that fonts through JavaScript must be checked individually, whereas Flash gives all the installed fonts in a single instruction. This means that testing a large number of fonts is time consuming and can delay the loading of a web page. For this reason, we chose to test 66 different fonts, some among the most popular 'web-safe fonts' found in most operating systems, and other less common ones. Appendix A reports the complete list of fonts we tested in our script. Before deploying our script in production, we identified a limitation in how JavaScript font probing operates. We found out that some fonts can have the exact same dimensions as the ones from the fallback font. Figure 1 illustrates this problem. In the example, the two tested fonts are metrically compatible and have the exact same width and height. However, they are not identical, as can be seen in the shapes of some of the letters (especially "e", "a" and "w"). This means that font probing here will report incorrect results if one were to ask for Times New Roman on a system with the Tinos font installed (or vice versa). To fix this problem, we measured the dimensions of a div against three font style variants. There are different typefaces that can be used by a web browser, the most popular ones being serif, sans-serif, monospace, cursive and fantasy. We chose the first three and we tested each font against the three of them, resulting in 66 * 3 = 198 different tests. This way, we avoid reporting false negatives as the three fallback fonts have different dimensions.
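The probing technique described above can be sketched as follows; this is a minimal illustration of the general mechanism (the element styling, test string and helper names are ours), not the exact script deployed in this study.

```typescript
// A minimal sketch of JavaScript font probing against three fallback
// typefaces, as described above; names and the test string are illustrative.
const BASE_TYPEFACES = ["serif", "sans-serif", "monospace"];
const TEST_STRING = "mmmmmmmmmmlli"; // mix of wide and narrow glyphs
const TEST_SIZE = "72px";

function measure(fontFamily: string): [number, number] {
  const span = document.createElement("span");
  span.style.fontSize = TEST_SIZE;
  span.style.fontFamily = fontFamily;
  span.style.position = "absolute"; // keep the probe out of the page layout
  span.textContent = TEST_STRING;
  document.body.appendChild(span);
  const dims: [number, number] = [span.offsetWidth, span.offsetHeight];
  document.body.removeChild(span);
  return dims;
}

function isFontPresent(font: string): boolean {
  // The font is reported present if its rendering differs from the
  // fallback's for at least one of the three base typefaces, which avoids
  // the false negatives caused by metrically compatible fonts.
  return BASE_TYPEFACES.some((base) => {
    const [fw, fh] = measure(base);
    const [cw, ch] = measure(`'${font}', ${base}`);
    return cw !== fw || ch !== fh;
  });
}
```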
Canvas. The Canvas API allows for scriptable rendering of 2D shapes and texts in the browser. Discovered by Mowery et al. [START_REF] Mowery | Pixel Perfect: Fingerprinting Canvas in HTML5[END_REF], investigated by Acar et al. [START_REF] Acar | The Web Never Forgets: Persistent Tracking Mechanisms in the Wild[END_REF], and then collected on a large scale by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF], canvas fingerprinting can be used to differentiate devices with pixel precision by rendering a specific picture following a set of instructions. In order to see how far we can go with this technique, we took as a basis the canvas test performed by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] and we made a more complex canvas element by combining new elements of different natures. First, the script asks the browser to render the two following strings: "Yxskaftbud, ge vår WC-zonmö IQ-hjälp" and "Gud hjälpe Zorns mö qvickt få byxa". Both strings are pangrams (strings containing all the letters of the alphabet) of the Swedish alphabet. For the first string, we force the browser to use one of its fallback fonts by asking for a font with a fake name. Depending on the OS and the fonts installed on the device, fallback fonts may differ from one user to another. For the second line, the browser is asked to use the Arial font, which is common in many operating systems. Then, we ask for additional strings with symbols and emojis. All strings, along with a rectangle, are drawn with a specific rotation. A second set of elements is rendered with four mathematical functions: a sine, a cosine and two linear functions. These functions are plotted on a specific interval, using the PI value of the JavaScript Math library as a parameter. The third set of elements consists in drawing a set of ellipses. These figures are drawn with different colors and with different levels of transparency. Since opacity filters are applied differently among browsers, this creates differences between them. The last element is a centered shadow that overlaps the canvas element. Figure 2 displays an example of a canvas rendering following the instructions of our script.
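The core of such a test can be sketched as follows; it is a simplified illustration of the mechanism described above (dimensions, colors and coordinates are ours), not the exact drawing instructions of our script.

```typescript
// A minimal sketch of a canvas fingerprinting test, assuming a browser
// context; the drawing instructions are simplified and illustrative.
function canvasFingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 400;
  canvas.height = 200;
  const ctx = canvas.getContext("2d")!;
  ctx.rotate(0.05); // a fixed rotation applied to the drawings
  // A fake font name forces the browser to pick one of its fallback fonts.
  ctx.font = "18px NoSuchFont-Fake";
  ctx.fillText("Yxskaftbud, ge vår WC-zonmö IQ-hjälp", 10, 40);
  ctx.font = "18px Arial"; // a font common to many operating systems
  ctx.fillText("Gud hjälpe Zorns mö qvickt få byxa", 10, 80);
  // Semi-transparent ellipses: opacity blending differs among browsers.
  ctx.globalAlpha = 0.5;
  ctx.fillStyle = "#ff6600";
  ctx.beginPath();
  ctx.ellipse(200, 140, 60, 30, 0, 0, 2 * Math.PI);
  ctx.fill();
  // The rendered pixels are exported (PNG here) and the resulting data
  // URL is used as the value of the canvas attribute.
  return canvas.toDataURL("image/png");
}
```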
Cookies. Since we only have fingerprints from users who accepted the use of cookies, all fingerprints have the exact same value for this attribute.

Descriptive statistics. We distinguish two different kinds of fingerprints: those belonging to mobile devices and those belonging to desktop and laptop machines (we will refer to desktop and laptop machines as personal computers). To prevent collecting multiple copies of the same fingerprint from the same user, we store a cookie on the user's device with a unique ID for six months. Among the 2,067,942 fingerprints, the distinction is as follows: 1,816,776 come from personal computers (87.9% of the data) and 251,166 come from mobile devices (12.1% of the data). Table 1 reports on the distribution of operating systems in both our dataset and the one from the AmIUnique website. Statistics gathered from StatCounter for the month of July 2017 have also been added to give an idea of how close each dataset is to the global population. First, by looking at the differences between our newly collected data and AmIUnique, we can see that there is a significant difference in terms of distribution. Notably, we can see a clear bias in the demographic that AmIUnique attracted, since the percentage of Linux desktop machines is much higher than the one reported by StatCounter. Then, if we compare our numbers with the ones from StatCounter, we can see that we provide a closer representation of the global population, as the percentages for both distributions are close to each other. Table 2 summarizes the essential descriptive statistics of our dataset. The 'Distinct values' column provides the number of different values that we observed for each attribute, while the 'Unique values' column provides the number of values that occurred a single time in our dataset. For example, the Use of local/session storage attribute has no unique values since it is limited to "yes" and "no". Moreover, in our data, all users accepted the use of cookies, so all the fingerprints have "yes" for this attribute. Other attributes can take a high number of values. For example, we observed 6,618 unique values for the list of fonts. In fact, we also know the upper bound for the number of distinct values for this attribute: we perform in total 66 * 3 tests and each one can take the value 'true' or 'false'. This results in 2^(66×3) = 2^198 possible combinations, even if, in practice, many of them will not be found.

ANALYSIS AND COMPARISON

In this section, we first analyze how diverse browser fingerprints are in our dataset. Then, we analyze the level of identifying information of each attribute that makes up the fingerprint. Finally, we compare our dataset with the two available sets of fingerprint statistics, provided by Eckersley in 2010 [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF] and Laperdrix et al. in 2016 [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF].

Browser fingerprint diversity

Our data was collected on a much larger scale than previous studies and targets a much broader audience, which leads to RQ 1. How uniquely identifiable are fingerprints in our data? This question aims at determining how diverse the browser fingerprints are in this novel dataset. Using the attributes from Table 2, we succeeded in uniquely identifying 33.6% of the fingerprints in our dataset. On personal computers, 35.7% of fingerprints are unique, while this number is lower on mobile devices with 18.5%. On personal computers, the threat is less important than reported in other studies. On mobile devices, the number is much smaller but the threat comes from elsewhere: closed platforms with integrated tracking applications. Figure 3 represents the distribution of the anonymity sets. A set represents a group of fingerprints with identical values for all the collected attributes. If a fingerprint is in a set of size 1, it means that this fingerprint is unique and can be identified. On mobile devices, the percentage of fingerprints belonging to sets of size larger than 50 is around 59%, while on personal computers this percentage is around 8%. This means that more devices share identical fingerprints on mobile than on personal computers. This can be explained by the fact that the software and hardware environments of these devices are much more constrained than on desktop and laptop machines: users buy very specific models of smartphones that are shared by many. The largest set of mobile devices contains 13,241 fingerprints, while for personal computers it contains 1,394 fingerprints. The low rates of success at uniquely identifying browser fingerprints in our data reveal that, when collecting more than two million browser fingerprints on a commercial website, it is much less likely than previously reported that a fingerprint is unique and hence exploitable for tracking. The probability for a fingerprint to be unique is about three times lower than in the previous datasets collected for research purposes (Panopticlick and AmIUnique).
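The anonymity sets used throughout this analysis can be computed by grouping identical fingerprints; the following sketch illustrates this computation (the Fingerprint type and function names are ours, not taken from the analysis code of this study).

```typescript
// A minimal sketch: group identical fingerprints into anonymity sets and
// derive the share of unique ones; names are illustrative.
type Fingerprint = Record<string, string>;

function anonymitySets(fps: Fingerprint[]): Map<string, number> {
  const sets = new Map<string, number>();
  for (const fp of fps) {
    // Serialize attributes in a fixed key order so that identical
    // fingerprints always map to the same set.
    const key = JSON.stringify(Object.keys(fp).sort().map((k) => [k, fp[k]]));
    sets.set(key, (sets.get(key) ?? 0) + 1);
  }
  return sets;
}

function uniquenessRate(fps: Fingerprint[]): number {
  let unique = 0;
  for (const size of anonymitySets(fps).values()) {
    if (size === 1) unique++; // sets of size 1 are identifiable fingerprints
  }
  return unique / fps.length;
}
```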
Unique fingerprints. There are 46,459 unique fingerprints on mobile devices and 647,741 on personal computers. A fingerprint is unique due to one of the following reasons:
• It has an attribute whose value is only present once in the whole dataset.
• The combination of all its attributes is unique in the whole dataset.

On mobile devices, 73% of fingerprints are unique because they contain a unique value, while this percentage is around 35% for personal computers. While mobile fingerprints tend to be unique because of their unique values, laptop/desktop fingerprints tend to have combinations of values so diverse that they create unique fingerprints. The most distinctive attributes are canvas on mobile devices and plugins on personal computers. Fingerprints with unique canvas values represent 62% of unique fingerprints on mobile devices, while on personal computers, fingerprints with unique combinations of plugins represent 30% of unique fingerprints.

Investigating changes on browser fingerprints. Over the course of its lifetime, a device exhibits different fingerprints. This comes from the fact that web technologies are constantly evolving and thus, web browser components are continually updated. From the operating system to the browser and its components, one single update can change the exhibited browser fingerprint. For instance, a new browser version is directly reflected by a change in the user-agent. A plugin update is noticeable by a change in the list of plugins. When web browsers evolve naturally, changes happen automatically without any user intervention and this affects all users. The natural evolution of web technologies is not the only reason why fingerprints evolve. There are some parameters that are usually the choice of the users, such as the use of cookies, the presence of the "Do Not Track" header, or the activation of specific plugins. Users are allowed to change these values at any time. Besides, some attributes such as timezone or fonts are indirectly impacted by a change in the environment, such as a move to a different timezone or the addition of fonts (fonts can be added intentionally or come as a side effect of installing new software on a device). This gives rise to RQ 2: can non-unique fingerprints become unique if some value changes? And specifically, do non-unique fingerprints become unique if only one value changes?

Let us take as an example a user with a non-unique fingerprint. Running Chrome 55 on Windows 10, the browser displays the following value for the Content language header:
fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
For some reason, the user decides to add the Spanish language. The browser then displays the following value:
fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,es;q=0.2
By changing the language settings, does the fingerprint become unique? In order to answer this type of question and to study how resilient non-unique fingerprints are in the face of evolution, we conducted an experiment. We analyzed the impact of the user's choices on the uniqueness of their fingerprints. There is a set of attributes whose values cannot be changed, such as attributes related to the hardware and software environment on which the browser is running. The Platform attribute is linked to the operating system, while WebGLVendor and WebGLRenderer reveal information about the GPU. Attributes such as User-agent, List of HTTP headers or Content encoding are beyond the control of the user because they are related to the HTTP protocol. However, attributes such as Cookies enabled, Do Not Track, Content language and List of plugins are a direct reflection of the user's choices. Nevertheless, Cookies enabled, Do Not Track and Use of local/session storage are limited to "yes" and "no", so they do not offer very discriminating information. This leaves the Content language, List of plugins, Available fonts and Timezone under the scope of our analysis. For the experiment, we chose fingerprints belonging to sets larger than 50 fingerprints. New values were chosen randomly from non-unique fingerprints that had the same operating system and web browser (including versions).
This was done to ensure that the new values are consistent with fingerprints that can be found in the wild. This way, we avoid choosing values that are not characteristic of the fingerprint environment. For example, two browsers can have the same language configuration, but the resulting header differs depending on the web browser. Example:
Windows 8.1, Chrome, fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Windows 8.1, Firefox, fr-FR,fr;q=0.8,en-US;q=0.5,en;q=0.3
Both browsers are running on the same operating system and have the same language configuration: French/France [fr-FR], French [fr], English/United States [en-US] and English [en]. But, depending on the web browser, the final language headers are different.

Results. The experiment was repeated ten times and results were averaged. Figure 4 represents the distribution of the anonymity sets resulting from randomly changing values for the Content language, List of plugins, Available fonts and Timezone on mobile devices and desktop/laptop machines. First, we can clearly notice an important difference between devices. For desktop/laptop machines, more than 85% of fingerprints turned into unique fingerprints. This is due to the fact that combinations of values tend to be so diverse on personal computers that they make up unique fingerprints. On mobile devices, when changing the Available fonts and the List of plugins, over 80% of fingerprints remained in large sets. These results are explained by the absence of diversity in these attributes on mobile devices. Results are very different for Content language and Timezone: over 60% of fingerprints turned into unique fingerprints. This can be explained by the concentration of users on very few values for these attributes: as most users share the same timezone and languages, a single change to one of these two attributes dramatically increases the likelihood of the fingerprint becoming unique. Looking at the results of the experiment, we can conclude that if one single feature of a fingerprint changes, it is very probable that this fingerprint becomes unique. In the end, desktop/laptop fingerprints tend to be much more fragile than their mobile counterparts.

Comparison of attributes

Mathematical treatment. We used entropy to quantify the level of identifying information in a fingerprint. The higher the entropy is, the more unique and identifiable a fingerprint will be. Let H be the entropy, X a discrete random variable with possible values x_1, ..., x_n, and P a probability mass function. The entropy follows this equation:

H(X) = -\sum_{i=1}^{n} P(x_i) \log_b P(x_i)    (1)

We use the Shannon entropy, where b = 2 and the result is expressed in bits. One bit of entropy reduces by half the probability of an event occurring. In order to compare all three datasets, which are of different sizes, we applied the normalized Shannon entropy:

H(X) / H_M    (2)

H_M represents the worst-case scenario where the entropy is maximum and all values of an attribute are unique (H_M = \log_2(N), with N being the number of fingerprints in the dataset). The advantage of this measure is that it does not depend on the size of the anonymity set but on the distribution of probabilities. We are quantifying the quality of our dataset with respect to attribute uniqueness, independently of the number of fingerprints in our database. This way, we can qualitatively compare the datasets despite their different sizes.
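Both measures are straightforward to compute from the observed value counts of an attribute; a minimal sketch follows (the function names are ours).

```typescript
// A minimal sketch of Shannon entropy (Eq. 1, b = 2) and its normalized
// form (Eq. 2) over the observed values of one attribute.
function shannonEntropy(values: string[]): number {
  const counts = new Map<string, number>();
  for (const v of values) counts.set(v, (counts.get(v) ?? 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / values.length;
    h -= p * Math.log2(p); // result expressed in bits
  }
  return h;
}

function normalizedEntropy(values: string[]): number {
  // H_M = log2(N): the worst case where every value occurs exactly once.
  return shannonEntropy(values) / Math.log2(values.length);
}
```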
Table 3 lists the Shannon entropy for all attributes from the Panopticlick and AmIUnique studies and from our dataset. Column 'Entropy' shows the bits of entropy and column 'Norm.' shows the normalized Shannon entropy. The last two rows of Table 3 show the worst-case scenario where the entropy is maximum (i.e., all the values are unique) and the total number of fingerprints. In the novel dataset analyzed here, the most distinctive attributes are the List of plugins, the Canvas, the User-agent and the Available fonts. Due to differences in software and hardware architecture between mobile devices and personal computers, we computed entropy values separately. By comparing entropy values between mobile devices and personal computers, we observed three attributes where the difference is significant. The largest difference is for the List of plugins, with a difference of 0.485 for the normalized entropy. It can be explained by the lack of plugins on mobile devices, as web browsers on mobile devices take full advantage of the functionalities offered by HTML5 and JavaScript. On personal computers, the List of plugins is the most discriminating attribute, while it is almost insignificant for mobile devices. We can observe in Table 2 that among 251,166 fingerprints coming from mobile devices, there are only 81 distinct values for plugins. The second significant difference is 0.214 for the Available fonts. Installing fonts on mobile devices is much more constrained than on personal computers. Even if we test a very limited set of fonts through JavaScript compared to what could be collected through Flash, we can see that there is clearly more diversity on personal computers. The last significant difference is for the User-agent attribute, with a difference of 0.182. On mobile devices, the user-agent has the highest entropy value. This is because phone manufacturers include the model of their phone and sometimes even the version of the firmware directly in the user-agent, as revealed by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]. The attributes Use of an ad blocker and Use of local/session storage have very low entropy values because their values are either "yes" or "no". We also tested the impact of compressing a canvas rendering to the JPEG format. It should be noted that the JPEG compression comes directly from the Canvas API and is not applied after collection. Due to the lossy compression, it should come as no surprise that the entropy from JPEG images is lower than the PNG one usually used by canvas fingerprinting tests (from 0.407 to 0.391). In the study conducted by Eckersley [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF], the analysis of browser fingerprints was performed without differentiating between mobile and desktop fingerprints. Later, some researchers conducted studies about browser tracking mechanisms either on desktop machines [START_REF] Acar | The Web Never Forgets: Persistent Tracking Mechanisms in the Wild[END_REF][START_REF] Boda | User Tracking on the Web via Cross-Browser Fingerprinting[END_REF] or on mobile devices [START_REF] Spooren | Mobile Device Fingerprinting Considered Harmful for Risk-based Authentication[END_REF][START_REF] Wu | Efficient Fingerprinting-Based Android Device Identification With Zero-Permission Identifiers[END_REF], but not on both. In 2016, Laperdrix et al.
[START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] provided the first extensive study of browser fingerprinting on mobile devices; they proved that both kinds of devices present different discriminating attributes. If the analysis of browser fingerprinting is not carried out by differentiating mobile devices from personal computers, the results obtained will not be representative of both kinds of devices. In our data, 12.1% of fingerprints belong to mobile devices, so mobile fingerprints represent a small part of the entire data. If we take a look at Table 3, entropy values for attributes like List of plugins or Available fonts are largely influenced by the group that contains the majority of fingerprints which, in our case, is the one with personal computers. For future work, it is strongly recommended to differentiate mobile devices from personal computers (laptops and desktop machines) to obtain more accurate results.

Comparison with Panopticlick and AmIUnique

In the data collected by Panopticlick, Eckersley observed that 83% of visitors had instantaneously recognizable fingerprints. This number reached 94% for devices with Flash or Java installed. With the AmIUnique website, Laperdrix and colleagues observed that 89.4% of fingerprints from their dataset were unique. Thanks to these high percentages of unique browser fingerprints, browser fingerprinting established itself as an effective stateless tracking technique on the web. However, with our study, we provide an additional layer of understanding in the fingerprinting domain. By having 33.6% of unique fingerprints compared to the 80+% of the other two studies, we show that browser fingerprinting may not be effective at a very large scale and that the targeted audience plays an important role in its effectiveness.

Comparing data size. When analyzing the percentages of unique fingerprints, the amount of fingerprints is an important element that influences the results. As discussed by Eckersley in [START_REF] Eckersley | How Unique is Your Web Browser?[END_REF], the larger the sample, the lower the chance for a given fingerprint to be unique in it. It is clear that the probability of being unique in our dataset is much lower than the probability of being unique in the AmIUnique one. With the aim of establishing a more equitable comparison, we took samples with the same number of fingerprints as the AmIUnique data and then calculated the percentage of unique fingerprints. We perform this comparison with the AmIUnique data because its amount of fingerprints is four times smaller than the one collected by Panopticlick. Because our dataset spans a six-month period, we divided the data into six parts, each part containing data for one month. We kept the same proportion between mobile devices and desktop machines as the AmIUnique data, so we randomly took 105,829 desktop/laptop fingerprints and 13,105 mobile fingerprints from each month. Results were averaged. On average, 56% of personal computers are unique, while 29% of mobile devices are unique. These percentages show that low ratios of unique fingerprints are influenced by the number of fingerprints. Even so, the results obtained on the samples are significantly lower than those obtained by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]. These results show that performing tracking with fingerprinting is possible, yet difficult.
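This resampling procedure can be sketched as follows, reusing the Fingerprint type and uniquenessRate helper from the earlier sketch; the monthly partitioning and sample sizes follow the text, while the field and function names are illustrative.

```typescript
// A minimal sketch of the monthly resampling comparison; "platformKind"
// and the helper names are illustrative, not part of the analysis code.
function monthlySampledRate(
  months: Fingerprint[][], // six one-month partitions of the dataset
  kind: "desktop" | "mobile",
  sampleSize: number // 105,829 for desktop, 13,105 for mobile
): number {
  const rates = months.map((month) => {
    const pool = month.filter((fp) => fp["platformKind"] === kind);
    const sample = shuffle(pool).slice(0, sampleSize);
    return uniquenessRate(sample);
  });
  return rates.reduce((a, b) => a + b, 0) / rates.length; // averaged
}

// Fisher-Yates shuffle, used to draw a random sample without replacement.
function shuffle<T>(xs: T[]): T[] {
  const a = xs.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}
```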
Comparing entropy values. A comparison with Panopticlick can only be established on the six attributes collected in both studies. We observe that entropy values for our dataset and Panopticlick differ significantly for all attributes, except for the Screen resolution: entropy values for this attribute hardly change across the three datasets. Regarding Timezone and Cookies enabled, drops occur in entropy values due to the characteristics of our dataset. As we explained in Section 2, we analyze fingerprints from users who accepted cookies, and most of them live in the same geographic region. The difference in the entropy value for Content language is due to the fact that most users are located in the same geographic region, which implies that most of them share the same language. In fact, 98% of users present the same value for timezone, corresponding to the Central European Time zone (UTC+01:00), and 97.7% of fingerprints present French as their first language. The noticeable drop in the entropy values for the Timezone and Content language affects the fingerprint diversity: the fewer values an attribute takes in practice, the lower its identifying value, and therefore the lower the identifying value of the browser fingerprint as a whole. The diversity surface is reduced, resulting in less identifiable browser fingerprints. The List of plugins is still the most discriminating attribute, but a gradual decrease can be observed. From Panopticlick to AmIUnique, a difference of 0.24 is present. From AmIUnique to our dataset, the difference is 0.126, resulting in a decrease of 0.365 from Panopticlick to our data. This gradual decrease in the entropy value for the List of plugins is explained by the absence of plugins on mobile devices and by the removal of plugins from modern browsers. Over time, features have been added to HTML5 to replace plugins, as they were considered a source of many security problems. Chrome stopped supporting the old NPAPI plugin architecture in 2015 (a topic discussed in [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]). Mozilla dropped support in version 52 of the Firefox browser, released in March 2017. Safari has never supported plugins, Flash is long discontinued for Android, and MS Edge for Windows 10 does not support most plugins. Anything else reliant on the Netscape Plugin API (NPAPI) is now dropped, which means Silverlight, Java and Acrobat are gone [START_REF]Mozilla Developer Network and individual contributors[END_REF]. The difference in the entropy value for Available fonts between Panopticlick and AmIUnique is explained by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]: half of the fingerprints in the AmIUnique dataset were collected on browsers that did not have the Flash plugin installed or activated. Between AmIUnique and our data, the difference for the entropy value of fonts is 0.117. Even if collecting fonts through JavaScript is not as effective as with Flash, we observe that the entropy of fonts is still high, keeping its place as one of the top distinctive attributes.
For the other attributes, we observe that the entropy values for both our dataset and AmIUnique are similar.

DISCUSSION

In this section, we discuss our results along with their potential implications for the browser fingerprinting domain.

The impact of different demographics

In Section 3, we compared our dataset with the two available sets of fingerprint statistics. There are two key elements that can influence the results of this analysis: the targeted audience and the evolution of web technologies. Previous datasets were collected through websites dedicated to browser fingerprint collection. Both websites, amiunique.org and panopticlick.eff.org, inform users about online fingerprint tracking, so users who visit these websites are aware of online privacy, interested in the topic or might be more cautious than the average web user. Our dataset is much different, as it was collected by targeting a general audience through a commercial website. We believe that this difference in the fingerprint collection process is key to explaining the differences between datasets, giving rise to RQ 3. Can the circumstances under which fingerprints are collected affect the obtained results? Web technologies affect browser fingerprinting. The fact that some technologies are no longer used leads to the evolution of fingerprinting techniques. In some cases, it leads to a decrease in the identifying value of certain features, as we noticed for the attributes List of plugins and Available fonts. As a result of the progressive disappearance of plugins, the List of plugins is rapidly losing its identifying value. In addition to the effects produced by the evolution of technologies, there are some issues resulting from the collection process. Some of them are caused by targeting a specific demographic group. For instance, the market share distribution across the planet is not uniform. According to StatCounter [START_REF]Operating System Market Share Worldwide -StatCounter[END_REF] in 2017, the European mobile market was led by Apple and Samsung, with similar market shares above 30%. Although the mobile market in North America is also led by Apple and Samsung, Apple represents about 50% of the market, while Samsung about 24%. If we collect a sample of mobile fingerprints from North America, there is a good chance that the sample will have a greater presence of Apple devices. So, the distribution of some features like the Platform, WebGL Vendor or User-agent will be more representative of Apple devices. In the end, depending on the website, the use case or the targeted demographic, the results can vary greatly and the effectiveness of browser fingerprinting can change. Moreover, if we were to perform a similar study targeted at different countries, could we expect the same diversity of fingerprints? Do more developed countries have access to a wider range of devices and, as a consequence, present a larger set of fingerprints? Do more educated users have a tendency to customize and configure their devices more which, as a result, would make their fingerprints more unique? From the data that we gathered, it is impossible to answer these questions, as we do not collect information beyond what is presented by the user's device. Yet, considering these different facets may be the key to understanding the extent to which browser fingerprinting can work for tracking and identification. Its actual effectiveness is much more nuanced than what was reported in the past and it is far from being an answer to a simple yes or no question.
Towards a potential privacy-aware fingerprinting

An arms race is currently developing between users and third parties. As people are getting educated on the questions of tracking and privacy on the web, more and more users are installing browser extensions to protect their daily browsing activities. At the end of 2016, 11% of the global Internet population was blocking ads on the web [START_REF]The state of the blocked web -2017 Global Adblock[END_REF]. This represents 615 million devices, with an observed 30% growth in a single year. With regard to browser fingerprinting, several browsers already include protection to defend against it. Pale Moon [START_REF]Pale Moon browser -Version 25[END_REF], Brave [START_REF]Fingerprinting Protection Mode -Brave browser[END_REF] and the Tor Browser [START_REF]The Design and Implementation of the Tor Browser [DRAFT] "Cross-Origin Fingerprinting Unlinkability[END_REF] were the very first ones to add barriers against techniques like Canvas or WebGL fingerprinting. Mozilla is also currently adding its own fingerprinting protection in Firefox [START_REF]Fingerprinting protection in Firefox as part of the Tor Uplift Project -Mozilla Wiki[END_REF] as part of the Tor Uplift program [START_REF]Tor Uplift Project -Mozilla Wiki[END_REF]. With our study, we show that we do not yet know the full extent of what is possible with browser fingerprinting and, as such, modern browsers are getting equipped with mitigation techniques that require a lot of development to integrate and maintain. But does the evolution of web technologies limit the effectiveness of browser fingerprinting? In order to answer RQ 4, we follow the idea proposed by Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]: the authors simulated the effectiveness of browser fingerprinting against possible technical evolutions. We recreated some of their scenarios on our dataset.

Scenario n°1 - The end of browser plugins. Web browsers are evolving to an architecture not based on plugins. Despite the progressive disappearance of plugins, the list of plugins is still the most distinctive attribute for personal computers in our data. A glimpse of the impact of this scenario is observed on mobile devices, although some plugins still remain. To estimate the impact of the disappearance of plugins, we simulate the fact that they are all the same in our dataset, but only on personal computers, taking mobile devices as a reference. The improvement is significant, with a decrease of exactly 19.2% from 35.7% to 16.5%, a slightly lower value than on mobile devices (18.5%). The disappearance of plugins on personal computers significantly reduces the effectiveness of browser fingerprinting at uniquely identifying users, as we first observed on mobile web browsers.
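Such a scenario can be simulated directly on the dataset; a minimal sketch follows, reusing the Fingerprint type and uniquenessRate helper from the earlier sketches (the field names are illustrative).

```typescript
// A minimal sketch of scenario n°1: give every desktop fingerprint the
// same plugin list, then re-measure uniqueness; field names are illustrative.
function simulateEndOfPlugins(fps: Fingerprint[]): number {
  const patched = fps.map((fp) =>
    fp["platformKind"] === "desktop"
      ? { ...fp, pluginList: "none" } // identical everywhere: zero entropy
      : fp
  );
  // On our data, this brings desktop uniqueness from 35.7% down to 16.5%.
  return uniquenessRate(patched);
}
```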
Scenario n°2 - Adherence to standard HTTP headers. Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] simulated this scenario assuming that the HTTP header fields had the same value for all fingerprints. We followed this idea and, in addition, we reduced the identifying information of the user-agent by keeping only the name and version of the operating system and the web browser. On personal computers, the improvement is moderate, with a decrease of 4.7% from 35.7% to 31% in overall uniqueness. However, on mobile fingerprints, we observe a less significant drop of 2.3%, from 18.5% to 16.2%. In the simulation of this scenario, Laperdrix et al. [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] obtained a more significant reduction than we did. The small drop is due to the low entropy value of the HTTP headers in our data (0.085) compared with the value obtained by [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF] (0.249). Another element to consider is that we kept a piece of the information contained in the user-agent, illustrating that the combination of operating system and web browser still provides some diversity.

Scenario n°3 - The end of JavaScript. By using only features collected through JavaScript (equivalent to removing HTTP features), it is possible to uniquely identify 28.3% of personal computers and 14.3% of mobile devices. By removing all features collected through JavaScript, fingerprint uniqueness drastically drops. On mobile devices, the percentage drops by 14.2% from 18.5% to 4.3%. On personal computers, the drop is abrupt, from 35.7% to 0.7%. The improvement in privacy by removing JavaScript is highly visible, but the cost to the ease and comfort of using web services could be overly high.

These findings show that the evolution of web technologies can benefit privacy with a limited impact. While some of these evolutions are becoming a reality, others are more improbable. Yet, it is possible to envision a future where a "privacy-aware" form of fingerprinting is possible, i.e., one that does not enable identification but that can still provide the security benefits touched upon in the literature. First, the W3C has put privacy at the forefront of discussions when designing new APIs. In 2015, Olejnik et al. performed a privacy analysis of the Battery Status API [30]. They found out that the level of charge of the battery could be used as a short-term identifier across websites. Because of this study, this API has been removed from browsers several years after its inclusion [START_REF] Olejnik | Battery Status Not Included: Assessing Privacy in Web Standards[END_REF], and it changed the way new APIs are making their way inside our browsers. A W3C draft has even been written on how to mitigate browser fingerprinting directly in web specifications [START_REF]Mitigating Browser Fingerprinting in Web Specifications -W3C Draft[END_REF]. This shows how important privacy is going forward, and we can expect that, in the future, new APIs will not reveal any identifying information on the user's device. Then, looking at our own dataset, a privacy-aware fingerprinting seems achievable thanks to the low percentages of unique fingerprints we present in this study.

CONCLUSION

In this work, we analyzed 2,067,942 browser fingerprints collected through a script that was launched on one of the top 15 French websites. Our work focuses on determining if fingerprinting is still possible at a large scale. Our findings show that current fingerprinting techniques do not provide effective mechanisms to uniquely identify users belonging to a specific demographic region, as 33.6% of collected fingerprints were unique in our dataset. Compared to other large-scale studies on browser fingerprinting, this number is two to three times lower.
This difference is even larger when only considering mobile devices, as 18.5% of mobile fingerprints are unique compared to the 81% from [START_REF] Laperdrix | Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints[END_REF]. The other key elements from our study are as follows. Personal computers and mobile devices have unique fingerprints that are composed differently. While desktop/laptop fingerprints are unique mostly because of their unique combinations of attributes, mobile devices present attributes that have unique values across our whole dataset. We show that by changing some features of the fingerprint, such as Content language or Timezone, it is very probable that the fingerprint will become unique. We also show that User-agent and HTML5 canvas fingerprinting play an essential role in identifying browsers on mobile devices, while the list of plugins is the most distinctive element on personal computers, followed by the HTML5 canvas element. Furthermore, in the absence of the Flash plugin to provide the list of fonts, we used an alternative that collects fonts through JavaScript. Even if the list of tested fonts is much smaller compared to what could be captured through Flash, collecting fonts through JavaScript still gives good results for distinguishing two devices from each other. We also discussed some of the elements that can change the effectiveness of browser fingerprinting, such as the targeted demographic and the existing web technologies. Finally, we analyzed the impact of current trends in web technologies and showed that the latest changes have benefited users' privacy significantly, i.e. the end of browser plugins is substantially bringing down the rate of uniqueness among desktop/laptop fingerprints.
Figure 1: Difference between Tinos (top) and Times New Roman (bottom).
Figure 2: Example of a rendered picture following the canvas fingerprinting test instructions.
Figure 3: Comparison of anonymity set sizes between mobile devices and desktop/laptop machines.
Figure 4: Anonymity sets resulting from changing values randomly in sets larger than 50 fingerprints on mobile devices (a) and personal computers (b).
Table 1: OS market share distribution.
OS            | Our data | AmIUnique Nov'14-Jul'17 [22] | StatCounter Jul'17 [6]
Windows       | 93.5%    | 63.7%                        | 84%
MacOS         | 5.5%     | 14.9%                        | 11%
Linux         | 0.9%     | 16.9%                        | 1.8%
Android       | 72%      | 55.6%                        | 70%
iOS           | 18.8%    | 42.3%                        | 22%
Windows Phone | 7.6%     | <1%                          | 1%
A fingerprint can also become unique after a change in the environment, such as a switch to a different timezone or added fonts (fonts can be added intentionally or come as a side effect of installing new software on a device). This gives rise to RQ 2: can non-unique fingerprints become unique if some value changes, and specifically, do non-unique fingerprints become unique if only one value changes?
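The counting behind these uniqueness figures, and the RQ 2 experiment just posed, reduces to tallying exact attribute combinations. The following sketch is our illustration with hypothetical toy values, not the study's actual pipeline:

```python
from collections import Counter

# Toy fingerprints as (user_agent, plugins, timezone) tuples; values are hypothetical.
fingerprints = [
    ("UA-1", "plugins-A", "UTC+1"),
    ("UA-1", "plugins-B", "UTC+1"),
    ("UA-2", "plugins-B", "UTC+1"),
]

def uniqueness_rate(fps):
    """Share of fingerprints whose exact attribute combination appears only once."""
    counts = Counter(fps)
    return sum(1 for fp in fps if counts[fp] == 1) / len(fps)

print(uniqueness_rate(fingerprints))  # 1.0: every toy fingerprint is unique

# Scenario-style simulation (cf. scenario n.1): force one attribute to a single
# constant value, as if plugins had disappeared, and re-count unique fingerprints.
PLUGINS = 1  # index of the plugin attribute in the tuple
no_plugins = [fp[:PLUGINS] + ("none",) + fp[PLUGINS + 1:] for fp in fingerprints]
print(uniqueness_rate(no_plugins))  # drops to 1/3 once plugins stop discriminating

# RQ 2-style experiment: change a single value of an existing fingerprint and
# check whether the modified combination is unseen (hence unique) in the dataset.
modified = ("UA-1", "plugins-A", "UTC+5")  # only the timezone changed
print(Counter(fingerprints)[modified] == 0)  # True: it became unique
```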
Table 2: Browser measurements for the data (distinct values / unique values per attribute).
Attribute | Dataset | Mobile devices | Personal computers
User-agent | 19,775 / 8,702 | 10,949 / 5,424 | 8,826 / 3,278
Header-accept | 24 / 9 | 9 / 2 | 19 / 8
Content encoding | 30 / 8 | 19 / 5 | 25 / 4
Content language | 2,739 / 1,313 | 961 / 529 | 2,128 / 958
List of plugins | 288,740 / 196,898 | 81 / 33 | 288,715 / 196,882
Cookies enabled | 1 / 0 | 1 / 0 | 1 / 0
Use of local/session storage | 2 / 0 | 2 / 0 | 2 / 0
Timezone | 60 / 16 | 39 / 1 | 58 / 18
Screen resolution and color depth | 2,971 / 1,015 | 434 / 159 | 2,675 / 897
Available fonts | 17,372 / 6,618 | 94 / 36 | 17,326 / 6,603
List of HTTP headers | 610 / 229 | 158 / 78 | 491 / 164
Platform | 32 / 5 | 21 / 2 | 26 / 3
Do Not Track | 3 / 0 | 3 / 0 | 3 / 0
Canvas | 78,037 / 65,787 | 30,884 / 28,768 | 47,492 / 37,194
WebGL Vendor | 27 / 1 | 20 / 2 | 26 / 3
WebGL Renderer | 3,691 / 657 | 95 / 10 | 3,656 / 661
Use of an ad blocker | 2 / 0 | 2 / 0 | 2 / 0
Table 3: Shannon's entropy for all attributes from Panopticlick, AmIUnique and our data (entropy / normalized entropy).
Attribute | Panopticlick | AmIUnique | Dataset | Mobile devices | Desktop/laptop machines
Platform | - / - | 2.310 / 0.137 | 1.200 / 0.057 | 2.274 / 0.127 | 0.489 / 0.024
Do Not Track | - / - | 0.944 / 0.056 | 1.919 / 0.091 | 1.102 / 0.061 | 1.922 / 0.092
Timezone | 3.040 / 0.161 | 3.338 / 0.198 | 0.164 / 0.008 | 0.551 / 0.031 | 0.096 / 0.005
List of plugins | 15.400 / 0.817 | 11.060 / 0.656 | 9.485 / 0.452 | 0.206 / 0.011 | 10.281 / 0.494
Use of local/session storage | - / - | 0.405 / 0.024 | 0.043 / 0.002 | 0.056 / 0.003 | 0.042 / 0.002
Use of an ad blocker | - / - | 0.995 / 0.059 | 0.045 / 0.002 | 0.067 / 0.004 | 0.042 / 0.002
WebGL Vendor | - / - | 2.141 / 0.127 | 2.282 / 0.109 | 2.423 / 0.135 | 1.820 / 0.088
WebGL Renderer | - / - | 3.406 / 0.202 | 5.541 / 0.264 | 4.172 / 0.233 | 5.278 / 0.254
Available fonts | 13.900 / 0.738 | 8.379 / 0.497 | 6.904 / 0.329 | 2.192 / 0.122 | 6.967 / 0.335
Canvas | - / - | 8.278 / 0.491 | 8.546 / 0.407 | 7.930 / 0.442 | 8.043 / 0.387
Header Accept | - / - | 1.383 / 0.082 | 0.729 / 0.035 | 0.111 / 0.006 | 0.776 / 0.037
Content encoding | - / - | 1.534 / 0.091 | 0.382 / 0.018 | 1.168 / 0.065 | 0.153 / 0.007
Content language | - / - | 5.918 / 0.351 | 2.716 / 0.129 | 2.291 / 0.128 | 2.559 / 0.123
User-agent | 10.000 / 0.531 | 9.779 / 0.580 | 7.150 / 0.341 | 8.740 / 0.487 | 6.323 / 0.304
Screen resolution | 4.830 / 0.256 | 4.889 / 0.290 | 4.847 / 0.231 | 3.603 / 0.201 | 4.437 / 0.213
List of HTTP headers | - / - | 4.198 / 0.249 | 1.783 / 0.085 | 1.941 / 0.108 | 1.521 / 0.073
Cookies enabled | 0.353 / 0.019 | 0.253 / 0.015 | 0.000 / 0.000 | 0.000 / 0.000 | 0.000 / 0.000
H_M (worst scenario) | 18.843 | 16.860 | 20.980 | 17.938 | 20.793
Number of FPs | 470,161 | 118,934 | 2,067,942 | 251,166 | 1,816,776
ACKNOWLEDGMENT
We thank the b<>com Institute of Research and Technology (IRT) for their support and we are particularly grateful to Alexandre Garel for his collaboration in setting up the script on the commercial website and collecting the data. This work is partially supported by the CominLabs-PROFILE project and by the Wallenberg Autonomous Systems Program (WASP).
APPENDIX A. LIST OF TESTED FONTS
Andale Mono, AppleGothic, Arial, Arial Black, Arial Hebrew, Arial MT, Arial Narrow, Arial Rounded MT Bold, Arial Unicode MS,
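For completeness, the entropy figures of Table 3 follow from simple attribute counts; below is a minimal sketch (ours, on hypothetical data) of Shannon entropy and of the normalized entropy, where the worst-case value is H_M = log2(N) for N collected fingerprints:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """H = -sum(p_i * log2(p_i)) over the empirical distribution of one attribute."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def normalized_entropy(values):
    """H / H_M with H_M = log2(N), N being the number of collected fingerprints."""
    return shannon_entropy(values) / math.log2(len(values))

# Hypothetical timezone column: one dominant value and two rare ones.
timezones = ["UTC+1"] * 97 + ["UTC+0"] * 2 + ["UTC+5"]
print(round(shannon_entropy(timezones), 3))     # ~0.222 bits
print(round(normalized_entropy(timezones), 3))  # ~0.033
```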
57,484
[ "1020188", "184121", "838700" ]
[ "491189", "491189", "366312" ]
01744818
en
[ "phys" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01744818/file/PhysRevB95_075402_2017.pdf
T Garandel R Arras X Marie P Renucci L Calmels Electronic structure of the Co(0001)/MoS2 interface, and its possible use for electrical spin injection in a single MoS2 layer Keywords: 72.25.Dc, 72.25.Hg, 72.25.Mk, 73.20.At, 73.63.Rt, 75.70.Ak, 75.70.Cn The ability to perform efficient electrical spin injection from ferromagnetic metals into twodimensional semiconductor crystals based on transition metal dichalcogenide monolayers is a prerequisite for spintronic and valleytronic devices using these materials. Here, the hcp Co(0001)/MoS2 interface electronic structure is investigated by first-principles calculations based on the density functional theory. In the lowest energy configuration of the hybrid system after optimization of the atomic coordinates, we show that interface sulfur atoms are covalently bound to one, two or three cobalt atoms. A decrease of the Co atom spin magnetic moment is observed at the interface, together with a small magnetization of S atoms. Mo atoms also hold small magnetic moments which can take positive as well as negative values. The charge transfers due to covalent bonding between S and Co atoms at the interface have been calculated for majority and minority spin electrons and the connections between these interface charge transfers and the induced magnetic properties of the MoS2 layer are discussed. Band structure and density of states of the hybrid system are calculated for minority and majority spin electrons, taking into account spin-orbit coupling. We demonstrate that MoS2 bound to the Co contact becomes metallic due to hybridization between Co d and S p orbitals. For this metallic phase of MoS2, a spin polarization at the Fermi level of 16 % in absolute value is calculated, that could allow spin injection into the semiconducting MoS2 monolayer channel. Finally, the symmetry of the majority and minority spin electron wave functions at the Fermi level in the Co-bound metallic phase of MoS2 and the orientation of the border between the metallic and semiconducting phases of MoS2 are investigated, and their impact on spin injection into the MoS2 channel is discussed. I.INTRODUCTION Triggered by the success of graphene, the field of two-dimensional (2D) semiconductor (SC) crystals based on transition metal dichalcogenide (TMDC) monolayers encounters a spectacular development [1]. This new class of exciting materials presents several original characteristics. Their direct band gap [2] allows investigations by optical techniques, and the strong spin-orbit coupling combined with the lack of inversion symmetry result in nonequivalent valleys in the reciprocal space, that can be selectively addressed by circularly polarized light due to optical selection rules [3]. In a sense, the valley index could constitute a novel degree of freedom to carry and process information ("valleytronics" [4]). The exploration of the spin and valley degrees of freedom by all-optical experiments have been carried in the past years [5,6,7]. However, electrical spin injection yet remains elusive in these systems [8]. The ability to inject spin polarized currents would pave the way for new spintronic and spinoptronic devices [9,10,11], where one can imagine to take benefit from optical and spin properties of these materials. It could also benefit to future valleytronic devices, where electrical generation and control of the valley index is required. 
Due to the unique correlation between the spin and valley indices of charge carriers, this point is directly linked to electrical injection of spin polarized (and energy selected) carriers in TMDCs. The problem of electrical injection into 2D semiconductors like MoS2, MoSe2, WSe2, or WS2 is thus a new challenge in this field, after the recent exploration of electrical spin injection into tridimensional (3D) semiconductors such as GaAs [9,10,11], Si [12], Ge [13], and into 2D graphene [14]. The first experimental proof of electrical spin injection in 2D semiconductors has been reported only very recently: Ye et al. have shown that spin polarized holes can be injected in WSe2 from the ferromagnetic semiconductor GaMnAs, which acts as a spin aligner [15]. Unfortunately, this ferromagnetic material has a Curie temperature near 200 K, far below room temperature. An alternative way consists in using ferromagnetic metals (FM) like cobalt, iron or nickel, with Curie temperatures well above room temperature. In the context of electrical spin injection into 3D semiconductors in the diffusive regime, it is well established that one has to overcome the impedance mismatch [16] between the FM and the SC layers: this is usually done by inserting an oxide layer or a tunnel barrier [17]. The electrical spin injection from a ferromagnetic electrode into a single TMDC layer should, however, be totally different from the case of spin injection in a 3D semiconductor, and the problem has to be reconsidered. In particular, the semiconductor nature of MoS2 just below the metallic contact is questionable because covalent bonds could be formed between the metal and the TMDC monolayer, as discussed by Allain et al. [8]. Modification of the magnetic properties of MoS2 below the contact, in particular a spin-polarization of the electron states near the Fermi level and induced magnetic moments, could also occur due to the direct contact between MoS2 and the magnetic layer. First-principles calculations of the electronic structure based on the density functional theory (DFT) constitute a tool of choice to understand the bonding mechanisms between the FM and the TMDC layers and to give a clear description of the spin-polarized electron states at their interface. Most of the first-principles studies on interfaces between metals and a single TMDC layer have focused on non-magnetic metals like Ir, Pd, Ru, In, Ti, Au, Mo or W [18,19]. Only a few studies have concerned interfaces between Fe, Co or Ni and a TMDC monolayer: Dolui et al. have studied the giant magnetoresistance of Fe/MoS2/Fe magnetic tunnel junctions and demonstrated that MoS2 becomes conductive when the MoS2 spacer only contains one or two layers, due to the strong interaction between Fe and S atoms; this behavior could, however, be due to the fact that Fe is present on both sides of the MoS2 layer, and the case of a single Fe/MoS2 interface has not been investigated there [20]. Considering an interface between a single Co atomic layer and a MoS2 sheet, Chen et al. showed that the electronic structure of these two monolayers is drastically modified by a strong interface binding [21]; their results are however strongly influenced by the extreme thinness of the Co layer (which is so thin that it even becomes half-metallic), while Co electrodes in real Co/MoS2-based devices would certainly be thicker and contain several Co atomic layers. Leong et al. have studied the Ni(111)/MoS2 interface, but they mostly focus their study on the consequences of the insertion of a graphene layer between Ni and MoS2 rather than on a detailed description of the Ni(111)/MoS2 interface [22]. Finally, Yin et al. have recently calculated the electronic structure of the Fe/MoS2 and Co/MoS2 interfaces [23].
Their study of the Fe(111)/MoS2 interface is based on a supercell in which a 3x3 slab of Fe(111) and a 4x4 cell of MoS2 are stacked together. For Co/MoS2, they used a supercell in which a MoS2 monolayer is sandwiched between face-centered cubic (fcc) Co layers and an atomic structure which should not correspond to that of a real Co/MoS2 interface, not only because Co actually crystallizes in the hexagonal compact (hcp) structure, but also because their multilayer is based on 4x4 Co(111) atomic layers superimposed on a 3x3 MoS2 single layer: considering the experimental values of the lattice parameters (0.2507 nm for hcp Co and 0.312 nm for MoS2), this interface corresponds to a relatively high lattice mismatch of 6.9%. Moreover, the MoS2 layer would only be bound to Co on one of its two sides, in a realistic MoS2/Co contact. For all these reasons, the genuine atomic structure of the Co/MoS2 interface may probably be different from that used by these authors. In the present paper, we considered supercells built from 5x5 Co(0001) atomic layers with the hcp stacking and a 4x4 MoS2 single layer. These stacking would correspond to the very small lattice mismatch of 0.4% and would be more realistic for calculating the physical properties of the Co/MoS2 interface between a Co magnetic electrode and a single MoS2 layer. After a brief description of the first-principles methods that we have used, we will first describe the atomic structure of the Co(0001)/MoS2 interface, before giving details on the electronic states, magnetic moments, and charge transfers at the interface. We finally discuss the physical properties of the Co(0001)/MoS2 interface in the perspective of spin injection in the 2D semiconductor MoS2, and give insights on the possible utilization of the low strained Co/MoS2 contact for in plane spin transport in lateral channels [17]. II.FIRST-PRINCIPLES METHODS The ground state energy, the charge and the spin densities of all the supercells have been calculated self-consistently using the full-potential augmented plane waves + local orbitals (APW+lo) method implemented in the code WIEN2k [START_REF] Blaha | WIEN2k, an augmented plane wave+local orbitals program for calculating crystal properties[END_REF]. The Kohn-Sham equation has been solved in the framework of the density functional theory (DFT), using the parametrization proposed by Perdew, Burke and Ernzerhof for the exchange and correlation potential that was treated within the generalized gradient approximation (GGA) [START_REF] Perdew | [END_REF]. In all our supercells, we used atomic sphere radii of 1.8, 1.8, and 2.0 atomic units (a. u.), respectively for Co, S and Mo atoms. The largest wave vector Kmax used for expanding the Kohn-Sham wave functions in the interstitial area between atomic spheres is given by the dimensionless parameter RminKmax=6.0, where Rmin is the smallest atomic sphere radius of the supercell; this corresponds to an energy cut-off of 151 eV. The irreducible wedge of the Co/MoS2 supercell two-dimensional Brillouin zone was sampled with a k-mesh of typically 24 different k-vectors, generated with a special k-grid used to perform Brillouin zone integrations with the modified tetrahedron integration method. To model the Co/MoS2 interface, we used rather big symmetric supercells consisting in a Co slab with hcp stacking and a thickness of five 5x5 (0001) monolayers (MLs), covered on each of its two sides by a 4x4 MoS2 single layer, followed by vacuum. 
The thickness of the Co layer is sufficient to recover most of the electronic structure of bulk hcp Co at the center of the slab (shape of the density of states curves, value of the spin magnetic moment), and the thickness of the vacuum separation between periodically adjacent MoS2 layers is above 1 nm, large enough to avoid interaction effects between them. We have chosen to use a periodic slab with a single MoS2 layer on both sides of Co so as to also get identical surfaces on both sides of the Co slab and vacuum separation, which avoids artefacts such as charge transfer between surfaces across the Co layer. These huge supercells contain 125 Co, 64 S and 32 Mo atoms and up to 36 non-equivalent atoms, depending on the relative positions of S and Co atoms at the interface. This high number of atoms is a necessary requirement to treat the problem of the realistic low-strained single MoS2 contact that can be found, for example, at the source or the drain of a spin-field effect transistor (FET) based on a MoS2 channel. The lattice parameter that we chose for the Co(5x5)/MoS2(4x4) unit cell corresponds to four times the lattice parameter calculated for MoS2 (0.319 nm). The atomic structure of all the different MoS2/Co(5MLs)/MoS2 supercells that we have considered has been obtained by minimizing the forces acting on the different atoms. The minimum energy is achieved when all the atoms have reached their equilibrium position in the supercell. London dispersion forces are not included in the calculation. Once the atomic structure has been calculated, we can choose to include spin-orbit coupling effects at each self-consistent loop, to check whether these effects modify the electronic structure of the Co(0001)/MoS2 interface. This is performed within the second-order perturbation theory, for which we chose to include unoccupied electron states up to the maximum energy of 48 eV above the Fermi level, the magnetization being oriented along the Co(0001) axis.
III. ATOMIC STRUCTURE OF THE Co(0001)/MoS2 INTERFACE
We considered three different supercells, which correspond to three different manners of superimposing the 4x4 unit cell of 1H-MoS2 (shown on Figure 1a) on the 5x5 unit cell of Co(0001) (Fig. 1b). The first supercell (further labelled Supercell1) corresponds to the case where one of the interface S atoms of the 4x4 MoS2 cell (for instance, the S atom at the corners of the 4x4 MoS2 cell shown in Fig. 1a) is located just above one of the fcc hollow atomic sites of the Co(0001) surface. The second supercell (Supercell2) corresponds to the case where one of the interface S atoms is on a top atomic site (i.e. just above an atom of the surface Co ML). We finally considered the last supercell (Supercell3), for which one of the interface S atoms is above one of the hcp hollow sites of the Co(0001) surface (i.e. just above an atom of the subsurface Co ML). These three supercells are represented in Figure 2. The atomic structure of these three supercells has been calculated self-consistently without including spin-orbit coupling. After all the atoms have reached their equilibrium position, we observed that the ground state energy is the lowest for Supercell1, and respectively 0.0827 eV and 0.813 eV higher for Supercell2 and Supercell3. These energy differences correspond to the whole supercells, which all contain 2 MoS2/Co interfaces with 16 MoS2 formula units in the 4x4 MoS2 cell.
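As a quick sanity check of the per-formula-unit normalization used in the next paragraph (our arithmetic, using only the energy differences quoted above):

```python
# Whole-supercell ground-state energy differences relative to Supercell1 (eV).
dE = {"Supercell2": 0.0827, "Supercell3": 0.813}

# Each supercell holds 2 Co/MoS2 interfaces of 16 MoS2 formula units each.
n_fu = 2 * 16

for name, e in dE.items():
    print(name, round(1e3 * e / n_fu, 1), "meV per MoS2 formula unit")
# Supercell2 ~2.6 meV, Supercell3 ~25.4 meV, matching the values quoted next.
```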
Consequently, the difference per MoS2 formula unit between the ground state energies of the different supercells is only 2.6 meV between Supercell2 and Supercell1, and 25.4 meV between Supercell3 and Supercell1. From now on, we will only consider the physical properties of Supercell1, which corresponds to the lowest energy interface structure.
Each of the two Co/MoS2 interfaces in the supercell involves 25 Co and 16 S atoms. One of these 16 interface S atoms (further labelled S3) is covalently bound to 3 different but equivalent interface Co atoms (labelled Co3); 3 of the 16 interface S atoms (labelled S2) are each bound to 2 different but equivalent Co atoms (Co2), and all the 12 remaining interface S atoms are bound to a single Co atom. The position of all these atoms of the Co/MoS2 interface is shown in Figure 3. The distance between interface S and Co atoms is 0.236 nm between S3 and Co3, 0.234 nm between S2 and Co2, and 0.221 or 0.222 nm between the 12 remaining S and Co interface atoms. These interatomic distances are very close to those which have been measured for bulk cobalt sulfides (0.232 nm for CoS2 with the pyrite structure [26]). This clearly shows that bonding between the MoS2 single layer and the Co(0001) surface has a covalent nature and is not due to the Van der Waals interaction. Each of the interface Co atoms is bound to a single S atom, except the four ones marked by a star in Fig. 3. The atomic layers present a small warping near the Co/MoS2 interface, due to the different numbers of chemical bonds formed by the non-equivalent interface S atoms. Based on the averaged values of the z-coordinates (the z-axis being perpendicular to the interface) calculated for the atoms in the different atomic layers, we can estimate the average distance between successive atomic layers: we obtain an average distance of 0.205 nm between the interface Co and interface S atomic layers, and average distances of 0.160 and 0.154 nm between the Mo layer and the interface S and external S layers, respectively. These latter distances can be compared to the distance of 0.157 nm calculated between the S and Mo atomic layers of an isolated MoS2 sheet. The undulation of the successive atomic layers (difference between the highest and lowest z-coordinates) is respectively 0.011 nm, 0.029 nm, 0.026 nm and 0.023 nm for the interface Co, interface S, Mo and external S atomic layers.
IV. ELECTRONIC STRUCTURE OF THE Co(0001)/MoS2 INTERFACE
The majority and minority spin band structures of the Co/MoS2 slab are shown in Figures 4b and 4c. They contain a huge number of bands, some of them corresponding to cobalt bands and others to MoS2 bands, folded on themselves, with important band gap openings at the center and the edges of the two-dimensional Brillouin zone. The shaded areas which appear in these band structures correspond to the Co d-band continuum (below -0.5 eV for majority spin and across the Fermi level for minority spin electrons). Additional bands, which do not correspond to the folded bands of Co or MoS2, also appear in this figure; this is for instance the case between -0.5 eV and 0.5 eV for majority spin electrons. These new bands correspond to interface Bloch states involving covalently bound Co and S atoms. They strongly modify the physical properties of MoS2, giving a metallic behavior to this layer, induced by interface covalent bonding. Some of these interface bands have a non-negligible dispersion near the Fermi level (electrons in the metallic phase of MoS2 at the Co/MoS2 interface will not have a high effective mass). Each energy band in Figs. 4b and 4c is drawn with small circles, the radius of which is proportional to the contribution of the MoS2 layer to the corresponding Bloch states. The band of the Co/MoS2 slab corresponding to the bottom of the conduction band of the MoS2 single layer appears near 0.32 eV above the Fermi level (it is indicated with a blue arrow on Fig. 4b). This band is more clearly visible for minority spin (Fig. 4b) than for majority spin electrons, for which it is located in the continuum of Co d-bands (Fig. 4c).
The difference between the energy of this band and the Fermi level corresponds to the height of the Schottky barrier [19], the value of which can be estimated at $\phi_B = 0.32$ eV for the Co/MoS2 interface. Our calculation of the Schottky barrier height is performed at the DFT level. As discussed by Zhong et al. [27], this seems to give results in better agreement with experimental ones than those computed with the GW method, probably due to the fact that many-electron effects are greatly depressed by the charge transfer at the MoS2/metal interface, which significantly screens the electron-electron interaction [27].
Figure 5 shows the density of states (DOS) of the MoS2 layer at the Co(0001)/MoS2 interface. This DOS curve is continuous between -8 eV and energies well above the Fermi level, confirming the metallic character of the MoS2 layer when it is covalently bound to the cobalt surface. This holds for majority as well as for minority spin electrons. Despite the strong modification of the electronic structure of MoS2, important DOS peaks can be identified in Fig. 5 as belonging to the valence and conduction bands of the isolated MoS2 layer: the DOS curve of the isolated MoS2 sheet has also been represented in Fig. 5, where it has been shifted to coincide with the main DOS peaks of MoS2 at the Co/MoS2 interface. The bottom of the conduction band in the shifted DOS curve of the isolated MoS2 layer is near 0.32 eV above the Fermi level, which confirms the value of the Schottky barrier height estimated from the band structure of the supercell. This estimation of the Schottky barrier height is larger than the values measured experimentally (between 60 meV in [28] and 121 meV [29]). These measurements are however performed on exfoliated MoS2 flakes. In these cases, the flakes are exposed to air before Co is deposited. It means that impurities and molecules can be trapped at the interface, resulting in localized states within the gap that can modify the pinning of the Fermi level, and thus the Schottky barrier height. It would be interesting to compare our result with a fully epitaxial Co/MoS2 hybrid system elaborated in ultra-high vacuum, when it becomes available. Note that our estimation of the Schottky barrier height of the Co/MoS2 interface is comparable to values found in previous studies for similar interfaces involving another metal, like the Ti/MoS2 system [19].
The majority and minority spin densities of states give access to the spin polarization $P_S$ near the Fermi level in the MoS2 layer. This important quantity, defined as the difference between the majority and the minority spin densities of states divided by their sum, clearly indicates if a spin-polarized current can be injected through the Co/MoS2 interface to a MoS2 channel (the corresponding DOS curves are shown in Fig. 5). The estimation of the spin polarization at the Fermi level for the Co/MoS2 interface is of importance for spin injection. As pointed out by Mazin [30], an accurate determination of the electric current spin polarization would however require complementary DFT-based calculations (including the transmittance of the whole complex structure and the effects of the bias voltage).
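The definition of $P_S$ is a one-line computation once majority and minority DOS values are extracted; the following minimal sketch is ours, with placeholder numbers rather than the computed WIEN2k output:

```python
import numpy as np

def spin_polarization(dos_up, dos_down):
    """P_S = (D_up - D_down) / (D_up + D_down), evaluated pointwise in energy."""
    dos_up = np.asarray(dos_up, dtype=float)
    dos_down = np.asarray(dos_down, dtype=float)
    return (dos_up - dos_down) / (dos_up + dos_down)

# Placeholder majority/minority DOS of the Co-bound MoS2 layer at the Fermi level
# (arbitrary units); chosen only to reproduce a 16% polarization in absolute value.
d_up, d_down = 0.42, 0.58
print(spin_polarization(d_up, d_down))  # -0.16
```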
V. SPIN MAGNETIC MOMENTS AT THE Co(0001)/MoS2 INTERFACE
The spin magnetic moment of Co atoms is on average 8% lower at the Co(0001)/MoS2 interface than in bulk hcp Co (1.69 µB), with values that depend on the kind of interface S atom to which they are covalently bound: the interface Co atoms that show the highest spin magnetic moments (1.66 µB and 1.63 µB) are the 4 atoms which are not bound to S atoms, followed by Co3 (1.62 µB) and Co2 atoms (1.57 µB). All the other interface Co atoms have a spin magnetic moment between 1.48 and 1.50 µB: the lowering of the interface Co atom magnetic moment is more important when Co atoms are more strongly bound to S atoms, with a shorter Co-S bond length. All the interface S atoms have a small spin magnetic moment with the same sign as the Co atom magnetic moments, with values between 0.012 and 0.016 µB. The spin magnetic moment of S atoms in the external S layer is even smaller (0.003 to 0.004 µB). The spin magnetic moment of Mo atoms (between -0.029 and -0.024 µB) is antiferromagnetically coupled to the Co and S magnetic moments, except when these Mo atoms are bound to one or to two S2 atoms (in this case, Mo spin magnetic moments respectively take the positive values 0.008 and 0.050 µB).
VI. CHARGE TRANSFER AT THE Co(0001)/MoS2 INTERFACE
To calculate the charge transfer between atoms induced by covalent bonding at the Co(0001)/MoS2 interface, we proceeded as follows: first, we calculated the majority spin electron density $n^{\uparrow}(\mathbf{r})$ of the whole Co/MoS2 supercell; we then calculated the density $n^{\uparrow}_{Co}(\mathbf{r})$ obtained when the Co atoms keep the same positions as in Supercell1 and the Mo and S atoms have all been removed, and the density $n^{\uparrow}_{MoS_2}(\mathbf{r})$ obtained when Mo and S atoms keep the same positions as in Supercell1 and Co atoms have all been removed. The majority spin space-dependent charge transfer can finally be obtained from
$\Delta n^{\uparrow}(\mathbf{r}) = n^{\uparrow}(\mathbf{r}) - \{ n^{\uparrow}_{Co}(\mathbf{r}) + n^{\uparrow}_{MoS_2}(\mathbf{r}) \}$.
The minority spin charge transfer is similarly given by
$\Delta n^{\downarrow}(\mathbf{r}) = n^{\downarrow}(\mathbf{r}) - \{ n^{\downarrow}_{Co}(\mathbf{r}) + n^{\downarrow}_{MoS_2}(\mathbf{r}) \}$.
Figure 7 shows a top and a side view (the direction of observation corresponds to the blue arrow in Fig. 3) of the calculated three-dimensional majority and minority spin charge transfers. Red and green areas respectively correspond to a local excess and a local lack of electrons. As expected, we see on this figure that charge transfers occur between interface Co and S atoms along the Co-S covalent bonds. This figure shows that the reduction of the Co magnetic moment at the interface is due to an excess of minority spin electrons, concomitant with a lack of majority spin electrons on interface Co atoms. Similarly, the magnetic moment of Mo atoms is due to an excess of majority spin and a lack of minority spin electrons for the Mo atoms which have a positive spin magnetic moment (right hand side of Figs. 7b and 7d), and mostly to an excess of minority spin electrons for Mo atoms which have a negative magnetic moment (left hand side of Fig. 7d).
VII. PERSPECTIVES: TOWARDS ELECTRICAL SPIN INJECTION IN MoS2
In the context of spin injection into bulk semiconductor materials, it has been established that the so-called conductivity mismatch [16] between the ferromagnetic metal injector and the semiconductor constitutes a fundamental obstacle for efficient spin injection at the ferromagnetic metal/semiconductor interface in the diffusive regime. To circumvent this problem, it has been shown that a thin tunnel barrier must be introduced between the FM and the SC, inducing an effective spin dependent resistance [17]. A possible tunnel barrier that can be exploited is the natural Schottky barrier that occurs between the two materials. As spin injection in the tunnel regime is desirable, a careful engineering of the doping in the semiconductor close to the interface is required in order to make the Schottky barrier thin enough to behave as a tunnel barrier. It has been realized successfully in the Fe/GaAs system [10], where GaAs was gradually n-doped up to 10^19 cm^-3 at the interface, resulting in a very efficient electrical spin injection from Fe into GaAs in the tunnel regime. In the context of TMDCs, the general problem of electrical contacts on TMDCs for transport in a two-dimensional channel is a challenging task (tackled for example by phase engineering techniques [31,32], as well as by using contacts based on heterostructures involving graphene [22]). Concerning spin injection with Co/MoS2 (or other FM/TMDC [33]) interfaces, one has to consider the Schottky barrier between the metallic phase of MoS2 just below the contact, labeled hereafter (MoS2)*, and the semiconducting MoS2 channel out of the contact. As in GaAs, one could imagine strongly increasing the doping in MoS2 close to the (MoS2)*/MoS2 one-dimensional border, see Figure 8. Even if, up to now, spatially controlled chemical doping of a single layer TMDC is still a challenge, such an in-plane localized doping has already been obtained with the help of additional gate electrodes developed successfully for in-plane P-I-N junctions in TMDC Light Emitting Diodes [34]. Considering the electrical spin injection tunnel process at this (MoS2)*/MoS2 (n-doped) one-dimensional interface, one has to compare, for majority and minority spin, the compatibility of the symmetries of the electron wave functions in (MoS2)* at the Fermi level with the ones in the conduction band of the MoS2 single layer semiconductor channel, in order to estimate the efficiency of the tunneling process. The conduction electron wave functions in the isolated MoS2 channel exhibit a strong Mo dz2 character (as well as a smaller Mo s character), and a S (px+py) character. The DOS at the Fermi level calculated for (MoS2)* is qualitatively different from the DOS at the bottom of the conduction band of MoS2. Due to the strong hybridization between Co and S atomic orbitals at the Co/MoS2 interface, Bloch electron states at the Fermi level result from a non-trivial combination of atomic orbitals that involve Mo-dz2, S-(px+py) and also other orbitals, see Table 1. However, the contribution of Mo-dz2 and S-(px+py) atomic orbitals is not negligible and still represents 33% and 37% of the total DOS of (MoS2)* at the Fermi level, respectively for majority and minority spin electrons.
The symmetry of the majority and minority spin electron states at the Fermi level in (MoS2)* is thus partly compatible with the symmetry of Bloch states in the conduction band of the MoS2 channel. The efficiency of electrical injection in the MoS2 channel in the tunnel regime should also depend on the direction of the one-dimensional border between (MoS2)* and MoS2. We know that the Bloch vector of electrons in the conduction band of the MoS2 channel, after tunneling through the Schottky barrier, corresponds to one of the K-valleys. We also know, using the simple model of a free electron with mass $m$ and energy $E$ propagating in a plane towards a straight one-dimensional potential step where the potential jumps from 0 to $U > E$, that the penetration depth that characterizes the exponential decay of the wave function after the step is given by
$\left[ \frac{2m}{\hbar^2} \left( U - E + \frac{\hbar^2 k_{\parallel}^2}{2m} \right) \right]^{-1/2}$,
where $k_{\parallel}$ is the component of the electron two-dimensional Bloch vector parallel to the straight border. It follows that the straight (MoS2)*/MoS2 border should be perpendicular to the (ΓK) direction of the two-dimensional Brillouin zone, in order to get a spin-polarized current with maximum intensity reaching a K-valley in the MoS2 channel.
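This geometric argument can be made concrete numerically; the sketch below is ours, with a hypothetical barrier height of the order of the computed Schottky barrier, and shows that the decay length is largest for $k_{\parallel} = 0$, i.e. for a border perpendicular to (ΓK):

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
M_E = 9.1093837015e-31  # kg
EV = 1.602176634e-19    # J

def penetration_depth(barrier_eV, k_par):
    """[2m(U - E)/hbar^2 + k_par^2]^(-1/2): decay length of the evanescent wave
    behind a straight potential step, for a parallel wave vector k_par (in 1/m)."""
    kappa_sq = 2.0 * M_E * barrier_eV * EV / HBAR**2 + k_par**2
    return 1.0 / np.sqrt(kappa_sq)

# Hypothetical barrier U - E of 0.3 eV, of the order of the computed phi_B.
for k_par in (0.0, 5e9):  # 1/m; 5e9 is a sizeable fraction of |Gamma-K| in MoS2
    print(k_par, penetration_depth(0.3, k_par))
# k_par = 0 gives the slowest decay (~0.36 nm here), hence the border geometry.
```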
VIII. CONCLUSIONS
In this paper we have investigated the electronic structure of a single low-strained hcp Co(0001)/MoS2 interface, using first-principles calculations based on the density functional theory, in order to estimate the potential of a cobalt injector for electrical spin injection into a MoS2 monolayer, in view of spintronic devices such as the spin-FET. We first described the lowest energy atomic structure and showed that interface S atoms are covalently bound to one, two or three interface Co atoms. A lower spin magnetic moment is observed for Co atoms at the interface, together with a small magnetization of S atoms. Mo atoms also hold small magnetic moments which can take positive as well as negative values. The induced magnetic moments have been interpreted in terms of majority and minority spin charge transfers at the interface. Band structures and density of states curves have been calculated for minority and majority spin electrons in the hybrid system, taking into account spin-orbit coupling. We demonstrate that MoS2 just below the cobalt contact becomes metallic due to hybridization with Co d orbitals.
ACKNOWLEDGEMENTS
The authors acknowledge the Université Fédérale de Toulouse-Midi-Pyrénées and Région Midi-Pyrénées for the PhD grant SEISMES as well as the grant NEXT n° ANR-10-LABX-0037 in the framework of the « Programme des Investissements d'Avenir ». This work was granted access to the HPC resources of the CALMIP supercomputing center under the allocation p1446 (2014-2016).
TABLE CAPTIONS:
Table 1: Total and partial (for the most important atomic orbitals) Mo and S majority and minority spin density of states at the Fermi level for the Co/MoS2 interface. All the results are given in the same arbitrary unit.
FIGURE CAPTIONS:
Figure 1: Atomic structure (top views) of (a): the 4x4 unit cell of MoS2, and (b): the 5x5 unit cell of a 5MLs Co(0001) slab.
Figure 2: Side views of the atomic structure of the Co/MoS2 supercells corresponding to (a): Supercell1, (b): Supercell2, and (c): Supercell3.
Figure 3: Top view of the atomic structure of the Co(0001)/MoS2 interface for Supercell1. The atoms Co2 (dark blue spheres), Co3 (medium blue), S2 (red) and S3 (orange) discussed in section III are indicated. The Co atoms marked by a star are those which are not directly bound to a S atom.
Figure 4: Band structure of (a): the 4x4 MoS2 single layer supercell, (b): the Co/MoS2 slab for majority spin, (c): the Co/MoS2 slab for minority spin, (d): the 5x5 Co(0001) slab for majority spin and (e): the 5x5 Co(0001) slab for minority spin. The radius of the red circles in panels (b) and (c) is proportional to the MoS2 contribution to the electron states. The blue arrow in panel (a) indicates the minimum of the conduction band in MoS2; its corresponding position in the band structure of the Co/MoS2 slab is also shown with a blue arrow in panel (b), where it corresponds to the height of the Schottky barrier.
Figure 5: Contribution of a MoS2 monolayer to the density of states of Supercell1 (dark curves). The density of states of an isolated MoS2 layer is also represented after an energy shift (red curves). The upper and lower parts of the figure respectively correspond to majority and minority spin electrons.
Figure 6: Spin polarization in the MoS2 layer, calculated without (red curve) and with (dark curve) spin-orbit coupling.
Figure 7: (a): Top view and (b): side view of the majority spin charge transfer.
Figure 8: Sketch representing (a): the bottom view (below the Co contact) and (b): the side view of the physical area involving the Co(0001) contact, (MoS2)*, the Schottky barrier (MoS2)*/MoS2 obtained by suitable doping and the MoS2 channel. The one-dimensional border which should be designed to lower the exponential decay of wave functions in the Schottky barrier is indicated by a red dashed line in (a), together with the (ΓK) direction of MoS2 (red arrow). The corresponding Schottky barrier profile is represented in panel (c).
33,099
[ "18552", "170766", "171543", "18553" ]
[ "519179", "519179", "116255", "43574", "519179" ]
01744852
en
[ "info" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01744852/file/Template_ISBI2018-copy.pdf
Sudhanya Chatterjee Olivier Commowick Onur Afacan Simon K Warfield Christian Barillot MULTI-COMPARTMENT MODEL OF BRAIN TISSUES FROM T2 RELAXOMETRY MRI USING GAMMA DISTRIBUTION Keywords: T 2 relaxometry, microstructure, brain The brain microstructure, especially myelinated axons and free fluids, may provide useful insight into brain neurodegenerative diseases such as multiple sclerosis (MS). These may be distinguished based on their transverse relaxation times which can be measured using T 2 relaxometry MRI. However, due to physical limitations on achievable resolution, each voxel contains a combination of these tissues, rendering the estimation complex. We present a novel multi-compartment T 2 (MCT2) estimation based on variable projection, applicable to any MCT2 microstructure model. We derive this estimation for a three-gamma distribution model. We validate our framework on synthetic data and illustrate its potential on healthy volunteer and MS patient data. INTRODUCTION MRI voxels of the human brain are heterogeneous in terms of tissue types due to the limited imaging resolution and physical constraints. Each voxel in the white matter (WM) contains a large number of myelinated and non-myelinated axons, glial cells and extracellular fluids [START_REF] Tomasch | Size, distribution, and number of fibres in the human corpus callosum[END_REF][START_REF] Lancaster | Three-pool model of white matter[END_REF]. For example, every square millimeter of the corpus callosum in a human brain has more than 100,000 fibers (myelinated and non-myelinated) of varying diameters [START_REF] Tomasch | Size, distribution, and number of fibres in the human corpus callosum[END_REF]. These tissues can be distinguished based on their T 2 relaxation times. Myelin being a tightly wrapped structure has a very short T 2 relaxation time of 10 milliseconds (ms) [START_REF] Lancaster | Three-pool model of white matter[END_REF]. The estimated T 2 relaxation time of the myelinated axons is 40ms [START_REF] Lancaster | Three-pool model of white matter[END_REF]. The ventricles and tissue injury regions contain free fluids which have a high T 2 relaxation time (>1000ms). The T 2 relaxation values between those of myelin and myelinated axons and the free fluids correspond to the glial cells and extracellular tissues [START_REF] Lancaster | Three-pool model of white matter[END_REF]. An ability to obtain the condition of these tissues can help us gain better insights into the onset and progress of neurodegenerative diseases such as multiple sclerosis (MS). Myelin water fraction (MWF) has been computed from T 2 relaxometry images using a variety of approaches [START_REF] Mackay | Magnetic resonance of myelin water: An in vivo marker for myelin[END_REF][START_REF] Akhondi-Asl | Fast myelin water fraction estimation using 2D multislice CPMG[END_REF]. Most of these methods primarily focus on the MWF estimation. However, MWF alone might not be able to convey the en-tire information since it is a relative measurement. For example, in MS patients a decrease in MWF in a WM lesion might be caused by myelin loss or fluid accumulation due to tissue injury or both. Hence for relative measurements like water fraction (WF), all the WF maps should be observed simultaneously for a complete understanding of the tissue condition. Here we propose an estimation framework to obtain brain microstructure information using a multi-compartment tissue model from T 2 relaxometry MRI data. 
The T2 space is modeled as a weighted mixture of three continuous probability density functions (PDF), each representing the tissues with short, medium and high T2 relaxation times. We estimate the PDF parameters using the variable projection (VARPRO) approach [START_REF] Golub | Separable nonlinear least squares: the variable projection method and its applications[END_REF]. We derive this generic estimation framework for gamma PDFs. We validate the proposed method using synthetic data against known ground truth. We then illustrate it on a healthy subject and an MS patient.
METHOD AND MATERIALS
Theory
Signal model
The T2 space is modeled as a weighted mixture of three PDFs, representing each of the three T2 relaxometry compartments: short-, medium- and high-T2. Thus the voxel signal at the i-th echo time ($t_i$) is given as:
$s(t_i) = M_0 \sum_{j=1}^{3} w_j \int_0^{\infty} f_j(T_2; p_j)\, EPG(T_2, TE, i, B_1)\, dT_2 \quad (1)$
Each compartment is described by a chosen PDF, $f_j(T_2; p_j)$, where $p_j = \{p_{j1}, \ldots, p_{jn}\} \in \mathbb{R}^n$ are the PDF parameters. In Eq. (1), $w_j$ is the weight of the j-th distribution with $\sum_j w_j = 1$. $\Delta TE$, $B_1$ and $M_0$ are the echo spacing, field inhomogeneity and magnetization constant respectively. Imperfect rephasing of the nuclear spins after application of refocusing pulses leads to the generation of stimulated echoes [START_REF] Hennig | Calculation of Flip Angles for Echo Trains with Predefined Amplitudes with the Extended Phase Graph (EPG)-Algorithm: Principles and Applications to Hyperecho and TRAPS Sequences[END_REF]. Hence the T2 decay is not purely exponential. The stimulated echoes are thus obtained using the EPG algorithm [START_REF] Prasloski | Applications of stimulated echo correction to multicomponent T2 analysis[END_REF]. $EPG(\cdot)$ is the stimulated echo computed at $t_i = i\,\Delta TE$, where $i = \{1, \ldots, m\}$ and $m$ is the number of echoes.
Optimization
$M_0$ and $w_j$ can be combined into a single term $\alpha_j \in \mathbb{R}^{+}$ without any loss of generality. In that case, the weight $w_j$ corresponding to each compartment is obtained as $w_j = \alpha_j / \sum_i \alpha_i$. In the most general case, the least squares minimization problem is thus formulated as:
$\{\hat{\alpha}, \hat{p}, \hat{B}_1\} = \arg\min_{\alpha, p, B_1} \sum_{i=1}^{m} \Big( y_i - \sum_{j=1}^{3} \alpha_j \lambda_j(t_i; p, B_1) \Big)^2 = \arg\min_{\alpha, p, B_1} \left\| Y - \Lambda(p, B_1)\,\alpha \right\|_2^2 \quad (2)$
where $Y \in \mathbb{R}^m$ is the observed signal and $m$ is the number of echoes; $\alpha \in \mathbb{R}^{+3}$; $\Lambda \in \mathbb{R}^{m \times 3}$; $p = \{p_1, p_2, p_3\} \in \mathbb{R}^k$, where $k = 3n$. In Eq. (2), each element of $\Lambda$, $\Lambda_{ij} = \lambda(t_i; p_j, B_1)$, is computed as:
$\Lambda_{ij} = \int_0^{\infty} f_j(T_2; p_j)\, EPG(T_2, TE, i, B_1)\, dT_2 \quad (3)$
Due to the EPG formulation, there is no closed form derivative solution for the optimization of $B_1$ [START_REF] Prasloski | Applications of stimulated echo correction to multicomponent T2 analysis[END_REF]. Hence, we opt for an alternate optimization scheme where we iterate between optimization of $\{p, \alpha\}$ with a fixed value of $B_1$ and optimization of $B_1$ using the obtained $\{p, \alpha\}$ values. The terms $\Lambda(p, B_1)$ and $\alpha$ in Eq. (2) are linearly separable. Hence we can use the VARPRO approach to solve for $\{p, \alpha\}$ [START_REF] Golub | Separable nonlinear least squares: the variable projection method and its applications[END_REF]. The unknown $\alpha$ is substituted by $\Lambda(p)^{+} Y$, where $\Lambda(p)^{+}$ is the Moore-Penrose generalized inverse of $\Lambda(p)$. The VARPRO cost function is computed as:
$\arg\min_{p} \left\| \big( I - \Lambda(p)\,\Lambda(p)^{+} \big)\, Y \right\|_2^2 \quad (4)$
where $I - \Lambda(p)\,\Lambda(p)^{+}$ is the projector on the orthogonal complement of the column space of $\Lambda(p)$.
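Eq. (4) is straightforward to prototype: for a candidate $p$ one builds $\Lambda(p)$, projects $Y$ onto the orthogonal complement of its column space, and recovers $\alpha$ afterwards. The following minimal numpy sketch is ours; the matrix is a random stand-in for the EPG-based integrals of Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 32                        # number of echoes
Lam = rng.random((m, 3))      # stand-in for Eq. (3): really EPG-based integrals
alpha_true = np.array([0.2, 0.7, 0.1])
Y = Lam @ alpha_true          # noiseless synthetic signal

def varpro_cost(Lam, Y):
    """||(I - Lam Lam^+) Y||_2^2, the VARPRO cost of Eq. (4) for this Lambda(p)."""
    residual = Y - Lam @ (np.linalg.pinv(Lam) @ Y)
    return float(residual @ residual)

print(varpro_cost(Lam, Y))           # ~0 when Lambda(p) matches the data
alpha_hat = np.linalg.pinv(Lam) @ Y  # linear part recovered after the fact
print(alpha_hat / alpha_hat.sum())   # weights w_j = alpha_j / sum_i alpha_i
```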
Since $p \in \mathbb{R}^k$, the Jacobian matrix $J \in \mathbb{R}^{k \times m}$ and its columns are computed as shown in [START_REF] Golub | Separable nonlinear least squares: the variable projection method and its applications[END_REF]. To compute the elements of $J$, we need to obtain $\partial \Lambda / \partial p_{ji}$, $\forall i, j$ [START_REF] Golub | Separable nonlinear least squares: the variable projection method and its applications[END_REF]. After solving Eq. (4) for $p$, the values of $\alpha$ are obtained as $\Lambda(p)^{+} Y$. The optimization for $\{\alpha, p\}$ and $B_1$ is performed alternately until convergence. $B_1$ is optimized using a gradient free optimizer (BOBYQA), as it does not have any closed form solution [START_REF] Prasloski | Applications of stimulated echo correction to multicomponent T2 analysis[END_REF].
Multi-compartment model using gamma PDF
The previous estimation framework is generic as it does not depend on the chosen PDF. We choose here to use gamma PDFs for $f_j(\cdot)$, $j = \{1, 2, 3\}$, since their non-negativity and skewed nature are well suited to describe the compartments used to model the T2 space. The mean T2 values of myelin, myelinated axons, inter- and extra-cellular water and free fluids in the brain are well studied in the literature [START_REF] Lancaster | Three-pool model of white matter[END_REF][START_REF] Mackay | Magnetic resonance of myelin water: An in vivo marker for myelin[END_REF]. Hence we parameterized each $f_j$ in terms of its mean ($\mu_j$) and variance ($v_j$) rather than the usual shape and scale parameter representation (refer Eq. (5)). Using this parametric form of the gamma PDF makes the choice of optimization bounds convenient.
$f(T_2; \mu_j, v_j) = \frac{T_2^{(\mu_j^2/v_j) - 1}}{\Gamma(\mu_j^2/v_j)\, (v_j/\mu_j)^{\mu_j^2/v_j}} \exp\left( -\frac{T_2}{v_j/\mu_j} \right) \quad (5)$
Hence we have $p = \{\mu_s, v_s, \mu_m, v_m, \mu_h, v_h\}$, where $(\cdot)_s$, $(\cdot)_m$ and $(\cdot)_h$ are the PDF parameters describing the short-, medium- and high-T2 compartments respectively. Due to practical limitations such as feasible acquisition time, coil heating and specific absorption rate (SAR) guidelines, T2 relaxometry MRI sequences have limitations on the shortest echo time and number of echoes per acquisition. The high-T2 compartment aims at capturing free fluids in the brain, and hence has a T2 relaxation time larger than 1 second [START_REF] Mackay | Magnetic resonance of myelin water: An in vivo marker for myelin[END_REF]. A standard T2 spin echo multi-contrast sequence has its shortest T2 acquisition (first echo) at around 8-10 ms and has 20-40 acquired echoes. Hence for the short-T2 compartment we usually have a very limited number of echoes (around 3-4), and there are almost no echoes available which correspond to the high-T2 compartment. The robustness and accuracy of implementations that simultaneously estimate the weights and all the PDF parameters have been found to be unreliable [START_REF] Kj Layton | Modelling and estimation of multicomponent T 2 distributions[END_REF]. Hence we choose to estimate only the mean of the gamma PDF corresponding to the medium-T2 compartment. Using the VARPRO approach we thus estimate four parameters of the signal model: the mean of the medium-T2 gamma PDF ($\mu_m$) and the three weights corresponding to each compartment. Hence only $\partial \Lambda / \partial \mu_m$ is required for computing the Jacobian matrix; it is obtained as:
$\frac{\partial \Lambda}{\partial \mu_m} = \int_0^{\infty} f(T_2; \mu_m, v_m) \left[ \frac{\mu_m}{v_m} \left( 2 \log \frac{T_2\, \mu_m}{v_m} - 2 \Psi\!\left( \frac{\mu_m^2}{v_m} \right) + 1 \right) - \frac{T_2}{v_m} \right] EPG(T_2, TE, i, B_1)\, dT_2 \quad (6)$
where $\Psi(\cdot)$ is the digamma function. The remaining gamma PDF parameter values are pre-selected for the three compartments based on histology findings reported in the literature [START_REF] Mackay | Magnetic resonance of myelin water: An in vivo marker for myelin[END_REF] and are set as $\{\mu_s, \mu_h\} = \{30, 2000\}$ ms and $\{v_s, v_m, v_h\} = \{50, 100, 6400\}$ ms². We assume a reasonable bound on $\mu_m$ of 100-125 ms for its optimization. The minimization problem in Eq. (4) is solved for $\mu_m$ using the analytically obtained derivative in Eq. (6) with a gradient based optimizer [START_REF] Svanberg | A class of globally convergent optimization methods based on conservative convex separable approximations[END_REF].
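The $(\mu, v)$ parameterization of Eq. (5) is simply a gamma law with shape $\mu^2/v$ and scale $v/\mu$; a short sketch (ours) using scipy illustrates it with the pre-selected short-T2 values:

```python
import numpy as np
from scipy.stats import gamma

def gamma_pdf_mean_var(t2, mu, v):
    """Eq. (5): gamma PDF with shape mu^2/v and scale v/mu (mean mu, variance v)."""
    return gamma.pdf(t2, a=mu**2 / v, scale=v / mu)

t2 = np.linspace(1.0, 300.0, 1000)              # ms
pdf_short = gamma_pdf_mean_var(t2, 30.0, 50.0)  # pre-selected short-T2 compartment

# Sanity check: the distribution indeed has mean mu and variance v.
dist = gamma(a=30.0**2 / 50.0, scale=50.0 / 30.0)
print(dist.mean(), dist.var())  # 30.0 50.0
```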
The short-T2 compartment here indicates the condition of myelin and myelinated axons [START_REF] Lancaster | Three-pool model of white matter[END_REF]. The medium-T2 compartment's WF conveys information on the condition of axons, glial cells and extracellular fluids [START_REF] Lancaster | Three-pool model of white matter[END_REF]. The condition of free fluids (such as in the ventricles, and fluid accumulated due to tissue injuries) is indicated by the high-T2 WF values.
Experiments
Synthetic data. The proposed method was first validated against synthetic data generated following a known ground truth, composed of three gamma PDFs with parameters {µs, µm, µh} = {25, 120, 1900} and {vs, vm, vh} = {40, 90, 6000}. The weights chosen for each compartment were {ws, wm, wh} = {0.2, 0.7, 0.1}. The B1, M0 and T1 values considered for this simulation were 1.3, 950 and 1000 respectively. The experiments were carried out for SNR values ranging from 5 to 100 in steps of 5. We simulated T2 relaxometry data with the following parameters: first echo (TE0) at 9 ms; ∆TE = 9 ms; 32 echoes; 100 signal averages.
Healthy volunteer data. The method was tested on T2 relaxometry data acquired on a healthy volunteer (male, age: 26) with the following acquisition details: Siemens 3T MRI scanner; 2D multislice CPMG sequence; 32 echoes; TE0 = 9 ms; echo spacing of 9 ms; TR = 3000 ms; single slice acquired; slice thickness of 4 mm; in-plane resolution of 1.04 mm × 1.04 mm; matrix size of 192 × 192.
MS patient data. The method was finally tested on T2 relaxometry MRI data of an MS patient. The observations from our estimation maps were compared with the pathological findings on MS lesions reported in the literature [START_REF] Lassmann | Heterogeneity of multiple sclerosis pathogenesis: Implications for diagnosis and therapy[END_REF][START_REF] Cr Guttmann | The evolution of multiple sclerosis lesions on serial MR[END_REF]. We observed whether the estimation maps obtained from our method were able to provide insight into MS lesions in a way that corroborates the pathological findings. The acquisition details are the same as for the healthy volunteer data.
RESULTS
Synthetic data. The results of the synthetic data simulation are shown in Fig. 1. They show that with increasing SNR the weights estimation gets more accurate. The bars around the mean value are the 95% confidence intervals (CI), obtained as 1.96 times the standard deviation of the estimates. The CIs of the estimation improve with increasing SNR for all three WFs. The ground truth lies in the CI of the mean estimated weights for all three compartments.
Healthy volunteer data. The estimation maps for the healthy volunteer are shown in Fig. 2. The genu of the corpus callosum (CC) has higher ws values compared to any other region. The ventricles and other regions with free fluids have a higher µm compared to the normal appearing white matter (NAWM) tissues. This is relevant as free fluids have a higher T2 value compared to the relatively tightly bound tissues present in NAWM.
MS patient data. Two lesions are present in the MR image of the MS patient shown in Fig. 3; they are marked with red and blue arrows. We observe an absence of short-T2 WF in both lesions. The lesions and their neighboring tissues have higher wm values than the NAWM tissues. The wh map shows fluid accumulation in lesion-2 but not in lesion-1. The estimated medium-T2 gamma PDF map shows a higher PDF mean for both lesions compared to the NAWM. In lesion-1 the estimated µm increases (with respect to the neighboring NAWM) as it approaches the core of the lesion, but is less than the µm estimated at the ventricles where there is free fluid.
DISCUSSION
Our method was successfully validated against synthetic data with known ground truth for all SNR values (refer Fig. 1). High ws values in the genu of the CC of the healthy volunteer data (refer Fig. 2) are due to the high density of myelin and myelinated fibers in this region compared to any other part of the brain [START_REF] Tomasch | Size, distribution, and number of fibres in the human corpus callosum[END_REF]. In the estimation maps of the MS patient (refer Fig. 3), the absence of short-T2 WF in the lesions and in their neighboring regions can be explained by demyelination of the nerve fibers caused by MS [10][3]. Demyelination at the onset of MS is followed by macrophage intervention leading to increased cellular activity in the MS lesion regions [START_REF] Lassmann | Heterogeneity of multiple sclerosis pathogenesis: Implications for diagnosis and therapy[END_REF], whose T2 relaxation time is greater than that of myelin and myelinated axons but less than that of free fluids [START_REF] Lancaster | Three-pool model of white matter[END_REF]. This phenomenon might explain the high wm values in the lesions and in neighboring regions. Demyelination is followed by progressive axonal damage and fluid accumulation (due to tissue injuries) in MS lesions [START_REF] Lassmann | Heterogeneity of multiple sclerosis pathogenesis: Implications for diagnosis and therapy[END_REF]. The extent of axonal damage and fluid accumulation in the lesions can provide useful information regarding the lesion state and its response to a treatment.
Lesion-2 has fluid accumulation unlike lesion-1, possibly indicating that the two lesions are in different stages. The continuous axonal damage in the MS lesions [10][11] explains the higher µm in the lesion regions compared to the neighboring NAWM, as a higher µm value indicates tissues with less tightly bound water. The increment in µm values in lesion-1 as we approach the lesion core from the lesion boundary might also indicate a reduction in axon density. This is in accordance with the pathology of MS lesion evolution [10][11].
CONCLUSION
We proposed a generic estimation method to obtain estimates of tissue microstructure in the brain by modeling the T2 spectrum as a weighted mixture of three gamma PDFs. The maps estimated from our method can be effective in understanding the heterogeneity of lesions [START_REF] Lassmann | Heterogeneity of multiple sclerosis pathogenesis: Implications for diagnosis and therapy[END_REF] in MS patients and be used as potential biomarkers providing information on the MS lesion growth stage. As part of a future study we intend to validate the observations by applying the proposed estimation framework on more healthy controls and MS patient datasets.
Fig. 1. Mean of estimated weights with a 95% confidence interval for 100 signal averages for the synthetic data study.
Fig. 2. Estimation maps for a healthy volunteer.
Fig. 3. Estimation maps for MS patient data. Lesion-1 and lesion-2 are marked with red and blue arrows respectively.
17,974
[ "1010653", "14904", "1245" ]
[ "491318", "491318", "81450", "81450", "491318" ]
01744885
en
[ "sdu" ]
2024/03/05 22:32:07
2018
https://insu.hal.science/insu-01744885/file/Laurent_et_al-2018-Journal_of_Metamorphic_Geology.pdf
Valentin Laurent email: valentin.laurent@univ-orleans.fr Pierre Lanari Inès Naïr Romain Augier Abdeltif Lahfid Laurent Jolivet Exhumation of eclogite and blueschist (Cyclades, Greece): Pressure-temperature evolution determined by thermobarometry and garnet equilibrium modelling Keywords: Subduction zone, Exhumation of eclogite and blueschist, Garnet equilibrium modelling, XMAPTOOLS, Slab rollback High-pressure rocks such as eclogite and blueschist are metamorphic markers of paleosubduction zones, and their formation at high-pressure and low-temperature conditions is relatively well understood since it has been the focus of numerous petrological investigations in the past 40 years. The tectonic mechanisms controlling their exhumation back to the surface are, however, diverse, complex and still actively debated. Although the Cycladic Blueschist Unit (CBU, Greece) is among the best worldwide examples for the preservation of eclogite and blueschist, the proposed P-T evolution followed by this unit within the Hellenic subduction zone is quite different from one study to another, hindering our comprehension of exhumation processes. In this study, we present an extensive petrological dataset that permits refinement of the shape of the P-T trajectory for different subunits of the CBU on Syros. High-resolution quantitative compositional mapping has been applied to support the thermobarometric investigations, which involve semi-empirical thermobarometry, garnet equilibrium modelling and P-T isochemical phase diagrams. The thermodynamic models highlight the powerful use of reactive bulk compositions approximated from local bulk compositions. The results were also | INTRODUCTION Comprehension of the dynamic processes controlling the exhumation of high-pressure and low-temperature (HP-LT) metamorphic rocks during subduction is partly based on the reconstruction of detailed Pressure-Temperature-time-deformation (P-T-t-d) paths of tectonic units that have undergone a complete burial-exhumation cycle. The Cycladic Blueschist Unit (CBU), cropping out in the Cycladic archipelago (Greece), is one of the best worldwide examples of a fossilized subduction channel. Syros Island, located in the central part of the Cyclades, is mainly composed of the CBU and is famous for its excellent preservation of HP-LT metamorphic rocks such as eclogites and blueschists. Consequently, this island has been the focus of petrological, structural and geochronological studies aimed at constraining the tectonometamorphic evolution of the CBU subduction complex (e.g.
[START_REF] Cliff | Geochronological challenges posed by continuously developing tectonometamorphic systems: insights from Rb-Sr mica ages from the Cycladic Blueschist Belt, Syros (Greece)[END_REF][START_REF] Keiter | A new geological map of the Island of Syros (Aegean Sea, Greece): Implications for lithostratigraphy and structural history of the Cycladic Blueschist Unit[END_REF][START_REF] Keiter | Structural development of high-pressure metamorphic rocks on Syros island (Cyclades, Greece)[END_REF][START_REF] Lagos | High precision Lu-Hf geochronology of Eocene eclogite-facies rocks from Syros, Cyclades, Greece[END_REF][START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF][START_REF] Laurent | Extraneous argon in high-pressure metamorphic rocks: Distribution, origin and transport in the Cycladic Blueschist Unit (Greece)[END_REF][START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF][START_REF] Philippon | Tectonics of the Syros blueschists (Cyclades, Greece): From subduction to Aegean extension[END_REF][START_REF] Schumacher | Glaucophane-bearing marbles on Syros, Greece[END_REF]Soukis & Stöckli, 2013;[START_REF] Tomaschek | Zircons from Syros, Cyclades, Greece -recrystallization and mobilization of zircon during high-pressure metamorphism[END_REF]Trotet, Jolivet, & Vidal, 2001a;[START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]. However, after a decade of investigations, the tectonometamorphic evolution of the CBU on Syros is still actively debated, as attested by the different shapes of the P-T paths proposed in the literature from burial to exhumation (Figure 1). For example, [START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF] used a multi-equilibrium approach with the TWEEQU software [START_REF] Berman | Thermobarometry using multi-equilibrium calculations: a new technique, with petrological applications[END_REF] to characterize the P-T paths followed by eclogite-, blueschist- and greenschist-facies metamorphic rocks from different tectonic subunits on Syros. Their results led to the conclusion that, irrespective of the intensity of retrogression, the different units have all undergone the same peak metamorphic conditions around 20 kbar and 550 °C but have followed distinct exhumation evolutions after peak metamorphism (Figure 1a). In this scenario, the preservation of eclogitic parageneses at the top of the lithological pile of Syros is explained by cooling during exhumation. A stronger retrogression under greenschist-facies conditions is observed further down and is associated
| GEOLOGICAL CONTEXT | Aegean domain The Aegean domain corresponds to a collapsed segment of the Hellenic belt above a north-plunging subduction zone (Figure 2; [START_REF] Jolivet | Exhumation of deep crustal metamorphic rocks and crustal extension in arc and back-arc regions[END_REF][START_REF] Le Pichon | The Aegean Sea[END_REF][START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review[END_REF]).
The tectonometamorphic evolution of this domain is traditionally described in two main stages (Figure 2; [START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF][START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review[END_REF]). First, the late Cretaceous-Eocene convergence between the African and Eurasian plates led to the formation of the Hellenides-Taurides belt. During this episode, a series of oceanic and continental nappes entered the subduction zone and were thrust on top of each other in an overall HP-LT metamorphic context [START_REF] Bonneau | Subduction, collision et schistes bleus; l'exemple de l'Egee (Grece)[END_REF]. Then, inception and acceleration of African slab retreat from 35-30 Ma led to southward migration of the subduction front, crustal collapse of the belt in the back-arc domain and extensional reworking of the nappe stack [START_REF] Jolivet | Correlation of syn-orogenic tectonic and metamorphic events in the Cyclades, the Lycian nappes and the Menderes massif. Geodynamic implications[END_REF][START_REF] Jolivet | Aegean tectonics: Strain localisation, slab tearing and trench retreat[END_REF][START_REF] Le Pichon | The Aegean Sea[END_REF][START_REF] Lister | Metamorphic core complexes of Cordilleran type in the Cyclades, Aegean Sea, Greece[END_REF]. Extension in the back-arc domain was characterized by distributed N-S-oriented deformation over a wide region covering the entire Aegean Sea, part of western Anatolia and the Rhodope Massif in the north [START_REF] Gautier | Ductile crust exhumation and extensional detachments in the central Aegean (Cyclades and Evvia Islands)[END_REF][START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF][START_REF] Jolivet | Ductile extension and the formation of the Aegean Sea[END_REF][START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review[END_REF][START_REF] Urai | Alpine deformation on Naxos (Greece)[END_REF]. Extensional tectonics were also characterized by more localized deformation, with the development of large-scale detachments and metamorphic core complexes (MCCs) in which the exhumation of HP-LT units was completed in a low-pressure and high-temperature (LP-HT) environment (Figure 2; [START_REF] Gautier | Structure and kinematics of upper Cenozoic extensional detachment on Naxos and Paros (Cyclades Islands, Greece)[END_REF][START_REF] Jolivet | Ductile extension and the formation of the Aegean Sea[END_REF][START_REF] Lister | Metamorphic core complexes of Cordilleran type in the Cyclades, Aegean Sea, Greece[END_REF][START_REF] Urai | Alpine deformation on Naxos (Greece)[END_REF]). The Cycladic Archipelago is located in the centre of the Aegean domain and corresponds to the deepest exhumed parts of the Hellenides-Taurides belt (Figure 2).
In this archipelago, the Cycladic Blueschist Unit, belonging to the Pindos oceanic domain [START_REF] Bonneau | Correlation of the Hellenide nappes in the south-east Aegean and their tectonic reconstruction[END_REF][START_REF] Bonneau | Subduction, collision et schistes bleus; l'exemple de l'Egee (Grece)[END_REF], reached peak-pressure conditions during the formation of the Hellenides at 53-48 Ma (Figure 2; [START_REF] Lagos | High precision Lu-Hf geochronology of Eocene eclogite-facies rocks from Syros, Cyclades, Greece[END_REF][START_REF] Laurent | Extraneous argon in high-pressure metamorphic rocks: Distribution, origin and transport in the Cycladic Blueschist Unit (Greece)[END_REF][START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF][START_REF] Tomaschek | Zircons from Syros, Cyclades, Greece -recrystallization and mobilization of zircon during high-pressure metamorphism[END_REF][START_REF] Uunk | Understanding phengite argon closure[END_REF]). This HP-LT metamorphic unit was exhumed during the Eocene within the subduction channel, between a top-to-the-south thrust at the base and a top-to-the-east/northeast synorogenic detachment at the top, the Vari Detachment (Figure 2; [START_REF] Augier | Exhumation kinematics of the Cycladic Blueschists unit and backarc extension, insight from the Southern Cyclades (Sikinos and Folegandros Islands, Greece)[END_REF][START_REF] Brun | Exhumation of high-pressure rocks driven by slab rollback[END_REF][START_REF] Huet | Thrust or detachment? Exhumation processes in the Aegean: insight from a field study on Ios (Cyclades, Greece)[END_REF][START_REF] Jolivet | Subduction tectonics and exhumation of high-pressure metamorphic rocks in the Mediterranean orogens[END_REF][START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF][START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review[END_REF]). In the Cyclades, regional-scale detachments such as the North Cycladic Detachment System (NCDS), the Naxos-Paros Detachment (NPD) or the West Cycladic Detachment System (WCDS) have accommodated back-arc extension (Figure 2; [START_REF] Grasemann | Miocene bivergent crustal extension in the Aegean: Evidence from the western Cyclades (Greece)[END_REF]Jolivet et al., 2010). Eclogite and blueschist of the CBU are best preserved on Syros Island, where the synorogenic Vari Detachment is exposed [START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF]Soukis & Stöckli, 2013;Trotet et al., 2001a). | Geology of Syros Located in the Cycladic Archipelago (Figure 2), Syros is mainly composed of metasedimentary rocks and metabasite from the CBU. To the SE of the island, a large-scale klippe of Pelagonian affinity, locally referred to as the Vari Unit, is also exposed, limited by the Vari detachment (Figure 3; e.g. Soukis & Stöckli, 2013).
[START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF] have subdivided the stack of the CBU on Syros into three subunits, delimited by extensional top-to-the-east shear zones and characterized by their lithology and predominant metamorphic facies, which are, from bottom to top, as follows (Figure 3): (1) The Posidonia Subunit is composed of the structurally lower felsic gneiss of Komito overlain by albitic micaschists with rare intercalated boudins of metabasite and thin marble layers. This subunit has been pervasively overprinted in the greenschist facies, except in a few places where occurrences of blueschist- and eclogite-facies parageneses are locally observed. (2) The Chroussa Subunit is composed of a lithostratigraphic sequence of alternating micaschists, thick marble layers and metabasites. This subunit is in part overprinted in the greenschist facies, while other places show well-preserved eclogite- and blueschist-facies parageneses. (3) The Kampos Subunit is mainly composed of a tectonic mélange of metabasites wrapped in serpentinites and minor metasedimentary rocks. Within this subunit, eclogite- and blueschist-facies parageneses are spectacularly preserved, apparently escaping significant retrogression in the greenschist facies. Eclogite parageneses are thus recognized within all three subunits [START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF], implying that despite their entirely different degrees of retrogression, these three subunits have undergone similar HP-LT metamorphic and peak P-T conditions [START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]. Finally, structurally positioned above the CBU [START_REF] Keiter | Structural development of high-pressure metamorphic rocks on Syros island (Cyclades, Greece)[END_REF][START_REF] Keiter | A new geological map of the Island of Syros (Aegean Sea, Greece): Implications for lithostratigraphy and structural history of the Cycladic Blueschist Unit[END_REF]Soukis & Stöckli, 2013;Trotet et al., 2001a), the Vari Unit is formed by a greenschist mylonitic unit overlain by the orthogneiss of Vari intruding amphibolite-facies metabasite. High-pressure imprinting is not recognized and is apparently lacking, as in other outcrops of Pelagonian rocks [START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF]Soukis & Stöckli, 2013). | METHODS | EPMA compositional mapping X-ray maps have been acquired to measure the compositional variability of the studied minerals at the local scale (from half a mm to a few µm) and to investigate the relationships between compositional zoning and microstructures.
This technique has proved to be efficient in the characterization of compositional zoning related to P-T changes [START_REF] Kohn | Retrograde net transfer reaction insurance for pressure-temperature estimates[END_REF][START_REF] Lanari | Diachronous evolution of the alpine continental subduction wedge: evidence from P-T estimates in the Briançonnais Zone houillère (France-Western Alps)[END_REF][START_REF] Lanari | Deciphering high-pressure metamorphism in collisional context using microprobe mapping methods: Application to the Stak eclogitic massif (northwest Himalaya)[END_REF][START_REF] Loury | Late Paleozoic evolution of the South Tien Shan: Insights from P-T estimates and allanite geochronology on retrogressed eclogites (Chatkal range, Kyrgyzstan)[END_REF]. A JEOL JXA-8230 instrument was used at ISTerre (University of Grenoble-Alpes, France) to acquire X-ray compositional maps and the spot analyses required for the analytical standardization of the maps [START_REF] De Andrade | Quantification of electron microprobe compositional maps of rock thin sections: an optimized method and examples[END_REF]. Analytical conditions were 15 keV accelerating voltage and 100 nA specimen current for mapping, and 15 keV accelerating voltage and 12 nA specimen current for spot analyses. Compositional mapping was carried out with a dwell time of 200 ms and a step size (corresponding to the pixel size in the final images) of 2 μm. The X-ray compositional maps were processed using the program XMAPTOOLS 2.2.1 [START_REF] Lanari | XMapTools: A MATLAB-based program for electron microprobe X-ray image processing and geothermobarometry[END_REF][START_REF] Lanari | Quantitative compositional mapping of mineral phases by electron probe micro-analyser[END_REF]. The classification assigned each pixel to a mineral, which was then standardized using the high-resolution spot analyses as internal standards. Each intensity map was converted during this step to a map of oxide weight-percentage composition. Local bulk compositions were generated from the oxide weight-percentage maps by averaging pixels with a density correction (Lanari & Engi, 2017). Finally, the structural formula was calculated for each pixel of the mapped area on the basis of 12 anhydrous oxygens for garnet, 11 for white mica, 12.5 for epidote, 6 for clinopyroxene, 14 for chlorite and 8 for feldspar. | Bulk rock composition and reactive bulk composition The bulk rock composition of each sample was measured at the CRPG (Nancy, France), where major element concentrations were obtained by ICP-OES analyses. Results are reported in Table 1. In the literature, most P-T isochemical phase diagrams (or pseudosections) rely on the measured bulk rock composition as an approximation of the reactive (or effective) bulk composition, which is defined as the composition of the equilibration volume at a given stage of the rock history. However, it has been demonstrated that this approximation can lead to erroneous P-T estimates if the sample contains zoned minerals (see Lanari & Engi, 2017 for a recent review). In such cases it is critical to restrict the investigation to a more reduced scale where chemical equilibrium may have been established.
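As a schematic illustration of the local bulk composition step, the sketch below computes a density-weighted average of the standardized oxide maps over one petro-textural domain, in the spirit of the XMAPTOOLS procedure. The mineral codes and densities are hypothetical example values, and the actual implementation is more elaborate (Lanari & Engi, 2017).

import numpy as np

# Assumed densities (g/cm3) per classified mineral code -- illustrative values only
DENSITIES = {1: 4.0, 2: 2.8, 3: 3.3, 4: 3.4}   # e.g. garnet, phengite, omphacite, epidote

def local_bulk(oxide_maps, mineral_map, mask, densities=DENSITIES):
    # oxide_maps: dict oxide name -> 2-D array of wt% per pixel (standardized maps)
    # mineral_map: 2-D integer array of mineral codes from the pixel classification
    # mask: 2-D boolean array selecting one petro-textural domain (classified pixels only)
    rho = np.array([densities[m] for m in mineral_map[mask]], dtype=float)
    weights = rho / rho.sum()   # wt% values need mass (hence density) weighting
    return {ox: float(np.sum(wt_map[mask] * weights))
            for ox, wt_map in oxide_maps.items()}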
As the diffusion rates of the major cations in the intergranular medium are largely unknown, a qualitative approximation of the size of the equilibration volume is required. In this study, all the equilibrium models involving Gibbs free energy minimizations were computed using as reference both (1) the bulk rock composition measured by ICP-OES and (2) a local bulk composition derived from thin-section analysis and compositional maps. In both cases, the reference composition is used as the first reactive bulk composition, i.e. to estimate the P-T conditions of the first stage. Then, the reactive bulk composition evolves along the P-T trajectory because of garnet fractionation ([START_REF] Konrad-Schmolke | Garnet growth at high- and ultra-high pressure conditions and the effect of element fractionation on mineral modes and composition[END_REF][START_REF] Moynihan | An automated method for the calculation of P-T paths from garnet zoning, with application to metapelitic schist from the Kootenay Arc, British Columbia, Canada[END_REF][START_REF] Spear | Metamorphic fractional crystallization and internal metasomatism by diffusional homogenization of zoned garnets[END_REF]). The local bulk compositions were calculated based on the mode of the petro-textural domains (porphyroblasts with their inclusions, mineral matrix, domains showing late retrogression) observed within the thin section and their local bulk compositions determined with XMAPTOOLS; this is somewhat equivalent to what has been done in other studies ([START_REF] Loury | Late Paleozoic evolution of the South Tien Shan: Insights from P-T estimates and allanite geochronology on retrogressed eclogites (Chatkal range, Kyrgyzstan)[END_REF][START_REF] Marmo | Fractionation of bulk rock composition due to porphyroblast growth: effects on eclogite facies mineral equilibria, Pam Peninsula, New Caledonia[END_REF]Warren & Waters, 2006). Mineral abundances were estimated by thin-section optical image analysis. The composition of each local domain is evaluated from the compositional map using XMAPTOOLS and a density correction function. | Thermometric/thermobarometric methods | Raman Spectrometry of Carbonaceous Material (RSCM) geothermometry The RSCM method is an empirical geothermometric approach based on the quantification of the degree of organization of carbonaceous material (CM) reached during metamorphism (Beyssac, Goffé, Chopin, & Rouzaud, 2002a). Due to the irreversible character of graphitization, CM structure is not affected by retrogression reactions and allows the metamorphic peak temperature (RSCM-T) to be calculated, using an area ratio (R2 ratio) of different peaks of the Raman spectra (Beyssac et al., 2002a). The reader interested in the graphitization process during HP-LT metamorphism in terms of physico-chemical transformation of CM is referred to [START_REF] Beyssac | Graphitization in a highpressure, low temperature metamorphic gradient: A Raman micro-spectroscopy and HRTEM study[END_REF]. RSCM-T are determined with a calibration-attached accuracy of ± 50 °C in the range 330-640 °C (Beyssac et al., 2002a). Relative uncertainties on RSCM-T obtained from samples selected within the same section are, however, much smaller and bracketed to 10-15 °C (e.g.
[START_REF] Augier | Exhumation kinematics of the Cycladic Blueschists unit and backarc extension, insight from the Southern Cyclades (Sikinos and Folegandros Islands, Greece)[END_REF][START_REF] Gabalda | Thermal structure of a fossil subduction wedge in the Western Alps[END_REF][START_REF] Brovarone | Stacking and metamorphism of continuous segments of subducted lithosphere in a highpressure wedge: the example of Alpine Corsica (France)[END_REF]). Raman spectra were obtained using the Renishaw inVia Reflex system (BRGM-ISTO, Orléans). RSCM analyses were conducted on thin sections prepared from CM-rich metasedimentary rocks cut in the structural X-Z plane. To avoid defects in the CM related to thin-section preparation, analyses were all performed below the surface of the section by focusing the laser beam beneath dominantly quartz and calcite. Between 11 and 25 spectra were recorded in order to bring out the inner structural heterogeneity of CM within samples. RSCM-T generally show unimodal temperature distributions with quite low internal dispersion. The investigated samples are 19 CM-bearing marble and metapelite samples collected within the three subunits composing the CBU on Syros, ensuring the determination of the large-scale thermal structure of this island (Figure 3). | Thermodynamic modelling The P-T evolution of the metamorphic rocks was reconstructed based on forward thermodynamic models. As garnet growth strongly fractionates the reactive bulk composition, the program GRTMOD (Lanari et al., 2017) was used to retrieve the P-T information stored in garnet compositional zoning. GRTMOD searches for the optimal P-T conditions for the composition of each successive growth zone using a fractional crystallization model, with possible resorption of the previous zones (a complete description is provided in Lanari et al., 2017). As the maximum temperature reached by the sample is ~550 °C (see below), intragranular diffusion is not expected to have significantly altered the compositional zoning of the large porphyroblasts, which is interpreted as growth zoning [START_REF] Caddick | Preservation of garnet growth zoning and the duration of prograde metamorphism[END_REF]. As the compositions of co-existing phases such as omphacite and phengite are less sensitive to changes in the reactive bulk composition (e.g. [START_REF] Airaghi | Microstructural vs compositional preservation and pseudomorphic replacement of muscovite in deformed metapelites from the Longmen Shan (Sichuan, China)[END_REF] for phengite), the bulk rock composition or local bulk composition was used to generate the P-T diagrams showing mineral isopleths. The Gibbs free energy minimizations and the isochemical phase diagrams were computed using the program Theriak-Domino ([START_REF] Capitani | The computation of chemical equilibrium in complex systems containing non-ideal solutions[END_REF][START_REF] De Capitani | The computation of equilibrium assemblage diagrams with Theriak/Domino software[END_REF]) in the chemical system (±MnO)-Na2O-CaO-K2O-FeO-Fe2O3-MgO-Al2O3-SiO2-TiO2-H2O. The internally consistent thermodynamic database JUN92.bs ([START_REF] Berman | Internally-consistent thermodynamic data for minerals in the system Na2O-K2O-CaO-MgO-FeO-Fe2O3-Al2O3-SiO2-TiO2-H2O-CO2[END_REF] and subsequent updates) was used for modelling.
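The garnet fractionation step used by GRTMOD can be pictured as a simple mass balance in which the composition of the newly crystallized garnet is removed from the reactive bulk before the next minimization. The sketch below is a minimal illustration under that assumption; GRTMOD itself works in moles within Theriak-Domino and additionally handles resorption of earlier growth zones.

OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "Fe2O3", "MgO", "MnO", "CaO", "Na2O", "K2O"]

def fractionate(bulk_wt, garnet_wt, grt_mass_fraction):
    # Remove the newly crystallized garnet from the reactive bulk (mass balance).
    # bulk_wt and garnet_wt are dicts of oxide wt%; grt_mass_fraction is the mass
    # fraction of rock crystallized as garnet at this stage (vol% from the models
    # would first need a density conversion).
    residual = {o: bulk_wt[o] - grt_mass_fraction * garnet_wt.get(o, 0.0)
                for o in OXIDES}
    residual = {o: max(v, 0.0) for o, v in residual.items()}   # no negative oxides
    total = sum(residual.values())
    return {o: 100.0 * v / total for o, v in residual.items()}  # renormalize to 100 wt%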
A comparison of the results obtained with other thermodynamic databases is shown in Figure S1. Four models were systematically derived for each sample, using the bulk rock composition (WR) and the local bulk compositions (LB), in both cases either including or excluding MnO. | PETROGRAPHY AND MINERAL CHEMISTRY A petrographic description of the analysed samples based on the results of compositional mapping is presented here. Four samples, characteristic of the entire CBU from top to base, were selected for this study (from a collection of 138). Two were collected in the Kampos Subunit to determine burial and peak-pressure conditions. Additionally, one sample was selected in the Chroussa Subunit to characterize the transition from blueschist- to greenschist-facies P-T conditions, and one sample preserving HP parageneses was collected from the Posidonia Subunit to compare the maximum pressure conditions recorded in this subunit and in the Kampos Subunit. Typical textures and mineral assemblages of the four samples are shown in Figure 4. All mineral abbreviations are after Whitney and Evans (2010). | Sample SY1401 (Kampos Subunit) | Outcrop and sample description The studied outcrop is located south of Syros airport and shows a 50 m-long retrogression gradient from well-preserved HP rocks, including very fresh eclogite, to strongly overprinted HP rocks in the greenschist facies (Figures 3 and 5). This apparent metamorphic transition is accompanied by an increasing gradient of deformation toward the more retrogressed rocks (Figure 5a). This outcrop corresponds to the contact zone between the Kampos and Chroussa subunits, which is interpreted as an extensional top-to-the-east shear zone, namely the Kastri Shear Zone (see Laurent et al., 2016 for details). Well-preserved eclogite shows weak deformation and still exhibits the original structure of a pillow-lava breccia (Figure 5b-d). While only a slight foliation is observed, a lineation oriented N90 °E is marked by intense stretching (Figure 5c). Markers of asymmetric ductile deformation, such as shear bands, are rare and visible only at the microscopic scale, showing an incipient top-to-the-east sense of shear (Figure 6a). The eclogitic metapillow-lava breccia studied here is typical of rocks from the Kampos Subunit with only weak subsequent overprinting. In this sense, this sample appears as one of the best candidates to determine peak-pressure conditions and possibly prograde P-T conditions of the CBU on Syros. The metamorphic assemblage of this sample includes garnet, white mica, epidote, clinopyroxene and less abundant quartz, albite, apatite, rutile and sphene (Figures 4 and 6). Garnet occurs as 0.5-1.5 mm porphyroblasts and contains inclusions of epidote, white mica, clinopyroxene, albite and quartz (Figures 4 and 6). | Petrography Chemical composition mapping of 400,000 pixels covers an area of 1600 × 1000 μm and was acquired around a 1.3 mm diameter garnet showing a large diversity of mineral inclusions (Figure 6). This garnet is enclosed within an eclogitic matrix and surrounded by a pressure shadow, dividing the X-ray map into three distinct petro-textural domains (Figure 6d). The eclogitic matrix is mainly composed of omphacite, epidote and white mica with less abundant quartz, apatite and sphene (Figure 6b,c). The pressure shadow on the garnet is mainly composed of white mica associated with albite (Figure 6).
Compositional zoning is observed in garnet porphyroblasts with, from core to rim, a decrease of almandine and spessartine (XAlm 0.65→0.57 and XSps 0.045→0.02) and an increase of grossular and pyrope (Figure 7; XGrs 0.24→0.30 and XPrp 0.07→0.11). The rim composition of garnet is also affected by the presence of phengite inclusions, with relatively higher grossular, pyrope and spessartine contents that evidence garnet re-equilibration around them (Figure 7). The composition of white mica shows a more complex pattern, with four distinct compositional groups based on the Si and Na contents (Figure 7). Within the omphacitic matrix and in the pressure shadow, phengite shows a strong increase in celadonite from core to rim and at the contact with the garnet porphyroblast (Figure 7; phengite 1: XCel = 0.35-0.45 and phengite 2: XCel = 0.45-0.65). Phengite 1 is also observed as an inclusion in the garnet rim. In contrast, Si-poor phengite 3 and mixed phengite-paragonite compositions are only observed as inclusions in garnet cores (Figure 7). In the eclogitic matrix, clinopyroxene corresponds to omphacite, with some grains showing a compositional zoning marked by an increase of the jadeite content from core to rim (Al_M in Figure 7; XJd 0.36→0.46). In contrast, inclusions of clinopyroxene observed in the core of the garnet have lower jadeite contents (XJd = 0.30-0.35), similar to the cores of some grains of the matrix (Figure 7). Two distinct compositions of epidote are observed in the map, with higher zoisite contents for the grains included in the garnet (XZo = 0.60-0.75) than for those of the omphacitic matrix (XZo = 0.30-0.45). Plagioclase has a homogeneous and nearly pure albitic composition (XAb = 0.99) and is found both included in the core of the garnet and in the pressure shadow, where it has grown into the cleavage of phengite and around a few grains of remnant omphacite and quartz. Finally, less abundant rutile is observed in some garnet rims, while the accessory minerals apatite and sphene are only observed in the omphacitic matrix (Figure 6). | Petrographic interpretations The compositional zoning preserved in garnet, phengite and omphacite can be used in a qualitative way to reconstruct the P-T history of this sample. Several lines of evidence suggest the prograde growth of garnet under increasing pressure conditions: (1) plagioclase inclusions are only observed in the core of garnet, suggesting that the transformation of plagioclase into omphacite was not complete at the initiation of garnet growth; (2) omphacite and phengite inclusions in the core of garnet have lower XJd and XCel contents, respectively, than inclusions in the rim, which again suggests an increase in pressure during garnet growth; (3) the compositional zoning of both omphacite and phengite grains in the matrix shows a similar trend (e.g. increase in XJd and XCel, respectively) and also supports the record of a prograde trajectory. The only evidence of retrogression observed in this sample is the presence of albite in the pressure shadow. Albite occurs only in omphacite-poor domains and is interpreted as a late feature, formed at the expense of omphacite and quartz. This interpretation is supported by observations of remnant quartz and omphacite grains in the pressure shadow that are systematically surrounded by newly grown albite (Figure 6).
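The endmember fractions quoted above (XAlm, XPrp, XGrs, XSps) follow the usual convention of normalizing the divalent cations of the 12-oxygen garnet formula; a minimal sketch, with illustrative cation numbers broadly consistent with the SY1401 core composition:

def garnet_endmembers(fe2, mg, ca, mn):
    # Almandine/pyrope/grossular/spessartine fractions from divalent cations
    # per 12-oxygen garnet formula (Fe3+ is ignored in this simple sketch).
    total = fe2 + mg + ca + mn
    return {"XAlm": fe2 / total,
            "XPrp": mg / total,
            "XGrs": ca / total,
            "XSps": mn / total}

# Illustrative cation numbers only (not measured values from Table 3)
print(garnet_endmembers(fe2=1.95, mg=0.21, ca=0.72, mn=0.13))
# -> roughly XAlm 0.65, XPrp 0.07, XGrs 0.24, XSps 0.04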
| Sample SY1460 (Kampos Subunit) | Outcrop and sample description This sample was collected in the Kampos Subunit on the south coast of Syros, and is structurally positioned just below the Vari Detachment (Figure 3; [START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF]Soukis & Stöckli, 2013;Trotet et al., 2001a). This outcrop is composed of metabasite preserving eclogite- and blueschist-facies parageneses that are ductilely sheared with syn-blueschist top-to-the-east kinematics (Laurent et al., 2016). This sample corresponds to a foliated and stretched boudin of eclogite hosted in blueschist-facies rocks. Textural analysis shows that a matrix composed of omphacite and white mica with minor calcite, sphene and epidote forms ~95 vol% of the sample and encloses a few large garnet (0.5-1 mm) and glaucophane (5-10 mm) crystals (Figures 4 and 8). | Petrography The X-ray map of 204,750 pixels covering an area of 910 × 900 μm was acquired around a 0.9 mm diameter almandine-type garnet showing inclusions of clinopyroxene and rutile and a core-to-rim zonation (Figure 9). The almandine content decreases toward the rim (XAlm 0.66→0.59), while the grossular and pyrope contents show a general increase (XGrs 0.24→0.28 and XPrp 0.06→0.09). The spessartine content displays a more complex, oscillatory zoning pattern (XSps 0.042→0.058; Figure 9). Two compositional groups of white mica are observed in this sample (phengite 1 and 2, Figure 9). Phengite 1 is characterized by relatively high celadonite contents (XCel = 0.60-0.73) and occurs as fine grain cores surrounded by phengite 2 rims, which have lower celadonite contents (XCel = 0.45-0.53). Most of the matrix grains show the phengite 2 composition (Figure 9). Omphacite can be divided into two compositional groups. Omphacite 1 has lower jadeite and higher diopside contents (XJd = 0.36-0.40, XDi = 0.35-0.38) than omphacite 2 (XJd = 0.46-0.54, XDi = 0.27-0.32), and is observed in the cores of omphacite 2 grains in both the matrix and the garnet (Figure 9). The composition of glaucophane is homogeneous (XGln = 0.95-0.99) and only a few epidote grains are observed in the matrix, with a few allanitic cores (Figure 8). | Petrographic interpretations Similar to sample SY1401, prograde P-T conditions are suggested by the compositional zoning preserved in garnet and omphacite. However, in this case the compositional zoning of phengite shows a different record, with lower XCel contents at the rims of the phengite grains. This feature suggests later re-equilibration of phengite grains in the mineral matrix under lower pressure conditions and possibly higher temperatures. Oscillatory zoning in garnet may reflect variations in the garnet-forming reaction or transport-controlled growth [START_REF] Kohn | Geochemical Zoning in Metamorphic Minerals A2[END_REF]. It is an open question whether the assumption of chemical equilibrium used in the thermodynamic models is valid for this specific case. It is important to note here that only Mn is affected by oscillatory zoning, and to a relatively low extent (between XSps 0.045 and XSps 0.06) that would not necessarily be detected without using quantitative maps. If the equilibrium volume for Mn was smaller than the equilibration volume of the other major elements when garnet grew, then the Mn component needs to be excluded from the thermodynamic computations.
| Sample SY1407 (Chroussa Subunit) | Outcrop and sample description The outcrop is located in the southeast of the island within the Chroussa Subunit (Figure 3). It is mainly composed of albite-epidote-blueschist-facies (AEBS metamorphic facies of [START_REF] Evans | Phase relations of epidote-blueschists[END_REF]) metasedimentary rock, hosting metabasite boudins in which eclogite and blueschist parageneses are preserved. This sample corresponds to an AEBS-facies metasedimentary rock showing a clear lineation oriented N70 °E marked by the stretching of glaucophane and albite. Textural analysis shows that this sample is composed of a matrix of quartz, white mica, glaucophane, albite and epidote with minor chlorite, rutile and apatite, in which large garnet occurs (0.5-1.5 mm; Figures 4 and 10). | Petrography The compositional mapping of 288,750 pixels covers an area of 1100 × 1050 μm and was acquired around a 1.5 mm diameter garnet comprising mainly quartz inclusions (Figure 10). The composition of the garnet porphyroblast changes from core to rim (Figure 11). The core shows comparatively low almandine, spessartine and pyrope (XAlm = 0.61-0.62, XSps = 0.075-0.085, XPrp = 0.038-0.04) and high grossular (XGrs = 0.24-0.26). A decrease of both almandine and spessartine is observed from mantle to rim (XAlm 0.66→0.59, XSps 0.12→0.04), coupled with an increase of pyrope and grossular (XPrp 0.046→0.061, XGrs 0.17→0.28). The composition of white mica corresponds for the most part to paragonite, with only a few phengite grains observed as inclusions in garnet and as single grains in the mineral matrix (Figure 11). Glaucophane grains, observed in the matrix, show textural equilibrium relationships with albite and epidote (Figure 10). | Petrographic interpretations In this sample, compositional zoning is only observed in garnet, but with a more complex pattern than in samples SY1401 and SY1460. The texture and compositional zoning of the garnet core differ from those of the mantle and rim, and the core is interpreted as a relict preserved either from the protolith or from a first growth stage followed by intense resorption. As rocks of the CBU are all characterized by a single metamorphic history, the garnet core is more likely the result of a first stage of garnet growth followed by intense garnet resorption before a new stage of garnet growth. The lack of index mineral inclusions in the core (most of the inclusions are quartz) does not allow the coexisting assemblage to be determined. The presence of albite inclusions in the garnet rim suggests that the second growth episode took place at moderate pressure conditions, possibly during exhumation. This is consistent with fine-grained phengite inclusions in the garnet rim that show intermediate Si contents, and with the presence of glaucophane, epidote and albite in textural equilibrium in the matrix. Similar features have been interpreted as strong evidence of retrogression in the albite-epidote-blueschist facies [START_REF] Evans | Phase relations of epidote-blueschists[END_REF]. Following these observations, the compositional zoning of the garnet in SY1407 may be interpreted as a first event of garnet growth, followed by an intense stage of resorption and a second growth episode that took place after partial exhumation of the rock.
| Sample SY1418 (Posidonia Subunit) | Outcrop and sample description This sample was collected in the SW of Syros, in the Posidonia Subunit, where a few metabasite lenses occur within the gneiss of Komito (Figure 3). Metabasite appears pervasively retrogressed under greenschist-facies conditions. However, [START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF] have shown that within metre-scale metabasite boudins, HP relicts are preserved, with the occurrence of eclogite- and blueschist-facies parageneses. Sample SY1418 corresponds to one of these preserved HP metabasite occurrences of the Posidonia Subunit and has been selected to estimate the maximum P-T conditions undergone by this subunit. Textural analysis shows that this sample is formed by a matrix of fine-grained glaucophane and omphacite with minor rutile, Fe-oxide, white mica, epidote and albite, hosting porphyroblasts of garnet, glaucophane, quartz, calcite and chlorite (Figures 4 and 12). Pressure shadows are composed of quartz, phengite, epidote and glaucophane with minor albite and represent 10 vol% of the sample (Figure 12). | Petrography The compositional mapping of 160,801 pixels covers an area of 802 × 802 μm and was acquired around a 0.6 mm diameter garnet showing numerous inclusions of quartz, epidote and Fe-oxide (Figure 12). This garnet is compositionally zoned, with increasing almandine and pyrope (XAlm 0.62→0.70, XPrp 0.038→0.09) and decreasing spessartine and grossular toward the rim (XSps 0.08→0.004, XGrs 0.24→0.17; Figure 13). The composition of white mica is homogeneous and corresponds to phengite (XCel = 0.40). The minor presence of omphacite is also observed as fine grains in the main glaucophane matrix (Figure 12). | Petrographic interpretations Omphacite compositions suggest that this sample reached eclogite-facies conditions. It then recorded a strong retrogression, as attested by the practically complete destabilization of omphacite (+ quartz) into albite. The chlorite-albite assemblage is characteristic of low-pressure conditions. Compositional zoning is again restricted to garnet and does not provide a clear picture of the recorded P-T history. However, the high proportion of garnet in this sample (~35 vol%; Figure 12) suggests crystallization under HP conditions without significant resorption. | THERMOBAROMETRY RESULTS | RSCM geothermometry RSCM geothermometry has been applied to 19 samples distributed all over the island (Figure 3). Detailed results, including R2 ratios, number of spectra, RSCM-T and standard deviations, are presented in Table 2. In addition, measured RSCM temperatures are all reported on the metamorphic map of Syros (Figure 3). Maximum temperatures recorded in the three subunits of Syros yielded very similar RSCM-T. These temperatures are compared below with peak temperature estimates obtained using thermodynamic modelling. Results show that RSCM-T range from 489 to 564 °C in the Posidonia Subunit, 485 to 581 °C in the Chroussa Subunit and 510 to 561 °C in the Kampos Subunit. On average, RSCM-T are equivalent within error in each subunit, with 537 ± 20 °C in the Posidonia Subunit, 540 ± 19 °C in the Chroussa Subunit and 530 ± 17 °C in the Kampos Subunit (Table 2).
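For reference, the R2-to-temperature conversion behind these values is the linear calibration of Beyssac et al. (2002a), applied spectrum by spectrum and then averaged per sample; a minimal sketch with illustrative R2 values:

import numpy as np

def rscm_temperature(r2):
    # Beyssac et al. (2002a) calibration, valid for ~330-640 degC with an
    # absolute (calibration-attached) accuracy of +/- 50 degC
    return -445.0 * r2 + 641.0

# R2 ratios measured on individual spectra of one sample (illustrative values)
r2_values = np.array([0.22, 0.24, 0.23, 0.25, 0.21])
temps = rscm_temperature(r2_values)
print(f"RSCM-T = {temps.mean():.0f} +/- {temps.std(ddof=1):.0f} degC")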
Moreover, the maximum RSCM-T measured in each subunit are also equivalent within error, showing that these three subunits have experienced quite similar peak temperatures during their metamorphic history. However, the RSCM-T measured within a single subunit vary by up to 100 °C, as observed in the Posidonia and Chroussa subunits (Table 2). There is no apparent correlation between the internal RSCM-T variations and the structural positions of the samples. In some cases, varying RSCM-T were estimated in samples collected at the same structural level, within a single and consistent lithology (Figure 3). The significance of these partial discrepancies in the temperatures is beyond the scope of this study. Such variations are generally explained by the presence of an inherited component of CM (Beyssac et al., 2002a) or by the concentration of structural defects caused by pervasive deformation (see [START_REF] Aoya | Extending the applicability of the Raman carbonaceous-material geothermometer using data from contact metamorphic rocks[END_REF] for further discussion). | Empirical thermobarometry The garnet-omphacite thermometer of [START_REF] Ravna | The garnet-clinopyroxene Fe2+-Mg geothermometer: an updated calibration[END_REF] was used to estimate the equilibration temperature of garnet and co-existing omphacite, whereas the pressure information was extracted from the assemblage garnet-omphacite-phengite (Waters & Martin, 1996). The P-T maps have been calculated in XMAPTOOLS for samples SY1401 and SY1460, following the strategy described in [START_REF] Lanari | Deciphering high-pressure metamorphism in collisional context using microprobe mapping methods: Application to the Stak eclogitic massif (northwest Himalaya)[END_REF][START_REF] Lanari | XMapTools: A MATLAB-based program for electron microprobe X-ray image processing and geothermobarometry[END_REF]. In sample SY1401, petrographic observations suggest that garnet records prograde to peak-pressure conditions. In order to derive the maximum pressure of garnet growth, the rim composition of garnet and the phengite 1 composition of white mica were used (Figure 7, Tables 3 and 4), as our petrographic observations suggest that they grew in equilibrium during the same P-T stage. Results predict peak P-T conditions for garnet growth between 22-24 kbar and 500-560 °C (Figure 14). In sample SY1460, omphacite and phengite are also observed in textural equilibrium with the rim composition of the garnet (Figure 9). The phengite 2 composition has been used for thermobarometry, and the estimated P-T conditions range from 19 to 21 kbar and 500 to 560 °C for the growth of the garnet rim (Figure 14). As a preliminary conclusion, the results of empirical thermobarometry using compositional maps suggest maximum pressure and temperature for garnet growth in the CBU of Syros at 22 ± 2 kbar and 530 ± 30 °C. Note that the assumption of chemical equilibrium between the different groups of garnet, phengite and omphacite is based on the petrographic observations and is tested in the following using Gibbs free energy minimization.
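The exchange-thermometry step above rests on Fe2+-Mg partitioning between garnet and clinopyroxene. A minimal sketch of the underlying logic is given below; the conversion to temperature is left as a generic placeholder because the full Ravna (2000) calibration includes composition- and pressure-dependent polynomial terms that should be taken directly from that publication.

import math

def kd_grt_cpx(fe2_grt, mg_grt, fe2_cpx, mg_cpx):
    # Fe2+/Mg exchange coefficient between garnet and clinopyroxene
    # (cations per formula unit, Fe3+ already separated out)
    return (fe2_grt / mg_grt) / (fe2_cpx / mg_cpx)

def exchange_temperature(kd, a, b):
    # Generic exchange-thermometer form T(K) = a / (ln Kd + b); a and b are
    # placeholders for the composition- and pressure-dependent terms of the
    # Ravna (2000) calibration, not actual published coefficients.
    return a / (math.log(kd) + b)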
| Garnet equilibrium modelling GRTMOD computations were performed on samples SY1401, SY1460, SY1407 and SY1418 using several compositions (WR and LB, Table 1), including or excluding MnO, and using the JUN92.bs thermodynamic database (see the methods, section 3.3.2). The results are reported in P-T diagrams (Figure 15) with error bars that represent the uncertainty in the optimal P-T conditions resulting from the uncertainties in the chemical analyses, taken as the spacing between the garnet isopleths (see Lanari et al., 2017). The residual values (C_o in the original publication, Lanari et al., 2017) represent the quality of the isopleth intersection, i.e. the agreement between the modelled and observed garnet compositions. A value of C_o < 0.04 ensures a good fit of the model, and the fit is excellent where C_o < 0.01. Isopleths of Si content in phengite and Al content in omphacite were calculated using the local bulk composition and are shown in the P-T diagrams of samples SY1401 and SY1460 (Figure 15). In sample SY1401, predictions fall in a narrow P-T range using the LB, whether or not MnO is considered (see the discussion below, section 6.1.2). A first event of garnet growth is constrained at 16-18 kbar and 450-500 °C (core) and a second event is suggested at 22-24 kbar and 550-580 °C using the LB composition (rim, Figure 15). For this model, 20 vol% of garnet is produced (core 2 vol% and rim 18 vol%; Figure 15). In sample SY1460, garnet P-T predictions can only be made using the LB composition, and the residual values are systematically high for all the models (>0.04), suggesting either a smaller equilibrium volume or kinetic phenomena (see discussion). In this model ~9 vol% of garnet is produced (2 vol% core and 7 vol% rim). In sample SY1407, the best-defined models were obtained using the LB composition and suggest garnet growth for core, mantle and rim at lower pressure conditions of 10-12 kbar at 500-560 °C. A less well defined but still acceptable model obtained using the WR and including MnO suggests garnet growth for core and mantle at 18-20 kbar and 470-500 °C (dark blue squares in Figure 15). In this last case, 2.5 vol% of garnet is produced (core ~0.13 vol%, mantle 0.57 vol% and rim 1.85 vol%). Finally, in sample SY1418, the only garnet P-T predictions characterized by low residuals suggest homogeneous P-T conditions for core and rim around 18 ± 2 kbar and 450 ± 50 °C (Figure 15). In this case, 12 vol% of garnet is produced (core 0.12 vol% and rim 11.8 vol%). The models including MnO, with a garnet rim predicted at higher pressure, show a much higher garnet mode of 33 vol% (core 12 vol% and rim 21 vol%, Figure 15). The main result of these models is that three main garnet growth events are partially recorded in the four investigated samples (Figure 15). First, a prograde pulse of garnet growth is predicted from the core garnet compositions of SY1401 and SY1418 at 17 ± 2 kbar and 450 ± 50 °C (Figure 15). Then, the maximum pressure conditions recorded by garnet are estimated at 22-24 kbar and 550-580 °C from the rim garnet composition of SY1401 (circles in Figure 15). While these estimates are only retrieved in one sample, the residual values show a good fit of the model, and these conditions are consistent with the maximum pressure conditions retrieved from the Si content in phengite for SY1401 and from the Al content in omphacite for samples SY1401 and SY1460 (Figures 7, 9 and 15). Finally, a last event of garnet growth is recorded in SY1407. This retrograde garnet growth event is well constrained between 9-12 kbar and 500-570 °C, from the core, mantle and rim compositions of the garnet (Figure 15).
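Conceptually, the optimal conditions for each garnet growth zone are found by scanning P-T space and minimizing a misfit between observed and modelled endmember fractions, analogous to the C_o residual reported by GRTMOD. In the toy sketch below, predict_garnet is a hypothetical stand-in for a Gibbs free energy minimization with Theriak-Domino on the current reactive bulk composition:

ENDMEMBERS = ["XAlm", "XPrp", "XGrs", "XSps"]

def residual(observed, modelled):
    # Quadratic misfit between observed and modelled garnet compositions
    return sum((observed[e] - modelled[e]) ** 2 for e in ENDMEMBERS)

def best_pt(observed, predict_garnet, p_grid, t_grid):
    # Grid search over P (kbar) and T (degC) for the smallest residual.
    # predict_garnet(p, t) must return endmember fractions, or None if garnet
    # is not stable at (p, t) for the current reactive bulk composition.
    best = (None, None, float("inf"))
    for p in p_grid:
        for t in t_grid:
            modelled = predict_garnet(p, t)
            if modelled is None:
                continue
            c = residual(observed, modelled)
            if c < best[2]:
                best = (p, t, c)
    return best   # (P, T, misfit) for this growth zone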
| P-T isochemical phase diagrams Results obtained using garnet equilibrium modelling were compared with P-T isochemical phase diagrams to test mutual consistency between predicted and observed mineral assemblages in sample SY1401 (Figure 16). Two phase diagrams using the LB and WR compositions were calculated at P-T conditions encompassing the first stage of core garnet crystallization determined by equilibrium modelling at 16-18 kbar and 450-500 °C (Figures 15 and 16). Two additional isochemical phase diagrams were then calculated for P-T conditions surrounding the second stage of rim garnet crystallization, constrained at 22-24 kbar and 550-580 °C (Figure 16). To calculate these diagrams, the LB and WR compositions were modified to account for fractionation of the garnet that crystallized during stage 1. For the first stage of garnet growth (core), the predicted assemblages are consistent with the minerals trapped as inclusions in garnet cores for both the LB and WR compositions (Figure 16). Phengite, paragonite, omphacite and epidote are all predicted to be in equilibrium and observed in the garnet core (Figures 6 and 7), while rutile and quartz are not observed in the garnet porphyroblast that was mapped but occur in other garnet cores of the same sample. Only chlorite is predicted in equilibrium but has not been clearly identified within the garnet core, consistent with a garnet-forming reaction involving chlorite. Furthermore, on the compositional map of SY1401, albite is observed within the core of the garnet but is not predicted in equilibrium at these P-T conditions. Albite was not completely destabilized when the garnet nucleated, suggesting significant P-T overstepping with incomplete re-equilibration of the mineral matrix. If the matrix was only partially re-equilibrated at the time the garnet nucleated, the inclusion phases in the garnet core do not reflect the equilibrium assemblage from which this garnet nucleated. In this scenario, the deviation from an equilibrium model cannot be approximated using an overstepping (OS) model (e.g. [START_REF] Castro | Reaction overstepping and re-evaluation of peak P-T conditions of the blueschist unit on Sifnos, Greece: implications for the Cyclades subduction zone[END_REF][START_REF] Spear | Overstepping the garnet isograd: a comparison of QuiG barometry and thermodynamic modeling[END_REF]). In this study, the P-T conditions of garnet nucleation for three samples showing different matrix assemblages fall within a narrow P-T window of <30 °C and <2 kbar. In the OS model, the garnet composition is fully controlled by the Gibbs free energy of the matrix assemblage (assuming equilibrium). Different matrix assemblages, for different bulk rock compositions, are likely to produce different degrees of overstepping (as reported by [START_REF] Castro | Reaction overstepping and re-evaluation of peak P-T conditions of the blueschist unit on Sifnos, Greece: implications for the Cyclades subduction zone[END_REF]), which are not observed in this study. For the second stage of garnet growth (rim), the two models produce different results, as shown by the GRTMOD results (Figures 15 and 16). For the modified LB composition, the results show an excellent correlation between the predicted equilibrium assemblage and the petrographic observations, with phengite, omphacite, epidote and quartz all observed as inclusions near the garnet rim and/or within the matrix (Figures 6 and 7).
Rutile is not present in the mapped area but is commonly observed as an inclusion in the rims of other porphyroblasts of the same sample. Additionally, results obtained using the fractionated WR composition confirm that the calculated pressure prediction for the growth episode of the garnet rim is too high, as coesite is predicted above 25 kbar and is not identified within the CBU (Figure 16). | DISCUSSION AND CONCLUSIONS While Syros Island is a world-class reference for the preservation of HP-LT metamorphic rocks such as eclogite and blueschist, the P-T path of the CBU on this island is not completely understood (Figure 1), hindering our global understanding of exhumation processes and deep subduction dynamics. This study complements results obtained in previous thermobarometric studies conducted in the CBU [START_REF] Ashley | Geothermobarometric history of subduction recorded by quartz inclusions in garnet[END_REF][START_REF] Dragovic | Pulsed dehydration and garnet growth during subduction revealed by zoned garnet geochronology and thermodynamic modeling, Sifnos, Greece[END_REF][START_REF] Dragovic | Using garnet to constrain the duration and rate of water-releasing metamorphic reactions during subduction: An example from Sifnos, Greece[END_REF][START_REF] Groppo | Glaucophane schists and associated rocks from Sifnos (Cyclades, Greece): New constraints on the P-T evolution from oxidized systems[END_REF][START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF][START_REF] Schumacher | Glaucophane-bearing marbles on Syros, Greece[END_REF][START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]. Our new P-T estimates for the CBU on Syros highlight a two-step exhumation history in the subduction channel, resembling those previously retrieved on Tinos and Andros islands [START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF] but not described on Syros thus far. | P-T path and comparison of the methods | P-T path of the CBU on Syros In this study, different independent thermobarometric methods, such as RSCM peak temperatures, empirical thermobarometry and thermodynamic modelling, have been used to reconstruct the P-T path of the CBU on Syros (Figure 17). The most important result of our study is the good agreement between the different methods used on different samples, which allows the P-T evolution of the CBU to be accurately determined (Figure 17). All methods used in this study provide results in line with the petrological observations. Peak metamorphic temperatures measured with the RSCM method are in good agreement with maximum temperatures estimated by both empirical and garnet thermobarometry at 530 ± 30 °C (Figure 17).
Additionally, the minimum peak metamorphic P-T conditions obtained with empirical thermobarometry (22-24 kbar for 500-560 °C and 19-21 kbar for 500-560 °C) fall within the range of maximum P-T conditions calculated with GRTMOD (22-24 kbar and 550-580 °C, Figure 17). The retrograde path is constrained through an event of garnet growth occurring at 10-12 kbar and 500-570 °C, which suggests a period of relatively isobaric heating. This late event is only recorded in retrogressed samples from the basal and mid parts of the CBU on Syros. RSCM-T estimates also provide an upper limit to the maximum amount of heating during this period. The P-T trajectory is consequently well constrained, at least for the prograde to peak part and the retrograde event (Figure 17). | Bulk composition and kinetic sensitivity in equilibrium models As the isochemical equilibrium phase diagrams and isopleth positions critically depend on the bulk composition used for modelling, WR compositions and LB compositions extracted from compositional maps were measured. Both compositions were systematically tested, with and without MnO (Figure 15). While MnO has a direct impact on the modal abundance of garnet produced, the models computed either with WR or LB and assuming a MnO-free system generally have lower (or at least similar) residual values (Figure 15). A significant amount of MnO is measured in all garnet cores and in the mantle composition of the SY1407 garnet (XSps = 0.06-0.11; Table 3). Our results show relative mutual consistency between WR and LB compositions without MnO (light blue colour in Figure 15), which yield close predictions in samples SY1401, SY1407 and SY1418. When considering MnO, the results show more variation between the garnet P-T equilibria predicted using WR or LB compositions, as shown in samples SY1407 and SY1418, demonstrating that considering MnO in the models is critical in this study. Note that a significantly higher fraction of garnet is predicted for models including MnO in sample SY1418 (from 12 to 33 vol%). To compare the observed and predicted modes of garnet, it is necessary to include MnO, as garnet is the main carrier of Mn in these rocks (see also [START_REF] Tinkham | Metapelite phase equilibria modeling in MnNCKFMASH: the effect of variable Al2O3 and MgO/(MgO+ FeO) on mineral stability[END_REF]). Moreover, this work highlights the powerful use of LB compositions, calculated from the compositions of local assemblages identified on compositional maps, as an approximation of reactive bulk compositions for thermodynamic modelling. Because of the local heterogeneities of metamorphic rocks and the compositional zoning of the rock-forming minerals, the measured whole-rock composition is not always suitable for modelling (e.g. [START_REF] Lanari | Deciphering high-pressure metamorphism in collisional context using microprobe mapping methods: Application to the Stak eclogitic massif (northwest Himalaya)[END_REF][START_REF] Loury | Late Paleozoic evolution of the South Tien Shan: Insights from P-T estimates and allanite geochronology on retrogressed eclogites (Chatkal range, Kyrgyzstan)[END_REF]Warren & Waters, 2006). In this study, several examples suggest that the models based on LB provide better results (lower residuals for similar P-T conditions). This result is in line with small equilibration volumes, which would, for example, be smaller than the hand specimen used to estimate the bulk rock composition.
The quantitative mapping strategy employed in this study helps to ensure that the composition of each textural domain is accurately determined (Lanari & Engi, 2017; [START_REF] Lanari | Quantitative compositional mapping of mineral phases by electron probe micro-analyser[END_REF]). This would not be the case using only qualitative maps and EPMA spot analyses. Sample SY1460 is an interesting case study, as the compositional maps show evidence of transport-controlled growth with oscillatory zoning of MnO (Figure 9). In this case, high residuals are obtained in GRTMOD (Figure 15), suggesting that the part of the rock considered in the models never reached whole-scale chemical equilibrium during the entire P-T cycle. Another potential cause of oscillatory MnO zoning in garnet, an externally derived fluid, is unlikely given the very low solubility of Mn in the fluid at HP conditions. All other samples show compositional zoning more typical of growth zoning, suggesting equilibrium-controlled growth (Figures 7, 11 and 13). In this case, the successive compositions of garnet can be related to differences in P-T conditions, and the models have lower residuals (Figure 15).
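The residual quoted in Figure 15 can be thought of as a misfit between the observed garnet composition and the composition predicted at trial P-T conditions. The sketch below illustrates this bookkeeping with hypothetical end-member fractions; the actual GRTMOD program predicts candidate compositions by Gibbs-energy minimization rather than from a lookup table.

```python
# Sketch: the kind of compositional residual minimized when optimizing the
# P-T conditions of a garnet growth stage (in the spirit of GRTMOD).
# Compositions are end-member fractions (alm, prp, sps, grs); all values
# below are hypothetical.

def residual(observed, modelled):
    """Sum of squared differences between observed and modelled garnet
    end-member fractions; persistently high values flag disequilibrium,
    e.g. the oscillatory MnO zoning of sample SY1460."""
    return sum((o - m) ** 2 for o, m in zip(observed, modelled))

obs_core = (0.64, 0.06, 0.07, 0.23)        # observed core composition
candidates = {                              # modelled at trial P-T points
    (17.0, 450.0): (0.65, 0.06, 0.06, 0.23),
    (12.0, 520.0): (0.58, 0.10, 0.02, 0.30),
}
best = min(candidates, key=lambda pt: residual(obs_core, candidates[pt]))
print("best P (kbar), T (degC):", best)
```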
| P-T evolution of the CBU

In light of our results, the shape of the P-T path of the CBU on Syros is constrained by prograde, peak and retrograde garnet growth stages, the maximum RSCM temperature, and the peak-pressure crystallization of the garnet-omphacite-phengite assemblage (Figure 17). Prograde mineral parageneses constraining the burial P-T conditions of HP-LT metamorphic rocks are rarely preserved, and then often only in garnet cores. In our study, the late burial P-T conditions of the CBU in the subduction zone are constrained by a garnet growth stage recorded in three samples at 17 ± 2 kbar and 450 ± 50 °C (Figures 15 and 17). On Sifnos, the other Cycladic island where HP-LT eclogites and blueschists are well preserved, a prograde garnet growth event has also been retrieved at ~20 kbar and 450-500 °C [START_REF] Dragovic | Using garnet to constrain the duration and rate of water-releasing metamorphic reactions during subduction: An example from Sifnos, Greece[END_REF][START_REF] Groppo | Glaucophane schists and associated rocks from Sifnos (Cyclades, Greece): New constraints on the P-T evolution from oxidized systems[END_REF]. The timing of this early growth of garnet is well constrained at 53.4 ± 2.6 Ma [START_REF] Dragovic | Pulsed dehydration and garnet growth during subduction revealed by zoned garnet geochronology and thermodynamic modeling, Sifnos, Greece[END_REF]. We have obtained consistent peak metamorphic P-T conditions from the three subunits of the CBU using independent thermobarometric methods, at 22 ± 2 kbar and 530 ± 30 °C (Figure 17). These conditions differ significantly from the results of [START_REF] Schumacher | Glaucophane-bearing marbles on Syros, Greece[END_REF], who proposed lower peak metamorphic conditions of 15-16 kbar and 500 °C (Figure 1). Our results are instead in line with the peak P-T conditions proposed on Syros by [START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF] and with other recent studies conducted on Sifnos [START_REF] Ashley | Geothermobarometric history of subduction recorded by quartz inclusions in garnet[END_REF][START_REF] Dragovic | Using garnet to constrain the duration and rate of water-releasing metamorphic reactions during subduction: An example from Sifnos, Greece[END_REF][START_REF] Dragovic | Pulsed dehydration and garnet growth during subduction revealed by zoned garnet geochronology and thermodynamic modeling, Sifnos, Greece[END_REF][START_REF] Groppo | Glaucophane schists and associated rocks from Sifnos (Cyclades, Greece): New constraints on the P-T evolution from oxidized systems[END_REF]. The shape of the retrograde P-T path is characterized by a two-step exhumation history delimited by a phase of isobaric heating for the two lowermost subunits, Chroussa and Posidonia (Figure 17). The first step of exhumation was achieved under HP-LT conditions for all units, permitting the preservation of eclogite- and blueschist-facies parageneses. A similar shape of the P-T path for the early exhumation following peak P-T conditions has also been retrieved on Sifnos Island [START_REF] Groppo | Glaucophane schists and associated rocks from Sifnos (Cyclades, Greece): New constraints on the P-T evolution from oxidized systems[END_REF]. Moreover, this retrograde evolution of the CBU on Syros is close to that proposed by [START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF], except for their unconstrained retrograde P-T loops (Figures 1 and 17). After this first step of exhumation, a last garnet growth event was recorded at lower P-T conditions (Figures 15 and 17). In our interpretation, this retrograde garnet growth event is related to a phase of relatively isobaric heating at 10-12 kbar and 500-570 °C (Figure 17). Because samples of the Kampos Subunit are not strongly retrogressed, we suggest that rocks of this subunit were not affected by this heating and continued to be exhumed under HP-LT conditions. Recently proposed (but not constrained) for the CBU on Syros by [START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF], this heating phase was previously retrieved on Tinos and Andros islands [START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF]. On Tinos, this phase was estimated at 9-10 kbar and between 400 and 550 °C, while on Andros it was estimated at lower conditions of 7 kbar and 300-420 °C [START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF]. Finally, our results do not constrain the second phase of exhumation that followed the heating period.
However, consistent P-T conditions for this late phase of ductile exhumation have been estimated on Syros, Sifnos, Tinos and Andros, and were used to draw a complete P-T path of the CBU (Huet et al., 2016; [START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF][START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]; Figure 17). To conclude, the P-T path proposed in this study for the CBU on Syros differs significantly from the preceding ones (Figures 1 and 17). Our new estimates partly reinforce the conclusion of [START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF], who suggested that the P-T evolution was not the same throughout the nappe pile and that the best-preserved parts at the top were exhumed along a colder path than the lower units. Here, however, we show in addition that the evolutions of the three subunits were similar during the first part of exhumation, from 22 kbar to 10 kbar, and that they diverged from that point, with the lower units being heated at constant pressure before their final exhumation, a precision not attained with the methods used by [START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]. In our interpretation, the increasing gradient of greenschist-facies retrogression observed toward the base of the CBU is directly linked to these different P-T evolutions. The geodynamic implications of these results are discussed in the following section.

| Implications for exhumation processes and deep dynamics in the Hellenic subduction zone

The tectonometamorphic history of the CBU has been the focus of many works in recent decades. In this section, we synthesize these studies in an attempt to deepen understanding of exhumation processes and deep dynamics in the subduction zone. The peak metamorphic P-T conditions of the CBU on Syros have been debated in the literature (Figure 1; e.g. [START_REF] Schumacher | Glaucophane-bearing marbles on Syros, Greece[END_REF][START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF]). In light of our results and considering recent studies conducted on Sifnos, it now seems clear that the CBU was buried deeply in the subduction zone, to a minimum of 60-65 km assuming lithostatic pressure [START_REF] Ashley | Geothermobarometric history of subduction recorded by quartz inclusions in garnet[END_REF][START_REF] Dragovic | Using garnet to constrain the duration and rate of water-releasing metamorphic reactions during subduction: An example from Sifnos, Greece[END_REF][START_REF] Dragovic | Pulsed dehydration and garnet growth during subduction revealed by zoned garnet geochronology and thermodynamic modeling, Sifnos, Greece[END_REF][START_REF] Groppo | Glaucophane schists and associated rocks from Sifnos (Cyclades, Greece): New constraints on the P-T evolution from oxidized systems[END_REF][START_REF] Trotet | Exhumation of Syros and Sifnos metamorphic rocks (Cyclades, Greece). New constraints on the PT paths[END_REF].
Burial of the CBU is characterized by a top-to-the-SW sense of shear ([START_REF] Philippon | Tectonics of the Syros blueschists (Cyclades, Greece): From subduction to Aegean extension[END_REF]; Roche, Laurent, Cardello, Jolivet, & Scaillet, 2016). For example, [START_REF] Philippon | Tectonics of the Syros blueschists (Cyclades, Greece): From subduction to Aegean extension[END_REF] identified on Syros prograde top-to-the-SW shear affecting pseudomorphs after lawsonite, while kilometre-scale SW-verging folds have been interpreted as prograde drag folds on Sifnos [START_REF] Roche | Anatomy of the Cycladic Blueschist Unit on Sifnos Island (Cyclades, Greece)[END_REF] (see also [START_REF] Aravadinou | Ductile nappe stacking and refolding in the Cycladic Blueschist Unit: insights from Sifnos Island (south Aegean Sea)[END_REF] for the early top-to-the-south component of the burial path). Moreover, the P-T path proposed in this study for the CBU on Syros shows interesting similarities with those previously proposed for Tinos and Andros islands [START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF]. The main characteristic of these paths is a two-step exhumation history delimited by an isobaric heating phase (Figure 17). The first part of the retrograde path constrains the P-T conditions of a syn-orogenic phase of exhumation from eclogite- to blueschist-facies conditions. On Syros, the timing of this first exhumation step, accommodated within the subduction channel while the overall regime was still compressional, is constrained between 50 and 35 Ma by 40Ar/39Ar and Rb/Sr ages [START_REF] Cliff | Geochronological challenges posed by continuously developing tectonometamorphic systems: insights from Rb-Sr mica ages from the Cycladic Blueschist Belt, Syros (Greece)[END_REF][START_REF] Laurent | Extraneous argon in high-pressure metamorphic rocks: Distribution, origin and transport in the Cycladic Blueschist Unit (Greece)[END_REF][START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF]. Syn-orogenic exhumation kinematic indicators are well exposed on Syros, Sifnos and Tinos, showing a clear top-to-the-E/NE sense of shear in preserved HP rocks [START_REF] Gautier | Ductile crust exhumation and extensional detachments in the central Aegean (Cyclades and Evvia Islands)[END_REF][START_REF] Jolivet | Ductile extension and the formation of the Aegean Sea[END_REF][START_REF] Laurent | Strain localization in a fossilized subduction channel: Insights from the Cycladic Blueschist Unit (Syros, Greece)[END_REF][START_REF] Philippon | Tectonics of the Syros blueschists (Cyclades, Greece): From subduction to Aegean extension[END_REF][START_REF] Roche | Anatomy of the Cycladic Blueschist Unit on Sifnos Island (Cyclades, Greece)[END_REF]; Trotet et al., 2001a). The main structure accommodating this top-to-the-NE deformation is the Vari Detachment, described on Syros and Tinos and located at the roof of the CBU. [START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension.
Annual Review[END_REF] interpreted this deformation as syn-convergent extensional shearing at the top of an extruding wedge, contemporaneous with thrusting at the base of the wedge. This is consistent with syn-orogenic exhumation of the CBU between a basal thrust, observed on Ios, and a detachment at the top, the Vari Detachment [START_REF] Huet | Thrust or detachment? Exhumation processes in the Aegean: insight from a field study on Ios (Cyclades, Greece)[END_REF][START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF][START_REF] Jolivet | Subduction tectonics and exhumation of high-pressure metamorphic rocks in the Mediterranean orogens[END_REF]. A pause in the exhumation, accompanied by isobaric heating at the base of the crust, has also been previously shown on Tinos and Andros. This study recognizes the record of this re-heating period on Syros, except at the top of the CBU, where the HP-LT parageneses are best preserved (i.e. in the Kampos Subunit; Figure 17). This phase coincides with a change in subduction dynamics and the onset of faster slab retreat, coeval with the southward migration of the subduction front into the more external zones (Phyllite-Quartzite Nappe in Crete and on the Peloponnese; [START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF][START_REF] Jolivet | Subduction tectonics and exhumation of high-pressure metamorphic rocks in the Mediterranean orogens[END_REF]). Slab retreat then induced large-scale back-arc extension, leading to thermal re-equilibration of the lithosphere from a cold syn-orogenic regime in the subduction zone to a warmer post-orogenic regime in the back-arc domain [START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF]. The timing of this isobaric heating has been determined on Tinos and Andros at between 37 and 30 Ma [START_REF] Bröcker | High-Si phengite records the time of greenschist facies overprinting: implications for models suggesting mega-detachments in the Aegean Sea[END_REF][START_REF] Bröcker | 40Ar/39Ar and oxygen isotope studies of polymetamorphism from Tinos Island, Cycladic blueschist belt, Greece[END_REF][START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF] and needs further temporal constraints on Syros. An interesting feature is that the degree of heating was practically identical on Tinos and Andros (~120-150 °C) but seems lower on Syros (~60-70 °C), which probably accounts for the better preservation of HP-LT metamorphic rocks on Syros. Finally, the last part of the retrograde path of the CBU on Syros, Tinos and Andros constrains the P-T conditions of the post-orogenic phase of exhumation from blueschist- to greenschist-facies conditions (Figure 17; [START_REF] Huet | Coupled phengite 40 Ar-39 Ar geochronology and thermobarometry: P-T-t evolution of Andros Island (Cyclades, Greece)[END_REF][START_REF] Parra | Relation between the intensity of deformation and retrogression in blueschist metapelites of Tinos Island (Greece) evidenced by chlorite-mica local equilibria[END_REF]).
The timing of this last ductile exhumation phase is constrained between 30 and 20 Ma by 40Ar/39Ar and Rb/Sr ages [START_REF] Bröcker | The geological significance of 40Ar/39Ar and Rb-Sr white mica ages from Syros and Sifnos, Greece: a record of continuous (re) crystallization during exhumation?[END_REF][START_REF] Bröcker | Rb-Sr isotope studies on Tinos Island (Cyclades, Greece): additional time constraints for metamorphism, extent of infiltration-controlled overprinting and deformational activity[END_REF], 2005; [START_REF] Bröcker | 40Ar/39Ar and oxygen isotope studies of polymetamorphism from Tinos Island, Cycladic blueschist belt, Greece[END_REF][START_REF] Forster | Several distinct tectono-metamorphic slices in the Cycladic eclogite-blueschist belt, Greece[END_REF][START_REF] Laurent | Extraneous argon in high-pressure metamorphic rocks: Distribution, origin and transport in the Cycladic Blueschist Unit (Greece)[END_REF]. This post-orogenic phase of exhumation is mainly accommodated by sets of detachments with opposing senses of shear, such as the North Cycladic Detachment System (NCDS; Jolivet et al., 2010), associated with syn-greenschist top-to-the-NE shear, or the West Cycladic Detachment System (WCDS; [START_REF] Grasemann | Miocene bivergent crustal extension in the Aegean: Evidence from the western Cyclades (Greece)[END_REF]), a syn-greenschist top-to-the-SW system of detachments exposed on Serifos, Kythnos and Kea islands. The final exhumation stages of the CBU toward the surface were then accommodated by post-metamorphic brittle deformation along low-angle and steeply dipping normal faults (e.g. [START_REF] Jolivet | Ductile extension and the formation of the Aegean Sea[END_REF][START_REF] Jolivet | Correlation of syn-orogenic tectonic and metamorphic events in the Cyclades, the Lycian nappes and the Menderes massif. Geodynamic implications[END_REF][START_REF] Keiter | A new geological map of the Island of Syros (Aegean Sea, Greece): Implications for lithostratigraphy and structural history of the Cycladic Blueschist Unit[END_REF][START_REF] Mehl | From ductile to brittle: evolution and localization of deformation below a crustal detachment (Tinos, Cyclades, Greece)[END_REF][START_REF] Philippon | Tectonics of the Syros blueschists (Cyclades, Greece): From subduction to Aegean extension[END_REF]; Ring, Thomson, & Bröcker, 2003; [START_REF] Ring | The Hellenic subduction system: high-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review[END_REF]). The multi-stage exhumation of the CBU within the Hellenic subduction zone is strongly governed by slab rollback. Generalizing to subduction zones at large, we suggest that such a metamorphic evolution of HP-LT units, with two exhumation stages delimited by a phase of isobaric heating, should be regarded as a characteristic feature of exhumation driven by slab rollback. To better understand the transition from crustal thickening (syn-orogenic) to back-arc extension (post-orogenic) and the dynamics of the subduction channel, it is of paramount importance to know (1) when and at which depth heating started, (2) whether or not the HP-LT metamorphic units were first exhumed along a cooling path similar to that retrieved on Syros, and (3) what the mechanical significance is of the depth at which this isobaric heating took place.
Figure 15: Garnet equilibrium modelling using the program GRTMOD (Lanari et al., 2017). Results are calculated using the thermodynamic database JUN92.bs and both the measured bulk-rock composition (WR, squares) and local bulk compositions (LB, circles), with and without MnO (dark vs. light blue colour). Solid lines are the error bars on the optimal P-T conditions (see text). Numbers refer to the residual for each growth stage (core/rim or core/mantle/rim). Raman peak temperature estimates (RSCM-T) are represented in grey (2σ weighted error) for samples of the same subunit.

Figure 16: P-T isochemical phase diagrams of SY1401 showing the good consistency between the predicted equilibrium mineral assemblage for the two events of garnet growth and the mineral assemblage observed in the corresponding textural domain of the compositional map. Note that no consistency is observed for the second stage of rim garnet growth using the fractionated WR composition. Abbreviations: LB, local bulk; WR, bulk rock.

Table 2: RSCM peak temperature results. For each sample, the total number of measured Raman spectra (n) is shown together with the mean calculated temperature and the mean R2 ratio. The associated standard deviation (SD), related to the intra-sample heterogeneity, is also indicated. Peak temperature results are also represented in Figure 2. Columns: Sample, n, R2, T (°C); samples grouped by the Posidonia, Chroussa and Kampos subunits.

Figure 3: Geological and metamorphic maps of Syros and associated cross-sections showing the large-scale structure of Syros (after Laurent et al., 2016). Locations of samples and RSCM metamorphic peak temperatures are shown on the metamorphic map.

Figure 4: Thin-section pictures of typical textures and mineralogical assemblages of the samples analysed for thermobarometry and thermodynamic modelling.

Figure 5: Outcrop and sample description of SY1401 (Kampos Subunit). a) The Kastri Shear Zone, in which a strong gradient of deformation and retrogression is observed from eclogite- to greenschist-facies rocks. b, c, d) Decreasing-scale pictures from outcrop to sample SY1401. Pictures b and c show stretched eclogites preserving a pillow-lava breccia structure, and picture d corresponds to SY1401 before sampling.

Figure 6: Petrography of sample SY1401. a) Thin-section photograph showing the location of the mapped area. The white arrow shows incipient top-to-the-east shear deformation. b) BSE image of the mapped area. c) Map of the minerals (obtained from the classification of the X-ray maps). d) Schematic representation of the mapped area with the distinct petro-textural domains and their estimated proportions in the sample.

Figure 7: X-ray compositional map of SY1401. a, b, c, d) The composition of the garnet is zoned from core to rim. e) The composition of white mica defines four distinct groups in the map. f) Composition of clinopyroxene.

Figure 8: Petrography of sample SY1460. a) BSE image of a drilled rock section of SY1460 locating the mapped area. b) BSE image of the mapped area. c) Map of the minerals (obtained from the classification of the X-ray maps).
d) Schematic representation of the mapped area with the distinct petro-textural domains and their estimated proportions in the sample.

Figure 9: X-ray compositional map of SY1460. a, b, c, d) The composition of the garnet is zoned from core to rim. e) White mica defines two phengite compositions. f) Composition of clinopyroxene.

Figure 10: Petrography of sample SY1407. a) BSE image of a drilled rock section of SY1407 locating the mapped area. b) BSE image of the mapped area. c) Map of the minerals (obtained from the classification of the X-ray maps). d) Schematic representation of the mapped area with the distinct petro-textural domains and their estimated proportions in the sample.

Figure 11: X-ray compositional map of SY1407. a, b, c, d) The composition of the garnet is zoned from core to rim. e) White mica corresponds mainly to paragonite with only minor phengite.

Figure 12: Petrography of sample SY1418. a) BSE image of a drilled rock section of SY1418 locating the mapped area. b) BSE image of the mapped area. c) Map of the minerals (obtained from the classification of the X-ray maps). d) Schematic representation of the mapped area with the distinct petro-textural domains and their estimated proportions in the sample.

Figure 13: X-ray compositional map of SY1418. a, b, c, d) The composition of the garnet is zoned from core to rim. e) The white mica composition corresponds to phengite.

Figure 14: P-T map calculated using the garnet-omphacite thermometer of [START_REF] Ravna | The garnet-clinopyroxene Fe 2+ -Mg geothermometer: an updated calibration[END_REF] together with the garnet-omphacite-phengite barometer of Waters and Martin (1996) (from XMAPTOOLS).

Figure 17: P-T path of the Cycladic Blueschist Unit on Syros. a) RSCM peak temperatures and results of empirical thermobarometry. b) Thermodynamic modelling of garnet growth. c) Compiled P-T constraints for the CBU on Syros, Sifnos, Tinos and Andros islands. d) Proposed P-T path for the CBU on Syros.

Table 1: Bulk-rock (WR) and local bulk (LB) compositions used for thermodynamic modelling.
Table 1 (wt% oxide):

         SY1401         SY1460         SY1407         SY1418
         WR     LB      WR     LB      WR     LB      WR     LB
SiO2    52.64  51.54   50.76  51.34   67.39  65.61   48.52  49.75
TiO2     0.69   0.44    0.78   0.90    0.37   0.48    2.51   1.65
Al2O3   13.97  16.45   14.11  15.63   14.21  15.94   14.28  13.20
FeO      8.15   9.42    6.29   7.48    4.71   4.61   15.58  17.63
MnO      0.17   0.22    0.12   0.21    0.10   0.25    0.24   0.54
MgO      5.21   5.06    5.76   5.13    2.42   1.62    6.04   4.43
CaO     10.39   9.72    9.25   8.04    2.78   1.84    5.59   4.97
Na2O     4.67   4.23    5.51   5.29    4.04   4.52    3.28   3.20
K2O      1.70   1.85    2.50   2.47    0.53   0.21    0.16   0.08
Total   97.57  98.93   95.08  96.49   96.55  95.09   96.20  95.46

Garnet compositions (wt% oxide; columns are the growth zones used for modelling):

          SY1401                 SY1460          SY1407                 SY1418
          core   mantle  rim     core   rim      core   mantle  rim     core   rim
SiO2      38.57  38.70   39.35   37.26  37.86    35.65  35.42   35.85   36.70  37.81
Al2O3     21.44  21.50   22.04   22.08  22.91    22.01  21.35   21.55   19.73  20.52
FeO       28.73  29.26   27.77   30.25  27.34    28.63  30.02   27.71   28.97  31.59
MnO        3.15   2.00    0.75    2.80   1.61     3.49   5.09    2.10    3.08   0.28
MgO        1.58   1.73    2.56    1.45   2.21     1.00   1.17    1.46    1.02   2.18
CaO        7.93   8.37    9.73    7.22   9.58     8.89   6.02   10.13    7.85   7.15
Na2O       0.01   0.01    0.01    0.04   0.03     0.03   0.03    0.03    0.02   0.02
K2O        0.01   0.01    0.01    0.00   0.00     0.01   0.01    0.01    0.04   0.03
Total    101.40 101.58  102.21  101.10 101.54    99.71  99.11   98.84   97.41  99.58

Atom site distribution (12 anhydrous-oxygen basis):

Si(O/T)    3.03   3.03    3.03    2.96   2.95     2.89   2.91    2.91    3.03   3.03
Al(Y)      1.99   1.98    2.00    2.06   2.10     2.10   2.06    2.06    1.92   1.94
Fe(X)      1.89   1.92    1.79    2.01   1.78     1.94   2.06    1.88    2.00   2.12
Mg(X)      0.18   0.20    0.29    0.17   0.26     0.12   0.14    0.18    0.13   0.26
Mn(X)      0.21   0.13    0.05    0.19   0.11     0.24   0.35    0.14    0.22   0.02
Ca(X)      0.67   0.70    0.80    0.61   0.80     0.77   0.53    0.88    0.70   0.61
Xalm       0.64   0.65    0.61    0.67   0.61     0.63   0.67    0.61    0.66   0.70
Xprp       0.06   0.07    0.10    0.06   0.09     0.04   0.05    0.06    0.04   0.09
Xsps       0.07   0.04    0.02    0.06   0.04     0.08   0.11    0.05    0.07   0.01
Xgrs       0.23   0.24    0.27    0.21   0.27     0.25   0.17    0.29    0.23   0.20

Table 3: Garnet compositions used for thermodynamic modelling.

Table 4: White mica compositions used for thermodynamic modelling.

ACKNOWLEDGEMENTS

This work has received funding from the European Research Council (ERC) under the Seventh Framework Programme of the European Union (ERC Advanced Grant, grant agreement No 290864, RHEOLITH) and from the Institut Universitaire de France. It is a contribution of the Labex VOLTAIRE. The authors are grateful to S. Janiec and J.G. Badin (ISTO) for the preparation of thin sections and to V. Magnin and V. Batanova (ISTerre) for the acquisition of X-ray compositional maps. Finally, we thank B. Dragovic for insightful and helpful reviews and D. Whitney for careful and constructive editorial handling.

Ring, U., Glodny, J., Will, T., & Thomson, S. (2010). The Hellenic subduction system: High-pressure metamorphism, exhumation, normal faulting, and large-scale extension. Annual Review of Earth and Planetary Sciences, 38, 45-76.
Ring, U., Thomson, S., & Bröcker, M. (2003). Fast extension but little exhumation: The Vari detachment in the Cyclades, Greece. Geological Magazine, 140, 245-252.

SUPPORTING INFORMATION

Additional Supporting Information may be found online in the supporting information tab for this article.

Figure S1: Garnet equilibrium modelling using the program GRTMOD (Lanari et al., 2017).

2008), c) [START_REF] Lister | White mica 40 Ar/ 39 Ar age spectra and the timing of multiple episodes of high-P metamorphic mineral growth in the Cycladic eclogite-blueschist belt, Syros, Aegean Sea, Greece[END_REF]. Mineral abbreviations are after Whitney and Evans (2010).
Facies: AEBS, albite-epidote blueschist; AM, amphibolite; EA, epidote-amphibolite; EB, epidote-blueschist; EC, eclogite; GS, greenschist; LB, lawsonite-blueschist; LC, lawsonite-chlorite; PA, pumpellyite-actinolite; PrAc, prehnite-actinolite; PrP, prehnite-pumpellyite; ZE, zeolite [START_REF] Peacock | The importance of blueschist→ eclogite dehydration reactions in subducting oceanic crust[END_REF].

Figure captions

Figure 2: Tectonic reconstructions of a N-S section of the Aegean domain highlighting the structure of the Hellenic subduction zone, the southward retreat of the slab with time, and the evolution of the Cycladic Blueschist Unit from burial to final exhumation (after [START_REF] Jolivet | Cenozoic geodynamic evolution of the Aegean[END_REF]). Abbreviations: NCDS, North Cycladic Detachment System; PND, Paros-Naxos Detachment; WCDS, West Cycladic Detachment System; NAF, North Anatolian Fault.
91,069
[ "773731", "176806", "862863" ]
[ "82342", "187882", "524367", "18404", "541927" ]
01744978
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01744978/file/pods050withoutcopyright.pdf
Serge Abiteboul
Pierre Bourhis
Victor Vianu

Explanations and Transparency in Collaborative Workflows

Keywords: data-centric workflows, collaboration, views, explanations

We pursue an investigation of data-driven collaborative workflows. In the model, peers can access and update local data, causing side-effects on other peers' data. In this paper, we study means of explaining to a peer her local view of a global run, both at runtime and statically. We consider the notion of "scenario for a given peer": a subrun observationally equivalent to the original run for that peer. Because such a scenario can sometimes differ significantly from what happens in the actual run, thus providing a misleading explanation, we introduce and study a faithfulness requirement that ensures closer adherence to the global run. We show that there is a unique minimal faithful scenario, which explains what is happening in the global run by extracting only the portion relevant to the peer. With regard to static explanations, we consider the problem of synthesizing, for each peer, a "view program" whose runs generate exactly the peer's observations of the global runs. Assuming some conditions desirable in their own right, namely transparency and boundedness, we show that such a view program exists and can be synthesized. As an added benefit, the view program rules provide provenance information for the updates observed by the peer.

1. INTRODUCTION

Consider peers participating in a collaborative workflow. Such peers are typically willing to publicly share some data and actions, but keep others private or disclose them only to selected participants. During a run of the workflow, a peer observes side effects of other peers' actions, but may wish to be provided with a more informative explanation of the workflow. At runtime, one would like to explain to each peer the side effects she observes, in terms of the unfolding run. Statically, one would like to provide the peer with a program specifying all the transitions she may observe. In this paper, we consider the problem of providing peers with such runtime and static explanations. In particular, we identify two natural properties of workflows, transparency and boundedness, that are of interest in their own right and greatly facilitate the explanation task. We use the data-driven collaborative workflow model of [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF]. In contrast to process-centric workflows, data-driven workflows treat data as first-class citizens [START_REF] Nigam | Business artifacts: An approach to operational specification[END_REF]. In our collaborative workflow model, each peer sees a view of a global database that hides some relations, columns of other relations (projection), and tuples (selection). The workflow is specified by datalog-style rules, with positive and negative conditions in rule bodies, and insertions/deletions in rule heads. An event in the system is an instantiation by some peer of a rule in the workflow's program. Consider a run of a workflow and a particular peer, say p. To be able to understand a run from p's perspective, it is useful to isolate the portion of the run relevant to p from other computations that may be occurring in the system. Towards this goal, we introduce the notion of scenario for p, which is a subrun that is observationally equivalent for p to the original run.
Such a scenario includes events visible at p, but also events initiated by other peers that have no immediate side-effects visible at p but eventually enable events with visible side-effects. Among possible scenarios, minimal ones are desirable because they exclude redundant or useless (from the peer's viewpoint) events. We show that computing minimal scenarios for a peer is generally hard (coNP-complete). Moreover, despite being observationally equivalent to the original run, scenarios can be misleading by differing considerably from what occurs in the actual run. To overcome these issues, we consider an additional property of scenarios, called faithfulness, that guarantees tighter consistency between the scenario and the actual run. Moreover, faithful scenarios turn out to be particularly well behaved. They form a semiring with respect to natural operators, which enables efficient computation of minimal faithful scenarios, as well as their incremental maintenance. We show that every run has a unique minimal faithful scenario for each peer, which can be computed efficiently. We use minimal faithful scenarios as the natural semantic and computational basis for explaining runs. We then turn to the ambitious goal of providing static specifications of the runs as seen from a peer's perspective, which we call view programs for that peer. While such programs cannot generally exist for information-theoretic reasons, we consider some natural properties of workflows that allow constructing view programs defining precisely a peer's observations of the workflow runs. The properties we consider, transparency and boundedness, are often desirable in practice, for technical and even ethical reasons. In layman's terms, an algorithm is transparent (for a specific purpose) if it discloses its motivation and actions. In workflows involving human participants (often the case for collaborative workflows), one might want to require transparency for particular users; indeed, one may be compelled by law to do so in certain settings. Intuitively, a workflow is transparent for a peer if the data that the peer sees at each point in a run is sufficient to determine all possible future transitions visible at that peer. For example, if a CEO vetoes the hiring of Alice, and as a consequence it becomes certain that she cannot be hired in the future, this information must be disclosed to her in the next transition she sees. Boundedness is a more technical condition. For an integer h, h-boundedness of a run for a peer p limits to h the number of consecutive events invisible but relevant to p that the other peers can perform. (This does not prevent them from performing arbitrarily long sequences of events irrelevant to that particular peer.) We show that one can decide whether, for a given h, a workflow program only produces transparent and h-bounded runs for a particular peer. Furthermore, when this is the case, one can construct a "view program" for a peer that specifies exactly the transitions that peer may see. The rules of the program also provide the peer with provenance information, consisting of the facts visible to that peer that have led to the transition. Because of boundedness, the provenance always involves a bounded number of tuples, which allows their static specification in the bodies of rules of the view program. The synthesis of view programs described in Section 5 is related in spirit to partner synthesis in services modeled as Petri nets [START_REF] Wolf | Does my service have partners? Trans.
Petri Nets and Other Models of Concurrency[END_REF][START_REF] Lohmann | Wendy: A tool to synthesize partners for services[END_REF][START_REF] Sürmeli | Synthesizing cost-minimal partners for services[END_REF]. With practical considerations in mind, we lastly present some design guidelines for producing transparent and h-bounded programs for a specified peer. We also show that, for a large class of programs, one can force transparency and h-boundedness by rewriting each program so that, modulo minor differences, it has the same transparent and h-bounded runs as the original and filters out runs violating these properties.

The article is organized as follows. The model is described in Section 2. Scenarios are considered in Section 3, faithfulness in Section 4, transparency and view programs in Section 5, and design methodology for transparent and bounded programs in Section 6. Related work and conclusions are considered in the two last sections. Some of the proofs are relegated to an appendix.

2. COLLABORATIVE WORKFLOW

In this section, we recall the collaborative workflow model of [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF], introducing minor extensions. We start with some basic terminology. We assume an infinite data domain dom with a distinguished element ⊥ (intuitively denoting an undefined value), and including an infinite set P∞ of peers. We also assume an infinite domain of variables var disjoint from dom. A relation schema is a relation symbol together with a sequence of distinct attributes. We denote the sequence of attributes of R by att(R). A database schema is a finite set of relation schemas. A tuple over R is a mapping from att(R) to dom. An instance of a database schema D is a mapping I associating to each R ∈ D a finite set of tuples over R, i.e., a relation over R. We denote by Inst(D) the set of instances of D. We assume that each relation schema R is equipped with a unique key, consisting, for simplicity, of a single attribute K (the same for all relations). An instance I ∈ Inst(D) is valid (for the key constraints) if, for each R ∈ D, no tuple in I(R) has value ⊥ for attribute K, and there are no distinct tuples u, v in I(R) with the same key. We denote by Inst_K(D) the set of valid instances of D. We can apply to each instance I ∈ Inst(D) the following chase step, up to a fixpoint J denoted chase_K(I): for some R, some A, and distinct u, v in I(R), with u(K) = v(K), u(A) ≠ ⊥, and v(A) = ⊥, replace v by v′ identical to v except that v′(A) = u(A). Note that the chase turns some invalid instances into valid ones, while for others it terminates with invalid instances. More precisely, an instance is turned into a valid one iff it contains no two tuples with the same key and distinct non-null values for the same attribute. In this case, the result of the chase is unique. For technical reasons, we associate to each relation R (with key attribute K) a unary relational view Key_R that consists of the projection of R on K, i.e., for each R and instance I, I(Key_R) = π_K(I(R)).
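As a concrete illustration, the chase step can be implemented as a fixpoint loop over tuples sharing a key. The sketch below does this for a single relation, with tuples as Python dicts and None standing for ⊥; it assumes all tuples range over the same attributes and is illustrative only.

```python
# Sketch of the key-driven chase chase_K on a single relation, with tuples
# as dicts over attributes and None standing in for the null value.
NULL = None

def chase_k(tuples, key="K"):
    """Repeatedly copy non-null attribute values between tuples that share
    a key, then merge duplicates; returns (relation, is_valid)."""
    rel = [dict(t) for t in tuples]
    changed = True
    while changed:
        changed = False
        for u in rel:
            for v in rel:
                if u is not v and u[key] == v[key]:
                    for a in u:
                        if u[a] is not NULL and v[a] is NULL:
                            v[a] = u[a]          # one chase step
                            changed = True
    # validity: no null keys, and tuples with equal keys must now coincide
    seen, out = {}, []
    for t in rel:
        k = t[key]
        if k is NULL:
            return rel, False
        if k in seen:
            if seen[k] != t:                     # conflicting non-null values
                return rel, False
        else:
            seen[k] = t
            out.append(t)
    return out, True

rel, ok = chase_k([{"K": 1, "A": "a", "B": NULL},
                   {"K": 1, "A": NULL, "B": "c"}])
print(ok, rel)   # True, one merged tuple {'K': 1, 'A': 'a', 'B': 'c'}
```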
We recall the notion of full conjunctive query with negation (FCQ¬ query, for short), adapted to our context to take into account the view relations for keys. A term is a variable or a constant. A literal is of the form (¬)R(x̄), (¬)Key_R(y), x = y, or x ≠ y, where x̄ is a sequence of terms of appropriate arity, x is a variable, and y a term. A FCQ¬ query is an expression A_1 ∧ . . . ∧ A_n (for n ≥ 0) where each A_i is a literal and each variable occurs in some positive literal R(ū) (a safety condition). Observe that the use of a literal Key_R(k) is syntactic sugar, since it can be replaced by R(k, z_1, ...) where the z_i are new variables. On the other hand, the use of ¬Key_R(k) is not. For attributes A, B, and a in dom (possibly ⊥), A = a and A = B are elementary conditions. A condition is a Boolean combination of elementary conditions. Peer views of the database will be defined using projections and selections. Note that, in order to enable powerful static analysis, the views in [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF] did not use selections. The more powerful views used here better capture realistic peer views.

Collaborative schema. We now define collaborative workflow schemas, extending the definition in [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF]. Starting from a global database schema D and a finite set P of peers participating in the workflow, the collaborative schema specifies, for each peer p, a view of the database. The view consists of selection-projection views of a subset of the relations in D. A view of R ∈ D for a peer p (if provided) is denoted R@p. The view allows p to see only some of the attributes of R (projection on att(R@p)) and only some of the tuples (specified by a selection condition denoted σ(R@p)). The view of a global instance I of D at p is denoted I@p. Before providing the formal definition, we introduce the following notation. For a relation R and an instance J over a subset att(J) of att(R), J^⊥ denotes the instance of R obtained by padding all tuples of J with the value ⊥ on all attributes in att(R) \ att(J).

Definition 2.1. For a global database schema D, a collaborative schema S consists of a finite set P of peers and, for each p ∈ P, a view schema D@p such that:
• each relation in D@p is of the form R@p for R in D,
• for each R@p, K ∈ att(R@p) ⊆ att(R).
To each relation R@p ∈ D@p is associated a selection condition σ(R@p) over att(R). For an instance I over D, the view instance of I at peer p, denoted I@p, is the instance over D@p defined by: for each R@p in D@p,
• I@p(R@p) = π_{att(R@p)}(σ(R@p)(I(R))).
Furthermore, we impose the following (losslessness) condition: for each I ∈ Inst_K(D) and R ∈ D,
I(R) = chase_K( ∪ { (I@p(R@p))^⊥ | p ∈ P, R@p ∈ D@p } ).

The losslessness property guarantees that for each tuple in I(R), and each attribute A of R, the value of the tuple for A is visible at some peer. The global instance can therefore be recovered from its peer views using the chase. One can effectively check whether a collaborative schema has the losslessness property.

Let S be a collaborative schema, with global schema D and set P of peers. A peer p can perform two kinds of updates on a valid instance I:
• A deletion of the form −Key_{R@p}(k), where R@p ∈ D@p and k is a key value in I@p(R@p). The deletion results in removing from I(R) the tuple with key k.
• An insertion of the form +R@p(u), where R@p ∈ D@p and u is a tuple over att(R@p) such that (i) J = chase_K(I ∪ {R(u^⊥)}) is valid, and (ii) u is subsumed by some tuple v in J@p(R@p). Then J is the result of the insertion.
Note that the semantics of an update requested by some peer is specified on the global instance. This circumvents the view update problem.
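Operationally, a peer view is just a selection followed by a projection, as the following sketch shows; the relation, attributes and selection used here are hypothetical.

```python
# Sketch: computing a peer view I@p(R@p) as a projection over att(R@p) of a
# selection over att(R). Tuples are dicts; the selection is a predicate.
def view_at_peer(relation, visible_attrs, selection):
    return [{a: t[a] for a in visible_attrs} for t in relation if selection(t)]

# Hypothetical relation R over K, A, B, with peer p seeing K, A of tuples
# where B = "public".
R = [{"K": 1, "A": "x", "B": "public"},
     {"K": 2, "A": "y", "B": "private"}]
print(view_at_peer(R, ["K", "A"], lambda t: t["B"] == "public"))
# -> [{'K': 1, 'A': 'x'}]
```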
Observe also that a peer can delete only a tuple that the peer sees, and that, if an insertion succeeds, the tuple the peer inserted is part of its view after the update (by (ii)). Some subtleties of updates are illustrated next.

Example 2.2. Suppose the database consists of a single relation R over KAB, and we have two peers p, q, with att(R@p) = KAB, att(R@q) = KA, σ(R@p) being A = ⊥, and σ(R@q) being true. The losslessness condition is not satisfied by this schema. For example, consider the global instance I obtained using the sequence of inserts +R@p(k, ⊥, c); +R@q(k, a). It consists of a single tuple, R(k, a, c). Note that as a result of the second insertion, the tuple with key k disappears from the view of p. Moreover, I cannot be reconstructed from the collective views of the peers (and the value "c" is lost). The losslessness condition prevents such anomalies. Moreover, it allows treating the global instance as a virtual rather than materialized database, represented in a distributed fashion by the views of the peers.

Collaborative workflow. A collaborative workflow specification (workflow spec, in short) W consists of a collaborative schema S and a workflow program for W, i.e., a finite set of "update rules" for each peer p of W. The rules are defined using the auxiliary notion of "update atom" as follows. An update atom at p is an insertion atom or a deletion atom. An insertion atom at p is of the form +R@p(x̄) where R@p ∈ D@p and x̄ is a tuple of variables and constants of appropriate arity. A deletion atom at p is an expression −Key_{R@p}(x), where x is a variable or constant. A rule at peer p is an expression Update :− Cond where:
• Cond is a FCQ¬ query over D@p, and
• Update is a sequence of update atoms at p such that: if the sequence includes two updates of the same relation R of tuples with keys x, x′, respectively, then x and x′ are not both the same constant and the body includes a condition x ≠ x′.
In the previous definition, the conditions x ≠ x′ impose that no two updates in the same rule affect the same tuple. As a consequence, the order of the update atoms in a rule is irrelevant. An example of a rule is the following, where Assign(x, y) says that employee x is assigned to project y, and HR is the Human Relations peer:
−Key_{Assign@HR}(x), +Assign@HR(x′, y) :− Assign@HR(x, y), Replace@HR(x, x′), x ≠ x′
The rule allows the HR peer to replace employee x by employee x′ on project y. If P is the program of a workflow spec W for a collaborative schema S, we speak simply of the (workflow) program P when S and W are understood.

Let P be a program. To simplify, we assume that a run of P starts from the empty instance (note that an arbitrary "initial" instance can be constructed by the peers thanks to losslessness). The global instance then evolves under transitions caused by updates in rule instantiations, defined next. Let α = Update(ȳ) :− Cond(x̄) be a rule of P at some peer p, where x̄ are the variables occurring in Cond and ȳ are the variables in Update. A valuation ν of α for a global instance I is a mapping from ȳ ∪ x̄ to dom such that I@p ⊨ Cond(ν(x̄)). For a valuation ν of a rule α at some p for some I, the instantiation να is called an event, p the peer of this event, denoted peer(να), and α its rule. We define the transition relation ⊢_e among valid global instances of D as follows. For I, J, and some event e as above, I ⊢_e J if all insertions and deletions in Update(ν(ȳ)) are applicable, and J is obtained from I by applying them in any order.
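To make the transition relation concrete, the sketch below applies one event's update sequence to a simplified global instance. It assumes fully specified insertions, so the chase is not needed, and only checks key existence for deletions, whereas the actual definition additionally requires the deleted tuple to be visible at the deleting peer.

```python
# Sketch: applying one event's updates to a global instance, in the simple
# case where inserted tuples are fully specified (no chase needed) and no
# two updates target the same key, as the rule syntax guarantees.
def apply_event(instance, deletions, insertions):
    """instance: {rel_name: {key: tuple_dict}}; deletions: [(rel, key)];
    insertions: [(rel, tuple_dict)]. Returns a new instance, or None if
    some update is not applicable."""
    new = {r: dict(ts) for r, ts in instance.items()}
    for rel, key in deletions:
        if key not in new[rel]:
            return None                 # deleting a nonexistent tuple fails
        del new[rel][key]
    for rel, t in insertions:
        if t["K"] in new[rel] and new[rel][t["K"]] != t:
            return None                 # key conflict: invalid instance
        new[rel][t["K"]] = t
    return new

I0 = {"Assign": {"e7": {"K": "e7", "proj": "p1"}}}
I1 = apply_event(I0, [("Assign", "e7")],
                 [("Assign", {"K": "e9", "proj": "p1"})])
print(I1)  # employee e7 replaced by e9 on project p1
```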
A run of P is a finite sequence ρ = {(e_i, I_i)}_{0≤i≤n} such that ∅ ⊢_{e_0} I_0, where ∅ is the empty instance, and for each 0 < i ≤ n, I_{i−1} ⊢_{e_i} I_i. Additionally, we require: for each event e_i = να, if x is a variable occurring in the head of α but not in its body, then x must be instantiated to a globally fresh value, i.e., ν(x) does not occur in const(P) or in I_1, . . . , I_{i−1}. We denote by Runs(P) the set of runs of P. Let ρ = {(e_i, I_i)}_{0≤i≤n} be a run of P. Note that the event sequence (e_1 . . . e_n), denoted e(ρ), uniquely determines the run ρ. By slight abuse, we sometimes call a sequence (e_1 . . . e_n) of events that yields a run a run as well. It is occasionally useful to consider runs starting from an arbitrary initial instance I rather than ∅. A run on initial instance I is defined in the obvious manner, by replacing ∅ with I in the previous definition.

Normal-form programs. We show a normal form for workflow programs that will be particularly useful in the next sections. A workflow program P is in normal form if (i) each rule whose head contains a deletion −Key_{R@q}(x) also contains a literal R@q(x, ū) in its body, and (ii) rule bodies do not contain negative literals of the form ¬R@q(x, ū) or positive literals of the form Key_{R@q}(x). Intuitively, (i) simply makes explicit that deletion updates are effective. (Recall that this is imposed by the definition of deletion.) As for (ii), it allows distinguishing between the cases when a fact R@q(k, ū) is false because no tuple in R@q has key k, or because R@q(k, v̄) is a fact for some v̄ ≠ ū. Thus, rules in normal form provide more refined information. We next show that this does not limit the expressivity of the model.

Proposition 2.3. For each workflow program P, one can construct a normal-form program P_nf, and a function θ from the rules of P_nf to the rules of P, such that ρ = {(e_i, I_i)}_{0≤i≤n} is a run of P iff ρ_nf = {(f_i, I_i)}_{0≤i≤n} is a run of P_nf for some {f_i}_{0≤i≤n} such that peer(e_i) = peer(f_i) and rule(e_i) = θ(rule(f_i)).

Proof. Informally, P_nf is constructed as follows. For each rule containing a deletion −Key_{R@q}(x) in its head, a literal R@q(x, ū) is added to the body, where ū consists of distinct new variables. This guarantees (i). For (ii), a literal Key_{R@q}(x) occurring in the body of some rule of P can be replaced by a literal R@q(x, ū), where ū consists of distinct new variables. Now suppose a literal ¬R@q(x, ū) occurs in the body of some rule r of P. First, an instantiation ν of this rule may hold because ¬Key_{R@q}(ν(x)) holds. This can be captured by a rule obtained from r by replacing ¬R@q(x, ū) with ¬Key_{R@q}(x). Next, an instantiation ν of r may hold because R@q(ν(x), v̄) holds for some v̄ with ν(ū(A)) ≠ v̄(A) for some attribute A ≠ K of R@q. This can be captured by a rule obtained from r by replacing ¬R@q(x, ū) with R@q(x, z̄), where z̄ is a tuple of distinct new variables, and adding the condition ū(A) ≠ z̄(A). Note that the previous construction replaces r with a set Rules(r) of rules in P_nf, corresponding to the different cases. The mapping θ is defined by θ(r′) = r for each r′ ∈ Rules(r). Rules r of P that are not modified are included as such in P_nf, with θ(r) = r.

3. VIEWS AND SCENARIOS

In this section, we isolate the portions of a run that are relevant to a particular peer and accurately mirror what is actually occurring in the workflow from that peer's viewpoint. Towards this goal of filtering out irrelevant events, we introduce the notions of subruns and scenarios.
In the next section, we further consider scenarios called faithful that adhere more closely to the actual run and also turn out to have desirable properties from a semantic and computational viewpoint. We first define the p-view of a run, for a peer p. Intuitively, this is the most basic view of a run, consisting essentially of the observations of peer p. Let ρ = {(e_i, I_i)}_{0≤i≤n} be a run of P. An event e_i is visible at p if peer(e_i) = p, or peer(e_i) ≠ p and I_{i−1}@p ≠ I_i@p. Otherwise, e_i is invisible (or silent) at p. The p-view of an event e, denoted e@p, is e if peer(e) = p and ω if peer(e) ≠ p, where ω is a new symbol (standing for "world").

Definition 3.1. Let ρ = {(e_i, I_i)}_{0≤i≤n} be a run of P, p a peer, and ρ′ = {(e_i@p, I_i@p)}_{0≤i≤n}. The sequence obtained from ρ′ by deleting all (e_i@p, I_i@p) such that e_i is invisible at p is called the view of ρ at p, denoted ρ@p. We denote Runs@p(P) = {ρ@p | ρ ∈ Runs(P)}.

Thus, ρ@p consists of all transitions caused by p, marked with e_i@p, as well as all transitions of events visible at p caused by other peers, marked with ω. We next consider subruns of a given run, with the goal of isolating the portion of the run that is relevant to p and can be used as a sound basis for providing the peer with additional information. For example, in Section 5 we discuss richer views that include provenance information for the updates observed by a peer, extracted from subruns relevant to the peer. A subrun ρ̂ of ρ is a run such that e(ρ̂) is a subsequence of e(ρ). In a subrun, only some of the events of ρ are retained, in the same order as they occur in ρ. Observe that the instances in ρ̂ are typically different from those in ρ. Of course, not all subsequences of e(ρ) yield subruns. If a subsequence α of e(ρ) does yield a subrun, it is denoted run(α) (or α, by slight abuse). We are interested in those subruns of a run that are compatible with p's observations of the run. This is captured by the notion of "scenario". We will, in particular, be interested in "minimal" such scenarios that, in some sense, explain what can be observed by p in a non-redundant manner.

Definition 3.2. Let P be a workflow program, p a peer of P, and ρ = {(e_i, I_i)}_{0≤i≤n} a run of P. A scenario of ρ at p is a subrun ρ̂ of ρ such that ρ@p = ρ̂@p. A scenario ρ̂ of ρ at p is minimal if there is no scenario ρ′ of ρ at p for which e(ρ′) is a strict subsequence of e(ρ̂). A minimal scenario is minimum if there is no shorter scenario of ρ at p.

Clearly, ρ itself is a scenario of ρ at p, but it is likely to include portions that are irrelevant to p's observations, which motivates the study of minimal and minimum scenarios.
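Definition 3.1 amounts to a single filtering pass over the run, sketched below. The encodings of events and views are illustrative assumptions, with OMEGA standing for the symbol ω.

```python
# Sketch: computing the view rho@p of a run at peer p (Definition 3.1).
# A run is a list of (event, instance) pairs; view(instance) gives I@p.
OMEGA = "omega"

def run_view_at_p(run, p, view, initial):
    out, prev = [], view(initial)        # p-view before the first event
    for event, instance in run:
        v = view(instance)
        if event["peer"] == p:
            out.append((event, v))       # p's own events are always kept
        elif v != prev:
            out.append((OMEGA, v))       # other peers: kept only if the
        prev = v                         # p-view changed (visible event)
    return out
```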
We next show that finding a minimum scenario is hard. (We will see that the problem is hard also for minimal scenarios.) In the proof, and throughout the paper whenever convenient, we use propositions in workflow programs as syntactic sugar. A proposition x can be simulated by a unary relation R_x (with key K), a literal (¬)x by (¬)R_x(0), and an insertion or deletion of this literal by +R_x(0) or −Key_{R_x}(0).

Theorem 3.3. It is NP-complete, given a workflow program P, a run ρ of P, a peer p and an integer N, whether there exists a scenario of ρ at p of length at most N. Moreover, this holds even for workflows with ground positive rules and no deletions.

Proof. Membership in NP is obvious. For hardness, we use a reduction from the Hitting Set problem. An instance of Hitting Set consists of a finite set V = {v_1, . . . , v_n}, a set {c_1, . . . , c_k} of subsets of V, and an integer M < n. It is NP-complete to decide whether there exists W ⊂ V of size at most M such that W ∩ c_j ≠ ∅ for 1 ≤ j ≤ k [START_REF] Garey | Computers and Intractibilitiy: A Guide to the Theory of NP-Completeness[END_REF]. From an instance of Hitting Set, we construct a program and a run ρ as follows. The schema consists of propositions V_i (1 ≤ i ≤ n), C_j (1 ≤ j ≤ k), and OK. There are two peers p, q. Peer q sees all propositions and p sees only OK. The rules are the following:
(a) +V_i@q :− , for each i ∈ [1..n],
(b) +C_j@q :− V_i@q, for each i ∈ [1..n], j ∈ [1..k] with v_i ∈ c_j,
(c) +OK@q :− C_1@q, . . . , C_k@q
Observe that the workflow program uses only propositions. Note that all rules belong to peer q but p can observe the value of OK. Intuitively, firing a subset of the (a)-rules designates a subset W of V. With W chosen, the (b)-rules designate the sets c_j hit by some element of W. If all sets are hit, rule (c) is enabled. Consider firing first all (a)-rules, followed by all (b)-rules, and ending with rule (c). It is clear that this yields a run; call it ρ. Intuitively, ρ corresponds to picking the trivial hitting set W = V. It is easy to check that there exists a hitting set W of size at most M iff there exists a scenario of ρ for p of length at most M + k + 1.

For the specific runs ρ used in the proof, some minimal scenario can always be found efficiently in a greedy manner. (Start with ρ. First, remove one (a)-rule at a time, together with the (b)-rules depending on it, and check whether the remaining sequence is still a scenario. Then, when no (a)-rule can be further removed, keep only one (b)-rule for each j. The resulting scenario is minimal.) However, this cannot be done for arbitrary runs. Indeed, the next result shows that testing whether a scenario is minimal is hard (see Appendix for the proof).

Theorem 3.4. It is coNP-complete, given a workflow program P, a run ρ of P, and a peer p, whether ρ is a minimal scenario of ρ at p. Moreover, this holds even for workflows with ground positive rules and no deletions.

The lack of a unique minimal scenario of runs for a given peer is problematic when richer views need to be defined starting from several candidate minimal scenarios. Moreover, as seen in the next section, even minimal scenarios can provide misleading explanations about what occurs in the global run. We will propose a natural restriction, called faithfulness, that overcomes the problems of unrestricted scenarios.
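The greedy strategy of the remark above can be phrased generically, as sketched below; replay and same_view are assumed helpers that rebuild a run from an event subsequence and test observational equivalence at p. Note that the result is only guaranteed to be irreducible under single-event deletions; by Theorem 3.4, certifying minimality in the sense of Definition 3.2 is coNP-hard in general.

```python
# Sketch: greedy shrinking of a run toward a scenario for peer p.
# `replay` returns the run induced by an event subsequence, or None if it
# is not a subrun; `same_view` checks equality of the p-views (assumed).
def greedy_scenario(events, target_view, replay, same_view):
    kept = list(events)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]       # try dropping event i
        run = replay(candidate)
        if run is not None and same_view(run, target_view):
            kept = candidate                      # drop succeeded; retry at i
        else:
            i += 1                                # event i is needed; keep it
    return kept
```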
Before defining faithful scenarios, we illustrate some of the discrepancies that may arise between arbitrary scenarios and actual runs.

Example 4.1. Consider again the workflow used in the proof of Theorem 3.3. Suppose that p also sees the propositions C_i, i ∈ [1, k]. Consider a run that derives OK. Suppose the run starts with +V_1@q :- ; +C_5@q :- V_1@q ; +V_2@q :- ; +C_5@q :- V_2@q. Then a scenario could ignore +C_5@q :- V_1@q, although this is the event that actually derived C_5@q.

Example 4.2. Consider a workflow with peers cto, ceo, assistant, and applicant and propositions ok and approval. The peers cto, ceo, and assistant all see ok and approval, and applicant sees only approval. Consider the run ρ consisting of the following events:

e: +ok@cto :-
f: −ok@cto :-
g: +ok@ceo :-
h: +approval@assistant :- ok@assistant

The subrun e.h is a scenario of ρ at peer applicant. It indicates that the applicant's request was approved because it was ok'd by the cto. This subrun is misleading, since in the actual workflow the cto retracted its ok and the request was approved by the ceo. This arises because the subrun ignores the deletion of ok@cto.

Let p be a fixed peer of some workflow program P and ρ a run. We next introduce the notion of "p-faithful subrun of ρ", which prevents the kinds of anomalies just illustrated. First, the definition is driven by the intuition that tuples in a given relation with a fixed key k represent evolving objects in the workflow. Objects identified by k can go through several lifecycles, occurring between the creation and deletion of a tuple with key k. Faithfulness requires that boundaries of lifecycles of events in the subrun be the same as those in the run, eliminating anomalies such as the one in Example 4.2. It also requires that the events affecting relevant attributes of object k in the same lifecycle be the same in the subrun as in the actual run, eliminating alternative subruns as in Example 4.1.

To define faithful subruns formally, we use the following auxiliary definitions. We assume wlog that all programs are in normal form. Let P be a program and ρ = {(e_i, I_i)}_{0≤i≤n} a run of P. We say that k occurs as a key of R in an event e_i of peer q if it occurs in a literal R@q(k, ū) or ¬Key_{R@q}(k) in the body of e_i, or in an update +R@q(k, ū) or −Key_{R@q}(k) in the head of e_i. We say that k occurs as a key of R in a sequence α of events if it occurs as a key of R in some event of α. We denote the set of such keys by K(R, α). Let k ∈ K(R, e(ρ)) where ρ = {(e_i, I_i)}_{0≤i≤n}. A closed R-lifecycle of k in ρ is an interval [i_1, i_2] ⊆ [0, n] such that e_{i_1} inserts in R a new tuple with key k, this tuple is not deleted between i_1 and i_2, and e_{i_2} deletes it. An open R-lifecycle of k in ρ is an interval [i_1, ∞) such that e_{i_1} inserts in R a new tuple with key k and this tuple is not deleted later on in ρ. In both cases, we say that e_{i_1} is the left boundary event of the lifecycle, and for a closed lifecycle [i_1, i_2] we say that e_{i_2} is its right boundary event.

We now define the ingredient of faithfulness concerning boundaries of lifecycles. For a subrun ρ̂ of ρ, the requirement applies to e(ρ̂) alone. In fact, it will be useful to define this notion for arbitrary subsequences of e(ρ).

Definition 4.3. Let ρ = {(e_i, I_i)}_{0≤i≤n} be a run of some workflow program P. A subsequence α = {e_{i_j}}_{0≤j≤m} of e(ρ) is boundary faithful if for every e_{i_j} and k ∈ K(R, e_{i_j}) such that i_j belongs to an R-lifecycle of k in ρ, the left boundary event of the R-lifecycle belongs to α, and the right boundary event of the R-lifecycle also belongs to α if the R-lifecycle is closed. (Observe that k ∈ K(R, e_{i_j}) need not belong to an R-lifecycle containing i_j, because k may occur in a negative literal ¬Key_{R@q}(k).)

Observe that boundaries of R-lifecycles of k in ρ that do not contain events in α need not be included in α.
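The lifecycles of a key can be read directly off the update log of a run. The following sketch computes them, together with the boundary events that Definition 4.3 forces into a subsequence; the event summary (inserted and deleted keys per relation) is our own encoding, not the paper's syntax.

```python
def lifecycles(events, rel, key):
    """R-lifecycles of key as (left, right) index pairs; right is None
    for an open lifecycle [left, infinity)."""
    cycles, left = [], None
    for i, e in enumerate(events):
        if left is None and key in e.inserts.get(rel, set()):
            left = i                                 # left boundary event
        elif left is not None and key in e.deletes.get(rel, set()):
            cycles.append((left, i))                 # closed lifecycle [left, i]
            left = None
    if left is not None:
        cycles.append((left, None))                  # open lifecycle
    return cycles

def boundary_requirements(events, rel, key, j):
    """Boundary events that must accompany index j in any boundary
    faithful subsequence, when j falls inside a lifecycle of key."""
    for left, right in lifecycles(events, rel, key):
        if left <= j and (right is None or j <= right):
            return {left, right} - {None}
    return set()
```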
We now consider the last ingredient of faithfulness. Its definition relies on the auxiliary concept of the set of attributes of R that are "relevant" to peer q, denoted att(R, q). Specifically, the values on att(R, q) determine whether a given tuple is seen by q, and provide the visible values. Formally, for a peer q with selection condition σ(R@q), we define att(R, q) = att(R@q) ∪ att(σ(R@q)), where att(σ(R@q)) is the subset of attributes of R used in σ(R@q). The last ingredient of faithfulness focuses on events that modify existing tuples with a given key. Intuitively, modification faithfulness for a peer p requires that, within a lifecycle of key k, all updates of attributes relevant to p be included in the subsequence. It also requires that updates of attributes relevant to other peers participating in the subsequence be included as well.

Definition 4.4. Let ρ = {(e_i, I_i)}_{0≤i≤n} be a run of P and p a fixed peer. A subsequence α = {e_{i_j}}_{0≤j≤m} of e(ρ) is modification faithful for p if the following holds for each e_{i_j}, each R, and each k ∈ K(R, e_{i_j}): if e_i belongs to the same R-lifecycle of k in ρ as e_{i_j}, i < i_j, peer(e_{i_j}) = q, and e_i contains an insertion that turns in ρ some attribute in att(R, q) ∪ att(R, p) of an existing tuple with key k from ⊥ to some other value, then e_i also belongs to α.

Observe that boundary faithfulness is independent of the fixed peer p, but modification faithfulness depends on p. We can now define the notion of faithful subsequence and subrun of a run.

Definition 4.5. A subsequence α of e(ρ) is p-faithful if it contains all events of ρ that are visible at p, is boundary faithful, and is modification faithful for p. A subrun ρ̂ of ρ is p-faithful if e(ρ̂) is p-faithful.

Observe that faithfulness rules out the counterintuitive scenarios in Examples 4.1 and 4.2. For instance, g.h is an applicant-faithful subsequence of the run in Example 4.2, whereas e.h is not. Moreover, note that g.h is a subrun. This is not coincidental. We next show the following key fact: p-faithful subsequences of e(ρ) always yield scenarios of ρ for p. The proof is provided in the appendix.

Lemma 4.6. Let ρ be a run of W, p a peer, and α a p-faithful subsequence of e(ρ). Then (i) α yields a subrun of ρ, and (ii) run(α) is a scenario of ρ for p.

We next show the existence of a unique minimal p-faithful scenario of ρ for p, which can be computed in PTIME. To this end, we define an operator T_p(ρ, ·) on subsequences of e(ρ). For each subsequence α of e(ρ), T_p(ρ, α) consists of the subsequence of e(ρ) obtained by adding to α all events of ρ whose presence is required by boundary and modification p-faithfulness due to events that are already in α. Observe that (almost by definition) a subsequence α of ρ is boundary and modification p-faithful iff it is a fixed-point of T_p(ρ, ·), i.e., T_p(ρ, α) = α. Let ⪯ be the subsequence relation on sequences of events. Note that T_p(ρ, ·) is monotone with respect to ⪯, i.e., for α ⪯ β, T_p(ρ, α) ⪯ T_p(ρ, β). Consider the increasing sequence α_0 = α, α_1 = T_p(ρ, α_0), α_2 = T_p(ρ, α_1), ..., and let T_p^ω(ρ, α) = α_n, where n is the minimum integer for which α_n = α_{n+1}. Now we have:

Theorem 4.7. Let P be a program schema. For each run ρ of P there is a unique minimal p-faithful scenario ρ̄ of ρ. Moreover, ρ̄ equals run(T_p^ω(ρ, α)), where α consists of all events of ρ visible at p, and can be computed from ρ in polynomial time.

Proof. Clearly, T_p^ω(ρ, α) is p-faithful, because it is a fixed-point of T_p(ρ, ·) and contains all events of ρ visible at p. By Lemma 4.6, it also yields a scenario for p. Consider any p-faithful scenario ρ̄ of ρ for p. By definition of p-faithfulness, e(ρ̄) must include the events in α. From the facts that α ⪯ e(ρ̄), T_p(ρ, ·) is monotone, and e(ρ̄) is a fixed-point of T_p(ρ, ·), it follows that T_p^ω(ρ, α) ⪯ e(ρ̄). Thus, run(T_p^ω(ρ, α)) is the unique minimal p-faithful scenario of ρ. Clearly, T_p^ω(ρ, α) can be computed in polynomial time from ρ. □
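A sketch of the fixpoint computation behind Theorem 4.7. We assume two helpers, both our own names: `visible_at(run, i, p)`, implementing visibility as in Section 3, and `required(run, S)`, returning the indices forced into S by boundary and modification p-faithfulness (e.g., assembled from the lifecycle sketch above).

```python
def minimal_faithful_scenario(run, p, visible_at, required):
    """Return the event indices of the unique minimal p-faithful
    scenario, i.e., T_p^omega(run, alpha) with alpha the visible events."""
    current = {i for i in range(len(run)) if visible_at(run, i, p)}
    while True:                        # iterate T_p(run, .) to its fixpoint
        nxt = current | required(run, current)
        if nxt == current:             # fixpoint: boundary and modification
            return sorted(current)     # p-faithful, and contains alpha
        current = nxt
```

Each iteration either reaches the fixpoint or adds at least one index, so the loop terminates within |run| iterations; with a polynomial `required`, the whole computation is polynomial, as the theorem states.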
We now consider two natural ways of combining subsequences of a run ρ that are useful in many practical situations: "multiplying" them by taking the intersection of their events, and "adding" them by taking the union of their events. We will show that p-faithful subsequences of a run ρ are closed under both operations, and form a semiring.

Formally, let α_1 and α_2 be subsequences of e(ρ) for a run ρ. Their product, denoted α_1 ∗ α_2, is the subsequence consisting of the events of ρ occurring in both α_1 and α_2. Their addition, denoted α_1 + α_2, is the subsequence consisting of the events of ρ in α_1 or α_2. Clearly, if ρ_1 and ρ_2 are subruns of ρ, e(ρ_1) + e(ρ_2) and e(ρ_1) ∗ e(ρ_2) are not guaranteed to yield subruns of ρ.

Addition and multiplication of sequences are both of interest in practice. Closure under intersection is the core reason for the existence of a unique minimal p-faithful scenario for a peer. As we will see, addition is useful in incremental maintenance of minimal p-faithful scenarios. We next show the following (see Appendix for proof).

Theorem 4.8. Let ρ be a run of W. The set of p-faithful scenarios of ρ is closed under addition and multiplication, and forms a semiring.

Incremental evaluation. To conclude the section, we discuss how to maintain incrementally the minimum p-faithful scenario of a run, leveraging the closure under addition of p-faithful scenarios. More specifically, for a run ρ, we wish to maintain incrementally T_p^ω(ρ, α), where α consists of the events of ρ visible at p. To do so, we additionally maintain some auxiliary information, consisting of T_p^ω(ρ, f) for each event f in ρ. Intuitively, T_p^ω(ρ, f) represents an "explanation" of the individual event f by a minimal boundary and modification p-faithful subrun of ρ containing f. Note that f need not be visible at p.

Suppose the current run is ρ and we have computed T_p^ω(ρ, α) and T_p^ω(ρ, f) for each event f in ρ. Now suppose that a new event e arrives. We need to compute the following (where ρ.e denotes the concatenation of ρ and e):

(i) T_p^ω(ρ.e, f), for each event f in ρ.e.
(ii) T_p^ω(ρ.e, α'), where α' = α.e if e is visible at p and α' = α otherwise.

Consider (i). For f = e, T_p^ω(ρ.e, e) consists of e together with all events in T_p^ω(ρ, g) for some g ∈ T_p(ρ.e, e). For f ≠ e, there are two cases, depending on whether e is the right boundary of an open lifecycle of a key occurring in T_p^ω(ρ, f). If this is the case, meaning that e is in T_p(ρ.e, T_p^ω(ρ, f)), then T_p^ω(ρ.e, f) consists of e together with the events in T_p^ω(ρ, f) and those in T_p^ω(ρ, g) for some g ≠ e in T_p(ρ.e, e). Otherwise, T_p^ω(ρ.e, f) = T_p^ω(ρ, f).

Now consider (ii). Suppose e is visible at p, or is the right boundary of an open lifecycle of a key in T_p^ω(ρ, α), meaning that e is in T_p(ρ.e, T_p^ω(ρ, α)). Then T_p^ω(ρ.e, α') consists of e together with the events in T_p^ω(ρ, α) and in T_p^ω(ρ, g) for some g ≠ e in T_p(ρ.e, e). Otherwise, T_p^ω(ρ.e, α') = T_p^ω(ρ, α).

Observe that the incremental maintenance algorithm outlined above requires only single applications of the operator T_p(ρ, ·), avoiding computations of its least fixpoint from scratch. This is similar in spirit to results on incremental maintenance of recursive views by nonrecursive queries (e.g., see [START_REF] Dong | Incremental maintenance of recursive views using relational calculus/sql[END_REF]).
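The case analysis above maintains per-event explanations exactly. As a coarser illustration only, the following sketch exploits just two facts: T_p(ρ, ·) is monotone, and extending the run can only add faithfulness requirements. When a new event arrives, the iteration is therefore seeded with the previous fixpoint instead of restarting from the visible events alone. Helpers are the assumed ones from the previous sketch; this is not the paper's algorithm.

```python
def extend_scenario(old_indices, new_run, p, visible_at, required):
    """old_indices: the fixpoint for the run without its last event.
    Re-converge after that event arrives, seeding with old_indices."""
    n = len(new_run) - 1                   # index of the newly arrived event
    current = set(old_indices)
    if visible_at(new_run, n, p):
        current.add(n)
    while True:                            # n may, e.g., close an open lifecycle
        nxt = current | required(new_run, current)
        if nxt == current:
            return sorted(current)
        current = nxt
```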
5. TRANSPARENCY & VIEW-PROGRAMS

The goal. Let P be a workflow program and p a peer. We have defined, for each run ρ of P, the view ρ@p of the run as seen by peer p. As a next step, we would like to also provide p with a "view-program" P' whose runs are precisely the views Runs(P)@p. Intuitively, we wish such a program to provide p with an "explanation" of the global workflow in terms it can understand and access. The view-program will use the same schema D@p as p, and the fictitious peer ω (world) that represents the environment. The rules of P' consist of the rules of p in P, together with new rules for peer ω that define the transitions caused by other peers with visible side-effects at p. More precisely, a program P' is a view-program for P at p if:

• P' is over global schema D@p and uses two peers p and ω, both with schema D@p and all selection conditions true.
• The rules of peer p are the same in P and P'.
• (completeness) For each run ρ of P, there exists a run ρ' of P' such that ρ@p is obtained from ρ' by replacing all ω-events with ω.
• (soundness) For each run ρ' of P', there exists a run ρ of P such that ρ@p is obtained from ρ' by replacing all ω-events with ω.

We illustrate the notion of view-program with an example.

Example 5.1. Consider a program P with peers hr, ceo, cfo, and Sue, using the following rules:

+Cleared@hr(x) :-
+cfoOK@cfo(x) :-
+Approved@ceo(x) :- Cleared@ceo(x), cfoOK@ceo(x)
+Hire@hr(x) :- Approved@hr(x)

Note that there are no rules for Sue. Suppose hr, cfo, and ceo see all relations, but Sue sees only relations Cleared and Hire. A view-program P' for Sue operates on the schema Cleared and Hire and has the following rules:

+Cleared@ω(x) :-
(†) +Hire@ω(x) :- Cleared@ω(x)

It is easy to check that P' is sound and complete for P and peer Sue, so it is a view-program for P at Sue.

Remark 5.2. Observe that soundness and completeness of view-programs amounts to equivalence of P' and P with respect to the views of their linear runs. However, consider the following subtlety in the above example. By soundness of P' and rule (†), if Sue sees Cleared(x), then there exists a run of P in which Sue sees Hire(x) inserted in the next transition visible to her. However, it is not the case that this is possible in every run of P, because the transition also depends on relation cfoOK, invisible to Sue. Enforcing this stronger property requires a more stringent notion of equivalence based on the trees of runs rather than just linear runs. We later show how this can be done by forbidding the use of information invisible to the given peer that affects the view of the peer. Intuitively, this makes the collaborative workflow more transparent to the peer. We formally introduce the notion of transparency further below.

Note that one can trivially define a sound program for P at p (by keeping only the rules of P at peer p) and a complete one (by adding to the rules of p all rules at ω that insert or delete up to M arbitrary tuples in relations of D@p, where M is the maximum number of updates in the head of a rule in P). Clearly, these are of little interest. Ideally, one would like a program that is both sound and complete. Unfortunately, as shown next, it is not generally possible to construct such a program.

Proposition 5.3. There exists a program P and a peer p such that there exists no view-program for P at p.

Proof. (sketch) The program P uses three binary relations R, S, T; a peer q sees all three, whereas peer p sees only R and T. Suppose p has a rule inserting an arbitrary tuple in R (so p can construct arbitrary instances over R). Peer q has two rules to add to S pairs in the transitive closure of R. Finally, q has a rule to transfer a tuple (0, 1) from S to T.
No view-program can exist for P at p because the insertion of the tuple (0, 1) in T@p is conditioned by the existence of a directed path of arbitrary length from 0 to 1 in R@p. This cannot be expressed by any rule with a bounded number of literals in its body.

Theorem 5.4. It is undecidable, given a program P and a peer p, whether there exists a view-program for P at p.

Proof. (sketch) We use the following observation: (⋆) it is undecidable, given a program Q whose schema includes a unary relation U, whether there exists a run of Q leading to an instance where U is non-empty. The proof of (⋆) is by a straightforward reduction from the Post Correspondence Problem, known to be undecidable [START_REF] Post | Recursive unsolvability of a problem of Thue[END_REF]. Intuitively, Q attempts to construct a solution to an instance of the PCP. If it succeeds, a fact is inserted in U (details omitted). Consider the program P in the proof of Proposition 5.3, modified so that the rule to transfer the tuple S(0, 1) to T(0, 1) is controlled by non-emptiness of U. Now add to P the rules of an arbitrary program Q whose schema contains U (and no other relations of P). Then a view-program at p exists for the resulting program iff there is no run of Q leading to some non-empty U. This is undecidable by (⋆).

Transparency and Boundedness. Clearly, an obstacle towards obtaining a view-program for P at p is that updates visible at p may depend on information unavailable to p. To overcome this difficulty, we consider a property of programs called "transparency". Intuitively, transparency of a program P for peer p guarantees that the possible updates of a view instance I@p caused by other peers are determined by I@p. Put differently, the other peers must disclose to p all information that they use in order to modify p's view of the data. The only action that can depend on hidden information is the creation of new values, which is constrained by the global history.

It turns out that transparency of a program P for p does not alone guarantee the existence of a view-program of P for p. This is because the other peers can still perform arbitrarily complex computations hidden from p. For instance, the program used in the proof of Proposition 5.3 is transparent for p but does not have a view-program for p. To control the complexity of computations affecting p, we introduce a notion of "boundedness" of P with respect to p, which limits the number of steps invisible but relevant to p that are carried out by other peers. As we will see, transparency together with boundedness guarantee the existence of a view-program and enable its effective construction. We next define transparency, then turn to boundedness.

To formalize the notion of transparency, we first define "fresh" instances, obtained as the results of events visible at p. We use the following notation: if α is a sequence of events of P yielding a run on initial instance I, we say that α is applicable at I and denote by α(I) the last instance in the run.

Definition 5.5. Let P be a program and p a peer. An instance I is p-fresh if I = ∅ or there exist an instance I' and an event e of P that is applicable at I' and visible at p, such that e(I') = I.

We can now define transparency. We will use the notion of minimum p-faithful run, defined as follows. A run α on initial instance I is a minimum p-faithful run if α = T_p^ω(α, v), where v consists of the events of α that are visible at p. In other words, α is its own minimum p-faithful scenario for p.
To deal with new values, we will need the following. For a sequence α, let adom(α) consist of all values occurring in α, and new(α) consist of all values a for which there is an event e in α such that a occurs in the head but not in the body of e. Let const(P) denote the set of constants used in program P, together with ⊥.

Definition 5.6. A program P is transparent for p if for all p-fresh instances I, J such that I@p = J@p the following holds. For every sequence α of events such that adom(J) ∩ new(α) = ∅, if α is a minimum p-faithful run on I such that all its events but the last are silent at p, then the same holds on J, and α(I)@p = α(J)@p.

Intuitively, transparency implies that the computation as seen from p depends only on what p sees, except for the specific choice of new values. The definition is illustrated by the following example.

Example 5.7. Consider again Example 5.1. It is easy to see that the program in the example is not transparent for Sue. Intuitively, this is because the relation cfoOK carries information that Sue does not see, yet it impacts Sue's view of the workflow. Now consider the following program (obtained by eliminating cfoOK):

+Cleared@hr(x) :-
+Approved@ceo(x) :- Cleared@ceo(x)
+Hire@hr(x) :- Approved@hr(x)

At first glance, the program may now appear to be transparent for Sue. However, this is not the case. Indeed, consider the instances I, J containing the following facts:

I: Cleared(Sue); Approved(Sue)
J: Cleared(Sue)

Clearly, I and J are Sue-fresh, since both can be obtained by an application of the Sue-visible event +Cleared@hr(Sue). In addition, I@Sue = J@Sue = {Cleared@Sue(Sue)}. The sequence consisting of the single event +Hire@hr(Sue) :- Approved@hr(Sue) is a minimum Sue-faithful run on instance I. Transparency requires it to also be applicable on J, which is not the case.

Intuitively, in order for transparency to hold, Sue-freshness must ensure that pre-existing information invisible to Sue, such as the fact Approved(Sue) in instance I, cannot be used in later events leading to transitions visible by Sue. This can be achieved in various ways. We illustrate one approach, which is also adopted in the design methodology for transparent programs in Section 6. We introduce an additional binary relation Stage, visible by all peers, that inhibits the use of any information computed prior to the latest Sue-visible update. Relation Stage is either empty or contains a single tuple Stage(0, s), where s is a value refreshed by peer Sue prior to events by other peers. The invisible relation Approved is extended with an extra column holding the current value of s. The program is the following:

+Stage@Sue(0, s) :- ¬Key_Stage@Sue(0)
+Cleared@hr(x), −Key_Stage@hr(0) :- Stage@hr(0, s)
+Approved@ceo(x, s) :- Cleared@ceo(x), Stage@ceo(0, s)
+Hire@hr(x), −Key_Stage@hr(0) :- Approved@hr(x, s), Stage@hr(0, s)

Observe that Stage(0, s) is deleted by each event visible at Sue, which forces its re-initialization with a fresh s before any event using invisible relations can proceed, preventing the use of previous invisible facts. It is easy to check that the program is now transparent for Sue.
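The failure of transparency in the middle program of Example 5.7 can be replayed concretely. The following sketch uses sets of ground facts as instances; the encoding is ours, not the paper's.

```python
I = {("Cleared", "Sue"), ("Approved", "Sue")}   # both instances are Sue-fresh
J = {("Cleared", "Sue")}

def at_sue(instance):
    """Sue's view: she sees only Cleared and Hire."""
    return {f for f in instance if f[0] in {"Cleared", "Hire"}}

assert at_sue(I) == at_sue(J)                   # I@Sue = J@Sue

# The event +Hire@hr(Sue) :- Approved@hr(Sue) is applicable on I only:
applicable = lambda inst: ("Approved", "Sue") in inst
assert applicable(I) and not applicable(J)      # transparency for Sue fails
```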
We next introduce the boundedness property.

Definition 5.8. Let P be a program, p a peer, and h an integer. P is h-bounded for p if for each instance I and sequence α of events applicable at I that yields a minimum p-faithful run at p such that all its events but the last are invisible at p, |α| ≤ h.

Intuitively, this bounds the number of consecutive events that are silent but relevant to p. Note that the bound applies only to minimum p-faithful subruns. Thus, the other peers are still allowed to carry out arbitrarily long silent computations that do not affect p.

We next consider the decidability of transparency and boundedness. It is easy to show:

Theorem 5.9. It is undecidable, given a program P and peer p, (i) whether P is transparent for p, and (ii) whether there exists h such that P is h-bounded for p.

Proof. Straightforward, using (⋆) in the proof of Theorem 5.4.

On the other hand, as shown next, it is decidable whether a program is h-bounded for some given h. Moreover, for programs that are h-bounded, transparency is decidable. In particular, it is decidable whether a program is simultaneously h-bounded and transparent. Indeed, we will see that the two together guarantee the existence of a view-program that can be effectively constructed. We first show that h-boundedness is decidable (see Appendix for proof). Intuitively, this holds because violations of h-boundedness are witnessed by minimum p-faithful runs of bounded length, on initial instances of bounded size.

Theorem 5.10. It is decidable in PSPACE, given a program P, peer p, and integer h, whether P is h-bounded for p.

Next, we show that transparency is decidable for h-bounded programs.

Theorem 5.11. The following are decidable in PSPACE: (i) given a program P that is h-bounded for peer p, whether P is transparent for p, and (ii) given a program P, a peer p, and an integer h, whether P is h-bounded and transparent for p.

The proof is provided in the appendix. Clearly, (ii) follows from (i) and Theorem 5.10. The proof of (i) relies on the existence of short counterexamples for violations of transparency by h-bounded programs.

Remark 5.12. Observe that the p-fresh instances used in the definition of transparency need not be reachable in actual runs of P. Thus, the definition has a "uniform" flavor, reminiscent of uniform containment for Datalog. Limiting transparency to reachable instances would yield a much weaker requirement, leading to undecidability of p-transparency even for h-bounded programs.

Previously, we assumed that the boundedness parameter h is given. There are various ways to obtain h. One approach is heuristic: by examining traces of runs, one can "guess" h and then test h-boundedness using Theorem 5.10. Another possibility, briefly considered in Section 6, is to provide syntactic restrictions on the program P that ensure h-boundedness for some h computable from P. Alternatively, we introduce in Section 6 means of ensuring by design the transparency and h-boundedness of a program, for a given peer and desired h.

View-programs and provenance. We show that for each program P and peer p such that P is h-bounded and transparent for p, one can construct a view-program of P for p. As discussed earlier, the view-program uses peers p and ω (for world). It contains the rules for p, and additional rules for ω that define the side-effects observed by p as a result of actions by other peers. We describe the construction of the rules for ω. Intuitively, the rules specify, for each instance I@p visible at p, the possible updates to I@p caused by minimal p-faithful runs of length up to h starting from I. The body of each rule specifies the tuples of I@p causing the update, and so intuitively provides the provenance of the update in terms of the data visible at p.

The view-program P@p. We outline informally the construction of the view-program of P for p, which we denote P@p.
Let C_{h+1} = {a_1, ..., a_m}, where C_{h+1} is the set of constants (polynomial in P) defined in the proof of Theorem 5.10. For each i ∈ [1, m], let x_i be a new distinct variable. Let ν be the mapping defined by ν(a_i) = x_i if a_i ∉ const(P) and ν(a_i) = a_i otherwise. Consider a p-fresh instance I and a sequence of events α of P, both over C_{h+1}, such that the tuples in I(R) use only keys in K(R, α) for each relation R, and α is a minimal p-faithful run of P on I in which all events but the last are invisible at p. Let J = α(I). Observe that by boundedness of P for p, |α| ≤ h, and so there are finitely many triples (I, α, J) as above. For each such triple (I, α, J), the view-program P@p contains a rule for peer ω constructed as follows. For each relation R:

• (positive body) For each t in I@p(R), add R@ω(ν(t)) to the body.
• (negative body) For each a_i in K(R, α) that is not a key value in I@p(R), add ¬Key_R@ω(ν(a_i)) to the body.
• (inequalities) For all a_i, a_j where i ≠ j, the body contains the inequality ν(a_i) ≠ ν(a_j).
• (head insertions) For each t in J@p(R) \ I@p(R), add +R@ω(ν(t)) to the head.
• (head deletions) For each a_i in I@p(Key_R@p) \ J@p(Key_R@p), add −Key_R@ω(ν(a_i)) to the head.

Observe that P@p is a syntactically valid program with schema D@p and peers p and ω. We next show that the construction is correct (see Appendix for proof).

Theorem 5.13. Given a program P that is h-bounded and transparent for peer p, the program P@p is a view-program of P for p.

Moreover, as suggested in Remark 5.2, the view-program P@p constructed above is sound and complete not only with respect to the linear runs of P as viewed by p, as required by the definition, but also with respect to its tree of runs as viewed by p. We omit the formal development.

6. TRANSPARENT PROGRAM DESIGN

Transparency and h-boundedness for a given peer may be desirable goals for some applications. There are various ways to achieve them. It is of course possible to first design the workflow program and then test it a posteriori for transparency and h-boundedness. However, it may be preferable in practice to specify directly view programs that are transparent and h-bounded by design. We begin the section by showing how this can be done by following some simple design guidelines. We then show that a large class of programs can be transformed so as to make them transparent and h-bounded, by filtering out the runs that violate these properties while preserving the runs that satisfy them (modulo some minor differences).

Before proceeding, we introduce some notions used throughout the section. Consider a run ρ, and a subsequence e.α.e' of consecutive events of ρ, of which the only events visible at p are e and e'. Then α.e' is a p-stage. In the following, each non-trivial stage (α ≠ ε) will be equipped with a unique id, in order to identify the facts produced during that stage. Intuitively, transparency is obtained by controlling the provenance of the facts produced in that stage that lead to the visible event e'. Generating the stage ids can be done using a binary relation Stage, visible by all peers. Stage is either empty or contains a single tuple with key 0. A fact Stage(0, s) indicates that the current stage id is s. When a peer q carries out an event visible at p, it deletes the current fact Stage(0, s) (if such exists). A special rule, which can be performed by any peer q, inserts a new tuple Stage(0, s') with a fresh value s'. Specifically, the rule is +Stage@q(0, z) :- ¬Key_Stage@q(0).
All rules generating events invisible at p are guarded by an atom Stage(0, x) (so all p-invisible events of the stage are preceded by the event creating a new stage id). Note that the above assumes that each peer q can tell whether its updates are visible at p. This is not always the case, but it holds under certain conditions, such as (C1) below.

Transparency and boundedness by design. We introduce design guidelines that guarantee transparency and h-boundedness for a designated peer p. It turns out to be rather subtle to guarantee transparency while allowing other peers to perform arbitrary computations that do not impact p. In order to ensure transparency of a program P for a peer p, we impose the following restrictions on program specifications, to be followed in the design process:

(C1) Each peer that sees a relation R visible at p (including p) sees it fully. Formally, for each relation R@p ∈ D@p, if R@q ∈ D@q then att(R@q) = att(R) and σ(R@q) = true.

(C2) The program maintains the Stage relation as previously described. Note that, because of (C1), every non-trivial update of a relation in D@p, caused by any peer, is also visible at p. Thus, every peer can tell when it performs an update visible at p. As noted previously, this enables the maintenance of relation Stage.

(C3) The relations in D are separated into two disjoint classes: p-transparent and p-opaque. The relations that p sees are all p-transparent. The p-transparent relations that p does not see include an attribute, StageID, that contains the id of the stage in which the tuple was created.

(C4) If an event modifies some p-transparent relation, (i) only positive facts from p-transparent relations with the current stage id can occur in its body, and (ii) all the updates in the head are either updates of p-visible relations, creations of tuples with new keys in a p-transparent relation, or modifications of tuples in such a relation that have been created during the same stage and are visible by the peer performing the event.

When p is understood, we simply speak of transparent and opaque relations/facts. It is straightforward to specify syntactic criteria guaranteeing (C1)-(C4). For instance, (C4)(ii) can be ensured as follows: if +R@q(x, u) occurs in the head of a rule for a transparent p-invisible relation R, then either x is a variable that does not occur in the body (so it generates a new key) or R@q(x, v) occurs in the body, where v(StageID) is the current stage id provided by relation Stage.

Condition (C1) is natural in many applications where peers are doing some computation about a peer p for which transparency is desired. For instance, p may be a customer, a job applicant, a participant in a crowdsourcing application, etc. Intuitively, (C1) prevents a peer from unknowingly performing some update that is visible at p. We briefly elaborate on the motivation of (C4). Clearly, the use of opaque relations in rule bodies may lead to non-transparent computations. The restriction disallowing deletions from p-invisible transparent relations in heads of rules simplifies the presentation, but such deletions could be allowed at the cost of a more complex construction. The following example illustrates the motivation for prohibiting simultaneous updates of transparent and opaque relations in rule heads.

Example 6.1. Consider a workflow program P with a peer p, a p-visible relation R, and a p-invisible opaque relation T. Note that there is no transparent invisible relation.
Suppose that P includes the following rules:

+R@q(Sue, hire), +T@q(Sue, hire) :-
+R@q(Sue, reject), +T@q(Sue, reject) :-

The other peers may have silently computed for an arbitrarily long time, and derived T(Sue, reject). This precludes application of the first rule. Intuitively, they have ruled out a possible future event for Sue without letting her know, thus violating transparency.

It is straightforward to see that a program satisfying (C1)-(C4) is transparent for p. Note that other peers may perform arbitrary computations as long as they do not affect what p sees. Observe that the transparent program shown in Example 5.7 follows the previous design guidelines.

We next show how to guarantee h-boundedness within a stage α.e' immediately following an event e. We wish to ensure that the minimum p-faithful subrun of α.e' contains no more than h events, leading to the activation of the visible event e'. This could easily be done by limiting the length of the entire stage to h + 1 using a propositional counter. However, this brute-force solution would be overly restrictive, because one often wishes to allow within the same stage an unbounded number of events that affect only p-opaque relations, or p-transparent relations in events not leading to e'. Achieving this requires a more careful approach, in which the "steps" in each stage are identified by ids consisting of fresh values. More precisely, each event within the stage α.e' is called a step, and is identified by a step id. We will use the notion of step-provenance of a fact, i.e., the set of step ids in that stage that contribute to deriving the fact.

In more detail, we equip each p-transparent relation invisible at p, say Q, with h additional columns B_1, ..., B_h that are used to record the step-provenance of each fact in the relation (the ids of the steps contributing to its creation). When an update +Q(u, b_1, ..., b_h) is performed, its set of non-⊥ B_i's is set to the concatenation of all distinct non-⊥ values of the B_i's in the body of the event, augmented with a new id for the current step (shared by all insertions in the head). Recall that by (C4) there is no key deletion from Q. (In some sense, all the facts in Q are logically deleted when a new stage is entered.) Thus, a p-transparent event can be activated only if there is "enough room", i.e., if a sequence of at most h events of the stage suffices to enable this event. In particular, the last update of the stage has at most h non-⊥ B_i's. The events of the corresponding steps provide a p-equivalent sequence of length at most h, so the minimal p-faithful scenario for that stage has length at most h. Thus, the resulting run is h-bounded for p. In summary, we can show:

Theorem 6.2. Each program obtained using the aforementioned guidelines is transparent and h-bounded for p.

Boundedness by acyclicity. We next show how to guarantee boundedness for a certain class of programs using an acyclicity condition. We consider programs with single updates in heads of rules (which we call linear-head programs), satisfying (C1). For such a program, we define the p-graph of P as follows. Its nodes are the relations in D, and there is an edge (intuitively, "depends on") from R to Q if Q is invisible at p and there is a rule at some peer q whose head is +R@q(u) or −Key_R@q(h) and whose body contains Q@q(v) or ¬Key_Q@q(k). The program P is p-acyclic if for each R@p ∈ D@p, the subgraph of its p-graph induced by the nodes reachable from R is acyclic.
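p-acyclicity is easy to test. In the following sketch a rule is summarized by the relation written in its (single-update) head and the relations read in its body; these names and the graph encoding are our own.

```python
def p_acyclic(rules, p_visible, p_invisible):
    """rules: iterable of (head_rel, body_rels) pairs. Edges R -> Q exist
    for invisible body relations Q; check acyclicity below each R@p."""
    graph = {}
    for head_rel, body_rels in rules:
        edges = graph.setdefault(head_rel, set())
        edges.update(q for q in body_rels if q in p_invisible)

    def cyclic_from(start):
        status = {}                        # "active" = on the DFS stack
        def dfs(u):
            if status.get(u) == "active":
                return True                # back edge: a reachable cycle
            if status.get(u) == "done":
                return False
            status[u] = "active"
            if any(dfs(v) for v in graph.get(u, ())):
                return True
            status[u] = "done"
            return False
        return dfs(start)

    return not any(cyclic_from(r) for r in p_visible)
```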
We can show the following (see Appendix for proof).

Theorem 6.3. Let P be a linear-head program over schema D satisfying (C1). Let b be the maximum number of facts in a rule body of P, d = |D|, and a the maximum arity of a relation in D plus one. If P is p-acyclic then it is h-bounded for p, where h = (ab + 1)^d.

Enforcing transparency and boundedness. We next show, given a program P and peer p, how to rewrite P into a program that is transparent and h-bounded for p and has essentially the same behaviour as P, except that it filters out the runs that are either not transparent or not h-bounded for p. We already defined these properties for programs; we need to define them for runs.

Definition 6.4. Let P be a program and p a peer. A run ρ of P is transparent for p if for each stage α.e' of ρ, the minimum p-faithful subrun α'.e' of α.e' has the following property (where I is the instance at the start of the stage). For any p-fresh instance J such that I@p = J@p and adom(J) ∩ new(α'.e') = ∅, α'.e' is a minimum p-faithful run on J, all its events but e' are silent at p, and α'.e'(I)@p = α'.e'(J)@p. We say that ρ is h-bounded for p if |α'.e'| ≤ h for every α'.e' as above. The set of transparent and h-bounded runs of P for p is denoted tRuns_{p,h}(P).

Our rewriting technique applies to programs satisfying certain conditions, which we call transparency-form. Unlike the conditions used in the design guidelines, transparency-form does not require separating transparent and opaque relations at the schema level, instead allowing a more refined distinction at the fact level. In this more permissive setting, runs are no longer guaranteed to be transparent and h-bounded. However, we show how the violating runs can be filtered out.

Definition 6.5. A normal-form program is in transparency-form (TF for short) for p if it satisfies (C1)-(C2) and:

(C3') For each rule of a peer q ≠ p, if its head contains an update +R@q(x, ȳ) for some R that p does not see, either x is a variable that does not appear in the body (key creation) or the body contains an atom R@q(x, z).

(C4') For each relation R that p does not see, and each peer q such that R@q ∈ D@q, the selection σ(R@q) uses only attributes in att(R@q).

As discussed earlier, condition (C1) is meant to guarantee that a peer knows when it is performing an event visible at p, which enables maintaining the relation Stage. (C3') is a natural condition that essentially comes down to preventing the "reuse" of a key after it has been deleted. The motivation for (C4') is more subtle. The presence of a fact in the view of some peer q may depend (because of selections) on some values that q does not see and that have been derived in a non-transparent manner. This may lead to violations of transparency that cannot be filtered out. We will show the main result of this section for programs satisfying (C1)-(C2)-(C3')-(C4'). We first need to introduce the notion of "run projection".

Definition 6.6. (run projection) Let P be a program over schema D. Let Π be a schema consisting of a subset of the relations in D, each having a subset of its attributes in D (always containing the key). Π induces a projection function on runs, defined as follows. The projection Π(ρ) of a run ρ of P is obtained from ρ by removing facts and updates over relations not in Π, projecting out the missing attributes in facts and updates over relations in Π, and removing events with resulting empty heads. We say that Π is the identity for peer p if for every run ρ of P, Π(ρ)@p = ρ@p. We extend this definition to a set Runs of runs and denote the result Π(Runs).

We now state the main result of the section:

Theorem 6.7. Let P be a TF program, p a peer, and h an integer.
One can construct a program P^t that is transparent and h-bounded for p, and a projection function Π that is the identity for p, such that the runs of P that are transparent and h-bounded for p are exactly the projections of the runs of P^t, i.e., tRuns_{p,h}(P) = Π(Runs(P^t)).

The construction of the program P^t is outlined below. From Theorems 5.13 and 6.7 it also follows that, for an arbitrary TF program P, we can obtain a view-program that specifies precisely the views at p of the runs of P that are transparent and h-bounded for p.

Corollary 6.8. Let P be a TF program, p a peer, and h an integer. One can construct a view-program P^t_p for P^t and p such that Runs(P^t_p) = {ρ@p | ρ ∈ tRuns_{p,h}(P)}.

Remark 6.9. Note that, if the peers attempt to perform a non-transparent computation, the transformed program P^t will prevent carrying out the run and the computation may block. In practice, one might want to let the computation proceed and simply send an alert. Alternatively, one might wish to perform some "recovery", e.g., roll back to the state at the beginning of the stage. It is possible to modify P^t to implement such an alert or roll-back.

Program construction. We next outline the construction of the program P^t from the given TF program P. Intuitively, we need to identify, in each stage of a run of P, the "transparent facts" that have been obtained in a transparent manner within that stage. Transparent facts can only be created or updated by "transparent events", in which all the facts used in the body are transparent. More precisely, a positive fact is transparent at a particular time within a stage if it is p-visible, or if all events that participate in defining that fact up to that time, within that stage, are transparent. A negative fact is transparent if it concerns a p-visible relation or if its key was transparently created and transparently deleted in the same stage (recall that by (C3') keys cannot be reused after being deleted).

We next enrich the schema of P in order to keep track of the transparent facts. There are some subtleties in the process: (i) a p-invisible tuple may have portions that are transparent and portions that are not, (ii) step-provenance has to be recorded at the level of attributes rather than of the entire fact, and (iii) the system remembers which keys were created and then deleted transparently during the stage.

The schema is modified as follows. Each relation R of P has a corresponding relation R^t in P^t. Tuples in R^t use the same keys as in R; intuitively, the tuple with key k of R^t holds information about the tuple with key k in R. For each attribute A of R, R^t has an attribute tA. For each q, tA has the same visibility as A, i.e., tA ∈ att(R^t@q) iff A ∈ att(R@q). For the key K, besides the attribute tK, the relation R^t includes an attribute dK with the same visibility as K. Intuitively, the attribute tA indicates whether the value of the corresponding attribute was produced transparently (its value is ⊥) or not (it has the particular value 1). Each p-visible fact is transparent by definition. The attribute dK is turned to 1 when the tuple is deleted transparently. Finally, for each A ∈ att(R), R^t has attributes A^s_1, ..., A^s_h, where A^s_1 holds the step id of the event that defined this attribute, and the others provide the step-provenance of that event (the list of step ids that lead to it).
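At the heart of the construction is the bookkeeping of step-provenance in the h slots A^s_1, ..., A^s_h, which is what enforces h-boundedness (the constraint |H| ≤ h is detailed below). A minimal sketch of the merge, in our own flat-set representation:

```python
def merge_provenance(body_provenances, current_step, h):
    """body_provenances: the step-id sets recorded on the transparent
    facts read by the event. Returns the set H to record on the derived
    fact, or None when |H| > h: the event cannot fire transparently."""
    H = {current_step}
    for prov in body_provenances:
        H |= prov
    return H if len(H) <= h else None
```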
A rule r of P at peer q is transformed into a set of transparent rules in P^t by adding new atoms and updates as follows. For each atom R@q(k, u) in the body, we add an atom R^t@q(k, ...) to the body to record extra information about k, and for each atom ¬Key_R@q(k), an atom R^t@q(k, ...) to the body to record extra information about k, including the fact that the deletion of k was transparent. For each update +R@q(k, u), there is an update +R^t@q(k, ...), and for each update −Key_R@q(k) an update +R^t@q(k, ...). The information included in +R^t@q is explained below.

Consider a fact R@q(k, u) in a p-invisible relation. Suppose that the tuple with key k in R^t@q satisfies: for each A in att(R@q), the value of tA is ⊥, the tuple's stage (as provided by K^s_1) is the current stage id, and dK is ⊥. Then R@q(k, u) holds transparently. Now, suppose ¬Key_R@q(k) holds. Suppose that the tuple with key k in R^t@q satisfies: tK is ⊥, the tuple's stage (as provided by K^s_1) is the current stage id, and dK is 1. Then ¬Key_R@q(k) holds transparently.

To detail the use of the A^s_i attributes, consider the firing of a transparent event and let H be the set of step ids occurring in its body, augmented with the current step id. Then:

• |H| ≤ h.
• If the event modifies a non-key attribute A (there is a single step in the minimum p-faithful subrun of a stage that may do that), the set of non-⊥ values in the A^s_i attributes of the resulting tuple is H.
• If the event creates a tuple with a new key, the set of non-⊥ values of the K^s_i attributes of the created tuple equals H.
• If the event deletes a tuple already recording a set H' of step ids in its attributes K^s_i, the values in H \ H' are added in still-available places in the K^s_i.

Note that this imposes that runs can only progress transparently as long as there is enough space available to record h step ids, which guarantees h-boundedness. It is also important to observe that, in transparent events, all updates are effective. This is guaranteed because the program is in TF. The program also allows non-transparent events. These may come from the use of some non-transparent fact in the body. They may also come from the use of only transparent facts in the body, but such that the number of step ids occurring in them, plus one, is larger than h. When an event is not transparent, it is not allowed to modify a visible relation; it can only update other relations in an opaque manner.

The program P can be modified to incorporate the above information, allowing it to trace transparency status and step ids. All the necessary information can be maintained as outlined above. Each rule of P yields at most exponentially many new rules, resulting from a case analysis on the transparency status of the attributes and the number of steps contributing to their generation. One can show correctness of the construction, which yields Theorem 6.7 and Corollary 6.8.
7. RELATED WORK

A survey on data-centric business process management is provided in [START_REF] Hull | Data management perspectives on business process management: tutorial overview[END_REF], and surveys on formal analysis of data-centric workflows are presented in [START_REF] Calvanese | Foundations of data-aware process analysis: a database theory perspective[END_REF][START_REF] Deutsch | Automatic verification of database-centric systems[END_REF]. Although not focused explicitly on workflows, Dedalus [START_REF] Alvaro | Dedalus: Datalog in time and space[END_REF][START_REF] Hellerstein | The declarative imperative: experiences and conjectures in distributed logic[END_REF] and Webdamlog [START_REF] Abiteboul | A rule-based language for web data management[END_REF][START_REF] Abiteboul | Viewing the web as a distributed knowledge base[END_REF] are systems supporting distributed data processing based on condition/action rules. Local-as-view approaches are considered in a number of P2P data management systems, e.g., Piazza [START_REF] Tatarinov | The piazza peer data management project[END_REF], which also considers richer mappings to specify views. Update propagation between views is considered in a number of systems, e.g., based on ECA rules in Hyperion [START_REF] Arenas | The hyperion project: from data integration to data coordination[END_REF]. Finite-state workflows with multiple peers have been formalized and extensively studied using communicating finite-state systems (called CFSMs in [START_REF] Parosh | Verifying programs with unreliable channels[END_REF][START_REF] Brand | On communicating finite-state machines[END_REF], and e-compositions in the context of Web services, as surveyed in [START_REF] Hull | Web services composition: A story of models, automata, and logics[END_REF][START_REF] Hull | Tools for composite web services: a short overview[END_REF]). Formal research on infinite-state, data-driven collaborative workflows is still at an early stage. The business artifact model [START_REF] Nigam | Business artifacts: An approach to operational specification[END_REF] pioneered data-driven workflows, but formal studies have focused on the single-user scenario. Compositions of data-driven web services are studied in [START_REF] Deutsch | Verification of communicating data-driven web services[END_REF], focusing on automatic verification. Active XML [START_REF] Abiteboul | The Active XML project: an overview[END_REF] provides distributed data-driven workflows manipulating XML data. A collaborative system for distributed data sharing geared towards life-science applications is provided by the Orchestra project [START_REF] Green | Orchestra: facilitating collaborative data sharing[END_REF][START_REF] Ives | The orchestra collaborative data sharing system[END_REF].

Our model is an extension of the collaborative data-driven workflow model of [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF]. The results in [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF] focus on the ability of peers to reason about temporal properties of global runs based on their local observations, and are orthogonal to the present investigation. To enable static analysis, the model of [START_REF] Abiteboul | Collaborative data-driven workflows: think global, act local[END_REF] uses more restricted views than those considered here. Attaching provenance to facts derived in a rule-based language is considered in, e.g., [START_REF] Zaychik Moffitt | Collaborative access control in Webdamlog[END_REF][START_REF] Abiteboul | A formal study of collaborative access control in distributed datalog[END_REF]. The paper [START_REF] Bourhis | Analyzing data-centric applications: Why, what-if, and how-to[END_REF] studies a notion of explanation in a model of data-centric workflows with a single user, no views, and no abstraction.
They consider a notion of explanation that has some similarities to our faithful explanations, but is much simpler, and their results do not apply here. There has been extensive work on causality and explanations in logic and AI. More specific to data-centric workflows, the relationship between provenance, explanations, and causality is considered in [START_REF] Cheney | Causality and the semantics of provenance[END_REF]; the focus there is on the provenance of data resulting from complex processes, such as scientific workflows. The synthesis of view-programs described in Section 5 is related in spirit to partner synthesis for services modeled as Petri nets [START_REF] Wolf | Does my service have partners? Trans. Petri Nets and Other Models of Concurrency[END_REF][START_REF] Lohmann | Wendy: A tool to synthesize partners for services[END_REF][START_REF] Sürmeli | Synthesizing cost-minimal partners for services[END_REF].

The issue of the transparency of algorithms is gaining increased attention; see, e.g., the Data Transparency Lab (datatransparencylab.org/) in the US and the Transalgo Lab starting in France. Data transparency has been studied in different contexts. For instance, the causality of machine-learning-based decisions is studied in [START_REF] Datta | Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems[END_REF]. Workflow transparency sometimes refers to the ability to consider a business process independently of the workflow implementing it, an aspect not considered here. Data transparency has also been considered in the context of workflows in [START_REF] Wolter | Collaborative workflow management for egovernment[END_REF], where an architecture for providing transparency in human-centric eGovernment workflows is proposed.

8. CONCLUSION

In this paper, we formally studied the problem of providing explanations of data-driven collaborative workflows to peers participating in the workflows, exploring semantic and computational issues. We identified faithful scenarios for a peer p as a particularly appealing basis for explanations from a semantic viewpoint. In a first contribution, we showed that faithful scenarios form a semiring with respect to union and intersection, implying the existence of a unique minimal faithful scenario for each peer, computable in polynomial time, and enabling incremental maintenance of scenarios. In a second contribution, we identified desirable properties of workflows, namely transparency and boundedness, that guarantee the existence of a view-program for a peer p, and showed how such a program can be constructed. Finally, we studied how programs satisfying transparency and h-boundedness for some peer p can be designed. We also showed how, under certain restrictions, runs violating transparency or h-boundedness can be filtered out. A remaining open question is whether it is possible to perform such filtering for arbitrary workflows.

It is possible to implement workflow programs by relying on a master server that has access to all the information, receives the updates, propagates them to the appropriate peers, and controls transparency and boundedness for certain peers. Blockchain technology provides an alternative to such a central authority. It is, in spirit, an excellent match for collaborative workflows. A blockchain-based implementation of collaborative workflows is therefore a promising research direction with challenging technical issues, notably with respect to performance and access control.
Acknowledgment. Serge Abiteboul and Pierre Bourhis are supported by the ANR Project Headwork, ANR-16-CE23-0015, 2016-2021. Victor Vianu is supported in part by the National Science Foundation under award IIS-1422375.

A. APPENDIX

We provide proof outlines for several results of the paper.

Proof of Theorem 3.4. Membership in coNP is immediate. For hardness, we reduce the problem of testing unsatisfiability of a Boolean formula to testing whether a scenario for some peer p is minimal. Let φ be a formula over some set X = {x_1, ..., x_n} of variables. We assume without loss of generality that (*) φ does not hold for the valuation mapping all variables to true. Let R be a relation of arity n + 2, with key K, attributes A_x for each x ∈ X, and a last attribute A_q. For each variable x ∈ X, there is a peer p_x that sees the projection of R over K, A_x. There is a peer q that sees K, A_q. In addition, there is a peer p that sees the projection of R on K with the selection σ_p = (A_q = 1) ∧ (γ ∨ γ_φ), where γ = ∧_{x∈X}(A_x = 1) and γ_φ checks that the formula φ is true for the valuation ν of X such that ν(x) = true iff A_x = 1. The program consists of all ground rules of the form

r_{x_i}: +R@p_{x_i}(0, 1) :- (for each x_i ∈ X)
e: +R@q(0, 1) :-

Consider the run ρ consisting of r_{x_1} . . . r_{x_n} e. Observe that p sees R@p(0) only after the last event, because of the condition A_q = 1 in its selection condition on R. We prove that φ is not satisfiable iff ρ is a minimal scenario of ρ at p.

First suppose that φ is satisfiable. Let ν be a valuation satisfying φ. Consider the subsequence r_{x_{i_1}} . . . r_{x_{i_k}} e obtained by keeping only the events r_{x_{i_j}} such that ν(x_{i_j}) is true. By (*), it is shorter than the original sequence. Let ρ_ν be the corresponding run. It is easy to see that ρ_ν is a strict subrun of ρ and that ρ_ν@p = ρ@p. Thus ρ is not minimal at p. Now suppose that φ is not satisfiable. Let ρ' be a scenario of ρ at p. Because φ is not satisfiable, γ_φ can never be satisfied, so in order for p to see R@p(0), it is necessary that γ hold, so ρ' must contain all events in ρ. Thus, ρ is minimal. □
Proof of Lemma 4.6. Let ρ = {(e_i, I_i)}_{0≤i≤n} and α = {e_{i_j}}_{0≤j≤m}. We prove by induction on h (0 ≤ h ≤ m) that

(†) α|_h = {e_{i_j}}_{0≤j≤h} yields a subrun {(e_{i_j}, I'_j)}_{0≤j≤h} of ρ and I'_h@p = I_{i_h}@p.

Suppose (†) holds. Then α yields a subrun of ρ, establishing (i). Additionally, I'_j@p = I_{i_j}@p for 0 ≤ j ≤ m. This, together with the fact that α includes all events in ρ visible at p, implies that run(α)@p = ρ@p. Thus, run(α) is a scenario of ρ for p, establishing (ii). We now prove (†).

For the basis, let h = 0. We need to show that (a) e_{i_0} is applicable at the empty instance and (b) I'_0@p = I_{i_0}@p. For (a), suppose the body of e_{i_0} contains a literal R@q(k, u). Then i_0 belongs to an R-lifecycle of k in ρ, whose left boundary precedes e_{i_0} and must be included in α, a contradiction. Thus the body of e_{i_0} contains no literal of the form R@q(k, u), and e_{i_0} is applicable at the empty instance, proving (a). Now consider (b). Consider t ∈ I'_0@p(R@p) with key k. Then e_{i_0} must insert in R a tuple t' with the same key, such that t'@p = t. Thus, i_0 belongs to the R-lifecycle of k in ρ, and in fact it must be its left boundary (otherwise, by boundary faithfulness, e_{i_0} would have to be preceded by another event in α, a contradiction). It follows that I'_0 and I_{i_0} both contain t', so t ∈ I_{i_0}@p(R@p). Thus, I'_0@p ⊆ I_{i_0}@p. Conversely, let t ∈ I_{i_0}@p(R@p) with key k. Let h' (h' ≤ i_0) be the minimum index in the same R-lifecycle of k in ρ for which a tuple with key k is visible by p in I_{h'}@p(R@p). It follows that e_{h'} is visible at p, so it is included in α, so h' = i_0 and e_{i_0} must insert a tuple t' with key k. Moreover, i_0 must also be the left boundary of the R-lifecycle (or else that left boundary would have to be included in α prior to e_{i_0}). It follows that t'@p = t. Thus, t ∈ I'_0(R@p), and I_{i_0}@p ⊆ I'_0@p. This completes the basis.

For the induction step, suppose {e_{i_j}}_{0≤j≤h} yields a subrun {(e_{i_j}, I'_j)}_{0≤j≤h} of ρ where I'_h@p = I_{i_h}@p, for h < m, and consider e_{i_{h+1}}. For (a), we need to show that e_{i_{h+1}} is applicable in I'_h. Let R@q(k, u) occur in the body of e_{i_{h+1}}. Then i_h belongs to an R-lifecycle of k in ρ, and, by modification faithfulness, all prior events of the R-lifecycle that affect attributes in att(R, q) of tuples with key k are included in α. It follows that I'_h(R) and I_{i_h}(R) both contain tuples with key k that agree on att(R, q). Thus, since R@q(k, u) holds in I_{i_h}, it also holds in I'_h. Next, suppose ¬Key_R@q(k) occurs in the body of e_{i_{h+1}}. Suppose k ∈ I_{i_h}(Key_R). Then, similarly to the previous case, I'_h(R) and I_{i_h}(R) both contain tuples with key k that agree on att(R, q), so ¬Key_R@q(k) holds in I'_h. Now suppose that k ∉ I_{i_h}(Key_R) but k ∈ I'_h(Key_R). Let v < h be the left boundary of the R-lifecycle in run(α|_h) to which h belongs. It follows that i_v belongs to an R-lifecycle of k in ρ but i_h does not, so the R-lifecycle has a right boundary in ρ occurring before i_h, which by boundary faithfulness must also belong to α. This contradicts the fact that h is in an R-lifecycle of k in run(α|_h). So k ∉ I'_h(Key_R) and ¬Key_R@q(k) holds in I'_h. In summary, e_{i_{h+1}} is applicable in I'_h.

Now consider (b). Let t ∈ I'_{h+1}@p(R@p), where t has key k. Thus, I'_{h+1}(R) contains a tuple t' with key k such that t'@p = t. From boundary and modification faithfulness it easily follows that I_{i_{h+1}}(R) contains a tuple t'' with key k that agrees with t' on att(R, p), so t''@p = t'@p = t and t ∈ I_{i_{h+1}}@p(R@p). Thus, I'_{h+1}@p ⊆ I_{i_{h+1}}@p. Conversely, let t ∈ I_{i_{h+1}}@p(R@p) with key k. Similarly to the base case, let h' be the minimum index in the same R-lifecycle of k in ρ for which a tuple with key k is visible by p in I_{h'}@p(R@p). It follows that e_{h'} is visible at p and so is included in α. Clearly, e_{h'} must contain an insertion of a tuple with key k into R. From boundary and modification faithfulness it follows that I_{i_{h+1}}(R) and I'_{h+1}(R) contain tuples with key k that agree on att(R, p), so t ∈ I'_{h+1}@p(R@p). Thus, I_{i_{h+1}}@p ⊆ I'_{h+1}@p. This completes the induction and the proof of (†). □
Observe that $k \in K(R, e_{i_j})$ need not belong to an $R$-lifecycle containing $i_j$, because $k$ may occur in a negative literal $\neg Key_R@q(k)$.

Acknowledgment. Serge Abiteboul and Pierre Bourhis are supported by the ANR Project Headwork, ANR-16-CE23-0015, 2016-2021. Victor Vianu is supported in part by the National Science Foundation under award IIS-1422375.

APPENDIX A

We provide proof outlines for several results of the paper.

Proof of Theorem 3.4. Membership in coNP is immediate. For hardness, we reduce the problem of testing unsatisfiability of a Boolean formula to testing whether a scenario for some peer $p$ is minimal. Let $\varphi$ be a formula over some set $X = \{x_1, \ldots, x_n\}$ of variables. We assume without loss of generality that $(*)$ $\varphi$ does not hold for the valuation mapping all variables to true.

Proof of Theorem 4.8. We first note the following useful fact, which follows immediately from the definition of $T_p(\rho, \cdot)$.

Lemma A.1. The operator $T_p(\rho, \cdot)$ is additive: for all subsequences $\alpha_1, \alpha_2$ of $e(\rho)$, $T_p(\rho, \alpha_1 + \alpha_2) = T_p(\rho, \alpha_1) + T_p(\rho, \alpha_2)$.

We now turn to the proof of the theorem. Let $\rho_1$ and $\rho_2$ be $p$-faithful scenarios of $\rho$. Consider first $e(\rho_1) + e(\rho_2)$. By definition, $e(\rho_1) + e(\rho_2)$ contains all events of $\rho$ visible at $p$. By additivity of $T_p(\rho, \cdot)$, $T_p(\rho, e(\rho_1) + e(\rho_2)) = T_p(\rho, e(\rho_1)) + T_p(\rho, e(\rho_2)) = e(\rho_1) + e(\rho_2)$. Thus $e(\rho_1) + e(\rho_2)$ is also a fixed point of $T_p(\rho, \cdot)$, and so it is a $p$-faithful scenario of $\rho$. Now consider $e(\rho_1) * e(\rho_2)$. Since $\rho_1$ and $\rho_2$ are $p$-faithful scenarios, $e(\rho_1)$ and $e(\rho_2)$ are fixed points of $T_p(\rho, \cdot)$ and contain all events of $\rho$ visible at $p$. Since $e(\rho_1) * e(\rho_2) \preceq e(\rho_1)$ and $e(\rho_1) * e(\rho_2) \preceq e(\rho_2)$, it follows that $T^\omega_p(\rho, e(\rho_1) * e(\rho_2)) \preceq e(\rho_1)$ and $T^\omega_p(\rho, e(\rho_1) * e(\rho_2)) \preceq e(\rho_2)$, so $T^\omega_p(\rho, e(\rho_1) * e(\rho_2)) \preceq e(\rho_1) * e(\rho_2)$. Since $e(\rho_1) * e(\rho_2)$ also contains all events visible at $p$, by Lemma 4.6, $e(\rho_1) * e(\rho_2)$ yields a $p$-faithful scenario of $\rho$. Finally, observe that multiplication distributes over addition, $\epsilon$ is the additive identity, and $\rho$ is the multiplicative identity. $\Box$

Proof of Theorem 5.10. We begin with two technical lemmas. The first essentially says that various properties of sequences of events are invariant under isomorphism.

Lemma A.2. Let $I$ be an instance and $\alpha = e_1 \ldots e_n$ a sequence of events applicable at $I$. Let $f$ be a bijection on $dom$ that is the identity on $const(P)$. We denote by $f(\alpha)$ the sequence of events obtained by applying $f$ to every value occurring in $\alpha$. Then $\alpha$ is a (minimum) $p$-faithful run on $I$ iff $f(\alpha)$ is a (minimum) $p$-faithful run on $f(I)$, and the events visible at $p$ are the same in the two runs.

Proof. Straightforward induction on the length of $\alpha$.

We also need the following. Recall that, for each relation $R$, $K(R, \alpha)$ denotes the set of values occurring as keys of relation $R$ in some event of $\alpha$. For an instance $I$, we denote by $I|K(\alpha)$ the instance retaining, for each relation $R$, only the tuples in $I(R)$ with keys in $K(R, \alpha)$.

Lemma A.3. Let $I$ be an instance, $\alpha$ a sequence of events, and $I|K(\alpha) \subseteq J \subseteq I$. The following hold: (i) if $\alpha$ is a (minimum) $p$-faithful run on $I$ then it is also a (minimum) $p$-faithful run on $J$, and the events visible at $p$ are the same in the two runs; (ii) if $\alpha$ is a (minimum) $p$-faithful run on $J$ such that $adom(I) \cap new(\alpha) = \emptyset$ then $\alpha$ is also a (minimum) $p$-faithful run on $I$, and the events visible at $p$ are the same in the two runs.

Proof. Also by induction on the length of $\alpha$.
The only subtlety concerns new values. For (i), note that if an event of $\alpha$ creates a new value on $I$, that value is also new in the run on $J$, since $J \subseteq I$. For (ii), the converse holds because $adom(I) \cap new(\alpha) = \emptyset$, ensuring that the new values created in $\alpha$ do not occur in $I$.

We can now prove Theorem 5.10. By definition, $P$ is not $h$-bounded iff ($\ddagger$) there is an instance $I$ and a sequence $\alpha$ of events, of length $h + 1$, that yields a minimum $p$-faithful run on initial instance $I$, such that all of its events but the last are silent at $p$. Let $c_m$ be the maximum number of values occurring in a sequence $\alpha$ of events of length at most $m$ and an instance $I$ such that the tuples in $I(R)$ use only keys in $K(R, \alpha)$ for each relation $R$. Let $\bar{c}_m = |const(P)| + c_m$ and let $C_m$ consist of $const(P)$ together with $c_m$ additional distinct constants (so $|C_m| = \bar{c}_m$). By Lemmas A.2 and A.3 (i), it is sufficient to check ($\ddagger$) for sequences $\alpha$ of events of length at most $h + 1$ and instances $I$, both using only values in $C_{h+1}$. This establishes decidability. The PSPACE upper bound follows from the fact that $\bar{c}_{h+1}$ is polynomial in $h$ and $P$, which yields a nondeterministic PSPACE test. This completes the proof. $\Box$

Proof of Theorem 5.11. Clearly, (ii) follows from (i) and Theorem 5.10. Consider (i). Let $P$ be a program that is $h$-bounded for $p$. By a slight reformulation of the definition of transparency, $P$ is transparent for $p$ iff the following holds.

($\dagger$) For all instances $I_1, I_2$ and events $e_1, e_2$ such that $e_i$ is applicable at $I_i$ and visible at $p$ ($i = 1, 2$) and $e_1(I_1)@p = e_2(I_2)@p$, and for each sequence $\alpha$ of events such that $adom(e_2(I_2)) \cap new(\alpha) = \emptyset$: if $\alpha$ is a minimum $p$-faithful run on $e_1(I_1)$ such that all its events but the last are silent at $p$, then the same holds on $e_2(I_2)$, and $\alpha(e_1(I_1))@p = \alpha(e_2(I_2))@p$.

We show ($\dagger'$): ($\dagger$) holds iff it holds for all instances $I'_1, I'_2$ such that, for each relation $R$, $I'_i(R)$ contains at most $\bar{c}_{|\alpha|+2}$ tuples. For suppose this holds. Since $P$ is $h$-bounded, it is sufficient to check ($\dagger$) for instances $I_1$ and $I_2$ with at most $\bar{c}_{h+2}$ tuples in each relation. The existence of counterexamples can then be checked by a nondeterministic PSPACE algorithm. This completes the proof.

We now show ($\dagger'$). The "only if" part is trivial. Consider the "if" part. Suppose ($\dagger$) holds for all instances $I'_1, I'_2$ such that, for each relation $R$, $I'_i(R)$ contains at most $\bar{c}_{|\alpha|+2}$ tuples. Let $I_1, I_2$ be arbitrary instances, $e_1, e_2$ events such that $e_i$ is applicable at $I_i$ and visible at $p$ ($i = 1, 2$), $e_1(I_1)@p = e_2(I_2)@p$, $\alpha$ a minimum $p$-faithful run on $e_1(I_1)$ such that all but its last event are silent at $p$, and $adom(e_2(I_2)) \cap new(\alpha) = \emptyset$. We can assume without loss of generality that $adom(I_2) \cap new(\alpha) = \emptyset$; otherwise, we can rename the values in $I_2$ and $e_2$ that occur in the intersection by a bijection that is the identity on $const(P) \cup adom(e_2(I_2))$, and use Lemma A.2. Let $K_{1,2}$ collect, for each relation $R$, the keys occurring in $e_1$, $e_2$, or $\alpha$, and let $I'_i = I_i|K_{1,2}$ ($i = 1, 2$); by construction, each relation of $I'_1$ and $I'_2$ contains at most $\bar{c}_{|\alpha|+2}$ tuples. We next show that $I'_1, I'_2$ satisfy the conditions of ($\dagger$). By Lemma A.3 (i), $e_i$ is applicable to $I'_i$ and is visible at $p$ ($i = 1, 2$). Moreover, $\alpha$ is a minimum $p$-faithful run on $e_1(I'_1)$ and all but its last event are silent at $p$. We show that $e_1(I'_1)@p = e_2(I'_2)@p$. Let $t \in e_1(I'_1)@p(R@p)$ with key $k$ for some $R$. Observe that $k \in K_{1,2}$. Since $e_1(I'_1) \subseteq e_1(I_1)$ and $e_1(I_1)@p = e_2(I_2)@p$, we have $t \in e_2(I_2)@p(R@p)$ and there is $t' \in e_2(I_2)(R)$, with the same key $k$ as $t$, such that $t = t'@p$.
Suppose there is no tuple with key $k$ in $I_2$, so $t'$ is created by $e_2$. Then $t'$ is also in $e_2(I'_2)$ and $t \in e_2(I'_2)@p(R@p)$. Now suppose there is a tuple $t''$ with key $k$ in $I_2(R)$. Since $k \in K_{1,2}$, $t'' \in I'_2(R)$, and so $t' \in e_2(I'_2)(R)$ and $t \in e_2(I'_2)@p(R@p)$. We have shown that $e_1(I'_1)@p \subseteq e_2(I'_2)@p$. The converse holds by symmetry, so $e_1(I'_1)@p = e_2(I'_2)@p$. Also, $adom(e_2(I'_2)) \cap new(\alpha) = \emptyset$. Since $I'_1$ and $I'_2$ satisfy the condition of ($\dagger$), it follows that $\alpha$ is a minimum $p$-faithful run on $e_2(I'_2)$ such that all but its last event are silent at $p$, and $\alpha(e_1(I'_1))@p = \alpha(e_2(I'_2))@p$. By Lemma A.3 (ii), $\alpha$ is also a minimum $p$-faithful run on $e_2(I_2)$ such that all but its last event are silent at $p$.

It remains to show that $\alpha(e_1(I_1))@p = \alpha(e_2(I_2))@p$. Let $t \in \alpha(e_1(I_1))@p(R@p)$ with key $k$ for some $R$. Thus there exists a tuple $t' \in \alpha(e_1(I_1))(R)$, with key $k$, such that $t = t'@p$. First suppose $k \notin K_{1,2}$; then $t'$ also belongs to $e_1(I_1)$. Since $e_1(I_1)@p = e_2(I_2)@p$, $t \in e_2(I_2)@p(R@p)$. Thus there is $t'' \in e_2(I_2)(R)$ with key $k$ such that $t''@p = t$. Since $k \notin K_{1,2}$, $\alpha$ does not affect $t''$, so $t'' \in \alpha(e_2(I_2))(R)$ and $t \in \alpha(e_2(I_2))@p(R@p)$. Now suppose $k \in K_{1,2}$. If there is no tuple in $e_1(I_1)(R)$ with key $k$, then $\alpha$ creates $t'$ on any initial instance on which it is applicable, so $t' \in \alpha(e_2(I_2))(R)$ and $t \in \alpha(e_2(I_2))@p(R@p)$. Suppose there is a tuple $t''$ in $e_1(I_1)(R)$ with key $k$. Since $k \in K_{1,2}$, $t'' \in e_1(I'_1)(R)$. It follows that $t' \in \alpha(e_1(I'_1))(R)$ and $t \in \alpha(e_1(I'_1))@p(R@p)$. Since $\alpha(e_1(I'_1))@p = \alpha(e_2(I'_2))@p$, $t \in \alpha(e_2(I'_2))@p$. Since $\alpha$ is applicable to $e_2(I_2)$ and $I'_2 \subseteq I_2$, it follows that $\alpha(e_2(I'_2))@p \subseteq \alpha(e_2(I_2))@p$, and $t \in \alpha(e_2(I_2))@p(R@p)$. In both cases $t \in \alpha(e_2(I_2))@p(R@p)$; thus $\alpha(e_1(I_1))@p \subseteq \alpha(e_2(I_2))@p$. The converse holds by symmetry. Hence $\alpha(e_1(I_1))@p = \alpha(e_2(I_2))@p$, which concludes the proof. $\Box$

Proof of Theorem 5.12. We need to show that $P@p$ is sound and complete for $P$ and $p$. Consider completeness. For runs $\rho$ of $P$ and $\rho'$ of $P@p$, we denote by $\rho@p \sim \rho'$ the fact that $\rho@p$ is obtained from $\rho'$ by replacing all $\omega$-events with $\omega$. Let $\rho = \{(e_i, I_i)\}_{0 \le i \le n}$ be a run of $P$. We can write $e(\rho)$ as $\alpha_1.e_{i_1}.\alpha_2.e_{i_2} \ldots \alpha_n.e_{i_n}.\alpha_{n+1}$, where the $e_{i_j}$ are the events visible at $p$ ($1 \le j \le n$) and the $\alpha_j$ are sequences of events invisible at $p$ ($1 \le j \le n+1$). We define a sequence of events $E_1 \ldots E_n$ yielding a run of $P@p$ such that $run(E_1 \ldots E_n) \sim \rho@p$. Consider a fixed $j > 1$ for which $e_{i_j}$ is not an event of $p$ (the case $j = 1$ is a straightforward variation). Let $e = e_{i_j}$, $e' = e_{i_{j-1}}$, $\alpha = \alpha_j$, $I = I_{i_{j-1}}$, $I' = I_{i_{j-1}-1}$, and $J = I_{i_j}$. Let $\bar\alpha$ be the subsequence of $\alpha$ such that $\bar\alpha.e$ is the unique minimum $p$-faithful subrun of $\alpha.e$ on initial instance $I$. Since $P$ is $h$-bounded for $p$, $|\bar\alpha.e| \le h$. Let $\bar{I} = I|K(e'.\bar\alpha.e)$. By Lemma A.3 (i), $\bar\alpha.e$ is a minimum $p$-faithful run of $P$ on $\bar{I}$, all events of $\bar\alpha$ are invisible at $p$, and $e$ is visible at $p$. Also, $\bar{I}$ is a $p$-fresh instance, since it is easily seen that $\bar{I} = e'(I'|K(e'.\bar\alpha.e))$ and $e'$ is visible at $p$. Let $\bar{J} = \bar\alpha.e(\bar{I})$. Observe that $|\bar{I}| \le c_{h+1}$. By Lemma A.2 we can assume without loss of generality that $\bar{I}$ and $\bar\alpha.e$ use only constants in $C_{h+1}$. Consider the rule of $P@p$ corresponding to the triple $(\bar{I}, \bar\alpha.e, \bar{J})$.
Consider the event $E_j$ obtained by applying to the variables of the rule the valuation $\nu^{-1}$. Clearly, the event is applicable to $\bar{I}@p$ and $E_j(\bar{I}@p) = \bar{J}@p$. It remains to show that $E_j(I@p) = J@p$. Let $\bar{I}^c = I - \bar{I}$. By definition, $\bar{I}^c$ contains no tuple affected by $\bar\alpha.e$, so $J = \bar{J} \cup \bar{I}^c$. Similarly, no tuple of $\bar{I}^c@p$ is affected by $E_j$. It follows that $E_j(I@p) = E_j(\bar{I}@p) \cup \bar{I}^c@p = \bar{J}@p \cup \bar{I}^c@p = J@p$. This establishes completeness of $P@p$.

Now consider the soundness of $P@p$. Let $\rho_p = \{(E_i, I_i)\}_{0 \le i \le n}$ be a run of $P@p$. We show that there exists a run $\rho$ of $P$ such that $\rho@p \sim \rho_p$; more precisely, we show by induction on $j$ that for each $j \le n$ there is a run $\rho_j$ of $P$ such that $\rho_j@p \sim \rho_p|_j$.

Consider $j = 0$. Thus $E_0(\emptyset) = I_0$. If $E_0$ is an event of $p$ then the statement holds. Otherwise, by definition, there exists a $p$-fresh instance $I$ and a minimum $p$-faithful run $\alpha.e$ of $P$ on $I$, such that the tuples in $I(R)$ use keys in $K(R, \alpha)$ for each relation $R$, and $\alpha.e(I)@p = E_0(I@p)$. By construction, since $E_0$ has no positive atoms in its body, $I@p = \emptyset$. By transparency of $P$, $\alpha.e$ is also a minimum $p$-faithful run of $P$ on $\emptyset$ in which all events but $e$ are invisible at $p$, and $\alpha.e(\emptyset)@p = \alpha.e(I)@p = E_0(\emptyset)$. This completes the base of the induction.

For the induction step, let $0 < j < n$ and suppose there is a run $\rho_j$ of $P$ such that $\rho_j@p \sim \rho_p|_j$. Let $e(\rho_j) = \alpha_j$ and $\alpha_j(\emptyset) = J$. So $J$ is a $p$-fresh instance and $J@p = I_j$. From the definition of $E_{j+1}$, it can be shown similarly to the base case that there exists a $p$-fresh instance $I$ over $D$ and a minimum $p$-faithful run $\alpha.e$ of $P$ on initial instance $I$, in which all events but $e$ are invisible at $p$, such that $I@p = I_j$ and $\alpha.e(I)@p = I_{j+1}$. We would like to obtain a run corresponding to $\rho_p|_{j+1}$ by concatenating $\alpha_j$ with $\alpha.e$. However, it could be that $new(\alpha.e) \cap adom(\alpha_j) \ne \emptyset$. Observe that $new(\alpha.e) \cap adom(I_j) = \emptyset$ since $adom(I_j) \subseteq adom(I)$. Thus there are two cases to handle: (i) $new(\alpha.e)$ contains values in $adom(\rho_p|_{j-1})$; (ii) $new(\alpha.e)$ contains values in $adom(\alpha_j) - adom(\rho_p|_{j-1})$. Case (i) can be handled by applying to $\alpha.e$ a bijection $f$ on $dom$ that is the identity on $const(P) \cup adom(I) \cup adom(I_{j+1})$ and such that $adom(f(new(\alpha.e))) \cap adom(\rho_p|_{j-1}) = \emptyset$, and using Lemma A.2. Case (ii) can then be avoided by applying to $\alpha_j$ a bijection $g$ on $dom$ that is the identity on $const(P) \cup adom(\rho_p|_j)$ and such that $adom(g(\alpha_j)) \cap new(\alpha.e) = \emptyset$, again using Lemma A.2. Thus we can assume that $new(\alpha.e) \cap adom(\alpha_j) = \emptyset$. By transparency of $P$, since $I$ and $J$ are $p$-fresh, $I@p = J@p$, and $adom(J) \cap new(\alpha.e) = \emptyset$, it follows that $\alpha.e$ is a run of $P$ on $J$ and $\alpha.e(J)@p = \alpha.e(I)@p = I_{j+1}$. Thus $\alpha_j.\alpha.e$ yields a run of $P$ and $run(\alpha_j.\alpha.e)@p \sim \rho_p|_{j+1}$. This completes the induction and the proof of soundness of $P@p$. $\Box$

Proof of Theorem 6.3. Consider an instance $I$ and a sequence $\alpha.e$ of events applicable to $I$ that yields a minimum $p$-faithful run, such that all its events but $e$ are invisible at $p$. Observe that no event of $\alpha$ has a relation visible at $p$ in its head. Since $\alpha.e$ is a minimum $p$-faithful run, $\alpha.e = T^\omega_p(\alpha.e, e)$. Let $R$ be the relation occurring in the head of $e$. Since $e$ is visible at $p$, $R@p \in D@p$. It can be shown that $T^\omega_p(\alpha.e, e) = T^g_p(\alpha.e, e)$, where $g$ is the maximum length of a path in the $p$-graph of $P$ starting from relation $R$. Intuitively, this is because every productive application of $T_p(\alpha.e, \cdot)$ to an event corresponds to traversing at least one edge in the $p$-graph (from the relation in the head to some relation in the body). It follows that $T_p(\alpha.e, \cdot)$ can only be applied $g$ times before converging.
Moreover, given a set $E$ of events, $T_p(\alpha.e, E)$ adds to $E$ at most $b \cdot |E|$ lifecycles of keys, each containing at most $a$ events. It follows that $|\alpha.e| \le (ab + 1)^g \le (ab + 1)^d$. $\Box$

Proof of Theorem 6.7 (sketch). Let $P$ be a TF program, $p$ a peer, and $h$ an integer. Let $P_t$ be the program constructed from $P$ as previously described. We show that (*) $P_t$ is transparent and $h$-bounded for $p$, and (**) each run of $P$ that is transparent and $h$-bounded for $p$ is the projection of a run of $P_t$. Towards (*), consider a $p$-fresh instance $I$ and a minimal $p$-faithful sequence $\alpha$ of events of $P_t$ such that only the last one is visible for $p$. Transparency is satisfied by construction. For $h$-boundedness, observe that the subrun consisting of the events corresponding to the stepIDs occurring in the $B_i$ attributes of the last event of $\alpha$ is observationally equivalent to $\alpha$ for $p$; therefore $h$-boundedness holds as well. Towards (**), first observe that the projection simply removes from the runs of $P_t$ the relations $R_t$, for each $R$. Consider a transparent and $h$-bounded run $\rho$ of $P$. Let its stages be $\alpha_i$ for $i \in [1..n]$, and let $I_i$ be the instance reached after $\alpha_i$ for each $i$. We construct a corresponding run $\rho'$ of $P_t$. For this, for each $i$, consider the minimal faithful subrun $\alpha'_i$ of $\alpha_i$ on input $I_{i-1}$. It is transparent and, by $h$-boundedness, its length is at most $h$. We can therefore extend the events of $P$ to transparent events of $P_t$ so as to capture the events in $\alpha'_i$; because its length is at most $h$, there is enough room in the $B_i$ attributes. For the other events, it is irrelevant whether we extend them using transparent or non-transparent events. We thus obtain a run $\rho'$ such that $\rho = \Pi(\rho')$, which concludes the proof. $\Box$
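The algebraic claims at the end of the proof of Theorem 4.8 are easy to check concretely once subsequences of $e(\rho)$ are identified with sets of event indices. The sketch below assumes, purely for illustration, that $+$ is union and $*$ is intersection of those index sets; under that reading, distributivity and the identity laws are verified exhaustively on a small example.

```python
# Sketch of the algebra on subsequences of e(rho) from Theorem 4.8.
# Assumption (illustrative): a subsequence is identified with the set of
# indices of its events; "+" is union and "*" is intersection.

from itertools import combinations

def subseq_add(a, b):
    return a | b          # additive identity: frozenset() (the empty sequence)

def subseq_mul(a, b):
    return a & b          # multiplicative identity: the full sequence e(rho)

rho = frozenset(range(5))                       # indices of the events of rho
subsets = [frozenset(c) for r in range(6) for c in combinations(range(5), r)]

# Distributivity of * over +, and the identity laws, on all small cases.
assert all(subseq_mul(a, subseq_add(b, c)) ==
           subseq_add(subseq_mul(a, b), subseq_mul(a, c))
           for a in subsets for b in subsets for c in subsets)
assert all(subseq_add(a, frozenset()) == a for a in subsets)
assert all(subseq_mul(a, rho) == a for a in subsets)
print("algebra laws verified on all subsequences of a 5-event run")
```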
103,271
[ "749231", "964789" ]
[ "478610", "432644", "26818" ]
01745104
en
[ "info" ]
2024/03/05 22:32:07
2013
https://hal.science/hal-01745104/file/2013-june-pimrc-workshop_salah.pdf
Vineeth S Varma email: vineeth.varma@lss.supelec.fr Salah Eddine Elayoubi email: salaheddine.elayoubi@orange.com Merouane Debbah email: merouane.debbah@supelec.fr Samson Lasaulce email: samson.lasaulce@lss.supelec.fr On the Energy Efficiency of Virtual MIMO Systems

The major motivation behind this work is to optimize the sleep mode and transmit power level strategies in a small cell cluster in order to maximize the proposed energy efficiency metric. We study the virtual multiple input multiple output (MIMO) system established when each base station in the cluster is equipped with one transmit antenna and every user is equipped with one receive antenna. The downlink energy efficiency is analyzed taking into account the transmit power level as well as the implementation of sleep mode schemes. In our extensive simulations, we analyze and evaluate the performance of the virtual MIMO through zero-forcing schemes and the benefits of sleep mode schemes in small cell clusters. Our results show that for certain configurations of the system, implementing a virtual MIMO with several transmit antennas can be less energy efficient than a system with sleep mode using OFDMA with a single transmitting antenna serving multiple users.

I. INTRODUCTION

The energy consumed by the radio access network infrastructure is becoming a central issue for operators [START_REF] Saker | System selection and sleep mode for energy saving in cooperative 2G/3G networks[END_REF]. The goal of this work is to provide insights on how to design green radio access networks, especially in the framework of virtual MIMO systems. Indeed, classical network architectures are focused on integrated, macro base stations, where each cell covers a predetermined area, and inter-cell interference is reduced by means of fixed frequency reuse patterns [START_REF] Elayoubi | Uplink intercell interference and capacity in 3G LTE systems[END_REF]. Heterogeneous Networks (HetNets) introduced a new notion of small cells, where pico or femto base stations are deployed within the coverage area of the macro base stations [START_REF] Saker | Optimal control of wake up mechanisms of femtocells in heterogeneous networks, Selected Areas in Communications[END_REF]. Virtual MIMO is a step forward in this context that allows distributed systems of base stations/antennas to cover a common area and cooperate in order to increase the overall spectral efficiency [START_REF] Wang | Cooperative MIMO channel models: A survey[END_REF]. This paper focuses on these latter solutions and aims at addressing the problem from an energy efficiency point of view.

For classical macro networks, early works focused on designing energy-efficient power control mechanisms [START_REF] Goodman | Power Control for Wireless Data[END_REF]. Therein, the authors define the energy efficiency of a communication as the ratio of the net data rate (called goodput) to the radiated power; the corresponding quantity is a measure of the average number of bits successfully received per joule consumed at the transmitter. This metric has been used in many works. Although fully relevant, the performance metric introduced in [START_REF] Goodman | Power Control for Wireless Data[END_REF] ignored the fact that transmitters consume a constant energy regardless of their output power level [START_REF] Desset | Flexible power modeling of LTE base stations[END_REF].
The impact of this constant energy has been studied for single-user point-to-point MIMO systems in [START_REF] Varma | An Energy Efficient Framework for the Analysis of MIMO Slow Fading Channels[END_REF]. Sleep mode mechanisms have thus been regarded as a solution for this issue; they consist in deactivating network resources that have low traffic load, thus eliminating both the variable and constant parts of the energy consumption [START_REF] Saker | System selection and sleep mode for energy saving in cooperative 2G/3G networks[END_REF]. This mechanism has been applied to macro networks [START_REF] Saker | System selection and sleep mode for energy saving in cooperative 2G/3G networks[END_REF], as well as to heterogeneous networks with macro and small cells [START_REF] Saker | Optimal control of wake up mechanisms of femtocells in heterogeneous networks, Selected Areas in Communications[END_REF]. Our aim in this paper is to extend this concept to virtual MIMO networks, where an antenna that is not significantly contributing to the network capacity (for a given configuration of user positions and radio channels) is put into sleep mode.

The remainder of this paper is organized as follows. In Section II, we present the system model and the resource allocation scheme. Section III presents our energy efficiency metric and optimizes it for a given system and channel configuration, using sleep mode mechanisms. Section IV presents some numerical examples, and Section V eventually concludes the paper.

II. SYSTEM MODEL

The wireless system under consideration is the downlink of a virtual MIMO system within a small cell cluster. To be precise, each of the small cell base stations is connected to a central processor, so that together they act as the antennas of the virtual MIMO, as shown in Fig. 1.

Fig. 1. An example illustration of a 2×2 virtual MIMO with $g_{i,j}$ representing the channel between BS antenna $i$ and user $j$.

We refer to the set of these base stations as the "cluster". Each user is equipped with a single receive antenna. In order to eliminate interference, zero-forcing is implemented. We consider a block-fading channel model where the channel fading is assumed to stay constant for the duration of a block and changes from block to block. The base stations require the channel state information available at the user end in order to implement the zero-forcing technique. Therefore, in each block a channel training and feedback phase takes place, after which data is transmitted. We also assume that every base station is capable of entering a "sleep mode". In this mode, the base station does not send any pilot signals and therefore does not perform the training or feedback actions, consuming a smaller quantity of power than the active base stations. Let there be $M$ base stations in the cluster and $K$ users. Define $\mathcal{K} = \{1, 2, \ldots, K\}$ and $\mathcal{M} = \{1, 2, \ldots, M\}$ as the sets of users and base station antennas.

A. Power consumption model

As the transmit antennas are not co-located, each of them has an individual power budget. When a base station is active, it consumes a constant power $b$ due to the power amplifier design and training or feedback costs.
Additionally, it consumes a power $P_m |x_m|^2$ proportional to the radiated power, where $P_m \le P_{max}$, $|x_m| \le 1$ is the signal transmitted by antenna $m$, and $P_{max}$ is the power constraint [START_REF] Saker | System selection and sleep mode for energy saving in cooperative 2G/3G networks[END_REF] [START_REF] Desset | Flexible power modeling of LTE base stations[END_REF]. When it is placed in sleep mode, it is assumed to consume only a power $c$, where $c < b$. Denote by $\mathbf{s}$ the sleep mode state vector of the cluster, with elements $s_m \in \{0, 1\}$. Base station $m$ is in sleep mode when $s_m = 1$ and active when $s_m = 0$. Thus the power consumption of the $m$-th base station is $c s_m + (1 - s_m)(b + P_m |x_m|^2)$. The total power consumption of the cluster is given by:

$P_{tot} = \sum_{m=1}^{M} \left[ c s_m + (1 - s_m)\left(b + P_m |x_m|^2\right) \right]$   (1)

For any given state of the cluster, we define $\omega(\mathbf{s})$ as the total number of base stations that are active; it can be calculated as $\omega(\mathbf{s}) = M - \sum_{m=1}^{M} s_m$. If $M < K$, zero-forcing cannot be used. However, if $M > K$ and $\omega(\mathbf{s}) \ge K$, then the zero-forcing technique can be implemented by choosing $K$ base stations to transmit the data signals after all $\omega(\mathbf{s})$ active base stations train and obtain feedback on their channels.

B. The zero-forcing scheme

As there are $K$ users connected to the $\omega(\mathbf{s})$ active base stations, there is a total of $\omega(\mathbf{s}) \times K$ channels. Let $\zeta = \{1, 2, \ldots, \omega(\mathbf{s})\}$ be the set of active base stations. We denote by $\mathbf{G}$, with elements $g_{l,k} \in \mathbb{R}$, the path fading between base station $l \in \zeta$ and user $k \in \mathcal{K}$, while $\tilde{\mathbf{H}}$, with elements $\tilde{h}_{l,k} \in \mathbb{C}$, denotes the fast fading component, resulting in a signal at user $k$ given by:

$y_k = \sum_{l=1}^{\omega(\mathbf{s})} \sqrt{\frac{g_{l,k} P_l}{\sigma^2}}\, \tilde{h}_{l,k}\, x_l + z$   (2)

where $\mathbf{x}$ is the transmitted signal with elements $x_l$, $z$ is the normalized noise, and $\sigma^2$ is the noise strength. Note that $g_{l,k}$ can be determined based on the user location, while the $\tilde{h}_{l,k}$ are i.i.d. zero-mean unit-variance complex Gaussian random variables. We define $\mathbf{H}(\mathbf{G}, \tilde{\mathbf{H}})$ as the combined channel matrix with elements $\bar{h}_{l,k} = \sqrt{g_{l,k}}\, \tilde{h}_{l,k}$. In our work, as we focus on the small-cell scenario where the antennas are distributed over the cell in a dense manner, we assume that every user can have a similar level of signal strength. Define $\mathcal{N} = \{1, 2, \ldots, N\}$ as the set of transmitting antennas that perform zero-forcing beamforming. We define $\beta: \mathcal{N} \to \zeta$ as the function that describes which base stations in $\zeta$ are picked to transmit the data signals: given BS $j \in \mathcal{N}$, the corresponding label of the BS in $\zeta$ is $\beta(j)$. Given $\omega(\mathbf{s})$ active base stations that perform training and receive feedback on $\tilde{\mathbf{H}}$, we define the effective channel matrix as $\mathbf{H}(\tilde{\mathbf{H}}, \beta)$, with elements $h_{j,k} = \bar{h}_{\beta(j),k}$. For zero-forcing, we require that the number of transmitting antennas equal the number of receiving antennas, i.e., $K = N$. With this constraint, if $\mathbf{H}$ is an invertible matrix, we define:

$\bar{\mathbf{x}} = (\mathbf{H}(\tilde{\mathbf{H}}, \beta))^{-1} \mathbf{u}$   (3)

where $\mathbf{u}$ is a vector of length $K$ whose element $u_k$ is the signal received by the $k$-th user. In this work, we take $|u_k|^2 = 1$ so that each user obtains identical signal strength. This constraint has several benefits: 1) It results in a very "fair" beamforming scheme, as all users experience equal signal strength and thus similar data rates. 2) As the base station antennas are spread over the cell, there is no user on the "cell edge" or "cell center".
In this situation, equal SINRs for all users can result in less congestion when user traffic patterns are taken into account. 3) Finally, the resulting system is far less complex and easier to analyze than one with arbitrary values of $|u_k|^2$. With these definitions, we can now precode the transmitted signal as:

$\mathbf{x} = \frac{\bar{\mathbf{x}}}{\alpha(\tilde{\mathbf{H}}, \beta)}$   (4)

where $\alpha(\tilde{\mathbf{H}}, \beta) = \max_m |\bar{x}_m|$. This precoding works only if all the $P_j$ are equal, and so we choose $P_j = P_0$. From this point onwards, $P_0$ represents the transmit power level of the system with this precoding. The signal received by each of the $K$ users is given by:

$y_k = \sqrt{\frac{P_0}{\sigma^2}} \frac{u_k}{\alpha(\tilde{\mathbf{H}}, \beta)} + z$   (5)

Thus the SINR at each user is now given by

$\gamma_k = \frac{P_0}{\alpha(\tilde{\mathbf{H}}, \beta)^2 \sigma^2}$   (6)

III. ENERGY EFFICIENCY OPTIMIZATION

This work aims at minimizing the energy consumed by base stations. If each user in the network is connected in order to download some data, then the total energy consumed by the network is the total power consumed multiplied by the total duration for which the user stays connected. Energy efficiency (EE) is a metric that is often used to measure this, and maximizing the energy efficiency leads to minimizing the total energy consumed.

A. Defining the EE metric

Before defining the EE, we first calculate the total power consumption of the network. From (1) and (4), the total power consumed is given by:

$P_{tot}(P_0, \tilde{\mathbf{H}}, \beta) = \sum_{m=1}^{M} \left[ c s_m + (1 - s_m) \left( b + P_0 \frac{\left|\left(\mathbf{H}^{-1}(\tilde{\mathbf{H}}, \beta)\mathbf{u}\right)_{\beta^{-1}(m)}\right|^2}{\alpha(\tilde{\mathbf{H}}, \beta)^2} \right) \right]$   (7)

Here we define, for all $m \in \mathcal{M}$:

$\beta^{-1}(m) = \begin{cases} j & \text{if } j \in \mathcal{N} \text{ exists s.t. } \beta(j) = m \\ 0 & \text{otherwise} \end{cases}$   (8)

and $(\cdot)_j$ is the $j$-th element if $j \ne 0$, and is $0$ if $j = 0$. In this scenario, we define the instantaneous energy efficiency as:

$\eta(P_0, \tilde{\mathbf{H}}, \beta) = \frac{\sum_k f\left(\gamma_k(P_0, \tilde{\mathbf{H}}, \beta)\right)}{P_{tot}(P_0, \tilde{\mathbf{H}}, \beta)}$   (9)

where $f(\cdot)$ gives the effective throughput as a function of the SINR, e.g., $f(\gamma_k) = \log(1 + \gamma_k)$. However, when we study the base station energy efficiency over a longer duration, the effects of the fast fading $\tilde{\mathbf{H}}$ get averaged out, and in this case a more reasonable definition of the EE is:

$\bar\eta(P_0, \mathbf{G}, \beta) = \frac{\mathbb{E}_{\tilde{\mathbf{H}}}\left[\sum_k f\left(\gamma_k(P_0, \mathbf{H}(\mathbf{G}, \tilde{\mathbf{H}}), \beta)\right)\right]}{\mathbb{E}_{\tilde{\mathbf{H}}}\left[P_{tot}(P_0, \mathbf{H}(\mathbf{G}, \tilde{\mathbf{H}}), \beta)\right]}$   (10)

B. Optimization w.r.t. the transmit power

In this section, we show some properties of our energy efficiency metric w.r.t. $P_0$. If the goal of a system is to be energy efficient using power control, then one important question arises: is there a unique power for which the energy efficiency is maximized? We answer this question with the following proposition.

Proposition 1: Given a certain path loss matrix $\mathbf{G}$ and a selection $\beta$ of transmitting base stations in the virtual MIMO system, the average EE $\bar\eta(P_0, \mathbf{G}, \beta)$ is maximized for a unique $P_0^*$ and is quasi-concave in $P_0$.

Proof: Consider the SINR $\gamma_k$ of each user. It can be observed that $\frac{\partial \gamma_k(P_0, \tilde{\mathbf{H}}, \beta)}{\partial P_0}$ is a constant. So if $f(\cdot)$ is concave in $\gamma_k$, it is also concave in $P_0$. Now consider the numerator of (10), $\mathbb{E}_{\tilde{\mathbf{H}}}[\sum_k f(\gamma_k(P_0, \mathbf{H}(\mathbf{G}, \tilde{\mathbf{H}}), \beta))]$. This is a weighted sum of several concave functions and is hence also concave itself. Similarly, $\frac{\partial P_{tot}(P_0, \tilde{\mathbf{H}}, \beta)}{\partial P_0}$ can also be verified to be a constant; hence $\frac{\partial \mathbb{E}_{\tilde{\mathbf{H}}}[P_{tot}(P_0, \tilde{\mathbf{H}}, \beta)]}{\partial P_0}$ is a constant. Thus $\bar\eta(P_0, \mathbf{G}, \beta)$ is the ratio of a concave function of $P_0$ to an affine function of $P_0$, which is known to be quasi-concave and has a unique maximum $P_0^*$ satisfying:

$\frac{\partial \bar\eta(P_0^*, \mathbf{G}, \beta)}{\partial P_0} = 0$   (11)

This proposition guarantees that, given a certain choice of transmit antennas and a path fading matrix, the energy efficiency can always be optimized w.r.t. the transmit power level $P_0$.
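To make the optimization concrete, the following NumPy sketch evaluates the instantaneous EE of (9) for one channel draw and locates $P_0^*$ by a grid search, which suffices since the objective is quasi-concave by Proposition 1. The parameter values and the choice $f(\gamma) = \log(1+\gamma)$ are assumptions for the illustration, not the simulator used in Section IV.

```python
# Sketch: zero-forcing precoding and instantaneous EE of eq. (9).
# Assumed toy parameters; f(gamma) = log(1 + gamma).
import numpy as np

rng = np.random.default_rng(0)
K = 2                      # users = transmitting antennas (N = K)
b, c, sigma2 = 1.0, 0.1, 1e-3
s = np.array([0, 0, 1])    # third BS in sleep mode

def instantaneous_ee(P0, H, s):
    """EE of eq. (9) for the effective K x K channel H of the selected BSs."""
    u = np.ones(K)                         # |u_k|^2 = 1 for every user
    x_bar = np.linalg.solve(H, u)          # eq. (3): H^{-1} u
    alpha = np.max(np.abs(x_bar))          # eq. (4) normalization
    gamma = P0 / (alpha**2 * sigma2)       # eq. (6), same SINR for all users
    rate = K * np.log(1 + gamma)
    radiated = P0 * np.abs(x_bar / alpha)**2      # per-antenna radiated power
    p_tot = np.sum(c * s + (1 - s) * b) + np.sum(radiated)   # eq. (7)
    return rate / p_tot

# One fast-fading draw for the K x K effective channel (unit path fading).
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)

# Grid search for P0* (the objective is quasi-concave by Proposition 1).
grid = np.linspace(1e-4, 2.0, 2000)
P0_star = grid[np.argmax([instantaneous_ee(P0, H, s) for P0 in grid])]
print(f"P0* ~ {P0_star:.4f} W")
```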
C. Optimizing the selection of transmitting base stations

Given a certain sleep mode state $\mathbf{s}$, there are $\omega(\mathbf{s})$ active base stations that train and obtain feedback. From this set $\zeta$, $K$ base stations have to be picked for zero-forcing. This choice is mathematically expressed by $\beta$. The $\beta$ that optimizes the energy efficiency depends on the channel state $\tilde{\mathbf{H}}$. The following proposition details the method of choosing the $\beta$ that optimizes the EE.

Proposition 2: When $\frac{P_0}{b} \to 0$, the $\beta^*$ that maximizes $\bar\eta(P_0, \mathbf{G}, \beta)$ is obtained by:

$\beta^* = \arg\min \left[ \alpha(\tilde{\mathbf{H}}, \beta);\ \beta \in \{\mathcal{N} \to \zeta\} \right]$   (12)

Proof: By observing (6), it can be seen that $\gamma_k(P_0, \tilde{\mathbf{H}}, \beta)$ is maximized by picking $\beta^*$, and so $\sum_k f(\gamma_k(P_0, \tilde{\mathbf{H}}, \beta))$ is maximized when $\beta = \beta^*$. When $\frac{P_0}{b} \to 0$, for $\beta^*$ and any $\beta_1$ we have:

$\lim_{\frac{P_0}{b} \to 0} \bar\eta(P_0, \mathbf{G}, \beta^*) - \bar\eta(P_0, \mathbf{G}, \beta_1) = \frac{\mathbb{E}_{\tilde{\mathbf{H}}}\left[\sum_k f(\gamma_k(P_0, \tilde{\mathbf{H}}, \beta^*)) - \sum_k f(\gamma_k(P_0, \tilde{\mathbf{H}}, \beta_1))\right]}{\sum_{m=1}^{M} c s_m + (1 - s_m) b} \ge 0$   (13)-(14)

This implies that we pick the $\beta$ that minimizes $\alpha(\tilde{\mathbf{H}}, \beta)$ in order to optimize the EE when $b \gg P_0$. From a practical point of view, the above result is useful as it yields an algorithm for picking the right base stations based on the CSI obtained from all the active base stations. The condition $b \gg P_0$ is most applicable when the users are close to the base stations, resulting in a low $P_0$ being used for maximizing the EE.
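Proposition 2 suggests a simple selection procedure: enumerate the injective assignments of the $K$ data streams to the $\omega(\mathbf{s})$ active antennas and keep the one with the smallest normalization factor $\alpha$. The brute-force sketch below is adequate for the small clusters considered here; the enumeration cost grows combinatorially with the cluster size.

```python
# Sketch: choosing beta per Proposition 2 (eq. (12)) by brute force.
# H_bar is the omega(s) x K combined channel of the active base stations.
import numpy as np
from itertools import permutations

def select_beta(H_bar, K):
    """Return (beta, alpha) minimizing alpha over injective maps N -> zeta."""
    omega = H_bar.shape[0]
    u = np.ones(K)
    best_beta, best_alpha = None, np.inf
    for beta in permutations(range(omega), K):   # beta(j) = beta[j]
        H_eff = H_bar[list(beta), :]             # effective K x K channel
        if abs(np.linalg.det(H_eff)) < 1e-12:    # skip singular selections
            continue
        alpha = np.max(np.abs(np.linalg.solve(H_eff, u)))
        if alpha < best_alpha:
            best_beta, best_alpha = beta, alpha
    return best_beta, best_alpha

rng = np.random.default_rng(1)
H_bar = (rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))) / np.sqrt(2)
beta, alpha = select_beta(H_bar, K=2)
print(f"beta* = {beta}, alpha = {alpha:.3f}")
```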
IV. NUMERICAL RESULTS

In this section we use simulations and numerical calculations to study the effectiveness of our proposal as well as the advantages it offers. For the purpose of a thorough numerical study, we analyze two kinds of system settings, A and B. For both configurations the common parameters are: 1) $c = \frac{b}{10}$ W; 2) $P_{max} = 2$ W; 3) $f(\gamma) = B \log(1 + \gamma)$; 4) $\sigma^2 = 1$ mW, where $B = 10^6$ Hz is the bandwidth. The fast fading coefficient we consider is $\tilde{h}_{m,k} = o(\pi_{m,k})\Omega + 0.1\xi$, where $\xi \in \mathcal{CN}(0, 1)$, $\Omega$ is the direct line-of-sight factor, which plays a dominant role in most small cell networks, and $o(\pi_{m,k}) \in \{0, 1\}$ is the shadow factor, with $o(\pi_{m,k}) = 1$ with probability $\pi_{m,k}$. Here $\pi_{m,k}$ is the probability that receiver $k$ has line of sight with BS antenna $m$. We take $\pi_{m,k} = 0.5$ $\forall (k, m)$ for our simulations.

The presented results study the case of two users, $K = 2$, served by a small cell cluster of three base stations, i.e., $M = 3$. In addition to zero-forcing, when there are two users a single base station could alternatively use Orthogonal Frequency-Division Multiple Access (OFDMA) to serve the two users and keep the other two BS in sleep mode. Our numerical simulations study all these possible scenarios and compare their respective EE performances. In both of the settings presented below, we study two main regimes of interest: 1) $b = 1$ W: this regime represents the futuristic case where power amplifier efficiencies are quite high and the constant power consumed is lower than the maximum RF output power. 2) $b = 10$ W: this regime represents the more current state of the art w.r.t. power amplifier efficiency, where in small cell antennas a large portion of the power is lost as a fixed cost. We also consider two possible values of $\Omega$, the line-of-sight factor. The case $\Omega = 10$ is representative of pico-cells that are deployed outdoors, whereas the case $\Omega = 0$ represents femto-cells deployed indoors, where no line-of-sight communication is possible.

A. Setting A

The deployment of antennas and the user locations are shown in Fig. 2. In this setting we take $g_{m,k} = 1$ $\forall m, k$. In Fig. 3 we study the EE of a VMIMO system with a very efficient power amplifier. In this figure, we notice that using all available base station antennas is more efficient when line-of-sight communications are possible. In this case, no BS is in sleep mode and all the antennas train and obtain feedback on their channels. The choice of $\beta$ is very much relevant in this scenario. However, in the regime where there is no direct line of sight (users are inside buildings), it becomes more efficient to use just 2 BS antennas and put one in sleep mode. As the configuration is symmetric, the choice of the base station in sleep mode is not relevant.

In Fig. 4 we study the EE of a VMIMO system with an inefficient power amplifier. In this figure, we notice that using all available base station antennas is not efficient even when line-of-sight communications are possible. In this case, having one BS in sleep mode and obtaining a 2×2 virtual MIMO with the remaining antennas is the most efficient. Surprisingly, in the case of no line of sight, i.e., $\Omega = 0$, we observe that using OFDMA with one BS active is the most efficient solution. This is explained by the relative inefficiency of zero-forcing in the low SNR regime, causing less energy to be spent by having two BS antennas in sleep mode and just one antenna transmitting to the two users in orthogonal frequencies.

B. Setting B

The deployment of antennas and the user locations are shown in Fig. 5. In this setting we take $g_{1,1} = g_{2,1} = 4$, $g_{3,1} = g_{1,2} = g_{2,2} = 0.1$ and $g_{3,2} = 10$. In Fig. 6, similarly to what was done in the previous setting, we study the EE of a VMIMO system with a very efficient power amplifier. In this figure, for both $\Omega = 0$ and $\Omega = 1$ we see that using 2 BS antennas and putting one in sleep mode is the most efficient. In this setting, the configuration of BS and users is asymmetric and the BS to be put in sleep mode has to be chosen carefully: BS 1 and 2 are symmetric and are close to user 1, but BS 3 is closer to user 2.

In this setting, we see from Fig. 7 that, unlike in Setting A, using OFDMA to divide resources between the two users is not as efficient as ZF, due to the higher SNR when users are served by nearby BS antennas. As in Fig. 6, choosing $s_1 = 1$ or $s_2 = 1$ and zero-forcing is always the most efficient solution.

V. CONCLUSION

This paper studies the performance of virtual MIMO systems from an energy efficiency perspective. It defines an energy efficiency metric that takes into account the capacity as well as the energy consumption, and considers both the fixed and variable parts of the latter. We optimize the power allocations of the different antennas and show that sleep mode can bring a significant energy efficiency gain. This involves deactivating antennas that do not have a significant contribution to the system capacity, for a given number of users and radio channel conditions. This work is applicable only to the specific case of a small cell cluster with a centralized network and CSIT. Thus, many extensions of the proposed work are possible; the most relevant extension is to apply the proposed framework to more general network settings.

Fig. 2. Setting A schematic.
Fig. 3. Setting A: EE vs. $P_0$ for $b = 1$ W.
Fig. 4. Setting A: EE vs. $P_0$ for $b = 10$ W.
Fig. 5. Setting B schematic.
Fig. 6. Setting B: EE vs. $P_0$ for $b = 1$ W.
Fig. 7. Setting B: EE vs. $P_0$ for $b = 10$ W.

VI. ACKNOWLEDGMENTS

This work is a joint collaboration between Orange Labs, the Laboratoire des signaux et systèmes (L2S) of Supélec, and the Alcatel-Lucent Chair of Supélec. This work is part of the European Celtic project "Operanet2".
20,540
[ "5857", "174684", "863308", "1068236" ]
[ "1289", "365900", "365900", "58436", "1289" ]
01745111
en
[ "info" ]
2024/03/05 22:32:07
2013
https://hal.science/hal-01745111/file/blackseacom2013_mhri.pdf
Mariem Mhiri email: mariem.mhiri@gmail.com Vineeth S Varma email: vineeth.varma@lss.supelec.fr Maël Le Treust email: mael.le.treust@emt.inrs.ca Samson Lasaulce email: samson.lasaulce@lss.supelec.fr Abdelaziz Samet email: abdelaziz.samet@ept.rnu.tn On the benefits of repeated game models for green cross-layer power control in small cells (Invited Paper)

Keywords: distributed power control, energy efficiency, repeated game, channel state information

In this paper, we consider the problem of distributed power control for multiple access channels when energy efficiency has to be optimized. In contrast with related works, the presence of a queue at each transmitter is accounted for, and globally efficient solutions are sought. To this end, a repeated game model is exploited and shown to lead to solutions which are distributed in the sense of the decision making, perform well globally, and may rely on limited channel state information at the transmitter.

I. INTRODUCTION

Designing energy-efficient communication systems has become a critical issue in modern wireless networks. The problem treated in this work deals with power control when energy efficiency (EE) has to be optimized. This metric has been defined in [START_REF] Goodman | Power control for wireless data[END_REF] as the ratio of the net data rate (goodput) to the transmit power level. The problem was formulated as a non-cooperative game where each transmitter aims at selfishly maximizing its individual energy efficiency. The considered solution is the Nash equilibrium (NE), which is shown to be unique but generally Pareto-inefficient. To deal with this inefficiency, an operating point (OP) was proposed in [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF], where a repeated game formulation was exploited. The authors of [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF] showed that when playing at the developed OP according to a cooperation plan, only channel state information (CSI) is needed, and the transmitters can improve the social welfare (sum of utilities). Recently, a generalized EE metric has been proposed in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF] for two important transport layer protocols (the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP)). The new EE metric is based on a cross-layer approach and takes into account the effects of the presence of a queue with a finite size at the transmitter. An interference channel system was studied and it was shown that a unique NE exists for the non-cooperative game.

In this paper, we consider the problem of distributed power control with the new EE metric according to the UDP protocol developed in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF] and for multiple access channel (MAC) systems. Our goal is to find another unique solution concept which is efficient and may rely on limited CSI at the transmitter. We refer to the repeated game (RG) model developed in [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF] and apply its results to the cross-layer power control game. One of the major mathematical distinctions between the two metrics used is the presence of a constant power term in the denominator of the EE metric. Although it appears to be a small change, the structure of the equilibrium solution is quite different.
The optimal SINR when using the metric of [1] is independent of the channel state. This property is lost when accounting for the constant power consumption, which motivates us to propose a new OP for the cross-layer metric. The main contributions of this work are: 1) to study the RG when using the cross-layer EE as the utility of the game; 2) to establish the threshold on the game length beyond which the equilibrium policy can be Pareto-optimal; 3) to propose a new OP that is efficient and can be reached in a distributed manner.

This paper is structured as follows. In Section II-A, we introduce the system model under study. Then, we define (in Section II-B) the static power control game. This is followed (Section II-C) by a review of the non-cooperative one-shot game. In Section III, we give the formulation of the RG model. In Section IV, we introduce the new OP and an equilibrium for the finite RG is proposed. Numerical results are presented and discussed in Section V. Finally, concluding remarks are proposed in Section VI.

II. PROBLEM STATEMENT

A. System model

The communication network under study is a MAC system, where $N$ small transmitters communicate with a receiver and operate in the same frequency band. Transmitter $i \in \{1, \ldots, N\}$ sends a signal $\sqrt{p_i}\, x_i$ with power $p_i \in [0, P_i^{max}]$, where $P_i^{max} > 0$ is the maximum transmit power. The channel gain of the link between transmitter $i$ and the destination is denoted by $g_i$. Thus, the received baseband signal is written:

$y_i = g_i \sqrt{p_i}\, x_i + \sum_{j=1, j \ne i}^{N} g_j \sqrt{p_j}\, x_j + n_i$   (1)

where $n_i$ is additive white Gaussian noise (AWGN) with mean $0$ and variance $\sigma_i^2$. We assume that $\sigma_i^2$ is identical for all the transmitters, i.e., $\sigma_i^2 = \sigma^2$. Therefore, the resulting SINR $\gamma_i$ at the receiver is given by:

$\gamma_i(\mathbf{p}) = \frac{p_i |g_i|^2}{\sigma^2 + \frac{1}{L} \sum_{j=1, j \ne i}^{N} p_j |g_j|^2}$   (2)

where $\mathbf{p} = (p_1, p_2, \ldots, p_N)$ is the power vector, which will later describe the power actions of the $N$ transmitters, and $L$ refers to the spreading factor [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF].

We assume that the described system is based on the IP (Internet Protocol) stack, where packets arrive from an upper layer into a finite memory buffer of size $K$ (in packets). Here, the considered protocol is UDP, for which the packet arrival process follows a Bernoulli process with a constant probability $q$, independent of the SINR. This results in an effective packet loss denoted by $\Phi(\gamma_i)$ and an energy efficiency $\eta_i$ given by:

$\eta_i(p_i, \mathbf{p}_{-i}) = \frac{R q \left(1 - \Phi(\gamma_i(\mathbf{p}))\right)}{b + q p_i \frac{1 - \Phi(\gamma_i(\mathbf{p}))}{f(\gamma_i(\mathbf{p}))}}$   (3)

where $\mathbf{p}_{-i} = (p_1, \ldots, p_{i-1}, p_{i+1}, \ldots, p_N)$, $R$ is the used throughput (in bit/s), and $b$ represents the fixed power consumed when the radiated power is zero [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF].

B. Static power control game

The major motivation behind this work is to establish an efficient equilibrium point to which a completely distributed system can converge. A non-cooperative game has been introduced in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF], where the existence of a unique Nash equilibrium was proved. Here, we are looking for more efficient solutions which are distributed in the sense of the decision making, but may rely on limited channel state information at the transmitter.
As motivated in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF], the power control problem can be modeled by a strategic-form game (see e.g. [START_REF] Lasaulce | Game Theory and Learning for Wireless Networks: Fundamentals and Applications[END_REF]).

Definition 2.1: The game is defined by the ordered triplet $\mathcal{G} = \left( \mathcal{N}, (\mathcal{A}_i)_{i \in \mathcal{N}}, (u_i)_{i \in \mathcal{N}} \right)$ where:
• $\mathcal{N}$ is the set of players. Here, the players of the game are the sources/transmitters, $\mathcal{N} = \{1, \ldots, N\}$;
• $\mathcal{A}_i$ is the set of actions. Here, the action of source/transmitter $i$ consists in choosing $p_i$ in its action set $\mathcal{A}_i = [0, P_i^{max}]$;
• $u_i$ is the utility function of each user, given according to UDP by:

$u_i(p_i, \mathbf{p}_{-i}) = \eta_i(p_i, \mathbf{p}_{-i})$   (4)

The function $f: [0, +\infty) \to [0, 1]$ is a sigmoidal efficiency function which corresponds to the packet success rate, verifying $f(0) = 0$ and $\lim_{x \to +\infty} f(x) = 1$. The function $\Phi$ identifies the packet loss due to both bad channel conditions and the finiteness of the packet buffer. It can be calculated as:

$\Phi(\gamma_i) = (1 - f(\gamma_i)) \Pi_K(\gamma_i)$   (5)

where $\Pi_K(\gamma_i)$ is the stationary probability that the buffer is full, given by:

$\Pi_K(\gamma_i) = \frac{\omega^K(\gamma_i)}{1 + \omega(\gamma_i) + \ldots + \omega^K(\gamma_i)}$   (6)

with:

$\omega(\gamma_i) = \frac{q (1 - f(\gamma_i))}{(1 - q) f(\gamma_i)}$   (7)

In [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF], the authors prove that the non-cooperative game with rational players, $\mathcal{G}$, admits a unique pure Nash equilibrium (NE). The NE is the action profile from which no player has anything to gain by changing only his own strategy unilaterally. This is explained in the following section.

C. Review of the non-cooperative game

The non-cooperative power control game has been investigated in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF], where the quasi-concavity of the utility function given in (4) was proved. Accordingly, as the NE represents the fundamental solution of a non-cooperative game, existence and uniqueness of such a solution have been studied and demonstrated as well. Thus, the optimal power, denoted $p_i^*$, is obtained by setting $\partial u_i / \partial p_i$ to zero, which leads to solving the following equation:

$b \gamma_i' \Phi'(\gamma_i) + q \frac{1 - \Phi(\gamma_i)}{f(\gamma_i)^2} \left[ f(\gamma_i) - p_i \gamma_i' f'(\gamma_i) \right] = 0$   (8)

where $\gamma_i' = \frac{d\gamma_i}{dp_i} = \frac{\gamma_i}{p_i}$, $f' = \frac{df}{d\gamma_i}$ and $\Phi' = \frac{d\Phi}{d\gamma_i}$. However, the NE solution is not Pareto-efficient in many scenarios. An example is presented in Fig. 1, where we stress that the NE is far from the Pareto frontier.
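The quantities in (5)-(8) are straightforward to evaluate numerically. As an illustration, the following Python sketch computes the cross-layer EE of (3) and searches for the NE by best-response dynamics, each player maximizing its own utility over a power grid while the other's power is held fixed. All parameter values are assumptions chosen for the demonstration.

```python
# Sketch: cross-layer EE of eqs. (3), (5)-(7) and a best-response search
# for the NE of the one-shot game. Parameter values are illustrative.
import numpy as np

R, b, q, K, L, sigma2 = 1e6, 1e-2, 0.5, 10, 2, 1e-3
c_eff = 1.0                         # c = 2^(R/R0) - 1 with R0 = 1 MHz
g2 = np.array([1.0, 0.5])           # |g_i|^2
P_max = 1e-1

def f(gamma):                       # efficiency function f(x) = exp(-c/x)
    return np.maximum(np.exp(-c_eff / np.maximum(gamma, 1e-12)), 1e-12)

def Phi(gamma):                     # eqs. (5)-(7)
    w = q * (1 - f(gamma)) / ((1 - q) * f(gamma))
    Pi_K = w**K / sum(w**j for j in range(K + 1))
    return (1 - f(gamma)) * Pi_K

def utility(i, p):                  # eq. (3) with the SINR of eq. (2)
    interf = sum(p[j] * g2[j] for j in range(len(p)) if j != i) / L
    gamma = p[i] * g2[i] / (sigma2 + interf)
    return R * q * (1 - Phi(gamma)) / (b + q * p[i] * (1 - Phi(gamma)) / f(gamma))

grid = np.linspace(1e-6, P_max, 400)
p = np.full(2, P_max / 2)           # initial powers
for _ in range(50):                 # best-response dynamics
    for i in range(2):
        p[i] = grid[np.argmax([utility(i, np.r_[p[:i], [x], p[i+1:]]) for x in grid])]
print("approximate NE powers:", p)
```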
Motivated by the need to design an efficient solution relying on limited CSI at the transmitter, we move to the repeated game framework.

III. REPEATED POWER CONTROL GAME

In repeated games (RG), as the name suggests, the same game is played several times. The long-term interactions between the players in such a situation are studied under the RG framework. The players react to past experience by taking into account what happened in all previous stages, and make decisions about their future choices [START_REF] Hart | Robert aumann's game and economic theory[END_REF], [START_REF] Sorin | Repeated games with complete information[END_REF]. The resulting utility of each player is an average of the utilities of the individual stages. A game stage $t$ corresponds to the instant at which all players choose their actions simultaneously and independently; a profile of actions can thus be defined as $\mathbf{p}(t) = (p_1(t), p_2(t), \ldots, p_N(t))$.

When assuming full monitoring, this profile choice is observed by all the players and the game proceeds to the next stage [START_REF] Sorin | Repeated games with complete information[END_REF]. The sequence of actions of transmitter $i$ up to time $t$ defines its history $h(t) = (p_i(1), p_i(2), \ldots, p_i(t-1))$, which lies in the set $H_t = \mathcal{A}_i^{t-1}$. Before playing stage $t$, all histories are known by all the players [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]. According to the above descriptions, a pure strategy $\delta_{i,t}$ of player $i \in \mathcal{N}$ is a mapping from $H_t$ to the action set $\mathcal{A}_i = [0, P_i^{max}]$ specifying the action to choose after each history [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF], [START_REF] Sorin | Repeated games with complete information[END_REF]:

$\delta_{i,t}: H_t \to [0, P_i^{max}], \quad h(t) \mapsto p_i(t)$   (9)

We define the joint strategy $\delta = (\delta_1, \delta_2, \ldots, \delta_N)$ as the vector of all the players' strategies. In this paper, we are interested in the finite repeated game, i.e., the game is played for a finite number of stages ($T$ stages). The utility function of each player results from averaging the instantaneous utilities over all the game stages. At each stage $t$, the instantaneous utility of player $i$ is a function of the profile $\mathbf{p}(t)$ of the actions of all the players.

Definition 3.1: The utility function of the $i$-th player for the finite RG is the arithmetic average of the sum of the utilities over the first $T$ stages [START_REF] Sorin | Repeated games with complete information[END_REF], [START_REF] Aumann | Long-term competition-a gametheoretic[END_REF]. We have [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]:

$v_i^T(\delta) = \frac{1}{T} \sum_{t=1}^{T} u_i(\mathbf{p}(t))$   (10)

where $T \ge 1$ is the number of game stages of the finite RG. An equilibrium solution of the RG is defined in the following manner.

Definition 3.2: A joint strategy $\delta$ satisfies the equilibrium condition for the finite repeated game if for every player $i \in \mathcal{N}$ and every other strategy $\delta_i'$, we have $v_i^T(\delta) \ge v_i^T(\delta_i', \delta_{-i})$. It means that no deviating strategy $\delta_i'$ can increase the utility of any one player. This equilibrium solution is exactly what we are interested in, as a strategy $\delta$ satisfying the above condition is precisely what rational players in a RG would play.

In a RG with complete information and full monitoring, the Folk theorem characterizes the set of possible equilibrium utilities [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF], [START_REF] Sorin | Repeated games with complete information[END_REF]. It states that the set of Nash equilibrium utilities of a RG is precisely the set of feasible and individually rational outcomes of the one-shot (non-cooperative) game [START_REF] Hart | Robert aumann's game and economic theory[END_REF], [START_REF] Sorin | Repeated games with complete information[END_REF]. In a RG, interesting patterns of behavior between players can be seen and studied, including rewarding and punishing, cooperation and threats, transmitting information and concealing it [START_REF] Hart | Robert aumann's game and economic theory[END_REF].

IV. AN OPERATING POINT AND REPEATED GAME CHARACTERIZATION

A. New OP for the game $\mathcal{G}$

Consider the operating point (OP) described in [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]. It is identified by a subset of points of the achievable utility region such that $p_i |g_i|^2 = p_j |g_j|^2$ for all $(i, j) \in \mathcal{N}$. The optimal subset of powers consists of the solutions of the following system of equations:

$\forall (i, j) \in \mathcal{N}, \quad \frac{\partial u_i}{\partial p_i}(\mathbf{p}) = 0 \quad \text{with} \quad p_i |g_i|^2 = p_j |g_j|^2$   (11)

where $u_i$ is the utility function defined in (4). Due to the presence of the parameter $b$, which we consider different from $0$, it can be observed that there will be $N$ different solutions to equation (11) in terms of $p_i$, and thus the operating point from [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF] is not well defined when using the utility defined in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF]. To deal with this problem, a new OP is proposed: it consists in setting $p_i |g_i|^2$ to a constant, denoted $\alpha$, that can be optimized. We propose to determine a unique optimal $\alpha$ by maximizing the expected sum utility over all the channel states as follows:

$\alpha^* = \arg\max_{\alpha} \mathbb{E}_g \left[ \sum_{i=1}^{N} u_i(\alpha, g) \right]$   (12)

When playing at the OP, the power played by the $i$-th player, denoted $\tilde{p}_i$, is given by:

$\tilde{p}_i = \frac{\alpha}{|g_i|^2}$   (13)

In the following section, we focus on the characterization of the finite RG.

B. Repeated power control game characterization

As a first step, we determine the minimum number of stages $T_{min}$ for the finite RG. When the number of stages in the game is smaller than $T_{min}$, the equilibrium of the RG is to simply play at the NE. However, if $T > T_{min}$, a more efficient equilibrium point can be reached. Assuming that the channel gains $|g_i|^2$ lie in a compact set $[\nu_i^{min}, \nu_i^{max}]$ [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF], we have the following proposition.

Proposition 4.1 (Equilibrium solution for the finite RG): For a finite RG, if $T > T_{min}$, then the corresponding equilibrium solution is given by [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]:

$\delta_{i,t}: \begin{cases} \tilde{p}_i & \text{for } t \in \{1, 2, \ldots, T - T_{min}\} \\ p_i^* & \text{for } t \in \{T - T_{min} + 1, \ldots, T\} \\ P_i^{max} & \text{for any deviation detection} \end{cases}$   (14)

where $T_{min}$ is:

$T_{min} = \dfrac{\dfrac{A \nu_i^{max}}{b \nu_i^{min} + \bar\gamma_i \sigma^2 B} - \dfrac{G \nu_i^{max}}{b \nu_i^{min} + \alpha H}}{\dfrac{E \nu_i^{min}}{b \nu_i^{max} + \gamma_i^* \left( \sigma^2 + \frac{1}{L} \sum_{j \ne i} p_j^* \nu_i^{max} \right) F} - \dfrac{C \nu_i^{min}}{b \nu_i^{max} + \underline\gamma_i \left( \sigma^2 + \frac{1}{L} \sum_{j \ne i} p_j^{max} \nu_i^{max} \right) D}}$   (15)
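Operationally, the equilibrium strategy (14) only requires each transmitter to know its own channel gain, the constant $\alpha$, its NE power, and whether a deviation has been detected. A minimal sketch of the per-stage decision rule follows; the deviation-detection mechanism is abstracted into a boolean flag, which is an assumption of this illustration.

```python
# Sketch of the per-stage decision rule of eq. (14).
# p_ne is the player's one-shot NE power; detection is abstracted as a flag.

def stage_action(t, T, T_min, g2_i, alpha, p_ne, p_max, deviation_detected):
    """Power chosen by player i at stage t under the cooperation plan."""
    if deviation_detected:
        return p_max                 # punishment phase: transmit at full power
    if t <= T - T_min:
        return alpha / g2_i          # cooperative phase: OP power of eq. (13)
    return p_ne                      # last T_min stages: one-shot NE power

# Example: T = 10000 stages, T_min = 4800 (value reported in Section V).
for t in (1, 5200, 6000):
    print(t, stage_action(t, 10000, 4800, g2_i=1.0, alpha=0.05,
                          p_ne=0.02, p_max=0.1, deviation_detected=False))
```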
New OP for the game G Consider the operating point (OP) described in [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]. It is identified by a subset of points of the achievable utility region such that p i |g i | 2 = p j |g j | 2 for all (i, j) ∈ N . The optimal subset of powers consists of the solutions of the following system of equations: ∀(i, j) ∈ N , ∂u i ∂p i (p) = 0 with p i |g i | 2 = p j |g j | 2 (11) with u i is the utility function defined in (4). Due to the presence of the parameter b which we consider different from 0, it can be observed that there will be N different solutions corresponding to equation (11) in terms of p i and thus the operating point from [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF] is not well defined when using the utility defined in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF]. To deal with this problem, a new OP is proposed. The new OP consists in setting p i |g i | 2 to a constant denoted as α that can be optimized. We propose to determine a unique optimal α by maximizing the expected sum utility over all the channel states as follows: α = arg max E g N i=1 u i (α, g) (12) When playing at the OP, the power played by the i th player, denoted as pi , is given by: pi = α |g i | 2 (13) In the following section, we focus on the characterization of the finite RG. B. Repeated power control game characterization As a first step, we determine the minimum number of stages (T min ) corresponding to the finite RG. When the number of stages in the game is less than T min , the equilibrium of the RG is to simply play at the NE. However, if T > T min , a more efficient equilibrium point can be reached. Assuming that channel gains |g i | 2 lie in a compact set [ν min i , ν max i ] [2], we have the following proposition: Proposition 4.1 (Equilibrium solution for the finite RG): For a finite RG, if T > T min , then the corresponding equilibrium solution is given by [START_REF] Treust | A repeated game formulation of energyefficient decentralized power control[END_REF]: δ i,t : pi for t ∈ {1, 2, . . . , T -T min } p * i for t ∈ {T -T min + 1, . . . , T } P max i for any deviation detection (14) where T min is: Tmin=        Aν max i bν min i +γ i σ 2 B - Gν max i bν min i + αH Eν min i bν max i +γ * i ( σ 2 + 1 L j =i p * j ν max i ) F - Cν min i bν max i + γ i( σ 2 + 1 L j =i p max j ν max i ) D        (15) The proof for this proposition is given in Appendix A, as well as the quantities A, B, C, D, E, F , G and H. γ * i is the SINR at the NE while γi and γ i are the SINRs related to the utility max and the utility minmax respectively (see Appendix A). V. NUMERICAL RESULTS We consider a scenario with a MAC where N transmitters are communicating with their corresponding receiver (e.g. base station). The efficiency function is assumed to be f (x) = e -c/x where c = 2 R R 0 -1 with R and R 0 are the throughput and the used bandwidth and supposed to be 1 Mbps and 1 MHz respectively. The other parameters are set as follows: • σ 2 = 10 -3 W • b = 10 -2 W • K = 10 • q = 0.5 • P max i = P max = 10 -1 W The channel gains are assumed to be |g i | 2 = 1 and |g j | 2 = 0.5. Our simulations consist in showing firstly the advantage of the OP regarding the NE of the one shot game. 
Our simulations consist firstly in showing the advantage of the OP over the NE of the one-shot game. Thus, we plot the achievable utility region, the NE and the proposed OP when considering a system of two transmitters and a spreading factor $L = 2$. In Fig. 2, the region delimited by the Pareto frontier and the minmax level defines, according to the Folk theorem, the possible set of equilibrium utilities of the RG. In addition, we highlight that the new OP Pareto-dominates the NE and is Pareto-efficient.

Fig. 3 represents the ratio $\frac{w_{FRG}}{w_{NE}}$ for the finite RG as a function of the number of stages. We have:

$\frac{w_{FRG}}{w_{NE}} = \frac{\sum_{i=1}^{N} \left( \sum_{t=1}^{T - T_{min}} \tilde{u}_i(\mathbf{p}(t)) + \sum_{t=T - T_{min}+1}^{T} u_i^*(\mathbf{p}(t)) \right)}{\sum_{i=1}^{N} \sum_{t=1}^{T} u_i^*(\mathbf{p}(t))}$   (16)

We consider the same system as in Fig. 2 (two transmitters and a spreading factor $L = 2$). We average over channel gains lying in a compact set such that $10 \log_{10} \frac{\nu^{max}}{\nu^{min}} = 20$. According to equation (15), the minimum number of stages $T_{min}$ is equal to 4800. From this figure, we deduce that the social welfare can be improved when playing a RG.

Fig. 3. Improvement of the social welfare in the finite repeated game vs. the Nash equilibrium. While the efficiency of the RG using the traditional metric defined in [START_REF] Goodman | Power control for wireless data[END_REF] seems to be higher, it requires a longer game than with the cross-layer model.

Fig. 3 plots the improvement of the social welfare as defined in (16). This improvement is compared, when using the metric defined in [START_REF] Goodman | Power control for wireless data[END_REF], to the one obtained with the cross-layer metric used here. The required time for profiting from the RG scenario is much lower in the cross-layer case, but the improvement seems to be relatively smaller. However, note that the NE of the cross-layer game is itself more efficient than the NE of [START_REF] Goodman | Power control for wireless data[END_REF], so in absolute terms the proposed OP is still quite efficient and can be utilized for shorter games. This validates our approach and shows that the RG formulation is a useful technique for efficient distributed power control.

VI. CONCLUSION

In this paper, we study an efficient solution for a relevant game with a new EE metric based on a cross-layer approach and taking into account the effects of the presence of a finite-size queue at the transmitter. As the NE is generally Pareto-inefficient, we design a new OP and exploit a repeated game model to improve the performance of a MAC system. We derive the analytic form of the minimum number of stages in a finite RG. Moreover, our approach provides an efficient solution relying on limited CSI at the transmitter when compared to the NE, and contributes considerable gains in terms of social welfare for the finite RG.

APPENDIX A

As a first step, we determine the power $p_i$ maximizing $u_i$, which we denote $\dot{p}_i$. This amounts to setting $\partial u_i / \partial p_i$ to $0$. We recall the following notations: $\gamma_i' = \frac{d\gamma_i}{dp_i} = \frac{\gamma_i}{p_i}$, $f' = \frac{df}{d\gamma_i}$ and $\Phi' = \frac{d\Phi}{d\gamma_i}$.
VI. CONCLUSION

In this paper, we study an efficient solution for a relevant game with a new EE metric based on a cross-layer approach, taking into account the presence of a queue with a finite size at the transmitter. As the NE is generally Pareto-inefficient, we design a new OP and exploit a repeated game model to improve the performance of a MAC system. We derive the analytic form of the minimum number of stages in a finite RG. Moreover, our approach provides an efficient solution relying on limited CSI at the transmitter compared with the NE, and yields considerable gains in terms of social welfare for the finite RG.

APPENDIX A

As a first step, we determine the power p_i maximizing u_i, which we denote by ṗ_i. This amounts to setting ∂u_i/∂p_i to 0. We use the following notations: γ'_i = dγ_i/dp_i = γ_i/p_i, f' = df/dγ_i and Φ' = dΦ/dγ_i. The power ṗ_i maximizing u_i is then the solution of the following equation:

$$b\,\gamma_i'\,p_i\,\Phi'(\gamma_i) + q\,\frac{1-\Phi(\gamma_i)}{f(\gamma_i)^2}\,\big[f(\gamma_i) - \gamma_i f'(\gamma_i)\big] = 0. \tag{17}$$

Therefore, the maximum utility writes as:

$$u_i(\dot{p}_i, \mathbf{p}_{-i}) = \frac{R\,q\,(1-\Phi(\dot{\gamma}_i))}{b + \dot{p}_i\, q\,(1-\Phi(\dot{\gamma}_i))/f(\dot{\gamma}_i)}, \quad \text{with} \quad \dot{\gamma}_i = \frac{\dot{p}_i |g_i|^2}{\sigma^2 + \frac{1}{L}\sum_{j\neq i} p_j |g_j|^2}.$$

In a second step, we study the behavior of u_i(ṗ_i, p_{-i}) as a function of p_j for j ≠ i, which amounts to determining the sign of ∂u_i(ṗ_i, p_{-i})/∂p_j; this derivative is shown to be negative in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF]. As u_i(ṗ_i, ·) is a decreasing function of p_j, it reaches its maximum when p_j = 0 and its minimum when p_j = p_j^max (for all j ≠ i).

A. Expression of ū_i

The utility u_i(ṗ_i, ·) reaches its maximum when p_j = 0. Substituting p_j = 0 in the SINR expression yields the equation determining the optimal power:

$$b\,\frac{|g_i|^2}{\sigma^2}\,\Phi'(\gamma_i) + q\,\frac{1-\Phi(\gamma_i)}{f(\gamma_i)^2}\,\big[f(\gamma_i) - \gamma_i f'(\gamma_i)\big] = 0. \tag{18}$$

As the latter equation is a function of the SINR, the solution is expressed in terms of the SINR and is denoted γ̄_i. The corresponding optimal power is p̄_i = γ̄_i σ^2/|g_i|^2. Then we have:

$$\bar{u}_i = \frac{R\,q\,(1-\Phi(\bar{\gamma}_i))}{b + \dfrac{\bar{\gamma}_i \sigma^2}{|g_i|^2}\, q\,(1-\Phi(\bar{\gamma}_i))/f(\bar{\gamma}_i)}.$$

B. Expression of u̲_i

We proceed as described previously and determine the optimal SINR, denoted γ̲_i, which is the solution of the following equation:

$$b\,\frac{|g_i|^2}{\sigma^2 + \frac{1}{L}\sum_{j\neq i} p_j^{\max} |g_j|^2}\,\Phi'(\gamma_i) + q\,\frac{1-\Phi(\gamma_i)}{f(\gamma_i)^2}\,\big[f(\gamma_i) - \gamma_i f'(\gamma_i)\big] = 0. \tag{19}$$

Then we have:

$$\underline{u}_i = \frac{R\,q\,(1-\Phi(\underline{\gamma}_i))}{b + \dfrac{\underline{\gamma}_i}{|g_i|^2}\Big(\sigma^2 + \frac{1}{L}\sum_{j\neq i} p_j^{\max} |g_j|^2\Big)\, q\,(1-\Phi(\underline{\gamma}_i))/f(\underline{\gamma}_i)}.$$

C. Existence proof of γ̲_i and γ̄_i

Both equations (18) and (19) derive from the same equation (17) for two different forms of the SINR (γ̄_i for p_j = 0 and γ̲_i for p_j = p_j^max). Showing the existence of these two solutions amounts to proving the existence of a solution of equation (17). According to the study established in [START_REF] Varma | A cross-layer approach for energy-efficient distributed power control[END_REF], u_i is quasi-concave in (p_i, p_{-i}), so there exists γ^+ such that the first derivative of u_i with respect to p_i is strictly positive on [0, γ^+] and strictly negative on [γ^+, +∞) for all p_j ∈ [0, p_j^max]: the first derivative is continuous and equals zero at γ^+. Depending on the utility under study (max or minmax), γ^+ is either γ̄_i (eq. (18)) or γ̲_i (eq. (19)).

D. Proof

The SINR γ̃_i refers to the SINR when playing the new OP. The max and minmax utilities are expressed respectively as:

$$\bar{u}_i = \max_{\mathbf{p}_{-i}} \max_{p_i} u_i(p_i, \mathbf{p}_{-i}), \qquad \underline{u}_i = \min_{\mathbf{p}_{-i}} \max_{p_i} u_i(p_i, \mathbf{p}_{-i}).$$

From [2], we have:

$$\bar{u}_i(\mathbf{p}(t)) + \sum_{s=T-T_{\min}+1}^{T} \mathbb{E}_g\big[\underline{u}_i(\mathbf{p}(s))\big] \;\le\; \tilde{u}_i(\mathbf{p}(t)) + \sum_{s=T-T_{\min}+1}^{T} \mathbb{E}_g\big[u_i^{*}(\mathbf{p}(s))\big]. \tag{20}$$

In order to simplify the expressions, we use the following notations:

A = Rq(1 − Φ(γ̄_i)), B = q(1 − Φ(γ̄_i))/f(γ̄_i), C = Rq(1 − Φ(γ̲_i)), D = q(1 − Φ(γ̲_i))/f(γ̲_i), E = Rq(1 − Φ(γ*_i)), F = q(1 − Φ(γ*_i))/f(γ*_i), G = Rq(1 − Φ(γ̃_i)), H = q(1 − Φ(γ̃_i))/f(γ̃_i).
The inequality (20) then becomes:

$$\frac{A|g_i|^2}{b|g_i|^2 + \bar{\gamma}_i \sigma^2 B} + \sum_{s=T-T_{\min}+1}^{T} \mathbb{E}_g\!\left[\frac{C|g_i|^2}{b|g_i|^2 + \underline{\gamma}_i\big(\sigma^2 + \frac{1}{L}\sum_{j\neq i} p_j^{\max}|g_j|^2\big)D}\right] \;\le\; \frac{G|g_i|^2}{b|g_i|^2 + \alpha H} + \sum_{s=T-T_{\min}+1}^{T} \mathbb{E}_g\!\left[\frac{E|g_i|^2}{b|g_i|^2 + \gamma_i^{*}\big(\sigma^2 + \frac{1}{L}\sum_{j\neq i} p_j^{*}|g_j|^2\big)F}\right].$$

Bounding each term using |g_i|^2 ∈ [ν_i^min, ν_i^max] and solving for the number of stages yields the expression of T_min given in (15).

Fig. 1. Pareto inefficiency of the NE.
Fig. 2. Pareto dominance and Pareto efficiency of the proposed OP regarding the NE.
Fig. 3. Improvement of the social welfare in the finite repeated game vs the Nash equilibrium. While the efficiency of the RG with the traditional metric defined in [START_REF] Goodman | Power control for wireless data[END_REF] appears higher, it requires a longer game than in the cross-layer model.
24,212
[ "963152", "5857", "4065", "1068236" ]
[ "51779", "1289", "92448", "1289", "116203" ]
01745187
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01745187/file/pmc-gs-maxflow.pdf
Zdravko I Botev Pierre L'ecuyer Reliability Estimation for Networks with Minimal Flow Demand and Random Link Capacities Keywords: network reliability, stochastic flow network, Conditional Monte Carlo, permutation Monte Carlo, generalized splitting We consider a network whose links have random capacities and in which a certain target amount of flow must be carried from some source nodes to some destination nodes. Each destination node has a fixed demand that must be satisfied and each source node has a given supply. We want to estimate the unreliability of the network, defined as the probability that the network cannot carry the required amount of flow to meet the demand at all destination nodes. When this unreliability is very small, which is our main interest in this paper, standard Monte Carlo estimators become useless because failure to meet the demand is a rare event. We propose and compare two different methods to handle this situation, one based on a conditional Monte Carlo approach and the other based on generalized splitting. We find that the first is more effective when the network is highly reliable and not too large, whereas for a larger network and/or moderate reliability, the second is more effective. Introduction Network reliability estimation problems are commonplace in various application areas such as transportation, communication, and power distribution systems; see for example [START_REF] Gertsbakh | Models of network reliability[END_REF]. In many of those problems, the states of certain network components are subject to uncertainty and there is a set of conditions under which the network is operational, and one wishes to estimate the network unreliability, defined as the probability u that the network is in a failed state (i.e., is not operational). When u is very small, a standard (crude) Monte Carlo (MC) approach that merely generates the component states, computes the indicator function that the network is operational or not, and averages over n independent runs to estimate u, is unsatisfactory because the relative error (defined as the standard deviation of the estimator divided by the expected value u) of the MC estimator goes to infinity when u → 0. One reliability problem that has received a lot of attention is the static network reliability estimation problem, in which each link of the network is failed with a given probability and the network is operational when a given (specific) subset of the nodes are all connected. Effective estimation methods have been developed for this problem when u is small; see [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF][START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF][START_REF] Elperin | Estimation of network reliability using graph evolution models[END_REF][START_REF] Fishman | A Monte Carlo sampling plan for estimating network reliability[END_REF][START_REF] Gertsbakh | Models of network reliability[END_REF][START_REF] Lomonosov | Combinatorics and reliability Monte Carlo[END_REF] and the references therein. The model considered in this paper is more general. Instead of having only a binary state (up or down), each link has a random capacity that can take many possible values, there is a fixed demand that must be satisfied at certain nodes (called the destination nodes), a fixed supply is available at some other nodes (the source nodes), and the network is operational when it can carry the flow to satisfy all the demands. 
As a special case, there can be a single source node and a single destination node, with a fixed demand, and the network is operational when the maximum flow that can be sent from the source to the destination reaches the demand. We will describe our methods in this particular setting to simplify the notation, but the methods apply to the general setting as well. The case of links with binary states is a special case. The several methods developed for this special case do not readily apply to the network flow setting considered here, but we show how two of the best available methods for the binary case, permutation Monte Carlo (PMC) and generalized splitting (GS), can be adapted to this problem. The adaptation is not straightforward. The PMC method [START_REF] Elperin | Estimation of network reliability using graph evolution models[END_REF][START_REF] Gertsbakh | Models of network reliability[END_REF][START_REF] Lomonosov | Combinatorics and reliability Monte Carlo[END_REF] constructs an artificial continuous-time Markov chain (CTMC) defined as follows. Each capacity is assumed to have a discrete distribution over a finite set of possible values. This can approximate a continuous distribution if needed. We assume that all links start at their minimal capacity, and the capacity of one link may increase each time the CTMC has a jump. The CTMC is constructed so that the probability that the network is failed at time 1 is equal to u. PMC generates the discrete-time Markov chain (DTMC) underlying the CTMC, i.e., only the sequence of states that are visited until the network is operational, and conditional on that sequence it computes the probability that the network is failed at time 1, as an estimator of u. This conditional probability can be computed by exploiting the property that the failure time has a phase-type conditional distribution, whose cumulative distribution function (cdf) and density can be expressed in terms of matrix exponentials. We show how to adapt and apply the PMC principle to our problem. The CTMC construction is quite different from that for the binary case. We also prove, under certain conditions, that the resulting PMC estimator has bounded relative error (BRE) when u → 0 for a given network. GS [START_REF] Botev | Efficient Monte Carlo simulation via the generalized splitting method[END_REF] is a rare-event estimation method in which the rare event is written as the intersection of a nested sequence of events, so that its probability is the product of conditional probabilities. Each conditional probability is estimated thanks to resampling strategies, making the overall estimation more accurate than a direct estimation of the rare-event probability itself. The application of GS to this problem was discussed in [START_REF] Botev | Reliability of stochastic flow networks with continuous link capacities[END_REF] for the situation in which the capacities have a continuous distribution, and experimental results were reported for a small example. But the GS algorithm proposed there does not work in general when the capacities have a discrete distribution. We show however that GS can be applied in the discrete case if we combine it with the same CTMC construction as for PMC. The GS algorithm does not have BRE in the asymptotic regime when u → 0, but it becomes more efficient than PMC when the size of the network increases. The relative error typically grows (empirically) in proportion to log(1/u). The remainder of the paper is organized as follows.
In Section 2, we formulate the network flow model considered in this paper. In Section 3, we construct a CTMC which permits one to apply PMC to this model, for the case where each capacity is distributed over a finite set. In Section 4, we explain how to apply GS to this model. We report numerical experiments in Section 5. Our experimental results agree with the fact that PMC has BRE when u → 0, under appropriate conditions. It can accurately estimate extremely small values of u when the network is not too large. When the network gets larger and u is not too small, on the other hand, GS becomes more effective than PMC.

The model

Let G = (V, E) be a graph with a set of nodes V and a set of links E with cardinality m = |E|. For i = 1, ..., m, link i has a random integer-valued flow capacity X_i with discrete marginal distribution p_i(x) = P[X_i = x] over the set X_i = {c_{i,0}, ..., c_{i,b_i}}, 0 ≤ c_{i,0} < c_{i,1} < ⋯ < c_{i,b_i} < ∞. This is a standard assumption; see [START_REF] Botev | Reliability of stochastic flow networks with continuous link capacities[END_REF] and the references therein. Thus, the random network state X = (X_1, ..., X_m) belongs to the space X = ∏_{i=1}^m X_i and has joint pdf p(x) = P(X = x), for x ∈ X. We also make the standard independence assumption (see [START_REF] Alexopoulos | Capacity expansion in stochastic flow networks[END_REF][START_REF] Bulteau | A new importance sampling Monte Carlo method for a flow network reliability problem[END_REF][START_REF] Chou | An efficient and robust design optimisation of multi-state flow network for multiple commodities using generalised reliability evaluation algorithm and edge reduction method[END_REF]) that p(x) = ∏_{i=1}^m P[X_i = x_i] and that the nodes do not fail. To keep the notation and the exposition simple, in the remainder of the paper we describe the model and the methods under the assumption that there is a single source and a single destination. The generalization to multiple sources and destinations is straightforward, as explained below. The fixed demand level at the destination is d_net > 0 and the maximum flow that can be carried from the source to the destination is a random variable Ψ(X), which is a function of the link capacities. The well-known max-flow min-cut theorem says that the maximum value of a flow from a source to a destination is equal to the minimum capacity of a cut in the network. Efficient algorithms are available to compute Ψ(X), for example the Ford-Fulkerson algorithm. We are interested in estimating the unreliability of the flow network, defined here as

$$u = P[\Psi(\mathbf{X}) < d_{net}] = \sum_{\{x \in \mathcal{X} :\, \Psi(x) < d_{net}\}} p(x),$$

that is, the probability that the maximum flow Ψ(X) fails to meet the demand. This problem was considered in [START_REF] Fishman | Monte Carlo: Concepts, algorithms, and applications[END_REF], for example. In the particular case where X_i = {0, 1} for each i and d_net = 1, we have an instance of the static network reliability problem mentioned in the introduction, with the source and destination as the selected set of nodes to be connected. To generalize to multiple sources and destinations, we would assume a fixed demand d_i at each destination node i, a fixed supply s_i at each source node i, and the event {Ψ(X) < d_net} would be replaced by the event that the network does not have sufficient capacity to send flow to satisfy all the demands from the available supplies.
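To fix ideas, the crude Monte Carlo estimator that the methods below are designed to improve upon can be sketched in a few lines with networkx. The graph construction and attribute names are illustrative choices and are not taken from the authors' implementation:

```python
import numpy as np
import networkx as nx

def crude_mc_unreliability(G, caps, probs, s, t, d_net, n=10000, seed=0):
    """Crude Monte Carlo estimate of u = P[Psi(X) < d_net].

    caps[e]  : possible capacities c_{e,0} < ... < c_{e,b_e} for link e
    probs[e] : matching probabilities r_{e,k}
    Both dictionaries are keyed by the edge tuples of G.
    """
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n):
        for e in G.edges():
            G.edges[e]["capacity"] = rng.choice(caps[e], p=probs[e])
        if nx.maximum_flow_value(G, s, t, capacity="capacity") < d_net:
            failures += 1
    return failures / n
```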
For small networks, it is possible to compute and store most of the minimal cutsets or pathsets and use them to obtain exact or approximate values for u; see [START_REF] Jane | Computing multi-state two-terminal reliability through critical arc states that interrupt demand[END_REF][START_REF] Zuo | An efficient method for reliability evaluation of multistate networks given all minimal path vectors[END_REF] for example. But for large networks, no polynomial-time algorithm is known for computing u exactly [START_REF] Colbourn | The combinatorics of network reliability[END_REF], and one must rely on approximations or on estimation via Monte Carlo. Of particular interest is the situation in which the network is highly reliable, i.e., u is a very small rare-event probability, because crude Monte Carlo then becomes ineffective. Several Monte Carlo variance-reduction methods have been proposed for network reliability estimation in rare-event situations; see, e.g., [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF][START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF][START_REF] Cancela | On the RVR simulation algorithm for network reliability evaluation[END_REF][START_REF] Elperin | Estimation of network reliability using graph evolution models[END_REF][START_REF] Gertsbakh | Models of network reliability[END_REF][START_REF] Ramirez-Marquez | A Monte-Carlo simulation approach for approximating multi-state two-terminal reliability[END_REF][START_REF] L'ecuyer | Approximate zero-variance importance sampling for static network reliability estimation[END_REF][START_REF] Tuffin | An adaptive zero-variance importance sampling approximation for static network dependability evaluation[END_REF] and the references given there. Most of these methods are for the special case of independent links with binary states and nodes that never fail. Some have been extended to links with three possible states [START_REF] Gertsbakh | Permutational methods for performance analysis of stochastic flow networks[END_REF][START_REF] Gertsbakh | Network reliability Monte Carlo with nodes subject to failure[END_REF][START_REF] Gertsbakh | Ternary networks: Reliability and Monte Carlo[END_REF], but this remains restrictive. We now describe how two of the most efficient methods, PMC and GS, can be adapted to our model.

Reformulating the model as a CTMC and applying PMC

We now show how to construct an artificial CTMC for this static model, which will permit us to apply PMC as described in the introduction. This CTMC construction differs from that used in [START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF][START_REF] Gertsbakh | Models of network reliability[END_REF].

Constructing the CTMC

For each i, let P(X_i = c_{i,k}) = p_i(c_{i,k}) = r_{i,k} > 0 for k = 0, ..., b_i. Define independent exponential random variables Y_{i,1}, ..., Y_{i,b_i} with rates λ_{i,1}, ..., λ_{i,b_i}, respectively, where the λ_{i,k} still have to be chosen. Suppose that the capacity of link i is c_{i,0} from time T_{i,0} = 0 to time T_{i,1} = min(Y_{i,1}, ..., Y_{i,b_i}) (exclusive), after that it is c_{i,1} from time T_{i,1} to time T_{i,2} = min(Y_{i,2}, ..., Y_{i,b_i}), it is c_{i,2} from time T_{i,2} to time T_{i,3} = min(Y_{i,3}, ..., Y_{i,b_i}), and so on, and finally it is c_{i,b_i} from time T_{i,b_i} to T_{i,b_i+1} = ∞.
Under this process, the capacity of link i at time γ ≥ 0 is given by

$$X_i(\gamma) = c_{i,k} \quad \text{for } T_{i,k} \le \gamma < T_{i,k+1} \text{ and } 0 \le k \le b_i \tag{1}$$
$$\phantom{X_i(\gamma)} = \max_k \{c_{i,k} : T_{i,k} \le \gamma\}. \tag{2}$$

The times T_{i,1}, ..., T_{i,b_i} are not necessarily all distinct; often, many of them are equal, so that the number of jumps at which the capacity changes can be much smaller than b_i. For example, if T_{i,1} = Y_{i,b_i}, then we have T_{i,1} = T_{i,2} = ⋯ = T_{i,b_i}. As another example, if b_i = 3 and Y_{i,2} < Y_{i,1} < Y_{i,3}, then 0 < T_{i,1} = T_{i,2} < T_{i,3} and the capacity of link i jumps from c_{i,0} to c_{i,2} at time T_{i,2} = Y_{i,2} and jumps again from c_{i,2} to c_{i,3} at time T_{i,3} = Y_{i,3}. In general, the process {X_i(γ), γ ≥ 0} has an upward jump at each of the distinct jump times T_{i,k}. To show that this process is a CTMC, suppose that we are at time γ ≥ 0 and X_i(γ) = c_{i,k}. Then we know that Y_{i,k} ≤ γ and that Y_{i,ℓ} > γ for all ℓ > k. The Y_{i,ℓ} for ℓ < k can be anything, but they have no influence on the process trajectory after time γ. This means that the current state X_i(γ) contains all the relevant information that needs to be known at time γ to generate the future of the process. The capacity X_i(γ) of link i at time γ ≥ 0 satisfies

$$P[X_i(\gamma) \le c_{i,k}] = P[\min(Y_{i,k+1}, \ldots, Y_{i,b_i}) > \gamma] = \exp[-\gamma(\lambda_{i,k+1} + \cdots + \lambda_{i,b_i})].$$

If we select the λ_{i,k}'s so that the last expression equals r_{i,0} + ⋯ + r_{i,k} for each k when γ = 1, then X_i(1) has the exact same distribution as X_i, the capacity of link i in the original static model. This is equivalent to having λ_{i,k+1} + ⋯ + λ_{i,b_i} = −ln(r_{i,0} + ⋯ + r_{i,k}). To achieve this, it suffices to put

$$\lambda_{i,b_i} = -\ln(r_{i,0} + \cdots + r_{i,b_i-1}) = -\ln(1 - r_{i,b_i}) \tag{3}$$

and then, for k = b_i − 1, ..., 1 (in descending order):

$$\lambda_{i,k} = -\ln(r_{i,0} + \cdots + r_{i,k-1}) - \lambda_{i,k+1} - \cdots - \lambda_{i,b_i}. \tag{4}$$

Note that (4) can be rewritten as λ_{i,k} = −ln(r_{i,0} + ⋯ + r_{i,k−1}) + ln(r_{i,0} + ⋯ + r_{i,k}), which can never be negative. We have proved the following.

Proposition 1. If we select λ_{i,b_i}, λ_{i,b_i−1}, ..., λ_{i,1} according to (3) and (4) and the process X_i(·) as in (2), then λ_{i,k} ≥ 0 for each k, {X_i(γ), γ ≥ 0} is a CTMC process, and X_i(1) has exactly the same discrete distribution as the capacity of link i in the original model: P[X_i(1) = c_{i,k}] = r_{i,k} for k = 0, ..., b_i. As a result, X(1) = (X_1(1), ..., X_m(1)) has the same distribution as X and one has u = P[Ψ(X(1)) < d_net].
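Proposition 1 lends itself to a quick numerical check. The sketch below computes the rates λ_{i,k} of (3)-(4) for one link and draws one realization of X_i(1) from the jump process (2); averaging many such draws should reproduce the probabilities r_{i,k}. This is an illustrative verification, not part of the estimators themselves:

```python
import numpy as np

def jump_rates(r):
    """Rates lambda_{i,1..b} from Eqs. (3)-(4) for one link with capacity
    probabilities r = (r_0, ..., r_b)."""
    b = len(r) - 1
    lam = np.zeros(b + 1)        # lam[k] holds lambda_{i,k}; lam[0] stays unused
    csum = np.cumsum(r)          # csum[k] = r_0 + ... + r_k
    lam[b] = -np.log(csum[b - 1])                # Eq. (3)
    for k in range(b - 1, 0, -1):                # Eq. (4), in descending order
        lam[k] = -np.log(csum[k - 1]) - lam[k + 1:].sum()
    return lam[1:]

def capacity_at_time_one(c, lam, rng):
    """One draw of X_i(1) from the jump process (2)."""
    Y = rng.exponential(1.0 / lam)               # Y_{i,1}, ..., Y_{i,b}
    # level k is reached by time 1 iff T_{i,k} = min(Y_{i,k}, ..., Y_{i,b}) <= 1
    reached = [0] + [k + 1 for k in range(len(Y)) if Y[k:].min() <= 1.0]
    return c[max(reached)]                       # X_i(1) = max{c_k : T_k <= 1}
```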
Applying PMC

Under the assumption that all links are independent, a simple way of applying PMC to this model is as follows. Generate all the Y_{i,k}'s independently with their rates λ_{i,k}, put them in a large vector Y = (Y_{1,1}, ..., Y_{1,b_1}, ..., Y_{m,1}, ..., Y_{m,b_m}) of size κ = b_1 + ⋯ + b_m, and sort this vector in increasing order to obtain Y_{π(1)} ≤ Y_{π(2)} ≤ ⋯ ≤ Y_{π(κ)}, where π(j) = (i, k) if Y_{i,k} is in position j in the sorted vector, so that π = (π(1), ..., π(κ)) can be seen as the permutation of the pairs (i, k) that corresponds to the sort. This permutation gives an ordering of the κ pairs (i, k). When scanning those pairs in the given order, each pair (i, k) corresponds to a potential capacity increase for link i. The capacity increases if and only if no pair (i, k') with k' > k has occurred before. Conditional on π, one can add those pairs in the given order and update the capacities accordingly, until the maximum flow in the network reaches d_net. Suppose this occurs when adding the pair (i, k) = π(C) for some integer C > 0. Let T_C = Y_{π(C)}. The (unbiased) conditional (PMC) estimator of u is then

$$P[\Psi(\mathbf{X}(1)) < d_{net} \mid \pi] = P[T_C > 1 \mid \pi] = P[T_C > 1 \mid \pi(1), \ldots, \pi(C)] = P[A_1 + \cdots + A_C > 1],$$

where A_1 = Y_{π(1)} is an exponential random variable with rate Λ_1 = ∑_{i=1}^{m} ∑_{k=1}^{b_i} λ_{i,k}, each A_j = Y_{π(j)} − Y_{π(j−1)} is an exponential random variable with rate Λ_j = Λ_{j−1} − λ_{π(j−1)} for j = 2, ..., C, and these A_j's are independent. Given π and C, T_C = A_1 + ⋯ + A_C is the sum of C independent exponential random variables with rates Λ_1, ..., Λ_C, which has a phase-type distribution, whose complementary cdf is given by

$$1 - F(\gamma \mid \pi) = P[T_C > \gamma \mid \pi] = \mathbf{e}_1^{t} \exp(\mathbf{Q}\gamma)\,\mathbf{1} \tag{5}$$

where e_1^t = (1, 0, ..., 0), 1 = (1, ..., 1)^t (the t means "transposed"), and

$$\mathbf{Q} = \begin{pmatrix} -\Lambda_1 & \Lambda_1 & 0 & \cdots & 0 \\ 0 & -\Lambda_2 & \Lambda_2 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -\Lambda_{C-1} & \Lambda_{C-1} \\ 0 & \cdots & 0 & 0 & -\Lambda_C \end{pmatrix}.$$

Reliable and fast computation of (5) is discussed in [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF][START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF]. To compute the critical number C at which the flow reaches the demand, we must be able to efficiently update the maximum flow in the network each time we increase the capacity of one link. We do this as explained in Section 4 of [START_REF] Botev | Reliability of stochastic flow networks with continuous link capacities[END_REF]. We refer to this algorithm as the incremental maximum flow algorithm. To estimate u by PMC, for a fixed threshold d_net, we simulate n independent realizations W_1, ..., W_n of

$$W = W(\pi) = P[T_C > 1 \mid \pi] \tag{6}$$

and take the average W̄_n = (1/n) ∑_{i=1}^{n} W_i. Compared with the crude Monte Carlo estimator that would take the indicator I = I[Ψ(X(1)) < d_net] in place of W, it is always true that Var[W] < Var[I], because W = E[I | π]. The estimators discussed so far are for a single (fixed) demand d_net. With PMC, it is also possible to estimate u = u(d_net) as a function of the demand d_net, over some interval, using the same simulations for all demands. To do this, for any given permutation π, we can compute C = C(d_net) as a function of the demand over the interval of interest. This is a step function, often with just a few jumps. Then we compute W for all values of C that are visited over this interval. This provides an estimator W(d_net) of u(d_net) as a function of d_net. By averaging the n realizations of this estimator, we obtain a functional estimator of u(d_net) over the interval of interest.

Improved PMC

The PMC strategy described earlier can be improved by removing some useless jumps. First, whenever c_{i,k} ≥ d_net for k < b_i, we can immediately remove all the jumps (i, k+1), ..., (i, b_i), because once the capacity of a link has reached d_net, it is useless to increase it further. Capacity levels larger than d_net can in fact all be reset to d_net right away in the model, the probability of d_net in the new model being taken as the probability of the values larger than d_net in the initial model. For simplicity, when the demand is fixed, we assume in our algorithm that this has already been done, so that c_{i,b_i} ≤ d_net for all i, and then there is no need to remove those useless capacity levels. Second, the jump times Y_{i,k} that do not change the capacity of link i can also be removed.
That is, whenever π(j) = (i, k) and the capacity of link i has already reached a value c_{i,k'} > c_{i,k}, i.e., (i, k') = π(j') for some j' < j, there is no need to consider the pair (i, k) when it is encountered in the permutation, so we can remove the corresponding jump. Let π̃ be the permutation obtained after removing all those pairs (i, k) from the sorted vector, and C̃ the corresponding value of C in this reduced permutation. As soon as the max flow reaches d_net, we have found C̃. When we encounter π̃(j) = (i, k) and the previous capacity of link i was c_{i,k'} < c_{i,k}, the capacity of link i jumps to c_{i,k} and we must decrease Λ_j by λ̃_{i,k} = λ_{i,k'+1} + ⋯ + λ_{i,k}, because the jumps that correspond to (i, k'+1), ..., (i, k) can now be removed from consideration.

Algorithm 1: PMC algorithm for multi-state flow network
1: for i = 1 to m do
2:   draw Y_{i,1}, ..., Y_{i,b_i} with the appropriate rates λ_{i,k}
3:   let L_i = {(i,1), ..., (i,b_i)}
4:   min ← (i, b_i) and λ̃_{i,b_i} ← λ_{i,b_i}
5:   for k = b_i − 1 to 1 do
6:     if Y_{i,k} > Y_min then
7:       remove (i, k) from the list L_i
8:       λ̃_min ← λ̃_min + λ_{i,k}
9:       S_{i,k} ← 0   // this jump is deactivated
10:    else
11:      min ← (i, k)
12:      λ̃_{i,k} ← λ_{i,k}
13:      S_{i,k} ← 1   // this jump is activated
14: merge the sorted lists L_1, ..., L_m into a single list sorted in increasing order, Y_{π̃(1)}, ..., Y_{π̃(κ̃)}
15: Λ_1 ← λ_{1,1} + ⋯ + λ_{1,b_1} + ⋯ + λ_{m,b_m}
16: j ← 0
17: X ← (c_{1,0}, ..., c_{m,0})
18: while maximum flow Ψ(X) < d_net do
19:   j ← j + 1
20:   if S_{π̃(j)} = 1 then
21:     (i, k) ← π̃(j)   // this jump has not been removed or executed
22:     Λ_{j+1} ← Λ_j − λ̃_{i,k}
23:     S_{i,k} ← 0
24:     X_i ← c_{i,k}   // increase the capacity of the i-th link
25:     Filter()   // do nothing (default), or FilterSingle, FilterAll, etc.
26: C̃ ← j   // the critical jump number
27: return W ← P[A_1 + ⋯ + A_{C̃} > 1 | π̃, C̃]

Algorithm 1 describes this reduced version of PMC in a more formal way. It returns one realization of W. Indentation delimits the scope of the loops. In the first for loop, for each link i, the algorithm generates the exponential random variables Y_{i,k} and then immediately eliminates those that correspond to (useless) jump times at which the capacity of the link does not change. This preliminary filtering is very easy and efficient to apply and may eliminate a significant fraction of the jumps, especially for links that have many capacity levels. The remaining jumps are sorted in a single list (for all links) and each one receives a Boolean tag S_{π̃(j)}, initialized to 1, which means that this jump is currently scheduled to occur. Then these jumps are "executed" in chronological order, by increasing the corresponding capacities, until the critical jump number C̃ is found. After that, W can be computed. The Boolean variables S_{π̃(j)} are used in the optional Filter() subroutine, which can be used to try to eliminate further useless jumps after a jump is executed and the corresponding capacity is increased (this is discussed in Section 3.4). Algorithm 1 would be invoked n times, independently, and u would be estimated by the average of the n realizations of W. Other variants of the algorithm can be considered and some might be more efficient, but this is not completely clear. For example, instead of generating all the variables Y_{i,k} at the beginning, one may think of generating the permutation π directly without generating those Y_{i,k}, as was done in [START_REF] Gertsbakh | Models of network reliability[END_REF] for the binary case. This appears complicated and we did not implement it.
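The probability returned at step 27 is exactly the phase-type tail (5). A compact sketch of its evaluation with scipy's matrix exponential follows; plain double precision, as used here, can lose accuracy when the rates Λ_j are nearly equal, which is why the paper resorts to high-precision arithmetic for (5):

```python
import numpy as np
from scipy.linalg import expm

def phase_type_tail(Lambdas, gamma=1.0):
    """P[A_1 + ... + A_C > gamma] via Eq. (5): e_1^t exp(Q*gamma) 1,
    where Q is the bidiagonal generator built from the rates Lambda_1..Lambda_C."""
    C = len(Lambdas)
    Q = np.zeros((C, C))
    for j, lam in enumerate(Lambdas):
        Q[j, j] = -lam           # departure rate from phase j
        if j + 1 < C:
            Q[j, j + 1] = lam    # move to the next phase
    return expm(Q * gamma)[0, :].sum()
```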
Removing jumps having no impact on maximum flow

In Algorithm 1, in the case where Filter() does nothing, all pairs (i, k) for which the capacity of link i increases are retained in π̃ and the corresponding jumps are executed. But it sometimes occurs that increasing the capacity of link i to c_{i,k} (or more) is useless because it can no longer have an impact on whether the maximum flow exceeds the demand or not. In this case, one can cancel (deactivate) all the future jumps related to the capacity of link i. In our implementation, these future jumps are canceled by setting their Boolean variables S_{i,k} to 0. Increasing the capacity of link i is useless in particular if it is already possible to send d_net units of flow between the two nodes connected by link i. This obviously happens if the capacity of link i has already reached d_net, which is trivial to verify; but under our assumption (made at the beginning of Section 3.3) that a link has no capacity level above d_net, this cannot happen, and our algorithm ignores this possibility. Increasing the capacity of link i is also useless when d_net units of flow can be sent in total, either directly on link i or indirectly via other links. This is generally harder (more costly) to verify. To detect it, one can run a max-flow algorithm to compute how much flow can be sent between these two nodes. This can be done each time the capacity of a link is increased. Algorithm 2 does this only for the link i whose capacity has just been increased, at each step j. Since the link i generally changes at every step j, we have a different max-flow problem (for a different pair of nodes) at each step. For this reason, in our implementation we recompute the max flow from scratch at each step j. Of course, this brings significant overhead. Algorithm 3 is even more ambitious: it computes the max flow between all pairs of nodes. Then, for each link i for which the current max flow between the corresponding two nodes meets the demand, it cancels all future jumps associated with that link. Doing this at each step j might be too costly, so in our implementation the user selects a positive integer ν and does it only at every ν steps, i.e., when j is a multiple of ν. We compute the max flow for all pairs of nodes using the algorithm of [START_REF] Gusfield | Very simple methods for all pairs network flow analysis[END_REF], which is a simplified variant of the Gomory-Hu method [START_REF] Gomory | Multi-terminal network flows[END_REF]. This algorithm computes the all-pairs max flow by applying |V| − 1 times a max-flow algorithm for one pair of nodes, which is generally more efficient than applying a max-flow algorithm for each link. Algorithm 3 also recomputes the maximum flows from scratch each time it is called, rather than reusing computations from the previous time and just updating the max flows. (In fact, the |V| − 1 pairs of nodes for which the max flow is computed in the algorithm change from one call to the next, and we are not aware of an effective incremental algorithm that would reuse and just update the previous computations.)

Algorithm 2: FilterSingle
  f ← compute the maximum flow between the terminal nodes of link i
  if f ≥ d_net then
    for k = 1 to b_i do
      if S_{i,k} = 1 then
        S_{i,k} ← 0
        Λ_{j+1} ← Λ_{j+1} − λ̃_{i,k}

Algorithm 3: FilterAll
  if j mod ν = 0 then
    {F_{v,w}} ← max flow between all pairs of nodes (v, w), computed via Gusfield's algorithm
    for all i = (v, w) ∈ E do
      if F_{v,w} ≥ d_net then
        for k = 1 to b_i do
          if S_{i,k} = 1 then
            S_{i,k} ← 0
            Λ_{j+1} ← Λ_{j+1} − λ̃_{i,k}
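The all-pairs check of Algorithm 3 can be sketched as follows with networkx, whose Gomory-Hu tree routine yields the same all-pairs max-flow values as Gusfield's variant (both require |V| − 1 max-flow computations); the attribute names and graph representation are illustrative choices, not the authors' implementation:

```python
import networkx as nx

def links_to_deactivate(G, d_net):
    """FilterAll-style check: return the links whose end nodes already
    sustain a flow of at least d_net."""
    T = nx.gomory_hu_tree(G, capacity="capacity")
    deactivate = []
    for (v, w) in G.edges():
        # max flow between v and w = min edge weight on the v-w path in T
        path = nx.shortest_path(T, v, w)
        cut = min(T[a][b]["weight"] for a, b in zip(path, path[1:]))
        if cut >= d_net:
            deactivate.append((v, w))
    return deactivate
```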
Bounded relative error for PMC

For a single run, the crude MC estimator I = I[Ψ(X(1)) < d_net] of u, which is a Bernoulli random variable with mean u, has variance u(1−u), so its relative error (RE) is

$$\mathrm{RE}[I] = \frac{\sqrt{u(1-u)}}{u} \approx u^{-1/2} \to \infty \quad \text{when } u \to 0.$$

With n runs, the variance is divided by n and the RE by √n. When u is very small, we may need an excessively large n to obtain a sufficiently small RE. With PMC, the RE is sometimes much better behaved than with MC. In this section, we obtain conditions under which the PMC estimator W has bounded relative error (BRE), i.e., RE[W] remains bounded when u → 0. The proofs have some similarity with those in [START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF]. Suppose the probabilities r_{i,k} = r_{i,k}(ε) in our model depend on some parameter ε in such a way that u = u(ε) → 0 when ε → 0. In what follows, the quantities in the model are implicitly assumed to depend on ε. A non-negative quantity that may depend on ε is O(1) if it remains bounded when ε → 0. It is Θ(1) if it is bounded and also bounded away from 0 when ε → 0. In our setting, the vector Y and the permutation π have finite length κ and C is bounded by κ. The number of possible permutations is therefore finite. Let p(π) be the probability of permutation π.

Proposition 2. (i) If p(π) = Θ(1) for all π, then the PMC estimator has BRE. (ii) This holds in particular if λ_{i,k_i}/λ_{j,k_j} = Θ(1) for all i, j, k_i, k_j (we then say that the rates are balanced).

Proof. (i) Note that u ≥ P[T_C > 1 | π] p(π) = W(π) p(π) for any π. If p(π) = Θ(1), then W(π)/u = O(1) and max_π W(π)/u = O(1). Therefore E[W²/u²] = O(1), which implies BRE. (ii) Note that p(π) = ∏_{j=1}^{κ} λ_{π(j)}/Λ_j. Under the given assumption, λ_{π(j)}/Λ_j = Θ(1) for all j, which implies that p(π) = Θ(1) for all π.

As a concrete illustration of an asymptotic regime in which the r_{i,k} depend on ε, we define a regime similar to one that has been widely used for highly reliable Markovian systems [START_REF] Nakayama | General conditions for bounded relative error in simulations of highly reliable Markovian systems[END_REF][START_REF] Rubino | Markovian models for dependability analysis[END_REF][START_REF] Shahabuddin | Importance sampling for the simulation of highly reliable Markovian systems[END_REF]. Suppose that link i operates at capacity c_{i,k} with probability r_{i,k} = a_{i,k} ε^{d_{i,k}} for some constants a_{i,k} > 0 and d_{i,k} > 0 independent of ε, for all k ∈ {0, ..., b_i − 1} (that is, when not at full capacity). This implies that r_{i,b_i} = 1 − ∑_{k=0}^{b_i−1} r_{i,k} = Θ(1) for all i. That is, the event that any link is not at full capacity is a rare event. This implies that failure to meet the demand is a rare event, and therefore RE[I] → ∞ when ε → 0. More specifically, any state vector x = (c_{1,k_1}, ..., c_{m,k_m}) for which Ψ(x) < d_net has probability P(X = x) = ∏_{i=1}^{m} r_{i,k_i} = Θ(ε^{d(x)}) for some d(x) > 0. Let d_min = min{d(x) : Ψ(x) < d_net}. Then u = ∑_{x:Ψ(x)<d_net} P(X = x) = Θ(ε^{d_min}). On the other hand, we have the following.
Proposition 3. In the setting just defined, the PMC estimator has BRE.

Proof. Recall that for 1 ≤ i ≤ m, λ_{i,b_i} = −ln(∑_{k=0}^{b_i−1} r_{i,k}) = Θ(ln ε). Moreover, for all k < b_i, λ_{i,k} = ln(∑_{ℓ=0}^{k} r_{i,ℓ}) − ln(∑_{ℓ=0}^{k−1} r_{i,ℓ}) = Θ(ln ε). The conditions of Proposition 2 (ii) are then verified, hence the result. The results of this section apply to the improved PMC variants as well; the proofs are easily adapted.

A generalized splitting algorithm

Botev et al. [START_REF] Botev | Reliability of stochastic flow networks with continuous link capacities[END_REF] have explained how to adapt the GS algorithm proposed and studied in [START_REF] Botev | Efficient Monte Carlo simulation via the generalized splitting method[END_REF][START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF] to the stochastic flow problem considered here, but for the situation where the capacities have a continuous distribution. The aim of the algorithm is to obtain a sample of realizations of X which is approximately a sample from the distribution of X conditional on Ψ(X) < d_net. The estimator is then given by the realized sample size (which is random) divided by its largest possible value. The algorithm uses intermediate demand levels d_net = d_τ < ⋯ < d_1 < d_0, where d_0 is the maximal possible flow, achieved when each link i is at its maximal capacity c_{i,b_i}. These levels and their number τ are fixed a priori and chosen so that P[Ψ(X) < d_t | Ψ(X) < d_{t−1}] ≈ 1/s for t = 1, ..., τ−1, and at most 1/s for t = τ, where s is a small integer also fixed (usually, and in all our experiments in this paper, s = 2). The levels are estimated by pilot runs, as explained in [START_REF] Botev | Efficient Monte Carlo simulation via the generalized splitting method[END_REF][START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF]. The algorithm starts by sampling X from its original distribution. If Ψ(X) < d_1, it resamples each coordinate of X conditional on Ψ(X) < d_1, via Gibbs sampling, repeats this s times, and keeps the states X for which Ψ(X) < d_2 (their number is in {0, ..., s}). At each level t = 3, ..., τ, this type of resampling is applied to each state that has been retained at the previous step (for which Ψ(X) < d_{t−1}), by resampling that state twice from its distribution conditional on Ψ(X) < d_{t−1}, and retaining the states for which Ψ(X) < d_t. At the last level, we count the number N of chains for which Ψ(X) < d_τ, and return W = N/s^{τ−1} as an estimator of u. This is repeated n times independently, to produce n independent realizations of W, say W_1, ..., W_n, whose average W̄_n is an unbiased estimator of u. This estimator does not have BRE, because the RE increases with the number of levels; the RE is typically (roughly) proportional to −log u (see [START_REF] Botev | Dependent failures in highly-reliable static networks[END_REF] for a proof of this result in an idealized setting). It can also handle large networks. In general, this GS algorithm is not directly applicable when the capacities have discrete distributions, because then Ψ(X) also has a discrete distribution and it may happen that this distribution is too coarse (e.g., all the probability mass is on just a few possible values). Then it may be impossible to select levels d_t for which P[Ψ(X) < d_t | Ψ(X) < d_{t−1}] ≈ 1/s for t = 1, ..., τ−1.
It is nevertheless possible to apply GS in that case by constructing the vector Y as for PMC in the previous section, and resampling this vector instead of X. Recall that Y is a vector of κ independent exponential random variables. The GS algorithm operates similarly to the one described above, except that the levels 0 = γ_0 < γ_1 < ⋯ < γ_τ = 1 are now on T_C, and the resampling at each step t is for Y and is conditional on T_C > γ_{t−1}. This is valid because {Ψ(X(γ_t)) < d_net} = {T_C > γ_t}. The corresponding GS procedure operates in the same way as the GS algorithm with anti-shocks in [START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF]. We first generate Y from its original distribution. Then at each level t, we take each state (realization or modification of Y) that has been retained at the previous step (for which T_C > γ_{t−1}), we resample all its coordinates s times (i.e., for s Gibbs sampling steps, where each step starts from the result of the previous step) from its distribution conditional on T_C > γ_{t−1}, to obtain two new states, and we retain the states for which T_C > γ_t. At the last level, we count the number N of chains for which T_C > γ_τ = 1, and return W = N/s^{τ−1} as an estimator of u. The resampling of Y conditional on T_C > γ_{t−1} via Gibbs sampling can be done in a similar way as in [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF][START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF]. We first select a permutation π of the κ coordinates of the vector Y. Then, for j = 1, ..., κ, we resample Y_{π(j)} as follows: if π(j) = (i, k), the current capacity X_i(γ_{t−1}) of link i is less than c_{i,k} (or equivalently min(Y_{i,k}, ..., Y_{i,b_i}) > γ_{t−1}), and changing the current capacity of link i to c_{i,k} (or equivalently changing Y_{i,k} to 0) would give T_C < γ_{t−1} (the maximum flow would meet the demand), then we resample Y_{i,k} from its exponential density truncated to (γ_{t−1}, ∞). Otherwise we resample Y_{i,k} from its original exponential density. To sample from the truncated density, it suffices to generate Y_{i,k} from the original density and add γ_{t−1}.
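A single Gibbs sweep of this resampling scheme can be sketched as follows. The predicate deciding whether a coordinate must come from the truncated density (i.e., whether forcing the corresponding capacity increase would make the maximum flow meet the demand before γ_{t−1}) is left as a user-supplied placeholder, since it wraps the incremental max-flow machinery:

```python
import numpy as np

def gibbs_resample_Y(Y, rates, gamma_prev, blocks_TC, rng):
    """One Gibbs sweep over the jump-time vector Y, conditional on
    T_C > gamma_{t-1} (Section 4).

    blocks_TC(Y, idx) is a placeholder predicate: True when setting
    Y[idx] = 0 would make the max flow meet the demand before gamma_prev,
    so that the conditioning forces Y[idx] > gamma_prev.
    """
    for idx in rng.permutation(len(Y)):
        if blocks_TC(Y, idx):
            # truncated exponential on (gamma_prev, infinity): draw from
            # the original density and shift by gamma_prev
            Y[idx] = gamma_prev + rng.exponential(1.0 / rates[idx])
        else:
            Y[idx] = rng.exponential(1.0 / rates[idx])
    return Y
```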
Numerical Examples

In this section, we provide some numerical examples that compare the PMC and GS algorithms, and show how they behave when u → 0. In these examples, we parameterize the models by ε in such a way that u = u(ε) → 0 when ε → 0, exactly as in Section 3.5, in the asymptotic regime where the probability that links are not operating at full capacity goes to zero. For all variants of PMC, we used formula (2) in [START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF] with high-precision arithmetic to compute (5).

Experimental setting

We used the same experimental protocol as in [START_REF] Botev | Static network reliability estimation under the Marshall-Olkin copula[END_REF], comparing four methods. Method PMC refers to Algorithm 1 without any filtering step. PMC-Single and PMC-All refer to PMC combined with the filtering of Algorithms 2 and 3, respectively. Method GS refers to generalized splitting, implemented as described in Section 4. The splitting levels were determined via the adaptive Algorithm 3 of [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF], with n_0 = 500 and s = 2. The levels were estimated using a single run of the adaptive algorithm, and these same levels were used for every independent replication of the GS algorithm. For each example and method, we report the unreliability estimate W̄_n, its empirical relative error RE[W̄_n] = S_n/(√n W̄_n), where S_n² is the empirical variance, and the work-normalized relative variance (WNRV) of W̄_n, defined as WNRV[W̄_n] = T × RE²[W̄_n], where T is the total CPU time (in seconds) for the n runs of the algorithm. One must keep in mind that T and the WNRV depend on the software and hardware used for the computations. The experiments were run on Intel Xeon E5-2680 CPUs, on a Linux cluster. The sample size for every algorithm was n = 5 × 10^4. For each example we use the following model. Each link i has the capacity levels {0, 1, ..., b_i}, i.e., c_{i,k} = k for k = 0, ..., b_i. We take r_{i,k} = P(X_i = c_{i,k}) = ρ^{b_i−k−1} ε for k < b_i and r_{i,b_i} = 1 − ∑_{k=0}^{b_i−1} ρ^{b_i−k−1} ε, where ρ, ε and the {b_i} are model parameters.

A 4 × 4 lattice graph

Our first example uses the 4 × 4 lattice graph, which has 16 nodes and 24 links. The flow has to be sent from one corner to the opposite corner. We take b_i = 8, ρ = 0.6 and d_net = 10, and let ε range from 10^-4 to 10^-13. Table 1 reports the values of W̄_n, RE[W̄_n] and WNRV[W̄_n], for methods GS and PMC, for some values of ε. Figure 1 shows plots of RE[W̄_n] and WNRV[W̄_n] for all four methods. We see that for this small example, PMC-All always has the smallest RE, followed by PMC-Single. These REs increase very slowly when ε decreases, for the values we have tried, and should eventually stabilize when ε → 0. In terms of work-normalized relative variance, i.e., when taking the computing time into account, GS is the most effective method when ε is not too small; but when ε gets smaller, GS requires a larger simulation effort while the effort required by the PMC variants remains approximately stable, so these PMC methods eventually catch up, in agreement with our asymptotic results. Among them, PMC-Single has the smallest WNRV.

Table 1: Estimation of u, RE, and WNRV for some values of ε, for the 4 × 4 lattice example

                     ε = 10^-4      ε = 10^-5      ε = 10^-6      ε = 10^-7      ε = 10^-8
W̄n for PMC           2.99 × 10^-5   2.98 × 10^-6   2.99 × 10^-7   2.99 × 10^-8   2.99 × 10^-9
RE[W̄n] for PMC       3.16 × 10^-2   3.34 × 10^-2   3.41 × 10^-2   3.69 × 10^-2   3.74 × 10^-2
WNRV[W̄n] for PMC     3.17 × 10^-2   3.20 × 10^-2   3.17 × 10^-2   3.62 × 10^-2   3.54 × 10^-2
W̄n for GS            2.98 × 10^-5   2.99 × 10^-6   2.99 × 10^-7   2.98 × 10^-8   2.99 × 10^-9
RE[W̄n] for GS        3.43 × 10^-2   3.32 × 10^-2   3.15 × 10^-2   3.30 × 10^-2   4.33 × 10^-2
WNRV[W̄n] for GS      1.07 × 10^-2   1.43 × 10^-2   1.68 × 10^-2   2.35 × 10^-2   2.86 × 10^-2

Figure 1: RE (left) and WNRV (right) for four methods, for the 4 × 4 lattice graph.

A 6 × 6 lattice graph

Figure 2 shows plots of RE[W̄n] and WNRV[W̄n] for a 6 × 6 lattice graph, with 36 nodes and 60 links. Again, the flow has to be sent from one corner to the opposite corner. Here, for all values of ε considered, PMC-All has the smallest RE while GS wins in terms of WNRV.

Figure 2: RE (left) and WNRV (right) for four methods, for the 6 × 6 lattice graph.

A dodecahedron network

In this example we use the well-known dodecahedron network (Figure 3), with 20 nodes and 30 links, often used as a standard benchmark in network reliability estimation [START_REF] Botev | Static network reliability estimation via generalized splitting[END_REF][START_REF] Cancela | A recursive variance-reduction algorithm for estimating communication-network reliability[END_REF][START_REF] Cancela | Rare event analysis by Monte Carlo techniques in static models[END_REF][START_REF] Cancela | Analysis and improvements of path-based methods for Monte Carlo reliability evaluation of static models[END_REF][START_REF] Tuffin | An adaptive zero-variance importance sampling approximation for static network dependability evaluation[END_REF]. Here we took ρ = 0.7, b_i = 4 and d_net = 5.
Note that when ε is very small, most of the failures occur because there is not enough capacity in the three links connected to node 1 (links 1, 2, 3), or not enough capacity in the three links connected to node 20 (links 28, 29, 30). These are the two bottleneck cuts. Table 2 reports the values of W̄n, RE[W̄n], and WNRV[W̄n], for the GS and PMC methods, for different values of ε. We see that the estimates W̄n agree very well across the two methods. Figure 4 shows RE[W̄n] and WNRV[W̄n] as functions of ε, for all four methods. We see that PMC-All has by far the smallest RE for all ε, and it also wins in terms of WNRV, except for ε > 10^-5 where GS wins. The latter case corresponds approximately to u ≥ 7 × 10^-11, which is already pretty small. When ε decreases, the WNRV increases for GS partly because the RE increases, but also because the computing time increases. The figure shows what happens when ε gets very small.

Table 2: Estimation of u, RE, and WNRV for some values of ε, for the dodecahedron example

                     ε = 10^-4      ε = 10^-5      ε = 10^-6       ε = 10^-7       ε = 10^-8
W̄n for PMC           n/a            n/a            7.06 × 10^-13   7.06 × 10^-15   7.05 × 10^-17
RE[W̄n] for PMC       8.63 × 10^-2   7.21 × 10^-2   6.68 × 10^-2    5.97 × 10^-2    5.86 × 10^-2
WNRV[W̄n] for PMC     1.85 × 10^-1   1.37 × 10^-1   1.14 × 10^-1    8.95 × 10^-2    8.57 × 10^-2
W̄n for GS            7.07 × 10^-9   7.06 × 10^-11  7.07 × 10^-13   7.07 × 10^-15   7.05 × 10^-17
RE[W̄n] for GS        3.95 × 10^-2   4.30 × 10^-2   4.58 × 10^-2    5.17 × 10^-2    4.97 × 10^-2
WNRV[W̄n] for GS      3.13 × 10^-2   4.52 × 10^-2   6.00 × 10^-2    8.81 × 10^-2    1.06 × 10^-1

Figure 3: A dodecahedron graph with 20 nodes and 30 links (figure taken from [3]).
Figure 4: RE (left) and WNRV (right) for four methods, for the dodecahedron example.

Acknowledgments
This work has been supported by the Australian Research Council under DE140100993 Grant to Z. I.
Botev, an NSERC-Canada Discovery Grant, a Canada Research Chair, and an Inria International Chair to P. L'Ecuyer. P. L'Ecuyer acknowledges the support of the Faculty of Science Visiting Researcher Award at UNSW. We are grateful to Rohan Shah, who performed the numerical experiments.
44,853
[ "833462", "830166" ]
[ "75152", "25645", "491181" ]
01745195
en
[ "spi" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745195/file/laera_19774.pdf
Davide Laera email: davide.laera@poliba.it Thierry Schuller Kevin Prieur Daniel Durox Sergio M Camporeale Sébastien Candel

Flame Describing Function analysis of spinning and standing modes in an annular combustor and comparison with experiments

This article reports a numerical analysis of combustion instabilities coupled by a spinning mode or a standing mode in an annular combustor. The method combines an iterative algorithm involving a Helmholtz solver with the Flame Describing Function (FDF) framework. This is applied to azimuthal acoustic coupling with combustion dynamics and is used to perform a weakly nonlinear stability analysis yielding the system response trajectory in the frequency-growth rate plane until a limit cycle condition is reached. Two scenarios for mode type selection are tentatively proposed. The first is based on an analysis of the frequency-growth rate trajectories of the system for different initial solutions. The second considers the stability of the solutions at limit cycle. It is concluded that a criterion combining the stability analysis at the limit cycle with the trajectory analysis might best define the mode type at the limit cycle. Simulations are compared with experiments carried out on the MICCA test facility equipped with 16 matrix burners. Each burner response is represented by means of a global FDF and it is considered that the spacing between burners is such that coupling with the mode takes place without mutual interactions between adjacent burning regions. Depending on the nature of the mode being considered, two hypotheses are made for the FDFs of the burners. When instabilities are coupled by a spinning mode, each burner features the same velocity fluctuation level, implying that the complex FDF values are the same for all burners. In the case of a standing mode, the sixteen burners feature different velocity fluctuation amplitudes depending on their relative position with respect to the pressure nodal line. Simulations retrieve the spinning or standing nature of the self-sustained mode that was identified in the experiments, both in the plenum and in the combustion chamber. The frequency and amplitude of velocity fluctuations predicted at limit cycle are used to reconstruct time-resolved pressure fluctuations in the plenum and chamber and heat release rate fluctuations at two locations. For the pressure fluctuations, the analysis provides a suitable estimate of the limit cycle oscillation and suitably retrieves experimental data recorded in the MICCA setup, in particular reflecting the difference in amplitude levels observed in these two cavities. Differences between measured and predicted amplitudes appear for the heat release rate fluctuations. Their amplitude is found to be directly linked to the rapid change in the FDF gain as the velocity fluctuation level reaches the large amplitudes corresponding to the limit cycle, underlining the need for FDF information at high modulation amplitudes.

Introduction

Many recent studies focus on combustion instabilities coupled by azimuthal modes in annular systems. There are as yet few comparisons between predictions and well controlled experiments. The present investigation aims at filling this gap by developing a numerical framework. The main elements of this study are the following:
• Two different operating conditions are considered, one leading to a spinning limit cycle and another one leading to a standing limit cycle.
• This framework is then used to calculate the limit cycles of standing and spinning solutions and to compare the calculated oscillations with measurements on a laboratory scale test facility, "MICCA", developed at EM2C laboratory. The amplitudes and phase relationships of the pressure in the plenum and chamber and of the heat release rate signals are compared for two different operating conditions.
• Finally, two scenarios are tentatively proposed to explain the mode type selection. The first considers that the frequency and growth rate trajectories of initially spinning and standing modes determine the solution at the limit cycle. The second, suggested by a reviewer, considers that it is the stability of the limit cycle which determines the observed oscillation.

At this point, it is worth briefly reviewing some recent investigations of instabilities in annular devices. Combustion instabilities coupled by azimuthal modes are often studied by theoretical or numerical means. On the numerical level, one finds a growing number of massively parallel large eddy simulations of annular chambers with multiple burners [START_REF] Staffelbach | Large eddy simulation of self excited azimuthal modes in annular combustors[END_REF][START_REF] Fureby | Les of a multi-burner annular gas turbine combustor[END_REF][START_REF] Wolf | Using les to study reacting flows and instabilities in annular combustion chambers[END_REF]. These calculations retrieve features of azimuthally coupled combustion instabilities observed in experiments in engine-like conditions [START_REF] Worth | Self-excited circumferential instabilities in a model annular gas turbine combustor: global flame dynamics[END_REF]. However, the complexity of the situation precludes direct comparison between calculations and observations. One difficulty in performing such comparisons lies in the definition of the flame model. The first investigations of combustion instabilities coupled to azimuthal modes were performed by combining a low-order acoustic model of an annular combustor with a time-delayed n-τ flame response (see for example [START_REF] Stow | Thermoacoustic oscillations in an annular combustor[END_REF][START_REF] Evesque | Low-order acoustic modelling for annular combustors: validation and inclusion of modal coupling[END_REF]). This kind of model is also assumed in the recent analytical studies developed in [START_REF] Parmentier | A simple analytical model to study and control azimuthal instabilities in annular combustion chambers[END_REF][START_REF] Bauerheim | An analytical model for azimuthal thermoacoustic modes in an annular chamber fed by an annular plenum[END_REF] to analyze the linear stability of annular chambers fed by an annular plenum with multiple discrete burners. Both spinning and standing modes are predicted depending on the circumferential symmetry of the system. Circumferential instabilities of industrial combustors were analyzed in [START_REF] Krebs | Thermoacoustic stability chart for high-intensity gas turbine combustion systems[END_REF][START_REF] Campa | Prediction of the thermoacoustic combustion instabilities in practical annular combustors[END_REF] by means of a Helmholtz solver approach. In these studies, the flame response was modeled by an n-τ description with parameters retrieved from CFD calculations of the steady combustion process. These previous studies carried out with linear tools could not account for the finite amplitude effects that determine the oscillation frequency and level at the limit cycle.
These features were considered for example in [START_REF] Pankiewitz | Time domain simulation of combustion instabilities in annular combustors[END_REF][START_REF] Stow | A time-domain network model for nonlinear thermoacoustic oscillations[END_REF] by combining different numerical strategies for acoustic propagation with a nonlinear flame model in the time domain. Numerical control strategies for annular configurations featuring spinning limit cycles were developed in [START_REF] Morgans | Model-based control of combustion instabilities in annular combustors[END_REF][START_REF] Illingworth | Adaptive feedback control of combustion instability in annular combustors[END_REF]. Spinning and standing modes were observed in [START_REF] Pankiewitz | Time domain simulation of combustion instabilities in annular combustors[END_REF] depending on whether the mean flow velocity was neglected or considered in the simulations. For a circumferential instability in an axisymmetric geometry, the spinning waveform was always preferred to the standing mode in the simulations carried out in [START_REF] Stow | Thermoacoustic oscillations in an annular combustor[END_REF]. However, no comparisons with experiments were reported in these works. On a theoretical level, criteria for the appearance of spinning or standing modes have been derived by considering the dynamics of azimuthal modes coupled by a nonlinear flame model expressed in terms of pressure perturbations alone. One of the first analyses was carried out in [START_REF] Schuermans | Non-linear combustion instabilities in annular gas-turbine combustors[END_REF] with a saturation function linking heat release and pressure fluctuations. In this framework, the dynamics of azimuthal modes is described by two harmonic oscillators which are nonlinearly coupled. The stability of standing and traveling waves at limit cycle can then be assessed. A further simplification was later introduced by assuming that the system behaves like a Van der Pol oscillator [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF][START_REF] Ghirardo | Azimuthal instabilities in annular combustors: standing and spinning modes[END_REF][START_REF] Noiray | On the dynamic nature of azimuthal thermoacoustic modes in annular gas turbine combustion chambers[END_REF]. Results indicate that the spinning or standing nature of the unstable mode originates from the nonlinearity and non-uniformity of the flame response and can be influenced by different factors, like transverse velocity fluctuations [START_REF] Dawson | Flame dynamics and unsteady heat release rate of self-excited azimuthal modes in an annular combustor[END_REF][START_REF] Oconnor | Recirculation zone dynamics of a transversely excited swirl flow and flame[END_REF] or turbulence, which can stochastically disturb the limit cycle amplitudes. This nonlinear flame model was used for example in [START_REF] Bothien | Analysis of azimuthal thermo-acoustic modes in annular gas turbine combustion chambers[END_REF] to reproduce the dynamical behavior observed in a real engine. Another comparison is presented in [START_REF] Noiray | Deterministic quantities characterizing noise driven hopf bifurcations in gas turbine combustors[END_REF] between numerical and experimental growth rates calculated by means of a system identification technique, but the oscillation levels of the different pressure signals are not shown. Recently, Ghirardo et al.
[START_REF] Ghirardo | State-space realization of a describing function[END_REF][START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] managed to introduce into their time domain model a more reliable FDF, linking heat release and pressure disturbances by a time-invariant nonlinear operator. Criteria for self-sustained thermo-acoustic instabilities coupled by spinning and standing modes were then derived by examining the stability of the analytical solutions at limit cycles. This framework was then tested with experimental data from Bourgouin et al. [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF] , where a stable spinning mode is observed at the limit cycle. Their analysis confirmed that, for the operating condition explored, there was a stable spinning solution and that standing solutions, if they existed, were unstable. In all of these previous studies there are no direct comparisons between predictions and measurements for pressure and heat release rate oscillations, and only limited validations of model predictions for different operating conditions. The difficulties encountered in these various investigations are compounded by the presence of multiple flames, which respond collectively over a wide frequency range, and by the modal density of the annular geometry when the size of the system is large, as in industrial gas turbine combustors. One possible simplification consists in considering that the heat release from the different burners is uniformly distributed over the circumference of the annular chamber. Following this approach, Bourgouin et al. [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF] developed an analytical one-dimensional framework to represent the dynamics of the laboratory-scale MICCA annular combustor. Assuming a simplified flame response, the spinning instability recorded during experiments was reproduced in terms of frequency and amplitude of velocity fluctuations at the limit cycle. A theoretical interpretation was also proposed for the angular shift observed in the experiments between the nodal lines in the plenum and in the chamber. However, this analysis was carried out for a fixed frequency and a fixed oscillation level corresponding to the values observed in the experiment at the limit cycle. On the experimental level, there are relatively few data sets corresponding to instrumented conditions that can be used to benchmark models and simulations. Most of the measurements performed on real systems consist of unsteady pressure signals with no access to the flame dynamics [START_REF] Krebs | Thermoacoustic stability chart for high-intensity gas turbine combustion systems[END_REF][START_REF] Bothien | Analysis of azimuthal thermo-acoustic modes in annular gas turbine combustion chambers[END_REF][START_REF] Mastrovito | Analysis of pressure oscillations data in gas turbine annular combustion chamber equipped with passive damper[END_REF] .
Both spinning and standing mode patterns were observed in the laboratory-scale annular device equipped with low swirl injectors developed in the engineering department of Cambridge University [START_REF] Worth | Self-excited circumferential instabilities in a model annular gas turbine combustor: global flame dynamics[END_REF][START_REF] Worth | Modal dynamics of self-excited azimuthal instabilities in an annular combustion chamber[END_REF] . This setup allows heat release rate measurements and flame dynamics analysis through optical windows, but the flame transfer function was not determined and pressure signals were not recorded in the combustion chamber. This article is organized as follows. A novel procedure is derived in Section 2 that combines a Helmholtz solver with sixteen independent FDFs. This procedure is used to determine the limit cycle conditions by means of a weakly nonlinear stability analysis. Depending on the nature of the mode being considered, the FDFs are assumed to operate at equal or at different velocity fluctuation levels for the examination of the dynamics of spinning and standing modes, respectively. This numerical procedure is validated in Appendix A on an idealized geometrical configuration with a simplified flame model, by retrieving the amplitude and stability properties of the theoretical limit cycles [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] . Section 3 describes the MICCA combustor experiment with 16 matrix laminar injectors and the numerical framework used for the stability analysis. The FDF determined for one of the matrix injectors is presented in Section 4 for the first operating condition, leading to a spinning mode limit cycle in the MICCA combustor. The same type of analysis is repeated in Section 5 for the second operating point, leading to a standing mode limit cycle. The frequency and amplitude of velocity fluctuations predicted at the limit cycle are used to reconstruct the pressure oscillations in the plenum and in the chamber and the heat release rate fluctuation signal. These signals are then compared with microphone records and photomultiplier measurements at two different flame locations. A mode type selection analysis is presented in Section 6. For the two operating points, frequency–growth rate trajectories are calculated and the stability properties of the simulated limit cycles are discussed. These analyses are tentatively used to determine the limit cycle structure.

2. Weakly nonlinear stability numerical analysis

The methodology used to assess the stability of the annular combustor acoustic modes and determine the limit cycle state of the system follows that developed for the FDF-based weakly nonlinear stability analysis of single-burner setups [START_REF] Noiray | A unified framework for nonlinear combustion instability analysis based on the flame describing function[END_REF][START_REF] Palies | Nonlinear combustion instability analysis based on the flame describing function applied to turbulent premixed swirling flames[END_REF][START_REF] Silva | Combining a Helmholtz solver with the flame describing function to assess combustion instability in a premixed swirled combustor[END_REF][START_REF] Han | Prediction of combustion instability limit cycle oscillations by combining flame describing function simulations with a thermoacoustic network model[END_REF] .
In the time domain, one considers a wave equation including a damping term, defined by a first-order time derivative of the pressure fluctuation p′ multiplied by a damping rate δ (s⁻¹) [START_REF] Kinsler | Fundamentals of acoustics[END_REF] , and a source term representing the effects of unsteady heat release rate disturbances $\dot q'$:

$$\frac{\partial^2 p'}{\partial t^2} + 4\pi\delta\,\frac{\partial p'}{\partial t} - \nabla\cdot\left(\bar c^2\,\nabla p'\right) = (\gamma - 1)\,\frac{\partial \dot q'}{\partial t}, \qquad (1)$$

where it has been assumed that the mean pressure $\bar p$ is essentially constant, so that $\bar\rho \bar c^2$ is also constant. Assuming that all fluctuations are harmonic, $x' = \hat x \exp(-i\omega t)$, one obtains the following equation in the frequency domain:

$$\frac{\omega^2}{\bar c^2}\,\hat p + i\omega\,\frac{4\pi\delta}{\bar c^2}\,\hat p + \bar\rho\,\nabla\cdot\left(\frac{1}{\bar\rho}\,\nabla\hat p\right) = i\,\frac{\gamma-1}{\bar c^2}\,\omega\,\hat{\dot q}, \qquad (2)$$

where ω denotes the complex angular frequency. The mean density $\bar\rho$, speed of sound $\bar c$ and specific heat ratio γ distributions are specified. In the frequency domain, the analysis is carried out with the finite element method (FEM) implemented in the commercial software COMSOL Multiphysics. This code solves the classical Helmholtz equation, in which heat release rate fluctuations $\hat{\dot q}$ are treated as pressure sources. A nonlinear description of the interaction between combustion and acoustics is required to capture the limit cycles of a thermoacoustic system. If the flame is compact with respect to the wavelength of the unstable mode, the dynamics of the flame may be represented in terms of a global FDF $\mathcal F$ [START_REF] Silva | Combining a Helmholtz solver with the flame describing function to assess combustion instability in a premixed swirled combustor[END_REF] , where the FDF gain and phase lag are functions of the amplitude of the incoming perturbation [START_REF] Noiray | A unified framework for nonlinear combustion instability analysis based on the flame describing function[END_REF][START_REF] Han | Simulation of the flame describing function of a turbulent premixed flame using an open-source les solver[END_REF] . In the frequency domain, $\mathcal F$ links relative heat release rate fluctuations $\hat{\dot Q}/\bar{\dot Q}$ to relative velocity fluctuations $|\hat u/\bar u|$ measured at a reference point of the system. The FDF is a complex function expressed in terms of a gain G and a phase φ as follows:

$$\mathcal F\left(\omega_r, |u'/\bar u|\right) = \frac{\hat{\dot Q}\left(\omega_r, |u'/\bar u|\right)\big/\bar{\dot Q}}{|\hat u/\bar u|} = G\left(\omega_r, |u'/\bar u|\right)\,\exp\!\left[i\varphi\left(\omega_r, |u'/\bar u|\right)\right], \qquad (3)$$

where |u′/ū| stands for the relative velocity fluctuation level, with u′ the root mean square of the velocity signal taken at the reference position in injector unit j, and ω_r corresponds to the real part of the complex frequency ω. A weakly nonlinear approach is used to couple Eq. (3) with Eq. (2), retrieving the solution of the nonlinear problem as a perturbation of a linear problem. This is achieved by linearizing the FDF at a fixed velocity fluctuation level |u′/ū|. The finite element discretization of this set of linearized equations, together with the boundary conditions, results in the following eigenvalue problem [START_REF] Nicoud | Acoustic modes in combustors with complex impedances and multidimensional active flames[END_REF][START_REF] Laera | A finite element method for a weakly nonlinear dynamic analysis and bifurcation tracking of thermo-acoustic instability in longitudinal and annular combustors[END_REF] :

$$[A]P + \omega\,[B(\omega)]P + \omega^2\,[C]P = [D(\omega)]P, \qquad (4)$$

where P is the vector of pressure eigenmodes.
The matrices [A] and [C] contain coefficients originating from the discretization of the Helmholtz equation, [B(ω)] is the matrix of the boundary conditions and of the damping, and [D(ω)] represents the source term due to the unsteady heat release. With the introduction of the heat release rate, the eigenvalue problem defined by Eq. (4) becomes nonlinear and is solved with an iterative algorithm. At the k-th iteration, Eq. (4) is first reduced to a linear eigenvalue problem around a specific frequency Ω_k:

$$\left([A] + \Omega_k\,[B(\Omega_k)] - [D(\Omega_k)]\right)P + \omega_k^2\,[C]P = 0, \qquad (5)$$

where Ω_k = ω_{k−1} is the result of the previous iteration. The software uses the ARPACK numerical routine for large-scale eigenvalue problems, based on a variant of the Arnoldi algorithm called the implicitly restarted Arnoldi method [START_REF] Lehoucq | ARPACK users' guide: solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods[END_REF] . This procedure is iterated until the error defined by ε = |ω_k − Ω_k| becomes lower than a specified value, typically 10⁻⁶ (a sketch of this loop is given below). Once convergence is achieved, the real part of ω yields the oscillation frequency, f = ℜ(ω)/2π Hz, while the imaginary part of ω corresponds to the growth rate α = ℑ(ω)/2π s⁻¹, which allows the identification of unstable modes: p′ ∝ exp(2παt − i2πft). If α is positive, the acoustic mode is unstable and the amplitude of fluctuations grows with time. If α is negative, the acoustic mode is stable and perturbations decay with time. The eigenvalue procedure is repeated by incrementing the amplitude level until a limit cycle condition is reached for α = 0. Differently from the approach of Silva et al. [START_REF] Silva | Combining a Helmholtz solver with the flame describing function to assess combustion instability in a premixed swirled combustor[END_REF] for single-burner setups, in a multi-burner annular combustor the linearization is performed for each of the FDFs assumed in the model. In the present numerical framework, the distribution of velocity fluctuation levels between the FDFs is fixed a priori, depending on the spinning or standing nature of the azimuthal mode under investigation:

• In a spinning mode the nodal line rotates at the speed of sound; the oscillation amplitude, however, is uniform all around the chamber and the velocity fluctuation level |u′/ū| is the same for each burner. In this case, the FDFs of the individual burners take the same complex value [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF] . Mathematically this is formulated as follows:

$$|u'/\bar u|(\theta) = C. \qquad (6)$$

In this expression, θ is a vector containing the angular coordinates of the reference points where the velocity fluctuation is specified, allowing the calculation of the FDF. Each reference point lies on the injector axis, 20 mm below the injector outlet.

• In a standing mode, each injector operates with a different amplitude of oscillation depending on its relative position with respect to the nodal line [START_REF] Durox | Nonlinear interactions in combustion instabilities coupled by azimuthal acoustic modes[END_REF] . As a consequence, different amplitude levels must be assigned to the individual FDFs:

$$|u'/\bar u|(\theta) = \left[\psi(\theta)/\psi_{\max}\right]\, |u'/\bar u|_j, \qquad (7)$$

where ψ(θ)/ψ_max is the normalized azimuthal eigenmode structure. The numerical implementation of this model in the Helmholtz solver framework is not trivial, because the mode ψ is itself the solution of the eigenvalue analysis.
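A minimal sketch of the fixed-point loop of Eq. (5) on a toy diagonal system may help fix ideas; the matrices, the damping value and the FDF-like roll-off in D(ω) are illustrative stand-ins for the operators assembled by the FEM solver, not the actual MICCA discretization:

```python
import numpy as np

# Toy stand-ins for the FEM operators of Eq. (4): a diagonal "stiffness"
# built from four hypothetical acoustic modes, a unit mass matrix,
# constant damping, and a flame source D(omega) with a gain roll-off.
modes_hz = np.array([480.0, 700.0, 950.0, 1200.0])
A = np.diag(-(2 * np.pi * modes_hz) ** 2)
C = np.eye(4)
delta = 12.5                                   # damping rate (1/s)

def B(omega):
    return 1j * 4 * np.pi * delta * np.eye(4)  # damping matrix of Eq. (2)

def D(omega):
    gain = 0.1 * np.exp(-abs(omega) / (2 * np.pi * 2000.0))
    return 1j * gain * omega * np.eye(4)       # heat-release source term

Omega = 2 * np.pi * 450.0                      # initial guess Omega_0 (rad/s)
for k in range(50):
    M = A + Omega * B(Omega) - D(Omega)
    # Eq. (5): (M + omega^2 C) P = 0  ->  omega^2 = eig(-C^{-1} M)
    w = np.sqrt(np.linalg.eigvals(-np.linalg.solve(C, M)))
    w = np.where(w.real < 0.0, -w, w)          # keep the physical branch
    omega = w[np.argmin(np.abs(w - Omega))]    # follow the mode nearest Omega
    if abs(omega - Omega) < 1e-6:
        break
    Omega = omega

f = omega.real / (2 * np.pi)                   # oscillation frequency (Hz)
alpha = omega.imag / (2 * np.pi)               # growth rate (1/s)
print(f"converged in {k + 1} iterations: f = {f:.1f} Hz, alpha = {alpha:.2f} 1/s")
```

With the gain in D set to zero, the fixed point returns α = −δ, consistent with the interpretation of the damping term in Eq. (2).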
The frequency of this mode, and consequently also the pressure distribution, is strongly influenced by the amount of heat release rate fluctuation considered in the model, as shown in [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] (see the trajectory maps in Fig. 9 therein). This results in an iterative procedure where the solution at the k-th iteration is obtained using the ψ distribution computed at the (k−1)-th iteration. This procedure is iterated until the maximum error defined by ε = |ψ(θ)_k − ψ(θ)_{k−1}| is lower than a threshold value, typically 10⁻³. It should be noted that at each iteration, the nonlinear eigenvalue problem defined by Eq. (4) is solved in order to compute the new mode structure ψ [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . A validation of the proposed numerical procedure is discussed in Appendix A. The case investigated considers an annular cavity with a uniform distribution of heat release and a simplified model for the nonlinear flame response to pressure disturbances. Analytical solutions were derived for both spinning and standing limit cycles, together with conditions for their stability [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] . The numerical simulations carried out in this work match the analytical expressions for both the rotating and standing modes, validating the numerical procedure.

3. Experimental setup and numerical representation

We now briefly describe the MICCA combustor shown in Fig. 1a. This system comprises an annular plenum connected by 16 injectors to an annular chamber formed by two cylindrical concentric quartz tubes of 200 mm length. Each injector consists of a cylinder of d_br = 33 mm diameter and l_br = 14 mm length exhausting gases through a l_inj = 6 mm thick perforated plate featuring 89 holes of d_p = 2 mm diameter arranged on a 3 mm square mesh. The system, fed by a propane/air mixture, allows the stabilization of a set of laminar conical flames above the 16 injectors, as shown in Fig. 2 for one injector. Two operating conditions are investigated. Figure 2a shows flames stabilized above a single matrix injector for a stoichiometric mixture φ = 1, while the flames obtained for a slightly richer mixture at φ = 1.11, shown in Fig. 2b, have a larger longitudinal extension. These images were taken for stable operation in a single matrix injector setup. When the MICCA combustor is operated at condition A, a thermo-acoustic instability coupled to a spinning mode with a stable limit cycle is observed [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF] . When the MICCA is operated at condition B, a stable limit cycle coupled to a standing mode is found [START_REF] Durox | Nonlinear interactions in combustion instabilities coupled by azimuthal acoustic modes[END_REF] . Slanted [START_REF] Bourgouin | A new pattern of instability observed in an annular combustor: the slanted mode[END_REF] self-sustained combustion oscillations coupled to azimuthal modes were also identified in this setup when the flow operating conditions were modified. The heat release rate is distributed in the simulations over a small volume located at the exit of each burner.
It consists of a cylindrical volume of height l_f and diameter d_f. A previous study of the same combustor indicates that the dynamics of perturbations is influenced by the extent of the flame domain [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . For conical flames, the flame volume dimensions are deduced by processing the images of the flame region under steady conditions shown in Fig. 2, as described in [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . The numerical representation of the system is shown schematically in Fig. 3a. The plenum consists of an annular cavity linked to the combustion chamber volume by sixteen injection units, as in the real configuration. A model is used to represent the matrix injectors, shown in Fig. 3b. The body of each burner has the same dimensions as in the real system. The perforated plate is replaced by a cylindrical volume having the same thickness l_inj = 6 mm as the perforated plate and a base area of diameter d_inj = 18.9 mm corresponding to the total flow passage area of the injector. The total height of the burner is l_br + l_inj = 20 mm. The combustion chamber is modeled as an annular duct open to the atmosphere, with a length augmented by 41 mm to account for an end correction, resulting in a total length l_cc = 241 mm. The value of this correction was determined experimentally by submitting the MICCA chamber to harmonic acoustic excitations near the unstable resonant mode and by scanning a microphone along a longitudinal axis above the combustion chamber annulus [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . All other boundaries are treated as rigid adiabatic walls. The combustor operates at atmospheric conditions. The temperature of the plenum is set equal to 300 K. Following previous studies [START_REF] Palies | Nonlinear combustion instability analysis based on the flame describing function applied to turbulent premixed swirling flames[END_REF][START_REF] Kinsler | Fundamentals of acoustics[END_REF] , the damping rate is deduced from the resonance response of the system, by imposing an external perturbation with a loudspeaker and measuring the sharpness of the resonance. These measurements were carried out under cold flow conditions, avoiding any corrections to account for absorption or generation of acoustic energy by the flame [START_REF] Palies | Nonlinear combustion instability analysis based on the flame describing function applied to turbulent premixed swirling flames[END_REF] . This, however, introduces some uncertainty, since the value of the damping rate under hot conditions may differ from that estimated at room temperature. Figure 4 shows two acoustic response curves measured in the plenum (Fig. 4a) and in the combustion chamber (Fig. 4b), providing the resonance frequency bandwidth Δf at half-power. The damping rate δ appearing in Eq. (2) is deduced from the frequency bandwidth, 2δ = Δf (s⁻¹).
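A compact illustration of this half-power bandwidth method is sketched below; the Lorentzian response is synthetic, and the resonance frequency and damping values are assumptions chosen to mimic the order of magnitude of Fig. 4:

```python
import numpy as np

# Half-power bandwidth method: delta is recovered from 2*delta = Delta_f.
f = np.linspace(400.0, 560.0, 4001)            # excitation frequency (Hz)
f0, delta_true = 480.0, 12.5                   # assumed resonance / damping
resp = 1.0 / np.sqrt(1.0 + ((f - f0) / delta_true) ** 2)  # synthetic response

half_power = resp.max() / np.sqrt(2.0)         # -3 dB level
band = f[resp >= half_power]
delta_f = band[-1] - band[0]                   # bandwidth at half power (Hz)
print(f"Delta f = {delta_f:.1f} Hz  ->  delta = {delta_f / 2.0:.1f} 1/s")
```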
The azimuthal thermo-acoustic instabilities coupled by spinning and standing modes are discussed in what follows. Numerical simulations are compared with experiments and the stability of the numerical predictions is evaluated.

4. Analysis of operating point A

For operating condition A, corresponding to a stoichiometric propane/air mixture φ = 1 with a bulk flow velocity measured in the cylindrical body of each injector equal to u_b = 0.49 m s⁻¹, the system sustains a well-established spinning limit cycle coupled to the first azimuthal mode at a frequency of 487 Hz [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF] . In the numerical model, the effects of steady combustion are taken into account through a temperature distribution in the gas stream. The temperature was measured with a movable thermocouple along a longitudinal axis passing through the center of one burner location; it varies from 1470 K near the flame zone to 1130 K at the end of the combustion chamber. The flame domain consists of a cylindrical volume of height l_f = 4 mm and diameter d_f = 36 mm (Fig. 3b). These dimensions are deduced by processing the image shown in Fig. 2a, as in [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . This gives a flame volume V_f = 4.18 cm³ which, for a global thermal power per burner of Q̇ = 1.44 kW, yields a heat release rate per unit volume q̇ = 3.2 × 10⁸ W m⁻³. For each burner, the interaction between combustion and acoustics is expressed by making use of a global FDF determined experimentally in a single-burner setup comprising the same injector and equipped with a driver unit, a hot wire and a photomultiplier to measure velocity and heat release rate fluctuations, respectively [START_REF] Noiray | A unified framework for nonlinear combustion instability analysis based on the flame describing function[END_REF][START_REF] Boudy | Describing function analysis of limit cycles in a multiple flame combustor[END_REF] . The reference point for the velocity fluctuation measurements is located in the injection unit, 20 mm below the combustor backplane. Figure 5 shows the FDF used in this analysis. Measurements of the gain and phase lag were carried out for five velocity fluctuation levels, ranging from |u′/ū| = 0.1 to |u′/ū| = 0.5 (white disks in Fig. 5). One difficulty is to gather FDF data at large forcing amplitudes. Due to limitations of the equipment used to modulate the flame, it was not possible to cover the full frequency and amplitude ranges. A well-resolved FDF is used in the simulations by interpolating between the experimental points and extrapolating where experimental samples are missing. At the very high amplitude levels reached in the MICCA experiment, the flames are disrupted, and it is reasonable to represent this behavior by a fast drop of their response to external perturbations. Data are extrapolated at these levels, for each forcing frequency, with a best-fit smooth fourth-order polynomial using the data gathered at lower levels. There is a certain amount of uncertainty introduced by this process, but it is based on measured data points, whereas most theoretical investigations rely on simplified representations.
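A hedged sketch of this per-frequency extrapolation step is given below; the five gain samples are hypothetical values shaped like one frequency slice of Fig. 5a, not measured data:

```python
import numpy as np

# Per-frequency extrapolation of the FDF gain beyond the last measured
# amplitude, using a smooth fourth-order polynomial as in the text.
levels = np.array([0.1, 0.2, 0.3, 0.4, 0.5])    # measured |u'/u| levels
gain = np.array([1.20, 1.10, 0.90, 0.60, 0.25])  # hypothetical gains

coeffs = np.polyfit(levels, gain, deg=4)         # smooth 4th-order polynomial
target = 0.61                                    # amplitude reached at limit cycle
g_ext = max(0.0, np.polyval(coeffs, target))     # clip unphysical negative gain
print(f"extrapolated gain at |u'/u| = {target}: {g_ext:.2f}")
```

The clipping step encodes the assumption, stated above, that the flame response drops quickly once the flames are disrupted.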
The nonlinear n–τ models [START_REF] Dowling | Nonlinear self-excited oscillations of a ducted flame[END_REF][START_REF] Li | Time domain simulations of nonlinear thermoacoustic behaviour in a simple combustor using a wave-based approach[END_REF][START_REF] Laera | A weakly nonlinear approach based on a distributed flame describing function to study the combustion dynamics of a fullscale lean-premixed swirled burner[END_REF] , simplified third-order polynomials of heat release as a function of pressure [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] or time-invariant nonlinear representations of the heat release rate as a function of pressure [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] do not feature all the complexity of the nonlinear flame response considered in the present study. Heat release rate fluctuations in the MICCA annular combustor are assumed to be driven by the fluctuating mass flow rates due to axial velocity perturbations through the corresponding injector [START_REF] Wolf | Using les to study reacting flows and instabilities in annular combustion chambers[END_REF][START_REF] Krebs | Thermoacoustic stability chart for high-intensity gas turbine combustion systems[END_REF] . This assumption is reasonable, as the injectors are well separated in the configuration explored and there is no visible mutual interaction [START_REF] Worth | Modal dynamics of self-excited azimuthal instabilities in an annular combustion chamber[END_REF] . The reference points for the velocity fluctuations in the numerical domain are taken on the axis of each burner, at the same distance from the flame domain as in the experiments.

4.1. Dynamics of an initially spinning mode

In a first stage, following the experimental observations, the stability analysis is carried out for the first azimuthal mode (1A) of the MICCA chamber, starting the simulations with a spinning mode structure and the velocity distribution described by Eq. (6). Figure 6 compares the system trajectories plotted in a frequency–growth rate plane for two values of the damping rate in Eq. (2), δ = 0 s⁻¹ and δ = 12.5 s⁻¹. Each curve is colored with respect to the velocity fluctuation level, which varies from |u′/ū| = 0.10 to |u′/ū| = 0.63 in steps of 0.01. Symbols are only plotted every 10 increments and at the trajectory endpoint to ease reading. The dynamical trajectories of the system are controlled by three free parameters: the frequency f, the growth rate α and the relative velocity amplitude level |u′/ū|. The 1A mode is found to be linearly unstable. For small velocity perturbations, the system features the highest growth rate, about 280 s⁻¹, at a frequency around 400 Hz. A reduction of the growth rate and a substantial increase of the instability frequency are observed when |u′/ū| is augmented. This is due to the reduction of the FDF gain when the velocity fluctuation level increases, as can be observed in Fig. 5a. If no damping is considered, the limit cycle condition is reached at the velocity fluctuation level that nullifies the gain of the FDF. When damping is considered, the limit cycle is reached for an amplitude level |u′/ū| = 0.61, which corresponds to an FDF gain G = 0.08 at a frequency f = 473 Hz. This frequency is close to that observed at the limit cycle in the experiments, f = 487 Hz.
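The amplitude sweep that produces such trajectories can be sketched as follows; solve_mode() is a hypothetical stand-in for the Helmholtz/FDF eigenvalue solve, with a toy closure tuned only to reproduce the orders of magnitude quoted above (initial growth rate near 280 s⁻¹, limit cycle near |u′/ū| ≈ 0.61 and f ≈ 473 Hz):

```python
import numpy as np

# Amplitude sweep along the spinning-mode trajectory: re-solve the
# eigenvalue problem at increasing |u'/u| and find the alpha = 0 crossing.
def solve_mode(level):
    f = 400.0 + 120.0 * level                   # toy: frequency rises with level
    alpha = 280.0 * (1.0 - level / 0.63) - 12.5 * level / 0.63
    return f, alpha

levels = np.arange(0.10, 0.64, 0.01)
fa = np.array([solve_mode(a) for a in levels])

i = int(np.argmax(fa[:, 1] <= 0.0))             # first level with alpha <= 0
print(f"limit cycle near |u'/u| = {levels[i]:.2f} at f = {fa[i, 0]:.0f} Hz")
```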
It is worth noting that at the velocity fluctuation level that nullifies the FDF gain, the system shows a negative growth rate α = −12.5 s⁻¹, equal in magnitude to the damping rate determined experimentally. This checks that the dissipation rate is well represented in the numerical procedure. The structure of the unstable mode is now investigated at the limit cycle. Since the sixteen burners have the same FDF and are assumed to operate at the same amplitude level, the circumferential symmetry of the system defined by the annular geometry is conserved. The nonlinear stability analysis then leads to degenerate solutions featuring two azimuthal modes sharing the same frequency, with spatial structures shifted by π/2, as in the linear case [START_REF] Bauerheim | Symmetry breaking of azimuthal thermo-acoustic modes in annular cavities: a theoretical study[END_REF] . It is thus possible to add these two solutions at each amplitude level and obtain a spinning mode. The result is shown in Fig. 7b (bottom) in the form of the pressure phase evolution plotted along the azimuthal direction at a mid-height position in the plenum and at the combustion chamber backplane. The phase evolutions feature a shift of 0.14 rad between the plenum and the combustion chamber. This angular shift is also observed in the experiments and confirmed theoretically [START_REF] Bourgouin | Characterization and modeling of a spinning thermoacoustic instability in an annular combustor equipped with multiple matrix injectors[END_REF][START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] . The pressure magnitude |p̂| shown in Fig. 7a is obtained by plotting pressure contour lines computed by the Helmholtz solver on a cylindrical surface passing through the middle of the combustion chamber, the plenum, the burners and the microphone waveguides. The pressure iso-lines are deformed in the vicinity of the burners due to near-field acoustic interactions with the injectors, the heat release zone and the waveguides. However, a spectral analysis of the pressure distribution p′ = p′(θ), not shown here, features small harmonic levels. At the burner locations, the harmonic content remains within 6% of the signal amplitude and falls to 1% one centimeter away from the chamber backplane, as highlighted by the pressure iso-lines plotted in Fig. 7a. This is confirmed by the pressure distribution p′(θ) = |p̂| cos[arg(p̂)] along the azimuthal direction plotted in Fig. 7b (top). Deformations appear in the distribution taken at the backplane of the combustion chamber, whereas their influence vanishes in the plenum.

4.2. Stability of the limit cycle

The stability of the spinning equilibrium point is now investigated following ideas developed in [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] : limit cycles coupled to spinning modes are stable if the derivative of the FDF with respect to the amplitude of the spinning mode oscillation (|u′/ū|_sp) is negative. For a constant damping rate, the derivative of the FDF around the equilibrium point can be approximated by the derivative of the growth rate, indicating that the stability criterion can be reformulated as:

$$\left.\frac{\partial \alpha}{\partial |\hat p|}\right|_{|u'/\bar u|_{sp}} < 0. \qquad (8)$$

The growth rate α plotted in Fig. 21c as a function of the perturbation level |u′/ū| features a negative derivative around the spinning oscillation equilibrium point reached for |u′/ū| = 0.61. Eq. (8) is thus satisfied, and one can conclude that the predicted spinning mode is stable, as observed in the experiments.
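Numerically, the criterion of Eq. (8) reduces to a sign check on the slope of the growth rate curve at the equilibrium amplitude; a minimal sketch, reusing the toy trajectory introduced above, reads:

```python
import numpy as np

# Sign check of Eq. (8): the growth rate slope with respect to the
# oscillation amplitude must be negative at the equilibrium point.
levels = np.arange(0.10, 0.64, 0.01)
alpha = 280.0 * (1.0 - levels / 0.63) - 12.5 * levels / 0.63  # toy alpha curve

slope = np.gradient(alpha, levels)              # d(alpha)/d|u'/u|
i_eq = int(np.argmin(np.abs(levels - 0.61)))    # equilibrium at |u'/u| = 0.61
verdict = "stable" if slope[i_eq] < 0.0 else "unstable"
print(f"slope at limit cycle = {slope[i_eq]:.0f} -> {verdict}")
```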
4.3. Comparisons with experiments

Spinning pressure oscillations at the limit cycle are now compared with the microphone records. The measured signals exhibit some harmonic content; one may however note that this harmonic content remains weak. The pressure peak at 974 Hz reaches 10 Pa. In the numerical calculation, signals are reconstructed by considering only the first harmonic, found at 473 Hz in the simulations. Nevertheless, a good match is found in terms of amplitude and phase for the four sensors, indicating that the fundamental dominates and that the numerical procedure is able to predict the difference in amplitude levels between the two cavities of the system. Again, a small phase mismatch can be observed when the comparison is carried out over a longer duration. Time-resolved heat release rate signals are deduced in the experiments from two photomultipliers (H1 and H2) equipped with an OH* filter and placed at the locations shown in Fig. 1c. The heat release rate fluctuation $\dot Q'$ can also be deduced from the simulation:

$$\frac{\dot Q'}{\bar{\dot Q}} = \frac{\hat u}{\bar u}\,\mathcal F\left(\omega_r, |u'/\bar u|\right)\,\exp(i\omega_r t), \qquad (9)$$

where û/ū is the calculated velocity oscillation level at the limit cycle, $\mathcal F$ is the FDF evaluated at the same forcing level, $\bar{\dot Q}$ is the mean heat release rate, ω_r is the angular frequency at the limit cycle and t denotes time. Figure 9 shows the recorded heat release rate signals (dashed lines) and the numerical reconstructions (solid lines). The two photomultiplier records feature nearly the same amplitudes and a phase shift of 1.63 rad, close to the theoretical value of π/2. However, the amplitude is not retrieved by the numerical reconstructions, plotted as continuous lines with rectangular marks in Fig. 9. These reconstructions feature the same phase shift as that recorded by the photomultipliers. In terms of amplitude, the two reconstructions share the same value, but it is about four times lower than the one recorded in the experiments.

Table 1
Sensitivity of the limit cycle oscillation frequency f, the pressure amplitude recorded by microphone MC1 in the chamber, the FDF gain and phase lag, and the relative level of heat release rate measured by photomultiplier H1, with respect to the velocity fluctuation level |u′/ū| reached at limit cycle.

|u′/ū|   f (Hz)   MC1 (Pa)   Gain   φ (×π)   |Q̇′/Q̄̇|

The reason for this sizable difference is now investigated with the help of Eq. (9). Given the relatively low damping of the system (δ = 12.5 s⁻¹), the limit cycle is reached for a large velocity oscillation level, |u′/ū| = 0.61, higher than 0.5, i.e., in a range where the FDF gain is slightly extrapolated and features a steep slope, as can be seen in Fig. 5. The predicted frequency and pressure amplitude are only weakly affected by the resulting uncertainty; this is not the case for the corresponding heat release rate oscillation $\dot Q'/\bar{\dot Q}$, due to the rapid drop of the FDF at large perturbation amplitudes. Data in Table 1 indicate that a reduction of 5% of the velocity fluctuation level at the limit cycle results in a variation of 170% of the FDF gain G and, correspondingly, of the resulting heat release rate oscillation amplitude. The phase of the FDF and the frequency of the resonant mode remain, as a first approximation, unaffected by these changes. The amplitude differences between measurements and numerical predictions are reduced when the heat release rate signals are reconstructed for an oscillation level |u′/ū| = 0.58, as shown by the continuous lines with circular marks in Fig. 9. A perfect match in amplitude between the experimental and numerical signals is still not achieved, but the differences are notably reduced.
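The reconstruction of Eq. (9) and its sensitivity to the limit cycle amplitude can be illustrated as follows; the gain law and phase are toy values standing in for the steep high-amplitude region of Fig. 5, not the measured describing function:

```python
import numpy as np

# Heat release reconstruction of Eq. (9) at two nearby amplitude levels,
# illustrating the sensitivity of the predicted oscillation amplitude.
def fdf(level):
    gain = max(0.0, 1.2 * (1.0 - (level / 0.63) ** 2))  # toy steep roll-off
    phase = -0.9 * np.pi                                 # assumed, ~constant
    return gain, phase

omega_r = 2.0 * np.pi * 473.0                   # limit-cycle frequency (rad/s)
t = np.linspace(0.0, 3.0 / 473.0, 512)          # three oscillation periods

for level in (0.61, 0.58):                      # a 5% change in amplitude
    g, phi = fdf(level)
    q_rel = level * g * np.cos(omega_r * t + phi)       # Q'/Q_bar of Eq. (9)
    print(f"|u'/u| = {level:.2f}: G = {g:.3f}, peak Q'/Q = {q_rel.max():.3f}")
```

Even with this crude closure, the 5% amplitude change more than doubles the predicted heat release rate peak, mirroring the sensitivity reported in Table 1.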
These tests confirm the strong sensitivity of the predicted level of heat release rate reached at the limit cycle to small uncertainties in the velocity oscillation level in the injector. This reflects the fact that small uncertainties in the FDF data gathered at high forcing levels may lead to large deviations of the predicted heat release rate fluctuations, because of the rapid drop of the FDF gain with forcing amplitude when the flame is disrupted.

5. Analysis of operating point B

For an equivalence ratio φ = 1.11 and a bulk flow velocity u_b = 0.66 m s⁻¹, the system features a well-established self-sustained combustion oscillation associated with a standing mode at a frequency of 498 Hz, which was fully characterized in [START_REF] Durox | Nonlinear interactions in combustion instabilities coupled by azimuthal acoustic modes[END_REF] . Two standing mode patterns have been observed in the system for the same operating conditions, depending on the run recorded. Figure 10 shows long time exposure photographs of these two modes, with the position of the nodal line indicated by a red dashed line. The mode structure shown in Fig. 10a features a nodal line between burners I–II and between burners IX–X; this structure is designated as "V type". Figure 10b shows the mode structure with the nodal line between burners V–VI and burners XIV–XV, referred to as "H type". The FDF corresponding to this new operating condition is shown in Fig. 11. The same difficulties in gathering FDF data at high amplitudes, typically |u′/ū| > 0.5, persist for this operating condition. As for operating condition A, the flame volume over which the heat release rate is uniformly distributed in each burner is obtained by post-processing the flame image shown in Fig. 2b [START_REF] Laera | Impact of heat release distribution on the spinning modes of an annular combustor with multiple matrix burners[END_REF] , recorded for operating regime B in the single-burner setup in a thermo-acoustically stable state. This procedure results in a flame volume V_f = 6.11 cm³ distributed over a cylinder of height l_f = 6 mm, 2 mm longer than that used for operating regime A, and of diameter d_f = 36 mm which, for a global thermal power per burner of Q̇ = 2.08 kW, yields a heat release rate per unit volume q̇ = 3.31 × 10⁸ W m⁻³. The mean temperature in the combustion chamber varies from 1521 K near the flame zone to 1200 K at the end of the combustion chamber. The temperature of the plenum is set equal to 300 K.

5.1. Dynamics of an initially standing mode

The numerical procedure described in Section 2 is used for the analysis of the 1A standing mode dynamics when the system operates at regime B. Without unsteady heat release, the circumferential symmetry of the system defined by the annular geometry is conserved and the eigenvalue analysis yields degenerate solutions. Figure 12a shows the structures of the two degenerate modes ψ(0)₁ and ψ(0)₂, computed under passive flame conditions (q̇′ = 0) and plotted over a plane located at the burner inlet section. The two modes share the same frequency, 472 Hz, but their structures are shifted by π/2. The corresponding distribution used to initialize the simulation is shown in Fig. 12b, where symbols indicate the burners' angular locations.
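A sketch of the initialization of the per-burner amplitude distribution of Eq. (7) for the two orientations is given below; the burner angles follow the sixteen-injector layout, while the cosine/sine mode shapes and their orientations are illustrative assumptions:

```python
import numpy as np

# Per-burner amplitude distribution of Eq. (7) for the two standing-mode
# orientations used to initialize the simulations.
theta = 2.0 * np.pi * np.arange(16) / 16.0      # burner angular positions
u_j = 0.10                                       # amplitude at the anti-node

for name, psi in (("H type", np.cos(theta)), ("V type", np.sin(theta))):
    levels = u_j * np.abs(psi) / np.abs(psi).max()   # Eq. (7) per burner
    print(name, np.round(levels, 3))
```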
When a distribution of FDFs is introduced (q̇′ ≠ 0), the rotational symmetry of the system is broken [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] and the eigenvalue analysis no longer yields degenerate azimuthal modes, but two distinct waves characterized by different modal structures. The frequency and the growth rate of these modes depend on the level of asymmetry considered in the system. At the onset of instability, Fig. 11b indicates that the velocity fluctuation level weakly influences the flame response. The eigenvalue analysis of this weakly asymmetric system yields two waves with different yet close frequencies and growth rates (see Appendix B). When the velocity fluctuation level increases, the differences between the gain and phase values taken by the FDFs of the different burners increase. This leads to more strongly asymmetric configurations, in which the two modes resulting from the eigenvalue analysis are characterized by an important shift in growth rate and also in frequency. The frequency shift depends on the amount of heat release rate fluctuation considered in the system [START_REF] Silva | Combining a Helmholtz solver with the flame describing function to assess combustion instability in a premixed swirled combustor[END_REF] . In the validation calculations discussed in Appendix B with a simplified heat release response, the two waves manifest only a growth rate shift, while they share the same frequency. Here, the two solutions feature both frequency and growth rate shifts. As discussed in Section 2, for each velocity level |u′/ū|_j, the mode structure ψ_k computed at the k-th iteration is used in Eq. (7) to modulate the FDFs. However, in order to be consistent, only the mode structure closest to the one chosen at the first iteration (ψ(0)₁ or ψ(0)₂ shown in Fig. 12) is followed until the convergence criterion described in Section 2 is satisfied; a sketch of this branch-tracking step is given below. The dynamical trajectories of the system are first tracked without considering any damping, by setting δ = 0 s⁻¹ in Eq. (2), in order to analyze the influence of the chosen initial distribution. These trajectories are plotted in Fig. 13; to ease reading at the different levels, symbols are only plotted every 10 increments and at the trajectory endpoint. Rectangular marks in Fig. 13 indicate results obtained with the initial mode structure ψ(0)₁ shown in Fig. 12a, designated as H type. Circular marks in Fig. 13 indicate results obtained with the initial mode structure ψ(0)₂ of Fig. 12a, designated as V type. Regardless of the eigenmode used to initialize the simulations, the frequencies and growth rates are the same. It is worth noting that, in contrast to the spinning mode calculations, a limit cycle condition cannot be reached without accounting for a finite damping level, regardless of the velocity oscillation level considered in the system. At each point of the trajectory, flames close to the nodal line always experience a small velocity fluctuation |u′/ū|. This small oscillation leads to a finite heat release rate fluctuation, and this keeps the mode unstable. With the introduction of a damping term, a limit cycle is obtained, as shown in Fig. 14, where the dynamical trajectories without damping, δ = 0 s⁻¹ (circular marks), and with a damping rate δ = 12.5 s⁻¹ (rectangular marks), are compared. Each curve in this figure is colored by the velocity fluctuation level, which varies from |u′/ū| = 0.1 to |u′/ū| = 0.9 in steps of 0.01.
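The branch-tracking step mentioned above can be implemented as a normalized inner-product test; the candidate mode shapes below are hypothetical azimuthal samples at the burner positions:

```python
import numpy as np

# Branch tracking: keep the eigenmode whose azimuthal structure best
# matches the one followed so far.
theta = 2.0 * np.pi * np.arange(16) / 16.0
psi_prev = np.cos(theta)                          # structure followed so far
candidates = [np.cos(theta + 0.05), np.sin(theta)]  # k-th iteration output

def overlap(a, b):
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(candidates, key=lambda psi: overlap(psi_prev, psi))
print("kept branch with overlap =", round(overlap(psi_prev, best), 3))
```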
The limit cycle condition α = 0 is reached in the simulation for a fluctuation level |u′/ū| = 0.86 at a frequency f = 478 Hz, close to the value recorded at the limit cycle in the experiments, f = 498 Hz. As discussed in [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] , the amplitude of a standing limit cycle is greater than the amplitude of the spinning limit cycle that would settle for the same operating conditions and the same flame response model. Even though the operating conditions differ, it is found here that the oscillation level |u′/ū| = 0.86 of the limit cycle of operating condition B, associated with a standing mode, is larger than the level |u′/ū| = 0.61 found for operating condition A, featuring a limit cycle with a spinning pattern. In agreement with the experiments shown in Fig. 10, two different modal structures with nodal lines shifted by π/2 are predicted at the limit cycle, depending on the chosen initial condition. Figure 15a shows the H type limit cycle pressure distribution calculated by the Helmholtz solver. This solution is obtained from the modal pressure distribution ψ(0)₁ shown in Fig. 12 as initial condition. Figure 15b shows the V type limit cycle pressure distribution obtained assuming the initial distribution ψ(0)₂. The distribution of the velocity fluctuation level |u′/ū| reached at limit cycle for both modal structures is shown in Fig. 16. It differs from a pure sinusoid due to the local deformations of the pressure field near the injector outlets, as already discussed for operating condition A. These deformations are taken into account in the ψ function used to fix the velocity oscillation level in the different burners. Symbols in Fig. 16 indicate the levels reached at the sixteen burner positions. One may note that for the largest velocity fluctuations, flow reversal conditions can be reached during part of the oscillation cycle in the injectors located close to the pressure anti-nodal lines. It is next interesting to compare the location of the pressure nodal line. There is no precise experimental determination of its angular position. Examining the long time exposure records shown in Fig. 10, one finds that in both configurations the nodal line is always located between two burners, in a region indicated in grey in Fig. 17. The same figure also shows the predicted nodal line, plotted as a red dashed line for the two structures. For the standing limit cycle featuring a V type structure, the nodal line is predicted between two burners, as shown in Fig. 17a.

5.2. Stability of the limit cycle

The stability of the standing limit cycle is now investigated. According to Ghirardo et al. [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] , a standing limit cycle is stable if three necessary and sufficient conditions are satisfied. In essence, the first condition requires that, at the limit cycle, the growth rate decreases when the velocity level is increased. The growth rate trajectory obtained by varying the maximum amplitude of the velocity oscillation level |u′/ū|, plotted in Fig. 23c (continuous curve with rectangular marks), indicates that the derivative of the growth rate around the standing equilibrium point is negative. This test confirms that the first stability condition is satisfied. The second criterion defines a condition on the orientation of the nodal line.
Ghirardo et al. [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] have demonstrated that this condition is always fulfilled in configurations with a large number of burners. Although the MICCA combustor features a finite number of burners, this second criterion may be regarded as satisfied, considering that heat release rate fluctuations take place in a flame domain which nearly covers the entire surface area of the combustion chamber. The third condition concerns the stability of the standing wave pattern. Stable standing modes must comply with the following inequality:

$$N_{2n} = \int_0^{2\pi} \mathcal F\left(\omega_r,\; |u'/\bar u|_i\,\psi(\theta)\right)\big/ Z(\theta)\;\sin(2n\theta)\, d\theta > 0. \qquad (10)$$

In this expression, n is the azimuthal mode order, here equal to n = 1, and the mode ψ is chosen such that there is a pressure anti-node at θ = π/4. The H type mode has the closest structure fulfilling this condition and is used for the calculation. The FDF values $\mathcal F$ in Eq. (10) are calculated at the limit cycle oscillation frequency f = 480 Hz with the corresponding velocity distribution. The quantity Z(θ) designates the impedance, i.e., the ratio of the pressure in the flame zone to the velocity at the reference point. Results for $\mathcal F(|u'/\bar u|_i\,\psi(\theta))/Z(\theta)$ are plotted in red in Fig. 18. This is a piecewise function taking constant values over the angular extents of the flame zones and zero values between two flames. In the same graph, the function sin(2nθ) is represented in black. The component plotted in red in Fig. 18 changes sign with the local amplitude, so that the overall integral N_2n is positive. The predicted standing limit cycle is thus found to be stable, as observed in the experiments. This is due to the fact that the FDF of the matrix burners investigated in the present study features a phase lag which is sensitive to |u′/ū|, as shown in Fig. 11b. The situation differs slightly from that investigated in Appendix A, where the simplified flame response is characterized by a monotonically decreasing gain but a constant phase lag independent of the input amplitude level; in that case, limit cycles coupled to standing waves are found to be unstable.
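A minimal numerical evaluation of Eq. (10) is sketched below; the piecewise F/Z component is a purely illustrative sign-changing function mimicking the behavior plotted in Fig. 18, and the flame angular extent is an assumed value:

```python
import numpy as np

# Evaluation of Eq. (10): the integrand F/Z sin(2n*theta) is piecewise
# constant over each flame sector; the standing mode is stable if the
# resulting integral N_2n is positive.
n = 1
theta_b = 2.0 * np.pi * np.arange(16) / 16.0    # burner angular positions
width = 0.6 * 2.0 * np.pi / 16.0                 # flame angular extent (assumed)
u_st = 0.86                                      # limit-cycle amplitude level
psi = np.cos(theta_b - np.pi / 4.0)              # anti-node at theta = pi/4

def F_over_Z(amp):
    return 2.0 * amp - 0.6                       # toy sign-changing component

N2n = np.sum(F_over_Z(u_st * np.abs(psi)) * np.sin(2 * n * theta_b) * width)
print(f"N_2n = {N2n:.2f} -> standing mode {'stable' if N2n > 0 else 'unstable'}")
```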
5.3. Comparisons with experiments

Calculations for the standing limit cycle are now compared with measurements. The frequency and amplitude of the velocity fluctuations predicted at the limit cycle at one reference point of the system are used to reconstruct time-resolved pressure fluctuations in the plenum and in the chamber, as well as heat release rate fluctuations. The analysis is only carried out for the results corresponding to the V structure type. Figure 19a (top) compares the measured and reconstructed pressure signals in the plenum. The amplitude and the phase shift between microphones located near the anti-nodal line, i.e., MP1 and MP5, are well captured by the simulations. Calculations also retrieve the amplitudes of the microphones located near the nodal line, i.e., MP3 and MP7, however with a small phase mismatch. This is due to the shift of about 3° between the angular position of the sensors and that of the nodal line shown in Fig. 17a. In the numerical reconstruction, microphones MP3 and MP7 are located on the two sides of the nodal line and their signals are consequently in phase opposition. In the experiments, these records are nearly in phase, as shown by the green and blue dashed lines in Fig. 19a (top). Increasing the interrogation period, one finds a small phase mismatch between experiments and simulations, due to the 20 Hz difference between the measured and predicted limit cycle oscillation frequencies. Figure 19a (bottom) shows the same type of comparison for the pressure signals in the combustion chamber. Again, numerical reconstructions and measurements are identified by continuous and dashed lines, respectively. In the experiments, the signals feature some harmonic content and differ significantly from one another depending on the azimuthal position of the sensors. The analysis of the spectral content of the signals, shown in Fig. 19b (bottom), reveals a second harmonic peak at 992 Hz. A previous study of the same combustor featuring standing mode oscillations [START_REF] Durox | Nonlinear interactions in combustion instabilities coupled by azimuthal acoustic modes[END_REF] has shown that this frequency is associated with the first longitudinal mode of the combustion chamber (1L). For the pressure signals recorded close to the nodal line, the amplitude of the second harmonic peak is comparable to that of the 1A mode, putting in evidence a competition between the 0L–1A plenum oscillation at 495 Hz and a longitudinal mode associated with the 1L–0A mode of the chamber. This yields the distorted signals recorded by microphones MC3 and MC7 shown in Fig. 19a (bottom). The other microphones, located closer to the pressure anti-nodes, feature a second harmonic at 992 Hz with an amplitude about one order of magnitude lower than that of the first harmonic [START_REF] Durox | Nonlinear interactions in combustion instabilities coupled by azimuthal acoustic modes[END_REF] . This yields the roughly sinusoidal signals, with a peak amplitude at 495 Hz of 60 Pa, recorded by microphones MC1 and MC5 and shown in Fig. 19a (bottom). In the FDF framework, the numerical signals can only be reconstructed for the first harmonic oscillation. Overall, a good match is found in terms of both amplitude and phase for the four sensors, indicating that the numerical procedure is able to predict the difference in amplitude levels between the two cavities of the system. Again, a small phase mismatch is observed when the comparison is carried out over a longer duration. The numerical heat release rate signals (continuous lines) reconstructed with Eq. (9) are compared in Fig. 20 with the two OH* light intensity signals (dashed lines) recorded by the photomultipliers (see Fig. 1c). The two photomultiplier signals feature different amplitudes. The injector close to the nodal line features the largest heat release rate oscillations, reaching $\dot Q'/\bar{\dot Q} = 0.5$ as measured by H1. A drop of about 50% of the peak amplitude is observed for the injector close to the anti-nodal line (H2 signal in Fig. 20). It is also worth noting that in this case the signal is not a pure sinusoidal wave, indicating the presence of harmonics. The two signals measured by the photomultipliers are nearly in phase, due to the position of the sensors with respect to the nodal line; the small phase shift is due to the presence of harmonics. As for the pressure signals, the simulated heat release rate signals only contain the first harmonic. The reconstructed numerical signals have the same phase shift as those recorded by the photomultipliers, indicating that the phase of the FDF is well captured.
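The spectral check applied above to the chamber microphones can be reproduced on synthetic data as follows; the sampling rate and signal amplitudes are assumptions chosen to mimic a near-nodal-line record:

```python
import numpy as np

# FFT of a synthetic near-nodal-line chamber record with comparable
# fundamental (495 Hz) and second-harmonic (992 Hz) content.
fs = 16384.0                                    # sampling rate (Hz), assumed
t = np.arange(0.0, 1.0, 1.0 / fs)
p = 20.0 * np.sin(2 * np.pi * 495.0 * t) + 15.0 * np.sin(2 * np.pi * 992.0 * t)

amp = np.abs(np.fft.rfft(p)) * 2.0 / len(p)     # single-sided amplitude (Pa)
freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
top2 = np.sort(freqs[np.argsort(amp)[-2:]])     # two largest spectral peaks
print("dominant peaks near", np.round(top2), "Hz")
```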
The numerical procedure also retrieves the amplitude drop between the two signals; the predicted peak amplitudes, however, differ from those recorded in the experiments. For the H1 signal, the differences between simulations and experiments are mainly due to the mismatch in the position of the nodal line shown in Fig. 17. As already discussed for operating condition A, the heat release rate oscillation level strongly depends on the predicted value of |u′/ū| and on the gain of the FDF in Eq. (9). At high velocity fluctuation levels, the FDF gain drops rapidly, as shown in Fig. 11a. As a consequence, the numerical prediction of the H2 signal is quite sensitive to small uncertainties in the velocity oscillation level at the limit cycle. These uncertainties lead to important variations of the FDF gain G and, consequently, of the predicted heat release rate oscillation at the limit cycle. The origin of the differences observed at limit cycle between measurements and simulations is thus the same as for the spinning mode calculations studied in the previous section.

6. Mode type selection

Following the suggestion of a reviewer, an investigation of the possible scenarios leading to the spinning and standing limit cycles analyzed in the previous sections is now proposed.

6.1. Analysis for operating point A

The stability analysis is repeated for operating condition A but, instead of assuming an initially spinning mode as in Section 4.1, the simulations are initiated with a standing mode structure and the velocity distribution described by Eq. (7). The corresponding trajectory (black continuous line with rectangular marks), plotted in a frequency–growth rate–velocity fluctuation level space, is compared in Fig. 21a with the spinning mode trajectory calculated in Section 4.1 (red continuous line with circular marks). At the starting point, i.e., for |u′/ū| = 0.1, the two trajectories perfectly match. This is due to the fact that in the linear regime, i.e., for values of |u′/ū| ≤ 0.1, the FDF reduces to a transfer function whose gain and phase do not depend on the input amplitude level. This also means that in simulations started with a standing mode structure, each burner of the chamber operates with the same velocity fluctuation. The trajectories can also be projected on the frequency–growth rate (f–α) plane. At the equilibrium point, one finds that the two solutions share the same frequency but differ in the level of velocity fluctuations: the standing mode limit cycle is obtained at higher values of |u′/ū| than the spinning mode one. One may then first discuss the stability of the two equilibrium points. The spinning limit cycle has already been proved to be stable in Section 4.2. To prove the stability of the standing mode solution, the three criteria summarized in Section 5.2 are applied to this new standing equilibrium point. This analysis, with the same experimental FDF shown in Fig. 5, has recently been carried out by Ghirardo et al. [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] . According to this study, a standing limit cycle assumed to occur for a value of |u′/ū|_st = 0.5 and a frequency f = 487 Hz is unstable, and this was accepted as the main reason why the spinning mode prevails in this case.
One may note, however, that the amplitude level adopted in [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] differs from the value obtained from the present numerical results, where |u′/ū|_st ≈ 0.86, and that the frequency is slightly at variance with that deduced from the present calculations. The stability analysis of the limit cycle at |u′/ū|_st ≈ 0.86 and f = 473 Hz is thus repeated here. The standing mode remains unstable, as observed from Fig. 22, where the inequality of Eq. (10) is checked graphically as described in Section 5.2. In addition to the stability analysis of the equilibrium solutions, a second possible mode type selection scenario may be considered by analyzing the two trajectories shown in Fig. 21c. One may observe that the growth rate of the standing mode trajectory is always greater than that of the spinning mode, with the exception of the range |u′/ū| → 0.1, where the spinning mode trajectory has the greater growth rate. This region is highlighted in the zoomed box in Fig. 21c. Noting that the two dynamical trajectories do not cross (Fig. 21a), one may conclude that the nature of the limit cycle is determined in the region where the FDF begins to depend on the input amplitude. According to this scenario, the spinning mode of oscillation is selected in this case because its growth rate slightly exceeds that of the standing mode. Once on that trajectory, the velocity fluctuation level remains uniform over all the injectors, and this leads to a spinning mode limit cycle. When the value of |u′/ū| increases along the spinning mode path, all the flames respond to the same velocity fluctuation level. A jump to the standing mode solution might be possible in principle, but this would require a strong perturbation changing the velocity distribution and providing the amplitude non-uniformity pertaining to the standing mode oscillation. The difference between the growth rates of the standing and spinning solutions is relatively small, and one cannot be totally sure that this explains why the spinning mode is observed experimentally in this case. One can say at least that the range where the bifurcation takes place corresponds to the region where the FDF map shown in Fig. 5 is interpolated between experimental data; the errors in that region are lower than those made when the oscillation amplitude becomes large, near the limit cycle of the standing mode. One may conclude from this analysis that the selection between spinning and standing modes may be explained by examining the stability of the final limit cycles, in agreement with the criteria derived by Ghirardo et al. [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF] . However, mode type selection is perhaps not only a matter of limit cycle stability. The simulations also reveal that the spinning mode features a growth rate exceeding that of the standing mode in a narrow region where the FDF starts to change with the input amplitude level. While the difference is minute, an alternative scenario has been developed on this basis to explain the spinning mode selection. A jump between the spinning and standing limit cycles seems possible but would require a sufficiently strong perturbation modifying the azimuthal distribution of velocity fluctuations. This mechanism would then be somewhat similar to the mode switching process documented in Noiray et al. [START_REF] Noiray | A unified framework for nonlinear combustion instability analysis based on the flame describing function[END_REF] .
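The trajectory scenario can be condensed into a comparison of growth rates at the onset of nonlinearity; the sketch below uses toy linear trajectories shaped like Fig. 21c, with a hypothetical margin of a few s⁻¹:

```python
import numpy as np

# Trajectory-based selection: compare spinning and standing growth rates
# where the FDF starts to depend on the input amplitude.
levels = np.arange(0.10, 0.64, 0.01)
alpha_spin = 280.0 - 460.0 * (levels - 0.10)    # hypothetical spinning branch
alpha_stand = 278.0 - 430.0 * (levels - 0.10)   # hypothetical standing branch

margin = alpha_spin[0] - alpha_stand[0]         # at the onset of nonlinearity
winner = "spinning" if margin > 0.0 else "standing"
print(f"growth-rate margin at |u'/u| = 0.10: {margin:+.1f} 1/s -> {winner}")
```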
Analysis for operating point B
As for the previous configuration, the stability analysis is now repeated for operating point B, this time initiated with a spinning mode structure and the velocity distribution in Eq. (6). The predicted trajectory (red continuous line with circular marks) is compared with the trajectory calculated in Section 5.1 in the three-dimensional f-α-|u'/ū| space in Fig. 23a. As already observed for regime A, the two trajectories perfectly match at low velocity fluctuation levels, |u'/ū| = 0.1, and then diverge, leading to different limit cycles. In distinction with regime A (Fig. 21b), the two trajectories plotted in the frequency-growth rate plane in Fig. 23b overlap only in the linear range; when |u'/ū| > 0.1, they diverge, leading to limit cycles with different frequencies and amplitudes. Analyzing the trajectories in the α-|u'/ū| plane in Fig. 23c, one finds that the growth rate deduced from the standing mode distribution is always greater than the growth rate of the spinning mode distribution. According to this criterion, the standing mode prevails in this situation, leading to a limit cycle with a standing pattern, in agreement with experiments. The stability of the two possible equilibrium points is now assessed. The standing equilibrium point was already proved to be stable in Section 5.2. However, according to Eq. (8), the spinning equilibrium point is also found to be stable, although this state is not observed in the experiment. The observed stable limit cycle is thus in agreement with the stability criteria developed by Ghirardo et al. [START_REF] Ghirardo | Weakly nonlinear analysis of thermoacoustic instabilities in annular combustors[END_REF], but for this operating condition these criteria are insufficient to fully determine which state is reached. This case indicates without ambiguity that the dynamical trajectories need to be considered to fully determine the state of oscillation observed in the experiments. The main findings of the stability and trajectory analyses of the two operating points are summarized in Table 2. It is concluded that a criterion combining the stability analysis at the limit cycle with the trajectory analysis (line 7 in Table 2) might best define the mode type at the limit cycle.
Conclusion
A procedure combining a Helmholtz solver and the FDF framework is proposed to analyze the dynamics of an annular combustor. The FDF is determined experimentally, together with the damping rate. The method is first validated on an idealized annular combustor model characterized by a simplified nonlinear flame response. Subsequently, the dynamics of spinning and standing combustion instabilities observed in a laboratory-scale annular combustor are investigated numerically. Spinning and standing initial solutions are first considered without effects of unsteady heat release. For the spinning mode dynamics, unsteady heat release is treated by assuming that the velocity oscillation level |u'/ū| is the same in each burner and that the flame response described by the FDF operates at this level in the 16 injectors. For the standing mode dynamics, all injectors operate at different velocity oscillation levels |u'/ū|(θ), depending on their relative position θ with respect to the nodal line of the analyzed mode; the distribution of |u'/ū| over the injectors is deduced from the pressure distribution computed by the code using an iterative procedure. The dynamical trajectories of spinning and standing modes are calculated for increasing velocity oscillation levels until a limit cycle is reached, when the calculated growth rate equals the damping rate. This numerical procedure is used to tentatively determine the modal structure at the limit cycle.
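The last step of this procedure, locating the oscillation level at which the growth rate equals the damping rate, amounts to a one-dimensional root-finding problem. The Python sketch below illustrates it with a toy growth-rate curve: the function growth_rate, its parameters alpha0 and u_sat, and the resulting numbers are illustrative assumptions standing in for the values the Helmholtz/FDF computation would return, not results from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Toy stand-in for the growth rate returned by the Helmholtz solver with the
# FDF evaluated at the velocity fluctuation level u = |u'/u_bar|. In the real
# procedure this value comes from a nonlinear eigenvalue problem; here we only
# assume that it decreases as the FDF gain saturates.
def growth_rate(u, alpha0=40.0, u_sat=0.7):
    return alpha0 * (1.0 - (u / u_sat) ** 2)

delta = 12.5  # measured damping rate (s^-1)

# March in amplitude, bracket the crossing, then refine: the limit cycle is
# the amplitude at which alpha(u) = delta.
u_grid = np.linspace(0.1, 0.9, 9)
residual = growth_rate(u_grid) - delta
k = np.where(np.sign(residual[:-1]) != np.sign(residual[1:]))[0][0]
u_lc = brentq(lambda u: growth_rate(u) - delta, u_grid[k], u_grid[k + 1])
print(f"limit-cycle amplitude |u'/u| ~ {u_lc:.3f}")
```

With these assumed numbers the crossing falls near |u'/ū| ≈ 0.58, in the same range as the limit-cycle levels discussed above, but that coincidence depends entirely on the chosen toy curve.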
Results indicate that the system trajectories and the stability of the limit cycle both need to be considered to fully determine the final state observed in the experiments for the two operating conditions explored. The problem of mode type selection, however, deserves further investigation, since the present results correspond to special cases with no proof of generality. Results at limit cycle for pressure, velocity and heat release rate signals are for the first time compared to detailed experimental data. The main findings of this study are:
• The predicted instability frequencies match those measured in the experiments for the two operating conditions investigated.
• The proposed numerical strategy captures the pressure signals recorded in the plenum and in the chamber, with a good match in terms of phase shift and amplitude, for both operating conditions, which respectively feature a stable spinning and a stable standing limit cycle.
• For the standing limit cycle, the pressure signals close to the nodal line are less well reproduced than those at the anti-nodal line, but the two possible positions of the nodal line observed in the experiments, between two burners, are well captured.
• The stability of the predicted spinning and standing limit cycles has been proven numerically by making use of recent theoretical elements, and matches the states observed in the experiments.
• Heat release rate signals at the spinning and standing limit cycles are well reproduced in terms of phase shift and of heat release distribution between the different burners, but the predicted amplitudes differ from the measured ones. This is mainly attributed to the fact that the limit cycle is reached for oscillation states where the gain of the FDF drops rapidly and nonlinearly with the velocity fluctuation level |u'/ū|, so that even small uncertainties on |u'/ū| lead to large variations of the FDF gain and, consequently, to large uncertainties on the amplitude of the predicted heat release rate fluctuations.
• Two scenarios are investigated to deduce the mode type that settles at the limit cycle. The first, based on a stability analysis of the predicted limit cycle oscillation, identifies the final outcome in one case but fails to do so in the other. A second scenario, relying on calculations of frequency-growth rate trajectories of initially spinning and standing modes, appears to identify the solution type, although in one case the difference in growth rate is relatively small and may not be fully meaningful. The combination of these two scenarios might give the answer, but this admittedly needs to be confirmed by further investigations.
This study shows that the numerical procedure developed herein adequately reproduces the dynamics of combustion instabilities coupled to azimuthal modes, provided that the FDF can be determined accurately at sufficiently large perturbation levels; the lack of such data is a source of uncertainty.
Acknowledgment
The development of the annular combustor MICCA was funded by the Agence Nationale de la Recherche, Contract No. ANR-08-BLAN-0027-01, and by Safran Tech. Davide Laera was supported by a research fellow grant provided by the Politecnico di Bari (Bari, Italy) for a six month period at EM2C. The authors wish to thank the reviewers for their critical and constructive examination of this article.
Appendix A. Validation of the numerical procedure
Theoretical results from Noiray et al. [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] are used to validate the proposed numerical procedure.
The main elements of this analytical model are reproduced here. The geometry considered is an idealized annular combustion chamber of radius R, where pressure fluctuations are assumed to be functions of time and of the azimuthal coordinate θ only. After proper normalization of the flow variables, the normalized pressure disturbance p obeys

∂²p/∂t² + ζ ∂p/∂t − ∂²p/∂θ² = ∂Q/∂t,  (11)

where Q represents the normalized volumetric heat release rate disturbance and ζ accounts for damping. It is further assumed that the pressure of any azimuthal mode of order n can be expressed as

p(θ, t) = A(t) sin(nt) cos(nθ) + B(t) cos(nt) sin(nθ),  (12)

where A and B are slowly varying functions of time. For a nonlinear flame response linked to the pressure by

Q = βp − κp³,  (13)

Noiray et al. [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF] found that the dynamics of this system is governed by a set of two nonlinear coupled differential equations:

dA/dt = (β − ζ)A/2 − (κ/32)(9A² + 3B²)A,  (14)
dB/dt = (β − ζ)B/2 − (κ/32)(9B² + 3A²)B.  (15)

The spinning and standing limit cycles correspond to the fixed points of this system. Spinning modes are found for A = B = 2√((β − ζ)/(3κ)). Standing modes correspond to solutions with one of the two amplitudes equal to zero, e.g. A = (4/3)√((β − ζ)/κ) and B = 0.
Figure 24 shows the computational domain used for the Helmholtz solver simulations. It consists of an annular duct with a length of 0.2 m. The radial extension of the annulus is set to 1 mm to avoid any low-frequency radial components. Furthermore, a velocity node is imposed at each boundary to exclude longitudinal components. Under passive flame conditions (Q = 0), the eigenmode analysis leads to degenerate solutions due to the circumferential symmetry of the system. In this case, the two first azimuthal modes ψ₁ and ψ₂ share the same frequency and are shifted by π/2. These two modes are orthogonal and form a basis used to describe the pressure field in the system. The heat release rate is assumed to be distributed in a flame sheet volume extending 4 mm in the longitudinal direction and covering the entire inlet section indicated in blue in Fig. 24. The Helmholtz solver requires the transposition of the nonlinear flame model in the frequency domain. Taking the Fourier transform of Eq. (13) yields

Q̂(ω_r, |p̂|) = −p̂(ω)((3/4)κ|p̂|² − β),  (16)

where Q̂ and p̂ are the dimensionless Fourier transforms of the heat release rate and pressure disturbances taken at the inlet. The weakly nonlinear stability analysis around the first azimuthal mode of the system (1A) is repeated for different pressure amplitude levels, starting from |p̂| = 0 and incrementing this value until a limit cycle condition is reached. Considering that the heat release rate is continuously distributed in the flame domain, and designating the angular coordinate by θ, the following pressure distributions are used:
• Spinning mode: |p̂|(θ) = C, where C is a real positive constant;
• Standing mode: |p̂|(θ) = |p̂|_j ψ(θ).
For the spinning mode calculation, the same forcing level is imposed at each injector, so the circumferential symmetry defined by the annular geometry is conserved. Consequently, regardless of the pressure fluctuation level |p̂| considered in the model, the weakly nonlinear stability analysis always yields degenerate solutions: the two degenerate modes ψ₁ and ψ₂ have the same amplitude, a result in line with [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF]. Figure 25a shows results for the growth rate α as a function of the pressure fluctuation level |p̂|, varied from 0 to 1. For |p̂| → 0 the FDF reduces to a linear flame transfer function (FTF), and the simulation retrieves the growth rate α = (β − ζ)/2 derived from the linear stability analysis of [16]. Increasing the pressure fluctuation amplitude, the growth rate α drops until a limit cycle condition (α = 0) is reached for a pressure amplitude |p̂| = 2√((β − ζ)/(3κ)), also equal to that found analytically [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF].
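Because Eqs. (14)-(15) are explicit, the fixed points quoted above are easy to verify numerically. The Python sketch below uses β = 0.15, ζ = 0.05 and κ = 0.2, the values of this validation case, and integrates the amplitude equations from a small, slightly asymmetric initial condition; the initial condition and the time horizon are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, zeta, kappa = 0.15, 0.05, 0.2  # values used in the validation case

def rhs(t, y):
    # Amplitude equations (14)-(15) of the Noiray et al. model.
    A, B = y
    dA = 0.5 * (beta - zeta) * A - kappa / 32 * (9 * A**2 + 3 * B**2) * A
    dB = 0.5 * (beta - zeta) * B - kappa / 32 * (9 * B**2 + 3 * A**2) * B
    return [dA, dB]

# Start near the origin with slightly unequal amplitudes.
sol = solve_ivp(rhs, (0.0, 2000.0), [0.01, 0.012], rtol=1e-9, atol=1e-12)
A_inf, B_inf = sol.y[:, -1]
print(f"A -> {A_inf:.4f}, B -> {B_inf:.4f}")
print("spinning fixed point:", 2 * np.sqrt((beta - zeta) / (3 * kappa)))  # ~0.8165
print("standing fixed point:", 4 / 3 * np.sqrt((beta - zeta) / kappa))    # ~0.9428
```

The trajectory settles on A = B ≈ 0.816, the stable spinning amplitude, rather than on the standing value, consistent with the stability results discussed next.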
If the pressure fluctuation level is increased further, the damping of acoustic energy becomes greater than the energy gained and the oscillation amplitude ceases to grow. For |p̂| = 1 the flame term in Eq. (16) saturates, its value (3/4)κ − β vanishing for the parameters used here, and the growth rate is negative (a damping rate), equal to α = −ζ/2.
For the standing mode calculation, the value of the FDF gain depends on the angular position and is fixed by the distribution |p̂|(θ). As a consequence, modes ψ₁ and ψ₂ are no longer degenerate. In the calculation, for each pressure level |p̂|_j, the eigenmode ψ₁ is used to distribute |p̂| over θ (B = 0 and A ≠ 0); due to symmetry, results with the distribution ψ₂ are the same. Figure 25b shows the simulated growth rate α as a function of the maximum pressure oscillation level |p̂|_max, varied from 0 to 1. The analytical growth rate α = (β − ζ)/2 at vanishing perturbation amplitude is again captured when |p̂|_max → 0. The growth rate then decays until a limit cycle (α = 0) is reached when |p̂|_max = (4/3)√((β − ζ)/κ), a value again in agreement with the analytical predictions [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF].
The stability of the predicted limit cycles is now investigated. By inspection, one observes that the stability condition of Eq. (8) is satisfied by the growth rate behavior shown in Fig. 25a, indicating that the simulated spinning limit cycle is stable, in agreement with the analytical results [START_REF] Noiray | Investigation of azimuthal staging concepts in annular gas turbines[END_REF]. The stability of the standing mode is analyzed by determining the Jacobian matrix. Since Ḃ vanishes when B = 0 (and Ȧ when A = 0), the off-diagonal terms vanish at the standing fixed point and the only non-zero components of the Jacobian matrix are the coefficients

J₁₁ = λ₁ = ∂Ȧ/∂A |_(A=Ā, B=0) and J₂₂ = λ₂ = ∂Ḃ/∂B |_(A=Ā, B=0).  (17)

In these expressions, the dot operator indicates the time derivative. The evaluation of the Jacobian coefficient J₁₁ starts from the time derivative of the mode amplitude A, which is easily obtained by recalling the definition of the growth rate, Ȧ = αA. This product is calculated for each amplitude using the numerical values of the growth rate α and of the amplitude A determined by the code. Computations of Ȧ (black dashed line) are plotted in Fig. 26 together with the growth rate α (black continuous line) as a function of the maximum pressure fluctuation level |p̂|_max. Following its definition, J₁₁ corresponds to the slope of the black dashed curve taken at the equilibrium point |p̂|_max = Ā. The computed value λ₁ = ζ − β coincides with the one predicted analytically [16]. For Ā → 0, it is also worth noting that the slope of the black dashed curve is equal to the growth rate α = (β − ζ)/2 predicted by the linear stability analysis. The red continuous curve in Fig. 26 corresponds to the growth rate α_ψ₂ of the ψ₂ mode when the pressure fluctuation level |p̂| is distributed following the ψ₁ distribution. When |p̂|_max → 0, the black and red trajectories converge to the same value α_ψ₂ = α = (β − ζ)/2 and the two modes are again found to be degenerate; increasing the oscillation level |p̂|_max, the two trajectories diverge. For each point of the red trajectory B = 0, indicating that this mode has no influence on the standing equilibrium point of the system [16]. For a given oscillation level |p̂|_max, the growth rate α_ψ₂ on the red curve corresponds to the Jacobian coefficient J₂₂. At the equilibrium point, J₂₂ is found to be equal to λ₂ = (β − ζ)/3, in agreement with the analytical value [16]. The eigenvalues λ₁ and λ₂ of the computed Jacobian matrix have opposite signs, indicating that the standing mode is a saddle point and thus corresponds to an unstable limit cycle, in agreement with theoretical predictions. These simulations show that the proposed numerical procedure suitably retrieves the amplitude and the stability properties of both standing and spinning limit cycles in an idealized configuration where analytical results are available.
Different amplitudes of velocity fluctuations |u'/ū| have to be considered to represent the response of the flame above each injector. This leads to different FDF gain and phase lag values for each injector. In the proposed methodology, for each level of velocity fluctuations |u'/ū|_j, the distribution of |u'/ū| over the injectors is determined from the pressure distribution computed by the code.
Fig. 1. (a) Photograph of the MICCA combustor with a close-up view of a matrix injector and a waveguide outlet. (b) Schematic representation of the experimental setup. (c) Top view of the MICCA chamber with microphone and photomultiplier measurement locations indicated.
Fig. 2. Images of the flame region recorded above one matrix injector under stable operation for the two conditions investigated.
Fig. 3. (a) Top view schematic representation of the model. (b) Longitudinal A-A cut showing geometrical details of the matrix injector model and of the flame domain. (c) Three-dimensional model of the MICCA chamber with details of the unstructured mesh comprising approx. 130,000 tetrahedral elements.
Fig. 4. Acoustic response of the MICCA combustor from 300 Hz to 500 Hz measured by microphones located in the (a) plenum and (b) combustion chamber. The frequency bandwidth Δf determined at half maximum provides the damping rate in both volumes.
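Returning to the amplitude equations, the saddle character of the standing fixed point invoked above can be checked directly on Eqs. (14)-(15). The sketch below evaluates a finite-difference Jacobian at the two fixed points and recovers λ₁ = ζ − β and λ₂ = (β − ζ)/3 for the standing mode (opposite signs, hence a saddle) and two negative eigenvalues for the spinning mode. This is only a sketch on the idealized analytical model, not the Helmholtz-solver computation itself.

```python
import numpy as np

beta, zeta, kappa = 0.15, 0.05, 0.2
mu, c = 0.5 * (beta - zeta), kappa / 32

def rhs(y):
    # Right-hand side of the amplitude equations (14)-(15).
    A, B = y
    return np.array([mu * A - c * (9 * A**2 + 3 * B**2) * A,
                     mu * B - c * (9 * B**2 + 3 * A**2) * B])

def jacobian(y, h=1e-7):
    # Central finite differences around a fixed point.
    J = np.zeros((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = h
        J[:, k] = (rhs(y + e) - rhs(y - e)) / (2 * h)
    return J

standing = np.array([4 / 3 * np.sqrt((beta - zeta) / kappa), 0.0])
spinning = np.full(2, 2 * np.sqrt((beta - zeta) / (3 * kappa)))

print("standing eigenvalues:", np.linalg.eigvals(jacobian(standing)))  # ~ -0.1 and +0.0333 -> saddle
print("spinning eigenvalues:", np.linalg.eigvals(jacobian(spinning)))  # both negative -> stable node
```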
Fig. 5. Interpolated flame describing function (FDF) obtained for operating point A: φ = 1.00 and u_b = 0.49 m s⁻¹. Experimental data points are displayed as white dots. (a) Gain. (b) Phase ϕ.
Fig. 6. Dynamical trajectories in the frequency (f)-growth rate (α) plane colored by the velocity fluctuation level |u'/ū| for two damping rates, δ = 0 (rectangular marks) and δ = 12.5 s⁻¹ (circular marks).
Fig. 7. (a) Pressure mode magnitude |p̂| with pressure contour lines plotted on a cylindrical surface equidistant from the lateral walls. (b) Top: pressure structure p̂ = |p̂| cos(arg(p̂)) along the azimuthal direction in the plenum (circular marks) and in the combustion chamber (rectangular marks). Bottom: pressure phase evolution in the azimuthal direction in the plenum (circular marks) and in the combustion chamber (rectangular marks).
Fig. 8. (a) Four pressure signals recorded by microphones in the plenum (top) and in the combustion chamber (bottom) under spinning limit cycle conditions (dashed lines) compared with numerical reconstructions (solid lines). (b) Spectral content of the pressure signals in the plenum (top) and in the combustion chamber (bottom).
Fig. 9. Time resolved heat release rate signals recorded by two photomultipliers in the combustion chamber (dashed lines) compared with numerical reconstructions (solid lines) for |u'/ū| = 0.61 (rectangular marks) and for |u'/ū| = 0.58 (circular marks).
The pressure fluctuations at the limit cycle corresponding to |u'/ū| = 0.61 are now compared to measurements. In the experiments, these fluctuations are recorded by microphones placed at four positions equidistantly separated on the external perimeter of the plenum and at four positions in the backplane of the combustion chamber, as shown in Fig. 1c. Figure 8(a) displays microphone measurements in the plenum (dashed lines) compared with the numerical reconstructions (continuous lines) at the same locations. The phase shift between two microphone signals corresponds to their relative position, confirming the spinning nature of the mode. Well-established sinusoidal signals with a peak of about 260 Pa at a frequency of 487 Hz, as shown in Fig. 8(b) (top), are found in the experiments. The amplitude and the phase shift between microphones are well captured in the simulation. Over a longer examination period, a small phase mismatch between experiments and simulations appears, due to the 15 Hz difference between the numerical and experimental frequencies. Figure 8 (bottom) shows experimental (dashed lines) and numerical (solid lines) signals in the combustion chamber. The microphones mounted on waveguides at a distance of 170 mm from the backplane of the combustion chamber measure a delayed signal with a time lag τ = 0.5 ms. Since this delay is not negligible compared to the oscillation period of the instability (about 2 ms), it is taken into account in the data processing.
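As a side note, the quoted waveguide lag is consistent with a simple propagation estimate, τ ≈ L/c with L = 170 mm. The snippet below assumes an ambient sound speed of 343 m/s (a value not stated in the text) and also gives the corresponding phase at the instability frequency.

```python
L = 0.170   # waveguide length (m)
c = 343.0   # assumed ambient speed of sound (m/s)
f = 487.0   # instability frequency (Hz)

tau = L / c                    # propagation delay ~ 0.50 ms
phase_deg = 360.0 * f * tau    # phase to compensate in the data processing
print(f"tau = {tau * 1e3:.2f} ms, phase at {f:.0f} Hz = {phase_deg:.0f} deg")
```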
It is first worth noting that the pressure fluctuation level only reaches 60 Pa near the chamber backplane and that the experimental signals are not purely symmetric relative to the ambient mean pressure, indicating the presence of harmonics, as revealed in the spectral content of the pressure signals shown in Fig. 8(b) (bottom). The amplitude of the recorded heat release rate signals is likewise not constant with time, indicating the presence of harmonics. As for the pressure signals, the simulated heat release rate signals only consider the oscillation at the fundamental frequency; the reconstructed numerical signals with |u'/ū| = 0.61 correspond to the solid lines with rectangular marks in Fig. 9.
Fig. 10. Long time exposure photographs of the flames at a limit cycle coupled to a stable standing azimuthal mode. The velocity nodal line is shown as a dashed red line. (a) V type mode structure with a nodal line observed between burners I-II and IX-X. (b) H type mode structure with a nodal line between burners VIII-IX and XIV-XV [37].
A small change of the velocity fluctuation level |u'/ū| weakly alters the corresponding pressure oscillation level in the plenum and the chamber, because these quantities scale linearly with |u'/ū|, as shown in Table 1.
Fig. 11. Interpolated flame describing function (FDF) obtained for operating point B: φ = 1.11 and u_b = 0.66 m s⁻¹. Experimental data points are displayed as white dots. (a) Gain. (b) Phase ϕ.
Fig. 12. (a) Pressure modulus |p̂| of the f = 472 Hz degenerate modes computed under passive flame conditions and plotted in a plane located at the burner inlet section. (b) Pressure distribution ψ used to initialize the computation of the standing mode. Symbols indicate the angular position of the sixteen burners.
Figure 12a shows the pressure modulus |p̂| of the degenerate modes ψ₁⁽⁰⁾ and ψ₂⁽⁰⁾. The resulting trajectories are plotted in a frequency-growth rate plane and colored with respect to the maximum amplitude level |u'/ū|_j, which is varied from |u'/ū| = 0.1 to |u'/ū| = 0.9. The subscript j is omitted in what follows; unless indicated otherwise, |u'/ū| always refers to the maximum velocity fluctuation level considered in the simulation for the standing mode analysis.
Fig. 13. Dynamical trajectories colored by the maximum velocity oscillation level |u'/ū| in the absence of damping (δ = 0 s⁻¹). Square marks indicate results obtained for the H type standing mode structure; circular marks indicate results obtained for the V type mode structure.
Fig. 14. Dynamical trajectories in the frequency (f)-growth rate (α) plane colored by the velocity fluctuation level |u'/ū| without damping (δ = 0, circular marks) and with damping (δ = 12.5 s⁻¹, square marks) distributed uniformly in the numerical domain.
Fig. 15. Pressure modal distribution |p̂|/|p̂|_max featuring an H type (a) or V type (b) mode structure calculated by the Helmholtz solver, with pressure contour lines plotted on a cylindrical surface passing through the middle of the combustion chamber, the plenum, the burners and the microphone waveguides.
Fig. 16. Velocity distribution at the burners' positions for the standing mode at limit cycle with the V type (black dotted line) and H type (red dotted line) mode structures. Circular symbols indicate the burners' positions, oriented as shown in Fig. 17.
Fig. 17. Predicted angular position of the numerical pressure nodal line (red dashed line) together with the angular region (highlighted in grey) in which the experimental nodal line is recorded, for both observed standing mode structures: (a) V type, (b) H type.
The predicted nodal line position for the V type structure matches the experimental region in Fig. 17a; a shift of π/2 is observed for the H type structure in Fig. 17b.
Fig. 18. The piecewise function F(ω_r, |u'/ū|_i ψ(θ))/Z(θ) for the combustor under analysis (red line) together with a·sin(2nθ) (black line).
Figure 19 (top) displays pressure measurements in the plenum (dashed lines) compared with the numerical reconstructions (continuous lines) at the same locations in the numerical domain. Microphones MP3 and MP7, located at θ_MP3 = 78.8° and θ_MP7 = 258.7°, respectively, detect pressure oscillations close to the nodal line and consequently feature low fluctuation levels in the experiments. Microphones MP1 and MP5, located near the anti-nodal line, record a well-established sinusoidal signal with a peak amplitude of 350 Pa at a frequency of 498 Hz, as shown in Fig. 19(b) (top). Given the standing nature of the recorded mode, the phase shift between two microphone signals corresponds to their relative position with respect to the nodal line: records of microphones located on opposite sides of the nodal line are shifted by π.
Fig. 19. (a) Four pressure signals recorded by microphones in the plenum and in the combustion chamber at limit cycle (dashed lines) compared with numerical reconstructions (solid lines). (b) Spectral content of the pressure signals in the plenum (top) and in the combustion chamber (bottom).
Fig. 20. Time resolved heat release rate signals for the standing mode recorded by photomultipliers H1 and H2 in the combustion chamber (dashed lines) compared with numerical reconstructions (solid lines).
Fig. 21. (a) Trajectories f-α-|u'/ū| of solutions of Eq. (2) for operating regime A initiated with a standing mode structure (black continuous line with rectangular marks) and a spinning mode structure (red continuous line with circular marks). Projection of the trajectories on the f-α plane (b) and on the α-|u'/ū| plane (c). Circle markers indicate results for an initially spinning mode structure; square markers indicate results for a standing mode structure.
Fig. 22. The piecewise function F(ω_r, |u'/ū|_i ψ(θ))/Z(θ) for the standing equilibrium point at |u'/ū|_st ≈ 0.86 and f = 473 Hz (red line) together with a·sin(2nθ) (black line).
Fig. 23. (a) Trajectories f-α-|u'/ū| of solutions of Eq. (2) for operating regime B initiated with a standing mode structure (black continuous line with rectangular marks) and a spinning mode structure (red continuous line with circular marks). Projection of the trajectories on the f-α plane (b) and on the α-|u'/ū| plane (c). Circle markers indicate results for an initially spinning mode structure; square markers indicate results for a standing mode structure.
Fig. 24. Computational domain used for the validation of the numerical procedure. The flame domain is highlighted in blue.
Fig. 25. (a) Spinning mode calculations: growth rate α of the two degenerate modes ψ₁ and ψ₂ as a function of the pressure oscillation level |p̂|. The limit cycle (black rectangular symbol) is reached for |p̂| = Ā = B̄ = 2√((β − ζ)/(3κ)). (b) Standing mode calculations: growth rate α of mode ψ₁ plotted as a function of the maximum pressure oscillation level |p̂|. The limit cycle (black rectangular symbol) is reached for |p̂| = Ā = (4/3)√((β − ζ)/κ). Due to symmetry, results corresponding to the distribution ψ₂ are the same. β = 0.15, ζ = 0.05 and κ = 0.2 for both spinning and standing calculations.
Fig. 26. Numerical evaluation of the Jacobian matrix coefficients for the standing mode. Calculations make use of results for the growth rate α of mode ψ₁ (black continuous line), for the growth rate α_ψ₂ of the ψ₂ mode when the pressure fluctuation level |p̂| is distributed following the ψ₁ distribution (red continuous line), and for Ȧ (black dashed line), as functions of |p̂|_max. β = 0.15, ζ = 0.05 and κ = 0.2.
Table 2. Synthesis of the stability and trajectory analyses.
- Stability of spinning mode based on Eq. (8). Operating point A: stable when the criterion is applied at |u'/ū| = 0.61 and f = 473 Hz. Operating point B: stable when the criterion is applied at |u'/ū| = 0.59 and f = 466 Hz.
- Stability of standing mode based on Eq. (10). Operating point A: unstable when the criterion is applied at |u'/ū| = 0.86 and f = 473 Hz. Operating point B: stable when the criterion is applied at |u'/ū| = 0.86 and f = 478 Hz.
- Prediction of stability analysis. Operating point A: spinning mode. Operating point B: spinning or standing mode.
- Trajectory analysis. Operating point A: the spinning mode features a growth rate that is slightly greater than that of the standing mode over a limited range of amplitudes. Operating point B: the standing mode growth rate is larger than that of the spinning mode for all oscillation amplitudes.
- Prediction of trajectory analysis. Operating point A: spinning mode at f = 473 Hz with a peak pressure of 260 Pa in the plenum and 57 Pa in the chamber. Operating point B: standing mode at f = 478 Hz with a peak pressure of 345 Pa in the plenum and 48 Pa in the chamber.
- Criterion combining stability at limit cycle and trajectory analysis. Operating point A: spinning mode. Operating point B: standing mode.
- Experimental limit cycle oscillation. Operating point A: spinning mode at f = 487 Hz with a peak pressure of 260 Pa in the plenum and 60 Pa in the chamber. Operating point B: standing mode at f = 498 Hz with a peak pressure of 350 Pa in the plenum and 60 Pa in the chamber.
100,865
[ "9289", "22040", "1038181" ]
[ "253558", "690", "416", "416", "416", "253558", "416" ]
01745246
en
[ "info" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745246/file/unet-sara.pdf
Sara Berri email: sara.berri@l2s.centralesupelec.fr Vineeth Varma Samson Lasaulce email: samson.lasaulce@l2s.centralesupelec.fr Mohammed Said Radjef email: radjefms@gmail.com Jamal Daafouz email: jamal.daafouz@univ-lorraine.fr
Studying Node Cooperation in Reputation Based Packet Forwarding within Mobile Ad hoc Networks
Keywords: Mobile ad hoc networks, Packet forwarding, Cooperation, Evolutionary game theory, ESS, Replicator dynamics
In the paradigm of mobile ad hoc networks (MANET), forwarding packets originating from other nodes requires cooperation among nodes. However, as each node may not want to waste its energy, cooperative behavior cannot be guaranteed, and it is necessary to implement some mechanism to discourage selfish behavior and promote cooperation. In this paper, we propose a simple quid pro quo based reputation system: nodes gain reputation when they forward, lose more reputation when they refuse to forward packets from cooperative users (identified by their reputation), and lose less reputation when they refuse to forward packets from non-cooperative users. Under this framework, we model the behavior of users as an evolutionary game and provide conditions that result in cooperative behavior by studying the evolutionarily stable states of the proposed game. A numerical analysis is provided to study the resulting equilibria and to illustrate how the proposed model performs compared to traditional models.
Introduction
A mobile ad hoc network (MANET) is a wireless multi-hop network formed by a set of mobile independent nodes. A key feature of MANETs is that they are self-organizing and have no established infrastructure. The absence of infrastructure implies that all networking functions, such as packet forwarding, must be performed by the nodes themselves [START_REF] Basagni | Mobile ad hoc networking[END_REF]; multi-hop communications thus rely on mutual cooperation among the network's nodes. As the nodes of an ad hoc network have limited energy, they may not want to waste it by forwarding packets from other nodes. If all the nodes are controlled by a central entity, this is not a major issue, since cooperation can be part of the design; but in applications where each node corresponds to an individual user, it is crucial to develop mechanisms that promote cooperation among the nodes. Several works in the literature provide solutions based on incentive mechanisms, such as those based on a credit concept [START_REF] Buttyán | Enforcing Service Availability in Mobile Ad-Hoc WANs[END_REF], [START_REF] Buttyán | Stimulating Cooperation in Self-Organizing Mobile Ad Hoc Networks[END_REF], [START_REF] Krzesinski | Promoting Cooperation in Mobile Ad Hoc Networks[END_REF], etc., the idea being that nodes pay for using a service and are remunerated when they provide one (like packet forwarding). Others, like [START_REF] Jianl | HEAD: A Hybrid Mechanism to Enforce Node Cooperation in Mobile Ad Hoc Networks[END_REF], [START_REF] Li | Game-Theoretic Analysis of Cooperation Incentive Strategies in Mobile Ad Hoc Networks[END_REF], use reputation-based mechanisms to promote cooperation. Game theory has been a vital tool in the literature for studying the behavior of self-serving individuals in several domains, including MANETs.
In [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF], [START_REF] Félegyházi | Nash Equilibria of Packet Forwarding Strategies in Wireless Ad Hoc Networks[END_REF], [START_REF] Jaramillo | A Game Theory Based Incentivize Cooperation in Wireless Ad hoc Networks[END_REF], etc., the interaction among nodes in packet forwarding is modeled as a one-shot game based on the prisoner's dilemma, then extended to a repeated game. Furthermore, evolutionary game theory is introduced in [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], [START_REF] Seredynski | Analysing the Development of Cooperation in MANETs using Evolutionary Game Theory[END_REF], [START_REF] Tang | When Reputation Enforces Evolutionary Cooperation in unreliable MANETs[END_REF] to study the dynamic evolution of a system composed of nodes and to analyze how cooperation can be ensured in a natural manner. In [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], evolutionary game theory is applied to study cooperation in packet forwarding in mobile ad hoc networks: the authors use the prisoner's dilemma-based model of [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF] to implement several strategies in the game and evaluate their performance by observing their evolution over time. The aforementioned works rely on incentive mechanisms, which have been proved to improve node cooperation; however, implementing such solutions often results in a large computational complexity during the game. We would like to answer the following question: "Is it possible to achieve global cooperation in packet forwarding in a simple and natural way?" In this paper, we model the interaction of nodes in a MANET as an evolutionary game by proposing a new formulation of the packet forwarding and reputation model. We introduce a simple quid pro quo reputation system, wherein reputation is gained by forwarding packets and lost by refusing to forward. A key feature is that the reputation loss depends on the packet source: if the packet comes from a node with low reputation, less reputation is lost by not forwarding it. Selfish users thus naturally end up with low reputation, while users are encouraged to help other cooperative users, resulting in a model significantly different from the likes of [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], [START_REF] Tang | When Reputation Enforces Evolutionary Cooperation in unreliable MANETs[END_REF], etc. With this model, we study two node classes: one that tries to maintain a high reputation, and another that disregards its reputation. We show by means of evolutionary game theory concepts that nodes are likely to cooperate, and we provide numerical results showing how the proposed model improves network performance. The novel reputation model naturally leads cooperative users to cooperate among each other and to refuse to forward packets from selfish users, thereby eliminating the need for a third party to punish selfish behavior. The remainder of the paper is structured as follows. In Sec. 2 we formulate the reputation and game models.
We analyze the evolutionary game in Sec. 3 by characterizing the associated equilibria and studying the evolution of strategies, which allows us to determine conditions ensuring globally cooperative behavior. The numerical results are presented in Sec. 4; it has to be noted that the results hold for any game settings satisfying the provided conditions, not only for the given examples. Finally, Sec. 5 presents the conclusion.
Problem formulation and proposed game model
In this section, we provide a game model of the packet forwarding interaction. We consider a packet forwarding game whose players are the nodes; each of them can be cooperative, forwarding other nodes' packets, or non-cooperative, dropping them. Each player chooses a strategy s_i from the strategy set S = {C, NC}, where the actions C and NC mean cooperative and non-cooperative, respectively. The two-player packet forwarding game is defined in strategic form as

G⁽²⁾ = <{1, 2}, {S_i}_{i∈I}, {u_i}_{i∈I}>,  (1)

where:
• I = {1, 2} is the set of players (two network nodes);
• S_i is the set of pure strategies of player i ∈ I, which is the same for all players, S = {C, NC};
• u_i is the utility of player i ∈ I, which depends on its own behavior and that of its opponent.
To specify the utilities, consider a pair of nodes within which each node may act as a sender and as a relay. The players' utilities can then be represented by the payoff matrix (2), with player 1's actions along the rows and player 2's along the columns (each entry gives the row player's payoff):

          C        NC
  C     λ − 1      −1
  NC      λ         0        (2)

where λ > 0 is a coefficient representing the benefit of successfully sending a packet while spending one unit of energy on relaying. Naturally, when λ < 1, no node is motivated to cooperate, as the energy cost is too high relative to the gain from having its packets relayed. In the interesting case where a MANET framework is feasible, λ > 1, the outcome of the proposed game can be characterized by the well-known Nash equilibrium (NE), the strategy profile from which no player has an interest in deviating unilaterally. The NE of the packet forwarding game is, however, inefficient: it corresponds to dropping all the time and gives both players a utility of 0. To overcome this problem, we add to the game (1) a reputation model, which defines the reward and the cost, in terms of reputation, attached to a cooperative or non-cooperative decision. Moreover, it is preferable to model the interactions among all N nodes, not just the two-player case; to this end we introduce evolutionary game theory, where the dynamical evolution of game strategies is studied through pairwise interactions. In the following section, we present the proposed reputation model and construct the new packet forwarding game including the reputation mechanism as an integrated system: the game is played taking the reputation into account, which, as we show, can be interpreted as a constraint on the strategy space, while the nodes aim to maximize their utility functions.
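A quick way to see that (NC, NC) is the unique pure Nash equilibrium for λ > 1 is to enumerate best responses. The short Python check below does this for the payoff matrix (2); λ = 3 is an illustrative value, and any λ > 1 gives the same conclusion.

```python
import itertools

lam = 3.0  # any lambda > 1 gives the same result
actions = ("C", "NC")
# u[(s1, s2)] = (payoff of player 1, payoff of player 2) in the symmetric game (2)
u = {("C", "C"): (lam - 1, lam - 1), ("C", "NC"): (-1.0, lam),
     ("NC", "C"): (lam, -1.0), ("NC", "NC"): (0.0, 0.0)}

def is_nash(s1, s2):
    # Neither player can gain by a unilateral deviation.
    best1 = all(u[(s1, s2)][0] >= u[(a, s2)][0] for a in actions)
    best2 = all(u[(s1, s2)][1] >= u[(s1, a)][1] for a in actions)
    return best1 and best2

print([s for s in itertools.product(actions, actions) if is_nash(*s)])
# -> [('NC', 'NC')]: NC strictly dominates C, so both nodes drop and earn 0.
```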
Reputation model
We assume a reputation system is in place to discourage selfish behavior and reward cooperative behavior by separating these two classes of nodes. Reputation is updated as a function of a node's own action and of its opponent's standing: it increases by a margin δ_r whenever a node relays a packet from another node (action C) and decreases whenever a node refuses to relay (action NC). However, the reputation δ_b lost by refusing to relay the packet of a node with low reputation is smaller than the loss δ_g incurred by refusing to relay the packet of a well-reputed node. Denoting the reputation of user i ∈ I at time t by R_i(t), with R_i(t) > 0 meaning good reputation and R_i(t) ≤ 0 bad reputation, the update reads

R_i(t+1) = R_i(t) + d_i(t)δ_r − (1 − d_i(t))(δ_g 1(R_j > 0) + δ_b 1(R_j ≤ 0)),  (3)

where d_i(t) ∈ {0, 1} is the decision to relay (1, action C) or not (0, action NC) at time t, j is a random variable indicating the sender requesting i to relay, and 1(·) is the indicator function, equal to one when the condition inside the brackets is satisfied and zero otherwise.
We consider two primary classes of nodes based on their reputation value. The set H of "Hawks" are selfish (non-cooperative) nodes that do not care about reputation: they never relay packets, i.e. s_i = NC for all i ∈ H, while still using the network and trying to make other nodes relay their packets. Since d_i(t) = 0 for all t, these nodes end up with a low reputation, R_i(t) < 0 for all i ∈ H. The other class is the set D of "Doves", which try to maintain a positive reputation; their strategy is such that, on average, their reputation gain is non-negative. The dove population share (fraction of users in the dove class) is denoted by p; the population share of hawks is then 1 − p.
Utility maximization
In this subsection, we show how the reputation system is integrated into the packet forwarding game to improve its outcome and avoid the non-cooperative situation. Since even doves do not want to waste energy, they do not attempt to relay every single packet, but only enough that their average reputation gain is at least 0 (reputation must be a non-decreasing function on average). A dove relays messages from other doves with probability s_d and from hawks with probability s_h, i.e. action C is chosen with a probability that depends on the opponent's class. The net utility is the number of times a node's own packets get forwarded minus the energy cost it pays for relaying, so the expected payoff of doves is

U(D, p) = (λ − 1)p s_d − s_h(1 − p).  (4)

This must be maximized over (s_d, s_h) while maintaining a positive reputation trend, E[R_i(t+1) − R_i(t)] ≥ 0, i.e.

p(s_d δ_r − (1 − s_d)δ_g) + (1 − p)(s_h δ_r − (1 − s_h)δ_b) ≥ 0.  (5)

Therefore, for a given population share of doves p, the dove strategy solves the following optimization problem:

max_{s_d, s_h} U(D, p)
s.t. p(s_d δ_r − (1 − s_d)δ_g) + (1 − p)(s_h δ_r − (1 − s_h)δ_b) ≥ 0,
     0 ≤ s_d ≤ 1, 0 ≤ s_h ≤ 1.  (6)
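Since (6) is linear in (s_d, s_h), it can be solved with any LP solver. The sketch below uses scipy.optimize.linprog with the settings of the numerical section (λ = 3, δ_r = 3, δ_b = 1) and an assumed δ_g = 2; the text only requires δ_g > δ_b, and δ_g plays no role at the optimum whenever s_d* = 1.

```python
import numpy as np
from scipy.optimize import linprog

# lambda, delta_r, delta_b follow the paper's numerical section;
# delta_g = 2 and p = 0.2 are assumed illustrative values.
lam, dr, dg, db, p = 3.0, 3.0, 2.0, 1.0, 0.2

c = [-(lam - 1) * p, (1 - p)]                     # minimize -U(D, p)
A_ub = [[-p * (dr + dg), -(1 - p) * (dr + db)]]   # constraint (5), rearranged to A x <= b
b_ub = [-(p * dg + (1 - p) * db)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
s_d, s_h = res.x
print(f"s_d* = {s_d:.3f}, s_h* = {s_h:.4f}")  # expect s_d* = 1, s_h* = 0.0625
```

The returned optimum, s_d* = 1 and s_h* = 0.0625 for p = 0.2, matches the closed form derived in Eq. (9) below.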
Hawks have the same utility function but no reputation constraint; their expected payoff is

U(H, p) = λ p s_h.  (7)

The expected payoff of an individual drawn from a population with profile p is thus

U(p, p) = p U(D, p) + (1 − p) U(H, p).  (8)

If λ > 1, U(D, p) is maximized trivially by choosing s_d = 1. Consequently, s_h is taken as the smallest value for which constraint (5) holds. With s_d = 1, (5) becomes

p δ_r + (1 − p)(s_h(δ_r + δ_b) − δ_b) ≥ 0  ⇒  s_h ≥ [(1 − p)δ_b − p δ_r] / [(δ_b + δ_r)(1 − p)],

so that

s_h* = max{ [(1 − p)δ_b − p δ_r] / [(δ_b + δ_r)(1 − p)], 0 }.  (9)

Note that the introduction of s_h* < 1 is one of the main novelties of this paper: it differs from the choice s_h = 1 made in the traditional forwarding game payoffs of [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF], [START_REF] Félegyházi | Nash Equilibria of Packet Forwarding Strategies in Wireless Ad Hoc Networks[END_REF], [START_REF] Jaramillo | A Game Theory Based Incentivize Cooperation in Wireless Ad hoc Networks[END_REF], [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], [START_REF] Tang | When Reputation Enforces Evolutionary Cooperation in unreliable MANETs[END_REF], etc., where cooperative nodes forward packets all the time without distinguishing between cooperative opponents (belonging to D) and non-cooperative ones (belonging to H). Furthermore, the proposed reputation system is simple and reduces to a constraint, since nodes take decisions purely based on the reputation class of the packet's source node.
Evolutionary game formulation
The resulting evolutionary game has the strategic form

G = <{D, H}, {(s_d, s_h)} × {NC}, p ∈ [0, 1], {u_c}_{c∈{D,H}}>,  (10)

where:
• {D, H} are the reputation classes (population types);
• {(s_d, s_h)} is the set of strategies playable by D, while H always plays NC;
• p is the population share of class D;
• u_c is the utility of class D or H as defined in (4) and (7).
Our objective in the following section is to study the evolution of strategies in this game and to analyze the possible equilibrium points.
Evolutionary game analysis
Evolutionary game theory studies the dynamic evolution of a given population based on two main concepts: the evolutionarily stable strategy (ESS) and the replicator dynamics. Let p be the initial population profile and assume that a proportion ε of this population plays according to another profile q (a population of mutants), while the other individuals keep their initial behavior p. The new population profile is thus (1 − ε)p + εq; the expected payoff of a player following p is U(p, (1 − ε)p + εq), and that of a player following q is U(q, (1 − ε)p + εq).
Definition 1 [START_REF] Smith | The Logic of Animal Conflict[END_REF]. A strategy p ∈ Δ is an evolutionarily stable strategy (ESS) if

∀q ∈ Δ, ∃ ε̄ = ε̄(q) ∈ (0, 1) such that ∀ε ∈ (0, ε̄): U(p, (1 − ε)p + εq) > U(q, (1 − ε)p + εq).  (11)

Here ε̄, which may depend on q, is called the invasion barrier of the strategy p.
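Definition 1 can be checked numerically for the game at hand. The sketch below tests the candidate p = 1 against a grid of mutant profiles q and mutation sizes ε, keeping the dove strategy fixed at its value for the candidate equilibrium (s_h = 0), as in the proof of Theorem 1 below; the grid sizes and the parameter values (λ = 3, δ_r = 3, δ_b = 1) are illustrative choices.

```python
import numpy as np

lam, dr, db = 3.0, 3.0, 1.0

def s_h_star(p):
    # Dove strategy from Eq. (9) at population share p (s_d* = 1 since lam > 1).
    return 0.0 if p >= 1 else max(((1 - p) * db - p * dr) / ((db + dr) * (1 - p)), 0.0)

def U(z, x, sh):
    # Payoff of a sub-population that is dove with probability z, in a population
    # with dove share x, when doves play (s_d, s_h) = (1, sh).
    return z * ((lam - 1) * x - sh * (1 - x)) + (1 - z) * lam * x * sh

# Invasion test of Definition 1 at the candidate ESS p = 1.
p, sh = 1.0, s_h_star(1.0)
qs, eps = np.linspace(0.0, 0.99, 50), np.linspace(1e-3, 0.999, 50)
ok = all(U(p, (1 - e) * p + e * q, sh) > U(q, (1 - e) * p + e * q, sh)
         for q in qs for e in eps)
print("p = 1 resists all tested invasions (barrier = 1):", ok)  # True
```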
The replicator dynamics is the process that specifies how a population is distributed over the set of pure strategies of a game evolving in time.
Definition 2 (Replicator dynamics) [START_REF] Jonker | Evolutionary Stable Strategies and Game Dynamics[END_REF]. The replicator dynamics is given by

ṗ_i = p_i [U(i, p) − Σ_{j=1}^{|S|} p_j U(j, p)],  i ∈ {1, …, |S|}.  (12)

The system (12) describes the replication process in continuous time: it gives the share of individuals playing strategy s_i in the next period, depending on the initial value p_i(t₀). Using (12), the replicator dynamics of the proposed game reads

ṗ = p(1 − p)(p(λ − 1)(1 − s_h*) − s_h*).  (13)

For the evolutionary game G, we have the following results.
Theorem 1. When λ > 1, the evolutionary game G admits exactly two ESS: p = 0, with invasion barrier ε̄ = min{δ_b / (δ_r q(λ − 1)), 1}, and p = 1, with invasion barrier ε̄ = 1. When the initial configuration satisfies p < p_T, the replicator dynamics drives the system to p = 0; when p > p_T, it drives the system to p = 1, where p_T = δ_b / (δ_b + λδ_r) corresponds to the mixed NE.
Proof. First, one can easily verify that p_T corresponds to a mixed NE by noticing that the utilities of the H and D classes are identical at this point. Next, Definition 1 is used to establish the invasion barriers stated in Theorem 1. Let x = (1 − ε)p + εq and Ū = U(p, x) − U(q, x), where U(p, x) and U(q, x) are defined through (8). Then

Ū = p((λ − 1)x − s_h(1 − x)) + (1 − p)λx s_h − q((λ − 1)x − s_h(1 − x)) − (1 − q)λx s_h
  = (p − q)((λ − 1)x − s_h(1 − x) − λx s_h)
  = (p − q)(x(λ − 1)(1 − s_h) − s_h).  (14)

1. In the first case, p = 0, which gives s_h = δ_b / (δ_b + δ_r), and x = εq. The condition Ū > 0 becomes

−q(εq(λ − 1)(1 − s_h) − s_h) > 0  ⇔  εq(λ − 1)δ_r < δ_b  ⇔  ε < δ_b / (δ_r q(λ − 1)).  (15)

By Definition 1, p = 0 is therefore an ESS with invasion barrier ε̄ = min{δ_b / (δ_r q(λ − 1)), 1}. Consistently, p = 0 attracts the dynamics only if the population share of D decreases and that of H increases, i.e. ṗ < 0. Substituting s_h = s_h* into (13),

ṗ < 0  ⇔  p(1 − p)[p(λ − 1) − s_h*(p(λ − 1) + 1)] < 0  ⇔  p((δ_b + λδ_r)p − δ_b) / (δ_b + δ_r) < 0  ⇔  p < p_T.  (16)

2. We now prove that p = 1 is an ESS. In this case s_h = 0 and x = 1 − ε(1 − q), so that Ū = (1 − q)(1 − ε(1 − q))(λ − 1) > 0, since λ > 1. By Definition 1, p = 1 is an ESS with invasion barrier ε̄ = 1. This state attracts the dynamics when ṗ > 0, i.e. when

p((δ_b + λδ_r)p − δ_b) / (δ_b + δ_r) > 0  ⇔  p > δ_b / (δ_b + λδ_r) = p_T.  (17)

4 Numerical analysis
In this section, we present a numerical application of the proposed evolutionary game including the reputation system. All the results are based on the replicator dynamics, which describes how the population evolves and allows one to determine other performance metrics such as the expected utility of the players and the number of forwarded packets. We study the effect of the proposed reputation model on the evolutionarily stable strategy of the game.
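The bistability stated in Theorem 1 is easy to reproduce by integrating Eq. (13). The sketch below uses forward Euler with the settings of the numerical section (λ = 3, δ_r = 3, δ_b = 1, so p_T = 0.1) and two initial shares on either side of p_T; the step size and horizon are arbitrary illustrative choices.

```python
lam, dr, db = 3.0, 3.0, 1.0   # settings of the numerical section
p_T = db / (db + lam * dr)    # threshold of Theorem 1 (= 0.1 here)

def s_h_star(p):
    return 0.0 if p >= 1 else max(((1 - p) * db - p * dr) / ((db + dr) * (1 - p)), 0.0)

def p_dot(p):
    sh = s_h_star(p)
    return p * (1 - p) * (p * (lam - 1) * (1 - sh) - sh)  # Eq. (13)

# Forward-Euler integration of the replicator dynamics from two initial shares.
for p0 in (0.05, 0.2):  # one below and one above p_T
    p, dt = p0, 0.05
    for _ in range(20000):
        p = min(max(p + dt * p_dot(p), 0.0), 1.0)
    print(f"p0 = {p0:.2f} -> p(inf) ~ {p:.3f}")
# p0 = 0.05 converges to 0, p0 = 0.20 converges to 1, as predicted by Theorem 1.
```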
Fig. 1 presents the results for two scenarios: (1) the solid curve corresponds to the proposed game model including the reputation system, in which cooperative nodes forward the packets of non-cooperative nodes with probability s_h*; (2) the dashed curve corresponds to s_h = 1, i.e. cooperative nodes forward all the time, which is the packet forwarding game introduced in [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF], [START_REF] Félegyházi | Nash Equilibria of Packet Forwarding Strategies in Wireless Ad Hoc Networks[END_REF], [START_REF] Jaramillo | A Game Theory Based Incentivize Cooperation in Wireless Ad hoc Networks[END_REF], [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], etc. It is seen that with the new formulation, which integrates the reputation mechanism as a constraint, the system converges towards the cooperative state when the game settings are chosen appropriately; global cooperation is thus guaranteed after some time. When the game does not include the reputation constraint, the population converges to non-cooperation, the unique evolutionarily stable strategy of that game, regardless of the initial condition and game settings. The results of Fig. 1 can be used to characterize the expected utility of the players: Fig. 2 presents the results for both cases, s_h = s_h* and s_h = 1. The utilities evolve over time in the same way as the proportion of the considered population (here D); the proposed model thus provides better results and promotes cooperation among nodes. To quantify the impact of these results on network performance, we consider a network of 50 nodes randomly placed in a 1000 m × 1000 m area, with a transmission range of 150 m, and plot the normalized number of forwarded packets within the network, using the proposed game model with constraint and the model of previous works [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF], [START_REF] Félegyházi | Nash Equilibria of Packet Forwarding Strategies in Wireless Ad Hoc Networks[END_REF], [START_REF] Jaramillo | A Game Theory Based Incentivize Cooperation in Wireless Ad hoc Networks[END_REF], [START_REF] Seredynski | Evolutionary Game Theoretical Analysis of Reputation-based Packet Forwarding in Civilian Mobile Ad Hoc Networks[END_REF], etc., defined without constraint and with cooperative nodes forwarding all the time (s_h = 1). Every node needs to send 10 packets to a given destination. Fig. 3 shows the results for the game settings λ = 3, δ_r = 3 and δ_b = 1. The influence is direct, since the number of forwarded packets is strongly linked to the proportion of nodes cooperating in packet forwarding.
Remark: while setting δ_b = 0 can indeed make p = 1 the only ESS, this may not be a suitable reputation model for the MANET framework, for several reasons. Firstly, setting δ_b = 0 completely discourages D from forwarding packets from the H class, which may also include new users of the MANET; new users would then be discouraged, as they might be unable to send their packets without first increasing their reputation.
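For the remark above, the effect of δ_b on the threshold is immediate from p_T = δ_b / (δ_b + λδ_r); a two-line check with the settings of this section:

```python
lam, dr = 3.0, 3.0
for db in (0.0, 0.25, 1.0, 3.0):
    print(f"delta_b = {db:4.2f} -> p_T = {db / (db + lam * dr):.3f}")
# delta_b = 0 gives p_T = 0: cooperation invades from any initial share,
# at the price of doves never serving low-reputation (e.g., new) nodes.
```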
Secondly, note that D may not always forward packets from D, since in practice the channel conditions between the nodes also play a big role in determining the resource cost and therefore the utility gained by forwarding (which we have not accounted for in this work). Accounting for channel fading due to path loss or small-scale fading would therefore be a relevant extension of this work. These considerations show that the reputation model parameters must be carefully designed in practice.
Conclusion
The contribution of this paper is a new formulation of the packet forwarding game [START_REF] Félegyházi | Game Theory in Wireless Networks: A Tutorial[END_REF] that introduces a reputation system in which reputation changes depend on the reputation class of the packet source, i.e., cooperative or non-cooperative. The aim is to motivate node cooperation using a simple and efficient mechanism. As less reputation is lost by not forwarding packets from selfish users (classified by the reputation system), cooperative users effectively forward the packets of other cooperative users and may avoid forwarding packets from selfish users. We have demonstrated, using evolutionary game theory concepts, that global cooperation in the network can be achieved under the stated conditions on the game settings, with a low computational complexity. Finally, through simulations, we have shown that in terms of the number of forwarded packets in the MANET, the proposed game model provides significant gains over the game model in which cooperative nodes forward packets regardless of the opponent's behavior. As an extension of the present work, we propose to study the multi-hop case, where the interaction involves more than two players. Another relevant extension would be to account for channel fluctuations in the utility function, which might lead cooperative-class users to forward packets from selfish users rather than from another cooperative user, despite the reputation losses, when the channel conditions are favorable.
Fig. 1. Evolutionary dynamics of the doves, the nodes that play 'Cooperate' with probability s_h* in the proposed game model, compared with the previous packet forwarding game model where s_h = 1, plotted for several initial frequency values.
Fig. 2. Expected utility of D in the proposed game and in the previous packet forwarding game model (s_h = 1), plotted for the initial frequency values 0.7 and 0.3.
Fig. 3. Normalized number of forwarded packets for an ad hoc network of 50 nodes, in the proposed game model and in the previous packet forwarding game model (s_h = 1), plotted for the initial frequency values 0.7 and 0.3.
The present work is supported by the LIA project between CRAN, Lorea and the International University of Rabat.
27,097
[ "779851", "5857", "1068236", "936663" ]
[ "93067", "185180", "1289", "93067", "185180" ]
01745270
en
[ "info" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745270/file/unet6-1.pdf
Daniel Bonilla Licea email: dbonillalicea85@gmail.com Vineeth S Varma Samson Lasaulce email: samson.lasaulce@l2s.centralesupelec.fr Jamal Daafouz email: daafouz6@univ-lorraine.fr Mounir Ghogho email: mounir.ghogho@uir.ac.ma Des Mclernon email: d.c.mclernon@leeds.ac.uk

Robust trajectory planning for robotic communications under fading channels

Introduction

Traditionally in the wireless literature, the trajectory of the mobile node is assumed to be an exogenous variable and the communication resources are optimized based only on the wireless parameters. However, we have seen the emergence of new technologies, such as unmanned aerial or ground vehicles, drones and mobile robots, which have communication objectives in addition to their destination or motion based objectives [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF]. Several works [START_REF] Ooi | Minimal energy path planning for wireless robots[END_REF] have studied trajectory optimization problems when the communication constraint is that of having a target SNR. However, we are interested in the case where the communication requirement is downloading a certain number of bits within a given time. Previously, in [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF], we studied the problem where a mobile robot (MR) must download (or upload) a given amount of data from an access point and also reach a certain destination within a given time period. However, in [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF], we did not account for wireless channel fading and in fact assumed that the wireless signal strength is determined purely by the path loss. In this work, we want to relax this strong assumption and account for small-scale fading and shadowing effects. In this article we show how to design offline a robust reference trajectory under a limited amount of information and high uncertainty about the wireless channel. This trajectory allows the MR to reach the goal point and completely transmit the content of its buffer to the access point (AP) with a sufficiently high probability. In practice, this reference trajectory would be preloaded on the MR prior to the execution of the task and would serve the MR as a guide, which may need to be slightly modified according to the wireless channel measurements collected by the MR while executing its task. This adaptation mechanism is outside the scope of this article; we focus only on the design of the reference trajectory. Future work will address the online adaptation mechanism. The main contributions of this paper are as follows. - Trajectory planning for an MR starting from an arbitrary point, which must reach a certain target point and download a certain number of bits from a nearby access point. - Optimization of the trajectory to minimize a cost function which depends on the amount of data left in the buffer to be downloaded and on the energy consumed. - Consideration of a robust cost function which accounts for the random fluctuations of the wireless channel due to small-scale fading and shadowing effects. Note that the first two contributions were also provided in [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF] for the much simpler case in which only path loss is assumed to determine the wireless signal.
The rest of the paper is structured as follows. We present the model for the wireless communication system and the robot motion in Section 2. We then give the problem statement in Section 3 and a solution concept in Section 4. Finally, we present numerical simulations in Section 5.

System Model

The position of the MR is given by p(t) ∈ R^2, at any time t ∈ R_{≥0}. We assume that the robot starts at position s, i.e., p(0) = s. The MR and the AP communicate with a frame duration T during which the channel fading is assumed to be constant, i.e., we assume a block fading model. The robot has a buffer with state b(k) ∈ Z_{≥0} denoting the number of bits it must transmit at the discretized time k = t/T. The initial buffer size is the total file size and is assumed to be given by N, i.e., b(0) = N. The robot is equipped with a wireless system to communicate with an access point at p_AP, satisfying the following properties.

Communications system

The MR will move among dynamic scatterers and the bandwidth used for the communication will be lower than the coherence bandwidth. As a consequence, the wireless channel between the MR and the access point (AP) will experience time-varying, flat multipath (small-scale) fading as well as shadowing (large-scale fading). Without loss of generality, we assume from now on that the communication problem consists in uploading data from the MR to the AP. The signal received by the AP at time t can be written as

y_AP(t, p(t)) = [ h(p(t), t) s(p(t)) / ‖p(t) − p_AP‖_2^{α/2} ] x(t) + n_AP(t),   (1)

where p_AP is the location of the AP, h(p(t), t) represents the time-varying small-scale fading, which we assume to be Nakagami distributed, and s(p(t)) represents the shadowing term, which we assume to be lognormally distributed [START_REF] Cai | A Two-Dimensional Channel Simulation Model for Shadowing Processes[END_REF]. Nakagami fading is well suited to model the behavior of multipath fading in many practical scenarios [START_REF] Simon | Digital Communications over Fading Channels[END_REF]. Without loss of generality we assume E[|h(p(t))|^2] = 1, so the p.d.f. of |h(p(t))| becomes

f_h(z, m) = (2 m^m / Γ(m)) z^{2m−1} exp(−m z^2),   (2)

where m is the shape factor of the Nakagami distribution. As mentioned before, the shadowing term s(p(t)) is lognormally distributed, so we have log(s(p(t))) ~ N(0, σ_s^2), with σ_s^2 being its variance. The normalized spatial correlation of the shadowing is

r(p, q) = exp( −‖p − q‖_2 / β ),   (3)

where β is the decorrelation distance, which is unknown to the MR prior to the execution of the trajectory. The coefficient α in (1) is the power path loss coefficient, which usually takes values between 2 and 6 depending on the environment; x(t) is the signal transmitted by the robot, with average power E[|x(t)|^2] = P, and n_AP(t) ~ CN(0, σ_n^2) is the zero-mean additive white Gaussian noise (AWGN) at the AP's receiver. From [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF] we have that the signal-to-noise ratio (SNR) at the AP (in dB) is:

Γ_dB(p(t)) = 10 log_10(P/σ_n^2) + 20 log_10(s(p(t))) + 20 log_10(|h(p(t), t)|) − 10 α log_10(‖p(t) − p_AP‖_2).   (4)
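For illustration, the channel model (1)-(4) can be sampled numerically as in the following Python sketch, which combines Nakagami-m small-scale fading (|h|^2 drawn as Gamma(m, 1/m) so that E[|h|^2] = 1), lognormal shadowing with the exponential spatial correlation (3), and power path loss. The parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db_samples(points, p_ap, alpha=2.0, m=1.5, sigma_s=np.sqrt(2.5),
                   beta=10.0, tx_snr_db=33.0):
    """SNR in dB at the AP for MR positions `points` (K x 2 array), eq. (4).

    Small-scale fading |h| is Nakagami(m), i.e., |h|^2 ~ Gamma(m, 1/m);
    shadowing s is lognormal with exponentially correlated log-values (3).
    """
    d = np.linalg.norm(points - p_ap, axis=1)            # distances to the AP
    # Correlated Gaussian log-shadowing via the exponential correlation model
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    cov = sigma_s**2 * np.exp(-dist / beta)
    log_s = rng.multivariate_normal(np.zeros(len(points)), cov)
    # Nakagami small-scale fading with E[|h|^2] = 1
    h2 = rng.gamma(shape=m, scale=1.0 / m, size=len(points))
    return (tx_snr_db + 20 * log_s / np.log(10)          # shadowing term in dB
            + 10 * np.log10(h2)                          # fading term in dB
            - 10 * alpha * np.log10(d))                  # path loss term

if __name__ == "__main__":
    path = np.linspace([8.0, 0.0], [9.0, -6.0], 50)      # straight s -> g path
    print(snr_db_samples(path, p_ap=np.array([0.0, 0.0]))[:5])
```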
As a result, the number of bits in the MR's buffer is given by:

b(k) = [ N − Σ_{j=0}^{k} R(Γ̂(p(jT))) ]^+,   (5)

where a^+ = a for a > 0 and a^+ = 0 for a ≤ 0; Γ̂(p(jT)) is the estimate of Γ(p(jT)), which is Γ_dB(p(jT)) in linear scale; N is the initial number of bits in the buffer; and R(Γ̂(p(kT))) is the number of bits in the payload of the packet transmitted during the duplexing period k. As mentioned above, the number R(Γ̂(p(kT))) of bits transmitted in the payload is computed by the MR according to its most recent SNR estimate. So we have (for b(k) ≠ 0):

R(Γ̂) = R_j,  ∀ Γ̂ ∈ [η_j, η_{j+1}),  j = 0, 1, …, J,   (6)

with R_j < R_{j+1}, η_j < η_{j+1}, R_0 = 0, η_0 = 0, and η_1 must be above the sensitivity of the AP's receiver.

Mobile robot

We assume the MR to be omnidirectional and its velocity to be controlled directly. Its motion is then described by

ṗ(t) = u(t),   (7)

where p(t) is the MR position at time t and u(t) is the control input, which is bounded by:

‖u(t)‖_2 ≤ u_max.   (8)

Finally, the mechanical energy spent by the MR between t_0 and t_1 while using the control signal u(t) is:

E_mechanical(t_0, t_1, u) = m ∫_{t_0}^{t_1} ‖u(t)‖_2^2 dt,   (9)

where m is the mass of the MR.

Problem statement

The objective of the robot is to depart from a starting point s, reach a goal point g within a time t_f, and transmit all the content of its buffer to the AP. The desired trajectory is one that consumes little mechanical energy and also allows the robot to transmit all the content of the buffer quickly. In addition, we want the MR, when following this trajectory, to succeed in emptying its buffer with high probability. We assume that the only knowledge available to the MR (and the designer) about the environment, prior to the execution of the trajectory, is the position of the starting and goal points (i.e., s and g) and an estimate of the path loss coefficient α; we assume no knowledge about the severity of the small-scale fading (i.e., about the shape factor m in (2)). Solving the general problem with no approximation is very hard due to the large number of stochastic perturbations, the shadowing correlation and the large number of terms in the sum in (5) [START_REF] Malmirchegini | On the Spatial Predictability of Communication Channels[END_REF]. This results in a very complicated expression for the probability of the buffer being empty at t_f. Therefore, we look at the most likely buffer state, given by

b̄(k) = [ N − Σ_{j=0}^{k} R̄(Γ(p(jT))) ]^+,   (10)

where R̄(Γ(p(jT))) is the statistical mode of R(Γ(p(jT))), i.e.,

R̄(Γ(p(kT))) = max argmax_{R ∈ {R_j}_{j=0}^{J}} Pr( R(Γ(p(kT))) = R ).   (11)

This results in the following optimization problem:

minimize_u  θ_1 ∫_0^{t_f} ( ‖u(t)‖_2^2 / u_max^2 ) dt + θ_2 Σ_{k=0}^{t_f/T} T b̄(k)/N
s.t.  ṗ(t) = u(t),  ‖u(t)‖_2 ≤ u_max,  p(0) = s,  p(t_f) = g,
      Σ_{k=0}^{t_f/T} R̄(Γ(p(kT))) ≥ r_R N.   (12)

The optimization target is a convex combination of the energy spent in motion by the robot (9) and a second term which estimates how quickly the buffer is emptied. This second term is a sum over the most likely number of bits left in the buffer at time instant t = kT (i.e., b̄(k)). The coefficients {θ_k}_{k=1}^{2} of the convex combination determine the relative importance of each optimization criterion. Note that due to the stochastic nature of the channel we cannot ensure that the MR, when following the reference trajectory, will always be able to empty its buffer, but we can ensure that this happens with a certain probability.
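The buffer recursion (5) together with the threshold-based rate selection (6) can be sketched as follows. The rate table uses the illustrative values of Section 5 (multiples of N_s), while the SNR thresholds eta_j below are placeholders rather than the BER-derived values used in the paper.

```python
import numpy as np

# Illustrative rate table: payload bits per packet for each SNR interval
# [eta_j, eta_{j+1}); the thresholds are placeholders, not the paper's
# BER-derived values.
N_S = 100
RATES = np.array([0, 4 * N_S, 16 * N_S, 64 * N_S])    # R_0 .. R_3
ETA_DB = np.array([-np.inf, 5.0, 15.0, 25.0])         # assumed thresholds (dB)

def rate(snr_db):
    """Adaptive rate R(Gamma_hat), eq. (6): highest R_j whose threshold is met."""
    return RATES[np.searchsorted(ETA_DB, snr_db, side="right") - 1]

def buffer_trajectory(snr_db_sequence, n_bits):
    """Buffer state b(k), eq. (5), given per-slot SNR estimates."""
    b, states = n_bits, []
    for g in snr_db_sequence:
        b = max(b - rate(g), 0)
        states.append(b)
    return states

if __name__ == "__main__":
    snrs = [3.0, 8.0, 18.0, 30.0, 30.0]
    print(buffer_trajectory(snrs, n_bits=600 * N_S))
```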
As calculating the actual probability of failing to meet the communication requirement is very hard, as explained above, we introduce r_R ≥ 1, an overestimation parameter selected by the designer. The final constraint in (12) ensures that the sum of the statistical modes of the bits transmitted in the payload over the whole trajectory is equal to an overestimation of the initial number of bits in the buffer, i.e., r_R N. So when the trajectory is actually executed, the probability that the buffer will be emptied will be high, and by increasing the overestimation parameter r_R we can reduce the probability of the MR failing to empty its buffer when it reaches the goal point g. The term b̄(k) is a discrete and deterministic function of the MR's position. This makes the problem much more tractable. Now, to solve the optimization problem (12) we first define the region A_j as:

A_j = { p | R̄(Γ(p)) = R_j }.   (13)

Due to the wireless channel model, the region A_J is circular, while the shape of region A_j, for j = 1, 2, …, J−1, is a ring with inner and outer radii r_{j+1} and r_j respectively, where r_j is given by:

r_j = min { r | R̄(Γ(r [cos(θ) sin(θ)]^T − p_AP)) = R_j }.   (14)

The radii r_j are computed from the channel statistics, which can be estimated using the techniques presented in [START_REF] Malmirchegini | On the Spatial Predictability of Communication Channels[END_REF]. Nevertheless, due to lack of space we do not provide here the details of how to compute them. We also define u_j(t) as any control law that takes the vehicle through the regions {A_k}_{k=0}^{j}. The set of all control laws u_j(t) will be denoted U_j, and U = ∪_{j=0}^{J} U_j is the set of all control laws. One simple way to solve (12) is to first solve it with the additional constraint u ∈ U_j, once for each value of j = 1, 2, …, J. We denote by u_j^*(t) the optimum control law that solves (12) under the additional constraint u ∈ U_j, and by u^*(t) the control law that solves (12) under the constraint u ∈ {u_j^*(t)}_{j=1}^{J}. Therefore, to solve (12) we calculate all the optimum control signals u_j^*. In order to minimize the mechanical energy term in the optimization target of (12), the optimum control law u_j^*(t) must make the robot enter and exit the convex hull of each region {A_n}_{n=0}^{j} at most once. The input and output points of the convex hull of the area A_j are denoted i_j and o_j respectively. We regroup these points in the set C_j = {s, i_1, i_2, …, i_j, o_j, o_{j−1}, …, o_1, g} and index them as follows:

c_0^j = s;  c_n^j = i_n, for n = 1, 2, …, j;  c_n^j = o_{2j+1−n}, for n = j+1, j+2, …, 2j;  c_{2j+1}^j = g,   (15)

where s and g are the starting and goal points of the robot. In addition, t_n is the time instant at which the robot is at c_n^j, and:

τ_n t_f = (t_{n+1} − t_n),  n = 0, 1, …, 2j,   (16)

where:

Σ_{n=0}^{2j} τ_n = 1,  τ_n > 0.   (17)

Note that the coefficients {τ_n}_{n=0}^{2j} determine the portion of time t_f that the robot takes to go from c_{n−1}^j to c_n^j. Let us also write the points belonging to C_j in polar coordinates as:

c_n^j = r_n^j [cos(φ_n^j) sin(φ_n^j)]^T.   (18)

From the definition of i_n and o_n we know that they lie on a circle of radius r_n, which can be computed from the p.m.f. of R(Γ(p(kT))). Therefore we know {r_n^j}_{n=1}^{2j}, and as a consequence the only unknowns needed to uniquely determine C_j are the angles {φ_n^j}_{n=1}^{2j}, where φ_n^j is the angle of c_n^j with respect to the AP.
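Since the radii r_j in (14) follow from the channel statistics, one plausible numerical route is a Monte Carlo search for the largest distance at which a given rate is still the most likely one, per (11). The sketch below assumes the toy rate table and channel parameters of the previous snippets and is only one possible reading of (14); it relies on the mode rate being non-increasing with distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def mode_rate(d, n_draws=20000, alpha=2.0, m=1.5, sigma_s2=2.5, tx_snr_db=33.0,
              rates=(0, 400, 1600, 6400), eta_db=(-np.inf, 5.0, 15.0, 25.0)):
    """Statistical mode of the rate at distance d from the AP, eq. (11).

    Draws i.i.d. fading/shadowing samples (spatial correlation is irrelevant
    for a single point) and returns the most frequent rate.
    """
    log_s = rng.normal(0.0, np.sqrt(sigma_s2), n_draws)
    h2 = rng.gamma(m, 1.0 / m, n_draws)
    snr = (tx_snr_db + 20 * log_s / np.log(10) + 10 * np.log10(h2)
           - 10 * alpha * np.log10(d))
    idx = np.searchsorted(eta_db, snr, side="right") - 1
    counts = np.bincount(idx, minlength=len(rates))
    return rates[counts.argmax()]

def radius(r_j, d_grid):
    """Largest distance at which rate r_j is still the most likely one
    (approximating the outer boundary of the corresponding region)."""
    ok = [d for d in d_grid if mode_rate(d) >= r_j]
    return max(ok) if ok else None

if __name__ == "__main__":
    grid = np.linspace(1.0, 60.0, 60)
    for r in (400, 1600, 6400):
        print(f"rate {r}: region radius ~ {radius(r, grid)}")
```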
Now, the optimum control law u_j^*(t) takes the robot from c_0^j up to c_{2j+1}^j, in ascending order, through each point in C_j. We can also see that the second term in the optimization target of (12) depends only on the time spent in each region A_j (i.e., on the durations τ_k t_f) and not on the shape of the particular path followed by the robot nor on its velocity profile. So the velocity profile and the path must be selected to minimize the mechanical energy (i.e., the first term in the optimization target of (12)). To do so, the vehicle must go from c_{n−1}^j to c_n^j in a time τ_{n−1} t_f (to be determined) using minimum energy. Using calculus of variations [START_REF] Kirk | Optimal control theory: An introduction[END_REF] we can show that this is achieved by:

u_j(t) = (c_n^j − c_{n−1}^j) / (τ_{n−1} t_f),  ∀ t ∈ [t_{n−1}, t_n).   (19)

Therefore, if we add the constraint u ∈ U_j and then optimize {τ_n}_{n=0}^{2j} and the angles {φ_n^j}_{n=1}^{2j}, we obtain u_j^*(t). Now, if we use the constraint u ∈ U_j and select u_j(t) to take the form (19), then the optimization target of problem (12) becomes:

J({τ_n}_{n=0}^{2j}, {φ_n^j}_{n=1}^{2j}) = θ_1 Σ_{n=1}^{2j+1} ‖c_n^j − c_{n−1}^j‖_2^2 / (u_max^2 τ_{n−1} t_f) + θ_2 Σ_{k=0}^{t_f/T} T b̄(k)/N.   (20)

And using (13), (19) and the constraint in (12), we have the following approximation:

(τ_j t_f / T) R_j + Σ_{n=0}^{j−1} ((τ_{2j−n} + τ_n) t_f / T) R_n ≥ r_R N.   (21)

So, taking into account (19)-(21), the optimization problem (12) becomes:

minimize_{ {τ_n}_{n=0}^{2j}, {φ_n^j}_{n=1}^{2j} }  J({τ_n}_{n=0}^{2j}, {φ_n^j}_{n=1}^{2j})
s.t.  Σ_{n=0}^{2j} τ_n = 1,  τ_n > 0,
      (r_n^2 + r_{n−1}^2 − 2 r_n r_{n−1} cos(φ_n^j − φ_{n−1}^j)) / (τ_{n−1}^2 t_f^2) ≤ u_max^2,  n = 1, …, 2j+1,
      c_n^j = r_n^j [cos(φ_n^j) sin(φ_n^j)]^T,  c_0^j = s,  c_{2j+1}^j = g,
      (τ_j t_f / T) R_j + Σ_{n=0}^{j−1} ((τ_{2j−n} + τ_n) t_f / T) R_n ≥ r_R N,   (22)

where the first line of constraints ensures that the coefficients {τ_k}_{k=0}^{2j} determine the portion of the total time t_f taken to go from one point in C_j to the next. The next line of constraints enforces the maximum velocity of the robot. The final constraint is the robust constraint, which allows the designer to obtain a high probability of the MR completely emptying its buffer. To solve the optimization problem (22) we first express the angles {φ_n^j}_{n=1}^{2j} as functions of the durations {τ_n}_{n=0}^{2j}. This is achieved by differentiating the optimization target of (22); see [START_REF] Licea | Trajectory planning for energy-efficient vehicles with communications constraints[END_REF] for more details. Then we use the simulated annealing algorithm (SAA) [START_REF] Russell | Artificial Intellingence: A Modern Approach[END_REF] to optimize the durations {τ_n}_{n=0}^{2j}. This concludes the discussion of the optimization of the trajectory; in the next section we present some simulations to better understand its behaviour and observe its performance.

Simulations

In this section we present some simulations to gain insight into the trajectories obtained by the method presented in this paper. We select 10 log_10(P/σ_n^2) = 33 dB. The initial number of bits in the buffer is b(0) = 600 N_s, while the possible numbers of bits transmitted in one packet are R_0 = 0, R_1 = 4 N_s, R_2 = 16 N_s, R_3 = 64 N_s, where N_s is the number of symbols transmitted in one packet. Note that such values for the number of bits transmitted in the payload can be obtained using rectangular M-QAM modulations. Regarding the thresholds {η_j}_{j=0}^{J}, we fix them so that the bit error rate does not exceed 10^−3.
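The last step — optimizing the durations {tau_n} with simulated annealing — can be sketched generically as follows. The objective passed to the routine is left abstract (in practice it would wrap the cost (20), with the constraints of (22) added as penalty terms); the proposal move and the cooling schedule below are standard textbook choices, not necessarily those used by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

def anneal(cost, n_tau, iters=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over time fractions tau (positive, summing to 1).

    `cost` maps a tau vector to the penalized objective; the proposal
    perturbs tau in log-space and renormalizes, so the simplex constraint
    (17) always holds.
    """
    tau = np.full(n_tau, 1.0 / n_tau)
    cur_c = cost(tau)
    best, best_c, temp = tau, cur_c, t0
    for _ in range(iters):
        prop = tau * np.exp(0.1 * rng.standard_normal(n_tau))
        prop /= prop.sum()
        c = cost(prop)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / temp):
            tau, cur_c = prop, c
            if c < best_c:
                best, best_c = prop, c
        temp *= cooling
    return best, best_c

if __name__ == "__main__":
    # Dummy quadratic cost standing in for (20) plus penalty terms of (22).
    target = np.array([0.5, 0.2, 0.3])
    tau, c = anneal(lambda t: np.sum((t - target) ** 2), n_tau=3)
    print(np.round(tau, 3), round(c, 5))
```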
Regarding the channel, we select the path loss coefficient α = 2 and the shadowing variance σ_s^2 = 2.5, and for the decorrelation distance we select β = 10λ, where λ is the wavelength of the RF carrier used for communications. We select the starting and goal points as s = [8λ 0] and g = [9 −6]λ, and we locate the access point at the origin. The time to reach the goal point is t_f = 20 s, the period between packets is T = 100 ms and the maximum velocity of the MR is 10λ per second. First of all, we consider for reference a trajectory that goes from s to g using minimum energy. This is achieved by a linear path between both points and a constant velocity profile. We denote this trajectory T_0. Then we consider a trajectory T_1 optimized according to (22) with θ_1 = 1, θ_2 = 0 and r_R = 1. This trajectory is optimized to use minimum energy while satisfying constraint (21). We also consider another trajectory, T_2, optimized according to (22) with θ_1 = 0, θ_2 = 1 and r_R = 1. This trajectory is optimized to empty the buffer as quickly as possible. In Fig. 1 we can observe the paths corresponding to the trajectories T_0, T_1 and T_2. We first note that the path corresponding to T_1 is shorter than the path corresponding to T_2, which agrees with the fact that the trajectory T_1 is optimized to minimize the energy consumed (while satisfying constraint (21)). Regarding the shape of the paths, we see that the path of T_2 reaches A_2 through the shortest path; this is done in order to improve the transmission rate as quickly as possible and thus empty the buffer as soon as possible. Regarding the path of T_1, the robot reaches A_2 by moving in a direction orthogonal to the vector g − s; by doing so, the robot minimizes its deviation from g, which reduces the total distance travelled and consequently the energy spent. When we observe the velocity profiles of both trajectories in Fig. 2, we first note that the period with the highest velocity takes place from t = 0 until t = t_1; this is because the robot is rushing to get out of A_0 in order to start transmitting as many bits as possible. We also observe that the minimum velocity occurs when the robot reaches the innermost area of the trajectory (in this case A_2). This is in order to spend as much time as possible in the area with the best channel conditions along the trajectory. Then, in Table 1 we report the average time at which the buffer is emptied, E[t_empt], the probability of success P_S (i.e., the probability of emptying the buffer by the time the robot reaches g) and the amount of mechanical energy used, normalized by mλ^2. As expected, the trajectory T_0 uses minimum energy, but its probability of success is very low (0.0868). On the other hand, the probabilities of success of the optimized trajectories T_1 and T_2 are much higher, 0.7206 and 0.9347 respectively, but due to their longer paths and higher velocities their energy consumption is higher. Next, we observe the effect of the robustness parameter r_R; see (21). To do so we consider two more trajectories. The first one, denoted T_3, is optimized according to (22) with θ_1 = 0.3, θ_2 = 0.7 and r_R = 1, while the second, denoted T_4, is optimized according to (22) with θ_1 = 0.3, θ_2 = 0.7 and r_R = 1.5. We observe in Fig. 3 that their paths are very similar (the path corresponding to T_4 is slightly longer), but their velocity profiles are clearly different, as we can observe in Fig. 4.
The trajectory T_4 spends more time in the area A_2 in order to increase the average data rate and therefore increase the probability of success. But by doing so, the robot has to move faster when it leaves A_2 in order to reach g in time. Comparing the probabilities of success of T_4 and T_3 in Table 1, we observe that increasing the robustness parameter r_R indeed increases the probability of success, although it also increases the energy consumption. Note that all the optimized predefined trajectories were able to produce a relatively large probability of success in a fading channel without the use of any kind of diversity. This large probability of success was achieved by optimizing the trajectories using only first-order statistics of the wireless channel. In future work we will take into account channel measurements to develop an online mechanism which further improves the success probability while reducing the amount of mechanical energy.

Conclusions

We have formulated the problem of robust trajectory optimization for an MR with a target point to reach and a certain number of bits to transmit within a given time. Due to small-scale fading and shadowing effects, obtaining a suitable reference trajectory offline is non-trivial. Therefore, we consider the most likely buffer state at each time, determined based on the statistical mode, and optimize the desired metric by introducing an overestimation parameter for robustness. This approach results in an optimization problem with a feasible solution.

Fig. 1. Paths corresponding to trajectories T_0 (green), T_1 (blue) and T_2 (magenta). The starting point s is represented by a circle, the goal point g by a triangle, and the AP is located at the origin. The delimitation of the areas {A_j}_{j=0}^{3} is also shown.

Fig. 2. Velocity profiles of the optimized trajectories T_1 and T_2.

Fig. 3. Paths corresponding to trajectories T_3 (green) and T_4 (dashed red).

Fig. 4. Velocity profiles of trajectory T_3 (top) and T_4 (bottom).

Table 1. Performance of the different trajectories.

trajectory   E[t_empt] (s)   P_S      Energy/(mλ^2) (J)
T_0          14.46           0.0868   0.1859
T_1          10.41           0.7206   0.8139
T_2           7.72           0.9347   4.8924
T_3           7.68           0.8652   3.8928
T_4           7.59           0.9262   4.7909

The present work is supported by the LIA project between CRAN, Lorea and the International University of Rabat.
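The success probabilities P_S of Table 1 can in principle be estimated by Monte Carlo simulation of the channel along a candidate trajectory, as in the following sketch. For brevity it reuses the toy rate table above and draws independent shadowing per slot; a faithful estimate would use the spatially correlated sampler of (3).

```python
import numpy as np

rng = np.random.default_rng(3)

def success_prob(points, p_ap, n_bits, n_runs=2000, alpha=2.0, m=1.5,
                 sigma_s2=2.5, tx_snr_db=33.0,
                 rates=(0, 400, 1600, 6400), eta_db=(-np.inf, 5.0, 15.0, 25.0)):
    """Fraction of channel realizations emptying the buffer along the path.

    Shadowing is drawn independently per slot for brevity, which slightly
    overstates diversity compared with the correlated model (3).
    """
    d = np.linalg.norm(points - p_ap, axis=1)
    rates = np.asarray(rates)
    wins = 0
    for _ in range(n_runs):
        log_s = rng.normal(0.0, np.sqrt(sigma_s2), len(d))
        h2 = rng.gamma(m, 1.0 / m, len(d))
        snr = (tx_snr_db + 20 * log_s / np.log(10) + 10 * np.log10(h2)
               - 10 * alpha * np.log10(d))
        sent = rates[np.searchsorted(eta_db, snr, side="right") - 1].sum()
        wins += sent >= n_bits
    return wins / n_runs

if __name__ == "__main__":
    path = np.linspace([8.0, 0.0], [9.0, -6.0], 200)   # straight s -> g path
    print(success_prob(path, np.array([0.0, 0.0]), n_bits=600 * 100))
```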
23,226
[ "1029863", "5857", "1068236", "936663" ]
[ "194055", "185180", "1289", "185180", "194055", "549427", "194055" ]
01745316
en
[ "sde" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745316/file/gr2017-pub00054936.pdf
Lucie Bezombes email: luciebezombes@irstea.fr Stéphanie Gaucherand Christian Kerbiriou Marie-Eve Reinert Thomas Spiegelberger Joseph William Bull Cara Clark Christian Küpfer Frank Lupi Charles K Minns Serge Muller

Ecological equivalence assessment methods: what trade-offs between operationality, scientific basis and comprehensiveness?

Keywords: Biodiversity offset, ecological equivalence, ecological equivalence assessment methods, no net loss, mitigation hierarchy, compensation

[Residue of Table 1, examples of EAMs and their scope: 1976, US Fish and Wildlife Service, USA — species populations, terrestrial habitats; Business and Biodiversity Offsets Programme, international — terrestrial habitats; habitat — terrestrial habitats; biotope.]

Introduction

Biodiversity erosion has accelerated in recent decades [START_REF] Sala | Global Biodiversity Scenarios for the Year 2100[END_REF] and has become a major environmental concern, as biodiversity loss has been identified as a major driver of ecosystem change [START_REF] Hooper | A global synthesis reveals biodiversity loss as a major driver of ecosystem change[END_REF]. Alongside "classic" responses such as species and ecosystem protection and conservation, biodiversity compensation is increasingly used to counteract impacts from development. It is applied worldwide and has legal status in some countries (e.g., the United States, Canada, Australia, Germany, France and the United Kingdom). Compensation mechanisms remain country-dependent (McKenney & Kiesecker 2010; Commissariat Général au Développement Durable (CGDD) 2012) but are usually integrated into the mitigation hierarchy, after avoidance and reduction of impacts. Efforts have been put into enhancing biodiversity compensation, and biodiversity offsetting in particular. A biodiversity offset is a way of compensating for biodiversity losses (Business and Biodiversity Offsets Programme, BBOP 2012a) with the aim of achieving "no net loss" (NNL) of biodiversity ([START_REF] Ten Kate | Biodiversity offsets: Views, experience, and the business case[END_REF]). Concerns about offset practices have been expressed in the literature for many years [START_REF] Race | Fixing Compensatory Mitigation: What Will it Take?[END_REF], as offsetting is the last lever that can be acted upon in order to achieve NNL [START_REF] Gibbons | Offsets for land clearing: No net loss or the tail wagging the dog?[END_REF]. Notably, frameworks have been established to guide the design of offset measures so as to achieve NNL of biodiversity (Business and Biodiversity Offsets Programme, BBOP). One of the main conditions is that biodiversity gains should be comparable, or equivalent, to biodiversity losses [START_REF] Gardner | Biodiversity Offsets and the Challenge of Achieving No Net Loss[END_REF]. When this happens, "ecological equivalence" is reached. Ecological equivalence is one of the most widely discussed conceptual challenges in the related scientific literature [START_REF] Gonçalves | Biodiversity offsets: from current challenges to harmonized metrics[END_REF]. A particularly controversial aspect is how ecological equivalence should be assessed.
A number of essential considerations that should be taken into account when evaluating equivalence have been identified [START_REF] Quétier | Assessing ecological equivalence in biodiversity offset schemes: Key issues and solutions[END_REF][START_REF] Bull | Biodiversity offsets in theory and practice[END_REF][START_REF] Quetier | No net loss of biodiversity or paper offsets? A critical review of the French no net loss policy[END_REF], which we summarize in four key groups: ecological, spatial, temporal and uncertainty considerations. Ecological considerations gather (i) issues related to the choice of the biodiversity components for which losses and gains are quantified, also called the target biodiversity [START_REF] Quétier | Assessing ecological equivalence in biodiversity offset schemes: Key issues and solutions[END_REF], and (ii) the set of indicators used to quantify those biodiversity components, also known as the currency [START_REF] Bull | Biodiversity offsets in theory and practice[END_REF] or metrics (Business and Biodiversity Offsets Programme, BBOP 2012a). Spatial considerations relate to the integration of the landscape context of impacted and compensatory sites into equivalence assessment. The landscape context gives information about the landscape components influencing biodiversity (e.g., connectivity and metapopulation functioning; [START_REF] Beier | Do Habitat Corridors Provide Connectivity[END_REF]), which are notably important for locating offset sites [START_REF] Kiesecker | A Framework for Implementing Biodiversity Offsets: Selecting Sites and Determining Scale[END_REF][START_REF] Saenz | A Framework for Implementing and Valuing Biodiversity Offsets in Colombia: A Landscape Scale Perspective[END_REF]. According to the BBOP (2012b), "a biodiversity offset should be designed and implemented in a landscape context to achieve the expected measurable conservation outcomes". Temporal considerations relate to the time lag (also called delay) between the moment when the impact on biodiversity occurs and the moment when offset measures become fully effective [START_REF] Maron | Can offsets really compensate for habitat removal? The case of the endangered red-tailed black-cockatoo[END_REF], resulting in interim losses of biodiversity [START_REF] Dunford | The use of habitat equivalency analysis in natural resource damage assessments[END_REF]. One current solution to avoid or reduce interim losses is to implement compensation ahead of impacts (e.g., by using mitigation banks; [START_REF] Wende | Mitigation banking and compensation pools: improving the effectiveness of impact mitigation regulation in project planning procedures[END_REF]). But when no banking system is available, the assessment of equivalence should take temporal considerations into account [START_REF] Laitila | A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence[END_REF]. Finally, considerations on uncertainty refer to the lack of confirmed knowledge and hindsight when assessing equivalence; in this article we focus particularly on the risk of failure when implementing offset measures [START_REF] Moilanen | How Much Compensation is Enough? A Framework for Incorporating Uncertainty and Time Discounting When Calculating Offset Ratios for Impacted Habitat[END_REF][START_REF] Curran | Is there any empirical support for biodiversity offset policy?[END_REF].
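To illustrate how temporal and uncertainty considerations translate into offset sizing, the sketch below computes a simple offset area multiplier combining time discounting of delayed gains with a probability of offset failure. It is a deliberately simplified illustration in the spirit of the multiplier approaches cited above, not a reproduction of any published formula, and all numerical values are invented.

```python
def offset_multiplier(delay_years, discount_rate, failure_prob):
    """Simplified offset area multiplier.

    Gains delayed by `delay_years` are discounted, and only a fraction
    (1 - failure_prob) of offset actions is assumed to deliver gains, so
    the required area is inflated accordingly. Illustrative only.
    """
    time_factor = (1.0 + discount_rate) ** delay_years   # discounting of delayed gains
    risk_factor = 1.0 / (1.0 - failure_prob)             # compensate for expected failures
    return time_factor * risk_factor

if __name__ == "__main__":
    # E.g., gains fully effective after 15 years, 3% discount rate, and a
    # 30% failure rate for this restoration type (all assumed values):
    m = offset_multiplier(delay_years=15, discount_rate=0.03, failure_prob=0.30)
    print(f"offset area = {m:.2f} x impacted area")
```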
This risk depends mostly on the species or ecosystems concerned by the offset [START_REF] Tischew | Evaluating Restoration Success of Frequently Implemented Compensation Measures: Results and Demands for Control Procedures[END_REF], on the type of offset implemented [START_REF] Anderson | Ecological restoration and creation: a review[END_REF], such as habitat restoration, protection, creation or enhancement [START_REF] Levrel | Compensatory mitigation in marine ecosystems: which indicators for assessing the "no net loss" goal of ecosystem services and ecological functions?[END_REF], and on the ecological engineering techniques used [START_REF] Jaunatre | Can ecological engineering restore Mediterranean rangeland after intensive cultivation? A large-scale experiment in southern France[END_REF]. Equivalence Assessment Methods (EAMs) exist worldwide and are used by developers or authorities to evaluate biodiversity losses and gains (e.g., State of Florida 2004; Gibbons et al. 2009; Darbi & Tausch 2010). They are specifically conceived to ensure that offset measures are sufficient to reach ecological equivalence. Although every EAM seeks to ensure NNL of the targeted biodiversity, none is fully satisfactory, and the principles underlying some EAMs have been questioned [START_REF] Mccarthy | The habitat hectares approach to vegetation assessment: An evaluation and suggestions for improvement[END_REF][START_REF] Gordon | Perverse incentives risk undermining biodiversity offset policies[END_REF]. Notably, depending on the method used, calculations result in different offset surfaces for the same impact [START_REF] Bull | Comparing biodiversity offset calculation methods with a case study in Uzbekistan[END_REF]. It seems rather difficult, or even impossible, to move toward a unanimous worldwide method, mainly because of (i) the diversity of offset policies between countries (McKenney & Kiesecker 2010), (ii) the disparity between development projects and the resources committed to biodiversity conservation (Regnery et al. 2013b), and (iii) disparities in biodiversity status and conservation issues across contexts. Nonetheless, exploring the interactions between the characteristics underlying EAMs could highlight ways of improving equivalence assessment. Thus, we characterized existing EAMs with regard to three "challenges" that we identified as determinant for the effectiveness of EAMs in meeting NNL. In this article, we call these three "challenges" operationality, scientific basis and comprehensiveness. On the one hand, operationality is needed by developers and public authorities to carry out standardized assessments in a short amount of time, at reasonable cost [START_REF] Laycock | Biological and operational determinants of the effectiveness and efficiency of biodiversity conservation programs[END_REF] and in line with the skill level of the structures involved in mitigation studies. On the other hand, there is growing awareness in the scientific sphere that equivalence assessment should be grounded in science, including evidence-based biodiversity evaluation, objective and transparent metrics and calculations [START_REF] Gonçalves | Biodiversity offsets: from current challenges to harmonized metrics[END_REF], and feedback from previous offset-related experience [START_REF] Maron | Can offsets really compensate for habitat removal?
The case of the endangered red-tailed black-cockatoo[END_REF][START_REF] Pöll | Challenging the practice of biodiversity offsets: ecological restoration success evaluation of a large-scale railway project[END_REF]. Despite the importance of both the operationality and the scientific basis challenges, they are often seen as not fully compatible. Finally, comprehensiveness is a transversal challenge addressing the fact that EAM development should take into account all four key equivalence considerations, as highlighted by [START_REF] Quétier | Assessing ecological equivalence in biodiversity offset schemes: Key issues and solutions[END_REF]. We can hypothesize that comprehensiveness is an obstacle to operationality and that it is more compatible with scientific basis. The objective of this paper is to provide elements of reflection for the development of future EAMs that contribute to designing offset measures leading to NNL, by exploring two main questions: (i) Is there a common structure underlying all EAMs, and what elements of such a structure could be used as a basis when developing an EAM? (ii) What are the synergies and trade-offs in achieving operationality, scientific basis and comprehensiveness? In particular, is operationality necessarily in contradiction with the two other challenges? Is it possible to combine all three challenges in one EAM accepted by both the operational and scientific spheres? The EAMs analyzed here were chosen because they were either published in a scientific journal or had accessible guidelines that could be used to understand how they were constructed and for what purpose. Only the main EAMs were analyzed, but we are aware that variants adapted to specific cases exist and that different versions of guidelines are used simultaneously [START_REF] Duel | The habitat evaluation procedure as a tool for ecological rehabilitation of wetlands in The Netherlands[END_REF][START_REF] Tanaka | How to Assess "No Net Loss" of Habitats-A Case Study of Habitat Evaluation Procedure in Japan's Environmental Impact Assessment[END_REF]. The EAM selection was intended to give an overview of the current EAM diversity and also of commonly used EAMs. It is thus not an exhaustive sample but rather a representative one, as it covers North America, Australia and Western Europe, three main zones where offset policies are well established [START_REF] Madsen | State of biodiversity markets: offset and compensation programs worldwide[END_REF]. The sample also covers all kinds of ecosystems (terrestrial, aquatic, marine or wetlands).

Material and method

Analysis of EAMs structure

Uncertainty: how do EAMs take into account the risk of offset failure? Finally, we identified the "compensation unit" used in each EAM, which is the currency calculated for a site and then compared between impacted sites (losses of biodiversity units) and offset sites (gains of biodiversity units).

Synergies and trade-offs between the three EAM challenges

Twelve criteria were defined, covering a large range of characteristics related to how operationality, scientific basis and comprehensiveness are taken into account in EAMs. A description of these criteria and the working hypotheses underlying their choice is given in Table 2.
In our work, EAMs are considered operational when they have predefined indicators ("Indicators set up"), are rapid to implement ("Implementation rapidity"), when the data needed are easily available ("Data availability") and when "like for unlike" offset designs (exchangeability between the biodiversity impacted and that compensated) are possible ("Exchangeability"). EAMs are considered to have a scientific basis when all the indicators used to assess biodiversity are based on scientific documentation ("Biodiversity indicators"), when the metrics used are quantitative and appropriate to the biodiversity component being assessed ("Biodiversity indicator metrics"), when spatial considerations are taken into account with dedicated indicators ("Spatial considerations") and when uncertainty is taken into account based on previous feedback ("Uncertainty considerations"). Finally, EAMs are considered comprehensive when they include all key equivalence considerations ("Key equivalence considerations"), when they target species, habitats and ecosystem functions ("Biodiversity components"), when they require various types of data (from the literature, GIS, field data, etc.; "Data type") and when they evaluate biodiversity with a relevant set of indicators ("Indicators number"). Each criterion was defined by 3 or 4 modalities (see Appendix B for details). For most modalities, data could be derived from the published version of the EAMs. However, to complete certain modalities (e.g., those relating to "Implementation rapidity") we interviewed experts who either use the EAM in the field or contributed to its construction (see Appendix C; experts' names and functions are given when they agreed to be cited). When divergent answers were obtained for a given EAM, priority was given to the answer obtained from the EAM's developers, which was the case for the UMAM, CRAM, UK pilot and German Ökokonto (see Appendix C). We found some mismatches between experts' answers and the theoretical guidelines, which could be explained by differences between EAM variants or by case-by-case practices. In these cases, we decided to stick to the theoretical guidelines (see Appendix D). A score from 1 to 3 or 4 (depending on the number of modalities) was then given to each criterion, where 1 is the lowest level of challenge achievement and 4 the highest (see Appendix B). For example, an EAM that requires only very easily accessible data receives a 4 for the "Data availability" criterion. This scoring system was deliberately simple and linear so as to give all modalities a similar weight. The aim of this scoring was to highlight synergies and trade-offs between the criteria and, beyond them, between the three challenges. We expected some correlations between particular criteria to occur: for example, if large data collection ("Data type") is required, data availability may be low. Moreover, when users have to choose indicators ("Indicators set up"), they can a priori choose a combination of qualitative and quantitative, discrete or continuous metrics ("Biodiversity indicator metrics"), which would imply a correlation between these criteria. However, this remains theoretical, as in practice users could very well choose only indicators with qualitative metrics.

Data analysis

A principal component analysis (PCA) was performed on all criteria scores (see Appendix C) in order to analyze how EAMs addressed operationality, scientific basis and comprehensiveness.
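A minimal Python sketch of this analysis pipeline — PCA on the standardized criteria scores, challenge mean scores as summary variables, and the Spearman correlations described in the next paragraph — might look as follows. The original analysis was done in R with FactoMineR; this version requires NumPy, scikit-learn and SciPy, and the random score matrix merely stands in for the real scores of Appendix C.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Toy stand-in for the 13 EAMs x 12 criteria score matrix (scores 1-4).
scores = rng.integers(1, 5, size=(13, 12)).astype(float)

# PCA on standardized scores (so that all criteria weigh comparably).
z = (scores - scores.mean(axis=0)) / (scores.std(axis=0) + 1e-12)
pca = PCA(n_components=2).fit(z)
print("variance explained by axes 1-2:", pca.explained_variance_ratio_.round(2))

# Challenge mean scores (four criteria each) as a percentage of the maximum;
# in FactoMineR these would enter the PCA as supplementary variables.
for name, cols in {"Op": range(0, 4), "ScBs": range(4, 8), "Comp": range(8, 12)}.items():
    pct = 100 * scores[:, cols].mean(axis=1) / 4.0
    print(name, pct.round(0))

# Spearman rank correlations between criteria; pairs with |rho| >= 0.5 flagged.
rho, _ = spearmanr(scores)
i, j = np.where(np.triu(np.abs(rho) >= 0.5, k=1))
print("correlated criteria pairs:", list(zip(i.tolist(), j.tolist())))
```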
Mean scores were calculated for each challenge (ScoreOp, ScoreScBs and ScoreComp) as the relative mean of the scores attributed to the four criteria describing each challenge, expressed as a percentage of challenge achievement. These mean scores were added as supplementary variables in the PCA (so that they do not contribute to the construction of the PCA axes). Correlations between criteria were assessed with a nonparametric measure of rank correlation, the Spearman rank coefficient (rho), as a complement to the PCA, in order to identify oppositions and synergies between the criteria underlying the challenges. Criteria were considered correlated for |rho| ≥ 0.5 [START_REF] Freckleton | On the misuse of residuals in ecology: regression of residuals vs. multiple regression[END_REF]. The PCA also allows the identification of groups of EAMs according to the challenge they best achieve. All analyses used R software version 3.1.2 with the FactoMineR package [START_REF] Husson | Package 'FactoMineR[END_REF].

Results

EAMs general structure

The analysis of the 13 EAMs indicates that they all share a common structure for calculating losses and gains of biodiversity (Figure 1). They all consider two sites (the impacted site and the offset site) at two points in time (before and after the impact or the offset measures). One or several indicators are chosen as surrogates to qualify or quantify the targeted biodiversity components, which differ from one EAM to another depending on the context. Two main EAM types can be identified according to the range of biodiversity they target: "specialized" EAMs, using indicators for a specific ecosystem (for example Australian endemic vegetation for the Habitat Hectare method or Florida's wetlands for UMAM), and "generalist" EAMs, using general indicators adapted to a wide range of ecosystems (e.g., terrestrial ecosystems for PilotUK) (Table 3). A benchmark can be used if there is an identified reference state for the targeted biodiversity (e.g., for Habitat Hectare the benchmark is "the same vegetation type in a mature and long-undisturbed state", and for UMAM it is a "reference standard wetland" considered to be in good ecological condition). A quantitative value based on these indicators is attributed to the site before and after the impacts (to calculate biodiversity losses) or the offset (to calculate biodiversity gains) and is multiplied by the corresponding site area. This combination of biodiversity "quality" and "quantity" constitutes the "compensation unit". A majority of EAMs (8 out of 13) evaluate ecological equivalence by attributing "compensation units" to the impacted and offset sites (Table 3), allowing biodiversity losses and gains to be assessed and compared on the same basis. There are no specific rules for offsetting one compensation unit by another, only that the number of units exchanged in the offsetting process must be at least equal. The other five EAMs go one step further by using specific rules to size offset measures. This can be done by integrating temporal or uncertainty-related ratios to increase the compensatory site area (e.g., Habitat Evaluation Procedure, UMAM; Table 3), or by assessing losses and gains every year (Figure 1), between the moment the impacts occur and the moment the offset measures are considered effective, with a discount rate (Resource, Habitat and Landscape Equivalency Analyses and Habitat Evaluation Procedure).
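The shared loss-gain accounting can be illustrated with a toy calculation: a site "quality" in [0, 1] multiplied by area yields compensation units, and a yearly discounted variant mimics the Resource/Habitat Equivalency style of sizing. The quality values, time horizon and discount rate below are invented for illustration and do not correspond to any specific EAM.

```python
def units(quality, area_ha):
    """Compensation units = biodiversity 'quality' (0-1) x area (ha)."""
    return quality * area_ha

def discounted_gain_units(quality_gain_per_year, area_ha, years, rate=0.03):
    """HEA-style accounting: yearly gains discounted back to the impact year."""
    return sum(quality_gain_per_year * area_ha / (1 + rate) ** t
               for t in range(1, years + 1))

if __name__ == "__main__":
    # An impact degrades 10 ha from quality 0.8 to 0.2 (assumed values):
    loss = units(0.8, 10.0) - units(0.2, 10.0)
    print("units lost:", loss)
    # Offset restoring quality by 0.05/year over 12 years: scan for the
    # smallest area whose discounted gains cover the losses.
    for area in range(10, 40):
        if discounted_gain_units(0.05, area, 12) >= loss:
            print("offset area needed:", area, "ha")
            break
```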
In all cases, the only values calculated from real measurements of the current state of the sites are those related to the impacted site before the impact and to the offset site before the offset measures. All other values (after the impact or the offset measures) are calculated from predictions. Some EAMs provide a basis for such predictions (i.e., Resource, Habitat and Landscape Equivalency Analyses), but most of the time the user has to find a way to make the predictions as accurate as possible.

3.2. Trade-offs and synergies between the three EAM challenges

Correlations among criteria between and within challenges

The relationships between criteria and EAMs are adequately summarized by the first two PCA axes, which explain 64% of the variation. There is no clear opposition between the scores of operationality, scientific basis and comprehensiveness. Positive correlations between criteria related to the same challenge also occur. This is the case for three of the four criteria related to operationality (Data Availability ~ Indicators Setup, rho = 0.86; Indicators Setup ~ Implementation Rapidity, rho = 0.78; and Data Availability ~ Implementation Rapidity, rho = 0.54), which means that it is easy to combine these criteria in order to obtain a good level of operationality. However, there is no positive correlation between the criteria related to comprehensiveness, and there is a negative correlation between criteria related to scientific basis (Biodiversity Indicator Metrics ~ Biodiversity Indicators, rho = -0.64), implying difficulties in developing the scientific basis in every aspect.

Groups of EAMs defined by the challenge they best achieve

The PCA highlights the existence of a few groups of EAMs characterized by similar scores for a small number of criteria. Because three of the four criteria related to operationality contributed the most to axis 1, the EAMs on the right side of the PCA graph in Figure 2b can be considered the operational ones (HabHect, PilotUK, SomersetHEP, UMAM, CRAM, Ökokonto, LdClEval and FishHab). They have predefined indicators, are rapid to implement (less than 1 week, or between 1 week and 6 months), and the data used are free and quick to collect, or specific databases exist for these methods. On the left side of axis 1, a group of five EAMs (HEP, PilotBBOP, HEA, REA, LEA; Figure 2b) was defined mainly by two other criteria contributing to axis 1: BiodivIndMc (90%) and DataTp (73%) (Figure 2a). These EAMs need complex data to be implemented (data can come from the literature, GIS, simple field visits, field inventories or field monitoring and modeling) and their indicator metrics can be a combination of qualitative and quantitative (both discrete and continuous) data. The criteria contributing the most to axis 2 (Figure 2a) are Uncertainty Consideration (86%) and Exchangeability (76%) on the upper side and Spatial Consideration (68%) on the lower side. Quite surprisingly, no EAM combines spatial and uncertainty considerations well. Furthermore, EAMs have trouble making the integration of uncertainty science-based: only the Canadian Fish Habitat method (isolated at the upper extremity of axis 2) uses a ratio based on existing databases providing scientific feedback on previous offset measures (the highest score for Uncertainty Consideration) in order to adjust the offset surface areas.
A group of three EAMs (HabHect, CRAM and LdClEval) appears clearly in Figure 2b, characterized by high scores for Spatial Consideration, meaning that spatial indicators (e.g., connectivity) are taken into account in the calculation of the compensation unit. Indeed, it makes less sense to evaluate the values of impacted and compensatory sites within a particular landscape context when equivalence is assessed in a "like for unlike" perspective. Finally, no group of EAMs can be characterized by a high scientific basis, as every criterion related to scientific basis contributes to the PCA graph in a different direction (Figure 2a), meaning that high scores for this challenge are spread among EAMs.

Discussion

We analyzed the structure of existing EAMs and assessed the possible synergies and trade-offs between the criteria underlying the way EAMs address operationality, scientific basis and comprehensiveness. The studied EAMs share a common structure for evaluating site biodiversity and sizing offsets, although they handle ecological, spatial and temporal considerations and uncertainty in various ways. There is no clear trade-off in challenge achievement, but some criteria within or between challenges are negatively correlated. No EAM perfectly addresses all three challenges, and groups of EAMs were identified according to the criteria or challenge they best achieve.

EAMs general structure

We identified three main aspects of the EAMs' common structure that should be considered when developing an EAM, and we discuss how they could be improved.

Target biodiversity

All EAMs evaluate biodiversity losses and gains by combining biodiversity "quality" and area. Biodiversity "quality" is expressed in terms of three main components: species (e.g., threatened, endemic, patrimonial), habitats (e.g., protected ecosystems, wetlands, species habitats) and functionalities (e.g., connectivity, wetland functions). Only 5 EAMs out of 13 focus on ecosystem functionalities in addition to species and habitats, while scientists currently strongly encourage assessing biodiversity functionality, notably in order to better integrate "ordinary" biodiversity into offset processes (Regnery et al. 2013b). Offsetting ecosystem functionalities and "ordinary" biodiversity is also beginning to appear in offset policies: for example, the French consultative process "Grenelle de l'Environnement" (2007) specifies that "ordinary" biodiversity should be evaluated in Environmental Impact Assessments (EIA), notably for its role as ecological corridors, and be compensated for if impacted [START_REF] Quetier | No net loss of biodiversity or paper offsets? A critical review of the French no net loss policy[END_REF]. This is why at least part of the "compensation units" should be based on ecosystem functionalities. This should be done consistently with offset policies, which considerably influence both the biodiversity components targeted (e.g., the US Wetland Mitigation policy requires offsets for wetlands; in Europe, the Birds and Habitats Directives require offsets for specific bird species or habitats [START_REF]Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora[END_REF][START_REF]Directive 2009/147/EC of the European Parliament and of the Council of 30 November 2009 on the conservation of wild birds (codified version)[END_REF]) and the outcomes of offset measures (e.g., restoration of wetland functionalities, conservation of species populations).
Depending on the targeted biodiversity (either imposed by offset policies or chosen as the best surrogate for all biodiversity), the use of "specialized" or "generalist" EAMs is more or less appropriate. Specialized EAMs seem best indicated to maximize the accuracy of equivalence assessment when impacts concern a limited geographic zone composed of a single type of ecosystem. Generalist EAMs are probably more appropriate for projects impacting biodiversity over a large area including various habitat types, such as wetlands, forests, rivers, meadows, etc., in order to embrace a global view of the site's biodiversity.

Indicators

Indicators chosen as surrogates of biodiversity are at the very heart of EAMs, in the sense that they enable the calculation of the "compensation units" [START_REF] Bekessy | The biodiversity bank cannot be a lending bank[END_REF]. Even when the same type of ecosystem is targeted, the set of indicators differs from one EAM to another, implying various approaches to ecosystem evaluation. This is for example the case for UMAM.

Predictions

To assess biodiversity losses and gains, predictions have to be made, since offset measures mostly have to be sized before the project is conducted in order to obtain permits. Predictions concern the state of biodiversity after the impact (the effect of habitat destruction or fragmentation on on-site and surrounding biodiversity) and after the offset (the biodiversity trajectory and the likelihood of offset success). The fact that half of the assessment of equivalence is based on predictions means that this assessment is far from precise, especially since the accuracy of forecasting is often low. Modeling techniques (e.g., [START_REF] Meineri | Combining correlative and mechanistic habitat suitability models to improve ecological compensation[END_REF]) adapted to EAMs could greatly increase the efficiency of loss and gain assessment (Resource/Habitat Equivalency Analysis already requires the use of modeling, although of a quite simple kind). Another way to make more accurate predictions and reduce uncertainty would be for EAM users to take advantage of feedback from previous impacts or offset measures in similar habitats or for the same species or taxa [START_REF] Walker | The restoration and re-creation of species-rich lowland grassland on land formerly managed for intensive agriculture in the UK[END_REF][START_REF] Tischew | Implementation of Basic Studies in the Ecological Restoration of Surface-Mined Land[END_REF][START_REF] Tischew | Evaluating Restoration Success of Frequently Implemented Compensation Measures: Results and Demands for Control Procedures[END_REF]. This could be achieved by drawing tendencies from the data [START_REF] Specht | Data management challenges in analysis and synthesis in the ecosystem sciences[END_REF] generated individually by all EIAs for a large set of projects.

4.2. Trade-offs and synergies between the three EAM challenges: why do they exist and how could they be overcome (or not)?

Based on their average scores, the EAM challenges we identified as operationality, scientific basis and comprehensiveness are not incompatible, but still no EAM combines them all perfectly. This is due to trade-offs occurring between a few criteria within and between challenges.

Compromises tend to favor operationality

The majority of the analyzed EAMs showed high operationality scores (8 out of 13 EAMs have mean operationality scores from 64% to 85%; see Appendix E).
These more operational EAMs (HabHect, PilotUK, SomersetHEP, UMAM, CRAM, Ökokonto, LdClEval and FishHab) use a system of predefined indicators, are mostly specialized and are quick to implement. They are reproducible and easy to use but are very context-dependent. For project developers, one priority is to propose offset measures that will be accepted by decision-makers and that can be rapidly implemented at a reasonable cost [START_REF] Cuperus | Ecological compensation in Dutch highways planning[END_REF]. To this end, operational tools are needed, and EAMs with predefined indicators therefore seem more suitable, with a higher likelihood of acceptance if the assessment is science-based. Most EAMs having predefined indicators with a scoring system rely on rapidly collected and inexpensive (or free) data, and are therefore rapid to implement (UMAM, CRAM, Habitat Hectare, UK Pilot method). However, this can imply compromising on some criteria related to the other challenges, as it precludes large-scale data collection and modeling, which are elements contributing to comprehensiveness. In addition, the use of rapidly collected data implies that indicator metrics are qualitative, which leads to a lower level of scientific basis. Therefore, the less operational EAMs (HEP, HEA, REA, LEA, PilotBBOP), which better combine the two other challenges, are often used for large-scale "voluntary" offsets (BBOP 2014a, 2014b) or accidental impacts [START_REF] Roach | Policy evaluation of natural resource injuries using habitat equivalency analysis[END_REF], which should be subject to fewer temporal, financial and legislative constraints than "classic" development projects.

Heterogeneity in the integration of scientific basis

Trade-offs between criteria within a challenge especially concern scientific basis (EAMs have high scores for one or some criteria related to this challenge, but never for all of them). Depending

Improving synergies between scientific basis and comprehensiveness

There are neither trade-offs nor strong synergies between the criteria related to scientific basis and comprehensiveness. Existing knowledge could largely contribute to a better combined achievement of these challenges, in order to better assess equivalence in the design phase of offset measures. Notably, the key equivalence considerations are well identified in the literature [START_REF] Norton | Biodiversity Offsets: Two New Zealand Case Studies and an Assessment Framework[END_REF][START_REF] Bull | Biodiversity offsets in theory and practice[END_REF][START_REF] Gardner | Biodiversity Offsets and the Challenge of Achieving No Net Loss[END_REF], and science-based solutions have already been suggested for integrating delay and uncertainties into offset design [START_REF] Moilanen | How Much Compensation is Enough? A Framework for Incorporating Uncertainty and Time Discounting When Calculating Offset Ratios for Impacted Habitat[END_REF][START_REF] Laitila | A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence[END_REF][START_REF] Cochrane | Modeling with uncertain science: estimating mitigation credits from abating lead poisoning in Golden Eagles[END_REF].
Both ecological and spatial considerations should be addressed using the multiplicity of existing indicators covering a wide range of species and habitats (e.g., Andreasen; Biggs; Regnery).

Combining operationality, scientific basis and comprehensiveness

Finally, our study aimed to identify whether all challenges could be combined in one EAM accepted by both operational and scientific spheres. One issue affects all three challenges: data. Operationality relies on data availability; comprehensiveness on data diversity, which influences the accuracy of biodiversity assessment (e.g., species conservation status, Bensettiti et al. 2012); and scientific basis on data provenance (data updating is notably crucial, and all the more important as global changes modify ecosystem dynamics, Vitousek). We therefore suggest one main avenue for developing EAMs that combine the three challenges: the creation and use of databases dedicated to biodiversity offsets, gathering relevant information on the key equivalence considerations (e.g., risks of offset failure based on previous feedback) for at least the species and ecosystems frequently targeted in offset procedures. EAM implementation could then draw on a large amount of data available to users and regularly updated with recent knowledge. This would require a certain investment in both time and money, but would also make information from the scientific literature available (for example, the identification of ecological corridors based on species dispersal ability). Data interpretation remains an important aspect, and tendencies should be established (some data could, for instance, be contradictory) so that the data are used in the most efficient way. Such databases could be developed by public authorities at the regional or national level (the French government intends to create such a database gathering data from all EIAs). Moreover, some companies (Virah-Sawmy et al. 2014) own large amounts of land and therefore have the possibility of offsetting their impacts on biodiversity on their own land. For this purpose, biodiversity issues (e.g., ecosystem maps or species lists) can be identified in advance for their offset needs (e.g., French biodiversity observatories in alpine ski resorts). In this way, offset measures could be anticipated and launched before impacts occur, reducing time lags, and the location of offset sites could be made consistent with biodiversity issues, improving site integration in the landscape context.

Conclusion

All studied EAMs share a general framework for assessing ecological equivalence in which the key equivalence considerations (ecological, spatial, temporal and uncertainty-related) are taken into account in different ways, which influences EAM operationality, scientific basis and comprehensiveness. The analysis of these three "challenges" revealed that operationality tends to be favored in EAM development, while there is heterogeneity in the integration of scientific basis. No EAM is fully satisfying, as none combines all challenges perfectly.
One way of better combining operationality, scientific basis and comprehensiveness is to develop and use offset-dedicated databases providing hindsight on the local context and previous offset measures. The common structure underlying EAMs suggests that, even though some aspects could be improved, no better solution has yet been found. In developing EAMs, it might be useful to think "out of the box" and invent new structures. Finally, demonstrating ecological equivalence does not by itself guarantee that offset measures are designed so as to reach the "no net loss" objective. Issues related to what is actually done in practice, such as the long-term duration, maintenance and governance of offsets, remain of great importance.

Acknowledgments: This research was financed by the French government "CIFRE" grant for PhD students and Electricité de France (EDF).

Table 2: Description of criteria related to operationality, scientific basis and comprehensiveness, and the working hypotheses underlying criteria choices.

Operationality (Op)
- Indicators set up (IndSetup): The way indicators are defined in the method. Predefined indicators make EAMs more standardized and lead to repeatable and comparable equivalence evaluations (Quétier & Lavorel).
- Data availability (DataAv): Level of cost and time needed to collect the data required to fill in indicators. Inexpensive, rapidly collected data provide more guarantees that an EAM will be widely used than expensive, slow-to-collect data (a parallel can be drawn with river health assessment, Boulton).
- Implementation rapidity (ImpRp): Cumulative time needed to both collect data and implement the EAM. Rapid implementation notably reduces the risk of biodiversity losses related to delays in the design of offset measures (Bas).
- Exchangeability (Exchg): The EAM's capacity to allow a certain degree of exchangeability between the biodiversity impacted and that compensated ("like for like" or "like for unlike" offsets). Developers have more flexibility in designing offsets with "like for unlike" (or similar) offsets (Quétier & Lavorel; Quétier et al. 2014; Bull et al. 2015).

Scientific basis (ScBs)
- Biodiversity indicators (BiodivInd): The basis on which biodiversity indicators were set up in the EAM. The use of indicators based on defensible scientific documentation provides more guarantees that the biodiversity evaluation is rigorous (the indicator has been demonstrated to be a good surrogate of the targeted biodiversity component) and consensual (there is broad agreement within the scientific community) (McCarthy; Gonçalves).
- Biodiversity indicator metrics (BiodivIndMc): Type of metrics (qualitative, quantitative discrete or continuous) used to inform biodiversity indicators. Quantitative metrics (e.g., number of bat species, height of vegetation) give losses and gains calculations more accuracy and transparency (Noss), whereas qualitative metrics are more subject to interpretation bias and subjective judgment.
- Spatial consideration (SpCd): The way spatial considerations (the insertion of impacted or compensatory sites in the landscape) are taken into account in the method.
Measuring landscape components (connectivity, fragmentation, etc.) with appropriate indicators is essential for integrating the effect of the surrounding landscape on site biodiversity (e.g., the significance of species richness) into the comparison of losses and gains (Quétier & Lavorel; Gardner).
- Uncertainty consideration (UnCd): The way uncertainty (the probability of offset failure) is taken into account in the method. As all offsets have a chance of failing to meet expectations, uncertainty can be considered by weighting the gains calculation according to the probability of offset success (Moilanen). For this purpose, area-based offset multipliers are frequently used, but they are relevant only when based on feedback about previous offset measures (Tischew).

Comprehensiveness (Comp)
- Key equivalence considerations (EqCd): Number of key equivalence considerations (ecological, spatial, temporal, uncertainty) taken into account in the method. These four considerations have been identified in the literature as essential when calculating equivalence in order to design offsets achieving "no net loss" (Moilanen; Quétier & Lavorel; Bull; Gardner).
- Target biodiversity (TgBiodiv): Target biodiversity components evaluated in the EAM. In order to capture biodiversity complexity, losses and gains should be evaluated for as many biodiversity components as possible: species populations, ecosystems (or habitats) and functionalities (Noss; Pereira).
- Data type (DataTp): Type of data needed to fill in indicators (data from the literature, GIS, simple field visits, inventories, etc.). Using all kinds of data provides various types of information at different scales and accuracies, leading to a more comprehensive assessment of losses and gains.
- Number of indicators (NbInd): Number of indicators used to evaluate biodiversity at impacted and compensatory sites. The multidimensional nature of biodiversity makes it complicated to evaluate, and the use of a single indicator (or proxy) has been demonstrated to be insufficient (Bull). Multiple indicators are preferable to capture a maximum of biodiversity components (diversity, functionality, etc.) (Andreasen).

For each EAM, the treatment of the key equivalence considerations and the calculation of compensation units can be summarized as follows.

Habitat Evaluation Procedure (HEP): values are calculated for each year of the analysis; uncertainty: no general rule, treated on a case-by-case basis; compensation unit: Habitat Unit (HU) = HSI × habitat areal extent, where the HSI compares the observed indicator with the optimal condition.
Resource and Habitat Equivalency Analysis (REA/HEA) (NOAA 1995): target: habitat resource (e.g., a species population) or service (e.g., primary production); spatial: no general rule, treated on a case-by-case basis; temporal: the resource or service is calculated for each year of the analysis, at least for the whole impact duration and until offset effectiveness; uncertainty: no general rule, case-by-case; compensation unit: Discounted Resource/Service Acre-Year = proxy value × discount rate × site area.

Canadian method Fish Habitat (FishHab) (Minns et al. 2001): target: lacustrine habitat condition for fish productivity; spatial: no general rule, case-by-case; temporal: not taken into account in offset sizing; uncertainty: no general rule, case-by-case; compensation unit: Habitat Suitability Index (HSI) as a surrogate of fish habitat productivity, calculated from a Habitat Suitability Matrix (HSM) model.

Habitat Hectare (HabHect) (Parkes et al. 2003): target: native vegetation condition; spatial: indicators of landscape context; temporal: not taken into account in offset sizing; uncertainty: no general rule, case-by-case; compensation unit: Habitat Hectare = Habitat Score × site area.

Uniform Mitigation Assessment Method (UMAM): target: wetland integrity and functionality; spatial: indicators of landscape context and location; temporal: a multiplier sizes the offset according to the offset effectiveness delay; uncertainty: a multiplier sizes the offset according to the probability of offset success; compensation unit: Delta = mean of the three indicator categories' final scores, indicators being scored from 0 to 10 (10 is the benchmark).

Landscape Equivalency Analysis (LEA) (Bruggeman et al. 2005): target: species population; spatial: the species population is modeled for different landscape evolution scenarios; temporal: the species population is calculated for each year of the analysis; uncertainty: no general rule, case-by-case; compensation unit: Discounted Landscape Service Year.

 Habitat Evaluation Procedure (US Fish and Wildlife Service (USFWS) 1980)

The Habitat Evaluation Procedure (HEP) was developed in the late seventies in the USA by the US Fish and Wildlife Service, in order to calculate comparable Habitat Units (HUs) and use them as a basis for sizing optimal offsets. This EAM focuses on habitats. HEP stipulates that an area can contain various habitats (with measurable areal extents) and that these can have different suitabilities for the species that may occur in the area. Habitat suitability is quantified in HEP via Habitat Suitability Index models (HSIs). To calculate HSIs, the user first has to select species of interest (they can be patrimonial and endangered species, umbrella species, etc., depending on the issues at the site). Then, for each species, an HSI has to be chosen that best reflects the species' condition in its habitat (or the habitat's suitability for this species). It must be in index form:

Index value = Value of interest / Standard of comparison

so for an HSI:

HSI = Study area habitat condition / Optimum habitat condition

The "optimum habitat condition" is a benchmark found in the literature or measured in the field. Metrics for the "study area habitat condition" can be, for example, species abundance or biomass per unit area, but they must reflect the habitat's suitability for the species. The next step consists in calculating cumulative Habitat Units (HUs) for the species over the years of the evaluation (e.g., each year of the project):

Cumulative HUs = Σ_{i=1}^{p} (HSI_i × habitat areal extent_i)

where HSI_i is the species' HSI in year i and habitat areal extent_i is the area of habitat available to the species in year i.
The areal extent is calculated in different ways depending on whether the species' habitat includes only one vegetation cover type or more than one. There are three possibilities: (i) the species' habitat includes one cover type (e.g., forest); (ii) the species' habitat includes several cover types, but each one provides all of the species' requirements (i.e., shelter, food); (iii) the species' habitat includes several cover types, but each one provides only one of the species' requirements (e.g., forest/shelter and meadow/food). If the HSI value is not available for every year, the user can decompose the period of analysis into smaller periods and calculate cumulative HUs with a specific formula. Finally, Average Annual Habitat Units (AAHUs) are calculated over the evaluation period:

Average Annual Habitat Units (AAHUs) = Cumulative HUs / number of years in the period of analysis

This value is the one used in the losses and gains calculation:

Losses or Gains = AAHUs_with − AAHUs_without

where AAHUs_with is the AAHUs for the impacted or compensatory site with impact or offset, and AAHUs_without is the AAHUs for the impacted or compensatory site without impact or offset (the initial state of the area before impact and before offset). The evaluation thus takes into consideration the natural evolution of both the impacted and the compensatory site (in the absence of any impact or offset). There are two main equations to size offsets, depending on the compensation goal:
- In-kind: the HUs lost are offset for each evaluation species (the list of target species is identical to the list of negatively impacted species).
- Equal replacement: the HUs lost are offset through a gain of an equal number of HUs (the list of target species may or may not be identical to the list of negatively impacted species).

Optimum compensation area = −A × ( Σ_{i=1}^{n} Losses(i) / Σ_{i=1}^{n} Gains(i) )

where A is the size of the candidate compensation study area, i is the species index and n is the total number of identified species.
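To make the HEP arithmetic above concrete, here is a minimal Python sketch of the AAHU and offset-area calculation. It is not the official HEP software: the yearly HSI values and habitat extents are illustrative placeholders.

```python
# Minimal sketch of HEP's AAHU and offset sizing, as described above.

def aahu(hsi_by_year, extent_by_year):
    """Average Annual Habitat Units = mean over years of HSI * areal extent."""
    assert len(hsi_by_year) == len(extent_by_year)
    cumulative_hu = sum(h * a for h, a in zip(hsi_by_year, extent_by_year))
    return cumulative_hu / len(hsi_by_year)

def losses_or_gains(hsi_with, extent_with, hsi_without, extent_without):
    """Losses (negative) or gains (positive) = AAHU 'with' minus AAHU 'without'."""
    return aahu(hsi_with, extent_with) - aahu(hsi_without, extent_without)

# Example for one species over a 5-year analysis period (hypothetical values):
loss = losses_or_gains(
    hsi_with=[0.3, 0.3, 0.2, 0.2, 0.2], extent_with=[10] * 5,       # after impact
    hsi_without=[0.8, 0.8, 0.8, 0.8, 0.8], extent_without=[10] * 5,  # no impact
)
gain_per_ha = losses_or_gains(
    hsi_with=[0.2, 0.4, 0.5, 0.6, 0.7], extent_with=[1] * 5,        # after offset
    hsi_without=[0.2, 0.2, 0.2, 0.2, 0.2], extent_without=[1] * 5,   # no offset
)
# Optimum compensation area = -A * sum(losses) / sum(gains); with a 1-ha
# candidate unit, the result is directly in hectares.
area_needed = -1 * loss / gain_per_ha
print(f"losses: {loss:.2f} AAHU, gains: {gain_per_ha:.2f} AAHU/ha, "
      f"offset area: {area_needed:.1f} ha")
```

With these placeholder numbers the sketch returns an offset of 20 ha, illustrating how low per-hectare gains inflate the required compensation area.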
 Resource and Habitat Equivalency Analysis (NOAA 1995)

Resource Equivalency Analysis (REA) and Habitat Equivalency Analysis (HEA) are EAMs initially developed by the National Oceanic and Atmospheric Administration, in the United States, to size offsets for accidental impacts on a resource (REA) or a service (HEA) (e.g., an oil spill on a salt marsh). These EAMs are based on two restoration actions: a primary restoration on the impacted site, and a compensatory restoration on the compensatory site. The latter aims to offset the impacted site's interim losses. Only one proxy is needed to represent the level of resource or service lost. In REA, "resource" means a particular species population, so the proxy chosen by the user can be, for example, abundance. In HEA, "service" means a particular function of a habitat; the proxy, also chosen by the user, can be, for example, the primary productivity of a salt marsh. First, the user has to determine the benchmark, which is the level of resource or service on the impacted site before the accident occurred. Losses are then calculated according to a "recovery function" representing the evolution of the resource or service level from the accident to the benchmark on the impacted site with primary restoration. In the same way, gains are calculated according to a "maturity function" representing the evolution of the resource or service level from the beginning of the offset to the benchmark on the compensatory site. Losses and gains are calculated each year over a period that must, at minimum, last until the level of resource or service has reached the benchmark. A discount rate is used in these EAMs in order to take into consideration the relation the public has with the resource or service losses and gains. In environmental policies, this discount rate is often 3% (Commissariat Général du Développement Durable (CGDD) 2011). With the application of a discount rate, losses have an increasing value over time and, on the contrary, gains have a decreasing value over time. Losses and gains are calculated as follows (the unit is the discounted resource or service acre-year):

Losses = Σ_{n=i}^{b} (R_n × D_n) × area_1
Gains = Σ_{n=j}^{b} (R_n × D_n) × area_2

where i is the year when primary restoration starts on the impacted site (area 1), j is the year when the offset starts on the compensatory site (area 2), b is the year when the calculation is stopped (the benchmark level must have been reached), R_n is the average percentage of resource or service lost or gained compared to the benchmark in year n, and D_n is the discount factor for year n (derived from the discount rate). Ecological equivalence is achieved when losses = gains. The equation to size an offset achieving equivalence is:

Area_2 = ( Σ_{n=i}^{b} (R_n × D_n) × area_1 ) / ( Σ_{n=j}^{b} (R_n × D_n) )
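A minimal sketch of this discounted-balance logic follows. The linear recovery and maturity functions, the 3% rate and the site sizes are illustrative assumptions, not part of the method itself.

```python
# Minimal HEA-style sketch: discounted interim losses and gains.

def discounted_service_acre_years(pct_by_year, start_year, rate=0.03):
    """Sum of yearly % of service lost/gained, discounted back to year 0."""
    total = 0.0
    for offset, pct in enumerate(pct_by_year):
        year = start_year + offset
        total += pct / (1.0 + rate) ** year
    return total

# Impacted site (area_1 = 10 acres): 80% of the service is lost, then the site
# recovers linearly over 10 years after primary restoration.
losses_pct = [0.8 * (1 - t / 10) for t in range(11)]
losses = discounted_service_acre_years(losses_pct, start_year=0) * 10

# Compensatory site: restoration starts in year 2 and matures linearly to a
# 60%-of-benchmark gain per acre over 15 years, then holds for the horizon.
gains_pct = [0.6 * min(t / 15, 1.0) for t in range(30)]
gains_per_acre = discounted_service_acre_years(gains_pct, start_year=2)

area_2 = losses / gains_per_acre   # acres needed so that losses == gains
print(f"losses: {losses:.1f} DSAYs, gains/acre: {gains_per_acre:.2f}, "
      f"offset area: {area_2:.1f} acres")
```

Because gains are discounted more heavily the later they accrue, delaying the start of compensatory restoration (start_year) directly increases the offset area required.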
 Canadian method Fish Habitat (Minns et al. 2001)

In Canada, the Department of Fisheries and Oceans provides tools for managing and protecting Canada's fishery resources. Section 35 of the Fisheries Act is a general prohibition forbidding the Harmful Alteration, Disruption or Destruction (HADD) of fish habitats. If mitigation measures cannot prevent a HADD, an authorization is required and proponents are then obliged to develop a set of compensatory actions resulting in at least no net loss of fish productivity. The Fish Habitat method is an EAM for sizing offsets so that they achieve no net loss. It is mainly intended for assessing development projects in large inland lakes, and involves the use of a "Habitat Suitability Matrix" (HSM) model implemented as a software package with many features (regulatory users only use its basic elements). "The essence of the approach is the idea that the habitat preferences of individual fish species and life stages can be quantified and aggregated into habitat suitability indices [HSI] that in turn can be used as surrogate measures of fish habitat productivity" (Minns et al. 2001). To calculate HSIs, the HSM model uses pooled matrices representing the aggregate habitat preferences of many species. Species lists are identified and grouped by life stage, trophic regime and thermal preference. HSI values are generated for specific combinations of water depth, substrate and vegetation cover that can be assigned to individual habitat patches. HSI values range between 0 and 1 and represent a percentage of the benchmark (1 being the benchmark value). HSIs, as surrogates of fish productivity, are calculated for three areas:
- the area of habitat lost due to the development activity (A_Loss);
- the area modified, directly or indirectly, as a result of the development activity (A_Mod);
- the area created or modified elsewhere to compensate for the development activity (A_Com).

To achieve ecological equivalence, the result of the following equation has to be neutral (no net loss of biodiversity) or positive (net gain):

ΔP_now = [(P_Mod − P_Now) × A_Mod] − (P_Max × A_Loss) + [(P_Com − P_Now) × A_Com]

where ΔP_now is the net change in the natural productivity of fish habitat, P_Max is the maximum potential unit-area productivity rate (or productive capacity), P_Now is the present unit-area productivity rate, P_Mod is the modified unit-area productivity rate in affected areas, and P_Com is the compensation unit-area productivity rate.

 Habitat Hectare (Parkes et al. 2003)

The "Habitat Hectare" approach was first developed by Parkes et al. (2003) for the Victorian Department of Natural Resources and Environment in Australia. We use "Habitat Hectare" here to name this particular EAM, even though other EAMs are based on the same principle (e.g., the UK pilot method and the BBOP pilot method). The principle consists in multiplying a value reflecting the quality of the site by the site area. Habitat Hectare focuses on terrestrial biodiversity related to native vegetation. A site is evaluated according to several indicators, some related to site condition and others to landscape context. Each indicator is scored as a percentage of a benchmark (pre-European vegetation condition), and a predefined weight is attributed to each indicator. The site's final score, called the "Habitat Score" (HS), is calculated by summing all indicator scores, and is multiplied by the site area (in hectares). Four Habitat Scores are calculated:

HS_A: current score of the habitat that will be impacted (area 1)
HS_B: predicted score of the habitat after impacts (area 1)
HS_C: current score of the habitat proposed for offsets (area 2)
HS_D: predicted score of the habitat after offsets (area 2)

Ecological equivalence is achieved when (HS_A × area_1) − (HS_B × area_1) (losses) = (HS_D × area_2) − (HS_C × area_2) (gains). The equation to size offsets achieving equivalence is:

Area_2 = (HS_A × area_1 − HS_B × area_1) / (HS_D − HS_C)

This calculation has to be done for each impacted habitat.
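The Habitat Hectare sizing rule above reduces to a few lines of code. In this sketch the indicator weights and benchmark-relative scores are illustrative; real assessments use the method's predefined indicator list scored against the pre-European benchmark.

```python
# Minimal sketch of the Habitat Hectare offset sizing rule.

def habitat_score(indicator_scores, weights):
    """Weighted sum of indicator scores, each expressed as % of benchmark."""
    return sum(s * w for s, w in zip(indicator_scores, weights))

weights = [0.4, 0.3, 0.3]                       # predefined indicator weights
hs_a = habitat_score([0.9, 0.8, 0.7], weights)  # impacted site, before impact
hs_b = habitat_score([0.1, 0.1, 0.2], weights)  # impacted site, after impact
hs_c = habitat_score([0.3, 0.4, 0.3], weights)  # offset site, current state
hs_d = habitat_score([0.7, 0.7, 0.6], weights)  # offset site, after offset

area_1 = 5.0  # hectares impacted
area_2 = (hs_a * area_1 - hs_b * area_1) / (hs_d - hs_c)
print(f"offset area needed: {area_2:.1f} ha")
```

Here the loss of 3.4 habitat-hectares against a per-hectare gain of 0.34 yields a 10-ha offset: the lower the expected improvement on the offset site, the larger the area required.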
 Uniform Mitigation Assessment Method (State of Florida 2004)

The Uniform Mitigation Assessment Method (UMAM) is a "rapid assessment method" developed specifically for Florida's wetlands in order to assess their functionality. The user scores each indicator between 0 and 10 according to the indicator's condition (the guidance helps the user define it). An average score is calculated per category, and the average of these category scores is the site's final score, called the Delta. Four Deltas are calculated:

Δ_A: current score of the site that will be impacted (area 1)
Δ_B: predicted score of the site after impacts (area 1)
Δ_C: current score of the site proposed for offsets (area 2)
Δ_D: predicted score of the site after offsets (area 2)

The method includes two multipliers for the gains calculation:
- The T-factor reflects the time lag associated with mitigation (the period between when functions are lost at the impact site and when those functions are replaced by the mitigation), and the additional mitigation needed to account for the deferred replacement of wetland or surface-water functions. It is determined with a correspondence grid between years and scores.
- The mitigation risk is evaluated to account for the degree of uncertainty that the proposed conditions will be achieved, resulting in a reduction of the ecological value of the mitigation assessment area. The risk is scored on a scale from 1 (no or de minimis risk) to 3 (high risk), in quarter-point (0.25) increments.

Losses and gains are calculated as follows:

Losses = (Δ_A − Δ_B) × area_1
Gains = [ (Δ_D − Δ_C) / (T-factor × risk) ] × area_2

Ecological equivalence is achieved when losses = gains. The equation to size offsets achieving equivalence is:

Area_2 = (Δ_A − Δ_B) × area_1 × (T-factor × risk) / (Δ_D − Δ_C)
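A minimal sketch of the UMAM sizing rule above, with illustrative Delta scores and hypothetical T-factor and risk values, shows how the two multipliers inflate the required offset area:

```python
# Minimal sketch of UMAM offset sizing with time-lag and risk multipliers.

delta_a, delta_b = 7.0, 2.0   # impact site: before / after impact (0-10 scale)
delta_c, delta_d = 3.0, 6.0   # offset site: before / after offset
t_factor = 1.25               # time-lag multiplier from the years/score grid
risk = 1.5                    # 1 (de minimis) to 3 (high), 0.25 increments
area_1 = 4.0                  # area impacted

losses = (delta_a - delta_b) * area_1
gain_per_unit_area = (delta_d - delta_c) / (t_factor * risk)
area_2 = losses / gain_per_unit_area
print(f"losses: {losses:.1f}, offset area needed: {area_2:.1f}")
```

Without the multipliers the offset would be about 6.7 area units; with a 1.25 T-factor and 1.5 risk it rises to 12.5, which is exactly how UMAM internalizes delay and failure risk.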
 Landscape Equivalency Analysis (Bruggeman et al. 2005)

This EAM was developed to calculate ecological credits for species mitigation banks in the United States. It is elaborated on the same principles as REA and HEA. The method aims to assess a landscape conservation value (service) for a metapopulation, evaluated through two main indicators: abundance and genetic variance. These indicators are calculated for three landscape evolution scenarios. A landscape is modeled as "habitat patches [which] are distinguished by greater habitat quality than surrounding areas. Area outside of the habitat patch that allow low occupancy rates (lower habitat quality) are classify as the matrix" (Bruggeman et al. 2005). As in REA and HEA, a discount rate is used. Abundance and genetic variance are modeled for:
- the B scenario (benchmark), where there is no habitat loss;
- the M scenario (mitigation), where a conservation bank is added;
- the W scenario (withdrawal), where impact sites in the landscape require the withdrawal of credits from the mitigation bank (several choices are possible).

Ecological credits (E) for the conservation bank are calculated as follows (the unit is the discounted landscape service year). With the abundance indicator:

E = Σ_{t=i}^{x} [ 1/(1+D)^t ] × (Nm_t − Nw_t)/Nb_t

With the genetic variance indicator:

E = Σ_{t=i}^{x} [ 1/(1+D)^t ] × (Gb_t − Gw_t)/Gb_t − Σ_{t=c}^{x} [ 1/(1+D)^t ] × (Gb_t − Gm_t)/Gb_t

where i is the year when impacts occur and credits are bought from a conservation bank, x is the year when the calculation is stopped, Nb_t, Nw_t and Nm_t are the abundances calculated in scenarios B, W and M respectively, on average for year t, Gb_t, Gw_t and Gm_t are the genetic variances calculated in scenarios B, W and M respectively, on average for year t, and D is the discount rate. The credits are calculated so that "landscape configurations … provide equivalent levels of services despite changes in landscape structure that result from losing a patch or changing matrix quality" (Bruggeman et al. 2005).

 BBOP pilot method (Business and Biodiversity Offsets Programme (BBOP) 2009)

The BBOP has proposed a methodology detailed in Appendix C of the Biodiversity Offset Design Handbook. It was designed for voluntary biodiversity offsets and is based on "Habitat Hectare" principles. There are two versions of the method, one focusing on habitats and the other on species; it is recommended to use the habitat version in priority, with the species version as a complement. All terrestrial habitats can be assessed. For the habitat version, no indicators are imposed: they are chosen by the user, with guidance provided by the methodology. The 10 to 20 indicators used to assess both impacted and compensatory sites have to be chosen according to ecological, spatial (e.g., connectivity), political (e.g., protected species) and social (e.g., emblematic species) issues. First, they have to be measured in a "benchmark area", also chosen by the user for its ideal habitat condition; each indicator value found in the benchmark area is its maximum value. Each indicator is also weighted according to its importance in the habitat assessment (the sum of weights equals 100). Four Habitat Scores are calculated:

HS_A: current score of the site that will be impacted (area 1)
HS_B: predicted score of the site after impacts (area 1)
HS_C: current score of the site proposed for offsets (area 2)
HS_D: predicted score of the site after offsets (area 2)

with

HS_A = Σ_{n=1}^{x} (V_A/V_b)_n × C_n
HS_B = Σ_{n=1}^{x} (V_B/V_b)_n × C_n
HS_C = Σ_{n=1}^{x} (V_C/V_b)_n × C_n
HS_D = Σ_{n=1}^{x} [ ((V_D × L) + V_D)/V_b ]_n × C_n

where x is the number of chosen indicators; V_A, V_B, V_C and V_D are the values of the nth indicator in each situation; V_b is the benchmark value of the nth indicator; C_n is the weight of the nth indicator; and L is the percentage increase of the nth indicator due to the offset, multiplied by the offset success probability. Ecological equivalence is achieved when (HS_A × area_1) − (HS_B × area_1) (losses) = (HS_D × area_2) − (HS_C × area_2) (gains), and the offset is sized as:

Area_2 = (HS_A × area_1 − HS_B × area_1) / (HS_D − HS_C)

This calculation has to be done for each impacted habitat. For the species version, only one indicator has to be chosen, representing the species population (e.g., abundance). A benchmark value is likewise fixed, and the calculation is the same as in the habitat version.
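The BBOP habitat score is essentially a benchmark-normalized, weighted indicator sum whose predicted gains are discounted by L. The sketch below illustrates this; all indicator values, weights and the success probability are assumptions for the example.

```python
# Minimal sketch of the BBOP-style habitat score and offset sizing above.

def habitat_score(values, benchmarks, weights):
    """Sum of (value / benchmark) * weight over the chosen indicators."""
    return sum((v / b) * w for v, b, w in zip(values, benchmarks, weights))

benchmarks = [12.0, 30.0, 100.0]   # ideal values observed in the benchmark area
weights = [50, 30, 20]             # indicator weights summing to 100

hs_a = habitat_score([9.0, 24.0, 70.0], benchmarks, weights)  # before impact
hs_b = habitat_score([1.0, 3.0, 10.0], benchmarks, weights)   # after impact
hs_c = habitat_score([4.0, 10.0, 30.0], benchmarks, weights)  # offset site now

# Post-offset values: current values increased by L = expected relative
# increase x offset success probability (here 80% x 0.75).
L = 0.8 * 0.75
hs_d = habitat_score([v * (1 + L) for v in [4.0, 10.0, 30.0]],
                     benchmarks, weights)

area_1 = 3.0
area_2 = (hs_a - hs_b) * area_1 / (hs_d - hs_c)
print(f"HS_A={hs_a:.0f} HS_B={hs_b:.0f} HS_C={hs_c:.0f} HS_D={hs_d:.0f} "
      f"-> offset area {area_2:.1f} ha")
```

Folding the success probability into L is what distinguishes this sketch from the plain Habitat Hectare one: a less certain offset mechanically yields a smaller HS_D and thus a larger required area.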
Average challenge scores, calculated as the relative mean of the related criteria scores (see Appendix E), are shown with their projection in Figure 2a. However, when considering each criterion separately, negative and positive correlations appear between criteria related to different challenges or within a single challenge. As we expected, criteria related to operationality are negatively correlated with criteria related to scientific basis and to comprehensiveness. Some of these correlations are quite intuitive and confirm what we assumed (Implementation Rapidity ~ Data Type, rho = −0.74; Data Availability ~ Data Type, rho = −0.58; and Indicator Setup ~ Biodiversity Indicator Metrics, rho = −0.87): using large-scale data collection leads to low implementation rapidity and low data availability. The other correlations are less expected: the data needed to fill in indicators with qualitative metrics are more available than those needed for indicators with quantitative metrics (Data Availability ~ Biodiversity Indicator Metrics, rho = −0.83); furthermore, spatial considerations are better taken into account when equivalence is assessed in a "like for like" perspective (Exchangeability ~ Spatial Consideration, rho = −0.65). As the individual criteria within scientific basis and comprehensiveness are not correlated with each other, those challenges could be combined. Surprisingly, positive correlations also occur between criteria related to operationality and scientific basis (Implementation Rapidity ~ Biodiversity Indicators, rho = 0.66) and between operationality and comprehensiveness (Indicator Setup ~ Number of Indicators, rho = 0.64). In other terms, using scientifically based indicators does not slow down implementation, and using a set of several well-adapted indicators is easier if they have been predefined.
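For readers wishing to reproduce this kind of rank-correlation analysis, a minimal sketch follows. The per-EAM criterion scores here are illustrative stand-ins for the modality scores of Appendices B and E, not the study's actual data.

```python
# Minimal sketch: Spearman's rho between two criteria scored across 13 EAMs.
from scipy.stats import spearmanr

# Hypothetical per-EAM modality scores for two criteria (13 EAMs each):
implementation_rapidity = [4, 3, 4, 4, 3, 2, 1, 3, 4, 2, 1, 2, 3]
data_type               = [1, 2, 1, 2, 2, 3, 4, 2, 1, 3, 4, 3, 3]

rho, p_value = spearmanr(implementation_rapidity, data_type)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strongly negative rho, as reported above for ImpRp ~ DataTp, indicates
# that EAMs needing richer data types tend to be slower to implement.
```

Spearman's rho is the natural choice here since the criterion modalities are ordinal scores rather than continuous measurements.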
Fig 1: Equivalence Assessment Method (EAM) general structure. Two sites are considered: the impacted site (dark grey boxes) and the potential offset site (light grey boxes). Site values are calculated for each site (center boxes) using various indicators, and "compensation units" are obtained by multiplying these values by the site areas. Solid arrows and regular font correspond to features shared by most EAMs; dotted arrows and italics correspond to the main options for EAMs.

Fig 2a: Principal Component Analysis (PCA) variable graph. Criteria relative to operationality are in italics, criteria relative to scientific basis in regular font and criteria relative to comprehensiveness in bold. Average scores for each challenge (Operationality, Scientific Basis and Comprehensiveness) are represented with dotted arrows.

Fig 2b: Principal Component Analysis (PCA) individuals graph.

In order to evaluate how EAMs are structured, we first conducted a qualitative bibliographic study. We started from Quétier's and Lavorel's publication (2011) to describe EAM characteristics according to the four key equivalence considerations: (i) ecological: what components of biodiversity do EAMs evaluate? (ii) spatial: how do EAMs take into account the landscape context? (iii) temporal: how do EAMs take into account time lags? and (iv) uncertainty: how do EAMs take into account the risk of offset failure?

Table 1: Context of selected EAM implementation (EAM name, code and reference — structure and country where the EAM was initially implemented — offset policy in which the EAM can be implemented — type of impacts for which the EAM can be implemented).

- Habitat Evaluation Procedure (HEP) (US Fish and Wildlife Service (USFWS) 1980) — US Fish and Wildlife Service, United States — US Conservation Banking — development projects impacting terrestrial or aquatic biodiversity.
- Resource and Habitat Equivalency Analysis (REA/HEA) (NOAA 1995, 1997) — National Oceanic and Atmospheric Administration, United States — Damage Assessment and Restoration Program — accidental impacts on biodiversity.
- Canadian method Fish Habitat (FishHab) (Minns et al. 2001) — Department of Fisheries and Oceans, Canada — Canada's National Fish Habitat Compensation Program — development projects impacting lacustrine habitats.
- Habitat Hectare (HabHect) (Parkes et al. 2003) — Victorian Department of Natural Resources and Environment, Australia — BushBroker Program — projects impacting native vegetation.
- Uniform Mitigation Assessment Method (UMAM) (State of Florida 2004) — Florida Department of Environmental Protection, United States — US Wetland and Stream Mitigation Banking — development projects impacting wetlands, and wetland mitigation banks.
- Landscape Equivalency Analysis (LEA) (Bruggeman et al. 2005) — Department of Fisheries and Wildlife, Michigan State University, United States — US Conservation Banking — credits for endangered-species mitigation banks.
- BBOP pilot method (PilotBBOP) (Business and Biodiversity Offsets Programme (BBOP) 2009) — Business and Biodiversity Offsets Programme, international — every non-constraining offset policy — development projects impacting biodiversity.
- Land Clearing Evaluation (LdClEval) (Gibbons et al. 2009) — New South Wales Government, Australia — BioBanking — proposals to clear native vegetation.
- German Ökokonto (Ökokonto) (Darbi & Tausch 2010) — Baden-Württemberg Region, Germany — Nature Conservation Law — development projects impacting biodiversity.
- Californian Rapid Assessment Method (CRAM) (California Wetlands Monitoring Workgroup (CWMW) 2013) — California Wetlands Monitoring Workgroup, United States — US Wetland and Stream Mitigation Banking — development projects impacting wetlands, and wetland mitigation banks.
- Pilot method in United Kingdom (PilotUK) (Department for Environment, Food & Rural Affairs 2012) — Department for Environment, Food & Rural Affairs, England — UK Environmental Impact Assessment — development projects impacting terrestrial biodiversity.
- Somerset Habitat Evaluation Procedure (SomersetHEP) (Burrows 2014) — Somerset County Council, England — National Planning Policy Framework (NPPF) — development projects impacting terrestrial biodiversity.
 Land Clearing Evaluation (Gibbons et al. 2009)

The Land Clearing Evaluation (LCE) is the EAM behind the calculation of credits in the context of BioBanking in the state of New South Wales, Australia (Department of Environment, Climate Change and Water 2009). LCE focuses on terrestrial biodiversity related to native vegetation that is to be cleared. A site (whether a site proposed for clearance or a biobank site) is evaluated according to three values: the Regional Value (RV), the Landscape Value (LV) and the Site Value (SV). RV represents the conservation significance of the site's vegetation at the regional scale. The latter two are calculated using native-vegetation biodiversity variables, scored from 0 to 3 according to the category in which the variable's value falls relative to a benchmark (pre-European vegetation condition), with a predefined weight attributed to each variable. Each value is calculated as follows.

 Regional Value: the same equation is used for both clearing and offset sites, where z is a zone with the same vegetation type and the same condition, and R is the percentage of the vegetation type in the zth zone that remains relative to its predicted pre-European distribution.

 Landscape Value: the variables used are listed below; each is scored from 0 to 3 according to the category in which its value falls.
(1) % cover of native vegetation within a 1.75 km radius of the site (1000 ha)
(2) % cover of native vegetation within a 0.55 km radius of the site (100 ha)
(3) % cover of native vegetation within a 0.2 km radius of the site (10 ha)
(4) connectivity value
(5) total adjacent remnant area
(6) % within riparian area

LV_clearing site = Σ_v (S_v × W_v)

where S_v is the score for the vth variable (1-6) and W_v is the weighting for the vth variable (1-6).

 Site Value: the variables used (a-j) are each scored from 0 to 3 according to the category in which their value falls. The Site Value is computed per vegetation zone and summed over zones, each zone's weighted and normalized variable scores being multiplied by the zone area, where z is a zone with the same vegetation type and the same condition; S_v is the score for the vth variable (a-j); W_v is the weighting for the vth variable (a-j); A is a constant weighting given to the interaction terms (the authors used 5); k = (s_d + s_e + s_f)/3; c is the maximum score that can be obtained given the variables that occur in the benchmark for the vegetation type; and zone area is the total area of the vegetation zone in hectares.

Clearance is accepted only if the gain in each value on the offset site is higher than or equal to the loss in each value on the clearing site (i.e., if ecological equivalence is achieved):

RV_D ≥ RV_A
LV_offset site ≥ LV_clearing site
SV_D − SV_C ≥ SV_A − SV_B

where A is the current value of the site proposed for clearing, B the predicted value of the site after the proposed clearing, C the current value of the site proposed for offsets, and D the predicted value of the site proposed for offsets. The equations, metrics and variables detailed here, as well as the data that underpin them, were codified into a computer software tool to facilitate the application of LCE by users.
 German method Ökokonto (Darbi & Tausch 2010)

In Germany, mitigation modalities are settled in each Land for five environmental components: biotopes and species, water, soil, landscape, and air and climate. The general mitigation method is called Ökokonto (it is not the only mitigation method used in Germany). Here, the modalities of the Baden-Württemberg Land are detailed for the biotopes-and-species component. The method focuses on biotopes, under the assumption that species can be protected through the protection of their habitat. In Germany, a biotope is a uniform geographic unit from a vegetation typology and/or landscape point of view. The Ökokonto-Verordnung decree indexes and classifies the Land's biotopes (223 in total). Each biotope is classified according to:
- a "normal" value expressed in EcoPoints/m², corresponding to the biotope's average condition;
- a value range allowing the biotope's varying condition to be taken into account.

In fact, there are two sets of value ranges, each with a "normal" value: one called "realistic" and another elaborated to take the uncertainty of certain environmental measures into account. For example, a biotope could be characterized as follows:
- a "realistic" value range from 20 to 30 EcoPoints/m² with a "normal" value of 25 EcoPoints/m²;
- an "uncertainty" value range from 15 to 25 EcoPoints/m² with a "normal" value of 20 EcoPoints/m².

The number of EcoPoints/m² a biotope receives is determined by three criteria: its degree of "naturalness", the role it plays for endangered or patrimonial species, and its distinctiveness at the local scale. A software tool combines these three criteria to calculate the different values above for each biotope; values range from 1 to 64 EcoPoints/m². To calculate biodiversity losses and gains, four biotope values (V) are needed:

V_A: current value of the biotope that will be impacted (area 1)
V_B: predicted value of the biotope after impacts (area 1)
V_C: current value of the biotope proposed for offsets (area 2)
V_D: predicted value of the biotope after offsets (area 2)

Ecological equivalence is achieved when (V_A × area_1) − (V_B × area_1) (losses) = (V_D × area_2) − (V_C × area_2) (gains). The equation to size offsets achieving equivalence therefore takes the same form as above:

Area_2 = (V_A × area_1 − V_B × area_1) / (V_D − V_C)
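The EcoPoints balance lends itself to the same few-line sketch as the other score-times-area methods; here the biotope values are illustrative, drawn from the example ranges in the text.

```python
# Minimal sketch of the Ökokonto EcoPoints balance described above.

v_a, v_b = 25.0, 5.0    # impacted biotope: before / after impact (EcoPoints/m2)
v_c, v_d = 10.0, 20.0   # offset biotope: before / after offset (EcoPoints/m2)
area_1 = 2_000.0        # m2 impacted

losses = (v_a - v_b) * area_1     # EcoPoints lost
area_2 = losses / (v_d - v_c)     # m2 of offset needed to balance the account
print(f"losses: {losses:,.0f} EcoPoints -> offset area: {area_2:,.0f} m2")
```

Using the "uncertainty" value range instead of the "realistic" one lowers the credited gain per m² and thus increases the offset area, which is how the method builds caution into the account.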
 Pilot method in United Kingdom (Department for Environment, Food & Rural Affairs 2012)

The UK pilot EAM is based on "Habitat Hectare" principles: a site's Habitat Score is calculated as (Condition × Distinctiveness) and multiplied by the site area (in hectares). Distinctiveness includes parameters such as species richness, diversity, rarity (at local, regional, national and international scales) and the degree to which a habitat supports species rarely found in other habitats. Four Habitat Scores are calculated:

HS_A: current score of the habitat that will be impacted (area 1)
HS_B: predicted score of the habitat after impacts (area 1)
HS_C: current score of the habitat proposed for offsets (area 2)
HS_D: predicted score of the habitat after offsets (area 2)

Ecological equivalence is achieved when (HS_A × area_1) − (HS_B × area_1) (losses) = (HS_D × area_2) − (HS_C × area_2) (gains). In addition, the UK pilot EAM includes four multipliers which aim to take spatial, temporal and uncertainty dimensions into account in offset sizing:
- R1: offset probability of success;
- R2: duration before the offset becomes effective;
- R3: offset location (ecological network);
- R4: condition of hedgerows on the impacted site.

Offsets are sized with the same form of equation as in Habitat Hectare, the required area being further adjusted by the multipliers R1-R4. This calculation has to be done for each impacted habitat.

 Californian Rapid Assessment Method (California Wetlands Monitoring Workgroup (CWMW) 2013)

The Californian Rapid Assessment Method (CRAM) is a "rapid assessment method" like UMAM, but developed specifically for California's wetlands. There are two primary purposes for using CRAM: assessing the ambient condition of a population of wetlands, or assessing the condition of an individual wetland or wetland project. The wetland type must be identified following the guideline classification. Attribute scores include, for example:

score_BLC = [buffer condition × (%AA with buffer × average buffer width)^0.5]^0.5 + aquatic area abundance
score_BS = mean(plant community) + horizontal interspersion + vertical biotic structure

Four CRAM scores (CS) are calculated for Assessment Areas (AA). The AA is the portion of the wetland assessed using CRAM: it might include a small wetland in its entirety, but in most cases the wetland will be larger than the AA. Rules for delineating the AA, which must represent only one type of wetland, are given in the guidelines.

CS_A: current score of the AA that will be impacted (AA 1)
CS_B: predicted score of the AA after impacts (AA 1)
CS_C: current score of the AA proposed for offsets (AA 2)
CS_D: predicted score of the AA after offsets (AA 2)

Ecological equivalence is achieved when the CS-based losses equal the gains, and offsets are sized with the same form of equation as in Habitat Hectare. CRAM scores are comparable only between wetlands of the same type.
 Somerset Habitat Evaluation Procedure (Burrows 2014)

This EAM was adapted from the US Fish and Wildlife Service's method to be usable in the English context. It is based on the same principle: the calculation of Habitat Units (HUs), the product of a Habitat Suitability Index (the suitability of a habitat for a species) and the total area of habitat affected or required for the species. Habitats are classified into over 400 categories with an Integrated Habitat System (IHS) using hierarchical habitat codes. The IHS also provides Matrix, Formation and Land Use/Management attributes in addition to the habitat code. Each habitat category is scored on a scale from 0 (poor) to 6 (excellent) according to its capacity to support species, regardless of distinctiveness (i.e., at the broadest priority level). The Matrix score (from 0 to 6) is then added or subtracted depending on the contribution the "matrix" makes to habitat suitability; the matrix here represents elements such as scrub or single trees which can influence habitat suitability for species. Formation and Management are scored between 0 and 1, depending on their effect on the habitat, and act as multipliers (e.g., a species could require grazed grassland). All this information is gathered in a database for each habitat (ongoing). The IHS score is thus obtained by adjusting the habitat category score with the Matrix score and multiplying by the Formation and Management scores. The IHS score is finally multiplied by the Density Band (scored 1, 2 or 3 according to the occurrence of the species in the habitat). HUs on a site are calculated as follows (area in hectares):

HUs = (IHS × Density Band) × Area

The HU calculation does not necessarily have to be done for all impacted species; umbrella species should be chosen to represent a habitat. Two HU quantities are calculated across the impacted and compensatory sites:

HUs_required: HUs lost due to impacts, which have to be offset
HUs_retained/enhanced: HUs retained or enhanced through on-site or off-site offsets

HUs_required = HUs_before impact × Risks
HUs_retained/enhanced = HUs_after offset − HUs_before offset

where "Risks" covers the delivery and temporal risks of the offset measures; they are scored with specific grids provided by Defra and depend on the type of habitat. Ecological equivalence is achieved when:

HUs_required (losses) = HUs_retained/enhanced (gains)

It is considered that any impact on a species population affected by the development must be replaced by habitat enhancement or creation that is accessible to that particular population.
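A minimal sketch of the Somerset HU balance follows. The way the IHS score combines its components here is a plausible reading of the description above (habitat score adjusted by Matrix, scaled by the Formation and Management multipliers), and all scores are illustrative rather than database values.

```python
# Minimal sketch of the Somerset HEP Habitat Unit balance.

def ihs_score(habitat, matrix, formation, management):
    """Habitat category score adjusted by Matrix, scaled by Formation and
    Management multipliers (a plausible reading of the method description)."""
    return (habitat + matrix) * formation * management

def habitat_units(ihs, density_band, area_ha):
    return ihs * density_band * area_ha

# Impacted site: 3 ha of suitable habitat, moderate species density.
hu_before_impact = habitat_units(ihs_score(4, 1, 1.0, 0.8),
                                 density_band=2, area_ha=3)
risks = 1.3   # hypothetical combined delivery/temporal multiplier (Defra grids)
hu_required = hu_before_impact * risks

# Offset site: enhancement raises the IHS score and density over 5 ha.
hu_before = habitat_units(ihs_score(2, 0, 1.0, 0.8), density_band=1, area_ha=5)
hu_after = habitat_units(ihs_score(4, 1, 1.0, 1.0), density_band=2, area_ha=5)
hu_gained = hu_after - hu_before

print(f"required: {hu_required:.1f} HU, gained: {hu_gained:.1f} HU, "
      f"no net loss: {hu_gained >= hu_required}")
```

The risk multiplier sits on the losses side here, inflating the HUs that must be replaced rather than deflating the gains, which is how the method's equations above are written.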
Appendix B: Description of challenges, criteria and modalities used to characterize EAMs.

Operationality (Op)
- Indicator set up (IndSetup): IndSetup1, the user has to choose one or several indicators; IndSetup2, indicators are predefined without a scoring system; IndSetup3, indicators are predefined with a scoring system.
- Data availability (DataAv): DataAv1, data are costly in terms of both time and money; DataAv2, data are costly in terms of time (e.g., repeated data collection in the field) but not money; DataAv3, data are cost-free (e.g., open-access databases) and rapid to collect (e.g., simple indicators measured in the field); DataAv4, specific databases exist for the method (e.g., giving biodiversity units for specific habitats), so data are free and rapid to collect.
- Implementation rapidity (ImpRp): the time to implement the method is greater than 1 year (ImpRp1); between 6 months and 1 year (ImpRp2); between 1 week and 6 months (ImpRp3); less than 1 week (ImpRp4).
- Exchangeability (Exchg): Exchg1, the EAM only allows calculation for "like for like" offsets; Exchg2, the EAM allows calculation for "like for unlike" offsets when "like for like" offsets are not relevant; Exchg3, the method can be adapted to compute "like for like" offsets, or "like for unlike" offsets when "like for like" offsets are not relevant.

Scientific basis (ScBs)
- Biodiversity indicators (BiodivInd): BiodivInd1, indicators have to be chosen by users, based on examples and advice given in the EAM guideline; BiodivInd2, indicators are fixed in the method and based on expert opinion; BiodivInd3, indicators are fixed in the method and based on scientific documentation.
- Biodiversity indicator metrics (BiodivIndMc): BiodivIndMc1, the metric is qualitative; BiodivIndMc2, quantitative discrete only or combined with qualitative; BiodivIndMc3, quantitative continuous only or combined with quantitative discrete; BiodivIndMc4, a combination of the three.
- Spatial consideration (SpCd): SpCd1, spatial consideration is not taken into account in the theoretical guidelines, but is used on a case-by-case basis in practice; SpCd2, a ratio is used to adjust the surface area that will need to be offset; SpCd3, some indicators include these issues directly (e.g., connectivity indicators).
- Uncertainty consideration (UnCd): UnCd1, uncertainty is not taken into account in the theoretical guidelines, but is considered on a case-by-case basis in practice; UnCd2, a ratio is used to adjust the offset surface area (this ratio is the result of expert opinion); UnCd3, some indicators include this consideration directly (e.g., contribution to a site value); UnCd4, a ratio is used to adjust the offset surface area, based on the scientific literature or an existing database providing scientific feedback on previous restoration actions.

Questionnaire sent to EAM experts (excerpt)

Method list (extract): … Land Clearing Evaluation (Gibbons et al., 2009); Canadian method Fish Habitat (Minns et al., 2001); Other (1): ___

1 - On which biodiversity component(s) does the method focus? Please choose one answer; if your answer is not already mentioned, please specify it under "Other": Species (e.g., protected species); Natural habitat(s) (e.g., wetlands or old-growth forest); Natural habitat(s) + species (e.g., wetland with patrimonial species); Natural habitat(s) + species + ecosystem functions; Other.

2 - How was the method developed? Please choose one answer: It was developed by researchers or by a governmental organization; It was developed by consultants; It was developed by a collective group (e.g., researchers, consultants, administration…) at the regional (or state) level; It was developed by a collective group at the national (or federal) level.

3 - What is the kind of data used in the method? Field monitoring; Simple field data; Field inventories; …

9.a - If the method takes spatial considerations into account, how are they incorporated in the method? Please choose one answer: Some indicators include this consideration directly (e.g., connectivity indicators); A ratio is used to adjust the surface area that will need to be offset; …

9.b - If the method takes uncertainty considerations into account, how are they incorporated in the method? Please choose one answer: A ratio is used to adjust the surface of the offset area; this ratio is the result of a negotiation on a case-by-case basis; …

Exchangeability: No, it only allows "like for like" offsets (e.g., offsets target the species impacted); Yes, it allows "like for unlike" offsets (e.g., one species impacted can be offset with another species of the same value); When using the method, it can be adapted to compute "like for like" offsets, or "like for unlike" offsets when "like for like" offsets are not relevant.

To fill in the form for another method, please click on "Send" and then "Send another answer".

 Comments about experts' answers to the questionnaire

(1) Only one "other" EAM was filled in (BioBanking in Australia), but it relies on the same principles as Land Clearing Evaluation, since that EAM constituted the basis for the development of BioBanking in New South Wales, Australia. Experts were solicited specifically for their experience with the EAMs suggested in the list, which explains why we did not obtain many "other" answers.

The experts' contribution was most important for questions 3, 4, 5, 6, 7 and 9, because the answers required more than a bibliographic study of the EAM. Notably, to answer questions 3 to 5, an expert needs to have tested the EAM in practice. Questions 6, 7 and 9 required precision about how EAMs were designed, which is not always stated explicitly in the theoretical guidelines.

Divergences between the analysis of published documents and the experts' answers, and between different experts' answers when several were obtained for the same EAM, particularly concerned questions 8 and 9. Indeed, after dialogue with the experts, we found that there is a certain flexibility in how spatial (and uncertainty) considerations are handled in practice.

* Answers which were subject to dialogue with an expert. The main aspects on which we needed to ask the experts for precision concern the "equivalence considerations" and, as a consequence, the spatial and uncertainty considerations. Indeed, in practice it is common for ratios taking delay or risks into account to be used to adjust the offset area, but this is done as a case-by-case adaptation (in addition to the EAM baseline). (1) We did not obtain an answer for this EAM, because it is almost the same as HEA except for everything concerning the targeted biodiversity and the indicator used; we used only the theoretical guideline to attribute modalities to this EAM.

Appendix E: EAMs challenges average scores and EAMs final scores, expressed as a percentage of challenge achievement.
W S Zhao, Y Zhang, J O Klein, D Querlioz, D Ravelosona, C Chappert, J M Portal, M Bocquet, H Aziza, D Deleruyelle, C Muller

Crossbar Architecture Based on 2R Complementary Resistive Switching Memory Cell

Keywords: Crossbar, Resistive Switching, complementary cell

Emerging non-volatile memories (e.g., STT-MRAM, OxRRAM and CBRAM) based on resistive switching are under intense R&D investigation by both academia and industry. They provide performance beyond Flash memories, such as fast write/read speed, low power and good endurance (e.g., >10^12 cycles). However, the conventional access architecture based on one transistor plus one memory cell limits storage density, as the selection transistor must be large enough to supply sufficient current for the switching operation. This paper describes the design of a crossbar architecture based on a 2R complementary resistive switching memory cell. This architecture requires fewer selection transistors and minimizes the number of contacts between memory cells and CMOS control circuits. The complementary cell and parallel data sensing mitigate the impact of sneak currents in the crossbar architecture. We performed transient simulations based on two memory technologies, STT-MRAM and OxRRAM, to validate the functionality of this design, using a CMOS 65 nm design kit and memory compact models.

I. INTRODUCTION

Modern computing systems suffer from rising static power due to high leakage currents, which increase exponentially with the miniaturization of CMOS technology nodes (e.g., <90 nm) [1]. According to ITRS 2011, static power will come to dominate total power consumption in the coming years [2]. In order to relieve this power issue, emerging non-volatile memories (NVM) based on resistive switching are under intense R&D investigation by both academia and industry. Spin transfer torque magnetic random access memory (STT-MRAM) [3], Conductive-Bridge RAM (CBRAM) [4] and Oxide Resistive RAM (OxRRAM) [5] are among the most promising technologies. They promise much higher performance than Flash memory, such as fast write/read speed, low power and good endurance (e.g., >10^12 cycles). Since 2009, a number of NVM preindustrial prototypes [6-10] have been presented, and commercial products should be available soon. Even though these emerging NVM are based on different physics, they share many common features. For instance, they are two-terminal nanoscale devices; their resistance varies to represent '0' and '1'; and their memory cells are implemented in the back-end-of-line (BEOL) process [6-10], etc. They can therefore use the same access structure, 1T (transistor) + 1R (resistive memory), shown in Fig. 1a, and benefit from the existing peripheral control circuits of Dynamic RAM (DRAM). However, this structure presents some drawbacks: the transistor is normally much larger than the minimum size in order to supply sufficient current for fast memory cell switching, and there are many large interconnects between the CMOS circuits and the memory cells due to the thick metal layers (e.g., M3). These make the density of these NVM technologies lower than that of Flash memory. For example, Fig. 2b shows the 65 nm layout implementation of the conventional STT-MRAM access design. In order to obtain a fast switching speed (e.g., 10 ns), the selection transistor should be large enough to supply a high current (e.g., 100 µA).
The CMOS circuitry, rather than the magnetic tunnel junction (MTJ), ultimately sets the density of STT-MRAM [A-11]. Crossbar architectures were proposed to relax the density limitation that CMOS imposes on two-terminal resistive switching devices [A-8-11]. Interconnects between the CMOS circuits and the memory cells are then needed only on the edge of the crossbar array, and a number of memory cells share the same selection transistor (see Fig. 2). The cell area efficiency can thus be greatly improved, and the density is then defined by the NVM back-end process instead of the CMOS circuits. However, conventional crossbar architectures suffer from sneak currents and low data access speed. The former is the most critical, as data sensing can be completely disturbed by the sneak currents. Moreover, there are parasitic resistances throughout a large-scale crossbar array, which leads to numerous sensing errors [A-6-12]. These issues are difficult to surmount, and few efficient design solutions addressing them have been reported in the literature. In this paper, we present a new crossbar architecture based on complementary resistive switching memory. Combined with parallel data sensing, the impact of sneak currents and parasitic resistance can be mitigated to ensure correct data sensing. In order to validate the functionality of this design, we performed transient simulations based on two memory technologies, STT-MRAM and OxRRAM, using a CMOS 65 nm design kit and memory compact models. The rest of the paper is organized as follows. In the next section, we introduce the principles of STT-MRAM and OxRRAM. In Section III, we describe the design details of the complementary crossbar architecture. In Section IV, we show transient simulations of the crossbar architecture using the CMOS 65 nm design kit and compact models of STT-MRAM and OxRRAM. Finally, a discussion and concluding remarks are provided in Section V.

II. EMERGING RESISTIVE SWITCHING MEMORIES

In this section, we briefly introduce the principles of STT-MRAM and OxRRAM, which are studied as resistive switching memory cells in the proposed crossbar architecture.

A. STT-MRAM technology principle

Magnetic RAM (MRAM) promises stable non-volatility, fast write/read access speed, virtually infinite endurance, etc. [A-2-3]. The MRAM storage element, an MTJ nanopillar, is mainly composed of three thin films: a thin oxide barrier and two ferromagnetic (FM) layers (see Fig. 3a). As a result of the tunnel magnetoresistance (TMR) effect [A-12], the nanopillar resistance, RP or RAP, depends on the relative orientation, parallel (P) or anti-parallel (AP), of the magnetization of the two FM layers. By using crystalline MgO barriers, the TMR ratio = (RAP - RP)/RP of MTJ nanopillars can exceed 600% at room temperature [A-13-14]. This allows the state of MTJs to be detected easily by CMOS sense amplifiers [A-15]. Spin transfer torque (STT) is one of the most promising switching approaches thanks to its high power efficiency and fast writing speed [A-4-5, 16]. This switching mechanism greatly simplifies the CMOS switching circuit, as only a bidirectional current is required. Once the current through the MTJ exceeds the critical current, the MTJ switches its state (see Fig. 3b). This opens the door to building the first true universal memory with MRAM, providing both large capacity (> Gigabit) and high speed (< ns).
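To make the two-state behavior described above concrete, here is a minimal Python sketch (not from the paper) of an MTJ as a bistable resistor: the two resistance levels are related by the TMR ratio, and STT switching occurs once the applied current exceeds the critical current Ic0. All numerical values are illustrative assumptions only.

def mtj_resistances(r_p: float, tmr: float):
    """Return (R_P, R_AP) given the parallel-state resistance and TMR = (R_AP - R_P)/R_P."""
    return r_p, r_p * (1.0 + tmr)

def stt_switch(state: str, current: float, ic0: float) -> str:
    """Toggle the MTJ state when |I| exceeds Ic0; the current sign selects P->AP or AP->P."""
    if current > ic0 and state == "P":
        return "AP"
    if current < -ic0 and state == "AP":
        return "P"
    return state  # below threshold: the state is retained (non-volatility)

r_p, r_ap = mtj_resistances(r_p=2.0e3, tmr=1.2)      # 2 kOhm, TMR(0) = 120% (assumed)
state = stt_switch("P", current=120e-6, ic0=100e-6)  # a 120 uA write pulse
print(r_p, r_ap, state)                              # -> 2000.0 4400.0 AP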
Recent progress has demonstrated that perpendicular magnetic anisotropy (PMA) in CoFeB/MgO structures provides a high energy barrier E, which addresses the thermal stability issue of in-plane anisotropy while also offering the advantages of a low threshold current, high-speed operation and high TMR ratios compared with in-plane anisotropy.

B. OxRRAM technology principle

A large number of oxide-based materials exhibiting resistive switching have been reported in the literature [Waser07] [Seo04] [Kim10]. Among them, metal oxides like HfO2, Ta2O5, NiO, TiO2 or Cu2O are promising candidates due to their compatibility with CMOS processes and their high on/off resistance ratio. In its simplest form, a resistive memory element relies on a Metal/Insulator/Metal (MIM) stack (Fig. 4) that can be easily integrated into the Back-End Of Line (BEOL). The MIM structure is generally composed of an active layer, usually a non-stoichiometric dielectric. The bipolar behavior is mainly due to an asymmetrical geometry of the electrodes. In recent years, many studies have highlighted the good performance of non-stoichiometric HfOx films used as the switching layer [Lee08-IEDM] (such films are also used in reported memristor demonstrations). Besides, an additional buffer layer, also called an interface layer, such as Al2O3 or Ti, may play an important role in the reliability and in the reduction of the programming voltage [Lee08-IEDM] [Lee10]. OxRRAM technology is still in its "infancy", since the physics of resistance switching is not yet fully understood. So far, it is broadly accepted that the electromigration of oxygen vacancies plays a critical role in the resistance switching [Ahnl07]. After an initial electroforming process, the memory element may be reversibly switched between the reset (high resistance) and set (low resistance) states. The electroforming stage corresponds to a voltage-induced resistance switching from an initial very high resistance state (virgin state) to a conductive state. In the literature, unipolar, bipolar and non-polar electrical behaviors are reported. In the case of bipolar switching, addressed in this paper, bipolar voltage sweeps are required to switch the memory element (Fig. 5). Resistive switching in an OxRRAM element corresponds to an abrupt change between a High Resistance State (RHRS, or OFF state) and a Low Resistance State (RLRS, or ON state). This resistance change is achieved by applying a specific threshold voltage to the structure (i.e. VSET or VRESET).

III. COMPLEMENTARY CROSSBAR ARCHITECTURE

The cross-point architecture is shown in Fig. 6 and is composed of three parts: a memory array for data storage, bit-line and word-line drivers, and a read decoder with sense amplifiers for the read operation. The operating mechanisms are detailed in the following subsections.

A. Cell structure and operation

Every cell is composed of two resistive switching elements (2R), as shown in the inset of Fig. 6. For every cross-point, a common word line (WL) is connected to the bottom electrode (BE) of both resistive switching elements, while their top electrodes (TE) are respectively connected to the bit-line (BL) and the complementary bit-line (B̄L). Cell programming operations are performed in two phases, as follows:

• 1st Phase: For the selected word, the common WL is grounded and all the bit-lines and complementary bit-lines are biased to VDD; a current flows from the top to the bottom electrode, setting the resistive elements to the low resistance state (RLRS), as shown in Fig. 7a.
• 2nd Phase: For the selected word, the common WL is set to VDD while BL and B̄L are biased to complementary values in order to selectively reset (high resistance state, RHRS) either the top resistive element or the bottom resistive element. In this case the current flows from the bottom to the top electrode. At the end of the programming operation the two elements hold opposite resistance states.

The read operation is performed through a differential sensing of the bit-line and complementary bit-line, and is described in the sense-amplifier sub-section.

B. Array architecture

As shown in Fig. 6, the array is divided into M words per word line (e.g. 2). Each word is composed of N bits (e.g. 2). There is one driver circuit associated with each word-line and bit-line to ensure the proper biasing conditions for all modes of operation (write, read, unselected). A read decoder connects the N sense amplifiers to the selected word during the read phase.

C. Driver circuit

When dealing with bipolar resistive switching elements, one must be able to apply a bipolar voltage between the top and bottom electrodes, as well as a bidirectional current, to properly achieve the programming phases. Moreover, as described in the previous sub-section, voltage and current must remain below threshold so as not to change the resistance value of unselected cells. To achieve this set of conditions, drivers have to be connected to the bit-lines and the word-lines. Row & column decoders together with the control logic activate the drivers. The decoder and control logic are similar to those of other well-known memory circuits and are not described here. A driver is composed (Fig. 8) of one PMOS to connect a line to VDD and two NMOS to connect a line to gnd or VDD/2. It is important to note that the driver sizing is crucial to control the current and voltage levels applied to the cell, which is a mandatory step to properly set the RHRS and RLRS values.

D. Sense amplifier definition

The read operation of data stored in cross-point resistive switching memory is currently one of the major challenges in developing this approach. Indeed, sneak paths, or destructive reads with CRS elements [Tappertzhofen11], are a strong limitation to developing this type of architecture. Moreover, the resistance ratio (RHRS/RLRS) and the process variations have to be considered when designing a sensing solution. A sense amplifier offering high reliability is therefore required. Fig. 9 shows a pre-charge based sense amplifier (PCSA), which has demonstrated the best tolerance to different sources of variation [A-13], while keeping high speed and low power. In this SA, the read operation is performed in two phases. 1st Phase: The SA is first connected to the bit-lines of the selected word with SEN set to '1', and the circuit is pre-charged with PCH equal to '0'. 2nd Phase: The data stored in the 2R cell are evaluated to a logic level at the output Q as PCH is changed to '1' and WL is pulled down to '0'.

E. Sneak current mitigation

As mentioned previously, this structure is designed to mitigate the impact of sneak currents and parasitic resistance. Thanks to the complementary configuration of the memory cell, there are always the same number of unselected transistors and the same wire connection distance (see Fig. 7); this balanced structure allows the impact of parasitic resistance to be neglected during data sensing.
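As a rough illustration of the sneak-path effect discussed here and quantified in the next paragraph, below is a minimal nodal-analysis sketch in Python (not the paper's simulation). It solves the resistor network of a 4x4 passive array in the worst-case pattern (addressed cell OFF, all others ON) and compares series sensing (only the addressed bit line is driven; the others float) against parallel sensing (all bit lines held at the same potential). Assumptions: RON = 10 kOhm as in the text, ROFF/RON = 10, ideal wires (the paper's simulation additionally includes 10 Ohm wire parasitics), and an arbitrary 0.2 V read voltage.

import numpy as np

R_ON, R_OFF, V_READ = 10e3, 100e3, 0.2
N = 4
R = np.full((N, N), R_ON); R[0, 0] = R_OFF        # worst case: addressed cell is OFF
G = 1.0 / R

def read_current(parallel: bool) -> float:
    fixed = {("WL", 0): V_READ, ("BL", 0): 0.0}   # selected word/bit line
    if parallel:
        for j in range(1, N):
            fixed[("BL", j)] = 0.0                # all bit lines at the sense potential
    nodes = [("WL", i) for i in range(N)] + [("BL", j) for j in range(N)]
    free = [n for n in nodes if n not in fixed]
    idx = {n: k for k, n in enumerate(free)}
    A = np.zeros((len(free), len(free))); b = np.zeros(len(free))
    for i in range(N):
        for j in range(N):
            g, wi, bj = G[i, j], ("WL", i), ("BL", j)
            for a, c in ((wi, bj), (bj, wi)):     # KCL contributions at each free node
                if a in idx:
                    A[idx[a], idx[a]] += g
                    if c in idx: A[idx[a], idx[c]] -= g
                    else: b[idx[a]] += g * fixed[c]
    v = np.linalg.solve(A, b)
    volt = lambda n: fixed[n] if n in fixed else v[idx[n]]
    # current collected at the addressed bit line BL0
    return sum(G[i, 0] * (volt(("WL", i)) - volt(("BL", 0))) for i in range(N))

print("expected:", V_READ / R_OFF)                     # direct-path current only
print("series  :", read_current(parallel=False))       # inflated by sneak paths
print("parallel:", read_current(parallel=True))        # close to the expected value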
Parallel data sensing allows the impact of sneak currents to be mitigated, as all the bit lines are set to the same voltage potential. We performed a simple simulation of a 4x4 crossbar array and evaluated the impact of sneak currents on data sensing for different architectures. The worst case is implemented, where the cell to address is ROFF and all the other cells are RON; the sensing of this cell suffers from the largest impact of sneak currents. RON is fixed to 10 kOhm, and the parasitic resistance of each connected wire (BLs and WLs) is set to 10 Ohm. Fig. 10a shows that the current value for sensing in series (black square curve) is much higher than the expected current value (red point curve). This confirms that sneak currents can completely perturb the sensing operation if no mitigation solution is used. However, the current value for sensing in parallel (blue triangle curve) is nearly equal to the expected current value as long as ROFF/RON is lower than 10. This can be explained by the reduction of sneak currents in each memory cell (blue square curve) and by the decreasing difference between the sensing currents for ROFF and RON, as shown in Fig. 10b.

IV. VALIDATION WITH BIPOLAR OXRRAM AND STT-MRAM MEMORY CELLS

The aim of this section is to present the architecture validation with array simulations on both NVM technologies, using compact models and a CMOS 65 nm design kit. In the following, we briefly describe the compact models; transient simulations of the proposed architecture with STT-MRAM and OxRRAM are then presented.

A. Resistive Memory Cell modeling

1) STT-MRAM compact model

The static behavior of STT switching in a PMA MTJ is mainly based on the calculation of the threshold or critical current Ic0, which can be expressed by Eq. A-2 [A-17]:

Ic0 = alpha * (gamma * e / (mu_B * g)) * (mu_0 * MS * HK * V / 2) = alpha * (gamma * e / (mu_B * g)) * E    (A-2)

where E is the barrier energy (see also Eq. 1), alpha is the magnetic damping constant, gamma is the gyromagnetic ratio, e is the elementary charge, mu_B the Bohr magneton, V the volume of the free layer and kB the Boltzmann constant. The switching dynamics of STT in a PMA MTJ are presented in [A-18], and Eq. A-3 shows the dependence of the switching current Iwrite on the switching duration tau:

1/tau = [2 / (C + ln(pi^2 * xi / 4))] * [mu_B * Pref / (e * m * (1 + Pref * Pfree))] * (Iwrite - Ic0)    (A-3)

where C ~ 0.577 is Euler's constant, xi = E/(kB*T) is the activation energy in units of kB*T, and Pref, Pfree are the tunneling spin polarizations of the reference and free layers; we assume Pref = Pfree = P for this compact model, and m is the magnetic moment of the free layer. Fig. 11 shows the transient simulation of this compact model, which also verifies the agreement of the dynamic behavior between physical models and experimental measurements. We found that the switching delay is inversely proportional to the writing current, as described in Eq. A-3.

2) OxRRAM Compact Model

The proposed OxRRAM modeling approach relies on a physical model accounting for both set and reset operations in bipolar resistive switching devices. By considering the electric-field-induced migration of oxygen vacancies within the switching layer, the model continuously accounts for both set and reset operations in a single master equation in which the resistance is controlled by the diameter phi of the conduction pathways:

d(phi)/dt = nu * (phi_max - phi) * exp(-(Ea - q*Vcell)/(kB*T)) - nu * phi * exp(-(Ea + q*Vcell)/(kB*T))    (1)

In Eq. 1, nu represents the creation/destruction rate, Ea the activation energy and Vcell the cell voltage.
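To make the two compact-model relations above concrete, here is a minimal numerical sketch (not the authors' simulator models): it evaluates the STT switching delay of Eq. A-3 for a given write current, and performs explicit-Euler steps of the OxRRAM master equation (1). All parameter values below are illustrative assumptions.

import math

# --- STT-MRAM: switching delay vs write current (Eq. A-3), assumed values
alpha, P, xi = 0.027, 0.52, 60.0          # damping, spin polarization, E/(kB*T)
mu_B, e = 9.274e-24, 1.602e-19            # Bohr magneton (J/T), elementary charge (C)
m = 4.1e-17                               # free-layer magnetic moment (J/T), assumed
C = 0.5772                                # Euler's constant
I_c0 = 50e-6                              # critical current (A), assumed

def stt_delay(i_write: float) -> float:
    """Switching delay: 1/tau is proportional to (Iwrite - Ic0), per Eq. A-3."""
    prefactor = (2.0 / (C + math.log(math.pi ** 2 * xi / 4.0))) * \
                (mu_B * P / (e * m * (1.0 + P * P)))
    return 1.0 / (prefactor * (i_write - I_c0))

print(f"tau(100 uA) = {stt_delay(100e-6) * 1e9:.1f} ns")  # delay shrinks as I grows

# --- OxRRAM: explicit-Euler steps of the filament-diameter master Eq. (1)
nu = 1e8                                  # creation/destruction rate (1/s), assumed
Ea, kT, q = 0.8 * 1.602e-19, 0.026 * 1.602e-19, 1.602e-19
phi_max = 1.0                             # normalized maximum filament diameter

def euler_step(phi: float, v_cell: float, dt: float) -> float:
    grow = nu * (phi_max - phi) * math.exp(-(Ea - q * v_cell) / kT)
    shrink = nu * phi * math.exp(-(Ea + q * v_cell) / kT)
    return phi + dt * (grow - shrink)

phi = 0.01
for _ in range(10):                       # a +0.8 V SET pulse grows the filament
    phi = euler_step(phi, v_cell=0.8, dt=1e-9)
print(f"phi after 10 ns SET pulse: {phi:.3f}")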
Based on Eq. (1), the cell current can be expressed as a function of the size of the conduction pathway:

Icell = Vcell * (pi / (4*L)) * [sigma_LRS * phi^2 + sigma_HRS * (phi_max^2 - phi^2)]    (2)

where sigma_LRS and sigma_HRS represent the conductivity in the Low Resistance State and in the High Resistance State. This physical model has demonstrated its flexibility to match the switching voltages, the current levels and the switching dynamics of various technologies [NVMTS11] [START_REF] Aziza | Bipolar OxRRAM memory array reliability evaluation based on fault injection[END_REF]. This last point is key to performing a realistic circuit analysis. Accordingly, the model card has been adjusted to fit the behavior of the most aggressive components from the literature: Set/Reset voltages below 1 V [Cagli11-IEDM] and a programming time around 10 ns [Lee08-IEDM]. Table I summarizes the cell operation parameters for very short programming pulses. In order to keep the data read access speed equal to that of data programming, the pulse duration of "En_Read" (see also Fig. 6) is set to ~1.1 ns. The word address changes between two "En_Read" pulses within ~100 ps, and the data stored in this 4x4 cross-point STT-MRAM can be read word by word in ~5 ns. It is noteworthy that the sensing speed can be accelerated up to ~200 ps/word [20], which would lead to an asymmetric delay between the programming and reading operations. Nevertheless, such an asymmetric delay is nearly ubiquitous in non-volatile memories, and it may even present some advantages in terms of power and access speed, as non-volatile memories are read more frequently than they are programmed.

2) OxRRAM array simulation

To validate the efficiency of the proposed architecture with the OxRRAM technology, an array composed of 32 word-lines of 2 words of 16 bits, including line drivers and PCSAs, as shown in Fig. 11, is fully simulated. Moreover, as depicted in the inset of Fig. 11, the WL and BL resistance and capacitance are modeled through RC elements distributed all over the array. The simulation is divided into two parts: a write phase, where all the words in the array are successively written with the pattern given in Fig. 12, followed by the selective read of the top-left word in the array. The write time is set to 20 ns, with 10 ns per phase. The read time is also set to 20 ns, with 10 ns of pre-charge and 10 ns to generate a logic value at the output. It is important to note that these write and read timing values provide a strong margin with respect to the capability of the PCSA, as depicted in the simulation results given in Fig. 13. Fig. 13 gives an overview of the simulation results, with the write phase of a cell in the selected word, of a cell in an unselected word, and of a cell in an unselected bit-line, followed by a read phase. Fig. 13(a) shows the behavior of cell00: in a first phase the cell is programmed (both OxRRAM elements of the cell are set, then only the top OxRRAM element is reset).
Moreover, these simulation results validate that cell00 remains unchanged during the write phase of the second word (word01) on the same word-line (WL0), as well as during the write phase of the first word (word10) of the second word-line (WL1) sharing the same bit-lines. (Fig. 12 shows the test data pattern written to the array; the raw bit dump is omitted here.) Fig. 13(b) validates the ability of the architecture to successfully read a full word (word00) in parallel, with typically cell00 = '0' and cell01 = '1'.

V. CONCLUSION

This paper describes a generic design of a crossbar architecture based on a 2R complementary resistive switching memory cell. This architecture requires fewer selection transistors and a minimum number of contacts between memory cells and CMOS control circuits. The complementary cell and parallel data sensing mitigate the impact of sneak currents in the crossbar architecture. This generic architecture is implemented on two emerging technologies, STT-MRAM and OxRRAM. Compact models of the STT-MRAM and OxRRAM resistive elements were developed to simulate full arrays on both technologies. Simulation results of a 4x4 STT-MRAM array and a 32x32 OxRRAM array (16-bit word length) successfully validate the functionality of the proposed architecture using a CMOS 65 nm design kit.

Figure 1. Conventional access approach based on 1 transistor and 1 resistive memory cell. (a) Circuit diagram. (b) Layout implementation at 65 nm; the size of the selection transistor is about 56 F^2.

Figure 2.
The layout implementation promises the best area efficiency, where the die area per storage bit is 2F^2 and the selection transistor is shared by a number of MTJs associated with the same word (e.g. 4).

Figure 3. (a) Vertical structure of an MTJ nanopillar composed of CoFeB/MgO/CoFeB thin films. (b) Spin transfer torque switching mechanism: the MTJ state changes from parallel (P) to anti-parallel (AP) when the positive electron flow IP->AP > IC0; conversely, its state returns when the negative electron flow IAP->P > IC0.

Figure 4. OxRRAM memory element stack overview.

Figure 5. Typical I-V characteristic of a bipolar OxRRAM memory device.

Figure 6. Proposed cross-point architecture (4 x 2 words array with 2-bit word length). It includes three parts: a cross-point array of 2R cells for data storage, word-line (left side) and bit-line (bottom) drivers, and read circuits (top).

Figure 7. Operation process in the cross-point array: (a) For the selected word, WL = gnd and both bit-lines are biased to VDD (write 1st phase). (b) For the selected word, WL = VDD while the bit-lines are set to opposite values (write 2nd phase). (c) For the unselected words, both the word-lines and the bit-lines are biased to VDD/2 to ensure that unselected resistive elements do not set or reset.

Figure 8. Bit-line and word-line driver description. Two NMOS and one PMOS are connected to each bit-line and word-line to ensure correct biasing during all phases of operation (write 1st and 2nd phase, read 1st and 2nd phase).

Figure 9. Pre-Charged Sense Amplifier (PCSA) for data sensing: MPC0 and MPC1 serve to pre-charge the bit-lines to VDD; MPA0, MPA1, MNA0 and MNA1 constitute the amplifier; MNE0 and MNE1 play the role of "Enable".

Figure 10. (a) Comparison of the expected current value, the sensing current in series and the sensing current in parallel vs. the ROFF/RON ratio. (b) Parallel sensing currents addressing ROFF and RON in the crossbar array, and sneak currents in the unaddressed RON memory cells vs. the ROFF/RON ratio.

A CoFeB/MgO/CoFeB STT-MTJ compact model has recently been developed based on the physical theories and experimental measurements of perpendicular magnetic anisotropy (PMA) MTJs [A-16]. It integrates the physical models of the static, dynamic and stochastic behaviors. The major parameters are shown in Table I.

Figure 11. Transient simulation of the PMA MTJ, demonstrating the integration of the dynamic model and helping to evaluate the trade-off between CMOS die area and switching speed.

Figure 12. (a) Configuration of the small crossbar array. (b) Parallel sensing of a 4x4 cross-point array within ~5 ns (~1.2 ns/word).

Fig. 12 (a) and (b) demonstrate the mixed simulation of parallel writing/reading for this 4x4 cross-point resistive switching memory and confirm the expected operations described in Section III.

Figure 13. Simulation results with (a) the write phase of a selected cell and the behavior of unselected cells, and (b) the read phase of word00 (cell00 and cell01).

TABLE I. CELL OPERATION PARAMETERS
               VTE (V)          VBE (V)           STATE
SET (Write)    0.8 @ 10 ns      0                 OFF -> ON
RESET (Erase)  0                -0.8 V @ 10 ns    ON -> OFF
Read           < 0.4 @ 10 ns    0                 Sensing ON or OFF

B. Architecture simulation

The aim of this section is to validate the functionality of the architecture for both technologies.

1) STT-MRAM array simulation

For instance, it takes only one cycle of switching duration, ~1.1 ns, driven by the signal "EN_Write", to program a word to "0000" or "1111".
Thanks to the fast computing speed of the PMA MTJ compact model, the simulation of this 4x4 cross-point memory can be performed in ~30 minutes on a medium-performance CAD server (two Xeon CPUs: 4 cores, 12 MB cache, 2.4 GHz, and 8 GB of 1.3 GHz RAM).

TABLE I. PARAMETERS AND VARIABLES PRESENT IN THE FITTING FUNCTIONS
Parameter   Description                    Default Value
Area        MTJ surface                    65 nm x 65 nm
TMR(0)      TMR ratio with 0 V bias        120%
V           Volume of free layer           surface x 1.3 nm
R.A         Resistance-area product        10 Ohm.um^2
Vwrite      Writing voltage                1.5 V
Vread       Reading voltage                1.2 V
Jc0         Critical current density       5.7 x 10^5 A/cm^2

ACKNOWLEDGMENT

The authors wish to acknowledge the support of the French national projects NANOINNOV-SPIN, PEPS-NVCPU and ANR-MARS.
28,227
[ "774684", "738427", "833626", "20388", "18361", "177966", "174122" ]
[ "1362", "1362", "1362", "1362", "1362", "1362", "199957", "199957", "199957", "199957", "199957" ]
01745369
en
[ "info" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745369/file/TWV_ARXIV_final%285%29.pdf
Chao Zhang, Vineeth S. Varma, Samson Lasaulce, Raphaël Visoz

(L2S (CNRS-CentraleSupelec-Univ. Paris Sud), Gif-sur-Yvette, France; CRAN (CNRS-Univ. of Lorraine), Nancy, France; Orange Labs, Issy-les-Moulineaux, France. The material in this paper was presented in part at the 2015 EUSIPCO Conference [START_REF] Varma | Power modulation: Application to inter-cell interference coordination[END_REF].)

Interference Coordination via Power Domain Channel Estimation

A novel technique is proposed which enables each transmitter to acquire global channel state information (CSI) from the sole knowledge of individual received signal power measurements, making dedicated feedback or inter-transmitter signaling channels unnecessary. To make this possible, we resort to a completely new technique whose key idea is to exploit the transmit power levels as symbols to embed information, and the observed interference as a communication channel the transmitters can use to exchange coordination information. Although the proposed technique allows any kind of low-rate information to be exchanged among the transmitters, the focus here is on exchanging local CSI. The proposed procedure also comprises a phase which allows local CSI to be estimated. Once an estimate of global CSI is acquired by the transmitters, it can be used to optimize any utility function which depends on it. While algorithms which use the same type of measurements, such as the iterative water-filling algorithm (IWFA), implement the sequential best-response dynamics (BRD) applied to individual utilities, here, thanks to the availability of global CSI, the BRD can be applied to the sum-utility. Extensive numerical results show that significant gains can be obtained, and this without requiring any additional online signaling.

I. INTRODUCTION

Interference networks are wireless networks which are largely distributed decision-wise or information-wise. In the case of distributed power allocation over interference networks with multiple bands, the iterative water-filling algorithm (IWFA) is considered to be one of the well-known state-of-the-art distributed techniques [START_REF] Yu | Distributed multiuser power control for digital subscriber lines[END_REF] [3] [START_REF] Mertikopoulos | Distributed learning policies for power allocation in multiple access channels[END_REF]. IWFA-like distributed algorithms have at least two attractive features: they only rely on local knowledge, e.g., the individual signal-to-interference plus noise ratio (SINR), making them distributed information-wise, and their computational complexity is typically low. On the other hand, one drawback of IWFA and many other distributed iterative and learning algorithms (see e.g., [START_REF] Rose | Learning equilibria with partial information in decentralized wireless networks[END_REF] [START_REF] Lasaulce | Game Theory and Learning for Wireless Networks: Fundamentals and Applications[END_REF]) is that convergence is not always ensured [START_REF] Mertikopoulos | Distributed learning policies for power allocation in multiple access channels[END_REF] and, when it does converge, it leads to a Nash point which is globally inefficient. One of the key messages of the present paper is that it is possible to exploit the available feedback signal more efficiently than IWFA-like distributed algorithms do.
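For reference, here is a minimal Python sketch of the sequential IWFA baseline mentioned above (a standard textbook construction, not code from the paper): each transmitter, in a round-robin manner, water-fills its power budget over S bands against the interference-plus-noise it currently observes, until the process settles at a Nash point. All numerical values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
K, S, P_MAX, SIGMA2 = 3, 4, 1.0, 0.1
g = rng.exponential(1.0, size=(K, K, S))       # g[i, j, s]: Tx i -> Rx j, band s

def waterfill(noise_over_gain: np.ndarray, budget: float) -> np.ndarray:
    """Classical water-filling: p_s = max(0, mu - n_s) with sum(p) = budget."""
    lo, hi = 0.0, budget + noise_over_gain.max()
    for _ in range(60):                        # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - noise_over_gain)
        lo, hi = (mu, hi) if p.sum() < budget else (lo, mu)
    return np.maximum(0.0, 0.5 * (lo + hi) - noise_over_gain)

p = np.full((K, S), P_MAX / S)
for _ in range(20):                            # exploration phase: best-response rounds
    for i in range(K):                         # Tx i best-responds to the others
        interf = SIGMA2 + sum(g[j, i] * p[j] for j in range(K) if j != i)
        p[i] = waterfill(interf / g[i, i], P_MAX)

rates = [np.log2(1 + g[i, i] * p[i] /
         (SIGMA2 + sum(g[j, i] * p[j] for j in range(K) if j != i))).sum()
         for i in range(K)]
print("per-user rates at the Nash point:", np.round(rates, 2))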
In the exploration phase (see the note at the end of this paragraph), instead of using local observations (namely, the individual feedback) to allow the transmitters to converge to a Nash point, one can use them to acquire global channel state information (CSI). This allows coordination to be implemented and, more precisely, global performance criteria or network utilities to be optimized during the exploitation phase. As for complexity, it has to be managed by a proper choice of the network utility function to be maximized. To obtain global CSI, one of the key ideas of this paper is to exploit the transmit power levels as information symbols and to exploit the observed interference to decode these information symbols. In the power control and resource allocation literature, there exist papers where the observation of interference is exploited to optimize a given performance criterion. In this respect, an excellent monograph on power control is [START_REF] Chiang | Power control in wireless cellular networks[END_REF]. Very relevant references include [START_REF] Stańczak | Distributed utility-based power control: Objectives and algorithms[END_REF] and [START_REF] Schreck | Compressive rate estimation with applications to device-to-device communications[END_REF]. In [START_REF] Stańczak | Distributed utility-based power control: Objectives and algorithms[END_REF], optimal power control for a reversed network (receivers can transmit) is designed, in which the receiver uses the interference to estimate the cross channel, assuming perfect exchange of information between the transmitters. In [START_REF] Schreck | Compressive rate estimation with applications to device-to-device communications[END_REF], the authors estimate local CSI from the received signal, but in the signal domain and in a centralized setting. To the best of the authors' knowledge, there is no paper where the interference measurement is exploited as a communication channel the transmitters can utilize to exchange information or local CSI (namely, the channel gains of the links which arrive at a given receiver), as is the case under investigation. In fact, we provide a complete estimation procedure which relies on the sole knowledge of the individual received signal strength indicator (RSSI). The proposed approach is somewhat related to the Shannon-theoretic work on coordination available in [START_REF] Larrousse | Coded power control: Performance analysis[END_REF] [START_REF] Larrousse | Coordination in distributed networks via coded actions with application to power control[END_REF], which concerns two-user interference channels when one master transmitter knows the future realizations of the global channel state. (Note: IWFA operates over a period which is less than the channel coherence time, and it does so in two steps: an exploration phase, during which the transmitters update their power allocation vectors in a round-robin manner, and an exploitation phase, during which the transmitters keep their power vectors constant at the values obtained at the end of the exploration phase. As for IWFA, unless mentioned otherwise, we will assume the number of time-slots of the exploitation phase to be much larger than that of the exploration phase, making the impact of the exploration phase on the average performance negligible.)
It is essential to insist on the fact that the purpose of the proposed estimation scheme is not to compete with conventional estimation schemes such as [START_REF] Caire | Multiuser MIMO achievable rates with downlink training and channel state feedback[END_REF] (which are performed in the signal domain) but, rather, to evaluate the performance of an estimation scheme that relies solely on information available in the power domain. Indeed, one of the key results of the paper is to prove that global CSI (without phase information) can be acquired from the sole knowledge of a given feedback, namely the SINR or RSSI feedback. The purpose of such feedback is generally to adjust the power control vector or matrix but, to our knowledge, it has not been shown that it also allows global CSI to be recovered and, additionally, at every transmitter. This sharply contrasts with conventional channel estimation techniques, which operate in the signal domain and use a dedicated channel for estimation. The main contributions and novelty of this work are as follows:

• We introduce the important and novel idea of communication in the power domain, i.e., encoding the message onto the transmit power and decoding it by observing the received signal strength. This can in fact be used to exchange any kind of low-rate information, not only CSI.

• This allows interfering transmitters to exchange information without requiring the presence of dedicated signaling channels (like direct inter-transmitter communication), which may be unavailable in real systems (e.g., in conventional WiFi systems or heterogeneous networks).

• Normal (say, high-rate) communication can continue even during the proposed learning phase with a sub-optimal power control, i.e., communication during the learning time in the proposed scheme is similar to communication during the convergence time of algorithms like IWFA.

• We propose a way to both learn and exchange local CSI. Global CSI is acquired at every transmitter by observing the RSSI feedback.

• The proposed technique accounts for the presence of various noise sources which are non-standard and affect the RSSI measurements (the corresponding modeling is provided in Sec. II). By contrast, apart from a very small fraction of works (such as [START_REF] Mertikopoulos | Distributed learning policies for power allocation in multiple access channels[END_REF] [13] [START_REF] Coucheney | Distributed optimization in multi-user MIMO systems with imperfect and delayed information[END_REF]), IWFA-like algorithms assume noiseless measurements.

• We conduct a detailed performance analysis to assess the benefits of the proposed approach for the exploitation phase, which aims at optimizing the sum-rate or the sum-energy-efficiency. As (imperfect) global CSI is available, globally efficient solutions become attainable.

The proposed work can be extended in many respects; the main extensions are marked as (*).

II. PROBLEM STATEMENT AND PROPOSED TECHNIQUE: GENERAL DESCRIPTION

Channel and communication model: The system under consideration comprises K >= 2 pairs of interfering transmitters and receivers; each transmitter-receiver pair will be referred to as a user. Our technique directly applies to the multi-band case, and this is done in the numerical section; in particular, we assess the performance gain which can be obtained with respect to the IWFA. However, for the sake of clarity and ease of exposition, we focus on the single-band case and explain, at the end of Sec.
IV, the modifications required to treat the multi-band case. From this point on, we will therefore assume the single-band case unless otherwise stated. In the setup under study, the quantities of interest for a transmitter to control its power are given by the channel gains. The channel gain of the link between Transmitter i in {1, ..., K} and Receiver j in {1, ..., K} is denoted by g_ij = |h_ij|^2, where h_ij may typically be the realization of a complex Gaussian random variable if Rayleigh fading is considered. In several places in this paper we will use the K x K channel matrix G, whose entries are given by the channel gains g_ij, with i and j respectively representing the row and column indices of G. Each channel gain is assumed to obey a classical block-fading variation law. More precisely, channel gains are assumed to be constant over each transmitted data frame. A frame comprises T_I + T_II + T_III consecutive time-slots, where T_m (a positive integer), m in {I, II, III}, corresponds to the number of time-slots of Phase m of the proposed procedure; these phases are described further on. Transmitter i, i in {1, ..., K}, can update its power from time-slot to time-slot. The corresponding power level is denoted by p_i and is subject to the power limitation 0 <= p_i <= P_max. The K-dimensional column vector formed by the transmit power levels is denoted by p = (p_1, ..., p_K)^T, T standing for the transpose operator. Feedback signal model: We assume the existence of a feedback mechanism which provides each transmitter with an image, or noisy version, of the power received at its intended receiver for each time-slot. The power at Receiver i on time-slot t is expressed as

omega_i(t) = g_ii p_i(t) + sigma^2 + sum_{j != i} g_ji p_j(t)    (1)

where sigma^2 is the receive noise variance and p_i(t) the power of Transmitter i on time-slot t. We assume that the following procedure is followed by each transmitter-receiver pair. Receiver i measures the received signal (RS) power omega_i(t) at each time-slot and quantizes it with N bits (the RS power quantizer is denoted by Q_RS); it then sends the quantized RS power omega-hat_i(t) as feedback to Transmitter i through a noisy feedback channel. After quantization, we assume that for all i in {1, ..., K}, omega-hat_i(t) belongs to W = {w_1, w_2, ..., w_M}, with 0 <= w_1 < w_2 < ... < w_M and M = 2^N. Transmission over the feedback channel and the dequantization operation are represented by a discrete memoryless channel (DMC) whose conditional probability is denoted by Gamma. The distorted and noisy version of omega_i(t) which is available at Transmitter i is denoted by omega-tilde_i(t), also in W; the quantity omega-tilde_i(t) will be referred to as the received signal strength indicator (RSSI). With these notations, the probability that Transmitter i decodes the symbol w_l, given that Receiver i sent the quantized RS power w_k, equals Gamma(w_l | w_k). In contrast with the vast majority of works on power control, and especially those related to the IWFA, we assume the feedback channel to be noisy. Note also that these papers typically assume SINR feedback, whereas the RSSI is considered here.
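To fix ideas, here is a minimal sketch of the feedback chain just described (illustrative assumptions throughout): Receiver i measures the RS power of Eq. (1), quantizes it on N bits (Q_RS is taken here as a uniform grid, an assumption), and the label crosses a DMC Gamma, modeled below as a symmetric label-error channel with error probability eps.

import numpy as np

rng = np.random.default_rng(1)
K, N, SIGMA2, EPS = 2, 4, 0.1, 0.05
M = 2 ** N
W = np.linspace(0.0, 4.0, M)                  # representatives w_1 < ... < w_M (assumed)

def rs_power(G: np.ndarray, p: np.ndarray, i: int) -> float:
    """Eq. (1): omega_i = g_ii p_i + sigma^2 + sum_{j != i} g_ji p_j."""
    return G[i, i] * p[i] + SIGMA2 + sum(G[j, i] * p[j] for j in range(K) if j != i)

def q_rs(omega: float) -> int:
    """Quantize the RS power to the nearest representative in W."""
    return int(np.argmin(np.abs(W - omega)))

def dmc(k: int) -> int:
    """Noisy feedback channel: correct label w.p. 1 - eps, else a uniform error."""
    return int(rng.integers(0, M)) if rng.random() < EPS else k

G = rng.exponential(1.0, size=(K, K))         # G[j, i]: gain from Tx j to Rx i
p = np.array([0.8, 0.5])
omega = rs_power(G, p, i=0)                   # measured at Receiver 0
rssi = W[dmc(q_rs(omega))]                    # RSSI available at Transmitter 0
print(f"omega = {omega:.3f}, RSSI = {rssi:.3f}")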
The reasons for considering the RSSI rather than the SINR are fourfold: 1) if Transmitter i knows p_i(t) and g_ii(t) and has SINR feedback, this amounts to knowing its RS power, since omega_i(t) = g_ii p_i(t) (1 + 1/SINR_i(t)), where SINR_i(t) = g_ii p_i(t) / (sigma^2 + sum_{j != i} g_ji p_j(t)); 2) assuming RS power feedback is very relevant in practice, since some existing wireless systems exploit the RSSI feedback signal (see e.g., [START_REF] Sesia | The UMTS Long Term Evolution: From Theory to Practice[END_REF]); 3) the SINR is subject to higher fluctuations than the RS power, which makes SINR feedback less robust to distortion and noise effects, and overall less reliable; 4) as a crucial technical point, it can be checked that using the SINR as the transmitter observation leads to complex estimators [START_REF] Lasaulce | Technique de coordination d'émetteurs radio fondé sur le codage des niveaux de puissance d'émission[END_REF], while the case of RS power observations leads to a simple and very efficient estimation procedure, as shown further in this paper. Note that, here, it is assumed that the RS power is quantized and then transmitted through a DMC, which is a reasonable and common model for wireless communications. Another possible model for the feedback might consist in assuming that the receiver directly sends the received signal power over an AWGN channel; depending on how the feedback channel gain fluctuations are accounted for, the latter model might be more relevant and would deserve to be explored as well (*). Proposed technique, general description: The general power control problem of interest consists in finding, for each realization of the channel gain matrix G, a power vector which maximizes a network utility of the form u(p; G). For this purpose, each transmitter is assumed to have access to the realizations of its RSSI over a frame. One of the key ideas of this paper is to exploit the transmit power levels as information symbols and to exploit the observed interference (which is observed through the RSSI or SINR feedback) for inter-transmitter communication. The corresponding implicit communication channel is exploited to acquire global CSI knowledge, namely the matrix G, and therefore to perform operations such as the maximization of u(p; G). The process of achieving the desired power control vector is divided into three phases (see Fig. 1). In Phase I, a sequence of power levels is used for training: for every time-slot of Phase I, each transmitter transmits at a prescribed power level which is assumed to be known to all the transmitters. One of the key observations we make in this paper is that, when the channel gains are constant over several time-slots, it is possible to recover local CSI from the RSSI or SINR; this means that, as far as power control is concerned, there is no need for additional signaling from the receiver for local CSI acquisition by the transmitter. Thus, the sequences of power levels in Phase I can be seen as training sequences. Technically, a difference between classical training-based estimation and Phase I is that estimation is performed in the power domain and over several time-slots, and not in the symbol domain (the symbol duration is typically much smaller than the duration of a time-slot) within a single time-slot. Also note that working in the symbol domain would allow one to have access to h_ij, but the phase information on the channel coefficients is irrelevant for the purpose of maximizing a utility function of the form u(p; G).
Another technical difference stems from the fact that the feedback noise is non-standard, which is commented on a little further. Denoting by (p_i(1), ..., p_i(T_I)), i in {1, ..., K}, the sequence of training power levels used by Transmitter i, the following training matrix can be defined:

P_I = [ p_1(1) ... p_K(1) ; ... ; p_1(T_I) ... p_K(T_I) ]    (2)

With the above notations, the noiseless RS power vector omega_i = (omega_i(1), ..., omega_i(T_I))^T can be expressed as:

omega_i = P_I g_i + sigma^2 1    (3)

where g_i = (g_1i, ..., g_Ki)^T and 1 = (1, 1, ..., 1)^T. To estimate the local CSI g_i from the sole knowledge of the noisy RS power vector, or RSSI, omega-tilde_i, we propose to use the least-squares (LS) estimator in the power domain (PD), abbreviated as LSPD:

g-tilde_i^LSPD = (P_I^T P_I)^(-1) P_I^T (omega-tilde_i - sigma^2 1)    (4)

where sigma^2 is assumed to be known by the transmitters, since it can always be estimated through conventional estimation procedures (see e.g., [START_REF] Lasaulce | Training-based channel estimation and de-noising for the UMTS TDD mode[END_REF]). Using the LSPD estimate for local CSI therefore assumes that the training matrix P_I is chosen to be pseudo-invertible. A necessary condition for this is that the number of time-slots used for Phase I verifies T_I >= K. Using a diagonal training matrix allows this condition to be met and simplifies the estimation procedure. It is known that the LSPD estimate may coincide with the maximum likelihood (ML) estimate. This holds, for instance, for an observation model of the form omega-tilde_i = omega_i + z, where z is an independent additive white Gaussian noise. In the setup under investigation, z represents both the effects of quantization and of transmission errors over the feedback channel, and it meets neither the independence nor the Gaussian assumption. However, we have identified a simple sufficient condition under which the LSPD estimate maximizes the likelihood P(omega-tilde_i | g_i). This is the purpose of the next proposition.

Proposition III.1. Denote by G_i^ML the set of ML estimates of g_i; then we have

(i) G_i^ML = arg max over g_i of the product over t = 1, ..., T_I of Gamma( omega-tilde_i(t) | Q_RS( e_t^T P_I g_i + sigma^2 ) );

(ii) g-tilde_i^LSPD belongs to G_i^ML when, for all l, arg max_k Gamma(w_l | w_k) = l;

where e_t is a column vector whose entries are zeros except for the t-th entry, which equals 1. Proof. See Appendix A. The sufficient condition corresponding to (ii) is clearly met in classical practical scenarios. Indeed, as soon as the probability of correctly decoding the quantized RS power symbol sent by the receiver exceeds 50% at the transmitter, the above condition is verified. It has to be noted that G_i^ML is not a singleton set in general, which indicates that, even if the LSPD estimate maximizes the likelihood, the set G_i^ML will typically comprise a solution which performs better, e.g., in terms of mean square error. If some statistical knowledge of the channel gains is available, it is possible to further improve the performance of the channel estimate. Indeed, when the probability distribution of g_i is known, it becomes possible (up to possible complexity limitations) to minimize the mean square error E||g-hat_i - g_i||^2. The following proposition provides the expression of the minimum mean square error (MMSE) estimate in the power domain (PD); before stating it, we give a brief numerical sketch of the LSPD estimator.
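The sketch below (illustrative assumptions only) simulates Phase I with a diagonal training matrix and applies the LSPD estimator of Eq. (4); for brevity, the quantization and DMC effects are folded into a small additive perturbation on the RSSI rather than modeled explicitly.

import numpy as np

rng = np.random.default_rng(2)
K, SIGMA2, P_MAX = 3, 0.1, 1.0
G = rng.exponential(1.0, size=(K, K))         # G[j, i]: gain from Tx j to Rx i

P_I = P_MAX * np.eye(K)                       # T_I = K time-slots, diagonal training
ones = np.ones(K)

g_hat = np.zeros((K, K))
for i in range(K):
    g_i = G[:, i]                             # local CSI of Receiver i
    omega = P_I @ g_i + SIGMA2 * ones         # noiseless RS powers, Eq. (3)
    rssi = omega + 0.01 * rng.standard_normal(K)   # proxy for quantization + DMC noise
    # LSPD, Eq. (4); with diagonal P_I this reduces to (rssi - sigma^2) / P_max
    g_hat[:, i] = np.linalg.solve(P_I.T @ P_I, P_I.T @ (rssi - SIGMA2 * ones))

print("estimation SNR (dB):",
      10 * np.log10(np.sum(G ** 2) / np.sum((G - g_hat) ** 2)))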
Proposition III.2. Assume that, for all i in {1, ..., K}, omega-hat_i and omega-tilde_i belong to the set Omega = {w_1, ..., w_(M^T_I)}, where w_1 = (w_1, w_1, ..., w_1)^T, w_2 = (w_1, w_1, ..., w_2)^T, ..., w_(M^T_I) = (w_M, w_M, ..., w_M)^T (namely, the vectors are ordered according to the lexicographic order and have T_I elements each). Define G_m as

G_m := { x in R_+^K : Q_RS( P_I x + sigma^2 1 ) = w_m }.    (5)

Then the MMSE estimator in the power domain is expressed as:

g-tilde_i^MMSEPD = [ sum over m = 1, ..., M^T_I of ( product over t = 1, ..., T_I of Gamma(omega-tilde_i(t) | w_m(t)) ) * integral over G_m of phi_i(g_i) g_i dg_1i ... dg_Ki ] / [ sum over m = 1, ..., M^T_I of ( product over t = 1, ..., T_I of Gamma(omega-tilde_i(t) | w_m(t)) ) * integral over G_m of phi_i(g_i) dg_1i ... dg_Ki ]    (6)

where phi_i represents the probability density function (p.d.f.) of g_i and w_m(t) is the t-th element of w_m. Proof. See Appendix B.

In the simulation section (Sec. V), we compare the LSPD and MMSEPD performance in terms of estimation SNR, sum-rate, and sum-energy-efficiency. While the MMSEPD estimate may provide a quite significant gain in terms of MSE over the LSPD estimate, it also has a much higher computational cost. Simulations reported in Sec. V will exhibit conditions under which choosing the LSPD solution involves only a marginal loss w.r.t. the MMSEPD solution, e.g., when the performance is measured in terms of sum-rate. Therefore, the choice of the estimator can be made based on the computation capability, the choice of utility for the system under consideration, or the required number of time-slots (MMSEPD allows for a number of time-slots which is less than K, whereas this is not possible for LSPD). Note that some refinements might be brought to the proposed estimator, e.g., by using a low-rank approximation of the channel vector (see e.g., [START_REF] Lasaulce | Training-based channel estimation and de-noising for the UMTS-TDD mode[END_REF]), which is particularly relevant if the channel possesses some sparseness.

IV. PHASE II: LOCAL CSI EXCHANGE IN THE POWER DOMAIN

Phase II comprises T_II time-slots. The aim of Phase II is to allow Transmitter i, i in {1, ..., K}, to exchange its knowledge about local CSI with the other transmitters; the corresponding estimate will be merely denoted by g-tilde_i = (g-tilde_1i, ..., g-tilde_Ki)^T, knowing that it can refer either to the LSPD or to the MMSEPD estimate. The proposed procedure is as follows, and is also summarized in Fig. 2. Transmitter i quantizes the information g-tilde_i through a channel gain quantizer called Q_i^II and maps the obtained bits (through a modulator) onto the sequence of power levels p_i^II = (p_i(T_I + 1), ..., p_i(T_I + T_II))^T. From the RSSI observations omega-tilde_j^II = (omega-tilde_j(T_I + 1), ..., omega-tilde_j(T_I + T_II))^T, Transmitter j (j != i) can estimate (through a decoder) the power levels used by Transmitter i.

Fig. 2: Overall processing chain for the CSI exchange: g_i -(Phase I)-> g-tilde_i -(quantizer)-> Q_i^II(g-tilde_i) -(modulator)-> p_i^II -(Eq. (1))-> omega-tilde_j^II -(decoder, for all j != i)-> p-tilde_i^II -(dequantizer & demodulator)-> g-tilde_i^j.

To facilitate the corresponding operations, we assume that the power levels used in Phase II have to lie in the reduced set P = {P_1, ..., P_L}, with P_l in [0, P_max] for all l in {1, ..., L}. The estimate Transmitter j has of the channel vector g_i will be denoted by g-tilde_i^j = (g-tilde_1i^j, ..., g-tilde_Ki^j)^T. The corresponding channel matrix estimate is denoted by G-tilde^j. In what follows, we describe the proposed schemes for the three operations required to exchange local CSI, namely quantization, power modulation, and decoding.
The situation where transmitters have different estimates of the same channel is referred to as a distributed CSI scenario in [START_REF] Kerret | Degrees of freedom of the network MIMO channel with distributed CSI[END_REF]. Assessing analytically the impact of distributed CSI on the sum-rate or on the sum-energy-efficiency is beyond the scope of this paper but constitutes a very relevant extension of it (*); only simulations accounting for the distributed CSI effect will be provided here. It might be noticed that the communication scenario in Phase II is similar to the X-channel scenario, in the sense that each transmitter wants to inform the other transmitters (which play the role of receivers) about its local CSI, and this is done simultaneously. All the available results on the X-channel exploit the channel structure (e.g., the phase information) to improve performance (e.g., by interference alignment [START_REF] Maddah-Ali | Communication Over MIMO X Channels: Interference Alignment, Decomposition, and Performance Analysis[END_REF] or filter design). Therefore, knowing how to exploit the X-channel scenario in the setup considered in this paper (which is in part characterized by the power domain operation) appears to be a relevant extension (*). Channel gain quantization operation Q_i^II: The first step in Phase II is for each of the transmitters to quantize the K-dimensional vector g-tilde_i. For simplicity, we assume that each element of the real K-dimensional vector g-tilde_i is quantized by a scalar quantizer into a label of N_II bits. This assumption is motivated by low complexity, but also by the fact that the components of g-tilde_i are independent in the most relevant scenarios of interest. For instance, if local CSI is very well estimated, the estimated channel gains are close to the actual channel gains, which are typically independent in practice. In the general case of an arbitrary estimation noise level, the components of g-tilde_i will be independent when the training matrix P_I is chosen to be diagonal, which is a case of high interest and is motivated further in Sec. V. Under channel gain (quasi-)independence, vector quantization would bring (almost) no performance improvement. The scalar quantizer used by Transmitter i to quantize g-tilde_ji is denoted by Q_ji^II. Finding the best quantizer in terms of ultimate network utility (e.g., in terms of sum-rate or sum-energy-efficiency) does not appear to be straightforward (*). We present two possible quantization schemes in this section. A possible, but generally sub-optimal, approach is to determine a quantizer which minimizes distortion. The advantage of such an approach is that it is possible to express the quantizer, and it leads to a scheme which is independent of the network utility; this may be an advantage when the utility is unknown or changing. A possible choice for the quantizer Q_i^II is the conventional version of the Lloyd-Max algorithm (LMA) [START_REF] Lloyd | Least squares quantization in PCM[END_REF]. However, this algorithm assumes perfect knowledge of the information source to be quantized (here, this would amount to assuming the channel estimate to be noiseless) and no noise between the quantizer and the dequantizer (here, this would amount to assuming perfect knowledge of the RS power).
The authors of [START_REF] Djeumou | Practical quantize-and-forward schemes for the frequency division relay channel[END_REF] proposed a generalized version of the Lloyd-Max algorithm in which noise can be present both at the source and over the transmission, but the various noise sources are assumed to verify standard assumptions (such as independence between the noise and the source), which are not verified in the setting under investigation; in particular, the noise in Phase I is the estimation noise, which is correlated with the transmitted signal. Deriving the corresponding generalized Lloyd-Max algorithm can be checked to be a challenging task, which is left as an extension of the technical solutions proposed here (*). Rather, we provide here a special case of the generalized Lloyd-Max algorithm which is very practical in terms of computational complexity and required knowledge. The version of the Lloyd-Max algorithm we propose will be referred to as ALMA (advanced Lloyd-Max algorithm). ALMA corresponds to the special case (of the most general version mentioned previously) in which the algorithm assumes noise on the transmission but not at the source (although the source can effectively be noisy). This setting is very well suited to scenarios where the estimation noise due to Phase I is negligible, or where local CSI can be acquired reliably by some other mechanism. In the numerical part, we observe the improvements of the proposed ALMA with respect to the conventional LMA. Just like the conventional LMA, ALMA aims at minimizing distortion by iteratively determining the best set of representatives and the best set of cells (which are intervals here) when the other one is fixed. The calculations for obtaining the optimal representatives and partitions are given in Appendix C, both for the special case of no source noise and for the general case; solving the general case can be seen from Appendix C to be computationally challenging. To comment on the proposed algorithm, which is given by the pseudo-code of Algorithm 1, a few notations are in order. We denote by q in {1, ..., Q} the iteration index (where Q is the upper bound on the number of iterations) and define R = 2^(N_II) as the number of quantization intervals. The distortion minimization operation requires some statistical knowledge. Indeed, the probability that the dequantizer decodes the representative v_(ji,r)^(q) given that v_(ji,n)^(q) has been transmitted needs to be known; this probability is denoted by pi_ji(r|n) and constitutes one of the inputs of Algorithm 1. The second input of Algorithm 1 is the p.d.f. of g_ji, which is denoted by phi_ji. The third input is the initial choice of the quantization intervals, that is, the set {u_(ji,1)^(0), ..., u_(ji,R+1)^(0)}. Convergence of ALMA to a global minimum point is not guaranteed, and finding sufficient conditions for global convergence is known to be non-trivial. However, local convergence is guaranteed; an elegant and general argument for this can be found in [START_REF] Beaude | Crawford-Sobel meet Lloyd-Max on the grid[END_REF]. Conducting a theoretical analysis in which global convergence is tackled would constitute a significant development of the present analysis (*), which is here based on typical and realistic simulation scenarios. At this point, two comments are in order. First, through (7)-(8), it is seen that ALMA relies on some statistical knowledge which might not always be available in practice.
This is especially the case for pi_ji and gamma_ji, since knowledge of the channel distribution information (CDI, i.e., phi_ji) is typically easier to obtain. The CDI may be obtained by storing the estimates obtained during past transmissions and forming empirical means (possibly with a sliding window). If the CDI is time-varying, a procedure indicating to the terminals when to update the statistics might be required. Second, if we regard Phase II as a classical communication process, then the amount of information sent by the source is maximized when the source signal is uniformly distributed.

Algorithm 1: Advanced Lloyd-Max algorithm (ALMA)
Inputs: pi_ji, phi_ji(g_ji), {u_(ji,1)^(0), ..., u_(ji,R+1)^(0)}
Outputs: {u*_(ji,1), ..., u*_(ji,R+1)}, {v*_(ji,1), ..., v*_(ji,R)}
Initialization: Set q = 0. Initialize the quantization intervals according to {u_(ji,1)^(0), ..., u_(ji,R+1)^(0)}. Set u_(ji,r)^(-1) = 0 for all r in {1, ..., R}.
while max_r |u_(ji,r)^(q) - u_(ji,r)^(q-1)| > delta and q < Q do
  Update the iteration index: q <- q + 1.
  For all r in {1, 2, ..., R} set
    v_(ji,r)^(q) <- [ sum_{n=1}^{R} pi_ji(r|n) * integral from u_(ji,n)^(q-1) to u_(ji,n+1)^(q-1) of g_ji phi_ji(g_ji) dg_ji ] / [ sum_{n=1}^{R} pi_ji(r|n) * integral from u_(ji,n)^(q-1) to u_(ji,n+1)^(q-1) of phi_ji(g_ji) dg_ji ].    (7)
  For all r in {2, 3, ..., R} set
    u_(ji,r)^(q) <- [ sum_{n=1}^{R} ( pi_ji(n|r) - pi_ji(n|r-1) ) ( v_(ji,n)^(q) )^2 ] / [ 2 * sum_{n=1}^{R} ( pi_ji(n|r) - pi_ji(n|r-1) ) v_(ji,n)^(q) ].    (8)
end
For all r in {2, ..., R}, u*_(ji,r) = u_(ji,r)^(q), with u*_(ji,1) = 0 and u*_(ji,R+1) = +infinity; for all r in {1, ..., R}, v*_(ji,r) = v_(ji,r)^(q).

It turns out that minimizing the (end-to-end) distortion over Phase II does not enforce this. Motivated by these two observations, we provide here a second quantization scheme, which is simple but will be seen to perform quite well in the numerical part. We refer to this quantization scheme as the maximum entropy quantizer (MEQ). For the MEQ, the quantization interval bounds are fixed once and for all according to:

integral from u_(ji,r) to u_(ji,r+1) of phi_ji(g_ji) dg_ji = 1/R,  for all r in {1, ..., R} and all (j, i) in {1, ..., K}^2.    (9)

The representative of the interval [u_(ji,r), u_(ji,r+1)] is denoted by v_(ji,r) and is chosen to be its centroid:

v_(ji,r) = [ integral from u_(ji,r) to u_(ji,r+1) of g_ji phi_ji(g_ji) dg_ji ] / [ integral from u_(ji,r) to u_(ji,r+1) of phi_ji(g_ji) dg_ji ].    (10)

We see that each representative has the same probability of occurring, which maximizes the entropy of the quantizer output, hence the proposed name. To implement the MEQ, only the knowledge of phi_ji is required, and the complexity involved is very low.
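Here is a minimal sketch of the MEQ of Eqs. (9)-(10) for a unit-mean exponentially distributed channel gain (an assumption consistent with Rayleigh fading; only the p.d.f. phi_ji is needed). The bounds are the quantiles of the distribution and the representatives are the closed-form cell centroids.

import numpy as np

N_II = 2
R = 2 ** N_II                                     # number of representatives

# Eq. (9): interval bounds as quantiles of phi_ji -> equiprobable cells
probs = np.arange(1, R) / R                       # interior quantile levels
u = np.concatenate(([0.0], -np.log(1.0 - probs), [np.inf]))  # exp. inverse CDF

# Eq. (10): each representative is the centroid of its cell under phi_ji,
# computed in closed form for the unit-mean exponential distribution.
def centroid(a: float, b: float) -> float:
    num = (a + 1) * np.exp(-a) - ((b + 1) * np.exp(-b) if np.isfinite(b) else 0.0)
    den = np.exp(-a) - (np.exp(-b) if np.isfinite(b) else 0.0)
    return num / den

v = np.array([centroid(u[r], u[r + 1]) for r in range(R)])

def meq(g: float) -> float:
    """Quantize a channel gain estimate to its MEQ representative."""
    return v[min(np.searchsorted(u, g, side="right") - 1, R - 1)]

print("bounds:", np.round(u, 3))                  # last bound is inf
print("representatives:", np.round(v, 3), "-> Q(0.9) =", round(meq(0.9), 3))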
Power modulation: To inform the other transmitters about its knowledge of local CSI, Transmitter $i$ maps the $K$ labels of $N_{\mathrm{II}}$ bits produced by the quantizer $\mathcal{Q}_i^{\mathrm{II}}$ to a sequence of power levels $(p_i(T_{\mathrm{I}}+1), p_i(T_{\mathrm{I}}+2), \dots, p_i(T_{\mathrm{I}}+T_{\mathrm{II}}))$. A priori, any one-to-one mapping might be used. Although the new problem of finding the best mapping for a given network utility arises here and constitutes a relevant direction to explore (⋆), we will not develop it here; our main objective is to introduce this problem and illustrate it clearly for a special case treated in the numerical part. To this end, assume that Phase II comprises $T_{\mathrm{II}} = 2$ time-slots, $K = 2$ users, and that the users only exploit $L = 2$ power levels during Phase II, say $\mathcal{P} = \{P_{\min}, P_{\max}\}$. Further assume $1$-bit quantizers, which means that the quantizers $\mathcal{Q}_{ji}^{\mathrm{II}}$ produce binary labels. For simplicity, we assume the same quantizer $\mathcal{Q}$ is used for all four channel gains $g_{11}$, $g_{12}$, $g_{21}$, and $g_{22}$: if $g_{ij} \in [0, \mu]$ then the quantizer output is denoted by $g_{\min}$; if $g_{ij} \in (\mu, +\infty)$ then the quantizer output is denoted by $g_{\max}$. A simple mapping scheme for Transmitter 1 (whose objective is to inform Transmitter 2 about $(g_{11}, g_{21})$) is therefore to choose $p_1(T_{\mathrm{I}}+1) = P_{\min}$ if $\mathcal{Q}(g_{11}) = g_{\min}$ and $p_1(T_{\mathrm{I}}+1) = P_{\max}$ otherwise; and $p_1(T_{\mathrm{I}}+2) = P_{\min}$ if $\mathcal{Q}(g_{21}) = g_{\min}$ and $p_1(T_{\mathrm{I}}+2) = P_{\max}$ otherwise. Depending on the p.d.f. of $g_{ij}$, the value of $\mu$, and the performance criterion under consideration, a proper mapping can then be chosen. For example, to minimize the energy consumed at the transmitter, using the minimum transmit power level $P_{\min}$ as much as possible is preferable; thus, if $\Pr(\mathcal{Q}(g_{11}) = g_{\min}) \ge \Pr(\mathcal{Q}(g_{11}) = g_{\max})$, the power level $P_{\min}$ will be associated with the minimum quantized channel gain, that is, $\mathcal{Q}(g_{11}) = g_{\min}$.

Power level decoding: For every time-slot $t \in \{T_{\mathrm{I}}+1, \dots, T_{\mathrm{I}}+T_{\mathrm{II}}\}$, the power levels are estimated by Transmitter $i$ as follows:
$$\widetilde{p}_{-i}(t) \in \arg\min_{p_{-i} \in \mathcal{P}^{K-1}} \Big| \sum_{j \ne i} p_j \widetilde{g}_{ji} - \big(\widetilde{\omega}_i(t) - p_i(t)\,\widetilde{g}_{ii} - \sigma^2\big) \Big|, \quad (11)$$
where $p_{-i} = (p_1, \dots, p_{i-1}, p_{i+1}, \dots, p_K)$. Since $\widetilde{g}_{ji}$ is known at Transmitter $i$ for every $j$, the above minimization can be performed. Exhaustive search is feasible as long as the number of tests, which is $L^{K-1}$, is reasonable. For this purpose, one possible approach is to impose that the number of power levels exploited over Phase II be small. In this respect, using binary power over Phase II is relevant not only regarding complexity issues but also in terms of robustness against the various possible sources of noise. As for the number of interfering users using the same channel (meaning operating on the same frequency band, at the same period of time, in the same geographical area), it will typically be small and does not exceed 3 or 4 in real wireless systems. More generally, this shows that the proposed technique can accommodate more than 4 users in total; for example, with 12 bands, $48 = 12 \times 4$ users would be manageable by applying the proposed technique on each band. As our numerical results indicate, using (11) as a decoding rule to find the power levels of the other transmitters generally works very well for $K = 2$. When the number of users is higher, each transmitter needs to estimate $K-1$ power levels from only one observation equation, which typically induces a non-negligible degradation in terms of symbol error rate. In this situation, Phase II can be performed by scheduling the activity of the users so that only 2 users are active at any given time-slot of Phase II. Once all pairs of users have exchanged information on their channel states, Phase II is concluded.
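The decoding rule (11) reduces to a nearest-neighbour search over the $L^{K-1}$ candidate power vectors; a brute-force sketch is given below (names are ours, and the noise power normalization is an assumption consistent with the simulation setup).

```python
import itertools
import numpy as np

def decode_powers(i, omega_t, p_i_t, g_est, levels, sigma2=1.0):
    """Eq. (11): estimate the other transmitters' power levels at time t.

    i       : index of the decoding transmitter
    omega_t : (de-quantized) RSSI feedback sample at time t
    p_i_t   : own power level at time t
    g_est   : local CSI estimates, g_est[j] ~ g_ji (g_est[i] ~ g_ii)
    levels  : the L power levels usable over Phase II
    """
    K = len(g_est)
    others = [j for j in range(K) if j != i]
    target = omega_t - p_i_t * g_est[i] - sigma2
    best, best_err = None, np.inf
    for cand in itertools.product(levels, repeat=K - 1):
        err = abs(sum(p * g_est[j] for p, j in zip(cand, others)) - target)
        if err < best_err:
            best, best_err = cand, err
    return dict(zip(others, best))
```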
Remark 1. Note that the case where only one user is active at a time is a special case of the decoding scheme assumed here. The advantage of our more general decoding scheme is that it can be used when SINR feedback is used [START_REF] Varma | Power modulation: Application to inter-cell interference coordination[END_REF] instead of RSSI; indeed, when only one user is active at a time, the SINR becomes an instantaneous SNR and cannot convey any coordination information. Concerning the setting with RSSI feedback, the drawback of our assumption is that, in the presence of noise on the RS power feedback, the performance of Phase II may be limited when the cross channel gains are very small. If this turned out to be a crucial problem, allowing only one user to be active at a time would be preferable.

Remark 2 (required number of time-slots). The proposed technique typically requires $K + K = 2K$ time-slots for the whole exploration phase (Phases I and II). It therefore requires roughly the same amount of resources as IWFA, which needs about $2K$ or $3K$ SINR samples to converge to a Nash equilibrium. While channel acquisition may seem to take some time, note that regular communication is uninterrupted and occurs in parallel. As already mentioned, the context in which the proposed technique and IWFA are best suited is one where the channel is constant over a large number of time-slots, which means that the influence of the exploration phase on the average performance is typically negligible. Nonetheless, some simulations will be provided to assess the optimality loss induced by using power levels to convey information.

Remark 3 (extension to the multi-band scenario). As explained at the beginning of this paper, Phases I and II are described for the single-band case, mainly for clarity reasons. Here we briefly explain how to adapt the algorithm when there are multiple bands. In Phase I, the only difference lies in the choice of the training matrix: with, say, $S$ bands, for each band $s \in \{1, \dots, S\}$ the training matrix $\mathbf{P}_{\mathrm{I}}^s$ has to fulfill the constraint $\sum_{s=1}^{S} p_i^s(t) \le P_{\max}$, where $p_i^s$ is the power Transmitter $i$ allocates to band $s$. In Phase II, each band operates in parallel as in the single-band case; since there is a per-transmitter power constraint, the modulated powers must also satisfy $\sum_{s=1}^{S} p_i^s(t) \le P_{\max}$.

Remark 4 (extension to the multi-antenna case). To perform operations such as beamforming, phase information is generally required. The proposed local CSI estimation techniques (namely, for Phase I) do not allow phase or direction information to be recovered; therefore, another type of feedback should be considered for this. However, if another estimation scheme is available or used for local CSI acquisition and that scheme provides the phase information, then the techniques proposed for local CSI exchange (namely, for Phase II) can be extended. An extension which is more in line with the spirit of the manuscript is given by a MIMO interference channel for which each transmitter knows the interference-plus-noise covariance matrix and its own channel. This is the setup assumed by Scutari et al in their work on MIMO iterative water-filling [START_REF] Scutari | The MIMO iterative waterfilling algorithm[END_REF].

Remark 5 (type of information exchanged). One of the strengths of the proposed exchange procedure is that any kind of information can be exchanged. However, since the SINR or RSSI is used as the communication channel, the exchange has to operate at a low rate, given by the frequency at which the power control levels are updated and the feedback samples sent.

V. NUMERICAL ANALYSIS

In this section, as a first step (Sec. V-A), we provide simulations which result from the combined effects of Phases I and II.
To make a fair comparison with IWFA, the network utility will be evaluated without taking into account the cost possibly associated with the exploration or training phases (i.e., Phases I and II for the proposed scheme, or the convergence time for IWFA). The results are provided for a reasonable small cell network scenario which is similar to those already studied in other works (see e.g., [START_REF] Samarakoon | Ultra dense small cell networks: Turning density into energy efficiency[END_REF] for a recent work). As a second step (Sec. V-B and V-C), we study special cases to better understand the influence of each estimation phase and of the different parameters which impact the system performance.

A. Global performance analysis: a simple small cell network scenario

As shown in Fig. 3, the considered scenario assumes $K = 9$ small cell base stations with maximal transmit power $P_{\max} = 30$ dBm. One or two bands are assumed, depending on the scenario considered. One user per cell is assumed, which corresponds to a possible scenario in practice (see e.g., [START_REF] Samarakoon | Ultra dense small cell networks: Turning density into energy efficiency[END_REF] [25] [START_REF] Moustakas | Power optimization in random wireless networks[END_REF]). We also use this setup to be able to compare the proposed scheme with IWFA, whose performance is generally assessed for the most conventional form of the interference channel, namely $K$ transmitter-receiver pairs. The normalized receive noise power is $\sigma^2 = 0$ dBm. This corresponds to $\mathrm{SNR}(\mathrm{dB}) = 30$, where the signal-to-noise ratio is defined by
$$\mathrm{SNR}(\mathrm{dB}) = 10 \log_{10}\Big(\frac{P_{\max}}{\sigma^2}\Big). \quad (12)$$
Here and throughout the simulation section, the SNR is set to 30 dB by default. RS power measurements are quantized uniformly on a dB scale with $N = 8$ bits, and the quantizer input range in dB is $[\mathrm{SNR}(\mathrm{dB}) - 20, \mathrm{SNR}(\mathrm{dB}) + 10]$. The DMC $\Gamma$ is constructed with error probability $\epsilon$ to the two nearest neighbors, i.e., for the symbols $w_1 < w_2 < \dots < w_M$ (with $M = 2^N$), $\Gamma(w_i|w_j) = \epsilon$ if $|i - j| = 1$ and $\Gamma(w_i|w_j) = 0$ if $|i - j| > 1$. In this section $\epsilon = 1\%$; the quantity $\epsilon$ will be referred to as the feedback channel symbol error rate (FCSER). For all $(i,j)$ and $s$ ($s$ always being the band index), the channel gain $g_{ij}^s$ on band $s$ is assumed to be exponentially distributed, namely its p.d.f. writes as
$$\phi_{ij}^s(g_{ij}^s) = \frac{1}{\mathbb{E}[g_{ij}^s]} \exp\Big(-\frac{g_{ij}^s}{\mathbb{E}[g_{ij}^s]}\Big),$$
which corresponds to the well-known Rayleigh fading assumption. Here, $\mathbb{E}[g_{ij}^s]$ models the path loss effects for the link $ij$ and depends on the distance as follows: $\mathbb{E}[g_{ij}^s] = (d_0/d_{ij})^2$, where $d_{ij}$ is the distance between Transmitter $i$ and Receiver $j$, $d_0 = 5$ m is a normalization factor, and ISD denotes the inter-site distance. In this section, the system performance is assessed in terms of the sum-rate, given by:
$$u_{\text{sum-rate}}(p_1, \dots, p_K; \mathbf{G}) = \sum_{i=1}^{K} \sum_{s=1}^{S} \log\big(1 + \mathrm{SINR}_i^s(p_1, \dots, p_K; \mathbf{G})\big), \quad (13)$$
where $p_i = (p_i^1, \dots, p_i^S)$ is the power allocation vector of Transmitter $i$, and $\mathrm{SINR}_i^s$ is the SINR at Receiver $i$ in band $s$:
$$\mathrm{SINR}_i^s = \frac{g_{ii}^s\, p_i^s}{\sigma^2 + \sum_{j \ne i} g_{ji}^s\, p_j^s}.$$
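As an illustration of (13) and the SINR expression above, a direct sum-rate evaluation might look as follows (a sketch under our own naming conventions; natural logarithm assumed).

```python
import numpy as np

def sum_rate(P, G, sigma2=1.0):
    """Eq. (13). P[i, s]: power of Tx i on band s; G[j, i, s]: gain j -> i."""
    K, S = P.shape
    rate = 0.0
    for i in range(K):
        for s in range(S):
            interference = sum(G[j, i, s] * P[j, s] for j in range(K) if j != i)
            sinr = G[i, i, s] * P[i, s] / (sigma2 + interference)
            rate += np.log(1.0 + sinr)
    return rate
```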
Fig. 4a represents the average sum-rate against the ISD. The sum-rate is averaged over $10^4$ realizations of the channel gain matrix $\mathbf{G}$, and the inter-site distance is the distance between two neighboring small base stations. Three curves are represented. The top curve corresponds to the performance of the sequential best-response dynamics applied to the sum-rate (referred to as Team BRD) in the presence of perfect global CSI. The middle curve corresponds to Team BRD using the estimate obtained with the simplest association proposed in this paper, namely LSPD for Phase I and the $2$-bit MEQ for Phase II. The LSPD estimator uses $K$ time-slots and the $K$-dimensional identity matrix $\mathbf{P}_{\mathrm{I}} = P_{\max} \mathbf{I}_K$ as the training matrix. The $2$-bit MEQ uses binary power control ($L = 2$) and $2K$ time-slots to send the information (i.e., $\widetilde{g}_i$); this corresponds to the typical number of time-slots IWFA needs to converge. At last, the bottom curve corresponds to IWFA using the local CSI estimates provided by Phase I. It is seen that about 50% of the gap between IWFA and Team BRD with perfect CSI can be bridged by using the proposed estimation procedure; when the interference level is higher, the gap becomes larger. Fig. 4b depicts exactly the same scenario as Fig. 4a, except that only one band is available to the small cells, i.e., $S = 1$. Here, about 65% of the gap can be bridged when using Team BRD with the proposed estimation procedure. In this section, some choices have been made: a diagonal training matrix and the LSPD estimator for Phase I, and the MEQ for Phase II. The purpose of the next sections is to explain these choices and to better identify the strengths and weaknesses of the proposed estimation procedures.

Fig. 4: The above curves are obtained in the scenario of Fig. 3, with $K = 9$ transmitter-receiver pairs, $\mathrm{SNR}(\mathrm{dB}) = 30$, FCSER $\epsilon = 0.01$, $N = 8$ quantization bits for the RSSI, and $L = 2$ power levels. Using the simplest estimation schemes proposed in this paper, namely LSPD and MEQ, bridges the gap between IWFA and Team BRD with perfect CSI by about 50% when $S = 2$ and about 65% when $S = 1$.

B. Comparison of estimation techniques for Phase I

In Phase I, there are two main issues to be addressed: the choice of the estimator and the choice of the training matrix $\mathbf{P}_{\mathrm{I}}$. To compare the LSPD and MMSEPD estimators, we first consider the estimation SNR (ESNR) as the performance criterion. The estimation SNR of Transmitter $i$ is defined here for the case $S = 1$ and is given by:
$$\mathrm{ESNR}_i = \frac{\mathbb{E}[\|\mathbf{G}\|^2]}{\mathbb{E}[\|\mathbf{G} - \widetilde{\mathbf{G}}_i\|^2]}, \quad (14)$$
where $\|\cdot\|$ stands for the Frobenius norm and $\widetilde{\mathbf{G}}_i$ is the global channel estimate available to Transmitter $i$ after Phases I and II. In this section, we always assume a perfect exchange in Phase II when conducting the comparisons. This choice is made to isolate the impact of the Phase I estimation techniques on the estimation SNR and on the utility functions considered for the exploitation phase.
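The ESNR (14) is a simple Monte Carlo ratio; a sketch of its empirical evaluation is given below (our own helper, assuming draws of the true matrix G and of the per-transmitter estimate are available).

```python
import numpy as np

def esnr_db(G_samples, G_hat_samples):
    """Empirical version of Eq. (14), in dB.

    G_samples, G_hat_samples: arrays of shape (n_draws, K, K).
    """
    num = np.mean([np.linalg.norm(G) ** 2 for G in G_samples])
    den = np.mean([np.linalg.norm(G - Gh) ** 2
                   for G, Gh in zip(G_samples, G_hat_samples)])
    return 10.0 * np.log10(num / den)
```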
After extensive simulations, we have observed that the gain in terms of ESNR obtained by using the best training matrix (computed by an exhaustive search over all the matrix elements) is either negligible or quite small when compared to the best diagonal training matrix (computed by an exhaustive search over the diagonal elements); see e.g., Fig. 6a for such a simulation. Therefore, in the rest of this paper, we restrict our attention to diagonal training matrices, reducing the computational complexity without any significant performance loss. To conclude on the choice of the training matrix, we assess the impact of using power levels to learn local CSI instead of using them to optimize the performance over Phase I. For this, we compare in Fig. 6b the scenario in which a diagonal training matrix is used to learn local CSI with the scenario in which the training matrix is the best one in the sense of the expected sum-rate (over Phase I). Global channel distribution information is assumed to be available in the latter scenario; the corresponding choice is computationally feasible for small systems. Fig. 5a represents, for $K = 2$, $S = 1$, and $\mathrm{SNR}(\mathrm{dB}) = 30$, the estimation SNR (in dB) against the signal-to-interference ratio (SIR) in dB, which is defined here as
$$\mathrm{SIR}(\mathrm{dB}) = 10 \log_{10}\Big(\frac{\mathbb{E}(g_{11})}{\mathbb{E}(g_{21})}\Big) = 10 \log_{10}\Big(\frac{\mathbb{E}(g_{22})}{\mathbb{E}(g_{12})}\Big). \quad (15)$$
The three curves in red solid lines represent the MMSEPD estimator performance, while the three curves in blue dashed lines represent the LSPD estimator performance. The performance gap between MMSEPD and LSPD depends on the quality of the RSSI at the transmitters. When RS power measurements are quantized with $N = 8$ bits and the feedback channel symbol error rate is $\epsilon = 1\%$, the gap in dB is very close to 0. Using MMSEPD instead of LSPD becomes much more relevant in terms of ESNR when the feedback quality is degraded: for $N = 2$ bits and $\epsilon = 10\%$, the gap is about 5 dB. Note that a very small number of RSSI quantization bits, and therefore a significant feedback quality degradation, may also occur in classical wireless systems where the feedback is binary, such as an ACK/NACK feedback. Indeed, an ACK/NACK feedback can be seen as the result of a $1$-bit quantization of the RSSI or SINR, and the proposed technique might be used to coordinate the transmitters based on this particular, rough feedback alone. Even though the noise on the RSSI is correlated with the signal and is not Gaussian, we observe that MMSEPD and LSPD (which can be seen as a zero-forcing solution) perform similarly when the noise becomes negligible. Finally, note that the ESNR is seen to be independent of the SIR; this is explained by the training matrix used, which is diagonal. The above comparison is conducted in terms of ESNR but not in terms of final utility. To assess the impact of Phase I on the exploitation phase, two common utility functions are considered, namely the sum-rate and the sum-energy-efficiency (sum-EE), the latter being defined as:
$$u_{\text{sum-EE}}(p_1, \dots, p_K; \mathbf{G}) = \sum_{i=1}^{K} \frac{\sum_{s=1}^{S} f\big(\mathrm{SINR}_i^s(p_1, \dots, p_K; \mathbf{G})\big)}{\sum_{s=1}^{S} p_i^s}, \quad (16)$$
where the same notations as in (13) are used; $f$ is an efficiency function representing the packet success rate or the probability of having no outage. The utility function $u_{\text{sum-EE}}$ corresponds to the ratio of the packet success rate to the consumed transmit power and has been used in many papers (see e.g., [27] [28] [29] [30] [31]). Here we choose the efficiency function of [START_REF] Belmega | Energy-efficient precoding for multiple-antenna terminals[END_REF]: $f(x) = \exp(-c/x)$, with $c = 2^r - 1 = 1$, $r$ being the spectral efficiency. Fig. 5b depicts, for $K = 2$, $S = 1$, $N = 2$, $\epsilon = 10\%$, the average relative utility loss $\Delta u$ in % against the SIR in dB. The average relative utility loss in % is defined by
$$\Delta u(\%) = 100\, \mathbb{E}\left[\frac{u(p_1^\star, \dots, p_K^\star; \mathbf{G}) - u(\widetilde{p}_1^\star, \dots, \widetilde{p}_K^\star; \mathbf{G})}{u(p_1^\star, \dots, p_K^\star; \mathbf{G})}\right], \quad (17)$$
where $u(p_1^\star, \dots, p_K^\star; \mathbf{G})$ is the best sum-utility attainable when every realization of $\mathbf{G}$ is known perfectly. The latter is obtained by performing an exhaustive search over 100 values equally spaced in $[0, P_{\max}]$ for each draw of $\mathbf{G}$; the average is taken over $10^4$ independent draws of $\mathbf{G}$. The utility $u(\widetilde{p}_1^\star, \dots, \widetilde{p}_K^\star; \mathbf{G})$ is also obtained by exhaustive search, but using either the LSPD or the MMSEPD estimator and assuming Phase II to be perfect.
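The sum-EE (16) with the exponential efficiency function, and the per-draw relative utility loss used in (17), can be sketched as follows (Monte Carlo averaging over channel draws is left to the caller; names are ours).

```python
import numpy as np

def sum_ee(P, G, sigma2=1.0, c=1.0):
    """Eq. (16) with efficiency function f(x) = exp(-c/x).

    Assumes strictly positive powers so that SINRs are nonzero.
    """
    K, S = P.shape
    total = 0.0
    for i in range(K):
        num = 0.0
        for s in range(S):
            interf = sum(G[j, i, s] * P[j, s] for j in range(K) if j != i)
            sinr = G[i, i, s] * P[i, s] / (sigma2 + interf)
            num += np.exp(-c / sinr)
        total += num / P[i].sum()
    return total

def relative_loss(u_perfect, u_estimated):
    """Inner term of Eq. (17) for one channel draw, in percent."""
    return 100.0 * (u_perfect - u_estimated) / u_perfect
```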
Fig. 5b shows that, even under severe conditions in terms of observing the RS power at the transmitter, the MMSEPD and LSPD estimators have the same performance in terms of sum-rate. This holds even though the gap in terms of ESNR is 5 dB (see Fig. 5a). Note that the relative utility loss is about 3%, showing that the sum-rate performance criterion is very robust against channel estimation errors. When one considers the sum-EE, the relative utility loss becomes higher, lying in the range 15%-20%, and the gap between MMSEPD and LSPD becomes more apparent, equalling about 5%. The observations made for the special setting considered here have been checked to be quite general, and apply for more users, more bands, and other propagation scenarios: unless the RSSI is very noisy, or when only an ACK/NACK-type feedback is available, the MMSEPD and LSPD estimators perform quite similarly. Since the MMSEPD estimator requires more knowledge and more computational complexity to be implemented, the LSPD estimator seems to be the best choice when the quality of the RSSI is good, as it is in current cellular and WiFi systems. To conclude this section, we provide in Fig. 7 the counterpart of Fig. 6b for Phase II: the scenario in which a diagonal training matrix is used to exchange local CSI is compared with the scenario in which power control is used to maximize the expected sum-rate (over Phase II). Here, however, the expectation is not taken over local CSI, since it is assumed to be known; the corresponding choice is computationally feasible for small systems.

C. Comparison of quantization techniques for Phase II

In this section, we assume Phase I to be perfect. Again, this choice is made to isolate the impact of the Phase II estimation techniques on the estimation SNR and on the utility functions considered for the exploitation phase. When $L = 2$ and we quantize with 1 bit, we map the smallest representative of the quantizer to the lowest power level in $\mathcal{P}$ and the largest to the highest power level. If $L > 2$, the power levels are picked from the set $\{0, \frac{1}{L-1} P_{\max}, \frac{2}{L-1} P_{\max}, \dots, P_{\max}\}$ and the representatives are mapped in the order corresponding to their value, as sketched below.
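A sketch of the representative-to-power-level mapping just described (ours; assumes one power level per representative):

```python
import numpy as np

def power_mapping(representatives, p_max):
    """Map sorted quantizer representatives to equally spaced power levels.

    Assumes L = len(representatives), with levels {0, P_max/(L-1), ..., P_max}.
    """
    L = len(representatives)
    levels = np.linspace(0.0, p_max, L)
    return dict(zip(sorted(representatives), levels))
```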
In Phase II, the most relevant technique to be determined is the quantization of the channel gains estimated through Phase I. For $K = 2$ users, $S = 1$ band, $L = 2$ power levels, and $\mathrm{SNR}(\mathrm{dB}) = 30$, Fig. 8a provides the ESNR (dB) versus the SIR (dB) for the three channel gain quantizers mentioned in this paper: ALMA, LMA, and MEQ. The three quantizers are assumed to quantize the channel gains with only 1 bit. Since only two power levels are exploited over Phase II, the local CSI exchange phase (Phase II) comprises $K$ time-slots. The three top curves of Fig. 8a correspond to $N = 8$ RS power quantization bits and $\epsilon = 1\%$, while the three bottom curves correspond to $N = 2$ bits and $\epsilon = 10\%$. First of all, it is seen that the obtained ESNR values are much lower than for Phase I: even for $N = 8$ and $\epsilon = 1\%$, the ESNR is around 10 dB, whereas it was about 40 dB for Phase I. This shows that the limiting factor for the global estimation accuracy will come from Phase II; additional comments on this point are provided at the end of this section. Secondly, Fig. 8a shows the advantage offered by the proposed ALMA over the conventional LMA. Fig. 8b depicts, for $K = 2$, $S = 1$, $N = 8$, $\epsilon = 1\%$, the average relative utility loss $\Delta u$ in % against the SIR in dB for ALMA and MEQ. The two bottom (resp. top) curves correspond to the sum-rate (resp. sum-EE). The relative utility loss is seen to be comparable to the one obtained for Phase I. Interestingly, MEQ is seen to induce smaller performance losses than ALMA, showing that the ESNR (or distortion) does not perfectly reflect the needs, in terms of optimality, of the exploitation phase. This observation partly explains why we have chosen MEQ in Sec. V-A for the global performance evaluation; many other simulations (involving various values of $K$, $N$, $S$, $\epsilon$, etc.), not provided here, confirm this observation. An important comment made previously is that Phase II constitutes the bottleneck in terms of estimation accuracy for the final global CSI estimate available for the exploitation phase. Here, we provide more detail about this limitation. Indeed, even when the quality of the RSSI is good, the ESNR only reaches 10 dB, and even increasing the number of channel quantization bits, by increasing the number of power modulation levels or of time-slots used, does not improve the ESNR, as demonstrated by the following figures.

Fig. 9: The power level decoding scheme proposed in this paper is simple and has the advantage of being usable with SINR feedback instead of RSSI feedback. However, the proposed scheme exhibits a limitation in terms of coordination ability when the interference is very low. The consequence is the existence of a maximum ESNR for Phase II: despite increasing the number of quantization bits or time-slots used, the ESNR is bounded.

For $N = 8$ RS power quantization bits, $\epsilon = 1\%$, and $\mathrm{SNR}(\mathrm{dB}) = 30$, Fig. 9a shows the ESNR versus the number of channel quantization bits used by MEQ. It is seen that the ESNR reaches a maximum, whether a high interference scenario ($\mathrm{SIR}(\mathrm{dB}) = 0$) or a low interference scenario ($\mathrm{SIR}(\mathrm{dB}) = 10$) is considered. In Fig. 8a, the ESNR was about 9 dB when the $1$-bit MEQ is used and the SIR equals 0 dB. Here we retrieve this value and see that the ESNR can reach 13 dB when the $4$-bit MEQ is implemented, meaning that 16 power levels are used in Phase II. When the SIR is higher, using the $2$-bit MEQ is almost optimal. If the RSSI quality degrades, then using only 1 or 2 bits for MEQ is always the best configuration. Another approach is to increase the number of channel gain quantization bits while still using only two power levels over Phase II, by increasing the number of time-slots used in Phase II. Fig. 9b assumes exactly the same setup as Fig. 9a, but represents the ESNR as a function of the number of time-slots used in Phase II. Here again, an optimal number of time-slots appears, for the same reason as in Fig. 9a. For both Fig. 9a and Fig. 9b, one might wonder why the ESNR is better when the interference is high. This is due to the fact that, when the interference is very low, the decoding of the power levels of the others becomes less reliable. The existence of maximum points in Fig. 9a and Fig. 9b precisely reflects the trade-off between the channel gain quantization noise and the power level decoding errors.

VI. CONCLUSION

First, we recall a few comments about the scope and originality of this paper. One purpose of this paper is to show that the sole knowledge of the received power or SINR feedback is sufficient to recover global CSI. The proposed technique comprises two phases. Phase I allows each transmitter to estimate local CSI. Obviously, if there already exists a dedicated feedback or signalling channel which allows the transmitter to estimate local CSI, Phase I may be skipped.
But even in the latter situation, the problem remains of how to exchange local CSI among the transmitters. Phase II provides a completely new solution for exchanging local CSI, namely using power modulation. Phase II is based in particular on a robust quantization scheme for the local channel gains; it is therefore robust against perturbations on the received power measurements. It might even be used with $1$-bit RSSI, which would correspond to an ACK/NACK-type feedback, showing that even a rough feedback channel may help the transmitters to coordinate. Note that the proposed technique is general and can be used to exchange any kind of information, not only local CSI. Second, we summarize a few observations of practical interest. For Phase I, two estimators have been proposed: the LSPD and the MMSEPD estimators. Simulations show that using the MMSEPD requires some statistical knowledge and is more complex, but is well motivated when the RS power is quantized roughly or the feedback channel is very noisy. Otherwise, the use of the LSPD estimator is shown to be sufficient. During Phase II, transmitters exchange local CSI by encoding it onto their power levels and using interference as a communication channel; Phase II typically requires at least $K$ time-slots (assuming all transmitters communicate simultaneously in Phase II), which makes $2K$ time-slots for the whole estimation procedure. This is typically the number of time-slots needed by IWFA to converge, when it converges. For Phase II, three estimation schemes are provided, which are in part based on one of the two quantizers ALMA and MEQ; the quantizers are computed offline but are exploited online. MEQ seems to offer a good trade-off between complexity and performance in terms of sum-rate or sum-energy-efficiency. In contrast with Phase I, in which the estimation SNR typically reaches 40 dB for good RS power measurements, the estimation SNR in Phase II is typically around 10 dB, showing that Phase II constitutes the bottleneck in terms of estimation quality of global CSI. This is due to the fact that the cross channel gains may be small when they fluctuate (this would not occur in the presence of Rician fading), which generates power level decoding errors. As explained, one way of improving the estimation SNR over Phase II is to activate only one user at a time, but then the proposed power level decoding scheme would only apply to RSSI feedback and not to SINR feedback anymore. In Phase III, having global CSI, each transmitter can apply the BRD to the sum-utility instead of applying it to an individual utility as IWFA does, resulting in a significant performance improvement, as seen from our numerical results.

APPENDIX A
PROOF OF PROPOSITION III.1

Proof: From Section II, we have $\widehat{\omega}_i \in \Omega$ and $\widetilde{\omega}_i \in \Omega$, where $\Omega$ is a discrete set. Therefore, we can rewrite the likelihood probability $\Pr(\widetilde{\omega}_i | g_i)$ as follows:
$$\Pr(\widetilde{\omega}_i | g_i) \overset{(a)}{=} \sum_{m} \Pr(\widetilde{\omega}_i | \widehat{\omega}_i = w_m) \Pr(\widehat{\omega}_i = w_m | g_i) \overset{(b)}{=} \sum_{m} \prod_{t=1}^{T_{\mathrm{I}}} \Gamma\big(\widetilde{\omega}_i(t) | w_m(t)\big) \Pr(\widehat{\omega}_i = w_m | g_i) \overset{(c)}{=} \prod_{t=1}^{T_{\mathrm{I}}} \Gamma\big(\widetilde{\omega}_i(t) \,\big|\, Q^{\mathrm{RS}}(e_t^{\mathsf T} \mathbf{P}_{\mathrm{I}} g_i + \sigma^2)\big), \quad (18)$$
where $e_t$ is a column vector whose entries are zeros except for the $t$-th one. In (18), $(a)$ holds because the estimation and feedback process $g_i \to \widehat{\omega}_i \to \widetilde{\omega}_i$ (represented in Fig. 1) is Markovian, $(b)$ holds because the DMC is separable, and $(c)$ holds because $\Pr(\widehat{\omega}_i | g_i)$ is a discrete delta function that is zero everywhere except when $Q^{\mathrm{RS}}(\mathbf{P}_{\mathrm{I}} g_i + \sigma^2 \mathbf{1}) = \widehat{\omega}_i$.
From (18), the set of ML estimators can now be written as
$$\mathcal{G}_i^{\mathrm{ML}} = \Big\{ \arg\max_{g_i} \prod_{t=1}^{T_{\mathrm{I}}} \Gamma\big(\widetilde{\omega}_i(t) \,\big|\, Q^{\mathrm{RS}}(e_t^{\mathsf T} \mathbf{P}_{\mathrm{I}} g_i + \sigma^2)\big) \Big\}, \quad (19)$$
which is the first claim of our proposition. Now we look at the LS estimator, which is known from (4) to satisfy
$$\mathbf{P}_{\mathrm{I}}\, g_i^{\mathrm{LSPD}} + \sigma^2 \mathbf{1} = \widetilde{\omega}_i, \quad (20)$$
or equivalently:
$$e_t^{\mathsf T} \mathbf{P}_{\mathrm{I}}\, g_i^{\mathrm{LSPD}} + \sigma^2 = \widetilde{\omega}_i(t). \quad (21)$$
If for all $\ell$, $\arg\max_k \Gamma(w_\ell | w_k) = \ell$, then the ML set can be evaluated from (19) as
$$\mathcal{G}_i^{\mathrm{ML}} = \big\{ g_i \,\big|\, \forall t,\ Q^{\mathrm{RS}}(e_t^{\mathsf T} \mathbf{P}_{\mathrm{I}} g_i + \sigma^2) = \widetilde{\omega}_i(t) \big\}. \quad (22)$$
Therefore, if $\mathcal{G}_i^{\mathrm{ML}}$ is given as in (22), then from (21) we have $g_i^{\mathrm{LSPD}} \in \mathcal{G}_i^{\mathrm{ML}}$, which is our second claim.
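Equation (20) makes the LSPD estimator a single linear solve; a minimal sketch (names ours), valid when the training matrix is square ($T_{\mathrm{I}} = K$, e.g., the identity training matrix used in the simulations):

```python
import numpy as np

def lspd(P_I, omega, sigma2=1.0):
    """LSPD estimate of local CSI, Eq. (20): solve P_I g = omega - sigma^2 * 1.

    P_I   : (K, K) training matrix (diagonal in the simulations)
    omega : length-K vector of (de-quantized) RSSI feedback samples
    """
    return np.linalg.solve(P_I, omega - sigma2)
```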
Fig. 1: The flowchart of the proposed scheme.
Fig. 4 (legend): average sum-rate (bps/Hz) for Team BRD with perfect global CSI, Team BRD with estimated global CSI (LSPD+MEQ), and IWFA with estimated local CSI (LSPD); (a) $S = 2$, (b) $S = 1$.
Fig. 5: Comparing MMSEPD and LSPD assuming perfect Phase II. (a) Using MMSEPD instead of LSPD in Phase I becomes useful in terms of ESNR when the RSSI quality becomes too rough (bottom curves). (b) Relative utility loss, in terms of sum-EE and sum-rate, under quite severe conditions in terms of RSSI quality ($N = 2$, $\epsilon = 10\%$).
Fig. 6: Influence of the training matrix. (a) Scenario: $K = 2$, $S = 1$, $\mathrm{SNR}(\mathrm{dB}) = 30$, $\epsilon = 0$, $N = 2$ quantization bits; using a diagonal training matrix typically induces a small performance loss in terms of ESNR even in worst-case scenarios (curves: MMSEPD/LSPD with best full versus best diagonal training matrix). (b) Optimality loss induced in Phase I when using power levels to learn local CSI instead of maximizing the expected sum-rate; this loss may be influential on the average performance when the number of time-slots of the exploitation phase is not large enough.
Fig. 7: Optimality loss induced in Phase II when using power levels to exchange local CSI instead of maximizing the expected sum-rate; this loss may be influential on the average performance when the number of time-slots of the exploitation phase is not large enough.
Fig. 8: Performance analysis of the conventional LMA, ALMA and MEQ, assuming Phase I to be perfect. (a) Performance measured by ESNR under good ($N = 8$, $\epsilon = 1\%$; three top curves) and bad ($N = 2$, $\epsilon = 10\%$; three bottom curves) RSSI quality conditions. (b) Performance measured by relative utility loss, the utility being the sum-EE or the sum-rate.
Fig. 9: (a) ESNR against the number of quantization bits used by MEQ, for $\mathrm{SIR}(\mathrm{dB}) = 0$ and $\mathrm{SIR}(\mathrm{dB}) = 10$ with $L = 2$ power levels. (b) ESNR against $T_{\mathrm{II}}$.
TABLE I: Acronyms used in Sec. V.
Fig. 3: Small cell network configuration assumed in Sec. V-A: a $3 \times 3$ grid of small base stations SBS 1-9 with one mobile station MS per cell; cell size $d \times d$, inter-site distance $d$. The normalized coordinates of the mobile stations MS$_1, \dots,$ MS$_9$ are respectively given by: (3.8, 3.2), (7.9, 1.4), (10.2, 0.7), (2.3, 5.9), (6.6, 5.9), (14.1, 9.3), (1.8, 10.6), (7.1, 14.6), (12.5, 10.7); the real coordinates are obtained by multiplying the former by the ratio $\mathrm{ISD}/d_0$.
Note that, for the sake of clarity, it is assumed here that the RS power quantizer and the DMC are independent of the user index, but the proposed approach holds in the general case.

APPENDIX B
PROOF OF PROPOSITION III.2

Proof: After the RSSI quantization, the $M^{T_{\mathrm{I}}}$ different levels of $\widehat{\omega}_i$ or $\widetilde{\omega}_i$ are $w_1, w_2, \dots, w_{M^{T_{\mathrm{I}}}}$, forming the set $\Omega$. Define $h: \Omega \to \mathcal{G}$, which maps the observed RSSI feedback to a channel estimate, where $\mathcal{G} := \{g_1, g_2, \dots, g_{M^{T_{\mathrm{I}}}}\}$, such that $h(w_m) = g_m$. That is, when Transmitter $i$ observes the RSSI feedback $\widetilde{\omega}_i$ to be $w_m$, the local channel estimate $\widetilde{g}_i$ is $g_m$. Based on the above definitions, the term $\Pr(\widetilde{g}_i = g_n | g_i = x)$ can be expanded over the intermediate pairs $(\widetilde{\omega}_i, \widehat{\omega}_i)$ (23), with each term factorizing as
$$\Pr\big(\widetilde{g}_i = g_n, \widetilde{\omega}_i = w_\ell, \widehat{\omega}_i = w_m \,\big|\, g_i = x\big) = \Pr\big(\widetilde{g}_i = g_n | \widetilde{\omega}_i = w_\ell\big) \Pr\big(\widetilde{\omega}_i = w_\ell | \widehat{\omega}_i = w_m\big) \Pr\big(\widehat{\omega}_i = w_m | g_i = x\big). \quad (24)$$
Now, the mapping $h(\cdot)$ is deterministic with $h(w_m) = g_m$; therefore $\Pr(\widetilde{g}_i = g_n | \widetilde{\omega}_i = w_\ell) = \delta_{n,\ell}$, where $\delta_{n,\ell}$ is the Kronecker delta, equal to 0 when $n \ne \ell$ and to 1 when $n = \ell$. Additionally, $\Pr(\widetilde{\omega}_i = w_\ell | \widehat{\omega}_i = w_m) = \prod_t \Gamma(w_\ell(t) | w_m(t))$ by definition (where $w_m(t)$ is the $t$-th component of $w_m$). This simplifies (23) to (25). Recall that $\widehat{\omega}_i = Q^{\mathrm{RS}}(\mathbf{P}_{\mathrm{I}} g_i + \sigma^2 \mathbf{1})$ by definition of the quantizer, so that $\Pr(\widehat{\omega}_i = w_m | g_i = x)$ is the indicator of the corresponding quantization cell (27); combining (27) and (25) then simplifies (23) to a closed-form expression for $\Pr(\widetilde{g}_i = g_n | g_i = x)$ (28). For a fixed DMC, we can find the $g_i^{\mathrm{MMSE}}$ which minimizes the distortion by taking the derivative of the distortion with respect to $g_n$ (29). Setting this derivative equal to zero shows that the distortion-minimizing $g_n$ is, by definition, the MMSE estimate of the channel given $\widetilde{\omega}_i = w_n$; rearranging (29) then yields the expression for the MMSE estimator given in Proposition III.2.
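The MMSE estimator of Proposition III.2 is a posterior mean over the quantization cells; below is a generic, scalar-channel numerical sketch of this idea (ours, with a grid approximation of the prior; it is an illustration of the posterior-mean principle, not the paper's exact closed form).

```python
import numpy as np

def mmsepd(w_obs, cells, gamma, phi, grid):
    """Posterior-mean (MMSE) channel estimate given feedback index w_obs.

    cells : cells[m] = 0/1 indicator on `grid` of {x : quantized RSSI = w_m}
    gamma : gamma[l, m] = Pr(feedback w_l | quantizer output w_m) (DMC)
    phi   : prior p.d.f. of the channel gain, evaluated on `grid`
    grid  : discretization of the channel gain axis
    """
    # Likelihood of observing w_obs for each grid point x.
    lik = sum(gamma[w_obs, m] * cells[m] for m in range(len(cells)))
    post = lik * phi
    return np.sum(grid * post) / np.sum(post)
```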
APPENDIX C
CALCULATIONS FOR THE ALMA

As defined in the main text, $\widetilde{g}_{ji}^k \in \{v_{ji,1}, \dots, v_{ji,R}\}$ and the p.d.f. of $\widehat{g}_{ji}$ is denoted by $\gamma_{ji}$ in general. Note that when $\widehat{g}_{ji}$ belongs to a discrete set, the integrals and $\gamma_{ji}$ can be replaced by a sum and a discrete probability function without any significant alteration of the results and calculations. Denoting the p.d.f. of $g_{ji}$ by $\phi_{ji}$, the distortion between $g_{ji}$ and $\widetilde{g}_{ji}^k$ can be written as
$$\mathbb{E}\big[(g_{ji} - \widetilde{g}_{ji}^k)^2\big] = \sum_{r=1}^{R} \int_x \int_{\widehat{x}} \Pr\big(\widetilde{g}_{ji}^k = v_{ji,r} \,\big|\, \widehat{g}_{ji} = \widehat{x}\big)\, \gamma_{ji}(\widehat{x}|x)\, \phi_{ji}(x)\, (x - v_{ji,r})^2\, dx\, d\widehat{x}, \quad (30)$$
which is the distortion observed by Transmitter $k$ when Transmitter $i$ communicates $g_{ji}$ in Phase II. As Transmitter $i$ estimates $g_{ji}$ as $\widehat{g}_{ji}$, the quantization operation $\mathcal{Q}_i^{\mathrm{II}}$ is performed, resulting in $\widehat{g}_{ji}$ being quantized into a certain representative $v_{ji,n}$ if $\widehat{g}_{ji} \in [u_{ji,n}, u_{ji,n+1})$. Given that Transmitter $i$ operates at the power level corresponding to $v_{ji,n}$, Transmitter $k$ will decode $v_{ji,r}$ with probability $\pi(r|n)$, as defined in Section IV. We can thus expand the term $\Pr(\widetilde{g}_{ji}^k = v_{ji,r} | \widehat{g}_{ji} = \widehat{x})$ in the following manner:
$$\Pr\big(\widetilde{g}_{ji}^k = v_{ji,r} \,\big|\, \widehat{g}_{ji} = \widehat{x}\big) = \sum_{n=1}^{R} \Pr\big(\widetilde{g}_{ji}^k = v_{ji,r} \,\big|\, \mathcal{Q}_i^{\mathrm{II}}(\widehat{g}_{ji}) = v_{ji,n}\big) \Pr\big(\mathcal{Q}_i^{\mathrm{II}}(\widehat{g}_{ji}) = v_{ji,n} \,\big|\, \widehat{g}_{ji} = \widehat{x}\big) = \sum_{n=1}^{R} \pi(r|n) \Pr\big(\mathcal{Q}_i^{\mathrm{II}}(\widehat{g}_{ji}) = v_{ji,n} \,\big|\, \widehat{g}_{ji} = \widehat{x}\big), \quad (31)$$
where we know that
$$\Pr\big(\mathcal{Q}_i^{\mathrm{II}}(\widehat{g}_{ji}) = v_{ji,n} \,\big|\, \widehat{g}_{ji} = \widehat{x}\big) = \begin{cases} 1 & \text{if } \widehat{x} \in [u_{ji,n}, u_{ji,n+1}), \\ 0 & \text{if } \widehat{x} \notin [u_{ji,n}, u_{ji,n+1}). \end{cases} \quad (32)$$
Substituting (32) and (31) into (30), we get
$$\mathbb{E}\big[(g_{ji} - \widetilde{g}_{ji}^k)^2\big] = \sum_{r=1}^{R} \sum_{n=1}^{R} \pi_{ji}(r|n) \int_{x=0}^{\infty} \int_{\widehat{x}=u_{ji,n}}^{u_{ji,n+1}} \gamma_{ji}(\widehat{x}|x)\, \phi_{ji}(x)\, (x - v_{ji,r})^2\, d\widehat{x}\, dx.$$
For fixed transition levels $u_{ji,n}$, the optimal representatives $v_{ji,r'}$ are obtained by setting the partial derivatives of the distortion with respect to $v_{ji,r'}$ to zero. That is,
$$\frac{\partial\, \mathbb{E}\big[(g_{ji} - \widetilde{g}_{ji}^k)^2\big]}{\partial v_{ji,r'}} = \sum_{n=1}^{R} \pi_{ji}(r'|n) \int_{x=0}^{\infty} \int_{\widehat{x}=u_{ji,n}}^{u_{ji,n+1}} 2\, \gamma_{ji}(\widehat{x}|x)\, \phi_{ji}(x)\, (x - v_{ji,r'})\, d\widehat{x}\, dx = 0, \quad (33)$$
which results in
$$v_{ji,r'} = \frac{\displaystyle\sum_{n=1}^{R} \pi_{ji}(r'|n) \int_{x=0}^{\infty} \int_{\widehat{x}=u_{ji,n}}^{u_{ji,n+1}} x\, \gamma_{ji}(\widehat{x}|x)\, \phi_{ji}(x)\, d\widehat{x}\, dx}{\displaystyle\sum_{n=1}^{R} \pi_{ji}(r'|n) \int_{x=0}^{\infty} \int_{\widehat{x}=u_{ji,n}}^{u_{ji,n+1}} \gamma_{ji}(\widehat{x}|x)\, \phi_{ji}(x)\, d\widehat{x}\, dx}, \quad (34)$$
with $u_{ji,1} = 0$ and $u_{ji,R+1} = \infty$ as the boundary conditions. For fixed representatives $v_{ji,r}$, the optimal transition levels $u_{ji,n'}$ are obtained by setting the partial derivatives of the distortion with respect to $u_{ji,n'}$ to zero. Using the second fundamental theorem of calculus, i.e., $\frac{d}{dx}\int_a^x f(t)\, dt = f(x)$, we obtain, for all $n' \in \{2, \dots, R\}$:
$$\frac{\partial\, \mathbb{E}\big[(g_{ji} - \widetilde{g}_{ji}^k)^2\big]}{\partial u_{ji,n'}} = \sum_{r=1}^{R} \big(\pi_{ji}(r|n'-1) - \pi_{ji}(r|n')\big) \int_0^{\infty} \gamma_{ji}(u_{ji,n'}|x)\, \phi_{ji}(x)\, (v_{ji,r} - x)^2\, dx = 0. \quad (35)$$
Solving the above conditions is very difficult, as the variable to be solved for appears inside the integral as an argument of $\gamma$. We therefore consider the special case where $\gamma_{ji}(\widehat{x}|x) = \delta(x - \widehat{x})$, with $\delta$ the Dirac delta function, which is 0 at all points except at 0 and whose integral over a neighborhood of 0 is 1. This corresponds to the case where the channel is perfectly estimated after Phase I. This choice directly transforms (34) into (7) of the ALMA, and we can simplify (35) into
$$\sum_{r=1}^{R} \big[\pi_{ji}(r|n'-1) - \pi_{ji}(r|n')\big]\, \phi_{ji}(u_{ji,n'})\, (v_{ji,r} - u_{ji,n'})^2 = 0. \quad (36)$$
Since $\sum_{r=1}^{R} \pi_{ji}(r|n') = 1$, we have $\sum_{r=1}^{R} \big[\pi_{ji}(r|n'-1) - \pi_{ji}(r|n')\big]\, (u_{ji,n'})^2 = 0$, resulting in
$$u_{ji,n'} = \frac{\sum_{r=1}^{R} \big[\pi_{ji}(r|n'-1) - \pi_{ji}(r|n')\big]\, v_{ji,r}^2}{2 \sum_{r=1}^{R} \big[\pi_{ji}(r|n'-1) - \pi_{ji}(r|n')\big]\, v_{ji,r}},$$
which is (8) as used in the ALMA.
72,186
[ "771954", "5857", "1068236" ]
[ "1289", "185180", "1289", "110918" ]
01741741
en
[ "sdv" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01741741/file/cerino2017.pdf
Mathieu Cerino (email: mathieu.cerino@ap-hm.fr), Svetlana Gorokhova, Pascal Laforet, Rabah Ben Yaou, Emmanuelle Salort-Campana, Jean Pouget, Shahram Attarian, Bruno Eymard, Jean-François Deleuze, Anne Boland, Anthony Behin, Tanya Stojkovic, Gisele Bonne, Nicolas Levy, Marc Bartoli, Martin Krahn (email: martin.krahn@univ-amu.fr)

Genetic characterization of a French cohort of GNE-mutation negative inclusion body myopathy patients using exome sequencing

Keywords: exome, hIBM, GNE, NGS, diagnosis, myopathy

ABSTRACT. INTRODUCTION: Hereditary inclusion body myopathy (hIBM) refers to a group of clinically and genetically heterogeneous diseases. The overlapping histochemical features of hIBM with other genetic disorders lead to low diagnostic rates with targeted single-gene sequencing. This is true for the most prevalent form of hIBM, GNEpathy. Thus, we used whole exome sequencing (WES) to evaluate whether a cohort of clinically suspected GNEpathy patients undiagnosed by targeted GNE analysis could be genetically characterized. METHODS: 20 patients with hIBM but undiagnosed by targeted GNE sequencing were analyzed using WES before data filtering on 306 genes associated with neuromuscular disorders. RESULTS: 7 patients out of 20 were found to have disease-causing mutations in genes associated with hIBM, in genes allowing for hIBM in the differential diagnosis, or in genes associated with an unexpected diagnosis. DISCUSSION: NGS is an efficient strategy in the context of hIBM, resulting in a molecular diagnosis for 35% of the patients initially undiagnosed by targeted GNE analysis.

INTRODUCTION

Hereditary inclusion body myopathies (hIBM) represent a heterogeneous group of muscular disorders defined by the relatively nonspecific criterion of rimmed vacuoles on muscle biopsy 1. GNEpathy 2, caused by mutations in GNE (UDP-N-acetylglucosamine-2-epimerase/N-acetylmannosamine kinase, MIM*603824) [START_REF] Eisenberg | The UDP-Nacetylglucosamine 2-epimerase/N-acetylmannosamine kinase gene is mutated in recessive hereditary inclusion body myopathy[END_REF], is the most common form of hIBM, with many clinical features overlapping with other forms of hIBM implicating other genes, or with forms whose underlying genetic defects are still unknown. Targeted analysis of GNE in a large recently-described French cohort with suspected GNEpathy provided only a 20% diagnostic yield (32 of 164 patients) [START_REF] Cerino | Novel pathogenic variants in a French cohort widen the mutational spectrum of GNE myopathy[END_REF]. In the present study, we evaluated the extent to which a cohort of clinically suspected GNEpathy patients, undiagnosed by targeted GNE analysis, could be genetically characterized by implicating other genes previously known to cause neuromuscular disorders, using whole exome sequencing (WES) associated with data filtering on 306 genes of interest.

METHODS

We selected 20 unrelated index cases (IC) with clinically suspected GNEpathy associated with rimmed vacuoles on muscle biopsy samples, but for which no GNE disease-causing mutation had been identified by direct targeted sequencing.
Samples had been prepared and stored by the Center of Biological Resources, Department of Medical Genetics, La Timone Hospital, Marseille, and were used following the ethical recommendations of our institution and according to the Declaration of Helsinki. All included patients gave their written consent prior to the genetic study, in accordance with French law. WES was performed using the SureSelect Human All Exon Kit version 5 (Agilent Technologies, Santa Clara, California) and the HiSeq 2000 (Illumina, San Diego, California). Sequencing data were processed with the Illumina pipeline (CASAVA 1.8.2) before variant calling with GATK 5 and annotation with ANNOVAR 6, using the GRCh37/hg19 version of the human genome. Coverage statistics were computed using VarAFT (Variant Analysis and Filtration Tool; http://varaft.eu, 2016), which uses BedTools 7. VarAFT was also used to sort and filter the obtained variants. Our initial analysis strategy focused on 306 genes previously reported to cause neuromuscular disorders, selected from the Gene Table of Neuromuscular Disorders 8 (including groups 1 to 5 and the main differential diagnosis genes), as previously described 9,10. For these 306 genes, a mean overall sequencing depth of 106X was obtained, with a mean coverage of the coding exons of 95% at 20X depth and 91% at 30X depth. The predicted pathogenicity of identified variants was determined using the UMD-predictor [START_REF] Salgado | UMD-Predictor: a High Throughput Sequencing Compliant System for Pathogenicity Prediction of any Human cDNA Substitution[END_REF], SIFT (Sorting Intolerant From Tolerant) [START_REF] Vaser | SIFT missense predictions for genomes[END_REF], PolyPhen-2 (Polymorphism Phenotyping v2) [START_REF] Adzhubei | A method and server for predicting damaging missense mutations[END_REF] and HSF (Human Splicing Finder) [START_REF] Desmet | Human Splicing Finder: an online bioinformatics tool to predict splicing signals[END_REF] software tools. Regarding the HSF in silico results, we defined four types of predicted splicing effects: 1) Probably damaging: predicted strong splicing effect due to a broken donor site (DS) or acceptor site (AS), and/or creation of a new DS/AS, and/or a strong possibility of a broken Exonic Splicing Enhancer (ESE) site; 2) Possibly damaging: predicted medium splicing effect relating to a newly created DS/AS and/or a medium possibility of a broken ESE site; 3) Uncertain: predicted mild splicing effect due to a newly created DS/AS and/or a low possibility of a broken ESE site; and 4) Not affected: predicted weak or no splicing effect. The overall pathogenicity score for each variant was determined according to the American College of Medical Genetics (ACMG) guidelines [START_REF] Richards | Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology[END_REF].
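As an illustration of the panel filtering step described earlier in this section, a minimal sketch of a gene-panel and frequency filter is given below; the column names, the file name and the effect categories are our own assumptions, not the exact VarAFT configuration, while the 0.2% frequency threshold matches the note accompanying Table 1.

```python
import pandas as pd

# Hypothetical file listing the 306 neuromuscular-disorder genes.
PANEL = set(open("neuromuscular_306_genes.txt").read().split())

def filter_variants(annotated: pd.DataFrame) -> pd.DataFrame:
    """Keep rare, coding variants falling within the 306-gene panel."""
    in_panel = annotated["gene"].isin(PANEL)
    rare = (annotated[["freq_1000g", "freq_esp", "freq_exac"]]
            .fillna(0).max(axis=1) < 0.002)      # < 0.2% in 1000G/ESP/ExAC
    coding = annotated["effect"].isin(
        ["missense", "nonsense", "frameshift", "splice"])
    return annotated[in_panel & rare & coding]
```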
The group with "definite diagnosis" consisted of the following patients: 1) Those carrying a homozygous variant classified as "pathogenic" using ACMG guidelines in a gene known to cause an autosomal recessive form of disease; 2) Compound heterozygotes carrying two variants classified as pathogenic; 3) Patients carrying one variant classified as pathogenic in a gene known to cause an autosomal dominant form of disease. The group with "probable diagnosis" was composed of patients carrying variants that were classified as "likely pathogenic" by ACMG guidelines. Patients carrying variants found to be pathogenic by certain prediction tools, but classified as "variants of uncertain significance" by ACMG guidelines were placed in the group with "possible diagnosis". For those patients in the "no established diagnosis" group, no variant compatible with the patient's phenotype was found. All disease-causing variants identified by WES were confirmed using direct targeted sequencing (Genetic analyzer 3500XL; Thermo Fisher Scientific, Waltham, Massachusetts) and the following gene sequence references: ACTA1 (NM_001100), CAPN3 (NM_000070), DES (NM_001927), FLNC (NM_001458), GYG1 (NM_004130), MYH2 (NM_017534), TARDBP (NM_007375), TTN (NM_001267550) and VCP (NM_007126). RESULTS All phenotypic and mutational data are detailed in Table 1. A definite diagnosis was obtained for seven index cases (ICs). Patient P1 harbored a previously reported mutation in TTN (Titin, MIM*188840) associated with hereditary myopathy with early respiratory failure (HMERF) [START_REF] Palmio | Hereditary myopathy with early respiratory failure: occurrence in various populations[END_REF] . The homozygous status of this mutation is consistent with the parental consanguinity. For Patients P2 and P5, the same heterozygous mutation in VCP (Valosin-Containing Protein, MIM *601023), previously described in the literature [START_REF] Stojkovic | Clinical outcome in 19 French and Spanish patients with valosin-containing protein myopathy associated with Paget's disease of bone and frontotemporal dementia[END_REF] , was discovered and associated with similar onset and clinical features (distal myopathy of upper and lower limbs). Compound heterozygous known mutations in TTN 18, [START_REF] Hackman | Truncating mutations in C-terminal titin may cause more severe tibial muscular dystrophy (TMD)[END_REF] were found in patient P3, whereas patient P4 harbored a previously described heterozygous variant in DES (Desmin, MIM*125660) [START_REF] Bär | Conspicuous involvement of desmin tail mutations in diverse cardiac and skeletal myopathies[END_REF] leading to cardiomyopathy and myofibrillar abnormalities on the muscle biopsy, features that were retrieved in patient P4. For patient P6, a known FLNC (Filamin C, MIM *102565) mutation was found [START_REF] Vorgerd | A mutation in the dimerization domain of filamin c causes a novel type of autosomal dominant myofibrillar myopathy[END_REF] . Surprisingly, we identified compound heterozygous mutations for the GYG1 (Glycogenin 1, MIM*603942) gene in patient P7, associated with polyglucosan body myopathy type 2. In this patient, we found a previously described GYG1 variant with a proven deleterious effect on splicing [START_REF] Malfatti | A new muscle glycogen storage disease associated with glycogenin-1 deficiency[END_REF] , associated with a novel GYG1 mutation leading to a frameshift of the reading frame and the introduction of a premature translation termination codon. 
RESULTS

All phenotypic and mutational data are detailed in Table 1. A definite diagnosis was obtained for seven index cases (ICs). Patient P1 harbored a previously reported mutation in TTN (Titin, MIM*188840) associated with hereditary myopathy with early respiratory failure (HMERF) [START_REF] Palmio | Hereditary myopathy with early respiratory failure: occurrence in various populations[END_REF]; the homozygous status of this mutation is consistent with the parental consanguinity. For patients P2 and P5, the same heterozygous mutation in VCP (Valosin-Containing Protein, MIM*601023), previously described in the literature [START_REF] Stojkovic | Clinical outcome in 19 French and Spanish patients with valosin-containing protein myopathy associated with Paget's disease of bone and frontotemporal dementia[END_REF], was discovered, associated with similar onset and clinical features (distal myopathy of upper and lower limbs). Compound heterozygous known mutations in TTN 18,[START_REF] Hackman | Truncating mutations in C-terminal titin may cause more severe tibial muscular dystrophy (TMD)[END_REF] were found in patient P3, whereas patient P4 harbored a previously described heterozygous variant in DES (Desmin, MIM*125660) [START_REF] Bär | Conspicuous involvement of desmin tail mutations in diverse cardiac and skeletal myopathies[END_REF] known to lead to cardiomyopathy and myofibrillar abnormalities on the muscle biopsy, features that were indeed observed in patient P4. For patient P6, a known FLNC (Filamin C, MIM*102565) mutation was found [START_REF] Vorgerd | A mutation in the dimerization domain of filamin c causes a novel type of autosomal dominant myofibrillar myopathy[END_REF]. Surprisingly, we identified compound heterozygous mutations in the GYG1 (Glycogenin 1, MIM*603942) gene in patient P7, associated with polyglucosan body myopathy type 2. In this patient, we found a previously described GYG1 variant with a proven deleterious effect on splicing [START_REF] Malfatti | A new muscle glycogen storage disease associated with glycogenin-1 deficiency[END_REF], associated with a novel GYG1 mutation leading to a frameshift of the reading frame and the introduction of a premature translation termination codon. Further investigations revealed additional clinical and histo-immunological features, suggesting a polyglucosan body myopathy (data not shown). A probable diagnosis was obtained for patients P8 and P9, who carried novel compound heterozygous TTN mutations and a heterozygous TARDBP (Tar DNA-Binding Protein, MIM*605078) variant, respectively, while two novel heterozygous variants in the FLNC and ACTA1 (Actin, Alpha, Skeletal Muscle 1, MIM*102610) genes fulfilled the overall pathogenicity score for a possible diagnosis in patients P10 and P11, respectively. Finally, 9 ICs remained without a molecular diagnosis following mutational analysis of the 306 genes of interest. Considering only the first (definite) group of patients, the diagnostic yield was 35% in this cohort (7/20).

DISCUSSION

Next-Generation Sequencing (NGS) is already used by many genetics laboratories and is being used with increasing frequency as the standard initial analysis for myopathies and other heterogeneous genetic disorders. The molecular diagnosis yield of 35% obtained in this study is consistent with other reports showing a range of 25 to 50 percent for the diagnosis of rare genetic disorders by WES 23,24. Our study illustrates that an NGS approach is more efficient than the gene-by-gene strategy for several reasons. First, it allowed us to explore genes responsible for disorders within the differential diagnosis of hIBM, including VCP and DES. Second, our strategy permitted sequencing of large-sized genes, such as TTN and FLNC, which is not routinely performed, leading to the identification of variants in five index cases. Third, this approach allowed us to correct the initial diagnosis of hIBM in one patient, based on the presence of rimmed vacuoles on muscle biopsy, to a different muscle disorder caused by variants in the gene GYG1. Thus, NGS has the potential to rectify a misdiagnosis due to a misleading muscle biopsy. Although using WES to explore a subset of genes might not provide as much target sequence coverage as a sequencing strategy specifically designed for these genes 10, this approach has several advantages. The sequencing results of samples in which no pathogenic variant was identified in the initially explored genes can be reanalyzed to explore additional genes, or all genes in the whole exome; in this way, further analyses are ongoing for the cases in our cohort that remain without genetic characterization. Another advantage of WES over targeted exome sequencing is its versatility and its applicability to many different diseases, as different sets of genes can be assessed without the need to develop and test a specific sequencing strategy 25,[START_REF] Dias | An analysis of exome sequencing for diagnostic testing of the genes associated with muscle disease and spastic paraplegia[END_REF]. In conclusion, the exome-based sequencing strategy described here is an efficient way to diagnose genetically heterogeneous disorders such as hIBMs.
Table 1: Pathogenicity assessment for the identified variants in patients with definite, probable and possible diagnoses. Each row gives: Patient | Gender | Phenotype / genetic inheritance | Muscle biopsy | Gene/variant, status | UMD-predictor; SIFT; PolyPhen-2 | ACMG guidelines | Splicing prediction (HSF) | Pathogenic variant described in literature.

Patients with definite diagnosis:
P1 | F | Proximal myopathy of lower limbs at onset (40 yo) evolving towards HMERF / AR | Rimmed vacuoles, cytoplasmic inclusions, disruption of the intermyofibrillar network | TTN: c.95195C>T (p.Pro31732Leu), HOZ | Pathogenic; Damaging; Probably damaging | Pathogenic | NP | YES [16]
P2 | F | Distal myopathy of upper and lower limbs (tibialis anterior muscle), no axial muscle weakness / AD | Rimmed vacuoles, disruption of the intermyofibrillar network | VCP: c.410C>T (p.Pro137Leu), HTZ | Pathogenic; Damaging; Probably damaging | Pathogenic | Not affected | YES [17]
P3 | F | Early onset (childhood) distal myopathy of lower limbs (tibial muscular dystrophy), slowly evolving / AR | Rimmed vacuoles, dystrophic muscle biopsy | TTN: c.102271C>T (p.Arg34091Trp), HTZ | Pathogenic; Damaging; Probably damaging | Pathogenic | NP | YES [18] — and TTN: c.107647delT (p.Ser35883Glnfs*10), HTZ | NP; NP; NP | Pathogenic | NP | YES [19]
P4 | M | Late onset (45 yo) distal myopathy of upper and lower limbs with cardiac involvement / AD | Rimmed vacuoles, atrophic fibers, disorganized myofibrillar network | DES: c.1360C>T (p.Arg454Trp), HTZ | Pathogenic; Damaging; Probably damaging | Pathogenic | Not affected | YES [20]
P5 | M | Distal myopathy of upper and lower limbs, onset at 30 yo / AD | Rare rimmed vacuoles (<5) | VCP: c.410C>T (p.Pro137Leu), HTZ | Pathogenic; Damaging; Probably damaging | Pathogenic | Not affected | YES [17]
P6 | F | Late onset (45 yo) distal myopathy of lower limbs and pelvic girdle myopathy / AD | Rimmed vacuoles | FLNC: c.8130G>A (p.Trp2710*), HTZ | NP; NP; NP | Pathogenic | NP | YES [21]
P7 | F | Late onset (45 yo) distal myopathy of upper and lower limbs with slow evolution / AR | Rimmed vacuoles (initial biopsy), recharacterized as polyglucosan bodies (second biopsy) | GYG1: c.143+3G>C (p.Asp3Glufs*4), comp. HTZ | NP; NP; NP | Pathogenic | Probably damaging | YES [22] — and GYG1: c.996_1005del10 (p.Tyr332fs*1), comp. HTZ | NP; NP; NP | Pathogenic | NP | NO

Patients with probable diagnosis:
P8 | M | Early onset (14 yo) distal myopathy of lower limbs (tibial muscular dystrophy) evolving towards hamstring muscles with quadriceps sparing / AR | Rimmed vacuoles | TTN: c.15346C>T (p.Arg5116*), HTZ | NP; NP; NP | Pathogenic | NP | NO — and TTN: c.107680G>A (p.Gly35894Arg), HTZ | Pathogenic; Damaging; Probably damaging | Likely pathogenic | NP | NO
P9* | M | Late onset (50 yo) distal myopathy of upper and lower limbs / AD | Rimmed vacuoles, no inflammation, atrophic fibers | TARDBP: c.1127G>T (p.Gly376Val), HTZ | Pathogenic; Tolerated; Benign | Likely pathogenic | Possibly damaging | NO†

Patients with possible diagnosis:
P10 | M | Limb girdle muscular dystrophy / AD | Rare rimmed vacuoles (<5) | FLNC: c.6526C>T (p.Arg2176Cys), HTZ | Pathogenic; Tolerated; Probably damaging | Uncertain significance | Not affected | NO
P11 | M | Distal myopathy of lower limbs with very slow evolution / AD | Rimmed vacuoles | ACTA1: c.437C>T (p.Ala146Val), HTZ | Pathogenic; Tolerated; Probably damaging | Uncertain significance | Uncertain | NO

Legend: DM: distal myopathy; PM: proximal myopathy; yo: years old; HOZ: homozygous; HTZ: heterozygous; comp. HTZ: compound heterozygous (with confirmed segregation analysis); NP: not performed (the UMD-predictor, SIFT and PolyPhen-2 algorithms do not provide a pathogenicity score for variants creating a stop or a frameshift); AD: autosomal dominant; AR: autosomal recessive; HMERF: hereditary myopathy with early respiratory failure; UMD-predictor 11: Universal Mutation Database predictor; SIFT 12: Sorting Intolerant From Tolerant; PolyPhen-2 13: Polymorphism Phenotyping v2; HSF 14: Human Splicing Finder; ACMG 15: American College of Medical Genetics. The frequency in the 1000G, ESP and ExAC databases of all the variants described in Table 1 is lower than 0.2%. *Additional variant of uncertain significance found for patient P9: MYH2: c.2090A>G (p.His697Arg). †Variant affecting the same nucleic and amino acid positions as another variant, c.1127G>A (p.Gly376Asp), previously described in the literature [START_REF] Conforti | TARDBP gene mutations in south Italian patients with amyotrophic lateral sclerosis[END_REF][START_REF] Solski | A novel TARDBP insertion/deletion mutation in the flail arm variant of amyotrophic lateral sclerosis[END_REF][START_REF] Lattante | TARDBP and FUS Mutations Associated with Amyotrophic Lateral Sclerosis: Summary and Update[END_REF] in two different familial amyotrophic lateral sclerosis cases with a similar phenotype presentation as patient P9 (upper and lower limb weakness with no cognitive impairment).

This study was supported by the France Génomique infrastructure (grant no. ANR-10-INBS-09) managed by the National Research Agency (ANR) as part of the Investment for the Future program, by the Fondation Maladies Rares, FHU-MaRCHE, the APHM, Inserm, the "Bureau des PU-PH de l'Assistance Publique - Hôpitaux de Marseille (AP-HM)", and by a grant from the European Community Seventh Framework Programme FP7/2007-2013 (Grant Agreement No. 2012-305121), "Integrated European omics research project for diagnosis and therapy in rare neuromuscular and neurodegenerative diseases" (NEUROMICS). The authors sincerely thank Christel Castro, Jean-Pierre Desvignes, David Salgado, Christophe Béroud, Eric Salvo, Rafaelle Bernard, Jocelyn Laporte, Johann Bohm and Mark Lathrop for their contributions to this work. We also thank the patients and their referring physicians for their participation.
19,086
[ "781143", "930670", "18130", "835082", "1042088", "757793", "1015006", "757786", "183818", "17086", "14594", "16646" ]
[ "46221", "223602", "46221", "41003", "1053657", "46221", "432938", "46221", "432938", "46221", "432938", "41003", "40508", "40508", "41003", "41003", "1053657", "46221", "223602", "46221", "223602", "46221", "223602" ]
01745366
en
[ "math" ]
2024/03/05 22:32:07
2019
https://hal.science/hal-01745366/file/LSE-FDM.pdf
Weizhu Bao, Rémi Carles (email: remi.carles@math.cnrs.fr), Chunmei Su, and Qinglin Tang (email: qinglintang@scu.edu.cn)

ERROR ESTIMATES OF A REGULARIZED FINITE DIFFERENCE METHOD FOR THE LOGARITHMIC SCHRÖDINGER EQUATION *

Keywords: Logarithmic Schrödinger equation, logarithmic nonlinearity, regularized logarithmic Schrödinger equation, semi-implicit finite difference method, error estimates, convergence rate. AMS subject classifications: 35Q40, 35Q55, 65M15, 81Q05.

We present a regularized finite difference method for the logarithmic Schrödinger equation (LogSE) and establish its error bound. Due to the blow-up of the logarithmic nonlinearity, i.e. ln ρ → -∞ when ρ → 0+, with ρ = |u|² being the density and u being the complex-valued wave function or order parameter, there are significant difficulties in designing numerical methods and establishing their error bounds for the LogSE. In order to suppress the round-off error and to avoid blow-up, a regularized logarithmic Schrödinger equation (RLogSE) is proposed with a small regularization parameter 0 < ε ≪ 1, and linear convergence is established between the solutions of the RLogSE and the LogSE in terms of ε. Then a semi-implicit finite difference method is presented for discretizing the RLogSE, and error estimates are established in terms of the mesh size h and time step τ as well as the small regularization parameter ε. Finally, numerical results are reported to confirm our error bounds.

1. Introduction. We consider the logarithmic Schrödinger equation (LogSE), which arises in a model of nonlinear wave mechanics (cf. [START_REF] Bia Lynicki-Birula | Nonlinear wave mechanics[END_REF]),

(1.1) i∂_t u(x,t) + Δu(x,t) = λ u(x,t) ln(|u(x,t)|²), x ∈ Ω, t > 0; u(x,0) = u₀(x), x ∈ Ω,

where t is time, x ∈ R^d (d = 1, 2, 3) is the spatial coordinate, λ ∈ R\{0} measures the strength of the nonlinear interaction, u := u(x,t) ∈ C is the dimensionless wave function or order parameter, and Ω = R^d or Ω ⊂ R^d is a bounded domain with homogeneous Dirichlet or periodic boundary condition fixed on the boundary. It admits applications to quantum mechanics [START_REF] Bia Lynicki-Birula | Nonlinear wave mechanics[END_REF][START_REF] Gaussons | Solitons of the logarithmic Schrödinger equation[END_REF], quantum optics [START_REF] Buljan | Incoherent white light solitons in logarithmically saturable noninstantaneous nonlinear media[END_REF][START_REF] Krolikowski | Unified model for partially coherent solitons in logaritmically nonlinear media[END_REF], nuclear physics [START_REF] Hefter | Application of the nonlinear Schrödinger equation with a logarithmic inhomogeneous term to nuclear physics[END_REF], transport and diffusion phenomena [START_REF] Hansson | Propagation of partially coherent solitons in saturable logarithmic media: A comparative analysis[END_REF][START_REF] Martino | Logarithmic Schrödinger-like equation as a model for magma transport[END_REF], open quantum systems [START_REF] Hernandez | General properties of Gausson-conserving descriptions of quantal damped motion[END_REF][START_REF] Yasue | Quantum mechanics of nonconservative systems[END_REF], effective quantum gravity [START_REF] Zloshchastiev | Logarithmic nonlinearity in theories of quantum gravity: Origin of time and observational consequences[END_REF], and the theory of superfluidity and Bose-Einstein condensation [START_REF] Avdeenkov | Quantum bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent[END_REF].
The logarithmic Schrödinger equation enjoys three conservation laws, mass, momentum and energy [START_REF]Semilinear Schrödinger equations[END_REF][START_REF] Cazenave | Équations d'évolution avec non linéarité logarithmique[END_REF], as in the case of the nonlinear Schrödinger equation with a power-like nonlinearity (e.g. cubic):

M(t) := ‖u(·,t)‖²_{L²(Ω)} = ∫_Ω |u(x,t)|² dx ≡ ∫_Ω |u₀(x)|² dx = M(0),
P(t) := Im ∫_Ω \overline{u}(x,t) ∇u(x,t) dx ≡ Im ∫_Ω \overline{u₀}(x) ∇u₀(x) dx = P(0), t ≥ 0,
(1.2) E(t) := ∫_Ω [ |∇u(x,t)|² + λ F(|u(x,t)|²) ] dx ≡ ∫_Ω [ |∇u₀(x)|² + λ F(|u₀(x)|²) ] dx = E(0),

where Im f and \overline{f} denote the imaginary part and the complex conjugate of f, respectively, and

(1.3) F(ρ) = ∫₀^ρ ln(s) ds = ρ ln ρ - ρ, ρ ≥ 0.

On a mathematical level, the logarithmic nonlinearity possesses several features that make it quite different from more standard nonlinear Schrödinger equations. First, the nonlinearity is not locally Lipschitz continuous, because of the behavior of the logarithm function at the origin. Note that, in view of numerical simulation, this singularity of the "nonlinear potential" λ ln(|u(x,t)|²) makes the choice of a discretization quite delicate. The second aspect is that, whichever the sign of λ, the nonlinear potential energy in E has no definite sign. In fact, whether the nonlinearity is repulsive/attractive (or defocusing/focusing) depends on both λ and the value of the density ρ := ρ(x,t) = |u(x,t)|². When λ > 0, the nonlinearity λρ ln ρ is repulsive when ρ > 1, and attractive when 0 < ρ < 1. On the other hand, when λ < 0, the nonlinearity λρ ln ρ is attractive when ρ > 1, and repulsive when 0 < ρ < 1. Therefore, solving the Cauchy problem for (1.1) is not a trivial issue, and constructing solutions which are defined for all time requires some work; see [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF][START_REF] Cazenave | Équations d'évolution avec non linéarité logarithmique[END_REF][START_REF] Guerrero | Global H 1 solvability of the 3D logarithmic Schrödinger equation[END_REF]. Essentially, the outcome is that if u₀ belongs to (a subset of) H¹(Ω), then (1.1) has a unique, global solution, regardless of the space dimension d (see also Theorem 2.2 below).

Next, the large time behavior reveals new phenomena. A first remark suggests that nonlinear effects are weak. Indeed, unlike what happens in the case of a homogeneous nonlinearity (classically of the form λ|u|^p u), replacing u with ku (k ∈ C \ {0}) in (1.1) has only little effect, since we have

i∂_t(ku) + Δ(ku) = λ ku ln(|ku|²) - λ(ln|k|²) ku.

The scaling factor thus corresponds to a purely time-dependent gauge transform: ku(x,t) e^{-itλ ln|k|²} solves (1.1) (with initial datum ku₀). In particular, the size of the initial datum does not influence the dynamics of the solution. In spite of this property, which is reminiscent of linear equations, nonlinear effects are stronger in (1.1) than in, say, cubic Schrödinger equations, in several respects. For Ω = R^d, it was established in [START_REF] Cazenave | Stable solutions of the logarithmic Schrödinger equation[END_REF] that in the case λ < 0, no solution is dispersive (not even for small data, in view of the above remark), while if λ > 0, the results from [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF] show that every solution disperses, at a faster rate than for the linear equation.
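To see this lack of local Lipschitz continuity concretely, here is a small numerical sketch (our illustration, not from the paper; Python is used throughout for examples): the difference quotient of f(z) = z ln(|z|²) at the origin grows like 2|ln z|, whereas the regularized nonlinearity introduced in Section 2 stays Lipschitz for any fixed ε > 0.

```python
import numpy as np

def f(z):
    """The LogSE nonlinearity f(z) = z * ln(|z|^2), defined for z != 0 (f(0) = 0)."""
    return z * np.log(np.abs(z) ** 2)

# Difference quotient at the origin: |f(z) - f(0)| / |z| = |ln(|z|^2)| = 2|ln z|,
# which blows up as z -> 0, so no local Lipschitz constant exists near 0.
for z in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(f"z = {z:.0e}   |f(z) - f(0)| / |z| = {abs(f(z)) / z:8.2f}")

eps = 1e-4  # illustrative value of the regularization parameter

def f_eps(z):
    """Regularized nonlinearity z * ln((eps + |z|)^2), cf. (2.1) below."""
    return z * np.log((eps + np.abs(z)) ** 2)

# The regularized quotient is bounded by 2|ln eps| near the origin:
print(f"regularized quotient at z = 1e-9: {abs(f_eps(1e-9)) / 1e-9:8.2f}")
```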
In view of the gauge invariance of the nonlinearity, for Ω = R^d, (1.1) enjoys the standard Galilean invariance: if u(x,t) solves (1.1), then, for any v ∈ R^d, so does u(x - 2vt, t) e^{iv·x - i|v|²t}. A remarkable feature of (1.1) is that it possesses a large set of explicit solutions. In the case Ω = R^d: if u₀ is Gaussian, u(·,t) is Gaussian for all time, and solving (1.1) amounts to solving ordinary differential equations [START_REF] Bia Lynicki-Birula | Nonlinear wave mechanics[END_REF]. For simplicity of notation, we take the one-dimensional case as an example. If the initial datum in (1.1) with Ω = R is taken as

u₀(x) = b₀ e^{-(a₀/2)x² + ivx}, x ∈ R,

where a₀, b₀ ∈ C and v ∈ R are given constants satisfying α₀ := Re a₀ > 0 (Re f denoting the real part of f), then the solution of (1.1) is given by [START_REF] Ardila | Orbital stability of Gausson solutions to logarithmic Schrödinger equations[END_REF][START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF]

(1.4) u(x,t) = (b₀/√(r(t))) e^{i(vx - v²t) + Y(x-2vt, t)}, x ∈ R, t ≥ 0,

with

(1.5) Y(x,t) = -iφ(t) - α₀ x²/(2 r(t)²) + i (ṙ(t)/r(t)) x²/4, x ∈ R, t ≥ 0,

where φ := φ(t) ∈ R and r := r(t) > 0 solve the ODEs [START_REF] Ardila | Orbital stability of Gausson solutions to logarithmic Schrödinger equations[END_REF][START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF]

(1.6) φ̇ = α₀/r² + λ ln|b₀|² - λ ln r, φ(0) = 0,
     r̈ = 4α₀²/r³ + 4λα₀/r, r(0) = 1, ṙ(0) = -2 Im a₀.

In the case λ < 0, the function r is (time) periodic (in agreement with the absence of dispersive effects). In particular, if a₀ = -λ > 0, it follows from (1.6) that r(t) ≡ 1 and φ(t) = φ₀ t with φ₀ = λ(ln(|b₀|²) - 1), which generates the uniformly moving Gausson [START_REF] Ardila | Orbital stability of Gausson solutions to logarithmic Schrödinger equations[END_REF][START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF]

(1.7) u(x,t) = b₀ e^{(λ/2)(x-2vt)² + i(vx - (φ₀+v²)t)}, x ∈ R, t ≥ 0.

As a very special case with b₀ = e^{1/2} and v = 0, so that φ₀ = 0, one gets the static Gausson

(1.8) u(x,t) = e^{1/2} e^{λ|x|²/2}, x ∈ R, t ≥ 0.

This special solution is orbitally stable [START_REF] Cazenave | Stable solutions of the logarithmic Schrödinger equation[END_REF][START_REF] Cazenave | Orbital stability of standing waves for some nonlinear Schrödinger equations[END_REF]. On the other hand, in the case λ > 0, it is proven in [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF] that for general initial data (not necessarily Gaussian), there exists a universal dynamics. For extensions to higher dimensions, we refer to [START_REF] Ardila | Orbital stability of Gausson solutions to logarithmic Schrödinger equations[END_REF][START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF] and references therein. Therefore, (1.1) possesses several specific features, which make it quite different from the nonlinear Schrödinger equation.
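As an illustration of how (1.4)-(1.6) reduce the PDE to ODEs, the following sketch (an assumption-laden example of ours, not code from the paper) integrates (1.6) with SciPy and checks the Gausson case a₀ = -λ, for which r(t) ≡ 1 and φ(t) = φ₀t are expected; the time horizon and tolerances are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions): focusing case lambda < 0, and
# a0 = -lambda so that the exact solution is the Gausson with r(t) == 1.
lam = -1.0
a0 = -lam + 0.0j          # alpha0 = Re a0 = -lambda > 0
b0 = np.exp(0.5)          # with v = 0, this is the static Gausson (1.8)
alpha0 = a0.real

def rhs(t, y):
    """ODE system (1.6) with state y = (r, r', phi)."""
    r, rdot, phi = y
    rddot = 4.0 * alpha0**2 / r**3 + 4.0 * lam * alpha0 / r
    phidot = alpha0 / r**2 + lam * np.log(abs(b0)**2) - lam * np.log(r)
    return [rdot, rddot, phidot]

y0 = [1.0, -2.0 * a0.imag, 0.0]        # r(0) = 1, r'(0) = -2 Im a0, phi(0) = 0
sol = solve_ivp(rhs, (0.0, 10.0), y0, t_eval=np.linspace(0.0, 10.0, 201),
                rtol=1e-10, atol=1e-12)

r, phi = sol.y[0], sol.y[2]
phi0 = lam * (np.log(abs(b0)**2) - 1.0)  # expected slope of phi(t)
print("max |r - 1|        :", np.max(np.abs(r - 1.0)))           # ~ 0
print("max |phi - phi0*t| :", np.max(np.abs(phi - phi0 * sol.t)))  # ~ 0
```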
Different numerical methods have been proposed and analyzed for the nonlinear Schrödinger equation with smooth nonlinearity (e.g. cubic nonlinearity) in the literature, such as finite difference methods [START_REF] Bao | Uniform error estimates of finite difference methods for the nonlinear Schrödinger equation with wave operator[END_REF][START_REF]Optimal error estimates of finite difference methods for the Gross-Pitaevskii equation with angular momentum rotation[END_REF], finite element methods [START_REF] Akrivis | On fully discrete Galerkin methods of second-order temporal accuracy for the nonlinear Schrödinger equation[END_REF][START_REF] Karakashian | A space-time finite element method for the nonlinear Schrödinger equation: the continuous Galerkin method[END_REF] and time-splitting pseudospectral methods [START_REF] Bao | Numerical solution of the Gross-Pitaevskii equation for Bose-Einstein condensation[END_REF][START_REF] Taha | Analytical and numerical aspects of certain nonlinear evolution equations. II. Numerical, nonlinear Schrödinger equation[END_REF]. However, they cannot be applied directly to the LogSE (1.1), due to the blow-up of the logarithmic nonlinearity, i.e. ln ρ → -∞ when ρ → 0+. The main aim of this paper is to present a regularized finite difference method for the LogSE (1.1), by introducing a proper regularized logarithmic Schrödinger equation (RLogSE) and then discretizing the RLogSE via a semi-implicit finite difference method. Error estimates will be established between the solutions of the LogSE and the RLogSE, as well as their numerical approximations.

The rest of the paper is organized as follows. In Section 2, we propose a regularized version of (1.1) with a small regularization parameter 0 < ε ≪ 1, analyze its properties, and study the convergence of its solution to the solution of (1.1). In Section 3, we introduce a semi-implicit finite difference method for discretizing the regularized logarithmic Schrödinger equation, and prove an error estimate in which the dependence of the constants on the regularization parameter ε is tracked very explicitly. Finally, numerical results are provided in Section 4 to confirm our error bounds and to demonstrate the efficiency and accuracy of the proposed numerical method. Throughout the paper, we use H^m(Ω) and ‖·‖_{H^m(Ω)} to denote the standard Sobolev spaces and their norms, respectively. In particular, the norm and inner product of L²(Ω) = H⁰(Ω) are denoted by ‖·‖_{L²(Ω)} and (·,·), respectively. Moreover, we write A ≲ B to mean that there exists a generic constant C > 0, independent of the regularization parameter ε, the time step τ and the mesh size h, such that A ≤ C B, and A ≲_c B to mean that the constant C depends on c.

2. A regularized logarithmic Schrödinger equation. It turns out that a direct simulation of the solution of (1.1) is very delicate, due to the singularity of the logarithm at the origin, as discussed in the introduction. Instead of working directly with (1.1), we shall consider the following regularized logarithmic Schrödinger equation (RLogSE), with a small regularization parameter 0 < ε ≪ 1:

(2.1) i∂_t u^ε(x,t) + Δu^ε(x,t) = λ u^ε(x,t) ln(ε + |u^ε(x,t)|)², x ∈ Ω, t > 0; u^ε(x,0) = u₀(x), x ∈ Ω.

2.1. Conserved quantities. For the RLogSE (2.1), it can be similarly deduced that the mass, momentum, and energy are conserved.
Proposition 2.1. The mass, momentum, and 'regularized' energy are formally conserved for the RLogSE (2.1):

M^ε(t) := ∫_Ω |u^ε(x,t)|² dx ≡ ∫_Ω |u₀(x)|² dx = M(0),
P^ε(t) := Im ∫_Ω \overline{u^ε}(x,t) ∇u^ε(x,t) dx ≡ Im ∫_Ω \overline{u₀}(x) ∇u₀(x) dx = P(0), t ≥ 0,
(2.2) E^ε(t) := ∫_Ω [ |∇u^ε(x,t)|² + λ F_ε(|u^ε(x,t)|²) ] dx ≡ ∫_Ω [ |∇u₀(x)|² + λ F_ε(|u₀(x)|²) ] dx = E^ε(0),

where

(2.3) F_ε(ρ) = ∫₀^ρ ln(ε + √s)² ds = ρ ln(ε + √ρ)² - ρ + 2ε√ρ - ε² ln(1 + √ρ/ε)², ρ ≥ 0.

Proof. The conservation of mass and momentum is standard, and relies on the fact that the right-hand side of (2.1) is u^ε multiplied by a real quantity. For the energy, since F_ε'(ρ) = ln(ε + √ρ)², we compute

d/dt E^ε(t) = 2 Re ∫_Ω [ ∇∂_t u^ε · ∇\overline{u^ε} + λ ∂_t u^ε \overline{u^ε} ln(ε + |u^ε|)² ](x,t) dx
 = 2 Re ∫_Ω ∂_t u^ε · \overline{[ -Δu^ε + λ u^ε ln(ε + |u^ε|)² ]}(x,t) dx
 = 2 Re ∫_Ω ∂_t u^ε · \overline{i ∂_t u^ε}(x,t) dx = 2 Re ( -i ∫_Ω |∂_t u^ε(x,t)|² dx ) = 0, t ≥ 0,

which completes the proof.

Note, however, that since the above 'regularized' energy involves the L¹-norm of u^ε for any ε > 0, E^ε is obviously well defined for u₀ ∈ H¹(Ω) when Ω has finite measure, but not when Ω = R^d. This aspect is discussed in more detail in Subsections 2.3.3 and 2.4.

2.2. The Cauchy problem. For α > 0 and Ω = R^d, denote by L²_α the weighted L² space

L²_α := { v ∈ L²(R^d), x ↦ ⟨x⟩^α v(x) ∈ L²(R^d) }, where ⟨x⟩ := √(1 + |x|²),

with norm ‖v‖_{L²_α} := ‖⟨x⟩^α v(x)‖_{L²(R^d)}. In the case where Ω is bounded, we simply set L²_α = L²(Ω). Regarding the Cauchy problems (1.1) and (2.1), we have the following result.

Theorem 2.2. Let λ ∈ R and ε > 0. Consider (1.1) and (2.1) on Ω = R^d, or on a bounded Ω with homogeneous Dirichlet or periodic boundary condition. Consider an initial datum u₀ ∈ H¹₀(Ω) ∩ L²_α, for some 0 < α ≤ 1.
• There exists a unique, global solution u ∈ L^∞_loc(R; H¹₀(Ω) ∩ L²_α) to (1.1), and a unique, global solution u^ε ∈ L^∞_loc(R; H¹₀(Ω) ∩ L²_α) to (2.1).
• If in addition u₀ ∈ H²(Ω), then u, u^ε ∈ L^∞_loc(R; H²(Ω)).
• In the case Ω = R^d, if in addition u₀ ∈ H² ∩ L²₂, then u, u^ε ∈ L^∞_loc(R; H² ∩ L²₂).

Proof. This result can be proved by using more or less directly the arguments invoked in [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF]. First, for fixed ε > 0, the nonlinearity in (2.1) is locally Lipschitz, and grows more slowly than any power for large |u^ε|. Therefore, the standard Cauchy theory for nonlinear Schrödinger equations applies (see in particular [START_REF]Semilinear Schrödinger equations[END_REF], Corollary 3.3.11 and Theorem 3.4.1), and so if u₀ ∈ H¹₀(Ω), then (2.1) has a unique solution u^ε ∈ L^∞_loc(R; H¹₀(Ω)). Higher Sobolev regularity is propagated, with controls depending on ε in general. A solution u of (1.1) can be obtained by compactness arguments, by letting ε → 0 in (2.1), provided that we have suitable bounds independent of ε > 0. We have

i∂_t ∇u^ε + Δ∇u^ε = 2λ ln(ε + |u^ε|) ∇u^ε + 2λ (u^ε/(ε + |u^ε|)) ∇|u^ε|.

The standard energy estimate (multiply the above equation by ∇\overline{u^ε}, integrate over Ω and take the imaginary part) yields, when Ω = R^d or when periodic boundary conditions are considered,

(1/2) d/dt ‖∇u^ε‖²_{L²(Ω)} ≤ 2|λ| ∫_Ω (|u^ε|/(ε + |u^ε|)) |∇|u^ε|| |∇u^ε| dx ≤ 2|λ| ‖∇u^ε‖²_{L²(Ω)}.

Gronwall's lemma yields a bound for u^ε in L^∞(0,T; H¹(Ω)), uniform in ε > 0, for any given T > 0. Indeed, the above estimate uses the property Im ∫_Ω ∇\overline{u^ε} · Δ∇u^ε dx = 0, which need not be true when Ω is bounded and u^ε satisfies homogeneous Dirichlet boundary conditions.
In that case, we use the conservation of the energy E ε (Proposition 2.1), and write ∇u ε (t) 2 L 2 (Ω) ≤ E ε (u 0 ) + 2|λ| Ω |u ε (x, t)| 2 |ln (ε + |u ε (x, t)|)| dx + 2ε|λ| u ε (t) L 1 (Ω) + 2|λ|ε 2 Ω |ln (1 + |u ε (x, t)|/ε)| dx 1 + ε|Ω| 1/2 u ε (t) L 2 (Ω) + Ω |u ε (x, t)| 2 |ln (ε + |u ε (x, t)|)| dx 1 + Ω |u ε (x, t)| 2 |ln (ε + |u ε (x, t)|)| dx, t ≥ 0, where we have used Cauchy-Schwarz inequality and the conservation of the mass M ε (t). Writing, for 0 < η ≪ 1, Ω |u ε | 2 |ln (ε + |u ε |)| dx ε+|u ε |>1 |u ε | 2 (ε + |u ε |) η dx + ε+|u ε |<1 |u ε | 2 (ε + |u ε |) -η dx u ε L 2 (Ω) + u ε 2+η L 2+η (Ω) + u ε 2-η L 2-η (Ω) 1 + ∇u ε dη/2 L 2 (Ω) , where we have used the interpolation inequality (see e.g. [START_REF] Nirenberg | On elliptic partial differential equations[END_REF]) u L p (Ω) u 1-α L 2 (Ω) ∇u α L 2 (Ω) + u L 2 (Ω) , p = 2d d -2α , 0 ≤ α < 1, we obtain again that u ε is bounded in L ∞ (0, T ; H 1 (Ω)), uniformly in ε > 0, for any given T > 0. In the case where Ω is bounded, compactness arguments show that u ε converges to a solution u to (1.1); see [START_REF]Semilinear Schrödinger equations[END_REF][START_REF] Cazenave | Équations d'évolution avec non linéarité logarithmique[END_REF]. When Ω = R d , compactness in space is provided by multiplying (2.1) with x 2α u ε and integrating in space: d dt u ε 2 L 2 α = 4α Im x • ∇u ε x 2-2α u ε (t) dx x 2α-1 u ε L 2 (Ω) ∇u ε L 2 (Ω) , where we have used Cauchy-Schwarz inequality. Recalling that 0 < α ≤ 1, x 2α-1 u ε L 2 (Ω) ≤ x α u ε L 2 (Ω) = u ε L 2 α , and we obtain a bound for u ε in L ∞ (0, T ; H 1 (Ω) ∩ L 2 α ) which is uniform in ε. Uniqueness of such a solution for (1.1) follows from the arguments of [START_REF] Cazenave | Équations d'évolution avec non linéarité logarithmique[END_REF], involving a specific algebraic inequality, generalized in Lemma 2.4 below. Note that at this stage, we know that u ε converges to u by compactness arguments, so we have no convergence estimate. Such estimates are established in Subsection 2.3. To prove the propagation of the H 2 regularity, we note that differentiating twice the nonlinearity in (2.1) makes it unrealistic to expect direct bounds which are uniform in ε. To overcome this difficulty, the argument proposed in [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF] relies on Kato's idea: instead of differentiating the equation twice in space, differentiate it once in time, and use the equation to infer H 2 regularity. This yields the second part of the theorem. To establish the last part of the theorem, we prove that u ∈ L ∞ loc (R; L 2 2 ) and the same approach applies to u ε . It follows from (1.1) that d dt u(t) 2 L 2 2 = -2 Im R d x 4 u(x, t)∆u(x, t)dx = 8 Im R d x 2 u(x, t) x • ∇u(x, t)dx ≤ 8 u(t) L 2 2 x • ∇u(t) L 2 (R d ) . (2.4) By Cauchy-Schwarz inequality and integration by parts, we have x • ∇u(t) 2 L 2 (R d ) ≤ d j=1 d k=1 R d x 2 j ∂u(x, t) ∂x k ∂u(x, t) ∂x k dx = -2 R d u(x, t) x • ∇u(x, t)dx - R d |x| 2 u(x, t)∆u(x, t)dx ≤ 1 2 x • ∇u(t) 2 L 2 (R d ) + 2 u(t) 2 L 2 (R d ) + 1 2 u(t) 2 L 2 2 + 1 2 ∆u(t) 2 L 2 (R d ) , which yields directly that x • ∇u(t) L 2 (R d ) ≤ 2 u(t) L 2 (R d ) + u(t) L 2 2 + ∆u(t) L 2 (R d ) . This together with (2.4) gives that d dt u(t) L 2 2 ≤ 4 x • ∇u(t) L 2 (R d ) ≤ 4 u(t) L 2 2 + 8 u(t) L 2 (R d ) + 4 ∆u(t) L 2 (R d ) . Since we already know that u ∈ L ∞ loc (R; H 2 (R d )), Gronwall lemma completes the proof. Remark 2.1. 
We emphasize that if u₀ ∈ H^k(R^d), k ≥ 3, we cannot guarantee in general that this higher regularity is propagated in (1.1), due to the singularities stemming from the logarithm. Still, this property is fulfilled in the case where u₀ is Gaussian, since then u remains Gaussian for all time. However, our numerical tests, in the case where the initial datum is chosen as the dark soliton of the cubic Schrödinger equation multiplied by a Gaussian, suggest that even the H³ regularity is not propagated in general.

2.3. Convergence of the regularized model. In this subsection, we show the approximation property of the regularized model (2.1) to (1.1).

2.3.1. A general estimate. We prove:

Lemma 2.3. Suppose the equation is set on Ω, where Ω = R^d, or Ω ⊂ R^d is a bounded domain with homogeneous Dirichlet or periodic boundary condition. Then we have the general estimate

(2.5) d/dt ‖u^ε(t) - u(t)‖²_{L²(Ω)} ≤ 4|λ| ( ‖u^ε(t) - u(t)‖²_{L²(Ω)} + ε ‖u^ε(t) - u(t)‖_{L¹(Ω)} ).

Before giving the proof of Lemma 2.3, we introduce the following lemma, which is a variant of [12, Lemma 9.3.5], established initially in [13, Lemme 1.1.1].

Lemma 2.4. Let ε ≥ 0 and denote f_ε(z) = z ln(ε + |z|). Then we have

|Im( (f_ε(z₁) - f_ε(z₂)) \overline{(z₁ - z₂)} )| ≤ |z₁ - z₂|², z₁, z₂ ∈ C.

Proof. Notice that

Im[ (f_ε(z₁) - f_ε(z₂)) \overline{(z₁ - z₂)} ] = (1/2) [ln(ε + |z₁|) - ln(ε + |z₂|)] Im(z₁\overline{z₂} - \overline{z₁}z₂).

Supposing, for example, 0 < |z₂| ≤ |z₁|, we obtain

|ln(ε + |z₁|) - ln(ε + |z₂|)| = ln( 1 + (|z₁| - |z₂|)/(ε + |z₂|) ) ≤ (|z₁| - |z₂|)/(ε + |z₂|) ≤ |z₁ - z₂|/|z₂|,

and

|Im(z₁\overline{z₂} - \overline{z₁}z₂)| = |\overline{z₂}(z₁ - z₂) + z₂(\overline{z₂} - \overline{z₁})| ≤ 2|z₂| |z₁ - z₂|.

Otherwise, the result follows by exchanging z₁ and z₂.

Proof (of Lemma 2.3). Subtracting (1.1) from (2.1), we see that the error function e^ε := u^ε - u satisfies

i∂_t e^ε + Δe^ε = λ ( u^ε ln(ε + |u^ε|)² - u ln(|u|²) ).

Multiplying the error equation by \overline{e^ε}(t), integrating in space and taking the imaginary part, we get, by using Lemma 2.4,

(1/2) d/dt ‖e^ε(t)‖²_{L²(Ω)} = 2λ Im ∫_Ω [ u^ε ln(ε + |u^ε|) - u ln(|u|) ] (\overline{u^ε} - \overline{u})(x,t) dx
 ≤ 2|λ| ‖e^ε(t)‖²_{L²(Ω)} + 2|λ| ∫_Ω | e^ε u [ ln(ε + |u|) - ln(|u|) ] |(x,t) dx
 ≤ 2|λ| ‖e^ε(t)‖²_{L²(Ω)} + 2ε|λ| ‖e^ε(t)‖_{L¹(Ω)},

where we have used the elementary estimate 0 ≤ ln(1 + |x|) ≤ |x|.

2.3.2. Convergence for bounded domains. If Ω has finite measure, then we have the following convergence behavior.

Proposition 2.5. Assume that Ω has finite measure, and let u₀ ∈ H²(Ω). For any T > 0, we have

(2.6) ‖u^ε - u‖_{L^∞(0,T;L²(Ω))} ≤ C₁ ε, ‖u^ε - u‖_{L^∞(0,T;H¹(Ω))} ≤ C₂ ε^{1/2},

where C₁ depends on |λ|, T, |Ω|, and C₂ depends on |λ|, T, |Ω| and ‖u₀‖_{H²(Ω)}.

Proof. Noting that ‖e^ε(t)‖_{L¹(Ω)} ≤ |Ω|^{1/2} ‖e^ε(t)‖_{L²(Ω)}, it follows from (2.5) that

d/dt ‖e^ε(t)‖_{L²(Ω)} ≤ 2|λ| ‖e^ε(t)‖_{L²(Ω)} + 2ε|λ| |Ω|^{1/2}.

Applying Gronwall's inequality, we immediately get

‖e^ε(t)‖_{L²(Ω)} ≤ ( ‖e^ε(0)‖_{L²(Ω)} + ε|Ω|^{1/2} ) e^{2|λ|t} = ε|Ω|^{1/2} e^{2|λ|t}.

The convergence rate in H¹ follows from the property u^ε, u ∈ L^∞_loc(R; H²(Ω)) and the Gagliardo-Nirenberg inequality [START_REF] Leoni | A first course in Sobolev spaces[END_REF], ‖∇v‖_{L²(Ω)} ≲ ‖v‖^{1/2}_{L²(Ω)} ‖Δv‖^{1/2}_{L²(Ω)}, which completes the proof.

Remark 2.2. The weaker rate in the H¹ estimate is due to the fact that Lemma 2.3 is not easily adapted to H¹ estimates, because of the presence of the logarithm: differentiating (1.1) and (2.1) makes it hard to obtain an analogue of Lemma 2.3. This is why we bypass this difficulty by invoking boundedness in H² and interpolating with the error bound at the L² level.
If we have u ε , u ∈ L ∞ loc (R; H k (Ω)) for k > 2, then the convergence rate in H 1 (Ω) can be improved as e ε L ∞ (0,T ;H 1 (Ω)) ε k-1 k , by using the inequality (see e.g. [START_REF] Nirenberg | On elliptic partial differential equations[END_REF]): v H 1 (Ω) v 1-1/k L 2 (Ω) v 1/k H k (Ω) . 2.3.3. Convergence for the whole space. In order to prove the convergence rate of the regularized model (2.1) to (1.1) for the whole space, we need the following lemma. Lemma 2.6. For d = 1, 2, 3, if v ∈ L 2 (R d ) ∩ L 2 2 , then we have (2.7) v L 1 (R d ) ≤ C v 1-d/4 L 2 (R d ) v d/4 L 2 2 , where C > 0 depends on d. Proof. Applying the Cauchy-Schwarz inequality, we can get for fixed r > 0, v L 1 (R d ) = |x|≤r |v(x)|dx + |x|≥r |x| 2 |v(x)| |x| 2 dx r d/2 |x|≤r |v(x)| 2 dx 1 2 + |x|≥r |x| 4 |v(x)| 2 dx 1 2 |x|≥r 1 |x| 4 dx 1 2 r d/2 v L 2 (R d ) + r d/2-2 v L 2 2 . Then (2.7) can be obtained by setting r = v L 2 2 / v L 2 (R d ) 1/2 . Proposition 2.7. Assume that Ω = R d , 1 ≤ d ≤ 3, and let u 0 ∈ H 2 (R d ) ∩ L 2 2 . For any T > 0, we have u ε -u L ∞ (0,T ;L 2 (R d )) ≤ C 1 ε 4 4+d , u ε -u L ∞ (0,T ;H 1 (R d ))) ≤ C 2 ε 2 4+d , where C 1 depends on d, |λ|, T , u 0 L 2 2 and C 2 depends on additional u 0 H 2 (R d ) . Proof. Applying (2.7) and the Young's inequality, we deduce that ε e ε (t) L 1 (R d ) ≤ εC d e ε (t) 1-d/4 L 2 (R d ) e ε (t) d/4 L 2 2 ≤ C d e ε (t) 2 L 2 (R d ) + ε 8 4+d e ε (t) 2d 4+d L 2 2 , which together with (2.5) gives that d dt e ε (t) 2 L 2 (R d ) ≤ 4|λ|(1 + C d ) e ε (t) 2 L 2 (R d ) + 4C d |λ|ε 8 4+d e ε (t) 2d 4+d L 2 2 . Gronwall lemma yields e ε (t) L 2 (R d ) ≤ ε 4 4+d e ε (t) d 4+d L 2 2 e tC d,|λ| . The proposition follows by recalling that u ε , u ∈ L ∞ loc (R; H 2 (R d ) ∩ L 2 2 ). Remark 2.3. If we have u ε , u ∈ L ∞ loc (R; L 2 m ) for m > 2, then by applying the inequality ε v L 1 (R d ) ε v 1-d 2m L 2 (R d ) v d 2m L 2 m v 2 L 2 (R d ) + ε 4m 2m+d v 2d 2m+d L 2 m , which can be proved like above, the convergence rate can be improved as u ε -u L ∞ (0,T ;L 2 (R d )) ε 2m 2m+d . Remark 2.4. If in addition u ε , u ∈ L ∞ loc (R; H s (R d )) for s > 2, then the conver- gence rate in H 1 (R d ) can be improved as e ε L ∞ (0,T ;H 1 (R d ))) ≤ Cε 2m 2m+d s-1 s , by using the Gagliardo-Nirenberg inequality: ∇v L 2 (R d ) ≤ C v 1-1/s L 2 (R d ) ∇ s v 1/s L 2 (R d ) . The previous two remarks apply typically in the case of Gaussian initial data. Convergence of the energy. In this subsection we will show the convergence of the energy E ε (u 0 ) → E(u 0 ). Proposition 2.8. For u 0 ∈ H 1 (Ω) ∩ L 1 (Ω), the energy E ε (u 0 ) converges to E(u 0 ) with |E ε (u 0 ) -E(u 0 )| ≤ 4 ε|λ| u 0 L 1 (Ω) . Proof. It can be deduced from the definition that |E ε (u 0 ) -E(u 0 )| = 2|λ| ε u 0 L 1 (Ω) + Ω |u 0 (x)| 2 [ln(ε + |u 0 (x)| -ln(|u 0 (x)|)] dx -ε 2 Ω ln (1 + |u 0 (x)|/ε) dx ≤ 4 ε|λ| u 0 L 1 (Ω) , which completes the proof. Remark 2.5. If Ω is bounded, then H 1 (Ω) ⊆ L 1 (Ω). If Ω = R d , then Lemma 2.6 (and its natural generalizations) shows that H 1 (R) ∩ L 2 1 ⊆ L 1 (R), and if d = 2, 3, H 1 (R d ) ∩ L 2 2 ⊆ L 1 (R d ). Remark 2.6. This regularization is reminiscent of the one considered in [START_REF] Carles | Universal dynamics for the defocusing logarithmic Schrödinger equation[END_REF] in order to prove (by compactness arguments) that (1.1) has a solution, (2.8) i∂ t u ε (x, t) + ∆u ε (x, t) = λu ε (x, t) ln ε + |u ε (x, t)| 2 , x ∈ Ω, t > 0. With that regularization, it is easy to adapt the error estimates established above for (2.1). 
Essentially, ε must be replaced by √ ε (in Lemma 2.3, and hence in its corollaries). 3. A regularized semi-implicit finite difference method. In this section, we study the approximation properties of a finite difference method for solving the regularized model (2.1). For simplicity of notation, we set λ = 1 and only present the numerical method for the RLogSE (2.1) in 1D, as extensions to higher dimensions are straightforward. When d = 1, we truncate the RLogSE on a bounded computational interval Ω = (a, b) with homogeneous Dirichlet boundary condition (here |a| and b are chosen large enough such that the truncation error is negligible): (3.1) i∂ t u ε (x, t) + ∂ xx u ε (x, t) = u ε (x, t) ln(ε + |u ε (x, t)|) 2 , x ∈ Ω, t > 0, u ε (x, 0) = u 0 (x), x ∈ Ω; u ε (a, t) = u ε (b, t) = 0, t ≥ 0, 3.1. A finite difference scheme and main results on error bounds. Choose a mesh size h := ∆x = (b -a)/M with M being a positive integer and a time step τ := ∆t > 0 and denote the grid points and time steps as x j := a + jh, j = 0, 1, • • • , M ; t k := kτ, k = 0, 1, 2, . . . Define the index sets T M = {j | j = 1, 2, • • • , M -1}, T 0 M = {j | j = 0, 1, • • • , M }. Let u ε,k j be the approximation of u ε (x j , t k ), and denote u ε,k = (u ε,k 0 , u ε,k 1 , . . . , u ε,k M ) T ∈ C M+1 as the numerical solution vector at t = t k . Define the standard finite difference operators δ c t u k j = u k+1 j -u k-1 j 2τ , δ + x u k j = u k j+1 -u k j h , δ 2 x u k j = u k j+1 -2u k j + u k j-1 h 2 . Denote X M = v = (v 0 , v 1 , . . . , v M ) T | v 0 = v M = 0 ⊆ C M+1 , equipped with inner products and norms defined as (recall that u 0 = v 0 = u M = v M = 0 by Dirichlet boundary condition) (u, v) = h M-1 j=1 u j v j , u, v = h M-1 j=0 u j v j , u ∞ = sup j∈T 0 M |u j |; u 2 = (u, u), |u| 2 H 1 = δ + x u, δ + x u , u 2 H 1 = u 2 + |u| 2 H 1 . (3.2) Then we have for u, v ∈ X M , (3.3) (-δ 2 x u, v) = δ + x u, δ + x v = (u, -δ 2 x v). Consider a semi-implicit finite difference (SIFD) discretization of (3.1) as following (3.4) iδ c t u ε,k j = - 1 2 δ 2 x (u ε,k+1 j + u ε,k-1 j ) + u ε,k j ln(ε + |u ε,k j |) 2 , j ∈ T M , k ≥ 1. The boundary and initial conditions are discretized as (3.5) u ε,k 0 = u ε,k M = 0, k ≥ 0; u ε,0 j = u 0 (x j ), j ∈ T 0 M . In addition, the first step u ε,1 j can be obtained via the Taylor expansion as (3.6) u ε,1 j = u ε,0 j + τ u 1 (x j ), j ∈ T 0 M , where u 1 (x) := ∂ t u ε (x, 0) = i u ′′ 0 (x) -u 0 (x) ln(ε + |u 0 (x)|) 2 , a ≤ x ≤ b. Let 0 < T < T max with T max the maximum existence time of the solution u ε to the problem (3.1) for a fixed 0 ≤ ε ≪ 1. By using the standard von Neumann analysis, we can show that the discretization (3.4) is conditionally stable under the stability condition (3.7) 0 < τ ≤ 1 2 max{| ln ε|, ln(ε + max j∈TM |u ε,k j |)} , 0 ≤ k ≤ T τ . Define the error functions e ε,k ∈ X M as (3.8) e ε,k j = u ε (x j , t k ) -u ε,k j , j ∈ T 0 M , 0 ≤ k ≤ T τ , where u ε is the solution of (3.1). Then we have the following error estimates for (3.4) with (3.5) and (3.6). Theorem 3.1 (Main result). Assume that the solution u ε is smooth enough over Ω T := Ω × [0, T ], i.e. (A) u ε ∈ C [0, T ]; H 5 (Ω) ∩ C 2 [0, T ]; H 4 (Ω) ∩ C 3 [0, T ]; H 2 (Ω) , and there exist ε 0 > 0 and C 0 > 0 independent of ε such that u ε L ∞ (0,T ;H 5 (Ω)) + ∂ 2 t u ε L ∞ (0,T ;H 4 (Ω)) + ∂ 3 t u ε L ∞ (0,T ;H 2 (Ω)) ≤ C 0 , uniformly in 0 ≤ ε ≤ ε 0 . 
Then there exist h₀ > 0 and τ₀ > 0 sufficiently small, with h₀ ∼ √ε e^{-CT|ln(ε)|²} and τ₀ ∼ √ε e^{-CT|ln(ε)|²}, such that, when 0 < h ≤ h₀ and 0 < τ ≤ τ₀ satisfying the stability condition (3.7), we have the following error estimates:

(3.9) ‖e^{ε,k}‖ ≤ C₁(ε,T)(h² + τ²), ‖e^{ε,k}‖_{H¹} ≤ C₂(ε,T)(h² + τ²), ‖u^{ε,k}‖_∞ ≤ Λ + 1, 0 ≤ k ≤ T/τ,

where Λ = ‖u^ε‖_{L^∞(Ω_T)}, C₁(ε,T) ∼ e^{CT|ln(ε)|²}, C₂(ε,T) ∼ (1/ε) e^{CT|ln(ε)|²}, and C depends on C₀. The error bounds in this theorem show not only the quadratic convergence in terms of the mesh size h and the time step τ, but also the explicit dependence on the regularization parameter ε. We remark here that Assumption (A) is valid at least when the initial datum is a Gaussian.

Define the error functions ẽ^{ε,k} ∈ X_M as

(3.10) ẽ^{ε,k}_j = u(x_j, t_k) - u^{ε,k}_j, j ∈ T⁰_M, 0 ≤ k ≤ T/τ,

where u is the solution of the LogSE (1.1) with Ω = (a, b). Combining Proposition 2.5 and Theorem 3.1, we immediately obtain the following error decomposition: u^{ε,k} approximates u^ε(·,t_k) with error O(h² + τ²) (Theorem 3.1), u^ε(·,t_k) approximates u(·,t_k) with error O(ε) (Proposition 2.5), and hence u^{ε,k} approximates u(·,t_k) with error O(ε) + O(h² + τ²).

Corollary 3.2. Under the assumptions of Proposition 2.5 and Theorem 3.1, we have the following error estimates:

(3.11) ‖ẽ^{ε,k}‖ ≤ C₁ε + C₁(ε,T)(h² + τ²), ‖ẽ^{ε,k}‖_{H¹} ≤ C₂ε^{1/2} + C₂(ε,T)(h² + τ²), 0 ≤ k ≤ T/τ,

where C₁ and C₂ are as in Proposition 2.5, and C₁(ε,T) and C₂(ε,T) are given in Theorem 3.1.

3.2. Error estimates. Define the local truncation error ξ^{ε,k} ∈ X_M for k ≥ 1 as

(3.12) ξ^{ε,k}_j = i δ_t^c u^ε(x_j, t_k) + (1/2) [ δ_x² u^ε(x_j, t_{k+1}) + δ_x² u^ε(x_j, t_{k-1}) ] - u^ε(x_j, t_k) ln(ε + |u^ε(x_j, t_k)|)², j ∈ T_M, 1 ≤ k < T/τ;

then we have the following bounds for the local truncation error.

Lemma 3.3 (Local truncation error). Under Assumption (A), we have ‖ξ^{ε,k}‖_{H¹} ≲ h² + τ², 1 ≤ k < T/τ.

Proof. By Taylor expansion, we have

(3.13) ξ^{ε,k}_j = (iτ²/4) α^{ε,k}_j + (τ²/2) β^{ε,k}_j + (h²/12) γ^{ε,k}_j,

where

α^{ε,k}_j = ∫_{-1}^{1} (1 - |s|)² ∂_t³ u^ε(x_j, t_k + sτ) ds, β^{ε,k}_j = ∫_{-1}^{1} (1 - |s|) ∂_t² u^ε_{xx}(x_j, t_k + sτ) ds,
γ^{ε,k}_j = ∫_{-1}^{1} (1 - |s|)³ [ ∂_x⁴ u^ε(x_j + sh, t_{k+1}) + ∂_x⁴ u^ε(x_j + sh, t_{k-1}) ] ds.

By the Cauchy-Schwarz inequality, we can get

‖α^{ε,k}‖² = h Σ_{j=1}^{M-1} |α^{ε,k}_j|² ≤ h ∫_{-1}^{1} (1 - |s|)⁴ ds Σ_{j=1}^{M-1} ∫_{-1}^{1} |∂_t³ u^ε(x_j, t_k + sτ)|² ds
 = (2/5) [ ∫_{-1}^{1} ‖∂_t³ u^ε(·, t_k + sτ)‖²_{L²(Ω)} ds - ∫_{-1}^{1} Σ_{j=0}^{M-1} ∫_{x_j}^{x_{j+1}} ( |∂_t³ u^ε(x, t_k + sτ)|² - |∂_t³ u^ε(x_j, t_k + sτ)|² ) dx ds ]
 = (2/5) [ ∫_{-1}^{1} ‖∂_t³ u^ε(·, t_k + sτ)‖²_{L²(Ω)} ds - ∫_{-1}^{1} Σ_{j=0}^{M-1} ∫_{x_j}^{x_{j+1}} ∫_{x_j}^{ω} ∂_x |∂_t³ u^ε(x', t_k + sτ)|² dx' dω ds ]
 ≤ (2/5) ∫_{-1}^{1} [ ‖∂_t³ u^ε(·, t_k + sτ)‖²_{L²(Ω)} + 2h ‖∂_t³ u^ε_x(·, t_k + sτ)‖_{L²(Ω)} ‖∂_t³ u^ε(·, t_k + sτ)‖_{L²(Ω)} ] ds
 ≤ max_{0≤t≤T} ( ‖∂_t³ u^ε‖_{L²(Ω)} + h ‖∂_t³ u^ε_x‖_{L²(Ω)} )²,

which yields that, when h ≤ 1, ‖α^{ε,k}‖ ≤ ‖∂_t³ u^ε‖_{L^∞(0,T;H¹(Ω))}. Applying a similar approach, it can be established that ‖β^{ε,k}‖ ≤ 2 ‖∂_t² u^ε‖_{L^∞(0,T;H³(Ω))}. On the other hand, we can obtain

‖γ^{ε,k}‖² ≤ h ∫_{-1}^{1} (1 - |s|)⁶ ds Σ_{j=1}^{M-1} ∫_{-1}^{1} | ∂_x⁴ u^ε(x_j + sh, t_{k+1}) + ∂_x⁴ u^ε(x_j + sh, t_{k-1}) |² ds
 ≤ (4h/7) Σ_{j=1}^{M-1} ∫_{-1}^{1} [ |∂_x⁴ u^ε(x_j + sh, t_{k+1})|² + |∂_x⁴ u^ε(x_j + sh, t_{k-1})|² ] ds
 ≤ (8/7) [ ‖∂_x⁴ u^ε(·, t_{k-1})‖²_{L²(Ω)} + ‖∂_x⁴ u^ε(·, t_{k+1})‖²_{L²(Ω)} ] ≤ 4 ‖u^ε‖²_{L^∞(0,T;H⁴(Ω))},

which implies that ‖γ^{ε,k}‖ ≤ 2 ‖u^ε‖_{L^∞(0,T;H⁴(Ω))}. Hence, by Assumption (A), we get

‖ξ^{ε,k}‖ ≲ τ² ( ‖∂_t³ u^ε‖_{L^∞(0,T;H¹(Ω))} + ‖∂_t² u^ε‖_{L^∞(0,T;H³(Ω))} ) + h² ‖u^ε‖_{L^∞(0,T;H⁴(Ω))} ≲_{C₀} τ² + h².
Applying δ + x to ξ ε,k and using the same approach, we can get that |ξ ε,k | H 1 τ 2 ∂ 3 t u ε L ∞ (0,T ;H 2 (Ω)) + ∂ 2 t u ε L ∞ (0,T ;H 4 (Ω)) + h 2 u ε L ∞ (0,T ;H 5 (Ω)) C0 τ 2 + h 2 , which completes the proof. For the first step, we have the following estimates. Lemma 3.4 (Error bounds for k = 1). Under Assumption (A), the first step errors of the discretization (3.6) satisfy e ε,0 = 0, e ε,1 H 1 τ 2 . Proof. By the definition of u ε,1 j in (3.6), we have e ε,1 j = τ 2 1 0 (1 -s)u ε tt (x j , sτ )ds, which implies that e ε,1 τ 2 ∂ 2 t u ε L ∞ (0,T ;H 1 (Ω)) τ 2 , |e ε,1 | H 1 τ 2 ∂ 2 t u ε L ∞ (0,T ;H 2 (Ω)) τ 2 , and the proof is completed. Proof. [Proof of Theorem 3.1] We prove (3.9) by induction. It follows from Lemma 3.4 that (3.9) is true for k = 0, 1. Assume (3.9) is valid for k ≤ n ≤ T τ -1. Next we need to show that (3.9) still holds for k = n + 1. Subtracting (3.4) from (3.12), we get the error equations (3.14) iδ c t e ε,m j = - 1 2 (δ 2 x e ε,m+1 j +δ 2 x e ε,m-1 j )+r ε,m j +ξ ε,m j , j ∈ T M , 1 ≤ m ≤ T τ -1, where r ε,m ∈ X M represents the difference between the logarithmic nonlinearity (3.15) r ε,m j = u ε (x j , t m ) ln(ε + |u ε (x j , t m )|) 2 -u ε,m j ln(ε + |u ε,m j |) 2 , 1 ≤ m ≤ T τ -1. Multiplying both sides of (3.14) by 2τ (e ε,m+1 j + e ε,m-1 j ), summing together for j ∈ T M and taking the imaginary parts, we obtain for 1 ≤ m < T /τ , (3.16) e ε,m+1 2 -e ε,m-1 2 = 2τ Im(r ε,m + ξ ε,m , e ε,m+1 + e ε,m-1 ) ≤ 2τ r ε,m 2 + ξ ε,m 2 + e ε,m+1 2 + e ε,m-1 2 . Summing (3.16) for m = 1, 2, • • • , n (n ≤ T τ -1), we obtain e ε,n+1 2 + e ε,n 2 ≤ e ε,0 2 + e ε,1 2 + 2τ e ε,n+1 2 + 2τ n-1 m=0 ( e ε,m 2 + e ε,m+1 2 ) + 2τ n m=1 r ε,m 2 + ξ ε,m 2 . (3.17) For m ≤ n, when |u ε,m j | ≤ |u ε (x j , t m )|, we write r ε,m j as |r ε,m j | = e ε,m j ln(ε + |u ε (x j , t m )|) 2 + 2u ε,m j ln ε + |u ε (x j , t m )| ε + |u ε,m j | ≤ 2 max{ln(ε -1 ), | ln(ε + Λ)|}|e ε,m j | + 2|u ε,m j | ln 1 + |u ε (x j , t m )| -|u ε,m j | ε + |u ε,m j | ≤ 2|e ε,m j |(1 + max{ln(ε -1 ), | ln(ε + Λ)|}). On the other hand, when |u ε (x j , t m )| ≤ |u ε,m j |, we write r ε,m j as |r ε,m j | = e ε,m j ln(ε + |u ε,m j |) 2 + 2u ε (x j , t m ) ln ε + |u ε (x j , t m )| ε + |u ε,m j | ≤ 2 max{ln(ε -1 ), | ln(ε + 1 + Λ)|}|e ε,m j | + 2|u ε (x j , t m )| ln 1 + |u ε,m j | -|u ε (x j , t m )| ε + |u ε (x j , t m )| ≤ 2|e ε,m j |(1 + max{ln(ε -1 ), | ln(ε + 1 + Λ)|}) , where we use the assumption that u ε,m ∞ ≤ Λ + 1 for m ≤ n. Thus it follows that r ε,m 2 | ln(ε)| 2 e ε,m 2 , when ε is sufficiently small. Thus when τ ≤ 1 2 , by using Lemmas 3.3, 3.4 and (3.17), we have e ε,n+1 2 + e ε,n 2 e ε,0 2 + e ε,1 2 + τ n-1 m=0 ( e ε,m 2 + e ε,m+1 2 ) + τ n m=1 r ε,m 2 + ξ ε,m 2 (h 2 + τ 2 ) 2 + τ | ln(ε)| 2 n-1 m=0 ( e ε,m 2 + e ε,m+1 2 ). We emphasize here that the implicit multiplicative constant in this inequality depends only on C 0 , but not on n. Applying the discrete Gronwall inequality, we can conclude that e ε,n+1 2 e CT | ln(ε)| 2 (h 2 + τ 2 ) 2 , for some C depending on C 0 , which gives the error bound for e ε,k with k = n + 1 in (3.9) immediately. To estimate |e ε,n+1 | H 1 , multiplying both sides of (3.14) by 2(e ε,m+1 j -e ε,m-1 j ) for m ≤ n, summing together for j ∈ T M and taking the real parts, we obtain |e ε,m+1 | 2 H 1 -|e ε,m-1 | 2 H 1 = -2 Re r ε,m + ξ ε,m , e ε,m+1 -e ε,m-1 = 2τ Im r ε,m + ξ ε,m , -δ 2 x (e ε,m+1 + e ε,m-1 ) = 2τ Im δ + x (r ε,m + ξ ε,m ), δ + x (e ε,m+1 + e ε,m-1 ) ≤ 2τ |r ε,m | 2 H 1 + |ξ ε,m | 2 H 1 + |e ε,m+1 | 2 H 1 + |e ε,m-1 | 2 H 1 . 
(3.18)

To bound δ⁺_x r^{ε,m}, denote, for simplicity of notation,

u^{ε,m}_{j,θ} = θ u^ε(x_{j+1}, t_m) + (1 - θ) u^ε(x_j, t_m), v^{ε,m}_{j,θ} = θ u^{ε,m}_{j+1} + (1 - θ) u^{ε,m}_j, j ∈ T_M, θ ∈ [0,1].

Then we have

δ⁺_x r^{ε,m}_j = 2δ⁺_x ( u^ε(x_j,t_m) ln(ε + |u^ε(x_j,t_m)|) ) - 2δ⁺_x ( u^{ε,m}_j ln(ε + |u^{ε,m}_j|) )
 = (2/h) ∫₀¹ d/dθ [ u^{ε,m}_{j,θ} ln(ε + |u^{ε,m}_{j,θ}|) ] dθ - (2/h) ∫₀¹ d/dθ [ v^{ε,m}_{j,θ} ln(ε + |v^{ε,m}_{j,θ}|) ] dθ = I₁ + I₂ + I₃,

where

I₁ := 2δ⁺_x u^ε(x_j,t_m) ∫₀¹ ln(ε + |u^{ε,m}_{j,θ}|) dθ - 2δ⁺_x u^{ε,m}_j ∫₀¹ ln(ε + |v^{ε,m}_{j,θ}|) dθ,
I₂ := δ⁺_x u^ε(x_j,t_m) ∫₀¹ |u^{ε,m}_{j,θ}|/(ε + |u^{ε,m}_{j,θ}|) dθ - δ⁺_x u^{ε,m}_j ∫₀¹ |v^{ε,m}_{j,θ}|/(ε + |v^{ε,m}_{j,θ}|) dθ,
I₃ := δ⁺_x u^ε(x_j,t_m) ∫₀¹ (u^{ε,m}_{j,θ})²/( |u^{ε,m}_{j,θ}|(ε + |u^{ε,m}_{j,θ}|) ) dθ - δ⁺_x u^{ε,m}_j ∫₀¹ (v^{ε,m}_{j,θ})²/( |v^{ε,m}_{j,θ}|(ε + |v^{ε,m}_{j,θ}|) ) dθ.

We estimate I₁, I₂ and I₃ separately. Similarly as before,

|I₁| ≤ 2|δ⁺_x u^ε(x_j,t_m)| ∫₀¹ | ln( (ε + |u^{ε,m}_{j,θ}|)/(ε + |v^{ε,m}_{j,θ}|) ) | dθ + 2|δ⁺_x e^{ε,m}_j| ∫₀¹ | ln(ε + |v^{ε,m}_{j,θ}|) | dθ
 = 2|δ⁺_x u^ε(x_j,t_m)| ∫₀¹ ln( 1 + | |u^{ε,m}_{j,θ}| - |v^{ε,m}_{j,θ}| | / (ε + min{|u^{ε,m}_{j,θ}|, |v^{ε,m}_{j,θ}|}) ) dθ + 2|δ⁺_x e^{ε,m}_j| ∫₀¹ | ln(ε + |v^{ε,m}_{j,θ}|) | dθ
 ≤ (2/ε) |δ⁺_x u^ε(x_j,t_m)| ( |e^{ε,m}_j| + |e^{ε,m}_{j+1}| ) + 2|δ⁺_x e^{ε,m}_j| max{ ln(ε⁻¹), |ln(ε + 1 + Λ)| }
 ≲ (1/ε)( |e^{ε,m}_j| + |e^{ε,m}_{j+1}| ) + ln(ε⁻¹) |δ⁺_x e^{ε,m}_j|,

and

|I₂| = | δ⁺_x u^ε(x_j,t_m) ∫₀¹ ( |u^{ε,m}_{j,θ}|/(ε + |u^{ε,m}_{j,θ}|) - |v^{ε,m}_{j,θ}|/(ε + |v^{ε,m}_{j,θ}|) ) dθ + δ⁺_x e^{ε,m}_j ∫₀¹ |v^{ε,m}_{j,θ}|/(ε + |v^{ε,m}_{j,θ}|) dθ |
 ≤ |δ⁺_x e^{ε,m}_j| + |δ⁺_x u^ε(x_j,t_m)| ∫₀¹ ε |u^{ε,m}_{j,θ} - v^{ε,m}_{j,θ}| / ( (ε + |u^{ε,m}_{j,θ}|)(ε + |v^{ε,m}_{j,θ}|) ) dθ
 ≤ |δ⁺_x e^{ε,m}_j| + ( |δ⁺_x u^ε(x_j,t_m)| / ε ) ∫₀¹ |u^{ε,m}_{j,θ} - v^{ε,m}_{j,θ}| dθ ≲ |δ⁺_x e^{ε,m}_j| + (1/ε)( |e^{ε,m}_j| + |e^{ε,m}_{j+1}| ).

In view of the inequality

| (u^{ε,m}_{j,θ})²/( |u^{ε,m}_{j,θ}|(ε + |u^{ε,m}_{j,θ}|) ) - (v^{ε,m}_{j,θ})²/( |v^{ε,m}_{j,θ}|(ε + |v^{ε,m}_{j,θ}|) ) | ≤ 4 |u^{ε,m}_{j,θ} - v^{ε,m}_{j,θ}| / ε,

which follows by splitting the difference into elementary pieces, we can obtain

I₃ ≲ |δ⁺_x e^{ε,m}_j| + (1/ε)( |e^{ε,m}_j| + |e^{ε,m}_{j+1}| ).

Thus we can conclude that

|δ⁺_x r^{ε,m}_j| ≲ (1/ε)( |e^{ε,m}_j| + |e^{ε,m}_{j+1}| ) + ln(ε⁻¹) |δ⁺_x e^{ε,m}_j|.

Summing (3.18) for m = 1, 2, ..., n (n ≤ T/τ - 1), we obtain

|e^{ε,n+1}|²_{H¹} + |e^{ε,n}|²_{H¹} ≤ |e^{ε,0}|²_{H¹} + |e^{ε,1}|²_{H¹} + τ Σ_{m=1}^{n} ( |r^{ε,m}|²_{H¹} + |ξ^{ε,m}|²_{H¹} ) + τ |e^{ε,n+1}|²_{H¹} + τ Σ_{m=0}^{n-1} ( |e^{ε,m}|²_{H¹} + |e^{ε,m+1}|²_{H¹} ).

Thus, when τ ≤ 1/2, by using Lemmas 3.3 and 3.4, we have

|e^{ε,n+1}|²_{H¹} + |e^{ε,n}|²_{H¹} ≲ |e^{ε,0}|²_{H¹} + |e^{ε,1}|²_{H¹} + τ Σ_{m=1}^{n} ( (1/ε²) ‖e^{ε,m}‖² + |ξ^{ε,m}|²_{H¹} ) + τ |ln(ε)|² Σ_{m=0}^{n-1} ( |e^{ε,m}|²_{H¹} + |e^{ε,m+1}|²_{H¹} )
 ≲ ( e^{CT|ln(ε)|²} / ε² ) (h² + τ²)² + τ |ln(ε)|² Σ_{m=0}^{n-1} ( |e^{ε,m}|²_{H¹} + |e^{ε,m+1}|²_{H¹} ).

Applying the discrete Gronwall inequality, we can get

|e^{ε,n+1}|²_{H¹} ≲ e^{CT|ln(ε)|²} (h² + τ²)² / ε²,

which establishes the error estimate for ‖e^{ε,k}‖_{H¹} at k = n + 1. Finally, the boundedness of the numerical solution u^{ε,k} follows from the triangle inequality ‖u^{ε,k}‖_∞ ≤ ‖u^ε(·,t_k)‖_{L^∞(Ω)} + ‖e^{ε,k}‖_∞ and the inverse Sobolev inequality [START_REF] Thomée | Galerkin finite element methods for parabolic problems[END_REF] ‖e^{ε,k}‖_∞ ≲ ‖e^{ε,k}‖_{H¹}, which completes the proof of Theorem 3.1.
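Before turning to the numerical results, here is a minimal, self-contained Python sketch of the scheme (3.4)-(3.6) with homogeneous Dirichlet boundary conditions (our illustration, not the authors' code; the grid size, the time step and the choice ε = 10⁻⁴ are assumptions). Since the linear operator is constant in time, it is factorized once and each step costs one sparse solve.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# Illustrative parameters (assumptions, not the production values of Section 4).
lam, eps = -1.0, 1e-4
a, b, M = -12.0, 12.0, 2 ** 10            # domain (a, b) and number of grid intervals
h = (b - a) / M
tau, nsteps = 1e-3, 500                   # time step and number of time steps
x = a + h * np.arange(M + 1)              # grid points x_0, ..., x_M

u0 = (-lam / np.pi) ** 0.25 * np.exp(1j * x + 0.5 * lam * x ** 2)  # datum (4.1), v = 1

def nonlin(u):
    """Pointwise regularized nonlinearity lam * u * ln((eps + |u|)^2)."""
    return lam * u * np.log((eps + np.abs(u)) ** 2)

# delta_x^2 on the interior nodes, homogeneous Dirichlet boundary conditions.
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(M - 1, M - 1)) / h ** 2
I = identity(M - 1)

# SIFD (3.4): i (u^{k+1} - u^{k-1}) / (2 tau) + (1/2) delta_x^2 (u^{k+1} + u^{k-1})
# = nonlin(u^k), i.e. A u^{k+1} = B u^{k-1} + nonlin(u^k) with:
A = ((1j / (2 * tau)) * I + 0.5 * L).tocsc()
B = ((1j / (2 * tau)) * I - 0.5 * L).tocsc()
solve = splu(A).solve                     # A is constant in time: factorize once

uo = u0[1:-1].astype(complex)             # u^0 on the interior nodes
lap0 = (u0[2:] - 2.0 * u0[1:-1] + u0[:-2]) / h ** 2
uc = uo + tau * 1j * (lap0 - nonlin(uo))  # first step (3.6), via Taylor expansion

for k in range(1, nsteps):
    un = solve(B @ uo + nonlin(uc))       # linear solve for u^{k+1}
    uo, uc = uc, un

print("discrete mass h*sum|u|^2 at t =", nsteps * tau, ":", h * np.sum(np.abs(uc) ** 2))
```

Note that τ = 10⁻³ comfortably satisfies the stability condition (3.7) for this value of ε, since 1/(2|ln ε|) ≈ 0.054 here.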
4. Numerical results. In this section, we test the convergence rate of the regularized model (2.1) and of the SIFD scheme (3.4). To this end, we take d = 1, Ω = R and λ = -1 in the LogSE (1.1), and consider two different initial data:

Case I: Gaussian initial data, i.e. u₀ in (1.1) is chosen as

(4.1) u₀(x) = (-λ/π)^{1/4} e^{ivx + (λ/2)x²}, x ∈ R, with v = 1.

In this case, the LogSE (1.1) admits the moving Gausson solution (1.7) with v = 1 and b₀ = (-λ/π)^{1/4} as the exact solution.

Case II: general initial data, i.e. u₀ in (1.1) is chosen as

(4.2) u₀(x) = tanh(x) e^{-x²}, x ∈ R,

which is the product of a dark soliton of the cubic nonlinear Schrödinger equation and a Gaussian. Notice that in this case the logarithmic term ln|u₀|² is singular at x = 0.

The RLogSE (2.1) is solved numerically by the SIFD (3.4) on the domains Ω = [-12, 12] and Ω = [-16, 16] for Cases I and II, respectively. To quantify the numerical errors, we introduce the following error functions:

ê^ε(t_k) := u(·,t_k) - u^ε(·,t_k), e^ε(t_k) := u^ε(·,t_k) - u^{ε,k}, ẽ^ε(t_k) := u(·,t_k) - u^{ε,k}, e^ε_E := |E(u) - E^ε(u^ε)|. (4.3)

Here u and u^ε are the exact solutions of the LogSE (1.1) and the RLogSE (2.1), respectively, while u^{ε,k} is the numerical solution of the RLogSE (2.1) obtained by the SIFD (3.4). The 'exact' solution u^ε is obtained numerically by the SIFD (3.4) with a very small time step, e.g. τ = 0.01/2⁹, and a very fine mesh size, e.g. h = 1/2¹⁵. Similarly, the 'exact' solution u in Case II is obtained numerically by the SIFD (3.4) with a very small time step and a very fine mesh size, as well as a very small regularization parameter ε, e.g. ε = 10⁻¹⁴. The energy is obtained by the trapezoidal rule for approximating the integrals in the energies (1.2) and (2.2).

4.1. Convergence rate of the regularized model. Here we consider the error between the solutions of the RLogSE (2.1) and the LogSE (1.1). Fig. 4.1 shows ‖ê^ε‖, ‖ê^ε‖_{H¹} and ‖ê^ε‖_∞ (the definition of the norms is given in (3.2)) at time t = 0.5 for Cases I & II, while Fig. 4.2 depicts e^ε_E(0.5) for Cases I & II and the time evolution of ‖ê^ε(t)‖ with different ε for Case I. For comparison, similarly to Fig. 4.1, Fig. 4.3 displays the convergence results from (2.8) to (1.1). From Figs. 4.1, 4.2 & 4.3 and additional numerical results not shown here for brevity, we can draw the following conclusions:

(i) The solution of the RLogSE (2.1) converges linearly to that of the LogSE (1.1) in terms of the regularization parameter ε in both the L²-norm and the L^∞-norm; the convergence rate becomes O(√ε) in the H¹-norm for Case II.
(ii) The regularized energy E^ε(u^ε) converges linearly to the energy E(u) in terms of ε.
(iii) The constant C in (2.6) may grow linearly with time T, and it is independent of ε.
(iv) The solution of (2.8) converges at O(√ε) to that of (1.1) in both the L²-norm and the L^∞-norm; the convergence rate becomes O(ε^{1/4}) in the H¹-norm for Case II. Thus (2.1) is much more accurate than (2.8) for the regularization of the LogSE (1.1).
(v) The numerical results agree with and confirm our analytical results in Section 2.

Fig. 4.1: Convergence of the RLogSE (2.1) to the LogSE (1.1), i.e. the error ê^ε(0.5) in different norms vs the regularization parameter ε, for Case I (left) and Case II (right).
Fig. 4.2: e^ε_E(0.5) for Cases I & II, and time evolution of ‖ê^ε(t)‖ with different ε for Case I.
Fig. 4.3: Convergence of the RLogSE (2.8) to the LogSE (1.1), i.e. the error ê^ε(0.5) in different norms vs the regularization parameter ε, for Case I (left) and Case II (right).
Fig. 4.4: Convergence of the SIFD (3.4) to the RLogSE (2.1), i.e. errors ‖e^ε(0.5)‖ vs τ (with h = 75τ/64) under different ε for Case I initial data.

4.2. Convergence rate of the SIFD scheme. Fig. 4.4 shows the errors ‖e^ε(0.5)‖ versus the time step τ (with h = 75τ/64) under different ε for the Case I initial data (cf. Fig. 4.1), which confirms the error bounds in Corollary 3.2.

5. Conclusion. In order to overcome the singularity of the log-nonlinearity in the logarithmic Schrödinger equation (LogSE), we proposed a regularized logarithmic Schrödinger equation (RLogSE) with a regularization parameter 0 < ε ≪ 1 and established linear convergence between the RLogSE and the LogSE in terms of the small regularization parameter. Then we presented a semi-implicit finite difference method for discretizing the RLogSE and proved second-order convergence rates in terms of the mesh size h and the time step τ. Finally, we established error bounds of the semi-implicit finite difference method to the LogSE, which depend explicitly on the mesh size h and the time step τ as well as on the small regularization parameter ε. Our numerical results confirmed our error bounds and demonstrated that they are sharp.

* This work was partially supported by the Ministry of Education of Singapore grant R-146-000-223-112 (MOE2015-T2-2-146) (W. Bao).
46,629
[ "965568", "886" ]
[ "141319", "75" ]
01745418
en
[ "spi" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745418/file/Portal.2017.TNANO.Design%20and%20Simulation%20of%20a%20128%20kb%20Embedded%20Nonvolatile%20Memory%20Based%20on%20a%20Hybrid%20RRAM%20%28HfO2%20%2928%20nm%20FDSOI%20CMOS%20Technology..pdf
Jean-Michel Portal (email: jean-michel.portal@univ-amu.fr), Marc Bocquet, Santhosh Onkaraiah, Mathieu Moreau, Hassen Aziza, Damien Deleruyelle, Kholdoun Torki, Elisa Vianello, Alexandre Levisse, Bastien Giraud, Olivier Thomas, Fabien Clermidy

Design and Simulation of a 128 kb Embedded Nonvolatile Memory Based on a Hybrid RRAM (HfO_2)/28 nm FDSOI CMOS Technology

Keywords: Embedded non-volatile memory, memory architecture, resistive switching memory, RRAM

I. INTRODUCTION

The memory market is nowadays driven by two strong segments: the high-performance, high-density, high-capacity memories for computation in servers, and the low-energy, low-cost embedded memories devoted to autonomous connected nodes. The growth of these two segments is strongly interlinked and is at the basis of the Internet of Things market. High-performance computing applications for servers require large quantities of NVM [START_REF] Lynch | Big data: How do your data grow?[END_REF] in order to store and process the tremendous amount of data generated by the autonomous connected nodes that are spread in the environment [START_REF] Gubbia | Internet of Things (IoT): A vision, architectural elements, and future directions[END_REF]. To this aim, high-performance volatile and non-volatile memories are used. In classic computing, the first elements of the memory hierarchy are the volatile memories, SRAM and DRAM; the DRAM is used as a buffer between the NVM and the SRAM cache memories. At the end of the hierarchy are the NVM technologies, HDD [START_REF] Shiroishi | Future Options for HDD Storage[END_REF] and NAND Flash [START_REF] Helm | A 128Gb MLC NAND-Flash device using 16nm planar cell[END_REF]. The main constraints of these memories are (i) the overall power consumption (from DRAM refresh to NVM programming operations), (ii) the communication cost between processing units and memories, and (iii) the production cost per bit. Embedded memories for autonomous connected objects are more focused on reducing the energy consumption and the overall chip cost. These devices also rely on the classical memory hierarchy, with Von Neumann or Harvard architectures: SRAM memories are used to store the variables while NVM is used to store the instruction code. Embedded NVMs are based on NOR Flash technologies [START_REF] Ogura | A 90nm Floating Gate "B4-Flash" Memory Technology-Breakthrough of the Gate Length Limitation on NOR Flash Memory[END_REF] [6] [START_REF] Fastow | A 45nm NOR Flash Technology with Self-Aligned Contacts and 0.024µm2 Cell Size for Multi-level Applications[END_REF] down to 40 nm commercial CMOS technologies. For both server applications (standalone NVM) and autonomous connected nodes (embedded NVM), scaling down the technology nodes is a real struggle. NAND Flash technology complexity (air gap [START_REF] Seo | Highly reliable M1X MLC NAND flash memory cell with novel active air-gap and p+ poly process integration technologies[END_REF], vertical stacking [START_REF] Park | Three-Dimensional 128 Gb MLC Vertical nand Flash Memory With 24-WL Stacked Layers and 50 MB/s High-Speed Programming[END_REF], multiple patterning [START_REF] Pikus | Advanced multi-patterning and hybrid lithography techniques[END_REF]) and production costs are becoming unaffordable.
On the embedded side, NOR Flash faces an extremely complex structure [START_REF] Do | Scaling of split-gate flash memory and its adoption in modern embedded non-volatile applications[END_REF], and its co-integration with advanced CMOS nodes leads to reliability issues due to the high voltages needed for NVM programming operations. Moreover, scaling down floating-gate technologies leads to reduced endurance and retention [START_REF] Grupp | The bleak future of NAND flash memory[END_REF]. In order to continue the scaling of NVM technologies, for both embedded and standalone applications, emerging resistive switching technologies (i.e. RRAM) are extensively investigated [START_REF] Clermidy | Resistive memories: Which applications?[END_REF] because of their Back End of Line (BEoL) compatible structure and their high scalability [START_REF] Lee | Scaling trends and challenges of advanced memory technology[END_REF]. Among all the RRAM technologies, Oxide-Based RRAM (OxRAM), thanks to its simple stack structure, fast switching, and common materials, appears as a promising solution for Flash memory replacement [START_REF] Benoist | Advanced CMOS Resistive RAM Solution as Embedded Non-Volatile Memory[END_REF]. Several metal oxides can be used to obtain the OxRAM behavior, such as AlOx, NiOx, TaOx, TiOx or HfOx [16] [17] [18] [START_REF] Kwon | Atomic structure of conducting nanofilaments in TiO2 resistive switching memory[END_REF]. Integrating RRAMs as Flash memory replacement can be a solution either for standalone memories or for embedded memories. In the context of standalone memories, the supporting CMOS technology is totally dedicated to the memory integration and, as a consequence, the associated CMOS technology does not suffer from the memory requirements. In the context of embedded memories, CMOS technology is a limitation due to severe integration constraints. For example, in NOR Flash integration at advanced CMOS nodes, high-voltage thick-oxide transistors are integrated at the cost of additional masks, a higher thermal budget and a larger occupied area [START_REF] Tanzawa | High-voltage transistor scaling circuit techniques for high-density negative-gate channel-erasing NOR flash memories[END_REF]. The co-integration of RRAM at sub-30 nm CMOS technologies leads to reliability issues on the thin-oxide transistors, and is usually solved by using the IO transistors (a thicker gate-oxide process with no impact on the thin-oxide transistor performance) [START_REF] Fackenthal | 19.7 A 16Gb ReRAM with 200MB/s write and 1GB/s read in 27nm technology[END_REF]. The direct impact is a higher maximum voltage, but also a larger footprint (up to 15 times for an equivalent drive).
Several embedded RRAM memories based on thick gate-oxide transistors have been presented. In [START_REF] Sheu | A 4Mb embedded SLC resistive-RAM macro with 7.2ns read-write random-access time and 160ns MLC-access capability[END_REF], a 4 Mb HfO2-based RRAM memory with a write-verify technique is demonstrated in a 180 nm CMOS technology. In [START_REF] Chang | A High-Speed 7.2-ns Read-Write Random Access 4-Mb Embedded Resistive RAM (ReRAM) Macro Using Process-Variation-Tolerant Current-Mode Read Schemes[END_REF], a 4 Mb embedded RRAM memory with process-variation control circuits is demonstrated in a 130 nm CMOS technology. In [START_REF] Chang | A Low-Voltage Bulk-Drain-Driven Read Scheme for Sub-0.5 V 4 Mb 65 nm Logic-Process Compatible Embedded Resistive RAM (ReRAM) Macro[END_REF], a 4 Mb macro using a unipolar RRAM is embedded in a 65 nm CMOS technology. A 1 Mb RRAM memory embedded in 28 nm bulk CMOS is demonstrated in [START_REF] Chang | Embedded 1Mb ReRAM in 28nm CMOS with 0.27-to-1V Read Using Swing-Sample-and-Couple Sense Amplifier and Self-Boost-Write-Termination Scheme[END_REF] with write-assist circuits. However, in these papers the reliability issue affecting the MOS transistors during programming operations and during the forming step is not considered. Other studies using thick-oxide transistors with Conductive Bridge Random Access Memories (CBRAM) [START_REF] Fackenthal | 19.7 A 16Gb ReRAM with 200MB/s write and 1GB/s read in 27nm technology[END_REF] or Phase Change Memories (PCM) [START_REF] Borghi | r. a. t. a. 1. w. throughput[END_REF] are reported. In [START_REF] Shen | High-K metal gate contact RRAM (CRRAM) in pure 28nm CMOS logic process[END_REF], a test chip with 28 nm thin-gate-oxide bitcells was presented, and the MOS transistor reliability is shown for a 3 V programming voltage.

This paper presents the architecture of an embedded 128 kb memory cut based on a hybrid RRAM (HfO2) and thin-gate-oxide 28 nm Fully Depleted Silicon On Insulator (FDSOI) CMOS technology, suitable for NOR Flash replacement. Validation of the proposed architecture is performed through post-layout simulation, using an RRAM compact model calibrated on CEA-Leti RRAM samples [START_REF] Vianello | Resistive Memories for Ultra-Low-Power embedded computing design[END_REF] and implemented in the transistor-level simulator Eldo [START_REF]Mentor Graphics Website[END_REF], together with the CMOS 28 nm FDSOI design kit from STMicroelectronics. The rest of the paper is organized as follows. In the next section, the RRAM technology is briefly introduced together with the compact model calibrated on state-of-the-art devices. In Section III, the bit-cell and the memory array organization are described with the different modes of operation (FORMING/SET, RESET and READ). Section IV is dedicated to the full memory macro-cell description, including peripheral circuits, addressing hierarchy and scheduler. The validation of the macro-cell through simulation is presented in Section V. Finally, Section VI gives some concluding remarks and highlights on the proposed embedded memory.

II. RRAM OVERVIEW: TECHNOLOGY AND COMPACT MODEL

RRAMs based on HfO2 are studied as part of the bit-cell of the proposed memory architecture. The RRAM stack is composed of a 5 nm thick HfO2 resistive switching layer embedded between a TiN/Ti Top Electrode (TE) and a TiN Bottom Electrode (BE).
The resistive switching layers are deposited by Atomic Layer Deposition (ALD), whereas the metallic electrodes are deposited by Physical Vapor Deposition (PVD) [START_REF] Vianello | Resistive Memories for Ultra-Low-Power embedded computing design[END_REF]. The RRAM modeling used in this study is based on the work presented in [START_REF] Bocquet | Robust compact model for bipolar oxidebased resistive switching memories[END_REF]. This approach relies on the electric-field-induced creation/destruction of oxygen vacancies within the switching layer. The model accounts continuously for the FORMING, SET and RESET operations through a single master equation, in which the resistance is controlled by the radius of a conductive filament (namely rCF). After calibration, the model satisfactorily matches quasi-static and dynamic experimental data measured on actual HfO2-based memory elements. Moreover, to account for the variability of the RRAM technology, two corner cases were simulated. They include the two extreme behaviors observed experimentally: one favoring the SET mechanism and slowing the RESET, the other being the exact opposite. Fig. 1 shows the quasi-static behavior of the RRAM devices and the good correlation achieved by the model. The corners encompass the full range of behaviors, ensuring that the worst cases of FORMING, SET and RESET are taken into account. Fig. 2.a describes the dependence of the switching time on the amplitude of the programming pulse voltage; it is important to note that fast, low-energy operations require voltages above the standard CMOS supply. Furthermore, Fig. 2.b underlines the benefit of a high voltage applied during the RESET to ensure a high resistance value. These two central behaviors are well captured by the implemented model, and the extreme behaviors are included within the corner simulations.
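To make the voltage/time trade-off discussed above concrete, the short sketch below integrates a filament-radius state variable with exponentially field-accelerated growth and dissolution rates. It is only a minimal behavioral illustration, not the calibrated compact model of the paper; all constants (R_MAX, TAU0, V0, VC) are hypothetical placeholders chosen to reproduce the trend of Fig. 2.a qualitatively.

```python
import math

# Hypothetical constants for a behavioral bipolar RRAM sketch (not the model card).
R_MAX = 1.0          # normalized maximum filament radius
TAU0 = 1e-6          # time prefactor (s)
V0, VC = 0.15, 2.0   # voltage acceleration scale and threshold (V)

def d_rcf_dt(r_cf, v_cell):
    """Filament-radius dynamics: growth for positive bias (FORMING/SET),
    dissolution for negative bias (RESET), both exponentially voltage-accelerated."""
    if v_cell >= 0.0:
        return (R_MAX - r_cf) / TAU0 * math.exp((v_cell - VC) / V0)
    return -r_cf / TAU0 * math.exp((-v_cell - VC) / V0)

def apply_pulse(r_cf, v_cell, duration, dt=1e-9):
    """Explicit Euler integration of the master equation over one pulse."""
    for _ in range(int(duration / dt)):
        r_cf = min(max(r_cf + d_rcf_dt(r_cf, v_cell) * dt, 0.0), R_MAX)
    return r_cf

r = apply_pulse(0.0, 2.75, 100e-6)   # FORMING-like pulse grows the filament
r = apply_pulse(r, -2.4, 80e-9)      # RESET-like pulse dissolves it
r = apply_pulse(r, 2.4, 80e-9)       # SET-like pulse restores the LRS
```

With such an exponential law, raising the programming voltage by a few hundred mV shortens the switching time by orders of magnitude, which is the dependency exploited later when trading programming voltage against pulse duration.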
III. BIT-CELL AND MEMORY ARRAY ORGANIZATION

In this section, the bit-cell structure is detailed together with the memory array organization. Based on the array arrangement, the biasing conditions for the selected and unselected cells are discussed for the different modes of operation.

A. Bit-cell structure

The bit-cell is based on a 2T1R structure [START_REF] Jovanovic | Design considerations for reliable OxRAM-based non-volatile flipflops in 28nm FD-SOI technology[END_REF], as described in Fig. 3.a, and exhibits one NMOS and one PMOS to access the Top Electrode (TE) of the RRAM, whereas the Bottom Electrode (BE) is connected to the Reset Line (RL). In addition to the classical Bit-Line (BL) and Word-Line (WL), two other lines are present, namely the Reset-Line (RL) and the Set-Line (SL). As depicted in Fig. 3.b, to avoid any CMOS core process modification, the RRAM elements are introduced in the Metal Insulator Metal (MIM) stack between the first and second 8x metallization levels, in place of the decoupling capacitance. On one hand, this solution is fully compatible with the standard CMOS 28nm FDSOI process flow; on the other hand, this integration scheme comes at the cost of a larger RRAM footprint on the upper metallization level. The additional lines RL and SL are used to perform the FORMING/SET and RESET/READ operations on two different paths, as illustrated in Fig. 4: FORMING/SET voltages are applied on the top electrode of the RRAM through the PMOS (Fig. 4.a), whereas during a RESET operation ground (gnd) is applied on the top electrode through the NMOS and the RESET voltage is applied on the RL (Fig. 4.b). With this scheme, the degradation of the voltage level for the different operations is reduced, since positive voltages are applied through the PMOS transistor whereas ground is applied through the NMOS. Moreover, the compliance current during the FORMING/SET operations is defined by the sizing of the PMOS transistor and by the VDDM biasing. Given that the current flowing through the cell during the RESET operation must be above the FORMING/SET current, the NMOS transistor has to be larger than the PMOS for an equivalent biasing.

B. Memory array organization

The arrangement of the memory array is given in Fig. 5.a with the schematic view and in Fig. 5.b with the corresponding layout view. The RL and WL lines are shared per row whereas BL and SL are shared per column. To access a cell for a given operation, biasing of all four access lines is mandatory, whereas inhibition voltages must be applied on the unselected cells. Depending on the operation, unselected cells are inhibited either by turning OFF the access transistors (NMOS and PMOS) with VGS below the threshold voltage, or by keeping the voltage difference across the RRAM close to zero volt. Fig. 6 summarizes the biasing of the different access lines for four cases: the selected bit-cell, an unselected bit-cell on the same row, an unselected bit-cell on the same column, and an unselected bit-cell in the rest of the array sharing no line with the selected cell. Three potentials are used: VDD (equal to VDDM in the array), ground (gnd) and a high voltage (HV). HV takes a value close to 2.5×VDDM for the FORMING operation and a value close to 2×VDDM for the SET and RESET operations. Fig. 6 depicts a single-cell access, but this memory array organization enables all programming operations to be performed on an entire or partial row, or on a full or partial array. This feature offers the capability to reach the best compromise between speed and consumption. In the proposed memory macro-cell, the array size is defined by the bank size. A bank is composed of 1024 cells organized as 32 rows by 32 columns, in other words 32 words of 32 bits. To program a word, a RESET operation is first performed on the bits to reset, followed by a SET operation on the remaining bits of the word. Similarly to NOR Flash memory, two programming phases are used to write a word; but in our case both operations are selective, contrary to Flash memory where the erase operation is applied to all bits of the word and followed by a selective write operation.
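The two-phase selective word write described above can be sketched as follows. The data convention follows Section V (a '1' maps the bit-cell to HRS through a RESET, a '0' to LRS through a SET); `write_fn` is a hypothetical placeholder for the analog column operation, and the sketch serializes per column what the array performs in parallel on the selected row.

```python
def program_word(write_fn, data_in, width=32):
    """Two-phase selective word write: phase 1 RESETs the bit-cells whose
    input bit is '1' (HRS), phase 2 SETs the remaining bit-cells (LRS)."""
    reset_cols = [i for i in range(width) if (data_in >> i) & 1]
    set_cols = [i for i in range(width) if not (data_in >> i) & 1]
    for col in reset_cols:   # phase 1: selective RESET
        write_fn(col, "RESET")
    for col in set_cols:     # phase 2: selective SET
        write_fn(col, "SET")

# Trace the operations for a 4-bit example word
ops = []
program_word(lambda col, op: ops.append((col, op)), data_in=0b1010, width=4)
# ops == [(1, 'RESET'), (3, 'RESET'), (0, 'SET'), (2, 'SET')]
```

Unlike a Flash erase, no bit-cell outside the two selected subsets is touched, which is the selectivity advantage noted above.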
IV. MACRO-CELL ARCHITECTURE OVERVIEW

In this section the full macro-cell architecture is detailed, starting with the peripheral blocks addressing the bank array, up to the full macro-cell hierarchy including the scheduler finite state machine that generates the timing and internal signals. It is worth noting that the macro-cell communication bus is fully compatible with the AMBA 3 AHB-Lite protocol [START_REF]AMBA 3 AHB-Lite Protocol Specification Documentation v1.0[END_REF].

A. Peripheral block description

The macro-cell architecture is massively multi-bank; thus all peripheral circuits used to program and read the content of the bit-cells are introduced at bank level. In this subsection, the level shifters used for the programming operations as well as the sense amplifier used for the read operation are detailed. Banks are powered with two power supplies: VDD=VDDana, and a higher voltage named HV for the FORMING/SET/RESET operations. During programming operations, level shifters are used to drive HV on the RL and SL access lines, while classic buffers are used to drive gnd/VDDana on the BL and WL access lines. Table I summarizes the possible biasing of the different access lines.

TABLE I. VOLTAGE SWING ON THE BIT-CELL ACCESS LINES
Access lines    Voltage swing
WL              gnd to VDDana
BL              gnd to VDDana
SL              VDDana to HV
RL              gnd to HV

The architectures of the level shifters acting on SL and RL are represented in Fig. 7 and Fig. 8 respectively, with their schematic and layout views. The level shifter structures are designed with cascoded MOS transistors, since the voltage difference between any MOS transistor nodes has to remain below or equal to VDD to ensure reliability and avoid gate-oxide breakdown. Only bit-cells on a common row are activated at a time during a programming operation; thus, since the SL level shifters are shared per column, they only have to drive the FORMING/SET compliance current of a single cell, which limits the sizing of their output stage. On the contrary, the RL level shifters are shared per row; during a RESET operation they may drive up to the 32 bit-cells of the addressed word. The output stage of the RL level shifter is therefore sized to drive 32 times the current of a single cell. It can be noted on the layout view of the RL level shifter (Fig. 8.b) that a single output buffer is replicated 32 times to form the complete output stage. Similarly to the SL level shifter, the input of the RL level shifter is driven by logic gates biased in the VDD domain. In order to sense the value of each bit-cell, sense amplifiers are added on top of the corresponding columns. Applying a current IREAD through the resistive element and comparing the resulting voltage VREAD with a reference voltage VREF gives the logic value of the bit-cell. IREAD is generated thanks to a Wilson current mirror structure. An operational full-swing amplifier is used to discriminate between VREAD and VREF. Since the RRAM technology is still in development, the reference voltage VREF is provided externally for characterization purposes. Indeed, by trimming the VREF value and knowing IREAD, it is possible to extract the resistance value of all the bit-cells in the array, and thus to extract the Low Resistance State (LRS) and High Resistance State (HRS) distributions on chip. At the end of the characterization procedure, the VREF value is set between the voltage distributions corresponding to the LRS and HRS states. Moreover, an external pad gives direct access to the bit-cell content to extract the RRAM resistance value, in order to verify the value extracted from the VREF trimming. This pad can be connected to the sense amplifier of the first column (BL0) of each bank. Fig. 9.a illustrates the sense amplifier architecture, including the CDMA (Current Direct Memory Access) that provides IOUT on the first column of each bank from an external pad for characterization purposes; in that case the internal current mirror is disconnected. Finally, it is worth noting that the internal current mirror as well as the operational amplifier can be disconnected from VDD when no sensing operation is required. The layout of the sense amplifier together with the current mirror is given in Fig. 9.b.

B. Bank organization and addressing hierarchy

A bank is composed of 32 by 32 bit-cells to avoid voltage drops on the access lines. Indeed, since the whole circuit is designed with GO1 devices, the voltage budget is limited, especially during the FORMING steps. Thus, all peripheral circuits, i.e. level shifters and sensing circuitry, are implemented at bank level, as depicted in the layout view of Fig. 10. Moreover, the addressing signal gates all control signals at bank level; unselected banks are thus completely inactive, with no internal signal switching, preventing any disturb or extra power consumption. For the selected bank, an acknowledgement signal is generated to properly apply all control signals and to avoid any temporal drift between signals due to hierarchy routing. The top-analog circuit is divided into four sectors of eight pages, and each page contains four banks. Control signals, generated by the digital scheduler, are enabled at each level of the hierarchy. Moreover, buffers are inserted on all input signals (data, address and control) and tri-state buffers are inserted on the data-out signals, to prevent delay issues.
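The VREF trimming procedure can be summarized numerically: for a known read current, each trial reference voltage corresponds to a resistance threshold, so sweeping VREF reconstructs the LRS/HRS distributions and the final reference is placed between them. In the sketch below, the resistance populations and the read current are purely illustrative values, not measured data.

```python
I_READ = 2e-6  # read current (A), consistent with the sub-2uA READ current of Section V

lrs_ohms = [4e3, 5e3, 6e3]        # illustrative low-resistance population
hrs_ohms = [80e3, 120e3, 200e3]   # illustrative high-resistance population

def v_read(r_cell):
    """Bit-line voltage developed when forcing I_READ through the cell."""
    return I_READ * r_cell

v_lrs_max = max(v_read(r) for r in lrs_ohms)   # upper edge of the LRS distribution
v_hrs_min = min(v_read(r) for r in hrs_ohms)   # lower edge of the HRS distribution
V_REF = 0.5 * (v_lrs_max + v_hrs_min)          # reference centered between the two

def sense(r_cell):
    """Returns '1' for an HRS cell and '0' for an LRS cell (data convention of Section V)."""
    return 1 if v_read(r_cell) > V_REF else 0
```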
C. Scheduler description

The scheduler is a Finite State Machine (FSM), compatible with the AMBA 3 AHB-Lite protocol, which generates all the timing and internal signals necessary to drive the top-analog circuit. The timing can be trimmed, since a programmable timer gives the time reference. Indeed, the targeted bus clock is in the range of a few tens of MHz, in other words a period of a few hundreds of ns, while the FORMING/RESET/SET operations range from ms down to µs depending on the targeted HV voltage and on the RRAM technology variability. Thus, to enhance yield, the timing has to be programmable in order to fit the voltage/duration dependency of the RRAM technology for the outlier cells of the memory array. The scheduler states are Ready, Read and Write. The Write state is decomposed into a second FSM, since a write operation can be of two kinds, FORMING or RESET/SET. Moreover, a write/verify procedure is embedded in the scheduler. After a write, which is a RESET operation followed by a SET operation, a read is performed and the data are compared to the input data; in case of mismatch, up to 10 cycles of RESET/SET operations can be applied. After 10 cycles, if the write operation still fails, the HRESP signal indicates an error. This write/verify procedure has been implemented to tackle the cycle-to-cycle variability of the RRAM technology.
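The embedded write/verify loop can be sketched as below, where `program_word` and `read_word` stand for the array operations described earlier and the 10-cycle bound matches the scheduler behavior; the AHB HRESP error reporting is only mimicked by the boolean return value.

```python
def write_verify(program_word, read_word, addr, data_in, max_cycles=10):
    """Scheduler-style write/verify: repeat the RESET/SET programming until
    the read-back matches the input data, or flag an error after max_cycles."""
    for _ in range(max_cycles):
        program_word(addr, data_in)      # selective RESET then selective SET
        if read_word(addr) == data_in:   # verify step
            return True                  # HRESP would report OKAY
    return False                         # HRESP would report an ERROR
```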
V. MACRO-CELL VALIDATION THROUGH SIMULATION

The full layout of the 128kb Non-Volatile Memory based on a hybrid RRAM (HfO2) / 28nm FDSOI CMOS technology is given in Fig. 11 and its main simulated features are summarized in Table II. The validation of a bank relies on the following sequence: FORMING of each addressed word, READ of each word to verify the FORMING, PROGRAM of a checkerboard pattern, READ of each word to retrieve the checkerboard pattern, PROGRAM of the inverse checkerboard pattern, and finally READ of each addressed word of the bank to retrieve the inverse checkerboard pattern.

Fig. 12 shows the FORMING operation of a full bank: all the words are sequentially formed. It is worth noting that the FORMING process duration for a word is 100µs; for a full bank, it thus represents 3.2ms. Fig. 12 highlights the voltage, current and Conductive Filament (CF) radius variations for the first bit-cell of the first word (cell0,0) and for the last bit-cell of the last word (cell31,31). For all the bit-cells, the compliance current during FORMING is set to 62µA, representing an overall current of nearly 2mA for a full word. The evolution of the CF radius clearly shows the FORMING operation. The biasing conditions extracted from the simulation are:
• all the BL and WL remain at VDDana=1.2V during the FORMING process,
• SL is set to HVFORMING=2.75V, instead of 2.8V, due to the voltage drop in the SL level-shifter,
• RL is set to 5mV, instead of ground, due to the voltage drop in the RL level-shifter for the selected word, whereas it is set to HVFORMING for the unselected words.
The selected RRAMs are thus biased at 2.75V during FORMING. One can notice a very low leakage of 0.76µA on the already-formed cells (see ICELL(cell0,0) during the FORMING of word 31 in Fig. 12). Fig. 13 shows the simulation results for the overall READ/PROGRAM process on a bank (READ to verify FORMING, PROGRAM checkerboard pattern, READ to verify the checkerboard pattern, PROGRAM inverse checkerboard pattern, READ to verify the inverse checkerboard pattern). The functionality is validated, since the programmed patterns are successfully read. Fig. 14 and Fig. 15 highlight a PROGRAM operation and a READ operation respectively. The PROGRAM operation is divided into two steps: in a first step the bit-cells corresponding to DATA_IN='1' are RESET, whereas in a second step the bit-cells corresponding to DATA_IN='0' are SET. In Fig. 14, DATA_IN0='1', thus cell0,0 is RESET, with a current of 87µA for a voltage on the RRAM of 2.32-0.11 = 2.21V. In the SET step, the cell remains unchanged. Each step, RESET and SET, takes 80ns, for a global PROGRAM time of 170ns per word. The global current consumption ranges from 1.5mA (SET of all the cells) to 2.9mA (RESET of all the cells). It is important to notice that the RESET current, as shown in Fig. 14, is not constant during the RESET phase. Fig. 15 exhibits the two steps of a READ operation: in the first step the current source is enabled to pre-charge the bit-line for 40ns; during the second step the sense amplifier is activated for 2ns to discriminate between the HRS and LRS states. The current through the cell during the READ operation is below 2µA. The READ operation duration, together with the voltage on the RRAM (worst case 0.8V for an HRS RRAM), allows any disturb of the cell to be avoided. This is confirmed by the absence of CF radius change during the READ process presented in Fig. 15. The same simulations are performed for the corner cases obtained from the worst-case measurements on the RRAM cell, as depicted in Section II, together with the SS corner of the CMOS core process. Fig. 16 summarizes the time and energy per cell for the different operations, comparing typical (CMOS & RRAM TT) and worst-case results using the corner cases (CMOS SS & RRAM slow FORMING/SET and slow RESET corners), for the nominal voltages defined as V1 (HVFORMING=2.8V, HVSET/RESET=2.4V) and for two other voltage sets, V2 (HVFORMING=3.0V, HVSET/RESET=2.6V) and V3 (HVFORMING=3.2V, HVSET/RESET=2.8V). To guarantee the functionality of the macro-cell in the worst-case scenarios, the timing has to be adapted, due to the voltage/timing characteristic of the RRAM. For the nominal voltage configuration, timings and energy per cell are then strongly degraded versus the typical simulation, as represented in Fig. 16 (Typical (V1) versus Corners (V1)). To recover typical results for the worst-case scenarios, the voltages have to be increased, as represented in Fig. 16 by the V2 and V3 voltage sets. However, the maximal stress voltage, defined as the gate/source or gate/drain voltage, then ramps up to 1.8V for FORMING and 1.4V for SET/RESET with set V2, and respectively 2V and 1.6V with set V3. These voltages remain acceptable for the FDSOI 28nm standard logic CMOS technology. Moreover, it is interesting to note that the best efficiency is achieved with the higher voltages, but this optimization track must be used carefully considering the reliability of the standard logic MOS devices.
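The benefit of the higher voltage sets can be illustrated with the exponential switching-time law of Fig. 2.a: raising HV shortens the programming pulse faster than the instantaneous power grows, so the energy per cell decreases. The constants below are hypothetical fitting values chosen only to reproduce this trend, not extracted from the simulations.

```python
import math

T0, V0 = 1.0, 0.15   # hypothetical prefactor (s) and voltage scale (V)
I_PROG = 87e-6       # programming current, of the order of the RESET current above

def switch_time(v_hv):
    """Exponential voltage-time law, qualitatively matching Fig. 2.a."""
    return T0 * math.exp(-v_hv / V0)

for v_hv in (2.4, 2.6, 2.8):        # V1, V2 and V3 SET/RESET levels
    t = switch_time(v_hv)
    energy = v_hv * I_PROG * t       # energy per cell ~ V x I x t
    print(f"HV={v_hv:.1f} V  t={t:.1e} s  E={energy:.1e} J")
```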
VI. CONCLUSION

In this paper, a full 128kb embedded Non-Volatile Memory based on a hybrid RRAM (HfO2) / 28nm FDSOI CMOS technology is presented. The key points of the architecture are the exclusive use of standard logic MOS devices, avoiding any high-voltage MOS, a program/verify procedure to mitigate cycle-to-cycle (C2C) variability, and a direct bit-cell read access for characterization purposes. This architecture has been fully validated through an extensive set of post-layout simulations at different voltage levels, using the typical and the most pessimistic corners (MOS SS and RRAM worst FORMING/SET and RESET corners). The memory is functional at all corners for the different sets of voltages, owing to the time/voltage dependencies of the RRAM.

Figure 1: Experimental I(V) characteristics for Electroforming, Set and Reset measured on a large number of memory elements, reflecting the device-to-device variability presented in [START_REF] Vianello | Resistive Memories for Ultra-Low-Power embedded computing design[END_REF], and simulation results including the corner definitions.
Figure 2: Experimental (a) switching time for Forming, Set and Reset operations as a function of voltage and (b) RHRS as a function of the stop voltage during the Reset operation, presented in [28], with the corresponding simulation results.
Figure 3: (a) Schematic view of the bit-cell with 2T1R and four access lines (BL, WL, SL and RL). (b) Layout of the bit-cell, with the RRAM element introduced in the Metal Insulator Metal (MIM) stack to avoid CMOS process modification, at the cost of a large cell area.
Figure 4: (a) FORMING/SET are performed through the PMOS: SL is set to the positive operation voltage, RL is grounded and the NMOS transistor is off. (b) RESET is performed through the NMOS: RL is set to the positive operation voltage, BL is grounded and the PMOS transistor is off. For the READ operation, BL is set to the read voltage and RL is grounded; here also the PMOS transistor is off. To reduce voltage degradation, two different paths are used through the PMOS and the NMOS, in order to fit the bipolar behavior of the RRAM.
Figure 5: (a) Schematic view of a two-by-two bit-cell array. (b) Layout view of a two-by-two bit-cell array. The SL and BL lines are shared per column; RL and WL are shared per row.
Figure 6: (a) Array biasing conditions for a FORMING/SET operation performed on a single cell. (b) Array biasing conditions for a RESET operation performed on a single cell. (c) Array biasing conditions for a READ operation performed on a single cell. The biasing conditions of the selected cell can be extended to any cells of a row to perform parallel operations, and can also be applied to the full array to perform a global operation.
Figure 7: (a) Schematic view of the level shifter used to drive the SL access line. (b) Layout view of the level shifter used to drive the SL access line. The SL level shifter output swing is defined between VDDM and HV. The level shifter input is controlled with standard voltage logic (VDD domain).
Figure 8: (a) Schematic view of the level shifter used to drive the RL access line. (b) Layout view of the level shifter used to drive the RL access line. The RL level shifter output swing is defined between gnd and HV. The level shifter input is controlled with standard voltage logic (VDD domain).
Figure 9: (a) Schematic view of the sensing circuitry connected to a bit-line, including the read current generation, the output direct access and the operational amplifier. (b) Layout view of the 32 sensing circuits associated with the 32 bit-lines of a single bank.
Figure 10: Layout view of a full bank including the 32 by 32 bit-cells with the surrounding peripheral circuits, i.e. 32 SL level shifters below the array, 32 BL drivers and the 32 sensing circuits on top of the array, and 32 RL level shifters and 32 WL drivers on the right of the array.
Figure 11: Layout view of the full macro-cell with the top analog (right) and the scheduler (left).
Figure 12: Simulation results (corner Typical) of the FORMING process for a full bank. The current through the RRAM of cell0,0 (Word 0, Bit 0) and of cell31,31 (Word 31, Bit 31), the CF radius evolution during forming, as well as the access-line voltages of cell0,0 and cell31,31 are plotted.
Figure 13: Simulation results (corner Typical) of the full PROGRAM/READ process for a full bank. The CF radius evolution of the cell0,0 RRAM (Word 0, Bit 0) and of the cell31,31 RRAM (Word 31, Bit 31) during PROGRAM & READ, as well as the access-line voltages of cell0,0 and cell31,31 are plotted. Finally, the similarity of the DATA_IN and DATA_OUT values shows the success of the PROGRAM and READ operations.
Figure 14: Simulation results (corner Typical) of a PROGRAM operation on cell0,0.
The current and CF radius of the cell0,0 RRAM (Word 0, Bit 0) and of the cell31,31 RRAM (Word 31, Bit 31) during PROGRAM, as well as the access-line voltages of cell0,0 and cell31,31, are plotted. The PROGRAM operation is composed of two steps, RESET/SET.
Figure 15: Simulation results (corner Typical) of a READ operation for a full word. The CF radius evolution of the cell0,0 RRAM (Word 0, Bit 0) and of the cell31,31 RRAM (Word 31, Bit 31) during the READ, as well as the access-line voltages of cell0,0 and cell31,31, are plotted. The pre-charge and sense phases are detailed.
Figure 16: Operation duration and energy per cell for the typical and corner cases, for 3 different voltage sets.

Fabien Clermidy obtained his Ph.D. in microelectronics from INPG, Grenoble, in 1999 and his supervisor degree in 2011. He is a pioneer in designing Network-on-Chip based multicores. He was the leader of the second generation of Network-on-Chip based multicores dedicated to 3GPP-LTE. At this period, his team elaborated one of the first 3D multi-core prototypes embedding a WIDE-IO DRAM memory, called WIOMING. He is currently managing the digital circuit laboratory involved in the development of new architectures using emerging technologies such as 3D TSV, 3D monolithic integration and emerging memories. He has published 2 books and more than 75 journal and conference papers, and is author or co-author of several patents.

Alexandre Levisse received his B.S. (Electrical Engineering) degree in 2012 and his M.S. (Electrical Engineering) degree in 2014, both from Aix-Marseille University, France. He is currently a PhD student at CEA-Leti (Grenoble, France) and IM2NP (Aix-Marseille University, France). His research interests include emerging resistive memories, with emphasis on circuit design, architecture and crossbar architectures.

Elisa Vianello received the Ph.D.
degree in microelectronics from the University of Udine, Udine, Italy, and the Polytechnic Institute of Grenoble, Grenoble, France, in 2009. She has been a Scientist with CEA-LETI, Grenoble, since 2011.

Olivier Thomas received the M.S. degree in Electrical Engineering in 2001 and the Ph.D. degree in microelectronics in 2004. He joined the CEA-LETI Laboratory in the Center for Innovation in Micro & Nanotechnology (MINATEC), Grenoble, France, in 2005. He was first involved in the development of low-power and low-leakage design solutions for digital wireless applications in 65nm Partially-Depleted SOI technology, in collaboration with STMicroelectronics. From 2006 to 2010, he was in charge of low-power SRAM and digital design projects in thin-film SOI technologies. His work was focused on efficient and simple multiple-VT design solutions. From 2010 to 2012, he was a visiting researcher at the Berkeley Wireless Research Center (BWRC) of the University of California at Berkeley, where he worked on methodologies to characterize large-scale static/dynamic SRAM performances. Back at CEA-LETI, from 2012 to 2014, he launched and led an advanced memory design group. Since January 2015 he is the project leader of Silicon Impulse, an IC competence center helping companies to design innovative products based on the latest low-power semiconductor technologies. He is author or co-author of 75 articles in international refereed journals and conferences and 25 patents.

Damien Deleruyelle received the Ph.D. degree in micro and nanoelectronics from Aix-Marseille University in 2004, after a thesis carried out at CEA-Leti (Grenoble, France) on ultra-scaled Flash memories. In 2005 he joined the Memory group of Im2np (Institut Matériaux Microélectronique Nanosciences de Provence) and became associate professor at Aix-Marseille Université. His research topics include nanoscale electrical characterization by scanning probe microscopy and physical modeling of emerging memory devices such as RRAM. In 2016 he became Professor at the Institut des Nanotechnologies de Lyon (INSA de Lyon), where he currently works on plastic electronics.

Hassen Aziza received his B.S. and M.S. degrees in Electrical Engineering, both from the University of Marseille, France, and his Ph.D. degree in 2002 from the University of Marseille, France. He is currently associate professor at Aix-Marseille University, IM2NP laboratory (Institute of Materials, Microelectronics and Nanosciences of Provence). His research fields cover the design, test and reliability of conventional non-volatile memories (Flash & EEPROM) as well as emerging memories (resistive RAM). He is (co)author of more than 90 papers in international conferences and journals and is (co)inventor of 4 patents.

Marc Bocquet received the Ph.D. degree in micro and nanoelectronics from the University of Grenoble, Grenoble, France, in 2009. He became an Associate Professor with the University of Marseille - Polytech'Marseille in 2010, and he is a member of the Memories Team, IM2NP. He has conducted several studies on the understanding of physical mechanisms in dielectrics, linking the physical and chemical properties to the electrical performance/reliability of memory devices: flash and resistive memory.

Kholdoun Torki received the Ph.D. degree in Microelectronics from the Institut National Polytechnique de Grenoble, France, in 1990. He joined CMP as senior engineer in 1990, later joining CNRS/CMP in 1994. He has been Technical Director at CMP since 2002. His research interests include deep-submicron design methodologies, non-volatile memory CMOS co-integration, and 3D-IC integration. He has authored and co-authored more than 100 scientific papers, co-authored 2 patents, designed more than 30 ASIC circuits, and participated in or coordinated 15 European and national projects.
He is a member of the Board of Directors at iRoC Technologies.

Bastien Giraud received the Ph.D. degree in 2008 from Telecom ParisTech, France. His PhD thesis focused on SRAM design in Double-Gate FDSOI. In 2009, he was a postdoctoral researcher at UC Berkeley, working on low-power circuits and SRAM variability. Since 2010, he has worked at CEA/Leti as a circuit designer specialized in memory and low-power circuits in advanced technologies. His research interests include resilient memories with assist techniques, energy efficiency, specific design techniques and non-volatile memories. His current research is focused on ULV and robust SRAM, smart CAM, logic-in-memory and crossbars, using advanced CMOS technologies and non-volatile RRAM technologies such as CBRAM, OxRAM and PCRAM. He has published more than 25 papers in international conferences and journals. He is the main author of a book chapter and the main inventor or co-inventor of 15 patents.
38,807
[ "20388", "18361", "1029576", "18508", "177966", "174122", "841005", "786114", "755890" ]
[ "199957", "199957", "199957", "199957", "199957", "527406", "39001", "40214", "199957", "40214", "40214", "40214", "40214" ]
01745507
en
[ "spi" ]
2024/03/05 22:32:07
2012
https://hal.science/hal-01745507/file/Potal.2012.JOLPE.AuthorVer.Non-Volatile%20Flip-Flop%20Based%20on%20Unipolar%20ReRAM%20for%20Power-Down%20Applications.pdf
Jean-Michel Portal (email: jean-michel.portal@im2np.fr), Marc Bocquet, Damien Deleruyelle, Christophe Muller

Non-Volatile Flip-Flop Based on Unipolar ReRAM for Power-Down Applications

Keywords: Low-power, Power-down, Flip-Flop, Non-volatile memory, Resistive switching memory, ReRAM

In this paper, we propose a new architecture of non-volatile Flip-Flop based on a ReRAM unipolar resistive memory element (RNVFF). This architecture is proposed in the context of power-down applications: the Flip-Flop content is saved into a ReRAM memory cell before power-down and restored after power-up. To simulate such a structure, a compact model of unipolar ReRAM was developed and calibrated on best-in-class literature data. The architecture of the RNVFF, based on the insertion of a non-volatile memory block in front of a master-slave Flip-Flop, is detailed. The save and restore processes are described through the succession of the four operating modes (normal, save, read, reset) they require. Finally, the structure is fully validated through electrical simulations, for a saved data of either '0' or '1'.

INTRODUCTION

A major challenge in nomad applications is the reduction of power consumption. The mainstream of power reduction has been driven for many years by transistor downscaling and concomitant voltage reduction. A side effect of this reduction is the increase of leakage current in the sub-threshold regime of idle transistors, with more than 40% of the active-mode energy dissipation due to power leakage [START_REF] Kursun | Multi-Voltage CMOS Circuit Design[END_REF][START_REF] Sery | Life is CMOS: Why chase life after?[END_REF]. To overcome this issue, solutions based on process changes have been proposed, such as a high-κ oxide associated with a metal gate [START_REF] Robertson | High dielectric constant gate oxides for metal oxide Si transistors[END_REF]. Another well-known solution to save power is to power-down sub-circuits of a System on Chip (SoC) during the idle state. However, when sub-circuits are powered-down, the data stored in the Flip-Flops are lost, and a high power budget is required to save and restore their contents, on top of the sub-threshold leakage current. Numerous design solutions have been proposed to maintain Flip-Flop contents, such as multi-threshold-voltage MOS transistors used with power-gating techniques [START_REF] Jiao | Low-Leakage and Compact Registers with Easy-Sleep Mode[END_REF]. The basic principle used to save the Flip-Flop content during power-down relies on a retention circuit also known as a balloon circuit [START_REF] Matsuya | A 1-V high-speed MTCMOS circuit scheme for power-down application circuits[END_REF]. The scheme of a retention Flip-Flop with balloon latch is reproduced in Fig. 1 [START_REF] Matsuya | A 1-V high-speed MTCMOS circuit scheme for power-down application circuits[END_REF]. Using this technique, the master-slave Flip-Flop is connected either to a virtual ground or to VDD, while a balloon latch is connected to the real ground and VDD. During power-down, the data of the slave latch of the Flip-Flop is memorized in the balloon latch while the Flip-Flop is disconnected from the ground or VDD thanks to a switch inserted between the real and the virtual ground lines. The integration of Non-Volatile Flip-Flops (NVFF) in SoCs may also be a solution to lower power consumption.
The recent emergence of innovative low-voltage memory concepts paves the way for novel NVFF solutions, as already demonstrated with ferroelectric FeRAM [START_REF] Yan | A design of ferro-DFF for non-volatile systems[END_REF][START_REF] Wang | A Compare-and-write Ferroelectric Nonvolatile Flip-Flop for Energy-Harvesting Applications[END_REF] or magnetoresistive MRAM [START_REF] Zhao | Spin-MTJ based Non-Volatile Flip-Flop[END_REF][START_REF] Guillemenet | On the use of magnetic RAMs in field-programmable gate arrays[END_REF][START_REF] Yamamoto | Nonvolatile delay Flip-Flop based on spin-transistor architecture and its power-gating applications[END_REF] memories, and recently with bipolar memristive devices [START_REF] Robinett | A memristor-based nonvolatile latch circuit[END_REF]. Fig. 2 depicts the FeRAM-based NVFF solution of [START_REF] Wang | A Compare-and-write Ferroelectric Nonvolatile Flip-Flop for Energy-Harvesting Applications[END_REF], in which a non-volatile back-up module based on the insertion of two FeRAM memory cells is used to save and restore the Flip-Flop content during power-down. This back-up module is connected to the output of the slave stage of the Flip-Flop. The MRAM-based solution [START_REF] Zhao | Spin-MTJ based Non-Volatile Flip-Flop[END_REF] illustrated in Fig. 3 is based on the insertion of two MRAM cells in the master stage of the Flip-Flop. Here again, the modified non-volatile master stage enables data to be stored during the power-down phase. Even if both technologies are compatible with standard CMOS processes, they rely either on a complex stack of magnetic layers for MRAM or on a high-temperature crystallization of a ferroelectric oxide for FeRAM. The main purpose of this paper is to show how an emerging memory concept relying on unipolar resistive switching (namely ReRAM, standing for Resistive Random Access Memory) may also represent an interesting solution for implementation in an NVFF (RNVFF for ReRAM NVFF). This solution could benefit from the good compatibility between ReRAM and CMOS technologies. The paper is organized as follows. Section 2 is dedicated to the description of the unipolar ReRAM physical model and its calibration on best-in-class literature data. In Section 3, the save/restore processes of the RNVFF are detailed together with the architecture of the Flip-Flop. Section 4 presents simulation results that validate the efficiency of the solution. Finally, Section 5 gives concluding remarks.

UNIPOLAR RERAM PHYSICAL MODEL OVERVIEW

Introduction

NiO-based unipolar resistive switching devices (ReRAM) are good candidates for distributed memory applications due to their simple MIM (Metal/Insulator/Metal) structure, good compatibility with standard CMOS processes, low operating voltage (i.e. below 1 V in [START_REF] Kim | Electrical observations of filamentary conductions for the resistive memory switching in NiO films[END_REF]) and fast programming time (i.e. in the sub-10 ns range in [START_REF] Tsunoda | Low Power and High Speed Switching of Ti-doped NiO ReRAM under the Unipolar Voltage Source of less than 3 V[END_REF]). For the particular class of ReRAM devices relying on thermochemical mechanisms, the memory effect is due to the creation/destruction of conductive filaments (CF) within the oxide, providing two resistive states named low resistance state (LRS) and high resistance state (HRS). In unipolar ReRAM, the same voltage polarity enables switching both from HRS to LRS (set) and from LRS to HRS (reset).
The main drawback is the "electroforming" stage required to create the initial CFs within the virgin dielectric oxide. This process, requiring a higher voltage compared to the set and reset voltages, could be a strong issue when embedding ReRAM in CMOS logic. However, recent works have proposed technological solutions that enable reducing the forming voltage to the level of the set voltage, paving the way toward "forming-free" devices [START_REF] Bruchhaus | Memristive Switches with Two Switching Polarities in a Forming Free Device Structure[END_REF].

Unipolar ReRAM physical model description

The proposed RNVFF circuit relies on a compact model accounting for both set and reset operations in NiO-based unipolar resistive switching devices [START_REF] Bocquet | Self-consistent physical modeling of set/reset operations in unipolar resistive-switching memories[END_REF]. The underlying physical model takes into account two mechanisms: redox reactions (i.e. electrochemical oxidation/reduction processes) and thermal diffusion. The set operation is governed by a local reduction process leading to the creation of CFs, whereas the reset operation involves both an oxidation reaction and thermal diffusion. Nevertheless, considering the involved activation energies, the oxidation mechanism may be neglected, the diffusion process mainly governing the reset operation. The description of the set and reset operations relies on a self-consistent kinetics equation (eq. 1) linking the diffusion and reduction velocities $\nu_{diff}$ and $\nu_{red}$ to the dimensionless concentration of metallic species $C_{Ni}$:

$$\frac{dC_{Ni}}{dt} = \nu_{red} - \nu_{diff} \qquad (1)$$

The local diffusion velocity $\nu_{diff}$ (eq. 2) of the metallic species explains the thermal rupture of the CF during the reset operation [START_REF] Russo | Self-Accelerated Thermal Dissolution Model for Reset Programming in Unipolar Resistive-Switching Memory (ReRAM) Devices[END_REF]. In equation 2, $E_a$ is the activation energy governing the thermally-assisted exo-diffusion of the metallic species, $k_{diff}$ is the thermal diffusion rate and $T_{CF}$ is the temperature of the CF:

$$\nu_{diff} = k_{diff} \cdot e^{-\frac{E_a}{k_b T_{CF}}} \cdot C_{Ni} \qquad (2)$$

Besides, equation 3 gives a simplified expression of the reduction velocity $\nu_{red}$ (expressed by the classical Butler-Volmer equation [START_REF] Bard | Electrochemical Methods: Fundamentals and Applications[END_REF]), in which $\alpha$ is the asymmetry factor, $k_0$ is the reaction rate, $E_0$ is the free energy of the reaction at equilibrium, $V_{Cell}$ is the applied voltage and $T_{Ox}$ is the oxide temperature:

$$\nu_{red} = k_0 \cdot e^{-\frac{E_0 - \alpha q V_{Cell}}{k_b T_{Ox}}} \cdot (1 - C_{Ni}) \qquad (3)$$

In equation 2, the local CF temperature $T_{CF}(x)$ along the x direction increases with the applied voltage $V_{Cell}$ due to Joule heating, as described in equation 4, in which $\sigma_{CF}$ is the CF conductivity, $K_{th}$ is the CF thermal conductivity, $t_{Ox}$ is the oxide thickness and $T_{amb}$ is the ambient temperature. Solving the 1D heat equation, the CF temperature is given by:

$$T_{CF}(x) = T_{amb} + \frac{\sigma_{CF} V_{Cell}^2}{2 K_{th} t_{Ox}^2} \left( \frac{t_{Ox}^2}{4} - x^2 \right) \qquad (4)$$

Finally, it must be underlined that the present physical model accounts continuously for both the creation (set) and destruction (reset) of conductive filaments. This numerical feature is a key point for a model dedicated to implementation in computer-aided design tools.
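A direct numerical transcription of equations (1)-(4) helps visualize the model behavior; the sketch below integrates C_Ni with explicit Euler steps for a reset-like transient. All parameter values are illustrative placeholders, not the extracted model card, and reproducing the full set/reset loop would additionally require the calibrated parameters and the current-compliance behavior of the complete model.

```python
import math

KB = 8.617e-5   # Boltzmann constant (eV/K)
T_AMB = 300.0   # ambient temperature (K)

# Illustrative placeholders (not the calibrated model card)
EA, E0, ALPHA = 1.0, 0.5, 0.5                # energies (eV) and asymmetry factor
K_DIFF, K0 = 1e13, 5e8                       # rate prefactors (1/s)
SIGMA_CF, K_TH, T_OX = 2.3e5, 20.0, 10e-9    # CF conductivity (S/m), thermal cond. (W/m/K), thickness (m)

def t_cf(v_cell, x=0.0):
    """Eq. (4): parabolic CF temperature profile from Joule heating (hottest at x=0)."""
    return T_AMB + SIGMA_CF * v_cell**2 / (2 * K_TH * T_OX**2) * (T_OX**2 / 4 - x**2)

def dc_ni_dt(c_ni, v_cell):
    """Eq. (1): dC_Ni/dt = nu_red - nu_diff, with eq. (2) and eq. (3);
    the oxide temperature T_Ox is approximated by T_AMB here."""
    nu_diff = K_DIFF * math.exp(-EA / (KB * t_cf(v_cell))) * c_ni                 # eq. (2)
    nu_red = K0 * math.exp(-(E0 - ALPHA * v_cell) / (KB * T_AMB)) * (1.0 - c_ni)  # eq. (3)
    return nu_red - nu_diff

def apply_pulse(c_ni, v_cell, duration, dt=1e-10):
    """Explicit Euler integration of eq. (1) over one voltage pulse."""
    for _ in range(int(duration / dt)):
        c_ni = min(max(c_ni + dc_ni_dt(c_ni, v_cell) * dt, 0.0), 1.0)
    return c_ni

c = apply_pulse(1.0, 0.7, 50e-9)  # reset-like pulse: Joule heating dissolves the CF
```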
ReRAM model calibration and model card extraction

Before simulating circuits integrating ReRAM devices, the physical model was confronted with quasi-static and dynamic I(V) characteristics measured on actual devices [START_REF] Kim | Electrical observations of filamentary conductions for the resistive memory switching in NiO films[END_REF][START_REF] Cagli | Evidence for threshold switching in the set process of NiO-based ReRAM and physical modeling for set, reset, retention and disturb prediction[END_REF][START_REF] Lee | 2-stack 1D-1R Cross-point Structure with Oxide Diodes as Switch Elements for High Density Resistance RAM Applications[END_REF]. Fig. 4 shows quasi-static set and reset I(V) characteristics measured on NiO-based memory elements by several authors [START_REF] Kim | Electrical observations of filamentary conductions for the resistive memory switching in NiO films[END_REF][START_REF] Cagli | Evidence for threshold switching in the set process of NiO-based ReRAM and physical modeling for set, reset, retention and disturb prediction[END_REF][START_REF] Lee | 2-stack 1D-1R Cross-point Structure with Oxide Diodes as Switch Elements for High Density Resistance RAM Applications[END_REF]. The physical model shows an excellent agreement with the experimental data for both set and reset operations, which demonstrates its flexibility to match electrical data reported on various technologies. Moreover, Fig. 5 reports the experimental and simulated evolutions of the reset current Ireset as a function of the maximum current ISetMax used in the preceding set operation [START_REF] Nardi | Control of filament size and reduction of reset current below 10 μA in NiO resistance switching memories[END_REF]. The proposed model captures well the universal Ireset = f(ISetMax) trend observed on various NiO-based technologies and confirms the scalability of the reset current [START_REF] Nardi | Control of filament size and reduction of reset current below 10 μA in NiO resistance switching memories[END_REF]. Besides, as reported in ref. [START_REF] Bocquet | Self-consistent physical modeling of set/reset operations in unipolar resistive-switching memories[END_REF], the model is also able to fit the evolution of the programming voltages with the ramp speed, describing the dynamic behavior of the memory elements. Among the I(V) characteristics shown in Fig. 4, the data published by Kim et al. [START_REF] Kim | Electrical observations of filamentary conductions for the resistive memory switching in NiO films[END_REF], exhibiting switching voltages compatible with the VDD of a 65 nm CMOS technology (Fig. 4c), are selected to extract the model card for design purposes. This model card also fulfills the condition of achieving set and reset operations in 10 ns under a 1.2 V bias, as reported in ref. [START_REF] Tsunoda | Low Power and High Speed Switching of Ti-doped NiO ReRAM under the Unipolar Voltage Source of less than 3 V[END_REF].

RNVFF ARCHITECTURE WITH SAVE/RESTORE PROCESS DESCRIPTION

Introduction

In power-down applications, a Flip-Flop with non-volatile capability might be an alternative solution to power-gating techniques. The architecture of the non-volatile Flip-Flop is presented in section 3.2, while the different operating phases of the save and restore processes are described in section 3.3.
RNVFF architecture

The proposed RNVFF solution relies on the implementation of a non-volatile memory (NVM) block connected to the input of a conventional master-slave Flip-Flop. As illustrated in Fig. 6, the NVM block is composed of routing components (an input tri-state inverter and an output multiplexer). In between lies a branch that connects a central point (MEM) to VDD through two serial PMOS (MP1 and MP2) on one hand, and to ground through a ReRAM memory cell on the other hand. It has to be noticed that the area of this latter branch is reduced, since the ReRAM element may be processed in the back-end of line on top of the CMOS levels. The input tri-state inverter enables connecting and isolating the central point (MEM) to/from the input of the structure (IN). The PMOS transistors of this inverter are adequately sized to provide a current limitation through the ReRAM element when connecting the input IN to the central point MEM. The two-input multiplexer enables bypassing the NVM block when the signal READ_EN = '1'; otherwise (READ_EN = '0') the output of the NVM block is connected to the input (D) of the Flip-Flop. It is worth noting that in most SoCs, Flip-Flops integrate a scan test feature with a multiplexer on their input; therefore the multiplexer of the NVM block could be merged with the scan multiplexer to introduce a minimal delay overhead. To summarize, the structure of the NVM block is very simple, with two routing elements (the input tri-state inverter and the output multiplexer) and a branch with two transistors and one ReRAM element (i.e. a 2T/1R structure). Considering that the output multiplexer may be merged with a scan multiplexer, the area overhead introduced by the structure is one tri-state inverter and a 2T/1R branch.

Save and restore processes description

To save and restore the Flip-Flop content, four successive operating modes are required:
• the normal mode, in which the Flip-Flop works in a conventional manner;
• the save mode, in which the Flip-Flop content is saved in the NVM block;
• the read mode, in which the NVM block content is restored in the Flip-Flop;
• the reset mode, in which the ReRAM memory cell turns back to a high resistance state while the Flip-Flop works in a conventional way.

The active path in the normal operating mode is illustrated by the solid red line in Fig. 7. In this mode the ReRAM cell is isolated from the input (IN) of the structure thanks to the input tri-state inverter, which is open (SAVE_EN = '0'). It has to be mentioned that, in this operating mode, the ReRAM cell remains in its initial high resistance state (i.e. HRS). The Flip-Flop works in a conventional manner by connecting the input (IN) of the structure to the input (D) of the Flip-Flop thanks to the output multiplexer (READ_EN = '1'). The delay overhead introduced is due to the extra load of the tri-state input inverter and to the routing through the output multiplexer. The active path of the save operating mode is likewise illustrated by the solid red line in Fig. 8. In this mode the ReRAM cell is connected to the input (IN) of the structure thanks to the input tri-state inverter, which is turned on (SAVE_EN = '1'). The MEM central point is now the complement of the input (IN) value. When the tri-state inverter is activated, the ReRAM cell is either switched to its low resistance state LRS (IN = '0' and MEM = '1') or remains in the high resistance state HRS (IN = '1' and MEM = '0'). The Flip-Flop continues to work in a conventional manner and stores the value of the input (IN) thanks to the output multiplexer (READ_EN = '1'). During the save mode, the ReRAM resistance state thus reflects the value of the Flip-Flop: a logic '1' corresponds to a high resistance state whereas a logic '0' corresponds to a low resistance state. Right after the save mode, the Flip-Flop can be completely powered-down, since the ReRAM cell stores the information. After power-up, a read mode is mandatory to restore the Flip-Flop content.
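At this point, the control-signal settings of the four operating modes can be condensed into a small lookup structure, as sketched below. The RESET_EN values for the normal and save modes are assumptions ('1', keeping MP1 off), since the text only specifies RESET_EN for the read and reset modes.

```python
# Control-signal settings per operating mode (RESET_EN in normal/save is assumed).
MODES = {
    "normal": {"SAVE_EN": 0, "READ_EN": 1, "RESET_EN": 1},  # ReRAM isolated from IN
    "save":   {"SAVE_EN": 1, "READ_EN": 1, "RESET_EN": 1},  # MEM driven to not(IN)
    "read":   {"SAVE_EN": 0, "READ_EN": 0, "RESET_EN": 0},  # D driven from MEM
    "reset":  {"SAVE_EN": 0, "READ_EN": 1, "RESET_EN": 0},  # HRS recovery, FF on normal path
}

def reram_state_after_save(data_in, state="HRS"):
    """Save mode: IN='0' drives MEM high and sets the cell to LRS,
    while IN='1' keeps MEM low and the cell stays in HRS."""
    return "LRS" if data_in == 0 else state
```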
The active path in the read operating mode is illustrated in Fig. 9. In this mode, the output multiplexer (READ_EN = '0') connects the MEM node to the input (D) of the Flip-Flop, while a clock edge restores the content of the Flip-Flop. The resistive bridge (voltage divider) composed of MP1 (turned on with RESET_EN = '0') and MP2 on one side, and of the ReRAM cell on the other side, determines the voltage on the node MEM. If the ReRAM cell is in its LRS state, then the MEM voltage is pulled to ground and a '0' is restored in the Flip-Flop. In contrast, if the MEM voltage is close to VDD, a '1' is restored in the Flip-Flop. MP2 is introduced in the branch to limit the voltage on the node MEM when it is close to VDD, in order to initiate the switching of the ReRAM cell to the HRS state during the read mode if needed (i.e. when the ReRAM cell is in the LRS state). The active paths of the reset operating mode are illustrated in Fig. 10. In this mode, the Flip-Flop works again in a conventional way thanks to the output multiplexer, which connects the input (IN) of the structure to the input of the Flip-Flop (READ_EN = '1'). Doing so, the ReRAM cell is isolated from the rest of the circuit and can turn back to the HRS state, if needed, thanks to the application of a voltage close to VDD through MP1 (turned on with RESET_EN = '0') and MP2. When RESET_EN = '1', the reset process is stopped and the whole structure turns back to the normal operating mode.

Conclusion

In conclusion, it is important to note that the standard functionality of the Flip-Flop is guaranteed in all modes, except during the read mode. Moreover, the ReRAM state is HRS in all modes when the content to save is '1', with a minimal leakage current consumption. When the content to save is '0', the current consumption is restricted to the save, read and reset modes, with the switching of the ReRAM cell from HRS to LRS and back from LRS to HRS respectively. It is also important to underline that no biasing is necessary during power-off to preserve the ReRAM state.

RNVFF SIMULATION RESULTS

To validate the RNVFF functionality, the full structure is simulated with an electrical simulator, using a low-power CMOS 65 nm design kit and the unipolar ReRAM compact model fitted on best-in-class literature data. VDD is nominal for the technology and set to 1.2 V during all operating modes, except during power-off where it is set to 0 V. All operating modes are simulated for input values of '1' and '0'. MP2 has a minimal length (L=0.06 µm) and a double width (W=0.24 µm), and MP1 has minimal dimensions (L=0.06 µm and W=0.12 µm). The tri-state inverter is composed of NMOS and PMOS transistors with a double length (L=0.12 µm), to limit the current during the set process, and standard widths (WPMOS=0.15 µm, WNMOS=0.12 µm). The output multiplexer and the Flip-Flop are standard cells from the library. Fig. 11 presents the chronograms of VDD and of the control signals, i.e. SAVE_EN, READ_EN and RESET_EN, through the successive operating modes, i.e. normal mode, save mode, power-off, read mode, reset mode and again normal mode. Fig. 12 shows the chronograms of the input (IN), the Flip-Flop input (D) and the current through the ReRAM cell (IReRAM) to save and restore a logic '0'. As described in the previous section, the ReRAM is set during the save mode, with a current of 12 µA during 2 ns. During the read and reset modes, the ReRAM is reset during 5 ns, with a current decreasing from 9 µA to 0 µA. During the read mode, the input IN is set to '1' while the read process forces a '0' value at the input D of the Flip-Flop, successfully validating the save and restore processes for a data equal to '0'.
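The restore decision of the read mode follows directly from the resistive divider formed by MP1/MP2 and the ReRAM cell. The sketch below computes the MEM voltage for illustrative LRS/HRS resistances and an assumed equivalent on-resistance of the PMOS branch; none of these values are taken from the paper.

```python
VDD = 1.2        # nominal supply (V)
R_PMOS = 20e3    # assumed equivalent on-resistance of the MP1+MP2 branch (ohms)

def v_mem(r_reram):
    """Read-mode divider: MEM sits between the PMOS branch (to VDD) and the cell (to gnd)."""
    return VDD * r_reram / (R_PMOS + r_reram)

def restored_bit(r_reram, v_trip=VDD / 2):
    """A '1' is restored when MEM is close to VDD (HRS cell), a '0' otherwise (LRS cell)."""
    return 1 if v_mem(r_reram) > v_trip else 0

assert restored_bit(5e3) == 0   # LRS example -> '0' restored
assert restored_bit(1e6) == 1   # HRS example -> '1' restored
```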
Fig. 13 shows the chronograms of the input (IN), the Flip-Flop input (D) and the current through the ReRAM cell (IReRAM) to save and restore a logic '1'. As previously described, the ReRAM cell remains in HRS during the save mode, with a current below 0.1 µA during 2 ns. During the read and reset modes, the ReRAM cell remains in HRS during 5 ns, with a current decreasing from 0.4 µA to 0 µA. During the read mode, the input IN is set to '0' while the read process enforces a '1' value at the input D of the Flip-Flop, successfully validating the save and restore processes of a data equal to '1'. In conclusion, the simulation results successfully validate the functionality of the RNVFF in all operating modes. The simulations also demonstrate that the current consumption of this structure is restricted to the save and restore processes of a logic '0'; indeed, the ReRAM cell always remains in HRS when the data to save and restore is equal to '1'.

CONCLUSION

In this paper, a new architecture of non-volatile Flip-Flop based on unipolar resistive RAM is proposed. This architecture is dedicated to power-down applications, in which the content of the Flip-Flop is saved as a resistance state in a ReRAM device before power-down and restored after power-up. The overall save and restore processes are detailed together with the architecture of the proposed structure. One may notice that this architecture relies on a non-volatile memory block inserted in front of a Flip-Flop. The first advantage of such a structure is a better compatibility between the ReRAM memory element and the CMOS levels, as compared to MRAM- or FeRAM-based solutions. Moreover, the use of such a structure does not require any biasing during power-off, in contrast with retention Flip-Flops employing a balloon latch. Another point to underline is the low power consumption during all operating modes, except when the cell is set or reset (corresponding to the save and read/reset of a '0' content). Finally, the full structure is successfully validated through electrical simulation using a 65 nm CMOS design kit and the unipolar compact model calibrated on best-in-class data from the literature.

FIGURES AND TABLES

Figure 1. Architecture of a classical balloon latch used with power-gating technique (redrawn from [START_REF] Matsuya | A 1-V high-speed MTCMOS circuit scheme for power-down application circuits[END_REF]).
Figure 2. Flip-Flop architecture with a back-up module based on FeRAM memory for energy-harvesting applications (redrawn from [START_REF] Wang | A Compare-and-write Ferroelectric Nonvolatile Flip-Flop for Energy-Harvesting Applications[END_REF]).
Figure 3.
Architecture of a latch with non-volatility capability based on two magnetic tunnel junctions (MTJ) (redrawn from [START_REF] Zhao | Spin-MTJ based Non-Volatile Flip-Flop[END_REF]).
Figure 4. Experimental I(V) characteristics measured on NiO-based memory structures reported in (a) [18], (b) [19] and (c) [12], and corresponding simulations using the presented ReRAM physical model.
Figure 5. Maximum current during the reset operation (IReset) as a function of the maximum current during the preceding set operation (ISetMax). Experimental data were extracted from [20].
Figure 6. RNVFF architecture, with the non-volatile block based on a ReRAM cell connected to the Flip-Flop input D.
Figure 7. RNVFF active path during the normal operating mode.
Figure 8. RNVFF active path during the save operating mode.
Figure 9. RNVFF active path during the read operating mode.
Figure 10. RNVFF active path during the reset operating mode.
Figure 11. Chronograms of VDD and of the control signals (SAVE_EN, READ_EN, RESET_EN) through the successive operating modes.
Figure 12. Chronograms of the data signals and of the ReRAM current during the save & restore processes using the RNVFF, with IN='0'.
Figure 13. Chronograms of the data signals and of the ReRAM current during the save & restore processes using the RNVFF, with IN='1'.
25,164
[ "20388", "18361", "174122" ]
[ "199957", "199957", "199957", "199957" ]
01745537
en
[ "spi" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01745537/file/doc00028799.pdf
Ali Castaings, Alain Bouscayrol (email: alain.bouscayrol@univ-lille1.fr), Walter Lhomme, R. Trigui

Power Hardware-In-the-Loop simulation

Keywords: hardware-in-the-loop, energy management, supercapacitor, battery, fuel cell, vehicle

The urban travel demand is growing significantly. According to the International Energy Agency, the 2012 concentration of CO2 was about 40% higher than in the mid-1800s (IEA, 2013). It is therefore important to find alternatives to conventional thermal vehicles. Several solutions have been proposed, such as battery electric vehicles or Fuel Cell (FC) vehicles [START_REF] Chan | Electric, Hybrid, and Fuel-Cell Vehicles: Architectures and Modeling[END_REF]. However, each solution has limitations: FCs have power transfer issues [START_REF] Bernard | Fuel-Cell Hybrid Powertrain: Toward Minimization of Hydrogen Consumption[END_REF] while batteries have lifetime issues [START_REF] Omar | Lithium iron phosphate based battery -Assessment of the aging parameters and development of cycle life model[END_REF]. Multi-source vehicles represent an interesting alternative, as they enable taking advantage of the properties of the different sources [START_REF] Ehsani | Modern Electric, Hybrid Electric, and Fuel Cell Vehicles: Fundamentals, Theory, and Design[END_REF]. However, they are very complex systems and are therefore difficult to manage. Several works have addressed Energy Management Strategies (EMSs) of multi-source vehicles. Two approaches have been depicted [START_REF] Salmasi | Control Strategies for Hybrid Electric Vehicles: Evolution, Classification, Comparison, and Future Trends[END_REF][START_REF] Wirasingha | Classification and Review of Control Strategies for Plug-In Hybrid Electric Vehicles[END_REF]: the rule-based approach [START_REF] García | Operation mode control of a hybrid power system based on fuel cell/battery/ultracapacitor for an electric tramway[END_REF][START_REF] Thounthong | Energy management of fuel cell/battery/supercapacitor hybrid power source for vehicle applications[END_REF] and the optimization-based approach [START_REF] Yu | An innovative optimal power allocation strategy for fuel cell, battery and supercapacitor hybrid electric vehicle[END_REF][START_REF] Odeim | Power Management Optimization of a Fuel Cell/Battery/Supercapacitor Hybrid System for Transit Bus Applications[END_REF]. The main issues are related to real-time applications. Moreover, the EMSs have to respect the physical limitations of the sources (i.e. overcharge or depletion) for any driving condition. As a consequence, it is important to find testing procedures to assess the EMSs before their implementation in a real vehicle. Power Hardware-In-the-Loop (P-HIL) simulation [START_REF] Bouscayrol | Hardware-in-the-loop simulation[END_REF] has been used in several applications for testing components before their implementation in a real system. P-HIL has thus been used for testing EMSs of hybrid and electric vehicles in real-time conditions [START_REF] Allègre | Flexible real-time control of a hybrid energy storage system for electric vehicles[END_REF][START_REF] Castaings | Practical control schemes of a battery/supercapacitor system for electric vehicle[END_REF][START_REF] Odeim | Power Management Optimization of an Experimental Fuel Cell/Battery/Supercapacitor Hybrid System[END_REF]. The objective of this paper is to present a P-HIL simulation of a FC-battery-Supercapacitor (SC) vehicle.
The developed platform makes it possible to assess an EMS in real-time conditions (e.g. various driving conditions). The control organization of the P-HIL simulation is achieved using Energetic Macroscopic Representation (EMR) [START_REF] Bouscayrol | Graphic Formalisms for the Control of Multi-Physical Energetic Systems[END_REF]. The second section is devoted to the description of the P-HIL simulation. The control organization is presented in the third section. The results are given in the last section, before the conclusion.

II. P-HIL SIMULATION OF THE STUDIED SYSTEM
A. P-HIL principle
Hardware-In-the-Loop simulation consists in inserting some actual elements (hardware) in the simulation loop [START_REF] Bouscayrol | Hardware-in-the-loop simulation[END_REF] (Figure 1.a). In P-HIL simulation, the power part is split into two parts: the part under test (with its control) and the emulated part. An interface (interf. in Figure 1.a) is required to connect the simulation signals and the power signals; the interface therefore contains both power and signal elements (Figure 1.b). The emulation system must have the same behavior as the simulated system. The control references of the emulation system come from the simulation of the emulated part. Also, the interface must be faster than the emulated system so as to emulate it without delay. The next part is devoted to the control organization of the P-HIL system. Different parts have to be interconnected: the model simulations, the emulation subsystems with their control, and the tested subsystems with their control. A graphical formalism, Energetic Macroscopic Representation (EMR), is used as a tool for achieving this interconnection. First, EMR is based on the action-reaction principle, which ensures a physical connection between the elements. Second, the EMR approach is based on the causality principle, which makes it possible to deduce the control structure of the system and to use real-time models for the emulation subsystems.

III. CONTROL ORGANIZATION
A. Real part
The control of the system is achieved using Energetic Macroscopic Representation (EMR). EMR highlights the energetic properties of the components of a system in order to develop control schemes [START_REF] Bouscayrol | Graphic Formalisms for the Control of Multi-Physical Energetic Systems[END_REF]. Several pictograms are used to represent the system model (see Appendix). With the EMR approach, the control part is organized in two levels: the "local control" part and the "global control" part (i.e. the EMS). The main interest of using EMR is that the "local control" part of the system can be systematically deduced by a "mirror" effect from its EMR. The EMR and the control part of the "real part" of the system are depicted in Figure 4.

Local control part — The local control is represented by the light blue blocks in Figure 4. It manages the system components to track the reference of the DC-bus voltage; the duty cycles of the converters (αb, αfc and αsc) are then defined. In addition, the local control points out the control requirements: in the studied case, 4 sensors and 4 controllers (closed-loop controls) are required. The inversion of an accumulation element is performed via a closed-loop control (crossed blue parallelogram); a conversion element is directly inverted with an open-loop control (blue parallelogram); the inversion of a coupling element yields degrees of freedom that correspond to the outputs of the EMS (global control). A minimal sketch of this inversion principle is given after this paragraph.

Global control part — The global control part corresponds to the Energy Management Strategy (EMS), which aims to use the degrees of freedom of the control in the best way. There are two kinds of EMSs for multi-source vehicles: rule-based EMSs and optimization-based EMSs [START_REF] Salmasi | Control Strategies for Hybrid Electric Vehicles: Evolution, Classification, Comparison, and Future Trends[END_REF].

Figure 4: EMR and control organization of the "real part".

B. Emulated parts
The EMR and control organization of the emulated parts are depicted in Figure 5. The purple blocks correspond to the simulated part of the P-HIL simulation.
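To make the "inversion by mirror effect" concrete, the following minimal Python sketch (not taken from the paper; all numerical values are assumptions) inverts a single accumulation element — a smoothing inductor — through a closed-loop PI controller, as EMR prescribes for accumulation elements:

```python
import numpy as np

# Minimal sketch of EMR-style inversion of an accumulation element:
# the inductor (L di/dt = u_ref - R*i) is inverted by a closed-loop PI
# controller tracking a current reference. All values are assumptions.

L, R = 2e-3, 0.05          # inductance [H], series resistance [ohm]
kp, ki = 5.0, 200.0        # PI gains (to be tuned)
dt, T = 1e-4, 0.2          # integration step and horizon [s]

i, integ = 0.0, 0.0        # inductor current and PI integral state
i_ref = 10.0               # current reference from the upper (EMS) level

for t in np.arange(0.0, T, dt):
    err = i_ref - i
    integ += err * dt
    u_ref = kp * err + ki * integ      # inversion of the accumulation element
    i += dt * (u_ref - R * i) / L      # plant model (action-reaction chain)

print(f"current after {T} s: {i:.2f} A (reference {i_ref} A)")
```

A conversion element would instead be inverted in open loop, by algebraically solving its input-output relation for the tuning variable (here, a duty cycle).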
As can be noticed, the control references come from the simulation of the real component models (purple pictograms). Moreover, this is a reduced-scale P-HIL simulation. As a result, some adaptation coefficients are taken into account [START_REF] Allègre | Flexible real-time control of a hybrid energy storage system for electric vehicles[END_REF]. These coefficients make it possible to map the full-scale simulated models to the reference signals of the reduced-scale system (1). The reduced-scale coefficient values are given in Table 1.

IV. VALIDATION OF AN ENERGY MANAGEMENT STRATEGY
A. Principle
The experimental setup is presented in Figure 6. A dSPACE 1005 card is used as an interface between the power part and the computer board. The EMS is an optimization-based strategy: it consists in minimizing the hydrogen consumption while improving the battery lifetime [START_REF] Castaings | Comparison of energy management strategies of a battery/supercapacitors system for electric vehicle under real-time constraints[END_REF]. The first test is carried out on a standard driving cycle (WLTC class 2, low-velocity phase, Figure 7), on which the EMS parameters have been identified. This corresponds to "ideal" driving conditions. The second test is carried out using a real driving cycle (Figure 8) coming from results on an instrumented car (Tazzari Zero) [START_REF] Depature | Efficiency Map of the Traction System of an Electric Vehicle from an On-Road Test Drive[END_REF]. This test makes it possible to assess the robustness of the EMS when the driving conditions vary. The parameters of the full-scale and reduced-scale systems are given in Table 1. As shown in Figure 9, the resulting battery current profile is interesting for the battery lifetime. As depicted in Figure 10, the EMS respects the SCs voltage limitations, which is important for ensuring the system safety; this aspect has been assessed thanks to the P-HIL platform. The same trends can be noticed for the real driving cycle (Figure 11 and Figure 12). The key point is that the EMS still reaches interesting performances while ensuring the system safety. However, as the EMS parameters were not computed on this driving cycle, the SCs tend to be discharged at the end of the cycle. This can cause repeatability issues: if the same driving cycle is repeated, the results will differ from the previous run.

Table 1 (excerpt) — reduced-scale parameters:
| Component | Parameters |
| Supercapacitors | Csc = 19 F; usc-M = 44 V; usc-m = 0.65 usc-M; usc-0 = 0.9 usc-M |
| Battery | 24 cells (3.3 V / 20 Ah / 820 W); SoCb-M = 100 %; SoCb-m = 90 %; SoCb-0 = 95 %; ki-bat = 1/17; ku-bat = 1 |

Figure 1: P-HIL, (a) principle, (b) practical scheme.
Figure 2: Architecture of the studied vehicle.
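As an illustration of the reduced-scale adaptation of Equation (1), the short Python sketch below (hypothetical helper function; coefficient values taken from Table 1) maps full-scale battery signals to reduced-scale references:

```python
# Minimal sketch of the reduced-scale adaptation: full-scale simulated
# battery signals are mapped to reduced-scale references using the
# Table 1 coefficients (k_i_bat = 1/17 on current, k_u_bat = 1 on voltage).

k_i_bat, k_u_bat = 1.0 / 17.0, 1.0

def to_reduced_scale(i_bat_full, u_bat_full):
    """Map full-scale model outputs to reduced-scale emulation references."""
    return k_i_bat * i_bat_full, k_u_bat * u_bat_full

# Example: a 170 A full-scale battery current becomes a 10 A reference
i_ref, u_ref = to_reduced_scale(170.0, 79.2)
print(i_ref, u_ref)   # -> 10.0 79.2
```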
B. P-HIL organization
The objective of the P-HIL simulation is to assess the system controllability in real time and to validate the FC and SCs behavior. In the presented work, the emulated parts are the battery branch and the traction part. The corresponding emulation systems are depicted in Figure 3. For the battery branch, the battery is replaced by a SCs bank. The SCs bank has to reflect two battery characteristics: (i) the battery SoC limitations, which depend on the SCs bank size, and (ii) the battery voltage dynamics. The SCs voltage dynamics are faster than the battery voltage ones; if the battery model is accurate enough, the battery voltage dynamics can therefore be reflected by the SCs. The traction part is emulated by a current source composed of a DC-DC converter, a smoothing inductor and a SCs bank. The main dynamics of the inverter current are taken into account in the traction model (cf. section III).

Figure 3: P-HIL system architecture.
Figure 5: EMR and control organization of the emulated parts.
Figure 6: Experimental setup.
Figure 9: FC branch and traction currents.
Figure 11: FC branch and traction currents.
11,655
[ "775405", "180004" ]
[ "222114", "13338", "13338", "222114" ]
01745567
en
[ "info" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01745567/file/lasaulce-tarbouriech-v3%283%29.pdf
Samson Lasaulce email: lasaulce@l2s.centralesupelec.fr Sophie Tarbouriech email: so-phie.tarbouriech@laas.fr Information constraints in multiple agent problems with i.i.d. states

In wireless communications, the system state may represent the state of the global communication channel. The approach used is to exploit Shannon theory to characterize the achievable long-term utility region. Two scenarios are described. In the first scenario, the number of agents is arbitrary and the agents have causal knowledge about the state. In the second scenario, there are only two agents and the agents have some knowledge about the future of the state, making the knowledge non-causal.

Chapter overview
This chapter concerns the problem of coordination among agents. Technically, the problem is as follows. We consider a set of K ≥ 2 agents. Agent k has a utility, payoff, or reward function u_k(x_0, x_1, ..., x_K), where x_k, k ≥ 1, is the action of Agent k while x_0 is the action of an agent called Nature. Nature's actions correspond to the system state and are assumed to be non-controlled; more precisely, Nature corresponds to an independent and identically distributed (i.i.d.) random process. The problem studied in this chapter is to characterize the long-term utility region under various assumptions on the observations available to the agents. By long-term utility for Agent k we mean the following quantity:

U_k(σ_1, ..., σ_K) = lim_{T→+∞} (1/T) E[ Σ_{t=1}^{T} u_k(X_0(t), ..., X_K(t)) ]   (1)

where σ_k = (σ_{k,t})_{t≥1} is a sequence of functions representing the strategy of Agent k, and x_k(t) is the action chosen by Agent k at time or stage t ≥ 1, t being the time or stage index. Concerning notation, capital letters stand for random variables whereas small letters stand for their realizations. Note that, implicitly, we assume sufficient conditions (such as utility boundedness) under which the above limit exists. The functions σ_{k,t}, k ∈ {1, ..., K}, map the available knowledge to the action of the considered agent. The available knowledge depends on the information assumptions made (e.g., the knowledge of the state can be causal or non-causal). We distinguish between two scenarios. In the first scenario, agents are assumed to have some causal knowledge (in the wide sense) about the state whereas, in the second scenario, non-causal knowledge (i.e., some knowledge about the future of the state) is assumed. The second scenario is definitely the more difficult one technically, which is why only two agents are considered there. Remarkably, the long-term utility region, whenever available, can be characterized in terms of elegant information constraints. For instance, in the scenario of non-causal state information, determining the long-term utility region amounts to solving a convex optimization problem whose non-trivial constraints are the derived information-theoretic constraints.

Introduction
An important example, which illustrates well how the results reported in this chapter can be used, is given by the problem of power control in wireless networks (see Fig. 1). Each transmitter has to adapt its transmit power not only to the fluctuations of the quality of the link (or channel gain) between itself and its respective receiver but also to the transmit power levels of the other transmitters that use the same radio resources (and therefore create interference).
This problem is a multi-agent problem where the agents are the transmitters, the actions of the agents are their transmit power levels, and the system state is given by the set of channel gains of the various links in presence; channel gains are typically non-controlled variables (they do not depend on the transmit power levels) and evolve in a random manner; in practice, each transmitter has a partial and imperfect knowledge of the system state. Now, if the agents (namely, the transmitters in the considered example) have a certain performance criterion, which will be referred to as a utility function in the general setup considered in this chapter, the important problem of knowing the best achievable utilities arises. For instance, a transmitter might be designed to maximize its communication rate. The best data rate of a given transmitter would be obtained if all the other transmitters were silent (i.e., not transmitting) and if the transmitter perfectly adapted its power to the channel gain fluctuations of the link between itself and its intended receiver. Obviously, in real life, several transmitters transmit at the same time, hence the need to coordinate as well as possible, which leads to the problem of characterizing the best achievable performance in terms of coordination. This precisely corresponds to the problem of characterizing the long-term utility region, i.e., the set of achievable points (U_1, U_2, ..., U_K) for a given definition of the strategies. In Sections 3 and 4, we consider two different definitions for the strategies, each of them corresponding to a given observation structure, that is, to given information assumptions.

Fig. 1: The problem of power control in wireless networks is a typical application for the results provided in this chapter. The agents are the transmitters, the agents' actions are given by the transmit power levels, and an agent's utility function may be its communication rate with its intended receiver.

General problem formulation
This chapter aims at describing a few special instances of a general problem which has been addressed in several recent works [START_REF] Larrousse | Coded Power Control: Performance Analysis[END_REF], [START_REF] Larrousse | Implicit coordination in two-agent team problems. Application to distributed power allocation[END_REF], [START_REF] Larrousse | Coordination in state-dependent distributed networks: The two-agent case[END_REF], [START_REF] Larrousse | Coordination in State-Dependent Distributed Networks[END_REF], [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF], [START_REF] Larrousse | Coordination in distributed networks via coded actions with application to power control[END_REF], [START_REF] Treust | Joint empirical coordination of source and channel[END_REF]. We consider K ≥ 2 agents, where Agent k ∈ {1, ..., K} produces the time-t action x_k(t) ∈ X_k for t ∈ {1, ..., T}, T ≥ 1, the set X_k representing the set of actions of Agent k. Each agent has access to some observations associated with the chosen actions and the realization of a random process {X_{0,t}}_{t=1}^T = (X_{0,1}, ..., X_{0,T}) ∈ X_0^T. In the motivating example described in the introduction, the random process is given by the global wireless channel state, i.e., the set of qualities of all the links in presence. In a control problem, the random process may represent a non-controlled perturbation or some uncertainty.
All agents' actions and the random process also affect the agents' individual stage (or instantaneous) utility functions u_1, ..., u_K, where for all k ∈ {1, ..., K} the function u_k writes:

u_k : X_0 × X_1 × ... × X_K → ℝ, (x_0, x_1, ..., x_K) ↦ u_k(x_0, x_1, ..., x_K).   (2)

One of the main goals of the chapter is to explain how to determine the set of feasible expected long-term utilities

U_k^(T) = E[ (1/T) Σ_{t=1}^T u_k(X_{0,t}, X_{1,t}, ..., X_{K,t}) ]   (3)

that are reachable by some strategies for the agents. The set of feasible utilities is fully characterized by the set of feasible averaged joint probability distributions on the (K+1)-uples {(X_{0,t}, X_{1,t}, ..., X_{K,t})}_{t=1}^T. Indeed, denoting by P_{X_{0,t} X_{1,t} ... X_{K,t}} the joint probability distribution of the time-t (K+1)-uple (X_{0,t}, X_{1,t}, ..., X_{K,t}), we have

U_k^(T) = (1/T) Σ_{t=1}^T E[u_k(X_{0,t}, X_{1,t}, ..., X_{K,t})]
        = (1/T) Σ_{t=1}^T Σ_{x_0,...,x_K} P_{X_{0,t}...X_{K,t}}(x_0, x_1, ..., x_K) u_k(x_0, x_1, ..., x_K)
        = Σ_{x_0,...,x_K} u_k(x_0, x_1, ..., x_K) (1/T) Σ_{t=1}^T P_{X_{0,t}...X_{K,t}}(x_0, x_1, ..., x_K).

Therefore the problem of characterizing the long-term utility region amounts to determining the set of averaged distributions

P^(T)(x_0, x_1, ..., x_K) = (1/T) Σ_{t=1}^T P_{X_{0,t} X_{1,t} ... X_{K,t}}(x_0, x_1, ..., x_K)   (4)

that can be induced by the agents' strategies; a short numerical check of this reduction is sketched after this paragraph. For simplicity, and in order to obtain closed-form expressions, we shall focus on the case where T → ∞ [START_REF] Gossner | Optimal use of communication resources[END_REF], [START_REF] Larrousse | Coded Power Control: Performance Analysis[END_REF]. We consider two types of scenarios with two different observation structures. In the first scenario, referred to as the non-causal state information scenario, the agents observe the system states non-causally: at each stage t ∈ {1, ..., T} they have some knowledge about the entire state sequence X_0^T = (X_{0,1}, ..., X_{0,T}). In the second scenario, referred to as the causal state information scenario, the agents learn the states only causally and therefore, at any stage t, the agents have some knowledge about the sequence X_0^t = (X_{0,1}, ..., X_{0,t}). Throughout the chapter, we use the shorthand notations A^m and a^m for the tuples (A_1, ..., A_m) and (a_1, ..., a_m), when m is a positive integer, and we assume that all the alphabets such as X_k are finite.

3 Coordination among agents having causal state information
Limiting performance characterization
Firstly, we define the information structure under consideration. At every instant or stage t, Agent k is assumed to have an image, or partial observation, S_{k,t} ∈ S_k of the Nature state X_{0,t} with respect to which all agents are coordinating. In the wireless power control example described in the introduction, this might be local channel state information, e.g., a noisy estimate of the direct channel between the transmitter and the associated receiver. The observations S_{k,t} are assumed to be generated by a memoryless channel; by memoryless it is meant that the joint conditional probability on sequences of realizations factorizes as the product of individual conditional probabilities. Denoting by Γ_k the transition probability of the observation structure of Agent k (the marginal of the joint kernel Γ), the memoryless condition can be written as:

P(s_k^T | x_0^T) = Π_{t=1}^T Γ_k(s_k(t) | x_0(t)).   (5)
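The reduction above can be checked numerically; the short Python sketch below (toy alphabets and random per-stage distributions, all assumptions) verifies that averaging per-stage expected utilities coincides with taking the expectation under the averaged distribution P^(T) of Equation (4):

```python
import numpy as np

# Sketch of the reduction above: the expected long-term utility only
# depends on the time-averaged joint distribution of (X0, X1, X2).
rng = np.random.default_rng(0)
n0, n1, n2, T = 2, 2, 2, 5                     # |X0|, |X1|, |X2|, horizon

# Per-stage joint distributions P_{X0,t X1,t X2,t} (random toy data)
P_t = rng.random((T, n0, n1, n2))
P_t /= P_t.sum(axis=(1, 2, 3), keepdims=True)

u = rng.random((n0, n1, n2))                   # stage utility u_k(x0, x1, x2)

# Direct average of per-stage expectations ...
U_direct = np.mean([np.sum(P_t[t] * u) for t in range(T)])

# ... equals the expectation under the averaged distribution (Eq. (4))
P_bar = P_t.mean(axis=0)
U_avg = np.sum(P_bar * u)

assert np.isclose(U_direct, U_avg)
```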
The strategy, or sequence of decision functions, of Agent k is defined by:

σ_{k,t} : S_k^t → X_k, (s_k(1), s_k(2), ..., s_k(t)) ↦ x_k(t)   (6)-(7)

where S_k is the observation alphabet of Agent k. As mentioned in Section 2, the problem of characterizing the long-term utility region amounts to determining the achievable correlations, measured in terms of joint distributions, hence the notion of implementability of a distribution.

Definition 1 (Implementability). The probability distribution Q(x_0, x_1, ..., x_K) is implementable if there exist strategies (σ_{1,t})_{t≥1}, ..., (σ_{K,t})_{t≥1} such that, as T → +∞, for all (x_0, ..., x_K),

(1/T) Σ_{t=1}^T P_{X_{0,t}...X_{K,t}}(x_0, ..., x_K) → Q(x_0, ..., x_K)   (8)

where P_{X_{0,t}...X_{K,t}} is the joint distribution induced by the strategies at stage t.

The following theorem is precisely based on the notion of implementability and characterizes the achievable long-term utilities that are implementable under this information structure [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF]; for this, we first define the weighted utility function w as a convex combination of the individual utilities u_k:

w = Σ_{k=1}^K λ_k u_k.   (9)

Theorem 1 [5]. Assume the random process X_{0,t} to be i.i.d. with probability distribution ρ, and the information S_{k,t} available to the transmitters to be the output of a discrete memoryless channel obtained by marginalizing the joint conditional probability Γ. An expected payoff w̄ is achievable in the limit T → ∞ if and only if it can be written as:

w̄ = Σ_{x_0, x_1, ..., x_K, u, s_1, ..., s_K} ρ(x_0) P_U(u) Γ(s_1, ..., s_K | x_0) [Π_{k=1}^K P_{X_k|S_k,U}(x_k | s_k, u)] w(x_0, x_1, ..., x_K)   (10)

where U is an auxiliary variable, which can be optimized, and P_{X_k|S_k,U}(x_k|s_k,u) is the probability that Agent k chooses action x_k after observing (s_k, u). The auxiliary variable U is an external lottery known to the agents beforehand, which can be used to achieve better coordination, e.g., in the presence of individual constraints or at equilibrium. Theorem 1 allows us to find all the achievable utility vectors (U_1, ..., U_K). Indeed, the long-term utility region being convex (this readily follows from a time-sharing argument), its Pareto boundary can be found by maximizing the weighted utility w. Of course, there remains the problem of determining the strategies allowing one to operate at a given arbitrary point of the utility region. Since this problem is non-trivial and no general methodology exists for it, we provide an algorithm which allows one to find suboptimal strategies. Indeed, the associated multilinear optimization problem is too complex to be solved exactly; to overcome this, we resort to an iterative technique which is much less complex but suboptimal.

An algorithm to determine suboptimal strategies
One of the merits of Theorem 1 is to provide the best performance achievable in terms of long-term utilities when agents have an arbitrary observation structure. However, Theorem 1 does not provide practical strategies which would allow a given utility vector to be reached. Finding "optimal" strategies consists in finding good sequences of functions as defined per [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF], which is an open and promising direction to be explored.
More pragmatically, the authors of [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF] proposed to restrict attention to stationary strategies, which are merely functions of the form f_k : S_k → X_k. This choice is motivated by practical considerations such as computational complexity, and it is also coherent with the current state of the literature; the water-filling solution is a special instance of this class of strategies. To find good decision functions, the idea proposed in [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF] is to exploit Theorem 1. This is precisely the purpose of this section. The first observation we make is that the best performance only depends on the vector of conditional probabilities (P_{X_1|S_1,U}, ..., P_{X_K|S_K,U}) and on the auxiliary variable distribution P_U, the other quantities being fixed. It is therefore relevant to try to find an optimal vector of lotteries over the possible actions and to use it to take decisions. Since this task is typically computationally demanding, a possible and generally suboptimal approach consists in applying a distributed algorithm to maximize the expected weighted utility. The procedure proposed in [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF] is to use the sequential best-response dynamics (see e.g., [START_REF] Lasaulce | Game Theory and Learning for Wireless Networks: Fundamentals and Applications[END_REF]). The idea is to fix all the variables (which are probability distributions here) except one and to maximize the expected weighted utility with respect to the only remaining degree of freedom; this operation is then repeated by considering another variable. The key observation is that, when the distributions of the other agents are fixed, the best distribution for Agent k boils down to a function of s_k, giving us a candidate decision function which can be used in practice. To describe the algorithm of [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF] (see also Fig. 2), we first rewrite the expected weighted utility in the following manner:

W = Σ_{x_0, x_1, ..., x_K, u, s_1, ..., s_K} ρ(x_0) P_U(u) Γ(s_1, ..., s_K | x_0, x_1, ..., x_K) [Π_{k=1}^K P_{X_k|S_k,U}(x_k | s_k, u)] w(x_0, x_1, ..., x_K)   (11)-(12)
  = Σ_{i_k, j_k, u} δ_{i_k, j_k, u} P_{X_k|S_k,U}(x_{i_k} | s_{j_k}, u)   (13)

where i_k, j_k, u are the respective indices of x_k, s_k, u and

δ_{i_k, j_k, u} = Σ_{i_0} ρ(x_{i_0}) Γ_k(s_{j_k} | x_{i_0}) Σ_{i_{-k}} w(x_{i_0}, x_{i_1}, ..., x_{i_K}) Σ_{j_{-k}} [Π_{k'≠k} Γ_{k'}(s_{j_{k'}} | x_{i_0}) P_{X_{k'}|S_{k'},U}(x_{k'} | s_{k'}, u)] P_U(u)   (14)

where i_{-k}, j_{-k} denote all the indices other than i_k, j_k: they are summed over while i_k, j_k are held constant. To make the description of the algorithm clearer, we have also assumed the independence of the observation channels as well as the independence of the observation signals from the strategies chosen by the agents, i.e., Γ(s_1, ..., s_K | x_0, x_1, ..., x_K) = Γ_1(s_1 | x_0) × ... × Γ_K(s_K | x_0). Written in this form, optimizing the expected weighted utility in a distributed manner amounts, for every agent, to assigning probability 1 to the action achieving the optimal coefficient δ_{i_k, j_k, u}, and every player does so in turn.
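The Python sketch below illustrates the spirit of this sequential best-response procedure on a toy two-agent instance (no auxiliary lottery U, binary alphabets, random data); it is not the paper's implementation, only a minimal rendering of the idea that each agent, in turn, puts probability 1 on the action maximizing its δ coefficient:

```python
import numpy as np

# Toy sketch of the sequential best-response dynamics described above
# (two agents, no auxiliary lottery U, binary alphabets; toy data).
rng = np.random.default_rng(1)
n0 = n = 2                                    # |X0| and |S_k| = |X_k| = 2
rho = np.array([0.4, 0.6])                    # state distribution
G1 = G2 = np.array([[0.9, 0.1], [0.1, 0.9]])  # Gamma_k(s_k | x0)
w = rng.random((n0, n, n))                    # weighted utility w(x0, x1, x2)

# Decision rules as deterministic maps f_k : s_k -> x_k (initialized to 0)
f1, f2 = np.zeros(n, dtype=int), np.zeros(n, dtype=int)

def expected_w(f1, f2):
    return sum(rho[x0] * G1[s1, x0] * G2[s2, x0] * w[x0, f1[s1], f2[s2]]
               for x0 in range(n0) for s1 in range(n) for s2 in range(n))

for _ in range(10):                           # best-response iterations
    for s1 in range(n):                       # Agent 1 responds to f2
        f1[s1] = max(range(n), key=lambda a: sum(
            rho[x0] * G1[s1, x0] * G2[s2, x0] * w[x0, a, f2[s2]]
            for x0 in range(n0) for s2 in range(n)))
    for s2 in range(n):                       # Agent 2 responds to f1
        f2[s2] = max(range(n), key=lambda a: sum(
            rho[x0] * G1[s1, x0] * G2[s2, x0] * w[x0, f1[s1], a]
            for x0 in range(n0) for s1 in range(n)))

print(f1, f2, expected_w(f1, f2))
```

Since both agents maximize the same weighted utility, each inner maximization can only increase W, which is the intuition behind the convergence argument mentioned next.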
Fig. 2: Pseudo-code of the algorithm proposed in [START_REF] Agrawal | A framework for decentralized power control with partial channel state information[END_REF] to find suboptimal strategies.

To conclude, note that the above algorithm always converges. This can be proved, e.g., by induction or by invoking an exact potential game property (see e.g., [START_REF] Mondrere | Potential games[END_REF], [START_REF] Lasaulce | Game Theory and Learning for Wireless Networks: Fundamentals and Applications[END_REF]).

4 Coordination between two agents having non-causal state information
Limiting performance characterization
As explained previously, the problem of characterizing the utility region in the case where the state is known non-causally to the agents is much more involved technically. Even in the case of two agents, one may face an open problem, depending on the observation structure assumed for the agents. Here, we consider an important case for which the problem can be solved, as shown in [START_REF] Larrousse | Coordination in State-Dependent Distributed Networks[END_REF]; therein, the authors consider an asymmetric observation structure. In the case of non-causal state information, agents' strategies are sequences of functions defined as follows. For Agent 1 the strategy is defined by:

σ_{1,t} : S_1^T × Y_1^{t-1} → X_1, (s_1(1), ..., s_1(T), y_1(1), ..., y_1(t-1)) ↦ x_1(t)   (15)-(16)

and for Agent 2 the strategy is defined by:

σ_{2,t} : S_2^t × Y_2^{t-1} → X_2, (s_2(1), ..., s_2(t), y_2(1), ..., y_2(t-1)) ↦ x_2(t)   (17)-(18)

where y_k(t) ∈ Y_k is the observation Agent k has about the triplet (x_0(t), x_1(t), x_2(t)), whereas s_k(t) ∈ S_k is the observation Agent k has about the state x_0(t). Note that distinguishing between the two observations s_k and y_k is instrumental. Indeed, it does not make any sense physically to assume that an agent might have some future knowledge about the actions of the other agents, which is why the feedback signal is strictly causal. On the other hand, assuming some knowledge about the future of the non-controlled state x_0 makes perfect sense, as motivated in the chapter abstract and in the works quoted in the list of references. More precisely, the observation is assumed to be the output of a memoryless channel whose transition law is denoted by Γ:

Pr[Y_1(t) = y_1(t), Y_2(t) = y_2(t) | X_0^t = x_0^t, X_1^t = x_1^t, X_2^t = x_2^t, Y_1^{t-1} = y_1^{t-1}, Y_2^{t-1} = y_2^{t-1}] = Γ(y_1(t), y_2(t) | x_0(t), x_1(t), x_2(t)).   (19)

We now provide the characterization of the set of implementable probability distributions for the considered non-causal strategies.

Theorem 2 [START_REF] Larrousse | Coordination in State-Dependent Distributed Networks[END_REF]. The distribution Q is implementable if and only if it satisfies the following condition²:

I_Q(S_1; U) ≤ I_Q(V; Y_2 | U) − I_Q(V; S_1 | U)   (20)

where U and V are auxiliary random variables and Q is any joint distribution that factorizes as

Q(x_0, s_1, s_2, u, v, x_1, x_2, y_1, y_2) = ρ(x_0) P_{S_1 S_2 | X_0}(s_1, s_2 | x_0) P_{U V X_1 | S_1}(u, v, x_1 | s_1) P_{X_2 | U S_2}(x_2 | u, s_2) Γ(y_1, y_2 | x_0, x_1, x_2).   (21)

In practice, to plot the utility region, one typically has to solve a convex optimization problem. To be illustrative, we consider the special case of [START_REF] Larrousse | Coded Power Control: Performance Analysis[END_REF], namely Y = X_1.
Denoting by H the entropy function, the problem of finding the Pareto frontier of the utility region exactly corresponds to solving the following optimization problem:

minimize   −Σ_{x_0,x_1,x_2} Q(x_0, x_1, x_2) w(x_0, x_1, x_2)
subject to H_Q(X_0) + H_Q(X_2) − H_Q(X_0, X_1, X_2) ≤ 0
           −Q(x_0, x_1, x_2) ≤ 0 for all (x_0, x_1, x_2)
           −1 + Σ_{x_0,x_1,x_2} Q(x_0, x_1, x_2) = 0
           −ρ(x_0) + Σ_{x_1,x_2} Q(x_0, x_1, x_2) = 0 for all x_0.

The above problem can be shown to be convex (see [START_REF] Larrousse | Coded Power Control: Performance Analysis[END_REF]). In the next section we exploit this result to assess the performance gain brought by implementing coordination for distributed power control in wireless networks.

Application to distributed power control
Here we apply the results of the previous section to the wireless power control problem. A flat-fading interference channel (IC) with two transmitter-receiver pairs is considered. Transmissions are assumed to be time-slotted and synchronized; the time-slot or stage index is denoted by t ∈ ℕ*. For k ∈ {1, 2} and ℓ = −k (−k stands for the terminal other than k), the signal-to-interference-plus-noise ratio (SINR) at Receiver k on a given stage writes as

SINR_k = g_{kk} x_k / (σ² + g_{ℓk} x_{−k})

where x_k ∈ X_k^{IC} = {0, P_max} is the power level chosen by Transmitter k, g_{kℓ} represents the channel gain of link (k, ℓ), and σ² the noise variance. If Transmitter 1 is fully informed of x_0 = (g_11, g_12, g_21, g_22) for the next stage and Transmitter 2 has no transmit CSI, while both transmitters want to maximize the average of a common stage payoff w_IC(x_0, x_1, x_2) = Σ_{k=1}^2 f(SINR_k(x_0, x_1, x_2)), there may be an incentive for Transmitter 1 to inform Transmitter 2 of what to do on the next stage; a typical choice for f is f(a) = log(1 + a). Since Transmitter 1 knows the optimal pair of power levels to be chosen on the next stage, say (x_1*, x_2*) ∈ arg max_{(x_1,x_2)} w(x_0, x_1, x_2), a simple coded power control (CPC) policy for Transmitter 1 consists in transmitting on stage t at the level Transmitter 2 should transmit at on stage t+1. Therefore, if Transmitter 2 is able to observe the actions of Transmitter 1, power levels will be optimally tuned half of the time. Such a simple policy may outperform (in terms of average payoff) pragmatic PC policies such as the one for which the maximum power level is always chosen by both transmitters ((x_1, x_2) = (P_max, P_max) is the Nash equilibrium of the static game whose individual utilities are u_k = f(SINR_k)). The channel gain of the link between Transmitter k and Receiver ℓ is assumed to be Bernoulli distributed: g_{kℓ} ∈ {g_min, g_max} is i.i.d. with P(g_{kℓ} = g_min) = p_{kℓ}. The utility function is either f(a) = log(1 + a) or f(a) = a. We define SNR[dB] = 10 log_10(P_max/σ²) and set g_min = 0.1, g_max = 1.9, σ² = 1. The low and high interference regimes (LIR and HIR, respectively) are defined by (p_11, p_12, p_21, p_22) = (0.5, 0.9, 0.9, 0.5) and (p_11, p_12, p_21, p_22) = (0.5, 0.1, 0.1, 0.5). At last, Y ≡ X_1 and we define two reference PC policies: the full power control (FPC) policy, x_k = P_max at every stage; and the semi-coordinated PC (SPC) policy, x_2 = P_max, x_1† ∈ arg max_{x_1} w_IC(x_0, x_1, P_max).
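A possible numerical rendering of this convex program (a sketch assuming the CVXPY library; all alphabet sizes and utility values below are toy assumptions) uses the fact that, since the X_0-marginal is fixed to ρ, the information constraint rewrites as Σ Q log(Q / Q_{X_2}) ≤ −H(ρ), a relative entropy that is DCP-representable:

```python
import numpy as np
import cvxpy as cp

n0, n1, n2 = 2, 2, 2
N = n0 * n1 * n2
rng = np.random.default_rng(2)
w = rng.random(N)                      # w(x0, x1, x2) flattened (toy data)
rho = np.array([0.5, 0.5])             # i.i.d. state distribution
H_rho = -np.sum(rho * np.log(rho))

# Index helpers: flat index i <-> (x0, x1, x2), C order
idx = np.arange(N).reshape(n0, n1, n2)

# Linear maps: A0 @ q is the X0-marginal; B @ q broadcasts the
# X2-marginal back to the full (x0, x1, x2) shape.
A0 = np.zeros((n0, N)); A2 = np.zeros((n2, N))
for x0 in range(n0):
    A0[x0, idx[x0].ravel()] = 1.0
for x2 in range(n2):
    A2[x2, idx[:, :, x2].ravel()] = 1.0
B = A2.T @ A2                          # (B @ q)[i] = Q_X2(x2(i))

q = cp.Variable(N, nonneg=True)
constraints = [cp.sum(q) == 1,
               A0 @ q == rho,                             # fixed X0-marginal
               cp.sum(cp.rel_entr(q, B @ q)) <= -H_rho]   # constraint (20)
prob = cp.Problem(cp.Maximize(w @ q), constraints)
prob.solve()
print("optimal expected payoff:", prob.value)
```

The rewriting follows from H_Q(X_2) − H_Q(X_0, X_1, X_2) = Σ Q log(Q / Q_{X_2}) and H_Q(X_0) = H(ρ); since both arguments of the relative entropy are affine in q, the constraint is convex.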
Figures 3 and 4 depict the relative gain (in %) in terms of average payoff versus SNR[dB] obtained by costless optimal coordination and by information-constrained coordination. Compared to FPC, the gains are very significant whatever the interference regime, provided the SNR takes realistic values. Compared to SPC, the gain is of course less impressive, since SPC is precisely a coordinated PC scheme; but in the HIR, and when the communication cost is negligible, gains as high as 25% can be obtained with f(a) = log(1 + a) and 45% with f(a) = a.

Conclusion
In this chapter, we have described an information-theoretic framework to characterize the limiting performance of a multiple-agent problem. More precisely, the theoretical performance analysis has been conducted in terms of the long-term utility region. We have seen that the problem amounts to finding the set of implementable joint distributions over the system state and actions. In both the causal and non-causal state information scenarios, auxiliary random variables appear in the characterization of implementable joint distributions. To assess numerically the limiting performance for given utility functions, an optimization problem has to be solved. In the causal state information scenario, the problem is multilinear and the challenge stems from the dimension of the vectors involved. In the non-causal state information scenario, the problem to be solved is convex; more precisely, the information constraint function, which translates the agents' capabilities in terms of coordination, is a convex function of the joint distribution. Note that although the state is not controlled and evolves randomly, the general problem of characterizing the utility region for an arbitrary number of agents is not trivial. Of course, the problem is even more difficult in the case of controlled states, which therefore constitutes one possible non-trivial extension of the results reported in this chapter. Another interesting research direction would be to consider the case where the state and actions are continuous. A first attempt at this has been made in [START_REF] Agrawal | Implicit coordination in two-agent team problems[END_REF]. Interestingly, the corresponding problem can be shown to be strongly connected to the famous Witsenhausen problem [START_REF] Witsenhausen | A counterexample in stochastic optimum control[END_REF], [START_REF] Sahai | Demystifying the Witsenhausen counterexample[END_REF], which is a typical decentralized control problem where control and communication intervene in an intricate manner.

Fig. 3: Relative gain in terms of expected payoff ("CPC/FPC − 1" in [%]) vs SNR[dB] obtained with CPC (with and without communication cost) when the reference power control policy is to transmit at full power (FPC). Curves: costless communication and information-constrained expected payoff, in the LIR and HIR (the f(a) = a case is shown for the HIR).
Fig. 4: Same as Fig. 3, but the reference power control policy is the semi-coordinated power control policy (SPC), which is already a CPC policy. The top curve is obtained with f(a) = a.

Footnote 2: The notation I_Q(A; B) indicates that the mutual information is computed with respect to the probability distribution Q.
27,556
[ "1068236", "14668" ]
[ "1289", "388529" ]
01745578
en
[ "spi" ]
2024/03/05 22:32:07
2007
https://hal.science/hal-01745578/file/Bocquet%20-%20ICMTD07%20-%20fixed%20charge%20%26%20trapping%20-%20HfAlO%20IPD.pdf
M Bocquet email: marc.bocquet@cea.fr G Molas H Grampeix J Buckley F Martin J P Colonna M Gély G Ghibaudo B De Salvo S Deleonibus Intrinsic fixed charge and trapping properties of HfAlO interpoly dielectric layers

Introduction
In order to meet the performance requirements of future generations of Flash memory [1], one of the next major improvements will concern the scaling of the InterPoly Dielectric (IPD) stack. For the 45nm and 35nm nodes, in order to compensate for the loss of the vertical sidewalls of the poly-Si floating gate and to keep the coupling ratio high [START_REF] Alessandri | Proc. Of the 208th ECS Meeting[END_REF][START_REF] Houdt | [END_REF], the IPD thickness should be reduced. In a previous work [START_REF] Molas | Proc. of ESSDERC[END_REF], we proposed HfAlO high-k materials to replace the nitride layer in ONO interpoly dielectric stacks for future Flash memories, arguing the advantages both in terms of coupling and insulating properties. We showed that the leakage current was strongly governed by trapping in the high-k layer, with a strong temperature activation. Indeed, a Poole-Frenkel conduction, probably assisted by the traps in the HfAlO layer, was identified. In this paper, we further develop the analysis of HfAlO high-k materials embedded in OHO stacks, focusing on the trapping properties and fixed charges. In particular, we investigate: (1) the intrinsic negative fixed charge in the oxides, (2) the trapping phenomena which take place in high-k dielectrics with various compositions and thicknesses during a gate stress, (3) the retention properties of these layers, and (4) finally, we present simulations based on a Shockley-Read-Hall (SRH) approach which allow us to model the electron trapping in the OHO stacks.

Experimental results
Sample description
The schematic of the triple-layer capacitors studied in this work is shown in Figure 1. The high-k films were sandwiched between two HTO (High Thermal Oxide) layers deposited at 730°C, with a thickness of 4nm. Three different high-k materials, deposited by ALCVD, were studied: Hafnium Oxide (HfO2), Aluminum Oxide (Al2O3) and Hafnium Aluminate (HfAlO). In the HfAlO films, the Hf concentrations were controlled by the HfCl4:Al(CH3)3 deposition cycle ratio: 9:1 (94% Hf), 1:4 (31% Hf) and 1:9 (27% Hf). Different high-k physical thicknesses, ranging between 3nm and 9nm, were fabricated by controlling the number of ALD deposition cycles.

Intrinsic negative fixed charge
In this section we investigate the intrinsic fixed charge of the HfAlO layers. Figure 2 plots the capacitance-voltage characteristics of the studied OHO triple layers in the virgin state. We can observe that the flatband voltages of the OHO samples are shifted compared to the 10nm-thick HTO reference sample, due to the presence of intrinsic fixed charge in the high-k layers. The shift increases monotonically when the Al concentration of the HfAlO alloy increases. Table 1 summarizes the number of equivalent fixed charges localised at the bottom HTO/high-k interface, calculated from the V_FB shifts. The origin of the intrinsic negative fixed charge in HfAlO materials is still not clear today [START_REF] Lee | [END_REF]. In amorphous Al2O3 layers, it was suggested that the Al2O3 could be dissociated into (AlO_{4/2})⁻ and Al³⁺ [6] and that, at the SiO2/Al2O3 interface, the charge compensation does not take place.
The inset of Figure 2 shows that V_FB is a linear function of the HfAlO thickness, which suggests a surface rather than a volume distribution of the fixed charge according to Equation (1), in agreement with previous works reported in the literature [7][8][9]:

ΔV_fb = σ·t_T/ε_SiO2 + σ·t_H/ε_H + ρ·t_H·t_T/ε_SiO2 + ρ·t_H²/(2·ε_H)   (1)

with: σ: charge density at the high-k/bottom HTO interface; ρ: volume charge density in the high-k; t_H: thickness of the high-k layer; t_T: thickness of the top HTO layer; ε_H: high-k dielectric constant; ε_SiO2: SiO2 dielectric constant.

Trapping properties
In this section, we investigate in more detail the charge trapping phenomena of OHO samples during a gate stress. The gate current density as a function of the electric field is reported in Figure 3. On the same graph is also reported the time evolution of the gate current at constant gate bias (J_G-time) on virgin devices. The hysteretic behaviour and the continuous decrease of the leakage current over time demonstrate that trapping phenomena take place in the high-k materials. Note that charge trapping could be an issue for IPD applications, as it may degrade the reliability of the memory by generating threshold voltage instabilities. To evaluate more precisely the trapping capabilities of the interpoly stacks, we monitored the evolution of the flatband voltages as a function of time when the devices were submitted to different gate stresses (Figure 4). A continuous V_FB shift is observed, showing the progressive electron trapping in the stack as the stress time increases. It clearly appears that, for a given stress condition, the trapping capability increases with the Hf concentration. As already reported in the literature [START_REF] Molas | Proc. of ESSDERC[END_REF][10], this result can be correlated with the crystalline structure of the high-k materials: the larger the Hf concentration, the more crystalline the layer and, hence, the higher the trapping capability. Based on these measurements, we extracted the charges trapped in the gate stack after programming. We assume that the charges are localized at the interface between the bottom HTO and the HfAlO layer; indeed, the further the traps are from the cathode, the slower the charging kinetics. Under this assumption, it appears that the extracted trapped charge value does not depend on the HfAlO thickness (Figure 5). This confirms (see Equation (1)) that charges are mainly trapped at the high-k interface and that the bulk contribution is negligible to first order. The stressing conditions are performed at constant V_G/EOT.
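As an illustration of Equation (1) as reconstructed above, the Python sketch below (illustrative values only; not the paper's extraction procedure) shows that ΔV_fb grows linearly with t_H when the volume charge ρ is negligible:

```python
import numpy as np

# Sketch of Eq. (1): flat-band voltage shift from a sheet charge sigma at
# the bottom-HTO/high-k interface plus a volume charge rho in the high-k.
# All numerical values below are illustrative assumptions.

eps0 = 8.854e-12                     # vacuum permittivity [F/m]
eps_SiO2, eps_H = 3.9 * eps0, 17 * eps0
sigma = -1.602e-19 * 3e12 * 1e4      # -3e12 charges/cm^2 -> C/m^2
rho = 0.0                            # negligible volume charge (1st order)
t_T = 4e-9                           # top HTO thickness [m]

def dV_fb(t_H):
    return (sigma * t_T / eps_SiO2 + sigma * t_H / eps_H
            + rho * t_H * t_T / eps_SiO2 + rho * t_H**2 / (2 * eps_H))

for t_H in (3e-9, 6e-9, 9e-9):       # linear in t_H when rho ~ 0
    print(f"t_H = {t_H*1e9:.0f} nm -> dV_fb = {dV_fb(t_H):.2f} V")
```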
Nevertheless, the trend observed at room temperature is conserved, i.e., the 1:4 and 1:9 HfAlO layers exhibit the same charge decay, while for the 9:1 HfAlO sample only 50% of the trapped charge remains after 10^6 s. These observations are consistent with experiments performed on SONOS-like structures with high-k trapping layers [11].

Modelling
In this part, we introduce an analytical model to qualitatively explain the trapping characteristics of our IPD stacks. To this aim, we use the SRH model presented in [10], and we focus on the Hf-rich (9:1) OHO samples. The simulations are performed assuming that: (i) the trapped charge is localized at the bottom HTO/HfAlO interface, which is consistent with our experimental data (note that, in a more realistic approach, one should also take into account the charge trapped in the HfAlO bulk, characterised by slower trapping time constants); (ii) the thermalization of electrons in the SiO2 layer close to the cathode is neglected. In fact, in our range of programming voltages, the electron path in the conduction band of the HTO after tunneling is shorter than 1nm (Figure 7), which is indeed shorter than the mean free path of electrons in SiO2 [12]. In other words, we assume that the electron energy remains constant until trapping in the HTO/HfAlO interface states occurs. The gap (E_G = 5.65 eV) and the permittivity (εr = 17) of 9:1 HfAlO were extracted by ellipsometry and by C-V_G measurements, respectively. We also consider that the bottom and top HTO are 4.5nm and 5nm thick, respectively, in agreement with TEM observations. The conduction band offset between Si and HfAlO, ΔE_C, and the trapping cross section, σ, were fixed based on literature data: ΔE_C = 2 eV [13,14], σ = 10^-18 cm² [START_REF] Fernandes | Proc. of ESSDERC[END_REF][10]. N_st, the trap density, is adjusted to fit the experimental saturation level.

Based on these assumptions, we use the following equations to simulate the programming characteristics:

ΔV_th = (f_t × N_st) / C_t
df_t/dt = (1 − f_t)·(c_n + e_p) − f_t·(e_n + c_p)

where c_n/e_n and c_p/e_p are the electron and hole capture/emission rates which govern carrier exchanges between the traps and the substrate, f_t is the trap occupation probability according to the Shockley-Read-Hall statistics model [START_REF] Ielmini | [END_REF], and C_t is the trap-to-gate coupling capacitance.

Fig. 8: Modeling of the trapping characteristics of the OHO stack (HfAlO thickness 6nm, composition 9:1), based on the structure and parameters reported in Figure 7.

Figure 8 shows the experimental and the modelled trapping characteristics. We observe a very good correlation between the simulation and the experimental data for the three programming voltages, which validates our theoretical approach.
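A minimal numerical sketch of these two equations (rate and capacitance values are assumptions, not the paper's extracted parameters) integrates the trap occupation f_t and converts it into a threshold-voltage shift:

```python
# Minimal sketch of the SRH trapping model above: the trap occupation f_t
# obeys df_t/dt = (1 - f_t)(c_n + e_p) - f_t(e_n + c_p), and the threshold
# shift is dVth = f_t * N_st / C_t. All rate values are assumptions.

cn, ep = 50.0, 0.0        # electron capture / hole emission rates [1/s]
en, cp_ = 1.0, 0.0        # electron emission / hole capture rates [1/s]
Nst_over_Ct = 2.0         # saturation shift N_st / C_t [V] (assumed)

dt, T = 1e-4, 0.2         # time step and stress duration [s]
ft = 0.0
for _ in range(int(T / dt)):          # explicit Euler integration
    dft = (1.0 - ft) * (cn + ep) - ft * (en + cp_)
    ft += dt * dft

print(f"occupation f_t = {ft:.3f}, dVth = {ft * Nst_over_Ct:.3f} V")
```

At steady state, f_t tends to (c_n + e_p)/(c_n + e_p + e_n + c_p), which is why N_st is the parameter adjusted to the experimental saturation level.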
Conclusions
In this paper we investigated the intrinsic fixed charge and the trapping phenomena occurring under stress in HfAlO-based interpoly dielectric stacks. We demonstrated that the fixed charge content increases with the Al concentration of the HfAlO layer. We showed that the trapping capability under constant voltage stress increases as the Hf ratio of the compound increases. Based on programming measurements, we proved that in our devices the electron trapping mainly occurs at the first interface, between the bottom HTO and the HfAlO layer, rather than in the volume of the high-k dielectric. We also observed that the discharging rate of the previously trapped charges is higher for Hf-rich alloys. Finally, an analytical model based on an SRH approach allowed us to fit our experimental data and to extract the main trapping parameters of HfAlO high-k materials.

Fig. 1: Schematic of the capacitor device stack studied in this work. Various high-k layers were investigated: HfO2, Al2O3 and HfAlO with different Hf:Al ratios (9:1, 1:4 and 1:9).
Fig. 2: Capacitance-voltage characteristics of the studied OHO triple layers in the virgin state.
Fig. 3: Current density J_G versus equivalent electric field E_G of the HTO/HfAlO 9:1 9nm/HTO stack. J_G-time measurements on virgin devices are also represented for different electric fields. Inset: J_G-time measurements performed at different applied electric fields (1 MV/cm, 6 MV/cm and 10 MV/cm).
Fig. 4: Programming characteristics of OHO samples with various HfAlO compositions. The HfAlO thickness is 6nm.
Fig. 5: Extracted trapped charge for various HfAlO thicknesses. The stressing conditions are performed at constant V_G/EOT; charges are assumed localized at the bottom HTO/HfAlO interface.
Fig. 6: Room-temperature and 85°C retention characteristics of OHO samples with various HfAlO compositions. The HfAlO thickness is 6nm. The programming conditions are fixed to give an initial flatband voltage shift of 1.5V for each sample.
Fig. 7: Energy band diagram of the OHO stack at V_G = 10V simulated in this work, with fitting parameters indicated. The HfAlO thickness is 6nm, the composition 9:1. The charges trapped at the HTO/HfAlO interface are responsible for the different electric field values in the bottom and top oxide layers.

Table 1: Number of equivalent fixed charges (extracted from the characteristics reported in Figure 2) localised at the bottom HTO/high-k interface.
| HfAlO composition | Number of fixed charges (nb/cm²) |
| 9:1 | 3·10¹² |
| 1:4 | 3.5·10¹² |
| 1:9 | 4·10¹² |

Acknowledgments
Part of this work was supported by the MEDEA+ NEMESYS project.
12,256
[ "18361", "172242", "170596" ]
[ "40214", "48098", "40214", "40214", "40214", "40214", "40214", "40214", "48098", "48098", "40214", "40214" ]
01745625
en
[ "info" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01745625/file/Finite%20element%20method-based%20kinematics%20and%20closed-loop%20control%20of%20soft%2Ccontinuum%20manipulators.pdf
Frederick Largilliere Alexandre Kruszewski Zhongkai Zhang Rochdi Merzouki Christian Duriez email: christian.duriez@inria.fr Finite element method-based kinematics and closed-loop control of soft, continuum manipulators
Thor Morales Bieze, Frederick Largilliere, Alexandre Kruszewski, Zhongkai Zhang, Rochdi Merzouki and Christian Duriez

Abstract—This paper presents a modeling methodology and experimental validation for soft manipulators to obtain forward and inverse kinematic models under quasi-static conditions. It offers a way to obtain the kinematic characteristics of this type of soft robot that is suitable for offline path planning and position control. The modeling methodology relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on the Finite Element Method (FEM) with a numerical optimization based on Lagrange multipliers to obtain forward and inverse models. To reduce the dimension of the problem, at each step a projection of the model onto the constraint space (gathering actuators, sensors and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed on one of the robots between two other geometric approaches and the approach showcased in this paper. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.

Index Terms—Soft manipulators, Continuum robots, Soft robots, Finite Element Method and Robotic control

I. INTRODUCTION
For the past four decades, rigid-link manipulators have been successfully deployed in the industrial environment. Their rigid bodies and high-torque joints are perfectly fitted to perform tasks that involve accurate positioning while carrying considerable payloads. However, as the applications for these systems move away from this structured environment, traditional rigid manipulators have been less successful. Indeed, their rigid and bulky bodies are a problem for adaptation to dynamic environments. Roboticists, trying to cope with the new applications for manipulators, have turned their attention to nature, seeking inspiration to design new robot manipulators. Soft manipulators are robots often inspired by the morphology and functionality of biological agents like octopus tentacles and elephant trunks [START_REF] Neppalli | Octarm-a soft robotic manipulator[END_REF] [2] [3] [4] or tendrils [5] [6]. This type of manipulator deforms continuously to achieve a certain pose and, with a suitable design, can exhibit key advantages over its rigid counterpart: soft manipulators are lighter and therefore consume less energy, present a larger power-to-weight ratio, and offer a natural compliance because of their material properties. This compliance also gives the manipulators the ability to better adapt themselves to dynamic work environments and to work side by side with humans, without the concern of hazardous collisions.
Because of these characteristics, soft manipulators have found a niche of applications in the medical field [START_REF] Shiva | Tendon-based stiffening for a pneumatically actuated soft manipulator[END_REF] [8] [START_REF] Simaan | A dexterous system for laryngeal surgery[END_REF] [10] [START_REF] Haraguchi | A pneumatically driven surgical manipulator with a flexible distal joint capable of force sensing[END_REF]. The compliant nature of soft continuum manipulators comes with the issues of modeling and controlling their behavior, which is highly non-linear. A popular approach to model continuum robots has been the modification of methods already established for rigid manipulators. In [START_REF] Hannan | Analysis and initial experiments for a novel elephant's trunk robot[END_REF], Hannan and Walker presented the development of the kinematic model of a trunk robot. The model considers that each section bends with constant curvature. This approach has been used to express the kinematics of continuum trunks [START_REF] Jones | Kinematics for multisection continuum robots[END_REF] and tendril-like continuum robots [START_REF] Bardou | Control of a multiple sections flexible endoscopic system[END_REF]. Constant curvature models can be used even when the shape of the continuum robot does not conform to a circular arc. In [START_REF] Mahl | A variable curvature continuum kinematics for kinematic control of the bionic handling assistant[END_REF], the kinematics of the bionic handling assistant are obtained by modeling each section of the robot as a finite number of serially connected circular arcs, each with different parameters. The models mentioned, while producing closed-form analytic expressions, are based only on the geometry of the robot, without consideration for the mechanics of the structure, which is necessary to properly describe the deformation of this type of robot.

A. Model requirements
In contrast with rigid robots, soft manipulator kinematics depend not only on the geometry of the robot, but also on its mechanical properties, in particular the stiffness of the structure. While rigid manipulator kinematics can be used to solve positioning problems under the assumption of resistance/counter-actuation to gravity or load effects, soft manipulators easily comply with these forces and deform. To answer the same positioning problems, it is then necessary to take into account the current deformation (i.e. change of geometry) induced by these forces in order to obtain a kinematic relation between the position of the end-effector and the position of the actuators (Fig. 1). The degrees of freedom of a rigid manipulator are determined by its joints and are often all actuated. In soft manipulators, the degrees of freedom are generated by the deformation of the continuum and their number is infinite (it can be noted that this number is disconnected from the number of actuators). Usually this problem is addressed by a discretization of the degrees of freedom of the continuum, through methods provided by computational mechanics. Another difference of soft manipulators, compared to rigid ones, is that a load carried by a soft manipulator deforms the robot and thereby modifies its kinematics.

Fig. 1: In this example a tendon is pulled to create the motion of an elastic soft robot. Starting from the same geometry, the material stiffness has an influence on the kinematics (output vs input displacements).

B.
Related Work
This recapitulation of previous work is centered on the use of continuum mechanics in the modeling of soft manipulators. A discussion of mechanics-based methods to describe soft continuum robots can start by mentioning the work of Chirikjian [START_REF] Chirikjian | A modal approach to hyperredundant manipulator kinematics[END_REF] [START_REF]Kinematically optimal hyper-redundant manipulator configurations[END_REF], who used continuum mechanics to define a curve that describes the pose of hyper-redundant robots without considering actuation. This work laid the basis for subsequent work on continuum manipulator models. Given the rising trend in the design of manipulators composed of a single elongated backbone actuated by tendons, researchers have explored beam theory-based approaches to describe the pose of this particular type of robot. Rucker et al. present in [START_REF] Rucker | Statics and dynamics of continuum robots with general tendon routing and external loading[END_REF] the potential of this theory applied to the modeling of manipulators with tendon routing. In [START_REF] Smoljkic | Compliance computation for continuum types of robots[END_REF] the compliance model of continuum robots is obtained by considering the robot as a single section of Cosserat rod. In [START_REF] Jones | Three dimensional statics for continuum robotics[END_REF], Jones presents a static model in three-dimensional space and in [START_REF] Giorelli | A two dimensional inverse kinetics model of a cable driven manipulator inspired by the octopus arm[END_REF], this theory was implemented to compute the inverse kinematic model of a tendon-driven tentacle manipulator in two dimensions under the Euler-Bernoulli beam hypothesis. While providing models suitable for fast computation, beam theory is limited in its application by the shape of the backbone, i.e. when the backbone cannot be assimilated to a uniform beam, this modeling approach loses relevance. In particular, when the body of the robot is actuated intrinsically by pneumatic or hydraulic actuators, it tends to have a more complex shape, and its behavior cannot be described by beam theory methods. Finite Element Analysis is increasingly used in the field of soft robotics. In [START_REF] Connolly | Mechanical programming of soft actuators by varying fiber angle[END_REF], a direct finite element simulation is used to observe the behavior of soft pneumatic actuators. In this article, we develop a method to compute the kinematic model of soft manipulators that relies on the finite element model of the structure of the robot. By using different types of elements (tetrahedral, beam, or shell elements), the methodology can be used on robots of very complex shapes. Contrary to the majority of models currently available in the literature, this approach also models two types of actuators, which enables the technique to be used as part of the control of the robot as well as in off-line analysis. Moreover, gravity and a payload carried by the end-effector can be accounted for by this approach. This article presents the following contributions towards the kinematic modeling of soft manipulators:
• A FEM-based modeling approach that accounts for complex structural shapes and the mechanics of the employed material.
• The model of two actuation systems (i.e. pulling on cables and pneumatics) currently implemented in the majority of soft manipulator designs.
• The integration of sensors in the simulation, which provides an observer of the manipulator in the configuration space.
• The validation of the modeling approach on two completely different deformable manipulators.
• The experimental comparison of this approach to two other geometry-based models.
• An experimental study of the robustness of the model under external loading.
• A closed-loop controller based on a state estimator, and the robustness analysis of the closed-loop system.

In section II, the formulation of the static equilibrium and the constraints for the end-effector, actuators and sensors is explained. Section III shows the projection of the model in the constraint space and the convex optimization process used to solve the reduced model. Section IV presents the experimental validation of the forward and inverse kinematic models and finally, in Section V, a discussion of the results and limitations of the model, as well as the perspectives for future work, is presented.

II. METHODS

In this section, the development of the Finite Element Method (FEM) model of soft manipulators is presented. The method relies on the constitutive law of the material from which the robot is made. In the ideal case, this constitutive law can be measured directly by conducting stress/strain mechanical tests on a material sample. When such tests cannot be done, an approximation of the constitutive law can be obtained in simulation. The main idea is to tune the material parameters qualitatively until the deformation observed in simulation matches the deformation of the real robot under a known static load. A similar approach is presented in [START_REF] Coevoet | Registration by interactive inverse simulation: application for adaptive radiotherapy[END_REF] in the context of radiotherapy. After measuring the constitutive law, a volume-based approach is used with tetrahedral elements. Then, the mathematical formulation of the constraints is introduced using Lagrange multipliers. In this method, the end-effector, actuators, and sensors use constraint models.

A. FEM model of soft and continuum robots

Depending on the shape of the robot, one could use volume, surface or linear elements to compute the non-linear deformation of the structure. In this paper, volumetric elements are used (the method has also been tested with beam elements). A non-linear formulation accounts for the large displacements and rotations of the structure. In continuum mechanics, this is considered as the case of large strains but small stresses. More sophisticated FEM models can be proposed in the future, according to the constitutive law and the solicitation of the employed material (i.e., large stress). The computation in real-time with such models will be even more challenging, but the principles of the method described in this paper would still apply. The corotational implementation of volume FEM, presented in [START_REF] Felippa | A systematic approach to the element-independent corotational dynamics of finite elements[END_REF], is particularly suitable for linear elasticity under the hypothesis of large displacements. The shape of the robot is meshed using linear tetrahedral elements, but the same method could be used with other elements, shape functions and more advanced material laws. In the FEM, the nodes at the vertices of the elements represent the degrees of freedom of the manipulator.
Even for a considerable number of nodes, the approach is fast to compute and numerically stable, and a free C++ implementation is available in the open-source framework SOFA [START_REF] Faure | Sofa: A multi-model framework for interactive physical simulation[END_REF]. During each step i of the simulation, the following linearization of the internal forces is updated:

f(x_i) ≈ f(x_{i-1}) + K(x_{i-1}) dx    (1)

where f provides the volumetric internal stiffness forces at a given position x of the nodes, K(x) is the tangent stiffness matrix that depends on the actual positions of the nodes, and dx = x_i - x_{i-1} is the difference between positions. This linearization is valid as long as the displacement dx of the nodes is small. The lines and columns that correspond to fixed nodes are removed from the system so that the matrix K has full rank. In f and K, the rows (and columns for K) contain the components (x, y, z) of the internal forces for the nodes, in the order corresponding to their numbering in the mesh. In this paper, the study is limited on purpose to quasi-static behavior, since the simulation step required to capture high-frequency vibrations is not feasible. Thus, in a first approach, the assumption is that the control of the robot is performed at low velocities, so that the inertia effects in the motion of the robot can be neglected. One seeks to establish static equilibrium at each step from Newton's first law:

f_ext + f(x_i) + H^T λ = 0    (2)

where f_ext represents the external forces (e.g., gravity) and H^T λ gathers the contributions of the end-effector, the actuators and the contact forces as Lagrange multipliers (see the following sections). The way H is obtained is explained in sections II-B and II-C, but its computation is performed with the values obtained from the previous step. We then use the expression H(x_{i-1}) and, through the linearization explained in (1), we obtain the following formulation:

-K(x_{i-1}) dx = f_ext + f(x_{i-1}) + H(x_{i-1})^T λ    (3)

The variables dx and λ are both unknown and are found during the optimization process. It is noted that the matrix K is highly sparse. In the implementation, a conjugate gradient solver is used, preconditioned by a sparse LDL^T decomposition. For a mesh composed of about 1000 nodes and about 3000 tetrahedral elements, a refresh rate of 60 Hz is obtained with the implementation available in SOFA.

B. Constraint for the end-effector

To set the Lagrange multiplier on the end-effector, a point or a set of points of the robot needs to be considered as the end-effector. It could be any point(s) mapped on the finite element mesh. For each point, the constraint objective is to reduce the difference between the end-effector position and its desired position p_des. Thus, a function δ_e(x) : R^{3n} → R^3, with n the number of nodes, evaluates this difference along x, y and z. If the end-effector corresponds to a node i of the mesh, the function is δ_e(x) = x_i - p_des, where x_i is the position of node i. If the effector is set inside an element, we use:

δ_e(x) = \sum_{i=0}^{n} φ_i(p_eff) x_i    (4)

where p_eff is the position of the end-effector in the rest configuration of the FEM model and φ_i is the shape (interpolation) function associated to node i. If several points are used for the end-effector position, the vector δ_e(x) gathers the evaluation of the difference for all the points. The function is then R^{3n} → R^{3m}, where m is the number of end-effector points.
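To make the quasi-static solve of equations (1)-(3) concrete, the following is a minimal NumPy sketch of one forward step, assuming the actuation efforts λ are known (the inverse case is treated in section III). It is an illustrative stand-in and not the SOFA implementation: the function and variable names are ours, and in practice a sparse LDL^T factorization replaces the dense solver shown here.

```python
import numpy as np

def quasi_static_step(x_prev, f_int, K, f_ext, H, lam):
    """One quasi-static forward step, following equations (1)-(3).

    x_prev : (3n,) node positions at the previous step
    f_int  : (3n,) internal forces f(x_prev)
    K      : (3n, 3n) tangent stiffness at x_prev (fixed nodes removed,
             so K has full rank); highly sparse in practice
    f_ext  : (3n,) external forces, e.g. gravity
    H      : (c, 3n) constraint directions (end-effector, actuators)
    lam    : (c,) known constraint efforts
    """
    # -K(x_{i-1}) dx = f_ext + f(x_{i-1}) + H^T lam   (equation (3))
    rhs = f_ext + f_int + H.T @ lam
    dx = np.linalg.solve(-K, rhs)
    return x_prev + dx
```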
The matrix H used for the end-effectors corresponds to H_e(x) = ∂δ_e(x)/∂x. The matrix H_e is highly sparse: a row, which corresponds to one component of one point of the end-effector, contains non-null values in only a very small number of columns. As the point is mapped on a single tetrahedral element, there is a maximum of 4 non-null values per row. Of course, the columns must match the components of the nodes, the non-null values being gathered in 3x3 diagonal block matrices. Finally, an important point is the effort value put on the Lagrange multiplier corresponding to the end-effector. The value of λ_e depends on the load applied on the end-effector. Two cases can be considered: (I) if the points defined as end-effector move freely in space, there is no physical interaction, so the contribution of the constraint vanishes: λ_e = 0; (II) if one or several points of the end-effector carry an object l whose mass creates a load that can deform the structure, the corresponding load should be set as λ_e = m_l g, with m_l the mass of the object and g the gravity field.

C. Actuator constraint model

In this work, the actuator model takes into account its physical characteristics. Two types of actuators have been implemented in the framework: tendon (cable) and pneumatic actuators. The contributions of these actuator constraints are unknown before the optimization process. However, depending on the type of actuation, the constraint is not set the same way.

a) Cable actuator: In a first case (Fig. 2), a cable is used to actuate the structure. The cable can simply be attached at one point of the structure, but it can also go through several other points (frictionless guides are considered). In all cases, the unknown λ_a is the stretching force inside the cable. This force is unilateral (λ_a ≥ 0). Suppose now that the points are numbered starting from the extremity where the cable is being pulled. The matrix H is computed this way: at each point p, we take the direction of the cable before, d_b = (x_p - x_{p-1}) / ||x_p - x_{p-1}||, and after, d_a = (x_{p+1} - x_p) / ||x_{p+1} - x_p||. To obtain the constraint direction applied to the point, we use d_p = d_a - d_b. Note that at the final point the direction reduces to the direction "before", as d_a does not exist there. These constraint directions are mapped on the nodes using the interpolation:

\begin{bmatrix} \vdots \\ f_n \\ \vdots \end{bmatrix} = \begin{bmatrix} \vdots \\ φ_n(α, β, γ) d_p \\ \vdots \end{bmatrix} λ_a = H_a^T λ_a    (5)

A function δ_a(x) is defined to provide the length of the cable, given the positions of the constrained node(s). The actuator stroke can also be included by imposing δ_a(x) ∈ [δ_min, δ_max]. Through the use of this function, we get H_a = ∂δ_a(x)/∂x.
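The cable rule above translates directly into code. The sketch below is illustrative (the names are ours); at the attachment point, d_p = d_a - d_b degenerates to -d_b since d_a does not exist there.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def cable_directions(x):
    """Constraint directions d_p at the cable guide points (section II-C(a)).
    x : (p, 3) guide positions, numbered from the pulled extremity.
    The pulled extremity itself (index 0) is driven by the motor and is
    not part of the structure's constraint here."""
    p = x.shape[0]
    d = np.zeros_like(x, dtype=float)
    for i in range(1, p):
        d_b = unit(x[i] - x[i - 1])        # direction "before" point i
        if i < p - 1:
            d_a = unit(x[i + 1] - x[i])    # direction "after" point i
            d[i] = d_a - d_b
        else:
            d[i] = -d_b                    # final point: d_a does not exist
    return d

def cable_length(x):
    """delta_a(x): current length of the routed cable."""
    return float(sum(np.linalg.norm(x[i + 1] - x[i])
                     for i in range(x.shape[0] - 1)))
```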
b) Pneumatic actuator: The formulation is compatible with pressure-based actuation of cavities placed on the structure, as seen in Fig. 3. In that case, the effort λ_a is the pressure exerted on the wall of the cavity. As the pressure is uniform inside the cavity, a single constraint can be set for each pneumatic actuator. All triangles of the cavity wall are involved: for each triangle t, the area and the normal direction are computed. Multiplying this result by the pressure gives the force applied by the pneumatic actuator on the nodes of this triangle. Consequently, the contribution of each triangle is added to the corresponding column of H_a^T.

Fig. 3: Pressure actuation.

In the particular case of pneumatic actuation, λ_a provides the difference of pressure inside the cavity compared to the atmospheric pressure. Usually, pneumatic actuators only provide positive pressure, so λ_a ≥ 0. However, in some cases, it is also possible to create both negative and positive pressure using vacuum/pressure actuation. In that case, there is no particular constraint on the unknown value of λ_a, apart from possible limits (max/min) on the pressure that can be achieved by the actuator. This approach to the modeling of fluidic actuators can also cover hydraulic actuators, by accounting for the weight distribution of the liquid at any given configuration, as presented in [START_REF] Rodríguez | Real-time simulation of hydraulic components for interactive control of soft robots[END_REF].

D. Sensor constraint model

In order to relate the end-effector position and the geometry of the manipulator, one needs sensors that can measure the geometric state of the robot. In this study, the sensors must be able to measure the lengths of the sections that compose the manipulator while being easy to integrate in the design. String potentiometers offer a good solution to acquire information on the real geometric state of the robot. As in the case of the cable actuator, the string of the sensor is routed through several frictionless guides, at n points x_n. In the model, the length read by the sensor is

\sum_{i=1}^{n-1} ||x_{i+1} - x_i||    (6)

which evaluates the distance between each pair of consecutive sensor guides after the positions of the nodes have been updated. A function δ_s is defined to represent the difference between the current lengths of the sensors and the desired lengths. The matrix H_s that gathers the directions of the sensor constraints is obtained in the same way as for the cable actuator, shown in section II-C.
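As a small worked example of equation (6), the sensor reading and the corresponding shift δ_s can be evaluated as follows (an illustrative sketch; the names are ours):

```python
import numpy as np

def sensor_length(x):
    """Equation (6): length read by a string potentiometer routed through
    frictionless guides located at the rows of x, an (n, 3) array."""
    return float(np.sum(np.linalg.norm(np.diff(x, axis=0), axis=1)))

def delta_s(x, desired_length):
    """Shift between the current and the desired sensor length."""
    return sensor_length(x) - desired_length
```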
III. REDUCED MODEL IN THE CONSTRAINT SPACE

The classical resolution of a FEM problem (like solving the static equilibrium of the structure described by equation (3)) provides a forward model: it allows computing the displacements of the structure, given the values of the efforts λ_a put on the actuators. However, in the case of position control, the actuation λ_a is the unknown. For controlling the motion of the soft robot, an inverse model is needed, which is challenging to compute in real-time as the size of the system is in the range of several thousand degrees of freedom. In this work, another approach is used, based on a projection of the mechanics into the constraint space that drastically reduces the size of the optimization problem. This approach, initially developed in [START_REF] Duriez | Control of elastic soft robots based on real-time finite element method[END_REF], is generalized. A new formulation of the inverse problem in the form of a quadratic programming (QP) optimization (developed in [START_REF] Largilliere | Real-time control of soft-robots using asynchronous finite element modeling[END_REF]) is used.

A. Reduced compliance in constraint space

As stated above, the optimization process relies on a projection of the mechanics into the constraint space. Each constraint has a direction that is set by a line of the matrices H_e and H_a. These matrices are sparse, as the direction of each constraint is mapped on a few nodes of the FE mesh. The values of the efforts λ_a applied by the actuators are not known at the beginning of the optimization process, whereas the value of λ_e is supposed to be known.

The first step consists of obtaining a free configuration x_free of the robot, found by solving equation (3) while considering that no actuation is applied to the deformable structure. In practice, the known value of λ_e is used and λ_a = 0 is imposed. The linear equation (3) is solved using an LDL^T factorization of the matrix K. Given this new free position x_free for all the nodes of the mesh, one can evaluate δ_e^free = δ_e(x_free), the shift between the effector point(s) position and the desired position introduced in section II-B. One can also evaluate δ_a^free = δ_a(x_free), the position of the actuated points without actuation effort. From the FEM formulation of the problem, which uses a large matrix K, a formulation that accounts for the directions of the constraints placed on actuators and end-effectors is derived. Using the Schur complement of matrix K in the Lagrange multiplier-augmented system [29], a small formulation of δ_e is obtained. This variable expresses the difference between the desired position of the end-effector and its current position in terms of the actuator contributions λ_a:

δ_e = \underbrace{H_e K^{-1} H_a^T}_{W_{ea}} λ_a + δ_e^free    (7)

The Schur complement also provides similar formulations for the difference between a desired sensor or actuator position and its current position:

δ_a = \underbrace{H_a K^{-1} H_a^T}_{W_{aa}} λ_a + δ_a^free    (8)

δ_s = \underbrace{H_s K^{-1} H_a^T}_{W_{sa}} λ_a + δ_s^free    (9)

This step is central in the method. It consists in projecting the mechanics into the constraint space. As the constraints are the inputs (effector position shift and sensor length shift) and outputs (efforts to apply on the actuators) of the inverse problem, the smallest possible projection space for the inverse problem is obtained. The projection drastically reduces the size of the search space without loss of information. Indeed, section III-B shows how the matrices W_ea and W_aa provide the mechanical coupling equations between actuators and effector point(s). After this projection, the optimization is processed in the reduced constraint space to get the values of λ_a. This part is described in section III-C. The final configuration of the soft robot, at the end of the time step, is obtained as:

x_t = x_free + K^{-1} H_a^T λ_a    (10)

It should be emphasized that one of the main difficulties is to compute W_ea and W_aa in a fast manner. No pre-computation is possible as their values change at each iteration. However, this type of projection problem is frequent when solving friction contact on deformable objects. Thus, several strategies are already implemented in SOFA [START_REF] Faure | Sofa: A multi-model framework for interactive physical simulation[END_REF].
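Numerically, the projection amounts to a handful of solves against the factorized stiffness matrix. The following dense NumPy sketch is illustrative only (in SOFA this is done on the sparse factorization, with the strategies mentioned above; the names are ours):

```python
import numpy as np

def free_configuration_shift(K, f_ext, f_int, H_e, lam_e):
    """Displacement to the free configuration: equation (3) with
    lambda_a = 0 and the known end-effector load lambda_e."""
    return np.linalg.solve(-K, f_ext + f_int + H_e.T @ lam_e)

def reduced_compliances(K, H_e, H_a, H_s):
    """Projection into the constraint space (equations (7)-(9)):
    W_ea = H_e K^{-1} H_a^T, W_aa = H_a K^{-1} H_a^T, W_sa = H_s K^{-1} H_a^T.
    K is factorized once; each column of Y is the response of the
    structure to a unit effort on one actuator direction."""
    Y = np.linalg.solve(K, H_a.T)
    return H_e @ Y, H_a @ Y, H_s @ Y
```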
B. Coupled Kinematic Equations

Using the compliance operator W_ea, one can get a measure of the mechanical coupling between effector and actuators and, with W_aa, of the coupling between actuators. For instance, the displacement δ_e^i created on the end-effector (along a direction stored on line i of matrix H_e) by a unitary force λ_a^j applied by an actuator (stored at line j of matrix H_a) is directly obtained by Δδ_e^i = w_{ea}^{ij} λ_a^j + δ_e^{i,free}. As the motion is created by deformation, the motion of actuator j is influenced by actuator k. Through the same principle, actuator k also influences the displacement of the effector. To get a kinematic link between actuators and effector, the method needs to account for the mechanical coupling that can exist between actuators. This coupling is captured by W_aa, which can be inverted if the actuators are defined on independent degrees of freedom. Thus one can get a kinematic link by combining equations (7) and (8) as:

δ_e = W_{ea} W_{aa}^{-1} (δ_a - δ_a^free) + δ_e^free    (11)

Equation (11) is composed of a reduced number of linear equations that relate the displacements of the actuators to the displacement of the effector. Consequently, the matrix W_{ea} W_{aa}^{-1} is equivalent to the Jacobian matrix of a rigid manipulator. This matrix is a local linearization provided by the FEM model at a given position. It needs to be updated for deformations with large displacements.

C. Inverse model by convex optimization

The goal of the optimization is to find how to actuate the structure so that the end-effector of the robot reaches a desired position. This was initially proposed in [START_REF] Largilliere | Real-time control of soft-robots using asynchronous finite element modeling[END_REF]. It consists in reducing the norm of δ_e, which measures the shift between the end-effector and its desired position. Thus, computing min(½ δ_e^T δ_e) leads to a Quadratic Programming (QP) problem:

min ½ λ_a^T W_{ea}^T W_{ea} λ_a + λ_a^T W_{ea}^T δ_e^free    (12)

subject to (stroke of the actuators):

δ_min ≤ δ_a = W_{aa} λ_a + δ_a^free ≤ δ_max

and (case of unilateral effort actuation):

λ_a ≥ 0    (13)

The use of a minimization allows finding a solution even when the desired position is out of the workspace of the robot. In such a case, the algorithm finds the point that minimizes the distance to the desired position while staying within the limits introduced by the stroke of the actuators. In practice, the QP solver available in the Computational Geometry Algorithms Library (CGAL) [START_REF] Fabri | Cgal: The computational geometry algorithms library[END_REF] is used. The matrix W_{ea}^T W_{ea} of the QP is symmetric. If the number of actuators is less than or equal to the dimension of the end-effector space, the matrix is also positive-definite; in such a case, the solution of the minimization is unique. When the number of actuators is greater than the number of degrees of freedom of the effector points, the matrix of the QP is only semi-definite, and consequently the solution could be non-unique. A new criterion for the minimization can then be introduced, based on the deformation energy. Indeed, this energy E_def is linked to the mechanical work of the forces exerted by the actuators. E_def can be computed by evaluating the dot product between λ_a and the displacements of the actuators Δδ_a = δ_a - δ_a^free due to the actuator forces: E_def = λ_a^T Δδ_a = λ_a^T W_{aa} λ_a. The matrix W_{aa} is positive-definite if the actuators are placed on different nodes of the FEM or along different directions (i.e., if there are no linear dependencies between the lines of H_a). Thus, one can add this energy in the minimization process by replacing (12) with:

min ½ λ_a^T (W_{ea}^T W_{ea} + ε W_{aa}) λ_a + λ_a^T W_{ea}^T δ_e^free    (14)

with ε chosen sufficiently small so that the deformation energy does not disrupt the quality of the effector positioning. In practice, ε = (tr(W_{ea}^T W_{ea}) / tr(W_{aa})) × 10^{-3}, so that the term ε W_{aa} does not alter the value of the QP matrix. Thanks to this modification, the QP matrix becomes positive-definite and a unique solution of the problem can be found.
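For readers who want to experiment, the QP of equations (12)-(14) can be reproduced with an off-the-shelf solver. The sketch below uses cvxpy as a stand-in for the CGAL solver used in the paper; the names and the solver choice are ours.

```python
import numpy as np
import cvxpy as cp

def solve_inverse_qp(W_ea, W_aa, d_e_free, d_a_free,
                     d_min, d_max, unilateral=True):
    """Inverse model as the QP of equations (12)-(14)."""
    lam = cp.Variable(W_aa.shape[0])
    eps = 1e-3 * np.trace(W_ea.T @ W_ea) / np.trace(W_aa)
    Q = W_ea.T @ W_ea + eps * W_aa          # regularized QP matrix (14)
    Q = 0.5 * (Q + Q.T)                     # enforce exact symmetry
    cost = 0.5 * cp.quad_form(lam, Q) + (W_ea.T @ d_e_free) @ lam
    delta_a = W_aa @ lam + d_a_free
    cons = [delta_a >= d_min, delta_a <= d_max]   # actuator stroke (13)
    if unilateral:
        cons.append(lam >= 0)                     # e.g. cables only pull
    cp.Problem(cp.Minimize(cost), cons).solve()
    return lam.value
```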
IV. KINEMATIC MODELS OF THE CBHA

A. The Compact Bionic Handling Assistant

The CBHA is the bionic continuum manipulator component of the RobotinoXT, a didactic mobile platform designed by Festo Robotics. The system is shown in Fig. 4 (a). The bionic continuum manipulator is formed by 2 serially connected sections of pneumatic actuators, an axially rotating wrist and a compliant gripper. Without actuation, the manipulator has a length of 206 mm, each section being 103 mm long. The manipulator is 100 mm wide at the base and 80 mm wide at the top. In our study, the end of the second section is considered as the end-effector. Each section of the manipulator is composed of a parallel array of pneumatic actuators, as shown in Fig. 4 (b). By applying different pressures to the bellows, each section can bend or extend independently. The pose of the manipulator is obtained as the contribution of the poses of the 2 sections. In order to sense the state of the robot, string potentiometers measure the lengths of the actuators.

B. Forward kinematic models

The forward kinematic model of a soft manipulator deals with the problem of finding the end-effector position, given a defined configuration of the manipulator. For a rigid manipulator, this configuration is simply the set of variables associated with the joints of the robot. In contrast with rigid robots, the variables that express the configuration of a soft manipulator depend on the structure of the robot and its type of actuation, and therefore cannot be obtained in a straightforward manner. The FEM-based methods explained before provide an easy way to obtain the kinematic relation between the end-effector and the configuration of the manipulator.

• FEM-based model

Given the intrinsic nature of the CBHA, the configuration of the robot is represented by the lengths of the pneumatic actuators that correspond to an end-effector position. Of course, the description of the robot could be given directly in the actuator space, using in this case equation (7) to attain a pressure-to-position model, but that requires a precise control over the actuation (in this case the pressure inside the cavities) in order to obtain a good estimation of the position of the end-effector. Instead, equation (9), reproduced here for clarity, is used to relate the end-effector position to the configuration of the manipulator represented by the lengths of each pneumatic actuator, given by the sensors. This representation is clearer in the context of kinematic modeling, and allows for a position-to-position model that is less sensitive to unknown hardware parameters.

δ_s = \underbrace{H_s K^{-1} H_a^T}_{W_{sa}} λ_a + δ_s^free    (15)

In this approach, no geometrical assumptions are needed. Each part of the robot is modeled in detail using shell and tetrahedral elements, as presented in Fig. 5. The mesh used in the model of the pneumatic cavities is composed of 3528 elements. Once the constraints have been incorporated in the model, the convex optimization finds the actuator contributions required to obtain the desired sensor lengths. The final position of the end-effector is obtained after the positions of the nodes of the mesh are updated.
• Constant curvature model

This model of the CBHA, which was developed in [START_REF] Escande | Modelling of multisection bionic manipulator: Application to robotinoxt[END_REF] and [START_REF] Escande | Geometric modelling of multisection bionic manipulator: Experimental validation on robotinoxt[END_REF] and validated in [START_REF] Escande | Kinematic calibration of a multisection bionic manipulator[END_REF], works under the assumption that, after actuation, the resulting pose of each section of the robot can be represented by an arc of constant curvature (Fig. 6). The end-to-end evolution of a section i is described, in terms of backbone parameters, by 2 coupled rotations and one translation in the homogeneous transformation:

^i_j T = \begin{bmatrix} c^2φ_i cθ_i + s^2φ_i & cφ_i sφ_i (cθ_i - 1) & cφ_i sθ_i & x_i \\ cφ_i sφ_i (cθ_i - 1) & s^2φ_i cθ_i + c^2φ_i & sφ_i sθ_i & y_i \\ -cφ_i sθ_i & -sφ_i sθ_i & cθ_i & z_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (16)

where s and c denote sine and cosine, respectively. The cartesian coordinates of the end of the bending section i are given by (x_i, y_i, z_i), where x_i = r_i cφ_i (1 - cθ_i), y_i = r_i sφ_i (1 - cθ_i) and z_i = r_i sθ_i. The backbone variables φ_i, θ_i and r_i can be expressed in terms of the actuator lengths in order to obtain the kinematic relations:

φ_i = tan^{-1}( √3 (l_3 - l_1) / (2l_1 - l_2 - l_3) ),   θ_i = D_i / (3 d_i),   r_i = (l_1 + l_2 + l_3) d_i / D_i    (17)

with

D_i = 2 √( l_1^2 + l_2^2 + l_3^2 - l_1 l_2 - l_1 l_3 - l_2 l_3 )    (18)

The parameter d_i represents the diameter of section i. In this model, each section is considered to be a cylinder with constant radius. The lengths of the actuators in section i are represented by l_1, l_2 and l_3.

• Hybrid model

In this approach, developed in detail in [START_REF] Lakhal | Hybrid approach for modeling and solving of kinematics of compact bionic handling assistant manipulator[END_REF], the CBHA is considered as 17 serially connected vertebrae. Between each pair of vertebrae, an inter-vertebra section is modeled as a 3UPS-1UP joint (3 universal-prismatic-spherical joints and one universal-prismatic joint). The behavior of a sub-structure composed of 2 vertebrae and an inter-vertebra is represented by a parallel robot with 3 DoF, as shown in Fig. 7.

Fig. 7: Sub-structure of the CBHA modeled as a parallel robot.

The parallel robot consists of an upper and a lower platform connected by 3 limbs and a central leg. The limbs are modeled by a UPS joint in which only the prismatic part is active, allowing the control of the position and orientation of the upper vertebra with respect to the lower vertebra. The central leg is modeled as a passive UP joint and is used to constrain the rotation about the longitudinal axis of the parallel robot, as well as any shearing motion between the vertebrae. The position and orientation of the upper vertebra k+1 with respect to the lower vertebra k is given by the transformation matrix

^k_{k+1} T = \begin{bmatrix} cθ_k & sθ_k sψ_k & sθ_k cψ_k & 0 \\ 0 & cψ_k & -sψ_k & 0 \\ -sθ_k & sψ_k cθ_k & cθ_k cψ_k & z_k \\ 0 & 0 & 0 & 1 \end{bmatrix}    (19)

where the angles θ_k and ψ_k represent the pitch and roll angles, respectively, and s and c again denote sine and cosine. In this model, the prismatic variable q_{n,k} shown in Fig. 7 represents the length of each inter-vertebra, which is a percentage of the total length of the actuator. This percentage can be obtained by considering the minimum and maximum elongation of each inter-vertebra. This development is presented in detail in [START_REF] Lakhal | Hybrid approach for modeling and solving of kinematics of compact bionic handling assistant manipulator[END_REF].
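As a worked example of the constant curvature mapping in equations (16)-(18), the following sketch maps the three actuator lengths of one section to its homogeneous transform (illustrative code, names ours; note that D_i vanishes for a straight section with l_1 = l_2 = l_3, which would need to be handled as a special case):

```python
import numpy as np

def cc_section_transform(l1, l2, l3, d):
    """Constant curvature transform of one section, equations (16)-(18).
    l1, l2, l3: actuator lengths; d: section diameter."""
    D = 2.0 * np.sqrt(l1**2 + l2**2 + l3**2 - l1*l2 - l1*l3 - l2*l3)
    phi = np.arctan2(np.sqrt(3.0) * (l3 - l1), 2.0*l1 - l2 - l3)
    theta = D / (3.0 * d)
    r = (l1 + l2 + l3) * d / D
    c, s = np.cos, np.sin
    x = r * c(phi) * (1.0 - c(theta))
    y = r * s(phi) * (1.0 - c(theta))
    z = r * s(theta)
    return np.array([
        [c(phi)**2 * c(theta) + s(phi)**2, c(phi)*s(phi)*(c(theta) - 1.0), c(phi)*s(theta), x],
        [c(phi)*s(phi)*(c(theta) - 1.0), s(phi)**2 * c(theta) + c(phi)**2, s(phi)*s(theta), y],
        [-c(phi)*s(theta), -s(phi)*s(theta), c(theta), z],
        [0.0, 0.0, 0.0, 1.0]])

# Pose of the two-section CBHA: chain the section transforms, e.g.
# T = cc_section_transform(l1, l2, l3, d1) @ cc_section_transform(l4, l5, l6, d2)
```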
C. Experimental validation and model comparison

In order to validate the model, a set of 50 end-effector positions distributed inside the task space of the manipulator was selected. For each position, the corresponding configuration of the robot was recorded using the string potentiometers placed along the structure of the robot. The sets of recorded lengths were used as inputs for the forward kinematic model. The experiment assumes zero end-effector payload. The results are compared to those of the Constant Curvature and Hybrid approaches. This comparison is presented in Figs. 8 and 9. The results show that the constraint-based approach is more accurate in estimating the position of the end-effector, with a Root-Mean-Square (RMS) error of 4.66 mm, compared to 12.87 mm and 17.09 mm of error for the Constant Curvature and Hybrid approaches, respectively. We hypothesize that imprecise measurement of the displacement of each vertebra may be the cause of the Hybrid approach being less precise than ours, as only 6 string potentiometers were available to chart the displacements. Moreover, this model was initially developed to be invertible, rather than for the pure precision of the forward kinematic model. Nevertheless, the FEM model still has a few limitations in its development, which represent the main source of error in the results: for the moment, the constitutive law used to model the material of the trunk is only an approximation. Another source of error comes from the geometry of the trunk itself. When the trunk is bent at a maximum angle, the outer walls of the pneumatic cavities collide with each other, as shown in Fig. 10. The consideration of these collisions is not yet implemented in the simulation.

D. Inverse kinematic model

The generic nature of the approach showcased in this article is illustrated by obtaining the inverse kinematics of two different soft manipulators. The simulation of the inverse model provides open-loop position control that can be used to pilot the robot directly, as in the case of the parallel soft robot. In this section, an experimental validation of the modeling methodology is conducted on two different soft robots:
• a parallel soft robot made of silicone, actuated with tendons (cables) controlled in position,
• the Compact Bionic Handling Assistant (CBHA).

The inverse model provided by the convex optimization in real-time allows teleoperating the robots in open loop: given a desired input position of the effector, the desired output for the actuators is computed. For the soft parallel robot, a desired position of the tendons is provided.

1) Modeling and feed-forward control of a parallel soft robot: This experiment is based on a 3D soft robot, made of silicone, whose design is inspired by parallel robots with closed kinematic chains (Fig. 11). In its rest position, the dimensions of the robot are 180 × 180 × 130 mm. The robot naturally deforms and sinks under the action of gravity, but 4 unilateral actuators (servo-motors connected to the structure of the robot with cables) are placed, one on each leg, to prevent and pilot the deformation. The effector point is placed on the upper part of the robot.
Its trajectory is defined in 3D (and can be changed interactively by a user) and the algorithm provides the positions to apply to the servomotors. The Young's modulus of the silicone, measured experimentally, is used to parametrize the robot. The FEM model of the robot is composed of 4147 tetrahedra and 1628 nodes. When projected in the constraint space, the size of the system is highly reduced: 3 equations for the effector and 4 equations for the actuators. The convex optimization that leads to the inverse model can be performed in real-time. The most computationally expensive part is the projection expressed in equations (7) to (8) (50 ms on a Core i7, 2.8 GHz), but the Graphics Processing Unit (GPU) method described in [START_REF] Courtecuisse | Preconditionerbased contact response and application to cataract surgery[END_REF] significantly reduces the computation time of the projection (15 ms in this case). To validate the method, a study of the discrepancy between the desired and obtained positions is conducted on static positions distributed across a workspace of 25 mm × 25 mm × 50 mm around the rest position of the robot (see Fig. 12). The measurements are performed using a motion capture system based on infrared cameras (the positioning precision provided by the motion capture system is less than 0.1 mm). On a sample of 28 positions, a mean error of 1.4 mm is obtained, with a standard deviation of 0.6 mm and a maximum error of 2.9 mm. This illustrates the precision that can be achieved using such FEM approaches. It can be noted that these results are obtained in open loop and with a position-to-position control: for a given position of the effector, the algorithm finds a position for the actuator cables. This is a favorable case for FEM precision because the partial differential equations are enforced with Dirichlet boundary conditions.

2) Inverse kinematics of the CBHA: Considering the kinematic relationship for the CBHA given in section IV-B, that is, the link between the actuator lengths and the position of the end-effector, the inverse kinematic model, solved by the convex optimization, gives the actuator lengths that result from a predefined end-effector position. The FEM analysis applied to model this soft robot is detailed in section IV-B. A domain decomposition strategy is applied in order to perform the computation of the model (equation (3)) and the projection in the constraint space (equation (7)). After the actuator contributions required to achieve the desired position of the end-effector are applied to the model, and the positions of the nodes are updated, the readings from the sensors in the simulation, given by equation (6), provide the lengths of the pneumatic actuators that represent the output of the inverse kinematic model. To validate the method, a set of 50 end-effector positions is selected inside the task space of the robot and the corresponding set of lengths for each position is recorded by the sensors of the robot. The same set of positions is used as input for the inverse model, and the resulting length of each actuator is estimated. This study is summarized in Table 1, where l_1, ..., l_6 represent the lengths of the actuators in mm, µ represents the mean error and σ the standard deviation. The results are presented in Fig. 13. They show a mean error between 2.43 mm and 4.08 mm across all lengths, which represents between 1.21% and 2.04% of the total length of the manipulator.
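The per-actuator statistics of Table 1 can be reproduced from the raw recordings with a few lines (an illustrative sketch; the array names are ours, not from the paper's code):

```python
import numpy as np

def length_error_stats(measured, estimated):
    """Per-actuator mean and standard deviation of the absolute error
    between measured and estimated lengths, as reported in Table 1.
    Both inputs are (n_positions, n_actuators) arrays in mm."""
    err = np.abs(np.asarray(measured) - np.asarray(estimated))
    return err.mean(axis=0), err.std(axis=0)
```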
As in the case of the parallel robot, the set of actuator contributions (in this case the pressures applied to the cavities) obtained from the optimization process can be used as input for the real robot to obtain a feed-forward control. However, as explained before, there are considerable discrepancies between the pressure calculated by the simulation and the pressure applied to the cavities, mainly caused by the way the pressure is regulated in the robot. This leads to less accurate results. In order to improve the results of the inverse model in terms of efforts (pressure, force), one can use more advanced constitutive models for the materials. One of these models is the St Venant-Kirchhoff hyper-elastic model, in which the stress/strain relationship is represented by the second Piola-Kirchhoff stress tensor S:

S = λ tr(E) I + 2μ E    (20)

where E is the Lagrange-Green strain tensor and λ and μ are the Lamé constants, which can be approximated from the Young's modulus and Poisson's ratio of the material in question. We have conducted tests on the parallel soft robot using the St Venant-Kirchhoff model to compare the results to those obtained using the corotational formulation. In the tests we observed very small errors in the displacement output (3.24% of a total cable stroke of 50 mm). In the case of the force output we observed bigger errors (16.02% of the tension in the cable), related to the errors made by the corotational formulation in the stress computation.
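For reference, equation (20) can be evaluated from a deformation gradient as follows (a minimal sketch, assuming the standard definitions of the Lamé constants and of the Lagrange-Green strain; the function name is ours):

```python
import numpy as np

def st_venant_kirchhoff_stress(F, young, nu):
    """Second Piola-Kirchhoff stress tensor of equation (20).
    F: 3x3 deformation gradient; young, nu: Young's modulus and
    Poisson's ratio, from which the Lame constants are derived."""
    lam = young * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = young / (2.0 * (1.0 + nu))
    E = 0.5 * (F.T @ F - np.eye(3))     # Lagrange-Green strain tensor
    return lam * np.trace(E) * np.eye(3) + 2.0 * mu * E
```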
E. Deflection of end-effector under external loading

As explained in section II, one of the advantages of this modeling approach is its ability to predict the deflection of the robot under external loading, given a good representation of the material mechanics. If the load is known a priori, the value of the force acting on the end-effector, λ_e, can be used in equation (3) to compute the position that accounts for said force. In order to validate this modeling feature, a set of experiments was conducted on both manipulators using known loads. First, an initial configuration of the manipulator without loading is selected and the position of the end-effector is measured; then the load is applied and the new position of the end-effector is recovered after the robot reaches static equilibrium. The same load is then applied to the model of the manipulator using the same initial pose, and the resulting end-effector position is also recovered. In the case of the CBHA, the model of the sensors presented in section II-D is used to apply the configuration of the real robot, measured by the string potentiometers, to the simulation model. A vector connecting the initial and final end-effector positions represents the deflection caused by the loading. In order to assess the repeatability of the measurements, the loading sequence described before is performed 40 times for each loading value and the average value is then used for the model validation. A standard deviation of 0.4838 mm is obtained across all the measurements. The comparison between measured and modeled deflections for both manipulators is presented in Figs. 14 and 15. In the figures, the blue line represents the compliance to loading of the manipulators and the red line is the prediction of the model. In the case of the CBHA, the maximum error is 4.107 mm with an average error of 2.1047 mm. Nevertheless, Fig. 15 shows that the CBHA presents strain hardening/necking stages of plastic behavior at the beginning of the loading profile, corresponding to the compliance of the plastic material from which the manipulator is made (polyamide nylon); therefore the model prediction is accurate only in a small region of the profile. In order to improve the predictive capabilities of the model for the CBHA in particular, two constitutive laws could be implemented to account for the different behaviors, but this would modify the way the inverse FEM is formulated. In contrast, the maximum error in the case of the parallel manipulator is 2.06 mm with an average error of 2.01 mm. The reason we obtain better results is that the material of the parallel robot conforms better to the assumption of high deformation and low stress, while also being an elastic material with no plastic behavior.

V. FEM-BASED CLOSED-LOOP CONTROL OF CONTINUUM ROBOTS

In section IV-D2, the relationship between the sensor lengths and the end-effector position of the CBHA was obtained based on the FEM simulation of the robot; however, in order to control the motion of the robot, the set of pressures applied to the actuators has to be computed. Indeed, the relationship given by equation (7) can be used to control the robot directly in open loop, but as explained in section IV-B this requires an accurate control over the pressures applied to the robot. Moreover, non-linear behaviors like hysteresis and the strain-rate dependency of the material (which are not considered in the model) render the feed-forward control of the manipulators unusable in real applications. Controllers for soft manipulators have been investigated in the past with the intention of rejecting non-linear behaviors and model uncertainties that result from the complex dynamics of the manipulators. Control based on energy formulations [START_REF] Ivanescu | Dynamic control for a tentacle manipulator with sma actuators[END_REF], model-less approaches [START_REF] Yip | Model-less feedback control of continuum manipulators in constrained environments[END_REF] and feedback controllers [START_REF] Penning | Towards closed loop control of a continuum robotic manipulator for medical applications[END_REF] [40] have been proposed before with the intention of achieving accurate positioning of the manipulators in the presence of non-modeled dynamics. In this section, a closed-loop control strategy based on a state estimator is proposed.

A. Closed-loop control design

The closed-loop controller is designed to ensure the correct configuration of the robot, given a desired end-effector position. A reference computation is performed to transform the desired position into the corresponding configuration. Assuming that the external forces are constant, the discrete model of the system, derived from equation (9), takes the form:

δ_{s,k+1} = δ_{s,k} + J_{sa}(x_k) Δλ_{a,k+1}    (21)

where J_{sa} = W_{sa} is the Jacobian matrix between sensors and actuators. When the desired sensor lengths are provided by the reference computation, we can propose the closed-loop control approach shown in Fig. 16, where the blue blocks represent the computations performed by simulation. Two simulations executing simultaneously are implemented in the closed-loop system: one main simulation that computes the inverse kinematic model, and a second simulation that acts as a state estimator for the system.
The state estimator is the forward kinematic model simulation of the robot, which computes an estimated configuration of the robot based on the lengths of the sensors. This configuration is used to update the state of the inverse kinematic model at each simulation step. In this way, we make sure that the configurations of the simulation model and of the manipulator are similar before the estimation of the Jacobian is computed. The tracking error e_k in the closed-loop system is computed as:

e_k = δ_{s,k} - δ_{s,k}^d    (22)

where δ_{s,k}^d represents the desired lengths of the sensors and δ_{s,k} the current lengths in the robot. We define the control vector v_k as:

v_k = Ĵ_{sa}(x_k) r_k    (23)

where Ĵ_{sa}(x_k) is the estimated Jacobian matrix between the sensors and actuators and r_k = Δλ_{a,k+1}. Using equation (23), the kinematic model can be rewritten as:

δ_{s,k+1} = δ_{s,k} + v_k    (24)

The control law is based on a proportional-integral strategy; therefore, the control vector v_k is designed in the sensor space as:

v_k = -k_p e_k - k_i h_k    (25)

with k_p and k_i the proportional and integral gains of the controller, respectively. The integral term h at time k+1 is computed as:

h_{k+1} = h_k + e_k    (26)

Then, a control allocation based on a Quadratic Programming (QP) formulation [START_REF] Johansen | Control allocation-a survey[END_REF] is employed to find a unique solution to:

r_k = Ĵ_{sa}^+(x_k) v_k    (27)

where Ĵ_{sa}^+ is the pseudo-inverse of the estimated Jacobian. In practice, as Ĵ_{sa}(x_k) may not be fully invertible, we introduce a variable O defined as

O = Ĵ_{sa}(x_k) r_k - v_k    (28)

Using O, the QP problem formulation (III-C) becomes:

min_{r_k} (O^T O)    (29)

and the resulting r_k is the best possible inversion of equation (23) in the least-squares sense. In addition, the QP formulation allows defining constraints like actuator saturation or a positive direction of actuation. Using equation (25) in equation (27), r_k is rewritten as

r_k = -Ĵ_{sa}^+(x_k)(k_p e_k + k_i h_k)    (30)

Using equation (30) in equation (21), the closed-loop system is defined as:

e_{k+1} = e_k + J_{sa}(x_k) r_k    (31)

which, in the ideal case in which Ĵ_{sa} is invertible, can be written as:

e_{k+1} = e_k + v_k    (32)

The system of equation (32) is a simple first-order discrete model that can be controlled with any standard controller. We choose a control strategy based on a proportional-integral law because we want to improve the convergence rate and remove any steady-state error (in the sensor space at least). After testing, the selected gain values are k_p = 0.14 and k_i = 0.0003, as a compromise between the rise time of the signal and its overshoot. Fig. 17 shows the response of one actuator length of the simulated robot and of the real robot, given a pre-computed set of set points corresponding to an end-effector position inside the task space of the robot. The position is chosen so that the actuators are far from their saturation points. The model simulation and the real robot have different initial conditions. The experiment was performed for 2500 simulation steps with a simulation step of 0.1 seconds. After 1000 simulation steps, the set points are changed in both the simulation and the real robot. The results show that both the simulation of the robot and the robot itself have the same settling time, t_s ≈ 400 simulation steps. We can also see that the curve representing the measured value of the lengths in the robot jumps between two values. This behavior is a consequence of the poor resolution of the string potentiometers.
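To make the control law concrete, here is a minimal sketch of one controller iteration, corresponding to equations (25)-(30); for readability it uses a plain least-squares pseudo-inverse in place of the constrained QP allocation, and the names are ours.

```python
import numpy as np

def pi_allocation_step(e_k, h_k, J_hat, kp=0.14, ki=0.0003):
    """One iteration of the sensor-space PI law with allocation.
    e_k: tracking error (22); h_k: integral term; J_hat: estimated
    Jacobian between sensors and actuators."""
    v_k = -kp * e_k - ki * h_k            # PI control vector (25)
    r_k = np.linalg.pinv(J_hat) @ v_k     # allocation, cf. (27)-(30)
    h_next = h_k + e_k                    # integral update (26)
    return r_k, h_next
```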
Fig. 17 also shows a different behavior in the transitory stage of the curve of the measured lengths. This behavior can be attributed to different factors: first, there is the time required to compute the configuration of the manipulator from the measured sensor lengths; second, there is a time delay before the desired pressure is applied to the robot; and finally, there is the plasticity of the material from which the manipulator is built (polyamide nylon), which is not accounted for in our FEM model. On the other hand, the pneumatic valves that control the pressure inside the actuators have a small dead zone, so, when the manipulator starts its motion from a zero-pressure condition, very small increments in the pressure do not produce any motion until this dead zone is surpassed, which is not considered in the FEM.

Fig. 17: Comparison of real and estimated actuator length of the CBHA. A second set point is applied to the system after 1000 simulation steps in order to observe the performance of the controller. The step is 0.1 s for the experiment.

A second experiment was performed using the real robot in the loop. In this experiment an external unknown force was applied to the manipulator in order to assess the uncertainty rejection capabilities of the controller. Fig. 18 shows the results of this experiment.

B. Robustness analysis

Because of modeling uncertainties, the estimated Jacobian matrix Ĵ_{sa}(x_k) is, in general, different from the Jacobian of the robot, J_{sa}(x_k). We introduce the vector ω_k, which represents the disparities between the real Jacobian and the estimated Jacobian. We call this vector the inversion error; it is defined as:

ω_k = [I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)] v_k    (33)

Then, the closed-loop system is rewritten as:

e_{k+1} = e_k + v_k + ω_k = e_k - k_p e_k - k_i h_k + ω_k    (34)

The disturbed closed-loop system is:

\begin{bmatrix} e_{k+1} \\ h_{k+1} \end{bmatrix} = \begin{bmatrix} I - k_p I & -k_i I \\ I & I \end{bmatrix} \begin{bmatrix} e_k \\ h_k \end{bmatrix} + \begin{bmatrix} I \\ 0 \end{bmatrix} ω_k    (35)

It may be surprising that we end up with such a simple linear system. We emphasize that the non-linearities are taken into account by the two simulation blocks (FKM and IKM in Fig. 16) in the closed-loop control. In equation (35), we are writing the system in terms of e_k and h_k and, if the model were perfect, the system would be trivial. However, modeling errors can occur; that is why, in the following, we prove that the control is robust to these modeling uncertainties ω_k. To simplify the notation of the problem, we define the following vectors:

X_{k+1} = \begin{bmatrix} e_{k+1} \\ h_{k+1} \end{bmatrix},  X_k = \begin{bmatrix} e_k \\ h_k \end{bmatrix},  D = \begin{bmatrix} I \\ 0 \end{bmatrix},  F = \begin{bmatrix} k_p I & k_i I \end{bmatrix}    (36)

Also

\begin{bmatrix} I - k_p I & -k_i I \\ I & I \end{bmatrix} = A - BF    (37)

where

A = \begin{bmatrix} I & 0 \\ I & I \end{bmatrix},  B = \begin{bmatrix} I \\ 0 \end{bmatrix}    (38)

Using this notation, ω_k is written as:

ω_k = [I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)] F X_k    (39)

We assume that the error in the Jacobian estimation is bounded by a parameter γ such that:

ω_k^T ω_k = X_k^T F^T [I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)]^T [I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)] F X_k ≤ γ^2 X_k^T F^T F X_k    (40)

with

[I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)]^T [I - J_{sa}(x_k) Ĵ_{sa}^+(x_k)] ≤ γ^2 I    (41)

For the proof of stability, we use Lyapunov's second method [START_REF] Lyapunov | The general problem of the stability of motion[END_REF]. We define the Lyapunov candidate function as:

V = X_k^T P X_k    (42)

where P is an unknown Lyapunov matrix with the properties

P^T = P > 0    (43)

From equation (42) and the notation given in equation (36), the variation of the Lyapunov function is defined as:

ΔV = X_{k+1}^T P X_{k+1} - X_k^T P X_k    (44)

Using equation (38), equation (44) is redefined as:

ΔV = ((A - BF)X_k + Dω_k)^T P ((A - BF)X_k + Dω_k) - X_k^T P X_k    (45)

By defining

C = A - BF    (46)
equation (45) can be written as:

ΔV = (CX_k + Dω_k)^T P (CX_k + Dω_k) - X_k^T P X_k = X_k^T C^T P C X_k + X_k^T C^T P D ω_k + ω_k^T D^T P C X_k + ω_k^T D^T P D ω_k - X_k^T P X_k    (47)

Reverting the notation of equation (38), equation (47) can be written in matrix form as:

ΔV = \begin{bmatrix} X_k \\ ω_k \end{bmatrix}^T \begin{bmatrix} (A-BF)^T P (A-BF) - P & (A-BF)^T P D \\ D^T P (A-BF) & D^T P D \end{bmatrix} \begin{bmatrix} X_k \\ ω_k \end{bmatrix}    (48)

For the proof, we introduce an accessory parameter α ≥ 0 in equation (40), such that:

ϒ = α ω_k^T ω_k - α γ^2 X_k^T F^T F X_k < 0    (49)

The left-hand side of this inequality is written in matrix form as:

ϒ = \begin{bmatrix} X_k \\ ω_k \end{bmatrix}^T \begin{bmatrix} -α γ^2 F^T F & 0 \\ 0 & α I \end{bmatrix} \begin{bmatrix} X_k \\ ω_k \end{bmatrix} < 0    (50)

Adding and subtracting this term to equation (48) allows us to bound ΔV as:

ΔV = ΔV - ϒ + ϒ = \begin{bmatrix} X_k \\ ω_k \end{bmatrix}^T Q \begin{bmatrix} X_k \\ ω_k \end{bmatrix} + ϒ    (51)

with

Q = \begin{bmatrix} (A-BF)^T P (A-BF) - P + α γ^2 F^T F & (A-BF)^T P D \\ D^T P (A-BF) & D^T P D - α I \end{bmatrix}    (52)

We know from equation (49) that ϒ < 0. Therefore, if Q is negative definite, then ΔV < 0. To prove the closed-loop system stable, values of the matrix P > 0 and of α ≥ 0 need to be found such that the matrix Q is negative definite, given the predefined value of the bounding parameter γ and the tuned controller parameters k_p and k_i. To this end, a Linear Matrix Inequality (LMI) [START_REF] Boyd | Linear matrix inequalities in system and control theory[END_REF] solver called SeDuMi [START_REF] Sturm | Using sedumi 1.02, a matlab toolbox for optimization over symmetric cones[END_REF] is used in the software Matlab. In order to describe the LMI Q < 0, Yalmip [START_REF] Lofberg | Yalmip: A toolbox for modeling and optimization in matlab[END_REF], an optimization toolbox compatible with Matlab, is employed. Given a value of γ = 0.98 and the gain values k_p = 0.14 and k_i = 0.0003, the LMI was solved successfully. The resulting matrix P and parameter α that make the matrix Q negative definite are:

P = \begin{bmatrix} 646.4512 & 1.2983 \\ 1.2983 & 0.0087 \end{bmatrix}  and  α = 4655    (53)

Using the LMI solver, we can also compute the maximum value of γ, which provides an insight into the robustness of the closed-loop system. After some iterations we have:

max γ = 0.98685 < 1    (54)

Recalling equation (39), if ω^T ω > 1, then the matrices J_{sa}(x_k) and Ĵ_{sa}^+(x_k) do not have the same sign, which means that the robot Jacobian and the estimated Jacobian indicate opposite directions. In our case, γ = 0.98685 is close to this limit case. The proposed closed-loop system is robustly stable and can handle high Jacobian inversion errors in the change of control variables.
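The LMI feasibility search can be reproduced outside Matlab; the sketch below uses cvxpy as a stand-in for the SeDuMi/Yalmip setup described above (the solver choice is our assumption, as is the scalar block structure n = 1, which matches the 2x2 matrix P reported in equation (53)):

```python
import numpy as np
import cvxpy as cp

def check_robustness(kp=0.14, ki=0.0003, gamma=0.98, n=1):
    """Search P > 0 and alpha >= 0 making Q of equation (52) negative
    definite for the closed-loop system of equation (35)."""
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[I, Z], [I, I]])
    B = np.vstack([I, Z])
    D = np.vstack([I, Z])
    F = np.hstack([kp * I, ki * I])
    C = A - B @ F
    P = cp.Variable((2 * n, 2 * n), symmetric=True)
    alpha = cp.Variable(nonneg=True)
    Q = cp.bmat([
        [C.T @ P @ C - P + alpha * gamma**2 * (F.T @ F), C.T @ P @ D],
        [D.T @ P @ C, D.T @ P @ D - alpha * np.eye(n)],
    ])
    Q = 0.5 * (Q + Q.T)   # structural symmetrization for the SDP solver
    eps = 1e-6
    prob = cp.Problem(cp.Minimize(0), [P >> eps * np.eye(2 * n),
                                       Q << -eps * np.eye(3 * n)])
    prob.solve(solver=cp.SCS)
    return prob.status, P.value, alpha.value
```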
VI. CONCLUSIONS AND FUTURE WORK

This paper presents a modeling methodology to obtain the kinematic relationships of soft manipulators. The kinematic equations are derived from a FEM model (or any equivalent physics-based model) that can be obtained from the geometry and the material properties of a soft manipulator. After a projection in a small constraint space, a set of coupled equations relates the position of the end-effector to the contribution of the actuators and the displacement of the sensors. The validity of the method is demonstrated on two different manipulators with complex geometry. In the case of the CBHA, the results were compared to those obtained with two geometric models developed for the same robot. While the material model used does not take viscosity into account, this is only due to the absence of knowledge of these specific properties for the material used. Indeed, the framework allows modeling viscoelasticity with Prony series [START_REF] Marchesseau | Multiplicative jacobian energy decomposition method for fast porous visco-hyperelastic soft tissue model[END_REF]. In general, a viscoelastic model is characterized by a rate-independent term, which in this case is the shear modulus representing the elastic behavior, and a rate-dependent modulus. The rate-dependent modulus of the material is defined by the time-based Prony series; faster strain rates induce a higher modulus than static loads. The limitation on the use of Prony series lies in the determination of the required coefficients, since it involves stress relaxation tests performed under controlled temperature and loading speed. Another way to model viscoelastic behavior is to introduce a rate-dependent damping effect using the Rayleigh equation. Rayleigh damping is a viscous damping that is proportional to a linear combination of mass and stiffness. Using Rayleigh damping, the internal forces in the robot (equation (1)) take the form:

f(x_i) ≈ f(x_{i-1}) + K(x_{i-1}) dx + B(x_{i-1}) dx    (55)

where the Rayleigh damping matrix is computed as:

B = α M + β K    (56)

with M and K the mass and stiffness matrices, respectively, and α and β the coefficients of proportionality.

The problem of position control of soft manipulators was solved by obtaining the inverse kinematic relationships of two different types of robots. The implementation of the simulation of the model was then used to directly pilot one of the manipulators, given a desired position of the end-effector, in feed-forward control. The feed-forward control of the robots relies entirely on the model. Because of the lack of material parameters, the open-loop system does not account for non-linear behaviors such as viscosity. The closed-loop controller proposed in this paper was proven able to reject these model uncertainties and to improve the overall behavior of the manipulator. Moreover, the proposed controller can be used even when high Jacobian inversion errors are present. It is important to remark that the method is no longer viable when we leave the quasi-static motion case and, for the moment, the sampling rate required to capture vibrations in the robot is not feasible. Nevertheless, this first approach to the kinematics and control of soft manipulators opens up some interesting perspectives for future work:
• The model of the tendons does not account for the friction between the cable and the guides it passes through. Including a term in the formulation of the direct model to account for the friction can be done, but the way it would change the inverse model should be investigated.
• Given the information provided by the FEM model, a study of the impedance control of the robot is feasible. The information regarding the compliance of the robot can be directly extracted from the FEM.

Fig. 2: Tendon actuation. d_b and d_a on the figure represent the direction of the tendon before and after the cable guide, respectively, which are used to compute the normal forces at the guides.
Fig. 4: The RobotinoXT by Festo Robotics. (left) The anatomy of the Compact Bionic Handling Assistant. (right) A section of the manipulator, composed of 3 pneumatic actuators and their corresponding length sensors.
Fig. 5: Visual model of the trunk and the underlying finite element model.
Fig. 6: Constant curvature model.
Fig. 8: X/Y view of the results from the model comparison.
Fig. 9: X/Z view of the results from the model comparison.
Fig. 10: Collision of the outer wall of the cavities. The collisions occur on the orange edges depicted in the bent actuator (right).
Fig. 11: Deformable parallel manipulator.
Fig. 12: Comparison of desired trajectory and measured trajectory of the parallel manipulator.
Fig. 13: Comparison of measured and estimated lengths of one of the sensors given a predefined set of end-effector positions for the CBHA.
Fig. 14: Comparison between measured and predicted deflections caused by external loading on the parallel manipulator.
Fig. 15: Comparison between measured and predicted deflections caused by external loading on the CBHA.
Fig. 16: Closed-loop control of the CBHA based on the simultaneous IKM and FKM simulations and the controller.
Fig. 18: Measured lengths of the CBHA in closed-loop. An external force is applied to the manipulator after 1050 time steps. The time step is 0.1 s for the experiment.

TABLE 1: Statistical analysis of the error between measured and estimated lengths for the CBHA

| l (mm) | l_1  | l_2  | l_3  | l_4  | l_5  | l_6  |
| µ (mm) | 3.2  | 2.43 | 3.86 | 4.08 | 3.6  | 3.69 |
| σ (mm) | 1.55 | 1.76 | 2.05 | 2.12 | 2.56 | 2.06 |

VII. AUTHOR DISCLOSURE STATEMENT

No competing financial interests exist.

VIII. ACKNOWLEDGMENTS

This research was part of the project COMOROS supported by ANR (Tremplin-ERC) and the Conseil Régional Haut-de-France and the European Union through the European Regional Development Fund (ERDF).
67,718
[ "995431", "7036", "4013" ]
[ "410272", "410272", "120930", "409871", "410272", "409871" ]
01745644
en
[ "spi" ]
2024/03/05 22:32:07
2011
https://hal.science/hal-01745644/file/Muller.2011.CICC.Design%20Challenges%20for%20Prototypical%20and%20Emerging%20Memory%20Concepts%20Relying%20on%20Resistance%20Switching_Invited.pdf
Ch. Muller, D. Deleruyelle, O. Ginez, J-M. Portal, M. Bocquet

Design Challenges for Prototypical and Emerging Memory Concepts Relying on Resistance Switching

Keywords: Prototypical and emerging memory concepts, resistance switching, operation mechanisms, memory design, embedded or distributed memory circuits

Integration of functional materials in memory architectures has led to emerging concepts with disruptive performances as compared to conventional charge storage technologies. Besides floating-gate solutions such as EEPROM and Flash, these alternative devices involve voltage- or current-controlled switching mechanisms between two distinct resistance states. The origin of the resistance change straightforwardly depends upon the nature and fundamental physical properties of the functional materials integrated in the memory cell. After a general overview of non volatile memories, this paper focuses on prototypical and emerging memory cells and on their ability to withstand a downscaling of their critical dimensions. In addition, despite different maturity levels, particular attention is paid to common guidelines helpful for designing embedded or distributed resistive switching memory circuits.

I. OVERVIEW

Currently, the microelectronic industry is facing new technological challenges to continuously improve the performances of memory devices in terms of access time, storage density, endurance or power consumption. The major bottleneck to overcome is the downsizing of memory cell dimensions, necessary to integrate a larger number of elementary devices, and subsequently more functionalities, on a constant silicon surface. This drastic size reduction is constrained, in particular, by the intrinsic physical limits of the materials integrated in the memory architecture. Besides volatile random access memories (RAM) such as Dynamic (DRAM) or Static (SRAM), non volatile memory technologies may be subdivided into two categories depending upon the mechanism used to store binary data. A first group of solid state devices is based on charge storage in a polysilicon floating gate. In this family, gathering the usual EEPROM and Flash technologies [START_REF] Van Houdt | Physics of Flash memories[END_REF], new concepts or architectures are required to satisfy CMOS "More Moore" scaling trends. For instance, Si1-xGex "strained silicon" technology enables boosting the charge carrier mobility, and discrete charge trapping improves data retention and enables an extension towards "multi-bit" storage. Presently, Flash technology still remains the undisputed reference, regardless of its NAND (dense and cheap) or NOR (fast) architecture. However, as the scaling of the conventional floating gate cell becomes ever more complicated below the 32 nm node, opportunities for alternative concepts are rising. In the last two decades, several major semiconductor companies have explored new solutions integrating a functional material whose fundamental physical property enables data storage. An older technology, called FRAM for Ferroelectric RAM [START_REF] Scott | Ferroelectric memories[END_REF], has been in small-volume production for several years by Fujitsu and Texas Instruments, which license Ramtron patents. With their DRAM-like architecture, FRAM memories integrate ferroelectric capacitors that permanently store electrical charges. Thanks to their low voltage operations, fast access time and low power consumption, FRAM devices mainly address "niche" applications such as contactless smart cards, RFID tags and nomad devices.
Excluding FRAM technology, recent R&D efforts have also led to a new class of disruptive technologies based on resistive switching mechanisms. These memories, presenting two stable resistance states controlled by an external current or voltage, attract much attention for future solid state devices. The physical origin of the resistance change depends upon the nature and fundamental physical property of the materials integrated in the memory cell. As a result, a broad panel of new concepts is currently emerging: magnetoresistive memories (MRAM) and derived concepts; phase change memories (PCM); resistive memories (RRAM) including oxide resistive (OxRRAM) and "nanoionic" memories (CBRAM and PMC) [START_REF] Ch | Emerging memory concepts: materials, modeling and design[END_REF]. As compared to other technologies, PCM and RRAM memories are more favourably positioned as alternatives to Flash for sub-22 nm nodes. Nevertheless, these two technologies have different maturity levels: while the first PCM products are available (e.g. Omneo™ by Micron, former Numonyx), RRAM memories are still at an early R&D stage. Following the 2010 ITRS report, MRAM, PCM and FRAM are categorized as "prototypical" memories whereas RRAM is an "emerging" memory [START_REF]Potential and Maturity of Selected Emerging Research Memory Technologies[END_REF]. Hence, alternative non volatile memory concepts, either evolutionary or revolutionary, are being explored to satisfy the need for higher storage capacity and better performances. Within a broad panel of innovative solutions, this review specifically addresses resistive switching technologies. After a brief description of memory cells and their ability to withstand downscaling of their critical dimensions, particular attention is paid to common guidelines helpful for designing embedded or distributed memory circuits.

II. MEMORY MARKETS FOR RESISTIVE TECHNOLOGIES

To penetrate the markets currently covered by SRAM, DRAM and Flash memories, prototypical and emerging concepts face stringent requirements of process compatibility and scalability at the material, device and circuit levels. Status and outlook may be summarized as follows:

• MRAM, arising from spintronics, may be viewed as a credible candidate to replace existing technologies in many applications requiring standalone or embedded solutions combining Flash-like non volatility, SRAM-like fast nanosecond switching and DRAM-like infinite endurance. Looking forward, the "Spin Torque Transfer" concept (STT-MRAM) appears as a promising technology able to merge the aforementioned advantages while remaining scalable for aggressive technological nodes (cf. §III).

• PCM is probably the most advanced alternative memory concept in terms of process maturity, storage capacity and access time. Furthermore, the phase change memory element exhibits an excellent ability to withstand a downsizing of its critical dimensions. Considering its limitations in terms of high reset current and poor retention performance, PCM is rather positioned as a NOR Flash substitute. Nevertheless, it may also be envisaged to rethink the PCM subsystem architecture to bring the technology within competitive range of DRAM. To exploit PCM's scalability as a DRAM alternative, new design solutions are expected to balance long latencies, high-energy writes and finite endurance [START_REF] Lee | Architecting phase change memory as a scalable DRAM alternative[END_REF].

• The RRAM concept is at an earlier stage of development compared with MRAM and PCM.
Since RRAM memory elements can be integrated into the Back-End Of Line (BEOL), this technology is of particular interest for high density storage with possibly multi-level three-dimensional architectures. Metal-organic complex-based RRAM [START_REF] Ch | Resistance change in memory structures integrating CuTCNQ nanowires grown on dedicated HfO2 switching layer[END_REF], transition metal oxide (TMO) based OxRRAM [START_REF] Sawa | Resistive Switching in Transition Metal Oxides[END_REF][START_REF] Dumas | Resistive switching characteristics of NiO films deposited on top of W and Cu pillar bottom electrodes[END_REF] and chalcogenide-based CBRAM appear as promising candidates [START_REF] Kozicki | Nanoscale memory elements based on solid-sate electrolytes[END_REF]. However, retention and endurance still remain to be demonstrated for CBRAM devices, while the high reset current is an issue in TMO-based memory devices. As a consequence, RRAM concepts, still in their infancy, require further academic and industrial investigations (i) to validate integration and scalability capabilities; (ii) to uncover the physical origin of resistance switching, which sometimes remains controversial; (iii) to model the reliability. Besides conventional standalone (Fig. 1a) and embedded (Fig. 1b) memories, emerging resistive concepts also enable designing innovative electronic functions such as field programmable gate arrays (FPGA) or logic devices (e.g. flip-flops) in which non volatile memory circuits are distributed over a single chip (Fig. 1c). This latter distributed implementation requires CMOS compatible, low cost, low voltage and low power non volatile memory elements. Before investigating novel architectures, it is of primary importance to develop design kits compatible with microelectronics design suites like Cadence or Mentor. Design kits dedicated to memories first require compact models describing the cell's properties, as well as verification tools that enable designing and checking the circuit layout [START_REF] Ch | Emerging memory concepts: materials, modeling and design[END_REF].

III. MAGNETORESISTIVE MEMORIES, MRAM

In magnetoresistive devices, data are no longer stored by electrical charges, as in semiconductor-based memories, but by a resistance change of a complex magnetic nanostructure [START_REF] Tehrani | Status and outlook of MRAM memory technology[END_REF]. MRAM cells integrate a magnetic tunnel junction (MTJ) consisting of a thin insulating barrier (i.e. tunnel oxide) separating two ferromagnetic (FM) layers. Using lithographic processes, junctions are etched in the form of sub-micron sized pillars connected to electrodes. In conventional FIMS (Field Induced Magnetic Switching), each MTJ is located at the intersection of two perpendicular metal lines (Fig. 2a). The MTJ resistance depends upon the relative orientation of the magnetizations in the two FM layers: the magnetization in the FM reference layer is fixed whereas the one in the FM storage layer is switchable (Fig. 2b). Magnetization reversal in the FM storage layer is controlled by two external magnetic fields produced by currents injected in the surrounding metal lines. Tunnel magneto-resistance (TMR) is due to tunnelling of electrons through the thin oxide layer sandwiched between two FM films having either antiparallel (high resistance state "0") or parallel (low resistance state "1") magnetizations. In MRAM technology, three major issues are identified: (i) low resistance discrimination between "0" and "1" states (i.e.
a small sensing margin); (ii) high sensitivity to disturbs during writing ("bit fails"); (iii) high currents (a few mA) required for magnetization reversal. To overcome these issues, different solutions have been proposed:

• To enlarge the sensing margin, the tunnel barrier in amorphous aluminium oxide AlOx of the first device generations was progressively replaced by crystallized magnesium oxide MgO.

• Everspin Company (former Freescale), the world's first volume MRAM supplier, already sells 4, 8 and 16 Mb MRAM chips based on 180 nm CMOS technology and relying on the "Toggle" concept to limit disturbs during writing [START_REF] Andre | A 4 Mb 0.18 µm 1T1MTJ Toggle MRAM with balanced three input sensing scheme and locally mirrored unidirectional write drivers[END_REF]. Thanks to a new free layer magnetic nanostructure called "synthetic antiferromagnet" (SAF), an appropriate bit orientation and a current pulse sequence, the MRAM cell bit state is programmed via Savtchenko switching. In that configuration, a single metal line alone cannot switch the bit, providing greatly enhanced selectivity over conventional FIMS MRAM.

• In October 2010, the French startup Crocus Technology and TowerJazz completed the first stage integration of "Thermally-Assisted" MRAM into a 130 nm CMOS platform. In the TA-MRAM cell, a current injected in the MTJ during writing induces Joule heating in the FM layers, and the subsequent temperature increase facilitates the magnetization reversal [START_REF] Prejbeanu | Thermally assisted switching in exchange-biased storage layer magnetic tunnel junctions[END_REF]. The TA-MRAM concept enables (i) shrinking the memory cell size with only one metal line to produce the magnetic field; (ii) reducing the power consumption thanks to a limited writing current; (iii) improving bit fail immunity.

• More recently, STT ("Spin Torque Transfer") switching was proposed to solve the aforementioned issues and make MRAM memory compatible with more aggressive nodes. Large companies (e.g. Renesas, Hynix, IBM, Samsung…) and a few startups (e.g. Grandis, Avalanche, Everspin, Crocus…) are actively working on this new concept and state they can push the integration limits beyond the 45 and 65 nm nodes for standalone and embedded memories respectively [START_REF] Nagai Hide | Spin-Transfer Torque writing technology (STT-RAM) for future MRAM[END_REF]. In the STT writing mode, a spin-polarized current flowing through the MTJ exerts a spin torque on the magnetization of the FM storage layer [START_REF] Diao | Spintransfer torque switching in magnetic tunnel junctions and spintransfer torque random access memory[END_REF]. Technologically, the STT concept requires the integration of a polarizer to select the spin of the electrons injected into the magnetic nanostructure and ensure the magnetization reversal. As compared to conventional MRAM cells, the STT-RAM solution is said to consume less power, to improve bit selectivity and to reduce the memory cell size to approximately 6-9 F².

Similarly to DRAM, EEPROM or NOR Flash memories, and whatever the writing mode (Toggle, TA or STT), the MRAM core-cell is based on the association of 1 select MOS transistor with 1 magnetic junction (i.e. 1T/1MTJ). To individually access each memory cell, magnetic tunnel junctions are located at each intersection of a bottom "Word Line" (WL) connected to the transistor gate and an upper "Bit Line" (BL) (Fig. 2c).
In contrast to the multi-layer structures proposed for RRAM technology, such 1T/1MTJ cells preclude stacking memory layers, due to disturbs between neighbouring cells during magnetic field-assisted writing. The polarization of the "Word" (WL), "Bit" (BL) and "Source" (SL) lines used to select one core-cell in the memory matrix straightforwardly depends on the writing mode. In TA-MRAM, writing requires the injection of a heating current in the MTJ, obtained by applying a positive voltage on the gate of the select transistor (WL) of the addressed core-cell while the source line is grounded. Concomitantly, a current is injected in a single metal line (BL) to create a magnetic field that enables switching of the FM storage layer. In the STT writing mode, a positive voltage is applied on the select transistor gate (WL) of the addressed cell [START_REF] Hosomi | A novel nonvolatile memory with spin torque transfer magnetization switching: spin-RAM[END_REF]. Since writing does not require a magnetic field, only a bipolar voltage is applied between BL and SL to inject a top-down or bottom-up spin-polarized current enabling magnetization reversal. Besides memory architectures, TA [START_REF] Guillemenet | Non-volatile run-time field-programmable gate arrays structures using thermally assisted switching magnetic random access memories[END_REF] or STT [START_REF] Zhao | Spin-MTJ based non volatile flip-flop[END_REF] MTJs may also be integrated in reconfigurable FPGAs. Using a heterogeneous design, there is no need to load the configuration data from an external non volatile memory as required in SRAM-based FPGAs. Non volatility enables reducing both the power consumption and the configuration time required at each start-up of the circuit in comparison with classical static random access memory-based FPGAs.

IV. PHASE CHANGE MEMORIES, PCM

Since the 1970s, chalcogenide phase change materials have attracted much attention for data storage applications [START_REF] Ovshinsky | Reversible electrical switching phenomenon in disordered structures[END_REF], particularly in rewriteable optical media such as compact, digital versatile or more recently Blu-ray discs. In addition, they offer a great potential for non volatile phase change memories, abbreviated as PCM [START_REF] Lankhorst | Low-cost and nanoscale non-volatile memory concept for future silicon chips[END_REF][START_REF] Cho | A 0.18 µm 3.0 V 64 Mb nonvolatile Phase transition Random Access Memory (PRAM)[END_REF]. This concept exploits the resistance change (a few orders of magnitude) due to reversible amorphous (state "0") to crystalline (state "1") phase transitions. As depicted in Fig. 3, to reach the amorphous state (i.e. reset operation), the GST material is heated above the melting temperature TM (typically 600-700°C) and then rapidly cooled. In contrast, the GST material is placed in the crystalline state (i.e. set operation) when "slowly" (a few tens of ns) cooled from a temperature between the melting point TM and the crystallization temperature TX (Fig. 3). Technologically, a layer of chalcogenide alloy (e.g. Ge2Sb2Te5, GST) is sandwiched between top and bottom electrodes: the phase change is induced in the GST programmable volume through an intense local Joule effect caused by a current injected from the bottom electrode contact acting as a "heater" (Fig. 4a). As illustrated in Fig. 3, improving PCM performances requires overcoming three main issues:

• Design innovative cell architectures that enable reducing the reset current, which remains quite high in existing prototypes (typically 0.5 mA).
• Propose new chalcogenide materials exhibiting a higher crystallization temperature, to improve data retention (against spontaneous amorphous-to-crystalline transition), and crystallizing rapidly, to decrease the switching time.

• Fabricate a confined phase change volume to limit heat spread toward neighbouring cells during the reset operation.

As compared to MRAM technology, PCM memories have a better ability to withstand size reduction. Indeed, although the endurance is still limited (less than 10¹⁰ cycles), the memory cell is much more compact (typically 10 F²) and downsizing should be extended beyond the 25 nm node by shrinking the programmable volume. PCM is rather positioned as a NOR Flash substitute: on this market segment, PCM technology displays significant improvements in terms of endurance, access time and bit alterability (i.e. state switching without the intermediate erase operation required in floating gate technologies). PCM technology basically uses 1X/1R core-cells (Fig. 4b), X being one access device associated with one phase change resistor R. As shown in Fig. 4c, each core-cell is addressed thanks to the WL connected to the access device, the BL and the SL. Considering the high reset current required to switch the chalcogenide material from the crystalline to the amorphous phase, the access device must be able to drive a high current during this operation. Access may be controlled by a bipolar junction transistor (BJT) [START_REF] Gill | Ovonic unified memorya highperformance nonvolatile memory technology for stand-alone memory and embedded applications[END_REF], a diode [START_REF] Oh | Full integration of highly manufacturable 512Mb PRAM based on 90nm technology[END_REF] or a MOS transistor [START_REF] Hyung-Rok On | Enhanced write performance of a 64 Mb phase-change random access memory[END_REF]. Dedicated peripheral circuits have to be designed to write and read data in the PCM memory cell: during writing, the access device injects current into the storage material and thermally induces the phase change, which is detected during reading. Contrary to the MRAM concept, a special emphasis is required on write drivers. The time-dependent temperature profiles used to switch the chalcogenide material have to be carefully controlled through the monitoring of the set and reset currents injected in the programmable volume. For instance, Woo Yeong Cho et al. have proposed a solution based on a current mirror source belonging to a local column constituted of 1NTMOS/1R cells [START_REF] Cho | A 0.18 µm 3.0 V 64 Mb nonvolatile Phase transition Random Access Memory (PRAM)[END_REF]. The main advantage of such a structure is the reuse of the current source for all columns through the entire memory chip. For read operations, PCM memory circuits require a sense amplifier able to discriminate the two resistance states. As compared to MRAM devices, the resistance margin ΔR is larger (at least 1 decade) and the constraints on the sense amplifier are slightly relaxed. Bedeschi et al. have proposed a solution based on a current measurement mode implemented in an 8 Mb PCM memory matrix using a BJT as selector [START_REF] Bedeschi | An 8 Mb demonstrator for high-density 1.8 V phasechange memories[END_REF]. Current passing through the cell is measured thanks to a mirror-biased structure and the current is compared to a reference 1BJT/1R dummy cell.

V. REDOX MEMORIES, RRAM

The last ten years have seen the emergence of new memories labelled as RRAM, for Resistive RAM, based on various mechanisms of resistance switching excluding the phase change used in PCM (cf. §IV). In its simplest form, the RRAM element relies on a MIM structure (Metal/Insulator/Metal) whose conductivity can be electrically switched between high (state "0") and low (state "1") resistance states (Fig. 5a). RRAM memory elements are gaining interest for (i) their intrinsic scaling characteristics compared to floating gate Flash devices; (ii) their potentially small size; (iii) their ability to be organized in dense crossbar arrays (Fig. 5b). Hence, the RRAM concept is seen as a promising candidate to replace Flash memories at or below 22 nm technological nodes. Following the latest ITRS classification published in 2010 on "Potential and Maturity of Selected Emerging Research Memory Technologies" [START_REF]Potential and Maturity of Selected Emerging Research Memory Technologies[END_REF], different "flavours" of RRAM memory elements are distinguished depending on the mechanism involved in the resistance change [START_REF] Waser | Nanoionics-based resistive switching memories[END_REF][START_REF] Waser | Redox-Based Resistive Switching Memories -Nanoionic Mechanisms, Prospects, and Challenges[END_REF].
In the ITRS report, all RRAM concepts are categorized as "Redox RAM", which encompasses a wide variety of MIM structures and materials sharing reduction/oxidation (i.e. redox) electrochemical processes [START_REF] Waser | Redox-Based Resistive Switching Memories -Nanoionic Mechanisms, Prospects, and Challenges[END_REF]. Redox mechanisms can operate in the bulk I-layer, along conductive filaments formed within the I-layer, and/or at the I-layer/metal contact interfaces of the MIM structure. The following sub-categories are proposed:

• In "fuse/anti-fuse" memories, the resistance switching is driven by local reduction/oxidation mechanisms leading to the formation/dissolution of conductive filaments within a transition metal oxide [START_REF] Sawa | Resistive Switching in Transition Metal Oxides[END_REF][START_REF] Dumas | Resistive switching characteristics of NiO films deposited on top of W and Cu pillar bottom electrodes[END_REF][START_REF] Waser | Nanoionics-based resistive switching memories[END_REF][START_REF] Waser | Redox-Based Resistive Switching Memories -Nanoionic Mechanisms, Prospects, and Challenges[END_REF]. Operations do not depend upon the applied voltage polarity (unipolar switching).

• In the "valence change mechanism", the resistance switching relies on redox electrochemistry that changes the conductivity of the I-layer. Field-induced oxygen vacancy drift plays a predominant role in a few specific oxide-based memory elements: TiO2-based "memristors" [START_REF] Strukov | The missing memristor found[END_REF] recently demonstrated by Hewlett-Packard Company; CMOx™ elements developed by Unity Semiconductor [START_REF] Meyer | Oxide dual-layer memory element for scalable non-volatile cross-point memory technology[END_REF]; NiO layers obtained from nickel oxidation in small via structures [START_REF] Courtade | Integration of resistive switching NiO in small via structures from localized oxidation of nickel metallic layer[END_REF][START_REF] Courtade | Method for manufacturing a memory element comprising a resistivity-switching NiO layer and devices obtained thereof[END_REF][START_REF] Goux | Coexistence of the bipolar and unipolar resistive switching modes in NiO cells made by thermal oxidation of Ni layers[END_REF]. Memory operations require polarity reversal (i.e. bipolar operations).

• CBRAM ("Conductive Bridge RAM") [START_REF] Symanczyk | Conductive bridging memory development from single cells to 2Mbit memory arrays[END_REF] and PMC ("Programmable Metallization Cells") [START_REF] Kozicki | Nanoscale memory elements based on solid-sate electrolytes[END_REF] belong to "nanoionic" memories. MIM-like memory elements consist of an inert electrode (W, Pt…), an ionic conductor used as solid electrolyte (WO3, MoO3, GeSe, Ag-GeSe…) and an active electrode (Ag, Cu…) producing, through an electrochemical reaction, ions (Ag⁺, Cu⁺…) that drift within the solid electrolyte. For this type of mechanism, a polarity change of the applied voltage is mandatory (i.e. bipolar operations).

As previously underlined, RRAM memories are still at an R&D stage and only a few memory circuits have been published in the literature. As memory elements may be integrated into the BEOL, RRAM technology is of particular interest for high density storage with possibly multi-level three-dimensional architectures (Fig. 6a)
[START_REF] Kügeler | High density 3D memory architecture based on the resistive switching effect[END_REF][START_REF] Baek | Multilayer cross-point binary oxide resistive memory (OxRRAM) for post-NAND storage application[END_REF]. Nevertheless, it has to be stressed that, from a memory cell and array design point of view, unipolar operations are preferred over bipolar operations. For this promising class of memories, two cell architectures are generally proposed depending on the nature of the RRAM element: 1T/1R (i.e. one transistor associated with one resistive element) for a CMOS-based active matrix; 1R enabling a crossbar-type passive memory matrix. The first demonstrators integrated memory cells with sizes of 4-8 F² for an active matrix and (4/n) F² for a passive matrix (i.e. crossbar) with n storage layers [START_REF] Waser | Nanoionics-based resistive switching memories[END_REF][START_REF] Symanczyk | Conductive bridging memory development from single cells to 2Mbit memory arrays[END_REF]. Nevertheless, in passive crossbars, the memory elements cannot be electrically isolated while neighboring cells are addressed. This problem can be solved by adding serial elements with a specific (high) non-linearity (e.g. a diode, as shown in Fig. 6b) at each resistively switching cell, depending on the resistive switching properties and the array size. Samsung Company already demonstrated a "non-CMOS" solution with a two-layer architecture based on 1D/1R memory cells associating a diode with a nickel oxide-based resistive element exhibiting unipolar switching [START_REF] Lee | 2-stack 1D-1R cross-point structure with oxide diodes as switch elements for high density resistance RAM applications[END_REF]. Such a crossbar memory matrix was easily designed with the help of BL and WL to access each cell individually. CBRAM derives from the parent PMC technology, which was developed by Axon Technologies Corp. in collaboration with Arizona State University. In 2007, the Qimonda/Altis/Infineon consortium demonstrated a 2 Mb CBRAM test chip with read-write control circuitry implemented in a 90 nm technological node, with a read/write cycle time of less than 50 ns [START_REF] Dietrich | A nonvolatile 2-Mbit CBRAM memory core featuring advanced read and program control[END_REF]. The corresponding CBRAM circuit was developed using 8 F² core-cells associating 1 MOS transistor with 1 Conductive Bridging Junction (i.e. 1T/1CBJ). The chip design was based on a fast feedback-regulated CBJ read voltage and on a novel program charge control using dummy cell bleeder devices. It has to be mentioned that the Adesto startup acquired the intellectual property and patents related to CBRAM technology from Qimonda in October 2010. In addition, Adesto announced in November 2010 a manufacturing partnership with Altis Semiconductor. The low power resistive switching at voltages below 1 V, the ability to scale to minimum geometries below 20 nm and the multi-level capability make CBRAM a very promising emerging non volatile memory technology. Nevertheless, as compared to prototypical MRAM, FRAM and PCM memories, the CBRAM concept requires refresh operations due to the poor retention capability of the low resistance state (contrary to the high resistance state, which is usually much more stable in time). Thus, a refresh voltage is applied to the CBRAM memory element at a predetermined time to strengthen and stabilize the low resistance state. This smart refresh enables preserving the resistance margin ΔR necessary to unambiguously discriminate the high and low resistance states during read.
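To see why passive crossbars need such non-linear series elements, here is a back-of-the-envelope sketch of a sneak path in a 2×2 crossbar; the resistance values are purely illustrative and do not come from any of the cited devices.

```python
# Reading cell (0,0), which stores "0" (high resistance), in a 2x2
# passive crossbar whose three other cells all store "1" (low resistance).
# Without a diode per cell, the three neighbours form a parasitic series
# path in parallel with the target cell. Values are illustrative only.
R_HIGH, R_LOW = 1e6, 1e3          # ohms

target = R_HIGH                   # cell under read
sneak_path = 3 * R_LOW            # (0,1) -> (1,1) -> (1,0) in series
measured = 1.0 / (1.0 / target + 1.0 / sneak_path)
print(f"measured ~ {measured:.0f} ohm")  # ~2991 ohm: the "0" reads as a "1"
```

The apparent resistance collapses toward the sneak-path value, which is exactly the mis-read that a serial diode (1D/1R) suppresses by blocking the reverse-biased leg of the parasitic path.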
This refresh is performed without destroying the data stored in the CBRAM element, whereas in DRAM a rewriting of the respective state is mandatory. To conclude, RRAM technology is very promising and, once again, many design solutions exist for its implementation. However, depending on the RRAM concept, specific peripheral circuits are required to guarantee reliable memory operations.

VI. TWO GUIDING PRINCIPLES

For both prototypical (MRAM, PCM) and emerging (RRAM) technologies, two guiding principles are shared for designing generic resistive switching memory circuits:

• The first guideline is related to the implementation of bistable resistive elements in a memory array. In most cases, memory cells rely on the association of a select/access device with a resistive element. Depending on the resistive concept (magnetic, phase change or redox), the select/access device can be a BJT, a MOS transistor or a diode. Even if a MOS transistor is frequently preferred, the access device is adapted to the electrical characteristics of the resistive element (set and reset currents and voltages, resistance levels, memory window…). To access each 1T/1R memory cell individually, suitable bias conditions are applied to the Bit (BL), Source (SL) and Word (WL) Lines of the addressed cell, the other lines being grounded and/or floating. For illustration, Hush and Baker [START_REF] Hush | Complementary bit PCRAM sense amplifier and method of operation[END_REF] have proposed an architecture for sensing the resistance state of a programmable conductor random access memory element (Fig. 7a). The design is based on complementary memory elements, one holding the resistance state being sensed and the other holding the complementary resistance state. A sense amplifier detects the voltages discharging through the high and low resistance elements to determine the resistance state.

• The resistance in the "0" and "1" states represents the second common guideline (Fig. 7b). Indeed, reliable read operations require an unambiguous discrimination of low and high resistances, below and above the median resistance RRef, defined from ΔR/2 = [RHigh − RLow]/2. Moreover, the bit cell resistance distributions must be narrow to avoid any overlap between the "0" and "1" states (i.e. a large ΔR/σ ratio, with σ the distribution standard deviation). Finally, as the maximum current consumed during a read operation is linked to RLow, the resistance in the "1" state must be as large as possible to decrease the overall power consumption.

Figure 7. (a) Architecture that enables sensing the resistance state of a programmable conductor random access memory element using complementary resistive elements (dashed square). (b) Bit cell resistance distributions of the "0" and "1" resistance states.

VII. CONCLUSION

In summary, reliable memory operations require a close matching between the electrical characteristics of the resistive elements and the design rules. In other words, to reach a suitable sensing margin (i.e. a large ΔR) shifted toward high resistances, together with narrow bit cell resistance distributions (i.e. a low σ), it is of primary importance to:

• Control the materials microstructure, the critical dimensions of the memory element, process variability…;

• Understand the physical mechanisms explaining resistance switching;

• Develop relevant compact models to be implemented in electrical simulators.

Hence, the development of prototypical and emerging memory concepts emphasizes the necessity to make stronger links between memory cell materials and processes, modeling and circuit design.
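As a numerical illustration of the second guideline, the sketch below estimates a bit-error probability from the ΔR/σ ratio, assuming (purely for illustration) Gaussian resistance distributions and a mid-point reference RRef; the numbers are invented and not taken from any of the cited chips.

```python
import math

# Toy read-margin estimate under a Gaussian assumption (illustrative only).
def bit_error_probability(r_low, r_high, sigma):
    """Probability that a "0" or "1" cell crosses the mid-point reference."""
    margin = (r_high - r_low) / 2.0         # delta_R / 2
    z = margin / sigma
    # Gaussian tail beyond RRef, identical for both states in this toy setup.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

for sigma in (1e3, 2e3, 4e3):
    p = bit_error_probability(r_low=1e4, r_high=3e4, sigma=sigma)
    print(f"sigma = {sigma:.0f} ohm -> error probability ~ {p:.2e}")
```

Doubling σ at a fixed ΔR degrades the error rate by many orders of magnitude, which is why the text insists on narrow distributions as much as on a large ΔR.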
Figure 1. Circuit/architecture of non volatile systems: (a) Conventional implementation with the logic block and a standalone non volatile memory (NVM) on separate chips. This configuration enables integrating separately optimized technologies but requires tradeoffs in terms of cost and communication speed between chips. (b) Embedded implementation with NVM and logic block on a single monolithic substrate. (c) Implementation with distributed NVM circuits on a single chip.

Figure 2. (a) To write an MRAM bit, currents are passed through the perpendicular metal lines surrounding the MTJ. (b) The resultant magnetic fields enable programming a bit by reversing the magnetization of the upper FM storage layer (select MOS transistor OFF). To read a bit, a current is passed through the MTJ and its resistance is sensed (select MOS transistor ON). (c) Schematic diagram of the series-parallel architecture for 1T/1MTJ cells.

Figure 3. Temperature profiles used for set and reset operations in PCM. Innovations are mandatory to improve data retention and reduce power consumption.

Figure 4. (a) Conventional PCM architecture in which two adjacent memory cells are coupled to a common digit line. The access MOS transistor driven by a word line WL is connected to the phase change element through a conductive plug acting as a "heater". (b) 1 access device/1 resistor memory cell. (c) Electrical schematic arrangement of a PCM memory array.

Figure 5. (a) Scheme of a typical RRAM memory element: the resistive switching layer is sandwiched between top and bottom electrodes to form a simple MIM (Metal/Insulator/Metal) structure. (b) Crossbar-type memory architecture in which memory elements are located at the intersections of perpendicular metal lines.

Figure 6. Three-dimensional view (a) and schematic (b) of a two-layer crossbar memory array integrating 1 diode/1 resistor (1D/1R) core-cells. The 1D/1R cell enables isolating a memory element while neighbouring cells are addressed.
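Following the conclusion's call for compact models, here is a deliberately minimal behavioral sketch of a bistable (bipolar) resistive element of the kind such models abstract; the thresholds and resistances are invented illustrative values, not a model of any specific device from this paper.

```python
# Minimal behavioral compact model of a bipolar bistable resistive element.
# All numeric values are illustrative placeholders.
class ResistiveCell:
    def __init__(self, r_low=1e3, r_high=1e5, v_set=1.5, v_reset=-1.2):
        self.r_low, self.r_high = r_low, r_high
        self.v_set, self.v_reset = v_set, v_reset
        self.state = 0                       # start in high-resistance "0"

    def apply(self, v):
        """Update the internal state for an applied voltage, then
        return the resulting current (bipolar switching rule)."""
        if v >= self.v_set:
            self.state = 1                   # set -> low resistance "1"
        elif v <= self.v_reset:
            self.state = 0                   # reset -> high resistance "0"
        r = self.r_low if self.state else self.r_high
        return v / r

cell = ResistiveCell()
for v in (0.2, 1.6, 0.2, -1.5, 0.2):         # read, set, read, reset, read
    i = cell.apply(v)
    print(f"V = {v:+.1f} V  I = {i:.2e} A  state = {cell.state}")
```

Even such a coarse state-machine view is enough to exercise array-level read/write sequencing in a simulator; a production compact model would add dynamics, variability and temperature dependence, as the conclusion emphasizes.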
34,481
[ "174122", "20388", "18361" ]
[ "199957", "199957", "199957", "199957", "199957" ]
01745647
en
[ "sdv", "shs" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01745647/file/Truchetet-Pradeu_Robustness%20and%20repair_Online%20version.pdf
Marie-Elise Truchetet, Thomas Pradeu (email: thomas.pradeu@u-bordeaux.fr)

Re-thinking our understanding of immunity: Robustness in the tissue reconstruction system

Seminars in Immunology (2018), issue on "Redundancy and Robustness", guest edited by Eric Vivier

Keywords: Tissue repair, tissue regeneration, robustness, redundancy, plasticity, surveillance

Robustness, understood as the maintenance of specific functionalities of a given system against internal and external perturbations, is pervasive in today's biology. Yet precise applications of this notion to the immune system have been scarce. Here we show that the concept of robustness sheds light on tissue repair, and particularly on the crucial role the immune system plays in this process. We describe the specific mechanisms, including plasticity and redundancy, by which robustness is achieved in the tissue reconstruction system (TRS). In turn, tissue repair offers a very important test case for assessing the usefulness of the concept of robustness, and for identifying different varieties of robustness.

Introduction

Robustness can be defined as the maintenance of specific functionalities of a given system against internal and external perturbations [START_REF] Csete | Reverse engineering of biological complexity[END_REF][START_REF] Kitano | Biological robustness[END_REF]. The term, routinely used in engineering (e.g. [START_REF] Maynard | Architectural elements of language engineering robustness[END_REF]), is now pervasive in the life sciences [START_REF] Whitacre | Biological Robustness: Paradigms, Mechanisms, and Systems Principles[END_REF]. Systems and processes as diverse as bacterial chemotaxis, biochemical networks, cells, organisms, and ecosystems, among many others, have been described as robust [START_REF] Stelling | Robustness of cellular functions[END_REF][START_REF] Barkai | Robustness in simple biochemical networks[END_REF][START_REF] Alon | Robustness in bacterial chemotaxis[END_REF][START_REF] Wilmers | Understanding ecosystem robustness[END_REF][START_REF] Wagner | Robustness and evolvability in living systems[END_REF]. For example, a plane is robust when it continues to fly despite severe turbulence (for example thanks to the flexibility of its wings), and a bacterial cell is robust to modifications in genetic regulation when it tolerates a high number of these modifications [START_REF] Isalan | Evolvability and hierarchy in rewired bacterial gene networks[END_REF]. The notion of robustness, however, is very broad, and often elusive. To make it more precise, it has long been emphasized (e.g., [START_REF] Lesne | Robustness: confronting lessons from physics and biology[END_REF]) that two crucial questions must systematically be addressed when talking about robustness: first, what is robust, and second, to what is it robust? In other words, a system is not robust in general; rather, it is robust to a certain kind of perturbations that can occur at a given level (or at a limited number of levels). The most striking applications of the concept of robustness are those where talking about robustness seems directly operative, that is, sheds a new and important light on a given phenomenon, as illustrated by several cases including bacterial chemotaxis [START_REF] Alon | Robustness in bacterial chemotaxis[END_REF].
The aim of the present paper is to ask whether the concept of robustness can illuminate the processes of tissue repair and tissue regeneration, and whether, in turn, tissue repair and tissue regeneration offer a promising basis for better defining the notion of robustness applied to biological phenomena. We are therefore interested in robustness at a particular level, namely that of tissues, and against a particular set of perturbations, namely damage to tissues (physical or chemical aggressions, infectious agents, or "internal" stresses). Our focus on repair and regeneration at the tissue level is justified by the recent wealth of data on this issue [START_REF] Eming | Inflammation and metabolism in tissue repair and regeneration[END_REF], and by the obvious clinical interest of this topic, especially in the age of regenerative medicine [START_REF] Pang | An overview of the therapeutic potential of regenerative medicine in cutaneous wound healing[END_REF], but it is important to keep in mind that repair also occurs at other levels (including the genetic [START_REF] David | Base-excision repair of oxidative DNA damage[END_REF] and cellular [START_REF] Tang | Self-repairing cells: How single cells heal membrane ruptures and restore lost structures[END_REF] levels) in the organism. The idea that repairing oneself is fundamental to the organism's unity and individuality has been suggested at least since the 19th century, particularly by the physiologist Claude Bernard [START_REF] Bernard | Lectures on the phenomena of life common to animals and plants[END_REF]. More recently, the concept of robustness has been commonly associated with repair and regeneration [START_REF] Ninov | Current advances in tissue repair and regeneration: the future is bright[END_REF][START_REF] Galliot | Trends in tissue repair and regeneration[END_REF][START_REF] Bateson | Plasticity, Robustness, Development and Evolution[END_REF][START_REF] Laurent | Immune-Mediated Repair: A Matter of Plasticity[END_REF]. Much remains to be said, however, about how robustness and tissue repair can shed light on each other. Tissue repair and regeneration involve a horde of components and pathways, including structural (e.g., fibroblasts, ECM, etc.) and immunological (e.g., neutrophils, macrophages, etc.) ones [START_REF] Eming | Inflammation and metabolism in tissue repair and regeneration[END_REF][START_REF] Laurent | Immune-Mediated Repair: A Matter of Plasticity[END_REF][START_REF] Gurtner | Wound repair and regeneration[END_REF][START_REF] Eming | Evolution of immune pathways in regeneration and repair: Recent concepts and translational perspectives[END_REF]. For this reason, we propose the concept of the "tissue reconstruction system" (TRS) to embrace all the different aspects of this phenomenon (see Figure 1). Repair is essential for the survival and maintenance of the body [START_REF] Bernard | Lectures on the phenomena of life common to animals and plants[END_REF][START_REF] Gurtner | Wound repair and regeneration[END_REF]. Failures in the repair process can lead to various pathological conditions, including fibrotic diseases, ulcers, hypertrophic and keloid scars, as well as cancers [START_REF] White | Inflammation, wound repair, and fibrosis: reassessing the spectrum of tissue injury and resolution[END_REF][START_REF] Menke | Impaired wound healing[END_REF][START_REF] Serra | From Inflammation to Current and Alternative Therapies Involved in Wound Healing[END_REF].
Repair is continuously occurring, to some degree, in organisms (e.g., skin renewal), in response to their constant exposure to damage of different types (physical, chemical, radiological, etc.). Even though there exists to a large extent a continuum between repair and regeneration [START_REF] Eming | Wound repair and regeneration: mechanisms, signaling, and translation[END_REF], the two phenomena can be considered distinct in several respects. Regeneration describes the capacity to regrow complex organs entirely, generally with the involvement of several cell types [START_REF] Galliot | Trends in tissue repair and regeneration[END_REF][START_REF] Brockes | Comparative aspects of animal regeneration[END_REF][START_REF] Alvarado | Bridging the regeneration gap: genetic insights from diverse animal models[END_REF][START_REF] Birnbaum | Slicing across Kingdoms: Regeneration in Plants and Animals[END_REF]. In mammals, for example, the renewal of the epidermis is a form of repair, because it involves a single cell type (keratinocytes), whereas for the liver one can talk about regeneration, as it involves several cell types (hepatocytes, sinusoidal endothelial cells, stellate cells, Kupffer cells, etc.) [START_REF] Michalopoulos | Liver Regeneration[END_REF]. Many repair mechanisms have been conserved across different taxa, including Drosophila, zebrafish, chick, and mammals [START_REF] Eming | Evolution of immune pathways in regeneration and repair: Recent concepts and translational perspectives[END_REF][START_REF] Eming | Wound repair and regeneration: mechanisms, signaling, and translation[END_REF]. The capacity to regenerate many complex organs such as limbs, however, is found only in a subset of living things [START_REF] Eming | Wound repair and regeneration: mechanisms, signaling, and translation[END_REF][START_REF] Brockes | Comparative aspects of animal regeneration[END_REF]. One important aim of this paper is to better clarify the similarities and differences between repair and regeneration, thanks to the concept of robustness. We explain here how robustness can help better characterize the process of tissue reconstruction, through a description of the specific mechanisms, including plasticity and redundancy, by which robustness is achieved in the TRS. We also demonstrate that different repair-associated disorders (such as fibrosis, ulcers, and cancers) can be understood as the result of deregulated robustness. In turn, we show that the TRS offers a remarkable test case for defining the notion of robustness in a more precise and operational way, and more specifically for distinguishing different forms of robustness (structural vs. functional; preventive vs. corrective; partial vs. complete; dysfunctional robustness vs. robustness as a dysfunction).

What is robustness?

With the increasing attention paid recently to systems biology and complex systems, many living processes or systems have been described as "robust" [START_REF] Csete | Reverse engineering of biological complexity[END_REF][START_REF] Kitano | Biological robustness[END_REF][START_REF] Kitano | Towards a theory of biological robustness[END_REF]. The exact meaning of the word "robustness" often remains, however, elusive.
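One way to pin the notion down, offered here only as a schematic sketch in which the perturbation class P and the tolerance ε are formalization choices that the literature usually leaves implicit, is:

```latex
% Schematic definition: system S, functionality F, perturbation class P,
% tolerance \varepsilon (P and \varepsilon are modelling choices).
\[
S \text{ is robust for } F \text{ against } P
\iff
\forall p \in P:\; \bigl| F(p(S)) - F(S) \bigr| \le \varepsilon
\]
```

On this reading, the two crucial questions raised in the introduction correspond to fixing F (what is robust) and P (to what it is robust), while ε captures how much functional degradation still counts as maintenance.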
The term originated in physics [START_REF] Lesne | Robustness: confronting lessons from physics and biology[END_REF] and engineering [START_REF] Alon | Biological networks: the tinkerer as an engineer[END_REF] (though the engineering-related meaning is itself rooted in the physiology of the 19th and 20th centuries, including the work of Claude Bernard [START_REF] Kourilsky | The natural defense system and the normative self model[END_REF]). (On the relationship between biology and engineering, see [START_REF] Calcott | Engineering and Biology: Counsel for a Continued Relationship[END_REF]). In general, robustness is defined as the maintenance of specific functionalities of the system against internal and external perturbations. Two major requirements for any claim about biological robustness are to determine what exactly the robust system is, and against which type(s) of perturbations it is said to be robust. Importantly, robustness does not amount to conservation or absence of change. Robustness allows changes in the structure and components of the system owing to perturbations, but the key idea is that robustness leads to the maintenance of specific functions. It is likely that robustness is an evolved trait [START_REF] Wagner | Robustness and evolvability in living systems[END_REF][START_REF] Wagner | Robustness, evolvability, and neutrality[END_REF][START_REF] Wagner | Robustness and evolvability: a paradox resolved[END_REF]. Moreover, there are often trade-offs between robustness and other traits. In particular, systems that have evolved to be robust against certain perturbations can be extremely fragile to unexpected perturbations (see, e.g., [START_REF] Kitano | Biological robustness[END_REF][START_REF] Whitacre | Biological Robustness: Paradigms, Mechanisms, and Systems Principles[END_REF]). Despite the fact that, historically, the concept of robustness took root to some extent in the concept of homeostasis, the two notions are different. Homeostasis is about keeping a value constant (or almost constant, within a certain range) (e.g., body temperature in homeothermic animals) [START_REF] Cannon | Organization for Physiological Homeostasis[END_REF][START_REF] Kotas | Homeostasis, Inflammation, and Disease Susceptibility[END_REF]. Robustness, in contrast, is about maintaining a given function F against given types of perturbations (P1, P2, etc.). Examples of robust processes or systems in biology abound [START_REF] Whitacre | Biological Robustness: Paradigms, Mechanisms, and Systems Principles[END_REF]. These include chemotaxis in bacteria [START_REF] Barkai | Robustness in simple biochemical networks[END_REF][START_REF] Alon | Robustness in bacterial chemotaxis[END_REF], the cell cycle in budding yeast [START_REF] Li | The yeast cell-cycle network is robustly designed[END_REF], reliable development despite noise and environmental variations [START_REF] Félix | Robustness and evolution: concepts, insights and challenges from a developmental model system[END_REF], and ecosystem reconstruction after a catastrophic event [START_REF] Wilmers | Understanding ecosystem robustness[END_REF], among many others. As shown by Kitano [START_REF] Kitano | Biological robustness[END_REF], the four main mechanisms that ensure robustness are: system control, alternative mechanisms, modularity, and decoupling. System control consists of negative and positive feedbacks that enable the system to reach robustness against some perturbations.
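As a minimal numerical illustration of such system control, in the spirit of the integral-feedback account of chemotaxis cited just below, the sketch uses purely illustrative parameters rather than measured biological values:

```python
# Toy integral-feedback loop: the controlled variable y returns exactly to
# its set-point after a persistent step disturbance ("perfect adaptation").
# Gains, time step and disturbance size are illustrative only.
setpoint, k, ki, dt = 1.0, 2.0, 0.5, 0.01
y, z = setpoint, 0.0                  # controlled variable and integral term
for step in range(6000):
    disturbance = 0.8 if step > 1000 else 0.0
    error = y - setpoint
    z += ki * error * dt              # integral action accumulates the error
    y += (-k * error - z + disturbance) * dt
print(round(y, 3))                    # ~1.0: the set-point is recovered
```

The integral term ends up absorbing the disturbance entirely, which is why the output, a proxy here for the maintained functionality, is insensitive to the perturbation's persistence: the signature of robustness by feedback.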
An example is bacterial chemotaxis, in which negative feedback plays a major role [START_REF] Yi | Robust perfect adaptation in bacterial chemotaxis through integral feedback control[END_REF]. Robustness can also be realized by alternative (or "fail-safe") mechanisms, that is, multiple routes to achieve a given function, so that the failure of one of these routes can be compensated for by another. This includes redundancy (where identical or nearly identical components can realize a given function) and diversity (where heterogeneous components can realize a given function). There are now many examples of these phenomena in the immune system (e.g., [START_REF] Vély | Evidence of innate lymphoid cell redundancy in humans[END_REF]). Modularity is another important dimension of robustness: robustness is often achieved by modules, that is, flexible sets of components that collectively realize a given function, rather than by individual components [START_REF] Hartwell | From molecular to modular cell biology[END_REF]. Finally, decoupling is the prevention of undesired connections between low-level variations and high-level functionalities. An example is the buffer mechanisms that decouple genetic variations from phenotypic expression, e.g., HSP chaperones [START_REF] Rutherford | Between genotype and phenotype: protein chaperones and evolvability[END_REF]. Here we focus on how the concept of robustness can be applied to the immune system and the TRS across the living world. Robustness has not been widely mentioned in immunology, though some exceptions exist (e.g., [START_REF] Kourilsky | The natural defense system and the normative self model[END_REF][START_REF] Feinerman | Variability and Robustness in T Cell Activation from Regulated Heterogeneity in Protein Levels[END_REF][START_REF] Jonjic | Functional plasticity and robustness are essential characteristics of biological systems: lessons learned from KLRG1-deficient mice[END_REF][START_REF] Mantovani | The chemokine system: redundancy for robust outputs[END_REF]). In particular, Mantovani [START_REF] Mantovani | The chemokine system: redundancy for robust outputs[END_REF] proposed that robustness provides a conceptual framework to understand intriguing aspects of the chemokine system, most prominently its redundancy (see also Mantovani, this special issue). Germain, Altan-Bonnet, and colleagues have explored theoretically and experimentally the mechanisms through which T cells can be both robust and adaptable to variations in protein expression [START_REF] Feinerman | Variability and Robustness in T Cell Activation from Regulated Heterogeneity in Protein Levels[END_REF]. Kourilsky has proposed to understand the immune system as conferring robustness to the whole organism via its capacity to systematically detect and respond to internal as well as external perturbations [START_REF] Kourilsky | The natural defense system and the normative self model[END_REF]. The question raised here is different and complementary, insofar as robustness is examined at the tissue level, and we ask which exact roles the immune system plays in this tissue-level robustness. In what follows, we detail how the TRS works, mainly via five processes, namely plasticity, functional redundancy, constant surveillance, restraint, and dynamic adjustment. We then show how pathologies associated with dysfunctions in tissue repair (e.g., fibrosis, ulcers, and cancer) can be understood as resulting from a deregulation of one or several of these five processes.
We propose that the TRS offers a remarkable test case for defining the notion of robustness in a more precise and operational way, and more specifically for distinguishing different forms of robustness (structural vs. functional; preventive vs. corrective; partial vs. complete; dysfunctional robustness vs. robustness as a dysfunction). Importantly, we will consider both "repair" (defined as the partial reconstruction of an organ or tissue) and "regeneration" (defined as the complete reconstruction of a complex organ or tissue) examples, and explain how the concept of robustness helps clarify the differences between repair and regeneration.

The mechanisms that mediate tissue reconstruction

Tissue reconstruction is a complex and dynamic process, comprising overlapping, highly orchestrated stages - namely inflammation, tissue formation, and tissue remodeling [START_REF] Gurtner | Wound repair and regeneration[END_REF]. Tissue reconstruction involves many molecular and cellular components, which tightly interact. Understanding the interactions between these components and how they are regulated both spatially and temporally is a major aim for anyone interested in tissue repair, regeneration, and repair-associated pathologies. We show here that the TRS exhibits five key features that participate in robustness, and which are shared by many actors involved in the TRS: the TRS is plastic, redundant, under constant surveillance, restrained, and continuously dynamic.

Plasticity in the TRS

First, a major feature of the TRS is the plasticity of the cells involved in tissue reconstruction. The word "plasticity" is used with different and sometimes confusing meanings in the scientific literature. Here we understand cell plasticity in two different and important senses [START_REF] Laurent | Immune-Mediated Repair: A Matter of Plasticity[END_REF]. The first sense is intra-lineage cell plasticity, that is, changes in cell function and phenotype within a given cell lineage - for example, M1 macrophages turning into M2 macrophages. This is sometimes called "functional plasticity" [START_REF] Galli | Phenotypic and functional plasticity of cells of innate immunity: macrophages, mast cells and neutrophils[END_REF]. The second sense is trans-lineage cell plasticity, that is, the switch from one lineage to another - e.g., from macrophages to fibroblasts [START_REF] Chang-Panesso | Cellular plasticity in kidney injury and repair[END_REF]. This can also be called plasticity by "transdifferentiation" [START_REF] Das | Monocyte and macrophage plasticity in tissue repair and regeneration[END_REF] or by "reprogramming" - a phenomenon now known to occur in some non-immune cells [START_REF] Plikus | Regeneration of fat cells from myofibroblasts during wound healing[END_REF]. The actors of plasticity in tissue reconstruction are diverse, from immune to non-immune cells. In what follows, we describe the main cellular actors in the repair process, with a particular emphasis on how they illustrate the phenomenon of plasticity. We show that this plasticity is central to the functioning of the TRS. Far from being "one-shot" weapons, long-living neutrophils - which are central players in tissue reconstruction - are remarkably plastic. Indeed, neutrophils can differentially switch phenotypes, and display distinct subpopulations under different microenvironments [START_REF] Yang | The Diverse Biological Functions of Neutrophils, Beyond the Defense Against Infections[END_REF].
At the inflammatory stage of the repair process, neutrophils can play either a pro-resolving or an anti-resolving role. In addition to this intra-lineage plasticity, repair-associated neutrophils are capable of trans-lineage plasticity (plasticity by transdifferentiation) [START_REF] Balta | Qualitative and quantitative analysis of PMN/T-cell interactions by InFlow and super-resolution microscopy[END_REF][START_REF] Takashima | Neutrophil plasticity: acquisition of phenotype and functionality of antigen-presenting cell[END_REF][START_REF] Matsushima | Neutrophil differentiation into a unique hybrid population exhibiting dual phenotype and functionality of neutrophils and dendritic cells[END_REF][START_REF] Hampton | The lymph node neutrophil[END_REF]. Type 1 macrophages (M1) drive the early inflammatory responses that lead to tissue destruction, whereas type 2 macrophages ("M2" or "alternatively activated reparative macrophages") play a central role in wound healing [START_REF] Biswas | Macrophage plasticity and interaction with lymphocyte subsets: cancer as a paradigm[END_REF][START_REF] Mantovani | Macrophage plasticity and polarization in tissue repair and remodelling[END_REF][START_REF] Nahrendorf | The healing myocardium sequentially mobilizes two monocyte subsets with divergent and complementary functions[END_REF][START_REF] Wynn | Macrophages in Tissue Repair, Regeneration, and Fibrosis[END_REF][START_REF] Wynn | Quantitative assessment of macrophage functions in repair and fibrosis[END_REF][START_REF] Mills | Anatomy of a Discovery: M1 and M2 Macrophages[END_REF]. The generation of a pro-type 2 microenvironment gradually leads to the switch from inflammatory to pro-repair macrophages. These cells promote tissue repair by producing pro-reparative cytokines and participate in a pro-type 2 microenvironment. A wide range of macrophage subtypes exists [START_REF] Das | Monocyte and macrophage plasticity in tissue repair and regeneration[END_REF][START_REF] Mantovani | Macrophage plasticity and polarization in tissue repair and remodelling[END_REF][START_REF] Jenkins | Local macrophage proliferation, rather than recruitment from the blood, is a signature of TH2 inflammation[END_REF][START_REF] Chávez-Galán | Much More than M1 and M2 Macrophages, There are also CD169+ and TCR+ Macrophages[END_REF]. Efficient tissue repair requires inflammatory macrophages, tissue repair macrophages, and resolving macrophages (producers of resolvins, IL-10 and TGF-β) [START_REF] Das | Monocyte and macrophage plasticity in tissue repair and regeneration[END_REF][START_REF] Wynn | Macrophages in Tissue Repair, Regeneration, and Fibrosis[END_REF][START_REF] Wang | Molecular mechanisms that influence the macrophage m1m2 polarization balance[END_REF]. Beyond intra-lineage plasticity, macrophages might participate actively in the tissue-remodeling phase of the repair process by transdifferentiation into other cell types, notably endothelial cells [START_REF] Yan | Vascular endothelial growth factor modified macrophages transdifferentiate into endothelial-like cells and decrease foam cell formation[END_REF].
Innate Lymphoid Cells (ILCs) are a recently discovered family of immune cells that includes three subsets: ILC1, ILC2, and ILC3 [START_REF] Eberl | The brave new world of innate lymphoid cells[END_REF][START_REF] Spits | Innate lymphoid cells--a proposal for uniform nomenclature[END_REF][START_REF] Vivier | The evolution of innate lymphoid cells[END_REF]. ILC2-secreted amphiregulin, a protein shown to orchestrate tissue repair [START_REF] Zaiss | Emerging functions of amphiregulin in orchestrating immunity, inflammation, and tissue repair[END_REF], promotes wound healing by acting directly on fibroblasts, leading to the deposition of extracellular matrix (ECM). ILC responses to different stimuli allow intra-lineage plasticity between the different subsets [START_REF] Ohne | IL-1 is a critical regulator of group 2 innate lymphoid cell function and plasticity[END_REF][START_REF] Silver | Inflammatory triggers associated with exacerbations of COPD orchestrate plasticity of group 2 innate lymphoid cells in the lungs[END_REF]. This plasticity between different ILC subtypes might allow for rapid innate immune responsiveness in repair [START_REF] Almeida | Innate lymphoid cells: models of plasticity for immune homeostasis and rapid responsiveness in protection[END_REF][START_REF] Zhang | Cutting Edge: Notch Signaling Promotes the Plasticity of Group-2 Innate Lymphoid Cells[END_REF]. Overall, cell plasticity is a pivotal process by which tissue reconstruction is achieved. This is confirmed by the fact that, as detailed below, inappropriate realizations of cellular plasticity (excess or insufficiency) may lead to various disorders.

Functional redundancy in the TRS

Functional redundancy is another important feature of the TRS. Functional redundancy describes a situation in which different elements have similar functions or similar effects on a trait [START_REF] Whitacre | Biological Robustness: Paradigms, Mechanisms, and Systems Principles[END_REF]. Though some forms of functional redundancy occur in every organism as part of normal functioning, the phenomenon has often been observed in pathological contexts, where it appears that an organism deficient in one cell type can "compensate" for this deficiency thanks to other cell types, molecules, or pathways [START_REF] Edelman | Degeneracy and complexity in biological systems[END_REF]. The TRS often displays "degeneracy", which refers to the existence of structurally diverse but functionally similar components [START_REF] Edelman | Degeneracy and complexity in biological systems[END_REF]. Overall, the TRS is characterized by a high level of redundancy, even though some components and pathways seem to be pivotal in the reconstruction process. ILCs have potent immunological functions in experimental conditions, but their contributions to immunity in natural conditions are unclear. It has been shown that SCID patients with IL2RG and JAK3 mutations, who are deficient in ILCs, had no particular susceptibility to disease [START_REF] Vély | Evidence of innate lymphoid cell redundancy in humans[END_REF]. Thus, ILCs appear to be dispensable in humans who have a functional adaptive immune system, at least in the context of modern medicine and hygiene conditions [START_REF] Vély | Evidence of innate lymphoid cell redundancy in humans[END_REF][START_REF] Rankin | Complementarity and redundancy of IL-22-producing innate lymphoid cells[END_REF].
Functional redundancy justifies speaking of an overall type 1 or type 2 immune response, rather than speaking more restrictively of type 1 or type 2 neutrophils, macrophages, or T cells. These cells often produce the same types of molecules (albeit sometimes with different temporal patterns). This redundancy is not only important for maintaining robustness against perturbations; it also creates feedback loops (and thereby a virtuous or vicious circle, depending on the situation), participating in the establishment of a local microenvironment with particular features. Besides this immunological redundancy, immune cells participate, together with fibroblasts, pericytes, and endothelial cells, in the secretion of structural molecules such as matrix metalloproteinases (MMPs). The relative contribution of macrophages and other immune cells to tissue reconstruction, compared to that of the aforementioned structural cells, might depend on the nature of the tissue and of the injury.

Constant surveillance

Tissue reconstruction is an active process in which some actors are on constant standby. Surveillance is of major importance at the level of DNA repair, as DNA lesions occurring during reprogramming are monitored by a surveillance mechanism called the zygotic checkpoint [START_REF] Ladstätter | A Surveillance Mechanism Ensures Repair of DNA Lesions during Zygotic Reprogramming[END_REF]. At the tissue level, some cells, including various types of immune cells [START_REF] Fan | Hallmarks of Tissue-Resident Lymphocytes[END_REF], are highly specialized in the surveillance of damage. Of crucial importance are tissue-resident sentinel cells, as they are present and on standby before any damage occurs. ILCs are found preferentially at epithelial barrier surfaces such as the skin, lungs, and gut, where they protect against infection and maintain the integrity of the barriers. ILCs are tissue-resident sentinels enriched at mucosal surfaces. They exert constant surveillance over epithelia and maintain a complex crosstalk with their microenvironment. They are highly involved in tissue repair through their sentinel position and the cytokines they produce [START_REF] Klose | Innate lymphoid cells as regulators of immunity, inflammation and tissue homeostasis[END_REF][START_REF] Rak | IL-33-Dependent Group 2 Innate Lymphoid Cells Promote Cutaneous Wound Healing[END_REF]. Different tissues often have their preferential sentinels, such as NK cells in the liver or Langerhans cells in the skin. Cells of both the innate and the adaptive immune system are involved in this surveillance. In particular, tissue-resident memory T cells (TRMs), which reside in tissues without recirculating through the blood or lymph and constitute a transcriptionally and phenotypically unique T cell lineage, have been shown to be key guardians against viral infections [START_REF] Mueller | Tissue-resident memory T cells: local specialists in immune defence[END_REF]. Cells traditionally seen as non-immune, such as epithelial cells (ECs), play an important role in this collaborative surveillance process. They line body surface tissues and provide a physicochemical barrier to the external environment. This barrier is not mere passive mechanical protection. Frequent microbial and non-microbial challenges cause activation of ECs, with release of cytokines and chemokines as well as alterations in the expression of cell-surface ligands.
Epithelial stress is rapidly sensed by tissue-resident immune cells, which can directly interact with self-moieties on ECs and initiate both local and systemic immune responses. ECs are thus key drivers of immune surveillance at body surface tissues [START_REF] Dalessandri | Beneficial Autoimmunity at Body Surfaces -Immune Surveillance and Rapid Type 2 Immunity Regulate Tissue Homeostasis and Cancer[END_REF].

Restraint of the TRS

Detecting and responding to damage is so central to the organism's survival that the TRS is always on alert, ready to be triggered. At the same time, this system also constitutes a potential threat to the organism (inflammation, tissue formation, and tissue remodeling can all go awry, with potentially dramatic consequences), and must therefore be constantly kept under control. Numerous cells restrain the TRS through negative feedback, active production of pro-resolving molecules, and other dynamic mechanisms. These cells are important at all stages, but they are particularly crucial for the pro-resolving phase that follows inflammation. Pro-resolving neutrophils are able to: (i) produce several pro-resolving mediators (such as lipoxins); (ii) form neutrophil extracellular traps (NETs) and aggregated NETs through a cell-density-dependent sensing mechanism, which dismantle the pro-inflammatory gradient by degrading inflammatory cytokines and chemokines; and (iii) store and release the pro-resolving protein annexin A1 [START_REF] Jones | The role of neutrophils in inflammation resolution[END_REF]. Inflammation resolution is partly mediated by the clearance of apoptotic neutrophils by macrophages through efferocytosis [START_REF] Stark | Phagocytosis of apoptotic neutrophils regulates granulopoiesis via IL-23 and IL-17[END_REF]. Non-apoptotic neutrophils can leave the injury site by reverse transmigration. The recently described resolving macrophages (producers of resolvins, IL-10, and TGF-β) are important actors of repair regulation. In mice, some regulatory T cells (Tregs) are able to produce amphiregulin, favoring the resolving phase of the inflammation process [START_REF] Burzyn | A special population of regulatory T cells potentiates muscle repair[END_REF]. Depletion of muscle Tregs has a profound impact on muscle regeneration, with loss of regenerative fibers, collagen deposition, and fibrosis, leading to a disorganized tissue structure. In the absence of Tregs, the effector T cell infiltrate in the injured muscle increases, and the switch from inflammatory to anti-inflammatory macrophages is diminished.

Dynamic adjustment of the TRS

The TRS is highly dynamic, involving the large-scale recruitment of various cells, movements within a three-dimensional matrix, and much back and forth between steps that are not fixed and often overlap. The dynamic character of the TRS is visible at the level of the recruited cells, but also at the level of the resident cells. Standby periods are not periods of complete rest: resident cells are never completely motionless. Moreover, cells are constantly replaced in a dynamic process. Tissues are continuously exposed to potentially hazardous environmental challenges in the form of inert material and microbes.
In the epidermis, for example, Langerhans cells (LCs) form a dense network of cells capable of capturing antigens and migrating to the lymph node after crossing the basement membrane into the dermis, and they are able to promote either tolerance or immune responses [START_REF] Seneschal | Human epidermal Langerhans cells maintain immune homeostasis in skin by activating skin resident regulatory T cells[END_REF]. The velocity of migration is partly regulated by the microenvironment: skin Tregs migrate much more slowly than effector CD4+ T cells, although acute inflammation results in a rapid increase in their motility [START_REF] Mueller | Tissue-resident T cells: dynamic players in skin immunity[END_REF]. Gradients of chemokines largely drive cell recruitment when damage occurs. CD14+ monocytes and neutrophils are very mobile cells, strongly and promptly recruited in case of injury [START_REF] Achachi | UV Radiation Induces the Epidermal Recruitment of Dendritic Cells that Compensate for the Depletion of Langerhans Cells in Human Skin[END_REF]. The recruitment of neutrophils during the inflammatory phase depends on a tightly regulated communication system based on the balance between CXC chemokines and CXC receptors [START_REF] Nair | Study Investigators, Safety and efficacy of a CXCR2 antagonist in patients with severe asthma and sputum neutrophils: a randomized, placebo-controlled clinical trial[END_REF]. Injury triggers the production of G-CSF, which shifts the dominant signaling from CXCR4 to CXCR2 in the bone marrow microenvironment, leading to the release of more mature neutrophils into the peripheral blood stream [START_REF] Silvestre-Roig | Neutrophil heterogeneity: implications for homeostasis and pathogenesis[END_REF]. Functional aberrancy in these systems leads to impaired wound healing [START_REF] Su | Chemokine Regulation of Neutrophil Infiltration of Skin Wounds[END_REF]. In a collaborative pathway, the release by neutrophils of chemoattractant factors, such as lactoferrin, attracts monocytes and activates macrophages [START_REF] Frangogiannis | Regulation of the inflammatory response in cardiac repair[END_REF]. A counterpart to recruitment is obviously needed to ensure the robustness of a tissue, or else an overabundance of cells could lead to tissue destruction. Efferocytosis, transmigration, and specific apoptosis allow recruited cells to be cleared once the damage has been dealt with. As the rest of the paper will show, two kinds of consequences follow from this analysis of the five key features of the TRS. First, it offers an important basis for rethinking some tissue reconstruction-associated pathologies as dysfunctions of robustness. Second, it offers a test case to assess the usefulness of the notion of robustness in physiological and pathological conditions, and leads to distinguishing different forms of robustness.

Dysfunctions of the tissue reconstruction system

It has been suggested by Kitano and others that the concept of robustness can shed light on certain pathological processes [START_REF] Kitano | Biological robustness[END_REF].
Pathologies could result from robustness as a dysfunction (the process under consideration is robust, but this robustness is detrimental to the organism, as happens for example in AIDS or some cancers, where the robustness of a system is "hijacked" [START_REF] Kitano | Biological robustness[END_REF][START_REF] Kitano | Biological robustness in complex host-pathogen systems[END_REF][START_REF] Kitano | Cancer as a robust system: implications for anticancer therapy[END_REF]) or from a dysfunction of robustness, that is, a rupture of robustness (the process should be robust, but is not). This approach applies very well to the dysfunctions of the TRS. The mechanisms that mediate tissue reconstruction and ensure its robustness are constantly challenged. These mechanisms are sometimes overwhelmed, with various consequences depending on the situation: a rupture of robustness, the promotion of disease through robustness-associated mechanisms, or an excess of robustness. In each case, the final consequence is a pathological process. Through concrete examples (ulcers, fibrosis, and cancers), we will illustrate these different threats to the capacity of the TRS to ensure robustness, emphasizing in each case exactly which mechanisms are challenged (see Table 1, which presents several additional examples).

Ulcers, or the rupture of robustness

Pathological situations of insufficient repair, such as ulcers, show that the TRS mechanisms ensuring robustness can be overwhelmed. Since the robustness of the tissue can be jeopardized, it is important to analyze the various components listed earlier. The value of a more detailed analysis of each component of robustness is twofold: it makes it possible to precisely identify vulnerabilities, which vary depending on the clinical situation, and to work out innovative therapeutic strategies. As we saw, cell plasticity is a crucial dimension of the TRS, and this is especially true for neutrophils. This is confirmed by the fact that the incapacity of neutrophils to switch plastically from one state to the other can contribute to ulcers, e.g., skin or gastric ulcers. Experimentally blocking neutrophils in a pro-inflammatory state, thereby making it impossible to tune the response toward a pro-resolving phase, directly leads to chronic inflammation and deregulation of the TRS [START_REF] Whitmore | Cutting Edge: Helicobacter pylori Induces Nuclear Hypersegmentation and Subtype Differentiation of Human Neutrophils In Vitro[END_REF], while the reintroduction of highly plastic cells into damaged tissues can overcome this defect. Understanding that in this case the ulcer is a rupture of robustness due to insufficient cellular plasticity makes it possible to consider completely new therapeutic options. Several studies in animal models showed that the application of adipose tissue-derived stem cell sheets to mucosal or skin wounds accelerates wound healing and decreases the degree of fibrosis [START_REF] Perrod | Cell Sheet Transplantation for Esophageal Stricture Prevention after Endoscopic Submucosal Dissection in a Porcine Model[END_REF][START_REF] Kato | Allogeneic Transplantation of an Adipose-Derived Stem Cell Sheet Combined With Artificial Skin Accelerates Wound Healing in a Rat Wound Model of Type 2 Diabetes and Obesity[END_REF]. While inflammation has to be regulated to ensure the completion of the TRS, a failure in that process can lead to a rupture of robustness.
A deficiency of efferocytosis has been identified as a causative agent of sterile chronic granulomatous disease in mice [START_REF] Zeng | An efferocytosis-induced, IL-4-dependent macrophage-iNKT cell circuit suppresses sterile inflammation and is defective in murine CGD[END_REF]. In chronic ulcers, favoring the resolving phase (e.g., through efferocytosis, pro-Treg therapeutics, or resolving compounds) could be an innovative strategy [START_REF] Chan | Interleukin 2 Topical Cream for Treatment of Diabetic Foot Ulcer: Experiment Protocol[END_REF][START_REF] Reis | Lipoxin A4 encapsulated in PLGA microparticles accelerates wound healing of skin ulcers[END_REF][START_REF] Lohmann | Glycosaminoglycan-based hydrogels capture inflammatory chemokines and rescue defective wound healing in mice[END_REF]. Even though defects of inflammatory regulation are clearly involved in the deregulation of the TRS, therapeutic avenues to counteract these defects are still in their infancy, and mostly limited to animal models. A better understanding of the crucial place of this mechanism in the global robustness of the TRS should raise interest in therapeutics targeting regulation. In ulcers, the loss of epithelial cells disturbs the ongoing surveillance of the TRS. In the eye, inflammation of the cornea damages this protective barrier. Given that the cornea is an avascular tissue and contains few immune cells, corneal resident cells function as sentinel cells as well as immune modulators during corneal inflammation. They are able to sense bacterial infection through Toll-like receptor (TLR)-mediated detection. As a consequence, a loss of substance (i.e., a very significant injury) could lead to the disappearance of key first-line sentinel cells, normally responsible for the recruitment of other cells crucial to the repair process [START_REF] Allen | The Silent Undertakers: Macrophages Programmed for Efferocytosis[END_REF][START_REF] Fukuda | Corneal Fibroblasts as Sentinel Cells and Local Immune Modulators in Infectious Keratitis[END_REF]. Other resident cells are involved in this mechanism (see Table 1). Targeting sentinels could constitute a new therapeutic avenue in the treatment of chronic ulcers. Finally, in an ulcer the dynamic flow of cells to the damaged site is reduced, so that new cells cannot arrive from upstream and revitalize the system. Promoting the migration and proliferation of cells could accelerate wound healing [START_REF] Liu | CXCR4 antagonist delivery on decellularized skin scaffold facilitates impaired wound healing in diabetic mice by increasing expression of SDF-1 and enhancing migration of CXCR4-positive cells, Wound Repair Regen[END_REF][START_REF] Kai | Accelerated Wound Healing on Skin by Electrical Stimulation with a Bioelectric Plaster[END_REF]. Each of the five components of robustness can thus be compromised, depending on the type of ulcer. For clinicians, thinking according to our classification and identifying which mechanism is deficient can therefore very concretely change their therapeutic management.

Fibrosis, or the excess of robustness

Keloid and hypertrophic scars can also be seen as the result of a dysfunction in the fundamental mechanisms of the TRS. One could consider fibrosis as a kind of hypertrophic scar, and as such fibrosis could follow from a deregulated TRS as well.
In a normal repair cycle, the resolution of damage-induced inflammation allows the system to rebuild itself efficiently. In contrast, the absence of resolution means the persistence of inflammation and also, especially in fibrosis, a disconnection between the levels of resolution and remodeling. An excess of plasticity can also be pathological. For example, epithelial-mesenchymal transition (EMT) reflects a high level of cell plasticity that is essential during embryogenesis and wound healing, but EMT can be aberrantly regulated in fibrosis [START_REF] Skrypek | Epithelial-to-Mesenchymal Transition: Epigenetic Reprogramming Driving Cellular Plasticity[END_REF][START_REF] Gasparics | Alterations in SCAI Expression during Cell Plasticity, Fibrosis and Cancer[END_REF]. Cell plasticity can also be a hurdle for cell therapy. A pro-fibrotic microenvironment results in systematic M2 polarization even if macrophages of another type are injected. In contrast, the infusion of stabilized pro-resolving macrophages is associated with reduced kidney interstitial fibrosis and inflammation, as well as preservation of the phenotype and functions of the macrophages [START_REF] Guiteras | Macrophage Overexpressing NGAL Ameliorated Kidney Fibrosis in the UUO Mice Model[END_REF]. Thus, precise knowledge of the pathophysiology of the condition under study is crucial for understanding whether a higher or a lower plasticity is needed. The cause of fibrosis is sometimes attributed to the persistence of damage triggers such as chronic infection. Nevertheless, in hepatitis C, it is the inadequacy of the TRS response, rather than the persistence of the infection, that is at stake [START_REF] Rios | Chronic hepatitis C liver microenvironment: role of the Th17/Treg interplay related to fibrogenesis[END_REF]. Tregs and pro-resolving cells have also been suspected to be involved in more general fibrotic processes, such as systemic sclerosis (SSc) [START_REF] Ugor | Increased proportions of functionally impaired regulatory T cell subsets in systemic sclerosis[END_REF][START_REF] Bhattacharyya | Endogenous ligands of TLR4 promote unresolving tissue fibrosis: Implications for systemic sclerosis and its targeted therapy[END_REF]. From this point of view, promoting the resolution of inflammation could be considered a key aim in reversing fibrosis [START_REF] Brennan | Specialized pro-resolving mediators in renal fibrosis[END_REF][START_REF] Grabiec | The role of airway macrophages in apoptotic cell clearance following acute and chronic lung inflammation[END_REF]. Constant monitoring is an essential element of the responsiveness of the TRS, and the fact that it is provided by resident cells guarantees a prompt response when damage occurs. However, in some cases, including fibrosis, this surveillance can be over-stimulated and associated with an overly sustained response. As described before, innate immune signaling via TLRs is a key driver of the persistent fibrotic response. Chronic signaling on resident mesenchymal cells underlies the switch from a self-limited repair response to the non-resolving pathological fibrosis characteristic of systemic sclerosis. Limiting the responsiveness of resident cells to innate stimulation could be of interest to prevent fibrotic processes [START_REF] Bhattacharyya | Endogenous ligands of TLR4 promote unresolving tissue fibrosis: Implications for systemic sclerosis and its targeted therapy[END_REF].
Resident cells themselves can also be responsible for excessive stimulation without any clear external trigger [START_REF] Cheng | Guards at the gate: physiological and pathological roles of tissue-resident innate lymphoid cells in the lung[END_REF][START_REF] Hams | IL-25 and type 2 innate lymphoid cells induce pulmonary fibrosis[END_REF][START_REF] Li | Skin-Resident Effector Memory CD8+CD28-T Cells Exhibit a Profibrotic Phenotype in Patients with Systemic Sclerosis[END_REF]. A static TRS cannot result in normal repair: different, more or less intricate phases must follow one another. Nonetheless, an excess of migration of pro-fibrotic cells into the tissue can be detrimental to a normal repair process. This happens, for example, in the lungs with fibrocytes. These cells enter the lungs in response to their chemoattractant CXCL12 and differentiate into fibroblasts or myofibroblasts, leading to an excessive deposition of collagen-rich extracellular matrix. It has been shown that inhibiting the flow of fibrocytes to the lungs with a peptide called R1R2 attenuates pulmonary fibrosis by reducing the invasion of fibrocytes through basement membrane-like proteins [START_REF] Chiang | R1R2 peptide ameliorates pulmonary fibrosis in mice through fibrocyte migration and differentiation[END_REF].

Cancer, or the hijacking of robustness

Cancerous tumors have been related to deregulated repair by Dvorak, who describes them as "wounds that do not heal" [START_REF] Dvorak | Tumors: wounds that do not heal. Similarities between tumor stroma generation and wound healing[END_REF]. It is now well established that an inflammatory microenvironment promotes cancer [START_REF] Mantovani | Cancer-related inflammation[END_REF]. It has also been suggested that the formation and maintenance of a cancerous tumor could be seen as a robust process [START_REF] Kitano | The theory of biological robustness and its implication in cancer[END_REF]. Here we consider cancer as evolving from a damaged tissue, where the TRS could act to prevent the expansion of the injury and promote repair. The cancerous tumor, to drive its own development, hijacks some of the properties of the TRS that normally ensure robustness in physiological conditions. EMT, introduced above, is an important process in embryonic development and fibrosis, but also in cancer metastasis. The activation of EMT in cancer allows cells to acquire migratory, invasive, and stem-like properties. SCAI has been characterized as a tumor suppressor that inhibits metastasis in different human cancer cells, and its expression is thought to be reduced in some tumors. SCAI expression decreases in a model of endothelial-mesenchymal transition, which suggests that it could be important for cell plasticity. Nevertheless, its role in cancer remains to be further investigated, as its expression could be associated with high or low progression of the tumor depending on the type of cancer [START_REF] Gasparics | Alterations in SCAI Expression during Cell Plasticity, Fibrosis and Cancer[END_REF]. Macrophage polarization could influence resistance to immune checkpoint therapy. The plasticity of macrophages is exploited by cancerous tumors, and targeting this plasticity could be of interest to increase the response to immunotherapies such as ipilimumab [START_REF] Soto-Pantoja | Unfolded protein response signaling impacts macrophage polarity to modulate breast cancer cell clearance and melanoma immune checkpoint therapy responsiveness[END_REF].
Redundancy may explain cancer resistance to certain treatments. Recently developed immunotherapies do not target the tumor cells as such; instead, they promote local immune responses in the tumor microenvironment, which has important consequences for key immunological actors of the TRS. The redundancy of the TRS becomes central in these conditions [START_REF] Lavi | Redundancy: a critical obstacle to improving cancer therapy[END_REF]. For example, IL-6 belongs to a family of cytokines with highly redundant functions, which use the glycoprotein 130 chain for signal transduction. It has an important role in the pathophysiology of multiple myeloma, where it supports the growth and survival of the malignant plasma cells in the bone marrow. Because of this redundancy, targeting IL-6 is highly difficult. Antibodies against glycoprotein 130 constitute a better option, as they can overcome this redundancy [START_REF] Burger | Due to interleukin-6 type cytokine redundancy only glycoprotein 130 receptor blockade efficiently inhibits myeloma growth[END_REF]. Insofar as cancer may be seen as both a cause and a consequence of tissue damage, cancer cells can activate the TRS. In particular, cancerous tumors can use the restraint mechanisms of the TRS described above to induce a type of immune tolerance that works to their own advantage. Tumor-associated macrophages (TAMs) are one of the main actors of this phenomenon. Different therapeutic strategies have been proposed to address this problem, such as the suppression of TAM recruitment, their depletion, the switching of M2 TAMs into antitumor M1 macrophages, and the inhibition of TAM-associated molecules [START_REF] Santoni | Triple negative breast cancer: Key role of Tumor-Associated Macrophages in regulating the activity of anti-PD-1/PD-L1 agents[END_REF]. Immune surveillance can be considered insufficient in cancer: the TRS lets cancer cells grow and develop as if it were incapable of "seeing" them. Resident memory CD8+ T cells (TRMs) represent a recently described subset of long-lived memory T cells that remain in the tissues, do not recirculate, and are therefore very important actors in immunosurveillance. It has been shown that TRMs are present in human non-small cell lung tumor tissues, and that their frequency is more strongly correlated with overall survival than that of other infiltrating immune cells. In that case, the cancer misleads the immunosurveillance system, which suggests that strategies increasing the number of TRMs or activating them, such as vaccines, could be developed following this concept [START_REF] Granier | Tissue-resident memory T cells play a key role in the efficacy of cancer vaccines[END_REF]. The formation of a premetastatic niche in distant organs, prior to the arrival of metastatic cells, could be an important step in the metastatic cascade. This phenomenon, which points to a highly dynamic process on the part of cancer cells, can be preceded by neutrophil migration and recruitment. The dynamic property of the TRS is thus used to prepare the ground for tumor cell engraftment in the parenchyma [START_REF] Donati | Neutrophil-Derived Interleukin 16 in Premetastatic Lungs Promotes Breast Tumor Cell Seeding[END_REF]. Overall, the concept of robustness helps us better understand TRS-associated pathologies, either as a deficiency in the fundamental processes by which robustness is normally realized (plasticity, etc.), or as an emerging, local form of robustness that is detrimental to the organism.
Conclusion: the virtues of thinking about tissue reconstruction in terms of robustness

In light of the various physiological and pathological examples examined in this paper, we propose that it is extremely fruitful to conceive of the tissue reconstruction system in terms of robustness, for three main reasons. First, the recognition by Kitano and others [START_REF] Kitano | Biological robustness[END_REF] of different robustness-promoting mechanisms (system control, alternative mechanisms, modularity, decoupling) constitutes a useful conceptual framework for describing the TRS and its dysfunctions in pathological situations. For example, the plasticity and redundancy of immune components within the TRS have been described by scientists supportive of the concept of robustness [START_REF] Mantovani | The chemokine system: redundancy for robust outputs[END_REF][START_REF] Mantovani | Macrophage plasticity and polarization in tissue repair and remodelling[END_REF], and it seems likely that continuing to apply this concept will reveal even more plasticity and redundancy. Moreover, thinking in terms of robustness helps us understand that even a situation that may seem static, such as skin renewal, is in fact the outcome of a highly dynamic, continuously ongoing process, and that it is pivotal to study in detail the mechanisms ensuring this process. It also suggests that the "default state" of the TRS is to be on alert, which means that tissue reconstruction is always active, though under constant restraint. As soon as the brake is lifted, the whole process of tissue repair (i.e., inflammation, tissue formation, and tissue remodeling) is triggered, which guarantees a high capacity to react to various and often inevitable forms of damage. Of course, this constant activation is energetically costly, but one should keep in mind that it is a low-level activation, and that it is probably essential for survival. From a pathological point of view, the emphasis on the redundancy of the TRS, for example, is of the utmost importance. It shows that it is often entirely inadequate to hope for important benefits from intervening on just one actor or pathway. Indeed, in many cases, although some cells or pathways seemed crucial to a given process in vitro, their inhibition in vivo does not systematically lead to the pathological phenotype, because of the redundancy of some components and pathways. In other words, some pathological conditions reveal the role of certain cells or pathways that are not normally indispensable, but become central when other components are missing. For example, the alarmins IL-25, IL-33, and TSLP have a high level of redundancy, which makes anti-fibrotic treatments very difficult to develop. Blocking a single pathway is most often ineffective, so it is more promising to consider modulating the response at a very early stage, or to identify common pathways that could be targeted. Moreover, acting on one of the mechanisms of robustness while others play a more significant role is not effective in achieving repair. This is probably what happens when one treats fibrosis with immunosuppressive therapy in cases where restoring cellular plasticity would in fact be more adequate. As a general rule, then, one should never draw hasty conclusions about whether or not some actors have an important role in the TRS before having tested them in real-life pathological conditions.
Second, the example of the TRS can, in turn, help us make some crucial conceptual distinctions about robustness. On the basis of the examples explored here, one can indeed distinguish functional vs. structural robustness, partial vs. complete robustness, and corrective vs. preventive robustness (Figure 2). Functional robustness, in the case of the TRS, means that tissue function (or, at least, one tissue function) is restored, but not tissue structure. For example, after a significant skin injury, a scar will form, which will restore the protective function of the skin, but the initial structure of the skin will not be restored. In contrast, regeneration often leads to the restoration of both the structure and the function of the tissue. For example, adult zebrafish fins, including their complex skeleton, regenerate exactly to their original form within two weeks after amputation. Importantly, some forms of complete tissue regeneration can also be observed in mammalian embryos, but this capacity is subsequently lost for most tissues (the most significant exception in humans is the liver, which can indeed regenerate, though it does not always recover its initial structure). Along similar lines, it is important to emphasize that robustness to a given challenge and at a certain level can be more or less effective. Robustness is partial when tissue function and/or structure is not completely restored, as illustrated by most cases of tissue repair in mammals. Robustness is complete when tissue function and/or structure is entirely restored, as illustrated by the already mentioned fin regeneration in zebrafish, by limb regeneration in many amphibians [START_REF] Brockes | Amphibian Limb Regeneration: Rebuilding a Complex Structure[END_REF], or by tissue regeneration in many echinoderms [START_REF] Carnevali | Regeneration in Echinoderms: repair, regrowth, cloning[END_REF]. Furthermore, robustness can be corrective or preventive. It is corrective when it consists in the active restoration of a strongly disturbed state. For example, tissue repair or regeneration after significant damage is a form of corrective robustness, because it follows a major perturbation and involves, as we saw, a complex, dynamic, and regulated interplay of many different components. In contrast, robustness is preventive when it occurs in the absence of a major perturbation, while minimizing the risk of such a perturbation and its detrimental consequences. For example, epithelial repair occurs continuously in the body, which requires an extremely rich orchestration of events [START_REF] Peterson | Intestinal epithelial cells: regulators of barrier function and immune homeostasis[END_REF]. This preventive robustness helps ensure that the skin is always sufficiently "sealed off" and at the same time sufficiently smooth to achieve its functions. When this process is interrupted, for example in ulcers, the organism is at a high risk of being damaged and invaded by pathogens or toxic substances. Of course, there will be a grey zone here, because it is not always clear whether a perturbation is major or not, and different tissues are likely to perceive perturbations differently. For example, the liver is constantly exposed to toxic chemicals that could endanger the rest of the body, and its regenerative capacities are certainly evolutionarily related to this particular exposure [START_REF] Michalopoulos | Liver Regeneration[END_REF].
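As a purely illustrative aid (a sketch of ours, not part of the biological argument), these three distinctions can be encoded as a small data structure; the classifications below simply follow the four cases summarized in Figure 2:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RobustnessProfile:
    scope: str   # "structural" or "functional": what is restored
    degree: str  # "partial", "almost complete", or "complete"
    timing: str  # "corrective" (after major damage) or "preventive" (continuous)

# The four illustrative cases discussed in the text and in Figure 2
cases = {
    "severe skin injury (scarring)":       RobustnessProfile("functional", "partial", "corrective"),
    "liver regeneration in mammals":       RobustnessProfile("functional", "almost complete", "corrective"),
    "continuous renewal of epithelia":     RobustnessProfile("functional", "complete", "preventive"),
    "limb regeneration in the salamander": RobustnessProfile("functional", "complete", "corrective"),
}

for name, profile in cases.items():
    print(f"{name}: {profile.timing}, {profile.degree}, {profile.scope}")
```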
An additional distinction seems important to better grasp the role of robustness in pathological contexts. Dysfunctions in the robustness of the TRS can indeed be understood along two different lines. In some cases, the tissue fails to be robust, presumably because one or several important components or pathways of the TRS are not working properly. This is what we call a dysfunction of robustness. For example, we saw that the incapacity to realize cell plasticity can sometimes lead to the failure of tissue reconstruction. Yet, in other cases, the tissue is robust, but this robustness is, in this specific context, detrimental to the organism. This is what we call robustness as a dysfunction. For example, a tumor can constitute a robust tissue, which is well vascularized, nourished, and constantly repaired, often via the co-optation of classical physiological mechanisms to the benefit of the tumor itself. Here again, this distinction between a dysfunction of robustness and robustness as a dysfunction might prove useful in other contexts, beyond the case of the TRS. A third and final consequence concerns the very understanding of immunity. Since the beginnings of immunology, immunity has been conceived primarily as a form of defense, most often against pathogens. Yet, if the perspective offered in this paper is correct, then immunity needs to be re-defined within a much wider context. Immune processes, we submit, concern not only defense, but also the construction (development) and reconstruction (constant repair; occasional repair after significant damage; regeneration) of the organism. Indeed, a typical immune system in nature is constantly busy surveying, renewing, and repairing the body. This is not to say, obviously, that immune defense is not important, or that it has not been a major selective pressure in the evolution of immune systems. Our suggestion is that immune systems have evolved under a multidimensional, complex selective pressure, which includes a capacity to develop and repair as well as a capacity to defend against different sorts of threats. The way scientists traditionally delineate the immune system reflects an intellectual decision. This does not mean, of course, that the immune system is not "real", but rather that there exist many different ways to divide up living entities into different "systems". In the present paper, we have argued in favor of another intellectual decision, by suggesting that it is more appropriate to focus on a functionally defined "system" of interest (namely the tissue reconstruction system) than on traditionally defined systems (such as the immune system). Repair and defense are probably just two sides of the same coin, a lesson that thinking about immunity in terms of robustness might help us keep in mind.
Figures and Tables

Table 1. Main mechanisms involved in the robustness of the tissue reconstruction system (TRS), and its dysfunctions in major pathological situations (ulcer, fibrosis, and cancer).

Plasticity:
- EMT and SCAI in renal fibrosis [START_REF] Gasparics | Alterations in SCAI Expression during Cell Plasticity, Fibrosis and Cancer[END_REF]
- MET/EMT with the tumor-initiating ability required for metastatic colonization [START_REF] Yao | Mechanism of the mesenchymal-epithelial transition and its relationship with metastatic tumor formation[END_REF]
- Plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype [START_REF] Liao | Revisiting epithelial-mesenchymal transition in cancer metastasis: the connection between epithelial plasticity and stemness[END_REF]
- UPR in macrophage polarization and plasticity with shift to M1-like profile [START_REF] Soto-Pantoja | Unfolded protein response signaling impacts macrophage polarity to modulate breast cancer cell clearance and melanoma immune checkpoint therapy responsiveness[END_REF]

Functional redundancy:
- ILC redundancy [START_REF] Vély | Evidence of innate lymphoid cell redundancy in humans[END_REF]
- IL-25, IL-33, and TSLP redundancy and fibrosis [START_REF] Vannella | Combinatorial targeting of TSLP, IL-25, and IL-33 in type 2 cytokine-driven inflammation and fibrosis[END_REF][START_REF] Gieseck | Type 2 immunity in tissue repair and fibrosis[END_REF]
- Targeting porcupine in kidney fibrosis and Wnt O-acylation [START_REF] Madan | Experimental inhibition of porcupine-mediated Wnt O-acylation attenuates kidney fibrosis[END_REF]
- IL-6 and glycoprotein 130 in the pathophysiology of multiple myeloma [START_REF] Burger | Due to interleukin-6 type cytokine redundancy only glycoprotein 130 receptor blockade efficiently inhibits myeloma growth[END_REF]

Constant surveillance:
- Loss of substance (i.e., a very significant injury) and disappearance of sentinel cells in ulcers [103]
- Langerhans cells and hypoxia [START_REF] Pierobon | Regulation of Langerhans cell functions in a hypoxic environment[END_REF]
- Fibronectin-EDA and tenascin-C sensed by TLR4 on resident cells and fibrotic processes [START_REF] Bhattacharyya | Endogenous ligands of TLR4 promote unresolving tissue fibrosis: Implications for systemic sclerosis and its targeted therapy[END_REF]
- ILC2s in pulmonary fibrosis [START_REF] Cheng | Guards at the gate: physiological and pathological roles of tissue-resident innate lymphoid cells in the lung[END_REF][START_REF] Hams | IL-25 and type 2 innate lymphoid cells induce pulmonary fibrosis[END_REF]
- CD8+CD28- T cells and the profibrotic cytokine IL-13 in the skin of systemic sclerosis (SSc) patients [START_REF] Li | Skin-Resident Effector Memory CD8+CD28-T Cells Exhibit a Profibrotic Phenotype in Patients with Systemic Sclerosis[END_REF]
- TRMs in human non-small cell lung tumor tissue [START_REF] Granier | Tissue-resident memory T cells play a key role in the efficacy of cancer vaccines[END_REF]
- Role of amphiregulin in orchestrating responses to tumors [START_REF] Zaiss | Emerging functions of amphiregulin in orchestrating immunity, inflammation, and tissue repair[END_REF]

Restraint:
- Resolution deficiency and sterile chronic granulomatous disease [98-101]
- Treg/Th17 imbalance in pyoderma gangrenosum [START_REF] Caproni | The Treg/Th17 cell ratio is reduced in the skin lesions of patients with pyoderma gangrenosum[END_REF]
- Chronic hepatitis C and hepatic fibrosis with the Th17/Treg balance [START_REF] Rios | Chronic hepatitis C liver microenvironment: role of the Th17/Treg interplay related to fibrogenesis[END_REF]
- Role of Tregs in SSc [START_REF] Ugor | Increased proportions of functionally impaired regulatory T cell subsets in systemic sclerosis[END_REF]
- SSc and TLRs with persistence of the response [START_REF] Bhattacharyya | Endogenous ligands of TLR4 promote unresolving tissue fibrosis: Implications for systemic sclerosis and its targeted therapy[END_REF]
- Resolving inflammation against fibrosis and specialized pro-resolving lipid mediators [112]
- Macrophages and efferocytosis [113]
- TAM recruitment in triple negative breast cancer [124]
- Tregs in tumor progression [138]
- Tregs and cancer cell clearance [139]
- Tregs and cancer immunotherapies with IL-2 [140]
- Targeting immune checkpoints such as CTLA4, PD1, or TIGIT to both interfere with Treg function and enhance effector responses at the same time

Dynamic adjustment:
- Cancer cells and use of the dynamic potential of neutrophils [126]
- CCL26 in colorectal cancer cell invasion by inducing TAM infiltration [142]
- Inhibitors of the receptor tyrosine kinase c-MET and impairment of the mobilization and recruitment of neutrophils into tumors [143]

Figure 1. Overview of the "tissue reconstruction system" (TRS). Many different components and pathways are involved in both tissue repair and tissue regeneration. Crucial components of the TRS include structural (e.g., fibroblasts, extracellular matrix) and immunological (e.g., neutrophils, macrophages) components. The concept of TRS is intended to embrace all the main entities and mechanisms responsible for tissue repair and tissue regeneration.

Figure 2. The exploration of the tissue reconstruction system (TRS) leads to distinguishing different types of robustness, namely structural vs. functional robustness, partial vs. complete robustness, and corrective vs. preventive robustness. The four cases presented here are merely illustrations, among others, showing how these distinctions can be applied to real-life cases. In severe skin injury, robustness is corrective, partial, and functional. In liver regeneration in mammals, robustness is corrective, almost complete, and functional. In the continuous renewal of epithelia, robustness is preventive, complete, and functional. Finally, in limb regeneration in the salamander, robustness is corrective, complete, and functional.

Acknowledgements
We would like to thank Cécile Contin-Bordes, Paôline Laurent, Alberto Mantovani, Jean-François Moreau, and Derek Skillings for discussions about tissue repair and robustness. Thomas Pradeu has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n°637647 - IDEM).
Nicolas Gauvrit (email: ngauvrit@me.com), Jean-Charles Houillon, Jean-Paul Delahaye

Generalized Benford's Law as a Lie Detector

Introduction

During the late 19th century, an intriguing phenomenon was discovered by [START_REF] Newcomb | note on the frequency of use of the different digits in natural numbers[END_REF]: The first significant digit (leftmost nonzero digit) of seemingly random numbers often fails to follow a flat distribution with an equal proportion of 1s, 2s, …, 9s, as one would expect, but instead follows a decreasing distribution, with more 1s than 2s, more 2s than 3s, and so forth. The same phenomenon was later rediscovered and detailed by [START_REF] Benford | the law of anomalous numbers[END_REF]. According to what is now referred to as Benford's law or the Newcomb-Benford law (NBL), the distribution of the first significant digit X of a "random" number follows a logarithmic law given by P(X = d) = Log(1 + 1/d), where Log stands for the base 10 logarithm and d stands for a digit (in the range of 1-9; see Table 1).

Table 1. Expected proportions of first significant digits according to NBL.
d         1     2     3     4     5     6     7     8     9
Prop (%)  30.1  17.6  12.5  9.69  7.92  6.69  5.80  5.12  4.58

Many real-world datasets approximately conform to NBL (Hill, 1998; [START_REF] Nigrini | Benford's Law: Applications for forensic accounting, auditing, and fraud detection[END_REF]). For instance, the distances between earth and known stars (Alexopoulos & Leontsinis, 2014) or exoplanets [START_REF] Aron | crime-fighting maths law confirms planetary riches[END_REF], crime statistics (Hickman & Rice, 2010), the number of daily-recorded religious activities [START_REF] Mir | the Benford law behavior of the religious activity data[END_REF], earthquake depths [START_REF] Arroucau | Benford's law of first digits: From mathematical curiosity to change detector[END_REF], interventional radiology Dose-Area Product data [START_REF]the novel application of Benford's second order analysis for monitoring radiation output in interventional radiology[END_REF], financial variables [START_REF] Ausloos | Benford's law and theil transform of financial data[END_REF], and internet traffic data (Arshadi & Jahangir, 2014) were all found to conform to NBL. In psychology, NBL was found relevant in the study of gambling behaviors [START_REF] Chou | Benford's law and number selection in fixed-odds numbers game[END_REF], brain activity recordings (Kreuzer et al., 2014), language [START_REF] Dehaene | cross-linguistic regularities in the frequency of number words[END_REF][START_REF] Delahaye | Culturomics: Le numérique et la culture [culturomics: the numerical and the culture[END_REF], and perception [START_REF] Beeli | Frequency correlates in grapheme-color synaesthesia[END_REF].

The Sensitivity and Specificity of Benford Analysis

Human pseudorandom productions are in many ways different from true randomness [START_REF] Nickerson | the production and perception of randomness[END_REF]. For instance, participants' productions show an excess of alternations [START_REF]Bias and processing capacity in the generation of random time intervals[END_REF] or are overly uniform (Falk & Konold, 1997). As a consequence, fabricated data might fit NBL to a lesser extent than genuine data [START_REF] Banks | the apparent magnitude of number scaled by random production[END_REF].
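As a concrete illustration (our own minimal sketch, not code from the original studies), the NBL proportions of Table 1 and the extraction of a first significant digit can be computed in a few lines of Python:

```python
import math

def benford_probs():
    """First-digit probabilities under NBL: P(X = d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leftmost nonzero digit of a positive number, e.g., first_digit(0.00314) == 3."""
    s = f"{abs(x):.15e}"  # scientific notation puts the first significant digit first
    return int(s[0])

for d, p in benford_probs().items():
    print(f"P(X = {d}) = {100 * p:.2f}%")  # 30.10%, 17.61%, ..., 4.58%
```

Counting first digits in an empirical dataset and comparing the observed proportions with these expected values is the core of a first-digit Benford analysis.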
These results (see also Haferkorn, 2013) support the so-called Benford analysis, which uses a measure of discrepancy from NBL to detect fraudulent or erroneous data [START_REF] Bolton | statistical fraud detection: A review[END_REF][START_REF] Kumar | data mining for detecting carelessness or mala fide intention[END_REF][START_REF] Nigrini | Benford's Law: Applications for forensic accounting, auditing, and fraud detection[END_REF]. It has been used to audit industrial and financial data [START_REF] Rauch | deficit versus social statistics: empirical evidence for the effectiveness of Benford's law[END_REF][START_REF] Rauch | liBor manipulation -empirical analysis of financial market benchmarks using Benford's law[END_REF]. Hsü (1948), [START_REF] Kubovy | response availability and the apparent spontaneity of numerical choices[END_REF], and Hill (1988) provided direct experimental evidence that human-produced data conform poorly to NBL. Another study went even further toward a meaningful task by using the type of data known to exhibit a Benford distribution. [START_REF] Burns | sensitivity to statistical regularities: People (largely) follow Benford's law[END_REF] asked participants to guess real-world values, such as the US gross national debt or the peak summer electricity consumption in Melbourne. He found that although participants' first-digit responses did not perfectly follow a logarithmic law, they conformed to the logarithmic distribution better than to the uniform distribution. Burns concluded that participants are not too bad at producing a distribution that conforms to NBL as soon as the task involves the type of real-world data that do follow NBL. One limitation of Burns's (2009) study is that it only works at the population level. We cannot know from his data whether a particular individual would succeed in producing a pseudorandom series conforming to NBL, since each participant produced a single value. Nevertheless, his and Diekmann's studies certainly suggest that using Benford's law to detect fraud is questionable in general, since humans may be able to produce data conforming to NBL, in which case a Benford test will yield many undetected frauds, lacking sensitivity. As mentioned above, not all random variables or real-world datasets conform to NBL (and when they do, it is generally only in an approximate manner). Because many real-world datasets do not conform to NBL, a Benford test used to detect fraud may have not only low sensitivity but also low specificity.

Generalized Benford's Law

Several researchers (e.g., [START_REF] Schmeiser | survival distributions satisfying Benford's law[END_REF]) have studied the conditions under which a distribution seems more likely to satisfy NBL. Fewster (2009) provided an intuitive explanation of why and when the law applies, and concluded that any dataset that smoothly spans several orders of magnitude tends to conform to NBL. Data limited to one or two orders of magnitude would generally not conform to the law. To pursue further the question of why so many data conform to NBL, the conservative version of the NBL may be a better starting point than the mere first-digit analysis. Recall that in the conservative version, a random variable X conforms to NBL if Frac(Log(X)) ~ U([0,1[), where Frac denotes the fractional part.
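This conservative formulation can be checked directly by simulation. The following sketch is illustrative only (it assumes NumPy and SciPy are available): it generates a variable spanning several orders of magnitude, for which Frac(Log(X)) is uniform, and a variable confined to one order of magnitude, which fails the test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# X = 10**U with U ~ Uniform(0, 6) spans six orders of magnitude and
# satisfies Frac(log10(X)) ~ U([0,1[) exactly.
x = 10 ** rng.uniform(0, 6, size=10_000)
print(stats.kstest(np.log10(x) % 1.0, "uniform"))  # large p: consistent with NBL

# A variable confined to a single order of magnitude does not conform.
y = rng.uniform(1, 10, size=10_000)
print(stats.kstest(np.log10(y) % 1.0, "uniform"))  # tiny p: NBL rejected
```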
In an attempt to show that the roots of NBL's ubiquity should not be looked for in the specific properties of the logarithm, Gauvrit and Delahaye (2008, 2009) defined a generalized Benford's law (GBL) associated with any function f as follows: A random variable X conforms to the GBL associated with function f if Frac(f(X)) ~ U([0,1[). The classical NBL thus appears as a special case of GBL, associated with the function Log. Testing several mathematical and real-world datasets, Gauvrit and Delahaye (2011) found that several of them fit GBL better than NBL: of the 12 datasets they studied, six conformed to the classical NBL, while others conformed to GBL associated with other functions.

Study 1

In Study 1, we examined human pseudorandom productions in four different realistic settings, of the kind in which the classical Benford's law has been observed, and compared these responses with true sample values.

Participants
A sample of 169 adults (63 women) took part in this experiment. Participants were recruited via social networks and e-mails. Ages ranged from 13 to 73 years (M Age = 40.9, SD = 11.6).

Method
Participants were randomly assigned to one of four groups: cities (n = 41), numbers (n = 44), stars (n = 36), or tuberculosis (n = 48). In each group, participants were informed that a series of 30 numbers had been selected at random from a dataset, and were instructed to produce what they thought would be a credible outcome, that is, to supply 30 plausible numbers. In the cities group, the list was the set of populations of the largest US cities.

Measures
For each set X of 30 values (either fabricated or real samples), the observed distributions of the fractional parts of f(X) were computed, with f(X) = Log(X), f(X) = π × X², and f(X) = √X.

Results
As expected, fabricated data were usually less consistent with GBL than real data (see Figure 1, Table 2). There was only one exception to this pattern: GBL associated with the square function in the case of numbers. Depending on the context, different computational variants of GBL seemed more appropriate for segregating true values from fabricated ones. In the case of the largest US cities, for instance, human and real data did not significantly differ in terms of conformity to NBL or to GBL with the square function, but they did differ when conformity was computed with the square-root function. To analyze the specificity and sensitivity of a fraud detection tool based on GBL, we drew receiver operating characteristic (ROC) curves (see Figure 2) and computed the areas under the curves (AUCs). As shown in Table 3, different sets of data resulted in different patterns. For the cities condition, classical NBL was barely efficient, whereas the square root yielded better results. With the Plouffe database, all GBLs were relevant, although the one associated with the function π × X² appeared to be the best.

Discussion
Overall, NBL appeared to be a better means than the other tested variants of GBL for distinguishing between fabricated and real data. However, NBL is not always the best measure, as the cities condition showed. Even when NBL was an efficient measure, as in the numbers condition, some other GBL may have been even better or at least as good as NBL, for example, the GBL associated with the square function, for which the AUC was greater than the NBL AUC. Depending on the type of data one tests, different types of GBL could thus be advised, either replacing or complementing the classical Benford analysis.
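In computational terms, a generalized Benford analysis reduces to measuring how far the empirical distribution of Frac(f(X)) is from uniformity. A hypothetical reimplementation (not the authors' code) of such a measure, using the three functions of Study 1 and the Kolmogorov-Smirnov distance, could look as follows:

```python
import numpy as np
from scipy import stats

def gbl_distance(values, f):
    """Kolmogorov-Smirnov distance between Frac(f(X)) and U([0,1[).
    Small values indicate good conformity to the GBL associated with f."""
    frac = f(np.asarray(values, dtype=float)) % 1.0
    return stats.kstest(frac, "uniform").statistic

functions = {
    "Log(X) (classical NBL)": np.log10,
    "pi * X^2": lambda x: np.pi * x**2,
    "sqrt(X)": np.sqrt,
}

# Stand-in for one 30-value series (simulated here, not data from the study)
series = 10 ** np.random.default_rng(1).uniform(0, 5, size=30)
for name, f in functions.items():
    print(f"{name:>22}: D = {gbl_distance(series, f):.3f}")
```

Computed over many real and fabricated series, such distances can then serve as classification scores in an ROC analysis (e.g., with scikit-learn's roc_auc_score) to estimate AUCs of the kind reported in Table 3.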
A further argument in favor of the GBL analysis is that, with the growing popularity of the Benford analysis, potential swindlers might become aware of the necessity of conforming to NBL. Alternative methods complementing the classical analysis (for another such method, see [START_REF] Miller | data diagnostics using secondorder tests of Benford's law[END_REF]) could thus prove useful, especially in view of the fact that it would be particularly difficult to fabricate data conforming to a whole set of variants of GBL.

Study 2

One possible reason why [START_REF] Diekmann | not the first digit! Using Benford's law to detect fraudulent scientific data[END_REF] found that humans were able to produce accurate values (r) is that the notion was familiar to the participants (students in the social sciences), a feature that may have had an impact on the outcome. If this is true, our positive results in Study 1 might have been the result of too low a familiarity with the material at hand. In Study 2, we investigated the possible effects of familiarity, as well as those of cognitive effort, on the accuracy of the Benford analysis.

Participants

A sample of 124 first-year psychology students (103 women) from a distance learning program volunteered for the experiment. Ages ranged from 22 to 55 years (M_age = 38.27; SD = 9.08). Participants were recruited by e-mail. We chose distance learning students as participants because, contrary to ordinary students, they have various backgrounds and previous working experience in a diversity of fields, warranting greater variation in familiarity with the material.

Method

The experiment was performed online using a Google Form (https://www.google.com/forms/about/). We used country population data, as this was believed to grant a somewhat larger variation in familiarity. Participants were asked to produce series of only 20 data points to lower the risk of tiredness. Participants were randomly assigned to one of two groups (no time pressure or time pressure condition). Each group included 62 participants. They were informed that they would have to supply a list of 20 values that could be the numbers of inhabitants of 20 randomly selected countries in the world. They were asked to try to guess what these populations could be. They were told that a true sampling would be performed for comparison with their answers. In the no time pressure condition, the instruction was to be "as accurate as possible, taking as much time as needed." In the time pressure condition, the instruction was to be "as fast and accurate as possible." The former condition is known to induce greater cognitive effort than the latter (e.g., [START_REF] Maule | the effects of time pressure on human judgment and decision making[END_REF]).

Self-reported data on the level of expertise in the field of country populations were also collected using a 6-point Likert scale, from 0 = absolutely naïve about country populations to 5 = expert in country populations. This measure serves as an indication of the participants' familiarity with the material. Sixty-two true samples of 20 country populations were selected from real-world data (found at http://data.okfn.org/data/core/population#data) to be compared with participants' productions.

Measures

For each set X of 20 values (either fabricated or real samples), the observed distributions of Frac(Log(X)) were computed. The discrepancy from uniformity was assessed in each case using the Kolmogorov-Smirnov statistic D.
Results

Reported expertise was rather low (M = 1.3; SD = 0.97; range of 0-4). To assess the effect of expertise, we performed an analysis of variance (ANOVA) with the dependent variable D and the independent variable Level of Expertise (6). No significant effect was found, F(4, 119) = 0.64, p = .63. The same procedure yielded nonsignificant statistics for the GBL associated with the square-root function, F(4, 119) = 1.29, p = .28, and for the GBL associated with function π × X², F(4, 119) = 0.20, p = .94. Correlation analysis showed nonsignificant links between level of expertise and D, for classical NBL, r(122) = -.11, p = .20, for the GBL associated with a square-root function, r(122) = -.14, p = .10, and also for the GBL with function πX², r(122) = -.06, p = .53.

To assess the effect of cognitive effort, we performed an ANOVA with the dependent variable D and the independent variable Group. There was a significant influence, F(2, 183) = 7.33, p < .001, but a Tukey HSD test showed that the two experimental groups did not differ significantly from one another (p = .46), although they both differed from the real data values. The same procedure was used to assess the effect of cognitive effort for the other variants of GBL. No effect was found for the GBL associated with the square-root function, F(2, 183) = 1.53, p = .22, or with function πX², F(2, 183) = 0.01, p = .99.

Discussion

Familiarity does not appear to influence the quality of participants' responses in terms of GBL. The experimental procedure used to assess the possible effect of time pressure or cognitive effort yielded no significant results either. These results thus suggest that producing fraudulent data that would remain undetected under the Benford analysis is not necessarily a matter of familiarity or cognitive effort. It is, however, fair to note two limitations. First, time pressure, although widely used to increase cognitive effort, probably does not result in large differences, especially if a task requires little effort in general. Concerning familiarity with the material, no participants declared themselves experts (score of 5 on the scale) about country populations, most of the sample lying in the range of 0-3, so that we would not have detected a specific effect only appearing with true experts.

CONCLUDING DISCUSSION

We performed the first investigation of the generalized Benford analysis, an equivalent of the classical Benford analysis but based on the broader GBL. Results from Study 1 rendered mild support for the generalized Benford analysis, including the classical Benford analysis. They also drew attention to the fact that different types of data yielded different outcomes, suggesting that the best way of detecting fraud using the GBL associated with some function f would be either to find the function f that best matches the particular data at hand or to combine different analyses. Although the classical Benford analysis was validated in our studies, it occasionally failed at detecting human-produced data as efficiently as other generalized Benford analyses. The present positive results could have been the result of our sample characteristics, in which participants, contrary to real swindlers, might have put little effort into the task since the stakes were low. Moreover, the participants were not highly familiar with the material at hand.
To rule out the possibility that our results stemmed from such features and that GBL would be inapplicable in real situations, Study 2 aimed at demonstrating that cognitive effort and familiarity with the material have little effect on the participants' responses. The data supported this view, although further studies (including higher levels of cognitive pressure and true experts) would be recommended. With the Benford analysis having become more common in fraud detection, new complementary analyses are needed [START_REF] Miller | data diagnostics using secondorder tests of Benford's law[END_REF]. The GBL analysis potentially provides a whole set of such fraud detection methods, making it more difficult, even for informed swindlers intentionally conforming to NBL, to remain undetected.

References

Andriotis, P., Oikonomou, G., & Tryfonas, T. (2013). JPEG steganography detection with Benford's law. Digital Investigation, 9, 246-257. doi: 10.1016/j.diin.2013.01.005
Falk, R., & Konold, C. (1997). Making sense of randomness: Implicit encoding as a basis for judgment. Psychological Review, 104, 301-318. doi: 10.1037/0033-295X.104.2.301
Fewster, R. M. (2009). A simple explanation of Benford's law. The American Statistician, 63, 27-32. doi: 10.1198/tast.2009.0005
Gauvrit, N., & Delahaye, J.-P. (2008). Pourquoi la loi de Benford n'est pas mystérieuse [Why Benford's law is not mysterious]. Mathématiques et Sciences Humaines, 182, 7-16.
Gauvrit, N., & Delahaye, J.-P. (2009). Loi de Benford généralisée [Generalized Benford's law]. Mathématiques et Sciences Humaines, 186, 5-15. doi: 10.4000/msh.11034
Gauvrit, N., & Delahaye, J.-P. (2011). Scatter and regularity implies Benford's law... and more. In H. Zenil (Ed.), Randomness through computation (pp. 53-69). London, England: World Scientific.
Gauvrit, N., & Morsanyi, K. (2014). The equiprobability bias from a mathematical and psychological perspective. Advances in Cognitive Psychology, 10, 119-130. doi: 10.5709/acp-0163-9
Haferkorn, M. (2013). Humans vs. algorithms - who follows Newcomb-Benford's law better with their order volume? In F. A. Rabhi & P. Gomber (Eds.), Enterprise Applications and Services in the Finance Industry (pp. 61-70). Berlin, Germany: Springer.
Hales, D. N., Chakravorty, S. S., & Sridharan, V. (2009). Testing Benford's law for improving supply chain decision-making: A field experiment. International Journal of Production Economics, 122, 606-618. doi: 10.1016/j.ijpe.2009.06.017
Hickman, M. J., & Rice, S. K. (2010). Digital analysis of crime statistics: Does crime conform to Benford's law? Journal of Quantitative Criminology, 26, 333-349. doi: 10.1007/s10940-010-9094-6
Hill, T. P. (1988). Random-number guessing and the first digit phenomenon. Psychological Reports, 62, 967-971. doi: 10.2466/pr0.1988.62.3.967
Hill, T. P. (1998). The first digit phenomenon: A century-old observation about an unexpected pattern in many numerical tables applies to the stock market, census statistics and accounting data. American Scientist, 86, 358-363.
Hsü, E. H. (1948). An experimental study on "mental numbers" and a new application. Journal of General Psychology, 38, 57-67. doi: 10.1080/00221309.1948.9711768
Kreuzer, M., Jordan, D., Antkowiak, B., Drexler, B., Kochs, E. F., & Schneider, G. (2014). Brain electrical activity obeys Benford's law. Anesthesia & Analgesia, 118, 183-191. doi: 10.1213/
Table 1. Proportion of 1s, 2s, ..., 9s as first significant digit in a series conforming to NBL

Digit       1     2     3     4     5     6     7     8     9
Proportion  .301  .176  .125  .097  .079  .067  .058  .051  .046

Note. NBL = Newcomb-Benford law.
24,431
[ "1212983" ]
[ "65", "410272", "109853" ]
01745729
en
[ "spi" ]
2024/03/05 22:32:07
2013
https://hal.science/hal-01745729/file/001_WP802.pdf
Built-In Self-Test Structure (BIST) for Resistive RAMs Characterization: Application to Bipolar OxRRAMs Hassen AZIZA, Marc BOQUET, Mathieu MOREAU and J-Michel PORTAL IM2NP-UMR CNRS 6242, Aix-Marseille University, France, hassen.aziza@im2np.fr

Problem Formulation: Resistive Random Access Memory (ReRAM) is a form of non-volatile storage that operates by changing the resistance of a specially formulated solid dielectric material [START_REF] Gibbons | Switching properties of thin Nio films[END_REF]. Among ReRAMs, Oxide-based Resistive RAMs (so-called OxRRAMs) are promising candidates due to their compatibility with CMOS processes and their high ON/OFF resistance ratio. Common problems with OxRRAMs are related to high variability in operating conditions and low yield. OxRRAM variability mainly impacts the ON/OFF resistance ratio. This ratio is a key parameter in determining the overall performance of an OxRRAM memory. In this context, the presented built-in structure allows collecting statistical data related to the OxRRAM memory array (ON/OFF resistance distributions) for reliability assessment of the technology.

Built-In Self-Test (BIST) structure: Fig. 1a presents the elementary array used for simulation, which consists of a 3×3 1T/1R cell matrix, a row decoder, a column decoder and a sense amplifier for the read operation. The OxRRAM compact model used to build the memory array satisfactorily matches quasi-static and dynamic experimental data measured on actual HfO2-based devices (Fig. 1b). Notice that the OxRRAM model allows a variability analysis based on variations of the OxRRAM card model parameters. The card model parameter variations are chosen to fit experimental data [START_REF] Aziza | Evaluation of OxRAM cell variability impact on memory performances through electrical simulations[END_REF]. Fig. 2 presents the single-ended sense amplifier (solid line). The circuit working principle is quite simple. During a READ operation, the OxRRAM cell is biased through the row decoder (with V_read > 0 and WL high). The cell current I_cell is generated according to the memory state (RON ≈ 7 kΩ after a SET operation and ROFF ≈ 25 kΩ after a RESET operation).

Simulation results: To validate the BIST structure, a variability analysis is conducted through 500 Monte Carlo simulations. The elementary matrix presented in Fig. 1, embedding the BIST structure, is considered for simulations. As a result of cell variability, circuit performance exhibits much wider variability. In Fig. 3a, the RON and ROFF distributions are plotted (RON/ROFF variability being correlated with actual silicon results). In Fig. 3b, the V_IN_SET and V_IN_RESET distributions are also plotted. These last distributions are similar to the RON and ROFF distributions. This trend is confirmed by the correlation curve presented in Fig. 4. Therefore, the modified sense amplifier structure can be used as a powerful tool to track any resistance variation, but also to characterize the memory array variability.

To summarize, a built-in self-test structure is presented for Resistive RAM characterization and variability evaluation. The area overhead introduced by the structure is relatively low, as the structure is integrated in the sense amplifiers. The normal mode of operation of the memory is preserved. Besides, extracted values are given in a digital data format, so the extraction process does not require any analog pin on the tester, making it fully digital-tester compliant and easily observable via the random logic.
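To give a feel for the kind of statistics such a BIST collects, here is a toy Monte Carlo sketch; it is our illustration, not the paper's simulation setup, and the standard deviations are invented for the example (only the 7 kΩ / 25 kΩ nominal values come from the text).

```python
import numpy as np

# Toy Monte Carlo over 500 cells. Nominal RON/ROFF come from the text;
# the standard deviations below are illustrative assumptions only.
rng = np.random.default_rng(1)
n_cells = 500
r_on = rng.normal(7e3, 0.8e3, n_cells)    # RON ~ 7 kOhm after SET
r_off = rng.normal(25e3, 3e3, n_cells)    # ROFF ~ 25 kOhm after RESET

ratio = r_off / r_on                      # ON/OFF ratio per simulated cell
print("mean ON/OFF ratio :", round(ratio.mean(), 2))
print("worst-case ratio  :", round(ratio.min(), 2))   # yield-limiting tail
```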
Fig. 1: (a) OxRRAM memory array; (b) I-V characteristic measured on HfO2-based devices and corresponding simulation using the OxRRAM model, and SET voltage as a function of the programming ramp.

The comparator input V_IN is directly proportional to I_cell and therefore to the OxRRAM resistance (two distinct values are available for V_IN: V_IN_SET for RON and V_IN_RESET for ROFF). For the sense amplifier to operate properly, on the one hand the difference between V_IN_SET and V_IN_RESET must be as large as possible, and on the other hand V_REF has to be set exactly between the V_IN_SET and V_IN_RESET values. At a circuit level, V_IN is the parameter to consider in terms of memory functionality. To extract the V_IN value, a variable voltage reference source and a multiplexer are incorporated in the sensing circuit (dotted part of the circuit) and the READ operation is modified as follows (a behavioral sketch of this ramp search is given after the list):
• V_REF increases step by step from 0 to V_dd (the V_REF increase is controlled by a shift register),
• the V_IN value is detected when the sense amplifier output switches (the shifting process stops when V_REF > V_IN, i.e., the Rd_REG signal becomes active),
• V_IN is available at the circuit output as a numerical value when the BIST_EN signal is high.
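The following minimal behavioral model is our reading of the modified READ operation, not the authors' circuit; the VDD value and the step count are assumed for illustration.

```python
# Behavioral model of the modified READ: V_REF ramps from 0 to VDD and the
# step index at which the comparator flips digitizes V_IN.
VDD = 1.2          # supply voltage (illustrative value)
N_STEPS = 64       # resolution of the shift-register-controlled ramp

def extract_vin(v_in: float) -> int:
    """Return the first code for which V_REF exceeds V_IN."""
    step = VDD / N_STEPS
    for code in range(N_STEPS + 1):
        v_ref = code * step
        if v_ref > v_in:       # sense-amplifier output switches (Rd_REG active)
            return code        # shifting stops; the code encodes V_IN
    return N_STEPS

# RON and ROFF cells yield distinct V_IN levels, hence distinct codes:
print(extract_vin(0.35), extract_vin(0.78))
```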
4,743
[ "177966", "18361", "18508", "20388" ]
[ "199957", "199957", "199957", "199957" ]
01745768
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01745768/file/paper-hal.pdf
Tien-Duc Cao email: tien-duc.cao@inria.fr Ioana Manolescu email: ioana.manolescu@inria.fr Xavier Tannier email: xavier.tannier@sorbonne-universite.fr

Searching for Truth in a Database of Statistics

The proliferation of falsehood and misinformation, in particular through the Web, has led to increasing energy being invested into journalistic fact-checking. Fact-checking journalists typically check the accuracy of a claim against some trusted data source. Statistic databases such as those compiled by state agencies are often used as trusted data sources, as they contain valuable, high-quality information. However, their usability is limited when they are shared in a format such as HTML or spreadsheets: this makes it hard to find the most relevant dataset for checking a specific claim, or to quickly extract from a dataset the best answer to a given query. We present a novel algorithm enabling the exploitation of such statistic tables, by (i) identifying the statistic datasets most relevant for a given fact-checking query, and (ii) extracting from each dataset the best specific (precise) query answer it may contain. We have implemented our approach and experimented on the complete corpus of statistics obtained from INSEE, the French national statistic institute. Our experiments and comparisons demonstrate the effectiveness of our proposed method.

INTRODUCTION

The media industry has taken good notice of the value promises of big data. As more and more important and significant data is available in digital form, journalism is undergoing a transformation whereby technical skills for working with such data are increasingly needed, and increasingly present, in newsrooms. In particular, the areas of data journalism and journalistic fact-checking stand to benefit most from efficient tools for exploiting digital data. Statistics databases produced by governments or other large organizations, whether commercial or administrative ones, are of particular interest to us. Such databases are built by dedicated personnel at a high cost, and consolidated carefully out of multiple inputs; the resulting data is typically of high value. Another important quality of such data is that it is often shared as open data. Examples of such government open data portals include http://data.gov (US), http://data.gov.uk (UK), http://data.gouv.fr and http://insee.fr (France), http://wsl.iiitb.ac.in/sandesh-web/sandesh/index (India), etc. While the data is open, it is often not linked, that is, it is not organized in RDF graphs, as recommended by the Linked Open Data best practice for sharing and publishing data on the Web [START_REF] W3c | Best Practices for Publishing Linked Data[END_REF]. Instead, such data often consists of HTML or Excel tables containing numeric data, links to which appear in or next to text descriptions of their contents. This prevents the efficient exploitation of the data through automated tools, capable for instance of answering queries such as "find the unemployment rate in my region in 2015". In prior work [START_REF] Cao | Extracting Linked Data from Statistic Spreadsheets[END_REF], we devised an approach to extract Linked Open Data in RDF format from high-quality, statistic Open Data published in the "tables + text description" form frequently used nowadays; this is a first step toward addressing the above issues.
Subsequently, we applied our approach to the complete set of statistics published by INSEE, a leading French national statistics institute, and we republish the resulting RDF Open Data (together with the crawling code and the extraction code 1). While our approach is tailored to some extent toward French, its core elements are easily applicable to other settings, in particular to statistic databases using English, as the latter is very well supported by text processing tools. In this work, we introduce novel algorithms for searching for answers to keyword queries in a database of statistics, organized in RDF graphs such as those we produced. First, we describe a dataset search algorithm which, given a set of user keywords, identifies the datasets (statistic table and surrounding presentations) most likely to be useful for answering the query. Second, we describe an answer search algorithm which, building on the above algorithm, attempts to answer queries, such as "unemployment rate in Paris in 2016", with values extracted from the statistics dataset, together with a contextualization quickly enabling the user to visually check the correctness of the extraction and the relevance of the result. In some cases, there is no single number known in the database, but several semantically close ones. In such cases, our algorithm returns the set of numbers, again together with context enabling its interpretation. We have experimentally evaluated the efficiency and the effectiveness of our algorithms, and demonstrate their practical interest in facilitating fact-checking work. In particular, while political debates are broadcast live on radio or TV, fact-checkers can use such tools to quickly locate reference numbers which may help them publish live fact-checking material.

In the sequel, Section 2 outlines the statistic data sources we consider, their organization after our extraction [START_REF] Cao | Extracting Linked Data from Statistic Spreadsheets[END_REF], and our system architecture. Section 3 defines the search problem we address, and describes our search algorithms. Section 4 presents our experimental evaluation. We discuss related work in Section 5, then conclude.

INPUT DATA AND ARCHITECTURE

INSEE publishes a set D of datasets D1, D2, etc., where each dataset Di consists of a table (an HTML table or a spreadsheet file), a title Di.t and optionally an outline Di.o. The title is a short nominal phrase stating what Di contains, e.g., "Seasonally adjusted youth (under 25s) unemployment" in Figure 1. The outline, when present, is a short paragraph on the Web page from where Di can be downloaded. The outline extends the title with more details about the nature of the statistic numbers, the interpretation of each dimension, methodological details of how numbers were aggregated, etc.

RDF data extracted from the INSEE statistics

A dataset consists of header cells and data cells. Data cells and header cells each have attributes indicating their position (column and row) and their value. For instance, in Figure 1, "Belgium", "Bulgaria" etc. up to "France", "Oct-2016" to "Oct-2017", and the two fused cells above them, are header cells, while the others are data cells. Each data cell has a closest header cell on its column, and another one on its row; these capture the dimension values corresponding to a data cell. To enable linking the results with existing (RDF) datasets and ontologies, we extract the table contents into an RDF graph. We create an RDF class for each entity type, and assign a URI to each entity instance, e.g., dataset, table, cell, outline etc. Each relationship is similarly represented by a resource described by the entity instances participating in the relationship, together with their respective roles.
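As an illustration of this modeling, one data cell could be exported as triples in the style below; the vocabulary and property names are hypothetical, invented for the example, and differ from the paper's actual schema.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical vocabulary for illustration only.
EX = Namespace("http://example.org/stat/")
g = Graph()

cell = EX["dataset42/cell_r3_c5"]
g.add((cell, RDF.type, EX.DataCell))
g.add((cell, EX.row, Literal(3)))
g.add((cell, EX.column, Literal(5)))
g.add((cell, EX.value, Literal(22.1)))                 # e.g., a rate in %
g.add((cell, EX.closestRowHeader, EX["dataset42/header_France"]))
g.add((cell, EX.closestColHeader, EX["dataset42/header_Aug2017"]))

print(g.serialize(format="turtle"))
```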
System architecture

A scraper collects the Web pages publicly accessible from a statistic institute's Web site; Excel and HTML tables are identified, traversed, and converted into RDF graphs as explained above. The resulting RDF data is stored in the Apache Jena TDB server 2, and used to answer keyword queries as we explain next.

SEARCH PROBLEM AND ALGORITHM

Given a keyword-based query, we focus on returning a ranked list of candidate answers, ordered by decreasing likelihood that they contain (or can be used to compute) the query result. A relevant candidate answer can be a data cell, a data row, a column or even an entire dataset. For example, consider the query "youth unemployment in France in August 2017". A Eurostat dataset 3 is a good candidate answer to this query, since, as shown in Figure 1, it contains one data cell, at the intersection of the France row with the Aug-2017 column. Now, if one changes the query to ask for "youth unemployment in France in 2017", no single data cell can be returned; instead, all the cells on the France row qualify. Finally, a dataset containing 2017 French unemployment statistics over the general population (not just youth) meets some of the search criteria (2017, France, unemployment) and thus may deserve to appear in the ranked list of results, depending on the availability of better results.

This task requires the development of specific novel methods, borrowing ideas from traditional IR, but following a new methodology. This is because our task is very specific: we are searching for information not within text, but within tables, which moreover are not flat, first normal form database relations (for which many keyword search algorithms have been proposed since [START_REF] Hristidis | DISCOVER: Keyword Search in Relational Databases[END_REF]), but partially nested tables, in particular due to the hierarchical nature of the headers, as we explained previously. While most of the reasoning performed by our algorithm follows the two-dimensional layout of data in columns and tables, bringing the data into RDF (i) puts a set of interesting, high-value data sources within reach of developers and (ii) allows us to query across nested headers using regular path queries expressed in SPARQL (as we explain in Section 3.5). We describe our algorithms for finding such answers below.

Dataset search

The first problem we consider is: given a keyword query Q consisting of a set of keywords u1, u2, ..., um and an integer k, find the k datasets most relevant for the query (we explain how we evaluate this relevance below). We view each dataset as a table containing a title, possibly a comment, a set of header cells (row header cells and column header cells) and a set of data cells, the latter containing numeric data 4. At query time, we transform the query Q into a set of keywords W = w1, w2, ..., wn using the method described in Section 3.2. Offline, this method is also used to transform each dataset's text into words, and we compute the score of each word with respect to a dataset, as described in Section 3.3. Then, based on the word-dataset scores and W, we estimate the datasets' relevance to the query as we explain in Section 3.4.

Text processing

Given a text t (appearing in a title, comment, or header cell of the dataset, or the text consisting of the set of words in the query Q), we convert it into a set of words using the following process:
• First, t is tokenized (separated into distinct words) using the KEA 5 tokenizers.
• Subsequently, each multi-word French location found in t that is listed in Geonames 6 is put together in a single token.
• Each token (word) is converted to lowercase, and stop words are removed, as well as French accents, which complicate matching.
• Each word is mapped to a word2vec vector [START_REF] Mikolov | Distributed Representations of Words and Phrases and their Compositionality[END_REF], using the gensim [START_REF] Radim | Software Framework for Topic Modelling with Large Corpora[END_REF] tool. Bigrams and trigrams are considered following [START_REF] Mikolov | Distributed Representations of Words and Phrases and their Compositionality[END_REF]. We had trained the word2vec model on a general-domain French news web page corpus.

Word-dataset score

For each dataset extracted from the statistic database, we compute a score characterizing its semantic content. A first observation is that datasets should be returned not only when they contain the exact words present in the query, but also if they contain very closely related ones. For example, a dataset titled "Duration of marriage" could be a good match for the query "Average time between marriage and divorce" because of the similarity between "duration" and "time". To this effect, we rely on word2vec [START_REF] Mikolov | Distributed Representations of Words and Phrases and their Compositionality[END_REF], which provides similar words for any word in its vocabulary: if a word w appears in a dataset D, and w is similar to w′, we consider that w′ also appears in D. The score score(w) of a dataset D w.r.t. the query word w is 1 if w appears in D. If w does not appear in D:
• If there exists a word w′, from the list of the top-50 similar words of w according to word2vec, which appears in D, then score(w) is the similarity between w and w′. If there are several such w′, we consider the one most similar to w.
• If w is the name of a Geonames place, we cannot apply the above scoring approach, because "comparable" places (e.g., cities such as Paris and London) will have high similarity in the word2vec space. As a result, when the user asks for "unemployment rate Paris", data about London might be returned instead of data about Paris. Let p be the number of places that Geonames' hierarchy API 7 returns for w (p is determined by Geonames and depends on w). For instance, when querying the API with Paris, we obtain the list Île-de-France, France, Europe. Let w′i be the place at position i, 1 ≤ i ≤ p, in this list of returned places, such that w′i appears in D. Then, we assign to D a score for w equal to (p + 1 - i)/(p + 1); that is, the most similar place according to Geonames has rank p/(p + 1), and the least similar has rank 1/(p + 1). If D contains several of the places from w's hierarchy, we assign to D a score for w corresponding to the highest-ranked such place.
• Otherwise, score(w) is 0.

Based on the notion of word similarity defined above, we will write w ≺ W to denote that the word w from dataset D either belongs to the query set W, or is close to a word in W. Observe that, by definition, for any w ≺ W, we have score(w) > 0. We also keep track of the location(s) (title, header and/or comment) in which a word appears in a dataset; this information will be used when answering queries, as described in Section 3.4.
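A condensed sketch of this scoring rule follows. It is our paraphrase: the word2vec part uses gensim's standard KeyedVectors API, while the Geonames hierarchy is passed as a plain list (the real system queries the Geonames hierarchy API).

```python
from gensim.models import KeyedVectors

# wv: pre-trained French word2vec vectors; dataset_words: set of words of D.
def word_score(w, dataset_words, wv, geonames_hierarchy=None):
    if w in dataset_words:
        return 1.0
    if geonames_hierarchy:                      # w names a place, e.g. "paris"
        p = len(geonames_hierarchy)             # -> ["ile-de-france", "france",
        for i, place in enumerate(geonames_hierarchy, start=1):  # "europe"]
            if place in dataset_words:          # first hit = highest-ranked place
                return (p + 1 - i) / (p + 1)
        return 0.0
    if w in wv:                                 # top-50 word2vec neighbors,
        for neighbor, similarity in wv.most_similar(w, topn=50):
            if neighbor in dataset_words:       # returned most-similar first
                return similarity
    return 0.0
```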
In summary, for each dataset D and word w ∈ D such that w ≺ W, we compute and store tuples of the form (w, score(w), location(w, D), D), where location(w, D) ∈ {T, HR, HC, C} indicates where w appears in D: T denotes the title, HR denotes a row header cell, HC denotes a column header cell, and C denotes an occurrence in a comment. These tuples are encoded in JSON and stored in a set of files; each file contains the scores for one word (or bigram) w, and all the datasets.

Relevance score function

Content-based relevance score function. This function, denoted g1(D, W), quantifies the interest of dataset D for the word set W; it is computed based on the tuples (w, score(w), location(w, D), D) where w ≺ W. We experimented with many score functions that give a high ranking to datasets that match many keywords (see details in Section 4.2.2). These functions are monotonous in the score of D with respect to each individual word w. This enables us to apply Fagin's threshold algorithm [START_REF] Fagin | Optimal Aggregation Algorithms for Middleware[END_REF] to efficiently determine the k datasets having the highest g1 score for the query W.

Location-aware score components. The location, title (T), row or column headers (HR or HC), or comments (C), where a keyword occurs in a dataset can also be used to assess the dataset's relevance. For example, a keyword appearing in the title often signals a dataset more relevant for the search than if the keyword appears in a comment. We exploit this observation to pre-filter the datasets for which we compute exact scores, as follows. We run the TA algorithm using the score function g1 to select r × k datasets, where r is an integer larger than 1. For each dataset thus retrieved, we compute a second, refined score function g2 (see below), which also accounts for the locations in which keywords appear in the datasets; the answer to the query consists of the top-k datasets according to g2.

The second score function g2(D, W) is computed as follows. Let w′ be a word appearing at a location loc ∈ {T, HR, HC, C} such that w′ ≺ W. We denote by w′_loc,D (or just w′_loc when D is clear from the context) the existence of one or several located occurrences of w′ in D in loc. Thus, for instance, if "youth" appears twice in the title of D and once in a row header, this leads to two located occurrences, namely youth_T,D and youth_HR,D. Then, for loc ∈ {T, HR, HC, C} we introduce a coefficient α_loc allowing us to calibrate the weight (importance) of keyword occurrences in location loc. To quantify D's relevance for W due to its loc occurrences, we define a location score component f_loc(D, W). In particular, we have experimented with two f_loc functions:

• f_loc^sum(D, W) = α_loc ^ (Σ_{w ≺ W} score(w_loc,D))
• f_loc^count(D, W) = α_loc ^ (count{w ≺ W})

where for score(w_loc,D) we use the value score(w), the score of D with respect to w (Section 3.3). Thus, each f_loc "boosts" the relevance scores of all loc occurrences by a certain exponential formula, whose relative importance is controlled by α_loc. Further, the relevance of a dataset increases if different query keywords appear in different header locations, that is, some in HR (header rows) and some in HC (header columns). In such cases, the data cells at the intersection of the respective rows and columns may provide very precise answers to the query, as illustrated in Figure 1: here, "France" is present in HC while "youth" and "17" appear in HR.
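Written out as code, the two location components look as follows; this is our transcription of the formulas, with the convention, assumed here, that a location with no matching occurrence contributes nothing.

```python
# located: dict mapping loc in {"T", "HR", "HC", "C"} to {word: score(w)}
# for the words w ≺ W found in dataset D at that location.
def f_loc_sum(located, loc, alpha):
    scores = located.get(loc, {})
    return alpha[loc] ** sum(scores.values()) if scores else 0.0

def f_loc_count(located, loc, alpha):
    scores = located.get(loc, {})
    return alpha[loc] ** len(scores) if scores else 0.0
```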
To reflect this, we introduce another function f_H(D, W), computed on the scores of all unique located occurrences from row or column headers; we also experimented with the two variants, f_H^sum and f_H^count, introduced above. Putting it all together, we compute the content- and location-aware relevance score of a dataset for W as:

g2(D, W) = g1(D, W) + Σ_{loc ∈ {T, HR, HC, C}} f_loc(D, W) + f_H(D, W)

Finally, we also experimented with another function g2*(D, W), defined as: g2*(D, W) = g2(D, W) if f_T(D, W) > 0, and 0 otherwise.

g2* discards datasets having no relevant keyword in the title. This is due to the observation that statistic dataset titles are typically carefully chosen to describe the dataset; if no query keyword can be connected to the title, the dataset is very likely not relevant.

Data cell search

We now consider the problem of identifying the data cell(s) (or the data rows/columns) that can give the most precise answer to the user query. Such an answer may consist of exactly one data cell. For example, for the query "unemployment rate in Paris", a very good answer would be a data cell D_r,c whose closest row header cell contains "unemployment rate" and whose closest column header cell contains "Paris". Alternatively, query keywords may occur not in the closest column header cell of D_r,c but in another header cell that is its ancestor in D. For instance, in Figure 1, let D_r,c be the data cell at the intersection of the Aug-17 column with the France row: the word "youth" occurs in an ancestor of the Aug-17 header cell, and "youth" clearly characterizes D_r,c's content. We say the closest (row and column) header cells of D_r,c and all their ancestor header cells characterize D_r,c. Another valid answer to the "unemployment rate in Paris" query would be a whole data row (or a whole column) whose header contains "unemployment" and "Paris". We consider this to be less relevant than a single data cell answer, as it is less precise. We term data cell answer an answer consisting of either a data cell, or a row or column; below, we describe how we identify such answers (a sketch follows the list).

We identify data cell answers from a given dataset D as follows. Recall that all located occurrences in D, and in particular those of the form w_HR and w_HC for w ≺ W, have been pre-computed; each such occurrence corresponds either to a header row r or to a header column c. For each data cell D_r,c, we define #(r, c) as the number of unique words w ≺ W occurring in the header cells characterizing D_r,c. Data cells in D may be characterized by:
(1) some header cells containing HR occurrences (for some w ≺ W), and some others containing HC occurrences;
(2) only header cells with HR occurrences (or only header cells with HC ones).

Observe that if D holds both cell answers (case 1) and row- or column answers (case 2), by definition, they occur in different rows and columns. Our returned data cell answers from D are:
• If there are cells in case 1, then each of them is a data cell answer from D, and we return the cell(s) with the highest #(r, c) values.
• Only if there are no such cells but there are some relevant rows or columns (case 2), we return the one(s) with the highest #(r, c) values. This is motivated by the intuition that if D has a specific, one-cell answer, it is unlikely that D also holds a less specific, yet relevant one.

Concretely, we compute the #(r, c) values based on the (word, score, location, dataset) tuples we extract (Section 3.3).
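The selection rule can be summarized in a few lines; this is our simplified rendition, where headers are already flattened into sets of matched query words per row and per column (the ancestor traversal is done with SPARQL in the real system).

```python
# row_hits[r] / col_hits[c]: set of query words matched by the header
# cells characterizing row r / column c of dataset D.
def cell_answers(row_hits, col_hits):
    cells = {(r, c): len(rw | cw)                 # #(r, c): unique matched words
             for r, rw in row_hits.items() if rw
             for c, cw in col_hits.items() if cw}
    if cells:                                     # case 1: precise cell answers
        best = max(cells.values())
        return [rc for rc, n in cells.items() if n == best]
    # case 2: fall back to whole rows or columns
    lines = {("row", r): len(w) for r, w in row_hits.items() if w}
    lines.update({("col", c): len(w) for c, w in col_hits.items() if w})
    if not lines:
        return []
    best = max(lines.values())
    return [line for line, n in lines.items() if n == best]
```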
We rely on SPARQL 1.1 [START_REF] W3c | SPARQL Protocol and RDF Query Language[END_REF] queries over the RDF representation of our datasets (Section 2) to identify the cell or row/column answer(s) from each D. SPARQL 1.1 features property paths, akin to regular expressions; we use them to identify all the header cells characterizing a given D_r,c. Note that this method yields only one element (cell, row or column) from each dataset D, or several elements if they have the exact same score. An alternative would have been to allow returning several elements from each dataset; then, one needs to decide how to collate (inter-rank) the scores of different elements identified in different datasets. We consider that this alternative would increase the complexity of our approach for unclear benefits: the user experience is often better when results from the same dataset are aggregated together, rather than considered as independent. Suggesting several data cells per dataset is then more a question of result visualization than one pertaining to the search method.

EVALUATION

This section describes our experimental evaluation. Section 4.1 describes the dataset and query workload we used, which was split into a development set (on which we tuned our score functions) and a test set (on which we measured the performance of our algorithms). It also specifies how we built a "gold-standard" set of answers against which our algorithms were evaluated. Section 4.2 details the choice of parameters for the score functions.

Datasets and Queries

We have developed our system in Python (61 classes and 4,071 lines). Crawling the INSEE Web site, we extracted information out of 19,984 HTML tables and 91,161 spreadsheets, out of which we built a Linked Open Data corpus of 945 million RDF triples. We collected all the articles published online by the fact-checking team "Les Décodeurs", 8 a fact-checking and data journalism team of Le Monde, France's leading national newspaper, between March 10th and August 2nd, 2014; there were 3,041 articles. From these, we selected 75 articles whose fact-checks were backed by INSEE data; these all contain links to https://www.insee.fr. By reading these articles and visiting their referenced INSEE datasets, we identified a set of 55 natural language queries which the journalists could have asked a system like ours. 9

We experimented with a total of 288 variants of the g2 function:
• g1 was either g1a, g1b or g1c;
• g2 relied either on f_loc^sum or on f_loc^count; for each of these, we tried different value combinations for the coefficients α_T, α_HC, α_HR and α_C;
• we used either the g2 formula, or its g2* variant.

We built a gold-standard reference for this query set as follows. We ran each query q through our dataset search algorithm for each of the 288 g2 functions, asking for the k = 20 most relevant datasets. We built the union of all the answers thus obtained for q and assessed the relevance of each dataset as either 0 (not relevant), 1 ("partially relevant", meaning the user could find some information related to their query) or 2 ("highly relevant", meaning the user could find the exact answer to their query); a Web front-end was built to facilitate this process.

Experiments

We specify our evaluation metric (Section 4.2.1), then describe how we tuned the parameters of our score function, and the results of our experiments focused on the quality of the returned results (Section 4.2.2).
Last but not least, we put them into the perspective of a comparison with the baselines which existed prior to our work: INSEE's own search system, and plain Google search (Section 4.2.3).

Figure 2: Screen shot of our search tool. In this example, the second result is a full column; clicking on "Tous les résultats" (all results) renders all the cells from that column.

4.2.1 Evaluation Metric. We evaluated the quality of the answers of our runs and of the baseline systems by their mean average precision (MAP 10), widely used for evaluating ranked lists of results. MAP is traditionally defined based on a binary relevance judgment (relevant or irrelevant in our case). We experimented with the two possibilities:
• MAP_h is the mean average precision where only highly relevant datasets are considered relevant;
• MAP_p is the mean average precision where both partially and highly relevant datasets are considered relevant.
We also experimented with some modified variants that take into account the sum of matching keywords.

4.2.2 Parameter Estimation and Results. We experimented with the following flavors of the g1 function:
• g1b(D, W) = 10 ^ (Σ_{w ≺ W} score(w))
• g1d(D, W) = 10 ^ (count{w ≺ W})
• g1f(D, W) = Σ_{w ≺ W} 10 ^ score(w)
• g1c(D, W) = Σ_{w ≺ W} score(w) + g1b(D, W)
• g1e(D, W) = Σ_{w ≺ W} score(w) + g1d(D, W)
• g1g(D, W) = Σ_{w ≺ W} score(w) + g1f(D, W)

A randomly selected development set of 29 queries was used to select the best values for the 7 parameters of our system: α_T, α_C, α_HR, α_HC, α_H, as well as the different versions of g1 and g2. For this purpose, we ran a grid search with different values of these parameters, selected among {3, 5, 7, 8, 10}, on the development query set, and applied the combination obtaining the best MAP results to the test set (composed of the remaining 26 queries).

We found that the same score function led to both the best MAP_h and the best MAP_p on the development query set. In terms of the notations introduced in Section 3.4, this best-performing score function is obtained by:
• using g1c;
• using f^sum and the coefficient values α_T = 10, α_C = 3, α_HR = 5, α_HC = 5 and α_H = 7;
• using the g2* variant, which discards datasets lacking matches in the title.

On the test set, this function led to MAP_p = 0.76 and MAP_h = 0.70.

Given that our test query set was relatively small, we performed two more experiments aiming at testing the robustness of the parameter selection on the development set:
• We used a randomly selected subset of 17 queries among the 29 development queries, and used it as a new development set. The best score function for this new development set was the same; further, the MAP results on the two development sets are very similar (see Table 1).
• We computed the MAP scores obtained on the full development set for all 288 combinations of parameters, and plotted them from the best to the worst (Figure 3; due to the way we plot the data, two MAP_h and MAP_p values shown on the same vertical line may not correspond to the same score function). The figure shows that the 15 best-performing combinations lead to scores higher than 0.80, indicating that any of these could be used with pretty good results.

These two results tend to show that we can consider our results as stable, despite the relatively small size of the query set.

Running time. Processing and indexing the words appearing in the datasets (and those close to them) took approximately three hours. We ran our experiments on a machine with 126 GB of RAM and 40 Intel(R) Xeon(R) E5-2640 v4 CPUs @ 2.40 GHz. The average query evaluation time over the 55 queries we identified is 0.218 seconds.

4.2.3 Comparison against Baselines. To put our results into perspective, we also computed the MAP scores of our test query set on the two baselines available prior to our work: INSEE's own dataset search system, 11 and Google search instructed to look only within the INSEE web site. Similarly to the evaluation process of our system, for each query we selected the first 20 elements returned by these systems and manually evaluated each dataset's relevance to the given query. Table 2 depicts the MAP results thus obtained, compared against those of our system. Google's MAP is very close to ours; while our work is obviously not placed as a rival of Google in general, we view this as validating the quality of our results with (much!) smaller computational efforts. Further, our work follows a white-box approach, whereas it is well known that the top results returned by Google are increasingly due to other factors beyond the original PageRank [START_REF] Brin | The Anatomy of a Large-Scale Hypertextual Web Search Engine[END_REF] information, and may vary in time and/or with result personalization, Google's own A/B testing, etc.

We end this comparison with two remarks. (i) Our evaluation was made on INSEE data alone, due to the institute's extensive database on which fact-checking articles were written, from which we derived our test queries.
However, as stated before, our approach could easily be adapted to other statistic Web sites, as we only need the ability to crawl the tables from the Web site. As is the case for INSEE, this method may be more robust than using the category-driven navigation or the search feature built into the Web site publishing the statistic information. (ii) Our system, based on a fine-granularity modeling of the data from statistic tables, is the only one capable of returning cell-level answers (Section 3.5). We show such answers to the users together with the header cells characterizing them, so that users can immediately appreciate their accuracy (as in Figure 2).

RELATED WORK AND PERSPECTIVES

In this work, we focused on improving the usability of statistic tables (HTML tables or spreadsheets) as reference data sources against which claims can be fact-checked. Other works focused on building textual reference data sources from general claims [START_REF] Bar-Haim | Stance Classification of Context-Dependent Claims[END_REF][START_REF] Levy | Context Dependent Claim Detection[END_REF], congressional debates [START_REF] Thomas | Get out the vote: Determining support or opposition from Congressional floor-debate transcripts[END_REF] or tweets [START_REF] Rajadesingan | Identifying Users with Opposing Opinions in Twitter Debates[END_REF].

Some works focused on exploiting the data in HTML and spreadsheet tables found on the Web. Tschirschnitz et al. [START_REF] Tschirschnitz | Detecting Inclusion Dependencies on Very Many Tables[END_REF] focused on detecting the semantic relations that hold between millions of Web tables, for instance detecting so-called inclusion dependencies (when the values of a column in one table are included in the values of a column in another table). Closest to our work, M. Kohlhase et al. [START_REF] Kohlhase | XLSearch: A Search Engine for Spreadsheets[END_REF] built a search engine for finding and accessing spreadsheets by their formulae. This is less of an issue for the tables we focus on, as they contain plain numbers and not formulas. Google's Fusion Tables work [START_REF] Gonzalez | Google Fusion Tables: Data Management, Integration, and Collaboration in the Cloud[END_REF] focuses on storing, querying and visualizing tabular data; however, it does not tackle keyword search with a tabular semantics as we do. Google has also issued Google Tables as a working product. 12 In March 2018, we tried to use it for some sample queries we addressed in this paper, but the results we obtained were of lower quality (some were irrelevant). We believe this may be due to Google's focus on data available on the Web, whereas we focus on very high-quality data curated by INSEE experts, but which needed our work to be easily searchable.

Currently, our software is not capable of aggregating information, e.g., if one asks for unemployed people from all départements within a region, we are not capable of summing these numbers up into the number corresponding to the region. This could be addressed in future work, which could focus on applying OLAP-style operations of drill-down or roll-up to further process the information we extract from the INSEE datasets.

Figure 1: Sample dataset on French youth unemployment (RDF data extracted from the INSEE statistics).
Figure 3: MAP results on the development set for 288 variants of the score function.

Table 1: Results on the first and second development set.

        Dev. set 29 queries   Dev. set 17 queries
MAP_p   0.82                  0.83
MAP_h   0.78                  0.80

Table 2: Comparing our system against baselines.

        Our system   INSEE search   Google search
MAP_p   0.76         0.57           0.76
MAP_h   0.70         0.46           0.69

1 https://gitlab.inria.fr/cedar/insee-crawler, https://gitlab.inria.fr/cedar/excel-extractor
2 https://jena.apache.org/documentation/tdb/index.html
3 http://ec.europa.eu/eurostat/statistics-explained/images/8/82/Extra_tables_Statistics_explained-30-11-2017.xlsx
4 This assumption is backed by an overwhelming majority of cases given the nature of statistic data. We did encounter some counterexamples, e.g. http://ec.europa.eu/eurostat/cache/metadata/Annexes/mips_sa_esms_an1.xls. However, these are very few and thus we do not take them into account in our approach.
5 https://github.com/boudinfl/kea
6 http://www.geonames.org/
7 http://www.geonames.org/export/place-hierarchy.html
8 http://www.lemonde.fr/les-decodeurs/
9 This was not actually the case; our system was developed after these articles were written.
10 https://en.wikipedia.org/wiki/Information_retrieval#Mean_average_precision
11 Available at https://insee.fr
12 https://research.google.com/tables
34,226
[ "1004811", "742652", "18076" ]
[ "451441", "2071", "451441", "2071", "542073" ]
01745792
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01745792/file/main.pdf
Arthur Charguéraud email: arthur.chargueraud@inria.fr Alan Schmitt email: alan.schmitt@inria.fr Thomas Wood email: thomas.wood09@imperial.ac.uk

JSExplain: A Double Debugger for JavaScript

We present JSExplain, a reference interpreter for JavaScript that closely follows the specification and that produces execution traces. These traces may be interactively investigated in a browser, with an interface that displays not only the code and the state of the interpreter, but also the code and the state of the interpreted program. Conditional breakpoints may be expressed with respect to both the interpreter and the interpreted program. In that respect, JSExplain is a double-debugger for the specification of JavaScript.

Indeed, in JS, the evaluation of any sub-expression, of any type conversion, and of most internal operations from the specification may result in the execution of user code, hence the raising of an exception, interrupting the normal control flow. Throughout its successive editions, the ECMA standard progressively introduced a notation akin to an exception monad ( §1.2). This notation is naturally translated into real code by a proper monadic bind operator of the exception monad. Regarding the state, the standard assumes a global state. A reference interpreter could either assume a global state, modified with side-effects, or thread the state explicitly in purely-functional style. We chose the latter approach for three reasons. First, we already need a monad for exceptions, so we may easily extend this monad to also account for the state. Second, starting from code with an explicit state would make it easier to generate a corresponding inductive definition in a formal logic (e.g., Coq), which we would like to investigate in the future. Third, to ease the reading, one may easily hide a state that is explicitly threaded; the converse, materializing a state that is implicit, would be much more challenging.

We thus write our reference interpreter in a purely-functional language extended with syntactic sugar for the monadic notation to account for the state and the propagation of abrupt termination ( §2). For historical reasons, we chose a subset of OCaml as source syntax, but other languages could be used. In fact, we implemented a translator from our subset of OCaml to a subset of JS (a subset involving no side effects and no type conversions). We thereby obtain a JS interpreter that is able to execute JS programs inside a JS virtual machine; JS fans should be delighted. To further improve accessibility to JS programmers, we also translate the source code of our interpreter into a human-readable JS-style syntax, which we call pseudo-JS, and which essentially consists of JS syntax augmented with a monadic notation and with basic pattern matching.

Our reference semantics for JS is inherently executable. We may thus execute our interpreter on test suites, either by compiling and executing the OCaml code, or by executing the JS translation of that code. It is indeed useful to be able to check that evaluating examples from the JS test suites against our reference semantics produces the desired output. Even more interesting is the possibility of investigating, step by step, the evaluation of the interpreter on a given test case. Such an investigation allows one to understand why the evaluation of a particular test case produces a particular output; given the complexity of JS, even an expert may easily get puzzled by the output value of a particular piece of code.
Furthermore, interactive execution makes it easier for the contributor of a new JS feature to add new test cases and to check that these tests trigger the new features and correctly interact with existing features. In this paper, we present a tool, called JSExplain, for investigating JS executions. This tool can be thought of as a double debugger, which displays both the state of the interpreted program and that of the interpreter program. In particular, our tool supports conditional breakpoints expressed simultaneously on the interpreter program and the interpreted program. To implement this tool, we generate a version of our interpreter that is instrumented for producing execution traces ( §3), and we provide a web-based tool to navigate through such traces ( §4). As far as we know, our tool is the first such double debugger, i.e., a debugger with specific additions for dealing with programs that interpret other programs ( §5). English Specification of JS To illustrate the style in which the JavaScript standard (ECMA) [START_REF]Ecmascript 2017 language specification[END_REF] is written, consider the description of addition, which will be our running example throughout the paper. In JS, the addition operator casts its arguments to numbers and computes their sum, except if one of the two arguments is a string, in which case the other argument is cast to a string and the two strings are concatenated. The ECMA5 presentation (prior to June 2016) appears in Figure 1.
Figure 1: ECMA5 specification of addition.
Evaluation of: AdditiveExpression : AdditiveExpression + MultiplicativeExpression
1. Let lref be the result of evaluating AdditiveExpression.
2. Let lval be GetValue(lref).
3. Let rref be the result of evaluating MultiplicativeExpression.
4. Let rval be GetValue(rref).
5. Let lprim be ToPrimitive(lval).
6. Let rprim be ToPrimitive(rval).
7. If Type(lprim) is String or Type(rprim) is String, then
- Return the String that is the result of concatenating ToString(lprim) followed by ToString(rprim).
8. Return the result of applying the addition operation to ToNumber(lprim) and ToNumber(rprim).
First, observe that the presentation describes both the parsing rule for addition and its evaluation rule. Presumably for improved accessibility, the JS standard does not make explicit the notion of an abstract syntax tree (AST). The semantics of addition goes as follows: first evaluate the left branch to a value, then evaluate the right branch to a value, then convert both values (which might be objects) into primitive values (e.g., string, number, ...), then test whether one of the two arguments is a string. If so, cast both arguments to strings and return their concatenation; otherwise cast both arguments to numbers and return their sum. The presentation style used in ECMA5 gives no details about the propagation of exceptions. While the treatment of exceptions is explicit for statements, it is left implicit for expressions. For example, if the evaluation of the left branch raises an exception, the right branch should not be evaluated. It appeared that leaving the treatment of exceptions implicit could lead to ambiguities as to what exactly should or should not be evaluated when an exception gets triggered. The ECMA committee hates such ambiguities, because they could (and typically do) result in different browsers exhibiting different behaviors, the nightmare of web developers. In ECMA6, such ambiguities were resolved by making the propagation of exceptions explicit. Figure 2 shows the updated specification for the addition operator. There are two main changes. First, each piece of evaluation is described on its own line, thereby making the evaluation order crystal clear. Second, the meta-operation ReturnIfAbrupt is invoked on every intermediate result. This meta-operation essentially corresponds to an exception monad. The ECMA6 standard, which aims to be accessible to a large audience, avoids the introduction of the word "monad". Instead, it specifies ReturnIfAbrupt as a "macro", as shown in Figure 3.
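The behavior of that macro can be captured in a few lines of code; the following JS sketch is ours (hypothetical names, not the standard's text), anticipating the completion records discussed next:

// Completion records distinguish normal results from abrupt ones.
function normalCompletion(value) { return { type: "normal", value: value }; }
function throwCompletion(value) { return { type: "throw", value: value }; }

// ReturnIfAbrupt, seen as the bind of an exception monad: if the completion
// is abrupt, return it immediately from the calling operation; otherwise
// continue with the inner value.
function returnIfAbrupt(completion, k) {
  return (completion.type === "normal") ? k(completion.value) : completion;
}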
Essentially, every result consists of a "completion record", which corresponds to a sum type distinguishing normal results from exceptions. In all constructs except try-catch blocks, exceptions interrupt the normal flow of the evaluation. As a result, the ECMA6 specification is scattered with about 1100 occurrences of ReturnIfAbrupt operations. Realizing the impracticability of that style of specification, the standardization committee decided to introduce in ECMA7 an additional layer of syntactic sugar in subsequent versions of the specification. As detailed in Figure 4, they define the question mark symbol to be a lightweight shorthand for ReturnIfAbrupt steps. The specification of addition in that new style is shown in Figure 5.
Figure 5: ECMA7 and ECMA8 specification of addition.
Evaluation of: AdditiveExpression : AdditiveExpression + MultiplicativeExpression
1. Let lref be the result of evaluating AdditiveExpression.
2. Let lval be ? GetValue(lref).
3. Let rref be the result of evaluating MultiplicativeExpression.
4. Let rval be ? GetValue(rref).
5. Let lprim be ? ToPrimitive(lval).
6. Let rprim be ? ToPrimitive(rval).
7. If Type(lprim) is String or Type(rprim) is String, then
a. Let lstr be ? ToString(lprim).
b. Let rstr be ? ToString(rprim).
c. Return the String that is the result of concatenating lstr and rstr.
8. Let lnum be ? ToNumber(lprim).
9. Let rnum be ? ToNumber(rprim).
10. Return the result of applying the addition operation to lnum and rnum.
The presentation of ECMA7 and ECMA8 (Figure 5) is both more concise than that of ECMA6 (Figure 2) and more formal than that of ECMA5 (Figure 1). The use of question marks is to be compared in §2 with the monadic notation that we use for our formal semantics. Requests from the JS Committee The JavaScript standardization body, part of ECMA and known as TC39, includes representatives from browser vendors, major actors of the web, and academics. It aims at defining a common semantics that all browsers should implement. TC39 faces major challenges. On the one hand, it must ensure full backward compatibility, to avoid "breaking the web". In particular, no feature used in the wild ever gets removed from the specification. On the other hand, the committee imposes the rule that no feature may be added to the standard before it has been implemented, shipped, and tested at scale in at least two distinct major browsers. Any member of the committee may propose new features, hence there are many proposals being actively developed, at different stages of formalization [START_REF]TC39 proposals[END_REF]. The rapid evolution of the standard stresses the need for appropriate tools to assist in the rewriting, testing, and debugging of the semantics. In particular, several members with whom we have had interactions expressed their need for several basic tools, such as: • a tool for knowing whether all variables that occur in the specification are properly defined (bound) somewhere; • a tool to perform basic type-checking of the meta-functions and of the variables involved in the specification; • a tool for checking that effectful operations go on a line of their own, to avoid ambiguity in the order of evaluation; • a tool for checking that the behavior is specified in all cases;
• a tool able to tell which lines from the specification are not covered by any test from the main test suite (test262 [15]); • a tool able to execute step-by-step the specification on concrete JS programs, and to inspect the value of the variables. In particular, step-by-step execution is critically needed to evaluate new features. When the committee decides that a feature proposal is worth integrating, it typically does not accept the proposal as is, but instead modifies the proposal in a way that is amenable to a simple, clear specification without corner cases, carefully trying to avoid harmful interactions with other existing features (or planned features). During this process, at some point the committee members have in their hands a draft of the extended semantics as well as a collection of test cases illustrating the new behaviors that should be introduced. Naturally, they would like to check that their formalization of the extended semantics assigns the expected behavior to each of the test cases. One might argue that such a task could be performed by modifying one of the mainstream browsers. Yet, existing JS runtimes are built with efficiency in mind, with huge code bases involving numerous optimizations. As such, modifying the code in any way is too costly for committee members to investigate variations on a feature request. Even if they could invest the effort, the distance between the English prose specification and the implementation would be too large to have any confidence that the two match, i.e., that the behavior implemented in the code matches the behavior described by the prose. An alternative approach to testing a new feature is to develop an elaboration (local translation) of that feature into plain JS. This can take the form of syntactic sugar adding a missing API, namely a polyfill, or the form of a source-to-source translation, namely a transpiler. For instance, one might translate so-called "template literals" into simple string concatenation:
/* new feature */          /* plain JavaScript */
var name = "me";           var name = "me";
`hello ${name}`;           "hello " + name;
While polyfills and transpilers are a simple approach to testing new features, they have two major limitations. First, the encoding might be very invasive. For instance, the 2015 version of ECMAScript added proxies, and as a consequence significantly changed the internal methods of the language; the Babel [1] transpiler for proxies [START_REF]Babel proxy plugin[END_REF] is able to simulate this feature in prior versions of JS, but at the cost of replacing all field access operations with calls to wrapper functions. Second, the interaction of several new features implemented using these approaches is very difficult to anticipate. Formal Specifications of JS In recent years, two projects, JSCert [START_REF] Bodin | A trusted mechanised javascript specification[END_REF] and KJS [START_REF] Park | KJS: a complete formal semantics of javascript[END_REF] have proposed a formal specification for a significant subset of JS. JSCert provides a big-step inductive definition for ECMA5 (technically, a pretty-big-step specification [START_REF] Charguéraud | Pretty-big-step semantics[END_REF]), formalized in the Coq proof assistant [START_REF]The Coq proof assistant reference manual[END_REF].
JSCert comes along with a reference interpreter, called JSRef, that is proved correct with respect to the inductive definition. JSRef may be extracted into executable OCaml code for executing tests. KJS describes a small-step semantics for JS as a set of rewriting rules, using the K framework [START_REF] Rosu | An overview of the K semantic framework[END_REF]. This framework has been used to formalize the semantics of several other real-world languages. It provides in particular tool support for executing (syntax-directed) transition rules on a concrete input program. At first sight, it might seem that a formal specification addresses all the requests from the committee. Definitions are thoroughly type-checked; in particular, all variables must be properly bound. Definitions, being defined in a formal language, are ambiguity-free; in particular, the order of evaluation and the propagation of exceptions are precisely specified. The semantics can be executed on concrete input programs; moreover, with some extra tooling, one may execute a set of programs and report on the coverage of the specification by the tests. Given all the nice features of formal semantics, why wouldn't the standardization committee TC39 consider one of these formal semantics as the reference for the language? After discussing with senior members of TC39, we understand that there are (at least) three main reasons why there is no chance for a formal semantics such as JSCert or KJS to be adopted as the reference semantics. (1) Formal specifications in Coq or in K use syntax and concepts that are not easily accessible to JS programmers. Yet, the specification is meant to be read by a wide audience. (2) These formal languages have a cost of entry that is too high for committee members to reach the level of proficiency required for contributing new definitions all by themselves. (3) JSCert and KJS come with specifications that can be executed, yet provide no debugger-style support for interactively navigating through an execution and for visualizing the state and the values of the variables, and thus do not help so much in tuning the description of new features. In the present work, we temporarily leave aside the motivation of giving a formal semantics to JS that one could use to formalize properties of the language (e.g., security properties), and rather focus on trying to provide a formal semantics that better meets the day-to-day needs of the TC39 committee. Contribution In this paper, we present a tool, called JSExplain, which aims at providing a formal semantics for JS that addresses the aforementioned limitations of prior work. Our contribution is two-fold. First, we present a specification for JS expressed in a language that, we argue, JS programmers can easily read and write ( §2). Second, we present an interactive tool that supports step-by-step execution of the specification on an input JS program ( §3 and §4). Our tool mimics the features of a debugger, such as navigation controls, state and variable visualization, and conditional breakpoints, but does so for both the interpreter program and the interpreted program. The language in which we display the specification consists of a subset of JS extended with syntactic sugar for monads and basic pattern matching. This language, which we call pseudo-JS, could be the source syntax for our specification. However, for historical and technical reasons, we use as input syntax a subset of OCaml, which is processed using the OCaml type-checker.
Our current tool automatically converts the OCaml AST into pseudo-JS code. In the future, we might as well have our reference interpreter be written directly in pseudo-JS syntax, and we could typecheck that code either by converting it to OCaml or by reimplementing a basic ML typechecker. A third alternative would be to use the Reason syntax [START_REF][END_REF], a JS-like syntax for OCaml programs. The only difference between the approaches is whether TC39 committee members would prefer to write OCaml-style or JS-style code. SPECIFICATION LANGUAGE 2.1 Constructs of the Language The input syntax in which we write and display the specification is a purely-functional language that includes the following constructs: variables, constants, sequence, conditional, let-binding, function definition, function application (with support for prefix and infix functions), data constructors, records (including record projections, and the "record-with" construct to build a copy of a record with a number of fields updated), tuples (i.e., anonymous records), and simple pattern matching (only with non-nested patterns, restricted to data constructors, constants, variables, and wildcards). For convenience, let-bindings and functions may bind patterns (as opposed to only variables). We purposely aim for a specification language with a limited number of constructs and a very standard semantics, to minimize the cost of entry into that language. Note that the input code is typechecked in ML. (Polymorphism is used mainly for type-checking options and lists, and operations on them.) As explained earlier ( §1.2), the semantics involves the propagation of exceptions and other abrupt behaviors (break, continue, and return). Their propagation can be described within our small language, using functions and pattern matching. Nevertheless, introducing a little bit of syntactic sugar greatly improves readability. For example, we write "let%run x = e1 in e2" to mean "if_run e1 (fun x -> e2)", where if_run is a function that implements our monad. The monadic operator if_run admits a polymorphic type, hence functions from the specification may return objects of various types. Nevertheless, in practice most functions from the ECMA standard are described as returning a "completion triple", which either describes abrupt termination or describes a value. In a number of cases, the value is in fact constrained to be of a particular type. For example, if to_number produces a value, then this value is necessarily a number. The standard exploits this invariant implicitly in formulations such as "let n be the number produced by calling to_number". In contrast, our code needs to explicitly project the number from the value returned. To that end, we introduce specialized monads such as if_number, written in practice "let%number n = e1 in e2". (An alternative approach would be to assign polymorphic types to completion triples; however, following this route would require diverging slightly from ECMA's specification in a number of places.) Figure 6 shows the specification of addition in our reference interpreter, in OCaml syntax extended with the monadic notation; this code implements its informal equivalent from Figure 2.
In that code, s denotes the state, c denotes the environment (variable and lexical environment, in JS terminology), op corresponds to the operator (here, the constructor C_binary_op_add corresponds to the AST token describing the operator +), v1 and v2 correspond to the arguments, and w1 and w2 to their primitive values. The function strappend denotes string concatenation, whereas "+." denotes addition on floating-point numbers (i.e., JS's numbers). First observe that, as explained earlier ( §1.1), the state is threaded throughout the code. We show in the next section how to hide the state variables ( §2.2). Observe also that the code relies on a few auxiliary functions. The function type_compare implements comparison over JS types; to keep our language small and explicit, we do not want to assume a generic comparison function with a nontrivial specification. The functions to_primitive_def, to_string, and to_number are internal functions from the specification that implement conversions. These operations might end up evaluating arbitrary user code, and thus could perform side-effects or raise exceptions, hence the need to wrap them in monadic let-bindings. One important feature of this source language is that it does not involve any "implicit" mechanism. All type conversions are explicit in the code, so it is always perfectly clear what is meant. In particular, there is no need to type-check the code to figure out its semantics. In summary, the OCaml code of the interpreter (e.g., Figure 6) is well-suited as a non-ambiguous input language. Note that this code may be compiled using OCaml's compiler in order to run test cases; the current version of our interpreter passes more than 5000 test cases from the official test suite (test262). Translation into Pseudo-JS Syntax Although we believe that it is a desirable feature to have a fully explicit source language, there is also virtue in pretty-printing the source code of our interpreter in a more concise syntax. The "noise" that appears in the formal specification (e.g., Figure 6) comes from three main sources: (1) every function takes as argument the environment; (2) every function takes as argument and returns a description of the mutable state (a.k.a. heap); (3) values are typically built using numerous constructors, e.g. C_value_prim, which lifts a number (an OCaml value of type float) into a JS value (an OCaml value of type value). Fortunately, we can easily eliminate these three sources of noise. First, the environment is almost always passed unchanged. It may be modified only during the scope of a function call, a with construct, or a block. When it is modified, new bindings are simply pushed into the environment (which behaves like a stack), and subsequently popped. Thus, we may assume, like the ECMA specification does, that the environment is stored in a global state. This saves the need to pass an argument called "c" around. Second, the description of the mutable state is threaded through the code. The "current state" is passed as argument to every function that might perform side-effects, and, symmetrically, the "updated state" is returned to the caller, which binds a fresh name for it. Considering that there is only one version of the state at any given point of an execution, we may assume, like the ECMA specification does, that the state is stored in a global variable. This saves the need to pass values called "s1", "s2"... around. Third, the presence of many constructors is due to the need for casts.
Many of these casts could, however, be viewed as "implicit casts" (or "coercions", in Coq's terminology). For a carefully chosen set of casts, defined once and for all, and for a well-typed program with implicit casts, there exists a unique (non-ambiguous) way to insert casts in order to make the program type-check. Although we have previously argued that explicit casts are useful, as they allow giving a semantics that does not depend on type-checking, we now argue that it may also be useful to pretty-print the code assuming implicit casts, in order to improve readability. In summary, we propose to the reader of the specification a version that features implicit state, implicit context, and implicit casts. Given that we are playing the game of pretty-printing syntax, we take the opportunity to switch along the way to a JS-friendly syntax, using brackets and semicolons. This target language, called pseudo-JS syntax, consists of a subset of the JS syntax, extended with monadic notation, and an extended switch construct that is able to bind variables (like OCaml's pattern matching, but restricted to non-nested patterns for simplicity). The pretty-printing of the addition operator in pseudo-JS syntax appears in Figure 7. To illustrate the extended pattern-matching feature of pseudo-JS, we show below an excerpt from the main switch that interprets an expression.
switch (t) {
  case Coq_expr_identifier(x):
    var%run r = identifier_resolution(x);
    return (r);
  case Coq_expr_binary_op(e1, op, e2):
    ...
TRACE-PRODUCING EXECUTIONS JSExplain is a tool for interactively investigating execution traces of our JS interpreter executing example JS programs. The interface consists of a web page [START_REF][END_REF] that embeds a JS parser and a trace-producing version of our interpreter implemented in standard JS. So far, we have shown how to translate the OCaml source into pseudo-JS syntax ( §2.2). In this section, we explain how to translate the OCaml source into proper JS syntax, and then how to instrument the JS code in a systematic way for producing execution traces. Figure 8 illustrates the output of translating from our OCaml subset towards JS. Note that this code is not meant for human consumption. We implement monadic operators as function calls (a sketch of this encoding is given after this paragraph), introduce the return keyword where necessary, encode sum types as object literals with a tag field, encode tuples as arrays (encoding tuples as object literals would work too), turn constructor applications into function calls, and implement pattern matching by first switching on the tag field then binding fresh variables to denote the arguments of constructors. We thus obtain an executable JS interpreter in JS which, like our JS interpreter in OCaml, may be used for executing test cases. One advantage of the JS version is that it may be easily executed inside a browser, a setup that might be more convenient for a number of users. One limitation, however, is that the number of steps that can be simulated may be limited on JS virtual machines that do not optimize tail-recursive function calls. Indeed, the execution of monadic code involves repeated calls to continuations, whose (tail-call) invocation unnecessarily grows the call stack. To set up our interactive debugger, we produce, from our OCaml source code, an instrumented version of the JS translation. This instrumented code produces execution traces as a result of interpreting an input JS program. These traces store information about all the states that the interpreter goes through.
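As announced above, here is a minimal sketch (ours; the actual generated code, shown in Figure 9, is more verbose) of how a monadic operator such as if_run can be rendered as a plain JS function, so that "let%run x = e1 in e2" becomes if_run(e1, function (s1, x) { ... }):

// Sketch of a monadic operator in the generated JS. Results are assumed to
// be objects carrying a tag, the updated state, and a value.
function if_run(result, k) {
  if (result.tag === "normal") {
    // Pass the updated state and the value to the continuation.
    return k(result.state, result.value);
  }
  return result; // abrupt outcome: propagate it without calling k
}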
In particular, each event in the trace provides information about the code pointer and the instantiations of local variables from the interpreter code. More precisely, we log events at every entry point of a function, every exit point, and on every variable binding. Each event captures the state, the stack, and the values of all local variables in scope of the interpreter code at the point where the event gets triggered. To reduce noise in the trace, we only log events in the core code of the interpreter, and not the code from the auxiliary libraries. Overall, an execution of the instrumented interpreter on some input JS program produces an array of events. This array can then be investigated using our double debugger ( §4). Figure 9 shows an example snippet of code, giving an idea of the mechanisms at play. Note, again, that this code is not meant for human consumption.
Figure 9: Snippet of generated code for the interpreter in standard JS syntax, with trace instrumentation.
var run_binary_op_add = function (s0, c, v1, v2) {
  var ctx_747 = ctx_push(ctx_empty, [{key: "s0", val: s0}, {key: "c", val: c},
    {key: "v1", val: v1}, {key: "v2", val: v2}]);
  log_event("JsInterpreter.js", 4033, ctx_747, "enter");
  var _return_1719 = if_prim((function () {
      log_event("JsInterpreter.js", 3985, ctx_747, "call");
      var _return_1700 = to_primitive_def(s0, c, v1);
      log_event("JsInterpreter.js", 3984, ctx_push(ctx_747,
        [{key: "#RETURN_VALUE#", val: _return_1700}]), "return");
      return (_return_1700);
    }()), function (s1, w1) {
      ...
      log_event("JsInterpreter.js", 4028, ctx_push(ctx_748,
        [{key: "#RETURN_VALUE#", val: _return_1718}]), "return");
      return (_return_1718);
    });
  log_event("JsInterpreter.js", 4032, ctx_push(ctx_747,
    [{key: "#RETURN_VALUE#", val: _return_1719}]), "return");
  return (_return_1719);
};
The function log_event augments the trace. Consider for instance log_event("Main.js", 4033, ctx_747, "enter"). The first two arguments identify the position in the source file, as a file name and a unique token used to recover the line numbers. The third argument is a context describing the values of the local variables, and the fourth argument describes the type of event. When investigating the trace, we need to be able to highlight the corresponding line of the interpreter code. We wish to be able to do so for the three versions of the interpreter code: the OCaml version, the pseudo-JS version, and the plain JS version. To implement this feature, our generator, when processing the OCaml source code, also produces a table that maps, for each version and for each file of the interpreter, event tokens to line numbers. The contexts stored in events are extended each time a function is entered, a new variable is declared, or the function returns (so as to capture the returned value). Contexts are represented as a purely-functional linked list of mappings between variable names and values. This representation maximizes sharing and thus minimizes the memory footprint of the generated trace. The length of the trace grows linearly with the number of execution steps performed. For example, even a simple program such as "var i = 0; while (i < N) { i++ }" produces a trace whose number of events quickly becomes large as N grows. The fact that these numbers are large reflects the fact that the reference interpreter is inherently vastly inefficient, as it follows the specification faithfully, without any optimization. Due to our use of functional data structures, the memory footprint of the trace should be linear in the length of the trace. We have not observed the memory footprint to be a limit, but if it were, we could more carefully select which events should be stored.
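To make the context representation concrete, here is a minimal sketch of ctx_empty and ctx_push as they appear in Figure 9 (the names come from the generated code; the exact field layout is our assumption):

// Purely-functional contexts: each push allocates a single new cell and
// shares the entire tail, so extending a context never copies existing
// bindings.
var ctx_empty = null;

function ctx_push(ctx, bindings) {
  // bindings is an array of {key, val} pairs, as in the generated code.
  return { bindings: bindings, tail: ctx };
}

function ctx_lookup(ctx, name) {
  // Walk from the most recent bindings outward.
  for (var c = ctx; c !== null; c = c.tail) {
    for (var i = 0; i < c.bindings.length; i++) {
      if (c.bindings[i].key === name) { return c.bindings[i].val; }
    }
  }
  return undefined;
}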
JSEXPLAIN: A DOUBLE DEBUGGER FOR JS The global architecture of JSExplain is depicted in Figure 10. Starting from our JS interpreter in OCaml, we generate a JS interpreter in JS. We instrument the JS code to produce a trace of events. This compilation is done ahead of time and is depicted by solid arrows. When hitting the run button, the flow depicted by the dotted arrows occurs. The web page parses the code from the text area, using the Esprima library [START_REF]ECMAScript parsing infrastructure for multipurpose analysis[END_REF]. This parser produces an AST, with nodes annotated with locations. This AST is then provided as input to the instrumented interpreter, which generates a trace of events. This trace may then be inspected and navigated interactively. For a given event from the execution trace, our interface highlights the corresponding piece of code from the interpreter, and shows the values of the local variables, as illustrated in Figure 11. It also highlights the corresponding piece of code in the interpreted program, as illustrated at the top of Figure 13, and displays the state and the environment of the program at that point of the execution, as illustrated in Figure 12. Recovering the information about the interpreted code is not completely straightforward. For example, to recover the fragment of code to highlight, we find in the trace the closest previous event that contains a call to a function with an argument named _term_. This argument corresponds to the AST of a subexpression, and this AST is decorated (by the parser) with locations. Note that, for efficiency reasons, we associate with each event from the trace its corresponding _term_ argument during a single pass, performed immediately after the trace is produced. Similarly, we are able to recover the state and environment associated with the event. The state of the interpreted program consists of four fields: the strictness flag, the value of the this keyword, the lexical environment, and the variable environment. We implemented a custom display for these elements, and also for values of the language, in particular for objects: one may click on an object to reveal its contents and recursively explore it. We provide several ways to navigate the trace. First, we provide buttons for reaching the beginning or end of the execution, and buttons for stepping one event at a time. Second, we provide, similarly to debuggers, next and previous buttons for skipping function calls, as well as a finish button to reach the end of the current function. These features are implemented by navigating the trace, keeping track of the number of enter and return events. Third, we provide buttons for navigation based not on steps related to the interpreter program but instead on steps of the interpreted program: source previous and source next find the closest event which induces a change in the location of the subexpression evaluated in the interpreted code, and source cursor finds the last event in the trace for which the associated subexpression contains the active cursor in the "source program" text area. The aforementioned tools are sufficient for simple explorations of the trace, yet we have found that it is sometimes useful to reach events at which specific conditions occur, such as being at a specific line in the interpreter, in the interpreted code, with variables from the interpreter or interpreted code having specific values.
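For instance, the finish button can be implemented by scanning the trace while counting enter and return events; the following is a minimal sketch (ours, with hypothetical names; the actual implementation may differ):

// Find the event that closes the function enclosing position pos, by counting
// matching "enter"/"return" events from pos onwards.
function findFinishIndex(trace, pos) {
  var depth = 0;
  for (var i = pos; i < trace.length; i++) {
    if (trace[i].type === "enter") { depth++; }
    if (trace[i].type === "return") {
      if (depth === 0) { return i; } // return event of the current function
      depth--;
    }
  }
  return trace.length - 1; // reached the end of the trace
}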
We thus provide a text box to enter arbitrary breakpoint conditions to be evaluated on events from the trace. For example, the condition in Figure 13 reaches the next occurrence of a call to run_binary_op_add in a context where the source variable j has value 1. The breakpoint condition may be any JS expression using the following API: I_line() returns the current line of the interpreter, S_line() returns the current line of the source, I('x') returns the value of x in the interpreter, S_raw('x') returns the value of x in the source (e.g. the JS object {tag: "value_number", arg: 5}), and S('x') returns the JS interpretation of the value of x in the source (e.g. the JS value 5). RELATED WORK There are many formal semantics of JavaScript, from pen-and-paper ones [START_REF] Maffeis | An operational semantics for javascript[END_REF], to the aforementioned JSCert [START_REF] Bodin | A trusted mechanised javascript specification[END_REF] and KJS [START_REF] Park | KJS: a complete formal semantics of javascript[END_REF]. As described in §1.4, these semantics are admirable but lack crucial features to be actively used in the standardization effort. To our knowledge, the closest work to the double-debugger approach is the multi-level debugging approach of Kruck et al. [START_REF] Kruck | Multi-level debugging for interpreter developers[END_REF]. They present a debugger for an interpreter for domain-specific languages that lets developers choose the level of abstraction at which they debug their program. An abstraction is a way to display some values (encoded in the host language, or as present in the DSL) as well as a way of showing only stack frames that represent computation at the DSL level. Our technique is more general as it does not focus on domain-specific languages. In fact, our double-debugger approach could be easily adapted to interpreters for other languages than JS. To that end, it suffices to implement an interpreter for the desired language in the subset of OCaml that we support, and to provide code for extracting and displaying the term and state associated with a given event. We have recently followed that approach and adapted our framework to derive a double-debugger for (a significant subset of) the OCaml programming language. Regarding the translation from OCaml to JS that we implement, one might consider using an existing, general-purpose tool. Js_of_ocaml [START_REF] Vouillon | From bytecode to javascript: the js_of_ocaml compiler[END_REF] converts OCaml bytecode into efficient JS code. Presumably, we could implement the logging instrumentation as an OCaml source-to-source translation and then invoke Js_of_ocaml. Yet, with that approach, we would need to convert the representation of trace events from the encoding of these values performed by Js_of_ocaml into proper JS objects that we can display in the interactive interface. This conversion is nontrivial, as some information, such as the names of constructors, is lost in the process. As we had already implemented a translator from OCaml to pseudo-JS, it was simpler to implement a translator from OCaml to plain JS. Another translator from OCaml to JS is Bucklescript [START_REF]Bucklescript[END_REF], which was released after we started our work. Similarly to our translator, Bucklescript converts OCaml code into JS code advertised as readable. Bucklescript also has the limitation that the names of constructors are lost, although presumably this could be easily fixed.
Besides, if we wanted to revisit our implementation to base it on Bucklescript, for trace generation we would need either to modify Bucklescript, which is quite complex as it covers the full OCaml language, or to reimplement trace instrumentation at the OCaml level, which should be doable yet would involve a bit more work than at the level of untyped JS code. CONCLUSION AND FUTURE WORK We presented JSExplain to TC39 1 and the committee expressed strong interest. They would like us to extend our interpreter to cover all of the specification. We have almost finished the formalization of proxies, which are a challenging addition to the language as they change many internal methods. Although all members seem to agree that the current toolset for developing the specification is inappropriate, it requires strong leadership and a consensus to commit to a new toolchain. Our goal is to cover the current version of ECMAScript (we currently cover ECMAScript 5) and to help committee members use it to formalize new additions to JavaScript. There are numerous directions for future work. (1) We plan to set up a modular mechanism for describing unspecified behaviors (e.g. "for-in" enumeration order) as well as browser-specific behaviors (sometimes browsers deviate from the specification, for historical reasons). (2) We could investigate the possibility of extending the formalization of the standard by also covering the parsing rules of JS; currently, our semantics is expressed with respect to the AST of the input program. (3) To re-establish a link with the original JSCert inductive definition, which is useful for conducting formal proofs about the metatheory of the language, we would like to investigate the possibility of automatically generating pretty-big-step [START_REF] Charguéraud | Pretty-big-step semantics[END_REF] definitions from the reference semantics expressed in our small language, possibly using some amount of annotation to guide the process. (4) To close even further the gap between a formal language and the English prose, we could also investigate the possibility of automatically generating English sentences from the code. Indeed, the prose from the ECMAScript standard is written in such a systematic manner that this should be doable, at least to some extent.
1 https://tc39.github.io/tc39-notes/2016-05_may-25.html#jsexplain-as--tw
Figure 2: ECMA6 specification of addition.
Figure 4: ECMA7 and ECMA8 addition for ReturnIfAbrupt.
Figure 6: Current input syntax of our specification language: a subset of pure OCaml, extended with monadic notation.
and run_binary_op s c op v1 v2 = match op with
  | C_binary_op_add -> run_binary_op_add s c v1 v2
  ...
and run_binary_op_add s0 c v1 v2 =
  let%prim (s1, w1) = to_primitive_def s0 c v1 in
  let%prim (s2, w2) = to_primitive_def s1 c v2 in
  if (type_compare (type_of (Coq_value_prim w1)) Coq_type_string)
     || (type_compare (type_of (Coq_value_prim w2)) Coq_type_string)
  then
    let%string (s3, str1) = to_string s2 c (Coq_value_prim w1) in
    let%string (s4, str2) = to_string s3 c (Coq_value_prim w2) in
    res_out (Coq_out_ter (s4, (res_val (Coq_value_prim (Coq_prim_string (strappend str1 str2))))))
  else
    let%number (s3, n1) = to_number s2 c (Coq_value_prim w1) in
    let%number (s4, n2) = to_number s3 c (Coq_value_prim w2) in
    res_out (Coq_out_ter (s4, (res_val (Coq_value_prim (Coq_prim_number (n1 +. n2))))))
Figure 7: Generated code for the interpreter in pseudo-JS syntax, with implicit environments, state, and casts.
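The content of Figure 7 is not reproduced here; to give a rough idea, the following is our own reconstruction (derived from the Figure 6 code and the pseudo-JS conventions of §2.2; this is not the authors' exact figure) of how the addition case might read in pseudo-JS, with implicit state, environment, and casts:

function run_binary_op_add(v1, v2) {
  // Monadic bindings: abrupt outcomes are propagated implicitly.
  var%prim w1 = to_primitive_def(v1);
  var%prim w2 = to_primitive_def(v2);
  if (type_compare(type_of(w1), Coq_type_string)
      || type_compare(type_of(w2), Coq_type_string)) {
    var%string str1 = to_string(w1);
    var%string str2 = to_string(w2);
    return (res_val(strappend(str1, str2)));
  } else {
    var%number n1 = to_number(w1);
    var%number n2 = to_number(w2);
    return (res_val(n1 + n2));
  }
}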
Figure 8: Snippet of generated code for the interpreter in standard JS syntax, without trace instrumentation.
Figure 10: Global architecture of JSExplain.
Figure 11: Display of the variables from the interpreter code.
Figure 12: Display of the state and environment of the interpreted program.
Figure 13: Example of a conditional breakpoint, constraining the state of both the interpreter and the interpreted code.
ACKNOWLEDGMENTS We acknowledge funding from the ANR project AJACS ANR-14-CE28-0008 and the CominLabs project SecCloud.
42,684
[ "170232", "2387", "185016" ]
[ "233382", "491212", "69530" ]
01439363
en
[ "sdv" ]
2024/03/05 22:32:07
2016
https://univ-rennes.hal.science/hal-01439363/file/Mutational%20Spectrum%20in%20Holoprosencephaly.pdf
Dr Christèle Dubourg email: christele.dubourg@chu-rennes.fr Wilfrid Carré Houda Hamdi-Rozé Charlotte Mouden Joëlle Roume Benmansour Abdelmajid Daniel Amram Clarisse Baumann Nicolas Chassaing Christine Coubes Laurence Faivre-Olivier Emmanuelle Ginglinger Marie Gonzales Annie Levy-Mozziconacci Sally-Ann Lynch Sophie Naudion Laurent Pasquier Amélie Poidvin Fabienne Prieur Pierre Sarda Annick Toutain Valérie Dupé Linda Akloul Sylvie Odent Marie De Tayrac Véronique David Mutational Spectrum in Holoprosencephaly Shows That FGF is a New Major Signaling Pathway Keywords: Holoprosencephaly, FGF signaling pathway, multigenic inheritance, targeted NGS, brain malformation INTRODUCTION Holoprosencephaly (HPE; MIM# 236100) is the most frequent congenital brain malformation (1 in 10,000 live births, 1 in 250 conceptuses). It results from incomplete midline division of the prosencephalon between the 18th and 28th days of gestation, affecting both the forebrain and the face [START_REF]Dubourg | Holoprosencephaly[END_REF][START_REF]Marcorelles | Neuropathology of holoprosencephaly[END_REF]. The clinical spectrum is very wide, ranging from severe HPE with a single cerebral ventricle and cyclopia to clinically unaffected carriers in familial HPE. Three classic anatomical classes have been described, in decreasing order of severity: alobar, semi-lobar, and lobar HPE. The full spectrum of HPE also includes middle interhemispheric variants (MIH) or syntelencephaly, septopreoptic HPE and microforms characterized by midline defects (e.g., single maxillary median incisor (SMMI) or hypotelorism) without the brain malformations typical of HPE [START_REF]Barkovich | Analysis of the cerebral cortex in holoprosencephaly with attention to the sylvian fissures[END_REF][START_REF]Hahn | Septopreoptic holoprosencephaly: a mild subtype associated with midline craniofacial anomalies[END_REF][START_REF]Lazaro | Phenotypic and molecular variability of the holoprosencephalic spectrum[END_REF][START_REF]Simon | The middle interhemispheric variant of holoprosencephaly[END_REF]. Not only is HPE highly variable phenotypically, it is also very heterogeneous etiologically [START_REF]Bendavid | Holoprosencephaly: An update on cytogenetic abnormalities[END_REF][START_REF]Pineda-Alvarez | Current recommendations for the molecular evaluation of newly diagnosed holoprosencephaly patients[END_REF][START_REF]Roessler | The molecular genetics of holoprosencephaly[END_REF]. HPE may be due to chromosome abnormalities, such as trisomy 13, trisomy 18, and triploidy, or may be one of the components of a multiple malformation syndrome, such as Smith-Lemli-Opitz or CHARGE syndrome. Hartsfield syndrome associates HPE with ectrodactyly, with and without cleft lip and palate. HPE may also result from exposure to maternal diabetes during gestation [START_REF]Johnson | Non-genetic risk factors for holoprosencephaly[END_REF][START_REF]Miller | Risk factors for nonsyndromic holoprosencephaly in the National Birth Defects Prevention Study[END_REF]. Isolated HPE presents a high genetic heterogeneity.
To date, heterozygous mutations in 15 genes have been identified in HPE patients: four major genes (Sonic hedgehog or SHH MIM# 600725, ZIC2 MIM# 603073, SIX3 MIM# 603714, TGIF1 MIM# 602630) and eleven other genes [START_REF]Arauz | A hypomorphic allele in the FGF8 gene contributes to holoprosencephaly and is allelic to gonadotropin-releasing hormone deficiency in humans[END_REF][START_REF]Bae | Mutations in CDON, encoding a hedgehog receptor, result in holoprosencephaly and defective interactions with other hedgehog receptors[END_REF][START_REF]Bendavid | Holoprosencephaly: An update on cytogenetic abnormalities[END_REF][START_REF]Dupe | NOTCH, a new signaling pathway implicated in holoprosencephaly[END_REF][START_REF]Mouden | Homozygous STIL Mutation Causes Holoprosencephaly and Microcephaly in Two Siblings[END_REF][START_REF]Pineda-Alvarez | Missense substitutions in the GAS1 protein present in holoprosencephaly patients reduce the affinity for its ligand, SHH[END_REF]. These genes encode proteins playing a role in early brain development, which mostly belong to the Shh signaling pathway and, to a lesser extent, to the Nodal and Fgf pathways [START_REF]Arauz | A hypomorphic allele in the FGF8 gene contributes to holoprosencephaly and is allelic to gonadotropin-releasing hormone deficiency in humans[END_REF][START_REF]Mercier | NODAL and SHH dose-dependent double inhibition promotes an HPE-like phenotype in chick embryos[END_REF]. Mutations in SHH, SIX3 and TGIF1 are inherited from an unaffected parent, or from a parent harboring only a microform of HPE, in 70% of the cases [START_REF]Mercier | New findings for phenotypegenotype correlations in a large European series of holoprosencephaly cases[END_REF]. This suggests that other events are necessary for the disease to develop. Consequently, the mode of inheritance initially described as autosomal dominant with incomplete penetrance and variable expression has been redefined [START_REF]Odent | Segregation analysis in nonsyndromic holoprosencephaly[END_REF][START_REF]Mouden | Complex mode of inheritance in holoprosencephaly revealed by whole exome sequencing[END_REF]. HPE is now regarded as a polygenic disease with multiple inheritance modes. Among them, polygenic inheritance would require two or more events involving genes from the same or different signaling pathways with a functional relationship. This polygenic inheritance plays a role in the variability of the phenotype, especially when there is a functional relationship between the mutated genes, as is the case for HPE genes [START_REF]Mercier | NODAL and SHH dose-dependent double inhibition promotes an HPE-like phenotype in chick embryos[END_REF]. This has significant implications for genetic counseling and for risk assessment of patient relatives. Until recently, HPE molecular diagnosis had relied on the detection of point mutations in the four main HPE genes (SHH, ZIC2, SIX3 and TGIF1) by Sanger sequencing and on the search for deletions in either known HPE genes or in the entire genome (using CGH array). Targeted next-generation sequencing (NGS) has been proven in recent years to be very beneficial clinically, especially for the molecular diagnosis of genetically heterogeneous diseases, such as intellectual disability, hearing loss [START_REF]Shearer | Comprehensive genetic testing for hereditary hearing loss using massively parallel sequencing[END_REF], and ciliopathies like Bardet-Biedl syndrome (M'Hamdi et al., 2014).
Targeted NGS appears to be more suitable for routine clinical practice than whole-exome sequencing as it provides better coverage of particular genes for a lower cost, together with easier and quicker data interpretation [START_REF]Rehm | Disease-targeted sequencing: a cornerstone in the clinic[END_REF]. We therefore developed a targeted NGS panel for the molecular diagnosis of HPE by screening twenty genes known to be involved in HPE, or defined as candidates for this disorder, using the Ion Torrent AmpliSeq and Ion Personal Genome Machine (PGM) strategy. In a cohort of 271 HPE probands tested since the beginning of 2014, we were able to provide a diagnosis in approximately 24% of patients. We also show that components of the FGF signaling pathway are clearly involved in HPE. MATERIALS AND METHODS Sample Collection A total of 257 patients (131 fetuses and 126 living children) with normal conventional karyotype were referred by French geneticists from the eight CLAD (Centres Labellisés pour les Anomalies du Développement) of the country, French centers of prenatal diagnosis (CPDPN), and fetopathologists from the French Fetopathology Society (SOFFOET), as well as by several European centers. The 257 patients are described in Table 1. This cohort includes 130 males and 127 females, who have been diagnosed with alobar (n=62), semilobar (n=54), lobar (n=43), syntelencephaly (n=12), HPE microform (n=80), Hartsfield syndrome (n=3) or Kallmann syndrome (n=3). All samples were obtained with informed consent according to the protocols approved by the local ethics committee (Rennes hospital). Gene selection and panel design Library preparation and DNA sequencing An adapter-ligated library was constructed with the Ion AmpliSeq Library Kit 2.0 (Life Technologies) following the manufacturer's protocol. Briefly, 10 ng of DNA was amplified in two pooled reactions that were then gathered together. Amplicons were partially digested at primer sequences before ligation with Ion Torrent adapters P1 and A, and the adapter-ligated products were then purified with AMPure beads (Beckman Coulter Genomics, Brea, CA, USA) and PCR-amplified for 7 cycles. The resulting libraries of 11 patients were equalized using the Ion Library Equalizer Kit (Life Technologies) and then pooled. Sample emulsion PCR, emulsion breaking, and enrichment were performed with the Ion PGM Template OT2 200 Kit (Life Technologies), according to the manufacturer's instructions. Briefly, an input concentration of one DNA template copy per Ion Sphere Particle (ISP) was added to the emulsion PCR master mix, and the emulsion was generated with an Ion OneTouch system (Life Technologies). Next, ISPs were recovered, and template-positive ISPs were enriched with Dynabeads MyOne Streptavidin C1 beads (Life Technologies). The Qubit 2.0 fluorometer (Life Technologies) was used to confirm ISP enrichment. An Ion PGM 200 Sequencing Kit was used for sequencing reactions, as recommended in the protocol, and Ion 316 chips were used to sequence barcoded samples on the Ion Torrent PGM for 500 dNTP-flows. In order to achieve complete coverage of at least the four main genes for each patient, six fragments (one in SHH, four in ZIC2 and one in SIX3) were systematically studied by the Sanger method. Depending on the coverage, analysis of other genes was completed according to the patient phenotype by Sanger sequencing.
Bioinformatic analysis The sequencing data produced by the PGM were first processed with the Torrent Suite 4.2.1, the Ion Torrent platform-specific pipeline, including signal processing, adapter trimming, filtering of poor signal-profile reads and alignment to the hg19 human reference genome with TMAP (Torrent Mapping Alignment Program). Four independent variant calling algorithms from the Torrent suite were used. The four VCF (variant call format) files were combined and annotated with ANNOVAR (February 2014 build) [START_REF]Wang | ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data[END_REF]. A gene-based annotation identified whether SNPs cause protein-coding changes and the amino acids that were affected, based on RefSeq. A filter-based annotation identified variants, and their associated frequencies, reported in the following databases: dbSNP138, 1000-Genome (1000G), NHLBI-ESP, ExAC (Exome Aggregation Consortium) and ClinVar [START_REF]Landrum | ClinVar: public archive of relationships among sequence variation and human phenotype[END_REF]. ANNOVAR was also used to annotate the predicted functional consequences of missense variants using dbNSFP (database for non-synonymous SNPs' functional predictions) v2.6 (http://sites.google.com/site/jpopgen/dbNSFP) [START_REF]Liu | dbNSFP: a lightweight database of human nonsynonymous SNPs and their functional predictions[END_REF][START_REF]Liu | dbSNP v2.0: a database of human non-synonymous SNVs and their functional predictions and annotations[END_REF]. This database compiles prediction scores and interpretations from ten different algorithms: SIFT, Polyphen2_HDIV, Polyphen2_HVAR, LRT, MutationTaster, MutationAssessor, FATHMM, CADD, MetaSVM and MetaLR (Suppl. Tables S1 and S2). Three conservation scores (GERP++, PhyloP and SiPhy) are also included in dbNSFP v2.6 (Suppl. Tables S1 and S2). The variant annotation was completed with "in-house" data regarding variant frequency within each run, across runs and in previous annotations, helping to identify recurring false positives and polymorphisms. Furthermore, only variants with a frequency below 1/1,000 in 1000G, EVS (Exome Variant Server) and ExAC were retained. After variant validation by visual inspection with IGV (Integrative Genomics Viewer), complementary annotations were performed using Condel v2.0 (Gonzalez-Perez and Lopez-Bigas, 2011) and Alamut Visual v2.4.5 (Interactive Biosoftware) to estimate variant pathogenicity. The information given by the different tools was re-examined with caution to provide accurate results: PolyPhen [START_REF]Adzhubei | Predicting Dunctional Effect of Human Missense Mutations Using PolyPhen-2[END_REF], SIFT [START_REF]Kumar | Predicting the effects of coding non-synonymous variants on protein function using the SIFT algorithm[END_REF], MutationTaster [START_REF]Schwarz | MutationTaster2: mutation prediction for the deep-sequencing age[END_REF], and Align-GVGD [START_REF]Tavtigian | Comprehensive statistical study of 452 BRCA1 missense substitutions with classification of eight recurrent substitutions as neutral[END_REF] were tested for exonic variants.
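As a schematic illustration of this triage (a sketch only; the field names, thresholds and the encoding of the majority rule described in the next paragraph are our assumptions, not the authors' actual pipeline), the frequency filter and the prediction-tool vote could be expressed as:

// Sketch: retain a variant only if it is rare in all population databases and
// if a majority of prediction tools flag it as potentially deleterious.
function isRare(variant) {
  // Frequencies from 1000G, EVS and ExAC; a missing value counts as rare.
  var freqs = [variant.freq1000G, variant.freqEVS, variant.freqExAC];
  return freqs.every(function (f) { return f === undefined || f < 0.001; });
}

function majorityDeleterious(predictions) {
  // predictions: array of booleans, one per tool (SIFT, PolyPhen-2, ...).
  var hits = predictions.filter(function (p) { return p; }).length;
  return hits > predictions.length / 2;
}

function retainForDiagnosis(variant) {
  return isRare(variant) && majorityDeleterious(variant.predictions);
}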
In order to study the effect of potential splice variations, Alamut Visual integrates various splice site prediction methods: SpliceSiteFinder-like [START_REF]Zhang | Statistical features of human exons and their flanking regions[END_REF], MaxEntScan [START_REF]Yeo | Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals[END_REF], NNSPLICE [START_REF]Reese | Improved splice site detection in Genie[END_REF], GeneSplicer [START_REF]Pertea | GeneSplicer: a new computational method for splice site prediction[END_REF], Human Splicing Finder [START_REF]Desmet | Human Splicing Finder: an online bioinformatics tool to predict splicing signals[END_REF], ESEFinder [START_REF]Cartegni | ESEfinder: A web resource to identify exonic splicing enhancers[END_REF], RESCUE-ESE [START_REF]Fairbrother | Predictive identification of exonic splicing enhancers in human genes[END_REF] and EX-SKIP [START_REF]Raponi | Prediction of single-nucleotide substitutions that result in exon skipping: identification of a splicing silencer in BRCA1 exon 6[END_REF] were interrogated. The first five gave scores increasing with the magnitude of the predicted impact on splicing. Finally, a variant was retained for diagnosis when a majority of tools predicted it as potentially deleterious and/or when family pedigree segregation was consistent. Nucleotide numbering uses +1 as the A of the ATG translation initiation codon in the reference sequence, with the initiation codon as codon 1. We used the tool ProteinPaint (http://pecan.stjude.org) to visualize amino acid changes corresponding to the retained variants [START_REF]Zhou | Exploring genomic alteration in pediatric cancer using ProteinPaint[END_REF]. Mutation validation All variants with a potential deleterious effect were confirmed by Sanger sequencing. They were submitted to ClinVar (ClinVar accessions SCV000268717 - SCV000268738 on http://www.ncbi.nlm.nih.gov/clinvar/). Segregation analyses were performed whenever DNA was available for additional family members. RESULTS Targeted NGS analysis of the 257 patients identified candidate and diagnostic variants in 23.7% of the cases: mutations with high confidence in their deleterious effect in three of the main genes, SHH, ZIC2 and SIX3, were identified in 13.2% of the cases (34/257), and in other tested genes in 10.5% (27/257). For these cases, we were able to give a diagnosis. We also found variants classified as variants of uncertain significance (VUS) in 10% (26/257) of the cases. From these data, the top-ranked genes involved in HPE are SHH (5.8%), ZIC2 (4.7%), GLI2 (3.1%), SIX3 (2.7%), FGF8 (2.3%), FGFR1 (2.3%), DISP1 (1.2%), DLL1 (1.2%) and SUFU (0.4%) (Table 1, Fig. 1). All variants were found in a heterozygous state and were retained for diagnosis. SHH, ZIC2 and SIX3 retain their position as major genes. A description of the SHH, ZIC2 and SIX3 mutations is provided in Figures 1 and 2. As previously described by Mercier et al. [START_REF]Mercier | New findings for phenotypegenotype correlations in a large European series of holoprosencephaly cases[END_REF], our results confirmed that SHH is the major gene implicated in HPE. SHH mutations are mostly missense (Fig. 1) and were inherited in 80% of the cases in this study. The spectrum of clinical manifestations associated with SHH mutations is very broad and includes severe forms as well as microforms.
ZIC2 is the second major gene, and it is affected by all types of mutations: missense (42%), frameshift and nonsense (42%) and also splice mutations (16%). ZIC2 alterations are generally associated with severe HPE forms and few facial features, and were de novo in 92% of the cases in our study. Probands with SIX3 mutations mostly had severe HPE correlated with severe facial features. Like SHH mutations, SIX3 variants are mostly inherited. Altogether, these results support that mutations in SHH and SIX3 are highly inherited, whereas most ZIC2 mutations are de novo. GLI2 is mostly involved in midline abnormalities. Six GLI2 heterozygous variants were retained for diagnosis (Fig. 1 and 3, Table 2, Suppl. Table S2). The c.596dupG/p.Ala200Argfs*151 (A200Rfs*151) mutation was identified in a boy with nasal pyriform aperture atresia and was inherited from his asymptomatic mother. The c.790C>T/p.Arg264* (R264*) mutation was identified in a 2-year-old girl with an isolated solitary median maxillary central incisor and was inherited from her asymptomatic mother. The c.2064delC/p.Ser690Alafs*5 (S690Afs*5) mutation was identified in a 20-year-old girl with hexadactyly, choanal atresia, hypopituitarism and cerebellar atrophy. This mutation occurred de novo. The c.2237G>A/p.Trp746* (W746*) mutation was identified in a male fetus aborted because of lobar holoprosencephaly, premaxillary agenesis, hexadactyly, pituitary hamartoma, and short femur. Moreover, his karyotype revealed a mosaic fragility on chromosome 3 (3p24.1, so very far from TDGF1). This mutation was not inherited from his mother, and DNA from the father was unavailable. The c.4761G>C/p.*1587Tyrext*46 (*1587Y) mutation was found in a 16-year-old boy with hypopituitarism, solitary median maxillary central incisor and choanal atresia. It was inherited from his asymptomatic mother. The c.349G>A/p.Ala117Thr (A117T) variant was found in two brothers, one with hypopituitarism and optic atrophy, the other with bilateral cleft lip and palate. This variant was inherited from the father, who presented only subtle hypotelorism. The effect of this variant is uncertain as it involves a moderately conserved amino acid and the physicochemical gap between alanine and threonine is low (Grantham distance = 58). Except for A117T, which is of uncertain clinical significance, all the other variants create a premature stop codon or modify the existing one. They are inherited in the majority of cases, implying that these GLI2 variants clearly show incomplete penetrance. Altogether, the mutations in GLI2 are mostly associated with a midline-defect spectrum characterized by solitary median maxillary central incisor and pituitary insufficiency. Only one is associated with classic HPE. FGF8 joins the top-ranked genes. Six patients of our cohort presented heterozygous variations in the FGF8 gene (Fig. 1 and 3, Table 2, Suppl. Table S2). A fetus with semilobar HPE presented the c.356C>T/p.Thr119Met (T119M) variant in FGF8 in association with a splice mutation in FGFR1. The couple had already had a termination of pregnancy due to semilobar HPE, and the paternal grandmother presents a right cleft lip. DNA samples were not available, preventing further Sanger validation. The c.317C>A/p.Ala106Glu (A106E) was identified in a 4-year-old boy with semilobar HPE. This variant involves a highly conserved amino acid (conserved across 13 species, down to Fugu) located in the interleukin-1/heparin-binding growth factor domain. It is predicted as possibly damaging by SIFT, PolyPhen and MutationTaster. This mutation occurred de novo.
This is the first time that an FGF8 mutation is described in association with syntelencephaly. The c.385C>T/p.Arg129* (R129*) was identified twice, in two unrelated families. The first patient is a boy with alobar HPE and the second is a boy with syntelencephaly. In both cases, the mutation was inherited from the asymptomatic father. The c.617G>A/p.Arg206Gln (R206Q) was also identified twice, in two unrelated families. The first case is a 3-year-old girl with a microform (pyriform aperture stenosis, solitary median maxillary central incisor, hypotelorism) presenting an additional variant in DLL1 (p.Asp601_Ile602delinsVal). These two variants are also present in her older sister, who was operated on for bilateral cleft lip and palate, and were inherited from the mother, who presents hypotelorism and microretrognathism. There is thus an apparent co-segregation of these mutations with minor signs of the HPE spectrum in this family. The second case is a female fetus with lobar HPE. Overall, the mutation frequency in FGF8 (2.2%) demonstrates that this gene can be classified as a major gene. FGFR1 is a new major gene in HPE. Six heterozygous variants in FGFR1 (NM_023110.2) were identified in our cohort: five in the intracellular tyrosine kinase domain (TKD, amino acids 478-767): p.Gly485Val, p.Gly490Arg, p.Gly643Asp, c.1977+1G>A, p.Glu692Lys, and one in the extracellular ligand-binding domain (p.Arg250Pro) (Fig. 1 and 3, Table 2, Suppl. Table S2). The c.1454G>T/p.Gly485Val (G485V) and the c.1468G>C/p.Gly490Arg (G490R) were identified in patients with Hartsfield syndrome and occurred de novo. The latter has already been reported by Simonis et al. [START_REF] Simonis | FGFR1 mutations cause Hartsfield syndrome, the unique association of holoprosencephaly and ectrodactyly[END_REF]. The c.1928G>A/p.Gly643Asp (G643D) mutation occurred de novo in a patient with nasal pyriform aperture hypoplasia, a single central incisor and intellectual disability. It involves a highly conserved residue (across 16 species from Caenorhabditis elegans to Homo sapiens) located in the serine-threonine/tyrosine-protein kinase catalytic domain, and the physicochemical gap between glycine and aspartate is important (Grantham distance = 94). AlignGVGD, SIFT and MutationTaster predict a deleterious effect. The c.1977+1G>A variant was identified in a patient with semilobar HPE in association with a variant in FGF8, p.Thr119Met, as described above. The c.1977+1G>A variant is predicted to induce skipping of exon 17 by all five splice prediction tools. The c.2074G>A/p.Glu692Lys (E692K) mutation was identified in a fetus with HPE and cleft lip and palate, and was inherited from his mother, who has hypogonadotropic hypogonadism. The c.749G>C/p.Arg250Pro (R250P) mutation was identified in a boy with lobar HPE and bilateral cleft lip and palate. Sanger sequencing suggested a very low proportion of the mutated base (cytosine) relative to the normal base (guanine) in the father's leukocyte DNA (Fig. 4). This was confirmed by NGS, which showed mosaicism for the mutation (GRCh37 genome build: g.38282214C>G) at a frequency of 6% in peripheral blood; this correlated with the phenotype of the father, who presents a microform with a right unilateral hypoplasia of the orbicularis of the upper lip and bilateral nasal slot, and whose MRI showed agenesis of the corpus callosum. The 15-month-old boy now presents diabetes insipidus and septo-optic dysplasia.
Mutations in FGFR1 were recently described in Hartsfield syndrome (OMIM 300571), which is a rare and unique association of HPE and ectrodactyly, with or without cleft lip and palate, and variable additional features [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF][START_REF] Simonis | FGFR1 mutations cause Hartsfield syndrome, the unique association of holoprosencephaly and ectrodactyly[END_REF]. Here we identified four FGFR1 mutations in patients presenting HPE without abnormalities of the extremities. In most cases, mutations in the minor HPE genes are associated with a second mutation. The three HPE minor genes identified by our study are DLL1, DISP1 and SUFU (Fig. 1, Table 2, Suppl. Table S2). In the DLL1 gene, we identified the same mutation, c.1802_1804del/p.Asp601_Ile602delinsVal (or 601_602del), twice, in two unrelated patients. First, this mutation was found in a patient with semilobar HPE and has already been reported by our group [START_REF] Dupé | NOTCH, a new signaling pathway implicated in holoprosencephaly[END_REF]. Second, it was identified in a 3-year-old girl with a microform (pyriform aperture stenosis, solitary median maxillary central incisor, hypotelorism). It was found in association with a VUS in FGF8 (R206Q); the two variants perfectly co-segregate with the phenotype in the family and may be implicated in the phenotype, as we have shown that the Fgf pathway might regulate DLL1 expression in the developing chick brain [START_REF] Dupé | NOTCH, a new signaling pathway implicated in holoprosencephaly[END_REF]. We also found the c.2117C>T/p.Ser706Leu (S706L) mutation in the DLL1 gene in a fetus with alobar HPE, in association with an in-frame deletion in SHH (c.1157_1180del/p.Leu386_Ala393del). Both mutations were, however, inherited from her asymptomatic father. Regarding the DISP1 gene, we identified two compound heterozygous mutations in a 9-year-old girl with a mild form of lobar HPE, facial dysmorphism and hypotelorism: the c.1087A>G transition leading to the missense mutation p.Asn363Asp (N363D) and the c.1657G>A transition leading to the missense mutation p.Glu553Lys (E553K). The p.Asn363Asp mutation was inherited from the father and the p.Glu553Lys mutation was inherited from the mother [START_REF] Mouden | Complex mode of inheritance in holoprosencephaly revealed by whole exome sequencing[END_REF]. In one polymalformative fetus with bilateral cleft lip and facial dysmorphism suggesting an HPE microform, we found a nonsense heterozygous mutation, c.2898G>A or p.Trp966* (W966*), in DISP1, associated with a mutation in SUFU (c.1022C>T/p.Pro341Leu) that substitutes the last base of exon 8 and is predicted deleterious by most bioinformatics prediction tools. A family study unfortunately could not be performed because DNA samples were not available. These results suggest that mutations in minor genes may be found more often in HPE patients with polygenic inheritance.

DISCUSSION

HPE is a very complex disorder, both clinically and genetically, often involving two or more genetic events. We present here the first large HPE series studied by targeted NGS and we provide a new classification of genes involved in HPE. SHH, ZIC2 and SIX3 remain the top genes in terms of importance, together with GLI2, and are followed by FGF8 and FGFR1.
The fraction of mutations in the major genes (SHH, ZIC2, SIX3) is reduced in the present study compared to previous studies [START_REF] Mercier | New findings for phenotypegenotype correlations in a large European series of holoprosencephaly cases[END_REF]; this is probably because the present cohort included more patients with microforms and syntelencephaly. TGIF1 was previously classified as a major HPE gene [START_REF] Mercier | New findings for phenotypegenotype correlations in a large European series of holoprosencephaly cases[END_REF] but did not present any mutation in our study. Similarly, PTCH1, GAS1, TDGF1, CDON, FOXH1, NODAL and the SHH regulatory sequences LMBR1 and RBM33 showed no mutations held for diagnosis in the 257 cases sequenced. New case-control studies need to be performed in larger cohorts to better evaluate their role and diagnostic potential in HPE. Such studies would also be better able to evaluate the implication of rare variants. The candidate genes HHAT and SOX2 did not present any mutation either. Significantly, the identification of numerous mutations in FGF8 and FGFR1 in our cohort strengthens the involvement of FGF signaling in HPE. FGF8 is a ligand of the large fibroblast growth factor (FGF) family and is important for gonadotropin-releasing hormone (GnRH) neuronal development, with human mutations resulting in hypogonadotropic hypogonadism and Kallmann syndrome [START_REF] Falardeau | Decreased FGF8 signaling causes deficiency of gonadotropin-releasing hormone in humans and mice[END_REF][START_REF] Hardelin | The complex genetics of Kallmann syndrome: KAL1, FGFR1, FGF8, PROKR2, PROK2, et al[END_REF]. Our targeted NGS approach demonstrates that mutations in FGF8 occur more commonly than previously thought [START_REF] Arauz | A hypomorphic allele in the FGF8 gene contributes to holoprosencephaly and is allelic to gonadotropin-releasing hormone deficiency in humans[END_REF][START_REF] Mccabe | Novel FGF8 mutations associated with recessive holoprosencephaly, craniofacial defects, and hypothalamo-pituitary dysfunction[END_REF]. The phenotype associated with FGF8 alterations is variable, and mutations can be de novo or inherited. Interestingly, the same inherited nonsense mutation (p.Arg129*) was identified in two unrelated patients, one with a severe HPE and the other with a mild form. This supports the idea that another event may be necessary to lead to severe HPE. We also describe here convincing examples of FGFR1 mutations in patients with isolated HPE. FGFR1 belongs to the tyrosine kinase receptor superfamily and contains an extracellular ligand-binding domain with three immunoglobulin (Ig)-like domains (D1-D3) and a cytoplasmic domain responsible for tyrosine kinase activity (Fig. 3).
The clinical manifestations of FGFR1 alterations are very heterogeneous, since loss-of-function mutations in FGFR1 have been linked to Kallmann syndrome (Dodé et al., 2003; [START_REF] Albuisson | Kallmann syndrome: 14 novel mutations in KAL1 and FGFR1 (KAL2)[END_REF] Villanueva and de Roux, 2010), hypogonadotropic hypogonadism with or without anosmia [START_REF] Balasubramanian | Prioritizing genetic testing in patients with Kallmann syndrome using clinical phenotypes[END_REF][START_REF] Villanueva | Congenital hypogonadotropic hypogonadism with split hand/foot malformation: a clinical entity with a high frequency of FGFR1 mutations[END_REF][START_REF] Vizeneux | Congenital hypogonadotropic hypogonadism during childhood: presentation and genetic analyses in 46 boys[END_REF], and Hartsfield syndrome [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF][START_REF] Simonis | FGFR1 mutations cause Hartsfield syndrome, the unique association of holoprosencephaly and ectrodactyly[END_REF]. Gain-of-function mutations in FGFR1 have also been identified in about 5% of Pfeiffer syndrome cases, with or without craniosynostosis [START_REF] Chokdeemboon | FGFR1 and FGFR2 mutations in Pfeiffer syndrome[END_REF]. We describe here one case of an FGFR1 mutation (p.Glu692Lys) associated with both Kallmann syndrome and HPE. The location of this mutation is consistent with Kallmann syndrome, as mutations of neighboring residues (p.Leu590Pro, p.Ile693Phe) have already been described in patients with this syndrome [START_REF] Bailleul-Forestier | Dental agenesis in Kallmann syndrome individuals with FGFR1 mutations[END_REF](Dodé et al., 2007). Out of the six FGFR1 mutations described in our study, two were found in patients with Hartsfield syndrome. Previous reports of Hartsfield syndrome implicate FGFR1 mutations in the ATP-binding site and the protein tyrosine kinase domain [START_REF] Dhamija | Novel de novo heterozygous FGFR1 mutation in two siblings with Hartsfield syndrome: a case of gonadal mosaicism[END_REF][START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF][START_REF] Simonis | FGFR1 mutations cause Hartsfield syndrome, the unique association of holoprosencephaly and ectrodactyly[END_REF]. These mutations would have a dominant-negative activity that would account for the most severe phenotype of Hartsfield syndrome [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF]. Concordantly, the two FGFR1 mutations (p.Gly485Val, p.Gly490Arg) that are associated with Hartsfield syndrome in our cohort are localized in the region coding for the ATP-binding site (Fig. 3). However, two of the mutations identified in HPE patients without abnormalities of the extremities are also found in the region coding for the activation loop of the protein tyrosine kinase domain (p.Gly643Asp; c.1977+1G>A). We hypothesize that these FGFR1 mutations instead lead to a classic loss of function [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF]. FGF8 and FGFR1 are not the only members of the FGF family to be expressed in the early forebrain. Other members should be considered as strong potential candidates for HPE.
The FGF signaling pathway plays a dominant role in embryonic development and is essential for ventral telencephalon development and digit formation [START_REF] Ellis | ProNodal acts via FGFR3 to govern duration of Shh expression in the prechordal mesoderm[END_REF][START_REF] Gutin | FGF signalling generates ventral telencephalic cells independently of SHH[END_REF][START_REF] Li | FGFR1 function at the earliest stages of mouse limb development plays an indispensable role in subsequent autopod morphogenesis[END_REF]. FGF signaling is involved in maintaining Shh expression in the prechordal tissue, where it plays a crucial role in the induction of the ventral forebrain [START_REF] Ellis | ProNodal acts via FGFR3 to govern duration of Shh expression in the prechordal mesoderm[END_REF]. FGFR1 also maintains expression of Shh in the developing limb [START_REF] Li | FGFR1 function at the earliest stages of mouse limb development plays an indispensable role in subsequent autopod morphogenesis[END_REF]. According to our hypothesis, dominant-negative FGFR1 mutations would lead to a more severe downregulation of Shh activity compared to a classic loss of function. This would explain the presence of limb defects in Hartsfield syndrome, similar to those observed in Shh-/- knockout mice [START_REF] Chiang | Cyclopia and defective axial patterning in mice lacking Sonic hedgehog gene function[END_REF]. The understanding of the mode of inheritance in HPE has evolved from an autosomal dominant model with incomplete penetrance and variable expression [START_REF] Odent | Segregation analysis in nonsyndromic holoprosencephaly[END_REF] to an autosomal dominant model with modifier genes [START_REF] Roessler | Utilizing prospective sequence analysis of SHH, ZIC2, SIX3 and TGIF in holoprosencephaly probands to describe the parameters limiting the observed frequency of mutant genexgene interactions[END_REF]. Thanks to our NGS strategy targeting twenty genes, we have shown that sixteen percent of the mutations retained for diagnosis were found in association with a second one (FGF8/FGFR1, FGF8/DLL1, DLL1/SHH, DISP1/DISP1, DISP1/SUFU). These cases of double mutations in two different genes, and even in the same one, strengthen the polygenic inheritance previously illustrated by [START_REF] Mouden | Complex mode of inheritance in holoprosencephaly revealed by whole exome sequencing[END_REF]. Here, a second event in FGF8 was identified in one patient with an FGFR1 mutation. In the same way, a synergistic gene interaction between a deleterious FGFR1 allele transmitted from one parent and a loss-of-function allele in FGF8 from the other parent was recently described in two sisters with semilobar and lobar HPE, respectively [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF]. Altogether, these observations strongly suggest that a cumulative effect on the FGF signaling pathway leads to HPE. We showed that most mutations were inherited, mainly from an asymptomatic parent, which suggests that another event may be necessary to cause HPE. The wide variability of expression, from asymptomatic to severe forms of the same mutation, the incomplete penetrance, and the identification of several mutations in the same patient all argue for this oligogenic inheritance.
Furthermore, the description of numerous mouse models carrying mutations in two genes of the same or different signaling pathways involved in forebrain development strongly supports this mode of inheritance by showing that a cumulative partial inhibition of signaling pathways is necessary to develop HPE [START_REF] Allen | The Hedgehog-binding proteins Gas1 and Cdo cooperate to positively regulate Shh signaling during mouse development[END_REF][START_REF] Krauss | Holoprosencephaly: new models, new insights[END_REF][START_REF] Mercier | NODAL and SHH dose-dependent double inhibition promotes an HPE-like phenotype in chick embryos[END_REF]. However, only a few examples of digenic inheritance in humans have been reported in the literature until now [START_REF] Hong | Dominant-negative kinase domain mutations in FGFR1 can explain the clinical severity of Hartsfield syndrome[END_REF][START_REF] Lacbawan | Clinical spectrum of SIX3-associated mutations in holoprosencephaly: correlation between genotype, phenotype and function[END_REF][START_REF] Ming | Multiple hits during early embryonic development: digenic diseases and holoprosencephaly[END_REF][START_REF] Mouden | Complex mode of inheritance in holoprosencephaly revealed by whole exome sequencing[END_REF][START_REF] Nanni | The mutational spectrum of the Sonic Hedgehog gene in holoprosencephaly: SHH mutations cause a significant proportion of autosomal dominant holoprosencephaly[END_REF]. The present study demonstrates that digenism may not be so rare in human HPE. Systematic implementation of next-generation sequencing in HPE diagnosis will be necessary to account for this multigenic inheritance and to improve genetic counselling.

The genes considered as minor genes (PTCH1 MIM# 601309, TDGF1 MIM# 187395, FOXH1 MIM# 603621, GLI2 MIM# 165230, DISP1 MIM# 607502, FGF8 MIM# 600483, GAS1 MIM# 139185, CDON MIM# 608707, NODAL MIM# 601265, DLL1 MIM# 606582 and, very recently, STIL MIM# 181590) were selected based on their proven or suspected involvement in HPE or in syndromes including HPE, their membership in signaling pathways implicated in HPE, and their expression in the developing forebrain compatible with HPE. Known regulatory regions of SHH (LMBR1 MIM# 605522, RBM33) have also been included. The panel was designed with Ion AmpliSeq™ Designer (Life Technologies, ThermoFisher Scientific). It includes coding and flanking intronic sequences (50 base pairs) of the following 20 genes: SHH (NM_000193.2), ZIC2 (NM_007129.3), SIX3 (NM_005413.3), TGIF1 (NM_170695.2), GLI2 (NM_005270.4), PTCH1 (NM_000264.3), GAS1 (NM_002048.2), TDGF1 (NM_003212.3), CDON (NM_016952.4), DISP1 (NM_032890.3), FOXH1 (NM_003923.2), NODAL (NM_018055.4), FGF8 (NM_033163.3), HHAT (NM_018194.4) MIM# 605743, DLL1 (NM_005618.3), SUFU (NM_016169.3) MIM# 607035, SOX2 (NM_003106.3) MIM# 184429, RBM33 (NM_053043.2), LMBR1 (NM_022458.3) and FGFR1 (NM_023110.2) MIM# 136350. It covers 111 kb.

Figure 1. Distribution of mutations held for diagnosis in the top ten holoprosencephaly genes.
Figure 2. Mutational landscape of SHH, ZIC2 and SIX3 genes.
Figure 3. Mutational landscape of FGF8, FGFR1 and GLI2 genes.
Figure 4. p.Arg250Pro (R250P) mutation in FGFR1. (I) Proband. (II) Father.
Table 1. Distribution of holoprosencephaly types and mutations in the cohort of 257 patients.
Type            | All (Male,Female) | SHH  | ZIC2 | GLI2 | SIX3 | FGF8 | FGFR1 | DISP1 | DLL1 | SUFU
Alobar          | 62 (24,38)        | 9,7% | 8,1% | -    | 6,5% | 1,6% | -     | -     | 1,6% | -
Semilobar       | 54 (26,28)        | 3,7% | 5,6% | -    | 1,9% | 3,7% | 3,7%  | -     | 1,9% | -
Lobar           | 43 (27,16)        | 2,3% | 9,3% | 2,3% | 2,3% | 2,3% | 2,3%  | 4,7%  | -    | -
Syntelencephaly | 12 (7,5)          | 8,3% | -    | -    | 8,3% | 8,3% | -     | -     | -    | -
Microform       | 80 (42,38)        | 6,3% | -    | 8,8% | -    | 1,3% | 1,3%  | 1,3%  | 1,3% | 1,3%
Hartsfield      | 3 (3,0)           | -    | -    | -    | -    | -    | 66,7% | -     | -    | -
Kallmann        | 3 (1,2)           | -    | -    | -    | -    | -    | -     | -     | -    | -
TOTAL           | 257               | 5,8% | 4,7% | 3,1% | 2,7% | 2,3% | 2,3%  | 1,2%  | 1,2% | 0,4%

Table 2. Characteristics of variants identified in GLI2, FGF8, FGFR1, DLL1, DISP1 and SUFU, and associated phenotypes. The GenBank references used for nucleotide numbering were NM_005270.4 for GLI2, NM_033163.3 for FGF8, NM_023110.2 for FGFR1, NM_005618.3 for DLL1, NM_032890.3 for DISP1, NM_016169.3 for SUFU and NM_000193.2 for SHH. Nucleotide numbering uses +1 as the A of the ATG translation initiation codon in the reference sequence, with the initiation codon as codon 1. The deleterious score was given by 10 prediction tools (SIFT, Polyphen2_HDIV, Polyphen2_HVAR, LRT, MutationTaster, MutationAssessor, FATHMM, CADD, MetaSVM and MetaLR). *: For detailed prediction data, see Suppl. Table S2. D, deleterious; P, possibly deleterious; T, tolerated; NPAS, nasal pyriform aperture stenosis; SMMCI, solitary median maxillary central incisor; ND, not determined.

Gene | Variant (gDNA)   | Variant (cDNA) | Variant (protein) | Deleterious score* or effect | Patient's phenotype                                                  | Inheritance                 | Paired mutation
GLI2 | g.121708913G>A   | c.349G>A       | p.Ala117Thr       | D:2 P:1 T:7                  | hypopituitarism, optic atrophy / bilateral cleft                     | father (hypotelorism)       | -
GLI2 | g.121712959dupG  | c.596dupG      | p.Ala200Argfs*151 | Frameshift                   | NPAS                                                                 | mother                      | -
GLI2 | g.121726436C>T   | c.790C>T       | p.Arg264*         | Stop gain                    | SMMCI                                                                | mother                      | -
GLI2 | g.121743961delC  | c.2064delC     | p.Ser690Alafs*5   | Frameshift                   | hexadactyly, choanal atresia, hypopituitarism, cerebellar atrophy    | de novo                     | -
GLI2 | g.121744134G>A   | c.2237G>A      | p.Trp746*         | Stop gain                    | lobar HPE, premaxillary agenesis, pituitary hamartoma, hexadactyly   | not inherited from the mother | -
GLI2 | g.121748251G>C   | c.4761G>C      | p.*1587Tyrext*46  | Stop loss                    | hypopituitarism, SMMCI, choanal atresia                              | mother                      | -
FGF8 | g.103534509G>T   | c.317C>A       | p.Ala106Glu       | D:9 P:0 T:1                  | semilobar HPE                                                        | de novo                     | -

ACKNOWLEDGMENTS

This work was supported by the CHU of Rennes (Innovation Project). We would like to thank the families for their participation in the study, all clinicians who referred HPE cases, the eight CLAD (Centres Labellisés pour les Anomalies du Développement) within France that belong to FECLAD, the French centers of prenatal diagnosis (CPDPN) and the SOFFOET for fetus cases, and the "filière AnDDI-Rares". We particularly thank all members of the Molecular Genetics Laboratory (CHU, Rennes) and of the Department of Genetics and Development (UMR6290 CNRS, Université Rennes 1) for their help and advice. We are grateful to Артем Ким for carefully reading this manuscript. The authors acknowledge the Centre de Ressources Biologiques (CRB) Santé BB-0033-00056 (http://www.crbsante-rennes.com) of Rennes for managing patient samples.
43,479
[ "834354", "976409", "902643", "973285", "825028" ]
[ "187162", "74578", "187162", "74578", "187162", "239014", "460911", "84005", "300075", "83991", "157792", "495930", "360410", "203638", "300122", "301327", "486293", "51335", "93599", "184955", "187162", "190951", "187162", "190951", "301327", "187162", "187162", "74578" ]
01745814
en
[ "info" ]
2024/03/05 22:32:07
2015
https://inria.hal.science/hal-01745814/file/340025_1_En_20_Chapter.pdf
Igor Mishsky email: igormishsky@gmail.com
Nurit Gal-Oz email: galoz@sapir.ac.il
Ehud Gudes

A Topology based Flow Model for Computing Domain Reputation

The Domain Name System (DNS) is an essential component of the internet infrastructure that translates domain names into IP addresses. Recent incidents have demonstrated the enormous damage of malicious activities utilizing DNS, such as bots that use DNS to locate their command & control servers. Detecting malicious domains using the DNS network is therefore a key challenge. We project the famous expression "Tell me who your friends are and I will tell you who you are," which motivates many social trust models, onto the world of internet domains. A domain that is related to malicious domains is more likely to be malicious as well. In this paper, our goal is to assign reputation values to domains and IPs, indicating the extent to which we consider them malicious. We start with a list of domains known to be malicious or benign and assign them reputation scores accordingly. We then construct a DNS-based graph in which nodes represent domains and IPs. Our new approach for computing domain reputation applies a flow algorithm on the DNS graph to obtain the reputation of domains and identify potentially malicious ones. The experimental evaluation of the flow algorithm demonstrates its success in predicting malicious domains.

Introduction

Malicious botnets and Advanced Persistent Threats (APT) have plagued the Internet in recent years. An Advanced Persistent Threat, often implemented as a botnet, is advanced since it uses sophisticated techniques to exploit vulnerabilities in systems, and is persistent since it uses an external command and control (C&C) site which continuously monitors and extracts data from a specific target. APTs are generated by hackers but are operated from specific domains or IPs. The detection of these misbehaving domains (including zero-day attacks) is difficult since there is no time to collect and analyze traffic data in real time; their identification ahead of time is therefore very important. We use the term domain reputation to express a measure of our belief that a domain is benign or malicious. The term reputation is adopted from the field of social networks and virtual communities, in which the reputation of a peer is derived from evidence regarding its past behavior but also from its relations to other peers [START_REF] Page | Pagerank citation ranking: Bringing order to the web[END_REF]. A domain reputation system can support the decision to block traffic or warn organizations about suspicious domains. Currently, lists of domains which are considered legitimate are published by web information companies (e.g., Alexa [START_REF] Gudes | Websites Ranking[END_REF]), while blacklists of malware domains are published by web threat analysis services (e.g., VirusTotal [START_REF]VirusTotal: A free virus, malware and URL online scanning service[END_REF]). Unfortunately, the number of domains appearing in these lists is relatively small, and a huge number of domains is left unlabeled. Therefore, the problem of assigning reputation to unlabeled domains is highly important. The Domain Name Service (DNS) maps domain names to IP addresses and provides an essential service to applications on the internet. Many botnets use a DNS service to locate their next C&C site. For example, botnets tend to use short-lived domains to evasively move their C&C sites.
Therefore, DNS logs have been used by several researchers to detect suspicious domains and filter their traffic if necessary. Choi and Lee [START_REF] Choi | Identifying botnets by capturing group activities in dns traffic[END_REF] analyzed DNS traffic to detect APTs. Such analysis requires large quantities of illegitimate DNS traffic data. An alternative approach was proposed in the Notos system [START_REF] Antonakakis | Building a dynamic reputation model for dns[END_REF], which uses historical DNS information collected passively from multiple recursive DNS resolvers to build a model of how network resources are allocated and operated for legitimate Internet services. This model is mainly based on statistical features of domains and IPs, which are used to build a classifier that assigns a reputation score to unlabeled domains. The main difference between the DNS data used for computing reputation and the data used for malware detection is that the former consists mainly of static properties of the domain and DNS topology data, while the latter requires behavioral and time-dependent data. While DNS behavior data may involve private information (e.g., the domains that an IP is trying to access), which ISPs may be reluctant to analyze or share, DNS topology data is much easier to collect. Our research also focuses on computing domain reputation using DNS topology data. Various definitions of the terms trust and reputation have been proposed in the literature as the motivation for a computational metric. Trust is commonly defined, following [START_REF] Mui | A computational model of trust and reputation for e-businesses[END_REF], as a subjective expectation an agent has about another's future behavior based on the history of their encounters. The history is usually learned from ratings that peers provide for each other. If such direct history is not available, one derives trust based on reputation. Reputation is defined [START_REF] Mui | A computational model of trust and reputation for e-businesses[END_REF] as the aggregated perception that an agent creates through past actions about its intentions and norms, where this perception is based on information gathered from trusted peers. These definitions are widely used in state-of-the-art research on trust and reputation as the logic behind trust-based computational reputation models in web communities and social networks. However, computing reputation for domains raises several new difficulties:
- Rating information, if it exists, is sparse and usually binary: a domain is labeled either "white" or "black".
- Static sources like blacklists and whitelists are often not up-to-date.
- There is no explicit concept of trust between domains, which makes it difficult to apply a flow or a transitive trust algorithm.
- The reputation of domains is dynamic and changes very fast.
These difficulties make the selection of an adequate computational model for computing domain reputation a challenging task. The focus of our paper and its main contribution is therefore a flow model and a flow algorithm for computing domain reputation, which uses a topology-based network that maps connections of domains to IPs and other domains. Our model uses DNS IP-domain mappings and statistical information, but does not use DNS traffic data. Our approach is based on a flow algorithm, commonly used for computing trust in social networks and virtual communities.
We are mainly inspired by two models: the Eigentrust model [START_REF] Kamvar | The eigentrust algorithm for reputation management in p2p networks[END_REF], which computes trust and reputation by transitive iteration through chains of trusting users, and the model by Guha et al. [START_REF] Guha | Propagation of trust and distrus[END_REF], which combines the flow of trust and distrust. The motivation for using a flow algorithm is the hypothesis that IPs and domains which are neighbors of malware-generating IPs and domains are more likely to become malware generating as well. We construct a graph which reflects the topology of domains and IPs and their relationships, and use a flow model to propagate the knowledge received in the form of a blacklist in order to label domains in the graph as suspected domains. Our preliminary experimental results support our proposed hypothesis that domains (or IPs) connected to malicious domains have a higher probability of becoming malicious as well. The main contribution of this paper lies in the novelty of the algorithm and the strength of the supporting experimental study, which uses a sizable DNS database (more than one million IPs and domains) and proves the feasibility of our approach. The rest of this paper is organized as follows. Section 2 provides a more detailed background on DNS and IP characteristics and on the classical flow models, and then surveys the related work. Section 3 discusses the graph construction, the weight assignment problem and the flow algorithm. Section 4 presents the results of our experimental evaluation, and Section 5 concludes the paper and outlines future research.

Background and Related Work

The domain name system (DNS) translates Internet domains and host names into IP addresses. It is implemented as a hierarchical and distributed database containing various types of data, including host names and domain names, and provides an application-level protocol between clients and servers. An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer host names into IP addresses. Unlike a phone book, the DNS can be quickly updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs). In order to get the IP of a domain, the host usually consults a local recursive DNS server (RDNS). The RDNS iteratively discovers which authoritative name server is responsible for each zone. The result of this process is the mapping from the requested domain to its IP.
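As a concrete illustration of the A-record lookups that produce such domain-to-IP mappings, the following minimal Python sketch resolves the IPv4 addresses of a domain using the standard library resolver. It is our own illustration, not part of the system described in this paper, whose A-records come from passively collected ISP logs rather than live queries.

```python
# Illustrative only: resolve the A records of a domain, i.e., the
# domain-to-IP mapping on which the topology graph is built.
import socket

def resolve_a_records(domain: str) -> list[str]:
    """Return the IPv4 addresses currently mapped to `domain`."""
    try:
        _, _, ip_list = socket.gethostbyname_ex(domain)
        return ip_list
    except socket.gaierror:
        return []  # NXDOMAIN or resolver failure

if __name__ == "__main__":
    print(resolve_a_records("example.com"))
```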
Two categories of models are related to our work. The first category deals with ways to compute domain reputation. The second deals with flow algorithms for the computation of trust and reputation in general. Domain reputation is a relatively new research area. The Notos model for assigning reputation to domains [START_REF] Antonakakis | Building a dynamic reputation model for dns[END_REF] was the first to use statistical features of the DNS topology data and to apply machine learning methods to construct a reputation prediction classifier. Notos uses historical DNS information collected passively from multiple DNS resolvers to build a model of how network resources are allocated and operated for professionally run Internet services. Specifically, it constructs a set of clusters representing various types of popular domain statistics and computes features which represent the distance of a specific domain from these clusters. Notos also uses information about malicious domains obtained from sources such as spam-traps, honeynets, and malware analysis services to build a set of features representing how network resources are typically allocated by Internet miscreants. With the combination of these features, Notos constructs a classifier and assigns reputation scores to new, previously unseen domain names. (Note that Notos relies heavily on information about highly popular sites such as Akamai, which is not publicly available and therefore makes comparison difficult.) The Exposure system [START_REF] Leyla | Exposure finding malicious domains using passive dns analysis[END_REF] collects data from the DNS answers returned by authoritative DNS servers and uses a set of 15 features that are divided into four feature types: time-based features, DNS answer-based features, TTL value-based features, and domain name-based features. These features are used to construct a classifier based on the J48 decision tree algorithm [START_REF] Witten | Data Mining: Practical Machine Learning Tools and Techniquel[END_REF] in order to determine whether a domain name is malicious or not. Kopis [START_REF] Antonakakis | Detecting malware domains at the upper dns hierarchy[END_REF] is a system for monitoring the high levels of the DNS hierarchy in order to discover anomalies in malicious DNS activities. Unlike other detection systems such as Notos [START_REF] Antonakakis | Building a dynamic reputation model for dns[END_REF] or Exposure [START_REF] Leyla | Exposure finding malicious domains using passive dns analysis[END_REF], Kopis takes advantage of the global visibility of DNS traffic at the upper levels of the DNS hierarchy to detect malicious domains. After the features are collected, it uses the random forest technique as the machine learning algorithm to build the reputation prediction classifier. In the category of flow algorithms for the computation of trust in general, two models are of specific interest to our work. The first is Eigentrust [START_REF] Kamvar | The eigentrust algorithm for reputation management in p2p networks[END_REF], a reputation management algorithm for peer-to-peer networks. The algorithm provides each peer in the network with a unique global trust value based on the peer's history of uploads, and thus aims to reduce the number of inauthentic files in a P2P network. The algorithm computes trust and reputation by transitive iteration through chains of trusting users. The PageRank algorithm [START_REF] Page | Pagerank citation ranking: Bringing order to the web[END_REF] uses a similar approach; however, it contains special features related to URL referencing. Guha et al. [START_REF] Guha | Propagation of trust and distrus[END_REF] introduce algorithms for implementing a web-of-trust that allows people to express either trust or distrust in other people. Two matrices representing the trust and distrust between people are built using four types of trust relationships. They present several schemes for explicitly modeling and propagating trust and distrust, and propose methods for combining the two using a weighted linear combination. The propagation of trust was also used by Coskun et al.
[START_REF] Coskun | Friends of an enemy: identifying local members of peer-to-peer botnets using mutual contacts[END_REF] for detecting potential members of botnets in P2P networks. Their proposed technique is based on the observation that peers of a P2P botnet with an unstructured topology communicate with other peers in order to receive commands and updates. Since there is a significant probability that a pair of bots within a network have a mutual contact, they construct a mutual-contact graph. This graph differs from the DNS topology graph we rely on; the attributes and semantics underlying our approach are different, and so, accordingly, is the algorithm we propose. Wu et al. [START_REF] Wu | Propagating trust and distrust to demote web spam[END_REF] use the distrust algorithm presented by Guha et al. [START_REF] Guha | Propagation of trust and distrus[END_REF] for detecting spam domains, but use URL references rather than DNS data to derive the edges between domain nodes. They also discuss trust attenuation and the division of trust between a parent and its "children". Yadav et al. [START_REF] Yadav | Detecting algorithmically generated malicious domain names[END_REF] describe an approach to detect malicious domains based mainly on the distribution and similarity of their names. They claim that many botnets use the technique of a domain generation algorithm (DGA), and they show that domains generated in this way have certain characteristics which help in their detection. The domain names usually have a part in common, e.g. the top level domain (TLD), or a similar distribution of alphanumeric characters in their names. The success of using the above characteristics in [START_REF] Yadav | Detecting algorithmically generated malicious domain names[END_REF] motivates the construction of domain-domain edges in our graph as well. There are quite a few papers which use DNS data logs to detect botnets and malicious domains. However, these papers use DNS traffic behavior and not the mapping information used by Notos and in our work. Villamarin-Salomon et al. [START_REF] Villamarin-Salomon | Bayesian bot detection based on dns traffic similarity[END_REF] provide a C&C detection technique motivated by the fact that bots typically initiate contact with C&C servers to poll for instructions. As an example, for each domain they aggregate the number of non-existent domain (NXDOMAIN) responses per hour and use it as one of the classification features. Another work in this category, presented by Choi and Lee [START_REF] Choi | Identifying botnets by capturing group activities in dns traffic[END_REF], monitors DNS traffic to detect botnets, whose members form group activity by issuing similar DNS queries simultaneously. They assume that infected hosts perform DNS queries on several occasions; using this data they construct a feature vector and apply a clustering technique to identify malicious domains. As discussed above, although DNS traffic data has significant features, it is difficult to obtain compared to DNS topological data. To summarize, although there is some previous work on domain reputation using DNS statistical features (e.g., Notos), and there exist flow algorithms in other trust and reputation domains, the combination of the two as used in this paper is new.

The Flow Model

The goal of the flow algorithm is to assign reputation scores to domains, given an initial list of domains with known reputation (good or bad).
The first step is the construction of the Domain-IP graph based on information obtained from a large set of successful DNS transactions represented as A-records. The A-records are used to construct the DNS topology graph, where vertices represent IPs and domains, and the weighted edges represent the strength of their connections. This is described next. In subsection 3.2 we present the flow algorithm in detail and describe the method for combining good and bad reputation scores. Finally, we discuss an optimization of the algorithm needed for large graphs.

Constructing the graph and assigning edge weights

The DNS topology graph consists of two types of vertices, domains and IPs, deriving four types of edges between them. To construct the graph we use A-records, as well as data available from public sources, and estimate the strength of the connection between any two vertices (IP or domain) by the amount of common data between them. The IP data for each IP consists of the following five characteristics, available from sources such as the WHOIS database [15]:
- Autonomous System Number (ASN): a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators that present a common, clearly defined routing policy to the Internet. The ASN is the index of this collection.
- Border Gateway Protocol (BGP) prefix: BGP is a standardized exterior gateway protocol designed to exchange routing and reachability information between autonomous systems (AS) on the Internet; the protocol defines the routing of each ASN. A BGP prefix is a range of IPs to which an AS routes.
- Registrar: the name of the organization that registered the IP.
- Country: the country to which the IP belongs.
- Registration date: the date and time at which the IP was registered.
For domain data, the key concept is the parent of a domain. A k-Top Level Domain (kTLD) is the k-suffix of the domain name [START_REF] Antonakakis | Building a dynamic reputation model for dns[END_REF]. For example, for the domain finance.msn.com, the 3TLD is the same as the domain name, finance.msn.com; the 2TLD is .msn.com; and the 1TLD is .com. We use the following notation: Set_IP is the set containing all the IPs; Set_domain is the set containing all the domains; Set_parent ⊆ Set_domain is the set containing all the parents; Set_commonAtt is the set of attribute vectors derived from IP data. Attributes appear in the following order: country, ASN, BGP prefix, registrar, registration date. Missing information is replaced with 'none'. For example, (DE, none, none, STRATO.DE, none) ∈ Set_commonAtt is the vector element in which the only information available is the country and the registrar. We define a weight function that assigns a weight to each edge in the graph. Let w be a weight function w : (u, v) → [0, 1] used to assign a weight to the edge (u, v), where u, v ∈ Set_IP ∪ Set_domain. For each edge type we consider three alternative weight functions, as follows:
1. IP to domain: For ip ∈ Set_IP and the list of A-records, let D_ip be all the domains mapped to ip. For each d ∈ D_ip we define: w(ip, d) = 1/|D_ip| ; 1/log|D_ip| ; 1.
2. Domain to IP: For d ∈ Set_domain and the list of A-records, let I_d be all the IPs that were mapped to d. For each ip ∈ I_d we define: w(d, ip) = 1/|I_d| ; 1/log|I_d| ; 1.
3. IP to IP: Let commonAtt be a combination of the five attributes of IP data, and let Set_commonAtt be the set of all IPs with the attribute combination commonAtt. For each ip1, ip2 ∈ Set_commonAtt s.t. ip1 ≠ ip2 we define: w(ip1, ip2) = 1/|Set_commonAtt| ; 1/log|Set_commonAtt| ; 1.
4. Domain to domain: Let P_d be the set of all domains with the same parent domain d. For each d1, d2 ∈ P_d s.t. d1 ≠ d2 we define: w(d1, d2) = 1/|P_d| ; 1/log|P_d| ; 1.
The intuition behind the above weights is that the effect of a domain's reputation on the IPs it is mapped to increases as the number of mapped IPs decreases, and analogously for the other edge types. Using three alternative functions for each of the four edge types produces 81 different combinations, from which a subset was tested.
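To make the weight assignment concrete, here is a minimal Python sketch of the 1/|set| variant for the two A-record edge types. It is our own illustration, not the paper's code; the input format, a list of (domain, ip) pairs named a_records, and all identifiers are assumptions.

```python
# A minimal sketch of the 1/|set| edge-weight variant for IP->domain
# and domain->IP edges. `a_records` is an assumed input: (domain, ip)
# pairs from successful DNS resolutions.
from collections import defaultdict

def build_edge_weights(a_records):
    domains_of_ip = defaultdict(set)  # D_ip: domains mapped to each IP
    ips_of_domain = defaultdict(set)  # I_d: IPs mapped to each domain
    for domain, ip in a_records:
        domains_of_ip[ip].add(domain)
        ips_of_domain[domain].add(ip)

    weights = {}  # (u, v) -> edge weight
    for ip, ds in domains_of_ip.items():      # IP -> domain: w = 1/|D_ip|
        for d in ds:
            weights[(ip, d)] = 1.0 / len(ds)
    for d, ips in ips_of_domain.items():      # domain -> IP: w = 1/|I_d|
        for ip in ips:
            weights[(d, ip)] = 1.0 / len(ips)
    # IP-IP and domain-domain clique edges (same attribute vector, same
    # parent) would be added analogously with 1/|Set_commonAtt|, 1/|P_d|.
    return weights
```

The resulting dictionary can then be materialized as the (sparse) adjacency matrix M described next.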
We represent our graph as an adjacency matrix M, in which the value of entry (i, j) is the weight between vertex i and vertex j, computed according to the selected combination.

The Flow algorithm

The flow algorithm models the idea that IPs and domains affect the reputation of the IPs and domains connected to them. This is done by propagating a node's reputation iteratively, so that in each iteration the propagated reputation is added to the total reputation accumulated at a domain or IP node, using some attenuation factor. The attenuation factor is a means to reduce the amount of reputation accumulated by transitivity. The flow algorithm is executed separately to propagate good reputation and bad reputation. The algorithm is presented in two parts: the first is the Basic Flow, which describes the flow algorithm in general, and the second is the Combined Algorithm, which focuses on the way bad reputation and good reputation are combined. Figure 1 outlines the preparation steps prior to the execution of the basic algorithm. The Basic Flow algorithm starts with an initial set of domains which are labeled either bad or good. The parameters of the algorithm are:
1. A matrix M, where each entry represents a weighted edge between two vertices (domains or IPs); M^T denotes the transpose of the matrix M.
2. V_initial - a vector representing the initial reputation value of each vertex, based on the initial set of labeled vertices.
3. n ∈ N - the number of iterations.
4. atten ∈ [0, 1] - the attenuation factor.
5. θ ∈ [0, 1] - a reputation threshold above which a vertex is considered bad.
Algorithm 1 outlines the basic flow algorithm, which propagates the reputation from a node to its neighbors and is carried out in three steps:
1. Calculate the matrix for the i-th iteration: M_i = (atten • M^T)^i
2. Calculate the final matrix after n iterations as the sum of all the matrices with attenuation: M_final = Σ_{i=0}^{n} M_i
3. Calculate the final reputation vector V_final = M_final • V_initial, the reputation scores of all vertices after the propagation.

Algorithm 1 Basic
procedure Basic(M, n, atten, V_initial)
    for i = 0 to n do
        M_i ← (atten • M^T)^i
    M_final ← Σ_{i=0}^{n} M_i
    V_final ← M_final • V_initial
    return V_final

Applying the algorithm separately to propagate bad and good reputation may result in a node that is labeled both good and bad. The final labeling of such a node depends on the relative importance given to each label, as done in the combined algorithm.
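The matrix powers of Algorithm 1 need never be formed explicitly: since V_final = Σ_{i=0}^{n} (atten • M^T)^i • V_initial, the sum can be accumulated with repeated matrix-vector products. The following NumPy sketch (our own, using a dense matrix for clarity; at the scale reported in Section 4 a sparse matrix is required) illustrates this.

```python
# A sketch of Algorithm 1: accumulate sum_{i=0..n} (atten * M^T)^i @ v.
import numpy as np

def basic_flow(M: np.ndarray, v_initial: np.ndarray,
               n: int, atten: float) -> np.ndarray:
    """Propagate v_initial for n iterations with attenuation `atten`."""
    v = np.asarray(v_initial, dtype=float)  # i = 0 term of the sum
    v_final = np.zeros_like(v)
    for _ in range(n + 1):
        v_final += v                # add the current term M_i @ v_initial
        v = atten * (M.T @ v)       # advance to the next matrix power
    return v_final
```

With V_initial holding 1 for the initially labeled domains and 0 elsewhere, the returned vector is the propagated reputation of step 3.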
The combined algorithm runs the basic flow algorithm twice, with V_initial = V_good and V_initial = V_bad. Each flow is configured independently with the following parameters: attenuation factor, threshold, and number of iterations. We denote by n_good, n_bad ∈ N the number of iterations for the good and bad flow, respectively; by atten_good, atten_bad ∈ R the attenuation factor for the good and bad flow, respectively; and by w ∈ R the weight of the "good" reputation used when combining the results. Algorithm 2 uses the basic flow algorithm to compute the good and bad reputation of each vertex and merges the results. Set_Mal is the resulting set of domains identified as bad.

Algorithm 2 Combined
1: V_good ← Basic(M, n_good, atten_good, V_good)
2: V_bad ← Basic(M, n_bad, atten_bad, V_bad)
3: Set_Mal ← ∅
4: for d ∈ Domains do
5:     if V_bad[d] + w • V_good[d] > θ then Set_Mal ← Set_Mal ∪ {d}
6: return Set_Mal

The combined algorithm ignores the following observations, which lead us to examine another approach:
- A reputation score is a value in the range [0, 1]; therefore a vertex should not distribute a reputation score higher than 1 to its neighbors.
- Initial labels are facts, and therefore domains that were initially labeled as good or bad should maintain this label throughout the propagation process.
- A domain gaining a reputation value above a predefined threshold, bad or good, is labeled accordingly and maintains this label throughout the propagation process.
The extended algorithm, shown as Algorithm 3, is proposed to address these observations. The main new procedure is the Normalize procedure, which normalizes the scores after each iteration. The threshold used is the average of the scores so far (good or bad, according to the actual procedure).

Algorithm 3 Extended
1: for x ∈ Domains do
2:     V_bad[x] ← 0; V_good[x] ← 0
3:     if x ∈ Set_bad then V_bad[x] ← 1
4:     if x ∈ Set_good then V_good[x] ← 1
5: for i = 1 to n do
6:     V_good ← V_good + (M^T) • V_good
7:     V_bad ← V_bad + (M^T) • V_bad
8:     Normalize(Set_good, V_bad, V_good)
9:     Normalize(Set_bad, V_good, V_bad)
10: Set_Mal ← ∅
11: for d ∈ Domains do
12:     if V_bad[d] + w • V_good[d] > θ then Set_Mal ← Set_Mal ∪ {d}
13: return Set_Mal
14: procedure Normalize(Set1, V1, V2)
15:     avg2 ← 0
16:     if Σ_{d∈Set1} V_good[d] > 0 then avg2 ← (Σ_{d∈Set1} V2[d]) / |{d ∈ Set1 : V2[d] > 0}|
17:     for x ∈ Domains do
18:         if V1[x] > 1 then V1[x] ← 1
19:         if x ∈ Set1 then V2[x] ← 0
20:         if V1[x] > 1 - avg2 then V1[x] ← 1

Optimization for large graphs

As we deal with millions of domains and IPs, we have to calculate the scores in an efficient way. The most computationally intensive step is the matrix multiplication (see Figure 1). To speed it up we use a special property of our graph: the existence of cliques. There are two kinds of cliques in the graph: cliques of IPs which share the same set of common attributes, and cliques of domains which share the same parent or the same name server. Since a clique can contain thousands of nodes, we calculate the flow within it separately and not as part of the matrix multiplication. A clique in a graph is a subset of its vertices such that every two vertices in the subset are connected by an edge. We define a Balanced Clique as a clique in which all vertices have the same attribute values; this necessarily leads to a clique with a single weight value on all of its edges.

Theorem 1. Let M be a matrix representing a weighted directed graph and V a vector representing vertex values, and let BC be the set of vertices that form a balanced clique, such that the weight on every edge in BC is const_BC ∈ R. For a vertex v ∈ BC, connected only to vertices in BC:

((M^T) • V)[v] = const_BC • Σ_{i∈BC, i≠v} V[i]    (1)

Due to space limitations, the proof of this theorem is omitted. Using the property of a balanced clique, we devised the following algorithm for computing the scores: the reputation of every clique vertex is the sum of the reputation scores of all other vertices of the clique, multiplied by the constant edge weight of the clique. This is shown in Algorithm 4.
Algorithm 4 AssignBCScores
procedure AssignBCScores(BC, const_BC, V)
    Sum ← Σ_{i∈BC} V[i]
    for i ∈ BC do
        V_result[i] ← (Sum - V[i]) • const_BC
    return V_result

The complexity of this algorithm is O(|BC|), which is a significant improvement over O(|BC|^2), the complexity of the matrix multiplication approach. In our graph, all edges between vertices of the same type (IPs or domains) belong to balanced cliques, and therefore this optimization plays a major role.
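The equivalence stated by Theorem 1 is easy to check numerically. The following NumPy sketch, on an assumed toy clique whose size and weight are chosen only for illustration, compares the quadratic matrix-vector product with the linear-time shortcut of Algorithm 4.

```python
# A quick numeric check of Theorem 1 / Algorithm 4 on a toy clique:
# inside a balanced clique, one pass of (M^T) @ V equals the O(|BC|)
# clique shortcut (sum - own value) * constant weight.
import numpy as np

n, const_bc = 5, 0.25                          # assumed toy values
M = const_bc * (np.ones((n, n)) - np.eye(n))   # balanced clique, no self-loops
V = np.random.rand(n)

direct = M.T @ V                       # O(n^2) matrix-vector product
shortcut = (V.sum() - V) * const_bc    # Algorithm 4, O(n)
assert np.allclose(direct, shortcut)
```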
Experiment results

The evaluation of the algorithm uses real data collected from several sources. To understand the experiments and the results, we first describe the data obtained for constructing the graph and the criteria used for evaluating the results.

Data sources

We used five sources of data to construct the graph:
- A-records: a database of successful mappings between IPs and domains, collected by Cyren [START_REF] Cyren | A provider of cloud-based security solutions[END_REF] from a large ISP over several months. This data consists of over one million domains and IPs, which are used to construct the nodes of the graph.
- Feed-framework: a list of malicious domains collected and analyzed by Cyren over the same period of time as the collected A-records. This list is intersected with the domains that appear in the A-records and serves as the initial known "bad" domains vector.
- WHOIS [15]: a query and response protocol that is widely used for querying databases that store the registered users or assignees of an Internet resource, such as a domain name, an IP address block, or an autonomous system. We use WHOIS to get the IP data, which consists of the five characteristics of an IP (ASN, BGP prefix, registrar, country, registration date).
- VirusTotal [START_REF]VirusTotal: A free virus, malware and URL online scanning service[END_REF]: a website that provides scanning of domains for viruses and other malware. It uses information from 52 different antivirus products and provides the time at which a malware domain was detected by one of them.
- Alexa: the Alexa database ranks websites based on a combined measure of page views and distinct site users. Alexa lists the "top websites" based on this data averaged over a three-month period. We use the set of top domains as our initial benign domains, intersecting it with the domains in the A-records. This set is filtered to remove domains which appear as malicious in VirusTotal.

We conducted two sets of experiments, the Tuning test and the Time test. In the Tuning-test experiment, the DNS graph is built from the entire set of A-records, but the domains obtained from the feed-framework are divided into two subsets: an initial set and a test set. The initial set is used as the initial vector of "bad" domains for the flow algorithm. The test set is left out, to be identified as bad by the algorithm. Obviously, not all bad domains of the test set can be identified, mainly because the division of domains was done randomly and some of the domains in the test set are not connected to the initial set. Yet this experiment was very useful for determining the best parameters of the algorithm, which were later used in the Time-test experiment. The Time-test experiment is carried out in two steps corresponding to two consecutive time periods. In the first step, we construct the graph from the data sources described above. We use the feed-framework data to set the initial vector of bad domains in the first time period and the Alexa data to set the initial vector of good domains. We execute the flow algorithm (combined or extended) and assign the final score to each node of the graph. To validate the results of the Time test, we check the domains detected as malicious by the algorithm against data from VirusTotal for the period following the time period used in the first step. We sort the domains by descending bad reputation score and use the score of the domain in the k-th position as the threshold. As the evaluation criterion for this test, we define a Prediction Factor (PF) which evaluates the ability of our algorithm to detect bad domains identified later by VirusTotal. We compute this factor as the ratio between the number of domains labeled as bad by our algorithm that were tagged by VirusTotal and the number of domains in a random set of the same size that were tagged by VirusTotal. A domain tested with VirusTotal is considered tagged only if it was tagged by at least one antivirus program. A PF_i factor considers a domain as tagged by VirusTotal if at least i antivirus products tagged the domain. To compute the prediction factor we extract the set HSet_k of domains with the k highest bad reputation scores found by the algorithm and randomly select a set RSet_k of k domains:

PF_i = |HSet_k ∩ Set^i_tagged| / |RSet_k ∩ Set^i_tagged|    (2)

where Set^i_tagged is the set of all domains tagged by at least i antivirus products. The value of PF_i indicates the extent to which our algorithm correctly identifies bad domains, compared to the performance of randomly selected domains. A similar approach was used by Cohen et al. [START_REF] Cohen | Early detection of outgoing spammers in large-scale service provider networks[END_REF], who compared the precision of a list of suspicious accounts returned by a spam detector against a randomly generated list. Next we describe the results of the two experiments.
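For illustration, Eq. (2) can be computed with a few lines of Python. This sketch is our own: the input names (scores, tagged) are assumptions, and the guard against an empty random intersection is our own addition.

```python
# A sketch of the prediction factor of Eq. (2). `scores` maps each
# domain to its final bad-reputation score; `tagged` is the set of
# domains flagged by at least i anti-virus products on VirusTotal.
import random

def prediction_factor(scores: dict, tagged: set, k: int) -> float:
    ranked = sorted(scores, key=scores.get, reverse=True)
    hset_k = set(ranked[:k])                      # k highest-scoring domains
    rset_k = set(random.sample(list(scores), k))  # random baseline of size k
    hits_random = len(rset_k & tagged)
    return len(hset_k & tagged) / max(hits_random, 1)  # avoid division by zero
```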
Moreover, in all five types of experiments, the bin of domains with the highest scores contains 20-25 percent of all known malicious domains of S_test. The tests we conducted demonstrate the following three claims: 1. Domains with high 'bad' scores have a higher probability of being malicious. 2. The combination of the "good" and "bad" reputation scores improves the results. 3. After a relatively small number of iterations the results converge. Time-test The time test experiment uses the best parameters as determined by the Tuning test. The first step of the experiment uses data collected during three months (September-November 2014), while the second step uses data collected during the following two months (i.e., December 2014-January 2015). After the first step, in which we apply the flow algorithm (either the combined or the extended) and assign bad reputation scores to domains, we validate the domains with the highest 'bad' reputation scores that did not appear in the initial set of malicious domains against the information provided by VirusTotal. The data we used for the time test consists of 2,385,432 domains and 933,001 IPs constructing the nodes of the graph, of which 956 are tagged as malicious and 1,322 are tagged as benign (Alexa). The resulting edges are 2,814,963 IP-to-Domain edges, 13,052,917,356 Domain-to-Domain edges and 289,097,046 IP-to-IP edges. The very large numbers of IP-to-IP and Domain-to-Domain edges emphasize the importance of the optimized Algorithm 4 in Section 3.3. For the validation test we selected the 1000 highest bad reputation domains and calculated the prediction factor PF_i for i ∈ {1, 2} (see Equation 2). Out of 1000 random domains checked against VirusTotal, only 2.775% were tagged by at least one anti-virus product and 0.875% of the random domains were tagged by at least two anti-virus products. Table 1 presents the results of the experiment using the Combined and Extended algorithms. We used only 5 iterations, since the Tuning test results indicate that this number is sufficient. For the combined algorithm the weight of the good reputation score was set to w = -1 and the bad attenuation was set to 1 and 0.8 in tests 1 and 2 respectively. We can see from the table that with the parameters of test 2, out of the 1000 domains with the highest score, 323 were tagged by at least one of the anti-viruses in VirusTotal (tagged by one) and 203 were tagged by at least two anti-viruses (tagged by two), which derives prediction factors of PF_1 = 11.61 and PF_2 = 23.2, respectively. Due to space limitations we do not present all the results; however, the best results were achieved in the second test. For example, out of the 200 domains with the highest scores in test 2, 107 domains were tagged, i.e., 53.5% of this set. Figure 3 demonstrates the ability of our algorithms to predict malicious domains, using the prediction factor (Equation 2). The figure shows that if we consider a smaller set of the highest-scoring domains, the prediction rate is better but fewer malicious domains are predicted overall. If we consider a larger set, we end up with a larger number of domains that we wrongly suspect as malicious (i.e., false positives).
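A minimal sketch of this validation step, following Equation (2) (Python; the data structures and names are ours, not taken from the paper's code):

import random

def prediction_factor(scores, av_tags, k, i=1):
    """PF_i: hit rate of the k highest-scoring domains vs. k random
    domains, where a domain is a hit if at least i anti-virus products
    tagged it (Equation 2).

    scores:  dict domain -> bad reputation score
    av_tags: dict domain -> number of anti-virus products tagging it
    """
    tagged_i = {d for d, n in av_tags.items() if n >= i}
    hset_k = set(sorted(scores, key=scores.get, reverse=True)[:k])
    rset_k = set(random.sample(sorted(scores), k))
    hits_random = len(rset_k & tagged_i)
    return len(hset_k & tagged_i) / hits_random if hits_random else float("inf")

In expectation over the random draw, with k = 1000 and the tag rates reported above, this reproduces the reported PF_1 ≈ 11.6.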
Conclusions This paper discusses the problem of detecting unknown malicious domains by estimating their reputation score. It applies a classical flow algorithm for propagating trust on a DNS topology graph database, computing the reputation of domains and thus discovering new suspicious domains. Our work faces two major challenges: the first is building a DNS graph that represents connections between IPs and domains based on the network topology only, and not on dynamic behavior. The second is the design and implementation of a flow algorithm similar to those used for computing trust and distrust between people. We presented an algorithm that can be deployed easily by an ISP using non-private DNS records. The algorithm was evaluated to demonstrate its effectiveness in real-life situations. The results presented in this paper are quite encouraging. In future work we intend to further improve the results by taking more data features into consideration and by conducting further tests with more parameters.
Fig. 1: The process for computing reputation scores: (1) create the topology graph, assign weights and represent it as a matrix; (2) create the initial vector used for propagation; (3) use the vector and the matrix as input to the flow algorithm; (4) output the final reputation scores.
Fig. 2: The percentage of malicious domains by reputation score.
Fig. 3: Prediction factor with respect to the number of highest bad domains.
Candidate weight functions (Section 3.1): 1. IP to Domain: let D_ip be the set of domains mapped to ip; for each d ∈ D_ip we define w(ip, d) = 1/|D_ip|; 1/log|D_ip|; or 1. 2. Domain to IP: for d ∈ Set_domain and a list of A-records, let I_d be all the IPs that were mapped to d; for each ip ∈ I_d we define w(d, ip) = 1/|I_d|; 1/log|I_d|; or 1. 3. IP to IP: let commonAtt be a combination of the five attributes of the IP data, and Set_commonAtt the set of all IPs with the attribute combination commonAtt; for each ip_1, ip_2 ∈ Set_commonAtt s.t. ip_1 ≠ ip_2 we define w(ip_1, ip_2) = 1/|Set_commonAtt|; 1/log|Set_commonAtt|; or 1. 4. Domain to Domain: let P_d be the set of all domains with the same parent domain d; for each d_1, d_2 ∈ P_d s.t. d_1 ≠ d_2 we define w(d_1, d_2) = 1/|P_d|; 1/log|P_d|; or 1.
Table 1: Results for the time test experiment
Test | Atten_bad | Atten_good | Algorithm | Tagged by one | Tagged by two
1    | 0.8       | 1          | combined  | 334           | 212
2    | 1         | 1          | combined  | 323           | 203
3    | none      | none       | extended  | 247           | 136
Acknowledgement: This research was supported by a grant from the Israeli Ministry of Economics, the Israeli Ministry of Science, Technology and Space, and the Frankel Center for Computer Science.
40,614
[ "1029886", "990584", "978093" ]
[ "300759", "468838", "300759" ]
01745856
en
[ "sdv" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01745856/file/2016%201220%20E-health%20platform%20adherence%20factors%20sans%20fig.pdf
Sophie Trijau, Vincent Pradel, Hervé Servy, Pierre Lafforgue, Thao Pham email: thao.pham@ap-hm. Patient e-health platform for Rheumatoid Arthritis: accuracy and adherence factors Keywords: personal health records, rheumatoid arthritis, accuracy factors, adherence Background: Personal health records (PHRs) are patient-controlled repositories, capturing health data entered by individuals and providing information related to their care. These tools improve treatment adherence, but data are scarce concerning adherence to the tool itself. The accuracy of the self-recorded data remains controversial. We assessed how support measures improve PHR adoption and determined the factors that influence the accuracy of self-recorded data and the tool adherence of RA patients. Methods: A controlled randomized study using a PHR tool with integrated electronic health records developed by SANOIA. RA patients fulfilling the ACR/EULAR 2010 criteria and with web access were randomized into 3 groups: Group 1 patients were given written information to create and manage a PHR; Group 2 patients received written information and a web technician hotline 48 hours after inclusion; Group 3 patients began their PHR with their rheumatologist during the consultation. Results: 56 RA patients were included (female: 73%, mean age: 57.1, mean DAS28: 3.04, mean RAPID-3: 2.93). Self-reported data accuracy was significantly higher in Groups 2 (73.7%) and 3 (82.4%) than in Group 1 (45.0%) (P = 0.04). Patient adherence was higher in Group 2 (78.9%) compared with Groups 1 (55.0%) and 3 (58.8%) (P = 0.45). Accuracy was correlated with adherence (P < 0.0001). Gender, age, disease duration and activity, treatments, and patient level of interest were not correlated with data accuracy or patient adherence. Conclusion: The accuracy of the information collected with the PHR was relevant and better when patients were initially assisted either by their physician or by non-medical phone support. We also observed better adherence when patients were initially assisted. Background Personal health records (PHRs) are tools enabling patients to collect patient-reported outcomes (PROs) and to report their medical conditions. The aim of such patient-controlled services is to help individuals play a more active role and contribute to shared decisions in chronic diseases such as rheumatoid arthritis (RA). These tools have shown their ability to improve treatment adherence [START_REF] Chrischilles | Personal health records: a randomized trial of effects on elder medication safety[END_REF][START_REF] Vrijens | Electronically monitored dosing histories can be used to develop a medication-taking habit and manage patient adherence[END_REF][START_REF] Hogan | Accuracy of data in computer-based patient records[END_REF], but data are scarce concerning adherence to the tool itself. For instance, among 153 consecutively interviewed patients with rheumatic disease, although patients appreciated having access to their online electronic health records, they expressed low confidence in the Internet [START_REF] Richter | Changing attitudes towards online electronic health records and online patient documentation in rheumatology outpatients[END_REF]. This lack of confidence may impact patient adherence. The accuracy of self-recorded data related to medical conditions in e-health platforms also remains controversial.
Our goal was to assess how support measures, technical or medical, could improve electronic personal health record system (PHR) adoption, and to determine the factors modifying the accuracy of self-recorded data and the tool adherence of RA patients. Materials and methods Electronic personal health records tool Sanoia has developed a web tool integrated with electronic personal health records which offers full privacy protection using an innovative anonymity technique [START_REF] Chiche | Evaluation of a prototype electronic personal health record for patients with idiopathic thrombocytopenic purpura[END_REF]. The tool already included numerous factors related to chronic disease, such as vaccination records and history. It can also be designed for a specific disease, in our case RA, and proposes an adapted PROs evaluation, such as the Routine Assessment of Patient Index Data (RAPID-3) [START_REF] Pincus | An index of the three core data set patient questionnaire measures distinguishes efficacy of active treatment from that of placebo as effectively as the American College of Rheumatology 20% response criteria (ACR20) or the Disease Activity Score (DAS) in a rheumatoid arthritis clinical trial[END_REF]. Patients Inclusion criteria: outpatients fulfilling the ACR/EULAR 2010 criteria for rheumatoid arthritis [START_REF] Aletaha | Rheumatoid arthritis classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative[END_REF]. As our aim was to assess factors associated with PHR adherence and data accuracy, we focused only on patients able to use the e-PHR, so patients without web access were not invited to participate in the study. Study Design We conducted a prospective controlled randomized study. From February to March 2011, the five participating rheumatologists proposed that their consecutive RA patients use an e-health platform aimed at collecting patient-reported outcomes (PROs) and medical conditions. Each rheumatologist had a randomization list. Patients were randomized into 3 groups: Group 1 patients were given simple written information about how to create and manage their file on the Sanoia platform; Group 2 patients received written information and support to manage their files on the platform via a web-technician hotline 48 hours after inclusion; and Group 3 patients started their Sanoia platform files with their rheumatologist during the consultation. Patients were randomized individually by each rheumatologist. Each patient fulfilling the inclusion criteria and accepting to participate in the study was assigned sequentially to a group according to his or her order of inclusion by his/her rheumatologist. This method was preferred to block randomization in order to maximize the balance of group sizes, even if some rheumatologists included few patients in a context of competitive recruitment. Collected data Adherence Assessments Patients were considered as tool-adherent if they connected at least twice, and as non-adherent if they connected once or never, between baseline (M0) and the 3-month evaluation (M3). Adherence was also assessed at 6 months (M6).
Accuracy Assessment We collected the following data: demographics, disease activity data including the disease activity score (DAS-28) and the RAPID-3 [START_REF] Pincus | An index of the three core data set patient questionnaire measures distinguishes efficacy of active treatment from that of placebo as effectively as the American College of Rheumatology 20% response criteria (ACR20) or the Disease Activity Score (DAS) in a rheumatoid arthritis clinical trial[END_REF][START_REF] Prevoo | Modified disease activity scores that include twenty-eight-joint counts. Development and validation in a prospective longitudinal study of patients with rheumatoid arthritis[END_REF], the amount and accuracy of self-recorded data, and ongoing treatment at baseline and 3 months after. For the latter, we focused on medical history, current treatment, consultations, the reporting of underlying events, and other points of interest such as vaccine status, smoking status, contraception and imaging. Information accuracy was compared with the rheumatologist's medical records, considered as the gold standard. All the records were assessed by the same reviewer (ST) and scored as follows: medical history (0-4 points), current treatment (0-4 points), vaccine status (0-1 point) and known allergies (0-1 point), for a maximum score of 10. A good accuracy score was defined as >9/10 (see the sketch below). Main outcomes The 2 primary outcomes were the adherence at M3, defined as the proportion of patients who connected at least 2 times between baseline and 3 months, and the accuracy of the patient self-declared data, defined as the proportion of patients with a good accuracy score. Statistical Analysis We compared reporting accuracy and adherence among the groups. Accuracy was assessed as a dichotomous variable. Patients with a good accuracy score, i.e. >9/10, were scored 1. Patients with an accuracy score ≤9/10 or with missing data were scored 0. We also assessed the impact of the following variables on accuracy and adherence: age (as a continuous variable and as age > 60 years yes/no), gender, disease duration, co-morbidities, baseline disease activity (DAS28, RAPID-3), treatments including biologics and corticosteroids, and the patient's level of interest in the e-PHR tool. The patient's interest was assessed on a 0-10 numerical scale. Continuous data were described by means (SD) and categorical variables were expressed as frequencies and percentages. To compare groups, we used Chi-square (Fisher's exact test when Chi-square application conditions were not met), Mann-Whitney and Kruskal-Wallis tests (depending on categorical/continuous variables and the number of modalities of the categorical variables). SPSS version 17.0 was used for data management and statistical analysis.
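As an illustration, the accuracy scoring rule defined above can be written as a short sketch (Python; the function and argument names are ours, not part of the study protocol):

def accuracy_score(history_pts, treatment_pts, vaccine_ok, allergies_ok):
    """Accuracy score out of 10: medical history (0-4), current
    treatment (0-4), vaccine status (0-1), known allergies (0-1)."""
    assert 0 <= history_pts <= 4 and 0 <= treatment_pts <= 4
    score = history_pts + treatment_pts + int(vaccine_ok) + int(allergies_ok)
    return score, score > 9  # "good accuracy" threshold used in the study

print(accuracy_score(4, 4, True, True))   # (10, True)
print(accuracy_score(4, 4, True, False))  # (9, False)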
Results We included 56 RA patients, with 20, 19 and 17 patients in Groups 1, 2 and 3 respectively. Their main baseline characteristics were the following: female: 73%, mean age: 57.1 years, mean DAS28: 3.04 and mean RAPID-3: 2.93. Detailed characteristics are reported in Table 1. The proportion of patients who did not use the PHR tool was 35.0%, 21.1% and 19.6% in Groups 1, 2 and 3 respectively. Self-reported data accuracy was significantly higher in Groups 2 (73.7%) and 3 (82.4%) than in Group 1 (45.0%) (p < 0.04) (Figure 1). Moreover, two patients reported medical events that were not in their physician's medical records: a history of tuberculosis in a Group 3 patient and a costal chondroma in a Group 2 patient. Patient adherence was higher in Group 2 (78.95%) compared with Groups 1 (55.0%) and 3 (58.8%) (P = 0.45) at 3 months. Mean ITT connection frequencies are presented in Table 2. Among the patients who connected at least once (N = 13 for Group 1, 16 for Group 2 and 17 for Group 3), the mean number of connections between baseline and M3 was 10.3, 19.3 and 12.2 in Groups 1, 2 and 3 respectively. Adherence remained stable in Group 2 at 6 months (78.9%), whereas it decreased in Groups 1 and 3 (respectively 15.0% and 47.1%). Connection frequencies are presented in Table 2. Accuracy was correlated with adherence (P < 0.0001). Gender, age, disease duration, disease activity (DAS28, RAPID-3), treatments including biologics and corticosteroids, and patient level of interest were not correlated with data accuracy (Table 3). These variables were also not correlated with patient adherence. Discussion This is the first study showing how support measures can influence adoption of and adherence to a PHR and the accuracy of the recorded information. The quality of the information collected with the PHR was meaningful and better when patients were initially assisted either by their physician or by non-medical phone support. Agreement between self-reported information on PHRs and medical records has been assessed with varying results depending on the disease [START_REF] Simpson | Agreement between self-report of disease diagnoses and medical record validation in disabled older women: factors that modify agreement[END_REF][START_REF] Bergmann | Validity of self-reported diagnoses leading to hospitalization: a comparison of self-reports with hospital records in a prospective study of American adults[END_REF][START_REF] Van Gelder | Using Web-Based Questionnaires and Obstetric Records to Assess General Health Characteristics Among Pregnant Women: A Validation Study[END_REF][START_REF] Haapanen | Agreement between questionnaire data and medical records of chronic diseases in middle-aged and elderly Finnish men and women[END_REF][START_REF] De-Loyde | Which information source is best? Concordance between patient report, clinician report and medical records of patient co-morbidity and adjuvant therapy health information[END_REF][START_REF] Van Den Akker | Disease or no disease? Disagreement on diagnoses between self-reports and medical records of adult patients[END_REF]. Comparisons have shown good agreement for diabetes, hypertension, pulmonary disease, cerebrovascular disease and myocardial infarction, while other comparisons found low agreement for heart failure, chronic bronchitis, chronic obstructive pulmonary disease, hypertension, osteoporosis, osteoarthritis and RA [START_REF] Simpson | Agreement between self-report of disease diagnoses and medical record validation in disabled older women: factors that modify agreement[END_REF][START_REF] Van Den Akker | Disease or no disease?
Disagreement on diagnoses between self-reports and medical records of adult patients[END_REF][START_REF] Okura | Agreement between self-report questionnaires and medical record data was substantial for diabetes, hypertension, myocardial infarction and stroke but not for heart failure[END_REF][START_REF] Engstad | Validity of self-reported stroke : The Tromso Study[END_REF][START_REF] Klungel | Cardiovascular diseases and risk factors in a population-based study in The Netherlands: agreement between questionnaire information and medical records[END_REF][START_REF] Goldman | Evaluating the quality of self-reports of hypertension and diabetes[END_REF][START_REF] Malik | Patient perception versus medical record entry of health-related conditions among patients with heart failure[END_REF][START_REF] Merkin | Agreement of self-reported comorbid conditions with medical and physician reports varied by disease among end-stage renal disease patients[END_REF][START_REF] Skinner | Concordance between respondent self-reports and medical records for chronic conditions: experience from the Veterans Health Study[END_REF][START_REF] Hansen | Agreement between self-reported and general practitioner-reported chronic conditions among multimorbid patients in primary care -results of the MultiCare Cohort Study[END_REF]. In a large community-based cohort, low agreement was found between the 2,893 participating patients and their GPs, especially for RA (kappa: 0.17 [0.11-0.23]) [START_REF] Van Den Akker | Disease or no disease? Disagreement on diagnoses between self-reports and medical records of adult patients[END_REF]. In RA patients, over-reporting was associated with male gender, a higher number of diseases and a lower physical and mental quality of life. A higher number of diseases was also associated with under-reporting. We did not collect a quality of life variable, but in our RA sample, gender and comorbidities were not correlated with record accuracy. The study sample size was small and possibly not powerful enough to demonstrate a correlation with potential factors, such as biologic and corticosteroid treatment. The mode of questionnaire administration can affect data quality: survey responses differ with the mode used (e.g., self-administered versus interview modes, mail versus telephone interviews, telephone versus face-to-face interviews) [START_REF] Bowling | Mode of questionnaire administration can have serious effects on data quality[END_REF][START_REF] Feveile | A randomized trial of mailed questionnaires versus telephone interviews: response patterns in a survey[END_REF][START_REF] Christensen | Effect of survey mode on response patterns: comparison of face-to-face and self-administered modes in health surveys[END_REF]. As expected, our results confirm that patients who were supported in the process, either by a technician without medical knowledge or by an MD, had better data collection accuracy. The benefit may come from the support itself rather than from its quality, because we found no difference between the methods of support; however, the sample size of our study was insufficient to determine differences between the two support groups. Technical or medical support also improved adoption of and adherence to the PHR tool. In conclusion, our results suggest good or very good agreement between self-reported data collected with a PHR tool and medical records for RA patients. This agreement is improved by technical or medical process support, which also improves adoption of and adherence to the tool. Our results confirm that a patient with the right level of support could be a source of reliable data collection, opening new paths that should be confirmed over a longer time period and from an economic point of view.
Figure 1: Proportion of patients with reported data of good accuracy on the e-PHR tool
Table 1: Baseline characteristics
Table 2: PHR tool adherence in the 3 groups at M3 and M6 (ITT)
Table 3: Accuracy factors
Acknowledgements: We gratefully thank C. Foutrier-Morello, J. Fulpin, E. Senbel, S. Steib and S. Trijau for their help in recruiting patients for the study, and the study patients for their contribution. This study was supported by an unrestricted educational grant from UCB Pharma. The authors are grateful to Peter Tucker for his careful reading of the manuscript.
17,342
[ "780993" ]
[ "300889", "198056", "300889", "527022", "300889", "198056" ]
01435097
en
[ "spi" ]
2024/03/05 22:32:07
2016
https://hal.science/hal-01435097/file/Krakovinsky.2016.ICMTS.pdf
A Krakovinsky email: alexis.krakovinsky@cea.fr M Bocquet R Wacquez J Coignus D Deleruyelle C Djaou G Reimbold J-M Portal Impact of a Laser Pulse on HfO2-based RRAM Cells Reliability and Integrity Keywords: OxRRAM, Laser, Security, Integrity, HfO2, Simulation, Thermal Attacks, Optical Attacks. Moreover, these solutions offer lower switching energy and faster operations compared to the state of the art for Flash, and are thus seen as an opportunity for the rise of the IoT market. But one of the main concerns regarding IoT is the protection of the data. Contrary to Flash, the security of the data in emerging NVM is yet to be evaluated. In order to verify the capability of the technology in terms of data integrity, we propose to investigate the reliability and integrity of HfO2-based Resistive RAM (OxRRAM). This paper details the experimental protocol defined for laser-based attacks and shows that a laser pulse can affect the information stored in a single OxRRAM bit. The phenomenon is then explained by means of thermal and electrical simulations. I. Introduction Innovative technologies are emerging as solutions for the future of Non Volatile Memories (NVM). One can cite Magnetoresistive Random Access Memories (MRAM), Phase Change RAM (PCRAM) or Resistive RAM (RRAM) as the main technologies of interest. A HfO2-based RRAM solution is considered in this work. Due to its low cost of fabrication and its inherently lower switching energy, RRAM is of high interest for the IoT market. In the following years, billions of smart objects will be interacting with each other. Power consumption is of course an essential issue. But if we consider the amount and nature of the processed data, the security aspect must not be neglected. In other words, emerging NVM must fulfill three essential criteria: data integrity, data confidentiality and data accessibility. However, most studies still focus on the reliability of these technologies. Therefore none of them has been confronted with attacks such as UV lamp with masking [START_REF] Fournier | Memory Address Scrambling Revealed Using Fault Attacks, Fault Diagnosis and Tolerance in Cryptography (FDTC)[END_REF] or focused laser [START_REF] Skorobogatov | Local Heating Attacks on Flash Memory Devices Hardware-Oriented Security and Trust[END_REF] attacks, which have already been shown to be successful on Flash technology. The first security criterion to be studied for NVM (specifically RRAM) is material integrity. This work proposes to evaluate the impact of external physical constraints on the material. Laser has already been proven able to disturb the behaviour of a circuit as well as of older Flash technology [START_REF] Skorobogatov | Optical Fault Masking Attacks Fault Diagnosis and Tolerance in Cryptography (FDTC)[END_REF] through fault attacks. As a follow-up to these works, it is interesting to attack RRAM in the same way. Section 2 describes the experimental setup, whose results are presented in Sections 3 and 4. Section 5 is focused on the simulation of a laser pulse on a RRAM structure and of the RRAM set operation. II. Experimental Setup A. RRAM Principle and Characteristic Parameters A RRAM cell is made of two metal electrodes with a transition metal oxide (TMO) in between. In our case (see Fig. 1), the stack consists of a 5 nm-thick HfO2 layer located between a 10 nm-thick Ti top electrode and a 10 nm-thick TiN bottom electrode, as described in [START_REF] Vianello | Resistive Memories for Ultra-Low-Power embedded computing design[END_REF].
This technology relies on resistance switching, that is to say, in this case, on migrating oxygen ions (as pictured in Fig. 1) from the oxide to the electrode where a voltage is applied [START_REF] Nardi | Resistive Switching by Voltage-Driven Ion Migration in Bipolar RRAM Part I: Experimental Study[END_REF]. This process creates or dissolves a conductive filament (CF) of oxygen vacancies in the oxide, which tunes its resistance. The way set and reset (i.e., programming and erasing) operations are performed is presented in Fig. 2. V_stopreset and the set compliance current I_c are parameters which are set experimentally. For each experiment shown in this paper, V_stopreset = -1 V and I_c = 1 mA. B. Instrumentation and Experimental Protocol We use a laser bench with a Nd:YAG source. Three wavelengths are available: 355 nm (ultraviolet/UV), 533 nm (green) and 1064 nm (infrared/IR). It has a circular 50 µm-diameter spot size (as seen in Fig. 3) that allows shooting the whole cell (whose size is 3 µm) with up to 400 µJ of energy during a 10 ns pulse. In the first place, to see the influence of laser pulses on the memory cells, their electrical characteristics were evaluated. These measurements were conducted in quasi-static conditions (i.e., without considering temporal aspects) and consisted of 10 cycles of reset/set operations. The resistance value was measured via a read operation performed after each operation. These preliminary measurements allow verifying that the cells have a regular behavior, in conformity with what has been published. These data will then be used as reference data for the following experiments (see Fig. 4). The average set and reset voltages are respectively V_Set = 0.6 V and V_Reset = -0.54 V. Moreover, the average LRS and HRS resistance values are about R_LRS,mean = 550 Ω and R_HRS,mean = 35 kΩ. The experimental protocol of the next step is summed up in Fig. 5. For the needs of the experiment, 50% of all devices were left in the HRS and 50% in the LRS. A single pulse in either UV or IR was performed in order to check the potential impact of each wavelength. In the end, the resistance and electrical characteristics of the cells were measured again. III. Influence on Resistance Values A. Cell state variation and wavelength influence Regarding LRS cells after the laser pulse, the first remarkable result is that the LRS remains unaffected. Fig. 6 shows that the cumulative distributions of the resistance values of the cells initially left in the LRS, before and right after the pulse, are similar. Moreover, the UV and IR data overlap. However, contrary to LRS cells (as noticed in Fig. 7), a gap of about a decade in terms of resistance distribution can be seen between HRS cells before (R_mean,before = 34 kΩ) and after (R_mean,after = 3400 Ω) they were shot. Besides, half of these cells switched to the LRS after the pulse. Finally, as far as wavelength is concerned, UV and IR pulses have the same effect on HRS cells, which means the laser impact on OxRRAM cells is independent of the wavelength used. Therefore, the following data will not refer to the wavelength used during the experiment. The energy used will rather be taken into account as the most essential parameter. B. Laser Low Resistive State Analysis Comparing the resistance values of LRS cells with those of HRS cells that switched to the LRS after a laser pulse, it can be seen in Fig. 8 that the electrical LRS seems slightly different from the LRS obtained with the laser.
Indeed, the values of the switching cells (R_LRS,mean1 = 880 Ω) are somewhat higher than those of the LRS cells (R_LRS,mean2 = 550 Ω). In a nutshell, this analysis showed that a laser pulse can disturb the behaviour of HRS cells only. Nevertheless, the nature of the resistive state obtained after exposure is yet to be confirmed. IV. Impact on Electrical Characteristics A. Cells left in the LRS Even though the resistance values of LRS cells show that a laser beam does not affect their state, the V_Set and V_Reset measured after the attack may vary from those obtained before. But as expected, the laser has not disturbed the cells, since V_Set and V_Reset match the reference data (see Fig. 9). B. Cells left in the HRS Concerning the cells left in the HRS, it has to be determined whether or not cycling can be performed normally, and whether the laser set operation can be considered equivalent to an electrical set operation. The cells that were submitted to a reset operation in the first place were hardly set to the HRS, since the first V_Reset of more than 80% of our devices is above the average reset voltage according to the reference data (as pictured in Fig. 10). This means laser set and electrical set are not equivalent, which is confirmed by the data coming from cells whose first post-laser operation was a set operation. Indeed, their first reset voltage corresponds to the reference data. Explaining the phenomena responsible for laser switching requires a model of the laser impact on the structure, as well as a model of an electrical set operation, so that the two can be compared. V. Simulations There are two possibilities to explain a change in the oxide structure caused by a laser beam. The first one is an optical effect provided by the photon injection and the second one is a temperature effect. Indeed, experiments have shown [START_REF] Cabout | Temperature impact (up to 200 C) on performance and reliability of HfO2-based RRAMs, Memory Workshop[END_REF] that high temperature operations tend to affect the switching behavior of the cells. The fraction of the incident light transmitted through a layer can be estimated as

c = e^(-4πkl/λ)    (1)

where k is the imaginary part of the refractive index of the material, l the material thickness and λ the wavelength of the laser beam. For the metallization layer only (shown on Fig. 12), c ≈ 1.7 × 10⁻¹⁵ for λ = 1064 nm and c ≈ 4.3 × 10⁻²⁴ for λ = 355 nm. In other words, photons are not able to go through the metallization layer and therefore reach the oxide. This also means that, instead of being caused by an optical effect, the laser switching must be a consequence of temperature, the main focus of our modeling. A. Laser Modeling The geometry used for this model is presented in Fig. 11. For more accuracy, 1 nm of titanium dioxide has been considered on the top of our structure, since the titanium surface oxidizes naturally. Regarding the laser impact, the model was designed using [START_REF] Sands | Pulsed Laser Heating and Melting, Heat Transfer -Engineering applications[END_REF]. Therefore, the part of the laser energy transmitted to the surface of the structure and the heat emitted by each layer are respectively defined as follows:

I_T = I_0 (1 - R) g(x)    (2)
S(z) = α · I_TL · e^(-αz)    (3)

where I_0 is the laser surface power, R the reflectivity of the surface material, g(x) the value of the Gaussian profile (whose standard deviation is a third of the laser spot radius) at the x coordinate, α the optical absorption coefficient, which equals 4πk/λ (with k and λ defined as in (1)), z the depth in the layer and I_TL the intensity transmitted to the top interface of the layer.
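As a numerical sketch of Equations (1)-(3) (Python; the extinction coefficient and layer thickness below are placeholder values, not the paper's material data):

import math

def transmission(k, l_nm, lam_nm):
    """Eq. (1): fraction of light transmitted through a layer of
    thickness l with extinction coefficient k at wavelength lambda."""
    return math.exp(-4 * math.pi * k * l_nm / lam_nm)

def heat_source(z_nm, k, lam_nm, i_tl):
    """Eq. (3): heat deposited at depth z, with absorption
    coefficient alpha = 4*pi*k/lambda (here per nm)."""
    alpha = 4 * math.pi * k / lam_nm
    return alpha * i_tl * math.exp(-alpha * z_nm)

# A strongly absorbing metal layer at 1064 nm (placeholder k and l):
print(transmission(k=3.0, l_nm=1000, lam_nm=1064))  # ~4e-16: opaque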
The simulation results show that the maximum temperature in the oxide is 610 K and that it is reached 0.8 ns after the end of the pulse (see also Figs. 12 and 13). This temperature is the result of the thermal diffusion coming from the titanium dioxide surface (whose maximum temperature is reached 0.8 ns before the end of the pulse). We might expect higher temperatures, given that the estimated silicon dioxide temperature is about 3000 K after the end of the pulse (beyond the SiO2 melting point, which is not taken into consideration in this model). However, the duration needed for thermal diffusion from the SiO2 surface to the HfO2 layer is about 1 µs, since silicon dioxide has a low thermal conductivity (k = 1.4 W·m⁻¹·K⁻¹ [START_REF]CRC Handbook of Chemistry and Physics[END_REF]), and the corresponding temperature is much lower (about 500 K). The objective of the following electrical set modeling is to verify whether the temperature reached during laser exposure is relevant for an electrical set operation. B. Electrical Set Simulation For the electrical set simulation, the model described in [START_REF] Russo | Self-Accelerated Thermal Dissolution Model for Reset Programming in Unipolar Resistive-Switching Memory (RRAM) Devices[END_REF] was applied to the cell structure. The main objective is to compare the temperature reached during an electrical set operation with the value obtained from the thermal modeling. V_Set, I_c as well as both the thermal (k) and electrical (σ) conductivities of the CF are the parameters of interest. By choosing the CF conductivities, we aim to get the temperature according to the CF state after the set operation. In our case, the laser pulses were not able to really set the cells, which means the CF has a structure not far from HfO2, whose conductivities are lower than those of Hf. We then simulated different conductivity values for each couple (V_Set, I_c), which gave us maps of the temperature reached in the CF as a function of k and σ at constant set voltage and compliance current. The values chosen for V_Set and I_c are respectively 0.5 V / 0.6 V / 1.2 V and 1 µA / 10 µA / 0.1 mA / 1 mA. Regarding the values of the conductivities, we chose 10 values decreasing linearly (for k) and logarithmically (for σ) from the values given in [START_REF]Properties of Solids; Thermal and Physical Properties of Pure Metals / Thermal Conductivity of Crystalline Dielectrics / Thermal Conductivity of Metals and Semiconductors as a Function of Temperature[END_REF] and [START_REF]CRC Handbook of Chemistry and Physics[END_REF] for Hf. The results presented in Fig. 14 were obtained for V_Set = 0.5 V and I_c = 10 µA. The temperature calculated with the laser modeling (610 K) is reached for k ≈ 4 W·m⁻¹·K⁻¹ and σ ≈ 5 × 10⁵ S·m⁻¹. These low conductivities mean that the set had not been fully completed, since the CF structure is closer to hafnium dioxide than to hafnium, which explains the experimental results obtained by laser pulse. VI. Conclusion For the first time, RRAM cells have been disturbed by laser exposure, independently of the wavelength used. It is possible to perform a bit flip only from the HRS to the LRS. Simulations allowed us to explain that this phenomenon is due to the temperature brought by laser heating.
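Returning to the electrical set simulation, a sketch of how such a conductivity grid could be generated (Python/NumPy; the bulk-Hf starting values below are placeholders rather than the handbook figures used in the paper):

import numpy as np

K_HF = 23.0       # thermal conductivity of Hf, W/(m*K) (placeholder)
SIGMA_HF = 3.3e6  # electrical conductivity of Hf, S/m (placeholder)

# 10 values decreasing linearly (k) and logarithmically (sigma),
# spanning CF structures between metallic Hf and HfO2-like material.
k_values = np.linspace(K_HF, K_HF / 10, 10)
sigma_values = np.logspace(np.log10(SIGMA_HF), np.log10(SIGMA_HF) - 3, 10)

for k in k_values:
    for sigma in sigma_values:
        pass  # run the electro-thermal CF model at (k, sigma) here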
Fig. 1. Cell layer structure and ion migration description.
Fig. 2. Standard I-V characteristic of the 1R cells studied. For the set operation, a progressive voltage sweep is applied from '1'; the cell then switches to the LRS in '2' before its current reaches I_c in '3'. For the reset operation, a voltage sweep is applied from 'A'; the cell switches from the LRS to the HRS in 'B', and the sweep is stopped at V_stopreset in 'C'.
Fig. 4. Boxplot of 1500 LRS/HRS reference values (a) and set/reset voltages (b) obtained during the preliminary characterisations of the studied cells. The box extends from the lower to the upper quartile values of the data, with a line at the median.
Fig. 5. Experimental protocol diagram.
Fig. 6. Cumulative distributions of the resistance values of 84 LRS cells before and after a single laser pulse, sorted by wavelength used.
Fig. 10. Cumulative distribution of set and reset voltages of HRS cells after 1 reset/set cycle, sorted by the first operation performed. These results were compared to reference data.
Fig. 12. Temperature repartition 0.8 ns after the 10 ns laser pulse on the whole structure.
Fig. 13. Temperature repartition 0.8 ns after the laser pulse in the area around the hafnium dioxide layer.
Acknowledgements: This work was performed with the support of the CATRENE CA208 Mobitrust project.
15,285
[ "18361", "764486", "174122", "1248416", "20388" ]
[ "473225", "199957", "473225", "40214", "199957", "199957", "40214", "199957" ]
01415935
en
[ "sdv" ]
2024/03/05 22:32:07
2016
https://univ-rennes.hal.science/hal-01415935/file/The%20Impact%20of%20Donor%20Type%20on%20Long-Term.pdf
Sandrine Visentin MD, PhD (email: sandrine.visentin@ap-hm.fr), Pascal Auquier MD, PhD, Yves Bertrand MD, PhD, André Baruchel MD, Marie-Dominique Tabone MD, Cécile Pochon MD, Charlotte Jubert, Maryline Poirée MD, PhD, Virginie Gandemer MD, Anne Sirvent, Jacinthe Bonneau MD, PhD, Catherine Paillard MD, Claire Freycon MD, PhD, Justyna Kanold, Virginie Villes MD, PhD, Julie Berbis MD, Claire Oudin MD, Claire Galambrun MD, PhD, Isabelle Pellier MD, Geneviève Plat MD, PhD, Hervé Chambost MD, PhD, Guy Leverger MD, PhD, Jean-Hugues Dalle MD, PhD, Gérard Michel
The Impact of Donor Type on Long-Term Health Status and Quality of Life after Allogeneic Hematopoietic Stem Cell Transplantation for Childhood Acute Leukemia: A Leucémie de l'Enfant et de l'Adolescent (L.E.A.) Study
Keywords: hematopoietic stem cell transplantation, late effects, quality of life, childhood leukemia, cord blood transplantation
INTRODUCTION Hematopoietic stem cell transplantation (HSCT) has been successfully used to treat children with high-risk or relapsed acute leukemia. Many children and adolescents who undergo HSCT become long-term survivors and may develop long-term complications, such as endocrinopathies, musculoskeletal disorders, cardiopulmonary compromise and subsequent malignancies [START_REF] Faraci | Non-endocrine late complications in children after allogeneic haematopoietic SCT[END_REF][START_REF] Nieder | NHLBI First International Consensus Conference on Late Effects after Pediatric Hematopoietic Cell Transplantation: Long Term Organ Damage and Dysfunction Following Pediatric Hematopoietic Cell Transplantation[END_REF][START_REF] Cohen | Endocrinological late complications after hematopoietic SCT in children[END_REF][START_REF] Chow | Late Effects Surveillance Recommendations among Survivors of Childhood Hematopoietic Cell Transplantation: A Children's Oncology Group Report[END_REF]. When available, an HLA-matched sibling donor (SD) remains the donor of choice for children who require HSCT. However, only approximately 25% of candidates eligible for allogeneic HSCT have an HLA-matched SD. In the absence of a SD, an HLA-matched unrelated volunteer donor (MUD) or unrelated umbilical cord blood (UCB) are alternative transplant sources. In fact, despite the establishment of bone marrow donor registries with more than 25 million volunteers worldwide, finding a MUD remains a problem for many patients. Thus, the use of UCB as an alternative source for HSCT has increased substantially in the last decade, especially for children [START_REF] Ballen | Umbilical cord blood transplantation: the first 25 years and beyond[END_REF]. Currently, it is estimated that several thousand UCB transplantations have been performed.
The short-term outcomes of children transplanted with UCB (e.g., hematopoietic recovery, acute and chronic graft-versus-host disease (GvHD), treatment-related mortality, survival and causes of death) have been well described [START_REF] Eapen | Outcomes of transplantation of unrelated donor umbilical cord blood and bone marrow in children with acute leukaemia: a comparison study[END_REF][START_REF] Benito | Hematopoietic stem cell transplantation using umbilical cord blood progenitors: review of current clinical results[END_REF][START_REF] Grewal | Unrelated donor hematopoietic cell transplantation: marrow or umbilical cord blood?[END_REF][START_REF] Zheng | Comparative analysis of unrelated cord blood transplantation and HLA-matched sibling hematopoietic stem cell transplantation in children with high-risk or advanced acute leukemia[END_REF][START_REF] Tang | Similar outcomes of allogeneic hematopoietic cell transplantation from unrelated donor and umbilical cord blood vs. sibling donor for pediatric acute myeloid leukemia: Multicenter experience in China[END_REF]. Although overall survival is comparable, it has been clearly established that the course of the early post-transplant period and the principal complications differ with respect to the transplant cell source. The risk of GvHD and related complications is intrinsically higher after MUD transplantation compared with sibling transplantation, even if a recent extensive pediatric study has shown that this risk can be overcome by using intensive prophylaxis with cyclosporine, methotrexate and anti-thymocyte globulin [START_REF] Peters | Stem-Cell Transplantation in Children With Acute Lymphoblastic Leukemia: A Prospective International Multicenter Trial Comparing Sibling Donors With Matched Unrelated Donors-The ALL-SCT-BFM-2003 Trial[END_REF]. UCB transplantation induces GvHD to a lesser degree than MUD transplantation, although UCB hematopoietic recovery is slower, thereby resulting in an extended duration of the aplastic phase and a subsequently increased risk of severe infection [START_REF] Gluckman | Results of Unrelated Umbilical Cord Blood Hematopoietic Stem Cell Transplantation[END_REF][START_REF] Zecca | Chronic graft-versus-host disease in children: incidence, risk factors, and impact on outcome[END_REF]. In contrast, very few studies have assessed long-term post-transplant health status with regard to donor type in a multivariate analysis [START_REF] Bresters | High burden of late effects after haematopoietic stem cell transplantation in childhood: a single-centre study[END_REF][START_REF] Armenian | Long-term health-related outcomes in survivors of childhood cancer treated with HSCT versus conventional therapy: a report from the Bone Marrow Transplant Survivor Study (BMTSS) and Childhood Cancer Survivor Study (CCSS)[END_REF][START_REF] Hows | Comparison of long-term outcomes after allogeneic hematopoietic stem cell transplantation from matched sibling and unrelated donors[END_REF][START_REF] Baker | Late effects in survivors of chronic myeloid leukemia treated with hematopoietic cell transplantation: results from the Bone Marrow Transplant Survivor Study[END_REF][START_REF] Khera | Nonmalignant Late Effects and Compromised Functional Status in Survivors of Hematopoietic Cell Transplantation[END_REF], and to our knowledge, no studies have compared childhood leukemia survivors who received UCB with those who underwent SD or MUD HSCT.
Using the data extracted from the French cohort of childhood leukemia survivors (L.E.A., "Leucémie de l'Enfant et de L'Adolescent"), our primary objective was to describe the long-term health status and quality of life (QoL) after HSCT for childhood leukemia survivors with respect to donor type (SD, MUD or UCB transplantation). Because the patients were transplanted between May 1997 and June 2012 and transplantations involving an HLA haplo-identical family donor were rare in France during this time period, the few patients who underwent such transplantation were not included in this study. METHODS Patients Evaluation of physical health status Medical visits were conducted to detect the occurrence of late effects based on clinical examinations and laboratory tests when required. Clinical follow-up commenced one year after HSCT; these examinations were repeated every two years until the age of 20 and for at least ten years of complete remission; patients were then examined every four years thereafter. Height, weight and body mass index (BMI) were measured at transplantation, study inclusion, and each subsequent medical examination. The measurements were then converted to standard deviation scores (SDS) based on the normal values for the French population [START_REF] Sempé | Auxologie : méthodes et séquences[END_REF]. Growth failure (stunted height) was defined by a cumulative SDS change equal to or lower than -1 (minor failure for a value between -1.0 and -1.9, and major failure for a value equal to or lower than -2). Overweight was defined as a BMI of 25 kg/m² or more for adults (minor: BMI of 25.0-29.9, major: BMI of 30 or more) and a cumulative SDS change of +1 or more for children under 18 (minor: between 1.0 and 1.9, major: equal to or higher than 2). Low weight was defined as a BMI lower than 18.5 kg/m² in adults and a cumulative loss in SDS of -1.0 or more in children under 18 (see the sketch below). Children were not assessed for gonadal function if they were under 15 years of age and had not experienced menarche (girls) or did not have any pubertal signs (boys). Patients were diagnosed with gonadal dysfunction if they showed signs of precocious puberty or hypergonadotropic hypogonadism (low estradiol levels with high follicle stimulating hormone (FSH) and luteinizing hormone (LH) levels in women; low testosterone with high FSH and LH levels in men). Hypothyroidism was defined as a non-transient increase in thyroid stimulating hormone levels. All second tumors (including basal cell carcinoma) were taken into consideration for this analysis. Cardiac function was considered impaired when any one of the following three conditions was present: the echocardiographic shortening fraction was below 28%, the left ventricular ejection fraction was below 55%, or specific treatments were required. Femoral neck and lumbar bone mineral density were measured using dual energy X-ray absorptiometry for all adults. Patients were considered to have low bone mineral density when the Z-score was equal to or below -2 in at least one of the two sites examined.
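The growth and weight classification rules above can be sketched as follows (Python; function names and return labels are ours, not from the study):

def classify_growth(delta_sds):
    """Cumulative height SDS change since transplantation."""
    if delta_sds <= -2:
        return "major growth failure"
    if delta_sds <= -1:
        return "minor growth failure"
    return "no growth failure"

def classify_weight_adult(bmi):
    """Adult BMI categories as defined in the text (kg/m^2)."""
    if bmi >= 30:
        return "major overweight"
    if bmi >= 25:
        return "minor overweight"
    if bmi < 18.5:
        return "low weight"
    return "normal"

print(classify_growth(-1.4))        # minor growth failure
print(classify_weight_adult(27.0))  # minor overweight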
Metabolic syndrome was defined according to the NCEP-ATPIII revised in 2005 (metabolic syndrome patients had at least three of the five criteria: (1) increased waist circumference (≥102 cm in men, ≥88 cm in women); (2) elevated blood pressure (systolic blood pressure ≥130 mmHg and/or diastolic blood pressure ≥85 mmHg and/or treatment required); (3) reduced high-density lipoprotein cholesterol (≤40 mg/dL in men, ≤50 mg/dL in women); (4) elevated fasting glucose (≥1 g/L or drug treatment needed for elevated glucose levels); and (5) elevated triglycerides (≥150 mg/dL or drug treatment required for elevated triglycerides)) [START_REF] Grundy | Diagnosis and Management of the Metabolic Syndrome An American Heart Association/National Heart, Lung, and Blood Institute Scientific Statement[END_REF] (see the sketch below). The SF-36 (the Medical Outcome Study Short Form 36 Health Survey) is a widely used QoL measure that provides a non-disease-specific assessment of adult functioning and well-being, which enables comparison with a broad range of age-matched norm groups [START_REF] Leplège | The French SF-36 Health Survey[END_REF][START_REF] Reulen | The use of the SF-36 questionnaire in adult survivors of childhood cancer: evaluation of data quality, score reliability, and scaling assumptions[END_REF]. The SF-36 is a generic QoL scale for adults consisting of 36 items describing eight dimensions: physical functioning, social functioning, role limitations due to physical health problems, role limitations due to emotional health, mental health, vitality, bodily pain and general health. Two summary scores are also calculated from the subscales: a physical component score and a mental component score. This is a reliable instrument to assess self-perceived health status in adult survivors of childhood cancer. The French version is well validated. All scores range between 0 and 100, with higher scores indicating better QoL. Statistical analysis Chi-squared, Fisher's exact and ANOVA tests were used to compare demographic and clinical variables between the SD, MUD and UCB transplant groups. ANOVA was used to compare the mean number of late effects experienced per patient in each donor type group. Each of the following complications (as defined above) was considered as one late effect: height growth failure (minor or major), overweight (minor or major), low weight, gonadal dysfunction, hypothyroidism, second tumors, cataracts, alopecia, impaired cardiac function, osteonecrosis, low BMD, diabetes, metabolic syndrome, iron overload and central nervous system complications. To determine the link between each assessed adverse effect and donor type (i.e., SD, MUD or UCB), adjusted logistic regression models were performed. The six following covariates were included in the models: gender, age at diagnosis, age at last visit, history of relapse, conditioning regimen (TBI- versus busulfan-based), and leukemia type (acute myeloid leukemia (AML) versus acute lymphoblastic leukemia (ALL)). GvHD was considered as a potential intervening variable, i.e., a variable that is on the causal pathway between the transplant source and health status. Consequently, GvHD was not included in the model [START_REF] Katz | Multivariable analysis: a practical guide for clinicians[END_REF]. Adjusted odds ratios (OR) and the risk of having one type of late effect (with 95% confidence intervals) were estimated.
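Returning to the metabolic syndrome definition above, a minimal sketch of the three-of-five rule (Python; the input field names are ours):

def metabolic_syndrome(sex, waist_cm, sbp, dbp, bp_treated,
                       hdl_mg_dl, glucose_g_l, glucose_treated,
                       tg_mg_dl, tg_treated):
    """Revised NCEP-ATPIII (2005): syndrome if >= 3 of 5 criteria."""
    criteria = [
        waist_cm >= (102 if sex == "M" else 88),
        sbp >= 130 or dbp >= 85 or bp_treated,
        hdl_mg_dl <= (40 if sex == "M" else 50),
        glucose_g_l >= 1.0 or glucose_treated,
        tg_mg_dl >= 150 or tg_treated,
    ]
    return sum(criteria) >= 3

print(metabolic_syndrome("M", 104, 135, 80, False, 38, 0.9, False, 160, False))
# True: waist, blood pressure, HDL and triglyceride criteria are met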
Adjusted multiple linear regression models were generated to explore the link between the long-term QoL scores and donor type, with the same covariates. Each model is presented with its standardized β coefficient, which measures the strength of the effect of graft type on the QoL dimension score. The SF-36 mean scores reported by adult patients were compared with those obtained from age- and sex-matched French control subjects, using the paired Student's t-test [START_REF] Berbis | A French cohort of childhood leukemia survivors: impact of hematopoietic stem cell transplantation on health status and quality of life[END_REF]. Statistical significance was defined as p < 0.05. RESULTS Patient characteristics A total of 314 patients fulfilled all selection criteria and were included in the analysis. The patient characteristics are summarized in Table 1. One hundred twenty-seven patients had received stem cells from a SD (40.5%), 99 from a MUD (31.5%) and 88 from unrelated UCB (28.0%). The mean follow-up durations from diagnosis and from HSCT to the last L.E.A. visit were 7.7±0.2 and 6.2±0.2 years, respectively. The mean age at acute leukemia diagnosis was 7.5±0.3 years; UCB recipients were significantly younger at diagnosis (p=0.02). As expected, the percentage of patients who relapsed before HSCT was significantly higher in the MUD and UCB groups than in the SD group (p=0.001). More patients in the SD group (66.1%) were in first hematologic complete remission at the time of transplantation, compared with MUD (51.5%) and UCB (40.9%), whereas patients in second or later complete remission or with more advanced hematologic status more often received an alternative donor type (MUD or UCB) (p=0.009). The incidence of significant GvHD (grade II-IV aGvHD or extensive cGvHD) was lower among UCB recipients (27.3%, versus 43.3% for SD and 62.6% for MUD, p < 10⁻³). A greater proportion of patients in the UCB and MUD groups had received post-transplant corticosteroids (p=0.02); this high percentage, in spite of the low GvHD incidence in the UCB group, can be explained by the fact that steroids were included in the GvHD prophylaxis regimen of most UCB recipients. The three groups were similar with regard to gender, previous irradiation, age at HSCT, leukemia type, conditioning regimen (TBI- or busulfan-based) and follow-up duration from diagnosis and HSCT to the last visit. The patients of the UCB group were younger at the last L.E.A. evaluation compared with the other groups, although this difference was not statistically significant (p=0.11). Long-term late effects Overall, 284 of 314 patients (90.4%) were found to have at least one late effect, without any apparent difference between the three groups. Among the SD survivors, 92.1% suffered from at least one late effect, compared with the MUD (92.9%) and UCB (85.2%) survivors (p=0.14). The average number of adverse late effects was 2.1±0.1, 2.4±0.2 and 2.4±0.2, respectively (non-significant). Twenty-two percent of the transplanted patients had one late effect, 31% had two late effects and 37% had three or more late effects. As shown in Figure 1, no significant difference was found between the donor cell sources (p=0.52). The occurrence of each side effect in each group is outlined in Table 2. The patients treated using SD transplant were considered the reference group for all comparisons. The multivariate analysis indicated that donor type did not have an impact on most sequelae.
The only two significant differences were a higher risk of major height growth failure after MUD transplantation (OR [95%CI] = 2.42 [1.06-5.56], p=0.04) and of osteonecrosis following UCB transplantation (OR [95%CI] = 4.15 [1.23-14.04], p=0.02). None of the other comparisons revealed significant differences in the multivariate models. Quality of life Adults Adults of the three groups reported very similar QoL (Table 3). The physical composite scores were 52.1±1.6 for the SD group, 50.4±1.8 for the MUD group and 50.3±2.2 for the UCB group (p=0.72). The mental composite scores were 43.4±1.4 for the SD group versus 47.3±1.7 for the MUD group and 43.3±2.6 for the UCB group (p=0.28). Considering SD as the reference group, multivariate linear regression analysis did not show any difference between the donor sources for any dimension. Parents' point of view The QoL of children and adolescents was reported by 204 parents (Table 4). The summary scores were 68.4±1.7 for the SD group, 68.8±2.0 for the MUD group and 69.8±1.9 for the UCB group (p=0.87). Parent-reported scoring of the nine dimensions did not indicate that donor type had an impact on the QoL of children and adolescents. Children and Adolescents The mean scores reported by children (n=35) were comparable for all VSPAe subscales (Table 5). The summary scores were 72.4±3.4, 74.4±3.7 and 70.3±4.2 for the SD, MUD and UCB groups, respectively (p=0.78). Regarding adolescent QoL (Table 6), no significant difference was found between the three groups, with the exception of 'relationship with parents' and 'school work'. In fact, adolescents of the SD group reported a significantly better 'school work' mean score than those of the MUD group (p=0.05) and a lower 'relationship with parents' mean score compared with the UCB group (p=0.03). The summary scores were 64.1±1.7, 67.6±2.0 and 69.2±2.0 for the SD, MUD and UCB groups, respectively (p=0.15). Comparison to French norms The QoL assessed in 84 adults of this cohort was compared to age- and sex-matched French reference scores (Figure 2). Almost all subscales were significantly lower in the L.E.A. cohort. The physical composite (51.2±1.1 versus 55.2±0.1, p<0.001) and mental composite scores (44.3±1.1 versus 47.9±0.3, p=0.001) were both lower in the L.E.A. group. DISCUSSION The main objective of this study was to assess the long-term health status and QoL of a French cohort of childhood leukemia survivors who had received HSCT from three different donor types. HLA-identical sibling transplanted patients were chosen as the reference group and compared with MUD and UCB transplantations. During the immediate post-transplant phase, MUD transplantation patients are at increased risk of GvHD, while UCB transplants are associated with a slower hematologic recovery [START_REF] Eapen | Outcomes of transplantation of unrelated donor umbilical cord blood and bone marrow in children with acute leukaemia: a comparison study[END_REF][START_REF] Gluckman | Results of Unrelated Umbilical Cord Blood Hematopoietic Stem Cell Transplantation[END_REF][START_REF] Rocha | Comparison of outcomes of unrelated bone marrow and umbilical cord blood transplants in children with acute leukemia[END_REF]. We aimed to determine whether donor type also had an impact on long-term health status and QoL. With a 6.2-year post-transplant follow-up, this study showed that regardless of the donor type, the development of adverse health outcomes and QoL in long-term survivors were markedly similar.
The mean number of late effects experienced per patient was a little more than two in each group; 90.4% of HSCT survivors in this study developed at least one adverse effect. Although the occurrence of late effects in patients transplanted during childhood has been described, the impact of donor type on side effects has seldom been taken into consideration. In the study by Bresters et al., among 162 survivors of HSCT, 93.2% had sequelae after a median follow-up time of 7.2 years. Donor type was not found to be a risk factor for an increased burden of late effects in a multivariate analysis, although only two patients had received UCB transplantation (1.2%) [START_REF] Bresters | High burden of late effects after haematopoietic stem cell transplantation in childhood: a single-centre study[END_REF]. Armenian et al. found at least one chronic health condition in 79.3% of childhood HSCT survivors (n=145) after a median follow-up time of 12 years. In a multivariate analysis, compared with conventionally treated cancer survivors, HSCT survivors had a significantly elevated risk of adverse health-related outcomes, and unrelated HSCT recipients were at greatest risk [START_REF] Armenian | Long-term health-related outcomes in survivors of childhood cancer treated with HSCT versus conventional therapy: a report from the Bone Marrow Transplant Survivor Study (BMTSS) and Childhood Cancer Survivor Study (CCSS)[END_REF]. Another study, involving a cohort of 463 adults and children, reported a significantly higher cumulative incidence of extensive GvHD, cataracts and bone necrosis at 12 years after MUD, compared with SD transplants [START_REF] Hows | Comparison of long-term outcomes after allogeneic hematopoietic stem cell transplantation from matched sibling and unrelated donors[END_REF]. To our knowledge, the health status of long-term survivors after UCB transplant has never been described. A few studies have reported late complications after HSCT during childhood in which some patients had received UCB transplantation. However, no comparison between the donor sources was performed, and the cohorts included a very limited proportion of UCB recipients: between 1.2% (14) and 5% [START_REF] Ferry | Long-term outcomes after allogeneic stem cell transplantation for children with hematological malignancies[END_REF]. The absolute number of late effects per patient is not sufficient to comprehensively describe health status, as the burden of each late effect may vary markedly. Consequently, in this study we described the risks of specific late effects with respect to stem cell sources. Only two late effects were significantly associated with donor type: osteonecrosis was more frequent in the UCB group, and major growth failure occurred more often following MUD transplant. Steroids have been shown to play a role in the pathophysiology of post-transplant osteonecrosis; other well-described risk factors include older age, female gender and GvHD [START_REF] Girard | Symptomatic osteonecrosis in childhood leukemia survivors: prevalence, risk factors and impact on quality of life in adulthood[END_REF][START_REF] Li | Avascular necrosis of bone following allogeneic hematopoietic cell transplantation in children and adolescents[END_REF][START_REF] Mcavoy | Corticosteroid Dose as a Risk Factor for Avascular Necrosis of the Bone after Hematopoietic Cell Transplantation[END_REF][START_REF] Mcclune | Bone Loss and Avascular Necrosis of Bone After Hematopoietic Cell Transplantation[END_REF].
In the current study, although GvHD risk was lower following UCB transplant compared with MUD and SD transplant, the use of post-transplant steroids was very common, as steroids were included in the GvHD prophylaxis regimen of most UCB recipients. Additionally, the higher proportion of patients with a history of pre-transplant leukemia relapse and ALL in the UCB and MUD groups may have played a role by increasing the pre-transplant cumulative steroid dose. Several studies have reported the impact of TBI conditioning regimens on post-transplant growth [START_REF] Sanders | Growth and development after hematopoietic cell transplant in children[END_REF][START_REF] Bernard | Height growth during adolescence and final height after haematopoietic SCT for childhood acute leukaemia: the impact of a conditioning regimen with BU or TBI[END_REF]. In the present study, the risk of major growth failure was higher in patients who had received MUD, whereas the proportion of patients treated with TBI as a pre-transplant conditioning regimen was not significantly greater. Poor post-transplant growth may be due to many other factors, including GvHD and its treatments [START_REF] Isfan | Growth hormone treatment impact on growth rate and final height of patients who received HSCT with TBI or/and cranial irradiation in childhood: a report from the French Leukaemia Long-Term Follow-Up Study (LEA)[END_REF][START_REF] Majhail | Recommended Screening and Preventive Practices for Long-term Survivors after Hematopoietic Cell Transplantation[END_REF]. Significant GvHD occurred more frequently following MUD transplantation compared with the two other groups. However, our data do not support this explanation, as we were unable to demonstrate a significant effect of GvHD on major growth failure in our cohort (data not shown). To evaluate QoL, we used self-reported questionnaires for adults, children and adolescents, as well as parent-reported questionnaires for patients less than 18 years of age. We found comparable results among the three study groups for all composite scores. This observation suggests that even if the immediate post-transplant period and the burden of early complications experienced by transplanted children may differ with respect to the donor type, these differences do not carry over to the QoL reported many years after HSCT. In contrast, the adult QoL scores were significantly lower than sex- and age-matched French norms. Previous L.E.A. reports studying QoL have found similar results regardless of treatment or health condition, thus suggesting that suffering from acute leukemia may also play a role [START_REF] Berbis | A French cohort of childhood leukemia survivors: impact of hematopoietic stem cell transplantation on health status and quality of life[END_REF][START_REF] Michel | Health status and quality of life in long-term survivors of childhood leukaemia: the impact of haematopoietic stem cell transplantation[END_REF]. We acknowledge that the observed differences in the physical and mental composite scores, albeit statistically significant, were relatively small, and their clinical relevance must thus be interpreted with caution.
Other studies have shown that cGvHD is the major contributor to reduced QoL after HSCT [START_REF] Eapen | Outcomes of transplantation of unrelated donor umbilical cord blood and bone marrow in children with acute leukaemia: a comparison study[END_REF][START_REF] Fraser | Impact of chronic graft-versus-host disease on the health status of hematopoietic cell transplantation survivors: a report from the Bone Marrow Transplant Survivor Study[END_REF]. In our study, the incidence of significant GvHD was statistically higher among recipients of MUD grafts, although QoL was similar. This is perhaps due to the fact that the QoL scores reported in our study are the most recent measure for each patient, and that survivors with resolved cGvHD may have a long-term QoL comparable to those never diagnosed with cGvHD [START_REF] Fraser | Impact of chronic graft-versus-host disease on the health status of hematopoietic cell transplantation survivors: a report from the Bone Marrow Transplant Survivor Study[END_REF][START_REF] Sun | Burden of morbidity in 10+ year survivors of hematopoietic cell transplantation: a report from the Bone Marrow Transplant Survivor Study[END_REF]. Data concerning the impact of donor type on QoL are very scarce. Löf et al. did not identify any difference between patients with a related or unrelated donor [START_REF] Löf | Health-related quality of life in adult survivors after paediatric allo-SCT[END_REF]. Very little is known regarding QoL among long-term survivors following UCB transplant. Routine evaluation of health-related QoL should be an integral part of patient follow-up after childhood leukemia, especially when patients are treated by HSCT, regardless of the donor type. As UCB transplant has only recently become available, UCB patients of the L.E.A. cohort had a shorter follow-up duration than SD or MUD patients. More precisely, the date of the first UCB transplant reported to L.E.A. was May 1997. Thus, in the present study, only patients transplanted after that date were included, both to obtain a similar follow-up duration among the three groups and to compare patients who had been treated in the same country during the same period of time. As a consequence, the follow-up duration (7.7 years after diagnosis and 6.2 years after HSCT) is shorter than that in other L.E.A. studies. This is a limitation of our study, as some late effects may occur after a longer period of time. Some complications, such as hypogonadism, manifest during adulthood, thus requiring an extended follow-up period for detection. Other studies with a prolonged follow-up period are warranted to confirm our results. It is, however, important to note that more than one third of the patients in our cohort were adults at last assessment. The strengths of this study include the cohort size and the large proportion of patients who received UCB transplantation (28%). To our knowledge, this represents the first comprehensive study to describe the long-term late effects and QoL after UCB transplant for childhood leukemia. In conclusion, long-term acute leukemia survivors treated with HSCT during childhood are at risk for treatment-related sequelae, although donor type appears to have a very low impact on long-term outcomes and QoL. This analysis provides additional information for patients and physicians to assist in treatment decisions when a SD is not available and the transplant donor type must be selected between MUD and UCB. To prevent and manage late effects, long follow-up of transplant patients is recommended regardless of the donor type.
All patients described here were included in the L.E.A. program. This French multicenter program was established in 2003 to prospectively evaluate the long-term health status, QoL and socioeconomic status of childhood leukemia survivors. Patients were included in the L.E.A. program if they met the following criteria: treated for acute leukemia after 1980 in one of the participating centers, younger than 18 years of age at the time of diagnosis, and agreed (themselves or through their parents/legal guardians) to participate in the study. The present L.E.A. study focused on patients who received allogeneic HSCT with HLA-identical SD, MUD or UCB stem cells after a total body irradiation (TBI)- or busulfan-based myeloablative conditioning regimen before June 2012. To avoid potential bias due to different treatment periods and follow-up durations, we only included HSCTs performed after May 1997, the date of the first UCB transplant reported in the L.E.A. cohort. Patients were excluded from the study if they underwent more than one HSCT, if they were treated before May 1997, if they were conditioned with a non-myeloablative regimen, or if they received an autologous or HLA-mismatched related transplantation. All patients (or their parents) provided written informed consent to participate in the program. The French National Program for Clinical Research and the French National Cancer Institute approved this study.

Iron overload was indicated by hyperferritinemia (a serum ferritin level ≥ 350 ng/ml at least one year after HSCT) in the absence of concomitantly elevated erythrocyte sedimentation rates. Other late effects (cataracts, alopecia, osteonecrosis, diabetes and central nervous system complications) were systematically screened for during every medical visit.

Evaluation of quality of life (QoL)

The VSPAe (Vécu et Santé Perçue de l'Enfant) and VSPA (Vécu et Santé Perçue de l'Adolescent) questionnaires are generic health-related QoL questionnaires specifically designed to evaluate self-reported QoL in 8- to 10-year-old children and 11- to 17-year-old adolescents. The VSP-Ap questionnaire (Vécu et Santé Perçue de l'Enfant et de l'Adolescent rapportés par les parents) is used to assess the parental point of view of their child's or adolescent's QoL. These questionnaires consider nine dimensions: psychological well-being, body image, vitality, physical well-being, leisure activities, relationship with friends, relationship with parents, relationship with teachers and school work. In addition to specific scores for each subscale, a global health-related QoL score is computed (21-23).

Figure 1: Number of late effects per patient with respect to donor type.

Figure 2: SF-36 results in adults compared with sex- and age-matched French norms.

Table 3: QoL of adults (n=84) using the SF-36 questionnaire. Co-variates: gender, leukemia type, age at diagnosis, age at last visit, relapse and conditioning (TBI/Bu).

Table 4: QoL of children and adolescents reported by their parents (n=204) using the VSP-Ap questionnaire. Co-variates: gender, leukemia type, age at diagnosis, age at last visit, relapse and conditioning (TBI/Bu).

Table footnotes: ᵃ gonadal function was assessable in 221 patients (92 girls and 129 boys; 94 SD, 69 MUD and 58 UCB); ᵇ data available in 67 adults (27 SD, 21 MUD and 19 UCB); ᶜ metabolic syndrome was assessable in 78 adults (36 SD, 21 MUD and 21 UCB); ᵈ iron overload was assessable in 293 patients (121 SD, 90 MUD and 82 UCB).
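The covariate-adjusted comparisons used throughout this study — odds ratios with 95% confidence intervals for each late effect, and a paired Student's t-test against matched norms — can be illustrated with a short, hedged sketch. This is not the L.E.A. analysis pipeline: the data below are synthetic and every variable name (donor, growth_failure, and so on) is hypothetical, but the pattern — a logistic regression with SD as the reference donor category, exponentiated coefficients read as ORs, and a paired t-test — follows the methods described above.

# Minimal sketch (synthetic data, hypothetical variable names) of a
# covariate-adjusted donor-type comparison reported as OR [95% CI],
# followed by a paired t-test against matched reference scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 314
df = pd.DataFrame({
    "donor": rng.choice(["SD", "MUD", "UCB"], size=n, p=[0.40, 0.32, 0.28]),
    "female": rng.integers(0, 2, size=n),
    "age_dx": rng.uniform(1, 17, size=n),   # age at diagnosis (years)
    "tbi": rng.integers(0, 2, size=n),      # TBI-based conditioning
    "relapse": rng.integers(0, 2, size=n),  # pre-transplant relapse
})
# Synthetic outcome: a baseline risk plus an extra risk for MUD recipients.
logit = -1.5 + 0.9 * (df["donor"] == "MUD") + 0.02 * df["age_dx"]
df["growth_failure"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression with SD as the reference category, as in the study.
model = smf.logit(
    "growth_failure ~ C(donor, Treatment('SD')) + female + age_dx + tbi + relapse",
    data=df,
).fit(disp=0)
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))

# Paired comparison of QoL scores against matched norms (paired Student's t-test).
qol = rng.normal(44, 10, size=84)           # survivors' mental composite score
norms = qol + rng.normal(3.5, 8, size=84)   # matched reference values
print(stats.ttest_rel(qol, norms))

The exponentiated coefficient on the MUD level plays the role of the OR (95% CI) entries of Table 2, and the paired test mirrors the SF-36 comparison with French norms.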
Table 1: Patient characteristics (n=314), by donor type (SD, MUD, UCB). Abbreviations: HSCT, hematopoietic stem cell transplantation; ALL, acute lymphoblastic leukemia; AML, acute myeloid leukemia; CNS, central nervous system; TBI, total body irradiation; Bu, busulfan; CR, complete remission; GvHD, graft-versus-host disease. *Significant GvHD comprises acute GvHD grade II-IV and extensive chronic GvHD.

Table 2: Occurrence of late effects according to donor type (SD n=127, MUD n=99, UCB n=88). Values are n (%); odds ratios (95% CI) and p-values are from the multivariate analysis, with SD as the reference group.

Late effect | SD | MUD | UCB | MUD vs SD, OR (95% CI), p | UCB vs SD, OR (95% CI), p
Height growth failure, minor or major | (31.5) | 44 (44.4) | 27 (30.7) | 1.68 (0.93-3.01), p=0.08 | 0.94 (0.50-1.77), p=0.84
Height growth failure, major | 12 (9.4) | 20 (20.2) | 13 (14.8) | 2.42 (1.06-5.56), p=0.04 | 1.60 (0.65-3.97), p=0.30
GH treatment | 8 (6.3) | 10 (10.1) | 6 (6.8) | 1.50 (0.54-4.18), p=0.44 | 0.91 (0.29-2.86), p=0.88
Overweight, minor or major | (18.9) | 20 (20.2) | 18 (20.5) | 1.22 (0.61-2.44), p=0.58 | 1.15 (0.55-2.41), p=0.70
Overweight, major | 7 (5.5) | 8 (8.1) | 7 (8.0) | 1.57 (0.52-4.71), p=0.42 | 1.32 (0.41-4.28), p=0.64
Low weight | (25.2) | 24 (24.2) | 20 (22.7) | 0.75 (0.39-1.42), p=0.38 | 0.68 (0.35-1.35), p=0.27
Gonadal dysfunction ᵃ | (39.4) | 26 (37.7) | 24 (41.4) | 1.17 (0.52-2.66), p=0.71 |

Acknowledgements

The study was funded in part by the French National Clinical Research Program, the French National Cancer Institute (InCA), the French National Research Agency (ANR), the Cancéropôle PACA, the Regional Council PACA, the Hérault and Bouches-du-Rhône departmental committees of the Ligue Contre le Cancer and the French Institute for Public Health Research (IRESP). The authors would like to thank the patients and their families as well as all members of the L.E.A. study group (Supplemental Data S1).

Conflict of interest: The authors declare no potential conflict of interest.
35,553
[ "860193", "899520", "753397", "864947", "931384", "767006" ]
[ "233198", "216005", "17849", "360410", "187092", "300169", "209565", "187162", "301327", "258773", "127985", "301327", "464887", "439023", "233198", "233198", "104634", "233198", "541105", "151939", "300075", "541105", "360410", "116327", "458700" ]
01745901
en
[ "sdv", "shs", "info" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01745901/file/TwoCenturiesDemographicEvolutionSurvey_v1.pdf
Nicole El Karoui email: nicole.el_karoui@upmc.fr Kaouther Hadji email: kaouther.hadji@gmail.com Sarah Kaakai email: sarah.kaakai@polytechnique.edu

Inextricable complexity of two centuries of demographic changes: A fascinating modeling challenge

demographic transition affected most of European countries and countries with

Introduction

In over two centuries, the world population has been transformed dramatically, under the effect of considerable changes induced by demographic, economic, technological, medical, epidemiological, political and social revolutions. The age pyramids of ageing developed countries look like "colossi with feet of clay", and the complexity of the phenomena involved makes the projection of future developments very difficult, especially since these transitions are unprecedented. The problem does not lie so much in the lack of data or empirical studies. For several years now, a considerable amount of data has been collected at different levels. A number of international organizations 1 have their own open databases, and national statistical institutes 2 have been releasing more and more data. On top of that, more than fifty public reports are produced each year. The private sector is also very active on these issues, especially pension funds and insurance companies, which are strongly exposed to the increase in life expectancy at older ages. However, the past few years have been marked by a renewed demand for more efficient models. This demand has been motivated by observations of recent demographic trends which seem to contradict some firmly established ideas. Newly available data seem to indicate a paradigm shift over the past decades, toward a more complex and individualized world. Countries which had similar mortality experiences until the 1980s now diverge, and a widening of health and mortality gaps within countries has been reported by a large number of studies. These new trends have been declared key public issues by several organizations, including the WHO in its latest World report on ageing and health (World Health Organization (2015)), and the National Institute on Aging in the United States, which in 2008 created a panel on Understanding Divergent Trends in Longevity in High-Income Countries, leading to the publication of a comprehensive report in National Research Council and Committee on Population (2011). In the face of the considerable amount of literature, data and points of view concerning the evolution of human longevity and populations over the last two centuries, we came to the conclusion that it was necessary to highlight a number of key observations in order to avoid the pitfalls of an overly naive approach. The goal of this cross-disciplinary survey is to help a modeler of human population dynamics to find a coherent way through this mass of multidisciplinary information (for instance by taking into account the whole population dynamics and not only old ages). Based on numerous surveys from various academic disciplines and many contradictory readings, we offer a subjective selection of what we believe to be the most important ideas or facts, from a mathematical modeling perspective. As we will not be able to devote the necessary time to each point, we try to illustrate some of our points with examples that support the intuition about the phenomena mentioned.
It should be emphasized from the start that while the discussion is greatly enriched by the multidisciplinary nature of the field, the presentation of ideas is also made more difficult, especially in matters of vocabulary. It should also be noted that issues related to medical advances and to the biology of human ageing are dealt with in a very cursory way, as we focus mainly on economic and social issues. The survey is composed of three main parts, which are summarized in the next subsection. The first part deals with the historic demographic transition. The importance of public health is addressed, with a specific focus on the cholera epidemic outbreaks that took place in France and in the UK during the nineteenth century. Other features of the historic demographic transition are also considered. In particular, we explore the relationship between economic growth and the mortality improvements experienced during the past century. In the second part of the survey, we examine the implications for population modeling and the key features of the shift in paradigm that has been observed since the 2000s. We first give a brief overview of the so-called second demographic transition, and the move toward the description of increased complexity and diverging trends that have been recently observed, based principally on the experience of developed countries. Special attention is paid to socioeconomic differences in health and mortality. In the last part of the survey, we give a short review of microsimulation models and agent-based models widely used in social sciences, and in particular in demographic applications. We first describe the main components of a dynamic microsimulation exercise, which studies heterogeneous individual trajectories in order to obtain macro outcomes by aggregation, in the form of a data-driven complex model. Then, we present agent-based models, which take individual interactions into account to explain macroscopic regularities.

2 The historic demographic transition

Since the nineteenth century, most countries have experienced a remarkable evolution of their populations, referred to by demographers and economists as the demographic transition [START_REF] Bongaarts | Human Population Growth and the Demographic Transition[END_REF]. The historic demographic transition of the developed world 3 began in the nineteenth century and was completed in over a century (roughly 1850-1960). This historical process is mainly referred to as "the secular shift in fertility and mortality from high and sharply fluctuating levels to low and relatively stable ones" [START_REF] Lee | Demographic transition and its consequences[END_REF]. These substantial demographic changes caused life expectancy at birth to grow by more than 40 years over the last 150 years (for instance, life expectancy at birth rose in the United Kingdom and France from about 40 years in the 1870s 4 to respectively 81 and over 82 years in 2013 5), and the world population to grow from around 1 billion in 1800 to 2.5 billion by 1950 [START_REF] Bloom | The global demography of aging: Facts, explanations, future[END_REF]. The decline of infectious diseases constituted the vast bulk of the causes explaining the historic fall in mortality. For example, infectious diseases had virtually disappeared by 1971 in England and Wales, while they were responsible for 60 percent of deaths there in 1848 [START_REF] Cutler | The determinants of mortality[END_REF]. The causes of this reduction have been extensively debated.
Among the main causes that have been put forward are economic growth, improvement in living standards, education and, most importantly, social and public health measures [START_REF] Bloom | The global demography of aging: Facts, explanations, future[END_REF][START_REF] Cutler | The determinants of mortality[END_REF]. For instance, Cutler and Miller (2005) estimated that the purification of water explained half of the mortality reduction in the US in the first third of the twentieth century.

2.1 The cholera pandemic: a starting point of the demographic transition

In order to understand the unprecedented rise of life expectancy during the first half of the twentieth century, one has to go back to the 50 years which foreshadowed the demographic transition. At the beginning of the nineteenth century, the Industrial Revolution led to a total upheaval of society, associated with unbridled urban sprawl and unsanitary living conditions. In Paris, the population doubled from 1800 to 1850 to attain over one million inhabitants [START_REF] Jardin | Restoration and Reaction, 1815-1848[END_REF], while London grew by 2.5 fold during those 50 years, to attain more than 2 million inhabitants [START_REF] Chalklin | The rise of the English town, 1650-1850[END_REF]. In this context, epidemics were frequent and deadly. The cholera pandemic, which struck fear and left indelible marks on the collective imagination with its blue-black dying faces due to cyanosis (hence the nickname "blue death"), had the most important social and economic consequences. It is often referred to as an iconic example where medicine was confronted with statistics [START_REF] Dupaquier | Cholera in England during the nineteenth century: medicine as a test of the validity of statistics[END_REF], and was regarded as "the real spark which lit the tinder of the budding philanthropic movement, culminating in the social reforms and the foundation of the official public health movement seventeen years later" [START_REF] Underwood | The History of Cholera in Great Britain[END_REF]. The cholera pandemic originated in India and spread to Europe in the 1830s. Four subsequent outbreaks (1831, 1848-1854, 1866-1867 and 1888-1889) mainly affected France and England, causing 102,000 deaths in France in 1832 and 143,000 in the 1850s, over a population of 36 million [START_REF] Haupt | Histoire sociale de la France depuis 1789. Bibliothèque allemande[END_REF]. In London, 6,536 deaths were reported in 1831 and 14,137 deaths during the 1848-1849 cholera outbreak [START_REF] Underwood | The History of Cholera in Great Britain[END_REF]. In the following paragraphs, we will focus on the cholera outbreaks in France and England, in order to illustrate the profound changes which occurred at different levels (city, state and international), and which still give valuable insight into contemporary challenges.

Cholera in England

The intensity of the first cholera outbreak in London in 1831, combined with the growing influence of advocates of public health, brought to light the need for public measures to improve sanitation. At that time, many reformers considered that statistics were a prerequisite for any intervention, and enthusiasm in the field expanded very quickly, which is somewhat reminiscent of the current craze for data science. In this context, the General Register Office (G.R.O) was created in 1836, with the aim of centralizing vital statistics.
England and Wales were divided into 2,193 registration sub-districts, administered by qualified registrars (often doctors). In charge of compiling data from registration districts, W. Farr served as statistical superintendent from 1839 to 1880 and became "the architect of England's national system of vital statistics" [START_REF] Eyler | William Farr on the cholera: the sanitarian's disease theory and the statistician's method[END_REF]. The precise mortality data collected by the G.R.O during cholera outbreaks turned out to be instrumental in the analysis of the disease. In his pioneering Report on the Mortality of Cholera in England, 1848-49 (Farr (1852)), Farr and the G.R.O produced almost four hundred pages of statistics. His main finding, based on the collected data of the 1848-49 outbreak, was the existence of an inverse relationship between cholera mortality rates and the elevation of registration districts above the Thames. Farr was particularly pleased with this statistical law, since it validated his belief in the prevailing miasmatic theories, which predicted that the transmission of the disease was airborne. It was actually J. Snow who first claimed that cholera communication was waterborne, with his famous experiment of the Broad Street pump [START_REF] Brody | Mapmaking and myth-making in Broad Street: the London cholera epidemic, 1854[END_REF]. However, Farr's statistics were decisive in supporting and validating his theory. Although Snow's theory was not widely accepted, he contributed to raising the issue of water quality. Under the impetus of the General Board of Health created in 1848, the Metropolis Water Act of 1852 introduced for the first time regulations for (private) water supply companies, to take effect by 1855. At the time of the 1853-54 cholera outbreak, Farr found out that only one company had complied with the new regulations, and that in a number of districts it was competing with another company drawing water from a highly polluted area. The perfect conditions for a full-scale experiment were thus brought together, and Farr and Snow joined their investigations to conclude that, without doubt, water played an important role in the communication of the disease. In 1866, a smaller outbreak hit London. More specifically, the reintroduction of sewage-contaminated water by the East London Water Company caused, in just one week, 908 of the 5,596 deaths in London [START_REF] Dupaquier | Cholera in England during the nineteenth century: medicine as a test of the validity of statistics[END_REF][START_REF] Underwood | The History of Cholera in Great Britain[END_REF]. Despite the overwhelming amount of evidence, the Medical Officer himself tried to exonerate the company, provoking the wrath of Farr. This event, however, was a wake-up call for the English political class to guarantee the supply of clean water. Several public health measures were taken from the 1850s onwards in order to improve public health and water quality. A new administrative network was established in London in 1855, which undertook the development of the city's main drainage system, completed in 1875. Among other measures were the Rivers Pollution Act of 1876 and the production of monthly water reports from the 1860s [START_REF] Hardy | Water and the search for public health in london in the eighteenth and nineteenth centuries[END_REF].
Cholera in France

France's experience with cholera differed from England's, due to its different scientific environment and unstable political situation. The first epidemic reached Paris by the spring of 1832, causing, in four months, the death of almost 2.1% of the 774,338 Parisians (Paillar (1832)). In his remarkable report addressed to the Higher Council of Health (Moreau de Jones (1831) 6), the former military officer A. Moreau de Jonnès (1778-1870) gave considerable detail on the international spread, along trade routes, of the pandemic that had started in India in 1817, including the treatments and precautions taken against the disease. He clearly attested that cholera was "incontestably" contagious. In 1833, he became the first head of the Statistique Générale de la France (SGF), the nearest French equivalent of the English G.R.O. Like Farr, Moreau de Jonnès published many reports (13 volumes) and contributed to the development of statistics and its applications in France. Unfortunately, little attention was paid to his findings by the French health care community. At the end of 1831, France was anticipating a cholera epidemic. Health commissions were established in Paris and in other departments in order to control the disease with the help of health councils (conseils de salubrité); but the organization was less systematic and data collection less reliable than in England. During the first epidemic, social unrest among the lower classes, who saw the disease "as a massive assassination plot by doctors in the service of the state", was the government's worst fear [START_REF] Kudlick | Cholera in Post-revolutionary Paris: A Cultural History[END_REF]. The government was supported by the Faculty of Medicine in its efforts to reduce fear and avoid a popular uprising, and the latter stated in 1832 that the disease was not communicable [START_REF] Fabre | Conflits d'imaginaires en temps d'épidémie[END_REF]. In 1848, a public health advisory committee was created and attached to the Ministry of Agriculture and Commerce, in charge of sanitary issues (housing, water and protection of workers) and of prophylactic measures to prevent the epidemic from spreading (Le Mée (1998)). As in 1832, this committee stated that cholera was not contagious [START_REF] Dupaquier | Cholera in England during the nineteenth century: medicine as a test of the validity of statistics[END_REF]. In 1849, the second epidemic broke out, after the 1848 revolution. Contrary to the first epidemic, characterized by riots and tensions, the reaction to this outbreak was more peaceful, with more efficient collaboration between scientists and the administration. At the same time, the perception of the lower classes also changed, with the idea of struggling against destitution in order to prevent revolt [START_REF] Kudlick | Cholera in Post-revolutionary Paris: A Cultural History[END_REF]. As a consequence, the response to the second epidemic was better organized, and social laws were passed in 1850-51. In the following years, the hygiene problems and unsanitary living conditions caused by the rapid growth of Paris's population were addressed by important public health measures. In particular, the massive public work projects led by Baron Haussmann 7 in less than two decades, from 1852 to 1870, remain a symbol of the modernization of Paris at the end of the nineteenth century (Raux (2014)).
Cholera Pandemic and International Health Organization

The international dimension of the problem raised by cholera, reported in France by Moreau de Jonnès in 1824-31, was widely publicized by The Lancet, which published in 1831 a map of the international progress of cholera 8 (Koch (2014)). This map suggested a relation between human travel and the communication of the disease, accelerated by the industrial revolution in transport, in particular by steamships and railways. Cholera was regarded as an issue transcending national boundaries, which needed international cooperation to be controlled [START_REF] Huber | The unification of the globe by disease? the international sanitary conferences on cholera, 1851-1894[END_REF]. Europe had succeeded in setting up an efficient protective system against the plague, based on ideas such as quarantine and the "cordon sanitaire". But those measures were very restrictive and seemed inefficient against cholera. Moreover, in the second half of the nineteenth century, Western European countries were involved in competitive colonial expansion, and were rather hostile to travel restrictions, even if increased global circulation was a threat to populations. The opening of the Suez Canal in 1869 was an emblematic example of those changes. Under the influence of French hygienist doctors, the first International Sanitary Conference opened in Paris in 1851 [START_REF] Huber | The unification of the globe by disease? the international sanitary conferences on cholera, 1851-1894[END_REF], gathering European states and Turkey. It was the first international cooperation on the control of a global risk to human health, and thus the beginning of international health diplomacy. It took more than ten international conferences, over a period of more than 50 years, to produce tangible results. During the first five conferences 9, the absence of a clear scientific explanation of the origin of cholera prevented any agreement. It was only with the formal identification of the V. cholerae bacterium by R. Koch in 1883 10 and the work of L. Pasteur that infectious diseases were clearly identified and efficiently fought against. Indeed, technological progress, as evinced by disinfection machines, allowed the practical implementation of new measures [START_REF] Huber | The unification of the globe by disease? the international sanitary conferences on cholera, 1851-1894[END_REF]. Furthermore, advances in germ theory "allowed diplomats to shape better informed policies and rules" [START_REF] Fidler | The globalization of public health: the first 100 years of international health diplomacy[END_REF]. At the Seventh Conference (1892), the first maritime regulation treaty was adopted for ships traveling via the Suez Canal.

7 Napoléon III appointed Baron Haussmann as Préfet de la Seine.
8 The map was completed in 1832 by Brigham to include Canada and the USA.
9 1851, 1859, 1866, 1874 and 1881.
10 The bacterium had been isolated before by other scientists, such as F. Pacini in 1854, but his work did not have a wide diffusion.
The International Sanitary Conferences provided a forum for medical administrators and researchers to discuss not only on cholera but also on other communicable diseases, and brought about the first treaties and rules for international health governance. Ultimately, this spirit of international cooperation gave birth in 1948 to the World Health Organization, an agency of the United Nations, conceived to direct and coordinate intergovernmental health activities. Discussion In England, Farr's discoveries could not have been made without the cutting edge organization and the power of the G.R.O. It is worth noting that only a governmental organization such as the G.R.O was able to collect the data fast enough for the 1854 experiment of Farr and Snow [START_REF] Dupaquier | Cholera in England during the nineteenth century: medicine as a test of the validity of statistics[END_REF]) to be possible. The modern organization of the G.R.O undoubtedly contributed to the remarkable quality of today's England vital databases. Across the Channel, France did not manage to create the same kind of centralized authority. On the grounds of their hostility to the communicable disease theory, French doctors did not rely on statistics. On the other hand, the use of statistics made by Farr contributed to a better understanding of the disease. It was only more than a century and a half later that a major breakthrough was made in the understanding of the origins of the disease, with the work of R. Colwell showing that the V. cholerae bacterium appears naturally in the environment. Yet the ambition to find causal factors by the sole analysis of data is not devoid of risks, and thus constitutes a major challenge for the data science era. Farr's elevation law is a textbook case of an unexpected correlation that turns out to have a great influence. Despite claimed impartiality, his choice to highlight the elevation law among all the findings mentioned in his report on the 1848-49 outbreak was clearly biased by his beliefs in the predominant (though false) miasma theory. While he later accepted that epidemics could be waterborne, Farr continued to believe in the prevailing role of elevation, even when deaths due to cholera during the 1854 and 1866 outbreak were not consistent with the elevation law. Rather than allowing the discovery of the causes of cholera, Farr's statistics were actually more useful for testing and validating the relationships predicted by Snow's theory. Another point is that the conditions that made the 1854 experiment possible were quite extraordinary. Testing theories regarding the complex events of health and 2.1 The cholera pandemic: a starting point of the demographic transition mortality in human communities is often nearly impossible. Only a handful of studies can take advantage of natural experiments. More often than not, as stated in National Research Council and Committee on Population (2011), "they are limited ethical opportunities to use randomized controlled trials to study the question at issue". Furthermore, governments failed to come to an agreement during the first international conferences because of the lack of scientific explanations on the origin of cholera. The need of theoretical arguments for public decisions to be made is still an important issue, especially when considering human health and longevity, for which no biological or medical consensus has emerged. 
As will be developed further in this survey, the use of mathematical models and simulations can operate as a proxy for real-life experiments and help decision making. Even when theories are publicized, there are often important delays (one or two generations) before action is taken. For instance, even though Snow's theory was better known in 1866, and despite the development of germ theory in the early 1880s, political divergences prevented any action before 1892. The example of asbestos, which took 50 years to be banned after the demonstration of its link with cancer, shows us that these delays in public response have not diminished over time [START_REF] Cicolella | Santé et Environnement : la 2e révolution de Santé Publique[END_REF]. More generally speaking, around 30 years elapsed between the first epidemic and the real development of public health policies in England and in France. The example of cholera illustrates the complexity of studying mortality evolution, which is inseparable from societal and political changes. Although cholera outbreaks occurred at about the same time in France and in England, they were experienced very differently owing to the different political and scientific climates in the two countries. This shows that the sole study of mortality data cannot be sufficient to understand future mortality trends. In particular, the explosion of the London population, whose size was twice as large as that of Paris, brought about social problems on a much greater scale, which played a determining role as a catalyst of public health changes. The cholera outbreaks contributed to the development of important public health measures, which played a major role in the reduction of infectious diseases. For instance, [START_REF] Cutler | The role of public health improvements in health advances: the twentieth-century United States[END_REF] estimated that water purification explained half of the mortality decline in the United States between 1900 and 1930. In comparison, the discovery of new vaccines for a number of diseases at the beginning of the twentieth century seems to have had little impact on the reduction of mortality from those diseases: the reduction in mortality from those diseases (except tuberculosis) following the introduction of the vaccines is estimated to have contributed only about 3 percent of the total mortality reduction.

2.2 A century of economic growth

The twentieth century was the century of "the emergence for the first time in history of sustained increases in income per head" (Canning (2011)), and the association between economic growth and mortality improvements has been extensively discussed by economists. During the nineteenth century, individuals in rich and poor countries experienced similar health conditions. The 1870s were a turning point, with the improvement of health in rich countries [START_REF] Bloom | Commentary: The preston curve 30 years on: still sparking fires[END_REF]. In his seminal article, [START_REF] Preston | The changing relation between mortality and level of economic development[END_REF] was one of the first economists to examine the relationship between life expectancy at birth and national income per head in different countries 11, for three different decades: the 1900s, 1930s and 1960s (see Figure 1). In each decade, Preston brought to light a strong positive association between life expectancy and national income. He also stated that the relationship was curvilinear.
For instance, the so-called Preston curve of 1960 appeared "to be steeper at incomes under 400$ and flatter at incomes over 600$" [START_REF] Preston | The changing relation between mortality and level of economic development[END_REF]. Preston also noted an upward shift of the curve, characterized by a rise of life expectancy over time at all income levels. These empirical results showed that economic growth alone did not explain the remarkable mortality decline. For instance, the income level corresponding to a life expectancy of 60 was about three times higher in 1930 than in 1960. Another example is China, which had in 2000 the same income level as the USA in the 1880s, but the life expectancy level of the USA in 1970. Preston (1975) estimated that national income accounted for only 10 to 25 percent of the growth of life expectancy between the 1930s and the 1960s. [START_REF] Bloom | Commentary: The preston curve 30 years on: still sparking fires[END_REF] also estimated that increases in income between 1938 and 1963 were responsible for about 20% of the increase in global life expectancy. On the one hand, some studies on the causal link between health and wealth suggested that "health can be a powerful instrument of economic development" [START_REF] Bloom | Commentary: The preston curve 30 years on: still sparking fires[END_REF]. On the opposite side, Acemoglu and Johnson (2007) argue that improvements in population health, especially the reduction of child mortality, might have negative impacts on economic growth, due to the increase in the population size. They argue that a positive effect of economic growth on health may be counterbalanced by the negative effect of population growth on health. However, Reher (2011) describes the increase in the proportion of working-age people in the population that occurred in developed countries during the twentieth century as a situation which had "profound economic implications for society, as long as the economy was able to generate enough jobs to accommodate the growing population of working age".

For a more complete picture, it is thus interesting to go beyond "macro" environmental indicators such as public health and economic growth, and to look at mortality experiences on different scales, by exploring differences between countries and within countries.

3 A new era of diverging trends

3.1 A second demographic transition?

In the early 1970s, many demographers and population scientists had supported the idea that populations would ultimately reach the last stage of the classical demographic transition, described as an "older stationary population corresponding with replacement fertility (i.e., just over two children on average), zero population growth, and life expectancies higher than 70y" (Lesthaeghe (2014)). More generally speaking, populations were supposed to attain an equilibrium state, characterized by a significant level of homogeneity. For instance, the nuclear family composed of a married couple and their children was expected to become the predominant family model. Yet, in most countries which experienced the historic transition, the baby boom of the 1960s 12 was characterized by higher fertility rates, followed by a decline in fertility in the 1970s (baby bust). In response to these fluctuations, attempts were made to modify the original theory. For instance, Easterlin (1980) developed a cyclical fertility theory, linking fertility rates to labor-market conditions.
Smaller cohorts would benefit from better living conditions when entering the labor market, leading to earlier marriage and higher fertility rates. On the contrary, larger cohorts would experience worse living conditions, leading to later marriage and lower fertility rates. However, it turned out that this state of equilibrium and homogeneity in populations was never realized. Actually, fertility rates remained too low to ensure the replacement of generations; mortality rates, especially at advanced ages, declined at a faster rate than ever envisaged; and contemporary societies seem to be defined by more and more heterogeneity and diverging trends. The idea of a renewed or second demographic transition was put forward by Lesthaeghe and Van de Kaa. Lesthaeghe and Van de Kaa also define the second demographic transition as a shift in the value system. The first phase of the demographic transition was a period of economic growth and aspirations to better material living conditions. In contrast, the past few years have seen a rise of "higher order" needs and individualization. In this new paradigm, individuals are overwhelmingly preoccupied by individual autonomy, self-realization and personal freedom of choice, resulting in the creation of a more heterogeneous world. Even if the framework of the second demographic transition has been criticized, this viewpoint sheds an interesting light on recent longevity trends. Indeed, divergences in mortality levels and improvements between and within high-income countries are at the heart of numerous debates and research works. As the average life expectancy has been rising unprecedentedly, gaps have also been widening at several scales. What may be somewhat surprising is that up until the 1980s, high-income countries had roughly similar life expectancy levels. For example, the comparison of female life expectancy at age 50 in ten high-income countries 13 shows that the gap was less than one year in 1980. By 2007, the gap had risen to more than 5 years, with the United States at the bottom of the panel with Denmark, more than 2 years behind Australia, France, Italy and Japan (National Research Council and Committee on Population (2011)). On another scale, a great amount of evidence shows that socioeconomic differentials have also widened within high-income countries. For instance, the gap in male life expectancy at age 65 between higher managerial and professional occupations and routine occupations in England and Wales was 2.4 years in 1982-1986, and rose to 3.9 years in 2007-2011 14. The following part focuses on two angles of analysis of these diverging trends: the impact of smoking behaviors and socioeconomic inequalities. The goal of the following discussion is not to detail further the impact of these risk factors, but rather to show the complexity of understanding current longevity trends, which cannot be disentangled from the evolution of the whole population, and which require a multiscale analysis of phenomena, while keeping in mind that obtaining comparable and unbiased data is also a challenge in order to explain longevity.

13 Australia, Canada, Denmark, England and Wales, France, Italy, Japan, Netherlands, Sweden and the United States.

3.2 Diverging trends between high-income countries: the impact of smoking behaviors
In the United States, the evolution is even more striking for women: the ranking for female life expectancy at age 50 fell from the 13th to the 31st position, with an increase of only about 60 percent of the average increase of high-income countries. In addition, the gap with higher-achieving countries such as France or Japan grew from less than one year in 1980-85 to more than 3 years in 2010-15 16. The Netherlands and Denmark also show similar patterns of underachievement in life expectancy increases. Although many methodological problems may arise when using cause-of-death statistics, a cause-of-death analysis can provide a powerful tool for understanding divergences in mortality trends. In a commissioned background article for the report, Glei et al. (2010) studied cause-of-death patterns for 10 different countries in order to identify the main causes of death possibly responsible for diverging trends. The case of lung cancer and respiratory diseases, which are relevant indicators concerning smoking, is particularly interesting. Age-standardized mortality rates from lung cancer among men aged 50 and older in the U.S. decreased from 1980 to 2005, while they increased for women, although they remain higher for men than for women. In addition, the increase of age-standardized mortality rates due to lung cancer for women was much faster in the U.S., Denmark and the Netherlands than the average increase of the studied countries, and especially than in Japan, where age-standardized mortality rates remained flat. These findings of [START_REF] Glei | Diverging trends in life expectancy at age 50: A look at causes of death[END_REF] clearly point to smoking as the main underlying factor explaining those divergences. Over the past 30 years, the evolution of mortality due to lung cancer and respiratory diseases has had a positive effect on gains in life expectancy for males, while the effect was negative for females. These gender differences can be linked to the fact that women began to smoke later than men, and have been quitting at a slower pace [START_REF] Cutler | The determinants of mortality[END_REF]. In addition, fifty years ago, people smoked more intensively in the United States, Denmark and the Netherlands than in other European countries or in Japan. These differences can give valuable information about future mortality patterns. Because of its delayed effects on mortality, the impact of smoking behaviors on future trends is somewhat predictable. Just as the causes of death of individuals aged 50 and older give some insight into what happened in the past, current behaviors among younger individuals can be a useful indicator of future trends. Thus, life expectancy for males in the United States is likely to increase rather rapidly, following reductions in the prevalence of smoking over the past twenty years, while slower life expectancy improvements can be expected for women in the coming years (National Research Council and Committee on Population (2011)). According to a panel of experts, life expectancy in Japan is also expected to increase at a slower pace in the future, due to an increase in the prevalence of smoking. Differences in the timing of evolutions in smoking behaviors across gender and countries might also give additional information.
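The comparisons above rely on age-standardized mortality rates. As a reminder of what that computation does, here is a minimal sketch of direct standardization, with made-up age-specific rates and a made-up standard population (all numbers are purely illustrative): each country's age-specific rates are averaged with the same reference weights, so that differences in age structure no longer drive the comparison.

# Minimal sketch of direct age standardization (illustrative numbers only).
import numpy as np

standard_pop = np.array([0.35, 0.30, 0.22, 0.13])  # weights for ages 50-59, 60-69, 70-79, 80+

# Hypothetical lung-cancer death rates per 100,000, by age group.
rates = {
    "country A": np.array([40.0, 120.0, 260.0, 380.0]),
    "country B": np.array([55.0, 150.0, 300.0, 420.0]),
}
for name, r in rates.items():
    asr = float(np.dot(standard_pop, r))  # weighted average of age-specific rates
    print(f"{name}: age-standardized rate = {asr:.1f} per 100,000")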
The impact of smoking on male life expectancy in the past could help predict future trends for women, and the experience of the United States could shed light on the future impact of smoking in Japan. But smoking is certainly not a sufficient explanation, and other risk factors may have contributed to the underachievement of the United States. In particular, the obesity epidemic may partly account for the slower increase in life expectancy experienced by the United States. Quantifying the impact of the obesity epidemic is much more complicated, since no clear markers are available analogous to lung cancer and respiratory diseases for smoking. According to some researchers, the obesity epidemic in the United States might even offset the gains in life expectancy due to the decline of smoking (Stewart et al.).
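The "predictability" argument can be made concrete with a schematic cohort-lag sketch. All numbers below are stylized assumptions, not observed data: smoking prevalence at age 30 is assumed to decline from 1980 onwards, and the death rate at age 60 in year t carries an excess proportional to the prevalence of the same cohort 30 years earlier — so behaviors among the young today mechanically shape mortality decades later.

# Schematic cohort-lag sketch (stylized numbers): the death rate at age 60 in
# year t reflects the smoking prevalence of the same cohort at age 30, year t-30.
import numpy as np

years = np.arange(1950, 2051)
# Assumed prevalence of smoking at age 30, declining from 1980 onwards.
prev_age30 = np.where(years < 1980, 0.55, np.maximum(0.15, 0.55 - 0.01 * (years - 1980)))

lag = 30          # years between exposure at age 30 and mortality at age 60
baseline = 0.010  # assumed non-smoking-related death rate at age 60
excess = 0.012    # assumed extra death rate attributable to smoking

m60 = np.full(years.shape, np.nan)
for i in range(lag, len(years)):
    m60[i] = baseline + excess * prev_age30[i - lag]

for t in (1990, 2010, 2030, 2050):
    i = t - 1950
    print(t, f"m(60) = {m60[i]:.4f}", f"(cohort prevalence at 30: {prev_age30[i - lag]:.2f})")

The decline in prevalence that starts in 1980 only reaches the age-60 death rate from 2010 onwards — the kind of built-in delay that makes part of future mortality foreseeable from today's data on the young.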
Educational systems, even in groups of similar countries, can differ substantially from one country to another and make cross-national comparisons difficult [START_REF] Elo | Social class differentials in health and mortality: Patterns and explanations in comparative perspective[END_REF]. Furthermore, there is a real difficulty in comparing certain groups at different periods in time. Important changes can occur in group sizes and composition. For instance, the proportion of women in France with higher managerial and professional occupations increased from about 2 percent in 1975 to 6 percent in 1999 (10 to 14 percent for males). The evolution in the number of women long-term unemployed or not in the labor force is even more striking: their proportion decreased from 45 percent in 1975 to only 21 percent in 1999. Besides, Blanpain (2011) observed an important widening of mortality inequalities between this subgroup and other occupational subgroups over the period. The widening of these gaps is actually a typical consequence of important changes in the composition of the long-term unemployed or not in the labor force subgroup. The major decrease in the size of the subgroup can be explained by the important decrease in the number of housewives over time, leaving only the most precarious in the subgroup. Proxies for the SES can be measured at different periods in the life course of an individual, and can have different causal relations with health or mortality. Education is rather consistent across the lifespan (which allows for an easier dynamic modeling), and permits to assess the stock of human capital accumulated early in life and available throughout the individual life course [START_REF] Elo | Social class differentials in health and mortality: Patterns and explanations in comparative perspective[END_REF]. On the other hand, occupation, income or wealth allow to take into account later parts of the life course and might allow to capture impacts of public policies better than educational attainment measurements (National Research Council and Committee on Population (2011)). However, the variability of the occupational status through the life course and the difficulty of assigning an occupational group to individuals is important when studying socioeconomic gradients by occupational rankings. For instance, the issue of assigning an occupational group to individuals who are not in the labor force or to retirees is classical. Interpretations can also differ significantly depending on the period in the life course at which occupational status is measured. Additional complexity is also generated by the potential bidirectionality of causation, especially concerning economic measures of SES such as income or wealth, for which causal pathways are debated. Evidence from the economic literature has shown that ill health can also lead to a decrease in income or wealth. This is particularly true in countries like the United States, with poorer national health care coverage than most Western European countries, and where poor health is a significant contributor to bankruptcy (Himmelstein et al. (2009)), retirement or unemployment (Smith (2007), cited in National Research Council and Committee on Population (2011); Case and Deaton (2005), cited in Cutler et al. (2006)).

Explaining the socioeconomic gradient in health and mortality

The difficulty in interpreting results of empirical measurements of the SES gradient in mortality reflects our limited understanding of the risk factors that underlie the repercussions of socioeconomic inequalities on health and mortality. Theories explaining the SES gradient are still being debated, and their testing is often neither straightforward nor unbiased as far as the measurements are concerned (see below the discussion on absolute versus relative measures). Furthermore, the impact of inequalities on aggregated variables, and the interpretation in terms of public policy, can differ substantially according to the different theories. The mechanisms through which SES is assumed to generate inequalities in health and longevity are usually grouped in three broad categories: material, behavioral and psychosocial [START_REF] Cutler | The determinants of mortality[END_REF].

Material risk factors

Maybe the most natural explanation of socioeconomic differences in health is that wealthier individuals have better access to health care, even in countries with national health care coverage, where potential two-tiered systems can also create inequalities. Individuals with a higher income can also maintain a healthier lifestyle, being able to buy expensive organic food or pay for gym memberships. However, access to health care or material resources does not appear to be the primary factor explaining the SES gradient (National Research Council and Committee on Population (2011), [START_REF] Cutler | The determinants of mortality[END_REF]). For instance, the education gradient in the U.S. steepened between the sixties and the eighties, even though the Medicare program was enacted in 1965 ([START_REF] Pappas | The increasing disparity in mortality between socioeconomic groups in the United States, 1960 and 1986[END_REF], cited in [START_REF] Elo | Social class differentials in health and mortality: Patterns and explanations in comparative perspective[END_REF]).

Behavioral risk factors

The second explanation is that individuals with higher educational attainment are more likely to adopt healthier behaviors and to avoid risks.
By accumulating knowledge, skills and resources, individuals who are higher on the SES ladder should be able to take better advantage of new health knowledge and technological innovations, as well as to turn more rapidly toward healthier behaviors. This behavioral explanation of socioeconomic inequalities is linked to the theory of fundamental causes of Link and Phelan ([START_REF] Link | Social conditions as fundamental causes of disease[END_REF], [START_REF] Phelan | Social conditions as fundamental causes of health inequalities: theory, evidence, and policy implications[END_REF]). The aim of Link and Phelan's theory is to explain the persistence of pervasive effects of social class inequalities through time, despite dramatic changes in diseases and risk factors. According to the theory, the accumulation, among other resources, of so-called human capital allows more educated individuals to use resources and develop better protective strategies, whenever they can and no matter what the risks are. Let us take the example of smoking, described in [START_REF] Link | Epidemiological sociology and the social shaping of population health[END_REF]. When the first evidence linking smoking to lung cancer emerged in the fifties, smoking was not correlated to SES. But as the knowledge of the harm caused by smoking spread, strong inequalities in smoking behavior appeared, reflected in the fact that more educated individuals quit smoking earlier. However, a number of studies (see National Research Council and Committee on Population (2011) for examples) have shown that if behavioral differences play a significant role in explaining the SES gradient in mortality, they do not explain everything, and may not even account for the major part of the differentials. For instance, the famous study of Whitehall civil servants [START_REF] Marmot | Social differentials in health within and between populations[END_REF] showed that health differentials subsisted even when factors such as smoking or drinking were controlled for.

Psychosocial factors

Another prominent and rather recent theory explaining socioeconomic differentials in mortality is that health is impacted by the SES through psychosocial factors ([START_REF] Cutler | The determinants of mortality[END_REF], [START_REF] Wilkinson | Income inequality and social dysfunction[END_REF]). Among psychosocial factors are stress, anxiety, depression or anger. Accumulated exposure to stress has received particular attention in the literature, due to its pervasive effects on health. Indeed, prolonged exposure to chronic stress affects multiple physiological systems by shifting priorities away from systems such as the immune, digestive or cardiovascular systems in favor of systems responding to threat or danger [START_REF] Wilkinson | Income inequality and social dysfunction[END_REF], leading to a state of so-called "allostatic load". The link between low social status and stress has been supported by a number of studies on primates. For instance, [START_REF] Sapolsky | Social status and health in humans and other animals[END_REF][START_REF] Sapolsky | Sick of poverty[END_REF] showed that among wild baboons, subordinate animals presented higher levels of glucocorticoids, a hormone with a central role in the stress response.

The impact on aggregated variables

On an aggregated level, socioeconomic inequalities impact national mortality not only through the importance of the SES gradient, but also through the composition of the population and its heterogeneity. From a material point of view, the relationship between income and health or mortality was initially thought of as a curvi-linear relation ([START_REF] Preston | The changing relation between mortality and level of economic development[END_REF], [START_REF] Rodgers | Income and inequality as determinants of mortality: an international cross-section analysis[END_REF]). According to this analysis, a redistribution of income from the wealthiest groups to the poorest would result in improving the health of the poor rather than endangering the health of the wealthy. This non-linear relationship shows that the impact of inequality at the aggregated level of a country is not trivial. For instance, if a country experiences a high level of income inequalities, the overall mortality in the country can be higher than in a country with the same average level of income but with a lower level of inequality.
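This aggregation effect of a curvi-linear relation can be illustrated numerically. In the toy Python calculation below, mortality risk is a convex, decreasing function of income (a stand-in for a Preston-type relation; the functional form and all numbers are ours, purely for illustration). Two populations with the same mean income but different spreads then have different aggregate mortality, the more unequal one doing worse, by Jensen's inequality.

```python
# Toy illustration of the aggregation effect of a curvi-linear
# income-mortality relation. The functional form and all numbers
# are hypothetical.

def mortality_rate(income):
    """Convex, decreasing relation: risk falls with income,
    but with diminishing returns (deaths per 1,000; income in thousands)."""
    return 5.0 + 200.0 / income

# Two populations with the same mean income (30) but different inequality.
equal_society   = [28, 29, 30, 31, 32]
unequal_society = [10, 15, 30, 45, 50]

def aggregate_mortality(incomes):
    """Population-level mortality as the average of individual risks."""
    return sum(mortality_rate(y) for y in incomes) / len(incomes)

print(f"{aggregate_mortality(equal_society):.1f}")    # 11.7 deaths per 1,000
print(f"{aggregate_mortality(unequal_society):.1f}")  # 14.7 deaths per 1,000
```

Because the risk function is convex, the losses of the poor outweigh the gains of the rich at a given mean income, which is exactly the composition effect discussed above.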
The composition of the population also matters. For instance, in a study based on the comparison of the United States with 14 European countries, [START_REF] Avendano | Do Americans Have Higher Mortality Than Europeans at All Levels of the Education Distribution?: A Comparison of the United States and 14 European Countries[END_REF] observed that the unusually high educational gradient in mortality in the United States seems to be counterbalanced by an attractive educational distribution. As a consequence, they found that the Relative Index of Inequality (RII) 18 of the United States was not especially high in comparison with other countries with an educational gradient of lower magnitude. The age structure of the population also plays a determining role, and the socioeconomic composition of different age classes can vary a lot (see Chapter 4 for a more detailed discussion on this subject). But recent studies, based on the psychosocial explanation of the SES gradient, seem to indicate that the relation between aggregated mortality and inequality is even more complicated. They argue that the presence of inequality itself impacts the health and mortality of individuals. For instance, [START_REF] Wilkinson | The spirit level: Why more equal societies almost always do better[END_REF] studied the association between life expectancy and income inequality 19 among the 50 richest countries of more than 3 million inhabitants, and found a correlation of 0.44 between life expectancy and the level of inequalities, while no significant association was found between life expectancy and average income. These results suggest that health and mortality are impacted by the relative social position of individuals, rather than by their absolute material living standards ([START_REF] Wilkinson | Income inequality and social dysfunction[END_REF], [START_REF] Pickett | Income inequality and health: a causal review[END_REF]). This is closely linked to the theory of psychosocial factors, which assumes that it is the relative social ranking which determines the level of exposure to psychosocial problems and the ability to cope with them. [START_REF] Wilkinson | Income inequality and social dysfunction[END_REF] go even further and argue that inequalities affect not only individuals at the bottom of the socioeconomic ladder, but the vast majority of the population. For instance, Wilkinson and Pickett (2008) compared standardized mortality rates in counties of the 25 more equal states in the US and in counties of the 25 less equal states 20 (see Figure 3). They found that for counties with the same median income, mortality rates were higher in counties in the less equal states than in counties in the more equal states. The relation held at all levels of median income, with more important differences for counties with lower median income. When measuring the impact of inequality on health, the size of the area appears to be an important variable to take into account. On the one hand, the relationship between health and inequalities, when measured at the level of large areas such as states or big regions, seems to be fairly strong. On the other hand, [START_REF] Wilkinson | Income inequality and social dysfunction[END_REF] note that at the level of smaller areas such as neighborhoods, the average level of income seems to matter more than one's relative social position in the neighborhood. This "neighborhood effect" has been studied by many authors and constitutes a field of research in itself (see e.g. [START_REF] Kawachi | Neighborhoods and health[END_REF], Diez Roux (2007), Diez Roux and Mair (2010), Nandi and Kawachi (2011)). Societal inequalities, neighborhood environment and individual socioeconomic characteristics thus impact health and mortality at multiple scales, making the analysis of the factors responsible for poorer health highly difficult. It is even more difficult to understand what happens at the aggregated level.
Discussion

The problems surrounding the measurement of the SES are revealing of the issues at hand. The interpretation of data across times and places is a delicate matter. As illustrated by the major changes in the composition of women's occupational subgroups, the effects of composition changes have to be carefully addressed. Besides, it is rather unlikely that a single measure of SES, at only one point in the life course of individuals, could capture accurately the many pathways by which social status can affect health and mortality [START_REF] Elo | Social class differentials in health and mortality: Patterns and explanations in comparative perspective[END_REF]. However, there are many limitations in the ability to obtain reliable data from multiple measures of SES. There are often limited opportunities for the empirical testing of complex theories such as the fundamental cause theory or the theory of psychosocial factors. The design of empirical tests is not straightforward, to say the least. Natural experiments, such as the evolution of smoking behavior, or experiments on non-human populations, such as Sapolsky's study of baboons, can give valuable insights on theories. However, as stated in the conclusion of the report of the National Research Council and Committee on Population (2011), "it is sometimes difficult, expensive, and ethically challenging to alter individual behavior". The pathways involved in translating SES into mortality outcomes can differ substantially according to the theory taken into account. Moreover, the impact of these underlying mechanisms on aggregated variables can also differ a lot, ranging from composition effects due to the curvilinear relation between material resources and longevity, to the global (non-linear) effects of the social stratification on individuals. From the fundamental causes point of view, new advances in knowledge and technology related to health will probably increase the SES gradient in health and mortality ([START_REF] Phelan | Social conditions as fundamental causes of health inequalities: theory, evidence, and policy implications[END_REF], [START_REF] Cutler | The determinants of mortality[END_REF]). This illustrates how different underlying factors explaining the SES gradient can influence our views on the impact of socioeconomic inequalities on aggregated quantities, and in turn influence choices of public policies. Different types of policy recommendations can be made, according to the underlying factors or measures of SES which are considered to be most prominent. For instance, Phelan et al. (2010) recommend two types of public interventions. The first type focuses on reducing socioeconomic inequality itself in order to redistribute resources and knowledge. This would be consistent with the views of [START_REF] Wilkinson | Income inequality and social dysfunction[END_REF] on the general impact of inequality in a country. The second type of recommendations falls into the domain of public health. Governments should be careful to design interventions which do not increase inequalities, by favouring for instance health interventions which would benefit everyone automatically. But public health should not be underestimated in this new age of "degenerative and man-made diseases" [START_REF] Bongaarts | Trends in causes of death in low-mortality countries: Implications for mortality projections[END_REF]. Public health campaigns against tobacco have played an important role in reducing cardiovascular disease mortality caused by smoking [START_REF] Cutler | The determinants of mortality[END_REF], although with varying degrees of success depending on countries, gender or social classes. The increase of environmental risks constitutes one of the major challenges faced by contemporary societies, and public action will play a central role in preventing and successfully reducing those risks [START_REF] Cicolella | Santé et Environnement : la 2e révolution de Santé Publique[END_REF].

We believe that the dynamic modeling of the evolution of the population may help to address these issues. A fine-grained modeling of the population dynamics could help to evaluate the impact of changes in the composition of socioeconomic subgroups. In addition, modeling the population dynamics can serve as a simulation tool in order to take into account various measures of SES, when empirical data are limited. It can also be used to test hypotheses regarding which aspects of SES are the most important for reducing socioeconomic inequalities in health and mortality. By using population simulation as an experimental tool when real-life experiments are not possible, theories can be tested by comparing the aggregated outcomes produced by the model to what is observed in reality. However, the above paragraphs show the complexity of the phenomena involved. Socioeconomic inequalities impact health and mortality through complicated pathways. Phenomena are often non-reproducible (risk factors, as well as the economic, social or demographic environment, have changed dramatically over recent years), with effects which are often delayed. Furthermore, findings suggest that the impact of socioeconomic inequalities is highly non-linear. Individual characteristics do not fully explain the longevity of individuals. Mechanisms acting at different scales appear to be equally important. For instance, the neighborhood effect, the relative social position of individuals or the global level of inequality in society are also important factors to take into account.
From these examples, it is quite easy to see the modeling challenges brought about by the new paradigm of the second demographic transition. And yet, there is also an urgent need for complex population models, for a better understanding of the observed data, as well as to serve as an alternative when empirical testing is not possible.

Modeling complex population evolutions

Before the 1980s, demographic models were principally focused on the macro level, and used aggregate data to produce average indicators. In view of the previous considerations, producing a pertinent modeling directly at the macro level appears to be a more and more complicated, if not impossible, task. Hence, demographic models have increasingly shifted towards a finer-grained modeling of the population over the last decades. There is thus an intrinsic interest in describing the variability and heterogeneity of the population on a more detailed level, in order to obtain macro outcomes by aggregation, to be used for forecasting/projections and/or policy recommendations, or in a broader sense for the analysis of social and economic policies. Over the last two decades, the increase of computing power and improvements in numerical methods have made it possible to study rich heterogeneous individual models. Indeed, a wide variety of models simulating individual behavior have been developed for different purposes and used in different domains. In this section, we give an overview of two types of models widely used in demography: standard microsimulation models (MSMs) and agent-based models (ABMs), both of which derive from the idea of Orcutt (1957) (see [START_REF] Morand | Demographic modelling: the state of the art[END_REF]).

4.1 Dynamic microsimulation

4.1.1 Microsimulation models

A dynamic microsimulation model provides a simulation tool of individual trajectories in order to obtain macro outcomes by aggregation. It provides a way of combining different processes (biological, cognitive, social) describing the lives of people who evolve over time. One main feature of this class of models is its capacity to interpret macro-level changes, represented by complex macroeconomic quantities (or indicators) (e.g. life expectancy, mortality rates), as resulting from the simulation of the dynamic life courses of individuals, also called micro units. A dynamic microsimulation model, usually relying on an important amount of empirical data, is parametrized with micro-econometrics and statistical methods (Spielauer (2011)).

Examples of microsimulation models

The history of microsimulation in social sciences goes back to the work of [START_REF] Orcutt | A new type of socio-economic system[END_REF], who developed so-called data-driven dynamic microsimulation models. Following the original model of Orcutt, the first large-scale dynamic microsimulation model, called dynasim 21, was developed for the forecasting of the US population up to 2030. This model considered different demographic and economic scenarios, meant to analyse the socioeconomic status and behavior of individuals and families in the US (cost of teenage childbearing for the public sector, unemployment compensations and welfare programs...). Since then, most statistical or demographic government bodies in developed countries have used their own microsimulation models, developed for different purposes. A comprehensive description of various microsimulation models can be found in the surveys of [START_REF] Morand | Demographic modelling: the state of the art[END_REF], Zaidi and Rake (2001) and Li and O'Donoghue (2013). For instance, dynacan in Canada was designed to model the Canada Pension Plan (CPP) and analyze its contributions and benefits at the individual and family level. In Australia, dynamod was developed to carry out a projection of the outlook of the Australian population until the year 2050. In Europe, the micmac project 22 was implemented by a consortium of research centers whose objective is to provide demographic projections concerning the detailed population categories that are required for the design of sustainable (elderly) health care and pension systems in the European Union. The specificity of micmac consists in providing a micro-macro modeling of the population, with micro-level projections that are consistent with the projections made by the macro model. In France, the INSEE developed different versions of a microsimulation model, the current version being destinie 2 (Blanchet et al. (2009)). This model is used for instance to measure the efficiency of reforms of state pension systems, and is based on a representative sample of the national population.

A dynamic microsimulation exercise

A demographic micro-model can be viewed as a population database, which dynamically stores information (characteristics) on all members (individuals) of the heterogeneous population [START_REF] Willekens | Biographic forecasting: bridging the micro-macro gap in population forecasting[END_REF]. [START_REF] Zinn | A Continuous-Time Microsimulation and First Steps Towards a Multi-Level Approach in Demography[END_REF] gives the main steps of a microsimulation exercise, which consist of:

(i) State space and state variables: The state space is composed of all the combinations of the values (attributes) of individuals' characteristics, called state variables. Age, sex, marital status, fertility and mortality status, education or emigration are examples of state variables. An example of a state is given by possible values of the state variables: (Female, Married, 1 Child, Alive, Not emigrated, Lower secondary school) 23.

(ii) Transition rates: Events occurring during the life course of individuals are characterized by individual hazard functions, or individual transition rates/probabilities. Each of the transition probabilities is related to an event, i.e. a change in one of the state variables of the individual.
These probabilities are estimated conditionally on demographic covariates (i.e. explanatory variables such as gender, age, educational attainment, children born, ethnicity), and on other risk factors that affect the rate of occurrence of some events (environmental covariates that provide external information on the common (random) environment) [START_REF] Spielauer | What is social science microsimulation?[END_REF]. In microsimulation models, the covariates are often estimated by using logit models (see [START_REF] Zinn | A Continuous-Time Microsimulation and First Steps Towards a Multi-Level Approach in Demography[END_REF]).

(iii) Dynamic simulation: Dynamic simulation aims at predicting the future state of the population, by making the distinction between events influencing the population itself and those affected by it (population ageing, concentration of wealth, sustainability of social policies...).

(iv) Internal consistency: Microsimulation models can handle links between individuals, which can be qualified as "internal consistency" (Van Imhoff and Post (1998)). Individuals can be grouped together in the database into "families" (for instance, if they are married or related). When a state variable of an individual in the group changes, the state variables of the other members are updated if needed. For example, this can be the case when events such as marriage, divorce or a child leaving the parental home take place.

(v) Output of the microsimulation exercise and representation: The output of a dynamic microsimulation model is a simulated database with longitudinal information, e.g. in the form of individual virtual biographies, viewed as sequences of state variables. The effects of different factors can be revealed more clearly when grouping individuals with life courses embedded in a similar historical context. Usually, individuals are grouped in cohorts (individuals with the same age) or in generations. The aggregation of the individual biographies of the same cohort yields a bottom-up estimate of the so-called cohort biography. Nevertheless, in the presence of interactions, all the biographies have to be simulated simultaneously, which is challenging for large populations.
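As a minimal illustration of steps (i)-(iii), the Python sketch below simulates a toy cohort in discrete time with a single absorbing event (death), whose annual probability is a logistic function of age and education, echoing the logit specifications mentioned above. All coefficients are invented for the example and would in practice be estimated from data.

```python
import math
import random

# Minimal discrete-time microsimulation: one absorbing event (death),
# with a logit transition probability. All coefficients are hypothetical.

def death_probability(age, higher_education):
    """Annual death probability from a logit model on age and education."""
    linear = -11.0 + 0.095 * age - 0.4 * higher_education
    return 1.0 / (1.0 + math.exp(-linear))

def simulate_individual(age, higher_education, horizon=50, rng=random):
    """Simulate one virtual biography; return age at death (or at the horizon)."""
    for _ in range(horizon):
        if rng.random() < death_probability(age, higher_education):
            return age
        age += 1
    return age

rng = random.Random(42)
# Synthetic starting population: everyone aged 50, 30% with higher education.
cohort = [(50, rng.random() < 0.3) for _ in range(100_000)]
ages_at_death = [simulate_individual(a, e, rng=rng) for a, e in cohort]

# Aggregation: a macro indicator obtained bottom-up from micro biographies.
print(sum(ages_at_death) / len(ages_at_death))
```

Re-running the exercise with different seeds exhibits the "inherent randomness" discussed just below: the variability of the aggregate shrinks as the size of the simulated cohort grows.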
Sources of randomness

Microsimulation models are subject to several sources of uncertainty and randomness, which have been discussed in detail in the work of Van Imhoff and Post (1998). The so-called "inherent randomness" is due to the nature of Monte Carlo random experiments (different simulations produce variable sets of outcomes). This type of randomness can be diminished by simulating large populations (increasing the number of individuals in the database) or by repeating random experiments many times and averaging the results, which implies an important computational cost 24. In the presence of interactions, one should be careful, since the two techniques are not equivalent. Such is the case of agent-based models, which are discussed in the next section. The starting population, which is the initial database for the microsimulation model, can be either a sample of the population based on survey data, or a synthetic population created by gathering data from different information sources. This initial population is subject to random variations and sampling errors. Moreover, at the individual level, the state variables and the covariates must be known before starting the simulation, and their joint distribution within the initial database is random. Van Imhoff and Post (1998) note that any deviation of the sample distribution may impact future projections. These previous sources of randomness can be mitigated by increasing the size of the database, and are probably less important in comparison with the so-called specification randomness. The outputs of a microsimulation model can be subject to a high degree of randomness when an important number of covariates are included. Indeed, there are calibration errors resulting from the estimation on empirical data of each relationship between the probabilities of the model and the covariates. Moreover, each additional covariate requires an extra set of Monte Carlo experiments, with a corresponding increase in Monte Carlo randomness. The specification randomness can be reduced by using sorting or alignment methods, a calibration technique that consists in selecting the simulated life courses in such a way that the micro model respects some macro properties, including the property of producing the expected values. In destinie 2, this alignment is ensured by adjusting the individual transition rates to obtain annual numbers of births, deaths and migrations consistent with some macro projections [START_REF] Blanchet | The Destinie 2 microsimulation model: overview and illustrative results[END_REF].

Discussion

In many cases, behaviors are more stable or better understood on the micro level than on aggregated levels that are affected by structural changes when the number or size of the micro-units in the population changes. Thus, microsimulation models are well suited to explain processes resulting from the actions and interactions of a large number of micro-units. For instance, according to Spielauer (2011), an increase of graduation rates 25 at the macro level can lie entirely in the changing composition of the parents' generations, and not necessarily in a change of individuals' behaviors. In order to produce more micro-level explanations for population change, microsimulation models require an increasing amount of high quality data to be collected. [START_REF] Silverman | Feeding the beast: can computational demographic models free us from the tyranny of data?[END_REF] point out the "over-dependence on potentially immense sets of data" of microsimulation models and the expensive data collection required to provide inputs for those models. Most of the time, only large entities such as national or international institutions are able to complete this demanding task. The size of the samples also has an impact on the run time of the model: the larger the sample, the longer the run time, which results in a trade-off. [START_REF] Silverman | Feeding the beast: can computational demographic models free us from the tyranny of data?[END_REF] argue in favour of the use of more abstract computational models rather than highly data-driven research. More recently, agent-based models (ABM), which also derive from individual-based models, have been increasingly applied in various areas to analyze macro-level phenomena generated by micro units. These models emphasize interactions between individuals through behavioral rules and individual strategies. In this context, Zinn (2017) stressed the importance of incorporating behavioral rules through ABM models (e.g. kinship, mate matching models), since demographic microsimulation is well suited for population projection only if the model considers independent entities.
4.2 Agent-based models (ABM)

What is an ABM?

The main purpose of agent-based models (ABM) is to explain macroscopic regularities by replicating the behavior of complex, real-world systems with dynamical systems of interacting agents, based on the so-called bottom-up approach ([START_REF] Tesfatsion | Agent-based computational economics: Growing economies from the bottom up[END_REF], [START_REF] Billari | Agent-Based Computational Modelling: Applications in Demography[END_REF]). An ABM consists basically of the simulation of interactions of autonomous agents, i.e. independent individuals (which can be households, organizations, companies or nations, depending on the application). As in microsimulation, agents are defined by their attributes. Each single agent is also defined by behavioral rules, which can be simple or complex (e.g. utility optimization, complex social patterns...), deterministic or stochastic, on whose basis she/he interacts with other agents and with the simulated environment (Morand et al. (2010), Billari (2006)).

A dynamic exercise of an ABM

The key defining feature of an ABM is the interaction between heterogeneous individuals. Moreover, an agent-based model is grounded on a dynamic simulation, which means that agents adapt dynamically to changes in the simulated environment. They act and react with other agents in this environment at different spatial and temporal scales [START_REF] Billari | Agent-Based Computational Modelling: Applications in Demography[END_REF]. This contrasts with microsimulation models (MSM), which rely on transition rates that are determined a priori (and once). Agent-based models are based on rules, or heuristics, which can be either deterministic or stochastic, and which determine the decision-making process. For example, in an agent-based marriage market model, the appropriate partner can be chosen as the one whose education level is the most similar to that of the considered agent, or as the one with an ideal age difference [START_REF] Billari | The "Wedding-Ring": An agent-based marriage model based on social interaction[END_REF]. Besides, in comparison with microsimulation models, which operate on a realistic scale (real data) but use very simple matching algorithms (often a Monte Carlo "roll the dice" styled decision rule), agent-based models use small and artificial data sets, but show more complexity in modeling how the agents view and choose partners.
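The contrast between a "roll the dice" transition and a behavioral rule can be made concrete. The Python sketch below implements a deliberately simple marriage-market heuristic in the spirit of the models just mentioned (it is our own illustration, not a reimplementation of any published model): each agent prefers the available partner minimizing a dissimilarity score combining education distance and deviation from an ideal age gap; the weights and the sequential matching procedure are invented for the example.

```python
import random
from dataclasses import dataclass

# Toy agent-based matching heuristic; weights and rules are hypothetical.

@dataclass
class Agent:
    ident: int
    age: int
    education: int  # e.g. years of schooling

IDEAL_AGE_GAP = 2   # hypothetical preferred age difference
EDU_WEIGHT = 1.0    # hypothetical weight of educational similarity
AGE_WEIGHT = 0.5    # hypothetical weight of the age criterion

def dissimilarity(a, b):
    """Behavioral rule: a lower score means a more attractive match."""
    return (EDU_WEIGHT * abs(a.education - b.education)
            + AGE_WEIGHT * abs(abs(a.age - b.age) - IDEAL_AGE_GAP))

def match(proposers, candidates):
    """Sequential matching: each proposer picks the best available candidate."""
    matches, available = [], list(candidates)
    for p in proposers:
        if not available:
            break
        best = min(available, key=lambda c: dissimilarity(p, c))
        available.remove(best)
        matches.append((p.ident, best.ident))
    return matches

rng = random.Random(1)
men = [Agent(i, rng.randint(20, 40), rng.randint(8, 20)) for i in range(50)]
women = [Agent(100 + i, rng.randint(20, 40), rng.randint(8, 20)) for i in range(50)]
print(match(men, women)[:5])
```

Even this crude rule tends to produce assortative mating on education at the aggregate level, an emergent regularity of the kind ABMs are designed to exhibit from local interactions.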
Applications

The applications of ABM range from social, economic or political sciences to demography [START_REF] Billari | Agent-Based Computational Modelling: Applications in Demography[END_REF]. For instance, [START_REF] Tesfatsion | Agent-based computational economics: Growing economies from the bottom up[END_REF] used agent-based computational economics (ACE) in order to model decentralized economic markets through the interaction of autonomous agents. In demography, ABM are used in [START_REF] Diaz | Transition to Parenthood: The Role of Social Interaction and Endogenous Networks[END_REF] to explain trends in fertility by simple local interactions, in order to solve the difficult problem of age-specific projection of fertility rates. [START_REF] Billari | The "Wedding-Ring": An agent-based marriage model based on social interaction[END_REF] developed an ABM based on the interaction between heterogeneous potential partners, which typically takes place in the marriage market (partnership formation), called the "Wedding Ring" model. The purpose of this model is to study the age pattern of marriage using a bottom-up approach. This model was implemented using the software package NetLogo (Wilensky (1999)), which is designed for constructing and exploring multi-level systems 26. [START_REF] Burke | The Strength of Social Interactions and Obesity among Women[END_REF] suggested the use of an agent-based model to explain the differences in obesity rates between women with different educational attainment in the United States. The model integrates biologically complex agents (variation of women's metabolism) interacting within a social group, and is able to reproduce the fact that better educated women experience on average lower weights and a smaller dispersion of weights. For more examples of agent-based model applications, we refer to the work of [START_REF] Morand | Demographic modelling: the state of the art[END_REF], which details different examples of ABM in spatial demography, family demography and historical demography. The book of [START_REF] Billari | Agent-Based Computational Modelling: Applications in Demography[END_REF] also presents various applications of agent-based computational modeling, in particular in demography.

Limitations

The design of an agent-based model requires a certain level of expertise in the determination of behavioral rules. Furthermore, when modeling large systems (a large number of agents), the computational time rises considerably. Indeed, ABM are not designed for extensive simulations. The parameters of an agent-based model can be either calibrated using accurate data, or examined through sensitivity analysis incorporating some level of comparison with actual data. For instance, [START_REF] Hills | Population heterogeneity and individual differences in an assortative agent-based marriage and divorce model (madam) using search with relaxing expectations[END_REF] compare the results of their Agent-Based Marriage and Divorce Model (MADAM) to real age-at-marriage distributions. But the outputs of an agent-based model also depend on the "internal" structure of the model, determined by the behavioral rules (Gianluca (2014)). Consequently, the strategies to calibrate parameters, and to overcome the problem of dependency on the model's structure, rely on the available empirical information. It is important to note that ABMs are designed to focus on process-related factors or on the demonstration of emergent properties, rather than to make projections.

Conclusion

The interest of dynamic microsimulation is to constitute both a modeling exercise and an exercise in running the model and experimenting with it (Spielauer (2011)). In addition to helping to test theories or to picture the future, the exercise may be used as a simulator by policy makers (or citizens), or for a better assessment of the impact of public policies. The results/outputs of microsimulation models are population projections rather than forecasts: they describe what would happen if the assumptions and scenarios chosen were to prove correct, rather than what the future will probably be. The discussion on demographic modeling has shown that microsimulation models (MSM) strongly depend on data [START_REF] Silverman | Feeding the beast: can computational demographic models free us from the tyranny of data?[END_REF]. They thus face pragmatic challenges in collecting and cleaning data, in addition to the different sources of randomness discussed above. In parallel with the spread of microsimulation models, there is a growing interest in agent-based models, which are suited to model complex systems that take full account of interactions between heterogeneous agents. The major difficulty in using agent-based models is the absence of a theoretical model: there is no codified set of recommendations or practices on how to use these models within a program of empirical research, which rests essentially on the cognition and expertise of the developer. In this context, new hybrid applications (combining MSM and ABM) have recently been proposed in the literature. For instance, [START_REF] Grow | Agent-Based Modelling in Population Studies: Concepts, Methods, and Applications[END_REF] present many examples that combine MSM and ABM in demographic models. These new models aim at describing the heterogeneous movements, interactions and behaviors of a large number of individuals within a complex social system at a fine spatial scale. For instance, Zinn (2017) uses a combination of MSM and ABM for modeling individual and couple life courses by integrating social relations and interactions. The efficiency of these combined and "sophisticated" models in overcoming the loopholes of the simpler models is an open issue.

General conclusion and perspectives

Facing all these modeling challenges, we advocate the development of a new mathematical theoretical framework for the modeling of complex population dynamics in demography. As we have seen in Section 3, a number of questions cannot be answered by the sole study of data, and models allow us to generate and experiment with various scenarios, so as to test theories or causal links for instance. Theoretical models can help us "to escape from the tyranny of data", as claimed by [START_REF] Silverman | Feeding the beast: can computational demographic models free us from the tyranny of data?[END_REF]. On the other hand, empirical evidence points out a number of key issues which cannot be overlooked, and which demonstrate the "inextricable complexity" of the dynamic modeling of realistic human populations.
Variables such as mortality or fertility rates are by no means stationary; populations are more and more heterogeneous, with socioeconomic inequality playing an important role at several levels (individual, neighborhood and societal); interactions between individuals and their environment are bidirectional. These are just a few examples illustrating the complexity of the modeling. An adapted mathematical framework could contribute significantly to a better understanding of aggregation issues and to finding adequate policy recommendations, in line with this new paradigm of heterogeneity and non-linearity. More specifically, theoretical models often allow us to reduce complexity by deriving and/or justifying approximations in population dynamics. By changing the point of view, data can also be represented differently, thus permitting us to go beyond what is usually done. The historical analysis of these two centuries of demographic transitions shows that populations have experienced dramatic changes and upheavals. But we can also see that a number of phenomena and timescales present remarkable regularities. These profound regularities, or "fundamental causes", have been noted by several authors, in very different contexts. In our opinion, the identification and understanding of these regularities or cycles is fundamental. Age is also a critical dimension when studying human population dynamics. The age structure of a population generates a lot of complexity in the representation and statistical analysis of data. This so-called Age-Period-Cohort (APC) problem has been well documented in the statistical literature, and should be a main focus in the dynamical modeling of populations. Furthermore, the human life cycle is composed of very different periods, with transition rates of a different order and phenomena of a different nature at each stage. Understanding how to take this heterogeneity in age into account is a critical point. The notion of age itself changes over time. Individuals seem to have rejuvenated, in the sense that today's 65-year-olds are "much younger" than individuals of the same age thirty years ago. Thus, the shift in paradigm observed in recent demographic trends has highlighted a number of new issues which force us to reconsider many aspects of the traditional modeling of human populations. Multiple questions are still open, with difficult challenges ahead, but also exciting perspectives for the future.
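The APC identification problem mentioned above can be stated compactly. In the standard additive formulation (this is the textbook presentation, not tied to any particular model discussed here), a mortality surface indexed by age a and period t is decomposed additively, and the exact linear dependency between age, period and cohort makes the decomposition non-unique:

```latex
% Standard additive Age-Period-Cohort decomposition of a mortality surface
\log m(a,t) = f(a) + g(t) + h(c), \qquad c = t - a .
% Identification problem: for any constant \delta, the reparametrization
\tilde f(a) = f(a) - \delta a, \qquad
\tilde g(t) = g(t) + \delta t, \qquad
\tilde h(c) = h(c) - \delta c
% leaves the fitted surface unchanged, since -\delta a + \delta t - \delta c = 0.
```

Any linear trend can thus be shifted arbitrarily between the age, period and cohort components, which is why additional constraints or structural assumptions are needed before the three effects can be interpreted separately.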
Figure 1: Preston curves, 1900, 1930, 1960, reproduced from [START_REF] Preston | The changing relation between mortality and level of economic development[END_REF]. (The national income per head was converted into 1963 U.S. dollars.)
Figure 2: Preston curve in 2000, reproduced from Deaton (2003).
Figure 3: Relationship between median county income and standardized mortality rates among working-age individuals, reproduced from Wilkinson and Pickett (2009), Figure 11.
Notes

17. The French longitudinal survey is the Échantillon démographique permanent (EDP).
18. The RII is an index of inequality which takes into account differences in mortality as well as the population's composition; see [START_REF] Regidor | Measures of health inequalities: part 2[END_REF] for details on the computation of the index.
19. Income inequality was measured in each country as the ratio of the income of the poorest 20% to that of the richest 20%.
20. The measure of inequality was based on the Gini coefficient of household income.
21. Since then, updated versions were developed, with for example DYNASIM3 in 2004 (Li and O'Donoghue (2013)).
22. The micmac project is documented in [START_REF] Willekens | Biographic forecasting: bridging the micro-macro gap in population forecasting[END_REF].
23. The state variables Dead and Emigrated are considered as absorbing states, i.e. once they have been entered, they will never be left again.
24. Various techniques to accelerate Monte Carlo simulation coupled with variance reduction have been developed in many areas.
25. The graduation rate represents the estimated percentage of people who will graduate from a specific level of education over their lifetime.
26. Multi-level agent-based models integrate different levels (complementary points of view) of representation of agents with respect to time, space and behavior.

Data sources and other notes: The report is available on the website of BNF. Life expectancy data: Office for National Statistics (ONS); high income country classification based on 2014 GNI per capita from the World Bank; United Nations, Department of Economic and Social Affairs, Population Division (2015), World Population Prospects: The 2015 Revision, custom data acquired via website. The baby boom affected several countries such as France, the United Kingdom or the United States, although with different timings, from the early 1950s to the 1970s.
92,749
[ "777326", "1029917" ]
[ "542130", "542130", "89626" ]
01745909
en
[ "shs" ]
2024/03/05 22:32:07
2018
https://shs.hal.science/halshs-01745909/file/Rahman_The%20Logic%20of%20Reasons%20and%20EndorsementWeis28March.pdf
inferential pragmatism. However, there are also some significant differences that are at the center of the dialogical approach to meaning. The present paper does not discuss phenomenology explicitly; however, one might see our proposal as setting the basis for a further study linking phenomenology and the dialogical conception of meaning, the development of such a link being part of several ongoing research projects.

respectively as, on the one hand, must-requests (commitments or obligations) and, on the other, may-requests (or entitlements or rights), as follows: 1

[…] So, let's call them rules of interaction, in addition to inference rules in the usual sense, which of course remain in place as we are used to them. […] Now let's turn to the request mood. And then it's simplest to begin directly with the rules, because the explanation is visible directly from the rules.
So, the rules that involve request are these, that if someone has made an assertion, then you may question his assertion; the opponent may question his assertion.

(Req1)    ⊢ C
         ─────── (may)
          ? ⊢ C

Now we have an example of a rule where we have a may. The other rule says that if we have the assertion ⊢ C, and it has been challenged, then the assertor must execute his knowledge how to do C. […]

(Req2)    ⊢ C    ? ⊢ C
         ─────────────── (must)
             ⊢ C′

In relation to the third condition of Brandom, endorsement, it involves the use of assertions brought forward by the interlocutor. In this context, Göran Sundholm (2013, p. 17) produced the following proposal, which embeds Austin's remark (1946, p. 171) on assertion acts in the context of inference:

When I say therefore, I give others my authority for asserting the conclusion, given theirs for asserting the premisses.

Herewith, the assertion of one of the interlocutors entitles the other one to endorse it. Moreover, in recent lectures, Per [START_REF] Martin-Löf | Is Logic Part of Normative Ethics? Lecture Held at the research Unity Sciences, Normes, Décisions (FRE 3593)[END_REF] used this dialogical perspective in order to escape a form of circle threatening the explanation of the notions of inference and demonstration. A demonstration may indeed be explained as a chain of (immediate) inferences starting from no premisses at all. That an inference

J1 … Jn
────────
    J

is valid means that one can make the conclusion (judgement J) evident on the assumption that J1, …, Jn are known. Thus the notion of epistemic assumption appears when explaining what a valid inference is. According to this explanation, however, we cannot take 'known' in the sense of demonstrated, or else we would be explaining the notion of inference in terms of demonstration when demonstration has been explained in terms of inference. Hence the threatening circle. In this regard, Martin-Löf suggests taking 'known' here in the sense of asserted, which yields epistemic assumptions as judgements others have made, judgements whose responsibility others have already assumed. An inference being valid would accordingly mean that, given that others have assumed responsibility for the premisses, I can assume responsibility for the conclusion. So, again, it is the dialogical take on endorsement which is at stake here, and it amounts to the following: whatever reason the Opponent has for stating some elementary assertion authorizes the Proponent to use it himself. In other words, whatever reason the Opponent adduces for some elementary assertion, the Proponent can take it as rendering the asserted proposition true, and can therefore now use the same reason for defending his own assertion of that proposition. In doing so, the Proponent attributes knowledge to the Opponent when the latter asserts some elementary proposition, but the Opponent does not: he is trying to build a counterargument after all. Thus, the dialogical framework already seems to offer a formal system where the main features of Brandom's epistemological games can be rendered explicit. However, the system so far does not make explicit the reasons behind an assertion. In order to do so we need to incorporate into the dialogical framework expressions standing for those reasons. This requires combining dialogical logic with Per Martin-Löf's Constructive Type Theory (1984) in a more thorough way.
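To see how the must/may distinction operates procedurally, the following Python sketch encodes the two request rules as a tiny dialogue-state machine. This rendering is our own illustration, not part of Martin-Löf's or Brandom's formalism: an assertion entitles the other player to challenge it (may), and a challenged assertion obliges the assertor to execute it (must).

```python
# Illustrative encoding of the two request rules as a dialogue-state
# machine; the class and method names are ours, for illustration only.

class Dialogue:
    def __init__(self):
        self.asserted = set()     # propositions C with |- C on record
        self.challenged = set()   # propositions whose assertion was questioned
        self.executed = set()     # obligations that have been discharged

    def assert_(self, c):
        self.asserted.add(c)

    def may_challenge(self, c):
        """(Req1): a standing assertion *may* be questioned."""
        return c in self.asserted

    def challenge(self, c):
        if not self.may_challenge(c):
            raise ValueError(f"no assertion of {c!r} to question")
        self.challenged.add(c)

    def must_execute(self, c):
        """(Req2): once |- C has been challenged, the assertor *must* execute C."""
        return c in self.challenged and c not in self.executed

    def execute(self, c):
        self.executed.add(c)

d = Dialogue()
d.assert_("C")
assert d.may_challenge("C")   # entitlement of the opponent
d.challenge("C")
assert d.must_execute("C")    # obligation of the assertor
d.execute("C")
```

The asymmetry is visible in the code: `may_challenge` merely reports an entitlement, while `must_execute` flags an outstanding obligation that the assertor has to discharge.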
We call the result of such an enrichment of the expressive power of the dialogical framework dialogues for immanent reasoning, precisely because the reasons backing a statement, now explicit denizens of the object-language of plays, are internal to the development of the dialogical interaction itself; see Rahman/McConaughey/Klev/Clerbout (2018). However, despite the undeniable links of the dialogical framework to both CTT and Brandom's inferentialist approach to meaning, there are also some significant differences that are at the center of the dialogical conception of meaning, namely the identification of a level of meaning, the play level, that does not reduce to the proof-theoretical one. We will start by presenting the main features of dialogues for immanent reasoning, and then we will come back to the general philosophical discussion on the play level as the core of what is known as dialogue-definiteness. The present paper does not explicitly discuss phenomenology; however, Mohammad Shafiei developed in his thesis, Intentionnalité et signification: Une approche dialogique, a thorough study of the bearing of the dialogical framework on phenomenology. Nevertheless, his work did not deploy the new development we call immanent reasoning. So, one might see our proposal as setting the basis for a further study linking phenomenology and the dialogical conception of meaning.

Local reasons

Recent developments in dialogical logic show that the Constructive Type Theory approach to meaning is very natural to game-theoretical approaches in which (standard) metalogical features are explicitly displayed at the object-language level. This vindicates, albeit in quite a different fashion, Hintikka's plea for the fruitfulness of game-theoretical semantics in the context of epistemic approaches to logic, semantics, and the foundations of mathematics (cf. Hintikka 1973).

From the dialogical point of view, the actions, such as choices, that the particle rules associate with the use of logical constants are crucial elements of their full-fledged (local) meaning: if meaning is conceived as constituted during interaction, then all of the actions involved in the constitution of the meaning of an expression should be made explicit; that is, they should all be part of the object-language. This perspective roots itself in Wittgenstein's remark according to which one cannot position oneself outside language in order to determine the meaning of something and how it is linked to syntax; in other words, language is unavoidable: this is his Unhintergehbarkeit der Sprache. According to this perspective of Wittgenstein's, language-games are supposed to accomplish the task of studying language from a perspective that acknowledges its internalized feature. This is what underlies the approach to meaning and syntax of the dialogical framework, in which all the speech-acts that are relevant for rendering the meaning and the "formation" of an expression are made explicit. In this respect, the metalogical perspective, which is so crucial for model-theoretic conceptions of meaning, does not provide a way out. It is in such a context that Lorenz writes:

Also propositions of the metalanguage require the understanding of propositions, […] and thus cannot in a sensible way have this same understanding as their proper object. The thesis that a property of a propositional sentence must always be internal, therefore amounts to articulating the insight that in propositions about a propositional sentence this same propositional sentence does not express a meaningful proposition anymore, since in this case it is not the propositional sentence that is asserted but something about it. Thus, if the original assertion (i.e., the proposition of the ground-level) should not be abrogated, then this same proposition should not be the object of a metaproposition […]. (Lorenz 1970, pp. 75 and 109; translated from the German by Shahid Rahman)

Aarne Ranta (1988) was the first to link game-theoretical approaches with CTT. Ranta took Hintikka's (1973) Game-Theoretical Semantics (GTS) as a case study, though his point does not depend on that particular framework: in game-based approaches, a proposition is a set of winning strategies for the player stating the proposition, and the notion of truth is at the level of such winning strategies. Ranta's idea should therefore in principle allow us to apply, safely and directly, instances of game-based methods taken from CTT to the pragmatist approach of the dialogical framework.

From the perspective of a general game-theoretical approach to meaning, however, reducing a proposition to a set of winning strategies is quite unsatisfactory. This is particularly clear in the dialogical approach, in which different levels of meaning are carefully distinguished: there is indeed the level of strategies, but there is also the level of plays in the analysis of meaning, which can be further analysed into local, global and material levels. The constitutive role of the play level for developing a meaning explanation has been stressed by Kuno Lorenz in his (2001) paper:

Fully spelled out it means that for an entity to be a proposition there must exist a dialogue game associated with this entity, i.e., the proposition A, such that an individual play of the game where A occupies the initial position, i.e., a dialogue D(A) about A, reaches a final position with either win or loss after a finite number of moves according to definite rules: the dialogue game is defined as a finitary open two-person zero-sum game. Thus, propositions will in general be dialogue-definite, and only in special cases be either proof-definite or refutation-definite or even both which implies their being value-definite. Within this game-theoretic framework […] truth of A is defined as existence of a winning strategy for A in a dialogue game about A; falsehood of A respectively as existence of a winning strategy against A. (Lorenz 2001)

Given the distinction between the play level and the strategy level, and deploying within the dialogical framework the CTT explicitation programme, it seems natural to distinguish between local reasons and strategic reasons: only the latter correspond to the notion of proof-object in CTT and to the notion of strategic object of Ranta.

In order to develop such a project we enrich the language of the dialogical framework with statements of the form "p : A". In such expressions, what stands on the left-hand side of the colon (here p) is what we call a local reason; what stands on the right-hand side of the colon (here A) is a proposition (or set). The local meaning of such statements results from the rules describing how to compose (synthesis), within a play, the suitable local reasons for the proposition A, and how to separate (analysis) a complex local reason into the elements required by the composition rules for A. The synthesis and analysis processes of A are built on the formation rules for A. The most basic contribution of a local reason is its contribution to a dialogue involving an elementary proposition. Informally, we can say that if the Proponent P states the elementary proposition A, it is because P claims that he can bring forward a reason in defence of his statement; it is this reason that provides content to the proposition.

Local meaning and local reasons

Statements in dialogues for immanent reasoning

Dialogues are games of giving and asking for reasons; yet in the standard dialogical framework, the reasons for each statement are left implicit and do not appear in the notation of the statement: we have statements of the form X ! A, for instance, where A is an elementary proposition. The framework of dialogues for immanent reasoning makes it possible to state explicitly the reason for making a statement: statements then have the form X a : A, for instance, where a is the (local) reason X has for stating the proposition A. But even in dialogues for immanent reasoning not all reasons are made explicit, and sometimes statements have only implicit reasons for bringing the proposition forward, taking then the same form as in the standard dialogical framework: X ! A. Notice that when (local) reasons are not explicit, an exclamation mark is added before the proposition: the statement then has an implicit reason for being made. A statement is thus both a proposition and its local reason, but this reason may be left implicit, requiring then the use of the exclamation mark.

Adding concessions

In the context of the dialogical conception of CTT we also have statements of the form

  X ! φ(x₁, …, xₙ) [xᵢ : Aᵢ]

where φ(x₁, …, xₙ) stands for some statement in which the variables x₁, …, xₙ occur, and where [xᵢ : Aᵢ] stands for some condition under which the statement φ(x₁, …, xₙ) has been brought forward. Thus, the statement reads: X states φ(x₁, …, xₙ) under the condition that the antagonist concedes xᵢ : Aᵢ. We call required concessions the statements of the form [xᵢ : Aᵢ] that condition a claim. When the statement is challenged, the antagonist is accepting, through his own challenge, to bring such concessions forward. The concessions of the thesis, if any, are called initial concessions. Initial concessions can include formation statements such as A : prop and B : prop for the thesis A⊃B : prop.
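The shape of these statements can be fixed with a small data model. The following Python sketch is our own illustration; the class and field names are assumptions made for the example, not notation from the paper:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass(frozen=True)
    class Statement:
        player: str                        # 'P' or 'O'
        proposition: str                   # e.g. 'A' or 'A⊃B'
        reason: Optional[str] = None       # local reason; None renders as 'X ! A'
        concessions: Tuple[str, ...] = ()  # required concessions '[x : A]'

        def __str__(self) -> str:
            core = (f"{self.reason} : {self.proposition}" if self.reason
                    else f"! {self.proposition}")
            conditions = "".join(f" [{c}]" for c in self.concessions)
            return f"{self.player} {core}{conditions}"

    print(Statement('O', 'A', 'p1'))                        # O p1 : A
    print(Statement('P', 'B(x) : prop', None, ('x : A',)))  # P ! B(x) : prop [x : A]

The rendering mirrors the conventions just introduced: an explicit local reason stands to the left of the colon, and the exclamation mark marks a statement whose reason is left implicit.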
Formation rules for local reasons: an informal overview

It is presupposed in standard dialogical systems that the players use well-formed formulas (wff). Well-formedness can be checked at will, but only with the usual metareasoning by which one verifies that the formula does indeed observe the definition of a wff. We want to enrich our CTT-based dialogical framework by allowing the players themselves to enquire, within a play, on the formation of the components of a statement. We thus start with dialogical rules explaining the formation of statements involving logical constants (the formation of elementary propositions is governed by the Socratic rule; see the discussion above on material truth). In this way, the well-formedness of the thesis can be examined by the Opponent before running the actual dialogue: as soon as she challenges it, she is de facto accepting the thesis to be well formed (the most obvious case being the challenge of an implication, where she has to state the antecedent and thus explicitly endorse it). The Opponent can ask for the formation of the thesis before launching her first challenge; defending the formation of his thesis might for instance bring the Proponent to state that the thesis is a proposition provided, say, that it is conceded that A is a set; the Opponent might then concede that A is a set, but only after the constitution of A has been established, though if this were the case we would be considering the constitution of an elementary statement, which is a material consideration, not a formal one.

These rules for the formation of statements with logical constants are also particle rules, which are added to the set of particle rules determining the local meaning of logical constants (called synthesis and analysis of local reasons in the framework of dialogues for immanent reasoning). These considerations yield the following condensed presentation of the logical constants (plus falsum), in which "K" in "AKB" expresses a connective and "Q" in "(Qx : A) B(x)" expresses a quantifier.

Formation rules, condensed presentation

Connective:
  Move:      X AKB : prop
  Challenge: Y ?F_K1 and/or Y ?F_K2
  Defence:   X A : prop (resp.) X B : prop

Quantifier:
  Move:      X (Qx : A) B(x) : prop
  Challenge: Y ?F_Q1 and/or Y ?F_Q2
  Defence:   X A : set (resp.) X B(x) : prop (x : A)

Falsum:
  Move:      X ⊥ : prop
  Challenge: none (this statement cannot be challenged)

Synthesis of local reasons

The synthesis rules for local reasons determine how to produce a local reason for a statement; they include rules of interaction indicating how to produce the local reason that is required by the proposition (or set) in play, that is, they indicate what kind of dialogical action (what kind of move) must be carried out, by whom (challenger or defender), and what reason must be brought forward.

Implication. For instance, the synthesis rule of a local reason for the implication A⊃B stated by player X indicates:
i. that the challenger Y must state the antecedent (while providing a local reason for it): Y p₁ : A;
ii. that the defender X must respond to the challenge by stating the consequent (with its corresponding local reason): X p₂ : B.

In other words, the rules for the synthesis of a local reason for implication are as follows:

  Move:      X ! A⊃B
  Challenge: Y p₁ : A
  Defence:   X p₂ : B

Notice that the initial statement (X ! A⊃B) does not display a local reason for the claim that the implication holds: player X simply states that he has some reason supporting the claim. We express this kind of move by adding an exclamation mark before the proposition. The further dialogical actions indicate the moves required for producing a local reason in defence of the initial claim.

Conjunction. The synthesis rule for the conjunction is straightforward:

  Move:      X ! A∧B
  Challenge: Y ?L∧ or Y ?R∧
  Defence:   X p₁ : A (resp.) X p₂ : B

Disjunction. For disjunction, as we know from the standard rules, it is the defender who chooses which side he wishes to defend: the challenge consists in requesting of the defender that he choose the side he will be defending:

  Move:      X ! A∨B
  Challenge: Y ?∨
  Defence:   X p₁ : A or X p₂ : B

The general structure for the synthesis of local reasons. More generally, the rules for the synthesis of a local reason for a constant K are determined by a triplet of the same shape: the move X ! φ, by which X claims φ (whose main operator is K), the challenge prescribed by K, and the defence producing a suitable local reason, exactly as in the three cases just presented.

Analysis of local reasons

Apart from the rules for the synthesis of local reasons, we need rules that indicate how to parse a complex local reason into its elements: this is the analysis of local reasons. In order to deal with the complexity of these local reasons and to formulate general rules for their analysis (at the play level), we introduce certain operators that we call instructions, such as L∨(p) or R∧(p).

Approaching the analysis rules for local reasons. Let us introduce these instructions and the analysis of local reasons with an example: player X states the implication (A∧B)⊃B. According to the rule for the synthesis of local reasons for an implication, we obtain the following:

  Move:      X ! (A∧B)⊃B
  Challenge: Y p₁ : A∧B

Recall that the synthesis rule prescribes that X must now provide a local reason for the consequent; but instead of defending his implication (with X p₂ : B, for instance), X can choose to parse the reason p₁ provided by Y in order to force Y to provide a local reason for the right-hand side of the conjunction, which X will then be able to copy; in other words, X can force Y to provide the local reason for B out of the local reason p₁ for the antecedent A∧B of the initial implication. The analysis rules prescribe how to carry out such a parsing of the statement by using instructions. The rule for the analysis of a local reason for the conjunction p₁ : A∧B thus indicates that its defence includes expressions such as

- the left instruction for the conjunction, written L∧(p₁), and
- the right instruction for the conjunction, written R∧(p₁).

These instructions can be informally understood as carrying out the following step: for the defence of the conjunction p₁ : A∧B, separate the local reason p₁ into its left (or right) component so that this component can be adduced in defence of the left (or right) side of the conjunction. Here is a play with local reasons for the thesis (A∧B)⊃B, using instructions:

  0.  P ! (A∧B)⊃B     (thesis)
  1.  O m := 1
  2.  P n := 2
  3.  O p₁ : A∧B      (challenge on move 0)
  4.  P ?R∧           (challenge on move 3)
  5.  O R∧(p₁) : B    (defence against move 4)
  6.  P R∧(p₁) : B    (defence against move 3)
  P wins.

In this play, P uses the analysis of local reasons for conjunction in order to force O to state R∧(p₁) : B, that is, to provide a local reason for the elementary statement B; P can then copy that local reason in order to back his own statement of B, the consequent of his initial implication. With these local reasons, we explicitly have in the object-language the reasons that are given and asked for, and which constitute the essence of an argumentative dialogue.
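The winning strategy to which this play belongs has, as will be developed below, a CTT proof-object as its strategic counterpart. As a side illustration of ours (not the paper's own notation), the object can be checked in Lean:

    -- P's copying of O's right component corresponds to the second projection:
    example {A B : Prop} : A ∧ B → B :=
      fun p => p.right

The function does literally what P did in the play: given O's reason p for A ∧ B, it extracts the right component and adduces it for B.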
The general structure for the analysis rules of local reasons follows the same Move-Challenge-Defence pattern: the statement of a complex local reason is challenged by requesting one of its components, and it is defended by adducing the corresponding instruction.

Interaction procedures embedded in instructions

Carrying out the prescriptions indicated by instructions requires the following three interaction procedures:

1. Resolution of instructions: this procedure determines how to carry out the instructions prescribed by the rules of analysis and thus provide an actual local reason.
2. Substitution of instructions: this procedure ensures that once a given instruction has been carried out through the choice of a local reason, say b, then every time the same instruction occurs it will be substituted by the same local reason b.
3. Application of the Socratic rule: the Socratic rule prescribes how to constitute equalities out of the resolution and substitution of instructions, linking synthesis and analysis together.

Let us discuss how these rules interact and how they lead to the main thesis of this study, namely that immanent reasoning is equality in action.

From reasons to equality: a new visit to endorsement

One of the most salient features of dialogical logic is the so-called Socratic rule (also called the Copy-cat rule, or the rule for the formal use of elementary propositions in the standard, that is, non-CTT, context), establishing that the Proponent can play an elementary proposition only if the Opponent has played it previously. The Socratic rule is a characteristic feature of the dialogical approach: other game-based approaches do not have it, and it relates to the endorsing condition mentioned in the introduction. With this rule the dialogical framework comes with an internal account of elementary propositions: an account in terms of interaction only, without depending on metalogical meaning explanations for the non-logical vocabulary. The rule has a clear Platonist and Aristotelian origin and sets the terms for what it is to carry out a formal argument: see for instance Plato's Gorgias (472b-c). We can sum up the underlying idea with the following statement: there is no better grounding of an assertion within an argument than indicating that it has already been conceded by the Opponent or that it follows from these concessions. (Recent research deploying the dialogical framework for the study of the history of logic argues that this rule is central to the interpretation of dialectic as the core of Aristotle's logic; see Crubellier (2014, pp. 11-40) and Marion's study "Aristotle on universal quantification: a study from the perspective of game semantics".)

What should be stressed here are the following two points:
1. formality is understood as a kind of interaction; and
2. formal reasoning should not be understood here as devoid of content and reduced to purely syntactic moves.

Both points are important in order to understand the criticism often raised against formal reasoning in general, and in logic in particular. It is only quite late in the history of philosophy that formal reasoning was reduced to syntactic manipulation; presumably the first explicit occurrence of the syntactic view of logic is Leibniz's "pensée aveugle" (though Leibniz's notion was not a reductive one). Plato's and Aristotle's notions of formal reasoning are neither "static" nor "empty of meaning". In the Ancient Greek tradition, logic emerged from an approach to assertions in which meaning and justification result from what has been brought forward during argumentative interaction. According to this view, dialogical interaction is constitutive of meaning.
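The Copy-cat restriction at the heart of the Socratic rule can be stated operationally. The following Python sketch is our own toy rendering; the helper and its crude elementarity test are assumptions made for the illustration:

    # A statement counts as elementary here if it contains no logical constant.
    def is_elementary(proposition: str) -> bool:
        return not any(c in proposition for c in '∧∨⊃¬∀∃')

    # Socratic rule: P may state an elementary proposition only if O already has.
    def socratic_ok(move, history) -> bool:
        player, statement = move
        if player != 'P' or not is_elementary(statement):
            return True
        return any(pl == 'O' and st == statement for pl, st in history)

    history = [('O', 'a : A')]
    print(socratic_ok(('P', 'a : A'), history))   # True: O stated it first
    print(socratic_ok(('P', 'b : B'), history))   # False: no O-authority to copy

Nothing in the check is about syntax alone: what it records is which player carries the authority for the elementary statement.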
Some former interpretations of standard dialogical logic did understand formal plays in a purely syntactic manner. The reason for this is that the standard version of the framework is not equipped to express meaning at the object-language level: there is no way of asking and giving reasons for elementary propositions. As a consequence, the standard formulation simply relies on a syntactic understanding of Copy-cat moves, that is, moves entitling P to copy the elementary propositions brought forward by O, regardless of their content. The dialogical approach to CTT (dialogues for immanent reasoning), however, provides a fine-grained study of the contentual aspects involved in formal plays, much finer than the one provided by the standard dialogical framework.

In dialogues for immanent reasoning, which we are now presenting, a statement is constituted both by a proposition and by the (local) reason brought forward in defence of the claim that the proposition holds. In formal plays, not only is the Proponent allowed to copy an elementary proposition stated by the Opponent, as in the standard framework, but he is also allowed to adduce in defence of that proposition the same local reason brought forward by the Opponent when she defended that same proposition. Thus immanent reasoning and equality in action are intimately linked. In other words, a formal play displays the roots of the content of an elementary proposition, and not a syntactic manipulation of that proposition.

Statements of definitional equality emerge precisely at this point. In particular, reflexivity statements such as p = p : A express, from the dialogical point of view, the fact that if O states the elementary proposition A, then P can do the same, that is, play the same move and do it on the same grounds which provide the meaning and justification of A, namely p. These remarks provide an insight only into simple forms of equality and barely touch upon the finer-grained distinctions discussed above; we will be moving to these by means of a concrete example in which we show, rather informally, how the combination of the processes of analysis, synthesis, and resolution of instructions leads to equality statements.

Example

Assume that the Proponent brings forward the thesis (A∧B)⊃(B∧A):

  0. P ! (A∧B)⊃(B∧A)    (thesis)

Both players then choose their repetition ranks:

  1. O m := 1
  2. P n := 2

O must now challenge the implication if she accepts to enter into the discussion. The rule for the synthesis of a local reason for implication (provided above) stipulates that in order to challenge the thesis, O must state the antecedent and provide a local reason for it:

  3. O p : A∧B    (challenge on move 0)

According to the same synthesis rule, P must now state the consequent, which he is allowed to do because the consequent is not elementary:

  4. P q : B∧A    (defence of move 0)

The Opponent launches her challenge asking for the left component of the local reason q provided by P, an application of the rule for the analysis of a local reason for a conjunction described above:

  5. O ?L∧    (challenge on move 4)

Assume that P responds immediately to this challenge:

  6. P L∧(q) : B    (defence of move 4)

O will now ask for the resolution of the instruction:

  7. O ?…/L∧(q)    (challenge on move 6)

In this move 7, O is asking P to carry out the instruction L∧(q) by bringing forward the local reason of his choice. The act of choosing such a reason and replacing the instruction with it is called resolving the instruction. In this case, resolving the instruction will lead P to bring forward an elementary statement, that is, a statement in which both the local reason and the proposition are elementary, which falls under the restriction of the Socratic rule. The idea for P then is to postpone his answer to the challenge launched with move 7 and to force O to choose a local reason first, so as to copy it in his answer to the challenge. This yields a further application of the analysis rule for the conjunction, now on O's local reason p, followed by a resolution request by P:

  8.  P ?R∧          (challenge on move 3)
  9.  O R∧(p) : B    (defence of move 3)
  10. P ?…/R∧(p)     (challenge on move 9)
  11. O b : B        (O resolves the instruction R∧(p) with b)

Move 11 thus provides P with the information he needed: he can then copy O's choice to answer the challenge she launched at move 7:

  12. P b : B        (P resolves L∧(q) with O's choice b, answering move 7)

P wins.

Note: it should be clear that a similar end will come about if O starts by challenging the right component of the conjunction statement instead of the left component.

Analysis of the example

Let us now go deeper into the analysis of the example and make explicit what happened during the play. When O resolves R∧(p) with the local reason b (for instance) and P resolves the instruction L∧(q) with the same local reason, then P is not only stating b : B, but he is also grounding that statement in O's resolution. In other words, the definitional equality R∧(p)^O = b : B that provides content to B makes explicit at the object-language level that an application of the Socratic rule has been initiated and achieved by means of dialogical interaction.

The development of a dialogue determined by immanent reasoning thus includes four distinct stages:
1. applying the rules of synthesis to the thesis;
2. applying the rules of analysis;
3. launching the resolution and substitution of instructions;
4. applying the Socratic rule.
To these we can add a fifth stage: producing the strategic reason. While the first two steps involve local meaning, step 3 concerns global meaning and step 4 requires describing how to produce a winning strategy. Now that the general idea of local reasons has been provided, we will present in the next chapter all the rules together, according to their level of meaning.

The dialogical roots of equality: dialogues for immanent reasoning

In this section we will spell out a simplified version of dialogues for immanent reasoning, that is, the dialogical framework incorporating features of Constructive Type Theory: a dialogical framework making the players' reasons for asserting a proposition explicit. The rules can be divided, just as in the standard framework, into rules determining local meaning and rules determining global meaning. These include, for local meaning, the formation rules and the rules for the synthesis and analysis of local reasons, and, for global meaning, the structural rules. We will be presenting these rules in this order in the next two sections, along with the adaptation of the other structural rules to dialogues for immanent reasoning in the second section.

Local meaning in dialogues for immanent reasoning

The formation rules

The formation rules for logical constants and for falsum are those given in the condensed presentation above. Notice that a statement '⊥ : prop' cannot be challenged; this is the dialogical account of falsum '⊥' being by definition a proposition.

Subst-D (substitution within dependent statements):

  Move:      X π(x₁, …, xₙ) [xᵢ : Aᵢ]
  Challenge: Y τ₁ : A₁, …, τₙ : Aₙ
  Defence:   X π(τ₁, …, τₙ)

In the formulation of this rule, π is a statement and τᵢ is a local reason of the form either aᵢ : Aᵢ or xᵢ : Aᵢ.
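The type-theoretical content of Subst-D is plain substitution. The following Lean gloss is our own illustration, with the dependent statement modeled as a propositional function (an assumption of the example, not the paper's notation):

    -- B models the dependent statement B(x) : prop [x : A];
    -- the challenger's a : A substitutes in and yields the defence B(a).
    example {A : Type} (B : A → Prop) (a : A) : Prop := B a

For instance, if X states B(x) : prop [x : A], the challenger brings forward a : A, and X must answer with B(a) : prop.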
A particular case of the application of Subst-D is when the challenger simply chooses the same local reasons as those occurring in the concessions of the initial statement; this case is particularly useful in formation plays.

The rules for local reasons: synthesis and analysis

Now that the dialogical account of formation rules has been clarified, we may further develop our analysis of plays by introducing local reasons. Let us do so with the rules that prescribe the synthesis and analysis of local reasons; their tables follow the Move-Challenge-Defence pattern of the informal overview given above, which may be consulted for the details of each rule.

Anaphoric instructions: dealing with cases of anaphora

One of the most salient features of the CTT framework is that it contains the means to deal with cases of anaphora. For example, anaphoric expressions are required for formalizing Barbara in CTT. In the following CTT formalization of Barbara, the projection fst(z) can be seen as the tail of the anaphora whose head is z:

  (∀z : (∃x : D)A) B[fst(z)]  true    premise 1
  (∀z : (∃x : D)B) C[fst(z)]  true    premise 2
  --------------------------------------------
  (∀z : (∃x : D)A) C[fst(z)]  true    conclusion

In dialogues for immanent reasoning, when a local reason has been made explicit, this kind of anaphoric expression is formalized through instructions, which provides a further reason for introducing them. For example, if p is the local reason for the first premise we have

  P p : (∀z : (∃x : D)A(x)) B(L∃(L∀(p)^O))

However, since the thesis of a play does not bear an explicit local reason (we use the exclamation mark to indicate there is an implicit one), it is possible for a statement to be bereft of an explicit local reason. When there is no explicit local reason for a statement using anaphora, we cannot bind the instruction L∀(p)^O to a local reason p. We thus have something like this, with a blank space instead of the anaphoric local reason:

  P ! (∀z : (∃x : D)A(x)) B(L∃(L∀( )^O))

But this blank stage can be circumvented: the challenge on the universal quantifier will yield the required local reason: O will provide a : (∃x : D)A(x), which is the local reason for z. We can therefore bind the instruction on the missing local reason with the corresponding variable, z in this case, and write

  P ! (∀z : (∃x : D)A(x)) B(L∃(L∀(z)^O))

We call this kind of instruction anaphoric instructions. For the substitution of anaphoric instructions, the following two cases are to be distinguished.

Substitution of anaphoric instructions 1. Given some anaphoric instruction such as L∀(z)^Y, once the quantifier (∀z : A)B(…) has been challenged by the statement a : A, the occurrence of L∀(z)^Y can be substituted by a. The same applies to other instructions. In our example we obtain:

  P ! (∀z : (∃x : D)A(x)) B(L∃(L∀(z)^O))
  O a : (∃x : D)A(x)
  P b : B(L∃(L∀(z)^O))
  O ? a / L∀(z)^O
  P b : B(L∃(a))
  …

Substitution of anaphoric instructions 2. Given some anaphoric instruction such as L∀(z)^Y, once the instruction L∀(c), resulting from an attack on the universal z : …, has been resolved with a : …, then any occurrence of L∀(z)^Y can be substituted by a. The same applies to other instructions.
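To see that the anaphoric machinery matches the CTT original, here is our own Lean rendering of Barbara, with the subtype {x : D // A x} standing in for the CTT existential and .val for the projection fst (the encoding is an assumption of the example):

    -- Every A is B, every B is C; hence every A is C.
    -- z.val plays the role of fst(z), the tail of the anaphora whose head is z.
    example {D : Type} {A B C : D → Prop}
        (h1 : ∀ z : {x : D // A x}, B z.val)
        (h2 : ∀ z : {x : D // B x}, C z.val) :
        ∀ z : {x : D // A x}, C z.val :=
      fun z => h2 ⟨z.val, h1 z⟩

The proof term re-packages the head of the given pair with the evidence obtained from the first premise, exactly the substitution pattern described above.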
Global meaning in dialogues for immanent reasoning

We here provide the structural rules for dialogues for immanent reasoning, which determine the global meaning in such a framework. They are for the most part similar in principle to those of the standard dialogical framework; the rules concerning instructions are an addition for dialogues for immanent reasoning.

Structural rules

SR0: Starting rule
The start of a formal dialogue of immanent reasoning is a move where P states the thesis. The thesis can be stated under the condition that O commits herself to certain other statements called initial concessions; in this case the thesis has the form ! A [B₁, …, Bₙ], where A is a statement with implicit local reason and B₁, …, Bₙ are statements with or without implicit local reasons. A dialogue with a thesis proposed under some conditions starts if and only if O accepts these conditions. O accepts the conditions by stating the initial concessions in moves numbered 0.1, …, 0.n before choosing the repetition ranks. After having stated the thesis (and the initial concessions, if any), each player chooses in turn a positive integer called the repetition rank, which determines the upper boundary for the number of attacks and of defences each player can make in reaction to each move during the play.

SR1: Development rule
The development rule depends on the logic chosen: if the game uses intuitionistic logic, then SR1i applies; if classical logic is used, then SR1c applies.

SR1i: Intuitionistic development rule, or Last Duty First
Players play one move alternately. Any move after the choice of repetition ranks is either an attack or a defence according to the rules of formation, of synthesis, and of analysis, and in accordance with the rest of the structural rules. Players can only answer the last non-answered challenge of the adversary.
Note: this structural rule is known as the Last Duty First condition, and it makes dialogical games suitable for intuitionistic logic, hence the name of the rule.

SR1c: Classical development rule
Players play one move alternately. Any move after the choice of repetition ranks is either an attack or a defence according to the rules of formation, of synthesis, and of analysis, and in accordance with the rest of the structural rules. If the logical constant occurring in the thesis is not recorded by the table for local meaning, then either it must be introduced by a nominal definition, or the table for local meaning needs to be enriched with the new expression.
Note: the structural rules with SR1c (and not SR1i) produce strategies for classical logic. The point is that since players can answer a list of challenges in any order (which is not the case with the intuitionistic rule), it might happen that the two options of a P-defence occur in the same play; this is closely related to the classical development rule in sequent calculus allowing more than one formula at the right of the sequent. The difference between the two development rules is illustrated by the sketch following SR2 below.

SR2: Formation rules for formal dialogues
SR2i: Starting a formation dialogue. A formation play starts by challenging the thesis with the formation request O ?prop; P must answer by stating that his thesis is a proposition.
SR2ii: Developing a formation dialogue. The game then proceeds by applying the formation rules up to the elementary constituents of prop/set. After that, O is free to use the other particle rules insofar as the other structural rules allow it.
Note: the constituents of the thesis will therefore not be specified before the play but as a result of the structure of the moves (according to the rules for local meaning).
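To make the difference between SR1i and SR1c concrete, here is a small Python sketch of ours; the data layout, a list of still-open challenges identified by move numbers, is an assumption of the example:

    # SR1i (Last Duty First): only the last open challenge may be answered.
    def legal_defence_intuitionistic(open_challenges, target) -> bool:
        return bool(open_challenges) and target == open_challenges[-1]

    # SR1c (classical): any open challenge may be answered, in any order.
    def legal_defence_classical(open_challenges, target) -> bool:
        return target in open_challenges

    open_challenges = [3, 5]    # O's challenges from moves 3 and 5 are open
    print(legal_defence_intuitionistic(open_challenges, 3))  # False: 5 came last
    print(legal_defence_classical(open_challenges, 3))       # True

The classical freedom of order is what may let the two options of a P-defence occur within one and the same play, as noted above.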
SR3: Resolution of instructions
A player may ask the adversary to carry out an instruction occurring in one of the adversary's statements; the request has the form ? …/I. The challenged player then resolves the instruction by bringing forward a local reason of his choice and substituting it for the instruction.

SR4: Substitution of instructions
Once the local reason b has been used to resolve the instruction I^K(p)^X, and if the same instruction occurs again, players have the right to require that the instruction be resolved with b. The substitution request has the form ? b/I^K(p)^X. Players cannot choose a different substitution term (in our example, not even X, once the instruction has been resolved). This rule also applies to functions.

SR5: Socratic rule and definitional equality
The following points are all parts of the Socratic rule; they all apply.

SR5.1: Restriction of P statements
P cannot make an elementary statement if O has not stated it before, except in the thesis. An elementary statement is either an elementary proposition with implicit local reason, or an elementary proposition and its local reason (not an instruction).

SR5.2: Challenging elementary statements in formal dialogues
Challenges of elementary statements with implicit local reasons take the form

  Move:      X ! A
  Challenge: Y ?reason
  Defence:   X a : A

where A is an elementary proposition and a is a local reason. P cannot challenge O's elementary statements, except if O provides an elementary initial concession with implicit local reason, in which case P can ask for a local reason, or in the context of transmission of equality.

SR5.3: Definitional equality
O may challenge elementary P-statements; P then answers by stating a definitional equality expressing the equality between a local reason and an instruction both introduced by O (for non-reflexive cases, that is, when O provided the local reason as the resolution of an instruction), or a reflexive equality of the local reason introduced by O (when the local reason was not introduced by the resolution of an instruction, that is, when it was introduced either as such in the initial concessions or as the result of a synthesis of a local reason). We thus distinguish two cases of the Socratic rule:
1. non-reflexive cases;
2. reflexive cases.
These rules do not cover cases of transmission of equality. The Socratic rule also applies to the resolution or substitution of functions, even if the formulation mentions only instructions.

SR5.3.1: Non-reflexive cases of the Socratic rule
We are in the presence of a non-reflexive case of the Socratic rule when P responds to the challenge with the indication that O gave the same local reason for the same proposition when she had to resolve or substitute an instruction I. The challenges and defences determining these moves accordingly require P to answer with a definitional equality of the form I = a : A, where a is the local reason O chose when resolving or substituting I. The P-statements obtained after defending elementary P-statements cannot be attacked again with the Socratic rule (with the exception of SR5.3.1c), nor with a rule of resolution or substitution of instructions.

SR5.3.2: Reflexive cases of the Socratic rule
We are in the presence of a reflexive case of the Socratic rule when P responds to the challenge with the indication that O adduced the same local reason for the same proposition, though that local reason in O's statement is not the result of any resolution or substitution. The attacks have the same form as those prescribed by SR5.3.1. Responses that yield reflexivity presuppose that O has previously made the same statement or even stated the same equality. The response obtained cannot be attacked again with the Socratic rule.

Definitional equality transmits by reflexivity, transitivity, and symmetry.
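The bookkeeping behind SR3, SR4 and SR5 can be pictured as a table of resolved instructions. A minimal Python sketch of ours (the names are assumed for the example):

    resolutions = {}    # instruction -> the local reason it was resolved with

    def resolve(instruction: str, local_reason: str) -> None:
        # SR4: once resolved, the same instruction admits no other term.
        if resolutions.get(instruction, local_reason) != local_reason:
            raise ValueError(f"SR4 violated: {instruction} already resolved "
                             f"with {resolutions[instruction]}")
        resolutions[instruction] = local_reason

    resolve('R∧(p)', 'b')      # O resolves the instruction with b (SR3)
    resolve('R∧(p)', 'b')      # repeating the same resolution is fine
    print(resolutions)         # {'R∧(p)': 'b'}

The recorded entry is precisely what grounds P's definitional equality R∧(p) = b : B under SR5.3.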
Content and material dialogues

As pointed out by Krabbe (1985, p. 297), material dialogues, that is, dialogues in which propositions have content, receive in the writings of Paul Lorenzen and Kuno Lorenz priority over formal dialogues: material dialogues constitute the locus where the logical constants are introduced. However, in the standard dialogical framework, since both material and formal dialogues marshal a purely syntactic notion of the formal rule (through which logical validity is defined), this contentual feature is bypassed, with the consequence that Krabbe and others after him considered that, after all, formal dialogues had priority over material ones.

As can be gathered from the above discussion, we believe that this conclusion stems from shortcomings of the standard framework, in which local reasons are not expressed at the object-language level. We thus explicitly introduced these local reasons in order to undercut this apparent precedence of a formalistic approach that does away with the contentual origins of the dialogical project. And yet the Socratic rule, as defined in the preceding sections of our study, leaves the introduction of local reasons entirely to the Opponent (the Proponent only being allowed to endorse what the Opponent introduced). Since this rule applies to any proposition (or set), it can be considered a formal rule. So if we are to specify the rules for material reasoning, to use Peregrin's (2014, p. 228) apt terminology, the rules specifying the elementary propositions involved in a dialogue must also be defined: whereas in the structural rules for formal dialogues of immanent reasoning only the Socratic rules dealt with elementary statements, and without providing any specification of those statements besides the simple fact that it must be the Opponent who introduces them in the dialogue, the structural rules for material dialogues of immanent reasoning will have both Socratic rules, that is, player-dependent rules for elementary statements specific to the very statement at stake, and global rules, that is, player-independent rules for elementary statements, likewise specific to those statements (thus providing the material level). In fact, a local reason prefigures, in principle, a material dialogue displaying the content of the proposition stated.

This aspect makes up the ground level of the normative approach to meaning of the dialogical framework, in which use, or dialogical interaction, is to be understood as use prescribed by a rule; such a use is what Peregrin (2014, pp. 2-3) calls the role of a linguistic expression. Dialogical interaction is this use, entirely determined by rules that give it meaning: the linguistic expression of every statement determines this statement by the role it plays, that is, by the way it is used, and this use is governed by rules of interaction. The meaning of elementary propositions in dialogical interaction thus amounts to their role in the kind of interaction that is governed by the Socratic and global rules for material dialogues, that is, by the specific formulations of the Socratic and global rules for precisely those very propositions. It follows that material dialogues are important not only for the general issue of the normativity of logic but also for rendering a language with content. We cannot develop these kinds of dialogues in this paper; we invite the reader to visit the chapter on material dialogues in Rahman/McConaughey/Klev/Clerbout (2018), where we sketch the main features of material dialogues, including dialogues involving sets of natural numbers and the set Bool. The latter allows for expressing classical truth-functions within the dialogical framework, and it has an important role in the CTT approach to empirical propositions. The final section of that chapter discusses the epistemological notion of the internalization of content. In this respect, the dialogical framework can be considered as a formal approach to reasoning rooted in the dialogical constitution and "internalization" of content, including empirical content, rather than in the syntactic manipulation of uninterpreted signs.

This discussion of material dialogues provides a new perspective on Wilfrid Sellars' (1991, pp. 129-194) notion of the Space of Reasons: the dialogical framework of immanent reasoning enriched with the material level should show how to integrate world-directed thoughts (displaying empirical content) into an inferentialist approach, thereby suggesting that immanent reasoning can integrate within the same epistemological framework the two conflicting readings of the Space of Reasons brought forward by John McDowell (2009, pp. 221-238) on the one hand, who insists on distinguishing world-directed thought and knowledge gathered by inference, and Robert Brandom (1997) on the other hand, who interprets Sellars' work in a more radically anti-empiricist manner. The point is not only that we can deploy the CTT distinction between reason as a premise and reason as a piece of evidence justifying a proposition, but also that the dialogical framework allows for distinguishing between the objective justification level targeted by Brandom (1997, p. 129) and the subjective justification level stressed by McDowell. According to our approach, the subjective feature corresponds to the play level, where a concrete player brings forward the statement "It looks red to me" rather than "It is red". The general epistemological upshot of these initial reflections is that, on our view, many of the worries about the interpretation of the Space of Reasons and about the shortcomings of the standard dialogical approach to meaning (beyond that of the logical constants) have their origin in the neglect of the play level.

Strategic reasons in dialogues for immanent reasoning

The conceptual backbone on which the metalogical properties of the dialogical framework rest is the notion of strategic reason, which makes it possible to adopt a global view on all the possible plays that constitute a strategy. However, this global view should not be identified with the perspective common in proof theory: strategic reasons are a kind of recapitulation of what can happen for a given thesis, and they show the entire history of the play by means of the instructions. Strategic reasons thus yield an overview of the possibilities enclosed in a thesis (what plays can be carried out from it), but without ever being carried out in an actual play: they are only a perspective on all the possible variants of plays for a thesis, and not an actual play. In this way the rules of synthesis and analysis of strategic reasons provided below are not of the same nature as the analysis and synthesis of local reasons: they are not produced through challenges and their defences, but are a recapitulation of the plays that can actually be carried out.
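The global view can be made concrete with a toy game tree. The following Python sketch of ours checks whether P has a winning strategy: at O-nodes every O-choice must be answerable, at P-nodes some P-choice must do (the encoding of nodes is an assumption of the example):

    # A node is (player_to_move, list_of_children); a player who cannot move loses.
    def p_wins(node) -> bool:
        player, children = node
        if not children:
            return player == 'O'     # O is to move and is stuck: P wins this play
        if player == 'O':
            return all(p_wins(c) for c in children)   # every O-choice must be won
        return any(p_wins(c) for c in children)       # some P-choice suffices

    # The two plays for a conjunction: O asks left or right, P defends either way.
    tree = ('O', [('P', [('O', [])]),      # O asks ?L∧, P defends, O is stuck
                  ('P', [('O', [])])])     # O asks ?R∧, P defends, O is stuck
    print(p_wins(tree))                    # True: both plays are won

The quantifier alternation, all over O-nodes and any over P-nodes, is exactly the sense in which a strategy surveys every play without being any single one of them.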
The notion of strategic reason enables us to link dialogical strategies with CTT demonstrations, since strategic reasons (and not local reasons) are the dialogical counterpart of CTT proof-objects; but it also shows clearly that the strategy level by itself, the only level that proof theory considers, is not enough: a deeper insight is gained when considering, together with the strategy level, the fundamental level of plays. Strategic reasons thus bridge these two perspectives, the global view of strategies and the more in-depth and down-to-earth view of actual plays with all the possible variations in logic they allow, without sacrificing the one for the other. This vindication of the play level is a key aspect of the dialogical framework and one of the purposes of the present study: other logical frameworks lack this dimension, which, besides, is not an extra dimension appended to the concern for demonstrations but actually constitutes it; the heuristic procedure for building strategies out of plays shows the gapless link between the play level and the strategy level: strategies (and so demonstrations) stem from plays. Thus the dialogical framework can say at least as much as other logical frameworks and, additionally, reveals limitations of other frameworks through this level of plays.

Introducing strategic reasons

Strategic reasons belong to the strategy level, but they are elements of the object-language of the play level: they are the reasons brought forward by a player entitling him to his statement. Strategic reasons are a perspective on plays that takes into account all the possible variations in the play for a given thesis; they are never actually carried out, since any play is but the actualization of only one of all the possible plays for the thesis: each individual play can be actualized but will be separate from the other individual plays that could be carried out if other choices were made; strategic reasons make it possible to see together all these possible plays that are in fact always separate. There will never be, in any of the plays, the complex strategic reason for the thesis as a result of the application of the particle rules, only the local reasons for each of the subformulas involved; the strategic reason puts all these separate reasons together as a recapitulation of what can be said from the given thesis.

Consider for instance a conjunction: the Proponent claims to have a strategic reason for this conjunction. This means that he claims that whatever the Opponent might play, be it a challenge of the left or of the right conjunct, the Proponent will be able to win the play. But in a single play with repetition rank 1 for the Opponent, there is no way to check whether a conjunction is justified, that is, whether both of the conjuncts can be defended, since a play is precisely the carrying out of only one of the possible O-choices (challenging the left or the right conjunct): to check both sides of a conjunction, two plays are required, one in which the Opponent challenges the left side of the conjunction and another one for the right side. So a strategic reason is never a single play, but refers to the strategy level where all the possible outcomes are taken into account; the winning strategy can then be displayed as a tree showing that both plays (respectively challenging and defending the left conjunct and the right conjunct) are won by the Proponent, thus justifying the conjunction. Let us now study what strategic reasons look like, how they are generated, and how they are analyzed.
A strategic perspective on a statement

In the standard framework of dialogues, where we do not explicitly have the reasons for the statements in the object-language, the particle rules simply determine the local meaning of the expressions. In dialogues for immanent reasoning, the reasons entitling one to a statement are explicitly introduced; the particle rules (synthesis and analysis of local reasons) govern both the local reasons and the local meaning of expressions. But when building the core of a winning P-strategy, local reasons are also linked to the justification of the statements, which is not the case when considering single plays or non-winning strategies, for then only one aspect of the statement may be taken into account during the play, the play thus providing only a partial justification.

Take again the example of a P-conjunction, say P w : A∧B. In providing a strategic reason w for the conjunction A∧B, P is claiming to have a winning strategy for this conjunction, that is, he is claiming that the conjunction is absolutely justified, that he has a proper reason for asserting it and not simply a local reason for stating it. Assuming that O has a repetition rank of 1 and has stated both A and B prior to move i, two different plays can be carried out from this point. Stating the conjunction (given here without the strategic reason):

  … (concessions, thesis, ranks m := 1 and n := 2, moves up to i-1)
  i. P ! A∧B

Left decision option for O:

  i+1. O ?∧1    (challenge on move i)
  i+2. P ! A    (defence)

Right decision option for O:

  i+1. O ?∧2    (challenge on move i)
  i+2. P ! B    (defence)

So if P brings forward the strategic reason w to support his conjunction at move i, he is claiming to be able to win both of these plays, and yet the actual play will follow only one of the two courses. Strategic reasons are thus a strategic perspective on a statement that is brought forward during actual plays.

An anticipation of the play and the strategy as recapitulation

Since a strategic reason (w for instance) is brought forward during a play (say at move i), it is clear that the play has not yet been fully carried out when the player claims to be able to defend his statement against whatever challenge his opponent might launch: bringing forward a strategic reason is thus an anticipation of the outcome of the play. But strategic reasons are not a simple claim to have a winning strategy; they also have a complex internal structure: they can thus be considered as recapitulations of the plays of the winning strategy produced by the heuristic procedure, that is, the winning strategy obtained only after running all the relevant plays. This strategy-building process specific to the dialogical framework is a richer process than the one yielding CTT demonstrations, or proof theory in general, since the strategic reasons contain traces of choice-dependences, which constitute their complexity. Choice-dependences link possible moves of a player to the choices made by the other player: a player will play this move if his opponent used this decision option, and that move if the opponent used that decision option. In the example above, the Proponent will play move i+2 depending on the Opponent's decision at move i+1, so the strategic reason w played at move i contains these two possible scenarios, with the i+2 P-move depending on the i+1 O-decision. The strategic reason w is thus a recapitulation of what would happen if each relevant play were carried out.
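The internal structure of such a recapitulation can be displayed as a mapping from O's decision options to P's answers. A Python sketch of ours (the local reasons a and b are hypothetical):

    # The strategic reason w records the choice-dependence p2^P⟦p1^O⟧:
    w = {'?∧1': 'a',    # if O asks for the left conjunct, P answers with a
         '?∧2': 'b'}    # if O asks for the right conjunct, P answers with b

    def p_answer(o_decision: str) -> str:
        return w[o_decision]

    print(p_answer('?∧1'), p_answer('?∧2'))   # a b : both courses of play are won

A single play actualizes only one entry of the table; the strategic reason is the table itself.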
When the strategic reason makes this choice-dependence of P's moves on those of O clearly explicit, we say that it is in canonical argumentation form and that it is a recapitulation of the statement. The rules for strategic reasons do not provide rules on how to play, but rather rules that indicate how a winning strategy has been achieved while applying the relevant rules at the play level. Strategic reasons emerge as the result of considering the optimal moves for a winning strategy: this is what a recapitulation is about. The canonical argumentation form of strategic reasons is closely linked to the synthesis and analysis of local reasons: strategic reasons provide the recapitulation of all the relevant local reasons that could be generated from a statement. In this respect, following the rules for the synthesis and analysis of local reasons, the rules for strategic reasons are divided into the synthesis and the analysis of strategic reasons, to which we will now turn. In a nutshell, the synthesis of strategic reasons provides a guide to what P needs to be able to defend in order to justify his claim; the analysis of strategic reasons provides a guide to the local reasons P needs to make O state, in order to copy these reasons and thus defend his own statement.

Assertions and statements

The difference between local reasons and strategic reasons should now be clear: while local reasons provide a local justification entitling one to a statement, strategic reasons provide an absolute justification of the statement, which thereby becomes an assertion. The equalities provided in each of the plays constituting a P-winning strategy, and found in the analysis of strategic objects, convey the information required for P to play in the best possible way by specifying those O-moves that are necessary for P's victory. This information, however, is not available at the very beginning of the first play; it is not made explicit at the root of the tree containing all the plays relevant for the P-winning strategy: the root of the tree will not explicitly display the information gathered while developing the plays; this information will be available only once the whole strategy has been developed and each possible play considered. So when a play starts, the thesis is a simple statement; it is only at the end of the construction process of the strategic reason that P will have the knowledge required to assert the thesis, and thus to provide in any new play a strategic reason backing his thesis. The assertion of the thesis, making explicit the strategic reason resulting from the plays, is in this respect a recapitulation of the result achieved after running the relevant plays, following P's initial simple statement of that thesis. This is what the canonical argumentation form of a strategic object is, and this is what the dialogical formulation of a CTT canonical proof-object amounts to.
It is in this fashion that dialogical reasons correspond to CTT proof-objects: introduction rules are usually characterized as the right to assert the conclusion from the premises of the inference, that is, as defining what one needs in order to be entitled to assert the conclusion; and the elimination rules define what can be inferred from a given statement. Thus, in the dialogical perspective of P-winning strategies, since we are looking at P's entitlements and duties, what corresponds to the introduction rules for proof-objects defines what P is required to justify in order to assert his statement, which is the synthesis of a P-strategic reason; and what corresponds to the elimination rules for proof-objects defines what P is entitled to ask of O from her previous statements, so as then to say it himself by copying her statements, which is the analysis of P-strategic reasons. We will thus provide the rules for the synthesis and analysis of strategic reasons (always in the perspective of a P-winning strategy), followed by their corresponding CTT rules. We have in this regard a good justification of Sundholm's idea that inferences can be considered as involving an implicit interlocutor, though here at the strategy level.

Rules for the synthesis of P-strategic reasons

P-strategic reasons must be built (synthesis of P-strategic reasons); they constitute the justification of a statement by providing certain information, namely choice-dependences, that is essential to the relevant plays issuing from the statement: strategic reasons are a recapitulation of the building of a winning strategy, directly inserted into a play. Thus a strategic reason for a P-statement can have the form p₂^P⟦p₁^O⟧, indicating that P's choice of p₂ is dependent upon O's choice of p₁. Strategic reasons for P are the dialogical formulation of CTT proof-objects, and the canonical argumentation forms of strategic reasons correspond to canonical proof-objects. Since in this section we are seeking a notion of winning strategy that corresponds to that of a CTT demonstration, and since these strategies have been identified as those where P wins, we only provide the synthesis of strategic reasons for P.

The synthesis rules for P-strategic reasons follow, constant by constant, the Move-Challenge-Defence format already used for local reasons. For negation, we must bear in mind that we are considering P-strategies, that is, plays in which P wins, and that we are not providing particle rules with a proper challenge and defence, but adopting a strategic perspective on the reason backing a statement; thus the response to an O-challenge on a negation cannot be P ! ⊥, which would amount to P losing: the strategic reason rather records that it is O who is driven to state ⊥, that is, to give up (see the analysis rules below).

Correspondence between the synthesis of strategic reasons and CTT introduction rules

Since we are considering a P-winning strategy, we are searching for what P needs to justify in order to justify his thesis, which is the point of the synthesis rules for strategic reasons. This search corresponds to the CTT introduction rules, since these determine what one needs in order to carry out an inference. The following correspondences display, for the existential quantifier and the conjunction, the relation between the procedure of synthesis of a strategic reason and the respective introduction rule.

Existential quantification. Synthesis of the P-strategic reason:

  P ! (∃x : A)B(x)
  O ?L∃        P p₁ : A
  O ?R∃        P p₂ : B(p₁)
  P ⟨p₁, p₂⟩ : (∃x : A)B(x)

CTT introduction rule:

  p₁ : A    p₂ : B(p₁)
  --------------------------
  ⟨p₁, p₂⟩ : (∃x : A)B(x)        (∃x : A)B(x) true

Conjunction. Synthesis of the P-strategic reason:

  P ! A∧B
  O ?L∧        P p₁ : A
  O ?R∧        P p₂ : B
  P ⟨p₁, p₂⟩ : A∧B

CTT introduction rule:

  p₁ : A    p₂ : B
  ---------------------
  ⟨p₁, p₂⟩ : A∧B                 A∧B true

Dependences

In the case of material implication and universal quantification, a winning P-strategy literally displays the procedure by which the Proponent chooses the local reason for the consequent depending on the local reason chosen by the Opponent for the antecedent. What the canonical argumentation form of a strategic object does is make explicit the relevant choice-dependence by means of a recapitulation of the plays stemming from the thesis. This corresponds to the general description of proof-objects for material implications and universally quantified formulas in CTT: a method which, given a proof-object for the antecedent, yields a proof-object for the consequent.

P-strategic reasons as recapitulations of procedures of analysis and records of instructions

The analysis of P-strategic reasons focuses on the other essential aspect of P's activity while playing: not determining what he needs in order to justify his statement (that aspect is dealt with by the synthesis of P-strategic reasons), but determining how he will be able to defend his statement through O's statements and through those alone. That is, the analysis of P-strategic reasons is a direct consequence of the Socratic rule: since P must defend his thesis using only the elements provided by O, P must be able to analyze O's statements and find the elements he needs for the justification of his own statements, so as to force O to bring these elements forward during the play. In this regard, the analysis of strategic reasons constitutes the analogue both of the elimination rules of CTT and of the equality rules of a type, to which we now turn.

Two presentations of the relevant rule for falsum can be given. The second of them allows P to back any proposition C with the local reason 'you gave up (n)' once O has stated ⊥ at move n. Thus the strategic reason for any proposition C is constituted by 'you gave up (n)', provided that O has provided P with the means for resolving the instruction L⊃(p). The analysis rules for P-strategic reasons are again given in the Move-Challenge-Defence format.

Correspondence between the analysis of strategic reasons and CTT equality and elimination rules

We will not present here the table of correspondences, since the reader can reconstruct it by emulating the table of correspondences for the procedures of synthesis. Let us only indicate that

  P you gave up (n) ⟦L⊃(p)^P = p₁^(P,O)⟧ : C

corresponds to the CTT elimination rule for absurdity, that is:

  ⊥ true
  ----------
  C true

interpreted as the fact that we shall never get an element of ⊥, defined as the empty set ℕ₀. More precisely, if c : ℕ₀, then the proof-object of C is R₀(c), understood as an "aborted programme":

  c : ℕ₀
  ----------------
  R₀(c) : C(c)

In this respect, the dialogical reading of the abort-operator is that a player gives up, and the reason for the other player to state C is that the antagonist gave up.
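In Lean the same elimination is the (vacuous) recursor on the empty type; our own one-line gloss:

    -- Once the antagonist is committed to falsum, any C may be stated:
    example {C : Prop} (h : False) : C := h.elim

Like R₀, the term h.elim is an aborted programme: it can never be run, because no h : False will ever be produced.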
4 A Plaidoyer for the Play Level

To some extent, the criticisms to which the dialogical approach to logic has been subjected provide an opportunity for clarifying its basic tenets. We will therefore consider here some recent objections raised against the dialogical framework in order to pinpoint some of its fundamental features, whose importance may not have appeared clearly enough through the main body of the paper: namely, dialogue-definiteness, player-independence, and the dialogical conception of proposition. Showing how and why these features have been developed, and specifying their point and the level at which they operate, will enable us to vindicate the play level and thus disarm the objections that have been raised against the dialogical framework for having neglected this crucial level.

We shall first come back to the central notion of dialogue-definiteness and to the dialogical conception of propositions, which are essential for properly understanding the specific role and importance of the play level. We shall then be able to address three objections to the dialogical framework, due to misunderstandings of the notion of the built-in Opponent, of the principles of dialogue-definiteness and of player-independence, and of the reflection on normativity that constitutes the philosophical foundation of the framework; all of these misunderstandings can be reduced to a misappraisal of the play level. We shall then go somewhat deeper into the normative aspects of the dialogical framework, according to the principle that logic has its roots in ethics.

4.1 Dialogue-Definiteness and Propositions

The dialogical theory of meaning is structured in three levels: that of local meaning (determined by the particle rules for the logical constants), that of global meaning (determined by the structural rules), and the strategic level of meaning (determined by what is required for having a winning strategy). The material level of consideration is part of the global meaning, but with particular rules so precise that they determine only one specific expression (through a modified Socratic rule). A characteristic of the local meaning is that its rules are player-independent: the meaning is thus defined in the same fashion for each player, and both players are bound by the same sets of duties and rights when they start a dialogue. This normative aspect is thus constitutive of the play level (which encompasses both local and global meaning): it is even what allows one to judge that a dialogue is taking place. In this regard, meaning is immanent to the dialogue: what constitutes the meaning of the statements in a particular dialogue rests solely on the rules determining interaction (the local and global levels of meaning). The strategy level, on the other hand, is built on the play level, and the notion of demonstration operates on the strategy level (it amounts to having a winning strategy).

Two main tenets of the dialogical theory of meaning can be traced back to Wittgenstein, and they ground in particular the pivotal notion of dialogue-definiteness: 1. the internal feature of meaning (the Unhintergehbarkeit der Sprache 19); and 2. meaning as mediated by language-games.

As for the first Wittgensteinian tenet, the internal feature of meaning, we already mentioned in the introduction that if we relate the notion of internalization of meaning with both language-games and the fully-interpreted languages of CTT, then a salient feature of the dialogical approach to meaning comes to the fore: the expressive power of CTT allows all the actions involved in the dialogical constitution of meaning to be incorporated as an explicit part of the object-language of the dialogical framework.
In relation to the second tenet, the inceptors of the dialogical framework observed that if language-games are to be conceived as mediators of meaning carried out by social interaction, these language-games must be games actually playable by human beings: it must be the case that we can actually perform them, 20 which is captured in the notion of dialogue-definiteness. 21

Dialogue-definiteness is essential for dialogues to be mediators of meaning, but it is also constitutive of what propositions are, as Lorenz clearly puts it:

    […] for an entity to be a proposition there must exist an individual play, such that this entity occupies the initial position, and the play reaches a final position with either win or loss after a finite number of moves according to definite rules. (Lorenz, 2001, p. 258)

A proposition is thus defined in the standard presentation of dialogical logic as a dialogue-definite expression, that is, an expression A such that there is an individual play about A that can be said to be lost or won after a finite number of steps, following given rules of dialogical interaction. 22

The notion of dialogue-definiteness is in this sense the backbone of the dialogical theory of meaning: it provides the basis for implementing the human-playability requirement and the notion of proposition.

19 See Tractatus Logico-Philosophicus, 5.6.
20 As observed by Marion (2006, p. 245), a lucid formulation of this point is the following remark of Hintikka (1996, p. 158), who shared this tenet (among others) with the dialogical framework:
    [Finitism] was for Wittgenstein merely one way of defending the need of language-games as the sense that [sic] they had to be actually playable by human beings. […] Wittgenstein shunned infinity because it presupposed constructions that we human beings cannot actually carry out and which therefore cannot be incorporated in any realistic language-game. […] What was important for Wittgenstein was not just the finitude of the operations we perform in our calculi and other language-games, but the fact that we can actually perform them. Otherwise the entire idea of language-games as meaning mediators will lose its meaning. The language-games have to be humanly playable. And that is not possible if they involve infinitary elements. Thus it is the possibility of actually playing the meaning-conferring language-games that is the crucial issue for Wittgenstein, not finitism as such.
21 The fact that these language-games must be finite does not rule out the possibility of a (potentially) infinite number of them.
22 Lorenz (2001, p. 258).

Dialogue-definiteness sets apart rather decisively the level of strategies from the level of plays, as Lorenz's notion of a dialogue-definite proposition does not amount to a set of winning strategies, but rather to an individual play. Indeed, a winning strategy for a player X is a sequence of moves such that X wins independently of the moves of the antagonist. It is crucial to understand that the qualification "independently of the moves of the antagonist" amounts to the fact that the one claiming A has to play under the restriction of the Copy-cat rule: if possessing a winning strategy for player X involves being in possession of a method (leading to the win of X) allowing X to choose a move for any move the antagonist might play, then we must assume that the propositions brought forward by the antagonist are justified. There is a winning strategy if X can base the moves leading to his win on endorsing himself those propositions whose justification is rooted in Y's authority.
In short, the act of endorsing is what lies behind the so-called Copy-cat rule and structures dialogues for immanent reasoning: it ensures that X can win whatever the contender might bring forward in order to contest A (within the limits set by the game). Furthermore, refuting, that is, bringing up a strategy against A, amounts to the dual requirement: that the antagonist Y possess a method leading to the loss of X ! A, whatever X can bring forward, and that she can do it under the Copy-cat restriction. X ! A is refuted if the antagonist Y can bring up a sequence of moves such that she (Y) can win while playing under the Copy-cat restriction. Refuting is thus different from, and stronger than, contesting: while contesting only requires that the antagonist Y bring forward at least one counterexample in a kind of play where Y does not need to justify her own propositions, refuting means that Y must be able to lead X ! A to its loss, whatever X's justification of his propositions might be.

In this sense, the assumption that every play is a finitary open two-person zero-sum game does not mean that there is either a winning strategy for A or a winning strategy against A: the play level cannot be reduced to the strategy level. For instance, if we play with the Last-duty-first development rule, P will lose the individual plays relevant for the constitution of a strategy for A ∨ ¬A. So A ∨ ¬A is dialogue-definite, though there is no winning strategy for A ∨ ¬A. The distinction between the play level and the strategy level thus emerges from the combination of dialogue-definiteness and the Copy-cat rule.

The classical reduction of strategies against A to the falsity of A (by means of the saddle-point theorem) assumes that the win and the loss of a play reduce to the truth or the falsity of the thesis. But we claim that the existence of the play level, and of a loss in one of the plays, introduces a qualification that is not usually present in the purely proof-theoretic approach. To use the previous example: we know that P does not have a winning strategy for ! A ∨ ¬A (playing under the intuitionistic development rule), but neither will O have one against it if she has to play under the Copy-cat rule herself (notice the switch in the burden of the Copy-cat restriction when refuting a thesis). Let us identify the player who has to play under the Copy-cat restriction, here O, whose moves appear in the left column:

Play against P ! A ∨ ¬A

        O                          P
                                   ! A ∨ ¬A        0
    1   n := 1                     m := 2          2
    3   ?∨ (challenges 0)          ! A             4
                                   P wins

The distinction between the play level and the strategy level can be understood as a consequence of introducing the notion of dialogue-definiteness, which amounts to a win or a loss at the play level, even though, strategically seen, the proposition at stake may be (proof-theoretically) undecidable. Hence, some criticisms of the purported lack of dynamics in dialogical logic are off the mark if they are based on the point that the "games" of dialogical logic are deterministic: 23 plays are deterministic in the sense that they are dialogue-definite, but strategies are not deterministic in the sense that for every proposition there would be either a winning strategy for it or a winning strategy against it.
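The strategic situation around A ∨ ¬A can also be rendered in type-theoretical terms. The following Lean 4 sketch (an illustration of ours, under the usual constructive reading of winning strategies as proofs) exhibits the two facts just mentioned: there is no uniform constructive proof of A ∨ ¬A for arbitrary A, and yet its negation is constructively refutable, so a strategy against it is equally unavailable:

    -- A sketch (Lean 4): no winning strategy for the thesis A ∨ ¬A
    -- (no constructive proof for arbitrary A), but none against it
    -- either, since its negation is constructively refutable:
    theorem no_strategy_against (A : Prop) : ¬¬(A ∨ ¬A) :=
      fun h => h (Or.inr (fun a => h (Or.inl a)))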
Before ending this section, let us quote quite extensively Lorenz (2001), who provides a synopsis of the historical background that led to the introduction of the notion of dialogue-definiteness and to the distinction between the deterministic conception of plays (which obviously operates at the level of plays) and proof-theoretically undecidable propositions (which operate at the level of strategies):

    […] It was Alfred Tarski who, in discussions with Lorenzen in 1957/58, when Lorenzen had been invited to the Institute for Advanced Study at Princeton, convinced him of the impossibility to characterize arbitrary (logically compound) propositions by some decidable generalization of having a decidable proof-predicate or a decidable refutation-predicate. […] It became necessary to search for some decidable predicate which may be used to qualify a linguistic entity as a proposition about any domain of objects, be it elementary or logically compound. Decidability is essential here, because the classical characterization of a proposition as an entity which may be true or false, has the awkward consequence that of an undecided proposition it is impossible to know that it is in fact a proposition. This observation gains further weight by L. E. J. Brouwer's discovery that even on the basis of a set of "value-definite", i.e., decidably true or false, elementary propositions, logical composition does not in general preserve value-definiteness. And since neither the property of being proof-definite nor the one of being refutation-definite nor properties which may be defined using these two, are general enough to cover the case of an arbitrary proposition, some other procedure had to be invented which is both characteristic of a proposition and satisfies a decidable concept. The concept looked for and at first erroneously held to be synonymous with argumentation[24] turned out to be the concept of dialogue about a proposition A (which had to replace the concept of truth of a proposition A as well as the concepts of proof or of refutation of a proposition A, because neither of them can be made decidable).

23 For such criticisms see Trafford (2017, pp. 86-88).
24 Lorenz identifies argumentation rules with rules at the strategy level, and he would like to isolate the interaction displayed by the moves constituting the play level; see Lorenz (2010a, p. 79). We deploy the term argumentation-rule for the request-answer interaction defined by the local and structural rules. It is true that nowadays argumentation-rules have an even broader scope, including several kinds of communicative interaction, and this might produce some confusion about the main goal of the dialogical framework, which is, in principle, to provide an argumentative understanding of logic rather than a logic of argumentation. However, once this distinction has been drawn, nothing prevents developing the interface between the dialogical understanding of logic and the logical structure of a dialogue. In fact, it is our claim that the dialogical conception of logic provides the right venue for studying the logical structure of a dialogue.
    Fully spelled out it means that for an entity to be a proposition there must exist a dialogue game associated with this entity, i.e., the proposition A, such that an individual play of the game where A occupies the initial position, i.e., a dialogue D(A) about A, reaches a final position with either win or loss after a finite number of moves according to definite rules: the dialogue game is defined as a finitary open two-person zero-sum game. Thus, propositions will in general be dialogue-definite, and only in special cases be either proof-definite or refutation-definite or even both which implies their being value-definite. Within this game-theoretic framework where win or loss of a dialogue D(A) about A is in general not a function of A alone, but is dependent on the moves of the particular play D(A), truth of A is defined as existence of a winning strategy for A in a dialogue game about A; falsehood of A respectively as existence of a winning strategy against A. Winning strategies for A count as proofs of A, and winning strategies against A as refutations of A. The meta-truth of "either 'A is true' or 'A is false'" which is provable only classically by means of the saddlepoint theorem for games of this kind may constructively be reduced to the decidability of win or loss for individual plays about A. The concept of truth of dialogue-definite propositions remains finitary, and it will, as it is to be expected of any adequate definition of truth, in general not be recursively enumerable. The same holds for the concept of falsehood which is conspicuously defined independently of negation. (Lorenz, 2001, pp. 257-258)

4.2 The Built-in Opponent and the Neglect of the Play Level

In recent literature, Catarina Duthil Novaes (2015) and James Trafford (2017, pp. 102-105) deploy the term internalization for the proposal that natural deduction can be seen as having an internalized Opponent, thereby motivating the inferential steps. This form of internalization is called the built-in Opponent. The origin of this concept is linked to Göran Sundholm who, by 2000, in order to characterize the fundamental links between natural deduction and dialogical logic, introduced in his lectures and talks the term implicit interlocutor. Yet, since the notion of implicit interlocutor was meant to link the strategy level with natural deduction, the concept of built-in Opponent, being the implicit interlocutor's offspring, inherited the same strategic perspective on logical truth. Thus, logical truth can be seen as the encoding of a process through which the Proponent succeeds in defending his assertion against a stubborn ideal interlocutor. 25

From the dialogical point of view, however, the ideal interlocutor of the strategy level is the result of a process of selecting the relevant moves from the play level. Rahman, Clerbout and Keiff, in a paper dedicated to the Festschrift for Sundholm, designate this process as incarnation, using Jean-Yves Girard's term. Their thorough description of the incarnation process already displays those aspects of the cooperative endeavour that were formulated by Duthil Novaes (2015) and quoted by Trafford (2017, p. 102) as a criticism of the dialogical framework. Their criticism seems to rest on the idea that the dialogues of the dialogical framework are not truly cooperative, since they are reduced to constituting logical truth.

25 With "ideal" we mean an interlocutor who always makes the optimal choices in order to collaborate in the task of testing the thesis. See Rahman (2015) and Rahman's Unfolding parallel reasoning in Islamic jurisprudence (I). Notice that if the role of the Opponent in adversarial dialogues is reduced to checking the achievement of logical truth, one may wonder what the role of the Opponent might be in more cooperation-featured dialogues: a soft interlocutor ready to accept weak arguments?
If this is really the point of their criticism, it is simply wrong, for the play level would then be completely neglected: the intersubjective, in-built and implicit cooperation of the strategy level (which takes care of inferences) grows out of the explicit interaction of players at the play level in relation to the formation rules; accepting or contesting a local reason is a process by means of which players cooperate in order to determine the meaning associated with the action-schema at stake. 26 It is fair to say that the standard dialogical framework, not enriched with the language of CTT, did not have the means to fully develop the so-called material dialogues, that is, dialogues that deal with content.

Duthil Novaes (2015, p. 602), though not Trafford (2017, p. 102), seems to be aware that dialogues are a complex interplay of adversarial and cooperative moves, 27 even in Lorenzen and Lorenz's standard formulation. However, since she understands this interplay as triggered by the built-in implicit Opponent at the strategy level, suggestions or corrections motivated by reflections on the Opponent's role cannot be made explicit in her framework, and the way this role contributes to finally constituting a winning strategy cannot be traced back. 28

Duthil Novaes' (2015, pp. 602-604) approach leads her to suggest that monotonicity is a consequence of the role of the Opponent as a stubborn adversary, who takes care of the non-defeasibility of the demonstration at stake; from this perspective, she contends that the standard presentations of dialogical logic, being mostly adversarial or competitive, are blind to defeasible forms of reasoning and are thus

    […] rather contrived forms of dialogical interaction, and essentially restricted to specific circles of specialists (Duthil Novaes, 2015, p. 602).

But this argument is not compelling once the strategy level is considered as being built from the play level: setting aside the point on content mentioned above, if we conceive the constitution of a strategy as the end-result of the complementary roles of competition and cooperation taking place at the play level, we do not seem to need, at least in many cases, to endow the notion of inference with non-monotonic features.

26 In fact, when Trafford (2017) criticizes dialogical logic in his chapter 4, he surprisingly claims that this form of dialogical interaction does not include the case in which plays would be open-ended in relation to the logical rules at stake, though it has already been suggested (see for instance Rahman & Keiff, 2005, pp. 394-403) how to develop what we called Structure Seeking Dialogues (SSD). Moreover, Keiff's PhD dissertation, Le Pluralisme dialogique : Approches dynamiques de l'argumentation formelle, is mainly about SSD. The idea behind SSD is roughly the following: we take some inferential practice that we would like to formulate as an action-schema, mainly in a teaching-learning situation, and we then search for the rules allowing us to put this inferential practice into a schema. For example, we take the excluded middle to be, in a given context, a sound inferential practice; we might then ask what kind of moves P should be allowed to make if he states the excluded middle as his thesis. It is nonetheless true to say that SSD have been studied only in the case of modal logic.
The play level is the level where cooperative interaction, whether constructive or destructive, can take place until the definitive answer, given the structural and material conditions of the rules of the game, has been reached. 29 The strategy level is a recapitulation that retains the end result. These considerations should also put an end to Trafford's (2017, pp. 86-88) search for open-ended dialogical settings: open-ended dialogical interaction, to put it bluntly, is a property of the play level. Certainly, the point of the objection may be either that this level is underdeveloped in the literature (a fact that we acknowledge, with the provisos formulated above), or that the dialogical approach to meaning does not manage to draw a clean distinction between local and strategic meaning; the section on tonk below intends to make this distinction as clear as possible.

At this point of the discussion we can say that the role of the (built-in) Opponent in Lorenzen and Lorenz's dialogical logic has been thoroughly misunderstood. Indeed, the role of both interlocutors (implicit or not) is not to ensure logical truth by checking the non-defeasibility of the demonstration at stake; their role is to implement both the dialogue-definiteness of the expressions involved and the internalization of meaning. 30

4.3 Pathological Cases and the Neglect of the Play Level

The notorious case of Prior's (1960) tonk has several times been addressed as a counterargument to inferentialism, and also to the "indoor perspective" of the dialogical framework. This also seems to constitute the background against which Trafford (2017, p. 86), for instance, reproduces the circularity objection against the dialogical approach to logical constants.

27 To put it in her own words: "the majority of dialogical interactions involving humans appear to be essentially cooperative, i.e., the different speakers share common goals, including mutual understanding and possibly a given practical outcome to be achieved." (Duthil Novaes, 2015, p. 602)
28 See for instance her discussion of countermoves (Duthil Novaes, 2015, p. 602): indefeasibility means that the Opponent has no available countermove: "A countermove in this case is the presentation of one single situation, no matter how far-fetched it is, where the premises are the case and the conclusion is not - a counterexample." The question then would be to know how to show that the Opponent has no countermove available. The whole point of building winning strategies from plays is to actually construct the evidence that there is no possible move for the Opponent that will lead her to win: that is a winning strategy. But when the play level is neglected, the question remains: how does one know that the Opponent has no countermove available? It can actually be argued that the mere notion of countermove tends to blur the distinction between the level of plays and that of strategies: a countermove makes sense if it is 'counter' to a winning strategy, as if the players were playing at the strategy level, but that is something we explicitly reject. At the play level, there are only simple moves: these can be challenges, defences, counterattacks, but countermoves do not make any sense.
29 Among these variations can be counted cooperative games, non-monotony, and the possibility of player errors or of limited knowledge or resources, to cite but a few of the options the play level offers, making the dialogical framework very well adapted for the history and philosophy of logic.
30 By "internalization" we mean that the relevant content is made part of the setting of the game of giving and asking for reasons: any relevant content is the content displayed during the interaction. For a discussion of this conception of internalization see Peregrin (2014, pp. 36-42).
At this point of the discussion, Trafford (2017, pp. 86-88) is clearly aware of the distinction between the rules for local meaning and the rules of the strategy level, though he points out that the local meaning is vitiated by the strategic notion of justification. This is rather surprising, since, as Rahman and Keiff (2005), Rahman, Clerbout and Keiff, Rahman (2012) and Redmond's Armonía Dialógica have shown, it is precisely the case of tonk that provides a definitive answer to the issue. In this respect, three well-distinguished levels of meaning are respectively determined by specific rules:

 the local meaning of an expression establishes how a statement involving such an expression is to be attacked and defended (through the particle rules);
 the global meaning of an expression results from the structural rules prescribing how to develop a play having this expression as its thesis;
 the strategy rules (for P) determine what options P must consider in order to show that he does have a method for winning whatever O may do, in accordance with the local and structural rules.

It can be shown in a quite straightforward fashion (see below) that an inferential formulation of rules for tonk corresponds to strategic rules that cannot be constituted by the formulation of particle rules. The player-independence of the particle rules, which is responsible for the branchings at the strategy level, does not yield the strategic rules that the inferential rules for tonk are purported to prescribe. In short, the dialogical take on tonk shows precisely how distinguishing rules of local meaning from strategic rules makes the dialogical framework immune to tonk. As this distinction is central to the dialogical framework and illustrates the key feature of player-independence of particle rules, we will now develop the argument; we will then be able to contrast this pathological tonk case with another one, that of the black-bullet operator.

The tonk challenge and player-independence of local meaning

To show how the dialogical framework is immune to tonk through the importance and priority it gives to the play level, winning strategies are linked to semantic tableaux. According to the dialogical perspective, if tableaux rules (or any other inference system, for that matter) are conceived as describing the core of the strategic rules for P, then the tableaux rules should be justified by the play level, and not the other way round: the tonk case clearly shows that contravening this order yields pathological situations. We will here only need conjunction and disjunction for dealing with tonk. 31

A systematic description of the winning strategies available for P in the context of the possible choices of O can be obtained from the following considerations: if P is to win against any choice of O, we will have to consider two main different dialogical situations, namely those (a) in which O has uttered a complex formula, and those (b) in which P has uttered a complex formula. We call these main situations the O-cases and the P-cases, respectively. In both of these situations another distinction has to be examined:

(i) P wins by choosing (i.1) between two possible challenges in the O-cases (a), or (i.2) between two possible defences in the P-cases (b), iff he can win with at least one of his choices.
(ii) When O can choose (ii.1) between two possible defences in the O-cases (a), or (ii.2) between two possible challenges in the P-cases (b), P wins iff he can win irrespective of O's choices.
The description of the available strategies will yield a version of the semantic tableaux of Beth that became popular after the landmark work on semantic trees by Raymond Smullyan (1968), where O stands for T (left side) and P for F (right side), and where situations of type (ii), and not of type (i), lead to a branching rule:

    (O) A ∧ B                          (O) A ∨ B
    ⟨P ?∧1⟩ (O) A                      ⟨P ?∨⟩ (O) A | (O) B
    ⟨P ?∧2⟩ (O) B

    (P) A ∧ B                          (P) A ∨ B
    ⟨O ?∧1⟩ (P) A | ⟨O ?∧2⟩ (P) B      ⟨O ?∨⟩ (P) A
                                       (P) B

The expressions of the form ⟨X …⟩ constitute interrogative utterances.

However, as mentioned above, semantic tableaux are not dialogues. The main point is that dialogues are built bottom-up: from local to global meaning, and from global meaning to validity. This establishes the priority of the play level over the winning-strategy level. From the dialogical point of view, Prior's original tonk contravenes this priority. Let us indeed temporarily assume that we can start not by laying down the local meaning of tonk, but by specifying what a winning strategy for tonk would look like, with the help of T(left)-side and F(right)-side tableaux rules (or a sequent calculus) for logical constants; in other words, let us assume that the tableaux rules are necessary and sufficient to set the meaning of tonk. Prior's tonk rules are built half on the disjunction rules (taking up only its introduction rule) and half on the conjunction rules (taking up only its elimination rule). This renders the following tableaux version of the undesirable tonk: 32

    (O) [or (T)] A tonk B              (P) [or (F)] A tonk B
    (O) [(T)] B                        (P) [(F)] A

Tonk is certainly a nuisance: if we apply the cut-rule, it is possible to obtain a closed tableau for T A, F B, for any A and B. Moreover, there are closed tableaux for both {T A, A tonk ¬A} and {T A, ¬(A tonk ¬A)}.

From the dialogical point of view, the rejection of tonk is linked to the fact that there is no way to formulate rules for its local meaning that meet the condition of being player-independent: if we try to formulate rules for local meaning matching those of the tableaux, the defence yields a different response depending on who is defending, namely the tail of tonk if the defender is O, and the head of tonk if the defender is P. The fact that we need two sets of rules for the challenge and the defence of a tonk move means that the rule that should provide the local meaning of tonk is player-dependent, which should not be the case.

Summing up: within the dialogical framework, tonk-like operators are rejected because there is no way to formulate player-independent rules for their local meaning that justify the tableaux rules designed for these operators. The mere possibility of writing tableaux rules that cannot be linked to play-level rules shows that the play-level rules are not vitiated by strategic rules. This brief reflection on tonk should state our case for both the importance of distinguishing the rules of the play level from those of the strategy level, and the importance of including the feature of player-independence in the rules for local meaning: it is player-independence that provides the meaning explanation of the strategic rules, not the other way round.
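The inferentialist side of this diagnosis can be made vivid with a small Lean 4 sketch (hypothetical definitions of ours, for illustration only): if tonk's two halves are simply postulated at the inference level, with no play-level rule backing them, every entailment becomes derivable.

    -- Prior's tonk, postulated directly at the inference level:
    -- a disjunction-style introduction ...
    inductive Tonk (A B : Prop) : Prop where
      | intro : A → Tonk A B

    -- ... and a conjunction-style elimination, here an outright axiom,
    -- since it cannot be derived from the introduction above.
    axiom Tonk.elim {A B : Prop} : Tonk A B → B

    -- Together they trivialize entailment: any A yields any B.
    theorem anything {A B : Prop} (a : A) : B :=
      Tonk.elim (Tonk.intro a)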
The black-bullet challenge and dialogue-definiteness

Trafford (2017, pp. 37-41) contests the standard inferentialist approach to the meaning of logical constants by recalling a counterexample of Stephen Read's, the black-bullet operator. Indeed, Read (2008; 2010) introduces a different kind of pathological operator, the black-bullet •, a zero-adic operator that says of itself that it is false. Trafford (2017, p. 39, footnote 35) suggests that the objection also extends to CTT; this claim, however, is patently wrong, since such counterexamples would not meet the conditions for the constitution of a type. 33

Within the dialogical framework, though player-independent rules for the black-bullet can be formulated (as opposed to tonk), they do not satisfy dialogue-definiteness. Consider the following tableaux rules for the black-bullet, which show that it certainly is pathological: they deliver closed tableaux for both • and ¬•:

    (P) •                          (O) •
    ⟨O ?•⟩ (P) • ⊃ ⊥               ⟨P ?•⟩ (O) • ⊃ ⊥

We can in this case formulate the following player-independent rules:

Black-bullet player-independent particle rules
    Move: X ! •
    Challenge: Y ?•
    Defence: X ! • ⊃ ⊥

The black-bullet operator seems therefore to meet the dialogical requirement of player-independent rules, and would thus have local meaning. But if it does indeed have player-independent rules, the further play on the defence (which is a negation) requires that the challenger concede the antecedent, that is, the black-bullet itself:

Deploying the black-bullet challenges

          Y                            X
    …                                  …
                                       ! •                  i
    i+1   ?• (challenges i)            ! • ⊃ ⊥              i+2
    i+3   ! • (challenges i+2)         ?• (challenges i+3)  i+4
    …

Obviously, this play sequence can be carried out indefinitely, regardless of which player initially states the black-bullet. So the apparently acceptable player-independent rules for playing the black-bullet contravene dialogue-definiteness; and the only way of keeping dialogue-definiteness would be to give up player-independence! 34

33 Klev (2017, p. 12, footnote 7) points out that the introduction rule of this kind of operator fails to be meaning-giving because the postulated canonical set Λ(A) occurs negatively in its premiss, and that restrictions avoiding such operators have already been formulated by Martin-Löf (1971, pp. 182-183) and by Dybjer (Inductive families).
34 We could provide at the local level of meaning a set of player-independent rules, and add some special structural rule in order to force dialogue-definiteness (see Rahman, 2012, p. 225); however, such rules would produce a mismatch in the formation of the black-bullet: the formulation of the particle rule would have to assume that the black-bullet is an operator, but the structural rule would have to assume that it is an elementary proposition.
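In type-theoretical terms the pathology shows up as a failed type constitution, as noted in footnote 33. The following Lean 4 sketch (our illustration; Bullet is a hypothetical name) makes the point: a definition whose constructor premiss contains the defined type negatively is rejected by the positivity check, and merely assuming a proposition with the black-bullet behaviour yields absurdity:

    -- A definition such as
    --   inductive Bullet : Prop where
    --     | intro : (Bullet → False) → Bullet
    -- is rejected by Lean's positivity check (cf. footnote 33).
    -- Assuming a proposition equivalent to its own negation leads
    -- to ⊥; the endless alternation of the play above is mirrored
    -- by the self-application below.
    theorem bullet_explodes (Bullet : Prop) (h : Bullet ↔ ¬Bullet) :
        False :=
      have nb : ¬Bullet := fun b => h.mp b b
      nb (h.mpr nb)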
Conclusion: the meaning of expressions comes from the play level

The two pathological cases we have discussed, the tonk and the black-bullet operators, stress the difference between the play level and the strategy level, and show that the meaning provided by rules at the strategy level does not carry over to the local meaning. Thus, from the dialogical point of view, the rules determining the meaning of any expression are to be rooted at the play level, and at this level what is to be admitted or rejected as a meaningful expression amounts to the formulation of a player-independent rule prescribing the constitution of a dialogue-definite proposition (in which that expression occurs as the main operator). Notice that once material dialogues are included, the distinction between logical and non-logical operators is no longer important. If we enrich the dialogical framework with the CTT language, this feature comes even more prominently to the fore. What the dialogical framework adds to the CTT framework is, as pointed out by Martin-Löf (2017a; 2017b), a pragmatic layer where normativity finds its natural place. Let us now discuss the notion of normativity.

5 Normativity and the Dialogical Framework: A New Venue for the Interface Pragmatics-Semantics

In his recent book Inferentialism: Why Rules Matter, Jaroslav Peregrin (2014) marshals the distinction between the play level and the strategy level (which he calls tactics) in order to offer another, more general insight into the issue of normativity mentioned at the start of our volume. Indeed, Peregrin understands the normativity of logic not in the sense of a prescription on how to reason, but rather as providing the material by means of which we reason:

    It follows from the conclusion of the previous section that the rules of logic cannot be seen as tactical rules dictating feasible strategies of a game; they are the rules constitutive of the game as such. (MP does not tell us how to handle implication efficiently, but rather what implication is.) This is a crucial point, because it is often taken for granted that the rules of logic tell us how to reason precisely in the tactical sense of the word. But what I maintain is that this is wrong, the rules do not tell us how to reason, they provide us with things with which, or in terms of which, to reason. (Peregrin, 2014, pp. 228-229)

Peregrin endorses at this point the dialogical distinction between rules for plays and rules for strategies. In this regard, the prescriptions for developing a play provide the material for reasoning, that is, the material allowing a play to be developed, and without which there would not even be a play; whereas the prescriptions of the tactical level (to use his terminology) prescribe how to win, or how to develop a winning strategy:

    This brings us back to our frequently invoked analogy between language and chess. There are two kinds of rules of chess: first, there are rules of the kind that a bishop can move only diagonally and that the king and a rook can castle only when neither of the pieces have previously been moved. These are the rules constitutive of chess; were we not to follow them, we have seen (Section 5.5) we would not be playing chess. In contrast to these, there are tactical rules telling us what to do to increase our chance of winning, rules advising us, e.g., not to exchange a rook for a bishop or to embattle the king by castling. Were we not to follow them, we would still be playing chess, but with little likelihood of winning. (Peregrin, 2014, pp. 228-229)

This observation of Peregrin's, together with his criticism of the standard approach to the dialogical framework, according to which this framework would focus only on logical constants (Peregrin, 2014, pp. 100, 106), a criticism shared by many others since Hintikka (1973, pp. 77-82), naturally leads to the main subject of our book, namely immanent reasoning, that is, the linking of CTT with the dialogical framework. The criticism that the focus lies on logical constants and not on the meaning of other expressions does indeed apply to some extent to the standard dialogical framework, as few studies have been carried out on material dialogues in that basic framework; 35 but the CTT-enriched language of material dialogues deals with this shortcoming. Yet this criticism seems to dovetail with another criticism, summoned by Martin-Löf as the starting point of his Oslo lecture:

    I shall take up criticism of logic from another direction, namely the criticism that you may phrase by saying that traditional logic doesn't pay sufficient attention to the social character of language. (Martin-Löf, 2017a, p. 1)

35 This kind of criticism does not seem to have been aware of Lorenz (1970; 2009; 2010a; 2010b), who carries out a thorough discussion of predication from a dialogical perspective, including the interaction between perceptual and conceptual knowledge. However, it is perhaps fair to say that this philosophical work has not been integrated into dialogical logic; we will come back to this subject below.
The focus on the social character of language not only takes logical constants into account, of course, but also considers other expressions such as elementary propositions or questions, as well as the acts bringing these expressions forward in a dialogical interaction, like statements, requests, challenges, or defences (to take examples from the dialogical framework), and how these acts made by persons intertwine and call for, or put out of order, other specific responses by that person or by others. In this regard, the social character of language is put at the core of immanent reasoning through the normativity present in dialogues: within immanent reasoning, normativity involves rules of interaction which allow us to consider assertions as the result of intertwined rights and duties (or permissions and obligations). This central normative dimension of the dialogical framework at large, which stems from questioning what is actually being done when implementing the rules of this very framework, entails that objections according to which the focus would be only on logical constants will always be, from the dialogical perspective, slightly off the mark.

As mentioned in the introduction, in his Oslo and Stockholm lectures Martin-Löf (2017a; 2017b) delves into the structure of the deontic and epistemic layers of statements within his view on dialogical logic. In order to approach this normative aspect, which pervades logic up to its most technical parts, let us discuss more thoroughly the following extracts of "Assertion and Request": 36

    […] we have this distinction, which I just mentioned, between, on the one hand, the social character of language, and on the other side, the non-social […] view of language. But there is a pair of words that fits very well here, namely to speak of the monological conception of logic, or language in general, versus a dialogical one. And here I am showing some special respect for Lorenzen, who is the one who introduced the very term dialogical logic. The first time I was confronted with something of this sort was when reading Aarne Ranta's book Type-Theoretical Grammar in (1994). Ranta there gave two examples, which I will show immediately. The first example is in propositional logic, and moreover, we take it to be constructive propositional logic, because that does matter here, since the rule that I am going to show is valid constructively, but not valid classically. Suppose that someone claims a disjunction to be true, asserts, or judges, a disjunction to be true. Then someone else has the right to come and ask him, Is it the left disjunct or is it the right disjunct that is true? There comes an opponent here, who questions the original assertion, and I could write that in this way:

        ? ⊢ A ∨ B true

    And by doing that, he obliges the original assertor to answer, that is, to assert either that A is true or that B is true, so he has a choice, and we need to have some symbol for the choice here.

        (Dis)    ⊢ A ∨ B true        ? ⊢ A ∨ B true
                 ----------------------------------
                 ⊢ A true | ⊢ B true

    Ranta's second example is from predicate logic, but it is of the same kind. Someone asserts an existence statement,

        ⊢ (∃x : A)B(x) true

    and then someone else comes and questions that:

        ? ⊢ (∃x : A)B(x) true

    And in that case the original assertor is forced, which is to say, he must come up with an individual from the individual domain and also assert that the predicate B is true of that instance.

36 Ansten Klev's transcription of Martin-Löf (2017a, pp. 1-3, 7). In fact, the present paper relies on the main technical and philosophical results of Rahman, McConaughey, Klev and Clerbout (2018).
    […] So, what are the new things that we are faced with here? Well, first of all, we have a new kind of speech act, which is performed by the (oh, I haven't said that; of course I will use the standard terminology here, either speaker and hearer, or else respondent and opponent, or proponent and opponent, as Lorenzen usually says, so that's terminology), but the novelty is that we have a new kind of speech act in addition to assertion. […] So, let's call them rules of interaction, in addition to inference rules in the usual sense, which of course remain in place as we are used to them. […] Now let's turn to the request mood. And then it's simplest to begin directly with the rules, because the explanation is visible directly from the rules. So, the rules that involve request are these: if someone has made an assertion, then you may question his assertion, the opponent may question his assertion.

        (Req1)    ⊢ C
                  -------- (may)
                  ? ⊢ C

    Now we have an example of a rule where we have a may. The other rule says that if we have the assertion ⊢ C, and it has been challenged, then the assertor must execute his knowledge of how to do C. And we saw what that amounted to in the two Ranta examples, so I will write it schematically: he will continue by asserting zero, one, or more statements (we have two in the existential case), which I will schematically call C′.

        (Req2)    ⊢ C    ? ⊢ C
                  -------------- (must)
                  ⊢ C′

The Oslo and Stockholm lectures of Martin-Löf (2017a; 2017b) contain challenging and deep insights into dialogical logic, and the understanding of defences as duties and challenges as rights is indeed at the core of the deontics underlying the dialogical framework. 37 More precisely, the rules Req1 and Req2 do two things: they condense the local rules of meaning, and they bring to the fore the normative feature of those rules, which additionally provides a new understanding of Sundholm's notion of the implicit interlocutor: once we make the role of the interlocutor explicit, the deontic nature of logic comes out. 38 Moreover, as Martin-Löf points out, and rightly so, they should not be called rules of inference but rules of interaction. Accordingly, a dialogician might wish to add players X and Y to Req2, in order to stress both that the dialogical rules do not involve inference but interaction, and that they constitute a new approach to the action-based background underlying Lorenzen's (1955) Operative Logik. This would yield the following, where we substitute an arrow for the horizontal bar: 39

        (Req2)    ⊢X C        ? ⊢may,Y C
                       ⇓
                  ⊢must,X C′

Such a rule does indeed condense the rules of local meaning, but it still does not express the choices involved in defending or challenging; yet it is the distribution of these choices that determines, for example, that the meaning of a disjunction is different from that of a conjunction: while in the former case (disjunction) the defender must choose a component, the latter (conjunction) requires of the challenger that her right to challenge be bound to her duty to choose the side to be requested (though she may later request the other side as well). Hence, the rules for disjunction and conjunction (if we adapt them to Martin-Löf's rules) would be the following, where the vertical stroke marks the defender's choice and the two distinct requests mark the challenger's choice:

        (Dis)    ⊢X A ∨ B        ? ⊢may,Y ∨
                       ⇓
                 ⊢must,X A | ⊢must,X B

        (Con)    ⊢X A ∧ B        ? ⊢may,Y L  (or  ? ⊢may,Y R)
                       ⇓
                 ⊢must,X A  (resp. ⊢must,X B)

Notice however that these rules only determine the local meaning of disjunction and conjunction, not their global meaning.

37 See Lorenz (1981, p. 120), who uses the expressions right to attack and duty to defend.
38 This crucial insight of Martin-Löf's on dialogical logic and on the deontic nature of logic seems to underlie recent studies of the dialogical framework based on Sundholm's notion of the implicit interlocutor, such as Duthil Novaes (2015) and Trafford (2017).
39 These rules can be considered as inserting into the rules the back-and-forth movement described by Martin-Löf (2017a, p. 8).
For example, while classical and constructive disjunction share the same rules of local meaning, they differ at the global level of meaning: in a classical disjunction the defender may come back on the choice he made for defending his disjunction, while in a constructive disjunction this is not allowed: once a player has made a choice, he must live with it. What is more, these rules are not rules of inference (for example, rules of introduction and elimination): they become rules of inference only when we focus on the choices P must take into consideration in order to claim that he has a winning strategy for the thesis. Indeed, as mentioned at the start of the present chapter, strategy rules (for P) determine what options P must consider in order to show that he has a method for winning whatever O does, in accordance with the rules of local and global meaning.

The introduction rules, on the one hand, establish what P has to bring forward in order to assert his statement when O challenges it. Thus, in the case of a disjunction, P must choose and assert one of the two components. So P's obligation lies in the fact that he must choose, and P's duty to choose yields the introduction rule. Compare this with the conjunction, where it is the challenger who has the right to choose (and who does not assert her choice but requests it). But in both cases, defending a disjunction and defending a conjunction, only one conclusion will be produced, not two: in the case of a conjunction, the challenger will ask for one side after the other (recall that this is an interaction taking place within a dialogue where the moves of the two players alternate). The elimination rules, on the other hand, prescribe what moves P must consider when it is O who has asserted the proposition at stake. So if O asserted a disjunction, P must be able to win whatever the choices of O may be.

The case of the universal quantifier adds the interdependence of choices triggered by the may-moves and the must-moves: if the thesis is a universal quantifier of the form (∀x : A)B(x), P must assert B(a) for whatever a O may choose from the domain A: this is what corresponds to the introduction rule. If it is O who asserted the universal quantifier, and if she also conceded that a : A, then P may challenge the quantifier by choosing a : A and request of O that she assert B(a); this is how the elimination rule for the universal quantifier is introduced in the dialogical framework (for details see chapter 0).

These distinctions can be made explicit if we enrich the first-order language of standard dialogical logic with expressions inspired by CTT. The first task is to introduce statements of the form "p : A". On the right-hand side of the colon is the proposition A; on the left-hand side is the local reason p brought forward to back the proposition during a play. The reason is local in that the force of the assertion is limited to the level of plays. But when the assertion "p : A" is backed by a winning strategy, the judgement asserted draws its justification precisely from that strategy, thus endowing p with the status of a strategic reason which, in the most general cases, encodes an arbitrary choice of O.
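The distribution of choices just described can be made concrete in the CTT-style setting. The following Lean 4 sketch (again an illustration of ours) shows the defender's choice for disjunction, the challenger's choice for conjunction, and the universal quantifier as a method answering any choice of the challenger:

    -- Defender's duty to choose a component (introduction):
    example {A B : Prop} (p₁ : A) : A ∨ B := Or.inl p₁

    -- Challenger's right to choose the side to request (elimination):
    example {A B : Prop} (p : A ∧ B) : A := p.left   -- request ?L∧
    example {A B : Prop} (p : A ∧ B) : B := p.right  -- request ?R∧

    -- Universal quantification: a method turning any choice a : A of
    -- the challenger into a reason for B(a).
    example {A : Type} {B : A → Prop} (m : ∀ x : A, B x) (a : A) :
        B a :=
      m a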
The rock bottom of the dialogical approach is still the play-level notion of dialogue-definiteness of the proposition, namely:

    For an expression to count as a proposition A, there must exist an individual play about the statement X ! A, in the course of which X is committed to bring forward a local reason to back that proposition, and the play reaches a final position with either win or loss after a finite number of moves according to definite local and structural rules.

The deontic feature of logic is here built directly into the dialogical concept of statements about a proposition. More generally, the point is that, as observed by Martin-Löf (2017a, p. 9), according to the dialogical conception logic belongs to the area of ethics. One way of explaining how this important aspect has been overlooked or misunderstood might be that the usual approaches to the layers underlying logic got the order of priority between the deontic notions and the epistemic notions the wrong way round. 40

Martin-Löf's lectures propose a fine analysis of the inner and outer structure of the statements of logic from the point of view of speech-act theory, which puts the order of priority mentioned above right; in doing so, it pushes forward one of the most cherished tenets of the dialogical framework, namely that logic has its roots in ethics. In fact, Martin-Löf's insights on dialogical logic as re-establishing the historical links between ethics and logic provide a clear answer to Wilfried Hodges's (2008) 41 sceptical view, in his section 2, as to what the dialogical framework's contribution is. Hodges's criticism seems to target the mathematical interest of a dialogical conception of logic rather than its philosophical interest, which does not seem to attract much of his attention. In lieu of a general plaidoyer for the dialogical framework's philosophical contribution to the foundations of logic and mathematics, which would bring us too far, let us highlight three points which result from the above discussions:

1) the dialogical interpretation of epistemic assumptions offers a sound venue for the development of inference-based foundations of logic;
2) the dialogical take on the interaction of epistemic and deontic notions in logic, as well as the specification of the play level's role, display new ways of implementing the interface pragmatics-semantics within logic;
3) the introduction of knowing how into the realm of logic is of great import (Martin-Löf, 2017a; 2017b).

Obviously, formal semantics in the Tarski style is blind to the first point, misunderstands the nature of the interface involved in the second, and ignores the third.

6 Final Remarks

The play level is the level where meaning is forged: it provides the material with which we reason. 42 It reduces neither to the (singular) performances that actualize the interaction-types of the play level, nor to the "tactics" for the constitution of the schema that yields a winning strategy. We call our dialogues involving rational argumentation dialogues for immanent reasoning precisely because the reasons backing a statement, which are now explicit denizens of the object-language of plays, are internal to the development of the dialogical interaction itself. More generally, the emergence of concepts, so we claim, involves not only games of giving and asking for reasons (games involving why-questions); it also involves games that include moves establishing how it is that the reason brought forward accomplishes its explicative task. Dialogues for immanent reasoning are dialogical games of why and how.
Notice that the notion of dialogue-definiteness is not bound to knowing how to win; this is rather a feature that characterizes winning strategies. To master the meaning of an implication, within the dialogical framework, amounts rather to knowing how to develop an actual play for it. In this context it is worth mentioning that during the Stockholm and Oslo talks on dialogical logic, Martin-Löf (2017a; 2017b) points out that one of the hallmarks of the dialogical approach is the notion of execution, which, as mentioned in the preface, is close to the requirement of bringing forward a suitable equality while performing an actual play. Indeed, from the dialogical point of view, an equality statement comes out as the answer to a how-question on the local reason b: How do you show the efficiency of b as providing a reason for A? In this sense the how-question presupposes that b has been brought forward as an answer to a why-question: Why does A hold? Thus, equalities express the way to execute or carry out the actions encoded by the local reason; however, the actualization of a play-schema does not require the ability of knowing how to win a play. Thus, while execution, or performance, is indeed important, the backbone of the framework lies in the dialogue-definiteness notion of a play.

The point of the preceding paragraph is that though actualizing and schematizing are processes at the heart of the dialogical construction of meaning, they should not be understood as two separate actions: through these actions we acquire the competence associated with the meaning of an expression by learning to play both the active and the passive role. This feature of Dialogical Constructivism stems from Herder's view 43 that the cultural process is a process of education, in which teaching and learning always occur together: dialogues display this double nature of the cultural process, in which concepts emerge from a complex interplay of why- and how-questions. In this sense, as pointed out by Lorenz (2010a, pp. 140-147), the dialogical teaching-learning situation is where competition (the I-perspective) and cooperation (the You-perspective) interact: both intertwine in collective forms of dialogical interaction that take place at the play level.

If the reader allows us to condense our proposal once more, we might say that the perspective we are trying to bring to the fore is rooted in the intimate conviction that meaning and knowledge are something we do together; our perspective is thus an invitation to participate in the open-ended dialogue that is the human pursuit of knowledge and collective understanding, since philosophy's endeavour is immanent to the kind of dialogical interaction that makes reason happen.

43 See Herder (Abhandlung über den Ursprung der Sprache, Part II).
The formation rules run as follows (statement; challenge; defence):
- Conjunction: X A ∧ B : prop; Y ?F∧1 or Y ?F∧2; X A : prop (resp. X B : prop).
- Disjunction: X A ∨ B : prop; Y ?F∨1 or Y ?F∨2; X A : prop (resp. X B : prop).
- Implication: X A ⊃ B : prop; Y ?F⊃1 or Y ?F⊃2; X A : prop (resp. X B : prop).
- Universal quantification: X (∀x : A)B(x) : prop; Y ?F∀1 or Y ?F∀2; X A : set (resp. X B(x) : prop [x : A]).
- Existential quantification: X (∃x : A)B(x) : prop; Y ?F∃1 or Y ?F∃2; X A : set (resp. X B(x) : prop [x : A]).
- Subset separation: X {x : A | B(x)} : prop; Y ?F1 or Y ?F2; X A : set (resp. X B(x) : prop [x : A]).
- Falsum: X ⊥ : prop; no challenge, no defence.
The following rule is not really a formation rule but is very useful when applying formation rules in which one statement is dependent upon another, such as B(x) : prop [x : A]. 11 Substitution rule within dependent statements (subst-D): given X π(x1, ..., xn)[xi : Ai], the challenge Y τ1 : A1, ..., τn : An is answered by the defence X π(τ1, ..., τn).
The notation P n^O⟦p1^O⟧ : A indicates that P's strategic reason for the negation is based on O's move n, where O is forced to state move n, which is dependent upon O's choice p1 as local reason for the antecedent of the negation. This yields the following rule for the synthesis of the strategic reason for negation: move P ! ¬A (also expressed as P ! A ⊃ ⊥); challenge O p1 : A; defence O ! ⊥. P's successful defence of the negation amounts to a switch such that O must now state that she has a local reason for A; however, this move leads her to give up by bringing forward ⊥. The move O p1 : A, followed by P n^O⟦p1^O⟧ : A, allows P to force her to give up at move n, which leads to P's victory.
Note that the analysis of strategic reasons for negation is divided into two presentations of negation, O p : ¬A and O p : A ⊃ ⊥, which, at the play level, are governed by SR7. The first presentation yields O stating ⊥, that is, giving up, and therefore the play ends with P winning without further ado. Thus the strategic reason is constituted by the resolution of the instruction for A with the means provided by O (L¬(p) = p1^O).
Analysis of local reasons and of P-strategic reasons (statement; challenge; defence; strategic reason):
- Conjunction: O p : A ∧ B; P ?L∧ or P ?R∧; O L∧(p)^O : A (resp. O R∧(p)^O : B); P L∧(p)^O = p1^{P,O} : A (resp. P R∧(p)^O = p2^{P,O} : B).
- Existential quantification: O p : (∃x : A)B(x); P ?L∃ or P ?R∃; O L∃(p)^O : A (resp. O R∃(p)^O : B(L∃(p)^O)); P L∃(p)^O = p1^{P,O} : A (resp. P R∃(p)^O = p2^{P,O} : B(p1^{P,O})).
- Subset separation: O p : {x : A | B(x)}; P ?L or P ?R; O L{...}(p)^O : A (resp. O R(p)^O : B(L{...}(p)^O)); P L{...}(p)^O = p1^{P,O} : A (resp. P R(p)^O = p2^{P,O} : B(p1^{P,O})).
- Disjunction: O p : A ∨ B; P ?∨; O L∨(p)^O : A or O R∨(p)^O : B; P L∨(d)^O = d1^{P,O} : C | R∨(d)^O = d2^{P,O} : C.
- Implication: O p : A ⊃ B; P L⊃(p)^P : A; O R⊃(p)^O : B; P R⊃(p)^O = p2^{P,O}⟦L⊃(p)^P = p1^{P,O}⟧ : B.
- Universal quantification: O p : (∀x : A)B(x); P L∀(p)^P : A; O R∀(p)^O : B(L∀(p)^P); P R∀(p)^O = p2^{P,O}⟦L∀(p)^P = p1^{P,O}⟧ : B(p1^{P,O}).
- Negation: O p : ¬A (P L¬(p)^P : A; O R¬(p)^O : ⊥), also expressed as O p : A ⊃ ⊥ (P L⊃(p)^P : A; O R⊃(p)^O : ⊥); P L¬(p)^P = p1^{P,O} : A, or P you-gave-up(n)⟦L⊃(p)^P = p1^{P,O}⟧ : C.
Ansten Klev's transcription of Martin-Löf (2017a, pp. 1-3, 7). In fact, the present paper relies on the main technical and philosophical results of Rahman/McConaughey/Klev/Clerbout (2018). Lorenz (2001, p. 258). Speaking of local reasons is a little premature at this stage, since only instructions are provided, not actual local reasons; but the purpose here is to give the general idea of local reasons, and instructions are meant to be resolved into proper local reasons, which requires only one extra step.
This rule is an expression at the level of plays of the rule for the substitution of variables in a hypothetical judgement; see Martin-Löf (1984, pp. 9-11). Note that P is allowed to make an elementary statement only as a thesis (Socratic rule); he will be able to respond to the challenge on an elementary statement only if O has provided the required local reason in her initial concessions. Krabbe (1985, p. 297). See Martin-Löf, "Truth of Empirical Propositions". By "internalization" we mean that the relevant content is made part of the setting of the game of giving and asking for reasons: any relevant content is the content displayed during the interaction. For a discussion of this conception of internalization, see Peregrin (2014, pp. 36-42). Among these variations can be counted cooperative games, non-monotonicity, and the possibility of player errors or of limited knowledge or resources, to cite but a few of the options the play level offers, making the dialogical framework very well adapted for the history and philosophy of logic. The table which follows is in fact the dialogical analogue of the introduction rules in CTT: dialogically speaking, these rules display the duties required by P's assertions. The fact that these language-games must be finite does not rule out the possibility of a (potentially) infinite number of them. While the particle rules are being established, the development rules have not yet been fixed, so we might call those expressions propositional schemata. By "ideal" we mean an interlocutor who always makes the optimal choices in order to collaborate in the task of testing the thesis. See Rahman (2015) and Rahman, "Unfolding parallel reasoning in Islamic jurisprudence (I): Epistemic and dialectical meaning within Abū Isḥāq al-Shīrāzī's system of co-relational inferences of the occasioning factor". Notice that if the role of the Opponent in adversarial dialogues is reduced to checking the achievement of logical truth, one would wonder what the role of the Opponent might be in more cooperation-featured dialogues: a soft interlocutor ready to accept weak arguments? We could provide at the local level of meaning a set of player-independent rules, and add some special structural rule in order to force dialogue-definiteness (see Rahman 2012, p. 225); however, such rules would produce a mismatch in the formation of black-bullet: the formulation of the particle rule would have to assume that black-bullet is an operator, but the structural rule would have to assume it is an elementary proposition. This kind of criticism seems not to have taken into account Lorenz (1970; 2009; 2010a; 2010b), which carries out a thorough discussion of predication from a dialogical perspective, addressing the interaction between perceptual and conceptual knowledge. However, perhaps it is fair to say that this philosophical work has not been integrated into dialogical logic; we will come back to this subject below. See Lorenz (1981, p. 120), who uses the expressions right to attack and duty to defend. This crucial insight of Martin-Löf on dialogical logic and on the deontic nature of logic seems to underlie recent studies of the dialogical framework based on Sundholm's notion of the implicit interlocutor, such as Dutilh Novaes (2015) and Trafford (2017).
See Martin-Löf (2017b, p. 9). See also Hodges, "Dialogue foundations: A sceptical look", and Trafford (2017, pp. 87-88). To use Peregrin's (2014, pp. 228-229) words.
131,056
[ "5613" ]
[ "11909" ]
01745950
en
[ "sdv" ]
2024/03/05 22:32:07
2018
https://pasteur.hal.science/pasteur-01745950/file/17-1783.pdf
Maria Dolores Fernandez-Garcia, Romain Volle, Marie-Line Joffret, Serge Alain Sadeuh-Mba, Ionela Gouandjika-Vasilache, Ousmane Kebe, Michael R. Wiley, Manasi Majumdar, Etienne Simon-Loriere, Anavaj Sakuntabhai, Gustavo Palacios, Javier Martin, Francis Delpeyroux, Kader Ndiaye, Maël Bessaud
Genetic Characterization of Enterovirus A71 Circulating in Africa
Enterovirus A71 (EV-A71; species Enterovirus A, genus Enterovirus, family Picornaviridae) is a common etiologic agent of hand, foot and mouth disease in young children. In addition, EV-A71 has been associated with severe and sometimes fatal neurologic diseases, including aseptic meningitis, encephalitis, and poliomyelitis-like acute flaccid paralysis (AFP) (1,2). EV-A71 is classified into 7 genogroups (A-G). Genogroup A includes the prototype strain BrCr, which was isolated in the United States in 1969 (1,2). Most EV-A71 isolates belong to genogroups B or C, which are each further divided into subgenogroups (1,2). Subgenogroups B4, B5, and C4 are mainly restricted to countries in Asia, whereas C1 and C2 circulate primarily in Europe and the Asia-Pacific region (1). Genogroup D and the newly proposed genogroup G appear to be indigenous to India, whereas genogroups E and F were recently discovered in Africa and Madagascar, respectively (3). Although EV-A71 has been reported in many parts of the world, its epidemiology remains largely unexplored in Africa. An EV-A71 outbreak was documented in 2000 in Kenya, where HIV-infected orphans were infected by EV-A71 genogroup C (4). Several AFP cases have been associated with EV-A71 infection during 2000-2013 throughout Africa: in Democratic Republic of the Congo (5) (2000, n = 1); Nigeria (6) (2004, n = 1, genogroup E); Central African Republic (7) (2003, n = 1, genogroup E); Cameroon (8) (2008, n = 2, genogroup E); Niger (9) (2013, n = 1, genogroup E); and Senegal, Mauritania, and Guinea (9) (2013-2014, n = 3, subgenogroup C2). Four additional EV-A71 strains were obtained from captive gorillas in Cameroon during 2006-2008 (n = 2, genogroup E) (10) and from healthy children in Nigeria in 2014 (n = 2, genogroup E) (11). Molecular identification of all these isolates was based only on the analysis of sequences of the viral protein 1 (VP1) capsid region.
Recombination events may be associated with the emergence and global expansion of new groups of EV-A71 that have induced large outbreaks of hand, foot and mouth disease with high rates of illness and death (12). For EV-A71, genetic exchanges have been described both within a given genogroup and with other types of enterovirus A (EV-A), usually in the nonstructural genome regions P2 and P3 (1,12,13). However, before 2017, no complete genome sequence of EV-A71 detected in Africa had been reported, diminishing the power of such analysis. We examined the complete genome of most EV-A71 isolates reported to date in Africa to characterize the evolutionary mechanisms of genetic variability.
The Study
We sequenced the full genome of 8 EV-A71 isolates obtained from patients with AFP (Table): isolates 14-157, 14-250, 13-365, 13-194, and 15-355 from West Africa and isolates 08-041, 08-146, and 03-008 from Central Africa. We isolated and typed these isolates as previously described (7-10) and obtained nearly complete genomic sequences using degenerate primers (13) and additional primers designed for gene-walking (available on request) or unbiased sequencing methods (14). We determined the 5′-terminal sequences by means of a RACE kit (Roche, Munich, Germany). We deposited the viral genomes in GenBank (accession numbers in Table) and submitted sequence alignments under BioProject PRJNA422891. We aligned sequences using ClustalW software (http://www.clustal.org). To investigate the genetic relationship between Africa and global EV-A71 isolates, we constructed subgenomic phylogenetic trees based on the P1, P2, and P3 regions of the genome (Figure 1). We identified viral isolates showing related sequences in 1 of these 3 regions by BLAST search (http://www.ncbi.nlm.nih.gov/BLAST) and included them in the corresponding datasets used for analyses. We completed these datasets with a representative global set of EV-A71 sequences available in GenBank and belonging to the different EV-A71 genogroups (https://wwwnc.cdc.gov/EID/article/24/4/17-1783-Techapp1.pdf). As expected, in the structural P1 region, the 8 isolates we studied clustered within their respective genogroups (C1, C2, and E), previously established by VP1-based typing (Figure 1, panel A). In particular, the isolates of genogroup E consistently clustered together (bootstrap value 100%), confirming their belonging to the EV-A71 type and their divergence from the other isolates belonging to the common genogroups A, B, and C. Analyses of the nonstructural P2 and P3 genome regions were in agreement with these data.
However, the genetic heterogeneity (<12%) observed among the complete genomes of the genogroup E sequences strongly suggests that these viruses have circulated and diverged for years over a large geographic area in Africa. The unique Africa EV-A71-C1 strain clustered with other C1 strains originating worldwide, regardless of which genome region we analyzed. In contrast, the nonstructural sequences of the Africa EV-A71 isolates of subgenogroup C2 did not cluster with their non-Africa C2 counterparts or with any of the existing EV-A71 genogroups. The incongruent phylogenetic relationships of the Africa C2 strains in the different regions of the genome suggested that recombination events occurred during evolution. To examine recombination events further, we analyzed the EV-A71-C2 study strains by similarity plot against potential parental genomes (Figure 2). This analysis showed that sequences 14-157, 14-250, and 15-355 had high similarity (>95%). By contrast, 13-365 diverged from the other C2 isolates around nt 5600 in the P3 region, suggesting a recombination breakpoint. The analysis showed high sequence similarity (>97%) between the studied EV-A71-C2 isolates and other subgenogroup C2 strains over the P1 capsid region. Conversely, in the noncapsid region, sequence similarity between the Africa EV-A71-C2 isolates and classical subgenogroup C2 isolates (e.g., GenBank accession no. HQ647175) was much lower (66%-77%). This finding confirmed a recombination event of the Africa EV-A71-C2 lineage with an unknown enterovirus, the most likely breakpoint being located between nt 3596 and 3740, within the 2A gene. Sequence identity of the EV-A71-C2 study strains with their closest related viruses (coxsackievirus A10 [CV-A10], CV-A5, EV-A120, and EV-A71 genogroup E strains) in the 3′ half of the genome was <87.7%. Of note, we found much higher sequence identity with the full-genome sequence of a CV-A14 isolate in our database, obtained in 2014 from a patient with AFP in Senegal (15). This strain features a high similarity value (>97%) with the 3′ half of the genomes of the EV-A71-C2 West Africa strains (Figure 2), indicating that their P3 regions share a recent common ancestor. Because these strains belong to 2 different types, this finding strongly suggests that genetic exchanges occurred through intertypic recombination. This result cannot be a result of cross-contamination during the sequencing process because the CV-A14 and EV-A71 isolates were sequenced on 2 different platforms.
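As a rough illustration of the similarity-plot method used here, the following Python sketch computes SimPlot-style percent identity in sliding windows (600-nt window, 20-nt steps, as in Figure 2); it assumes pre-aligned, gap-free sequences of equal length and is not the actual SimPlot implementation.

def sliding_similarity(query, reference, window=600, step=20):
    """Percent identity of `reference` to `query` in sliding windows."""
    assert len(query) == len(reference)
    points = []
    for start in range(0, len(query) - window + 1, step):
        q = query[start:start + window]
        r = reference[start:start + window]
        identity = sum(a == b for a, b in zip(q, r)) / window * 100
        points.append((start + window // 2, identity))  # window midpoint
    return points

# Usage: plot the points for each putative parent against the query genome
# (here, strain 14-157) and look for abrupt changes in the closest parent,
# e.g. around nt 3596-3740, suggestive of a recombination breakpoint.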
Conclusions
Our findings indicate that genetically diverse EV-A71 strains are extensively circulating in Africa. We also suggest that the common ancestor of the EV-A71-C2 strains in West Africa has undergone recombination with >1 EV-A circulating in Africa. Genogroup E and recombinant C2 appear to be indigenous to Africa; they have not yet been detected elsewhere. Further exploration of environmental or clinical samples using deep sequencing technology would be of interest to determine the extent of EV-A71 circulation in Africa in the absence of AFP cases. Systematic surveillance based on full-genome sequencing could also serve to monitor these viruses for potential recombinations and to study their role in the emergence of new EV-A71 variants in Africa.
Figure 1. Phylogenetic relationships of EV-A71 isolates from patients with acute flaccid paralysis in Africa based on 3 coding regions: A) P1, B) P2, and C) P3. Apart from the studied sequences, subgenomic datasets included their best nucleotide sequence matches identified by NCBI BLAST search (http://www.ncbi.nlm.nih.gov/BLAST) as well as representative sequences of different EV-A71 genogroups and subgenogroups originating worldwide. Trees were constructed from the nucleotide sequence alignment using MEGA 5.0 software (http://megasoftware.net/) with the neighbor-joining method. Distances were computed using the Kimura 2-parameter model. The robustness of the nodes was tested by 1,000 bootstrap replications. Bootstrap support values >75 are shown at nodes and indicate strong support for the tree topology. For clarity, CV-A10, CV-A5, and EV-A71 subgenogroups C3, C4, and C5 have been collapsed. Study strains are indicated by laboratory code, country of origin, and year of isolation; previously published strains are indicated by GenBank accession number, isolate code, country of origin, and year of isolation. Black triangles indicate EV-A71 strains from this study; the black square indicates the CV-A14 strain from this study. Strains gathered in brackets belong to EV-A71 genogroups or subgenogroups; strains marked in blue belong to other species of EV-A. Scale bars indicate nucleotide substitutions per site. CV, coxsackievirus; EV, enterovirus.
Figure 2. Identification of recombinant sequences in the genome of EV-A71 C2 isolates from patients with acute flaccid paralysis in Africa (14-157, 14-250, 13-365, 15-355) by similarity plot against potential parent genomes from this study (CV-A14 strain 14-254; EV-A71 genogroup E strains 13-194, 08-146, and 03-008) and from GenBank (CV-A10, CV-A5, EV-A120). Similarity plot analysis was performed using SimPlot version 3.5.1 (http://sray.med.som.jhmi.edu/SCRoftware/simplot) on the basis of full-length genomes. For the analysis, we used a window of 600 nt moving in 20-nt steps. Approximate nt positions in the enterovirus genome are indicated. The enterovirus genetic map is shown in the top panel. We used the genome of EV-A71 study strain 14-157 as the query sequence. UTR, untranslated region.
Table. Description of enterovirus isolates from patients with acute flaccid paralysis in Africa that were sequenced for characterization of enterovirus A71*
Strain (reference) | Country of isolation | Patient age at diagnosis, y | Year | Virus | Genogroup or subgenogroup | GenBank accession no.
14-157 (9) | Senegal | 3 | 2014 | EV-A71 | C2 | MG672480
14-250 (9) | Mauritania | 1.6 | 2014 | EV-A71 | C2 | MG672481
13-365 (9) | Guinea | 1.7 | 2013 | EV-A71 | C2 | MG672479
15-355 (this study) | Senegal | 2.4 | 2015 | EV-A71 | C2 | MG013988
13-194 (9) | Niger | 1.3 | 2013 | EV-A71 | E | MG672478
03-008 (7) | Central African Republic | 1.9 | 2003 | EV-A71 | E | LT719068
08-146 (8) | Cameroon | 2.6 | 2008 | EV-A71 | E | LT719066
08-041 (8) | Cameroon | 1.7 | 2008 | EV-A71 | C1 | LT719067
14-254 (15) | Senegal | 3 | 2014 | CV-A14 | NA | MG672482
*NA, not available.
Acknowledgments We thank Karla Prieto and Catherine Pratt, who assisted in obtaining nearly complete genomes of West Africa strains, and Joseph Chitty for analysis of the next-generation sequencing data. The next-generation sequencing equipment at Institut Pasteur of Dakar was provided by the Defense Biological Product Assurance Office under the Targeted Acquisition of Reference Materials Augmenting Capabilities Initiative. This work was supported by the IPD, the Pasteur Institute's Transverse Research Program PTR484, Actions Concertées Inter-Pasteuriennes A22-16, Fondation Total Grant S-CM15010-05B, Roux Howard Cantarini postdoctoral fellowship, and Grant Calmette and Yersin from the International Directorate of the Institut Pasteur. About the Author Dr. Fernandez-Garcia is a scientist with a PhD degree in virology and is involved in research and surveillance of enterovirus infections at Institut Pasteur of Dakar, Senegal. Her research interests include infectious disease epidemiology and public health microbiology.
14,853
[ "774319", "735028", "853465", "734968", "172917", "771122", "853422" ]
[ "300027", "303623", "300027", "303623", "55917", "55916", "42095", "302687", "1098189", "307683", "300027", "300027", "1098189", "307683", "300027", "303623", "42095", "300027" ]
01718887
en
[ "phys" ]
2024/03/05 22:32:07
2016
https://hal.science/hal-01718887/file/Structural%20and%20magnetic%20properties%20of%20Co2MnSi%20heusler%20alloys%20irradiated%20with%20He%20ions.pdf
Iman Abdallah, Nicolas Ratel-Ramond, C. Magen, Béatrice Pécassou, Robin Cours, Alexandre Arnoult, Marc Respaud, Jean-François Bobo, Gérard Benassayag, Etienne Snoeck, N. Biziere
Structural and magnetic properties of He⁺ irradiated Co₂MnSi Heusler alloys
Keywords: Heusler alloys, structure, X-ray diffraction, HAADF-STEM
At the highest fluences, irradiation also leads to a decrease of the average magnetization of the alloy, which is due to D0₃ disorder and local defects induced by irradiation.
I. Introduction
In the last ten years, full Heusler compounds with the general formula X₂YZ have become a major topic in spintronics, especially for spin-torque devices requiring low damping and high spin polarization. Among them, Co₂MnSi (CMS) is a very promising candidate. It is predicted to be half metallic, with a Curie temperature well above room temperature [1-4] and a very low Gilbert damping coefficient as compared to other ferromagnetic metals [5,6]. In all Heusler alloys, the magnetic properties and the atomic order are intimately related. For example, half metallicity has been predicted in CMS for the L2₁ or B2 structure only [7]. The L2₁ structure is the most ordered phase and corresponds to 8 body-centered cubic (bcc) sub-lattices having the Co atoms at the corners of the bcc cells and the center sites occupied either by the Mn or Si atoms. A random distribution between Mn and Si or between Mn and Co atoms corresponds to the B2 or D0₃ order, respectively, while a random distribution of the Co, Mn and Si atoms between the different atomic sites leads to the disordered A2 phase. Ion irradiation with light ions is an efficient technique to improve the local chemical order in different magnetic alloys such as FePt [8] or, more recently, Heusler alloys [9]. Indeed, Gaier et al. demonstrated that He⁺ ion irradiation at 30 keV increases the long-range order parameter in CMS grown in the B2 phase. Ion irradiation therefore appears as a very interesting complementary or alternative technique to high-temperature annealing, which is incompatible with microelectronic processes. However, further studies of the structural modifications induced by irradiation are needed first. In this work, we study the structural modifications of both the L2₁ and B2 order in Co₂MnSi irradiated with 150 keV He⁺ ions. Several techniques can be used to characterize the crystal structure, such as neutron diffraction, nuclear magnetic resonance, X-ray absorption and circular magnetic dichroism (XAS/XMCD), photoemission spectroscopy (HAXPES) or HAADF-STEM techniques [10-13]. However, X-ray diffraction is an accessible and commonly used technique for macroscopic characterization of the crystal order [14-16]. Quantitative information on the presence of the different phases can be extracted from measurements of the intensity of different diffraction peaks. For example, superlattice (h,k,l) diffraction peaks for which h, k and l are odd numbers (e.g. 111) only appear when the L2₁ and/or D0₃ phases are present, while diffraction peaks for which h+k+l = 4n+2 (e.g. 002) appear for the L2₁, D0₃ and B2 phases. Finally, the fundamental peaks for which h+k+l = 4n (e.g. 022) appear for all crystal phases.
One of the major issues encountered in X-ray diffraction experiments aiming to discriminate between the various ordered phases in CMS is that the Co and Mn scattering factors are very close at the Cu Kα edge, making it almost impossible to distinguish between the D0₃ and L2₁ phases. This issue can however be overcome using a Co Kα source because, due to anomalous diffraction of Co at the K-edge, the Co and Mn scattering factors become very different and Co and Mn atoms can be differentiated. Thus, the different disorder parameters in CMS can be obtained by combining diffraction measurements using Cu and Co Kα sources. The method, based on a model proposed by Niculescu et al. [17], has recently been applied by Takamura et al. [18] for the structural characterization of Co₂FeSi. In this model α, β and γ are three disorder parameters. α corresponds to the number of Mn atoms located on Si sites and thus represents the Mn/Si substitution per CMS unit. Similarly, β and γ correspond to the number of Co atoms on Si and Mn sites, respectively. The structure factors for the different peaks of interest are then expressed as:
F_111 ∝ (1 − 2α − β)(f_Mn − f_Si) + (γ − β)(f_Co − f_Mn)   (1)
F_002 ∝ (1 − 2β)(f_Co − f_Si) + (1 − 2γ)(f_Co − f_Mn)   (2)
F_022 ∝ 2f_Co + f_Mn + f_Si   (3)
where f_Co, f_Mn and f_Si are the scattering factors calculated from the International Tables for X-ray Crystallography. We can see from eqs. (1) and (2) that if f_Co and f_Mn are close, as for a Cu Kα source, the intensities of the diffraction peaks I_111 (∝ |F_111|²) and I_002 (∝ |F_002|²) are not sensitive to D0₃ order, while they are for a Co Kα source under anomalous conditions.
II. Experimental details
In this study CMS films have been grown by magnetron sputtering on MgO (001) single crystals in a Plassys MPU 600 S ultrahigh vacuum (UHV) chamber. Details about the deposition conditions are presented by Ortiz et al. The thickness of the CMS layer is 42 ± 1 nm and a 10 nm MgO capping layer is deposited to avoid oxidation. This reference sample is then cut into four pieces, one kept as a reference and three for He⁺ irradiation at 150 keV performed with a 200A2 Varian ion implanter. The irradiation is performed at room temperature and the fluences for the three samples are 1×10¹⁵, 5×10¹⁵ and 1×10¹⁶ ions per cm², respectively. The high kinetic energy of the ions prevents implantation of He⁺ in the CMS film, as the ions stop several hundreds of nm deep in the substrate. X-ray diffraction experiments were performed on a Bruker D8-Discover (Da-Vinci) diffractometer equipped with a Cu Kα₁ source (λ = 0.154 nm) to measure the (002) and (004) peaks, while a Panalytical Empyrean diffractometer equipped with a Co Kα₁ source (λ = 0.179 nm) was used to measure the (111), (002) and (022) diffraction peaks. Note that the (004) and (022) peaks have similar structure factors and are both fundamental peaks. Figure 1 presents examples of the different diffraction peaks obtained either with the Co or the Cu Kα₁ source. According to eqs. (2) and (3), comparing the experimental I_002/I_004 ratio to the calculated one allows the extraction of β. Then α and γ are deduced from the measurements of the I_002/I_022 and I_111/I_022 ratios.
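The extraction of the disorder parameters from the measured ratios can be sketched numerically as follows; the scattering-factor and ratio values below are placeholders, not the ones used in this work, and a real analysis must use angle- and wavelength-dependent factors (including anomalous corrections) from the International Tables.

from scipy.optimize import fsolve

f_Co, f_Mn, f_Si = 18.0, 11.5, 9.5     # placeholder scattering factors (Co Kα)

def F111(a, b, g):
    return (1 - 2*a - b) * (f_Mn - f_Si) + (g - b) * (f_Co - f_Mn)   # eq. (1)

def F002(b, g):
    return (1 - 2*b) * (f_Co - f_Si) + (1 - 2*g) * (f_Co - f_Mn)     # eq. (2)

def F022():
    return 2*f_Co + f_Mn + f_Si                                      # eq. (3)

# Measured, corrected intensity ratios (placeholders); beta fixed from Cu Kα data:
R_002_022, R_111_022, beta = 0.05, 0.02, 0.04

def equations(p):
    a, g = p
    return (F002(beta, g)**2 / F022()**2 - R_002_022,
            F111(a, beta, g)**2 / F022()**2 - R_111_022)

alpha, gamma = fsolve(equations, x0=(0.15, 0.01))
print(f"alpha = {alpha:.2f}, gamma = {gamma:.2f}")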
III. Results and discussion
The results for the deduced α, β, γ are presented in Figure 1.d. For all the samples, we observe that the β parameter, i.e. the Co/Si disorder, remains almost constant (0.04 ± 0.02) whatever the ion fluence. Additionally, α increases from 0.14 ± 0.01 to 0.22 ± 0.01 between the reference sample and the one irradiated at 10¹⁶ ions/cm². While we already observe a small increase of α at 1×10¹⁵ ions/cm², there is a clear step at 5×10¹⁵. Similarly, the γ parameter, i.e. the Co/Mn disorder, increases significantly for fluences above 5×10¹⁵. For lower fluences, the Co-Mn substitution (γ) is not detectable within the uncertainty of the measurements, meaning it is less than 0.02. Using these disorder parameters, one can calculate the probability of presence of the different atoms on their original sites, considering the L2₁ phase as the starting structure (see details in Ref. 18). We found values of 98, 86 and 83% for Co, Mn and Si, respectively, for the reference sample. These probabilities fall to 93, 71 and 75% at 10¹⁶ ions/cm². Considering that α = 0.5 for full B2 order, we can estimate that our sample grows as a mixture of L2₁ and B2 phases, with ratios of about 25% B2 and 75% L2₁ order (neglecting the small amount of Co/Si disorder). We also observe structural modifications of the alloy for fluences above 5×10¹⁵ ions/cm². The (002) and (004) diffraction peaks of the irradiated sample in Figure 1 are slightly shifted toward low angles compared to the peak positions of the reference sample, indicating an increase of the out-of-plane lattice parameter from 5.67 to 5.69 Å. On the contrary, the (111) reflection remains at the same Bragg angle. As the (111) peak is related to the L2₁ and/or D0₃ phases only, we may assume that only the regions of the thin film presenting the B2 order have their out-of-plane lattice parameter increasing with irradiation.
While X-ray diffraction gives macroscopic information about the structural order, it does not explain how the different atomic exchanges are organized in the material. For example, it is difficult to know whether the B2 and D0₃ disorders are diluted in the initial matrix or whether grains of a particular ordered phase grow from the L2₁ starting structure. To answer this question we performed HAADF-STEM experiments, which provide information on the local ordering at the atomic scale. Measurements were performed at 300 kV on an FEI Titan 60-300 microscope equipped with a spherical aberration corrector for the probe. Two lamellae were prepared by FIB, one extracted from the reference sample and the other from the sample irradiated at 10¹⁶ ions/cm². HAADF-STEM studies were carried out along the [-110] zone axis. In this orientation and for L2₁ order, the intensity of each atomic column, which increases with Z, corresponds to only one type of atom. Figure 2.a shows an example of a HAADF-STEM image obtained on the reference sample. The insert shows a zoom on a few atomic columns. Figure 2.b shows the intensity profiles of the STEM image taken along the different lines reported in Figure 2.a. The profiles clearly show the alternation of high and low peak intensities corresponding to Mn (Z = 25) and Si (Z = 14) columns, demonstrating the L2₁ order. We also note that the difference in intensity between the Co and Mn columns is weak, as expected from the Co and Mn atomic numbers (Z_Co = 27, Z_Mn = 25). Similarly to classical X-ray diffraction using a Cu Kα source, HAADF-STEM is not very sensitive to D0₃ disorder. In order to demonstrate that the L2₁ order is the main phase of the sample, we performed a statistical analysis of the maximum peak intensities of the HAADF-STEM images for the reference sample (Figure 2) and for the sample irradiated at 10¹⁶ ions/cm² (Figure 3). For the first one, three distinct intensity distributions corresponding to the Co, Mn and Si atomic columns are observed (Figure 2.c). The values are normalized by the Co intensity at the center of the Co distribution. The appearance of three different intensities in the HAADF-STEM image is in good agreement with the L2₁ order, even if some spreading of the intensity distributions is observed. One source of spreading comes from the slight change of thickness across the lamella prepared for the STEM experiment. This effect prevents statistics over very large areas.
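The statistical analysis of column intensities can be sketched as follows; the intensity values are synthetic stand-ins for the peak maxima that would be extracted from the images by a peak-finding step (not shown).

import numpy as np

rng = np.random.default_rng(0)
co = rng.normal(1.00, 0.03, 500)          # Co columns (brightest, Z = 27)
mn = rng.normal(0.92, 0.03, 250)          # Mn columns (Z = 25)
si = rng.normal(0.55, 0.03, 250)          # Si columns (Z = 14)
peak_intensities = np.concatenate([co, mn, si])

hist, edges = np.histogram(peak_intensities, bins=60)
co_center = edges[np.argmax(hist)]        # crude estimate of the Co mode
normalized = peak_intensities / co_center

# Three separated modes indicate L2₁ order; merging of the Mn and Si modes
# into a single one, as in the boxed regions of Figs. 2-3, indicates B2 order.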
In some particular regions of the film, other intensity distributions are observed, such as the one reported in Figure 2.d, which corresponds to the statistical analysis performed in the black box in Figure 2.a. The inset in Figure 2.d shows that the STEM intensities corresponding to the Mn and Si columns are very similar. Even if the statistical analysis is performed on a small number of atomic columns, it clearly shows that the STEM intensities corresponding to the Mn and Si atomic columns converge to a single value, which can be associated with the appearance of the B2 order. This result indicates that very small grains with B2 order are distributed in the L2₁ matrix. While the B2 order in the reference sample is only observed in small regions of a size similar to the black box in Figure 2.a, it is observed in more extended areas at a fluence of 10¹⁶ ions/cm². This is shown in Figure 3.a. The B2 order can be observed either from the intensity profiles of the different lines (Figure 3.b) or statistically over the selected area (Figure 3.c). Similar values of the STEM intensity are measured for the Mn and Si columns, while the intensity corresponding to the Co columns remains higher. In other parts of the sample we still observe the L2₁ order (Figure 3.d). We therefore suggest that ion irradiation induces an L2₁-to-B2 transformation, which occurs around the initial B2 grains.
We also observe for the irradiated sample that the relative STEM intensities of the different atomic columns spread over slightly larger values compared to the reference sample. The most probable explanation relies on the difference between the two prepared samples, especially their thicknesses, which modify the absolute STEM intensity. Another possibility relies on local defects, such as vacancies, induced by the irradiation. As already explained, it is very challenging to assess any Co/Mn exchange from HAADF-STEM experiments. However, one can argue that if Co/Mn substitutions arose in the B2 phase, one should observe a spreading of the STEM intensity associated with the Mn columns ranging between the Si and Co ones. This is however not what is observed. Moreover, the relative intensity of the Mn columns compared to the Co ones in the L2₁ phase shows values similar to those of the reference sample. Therefore we assume that the D0₃ order most probably arises in the L2₁ matrix. Finally, we measured the magnetization of the four samples (Figure 4) with a Quantum Design PPMS VSM magnetometer at 300 K. We clearly observe a decrease of the magnetization with increasing ion fluence. As demonstrated in Ref. [2], the B2 and L2₁ phases have the same magnetic moment (µ₀Ms ≈ 1.25 T), while it decreases by 10% in the D0₃ phase. Therefore we argue that the decrease of the magnetization can be accounted for by the local Co-Mn exchange in the L2₁ matrix and by vacancies induced by the irradiation.
IV. Conclusions
In conclusion, we have demonstrated that He⁺ ion irradiation at 150 keV induces Mn/Si exchange and thus favors the B2 structure at the cost of the L2₁ phase. However, for fluences above 5×10¹⁵ ions/cm², the out-of-plane lattice parameter of the B2 phase increases, which might have an impact upon half metallicity. In addition, at this threshold value, Co/Mn substitution occurs in the L2₁ structure, leading to a decreased magnetization and thus reduced half metallicity. Below this threshold, no significant improvement of the L2₁ order has been observed. Therefore our study demonstrates that ion irradiation is an interesting alternative to annealing only for the B2 order, as long as the fluence is kept under a threshold value.
Figure 1. (color online) (a) (002) and (b) (111) diffraction peaks at the Co Kα₁ edge for the reference (black) and 10¹⁶ ions/cm² irradiated (red) samples. (c) (002) and (004) diffraction peaks at the Cu Kα₁ edge. (d) Disorder parameters obtained from the X-ray diffraction experiments.
Figure 2. (color online) (a) HAADF-STEM image of CMS. The insert at top right is a zoom over 6×6 atomic columns. (b) Intensity profiles of the lines denoted by the colored arrows in (a). (c) Statistical analysis of the intensity profile obtained from (a) on a region of 11 × 23 atomic columns. (d) Statistical analysis of the area denoted by the black box in (a). In the insert, the intensity profiles of the three central lines in the black box.
Figure 3. (color online) (a) HAADF-STEM image of CMS irradiated at 10¹⁶ ions/cm². The black boxes correspond to zones with B2 or L2₁ character. (b) Intensity profiles of the lines denoted by red and black lines in the B2 or L2₁ boxes in (a). Statistical analysis of the B2 (c) and L2₁ (d) regions in (a).
Figure 4. (color online) Evolution of the CMS average magnetization at 300 K as a function of the He⁺ fluence.
Acknowledgments
The authors thank LPCNO for X-ray diffraction facilities. This work has been supported by the French Agence Nationale pour la Recherche (ANR NASSICS 12-JS10-008 01) and the French RENATECH network.
16,067
[ "779812", "18776", "19009", "747960", "171221", "18562", "18558", "11430", "18554" ]
[ "460", "460", "159887", "183309", "460", "460", "389044", "43574", "519179", "519179", "519177", "519179" ]
00000546
en
[ "math" ]
2024/03/05 22:32:07
2005
https://inria.hal.science/inria-00000546v2/file/eabloat.pdf
Sylvain Gelly, Olivier Teytaud, Nicolas Bredeche, Marc Schoenauer
Apprentissage statistique et programmation génétique : la croissance du code est-elle inévitable ? (Statistical learning and genetic programming: is code growth inevitable?) pp. 163-178, Proceedings of CAP'2005.
Universal Consistency, the convergence to the minimum possible error rate in learning through genetic programming (GP), and code bloat, the excessive increase of code size, are important issues in GP. This paper proposes a theoretical analysis of universal consistency and code bloat in the framework of symbolic regression in GP, from the viewpoint of Statistical Learning Theory, a well-grounded mathematical toolbox for Machine Learning. Two kinds of bloat must be distinguished in that context, depending on whether the target function has finite description length or not. Then, the Vapnik-Chervonenkis dimension of programs is computed, and we prove that a parsimonious fitness ensures Universal Consistency (i.e. the fact that the solution minimizing the empirical error does converge to the best possible error when the number of examples goes to infinity). However, it is proved that the standard method consisting in choosing a maximal program size depending on the number of examples might still result in programs of infinitely increasing size with their accuracy; a fitness biased by parsimony pressure is proposed. This fitness avoids unnecessary bloat while nevertheless preserving Universal Consistency.
Introduction
Universal Consistency denotes the convergence of the error rate, in expectation on the unknown distribution of examples, to the optimal one. Although it is a fundamental element of learning, it has not yet been widely studied in Genetic Programming (GP). Its restricted version, consistency, i.e. convergence to the optimum when the optimum lies in the search space, has hardly been studied more. Code bloat (or code growth) denotes the growth of program size during the course of Genetic Programming runs. It has been identified as a key problem in GP from the very beginning (Koza), and applies to any learning algorithm based on variable-length representations (Langdon). It is today a well-studied phenomenon, and empirical solutions have been proposed to address the issues of code bloat (see Section 2). However, very few theoretical studies have addressed the issue of bloat. The purpose of this paper is to provide some theoretical insights into the bloat phenomenon and its link with universal consistency, in the context of symbolic regression by GP, from the Statistical Learning Theory viewpoint (Vapnik). Statistical Learning Theory is a recent, yet mature, area of Machine Learning that provides efficient theoretical tools to analyse aspects of learning accuracy and algorithm complexity. Our goal is both to perform an in-depth analysis of bloat and to provide appropriate solutions to avoid it. The paper is organized as follows: in the section below, we briefly survey some explanations for code bloat that have been proposed in the literature, and provide an informal description of our results from a GP perspective before discussing their interest for the GP practitioner.
Section 2 gives a brief overview of the basic results of Learning Theory that will be used in Section 3 to formally prove all the advertised results. Finally, Section 5 discusses the consequences of those theoretical results for GP practitioners and gives some perspectives about this work. The several theories that intend to explain code bloat are:
- the introns theory states that code bloat acts as a protective mechanism in order to avoid the destructive effects of operators once relevant solutions have been found (Nordin; McPhee; Blickle). Introns are pieces of code that have no influence on the fitness: either subprograms that are never executed, or subprograms which have no effect;
- the fitness-causes-bloat theory relies on the assumption that there is a greater probability of finding a bigger program with the same behavior (i.e. semantically equivalent) than of finding a shorter one. Thus, once a good solution is found, programs naturally tend to grow because of fitness pressure (Langdon). This theory states that code bloat is operator-independent and may happen for any algorithm based on a variable-length representation. As a consequence, code bloat is not limited to population-based stochastic algorithms (such as GP), but may extend to many algorithms using variable-length representations (Langdon);
- the removal bias theory states that removing longer subprograms is riskier than removing shorter ones (because of possibly destructive consequences), so there is a natural bias that favors the preservation of longer programs (Soule).
While it is now considered that each of these theories somewhat captures part of the problem (Banzhaf), there has not been any definitive global explanation of the bloat phenomenon. At the same time, no definitive practical solution has been proposed that would avoid the drawbacks of bloat (increasing evaluation time of large trees) while maintaining the good performance of GP on difficult problems. Some common solutions rely either on specific operators (e.g. size-fair crossover (Langdon), or fair mutation (Langdon)), on some parsimony-based penalization of the fitness (Soule), or on abrupt limitation of the program size such as the one originally used by Koza (Koza). Some other, more specific solutions have been proposed but are not widely used yet (Ratle; Silva; Luke).
In this paper, we prove, under some sufficient conditions, that the solution given by GP actually converges, when the number of examples goes to infinity, toward the actual function used to generate the examples. This property is known in Statistical Learning as Universal Consistency. Note that this notion is slightly different from that of Universal Approximation, which people usually refer to when doing symbolic regression in GP: because polynomials, for instance, are known to be able to approximate any continuous function, GP search using operators {+, *} is also assumed to be able to approximate any continuous function. However, Universal Consistency is concerned with the behavior of the algorithm when the number of examples goes to infinity: being able to find a polynomial that approximates a given function at any arbitrary precision does not imply that any interpolation polynomial built from an arbitrary set of sample points will converge to that given function when the number of points goes to infinity. But going back to bloat, and sticking to the polynomial example, it is also clear that the degree of the interpolation polynomial of a set of examples increases linearly with the number of examples. This leads us to start our bloat analysis by defining two kinds of bloat. On the one hand, we define structural bloat as the code bloat that unavoidably takes place when the optimal solution (i.e. the function that exactly matches all possible examples) can only be approximated, but never exactly reached, by the search space. In such a situation, optimal solutions of increasing accuracy will also exhibit an increasing complexity, as larger and larger code will be generated in order to better approximate the target function. The extreme case of structural bloat has also been demonstrated by Gustafson et al. The authors use some polynomial functions of increasing difficulty, and demonstrate that a precise fit can only be obtained through an increased bloat (see also Daida et al. for related issues about problem difficulty in GP). On the other hand, we define functional bloat as the bloat that takes place when program length keeps on growing even though an optimal solution (of known complexity) does lie in the search space. In order to clarify this point, let us use a simple symbolic regression problem defined as follows: given a set S of examples, the goal is to find a function f (here, a GP tree) that minimizes the Least Square Error (or LSE). If we intend to approximate a polynomial (e.g. 14*x²), we may observe code bloat, since it is possible to find arbitrarily long polynomials that give the exact solution (e.g. 14*x² + 0*x³ + ...). Most of the works cited in Section 1 are in fact concerned with functional bloat, which is the simplest, yet already problematic, kind of bloat.
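A minimal Python sketch of functional bloat under a pure LSE fitness, using the 14*x² example above (the helper names are illustrative):

def lse(f, examples):
    return sum((f(x) - y) ** 2 for x, y in examples) / len(examples)

short = lambda x: 14 * x ** 2
bloated = lambda x: 14 * x ** 2 + 0 * x ** 3 + 0 * x ** 4   # semantically equal

examples = [(x / 10, 14 * (x / 10) ** 2) for x in range(20)]
assert lse(short, examples) == lse(bloated, examples) == 0.0
# The LSE alone cannot distinguish the two programs, whatever their sizes.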
Overview of results. In Section 3, we shall investigate the Universal Consistency of Genetic Programming and study in detail the structural and functional bloat that might take place when searching program spaces using GP. A formal and detailed definition of the program space in GP is given in Lemma 1, Section 3, and two types of results will then be derived: i) Universal Consistency results, i.e. does the probability of misclassification of the solution given by GP converge to the optimal probability of misclassification when the number of examples goes to infinity? ii) Bloat-related results, first regarding structural bloat, and second with respect to functional bloat in front of various types of fitness penalization and/or bounds on the complexity of the programs. Let us now state precisely, yet informally, our main results. First, as already mentioned, we will precisely define the set of programs under examination, and prove that such a search space fulfills the conditions of the standard theorems of Statistical Learning Theory listed in Section 2. Second, applying those theorems will immediately lead to a first Universal Consistency result for GP, provided that some penalization for complexity is added to the fitness (Theorem 3). Third, the first bloat-related result, Proposition 4, unsurprisingly proves that if no optimal function belongs to the search space, then converging to the optimal error implies an infinite increase of bloat. Fourth, Theorem 5 is also a negative result about bloat, as it proves that even if the optimal function belongs to the search space, minimizing the LSE alone might lead to bloat (i.e. the complexity of the empirical solutions goes to infinity with the sample size). Finally, the last two theorems (5' and 6) are the best positive results one could expect considering the previous findings: it is possible to carefully adjust the parsimony pressure so as to obtain both Universal Consistency and bounds on the complexity of the empirical solution (i.e. no bloat). Section 4 discusses some properties of alternate solutions for complexity penalization: cross-validation or hold-out, with various pairings of data sets. Note that, though all proofs in Section 3 will be stated and proved in the context of classification (i.e. finding a function from R^d into {0,1}), their generalization to regression (i.e. finding a function from R^d into R) is straightforward.
Discussion
The first limit of our work is the fact that all these results consider that GP finds a program which is empirically the best, in the sense that, given a set of examples and a fitness function based on the Least Square Error (and possibly including some parsimony penalization), it is assumed that GP does find one program in the search space that minimizes this fitness; it is the behavior of this ideal solution, which is a random function of the number of examples, that is theoretically studied. Of course, we all know that GP is not such an ideal search procedure, and hence such results might look rather far away from GP practice, where the user desperately tries to find a program that gives a reasonably low empirical approximation error. Nevertheless, Universal Consistency is vital for the practitioner too: indeed, it would be totally pointless to fight to approximate an empirically optimal function without any guarantee that this empirical optimum is anywhere close to the ideal optimal solution we are in fact looking for. Furthermore, the bloat-related results give some useful hints about the type of parsimony that has a chance to efficiently fight the unwanted bloat, while maintaining the Universal Consistency property, though some actual experiments will have to be run to confirm the usefulness of those theoretical hints.
Elements of Learning Theory
In the frameworks of regression and classification, Statistical Learning Theory (Vapnik) is concerned with giving some bounds on the generalization error (i.e. the error on yet unseen data points) in terms of the actual empirical error (the LSE error above) and some fixed quantity depending only on the search space. More precisely, we will use here the notion of Vapnik-Chervonenkis dimension (in short, VC-dimension) of a space of functions. Roughly, the VC-dimension provides bounds on the difference between the empirical error and the generalization error.
Consider a set of s examples (x_i, y_i), i ∈ {1, ..., s}. These examples are drawn from a distribution P on the couple (X, Y). They are independent identically distributed, Y = {0, 1} (classification problem), and typically X = R^d for some dimension d. For any function f, define the loss L(f) to be the expectation of |f(X) − Y|. Similarly, define the empirical loss L̂(f) as the loss observed on the examples: L̂(f) = (1/s) Σ_{i=1..s} |f(x_i) − y_i|.
Theorem A (Th. 12.8, p. 206 in [5]): Consider F a family of functions from a domain X to {0, 1} and V its VC-dimension. Then, for any ε > 0,
P( sup_{P∈F} |L(P) − L̂(P)| ≥ ε ) ≤ 4 exp(4ε + 4ε²) s^{2V} exp(−2sε²),
and for any δ ∈ ]0, 1],
P( sup_{P∈F} |L(P) − L̂(P)| ≥ ε(s, V, δ) ) ≤ δ, where ε(s, V, δ) = sqrt( (4 − log(δ/(4 s^{2V}))) / (2s − 4) ).
Interpretation: in a family of finite VC-dimension, the empirical errors and the generalization errors are probably closely related. Other forms of this theorem have no log(n) factor; they are known as Alexander's bound, but the constant is so large that this result is not better than the result above unless s is huge ([5, p. 207]): if s ≥ 64/ε²,
P( sup_{P∈F} |L(P) − L̂(P)| ≥ ε ) ≤ 16 (√s ε)^{4096V} exp(−2sε²).
We classically derive the following result from Theorem A.
Theorem A': Consider F_s, for s ≥ 0, a family of functions from a domain X to {0, 1} and V_s its VC-dimension. Then, sup_{P∈F_s} |L(P) − L̂(P)| → 0 as s → ∞ almost surely whenever V_s = o(s/log(s)).
Interpretation: the maximal difference between the empirical error and the generalization error goes almost surely to 0 if the VC-dimension is finite.
Proof: We use the classical Borel-Cantelli lemma¹: for any ε ∈ [0, 1],
Σ_{s ≥ 64/ε²} P( |L(P) − L̂(P)| > ε ) ≤ 16 Σ_{s ≥ 64/ε²} (√s ε)^{4096 V_s} exp(−2sε²) ≤ 16 Σ_{s ≥ 64/ε²} exp( 4096 V_s (log(√s) + log(ε)) − 2sε² ),
which is finite as soon as V_s = o(s/log(s)).
Theorem B (Th. 18.2, p. 290 in [5]): Let F₁, ..., F_k, ... with finite VC-dimensions V₁, ..., V_k, ... Let F = ∪_n F_n. Then, being given s examples, consider P̂ ∈ F_s minimizing the empirical risk L̂ among F_s. Then, if V_s = o(s/log(s)) and V_s → ∞,
P( L(P̂) ≤ L̂(P̂) + ε(s, V_s, δ) ) ≥ 1 − δ,
P( L(P̂) ≤ inf_{P∈F_s} L(P) + 2ε(s, V_s, δ) ) ≥ 1 − δ,
and L(P̂) → inf_{P∈F} L(P) a.s.
Note that for a well-chosen family of functions (typically, programs), inf_{P∈F} L(P) = L* for any distribution; so Theorem B leads to universal consistency (i.e., for any distribution, L(P̂) → L*) for a well-chosen family of functions.
¹ If Σ_n P(X_n > ε) is finite for any ε > 0 and X_n > 0, then X_n → 0 almost surely.
Interpretation: if the VC-dimension increases slowly enough as a function of the number of examples, then the generalization error goes to the optimal one. If the family of functions is well chosen, this slow increase of VC-dimension leads to universal consistency.
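For illustration, the confidence radius ε(s, V, δ) of Theorem A can be computed directly; this sketch assumes the square-root form reconstructed above.

import math

def epsilon(s, V, delta):
    # ε(s, V, δ) = sqrt((4 − log(δ/(4·s^{2V}))) / (2s − 4)), Theorem A
    return math.sqrt((4 - math.log(delta / (4 * s ** (2 * V)))) / (2 * s - 4))

# The radius shrinks with the sample size s and grows with the VC-dimension V:
print(epsilon(10_000, 10, 0.05))      # ≈ 0.10
print(epsilon(1_000_000, 10, 0.05))   # ≈ 0.012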
In the following theorem, we use d′, t′, q′ instead of d, t, q for the sake of notation in a corollary below.
Theorem C (8.14 and 8.4 in Anthony and Bartlett, Neural Network Learning: Theoretical Foundations): Let H = {x → h(a, x); a ∈ R^{d′}} where h can be computed with at most t′ operations among: α → exp(α); +, −, ×, /; jumps conditioned on >, ≥, =, ≤, =; output 0; output 1. Then:
VCdim(H) ≤ t′² d′ (d′ + 19 log₂(9d′)).
Furthermore, if exp(.) is used at most q′ times, and if there are at most t′ operations executed among arithmetic operators, conditional jumps, and exponentials,
π(H, m) ≤ 2^{(d′(q′+1))²/2} (9 d′ (q′+1) 2^{t′})^{5 d′ (q′+1)} (e m (2^{t′} − 2)/d′)^{d′},
where π(H, m) is the m-th shattering coefficient of H, and hence
VCdim(H) ≤ (d′(q′+1))² + 11 d′ (q′+1) (t′ + log₂(9 d′ (q′+1))).
Finally, if q′ = 0, then VCdim(H) ≤ 4 d′ (t′ + 2).
Theorem D: Let F₁, ..., F_k, ... with finite VC-dimensions V₁, ..., V_k, ... Let F = ∪_n F_n. Assume that all distributions lead to L_F = L*, where L* is the optimal possible error (spaces of functions ensuring this exist). Then, given s examples, consider f̂ ∈ F minimizing L̂(f) + sqrt( (32/s) V(f) log(e·s) ), where V(f) is V_k with k minimal such that f ∈ F_k. Then:
- if additionally one optimal function belongs to F_k, then for any s and ε such that V_k log(e·s) ≤ sε²/512, the generalization error is greater than L* + ε with probability at most Δ exp(−sε²/128) + 8 s^{V_k} exp(−sε²/512), where Δ = Σ_{j=1}^{∞} exp(−V_j) is assumed finite;
- the generalization error, with probability 1, converges to L*.
Interpretation: the optimization of a compromise between empirical accuracy and regularization leads to the same properties as in Theorem B, plus a stronger convergence-rate property.
Results
This section presents in detail the results surveyed above. They make intensive use of the results of Statistical Learning Theory presented in the previous section. More precisely, Lemma 1 defines precisely the space of programs considered here, and carefully shows that it satisfies the hypotheses of Theorems A-C. This allows us to evaluate the VC-dimension of sets of programs, stated in Theorem 2. Then, the announced results are derived. Finally, we propose a new approach combining an a priori limit on VC-dimension (i.e. size limit) and a complexity penalization (i.e. parsimony pressure), and state in Theorem 6 that this leads to both universal consistency and convergence to an optimal complexity of the program (i.e. no bloat).
We first prove the following Lemma 1: Let F be the set of functions which can be computed with at most t operations among:
• operations α → exp(α) (at most q times);
• operations +, −, ×, /;
• jumps conditioned on >, ≥, =, ≤, =; and
• output 0;
• output 1;
• labels for jumps;
• at most m constants;
• at most z variables;
by a program with at most n lines. We denote by log₂(x) the integer part (ceil) of log(x)/log(2). Then F is included in H as defined in Theorem C, for a given program P, with t′ = t + t·max(3 + log₂(n) + log₂(z), 7 + 3 log₂(z)) + n(11 + max(9 log₂(z), 0) + max(3 log₂(z) − 3, 0)), q′ = q, d′ = 1 + m.
Interpretation: this lemma states that a family of programs as defined above is included in the parametrizations of one well-chosen program. This replaces a family of programs by one parametric program, and it will be useful for the computation of the VC-dimension of a family of programs via Theorem C.
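A compact sketch of the penalized selection of Theorem D, instantiating V(f) with the q = 0 bound of Lemma 1/Theorem C; `candidates` stands for whatever programs the search produces, and all helper names are ours, not the paper's.

import math

def log2c(x):
    # integer part (ceil) of log(x)/log(2), as in Lemma 1
    return max(0, math.ceil(math.log(x, 2))) if x > 1 else 0

def vc_bound(t, n, z, m):
    # VC-dimension bound for programs without exponentials (q = 0)
    t_prime = (t + t * max(3 + log2c(n) + log2c(z), 7 + 3 * log2c(z))
               + n * (11 + max(9 * log2c(z), 0) + max(3 * log2c(z) - 3, 0)))
    d_prime = 1 + m
    return 4 * d_prime * (t_prime + 2)

def penalized_fitness(f, examples):
    # L̂(f) + sqrt((32/s)·V(f)·log(e·s)), as in Theorem D
    s = len(examples)
    emp = sum(abs(f(x) - y) for x, y in examples) / s
    return emp + math.sqrt(32 / s * f.vc * math.log(math.e * s))

# `f.vc` is assumed to store vc_bound(...) for the program f; selection then is:
# best = min(candidates, key=lambda f: penalized_fitness(f, examples))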
Results This section presents in detail the results surveyed above. They make an intensive use of the results of Statistical Learning Theory presented in the previous section. More precisely, Lemma 1 defines precisely the space of programs considered here and carefully shows that it satisfies the hypotheses of Theorems A-C. This allows us to evaluate the VC-dimension of sets of programs, stated in Theorem 2. Then, the announced results are derived. Finally, we propose a new approach combining an a priori limit on VC-dimension (i.e. size limit) and a complexity penalization (i.e. parsimony pressure), and state in Theorem 6 that this leads to both universal consistency and convergence to an optimal complexity of the program (i.e. no bloat). We first prove the following Lemma 1 : Let F be the set of functions which can be computed with at most t operations among: • operations α → exp(α) (at most q times); • operations +, −, ×, /; • jumps conditioned on >, ≥, =, ≤, ≠; • output 0; • output 1; • labels for jumps; • at most m constants; • at most z variables; by a program with at most n lines. We note log₂(x) the integer part (ceil) of log(x)/log(2). Then F is included in H as defined in Theorem C, for a well-chosen h, with t′ = t + t max(3 + log₂(n) + log₂(z), 7 + 3log₂(z)) + n(11 + max(9log₂(z), 0) + max(3log₂(z) − 3, 0)), q′ = q, d′ = 1 + m. Interpretation : This lemma states that a family of programs as defined above is included in the parametrizations of one well-chosen program. This replaces a family of programs by one parametric program, and it will be useful for the computation of the VC-dimension of a family of programs by Theorem C. Proof : In order to prove this result, we define below a program as in the theorem above that can emulate any of these programs, with at most t′ = t + t max(3 + log₂(n) + log₂(z), 7 + 3log₂(z)) + n(11 + max(9log₂(z), 0) + max(3log₂(z) − 3, 0)), q′ = q, d′ = 1 + m. The program is as follows: • label "inputs" • initialize variable(1) at value x(1) • initialize variable(2) at value x(2) • ... • initialize variable(dim(x)) at value x(dim(x)) • label "constants" • initialize variable(dim(x)+1) at value a_1 • initialize variable(dim(x)+2) at value a_2 • ... • initialize variable(dim(x)+m) at value a_m • label "Decode the program into c" • operation decode c • label "Line 1" • operation c(1,1) with variables c(1,2) and c(1,3) and c(1,4) • label "Line 2" • operation c(2,1) with variables c(2,2) and c(2,3) and c(2,4) • ... • label "Line n" • operation c(n,1) with variables c(n,2) and c(n,3) and c(n,4) • label "output 0" • output 0 • label "output 1" • output 1 "Operation decode c" can be developed as follows. Indeed, we need m real numbers for the parameters, and 4n integers c(., .), which we encode as only one real number in [0, 1] as follows: 1. let y ∈ [0, 1]; 2. for each i ∈ [1, ..., n] : • c(i,1) = 0 • y = y × 2 • if (y > 1) then { c(i,1) = 1; y = y − 1 } • y = y × 2 • if (y > 1) then { c(i,1) = c(i,1) + 2; y = y − 1 } • y = y × 2 • if (y > 1) then { c(i,1) = c(i,1) + 4; y = y − 1 } 3. for each j ∈ [2, 4] and i ∈ [1, ..., n] : • c(i,j) = 0 • y = y × 2 • if (y > 1) then { c(i,j) = 1; y = y − 1 } • y = y × 2 • if (y > 1) then { c(i,j) = c(i,j) + 2; y = y − 1 } • y = y × 2 • if (y > 1) then { c(i,j) = c(i,j) + 4; y = y − 1 } • ... • y = y × 2 • if (y > 1) then { c(i,j) = c(i,j) + 2^{log₂(z)−1}; y = y − 1 } The cost of this is n × (3 + max(3log₂(z), 0)) "if then", n × (3 + max(3log₂(z), 0)) operators ×, n × (2 + max(3(log₂(z) − 1), 0)) operators +, and n × (3 + max(3log₂(z), 0)) operators −. The overall sum is bounded by n(11 + max(9log₂(z), 0) + max(3log₂(z) − 3, 0)). The result then derives from the rewriting of "operation c(i,1) with variables c(i,2) and c(i,3)". This expression can be developed as follows: • if c(i,1) == 0 then goto "output 1" • if c(i,1) == 1 then goto "output 0" • if c(i,2) == 1 then c = variable(1) • if c(i,2) == 2 then c = variable(2) • ... • if c(i,2) == z then c = variable(z) • if c(i,1) == 7 then goto "Line c" (must be encoded by dichotomy with log₂(n) lines) • if c(i,1) == 6 then goto "exponential(i)" • if c(i,3) == 1 then b = variable(1) • if c(i,3) == 2 then b = variable(2) • ... • if c(i,3) == z then b = variable(z) • if c(i,1) == 2 then a = c + b • if c(i,1) == 3 then a = c − b • if c(i,1) == 4 then a = c × b • if c(i,1) == 5 then a = c/b • if c(i,4) == 1 then variable(1) = a • if c(i,4) == 2 then variable(2) = a • ... • if c(i,4) == z then variable(z) = a • label "endOfInstruction(i)" For each such instruction, at the end of the program, we add three lines of the following form: • label "exponential(i)" • a = exp(c) • goto "endOfInstruction(i)" Each sequence of the form "if x = ... then" (p times) can be encoded by dichotomy with log₂(p) tests "if ... then goto". Hence the expected result. Theorem 2 : Let F be the set of programs as in Lemma 1, with q′ ≥ q, t′ ≥ t + t max(3 + log₂(n) + log₂(z), 7 + 3log₂(z)) + n(11 + max(9log₂(z), 0) + max(3log₂(z) − 3, 0)) and d′ ≥ 1 + m. Then VCdim(F) ≤ t′² d′ (d′ + 19 log₂(9d′)) and VCdim(F) ≤ (d′(q′+1))² + 11 d′(q′+1)(t′ + log₂(9d′(q′+1))). If q = 0 (no exponential), then VCdim(F) ≤ 4d′(t′ + 2). Interpretation : Interesting and natural families of programs have finite VC-dimension. Effective methods can associate a VC-dimension to these families of programs. Proof : Just plug Lemma 1 into Theorem C.
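The decoding step in the proof of Lemma 1 above packs the 4n integer opcodes into the binary digits of a single real y ∈ [0, 1]. A compact sketch of this decoding (ours; the proof spells it out with explicit tests and jumps so as to stay within the allowed operations):

```python
def decode(y, n, z):
    """Recover the 4n integer codes c[i][j] of Lemma 1 from the binary digits of a
    single real y in [0, 1]: 3 bits for each opcode c[i][0], then ceil(log2(z)) bits
    for each variable index c[i][1..3], extracted LSB-first as in the proof."""
    bits_z = max((z - 1).bit_length(), 1)  # ceil(log2(z)) for z >= 2
    c = [[0, 0, 0, 0] for _ in range(n)]

    def take(nbits):
        nonlocal y
        v = 0
        for b in range(nbits):
            y *= 2
            if y > 1:            # the proof adds 1, then 2, then 4, ...
                v += 1 << b
                y -= 1
        return v

    for i in range(n):
        c[i][0] = take(3)        # opcode: output / arithmetic / exp / goto
    for j in (1, 2, 3):
        for i in range(n):
            c[i][j] = take(bits_z)  # operand and destination variable indices
    return c
```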
We now consider how to use such results in order to ensure universal consistency. First, we show why simple empirical minimization (consisting in choosing one function such that L̂ is minimal) does not ensure consistency. Precisely, we state that, for some distribution of examples and some i.i.d. sequence of examples (x_1, y_1), ..., (x_n, y_n), there exist P_1, ..., P_n, ... such that ∀i ∈ [[1, n]], P_n(x_i) = y_i, and however ∀n ∈ N, P(P_n(x) = y) = 0. The proof is as follows. Consider the distribution with x uniformly drawn in [0, 1] and y constant equal to 1. Consider P_n the program that compares its entry to x_1, x_2, ..., x_n, and outputs 1 if the entry is equal to x_j for some j ≤ n, and 0 otherwise. With probability 1, this program outputs 0, whereas y = 1 with probability 1. We therefore conclude that minimizing the empirical risk is not enough to ensure any satisfactory form of consistency. Let us now show that structural risk minimization, i.e. taking into account a penalization for complex structures, can do the job, i.e. ensure universal consistency, and fast convergence when the solution can be written within finite complexity. Theorem 3 : Consider q_f, t_f, m_f, n_f and z_f integer sequences, non-decreasing functions of f. Define V_f = VCdim(H_f), where H_f is the set of programs with at most t_f lines executed, z_f variables, n_f lines, q_f exponentials and m_f constants. Then, with q′_f = q_f, t′_f = t_f + t_f max(3 + log₂(n_f) + log₂(z_f), 7 + 3log₂(z_f)) + n_f(11 + max(9log₂(z_f), 0) + max(3log₂(z_f) − 3, 0)) and d′_f = 1 + m_f, V_f = (d′_f(q′_f+1))² + 11 d′_f(q′_f+1)(t′_f + log₂(9d′_f(q′_f+1))), or, if ∀f, q_f = 0, define V_f = 4d′_f(t′_f + 2). Then, being given s examples, consider f̂ ∈ F minimizing L̂(f) + √(32 V(f) log(e·s)/s), where V(f) is V_k with k minimal such that f ∈ F_k. Then, if Δ = Σ_{j=1}^{∞} exp(−V_j) is finite: - the generalization error, with probability 1, converges to L*; - if one optimal rule belongs to F_k, then for any s and ε such that V_k log(e·s) ≤ sε²/512, the generalization error exceeds L* + ε with probability at most Δ exp(−sε²/128) + 8 s^{V_k} exp(−sε²/512). Interpretation : Genetic programming for bi-class classification, provided that structural risk minimization is performed, is universally consistent and verifies some convergence rate properties. Proof : Just plug Theorem D into Theorem 2.
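For concreteness, here is a minimal sketch (ours) of the selection rule of Theorem 3: among candidate programs, minimize the penalized rather than the raw empirical error. `candidates` is a hypothetical list of (program, V) pairs, V being the VC-dimension bound of the smallest class F_k containing the program:

```python
import math

def structural_risk(emp_loss, V, s):
    """Penalized risk of Theorem 3: L_hat(f) + sqrt(32 * V(f) * log(e * s) / s)."""
    return emp_loss + math.sqrt(32.0 * V * math.log(math.e * s) / s)

def select(candidates, examples):
    """Pick, among (program, V) pairs, the one minimizing the structural risk."""
    s = len(examples)
    def emp(f):
        return sum(abs(f(x) - y) for x, y in examples) / s
    return min(candidates, key=lambda fv: structural_risk(emp(fv[0]), fv[1], s))
```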
We now prove the non-surprising fact that if it is possible to approximate the optimal function (the Bayes classifier) without reaching it exactly, then the "complexity" of the program runs to infinity as soon as there is convergence of the generalization error to the optimal one. Proposition 4 : Consider P_s a sequence of functions such that P_s ∈ F_{V(s)}, with F_1 ⊂ F_2 ⊂ F_3 ⊂ ..., where F_V is a set of functions from X to {0, 1} with VC-dimension bounded by V. Define L_V = inf_{P ∈ F_V} L(P) and V(P) = inf{V; P ∈ F_V}, and suppose that ∀V, L_V > L*. Then (L(P_s) → L* as s → ∞) implies (V(P_s) → ∞ as s → ∞). Interpretation : This is structural bloat: if your space of programs approximates but does not contain the optimal function, then bloat occurs. Proof : Define ε(V) = L_V − L*. By assumption, ∀V, ε(V) > 0; ε is necessarily non-increasing. Consider V_0 a positive integer; let us prove that if s is large enough, then V(P_s) ≥ V_0. There exists ε_0 such that ε(V_0) > ε_0 > 0. For s large enough, L(P_s) ≤ L* + ε_0, hence L_{V(P_s)} ≤ L* + ε_0, hence L* + ε(V(P_s)) ≤ L* + ε_0, hence ε(V(P_s)) ≤ ε_0, hence V(P_s) > V_0. We now show that the usual procedure defined below, consisting in defining a maximum VC-dimension depending upon the sample size (as usually done in practice and as recommended by Theorem B) and then using a moderate family of functions, leads to bloat. With the same hypotheses as in Theorem B, we can state: Theorem 5 (bloat theorem for empirical risk minimization with relevant VC-dimension) : Let F_1, ..., F_k, ... be non-empty sets of functions with finite VC-dimensions V_1, ..., V_k, ... Let F = ∪_n F_n. Then, given s examples, consider P̂ ∈ F_s minimizing the empirical risk L̂ in F_s. From Theorem B we already know that if V_s = o(s/log(s)) and V_s → ∞, then P(L(P̂) ≤ L̂(P̂) + ε(s, V_s, δ)) ≥ 1 − δ and L(P̂) → inf_{P ∈ F} L(P) a.s. We now state that if V_s → ∞ and, noting V(f) = min{V_k; f ∈ F_k}, then ∀V_0, P_0 > 0, there exists a distribution of probability P on X and Y such that ∃g ∈ F_1 with L(g) = L* and, for s sufficiently large, P(V(P̂) ≤ V_0) ≤ P_0. Interpretation : The result in particular implies that for any V_0 there is a distribution of examples such that ∃g with V(g) = V_1 and L(g) = L*, yet, with probability 1, V(P̂) ≥ V_0 infinitely often as s increases. This shows that bloat can occur if we use only an abrupt limit on code size, even if this limit depends upon the number of examples (a fortiori if there is no limit). Proof (of the part which is not Theorem B; see Fig. 1 for an illustration) : Consider V_0 > 0 and P_0 > 0. Consider α such that (eα/2^α)^{V_0} ≤ P_0/2. Consider s such that V_s ≥ αV_0. Let d = αV_0. Consider x_1, ..., x_d, d points shattered by F_d; such a family of d points exists, by definition of F_d. Define the probability measure P by the fact that X and Y are independent, P(Y = 1) = 1/2 and P(X = x_i) = 1/d. Then, the following holds, with Q̂ the empirical distribution (the average of Dirac masses on the x_i's): 1. no empty x_i's : P(E_1) → 0 as s → ∞, where E_1 is the fact that ∃i; Q̂(X = x_i) = 0; 2. no equality : P(E_2) → 0 as s → ∞, where E_2 is the fact that E_1 occurs or ∃i; Q̂(Y = 1|X = x_i) = 1/2; 3. the best function is not in F_{V_0} : P(E_3 | E_2 does not hold) ≤ S(d, d/α)/2^d, where E_3 is the fact that ∃g ∈ F_{d/α = V_0}; L̂(g) = inf_{F_d} L̂, with S(d, d/α) the relevant shattering coefficient, i.e. the cardinal of F_{d/α} restricted to {x_1, ..., x_d}. We now only have to use classical results. It is well known in VC-theory that S(a, b) ≤ (ea/b)^b (see for example [5, chap. 13]), hence S(d, d/α) ≤ (ed/(d/α))^{d/α} and P(E_3 | E_2 does not hold) ≤ (eα)^{d/α}/2^d ≤ P_0/2. If s is sufficiently large to ensure that P(E_2) ≤ P_0/2 (we have proved above that P(E_2) → 0 as s → ∞), then P(E_3) ≤ P(E_3|¬E_2) × P(¬E_2) + P(E_2) ≤ P(E_3|¬E_2) + P(E_2) ≤ P_0/2 + P_0/2 ≤ P_0. Fig. 1. Illustration of the proof. With a larger k, F_k has a smaller best error. We now show that, on the other hand, it is possible to optimize a compromise between optimality and complexity in an explicit manner (e.g., replacing 1% precision with 10 lines of program or 10 minutes of CPU): Theorem 5' (bloat-control theorem for regularized empirical risk minimization with relevant VC-dimension) : Let F_1, ..., F_k, ... be non-empty sets of functions with finite VC-dimensions V_1, ..., V_k, ... Let F = ∪_n F_n. Consider W a user-defined complexity penalization term. Then, being given s examples, consider P̂ ∈ F_s minimizing the regularized empirical risk L̂(P) + W(P) among F_s. If V_s = o(s/log(s)) and V_s → ∞, then L̃(P̂) → inf_{P ∈ F} L̃(P) a.s., where L̃(P) = L(P) + W(P). Interpretation : Theorem 5' shows that using a relevant a priori bound on the complexity of the program, and adding a user-defined complexity penalization to the fitness, can lead to convergence toward a user-defined compromise ([START_REF] Zhang | Balancing accuracy and parsimony in genetic programming[END_REF][START_REF] Zhang | Evolutionary induction of sparse neural trees[END_REF]) between classification rate and program complexity (i.e. we ensure almost sure convergence to a compromise of the form "λ₁ CPU time + λ₂ misclassification rate + λ₃ number of lines", where the λ_i are user-defined). Remark : the drawback of this approach is that we have lost universal consistency and consistency (in the general case, the misclassification rate in generalization will not converge to the Bayes error, and whenever an optimal program exists, we will not necessarily converge to its efficiency). Proof (see Fig. 2 for an illustration) : sup_{P ∈ F_s} |(L̂(P) + W(P)) − (L(P) + W(P))| = sup_{P ∈ F_s} |L̂(P) − L(P)| ≤ ε(s, V_s) → 0 almost surely, by Theorem A'. Hence the expected result. Fig. 2. Illustration of the proof. With a larger k, F_k has a smaller best error, but the penalization is stronger than the difference of error.
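A minimal sketch (ours) of the user-defined compromise of Theorem 5': the penalty W(P) below mixes CPU time and program length with user-chosen weights; all names and weight values are illustrative, not prescribed by the theorem:

```python
def compromise_fitness(emp_error, cpu_seconds, n_lines,
                       lam1=0.01, lam2=1.0, lam3=0.001):
    """Regularized empirical risk of Theorem 5': L_hat(P) + W(P), with
    W(P) = lam1 * CPU time + lam3 * number of lines; to be minimized within
    F_s, whose VC-dimension V_s must satisfy V_s = o(s / log(s))."""
    return lam2 * emp_error + lam1 * cpu_seconds + lam3 * n_lines
```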
We now turn our attention to a more complicated case where we want to ensure universal consistency, but we want to avoid unnecessary bloat; e.g., we require that if an optimal program exists in our family of functions, then we want to converge to its error rate without increasing the complexity of the program. We consider a merge between regularization and bounding of the VC-dimension; we penalize the complexity (e.g., length) of programs by a penalty term R(s, P) = R(s)R′(P) depending upon the sample size and upon the program; R(., .) is user-defined and the algorithm will look for a classifier with a small value of both R′ and L̂. We study both the universal consistency of this algorithm (i.e. L → L*) and the no-bloat theorem (i.e. R′ → R′(P*) when P* exists). Theorem 6 : Let F_1, ..., F_k, ... with finite VC-dimensions V_1, ..., V_k, ... Let F = ∪_n F_n. Define V(P) = V_k with k = inf{t | P ∈ F_t}. Define L_V = inf_{P ∈ F_V} L(P). Consider V_s = o(log(s)) and V_s → ∞. Consider P̂ minimizing L̂(P) + R(s, P) in F_{V_s}, and assume that R(s, .) ≥ 0. Then (consistency), whenever sup_{P ∈ F_{V_s}} R(s, P) = o(1), L(P̂) → inf_{P ∈ F} L(P) almost surely (note that, for a well-chosen family of functions, inf_{P ∈ F} L(P) = L*). Moreover, assume that ∃P* ∈ F_{V*} with L(P*) = L*. Then, with R(s, P) = R(s)R′(P) and R′(s) = sup_{P ∈ F_{V_s}} R′(P): 1. non-asymptotic no-bloat theorem : R′(P̂) ≤ R′(P*) + (1/R(s)) 2ε(s, V_s, δ) with probability at least 1 − δ (this result is in particular interesting when ε(s, V_s, δ)/R(s) → 0, which is possible for usual regularization terms, as in Theorem D); 2. almost-sure no-bloat theorem : if R(s)s^{(1−α)/2} = O(1), then almost surely R′(P̂) → R′(P*), and if R′(P) has discrete values (such as the number of instructions in P, or many complexity measures for programs), then for s sufficiently large, R′(P̂) = R′(P*); 3. convergence rate : with probability at least 1 − δ, L(P̂) ≤ inf_{P ∈ F_{V_s}} L(P) + R(s)R′(s) + 2ε(s, V_s, δ), where R(s)R′(s) = o(1) by hypothesis and ε(s, V, δ) = √((4 − log(δ/(4s^{2V})))/(2s − 4)) is an upper bound on ε(s, V) = sup_{f ∈ F_V} |L̂(f) − L(f)| (given by Theorem A), true with probability at least 1 − δ. Interpretation : Combining a code limitation and a penalization leads to universal consistency without bloat. Remarks : The usual R(s, P), as used in Theorem D or Theorem 3, provides consistency and the non-asymptotic no-bloat result. A stronger regularization leads to the same results, plus almost-sure no-bloat. The asymptotic convergence rate depends upon the regularization. The result is not limited to genetic programming and could be used in other areas. As shown in Proposition 4, the no-bloat results require the fact that ∃V*, ∃P* ∈ F_{V*}, L(P*) = L*. Interestingly, the convergence rate is reduced when the regularization is increased in order to get the almost-sure no-bloat theorem.
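Before turning to the proof, here is a sketch (ours) of the fitness suggested by Theorem 6: a hard cap V_s on the admissible VC-dimension plus the penalty R(s)·R′(P). The concrete choices of V_s and R below are illustrative; the theorem only fixes their asymptotic behavior:

```python
import math

def regularized_fitness(emp_loss, r_prime, s, lam=1.0, alpha=0.5):
    """Theorem 6 fitness L_hat(P) + R(s) * R'(P), with R(s) = lam * s**(-(1 - alpha)/2),
    one choice compatible with the almost-sure no-bloat regime; R'(P) is any
    complexity measure of the program, e.g. its number of instructions."""
    return emp_loss + lam * s ** (-(1.0 - alpha) / 2.0) * r_prime

def within_cap(V, s, c=1.0):
    """Hard cap of Theorem 6: search only classes of VC-dimension V <= V_s,
    with V_s = c * sqrt(log(s)) as one possible choice satisfying V_s = o(log(s))."""
    return V <= c * math.sqrt(math.log(s))
```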
Proof : Define ε(s, V) = sup_{f ∈ F_V} |L̂(f) − L(f)|. Let us prove the consistency. For any P, L̂(P̂) + R(s, P̂) ≤ L̂(P) + R(s, P). On the other hand, L(P̂) ≤ L̂(P̂) + ε(s, V_s). So: L(P̂) ≤ (inf_{P ∈ F_{V_s}} (L̂(P) + R(s, P))) − R(s, P̂) + ε(s, V_s) ≤ (inf_{P ∈ F_{V_s}} (L(P) + ε(s, V_s) + R(s, P))) − R(s, P̂) + ε(s, V_s) ≤ (inf_{P ∈ F_{V_s}} (L(P) + R(s, P))) + 2ε(s, V_s). As ε(s, V_s) → 0 almost surely (see Theorem A') and (inf_{P ∈ F_{V_s}} (L(P) + R(s, P))) → inf_{P ∈ F} L(P), we conclude that L(P̂) → inf_{P ∈ F} L(P) a.s. We now focus on the proof of the "no-bloat" result. By definition of the algorithm, for s sufficiently large to ensure P* ∈ F_{V_s}, L̂(P̂) + R(s, P̂) ≤ L̂(P*) + R(s, P*), hence with probability at least 1 − δ, R′(P̂) ≤ R′(P*) + (1/R(s))(L* + ε(s, V_s, δ) − L(P̂) + ε(s, V_s, δ)), hence R′(P̂) ≤ R′(P*) + (1/R(s))(L* − L(P̂) + 2ε(s, V_s, δ)). As L* ≤ L(P̂), this leads to the non-asymptotic version of the no-bloat theorem. The almost-sure no-bloat theorem is derived as follows: R′(P̂) ≤ R′(P*) + (1/R(s))(L* + ε(s, V_s) − L(P̂) + ε(s, V_s)), hence R′(P̂) ≤ R′(P*) + (1/R(s))(L* − L(P̂) + 2ε(s, V_s)) ≤ R′(P*) + (1/R(s)) 2ε(s, V_s). All we need is the fact that ε(s, V_s)/R(s) → 0 a.s. For any ε > 0, we consider the probability of ε(s, V_s)/R(s) > ε, and we sum over s > 0. By the Borel-Cantelli lemma, the finiteness of this sum is sufficient for the almost-sure convergence to 0. The probability of ε(s, V_s)/R(s) > ε is the probability of ε(s, V_s) > εR(s). By Theorem A, this is bounded above by O(exp(2V_s log(s) − 2sε²R(s)²)). This has a finite sum for R(s) = Ω(s^{−(1−α)/2}). Let us now consider the convergence rate. Consider s sufficiently large to ensure L_{V_s} = L*. As shown above during the proof of the consistency, L(P̂) ≤ (inf_{P ∈ F_{V_s}} (L(P) + R(s, P))) + 2ε(s, V_s) ≤ (inf_{P ∈ F_{V_s}} (L(P) + R(s)R′(P))) + 2ε(s, V_s) ≤ inf_{P ∈ F_{V_s}} L(P) + R(s)R′(s) + 2ε(s, V_s), so with probability at least 1 − δ, L(P̂) ≤ inf_{P ∈ F_{V_s}} L(P) + R(s)R′(s) + 2ε(s, V_s, δ). Extensions We have studied above: - the method consisting in minimizing the empirical error, i.e. the error observed on the examples (leading to bloat (this is an a fortiori consequence of Theorem 5) without universal consistency (see the remark before Theorem 3)); - the method consisting in minimizing the empirical error with a hard bound on the complexity (leading to universal consistency but bloat, see Theorem 5); - the method, inspired from (but slightly adapted against bloat) structural risk minimization, consisting in minimizing a compromise between the empirical error and a complexity bound including size and computation time (see Theorem 6). We now study the following other cases: - the case in which the level of complexity is chosen through resamplings, i.e. cross-validation or hold-out; - the case in which the complexity penalization does not include any time bound but only size bounds. We mainly conclude that penalization is necessary, cannot be replaced by cross-validation, cannot be replaced by hold-out, and must include time penalization. About the use of cross-validation or hold-out for avoiding bloat and choosing the complexity level Note UC for universal consistency and ERM for empirical risk minimization. We considered above different cases: - evolutionary programming with only an "ERM" fitness; - evolutionary programming with ERM + bound (leading to UC + bloat); - evolutionary programming with ERM + penalization + bound (leading to UC without bloat).
One can now consider some other cases: - hold-out in order to choose between different complexity classes (i.e., in the Pareto front corresponding to the compromise between ERM and complexity, choose the function by hold-out); - idem through cross-validation. This section is devoted to these cases. First, let us consider hold-out for choosing the complexity level. Consider that the function can be chosen in many complexity levels, F_0 ⊂ F_1 ⊂ F_2 ⊂ F_3 ⊂ ..., where F_i ≠ F_{i+1}. Note L̂(f, X) the error rate of the function f on the set X of examples, L̂(f, X) = (1/#X) Σ_i l(f, X_i), where l(f, X_i) = 1 if f fails on the example X_i and 0 otherwise. Define f_k = arg min_{F_k} L̂(., X_k). In hold-out, f̂ = f_{k*}, where k* = arg min_k l_k with l_k = L̂(f_k, Y_k) the error rate of f_k on the validation set Y_k (a small simulation sketch of this selection is given after the theorem below). In all the sequel, we assume that all the X_k's and Y_k's have the same size n. There are different cases: X_k = Y_k and ∀k, X_k = X_0 is the naive case (studied above). The case with hold-out leads to different cases also: - greedy case: all X_k's and Y_k's are independent; - case with pairing: ∀k, X_k = X_0 and Y_k = Y_0, with X_0 and Y_0 independent. Case of greedy hold-out: - consider the case of an output y independent of the input x, with P(y = 1) = 1/2; - k* is therefore a Poisson law with parameter 1/2^n; its expectation is TODO and its standard deviation is TODO; - therefore, almost surely, k* → ∞ as n → ∞. This is shown with one distribution which does not depend upon the number of examples. This happens whereas an optimal function lies in F_0. Case of hold-out with pairing: - consider v such that the VC-dimension of F_v is larger than the VC-dimension of F_{v−1}, with v minimal realizing this condition; - consider A = {a_1, ..., a_V}, a set of points shattered by F_v; - consider a distribution of examples with x uniform on A and y independent of x with P(y = 1) = P(y = 0) = 1/2; - consider P̂_X the empirical distribution associated to X and P̂_Y the empirical distribution associated to Y; - there is at least one function on A which does not belong to F_{v−1}.
- With probability at least (1 − P(E_Y))/2^V, this function is optimal for L̂(., Y_0). - With probability at least (1 − P(E_X))/2^V, f_v is equal to this function. - Combining the two probabilities above, as the events are independent, we see that with probability at least p(v, n) = (1 − P(E_X))(1 − P(E_Y))/4^V, k* ≥ v. - This implies the first result: P(k* ≥ v) does not go to 0, whereas a function in F_0 is optimal. - Now, let us consider that we can change the distribution as n moves. - For n sufficiently large, choose v maximal such that p(v, n) ≥ 1/n and F_v has a VC-dimension larger than the VC-dimension of F_{v−1}. Consider the distribution associated to v as above (uniform on A, a set of shattered points). We have therefore shown, with a distribution dependent on n, that k* → ∞; and, for a distribution that does not depend upon n, that P(k* ≥ v) is lower bounded. In both cases, an optimal function lies in F_0. We now turn our attention to the case of cross-validation. We formalize N-cross-validation as follows: the examples are split into N sets X^1, ..., X^N of equal size, and, for each complexity level k, a candidate is learned on the examples outside X^i and validated on X^i. Greedy cross-validation could be considered as in the case of hold-out above; this leads to the same result (for some distribution, k* → ∞). We therefore only consider cross-validation with pairing: X^i_k = X^i for all k. We only consider cross-validation as a method for computing k*, and not for computing the classifier. We note P̂_i the empirical law associated to X^i. We consider A a set of points shattered by F_v, with |A| = V and A not shattered by F_{v−1}. We consider f ∈ F_v realizing a dichotomy of A that is not realized by F_{v−1}. We define E_i the event {∀a ∈ A; P̂_i(y = f(a)|x = a) > 1/2}. We assume that the distribution of examples is, for x, uniform on A, and, for y, independently of x, uniform on {0, 1}. The probability of E_i converges to a positive limit as n → ∞. We therefore have the following result: Theorem (one cannot avoid bloat with only hold-out or cross-validation) : Consider greedy hold-out, hold-out with pairing and cross-validation with pairing. Then: - for some well-chosen distribution of examples, greedy hold-out almost surely leads to k* → ∞, whereas an optimal function lies in F_0; - whatever may be V = VCdim(F_v), for some well-chosen distribution, hold-out with pairing almost surely leads to k* > V infinitely often, whereas an optimal function lies in F_0; - whatever may be V = VCdim(F_v), for some well-chosen distribution, cross-validation with pairing almost surely leads to k* > V infinitely often, whereas an optimal function lies in F_0.
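To make the hold-out selection of the complexity level concrete, here is a minimal simulation sketch (ours, with a hypothetical toy learner): it implements k* = argmin_k L̂(f_k, Y_0) for the paired case, where `train` stands in for ERM inside F_k:

```python
import random

def holdout_select(train, K, X0, Y0):
    """Hold-out with pairing: fit f_k on the common training set X0 for each
    complexity level k, then return k* minimizing the error on the common
    validation set Y0. `train(k, X)` is a hypothetical ERM routine for F_k."""
    def err(f, data):
        return sum(f(x) != y for x, y in data) / len(data)
    fs = [train(k, X0) for k in range(K)]
    k_star = min(range(K), key=lambda k: err(fs[k], Y0))
    return k_star, fs[k_star]

# Toy stand-in: each F_k here contains one threshold classifier depending on k.
def train(k, X):
    return lambda x: 1 if x < (k + 1) / 10 else 0

# y independent of x with P(y = 1) = 1/2: every classifier has true error 1/2,
# so the validation round simply crowns the luckiest complexity level.
rng = random.Random(0)
X0 = [(rng.random(), rng.randint(0, 1)) for _ in range(50)]
Y0 = [(rng.random(), rng.randint(0, 1)) for _ in range(50)]
print(holdout_select(train, 10, X0, Y0)[0])
```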
Is time-complexity required? Consider any learning algorithm working on a sequence of i.i.d. examples (x_1, y_1), ..., (x_n, y_n) and outputting a program. We formalize as follows the fact that this algorithm does not take into account any form of time-complexity but only the size of programs: if the learning program outputs P, then there is no program P′ with the same length as P that has a better empirical error rate. We show in the sequel that such a learning program cannot verify convergence rates as shown in Theorem 6, i.e. a guaranteed convergence rate in O(1/√n) when an optimal function has bounded complexity. In the sequel, we assume that the reader is familiar with statistical learning theory and shattering properties; the interested reader is referred to [START_REF] Devroye | A probabilistic theory of pattern recognition[END_REF]. The main element is that Theorem C does not hold without bounded time. The following program has bounded length and only one parameter α, but generates, as α ranges over R, a family of functions which shatters an infinite set: - consider x the entry in R and α ∈ R a parameter; - if x ≤ 0 then goto "FINISH"; - label "LOOP"; if x > 1/2 then goto "DECIDE"; - x = 2x; α = 2α; if α ≥ 1 then α = α − 1; goto "LOOP"; - label "DECIDE"; if α ≥ 0.5, output 1 and stop; - label "FINISH"; output 0 and stop. This program shifts α and x to the left until x > 1/2. It then replies 1 if and only if α, after the shift, has its first digit equal to 1. Therefore, this program can realize any dichotomy of {1/2, 1/4, 1/8, ...}. This is exactly the definition of the fact that this set is shattered. So, we have shown that a family of functions shattering an infinite set is included in the set of programs with bounded length. Now, consider a learning program which is claimed to have a guaranteed convergence rate in a family of functions including the family of functions computed by the program above; the following theorem states that no such guarantee is possible. Theorem (fitnesses without time-complexity pressure do not ensure consistency) : Whatever may be the sequence a_1, ..., a_n, ... decreasing to 0, there is no learning program ensuring that, for any distribution of examples such that P(y = f(x)) = 1 for some f with bounded length, the expectation of P(P_n(x) ≠ y) is O(a_n). Conclusion In this paper, we have proposed a theoretical study of two important issues in Genetic Programming known as universal consistency and code bloat. We have shown that GP trees used in symbolic regression (involving the four arithmetic operations, the exponential function and ephemeral constants, as well as test and jump instructions) can benefit from classical results of Statistical Learning Theory (thanks to Theorem C and Lemma 1). This has led to two kinds of original outcomes: i) results about the universal consistency of GP, i.e. almost sure asymptotic convergence to the optimal error rate; ii) results about bloat, covering both the unavoidable structural bloat, in case the ideal target function does not have a finite description, and the functional bloat, for which we prove that it can be avoided by simultaneously bounding the length of the programs with some ad hoc bound and using some parsimony pressure in the fitness function. Some negative results have been obtained too, such as the fact that, though structural bloat was known to be unavoidable, functional bloat might indeed happen even when the target function does lie in the search space but no parsimony pressure is used. Interestingly, all those results (both positive and negative) about bloat are also valid in different contexts, such as for instance that of Neural Networks (the number of neurons replaces the complexity of GP programs). Moreover, the results presented here are not limited to the scope of regression problems, but may be applied to variable-length representation algorithms in different contexts such as control or identification tasks. Finally, going back to the debate about the causes of bloat in practice, it is clear that our results can only partly explain the actual cause of bloat in a real GP run, and tend to give arguments to the "fitness causes bloat" explanation [START_REF] Langdon | Fitness causes bloat: Mutation[END_REF].
It might be possible to study the impact of size-preserving mechanisms (e.g. specific variation operators, like size-fair crossover [START_REF] Langdon | Size fair and homologous tree genetic programming crossovers[END_REF] or fair mutations [START_REF] Langdon | The evolution of size and shape[END_REF]) as somehow contributing to the regularization term in our final result ensuring both universal consistency and no bloat. Acknowledgements. This work was supported in part by the PASCAL Network of Excellence. We thank Bill Langdon for very helpful comments.
48,881
[ "581", "184446", "739309" ]
[ "56056", "56056", "56056", "56056" ]
01746090
en
[ "phys" ]
2024/03/05 22:32:07
2017
https://hal.science/hal-01746090/file/manuscript.pdf
We-Hyo Soe, Yasuhiro Shirai, Corentin Durand, Yusuke Yonamine, Kosuke Minami, Xavier Bouju, Marek Kolmer, Katsuhiko Ariga, Christian Joachim, Waka Nakanishi Conformation Manipulation and Motion of a Double Paddle Molecule on an Au(111) Surface The molecular conformation of a bisbinaphthyldurene (BBD) molecule is manipulated using a low-temperature ultrahigh-vacuum scanning tunneling microscope (LT-UHV STM) on an Au(111) surface. BBD has two binaphthyl groups at both ends connected to a central durene, leading to anti/syn/flat conformers. In solution, dynamic nuclear magnetic resonance indicated a fast interexchange between the anti and syn conformers, as confirmed by density functional theory calculations. After deposition in a submonolayer on an Au(111) surface, only the syn conformers were observed, forming small islands of self-assembled syn dimers. The syn dimers can be separated into syn monomers by STM molecular manipulations. A flat conformer can also be prepared by using a peculiar mechanical unfolding of a syn monomer by STM manipulations. The experimental STM dI/dV and theoretical elastic scattering quantum chemistry maps of the low-lying tunneling resonances confirmed the flat conformer BBD molecule STM production. The key BBD electronic states for a step-by-step STM inelastic excitation lateral motion on the Au(111) are presented, requiring no mechanical interactions between the STM tip apex and the BBD. On the BBD molecular board, selected STM tip apex positions for this inelastic tunneling excitation enable the flat BBD to move controllably on Au(111) by a step of 0.29 nm per bias voltage ramp. With the tip of a scanning tunneling microscope (STM), atomic-scale manipulation protocols have been well known since the pioneering work of D. Eigler, [START_REF] Frisch | Revision B.01[END_REF] and precise studies have described the various mechanisms of single atom (or small molecule) mechanical manipulations. [START_REF] Becke | Density-Functional Exchange-Energy Approximation with Correct Asymptotic Behavior[END_REF][START_REF] Becke | Density-Functional Thermochemistry. III. The Role of Exact Exchange[END_REF] Pushing a single large molecule on a surface with the tip of the STM [START_REF] Lee | Development of the Colle-Salvetti Correlation-Energy Formula into a Functional of the Electron Density[END_REF][START_REF] Ditchfield | Self-Consistent Molecular-Orbital Methods. IX. An Extended Gaussian-Type Basis for Molecular-Orbital Studies of Organic Molecules[END_REF] is now a standard procedure to precisely position functioning molecules on a surface for single molecule mechanics experiments [START_REF] Hehre | Self-Consistent Molecular Orbital Methods. XII. Further Extensions of Gaussian-Type Basis Sets for Use in Molecular Orbital Studies of Organic Molecules[END_REF] and also for single molecule electronic measurements. [START_REF] Hariharan | Accuracy of AH n Equilibrium Geometries by Single Determinant Molecular Orbital Theory[END_REF] To perform an atomically precise lateral manipulation of a molecule on a metallic surface with no mechanical interactions between the STM tip apex and the molecule, the biased tip must be able to feed energy to the molecule with a few picometer lateral precision.
[START_REF] Hariharan | The Influence of Polarization Functions on Molecular Orbital Hydrogenation Energies[END_REF] This excitation can be either inelastic, from the tunneling current itself, or originate from the enhanced electric field located in the biased tip/surface junction when the molecule carries a local dipolar moment. 9 For inelastic tunneling excitations, the energy entry port is generally the low-lying reduced electronic states of the molecule. [START_REF] Hariharan | Accuracy of AH n Equilibrium Geometries by Single Determinant Molecular Orbital Theory[END_REF]10 Here, a precise design of the molecule is required to avoid the energy provided by the tunneling current passing through the molecule being equally distributed among the many mechanical degrees of freedom of this molecule. If not, a conformation change of the molecule may happen, but with no lateral displacement. The molecule can also be broken into small chemical groups by the applied bias voltage pulse 11 instead of moving step-by-step on the supporting surface, generally by steps commensurable with the surface atomic lattice constant. To also avoid energy redistribution toward the supporting surface, different leg and wheel molecular groups were identified early on. They can efficiently maintain a space (van der Waals distances) between the planar molecular chassis and the supporting surface. 12-14 Due, for example, to steric crowding, lateral chemical groups not having the shape of a leg or a wheel, mounted on the chassis in a symmetric way and holding it at van der Waals distances from the surface, are also interesting for molecular design, as presented in this paper. [START_REF] Soe | Mapping the Excited States of Single Hexa-Peri-Benzocoronene Oligomers[END_REF] The light-driven molecular motor of the Feringa group was the first switchable chemical group to be mounted by the Tour group on a chassis equipped with four wheels 16 in an attempt to leave space for this molecular group to change its conformation/configuration using an optical excitation. 17 A similar molecular switch was used by the Feringa group to obtain a molecule with four of those, used as switchable legs under a tunneling inelastic excitation. 18 Other switchable chemical groups are also available for equipping a molecular chassis. For example, molecules carrying a photoisomerizable double bond, such as stilbene, azobenzene, or diarylethene, have been used as molecular switches. 19 Their photoisomerization is usually studied in the gas phase or in solution. Conformation/configuration changes triggered by tunneling electrons have also been observed in STM single molecule experiments, like with azobenzene. [20][21][22] Other molecules are also available which can twist around a single bond by photoirradiation. Twisted intramolecular charge transfer (TICT) molecules provide a nice example of such a light-activated conformation change. 23 Binaphthyl molecules or their derivatives (Scheme 1) belong to another group of photosensitive compounds which are also known to change conformation under UV irradiation. 24,25 In this paper, we present the design and synthesis of a bisbinaphthyldurene (BBD) molecule (Scheme 1) for STM imaging, single molecule manipulation, and step-by-step lateral motions. This molecule is equipped with two binaphthyl paddles mounted laterally on a very simple central phenyl chassis.
On a planar BBD, we demonstrate here how to use the low amplitude vibration modes of its 1,1'-binaphthyl lateral paddles 26,27 for manipulating the BBD along an Au(111) surface using STM inelastic tunneling effects. Not existing in solution, this planar conformation is stabilized by the Au(111) surface. On Au(111), it enters in competition with its native, in-solution nonplanar conformation, which can also be reached by the same excitation on a metallic surface, as presented below. In the initial subsections of the Results and Discussion, the design, the synthesis, and the structural analysis of the BBD molecules in solution are provided, together with a detailed DFT theoretical study of the different possible conformations of a BBD molecule. In the following subsections, STM images of the BBD molecules on the Au(111) surface acquired at low temperature (LT) and in an ultrahigh-vacuum (UHV) environment are provided. We demonstrate how to prepare the BBD molecule in a planar conformation on the Au(111) surface using a very specific STM tip lateral molecular manipulation protocol. In this planar conformation, the BBD electronic probability density map of its electronic states around the Au(111) surface Fermi level can be recorded to prepare the BBD inelastic manipulation. In the final subsection, the entry ports for tunneling electron energy transfer to the BBD molecule are identified. It is shown how to STM manipulate the BBD molecule step-by-step, in steps of 0.29 nm, on the Au(111) surface. Scheme 1. Synthetic Route of BBD. RESULTS AND DISCUSSION
A. Design and Chemical Synthesis. A 1,1'-binaphthyl molecule consists of two naphthalene moieties connected via a single C-C bond between one phenyl ring of each moiety. The distinct characteristics of a 1,1'-binaphthyl are (1) its flexibility around this C-C bond 24,25,[28][29][30] and (2) the axial chirality originating from the inhibition, by steric crowding, of a complete 360° naphthyl-naphthyl rotation around its joint C-C bond. 24,25,[28][29][30] The enantiomers of axially chiral compounds are classified using the stereochemical labels R and S based on their absolute configuration around a stereocenter. Chiral 1,1'-binaphthyls and derivatives having a naphthyl torsion angle between -180° < θ < 0° correspond to the R configurations, and those between 0° < θ < 180° correspond to the S configurations, respectively. In its S0 electronic ground state and as a function of the torsion angle |θ|, the 1,1'-binaphthyl conformation angle can vary from 60° to 120° within less than 1 kcal/mol of energy. The potential energy curve along this conformation change is a flat-bottomed well where the |θ| ∼ 90° saddle point separates two shallow wells whose minima are at |θ| ∼ 70° and |θ| ∼ 110°. 24,25,[28][29][30] Since the conformation of chiral binaphthyls can be monitored by circular dichroism (CD) spectra, conformation controllability in this ground state was demonstrated at the air-water interface by applying a small mechanical force. 31,32 The difference between the S0 1,1'-binaphthyl relaxed conformations and the S1 lowest excited singlet state conformations 26,27 is at the origin of our BBD internal mechanical vibrations, because of the reversal, in θ, of the minima of the S1(θ) potential energy curve relative to those of the S0(θ) double well. 26,27 Although for cisoid (|θ| < 90°) and transoid (|θ| > 90°) conformers the S1 and S0 relaxed conformations are still under discussion, this difference was important to preserve in the BBD design. For example, time-dependent density functional theory (TD-DFT) calculations show that the (R)-1,1'-bi(2-naphthol) molecule (the starting compound for the BBD synthesis; Scheme 1) still preserves a different relaxed conformation between S0 (θ = -91°) and S1 (θ = -119°), which triggers an almost 30° paddle effect when going back and forth optically from S0 to S1 (Figure S16 in the Supporting Information). Entering now into the design of our molecule, two binaphthyl molecules are used in the BBD as lateral paddles because of this reversal of the relative minimum energy |θ| value between S1 and S0, and because of the low 1 kcal/mol energy barrier between the two minima in the S0 ground state. The two binaphthyls are connected laterally to a very small chassis made simply of a central phenyl (BBD in Scheme 1). This covalent binding of each binaphthyl via the methylene oxy bridges modifies the paddle switch ability with, for example, the suppression of the S1 (θ = -119°) torsion angle energy minimum. What is important here is that S0 keeps its awaited mechanical characteristics, that is, the possibility of vibrational oscillations around its new (θ = -61°) ground-state minimum (TD-DFT calculated), reachable, for example, by optical excitation and relaxation via its new S1 (θ = -58°) relaxed conformation for the (R,R) isomer (Figure S17 in the Supporting Information). On a metallic surface and in a planar conformation, the two BBD binaphthyl groups make it possible to space the BBD chassis away from this surface at a distance compatible with a physisorption state.
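The torsion-angle conventions above translate directly into a tiny classifier; this sketch (ours, for illustration only) labels a binaphthyl conformation from its naphthyl-naphthyl torsion angle θ in degrees:

```python
def classify_binaphthyl(theta_deg):
    """Axial chirality and conformer family from the torsion angle theta (degrees):
    R for -180 < theta < 0, S for 0 < theta < 180;
    cisoid if |theta| < 90, transoid if |theta| > 90."""
    if not -180.0 < theta_deg < 180.0 or theta_deg == 0.0:
        raise ValueError("theta must lie in ]-180, 0[ or ]0, 180[")
    chirality = "R" if theta_deg < 0 else "S"
    a = abs(theta_deg)
    family = "cisoid" if a < 90 else ("transoid" if a > 90 else "saddle")
    return chirality, family

print(classify_binaphthyl(-91))  # ('R', 'transoid'): the S0 minimum of (R)-1,1'-bi(2-naphthol)
```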
To drive the BBD molecule step-by-step along an fcc track of the Au(111) surface using the inelastic effect of the STM tunneling current, one first has to virtually prepare this molecule in its instantaneous virtual reduced electronic state, well described, for its mechanics, by considering in first approximation the BBD S1 excited state. Afterward, relaxation to the S0 ground state will result in small amplitude and noncoherent binaphthyl oscillations. As a function of the tip apex location on the BBD, this will generate a lateral surface motion over the lateral diffusion barrier of the Au(111) fcc portion of the herringbone surface reconstruction. The BBD molecule was synthesized by a one-pot reaction from commercially available (R)-1,1'-bi(2-naphthol) and α,α′,α″,α‴-tetrabromodurene (Scheme 1). Before its evaporation in the STM preparation chamber, it was further ultrapurified by sublimation to produce a colorless powder with no crystallinity. The BBD UV-vis absorption spectrum is similar to that of binaphthyl molecules 33 and shows an absorption peak maximum at 334 nm (Figure S1 in the Supporting Information), demonstrating a good electronic separation between the two BBD lateral paddles. Recorded conventional CD spectra of chiral binaphthyls (Figure S2 in the Supporting Information) confirmed that the chirality of the binaphthyl groups remained after purification. B. The Native BBD Molecule Conformation in Solution. Variable-temperature (VT) analysis with NMR spectroscopy (Figure S13 in the Supporting Information) revealed dynamic fluctuations between two BBD conformers with the same equilibrium population in solution. The two sets of 1H NMR peaks that originate from those two conformers were also observed at low temperature, from 218 to 223 K, showing no sign of a favored conformer (Figure S13 in the Supporting Information). The analysis of those VT NMR spectra by line-shape fitting provides the experimental energetics for the interexchange processes between the two BBD conformers. Using an Eyring plot, the parameters of this interexchange were estimated to be ∆H = 9.8 kcal/mol and ∆S = -17 cal/(mol K) (Figures S14 and S15 in the Supporting Information), supporting the possibility of a fast interconversion of the two conformers at ambient temperature in solution (a numerical sketch of this Eyring analysis is given below). As presented in Figure 1, three BBD conformers were identified using DFT calculations (B3LYP/6-31G(d,p)), depending on the location of the two binaphthyl paddles relative to the central phenyl. They correspond to the flat, syn, and anti conformations of a BBD molecule. The syn and anti conformers are expected to be the principal BBD isomers in solution since, after molecular structure optimization, the flat, syn, and anti relative conformation energies are 20, 0.4, and 0.0 kcal/mol, respectively. Experimental ROESY peaks in NMR spectroscopy are also consistent with the existence of the anti and syn conformers in solution, since a weak correlation was observed between the central aromatic CH and the side binaphthyl aromatic CH protons (Figures S10-S12 in the Supporting Information).
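As an illustration only (ours, with hypothetical rate constants, not the measured data of Figures S14 and S15), an Eyring analysis extracts ∆H and ∆S from the slope and intercept of ln(k/T) versus 1/T:

```python
import math

R = 1.987204e-3          # gas constant, kcal/(mol K)
KB_OVER_H = 2.083662e10  # Boltzmann constant / Planck constant, 1/(s K)

def eyring_fit(temps_K, rates_s):
    """Least-squares line through ln(k/T) vs 1/T:
    slope = -dH/R, intercept = ln(kB/h) + dS/R; returns dH in kcal/mol
    and dS in cal/(mol K)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k / T) for k, T in zip(rates_s, temps_K)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    dH = -slope * R
    dS = (intercept - math.log(KB_OVER_H)) * R
    return dH, dS * 1000.0

# Hypothetical exchange rates (1/s) at VT-NMR temperatures; illustrative numbers only.
print(eyring_fit([218, 223, 233, 243], [40.0, 75.0, 230.0, 620.0]))
```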
A flat BBD conformer is supposed by design to render accessible the different entry ports on its board for local STM excitations. In the following section, we will demonstrate how this flat conformer can be produced molecule per molecule by STM single molecule mechanical manipulations. When obtained, this flat conformer turns out to be quite stable on the Au(111) surface. C. Native BBD Conformation and 2D Organization on the Au(111) Surface. Two typical constant current STM images obtained after BBD molecule deposition on the Au(111) reconstructed surface are presented in Figure 2. The molecules were mainly found self-assembled in small 2D islands (Figure 2a). In some places, single BBD molecule lines can also be observed. This pseudo-1D growth along the Au(111) herringbone track is usually stopped at both ends of the line by a different surface BBD molecular ordering (Figure 2b). In all those observed pseudo-1D and 2D surface molecular orderings, the BBD molecules appear having the shape of a curved letter "f".
Those " " BBD molecules have three possible adsorption directions on the Au(111) surface (see the Figure 2a insert). As presented in Figure 2b, the observed single " " BBD lines confirm that the BBD molecules are sensitive to the lateral ridges of the herringbone reconstruction (in average 0.03 nm in height). [START_REF] Hehre | Self-Consistent Molecular Orbital Methods. XII. Further Extensions of Gaussian-Type Basis Sets for Use in Molecular Orbital Studies of Organic Molecules[END_REF] As certified experimentally by STM single molecule manipulations in the next subsection, each " " STM molecular feature is a BBD dimer consisting of two syn conformers oriented perpendicular to the surface plane. They are coupled by a pair along one of the three As discussed in the previous subsection, the BBD molecules are found equally in the syn and anti conformations in solution. Molecular dynamics (MD) simulations were performed to simulate a hot adsorption process of the BBD molecules on a Au(111) surface (see Supporting Information). When the BBD molecules are annealed on the surface up to 500 K, the syn and anti conformers are deformed but remain on the surface with no transformation in the flat conformer. At this temperature and during their 2D diffusion around the Au(111) surface, the BBD molecules have enough kinetic energy to mutually transform between the syn and anti as they certainly performed in solution and at room temperature. Upon cooling down the surface to room temperature, the BBD molecules thermalize toward the syn conformers since syn is 9.0 kcal/mol lower in energy as compared to anti on the Au(111) surface. Furthermore, the syn thermalize with still their central phenyl perpendicular to the Au(111) surface because laterally stabilized by their two paddles. During a slow thermalization process, the syn will also continue to diffuse on the surface. While in this perpendicular adsorption conformation, they can pair via a central phenyl π-stacking interactions as also confirmed by MD calculation (see Supporting Information). It results in the " " As discussed in the previous subsection, the BBD molecules are found equally in the syn and anti conformations in solution. Molecular dynamics (MD) simulations were performed to simulate a hot adsorption process of the BBD molecules on a Au( 111) surface (see Supporting Information). When the BBD molecules are annealed on the surface up to 500 K, the syn and anti conformers are deformed but remain on the surface with no transformation in the f lat conformer. At this temperature and during their 2D diffusion around the Au(111) surface, the confirmed by MD calcula It results in the "f " STM Figure 2 assembled in th islands. According to MD calcul conformers would be mo surface syn and anti confor conformer from the nativ Au(111) surface would r about 1500 K. Therefore, the surface directly afterwa conformers. At such a h molecules will break and/ the gold melting point is l below, f lat conformers can on the Au( 111 According to MD calculations and if accessible, the flat BBD conformers would be more stable than the orthogonal to the surface syn and anti conformers. However, getting directly a flat conformer from the native syn and anti conformers on the Au(111) surface would require an annealing temperature of about 1500 K. Therefore, during deposition or by heating up the surface directly afterward, it will be difficult to produce flat conformers. At such a high temperature, most of the BBD molecules will break and/or desorb from the surface. 
Anyhow, the gold melting point is lower than 1500 K. As demonstrated below, flat conformers can be produced molecule per molecule on the Au(111) surface starting from the orthogonal to the surface syn conformers using a very specific STM single molecule mechanical manipulation protocol. D. STM Single Molecule Mechanical Manipulation for Preparing flat BBD Conformers. To produce a flat BBD conformer, a selected syn conformer dimer adsorbed perpendicular to the Au( 111) surface (one of the " " molecular units imaged in Figure 2) must be first separated into independent syn monomers. For this purpose and starting from a 2D island of the sort imaged in Figure 2a, STM lateral BBD molecule mechanical manipulation has been first performed as presented in Figure 3. Here, the threshold STM tunneling resistance for molecule manipulation is around R T = 270 MΩ. In most cases, a " " dimer can readily be separated out of the 2D island but only as a single " " dimer entity with no monomer separation as presented in the Figure 3a,b. Then, a " " dimer can be step-by-step displaced over quite long distances over the surface in such STM manipulation conditions. When they are sometimes disassembled into two syn monomers during this process, one syn of the " " pair is generally transferred to the STM tip, and the other one remains in the island as shown in the sequence Figure 3b,c. Notice also that after the breaking of a " " pair at the 2D island border and in its orthogonal to the surface adsorption configuration, the syn monomer left in the island (as obtained in Figure 3c) can be further extracted from this island byr a further lateral STM manipulation. In this case, it has also a high probability to be captured by the tip apex, confirming how this syn monomer orthogonal configuration is not very stable on an Au(111) surface. We have succeeded to manipulate a few of those syn monomers toward specific Au(111) surface areas like the herringbone kinks where generally the surface atomic order is not regular and can stabilize them. They can also be dragged e. They are coupled 11] crystallographic this surface (see the cation of this pairing the only molecular BDs molecular row. , the BBD molecules rmations in solution. were performed to BD molecules on a ion). When the BBD o 500 K, the syn and on the surface with At this temperature Au(111) surface, the energy to mutually certainly performed n cooling down the olecules thermalize 0 kcal/mol lower in Au( 111) surface. their central phenyl because laterally slow thermalization use on the surface. formation, they can nteractions as also conformers. At such a high temperature, most of the BBD molecules will break and/or desorb from the surface. Anyhow, the gold melting point is lower than 1500 K. As demonstrated below, f lat conformers can be produced molecule per molecule on the Au(111) surface starting from the orthogonal to the surface syn conformers using a very specific STM single molecule mechanical manipulation protocol. STM Single Molecule Mechanical Manipulation for Preparing f lat BBD Conformers. To produce a f lat BBD conformer, a selected syn conformer dimer adsorbed perpendicular to the Au( 111) surface (one of the "f " molecular units imaged in Figure 2) must be first separated into independent syn monomers. For this purpose and starting from a 2D island of the sort imaged in Figure 2a, STM lateral BBD molecule mechanical manipulation has been first performed as presented in Figure 3. 
Here, the threshold STM tunneling resistance for molecule manipulation is around R T = 270 MΩ. In most cases, a "f " dimer can readily be separated out of the 2D island but only as a single "f " dimer entity with no monomer separation as presented in the Figure 3a,b. Then, a "f " dimer can be step-by-step displaced over quite long distances over the surface in such STM manipulation conditions. When they are sometimes disassembled into two syn monomers during this process, one syn of the "f " pair is generally transferred to the STM tip, and the other one remains in the island as shown in the sequence Figure 3b,c. Notice also that after the breaking of a "f " pair at the 2D island border and in its orthogonal to the surface adsorption configuration, the syn monomer left in the island (as obtained in Figure 3c) can be further extracted from this island by a further lateral STM manipulation. In this case, it has also a high probability to be captured by the tip apex, confirming how this syn monomer orthogonal configuration is not very stable on an along the surface during standard imaging conditions, that is, for STM R T around 10 GΩ and a tunneling current below 10 pA. At the border of a 2D-island and using R T ∼ 120 MΩ, a new specific BBD dimer molecule manipulation protocol can bring a different manipulation outcome. When in a " " pair located at the border of a 2D island, a syn BBD molecule is manipulated without trying to extract it directly from this island border; however, following the appropriate manipulation tip trajectory presented in Figure 4, a flat monomer can be produced with its two-fold paddles now fully open. Figure 4 presents an example of such an outcome with a reasonable 10% probability of success. There are two essential conditions for this specific protocol to produce successfully a flat conformer and to open the BBD two paddles. First, the syn targeted BBD molecule must be paired with a syn BBD anchored at the edge of a 2D island but not at a corner. This anchoring will serve as a pivot for the opening following a classical molecular mechanical motion of the molecule, as if it was a solid and rigid body pivoting around a fix point. Second, the targeted syn BBD must be "rubbed" laterally on another " " dimer of the island during the manipulation, that is, the manipulation trajectory must maintain a lateral interaction with the other " " dimer for the flat flipping of the manipulated BBD molecule to be complete. As a consequence, the manipulated BBD molecule performs a 90 • flip down to the surface to reach a planar central phenyl configuration with the two paddles opened flat on the Au(111) surfacem as illustrated in Figure 4a,b (more examples in Supporting Information section 9). In this case, the lateral required interactions between the manipulated BBD and the border 2D island BBDs seem to be attractive according to the recorded manipulation signal, but this requires a more detailed interpretation in the future. When produced, a flat conformer is very stable on the Au(111) surface as theoretically predicted by MD calculations. After its production, this is confirmed by the experimental RT threshold value for a flat conformer STM molecular manipulation in a pushing mode along the Au(111) surface, the lowest (∼ 66 MΩ) of all the R T values used for the different BBD molecule configurations met on this surface. A flat BBD monomer can be truly and reproducibly STM manipulated mechanically over long distances as presented in Figure 4b,c. E. 
E. Tunneling Spectroscopy and States Mapping of the Flat BBD Conformer. A flat BBD conformer on the Au(111) surface opens access to detailed STM dI/dV mapping of its low-lying molecular electronic states around the Au(111) surface Fermi energy. Such a mapping is important to confirm the flat-conformation interpretation after the Figure 4 specific manipulation protocol. It is also very appropriate for determining the location of the maximum molecular orbital weight of its reduced and oxidized electronic states along the BBD molecular structure. Those maxima are known to be the principal port for intramolecular inelastic excitations induced by tunneling electrons, in a way to bring energy to the molecular structure to move on a metallic surface with no mechanical push.

Figure 5a presents a typical dI/dV spectrum recorded on a flat BBD molecule adsorbed on the Au(111) surface. Here, the STM tip apex was positioned at the center of one of the two lobes identified on the topographic image (see Figure 4c). After identifying the Au(111) surface-state energy location (≈ -0.5 V) in this spectrum, two differential conductance peaks are observed: one at -1.6 V and a small bump centered around +2.7 V. This gives an apparent electronic gap of about 4.3 eV for a flat BBD molecule on the Au(111) surface, to be compared to the 3.7 eV (334 nm) UV optical gap observed for the syn and anti conformers in solution. Performed exactly at these resonances, the very precise dI/dV STM mapping of Figure 5 permits the determination of the spatial molecular orbital electronic distribution of the flat BBD reduced and oxidized electronic states at the origin of those two tunneling resonances. Monoelectronic constant-current dI/dV elastic scattering quantum chemistry (ESQC) [34] image calculations were performed to compare with those experimental dI/dV maps. The ESQC calculation is particularly well adapted to provide accurate STM images for large adsorbed molecules. [35,36] These calculations confirm that the main contributor to the -1.6 V resonance is the BBD highest occupied molecular orbital (HOMO), and the BBD lowest unoccupied molecular orbital (LUMO) to the +2.7 V resonance. Those images were calculated starting from an optimized flat BBD conformation on the Au(111) surface. This confirms how the BBD molecule can be prepared in a flat conformation by STM single-molecule manipulation. According to the Figure 5 images, the central phenyl chassis is relatively electronically decoupled from the Au(111) surface.

F. Step-by-Step Manipulation for Moving a Flat BBD Conformer on the Au(111) Surface. To manipulate a BBD molecule by inelastic electron tunneling effects along an fcc portion of the Au(111) surface, the single molecule must capture enough energy from the tunneling current to pass over the fcc surface lateral diffusion barrier. One way to trigger the inelastic energy release on the BBD vibronic modes is to increase the tunneling current intensity through the molecule, reaching its first low-lying reduced states. According to the dI/dV spectrum in Figure 5a, this can be achieved with a bias voltage applied to the tunnel junction greater than about 2 V, to reach at least the tail of the +2.7 V BBD tunneling electronic resonance. Notice here that the energy captured by the vibronic modes of the BBD molecule will be a small fraction of the 2 eV. [38] To maximize the tunneling inelastic excitations, it is also usually taken for granted to position the STM tip apex at the locations of highest electron density of the targeted molecular electronic states. This strategy also helps to minimize the STM bias voltage range, in a way not to destroy the molecule. [11]

The BBD dI/dV images presented in Figure 5 map those maxima and minima in its flat surface conformation. This mapping results from the electronic coupling between the tip apex and the molecular orbitals entering the composition of the resonating BBD electronic states, considering that those electronic states can be well described by a superposition of Slater determinants constructed using a molecular orbital basis set. [37,39] The pixel-by-pixel construction of the Figure 5b,c maps results from the local measurement of the conductance of the BBD molecule at each pixel. In effect, this measurement projects the total BBD electronic probability density on a 2D plane. This is a very convenient way to identify where to position the tip for triggering a tunneling inelastic effect. For BBD, the highest electronic probability density (the highest dI/dV) sites are located on the BBD binaphthyl paddles, as observed in Figure 5c.

As indicated in Figure 5c, when positioning the tip apex at location 1 on the BBD molecule and then ramping the bias voltage up further than +2.3 V (but without reaching +2.7 V), the BBD changes its conformation, with one naphthyl paddle going up from the surface in a conformation similar to syn (a syn/flat-like conformation; see Figure S21 in the Supporting Information). This conformational change occurs systematically on the paddle opposite to the one being excited. On the corresponding reduced-state potential energy surface, this indicates that the energy captured by the BBD molecule from the tunneling current initiates a conformation-change trajectory at the onset of the +2.7 V resonance. Starting from the flat ground-state conformation, this trajectory certainly reaches a minimum on this reduced-state potential energy manifold, corresponding to the rotation upward of a paddle to form a "syn/flat-like" conformation. Then, the BBD molecule relaxes in its ground state in this new stable "syn/flat-like" conformation, which is not observable natively, even after an STM mechanical lateral manipulation procedure.
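The two gaps quoted above are easy to cross-check: the apparent transport gap is just the separation of the two dI/dV resonances, and the optical gap follows from converting the 334 nm absorption onset to electron-volts. A minimal sketch of both conversions:

```python
# Minimal cross-check of the gaps quoted above.
E_homo_res = -1.6   # dI/dV resonance (V), HOMO-derived
E_lumo_res = +2.7   # dI/dV resonance (V), LUMO-derived
apparent_gap_eV = E_lumo_res - E_homo_res
print(f"apparent STM gap: {apparent_gap_eV:.1f} eV")   # 4.3 eV

# Optical gap from the 334 nm UV absorption: E[eV] = h*c / lambda
HC_EV_NM = 1239.842  # h*c in eV*nm
optical_gap_eV = HC_EV_NM / 334.0
print(f"optical gap: {optical_gap_eV:.2f} eV")         # ~3.71 eV
```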
Notice that, in a first approximation, the reduced-state potential energy manifold can be explored using the potential energy surface of the BBD S1 vertically accessible excited electronic state. After having tried to induce the paddle vibrations directly, we selected the electronic probability density maximum 2 indicated in Figure 5c. For this new excitation location, and as presented in the left column of Figure 6, when the bias voltage reaches about +2.3 V and the tunneling current several hundred picoamperes, the current intensity through the BBD suddenly jumps up due to a one-step lateral translation of the molecule. This very reproducible behavior shows how a BBD molecule can be driven step-by-step, in steps of 0.29 nm, on an fcc flat area of the Au(111) surface. To be more precise on the inelastic manipulation direction, an atomically resolved image of the Au(111) surface, recorded using a molecule-terminated tip, is inserted in Figure 6a to certify the moving direction: exactly one of the ⟨110⟩ orientations of the Au(111) surface in its fcc portion. The interatomic gold distance along those orientations is 0.288 nm, in complete agreement with the experimentally observed 0.29 nm step motion per voltage ramp.

As presented in Figure 6, a controllable inelastic motion is only possible when the molecule lies parallel to, and in between, two herringbones of the reconstruction. When the molecule sits on a herringbone (even when only one paddle end is lying on it), it is stuck and difficult to manipulate inelastically, as also recently observed with a windmill molecule. [40] For a BBD molecule lying parallel to the herringbones and located on the fcc portion of the Au(111) surface, the probability of a controllable stepwise motion is about 50% after a single-shot bias voltage ramp. Incidentally, the probability of molecule motion regardless of the adsorption site is around 5%, and the probability of a breaking or a conformation change is around 3%. As a consequence, after three consecutive lateral step-by-step motions, the BBD molecule is stopped by the herringbone lateral diffusion barrier and must be re-prepared for a new run, as with the windmill molecule. [40] Here, the BBD molecule must be manipulated with care, so as not to open a conformation-change path on its reduced-state potential energy surface, nor a chemical reaction path breaking some of its chemical bonds and leading to the final destruction of the molecule, since a molecule is often very unstable under high positive STM bias voltage pulses. [41]

While exciting the BBD molecule at two different spatial locations of the same resonance maxima, the difference in mechanical response is a nice indication of how the electronic coupling between the tip apex and the electronic states of a molecule can give rise to different mechanical responses. Here, and during an STM excitation (or imaging), the effective lateral extension of the inelastic excitation by the tunneling electrons is much narrower than the spatial lateral extension of the BBD molecular orbitals. As a consequence, the electronic coupling between the tip apex and the BBD molecule is very local. For each tip positioning on the molecule, this brings out a very specific superposition of BBD molecular orbitals contributing to the motion, as also observed for STM imaging in the case, for example, of a hexabenzocoronene (HBC) molecule. [42] This can trigger a large conformation change of the paddle for one tip apex location, or a gentle stepwise lateral motion for another location, supposing that the effective potential energy surface built up from this superposition is different in the two cases.
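As a quick consistency check on the step length reported above, the nearest-neighbor distance on an fcc (111) surface along a close-packed ⟨110⟩ row is a/√2; with the bulk gold lattice constant this reproduces the 0.288 nm value quoted in the text. A minimal sketch:

```python
import math

# fcc nearest-neighbor distance along a close-packed <110> row: a / sqrt(2)
A_GOLD = 0.40782  # bulk Au lattice constant in nm
step_nm = A_GOLD / math.sqrt(2)
print(f"Au(111) row spacing: {step_nm:.4f} nm")      # ~0.2884 nm

# Observed average step per voltage ramp: ~0.29 nm, i.e. one lattice site
observed = 0.29
print(f"ratio observed/lattice: {observed / step_nm:.2f}")  # ~1.01
```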
For negative applied bias voltages, we also tried the same strategy by locating the tip apex at one of the many maxima indicated in Figure 5b. No movement of the BBD molecule was observed down to a bias voltage ramp reaching a maximum of -2.0 V, with several nA of tunneling current intensity. We do not yet have a detailed explanation of this observation.

CONCLUSION

A bisbinaphthyldurene (BBD) molecule was designed, synthesized, and deposited on an Au(111) surface, mechanically manipulated with an STM tip, and then manipulated with the STM inelastic contribution of the tunneling current passing through this molecule. The BBD molecule is equipped with two lateral binaphthyl paddles mounted on a very simple central phenyl chassis to separate it from the supporting surface. Single-molecule STM lateral mechanical manipulation must first be performed for this molecule to reach a flat conformation on the Au(111) surface, since its native surface conformation on Au(111) is a perpendicular dimer conformation. Once a BBD molecule was prepared in its flat conformation, dI/dV molecular orbital mapping was performed to determine the best tip apex location to free up this molecule inelastically for a manipulation on an Au(111) fcc flat terrace. The intuitive on-paddle excitation is not a good entry port for an inelastic tunnel manipulation, since it leads to a drastic conformation change of the BBD molecule, entering into competition with its step-by-step motion on the Au(111) surface. We have demonstrated that, on the molecule and nearby the paddle location, there exists another energy entry port where the BBD molecule remains flat on the surface and can be laterally manipulated step-by-step, with a step of about 0.29 nm per excitation. The BBD molecule was used by the MANA-NIMS Japanese team during the first international nanocar race in Toulouse. [43]

MATERIALS AND METHODS

The synthetic procedure of BBD is described in the Supporting Information. The STM experiments were conducted as follows.
The BBD molecules were deposited on an Au(111) single-crystal surface previously cleaned by standard metal-surface UHV preparation methods, consisting of several cycles of ion sputtering and subsequent annealing. [10] The BBD molecules were sublimated from about 3 mg of the colorless BBD molecular powder by heating a Kentax quartz crucible at 563 K for 30 s. The gold substrate temperature was kept below 323 K during this deposition. The evaporation parameters were selected so as to deposit a minute amount of BBD molecules, producing a submonolayer coverage in order to leave large enough molecule-free areas on the clean Au(111) surface to be able to use the STM single-molecule manipulation protocol. The Au(111) sample was then loaded on the STM sample stage kept at cryogenic temperature (LT) and rapidly cooled down to ≈ 5 K. All low-temperature ultrahigh-vacuum scanning tunneling microscopy (LT-UHV STM) experiments presented, that is, constant-current imaging, molecule manipulations, tunneling spectroscopic measurements, and intramolecular dI/dV mapping, were performed on one of the four STM heads of our new ScientaOmicron LT-UHV four-scanner STM instrument. [44]

* E-mail: we-hyo.soe@cemes.fr
† Present Address: Department of Chemical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan.
‡ E-mail: Nakanishi.Waka@nims.go.jp

Photophysical properties: See Fig. S1 and Fig. S2.

NMR spectra: See Fig. S3 to Fig. S15.

Theoretical calculations (DFT): All calculations were performed using the Gaussian 09 program, [1] and the results were analyzed and visualized in GaussView 5.0.9. Calculations were performed at the density functional theory (DFT) level with the B3LYP functional, the gradient correction of the exchange functional by Becke [2,[START_REF] Becke | Density-Functional Thermochemistry. III. The Role of Exact Exchange[END_REF]] and the correlation functional by Lee, Yang and Parr, [START_REF] Lee | Development of the Colle-Salvetti Correlation-Energy Formula into a Functional of the Electron Density[END_REF] and the 6-31G(d,p) split valence plus polarization basis set [START_REF] Ditchfield | Self-Consistent Molecular-Orbital Methods. IX. An Extended Gaussian-Type Basis for Molecular-Orbital Studies of Organic Molecules[END_REF][START_REF] Hehre | Self-Consistent Molecular Orbital Methods. XII. Further Extensions of Gaussian-Type Basis Sets for Use in Molecular Orbital Studies of Organic Molecules[END_REF][START_REF] Hariharan | Accuracy of AH n Equilibrium Geometries by Single Determinant Molecular Orbital Theory[END_REF][START_REF] Hariharan | The Influence of Polarization Functions on Molecular Orbital Hydrogenation Energies[END_REF] was used. Relaxed S1 structures (optimized structures in the S1 state) were calculated by time-dependent (TD) DFT. See Table S1, Table S2, Fig. S16, and Fig. S17. Figure S16 shows the potential energy (S0 and S1) of (a) 1,1'-binaphthyl and (b) BINOL, modified from the one reported for 1,1'-binaphthyl. [9] As reported, two sets of optimized structures of 1,1'-binaphthyl were found in both S0 and S1 by DFT and TD-DFT (B3LYP/6-31G(d,p)) calculations. On the other hand, only one set of optimized structures was found for BINOL by the same procedures.

Theoretical calculations (MD simulations), structures: The simulated systems consisted of one or two molecules on an Au(111) 6-layer slab with a vacuum layer. The size of the unit cell for a single molecule on Au(111) was 19.9795 × 23.0703 × 51.7730 Å, and that for two molecules was 29.9693 × 34.6055 × 61.7730 Å. All simulations were performed in the NVT ensemble, with the temperature controlled using a NHL thermostat with a decay constant of 1 ps.

Tables S3-S5: Cartesian coordinates of the flat, syn, and anti conformers, with converged SCF energies E(RB3LYP) = -2226.71856349, -2226.74911677, and -2226.74970717 a.u., respectively.
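The relative conformer energies of Figure 1 follow directly from these SCF totals; converting the hartree differences to kcal/mol reproduces the quoted +20 and +0.4 kcal/mol values. A minimal sketch of the conversion:

```python
# Relative conformer energies from the SCF totals in Tables S3-S5.
HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

E = {  # E(RB3LYP) in hartree (a.u.)
    "flat": -2226.71856349,
    "syn":  -2226.74911677,
    "anti": -2226.74970717,
}

ref = E["anti"]  # most stable conformer taken as the zero
for name, e in E.items():
    rel = (e - ref) * HARTREE_TO_KCAL
    print(f"{name}: {rel:+.1f} kcal/mol")
# flat: +19.5, syn: +0.4, anti: +0.0  (cf. +20, +0.4, 0.0 in Figure 1)
```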
Conformation transformations by forcible molecule manipulation: More details and examples of the lateral single-molecule manipulation protocol in a mechanical mode, used to prepare planar BBD molecules one by one as presented in the manuscript, are provided in Fig. S20.

Inelastic electronic tunneling effect: After some of the inelastic electronic tunneling excitations of a BBD molecule, obtained by locating the STM tip apex on the highest electronic probability density site of the molecule, we have observed the conformation change toward the syn/flat-like conformation shown in Fig. S21.

Figure 1. Various possible conformers of the BBD molecule. (a) Flat, (b) syn, and (c) anti conformers obtained from DFT calculations (B3LYP/6-31G(d,p)) with relative energies of +20, +0.4, and 0.0 kcal/mol, respectively. Each structure was optimized with its D2, C2, and C2 symmetry, respectively. The calculated torsion angles of the binaphthyl are θ = -61°, -64°, and -64°, respectively. In solution, only the anti and syn conformers have been identified. Physisorbed on an Au(111) surface, STM molecular manipulations of BBD lead to the production of the flat conformer.
C. Native BBD Conformation and 2D Organization on the Au(111) Surface. In their native arrangement, the BBD molecules are oriented perpendicular to the surface plane. They are coupled by pairs along one of the three [211] crystallographic orientations of the Au(111) fcc portion of this surface (see the Figure 2a insert). A first experimental indication of this pairing is evidenced in Figure 2a by analyzing the only molecular alignment defect at the top of the last left BBD molecular row.

Figure 2. (a) A typical STM topographic image of BBD molecules forming a small 2D island on the Au(111) reconstructed surface. Each building block of the imaged island is a dimer of syn-syn conformers having the shape of a curved "f" letter. Insert a: the three possible adsorption orientations of the syn dimers. The atomically resolved image was also recorded using a molecule-terminated STM tip to confirm the molecular orientation. (b) A good example of supramolecular assembly, with a line of 10 BBD molecular dimers formed along an Au(111) herringbone, whose growth was stopped at both ends of the BBD line by two "f" dimers of a different surface orientation. (All LT-UHV STM constant-current images were generally recorded at I = 20 pA, V = 0.5 V.)

Figure 3. An example of the molecule manipulation experiments using tunneling conditions near the threshold R_T = 270 MΩ. (a) 2D island of self-assembled syn-syn dimers on the Au(111) surface observed just after Au(111) sample preparation. (b) An "f" BBD molecule was step-by-step manipulated and extracted from its island while maintaining its dimer structure. (c) A monomer was detached from a down-right corner "f" of the island and adsorbed to the tip apex; the second syn of this dimer remains in its initially perpendicular-to-the-surface conformation (same STM image conditions as in Figure 2; image size: 15 nm × 15 nm).
Figure 4. An example of the forcible molecule manipulation experiments using tunneling conditions less than half the R_T of Figure 3. Topographic images (a) before and (b) after the R_T = 120 MΩ manipulation. Using this condition, a structural transformation from the syn to the flat conformer occurs. (c) The so-produced flat conformer was manipulated with the STM tip away from its original four-"f"-dimer line with R_T = 66 MΩ, in order to isolate it and prevent the influence of other molecules during the dI/dV STM spectroscopic measurements. The specific tip trajectories during the manipulation to produce a flat conformer are indicated by the arrows in (a) and (b). The tip location selected during the Figure 5 spectrum recording on this flat conformer is indicated by a dot in (c). Same STM image conditions as in Figure 2; image size: 12 nm × 15 nm.
Figure 5. (a) The dI/dV spectrum and (b,c) dI/dV maps recorded on the flat BBD produced in Figure 4. The first tunneling resonances appear at -1.6 V and +2.7 V bias voltage (sample grounded on the LT-UHV 4-STM). A dI/dV map captures the spatial distribution of the electron density of the corresponding molecular electronic states contributing to the resonance. [37] (d,e) At energies corresponding to the HOMO and LUMO of the flat BBD monomer, monoelectronic ESQC STM calculated images are also presented for comparison. The resonance at -1.6 V appears to come mainly from the HOMO component of the imaged ground state, and that at +2.7 V from the LUMO contribution of the reduced BBD (image size: 3.6 nm × 2.4 nm).

Figure 6. Manipulation of a BBD molecule on the Au(111) surface using inelastic tunneling electron excitations. (a-d) A series of images demonstrating the BBD molecule motion along the timeline. An atomically resolved image and its significant crystallographic orientation [101̄] are also presented. The red dots on each image indicate the tip apex location for applying the bias voltage ramp, resulting in the measured tunneling current. The moving direction of a BBD molecule is always the [101̄] orientation, and its average lateral motion per voltage ramp is around 0.29 nm, which perfectly matches the 0.288 nm interatomic distance on Au(111) in the ⟨11̄0⟩ surface directions. The voltage ramp duration from +1.0 to +2.3 V was 20 s in each case. Left column: I-V characteristics for each inelastic event. Right column: 12.0 nm × 5.6 nm constant-current STM images recorded at V = +0.5 V and I = 10 pA. In each image, top left, two dimers and one BBD monomer in a syn conformation have been imaged at the same time to provide a clear measurement of the BBD molecule motion per excitation.
Figure S1. UV-vis absorption spectra of BBD in THF (5.6 × 10⁻⁶ M).
Figure S2. CD spectra of BBD in THF (1.0 × 10⁻⁵ M).
Figure S5. DEPT 90 of BBD.
Figure S6. DEPT 135 of BBD.
Figure S7. COSY of BBD.
Figure S8. HMQC of BBD.
Figure S9. HMBC of BBD.
Figure S10. NOESY of BBD.
Figure S11. ROESY of BBD.
Figure S12. Selected correlations in NOESY (dashed lines) and …
Figure S13. Temperature-dependent ¹H NMR (400 MHz, CDCl3) spectra of BBD. The temperature was changed from 313 to 218 K. White and black dots at 218 and 223 K correspond to signals from two conformers.
Proton resonances of the aromatic region at lower magnetic field were further analyzed for energetics, since they are comparably isolated. Two doublet peaks from one isomer and one overlapped doublet peak from another isomer were observed at 218 K, which merge to form two doublet peaks at 313 K.

Figure S15. The Eyring plot for BBD. The energetics parameters are shown at the bottom of the plots: ΔH‡ = 9.8 kcal/mol, ΔS‡ = -17 cal/(mol K).

Figure S18. (a,d) anti-, (b,e) syn-, and (c,f) flat-BBD on Au(111) layers simulated by MD calculations.

Additional references:
Eigler, D. M.; Schweizer, E. K. Positioning Single Atoms with a Scanning Tunnelling Microscope. Nature 1990, 344, 524-526.
Bartels, L.; Meyer, G.; Rieder, K.-H. Basic Steps of Lateral Manipulation of Single Atoms and Diatomic Clusters with a Scanning Tunneling Microscope Tip. Phys. Rev. Lett. 1997, 79, 697-700.
Bouju, X.; Joachim, C.; Girard, C.; Tang, H. Mechanics of (Xe)N Atomic Chains under STM Manipulation. Phys. Rev. B: Condens. Matter Mater. Phys. 2001, 63, 085415.
Jung, T. A.; Schlittler, R. R.; Gimzewski, J. K.; Tang, H.; Joachim, C. Controlled Room-Temperature Positioning of Individual Molecules: Molecular Flexure and Motion. Science 1996, 271, 181-184.
Hochstrasser, R. M. The Effect of Intramolecular Twisting on the Emission Spectra of Hindered Aromatic Molecules. Can. J. Chem. 1960, 39, 459-470.

ACKNOWLEDGMENTS. We thank Dr. Y. Okawa, Dr. T. Uchihashi, and Dr. K. Sagisaka in NIMS for prescreening of STM conditions and helpful discussions, and Prof. M. Aono for his continuous support during this work. Funding: WPI MANA MEXT program and JSPS KAKENHI (16H07436, JP16H06518, 26790003). M.K. acknowledges financial support received from the Foundation for Polish Science (FNP). We thank TOYOTA as an official sponsor of our NIMS MANA team in the nanocar race. [43]

Notes: The authors declare no competing financial interest.

SUPPORTING INFORMATION

1. General. Analytical thin-layer chromatography (TLC) was performed on a glass plate coated with silica gel (230-400 mesh, 0.25 mm thickness) containing a fluorescent indicator (silica gel 60F254, Merck). Flash silica gel column chromatography was performed on silica gel 60N (spherical and neutral gel, 40-50 µm, Kanto). Infrared (IR) spectra were recorded on a Thermo Scientific Nicolet NEXUS 670 FT-IR and are reported as wavenumbers (ν) in cm⁻¹. Proton (¹H) and carbon (¹³C) nuclear magnetic resonance (NMR) spectra were recorded on a JEOL JNM-ECA400 spectrometer. Mass spectra were obtained on an Applied Biosystems Voyager DE STR instrument (MALDI-TOF MS). UV-vis absorption spectra were obtained on a JASCO V-670. Circular dichroism (CD) spectra were obtained on a JASCO J-820.
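The Eyring parameters quoted in Figure S15 translate directly into a free-energy barrier and an interconversion rate at any temperature via ΔG‡ = ΔH‡ - TΔS‡ and k = (k_B T/h)·exp(-ΔG‡/RT). A minimal sketch, assuming the usual transmission coefficient of 1 (the resulting rates are our illustration, not values given in the text):

```python
import math

# Eyring analysis from the Figure S15 parameters.
DH = 9.8            # kcal/mol (enthalpy of activation)
DS = -17.0 / 1000   # kcal/(mol K) (entropy of activation)
R  = 1.98720e-3     # gas constant, kcal/(mol K)
KB_OVER_H = 2.0837e10  # k_B / h in 1/(s K)

for T in (218.0, 298.0, 313.0):
    dG = DH - T * DS                              # free energy of activation
    k = KB_OVER_H * T * math.exp(-dG / (R * T))   # transmission coeff. = 1
    print(f"T = {T:5.1f} K: dG = {dG:5.2f} kcal/mol, k = {k:9.3e} 1/s")
# Slow exchange at 218 K (separate conformer peaks) and much faster
# exchange at 313 K (merged peaks), consistent with the spectra above.
```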
2. Materials. Solvents and materials were purchased from Aldrich, Tokyo Kasei Chemical Co., or Wako Chemical Co., and were used without further purification.

3. Synthesis. To a mixture of (R)-(+)-1,1'-bi(2-naphthol) (1.00 g, 3.49 mmol) and Cs2CO3 (2.84 g, 8.73 mmol) in dry acetone (100 mL) was added 1,2,4,5-tetrakis(bromomethyl)benzene (715 mg, 1.59 mmol), and the mixture was refluxed for 48 h. The mixture was extracted with CH2Cl2 (2 × 200 mL) and concentrated in vacuo. The crude material was purified by silica gel column chromatography (eluent: CH2Cl2/hexane) to give the pure desired compound, bisbinaphthyldurene (BBD) (410 mg, 37%). The compound was further purified by sublimation (< 300 °C) for the STM experiments. Mp > 250 °C; FT-IR (KBr, cm⁻¹) 3047, 2934, 2880, 1918, 1593, 1472, 1321, 1244, 1147, 1079, 1009, 893, 805, 749; ¹H NMR (400 MHz, CDCl3) δ 5.15 (d, J = 11.2 Hz, 4H), 5.21 (d, J = 11.2 Hz, 4H), 7.16 (ddd, J = 7.6, 7.6, 0.8 Hz, 4H), 7.18 (dd, J = 7.6, 7.2 Hz, 4H), 7.27 (ddd, J = 7.2, 7.2, 1.4 Hz, 4H), 7.32 (s, 2H), 7.46 (d, J = 9.0 Hz, 4H), 7.75 (d, J = 7.6 Hz, 4H), 7.80 (d, J = 9.0 Hz, 4H); ¹³C NMR (400 MHz, CDCl3): δ 71.4, 117.0, 121.6, 123.9, 126.1, 126.3, 128.1, 129.3, 129.8, 133.4, 134.5, 136.6, 154.4.

Table S1. Torsion angles of flat-, syn-, and anti-BBD optimized by DFT (B3LYP/6-31G(d,p)).

  Torsion angle              Flat      Syn       Anti
  C2-C1-C1'-C2' (θ)          -60.5     -63.9     -63.9
  C2'-C1'-C1''-C2'' (θ)      -60.5     -63.9     -63.9
  C2-O1-C3-C4                -158.9    -61.6     -61.7
  C2'-O1'-C3'-C4'            -158.9    -144.1    -144.6
  C2''-O1''-C3''-C4''        -158.9    -144.1    -61.7
  C2'''-O1'''-C3'''-C4'''    -158.9    -61.6     -144.6
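The quoted quantities and the 37% yield can be cross-checked from molar masses; the sketch below assumes the molecular formula C50H34O4 for BBD (two doubly O-alkylated binaphthol units on the tetrasubstituted benzene core), which is our reading of the structure rather than a value stated in the text.

```python
# Stoichiometry / yield cross-check for the BBD synthesis.
# Assumption: BBD = C50H34O4 (not stated explicitly in the text).
M = {"C": 12.011, "H": 1.008, "O": 15.999, "Br": 79.904}

def molar_mass(c=0, h=0, o=0, br=0):
    return c * M["C"] + h * M["H"] + o * M["O"] + br * M["Br"]

m_binol = molar_mass(c=20, h=14, o=2)    # (R)-BINOL, ~286.3 g/mol
m_tbb   = molar_mass(c=10, h=10, br=4)   # tetrakis(bromomethyl)benzene, ~449.8
m_bbd   = molar_mass(c=50, h=34, o=4)    # assumed BBD formula, ~698.8

print(f"BINOL: 1.00 g = {1000 / m_binol:.2f} mmol")  # ~3.49 mmol, as quoted
print(f"core:  715 mg = {715 / m_tbb:.2f} mmol")     # ~1.59 mmol, as quoted

yield_frac = (410 / m_bbd) / (715 / m_tbb)  # product over limiting reagent
print(f"yield: {100 * yield_frac:.0f}%")    # ~37%, as quoted
```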
Anneal dynamics: To find several conformations of one or two molecules on the Au(111) surface, anneal dynamics were used. Anneal dynamics consists of a dynamics simulation where the temperature is periodically increased from an initial temperature (4 K) to a mid-cycle temperature (500 K) and back again. All MD calculations were performed with Forcite in Materials Studio, and the COMPASS force field was used. All the structures obtained were optimized with the same force field. For calculations of a monomer on Au(111), the DFT-optimized anti structure was used as the initial structure, and the syn conformer on the Au(111) surface was obtained as a stable conformer. The anti conformer on the Au(111) surface was also obtained, with a higher energy (+9.0 kcal/mol). When the mid-cycle temperature was raised to 1500 K, the flat conformer was found, with an energy lower by -22.5 kcal/mol compared with the syn conformer. For calculations of a dimer on Au(111), two DFT-optimized syn conformers were used as initial structures, and the syn-syn dimer on the Au(111) surface was obtained as a stable conformer. See Fig. S18 and Fig. S19.

Interaction energies: The total energies of the syn, anti, and flat conformers on the Au(111) surface were computed, and the interaction energy between Au(111) and the syn conformer (E_surfaceinteraction_syn) was calculated to be -194.92 kcal/mol based on the following equation (the analogue of the dimer equation given below):

E_surfaceinteraction_syn = E_total_syn - (E_surface + E_syn)

The interaction energies between Au(111) and the anti and flat conformers (E_surfaceinteraction_anti, E_surfaceinteraction_flat) were calculated with the same method to be -194.3 and -244.2 kcal/mol, respectively. The surface interaction energies of the syn and anti conformers are at the same level (E_surfaceinteraction_syn = -194.9 kcal/mol, E_surfaceinteraction_anti = -194.3 kcal/mol). The difference in stability of the syn and anti conformers on Au(111) therefore originates from the difference in molecular deformation energy (E_anti - E_syn = +8.4 kcal/mol, higher for the anti conformer) rather than from the interaction energy (E_surfaceinteraction_anti - E_surfaceinteraction_syn = +0.6 kcal/mol, more unfavorable for the anti conformer). In other words, to obtain a similar interaction with the Au(111) surface, the anti conformer needed to be deformed. On the other hand, the stability of the flat conformer mainly originates from its large surface interaction energy (E_surfaceinteraction_flat - E_surfaceinteraction_syn = -29.3 kcal/mol, more favorable than the syn conformer) rather than from the molecular deformation energy (E_flat - E_syn = +6.9 kcal/mol, higher than the syn conformer).

The total energy of the syn-syn dimer on the Au(111) surface (E_total) was -42 912.6 kcal/mol. The following energies were obtained as single-point energy calculations with the same force field: the energy of Au(111) (E_surface) was -43 945.7 kcal/mol, that of the syn-syn dimer (E_syn-syn-dimer) was 1 438.2 kcal/mol, and those of the individual syn conformers (E_syn-conformer-1 and E_syn-conformer-2) were 724.3 and 724.3 kcal/mol, respectively. The interaction energy between Au(111) and the syn-syn dimer (E_surfaceinteraction) was calculated to be -405.1 kcal/mol, and that between the two syn conformers of the dimer (E_molecularinteraction) to be -10.4 kcal/mol, based on the following equations:

E_surfaceinteraction = E_total - (E_surface + E_syn-syn-dimer)
E_molecularinteraction = E_syn-syn-dimer - (E_syn-conformer-1 + E_syn-conformer-2)

The formation of the syn-syn dimer is more favorable than two isolated syn monomers, by the energy E = (E_total - E_surface)/2 - (E_total_syn - E_surface_monomer) = -2.8 kcal/mol. The formation of the syn-syn dimer in vacuum is estimated to be E_syn-syn-dimer/2 - E_syn = +4.8 kcal/mol, i.e. unfavorable compared with the monomers. On the other hand, the surface interaction energy becomes E_surfaceinteraction/2 - E_surfaceinteraction_syn = -7.6 kcal/mol, more favorable for the formation of dimers. As a result, each syn conformer is E = -7.6 + 4.8 = -2.8 kcal/mol more favorable as part of a dimer than as a monomer.
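These decompositions are plain arithmetic on the quoted single-point energies, so they are easy to verify; the sketch below reproduces the -405.1, -10.4 and -7.6 kcal/mol values from the numbers given above.

```python
# Verify the interaction-energy decomposition for the syn-syn dimer
# using the single-point energies quoted above (kcal/mol).
E_total = -42_912.6    # syn-syn dimer on Au(111)
E_surf  = -43_945.7    # bare Au(111) slab
E_dimer =   1_438.2    # syn-syn dimer alone (dimer geometry)
E_syn1  =     724.3    # syn conformer 1 alone
E_syn2  =     724.3    # syn conformer 2 alone
E_surf_int_syn = -194.9  # monomer-surface interaction (quoted)

E_surface_interaction = E_total - (E_surf + E_dimer)            # -405.1
E_molecular_interaction = E_dimer - (E_syn1 + E_syn2)           # -10.4
gain_per_molecule = E_surface_interaction / 2 - E_surf_int_syn  # ~ -7.6

print(f"{E_surface_interaction:.1f} {E_molecular_interaction:.1f} "
      f"{gain_per_molecule:.1f}")
```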
75,075
[ "20071", "18062", "765273", "19546", "786148" ]
[ "233966", "145364", "233966", "519181", "145364", "145364", "233966", "531257", "145364", "233966", "145364" ]
01746106
en
[ "spi", "sdu" ]
2024/03/05 22:32:07
2015
https://hal.science/hal-01746106/file/Piazolo_Montagnat2015_postRevVersion.pdf
S Piazolo, M Montagnat, F Grennerat, H Moulinec, J Wheeler

Effect of local stress heterogeneities on dislocation fields: Examples from transient creep in polycrystalline ice

Keywords: stress heterogeneities, dislocation field, electron backscatter diffraction, full-field modeling, kink bands, viscoplastic anisotropy

This work presents a coupled experimental and modeling approach to better understand the role of stress field heterogeneities on deformation behavior in a material with a high viscoplastic anisotropy, e.g. polycrystalline ice. Full-field elasto-viscoplastic modeling is used to predict the local stress and strain field during transient creep in a polycrystalline ice sample. Modeling input includes the experimental starting microstructure and a validated slip-system-dependent flow law. EBSD measurements on selected areas are used to estimate the local dislocation field utilizing the Weighted Burgers Vector (WBV) analysis. Areas of local stress concentration correlate with triple junctions and grain boundaries, originating from strain incompatibilities between differently oriented grains. In these areas of highly heterogeneous stress patterns, (a) kink bands are formed and (b) WBV analysis shows a non-negligible c-axis component of the WBV. The correlation between this defect structure and the presence of kink bands suggests that kink band formation is an efficient accommodation deformation mode.

Introduction

When a polycrystalline material is plastically deforming, stress and strain heterogeneity fields develop due to strain incompatibilities between grains of different crystallographic orientations. Depending on the level of viscoplastic anisotropy of the material, the heterogeneity amplitude can be high. The viscoplastic anisotropy of ice is known to be very strong, with dislocations gliding mostly on the basal plane with three equivalent ⟨11-20⟩ Burgers vector directions [START_REF] Hondoh | Physics of Ice Core Records[END_REF]. This results in strong kinematic hardening at grain boundaries and triple junctions [START_REF] Duval | [END_REF]. As such, ice is a good model for materials with high viscoplastic anisotropy, such as magnesium [3,4], quartz [START_REF] Hobbs | Preferred orientation in deformed metals and rocks, chapter The geological significance of microfabric analyses[END_REF] and olivine [START_REF] Nicolas | Crystalline Plasticity and Solid State Flow in Metamorphic Rocks[END_REF].

In ice, strong heterogeneity fields were measured by Digital Image Correlation on polycrystalline samples deformed by compression creep, with local strain amplitudes as high as ten times the macroscopic strain [START_REF] Grennerat | [END_REF]. The strain heterogeneities were also indirectly observed through lattice misorientation measurements via EBSD [8,[START_REF] Montagnat | Lebensohn RA[END_REF]] and were simulated using full-field viscoplastic approaches based on a Fast Fourier Transform formulation [10,[START_REF] Montagnat | Lebensohn RA[END_REF],[START_REF] Grennerat | [END_REF]]. In these modelling approaches, plasticity is simulated by the activation of several slip systems (e.g. basal, prismatic and pyramidal slip in the case of ice), where each slip system is assigned a relative critical resolved shear stress for slip activation. In this frame, these last works have shown, among other results, that the observed high stress and strain heterogeneities could only be correctly simulated provided a significant amount of non-basal slip activity in the corresponding areas, while the global non-basal activity remained low
38 The presence of heterogeneous dislocation fields in experimentally and naturally deformed ice was mostly 40 observed indirectly from substructures observations (Xray diffraction, optical analyses, EBSD...) [8,[START_REF] Montagnat | Lebensohn 414 RA[END_REF]14,[START_REF] Donges | Nuclear Instru-424 ments and Methods in Physics Research Section B: Beam Inter-425 actions with[END_REF]. 42 In particular kink bands and double kink bands has been 43 commonly observed in polycrystalline ice deformed 44 in the laboratory. The origin of these kink bands was following [16,17]. Accuracy of EBSD data is within figures with both the sample coordinate system and the 125 crystal coordinate system (see Fig. 2). 83 0.3 • -0.4 • [18]. 126 Table 1 presents the integral WBV calculated over se-127 lected areas in the analyzed samples (Fig. 2 and Fig. 3), 1 shows that the integral WBV values vary significantly depending on microstructure type and location. However, for each selected area, we obtained several WBV values and chose to present here values that are representative for the selected area. The accuracy of the integral WBV is dependent on the angular resolution of the EBSD data. Using EBSD data with high angular resolution (here within 0.3 degrees), we consider an integral WBV ratio of one specific Burger vector over the maximum WBV value of 0.5 significant. Local stress field estimation Full-field numerical simulations were performed using the CraFT code as presented in [START_REF] Suquet | [END_REF]. The code is based on the FFT method initially proposed in [22,23], extended to elasto-viscoplastic composites using a step-by-step integration in time in [24] (see also the numerical details in [START_REF] Suquet | [END_REF]). The method used in the CraFT code finds a strain rate field associated with a kinematically admissible velocity field that minimizes the average local work rate under the compatibility and equilibrium constraints. An iterative scheme is used following a fixed point approach. It is numerically more efficient than the finite element method [25], but is limited to simulations with microstructures with periodic boundary conditions. The exact experimental boundary conditions (stress free lateral surface) could not be reproduced due to the numerical periodicity constraint. therefore, Accordingly, stress and strain fields predicted close to the specimen edges are not expected to be very accurate, however Grennerat et al. [START_REF] Grennerat | [END_REF] showed a limited impact on the macroscopic response and estimated fields, especially in the center of the modeled microstructure, where the 185 areas of interest here are located. 186 The elasto-viscoplastic response of ice was modeled 187 following the law and hypothesis discussed in details in 188 [START_REF] Suquet | [END_REF]. To model ice, the used crystal plasticity formula- This is illustrated by the decomposition of the integral 261 WBV data (Table 1) and represented in the inverse pole 262 figure plots in Fig. 3 (see for instance mapG1-3 areas A 263 and B and mapG4-1-5 area B, Table 1). 264 As illustrated by the inverse pole figure of subarea D 265 from map G4-1-5, there is a systematic presence of a 266 minor but significant c-axis component close to triple 267 junctions. Such a c-axis component is considered as 268 non-negligible when the ratio K c /max(K ai ) is higher 269 than 0.5 (see Table 1). 
Local stress field estimation

Full-field numerical simulations were performed using the CraFT code as presented in [START_REF] Suquet | [END_REF]. The code is based on the FFT method initially proposed in [22,23], extended to elasto-viscoplastic composites using a step-by-step integration in time in [24] (see also the numerical details in [START_REF] Suquet | [END_REF]). The method used in the CraFT code finds a strain-rate field, associated with a kinematically admissible velocity field, that minimizes the average local work rate under the compatibility and equilibrium constraints. An iterative scheme is used, following a fixed-point approach. It is numerically more efficient than the finite element method [25], but is limited to simulations of microstructures with periodic boundary conditions. The exact experimental boundary conditions (stress-free lateral surface) could not be reproduced due to the numerical periodicity constraint. Accordingly, the stress and strain fields predicted close to the specimen edges are not expected to be very accurate; however, Grennerat et al. [START_REF] Grennerat | [END_REF] showed a limited impact on the macroscopic response and the estimated fields, especially in the center of the modeled microstructure, where the areas of interest here are located.

The specimen's undeformed microstructure was discretized into 512 × 512 Fourier points, with a single layer of Fourier points in the third (Z) direction, thus assuming infinite column length. To represent the microstructure, each Fourier point is allocated a c-axis orientation according to the measured orientation of the undeformed experimental sample. Consequently, grain boundaries are not specifically defined as discrete objects with physical characteristics other than a change in crystal orientation. Throughout the numerical simulations, no crystallographic orientation changes are imposed; thus the microstructure (orientations, grain boundaries) does not evolve. Creep conditions equivalent to the experimental ones were applied (constant 0.5 MPa stress in the vertical direction, transient creep up to 1% macroscopic strain).

The elasto-viscoplastic response of ice was modeled following the law and hypotheses discussed in detail in [START_REF] Suquet | [END_REF]. To model ice, the crystal plasticity formulation used accounts for three different families of slip systems, namely the basal, prismatic and pyramidal systems, the latter two being taken stiffer than basal slip. A power law is considered for the evolution of the shear strain rate as a function of the resolved shear stress on each slip system, integrating kinematic hardening, slip-system interactions and their evolution with time during transient creep. The material parameters of the law were determined by adjusting them to experimental data available for single crystals as well as for polycrystals, as detailed in [21]. The Critical Resolved Shear Stress (CRSS) on each slip system was allowed to evolve with strain between initial and stationary values. Table 2 provides the relative CRSS for each slip system family, and the value of the stress exponent attributed to each family. It is worth noting that the relative CRSS is much higher for the pyramidal family, and that there is a relative softening of the basal systems during transient creep [21].
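The kernel of such a formulation is the per-system power law relating the resolved shear stress to a slip rate; a minimal sketch is given below. It keeps only the basic power-law term with an evolving CRSS: the kinematic hardening and slip-system interaction terms of the full law in [21] are omitted, and the reference slip rate gamma0, the saturation strain scale and the exponent value are placeholders, not fitted values.

```python
import numpy as np

def slip_rates(tau, tau_c, n, gamma0=1e-6):
    """Power-law slip rate per system: gamma0 * sign(tau) * |tau/tau_c|**n.
    gamma0 is a placeholder reference rate, not a fitted value."""
    tau, tau_c, n = map(np.asarray, (tau, tau_c, n))
    return gamma0 * np.sign(tau) * np.abs(tau / tau_c) ** n

def crss(strain, tau_ini, tau_stat, e0=0.005):
    """Toy CRSS evolution between the initial and stationary values
    (exponential saturation over a strain scale e0; the actual
    transient-creep evolution in [21] is more elaborate)."""
    return tau_stat + (tau_ini - tau_stat) * np.exp(-strain / e0)

# Basal family: CRSS softens from 0.1 to 0.022 (absolute values quoted
# with Table 2); the exponent 2.0 is illustrative, not the fitted n_s.
tau_c_basal = crss(strain=0.01, tau_ini=0.1, tau_stat=0.022)
print(slip_rates(tau=[0.05], tau_c=[tau_c_basal], n=[2.0]))
```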
The local heterogeneities of the strain and stress fields are illustrated in Fig. 4a) and b), where the spatial variations of the equivalent stress, σ_eq = √((3/2) σ_ij σ_ij), and of the equivalent strain, ε_eq = √((3/2) ε_ij ε_ij), are represented, respectively. For each simulated pixel of the microstructure, the deviatoric part of the stress tensor was decomposed in its eigenvector frame. From this decomposition, we extracted the eigenvector corresponding to the largest absolute eigenvalue. We built a composite vector representation based on the in-plane projection of a vector having the direction of this eigenvector and the amplitude of the associated eigenvalue. This representation is given in Fig. 4c) for the area of interest, superimposed on the equivalent stress contour plot.
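This post-processing is straightforward to reproduce: per pixel, take the deviatoric stress, form the scalar invariant, and diagonalize. The sketch below assumes a 3×3 Cauchy stress tensor per pixel and uses the σ_eq definition quoted above.

```python
import numpy as np

def postprocess_pixel(sigma):
    """Equivalent stress and dominant deviatoric eigenvector for one pixel.

    sigma: 3x3 symmetric Cauchy stress tensor (MPa).
    Returns (sigma_eq, composite) where composite is the in-plane
    projection of the dominant eigenvector scaled by its eigenvalue.
    """
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    sigma_eq = np.sqrt(1.5 * np.tensordot(dev, dev))  # sqrt(3/2 s_ij s_ij)

    w, v = np.linalg.eigh(dev)    # eigen-decomposition of the deviator
    k = np.argmax(np.abs(w))      # largest absolute eigenvalue
    composite = w[k] * v[:2, k]   # in-plane (x, y) projection
    return sigma_eq, composite

# Toy usage: uniaxial 0.5 MPa compression along y (the applied creep stress)
sig = np.diag([0.0, -0.5, 0.0])
print(postprocess_pixel(sig))  # sigma_eq = 0.5 MPa, vector along y
```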
Numerical modeling shows that local stress concentrations and stress field heterogeneities occur close to triple junctions and grain boundaries. The higher the mismatch between crystallographic grain orientations, the higher the concentration and heterogeneities (Fig. [...]). [...] Fig. 3, where the heterogeneous stress band crossing [...]

In contrast, most of the areas analyzed in a grain interior show a c-axis component of the WBV which is very close to the detection limit (see Table 1). Close to triple junctions, we observe near-straight subgrain boundaries. Their traces are parallel to the c-axis, and the WBV are predominantly oriented along the a-axis perpendicular to the subgrain boundary trace (Fig. 1 and Fig. 3). The other a-axis contribution is related to a continuous crystal bending perpendicular to the c-axis (see Fig. 5 with a sketch of the bending axis). Similar observations were made in [8, [START_REF] Montagnat | Lebensohn RA[END_REF]]. These only slightly curved subgrain boundaries commonly occur in parallel pairs with the main WBV pointing in opposite directions (Fig. 3, mapG1-3 and mapG4-1-5, Table 1). The parallelism of the subgrain boundary traces, and their orientation relative to the crystallographic axes of the host grain together with the orientation of the main WBV a-axis component, are consistent with a strongly crystallographically determined boundary development. The opposing directions of the WBV in pairs of subgrain boundaries (Fig. 3) are consistent with kink bands with alternating opposing dislocation structures [26, 27, 28]. These crystallographically well defined kink boundaries could appear similar to the twinning modes in magnesium, in the way that they both accommodate shear stress that cannot be resolved by the easy slip system, i.e. basal slip in ice [3, 4, 29]. In metals, kink bands can occur at large deformation as strain localization modes, as predicted by the bifurcation analysis of [30], but also at the early stage of deformation in the presence of obstacles [31]. Chang et al. [31] used Dislocation Dynamics simulation to illustrate the stable position of an edge dislocation within the stress field of an alignment of edge dislocations forming the kink band. Consequently, in contrast to what was initially thought [9], climb or cross-slip do not seem to be necessary mechanisms to explain kink band formation. In summary, observations suggest that kink bands form in areas of very high local stress fields originating either from strain incompatibilities developed in materials with high viscoplastic anisotropies or from high strain rates.

It should be noted that the numerical model used does not account for specific grain boundary properties. The question of strain continuity at an interface such as a grain boundary is therefore not addressed. In contrast, recent work [32] showed that stress accumulation at boundaries can explain grain boundary delamination in alloys. In the case of ice deforming during transient creep under low applied stress, no cracking was observed at grain boundaries, but a better account of the continuity conditions of the plastic distortion at grain boundaries might be necessary to evaluate the impact of strain heterogeneities on recrystallization mechanisms, for instance.

Our data show that in high stress concentration areas, close to triple junctions, a Burgers vector component parallel to the c-axis is present. Cross-slip of dislocations with a Burgers vector lying in the basal plane cannot explain this; instead, it could be interpreted as a local activity of a non-basal slip system.
Conceptually, such non-basal slip activity is expected only in such high stress areas, as non-basal activity requires a high level of local stress in order to overcome the high critical resolved shear stress required for non-basal slip [...]. Observations of dislocation activity in ice were performed in conditions with very few dislocations activated, which could hardly be extrapolated. Indeed, they were performed using X-ray diffraction topography, which enables individual dislocation observation at the very early stage of deformation [33, 34].

The observed crystal bending parallel as well as perpendicular to the kink bands, which is accommodated by the shown presence of Burgers vectors along not one but two a-axes in high stress areas, may result in the development of subgrain boundaries both parallel and perpendicular to the c-axis. This will then result in the formation of a subgrain with near-perpendicular subgrain boundaries, one parallel to the c-axis and one perpendicular to the c-axis. Such a developing subgrain is shown in Fig. 5 and has been described in [8]. This subgrain boundary formation represents one of the recrystallization processes relaxing strain heterogeneities at macroscopic strains exceeding the transient creep regime [2, 35].

- Coupled full-field elasto-viscoplastic modelling and detailed EBSD analysis show the effect of stress heterogeneities (magnitude and orientation) on the dislocation field.
- The WBV c-axis component measured in high stress areas could be consistent with local activation of non-basal slip.
- At low strain, the formation of kink bands with a well defined crystallographic character appears as an efficient accommodation deformation mode, similar to twinning in Mg.
- Distinct misorientations across and perpendicular to kink boundaries form substructures which act as precursors for grain nucleation.
- The approach presented is applicable to materials with significant viscoplastic anisotropy.

Figure 1: Microstructure of a selected area of the columnar ice sample (in the plane perpendicular to the columns) after compressive strain of 10% along the y direction. The central microstructure is color-coded according to the measured c-axis orientation as represented by the color wheel (inset). EBSD maps of selected areas (marked by white rectangles) are shown with a color code for change in orientation according to the color scheme provided on the left side of the central microstructure. Insets of 3D crystal orientations (grey hexagons) are provided by EBSD analyses. Grains are given numbers for ease of reference (e.g. gr 1). Note that the labeling of the selected areas for which EBSD maps were obtained corresponds to the numbers of the grains present in the respective map.

Figure 2: [caption not recovered]

Figure 3: Comparison between stress field and dislocation field; deviatoric stress eigenvectors (left), zoomed in on the areas selected for the WBV analyses (right maps). WBV analyses are shown with the WBV magnitude (color range), WBV directions shown as white arrows on the map (over a threshold value), and on inverse pole figures for selected areas (labeled A to D). Only upper limits are shown for these areas. Integral WBV of subareas are given in Table 1. Black lines represent grain boundaries. Due to limitations in the simulation configuration (see text), the correspondence between modeled and observed microstructures, such as the position of grain boundaries, is approximate.
Figure 4: Maps of equivalent stress (in MPa) (a) and of equivalent strain (b) predicted by the full-field simulation using CraFT on the area of interest after 0.01 macroscopic strain under compression creep. (c) provides a map of the deviatoric stress eigenvectors projected on the plane, superimposed on the equivalent stress contour plot (same scale as in a)). Grain boundary traces are superimposed for clarity but they are not physical entities in the code.

Figure 5: Planes of crystal bending (white stippled lines) for subgrain boundaries and associated rotation directions. The nearly horizontal subgrain boundaries are associated with abrupt misorientations, while the vertical ones correspond to more progressive misorientations. Both sub-structures were quantified in terms of WBV, see Fig. 3.

Table 1: Integral WBV decomposed onto the three crystallographic axes of the ice crystal, for specific areas in the EBSD maps shown in Figs. 1, 2 and 3. The two Ka vector directions are symmetrically equivalent; they were decomposed into the 3 a-directions following [START_REF] Wenk | Minerals: Their Constitution and Origin[END_REF]. The relative activity K_c/max(K_ai) is also given. ρ provides a lower-bound estimate of the geometrically necessary dislocation density in the area. Step size is 15 µm for all EBSD maps. * minimum value of the net Burgers vector magnitude. "gr" signifies grain number (cf. Fig. 1) and "area", subarea as shown in Figs. 2 and 3.

Acknowledgments: Financial support by the French ANR is acknowledged (project DREAM #ANR-13-BS09-0001), together with support from the institutes INSIS and INSU of CNRS, France. The authors gratefully acknowledge the support of the European Science Foundation (ESF) under the EUROCORES Programme, EuroMinSci, MinSubStrDyn, No. ERAS-CT-2003-980409 of the European Commission, DG Research, FP6. SP acknowledges additional funding from ARC DP120102060 and FT1101100070. Funding by the ARC Centre of Excellence for Core to Crust Fluid Systems (www.CCFS.mq.edu.au) allowed a research visit of M.M. to Macquarie University. This is contribution 534 from the ARC Centre of Excellence for Core to Crust Fluid Systems (http://www.ccfs.mq.edu.au) and 975 in the GEMOC Key Centre. Funding from the Australian Research Council (ARC) through the CCFS Visiting Researcher scheme is gratefully acknowledged.
23,914
[ "17449", "7653" ]
[ "522835", "389386", "664", "136844", "221258" ]
01617193
en
[ "sdv" ]
2024/03/05 22:32:07
2017
https://inserm.hal.science/inserm-01617193/file/RambeaudIntJCardiol2017.pdf
Pierre Rambeau, Emilie Faure, Alexis Théron, Jean-François Avierinos, Chris Jopling, Stéphane Zaffran, Adèle Faucherre

Reduced aggrecan expression affects cardiac outflow tract development in zebrafish and is associated with bicuspid aortic valve disease in humans

Abstract: Hemodynamic forces have been known for a long time to regulate cardiogenic processes such as cardiac valve development. During embryonic development in vertebrates, the outflow tract (OFT) adjacent to the ventricle comes under increasing hemodynamic load as cardiogenesis proceeds. Consequently, extracellular matrix components are produced in this region as the cardiac cushions form, which will eventually give rise to the aortic valves. The proteoglycan AGGRECAN is a key component of the aortic valves and is frequently found to be deregulated in a variety of aortic valve diseases. Here we demonstrate that aggrecan expression in the OFT of developing zebrafish embryos is hemodynamically dependent, a process presumably mediated by mechanosensitive channels. Furthermore, knockdown or knockout of aggrecan leads to failure of the OFT to develop, resulting in stenosis. Based on these findings we analysed the expression of AGGRECAN in human bicuspid aortic valves (BAV). We found that in type 0 BAV there was a significant reduction in the expression of AGGRECAN. Our data indicate that aggrecan is required for OFT development and that, when its expression is reduced, this is associated with BAV in humans.

Introduction

Arterial valve leaflets are composed of three distinct layers of extracellular matrix (ECM) called fibrosa, spongiosa and ventricularis. The spongiosa layer is particularly rich in proteoglycans, which provide compressive properties to the tissue and allow the leaflet to change shape during the cardiac cycle. Indeed, valves are submitted to extreme hemodynamic forces, such as shear stress and cyclic strain, that subsequently regulate both their development and function. Studies in chick and zebrafish have shown that disruption of the hemodynamic forces results in cardiac valve defects, including defects of outflow tract (OFT) cushion formation and severe heart defects associated with a total absence of valves [START_REF] Menon | Altered hemodynamics in the embryonic heart affects outflow valve development[END_REF][START_REF] Hove | Intracardiac fluid forces are an essential epigenetic factor for embryonic cardiogenesis[END_REF]. Among valve diseases, bicuspid aortic valve (BAV) is one of the most common pathologies found in patients, occurring in 1-2% of live births, and is frequently associated with aortic stenosis, regurgitation, endocarditis and calcified valves [START_REF] Padang | Genetic basis of familial valvular heart disease[END_REF]. AGGRECAN is one of the major members of the large proteoglycans found in cartilage and provides the ability to resist compressive loads [START_REF] Lincoln | Hearts and bones: shared regulatory mechanisms in heart valve, cartilage, tendon, and bone development[END_REF]. A recent transcriptomic study of human BAV has shown that AGGRECAN expression is decreased in BAV patients with mild calcification compared with calcified tricuspid aortic valves (TAV) [START_REF] Padang | Comparative transcriptome profiling in human bicuspid aortic valve disease using RNA sequencing[END_REF]. Zebrafish possess two aggrecan paralogues, aggrecanA (acana) and aggrecanB (acanb).
Here we show that acana expression in the zebrafish OFT is dependent on hemodynamic forces and that knockdown or knockout of acana during development induces cardiovascular defects. Moreover, we observe that in humans, AGGRECAN (ACAN) expression is reduced in type 0 BAV compared to normal TAV.

Materials and methods

Zebrafish strains and husbandry
Zebrafish were maintained under standardized conditions [START_REF] Westerfield | The zebrafish book, A Guide for the Laboratory Use of Zebrafish (Danio rerio)[END_REF] and experiments were conducted in accordance with the European Communities council directive 2010/63. The Tg(fli1a:GFP)y1 line was provided by the CMR[B].

In situ hybridization
ISH were performed as described previously [START_REF] Brend | Zebrafish whole mount high-resolution double fluorescent in situ hybridization[END_REF][START_REF] Thisse | High-resolution in situ hybridization to whole-mount zebrafish embryos[END_REF] (see Supplementary information). For RR-treated fish, the proteinase K treatment was reduced to 15 min.

Ruthenium Red treatment
[Section text not recovered.]

Morpholinos and injections
Morpholino oligonucleotides were obtained from Gene Tools (Philomath, OR, USA) and injected into one-cell stage embryos (see Supplementary information).

CRISPR/Cas9
Acana target sequences were identified using ZiFiT online software [START_REF] Sander | Zinc Finger Targeter (ZiFiT): an engineered zinc finger/target site design tool[END_REF]. 150 pg of acana gRNA was co-injected with nls-Cas9 protein (N.E.B) (see Supplementary information).

Cardiovascular parameters analysis
Cardiovascular parameters were determined using the MicroZebraLab™ software from ViewPoint [START_REF] Parker | A multi-endpoint in vivo larval zebrafish (Danio rerio) model for the assessment of integrated cardiovascular function[END_REF] (see Supplementary information).

Real-time quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR)
Human aortic valve tissues were collected after surgery by the Department of cardiac surgery at "La Timone" Hospital, Marseille, France. The protocol was evaluated and authorised by the "CPP Sud Méditerranée" (n° 13.061) and by the "Agence de la biomédecine" (n° PFS14-011) (see Supplementary information).

Results

AggrecanA is expressed in the zebrafish OFT and is dependent on hemodynamic forces
To determine the expression pattern of aggrecan, we performed in situ hybridization (ISH) on 4 days post-fertilization (dpf) embryos using antisense acana and acanb probes. Acana showed high expression in craniofacial cartilage, as previously described [START_REF] Kang | Molecular cloning and developmental expression of a hyaluronan and proteoglycan link protein gene, crtl1/hapln1, in zebrafish[END_REF]; however, we were also able to observe a clear expression of acana in the OFT (Fig. 1.A). In contrast, we could not detect any acanb expression in this cardiac structure (Fig. 1.B). In order to confirm that acana expression was localised to the OFT, we performed double fluorescent ISH on 4 dpf larvae using antisense probes targeting acana and elastin b (elnb), an abundant component of the zebrafish OFT [START_REF] Miao | Differential expression of two tropoelastin genes in zebrafish[END_REF]. In this manner we were able to observe that acana expression co-localised with elnb expression in the OFT (Fig. 1.C-H). Because ECM composition can be modulated by hemodynamic forces, we sought to determine whether this was the case for acana expression in the OFT.
To achieve this, we used a previously described morpholino targeting tnnt2, which effectively stops the heart from beating [START_REF] Sehnert | Cardiac troponin T is essential in sarcomere assembly and cardiac contractility[END_REF]. In this manner we found that tnnt2 morphants display an apparent lack of acana expression in the OFT (Fig. 1.I). Previous research has indicated that mechanosensitive ion channels (MSC) can detect hemodynamic forces and trigger valve formation [START_REF] Heckel | Oscillatory flow modulates mechanosensitive klf2a expression through trpv4 and trpp2 during heart valve development[END_REF]. To determine whether MSC could mediate acana expression in response to hemodynamic forces, we employed the nonselective MSC blocker Ruthenium Red (RR). In this manner we could observe that larvae incubated with RR showed a dose-responsive decrease of acana expression in the OFT (Fig. 1.J-L).

AggrecanA is involved in cardiac development
Due to its expression in the OFT, we speculated that acana may play a role in cardiac development. To answer this question we adopted a morpholino (MO)-mediated approach targeting an internal splice site. Injection of this MO produced a phenotype characterized by defective cardiogenesis, including a larger atrium and associated edema at 3 dpf (Fig. 1.M-N, P-Q). To confirm the specificity of the phenotype, we performed several control experiments (Suppl. Fig. 1 and Suppl. information). We also implemented a previously described transient CRISPR/Cas9 knockout strategy [START_REF] Willems | The Wnt Co-receptor Lrp5 is required for cranial neural crest cell migration in zebrafish[END_REF][START_REF] Moriyama | Evolution of the fish heart by sub/neofunctionalization of an elastin gene[END_REF]. In this manner we were able to observe zebrafish embryos displaying a similar cardiac phenotype to that observed when acana was knocked down with a MO (Fig. 1.O,R). Although this approach will result in a mosaic knockout of acana, all embryos which displayed a cardiac phenotype tested positive for KO of acana by T7 assay (n = 6/44) (Suppl. Fig. 2). Together, these results indicate that loss of acana, either by knockdown or knockout, leads to perturbed cardiogenesis. To better understand how the OFT has been affected, we analysed this structure in Tg(fli1a:GFP)y1 larvae, which express GFP in all endothelial cells. At 3 dpf, the wild-type OFT displays a typical "pear-shaped" structure (Fig. 2.A). By contrast, in the acana morphants this structure has failed to develop (Fig. 2.B). In parallel, we also analysed a number of cardiovascular parameters such as heart rate and stroke volume. No differences were observed in the heart rate between the wild-type and acana morphants (Suppl. Fig. 3). However, the cardiac output and stroke volume were significantly decreased in acana morphants when compared to controls (Fig. 2.C and Suppl. Fig. 3). We also recorded high-speed movies of the beating embryonic heart and observed that in acana morphants there was a significant amount of blood regurgitation between the ventricle and the atrium, most likely caused by failed OFT development, forcing blood back through the AV canal (Suppl. movies M1 and M2).

ACAN expression is reduced in human BAV patients
Based on our findings in zebrafish, we hypothesised that reduced AGGRECAN (ACAN) expression may also be associated with aortic valve diseases such as BAV.
Firstly, we determined the relative abundance of ACAN in the aortic valve by qPCR and were able to detect ACAN expression during foetal valve development in humans (Suppl. Fig. 4). By 13 weeks of gestation, ACAN expression had greatly increased, and this expression was even more abundant in adult valves. To assess the possibility that ACAN expression could be reduced in BAV patients, we performed RT-qPCR analysis using RNA extracted from aortic valves surgically removed from patients diagnosed with type 0 "pure" BAV. As a control we used RNA extracted from normal TAV. We observed a significant (16-fold) decrease in the expression of ACAN in the type 0 BAV samples when compared to the control TAV samples (Fig. 2.D).

Discussion
Here we have shown that hemodynamically dependent acana expression is required for OFT development in zebrafish. Moreover, we showed that an MSC could be the sensor of these hemodynamic forces. It has recently been shown that the Trpv4 MSC, a target of RR, is involved in atrioventricular valve development in response to hemodynamic forces [START_REF] Heckel | Oscillatory flow modulates mechanosensitive klf2a expression through trpv4 and trpp2 during heart valve development[END_REF]. However, because RR is non-specific, we cannot determine the true identity of the MSC at this juncture. Because the analogous region in humans will give rise to the aortic valves, we also analysed ACAN expression during human aortic valve development, and in patients who suffer from type 0 BAV we found a significant reduction when compared to TAV. Because of the rarity of this condition, we were only able to analyse relatively few patients; however, the difference was in excess of 16-fold. It will therefore be necessary to expand this cohort to determine fully the reduction in the expression of AGGRECAN associated with BAV type 0. At present little is known about the genetic causes of BAV, with only a handful of genes identified thus far and, as one can imagine, there is even less known about what causes the different types of BAV. Although we cannot categorically state that reduced AGGRECAN expression is the root cause of type 0 BAV, its association with this condition does appear to be significant. Decreased AGGRECAN expression with BAV type 0 may not be so surprising considering it is required to strengthen and provide rigidity to the developing valves; when this is lost, the developmental process will malfunction. Why AGGRECAN expression is reduced in BAV type 0 remains unclear at this juncture; however, it is possible that defective hemodynamics, or defective mechanosensation of these forces during OFT development, could be involved in this condition. Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijcard.2017.09.174.
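The 16-fold figure reported above is the kind of number the standard 2^(-ddCt) method yields; the paper defers its actual qPCR pipeline to the Supplementary information, so the sketch below, with invented Ct values and an unnamed reference gene, only illustrates the arithmetic, not the authors' analysis.

    def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        # Relative expression by the 2^-ddCt method (Livak & Schmittgen);
        # all arguments are mean Ct values, normalized to a reference gene.
        d_ct_sample = ct_target - ct_ref
        d_ct_control = ct_target_ctrl - ct_ref_ctrl
        return 2.0 ** -(d_ct_sample - d_ct_control)

    # Hypothetical numbers: a ddCt of 4 cycles gives 2^-4 = 0.0625, i.e. the
    # ~16-fold reduction reported for ACAN in type 0 BAV versus control TAV.
    print(fold_change(28.0, 18.0, 24.0, 18.0))   # -> 0.0625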
Fig. 1. Acana is hemodynamically expressed in the OFT and is involved in heart development. (A-B) ISH of 4 dpf larvae using antisense acana (A) and acanb (B) probes. Acana is expressed in the OFT (A, yellow arrowhead). Acanb expression was not detected in the OFT (B, yellow arrowhead). (C-E) Confocal maximal projection images of double fluorescent ISH on 4 dpf WT larvae using antisense acana (C, red) and elnb (D, green) probes (blue: DAPI; yellow arrowheads indicate the OFT). (F-H) Same larvae at higher magnification (dashed rectangle in E). The merged images (E, H) indicate that acana is expressed in the OFT labelled by elnb. Scale bars E: 100 μm; H: 20 μm. (I) ISH of 4 dpf tnnt2-MO injected larvae using an antisense acana probe. Acana expression was not detected in the OFT (yellow arrowhead). (J-L) ISH of 4 dpf larvae using an antisense acana probe on non-treated specimens (J, control, n = 19), or specimens treated with 10 μM (K, n = 15) and 20 μM (L, n = 6) of Ruthenium Red (RR). Black arrowheads indicate the OFT. (M-O) Bright field images of non-injected (M), acana-MO (N) and CRISPR/Cas9 injected (O) 3 dpf larvae. Both morphant and transient knockout display heart edema (black arrowheads). (P-R) myl7 ISH showing the morphology of a wild-type heart (P) or of hearts from acana-MO (Q) and CRISPR/Cas9 injected (R) 3 dpf larvae. Both morphant and transient knockout display a larger atrium (v: ventricle; a: atrium).

Fig. 2. Knockdown of acana disrupts OFT development and AGGRECAN expression is modified in human BAV patients. (A, B) Confocal maximal projection image of 3 dpf Tg(fli1a:GFP)y1 OFT from WT (A) and acana morphant (B) larvae. White dashed lines indicate the OFT. (C) Graph showing the difference in stroke volume between non-injected (NI, n = 13) and acana morphants (n = 12). ***p < 0.001 (Student's t-test). (D) Graph showing relative mRNA expression of ACAN between TAV controls (n = 4) and BAV type 0 (n = 2). **p < 0.05 (nonparametric Mann-Whitney test). Error bars represent mean ± SEM in all histograms.

Acknowledgements
A.F. is currently supported by a Labex ISCT postdoctoral fellowship, with previous support provided by a Fondation Lefoulon-Delalande postdoctoral fellowship. P.R. is supported by the Labex ISCT PhD program. C.J. is supported by an INSERM ATIP-AVENIR grant and a Marie Curie CIG (PCIG12-GA-2012-332772). A.F., P.R. and C.J. are members of the Laboratory of Excellence «Ion Channel Science and Therapeutics» supported by a grant from the ANR. Work in the C.J. lab is supported by a grant from the Fondation Leducq and by the Fédération pour la Recherche sur le Cerveau (FRC). Work in the S.Z. lab is supported by INSERM, the Fédération Française de Cardiologie, and the Association Française contre les Myopathies (AFM-Telethon).

Conflict of interest
The authors report no relationships that could be construed as a conflict of interest.
16,083
[ "781144", "16072" ]
[ "1087315", "46221", "46221", "28675", "28674", "46221", "1087315", "46221" ]
01746129
en
[ "sdv" ]
2024/03/05 22:32:07
2017
https://hal.sorbonne-universite.fr/hal-01746129/file/ARACHNIDA-16-Leiurus-hoggarensis-compressed_sans%20marque.pdf
Wilson R. Lourenço (email: wilson.lourenco@mnhn.fr), Mohamed Lamine Kourim, Salah Eddine Sadine

A new African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) (Italian title: Una nuova specie africana del genere Leiurus Ehrenberg, 1828 — Scorpiones: Buthidae)

Keywords: Scorpion, new species, Leiurus hoggarensis sp. n., Buthidae, Algeria, Hoggar.

Abstract: A new species of buthid scorpion belonging to the genus Leiurus Ehrenberg, 1828 is described on the basis of four males and six females collected in the region of Amesmessa, Tamanrasset, in the south of Algeria. The new species, Leiurus hoggarensis sp. n., most certainly corresponds to the Leiurus population previously cited by Vachon from both the Hoggar and the Tassili N'Ajjer as Leiurus quinquestriatus. Several characteristics, however, attest that this population is unquestionably distinct from those found in Egypt, and the two species can be distinguished by a distinct coloration pattern, different morphometric values and different numbers of teeth on the pectines. The type locality of the new species represents the most westerly record of the genus Leiurus in Africa, and the new species also inhabits a more mesic zone when compared to the central compartment of the Saharan desert. Leiurus hoggarensis sp. n. apparently does not present the characteristics of a psammophilic species and may be considered a lithophilic species. This is the 12th species to be described for this buthid genus.

Introduction

As already outlined in several previous publications [START_REF] Lourenço | Description of a new species of Leiurus Ehrenberg, 1828 (Scorpiones, Buthidae) from the South of Jordan[END_REF][START_REF] Lourenço | The African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with the description of a new species[END_REF][START_REF] Lourenço | One more African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) from Somalia[END_REF], the genus Leiurus Ehrenberg, 1828 was represented over many decades by a single species, Leiurus quinquestriatus, containing two subspecies, L. quinquestriatus quinquestriatus (Ehrenberg, 1828) and L. quinquestriatus hebraeus (Birula, 1908). Leiurus quinquestriatus seems to be a common species of desert faunas in certain regions of Egypt, Sinai and Sudan, although the precise identity of some regional populations from these areas still requires further investigation [START_REF] Lowe | A review of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with description of four new species from the Arabian Peninsula[END_REF]. Contrarily, L. hebraeus Birula, 1908 (now recognized as a valid species) is largely distributed in Israel and nearby countries [START_REF] Levy | Leiurus quinquestriatus hebraeus (Birula, 1908) (Scorpiones; Buthidae) and its systematic position[END_REF]. Leiurus species are globally infamous since they secrete one of the most noxious venoms among buthid scorpions in general, and are responsible for very serious human incidents (for details refer to [START_REF] Lourenço | One more African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) from Somalia[END_REF]). Because of its infamous reputation as a very dangerous scorpion, the toxins of both L. quinquestriatus and L. hebraeus have been the subject of numerous biochemical studies (for references see [START_REF] Simard | Venoms and Toxins[END_REF][START_REF] Hammock | Structure and neurotoxicity of venoms[END_REF]).
Nevertheless, many aspects of the taxonomy of the genus Leiurus remained confused for many decades (for more details see [START_REF] Lourenço | The African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with the description of a new species[END_REF][START_REF] Lourenço | One more African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) from Somalia[END_REF]). Only in recent years were totally new species finally described for the genus Leiurus. The description which really changed most conservative views about this group of scorpions was that of Leiurus jordanensis Lourenço, Modry et Amr, 2002, described from Jordan [START_REF] Lourenço | Description of a new species of Leiurus Ehrenberg, 1828 (Scorpiones, Buthidae) from the South of Jordan[END_REF]. Just a few years later, Leiurus savanicola Lourenço, Qi et Cloudsley-Thompson, 2006 [START_REF] Lourenço | The African species of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with the description of a new species[END_REF] was described from Cameroon, representing the second confirmed species from Africa. In a more recent contribution, [START_REF] Lowe | A review of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with description of four new species from the Arabian Peninsula[END_REF] proposed, in a very extensive article, a full revision of the genus Leiurus, dealing mainly with the populations of the Arabian Peninsula. The status of some old species was revalidated, one recently described species was placed in synonymy, one subspecies was raised to species rank and four new species were described. This raised the total number of species in the genus Leiurus to ten. The characters used by these authors to define the species, as well as the proposed dichotomous key, are rather difficult to use. Nevertheless, we globally agree with these authors and, in particular, with their opinion about the African species, stated as follows: "Our findings show that, like many other scorpion genera, Leiurus is comprised of an assemblage of allopatric or parapatric species spread across different regions separated by physiographic barriers, each adapted to local environments and substrates. Additional species diversity may emerge when other local populations are analysed in more detail, for example those in southern Sinai and in more central parts of North Africa". Moreover, after the study of the original syntypes used in the description of Leiurus quinquestriatus, these same authors [START_REF] Lowe | A review of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with description of four new species from the Arabian Peninsula[END_REF] suggest as follows: "The syntypes include assorted material from the Sinai, the Nile Valley in Egypt and Sudan, and the desert region of Egypt east of the Nile. These could represent more than one species if the populations in the Sinai are distinct from those of the Nile Valley". Once again we globally agree with this suggestion. It is most obvious that the African populations of Leiurus have been largely neglected and still require intensive further study. If [START_REF] Vachon | Etude sur les Scorpions[END_REF] associated the few specimens he studied from the Hoggar with L. quinquestriatus, this can be attributed both to the characteristic uncertainty of this author and to the very limited material (mainly fragments) at his disposal.
Again, [START_REF] Vachon | Scorpions, Mission scientifique au Tassili des Ajjer[END_REF], in a synopsis of the scorpions of the Tassili N'Ajjer, another mountain range in southern Algeria, cited Leiurus quinquestriatus from this locality, but based on the study of two very young juveniles. None of these specimens was located in the collections of the Museum in Paris; they were probably deposited in other collections, such as that of the Institut Pasteur of Algeria. In his monograph on the scorpions of North Africa, [START_REF] Vachon | Etude sur les Scorpions[END_REF] also referred to several specimens collected in the Fezzan (Libya) as L. quinquestriatus. It is quite possible, however, that this population corresponds to Buthus quinquestriatus libycus Birula, 1908 (= Leiurus quinquestriatus libycus). Nevertheless, only the study of more fresh material from Libya will allow confirmation of this suggestion. In the present contribution we describe a new species based on material collected in the region of the Hoggar Massif in the south of Algeria. This raises the number of Leiurus species to twelve.

Methods

Illustrations and measurements were obtained using a Wild M5 stereomicroscope with a drawing tube and ocular micrometer. Measurements follow [START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF] and are given in mm. Trichobothrial notations follow [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF] and morphological terminology mostly follows [START_REF] Hjelle | Anatomy and morphology[END_REF].

Taxonomic treatment

Family Buthidae C. L. Koch, 1837
Genus Leiurus Ehrenberg, 1828
Leiurus hoggarensis sp. n. (Figs. 1-14)

Type material: Algeria, Amesmessa-Tamanrasset (21°03'N - 02°28'E), 3-26/X/2015 (M. L. Kourim). Male holotype, 2 male and 3 female paratypes deposited in the Muséum national d'Histoire naturelle, Paris, France; 1 male and 3 female paratypes deposited in the University of Ghardaïa, Algeria.

Etymology: the specific name makes reference to the Hoggar, the region where the new species was found.

Diagnosis. Scorpion of large size when compared with the other species of the genus, having a maximum total length of 77.7 mm for the male and 94.6 mm for the female. Ground colour yellow to orange-yellow, with the body and pedipalps almost totally orange-yellow. Male carapace with a brownish spot which covers the ocular tubercle; metasomal segment V only slightly infuscate, including in juvenile specimens; other metasomal segments orange-yellow. Ocular tubercle strongly prominent. Pectines with 32 to 34 and 26 to 29 teeth in males and females respectively. Median carinae on sternites III-IV moderately to strongly marked; sternite VII with the median intercarinal surface presenting a thin granulation. Pedipalp fingers with 11-12 or 12-12 rows of granules in both sexes.

Description based on male holotype and paratypes (morphometric measurements in Table I).

Coloration. Ground colour yellow to orange-yellow; body and pedipalps almost totally orange-yellow; legs yellow. Male carapace orange-yellow with a brownish spot which covers the ocular tubercle. Mesosomal tergites with some infuscations in the male, absent in the female. Metasoma orange-yellow on segments I to IV; segment V slightly infuscate, including in juveniles. Vesicle yellow with reddish tonalities on the lateral sides; aculeus yellow at the base and dark red at its extremity. Venter yellow to slightly orange-yellow, without spots. Chelicerae yellow without any dark reticulated spots; teeth dark red. Pedipalps yellow to orange-yellow overall, except for the rows of granules on the chela fingers, which are dark red. Legs yellow with some zones slightly orange-yellow.

Morphology. Prosoma: anterior margin of carapace with a weak concavity.
Carapace carinae moderately to strongly developed; central median and posterior median carinae moderate to strong; anterior median carinae strong; central lateral carinae moderate to strong; posterior median carinae moderate to strong, terminating distally in a small spinoid process that extends very slightly beyond the posterior margin of the carapace. All carinae better marked in males. Intercarinal spaces with very few irregular granules, the remainder of the surface almost smooth, in particular laterally and distally. Median ocular tubercle in a central position and strongly prominent; median eyes large in size and separated by about two ocular diameters. Four to five pairs of lateral eyes; the fourth and fifth are vestigial. Mesosomal tergites I-II pentacarinate; III-VI tricarinate. All carinae strong, granular, better marked in the male; each carina terminating distally in a spinoid process that extends slightly beyond the posterior margin of the tergite. Median carinae moderate on I, strong and crenulated on II-VI. Tergite VII pentacarinate, with the lateral pairs of carinae strong and fused; median carinae present on the proximal half in the female and on two thirds in the male, moderate to strong. Intercarinal spaces weakly to moderately granular. Lateral carinae absent from sternite III; moderate to strong on sternites IV-VI; strong, crenulate on VII; median carinae on sternites III-IV moderate to strong. Pectines long; pectinal tooth count 34-34 in the male holotype and 29-28 in the female paratype (see diagnosis for variation). Metasomal segments I-III with ten carinae, moderately crenulate; lateral inframedian carinae moderate on I; present on the posterior third on II; limited to a few posterior granules on III; segment IV with eight carinae. Dorsal and dorsolateral carinae moderate, without any enlarged denticles distally. All the other carinae moderate to weak on segments I-IV. Segment V with five carinae; ventromedian carinae with several slightly spinoid granules distally; anal arch with three slightly spinoid lobes, better marked in the female. Dorsal furrows of all segments weakly developed and smooth; intercarinal spaces almost smooth, with only a few granules on the ventral surface of segment V. Telson smooth; subaculear tubercle absent; aculeus as long as the vesicle. Chelicerae with two reduced denticles at the base of the movable finger [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF]. Pedipalps: trichobothrial pattern orthobothriotaxic, type A [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF]; dorsal trichobothria of femur in configuration β [START_REF] Vachon | Sur l'utilisation de la trichobothriotaxie du bras des pédipalpes des Scorpions (Arachnides) dans le classement des genres de la famille des Buthidae Simon[END_REF]. Femur pentacarinate; all carinae moderately crenulate. Patella with seven carinae; all carinae moderately to weakly crenulate; dorsointernal carinae with 2-3 spinoid granules distally. Chelae slender, with elongated fingers; all carinae weakly marked, almost vestigial. Dentate margins of fixed and movable fingers composed of 11-12 or 12-12 almost linear rows of granules in both sexes. Legs: ventral aspect of tarsi with short spiniform setae more or less arranged in two rows. Tibial spurs present on legs III and IV, moderately marked. Pedal spurs present on all legs, strongly marked.

Relationships.
Based on the key supplied by [START_REF] Lowe | A review of the genus Leiurus Ehrenberg, 1828 (Scorpiones: Buthidae) with description of four new species from the Arabian Peninsula[END_REF], the new species seems to present affinities with L. quinquestriatus 'typicus', normally distributed only in Egypt, mainly Sinai, and perhaps also in Sudan. Nevertheless, the two species differ by a number of characters: i) distinct patterns of pigmentation, with the population from the Hoggar showing a more orange-yellow colour; ii) quite distinct morphometric values for specimens of a similar global size; iii) lower pectinal tooth counts. Moreover, the geographic distributions of the African populations are not continuous. The future examination of material from Libya should confirm the existence of an intermediate population between Egypt and Algeria.

Ecology

As already outlined in a recent publication [START_REF] Lourenço | Scorpions from the region of Tamanrasset, Algeria. Part I. A new species of Buthacus Birula, 1908 (Scorpiones: Buthidae)[END_REF], the region of El Ahaggar (Tamanrasset), which corresponds to the El Ahaggar National Park, covers a very large area. It is located in the Central Massif of the southeastern Algerian region (Fig. 15) and covers a total area of ca. 450,000 km². The main locality in this area is Tamanrasset [START_REF] Wacher | lnventaire de la faune sahelosaharienn[END_REF]. The very diverse geomorphological features comprise the regs, ergs and stone plateaux (Figs. 16-17), but also very high summits such as the Tahat, which, at more than 3000 m, is the highest mountain in Algeria [START_REF] Sahki | Guide des principaux arbres et arbustes du Sahara central (Ahaggar et Tassili)[END_REF]. The region of El Ahaggar, like all the other regions around Tamanrasset, is characterized by a typical arid climate with mild winters but also important thermal amplitudes between day and night [START_REF] Kourim | Biodiversité faunistique dans le Parc National de l'Ahaggar[END_REF]. The hottest months range from June to August. Rainfall is extremely rare in the region of El Ahaggar and the average values can vary extremely according to the year; very long dry periods, lasting more than three years, can be observed. Maximum rainfall is generally observed during the hot period from June to August [START_REF] Hamdine | Conservation du Guépard (Acinonyx jubatus Schreber, 1776) de la région de l'Ahaggar et du Tassili n'Adjjer en Algérie[END_REF]. The new species described here was collected in the region of Amesmessa, which is located about 450 km NW of Tamanrasset. This site lies in a region of mountains with many sand deposits, which are a consequence of gold mining. The new Leiurus species appears to be the most common scorpion in the region of study, representing up to 76.24% in the region of Tamanrasset and up to 84.72% in the region of Amesmessa. Leiurus hoggarensis sp. n. apparently does not present the characteristics of a psammophilic species and may be considered a lithophilic species (Fig. 18).

Figs. 1-4. Leiurus hoggarensis sp. n. Habitus. 1-2. Male holotype. 3-4. Female paratype.

Figs. 5-9. Leiurus hoggarensis sp. n. Male holotype (5-8), female paratype (9). 5. Chelicera, dorsal aspect. 6. Cutting edge of movable finger, showing rows of granules. 7. Idem, detail of the extremity. 8-9. Metasomal segment V and telson, lateral aspect.
Figs. 10-14. Leiurus hoggarensis sp. n. Male holotype. Trichobothrial pattern. 10-11. Chela, dorso-external and ventral aspects. 12-13. Patella, dorsal and external aspects. 14. Femur, dorsal aspect.

Fig. 15. Map of Algeria showing in detail the region of El Ahaggar (Tamanrasset), with the type locality of Leiurus hoggarensis sp. n. (black star).

Figs. 16-17. Aspects of the biotopes found in the region of El Ahaggar, with typical sites in Amesmessa where Leiurus hoggarensis sp. n. was collected.

Fig. 18. A preadult male of Leiurus hoggarensis sp. n. alive in its natural habitat.

Table I. Morphometric values (in mm) of the male holotype and female paratype of Leiurus hoggarensis sp. n.

                                   Male holotype   Female paratype
Total length                       77.7            94.6
Carapace: length                   8.4             10.5
  anterior width                   5.8             7.2
  posterior width                  9.5             12.5
Mesosoma length                    17.8            20.7
Metasomal segment I: length        6.7             8.2
  width                            5.6             6.2
Metasomal segment II: length       8.2             9.8
  width                            5.2             5.3
Metasomal segment III: length      8.3             10.3
  width                            4.7             4.9
Metasomal segment IV: length       9.1             11.4
  width                            4.3             4.6
Metasomal segment V: length        10.5            12.5
  width                            4.0             4.6
  depth                            3.6             3.9
Telson length                      8.7             11.2
Vesicle: width                     3.4             4.2
  depth                            3.2             3.8
Pedipalp: femur length             8.9             11.1
  femur width                      2.2             2.7
  patella length                   9.8             12.3
  patella width                    2.8             3.2
  chela length                     15.8            19.9
  chela width                      2.5             3.2
  chela depth                      2.6             3.3
Movable finger: length             11.2            14.4

Acknowledgements
We are most grateful to Mohamed Kraimat (University of Ghardaïa) for his assistance both during the field work and the preparation of this article, and especially to Elise-Anne Leguin (MNHN, Paris) for her assistance in the preparation of the photos and plates.
19,484
[ "1011345" ]
[ "519585", "512759", "512759" ]
01637751
en
[ "shs", "sde" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01637751/file/Anthraco-typology%20as%20a%20key%20approach_revised.pdf
Alexa Dufraisse (email: alexa.dufraisse@mnhn.fr), Sylvie Coubray, Olivier Girardclos, Noémie Nocus, Michel Lemoine, Jean-Luc Dupouey, Dominique Marguerie

Anthraco-typology as a key approach to past firewood exploitation and woodland management reconstructions. Dendrological reference dataset modelling with dendro-anthracological tools

Keywords: firewood management, dendro-anthracological tools, anthraco-typology, anthraco-group, deciduous oak

Introduction

Anthracology and past woodland reconstruction

The questions raised by relationships between people and the environment in time and space can be explored by archaeological, ethnographic or environmental approaches. The management of the environment for (plant or animal) food strategies reflects, to some extent, human societies and their organization, their lifestyles, and their perception of the environment and the landscape in which they operate. Forest exploitation, in order to produce wood material for multiple needs, is perceptible at different scales: the tree, the woodland and the landscape (Michon, 2005, 2015). Humans "domesticate" the tree by modifying its architecture, its growth cycle, its production and its reproductive functions. This domestication also concerns the forest ecosystem, transformed by practices. Wood stands are shaped by and for the societies living in them, as a result of the installation of fields, herds and villages in forest areas. Variable spatial patterns result from this articulation between forest and agriculture: the landscapes. Firewood management contributes to this "domestication". It is part of a complex system closely related to social organization, technical and economic systems, and the environment itself (Chabal, 1997; [START_REF] Picornell Gelabert | People, trees and charcoal: somes reflections about the use of ethnoarchaeology in archaeological charcoal analysis[END_REF][START_REF] Dufraisse | Firewood and woodland management in their social, economic and ecological dimensions[END_REF][START_REF] Salavert | Understanding the impact of socio-economic activities on archaeological charcoal assemblages in temperate areas: comparative analysis of firewood management of two Neolithic societies of Western Europe (Belgium, France)[END_REF]). Thus, archaeological charcoal fragments, residues of firewood selected and transported by humans, are valuable ecofacts reflecting use, techniques and woodland management, themselves conditioned by environmental resources (available wood resources, i.e. biodiversity and biomass). In forest science, the criteria characterizing wood stands are the composition (dominant and secondary species), stand density (number of stems per hectare), structure (distribution of age and diameter classes of trees) and modes of regeneration (seeded or vegetative renewal) [START_REF] Rondeux | La mesure des arbres et des peuplements forestiers[END_REF]. The methods usually employed in dendrochronology to extract this information are not suited to anthracological material. In dendrochronology, samples usually come from timber wood, and generally from trunks and/or branches or roots; the wood is not charred; the methods are based on statistical tools that require at least 50 consecutive rings; and it is possible to individualize the signals (study of distinct elements).
In anthracology, fragments derive from trunks and/or branches or roots; the wood is charred, fragmented and incomplete, as it is partially reduced to ashes; the fragments present on average less than five rings; and they result from the exploitation of many indistinguishable individuals [START_REF] Dufraisse | Charcoal anatomy potential, wood diameter and radial growth[END_REF][START_REF] Marguerie | Short Tree ring series: the study materials of the dendro-anthracologist[END_REF]. Consequently, in the absence of adequate tools, charcoal analysis is most often limited to the study of a list of taxa and their relative frequencies, without exploiting the information contained in the wood anatomy. The identification of the morphological characteristics of harvested firewood (part of the tree, age, shape, etc.) still raises methodological problems, even though it is a fundamental element for characterizing firewood exploitation techniques and reconstructing the populational and environmental parameters of wood stands. In order to address this need to learn more about forest exploitation and practices, the ANR DENDRAC project "Development of dendrometrical tools used in anthracology: study of the interactions between Man, resources and environments" aims to convert dendroecological data, measured on fresh wood material from modern-day oak wood stands corresponding to different types of historical woodland practices, into parameters adapted to charcoal analysis, using a method similar to that developed by A. Billamboz, termed dendro-typology [START_REF] Billamboz | Applying dendro-typology to large timber series[END_REF][START_REF] Billamboz | Regional patterns of settlement and woodland developments: Dendroarchaeology in the Neolithic pile dwellings on Lake Constance (Germany)[END_REF]. His method consists in establishing a typological classification of tree-ring series according to their growth patterns. The application of a similar method in anthracology involves associating the identification of the taxa with the examination of dendrological and anatomical parameters, a concept that leads to the notion of dendro-anthracology (Marguerie et al., 2010). Deciduous European oak (Quercus petraea/robur) was chosen for its abundance in temperate forests, its anatomy with clearly identifiable growth rings and its representativity in anthracological spectra. In the present study, we postulate that the characteristics of an assemblage of tree-rings can be exploited without taking into account tree-ring series as time series, as in dendrochronology. The first step consisted in developing dendro-anthracological tools based on morpho-anatomical features. The second step was to convert dendroecological data to form an anthraco-typological grid, which could then be used as a key approach for the interpretation of archaeological charcoal assemblages. This approach was applied to three modern-day wood stands: a coppice-under-standard, a high forest and a young stand formed by a mixture of seeded and coppice trees. Analysis was conducted at different levels: the whole tree, and trunks and branches separately, in order to model different modes of wood exploitation.

The dendro-anthracological tools

Growth rate is a widely used dendro-anthracological parameter, but the successive tree-ring width series of each charcoal fragment must be localized as precisely as possible on the stem cross-section. To that end, different dendro-anthracological tools are proposed, in order: i) to distinguish sapwood from heartwood, which provides information about the minimal age of the wood (heartwood formation, i.e. duraminisation, starts when deciduous oak is around 25 years old); ii) to localize the tree-ring series with respect to the center of the stem; iii) to model dendroecological data from modern wood stands into dendro-anthracological parameters adapted to charcoal analysis.

The Heartwood-Sapwood discriminating tool

In some species the coloration of heartwood, due to the deposition of lignins and polyphenols, makes heartwood recognizable, but the charcoalification process that occurs during carbonization obliterates the colour difference, making this feature unusable in anthracology. Fortunately, in some Angiospermae, such as deciduous European oak (Quercus petraea/robur), the formation of tyloses (cellulose wall expansions) in earlywood vessels is an important feature of the changeover from sapwood to heartwood. However, tylosis formation also occurs in sapwood and increases with the formation of heartwood, from 0% of tyloses in the cambial region to close to 100% in the heartwood. Thus, we quantified the number of vessels sealed by tyloses in order to establish discriminating thresholds between sapwood and heartwood (Fig. 1a) [START_REF] Dufraisse | Contribution of tyloses quantification in early wood oak vessels to archaeological charcoal analyses: estimation of a minimum age and influences of physiological and environmental factors[END_REF]. Trunks and branches of ten deciduous oak trees from 15 to 60 years old were sampled in three stations in order to evaluate the number of earlywood vessels with tyloses in sapwood and heartwood. For an application to archaeological charcoal (tyloses are preserved up to 800°C), at least one tree ring and 15 vessels must be counted. The best strategy is to count 50 vessels spread over 3-4 tree rings. Thresholds of less than 65% for sapwood and of over 85% for heartwood are significant. Besides the discrimination of sapwood and heartwood, the process of heartwood formation starts when deciduous oak is about 25 years old; the absence of heartwood is thus an indication of the exploitation of young wood (trunks or branches).
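Read as a decision rule, the published thresholds reduce to the three-way helper sketched below. This is our paraphrase, not code from the cited study; the 65-85% band is deliberately left unassigned here, and the 15-vessel minimum is the stated validity condition.

    def classify_tyloses(n_occluded, n_counted):
        # Assign a charcoal fragment to sapwood or heartwood from the share
        # of earlywood vessels sealed by tyloses.
        if n_counted < 15:
            raise ValueError("at least 15 earlywood vessels must be counted")
        share = 100.0 * n_occluded / n_counted
        if share < 65.0:
            return "sapwood"
        if share > 85.0:
            return "heartwood"
        return "indeterminate"   # transition zone between the two thresholds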
In that aim, different dendro-anthracological tools are proposed in order i) to distinguish sapwood from heartwood which provides information about the minimal age of the wood (heartwood formation i.e. duraminisation starts when deciduous oak is around 25 years old) ii) to localize the tree-ring series in respect to the center of the stem, iii) to model dendroecological data from modern wood stands into dendro-anthracological parameters adapted to charcoal analysis. The Heartwood-Sapwood discriminating tool In some species the coloration of heartwood due to the deposition of lignins and polyphenols makes heartwood recognizable, but the charcoalification process that occurs during carbonization obliterates the colour difference, making this feature unusable in anthracology. Fortunately, in some Angiospermae, such as deciduous European oak (Quercus petraea/robur), the formation of tyloses (cellulose walls expansions) in earlywood vessels is an important feature of the changeover of sapwood to heartwood. However, tylosis formation also occurs in sapwood and increases with the formation of heartwood, from 0% of tyloses in the cambial region and close to 100% in the heartwood. Thus, we quantify the number of vessels sealed by tylosis in order to establish discriminating thresholds between sapwood and heartwood (Fig. 1a) [START_REF] Dufraisse | Contribution of tyloses quantification in early wood oak vessels to archaeological charcoal analyses: estimation of a minimum age and influences of physiological and environmental factors[END_REF]. Trunks and branches of ten deciduous oak trees from 15 to 60 years old were sampled in three stations in order to evaluate the number of earlywood vessels with tylosis in sapwood and heartwood. For an application to archaeological charcoal (tyloses are preserved until 800°C), at least one tree ring and 15 vessels must be counted. The best strategy is to count 50 vessels spread over 3-4 tree rings. Thresholds of less than 65% for sapwood and up to 85% for heartwood are R e v i s e d m a n u s c r i p t significant. Besides the discrimination of sapwood and heartwood, the process of heartwood formation starts when deciduous oak is about 25 years old. The absence of heartwood is thus an indication of the exploitation of young wood (trunks or branches). The pith estimation tool The pith estimation tool is used to measure the distance between the charcoal fragment and the center of the stem (or the missing pith), named the "charcoal-pith distance". This measurement is taken with the trigonometric pith estimation tool based on measurements of the angle and the distance between two ligneous rays (Fig. 1b) [START_REF] Dufraisse | Mesurer les diamètres du bois de feu en anthracologie. Outils dendrométriques et interprétation des données[END_REF]Paradis-Grenouillet et al., 2013). This tool was evaluated on fresh and carbonized oak wood discs with different angle values and distances between ligneous rays. This work enables us i) to propose exclusive criteria (angle < 2° and distance < 2 mm) for reducing the margin of error and improving results in archaeological applications, ii) to establish correction factors linked to the trigonometric tool itself (underestimation of distance values between 5 and 10 cm) and the shrinkage which occurs during charcoalification, iii) to highlight that there are no reliable measurements for charcoal-pith distances beyond 12.5 cm, i.e. diameter of 25 cm [START_REF] Dufraisse | Mesurer les diamètres du bois de feu en anthracologie. 
The values were ordered into diameter classes chosen to be compatible with the standards used in dendrometrical plans by foresters [START_REF] Gaudin | Dendrométrie des peuplements[END_REF][START_REF] Deleuze | Estimer le volume total d'un arbre, quelles que soient l'essence, la taille, la sylviculture[END_REF]. For Angiospermae the conventional wood cuts are 4 cm, 7 cm, 20 cm, etc. Two cuts were added at 2 and 10 cm for a more accurate interpretation of charcoal diameters, that is to say ]0-2] cm, ]2-4] cm, ]4-7] cm, ]7-10] cm, ]10-20] cm and >20 cm. For Gymnospermae it is more appropriate to add a cut at 14 cm, namely ]0-2] cm, ]2-4] cm, ]4-7] cm, ]7-10] cm, ]10-14] cm, ]14-20] cm and >20 cm. Therefore, an Analysis Diameter model (ADmodel) was developed, based on the fact that a trunk is biologically considered to be a stack of cones (Fig. 1c) [START_REF] Dufraisse | Les Habitats littoraux néolithiques des lacs de Chalain et Clairvaux (Jura, France): collecte du bois de feu, gestion de[END_REF][START_REF] Dufraisse | Charcoal anatomy potential, wood diameter and radial growth[END_REF][START_REF] Dufraisse | Mesurer les diamètres du bois de feu en anthracologie. Outils dendrométriques et interprétation des données[END_REF]. These cones are hollow and their thickness corresponds to the amplitude of the diameter classes. The model is based on a calculation table that provides the respective distribution of these cones in terms of volume. The ADmodel breaks down an unburnt wood diameter into an expected distribution of charcoal-pith distances. In return, the ADmodel is a helpful tool to interpret the distribution of charcoal-pith distances from a charcoal assemblage in terms of unburnt wood diameter (UWD). However, this model does not reconstruct the initial quantity of burnt wood [START_REF] Théry-Parisot | Charcoal analysis and wood diameter: inductive and deductive methodological approaches for the study of firewood collecting practices[END_REF]. In the present study, only the UWD decomposition mode of the ADmodel is used, and each cone thickness was also characterized by a growth rate (cumulated tree-ring width divided by the number of tree rings) and its sapwood/heartwood affiliation.
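The "stack of hollow cones" idea is plain solid geometry, which the sketch below reproduces for a single conical stem. It is only an illustration of the principle: the published ADmodel relies on its own calculation table, and the stem length and perfectly conical taper assumed here are free simplifications.

    import math

    def shell_volume_shares(stem_diameter_cm, cuts_cm=(0, 2, 4, 7, 10, 20), length_cm=100.0):
        # Share of a conical stem's wood volume lying in each hollow shell
        # bounded by the diameter-class cuts (charcoal-pith distance classes).
        R = stem_diameter_cm / 2.0

        def vol_within(r):
            # wood volume within radial distance r of the pith, inside a cone
            r = min(r, R)
            return math.pi * r ** 2 * length_cm * (1.0 - 2.0 * r / (3.0 * R))

        total = math.pi * R ** 2 * length_cm / 3.0
        radii = [c / 2.0 for c in cuts_cm if c / 2.0 < R] + [R]
        return {f"]{2 * lo:g}-{2 * hi:g}] cm":
                    (vol_within(hi) - vol_within(lo)) / total
                for lo, hi in zip(radii, radii[1:])}

    # e.g. shell_volume_shares(16.0) gives the expected charcoal-pith
    # distance distribution for a 16 cm stem reduced entirely to charcoal.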
Material and Methods

The general analytical protocol consists of sampling modern-day oak woodlands corresponding to specific archaeological questions, removing logs from felled trees, cutting wood discs from the logs and producing experimental charcoal assemblages (Fig. 2). Various kinds of datasets were produced: i) dendrometrical plans to characterize tree morphology and wood stands (composition, structure, stand density, regeneration modes), ii) dendrochronological data from the wood discs, and iii) anthracological data modelled with the dendro-anthracological tools.

With respect to historical woodland practices, and to address specific archaeological questions such as the distinction between branch and trunk or between coppice and high forest, three contrasted deciduous oak stands managed by the National Forestry Office (ONF) in France were chosen (Fig. 3). The first one, "Les Cagouillères", is located in the Vienne department on a limestone plateau (altitude: 115 m). It is an old abandoned coppice, about 62 years old, currently undergoing conversion to high forest. The second stand, "Bogny-sur-Meuse", is located in the Ardennes department; it is a coppice-with-standards growing on an acidic brown soil over schists, about 68 years old. The third stand, "Le Bois de l'Or", also located in the Ardennes department near Bogny-sur-Meuse, is a young stand, about 15 years old, formed by a mixture of even-aged seeded and coppice trees (altitude: 350 m).

Stand analysis

In order to characterize the wood stands, forest inventory and dendrometric data were compiled. The basal area (m²/ha), stand density (number of stems per hectare), dominant height of the trees and mean quadratic diameter were recorded, distinguishing trees with diameters of more and less than 7.5 cm (Table 1, Fig. 4).

Dendrometry and tree-ring analysis

The dendrological information for each tree, such as diameter, age, growth rate and radial growth trend, was recorded at breast height in the field and from the disc located at 1.30 m above ground, as is usual in dendroecology. However, the nature and representativeness of the samples differ between dendroecology and anthracology. Consequently, for the conversion into anthracological data according to anthracological constraints, the dendrological data were measured over the whole tree.

For the study of tree ring-climate relations in sessile oak, six trees is considered the optimal sample size. For our purpose, and taking into account our archaeological questions, one to five trees were felled and recorded metre by metre: one dominant tree in the coppice-with-standards at "Bogny-sur-Meuse", four dominant stems from distinct multi-stem trees at "Les Cagouillères", and five coppice shoots and five seeded trees at "Le Bois de l'Or". For each tree, the total height, the height of insertion of the first large branch on the stem and the height of the crown base were recorded, as well as the diameter at breast height and the regeneration mode. The set of tree-morphology indicators is presented in Table 1 and Figure 4.

In order to estimate the relative proportion of trunk and branches for each tree and each stand, each tree was cut into 1-metre-long logs, including branches with a diameter of more than 4 cm. A code was attributed to each log according to its position in the tree (height, number of branches, location in the branch). The length and the circumference (at three points) of each log were measured to calculate its mean diameter and volume. Branches with a diameter of less than 4 cm were packed into bundles according to two diameter classes, 0-2 cm and 2-4 cm, and each bundle was weighed. Sub-samples of wood were collected from each bundle to estimate the wood density and then calculate the volume of each bundle.

In order to characterize each tree, and then each wood stand, at different levels (whole tree, and trunks and branches separately), one disc was removed from the extremity of each log. In the present study, a subsample of the set of discs was analyzed by selecting discs at different heights in the trunk and in the crown: 23 discs for the four trees at "Les Cagouillères", 14 discs for the tree at "Bogny-sur-Meuse" and 77 discs for the 10 trees at "Le Bois de l'Or" (Tables 2a, 2b). The tree-ring widths (discriminating earlywood and latewood) of each disc were measured to the nearest 0.01 mm using a LINTAB measurement device and the associated TSAP software (Frank Rinn, Heidelberg, Germany) along 5 radii, and averaged in order to reduce intra-tree variability. Each tree ring was then associated with a diameter class (calculated from the cumulated ring widths) and with sapwood or heartwood; the proportion of sapwood and heartwood was thus characterized by averaging tree-ring numbers, tree-ring widths and wood volumes.
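A minimal sketch of this per-ring tagging step is shown below. The data layout (ring widths listed from the pith outwards, in mm) and the fallback rule for series without an observed sapwood/heartwood boundary are assumptions for illustration; the 25-year heartwood threshold follows the figure given earlier for deciduous oak.

```python
CLASS_LIMITS = [0.0, 2.0, 4.0, 7.0, 10.0, 20.0, float("inf")]  # diameters, cm

def tag_rings(ring_widths_mm, n_sapwood_rings=None):
    """Associate each tree ring with a diameter class and a tissue type.

    ring_widths_mm: ring widths from the pith outwards.
    n_sapwood_rings: number of outermost sapwood rings, if observed; when
    unknown, series shorter than ~25 years are assumed to be all sapwood
    (no heartwood yet in deciduous oak).
    """
    tagged, radius_cm = [], 0.0
    n = len(ring_widths_mm)
    for i, width in enumerate(ring_widths_mm):
        radius_cm += width / 10.0
        diameter = 2.0 * radius_cm  # cumulated diameter reached by this ring
        klass = next(f"]{a:g}-{b:g}]"
                     for a, b in zip(CLASS_LIMITS, CLASS_LIMITS[1:])
                     if a < diameter <= b)
        if n_sapwood_rings is not None:
            tissue = "sapwood" if i >= n - n_sapwood_rings else "heartwood"
        else:
            tissue = "sapwood" if n < 25 else "unknown"
        tagged.append((i + 1, round(diameter, 2), klass, tissue))
    return tagged
```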
The usual dendro-anthracological parameters were first considered independently of each other, to obtain a "whole tree" estimation and then for the trunks and branches separately: i) the distribution of growth-ring widths, ii) the proportion of sapwood/heartwood, and iii) the distribution of the decomposed unburnt wood diameters (UWD).

Results

3.1. Dendrological features of the three wood stands (Fig. 4)

For the four sampled 62-year-old trees from "Les Cagouillères", the average diameter at breast height is 20.75 cm, the average tree height is 17.7 m, and 90.4% of the wood volume belongs to the trunk. The low proportion of branches with a diameter of less than 7 cm reflects an undeveloped crown (probably due to competition, a consequence of the abandonment of forest management). The dominant tree at "Bogny-sur-Meuse" is 68 years old with a diameter of 33 cm at breast height. Its height is 20.3 m, with the first large branch at 7.7 m and a more developed crown; branches represent 37.4% of the tree volume and can reach a diameter of 20 cm. The trees at "Le Bois de l'Or" are 14 years old; their diameters average 10.21 cm, they are 8.6 m high, the trunk forms 78.38% of the volume and the branches are less than 7 cm in diameter. The tree at "Bogny-sur-Meuse" is thus less slender than the trees at "Les Cagouillères" and "Le Bois de l'Or" (see the height/diameter ratio, Table 1).

In the three sampled stands, trunk volume is always predominant and branches are poorly represented. The 20-40 cm diameter class is the best represented at "Bogny-sur-Meuse", whereas the 10-20 cm diameter class characterizes "Les Cagouillères". The main volume at "Le Bois de l'Or" is distributed in the 7-10 cm diameter class, but a few trees reach 11 cm and thus belong to the 10-20 cm class.

Radial growth rates and growth trends differ between stands. Tree-ring widths average 1.23 mm/year at the "Les Cagouillères" coppice, and the growth trend has been decreasing over the past 20 years owing to strong competition between shoots, within trees and between stools. At "Bogny-sur-Meuse", growth-ring widths average 1.35 mm/year and the growth trend has been decreasing slightly over the past 20 years. At "Le Bois de l'Or", growth-ring widths average 2.99 mm/year and are marked during the first 1 to 10 years by a steady increase in the coppice trees, while the seeded trees are characterized by narrower rings than the coppice from around the pith up to 6-7 years, followed by an intensive growth period and then a relatively sudden decrease (for more details, see [START_REF] Girardclos | Improving identification of coppiced and seeded tress in past woodland management by comparing growth and wood anatomy of living sessile oaks (Quercus petraea)[END_REF]).

3.2. Simple dendro-anthracological parameters

3.2.1. Growth rate

The distribution of the growth rates reveals differences at both stand and tree level (Fig. 5a). First, the differences in growth rate observed in § 3.1, based on a single disc located at 1.30 m in the trunk, are conserved when the whole tree is taken into account, which is more realistic for anthracology. The growth rate at "Le Bois de l'Or" is the highest, followed by "Bogny-sur-Meuse" and "Les Cagouillères".
For a given stand, we also note a significant difference between trunks and branches, the latter being characterized by a lower rate. Moreover, considering the different parts of the trunk (base, top, upper part in the crown), the annual ring width at the top of the bole is wider than in the lower part, and the growth rate of the part of the trunk located in the crown is comparable with that of the branches (Fig. 5b). This latter observation is, however, less clear at "Le Bois de l'Or".

3.2.2. Sapwood/heartwood

The trees at "Le Bois de l'Or", less than 15 years old, are characterized by the absence of heartwood, contrasting with "Bogny-sur-Meuse" and "Les Cagouillères" (Fig. 6). At "Les Cagouillères", however, heartwood formation has not yet been initiated in the branches. Conversely, both the trunk and the branches of the dominant tree at "Bogny-sur-Meuse" contain heartwood and sapwood. The relative proportion of sapwood in the trunk is lower at "Bogny-sur-Meuse" than at "Les Cagouillères", as is the average number of sapwood tree rings. Nevertheless, the average sapwood ring width is higher at "Bogny-sur-Meuse", reflecting more vigorous growth.

3.2.3. Diameters

The unburnt wood diameters (UWD) were decomposed with the ADmodel according to the relative volume of each hollow cone composing the logs (Figs. 1c, 7). The raw dendrological data indicate that there is little overlap between the diameters of branches and trunks; in fact, the low proportion of trunk represented in the smallest diameter classes corresponds to the upper part of the trunk located in the crown. Therefore, for each wood stand, the distribution of the decomposed UWD of the branches is clearly distinct from that of the trunk. Besides, as the volume of branches is low, the wood diameter pattern for whole trees does not show clear differences from that of the trunk. It should be borne in mind that burnt wood undergoes both mass loss and charcoal fragmentation; consequently, the distribution of the charcoal-pith distances does not directly indicate unburnt wood diameter.

3.3. Combination of dendro-anthracological parameters

3.3.1. Decomposed UWD versus tree-ring width

The first combination consists in assessing the growth trends characterizing each wood stand. In dendroecology, growth trends are obtained by combining tree-ring width and cambial age. Given that i) the analysis of tree-ring patterns in segments of cambial age is considered relevant for studying forest dynamics and development [START_REF] Haneca | Growth trends reveal the forest structure during Roman and Medieval times in Western Europe: a comparison between archaeological and actual oak ring series (Quercus robur and Quercus petraea)[END_REF] and ii) the distance of the charcoal from the pith can be estimated with the charcoal-pith tool, we combined tree-ring width with the diameter classes. For the dendrological data, the average tree-ring width was calculated for each cambial age; for the modelled anthracological data, the average tree-ring width was calculated for each diameter class (Fig. 8).

The radial growth trends of the three wood stands are different, and the modelled anthracological data correspond well to their dendrological characteristics. Even though the anthracological data are smoother because of the calculation of an average ring width per diameter class, the radial growth trends consist of i) a strong increase in the radial growth of the trees at "Le Bois de l'Or", reflecting free juvenile growth, ii) an increase followed by a decrease at "Les Cagouillères", due to the high density of trees over a long period of time, and iii) a slight decrease over the life of the tree at "Bogny-sur-Meuse", consistent with a managed coppice-with-standards. However, the differences observed between seeded and coppice trees at "Le Bois de l'Or" are no longer evident. The branches at "Les Cagouillères" and "Bogny-sur-Meuse" are characterized by a lower growth rate than the corresponding trunks (cf. § 3.2.1) and by a downward growth trend. In contrast, the young seeded and coppice trunks at "Le Bois de l'Or", with diameters comparable to those of the branches, are characterized by a clearly higher growth rate and a more upward trend. The radial growth rate of whole trees is lower in the first diameter classes than in the trunk considered separately, as it includes the low growth rates of the branches; radial growth then increases from the diameter threshold separating branches from trunks.
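The transform from a set of tagged rings to the "modelled anthracological" curves of Fig. 8 can be sketched as below. It simply averages ring widths per diameter class; the input format (pairs of diameter-class label and ring width, e.g. as produced by a tagging step such as the one sketched earlier) is an assumption.

```python
from collections import defaultdict

def mean_ring_width_per_class(tagged_rings):
    """Average tree-ring width per diameter class over many rings/fragments.

    tagged_rings: iterable of (diameter_class_label, ring_width_mm) pairs.
    Returns {class_label: mean ring width in mm}, the anthracological
    analogue of a mean growth curve expressed against cambial age.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for klass, width in tagged_rings:
        sums[klass] += width
        counts[klass] += 1
    return {k: sums[k] / counts[k] for k in sums}
```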
3.3.2. Diameter classes versus heartwood/sapwood

The second combination aims to improve the interpretation of the distribution of the decomposed UWD by associating it with the presence or absence of heartwood and with the sapwood/heartwood ratio in each diameter class, as decomposed by the ADmodel (Fig. 9). The distribution of heartwood/sapwood according to the diameter classes shows specific patterns for the different wood stands and the possible exploitation modes (whole trees, trunks/branches separately). At "Les Cagouillères", where the branches are characterized by the absence of heartwood, the volume of the trunk is mainly distributed in the penultimate diameter class. The pattern of the whole trees is similar to that of the trunk, as the branches only represent 9.58% of the volume. At "Bogny-sur-Meuse" the same pattern is observed, but the main volume falls in the last two diameter classes. Regarding the whole tree, however, sapwood is better represented in the small diameter classes than at "Les Cagouillères", as the branches account for 37.4% of the tree volume. While the mature trees contain a central heartwood core (reflected by heartwood in the small diameter classes) and peripheral sapwood (reflected by sapwood in the largest diameter classes), the absence of heartwood in the trunks from "Le Bois de l'Or" and in the branches from "Les Cagouillères" is consistent with young trunks and young branches respectively (less than 25 years old for oak); these are characterized by small diameters with sapwood, and by the absence of heartwood and of large diameters. The biggest branches of the tree from "Bogny-sur-Meuse", i.e. 10-20 cm, contain small amounts of heartwood, with traces of heartwood in the smaller classes.

3.3.3. Tree-ring width versus diameter classes versus sapwood/heartwood

The third combination consists in combining tree-ring width with the decomposed UWD and their respective affiliation to sapwood or heartwood (Fig. 10). Globally, the patterns of whole trees and trunks from a given stand are similar. This is less obvious at "Bogny-sur-Meuse", where no disc from the upper part of the trunk, without heartwood, has yet been studied. We can nevertheless expect the same pattern, characterized by sapwood and heartwood in all the diameter classes and by a lower average tree-ring width in the sapwood, corresponding to the external rings, which is coherent with the growth dynamics of trees [START_REF] Fritts | Tree Rings and Climate[END_REF]. The exploitation of branches alone is clearly distinct, with a low growth rate and the absence of heartwood in the case of young branches, as at "Les Cagouillères". If the branches are a little older, as at "Bogny-sur-Meuse", heartwood is absent in the largest diameter classes.
Lastly, the young, vigorous seeded and coppice trees are characterized by a high growth rate in sapwood, while heartwood is absent.

Discussion and application to charcoal assemblages

The dendrological characteristics of each wood stand, discriminating branches, trunks and whole trees, were defined with the help of the dendro-anthracological tools. The dendro-anthracological parameters (growth rate, heartwood/sapwood, diameters) were first recorded independently of each other and then combined, forming anthraco-types (Fig. 11).

First of all, annual ring width was considered individually. Considering the whole tree and the trunk, the ring-width distribution is significantly different among stands. However, the distributions of seeded and coppice trees at "Le Bois de l'Or" are not significantly different, and neither are those of the branches at "Les Cagouillères" and "Bogny-sur-Meuse". In each stand, the branches are characterized by a lower growth rate than the trunk. This observation is in agreement with the variation of annual tree-ring width along the stem, marked by a slight increase from the base to the top of the trunk and a strong decrease in the upper part of the trunk (in the crown). These results are similar to those of [START_REF] Dhôte | Profil de la tige et géométrie de l'aubier chez le Chêne sessile (Quercus petraea Liebl.)[END_REF], based on 82 Quercus petraea distributed over five regions of France. Consequently, growth conditions are mainly recorded in the trunk, and branches should be avoided for palaeo-environmental reconstruction. This result fits with the method of D. Marguerie and J.-Y. Hunot (2007), whose principle is to keep only tree-ring width measurements based on charcoal with a large charcoal-pith distance.

At the scale of a charcoal assemblage, these data can be obtained by measuring each tree ring of each charcoal fragment and averaging them (per fragment). However, their interpretation remains problematic at this stage, as the fragments may come from different wood stands, from trunks and/or branches, and it is not possible to distinguish them. In addition, it is difficult to interpret a growth rate without contemporary, diachronic or modern-day reference standards.

The presence or absence of heartwood and the proportion of sapwood/heartwood are good indicators of the maturity of the wood. In anthracology, sapwood and heartwood can be distinguished using the proportion of vessels sealed by tyloses [START_REF] Dufraisse | Contribution of tyloses quantification in early wood oak vessels to archaeological charcoal analyses: estimation of a minimum age and influences of physiological and environmental factors[END_REF], and each fragment can then be affiliated to sapwood or heartwood. However, although the absence of heartwood reflects the exploitation of young trees, it is difficult to interpret sapwood and heartwood proportions as long as external and internal sapwood are not differentiated.

Unburnt wood diameter (UWD) was decomposed using the ADmodel. In a charcoal assemblage, charcoal diameters are obtained by measuring the charcoal-pith distance. The results indicate a diameter limit between branches and trunks for each wood stand, with almost no overlap, which is in agreement with the literature [START_REF] Deleuze | Estimer le volume total d'un arbre, quelles que soient l'essence, la taille, la sylviculture[END_REF]. However, the exploitation of whole trees is difficult to distinguish from the exploitation of trunks on account of the low branch volume: if we hypothesize the exploitation of whole trees, the proportion of branches will be inconspicuous and difficult to distinguish from the exploitation of trunks. In addition, as regards the exploitation of different wood stands, it is problematic to differentiate branches from young trunks solely on the basis of the diameter distribution.

Thus, growth rate, heartwood/sapwood and wood diameters are three parameters that can be applied to charcoal assemblages.
However, their use independently of each other is somewhat limited and sometimes difficult to interpret, despite their information potential.

A first combination consisted in associating the heartwood/sapwood and diameter parameters in order i) to differentiate two kinds of sapwood, namely external sapwood in mature wood and internal sapwood (absence of heartwood) in young wood, and ii) to improve the interpretation of the distribution of the decomposed UWD. Specific patterns were recorded according to the wood stands and the exploitation modes (whole trees, trunks/branches separately). Young wood (trunks or branches) is characterized by the absence of heartwood and by small diameter classes, whereas mature wood is characterized by heartwood in the small diameter classes and sapwood in the largest ones. With a view to application to charcoal assemblages, this first combination yields four groups of charcoal fragments depending on their position in the wood: i) small diameter associated with sapwood, corresponding to young wood; ii) small diameter associated with heartwood, corresponding to the internal part of mature wood; iii) large diameter associated with heartwood, corresponding to the middle part of mature wood; and iv) large diameter associated with sapwood, corresponding to the external part of mature wood.

The association of growth rates with the sapwood/heartwood ratio can provide information about the vigour of wood stands and tree morphology. For example, the proportion of sapwood is higher in the trunks from "Les Cagouillères" (high forest) than in the trunk of the dominant tree at "Bogny-sur-Meuse" (coppice-with-standards), yet the average sapwood ring width and sapwood width are higher at "Bogny-sur-Meuse" than at "Les Cagouillères" (Fig. 7). This observation shows that i) for a given age (Bogny: 68 years old, Cagouillères: 62 years old), the most vigorous trees have a more extensive sapwood surface [START_REF] Lebourgeois | Les chênes sessile et pédonculé (Quercus petraea Liebl. et Quercus robur L.) dans le réseau RENECOFOR : rythme de croissance radiale, anatomie du bois, de l'aubier et de l'écorce[END_REF] and ii) sapwood width is higher in coppice-with-standards than in high forest [START_REF] Dhôte | Profil de la tige et géométrie de l'aubier chez le Chêne sessile (Quercus petraea Liebl.)[END_REF]. Thus, the under-representation of sapwood in the trunk of the tree at "Bogny-sur-Meuse" is probably due to its larger diameter, 33 cm as opposed to 20.75 cm.

The third combination consists in associating tree-ring width and diameters (the distribution of the decomposed unburnt wood diameters). For an application to a charcoal assemblage, each tree ring is associated with a charcoal-pith distance, then with a diameter class, and finally an average tree-ring width is calculated for each diameter class. The radial growth trends appear to be preserved, in keeping with the dendrological radial growth. An original pattern, marked by a low growth rate over the smallest diameter classes followed by a higher rate in the largest ones, may be characteristic of the exploitation of whole trees. However, as is often the case in dendroecology, one pattern may correspond to several scenarios: here, for example, a partial clearing of the wood stand could lead to a comparable growth trend. Interpretations therefore have to be confronted with the results established by other disciplines.
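A sketch of the four-group classification defined above is given below. The 7 cm cut-off used to separate "small" from "large" diameters follows the forestry limit used in the anthracological key presented later; fixing it here is an assumption on our part, since the text does not state the cut-off for this particular grouping.

```python
def position_group(diameter_cm, tissue):
    """Assign a charcoal fragment to one of the four positional groups
    obtained by crossing diameter (small/large) with sapwood/heartwood."""
    small = diameter_cm <= 7.0  # assumed threshold (branch/trunk forestry limit)
    if small and tissue == "sapwood":
        return "young wood"
    if small and tissue == "heartwood":
        return "internal part of mature wood"
    if not small and tissue == "heartwood":
        return "middle part of mature wood"
    if not small and tissue == "sapwood":
        return "external part of mature wood"
    return "indeterminate"
```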
In addition, an initial distinction between young trunks (coppice) and young branches becomes possible, as their growth rates and growth trends differ (a high rate and an upward trend for coppice, a low rate and a downward trend for branches). However, no further distinction is visible between the coppice and seeded trees at "Le Bois de l'Or"; in fact, the proportion of earlywood is only significant when the radius exceeds 1.6 cm [START_REF] Girardclos | Improving identification of coppiced and seeded tress in past woodland management by comparing growth and wood anatomy of living sessile oaks (Quercus petraea)[END_REF].

The last combination is the association of all the dendro-anthracological parameters: heartwood/sapwood, tree-ring width and diameters. Besides the distinction between young and mature wood based on the association between heartwood/sapwood and diameters, it becomes possible to discriminate branches from trunks among young wood. Indeed, branches are characterized by sapwood, a low growth rate and a rather downward growth trend, whereas young trunks (coppice and seeded trees) are characterized by sapwood, a high growth rate and a rather upward trend. Specific patterns appear depending on the stand and the potential types of wood exploitation (trunks and/or branches). Anthracological types could thus be defined, forming an interpretative grid which can act as a useful key for the interpretation of archaeological charcoal assemblages.

Moreover, the dendrological information recorded is not the same depending on the position in the tree. For example, the information recorded in the tree-ring width depends on the position of the charcoal fragment: tree-ring width and growth trend in young wood may be a good indicator of the origin of the wood within the tree (crown or bole), whereas stand characteristics (stand density, reflecting strong or low competition between trees) are more perceptible in the trunk, i.e. in the large diameters of mature wood (Marguerie and Hunot, 2007).

These results entail a new approach to anthracological material: charcoal fragments have to be sorted according to their position in the stem cross-section and in the tree. For that purpose, an anthracological key based on the dendro-anthracological parameters and forming anthraco-groups is proposed (Fig. 12). Each oak fragment is characterized by a charcoal-pith distance, a sapwood/heartwood affiliation and an annual tree-ring width. The first division, at the threshold of a diameter of 7 cm, is often used by foresters and corresponds to the diameter limit between branches and trunks in deciduous oak forests. Concerning tree-ring width, charcoal fragments with regular and irregular tree-ring width series are taken into account separately. For example, in northern France, according to V. Bernard (1998, p. 96), narrow rings are less than 0.7 mm/year and large rings are between 0.7 and 3 mm/year for deciduous oak; very large rings, above 3 mm, can also be considered (i.e. 12 groups). The use of this anthracological key enables us to sort charcoal fragments according to their position in the tree; the measurements of each batch can then be processed separately.

To close, it is important to make several remarks concerning the dendro-anthracological tools and their applications.

i) The application of the dendro-anthracological tools requires a minimum transversal plane size of about 4 mm x 4 mm and at least one whole growth ring.
The optimal number of fragments to analyze is around 100 per sampling unit (structure, layer, etc., according to the question addressed).

ii) The choice of the diameter classes, selected to be compatible with the standards used in dendrometrical plans by foresters, appears relevant. A charcoal fragment may, however, be classified in one class or its neighbour when the value of the charcoal-pith distance is close to a limit, but the interpretation is usually not affected.

iii) Given that there is a boundary between the diameters of trunks and branches within a wood stand, and that the part of the trunk located in the crown presents the same dendrological characteristics as a branch, it is more relevant and accurate for charcoal analysis to distinguish the bole from the crown rather than the trunk from the branches when considering oak, and probably Angiospermae more generally. In Gymnospermae, by contrast, the trunk can easily be followed up to the apex, with a clear separation of the branch material. This bole/crown or trunk/branch distinction therefore has to be adapted to the architecture of the tree. In addition, variations in growth rates are often considered and interpreted in terms of environmental (light, soil or climate) and human factors (clearings or woodland management); we must, however, keep in mind that they can also result from a change in exploitation techniques (whole trees, trunks, branches). The use of the anthracological key may allow the growth-ring width signal to be classified and thus bring more accurate information.

iv) Shrinkage during charcoalification leads to lower tree-ring widths. This process is not uniform, depending on sapwood/heartwood and on the charcoal-pith distance. A preliminary study on shrinkage offers promising results with a view to proposing correction factors [START_REF] Martinez | Correction factors on archaeological wood diameter estimation[END_REF].

v) The relative frequency of the different taxa in charcoal assemblages is taken to be representative of the biomass used (wood volume). In the same way, the use of the dendro-anthracological parameters is based on the assumption that the charcoal fragments represent the different parts of the trees proportionally to their volume, with their dendrological characteristics (growth, sapwood/heartwood ratio, diameter). That is why the ADmodel is based on wood volume (and not on the number of fragments). We stress, however, that this model cannot reconstruct the quantity of wood initially burnt.

vi) As for the interpretation of tree-ring width (Marguerie, 1992, p. 72; Marguerie and Hunot, 2007), several conditions are required to interpret the dendro-anthracological parameters: the charcoal assemblages must come from numerous trees, the tree-ring series must be randomly distributed in the transversal sections of the charcoal fragments, the ring series must be numerous enough and of homogeneous width, the acquisition areas must be subjected to the same climatic influences, and the geological substratum must be homogeneous.
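A sketch of the sorting performed by the anthracological key described above (Fig. 12) could look as follows; it extends the positional grouping sketched earlier with the ring-width categories. The exact branching of the published key is given in Fig. 12, so the order of the tests here is an illustrative assumption; the ring-width cut-offs follow V. Bernard's values for deciduous oak in northern France.

```python
def anthraco_group(pith_distance_cm, tissue, mean_ring_width_mm):
    """Sort an oak charcoal fragment into one of 12 anthraco-groups:
    2 diameter classes x 2 tissues x 3 ring-width categories."""
    size = "d<=7cm" if 2.0 * pith_distance_cm <= 7.0 else "d>7cm"  # forestry limit
    if mean_ring_width_mm < 0.7:
        rings = "narrow rings (<0.7 mm/yr)"
    elif mean_ring_width_mm <= 3.0:
        rings = "large rings (0.7-3 mm/yr)"
    else:
        rings = "very large rings (>3 mm/yr)"
    return f"{size} / {tissue} / {rings}"

# Example: a fragment 2.5 cm from the pith, sapwood, 2.1 mm/yr ->
# "d<=7cm / sapwood / large rings (0.7-3 mm/yr)"
```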
Conclusion

In line with the work of D. Marguerie (Marguerie, 1992; Marguerie and Hunot, 2007; [START_REF] Marguerie | Short Tree ring series: the study materials of the dendro-anthracologist[END_REF]; Marguerie et al., 2010), combining charcoal identification and dendrological examination, the aim of this study was to improve these methods by assessing whether it was pertinent to develop quantitative measurements, such as estimating the charcoal-pith distance, and whether the combination of dendro-anthracological parameters provides new information on wood exploitation and forest management.

Besides the measurement of tree-ring width, the present study is based on the development of three anthracological tools consisting in i) measuring the charcoal-pith distance, ii) discriminating heartwood from sapwood and iii) modelling dendrological data to make them compatible with charcoal analysis. Three dendro-anthracological parameters, i.e. growth-ring width, charcoal-pith distances and heartwood/sapwood, modelled with the ADmodel, were tested on modern-day oak wood stands chosen with respect to historical woodland practices: a coppice-with-standards, an old coppice undergoing conversion to high forest and a young stand formed by a mixture of seeded and coppice trees. For a more realistic representation of the dendrological data according to anthracological constraints, different levels of analysis were considered (the whole tree, and the trunks and branches separately), allowing us to further consider various modes of wood exploitation.

The dendro-anthracological parameters taken into account independently of each other provide interesting results but rather limited interpretation, especially for tree-ring width or sapwood/heartwood. Indeed, the dendrological information cannot be interpreted in the same way depending on its position in the tree; for example, growth conditions, and thus palaeo-environmental information, are essentially recorded in the trunk. In contrast, the combination of the dendro-anthracological parameters highlights specific patterns between organs, stands and regeneration modes, and enables us to establish an anthraco-typology forming an interpretative grid. A major result here is the identification of the position of a charcoal fragment as belonging to young wood or to the internal/middle/external parts of mature wood, and the distinction between branches and young trunks when associated with the tree-ring width. These results lead to the establishment of an anthracological key aiming to sort charcoal fragments into anthraco-groups according to their position in the tree and their growth rate. Finally, these results offer new opportunities for the interpretation of archaeological charcoal assemblages, as well as for the development of new dendro-anthracological tools adapted to species other than deciduous oak.
Acknowledgment

The authors thank the Agence Nationale de la Recherche (ANR JCJC 200101 DENDRAC, dir. A. Dufraisse) for financing this study and Louise Byrne for the English correction. They are also grateful to two anonymous reviewers for their valuable remarks and suggestions, which helped to improve this publication.

Captions

Table 1. Dendrological characteristics of each wood stand and sampled trees.

Fig. 1. Dendro-anthracological tools.

Fig. 2. General analytical protocol developed in the ANR DENDRAC program. Experimental charcoal assemblages are not considered in this paper.

Fig. 4. Main dendrological characteristics of the wood stands: modes of regeneration, average age, average diameter at breast height, relative proportion of trunks and branches (expressed according to volume), distribution of the diameters of trunks and branches (each log and its volume was attributed to an unburnt wood diameter class), average growth rate and growth trends (tree-ring width measurements were taken on each disc at a height of 1.30 m, along 5 radii, and averaged).

Fig. 5. Annual ring width, averaged from 5 radii in each disc. (a) Distribution of annual ring width (maximum and minimum values, 1st and 3rd quartiles and median) considering whole trees (white), trunks (brown) and branches (green). (b) Distribution of the annual ring width along the trunks and in the branches.

Fig. 6. Relative proportion of heartwood (brown) and sapwood (yellow) for each stand, considering whole trees, trunks and branches. The volume proportion of sapwood and heartwood was estimated for each disc, then for each log and tree, and averaged for each wood stand. The average number of sapwood tree rings, average sapwood width (cm) and average sapwood growth rate (mm/year) are indicated in boxes when heartwood is present.

Fig. 8. Dendrological data (simple line): average tree-ring width calculated by cambial age. Modelled anthracological data (solid line): average tree-ring width calculated for each diameter class.

Fig. 10. Average tree-ring width according to the diameter classes (decomposed UWD) and their respective affiliation to sapwood (yellow) or heartwood (brown).

Fig. 11. Anthraco-typology for deciduous oak: an interpretative grid for archaeological charcoal assemblages.

Table 2a. Analyzed wood discs and dendrological characteristics: Bogny-sur-Meuse; Les Cagouillères.

Table 2b. Analyzed wood discs and dendrological characteristics: Bois de l'Or.
Abbreviations

TMOS: Tetramethoxysilane
APTES: Aminopropyltriethoxysilane
GPS: 3-Glycidoxypropyltrimethoxysilane
PDDA: Poly(dimethyldiallylammonium chloride)
PAA: Poly(allylamine)
Ferrocene-functionalized organoalkoxysilane
MWCNTs: Multi-walled carbon nanotubes
SWCNTs: Single-walled carbon nanotubes
DSDH: D-sorbitol dehydrogenase
CV: Cyclic voltammetry
UV: UV-vis spectrophotometry
IR: Infrared spectrophotometry
GPES: General Purpose Electrochemical System
SECM: Scanning electrochemical microscopy
SEM: Scanning electron microscopy

Keywords: Silica, sol-gel, dehydrogenase, NAD+/NADH cofactor, mediator, polyelectrolyte, bioencapsulation, electrogeneration, films, macroporous electrodes, carbon nanotubes

Preface

Considerable interest has been devoted to the development of electrochemical reactors for the manufacture of fine chemicals with dehydrogenases, as a process with almost zero waste emission. Such a system requires that all the active compounds (cofactor, mediator and dehydrogenase) be functionally immobilized on the working electrode surface, in such a way that the dehydrogenases are durably immobilized and active, their cofactor is durably immobilized close to the enzyme, and the mediator reduces the overpotential without leaching. It is, however, still a challenge to construct this kind of functional layer with long-term stability. This is the goal of the present thesis, prepared in the frame of a European program (ERUDESP).

Stable immobilization of active enzymes on electrodes is a prerequisite for bio-electrochemical applications. Sol-gel-derived silica-based materials have proven to provide a rather suitable environment for the entrapment of biomolecules, ensuring conformations similar to those in aqueous solution and even enhanced stability. The quite recent development of the electrochemically-assisted deposition of sol-gel silica thin films opens new opportunities for enzyme encapsulation in sol-gel thin films on electrode surfaces [1]. In this method, a sufficiently negative potential is applied to the electrode surface to generate hydroxyl ions, which play the role of catalyst for the polycondensation [2]. In comparison with the traditional sol-gel methods, which involve deposition by evaporation (dip-, spin- or spray-coating) and can only be applied to basically flat surfaces, the combination of electrochemistry with the sol-gel process makes possible the well-controlled modification of complex electrode surfaces, for example macroporous gold electrodes. This approach is promising for electrochemical biocatalysis applications. Although there is a long history of studying and using sol-gel materials to encapsulate enzymes, little has been reported regarding the encapsulation of enzymes in sol-gel silica films in the course of their electrodeposition, especially for dehydrogenases.

The realization of cofactor regeneration with both the dehydrogenase and its cofactor immobilized is the biggest challenge in the construction of the functional layer. One obstacle is that the water-soluble cofactor is a relatively small molecule, so it is likely to diffuse away from the electrode surface into solution, thus limiting the long-term durability of the modified electrode. Another obstacle is that the high overpotential at electrode surfaces leads to undesired side reactions producing enzymatically-inactive dimers and isomers of the cofactor.
As far as biocatalysis with electrochemical regeneration of the cofactor is concerned, the effectiveness of electron transfer is a key parameter affecting the performance of the process, so that resorting to charge-transfer mediators has often been proposed to improve the turnover of the overall reaction. However, further difficulties may arise with many mediators because of their poor stability or because of the electrode modification procedures. Taking into account the above problems that are likely to arise in the construction of such functional films, this research developed a series of sol-gel immobilization matrices to improve the performance and long-term stability of the biocomposite films. The details are summarized as follows.

In Chapter I, the first part is devoted to a brief introduction to the ERUDESP project and to a definition of the subject. We then describe the main immobilization methods for the enzyme, the cofactor and the mediator reported so far in the literature.

In the experimental section (Chapter II), we describe the physico-chemical properties of the studied compounds, the various sol-gel preparation methods, the electrode modifications and the experimental techniques used in this work.

In Chapter III, we show the feasibility of dehydrogenase encapsulation in sol-gel matrices by both drop-coating and electrodeposition. DSDH was found to be very sensitive to the silica gel environment; the influence of polyelectrolyte additives on the sol-gel encapsulation of dehydrogenases was therefore first evaluated by drop-coating. We then report that the electrochemically-assisted deposition of silica thin films can be a good strategy for DSDH immobilization, as well as for the co-immobilization of DSDH and diaphorase in an active form (diaphorase is a useful enzyme to catalyze cofactor regeneration in a smooth way).

In Chapter IV, various strategies for cofactor immobilization in sol-gel matrices are compared, i.e. the simple encapsulation of the native cofactor, the encapsulation via attachment to a high-molecular-weight compound (NAD-dextran), the adsorption on carbon nanotubes introduced into the sol-gel matrix and, finally, the use of glycidoxypropylsilane (GPS) as an additive.

In Chapter V, the immobilization of mediators (ferrocene species and an osmium polymer) in the sol-gel matrix is first studied, including the influence of GPS as an additive for mediator immobilization. The feasibility of co-immobilization based on a sol-gel film is then evaluated by one-step drop-coating and electrodeposition.

In Chapter VI, different strategies for mediator immobilization on CNTs are developed. Here, co-immobilization strategies based on a CNT/sol-gel matrix are used to develop a reagentless device, because of the problems encountered with mediator immobilization through one-step electrodeposition in Chapter V. A first layer of CNTs functionalized with the mediator is covered with an additional drop-coated or electrodeposited sol-gel layer containing the dehydrogenase (and optionally diaphorase) and the cofactor covalently bonded to GPS.

Chapter I. Introduction (summary)

This introductory chapter first presents the ERUDESP project, within the framework of which this thesis work took place. The objective of this study was the co-immobilization, within a sol-gel thin film, of a dehydrogenase, of the enzymatic cofactor NAD+ and of the electrochemical mediator catalyzing the electrochemical regeneration of this cofactor.
This thin film was then to be deposited on the internal surface of a macroporous gold electrode and integrated into the reactor for application in enzymatic electrosynthesis (the latter work being carried out within the ERUDESP project, but outside the scope of this thesis). The state of the art on the different aspects of this project is then given (Chap. I). The methods commonly used to immobilize a protein in an active form on an electrode surface are presented. The sol-gel process and its application to bioencapsulation are described. Finally, the electrochemically-assisted generation of sol-gel thin films is presented, together with its use for protein immobilization. We then discuss the difficulties and the requirements concerning the immobilization of the enzymatic cofactor NAD+ and its electrochemical regeneration with the help of electrochemical mediators, in the particular context of enzymatic electrosynthesis. The conventional methods for cofactor regeneration are then presented. Finally, a review of the existing work on the co-immobilization of dehydrogenases, of the enzymatic cofactor and of mediators is given.

The presentation of this experimental study is organized in chapters describing the successive stages of this research. The encapsulation of a dehydrogenase within the sol-gel matrix is described first, by drop-coating and by electro-assisted generation. Although the encapsulation of proteins, and even of dehydrogenases, within sol-gel thin films had already been described, it quickly became apparent that the silica matrix strongly disturbs the enzymatic activity of the protein. The addition of positive charges within the sol-gel material, by introducing positively charged polyelectrolytes into the sol, then allowed the encapsulation of dehydrogenases in their active form. The sol-gel material can then be deposited on the electrode as a thin film by evaporation or by electro-assisted generation (Chap. III). The co-immobilization of the dehydrogenase with a diaphorase was also studied; the diaphorase catalyzes the oxidation of NADH in the presence of an electrochemical mediator in solution, for example ferrocenedimethanol.

The dehydrogenase catalyzes the oxidation or reduction of the enzymatic substrate in the presence of the NAD+/NADH cofactor. Unlike other enzymatic cofactors that are bound to the protein (for example FAD), NAD+ is free to diffuse in solution. There is therefore great interest in immobilizing this molecule in order to limit the cost of the process (by decreasing the quantity of molecules used). Different immobilization strategies are compared in Chapter IV: simple encapsulation in the sol-gel, adsorption on immobilized carbon nanotubes, encapsulation of NAD+ chemically bound to the dextran macromolecule to limit its diffusion and, finally, condensation with the epoxide group of glycidoxypropylsilane (GPS). This last system was then used to elaborate a sol-gel thin film in which the dehydrogenase (and possibly the diaphorase), the enzymatic cofactor NAD+ and the electrochemical mediator are co-immobilized. Several strategies were implemented, incorporating the mediator in the sol-gel matrix (Chap. V) or using functionalized carbon nanotubes (Chap. VI).
Particular attention was paid to the preparation of the sol-gel thin film by electro-assisted generation. We will thus see that, while many systems can work when they are prepared by evaporation, it is much more difficult to achieve the co-immobilization of all the elements while keeping them in communication when the material is generated electrochemically. Although most of the work was directed towards a bioelectrocatalytic system operating in oxidation, we also studied the immobilization on carbon nanotubes of a rhodium-based electrochemical mediator catalyzing the reduction of NAD+.

Chapter I. Introduction

In this chapter, we start with an introduction to the ERUDESP project and the definition of the PhD subject. We then discuss the immobilization of enzymes on electrode surfaces: we recall some of the conventional enzyme immobilization methods and, in particular, we introduce sol-gel materials for enzyme encapsulation and the electrochemically-assisted generation of silica films for bioencapsulation. After that, we discuss the need for cofactor immobilization and regeneration, as well as the immobilization of charge-transfer catalysts on electrode surfaces: we explain the necessity and the difficulties of cofactor immobilization and regeneration, and introduce the conventional methods used to modify the electrode and to regenerate the cofactor. Finally, we describe functional films for the co-immobilization of dehydrogenase, cofactor and mediator, and review the work in the literature dealing with such reagentless systems.

ERUDESP project

The full title of the project is "Development of Electrochemical Reactors Using Dehydrogenases for Enantiopure Synthon Preparations" (ERUDESP website: http://www.erudesp.eu). The main objective of the project is the development of electrochemical reactors for the manufacture of fine chemicals with dehydrogenases, as a process with almost zero waste. The production of enantiopure compounds with high enantiomeric excess (ee) can be achieved by using dehydrogenases as biocatalysts, because some of these enzymes express high enantioselectivity in ketone reduction combined with broad substrate spectra. As these dehydrogenases typically require co-substrate regeneration with the aid of a second enzymatic reaction, we are looking for alternative solutions for cofactor regeneration, to avoid contamination of the reaction fluid by other proteins and chemical compounds. In this project, an electrochemical approach is used for the regeneration of the cofactors.

Hydrogen gas is oxidized into protons and electrons on the anode side of the cell; the protons then diffuse through a Nafion membrane to the cathode side. If all the active compounds, i.e. the mediator, the cofactor and the dehydrogenase, can be functionally immobilized on the working electrode surface (on the cathode of the cell), the educt in the input flow is reduced to the product in the output flow, avoiding any contamination. Interestingly, the same electrochemical reactor can also be used for oxidation reactions; in this case, all the active compounds are immobilized on the anode side of the cell, and oxygen continuously passes over the cathode, where it reacts with the electrons and the protons (coming from the oxidation reaction at the anode side) to form water. A gas-diffusion electrode is employed as the counter electrode; it delivers clean protons (no liquid anolyte!)
to the catholyte, and simultaneously reduces the cell voltage by about 1 V; undesired side reactions and degradation processes are thereby suppressed, and the long-term stability of the whole electro-enzymatic system is improved. There are six participants in this project; Participant 1 is Saarland University (Germany). We are Participant 3, and my PhD research focused on designing functional layers based on silica sol-gel thin films to co-immobilize the enzyme, the cofactor and the electron mediator so as to obtain active systems; such modifications of electrode surfaces should also be adaptable to the macroporous electrodes. Initially, the reduction of prochiral ketones to enantiopure hydroxylated products was the most desirable reaction. However, a major obstacle was encountered with the loss of reduction-mediator activity upon immobilization on the electrode surface. The analysis of the actual market by the industrial partner demonstrated that the production of rare sugars was also of interest. Therefore, the oxidation of sugar alcohols to rare sugars with electrochemical cofactor oxidation has also come into the focus of the project.

Immobilization of enzymes on electrode surfaces

The development of a simple and effective strategy for immobilizing enzymes on or into an electrode is a crucial step in the design and fabrication of electrochemical biosensors, bioreactors or biofuel cells. Ideally, this immobilization should be totally irreversible and stable with time, without deactivation of the biomolecule, while maintaining excellent accessibility to this biomolecule and ensuring a certain conformational mobility [1]. Adsorption [2,3], covalent grafting [4,5] and entrapment [6,7] are the conventional immobilization methods. Adsorption is a rather soft and the simplest immobilization method, but its disadvantage is the low mechanical strength of the assembly: desorption is regularly observed when the environmental conditions (pH, ionic strength) change. Covalent grafting is a chemical immobilization technique consisting in creating a covalent bond between the sensitive element and the support. This technique permits the orientation of the grafted molecules and thus the optimization of the recognition probability; however, deactivation and the irreversible immobilization of the enzyme components restrict the performance of this type of enzyme immobilization. Entrapment of the receptor in a host matrix can be achieved by simply mixing the various components and depositing the mixture onto a suitable support, and this method is the most widespread. A large diversity of materials is used: inorganic materials, and natural or synthetic organic polymers. However, steric stresses and interactions with the matrix may denature the species, and diffusional limitations may occur when the receptor is not sufficiently accessible. Other immobilization schemes and advanced materials that can improve the analytical capabilities of sensor devices are therefore highly desirable.

The last decade has seen a revolution in the area of sol-gel-derived biomaterials since the demonstration that these materials can be used to encapsulate biological species such as enzymes, antibodies and other proteins in a functional state [8]. Upon encapsulation, the biomolecules retain their spectroscopic properties and biological activity [9,10]. Silica sol-gel materials offer advantages such as improved mechanical strength and chemical stability.
Such materials do not swell in aqueous or organic solvents, which prevents leaching of the entrapped biomolecules. Silica sol-gel materials are particularly interesting because they can be synthesized with a large variety of organic functionalities, such as hydrophobic or hydrophilic ones [11,12,13]. They can be used to retain metallic complexes (e.g., mediators) by covalent grafting, by pH-dependent electrostatic interaction or simply by adsorption through weak physical bonds [11]. The trapped biomolecules reside in an interconnected mesoporous network and become part of the entire material [13], and they usually exhibit better activity and longer lifetimes than free enzymes. During encapsulation, they remain trapped within a silica cage tailored to their size, which provides a chemical surrounding that favours their activity (Figure I-2). Sol-gel immobilization is characterized by physical entrapment without chemical modification. This approach also permits the biomolecules to be isolated and stabilized against aggressive chemical and thermal environments [10,13]. While the relatively large biomolecules are immobilized within the silica network, small ions or molecules can easily be transported into the interior of the matrix, which has been largely exploited in the field of biosensors [9,14,15].

Figure I-2. Enzymes entrapped in sol-gel matrices [8].

Sol-gel process [16]

A sol is a stable dispersion of colloidal particles or polymers in a liquid; colloids are solid particles with diameters of 1-100 nm. A gel is an interconnected, rigid network with pores of submicrometer dimensions and polymeric chains whose average length is greater than a micrometer. A silica gel may be obtained by the formation of an interconnected 3-D network through the simultaneous hydrolysis and polycondensation of a precursor. When the pore liquid is removed as a gas phase from the interconnected solid gel network under hypercritical conditions, the network does not collapse and a low-density aerogel is produced. The sol-gel process generally involves the use of alkoxide precursors, which undergo hydrolysis, condensation, aging and drying to give gels or xerogels.

Hydrolysis

The preparation of a silica glass begins with an appropriate alkoxide, such as Si(OR)4, where R is mainly CH3, C2H5 or C3H7, which is mixed with water and a mutual solvent to form a solution. Hydrolysis leads to the formation of silanol groups (SiOH). It is well established that the presence of H3O+ in the solution increases the rate of the hydrolysis reaction.

Condensation

In a condensation reaction, two partially hydrolyzed molecules can link together by forming siloxane bonds (Si-O-Si). This type of reaction can continue to build larger and larger silicon-containing molecules (linkage of additional Si-OH) and eventually results in a SiO2 network. The H2O (or alcohol) expelled by the reaction remains in the pores of the network. When sufficient interconnected Si-O-Si bonds are formed in a region, they respond cooperatively as colloidal (submicrometer) particles, i.e. a sol.
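For clarity, the hydrolysis and condensation steps described above can be written as the usual reaction equations (shown here for a tetraalkoxysilane; R = CH3 for TMOS):

```latex
\begin{align*}
\text{Hydrolysis:} \quad
  & \equiv\!\mathrm{Si{-}OR} + \mathrm{H_2O}
    \;\rightleftharpoons\; \equiv\!\mathrm{Si{-}OH} + \mathrm{ROH} \\
\text{Water condensation:} \quad
  & \equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv
    \;\rightleftharpoons\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{H_2O} \\
\text{Alcohol condensation:} \quad
  & \equiv\!\mathrm{Si{-}OR} + \mathrm{HO{-}Si}\!\equiv
    \;\rightleftharpoons\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{ROH}
\end{align*}
```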
The gel morphology is influenced by the temperature, the concentration of each species (attention focusing on the R ratio, R = [H2O]/[Si(OR)4]) and, especially, the acidity:

• Acid catalysis generally produces weakly crosslinked gels which easily compact under drying conditions, yielding low-porosity, microporous (pores smaller than 2 nm) xerogel structures.

• Under some conditions, base-catalyzed and two-step acid-base catalyzed gels (initial polymerization under acidic conditions and further gelation under basic conditions [17,18]) exhibit hierarchical structure and complex network topology (Figure I-3c).

Aging

Gel aging is an extension of the gelation step, in which the gel network is reinforced through further polymerization, possibly under different temperature and solvent conditions. During aging, polycondensation continues along with localized dissolution and reprecipitation of the gel network, which increases the thickness of the interparticle necks and decreases the porosity. The strength of the gel thereby increases with aging, and an aged gel must develop sufficient strength to resist cracking during drying.

Drying

The gel drying process consists in the removal of water from the interconnected pore network, with simultaneous collapse of the gel structure, under conditions of constant temperature, pressure and humidity. Large capillary stresses can develop during drying when the pores are small (<20 nm). These stresses will cause the gels to crack catastrophically unless the drying process is controlled: by decreasing the liquid surface energy through the addition of surfactants or the elimination of very small pores, by hypercritical evaporation, which avoids the solid-liquid interface, or by obtaining monodisperse pore sizes through control of the rates of hydrolysis and condensation.

Optimization of the sol-gel process for bioencapsulation

Sol-gel is known to be a suitable matrix for bioencapsulation; however, sol-gel conditions are sometimes not mild enough for biomolecules. Recent efforts have been made to optimize the process by controlling the porosity of the bioencapsulates and the chemical environment of the trapped species [8]. This notably involves the use of biocompatible silane precursors, of sugars and the amino acid N-methylglycine, or of polymers.

Biocompatible silane precursors

The release of alcohol during the hydrolysis-condensation of silicon alkoxides has been considered an obstacle, owing to its potential denaturing activity towards the entrapped biological moiety. Biocompatible silane precursors have to be used to avoid enzyme denaturation due to alcohol release. TMOS, Si(OMe)4, is therefore currently used instead of TEOS, Si(OEt)4, as methanol is less harmful than ethanol. However, some enzymes are especially sensitive to traces of alcohol, so the usual two-step alkoxide route has to be modified. One way to overcome this drawback is to remove the alcohol by evaporation under vacuum, in order to obtain a fully hydrolyzed solution before adding the enzymes [20]. Aqueous sol-gel processes have been developed in order to avoid any trace of alcohol, for example using a sodium silicate solution [21] or a mixture of sodium silicate and a Ludox suspension [22]. Another way to avoid denaturation by alcohol is to use biocompatible alcohols, such as polyol-based silanes carrying hydrolyzable groups that can be hydrolyzed under mild pH conditions [23]. A biocompatible reagent, glycerol, is then produced, so that the catalytic efficiency and long-term stability of the enzymes are significantly improved [24].
Sugar and amino acid N-methylglycine additives Sugar and N-methylglycine additives can be used to stabilize enzymes within sol-gel matrices. Chymotrypsin and ribonuclease T1 have been trapped in the presence of sorbitol and N-methylglycine. Both osmolytes significantly increase the thermal stability and biological activity of the proteins by altering their hydration and increasing the pore size of the silica matrix [25]. D-glucolactone and D-maltolactone have also been covalently bound to the silica network via the coupling reagent aminopropyltriethoxysilane (APTES), giving non-hydrolyzable sugar moieties. Firefly luciferase, trapped in such matrices, has been used for the ultrasensitive detection of ATP via bioluminescent reactions [26]. Polymer additives The silica matrix forms around the trapped biomolecule, but some shrinkage always occurs during the condensation process and the drying of the gel. The resulting stresses can lead to partial denaturation of the enzymes. Polymers have been used as additives to form hybrid organic-inorganic gels in order to reduce shrinkage via a 'pore filling' effect. Polymer additives such as polyethylene glycol (PEG) or polyvinyl alcohol (PVA) have been introduced in sol-gel matrices to increase the catalytic activity of the entrapped enzyme [27,28,29]. Electrostatic interactions may also occur between silicate sites and specific residues on the protein surface. Silica surfaces are negatively charged above the point of zero charge (pH ≈ 3) and electrostatic interactions mainly depend on the isoelectric point of the protein. Sometimes, these electrostatic interactions decrease the catalytic activity of the enzyme. However, their detrimental effects can be reduced by complexing the enzyme with a polyelectrolyte that shields the critical charged sites [30,31]. Electrochemically-assisted generation of silica films Principle and significance Usually, sol-gel silica films are prepared by polycondensation of silicon alkoxides (alone or in the presence of organosilanes) as induced by evaporation of a sol solution via spin-coating, dip-coating or spraying. When the sol is doped with biomolecules, the silica framework is formed around them, leading to the desired composite films containing physically encapsulated biomolecules [32]. Such "conventional" sol-gel film formation procedures are however restricted to flat surfaces and are unsuited to porous substrates. At the end of the nineties, a novel method to prepare silica-based thin films on electrode surfaces was proposed by Shacham et al. [33], involving an elegant combination of electrochemistry with the sol-gel process. The principle (Figure I-4) is based on the electrochemical manipulation of the pH at the electrode surface, thereby affecting the kinetics of the sol-gel process. The electrode is immersed in a stable silica sol (mild acidic medium: pH 3-4) and a negative potential is applied to increase the pH locally at the electrode/solution interface, inducing polycondensation of the silica precursors only on the electrode surface; this makes the process applicable to the deposition of thin films on non-planar surfaces [34]. Electrochemically-assisted generation of sol-gel thin films has been applied to produce porous silica deposits [35,36], zirconia or titania thin films [37,38], as well as protective layers against corrosion [39]. 
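The local pH increase underlying this process is commonly ascribed to cathodic reactions that generate hydroxide ions at the working electrode; a standard sketch (not specific to this work) is:

$$\mathrm{2\,H_2O + 2\,e^- \rightarrow H_2\uparrow + 2\,OH^-}$$
$$\mathrm{O_2 + 2\,H_2O + 4\,e^- \rightarrow 4\,OH^-}$$

The OH- produced at the interface catalyzes the polycondensation of the hydrolyzed precursors, so gelation is confined to the electrode surface and the film thickness can be tuned through the applied potential and the deposition time.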
The versatility of this novel process was also exploited to produce functionalized silica coatings [40] or molecularly imprinted silica films [41], which can be used for electrochemical sensing purposes [41]. Beyond these considerations, it is noteworthy that such electrochemically-assisted deposition can be advantageously combined with the surfactant templating process to generate highly ordered mesoporous silica films with a unique mesopore orientation normal to the underlying support [42,43]; this was also exploited to prepare vertically aligned silica mesochannels bearing organo-functional groups [44,45]. Application in bioencapsulation Electrochemically-assisted generation of sol-gel thin films is attractive for biomolecule immobilization. Figure I-5 shows the process of protein encapsulation in a thin sol-gel film through electrodeposition. When a biomolecule is present in the starting sol, the electrodeposition process leads to its encapsulation within the thin film; after washing and drying, an electrode modified with a biomolecule-containing thin sol-gel film is obtained. This possibility was demonstrated recently by several research groups [46,47,48]. Nadzhafova et al. showed that the electrogeneration of silica gel (SG) films on glassy carbon electrodes (GCEs) can be applied to immobilize biomolecules, namely hemoglobin (Hb), glucose oxidase (GOD), or a mixture of both [46]. After encapsulation, Hb was found to keep its peroxidase properties and GOD its enzymatic activity, and the biomolecules are located very close to the electrode surface, as evidenced by their direct electrochemistry. At the same time, Xia et al. presented a strategy for the one-step immobilization of GOD in a 3D porous silica matrix using an electrochemically promoted sol-gel process and a bubble template [47]; the resulting porous silica structure was shown to greatly reduce mass-transport resistance. Tian et al. reported the development of microelectrode biosensors based on a Ruthenium Purple (RP)-coated gold electrode [48]. A gel layer of the desired thickness is formed on the RP-modified electrode using a controllable sol-gel deposition technique to fabricate ATP and hypoxanthine amperometric biosensors. The formation of the gel film is not affected by the inner RP layer and the bioactivity of the enzymes entrapped in the gel film is well retained. Despite these examples, the application of electrochemically-assisted generation of sol-gel thin films to bioencapsulation has been little exploited. To date, and to our knowledge, no attempt has been made to use this synthetic approach to entrap dehydrogenases in sol-gel silica films in the course of their electrodeposition; moreover, such modifications have not been applied to non-planar supports to functionalize the internal surfaces of macroporous (pore size, 0.5-1 µm) gold electrodes. Dehydrogenase-based electrochemical reactor: the need for immobilization and cofactor regeneration Enzymes and cofactor Electroenzymatic reactions provide a promising route to synthesize chiral compounds. Furthermore, in most cases no side reactions occur and downstream processing can therefore be simplified. It is reported that there are more than 250 NAD-dependent dehydrogenases and 150 NADP-dependent dehydrogenases which can catalyze the oxidation/reduction of a variety of substrates. The advantage of dehydrogenases over oxidases is that they are oxygen-independent, more abundant, and more substrate-specific [49]. 
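In the generic case (the stoichiometry below is standard for NAD-dependent dehydrogenases and is given only for orientation), the oxidation of an alcohol-type substrate can be written as:

$$\mathrm{R{-}CH(OH){-}R' + NAD^+ \underset{dehydrogenase}{\rightleftharpoons} R{-}CO{-}R' + NADH + H^+}$$

For the D-sorbitol dehydrogenase used later in this work, the substrate/product couple is D-sorbitol/D-fructose; since the reaction is an equilibrium that can be driven in either direction, continuous regeneration of the cofactor is required.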
The dehydrogenase needs a small molecule, called a cofactor, in order to be active [50]. The cofactor acts as an acceptor or donor of small groups, atoms or electrons and provides the driving force for the oxidation or reduction of the substrate. However, the applications of dehydrogenases in electrosynthesis and biosensors are not yet a success story. This can be attributed to some restrictions in the use of dehydrogenases. The primary limitation is that, in contrast to oxidases, which have redox cofactors tightly bound within their molecules, the NAD(P) cofactor is not bound to dehydrogenase molecules. Therefore, unlike oxidase-based biosensors, dehydrogenase-based systems cannot be readily incorporated into reagentless devices. The latter require that both the dehydrogenase and its cofactor are immobilized in such a way that the cofactor has easy access to the enzyme. Another limitation in using dehydrogenases is the fact that their cofactors are recycled only at high potentials at most electrodes, which leads to interferences from redox-active species and usually yields enzymatically inactive NAD-dimers [49]. Cofactor immobilization Physical entrapment The easiest way to immobilize the cofactor is to bulk-modify carbon-paste-like materials, and this approach was quickly adopted for the majority of studies involving NAD-dependent dehydrogenase electrodes. In this case the cofactor is mixed with carbon powder and a binder (typically paraffin oil) before being packed into a cavity to form the electrode [51]. Noguer et al. designed an amperometric acetaldehyde biosensor based on sol-gel immobilization of aldehyde dehydrogenase (AlDH) and NAD+ on screen-printed electrodes [52]. However, simple encapsulation displays a major drawback, as it leads to rapid leaching of the cofactor into the solution during electrochemical operation. Covalent bonding One possibility to improve the stability of the immobilization is the chemical attachment of the cofactor to a macromolecule that can be encapsulated or immobilized on the electrode without leaching. Mobility of the cofactor is vital for its efficient interaction with the enzyme. A spacer is usually linked to the adenine moiety of the NAD(P)+ molecule, and should provide some flexibility for the bioactive part of the cofactor, allowing it to associate with the enzyme molecules. One approach of cofactor immobilization involves the covalent coupling of NAD(P)+ derivatives to water-soluble polymers such as dextran [53,54,55], poly(ethylenimine) (PEI) [56], poly(ethyleneglycol) (PEG) [57] or chitosan [49]. For example, a reagentless ethanol biosensor was developed by incorporating alcohol dehydrogenase, NADH oxidase and NAD+-dextran in a poly(vinylalcohol) (PVA) matrix on an electrode surface [54]. 
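Since NADH oxidase reappears below as the NAD+-regenerating enzyme, its reaction is sketched here (standard stoichiometry for the H2O2-forming enzyme used in such sensors; shown for orientation only):

$$\mathrm{NADH + H^+ + O_2 \xrightarrow{NADH\ oxidase} NAD^+ + H_2O_2}$$

The hydrogen peroxide produced can then be detected amperometrically at a platinum electrode via its oxidation, $\mathrm{H_2O_2 \rightarrow O_2 + 2\,H^+ + 2\,e^-}$.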
Zheng et al. [56] explored a reagentless biosensor using a ferrocene-labeled high-molecular-weight cofactor derivative (PEI-Fc-NAD+). Mak et al. [57] presented an amperometric formate biosensor using FDH and SHL co-immobilized with PEG-NAD+ (MW = 20 000) in a photopolymerized PVA matrix in front of a Clark electrode. Zhang et al. [49] developed an electrochemical biosensor through the co-immobilization of glucose dehydrogenase (GDH) and its cofactor NAD+ on the scaffold of the biopolymer chitosan; in this method, the dehydrogenase and the cofactor are covalently attached to the polymeric chains on the surface of the electrode. This electrode displays good operational stability during continuous 25-h experiments. Of related interest is the immobilization of the cofactor on particles. Liu et al. [58] reported that the cofactor NAD+ covalently attached to silica nanoparticles could be successfully coordinated with particle-immobilized enzymes and enabled multistep biotransformations. In this method, NAD+ was immobilized onto the silica particles by forming covalent bonds with the epoxide groups of glycidoxypropyltrimethoxysilane (GPS)-functionalized silica nanoparticles. One limitation of covalent coupling comes from the rather complex modification, which results in a substantial decrease of cofactor efficiency. However, the reaction between the epoxide group and the adenine moiety of the cofactor is rather simple, which allows good cofactor activity to be retained with high stability. In the present work, we will consider GPS for the direct attachment of NAD+ in sol-gel thin films. π-π stacking The soft immobilization of NAD+ on carbon nanotubes by π-π stacking is also a promising avenue to avoid a decrease of its efficiency. Zhou et al. [59] described a facile approach to the preparation of integrated dehydrogenase-based electrochemical biosensors through noncovalent attachment of NAD+ onto carbon nanotubes, exploiting the interaction between the adenine subunit of NAD+ molecules and multiwalled carbon nanotubes (MWCNTs). Compared with existing methods for surface confinement of the NAD+ cofactor, this method is simple and is thus envisaged to be useful for the general development of integrated dehydrogenase-based bioelectrochemical devices. To date, however, this method has not been evaluated in combination with an additional electron-transfer mediator. Electrochemical cofactor regeneration The electrochemistry of NAD(P)+/NAD(P)H is highly irreversible, and the oxidation of NADH or the reduction of NAD+ at bare electrodes only occurs at high overpotentials. Several efficient methods have been developed for cofactor regeneration. 
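The reason direct NAD+ reduction is avoided can be sketched as follows (a standard description): one-electron reduction produces a radical that dimerizes to an enzymatically inactive species instead of the active 1,4-NADH.

$$\mathrm{NAD^+ + e^- \rightarrow NAD^{\bullet}}$$
$$\mathrm{2\,NAD^{\bullet} \rightarrow (NAD)_2 \quad (enzymatically\ inactive\ dimer)}$$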
Syntheses where the cofactor NAD(P)+/NAD(P)H has to be regenerated to its oxidized/reduced state can be carried out with direct, indirect or enzyme-coupled electrochemical cofactor regeneration. Indirect electrochemical regeneration Indirect electrochemical regeneration is widely used to overcome the problems of the high overpotential and the formation of inactive NAD-dimers. A mediator meeting the following criteria has to be applied [60]: (1) the mediator must be stable; (2) the electrochemical activation of the mediator must be possible at suitable potentials; (3) the mediator must not transfer the electrons to the substrate; (4) only enzymatically active cofactor must be formed. NAD(P)+-dependent oxidation reactions For the efficient electrochemical oxidation of NAD(P)H, mediated electrocatalysis is necessary, and a wide range of mediators has been studied. Organic compounds that undergo two-electron reduction-oxidation processes and also function as proton acceptors/donors upon their redox transformations have been found to be ideal for the mediation of NAD(P)H oxidation. Redox mediators such as quinones [62,63,64], diamines [65,66] and phenazine and phenoxazine dyes [67,68,69] have been used to recycle NADH back to the enzymatically active NAD+. The activity of these mediators toward NADH oxidation has been explained in terms of a hydride-transfer mechanism in which the mediator accepts the hydride and has the ability to delocalize the electrons. The oxidation of NADH has also been investigated using ferrocene derivatives [70], transition-metal complexes [71], conductive polymers [72,73], nitrofluorenone derivatives [74,75], dichloroindophenol [76], and tetracyanoquinodimethane-tetrathiafulvalene [77]. Some of these compounds demonstrate very high rates for the mediated oxidation of NAD(P)H in aqueous solutions. 
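A hedged sketch of this mediation scheme, taking a quinone Q as the representative two-electron/two-proton mediator (the formal hydride transfer is shown for illustration only):

$$\mathrm{NADH + Q + H^+ \rightarrow NAD^+ + QH_2}$$
$$\mathrm{QH_2 \rightarrow Q + 2\,H^+ + 2\,e^- \quad (at\ the\ electrode)}$$

The net result is that NADH is oxidized at the (much lower) redox potential of the mediator rather than at its own high overpotential on the bare electrode.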
However, such mediator-based electrodes have displayed intrinsic difficulties related to the limited stability of the mediators and their leaching from the electrodes. Another approach to the facilitated oxidation of NADH relies on electrodes based on different forms of carbon, e.g., carbon nanotubes (CNTs) [78,79,80] and pyrolytic graphite [81]. Such materials significantly decrease the NADH overpotential, which has been ascribed to the edge-plane sites/defects present in pyrolytic graphite and suspected in CNTs [82]. Recently, an interesting paper demonstrated that a further decrease in the NADH overpotential can be achieved at CNTs activated by microwaving in concentrated nitric acid [83], as indicated by a shift in the anodic peak potential of NADH (E_NADH) from 0.4 V to 0.0 V. NAD(P)H-dependent reduction reactions Contrary to the wide range of mediators available for NADH oxidation, the number of mediators for the regeneration of reduced cofactors is relatively small. The mediator for the regeneration of reduced cofactors must indeed operate at potentials less cathodic than -0.9 V (otherwise direct electrochemical reduction of NAD(P)+ will lead to dimer formation) and more cathodic than the standard potential of the cofactor redox couple (i.e., -0.59 V vs. SCE for NAD+/NADH [84]) to make the reduction reaction thermodynamically feasible. The systems fulfilling these requirements to date are (2,2'-bipyridyl)rhodium complexes, Rh(bpy). The electrocatalytic process includes the regioselective transfer of two electrons and a proton to NAD(P)+. In these systems, hydrido-rhodium species are assumed to be the active catalytic moiety. Tris(bipyridine)rhodium(III) [85], (pentamethylcyclopentadienyl-2,2'-bipyridine-chloro)rhodium(III) [86,87], and chlorotris[diphenyl(m-sulfonatophenyl)phosphine]rhodium(I) [88] have been used as homogeneous mediators of electrons to NAD(P)+. 
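For the widely used Cp*Rh(bpy) catalyst, the commonly accepted cycle (sketched here under the usual mechanistic assumptions from the literature cited above) involves the electrochemical formation of a rhodium hydride that delivers its hydride regioselectively to NAD+:

$$\mathrm{[Cp^*Rh(bpy)(H_2O)]^{2+} + 2\,e^- \rightarrow [Cp^*Rh(bpy)] + H_2O}$$
$$\mathrm{[Cp^*Rh(bpy)] + H^+ \rightarrow [Cp^*Rh(bpy)H]^+}$$
$$\mathrm{[Cp^*Rh(bpy)H]^+ + NAD^+ + H_2O \rightarrow [Cp^*Rh(bpy)(H_2O)]^{2+} + 1{,}4\text{-}NADH}$$

The regioselective hydride delivery is what ensures that only the enzymatically active 1,4-NADH isomer is formed.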
The catalytic efficiency of a series of Rh complexes has been studied [89], and it was shown that the catalyst activity decreases in the presence of electron-withdrawing substituents on the 2,2'-bipyridine ligand and increases with electron-donating substituents. Substituents in the 6-position of the ligand slow the catalytic reaction because of steric effects. Structure-activity relationships were established for the mechanism of the regioselective reduction of NAD+ by Rh complexes [90]. Enzyme-coupled electrochemical regeneration When considering electrosynthesis applications, it is essential to perform a very smooth regeneration of the cofactor, especially if the cofactor is successfully immobilized with the protein(s), to prevent uncontrolled oxidation of the expensive NADH (or reduction of NAD+). The regeneration of NAD(P)H with the participation of mediator-contacted enzymes ensures that the regeneration of the cofactor proceeds selectively and that only enzymatically active cofactor is produced. NAD(P)+-dependent oxidation reactions It is possible to combine indirect regeneration of NAD(P)+ with an enzymatic regeneration step. For example, diaphorase has been applied to oxidize NADH using a variety of quinone compounds [91], ferrocene derivatives [92,93], or osmium redox polymers [94,95] as mediators between the enzyme and the electrode. Ferrocene/diaphorase is commonly employed for NAD+ regeneration. Kashivagi et al. reported the characteristics of a poly(acrylic acid)-coated graphite felt electrode co-immobilizing ferrocene and diaphorase and applied this electrode to the macro-electrocatalytic oxidation of NADH [92]. Continuing these studies, the same group further immobilized ADH in the poly(acrylic acid) layer of the above-modified electrode and achieved a smooth electrocatalytic oxidation of alcohol [93]. 
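A sketch of this enzyme-coupled oxidation pathway with a ferrocene-type mediator (Fc/Fc+; DI = diaphorase, reacting via its FAD center); the scheme is generic and only meant to illustrate the electron flow:

$$\mathrm{NADH + H^+ + DI_{ox} \rightarrow NAD^+ + DI_{red}}$$
$$\mathrm{DI_{red} + 2\,Fc^+ \rightarrow DI_{ox} + 2\,Fc + 2\,H^+}$$
$$\mathrm{Fc \rightarrow Fc^+ + e^- \quad (at\ the\ electrode)}$$

Because the NADH/NAD+ conversion is carried out enzymatically, only the active cofactor form is produced, while the electrode merely recycles the low-potential mediator.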
Osmium/diaphorase is also an efficient system for NAD+ regeneration. Nikitina et al. reported the development of a bi-enzyme biosensor using diaphorase and formaldehyde dehydrogenase (FDH) as bio-recognition elements. The sensor architecture comprises a first layer containing diaphorase cross-linked with an osmium-complex-modified redox polymer; on top of it, a second layer was formed by additional cross-linking of FDH with poly(ethylene glycol)(400) diglycidyl ether [94]. Antiochia et al. developed an amperometric biosensor for glucose monitoring, in which glucose dehydrogenase (GDH) and diaphorase (DI) were co-immobilized with NAD+ in a carbon nanotube paste (CNTP) electrode modified with an osmium-functionalized polymer [95]. NADH oxidase has also been immobilized on electrode surfaces for NAD+ regeneration. Its stability and its useful pH range are better than those of diaphorase. A series of dehydrogenase biosensors operating without cofactor addition has been developed by Marty's group, relying on immobilized NADH oxidase to regenerate NAD+ [53,54,55]. These biosensors were developed through the combination of alcohol dehydrogenase, NADH oxidase and NAD-dextran with the addition of the mediator hexacyanoferrate(III). The detection was based on the re-oxidation of the reduced mediator by applying a potential difference of 100 mV between two platinum electrodes in the presence of an excess of hexacyanoferrate(III), corresponding to 250 mV versus SCE [53,55]. Contrary to diaphorase, NADH oxidase also accepts oxygen as an electron acceptor. A reagentless sensor without addition of cofactor or mediator can thus be designed, since oxygen is always present in the working medium [54]. NADH oxidase catalyses the oxidation of NADH in the presence of oxygen to generate hydrogen peroxide. Hydrogen peroxide can be detected by applying a potential difference of 550 mV between two platinum electrodes, equivalent to ca. 600 mV versus SCE. NAD(P)H-dependent reduction reactions As it is very difficult to find an electrochemical redox catalyst that fulfils all requirements for regenerating NAD(P)H effectively, the interest of a second enzyme is to extend the choice of mediators that are likely to regenerate NAD(P)H. Many enzymes have been used in this context to provide the bioelectrocatalytic reduction of NAD(P)+: 
ferredoxin-NADP+ reductase (FNR) [96], lipoamide dehydrogenase (LipDH) [97], diaphorase [98,99], alcohol dehydrogenase [100], and hydrogenase [101,102]. A variety of low-potential electron-transfer mediators have been used to activate these reductive enzymes, for instance viologen derivatives [97,98,99], flavins [101], quinones [96], or the redox protein ferredoxin [103]. Some redox enzymes can directly communicate with electrode supports and thus stimulate the regeneration of the NAD(P)H cofactor. For example, hydrogenases (from Rhodococcus opacus and Alcaligenes eutrophus H16) have been successfully applied for the bioelectrocatalytic regeneration of NADH without a redox mediator [104]. However, the electrocatalytic rates of these systems are generally too slow to produce an observable catalytic current on the cyclic voltammetric time scale. Among the electron-transfer mediators, viologens are often used in combination with an enzyme for NADH regeneration. The viologen/LipDH couple has been tested in a continuous process: Bergel et al. applied this regeneration system in their Dialysis-Membrane Electrochemistry Reactor (D-MER) together with an alcohol dehydrogenase for the synthesis of cyclohexanol from cyclohexanone [97]. Kashiwagi et al. presented a poly(acrylic acid)-coated graphite felt electrode immobilizing all the components (viologen, diaphorase, NAD+, and alcohol dehydrogenase) for the enzymatic reaction; NADH was regenerated with viologen together with diaphorase [98]. 
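A sketch of the viologen/diaphorase route to NADH, taking methyl viologen (MV2+/MV•+) as a representative mediator; the diaphorase-catalyzed step delivers the two electrons and a proton to NAD+ so that only active 1,4-NADH is formed:

$$\mathrm{MV^{2+} + e^- \rightarrow MV^{\bullet +} \quad (at\ the\ electrode)}$$
$$\mathrm{2\,MV^{\bullet +} + NAD^+ + H^+ \xrightarrow{diaphorase} 2\,MV^{2+} + NADH}$$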
The viologen/diaphorase regeneration system has also been used in combination with NAD+ and D-lactate dehydrogenase; the optimal concentrations of diaphorase, viologen and NAD+ in the mediated electrocatalytic reduction of NAD+ were determined by cyclic voltammetry [99]. Among the enzymes for cofactor regeneration, diaphorase is the most interesting one. The same diaphorase can be used for both NADH oxidation and NAD+ reduction. The co-immobilization in active form of both a dehydrogenase and a diaphorase would allow a reactor to perform alternatively oxidation or reduction reactions by simply changing the mediator system and the applied potential. For this reason the preparation of such active bio-composite layers would be of great value for electrosynthesis applications. Immobilization of the charge-transfer catalyst on the electrode surface For a technologically useful configuration, the mediator must be immobilized on the electrode surface. A successful system should show good stability of the immobilized mediator and regenerate NAD+ faster than it can be consumed by the enzyme, so as to appropriately shift the dehydrogenase equilibrium towards the product side and obtain high current densities. A wide variety of ways to immobilize mediator species at electrode surfaces have been described in the literature. They are briefly summarized hereafter. Carbon paste electrode Carbon paste (CP) electrodes provide a straightforward way to immobilize mediators. In several cases mediators have been directly incorporated into CP electrodes [105,106,107] to produce electrodes responsive to NADH and to dehydrogenase substrates. Weiss et al. presented an electrode modified with a base-stable electron mediator, 3,4-dihydroxybenzaldehyde (3,4-DHB), mixed directly into the carbon paste [107]. The kinetics of the catalytic oxidation of NADH was studied at these modified carbon paste electrodes; however, the voltammograms were unstable and changed upon continued scanning. The key requirement for such configurations is that the mediator must have a higher affinity for the hydrophobic carbon paste phase than for the aqueous analytical matrix. This has not always been achieved, and mediator leaching can be the stability-limiting factor in this case. To solve this problem, Yao et al. synthesized an oil-soluble mediator, 7-dimethylamine-2-methyl-3-b-naphtamidophenothiazinium chloride (3-NTB), and applied it to a CP/dehydrogenase electrode [108]. As 3-NTB is not soluble in aqueous solution, the magnitude of the current response did not change for 2 weeks. Thus, the use of an oil-soluble mediator can improve the long-term stability of CP electrodes. 
Surface activation Surface activation approaches have also been reported in the last few years. As mentioned earlier, such activated electrodes promote electrocatalytic oxidation at lower potentials, though not necessarily as low as those attained with mediators. For example, electrochemical anodization [109,110], microwave treatment [83], and vacuum heat treatments [111] have been used to improve the heterogeneous electron-transfer rate of selected redox couples at various forms of carbon electrodes. Remarkable stability and retention of the electrocatalytic activity were observed for electroactivated carbon fibers, which were used for NADH detection and its discrimination from interferents by fast-scan voltammetry [109,110]. In a recent publication, CNTs were activated by microwaving in concentrated nitric acid; the shift in E_NADH was attributed to redox mediation of the NADH oxidation by traces of quinone species formed on the surface of the treated CNTs [83]. Precipitation One simple method consists in the "precipitation" of various transition-metal hexacyanoferrates on electrode surfaces [112]. Among the transition-metal hexacyanoferrates, cobalt hexacyanoferrate is considered an attractive material for modifying electrode surfaces for NADH oxidation due to its excellent reversible redox behavior [113,114]. A simple electrochemical deposition process can lead to the formation of the metal hexacyanoferrate on the electrode surface. In general, this type of modified electrode shows low detection limits, well-behaved electrochemistry, fast response times, and high sensitivity to NADH, while being amenable to careful mechanistic studies and valid conclusions for mediator design for NADH electrochemical oxidation. However, their stability is generally low. Monolayers Various ways to immobilize mediator monolayers are available. Self-assembled monolayers can be formed if the mediator contains groups that adsorb strongly, such as thiol groups on gold [115,116]. 
Another way to form monolayers is to covalently attach the mediator to the electrode surface [117,118]. To do this, the electrode surface is first functionalized by the generation of groups that permit the subsequent covalent attachment of the mediators. For example, the surface can be functionalized with a strongly adsorbed species, such as a thiol compound on gold bearing an amino or carboxylic group at its other end. In this case, the sulfur adsorbs strongly onto the gold surface, giving an electrode functionalized with NH2 or COOH groups. The mediator species can then be covalently attached to the electrode by forming a chemical bond to these NH2 or COOH groups. Recently, an interesting application of monolayers was developed: for the first time, the inner surface of highly organized macroporous electrodes was modified with a monolayer catalyst [119] and also with a model bioelectrocatalytic system containing a redox mediator, a cofactor, and a dehydrogenase [120,121]. Such monolayers can provide a means to control the distribution and orientation of the immobilized species; however, the concentration of mediator groups at the electrode surface is limited by steric packing constraints. Electropolymerization Conducting polymers are a natural choice for preparing arrays of voltammetric sensors because they have a rich electrochemical behavior and their electrochemical properties can be modulated by introducing chemical modifications in the sensitive materials [122]. Electropolymerization is a good approach to prepare polymer-modified electrodes (PMEs), as adjusting the electrochemical parameters allows control of the film thickness, permeation and charge-transport characteristics. PMEs have many advantages for analyte detection because of their selectivity, sensitivity and homogeneity in electrochemical deposition, strong adherence to the electrode surface, and the chemical stability of the film [123]. Various dyes such as Meldola blue [124], phenothiazine [125], thionine [126], and methylene green [127] have been immobilized on electrodes by electropolymerization for the detection of NADH at low potential. These films typically have a surface coverage of 10^-6 to 10^-8 mol/cm^2, and they present more severe swelling problems in aqueous solutions than the previously mentioned adsorbed monolayers. 
These problems are reflected in longer response times, higher detection limits, and generally lower sensitivity for NADH detection. Dehydrogenase, cofactor and mediator co-immobilization Dehydrogenases have attracted considerable attention because they are of increasing interest for electroenzymatic synthesis, biosensors, and biofuel cells or biobatteries. The main technological barrier in the fabrication of these reagent-free devices is the development of a functional film allowing the stable immobilization of all the components of the electrochemical detection (cofactor, mediator and dehydrogenase) at the surface of the transducer. Ideally, the immobilization should be done in such a way (Figure I-9) that the dehydrogenase is durably immobilized and active, the cofactor is durably immobilized close to the enzyme, and the mediator can reduce the overpotential without leaching. Moreover, in such a system it is essential that the produced NADH (or NAD+) is instantaneously consumed by the mediator (or by another enzyme on the electrode surface for cofactor regeneration); otherwise equilibrium will be reached and the further production of NADH (or NAD+) will cease. The reduced (or oxidised) mediator in turn must be rapidly reoxidised (or reduced) to recreate its active form. In essence, this means that all three reaction steps (enzymatic, mediated and electrochemical) need to occur very close in space for a successful approach. To achieve this co-immobilization, a suitable matrix must be found, offering a good balance between the permeability to substrates and products and the retention of the enzyme, cofactor and mediator. Consequently, and predictably, the development of this kind of functional film has been rather slow, and only a few examples of biosensor applications are available. Typically, such biosensors have been designed using redox mediators to recycle the enzyme cofactors, the dehydrogenases and their cofactors being immobilized by entrapment in carbon pastes, membranes, composite materials, macroporous electrodes, or assembled layers. Table I-1 compares amperometric reagentless biosensors based on dehydrogenase/cofactor systems: some show low sensitivity, while others suffer from limited stability. Constructing this kind of functional layer therefore remains a challenge. Figure I-9. Scheme of the functional film (enzyme, cofactor, mediator and substrate). Up to now, sol-gel chemistry has not been considered for durable cofactor immobilization, which is however mandatory for the elaboration of reagentless devices. The focus of this thesis is the comparison of different strategies to achieve the stable immobilization of a bi-enzymatic system (a dehydrogenase and a diaphorase), the cofactor NAD+/NADH and an electron mediator in a sol-gel matrix deposited as a thin film on an electrode surface. Such a layer can be applied in electro-enzymatic synthesis, but the findings of this work can also be applied to reagent-free amperometric sensors and biofuel cells or biobatteries. Objective beyond the state-of-the-art Based on the works reported in the literature, we first investigated in detail the preparation of different sol-gel routes for dehydrogenase encapsulation. It was known that protein and enzyme encapsulation into electrogenerated sol-gel thin films was possible [46,47,48]. However, the first attempts to use a similar approach for the bioencapsulation of dehydrogenases were not successful. 
DSDH was found to be very sensitive to the silica gel environment. The addition of a positively-charged polyelectrolyte was necessary to ensure effective operational behavior of the biomolecules, allowing the successful encapsulation of the dehydrogenase inside a sol-gel matrix by both drop-coating and electrodeposition (Chapter III). Dehydrogenases depend on a freely diffusing cofactor in stoichiometric amounts to shuttle the redox equivalents from the enzyme to the substrate. We have investigated different strategies allowing the stable immobilization of this cofactor in the sol-gel matrix (Chapter IV). In order to overcome the inherent difficulties of cofactor regeneration, a common approach is to confine the mediator at the electrode surface to facilitate the interfacial electron-transfer reaction. Several strategies for mediator immobilization have been developed and used for the elaboration of reagentless devices with co-immobilized dehydrogenase and cofactor (Chapters V and VI). Special attention was given to the preparation of the sol-gel layer by electrochemically-assisted deposition: while several systems operated well when the sol-gel layer was prepared by drop-coating, it was much more difficult to co-immobilize all the elements of the biocomposite by electrochemical deposition. Most of the studies presented in this thesis have been performed in the oxidation direction. In addition, we also studied the immobilization of a Rh(III) mediator on carbon nanotubes for the electrocatalytic reduction of NAD+. Chapitre II. Partie expérimentale Chapter II. Experimental part To conduct the work presented in this thesis, several chemicals and techniques have been used to prepare and characterize the active films. This chapter presents a description of the sol-gel precursors, enzymes, cofactors, mediators and polymer additives, together with the protocols used for the cofactor and mediator syntheses. The methods used to prepare the starting sols and the working electrodes are also described. Finally, the analytical techniques used in these studies are presented. Chemicals and biomolecules Enzymes In the production of chiral compounds, the oxidation or reduction of prochiral substrates by dehydrogenases is the preferred reaction. Two dehydrogenases have been cloned and used in this project: D-sorbitol dehydrogenase (DSDH) and galactitol dehydrogenase (GatDH). DSDH and GatDH have been applied in both anodic and cathodic modes, using respectively D-sorbitol and fructose (for DSDH), and hexanediol and hydroxyacetone (for GatDH). The D-sorbitol dehydrogenase solution (DSDH, 10 mg/mL, 100 units/mg) and galactitol dehydrogenase (GatDH, 10 mg/mL, 14 units/mg) were provided by Prof. G. W. Kohring (Microbiology group, Saarland University, Germany); they were prepared by overproduction of the His(6)-tagged proteins in Escherichia coli BL21GOLD (DE3) and purification of the enzymes on Histrap columns (GE Healthcare). The activity of the protein suspension was measured as NADH production upon oxidation of D-glucitol in a photometric assay at 340 nm. Diaphorase (DI, 1020 units/mg) was obtained from Unitika, Japan. Cofactors Commercial cofactors The nicotinamide redox cofactors (NAD+ and NADH) are important biological electron carriers; their redox chemistry is based on the transfer of two electrons and one proton. 
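This corresponds to the well-known half-reaction (formal potential at pH 7 of about -0.32 V vs. NHE; value quoted for orientation only):

$$\mathrm{NAD^+ + H^+ + 2\,e^- \rightleftharpoons NADH}$$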
β-Nicotinamide adenine dinucleotide (NAD+, ~98 %), β-nicotinamide adenine dinucleotide, reduced dipotassium salt (NADH, ~97 %) and β-nicotinamide adenine dinucleotide-dextran (NAD+-dextran, attached through C8 to soluble dextran, D-4133, with a 6-carbon spacer) were supplied by Sigma. NAD-derivatives Synthesis PEI-NAD PEI-NAD was prepared according to reference [1]. NAD+ (200 mg) was dissolved in 40 mL dimethylsulfoxide containing 8 g succinic anhydride. After 24 h at room temperature, the NAD+ components (a mixture of unreacted NAD+ and succinyl-NAD) were precipitated with 60 mL acetone. The precipitate was washed 3 times with acetone and recovered by centrifugation. After centrifugation, 20 mL buffer containing 700 mg EDC was added, and the pH was adjusted to 4.7 to activate the carboxylic groups of succinyl-NAD for 1 h. After activation, 300 mg PEI was added and left to react for another 12 h at 4 °C. The reaction mixture was dialyzed against 50 mM phosphate buffer (pH 7.0) for 12 h at 4 °C. NAD-GPS The method involves the direct in situ functionalization of NAD+ with the glycidoxypropylsilane precursor GPS [2]. The NAD+-GPS adducts were typically prepared by mixing 25 mg NAD+ and 37.5 mg GPS in 400 µL Tris-HCl buffer solution (pH 7.5) at 4 °C under shaking for 12 h. Treated carbon nanotubes Different kinds of MWCNTs were treated by microwaving following a previously reported procedure [5]; their characteristics are summarized in Table II-3. The MWCNTs were microwaved in concentrated nitric acid (70%) for 10-20 min. The microwaving was performed at 50 °C, 20 psi, and 100% power using a Discover Labmate single-mode microwave oven (CEM, 300 W). After microwaving, the MWCNTs were subjected to at least three centrifugation/decanting cycles in fresh aliquots of deionized water to remove any remaining impurities. In addition, the acid-treated MWCNT suspensions were neutralized with a sodium hydroxide solution and washed extensively with water to neutral pH. The final rinsing was performed with ethanol. The MWCNTs were dried in an oven at 85 °C overnight and stored in closed vials at room temperature. Mediators • Polymerized methylene green (MG) on a MWCNT-modified GCE (GCE/MWCNTs-PMG) A chitosan/MWCNT suspension was prepared by dispersing 1.0 mg MWCNTs (Aldrich) in 1 mL of chitosan solution (0.2 % in 0.05 M acetic acid solution). An aliquot (5 µL) of this suspension was deposited onto the surface of the GCE and allowed to dry at room temperature to obtain the MWCNTs/GC electrode. Electropolymerization of MG on the MWCNTs/GC electrode was carried out by cyclic voltammetry in 0.1 M PBS (pH 7.0) containing 0.5 mM MG and 0.1 M KCl, in a potential range from -0.5 to 1.2 V at a scan rate of 50 mV/s. After 10 successive cycles, the electrodes were rinsed thoroughly with doubly distilled water and dried at room temperature for further use [6,7]. • MWCNTs-Osmium (MWCNTs-Os) Polymer additives A series of polymer additives with different charges have been used in this work. The effects of the introduction of these polymer additives into sol-gel matrices for bioencapsulation are also investigated. Table II-4 lists the polymer additives used. The other chemicals and solutions All other reagents were of analytical grade. They include K2HPO4 (99%, Prolabo), KH2PO4 (99%, Prolabo), ethanol (Merck) and HCl (36%, Prolabo). Chitosan (medium molecular weight) was supplied by Aldrich. 
D-Sorbitol (98%), D-fructose (99 %), ferrocene carboxaldehyde (FcCHO), succinic anhydride, 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC), sodium borohydride (NaBH4) and tris(hydroxymethyl)aminomethane (Tris) were obtained from Sigma. Tris-HCl buffer (pH 9.0) was prepared by adding suitable amounts of HCl to 0.1 M Tris solution and was used to investigate the oxidation reaction. Phosphate buffer solution (PBS, 0.1 M, pH 6.5) was used to investigate the reduction reaction. All solutions were prepared with high-purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. Electrodes In this work, a glassy carbon electrode (GCE, 3 mm in diameter), a gold electrode (Au, 4 mm in diameter) or a macroporous gold electrode served as the working electrode. Prior to each measurement, the glassy carbon or gold electrodes were first polished on wet emery paper 4000 using Al2O3 powder (0.05 µm, Buehler), then rinsed thoroughly with water and ultrasonicated in water and alcohol baths to remove embedded alumina particles. Macroporous gold electrodes (Figure II-4) were provided by partner 2 (ENSCPB, Bordeaux); they were obtained by electrodeposition of gold through a silica bead colloidal crystal followed by dissolution of the silica template. The Langmuir-Blodgett technique was used to transfer successive layers of monodisperse beads onto gold-coated glass substrates previously treated with cysteamine in order to make the sample surface hydrophilic [14,15]. The gold electrodeposition was operated at -0.66 V vs. saturated Ag/AgCl after 10 minutes of dipping in a commercial gold plating bath in order to let the solution infiltrate the template. After the deposition step the samples were rinsed with distilled water and placed for 10 minutes in 5% HF in order to remove the silica colloidal crystal. The pore diameter and the thickness of the porous material were controlled as described in previous work [16]. Preparation of sol-gel for bioencapsulation We have explored various starting sol compositions for bioencapsulation (see Table II-5). At that stage of the project, it was thus decided to initiate studies with chitosan as an alternative encapsulation matrix, to evaluate the interest of polymeric additives and, depending on the obtained results, to exploit the conclusions to improve the response of sol-gel biocomposites. Sol G was developed for the encapsulation of dehydrogenases: all investigations performed using drop-coating or spin-coating to deposit sol-gel films led to electrochemically detectable bioactivity, but no detectable electrochemical signal was observed upon electrodeposition. The first experiments performed with Sol H and DSDH showed the feasibility of encapsulating active enzymes in electrodeposited sol-gel thin films; however, this kind of sol-gel cannot work for enzyme and cofactor co-immobilization. At that stage, Sol I was developed for enzyme and cofactor co-immobilization. Finally, Sols J and K were developed to co-encapsulate the enzyme, the cofactor and the mediator. 
Table II-5. Procedures used to prepare the starting sols for film formation. Sol A [17]: prepared by mixing 2.125 g TEOS, 2 mL H2O and 2.5 mL HCl (0.01 M) for 12 h under magnetic stirring; NaOH was then added to increase the pH to a value of about 4 before electrodeposition. Sol B: same system as Sol A, but using TMOS instead of TEOS. Sol C [18]: prepared by stirring 2.56 g TMOS with 0.6 mL H2O and 0.06 mL HCl (0.62 M) for 20 min; phosphate buffer (1.0 mL, 0.01 M, pH 8.2) was then added to the sol (1.0 mL), which was shaken vigorously. Sol D [19]: prepared by stirring 4.46 mL TEOS, 1.44 mL H2O and 0.04 mL HCl (0.62 M) for 1 h; 1 mL of the resulting sol was then mixed with 1 mL of deionised water and evaporated to a weight loss of 0.62 g. Sol E [20]: 11.5 g sodium silicate (3.25 SiO2/Na2O) was combined with 34 mL DI water; 15.4 g of strongly acidic cation-exchange resin was added to this aqueous solution with stirring to bring the pH to a value of 4, and the resin was then filtered off; 0.3 mL of 2 M hydrochloric acid was added to the sol to adjust the pH to 2.0; a phosphate buffer (1 M, pH 7) containing the enzyme was added to the sol solution in a 1:5 (volume) ratio. Sol F [21]: 0.61 g sodium silicate was combined with 50 mL de-ionized water; 1.6 mL of 37% HCl was added to this aqueous solution to decrease the pH to 0.84; 2 mL of the resulting 0.1 M sodium silicate was added to 3 mL Ludox (SM-30), and the mixture was shaken vigorously. Sol G: prepared by mixing 0.04 g TEOS, 800 µL ethanol and 1 mL HCl (0.01 M) for 2.5 h under magnetic stirring. Sol H: same system as Sol A, but diluted 3 times with water before use. Sol I: same system as Sol A, but diluted 6 times with water before use. Sol J: prepared by mixing 0.18 g TEOS, 0.13 g GPS, 0.5 mL H2O and 0.625 mL HCl (0.01 M) for 12 h under magnetic stirring, then diluted 2 times with water before use. Sol K: prepared by mixing 0.18 g TEOS, 0.13 g GPS, 0.02 g Fc-silane, 0.5 mL H2O and 0.625 mL HCl (0.01 M) for 12 h under magnetic stirring, then diluted 2 times with water before use. Electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water and dried overnight at 4 °C. MWCNTs-PMG & sol-gel matrix The preparation of GCE/MWCNT-PMG has been described in Chapter II, section 1.4.1.2. A chitosan/CNTs/Rh solution was deposited and allowed to evaporate at room temperature. Methods of analysis 5.1 Electrochemical measurements [24] Cyclic voltammetry (CV) Cyclic voltammetry is the most widely used technique for acquiring qualitative information about electrochemical reactions. It is often the first experiment performed in an electroanalytical study. In particular, it offers a rapid location of the redox potentials of the electroactive species, and a convenient evaluation of the effect of various parameters on the redox process. This technique is based on varying the applied potential at a working electrode in both forward and reverse directions (at selected scan rates) while monitoring the resulting current. The corresponding plot of current versus potential is termed a cyclic voltammogram. Chronoamperometry in hydrodynamic mode The basis of chronoamperometric techniques is the measurement of the current response to an applied potential. 
Chronoamperometry in hydrodynamic mode
The basis of chronoamperometric techniques is the measurement of the current response to an applied potential. A stationary working electrode and a stirred solution are used, and the resulting current-time dependence is monitored. As mass transport under these conditions occurs essentially by convection, a steady-state diffusion layer is established at the electrode surface, and the current reflects the concentration gradient in the vicinity of the surface, which is directly related to the concentration in solution.

In this work, all cyclic voltammetry (CV) and chronoamperometry experiments were conducted using a conventional three-electrode cell. The working electrode was a glassy carbon electrode (GCE, 3 mm in diameter), a gold electrode (Au, 4 mm in diameter) or a macroporous gold electrode; a gold wire served as the auxiliary electrode, and an Ag/AgCl electrode (saturated KCl internal electrolyte) was used as the reference. All electrochemical experiments were performed at room temperature using an Autolab PGSTAT-12 potentiostat (Eco Chemie) monitored by the GPES (General Purpose Electrochemical System) software.

UV-Vis spectroscopy (UV)
The principle of UV-Vis spectroscopy rests on the ability of molecules to absorb ultraviolet or visible light. This absorption corresponds to the excitation of the outer electrons of the molecules concerned: when a molecule absorbs energy, an electron is promoted from the Highest Occupied Molecular Orbital (HOMO) to the Lowest Unoccupied Molecular Orbital (LUMO). As in any UV-Vis spectrometer, the three main elements are a UV light source, a monochromator and a detector. The monochromator uses a diffraction grating to disperse the beam of light into its component wavelengths; the detector's role is to record the intensity of the transmitted light. Before the samples are run, a reference must first be recorded, which calibrates the spectra and screens out spectral interferences. In the case of liquid samples, the solvent used to dissolve the sample serves as the reference. A solvent must, however, meet certain criteria to be deemed suitable, the main one being that it should not absorb ultraviolet radiation in the same region as the sample being analysed. The apparatus used in this study was a Beckman DU 7500 UV-Vis spectrophotometer, which was used to measure NADH concentrations in solution at 340 nm.
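Concentrations are obtained from the absorbance through the Beer-Lambert law, A = ε·l·C. A minimal sketch of this conversion is given below; the molar absorption coefficient of NADH at 340 nm (ε ≈ 6220 M-1 cm-1) is a standard literature value, while the path length and the absorbance reading are assumptions chosen for the example.

    EPSILON_340 = 6220.0   # molar absorption coefficient of NADH at 340 nm, M^-1 cm^-1
    PATH_LENGTH_CM = 1.0   # cuvette path length in cm (assumed)

    def nadh_concentration_molar(absorbance_340):
        """NADH concentration (mol/L) from the absorbance at 340 nm (A = eps * l * C)."""
        return absorbance_340 / (EPSILON_340 * PATH_LENGTH_CM)

    # Example: an absorbance of 0.62 corresponds to roughly 0.1 mM NADH.
    print(f"{nadh_concentration_molar(0.62) * 1e3:.3f} mM")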
Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopy (ATR-FTIR spectroscopy)
Figure II-6. A multiple reflection ATR system.
Infrared spectroscopy is a widely used technique that has for many years been an important tool for investigating chemical processes and structure [25]. The combination of infrared spectroscopy with the theories of reflection has made advances in surface analysis possible. Specific IR reflectance techniques may be divided into the areas of specular reflectance, diffuse reflectance and internal reflectance. ATR-FTIR spectroscopy, an innovative technique for monitoring the transport of low-molecular-weight species, was established on the basis of internal reflectance. It enables the in-situ monitoring of individual species while providing additional chemical information on any changes that may occur during the transport process. Figure II-6 shows the principle of ATR-FTIR. An attenuated total reflectance accessory operates by measuring the changes that occur in a totally internally reflected infrared beam when the beam comes into contact with a sample: an infrared beam is directed, at a certain angle, onto an optically dense crystal with a high refractive index. This internal reflection creates an evanescent wave that extends beyond the surface of the crystal into the sample held in contact with it; the evanescent wave can be pictured as a bubble of infrared radiation sitting on the surface of the crystal. It protrudes only a few micrometres (0.5-5 µm) beyond the crystal surface into the sample, so good contact between the sample and the crystal surface is essential. In regions of the infrared spectrum where the sample absorbs energy, the evanescent wave is attenuated. The attenuated energy from each evanescent wave is passed back to the IR beam, which then exits the opposite end of the crystal and is directed to the detector of the IR spectrometer; the system then generates an infrared spectrum.

The apparatus used in this study was a Bruker Vector 22 spectrometer equipped with a KBr beam splitter and a deuterated triglycine sulfate (DTGS) thermal detector. FTIR spectra were recorded between 4000 and 700 cm-1. Recording of spectra, data storage and data processing were performed using the Bruker OPUS 3.1 software. The resolution of the single-beam spectra was 4 cm-1. The number of bidirectional double-sided interferogram scans was 200, which corresponds to a 2 min accumulation. All interferograms were Fourier processed using the Mertz phase correction mode and a Blackman-Harris three-term apodization function. Measurements were performed at 22 ± 1 °C in an air-conditioned room. A nine-reflection diamond ATR accessory (DurasamplIR™, SensIR Technologies) was used for acquiring the spectra; the incidence angle was 45° and the refractive index of the crystal was 2.4. No ATR correction was performed. 80 µL of the studied solution were placed on the crystal. Appropriate spectra were used to remove the spectral background: an air reference, a water reference or a Tris-HCl buffer solution reference. Water vapour subtraction was performed when necessary. In the course of reaction-monitoring experiments, ATR-FTIR spectra were recorded every 5 or 15 min.
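The penetration depth of the evanescent wave can be estimated from dp = λ / (2π n1 sqrt(sin²θ − (n2/n1)²)), where n1 is the refractive index of the crystal and n2 that of the sample. The short sketch below uses the crystal parameters given above (n1 = 2.4, θ = 45°) and assumes an aqueous sample (n2 ≈ 1.33); it simply confirms that the probed depth stays within the 0.5-5 µm range over the recorded spectral window.

    import math

    def penetration_depth_um(wavenumber_cm1, n_crystal=2.4, n_sample=1.33, theta_deg=45.0):
        """Evanescent-wave penetration depth (µm) of an ATR element:
        dp = lambda / (2*pi*n1*sqrt(sin^2(theta) - (n2/n1)^2))."""
        wavelength_um = 1e4 / wavenumber_cm1   # convert wavenumber (cm-1) to wavelength (µm)
        theta = math.radians(theta_deg)
        root = math.sqrt(math.sin(theta) ** 2 - (n_sample / n_crystal) ** 2)
        return wavelength_um / (2 * math.pi * n_crystal * root)

    for wn in (4000, 1650, 700):   # limits and middle of the recorded window
        print(f"{wn} cm-1 -> dp = {penetration_depth_um(wn):.2f} µm")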
Scanning electron microscopy (SEM)
The scanning electron microscope (SEM) is a type of electron microscope that images the sample surface by scanning it with a high-energy beam of electrons in a raster pattern. The electrons interact with the atoms of the sample, producing signals that contain information about the sample's surface topography, composition and other properties such as electrical conductivity. Owing to the very narrow electron beam, SEM micrographs have a large depth of field, yielding the characteristic three-dimensional appearance that is useful for understanding the surface structure of a sample. A wide range of magnifications is possible, from about 10 times (roughly equivalent to a powerful hand-lens) to more than 500,000 times, about 250 times the magnification limit of the best light microscopes; particle morphology can thus be observed on a very fine scale. For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge. Nonconductive specimens are therefore usually coated with an ultrathin layer of electrically conducting material, commonly gold or graphite, deposited on the sample either by low-vacuum sputter coating or by high-vacuum evaporation. In SEM, a condensed electron beam is focused by lenses onto the surface of the specimen. The interaction between the electrons and the material leads to the emission of backscattered electrons, X-rays, secondary electrons and so on. These electrons are collected by a detector, converted to a voltage and finally amplified. In this work, the morphologies and structures of the membranes of DSDH-encapsulated electrogenerated sol-gel silica on a GCE were examined with a scanning electron microscope (SEM, Hitachi X-650, Japan).

Scanning electrochemical microscopy (SECM) [26,27]
Scanning electrochemical microscopy (SECM) is a technique in which the current that flows through a very small electrode tip (generally an ultramicroelectrode with a tip diameter of 10 µm or less) near a conductive, semiconductive or insulating substrate immersed in solution is used to characterize processes and structural features at the substrate as the tip is moved near the surface. The tip can be moved normal to the surface (the z direction) to probe the diffusion layer, or it can be scanned at constant z across the surface (the x and y directions). The tip and substrate are part of an electrochemical cell that usually also contains other (e.g., auxiliary and reference) electrodes. The device for carrying out such studies, which involves means of moving the tip with a resolution down to the Å range (for example by piezoelectric elements or by stepping motors driving differential springs), is called a scanning electrochemical microscope. The abbreviation SECM is used interchangeably for both the technique and the instrument. In SECM, the current is carried by redox processes at the tip and substrate and is controlled by electron-transfer kinetics at the interfaces and mass-transfer processes in solution, so that measurements can be made at small tip-substrate spacings, e.g., in the range of 1 nm to 10 µm. In addition to the electrochemical measurement, the instrument can be equipped with a shear-force detection module that helps in positioning the electrode before the electrochemical measurement. Here, only the shear-force detection was used, in order to determine the film thickness on conductive surfaces. The apparatus was developed in the laboratory on the basis of the SECM instrument from Sensolytics (Ruhr-Universität, Bochum, Germany).

Chapter III. Feasibility of dehydrogenase encapsulation in sol-gel matrix
In this chapter, the work focuses on the immobilization of the active dehydrogenase on the electrode surface. Here, sol-gel is chosen as the matrix for the encapsulation of D-sorbitol dehydrogenase (DSDH), as it is expected to provide a suitable environment for bio-encapsulation. First of all, the feasibility of DSDH encapsulation in a sol-gel film is evaluated by drop-coating (see section 2). DSDH encapsulation in pure silica thin films resulted in an undetectable electrochemical signal. The influence of polyelectrolyte (PE) additives on the sol-gel encapsulation of dehydrogenases was then evaluated by drop-coating: DSDH was found to be very sensitive to the silica gel environment, and the addition of a positively-charged polyelectrolyte was necessary to ensure an effective operational behavior of the biomolecules. Once a suitable sol-gel environment for DSDH encapsulation had been identified by drop-coating, we investigated the electrochemically-assisted deposition of sol-gel thin films for DSDH encapsulation (see section 3).
This was achieved via the electrolysis of a hydrolyzed sol containing the biomolecules, so as to initiate the polycondensation of the silica precursors through the electrochemically-induced pH increase at the electrode/solution interface. The composition of the sol and the conditions of electrolysis were optimized with respect to the intensity and the stability of the electrochemical response to D-sorbitol oxidation. Electrochemically-assisted deposition of silica thin films was found to be a good strategy for DSDH immobilization as well as for DSDH and diaphorase co-immobilization. Finally, this process was extended to macroporous electrodes exhibiting a much larger electroactive surface area.

Introduction
Dehydrogenases are interesting enzymes for electrosynthesis applications, especially for the production of rare sugars as building blocks for the pharmaceutical and food industries [1]. One of the main requirements for electrosynthesis is the stable immobilization of a large amount of active protein on the electrode surface of the reactor [2,3]. The encapsulation of proteins in a silica matrix using the sol-gel process is known to prevent their denaturation and to keep the proteins active for a long time, in some cases longer than in solution, and can also allow the use of some enzymes under rather harsh conditions of pH and temperature [4]. When tetraethoxysilane (TEOS) is used as the silica precursor, the hydrolysis of the ethoxy groups releases a significant amount of ethanol, which is potentially harmful to the enzyme. The ethoxy groups can be replaced by methoxy groups, producing methanol, which has been reported to be less toxic for the enzyme [5]. Alcohol molecules can also be removed by solvent evaporation [6], and other sol-gel routes can be used in order to avoid the presence of any alcohol molecules in the starting sol [7]. A good example of this last strategy was reported recently for the encapsulation of horseradish peroxidase (HRP) in silica thin films obtained by electrochemically-assisted deposition from a sol based on ammonium hexafluorosilicate [8]. A favorable environment can also be obtained by introducing additives into the sol [9]. Recent efforts have been made to optimize the process by controlling the porosity of the material and the chemical environment of the trapped species, notably through the use of biocompatible silane precursors, protein-stabilizing additives, charged polymers, sugars and aminopeptides [4,10,11]. For electrochemical applications, silica thin films can be effectively deposited on the surface of a flat electrode by the conventional sol-gel method using controlled evaporation of the solvent (dip-coating, drop-coating, etc.) [12]. This has been largely exploited to design amperometric biosensor devices [13]. In the particular case of electro-enzymatic synthesis, the electroactive surface area of the reactor has to be increased in order to allow high fluxes of substrate to be oxidized or reduced. This can be achieved, e.g., by the use of macroporous electrodes exhibiting very high surface areas [14,15]. However, coating such porous electrodes in a controlled way with an enzyme-doped silica gel is rather difficult with the evaporation methods cited above, because they are mostly restricted to flat surfaces [16]. As an alternative, the silica layer can be produced at porous electrode surfaces by a local modulation of pH [17]. This is achieved by a controlled electrolysis of the pre-hydrolyzed sol that induces the rapid gelification of the thin film.
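The chemistry underlying this deposition method can be summarized by the following simplified scheme (a sketch based on the commonly invoked reactions; the relative contributions of the two cathodic processes depend on the applied potential and on the electrolyte):

    Hydrolysis of the precursor (written here for TEOS):
        Si(OC2H5)4 + 4 H2O → Si(OH)4 + 4 C2H5OH
    Cathodic generation of hydroxide ions (local pH increase at the electrode):
        2 H2O + 2 e- → H2 + 2 OH-
        O2 + 2 H2O + 4 e- → 4 OH-
    Base-catalyzed polycondensation at the electrode/solution interface:
        ≡Si-OH + HO-Si≡ → ≡Si-O-Si≡ + H2O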
The possible application of electrochemically-assisted sol-gel deposition to bio-encapsulation was recently shown by several research groups, using glucose oxidase as a model enzyme [18,19]. For electrosynthesis applications it is also essential to ensure a very smooth regeneration of the cofactor, especially if the cofactor is immobilized together with the protein(s), in order to prevent uncontrolled oxidation of the expensive NADH (or reduction of NAD+). Diaphorase catalyzes this cofactor regeneration in the presence of a molecular mediator, for example ferrocene species for oxidation or methylviologen for reduction [20]. The co-immobilization of both dehydrogenase and diaphorase in active forms in a thin silica film would allow the reactor to perform alternately oxidation or reduction reactions by "simply" changing the mediator system and the applied potential [21,22]. For this reason, the well-controlled deposition of such active bio-composite layers in macroporous electrodes would be of great value for bioelectrocatalytic applications.

In this work, we used DSDH as a model enzyme to evaluate the feasibility of dehydrogenase encapsulation in sol-gel films. First, we evaluated the interest of various polyelectrolytes in combination with silica films for DSDH encapsulation (section 2). We chose positively-charged polyelectrolytes because of the expected favorable interactions with the negatively-charged enzyme surface [23]. Comparing the electrochemical responses observed in the presence and absence of these additives allowed us to evidence the critical role played by the polyelectrolyte in enhancing cofactor regeneration. Then, we show the electrochemically-assisted deposition of silica-based thin films for DSDH immobilization as well as for DSDH and diaphorase co-immobilization (section 3). The process and the sol composition were optimized on flat glassy carbon electrodes before being applied to macroporous gold electrodes. The electrodeposited bio-composite containing both DSDH and diaphorase was tested for the electrochemical oxidation of D-sorbitol, and a comparison was made between flat and macroporous gold electrodes.

2. Critical effect of polyelectrolytes on the electrochemical response of dehydrogenases entrapped in sol-gel thin films
2.1 Preliminary observations
D-sorbitol dehydrogenase (DSDH) needs an electron-transfer cofactor (NADH/NAD+) for its enzymatic activity. A large amount of work has been devoted over the past years to decreasing the overpotentials for the electrochemical detection of this cofactor, especially when operating in oxidation mode [24]. For simplicity, we first considered the direct oxidation of NADH (i.e., without mediator) to evaluate the activity of the enzyme encapsulated in the sol-gel matrix to be tested. The overall reaction scheme is shown in the corresponding figure.

Enzyme encapsulation in pure silica films
The encapsulation of DSDH was first evaluated using pure silica thin films (i.e., without any additive).

UV monitoring of the enzyme activity in gel monoliths
In order to distinguish between the above hypotheses, DSDH was encapsulated in sol-gel-derived monoliths and its biological activity was monitored by UV spectroscopy via the NADH generated upon addition of D-sorbitol to the medium. The biomaterial was prepared according to a protocol reported by Miller et al. [25], requiring notably a rather long aging period (6 days at 4 °C).
Significant shrinkage of the monolith occurred during gelification. The solid was then washed three times in 3 mL of phosphate buffer solution in order to remove weakly encapsulated enzymes, and was introduced into a solution containing 0.36 mM NAD+ and 5 mM D-sorbitol. The activity of the encapsulated DSDH can be evidenced by UV monitoring of the solution phase at 340 nm (the absorbance maximum of NADH). The pH of the sol used for the enzyme encapsulation was between 5 and 7. Under these conditions, negative charges are present on the silica surface due to silanol deprotonation (the point of zero charge of silica is reported to be in the range 2-3 [26]), while the isoelectric point of DSDH is 4.3 [27], so that the enzyme is also negatively charged. During protein encapsulation, the electrostatic interaction between the silica matrix and the protein is therefore expected to be unfavorable under the pH conditions used here. The above sol-gel entrapment and UV monitoring experiments were thus repeated with monoliths prepared in the presence of a positively-charged polyelectrolyte (PDDA), likely to act as a stabilizing intermediate between the enzyme and the silica surface. The results presented in Figure III-3 (curve "c") show that this is indeed the case, as a much higher activity was observed with the gel containing PDDA, for which almost 90 % of the NAD+ had reacted after the same period of time (6 h). It is noteworthy that the process is still much slower than for the free enzyme in solution, but the presence of PDDA in the sol-gel encapsulation matrix provides a definite advantage in comparison with undoped silica.

2.2 Interest of additives for DSDH encapsulation onto electrode surfaces
2.2.1 Sol-gel matrices
The above results suggest that the presence of positively-charged moieties in the silica/DSDH biocomposite should help to improve the enzyme activity and thereby enhance the electrochemical response of GCEs covered with such biocomposite films. We have evaluated two ways of introducing positive charges into the material: (1) the use of a positively-charged organosilane (i.e., protonated aminopropyltriethoxysilane, APTES) and (2) the addition of positively-charged polyelectrolytes. As shown in Figure III-4A, the presence of the silica film induces some resistance to charge-transfer kinetics. One can conclude from this first series of experiments that the favorable electrostatic interactions between DSDH and the aminopropyl-functionalized silica matrix (point of zero charge = 9.8 [28]) are beneficial for obtaining an electrochemically detectable enzymatic activity. The positive charges held by the aminopropyl groups could also interact with NAD+ (which is also negatively charged), bringing DSDH and NAD+ together and allowing a higher net production of NADH [29,30]. Figure III-4B shows that an even more impressive behavior can be obtained when using a polyelectrolyte (i.e., 5 % PDDA) as additive in the starting sol. A well-defined electrochemical response was observed, increasing regularly with the D-sorbitol concentration from 1 to 9 mM, under the same conditions as those applied to films prepared in the absence of PDDA.

Extension to chitosan and chitosan sol/gel composites
Chitosan is produced by deacetylation of the biopolymer chitin. This reaction generates a certain fraction of amine functions that makes the polymer soluble and suitable for bio-encapsulation [32]. This property has been advantageously used for electroanalytical purposes [33].
We observed in the previous section that doping silica sol-gel films with polymers bearing protonated amine or ammonium groups is essential for improving the enzymatic activity of DSDH immobilized on GCE. We therefore evaluated whether chitosan, in combination with silica gel, could provide this favorable environment thanks to the amine groups it bears.

2.3 Factors affecting the electrode response
2.3.1 Sol composition
The sol composition was optimized in order to define the conditions leading to the highest electrochemical response. Both the polyelectrolyte concentration (PDDA was selected because it gave the best results among the tested additives, see Figure III-5) and the precursor (TEOS) concentration were found to significantly affect the film electrode response (Figure III-8). Figure III-8A shows the influence of the PDDA content of the starting sol. As already discussed above, the absence of PDDA led to inactive films, since no NADH was detected in the presence of D-sorbitol (Figure III-2). Addition of PDDA, even in small amounts (e.g., 1.7 %), resulted in significant electrochemical signals, the intensity of which increased up to 5 % PDDA and then decreased quite regularly for higher polyelectrolyte contents (up to 10 %). All electrodes displayed well-defined voltammetric peaks for NADH oxidation. This trend, leading to an optimal value of 5 %, can be explained by the role played by PDDA, which acts somewhat as a "macromolecular glue" between the enzyme and silica surfaces; too low or too high contents of this additive unbalance the stabilizing effect. Another explanation can be found in the concentration-dependent effect of polycationic macromolecules on biosilicification, providing a favorable interaction with the silica precursors during the sol-gel process that contributes to the gelification [7]. Such macromolecules can also induce modifications in the texture of the final material [34], which would affect the mass transport processes in the biocomposite and, therefore, the rate of cofactor generation/regeneration. No noticeable response could be detected when using TEOS concentrations above 1 M. The electrochemical response is not only related to the enzyme activity but also depends strongly on the diffusion of both the D-sorbitol substrate and the NAD+ cofactor inside the film. While a low concentration of silica precursor induced the formation of a rather porous matrix, suitable for both encapsulation and diffusion, increasing TEOS concentrations led to a densification of the silica matrix and thicker films [35], which became less suitable for the fast diffusion of species from the solution to the enzyme and from the enzyme to the electrode surface.

2.3.2 Stability with time
Both the composition of the film and the type of additive were found to significantly affect the operational and long-term stability of the biocomposite electrodes. With the silica/PDDA/DSDH composite (curve "c"), an increase in peak current intensity of ca. 65 % was first observed, which can be attributed to changes originating from the hydration of the film (dissolution/precipitation of silica can occur, and the presence of PDDA can strongly influence this silicification process [7]). After this initial step, the current reached more stable values, yet continued to increase slowly with time, the normalized response varying from 1.65 to 1.8 over the last 3 h of the experiment.
Immobilization of DSDH in a pure PDDA film, in the absence of silica precursor, also gave a measurable current response when the electrode was introduced into the solution, but this response was not stable in successive measurements (an increase during the first 30 min of the experiment, followed by a continuous decrease down to the initial value). This can be attributed to the lack of mechanical stability of the film, with progressive leaching of the enzyme into the solution. Although chitosan is known to be a suitable matrix for bio-encapsulation [33], the last system, Chitosan/PDDA/DSDH, gave rise to an even more variable response, starting with a sharp increase in peak currents and followed by a dramatic decrease in signal intensity after 30 min of use. For all electrodes, a reorganization/modification of the film resulted in an enhancement of the electrochemical response during the first minutes, probably arising from an easier diffusion of the substrate and cofactor species into the composite matrix. After this first step, however, only the sol-gel-derived film gave steady-state peak currents; all the other systems underwent a significant degradation of the electrochemical response. This can be ascribed to the rigid character of the inorganic network, likely to ensure a more durable bio-encapsulation, which appears promising for bio-electrochemical applications. The long-term stability of the best systems (sol-gel-based biocomposite film electrodes with added polyelectrolytes) has also been considered. The beneficial effect of PDDA on the DSDH activity is confirmed, as the absence of polyelectrolyte in the silica film results in no detectable enzymatic activity. Under the conditions of electrochemically-assisted deposition, negative charges are present on the silica surface due to silanol deprotonation (the point of zero charge of silica is reported to be in the range 2-3 [26]), while the isoelectric point of DSDH is 4.3 [27]; during protein encapsulation, the interaction between the silica matrix and the protein is thus not so favourable. The positively-charged polyelectrolyte (PDDA) is likely to act as a stabilizing intermediate between the enzyme and the silica surface. In addition to the bands corresponding to the silica network (~1000-1200 cm-1) and to the surface silanol groups Si-OH (~900-1000 cm-1), the amide I and II bands of the proteins (1653 and 1538 cm-1), the band from the ammonium groups of PDDA (1474 cm-1) and the C-H stretching bands of both PDDA and the proteins (~2800-3050 cm-1) were distinctly observed on the same spectrum. The FTIR measurement thus supports the observation made by cyclic voltammetry and indicates that PDDA and the proteins are indeed co-encapsulated in the electrodeposited silica network.

Optimization of the electrode response
The effect of the sol composition has been thoroughly studied; the three major parameters (protein, PDDA and TEOS contents) are discussed below. The quantity of protein introduced into the sol has a strong influence on the electrode response: a rapid increase was observed from 0.3 to 2.5 mg/mL, which started to level off at 3.3 mg of protein per mL of sol. Higher protein concentrations cannot be used in this process because of the rapid gelification of the sol (the proteins, or the buffer of the suspension, facilitate the sol-gel transition). The PDDA concentration also has a dramatic influence on the electrode response: in the absence of PDDA, no signal could be measured (see Figure III-12B).
The electrode became active in the presence of 1.8 % polyelectrolyte, and the electrode response then increased regularly with the polyelectrolyte concentration up to 6.7 %. As for the protein content, a larger quantity of PDDA is difficult to handle, as it also promotes the gelification of the sol before application of the electrolysis potential. The influence of the TEOS concentration follows a different trend: the electrode response first increases, passes through a maximum and then decreases. The optimal signal was observed for 0.17 M TEOS in the sol. We can assume that the deposition rate is too slow at low TEOS concentrations [35], thereby inducing a less efficient protein encapsulation. On the other hand, a higher TEOS concentration can lead to hindered mass transport (see the next section) as a result of thicker films, which limits the efficiency of the bio-electrode (restricted diffusion of the reactants). The deposition time also has a strong influence on the electrode response.
(Figure: film thickness / µm as a function of deposition time / s.)
The optimal sol composition, which was determined using only electrochemical measurements, also leads to the most homogeneous sol-gel biocomposite, showing a good incorporation of both the protein and the polyelectrolyte.

Relationship between the reactivity, the permeability and the film stability
All the optimization steps reported in the previous section were carried out in order to obtain the highest possible electrochemical response. When considering electrosynthesis applications, it is critical to have an intense electrochemical response, but it is equally critical to achieve a sufficient long-term stability of the electrode, at least on a time scale compatible with industrial processes. The quantity of TEOS in the starting sol has, in principle, an influence on the permeability of the layer [35]. Linear sweep voltammetry for ferrocenedimethanol (Fc) oxidation was performed at a rotating disc electrode with a bare glassy carbon electrode (GC) and with GC covered by thin silica films prepared from sols containing 0.08, 0.17, … M TEOS.

Comparison of the enzyme activity in the film and in solution
The variation of the electrochemical response with increasing concentrations of D-sorbitol in the solution has been studied (see Figure III-16). The measurements were made within the one-hour window during which the electrode gave a stable electrochemical response. Under these conditions, the peak current intensity increases regularly with the D-sorbitol concentration up to 10 mM and starts to level off at higher concentrations. From these data we can extract an apparent Km value of about 3 mM, slightly lower than the Km of about 6 mM observed for the same protein in solution [36]. Electrochemically-assisted deposition thus maintains an enzymatic activity similar to that in solution, with a small improvement due to the encapsulation in the silica gel. pH is known to strongly affect the enzymatic activity of DSDH [36]. For the reduction of fructose, the response increases up to an optimum pH (phosphate buffer), followed by a sharp decrease above this optimum value. For the oxidation of D-sorbitol, the electrode response increases more regularly from pH 6 to 9 and decreases slightly at pH 10. The behavior of DSDH in the silica layer is very comparable with the activity data for the protein in solution, with an optimal pH of 6.5 for the reduction of fructose and an optimal oxidation of D-sorbitol at pH 9, suggesting again that the electrogenerated silica layer offers a good environment for DSDH encapsulation on the electrode surface. The immobilized protein exhibits good activity, similar to that of the free enzyme in solution, with respect to both the enzymatic kinetics and the sensitivity to pH.
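The apparent Km quoted above follows from fitting the catalytic current to a Michaelis-Menten-type law, i = imax·C/(Km + C). A minimal sketch of such a fit is given below; the data points are invented placeholders standing in for a calibration series such as that of Figure III-16, not measured values.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(c, i_max, km):
        """Catalytic current versus substrate concentration: i = i_max * c / (km + c)."""
        return i_max * c / (km + c)

    # Hypothetical (D-sorbitol concentration in mM, peak current in µA) pairs.
    c = np.array([1.0, 2.0, 4.0, 6.0, 10.0, 14.0])
    i = np.array([0.9, 1.5, 2.1, 2.5, 2.9, 3.1])

    (i_max, km), _ = curve_fit(michaelis_menten, c, i, p0=[3.0, 3.0])
    print(f"i_max = {i_max:.2f} µA, apparent Km = {km:.2f} mM")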
Figure III-17. Evolution of the peak current response versus pH for (A) the reduction of 6 mM fructose in 0.1 M Tris-HCl buffer with 1 mM NADH and (B) the oxidation of 6 mM D-sorbitol in 0.1 M Tris-HCl buffer with 1 mM NAD+. The modified glassy carbon electrode was prepared with a sol containing 0.17 M TEOS, 6.7 % PDDA and 3.3 mg/mL DSDH. The electrochemically-assisted deposition was done by applying -1.3 V for 60 s. All cyclic voltammograms were recorded at a 50 mV/s potential scan rate.

Co-immobilization of DSDH and diaphorase
The direct electrochemical detection of NADH, occurring at a high potential compared with E0', induces uncontrolled oxidation and rapid deactivation of the biomolecule. A large number of studies have been, and still are, devoted to the electrocatalytic oxidation of NADH, which allows this overpotential to be decreased. An elegant and very versatile regeneration can be obtained by using diaphorase, with an additional electron mediator used to transfer the electrons from this protein to the electrode. Interestingly, the same diaphorase can be used for both NADH oxidation and NAD+ reduction. Among others, ferrocene species can be used to mediate electron transfer for the oxidation of NADH, and methylviologen can be used for the reduction of NAD+ [20]. The pH dependence of the bi-enzymatic response was also examined, giving rise to a similar trend as for DSDH alone, with an optimal pH value located around pH 9. Note that the bi-enzyme system is much more sensitive to small pH variations, as illustrated by a 75 % current decrease when passing from pH 9 to pH 9.5 or pH 8. This influence of pH is mainly due to the strong effect of pH on the diaphorase activity (Figure III-19B). The co-immobilization of DSDH and diaphorase does not prevent an efficient communication between the two proteins: NADH can diffuse from one enzymatic center to the other for efficient bioelectrocatalysis. Electrochemically-assisted silica gel deposition is thus an effective method for the elaboration of complex enzymatic layers on electrode surfaces.

Extension to the particular case of macroporous electrodes
Macroporous electrodes display pores of about 440 nm with well-defined interconnections allowing good mass transport [38].
Figure III-20. Preparation of the macroporous gold electrode by electrodeposition through a nanosphere assembly.
To further point out the interest of the electrochemically-assisted deposition method for bioelectrocatalysis purposes, the above approach was extended to macroporous electrodes. The electrochemical response increases significantly from one half-layer to three half-layers when detecting 1 mM D-sorbitol. This result is consistent with recent observations made with macroporous electrodes modified by a thin silica film containing hemoglobin [17], in which it was shown that the electrochemical signal of hemoglobin, as well as the catalytic current for H2O2 detection, increases significantly when the number of half-layers is increased from 3 to 9. However, a direct transposition of the results from this previous study to the present work is not possible, because the compositions of the sols are very different: the sol used for hemoglobin encapsulation contained much less silane precursor (13.6 mM TEOS) than the sol used here for DSDH and diaphorase encapsulation (0.17 M).
In addition, the sol we developed for the dehydrogenase is complex to handle with the macroporous electrodes because of the presence of the polyelectrolyte. These experiments point out both the interest and the complexity of the surface modification of such porous materials with an elaborate sol-gel biocomposite. Additional optimization will be necessary to carefully control the film deposition inside the macropores, and the optimal thickness of the macroporous electrode will also have to be defined with respect to the application in electrosynthesis.

Conclusion
The first part of this work has pointed out the importance of adding positively-charged polyelectrolytes into sol-gel-derived films doped with dehydrogenase enzymes, in order to provide a good environment for the encapsulation of the biomolecules in an active form. Among the tested additives, PDDA offered the best results. The improved behavior in the presence of polyelectrolyte was also observed for other kinds of thin films (i.e., based on chitosan).

Chapter IV. Co-immobilization of dehydrogenase and cofactor in sol-gel matrix
In this chapter, successful strategies for dehydrogenase and cofactor co-immobilization in sol-gel films have been developed, both by drop-coating and by electrochemically-assisted deposition. First of all, we compare various strategies directed at the durable immobilization of the NAD+/NADH cofactor in biocompatible sol-gel matrices encapsulating a bi-enzymatic system (a dehydrogenase and a diaphorase, the latter being useful for the safe regeneration of the cofactor), deposited by drop-coating as thin films onto glassy carbon electrode surfaces. These strategies are (1) the "simple" entrapment of NAD+ in the sol-gel matrix, alone or in the presence of carbon nanotubes; (2) the formation of interpenetrated organic-inorganic networks using a high-molecular-weight NAD derivative (NAD-dextran); and (3) the chemical attachment of NAD+ to the silica matrix using glycidoxypropylsilane in the course of the sol-gel process (under mild chemical conditions). The third approach, based on the chemical bonding of the cofactor (which was checked by infrared spectroscopy), led to a much better performance in terms of the long-term stability of the electrochemical response. The co-immobilization of DSDH, diaphorase (DI) and NAD+ was then obtained by electrochemically-assisted deposition. Finally, the functional layer was successfully deposited in macroporous gold electrodes and applied to the oxidation of D-sorbitol.

Introduction
The main difficulty in fabricating reagentless dehydrogenase-based bioelectrodes is that the cofactor must be immobilized and regenerated in a stable and active way. In most studies, the native cofactor is added to the starting electrolyte before the enzymatic reaction. Despite the good performance of these methods, a possible drawback exists: the operation is not only complicated but also costly, because the expensive cofactor cannot be reused. One way to overcome this problem is to co-immobilize the cofactor and the dehydrogenase in the sensing layer, but the difficulty is that the water-soluble cofactor is a relatively small molecule and is therefore likely to diffuse away from the electrode surface into the solution, thus limiting the long-term durability of the modified electrode. Different strategies have been proposed for cofactor immobilization on electrode surfaces.
Simple encapsulation leads to a rapid leaching of the cofactor into the solution during electrochemical operation and can only be considered for disposable sensors [1]. One possibility for improving the stability of the immobilization is the chemical attachment of the cofactor to a macromolecule that can be encapsulated or immobilized on the electrode without leaching. Dextran [2,3,4], PEG [5], chitosan [6] and PEI [7] have been reported to allow such cofactor immobilization for biosensor applications. The stability of some of these systems has been studied, and the bio-electrode can in some cases operate for several days. One limitation of this approach comes from the rather complex modification, especially if proteins [6] or a mediator [7] are also immobilized on the same macromolecule; this induces a significant cost and limits the number of functionalized groups in the layer. More recently, the adsorption of NAD+ on carbon nanotubes was also proposed as a new strategy for biomolecule immobilization [8]. Up to now, sol-gel chemistry has only been used for enzyme and mediator immobilization [1,9,10] and was not considered for durable cofactor immobilization. During the operation of bioelectrodes containing co-immobilized dehydrogenase and cofactor, the cofactor must be detected or regenerated electrochemically, and this operation has to be done using a mediator. The demand for the electrocatalytic detection of the NAD+ cofactor comes from the nature of this molecule, which diffuses freely in the living cell and for which a high electrochemical overpotential is observed for both the oxidation and the reduction reactions. This protects the molecule from side reactions, but direct electrochemical detection at high overpotential can lead to an irreversible degradation of the compound and to the simultaneous detection of interfering species. Many strategies for the electrocatalytic detection of the cofactor have been developed. Organic mediators [11,12,13,14,15], carbon nanotubes (CNTs) [10,16,17,18,19] or even gold nanoparticles [20] have been proposed to recycle NADH back to the enzymatically active NAD+. The cofactor can also be efficiently regenerated by the use of diaphorase in the presence of several molecular mediators: metal complexes [21,22], quinones and flavins [23,24], and also viologens [25]. This latter approach is very appealing when mild cofactor regeneration is needed to improve the long-term stability of the device, notably if NAD+ is immobilized in a reagentless device. In this study, we use diaphorase (DI) in combination with ferrocenedimethanol for cofactor regeneration. We have compared different strategies for cofactor immobilization in a sol-gel matrix, i.e., simple encapsulation of the native cofactor, encapsulation of NAD-dextran, adsorption on carbon nanotubes introduced into the sol-gel matrix and, finally, the use of glycidoxypropylsilane (GPS) as an additive. This latter molecule bears an epoxide ring that can react with the adenine moiety of the cofactor. According to the literature, such coupling has to be performed in basic solution in order to obtain the most active biomolecules [26], but sol-gel bio-encapsulation can only be carried out under neutral conditions. We will show here that, despite this limitation, the confinement of the linked cofactor in the sol-gel matrix using GPS allows good activity to be detected with high stability. Evidence of the reaction between the cofactor and GPS was obtained by FTIR.
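The GPS-based coupling invoked here can be summarized by the following simplified scheme (a sketch consistent with the description above and with the N-1 alkylation discussed in the FTIR section below; the exact regiochemistry obtained under these mild conditions is addressed there):

    Hydrolysis of the anchoring groups (R = glycidoxypropyl chain):
        (CH3O)3Si-R + 3 H2O → (HO)3Si-R + 3 CH3OH
    Ring opening of the GPS epoxide by the adenine (N-1) of NAD+:
        epoxide-R + NAD+ → R-CH(OH)-CH2-N1(adenine)-NAD+
    Co-condensation of the silanol groups with the TEOS-derived silica network:
        (HO)3Si-R + HO-Si≡ → R-(HO)2Si-O-Si≡ + H2O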
The co-encapsulation of DSDH and NAD+ was then evaluated by electrochemically-assisted deposition. The process and the sol composition were optimized on flat glassy carbon and gold electrodes before being applied to macroporous gold electrodes. Finally, the electrodeposited bio-composite containing DSDH, diaphorase and the cofactor was tested for the electrochemical oxidation of D-sorbitol, and a comparison was made between flat and macroporous gold electrodes.

2. Co-immobilization of dehydrogenase and cofactor in sol-gel matrix by drop-coating
"Simple" physical entrapment of the cofactor in the sol-gel film
A straightforward way to associate the cofactor with the biocomposite layer is to add it to the starting sol so that, after gelification, it is physically entrapped in the silica matrix. Figure IV-2A shows the electrochemical response obtained with the modified electrode prepared with PEI as the polyelectrolyte additive. Before the addition of D-sorbitol, a well-defined electrochemical signal due to ferrocenedimethanol could be measured (solid line). The addition of D-sorbitol to the solution, from 2 to 16 mM, led to a significant increase in the current response (dashed lines) and to the typical S-shaped curves expected for a biocatalytic process. The same experiment was performed with PDDA or PAA as the additive, but no electrochemical activity was observed (see Figure IV-2B&C). PEI has already been reported in the literature to allow the efficient immobilization of several dehydrogenases: alcohol [27], D-lactate [28] and glucose dehydrogenase [29]. The effect of PEI on the co-immobilization could be explained by the formation of "conjugates" through electrostatic attraction between the cationic polymer and the negatively-charged dehydrogenase and NAD+. These "conjugates" could make the enzyme more rigid and stable against unfolding, presenting a more stable conformation, and could also enrich the cofactor in the vicinity of the enzyme. PEI was thus able to ensure cofactor entrapment in an active form in the films, whereas the other polyelectrolytes did not; a possible explanation is related to the fact that PEI is a branched polyelectrolyte while the others are linear macromolecules. Upon continuous operation, however, the weakly retained cofactor progressively leaches out of the film and becomes less available for the biocatalytic event. This explanation is also supported by the faster signal decrease observed when operating under convective conditions (the response vanishing in less than 2 hours). So, in spite of exhibiting a rather good bioelectrocatalytic response, the system based on the simple encapsulation of NAD+ in the sol-gel biocomposite (GCE/TEOS/PEI/(DSDH+DI)/NAD+) does not provide long-term stability.

Figure IV-3. Evolution of the peak current intensity recorded for successive analyses of 10 mM D-sorbitol solutions at distinct periods of time, using a GCE modified with the same sol as above.

Effect of carbon nanotubes on the electrode stability
It is known from a recent report by Zhou et al. [8] that the noncovalent attachment of NAD+ to carbon nanotubes is possible by taking advantage of the strong π-π stacking interaction between the adenine moiety of the NAD+ molecule and the nanotube surface. We have thus evaluated whether such an interaction could contribute to improving the long-term stability of the cofactor immobilization in our sol-gel biocomposites. In the present case, however, it was necessary to use carboxylate-functionalized SWCNTs, because crude carbon nanotubes cannot be easily dispersed in the water-based sols used here.
The first straightforward attempt was to incorporate the SWCNTs into the starting sol containing all the other ingredients (TEOS, PEI, enzymes, NAD+), so that they would be dispersed throughout the film.

Encapsulation of NAD-dextran
Chemical attachment of the cofactor to a macromolecule is the most usual protocol for its durable immobilization on electrode surfaces [5,6,7]. Cofactor immobilization via the formation of interpenetrated organic-inorganic networks using a high-molecular-weight NAD+ derivative (NAD-dextran) was also tested. The commercially available NAD-dextran compound is indeed known to be active when associated with dehydrogenases [2,3,4]. The macromolecule was introduced into the sol-gel matrix following a protocol similar to that of the previous experiments. The electrode response increased during the first hour of the experiment and was then stable for more than 6 hours. This experiment was performed using ferrocenedimethanol in solution as the electron mediator between DI and the electrode surface for recycling the immobilized cofactor.

Figure IV-5. (A) Cyclic voltammograms obtained with GCE/TEOS/PEI/(DSDH+DI)/NAD-dextran in the absence of D-sorbitol (solid lines) and in the presence of D-sorbitol from 2 to 16 mM (dashed lines). (B) Evolution of the peak current intensity recorded for successive analyses of 10 mM D-sorbitol solutions at distinct periods of time, using a GCE modified with the same sol as (A). Cyclic voltammograms were recorded in Tris-HCl buffer (pH 9) containing 0.1 mM FDM. The potential scan rate was 50 mV/s.

Covalent attachment to the silica matrix with glycidoxypropylsilane
Glycidoxypropyltrimethoxysilane (GPS) is a well-known compound in sol-gel chemistry, widely used as an adhesive layer in composite materials, and it also finds application in protective coatings. This compound was recently proposed for the chemical attachment of the cofactor to silica nanoparticles [30] and, separately, for protein immobilization in macroporous silica monoliths [31]. Both approaches first used chemical grafting of the GPS onto the silica substrate before taking advantage of the epoxide ring for further functionalization with the enzyme or the cofactor. GPS has also been successfully used as a silica precursor for the encapsulation of enzymes [32,33]. In these previous reports, however, no attempt was made to couple the cofactor in situ during the sol-gel process. According to the literature, the coupling of an epoxy ring with the adenine moiety of the cofactor should be performed in basic medium in order to obtain the most active biomolecules [26], but such conditions are precluded here, as sol-gel bio-encapsulation can only be carried out under neutral conditions. We will show hereafter that, despite this limitation, the confinement of the linked cofactor in the sol-gel matrix using GPS allows good activity to be detected with high stability. Evidence of the reaction between NAD+ and GPS can be obtained indirectly by electrochemistry and directly by infrared spectroscopy.

Electrochemical evidence of NAD+ immobilization
Here, GPS was first allowed to react with NAD+ for at least 12 hours before being introduced as the NAD-GPS adduct into the sol, which was then deposited on the electrode surface along with the other components (TEOS, PEI, enzymes). First of all, GPS provides the film with a very good adhesion to the glassy carbon electrode surface, owing to the adhesive properties of this organosilane [34].
Typical amperometric and voltammetric responses of the resulting films are discussed below.

ATR-FTIR spectroscopy
ATR-FTIR monitoring of GPS hydrolysis
Because the methoxy groups of GPS hydrolyze easily in aqueous media, it was important to identify the characteristic spectral features of the hydrolysis reaction before analyzing the spectra recorded in the presence of NAD+. All spectra are discussed in the most informative region, 1800-700 cm-1.

Figure IV-7. (a)-(j) Time evolution of the ATR-FTIR spectra during 18 hours of GPS hydrolysis in Tris-HCl buffer (pH 7.5) on a diamond ATR crystal (one spectrum every 2 hours). (k) ATR-FTIR spectrum of GPS in water after 1 h 40 min of hydrolysis on a diamond crystal. Spectra are offset for clarity.

One interesting band absorbs at 911 cm-1. It is characteristic of the epoxide group and has been assigned to the C-O and C-C stretching and C-O torsion modes of this small ring. The wavenumber of this band does not change in the course of the hydrolysis; however, a marked intensity increase is observed with increasing hydrolysis time. This is probably due to a drastic change in the polarity of GPS upon hydrolysis, which leads to higher transition moments for these vibrational modes. This led us to conclude that the epoxide group does not open in the Tris-HCl buffer used in this study. The hydrolysis reaction in Tris-HCl buffer is also slower than in pure water, since after 18 hours of reaction the spectrum shows a mixture of the spectra of pure GPS and hydrolyzed GPS (Figure IV-7, spectra (j) and (k)). This is probably due to protective interactions between the Si(OCH3)3 group of GPS and the amino or hydroxyl groups of the Tris molecule.

Figure IV-8. ATR-FTIR spectra on a diamond ATR crystal of (a) pure GPS, (b) GPS and PEI (5 %) after about 15 h in Tris-HCl buffer (pH 7.5), (c) GPS after about 14 hours in Tris-HCl buffer, (d) GPS after 1 h 40 min in pure water and (e) 5 % PEI in Tris-HCl buffer at pH 7.5. Spectra are offset for clarity.

ATR-FTIR spectrum of NAD+
Figure IV-9 shows the spectrum of NAD+ in Tris-HCl buffer with the main group assignments determined according to the literature [37,38,39,40,41].

Figure IV-9. ATR-FTIR spectrum of NAD+ (0.3 M) in Tris-HCl buffer (pH 7.5) on a diamond ATR crystal.

Evidence of the reaction between the cofactor and GPS
Figure IV-10 shows the time evolution of the ATR-FTIR spectra during the 18-hour reaction of GPS with NAD+ (1:1 molar ratio) in Tris-HCl buffer. The C=O stretching band at 1696 cm-1, arising from the nicotinamide moiety of NAD+, does not change during the course of the reaction; it can be concluded that this group does not react with GPS (such a reaction would have suppressed the electrocatalytic activity). Weak bands absorbing between 1370 and 1310 cm-1 are characteristic of the adenine part of NAD+ [39]. The profile of these weak, poorly resolved bands changes with increasing time, suggesting that the adenine group reacts with GPS, although it is not possible to identify precisely which bonds react.

Figure IV-10. Time evolution of the ATR-FTIR spectra during the 18-hour reaction of GPS (0.3 M) with NAD+ (0.3 M) in Tris-HCl buffer (pH 7.5) on a diamond ATR crystal (one spectrum every 5 min during the first 30 min, then at 1 h, 2 h, 3 h, 5 h, 8 h, 11 h, 14 h and 18 h of reaction). The inset shows a detail of the spectra between 870 and 940 cm-1. The scheme shows the possible reaction between the N-1 of NAD+ and the epoxide ring of GPS.

The intensity of the bands between 1300 and 1000 cm-1 increases with increasing time.
Even though GPS was already partly hydrolyzed when the FTIR monitoring started (bands around 1100 and at 1015 cm-1), this shows the continuing hydrolysis and rearrangement of the hydrolyzed species during the reaction monitoring (note that the PO2 stretching of the phosphate groups also falls in this region). One interesting spectral feature is the intensity decrease of the band at 911 cm-1, which is assigned to the epoxide group of GPS. It shows that this group is opened in the presence of NAD+, while no opening of the epoxide ring was observed in Tris-HCl buffer alone (Figure IV-7). The intensity of this band decreases during 8 hours of reaction and then stays almost constant, marking the end of the reaction. After 8 hours, a residual weak band of constant intensity remains in the spectrum; it is due to a ribose νC-C mode of NAD+ (see Figure IV-9). According to previous work by Fuller et al. [26], the conditions used here should lead to the alkylation of NAD+ at position N-1 (see the scheme in Figure IV-10). This compound displays very low activity with dehydrogenases [42]. For optimal enzymatic activity, the cofactor should rather be attached through the C-6 amino group than through the N-1 ring nitrogen. This is usually achieved in basic medium over hours of reaction, but such conditions cannot be used here, as they would induce a rapid gelification and make the subsequent encapsulation of the proteins impossible. Despite this possible limitation, the high concentration of immobilized cofactor still allows a good catalytic activity to be detected.

Figure IV-12. Time evolution of the ATR-FTIR spectra during the 18-hour reaction of GPS (0.3 M) with NADH (0.3 M) in Tris-HCl buffer (pH 7.5) on a diamond ATR crystal (one spectrum at 0, 15 and 30 min and at 1, 2, 5, 8, 11, 14 and 18 h of reaction). The inset shows a detail of the same spectra between 940 and 870 cm-1.

A final control experiment was made by replacing NAD+ with NADH in the reaction with GPS, following the same protocol. Indeed, the electrochemical biosensor/bioreactor is expected to operate independently of the initial form of the cofactor (because the required form is continuously regenerated via the electron mediator). This time, no decrease was observed at 911 cm-1 (Figure IV-12). On the contrary, the band ascribed to GPS increased slightly, owing to the hydrolysis of the alkoxysilane moieties; a similar effect was observed during the simple hydrolysis of GPS in Tris-HCl buffer (Figure IV-7). The reduced form of the cofactor thus did not react with the epoxide ring under the mild conditions of the sol-gel protocol. This absence of alkylation was confirmed by electrochemical measurements: the electrode responded to D-sorbitol additions, but the electrochemical response was not stable and disappeared in less than 3 hours (Figure IV-11). This trend is comparable to that observed with the film prepared in the absence of GPS (Figure IV-6B, curve b). In addition to the attachment of the cofactor, the different components contribute to the good stability of the sol-gel film. For comparison, PEI was replaced here by PDDA. As mentioned in the first section of the discussion, this latter polyelectrolyte did not lead to electrocatalytic activity when NAD+ was present in the film. In addition, it was observed that PDDA led to swollen films with limited stability (a few hours). Only the association of PEI and GPS made it possible to obtain a good film stability together with a good catalytic activity.
FTIR was also used to monitor a possible reaction between PEI and GPS, but no evidence of coupling between them could be observed by following the intensity of the signal at 911 cm-1 (Figure IV-8). Contrary to PAA or PDDA, which are linear polymers, PEI is a branched polymer, and we suppose that it can be better distributed in the sol-gel layer. This would be the main explanation for the good stability of this bio-organic-inorganic hybrid sol-gel layer, although a reaction between PEI and GPS cannot be totally excluded during the drying and aging steps, which were not monitored by FTIR.

3. Co-immobilization of dehydrogenase and cofactor in electrodeposited sol-gel thin films
It was demonstrated in section 2 of this chapter that the dehydrogenase and the cofactor can be readily co-immobilized in silica sol-gel films on electrode surfaces, showing an effective bio-electrochemical activity when PEI and GPS are used as additives in the sol; that work, however, was performed by depositing the sol-gel films by drop-coating. In the present section, the approach is extended to the electrochemically-assisted deposition of the sol-gel layer.

Feasibility on GCE
First of all, the co-immobilization of DSDH, DI and NAD-GPS by electrochemically-assisted deposition was evaluated on a glassy carbon electrode. Before the addition of D-sorbitol, a well-defined electrochemical signal due to ferrocenedimethanol was observed (solid line). The addition of D-sorbitol to the solution, from 2 to 6 mM, led to a significant increase in the current response (dashed lines). Electrochemically-assisted silica gel deposition is thus an effective method for the co-immobilization of DSDH and NAD+ on electrode surfaces. Note that PEI was used here as the polyelectrolyte stabilizing the protein in the sol-gel film.

Figure IV-13. Cyclic voltammograms obtained using a GCE modified by a TEOS/NAD-GPS/PEI/(DSDH+DI) film in the absence and presence of D-sorbitol. Films were deposited by electrolysis at -1.3 V for 60 s from a sol containing 0.15 M TEOS, 14 mM NAD-GPS, 2.3 mg/mL DSDH, 0.76 mg/mL DI and 1.5 % PEI. Cyclic voltammograms were recorded in 0.1 M Tris-HCl buffer in the presence of 0.1 mM FDM. The potential scan rate was 50 mV/s.

Feasibility on a flat gold electrode
The co-immobilization of DSDH, DI and NAD-GPS by electrochemically-assisted deposition was then evaluated on a flat gold electrode. We started with the same conditions as used for the experiment on glassy carbon (Figure IV-13), but these conditions could not be transferred to the gold electrode without modification: it was necessary to introduce a small amount of PDDA into the sol to improve the adhesion of the film on the gold electrode. Cofactor immobilization together with DSDH and diaphorase was thus successfully achieved in a sol-gel film prepared by electrodeposition. As for the proteins, the appropriate choice of the polyelectrolyte and of its concentration in the sol was critical for obtaining an optimal film deposition and electrocatalytic activity. A combination of PEI and PDDA was used for the electrochemically-assisted deposition of this sol-gel biocomposite with co-immobilized DSDH, diaphorase and cofactor on the gold surface. This process has also been applied to the functionalization of macroporous electrodes.

Figure IV-14. Cyclic voltammograms obtained using a gold electrode modified by a TEOS/NAD-GPS/PEI/PDDA/(DSDH+DI) film in the absence and presence of D-sorbitol.
Films have been deposited by electrolysis at -1.3 V for 60 s from a sol containing 0.15 M TEOS, 14 mM NAD-GPS, 2.3 mg/mL DSDH, 0.76 mg/mL DI, 1 % PEI and 0.5 % PDDA. Cyclic voltammograms have been recorded in 0.1 M Tris-HCl buffer, in the presence of 0.1 mM FDM. Potential scan rate was 50 mV/s.

Extension to the macroporous gold electrode

The above approach was extended to macroporous gold electrodes displaying a much larger electroactive surface area in comparison with the geometric one. A deposition condition of -1.3 V for 60 s (versus the Ag/AgCl reference electrode) was used in the previous experiments. Although a good catalytic response was obtained on the flat gold electrode, the deposited sol-gel film was rather thick. In macroporous electrodes the situation is even more complex, as the pore interconnections can be rapidly blocked by the electrogenerated silica gel layer. Milder potentials and shorter deposition times are needed here to prevent rapid macropore clogging. Films have therefore been deposited by electrolysis at -1.1 V for 30 s.

Conclusion

The NAD⁺ cofactor was durably attached to the silica matrix through the epoxide group of GPS before co-condensation of the organoalkoxysilane with TEOS in the presence of the proteins and a poly(ethyleneimine) additive (PEI). All the operations are done under mild conditions compatible with the sol-gel bio-encapsulation process. By comparison with the simple encapsulation of NAD⁺ or NAD-dextran, or the adsorption of NAD⁺ on carbon nanotubes, the strategy with GPS is cheaper, simpler to implement and leads to more stable sol-gel films. The electrode shows good stability under stirring for more than 14 hours. The sol-gel biocomposite can be deposited on the electrode surface either by evaporation of the sol or by sol electrolysis (i.e. electrochemically-assisted deposition). Finally, the functional layer has been successfully deposited in macroporous gold electrodes and applied to the oxidation of D-sorbitol. The macroporous texture of the gold electrode significantly improves the catalytic efficiency of the sol-gel biocomposite in comparison with a flat gold electrode. By comparison with the methods previously employed, the strategy described here offers a simple, cheap and clean, yet effective, approach to the development of integrated dehydrogenase-based bio-electrochemical devices.

Chapter V. Mediator immobilization in sol-gel matrix and co-immobilization with dehydrogenase and cofactor

In this chapter, diaphorase has been used in addition to the dehydrogenase in order to ensure the safe regeneration of the NAD⁺ cofactor, using ferrocene species or an osmium polymer as electron shuttles between the diaphorase and the electrode. First, the immobilization of ferrocene species and osmium polymers in the sol-gel matrix was studied, and the influence of GPS as an additive for mediator immobilization is described. Then, the feasibility of co-immobilization in a sol-gel film was evaluated by one-step drop-coating. In addition, the electrochemically-assisted deposition of sol-gel thin films to co-immobilize dehydrogenase, diaphorase, the cofactor NAD⁺ and an electron mediator was also investigated.

Introduction

About 300 dehydrogenases are known to catalyze the oxidation of a variety of substrates. However, the development of dehydrogenase-based reagentless devices faces some difficulties because of the high overpotential for the direct oxidation of NADH, which results in the formation of inactive forms of the cofactor. Some of these difficulties can be minimized through the use of redox mediators that shuttle electrons between the cofactor and the electrode surface [1]. The oxidation of NADH can also be achieved using the enzyme diaphorase (DI).
The association of diaphorase with mediators to regenerate the cofactor at the electrode surface has already been reported [2,3,4]. Regeneration of the cofactor with the participation of diaphorase ensures that only the enzymatically active form of the cofactor is produced. A mediator is necessary because diaphorase shows only very slow rates of electron transfer to electrode surfaces [5]. Ferrocene species [3] are by far the most common mediators used in combination with diaphorase for cofactor regeneration; osmium redox polymers have also been proposed recently for such applications [4]. The difficulty in elaborating a reagent-free device lies in the development of a suitable matrix in which all components of the electrochemical detection are immobilized in a stable form, i.e. the enzyme(s), the cofactor and the electrocatalytic system for cofactor detection and regeneration. Sol-gel materials are known to have very promising features as immobilization matrices, because they can be prepared at room temperature and can retain the catalytic activity of biomolecules [6,7,8]. By changing the experimental conditions of the sol-gel process, gel structures with tailored characteristics can be obtained. A very large number of enzymes have been trapped within sol-gel films, showing that they usually retain their catalytic activity and can even be protected against degradation [9,10,11]. Moreover, during the sol-gel process, additional substances such as redox mediators, e.g. ruthenium complexes [12], toluidine blue [13], thionin [14] or ferrocene [15], can also be easily incorporated into the final structure. Despite the long history of using sol-gel materials for enzyme and mediator immobilization, no work has been reported on the co-immobilization of enzyme, cofactor and mediator in a sol-gel material to construct such a reagentless device.

In this work, a series of strategies allowing dehydrogenase, cofactor and electron mediator co-immobilization in sol-gel thin films has been investigated. As illustrated in Scheme V-1, oxidation of the enzymatic substrate by the immobilized dehydrogenase induces NAD⁺ reduction to NADH. Diaphorase then catalyses the oxidation of the immobilized NADH back to NAD⁺, and the electron transfer from the diaphorase to the glassy carbon electrode surface is carried out by the immobilized mediators. These mediator species can be ferrocene species (ferrocene linked to a poly(ethylenimine) or a ferrocene-silane) or an osmium polymer. First, the immobilization of ferrocene species and osmium polymers in the sol-gel matrix is presented, and the influence of glycidoxypropylsilane (GPS) as an additive to improve the long-term stability of the mediator immobilization is studied. Then, the feasibility of co-immobilization of dehydrogenase, diaphorase and cofactor in the sol-gel film is evaluated by one-step drop-coating. Attempts to perform this co-immobilization by electrochemically-assisted sol-gel deposition are finally presented.

Scheme V-1. Illustration of the electrochemical pathway used for the detection of the dehydrogenase enzymatic substrate.

Fc-PEI as co-immobilized mediator

Ferrocene (Fc) species are commonly used in combination with diaphorase for cofactor regeneration. The great interest in ferrocene derivatives originates from their fast electron transfer, their pH-independent redox potentials and their efficient electrochemical reversibility [16].
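For clarity, the mediated pathway of Scheme V-1 can be written as a reaction sequence (a summary consistent with the description in the text; the two-ferricinium stoichiometry simply reflects the two-electron oxidation of NADH):

```latex
\begin{align*}
\text{D-sorbitol} + \mathrm{NAD^+} &\xrightarrow{\text{DSDH}} \text{D-fructose} + \mathrm{NADH} + \mathrm{H^+}\\
\mathrm{NADH} + 2\,\mathrm{Fc^+} &\xrightarrow{\text{DI}} \mathrm{NAD^+} + 2\,\mathrm{Fc} + \mathrm{H^+}\\
2\,\mathrm{Fc} &\xrightarrow{\text{electrode}} 2\,\mathrm{Fc^+} + 2\,e^-
\end{align*}
```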
In this process, NADH reacts with diaphorase and the electrochemically generated ferricinium ions to produce NAD⁺ and ferrocene species that can be re-oxidized in the electrocatalytic scheme (see Scheme V-1).

Co-immobilization in drop-coated sol-gel film

2.1.1 Effect of GPS on Fc-PEI immobilization

The ferricinium ion is much more soluble than ferrocene itself [17]. Therefore, the main problem in using Fc is a gradual leakage of the mediator from the electrode surface via its oxidized form. One way to overcome this problem is to attach the Fc to a polymer. Poly(ethylenimine) (PEI) is an attractive candidate as a redox polymer backbone, owing to its high density of functional groups, which facilitates modification, and its high segmental mobility. Liu et al. coupled ferrocene carboxaldehyde to PEI and incorporated these redox polymers into polyelectrolyte multilayer films via a layer-by-layer deposition technique [18]. Hodak et al. reported the redox mediation of glucose oxidase (GOx) in a self-assembled structure of cationic protonated poly(allylamine) modified by ferrocene (PAA-Fc) and anionic GOx deposited electrostatically layer-by-layer on negatively charged alkanethiol-modified Au surfaces [19]. Zheng et al. prepared PEI bearing both the redox mediator (ferrocene) and the native cofactor (NAD⁺); the modified polymer was then immobilized together with an NAD-dependent dehydrogenase to construct a reagentless amperometric biosensor [20]. Here, the synthesized Fc-PEI is introduced into the sol-gel matrix to construct the mediator-modified electrode, with the idea of forming an interpenetrated organic-inorganic hybrid that ensures durable immobilization of the mediator.

Figure V-1A shows cyclic voltammograms recorded with a GCE modified by a TEOS sol-gel film containing Fc-PEI. A well-defined CV signal can be observed, but it is not stable upon multiple potential scans due to the rapid leaching of the mediator. Unfortunately, the leakage of Fc from the electrode cannot be prevented by the chemical attachment of Fc to PEI alone. The chemical structure of the alkoxysilane precursors and the composition of the sol-gel mixture influence the roughness, the size and the distribution of pores in the sol-gel films, which are important criteria for efficient enzyme and mediator encapsulation. It has been reported that sol-gel films formed from the GPS precursor, in comparison with other (organo)alkoxysilane precursors, are more uniform and display smaller pores [21,22], which should allow stable enzyme and mediator immobilization. Here, we introduce GPS into the sol-gel matrix to improve the stability of mediator immobilization, with the idea of anchoring Fc-PEI more effectively to the structure via the favourable interaction between the amine groups of PEI and the epoxide function of GPS.

Co-immobilization of Fc-PEI, DSDH, DI and cofactor

Here, the co-immobilization of Fc-PEI, cofactor, DSDH and diaphorase inside the silica gel has been investigated. Fc-PEI, the GPS-functionalized cofactor (NAD-GPS), DSDH and diaphorase were introduced together into the sol before film deposition (Figure V-2). An increasing electrocatalytic response is obtained upon addition of 0.2 mM D-sorbitol. The electrocatalytic response is about three times lower with the mediator inside the film, by comparison with the response obtained with the mediator in solution (Figure IV-6).
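Since the calibration data discussed next (Figure V-2C) show a linear rise followed by a plateau, they can be rationalized with an apparent Michaelis-Menten model. The Python sketch below illustrates such a fit; the data points are hypothetical and only mimic the reported trend (roughly linear up to ~2.2 mM, levelling off above).

```python
# Minimal sketch: fitting an amperometric calibration curve with an apparent
# Michaelis-Menten model, I = I_max * C / (K_M_app + C). Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

C = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2, 3.0, 4.0])   # D-sorbitol / mM
I = np.array([0.9, 2.3, 3.4, 4.3, 5.0, 5.6, 6.5, 7.4])   # current / uA

def michaelis_menten(C, I_max, K_M):
    # saturation response of an enzyme-modified electrode
    return I_max * C / (K_M + C)

(I_max, K_M), _ = curve_fit(michaelis_menten, C, I, p0=(10.0, 2.0))
print(f"I_max = {I_max:.1f} uA, apparent K_M = {K_M:.1f} mM")
# The linear range corresponds to concentrations well below the apparent K_M.
```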
Figure V-2C shows the corresponding calibration plot: the current intensity increases regularly with the D-sorbitol concentration up to 2.2 mM and starts to level off at higher concentrations. The electrode stability was also studied (Figure V-3). The response to 2 mM D-sorbitol of electrodes prepared in the same way, with (curve a) and without GPS (curve b), was monitored for 14 hours. In the absence of GPS, the electrode current decreases dramatically during the first 500 s (before the addition of D-sorbitol), due to the rapid loss of ferrocene and NAD⁺ into the solution. After the addition of D-sorbitol, only a faint current increase is obtained, which quickly falls back to a value close to zero. In contrast, the electrode prepared with GPS displays very good stability, with only a limited (a few percent) decrease in current intensity, possibly due to some loss of enzymatic activity during the long operation. GPS thus has two functions here: it chemically attaches NAD⁺ to the silica matrix and it stabilizes the overall assembly for improved long-term stability.

2.2 Co-immobilization in electrogenerated sol-gel films

Immobilization of Fc-PEI in electrodeposited sol-gel thin films

The electrochemically-assisted deposition of silica thin films involves a local increase of the pH that induces rapid gelation at the electrode surface. In order to develop a mediator immobilization method compatible with macroporous electrodes, we tried to extend the previous dropping/evaporation approach to the electrochemically-assisted deposition of the sol-gel film. However, almost no electrochemical signal of ferrocene was observed in the cyclic voltammograms. Contrary to drop-coating, electrodeposition involves a significant increase of pH during gelation. It seems that these conditions limit the homogeneous incorporation of the ferrocene species in the sol-gel film and strongly limit its electrochemical detection.

Figure V-4. Cyclic voltammograms recorded with a GCE modified by electrodeposited TEOS/GPS/PEI/Fc-PEI in 0.1 M Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s; 10 scan cycles. Films have been deposited by electrolysis at -1.3 V for 60 s from a TEOS/GPS sol containing Fc-PEI.

Co-immobilization in electrodeposited sol-gel thin film

It has been demonstrated in Chapter IV that dehydrogenase and cofactor can be readily encapsulated by electrochemically-assisted deposition of sol-gel films on electrode surfaces. Although the immobilization of Fc-PEI by electrodeposition was not successful, we nevertheless tried to co-immobilize DSDH, NAD-GPS and Fc-PEI in electrodeposited sol-gel films; indeed, the introduction of the enzyme and cofactor in the starting sol may alter the film properties. However, only a faintly visible electrochemical signal of ferrocene could be observed in the absence of D-sorbitol (Figure V-5). The addition of D-sorbitol did not induce any increase of the anodic current; on the contrary, the electrochemical signal of ferrocene decreased strongly. Obviously, the differences in gel texture/structure between films obtained by drop-coating and by electrodeposition lead to different electrochemical behaviours. In order to improve the incorporation of the ferrocene species in the electrogenerated sol-gel film, another strategy involving Fc-silane has been tested and is presented in the next section.

Fc-silane as co-immobilized mediator

The synthesis of sol-gel silica materials [23,24,25] has become a vast area of research during the last few years.
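As background to this precursor chemistry, the silica network forms through the textbook hydrolysis and condensation reactions of alkoxysilanes (standard sol-gel chemistry, recalled here for convenience):

```latex
\begin{align*}
\equiv\!\mathrm{Si{-}OR'} + \mathrm{H_2O} &\longrightarrow\ \equiv\!\mathrm{Si{-}OH} + \mathrm{R'OH} &&\text{(hydrolysis)}\\
\equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv\ &\longrightarrow\ \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{H_2O} &&\text{(condensation)}
\end{align*}
```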
The silica framework can be synthesized in part from alkoxide precursors containing a nonhydrolyzable Si-C bond, i.e. RₓSi(OR')₄₋ₓ, where R represents the desired reagent or functional group. One of the possible applications of such materials in the development of sensors is the attachment of the redox species to the electrode surface. Audebert et al. [25] developed a modified electrode from organic-inorganic hybrid gels formed by hydrolysis-polycondensation of some trimethoxysilylferrocenes; in that work, the organic-inorganic hybrid gels contained ferrocene units covalently bonded inside a silica network. There is thus great potential in studying such ferrocene-linked sol-gel silica materials for mediated biosensor applications, and some work on ferrocene-based sol-gel sensors is available [26,27,28]. Here, Fc-silane (see the top of Figure V-6) was considered because of the problems of mediator immobilization through electrodeposition demonstrated above: ferrocene functionalized with a silane group was expected to improve mediator immobilization during electrodeposition. As previously, the first tests were performed with drop-coated sol-gel films before considering electrodeposition.

Co-immobilization of Fc-silane, DSDH, DI and cofactor

The ferrocene-silane compound has then been associated with the other components of the reagentless device. First, a sol with Fc-silane as co-condensation precursor was prepared; then, the GPS-functionalized cofactor, DSDH and diaphorase were introduced into this sol.

Co-immobilization in electrogenerated sol-gel film

Immobilization of Fc-silane in electrodeposited sol-gel thin film

Although several reports on silylated ferrocene derivatives immobilized on electrode surfaces are available [25,26,27,28], most of them were prepared by drop-coating or spin-coating. To our knowledge, no work has been reported regarding ferrocene-silane derivatives immobilized by electrochemically-assisted sol-gel deposition.

Good catalytic behaviour towards D-sorbitol oxidation was observed in previous experiments with both proteins and cofactor immobilized in the electrodeposited sol-gel layer (Chapter IV, section 3). In order to analyse the present negative result, a similar film was prepared without immobilized NAD⁺ (TEOS/GPS/Fc-silane/PEI/DSDH/DI) (Figure V-10). In this experiment, the enzymes and the mediator were co-immobilized on the electrode surface, and the cofactor was introduced into the solution before the electrochemical experiments. In the absence of D-sorbitol, only the reversible electrochemical signal of ferrocene was observed. The addition of D-sorbitol into the solution did not lead to any noticeable modification of the current response, and no increase of the peak current could be observed at the potential of NADH oxidation. Obviously, the enzymes and mediator co-immobilized in the sol-gel film by electrodeposition do not exhibit electrochemically detectable activity: the communication between the immobilized ferrocene and diaphorase was not sufficient to allow electrocatalysis. The different gel textures expected for films obtained by evaporation (drop-coating) or by electrodeposition could explain the difference observed between these two kinds of electrode. The co-immobilization of dehydrogenase, cofactor and electron mediator (Fc-PEI or Fc-silane) in a sol-gel matrix by drop-coating was thus successful.
However, such co-immobilization using electrogenerated sol-gel thin films was not possible. For Fc-PEI, no electrochemical signal of ferrocene was observed after immobilization by electrodeposition. The incorporation of ferrocene in the electrogenerated sol-gel film could be improved by using Fc-silane, but no electrocatalysis was observed. These negative results obtained with ferrocene species in sol-gel electrodeposition led us to use a different redox polymer to improve the connection between diaphorase and the electrode surface. Owing to the electrochemical reversibility, high electron-transfer rate constants and stability of Os complexes, osmium polymers have recently been proposed for such applications and were tested here.

Os-polymer as co-immobilized mediator

Flexible osmium redox polymers have attracted the attention of a number of researchers due to their efficient electron-shuttling properties combined with a polymeric structure promoting stable adsorption, as well as the possibility of immobilizing the enzyme into multiple layers [29,30] on the electrode surface. Osmium polymers can serve as mediators for a wide range of oxidases in biosensor fabrication, such as glucose oxidase [31], lactate oxidase [32] and alcohol oxidase [33]. Osmium polymers can also be used as mediators in dehydrogenase-based devices: reagentless amperometric formaldehyde-selective biosensors based on recombinant yeast formaldehyde dehydrogenase have been developed, in which the polymer layers simultaneously served as a matrix keeping the negatively charged cofactors and glutathione in the bioactive layer [36]. Here, four kinds of synthesized Os-polymers were tested.

Immobilization of DI

The catalytic behaviour of the immobilized osmium polymer was evaluated through the simple co-encapsulation of the osmium polymer and diaphorase. The conclusion for the Os-polymer is the same as for the ferrocene species: co-immobilization of all components (dehydrogenase, cofactor and electron mediator) in the sol-gel films can be successfully achieved by drop-coating, whereas co-immobilization using electrogenerated sol-gel thin films was not successful, owing to the same limitation in mediator immobilization.

Conclusion

A series of strategies for dehydrogenase, cofactor and electron mediator co-immobilization in sol-gel thin films has been developed. First, ferrocene species (Fc-PEI and Fc-silane) or osmium polymers acting as mediators between diaphorase and the electrode were successfully immobilized within drop-coated sol-gel films, allowing the smooth regeneration of the cofactor. The importance of introducing GPS as an additive into the TEOS sol-gel-derived films has been pointed out with respect to mediator immobilization: GPS greatly enhances the stability of the electrochemical response. Finally, co-immobilization of the enzymes, the mediator and the NAD⁺ cofactor was successfully achieved in drop-coated films, whereas it remained limited in electrogenerated films.

Introduction

The direct oxidation or reduction of the cofactor on a bare electrode requires high overpotentials, and usually leads to enzymatically inactive NAD dimers and serious side reactions. However, only reversible cofactor recycling can guarantee the reagentless character of bioelectrocatalytic devices. In order to overcome these inherent difficulties, a common approach is to confine the mediator at the electrode surface to facilitate the interfacial electron transfer kinetics.
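The dimerization route responsible for this loss of active cofactor is classically written as a one-electron reduction followed by radical coupling (standard formulation, recalled here for clarity):

```latex
\begin{align*}
\mathrm{NAD^+} + e^- &\longrightarrow \mathrm{NAD^\bullet}\\
2\,\mathrm{NAD^\bullet} &\longrightarrow \mathrm{NAD_2}\quad\text{(enzymatically inactive dimer)}
\end{align*}
```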
The mediators for the regeneration of NAD⁺ are diverse. Several mediators, such as quinones [1,2], oxometalates [3], ruthenium complexes [4], and quinonoid redox dyes such as phenazines [5,6] and phenoxazines [7,8], have been proposed for NAD⁺ regeneration. Traditionally, these mediators were directly adsorbed, electropolymerized or covalently bound onto the electrode surface [9,10,11]. The mediators for the regeneration of NADH are relatively few. The best systems to date fulfilling these requirements are tris(2,2'-bipyridyl)rhodium complexes [12,13,14] and substituted or non-substituted (2,2'-bipyridyl)(pentamethylcyclopentadienyl)rhodium complexes [15], along with some others [16]. The mechanism of this electrocatalytic process has been studied extensively, and the effect of various parameters (e.g., solution composition, temperature) has been discussed in the literature [17,18]. It might therefore be surprising that little effort has been made to immobilize these mediators in an electrocatalytically active form and to use them further with immobilized dehydrogenases.

During the last decades, carbon nanotubes (single- or multi-walled) have emerged as attractive materials in electroanalysis [19,20]. Indeed, they display attractive chemical stability, strong adsorptive properties and excellent biocompatibility [21,22]. CNT-based electrodes are known to decrease the overpotential for the oxidation of NADH; however, the extent of the decrease is not sufficient for the selective detection and regeneration of the cofactor [23]. Recently, Wooten et al. demonstrated that a further decrease in the NADH overpotential can be achieved at CNTs activated by microwaving in concentrated nitric acid [24]. An alternative approach, the incorporation of mediators onto carbon nanotubes (CNTs) for NAD⁺ regeneration, has attracted considerable attention. A number of mediators such as toluidine blue [25], Nile blue [26,27], Meldola blue [28], methylene green (MG) [29] and an osmium polymer [30] have been immobilized by adsorption onto CNTs, resulting in a remarkable improvement of the electrocatalysis of NADH oxidation. Another approach to form stable mediator films on electrode surfaces is electropolymerization, which has many advantages, including selectivity, sensitivity and homogeneity of the electrochemical deposition, strong adherence to the electrode surface and chemical stability of the film [31,32,33]. CNT/mediator composites as electrode materials have already been explored for the construction of dehydrogenase-based biosensors [34]. Of particular interest is the report by Yan et al., which described the assembly of integrated, electrically contacted NAD(P)⁺-dependent enzyme-SWCNT electrodes [27]. The SWCNTs were functionalized with Nile blue, the affinity complexes of dehydrogenase with cofactor were crosslinked with glutaric dialdehyde, and the biomolecule-functionalized SWCNT materials were deposited on glassy carbon electrodes. This is, to date, the only example of a reagentless device based on the combination of a dehydrogenase and carbon nanotubes. The combination of sol-gel materials and carbon nanotubes has also been considered for electroanalytical applications [35]. Recently, the co-immobilization of lactate dehydrogenase and functionalized carbon nanotubes in a sol-gel has been developed for biosensing [36]. The nanocomposite was prepared by a sol-gel process incorporating a redox mediator and carbon nanotubes, which was mixed with the enzyme solution in a certain ratio for enzyme encapsulation.
To date, and to our knowledge, no attempt has been made to use a sol-gel thin film to co-immobilize a dehydrogenase and the NAD⁺ cofactor on the surface of a CNT/mediator composite electrode. Moreover, the electrochemically-assisted deposition of a sol-gel biocomposite on a carbon nanotube assembly is also a new approach. We have investigated here various strategies for the elaboration of a reagentless sensor based on an NAD-dependent dehydrogenase using the electrochemically-assisted deposition of the sol-gel biocomposite on carbon nanotubes (CNTs). CNTs have been functionalized by three different protocols in order to provide them with catalytic properties for NADH detection: (1) microwave treatment (MWCNTs-µW), (2) electrodeposition of poly(methylene green) (MWCNTs-PMG) and (3) wrapping with an osmium polymer (MWCNTs-Os). The immobilization of a rhodium complex on SWCNTs has also been studied.

Deposition of sol-gel film at microwaved MWCNTs (GCE/MWCNTs-µW)

A recent report by Wooten et al. described that microwave treatment of MWCNTs results in a dramatic shift of the oxidative peak potential of NADH (E_NADH) to a lower value, from +0.4 V to about 0 V [24]. This efficient system is interesting for further use in combination with immobilized dehydrogenase and cofactor to develop the reagentless device.

Electrocatalytic oxidation of NADH at GCE/MWCNTs-µW

NADH is first oxidized by the surface quinone groups (Q) generated by the microwave treatment:

Q + NADH + H⁺ → QH₂ + NAD⁺ (1)

which is followed by the recycling of the quinone species on the surface of the treated MWCNTs:

QH₂ → Q + 2e⁻ + 2H⁺ (2)

Because reactions 1 and 2 are faster than the direct electrooxidation of NADH to NAD⁺, the mediated process (1)-(2) allows conversion of NADH to NAD⁺ at less positive potentials, close to the formal potential of the Q/QH₂ redox couple (~0 V).

Importance of bilayers (drop-coated sol-gel film)

This efficient system was further used in combination with the sol-gel biocomposite. When the sol-gel biocomposite covered the MWCNTs-µW, however, no electrocatalytic response was observed near 0 V (see curve a); a faint electrocatalytic response started to appear at +0.5 V, and the optimal potential was found to be +0.8 V.

Co-immobilization of DSDH and cofactor in electrogenerated sol-gel thin film

This approach has been further used to develop a reagentless system (co-immobilization of DSDH and cofactor with MWCNTs-µW as mediator). The presence of MWCNTs-µW on the GCE surface greatly improved the electrocatalytic efficiency of the bioelectrode and decreased to some extent the overpotential of NADH detection (from +0.8 V to +0.4 V). However, the benefit of the microwave treatment was lost when the MWCNTs-µW were covered with an additional sol-gel layer; moreover, the system failed to regenerate the immobilized cofactor. An alternative modification of the multi-walled carbon nanotubes is the electrodeposition of methylene green. We expected this functionalization to be less sensitive than the surface quinones to the sol-gel deposition, and to be more suitable for the regeneration of the immobilized cofactor.

Deposition of sol-gel film at MWCNTs modified by poly(methylene green) (GCE/MWCNTs-PMG)

Electrocatalytic oxidation of NADH at GCE/MWCNTs-PMG

Methylene green (MG) has been shown to be a good NADH oxidation electrocatalyst [37,38] and was employed in this study; the principle of this mediation is analogous to the quinone scheme described above. The modified electrode responded rapidly to changes in concentration and displayed a higher sensitivity and a wider detection range at +0.2 V. If a higher potential were applied, the direct uncatalysed oxidation of the enzymatically generated NADH may occur, leading to enzymatically inactive NAD dimers and serious side reactions.
In this experiment, +0.2 V was selected as the working potential, which guarantees both good selectivity and sensitivity; this potential is very close to the peak potential observed in the cyclic voltammograms.

Encapsulation of DSDH and cofactor in sol-gel film drop-coated onto GCE/MWCNTs-PMG

As mentioned above, co-immobilizing all the necessary enzyme, cofactor and mediator onto the electrode surface, without any additional reagent in solution, is the key advantage of a reagentless device. As discussed in Chapter IV, glycidoxypropyltrimethoxysilane (GPS) [39,40] provides a promising approach for NAD⁺ immobilization.

Electrodeposition of sol-gel thin film at GCE/MWCNTs-PMG

Electrodeposition of sol-gel thin film containing DSDH at GCE/MWCNTs-PMG (cofactor in solution)

The above approach has been extended to electrochemically-assisted deposition. First, the encapsulation of DSDH inside the electrodeposited silica gel was investigated. When 1 mM NAD⁺ was added to the solution, a significant increase in the current response was observed, indicating good enzymatic activity of the immobilized protein. DSDH encapsulated in the electrodeposited sol-gel film was thus active, but the immobilized cofactor did not interact with the carbon nanotubes modified with poly(methylene green). The strategy based on MWCNTs-PMG decreased the oxidation overpotential to +0.2 V for the smooth regeneration of the cofactor, and the mediator was now stable in the presence of the sol-gel layer. Co-immobilization of DSDH and cofactor on MWCNTs-PMG could be obtained by drop-coating; however, this strategy could not be extended to electrochemically-assisted deposition, as poor communication was observed between the immobilized cofactor and the poly(methylene green) deposited on the MWCNTs. A system based on MWCNTs wrapped by an osmium(III) polymer was finally tested. This functionalization was expected to give the mediator sufficient flexibility to interact with the sol-gel material electrochemically deposited on its surface.

Deposition of sol-gel film at MWCNTs wrapped by osmium(III) polymer (GCE/MWCNTs-Os)

Carbon nanotubes as osmium immobilization support

The final system considered in this study was carbon nanotubes modified with an osmium polymer. At first, the osmium polymer was simply mixed with the carbon nanotubes in 0.2 wt% chitosan. As displayed in Figure VI-13A, this protocol led to a well-defined CV.

Co-immobilization of DSDH, DI and cofactor at GCE/MWCNTs-Os

The co-encapsulation of the GPS-functionalized cofactor, DSDH and DI on GCE/MWCNT-Os was then evaluated. The protocol for the electrode preparation was rather simple: the cofactor, DSDH and diaphorase were mixed together inside the sol and cast on GCE/MWCNT-Os.

Electrodeposition of sol-gel film at GCE/MWCNTs-Os

Electrocatalytic oxidation of NADH

This approach has been extended to electrochemically-assisted deposition. First, the encapsulation of DI inside the electrodeposited silica gel was investigated.

Co-immobilization of DSDH, DI and cofactor at GCE/MWCNTs-Os

The co-encapsulation of DSDH, DI and cofactor inside the electrodeposited silica gel was finally investigated. Up to now, carbon nanotubes wrapped by the osmium polymer have been the only system successfully combined with sol-gel electrodeposition for the bioencapsulation of dehydrogenase and cofactor.
With MWCNTs-µW and MWCNTs-PMG, the functionalization was confined to the surface of the nanotube, either as part of the material (MWCNTs-µW) or as a thin surface layer (MWCNTs-PMG). The NADH cofactor therefore had to come close to the carbon nanotube surface in order to be regenerated into the enzymatically active NAD⁺. When the cofactor was freely diffusing in solution, regeneration was always successful, and the carbon nanotubes provided a higher surface area, resulting in a higher sensitivity of the bioelectrode to D-sorbitol in comparison with the bare GCE. However, immobilization of the cofactor in the electrochemically deposited sol-gel thin film did not provide sufficient mobility to the cofactor, and the molecule did not interact with the functionalized MWCNTs (MWCNTs-µW or MWCNTs-PMG). The osmium polymer is composed of a polyacrylate backbone with the osmium complexes attached via 5-atom linkers. While the polymer is linked to the MWCNTs by wrapping, the linker gives the mediator sufficient flexibility to interact with the sol-gel material electrochemically deposited on its surface. The same polymer was first introduced directly into the sol-gel matrix, but the electrochemical properties of the polymer were lost during the electrodeposition process (see Chapter V). Immobilization on the carbon nanotube surface prior to sol-gel electrodeposition offered the unique opportunity to conserve the electrochemical activity of the osmium complexes while providing good flexibility for effective interaction with the encapsulated diaphorase that catalyses the NADH oxidation.

The direct electrochemical reduction of NAD(P)⁺ requires high overpotentials and usually leads to enzymatically inactive NAD dimers generated by the one-electron transfer reaction [41,42]. Mediators for the regeneration of NADH should react with NAD⁺ without directly transferring the electrons (or the hydride ion) to the substrate, and the potential window for electrochemical activation of the catalysts is rather narrow (-0.59 to -0.9 V vs. SCE) [43]. (2,2'-Bipyridyl)rhodium complexes are the best systems for the regeneration of NADH. As illustrated in Figure VI-18, their electrocatalytic behaviour is rather complicated, involving first a two-electron electrochemical reduction of Rh(III) (M_ox) into a transient Rh(I) species (M_red1; this reaction itself occurring in several steps [44]), which can be transformed upon protonation into a rhodium hydride complex (M_red2), which is then likely to transfer the hydride to NAD(P)⁺ with formation of only 1,4-NAD(P)H [45,46].

Substituent effects and mediator immobilization attempts

Derivatives functionalized with thiol groups are attractive for the immobilization of reagents in the form of self-assembled monolayers (SAMs) on gold electrodes [47,48], and the amine-functionalized ones are good precursors to form organosilane reagents likely to be grafted onto metal oxides or incorporated within sol-gel matrices subsequently deposited onto electrode surfaces [49,50,51]. We have examined the behaviour of several functionalized mediators of this family bearing various organic groups, which could be used as precursors to immobilize such compounds onto electrode surfaces. Here, we give just one example: compound 11 is a thiol-functionalized Rh complex.

Conclusion

The goal of this study was the implementation of a reagentless device displaying an efficient interaction between the dehydrogenase and cofactor immobilized in the electrogenerated sol-gel matrix and the functionalized multi-walled carbon nanotubes (MWCNTs).
In this work, three different protocols have been developed to functionalize the MWCNTs. The catalytic property of the microwaved MWCNTs was significantly disturbed by the sol-gel material, and this system failed to regenerate the cofactor at 0 V. The modifications of the MWCNTs with electrodeposited poly(methylene green) or with the wrapped osmium(III) polymer were less sensitive to the sol-gel material, and allowed the smooth regeneration of the immobilized cofactor in drop-coated sol-gel films. However, when the reagentless devices obtained by drop-coating were extended to electrochemically-assisted deposition, a direct interaction between the NAD⁺ immobilized in the sol-gel matrix and the functionalized MWCNTs was not possible with MWCNTs-µW and MWCNTs-PMG: electrogenerated sol-gel deposition limited the interaction of NAD⁺ with the mediator confined on the MWCNTs. Only MWCNTs wrapped with the osmium(III) polymer, in the presence of diaphorase, allowed the electrochemical detection of D-sorbitol in a reagentless configuration, probably owing to the high mobility of the osmium complexes immobilized on the MWCNTs.

For the reduction reaction, we studied the immobilization of functionalized mediators of the [Cp*Rh(bpy)Cl]⁺ family. The presence of substituents bearing nucleophilic moieties, such as S- or N-containing groups, on the bipyridine ligand was proven to be harmful to the electrocatalytic properties of the [Cp*Rh(bpy)Cl]⁺ mediator, which rules out most immobilization strategies based on covalent bonding onto electrode surfaces. A way to circumvent this problem is the soft immobilization of the [Cp*Rh(bpy)Cl]⁺ mediator by π-π stacking onto CNTs. However, this strategy did not show good operational behaviour when associated with a dehydrogenase enzyme encapsulated in a silica sol-gel.

Conclusion and outlook

The focus of the research work carried out in this thesis is the development of different strategies allowing the stable immobilization of a dehydrogenase, the cofactor NAD⁺/NADH and an electron mediator in a sol-gel matrix deposited (either by evaporation or by electrogeneration) as a thin film on an electrode surface. This layer is intended to be applied in electro-enzymatic synthesis for the production of fine chemicals. To achieve the objective of the project, we divided the work into three steps: (1) dehydrogenase immobilization, (2) cofactor co-immobilization and (3) mediator co-immobilization. For cofactor immobilization, NAD⁺ was covalently attached to the epoxide group of glycidoxypropylsilane (GPS) before co-condensation of the organoalkoxysilane with tetraethoxysilane in the presence of the proteins (dehydrogenase and diaphorase) and a poly(ethyleneimine) additive (PEI). All the operations were performed under mild conditions compatible with the sol-gel bio-encapsulation process. By comparison with the simple encapsulation of NAD⁺ or NAD-dextran, or the adsorption of NAD⁺ on carbon nanotubes, the strategy with GPS is both cheaper and simpler to implement, and leads by far to more stable sol-gel films and durable bioelectrocatalytic responses. The efficient drop-coating method was then extended to the electrochemically-assisted deposition of sol-gel films with encapsulated enzymes and cofactor on macroporous electrodes. Finally, the last part of this work was devoted to the development of different strategies for mediator immobilization, which could be used for the elaboration of reagentless devices with co-immobilized dehydrogenase and cofactor.
First, a series of successful strategies for the co-immobilization of all components (dehydrogenase, cofactor and electron mediator) in sol-gel films was developed using one-step drop-coating. Here, DSDH was chosen as the model enzyme, NAD⁺ functionalized with GPS was used as the immobilized cofactor, and ferrocene species or osmium polymers were introduced inside the matrix as co-immobilized mediators. The importance of introducing GPS as an additive into the TEOS sol-gel-derived films has been pointed out with respect to stable mediator immobilization. However, such co-immobilization applied to the electrochemically-assisted deposition of sol-gel thin films was not successful, owing to difficulties in keeping the mediators in an active form. To overcome this problem, we developed different strategies for the elaboration of reagentless devices based on the deposition of the sol-gel biocomposite on mediator-functionalized multi-walled carbon nanotubes (MWCNTs). Surface modification of the carbon nanotubes with quinone moieties by microwave treatment, or the electrodeposition of poly(methylene green) on the MWCNTs, resulted in good electrocatalytic detection of the freely diffusing NADH produced by the immobilized dehydrogenase. However, these systems failed to regenerate the cofactor immobilized in the electrogenerated sol-gel film, as the mediators immobilized on the carbon nanotubes did not display enough mobility to react with the NAD⁺ linked to the sol-gel matrix. Finally, the sol-gel thin film with co-immobilized dehydrogenase, diaphorase and cofactor was deposited on MWCNTs wrapped by the osmium polymer. The flexibility of the osmium complexes allowed the smooth regeneration of the immobilized cofactor: all the components are able to communicate inside the silica gel layer for efficient electrocatalytic oxidation of D-sorbitol. The combination of the carbon nanotubes with the osmium(III) polymer was a suitable electrode material for further electrogeneration of sol-gel materials with co-immobilized proteins and cofactor.

The layer developed in this study can be applied in electro-enzymatic synthesis for the manufacture of chiral fine chemicals. Because all reactive agents are immobilized on the electrode surface, it represents an environmentally friendly process, avoiding organic solvents and reducing purification steps to a minimum. This concept meets the standards of green chemistry and leads to processes which come close to zero waste emissions. However, it will still take some time to improve the reaction productivity before electro-enzymatic processes can be applied on an industrial scale. The findings of this work can also be applied to the development of dehydrogenase-based reagentless biosensors. More than 300 kinds of dehydrogenases have been reported. These enzymes catalyze the oxidation of a variety of substrates, including alcohols, aldehydes and glucose, which are of great interest from the analytical point of view because of practical applications in the food industry, the environment and clinical chemistry. This study also offers a facile and versatile approach to the development of other integrated dehydrogenase-based electrochemical devices, such as biofuel cells and biobatteries.

[1] Shacham, R.; Avnir, D.; Mandler, D., Adv. Mater. 1999, 11, 384.
[2] Walcarius, A.; Mandler, D.; Cox, J. A.; Collinson, M. M.; Lev, O., J. Mater. Chem. 2005, 15, 3663.

Figure I-1. Model of an electrochemical reactor for enantiopure synthon preparation.
Since all active compounds (mediator, cofactor, dehydrogenase) are immobilized, only the educt and the product are components of the reaction buffer. The gas-diffusion counter electrode provides clean protons and improves the long-term stability.

Figure I-1 shows the scheme of an electrochemical reactor for enantiopure synthon preparation. Two main electrochemical reactions occur in the electrochemical reactor.

Participant 1 included two groups: Physical Chemistry and Applied Microbiology. The Physical Chemistry group designed an electrochemical multicell with 16 individual cells and an electrochemical reactor with a macroporous working electrode and a proton-conducting membrane/gas-diffusion counter electrode. The Applied Microbiology group provided several dehydrogenases with enhanced stability and activity in the environment of the developed electrode surfaces. Participant 2 (Ecole Nationale Supérieure de Chimie et de Physique de Bordeaux, Molecular Sciences, France) developed macroporous metal electrodes with a high surface area as supports to immobilize mediators and enzymes in functional internal surface layers, using a sol-gel matrix prepared and optimized by partner 3. Participant 3 (CNRS, Laboratory of Physical Chemistry and Microbiology for the Environment, France) developed electrode surface layers for the functional immobilization of enzymes, cofactors and mediators. Participant 4 (University of Copenhagen, Denmark) included two groups: the Biophysical Chemistry group and the Bioinformatics group of the Department of Chemistry. The Biophysical Chemistry group crystallized the dehydrogenases produced by partner 1b and determined their three-dimensional atomic structures by crystallographic methods; the Bioinformatics group used computer modeling to provide a molecular-level description of how the enzyme activities and stabilities can be enhanced. Participant 5 (Middle East Technical University, Turkey) developed mediators for electron transfer to the cofactor NAD⁺ in the described systems. Participant 6 (IEP GmbH, Wiesbaden, Germany) supported the project from an industrial point of view.

• Conditions of neutral to basic pH result in relatively mesoporous xerogels after drying, as rigid clusters a few nanometers across pack to form mesopores. The clusters themselves may be microporous (Figure I-3b).

Figure I-3. Schematic wet and dry gel morphologies and representative transmission electron micrographs. (Adapted from Brinker and Scherer, Sol-Gel Science, chapter 9, figures 2a-2c [19].)

Figure I-4. The principle of electrochemically-assisted generation of a silica film on the electrode surface.

Figure I-6. The structures of the cofactors NAD(P)⁺ and NAD(P)H.

Figure I-7. NAD(P)⁺/NAD(P)H-dependent reactions: (A) direct electrochemical regeneration, (B) indirect electrochemical regeneration, (C) enzyme-coupled electrochemical regeneration [60].

Figure I-8. Structures of some typical mediators.

Figure II-3. Functionalized Rh(III) mediators.

Figure II-4. The three-dimensional structure of the macroporous gold electrode.

The alcohol present in sol A has been considered an obstacle, due to its potential denaturing activity on the entrapped protein. Methanol being less harmful than ethanol, TEOS can be replaced by TMOS (Sols B and C). Alcohol can also be removed from the sol by evaporation (Sol D). Aqueous sol-gel routes (Sols E and F) have been tested in order to avoid any trace of alcohol.
A natural polymer (chitosan) or a polyelectrolyte (PDDA, PEI) was used as additive in Sols G, H, I, J and K, which can also be advantageous in providing a better environment for bioencapsulation. Sols A and B were mainly used during the first series of electrodeposition experiments, while Sols C, D, E and F were widely used in control experiments and tested, when possible, for spin-coating, drop-coating or electrodeposition. Actually, none of them was satisfactory when used as thin biocomposite films on glassy carbon, as no voltammetric signal could be detected in the presence of the enzymatic substrates, independently of the enzyme used (DSDH, GatDH), contrary to what is observed for haemoglobin (which is found electrochemically and electrocatalytically active in all gels).

4.1 Preparation of electrodes for chapter III

Sol G was mixed with 15 µL PDDA solution (20 wt%) and 20 µL of the enzyme solution (10 mg/mL). An aliquot (5 µL) of this resulting sol was deposited onto the surface of the GCE. The solution was then allowed to dry at 4 °C overnight. The prepared electrodes were rinsed thoroughly with water and stored in Tris-HCl buffer solution for 15 min prior to the electrochemical measurement. In attempting to optimize the film composition, electrodes containing various amounts of TEOS and additives (PDDA, PEI, PAA, Nafion, chitosan) were prepared by adjusting the concentration of each component as desired and applying the same protocol as above to form the biocomposite films. Composites made of both silica and chitosan were typically prepared by mixing together 20 µL of the starting Sol G with 20 µL of a chitosan solution (0.5 % in 0.05 M acetic acid solution), 20 µL PDDA solution and 20 µL DSDH suspension. The pure chitosan film was obtained by mixing 20 µL chitosan, 20 µL PDDA and 20 µL DSDH suspension. When PDDA was not introduced, it was replaced by the same volume of water. An aminopropyl-functionalized material was also prepared by introducing aminopropyltriethoxysilane (APTES) into the sol in addition to TEOS (1:1 molar ratio). Some biocomposite materials were also prepared in the form of monoliths for UV-vis monitoring of the enzymatic activity, according to a protocol from the literature [22]. Sol 1 was prepared by mixing 125 µL of the starting Sol C with 125 µL of phosphate buffer (5 mM, pH 8.0) and 25 µL of the DSDH suspension. Sol 2 was prepared as Sol 1, with 25 µL of PDDA solution also added. The gels were allowed to age at 4 °C for 6 days. Before use, the gels were washed three times with 3 mL of 0.1 M Tris-HCl (pH 9.0) to remove weakly adsorbed DSDH.

4.1.2 Electrodeposition

100 µL of DSDH (10 mg/mL) and 100 µL of PDDA were added to 100 µL of the above hydrolyzed Sol H. The mixture was put into the electrochemical cell, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for several tens of seconds. The electrodes were immediately rinsed with water and dried overnight in a fridge at 4 °C. The prepared electrodes were rinsed thoroughly with water and stored in Tris-HCl buffer solution for 15 min prior to the electrochemical measurements. In attempting to optimize the film composition, electrodes containing various amounts of TEOS, PDDA and DSDH were prepared by adjusting the concentration of each component as desired and applying the same protocol as above to form the biocomposite films.
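As a practical aid for reproducing such recipes, the dilution arithmetic implicit in these mixing protocols can be scripted. The helper below is not from the thesis; the component names and the Sol H stock molarity are illustrative placeholders.

```python
# Minimal helper (illustrative, not from the thesis) to compute the final
# component concentrations in a deposition mixture from the mixing volumes,
# e.g. the electrodeposition recipe above: 100 uL DSDH (10 mg/mL)
# + 100 uL PDDA (20 wt%) + 100 uL hydrolyzed Sol H.
def final_concentrations(components):
    """components: list of (name, volume_uL, stock_concentration) tuples.
    Returns each stock concentration diluted into the total mixed volume."""
    total = sum(v for _, v, _ in components)
    return {name: c * v / total for name, v, c in components}

mix = [("DSDH (mg/mL)", 100, 10.0),
       ("PDDA (wt%)", 100, 20.0),
       ("TEOS in Sol H (M)", 100, 0.45)]  # hypothetical stock molarity
print(final_concentrations(mix))  # DSDH is diluted to ~3.3 mg/mL, etc.
```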
4.2 Preparation of electrodes for chapter IV

4.2.1 Co-immobilization of DSDH and NAD⁺ in sol-gel matrix prepared by drop-coating

① Preparation of GCE/TEOS/PEI/(DSDH+DI)/NAD⁺: 20 µL of Sol I was mixed with 10 µL PEI solution (20 wt%, pH 9.0), 10 µL NAD⁺ solution (90 mM), 15 µL DSDH solution (10 mg/mL) and 10 µL DI solution (5 mg/mL). The biocomposite film was then formed by drop-coating, by depositing an aliquot (5 µL) of this resulting sol onto the GCE surface and allowing the solvent to evaporate by drying overnight at 4 °C.

② GCE/TEOS/PDDA/(DSDH+DI)/NAD⁺ and GCE/TEOS/PAA/(DSDH+DI)/NAD⁺ were prepared as above by replacing the PEI solution by PDDA and PAA solutions (20 wt% in water), respectively.

③ Electrodes doped with carbon nanotubes, GCE/TEOS/PEI/SWCNT/(DSDH+DI)/NAD⁺ and GCE/TEOS/PEI/(DSDH+DI)/NAD-SWCNT, were obtained by suspending 1.0 mg SWCNT or NAD-SWCNT in 1.0 mL of Sol I prior to mixing with the PEI, DSDH and DI solutions, and following the same drop-coating procedure afterwards. The NAD-SWCNT sample (carbon nanotubes with adsorbed/non-covalently attached NAD⁺) was previously prepared according to a protocol reported in the literature [23], by mixing 2 mg SWCNT with 50 mg NAD⁺ in 2 mL water under stirring for 48 h and then recovering the solid phase by filtration (0.45 µm, Millipore).

④ GCE/TEOS/PEI/(DSDH+DI)/NAD-dextran and GCE/TEOS/PEI/(DSDH+DI)/NAD-GPS were also prepared as above, except that the 10 µL aliquot of NAD⁺ solution was replaced by, respectively, 10 µL of NAD-dextran solution (62.5 mg/mL in Tris-HCl buffer at pH 7.5) or 10 µL of NAD-GPS solution (typically prepared by mixing together 25 mg NAD⁺ and 37.5 mg GPS in 400 µL Tris-HCl buffer solution (pH 7.5) at 4 °C under shaking for 12 h).

4.2.2 Co-immobilization of DSDH and NAD⁺ in electrodeposited sol-gel thin film

70 µL DSDH (10 mg/mL), 30 µL PDDA, 40 µL PEI, 40 µL DI and 50 µL NAD-GPS composite were added to 70 µL of the above hydrolyzed Sol H. The mixture was put into the electrochemical cell, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water and dried overnight in a fridge at 4 °C. The prepared electrodes were rinsed thoroughly with water and stored in Tris-HCl buffer solution for 15 min prior to the electrochemical measurements. In attempting to optimize the film composition, electrodes containing only PDDA or only PEI were prepared by adjusting the concentration of each component as desired and applying the same protocol as above to form the biocomposite films.

4.3 Preparation of electrodes for chapter V

4.3.1 Fc-PEI and Os-polymer as co-immobilized mediators

20 µL Sol J was mixed with 10 µL PEI solution (10 wt%, pH 9.0), 15 µL of the Fc-PEI or Os-polymer solution, 10 µL DI (5 mg/mL), 15 µL DSDH and 10 µL NAD-GPS. An aliquot (5 µL) of this resulting sol was deposited onto the surface of the GCE. The solution was then allowed to dry at 4 °C overnight. The prepared electrodes were rinsed thoroughly with water and stored in Tris-HCl buffer solution for 15 min prior to the electrochemical measurement. The electrodes prepared in the absence of GPS, or of the enzyme and cofactor, were obtained by adjusting the concentration of each component as desired and applying the same protocol as above to form the biocomposite films. For electrodeposition, 50 µL PEI, 60 µL Fc-PEI or Os-polymer solution, 40 µL DI, 60 µL DSDH and 40 µL NAD-GPS were added to 100 µL of Sol J.
The mixture was put into the electrochemical cell, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water and dried overnight at 4 °C.

4.3.2 Fc-silane as co-immobilized mediator

Drop-coated sol-gel film modified electrodes were prepared with the same protocol as above by mixing 20 µL Sol K, 10 µL PEI solution (10 wt%, pH 9.0), 15 µL water, 10 µL DI (5 mg/mL), 15 µL DSDH and 10 µL NAD-GPS. Electrodeposited sol-gel film modified electrodes were prepared with the same protocol as above by mixing 50 µL PEI, 60 µL water, 40 µL DI, 60 µL DSDH, 40 µL NAD-GPS and 100 µL of Sol K.

4.4 Preparation of electrodes for chapter VI

4.4.1 MWCNTs-µW & sol-gel matrix

GCE/MWCNTs-µW (respectively GCE/MWCNTs) were prepared by casting 5 µL of a suspension of 1.0 mg of microwaved MWCNTs (see Chapter II, section 1.4.1.2) (respectively untreated CNTs) in 1.0 mL of 0.1 wt% chitosan solution on the surface of the GC electrode, followed by drying for 2 h at room temperature. The film electrodes were rinsed repeatedly with water and soaked in a pH 7.40 phosphate buffer solution under stirring to remove any loose material. The electrodes were stored at room temperature when not in use.

① GCE/MWCNTs-µW & TEOS/PEI/DSDH: 20 µL of Sol I was mixed with 20 µL PEI solution (10 wt%, pH 9.0) and 20 µL DSDH solution (10 mg/mL). An aliquot (5 µL) of this resulting sol was deposited onto the surface of the GCE/MWCNTs-µW. The solution was then allowed to dry at 4 °C overnight.

② GCE/MWCNTs-µW & electrodeposited TEOS/PEI/DSDH: 70 µL PEI, 80 µL DSDH and 50 µL water were added to 100 µL of Sol H. The mixture was put into the electrochemical cell with GCE/MWCNTs-µW as the working electrode, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water and dried overnight at 4 °C.

③ GCE/MWCNTs-µW & electrodeposited TEOS/PEI/DSDH/NAD-GPS film modified electrode: 70 µL PEI, 80 µL DSDH and 50 µL NAD-GPS were added to 100 µL of Sol H. The mixture was put into the electrochemical cell with GCE/MWCNTs-µW as the working electrode, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water and dried overnight at 4 °C.
The electrodes are immediately rinsed with water, and dried overnight at 4°C. MWCNTs-Os&sol-gel matrix Chitosan/MWCNTs-Os solution was prepared by mixing in 1:1 volume the resulting MWCNTs-Os solution (see chapter II 1.to 0.2 % chitosan solution (0.2 % chitosan solution in 1 % acetic acid). GCE/MWCNT-Os was prepared by depositing 5 µL Chitosan/MWCNTs-Os solution and allowed to evaporate at the room temperature. ① ① ① ① GCE/MWCNT-Os &TEOS/PEI/DSDH/NAD-GPS/DI 20 µL of Sol I was mixed with 10 µL PEI solution (10 wt %, pH 9.0), 10 µL NAD-GPS solution, 10 µL DI and 15 µL DSDH solution. An aliquot (5 µL) of this resulting sol was deposited onto the surface of the GCE/MWCNT-Os. The solution was then allowed to dry at 4°C overnight. The prepared electrodes were rinsed thoroughly with water and stored in Tris-HCl buffer solution for 15 min prior to the electrochemical measurement. ② ② ② ② GCE/MWCNT-Os &electrodepositedTEOS/PEI/DSDH/NAD-GPS/DI 50 µL NAD-GPS, 40 µL DI (5mg/mL), 70 µL DSDH (10 mg/mL) and 60 µL PEI were added to 80 µL of the above Sol H. The mixture was put into the electrochemical cell with GCE/MWCNT-Os as working electrode where electrochemically-assisted deposition was performed at -1.3 V at room temperature for 60 s. The electrodes were immediately rinsed with water, and dried overnight at 4°C. CNTs/Rh(III)&sol-gel matrix Chitosan/CNTs/Rh solution was prepared by mixing in 1:1 volume the resulting CNTs/Rh suspension (see chapter II 1.to 0.5 % chitosan solution (0.5 % chitosan solution in 1 % acetic acid). The CNTs-Rh modified electrode was prepared by depositing 5 µL Figure II- 5 5 Figure II-5 shows the response of a reversible redox couple during a single potential cycle. It is assumed that only the oxidized form O is present initially. Thus, a negative-going potential scan is chosen for the first half-cycle, starting from a value where no reduction occurs. As the Figure II- 5 5 Figure II-5 Typical cyclic voltammogram for a reversible O + ne-→ R and R → O + ne-Redox process. III. Etude de faisabilité de l'encapsulation d'une déshydrogénase dans une matrice sol-gel Ce chapitre montre les études qui ont été menées pour immobiliser sous une forme active la D-sorbitol déshydrogénase (DSDH) dans une couche mince sol-gel à la surface d'une électrode. Les études ont été menées sur électrodes de carbone vitreux avant d'être appliquées à la modification d'électrodes d'or macroporeuses. Dans un premier temps, l'étude a été menée en utilisant le dépôt par évaporation à partir d'un sol à base seulement de TEOS conduisant à l'encapsulation de la DSDH dans une matrice de silice pure. Ces conditions d'immobilisation ne permettent pas de mesurer une activité catalytique (par oxydation du NADH devant être produit par la protéine pendant l'oxydation du sorbitol). L'influence de l'introduction de différents additifs dans le sol sur l'activité enzymatique a ensuite été étudiée. Il a ainsi été montré que l'introduction de polyélectrolyte positivement chargés au sein de la couche mince sol-gel permettait d'observer une bonne activité de la DSDH vis-à-vis de l'oxydation du D-sorbitol. La bioencapsulation sol-gel a ensuite été obtenue par électrochimie. L'électrodépôt sol-gel est basé sur la modulation électrochimique du pH à la surface de l'électrode qui conduit à une transition sol-gel rapide seulement à proximité de l'électrode et à la formation de la couche mince. 
III. Feasibility study of the encapsulation of a dehydrogenase in a sol-gel matrix

This chapter presents the studies carried out to immobilize D-sorbitol dehydrogenase (DSDH) in an active form within a sol-gel thin film at an electrode surface. The studies were performed on glassy carbon electrodes before being applied to the modification of macroporous gold electrodes. In a first stage, films were deposited by evaporation from a sol based only on TEOS, leading to encapsulation of DSDH in a pure silica matrix. Under these immobilization conditions, no catalytic activity could be measured (via oxidation of the NADH that the protein should produce during sorbitol oxidation). The influence of various additives introduced into the sol on the enzymatic activity was then studied. It was shown that the introduction of positively charged polyelectrolytes within the sol-gel thin film allows a good activity of DSDH towards D-sorbitol oxidation to be observed. Sol-gel bioencapsulation was then achieved electrochemically. Sol-gel electrodeposition is based on the electrochemical modulation of the pH at the electrode surface, which induces a fast sol-gel transition only in the vicinity of the electrode and leads to the formation of the thin film. The role of the polyelectrolytes evidenced previously was confirmed, and the sol composition as well as the electrogeneration parameters (electrolysis time and potential) were optimized. This sol-gel electrogeneration process proved to be perfectly suited to the encapsulation of DSDH and to its co-immobilization with diaphorase (the latter protein catalyzing the regeneration of the enzymatic cofactor in the presence of a redox mediator). Finally, the optimal electrogeneration protocol was applied to the controlled functionalization of macroporous gold electrodes displaying a large electroactive surface area.

The sol-gel film contains encapsulated DSDH and the cofactor is present in solution (typically at a concentration of 1 mM), to which various concentrations of D-sorbitol were added (Figure III-1). DSDH catalyzes the oxidation of D-sorbitol into fructose and, simultaneously, NAD+ is reduced into NADH. The equilibrium of this enzymatic reaction strongly favors the substrate (D-sorbitol) rather than the product (fructose) side because of the very low formal potential of the NAD+/NADH redox couple (-560 mV versus saturated calomel electrode, pH 7.0, 25°C). Here, the electrochemical oxidation of NADH at the electrode surface pushes the reaction to the product side. The reduced form of the cofactor can be detected electrochemically on glassy carbon. In our experimental conditions, a well-defined oxidation peak was observed on a bare glassy carbon electrode (GCE) around +0.7 V (versus Ag/AgCl reference electrode).

Figure III-1. Enzymatic and electrochemical reactions occurring on the gel-enzyme modified electrode.

Figure III-2 reports the electrochemical characterization of the film deposited by drop-coating on GCE. The addition of D-sorbitol into the solution does not lead to any noticeable modification of the current response, and no peak can be observed at the potential of NADH oxidation. Obviously, the enzyme does not exhibit electrochemically detectable activity when encapsulated into the pure silica layer. The actual sol contains a certain amount of alcohol that could be harmful to the enzyme (possible denaturation). Other sol-gel protocols have thus been tested: alcohol evaporation before enzyme encapsulation, and use of other silica precursors (aqueous silicates or TMOS instead of TEOS). In all cases, the biocomposite films deposited on GCE did not reveal any measurable electroactivity. Several hypotheses can be proposed to explain the absence of response: (1) DSDH was not successfully incorporated into the sol-gel matrix; (2) enzyme entrapment was successful but led to a loss of its biological activity; (3) DSDH was successfully entrapped in an active form but a lack of effective electrochemical transduction made the detection impossible.

Figure III-2. Cyclic voltammograms obtained using GCE modified with a silica/DSDH film in the absence (dashed line) and in the presence (plain line) of 12 mM D-sorbitol. The measurements have been performed in 0.1 M Tris-HCl buffer solution (pH 9) containing 1 mM NAD+. Potential scan rate was 50 mV/s.

Figure III-3 shows the variation of the NADH concentration generated in solution, as followed by UV absorbance at 340 nm, versus the contact time between the D-sorbitol solution and the enzyme-entrapped gel.
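The UV assay behind Figure III-3 converts the measured absorbance at 340 nm into NADH concentration through the Beer-Lambert law, A = ε·l·c. A minimal sketch of this conversion, assuming the commonly used molar absorptivity of NADH at 340 nm (ε ≈ 6220 M⁻¹ cm⁻¹) and a 1 cm path length; the absorbance readings below are illustrative, not data from this work:

```python
# Beer-Lambert conversion of A(340 nm) to NADH concentration: c = A / (epsilon * l).
EPSILON_340 = 6220.0  # M^-1 cm^-1, standard molar absorptivity of NADH at 340 nm
PATH_CM = 1.0         # assumed cuvette path length, cm

def nadh_concentration_mM(absorbance):
    """Return the NADH concentration in mM for a given absorbance at 340 nm."""
    return absorbance / (EPSILON_340 * PATH_CM) * 1e3

# Illustrative absorbance readings (not measured data from the thesis):
for a in (0.12, 0.45, 1.10):
    print(f"A340 = {a:.2f} -> [NADH] = {nadh_concentration_mM(a):.3f} mM")
```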
As Figure III-3 shows, DSDH was still active when encapsulated into the sol-gel silica material (curve "b" in Figure III-3), but the response was much slower and much lower than that of the free enzyme in solution (compare with the inset in Figure III-3), as only 20% of the NAD+ had been transformed after 6 hours of reaction. A blank experiment (curve "a" in Figure III-3) confirms that the UV response was indeed due to NADH generation originating from the activity of the encapsulated DSDH.

Figure III-3. Evolution of the NADH concentration in solution (a) in the absence of enzyme-encapsulated silica gel and in the presence of a gel monolith containing (b) only DSDH and (c) both DSDH and PDDA (1.7% w/w). Experiments were performed in 0.1 M pH 9.0 Tris-HCl buffer solution containing 0.36 mM NAD+ and 5 mM D-sorbitol. NADH concentration was determined by UV absorbance at 340 nm. Inset: free enzyme in solution.

Figure III-4A shows the typical response of a bio-composite film deposited on GCE from a sol made of 50 mol% APTES relative to the total precursor content (TEOS+APTES), for increasing D-sorbitol concentrations in the solution. The modified electrode is now sensitive to the addition of the substrate in the 2 to 12 mM concentration range. The signals correspond to the oxidation of NADH produced by the enzyme encapsulated into the film. They are, however, less well-defined and positively shifted (by ca. 100-150 mV) in comparison to NADH oxidation on bare GCE (inset in Figure III-4A). This suggests that the presence of the aminopropyl-functionalized silica layer somewhat hinders the electron transfer for NADH oxidation.

Figure III-4. Influence of additives (A, 50 mol% APTES; B, 5% w/w PDDA) introduced into the synthesis sol on the electrochemical response of DSDH-silica composite films on GCE, as measured by cyclic voltammetry in the presence of increasing concentrations of D-sorbitol. A cyclic voltammogram for solution-phase NADH (curve "a" without NADH and curve "b" with 5 mM NADH) at bare GCE has been added as inset in part A of the figure. Potential scan rate 50 mV/s. Other conditions as in Figure III-2.

For the PDDA-based film (Figure III-4B), the signal also corresponds to the oxidation of NADH produced by the enzyme encapsulated into the film. The measured peak currents (~30 µA for 9 mM D-sorbitol, Figure III-4B) were largely superior to those obtained when using APTES (~4 µA for 10 mM D-sorbitol, Figure III-4A). The interpenetrating PDDA-silica network thus provides a suitable microenvironment for DSDH encapsulation; it may also contribute to generating a more open structure than pure silica, which would accelerate mass transport of the various reagents (substrate and cofactors). The key parameter, however, seems to be the favorable electrostatic interactions originating from the positive charges of the polymer additive, as confirmed by comparing the results obtained for various polyelectrolytes, i.e., three compounds positively charged under the encapsulation conditions (PDDA, PAA and PEI) and one displaying negative charges (Nafion). This is illustrated in Figure III-5, showing the variation of peak currents versus D-sorbitol concentration for the different polyelectrolytes. The encapsulation of alcohol dehydrogenase in a Nafion membrane has been reported [31], but here the introduction of this negatively charged polyelectrolyte into the sol led to an inactive electrode, as no NADH could be detected when increasing the substrate concentration in the solution (curve "a").
In contrast, all polyelectrolytes bearing positive charges gave rise to a good sensitivity of the modified electrode to the enzymatic substrate, the use of PDDA (curve "d") being somewhat more efficient than PEI (curve "b") or PAA (curve "c"). These results support the aforementioned assumption concerning the need for positive charges in the gel to ensure good encapsulation of DSDH in an active form.

Figure III-5. Variation in peak currents for NADH oxidation versus the D-sorbitol concentration measured with films prepared with TEOS as silica precursor and (a) Nafion, (b) PEI, (c) PAA and (d) PDDA as polyelectrolyte (5% w/w). Other conditions as in Figure III-2.

Figure III-6A shows the typical response observed with a chitosan-silica gel composite with encapsulated DSDH. The electrode was first tested by cyclic voltammetry in a Tris-HCl buffer solution containing 1 mM NAD+ (dashed line) and, as expected, no anodic peak was observed. More surprising was the absence of signal upon addition of D-sorbitol from 2 to 8 mM to the medium, suggesting that no NADH was produced by the DSDH enzyme (only a decrease in background currents was observed). It was necessary to introduce PDDA in addition to chitosan in the biocomposite film to observe the production of NADH in the presence of D-sorbitol (Figure III-6B). The measured currents are of the same magnitude as those observed with silica gel/polyelectrolyte composites.

Figure III-6. Cyclic voltammograms obtained at GCE modified with (A) a chitosan/silica/DSDH film and (B) a chitosan/silica/PDDA/DSDH film. Measurements have been performed in the absence (dashed line) and in the presence (plain line) of D-sorbitol, up to 12 mM. Potential scan rate 50 mV/s. Other conditions as in Figure III-2.

Figure III-7. Cyclic voltammograms obtained at GCE modified with (A) a chitosan/DSDH film and (B) a chitosan/PDDA/DSDH film. Measurements have been performed in the absence (dashed line) and in the presence (plain line) of D-sorbitol, up to 10 mM. Potential scan rate 50 mV/s. Other conditions as in Figure III-2.

Figure III-8. Variation of peak currents sampled at DSDH-doped sol-gel modified GCE in the presence of 8 mM D-sorbitol, as a function of (A) the amount of PDDA in the starting sol prepared with 42 mM TEOS and (B) the TEOS concentration initially introduced in the synthesis medium, in the presence of 5% PDDA. Other conditions as in Figure III-2.

The stability of the electrode response was evaluated for three typical cases (PDDA/DSDH, chitosan/PDDA/DSDH, and silica/PDDA/DSDH films on GCE); the variations in the relative peak currents of the modified electrodes in response to 6 mM D-sorbitol in the presence of 1 mM NAD+ are shown in Figure III-9. The relative current values are given versus the peak height measured during the first cycle, about 1 min after the electrode was placed in solution. Let us first consider the short-term operational stability (Figure III-9A).

Figure III-9. Evolution of the relative peak currents recorded for successive analyses of 6 mM D-sorbitol solutions (in the presence of 1 mM NAD+) at distinct periods of time, using various DSDH-doped film electrodes: (A) short-term measurements made with GCE modified with (a) a PDDA/DSDH film, (b) a chitosan/PDDA/DSDH film, and (c) a silica/PDDA/DSDH film; (B) long-term measurements made with GCE modified with (a) a silica/PAA/DSDH film, (b) a silica/PEI/DSDH film, and (c) a silica/PDDA/DSDH film. Other conditions as in Figure III-2.
Figure III-9B shows the evolution of the electrode response over almost one month for three electrodes prepared by combining positively charged polyelectrolytes (PAA, PEI and PDDA) with silica precursors for the enzyme encapsulation. The worst stability was observed with PAA (curve "a"), for which the current response was found to drop by 50% within three weeks. PEI (curve "b") displayed a rather stable electrochemical response, as the measured peak currents did not change significantly during the first 15 days of investigation and then decreased by less than 15% in the second half of the month. Finally, the behavior of the electrode prepared with PDDA (curve "c") resembles that prepared with PEI, while being slightly less stable: its response was maximal after 15 days and decreased somewhat afterwards, suggesting some lack of long-term stability.

3. Electrochemically-assisted deposition of sol-gel biocomposite with co-immobilized dehydrogenase and diaphorase

3.1 Feasibility of the electrochemically-assisted deposition

The electrochemically-assisted deposition of silica thin films involves a local perturbation of the pH at the electrode/solution interface. Starting from a stable, slightly acidic sol, the electrochemically induced pH increase allows fast gelification only at the electrode surface. Electrolysis of the sol at -1.3 V leads to the rapid formation of the thin sol-gel biocomposite layer. All films displayed in Figure III-10 have been prepared using 60 s electrolysis. Figure III-10A shows the electrochemical response of the DSDH-modified electrode to successive additions of D-sorbitol in the solution from 1 to 8 mM. Before addition of this enzymatic substrate, no electrochemical signal could be observed between 0 and 1 V. A well-defined voltammetric signal with a peak potential located between 0.7 and 0.8 V (versus Ag/AgCl reference electrode) appears when D-sorbitol is added to the solution, and this signal increases with the D-sorbitol concentration. It corresponds to the electrochemical oxidation of the NADH cofactor produced by the encapsulated DSDH while oxidizing D-sorbitol. Here the enzymatic cofactor is directly detected at the glassy carbon electrode without using an electron mediator. Figure III-10B confirms the results observed in Figure III-10A, with the same enzyme, but for the reduction of fructose. No electrochemical signal could be observed in the absence of fructose. The addition of fructose from 1 to 8 mM produces well-defined electrochemical responses with a peak potential close to -1.1 V versus the Ag/AgCl reference electrode. This signal corresponds to the reduction of the NAD+ cofactor produced by DSDH in the presence of fructose. The electrode was sensitive to the fructose concentration from 1 to 8 mM. DSDH immobilized in the electrogenerated silica film is thus active on both the oxidation and the reduction side.
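Although the electrode reactions behind the pH jump are not spelled out here, the local alkalinization at the applied negative potential is commonly attributed to cathodic reactions that release hydroxide, which then catalyzes silanol condensation at the electrode surface; schematically (a standard description of electro-assisted sol-gel deposition, not specific to this work):

$$2\,\mathrm{H_2O} + 2\,e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}$$

$$\equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv \;\xrightarrow{\ \mathrm{OH^-}\ }\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{H_2O}$$

Because OH⁻ is generated only at the polarized surface, gelification remains confined to the electrode, which is what makes the film growth spatially controlled.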
Figure III-10. Electrochemical responses to D-sorbitol (A) and to fructose (B) measured at GCE modified with DSDH using sol-gel electrochemically-assisted deposition (E-AD). E-AD was done at -1.3 V for 60 s from a sol containing 0.17 M TEOS, 3.3 mg/mL DSDH and 6.7% PDDA. (A) Responses in the absence of D-sorbitol (dashed line) and in the presence of D-sorbitol from 1 to 8 mM (solid lines); CVs were done in Tris-HCl buffer (pH 9) containing 1 mM NAD+. (B) Responses in the absence of fructose (dashed line) and in the presence of fructose from 1 to 8 mM (solid lines); CVs were done in 0.1 M phosphate buffer (pH 6.5) containing 1 mM NADH. (C) Blank experiments performed in Tris-HCl buffer (pH 9.0) containing 1 mM NAD+ and 6 mM D-sorbitol with GCE modified by a film prepared with the same procedure as (A) but in the absence of the protein (solid line) or of PDDA (dotted line), or prepared with the same sol (with protein and PDDA) but without applying the electrolysis potential for the E-AD (dashed line).

Figure III-10C shows several blank experiments (CV in the presence of 6 mM D-sorbitol) with films prepared in the absence of DSDH (plain line), in the absence of polyelectrolyte (dotted line), or by using the same sol and the same protocol as for the electrode presented in Figure III-10A but without applying the electrolysis step (dashed line). In all three cases, no electrochemical signal was observed. The latter blank experiment (no electrolysis) demonstrates that the electrode response observed in Figure III-10A is only due to the DSDH protein encapsulated in a silica film produced by electrochemistry, and neither to non-specific protein adsorption nor to gel deposition by evaporation that might have occurred in the course of the electrode preparation. Electrode modification is totally controlled by the electrochemically-assisted sol-gel deposition. Moreover, the blank experiment performed in the absence of protein confirms that the signal observed around 0.7-0.8 V indeed originates from the enzymatic production of NADH.

Figure III-11. FTIR spectrum measured on a thin biocomposite film deposited on an indium tin oxide electrode by sol-gel E-AD at -1.3 V for 10 s from a sol containing 0.17 M TEOS, 3.3 mg/mL DSDH and 6.7% PDDA.

Figure III-11 shows the FTIR spectrum obtained on a thin film deposited on an indium tin oxide (ITO) electrode by sol-gel electrochemically-assisted deposition at -1.3 V for 10 s from the same sol as in Figure III-10 (60 s of electrochemically-assisted deposition led to signal saturation, so a shorter electrolysis time was necessary). The background measurement was done on an unmodified ITO electrode. As expected, the different bands related to the silica matrix and to the encapsulated components can be identified.

The composition of the sol, i.e. the DSDH concentration (Figure III-12A), the PDDA concentration (Figure III-12B) and the TEOS concentration (Figure III-12C), has been varied and its influence on the electrochemical response of the resulting film to 6 mM D-sorbitol has been studied. Figure III-12D shows the effect of the electrolysis time on the electrode response for sols containing (a) 3.35, (b) 6.7 and (c) 1.67% PDDA. For high polyelectrolyte concentrations, the optimal deposition time was found to be 60 s (3.35 and 6.7%, curves a & b), while a longer optimum was observed for the lower PDDA content (about 120 s for 1.67%, curve c). It can be assumed that the quantity of encapsulated protein increases with increasing deposition time, but the film thickness probably becomes a significant limitation for too long deposition times.
Figure III-12. Influence of the DSDH concentration (A), the PDDA concentration (B), the TEOS concentration (C) and the deposition time (D) on the peak current response measured at glassy carbon electrodes modified by electrodeposited silica films with encapsulated DSDH. (A) Electrodes prepared with a 0.17 M TEOS sol, 6.7% PDDA and DSDH concentrations from 0.33 to 3.3 mg/mL; E-AD at -1.3 V for 60 s. (B) Electrodes prepared with a 0.17 M TEOS sol, 3.3 mg/mL DSDH and PDDA concentrations from 0 to 6.7%; E-AD at -1.3 V for 60 s. (C) Electrodes prepared with a sol containing 3.3 mg/mL DSDH, 6.7% PDDA and different TEOS concentrations; E-AD at -1.3 V for 60 s. (D) Electrodes prepared with a 0.17 M TEOS sol, 3.3 mg/mL DSDH and PDDA concentrations from 1.67 to 6.7%; E-AD at -1.3 V for deposition times from 10 to 150 s. For all experiments, cyclic voltammetry was performed in 0.1 M Tris-HCl buffer (pH 9.0) containing 1 mM NAD+ and 6 mM D-sorbitol; potential scan rate 50 mV/s.

Figure III-14 reports SEM pictures of films prepared with the optimal sol composition (A) or containing less protein (B) or less PDDA (C). The optimal composition leads to a film with a homogeneous texture (Figure III-14A). Only a few filaments, about 1-2 µm long and probably due to PDDA, can be observed on top of the film. Changing the composition of the sol does not significantly change the texture of the silica gel layer (the surface of all electrodes is covered by a sol-gel film). However, using a lower protein concentration leads to the presence of more filaments on the surface (Figure III-14B). At lower PDDA concentration, these filaments disappear and some aggregates can be observed (Figure III-14C). We suppose that these aggregates are due to proteins that are not protected by the polyelectrolyte during film formation.

Figure III-14. SEM images of different films prepared by sol-gel electrochemically-assisted deposition at -1.3 V for 60 s from a sol containing (A) 0.17 M TEOS, 3.3 mg/mL DSDH and 6.7% PDDA; (B) 0.17 M TEOS, 3.3 mg/mL DSDH and 3.35% PDDA; (C) 0.17 M TEOS, 1.65 mg/mL DSDH and 6.7% PDDA.

The film permeability was probed with ferrocenedimethanol at electrodes coated with films prepared from sols containing 0.08, 0.17 and 0.33 M TEOS. This experiment allows the estimation of the effective kinetics of the heterogeneous electron transfer reaction (k_eff) for the chosen electrochemical probe. This constant can be affected by the film permeability, eventual defects in the layer and the thickness of the electrodeposited films. Here, k_eff decreases regularly from 0.039 to 0.033 cm s⁻¹ when increasing the TEOS concentration in the starting sol from 0.08 to 0.33 M (see Figure III-15A).

Figure III-15. (A) 1/I versus 1/ω^{1/2} measured in the presence of 0.05 mM ferrocenedimethanol with (a, squares) a bare glassy carbon electrode and electrodes modified by a thin film prepared from a sol containing various TEOS concentrations: (b, circles) 0.08 M, (c, triangles) 0.17 M and (d, stars) 0.33 M. The steady-state current was measured by linear sweep voltammetry at a potential scan rate of 20 mV/s. (B) Evolution of the peak current response of glassy carbon electrodes modified by films (b), (c) and (d) in the presence of 1 mM NAD+ and 6 mM D-sorbitol in 0.1 M Tris-HCl buffer (pH 9). All electrodes were prepared by electrochemically-assisted deposition at -1.3 V for 60 s from a sol containing 6.7% PDDA, 3.3 mg/mL DSDH and various concentrations of TEOS (see above). Potential scan rate 50 mV/s.
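Assuming the 1/I versus ω^{-1/2} plots of Figure III-15A follow the standard Koutecky-Levich treatment for a rotating electrode (consistent with the extraction of k_eff in cm s⁻¹), the effective rate constant is obtained from the intercept of

$$\frac{1}{I} = \frac{1}{I_k} + \frac{1}{0.62\, n F A\, D^{2/3}\, \nu^{-1/6}\, C\, \omega^{1/2}}, \qquad I_k = n F A\, k_{\mathrm{eff}}\, C$$

where ω is the rotation rate (rad s⁻¹), D and C the diffusion coefficient and bulk concentration of ferrocenedimethanol, ν the kinematic viscosity of the solution, A the electrode area and F the Faraday constant. The slope is film-independent (pure mass transport), while the intercept reflects the film-limited kinetics, which is why thicker or denser films lower k_eff without changing the slope.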
Figure III-16. Peak current response versus D-sorbitol concentration measured with a glassy carbon electrode modified by a thin silica film with encapsulated DSDH. The film was prepared from a sol containing 0.17 M TEOS, 6.7% PDDA and 3.3 mg/mL DSDH; the electrochemically-assisted deposition was done by applying -1.3 V for 60 s. All cyclic voltammograms were performed at 50 mV/s, in the presence of 1 mM NAD+ and different D-sorbitol concentrations in Tris-HCl buffer (pH 9).

Figure III-17 shows the influence of pH on the electrochemical response for both the reduction of fructose (Figure III-17A) and the oxidation of D-sorbitol (Figure III-17B), from measurements carried out with glassy carbon electrodes covered by the thin silica gel layer with encapsulated DSDH. A sharp increase in the electrode response is observed for fructose reduction around pH 6.5.

Figure III-18. Electrochemical behavior of a glassy carbon electrode modified by a silica thin film with encapsulated diaphorase. (A) Electrode response in the absence (a) and in the presence of 1 mM (b) and 2 mM (c) NADH. (B) Evolution of the electrode response to 0.4 mM NADH at pH 7 (a), 8 (b) and 9 (c). Films have been deposited by electrolysis at -1.3 V for 60 s from a sol containing 6.7% PDDA, 0.17 M TEOS and 0.83 mg/mL diaphorase. Cyclic voltammograms have been performed in 0.1 M Tris-HCl buffer, in the presence of 0.1 mM ferrocenedimethanol (and different NADH concentrations). Potential scan rate was 50 mV/s.

Figure III-19. Electrochemical behavior of glassy carbon electrodes modified by a silica film with co-encapsulated DSDH and diaphorase. (A) Evolution of the electrode response to increasing concentrations of D-sorbitol from 1 to 9 mM; the measurement was done at pH 9. (B) Evolution of the peak current response to 6 mM D-sorbitol from pH 6 to 10. Films have been deposited by electrolysis at -1.3 V for 60 s from a sol containing 6.7% PDDA, 0.17 M TEOS and 0.83 mg/mL diaphorase. The cyclic voltammograms have been performed in 0.1 M Tris-HCl buffer in the presence of 1 mM NAD+ and 0.1 mM ferrocenedimethanol. Potential scan rate was 50 mV/s.

Macroporous gold electrodes were obtained by gold electrodeposition inside an opal network of silica beads. The heights of the different gold macroporous electrodes were 220 nm for one half layer of gold and 660 nm for three half layers (Figure III-20). These electrodes display a much larger electroactive surface area in comparison to the geometric one (Figure III-21B and 21C) and compared to a flat gold electrode (Figure III-21A). The electrolysis of the sol does not occur on gold at the same potential as on glassy carbon: a potential of -1.3 V versus the Ag/AgCl reference electrode was used in the previous optimization on glassy carbon, but only -1.1 V was necessary for macroporous electrodes to trigger the sol-gel deposition.

Figure III-21. Electrochemical behavior of flat and macroporous gold electrodes modified by silica films with co-encapsulated DSDH and diaphorase. (A) Evolution of the flat gold electrode response to increasing concentrations of D-sorbitol from 1 to 7 mM; dashed curves show the electrode response in the absence of D-sorbitol. (B) Evolution of the macroporous gold electrode response to increasing concentrations of D-sorbitol from 0.5 to 2.5 mM; dashed curves show the electrode response in the absence of D-sorbitol. The measurement was done with a macroporous electrode of 660 nm thickness (three half layers) modified using 30 s electrolysis.
(C) Evolution of the peak current response to 0.5 mM D-sorbitol for modified electrodes prepared using different electrolysis times from 10 s to 60 s (curves a to c); the measurements were done with a macroporous electrode of 660 nm thickness (three half layers). (D) Evolution of the peak current response to 0.5 mM D-sorbitol for macroporous electrodes displaying one half layer (a) and three half layers (b); films were obtained using 30 s electrolysis. (A-D) All films were deposited by electrolysis at -1.1 V from a sol containing 6.7% PDDA, 0.17 M TEOS and 0.83 mg/mL diaphorase. The cyclic voltammograms were performed in 0.1 M Tris-HCl buffer at pH 9 in the presence of 1 mM NAD+ and 0.1 mM ferrocenedimethanol. Potential scan rate was 50 mV/s.

Figure III-21A displays the response to D-sorbitol obtained with a flat gold electrode modified by the electrogenerated silica film with encapsulated DSDH and diaphorase. The electrode was sensitive to successive additions of D-sorbitol, as shown by the increase in peak current intensity for the oxidation of ferrocenedimethanol; the addition of 1 mM D-sorbitol led to a 16% peak current increase. By comparison, a much higher response was observed when doing the same experiment with a macroporous electrode (three half layers) modified under the same conditions by the bio-composite layer (Figure III-21B): the peak current intensity increased by 1000% when adding 1 mM D-sorbitol to the solution. The macroporous texture of the gold electrode thus significantly improves the catalytic efficiency of the sol-gel biocomposite.

The electrochemically-assisted deposition of a silica gel layer has thus been successfully adapted to the encapsulation of DSDH. The encapsulated DSDH enzyme can either oxidize D-sorbitol, simultaneously producing NADH inside the silica gel layer, or reduce fructose and generate NAD+. The functional layer has been successfully deposited in macroporous gold electrodes and applied to the oxidation of D-sorbitol. When deposited under the same conditions and in the presence of the same concentration of D-sorbitol, the catalytic current increased much more with a macroporous electrode (1000%) than with a flat gold electrode (16%). The next steps of this work will be the immobilization of the electron mediator and of the enzymatic cofactor inside the silica gel matrix for application in zero-waste electrosynthesis.

Chapter IV. Co-immobilization of a dehydrogenase and of the enzymatic cofactor NAD+ within the sol-gel matrix

The immobilization of the enzymatic cofactor NAD+ within the sol-gel matrix was studied in the presence of DSDH and diaphorase co-encapsulated in that same matrix. In this configuration, the cofactor must retain enough mobility to travel from the enzymatic center of DSDH to the enzymatic center of the diaphorase. An electrochemical mediator, ferrocenedimethanol, then ensures the electronic communication between the diaphorase and the electrode surface. The feasibility of this approach was first investigated by preparing the sol-gel thin film by evaporation of the initial sol.
Various strategies for cofactor immobilization were then studied: (1) simple encapsulation of NAD+ in the sol-gel matrix, alone or in the presence of carbon nanotubes; (2) encapsulation of a high-molecular-weight NAD+ derivative (NAD-dextran); (3) chemical attachment of NAD+ to the silicate matrix via its condensation with the epoxy group of glycidoxypropylsilane during the sol-gel process (this reaction was followed by infrared spectroscopy). The last approach proved to be the most interesting in terms of electrochemical response and stability of the bioelectrocatalytic signal. The same approach was then extended to the electrochemical generation of sol-gel thin films for the co-immobilization of DSDH, diaphorase and the NAD+ cofactor. This scheme is applicable to flat electrodes (glassy carbon or gold) and to macroporous electrodes.

Figure IV-1 describes the reaction pathway used in this work. Oxidation of the enzymatic substrate by the immobilized dehydrogenase induces NAD+ reduction to NADH. Diaphorase then catalyzes the oxidation of NADH back to NAD+. Electron transfer from the diaphorase to the glassy carbon electrode surface is carried out by ferrocene species introduced into the solution. The main technological barrier is the durable immobilization of the cofactor in an active form; the comparison of the various approaches proposed here to overcome this limit will be made with the redox mediator in solution.

Figure IV-1. Illustration of the electrochemical pathway used for the detection of the dehydrogenase enzymatic substrate.
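Written out as coupled reactions (with Fc/Fc⁺ denoting ferrocenedimethanol and its electrogenerated ferricinium form), the pathway of Figure IV-1 reads:

$$\text{D-sorbitol} + \mathrm{NAD^+} \xrightarrow{\ \text{DSDH}\ } \text{fructose} + \mathrm{NADH} + \mathrm{H^+}$$

$$\mathrm{NADH} + 2\,\mathrm{Fc^+} \xrightarrow{\ \text{diaphorase}\ } \mathrm{NAD^+} + 2\,\mathrm{Fc} + \mathrm{H^+}$$

$$\mathrm{Fc} \rightarrow \mathrm{Fc^+} + e^- \quad \text{(at the electrode)}$$

The stoichiometry reflects that NADH delivers two electrons while each ferricinium ion accepts only one; the anodic current therefore reports on the full enzymatic turnover of the substrate.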
Figure IV-2. Cyclic voltammograms obtained with (A) GCE/TEOS/PEI/(DSDH+DI)/NAD+, (B) GCE/TEOS/PDDA/(DSDH+DI)/NAD+ and (C) GCE/TEOS/PAA/(DSDH+DI)/NAD+ in the absence of D-sorbitol (solid lines) and in the presence of D-sorbitol from 2 to 16 mM. All cyclic voltammograms have been performed in Tris-HCl buffer (pH 9) containing 0.1 mM FDM. Potential scan rate was 50 mV/s.

In principle, various polyelectrolytes can be used to preserve the biological activity of the DSDH and DI enzymes, such as PDDA, PAA or PEI, which all gave rise to good catalytic responses when NAD+ was introduced in solution. In the present case, however, the cofactor was part of the sol-gel film and only PEI gave rise to well-defined bioelectrocatalytic responses (Figure IV-2A), whereas no response to D-sorbitol could be observed when using the PDDA or PAA additives (Figure IV-2B & C). The reason for this difference is not clear, because favorable electrostatic interactions are expected in all cases, but the comparison between Figure IV-2A and Figures IV-2B and 2C clearly demonstrates the crucial role of PEI in the final sol-gel biocomposite film deposited on GCE, which is likely to hold the cofactor by adsorption.

Figure IV-3. Evolution of the peak current intensity recorded for successive analyses of 10 mM D-sorbitol solutions at distinct periods of time, using GCE modified with the same sol as in Figure IV-2A, (a) in the absence and (b) in the presence of SWCNTs. All cyclic voltammograms have been performed in Tris-HCl buffer (pH 9) containing 0.1 mM FDM. Potential scan rate was 50 mV/s.

The typical response of GCE/TEOS/PEI/SWCNT/(DSDH+DI)/NAD+ to successive detections of 10 mM D-sorbitol is illustrated in Figure IV-3 (curve b). As shown, the voltammetric signals were slightly larger (by ca. 30%) than in the absence of SWCNTs, with some improved stability at short times (almost constant response for 4 hours) but not at longer times (> 50% loss in sensitivity between 4 and 6 hours of use). The larger signal intensities can be explained by the increase in the electroactive surface area of the bio-electrode resulting from the introduction of SWCNTs, but the fact that this increase remains rather small also suggests poor electrical interconnection, possibly due to the deposition of insulating sol-gel material on the surface of the individual nanotubes during film formation.

In a second step, experiments were performed by first adsorbing the cofactor onto carbon nanotubes (to get NAD-SWCNT) and introducing them afterwards into the biocomposite sol-gel matrix without any additional "free" NAD+. The resulting system (GCE/TEOS/PEI/(DSDH+DI)/NAD-SWCNT) led, however, to a limited electrochemical activity (about 10 times lower electrochemical response, Figure IV-4), probably because the quantity of cofactor introduced by this route was rather low, and the stability of the cofactor immobilization was not significantly improved by comparison with the previous route. So, even if carbon nanotubes provide some improvement, the long-term operational stability was not yet satisfactory.

Figure IV-4. (A) Chronoamperogram recorded at 0.4 V with GCE modified with a TEOS/PEI/(DSDH+DI)/SWCNT-NAD film upon successive additions of D-sorbitol from 0.5 to 14.5 mM; (B) Cyclic voltammograms measured with the same electrode as in (A) in the absence of D-sorbitol and in the presence of 14.5 mM D-sorbitol. Measurements have been performed in Tris-HCl buffer (pH 9) containing 0.1 mM FDM.

Figure IV-5A reports the typical response when increasing concentrations of D-sorbitol were introduced into the solution. The catalytic current increased regularly up to 16 mM, confirming the good behavior of this system. The stability of the electrode response was again evaluated by successive cyclic voltammograms in the presence of 10 mM D-sorbitol (Figure IV-5B).

In the literature, NAD+ has been functionalized with GPS in a one-pot reaction, but no discussion was provided on the question whether, or how, the epoxy ring effectively participates in the immobilization process.

The electrochemical responses of GCE/TEOS/PEI/(DSDH+DI)/NAD-GPS are reported in Figure IV-6A. Note that in this experiment the electron mediator (ferrocenedimethanol) was not immobilized on the electrode surface but simply introduced in solution. Under these conditions a good electrocatalysis could be observed, as shown by the increase in current upon successive additions of D-sorbitol from 0.5 to 15.5 mM. The CV curve in the presence of 15.5 mM D-sorbitol also displays the typical shape of an electrocatalytic response (inset of Figure IV-6A).

Figure IV-6B compares the response to 2 mM D-sorbitol of bioelectrodes prepared with and without GPS (i.e., GCE/TEOS/PEI/(DSDH+DI)/NAD-GPS and GCE/TEOS/PEI/(DSDH+DI)/NAD+, respectively). Both electrodes showed a comparable current response around 6 µA at the beginning of the experiment but, while the amperometric signal of the bioelectrode prepared with GPS kept a rather stable value for ca. 14 hours of continuous reaction (curve a), that prepared without GPS showed very low stability, as the bioelectrocatalytic response decreased dramatically during the first hour of the experiment and totally vanished after 5 hours (curve b).
The current value of ca. 2.5 µA remaining at that time corresponds only to that of the ferrocene (FDM) species in solution (it is even lower than that recorded in a control experiment made on bare GCE, about 3 µA).

Figure IV-7 shows the time evolution of the ATR-FTIR spectra during 18 hours of GPS hydrolysis in Tris-HCl buffer at pH 7.5. This figure also shows the ATR-FTIR spectrum of GPS in water after 1 h 40 min of hydrolysis. The band assignment was made according to the literature [35, 36]. The bands most relevant to the hydrolysis of GPS are discussed below. In the spectrum of pure GPS (Figure IV-8), broad, poorly resolved bands are centered at 1076 cm⁻¹; they are mainly assigned to C-O and Si-O stretching modes of the glycidoxy and methoxy groups.

The functionalization of NAD+ with GPS and its confinement inside the gel at close distance to the proteins (dehydrogenase and diaphorase) allows a good enzymatic activity to be measured with DSDH (Figure IV-6).

Figure IV-11. Chronoamperometry recorded at 0.4 V with GCE/TEOS/PEI/(DSDH+DI)/NADH+GPS (i.e., replacing NAD+ by NADH and following the same protocol). Measurements have been performed for 14 hours under convective conditions in 0.1 M Tris-HCl buffer (pH 9) containing 0.1 mM FDM and 2 mM D-sorbitol.

Figure IV-13 displays the response to D-sorbitol obtained with a GCE modified by the electrogenerated silica film with encapsulated DSDH, diaphorase and GPS-functionalized cofactor (NAD-GPS). Figure IV-14 displays the response to D-sorbitol obtained with a flat gold electrode modified by the electrogenerated silica film with encapsulated DSDH, diaphorase and NAD-GPS; the modified electrode was prepared with both PEI and PDDA as polyelectrolyte additives. Before addition of D-sorbitol, a well-defined electrochemical signal due to ferrocenedimethanol could be observed. The addition of D-sorbitol to the solution from 2 to 10 mM led to a significant increase in the current response. The co-immobilization of DSDH and GPS-NAD on gold electrode surfaces was thus properly achieved by using both PEI and PDDA as polyelectrolyte additives.

Figure IV-15. Cyclic voltammograms obtained using (A) flat and (B) macroporous (660 nm, three half layers) gold electrodes modified by a TEOS/NAD-GPS/PEI/PDDA/(DSDH+DI) film in the absence and presence of D-sorbitol. Films have been deposited by electrolysis at -1.1 V for 30 s from a sol containing 0.15 M TEOS, 14 mM NAD-GPS, 2.3 mg/mL DSDH, 0.76 mg/mL DI, 1% PEI and 0.5% PDDA. Cyclic voltammograms have been performed in 0.1 M Tris-HCl buffer, in the presence of 0.1 mM FDM. Potential scan rate was 50 mV/s.

A successful strategy for dehydrogenase, diaphorase and cofactor co-immobilization in sol-gel films has been developed. It involves the chemical bonding of NAD+ to the epoxide group of glycidoxypropylsilane (GPS) before co-condensation of the organoalkoxysilane with tetraethoxysilane in the presence of the proteins (dehydrogenase and diaphorase) and poly(ethyleneimine) (PEI).

Figure V-1. Cyclic voltammograms recorded with a GCE modified by a drop-coated (A) TEOS/PEI/Fc-PEI film and (B) TEOS/GPS/PEI/Fc-PEI film in 0.1 M Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s; 5 scan cycles.

Figure V-2. (A) Cyclic voltammograms recorded at GCE modified with a TEOS/GPS/Fc-PEI/NAD-GPS/DSDH/DI film in the absence of D-sorbitol and in the presence of 2.8 mM D-sorbitol
(the film was prepared by drop-coating from a sol containing 0.077 M TEOS, 0.0385 M GPS, 14 mM NAD-GPS, 2.3 mg/mL DSDH, 0.76 mg/mL DI and Fc-PEI). (B) Amperometric responses recorded at an applied potential of 0.4 V upon successive additions of 0.2 mM D-sorbitol in stirred solution. (C) Corresponding calibration plot. All measurements were performed in 0.1 M Tris-HCl buffer (pH 9).

Figure V-2A shows the electrochemical response of the film to the addition of D-sorbitol in solution. In the absence of D-sorbitol, only the electrochemical signal of ferrocene is observed. The addition of D-sorbitol induces an increase of the anodic current and a decrease of the cathodic current. This corresponds to the electrochemical oxidation of the NADH cofactor produced by the encapsulated DSDH while oxidizing D-sorbitol. Here the enzymatic cofactor is detected at the diaphorase-modified GCE using ferrocene as electron mediator: NADH reacts with diaphorase and the electrochemically generated ferricinium ions to produce NAD+ and ferrocene species that can be re-oxidized in the electrocatalytic scheme. Figure V-2B shows the amperometric responses of the modified electrode to D-sorbitol in a stirred solution.

Figure V-3. Chronoamperograms recorded at 0.4 V with GCE modified by (a) a TEOS/GPS/Fc-PEI/NAD-GPS/DSDH/DI film and (b) a TEOS/Fc-PEI/NAD-GPS/DSDH/DI film. Measurements have been performed for 14 hours of oxidation under convective conditions in 0.1 M Tris-HCl buffer (pH 9) containing 2 mM D-sorbitol.

Figure V-4 shows typical cyclic voltammograms recorded with a GCE modified by an electrodeposited TEOS/GPS sol-gel film containing Fc-PEI. No electrochemical signal of ferrocene could be observed in this case.

Figure V-5. Electrochemical response of a glassy carbon electrode modified by a TEOS/GPS/PEI/Fc-PEI/NAD-GPS/DSDH/DI film in the absence and in the presence of D-sorbitol from 2 mM to 4 mM. Films have been deposited by electrolysis at -1.3 V for 60 s from a TEOS/GPS sol containing DSDH, DI, NAD-GPS and Fc-PEI. Cyclic voltammograms have been performed at 50 mV/s in 0.1 M Tris-HCl buffer.

Figure V-6. Cyclic voltammograms recorded with a GCE modified by a drop-coated (A) TEOS/Fc-silane/PEI film and (B) TEOS/GPS/Fc-silane/PEI film in 0.1 M Tris-HClO4 buffer (pH 9) at a scan rate of 50 mV/s; 10 scan cycles.

DSDH, DI and NAD-GPS were then introduced into the sol and the mixture was drop-coated on GCE. Figure V-7A shows that this worked quite nicely, with a well-defined reversible electrochemical signal of ferrocene (plain line). The oxidation peak increased in the presence of 7 mM D-sorbitol while the reduction peak disappeared at the same time (dashed line).

Figure V-7. (A) Cyclic voltammograms recorded with a GCE modified by a drop-coated TEOS/GPS/Fc-silane/PEI/DSDH/DI/NAD-GPS film in the absence and in the presence of 7 mM D-sorbitol. Potential scan rate: 50 mV/s. (B) Amperometric responses recorded at an applied potential of 0.4 V upon successive additions of different concentrations of D-sorbitol in 0.1 M Tris-HClO4 buffer (pH 9).

Figure V-8. Cyclic voltammograms recorded with a GCE modified by an electrodeposited TEOS/GPS/PEI/Fc-silane film in 0.1 M Tris-HClO4 buffer (pH 9) at a scan rate of 50 mV/s; 10 scan cycles. Films have been deposited by electrolysis at -1.3 V for 60 s from a TEOS/GPS sol with Fc-silane as co-condensation precursor.
Figure V-9 shows cyclic voltammograms recorded with a GCE modified by an electrodeposited sol-gel film prepared with TEOS, GPS and Fc-silane and containing DSDH, DI and NAD-GPS. In the absence of D-sorbitol, the well-defined reversible electrochemical signal of ferrocene was observed. However, the addition of D-sorbitol to the solution did not induce any increase of the anodic current.

Figure V-10. Electrochemical response of a glassy carbon electrode modified by a TEOS/GPS/Fc-silane/PEI/DSDH/DI film in the absence and in the presence of D-sorbitol from 2 to 4 mM. Films have been deposited by electrolysis at -1.3 V for 60 s. Cyclic voltammograms have been performed at 50 mV/s in 0.1 M Tris-HClO4 buffer containing 1 mM NAD+.

4.1 Co-immobilization in drop-coated sol-gel films

4.1.1 Effect of GPS on Os-polymer immobilization

It is known from the literature that osmium polymers can be immobilized as mediators on electrode surfaces to develop reagentless dehydrogenase biosensors. Riccarda et al. reported a biosensor based on glucose dehydrogenase (GDH) and diaphorase (DI) co-immobilized with NAD+ in a carbon nanotube paste (CNTP) electrode modified with an osmium-functionalized polymer [34]. The carbon nanotubes can be wrapped in the Os-polymer molecules when they are mixed together in the paste, and this combination improves the transfer of electrons between the mediator and the electrode material itself. Olha et al. reported a related Os-polymer approach (the structure of the polymer used here is shown in chapter II, Figure II-2).

Figure V-11. Cyclic voltammograms recorded with a GCE modified by a drop-coated (A) TEOS/PEI/Os-polymer film and (B) TEOS/GPS/PEI/Os-polymer film in 0.1 M Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s; 10 scan cycles.

Figure V-12 shows the typical electrocatalytic response of the TEOS/GPS/PEI/Os-polymer/DI film to NADH. In the absence of NADH, the reversible electrochemical signal of osmium was observed. The addition of 0.3 mM NADH induced a modification of the electrochemical response: the anodic current increased while the cathodic signal disappeared, and the anodic peak current continued to increase with the NADH concentration. Diaphorase kept its good catalytic activity toward NADH inside the sol-gel film, and the immobilized osmium polymer can efficiently shuttle electrons between the electrode and the diaphorase.

Figure V-12. Cyclic voltammograms recorded with a GCE modified by a drop-coated TEOS/GPS/PEI/Os-polymer/DI film in the absence and presence of NADH from 0.3 to 1.2 mM. All cyclic voltammograms have been performed in Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s.

4.2 Co-immobilization in electrodeposited sol-gel thin films

The encapsulation of DSDH, DI, cofactor and Os redox polymer inside the electrodeposited silica gel led to a similar result: no visible electrochemical signal of osmium was observed, neither before nor after the addition of D-sorbitol (Figure V-15).

Figure V-15. Electrochemical response of a glassy carbon electrode modified by a TEOS/GPS/PEI/Os/NAD-GPS/DSDH/DI film in the absence and in the presence of D-sorbitol from 2 mM to 4 mM. Films have been deposited by electrolysis at -1.3 V for 60 s from a TEOS/GPS sol containing DSDH, DI, NAD-GPS and Os-polymer. Cyclic voltammograms have been performed at 50 mV/s in 0.1 M Tris-HCl buffer.

The co-immobilization of all components (dehydrogenase, cofactor and electron mediator) in sol-gel films was achieved by drop-coating.
All the components were able to communicate inside the silica gel layer for efficient electrocatalytic oxidation of D-sorbitol, and the electrode displayed good stability under stirring for more than 14 hours. However, the same co-immobilization applied to the electrochemically-assisted deposition of sol-gel thin films was not successful, revealing a limitation related to the mediator immobilization. An adequate distribution of the mediator in the sol-gel film could be achieved by drop-coating, allowing efficient communication between the electrode and the protein. When the same sol was used to prepare a film by electrodeposition, this property was lost, and neither the electrocatalysis (for Fc-silane) nor even the oxidation/reduction of the mediator (for Fc-PEI and the osmium polymer) was observed.

Chapter VI. Immobilization of the electrochemical mediator on carbon nanotubes and co-immobilization with a dehydrogenase and the NAD+ cofactor

In this chapter, different strategies were developed to elaborate a sol-gel/carbon nanotube composite allowing the co-immobilization of D-sorbitol dehydrogenase (DSDH), the NAD+ cofactor and the mediator (and, where required, diaphorase). A bilayer configuration was used, consisting of first immobilizing the functionalized carbon nanotubes and then depositing the sol-gel thin film by evaporation of the sol or by electrogeneration. Particular attention was paid to the electrochemical co-immobilization of the different elements of the bioelectrocatalytic system, since this objective could not be reached by one-step electrogeneration (chapter V). Single-walled (SWCNT) and multi-walled (MWCNT) carbon nanotubes were functionalized by four different protocols to give them catalytic properties of interest for NADH oxidation (or NAD+ reduction): (1) microwave treatment of the nanotubes (MWCNT-µW), (2) electropolymerization of methylene green (MWCNT-PMG), (3) coating of the nanotubes with a polyacrylate-type polymer bearing osmium(III) complexes (MWCNT-Os), and (4) adsorption of a rhodium(III) complex on the surface of single-walled carbon nanotubes. Glassy carbon electrodes functionalized with these carbon nanotubes display good catalytic properties for the oxidation of NADH (1 and 2), the reduction of NAD+ (4), or the oxidation of NADH in the presence of diaphorase (3). Sol-gel films were then deposited on these carbon nanotube layers by evaporation of the sol or by electrogeneration in order to achieve the co-immobilization of DSDH (and diaphorase when necessary) and of the NAD+ cofactor.

Chapter VI. Mediator immobilization on carbon nanotubes and co-immobilization with dehydrogenase and cofactor

In this chapter, various bilayer strategies based on CNT/sol-gel matrices are developed for the fabrication of reagent-free devices, in an attempt to overcome the problems encountered with mediator immobilization through one-step electrodeposition (chapter V). Carbon nanotubes (CNTs) have been functionalized by four different protocols in order to provide them with catalytic properties for NADH (or NAD+) detection: (1) microwave treatment (MWCNTs-µW), (2) electrochemical deposition of poly(methylene green) (MWCNTs-PMG), (3) wrapping with a polyacrylate polymer holding osmium(III) complexes (MWCNTs-Os), and (4) adsorption of a Rh(III) complex on SWCNTs.
GCE electrodes modified with these functionalized CNTs show good electrochemical properties and allow the direct electrocatalytic detection of NADH (1 and 2) or NAD+ (4), or the detection of NADH in the presence of diaphorase (3). In these configurations, a sol-gel thin film has been further deposited on the carbon nanotube layer by drop-coating or by electrochemically-assisted deposition for encapsulation of D-sorbitol dehydrogenase (and diaphorase when necessary). The catalytic properties of the functionalized carbon nanotubes have first been checked before covering them with an additional drop-coated sol-gel layer and before their use as support for electrodeposited sol-gel films. The electrochemical response of the biocomposite containing the immobilized protein has been compared when NAD+ was simply introduced in the solution and when it was chemically attached to the sol-gel matrix (i.e., a reagentless device). The study has been developed with the enzyme D-sorbitol dehydrogenase, and D-sorbitol was used as a model analyte.

Figure VI-1A shows the cyclic voltammograms recorded at a glassy carbon electrode modified with the original (i.e., not treated) MWCNTs. Before addition of NADH, no electrochemical signal could be observed between -0.4 and +0.6 V. A well-defined voltammetric signal with a peak potential located at ~+0.4 V (versus Ag/AgCl reference electrode) appears when NADH is added to the solution. It corresponds to the electrochemical oxidation of the NADH cofactor at the MWCNT-modified electrode. Figure VI-1B shows the cyclic voltammograms recorded at the glassy carbon electrode modified with acid-microwaved MWCNTs. A well-defined voltammetric signal with a peak potential located at ~0 V appears when NADH is added to the solution, and this signal increases with the NADH concentration. Microwaving the MWCNTs in acidic solution for 20 min resulted in a dramatic shift of the oxidative peak potential of NADH (E_NADH) to a lower value, from ~+0.4 V to ~0 V (in agreement with previous observations [24]). The shift in E_NADH can be attributed to the redox mediation of NADH oxidation by surface quinones (Q) formed during the microwave treatment.

Figure VI-1. Cyclic voltammograms recorded at (A) GCE/MWCNTs and (B) GCE/MWCNTs-µW in deoxygenated solutions with and without NADH. Measurements have been performed in 0.1 M Tris-HCl buffer (pH 9). Scan rate: 5 mV/s.

Figure VI-2. Electrochemical response of (A) GCE/chitosan/TEOS/MWCNTs-µW/PEI/DSDH and (B) GCE/MWCNTs-µW&TEOS/PEI/DSDH in the absence and in the presence of D-sorbitol from 2 to 8 mM. Cyclic voltammograms have been performed at 5 mV/s in deoxygenated 0.1 M Tris-HCl buffer containing 1 mM NAD+.

Figure VI-3B shows the amperometric responses at different applied potentials. No electrocatalytic response was obtained upon addition of D-sorbitol in stirred solution at +0.4 V.

Figure VI-3. Scheme of (A) & (B); scheme of (C) & (D).
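The quinone mediation invoked above for the microwaved nanotubes (Figure VI-1B) can be written as a classical EC' scheme, in which the surface-bound quinone/hydroquinone couple shuttles the two electrons of NADH to the electrode at a much lower potential (a standard description consistent with ref. [24]):

$$\mathrm{NADH} + \mathrm{Q} + \mathrm{H^+} \rightarrow \mathrm{NAD^+} + \mathrm{QH_2}$$

$$\mathrm{QH_2} \rightarrow \mathrm{Q} + 2\,\mathrm{H^+} + 2\,e^- \quad \text{(at the electrode, near 0 V)}$$

The measured current thus reflects the electrochemical recycling of the quinone rather than the direct, strongly hindered oxidation of NADH at +0.4 V.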
Figure VI-4. Electrochemical response of a GCE/MWCNTs-µW&TEOS/PEI/DSDH/NAD-GPS film. (A) Cyclic voltammograms in the absence and in the presence of D-sorbitol from 2 to 4 mM; potential scan rate: 5 mV/s. (B) Amperometric responses recorded at an applied potential of 0.4 V upon successive additions of D-sorbitol in the solution.

Figure VI-5A shows the electropolymerization process of MG on the MWCNT-modified GC electrode. The peak current increases with the number of cycles, which means that more and more MG is deposited on the surface of the electrode. After a few cycles the deposition gradually reaches equilibrium. Figure VI-5B shows a comparative amperometric response of PMG-modified GCE in the absence and in the presence of MWCNTs upon successive additions of 0.2 mM NADH in 0.1 M pH 9.0 Tris-HCl buffer at an applied potential of +0.2 V. A catalytic response is obtained in the absence of MWCNTs, but the current intensities are very low. A significant improvement in the current response is obtained when MG is electrodeposited on MWCNT-modified GCE: an increasing electrocatalytic response is observed upon addition of NADH. Combined with carbon nanotubes, the PMG film thus provides a stable and significantly improved electrochemical response for the mediated oxidation of NADH.

3.2 Drop-coating of sol-gel films at GCE/MWCNTs-PMG

3.2.1 Encapsulation of DSDH in a sol-gel film drop-coated onto GCE/MWCNTs-PMG

This efficient system (GCE/MWCNTs-PMG) was first evaluated in combination with D-sorbitol dehydrogenase. Figure VI-6 shows the amperometric responses of different films to D-sorbitol in Tris-HCl buffer containing 1 mM NAD+ at an applied potential of 0.2 V. The background current was allowed to decay to a steady value before the addition of D-sorbitol. Clear current responses were observed upon addition of D-sorbitol at GCE/MWCNTs-PMG&TEOS/PEI/DSDH, and this signal increased with the D-sorbitol concentration (curve a). It corresponds to the electrochemical oxidation of the NADH cofactor produced by the encapsulated DSDH while oxidizing D-sorbitol. Curves b and c show the responses obtained with films prepared in the absence of MWCNTs and in the absence of PMG, respectively. In both cases, only small electrochemical signals were observed. Only the system including both MWCNTs and PMG displays a good sensitivity to D-sorbitol. This effect can be ascribed to the increased electroactive surface area provided by the introduction of MWCNTs for PMG deposition.

Figure VI-6. Comparative view of the amperometric responses obtained using (a) GCE/MWCNTs-PMG&TEOS/PEI/DSDH, (b) GCE/PMG&TEOS/PEI/DSDH and (c) GCE/MWCNTs&TEOS/PEI/DSDH upon successive additions of 0.5, 0.5, 1, 1, 2, 2, 3, 3 mM D-sorbitol (E_appl = +0.2 V vs. Ag/AgCl). Measurements have been performed in 0.1 M Tris-HCl buffer (pH 9) containing 1 mM NAD+.

Figure VI-7. (A) Cyclic voltammograms obtained using GCE/MWCNTs-PMG&TEOS/PEI/DSDH in the absence of D-sorbitol (solid line) and in the presence of D-sorbitol from 2 to 6 mM (dashed lines); scan rate: 50 mV/s. (B) Comparative view of the amperometric responses at different applied potentials upon successive additions of 0.5, 0.5, 1, 1, 2, 2, 3, 3 mM D-sorbitol. All measurements have been performed in 0.1 M Tris-HCl buffer (pH 9) containing 1 mM NAD+.

Figure VI-8 shows the response of GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD+ to D-sorbitol. The current increased regularly with the D-sorbitol concentration during the first measurement. However, a second measurement made after exchanging the electrolyte medium led to the complete vanishing of the response. A possible reason is that NAD+ is a small molecule, which easily diffuses away from the electrode surface into the solution.
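The stability figures quoted in this chapter (for example, more than 75% of the response retained after 14 h of continuous operation with the GPS-attached cofactor, Figure VI-9 below) can be put on a common scale by assuming first-order decay of the response and converting the fractional retention into a rate constant and half-life. A small illustrative calculation; the kinetic model is an assumption for comparison purposes, not the analysis performed in the thesis:

```python
import math

def first_order_stats(retention, hours):
    """Assume I(t) = I0 * exp(-k*t); return (k in 1/h, half-life in h)."""
    k = -math.log(retention) / hours
    return k, math.log(2) / k

# >75% of the signal retained after 14 h (GPS-attached NAD, Figure VI-9, curve a):
k, t_half = first_order_stats(0.75, 14.0)
print(f"k = {k:.4f} per hour, half-life = {t_half:.1f} h")
# -> k ~ 0.021 h^-1, half-life ~ 34 h under this assumed model.
```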
Figure VI-8. Amperometric response recorded at +0.2 V with GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD+ in 0.1 M Tris-HCl buffer (pH 9); 0.5, 0.5, 1, 1, 2 and 2 mM D-sorbitol were added after 2 min of current recording.

Figure VI-9 compares the responses to 1 mM D-sorbitol of bioelectrodes obtained with (curve a) and without GPS (curve b) in the starting sol. Both electrodes showed a comparable response around 3 µA at the beginning of the experiment, but the electrode prepared with GPS displayed a good operational stability: more than 75% of the current response was kept after a continuous 14-h experiment in a stirred solution (curve a). In contrast, the bioelectrode prepared without GPS showed very low stability, and the electrode response decreased dramatically during the first hours of the experiment (curve b). Most of the catalytic activity was lost during the first 6 hours, as the electrode response almost reached zero current. The cofactor immobilized in the sol-gel matrix can thus be regenerated by the PMG deposited on the MWCNTs and, as observed previously, the functionalization of NAD+ with GPS greatly enhances the stability of the electrochemical response of the reagentless device.

Figure VI-9. Chronoamperograms recorded at 0.2 V with (a) GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD-GPS and (b) GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD+. Measurements have been performed for 14 hours of oxidation under convective conditions in 0.1 M Tris-HCl buffer (pH 9) containing 1 mM D-sorbitol.

Figure VI-10A shows the electrochemical response obtained with GCE/MWCNTs-PMG covered with an additional electrodeposited sol-gel layer with encapsulated DSDH, deposited by electrolysis at -1.3 V for 60 s from a sol containing DSDH (GCE/MWCNTs-PMG&TEOS/PEI/DSDH); here the cofactor was not attached to the silica matrix. The addition of D-sorbitol induced a modification of the oxidation peak current at 0.1-0.2 V, and this signal increased with the D-sorbitol concentration from 2 to 6 mM. Figure VI-10B shows the amperometric responses at an applied potential of +0.2 V. An increasing and stable electrocatalytic response was obtained upon addition of D-sorbitol to the stirred solution. The electrochemically-assisted deposition of silica thin films is thus a good strategy to immobilize DSDH in an active form on GCE/MWCNTs-PMG.

Figure VI-11. Electrochemical response of GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD-GPS. (A) Cyclic voltammograms in the absence and in the presence of 8 mM D-sorbitol; potential scan rate: 50 mV/s. (B) Amperometric response at an applied potential of +0.2 V upon successive additions of 2 mM D-sorbitol in the solution. All measurements have been performed in 0.1 M Tris-HCl buffer (pH 9).

Figure VI-12. Electrochemical response of GCE/MWCNTs-PMG&TEOS/PEI/DSDH/NAD-GPS. (A) Amperometric response at an applied potential of 0.2 V upon successive additions of 2 mM D-sorbitol and 0.2 mM NADH in the solution. (B) Amperometric response at an applied potential of 0.2 V upon additions of 1 mM D-sorbitol and 1 mM NAD+ in the solution. All measurements have been performed in 0.1 M Tris-HCl buffer (pH 9).

However, this electrochemical signal was not stable with time and decreased when multiple potential scans were performed. Simply mixing the MWCNT and Os-polymer solutions in the presence of chitosan thus did not result in a stable immobilization of the osmium polymer in the MWCNT layer.
A second protocol for immobilization of this polymer on carbon nanotubes was therefore tested: MWCNTs were first sonicated for 1 h and then incubated in the osmium-polymer solution for 12 h. Under these conditions, the positively-charged polymer can wrap around the sidewall surface of the MWCNTs (MWCNT-Os).

Figure VI-13. Cyclic voltammograms recorded with (A) GCE/MWCNTs/Os and (B) GCE/MWCNTs-Os; 10 successive scans in 0.1 M Tris-HCl buffer (pH 9), potential scan rate: 50 mV/s.

Figure VI-14. Amperometric response obtained at GCE/MWCNT-Os&TEOS/PEI/DI to successive additions of 0.2 mM NADH (Eappl = +0.3 V vs. Ag/AgCl). Measurements have been performed in 0.1 M Tris-HCl buffer (pH 9).

Figure VI-15. (A) Cyclic voltammograms recorded with GCE/MWCNT-Os&TEOS/PEI/DI/NAD-GPS in the absence and in the presence of 3 mM D-sorbitol; potential scan rate: 50 mV/s. (B) Amperometric responses recorded at an applied potential of 0.3 V to successive additions of 0.5 mM D-sorbitol in 0.1 M Tris-HCl buffer (pH 9).

Figure VI-16 shows the amperometric response at an applied potential of 0.3 V. An increasing and stable electrocatalytic response was obtained upon addition of NADH in stirred solution. The cyclic voltammograms recorded before addition of NADH (inset of Figure VI-16) show that the electrochemical signal from the osmium complexes is significantly decreased in the presence of the sol-gel material. Nevertheless, the addition of NADH leads to a significant increase in the current intensity. The shape of the catalytic signal was poorly resolved and could be a mix between the detection of NADH through the osmium complex and diaphorase and the simultaneous detection of NADH at the carbon nanotube surface, as reported in Figure VI-1A.

Figure VI-16. Amperometric response obtained at GCE/MWCNT-Os&TEOS/PEI/DI to successive additions of 0.2 mM NADH (Eappl = +0.3 V vs. Ag/AgCl). Inset: cyclic voltammograms recorded before and after addition of 1.4 mM NADH. Films have been deposited by electrolysis at -1.3 V for 60 s. Measurements have been performed in 0.1 M Tris-HCl buffer (pH 9).

Figure VI-17. (A) Cyclic voltammograms recorded with GCE/MWCNT-Os&TEOS/PEI/DI/NAD-GPS in the absence and in the presence of D-sorbitol from 2 to 8 mM; potential scan rate: 50 mV/s. (B) Amperometric responses recorded at an applied potential of 0.3 V to successive additions of 0.4 mM D-sorbitol in stirred solution. Inset: the corresponding calibration plot. (C) Amperometric response obtained in the same conditions; measurements have been performed for 14 hours under convective conditions in 0.1 M Tris-HCl buffer (pH 9) containing 2 mM D-sorbitol.

5. SWCNTs for [Cp*Rh(bpy)Cl]+ immobilization: towards a device operating in reduction

5.1 [Cp*Rh(bpy)]2+ as a suitable mediator for NADH regeneration

Figure VI-18. The mechanism of the [Cp*Rh(bpy)]2+ electrocatalytic process.

Figure VI-19. Cyclic voltammograms recorded with compound 11 immobilized as a self-assembled monolayer on a gold electrode: (A) effect of the potential scan rate; (B) effect of multiple sweeps; (C) electrode response in the absence and presence of increasing NAD+ concentrations. Potential scan rate: 100 mV/s.

Figure VI-20. [Cp*Rh(bpy)]+-type mediators functionalized with siloxy functions.

Figure VI-21. MWCNTs.
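Calibration plots such as the inset of Figure VI-17B are commonly reduced to a sensitivity, i.e. the slope of the linear region normalized by the electrode surface area, which is also the unit (mA M-1 cm-2) used in Table I-1 below. The following is a minimal sketch, assuming hypothetical current readings and a 3 mm diameter glassy carbon disk:

```python
import numpy as np

# Hypothetical steady-state currents (µA) after successive 0.4 mM
# D-sorbitol additions, mimicking a calibration such as Figure VI-17B.
conc_mM = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
i_uA    = np.array([0.55, 1.08, 1.62, 2.14, 2.60])

# Raw sensitivity = slope of the linear least-squares fit of I vs. C (µA/mM).
slope, intercept = np.polyfit(conc_mM, i_uA, 1)

# Normalize by the geometric area of a 3 mm diameter disk (an assumption).
area_cm2 = np.pi * (0.3 / 2) ** 2        # about 0.071 cm2
# 1 µA/mM is numerically equal to 1 mA/M, so no further conversion is needed.
print(f"sensitivity = {slope / area_cm2:.1f} mA M-1 cm-2")
```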
Three strategies for functionalizing carbon nanotubes with mediators for NADH oxidation have thus been developed. They include (1) the simple microwave treatment of MWCNTs, (2) the electrochemical deposition of poly(methylene green) on MWCNTs, and (3) the wrapping of MWCNTs with an osmium(III) polymer. All the developed strategies show significantly decreased overpotentials for NADH oxidation. A sol-gel thin film has been further deposited on the functionalized carbon nanotubes by drop-coating for encapsulation of D-sorbitol dehydrogenase (and diaphorase when necessary) and co-immobilization of the NAD+ cofactor.

Conclusion and outlook

The goal of the work carried out in this thesis was the immobilization, in a stable and active form, of dehydrogenase-type proteins, of the NAD+/NADH cofactor and of an electrochemical mediator within a sol-gel matrix deposited as a thin film on an electrode surface by sol evaporation or by electrogeneration. The electrode modified in this way is then to be used for enzymatic electrosynthesis. To reach the objectives set at the beginning of the project, this work was divided into three main steps: (1) immobilization of dehydrogenase-type proteins in an active form within the sol-gel film, (2) immobilization of the cofactor, and (3) immobilization of the electrochemical mediator and its co-immobilization with the dehydrogenase and the cofactor.

The requirements identified for enzymatic electrosynthesis are the stable immobilization of a large amount of active dehydrogenase on the electrode surface of the reactor. Feasibility studies on dehydrogenase encapsulation within the sol-gel matrix were carried out using D-sorbitol dehydrogenase (DSDH) as a model enzyme and depositing the thin film by sol evaporation. It quickly became apparent that DSDH is very sensitive to the sol-gel environment and that its encapsulation in a pure silica matrix leads to a complete loss of electrocatalytic activity. The interest of additives in the starting sol was therefore evaluated, and it was shown that the addition of positively-charged polyelectrolytes strongly increases the catalytic activity of the encapsulated protein: the positive charges of the polymer (for example poly(dimethyldiallylammonium chloride), PDDA) interact favorably with the negatively-charged protein during the encapsulation process. This protocol was then adapted and optimized for the electrochemically-assisted deposition of the sol-gel thin film. It was also shown that diaphorase can be co-encapsulated with DSDH to allow regeneration of the NADH cofactor in the presence of the ferrocenedimethanol mediator. This electrochemical sol-gel bioencapsulation procedure was then applied to the modification of macroporous electrodes, and the larger electroactive surface area provided by this macroporosity was shown to increase significantly the bioelectrocatalytic response towards D-sorbitol oxidation.

One of the biggest challenges of this work concerned the immobilization of the NAD+/NADH cofactor while preserving its activity for as long as possible. An efficient strategy to meet this objective was developed in the second part of this work. It relies on the chemical reaction between NAD+ and the epoxy group of glycidoxypropylsilane (GPS) prior to its co-condensation with tetraethoxysilane (TEOS) in the presence of the proteins (dehydrogenase and diaphorase) and of poly(ethyleneimine) (PEI).
All these operations are performed under mild conditions compatible with sol-gel bioencapsulation. Compared with the simple encapsulation of NAD+ (with or without carbon nanotubes) or the encapsulation of NAD-dextran, the procedure using GPS is less expensive and simpler to implement. Moreover, this protocol leads to a very stable catalytic activity, for more than 12 h in solution under convection. This method, developed for film deposition by sol evaporation, was finally adapted to electrogeneration on macroporous gold electrodes. In all these studies, the electrochemical mediator, ferrocenedimethanol, was present in solution.

The last part of this work was therefore devoted to the immobilization of the electrochemical mediator on the electrode surface, in such a way that it can communicate efficiently with the electrode and with diaphorase, or directly with the cofactor, so as to achieve the electrochemical regeneration of the cofactor. Different strategies were studied to obtain the final system containing all the elements of this bioelectrocatalytic chain co-immobilized in the sol-gel matrix. These studies were first carried out by depositing the sol-gel film by sol evaporation. As in the previous studies, DSDH was used as the model protein, the cofactor was immobilized by coupling with GPS, and the mediator was introduced into the sol-gel matrix either by means of polymers bearing ferrocene or osmium species, or by means of a ferrocene functionalized with a silane group that can be co-condensed with the other silanes of the sol (TEOS and GPS). It was first shown that GPS strongly increases the stability of the mediator immobilization, in addition to its role in cofactor immobilization; this is due to the greater mechanical stability of films prepared with GPS. The co-immobilization of these mediators with the cofactor and the proteins (DSDH and diaphorase) thus made it possible to obtain, for the first time in a sol-gel matrix, a stable catalytic activity. However, the attempts to transpose this result, obtained by sol evaporation, to electrogeneration failed: the distribution of the electrochemical mediator in the electrogenerated sol-gel matrices does not seem to be sufficiently homogeneous, or its mobility sufficient, to allow electron transport and an interaction with the electrode or with diaphorase.

To solve the problem posed by the absence of catalytic activity with the films prepared by electrogeneration, we developed a different strategy, based on the deposition of the sol-gel thin film on the surface of carbon nanotubes functionalized with different mediators. This functionalization of the carbon nanotubes was achieved by forming quinone functions through a microwave treatment, by electropolymerization of methylene green, and by coating the nanotubes with an acrylate-type polymer bearing osmium(III) complexes. Only the latter system made it possible to observe, in the presence of diaphorase, a regeneration of the cofactor immobilized in the sol-gel matrix obtained by electrogeneration. The flexibility of the osmium polymer and the intermediacy of diaphorase then allow the gentle regeneration of the enzymatic cofactor.
All the immobilized components communicate efficiently inside the silica gel to allow the electrocatalytic oxidation of D-sorbitol. Mediator-functionalized carbon nanotubes thus constitute a good electrode substrate for sol-gel electrogeneration and the co-immobilization of the cofactor and the proteins.

The sol-gel thin film developed during this thesis can be applied to the preparation of chiral compounds by enzymatic electrosynthesis. Since all the elements taking part in this synthesis are immobilized on the electrode, the approach is attractive from both an economic and an environmental point of view, avoiding the use of solvent and reducing the purification steps to a minimum. Such a concept meets the standards of green chemistry, with a process generating very little waste. However, the efficiency of the process still needs to be improved before application on an industrial scale, notably through the deposition of these sol-gel thin films on large macroporous electrodes and their integration into the reactor. The findings of this thesis can also be used for the development of dehydrogenase-based biosensors that do not require the addition of any extra reagent to the medium to be analyzed. Indeed, more than 300 kinds of dehydrogenases catalyze the oxidation of a wide variety of substrates (ethanol, glucose, lactate, etc.) of great analytical interest, owing to their presence or use in the food industry and the environment, or for monitoring certain pathologies. The results of this study are also of great interest for the future development of dehydrogenase-based electrochemical systems, such as biofuel cells and biobatteries.

This work was divided into three main steps: (1) dehydrogenase immobilization in an active form, (2) cofactor immobilization and (3) mediator immobilization and co-immobilization with dehydrogenase and cofactor. The first requirement for electrosynthesis is a stable immobilization of a large amount of active dehydrogenase on the electrode surface of the reactor. Thus, the feasibility of dehydrogenase encapsulation in a sol-gel film was first evaluated using D-sorbitol dehydrogenase (DSDH) as a model enzyme by drop-coating. DSDH is, however, sensitive to the sol composition during encapsulation in a silica gel: direct encapsulation of DSDH in a pure silica gel leads to total deactivation of the protein. We have thus evaluated the interest of various polyelectrolytes in combination with sol-gel deposition of silica films with encapsulated DSDH, choosing positively-charged polyelectrolytes because of the expected favorable interactions with the negatively-charged enzyme surface. Electrochemically-assisted deposition of the silica gel layer was then successfully adapted to the encapsulation of DSDH, as well as to the co-encapsulation of DSDH and diaphorase, in the presence of poly(dimethyldiallylammonium chloride) (PDDA). The process and the sol composition were optimized on flat glassy carbon electrodes before being applied to gold macroporous electrodes. In the end, the electrochemically-assisted deposition of the sol-gel bio-composite was extended to macroporous electrodes displaying a much bigger electroactive surface area; the macroporous texture of the gold electrode thus significantly improves the catalytic efficiency of the sol-gel biocomposite.
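The area gain brought by the macroporous texture can be checked independently of the biocatalysis, for instance from the charge under the gold-oxide reduction peak of a cyclic voltammogram in acidic medium. The sketch below is only a back-of-the-envelope illustration with hypothetical values; it assumes the commonly used specific charge of about 390 µC per cm2 of real polycrystalline gold surface.

```python
# Roughness factor of a macroporous gold electrode from the charge of
# the gold-oxide reduction peak (all numbers here are hypothetical).
Q_SPECIFIC = 390e-6      # C per cm2 of real gold surface (literature value)

q_peak   = 2.1e-4        # C, integrated oxide-reduction charge
area_geo = 0.07          # cm2, geometric footprint of the electrode

area_real = q_peak / Q_SPECIFIC          # about 0.54 cm2
print(f"electroactive area = {area_real:.2f} cm2, "
      f"roughness factor = {area_real / area_geo:.1f}")
```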
The bioelectrocatalytic response looks promising for electro-enzymatic applications. The durable attachment of the NAD+/NADH cofactor with long-term activity was the biggest challenge of the project. A successful strategy for cofactor immobilization in sol-gel thin films has been developed in the second part of our work. It involves the chemical bonding of NAD+ to the epoxy group of glycidoxypropylsilane (GPS) prior to co-condensation with tetraethoxysilane (TEOS).

Figure I-5. The process of protein encapsulation in a thin sol-gel film through electrodeposition: (1) electrodeposition procedure (electrochemically-induced hydrolysis and condensation of Si(OEt)4 around the protein at the glassy carbon surface), (2) washing and drying, (3) thin sol-gel/protein film.

Table I-1. Comparison of amperometric reagentless biosensors based on dehydrogenase/cofactor systems

| Sensor assembly (a) | Sensitivity (mA M-1 cm-2) | Stability (b) | Linear range (mM) | Ref |
| GC/CNT/GDI/CHIT/Nafion (GDH) | 1.8 | (24 h, 100%)/(1.5 months, 64%) | 0.02-2.0 | 49 |
| GC/TFC/Ca2+/PL (GDH) | 0.67 | ? | ? | 128 |
| GC/MB/Nafion (GDH) | 0.49 | ? | ? | 127 |
| CP/PS-TBO (GDH) | 4.0 | ? | 0.1-5.0 | 129 |
| CP/PMA (GDH) | 0.014 | ?/(4 months, ?) | 5-36 | 130 |
| CP/PMA/Nafion (GDH) | 0.002 | ? | 10-330 | 130 |
| CP/osphendione (GDH) | ? | (8 h, 92%)/(1 month, 92%) | ? | 131 |
| CP/Ru complex (GDH) | ? | ?/(7 days, 60%) | ? | 132 |
| CP/MB (GDH) | ? | (1 day, 10%) | ?-20 | 133 |
| Au/Ca2+/Nafion (GDH) | ? | (10 cycles, ?)/? | ? | 134 |
| GC/NAD+/MWCNT (GDH) | ? | ? | 0.01-0.30 | 59 |
| Au/PEI-Fc-NAD (ADH) | ? | ? | ?-30 | 56 |
| Pt/PVA-SbQ/NADH oxidase/NAD-dextran/cellophane membrane (ADH) | 2 | (80 assays)/(6 days) | 0.0003-0.1 | 54 |

(a) GC, glassy carbon; TFC, trinitrofluorenonecarboxylic acid; PL, polylysine; MB, Meldola Blue; CP, carbon paste; PS, modified polystyrene; TBO, toluidine blue O; PMA, polymethacrylate; CHIT, chitosan; GDI, glutaric dialdehyde; PVA-SbQ, poly(vinyl alcohol) bearing styrylpyridinium groups; GDH, glucose dehydrogenase; ADH, alcohol dehydrogenase; CNT, carbon nanotubes; PEI-Fc-NAD, a ferrocene-labeled high-molecular-weight cofactor derivative; ?, not reported.
(b) Operational stability (hours, h)/long-term stability (days or months).

1.1. Sol-gel reagents

This work focuses on designing functional layers based on silica sol-gel thin films to co-immobilize dehydrogenase, cofactor and mediator. A series of precursors with different properties are used (Table II-1). Tetraethoxysilane and tetramethoxysilane are the most common precursors used to prepare sol-gel films. 3-Glycidoxypropyltrimethoxysilane has three OCH3 groups and a glycidoxy group; the epoxy ring at the end of the glycidoxy group is chemically reactive and can react with other active groups. Aminopropyltriethoxysilane is an aminopropyl-functionalized silane precursor; the positive charges held by the protonated aminopropyl groups can provide a suitable environment for bioencapsulation.
Sodium metasilicate, Ludox® HS-40 colloidal silica and sodium silicate solution have been used for protein encapsulation in order to avoid any trace of alcohol (i.e., the aqueous sol-gel route).

Table II-1. Information on the precursors used

| Chemicals | Formula | Grade | MW (g mol-1) | Supplier |
| Tetraethoxysilane (TEOS) | Si(OC2H5)4 | 98% | 208.33 | Alfa Aesar |
| Tetramethoxysilane (TMOS) | Si(OCH3)4 | 99% | 152.22 | Aldrich |
| 3-Glycidoxypropyltrimethoxysilane (GPS) | C9H20O5Si | 98% | 236.34 | Aldrich |
| Aminopropyltriethoxysilane (APTES) | H2N(CH2)3Si(OC2H5)3 | 99% | 221.37 | Aldrich |
| Sodium metasilicate | Na2SiO3 | 95% | 122.06 | Aldrich |
| Ludox® HS-40 colloidal silica | SiO2 | 40% | 60.08 | Sigma-Aldrich |
| Sodium silicate solution | Na2O 10.6%, SiO2 26.5% | - | - | Sigma-Aldrich |
| Sodium silicate solution (water glass) | Na2O 8.7%, SiO2 28.4% | - | - | Molekula |

1.4.1 Electron mediators for oxidation of NADH

1.4.1.1 Commercial electron mediators

The use of an electron-transfer mediator can help to overcome the problems observed during the direct electrochemical oxidation of NADH. In this work, we have tested a series of mediators (Table II-2).

Table II-2. Information on the electron mediators used for NADH oxidation

| Chemicals | Formula | MW (g mol-1) | Supplier |
| Methylene Green (MG) | C16H17ClN4O2S·0.5ZnCl2 | 433.01 | Sigma |
| Meldola's Blue (MB) | C18H15ClN2O·xZnCl2 | 370.78 | Acros Organics |
| Nile Blue chloride (Nb), 85% | C20H20ClN3O | 353.85 | Fluka |
| Ferrocenedimethanol (FDM), 98% | C12H14FeO2 | 246.09 | Aldrich |
| Ferrocenemethanol (FM), 97% | C11H12FeO | 216.07 | Aldrich |

1.4.1.2 Synthesis of non-commercially available electron mediators

① PEI-Fc. Ferrocene was tethered to poly(ethyleneimine) (PEI) following the procedure reported in the literature [3, 4]. FcCHO (90 mg) was dissolved in 15 mL ethanol and added within 1 h to 30 mL of an ethanol solution containing 400 mg PEI. The mixture was stirred for 1 h at room temperature, then NaBH4 was carefully added in portions at 0 °C, and stirring was continued for another 2 h. Finally, the mixture was dried under vacuum and the residue was extracted with distilled water. The aqueous solution was purified by membrane dialysis against water for 12 h and dried. The polymer so obtained is referred to as PEI-Fc.

② Ferrocene-functionalized organoalkoxysilane: ferrocene-alkyl-silane (provided by the group of A. Demir, METU, Ankara, Turkey). Synthesis of siloxane end-group mediators was also considered for possible immobilization in sol-gel thin films. For the synthesis of ferrocene-siloxane compounds with different chain lengths (Figure II-1), two methoxysilane derivatives, 2 and 3, were used. 1 mmol of 1 (214 mg) and 1 mmol of 2 or 3 were dissolved in 10 mL of dry THF in the presence of 100 mg of 4 Å molecular sieves under an argon atmosphere. The mixture was stirred overnight and filtered to remove the molecular sieves. The filtrate was concentrated and dissolved in 10 mL of absolute ethanol. 2 mmol of NaBH4 were added portionwise to the solution in an ice bath. After 3 h, the ethanol was removed. The remaining mixture was dissolved in 10 mL of DCM and extracted with 10 mL of water. The aqueous phase was washed with 10 mL of DCM. The combined organic phases were dried over MgSO4 and concentrated to give 4 or 5.

Figure II-1. Functionalization strategies of ferrocene-type mediators with siloxy functions.

③ Synthesis of Os-containing redox polymers. Os-containing redox polymers were kindly provided by the group of Prof. Wolfgang Schuhmann and synthesized by Sascha Pöller (Analytische Chemie - Elektroanalytik & Sensorik & Center for Electrochemical Sciences - CES; Ruhr-Universität Bochum, Bochum, Germany). Molecular structures of the synthesized Os-containing redox polymers are shown in Figure II-2.

Table II-3. Characteristics and origin of the carbon nanotubes used in this thesis

| Chemicals | Diameter (nm) | Length (µm) | Purity | Supplier |
| Multi-walled carbon nanotubes (MWCNTs) | 5.5 | 5 | 95% | Aldrich |
| Carboxylic acid functionalized single-walled carbon nanotubes (SWCNTs-COOH) | 4-5 | 0.5-1.5 | 90% | Aldrich |
| Carboxylic acid functionalized multi-walled carbon nanotubes (MWCNTs-COOH) | 15±5 | 1-5 | 95% | Nanolab |
| Multi-walled carbon nanotubes (MWCNTs) | 15±5 | 1-5 | 95% | Nanolab |

• Microwaved MWCNTs (MWCNTs-µW)

• Os-polymer-wrapped MWCNTs (MWCNT-Os): 2 mg MWCNTs (MWCNTs-COOH, Nanolab) were dispersed in 2 mL distilled water by sonication. Then, 250 µL of the resulting solution and 150 µL of osmium-polymer solution were mixed together. The mixture was first sonicated for 1 h and then stirred for 24 h at room temperature.

1.4.2 Electron mediators for reduction of NAD+

1.4.2.1 Synthesis of [Cp*Rh(bpy)Cl]+ derivatives

A series of substituted (2,2'-bipyridyl)(pentamethylcyclopentadienyl)rhodium complexes ([Cp*Rh(bpy)Cl]+ derivatives) have been synthesized by the group of A. Demir (METU, Ankara, Turkey) (Figure II-3) according to the literature [8, 9, 10, 11, 12, 13]; they could be used afterwards as precursors to immobilize such compounds onto electrode surfaces for the reduction of NAD+. Basically, the "simple" mediators (1-10, Figure II-3) were prepared by reaction of the corresponding unsubstituted and suitably 4,4'-derivatized 2,2'-bipyridine derivatives with (RhCp*Cl2)2. 2,2'-Bipyridine, 4,4'-dimethoxy-2,2'-bipyridine, 4,4'-dimethyl-2,2'-bipyridine, 4,4'-diamino-2,2'-bipyridine, 4,4'-di-t-butyl-2,2'-bipyridine and (RhCp*Cl2)2 are commercially available and were purchased from Aldrich. Other derivatives of 2,2'-bipyridine were synthesized from 4,4'-dimethyl-2,2'-bipyridine. The preparation of the more "sophisticated" derivatives 11 and 12, which are likely to be used as precursor reagents for immobilization on electrode surfaces, required more elaborate synthetic procedures.

Figure II-3. Molecular structures of the synthesized [Cp*Rh(bpy)Cl]+ derivatives (1-12), bearing 4,4'-substituents such as -NH2, -OMe, -CO2Me, -Br, -OH, -CHO and -SH on the bipyridine ligand.

1.4.2.2 CNTs-Rh

The non-functionalized [Cp*Rh(bpy)Cl]+ complex can be efficiently immobilized on the CNT surface by π-π stacking interactions. 5 mg [Cp*Rh(bpy)Cl]+ and 2 mg CNTs (SWCNTs-COOH, Nanolab) were dispersed in 2 mL distilled water. The mixture was first sonicated for 1 h and then stirred for 24 h at room temperature.

Table II-4. Information on the polymer additives used

| Chemicals | Short name | Concentration (wt%) | Supplier |
| Poly(dimethyldiallylammonium chloride) | PDDA | 20% | Aldrich |
| Poly(ethyleneimine) | PEI | water-free | Aldrich |
| Poly(allylamine) | PAA | 20% | Aldrich |
| Nafion perfluorinated ion-exchange resin | NF | 5% | Aldrich |

4.2 Co-immobilization in electrogenerated sol-gel film

4.2.1 Immobilization of Os-polymer in electrodeposited sol-gel thin film
Figure V-13. Cyclic voltammograms recorded with a GCE modified by a drop-coated TEOS/GPS/PEI/Os-polymer/DSDH/DI/NAD-GPS film in the absence and presence of D-sorbitol from 2 to 6 mM. All cyclic voltammograms have been recorded in Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s.

The encapsulation of the Os redox polymer inside the electrodeposited silica gel has finally been investigated. Figure V-14 shows cyclic voltammograms recorded with a GCE modified by an electrodeposited TEOS/GPS sol-gel film containing Os-polymer. No electrochemical signal of osmium can be observed in the cyclic voltammograms, so this way of sol-gel film formation does not seem to be an adequate strategy for Os-polymer immobilization.

Figure V-14. Cyclic voltammograms recorded with a GCE modified by electrodeposited TEOS/GPS/PEI/Os-polymer film in 0.1 M Tris-HCl buffer (pH 9) at a scan rate of 50 mV/s; 10 scan cycles. Films have been deposited by electrolysis at -1.3 V for 60 s with a TEOS/GPS sol containing Os-polymer.

PEI-NAD+: poly(ethyleneimine)-NAD+
NAD-GPS: NAD+-glycidoxypropylsilane

Compounds 1-3 showed different properties toward NADH regeneration. We therefore sought alternative "soft" immobilization procedures that do not require resorting to functional groups harmful to the functioning of the mediator.

Interest of carbon nanotubes as immobilization support

The noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with molecular metal-containing compounds via π-π stacking is becoming a versatile alternative to covalent bonding [52]. This approach has thus been tested here for the immobilization of the non-functionalized [Cp*Rh(bpy)Cl]+ complex, with the aim of overcoming the aforementioned deactivation of the electrocatalytic processes observed when using derivatized [Cp*Rh(bpy)Cl]+ compounds.

Abstract

In this thesis, the research work was focused on designing functional layers based on silica sol-gel thin films to co-immobilize dehydrogenase, cofactor and electron mediator, in order to obtain highly active systems; such modifications of electrode surfaces should be adaptable to macroporous electrodes. Immobilization of dehydrogenase in an active form in a sol-gel matrix was obtained by using a positively-charged polyelectrolyte as an additive in the starting sol; this polymer provides a good environment for the protein in the sol-gel. The optimal sol can be deposited by evaporation or by electrodeposition and was successfully deposited in macroporous electrodes. Diaphorase was also successfully co-immobilized with dehydrogenase for the electroenzymatic regeneration of the NAD+ cofactor. The immobilization of the cofactor was investigated by simple entrapment, by adsorption on carbon nanotubes, by encapsulation of NAD+ chemically attached to dextran (NAD-dextran), and by in-situ coupling with glycidoxypropyltrimethoxysilane (GPS). The last approach allowed stable immobilization of the cofactor, and was extended to electrodeposition and applied to macroporous electrodes.

Keywords: silica, sol-gel, dehydrogenase, NAD+/NADH cofactor, electron mediator, polyelectrolyte, bioencapsulation, electrochemically-assisted deposition, thin films, reagentless device, porous electrodes, carbon nanotubes.
The research carried out in this thesis concerns the design of sol-gel thin films allowing the co-immobilization of a dehydrogenase, the NAD+/NADH cofactor and an electrochemical mediator, in order to obtain a system displaying optimal electrocatalytic activity that can be deposited inside macroporous electrodes. Immobilization of D-sorbitol dehydrogenase (DSDH, the model enzyme of this study) in an active form in the sol-gel matrix was achieved by using a positively-charged polyelectrolyte as an additive in the starting sol; the presence of this polymer in the silica sol provides an environment favorable to the enzymatic activity of the dehydrogenase. The film can be deposited by evaporation of the optimal sol or electrogenerated by electrolysis of the same sol, the latter process having been applied to the functionalization of macroporous gold electrodes. Diaphorase could also be co-encapsulated with DSDH for the electroenzymatic regeneration of the NAD+ cofactor.

The immobilization of the cofactor in this sol-gel matrix was then studied. The cofactor was first simply encapsulated in the sol-gel matrix, with or without carbon nanotubes. Encapsulation of a macromolecular form of NAD+ (NAD-dextran) was also studied, and finally an alternative route was investigated, using the chemical coupling of NAD+ with the epoxy group of glycidoxypropylsilane (NAD-GPS). This last approach proved to be the most interesting, in particular with respect to the stability of the electrocatalytic signal. The feasibility studies were carried out using sol-gel deposition by evaporation on flat electrodes, and the method was then transferred to macroporous electrodes for deposition by electrogeneration.

Several strategies for immobilizing the electrochemical mediator were then studied. Ferrocene species or osmium(III) complexes can be incorporated into the sol-gel matrix by encapsulation of polymers bearing these mediators (Fc-PEI and osmium polymer) or by co-condensation with a silane-functionalized ferrocene. These three systems proved operational when the sol-gel thin film was deposited by evaporation of a sol containing all the elements to be co-immobilized (DSDH, diaphorase, NAD-GPS and mediator). In contrast, deposition by electrogeneration does not allow the mediators to shuttle electrons between the diaphorase and the electrode, preventing any catalytic activity. Finally, other strategies based on the functionalization of carbon nanotubes with different electrochemical mediators were studied to overcome the problem encountered with electrogenerated films (loss of the mediator function). The carbon nanotubes were functionalized with quinone functions through a microwave treatment, by electropolymerization of methylene green, or by coating with an acrylate-type polymer bearing osmium(III) complexes. It was then possible to co-immobilize all the elements of this electrocatalytic process by using the electrogeneration of a sol-gel thin layer to immobilize the proteins (DSDH and diaphorase) and the cofactor (NAD-GPS) at the surface of the nanotubes functionalized with the osmium(III) polymer. Finally, the carbon nanotubes allowed the immobilization, in an active form, of Rh(III) complexes enabling the regeneration of the NADH cofactor.
Mechanisms of retinoic acid signaling during cardiogenesis

Sonia Stefanovic (email: sonia.stefanovic@univ-amu.fr), Stéphane Zaffran (email: stephane.zaffran@univ-amu.fr)

Abstract. Substantial experimental and epidemiological data have highlighted the interplay between nutritional and genetic factors in the development of congenital heart defects. Retinoic acid (RA), a derivative of vitamin A, plays a key role during vertebrate development, including the formation of the heart. Retinoids bind to RA and retinoid X receptors (RARs and RXRs), which then regulate tissue-specific genes. Here, we will focus on the roles of RA signaling and receptors in gene regulation during cardiogenesis, and the consequences of deregulated retinoid signaling for heart formation and congenital heart defects.

Introduction

The heart is the first organ to function and is essential for the distribution of nutrients and oxygen in the growing mammalian embryo. Normal cardiac morphogenesis is thus vital for embryonic survival. Heart development is a complex process that requires precise and coordinated interactions between multiple cardiac and extra-cardiac cell types. Any perturbation of the cells that contribute to heart formation leads to cardiac defects. Congenital heart defects affect 1-2% of live births, and are found in up to one-tenth of spontaneously aborted fetuses [START_REF] Bruneau | The developmental genetics of congenital heart disease[END_REF][START_REF] Fahed | Genetics of congenital heart disease: the glass half empty[END_REF]. Studies in the invertebrate Drosophila melanogaster have defined numerous regulators that determine cardiac cell specification and differentiation, revealing that the cardiac regulatory network is remarkably conserved during evolution. More recently, genetic studies have identified mutations in genes encoding components of signaling pathways, as well as proteins organizing chromatin structure, that are responsible for congenital heart defects [START_REF] Miyake | KDM6A point mutations cause Kabuki syndrome[END_REF][START_REF] Vissers | Mutations in a new member of the chromodomain gene family cause CHARGE syndrome[END_REF][START_REF] Zaidi | De novo mutations in histone-modifying genes in congenital heart disease[END_REF]. The specification of multipotent heart progenitor cells and their differentiation into different cell lineages is under tight spatial and temporal transcriptional control. Defining the transcriptional networks underlying normal heart development is a prerequisite for understanding the molecular basis of congenital heart malformation.

Vitamin A (or provitamin A carotenoid) deficiency is a major public health problem in underdeveloped countries [START_REF] Zile | Vitamin A-not for your eyes only: requirement for heart formation begins early in embryogenesis[END_REF]. Young children and pregnant and breast-feeding women are the groups most affected, because their requirements for vitamin A are higher and the impact of deficiency more severe than in other population subgroups. Malformations following maternal vitamin A deficiency were first reported by Hale (1935) [START_REF] Hale | The relation of vitamin A to anophthalmos in pigs[END_REF]. The mammalian embryo is strongly dependent on the maternal delivery of retinol (carotenoids and retinyl esters) through transplacental transfer. The fetus needs vitamin A throughout pregnancy [START_REF] Comptour | Nuclear retinoid receptors and pregnancy: placental transfer, functions, and pharmacological aspects[END_REF].
Consequently, both deficiency and excess of vitamin A cause severe damage during prenatal and postnatal development. Nutritional and clinical studies on animals and humans have shown that maternal vitamin A insufficiency can result in fetal death or a broad range of abnormalities, including cardiac malformations (D'Aniello and Waxman, 2015; [START_REF] Wilson | An analysis of the syndrome of malformations induced by maternal vitamin A deficiency. Effects of restoration of vitamin A at various times during gestation[END_REF]). Moreover, it has been suggested that the elevated incidence of heart malformations in developing countries could be partly explained by a low availability of retinol due to vitamin A deficiency in the diet [START_REF] Sommer | Impact of vitamin A supplementation on childhood mortality. A randomised controlled community trial[END_REF][START_REF] Underwood | Vitamin A deficiency disorders: international efforts to control a preventable "pox[END_REF]. Conversely, a high level of retinol during pregnancy leads to toxicity in many organs, including the heart. For example, maternal intake of isotretinoin has been shown to cause congenital cardiac defects in addition to other malformations [START_REF] Guillonneau | Teratogenic effects of vitamin A and its derivates[END_REF]. Importantly, genetic alterations reducing retinol uptake [START_REF] Golzio | Matthew-Wood syndrome is caused by truncating mutations in the retinol-binding protein receptor gene STRA6[END_REF][START_REF] Kawaguchi | A membrane receptor for retinol binding protein mediates cellular uptake of vitamin A[END_REF][START_REF] Pasutto | Mutations in STRA6 cause a broad spectrum of malformations including anophthalmia, congenital heart defects, diaphragmatic hernia, alveolar capillary dysplasia, lung hypoplasia, and mental retardation[END_REF] or retinoic acid (RA) production [START_REF] Pavan | ALDH1A2 (RALDH2) genetic variation in human congenital heart disease[END_REF][START_REF] Roberts | Cyp26 genes a1, b1 and c1 are down-regulated in Tbx1 null mice and inhibition of Cyp26 enzyme function produces a phenocopy of DiGeorge Syndrome in the chick[END_REF] have been implicated in human congenital heart disease. Altered RA signaling, whether genetic or nutritional in origin, could be a predominant risk factor increasing the frequency of congenital heart disease in humans [START_REF] Huk | Increased dietary intake of vitamin A promotes aortic valve calcification in vivo[END_REF][START_REF] Jenkins | Noninherited risk factors and congenital cardiovascular defects: current knowledge: a scientific statement from the American Heart Association Council on Cardiovascular Disease in the Young: endorsed by the American Academy of Pediatrics[END_REF][START_REF] Underwood | Vitamin A deficiency disorders: international efforts to control a preventable "pox[END_REF]. In this review, we will discuss the role of retinoids in cardiac gene regulation and congenital heart defects.

Early heart development

The mammalian heart has four chambers and is composed of a variety of cell types. Distinct sets of cardiac progenitors differentiate to form the different parts of the heart. It develops from cardiac progenitors that can be traced back to the early gastrulating embryo (embryonic day (E) 6.5 in the mouse). The earliest progenitors originate from the primitive streak and migrate toward the anterior lateral region to form the cardiac crescent, defined as the first heart field (E7-7.5).
By E7.5-8.0, during folding of the embryo and formation of the foregut, the two sides of the cardiac crescent are brought together to form the primary heart tube (Fig. 1). The embryonic myocardium of the tube is characterized by a primitive phenotype, i.e. lower proliferation, a poorly developed contractile apparatus and slow conduction [START_REF] Christoffels | Development of the pacemaker tissues of the heart[END_REF][START_REF] Moorman | Cardiac chamber formation: development, genes, and evolution[END_REF]. Growth of the heart tube depends on the addition of progenitor cells from adjacent pharyngeal mesoderm to the arterial and venous poles. This cell population, named the second heart field, was first identified in the mouse and chick models [START_REF] Buckingham | Building the mammalian heart from two sources of myocardial cells[END_REF][START_REF] Zaffran | New developments in the second heart field[END_REF]. These progenitor cells, located in a dorsal/medial position relative to the linear heart tube, are kept in an undifferentiated and rapidly proliferating state. These cells ultimately contribute to the outflow tract, the right ventricle and a major part of the atria, while the linear heart tube gives rise mainly to the left ventricle [START_REF] Buckingham | Building the mammalian heart from two sources of myocardial cells[END_REF][START_REF] Kelly | The arterial pole of the mouse heart forms from Fgf10-expressing cells in pharyngeal mesoderm[END_REF][START_REF] Zaffran | New developments in the second heart field[END_REF][START_REF] Zaffran | Right ventricular myocardium derives from the anterior heart field[END_REF]. Specific regions in the embryonic heart tube subsequently acquire a chamber-specific gene program, differentiate further and expand, or "balloon", by rapid proliferation to form the ventricular and atrial chamber myocardium (>E8.5) (Fig. 1). In contrast, the regions in between these differentiating chambers, the sinus venosus, the atrioventricular canal and the outflow tract, do not differentiate or expand, and consequently form constrictions. The inflow tract cells of the heart tube develop into atrial cells, pulmonary myocardial cells and myocardial cells of the superior caval veins. Expression of the LIM homeobox gene Islet1 (Isl1) in second heart field cells led to an appreciation of the full contribution of these progenitors to the venous, as well as the arterial, pole of the heart [START_REF] Cai | Isl1 identifies a cardiac progenitor population that proliferates prior to differentiation and contributes a majority of cells to the heart[END_REF]. However, differences in gene expression between progenitors of the venous and arterial poles revealed that the second heart field is pre-patterned [START_REF] Galli | Atrial myocardium derives from the posterior region of the second heart field, which acquires left-right identity as Pitx2c is expressed[END_REF][START_REF] Snarr | Isl1 expression at the venous pole identifies a novel role for the second heart field in cardiac development[END_REF].
Recent genetic lineage analysis in the mouse has shown that expression of the anterior Homeobox (Hox) genes Hoxa1, Hoxa3 and Hoxb1 defines distinct sub-domains within the posterior domain of the second heart field that contribute to a large part of the atrial and sub-pulmonary myocardium [START_REF] Bertrand | Hox genes define distinct progenitor sub-domains within the second heart field[END_REF][START_REF] Diman | A retinoic acid responsive Hoxa3 transgene expressed in embryonic pharyngeal endoderm, cardiac neural crest and a subdomain of the second heart field[END_REF]. This suggests that Hox-expressing progenitor cells in the posterior domain of the second heart field contribute to both poles of the heart tube. Indeed, fate mapping and clonal analysis experiments have confirmed that posterior second heart field cells contribute to the outflow tract, and that sub-pulmonary and inflow tract myocardial cells are clonally related [START_REF] Dominguez | Asymmetric fate of the posterior part of the second heart field results in unexpected left/ right contributions to both poles of the heart[END_REF][START_REF] Laforest | Genetic lineage tracing analysis of anterior Hox expressing cells[END_REF][START_REF] Lescroart | Lineage tree for the venous pole of the heart: clonal analysis clarifies controversial genealogy based on genetic tracing[END_REF]. Genetic tracing of Hoxb1 lineages in embryos deficient for the transcription factor T-box 1 (Tbx1) showed that the deployment of Hoxb1-positive cells during formation of the heart is regulated by Tbx1 [START_REF] Rana | Tbx1 coordinates addition of posterior second heart field progenitor cells to the arterial and venous poles of the heart[END_REF].

RA signaling functions during heart development

Many studies have demonstrated that the formation of the heart depends on the vitamin A metabolite RA, which serves as a ligand for nuclear receptors (Fig. 2) [START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for early mouse post-implantation development[END_REF][START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for heart morphogenesis in the mouse[END_REF]. RA metabolic pathways have been the subject of some excellent recent reviews [START_REF] Niederreither | Retinoic acid in development: towards an integrated view[END_REF][START_REF] Rhinn | Retinoic acid signalling during development[END_REF]. Excess exposure in humans to vitamin A or its analogs, the retinoids, can cause embryonic defects and congenital heart disease, including conotruncal and aortic arch artery malformations such as transposition of the great vessels, double outlet right ventricle, and tetralogy of Fallot [START_REF] Lammer | Retinoic acid embryopathy[END_REF][START_REF] Mark | Function of retinoid nuclear receptors: lessons from genetic and pharmacological dissections of the retinoic acid signaling pathway during mouse embryogenesis[END_REF]. In rodents, treatment with RA was one of the earliest teratogenic models of heart defects [START_REF] Wilson | Congenital anomalies of heart and great vessels in offspring of vitamin A-deficient rats[END_REF]. Retinoid exposure produces transposition of the great arteries and a wide spectrum of great artery patterning defects [START_REF] Ratajska | Coronary artery embryogenesis in cardiac defects induced by retinoic acid in mice[END_REF][START_REF] Yasui | Morphological observations on the pathogenetic process of transposition of the great arteries induced by retinoic acid in mice[END_REF].
The variability and low penetrance of any particular defect, together with the lack of molecular and electrophysiological data, make these RA-induced teratogenic defects difficult to identify in human patients. Indeed, early disturbances of RA signaling may lead to severe congenital heart defects associated with embryonic death and may thus be only rarely observed as a cause of congenital heart disease in humans. The canonical RA synthetic pathway has been elucidated over the last two decades (Fig. 2), mainly via gene targeting studies of several enzymes in the mouse [START_REF] Niederreither | Retinoic acid in development: towards an integrated view[END_REF]. Two sequential reactions are required to transform retinol, the major source of retinoids, first into retinaldehyde and then into RA. The first, reversible, oxidation is catalyzed by cytosolic alcohol dehydrogenases (ADHs) and microsomal retinol dehydrogenase (RDH), and retinaldehyde is then irreversibly oxidized to RA by retinaldehyde dehydrogenases (RALDHs, also known as ALDHs). There are three members of the RALDH family, each with a unique developmental expression pattern [START_REF] Mic | Novel retinoic acid generating activities in the neural tube and heart identified by conditional rescue of Raldh2 null mutant mice[END_REF][START_REF] Mic | RALDH3, a retinaldehyde dehydrogenase that generates retinoic acid, is expressed in the ventral retina, otic vesicle and olfactory pit during mouse development[END_REF](Niederreither et al., 2002b) (Fig. 2). Analysis of knockout mice demonstrated that RALDH2 (ALDH1A2) is responsible for almost all RA production during early development. Studies in mouse and avian embryos have shown that RA deficiency is associated with anomalies of anteroposterior patterning of the primitive heart [START_REF] Hochgreb | A caudorostral wave of RALDH2 conveys anteroposterior information to the cardiac field[END_REF][START_REF] Osmond | The effects of retinoic acid on heart formation in the early chick embryo[END_REF][START_REF] Yutzey | Expression of the atrial-specific myosin heavy chain AMHC1 and the establishment of anteroposterior polarity in the developing chicken heart[END_REF]. Using in situ hybridization experiments, Hochgreb et al. have described two phases of Raldh2 expression [START_REF] Hochgreb | A caudorostral wave of RALDH2 conveys anteroposterior information to the cardiac field[END_REF][START_REF] Moss | Dynamic patterns of retinoic acid synthesis and response in the developing mammalian heart[END_REF]. The first phase is characterized by a large expression domain in lateral mesoderm in proximity to posterior cardiac precursors; the second phase is characterized by progressive encircling of cardiac precursors [START_REF] Hochgreb | A caudorostral wave of RALDH2 conveys anteroposterior information to the cardiac field[END_REF]. Furthermore, treatment of chick embryos with a pan-antagonist of RA signaling at stages HH4-7 causes changes in inflow architecture, indicating that a caudal-to-rostral wave of Raldh2 conveys anteroposterior information to the forming heart tube. The phenotype of Raldh2-null mice supports this notion (Niederreither et al., 2002a; [START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for early mouse post-implantation development[END_REF][START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for heart morphogenesis in the mouse[END_REF]).
In the mouse, deletion of Raldh2 causes heart defects with poor development of the atria and sinus venosus (Niederreither et al., 2002a; [START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for early mouse post-implantation development[END_REF][START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for heart morphogenesis in the mouse[END_REF]). Interestingly, some of these abnormalities can be rescued by transient maternal RA supplementation from E7.5 to E8.5-9.5, suggesting that cardiac precursors commit to their fate early during cardiogenesis [START_REF] Mic | Novel retinoic acid generating activities in the neural tube and heart identified by conditional rescue of Raldh2 null mutant mice[END_REF][START_REF] Niederreither | The regional pattern of retinoic acid synthesis by RALDH2 is essential for the development of posterior pharyngeal arches and the enteric nervous system[END_REF]. Our investigation of the role of Raldh2 revealed that RA signaling plays a role in establishing the boundary of the second heart field in the embryo [START_REF] Ryckebusch | Retinoic acid deficiency alters second heart field formation[END_REF][START_REF] Sirbu | Retinoic acid controls heart anteroposterior patterning by down-regulating Isl1 through the Fgf8 pathway[END_REF]. Analysis of markers of the second heart field, including Isl1, Tbx1, Fgf8 and Fgf10, in Raldh2 mutant embryos has shown abnormal expansion of the expression domains of these genes in posterior lateral mesoderm, suggesting that RA signaling is required to define the posterior boundary of the second heart field [START_REF] Ryckebusch | Retinoic acid deficiency alters second heart field formation[END_REF][START_REF] Sirbu | Retinoic acid controls heart anteroposterior patterning by down-regulating Isl1 through the Fgf8 pathway[END_REF] (Fig. 3). Similarly, the zebrafish mutation neckless (nls), which disrupts the function of raldh2, causes the formation of large hearts [START_REF] Keegan | Retinoic acid signaling restricts the cardiac progenitor pool[END_REF]. This excess of cardiomyocytes results from an increased number of cardiac progenitor cells, as revealed by the increased number of nkx2.5-expressing cells [START_REF] Keegan | Retinoic acid signaling restricts the cardiac progenitor pool[END_REF]. The other RALDH enzymes do not play major roles during heart development, since deletion of Raldh1 does not result in any observable phenotype [START_REF] Fan | Targeted disruption of Aldh1a1 (Raldh1) provides evidence for a complex mechanism of retinoic acid synthesis in the developing retina[END_REF], and Raldh3-null mice have only defects in ocular and nasal regions as well as in neuronal differentiation in the brain [START_REF] Dupe | A newborn lethal defect due to inactivation of retinaldehyde dehydrogenase type 3 is prevented by maternal retinoic acid treatment[END_REF][START_REF] Molotkova | Role of retinoic acid during forebrain development begins late when Raldh3 generates retinoic acid in the ventral subventricular zone[END_REF]. Another important enzyme for RA synthesis and for early embryogenesis is RDH10. Rdh10 expression is localized in the lateral plate mesoderm of the cardiac crescent and later in the venous pole of the heart tube [START_REF] Sandell | RDH10 is essential for synthesis of embryonic retinoic acid and is required for limb, craniofacial, and organ development[END_REF].
Using the RARE-hsp68-lacZ reporter transgene, it has been shown that RA activity in Rdh10-null embryos is almost completely eliminated at the critical E8.0-E8.5 stage of development [START_REF] Rhinn | Involvement of retinol dehydrogenase 10 in embryonic patterning and rescue of its loss of function by maternal retinaldehyde treatment[END_REF][START_REF] Sandell | RDH10 oxidation of Vitamin A is a critical control step in synthesis of retinoic acid during mouse embryogenesis[END_REF][START_REF] Sandell | RDH10 is essential for synthesis of embryonic retinoic acid and is required for limb, craniofacial, and organ development[END_REF]. RDH10 loss-of-function is lethal between E10.5 and E14.5 [START_REF] Cammas | Expression of the murine retinol dehydrogenase 10 (Rdh10) gene correlates with many sites of retinoid signalling during embryogenesis and organ differentiation[END_REF][START_REF] Romand | Dynamic expression of the retinoic acid-synthesizing enzyme retinol dehydrogenase 10 (rdh10) in the developing mouse brain and sensory organs[END_REF][START_REF] Sandell | RDH10 is essential for synthesis of embryonic retinoic acid and is required for limb, craniofacial, and organ development[END_REF]. Rdh10 mutant embryos exhibit abnormalities characteristic of RA deficiency. Some severely affected mutants (<10%) fail to undergo normal looping and chamber formation, remaining instead simple tubes, a phenotype that can be partly rescued by maternal RA supplementation [START_REF] Rhinn | Involvement of retinol dehydrogenase 10 in embryonic patterning and rescue of its loss of function by maternal retinaldehyde treatment[END_REF][START_REF] Sandell | RDH10 oxidation of Vitamin A is a critical control step in synthesis of retinoic acid during mouse embryogenesis[END_REF]. Rdh10 mutants obtained at E12.5-E14.5 have poor myocardial trabeculation. Zebrafish rdh10a-deficient embryos have enlarged hearts with increased cardiomyocyte number (D'Aniello et al., 2015). The retinaldehyde reductase DHRS3 regulates RA biosynthesis through a feedback inhibition mechanism that involves its interaction with RDH10. Dhrs3 mutant embryos die late in gestation and display defects in cardiac outflow tract formation and in atrial and ventricular septation [START_REF] Adams | The retinaldehyde reductase activity of DHRS3 is reciprocally activated by retinol dehydrogenase 10 to control retinoid homeostasis[END_REF][START_REF] Billings | The retinaldehyde reductase DHRS3 is essential for preventing the formation of excess retinoic acid during embryonic development[END_REF][START_REF] Feng | Dhrs3a regulates retinoic acid biosynthesis through a feedback inhibition mechanism[END_REF]. The transport of vitamin A appears to be mediated by STRA6, a membrane-bound protein that can interact with cellular retinol binding proteins (CRABPs and CRBP), which bind retinol in the serum (Fig. 2). Human mutations in STRA6 underlie Matthew-Wood syndrome, associated with multiple developmental defects including, occasionally, outflow tract, atrial and ventricular septal defects [START_REF] Golzio | Matthew-Wood syndrome is caused by truncating mutations in the retinol-binding protein receptor gene STRA6[END_REF][START_REF] Pasutto | Mutations in STRA6 cause a broad spectrum of malformations including anophthalmia, congenital heart defects, diaphragmatic hernia, alveolar capillary dysplasia, lung hypoplasia, and mental retardation[END_REF].
Surprisingly, deletion of Stra6 in the mouse has only a modest effect on the levels of RA signaling in most tissues, with the exception of the eye [START_REF] Amengual | STRA6 is critical for cellular vitamin A uptake and homeostasis[END_REF]. CYP26A1 is an RA-degrading enzyme that belongs to the cytochrome P450 family (Fig. 2). Interestingly, Cyp26A1 expression is spatially restricted in the cardiac crescent and later at the poles of the E8.0 heart tube [START_REF] Maclean | Cloning of a novel retinoic-acid metabolizing cytochrome P450, Cyp26B1, and comparative expression analysis with Cyp26A1 during early murine development[END_REF][START_REF] Rydeen | Cyp26 enzymes are required to balance the cardiac and vascular lineages within the anterior lateral plate mesoderm[END_REF]. Loss of Cyp26 enzymes in zebrafish and mice results in severe phenotypes, including embryonic lethality, smaller atria, looping defects and outflow tract defects [START_REF] Abu-Abed | The retinoic acid-metabolizing enzyme, CYP26A1, is essential for normal hindbrain patterning, vertebral identity, and development of posterior structures[END_REF][START_REF] Emoto | Retinoic acid-metabolizing enzyme Cyp26a1 is essential for determining territories of hindbrain and spinal cord in zebrafish[END_REF][START_REF] Hernandez | Cyp26 enzymes generate the retinoic acid response pattern necessary for hindbrain development[END_REF](Niederreither et al., 2002a; [START_REF] Sakai | The retinoic acid-inactivating enzyme CYP26 is essential for establishing an uneven distribution of retinoic acid along the anterio-posterior axis within the mouse embryo[END_REF]). Although Cyp26c1 knockout mice do not have significant defects, double Cyp26A1 and Cyp26C1 mutants have more severe looping defects [START_REF] Uehara | CYP26A1 and CYP26C1 cooperatively regulate anterior-posterior patterning of the developing brain and the production of migratory cranial neural crest cells in the mouse[END_REF]. Loss of CYP26 enzymes in humans is associated with numerous developmental syndromes [START_REF] Rydeen | Cyp26 enzymes are required to balance the cardiac and vascular lineages within the anterior lateral plate mesoderm[END_REF]. Inhibition of CYP26A1 is associated with DiGeorge syndrome-like phenotypes that cause heart defects such as conotruncal malformations (interrupted aortic arch, persistent truncus arteriosus, tetralogy of Fallot, and ventricular septal defects) [START_REF] Roberts | Cyp26 genes a1, b1 and c1 are down-regulated in Tbx1 null mice and inhibition of Cyp26 enzyme function produces a phenocopy of DiGeorge Syndrome in the chick[END_REF]. The role of CRABPs and CRBP is less clear; however, studies using knockout mice suggest that these proteins are not essential for heart development [START_REF] Lampron | Mice deficient in cellular retinoic acid binding protein II (CRABPII) or in both CRABPI and CRABPII are essentially normal[END_REF]. RA regulates development by acting as a diffusible signaling molecule that controls the activity of retinoic acid receptors (RARs). A total of six receptors (RARα, -β, -γ, RXRα, -β, and -γ) transduce the activities of RA [START_REF] Metzger | Contribution of targeted conditional somatic mutagenesis to deciphering retinoid X receptor functions and to generating mouse models of human diseases[END_REF].
Unlike the specific and restricted expression pattern of Raldh2, or the spatially and temporally restricted availability of RA during development, RARα, RXRα, and RXRβ are ubiquitously expressed in embryonic and adult tissues, whereas RARβ, RARγ, and RXRγ expression is more restricted [START_REF] Dolle | Developmental expression of murine retinoid X receptor (RXR) genes[END_REF][START_REF] Dolle | Retinoic acid receptors and cellular retinoid binding proteins. I. A systematic study of their differential pattern of transcription during mouse organogenesis[END_REF][START_REF] Ruberte | Retinoic acid receptors and cellular retinoid binding proteins. II. Their differential pattern of transcription during early morphogenesis in mouse embryos[END_REF][START_REF] Ruberte | Specific spatial and temporal distribution of retinoic acid receptor gamma transcripts during mouse embryogenesis[END_REF]. Studies with knockout strategies for RARs and RXRs in mutant mice have demonstrated their crucial role in many developmental processes [START_REF] Kastner | Genetic analysis of RXR alpha developmental function: convergence of RXR and RAR signaling pathways in heart and eye morphogenesis[END_REF][START_REF] Li | Normal development and growth of mice carrying a targeted disruption of the alpha 1 retinoic acid receptor gene[END_REF][START_REF] Lohnes | Function of retinoic acid receptor gamma in the mouse[END_REF][START_REF] Lufkin | High postnatal lethality and testis degeneration in retinoic acid receptor alpha mutant mice[END_REF][START_REF] Mendelsohn | Function of the retinoic acid receptors (RARs) during development (II). Multiple abnormalities at various stages of organogenesis in RAR double mutants[END_REF][START_REF] Sucov | RXR alpha mutant mice establish a genetic basis for vitamin A signaling in heart morphogenesis[END_REF]. As in vitamin A deficiency syndrome, fetal or postnatal defects were found in RAR or RXR single-mutant mice, but these defects are less severe, suggesting functional redundancy among these receptors. Whereas RXRα-null mice exhibit embryonic lethality, functional redundancy between the RARs and the other RXR isotypes has been demonstrated [START_REF] Mendelsohn | Function of the retinoic acid receptors (RARs) during development (II). Multiple abnormalities at various stages of organogenesis in RAR double mutants[END_REF]. Mutation of either Raldh2 or RXRα results in similar phenotypes characterized by profound embryonic lethality with prominent myocardial defects, suggesting a role for RXRα in myocardial growth [START_REF] Dyson | Atrial-like phenotype is associated with embryonic ventricular failure in retinoid X receptor alpha-/-mice[END_REF][START_REF] Gruber | RXR alpha deficiency confers genetic susceptibility for aortic sac, conotruncal, atrioventricular cushion, and ventricular muscle defects in mice[END_REF][START_REF] Kastner | Genetic analysis of RXR alpha developmental function: convergence of RXR and RAR signaling pathways in heart and eye morphogenesis[END_REF][START_REF] Li | Normal development and growth of mice carrying a targeted disruption of the alpha 1 retinoic acid receptor gene[END_REF][START_REF] Sucov | RXR alpha mutant mice establish a genetic basis for vitamin A signaling in heart morphogenesis[END_REF]. On the other hand, other defects observed in double RXR-RAR mutants are not observed in Raldh2 mutants.
RA signaling through RARα1/RXRα regulates differentiation of second heart field cells and outflow tract formation [START_REF] Li | Retinoic acid regulates differentiation of the secondary heart field and TGFbeta-mediated outflow tract septation[END_REF].

Mechanisms of transcriptional regulation

RA has been characterized as a diffusible morphogen that acts directly on cells in a concentration-dependent manner to assign positional identities [START_REF] Briscoe | Morphogen rules: design principles of gradient-mediated embryo patterning[END_REF]. RA has a non-cell-autonomous (paracrine) effect on neighboring cells, but there is also evidence that it acts in an intracrine manner in the cells that synthesize it [START_REF] Azambuja | Retinoic acid and VEGF delay smooth muscle relative to endothelial differentiation to coordinate inner and outer coronary vessel wall morphogenesis[END_REF]. RA signaling is dependent on cells that have the ability to metabolize retinol to RA. RA can form gradients capable of inducing sharp boundaries of target gene expression. The underlying mechanisms include the activities of RA-degrading enzymes [START_REF] White | How degrading: Cyp26s in hindbrain development[END_REF]. Several enzymatic activities, such as those of RDHs and CYP26s, are required in addition to RALDHs to control RA distribution within the embryo. In zebrafish, it has been demonstrated that RA degradation by CYP26 enzymes progressively determines the limits of RA-dependent gene expression [START_REF] Hernandez | Cyp26 enzymes generate the retinoic acid response pattern necessary for hindbrain development[END_REF]. CYP26 enzymes would thus function to establish boundaries in RA responsiveness. RA gradients also induce sharply defined domains of gene expression through tight feedback regulation of RA synthesis and through interactions with other localized morphogens [START_REF] Schilling | Dynamics and precision in retinoic acid morphogen gradients[END_REF][START_REF] Shimozono | Visualization of an endogenous retinoic acid gradient across embryonic development[END_REF]. This has been explored mainly in the context of brain development. The basic mechanism for transcriptional regulation by RARs relies on DNA binding to specific sequence elements, the RA response elements (RAREs). RARs and RXRs are highly conserved among mammals. Unlike RARs, RXRs are not specific to the retinoid pathway and can be involved in other signaling pathways by binding vitamin D receptors, liver X receptors, thyroid receptors, and peroxisome proliferator-activated receptors. RXR can act either as a homodimer or as a heterodimer with RARs. In the latter case, regulation of gene transcription is achieved by binding of the RAR/RXR heterodimer to a specific sequence classically composed of two direct repeats of a hexameric motif. In the classical view, functional RAREs near genes that require RA for normal expression during development typically consist of hexameric direct repeats (DRs) (A/G)G(T/G)TCA with an interspacing of 5 bp (DR5 elements) or 2 bp (DR2 elements), unlike vitamin D and thyroid hormone response elements, which typically exhibit DR3 and DR4 configurations, respectively. Even if spacing is required, the specificity of RAR and RXR binding also seems to be regulated by interactions with other transcription factors and by the epigenetic landscape around the RARE. The cell specificity of the response to RA signaling is probably due to interactions with different regulatory proteins.
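Since the DR grammar above is essentially a pattern-matching rule, a small script can illustrate how such elements are located in a sequence. The sketch below assumes only the (A/G)G(T/G)TCA half-site consensus quoted above; the input sequence is hypothetical, and a real genome scan would also handle the reverse strand and degenerate positions.

```python
import re

# Half-site consensus stated above: (A/G)G(T/G)TCA
HALF_SITE = r"[AG]G[TG]TCA"

def find_direct_repeats(seq, spacer):
    """Return (position, match) pairs for direct repeats of the half-site
    separated by `spacer` bp (spacer=5 for DR5, spacer=2 for DR2).
    A lookahead is used so that overlapping elements are also reported."""
    pattern = re.compile(r"(?=({h}.{{{n}}}{h}))".format(h=HALF_SITE, n=spacer))
    return [(m.start(), m.group(1)) for m in pattern.finditer(seq.upper())]

# Hypothetical sequence containing one DR5 element
print(find_direct_repeats("ttAGGTCAccactAGTTCAgg", spacer=5))
```

Changing `spacer` to 3 or 4 would retrieve the DR3/DR4 configurations mentioned for vitamin D and thyroid hormone response elements.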
Recently, a two-hybrid assay in yeast demonstrated that RXRα interacts with the cardiac transcription factor Nkx2.5 [START_REF] Waardenberg | Prediction and validation of protein-protein interactors from genome-wide DNA-binding data using a knowledge-based machine-learning approach[END_REF]. In humans, mutations of NKX2-5 result in congenital heart defects such as atrial septal defects and conduction block [START_REF] Prendiville | Insights into the genetic structure of congenital heart disease from human and murine studies on monogenic disorders[END_REF]. Mutations of Nkx2.5 alter this interaction, suggesting that the defects seen in patients carrying NKX2-5 mutations may be due in part to a disrupted protein-partner interaction between Nkx2.5 and RXRα. The local chromatin environment and the binding motifs of neighboring factors are likely important parameters underlying the RARE recognition code. Identifying RAR and RXR co-factors has the potential to shed light on the complex gene regulatory processes underlying normal development and is likely critical for improving the differentiation protocols used to drive stem cells into specific cardiac cell types. The use of chromatin immunoprecipitation (ChIP) with antibodies against RARs has demonstrated a greater diversity of RAREs than previously appreciated [START_REF] Boergesen | Genome-wide profiling of liver X receptor, retinoid X receptor, and peroxisome proliferator-activated receptor alpha in mouse liver reveals extensive sharing of binding sites[END_REF][START_REF] Chatagnon | RAR/RXR binding dynamics distinguish pluripotency from differentiation associated cis-regulatory elements[END_REF][START_REF] He | The role of retinoic acid in hepatic lipid homeostasis defined by genomic binding and transcriptome profiling[END_REF][START_REF] Lalevee | Genome-wide in silico identification of new conserved and functional retinoic acid receptor response elements (direct repeats separated by 5 bp)[END_REF][START_REF] Mendoza-Parra | Dissecting the retinoid-induced differentiation of F9 embryonal stem cells by integrative genomics[END_REF][START_REF] Moutier | Retinoic acid receptors recognize the mouse genome through binding elements with diverse spacing and topology[END_REF]. Other hexameric repeat configurations have been found to bind RARs in cell line studies involving ChIP-seq, but their in vivo importance is unknown. A recent ChIP study coupled to sequencing and performed in ES cells suggested that the presence of RA might also induce de novo RAR/RXR binding to numerous RAREs that are not bound by unliganded receptors [START_REF] Mahony | Ligand-dependent dynamics of retinoic acid receptor binding during early neurogenesis[END_REF]. There is also evidence that inverted repeats with no spacer can be targets for RARs. RAR ChIP studies and in silico analyses have discovered 13,000-15,000 potential RAREs. Many of these RAREs have not been attributed to endogenous RA signaling and seem to be off-targets due to treatment with high amounts of RA or RAR antagonists. RXR ChIP-seq analyses also revealed that a large fraction of the genomic regions occupied by RXR are not associated with a recognizable DR binding site, indicating indirect binding via DNA looping and interaction with co-factors [START_REF] Delacroix | Cell-specific interaction of retinoic acid receptors with target genes in mouse embryonic fibroblasts and embryonic stem cells[END_REF].
Consistent with this, in vitro reporter assays suggest that the transcriptional activities of RARs and RXRs do not necessarily require direct DNA binding [START_REF] Clabby | Retinoid X receptor alpha represses GATA-4-mediated transcription via a retinoid-dependent interaction with the cardiac-enriched repressor FOG-2[END_REF][START_REF] Molkentin | Transcription factor GATA-4 regulates cardiac muscle-specific expression of the alpha-myosin heavy-chain gene[END_REF]. Since ChIP assesses protein-DNA proximity by cross-linking, and not direct binding, it will be necessary to verify RAR and RXR binding using in vivo footprinting. Importantly, the presence of an RAR or RXR does not conclusively show that RA will bind to the receptor and regulate gene expression in an RA-dependent manner. RA acts as a ligand for RAR and RXR nuclear receptors, switching them from potential repressors to transcriptional activators. Whether changes in RA gradient concentrations guide this function is unknown. Since the spatial organization of the nucleus may affect activation or repression of gene expression and the interaction with co-factors, assessing the localization of RA receptors within the nucleus might be relevant. When RA is absent, RAR/RXR heterodimers bind RAREs where they recruit repressive complexes that inhibit transcription. In the presence of RA, however, the repressive complex bound to the receptor is exchanged for an activating complex and transcription at the target site is activated. As the receptors are already present on many target genes, this makes RA the limiting factor in deciding whether or not target genes are activated. The main determinant that drives RA signaling is thus RA availability rather than nuclear-receptor abundance, which is likely to be secondary. The mechanisms governing whether RARs and RXRs function as activators or active repressors of a target gene have been studied in depth using in vitro systems and in the context of several developmental processes [START_REF] Gillespie | Retinoid regulated association of transcriptional co-regulators and the polycomb group protein SUZ12 with the retinoic acid response elements of Hoxa1, RARbeta(2), and Cyp26A1 in F9 embryonal carcinoma cells[END_REF][START_REF] Janesick | Active repression by RARgamma signaling is required for vertebrate axial elongation[END_REF][START_REF] Kashyap | Epigenetic regulatory mechanisms distinguish retinoic acidmediated transcriptional responses in stem cells and fibroblasts[END_REF][START_REF] Kumar | Retinoic acid controls body axis extension by directly repressing Fgf8 transcription[END_REF][START_REF] Nagy | Nuclear receptor repression mediated by a complex containing SMRT, mSin3A, and histone deacetylase[END_REF]. HDAC inhibitors increase RA sensitivity by promoting dissociation of repressive complexes from RAR [START_REF] Lee | High histone acetylation and decreased polycomb repressive complex 2 member levels regulate gene specific transcriptional changes during early embryonic stem cell differentiation induced by retinoic acid[END_REF]. Indeed, RARs associate with histone acetylases (HATs) and histone deacetylases (HDACs) to modulate gene activity and dictate cell fate [START_REF] Weston | Active repression by unliganded retinoid receptors in development: less is sometimes more[END_REF].
In the repressive unliganded state, the RAR-RXR heterodimer recruits co-repressors such as histone deacetylase (HDAC) protein complexes and Polycomb repressive complex 2 (PRC2). This results in histone H3 lysine 27 trimethylation (H3K27me3), chromatin condensation and gene silencing. RA binding to RAR-RXR induces a conformational change in the heterodimer, which promotes the replacement of repressive factors by co-activators such as histone acetylase (HAT) complexes and Trithorax proteins, which mediate H3K4me3, chromatin relaxation and gene activation. These epigenetic factors thus act as mediators or partners in the action of cardiac RARs and RXRs on chromatin structure. In summary, in the presence of RA, RARs bind RA response elements (RAREs) and recruit HATs. In the absence of RA, RARs can actively repress gene transcription by recruiting HDACs that promote chromatin compaction and gene repression. Surprisingly, there are exceptions to this classical model: during neurogenesis, RARE sequences upstream of Fgf8 and Hoxb1 mediate gene repression, rather than activation, because RA binding to RAR leads to the recruitment of PRC2 and HDACs and triggers H3K27me3 [START_REF] Boudadi | The histone deacetylase inhibitor sodium valproate causes limited transcriptional change in mouse embryonic stem cells but selectively overrides Polycomb-mediated Hoxb silencing[END_REF][START_REF] Kumar | Retinoic acid controls body axis extension by directly repressing Fgf8 transcription[END_REF][START_REF] Studer | Role of a conserved retinoic acid response element in rhombomere restriction of Hoxb-1[END_REF].

Transcriptional activities of retinoic receptors in mammalian heart development

Activities during early cardiogenesis

Several members of the Hox gene family, including Hoxa1 and Hoxb1, are regulated by RAREs, as has been demonstrated in vitro and in vivo [START_REF] Dupe | In vivo functional analysis of the Hoxa-1 3′ retinoic acid response element (3′RARE)[END_REF][START_REF] Huang | A conserved retinoic acid responsive element in the murine Hoxb-1 gene is required for expression in the developing gut[END_REF][START_REF] Langston | Retinoic acid-responsive enhancers located 3′ of the Hox A and Hox B homeobox gene clusters. Functional analysis[END_REF][START_REF] Marshall | A conserved retinoic acid response element required for early expression of the homeobox gene Hoxb-1[END_REF][START_REF] Oosterveen | The direct context of a hox retinoic acid response element is crucial for its activity[END_REF]. The homeobox transcription factor gene Hoxa1 ([START_REF] Larosa | Early retinoic acid-induced F9 teratocarcinoma stem cell gene ERA-1: alternate splicing creates transcripts for a homeobox-containing protein and one lacking the homeobox[END_REF]) is a direct target of RA and possesses an enhancer containing a RARE. Consistent with such regulation, reduction or increase of RA signaling causes defects in the contribution of Hoxa1-, Hoxa3- and Hoxb1-expressing progenitor cells to the heart [START_REF] Bertrand | Hox genes define distinct progenitor sub-domains within the second heart field[END_REF].
Our study demonstrated that RA is required to activate Hoxa1 expression in the posterior second heart field, a subpopulation of cardiac progenitor cells that will later give rise to atrial and sub-pulmonary myocardium [START_REF] Bertrand | Hox genes define distinct progenitor sub-domains within the second heart field[END_REF][START_REF] Ryckebusch | Retinoic acid deficiency alters second heart field formation[END_REF]. Reduction or excess of RA signaling causes abnormalities in the cardiac contribution of Hoxa1- and Hoxb1-expressing progenitors [START_REF] Bertrand | Hox genes define distinct progenitor sub-domains within the second heart field[END_REF]. Enhancers for the Hoxa3 and Hoxb1 genes driving expression in cardiac progenitors have been identified [START_REF] Diman | A retinoic acid responsive Hoxa3 transgene expressed in embryonic pharyngeal endoderm, cardiac neural crest and a subdomain of the second heart field[END_REF][START_REF] Nolte | Shadow enhancers flanking the HoxB cluster direct dynamic Hox expression in early heart and endoderm development[END_REF]. It has been reported that enhancers for the Hoxb1 gene mediate reporter expression in the second heart field and the proepicardium. These cardiac enhancers contain RAREs and may be direct targets of RA signaling. RA signaling is maintained by an autoregulatory mechanism via Hox genes. Indeed, in the context of brain development, Raldh2 expression is under the direct transcriptional control of a HOX, PBX and MEIS complex [START_REF] Vitobello | Hox and Pbx factors control retinoic acid synthesis during hindbrain segmentation[END_REF]. HOXA1-PBX1/2-MEIS2 binds a regulatory element required to maintain normal Raldh2 expression [START_REF] Vitobello | Hox and Pbx factors control retinoic acid synthesis during hindbrain segmentation[END_REF]. The expression profile of Pbx and Meis factors overlaps that of Hox genes in the second heart field [START_REF] Chang | Pbx1 functions in distinct regulatory networks to pattern the great arteries and cardiac outflow tract[END_REF][START_REF] Stankunas | Pbx/Meis deficiencies demonstrate multigenetic origins of congenital heart disease[END_REF][START_REF] Wamstad | Dynamic and coordinated epigenetic regulation of developmental transitions in the cardiac lineage[END_REF]. Mice deficient for Pbx1, Meis1 and Hox genes have similar cardiac phenotypes [START_REF] Makki | Cardiovascular defects in a mouse model of HOXA1 syndrome[END_REF][START_REF] Paige | A temporal chromatin signature in human embryonic stem cells identifies regulators of cardiac development[END_REF] (Stankunas et al., 2008). Together, these observations suggest that PBX/MEIS and HOX proteins may cooperatively regulate Raldh2 gene expression in cardiac progenitors. RA signaling also activates another marker of cardiac progenitor cells, the T-box transcription factor Tbx5 [START_REF] Liberatore | Ventricular expression of tbx5 inhibits normal heart chamber development[END_REF][START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for heart morphogenesis in the mouse[END_REF][START_REF] Sirbu | Retinoic acid controls heart anteroposterior patterning by down-regulating Isl1 through the Fgf8 pathway[END_REF].
Tbx5 is expressed in the posterior domain of the second heart field as well as in the first heart field and is required to activate chamber-specific genes, such as atrial natriuretic factor (Nppa) and brain natriuretic factor (Nppb) [START_REF] Bruneau | A murine model of Holt-Oram syndrome defines roles of the T-box transcription factor Tbx5 in cardiogenesis and disease[END_REF][START_REF] Mori | Tbx5dependent rheostatic control of cardiac gene expression and morphogenesis[END_REF]. RA signaling specifies Tbx5-expressing cells, progenitors of the first heart field, to a venous and atrial cell fate [START_REF] Xavier-Neto | A retinoic acid-inducible transgenic marker of sino-atrial development in the mouse heart[END_REF]. DNA elements conferring tissue-specific gene expression are ideal for analysing the molecular mechanisms that underlie localized cardiac gene expression. For example, in the context of somite development, a site-directed mutagenesis study demonstrated that a RARE upstream of Fgf8 is required for RA repression of Fgf8 in transgenic mouse embryos, thereby showing that RA directly represses Fgf8 transcription in vivo [START_REF] Kumar | Retinoic acid controls body axis extension by directly repressing Fgf8 transcription[END_REF]. Studies of Raldh2 null embryos showed that RA restricts the size of the second heart field by repressing Fgf8 expression in the second heart field [START_REF] Ryckebusch | Retinoic acid deficiency alters second heart field formation[END_REF][START_REF] Sirbu | Retinoic acid controls heart anteroposterior patterning by down-regulating Isl1 through the Fgf8 pathway[END_REF], and zebrafish heart development also requires FGF8 repression by RA [START_REF] Sorrell | Restraint of Fgf8 signaling by retinoic acid signaling is required for proper heart and forelimb formation[END_REF]; however, the use of CRISPR/Cas9-mediated genomic deletion of the Fgf8 RARE showed no defect in heart development or cardiac Fgf8 expression [START_REF] Kumar | Nuclear receptor corepressors Ncor1 and Ncor2 (Smrt) are required for retinoic acid-dependent repression of Fgf8 during somitogenesis[END_REF]. Several RXR/RAR target genes have been identified, including genes within the retinoid pathway itself, such as the cardiac expressed genes Rarα [START_REF] Dolle | Retinoic acid receptors and cellular retinoid binding proteins. I. A systematic study of their differential pattern of transcription during mouse organogenesis[END_REF][START_REF] Leroy | Mouse retinoic acid receptor alpha 2 isoform is transcribed from a promoter that contains a retinoic acid response element[END_REF][START_REF] Ruberte | Retinoic acid receptors and cellular retinoid binding proteins. II. Their differential pattern of transcription during early morphogenesis in mouse embryos[END_REF], Rarβ2 (de The et al., 1990), Cyp26a1 [START_REF] Loudig | Transcriptional cooperativity between distant retinoic acid response elements in regulation of Cyp26A1 inducibility[END_REF][START_REF] Maclean | Cloning of a novel retinoic-acid metabolizing cytochrome P450, Cyp26B1, and comparative expression analysis with Cyp26A1 during early murine development[END_REF], Crbp1 [START_REF] Smith | A retinoic acid response element is present in the mouse cellular retinol binding protein I (mCRBPI) promoter[END_REF] and Crabp2 [START_REF] Durand | All-trans and 9-cis retinoic acid induction of CRABPII transcription is mediated by RAR-RXR heterodimers bound to DR1 and DR2 repeated motifs[END_REF].
RA represses the expression of the major embryonic RA-producing enzymes RDH10 and RALDH2 [START_REF] D'aniello | Depletion of retinoic acid receptors initiates a novel positive feedback mechanism that promotes teratogenic increases in retinoic acid[END_REF][START_REF] Niederreither | Restricted expression and retinoic acid-induced downregulation of the retinaldehyde dehydrogenase type 2 (RALDH-2) gene during mouse development[END_REF][START_REF] Strate | Retinol dehydrogenase 10 is a feedback regulator of retinoic acid signalling during axis formation and patterning of the central nervous system[END_REF]. Whether this feedback involves a direct transcriptional mechanism is currently unknown. Ectopic RA signaling affects outflow tract cushion development through the direct repression of a functional RARE in the promoter region of the myocardial Tbx2 gene [START_REF] Sakabe | Ectopic retinoic acid signaling affects outflow tract cushion development through suppression of the myocardial Tbx2-Tgfbeta2 pathway[END_REF]. The chicken slow MyHC3 (slow myosin heavy chain 3) promoter directs transgene expression in the cardiac venous pole at E8.5 and in the atrium at E9.5, with persistent expression at later stages. This 168 bp regulatory element contains a RARE, suggesting that atrial-specific gene expression is controlled directly by the localized synthesis of RA. An inhibitory protein complex composed of RXRα and IRX4 binds this RARE to inhibit slow MyHC3 expression in primary cultures of embryonic atrial and ventricular quail cardiomyocytes [START_REF] Wang | Irx4 forms an inhibitory complex with the vitamin D and retinoic X receptors to regulate cardiac chamber-specific slow MyHC3 expression[END_REF]. GATA factors also bind the slow MyHC3 regulatory element in vitro, suggesting a cooperative effect between RAR/RXR and GATA factors [START_REF] Wang | A positive GATA element and a negative vitamin D receptor-like element control atrial chamber-specific expression of a slow myosin heavy-chain gene during cardiac morphogenesis[END_REF]. Similarly, RA receptors regulate the chamber-specific genes Nppa (Anf, atrial natriuretic factor) and Nppb (Bnf, brain natriuretic factor) via direct interaction with Gata4 and its co-repressor Fog2 [START_REF] Clabby | Retinoid X receptor alpha represses GATA-4-mediated transcription via a retinoid-dependent interaction with the cardiac-enriched repressor FOG-2[END_REF][START_REF] Wu | 1,25(OH)2 vitamin D3, and retinoic acid antagonize endothelin-stimulated hypertrophy of neonatal rat cardiac myocytes[END_REF]. Fog factors facilitate the chromatin occupancy of Gata factors and interact with the repressive nucleosome remodeling and deacetylase (NuRD) complex [START_REF] Chlon | Combinatorial regulation of tissue specification by GATA and FOG factors[END_REF][START_REF] Stefanovic | GATA-dependent transcriptional and epigenetic control of cardiac lineage specification and differentiation[END_REF][START_REF] Vakoc | Proximity among distant regulatory elements at the beta-globin locus requires GATA-1 and FOG-1[END_REF]. Thus, RARE elements could act as a platform that recruits these cardiac co-factors and drives chamber-specific gene programs [START_REF] Prendiville | Insights into the genetic structure of congenital heart disease from human and murine studies on monogenic disorders[END_REF].
RA activities and cardiac stem cell biology

Unlike many other adult tissues, the myocardium of mammals has a limited ability to compensate for the loss of cells after cardiac damage. The ability of RA to stimulate cellular differentiation has been exploited in regenerative medicine to direct the differentiation of atrial and ventricular myocytes from human embryonic stem cells [START_REF] Devalla | Atrial-like cardiomyocytes from human pluripotent stem cells are a robust preclinical model for assessing atrial-selective pharmacology[END_REF][START_REF] Gassanov | Retinoid acid-induced effects on atrial and pacemaker cell differentiation and expression of cardiac ion channels[END_REF][START_REF] Wobus | Retinoic acid accelerates embryonic stem cellderived cardiac differentiation and enhances development of ventricular cardiomyocytes[END_REF][START_REF] Zhang | Direct differentiation of atrial and ventricular myocytes from human embryonic stem cells by alternating retinoid signals[END_REF]. Treatment of differentiating embryonic stem cells with RA promotes atrial specification [START_REF] Devalla | Atrial-like cardiomyocytes from human pluripotent stem cells are a robust preclinical model for assessing atrial-selective pharmacology[END_REF]. Several studies have indicated the involvement of COUP-TFs in RA signaling [START_REF] Jonk | Isolation and developmental expression of retinoic-acid-induced genes[END_REF][START_REF] Van Der Wees | Developmental expression and differential regulation by retinoic acid of Xenopus COUP-TF-A and COUP-TF-B[END_REF]. Atrial identity is determined by a COUP-TFII regulatory network [START_REF] Pereira | The orphan nuclear receptor COUP-TFII is required for angiogenesis and heart development[END_REF][START_REF] Wu | Atrial identity is determined by a COUP-TfiI regulatory network[END_REF]. Furthermore, COUP-TFI and COUP-TFII are upregulated in differentiated cardiomyocytes in response to RA [START_REF] Devalla | Atrial-like cardiomyocytes from human pluripotent stem cells are a robust preclinical model for assessing atrial-selective pharmacology[END_REF], indicating that a RA-COUP-TF network module participates in the normal control of atrial specification. COUP-TFII is also present in a complex with the HAT p300 and RXR/RAR at RAREs, and enhances the actions of RA on target genes during development [START_REF] Vilhais-Neto | Rere controls retinoic acid signalling and somite bilateral symmetry[END_REF]. COUP-TFI and COUP-TFII receptors bind DR elements used by RARs [START_REF] Kliewer | Retinoid X receptor-COUP-TF interactions modulate retinoic acid signaling[END_REF]. Further studies will reveal whether COUP-TFII is also an RXR/RAR co-factor or a direct target of RA signaling during cardiogenesis. In cell culture, RA can also promote epicardial lineage specification. The combined action of RA, BMP and WNT signaling is required to specify an epicardium-like lineage from human embryonic stem cells under chemically defined conditions [START_REF] Iyer | Robust derivation of epicardium and its differentiated smooth muscle cell progeny from human pluripotent stem cells[END_REF]. Varying concentrations of RA may be responsible for generating these different cardiac cell subtypes in vitro. Cardiac-specific differentiation is certainly also influenced by the timing and combination of the morphogens used.
During embryonic stem cell differentiation, RA treatment does not affect Hcn4 expression [START_REF] Wobus | Retinoic acid accelerates embryonic stem cellderived cardiac differentiation and enhances development of ventricular cardiomyocytes[END_REF], a major gene of the cardiac conduction tissue [START_REF] Liang | Insights into cardiac conduction system formation provided by HCN4 expression[END_REF], indicating that RA signaling may not be implicated in the differentiation of pacemaker-like cells. In line with this, there is currently interest in reprogramming cells to pacemaker cells by transducing transcription factor genes [START_REF] Boink | The past, present, and future of pacemaker therapies[END_REF]. Through such a reprogramming attempt, it has been shown that the use of RARγ or RXRα together with the cardiac transcription factors Gata6 and Tbx3 does not generate cells with spontaneous beating activity, a key feature of pacemaker cells [START_REF] Nam | Induction of diverse cardiac cell types by reprogramming fibroblasts with cardiac transcription factors[END_REF]. It also remains to be determined whether manipulating the RARs-RXRs-HAT/RARs-RXRs-HDAC complexes could target many silent cardiac-specific sites, open the chromatin for active transcription and enhance reprogramming toward human atrial cells.

Activities during late stages of cardiogenesis

RA is also involved in processes taking place during late cardiac development (reviewed in Xavier-Neto et al., 2015). RA signaling acts on neural crest cell orientation and positioning, on myocardial specification, and on the endothelial-to-mesenchymal transition of endocardial cells to allow proper endocardial cushion fusion and complete outflow tract septation [START_REF] El Robrini | Cardiac outflow morphogenesis depends on effects of retinoic acid signaling on multiple cell lineages[END_REF][START_REF] Niederreither | Embryonic retinoic acid synthesis is essential for heart morphogenesis in the mouse[END_REF]. RA signaling is involved in the formation of the epicardium [START_REF] Braitsch | Pod1/Tcf21 is regulated by retinoic acid signaling and inhibits differentiation of epicardium-derived cells into smooth muscle in the developing heart[END_REF][START_REF] Moss | Dynamic patterns of retinoic acid synthesis and response in the developing mammalian heart[END_REF][START_REF] Von Gise | WT1 regulates epicardial epithelial to mesenchymal transition through beta-catenin and retinoic acid signaling pathways[END_REF]. Indeed, both Raldh2 and the RARE-hsp68-lacZ transgene are expressed in the epicardium from stage E11.5 [START_REF] Moss | Dynamic patterns of retinoic acid synthesis and response in the developing mammalian heart[END_REF][START_REF] Xavier-Neto | Sequential programs of retinoic acid synthesis in the myocardial and epicardial layers of the developing avian heart[END_REF]. The heart phenotypes of Raldh2- and RXRα-deficient embryos are very similar and are characterized by a severe hypoplasia of the ventricular myocardium, a phenotype mimicking that of other mutants with defective epicardial function [START_REF] Brade | Retinoic acid stimulates myocardial expansion by induction of hepatic erythropoietin which activates epicardial Igf2[END_REF][START_REF] Merki | Epicardial retinoid X receptor alpha is required for myocardial growth and coronary artery formation[END_REF].
Raldh2 is a direct target of Wt1 in epicardial cells [START_REF] Guadix | Wt1 controls retinoic acid signalling in embryonic epicardium through transcriptional activation of Raldh2[END_REF]. Wt1 regulates the epicardial epithelial-to-mesenchymal transition through β-catenin and RA signaling pathways [START_REF] Von Gise | WT1 regulates epicardial epithelial to mesenchymal transition through beta-catenin and retinoic acid signaling pathways[END_REF]. The transcription factor Tcf21 is regulated by RA signaling and inhibits differentiation of epicardium-derived cells into smooth muscle in the developing heart [START_REF] Braitsch | Pod1/Tcf21 is regulated by retinoic acid signaling and inhibits differentiation of epicardium-derived cells into smooth muscle in the developing heart[END_REF]. It was found that epicardium-derived cells that maintain the expression of Wt1 and Raldh2 initially populate the subepicardial space and subsequently invade the ventricular myocardium. As epicardium-derived cells differentiate into the smooth muscle and endothelial cell lineages of the coronary vessels, the expression of Wt1 and Raldh2 becomes downregulated [START_REF] Perez-Pomares | Experimental studies on the spatiotemporal expression of WT1 and RALDH2 in the embryonic avian heart: a model for the regulation of myocardial and valvuloseptal development by epicardially derived cells (EPDCs)[END_REF]. RA stimulates myocardial expansion by induction of hepatic erythropoietin, which activates epicardial Igf2 [START_REF] Brade | Retinoic acid stimulates myocardial expansion by induction of hepatic erythropoietin which activates epicardial Igf2[END_REF]. Erythropoietin and RA, secreted from the epicardium, are required for cardiac myocyte proliferation [START_REF] Stuckmann | Erythropoietin and retinoic acid, secreted from the epicardium, are required for cardiac myocyte proliferation[END_REF]. The RA pathway regulates myocardial growth signals such as Pi3k/ERK, Fgf2-9 [START_REF] Kang | Convergent proliferative response and divergent morphogenic pathways induced by epicardial and endocardial signaling in fetal heart development[END_REF][START_REF] Lin | Endogenous retinoic acid regulates cardiac progenitor differentiation[END_REF][START_REF] Merki | Epicardial retinoid X receptor alpha is required for myocardial growth and coronary artery formation[END_REF] and Wnts [START_REF] Merki | Epicardial retinoid X receptor alpha is required for myocardial growth and coronary artery formation[END_REF]. RA and VEGF delay smooth muscle differentiation relative to endothelial differentiation to coordinate inner and outer coronary vessel wall morphogenesis [START_REF] Azambuja | Retinoic acid and VEGF delay smooth muscle relative to endothelial differentiation to coordinate inner and outer coronary vessel wall morphogenesis[END_REF]. RA deficiency reduces the expression of Sonic Hedgehog targets and of factors required in the coronary vasculature [START_REF] Lavine | Endocardial and epicardial derived FGF signals regulate myocardial proliferation and differentiation in vivo[END_REF]. Wt1 and RA signaling in the subcoelomic mesenchyme control the development of the pleuropericardial membranes and the sinus horns [START_REF] Norden | Wt1 and retinoic acid signaling in the subcoelomic mesenchyme control the development of the pleuropericardial membranes and the sinus horns[END_REF]. Finally, RA signaling is activated in the postischemic heart, suggesting that it may play a role in the regulation of damage and repair during remodeling
([START_REF] Bilbija | Retinoic acid signalling is activated in the postischemic heart and may influence remodelling[END_REF]; Tables 1 and 2).

Future directions

Although there has been progress in characterizing the functions of RA signaling, many gaps remain with respect to the underlying mechanisms of RA-mediated gene regulation. Interpretation of experimental data is complicated by the fact that exposure to RA (in cultured cells, whole embryos or explants) may have different, sometimes opposite, effects depending on the concentration, stage or duration of exposure. Strategies that interfere with endogenous retinoid signaling through genetic loss-of-function appear more reliable than approaches using exogenous retinoids, including RAR/RXR antagonists that may lead to forced repression of target gene loci. Given the ability of RA to signal across cells, identifying the site of action of RA receptors remains difficult. Recent studies have led to novel insights into the interplay between retinoid receptors and other transcription factors in several developing systems. Our knowledge of the relationship between RA signaling and other signaling pathways also remains rudimentary. In the context of heart development, our understanding of the transcriptional targets of RA signaling is likewise limited. As previously mentioned, several cardiac genes have been identified as regulatory targets of RA. In a few cases this regulation is direct, driven by a heterodimer of retinoid receptors bound to a DNA response element; in others, it has either not been investigated in depth or it is indirect, reflecting the actions of intermediate factors. Our understanding of the role of retinoids will be enhanced if such a distinction can be made for each regulated target gene.

Table 2. Some known co-factors and associated proteins that regulate RAR/RXR function.
Protein | Function | References
NcoA-1 | Histone acetylation | [START_REF] Kashyap | Epigenetic regulatory mechanisms distinguish retinoic acidmediated transcriptional responses in stem cells and fibroblasts[END_REF][START_REF] Kumar | Nuclear receptor corepressors Ncor1 and Ncor2 (Smrt) are required for retinoic acid-dependent repression of Fgf8 during somitogenesis[END_REF]
HAT p300 | Histone acetylation | [START_REF] Gillespie | Retinoid regulated association of transcriptional co-regulators and the polycomb group protein SUZ12 with the retinoic acid response elements of Hoxa1, RARbeta(2), and Cyp26A1 in F9 embryonal carcinoma cells[END_REF][START_REF] Kashyap | Epigenetic regulatory mechanisms distinguish retinoic acidmediated transcriptional responses in stem cells and fibroblasts[END_REF][START_REF] Vilhais-Neto | Rere controls retinoic acid signalling and somite bilateral symmetry[END_REF]
FOG2 | Transcription factor | [START_REF] Clabby | Retinoid X receptor alpha represses GATA-4-mediated transcription via a retinoid-dependent interaction with the cardiac-enriched repressor FOG-2[END_REF]
Baf60a/c | Recruits SWI/SNF complex | [START_REF] Chiba | Two human homologues of Saccharomyces cerevisiae SWI2/SNF2 and Drosophila brahma are transcriptional coactivators cooperating with the estrogen receptor and the retinoic acid receptor[END_REF][START_REF] Flajollet | The core component of the mammalian SWI/SNF complex SMARCD3/BAF60c is a coactivator for the nuclear retinoic acid receptor[END_REF]
GATA4 | Transcription factor | [START_REF] Clabby | Retinoid X receptor alpha represses GATA-4-mediated transcription via a retinoid-dependent interaction with the cardiac-enriched repressor FOG-2[END_REF]
Nkx2.5 | Transcription factor | [START_REF] Waardenberg | Prediction and validation of protein-protein interactors from genome-wide DNA-binding data using a knowledge-based machine-learning approach[END_REF]
COUP-TFI/II | Transcription factor | [START_REF] Vilhais-Neto | Rere controls retinoic acid signalling and somite bilateral symmetry[END_REF]
HDACs | Histone deacetylation | [START_REF] Kashyap | Epigenetic regulatory mechanisms distinguish retinoic acidmediated transcriptional responses in stem cells and fibroblasts[END_REF]

There is no doubt that emerging molecular technologies will help in understanding the function of retinoic receptors in cardiac lineage specification. Studies using ChIP-chip and ChIP-seq against RARs and RXRs are available for cultured cell lines and adult tissues [START_REF] Boergesen | Genome-wide profiling of liver X receptor, retinoid X receptor, and peroxisome proliferator-activated receptor alpha in mouse liver reveals extensive sharing of binding sites[END_REF][START_REF] Chatagnon | RAR/RXR binding dynamics distinguish pluripotency from differentiation associated cis-regulatory elements[END_REF][START_REF] He | The role of retinoic acid in hepatic lipid homeostasis defined by genomic binding and transcriptome profiling[END_REF][START_REF] Lalevee | Genome-wide in silico identification of new conserved and functional retinoic acid receptor response elements (direct repeats separated by 5 bp)[END_REF][START_REF] Mendoza-Parra | Dissecting the retinoid-induced differentiation of F9 embryonal stem cells by integrative genomics[END_REF][START_REF] Moutier | Retinoic acid receptors recognize the mouse genome through binding elements with diverse spacing and topology[END_REF].
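One elementary operation behind the data integration discussed in the next paragraph is intersecting ChIP peak intervals with motif hits. The sketch below is illustrative only: the coordinates are made up, and `motif_hits` is assumed to come from a motif scanner such as the `find_direct_repeats()` example shown earlier (a DR5 element spans 17 bp).

```python
def peaks_with_motif(peaks, motif_hits, motif_len=17):
    """Keep ChIP peaks (start, end) that fully contain at least one motif hit.
    `motif_hits` are motif start coordinates on the same reference."""
    kept = []
    for start, end in peaks:
        if any(start <= pos and pos + motif_len <= end for pos in motif_hits):
            kept.append((start, end))
    return kept

# Made-up coordinates: two peaks, one of which contains a DR5 hit at 120
print(peaks_with_motif(peaks=[(100, 180), (300, 360)], motif_hits=[120, 400]))
```

Real pipelines would of course operate per chromosome and use interval trees or sorted sweeps for efficiency, but the logic of the intersection is the same.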
The exploitation of ChIP-seq technologies in embryonic tissue will enhance the ability to distinguish direct and indirect regulation of cardiac gene expression. For most targets, RA receptors will be present on cardiac regulatory regions regardless of whether the associated gene is transcriptionally active or not, and thus interpreting the information will require additional epigenetic data at those sites to determine the likely transcriptional status associated with specific RAREs. The ENCODE project has provided access to valuable data on genome-wide chromatin occupancy of transcription factors, chromatin modifying and remodeling enzymes and histone modifications in heart tissues [START_REF] Bernstein | An integrated encyclopedia of DNA elements in the human genome[END_REF]. Retinoic receptors can mediate looping of distant DNA sequences, enabling transcriptional regulation by far-upstream enhancers [START_REF] Yasmin | DNAlooping by RXR tetramers permits transcriptional regulation "at a distance[END_REF]. Whether this is the case in the context of heart development is unknown. Chromosome conformation capture technologies (e.g. 3C-5C, Hi-C) were developed to identify long-range chromatin interactions [START_REF] De Laat | Topology of mammalian developmental enhancers and their regulatory landscapes[END_REF]. Merging these data sets can further facilitate the identification of RA regulatory elements. The functional importance of RAREs can be assessed in vivo using recent genome editing technologies [START_REF] Harrison | A CRISPR view of development[END_REF]. Another technical issue is obtaining starting material from small, localized populations of cardiac progenitor cells. This represents a significant challenge but will be necessary to determine the specificity of the cardiac gene programs. Overcoming these technical challenges will provide important new data for our understanding of RA signaling and its role in cardiac development. Addressing this question is critical for understanding the origin of congenital heart defects. Finally, defining how RA signaling and its interacting factors act to enable epigenetic regulatory events will provide insight into the biology of cardiac progenitor cells, leading to methods for increasing the efficiency of directed differentiation of pluripotent cells and of cellular reprogramming into cardiac subtypes.

Fig. 1. Heart fields and their contributions to the developing heart. (A) The second heart field (light grey) is located dorsally to the forming heart derived from the first heart field (dark grey). The second heart field is added at the venous and arterial poles of the definitive heart. (B) Ballooning model of cardiac chamber formation. The early heart tube has an embryonic phenotype (dark grey). Chamber myocardium (light grey) expands from the outer curvature, whereas non-chamber myocardium (grey) of the inflow tract, atrioventricular canal, outflow tract and inner curvature does not expand. a, atrium; ift, inflow tract; la, left atrium; lv, left ventricle; ra, right atrium; rv, right ventricle; sv, sinus venosus.

Fig. 2. Metabolism of vitamin A. Retinyl esters, retinol, and β-carotene are taken into the body from the diet. Both retinol and β-carotene may be converted into the transcriptionally active vitamin A forms after first being converted to retinaldehyde. RA then regulates transcription of vitamin A-responsive genes. When RA is no longer needed, it is catabolized by cytochrome P450 enzymes (CYP26 enzymes).
Fig. 3. Retinoic acid is required to define the posterior limits of the second heart field. Fgf10 is a molecular marker of the murine SHF. The Mlc1v-nlacZ-24 reporter line, in which a lacZ transgene has been integrated upstream of the Fgf10 gene, labels the SHF. Ventral views of wild-type (A) or Raldh2-/- (B) embryos at embryonic day 8.5 showing posterior expansion of Fgf10-lacZ transgene expression (arrowhead). The heart has been removed to allow observation of the X-gal staining; compare Raldh2-/- (B) with WT (A) embryos.

Table 1. Retinoid-responsive cardiac genes.

Gene | Source | Response to RA | Mode | References
MHCα | Neonatal rat cardiomyocytes | Activation | | (Rohrer et al., 1991)
Cardiac α-actin | Chicken cardiac mesoderm | Repression | | (Wiens et al., 1992)
MHC1a | Treated chicken embryo | Activation | | (Yutzey et al., 1994)
MLC2a | Treated mouse embryos | Repression | | (Dyson et al., 1995)
α-Actinin | Treated chicken embryos | Repression | | (Dickman and Smith, 1996)
SERCA | Neonatal rat cardiomyocytes | Activation | | (Rohrer et al., 1991)
Na/K/ATP1A3 subunit | Neonatal rat cardiomyocytes | Repression | | (He et al., 1996)
Large chloride conducting channel | Sheep ventricular cells | Activation | | (Rousseau et al., 1996)
G protein-coupled endothelin signaling | Neonatal rat cardiomyocytes | Repression | | (Zhou et al., 1995)
G protein-coupled α-adrenergic signals | Neonatal rat cardiomyocytes | Repression | | (Zhou et al., 1995)
ANF | Neonatal rat cardiomyocytes | Activation | Direct | (Clabby et al., 2003; Wu et al., 1996; Zhou et al., 1995)
GLUT4 | Adult mouse heart | Activation | | (Castello et al., 1994)
GATA4 | Cell culture | Activation | | (Arceci et al., 1993; Clabby et al., 2003)
FGF8 | Raldh2-/- embryos | Repression | | (Ryckebusch et al., 2008; Sirbu et al., 2008)
ISL1 | Raldh2-/- embryos | Repression | | (Ryckebusch et al., 2008; Sirbu et al., 2008)
TBX2 | Treated mouse embryos, C2C12 | Repression | Direct | (Sakabe et al., 2012)
TBX5 | Raldh2-/-, chicken embryos | Activation | | (Liberatore et al., 2000; Niederreither et al., 2001; Sirbu et al., 2008)
RARα | P19 cells | Activation | Direct | (Ruberte et al., 1991)
RARβ2 | Treated chicken embryos | Activation | | (Kostetskii et al., 1998)
Cyp26a1 | P19 cells | Activation | | (MacLean et al., 2001)
Raldh2 | Treated mouse embryos | Activation | | (Niederreither et al., 1997)
Beta-integrin | Treated mouse embryos | Activation | | (Hierck et al., 1996)
Flectin | Treated chicken embryos | Activation | | (Tsuda et al., 1996)
Heart lectin-associated matrix protein | Treated chicken embryos | Activation | | (Smith et al., 1997)
JB4/fibrillin-related protein | Treated chicken embryos | Repression | | (Smith et al., 1997)
EPO3-IGF2 | Raldh2-/- embryos | Activation | Direct | (Brade et al., 2011)
TGF2 | Treated mouse embryos | Activation | | (Mahmood et al., 1992)
Tcf21 | Treated epicardium-derived cells | Activation | | (Braitsch et al., 2012)

Acknowledgments

We thank Dr. R. Kelly, who critically read this manuscript and offered valuable suggestions. S. Stefanovic is supported by post-doctoral awards from l'Institut de France Lefoulon-Delalande and H2020-MSCA-IF-2014. S. Zaffran is an INSERM research fellow. Work in S. Zaffran's laboratory is supported by the INSERM, the Agence Nationale pour la Recherche (ANR-13-BSV2-0003-01) and the Association Française contre les Myopathies (AFM-Telethon).
Antonio Ventosa Cutillas, Carolina Albea, Alexandre Seuret, Francesco Gordillo

Antonio Ventosa Cutillas and Francisco Gordillo are with the University of Seville, Camino de los Descubrimientos s/n, 41092 Sevilla, Spain (aventosa, gordillo@us.es). C. Albea and A. Seuret are with LAAS-CNRS, Univ. de Toulouse, UPS, LAAS, 7 avenue du colonel Roche, F-31400 Toulouse, France (calbea, aseuret@laas.fr).

Relaxed periodic switching controllers of high-frequency DC-DC converters using the δ-operator formulation

This paper deals with the design of new periodic switching control laws for high-frequency DC-DC converters. The contributions are twofold. On the one hand, the DC-DC converter models are rewritten as periodic switched affine systems thanks to a δ-operator formulation, which represents an efficient framework for numerical discretization at high frequencies. On the other hand, three different control laws are provided, the first one being the usual Lyapunov-based control law and the other two being relaxed versions of this first solution. The benefits of these two new control laws over the usual Lyapunov-based one are demonstrated on a simple example. More particularly, it is shown that the selection of the sampling period and of the control law strongly influences the size of the region of attraction.

I. Introduction

Nowadays, there is considerable interest in DC-DC converters due to their numerous applications in industry, for example in computer power supplies, cell phones, appliances, automotive, aircraft, etc. These systems can be modeled as switched affine systems (SASs), which represent a particular nonlinear class of switched systems. They correspond to a class of hybrid dynamical systems consisting of several operating modes represented by continuous-time subsystems and a rule that selects between these modes [START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF]. Compared to the linear case, the affine structure of these systems imposes a set of operating points defined for an averaged dynamics, leading to solutions in the generalized sense of Krasovskii. Many works found in the literature control SASs in continuous time by a min-projection strategy [START_REF] Albea | Hybrid dynamic modeling and control of switched affine systems: application to DC-DC converters[END_REF], [START_REF] Deaecto | Switched affine systems control design with application to DC-DC converters[END_REF], [START_REF] Pettersson | Stabilization of hybrid systems using a min-projection strategy[END_REF], even for systems in a general nonlinear form [START_REF] Liu | On the (h0, h)-stabilization of switched nonlinear systems via state-dependent switching rule[END_REF], [START_REF] Lu | A piecewise smooth control-lyapunov function framework for switching stabilization[END_REF]. The controllers provided in these works perform well, but may lead to arbitrarily fast switching. Some solutions to this problem can be found in the literature, as in [START_REF] Buisson | On the stabilisation of switching electrical power converters[END_REF], [START_REF] Senesky | Hybrid modelling and control of power electronics[END_REF], [START_REF] Theunisse | Robust global stabilization of the dc-dc boost converter via hybrid control[END_REF], where the authors aim at ensuring a dwell-time associated with an admissible chattering around the operating point. Nevertheless, [START_REF] Buisson | On the stabilisation of switching electrical power converters[END_REF] does not prove a minimum time associated with the spatial regularization. In [START_REF] Theunisse | Robust global stabilization of the dc-dc boost converter via hybrid control[END_REF], a focus on a specific electronic architecture related to boost converters is proposed.
In addition, the contributions of [START_REF] Senesky | Hybrid modelling and control of power electronics[END_REF] do not provide a complete stability proof. On the other hand, in [START_REF] Colaneri | Stabilization of continuous-time switched nonlinear systems[END_REF], the authors present an open-loop stabilization strategy based on dwell-time computation, [START_REF] Albea | Practical stabilisation of switched an systems with dwell-time guarantees[END_REF] proposes a minimum dwell-time with a space and time regularization, and [START_REF] Hauroigne | Switched affine systems using sampled-data controllers: Robust and guaranteed stabilisation[END_REF] guarantees a minimum and maximum dwell-time by solving optimization problems. These solutions share a common characteristic: the systems are controlled by aperiodic switching. On many occasions, however, it is necessary to control this class of systems with periodic switching, due to physical constraints. In order to deal with this issue, a solution consisting of the discretization of the continuous-time model with a fixed periodic sampling time was provided in [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF], [START_REF] Hetel | Robust sampled-data control of switched affine systems[END_REF]. The authors of [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF] present a controller based on a Lyapunov function synthesized by solving an optimization problem, whose objective is to minimize the area around the equilibrium where the solutions converge. On the other hand, in [START_REF] Hetel | Robust sampled-data control of switched affine systems[END_REF], the authors design a sampled-data switching control with an upper bound designed to ensure robustness of the continuous-time system. Through Linear Matrix Inequality (LMI) based conditions, the upper bound on the length of the inter-sampling interval can be directly related to the size of the asymptotic stability set around the considered equilibrium. Practical stability is obtained using a Lyapunov-Krasovskii functional and Jensen's inequality. Neither solution considers high-frequency sampling; moreover, both are conservative because the controllers are based on a Lyapunov function. In some applications, when discretizing these systems with a very short sampling time, numerical issues can make it difficult to assess stability. Several solutions have been considered in the automatic control literature. Among them, the δ-operator was introduced in [START_REF] Middleton | Improved Finite Word Length Characteristics in Digital Control Using Delta Operators[END_REF]. It consists of a discretization method that remains sufficiently close to the continuous-time model, ensuring continuity of the conditions for high-frequency sampling. In [START_REF] Eidson | Proc. IEEE South East conference[END_REF], a comparison is made between the q-operator and the δ-operator for performing this discretization.
There, it can be observed that the δ-operator offers the advantage of converging naturally to the continuous-time model while avoiding numerical problems when the sampling period is very short. In [START_REF] Viji | Improved Delta Operator based Discrete Sliding Mode Fuzzy Controller for Buck Converter[END_REF], the δ-operator is used to design a sliding-mode fuzzy controller for a DC-DC buck converter, precisely because very fast sampling is needed. Therefore, the δ-operator presents a great advantage in the design of controllers with very fast sampling times, like the one presented in this paper. In this paper, we model DC-DC converters in discrete time using the δ-operator and we control the system with the well-known min-projection strategy, based on a Lyapunov function. Then, we propose a relaxed periodic-sampling controller for these systems, allowing less conservative results to be obtained, even for high-frequency systems. Moreover, this approach presents a tradeoff between the sampling period and the size of the chattering effects. Simulations in Matlab validate our contribution. The paper is organized as follows: the problem formulation is stated in Section II. Then, a classical controller is presented in Section III. From this, Section IV proposes some relaxed controllers. An optimization of the controllers is given in Section V. Section VI illustrates the potential of this method on a particular DC-DC converter. The paper ends with a conclusion section.

Notation: Throughout the paper, $\mathbb{N}$ and $\mathbb{R}$ denote the sets of natural and real numbers, respectively, $\mathbb{R}^n$ the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times m}$ the set of all real $n\times m$ matrices. The set composed of the first $N$ positive integers, namely $\{1, 2, \ldots, N\}$, is denoted by $\mathcal{K}_N$. $I$ is the identity matrix of suitable dimension. The Euclidean norm of a vector $x \in \mathbb{R}^n$ is denoted by $|x|$. For any symmetric matrix $M$ of $\mathbb{R}^{n\times n}$, the notation $M \succ 0$ ($M \prec 0$) means that the eigenvalues of $M$ are strictly positive (negative).

II. Problem formulation

A. System data

Inspired by the work in [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF], we focus on the following class of switched affine systems, which is relevant in the context of DC-DC converters:
$$\dot z = A_\sigma z + a_\sigma, \qquad (1)$$
where $z \in \mathbb{R}^n$ is the state, assumed to be accessible, and $A_\sigma$ and $a_\sigma$ have suitable dimensions. The control action is performed through the high-frequency switching signal $\sigma \in \mathcal{K}_N := \{1, 2, \ldots, N\}$, which may only be modified at the sampling instants $t_k$, with $k \in \mathbb{N}$. In this paper, the length of the sampling interval $t_{k+1} - t_k = T$ is assumed to be constant, known and small enough. This paper focuses on the design of a feedback law for the high-frequency periodic switching signal $\sigma$, in such a way as to ensure suitable practical convergence properties of the plant state $z$ to an operating equilibrium $z_e$, which is not necessarily an equilibrium of the continuous-time dynamics in (1), but can be obtained as an equilibrium of the switching system under arbitrary switching. A necessary and sufficient condition characterizing this equilibrium is then given by the following standard assumption (see [START_REF] Deaecto | Switched affine systems control design with application to DC-DC converters[END_REF], [START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF]).
Assumption 1: There exists λ = [λ_1, λ_2, ..., λ_N], with λ_i ≥ 0 and Σ_{i in K_N} λ_i = 1, such that the following convex combination holds:

Σ_{i in K_N} λ_i (A_i z_e + a_i) = A(λ) z_e + a(λ) = 0. (2)

Remark 1: It is emphasized that Assumption 1 is both necessary and sufficient for the existence of a suitable switching signal ensuring forward invariance of the point z_e (namely, inducing an equilibrium at z_e) when solutions are understood in the generalized sense of Krasovskii or Filippov. Indeed, under (2), we can write the error equation of (1):

ẋ = A_σ x + B_σ, (3)

where the error vector is denoted by x := z - z_e and where the matrices B_σ are defined by B_σ := A_σ z_e + a_σ and verify the convex combination Σ_{i in K_N} λ_i B_i = 0. The objective is to ensure that the error state x converges to the equilibrium x = 0 in the Filippov sense. In addition, the following property is assumed.

Assumption 2: The matrices A_i, for i in K_N, are nonsingular and A(λ) is Hurwitz.

B. The δ-operator for high-frequency switching

In this paper, we propose a discrete-time model based on the δ-operator [START_REF] Middleton | Improved Finite Word Length Characteristics in Digital Control Using Delta Operators[END_REF], which is suitable for high switching frequencies. The δ-operator has been widely used in the literature to avoid numerical problems when computing discrete-time dynamics from the continuous-time ones in the situation where the sampling period T is potentially very small. The δ-operator is defined as follows. For any function ξ from R+ to R^n, the vector δξ_k at any sampling instant t_k in R+ is defined by

δξ_k := (1/T)(ξ_{k+1} - ξ_k), for all k ≥ 0,

where we use the convention ξ_k = ξ(t_k) and δξ_k = δξ(t_k) for every integer k ≥ 0. Hence the error dynamics (3) can be rewritten in the framework of the δ-operator, which yields

δx_k = E_σ x_k + F_σ, (4)

where the matrices defining the system dynamics are given by

E_σ = (1/T)(e^{A_σ T} - I),  F_σ = (1/T) ∫_0^T e^{A_σ (T-s)} ds · B_σ. (5)

The interest of this formulation compared to the usual discrete-time formulation comes from the fact that, as T goes to zero, the matrices E_σ and F_σ converge to A_σ and B_σ, respectively. Another important point is that E_σ and F_σ depend explicitly on the switching period T. Indeed, considering small values of T may lead to several numerical problems when discretizing (3) in the usual way.

Remark 2: Note that if the matrix A_σ is nonsingular, then a simple expression of F_σ is given by

F_σ = (1/T)(e^{A_σ T} - I) A_σ^{-1} B_σ.

It is worth noting that model (4) does not account for the continuous evolution of (1) during the inter-sampling time. It is however possible to characterize the continuous solution by integrating over a sampling interval, leading, for all t in [t_k, t_k + T] and all k in N, to

x(t) = e^{A_σ (t - t_k)} x(t_k) + ∫_{t_k}^{t} e^{A_σ (t - τ)} dτ · B_σ.

Since t belongs to the bounded interval [t_k, t_k + T], the solutions of the system are obviously bounded during the inter-sampling time.

C. Control objectives

When considering such switched affine systems, asymptotic stability of the origin is in general not possible. Therefore one has to relax the control objectives and consider attractor sets which are not necessarily reduced to the equilibrium. In this paper, we consider an estimate of the attractive set of the following quadratic form:

E := {x in R^n : xᵀ S x ≤ 1}, (6)
with S being a symmetric positive definite matrix to be optimized. This formulation is quite usual and has been used in other contexts, as in [START_REF] Albea | Practical stabilisation of switched an systems with dwell-time guarantees[END_REF], [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF], [START_REF] Hetel | Robust sampled-data control of switched affine systems[END_REF]. The problem can be summarized as follows.

Problem 1: For any small sampling period T, find a switching control law that selects, at each sampling time, the mode (subsystem) among all possibilities so as to stabilize system (1) at its operating equilibrium with certain performance guarantees.

III. Lyapunov-based switching control

Looking at the literature on switched affine systems, one finds the well-known min-projection control law for this class of systems [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF], [START_REF] Hetel | Robust sampled-data control of switched affine systems[END_REF], [START_REF] Pettersson | Stabilization of hybrid systems using a min-projection strategy[END_REF]. The underlying idea of this control law is to select the mode of the system which minimizes the increment (i.e., maximizes the decrease) of a quadratic Lyapunov function given by

V(x) := xᵀ P x, for all x in R^n, (7)

where P ≻ 0 is a positive definite matrix of R^{n x n}. This idea is formalized in the following theorem.

Theorem 1: Consider Assumptions 1 and 2 and matrices P ≻ 0 and S ≻ 0 of suitable dimensions that are solutions to the feasibility problem

Γ⁽¹⁾_ij ≺ 0, for all (i, j) in K_N², (8)

where

Γ⁽¹⁾_ij = Ψ_i(P) + μ_i [ S  0 ; 0  -1 ] + γ_i [Ψ_j(P) - Ψ_i(P)],

Ψ_i(P) = [ P E_i + E_iᵀ P + T E_iᵀ P E_i ,  P F_i + T E_iᵀ P F_i ; F_iᵀ P + T F_iᵀ P E_i ,  T F_iᵀ P F_i ], (9)

for some given parameters γ_i > 0 and μ_i > 0. Then the switching control law (C1) defined by

(C1)  σ(x_k) = argmin_{i in K_N} [x_k ; 1]ᵀ Ψ_i(P) [x_k ; 1] (10)

guarantees δV < 0 outside of the set E.

Proof: Consider the Lyapunov function given in (7), where P ≻ 0 in R^{n x n}. Let us first compute the expression of δV_k as follows:

δV(x_k) = (1/T)(V(x_{k+1}) - V(x_k))
        = (1/T)((x_k + T δx_k)ᵀ P (x_k + T δx_k) - x_kᵀ P x_k)
        = 2 δx_kᵀ P x_k + T δx_kᵀ P δx_k. (11)

Replacing δx_k by its expression given in (4), and using the definition of the matrix Ψ_i(P) provided in (9), yields

δV(x_k) = [x_k ; 1]ᵀ Ψ_σ(P) [x_k ; 1].
Using (9), the previous expression can be rewritten as follows, for any j in K_N:

δV(x_k) = [x_k ; 1]ᵀ ( Γ⁽¹⁾_σj - μ_σ [ S  0 ; 0  -1 ] - γ_σ [Ψ_j(P) - Ψ_σ(P)] ) [x_k ; 1].

Since the matrix inequalities Γ⁽¹⁾_ij ≺ 0 hold for any pair (i, j) in K_N², we have

δV(x_k) < μ_σ (1 - x_kᵀ S x_k) - γ_σ [x_k ; 1]ᵀ [Ψ_j(P) - Ψ_σ(P)] [x_k ; 1].

Note that the switching control law (10) ensures that the last term of the right-hand side of the previous inequality is non-positive (σ minimizes the quadratic form, so Ψ_j(P) - Ψ_σ(P) yields a non-negative value for every j), which guarantees that

δV(x_k) < μ_σ (1 - x_kᵀ S x_k).

This inequality finally guarantees that, for any x_k outside of E (i.e., 1 - x_kᵀ S x_k < 0), the quantity δV(x_k) is strictly negative, which concludes the proof.

IV. Relaxed switching controller

In the previous section, a Lyapunov-based switching control was presented. This control is clearly inspired by the existing literature on switched affine systems, such as [START_REF] Deaecto | Discrete-time switched linear systems state feedback design with application to networked control[END_REF], [START_REF] Hetel | Robust sampled-data control of switched affine systems[END_REF], but adapted to the δ-operator modelling of DC-DC converters. The motivation of this section is to present a relaxed version of the previous control law. The relaxation considered here stems from the fact that the Lyapunov-based law might be too restrictive, in the sense that the Lyapunov matrix P is selected to serve two distinct purposes at once, namely the definition of the Lyapunov function and the construction of the switching signal. Based on this comment, a relaxed control law can be obtained by decoupling the problem of selecting a Lyapunov function from that of designing the switching law. The relaxed control law keeps the same structure as the control law presented in (10); however, instead of using the Lyapunov matrix P, a new unconstrained matrix is introduced to define the switching law. This is formalized in the following theorem.

Theorem 2: Consider Assumptions 1 and 2, and let matrices P ≻ 0 and S ≻ 0 of suitable dimensions and a new matrix N in R^{n x n} be solutions to the feasibility problem

Γ⁽²⁾_ij ≺ 0, for all (i, j) in K_N², (12)

where, for any pair (i, j) in K_N²,

Γ⁽²⁾_ij = Ψ_i(P) + μ_i [ S  0 ; 0  -1 ] + γ_i [Ψ_j(N) - Ψ_i(N)], (13)

where the matrices Ψ_i(P) and Ψ_i(N) are given in (9) and, again, the parameters γ_i > 0 and μ_i > 0 are given. Then the switching control law (C2) defined by

(C2)  σ(x_k) = argmin_{i in K_N} [x_k ; 1]ᵀ Ψ_i(N) [x_k ; 1] (14)

guarantees δV < 0 outside of the set E.

Proof: The proof strictly follows that of Theorem 1, except that the switching control law is now characterized not by the Lyapunov matrix P but by an arbitrary matrix N, which only has to be a solution to the feasibility problem (12).

Remark 3: Compared to Theorem 1, there are two main advantages. The first relies on the fact that the matrix N is required to be neither symmetric nor positive definite. The second is that the switching law is now completely decoupled from the definition of the Lyapunov function. Moreover, one can see that selecting N = P in Theorem 2 leads to the same statement as Theorem 1, which ensures that the set of feasible solutions of (12) is larger than that of (8). Relaxed solutions are thus expected; they are presented in the example section, where the optimization procedure given later in Section V is included.
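To make the structure of (9), (10) and (14) concrete, the following sketch (our own illustration, not code from the paper; the function and variable names are ours, and the list `psis` is assumed to contain matrices Ψ_i(P) or Ψ_i(N) coming from a solved instance of (8) or (12)) shows how the min-projection laws C1/C2 can be evaluated online.

```python
import numpy as np

def build_psi(M, E, F, T):
    """Psi_i(M) from (9) for the delta-operator model (4); F is a vector."""
    F = F.reshape(-1, 1)
    top_left = M @ E + E.T @ M + T * E.T @ M @ E
    top_right = M @ F + T * E.T @ M @ F
    bot_left = F.T @ M + T * F.T @ M @ E
    bot_right = T * F.T @ M @ F
    return np.block([[top_left, top_right],
                     [bot_left, bot_right]])

def switching_law(x, psis):
    """Laws (10)/(14): pick the mode minimizing [x;1]' Psi_i [x;1]."""
    xe = np.append(x, 1.0)
    return int(np.argmin([xe @ Psi @ xe for Psi in psis]))
```

For C1 one passes M = P to `build_psi`; for C2 the same routine is called with M = N, which is exactly the decoupling discussed in Remark 3.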
V. Optimisation procedure

The feasibility problems proposed in Theorems 1 and 2 only ensure that there exists a switching control law that stabilizes the system to a bounded region around the equilibrium. Without an optimization process, the resulting region might be too large to be relevant from the physical point of view. Indeed, an overly large set E can increase the chattering effects, which are the main phenomena to avoid or limit in the control design of DC-DC converters: this chattering behavior can damage or even break the devices. It is therefore necessary to include an optimization procedure in these theorems, whose objective is to minimize the size of the set E. Since this set is fully characterized by the symmetric positive definite matrix S, minimizing the size of E can be achieved by maximizing the determinant of S. Based on the discussion above, the following proposition, dealing with the optimization of the solutions to the conditions of Theorems 1 and 2, is stated.

Proposition 1: For any a priori fixed scalar parameters μ_i and γ_i, the optimisation problem

max_{S,P} det(S)  s.t.  Γ⁽ᶜ⁾_ij ≺ 0, for all (i, j) in K_N², (15)

for c in {1, 2}, minimizes the size of the set (6) while guaranteeing δV < 0 outside of the set E.

Remark 4: Note that we do not optimize the chattering region itself, because it is constrained to an ellipsoidal form and the controller is constrained to a given structure. It is nevertheless expected that the set E obtained with the control law of Theorem 2 will be smaller than with the control law C1.

Remark 5: This optimization process strongly depends on the selection of the scalar parameters γ_i and μ_i. Hence an iterative procedure to select the best parameters is expected to be needed.

Remark 6: Other optimization objectives can be considered, such as the maximization of the eigenvalues of S, which can be done by maximizing a scalar τ such that τI ≤ S.

VI. Application to a boost converter

The control laws introduced above are evaluated on a classical boost converter. This converter switches at high frequency between two modes (N = 2) corresponding to two affine subsystems. The state variable is defined by x = [i_L, v_c]ᵀ, where i_L denotes the inductor current and v_c the capacitor voltage. We take the parameters given in [START_REF] Deaecto | Switched affine systems control design with application to DC-DC converters[END_REF] for comparison with the switched control algorithm presented therein, which switches aperiodically and arbitrarily in steady state; this type of switching tends to be complicated in physical applications. The considered nominal values are: V_in = 100 V, R = 2 Ω, L = 500 µH, C_0 = 470 µF and R_0 = 50 Ω. The switched state-space model (1) is defined by the following matrices, for i = 1, 2:

A_i = [ -R/L ,  -(i-1)/L ;  (i-1)/C_0 ,  -1/(R_0 C_0) ],  a_i = [ V_in/L ; 0 ].
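For this boost model, the δ-operator matrices (5) can be computed directly. The sketch below is our own illustration; it assumes the reconstruction of A_i and a_i given above and uses the closed form of Remark 2, both A_i being nonsingular here.

```python
import numpy as np
from scipy.linalg import expm

# Nominal values of Section VI
Vin, R, L, C0, R0 = 100.0, 2.0, 500e-6, 470e-6, 50.0
ze = np.array([3.0, 120.0])              # operating point used in the example

A, B = [], []
for i in (1, 2):
    s = float(i - 1)                     # 0: first mode, 1: second mode
    Ai = np.array([[-R / L,        -s / L],
                   [ s / C0, -1.0 / (R0 * C0)]])
    ai = np.array([Vin / L, 0.0])
    A.append(Ai)
    B.append(Ai @ ze + ai)               # affine term of the error dynamics (3)

def delta_matrices(Ai, Bi, T):
    """E_i and F_i of (5); F_i via Remark 2 since A_i is nonsingular."""
    Ei = (expm(Ai * T) - np.eye(2)) / T
    Fi = Ei @ np.linalg.solve(Ai, Bi)    # (1/T)(e^{A T} - I) A^{-1} B
    return Ei, Fi

T = 1e-5
EF = [delta_matrices(A[i], B[i], T) for i in range(2)]
```

One can check numerically that E_i tends to A_i and F_i tends to B_i as T goes to zero, as stated after (5).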
The chosen simulation parameters are z_e = [3, 120]ᵀ and λ = [0.22, 0.78], for which simple calculations ensure that Assumptions 1 and 2 are satisfied. The optimization problem given in Proposition 1 is solved using the CVX solver [START_REF] Grant | Graph implementations for nonsmooth convex programs[END_REF], [START_REF] Grant | CVX: Matlab software for disciplined convex programming[END_REF]. The results obtained with this software illustrate the efficiency of the new control law presented in Theorem 2 with respect to the Lyapunov-based controller employed in the literature and presented in Theorem 1. As pointed out in Remark 5, the optimization scheme of Proposition 1 delivers different results for different values of γ_i and μ_i; therefore, a random search has been used to obtain the best tuple of parameters. The values of the parameters μ_1, μ_2, γ_1 and γ_2 obtained for several values of the sampling period T_s are provided in Table I.

Figure 1 shows the state trajectories, the set E and the δV > 0 surface in the state plane for the different controllers of Proposition 1, as well as for different sampling periods. Note that the δV > 0 surface lies in the interior of the set E, as expected from Theorems 1 and 2. Note also that E shrinks as T_s decreases. The set E obtained with control law C2 is smaller than with C1, showing that C2 provides a smaller chattering region; this is consistent with Remark 3. Another important remark is that the region where δV > 0 is concentrated in a smaller area under control law C2 than under C1. These simulations demonstrate the advantages of the relaxed control law over the existing Lyapunov-based one. Indeed, the proposed controller C2 allows the DC-DC converters under study to be controlled efficiently even with a relatively long sampling period, with a notable reduction of the number of switchings, which increases the lifespan of the devices and reduces the dissipated energy.

VII. Conclusions and future work

In this paper, we have presented two main contributions. The first deals with the definition of an accurate discrete-time model for high-frequency sampled DC-DC converters. The second consists in the extension of the usual Lyapunov-based control laws employed in this domain to a less restrictive control law that encompasses the first formulation. This new control law delivers notable improvements with respect to the Lyapunov-based control law when comparing the estimates of the set E of the switched affine system.

Fig. 1: Numerical results of Proposition 1. Time trajectories in green, set E in yellow and δV > 0 in red. Panels: (a) C1 with T = 10⁻⁴; (b) C1 with T = 10⁻⁵; (c) C1 with T = 10⁻⁶; (d) C2 with T = 10⁻⁴; (e) C2 with T = 10⁻⁵; (f) C2 with T = 10⁻⁶.

Table I: Numerical values of μ_1, μ_2, γ_1 and γ_2.

        T_s   | μ_1   | μ_2   | γ_1   | γ_2
Th. 1   10⁻⁴  | 9.84  | 9.86  | 0.948 | 0.0515
        10⁻⁵  | 0.16  | 0.157 | 0.777 | 0.223
        10⁻⁶  | 0.907 | 0.907 | 0.782 | 0.218
Th. 2   10⁻⁴  | 3.16  | 3.16  | 0.916 | 0.327
        10⁻⁵  | 1     | 1     | 0.349 | 0.1
        10⁻⁶  | 0.313 | 0.313 | 10    | 2.793
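To reproduce the qualitative closed-loop behaviour of Figure 1, the exact inter-sample integration of the active mode can be combined with the switching law. This sketch is again our own illustration; `psis` is assumed to come from a solved instance of (8) or (12), for example via a semidefinite-programming solver playing the role of CVX.

```python
import numpy as np
from scipy.linalg import expm

def simulate(psis, A, B, T, x0, n_steps):
    """Sampled-data min-projection control of the error dynamics (3).

    Between two sampling instants the active affine mode is integrated
    exactly: x_{k+1} = e^{A_i T} x_k + A_i^{-1}(e^{A_i T} - I) B_i.
    """
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        xe = np.append(x, 1.0)
        i = int(np.argmin([xe @ Psi @ xe for Psi in psis]))  # law (10)/(14)
        Ad = expm(A[i] * T)
        x = Ad @ x + np.linalg.solve(A[i], (Ad - np.eye(x.size)) @ B[i])
        traj.append(x.copy())
    return np.array(traj)
```

Plotting the resulting trajectory together with the ellipsoid {x : xᵀSx ≤ 1} then gives pictures analogous to Figure 1, with the trajectory ultimately confined near E.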
27,361
[ "13288", "2509" ]
[ "254694", "388529", "388529", "254694" ]
00174700
en
[ "spi" ]
2024/03/05 22:32:07
2005
https://ineris.hal.science/ineris-00174700/file/ARTICLE_TOURS.pdf
V Renaud F Lahaie G Armand T Verdel P Bigarré

Conception, numerical prediction and optimization of geomechanical measurements related to a vertical Mine-by-Test at the Meuse/Haute-Marne URL

Keywords: instrumentation, conception, design, under-excavation technique, numerical modelling, field stresses, measurement

Andra is conducting scientific experiments in the Meuse/Haute-Marne Underground Research Laboratory, among which the REP experiment is a vertical mine-by-test focusing on the short- and long-term hydromechanical response of the argillite to the sinking of the main shaft. Displacements, strains and pore pressures will be monitored while the shaft is passing down. Andra and INERIS intend to back-analyse most of the recorded geomechanical data using the under-excavation numerical technique, in order to estimate the pre-existing field stresses. The under-excavation interpretative technique consists in determining the pre-existing stress tensor over a fairly large volume of rock by generalized inversion of geomechanical measurements recorded during the disturbance of the host rock (typically the excavation of an underground opening). In the framework of a numerical study aiming to test accurately the sensitivity and numerical stability of this interpretative technique, a 3D modelling of a step-by-step vertical mine-by-test, based on the REP design, has been undertaken. One major step of the numerical

INTRODUCTION

1.1 OBJECT/CONTEXT

This study is part of the research related to the REP experiment in the underground laboratory of Meuse/Haute-Marne (located at Bure, France). The main objective of the REP experiment is to study the short- and long-term response of the argillite to the sinking of a vertical shaft (REP). The experiment will allow a great number of geomechanical measurements (strains, displacements, tilts, etc.) to be recorded around the excavated works. It is then possible to estimate, through a generalized numerical inversion, the pre-existing stress field in the virgin rock mass. Background literature and field experiments both show this technique, known as the "under-excavation" or "undercoring" technique, to be very promising. The under-excavation technique was first described and proposed by Wiles and Kaiser [START_REF] Wiles | In Situ Stress Determination Using the Underexcavation Technique -I[END_REF][START_REF] Wiles | In Situ Stress Determination Using the Underexcavation Technique -II[END_REF], who applied it quite successfully in the granitic context of the underground laboratory of AECL (Atomic Energy of Canada Ltd, [START_REF] Read | Technical summary of AECL's Mine-by Experiment Phase 1: Excavation response[END_REF][START_REF] Thompson | In situ rock stress determinations in deep boreholes at the Underground Research Laboratory[END_REF][START_REF] Tonon | Stresses in anisotropic rock masses: an engineering perspective building on geological knowledge[END_REF]). This technique was then evaluated by Andra and INERIS in a marl formation (the MDPA potash mines in Alsace, France [START_REF] Bigarré | Casamance, Mesures de déformations, interprétation des données[END_REF]) and in clays (Mont Terri, [START_REF] Bigarré | Mt. Terri Project, Phase III, Stress Measurement Experiment[END_REF][START_REF] Martin | Measurement of in-situ stress in weak rocks at Mont Terri Rock Laboratory, Switzerland[END_REF]).
These experiments confirmed the interest of this technique [START_REF] Galybin | A measuring scheme for determining in situ stresses and moduli at large scale[END_REF] (even though it is not yet widely used [START_REF] Martin | Stress, instability and design of underground excavations[END_REF]), while raising some issues concerning its sensitivity and stability with respect to the uncertainties affecting both the large amount of input data handled and, moreover, the model used. Research has therefore been undertaken, based on the numerical simulation of a standard synthetic 3D experiment, aiming at a careful, detailed evaluation of this technique.

DESCRIPTION OF THE UNDER-EXCAVATION TECHNIQUE

The under-excavation technique requires several assumptions, the principal one being the linear elastic behaviour of the rock mass. If several types of instruments are set up around the work (CSIRO cells, extensometers, inclinometers, convergence meters, clinometers, etc., figure 1), a linear relationship between strains/displacements and stresses can be expressed in the following matrix form (this implies an elastic behaviour):

{ε_1, ..., ε_N, u_1, ..., u_P, θ_1, ..., θ_Q, ...}ᵀ = [M] · {σ⁰_XX, σ⁰_YY, σ⁰_ZZ, σ⁰_YZ, σ⁰_XZ, σ⁰_XY}ᵀ, (1)

where [M] is the influence matrix, {σ⁰} is the initial stress tensor, and the coefficients a_ij, b_ij, c_ij, ... of [M] are the influence coefficients connecting the measurement variations of the sensors (N strain readings, P displacement readings, Q tilt readings, and so on) with the components of the initial stress tensor {σ⁰}, under the assumption of a linear elastic behaviour of the rock mass. Determining the six components of the initial stress tensor thus amounts to solving a system of N+P+Q+... linear equations with 6 unknowns.

The difficulty of the under-excavation method lies in the direct problem, i.e. the determination of the coefficients of the matrix [M], which requires numerical modelling of the experiment under 6 canonical loading schemes. For each of these 6 simulations, and at each stage of excavation, the method consists in recovering, at the location of each virtual sensor, the local value of the stress shift (for CSIRO cells), the new position (for extensometers and inclinometers) and the new angle (for clinometers), i.e. all the perturbations induced by the progress of the mining work. These values are then transformed into virtual measurements of strain (CSIRO cells), relative displacement (extensometers), and so on.

GENERAL CONSIDERATIONS

The synthetic experiment consisted of the "virtual" monitoring of the progressive excavation of a 6.25 m diameter vertical shaft around which different sensors had been laid out beforehand. The simulations were carried out with the computer code FLAC3D (transversely isotropic elastic behaviour, continuous and homogeneous medium). The 3D mesh was designed to fit as closely as possible the locations of the sensors, made up in this case of 3 CSIRO cells, 3 multipoint extensometers and 1 multipoint inclinometer. An additional calculation was carried out using a triaxial loading in accordance with the stress state already estimated in the field; this was needed in order to check numerically several supplementary functionalities implemented in the SYTGEOmath interpretative tool developed by INERIS.
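The inversion step itself reduces to linear least squares once [M] has been assembled from the six canonical runs. The sketch below is a synthetic illustration of the principle only: it is not the SYTGEOmath code, and the influence matrix here is random rather than FLAC3D-derived.

```python
import numpy as np

rng = np.random.default_rng(0)

# [M]: one row per measurement increment (strains, displacements, tilts, ...),
# one column per initial stress component (XX, YY, ZZ, YZ, XZ, XY).
n_meas = 42
M = rng.normal(size=(n_meas, 6))              # stand-in for the real coefficients

sigma0 = np.array([12.0, 15.0, 12.5, 0.5, -0.3, 1.2])    # fictitious stresses (MPa)
meas = M @ sigma0 + rng.normal(scale=0.02, size=n_meas)  # noisy virtual readings

sigma0_est, *_ = np.linalg.lstsq(M, meas, rcond=None)    # generalized inversion
print(np.round(sigma0_est, 2))
```

The quality of such an inversion is governed by the conditioning of [M], which is precisely the indicator analysed later in this paper.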
DESCRIPTION OF THE NUMERICAL MODEL

MODEL GEOMETRY

The model geometry is presented in figure 1. The dimensions of the model are 75 m x 75 m x 95 m, the lower and upper levels being at -515 m and -420 m respectively. A mesh made up of 114,009 elements (120,870 nodes) was generated. The model is defined in the coordinate system of the principal stresses (X, Y, Z), which corresponds to a rotation of 25° of the general East-North (x, y, z) coordinate system. The mesh was adapted to take into account the theoretical location of some measurement points (figure 1).

MECHANICAL PROPERTIES - EXCAVATION PROCESS

The input mechanical properties are those presented in chapter VI of the Andra report "Geological Referential of the site of East" [START_REF] Collectif | Référentiel Géologique du site de Meuse/Haute-Marne -Tome 4 Le Callovo-Oxfordien[END_REF]. They are summarized for each main geological facies in table 1. The shear modulus G_13 of the transversely isotropic elastic law was calculated with the relation of Lekhnitskii [START_REF] Lekhnitskii | Theory of elasticity of an anisotropic elastic body[END_REF], based on laboratory tests:

G_13 = (E_1 E_3) / (E_1 + E_3 + 2 ν_13 E_1) (2)

(a numerical check of this relation against the values of table 1 is given after the tables at the end of this article). The boundary conditions correspond to zero normal displacement on the vertical faces and on the lower horizontal face. The upper horizontal face is loaded with the stress components. All simulations were carried out in two main phases:
- the first phase corresponds to the calculation of the initial state of equilibrium before the onset of shaft sinking;
- the second phase corresponds to the simulation of the shaft sinking in 31 successive stages.

MODEL VALIDATION

The numerical model was validated on the basis of 3 x 3 x 4 calculated stress profiles compared with the analytical solutions of the stress field around an infinite cylindrical shaft in an elastic homogeneous medium. The error tolerance was set very low in order to minimize, in the overall procedure, the errors due solely to numerical modelling artefacts. Table 2 recapitulates the maximum absolute and relative errors (between numerical and analytical results) near the various measurement points.

ANALYSIS OF THE MEASUREMENTS OBTAINED ON THE VARIOUS INSTRUMENTS OF THE VIRTUAL EXPERIMENTAL DEVICE

Measurements obtained in the canonical loading simulations

The measurements obtained correspond to the influence coefficients relating these measurements to the corresponding components of the initial stress tensor. A basic analysis of these measurements allows the amplitude-versus-time behaviour of each sensor (maximum, signal-to-noise ratio, gradients, etc.) to be evaluated.
The analysis of these data (the 42 graphs such as those presented in figure 4), instrument by instrument, then offers a unique means of identifying quantitatively numerous singular conditions, for example:
- the redundancy of sensors inside a single instrument (extensometer) cannot be justified in terms of quantitative improvement of the overall instrumentation set-up;
- no measuring instrument considered individually shows a satisfactory sensitivity to all the components of [σ⁰]; this result is clearly accentuated by the 2D final geometry of the experiment, once the shaft has completely passed through the monitored volume of rock;
- a sensor may show a narrow predicted useful data range to be back-analysed; this can be anticipated by further consideration of the progress of the front face of the opening;
- the best numerical conditioning is obtained by combining instruments bringing complementary information, for example CSIRO cells (information on all the components of [σ⁰] except σ⁰_ZZ) and the axial extensometer 3 (information only on σ⁰_ZZ).

Numerical results of the shaft sinking

The graphs of the evolution of the numerically simulated measurements obtained on each instrument in the triaxial case (loaded with an estimated stress state representative of the experiment depth) are presented in figure 4. Some of the relevant points are noted below:
- the order of magnitude of the measurement variations obtained on the CSIRO 1 cell (more than 4000 µm/m in extension) is relatively high with respect to the recommended range of use of CSIRO cells, i.e. 2500-3000 µm/m; this leads to the recommendation of moving this cell away from the shaft side wall;
- the variations of maximum displacement obtained on the sensors of extensometers 1 and 2 lie between 500 and 2400 µm (4000 to 8500 µm for extensometer 3); these variations are at the same time sufficient with respect to the precision of these instruments (±50 µm) and well below their measurement range (10⁵ µm);
- the variations of displacement obtained on extensometer 3 are at the same time sufficient with respect to the precision (±50 µm) and below the tolerance range considered for this instrument (100,000 µm);
- the variations of the measurements obtained by the inclinometer lie between 100 and 1000 µm; these variations are weak, but remain sufficient for the points closest to the shaft (points n°3 to 5). On the other hand, they become insufficient, with respect to the device precision (±100 µm/m), for the points furthest from the shaft (points n°1 and 2);
- the comparison (figure 3) of the displacements obtained at the position of the reference points of extensometers 1 and 2 and of the inclinometer with those obtained at the position of their first measurement point (point n°1) clearly shows that the displacements of the reference positions are significant and must not be neglected.

These simulations were complemented by an elastoplastic modelling of the shaft sinking in order to finalize the layout of the experiment. As this article focuses on the under-excavation method, which requires an elastic model, the elastoplastic results are not shown here.

DATA INVERSION STRATEGY

PRESENTATION

The increasing number and variety of the measurements make the selection of the data to be inverted quite difficult.
There is indeed a very large number of possible choices, relating simultaneously to:
- the number of sensors (data subsets) to be inverted;
- the timing of the considered measurement intervals with respect to the face advance;
- the number and width of the measurement intervals;
- the relative temporal shift between the considered intervals, etc.

This was addressed for the under-excavation technique by Wiles & Kaiser [START_REF] Wiles | In Situ Stress Determination Using the Underexcavation Technique -II[END_REF], who proposed a methodology for selecting the data to be taken into account in the inversion (figure 5). For stage B, Wiles and Kaiser propose three different strategies (figure 6):
- "simple interval" (SI);
- "multiple intervals with shifted origins" (MISO);
- "multiple intervals with common origins" (MICO).

In this study, we explored a large set of possible choices of instrument combinations at stage A (12 combinations: C1, C2, C3, C1-E3, C2-E3, C3-E3, C1-C2-C3, C1-C2-C3-E3, E1-E2-I1, E1-E2-I1-E3, C1-C2-C3-E1-E2-I1 and C1-C2-C3-E1-E2-I1-E3) and all possible combinations for stages B and C. Moreover, for stage B, we tested a fourth procedure (called TOT), which consists in taking into account all the possible measurement intervals in the inversion. The aim of this last approach is to give a more complete answer on the influence of each assumption on the estimated initial stresses. It thus becomes possible to establish which type of instrument, or which measurement intervals, must be considered in the inversion in order to limit as much as possible the influence of the assumptions one wishes to test.

APPLICATION TO THE TRIAXIAL LOADING SIMULATION

The triaxial loading simulation allows the most favourable inversion methods to be selected. The application of the inversion methodology led to 15,660 inversions, run through an automated function implemented in the interpretative tool (1,305 per combination of instruments). All possible inversions are studied in order to obtain a better estimate of the stresses and of their associated error. Since the assumptions used in this method are the same as those used to build the matrix [M], the initial stress tensor imposed on the model boundaries should be recovered after inversion if the matrix [M] is well conditioned (the conditioning of the matrix is the ratio of its largest eigenvalue to its smallest eigenvalue). A conditioning between 0 and 10 is considered "very good", between 10 and 20 "good" and between 20 and 30 "acceptable"; beyond 30, the inversion of the measurements presents a significant risk of amplification of numerical errors.

Figure 7 shows the cumulative distribution (P<(x), the percentage of cases where x is lower than a given value) of the conditioning value according to the combination of instruments taken into account in the inversion. The conditioning value of the influence matrix (figure 7) is very variable according to the combination of instruments considered in the inversion. The best conditioning is not obtained by considering all the instruments, but only the CSIRO cells and the extensometer located along the shaft axis (E3). For these two combinations of instruments, almost all the inversion methods lead to an acceptable conditioning (< 30). On the other hand, the inversions carried out only on the extensometers (except the one placed axially ahead of the shaft front face) and the inclinometer (E1-E2-I1) never lead to an acceptable conditioning.
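The conditioning-based screening of instrument combinations can be sketched as follows. This is our own illustration: the grouping of rows by instrument is an assumption of the example, and the threshold is the one quoted above.

```python
import numpy as np
from itertools import combinations

def conditioning(M_sub):
    """Ratio of the largest to the smallest eigenvalue of M'M for a row subset."""
    w = np.linalg.eigvalsh(M_sub.T @ M_sub)   # eigenvalues in ascending order
    return w[-1] / w[0]

def screen(M, rows_by_instrument, cond_max=30.0):
    """Keep only the instrument combinations whose conditioning is acceptable."""
    kept = []
    names = sorted(rows_by_instrument)        # e.g. 'C1', 'C2', 'C3', 'E3', ...
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            rows = np.concatenate([rows_by_instrument[k] for k in combo])
            if conditioning(M[rows]) < cond_max:
                kept.append(combo)
    return kept
```

In a full replication, each kept combination would then be ranked by its error against the prescribed stresses, as done below with the COND < 30 and DEVMAX < 1% criteria.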
Figure 8 shows the distribution of the maximum relative difference (noted DEVMAX) between the values of the stresses imposed on the model, σ⁰_i,imp, and those estimated by inversion. As for the conditioning, we note that this difference between back-calculated and prescribed stresses is very variable according to the combination of instruments considered in the inversion. The combinations of instruments giving rise to bad conditioning tend, overall, to generate a larger error on the estimated stresses. This global correlation between the conditioning (COND) of the influence matrix and the error (DEVMAX) on the estimated stresses is shown in figure 9. However, some differences between figures 7 and 8 can be noted: for example, the combination of instruments giving the best conditioning (C1-C2-C3-E3) is not the one that produces the least error on the estimated stresses (even if it remains among the best). A limited number of inversion cases were then selected: these favourable cases are those which lead at the same time to the best conditioning and to the smallest error on the prescribed stresses (COND < 30, DEVMAX < 1%). The number of cases thus selected for the continuation of the study is 2,151, i.e. 13.7% of the initial number of inversions. Figure 10 shows the distribution of these favourable inversion cases according to the combinations of instruments considered.

CONCLUSION

Within the framework of the REP experiment at Bure (France), Andra and INERIS intend to develop an under-excavation interpretative technique in order to reduce the uncertainty on the in-situ stress state in the Callovo-Oxfordian argillite formation. A 3D synthetic numerical study has therefore been completed in order to assess quantitatively the overall reliability and performance of the under-excavation technique. The methodology has been extended in the following ways:
- intermediate results, such as the calculated influence coefficients and the total predicted measurements of all the various sensors to be installed, can be back-analysed in terms of operational recommendations regarding a given sensor or a subset of sensors;
- numerical inversion strategies are explored on the basis of calculated indicators able to compare quantitatively different instrumentation schemes against the overall layout of the advancing excavation face, aiming at minimizing undesired computational artefacts.

This study is only one of the studies necessary for the design of the REP experiment; nevertheless, the calculations presented in this article allow the experimental device to be improved by:
- moving CSIRO 1 away from the shaft side wall by approximately 3 m;
- relocating the inclinometer sensors, by reducing the length of the bars and increasing the number of measurement points.

Moreover, this study shows that the reference points of the extensometers outside the shaft and of the inclinometer will move significantly as the shaft passes (comparison of figure 3a with figure 3b). This problem can be avoided by excluding from the inversion the measurements made outside the shaft from stage -460 m onwards.

The 3D numerical design of an under-excavation interpretative experiment, i.e. one aiming to back-estimate a unique, overall quantitative result such as the field stresses, appears to be of major interest for optimizing the quality of the data to be recorded: first for a specific sensor, and secondly, and above all, for the quality of the overall instrumentation scheme, or what may be called the "information wealth" of the data set to be recorded. This "optimal 3D design" approach includes complex factors usually difficult for the rock mechanics engineer to handle at the same time:
1. the best instrumentation coverage of the complex 3D geometry of the advancing excavation inside the instrumented volume of rock;
2. the geomechanical properties of the host rock to be monitored;
3. the correct spreading of the various instruments to be set up.
Figure 1: Detail of the horizontal meshing and localisation of the measurement points.
Figure 2: 3D overview of the measurement device around the shaft.
Figure 3: Comparison between the displacements of the n°1 measurement point and of the reference point of extensometer 2.
Figure 6: Selection procedures for the measurement intervals taken into account in the inversion (stage B; adapted from Wiles & Kaiser [15]).
Figure 8: Distribution of the maximum relative difference (DEVMAX) between the estimated stresses and the prescribed stresses in the triaxial case.

Table 1: Mechanical characteristics of facies A, B and C.

Parameter                                           | Facies A         | Facies B & C
Wet density                                         | ρh = 2420 kg/m³  | ρh = 2420 kg/m³
Young modulus ⊥ to the plane of transverse isotropy | E3 = 5200 MPa    | E3 = 5200 MPa
Young modulus ∥ to the plane of transverse isotropy | E1 = 6300 MPa    | E1 = 6300 MPa
Poisson's ratio                                     | ν = 0.30         | ν = 0.30
Shear modulus G13                                   | 2144 MPa         | 2144 MPa

Table 2: Absolute and relative maximum errors between numerical and analytical results.

                     | σ_rr     | σ_θθ     | σ_zz     | σ_rθ
Relative error       | 1.58%    | 1.01%    | 1.21%    | 0.62%
Absolute error (MPa) | 6.71E-02 | 1.00E-03 | 1.00E-03 | 2.13E-03
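As a quick numerical check of the Lekhnitskii relation (2), as reconstructed above, against the values of Table 1:

```python
E1, E3, nu13 = 6300.0, 5200.0, 0.30           # MPa, MPa, - (Table 1)
G13 = (E1 * E3) / (E1 + E3 + 2.0 * nu13 * E1)
print(round(G13))                              # -> 2144 (MPa), matching Table 1
```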
21,743
[ "842892", "842893", "842894", "9346", "834637" ]
[ "3186", "3186", "12854", "8782", "3186" ]
00174722
en
[ "spi" ]
2024/03/05 22:32:07
2005
https://ineris.hal.science/ineris-00174722/file/article_GISOS_2005_english_last.pdf
Renaud Vincent email: vincent.renaud@ineris.fr Tritsch Jean-Jacques email: jacques.tritsch@ineris.fr Franck Christian email: christian.franck@industrie.gouv.fr

MODELING AND ASSESSMENT FOR SUBSIDENCE HAZARD IN INCLINED IRON MINING

Keywords: hazard, subsidence, inclined seams, iron mine, modeling

The old iron mines of the North-West of France have appreciably similar geometrical and mining configurations, with dips varying between 30° and 90°. Within the framework of the establishment of risk maps related to these workings, the observation of subsidence in certain basins led us to try to understand better the conditions of occurrence and the surface consequences of these phenomena, and in particular the influence of the dip on their relevance. A modeling study was thus undertaken, consisting first of the back-analysis of an observed and well-studied subsidence trough, in order to identify the initiating mechanism within the mine workings and to assess the influence and degree of reliability of the parameters, and secondly of a parametric analysis of the zones of potential failure according to the dip, the opening of the mined seam, the extraction ratio and the thickness of the overburden. The contribution of this modeling and the experience feedback from other mining basins made it possible to establish the principles of evaluation of the subsidence hazard, in terms of intensity and occurrence, for these deposits.

Introduction

In the framework of the assessment and prevention of mining hazards, the establishment of risk maps related to ground movements above the iron deposits mined in the North-West of France (figure 1) highlighted their relative homogeneity and singularity. These basins indeed have relatively homogeneous geological and mining characteristics. In addition, and mainly because these workings have a steep dip (between 30° and 90°), it quickly appeared, during the collection of information and the first observations of surface disorders, that the evaluation of the ground movement risk, and especially of the occurrence of subsidence, had to take into account the singularity of these deposits and could not rely on the analyses made for horizontal mine workings. This article first describes the characteristics specific to these workings. Then the objectives, steps and results of the modeling, based on the back-analysis of an observed subsidence phenomenon, are presented. Finally, the transcription of these results and the evaluation of the subsidence hazard are discussed.

Characteristics of the North-Western iron mining exploitations

The risk analysis is developed for studies at the scale of a whole risk basin, or even of several risk basins if they present strong analogies. This is the case for the iron deposits of the synclines of Soumont, May/Orne, La Ferrière-aux-Etangs (Normandy) and Segré (Pays-de-Loire).

Figure 1. Localisation of the iron-bearing basins of the North-West of France (Varoquaux and Gerard, 1980).

The various basins present many analogies in their geological and mining aspects. These deposits lie in dissymmetrical synclines whose deposition periods (Ordovician or Silurian) and folding periods are close on a geological scale. They are moderately to steeply inclined (figure 2), located at very similar depths (between 10 and 600 m) and contain one or two veins of small or average thickness (overall 2 to 4 m, locally more).
[START_REF] Maury | Aménagement de la mine de May-sur-Orne en stockage souterrain d'hydrocarbures[END_REF]. The nature and the strength of the iron ore are relatively variable. The ore consists mainly of haematite at May-sur-Orne and, at shallow depth, under the calcareous overthrust of Soumont. The ore is carbonated at depth in the other basins, such as Segré and La Ferrière-aux-Etangs. The compressive strength of the iron ore is about 100 MPa at May-sur-Orne and 200 MPa (perpendicular to the bedding plane) at La Ferrière-aux-Etangs.

The mining methods used in these basins are appreciably similar (figure 3). The oldest sites were mined by short dip faces, also called stopes, then by strike faces. Thereafter, the method of rise faces or mechanised strike faces was systematically applied for gently dipping sites (dip lower than 50°) and the shrinkage method for steeply dipping sites (dip higher than 50°). These workings are connected by level galleries linked to the ore extraction workings, spaced 30 m to 75-80 m apart vertically, according to the basins and the methods used.

The disorders observed in these various basins (table 1) are similar: primarily localized sinkholes caused by crown failure, shaft or raise collapse, or gallery collapse. One notes however the existence of collapses of large districts underground, during production, generally without surface repercussions, except at Soumont and La Ferrière-aux-Etangs. The disorders observed at the surface are classical depressions with gently sloping edges, with open cracks but without clear shear breaks, which can be identified as subsidence troughs. On the other hand, the documentary analyses do not identify any accident of the large-scale collapse type: the only events of that kind known in the western French basins are due exclusively to slate workings, whose common factors are their complex geometry, very different from that of the iron mine workings, and the presence of important residual voids [START_REF] Tritsch | Assistance technique à l'élaboration d'un dossier de demande d'abandon, carrières de Misengrain -site de Noyant[END_REF].

Modeling of the inclined deposits of the West of France

Modeling approach

The modeling study, using the code UDEC (2D calculations in a discontinuous medium), was organised in two stages:
- the back-analysis of a subsidence trough which developed at Soumont in 1966;
- a parametric analysis of the zones of potential failure (extension, amplitude) according to the dip (30 to 65°), the extraction ratio (70 to 90%), the layer thickness (1.5 to 5 m) and the thickness of the unconformable overburden formations (0 to 50 m).

Back-analysis of the Soumont subsidence of 1966

Several collapses occurred in the Soumont mine between 1929 and 1966. They mainly induced subsidence troughs at the surface (figure 4). The most recent one is the best documented, since underground visits and analyses of the causes were carried out in close connection with this collapse; this event was therefore selected for the back-analysis. The collapse occurred between the levels -120 and -250 m, 40 years after the mining of this sector. The dip of the layer is 30° and the extraction ratio is high (80-85%). The maximum subsidence measured at the time was 65 cm. The back-analysis was based on a model at the scale of the collapsed district.
The objective was to specify the conditions at the origin of the collapse and the most relevant mechanisms. This work was carried out subject to the three following checking points:
- a subsidence amplitude of 0.65 m was measured at the surface;
- the absence of collapse in workings deeper than 220 m;
- the stability of the workings mined by shrinkage at the same depth, to the west of the collapsed districts.

Description of the mining conditions

The extraction ratio of the lower stages decreases with depth. The gallery cross-section is trapezoidal and the opening of these levels is 4.5 m. The pillar width in the lower stages varies between 4.5 m and 6 m. Above the ore layer, one meets massive and resistant schist beds over a total thickness of 120 m, then sandstone beds over 95 m, then a thin (10 m) alternation of schist and sandstone, and finally sandstone again. Below the ore layer, there is a 10 m thick schist bed, then a series of sandstones. All these formations are covered by a calcareous slab whose thickness can be estimated at 30 m at the location of the sector concerned [START_REF] Tincelin | Mine de Soumont -Mesures à entreprendre pour prévoir l'imminence d'un risque d'effondrement survenant à l'aplomb des routes nationales n°158 ou départementales n°43[END_REF].

Geomechanical characteristics

The geomechanical characterisation of the various materials of the southern limb of the Soumont mine is not complete: only the iron ore and its immediate roof and floor have been tested in the laboratory. We therefore estimated the data from various sources:
- values obtained in the laboratory: bibliographical study carried out during the preliminary phase [START_REF] Delaunay | Phase préliminaire à la réalisation d'une modélisation numérique sur les gisements pentés des bassins ferrifères de Soumont[END_REF];
- data resulting from a study on the Misengrain slate mine [START_REF] Tritsch | Assistance technique à l'élaboration d'un dossier de demande d'abandon, carrières de Misengrain -site de Noyant[END_REF];
- data from the database of [START_REF] Fine | Le soutènement des galeries minières[END_REF];
- data (for the sandstone) resulting from the synthesis of the mechanical characterisations for the HBL [START_REF] Mery | Synthèse des caractérisations géomécaniques[END_REF];
- data from the geological map of Mézidon (BRGM).

The values of the strengths (tensile and compressive) were then reduced, taking into account:
- the scale effect, estimated at 0.47 according to [START_REF] Bieniawski | The significance of in-situ tests on large rock specimens[END_REF], referring to a curve obtained on unconfined iron ore samples;
- the 2D character of the modeling, by preserving the strength/stress ratio of the pillars between 2D and 3D through a reduction of the compressive strength of the seam (equation 1, for square rooms and pillars);
- the influence of time on the material (by estimating that the coefficient of reduction of the strengths is based on the ratio of elastic strength to peak strength).

Table 2 shows a synthesis of all the strength values obtained according to the various effects taken into account. The constitutive law retained takes into account a hardening then softening post-failure behaviour. An earlier study (1981) showed that σh/σv = 0.5. However, many stress measurements carried out in the West of France, within synclinal structures, show that the horizontal stress is always higher than or equal to the vertical stress.
For the sites of Grais (May/Orne) and St-Sigismond (Maine-et-Loire), the ratio σh/σv varies between 1 and 1.5 [START_REF] Burlet | Détermination du champ de contrainte régional à partir de tests hydrauliques en forages, résultats de neuf expérimentations in-situ réalisées en France[END_REF]. The validity of the stress tensor being doubtful, we considered three values for the ratio σh/σv: 0.5, 1 and 2.

Results

Since several solutions could explain the collapse of 1966, some input data were regarded as certain (extraction ratio, dimensions of the various stages, width of the stage pillars, mined thickness, geology, geomechanical characterisation of the materials other than the iron ore) while the others were treated as variables of the study. Each calculation of this study was analysed in terms of surface subsidence, distribution of plasticity, displacements, principal stresses and plastic strain in five monitored pillars (figure 5). To sum up, the different calculations carried out made it possible to study the influence of:
- the stress field, through the ratio σh/σv (3 values: 0.5, 1 and 2);
- the density of the roof stratification;
- the friction angle of the bedding planes;
- the effects of faults on the collapse mechanism;
- the joint behaviour law;
- the strength of the pillars;
- the panel width;
- the opening.

The various calculations reproduce a mechanism that explains the subsidence observed at the surface in 1966: it is due initially to the relative compression of the pillars and then to the deflection of the roof. These two zones are the seat of strong shear mechanisms, which imply a potential failure by shearing up to the surface. The three checking points (1966 collapse in the upper stage, stability of the lower stages and stability of the shrinkage workings) were verified. The main observations are the following:
- the length of the plastic zone cannot exceed 200 m; the mechanism highlighted cannot therefore be repeated in the lower stages;
- the verification of the 2nd checking point showed the importance of the strength of the barrier pillars: the strength of the pillars of the lower stages must be higher (51 MPa) than that of the pillars of the upper stage (34 MPa) for the mechanism not to be reproduced at depth, which is compatible with the fact that the iron ore of the lower stages is more carbonated (and thus more resistant);
- for σh/σv = 0.5 and 1, the value of the maximum subsidence obtained at the surface is close to that measured in 1966 (65 cm);
- the state of the initial stresses has a relatively weak influence on the strength thresholds of the pillars of the lower stages;
- the characteristics (spacing and friction angle) of the stratification network parallel to the dip are essential parameters of the mechanism highlighted: an increase in the spacing between the joints inhibits the collapse mechanism, and the same holds for the friction angle;
- the strength values introduced into our models are compatible with the intervals of variation of the in-situ characteristics [instantaneous value; long-term value integrating the effect of time].

Parametric analysis

The second part of this study consisted in carrying out a numerical model at the scale of the mine in order to extend the back-analysis of the collapse (Soumont, 1966) and to evaluate the criteria specifying the hazard through a parametric study (allowing generalization to the whole set of inclined deposits of the same type). We thus studied the sensitivity of four parameters by carrying out twenty calculations:
- the dip (between 30 and 65°);
- the extraction ratio (between 70 and 90%);
- the mined thickness (between 1.5 and 5 m);
- the height of overburden (between 0 and 50 m).

The analysis of these twenty calculations focused on the extension of the zones of potential failure (plasticity), on the value of the maximum displacement in the pillars and on the value of the maximum subsidence at the surface. It reveals that the mechanism identified in the back-analysis can be reproduced under the geometrical conditions synthesised in table 4. In addition, we noticed that subvertical faults can inhibit or amplify the shear failure mechanism.
Moreover, a reduction of the height/width ratio of the pillars (or a reduction of the thickness) has a very significant positive effect on the stability of the workings.

Evaluation of the "subsidence" hazard

The hazard assessment is classically made by combining the expected intensity of the phenomenon with its probability of occurrence, the latter corresponding to the predisposition of the site to the feared phenomenon.

Qualification of the intensity

It is recognised that the characteristics of the depression which cause the most severe damage to the assets located at the surface are the horizontal differential strains and the ground slope changes, rather than the maximum vertical subsidence itself. Table 3 gives indicative values of the strains and slopes which make it possible to evaluate the intensity of the phenomenon.

Table 3: Classes of intensity of the "subsidence" hazard (purely indicative values).

Intensity class | Horizontal differential strain ε (mm/m) | Surface inclination γ (%)
Negligible      | ε < 1        | γ < 0.2
Very low        | 1 < ε < 5    | 0.2 < γ < 1
Low             | 5 < ε < 10   | 1 < γ < 2
Medium          | 10 < ε < 30  | 2 < γ < 6
High            | ε > 30       | γ > 6

The value of these two parameters can be appreciably influenced by the different factors studied before. It appears that the value of the maximum subsidence takes the form A_max = 0.3 · w · τ, with:
- A_max = maximum subsidence;
- w = mined thickness (in the districts mined by shrinkage);
- τ = extraction ratio (or recovery factor).

The values of the strains (ε_max) and slopes (γ_max) are then easily deduced from the following classical relations:

ε_max = α · A_max / P,   γ_max = β · A_max / P,

where:
- P is the average depth of the panel;
- α and β are coefficients estimated at 1.5 and 5 respectively in the western iron basins.

The values of the coefficients α and β are deduced from the experience-feedback studies carried out on the iron mines of Lorraine and are adopted for their conservative, safety-oriented character. (These relations are applied in a short calculation sketch given after table 6, at the end of this article.)

Qualification of the occurrence probability

In the inclined workings of the iron deposits of the West of France, it is mainly the stability of the barrier pillars, of the slabs or of the pillars left in place to support the immediate strata which controls the predisposition to subsidence. To evaluate the long-term stability of the undermined surface, the main factors to be taken into account are:
- the dimensions of the panels;
- the dip of the layers;
- the extraction ratio;
- the opening (mined height between immediate strata);
- the strength of the pillars.

More precisely, the parametric analysis described previously provides fundamental indications on the layer and mining configurations for which the occurrence of a subsidence can be excluded (table 4, below).

Table 4: Conditions of exclusion of the subsidence process (according to [START_REF] Renaud | Contribution à l'analyse des conditions d'effondrement des gisements pentés des bassins ferrifères de Soumont[END_REF]).

Dip        | Extraction ratio (τ) | Thickness (W)
> 55°      | ≤ 90%                | ≤ 4 m
> 55°      | ≤ 85%                | ≤ 5 m
45° to 55° | ≤ 90%                | ≤ 3 m
45° to 55° | ≤ 80%                | ≤ 5 m
30° to 45° | ≤ 80%                | ≤ 3 m
30° to 45° | ≤ 70%                | ≤ 5 m

The influence of an increase in the dip appears as a displacement of the failure zones toward the surface (or toward the outcrop): the greater the dip, the more the ground close to the surface is affected (plastic points). In addition to these mining configurations, other conditions must be taken into account for a reduction of the hazard level:
- condition n° 1: for a subsidence trough to develop fully, the dimensions of the workings (width L) must reach or exceed the depth (H), that is L ≥ H, which represents, in the context of these workings, a width along the dip of 250 to 290 m (for depths lower than 220-250 m).
For smaller widths (L < H), the subsidence is correspondingly smaller and the hazard level lower;
- condition n° 2: it is considered that there are no repercussions at the surface (no perceptible subsidence) if the mined area has a width L < 0.4 H;
- condition n° 3: if the minimum depth of the mine workings is greater than 250-300 m (according to the geometry of the mined areas), it is considered that the failure zones are not likely to reach the surface.

Hazard zoning

The limits materialising at the surface the zone influenced by subsidence are established using an angle called the "influence angle", measured from the vertical, which connects the edge of the panel, at depth, to the points of the surface where subsidence, strains or slopes are regarded as imperceptible or null. Whereas a single influence angle (γ) of 30° to 35° is retained for flat seams, three limiting angle values are defined for inclined workings. These are:
- the limiting angle (γ) in the direction of drivage, which is equal to the limiting angle for a flat seam;
- the "upstream" angle, lower than the angle γ;
- the "downstream" angle, always greater than the angle γ.

Looking at the data obtained at Soumont, it can be noticed (table 6) that the values of the failure angles measured upstream and downstream (on average about 7° and 30° respectively), for a dip ranging between 30° and 40°, are very close to the corresponding values of the charts of the Lorraine coalfields or of the Nord/Pas-de-Calais region [START_REF] Proust | Etude sur les affaissements miniers dans le Bassin du Nord et du Pas-de-Calais[END_REF]. Hence, it can be deduced that the influence angles must also be very close, and values of the influence angle equal to 30° (upstream side) and 45° (downstream side) can be adopted for the Segré seam. Note that the downstream influence angle is taken at the base of the mined panels, and the upstream influence angle at the upper part of the panels.

Figure 2: Example of the mine configuration of May-sur-Orne (according to Maury, 1972).
Figure 3: Mining methods of the basin of Soumont (according to Perrotte and Lidou, 1983).
Figure 4: Plan view of the seam worked by rooms and pillars at Soumont in the collapse zone of 1966.
Figure 5: Distribution of plasticity: joint spacing of 10 m + variation of subsidence at the surface.

Figure 6: Diagram showing the dissymmetry of the upstream and downstream influence angles in an inclined deposit.

Table 1: Comparative analysis of the various western iron-bearing basins

| Basin | May-sur-Orne | Soumont | La Ferrière-aux-Étangs | Segré |
| Dates of exploitation | 1896-1968 | 1907-1989 | 1905-1970 | 1907-1984 |
| Maximum depth | 450 m | 650 m | 400 m | 490 m |
| Mining methods | stoping, dip faces, shrinkage | rise faces, shrinkage, strike faces or "stoping" | stopings, rise faces, retreating workings, shrinkage | shrinkage |
| Dip | 45° to 90° | 30° to 60° | 25° to 45° | 60° to 90° |
| Number of worked seams | 1 (very locally 2) | 1 | 1 | 2 (intercalated bed of 40 to 50 m thickness) |
| Dominant nature of the iron ore | haematite | calcareous facies, haematite under overthrust | carbonated and siliceous, chlorito-carbonated, little haematite | carbonated in depth |
| Content of iron | 35-50% | 36-50% | 35-50% | average 52% |
| Compressive strength of the ore | 100 MPa | 115 MPa | 80 MPa parallel to the bedding plane, 200 MPa perpendicular to the bedding plane | ??? |

Table 2: Synthesis of the compressive strengths for the various materials of the study. Iron ore …

Table 3: Classes of intensity of the "subsidence" hazard (purely indicative values)

| Intensity class | Horizontal differential strain ε (mm/m) | Surface inclination γ (%) |

Table 4: Conditions of exclusion of the subsidence process (according to [START_REF] Renaud | Contribution à l'analyse des conditions d'effondrement des gisements pentés des bassins ferrifères de Soumont[END_REF])

| Dip | Extraction ratio τ (%) | Thickness (W) |

Table 5: Classes of predisposition of the site to the "subsidence" hazard

| Site predisposition | Ratio L/H | Depth (H) |
| Very sensitive | L/H > 1 | H < 250 m |
| Sensitive | L/H ≈ 1 | H < 250 m |
| Not very sensitive | 0.4 < L/H < 1 | H < 250 m |
| Negligible | L/H < 0.4 | H < 250 m |
| Negligible | any | H > 250-300 m |

Table 6: Values given in the subsidence abacuses of the Nord-Pas-de-Calais, Saar and Lorraine basins

| Dip | 0° | 15° | 25° | 30° | 40° | 50° | 60° |
| Failure angle, upstream (limit of fracturing at the surface) | 18 | 14 | 12 | 11 | 9 | 7 | 6 |
| Failure angle, downstream | 18 | 22 | 25 | 27 | 30 | 33 | 36 |
| Influence angle, upstream (limit of null subsidence) | 35 | 32 | 30 | 30 | 30 | 28 | 27 |
| Influence angle, downstream | 35 | 38 | 40 | 43 | 45 | 47 | 48 |
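As a small illustration of how such abacuses are used, the sketch below linearly interpolates the Table 6 influence angles for an intermediate dip; it is an illustrative helper, not part of the original study.

```python
# Linear interpolation in the Table 6 influence-angle abacus (illustrative helper).
DIPS = [0, 15, 25, 30, 40, 50, 60]
UPSTREAM = [35, 32, 30, 30, 30, 28, 27]
DOWNSTREAM = [35, 38, 40, 43, 45, 47, 48]

def interp(x, xs, ys):
    """Piecewise-linear interpolation, clamped to the ends of the table."""
    if x <= xs[0]:
        return float(ys[0])
    for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return float(ys[-1])

dip = 35.0  # within the 30-40 degree range observed at Soumont
print(interp(dip, DIPS, UPSTREAM), interp(dip, DIPS, DOWNSTREAM))  # 30.0 44.0
```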
22,759
[ "842892", "834615", "842898" ]
[ "3186", "3186", "3186" ]
00174736
en
[ "spi" ]
2024/03/05 22:32:07
2006
https://ineris.hal.science/ineris-00174736/file/Sea_to_sky_vancouver_2006_p481-487.pdf
Yannick Wileveau Vincent Renaud Jean-Bernard Kazmierczak RHEOLOGICAL CHARACTERIZATION OF A CLAY FORMATION FROM DRIFT EXCAVATION: ELASTIC AND ELASTOPLASTIC APPROACH

An extensive scientific programme has been carried out by Andra (the French agency in charge of radioactive waste management) to investigate the feasibility of high-level-activity waste disposal in a deep geological formation. An Underground Research Laboratory (URL) is currently being constructed in north-eastern France to assess the adequacy of a hard-clay argillite layer (Callovo-Oxfordian formation) situated between 420 m and 550 m depth. Geotechnical measurements have been carried out during shaft and drift excavation, and particularly at the main level of the laboratory (-490 m). The drifts are of "horseshoe section" type, about 17 m² in area, mainly supported by metallic ribs and rock bolts. The digging has been performed with a classical pneumatic hammer. Measurement sections have been instrumented very close to the front face using convergence meters and radial extensometers. This paper presents a comparison between in situ measurements and numerical modelling. Elastic calculations are not in agreement with the measured deformations. An elastoplastic constitutive model considering damage and using the Hoek & Brown criterion has been developed and implemented in the FLAC 3D numerical code. Mechanical parameters came from laboratory tests performed on core samples. For the first metres, the model provides consistent displacements. Beyond 4 metres, a time-dependent convergence takes place, and creep and/or hydromechanical behaviour has to be integrated into the model.
1. INTRODUCTION

In November 1999, having completed the preliminary work phase, Andra started construction work on an underground research laboratory (Figure 1) in the district of Bure (Meuse département), located in north-eastern France. From 2000 to 2005, the construction of the experimental site has made it possible to study radioactive waste storage in a deep geological formation ([START_REF] Delay | The French Underground Research Laboratory at Bure as a model precursor for deep geological repositories -IGC symposium -Florence[END_REF]; Andra, 2005a). The target horizon for the laboratory is a 130 m thick layer of argillaceous rocks that lies between about 420 and 550 metres below the surface at the URL site. From a lithological viewpoint, the depositional period straddles the Callovian and Oxfordian subdivisions of the middle to upper Jurassic. Argillaceous rocks contain a mixture of clay minerals and clay-sized fractions of other compositions. The clays, which constitute on average 40%-45% of the Callovo-Oxfordian argillaceous rocks, provide groundwater isolation and radionuclide retention. Silica and carbonate-rich sedimentary components strengthen the rock and contribute to the stability of the underground construction. The stratigraphy of the URL is one of alternating limestone-rich and clay-rich units. On the upper part, the Oxfordian limestones lie from about 150 to 400 metres depth. Between the surface and the Oxfordian limestones is a 150 m thick sequence of mixed argillaceous rocks, marls, and limestones of the Kimmeridgian. Underlying the Callovo-Oxfordian argillaceous rocks are the Bathonian and Bajocian-age Dogger limestones and dolomitic limestones [START_REF] Vigneron | Apport des investigations multi échelles pour la construction d'un modèle conceptuel des plateformes carbonatées de l'Oxfordien moyen et supérieur de l'est du Bassin de Paris[END_REF]. The state of in situ stress at the Meuse/Haute-Marne site has been measured by comprehensive combined methods. The vertical stress profile is now well known at the site. The orientation of the σH stress (N155°E) is consistent with the regional stress field. The horizontal stress anisotropy is estimated as 1.1 < KH = σH/σh < 1.3, and the vertical stress and minimum horizontal stress have been directly measured close to the main level (-500 m depth), at 12.7 MPa and 12.4 MPa respectively [START_REF] Wileveau | Complete in situ stress determination in an argillite sedimentary formation[END_REF]. The main purpose of the geomechanical in situ investigation is to understand the rock response to the excavation of underground engineered structures and the development of the damaged zone. The characterization of the damaged zone during shaft and drift excavation will not be developed in this paper. Mechanical measurements are grouped in drift sections and within specific shaft excavation monitoring experiments, the so-called "mine-by tests". These geomechanical experiments include a set of boreholes or convergence sections designed to monitor the behaviour of the rock as the openings are excavated. Figure 2 presents the overall layout of the underground network of excavated rock in the Meuse/Haute-Marne laboratory and the location of the experimental drifts constructed in the clay formation. The first geomechanical mine-by experiment is located in the -445 m experimental drift, corresponding to the upper part of the Callovo-Oxfordian layer, where the mechanical response is mainly elastic [START_REF] Wileveau | Complete in situ stress determination in an argillite sedimentary formation[END_REF].
After an extensive instrumentation campaign carried out from the -445 m level, the vertical mine-by test was monitored between -465 m and -480 m during the main shaft sinking [START_REF] Souley | Hydromechanical response to a mine by test experiment in a deep claystone -SGC Sea to Sky conference -Vancouver -Oct[END_REF]. The third geomechanical experiment, with which this paper deals, is the drift excavation test at the main level of the URL, around -490 m beneath the surface. The SMR1.1 and SMR1.3 sections were installed in April 2005 and August 2005 respectively, during the works.

2. EXPERIMENTAL RESULTS

The two instrumented sections have been put in place very close to the front face (around 1.5 m) in order to capture the maximum deconfinement caused by the drift excavation. The sections are composed of radial extensometers, whose end point is anchored 20 m from the wall, and of convergence measurements with 6 points on the section (see Figure 3). Note that the drifts have a "horseshoe" section of about 17 m², mainly supported by metallic sliding arches composed of three parts and by rock bolts 2.4 m long. The floor is also covered by a concrete slab 0.7 m thick. A classical pneumatic hammer has been used to dig the galleries. The convergences have been measured manually using an invar-wire system along 9 base directions. The accuracy of this method is +/- 0.2 mm. The reading frequency has been adapted to the excavation advance rate in order to obtain a high density of measurements within a distance of 12 m from the section, corresponding approximately to 3 times the excavation diameter. The convergence measurements are given in Figure 4 for the two sections SMR1.1 and SMR1.3 until early October 2005. The convergence is still being monitored beyond that date; those data are not presented in this paper. Obviously, the behaviour of these two perpendicular sections is very different and strongly linked to the in situ stress anisotropy. The evolution of convergence in SMR1.3 is very similar in the vertical and horizontal directions (measured on bases 6-3 and 1-5 respectively) (Fig. 4b). On the SMR1.1 section (Fig. 4a), the vertical convergence is much higher than the horizontal one, which is in good agreement with the stress concentration due to the maximum horizontal stress σH acting on the walls. These values are given below (the convergence is reset to zero just before the start of excavation). The extensometers, installed on the same sections as the convergence measurements, have monitored the deformations of the rock mass automatically (4 readings per hour) during the excavation advance of the drifts. Figure 5 shows the measurements for the vertical downward and horizontal extensometers. Several names are used (e.g. GMR, GLE, GKE, GNI) for the drifts dug in the experimental area. PM indicates the distance between the workface and the axis of the previous gallery. Only 4 curves per extensometer are presented (0 m, 2 m, 5 m and 10 m, relative to the anchor installed at 20 m, which is considered a fixed point). One observes that the deformation rate reacts gradually to the progress of the face. Moreover, in the particular case of the SMR1.3 section, where the history of excavation is more complex, the effect of the other openings is clearly measured. Such interaction between drifts is mainly due to the general layout of the URL, designed to give rapid access to the facilities under the time-schedule constraint. The smallest distance between parallel galleries is equal to … times the diameter of the gallery, between the GLE and GKE drifts. This effect has not been identified on the SMR1.1 section.
The magnitude of extension at the wall, relative to the reference point taken at the 20 m anchor, is comparable to the convergence measurements, even though the start dates of the measurements differ by a few days. Table 1 shows the values for both instrumentations. The values obtained by convergence and by extension make up a consistent set of data, even though the deformations obtained by the convergence method indicate, in most cases, a larger deconfinement. This difference can be explained by the delay in installing the extensometers compared to the convergence section, or by the part of the deformation taking place beyond 20 m from the wall of the openings, which cannot be measured. In the following chapter, the convergence values are used for comparison with the results obtained by modelling.

3. INTERPRETATION OF CONVERGENCES

3.1 Results of the elastic model

The first analysis has been made using a classical approach in the framework of linear elasticity, assuming the plane-strain approach developed by [START_REF] Panet | Contribution à l'étude du soutènement derrière le front de taille[END_REF]. More complex calculations, including a 3D simulation of the drift excavation, are presented in the next chapter. The results are not in agreement with the observations, as shown in Figure 6 for the case of SMR1.3, where the in situ stresses are nearly isotropic around the gallery. A better agreement is obtained by reducing the Young's modulus by up to one order of magnitude. This assumption is inconsistent with the mechanical behaviour usually observed on samples subjected to laboratory tests and cannot be validated. Moreover, the significant elastic deconfinement in the first metres from the section predicted by the plane-strain model is not reproduced. To better represent the complex behaviour of the argillite, one also needs to consider a damageable rock. The fracture network has been clearly detected within the first 2 m from the gallery wall by several direct methods (geological survey on core samples, resin injection within the fracture network followed by overcoring, borehole camera) and indirect methods (velocity measurements, tomography). The application of the elastic model, combined with the in situ observations, leads us to consider (as previously anticipated) an elastoplastic approach for the argillite lying at this depth.

3.2 Results of the elastoplastic model

Numerical calculations were carried out in 3D to simulate the drift advance (in 5 phases) of a horseshoe gallery at 490 m depth (Figures 3 and 7), according to the two orientations of the galleries with respect to the stress tensor: σh and σH. A first phase allows the fine-grid zone to be reached. Then, 3 phases of drift advance (1 metre by 1 metre) are simulated. Finally, the drift advance is continued so as to obtain complete deconfinement up to the end of the meshed zone. The geometrical model extends over 49.4 m in the X direction, 25.3 m in the Y direction and 50.6 m in height (Z direction). It consists of 627,183 gridpoints and 605,784 zones. The mesh used is sufficiently fine to highlight the characteristics of the zones exceeding the damage or failure criteria. Likewise, nodal points were selected in various directions (in front of the face, at the side walls, in the vault and under the floor) to follow the evolution of displacements (Figure 8) and stresses during the phases of drift advance. These points also allowed easy comparison with the in situ measurements on the two sections SMR1.1 and SMR1.3.
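For an order-of-magnitude feel of the elastic shortfall discussed in section 3.1, one can use the classical closed form for a circular opening in an isotropic stress field; the choice of formula and the equivalent radius of the 17 m² horseshoe section are simplifying assumptions made here for illustration, not the calculation performed in the study.

```python
# Rough plane-strain elastic estimate of the diametral convergence of a circular
# opening at full deconfinement: u_wall = (1 + nu) * sigma0 * R / E (textbook
# convergence-confinement result; R is an assumed equivalent radius).
import math

E = 4.0e9        # Young's modulus, Pa (laboratory value used in the paper)
nu = 0.3         # Poisson's ratio
sigma0 = 12.5e6  # mean in situ stress near -490 m, Pa (sigma_h ~ sigma_v here)
R = math.sqrt(17.0 / math.pi)   # equivalent radius of the 17 m^2 section, ~2.3 m

u_wall = (1 + nu) * sigma0 * R / E
print(f"elastic convergence ~ {2 * u_wall * 1000:.0f} mm")  # ~19 mm
# The measured (half-)convergences reach several tens of mm, so the purely
# elastic model falls short unless E is artificially reduced, as noted above.
```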
3.2.1 Modelling assumptions

The commercial computation software used for this study is FLAC 3D, delivered by Itasca (2002). The boundary conditions are as follows:
• null normal displacements are prescribed on the lower face of the model (Z = -515.3), on the face corresponding to the gallery symmetry plane (Y = 0) and on the "left" face of the model (X = -23.0);
• stress conditions are prescribed on the upper face of the model (Z = -464.7), on the "back" face of the model (Y = 25.3) and on the "right" face of the model (X = 26.4).

Considering isotropic mechanical behaviour of the argillite, the elastic parameters, the Young's modulus (E = 4.0 GPa) and the Poisson's ratio (ν = 0.3), are taken from Andra's laboratory tests on samples from deep boreholes drilled at the Meuse/Haute-Marne site [START_REF]Dossier 2005 -Référentiel du site Meuse/Haute-Marne[END_REF]. The state of natural stresses (at -490 m depth) is as follows: σv = 12.7 MPa, σh = 12.4 MPa and σH = 1.3 × σh. Among the assumptions on the natural stress field, the worst case has been considered (KH = 1.3).

3.2.2 Geomechanical properties

Numerical calculations are carried out with a damageable elastoplastic model with hardening (as developed by Hoek and Brown). Table 2 indicates the parameters of this model, used as reference parameters for the studied zone of the argillite (the B & C zones) [START_REF]Dossier 2005 -Référentiel du site Meuse/Haute-Marne[END_REF]. Figure 8 illustrates the isovalues of total displacement obtained by the numerical calculation at the end of the 3 one-metre excavation steps. The larger extrusion at the workface is clearly represented, as well as the corner effect. In order to compare the SMR1.1 and SMR1.3 sections with the modelling results, the tracked point was placed one metre ahead of the initial face, and the displacement is taken as zero when the initial face is reached. The calculation results corresponding to 3 m of excavation with respect to the initial face show a vertical convergence of about 10 mm in the direction of the major stress and of 25 mm in the minor-stress direction, that is to say a ratio of 2.5 between these two directions. This tendency is also observed in the measurements, even if the ratio of displacements is not exactly the same.

3.2.3 Comparison between the convergence measurements (SMR1.1 and SMR1.3) and the modelling

Figure 9 illustrates the comparison between the results of the numerical modelling and the convergence measurements at sections SUG1350, SUG1360 and SUG1170 (whose axis is parallel to σH) and at sections SUG1150, SUG1160 and SUG1180 (whose axis is parallel to σh), as a function of the face advance. The differences between the displacements calculated in the four configurations depicted in Figure 9 show a relatively good agreement between the modelling and the measurements over the first 4 m of face advance. Although the calculated displacements are rather well correlated with the measurements for a gallery advancing along σH, the model predicts displacements higher than the measured values for the other direction (σh). This is natural, since the model integrates neither creep nor hydraulic coupling. Therefore, long-term convergences cannot be reproduced by the model.

4. CONCLUSION

The main purpose of this paper is to assess the classical geomechanical tools used to interpret the in situ displacements measured during the mine-by tests conducted by Andra in the Meuse/Haute-Marne URL. For such a hard clay lying at 490 m depth in the Callovo-Oxfordian formation, a complex behaviour is observed, combining several phenomena: elastoplastic behaviour, time-dependent effects and hydromechanical coupling. On the one hand, the elastic approach did not provide a reasonable match, even though the globally elastic mechanical response of the clay had been confirmed at 445 m depth [START_REF] Wileveau | Complete in situ stress determination in an argillite sedimentary formation[END_REF]. On the other hand, a damageable elastoplastic constitutive model using the Hoek & Brown criterion has been used. The main conclusion of this analysis is that the observed displacements are consistent with the elastoplastic model over the first metres only. Beyond that, time-dependent mechanisms account for a predominant part of the deformations. Note that the elastoplastic modelling presented here has been carried out in the framework of studies on the 3D complexity of the URL (real geometry of the drifts, anisotropy of the horizontal stress, and working phases).
Thus, we voluntarily simplified the argillite behaviour by neglecting the effects related to creep and hydraulic coupling. Further numerical modelling is in progress to better understand the strong hydromechanical coupling observed in this clay, the real geometry and the possible interactions between drifts, and the role of, and link between, plasticity and creep.

Figure 1. Localization of the underground research laboratory at the Meuse/Haute-Marne site.

Figure 2. Overall layout of the underground network of the Meuse/Haute-Marne laboratory and location of the experimental drifts constructed in the clay formation.

Figure 3. Example of the instrumented section SMR1.1, with combined extensometer and convergence measurements at the same location. "Base n" indicates the reference number of a convergence point.

Figure 4. Convergence measurements on sections SMR1.1 (a) and SMR1.3 (b).

Figure 5. Extension of the rock mass versus time around the drift during the drift advance: 5a) and 5b) SMR1.1 section; 5c) and 5d) SMR1.3 section. The values are calculated considering the fixed point at 20 m.

Figure 6. Comparison between the convergence measurements of SMR1.3 and the plane-strain modelling.
Figure 7. Mesh of the model, including the horseshoe shape and the refined mesh around the drift (see zoom).

Figure 8. Variation of total displacement for a 3 m advance.

Figure 9. Comparison between the measured horizontal and vertical convergences and the modelling results.

Table 1. Comparison between convergence and extension measurements (SMR1.1 and SMR1.3).

| Ref. number | Drift axis | Period (days)¹ | Type² | Value (mm) |
| SUG1350 (SMR1.3) | σH | 163 | base 1-5, horizontal | 35.8 |
| SUG1350 (SMR1.3) | σH | 163 | base 6-3, vertical | 39.7 |
| SUG1360 (SMR1.3) | σH | 163 | base 1-5, horizontal | 42.7 |
| SUG1360 (SMR1.3) | σH | 163 | base 6-3, vertical | 39.8 |
| SUG1301 (SMR1.3) | σH | 163 | horizontal extensometer | 33.8 |
| SUG1303 (SMR1.3) | σH | 163 | vertical extensometer | 27.0 |
| SUG1150 (SMR1.1) | σh | 45 | base 1-5, horizontal | 8.6 |
| SUG1150 (SMR1.1) | σh | 45 | base 6-3, vertical | 33.2 |
| SUG1160 (SMR1.1) | σh | 45 | base 1-5, horizontal | 8.9 |
| SUG1160 (SMR1.1) | σh | 45 | base 6-3, vertical | 53.0 |
| SUG1103 (SMR1.1) | σh | 45 | horizontal extensometer | 5.7 |
| SUG1105 (SMR1.1) | σh | 45 | vertical extensometer | 37.2 |
| SUG1118 (SMR1.1) | σh | 45 | vertical extensometer | 23.6 |

¹ Period of observation, calculated up to 1st October 2005.
² The values given are the half-convergences measured between two bases.

Table 2. Reference values for the B & C zones used in the modelling (S and m: parameters of the Hoek & Brown criterion; α and β: parameters characterising the residual-strength evolution).

| Elastic parameters | Young's modulus = 4000 MPa | Poisson's ratio = 0.3 |
| Failure criterion | S(rup) = 0.128 | m(rup) = 2 | σc(rup) = 33.5 MPa |
| Damage initiation | S(dam) = 1 | m(dam) = 1.5 | σc(dam) = 9.6 MPa |
| Residual strength | α = 2.8 | β = 3 MPa |
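To illustrate how the Table 2 criteria can be read, here is a minimal sketch using the classical Hoek & Brown expression; this is an assumed reading of the parameters for illustration only, and the damageable variant actually implemented in FLAC 3D may differ in detail.

```python
# Classical Hoek & Brown criterion, sigma1 = sigma3 + sqrt(m * sigma_c * sigma3
# + s * sigma_c**2), evaluated with the Table 2 parameters (stresses in MPa,
# compression positive). Illustrative reading of the model only.
import math

def hb_sigma1(sigma3, m, s, sigma_c):
    return sigma3 + math.sqrt(m * sigma_c * sigma3 + s * sigma_c**2)

FAILURE = dict(m=2.0, s=0.128, sigma_c=33.5)   # rupture criterion
DAMAGE = dict(m=1.5, s=1.0, sigma_c=9.6)       # damage-initiation criterion

for sigma3 in (0.0, 5.0, 12.5):   # 12.5 MPa ~ mean in situ stress at -490 m
    print(sigma3, round(hb_sigma1(sigma3, **FAILURE), 1),
          round(hb_sigma1(sigma3, **DAMAGE), 1))
# At sigma3 = 0 the damage threshold is ~9.6 MPa while rupture requires ~12.0 MPa,
# so a ring of damaged-but-unbroken rock around the drift is expected.
```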
22,947
[ "842902", "842892", "840067", "842894" ]
[ "12854", "3186", "3186", "12854" ]
01747433
en
[ "info" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01747433/file/ECC18_0293_FI.pdf
Maxime Thieffry Alexandre Kruszewski Thierry-Marie Guerra Christian Duriez Reduced Order Control of Soft Robots with Guaranteed Stability

This work offers the ability to design a closed-loop strategy to control the dynamics of soft robots. A numerical model of a robot is obtained using the Finite Element Method, which leads to working with large-scale systems that are difficult to control. The main contribution is a reduced-order model-based control law with two main features: a reduced state feedback tunes the performance, while a Lyapunov function guarantees the stability of the large-scale closed-loop system. The method is generic and usable for any soft robot, as long as a FEM model is available. Simulation results show that we can control and reduce the settling time of the soft robot and make it converge faster, without oscillations, to a desired position.

I. INTRODUCTION

Soft robots, i.e. robots made of deformable materials, promise disruptive advances in many areas and bring transversal challenges, among which are dynamical modelling and control; see [START_REF] Rus | Design, fabrication and control of soft robots[END_REF], [START_REF] Majidi | Soft robotics: A perspective-current trends and prospects for the future[END_REF] and [START_REF] Kim | Soft robotics: A bioinspired evolution in robotics[END_REF]. Since soft robots are lighter and more compliant than rigid ones, vibration issues arise when dealing with their dynamics. We propose a feedback control design to handle these issues. Designing such a feedback law brings several challenges, such as dynamical modelling, large-scale control and stability preservation. Modelling soft robots analytically is hard to achieve, as this new type of robot has a theoretically infinite number of degrees of freedom. Several approaches have been proposed so far: piecewise constant curvature (PCC), Cosserat rod theory and the finite element method (FEM). Depending on the robot geometry, the constant-curvature assumption is not always valid. Moreover, the equations coming from the Cosserat model are often not suitable for controller design. Recent work has been done to deal with these issues in [START_REF] Renda | Discrete Cosserat approach for multi-section soft robots dynamics[END_REF] and [START_REF] Falkenhahn | Dynamic modeling of bellows-actuated continuum robots using the Euler-Lagrange formalism[END_REF] to study continuum manipulators or beam-like soft robots. The goal of the present method is to be as generic as possible concerning the geometry of the robot; therefore, the finite element method is used. This spatial discretization gives rise to large-scale systems, which abound in many fields of research, such as control theory. Standard control-theory tools, like Linear Matrix Inequality (LMI) or Lyapunov-equation solvers, cannot deal with too large a number of decision variables. Numerical efficiency for control applications is also an active field of research; see [START_REF] Benner | Numerical solution of large and sparse continuous time algebraic matrix Riccati and Lyapunov equations: a state of the art survey[END_REF], [START_REF] Koo | Decentralized fuzzy observerbased output-feedback control for nonlinear large-scale systems: an LMI approach[END_REF] or [START_REF] Chang | H∞ fuzzy control synthesis for a largescale system with a reduced number of LMIs[END_REF]. If classical tools of automatic control are to be applied, model order reduction must be considered.
The methods used in this work are rather standard, and we refer the reader to [START_REF] Benner | Model Reduction and Approximation : Theory and Algorithms[END_REF] for more details. Recent work has been done on soft robot control: [START_REF] Marchese | Dynamics and trajectory optimization for a soft spatial fluidic elastomer manipulator[END_REF] provides an open-loop strategy for dynamics and trajectory optimization, [START_REF] Lismonde | Trajectory planning of soft link robots with improved intrinsic safety[END_REF] provides an open-loop trajectory-planning approach based on a FEM model, and [START_REF] Zhang | Kinematic modeling and observer based control of soft robot using real-time finite element method[END_REF] provides a closed-loop controller based on a FEM model but restricted to kinematics. Recently, space reduction has also been used to study continuum manipulators in [START_REF] Sadati | Control space reduction and real-time accurate modeling of continuum manipulators using Ritz and Ritz-Galerkin methods[END_REF]. This paper aims at providing a closed-loop controller that fixes the dynamics of a reduced-order system while guaranteeing the stability of the full-order model within the Lyapunov framework. The remainder of this document is organised as follows. Section II presents a dynamic model of the robot, the main contribution of this work is presented in Section III, and Section IV presents avenues to improve this result. Simulation results are provided along with the theoretical results to illustrate the effectiveness of the methodology.

II. SOFT ROBOT MODELLING

A. FEM model

Modelling soft robots relies on both continuum-mechanics theory and numerical approaches to solve the underlying equations. Here, a corotational FEM allows us to define position and velocity vectors, respectively $q(t) \in \mathbb{R}^n$ and $v(t) \in \mathbb{R}^n$, whose dimension $n$ is proportional to the size of the FEM mesh used to model the robot. The more nodes the mesh has, the more accurate the model tends to be; for soft-robot applications, the size $N$ of the FEM mesh ranges from hundreds to thousands of nodes. The dimension of the previous vectors is $n = 3N$, as positions and velocities are given in the 3 dimensions of space. Using the SOFA framework, the method proposed in [START_REF] Coevoet | Software toolkit for modeling, simulation, and control of soft robots[END_REF] describes concretely how to use the FEM to model soft robots, in the particular context of real-time simulation. The non-linear equation of motion of the robot is given by Newton's second law:

$$M(q)\dot v = P(q) - F(q,v) + H^T(q)u(t) \quad (1)$$

where $M(q)$ is the mass matrix and $H^T(q)u$ is the actuator contribution: $H^T(q)$ contains the directions of the actuator forces and $u$ their amplitudes. $F(q,v)$ represents the internal forces and $P(q)$ gathers all the known external forces. As we consider only the gravity field, $P(q)$ is constant: $P(q) = P$. Let $q_0 \in \mathbb{R}^n$ be a stable equilibrium point induced by $P$ and $u(t) = u_0$, i.e.
$q_0$ is a solution of

$$0 = P - F(q_0, 0) + H^T(q_0)u_0 \quad (2)$$

Equation (1) can also be written as:

$$M(q)\dot v = P - F(q,v) + H^T(q)u(t) - P + F(q_0,0) - H^T(q_0)u_0$$
$$\Leftrightarrow\; M(q)\dot v = F(q_0,0) - F(q,v) + H^T(q)u(t) - H^T(q_0)u_0 \quad (3)$$

We can approximate the internal forces $F$ with a first-order Taylor expansion around this equilibrium point:

$$F(q,v) \approx F(q_0,0) + \left.\frac{\partial F(q,v)}{\partial q}\right|_{q=q_0}(q-q_0) + \left.\frac{\partial F(q,v)}{\partial v}\right|_{v=0}v \quad (4)$$

where $\frac{\partial F(q,v)}{\partial q} = K(q,v)$ is the stiffness matrix and $\frac{\partial F(q,v)}{\partial v} = D(q,v)$ is the damping matrix. By definition, the mass, stiffness and damping matrices are positive definite. With these notations, equation (3) becomes:

$$M(q)\dot v \approx -K(q_0,0)(q-q_0) - D(q_0,0)v + H^T(q)u(t) - H^T(q_0)u_0 \quad (5)$$

Let $d$ be the displacement vector defined by:

$$d = q - q_0 \quad (6)$$

The equation of motion around an equilibrium point $q_0$ is thus given by:

$$M(q)\dot v \approx -K(q_0,0)d - D(q_0,0)v + H^T(q)u(t) - H^T(q_0)u_0 \quad (7)$$

B. State-space equation

Without loss of generality, taking $u_0 = 0$ allows us to define the following non-linear state-space equation:

$$\begin{cases}\dot x = \underbrace{\begin{pmatrix}-M(x)^{-1}D(x) & -M(x)^{-1}K(x)\\ I & 0\end{pmatrix}}_{A(x)}x + \underbrace{\begin{pmatrix}M(x)^{-1}H^T(x)\\ 0\end{pmatrix}}_{B(x)}u\\[1ex] y = Cx\end{cases} \quad (8)$$

where $x = \begin{pmatrix}v\\ d\end{pmatrix}$, $x \in \mathbb{R}^{2n}$, and where the system matrices are large-scale sparse non-linear matrices, i.e. $A(x) \in \mathbb{R}^{2n\times 2n}$, $B(x) \in \mathbb{R}^{2n\times m}$, $C \in \mathbb{R}^{p\times 2n}$, with $m$ the number of actuators and $p$ the number of outputs. The results shown in this paper are obtained in simulation experiments, where the non-linear model is used to simulate the robot. For control design, it is more convenient to work with a linear model; around the equilibrium point $q_0$, system (8) can be approximated by the following linear representation:

$$\begin{cases}\dot x = \begin{pmatrix}-M^{-1}D & -M^{-1}K\\ I & 0\end{pmatrix}x + \begin{pmatrix}M^{-1}H^T\\ 0\end{pmatrix}u\\[1ex] y = Cx\end{cases} \quad (9)$$

where $M = M(q_0,0)$, $D = D(q_0,0)$ and $K = K(q_0,0)$.

Remark 1: The FEM model of the robot allows us to compute its energy in real time. The kinetic energy of a soft robot is defined as:

$$E_k(x) = \tfrac{1}{2}v^T M v \quad (10)$$

and its potential energy as:

$$E_p(x) = \tfrac{1}{2}d^T K d \quad (11)$$

The total energy of the robot is then:

$$E(x) = \tfrac{1}{2}\begin{pmatrix}v\\ d\end{pmatrix}^T\begin{pmatrix}M & 0\\ 0 & K\end{pmatrix}\begin{pmatrix}v\\ d\end{pmatrix} \quad (12)$$

As $M$ and $K$ are positive definite, i.e. $M > 0$ and $K > 0$, this energy function is positive definite:

$$E(x) > 0 \text{ for } x \neq 0 \quad (13)$$

and

$$E(x) = 0 \Leftrightarrow (d, v) = (0, 0) \quad (14)$$

III. LARGE SCALE CONTROL DESIGN

The general objective of this paper is to compute a state feedback control law

$$u = Lx \quad (15)$$

that guarantees the performance of system (9) in closed loop. Yet, given the dimension of $x$, the matrix $L$ cannot be computed with numerical efficiency using pole placement or LMI approaches. Moreover, even though such a controller could guarantee the closed-loop stability of the large-scale model, using it would require the measurement of the whole state, which is of course impossible in practice due to the large number of variables to measure. An objective of this work is therefore to reduce the number of parameters in the controller. The use of Lyapunov stability and LMI constraints to study the stability and design the controller makes it possible to optimize the performance of the system. In many cases, energy functions can be used as Lyapunov functions. Without actuation, the studied soft robot converges to a natural equilibrium point where its energy is zero; this energy function is thus a Lyapunov function for the system in open loop. This allows us to design a Lyapunov function based on the system matrices and limits the complexity of the choice of the Lyapunov function.
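As a toy illustration of the linearised model (9) and of the open-loop energy argument, consider the sketch below; the matrices come from a small mass-spring-damper chain, an illustrative stand-in for the robot's FEM matrices rather than anything from the paper.

```python
# Toy instance of system (9): x = [v; d], dx/dt = [[-M^-1 D, -M^-1 K], [I, 0]] x,
# with M, D, K from a small mass-spring-damper chain (illustrative stand-in).
import numpy as np

n = 3
M = np.eye(n)
K = 50.0 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))  # SPD stiffness
D = 0.05 * M + 0.002 * K                                        # Rayleigh damping

A = np.block([[-np.linalg.solve(M, D), -np.linalg.solve(M, K)],
              [np.eye(n), np.zeros((n, n))]])
P = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), K]])    # energy weight (12)

# Open-loop Lyapunov check: A^T P + P A = -2 * blkdiag(D, 0), consistent with (25).
S = A.T @ P + P @ A
print(np.max(np.linalg.eigvalsh(S)) <= 1e-9)   # True: the energy never increases
print(np.max(np.linalg.eigvals(A).real) < 0)   # True: the equilibrium is stable
```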
However, this does not reduce the complexity of the controller design, as the matrix $L$ still contains a lot of variables to tune. The contribution of this paper is a method to deal with this issue: relying on model reduction techniques, a reduced-order state feedback control law

$$u = L_r\xi_r \quad (16)$$

is computed, and a proof of stability for the original large-scale system is given using large-scale Lyapunov functions.

A. Model Order Reduction

Before developing the control part, this subsection presents the notions of model order reduction required for the main results. There are two main branches of model order reduction methods: one based on optimisation [START_REF] Vuillemin | Poles residues descent algorithm for optimal frequency-limited H 2 model approximation[END_REF] and the other based on projection [START_REF] Gallivan | Sylvester equations and projection-based model reduction[END_REF]. As the implementation of optimisation-based model reduction is still challenging for very large scale systems, only projection methods are used here. These methods aim at computing two projectors $W_r \in \mathbb{R}^{n\times r}$ and $V_r \in \mathbb{R}^{n\times r}$, with $W_r^T V_r = I_r$, to approximate a large-scale system

$$\dot x = f(x,u), \quad x \in \mathbb{R}^n \quad (17)$$

with a reduced-order one:

$$\dot\xi_r = W_r^T f(V_r\xi_r, u), \quad \xi_r \in \mathbb{R}^r, \quad r \ll n \quad (18)$$

where

$$\xi_r = W_r^T x \quad (19)$$

is the reduced-order state and $\xi_{\bar r}$ is the neglected one:

$$\xi_{\bar r} = W_{\bar r}^T x, \quad \xi_{\bar r} \in \mathbb{R}^{\bar r}, \quad \bar r = n - r \quad (20)$$

such that:

$$x = V_r\xi_r + V_{\bar r}\xi_{\bar r} \quad (21)$$

For soft-robotics applications, to compute an approximation of the full-order state $x = \begin{pmatrix}v\\ d\end{pmatrix}$, it is interesting to use structure-preserving model order reduction. The reduced and neglected states then keep their initial structure:

$$\xi_r = \begin{pmatrix}\xi_{rv}\\ \xi_{rd}\end{pmatrix}; \quad \xi_{\bar r} = \begin{pmatrix}\xi_{\bar rv}\\ \xi_{\bar rd}\end{pmatrix} \quad (22)$$

Concretely, this requires finding projectors for the velocity and displacement vectors such that equation (21) writes:

$$\begin{pmatrix}v\\ d\end{pmatrix} = \begin{pmatrix}V_{rv} & 0 & V_{\bar rv} & 0\\ 0 & V_{rd} & 0 & V_{\bar rd}\end{pmatrix}\begin{pmatrix}\xi_{rv}\\ \xi_{rd}\\ \xi_{\bar rv}\\ \xi_{\bar rd}\end{pmatrix} \;\Leftrightarrow\; x = \mathcal{V}\Xi \quad (23)$$

where $\Xi \in \mathbb{R}^{2n}$ is the projected state. To find the projectors $\mathcal{V}$ and $\mathcal{W}$, state-of-the-art model reduction methods are available, such as Improved Balanced Truncation [START_REF] Benner | An improved numerical method for balanced truncation for symmetric second-order systems[END_REF], Iterative Interpolation Methods [START_REF] Gugercin | An iterative SVD-krylov based method for model reduction of large-scale dynamical systems[END_REF] and Proper Orthogonal Decomposition (POD). The first two methods are for now restricted to the linear case, whereas the POD is a projection-based method adapted to non-linear systems.

B. Reduced Order Model-Based Controller

To avoid measuring the whole state when controlling the large-scale system with equation (15), we aim at computing a reduced-order state feedback controller. Start from the Lyapunov function defined previously:

$$E(x) = x^T\begin{pmatrix}M & 0\\ 0 & K\end{pmatrix}x = \Xi^T\mathcal{V}^T\begin{pmatrix}M & 0\\ 0 & K\end{pmatrix}\mathcal{V}\Xi \quad (24)$$

In open loop, with $\dot x$ defined in eq. (9), the derivative of this function is:

$$\dot E(x) = x^T\begin{pmatrix}-D & 0\\ 0 & 0\end{pmatrix}x + (*) = -2v^T D v \quad (25)$$

where $(*)$ represents the transpose of the matrix preceding it. The reduced-order state $\xi_r$ is of reasonable dimension and we can design a reduced-order feedback

$$u = L_r\xi_r = \begin{pmatrix}L_{rv} & L_{rd}\end{pmatrix}\begin{pmatrix}\xi_{rv}\\ \xi_{rd}\end{pmatrix} \quad (26)$$

that fixes the performance of the closed loop.
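Before the closed-loop computation, here is a minimal sketch of how POD projectors and the reduced state (19) can be built from simulation snapshots; the random low-rank snapshot data and the toy dimensions are illustrative assumptions (the paper's snapshots come from SOFA simulations).

```python
# POD-style projector from snapshots (illustrative; real snapshots would come
# from SOFA simulations of the FEM model).
import numpy as np

rng = np.random.default_rng(0)
n, n_snap, r = 200, 60, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n_snap))  # rank-r data

U, s, _ = np.linalg.svd(X, full_matrices=False)
Vr = U[:, :r]   # orthonormal basis of the dominant snapshot subspace
Wr = Vr         # Galerkin projection: W_r^T V_r = I_r since Vr is orthonormal

x = X[:, 0]               # a full-order state
xi_r = Wr.T @ x           # reduced state, eq. (19)
x_approx = Vr @ xi_r      # the V_r * xi_r part of eq. (21)
print(np.linalg.norm(x - x_approx) / np.linalg.norm(x))  # ~1e-15 for rank-r data
```

In the structure-preserving variant (23), the same construction would be applied separately to the velocity and displacement snapshots, yielding the block projectors $V_{rv}$ and $V_{rd}$.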
With this control law, the derivative of the Lyapunov function becomes:

$$\dot E(x) = x^T\begin{pmatrix}-D & 0\\ 0 & 0\end{pmatrix}x + x^T\begin{pmatrix}H^T\\ 0\end{pmatrix}u + (*) \quad (27)$$

which, in the projected space, is equivalent to:

$$\dot E(\Xi) = \Xi^T\mathcal{V}^T\begin{pmatrix}-D & 0\\ 0 & 0\end{pmatrix}\mathcal{V}\Xi + \Xi^T\mathcal{V}^T\begin{pmatrix}H^T\\ 0\end{pmatrix}\begin{pmatrix}L_r & 0\end{pmatrix}\Xi + (*) = \Xi^T(Q+R)\Xi \quad (28)$$

where

$$Q = \begin{pmatrix}-V_{rv}^T D V_{rv} & 0 & -V_{rv}^T D V_{\bar rv} & 0\\ 0 & 0 & 0 & 0\\ -V_{\bar rv}^T D V_{rv} & 0 & -V_{\bar rv}^T D V_{\bar rv} & 0\\ 0 & 0 & 0 & 0\end{pmatrix} + (*) \quad (29)$$

and

$$R = \begin{pmatrix}V_{rv}^T H^T L_{rv} & V_{rv}^T H^T L_{rd} & 0 & 0\\ 0 & 0 & 0 & 0\\ V_{\bar rv}^T H^T L_{rv} & V_{\bar rv}^T H^T L_{rd} & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix} + (*) \quad (30)$$

with $Q, R \in \mathbb{R}^{2n\times 2n}$. The computation of the matrix $L_r$ is done within the framework of LMI constraints. The LMI is defined as:

$$\dot E(x) < -\lambda E(x) \;\Leftrightarrow\; (Q+R) < -\lambda\begin{pmatrix}M & 0\\ 0 & K\end{pmatrix} \quad (31)$$

where $\lambda$ is a positive scalar that fixes the decay rate of the function $E(x)$. The stability is proven for the large-scale system, even though the number of decision variables in the LMI is only $m \times r$, where $m$ is the number of inputs and $r$ the reduced dimension.

C. Simulation Results

The method is tested on a simulated model of a soft robot made of silicone, shown in Figure 1. This robot is actuated with 8 cables mounted on the structure, as shown in Figure 1, so that the robot can deform in any direction of space. Friction between the robot and its cables is negligible thanks to flexible tubes that guide the cables, which are included in the simulation. SOFA integrates the CGAL library to compute a FEM mesh from a visual model. In this case, the mesh is made of 1557 nodes and 5157 tetrahedral elements, as shown in Figure 2. The size $n$ of both the velocity and position vectors is $n = 3 \times 1557 = 4671$, and thus system (9) is of dimension 9342. Computing a full-order state feedback, as in (15), would have implied the computation of a matrix $L$ of dimension $8 \times 9342$, i.e. 74,736 variables. Instead, the model reduction step provides a reduced system of dimension 6, leading to a feedback matrix $L_r \in \mathbb{R}^{m\times r}$ with $m = 8$ and $r = 6$. Solving the LMI defined in equation (31) requires the computation of 48 decision variables. In the simulation experiments, the state vector, i.e. both the velocity and position vectors, is directly available, and the POD reduction method is used to obtain the reduced-order state. The simulation results are given for the following example: the robot starts from its initial shape (left of Figure 2) and converges to its rest position (right of Figure 2), where the model has been linearized. The behaviour of the robot in open loop is illustrated in Figures 3 and 4: the robot converges to its equilibrium point after some oscillations. The goal of the proposed control method is to reduce, or suppress, these oscillations and make the robot converge faster to the desired position. The control law defined in (26) is directly usable in practice, as it only requires the measurement of the state of the robot, i.e. the displacement and velocity vectors. In simulation, they are directly available and the reduced state is reconstructed by applying equation (19) at each time step. (To test on real robots, an observer should be used to compute the reduced-order state from the sensor measurements; this work is not conducted in this paper.) With this feedback controller, the oscillations vanish, as shown in Figures 5 and 6. This control law suppresses the oscillations in the robot behaviour and decreases the convergence time of the robot.
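To see the effect of a reduced feedback of the form (26) on a full-order model, consider the following toy closed-loop sketch; the hand-picked gain stands in for the LMI solution of (31) and is purely illustrative.

```python
# Closed-loop sketch: u = L_r * xi_r with xi_r = W_r^T x, applied to the full
# linear model (9). Same toy chain as before; L_r is a hand-picked stand-in
# for the gain obtained from LMI (31).
import numpy as np

n = 3
M = np.eye(n)
K = 50.0 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
D = 0.05 * M + 0.002 * K
H = np.ones((1, n))                      # a single "cable" acting on every node

A = np.block([[-np.linalg.solve(M, D), -np.linalg.solve(M, K)],
              [np.eye(n), np.zeros((n, n))]])
B = np.vstack([np.linalg.solve(M, H.T), np.zeros((n, 1))])

w = np.ones((n, 1)) / np.sqrt(n)         # keep one velocity and one displacement mode
Wr = np.block([[w, np.zeros((n, 1))], [np.zeros((n, 1)), w]])
Lr = np.array([[-5.0, -2.0]])            # illustrative reduced gain [L_rv, L_rd]

A_cl = A + B @ Lr @ Wr.T                 # closed loop acting on the FULL state
print(np.max(np.linalg.eigvals(A).real), np.max(np.linalg.eigvals(A_cl).real))
# The dominant pole moves further into the left half-plane: the poorly damped
# oscillatory mode is damped, as in Figures 5-6 versus Figures 3-4.
```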
However, the gain $L_{rd}$ is near zero and does not seem to have an impact on the closed-loop performance. Looking at the definition of the matrices $Q$ and $R$ in (29) and (30), one can see zero entries on the diagonal corresponding to the displacement quadratic term. This brings conservatism in the choice of the matrix $L_{rd}$; the next section proposes an extension of the Lyapunov function (24) to handle this issue.

IV. REDUCTION OF CONSERVATISM

A. Choice of Lyapunov function

Adding parameters to the Lyapunov function reduces the conservatism of the results obtained with the previous Lyapunov function. The following result holds:

Theorem 1: The following functions are Lyapunov functions for system (9):

$$\mathcal{V}(x) = x^T\begin{pmatrix}(1+\epsilon)M & \epsilon M\\ \epsilon M & (1+\epsilon)K + \epsilon D\end{pmatrix}x \quad (32)$$

for all scalars $\epsilon$ such that:

$$0 < \epsilon < \frac{\alpha}{1-\alpha} \quad (33)$$

where $\alpha$ is the mass-damping coefficient of the material (see the appendix for details). In open loop, the derivative of $\mathcal{V}(x)$ becomes:

$$\dot{\mathcal{V}}(x) = 2x^T\begin{pmatrix}-(1+\epsilon)D + \epsilon M & 0\\ 0 & -\epsilon K\end{pmatrix}x \quad (34)$$

By adding the parameter $\epsilon$ to the Lyapunov function, we add a non-zero entry on the diagonal of its derivative.

Fig. 5. Results in closed loop using the feedback gains computed via eq. (31). Left: norm of the displacement (cm); right: norm of the velocity (cm/s²).

Fig. 6. Reduced-order state in closed loop using eq. (31).

B. Closed-loop algorithm

The derivative of the Lyapunov function along the trajectories of the closed loop writes:

$$\dot{\mathcal{V}}(x) = x^T\left(\begin{pmatrix}-(1+\epsilon)D + \epsilon M & 0\\ 0 & -\epsilon K\end{pmatrix} + \begin{pmatrix}(1+\epsilon)H^T\\ \epsilon H^T\end{pmatrix}L\right)x + (*) \quad (35)$$

which, in the projected space, is equivalent to:

$$\Xi^T\mathcal{V}^T\left(\begin{pmatrix}-(1+\epsilon)D + \epsilon M & 0\\ 0 & -\epsilon K\end{pmatrix} + \begin{pmatrix}(1+\epsilon)H^T\\ \epsilon H^T\end{pmatrix}L\right)\mathcal{V}\Xi + (*) \quad (36)$$

from which the design of the controller is made possible thanks to the following theorem:

Theorem 2: System (9) with feedback (26) is stable with decay rate $\lambda$ if:

$$(S + T) < -\lambda\begin{pmatrix}(1+\epsilon)M & \epsilon M\\ \epsilon M & (1+\epsilon)K + \epsilon D\end{pmatrix} \quad (37)$$

with

$$S = \mathcal{V}^T\begin{pmatrix}-(1+\epsilon)D + \epsilon M & 0\\ 0 & -\epsilon K\end{pmatrix}\mathcal{V} + (*) \quad (38)$$

which also writes

$$S = \begin{pmatrix}-(1+\epsilon)D_{rv,rv} + \epsilon M_{rv,rv} & 0 & -(1+\epsilon)D_{rv,\bar rv} + \epsilon M_{rv,\bar rv} & 0\\ 0 & -\epsilon K_{rd,rd} & 0 & -\epsilon K_{rd,\bar rd}\\ -(1+\epsilon)D_{\bar rv,rv} + \epsilon M_{\bar rv,rv} & 0 & -(1+\epsilon)D_{\bar rv,\bar rv} + \epsilon M_{\bar rv,\bar rv} & 0\\ 0 & -\epsilon K_{\bar rd,rd} & 0 & -\epsilon K_{\bar rd,\bar rd}\end{pmatrix} + (*)$$

where $D_{rv,rv} = V_{rv}^T D V_{rv}$, $D_{rv,\bar rv} = V_{rv}^T D V_{\bar rv}$, and similarly for the other blocks, and

$$T = \mathcal{V}^T\begin{pmatrix}(1+\epsilon)H^T\\ \epsilon H^T\end{pmatrix}L\mathcal{V} + (*) = \begin{pmatrix}(1+\epsilon)V_{rv}^T H^T L_{rv} & (1+\epsilon)V_{rv}^T H^T L_{rd} & 0 & 0\\ \epsilon V_{rd}^T H^T L_{rv} & \epsilon V_{rd}^T H^T L_{rd} & 0 & 0\\ (1+\epsilon)V_{\bar rv}^T H^T L_{rv} & (1+\epsilon)V_{\bar rv}^T H^T L_{rd} & 0 & 0\\ \epsilon V_{\bar rd}^T H^T L_{rv} & \epsilon V_{\bar rd}^T H^T L_{rd} & 0 & 0\end{pmatrix} + (*) \quad (39)$$

Remark 2: Sketch of the proof. $\mathcal{V}(x)$ is a Lyapunov function defined in Theorem 1, and it holds that:

$$(37) \;\Leftrightarrow\; \dot{\mathcal{V}}(x) < -\lambda\mathcal{V}(x) \quad (40)$$

Moreover, $L_r = 0$ is a solution of the previous LMI for $\lambda = 0$, which makes it possible to use optimization algorithms. Using the Lyapunov function $\mathcal{V}(x)$, more flexibility is given in the choice of the matrix $L_r$ than in (31), with the same number of decision variables.

C. Simulation Experiments

The same experiment is done with the robot presented in Figure 1, but with the control method of this section, i.e. Theorem 2. The objective remains the same: the oscillations of the open-loop behaviour of Figures 3 and 4 should be attenuated by the controller. The closed-loop results are presented in Figures 7 and 8. It is clear that the oscillations are removed with this control law, and in this case the tuning of the decay rate of the Lyapunov function is easier than with the controller defined in (31). During the first experiments, whose results are shown in Figure 5, the displacement vector converges after t = 11 s, whereas in Figure 7 it converges at t = 6 s. A more complete control of the robot's dynamics is also made possible using the method presented in this section.
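A quick numerical check of the Theorem 1 bound (33) can be done as follows; the Rayleigh coefficients are illustrative assumptions.

```python
# Numerical check of Theorem 1: the matrix of V(x) in (32) should be positive
# definite for 0 < eps < alpha / (1 - alpha), with D = alpha*M + beta*K.
import numpy as np

n = 3
M = np.eye(n)
K = 50.0 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
alpha, beta = 0.05, 0.002            # illustrative Rayleigh coefficients
D = alpha * M + beta * K

eps_max = alpha / (1 - alpha)        # bound (33)
for eps in (0.5 * eps_max, 0.99 * eps_max, 2.0 * eps_max):
    P = np.block([[(1 + eps) * M, eps * M],
                  [eps * M, (1 + eps) * K + eps * D]])
    print(eps < eps_max, np.min(np.linalg.eigvalsh(P)) > 0)
# Inside the bound, P is always positive definite; the bound is only sufficient,
# so P can remain positive definite somewhat beyond it.
```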
The large-scale Lyapunov function $\mathcal{V}(x)$ guarantees the stability of the large-scale model, but here both the reduced velocity and the reduced displacement have a direct impact on the closed-loop performance. We also have more flexibility in the tuning of the controller.

Remark 3: The resolution of the LMI (37), with 48 variables and 9342 × 9342 constraints, took 75 minutes on an Intel Core i7 CPU.

V. CONCLUSIONS

We presented a generic method that makes it possible to control the dynamical behaviour of soft robots. The first benefit of this work is the use of model order reduction methods to provide the user with a reduced-order system that approximates a large-scale, accurate model of the robot. Thanks to the reduction of the state variable, the design of a reduced-order state feedback becomes tractable. The second advantage of our work is the use of a Lyapunov function to prove the stability of the large-scale closed-loop system. The simulation results provided in this paper show the performance of our approach, which is generic in the sense that it is applicable to any robot with a stable equilibrium point, as long as an FEM mesh of the robot is available. A few steps are still required before testing on real prototypes, such as the one presented in Figure 1. Our next move is the integration of an observer into the control design, to make it possible to reconstruct the reduced state from measurements. For now, the control design allows the user to steer a given soft robot from any initial shape to a desired position, around which the model has been linearized. An extension of the approach would be needed to control the robot from any initial shape to any desired position. Further research could also focus on the linearization assumption required by the method: removing the linearization step so as to design a controller for the initial non-linear system (8) could lead to a controller with a guaranteed region of convergence.

APPENDIX: PROOF OF THEOREM 1

The continuous function $\mathcal{V}(x)$ is a Lyapunov function in open loop for system (9) if:
1) $\mathcal{V}(x) > 0$;
2) $\mathcal{V}(x)$ is radially unbounded;
3) $\dot{\mathcal{V}}(x) < 0$.

• Proof of 1)

By definition, the matrices $M$, $K$ and $D$ are positive definite, and the damping matrix follows the Rayleigh definition $D = \alpha M + \beta K$, where $\alpha$ and $\beta$ are respectively the mass-proportional and stiffness-proportional damping coefficients of the material; both are positive scalars lower than one. It holds that:

$$\mathcal{V}(x) > 0 \;\Leftrightarrow\; \begin{pmatrix}(1+\epsilon)M & \epsilon M\\ \epsilon M & (1+\epsilon)K + \epsilon D\end{pmatrix} > 0$$

Using the Schur complement, this is equivalent to:

$$(1+\epsilon)M > 0 \quad\text{and}\quad (1+\epsilon)K + \epsilon D - \frac{\epsilon^2}{1+\epsilon}M > 0$$

i.e., substituting $D = \alpha M + \beta K$:

$$\epsilon > -1 \quad\text{and}\quad (1+\epsilon+\epsilon\beta)K + \left(\epsilon\alpha - \frac{\epsilon^2}{1+\epsilon}\right)M > 0$$

Sufficient conditions are:

$$\epsilon > -\frac{1}{1+\beta} \;\Rightarrow\; (1+\epsilon+\epsilon\beta)K > 0$$

$$0 < \epsilon < \frac{\alpha}{1-\alpha} \;\Rightarrow\; \left(\alpha - \frac{\epsilon}{1+\epsilon}\right) > 0$$

The following condition is therefore a sufficient condition for $\mathcal{V}(x)$ to be positive definite in open loop for system (9):

$$0 < \epsilon < \frac{\alpha}{1-\alpha} \quad (41)$$

• Proof of 2) is trivial.

• Proof of 3)

$$\dot{\mathcal{V}}(x) < 0 \;\Leftrightarrow\; -2\begin{pmatrix}(1+\epsilon)D - \epsilon M & 0\\ 0 & \epsilon K\end{pmatrix} < 0$$

With $\epsilon > 0$, $\epsilon K > 0$ directly follows. One just needs $(1+\epsilon)D - \epsilon M > 0$, for which condition (41) is also sufficient.
The damping matrix is defined using Rayleigh definition:D = αM + βKwhere α and β are respectively the mass-proportional and the stiffness-proportional damping coefficients of the material. Both are positive scalars lower than one. It holds:V(x) > 0 ⇔ > -1; (1 + )K + (αM + βK) -(1 + + β)K + (α -1 + )M > 0Sufficient conditions are: Univ. Valenciennes, UMR 8201 -LAMIH F-59313 Valenciennes, France Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 -CRIStAL -Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
24,622
[ "737611", "7036", "181665", "4013" ]
[ "1303", "410272", "409871", "410272", "409871", "1303", "409871", "410272" ]
00174770
en
[ "shs" ]
2024/03/05 22:32:07
2002
https://shs.hal.science/halshs-00174770/file/tallon.pdf
Alain Chateauneuf email: chateaun@univ-paris1.fr Rose-Anne Dana email: dana@ceremade.dauphine.fr Jean-Marc Tallon email: jmtallon@univ-paris1.fr Optimal Risk-Sharing Rules and Equilibria with Choquet-Expected-Utility Keywords: Choquet expected utility, comonotonicity, risk-sharing, equilibrium This paper explores risk-sharing and equilibrium in a general equilibrium set-up wherein agents are non-additive expected utility maximizers. We show that when agents have the same convex capacity, the set of Pareto-optima is independent of it and identical to the set of optima of an economy in which agents are expected utility maximizers and have same probability. Hence, optimal allocations are comonotone. This enables us to study the equilibrium set. When agents have different capacities, matters are much more complex (as in the vNM case). We give a general characterization and show how it simplifies when Pareto-optima are comonotone. We use this result to characterize Pareto-optima when agents have capacities that are the convex transform of some probability distribution. comonotonicity of Paretooptima is also shown to be true in the two-state case if the intersection of the core of agents' capacities is non-empty; Pareto-optima may then be fully characterized in the two-agent, two-state case. This comonotonicity result does not generalize to more than two states as we show with a counter-example. Finally, if there is no-aggregate risk, we show that non-empty core intersection is enough to guarantee that optimal allocations are full-insurance allocation. This result does not require convexity of preferences. Introduction In this paper, we explore the consequences of Choquet-expected-utility on risk-sharing and equilibrium in a general equilibrium set-up. There has been over the last fifteen years an extensive research on new decision-theoretic models [START_REF] Karni | Utility theory with uncertainty[END_REF] for a survey), and a large part of this research has been devoted to the Choquet-expected-utility model introduced by [START_REF] Schmeidler | Subjective probability and expected utility without additivity[END_REF]. However, applications to an economy-wide set-up have been relatively scarce. In this paper, we derive the implications of assuming such preference representation at the individuals level on the characteristics of Pareto optimal allocations. This, in turn, allows us to (partly) characterize equilibrium allocations under that assumption. Choquet-expected-utility (CEU henceforth) is a model that deals with situations in which objective probabilities are not given and individuals are a priori not able to derive (additive) subjective probabilities over the state space. It is well-suited to represent agents' preferences in situation where "ambiguity"(as observed in the Ellsberg experiments) is a pervasive phenomenon1 . This model departs from expected-utility models in that it relaxes the sure-thing principle. Formally, the (subjective) expected-utility model is a particular subclass of the CEU of model. Our paper can then be seen as an exploration of how results established in the von Neumann-Morgenstern (vNM henceforth) case are modified when allowing for more general preferences, whose form rests on sound axiomatic basis as well. Indeed, since CEU can be thought of as representing situations in which agents are faced with "ambiguous events", it is interesting to study how the optimal social risk-sharing rule in the economy is affected by this ambiguity and its perception by agents. 
We focus on a pure-exchange economy in which agents are uncertain about future endowments and consume after uncertainty is resolved. Agents are CEU maximizers and characterized by a capacity and a utility index (assumed to be strictly concave). When agents are vNM maximizers and have the same probability on the state space, it is well-known since [START_REF] Borch | Equilibrium in a reinsurance market[END_REF] that agents' optimal consumptions depend only on aggregate risk, and is a non-decreasing function of aggregate resources : at an optimum, an agent bears only (some of) the aggregate risk. It is easy to fully characterize such Pareto Optima (see Eeck-houdt and Gollier [1995]). More generally, in the case of probabilized risk, [START_REF] Landsberger | Co-monotone allocations, Bickel-Lehmann dispersion and the Arrow-Pratt measure of risk aversion[END_REF] and [START_REF] Chateauneuf | A review of some results related to comonotonicity[END_REF] have shown that Pareto Optima (P.O. henceforth) are comonotone if agents' preferences satisfy second-order stochastic dominance. This, in particular, is true in the rank-dependent-expected-utility case. The first goal of this paper is to provide a characterization of the set of P.O. and equilibria in the rank-dependent-expected-utility case. Our second and main aim is to assess whether the results obtained in the case of risk are robust when one moves to a situation of non-probabilized uncertainty with Choquet-expected-utility, in which there is some consensus. We first study the case where all agents have the same capacity. We show that if this capacity is convex, the set of P.O. is the same as that of an economy with vNM agents whose beliefs are described by a common probability. Furthermore, it is independent of that capacity. As a consequence, P.O. are easily characterized in this set-up, and depend only on aggregate risk (and utility index). Thus, if uncertainty is perceived by all agents in the same way, the optimal risk-bearing is not affected (compared to the standard vNM case) by this ambiguity. The equivalence proof relies heavily on the fact that, if agents are vNM maximizers with identical beliefs, optimal allocations are comonotone and independent of these beliefs : each agent's consumption moves in the same direction as aggregate endowments. This equivalence result is in the line of a result on aggregation in appendix C of [START_REF] Epstein | Intertemporal asset pricing under Knightian uncertainty[END_REF]. Finally, the information given by the optimality analysis is used to study the equilibrium set. A qualitative analysis of the equilibrium correspondence may be found in [START_REF] Dana | Pricing rules when agents have non-additive expected utility and homogeneous expectations[END_REF]. When agents have different capacities, matters are much more complex. To begin with, in the vNM case, we don't know of any conditions ensuring that P.O. are comonotone in that case. However, in the CEU model, intuition might suggest that if agents have capacities whose cores have some probability distribution in common, P.O. are then comonotone. This intuition is unfortunately not correct in general, as we show with a counterexample. As a result, when agents have different capacities, whether P.O. allocations are comonotone depends on the specific characteristics of the economy. On the other hand, if P.O. are comonotone, they can be further characterized, although not fully. 
It is also in general non-trivial to use that information to infer properties of equilibria. This leads us to study cases for which it is possible to prove that P.O. allocations are comonotone. A first case is when the agents' capacities are the convex transform of some probability distribution. We then know from [START_REF] Chateauneuf | A review of some results related to comonotonicity[END_REF] and [START_REF] Landsberger | Co-monotone allocations, Bickel-Lehmann dispersion and the Arrow-Pratt measure of risk aversion[END_REF] that P.O. are comonotone. Our analysis then enables us to be more specific than they are about the optimal risk-bearing arrangements and equilibria of such an economy. The second case is the simple case in which there are only two states (as in simple insurance models à la [START_REF] Mossin | Aspects of rational insurance purchasing[END_REF]). The non-emptiness of the cores' intersection is then enough to prove that P.O. allocations are comonotone, although it is not clear what the actual optimal risk-sharing arrangement looks like. If we specify the model further and assume there are only two agents, the risk-sharing arrangement can be fully characterized. Depending on the specifics of the agents' characteristics, it is either a subset of the P.O. of the economy in which agents each have the probability that minimizes, among the probability distributions in the core, the expected value of aggregate endowments, or the less pessimistic agent insures the other. (This last risk-sharing arrangement typically cannot occur in a vNM set-up with different beliefs and strictly concave utility functions.) The equilibrium allocation in this economy can also be characterized. Finally, we consider the situation in which there exists only individual risk, a case first studied by [START_REF] Malinvaud | The allocation of individual risks in large markets[END_REF][START_REF] Malinvaud | Markets for an exchange economy with individual risks[END_REF]. Comonotonicity is then equivalent to full insurance. We show that a condition for optimal allocations to be full-insurance allocations is that the intersection of the cores of the agents' capacities is non-empty, a condition that can be intuitively interpreted as minimum consensus. This full-insurance result easily generalizes to the multi-dimensional set-up. Using this result, we establish that any equilibrium of particular vNM economies is an equilibrium of the CEU economy. These vNM economies are those in which agents have the same characteristics as in the CEU economy and have common beliefs given by a probability in the intersection of the cores of the capacities of the CEU economy. When the capacities are convex, any equilibrium of the CEU economy is of that type. This equivalence result between equilibria of the CEU economy and those of the associated vNM economies suggests that equilibrium is indeterminate, an idea further explored in [START_REF] Tallon | Risque microéconomique, aversion à l'incertitude et indétermination de l'équilibre[END_REF] and [START_REF] Dana | Pricing rules when agents have non-additive expected utility and homogeneous expectations[END_REF]. The rest of the paper is organized as follows. Section 2 establishes notation and defines the characteristics of the pure-exchange economy that we deal with in the rest of the paper. In particular, we recall properties of the Choquet integral. We also recall some useful results on optimal risk-sharing in vNM economies.
Section 3 is the heart of the paper and deals with the general case of convex capacities. In a first sub-section, we assume that agents have identical capacities, while the second sub-section deals with the case where agents have different capacities. Section 4 is devoted to the study of two particular cases of interest, namely the case where agents' capacities are the convex transform of a common probability distribution and the two-state case. The case of no-aggregate risk in a multi-dimensional set-up is studied in section 5. Notation, definitions and useful results We consider an economy in which agents make decisions before uncertainty is resolved. The economy is a standard two-period pure-exchange economy, but for agents' preferences. There are k possible states of the world, indexed by superscript j. Let S be the set of states of the world and A the set of subsets of S. There are n agents indexed by subscript i. We assume there is only one good2 . C j i is the consumption by agent i in state j and C i = (C 1 i , . . . , C k i ). Initial endowments are denoted w i = (w 1 i , . . . , w k i ). w = n i=1 w i is the aggregate endowment. We will focus on Choquet-Expected-Utility. We assume the existence of a utility index U i : IR + → IR that is cardinal, i.e. defined up to a positive affine transformation. Throughout the paper U i is taken to be strictly increasing and strictly concave. When needed, we will assume differentiability together with the usual Inada condition: Assumption U1: ∀i, U i is C 1 and U ′ i (0) = ∞. Before defining CEU (the Choquet integral of U with respect to a capacity), we recall some properties of capacities and their core. Capacities and the core A capacity is a set function ν : A → [0, 1] such that ν(∅) = 0, ν(S) = 1, and, for all A, B ∈ A, A ⊂ B ⇒ ν(A) ≤ ν(B). We will assume throughout that the capacities we deal with are such that 1 > ν(A) > 0 for all A ∈ A, A = S, A = ∅. A capacity ν is convex if for all A, B ∈ A, ν(A ∪ B) + ν(A ∩ B) ≥ ν(A) + ν(B). The core of a capacity ν is defined as follows core(ν) =    π ∈ IR k + | j π j = 1 and π(A) ≥ ν(A), ∀A ∈ A    where π(A) = j∈A π j . Core(ν) is a compact, convex set which may be empty. Since 1 > ν(A) > 0 ∀A ∈ A, A = S, A = ∅, any π ∈ core(ν) is such that π ≫ 0, (i.e., π j > 0 for all j). It is well-known that when ν is convex, its core is non-empty. It is equally well-known that non-emptiness of the core does not require convexity of the capacity. If there are only two states however, it is easy to show that core(ν) = ∅ if and only if ν is convex. We shall provide an alternative definition of the core in the following sub-section. Choquet-expected-utility We now turn to the definition of the Choquet integral of f ∈ IR S : f dν ≡ E ν (f ) = 0 -∞ (ν(f ≥ t) -1)dt + ∞ 0 ν(f ≥ t)dt Hence, if f j = f (j) is such that f 1 ≤ f 2 ≤ . . . ≤ f k : f dν = k-1 j=1 [ν({j, . . . , k}) -ν({j + 1, . . . , k})] f j + ν({k})f k As a consequence, if we assume that an agent consumes C j in state j, and that C 1 ≤ . . . ≤ C k , then his preferences are represented by: v (C) = [1 -ν({2, .., k})] U (C 1 )+... [ν({j, .., k}) -ν({j + 1, .., k})] U (C j )+...ν({k})U (C k ) Observe that, if we keep the same ranking of the states as above, then v (C) = E π U (C), where C is here the random variable giving C j in state j, and the probability π is defined by: π j = ν({j, . . . , k})ν({j + 1, . . . , k}), j = 1, . . . , k -1 and π k = ν({k}). 
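The Choquet calculus above is easy to exercise numerically. As an illustration only (not part of the original paper), here is a minimal Python sketch, under our own convention of encoding a capacity as a dictionary from non-empty frozensets of states to [0, 1] with ν(S) = 1; it computes the Choquet integral through the ranked-sum formula and tests convexity of a capacity:

```python
import itertools

def choquet_integral(f, nu):
    """Choquet integral of f (dict: state -> value) w.r.t. the capacity nu
    (dict: non-empty frozenset of states -> [0, 1], with nu of the full
    state set equal to 1).  Uses sum_j [nu({j,...,k}) - nu({j+1,...,k})] f_j
    after relabelling states so that f_1 <= ... <= f_k."""
    states = sorted(f, key=f.__getitem__)                  # ascending in f
    total, upper = 0.0, frozenset(states)
    for s in states:
        rest = upper - {s}
        total += (nu[upper] - nu.get(rest, 0.0)) * f[s]    # nu(empty set) = 0
        upper = rest
    return total

def is_convex(nu, states):
    """Check nu(A ∪ B) + nu(A ∩ B) >= nu(A) + nu(B) for all events A, B."""
    full = {frozenset(): 0.0, **nu}
    events = [frozenset(A) for r in range(len(states) + 1)
              for A in itertools.combinations(states, r)]
    return all(full[A | B] + full[A & B] >= full[A] + full[B] - 1e-12
               for A in events for B in events)

# Example: S = {1, 2, 3} and the convex capacity nu(A) = (|A| / 3) ** 2.
S = [1, 2, 3]
nu = {frozenset(A): (len(A) / 3) ** 2
      for r in range(1, 4) for A in itertools.combinations(S, r)}
assert is_convex(nu, S)
print(choquet_integral({1: 5.0, 2: 1.0, 3: 3.0}, nu))      # approx. 2.111
```

With f ranked increasingly, the loop reproduces exactly the weights ν({j, . . . , k}) − ν({j + 1, . . . , k}) that define the implied probability π above.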
If U is differentiable and ν is convex, the function v : IR k + → IR defined above is continuous, strictly concave and subdifferentiable. Let ∂v(C) = {a ∈ IR k | v(C) -v(C ′ ) ≥ a(C -C ′ ), ∀C ′ ∈ IR k + } denote the subgradient of the function v at C. In the open set C ∈ IR k + | 0 < C 1 < C 2 < . . . < C k , v is differentiable. If 0 < C 1 = C 2 = . . . = C k then, ∂v(C) is proportional to core(ν). The following proposition gives an alternative representation of core(ν) that will be useful in section 5. Proposition 2.1 core(ν) = π ∈ IR k + | k j=1 π j = 1 and v(C) ≤ E π (U (C)) , ∀ C ∈ IR k + Proof: Let π ∈ core(ν) and assume C 1 ≤ C 2 ≤ . . . ≤ C k . Then, v(C) = U (C 1 )+ν({2, . . . , k})(U (C 2 )-U (C 1 ))+. . .+ν({k})(U (C k )-U (C k-1 )) Hence, since π ∈ core(ν), and therefore ν(A) ≤ π(A) for all events A: v(C) ≤ U (C 1 )+ k j=2 π j (U (C 2 )-U (C 1 ))+. . .+π k (U (C k )-U (C k-1 )) = E π (U (C)) which proves one inclusion. To prove the other inclusion, let π ∈ π ∈ IR k + | k j=1 π j = 1, v(C) ≤ E π (U (C)) , ∀ C . Normalize U so that U (C) = 0 and U ( C) = 1 for some C and C. Let A ∈ A and C A = C1 A c + C1 A . Since v(C A ) = ν(A) ≤ E π (U (C A )) = π(A), one gets π ∈ core(ν). A corollary is that if core(ν) = ∅, then v(C) ≤ min π∈core(ν) E π U (C). 3 comonotonicity We finally define comonotonicity of a class of random variables ( C i ) i=1,...,n . This notion, which has a natural interpretation in terms of mutualization of risks, will be crucial in the rest of the analysis. Definition 1 A family ( C i ) i=1,...,n of random variables on S is a class of comonotone functions if for all i, i ′ , and for all j, j ′ , C j i -C j ′ i C j i ′ -C j ′ i ′ ≥ 0. An alternative characterization is given in the following proposition (see [START_REF] Denneberg | Non-additive measures and integral[END_REF]): Proposition 2.2 A family ( C i ) i=1,...,n of non-negative random variables on S is a class of comonotone functions if and only if for all i, there exists a function g i : IR + → IR + , non-decreasing and continuous, such that for all x ∈ IR + , n i=1 g i (x) = x and C j i = g i n m=1 C j m for all j. The family ( C i ) i=1,...,n is comonotone if they all vary in the same direction as their sum. Optimal risk-sharing with vNM agents We briefly recall here some well-known results on optimal risk-sharing in the traditional vNM case (see e.g. [START_REF] Eeckhoudt | Risk, evaluation, management and sharing[END_REF] or [START_REF] Magill | Theory of incomplete markets[END_REF]). Consider first the case of identical vNM beliefs. Agents have the same probability π = (π 1 , . . . , π k ), π j > 0 for all j, over the states of the world and a utility function defined by v i (C i ) = k j=1 π j U i (C j i ), i = 1, . . . , n. The following proposition recalls that the P.O. allocations of this economy are independent of the (common) probability, depend only on aggregate risk (and utility indices), and are comonotone4 . Proposition 2.3 Let (C i ) n i=1 ∈ IR kn + be a P.O. allocation of an economy in which agents have vNM utility index and identical additive beliefs π. Then, it is a P.O. of an economy with additive beliefs π ′ (and same vNM utility index). Furthermore, (C i ) n i=1 is comonotone. As a consequence of propositions 2.2 and 2.3, it is easily seen that, at a P.O. allocation, agent i's consumption C i is a non-decreasing function of w. If agents have different probabilities π j i , j = 1, . . . , k, i = 1, . . . , n, over the states of the world, it is easily seen that P.O. 
now depend on the probabilities and on aggregate risk. It is actually easy to find examples in which P.O. are not necessarily comonotone (take for instance a model without aggregate risk in which agents have different beliefs : the P.O. allocations are not state-independent and therefore are not comonotone). 3 Optimal risk-sharing and equilibrium with CEU agents: the general convex case In this section we deal first with optimal risk-sharing and equilibrium analysis when agents have identical convex capacities and then move on to different convex capacities. Optimal risk-sharing and equilibrium with identical capacities Assume here that all agents have the same capacity ν over the states of the world and that this capacity is convex. We denote by E 1 the exchange economy in which agents are CEU with capacity ν and utility index U i , i = 1, . . . , n. Define D ν (w) as follows: D ν (w) = {π ∈ core(ν) | E π w = E ν w} The set D ν (w) is constituted of the probabilities that "minimize the expected value of the aggregate endowment". In particular, if w 1 < w 2 . . . < w k , D ν (w) contains only π = (π 1 , . . . , π k ) with π j = ν({j, j + 1, . . . , k})ν({j + 1, . . . , k}) for all j < k and π k = ν({k}). If w 1 = . . . = w k , the set D ν (w) is equal to core(ν). It is important to note that the Choquet integral of any random variable that is non-decreasing with w is actually the integral of that random variable with respect to a probability distribution in D ν (w). In particular, we have the following lemma. Lemma 3.1 Let ν be a convex capacity, U an increasing function and C ∈ IR k + be a non-decreasing function of w. Then, v(C) = E π U (C) for any π ∈ D ν (w). Proof: Since C is non-decreasing in w, if w 1 ≤ . . . ≤ w k , then C 1 ≤ . . . ≤ C k . Furthermore, w j = w j ′ implies C j = C j ′ . The same relationship holds between w j k j=1 and U (C j ) k j=1 , U being increasing. It is then simply a matter of writing down the expression of the Choquet integral to see that v(C) = E π U (C) for any π ∈ D ν (w). Proposition 3.1 The allocation (C i ) n i=1 ∈ IR kn + is a P.O. of E 1 if and only if it is a P.O. of an economy in which agents have vNM utility index U i , i = 1, . . . , n and identical probability over the set of states of the world. In particular, P.O. are comonotone. Proof: Since the P.O. of an economy with vNM agents with same probability are independent of the probability, we can choose w.l.o.g. this probability to be π ∈ D ν (w). Let (C i ) n i=1 be a P.O. of the vNM economy. Being a P.O., this allocation is comonotone. By proposition 2.2, C i is a non-decreasing function of w. Hence, applying lemma 3.1, v i (C i ) = E π [U i (C i )], i = 1, . . . , n. If it were not a P.O. of E 1 , there would exist an allocation (C ′ 1 , C ′ 2 . . . C ′ n ) such that v i (C ′ i ) = E ν [U i (C ′ i )] ≥ v i (C i ) = E π [U i (C i )] for all i, and with at least one strict inequality. Since E π [U i (C ′ i )] ≥ E ν [U i (C ′ i )] for all i, this contradicts the fact that (C i ) n i=1 is a P.O. of ′ i ) n i=1 such that E π [U i (C ′ i )] ≥ E π [U i (C i )] ≥ v i (C i ) for all i, and with a strict inequality for at least an agent. (C ′ i ) n i=1 being Pareto optimal, it is comonotone and it follows by proposition 2.2 that C ′ i is a non-decreasing function of w. Hence, applying lemma 3.1, v i (C ′ i ) = E π [U i (C ′ i )], i = 1, . . . , n. This contradicts the fact that (C i ) n i=1 is a P.O. of E 1 . Note that this proposition not only shows that P.O. 
allocations are comonotone in the CEU economy, but also completely characterizes them. We may now also fully characterize the equilibria of E 1 . Proposition 3.2 (i) Let (p ⋆ , C ⋆ ) be an equilibrium of a vNM economy in which all agents have utility index U i and beliefs given by π ∈ D ν (w), then (p ⋆ , C ⋆ ) is an equilibrium of E 1 . (ii) Conversely, assume U1. If (p ⋆ , C ⋆ ) is an equilibrium of E 1 , then there exists π ∈ D ν (w) such that (p ⋆ , C ⋆ ) is an equilibrium of the vNM economy with utility index U i and probability π ∈ D ν (w). Proof: See [START_REF] Dana | Pricing rules when agents have non-additive expected utility and homogeneous expectations[END_REF]. Corollary 3.1 If U1 is fulfilled and w 1 < w 2 < . . . < w k , then the equilibria of E 1 are identical to those of a vNM economy in which agents have utility index U i , i = 1, . . . , n and same probabilities over states π j = ν{j, j + 1, . . . , k} -ν{j + 1, . . . , k}, j < k and π k = ν({k}). Hence, (w, C ⋆ i , i = 1, ..., n) are comonotone. To conclude this sub-section, observe that P.O. allocations in the CEU economy inherits all the nice properties of P.O. allocations in a vNM economy with identical beliefs. In particular, P.O. allocations are independent of the capacity. However, the equilibrium allocations in the vNM economy do depend on beliefs, and it is not trivial to assess the relationship between the equilibrium set of a vNM economy with identical beliefs and the equilibrium set of the CEU economy E 1 . Note for instance that E 1 has "as many equilibria" as there are probability distributions in the set D ν (w). If D ν (w) consists of a unique probability distribution, equilibria of E 1 are the equilibria of the vNM economy with beliefs equal to that probability distribution. On the other hand, if D ν (w) is not a singleton, it is a priori not possible to assimilate all the equilibria of E 1 with equilibria of a given vNM economy. Optimal risk-sharing and equilibrium with different capacities We next consider an economy in which agents have different convex capacities. Denote the economy in which agents are CEU with capacity ν i and utility index U i , i = 1, . . . , n by E 2 . We first give a general characterization of the set of P.O., when no further restrictions are imposed on the economy. We then show that this general characterization can be most usefully applied when one knows that P.O. are comonotone. Proposition 3.3 (i) Let (C i ) n i=1 ∈ IR kn + be a P.O. of E 2 such that for all i, C j i = C ℓ i , j = ℓ. Let π i ∈ core(ν i ) be such that E ν i U i (C i ) = E π i U i (C i ) for all i. Then (C i ) n i=1 is a P.O. of an economy in which agents have vNM utility index U i and probabilities π i , i = 1, . . . , n. (ii) Let π i ∈ core(ν i ), i = 1, . . . , n and (C i ) n i=1 be a P.O. of the vNM economy with utility index U i and probabilities π i , i = 1, . . . , n. If E ν i U i (C i ) = E π i U i (C i ) for all i, then (C i ) n i=1 is a P.O. of E 2 . Proof: (i) If (C i ) n i=1 is not a P.O. of a vNM economy, then there exists (C ′ i ) n i=1 such that E π i U i (C ′ i ) ≥ E ν i U i (C i ) with a strict inequality for some i. Since t i C ′ i + (1 -t i )C i C i , ∀t i ∈ [0, 1] , by choosing t i small, one may assume w.l.o.g. that C ′ i is ranked in the same order as C i . Hence, E π i U i (C ′ i ) = E ν i U i (C ′ i ) for all i, which contradicts the fact that (C i ) n i=1 is a P.O. of E 2 . 
(ii) Assume there exists a feasible allocation ( C ′ i ) n i=1 such that E ν i U i (C ′ i ) ≥ E ν i U i (C i ) with a strict inequality for at least some i. Then, E π i U i (C ′ i ) ≥ E π i U i (C i ) with a strict inequality for at least some i, which leads to a contradiction. We now illustrate the implications of this proposition on a simple example. Example 3.1 Consider an economy with two agents, two states and one good, that thus can be represented in an Edgeworth box. Divide the latter into three zones : • zone (1), where C 1 1 > C 2 1 and C 1 2 < C 2 2 • zone (2), where C 1 1 < C 2 1 and C 1 2 < C 2 2 • zone (3), where C 1 1 < C 2 1 and C 1 2 > C 2 2 In zone (1), everything is as if agent 1 had probability (ν 1 1 , 1ν 1 1 ) and agent 2, probability (1ν 2 2 , ν 2 2 ). In zone (2), agent 1 uses (1ν 2 1 , ν 2 1 ) and agent 2, (1ν 2 2 , ν 2 2 ), while in zone (3), agent 1 uses (1ν 2 1 , ν 2 1 ) and agent 2, (ν 1 2 , 1ν 1 2 ). In order to use (ii) of proposition 3.3, we draw the three contract curves, corresponding to the P.O. in the vNM economies in which agents have the same utility index U i and the three possible couples of probability. Label (a), (b) and (c) these curves. One notices that curve (a), which is the P.O. of the vNM economy for agents having beliefs (ν 1 1 , 1ν 1 1 ) and (1ν 2 2 , ν 2 2 ) respectively, does not intersect zone (1), which is the zone where CEU agents do use these probability distributions as well. Hence, no points are at the same time P.O. of that vNM economy and such that E ν i [U i (C i )] = E π i [U i (C i )], i = 1, 2. a c b (3) (1) (2) ✻ ✲ ❄ ✛ C 1 1 C 2 2 C 1 2 C 2 1 1 2 ✻ ✲ ❄ ✛ C 1 1 C 2 2 C 1 2 C 2 1 1 2 contained in zone (2) . That part constitutes a subset of the set of P.O. that we are looking for. We will show later on that, in order to get the full set of P.O. of the CEU economy, one has to replace the part of curve (b) that lies in zone (3) by the segment along the diagonal of agent 2. ♦ It follows from proposition 3.3 that, without any knowledge on the set of P.O., one has to compute the P.O. of (k!) n -1 economies (if there are k! extremal points in core(ν i ) for all i). Thus, the actual characterization of the set of P.O. of E 2 might be somewhat tedious without further information. In the comonotone case however, the characterization of P.O. is simpler, even though it remains partial. Corollary 3.2 Assume w 1 ≤ w 2 ≤ . . . ≤ w k . (i) Let U1 hold and (C i ) n i=1 ∈ IR kn + be a comonotone P.O. of E 2 such that C 1 i < C 2 i < . . . < C k i for all i = 1, . . . , n. Then, (C i ) n i=1 is a P.O. allocation of the economy in which agents are vNM maximizers with utility index U i and probability π j i = ν i ({j, . . . , k})ν i ({j + 1, . . . , k}) for j < k and π k i = ν i ({k}). (ii) Let (C i ) n i=1 ∈ IR kn + be a P.O. of the economy in which agents are vNM maximizers with utility index U i and probability π j i = ν i ({j, . . . , k})ν i ({j + 1, . . . , k}) for j < k and π k i = ν i ({k}). If (C i ) n i=1 is comonotone, then it is a P.O. of E 2 . These results may now be used for equilibrium analysis as follows. Proposition 3.4 Assume w 1 ≤ w 2 ≤ . . . ≤ w k . (i) Let (p ⋆ , C ⋆ ) be an equilibrium of E 2 . If 0 < C ⋆1 i < . . . < C ⋆k i for all i, then (p ⋆ , C ⋆ ) is an equilibrium of the economy in which agents are vNM maximizers with utility index U i and probability π j i = ν i ({j, . . . , k})ν i ({j + 1, . . . , k}) for j < k and π k i = ν i ({k}). 
(ii) Let (p ⋆ , C ⋆ ) be an equilibrium of the economy in which agents are vNM maximizers with utility index U i and probability π j i = ν i ({j, . . . , k})ν i ({j + 1, . . . , k}) for j < k and π k i = ν i ({k}). If C ⋆ is comonotone, then (p ⋆ , C ⋆ ) is an equilibrium of E 2 . Proof: (i) Since (p ⋆ , C ⋆ ) is an equilibrium of E 2 , and since v i is differentiable at C ⋆ i for every i, there exists a multiplier λ i such that p ⋆ = λ i U ′ i (C ⋆1 i )π 1 i , . . . , U ′ i (C ⋆k i )π k i , where π j i = ν i ({j, . . . , k})ν i ({j + 1, . . . , k}) for j < k and π k i = ν i ({k}) for all i. Hence, (p ⋆ , C ⋆ ) is an equilibrium of the economy in which agents are vNM maximizers with probability π j i for all i, j. (ii) Let (p ⋆ , C ⋆ ) be an equilibrium of the economy in which agents are vNM maximizers with probability π j i for all i, j. Assume C ⋆ is comonotone. We thus have p ⋆ C ′ i ≤ p ⋆ w i ⇒ E π i U i (C ′ i ) ≤ E π i U i (C ⋆ i ) Since E ν i U i (C ′ i ) ≤ E π i U i (C ′ i ) and E ν i U i (C ⋆ i ) = E π i U i (C ⋆ i ) , we get E ν i U i (C ′ i ) ≤ E ν i U i (C ⋆ i ) for all i, which implies that (C ⋆ , p ⋆ ) is an equi- librium of E 2 . Observe that, even though the characterization of P.O. allocations is made simpler when we know that these allocations are comonotone, the above proposition does not give a complete characterization. comonotonicity of the P.O. allocations is also useful for equilibrium analysis. This leads us to look for conditions on the economy under which P.O. are comonotone. Optimal risk-sharing and equilibrium in some particular cases In this section, we focus on two particular cases in which we can prove directly that P.O. allocations are comonotone. Convex transform of a probability distribution In this sub-section, we show how one can use the previous results when agents' capacities are the convex transform of a given probability distribution. In this case, one can directly apply corollary 3.2 and proposition 3.4 to get a characterization of P.O. and equilibrium. Let π = (π 1 , . . . , π k ) be a probability distribution on S, with π j > 0 for all j. Proposition 4.1 Assume w 1 ≤ w 2 ≤ . . . ≤ w k . Assume that, for all i, U i is differentiable and ν i = f i • π, where f i is a strictly increasing and convex function from [0, 1] to [0, 1] with f i (0) = 0, f i (1) = 1. Then, at a P.O., C 1 i ≤ C 2 i ≤ . . . ≤ C k i for all i. Proof: Since U i is differentiable, strictly increasing and strictly concave, and f i is a strictly increasing, convex function for all i, it results from corollary 2 in Chew, [START_REF] Chew | Risk aversion in the theory of expected utility with rank dependent preferences[END_REF] that every agent strictly respects second order stochastic dominance. Therefore it remains to show that if every agent strictly respects second order stochastic dominance, then, at a P.O., C 1 i ≤ C 2 i ≤ . . . ≤ C k i for all i. We do so using proposition 4.1 in [START_REF] Chateauneuf | A review of some results related to comonotonicity[END_REF]. Assume (C i ) n i=1 is not comonotone. W.l.o.g., assume that andC ′ 2 be determined by the feasibility condition C 1 1 > C 2 1 , C 1 2 < C 2 2 , and C 1 1 + C 1 2 ≤ C 2 1 + C 2 2 . Let C ′ be such that: C 1′ 1 = C 2′ 1 = π 1 C 1 1 + π 2 C 2 1 π 1 + π 2 and C j′ 1 = C j 1 , j > 2 Let C ′ i = C i for all i > 2, C 1 + C 2 = C ′ 1 + C ′ 2 . 
Hence, C 1′ 2 = C 1 2 + π 2 π 1 + π 2 (C 1 1 -C 2 1 ), C 2′ 2 = C 2 2 - π 1 π 1 + π 2 (C 1 1 -C 2 1 ) and C j′ 2 = C j 2 , j > 2 It may easily be checked that C 2 1 < C 1′ 1 = C 2′ 1 < C 1 1 , and C 1 2 < C 1′ 2 ≤ C 2′ 2 < C 2 2 . Furthermore, π 1 C 1′ 1 + π 2 C 2′ 1 = π 1 C 1 1 + π 2 C 2 1 , and π 1 C 1′ 2 + π 2 C 2′ 2 = π 1 C 1 2 + π 2 C 2 2 . Therefore, C ′ i i = 1 , 2 is a strictly less risky allocation than C i i = 1, 2, with respect to mean preserving increases in risk. It follows that agents 1 and 2 are strictly better off with C ′ , while other agents' utilities are unaffected. Hence, C ′ Pareto dominates C. Thus, any P.O. C must be comonotone, i.e., C 1 i ≤ C 2 i ≤ . . . ≤ C k i for all i. Using corollary 3.2, we can then provide a partial characterization of the set of P.O. Note that such a characterization was not provided by the analysis in [START_REF] Chateauneuf | A review of some results related to comonotonicity[END_REF] or [START_REF] Landsberger | Co-monotone allocations, Bickel-Lehmann dispersion and the Arrow-Pratt measure of risk aversion[END_REF]. Proposition 4.2 Assume w 1 ≤ . . . ≤ w k and that agents are CEU maximizers with ν i = f i • π, f i convex, strictly increasing and such that f i (0) = 0 and f i (1) = 1. Then, (i) Let (C i ) n i=1 ∈ IR kn + be a P.O. of this economy such that C 1 i < C 2 i < . . . < C k i for all i = 1, . . . , n. Then, (C i ) n i=1 is a P.O. allocation of the economy in which agents are vNM maximizers with utility index U i and probability π j i = f i k s=j π s -f i k s=j+1 π s for j = 1, . . . , k -1, and π k i = f i π k . (ii) Let (C i ) n i=1 ∈ IR kn + be a P.O. of the economy in which agents are vNM maximizers with utility index U i and probability π j i = f i k s=j π s -f i k s=j+1 π s for j = 1, . . . , k -1, and π k i = f i π k . If (C i ) n i=1 is comonotone, then it is a P.O. of the CEU economy with ν i = f i • π. Proof: See corollary 3.2. The same type of result can be deduced for equilibrium analysis from proposition 3.4, and we omit its formal statement here. The previous characterization formally includes the Rank-Dependent-Expected-Utility model introduced by [START_REF] Quiggin | A theory of anticipated utility[END_REF] in the case of (probabilized) risk. It also applies to so-called "simple capacities" (see e.g. [START_REF] Dow | Uncertainty aversion, risk aversion, and the optimal choice of portfolio[END_REF]), which are particularly easy to deal with in applications. Indeed, let agents have the following simple capacities: ν i (A) = (1ξ i )π(A) for all A ∈ A, A = S, and ν i (S) = 1, where π is a given probability measure with 0 < π j < 1 for all j, and 0 ≤ ξ < 1. These capacities can be written ν i = f i •π where f i is such that f i (0) = 0, f i (1) = 1, is strictly increasing, continuous and convex, with: f i (p) = (1 -ξ i )p if 0 ≤ p ≤ max {π(A)<1} π(A) f i (1) = 1 Hence, ν i is a convex transformation of π, and we can apply the results of this sub-section to characterize the set of P.O. in an economy where all agents have such simple capacities. The two-state case We restrict our attention here to the case S = {1, 2}. Agent i has a capacity ν i characterized by two numbers ν i ({1}), ν i ({2}) such that ν i ({1}) ≤ 1ν i ({2}). To simplify notation, we'll denote ν i ({s}) = ν s i . In this particular case, core (ν i ) = {(π, 1 -π) | π ∈ [ν 1 i , 1 -ν 2 i ]}. Call E 3 the two-state exchange economy in which agents are CEU maximizers with capacity ν i and utility index U i , i = 1, . . . , n. 
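Before stating assumption C below, here is a small companion sketch (ours, for illustration only, reusing the dictionary convention of the earlier snippet): it builds the distorted capacities ν = f ∘ π of the previous sub-section, including the simple capacities ν(A) = (1 − ξ)π(A), checks comonotonicity of an allocation profile as in Definition 1, and tests whether a probability lies in the intersection of the cores.

```python
import itertools

def distorted_capacity(pi, f, states):
    """nu = f o pi: nu(A) = f(pi(A)), for a distortion f with f(0) = 0 and
    f(1) = 1 (f convex gives the setting of proposition 4.1)."""
    return {frozenset(A): f(sum(pi[s] for s in A))
            for r in range(1, len(states) + 1)
            for A in itertools.combinations(states, r)}

def simple_distortion(xi):
    """f(p) = (1 - xi) p below 1 and f(1) = 1; only the values f(pi(A))
    matter, so this matches the simple capacities defined above."""
    return lambda p: 1.0 if p >= 1.0 else (1.0 - xi) * p

def comonotone(allocations, states):
    """Definition 1: (C_i^j - C_i^j')(C_m^j - C_m^j') >= 0 for every pair
    of agents and pair of states; allocations is a list of dicts."""
    return all((Ci[j] - Ci[jp]) * (Cm[j] - Cm[jp]) >= 0
               for Ci in allocations for Cm in allocations
               for j in states for jp in states)

def in_all_cores(pi, capacities, states):
    """Test pi ∈ ∩_i core(nu_i): pi(A) >= nu_i(A) for every proper event A
    (assumption C asks this intersection to be non-empty; in the two-state
    case core(nu_i) is the interval [nu_i^1, 1 - nu_i^2])."""
    for r in range(1, len(states)):
        for A in itertools.combinations(states, r):
            pA = sum(pi[s] for s in A)
            if any(nu[frozenset(A)] > pA + 1e-12 for nu in capacities):
                return False
    return True

# A two-state simple capacity, as in the two-state economy E_3:
nu1 = distorted_capacity({1: 0.5, 2: 0.5}, simple_distortion(0.2), [1, 2])
```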
Assumption C: ∩_i core(ν_i) ≠ ∅.

This assumption is equivalent to ν_i^1 + ν_j^2 ≤ 1, i, j = 1, . . . , n, or, stated differently, to ∩_i [ν_i^1, 1 − ν_i^2] ≠ ∅. Recall that in the two-state case, under C, agents' capacities are convex. We now proceed to show that this "minimal consensus" assumption is enough to show that P.O. are comonotone.

Proposition 4.3 Let C hold. Then, P.O. are comonotone.

Proof: Assume w^1 ≤ w^2 and let C be an allocation that is not comonotone. W.l.o.g., assume that C_1^1 > C_1^2 and C_2^1 < C_2^2. Let (π, 1 − π) ∈ ∩_i core(ν_i) and let C′ be the feasible allocation defined by C_1^{1′} = C_1^{2′} = πC_1^1 + (1 − π)C_1^2, with C_2^{1′} and C_2^{2′} such that C_1^{j′} + C_2^{j′} = C_1^j + C_2^j, j = 1, 2, i.e. C_2^{1′} = C_2^1 + (1 − π)(C_1^1 − C_1^2) and C_2^{2′} = C_2^2 − π(C_1^1 − C_1^2). One obviously has C_2^1 < C_2^{1′} ≤ C_2^{2′} < C_2^2. Finally, let C_i^{j′} = C_i^j for all i > 2, j = 1, 2. We now prove that C′ Pareto dominates C. First,

v_1(C′_1) − v_1(C_1) = U_1(πC_1^1 + (1 − π)C_1^2) − ν_1^1 U_1(C_1^1) − (1 − ν_1^1) U_1(C_1^2) > (π − ν_1^1)(U_1(C_1^1) − U_1(C_1^2)) ≥ 0,

since U_1 is strictly concave and π ≥ ν_1^1. Now, consider agent 2's utility:

v_2(C′_2) − v_2(C_2) = (1 − ν_2^2)(U_2(C_2^{1′}) − U_2(C_2^1)) + ν_2^2 (U_2(C_2^{2′}) − U_2(C_2^2)).

Since U_2 is strictly concave and C_2^1 < C_2^{1′} ≤ C_2^{2′} < C_2^2, we have

(U_2(C_2^{1′}) − U_2(C_2^1)) / (C_2^{1′} − C_2^1) > (U_2(C_2^2) − U_2(C_2^{2′})) / (C_2^2 − C_2^{2′}),

and hence (U_2(C_2^{1′}) − U_2(C_2^1)) / (1 − π) > (U_2(C_2^2) − U_2(C_2^{2′})) / π. Therefore,

v_2(C′_2) − v_2(C_2) > ((1 − ν_2^2)(1 − π)/π − ν_2^2)(U_2(C_2^2) − U_2(C_2^{2′})) ≥ 0,

since (1 − ν_2^2)(1 − π) − πν_2^2 = 1 − ν_2^2 − π ≥ 0 and U_2(C_2^2) − U_2(C_2^{2′}) > 0. Hence, C′ Pareto dominates C.

Remark: If ν_i^1 + ν_j^2 < 1, i, j = 1, . . . , n, which is equivalent to the assumption that ∩_i core(ν_i) contains more than one element, then one can extend proposition 4.3 to linear utilities.

Remark: Although convex capacities can, in the two-state case, be expressed as simple capacities, the analysis of sub-section 4.1 (and in particular proposition 4.1) cannot be used here. Indeed, assumption C does not require that agents' capacities all be a convex transform of the same probability distribution, as example 4.1 shows.

Example 4.1 There are two agents with capacities ν_1^1 = 1/3, ν_1^2 = 2/3 and ν_2^1 = 1/6, ν_2^2 = 2/3, respectively. Assumption C is satisfied since π = (1/3, 2/3) is in the intersection of the cores. The only way ν_1 and ν_2 could be convex transforms of the same probability distribution is ν_1 = π and ν_2 = f_2 ∘ π with f_2(1/3) = 1/6 and f_2(2/3) = 2/3. But f_2 then fails to be convex. ♦

Intuition derived from proposition 4.3 might suggest that some minimal consensus assumption might be enough to prove comonotonicity of the P.O. in general. However, that intuition is not valid, as can be seen in the following example, in which the intersection of the cores of the capacities is non-empty but (some) P.O. allocations are not comonotone.

Example 4.2 There are two agents, with the same utility index U_i(C) = 2C^{1/2}, but different beliefs. The latter are represented by two convex capacities, the first of which is defined as follows: ν_1({1}) = 3/9, ν_1({2}) = 3/9, ν_1({3}) = 1/9, ν_1({1, 2}) = 6/9, ν_1({1, 3}) = 6/9, ν_1({2, 3}) = 4/9. The intersection of the cores of these two capacities is non-empty since the probability defined by π^j = 1/3, j = 1, 2, 3, belongs to both cores. The endowment in each state is respectively w^1 = 1, w^2 = 12, and w^3 = 13.
We consider the optimal allocation associated to the weights (1/2, 1/2) and show it cannot be comonotone. In order to do that, we show that the maximum of v_1(C_1) + v_2(C_2), subject to the constraints C_1^j + C_2^j = w^j, j = 1, 2, 3, and C_i^j ≥ 0 for all i and j, does not obtain for C_i^1 ≤ C_i^2 ≤ C_i^3, i = 1, 2. Observe first that if C_i^1 ≤ C_i^2 ≤ C_i^3, i = 1, 2, then:

v_1(C_1) + v_2(C_2) = 2 [ (5/9)√(C_1^1) + (3/9)√(C_1^2) + (1/9)√(C_1^3) + (4/9)√(C_2^1) + (2/9)√(C_2^2) + (3/9)√(C_2^3) ].

Call g(C_1^1, C_1^2, C_1^3, C_2^1, C_2^2, C_2^3) the above expression. Note that v_1(C_1) + v_2(C_2) takes the exact same form if C_1^1 < C_1^3 < C_1^2 and C_2^1 < C_2^2 < C_2^3. The optimal solution to the maximization problem

max g(C_1^1, C_1^2, C_1^3, C_2^1, C_2^2, C_2^3) s.t. C_1^j + C_2^j = w^j, j = 1, 2, 3, and C_i^j ≥ 0, j = 1, 2, 3, i = 1, 2,

is such that 0 < C_1^1 < C_1^3 < C_1^2 and 0 < C_2^1 < C_2^2 < C_2^3. Therefore, v_1(Ĉ_1) + v_2(Ĉ_2) > v_1(C_1) + v_2(C_2) for all C such that C_i^1 ≤ C_i^2 ≤ C_i^3, i = 1, 2, and hence the P.O. associated to equal weights for each agent is not comonotone. ♦

One may expect that it follows from proposition 4.3 that P.O. of E_3 are the P.O. of the vNM economy in which agents have probability π_i = 1 − ν_i^2, i = 1, . . . , n. However, it is not so since, as recalled in sub-section 2.4, P.O. of a vNM economy with different beliefs are not in general comonotone. We can nevertheless use proposition 3.3 to provide a partial characterization of the set of P.O. In this particular case of only two states, we can obtain a full characterization of the set of P.O. if there are only two agents in the economy. This should then be interpreted as a characterization of the optimal risk-sharing arrangement among two parties to a contract (an arrangement that has been widely studied in the vNM case).

[Figure 2: Edgeworth-box illustration of the optimal risk-sharing arrangement, panels (a) and (b).]

On figure 3 (a), the thin line represents P.O. of the vNM economy in which agent i uses probability (1 − ν_i^2, ν_i^2) that are not P.O. of the CEU economy, as they are not comonotone. If agent 2 has a utility index of the DARA type, it is easy to show that the set of P.O. of the economy in which agents have utility index U_i and probability (1 − ν_i^2, ν_i^2) crosses agent 1's diagonal at most once, hence preventing the kind of situation represented on figure 3. When there are only two (types of) agents, we can also go further in the characterization of the set of equilibria.

Proposition 4.5 Assume k = 2, n = 2, w^1 ≤ w^2, C and U1 hold, and ν_1^2 < ν_2^2. Let (p*, C*) be an equilibrium of E_3. Then there are only two cases: (i) either C_i^{1*} < C_i^{2*}, i = 1, 2, and (p*, C*) is an equilibrium of a vNM economy in which agents have utility index U_i and beliefs given by π_i = 1 − ν_i^2, i = 1, 2; (ii) or C_1^{1*} = C_1^{2*} = C** and C** satisfies the following: (a) (1 − ν_2^2)(w_1^1 − C**) U′_2(w^1 − C**) + ν_2^2 (w_1^2 − C**) U′_2(w^2 − C**) = 0; (b) ν_1^2 / (1 − ν_1^2) ≤ −(w_1^1 − C**) / (w_1^2 − C**).

[Figure 3: Edgeworth-box illustration, panels (a) and (b).]

Proof: It follows from proposition 4.4 that either C_i^{1*} < C_i^{2*}, i = 1, 2, or C_1^{1*} = C_1^{2*} = C**. The first case follows from proposition 3.4. In the second case, the P.O.
allocation is supported by the price (1 -ν 2 2 )U ′ 2 (w 1 - C ⋆⋆ ), ν 2 2 U ′ 2 (w 2 -C ⋆⋆ ) , hence the tangent to agent two's indifference curve at (w 1 -C ⋆⋆ , w 2 -C ⋆⋆ ) has the following equation in the (C 1 , C 2 ) plane: (1 -ν 2 2 )U ′ 2 (w 1 -C ⋆⋆ )(C 1 -C ⋆⋆ ) + ν 2 2 U ′ 2 (w 2 -C ⋆⋆ )(C 2 -C ⋆⋆ ) = 0 Now C ⋆ is an equilibrium allocation iff (w 1 1 , w 2 1 ) fulfills that equation. Condition (b) follows from condition (a) and condition (ii) from proposition 4.4. There might therefore exist, for a range of initial endowments, equilibria at which agent 1 is perfectly insured even though agent 2 has strictly convex preferences. Observe also that nothing excludes a priori the possibility of having different kind of equilibria for the same initial endowment (see figure 3 (b)). Optimal risk-sharing and equilibrium without aggregate risk We now turn to the study of economies without aggregate uncertainty5 . This corresponds to the case of individual risk first analyzed by [START_REF] Malinvaud | The allocation of individual risks in large markets[END_REF] and [1973]. A particular case is the one of a sunspot economy, in which uncertainty is purely extrinsic and does not affect the fundamentals, i.e., each agent's endowment is independent of the state of the world (see [START_REF] Tallon | Do sunspots matter when agents are Choquet-expectedutility maximizers[END_REF] for a study of sunspot economies with CEU agents). Our analysis of the case of purely individual risk might also yield further insights as to which type of financial contracting (e.g. mutual insurance rather than trade on Arrow securities defined on individual states) is necessary in such economies to decentralize an optimal allocation. It turns out that the economy under consideration possesses remarkable properties: P.O. are comonotone and coincide with full insurance allocations, under the relatively weak condition C. Furthermore, this condition, which is weaker than convexity of preferences, is enough to prove existence of an equilibrium. Finally, the case of purely individual risk lends itself to the introduction of several goods. We thus move to an economy with m goods, indexed by subscript ℓ. C j iℓ is the consumption of good ℓ by agent i in state j. We have, C j i = (C j i1 , . . . , C j im ), and C i = (C 1 i , . . . , C k i ). If C j i = C j ′ i for all j, j ′ , then C i will denote both this constant bundle (i.e. C j i ≡ C i ) and the vector composed of k such vectors, the context making it clear which meaning is intended. Let p j ℓ be the price of good ℓ available in state j, p j = (p j 1 , . . . , p j m ), and p = (p 1 , . . . , p k ). The utility index U i is now defined on IR m + , and is still assumed to be strictly concave and strictly increasing. We will also need a generalization of assumption U1 to the multi-good case, that ensures that the solution to the agent's maximization program is interior. Assumption Um: ∀i, {x ′ ∈ IR m + | U i (x ′ ) ≥ U i (x)} ⊂ IR m ++ , ∀x ∈ IR m ++ . Aggregate endowment is the same across states, although its distribution among households might differ in each state. Therefore, we consider a pure exchange economy E 4 with n agents and m goods described by the list: E 4 = v i : IR km + → IR, w i ∈ IR km + , i = 1, . . . , n . We will denote aggregate endowments w, i.e., w = i w i . Before dealing with non-additive beliefs, we first recall some known results in the vNM case. Proposition 5.1 Assume all agents have identical vNM beliefs, π = (π 1 , . . . , π k ). 
Then, (i) at a Pareto optimum, C_i^j = C_i^{j′} for all i, j, j′, and (C_i^1)_{i=1}^n is a P.O. of the static economy (U_i, w̄_i = E_π(w_i), i = 1, . . . , n). (ii) Let Um hold. (p*, C*) is an equilibrium of the vNM economy if and only if there exists q* ∈ IR^m_+ such that p* = (q*π^1, q*π^2, . . . , q*π^k) and (q*, C*) is an equilibrium of the static economy (U_i, w̄_i = E_π(w_i), i = 1, . . . , n).

It is also easy to see that if agents' beliefs are different, agents will consume state-dependent bundles at an optimum. We now examine to what extent these results, obtained in the vNM case, generalize to the CEU set-up, assuming that condition C holds. Using proposition 2.1, this assumption is equivalent 6 to P ≠ ∅, where

P = { π ∈ IR^k_{++} | Σ_{j=1}^k π^j = 1 and v_i(C_i^1, . . . , C_i^k) ≤ E_π(U_i(C_i)), ∀i, ∀C_i }.

Recall that assumption C was not enough to prove comonotonicity of P.O. in the general case (though it was sufficient in the two-state case). We now proceed to fully characterize the set of P.O.

Proposition 5.2 Let C hold. Then, (i) any P.O. (C_i)_{i=1}^n of E_4 is such that C_i^j is independent of j for all i, and (C_i^1)_{i=1}^n is a P.O. of the static economy in which agents have utility functions (U_i)_{i=1}^n. (ii) The allocation (C_i)_{i=1}^n ∈ IR^{kmn}_+ is a P.O. of E_4 if and only if it is a P.O. of a vNM economy with utility index U_i and identical probability over the set of states of the world. Hence, P.O. are independent of the capacities.

Proof: (i) Assume, to the contrary, that there exist an agent (say agent 1) and states j and j′ such that C_1^j ≠ C_1^{j′}. Let C̄_i^j ≡ C̄_i = E_π(C_i) for all j and all i, where π ∈ P. This allocation is feasible: Σ_i C̄_i = E_π(Σ_i C_i) = w. By definition of P, v_i(C_i^1, . . . , C_i^k) ≤ E_π(U_i(C_i)) for all i. Now, E_π(U_i(C_i)) ≤ U_i(E_π(C_i)) = U_i(C̄_i) = v_i(C̄_i, . . . , C̄_i) for all i, since U_i is concave. This last inequality is strict for agent 1, since C_1^j ≠ C_1^{j′}, π ≫ 0, and U_1 is strictly concave. Therefore, v_i(C_i^1, . . . , C_i^k) ≤ v_i(C̄_i, . . . , C̄_i) for all i, with a strict inequality for agent one, a contradiction to the fact that (C_i)_{i=1}^n is an optimum of E_4. (ii) We skip the proof of this part of the proposition, for it relies on the same type of argument as that of proposition 3.3.

Thus, even with different "beliefs" (in the sense of different capacities), agents might still find it optimal to fully insure themselves: differences in beliefs do not necessarily lead agents to optimally bear some risk, as in the vNM case. We now proceed to study the equilibrium set.

Proposition 5.3 Let C hold. (i) Let (p*, C*) be an equilibrium of a vNM economy in which all agents have utility index U_i and beliefs given by π ∈ P; then (p*, C*) is an equilibrium of E_4. (ii) Conversely, assume ν_i is convex and U_i satisfies Um for all i. Let (p*, C*) be an equilibrium of E_4; then there exists π ∈ P such that (p*, C*) is an equilibrium of the vNM economy in which all agents have utility index U_i and probability π. Furthermore, p* = (q*π^1, q*π^2, . . . , q*π^k) with π ∈ P and q* = λ_i ∇U_i(C_i*), λ_i ∈ IR_+, for all i.

Proof: (i) Let (p*, C*) be an equilibrium of a vNM economy in which all agents have beliefs given by π. We have C_i^{j*} = C_i^{j′*} for all j, j′ and all i. By definition of an equilibrium, Σ_i C_i^{j*} = Σ_i w_i^j for all j and, for all i: C_i′ ≥ 0, p*C_i′ ≤ p*w_i ⇒ E_π(U_i(C_i′)) ≤ E_π(U_i(C_i*)). Now, since π ∈ P, v_i(C_i′) ≤ E_π(U_i(C_i′)). Notice that v_i(C_i*) = E_π(U_i(C_i*)). Hence, v_i(C_i′) ≤ v_i(C_i*), and (p*, C*) is an equilibrium of E_4. (ii) Let (p*, C*) be an equilibrium of E_4. Assume ν_i is convex and U_i satisfies Um for all i. Then C_i* ≫ 0 for all i. From proposition 5.2 and the first welfare theorem, C_i^{j*} = C_i^{j′*} for all j, j′ and all i.
From the first-order conditions and Um, there exists λ_i ∈ IR_+ for all i such that p* ∈ λ_i ∂v_i(C_i*, . . . , C_i*). Therefore, p* = (λ_i ∇U_i(C_i*) π_i^1, . . . , λ_i ∇U_i(C_i*) π_i^k), where π_i ∈ core(ν_i). Summing over p*'s components, one gets Σ_j λ_i ∇U_i(C_i*) π_i^j = Σ_j p^{j*} = Σ_j λ_{i′} ∇U_{i′}(C_{i′}*) π_{i′}^j, that is, λ_i ∇U_i(C_i*) = λ_{i′} ∇U_{i′}(C_{i′}*) for all i, i′. Hence, π_i^j is independent of i for all j, i.e. π_i ∈ ∩_i core(ν_i) = P. Let q* = λ_i ∇U_i(C_i*) and π = π_i. One gets p* = (q*π^1, q*π^2, . . . , q*π^k) with π ∈ P. It follows from proposition 5.1 that (p*, C*) is an equilibrium of the vNM economy with utility index U_i and probability π.

This proposition suggests equilibrium indeterminacy if P contains more than one probability distribution. This road is explored further in Tallon [1997] and [START_REF] Dana | Pricing rules when agents have non-additive expected utility and homogeneous expectations[END_REF]. A direct corollary concerns existence:

Corollary 5.1 Under C, there exists an equilibrium of E_4.

Hence, since capacities satisfying assumption C need not be convex, convexity of preferences (which is equivalent, in the CEU set-up, to the convexity of the capacity together with the concavity of the utility index, see [START_REF] Chateauneuf | Diversification, convex preferences and non-empty core[END_REF]) is not necessary to prove that an equilibrium exists in this set-up. [START_REF] Malinvaud | Markets for an exchange economy with individual risks[END_REF] noticed that P.O. allocations could be decentralized through insurance contracts in a large economy. [START_REF] Cass | Individual risk and mutual insurance[END_REF] showed, in an expected utility framework, how this decentralization can be done in a finite economy: agents of the same type share their risk through (actuarially fair) mutual insurance contracts. The same type of argument could be used here in the Choquet expected utility case. It is an open issue whether P.O. allocations can be decentralized with mutual insurance contracts where agents in the same pool have different capacities.

The same is true for curve (c) and zone (3). On the other hand, part of curve (b) is contained in zone (2).

[Figure 1: Edgeworth box for example 3.1, with zones (1)-(3) and contract curves (a)-(c).]

See Schmeidler [1989], [START_REF] Ghirardato | Coping with ignorance: unforeseen contingencies and nonadditive uncertainty[END_REF], [START_REF] Mukerji | Understanding the nonadditive probability decision model[END_REF]. In section 5, we will deal with several goods and will introduce the appropriate notation at that time. It is well-known (see [START_REF] Schmeidler | Integral representation without additivity[END_REF]) that when ν is convex, the Choquet integral of any random variable f is given by ∫ f dν = min_{π ∈ core(ν)} E_π f. [START_REF] Borch | Equilibrium in a reinsurance market[END_REF] noted that, in a reinsurance market, at a P.O., "the amount which company i has to pay will depend only on (...) the total amount of claims made against the industry. Hence any Pareto optimal set of treaties is equivalent to a pool arrangement."
Note that this corresponds to the characterization of comonotone variables as stated in proposition 2.2. For a study of optimal risk-sharing without aggregate uncertainty and an infinite state space, see [START_REF] Billot | Sharing beliefs: between agreeing and disagreeing[END_REF]. Recall that we are dealing with capacities such that 1 > ν(A) > 0 for all A ≠ ∅ and A ≠ S.

Proposition 4.4 Assume n = 2, w^1 < w^2, and that agents have capacities ν_i, i = 1, 2, which fulfill C and are such that ν_1^2 < ν_2^2. Assume finally U1 and let (C_i)_{i=1,2} be a P.O. of E_3. Then, there are only two cases: (i) either C_i^1 < C_i^2, i = 1, 2, and (C_i)_{i=1,2} is a P.O. of the vNM economy with utility index U_i and probabilities (1 − ν_i^2, ν_i^2); (ii) or agent 1 is fully insured, C_1^1 = C_1^2.

Proof: It follows from proposition 4.3 that there are three cases; hence the right-hand side of (2) is fulfilled and (1) is equivalent to (2). Lastly, the case 0 < C_2^1 = C_2^2 is symmetric: the corresponding first-order conditions imply ν_1^2 ≥ ν_2^2, which contradicts our hypothesis.

We can illustrate the optimal risk-sharing arrangement just derived in an Edgeworth box. Figure 2 (a) represents case (i), and the optimal contract is the same as the one in the associated vNM set-up. However, figure 2 (b) gives a different risk-sharing rule, which can be interpreted as follows. The assumption ν_1^2 < ν_2^2 is equivalent to E_{ν_1}(x) ≤ E_{ν_2}(x) for all x comonotone with w. But we have just shown that we could restrict our attention to allocations that are comonotone with w. Hence, the assumption ν_1^2 < ν_2^2 can be interpreted as a form of pessimism of agent 1. Under that assumption, agent 2 never insures himself completely, whereas agent 1 might do so. This is incompatible with a vNM economy with strictly concave and differentiable utility indices. Finally, risk-sharing arrangements such as the one represented on figure 3 (a) cannot be excluded a priori, i.e., there is no reason that the contract curve in the vNM economy crosses agent 1's diagonal only once.
54430
[ "838103", "12658", "740561" ]
[ "61", "60", "5545" ]
01713886
en
[ "info" ]
2024/03/05 22:32:07
2018
https://inria.hal.science/hal-01713886v2/file/audio-source-separation.pdf
Antoine Liutkus Christian Rohlfing Antoine Deleforge AUDIO SOURCE SEPARATION WITH MAGNITUDE PRIORS: THE BEADS MODEL Keywords: audio probabilistic model, magnitude, phase, source separation 1 Audio source separation comes with the need to devise multichannel filters that can exploit priors about the target signals. In that context, experience shows that modeling magnitude spectra is effective. However, devising a probabilistic model on complex spectral data with a prior on magnitudes is non trivial, because it should both reflect the prior but also be tractable for easy inference. In this paper, we approximate the ideal donut-shaped distribution of a complex variable with approximately known magnitude as a Gaussian mixture model called BEADS (Bayesian Expansion Approximating the Donut Shape) and show that it permits straightforward inference and filtering while effectively constraining the magnitudes of the signals to comply with the prior. As a result, we demonstrate large improvements over the Gaussian baseline for multichannel audio coding when exploiting the BEADS model. I. INTRODUCTION Audio source separation aims at processing an audio mixture x composed of I channels (e.g. stereo I = 2) so as to recover its J constitutive sources s j [START_REF] Vincent | From blind to guided audio source separation: How models and side information can improve the separation of sound[END_REF]. It has many applications, including karaoke [START_REF] Liutkus | Adaptive filtering for music/voice separation exploiting the repeating musical structure[END_REF], upmixing [START_REF] Avendano | Frequency domain techniques for stereo to multichannel upmix[END_REF], [START_REF] Liutkus | Separation of music+ effects sound track from several international versions of the same movie[END_REF] and speech enhancement [START_REF] Barker | The third chime speech separation and recognition challenge[END_REF]. Usually, the sources are recovered by some time-varying filtering of the mixture, which is amenable to applying a I × I complex matrix G (f, t) to each entry x (f, t) of its Short-Term Fourier Transform (STFT) [START_REF] Vincent | Probabilistic modeling paradigms for audio source separation[END_REF], [START_REF] Benaroya | Audio source separation with a single sensor[END_REF], [START_REF] Avendano | Frequency domain techniques for stereo to multichannel upmix[END_REF]. Devising good filters requires either dedicated models for sources power spectrograms [START_REF] Ozerov | Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation[END_REF], [START_REF] Ozerov | A general flexible framework for the handling of prior information in audio source separation[END_REF] or machine learning methods to directly predict effective filters [START_REF] Nugraha | Multichannel audio source separation with deep neural networks[END_REF]. The theoretical grounding for these linear filtering procedures boil down to considerations about second-order statistics for the sources STFT coefficients s j (f, t) ∈ C. When seen from a probabilistic perspective, this is often translated as picking a complex isotropic Gaussian distribution [START_REF] Gallager | Circularly Symmetric Complex Gaussian Random Vectors -A Tutorial[END_REF] This work was partly supported by the research programme KAMoulox (ANR-15-CE38-0003-01) funded by ANR, the French State agency for research. Fig. 1: The BEADS probabilistic model for a complex variable with a magnitude close to b and squared uncertainty σ. 
for each s_j(f, t) ∈ C, which is called the Local Gaussian Model (LGM) [START_REF] Arberet | Underdetermined instantaneous audio source separation via local Gaussian modeling[END_REF], [START_REF] Duong | Under-determined convolutive blind source separation using spatial covariance models[END_REF] and is depicted on Figure 1a. Although it enjoys tractability and easy inference, the main shortcoming of the LGM is that it gives its highest probability to 0, which may appear counterintuitive. Still, this comes as a consequence of the fact that stationarity makes all phases equally probable a priori, so that E[s_j(f, t)] = 0. Combining the first two moments only, maximum entropy principles naturally lead us to pick the LGM [START_REF] Jaynes | Probability theory: The logic of science[END_REF]. However, this badly reflects the additional prior knowledge one often has on s_j(f, t). The vast majority of methods use priors on its magnitude: |s_j(f, t)| should be close to some b_j(f, t) > 0 with squared uncertainty σ_j(f, t). In that setting, the donut-shaped distribution shown in Figure 1b is much better than the LGM because it puts its highest probability mass on the circle of radius b_j(f, t). If σ = 0, we end up with the phase unmixing problem [START_REF] Deleforge | Phase unmixing: Multichannel source separation with magnitude constraints[END_REF]. However, even with some uncertainty σ > 0, such a distribution suffers from non-tractability. In particular, it is not stable with respect to additivity, nor does it allow for the simple posterior inference that would lead to straightforward filtering procedures. In this paper, we translate our prior as a mixture of C identical Gaussian components evenly located over the circle of radius b_j(f, t), yielding our proposed BEADS model (Bayesian Expansion Approximating the Donut Shape), depicted on Figure 1c. The most remarkable feature of BEADS is that it allows for straightforward filtering procedures while complying with priors on magnitude and, possibly, priors on phase. As such, it extends recent research on anisotropic modeling of complex spectra [START_REF] Magron | Phase-dependent anisotropic gaussian model for audio source separation[END_REF] by translating phase information into the choice of one particular component from the model. It thus appears as yet another way to incorporate phase information in source separation [START_REF] Sturmel | Phase-based informed source separation for active listening of music[END_REF]. We illustrate the BEADS model in an informed source separation (ISS) setting, where the true sources are available at a first coding stage, which allows one to compute good models to be used for separation at a decoding stage [START_REF] Nikunen | Object-based Modeling of Audio for Coding and Source Separation[END_REF], [START_REF] Liutkus | Informed source separation : a comparative study[END_REF], [START_REF] Rohlfing | NMF-based informed source separation[END_REF]. As previously demonstrated [START_REF] Ozerov | Codingbased informed source separation: Nonnegative tensor factorization approach[END_REF], this ISS set-up is interesting because the separation parameters can be encoded very concisely, leading to effective instances of spatial audio object coding [START_REF] Breebaart | Spatial audio object coding (SAOC)-the upcoming MPEG standard on parametric object based audio coding[END_REF]. II. PROBABILISTIC MODEL II-A.
BEADS source model

The BEADS model is expressed as follows:

P[s_j(f,t) | b_j(f,t), σ_j(f,t)] = Σ_{c=1}^{C} π_j(c | f,t) N(b_j(f,t) ω^c, σ_j(f,t)),   (1)

where N denotes here the complex isotropic Gaussian distribution, with density N(x; µ, Σ) = (1 / (π^I |Σ|)) exp(−(x − µ)^H Σ^{−1} (x − µ)), |Σ| being the determinant of Σ [START_REF] Gallager | Circularly Symmetric Complex Gaussian Random Vectors -A Tutorial[END_REF]; ω = exp(i2π/C) is the C-th root of unity, and π_j(c | f,t) is the prior for the phase of source j at time-frequency (TF) bin (f,t): it indicates the probability that s_j(f,t) is drawn from component c, and hence that its phase is close to that of ω^c. While some phase unwrapping approach [START_REF] Magron | Model-based STFT phase recovery for audio source separation[END_REF] may be used to set this prior, we take it as uniform here. The parameter σ_j(f,t) stands for the variance of each component. It may be understood as the expected squared error of our estimate b_j(f,t) for the magnitude. We can note that the LGM is equivalent to C = 1 and b_j(f,t) = 0, and that many beads tend to the donut shape, as shown in Figure 1d.

Now, we consider the joint prior distribution of the J independent sources. We need to consider all the C^J possible combinations of the components. Let N_C be the set of the first C natural numbers. We write z(f,t) ∈ N_C^J for the J × 1 vector whose j-th entry z_j(f,t) ∈ N_C is the actual component drawn for source j. We define π(c | f,t) as the probability of each combination:

∀c ∈ N_C^J,  π(c | f,t) := P[z(f,t) = c] = Π_{j=1}^{J} π_j(c_j | f,t),   (2)

where c_j is the j-th entry of c ∈ N_C^J. We have Σ_c π(c | f,t) = 1. The joint prior distribution of the sources is given by:

P[s(f,t) | Θ] = Σ_{c ∈ N_C^J} π(c | f,t) N(ω_c • b(f,t), [σ(f,t)]),   (3)

where ω_c, for c ∈ N_C^J, denotes the J × 1 vector with entries ω^{c_j}; a • b denotes the element-wise multiplication of the vectors a and b, and [v] denotes the diagonal matrix whose diagonal is the vector v. In words, the prior distribution for the sources under the BEADS model is a Gaussian Mixture Model (GMM) with weights π(c | f,t), which is reminiscent of, but different from, the pioneering work in [START_REF] Rennie | Variational speech separation of more sources than mixtures[END_REF] that was limited to one source signal.

II-B. Mixture likelihood and separation

We take the mix as a convolutive mixture under the narrowband assumption, so that x(f,t) ≈ A(f) s(f,t), where A(f) is the I × J mixing matrix at frequency bin f. Further exploiting the BEADS model (1), we get the marginal model of the mixture given all parameters Θ as:

P[x(f,t) | Θ] = Σ_{c ∈ N_C^J} π(c | f,t) N(x_c(f,t), Σ_x(f,t)),   (4)

with x_c(f,t) = A(f)(ω_c • b(f,t)) and Σ_x(f,t) = A(f) [σ(f,t)] A(f)^H.   (5)

Now, the real advantage of the BEADS model is that we can straightforwardly obtain the joint posterior distribution of the sources as:

P[s(f,t) | x, Θ] = Σ_{c ∈ N_C^J} π(c | f,t, x) N(µ_c, Σ_c),   (6)

where the posterior statistics µ_c and Σ_c for each combination c of the phases are:

µ_c(f,t) = G(f,t)(x(f,t) − x_c(f,t)) + ω_c • b(f,t),  Σ_c(f,t) = Σ(f,t) = [σ(f,t)] − G(f,t) A(f) [σ(f,t)],   (7)

Algorithm 1 BEADS decoder to update the phase configuration probabilities and perform separation. Input: parameters Θ, mixture x(f,t), prior π_j, C.
Initialization: Σ_π ← 0, ŝ ← 0.
For all c ∈ N_C^J:
1) π(c | f,t, x) ← π(c | f,t) N(x(f,t) | x_c(f,t), Σ_x(f,t))
2) ŝ(f,t) += π(c | f,t, x) (ω^c • b(f,t) − G(f,t) x_c(f,t))
3) Σ_π(f,t) += π(c | f,t, x)
Finalization:
1) ŝ(f,t) ← ŝ(f,t) / Σ_π(f,t)
2) ŝ(f,t) += G(f,t) x(f,t)

with the J × I Wiener gain G(f,t) defined as:

G(f,t) = [σ(f,t)] A(f)^H (A(f) [σ(f,t)] A(f)^H)^{−1}.

The minimum mean squared error (MMSE) estimate of the sources thus becomes:

E[s(f,t) | x, Θ] = G(f,t) x(f,t) + Σ_{c ∈ N_C^J} π(c | f,t, x) (ω^c • b(f,t) − G(f,t) x_c(f,t)).   (8)

In case the true phase configuration z(f,t) is known, or one has been estimated at the decoder, this estimate (8) simplifies to ŝ(f,t) = µ_{z(f,t)}.

III. PARAMETER ESTIMATION

In this section, we only consider the informed case, where the true source signals s_j(f,t) and the mixing matrices A(f) are known at the coder, while the mixture and the parameters Θ = {b, σ, A} are known at the decoder.

III-A. Decoder: posterior π(c | f,t, x) and separation

We show how the probabilities π(c | f,t) of the phase configurations may be updated to yield their posterior π(c | f,t, x), which fully exploits the BEADS model for constraining the phases. Dropping the (f,t) indices for readability, we have:

∀ c ∈ N_C^J,  π(c | x) = π(c) N(x | x_c, Σ_x) / P[x].   (9)

This posterior probability may hence be expressed, up to a normalizing constant independent of c, as:

π(c | x) ∝ π̃(c | x) = π(c) exp(−(x − x_c)^H Σ_x^{−1} (x − x_c)),   (10)

which can straightforwardly be computed at the decoder with known parameters Θ = {b, σ}; π(c | x) is obtained by normalization after computing π̃(c | x) for all c ∈ N_C^J. Algorithm 1 summarizes the computations done at the decoder to estimate the posterior probabilities of the phase configurations and to perform separation.

III-B. Coder: amplitudes b and errors σ

The parameters to be learned at the coder are the amplitude priors b_j(f,t) and the error models σ_j(f,t). First, to save bitrate and computing time, we only use BEADS for the F_0 frequency bands that have the highest energy in the mix (denote this set by F_0), and simply pick b_j(f,t) = 0 for the others. Then, for f ∈ F_0, we compress them by picking a Nonnegative Tensor Factorization (NTF) model [START_REF] Cichocki | Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation[END_REF]:

b_j(f,t) = Σ_k W_b(f,k) H(t,k) Q_b(j,k) if f ∈ F_0, and 0 otherwise,   (11)

in which case Θ_b = {W_b, H, Q_b} are small nonnegative matrices of sizes F_0 × K, T × K and J × K, respectively. We proceed in the same way for σ, further reducing the number of parameters of the model by taking the same activations H in both cases; this time, however, we model all frequencies.

In this section, and for convenience, we will assume that the component z_j(f,t) drawn for each source at each TF bin is known and equal to the one closest to s_j(f,t). This simplification has the advantage of strongly reducing the computational cost of the estimation algorithm. Indeed, the BEADS model (1) then reduces to:

s_j(f,t) | z_j ∼ N(b_j(f,t) ω^{z_j(f,t)}, σ_j(f,t)).

We can define the relative source ε_j(f,t):

ε_j(f,t) := s_j(f,t) / ω^{z_j(f,t)} ∼ N(b_j(f,t), σ_j(f,t)).   (12)

Provided z_j(f,t) is correctly chosen as the component whose argument 2π z_j(f,t)/C is the closest to that of s_j(f,t), we can furthermore safely assume that the real part R(ε_j(f,t)) of ε_j(f,t) is nonnegative.
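To make the decoding procedure concrete, here is a minimal NumPy sketch of Algorithm 1 for a single TF bin. It is our own illustration, not code from the paper: array shapes, function and variable names are assumptions, and the constant factor 1/(π^I |Σ_x|) of the Gaussian likelihood is dropped since it cancels in the normalization.

```python
import itertools
import numpy as np

def beads_decode_bin(x, A, b, sigma, prior, C):
    """One TF bin of Algorithm 1 (sketch). x: (I,) complex mixture,
    A: (I, J) mixing matrix, b / sigma: (J,) magnitude priors and
    variances, prior: (J, C) phase priors pi_j(c | f, t)."""
    J = A.shape[1]
    omega = np.exp(2j * np.pi * np.arange(C) / C)       # C-th roots of unity
    Sigma_x = A @ np.diag(sigma) @ A.conj().T           # mixture covariance (5)
    G = np.diag(sigma) @ A.conj().T @ np.linalg.inv(Sigma_x)  # Wiener gain
    s_acc, norm = np.zeros(J, dtype=complex), 0.0
    for c in itertools.product(range(C), repeat=J):     # all C**J configurations
        c = list(c)
        mean_src = omega[c] * b                         # omega^c . b (elementwise)
        x_c = A @ mean_src
        d = x - x_c
        # unnormalized posterior weight, cf. (10); Gaussian constants cancel
        w = np.prod(prior[np.arange(J), c]) * np.exp(
            -np.real(d.conj() @ np.linalg.solve(Sigma_x, d)))
        s_acc += w * (mean_src - G @ x_c)
        norm += w
    return s_acc / norm + G @ x                         # posterior mean, cf. (8)
```

For small C and J, the exhaustive marginalization over the C^J configurations is affordable; the loop mirrors Algorithm 1 step by step.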
Now, we detail the learning procedure we propose for b and σ. The strategy is to alternately fix one of them and learn the other.

Learning b_j(f,t): assume σ_j(f,t) is kept fixed. The distribution (12) of the relative sources means that we may estimate Θ_b using a weighted Euclidean method:

Θ_b ← argmin_Θ Σ_{f,t,j} |R(ε_j(f,t)) − b_j(f,t | Θ)|² / σ_j(f,t),

which can be done with a classical weighted NTF scheme [START_REF] Virtanen | Combining pitch-based inference and non-negative spectrogram factorization in separating vocals from polyphonic music[END_REF] with the Euclidean cost function.

Learning σ_j(f,t): if b_j(f,t) is fixed, we see from (12) that ε_j(f,t) − b_j(f,t) has an isotropic complex Gaussian distribution with variance σ_j(f,t). This means Θ_σ can be estimated through:

Θ_σ ← argmin_Θ Σ_{f,t,j} d_IS(|ε_j(f,t) − b_j(f,t)|² ‖ σ_j(f,t | Θ)),

where d_IS(a ‖ b) is the classical Itakura-Saito divergence between two nonnegative scalars a and b. This optimization is classical in the audio processing literature [START_REF] Fitzgerald | On the use of the beta divergence for musical source separation[END_REF], [START_REF] Févotte | Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis[END_REF], [START_REF] Févotte | Algorithms for nonnegative matrix factorization with the beta-divergence[END_REF]. Since the activations H we take for σ are those of b, we only learn W_σ and Q_σ at this stage.

IV. EVALUATION

We evaluate the BEADS model through its performance for ISS, i.e. by displaying its average quality as a function of the bitrate required to transmit its parameters. To assess quality, we use the BSSeval metrics [START_REF] Vincent | Performance measurement in blind audio source separation[END_REF]: SDR (Source to Distortion Ratio) and SIR (Source to Interference Ratio), both expressed in dB, higher values indicating better separation. For normalization purposes, we compute δ-metrics, defined as the difference between a method's score and the performance of oracle Wiener filtering, i.e. filtering using the true source spectrograms [START_REF] Vincent | Oracle estimators for the benchmarking of source separation algorithms[END_REF].

The data consist of 10 excerpts of 30 s, taken from the DSD100 database². Each excerpt consists of J = 4 sources (vocals, bass, drums and accompaniment), sampled at 44.1 kHz. We generated either mono (I = 1) or stereo (I = 2) mixtures from these sources, through simple summation or anechoic mixing (delays + gains), respectively. The STFT was computed with 50% overlap and a window size of 93 ms. We evaluated the following methods:
• BEADS oracle: ŝ(f,t) = µ_{z(f,t)}.
• BEADS point, using only the phase configuration ẑ that is most likely a posteriori.
• BEADS, as given in Algorithm 1.
• Itakura-Saito NTF [START_REF] Liutkus | Informed source separation through spectrogram coding and data embedding[END_REF], with K components.

² http://sisec.inria.fr

Given all these methods and data, our evaluation consisted in running the methods with F_0 = 150 frequency bands for the BEADS magnitudes and C ∈ {8, 16} beads; all methods were tried for K ∈ [8, 128] NTF components. We picked 16 quantization levels for all parameters. Results were smoothed using LOESS [START_REF] Cleveland | Locally weighted regression: an approach to regression analysis by local fitting[END_REF] and are displayed in Figure 2.
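To make the alternating scheme concrete, here is one multiplicative update for the weighted Euclidean NTF step on Θ_b, with targets V(f,t,j) = R(ε_j(f,t)) and weights w = 1/σ_j(f,t). This is a sketch under our own conventions, not code released with the paper; the multiplicative-update form is the standard one for weighted Euclidean NTF.

```python
import numpy as np

def weighted_ntf_step(V, w, W, H, Q, eps=1e-12):
    """One multiplicative update for min sum w * (V - V_hat)**2, where
    V_hat[f, t, j] = sum_k W[f, k] * H[t, k] * Q[j, k], all arrays nonnegative.
    Here V = Re(relative sources) and w = 1 / sigma, cf. the text above."""
    def v_hat():
        return np.einsum('fk,tk,jk->ftj', W, H, Q)
    W *= (np.einsum('ftj,tk,jk->fk', w * V, H, Q)
          / (np.einsum('ftj,tk,jk->fk', w * v_hat(), H, Q) + eps))
    H *= (np.einsum('ftj,fk,jk->tk', w * V, W, Q)
          / (np.einsum('ftj,fk,jk->tk', w * v_hat(), W, Q) + eps))
    Q *= (np.einsum('ftj,fk,tk->jk', w * V, W, H)
          / (np.einsum('ftj,fk,tk->jk', w * v_hat(), W, H) + eps))
    return W, H, Q
```

For the Itakura-Saito step on Θ_σ, the analogous multiplicative updates with the IS divergence apply, keeping H fixed and updating only W_σ and Q_σ.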
An interesting fact visible in Figure 2 is that the oracle BEADS model significantly outperforms standard oracle Wiener filtering, even for very crude magnitude models b_j(f,t): its δ-metrics become positive even at very small bitrates. We may also notice that the δ-metrics are higher for mono than for stereo mixtures. In this respect, we should highlight that the absolute performance of oracle Wiener filtering is of course higher for stereo (not shown in Figure 2), because the knowledge of the mixing filters A(f) alone already brings good separation, and actually some information about the phase of the sources. Adding spectral knowledge in that case is therefore less important than in the mono case, where it is crucial.

Next, we see a very clear improvement of BEADS as described in Algorithm 1 over classical NTF-ISS, of approximately 2 dB SDR and 5 dB SIR at most bitrates. This significant boost in performance shows that BEADS helps a lot in predicting the source signals by adequately handling priors on magnitudes, which is the main result of this study. Finally, Figure 2 also shows that the procedure for computing the phase posterior probabilities is not sufficient to correctly identify the true phase configuration, as seen from the strong discrepancies between the BEADS point estimate and its oracle performance. While marginalization over the configurations, as described in Algorithm 1, helps a lot in this respect, there remains much room for improvement in the parameter estimation of this model.

V. CONCLUSION

In this paper, we introduced BEADS as a convenient probabilistic model for complex data whose magnitude is approximately known. BEADS is a Gaussian Mixture Model in which all components share the same variance and are scattered along a circle. While conceptually simple, BEADS comes with several advantages. First, it translates the delicate problem of modeling the phase into setting probabilities over a discrete set of components. Second, it allows for easy inference. Finally, it straightforwardly leads to effective filtering procedures. Although we demonstrated its potential in an audio-coding application, we believe it may also be useful in the blind separation setting, when embedded in an Expectation-Maximization estimation procedure.

Fig. 1 (panel captions): (c) the BEADS model combines the advantages of both (C = 8); (d) ...and approximates the donut shape well (C = 16).
Fig. 2: BEADS for ISS on mono (left) or stereo (right) mixes. Metrics are δSDR (top) and δSIR (bottom). Units are kilobits/second/source (x-axis) and dB (y-axis).
20,083
[ "2740", "10056" ]
[ "141072", "303510", "491299", "420403" ]
01748519
en
[ "math" ]
2024/03/05 22:32:07
2018
https://hal.science/hal-01748519/file/L1Stationary_EHL_M-Pierre.pdf
Stationary reaction-diffusion systems in L1

El Haj Laamri (email: el-haj.laamri@univ-lorraine.fr), Michel Pierre (email: michel.pierre@ens-rennes.fr)

Keywords: reaction-diffusion systems, nonlinear diffusion, cross-diffusion, global existence, porous media equation, weak solutions.
2010 MSC: 35K10, 35K40, 35K57

Introduction and main results

Our goal is to analyze the existence of solutions to stationary reaction-diffusion systems in L1-spaces. We first give a general abstract result for nonlinearities satisfying as many structural inequalities as the number of equations. This is done in the framework of abstract m-accretive operators in L1 spaces for the diffusion part, and the full system is controlled by an associated cross-diffusion system which is "good" in a suitable sense. We give several examples where this abstract result applies. Then, we provide a general result for more specific systems, associated with general chemical reactions, for which less structure holds for the nonlinearities.

We denote by (Ω, µ) a measure space, where µ is a nonnegative measure on the set Ω with µ(Ω) < +∞. We consider systems of the type

(S) for all i = 1, ..., m: u_i ∈ D(A_i) ∩ L1(Ω, dµ)^+, h_i(·, u) ∈ L1(Ω, dµ), u_i + A_i u_i = h_i(·, u_1, ..., u_m) + f_i(·) ∈ L1(Ω, dµ),   (1)

where, for all i = 1, ..., m,
- A_i is a (possibly) nonlinear operator in L1(Ω, dµ) (generally a diffusion operator in applications), defined on D(A_i) ⊂ L1(Ω, dµ),
- h_i : Ω × R^m → R is a nonlinear "reactive" term,
- f_i ∈ L1(Ω, dµ)^+ [:= {g ∈ L1(Ω, dµ); g ≥ 0 µ-a.e.}].

We are interested in nonnegative solutions u = (u_1, ..., u_m) ∈ L1(Ω, dµ)^{+m}. These systems are naturally associated with the evolution reaction-diffusion systems

∂_t u_i(t) + A_i u_i(t) = h_i(·, u(t)), t ∈ [0, +∞), i = 1, ..., m.   (2)

When approximating these evolution systems by an implicit time discretization scheme, we are led to solving the following set of equations for each small time step ∆t:

for all i = 1, ..., m and n = 0, 1, ...: u_i^{n+1} ∈ D(A_i), (u_i^{n+1} − u_i^n)/∆t + A_i u_i^{n+1} = h_i(·, u^{n+1}),

or

u_i^{n+1} + (∆t) A_i u_i^{n+1} = (∆t) h_i(·, u^{n+1}) + u_i^n.

This is exactly the system (S) with unknown u = u^{n+1}, up to trivially changing (∆t)A_i into A_i and (x, u) ↦ [(∆t)h_i(x, u) + u_i^n(x)]_{1≤i≤m} into (x, u) ↦ [h_i(x, u) + f_i(x)]_{1≤i≤m} (see the numerical sketch after the list of operators below). The asymptotic steady states of the evolution system (2) are also the solutions u = (u_1, ..., u_m) of the system {A_i u_i = h_i(·, u), i = 1, ..., m}, which is also essentially included in the system (S), up to a slight change of the operators A_i.

The kind of operators A_i we have in mind are:
- the Laplacian u ↦ −∆u on a bounded open set Ω of R^N, with various boundary conditions on ∂Ω;
- more general elliptic operators u ↦ −Σ_{i=1}^N ∂_{x_i}(Σ_{j=1}^N a_{ij}(x) ∂_{x_j} u + b_i u) on the same Ω, with various boundary conditions as well;
- nonlinear operators of porous media type, like u ↦ −∆φ(u), where φ : R → R is an increasing function, again with different boundary conditions on ∂Ω;
- nonlinear operators of p-Laplacian type, like u ↦ −∆_p u := −∇·(|∇u|^{p−2}∇u), where p ∈ (1, +∞) and |·| is the Euclidean norm in R^N;
- classical perturbations and various combinations of all of these.
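As announced above, here is a minimal numerical sketch (our own; the paper contains no code) of how one implicit Euler step of (2) is exactly an instance of (S): a single equation with A = −d∆, discretized by finite differences in one dimension with homogeneous Neumann conditions, and a plain Picard iteration on the reaction term.

```python
import numpy as np

def implicit_step(u, h, dt, d, dx, iters=50):
    """One implicit Euler step for du/dt - d*u_xx = h(u) on a 1-D grid with
    homogeneous Neumann conditions: solves w + dt*A w = dt*h(w) + u, i.e.
    exactly the form of system (S) with (dt*A, dt*h + u) in place of (A, h + f)."""
    n = u.size
    A = np.zeros((n, n))                     # matrix of A = -d * Laplacian
    for i in range(n):
        A[i, i] += 2 * d / dx**2
        A[i, max(i - 1, 0)] -= d / dx**2     # Neumann: mirrored neighbors
        A[i, min(i + 1, n - 1)] -= d / dx**2
    M = np.eye(n) + dt * A
    w = u.copy()
    for _ in range(iters):                   # Picard iteration on h(w)
        w = np.linalg.solve(M, dt * h(w) + u)
    return w

# example with a logistic reaction h(u) = u*(1 - u)
u0 = np.linspace(0.1, 0.9, 100)
u1 = implicit_step(u0, lambda v: v * (1 - v), dt=0.05, d=1e-3, dx=1.0 / 99)
```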
All these operators will satisfy the assumption (A) below which means that each A i is an m-accretive operator in L 1 (Ω, dµ) whose resolvents are compact and preserve positivity, namely the following, where I denote the identity : 1 (Ω, dµ), and for all λ ∈ (0, +∞), (A1) maccretivity : I + λA i is onto and (I + λA i ) -1 is nonexpansive, (A2) positivity : Ω sign -(u i )Au i ≥ 0, ∀ u i ∈ D(A i ), (A3) compactness : (I + λA i ) -1 : L 1 (Ω, dµ) → L 1 (Ω, dµ) is compact. (A)            ∀ i = 1, ..., m, A i : D(A i ) ⊂ L 1 (Ω, dµ) → L (3) Here I denotes the identity on L 1 (Ω, dµ) and in (A 2 ), we define sign -(s) := -sign + (-s), ∀s ∈ R where sign + (s) := 1, ∀ s ∈ [0, +∞), sign + (s) = 0, ∀ s ∈ (-∞, 0). This implies immediately that (I + λA i ) -1 L 1 (Ω, dµ) + ⊂ L 1 (Ω, dµ) + since then, for g ∈ L 1 (Ω, dµ) + and u i := (I + λA i ) -1 g 0 ≥ Ω sign -(u i )g dµ = Ω sign -(u i )[u i + λA i u i ] dµ ≥ Ω u - i dµ ⇒ u - i = 0 µ -a.e. For simplicity, we consider only single-valued operators A i , but everything would also work for multivalued m-accretive operators A i . The nonlinear reactive terms h i will preserve positivity as well. We assume that, for all i = 1, ..., m : (H1)        (1) h i : Ω × R m → R measurable ; (2) h i (•, R) := sup{|h i (•, r)|; |r| ≤ R} ∈ L 1 (Ω, dµ) + , ∀ R ∈ [0, +∞) ; (3) r ∈ R m → h i (•, r) is continuous ; (4) quasipositivity : h i (•, r 1 , ..., r i-1 , 0, r i+1 , ..., r m ) ≥ 0, ∀ r ∈ [0, +∞) m . (4) Moreover, the nonlinear reactive terms h i we are considering are those for which mass conservation or, more generally, mass dissipation or at least mass control holds for the associated evolution system [START_REF] Bebernes | Finite time blowup for semilinear reactive-diffusive systems[END_REF]. It is the case when the h i 's satisfy a relation like m i=1 h i (•, r) ≤ 0 for all r ∈ [0, +∞) m or more generally (H2) m i=1 a i h i (•, r) ≤ m i=1 b i r i +ω(•), ω ∈ L 1 (Ω, dµ) + , for some 0 ≤ b i < a i , i = 1, ..., m, r ∈ [0, +∞) m . ( 5 ) As we will see, this structure naturally implies an a priori L 1 -estimate on the solutions u i of the system (S) (see Proposition 2.2), together with "standard operators" A i by which we mean (A inf ) a ∞ := inf Ω A i u i dµ, u i ∈ D(A i ) ∩ L 1 (Ω, dµ) + > -∞ for each i = 1, ..., m. (6) This assumption essentially holds when 0 ∈ D(A i ), i = 1, ..., m. It also holds with most diffusion operators with non homogeneous boundary conditions (see Section 3) except a few ones : this is discussed in Remark 3.4. Actually, the main point will be to get a priori L 1 -estimates even on the nonlinearities h i (•, u) where u is solution of System (S). This will be satisfied if more structure is required on the nonlinear functions h i , namely (using the natural order in R m ) : (M )    There exist two m × m matrices M 0 , M 1 where M 0 is invertible, with nonnegative entries and such that M 0 h(•, r) ≤ M 1 r + Θ(•), ∀ r ∈ [0, +∞) m , Θ ∈ L 1 (Ω, dµ) +m . (7) Applying the matrix M 0 to the system (S) leads to the following set of m inequalities for an associated cross-diffusion system : M 0 u + M 0 Au = M 0 h(•, u) + M 0 f ≤ M 1 u + Θ + M 0 f, where we denote Au := (A i u i ) 1≤i≤m , h(•, u) := (h i (•, u)) 1≤i≤m and f := (f i ) 1≤i≤m . As proved in Proposition 2.3, this will imply an a priori estimate of h i (•, u), i = 1, ..., m in L 1 (Ω, dµ). More precisely, we will consider an approximate problem where the nonlinearities h i are replaced by truncated versions h n i . 
Then (M) will imply that h_i^n(·, u^n) is bounded in L1(Ω, dµ) for the approximate solutions u^n. The compactness of u^n in L1(Ω, dµ) will then follow. Thus, up to a subsequence, u^n will converge in L1(Ω, dµ)^m and µ-a.e. to some u, and h_i^n(·, u^n) will also converge µ-a.e. to h_i(·, u) for all i. But this is not yet sufficient to pass to the limit in the system, since one essentially needs the convergence of h_i^n(·, u^n) in L1(Ω, dµ). This is the case if h_i^n(·, u^n) is uniformly integrable, by Vitali's lemma. This will hold if we add the following technical assumption, which is some kind of compatibility condition between the various operators A_i. It will be satisfied in the examples of Section 3.

(Φ) There exist φ : [0, +∞)^m → [0, +∞) continuous with lim_{|r|→+∞} φ(r) = +∞ and b ∈ (0, +∞)^m such that

∫_Ω sign^+(φ(u) − k) b · M_0 A u dµ ≥ 0, ∀ u ∈ D(A) ∩ L1(Ω, dµ)^{+m}, ∀ k ∈ [0, +∞), where D(A) := Π_{i=1}^m D(A_i).   (8)

We now state our abstract result. It will be proved in Section 2 and applied to several explicit examples in Section 3. We refer to [START_REF] Laamri | Existence globale pour des systèmes de réaction-diffusion dans L 1[END_REF], where this kind of result and examples were already widely discussed and analyzed.

Theorem 1.1 Assume that (A), (H1), (H2), (A_inf), (M), (Φ) hold. Then the system (S) has a solution for all f ∈ L1(Ω, dµ)^{+m}.

As already explained, the assumption (M) means that m independent inequalities hold between the m nonlinear functions h_i. But many systems come with fewer than m such relations. The following 2 × 2 system satisfies only one relation of type (H2), and not (M):

u_1 − d_1∆u_1 = (β_1 − α_1)[u_1^{α_1} u_2^{α_2} − u_1^{β_1} u_2^{β_2}] + f_1 =: h_1(u_1, u_2) + f_1,
u_2 − d_2∆u_2 = (β_2 − α_2)[u_1^{α_1} u_2^{α_2} − u_1^{β_1} u_2^{β_2}] + f_2 =: h_2(u_1, u_2) + f_2,
∂_ν u_1 = 0 = ∂_ν u_2 on ∂Ω,   (9)

where, for i = 1, 2, d_i ∈ (0, +∞), α_i, β_i ∈ {0} ∪ [1, +∞), f_i ∈ L1(Ω)^+, and Ω ⊂ R^N is equipped with the Lebesgue measure. This kind of nonlinearity appears in reversible chemical reactions with two species, namely

α_1 A_1 + α_2 A_2 ⇌ β_1 A_1 + β_2 A_2.   (10)

Here (β_1 − α_1)(β_2 − α_2) < 0. The nonlinearities h_1 and h_2 are quasipositive but satisfy the single relation γ_2 h_1 + γ_1 h_2 = 0, γ_i := |β_i − α_i|. It turns out that existence indeed holds for system (9) when f_i ∈ L1(Ω)^+ for all i, as we prove here. Actually, we prove existence for a general class of such "chemical systems" when f_i ∈ L1(Ω)^+ and also f_i log f_i ∈ L1(Ω). This is the second main result of this paper. Let us consider the system

(CHS) for all i = 1, ..., m: u_i − d_i∆u_i = (β_i − α_i)[k_1 Π_{k=1}^m u_k^{α_k} − k_2 Π_{k=1}^m u_k^{β_k}] + f_i, ∂_ν u_i = 0 on ∂Ω,   (11)

where k_1, k_2 ∈ (0, +∞) and, for all i = 1, ..., m, d_i ∈ (0, +∞), α_i, β_i ∈ {0} ∪ [1, +∞) and f_i ∈ L1(Ω)^+, where Ω is a bounded regular open subset of R^N and where

I := {i ∈ {1, ..., m}; α_i − β_i > 0}, J := {j ∈ {1, ..., m}; β_j − α_j > 0} satisfy: I ≠ ∅, J ≠ ∅, I ∪ J = {1, ..., m}.   (12)

We denote by |I| (resp. |J|) the number of elements of I (resp. J).

Theorem 1.2 Assume that f_i ∈ L1(Ω)^+ and f_i log f_i ∈ L1(Ω) for all i = 1, ..., m. Assume also that |I| ≤ 2 (or |J| ≤ 2). Then there exists a nonnegative solution u ∈ W^{2,1}(Ω)^{+m} of (CHS) with h_i(u) ∈ L1(Ω) for all i = 1, ..., m. If moreover m = 2, then the same result holds with only f_i ∈ L1(Ω)^+, i = 1, 2.
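As a concrete illustration of system (9), the following sketch (our own choices throughout; convergence of the damped Picard iteration is plausible here but not guaranteed in general) treats the reaction 2A_1 ⇌ A_2, i.e. α = (2, 0), β = (0, 1), so that h_1 = −2(u_1² − u_2), h_2 = u_1² − u_2 and γ_2 h_1 + γ_1 h_2 = 0 with γ_1 = 2, γ_2 = 1, on a one-dimensional grid with homogeneous Neumann conditions.

```python
import numpy as np

n, dx = 200, 1.0 / 199
d1, d2 = 1.0, 0.3
L = np.zeros((n, n))                         # matrix of -Laplacian (Neumann)
for i in range(n):
    L[i, i] += 2.0 / dx**2
    L[i, max(i - 1, 0)] -= 1.0 / dx**2
    L[i, min(i + 1, n - 1)] -= 1.0 / dx**2
f1 = np.ones(n)
f2 = 1.0 + np.sin(np.linspace(0.0, np.pi, n))  # toy nonnegative data
u1, u2 = np.ones(n), np.ones(n)
M1 = np.eye(n) + d1 * L
M2 = np.eye(n) + d2 * L
for _ in range(500):                          # damped Picard iteration
    B = u1**2 - u2                            # the common reaction bracket
    u1 = 0.9 * u1 + 0.1 * np.linalg.solve(M1, -2.0 * B + f1)
    u2 = 0.9 * u2 + 0.1 * np.linalg.solve(M2, B + f2)
# mass relation inherited from gamma_2*h1 + gamma_1*h2 = 0:
# u1 + 2*u2 - Delta(d1*u1 + 2*d2*u2) = f1 + 2*f2
res = (u1 + 2 * u2) + L @ (d1 * u1 + 2 * d2 * u2) - (f1 + 2 * f2)
print(np.max(np.abs(res)))                    # small at an approximate fixed point
```

The final check is exactly the linear-combination identity that drives the L1 estimates of Section 4: the reaction terms cancel in γ_2 × (first equation) + γ_1 × (second equation).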
Remark 1.3 System (11) arises when modeling a general reversible chemical reaction with m species, according to the mass action law and to Fick's linear diffusion. In fact, it also contains systems written more generally as v i -d i ∆v i = λ i K 1 Π m k=1 v α k k -K 2 Π m k=1 v β k k + f i , where λ i ∈ R, λ i (β i -α i ) > 0, i = 1, ..., m. Indeed, we may go back to the exact writing of (11) by setting u i := (β i -α i )v i /λ i , k 1 := K 1 Π k [λ k /(β k -α k )] α k , k 2 := K 2 Π k [λ k /(β k -α k )] β k , f i := (β i -α i ) f i /λ i . On the other hand, it does not include systems like u 1 -d 1 ∆u 1 = +[u 3 1 u 2 2 -u 2 1 u 3 2 ] + f 1 (= u 2 1 u 2 2 [u 1 -u 2 ] + f 1 ), u 2 -d 2 ∆u 2 = -[u 3 1 u 2 2 -u 2 1 u 3 2 ] + f 2 . Here the condition λ i (β i -α i ) > 0 is not satisfied. Remark 1.4 Note that besides the 2 × 2 system (9), Theorem 1.2 contains as particular cases some favorite systems of the literature like h i (u) = (-1) i [u α 1 1 u α 3 3 -u β 2 2 ], i = 1, 2, 3, α 1 , α 3 , β 2 ∈ [1, +∞), h i (u) = (-1) i [u 1 u 3 -u 2 u 4 ], i = 1, 2, 3, 4. We may use Remark 1.3 to write them as in [START_REF] Laamri | Existence globale pour des systèmes de réaction-diffusion dans L 1[END_REF]. Analysis of systems of this kind may be found for instance in [START_REF] Laamri | Global existence of classical solutions for a class of reaction-diffusion systems[END_REF], [START_REF] Desvillettes | Global existence for quadratic systems of reactiondiffusion[END_REF], [START_REF] Pierre | Global existence for a class of quadratic reaction-diffusion systems with nonlinear diffusions and L 1 initial data[END_REF], [START_REF] Laamri | Global existence for reaction-diffusion systems with nonlinear diffusion and control of mass[END_REF], [START_REF] Goudon | Regularity analysis for systems of reaction-diffusion equations[END_REF], [START_REF] Cañizo | Improved duality estimates and applications to reaction-diffusion equations[END_REF], [START_REF] Caputo | Solutions of the 4-species quadratic reaction-diffusion systems are bounded and C ∞ -smooth[END_REF], etc. In fact Theorem 1.2 applies to quite general systems like, for instance, those obtained by multiplying the above 3 × 3 (resp. 4 × 4) example by u σ 1 1 u σ 2 2 u σ 3 3 (resp. u σ 1 1 u σ 2 2 u σ 3 3 u σ 4 4 ) where σ i ∈ [0, +∞). They lead to the following nonlinearities h i (u) = λ i [Π m k=1 u α k k -Π m k=1 u β k k ] , where m = 3 (resp. m = 4) and λ i (β i -α i ) > 0. Actually, Theorem 1.2 applies also to these same nonlinearities when m = 5. Indeed, since I ∪ J = {1, 2, 3, 4, 5}, it follows that I or J contains at most two elements. We believe that the result of Theorem := |x|, x ∈ Ω := B(0, 1) ⊂ R N , b ∈ (N/2, N -2), c ∈ (0, +∞): σ(r) := r -b + br + c, ∀ 0 < r ≤ 1, ⇒ σ -∆σ = f, f (r) := c + r -b + br + b(N -2 -b)r -(b+2) -b(N -1)r -1 , σ (1) = 0, where c is chosen large enough so that f ≥ 0. Note that f ∈ L p (Ω), p ∈ [1, N/(b + 2)). Let us now consider the solution (which exists by Theorem 1.2) of the system        u 1 , u 2 ∈ W 2,1 (Ω), u 2 2 -u 2 1 ∈ L 1 (Ω), u 1 -∆u 1 = u 2 2 -u 2 1 + f /2, u 2 -∆u 2 = -[u 2 2 -u 2 1 ] + f /2, ∂ ν u 1 = ∂ ν u 2 = 0 on ∂Ω. Then (u 1 + u 2 ) -∆(u 1 + u 2 ) = f, ∂ ν (u 1 + u 2 ) = 0 on ∂Ω. Thus u 1 + u 2 = σ. But u 1 , u 2 cannot be in L 2 (Ω) since σ is not in L 2 (Ω) by the choice of b > N/2. 
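A quick numerical sanity check of the Remark 1.5 counterexample may be helpful (our own, with the admissible choice N = 6 and b = 3.5 ∈ (N/2, N − 2)): f stays nonnegative once c is large enough and is integrable near 0 (since b + 2 < N), while the radial integral of σ² grows without bound as the lower cutoff shrinks, confirming σ ∉ L²(Ω).

```python
import numpy as np

# sigma(r) = r**(-b) + b*r + c on B(0, 1) in R^N, with
# f(r) = c + r**(-b) + b*r + b*(N-2-b)*r**(-(b+2)) - b*(N-1)/r
N, b = 6.0, 3.5                          # b in (N/2, N-2) = (3, 4)
r = np.linspace(1e-6, 1.0, 2_000_000)
f_no_c = r**-b + b * r + b * (N - 2 - b) * r**(-(b + 2)) - b * (N - 1) / r
c = max(0.0, -f_no_c.min()) + 1.0        # choose c large enough so that f >= 0
print("min f:", (f_no_c + c).min())      # nonnegative
sigma = r**-b + b * r + c
w = r**(N - 1)                           # radial volume element
print("||f||_L1 ~", np.trapz((f_no_c + c) * w, r))    # finite, since b + 2 < N
print("||sigma||_L2^2 ~", np.trapz(sigma**2 * w, r))  # diverges as cutoff -> 0
```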
Remark 1.6 It is classical that an entropy structure holds in System (11) since m i=1 [log u i + µ i ]h i (u) = -[log k 1 Π m k=1 u α k k -log k 2 Π m k=1 u β k k ][k 1 Π m k=1 u α k k -k 2 Π m k=1 u β k k ] ≤ 0, when choosing µ i := [log k 1 -log k 2 ]/[m(α i -β i )]. Under the assumptions of Theorem 1.2, and in particular the LLogL assumption on the f i , we have the estimate Ω m i=1 [log u i + µ i ]u i + d i |∇u i | 2 u i ≤ Ω m i=1 [log u i + µ i ]f i ≤ Ω m i=1 f i log f i + (µ i -1)f i + u i , where, for the last inequality, we use the Young's inequality (75) with r = f i , s = log u i . But we will not use here this stucture in the proof of Theorem 1.2. Our strategy will consist in proving that the nonlinearity h i (u) is a priori bounded in L 1 (Ω). Then adequate compactness arguments allow us to pass to the limit in the approximate system. Note that the entropy inequality provides the extra information that ∇ √ u i ∈ L 2 (Ω) for the solutions obtained in Theorem 1.2 when f i log f i ∈ L 1 (Ω), i = 1, ..., m. Theorem 1.1 will be proved in Section 2, examples of applications are given in Section 3 and the proof of Theorem 1.2 is given in Section 4. Proof of Theorem 1.1 Since µ is fixed, we will most of the time more simply write L 1 (Ω), L 1 (Ω) + instead of L 1 (Ω, dµ), L 1 (Ω, dµ) + . We will use the natural order in R m namely [r ≤ r ⇔ r i ≤ ri , i = 1, ..., m] and we also denote r + := (r + 1 , ..., r + m ). Lemma 2.1 Let f = (f 1 , ..., f m ) ∈ L 1 (Ω) +m . We set h n i (x, r) := h i (x, r) 1 + n -1 m j=1 |h j (x, r)| , ∀ i = 1, ..., m, r ∈ R m , x ∈ Ω. ( 13 ) Assume that (A) and (H1) hold. Then the following approximate system (S n ) ∀i = 1, ..., m, u n i ∈ D(A i ) ∩ L 1 (Ω) + , u n i + A i u n i = h n i (•, u n ) + f i (•) in L 1 (Ω, dµ), (14) has a nonnegative solution u n = (u n 1 , • • • , u n m ). Proof. We consider the mapping T : v ∈ L 1 (Ω) m → w ∈ L 1 (Ω) m where w = (w 1 , ..., w m ) is the solution of ∀i = 1, ..., m, w i ∈ D(A i ), w i + A i w i = h n i (•, v + ) + f i (•) in L 1 (Ω, dµ), (15) where v + := (v + 1 , ..., v + m ). The mapping T is well defined since |h n i (x, r)| ≤ n for all (x, r) ∈ Ω × R m and f ∈ L 1 (Ω) m so that the solution is given by w i = (I + A i ) -1 h n i (•, v + ) + f i , i = 1, ..., m. Moreover h n i (•, v) + f i L 1 (Ω) ≤ n µ(Ω) + f i L 1 (Ω) =: M i . Let K i := (I + A i ) -1 {g ∈ L 1 (Ω) ; g L 1 (Ω) ≤ M i } , K := K 1 × ... × K m ⊂ L 1 (Ω) m . By assumption (A) and in particular (A3), K is a compact set of L 1 (Ω) m and K ⊃ T L 1 (Ω) m . On the other hand, we easily check that T is continuous. Thus T is a compact operator from L 1 (Ω) m into the compact set K ⊂ L 1 (Ω) m . By Schauder's fixed point theorem (see e.g. [START_REF] Zeidler | Nonlinear Functional Analysis and its applications, 1. Fixed Point Theorems[END_REF]), T has a fixed point u n . This means that u n is solution of (S n ). To prove the nonnegativity property of u n , we multiply the i-th equation by sign -(u n i ) and integrate to obtain, using Ω sign -(u i )A i u i ≥ 0 (see (A2) ), Ω (u n i ) -dµ ≤ Ω sign -(u n i )h n i (•, (u n ) + ) + sign -(u n i )f i dµ. By the nonnegativity of f i and the quasipositivity of h i assumed in (H1), the integral on the right is nonpositive so that Ω (u n i ) -dµ ≤ 0 or u n i ≥ 0 µ -a.e.. We will now progressively obtain estimates on u n independently of n. We start with the control of the total mass of u n . Proposition 2.2 Assume that (A), (H1), (H2), (A inf ) hold. 
Then there exists C ∈ (0, +∞) independent of n such that the solution u n of the approximate system (S n ) satisfies max 1≤i≤m u n i L 1 (Ω) ≤ C. Proof. According to (H2), we multiply each equation by a i and we add them to obtain m i=1 a i (u n i + A i u n i ) = m i=1 a i [h n i (•, u n ) + f i ] ≤ m i=1 [b i u n i + a i f i ]. We use (A inf ) to obtain n i=1 (a i -b i ) Ω u n i dµ ≤ -a ∞ m i=1 a i + m i=1 a i Ω f i dµ. Using a i > b i for all i yields the estimate of Proposition 2.2. We now prove that the nonlinearities are bounded in L 1 independently of n. Proposition 2.3 Assume that (A), (H1), (H2), (A inf ), (M ) hold. Then there exists C ∈ (0, +∞) independent of n such that the solution u n of the approximate system (S n ) satisfies max 1≤i≤m h n i (•, u n ) L 1 (Ω) ≤ C. Proof. According to the assumption (M ), let us multiply the system (S n ) by the matrix M 0 . As before, we denote Au n := (A 1 u n , ..., A m u n ), h n (•, u n ) := (h n 1 (•, u n ), ..., h n m (•, u n )) , f := (f 1 , ..., f m ). Then M 0 u n + M 0 Au n = M 0 h n (•, u n ) + M 0 f ≤ M 1 u n + Θ + M 0 f. (16) Since the entries of M 0 are nonnegative, and thanks to (A inf ) and the nonnegativity of u n , there exists C ∞ ∈ [0, +∞) m such that Ω [M 0 u n + M 0 Au n ]dµ ≥ -C ∞ which implies Ω [M 0 h n (•, u n ) + M 0 f ]dµ ≥ -C ∞ . We deduce Ω [M 1 u n + Θ -M 0 h n (•, u n )]dµ ≤ C ∞ + Ω [M 1 u n + Θ + M 0 f ]dµ ≤ D ∞ ∈ (0, +∞) m , the last inequality using also Proposition 2.2. As the function M 1 u n + Θ -M 0 h n (•, u n ) is nonnegative, these inequalities provide a bound for its L 1 (Ω) m -norm. It follows that M 0 h n (•, u n ) is also bounded in L 1 (Ω) m independently of n. Since M 0 is invertible, we deduce that h n (•, u n ) is itself bounded in L 1 (Ω) m : indeed if | • | denotes the euclidian norm in R m and • the induced norm on the m × m matrices, we may write |h n (•, u n )| = |M -1 0 M 0 h n (•, u n )| ≤ M -1 0 |M 0 h n (•, u n )|. Whence Proposition 2.3. Proposition 2.4 Assume that (A), (H1), (H2), (A inf ), (M ), (Φ) hold. Then, if u n is the solution of the approximate system (S n ), {h n (., u n )} n is uniformly integrable in Ω which means that, for all ε > 0, there exists δ > 0 such that for all measurable set E ⊂ Ω µ(E) < δ ⇒ E n i=1 |h n i (•, u n )|dµ ≤ ε for all n. Proof. By Proposition 2.3, we already know that h n (•, u n ) is bounded in L 1 (Ω) m . Let us prove that the extra condition (Φ) implies that it is even uniformly integrable. First, since u n i = (I + A i ) -1 (h n (•, u n ) + f i ), the compactness condition (A3) in (3) implies that, for all i = 1, ..., m, u n i belongs to a compact set of L 1 (Ω). Let us show that M 0 h n (•, u n ) is uniformly integrable. Since M 0 is invertible, this will imply that h n (•, u n ) is itself uniformly integrable and end the proof of Proposition 2.4. We will successively prove that (M 0 h n (•, u n )) + and (M 0 h n (•, u n )) -are uniformly integrable. Note that by the definition (13), h n (•, u n ) ≤ h(•, u n ). We may write (recall that the entries of M 0 are nonnegative) M 0 h n (•, u n ) ≤ M 0 h(•, u n ) ≤ M 1 u n + Θ ⇒ (M 0 h n (•, u n )) + ≤ (M 1 u n ) + + Θ. ( 17 ) Since u n is in a compact set of L 1 (Ω) m , so is (M 1 u n ) + and it is in particular uniformly integrable. We then deduce from the last inequality that (M 0 h n (•, u n )) + is uniformly integrable. 
To control (M 0 h n (•, u n )) -, we go back to ( 16) and, using ( 17), we rewrite it as follows M 0 u n + M 0 Au n + (M 0 h n (•, u n )) -= (M 0 h n (•, u n )) + + M 0 f ≤ (M 1 u n )) + + Θ + M 0 f. ( 18 ) According to (Φ), we multiply this inequality by sign + (ϕ(u n ) -k)b. By (Φ) on one hand and by the nonnegativity of u n , b and of the entries of M 0 on the other hand, we have Ω sign + (ϕ(u n ) -k)b • M 0 Au n dµ ≥ 0, Ω sign + (ϕ(u n ) -k)b • M 0 u n dµ ≥ 0, ∀ k ∈ [0, +∞). Combining with [START_REF] Pierre | Global existence for a class of quadratic reaction-diffusion systems with nonlinear diffusions and L 1 initial data[END_REF], we deduce Ω sign + (ϕ(u n ) -k)b • (M 0 h n (•, u n )) -dµ ≤ Ω sign + (ϕ(u n ) -k)b • (M 1 u n )) + + Θ + M 0 f dµ. (19) Since lim |r|→+∞ ϕ(r) = +∞, for all k ∈ [0, +∞), there exists R(k) such that [|r| ≥ R(k)] ⊂ [ϕ(r) ≥ k] and therefore [|u n | ≥ R(k)] ⊂ [ϕ(u n ) ≥ k] , µ -a.e.. (20) Let E ⊂ Ω be a measurable set. Then E b • (M 0 h n (•, u n )) -dµ ≤ E∩[|u n |≤R(k)] b • (M 0 h n (•, u n )) -dµ + E∩[|u n |≥R(k)] b • (M 0 h n (•, u n )) -dµ =: I n -+ I n + . If we denote ϕ R := max{ϕ(r); |r| ≤ R}, then R ∈ [0, +∞) → ϕ R ∈ [0, +∞) is a nondecreasing function such that [ϕ(r) ≥ k] ⊂ [|r| ≥ ψ(k)] where ψ(k) := inf ϕ -1 R ([k, +∞)) . By [START_REF] Schmitt | Existence globale ou explosion pour les systèmes de réaction-diffusion avec contrôle de masse[END_REF] and [START_REF] Rothe | Global solutions of reaction-diffusion systems[END_REF] and [ϕ(u n ) ≥ k] ⊂ [|u n | ≥ ψ(k)], we have I n + ≤ [ϕ(u n )≥k] b • (M 1 u n ) + + Θ + M 0 f dµ ≤ [|u n |≥ψ(k)] b • (M 1 u n ) + + Θ + M 0 f dµ. ( 21 ) Since u n lies in a compact set of L 1 (Ω) m and lim k→+∞ ψ(k) = +∞, then lim k→+∞ µ([|u n | ≥ ψ(k)]) = 0 uniformly in n. Thus, given ε ∈ (0, 1), there exists k = k ε large enough so that I n + ≤ ε/2 for all n. (22) We can also control I n -as follows. We remark that (see assumption [START_REF] Bebernes | Finite time blowup for semilinear reactive-diffusive systems[END_REF] in (H1)) |u n | ≤ R(k ε ) ⇒ |h n i (•, u n )| ≤ |h i (•, u n )| ≤ h i (•, R(k ε )). This implies that for some B ∈ (0, +∞), I n -= E∩[|u n |≤R(k)] b • (M 0 h n (•, u n )) -dµ ≤ E B m i=1 h i (•, R(k ε )) (23) We may choose δ small enough (independent of n) such that µ(E) ≤ δ ⇒ E m i=1 h i (•, R(k ε )) < ε/2B. Combining with I n + ≤ ε/2 proved above (see (22)), we deduce that b • (M 0 h n (•, u n )) -is uniformly integrable. Since b ∈ (0, +∞) m , this implies that (M 0 h n (•, u n )) -is itself uniformly integrable. We already know that (M 0 h n (•, u n )) + is uniformly integrable. Thus so is M 0 h n (•, u n ) and this ends the proof of Proposition 2.4. Proof of Theorem 1.1. We consider the solution u n of the approximate system (S n ) built in Lemma 2.1. By Propositions 2.2, 2.3, 2.4, up to a subsequence as n → +∞, we may assume that u n converges in L 1 (Ω) m and µ-a.e. to some u ∈ L 1 (Ω) +m . Moreover, by definition of h n i (see [START_REF] Laamri | Global existence for reaction-diffusion systems with nonlinear diffusion and control of mass[END_REF] ) and the continuity property of h i assumed in (H1), h n i (•, u n ) converges µ-a.e. to h(•, u). Moreover h n (•, u n ) is uniformly integrable. By Vitali's Lemma (see e.g. [START_REF] Schmitt | Existence globale ou explosion pour les systèmes de réaction-diffusion avec contrôle de masse[END_REF] or [START_REF] Fonseca | Modern Methods in the Calculus of Variations : L p spaces[END_REF]), h n (•, u n ) converges also in L 1 (Ω) m to h(•, u). 
Since u n i = (I +A i ) -1 (h n (•, u n )+f i ) this implies that u i = (I + A -1 i (h(•, u) + f i ) which means that u is solution of the limit system (S) and this ends the proof of Theorem 1.1. Examples In all examples below, Ω is a bounded open subset of R N with regular boundary and equipped with the Lebesgue measure. Examples with linear diffusions and homogeneous boundary conditions We start with a simple example associated with the Laplacian operator with homogeneous boundary conditions. Corollary 3.1 Assume the nonlinearity h = (h 1 , ..., h m ) satisfies (H1), (H2), (M ). Let d i ∈ (0, +∞), f i ∈ L 1 (Ω) + , i = 1, ..., m. Then the following system has a solution    for all i = 1, ..., m; u i ∈ W 1,1 0 (Ω) + , h i (•, u) ∈ L 1 (Ω), u i -d i ∆u i = h i (•, u) + f i . (24) Proof. We consider the operators D(A i ) := {u ∈ W 1,1 0 (Ω) ; ∆u ∈ L 1 (Ω)}, A i u := d i ∆u. ( 25 ) It is classical that these operators A i satisfy the three conditions of (A) in (3) (see e.g. [START_REF] Brezis | Semilinear elliptic equations in L 1[END_REF]). Since 0 ∈ D(A i ), as already noticed just after ( 6), (A inf ) is also satisfied. And for (Φ), if we denote M 0 = [m ij ] 1≤i,j≤m , we choose ϕ(r) := m j=1 ( m i=1 m ij ) d j r j , b = (1, ..., 1) t ∈ (0, +∞) m . Note that m i=1 m ij > 0 for all j = 1, ..., m since m ij ≥ 0 and M 0 is invertible. In particular, lim |r|→+∞ ϕ(r) = +∞). Then, if u ∈ D(A), k ∈ (0, +∞), Ω sign + (ϕ(u) -k)b • M 0 Au = - Ω sign +   m i,j=1 m ij d j u j -k   ∆   m i,j=1 m ij d j u j   ≥ 0. ( 26 ) Then we may apply Theorem 1.1. Remark 3.2 As examples of functions h, we may for instance choose    m = 2, α i , β i ∈ {0} ∪ [0, +∞), i = 1, 2, λ ∈ (0, 1) h 1 (•, u 1 , u 2 ) = λu α 1 1 u α 2 2 -u β 1 1 u β 2 2 , h 2 (•, u 1 , u 2 ) = -u α 1 1 u α 2 2 + u β 1 1 u β 2 2 . We easily check that (H1) holds and that (H2) is satisfied with a i = 1, b i = 0 for all i and ω = 0 (in other words h 1 + h 2 ≤ 0). Moreover (M ) is satisfied with M 1 = 0, Θ = (0, 0) t and M 0 = 1 1 1 λ . Note that for λ = 1, M 0 is not invertible so that only one relation h 1 + h 2 ≤ 0 holds and Theorem 1.1 does not apply. This kind of systems is considered in Theorem 1.2. • Here is another example of a nonlinearity h = (h 1 , h 2 , h 3 ) which satisfies the corollary. m = 3, α i ∈ [1, +∞), i = 1, 2, h 1 = u 3 -u α 1 1 u α 2 2 = h 2 = -h 3 . The corresponding evolution problem is studied in [START_REF] Rothe | Global solutions of reaction-diffusion systems[END_REF] for N ≤ 5, [START_REF] Martin | Nonlinear reaction-diffusion systems[END_REF] for any dimension N with α 1 = α 2 = 1, and in [START_REF] Laamri | Global existence of classical solutions for a class of reaction-diffusion systems[END_REF] where (α 1 , α 2 ) ∈ [1, +∞) 2 . Here (H1) is obviously satisfied and so is (H2) with a i = 1, i = 1, 2, a 3 = 2, b i = 0, i = 1, 2, 3, ω = 0. Then (M ) is satisfied with Θ = (0, 0, 0) t and M 0 =   1 0 0 0 1 0 1 1 2   , M 1 =   0 0 1 0 0 1 0 0 0   . A main point here is that the dependence in u 3 is linear. When it is superlinear, the system does not fit any more into the scope of Theorem 1.1. It is however analyzed in Theorem 1.2. About more general linear diffusions Let us now make some comments on the following example where diffusions are more general than in Corollary 3.1. 
                 u i ∈ W 1,1 0 (Ω), h i (u 1 , u 2 ) ∈ L 1 (Ω), i = 1, 2, u 1 - N i,j=1 a ij ∂ x i x j u 1 = h 1 (u 1 , u 2 ) + f 1 , u 2 - N i,j=1 b ij ∂ x i x j u 2 = h 2 (u 1 , u 2 ) + f 2 , (27) where a ij , b ij ∈ R, N i,j=1 a ij ξ i ξ j , N i,j=1 b ij ξ i ξ j ≥ α|ξ| 2 , α ∈ (0, +∞), ∀ξ = (ξ 1 , ..., ξ N ) ∈ R N . ( 28 ) We consider the operators        D(A 1 )[resp. D(A 2 )] := v ∈ W 1,1 0 (Ω), N i,j=1 a ij ∂ x i x j v [resp. N i,j=1 b ij ∂ x i x j v] ∈ L 1 (Ω) , A 1 v := N i,j=1 a ij ∂ x i x j v, A 2 v := N i,j=1 b ij ∂ x i x j v. It is easy to see that the assumptions (A), (A inf ) are satisfied. Thus if (H1), (H2), (M ) are satisfied like in Corollary 3.1, then for the approximate solutions u n of this system, as defined in the proof of Theorem 1.1, it follows that u n , h n (•, u n ) are bounded in L 1 (Ω) 2 and u n lies in a compact set of L 1 (Ω) 2 . However, the extra condition (Φ) is not satisfied in general so that it is not clear whether h n (•, u n ) is uniformly integrable. Actually, we have to choose here a different strategy with does not seem to be generalized to the abstract setting of Section 2. It consists in looking at the equation satisfied by T k (u n 1 + ηu n 2 ) where T k is a cut-off function. It is then easy to pass to the limit in the nonlinear terms of the truncated approximate system since they are multiplied by T k (u n 1 + ηu n 2 ) which vanishes for u n i large for all i. Therefore a.e. convergence is sufficient to pass to the limit. The difficulty is then to prove precise estimates independent of n, in terms of η, in order to control the other terms as it done in the parabolic case (see [START_REF] Pierre | Weak solutions and supersolutions in L 1 for reaction-diffusion systems[END_REF], [START_REF] Pierre | Global Existence in Reaction-Diffusion Systems with Dissipation of Mass : a Survey[END_REF], [START_REF] Laamri | Global existence for reaction-diffusion systems with nonlinear diffusion and control of mass[END_REF]). This approach through cut-off functions T k is precisely developed in Section 4 to prove Theorem 1.2 and we refer the reader to this other approach without giving more details here. Examples with linear diffusions and Robin-type boundary conditions Now we analyze what happens for systems like (24) when the boundary conditions are different. If we replace the homogeneous boundary conditions of (24) by homogeneous Neumann boundary conditions, then the result is exactly the same. On the other hand, the situation is quite more complicated if the boundary conditions are of different type for each of the u i 's. This is actually connected with the content of the assumption (Φ). For simplicity, we do it only for a 2 × 2-system. Given a nonlinearity h satisfying (H1), (H2), (M ), we consider the following system with general Robin-type boundary conditions.            i = 1, 2, u i ∈ W 1,1 (Ω) + , h i (u 1 , u 2 ) ∈ L 1 (Ω), u i -d i ∆u i = h i (u 1 , u 2 ) + f i , λ i u i + (1 -λ i )∂ ν u i = ψ i on ∂Ω, λ i ∈ [0, 1], d i ∈ (0, +∞), ψ i ∈ [0, +∞), f i ∈ L 1 (Ω) + . (29) Corollary 3.3 Let (f 1 , f 2 ) ∈ L 1 (Ω) + × L 1 (Ω) + . Assume the nonlinearity h satisfies (H1), (H2), (M ). Assume moreover [0 ≤ λ 1 , λ 2 < 1] or [λ 1 = λ 2 = 1 and ψ 1 = ψ 2 = 0]. Then the system (29) has a solution. Remark 3.4 It is known that the case when one of the λ i is equal to 1 is different (see the analysis in [START_REF] Martin | Influence of mixed boundary conditions in some reaction-diffusion systems[END_REF]). 
For instance, finite time blow up may occur for the associated evolution problem when the boundary conditions are u 1 = 1, ∂ ν u 2 = 0 (see [START_REF] Bebernes | Finite time blowup for semilinear reactive-diffusive systems[END_REF], [START_REF] Bebernes | Finite-time blowup for a particular parabolic system[END_REF]). Then the operator A 1 does not satisfy (A inf ) as easily seen by considering (as in [START_REF] Martin | Influence of mixed boundary conditions in some reaction-diffusion systems[END_REF]) the following simple example: u σ (x) := cosh(σx)/ cosh(σ) on Ω := (-1, 1) with σ ∈ (0, +∞). Then u σ ≥ 0 and u σ = 1 on ∂Ω. But -Ω u σ = u σ (-1) -u σ (1) → -∞ as σ → +∞. Here we consider only the cases that directly fall into the scope of Theorem 1.1. A few other cases could be treated directly like λ 1 = λ 2 and positive data ψ 1 , ψ 2 or also λ 1 = 0 = λ 2 and ψ 1 = 0. Proof of Corollary 3.3. Here we define D(B i ) = {u ∈ W 2,1 (Ω); λ i u i + (1 -λ i )∂ ν u i = ψ i on ∂Ω}, B i u := -d i ∆u i . Then the closure A i of B i in L 1 (Ω) satisfies the assumption (A) (see [START_REF] Brezis | Semilinear elliptic equations in L 1[END_REF]). • Let us first assume 0 ≤ λ i < 1, i = 1, 2. Then, for u i ∈ D(B i ) Ω B i u i = - Ω d i ∆u i = - Ω d i ∂ ν u i = d i Ω (1 -λ i ) -1 (λ i u i -ψ i ) ≥ -d i (1 -λ i ) -1 Ω ψ i . This remains valid for A i by closure. Thus the assumption (A inf ) is satisfied. For the condition (Φ), we come back to (26) with the same ϕ, b. Let p q (r) be a standard approximation of sign + (r -k) like p q (r) = 0, ∀ r ∈ (-∞, k -1/q]; p q (r) = q (r -k) + 1, ∀ r ∈ [k -1/q, k]; p q (r) = 1, ∀ r ∈ [k, +∞). (30) Then, for u j ∈ D(B j ), j = 1, 2 and for V : = 2 i,j=1 m ij d j u j Ω p q (ϕ(u) -k)b • M 0 Bu = Ω p q (V )|∇V | 2 - ∂Ω p q (V )∂ ν V ≥ - ∂Ω p q (V )∂ ν V. - ∂Ω p q (V )∂ ν V = ∂Ω p q (V ) 2 i,j=1 m ij d j (1 -λ j ) -1 (λ j u j -ψ j ). If ψ j = 0 for all j = 1, ..., m, then -∂Ω p q (V )∂ ν V ≥ 0 and letting q → +∞, we obtain that B and therefore A satisfies the condition (Φ). When ψ j = 0 for some j, we only obtain Ω sign + (ϕ(u) -k)b • M 0 Bu ≥ - ∂Ω sign + (V -k) m i,j=1 m ij d j (1 -λ j ) -1 ψ j =: -η(V, k). The point is that we can slightly modify the proof of Theorem 1.1 to get the same conclusion with this weaker estimate from below. Indeed, in the present case, we have to add the term η(V n , k) in the inequality [START_REF] Rothe | Global solutions of reaction-diffusion systems[END_REF] where V n = 2 i,j=1 m ij d j u n j . Since η(V n , k) tends to 0 as k → +∞ uniformly in n, the rest of the proof remains unchanged if we choose k ε large enough so that η(V, k ε ) < ε also. • Assume now that λ 1 = λ 2 = 1. Then we make the same choice as in (26) and we see that - Ω sign + (ϕ(u) -k)b • M 0 Au ≥ - ∂Ω sign + ( 2 i,j=1 m ij d j ψ j -k)∂ ν ( 2 i,j=1 m ij d j u j ) ≥ 0, since u j ≥ 0, u j = 0 on ∂Ω imply that ∂ ν u j ≤ 0 on ∂Ω. For the same reason, (A inf ) holds since - Ω ∆u j = - ∂Ω ∂ ν u j ≥ 0. Examples with nonlinear diffusions Let us now consider nonlinear diffusions. • We start with porous media type equations.    For i = 1, ..., m, u i ∈ L 1 (Ω) + , ϕ i (u i ) ∈ W 1,1 0 (Ω), h i (u) ∈ L 1 (Ω), u i -∆ϕ i (u i ) = h i (u) + f i , (31) where ϕ i : [0, +∞) → [0, +∞) is continuous, increasing with ϕ i (0) = 0, lim s→+∞ s -(N -2) + /N ϕ i (s) = +∞. Corollary 3.5 Let (f 1 , • • • , f m ) ∈ (L 1 (Ω) + ) m . Assume the nonlinearity h satisfies (H1), (H2), (M ). Then the system (31) has a solution. Proof. 
We naturally define D(A i ) := {u i ∈ L 1 (Ω) ; ϕ i (u i ) ∈ W 1,1 0 (Ω), ∆ϕ i (u i ) ∈ L 1 (Ω) }, A i u i := -∆ϕ i (u i ). It is classical that the operators A i satisfy the assumptions (A), (A inf ) (see [START_REF] Brezis | Semilinear elliptic equations in L 1[END_REF], [START_REF] Ph | Équations d'évolution dans un espace de Banach et applications[END_REF]). To check that (Φ) is satisfied, we consider r ∈ [0, +∞) m → ϕ(r) := m i,j=1 m ij ϕ j (r j ) and b := (1, ..., 1) t . Then Ω sign + (ϕ(u) -k)b • M 0 Au = - Ω sign + ( m i,j=1 m ij ϕ j (u j ) -k)∆   m i,j=1 m ij ϕ j (u j )   ≥ 0. We then apply Theorem 1.1. • A second example with nonlinear diffusions is the following where p ∈ (1, +∞) :    For i = 1, 2, u i ∈ W 1,p-1 0 (Ω) + , f i ∈ L 1 (Ω), d i ∈ (0, +∞), u i -d i ∆ p u i = h i (u 1 , u 2 ) + f i , (32) where for v ∈ W 1,p-1 (Ω), ∆ p v := ∇ • (|∇v| p-2 ∇v). Corollary 3.6 Let (f 1 , f 2 ) ∈ L 1 (Ω) + ×L 1 (Ω) + . Assume the nonlinearity h satisfies (H1), (H2), (M ). Then the system (32) has a solution. Proof. Here we define for i = 1, 2 D(B i ) := {v ∈ W 1,p 0 (Ω) ∩ L 2 (Ω) ; ∆ p v ∈ L 2 (Ω)}, B i (v) := -d i ∆ p v. And the operators A i are defined as the closure of B i in L 1 (Ω). Then the assumptions (A), (A inf ) are satisfied (see e.g. [START_REF] Herrero | Asymptotic behaviour of the solutions of a strongly nonlinear parabolic problem[END_REF]). Let us prove that (Φ) holds. For p q defined as in (30) and ûi ∈ D(B i ), i = 1, 2, we have - Ω p q (û 1 + û2 )(∆ p û1 + ∆ p û2 ) = Ω (∇û 1 + ∇û 2 )(|∇û 1 | p-2 ∇û 1 + |∇û 2 | p-2 ∇û 2 )p q (û 1 + û2 ). The mapping r ∈ R N → |r| p is convex. Therefore its gradient r ∈ R N → p|r| p-2 r ∈ R N is monotone, which means (r 1 -r 2 ) • (|r 1 | p-2 r 1 -|r 2 | p-2 r 2 ) ≥ 0 , ∀ r 1 , r 2 ∈ R N . We apply this with r 1 := ∇û 1 (x), r 2 := -∇û 2 (x), x ∈ Ω to deduce after integration on Ω: - Ω p q (û 1 + û2 )(∆ p û1 + ∆ p û2 ) ≥ 0. And letting q → +∞ gives -Ω sign + (û 1 + û2 -k)(∆ p û1 + ∆ p û2 ) ≥ 0 and by closure Ω sign + (û 1 + û2 -k)(A 1 û1 + A 2 û2 ) ≥ 0 ∀ u 1 ∈ D(A 1 ), ∀ u 2 ∈ D(A 2 ). ( 33 ) For the condition (M ), we choose b := (1, 1) t , ϕ(r 1 , r 2 ) := c 1 r 1 + c 2 r 2 , c 1 := {(m 11 + m 21 )d 1 } 1/(p-1) , c 2 := {(m 21 + m 22 )d 2 } 1/(p-1) where M 0 := (m ij ) 1≤i,j≤2 . Then for u 1 ∈ D(A 1 ), u 2 ∈ D(A 2 ), u = (u 1 , u 2 ), Ω sign + (ϕ(u) -k)b • M 0 Au = Ω sign + (c 1 u 1 + c 2 u 2 -k) 2 i,j=1 m ij d j A j u j = Ω sign + (c 1 u 1 + c 2 u 2 -k)[A 1 (c 1 u 1 ) + A 2 (c 2 u 2 )] ≥ 0, the last inequality coming from (33) applied with û1 := c 1 u 1 , û2 := c 2 u 2 . This ends the proof of Corollary 3.6. • To end this section, let us comment on the following system which is a model of situations where the operators A i are very different from each other :            p = 2 u 1 ∈ W 1,p-1 0 (Ω) + , u 2 ∈ W 1,1 0 (Ω) + , h 1 , h 2 , f 1 , f 2 ∈ L 1 (Ω), u 1 -∆ p u 1 = h 1 (u 1 , u 2 ) + f 1 , u 2 -∆u 2 = h 2 (u 1 , u 2 ) + f 2 . (34) As a consequence, the compatibility condition (Φ) is not generally satisfied. However, if the nonlinearity h satisfies (H1), (H2), (M ), then for the approximate solution u n defined in Section 2, h n (•, u n ) is bounded in L 1 (Ω) × L 1 (Ω) and therefore u n lies in a compact set of L 1 (Ω) × L 1 (Ω). Thus we may assume that, up to a subsequence, u n converges to some u in L 1 (Ω) × L 1 (Ω) and a.e. so that h n (•, u n ) converges a.e. to h(•, u). Unfortunately we do not know whether we can pass to the limit in the approximate version of system (34). 
And it is not clear how the use of cut-off function T k as in the next section can help. We leave this as an open problem. Proof of Theorem 1.2 For each k = 1, ..., m, we define σ k := min{α k , β k }, γ k := |α k -β k | so that (see [START_REF] Laamri | Global existence of classical solutions for a class of reaction-diffusion systems[END_REF] for the definition of I, J ): h k (u) = (β k -α k ) Π m =1 u σ B(u), B(u) := k 1 Π i∈I u γ i i -k 2 Π j∈J u γ j j . (35) We first solve the approximate system with the bounded data f n i := inf{f i , n}, n ∈ N. Lemma 4.1 There exists a nonnegative solution u n ∈ ∩ p∈[1,∞) W 2,p (Ω) +m of    For i = 1, ..., m, u n i -d i ∆u n i = (β i -α i )Π m k=1 (u n k ) σ k B(u n ) + f n i in Ω, ∂ ν u i = 0 on ∂Ω. (36) Moreover, we have γ j u n i + γ i u n j -∆(γ j d i u n i + γ i d j u n j ) = γ j f n i + γ i f n j , ∀ i ∈ I, j ∈ J, (37) and u n is bounded in L 1+η (Ω) m for some η > 0. Proof. By the abstract Lemma 2.1, for ε ∈ (0, 1), there exists a regular nonnegative solution u ε of        For i = 1, ..., m, u ε i -d i ∆u ε i = (β i -α i ) Π m k=1 (u ε k ) σ k B(u ε ) 1 + ε Π k=1 m (u ε k ) α k + Π m k=1 (u ε k ) β k + f n i in Ω, ∂ ν u ε i = 0 on ∂Ω. (38) Indeed the nonlinearity is here quasi-positive and uniformly bounded by max i γ i /ε. Then by multiplying the equations in i ∈ I, j ∈ J respectively by γ j , γ i , we have (γ j u ε i + γ i u ε j ) -∆(γ j d i u ε i + γ i d j u ε j ) = γ j f n i + γ i f n j . (39) If d := max 1≤k≤m d k , this implies (since u ε i , u ε j ≥ 0) d -1 [γ j d i u ε i + γ i d j u ε j ] -∆(γ j d i u ε i + γ i d j u ε j ) ≤ γ j f n i + γ i f n j , ∂ ν (γ j d i u ε i + γ i d j u ε j ) = 0 on ∂Ω. We deduce that γ j d i u ε i + γ i d j u ε j ≤ d -1 I -∆ -1 (γ j f n i + γ i f n j ). (40) Since the f n i are bounded by n (fixed), using the nonnegativity of u ε , we obtain that the u ε i are bounded in L ∞ (Ω) independently of ε. By standard elliptic regularity results applied to the equation (38) in u i , they are also bounded in ∩ p∈[1,∞) W 2,p (Ω). They are therefore included in a compact set of L ∞ (Ω) as → 0. We can then easily pass to the limit as ε → 0 and obtain the convergence of a subsequence of u ε toward a solution u n of (36). Then the identities (37) follow by passing to the limit in (39). But (40) remains also valid at the limit and implies that u n i , i = 1, ..., m are bounded in L 1 (Ω) independently of n. Next we may rewrite (37), using d := min 1≤k≤m d k : d -1 [γ j d i u n i +γ i d j u n j ]-∆(γ j d i u n i +γ i d j u n j ) = γ j [(d -1 d i -1)u n i +f n i ]+γ i [(d -1 d j -1)u n j +f n j ], ∀ i ∈ I, j ∈ J, (41) together with ∂ ν (γ j d i u n i + γ i d j u n j ) = 0 on ∂Ω. Since the right-hand side is bounded in L 1 (Ω), this implies that γ j d i u n i + γ i d j u n j is bounded in L p (Ω) for all p ∈ [1, N/(N -2) + ) (see e.g. [START_REF] Brezis | Semilinear elliptic equations in L 1[END_REF]). This ends the proof of Lemma 4.1. In order to rewrite the relations (41), we denote g n k := (d -1 d k -1)u n k + f n k ≥ 0, ∀ k = 1, ..., m. (42) When f k log f k is assumed to be in L 1 (Ω) for all k = 1, ..., m, then sup n∈N max 1≤k≤m Ω g n k + | g n k log g n k | < +∞. (43) This is due to the "L LOG L" assumption on the f i 's and on the L 1+η -bound on the u n i stated in Lemma 4.1. 
Next we introduce the solutions G n k of d -1 G n k -∆G n k = g n k in Ω, ∂ ν G n k = 0 on ∂Ω, G n k ≥ 0, k = 1, ..., m, (44) so that the relations (37), (41) may be rewritten γ j d i u n i + γ i d j u n j = γ j G n i + γ i G n j , ∀ i ∈ I, j ∈ J. ( 45 ) Our goal is to prove that the nonlinearity of the system (36) is bounded in L 1 (Ω) m independently of n. It will imply enough compactness on u n to pass to the limit. The following lemma will provide the key estimate. Lemma 4.2 Under the assumptions of Theorem 1.2, there exists θ n ∈ W 2,1 (Ω) +m (unique) such that    γ j d i θ n i + γ i d j θ n j = γ j G n i + γ i G n j , ∀ i ∈ I, j ∈ J, k 1 Π i∈I (θ n i ) γ i = k 2 Π j∈J (θ n j ) γ j (or B(θ n ) = 0), ∂ ν θ n k = 0 on ∂Ω, ∀ k = 1, ..., m. (46) Moreover sup n max 1≤k≤m ∆θ n k L 1 (Ω) < +∞. ( 47 ) We postpone the proof of this lemma and we end the proof of Theorem 1.2. Without loss of generality, we assume that 1 ∈ I. Using the equation in u n 1 given in the approximate system (36), and the fact that B(θ n ) = 0, we have u n 1 -θ n 1 -d 1 ∆(u n 1 -θ n 1 ) + γ 1 Π m k=1 (u n k ) σ k [B(u n ) -B(θ n )] = ρ n , ρ n := f n 1 -θ n 1 + d 1 ∆θ n 1 . (48) Now the main point is that B(u n ), B(θ n ) have the property that B(u n ) = b n (•, u n 1 ), B(θ n ) = b n (•, θ n 1 ), (49) where r → b n (•, r) is a increasing function. More precisely, we deduce from the relations (45), (46) that γ i d j θ n j = γ j G n i + γ i G n j -γ j d 1 θ n 1 , ∀ j ∈ J, γ 1 d i θ n i = γ i d 1 θ n 1 + γ 1 G n i -γ i G n 1 , ∀i ∈ I. γ i d j u n j = γ j G n i + γ i G n j -γ j d 1 u n 1 , ∀ j ∈ J, γ 1 d i u n i = γ i d 1 u n 1 + γ 1 G n i -γ i G n 1 , ∀i ∈ I. Plugging this into B(u n ), B(θ n , recalling that B(v) = k 1 Π i∈I v γ i i -k 2 Π j∈J v γ j j , we indeed obtain (49) by setting b n (x, r) := k 1 Π i∈I (δ i d 1 r + F n i (x)) γ i -k 2 Π j∈J (F n j (x) -δ j d 1 r) γ j , ∀ r ∈ [r n -, r n + ], (50) where we define    δ k := γ k /(γ 1 d k ), ∀ k = 1, ..., m, F n i := d -1 i G n i -δ i G n 1 , ∀ i ∈ I, F n j := δ j G n 1 + d -1 j G n j , ∀ j ∈ J, r n -(x) := max i∈I [F n i (x)] -/(δ i d 1 ), r n + (x) := min j∈J F n j (x)/(δ j d 1 ). ( 51 ) We multiply the equation (48) by sign(u n 1 -θ n 1 ) and we integrate. Then we use the two following properties Ω sign(u n 1 -θ n 1 )[u n 1 -θ n 1 -d 1 ∆(u n 1 -θ n 1 )] ≥ 0, sign(u n 1 -θ n 1 )[B(u n ) -B(θ n )] = sign(u n 1 -θ n 1 )[b n (•, u n 1 ) -b n (•, θ n 1 )] = |b n (•, u n 1 ) -b n (•, θ n 1 )|, to obtain γ 1 Ω Π m k=1 (u n k ) σ k |b n (•, u n 1 ) -b n (•, θ n 1 )| ≤ Ω |ρ n |. Since ρ n is bounded in L 1 (Ω) by Lemma 4.2, and since b n (•, θ n 1 ) ≡ 0, this implies that the nonlinearity of the equation in u n 1 u n 1 -d 1 ∆u n 1 + γ 1 Π m k=1 (u n k ) σ k b n (•, u n 1 ) = f n i in Ω, is bounded in L 1 (Ω) independently of n. This implies that ∆u n 1 is bounded in L 1 (Ω). Going back to (45), it follows that ∆u n k is bounded in L 1 (Ω) m for all k = 1, ..., m as well. Up to a subsequence, we may deduce that u n k converges for all k to some u k in L 1 (Ω) and a.e. and that ∇u n k converges in L 1 (Ω) N to ∇u k . Note also that by Fatou's lemma, Π k (u k ) σ k B(u) ∈ L 1 (Ω). Now to end the passing to the limit, we look at the equation satisfied by T R (v n i ) where v n i := u n i + ε j =i u n j and ε ∈ (0, 1) and the T R are C 2 -cut-off functions satisfying T R (s) = s if s ∈ [0, k -1], T R (s) = 0 if s ≥ k, T R (s) ≤ k, ∀s ∈ [0, +∞), 0 ≤ T R (s) ≤ 1, T R (s) ≤ 0 for all s ≥ 0. 
( 52 ) -∆T R (v n i ) = -T R (v n i )∆v n i -T R (v n i )|∇v n i | 2 ≥ -T R (v n i )∆(u n i + ε j =i u n j ). Coming back to the system (36), we have that for all ψ ∈ C ∞ Ω + , Ω ∇ψ∇T R (v n i ) ≥ Ω ψT R (v n i )    Π k (u n k ) σ k B(u n )[ β i -α i d i + ε j =i β j -α j d j ] + f n i -u n i d i + ε j =i f n j -u n j d j    . We know that, up to a subsequence, u n converges in L 1 (Ω) m and a.e. to some u ∈ W 1,1 (Ω) m . We may pass to the limit along this subsequence in the above inequality. Indeed the nonlinear terms on the right converge a.e. and T R ( v n i )Π k (u n k ) σ k B(u n ) is uniformly bounded, while the f n k , u n k , k = 1, ..., m converge in L 1 (Ω). Moreover, ∇T R (v n i ) = T R (v n i )∇v n i converges in L 1 (Ω) to ∇T R (v i ), v i := u i + ε j =i u j . At the limit, we obtain the same inequality without the superscript n . Then, we let ε go to 0 to obtain Ω ∇ψ∇T R (u i ) ≥ Ω ψT R (u i ) Π k (u k ) σ k B(u) β i -α i d i + f i -u i d i . And now we let R → +∞ to obtain Ω ∇ψ∇u i ≥ Ω ψ Π m k=1 (u k ) σ k B(u) β i -α i d i + f i -u i d i , or equivalently ∀i = 1, ..., m, Ω ψu i + d i ∇ψ∇u i ≥ Ω ψ {Π k (u k ) σ k B(u)(β i -α i ) + f i } . ( 53 ) Throughout the rest of the proof, we will assume that γ j G n i + γ i G n j ≡ 0. By maximum principle applied to the equation (44) defining the G n k , a n ij := inf(γ j G n i + γ i G n j ) > 0. Also for the rest of the proof, choosing n 0 large enough, we fix c ∈ (0, +∞) such that 0 < c < a n ij /[γ j d i + γ i d j ], ∀ (i, j) ∈ I × J, ∀n ≥ n 0 . (55) This definition of c will be used only in STEPS 6 and 7 of the proof. For simplicity, we now drop the superscript ' n ' in the rest of the proof. STEP 2: Existence of θ n satisfying (46). Again we assume (without loss of generality) that 1 ∈ I so that we may use (50), (51). Thus, for all x ∈ Ω, the function r ∈ [r -(x), r + (x)] → b(x, r) is increasing where r -, r + are defined in (51). Moreover b(x, r -(x)) = -k 2 Π j∈J (F j (x) -d 1 δ j r -(x)) γ j ≤ 0, b(x, r + (x)) = k 1 Π i∈I (d 1 δ i r + (x) + F i (x)) γ i ≥ 0. Thus there exists a unique θ 1 (x) ∈ [r -(x), r + (x)] such that b(x, θ 1 (x)) = 0. Since the function (x, r) → b(x, r) is regular, by the implicit function theorem, so is x → θ 1 (x). The function θ k are then uniquely determined from the first line of (46) which we rewrite as: θ i := d 1 δ i θ 1 + F i , ∀ i ∈ I; θ j := F j -d 1 δ j θ 1 , ∀j ∈ J. (56) It remains to prove that ∂ ν θ k = 0 on ∂Ω for all k = 1, ..., m. This will be a consequence of the following computation (see (60) ). k 1 Π i∈I θ γ i i = k 2 Π j∈J θ γ j j , which implies log k 1 + i∈I γ i log θ i = log k 2 + j∈J γ j log θ j . (57) Differentiating this leads to i∈I γ i ∇θ i θ i = j∈J γ j ∇θ j θ j . (58) Inserting (56) in this formula, we obtain ∇θ 1 in terms of the θ k , namely d 1 ∇θ 1 A = j∈J γ j ∇F j θ j - i∈I γ i ∇F i θ i , A := m k=1 γ k δ k θ k . ( 59 ) Since ∇F k • ν = 0 on ∂Ω for all k = 1, ..., m, it follows from this identity that ∇θ 1 • ν = 0 on ∂Ω as well. And by (56) it also follows that ∇θ k • ν = 0 on ∂Ω, ∀ k = 1, ..., m. (60) Differentiating once more (58) gives i∈I γ i ∆θ i θ i - γ i |∇θ i | 2 θ 2 i = j∈J γ j ∆θ j θ j - γ j |∇θ j | 2 θ 2 j , or also, using again (56) and the definition of A in (59), d 1 A∆θ 1 = i∈I - γ i ∆F i θ i + γ i |∇θ i | 2 θ 2 i + j∈J γ j ∆F j θ j - γ j |∇θ j | 2 θ 2 j . ( 61 ) Our goal is to estimate the L 1 -norm of ∆θ 1 . We remark that, if we denote α k := γ k δ k /Aθ k , k = 1, ..., m, then 0 ≤ α k ≤ 1, m k=1 α k = 1. 
But the relation (61) may be rewritten d 1 ∆θ 1 = i∈I α i [- ∆F i δ i + |∇θ i | 2 δ i θ i ] + j∈J α j [ ∆F j δ j - |∇θ j | 2 δ j θ j ]. (62) According to the definition of F k in (51) and to the definition of G k (or more precisely of G n k ) in ( 44) and (43), we know that ∆F k L 1 (Ω) , k = 1, ..., m is bounded in terms of the data (independently of n). Thus i∈I -α i ∆F i δ i + j∈J α j ∆F j δ j L 1 (Ω) ≤ C, (63) where C is independent of n. We also have that Ω ∆θ 1 = ∂Ω ∂ ν θ 1 = 0. Inserting this into (62) and (63) gives Ω j∈J α j |∇θ j | 2 δ j θ j ≤ Ω i∈I α i |∇θ i | 2 δ i θ i + C, (64) where again C does not depend on n. Therefore, it is sufficient to bound the right-hand side of (64) to obtain a bound on ∆θ 1 L 1 (Ω) and this will end the proof of Lemma 4.2 (since an L 1 -bound on ∆θ 1 implies an L 1 -bound on ∆θ k for all k = 1, ..., m). STEP 4 : A bound from below on the θ k . The previous step indicates that one has to bound |∇θ i | 2 /θ i in L 1 (Ω) for all i ∈ I. The identity (59) says that c 2 i 0 Ω |∇F i 0 | 2 a + |F i 0 | ≤ Ω | f i 0 | log | f i 0 | + | f i 0 |[a -1 d -1] + 1, where f i 0 = d -1 F i 0 -∆F i 0 = g i 0 /d 1 -δ i 0 g 1 as indicated in (70). We conclude from the estimate (43) that ∇F i 0 θ i 0 √ θ 1 A L 2 (Ω) ≤ C where C is independent of n. (73) STEP 6: More bounds from below on the θ k . We prove here the two following facts, where the real number c was defined in (55): Ω = Ω I ∪ Ω J , Ω I := {x ∈ Ω ; θ i (x) > c, ∀i ∈ I}, Ω J := {x ∈ Ω ; θ j (x) > c, ∀j ∈ J}, sup i∈I θ i ≥ c 1 > 0, for some c 1 ∈ (0, ∞) independent of n. (74) For the first part of (74), assume by contradiction that there exists x ∈ Ω \ (Ω I ∪ Ω J ). Then there exists (i, j) ∈ I × J such that, for this x: θ i (x) ≤ c and θ j (x) ≤ c, or equivalently, γ j d i θ i (x) ≤ γ j d i c, γ i d j θ j (x) ≤ γ i d j c. Let us add these last two inequalities. Using the first line of (46) and the definition of c in (55), we have γ j G i (x) + γ i G j (x) = γ j d i θ i (x) + γ i d j θ j (x) ≤ [γ j d i + γ i d j ]c < inf{γ j G i + γ i G j }. And this is a contradiction. Whence the first statement of (74). For the second one, let us first note that sup Now let x ∈ Ω J so that, by the previous statement, θ j (x) ≥ c for all j ∈ J. We then use the second line of (46) to obtain k 2 c j∈J γ j ≤ k 2 Π j∈J θ j (x) γ j = k 1 Π i∈I θ i (x) γ i ≤ k 1 [sup i∈I θ i ] i∈I γ i . This implies that sup i∈I θ i ≥ c 1 := min{c, k 2 c j∈J γ j /k 1 [ i∈I γ i ] -1 }. Whence the second statement of (74). STEP 7: End of the proof of Lemma 4.2. This is where we use that I (for instance) has at most two elements. Indeed, let us go back to the expression of ∇θ 1 / √ θ 1 in (65). We already know by STEP 5 that all terms indexed by j ∈ J are bounded in L 2 (Ω). Since F 1 ≡ 0, there is at most one term indexed by i ∈ I, namely none if I = {1}, and only ∇F i 0 /(θ i 0 √ θ 1 A) if I = {1, i 0 }. If I = {1}, it immediately follows that ∇θ 1 / √ θ 1 is bounded in L 2 (Ω) independently of n. If I = {1, i 0 }, then sup i∈I θ i = sup{θ 1 , θ i 0 }. It follows from the second line of (74) in STEP 6 that (72) holds. Consequently, as proved in Remark 4.4, ∇F i 0 /(θ i 0 √ θ 1 A) is bounded in L 2 (Ω) independently of n. Having controlled all terms in (65), we can conclude that ∇θ 1 / √ θ 1 is itself also bounded in L 2 (Ω) independently of n. By symmetry, this also holds for ∇θ i 0 / θ i 0 . This implies that the right-hand side of (64) in STEP 4 is bounded independently of n and ends the proof of Lemma 4.2. 
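The technical lemma stated next relies twice on the Young-type convexity inequality (numbered (75) below). For completeness, it has a one-line proof: for fixed $s \in \mathbb{R}$, the concave function $r \mapsto rs - (r\log r - r)$ on $[0,+\infty)$ attains its maximum at $r = e^s$, where its value is $e^s$, so that
$$\forall\, r \in [0,+\infty),\ \forall\, s \in \mathbb{R}, \qquad rs \;\le\; (r\log r - r) + e^{s}.$$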
Let us now state the following technical lemma which was used in two places in the previous proof. If moreover f ≥ 0, then Ω |∇F | 2 F ≤ Ω f log + f + (d -1 -1)f + de -1 . Proof. For the first inequality of the lemma, let us first remark that, if we set F := a F , then by homogeneity, If f ≥ 0, then F ≥ 0 by maximum principle. We multiply the equation in F by log(F + ε) and we integrate by parts. Then, Ω dF log(F + ε) + |∇F | 2 F + = Ω f log(F + ε). (76) We apply the Young's inequality (75) with r := f (x), s := log(F (x) + ε). Then f log(F + ε) ≤ (f log f -f ) + F + ε, so that, using (76), we deduce [F ≥1] dF log(F + ε) + Ω |∇F | 2 F + ε ≤ Ω f log f -f + F + ε -d [F ≤1] F log(F + ε). STEP 3 : 3 Differentiating B(θ) = 0. The condition B(θ) = 0 means x∈Ω I θ i (x) ≥ inf x∈Ω I θ i (x) ≥ c, ∀i ∈ I. Lemma 4 . 5 45 Let F ∈ W 2,1 (Ω) such that, for some d ∈ (0, +∞)dF -∆F = f, ∂ ν F = 0 on ∂Ω with f, f log |f | ∈ L 1 (Ω).For a ∈ (0, +∞), we haveΩ |∇F | 2 a + |F | ≤ Ω |f | log |f | + |f |[(ad) -1 -1] + 1. Ω |∇F | 2 aΩ|∇ F | 2 1+ 22 + |F | = a | F | .Let us now multiply the equation d F -∆ F = f /a by sign( F ) log(| F | + 1). We obtainΩ | F | log(| F | + 1) + |∇ F | 2 1 + | F | = 1 a Ω f sign( F ) log(| F | + 1) ≤ 1 a Ω |f | log(| F | + 1).Now we use the Young's convexity inequality∀ r ∈ [0, +∞), ∀ s ∈ R, rs ≤ (r log r -r) + e s .(75)We apply it with r := |f (x)|, s := log(| F (x)| + 1) to deduce Ω |f | log(| F | + 1) ≤ Ω |f |[log |f | -1] + | F | + 1. From the equation in F , we also derive d Ω | F | ≤ Ω |f |/a. The first inequality of Lemma 4.5 follows. Now we let ε → 0 and we use[F ≥1] F log F ≥ 0, f log f ≤ f log + f, Ω dF = Ω f, x log x ≥ -e -1 , ∀ x ∈ (0, 1),to deduce the second estimate of Lemma 4.5. 1.2 holds even if |I| and |J| > 2. But it is not clear how to extend our main Lemma 4.2 to the general case. We leave this as an open problem. Theorem 1.2 provides a solution u such that h i (u) ∈ L 1 (Ω). It is easy to write down explicit examples where the nonlinear terms Π k u α k k , Π k u β k k are not separately in L 1 (Ω). For instance, let N > 4 and let us introduce the following function σ where r Remark 1.5 But, on the other hand, by passing directly to the limit as n → +∞ in (37), we have that for i ∈ I, j ∈ J Ω ψ(γ j u i + γ i u j ) + ∇ψ∇[γ j d i u i + γ i d j u j ] = Ω ψ(γ j f i + γ i f j ). This implies that all inequalities in (53) are actually equalities so that, for all i = 1, ..., m, This ends the proof of Theorem 1.2. Proof of Lemma 4.2. STEP 0 : The case m = 2. For simplicity, we drop the index 'n'. We assume I = {1}, J = {2}. Then the system (46) is equivalent to It is easily seen that the equation in θ 1 has a unique regular solution (see STEP 2 for more details) and differentiating this equation gives If γ = 1, we immediately have By (42) and (44), we know that ∆G is bounded in L 1 (Ω) independently of n as soon as the f k are (only) in L 1 (Ω) + for all k. Therefore so is ∆θ 1 . If γ = 1, then integrating (54) and using ∂ ν θ 1 = 0, we obtain Again, the last integral is bounded independently of n if the f k are (only) assumed to be in L 1 (Ω). Therefore so is the first integral. But by positivity, this implies that θ γ-2 (Ω) independently of n. So is ∆θ 1 by going back to (54). And finally the same holds for ∆θ 2 = ∆(G -δθ 1 ). We now come back to the general situation m ≥ 2 and with the LLogL assumptions on the f k . STEP 1 : Let us first treat the trivial case when there exists (i 0 , j 0 ) ∈ I × J such that γ j 0 G n i 0 + γ i 0 G n j 0 ≡ 0 (i.e. 
G^n_{i_0} ≡ 0 ≡ G^n_{j_0}). Then, by the first line of (46) taken with i = i_0, j = j_0, and since θ^n ≥ 0, we get θ^n_{i_0} ≡ 0 ≡ θ^n_{j_0}; in particular B(θ^n) = 0 holds trivially. Using again the first line of (46), we deduce that d_k θ^n_k = G^n_k for all k = 1, ..., m. Thus, the conclusion of the lemma is obvious in this case.
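In more detail, the last deduction reads as follows (a short verification; the uniform bound uses the equation (44) together with the $L^1$ bound on the $g^n_k$ coming from (43)): for every $j \in J$, the first line of (46) with $i = i_0$ gives
$$\gamma_j d_{i_0}\theta^n_{i_0} + \gamma_{i_0} d_j \theta^n_j \;=\; \gamma_j G^n_{i_0} + \gamma_{i_0} G^n_j \;=\; \gamma_{i_0} G^n_j \ \Longrightarrow\ d_j\theta^n_j = G^n_j,$$
and symmetrically, taking $j = j_0$, $d_i\theta^n_i = G^n_i$ for every $i \in I$. Hence $\Delta\theta^n_k = d_k^{-1}\,\Delta G^n_k$ is bounded in $L^1(\Omega)$ independently of $n$, which is exactly (47).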
55,678
[ "831021" ]
[ "75", "247362" ]
00174870
en
[ "math" ]
2024/03/05 22:32:07
2007
https://hal.science/hal-00174870/file/2007-32.pdf
I Hlaváček email: hlavacek@math.cas.cz A A Novotny email: novotny@lncc.br J Soko Lowski email: jan.sokolowski@iecn.u-nancy.fr A Żochowski Energy change in elastic solids due to a spherical or circular cavity, considering uncertain input data In the paper we consider topological derivative of shape functionals for elasticity, which is used to derive the worst and also the maximum range scenarios for behavior of elastic body in case of uncertain material parameters and loading. It turns out that both problems are connected, because the criteria describing this behavior have form of functionals depending on topological derivative of elastic energy. Therefore in the first part we describe the methodology of computing the topological derivative with some new additional conditions for shape functionals depending on stress. For the sake of fulness of presentation the explicit formulas for stress distribution around cavities are provided. Introduction In the paper we consider topological derivative of shape functionals for elasticity, which is used to derive the worst and also the maximum range scenarios for behavior of elastic body in case of uncertain material parameters. It turns out that both problems are connected, because the criteria describing this behavior have form of functionals depending on topological derivative of elastic energy. Therefore in the first part we describe the methodology of computing the topological derivative with some new additional conditions for shape functionals depending on stress. For the sake of fulness of presentation the explicit formulas for stress distribution around cavities are provided. Topological Derivative The topological derivative T Ω of a shape functional J (Ω) is introduced in [START_REF] Soko Lowski | On topological derivative in shape optimization[END_REF] in order to characterize the infinitesimal variation of J (Ω) with respect to the infinitesimal variation of the topology of the domain Ω. The topological derivative allows us to derive the new optimality condition for the shape optimization problem: J (Ω * ) = inf Ω J (Ω) . The optimal domain Ω * is characterized by the first order condition [START_REF] Soko | Introduction to Shape Optimization[END_REF] defined on the boundary of the optimal domain Ω * , dJ(Ω * ; V ) ≥ 0 for all admissible vector fields V , and by the following optimality condition defined in the interior of the domain Ω * : T Ω * (x) ≥ 0 in Ω * . The other use of the topological derivative is connected with approximating the influence of the holes in the domain on the values of integral functionals of solutions, what allows us to solve a class of shape inverse problems. In general terms the notion of the topological derivative (TD) has the following meaning. Assume that Ω ⊂ IR N is an open set and that there is given a shape functional J : Ω \ K → IR for any compact subset K ⊂ Ω. We denote by B ρ (x), x ∈ Ω, the ball of radius ρ > 0, B ρ (x) = {y ∈ IR N | y -x < ρ}, B ρ (x) is the closure of B ρ (x), and assume that there exists the following limit T(x) = lim ρ↓0 J (Ω \ B ρ (x)) -J (Ω) |B ρ (x)| which can be defined in an equivalent way by T(x) = lim ρ↓0 J (Ω \ B ρ (x)) -J (Ω) ρ N The function T(x), x ∈ Ω, is called the topological derivative of J (Ω), and provides the information on the infinitesimal variation of the shape functional J if a small hole is created at x ∈ Ω. This definition is suitable for Neumann-type boundary conditions on ∂B ρ . In several cases this characterization is constructive, i.e. 
TD can be evaluated for shape functionals depending on solutions of partial differential equations defined in the domain Ω. For instance, TD may be computed for the 3D elliptic Laplace type equation, as well as for extremal values of cost functionals for a class of optimal control problems. All these examples have one common feature: the expression for TD may be calculated in the closed functional form. As we shall see below, the 3D elasticity case is more difficult, since it requires evaluation of integrals on the unit sphere with the integrands which can be computed at any point, but the resulting functions have no explicit functional form. In the particular case of energy functional we obtain the closed formula. In section 5 we compare the results of the present paper with the formulae for 2D elasticity. The main contribution of the present paper is the procedure for computations of the topological derivatives of shape functionals depending on the solutions of 3D elasticity systems. Therefore it constitutes an essential extension of the results given in [START_REF] Soko Lowski | On topological derivative in shape optimization[END_REF] for the 2D case. Problem setting for elasticity systems We introduce elasticity system in the form convenient for the evaluation of topological derivatives. Let us consider the elasticity equations in IR N , where N = 2 for 2D and N = 3 for 3D,    div σ(u) = 0 in Ω u = g on Γ D σ(u)n = T on Γ N (1) and the same system in the domain with the spherical cavity B ρ (x 0 ) ⊂ Ω centered at x 0 ∈ Ω, Ω ρ = Ω \ B ρ (x 0 ),        div σ ρ (u ρ ) = 0 in Ω ρ u ρ = g on Γ D σ ρ (u ρ )n = T on Γ N σ ρ (u ρ )n = 0 on ∂B ρ (x 0 ) (2) where n is the unit outward normal vector on ∂Ω ρ = ∂Ω ∪ ∂B ρ (x 0 ). Assuming that 0 ∈ Ω, we can consider the case x 0 = 0. Here u and u ρ denote the displacement vectors fields, g is a given displacement on the fixed part Γ D of the boundary, T is a traction prescribed on the loaded part Γ N of the boundary. In addition, σ is the Cauchy stress tensor given, for ξ = u (eq. 1) or ξ = u ρ (eq. 2), by σ(ξ) = D∇ s ξ , (3) where ∇ s (ξ) is the symmetric part of the gradient of vector field ξ, that is ∇ s (ξ) = 1 2 ∇ξ + ∇ξ T , (4) and D is the elasticity tensor, D = 2µII + λ (I ⊗ I) , (5) with µ = E 2(1 + ν) , λ = νE (1 + ν)(1 -2ν) and λ = λ * = νE 1 -ν 2 (6) being E the Young's modulus, ν the Poisson's ratio and λ * the particular case for plane stress. In addition, I and II respectively are the second and fourth order identity tensors. Thus, the inverse of D is D -1 = 1 2µ II - λ 2µ + N λ (I ⊗ I) , The first shape functional under consideration depends on the displacement field, J u (ρ) = Ωρ F (u ρ ) dΩ , F (u ρ ) = (Hu ρ • u ρ ) p , ( 7 ) where F is a C 2 function, p ≥ 2 is an integer. It is also useful for further applications in the framework of elasticity to introduce the yield functional of the form J σ (ρ) = Ωρ Sσ(u ρ ) • σ(u ρ ) dΩ , (8) where S is an isotropic fourth-order tensor. Isotropicity means here, that S may be expressed as follows S = 2mII + l (I ⊗ I) , where l, m are real constants. Their values may vary for particular yield criteria. The following assumption assures, that J u , J σ are well defined for solutions of the elasticity system. (A) The domain Ω has piecewise smooth boundary, which may have reentrant corners with α < 2π created by the intersection of two planes. In addition, g, T must be compatible with u ∈ H 1 (Ω; IR N ). 
The interior regularity of u in Ω is determined by the regularity of the right hand side of the elasticity system. For simplicity the following notation is used for functional spaces, H 1 g (Ω ρ ) = {ψ ∈ [H 1 (Ω ρ )] N | ψ = g on Γ D }, H 1 Γ D (Ω ρ ) = {ψ ∈ [H 1 (Ω ρ )] N | ψ = 0 on Γ D }, H 1 Γ D (Ω) = {ψ ∈ [H 1 (Ω)] N | ψ = 0 on Γ D } , here we use the convention that eg., in our notation H 1 g (Ω ρ ) stands for the Sobolev space of vector functions [H 1 g (Ω ρ )] N . The weak solutions to the elasticity systems are defined in the standard way. Find u ρ ∈ H 1 g (Ω ρ ) such that, for every φ ∈ H 1 Γ D (Ω ρ ), Ωρ D∇ s u ρ • ∇ s φ dΩ = Γ N T • φ dS (9) We introduce the adjoint state equations in order to simplify the form of shape derivatives of functionals J u , J σ . For the functional J u they take on the form: Find w ρ ∈ H 1 Γ D (Ω ρ ) such that, for every φ ∈ H 1 Γ D (Ω ρ ), Ωρ D∇ s w ρ • ∇ s φ dΩ = - Ωρ F u (u ρ ) • φ dΩ, (10) whose Euler-Lagrange equation reads        div σ ρ (w ρ ) = F u (u ρ ) in Ω ρ w ρ = 0 on Γ D σ ρ (w ρ )n = 0 on Γ N σ ρ (w ρ )n = 0 on ∂B ρ (x 0 ) , ( 11 ) while v ρ ∈ H 1 Γ D (Ω ρ ) is the adjoint state for J σ and satisfies for all test functions φ ∈ H 1 Γ D (Ω) the following integral identity: Ωρ D∇ s v ρ • ∇ s φ dΩ = -2 Ωρ DSσ(u ρ ) • ∇ s φ dΩ. ( 12 ) which associated Euler-Lagrange equation becomes        div σ ρ (v ρ ) = -2div (DSσ ρ (u ρ )) in Ω ρ v ρ = 0 on Γ D σ ρ (v ρ )n = -2DSσ ρ (u ρ )n on Γ N σ ρ (v ρ )n = -2DSσ ρ (u ρ )n on S ρ (x 0 ) = ∂B ρ (x 0 ) . ( 13 ) Remark 1 We observe that DS can be written as DS = 4µmII + γ (I ⊗ I) (14) where γ = λlN + 2 (λm + µl) (15) Thus, when γ = 0, the boundary condition on ∂B ρ (x 0 ) in eq. ( 13) becomes homogeneous and the yield criteria must satisfy the constraint m l = - µ λ + N 2 , ( 16 ) which is naturally satisfied for the energy shape functional, for instance. In fact, in this particular case, tensor S is given by S = 1 2 D -1 ⇒ γ = 0 and 2m + l = 1 2E , ( 17 ) which implies that the adjoint solution associated to J σ can be explicitly obtained such that v ρ = -(u ρ -g). Main result We shall define the topological derivative of the functionals J u , J σ at the point x 0 as: T J u (x 0 ) = lim ρ↓0 dJ u (ρ) d(|B ρ (x 0 )|) , (18) T J σ (x 0 ) = lim ρ↓0 dJ σ (ρ) d(|B ρ (x 0 )|) . ( 19 ) Now we may formulate the following result, giving the constructive method for computing the topological derivatives: Theorem 1 Assume that (A) is satisfied, then T J u (x 0 ) = - 1 2(N -1)π [ 2(N -1)πF (u) + Ψ(D -1 ; σ(u), σ(w))] x=x 0 , (20) T J σ (x 0 ) = - 1 2(N -1)π [ Ψ(S; σ(u), σ(u)) + Ψ(D -1 ; σ(u), σ(v))] x=x 0 , (21) where w, v ∈ H 1 Γ D (Ω) are adjoint variables satisfying the integral identities ( 10) and ( 12) for ρ = 0, i.e. in the whole domain Ω instead of Ω ρ , that is Ω D∇ s w • ∇ s φ dΩ = - Ω F u (u) • φ dΩ. ( 22 ) Ω D∇ s v • ∇ s φ dΩ = -2 Ω DSσ(u) • ∇ s φ dΩ. ( 23 ) for all test functions φ ∈ H 1 Γ D (Ω). Some of the terms in (20), (21) require explanation. 
The function Ψ is defined as an integral over the unit sphere S 1 (0) = {x ∈ IR N | x = 1} of the following functions: Ψ(S; σ(u(x 0 )), σ(u(x 0 ))) = S 1 (0) Sσ ∞ (u(x 0 ); x) • σ ∞ (u(x 0 ); x) dS (24) Ψ(D -1 ; σ(u(x 0 )), σ(v(x 0 ))) = S 1 (0) σ ∞ (u(x 0 ); x) • D -1 σ ∞ (v(x 0 ); x) dS (25) Ψ(D -1 ; σ(u(x 0 )), σ(w(x 0 ))) = S 1 (0) σ ∞ (u(x 0 ); x) • D -1 σ ∞ (w(x 0 ); x) dS (26) The symbol σ ∞ (u(x 0 ); x) denotes the stresses for the solution of the elasticity system (2) in the infinite domain IR N \ B 1 (0) with the following boundary conditions: • no tractions are applied on the surface of the ball, S 1 (0) = ∂B 1 (0); • the stresses σ ∞ (u(x 0 ); x) tend to the constant value σ(u(x 0 )) as x → ∞. In this notation σ ∞ (u(x 0 ); x) is a function of space variables depending on the functional parameter u(x 0 ), while σ(u(x 0 )) is a value of the stress tensor computed in the point x 0 for the solution u. The dependence between them results from the boundary condition at infinity listed above. The method for obtaining such solutions (and u ∞ ), based on [START_REF] Kachanov | Handbook of Elasticity Solutions[END_REF], is discussed in the next section. In order to derive the above formulae (20), (21) we calculate the derivatives of the functional J u (ρ) with respect to the parameter ρ, which determines the size of the hole B ρ (x 0 ), by using the material derivative method [START_REF] Soko | Introduction to Shape Optimization[END_REF]. Then we pass to the limit ρ ↓ 0 using the asymptotic expansions of u ρ with respect to ρ. For the functional J u the shape derivative with respect to ρ is given by J u (ρ) = Ωρ F u (u ρ ) • u ρ dΩ - Sρ(x 0 ) F (u ρ ) dS, (27) and in the same way for the state equation: Ωρ D∇ s u ρ • ∇ s φ dΩ - Sρ(x 0 ) D∇ s u ρ • ∇ s φ dS = 0, ( 28 ) where u ρ is the shape derivative, i.e. the derivative of u ρ with respect to ρ, [START_REF] Soko | Introduction to Shape Optimization[END_REF]. After substitution of the test functions φ = w ρ in the derivative of the state equation, φ = u ρ in the adjoint equation, we get J u (ρ) = - Sρ(x 0 ) [F (u ρ ) + D∇ s u ρ • ∇ s w ρ ] dS = - Sρ(x 0 ) [F (u ρ ) + σ(u ρ ) • D -1 σ(w ρ )] dS, (29) and similarly for J σ J σ (ρ) = - Sρ(x 0 ) [Sσ(u ρ ) • σ(u ρ ) + D∇ s u ρ • ∇ s v ρ ] dS = - Sρ(x 0 ) [Sσ(u ρ ) • σ(u ρ ) + σ(u ρ ) • D -1 σ(v ρ )] dS. (30) Observe, that both matrices D -1 and S are isotropic, and therefore the corresponding bilinear forms in terms of stresses are invariant with respect to the rotations of the coordinate system. Now we exploit the fact, that dJ u (ρ) d(|B ρ (x 0 )|) = 1 2(N -1)πρ N -1 dJ u dρ , and use the existence of the asymptotic expansions for u ρ in the neighborhood of B ρ (x 0 ), namely u ρ = u(x 0 ) + u ∞ + O(ρ 2 ). (31) In addition, u ∞ is proportional to ρ, u ∞ IR N = O(ρ), on the surface S ρ (x 0 ) of the ball. The expansion of σ(u ρ ) corresponding to (31) has the form σ(u ρ ) = σ ∞ (u(x 0 ); x) + O(ρ). (32) It may be proved, that w ρ and v ρ have similar expansions. Using the formulae (31),(32) we may justify the following passages to the limit: lim ρ↓0 1 ρ N -1 Sρ(x 0 ) σ(u ρ ) • D -1 σ(v ρ ) dS = Ψ(D -1 ; σ(u(x 0 )), σ(v(x 0 ))), lim ρ↓0 1 ρ N -1 Sρ(x 0 ) σ(u ρ ) • D -1 σ(w ρ ) dS = Ψ(D -1 ; σ(u(x 0 )), σ(w(x 0 ))), lim ρ↓0 1 ρ N -1 Sρ(x 0 ) Sσ(u ρ ) • σ(u ρ ) dS = Ψ(S; σ(u(x 0 )), σ(u(x 0 ))), lim ρ↓0 1 ρ N -1 Sρ(x 0 ) F (u ρ ) dS = 2(N -1)πF (u(x 0 )). This completes the proof of the theorem. 
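In the 3D case the sphere integrals Ψ introduced above generally have no closed form (see the remark that follows), but they are readily approximated by quadrature on the unit sphere, since the integrand can be evaluated at any point. A minimal sketch, assuming the integrand (for instance x ↦ Sσ∞(u(x₀); x) · σ∞(u(x₀); x), with σ∞ evaluated from the cavity solutions recalled in the Appendix) is supplied as a callable psi; the function name and the product quadrature rule are illustrative choices, not the authors' implementation:

```python
import numpy as np

def sphere_quadrature(psi, n_theta=64, n_phi=128):
    """Approximate the integral of psi over the unit sphere S_1(0) by a
    product rule: midpoints in the azimuth phi, and exact band weights
    cos(theta-) - cos(theta+) in the polar angle, so that constants are
    integrated exactly (dS = sin(theta) dtheta dphi)."""
    h = np.pi / n_theta
    dphi = 2.0 * np.pi / n_phi
    total = 0.0
    for k in range(n_theta):
        t = (k + 0.5) * h
        band = np.cos(t - 0.5 * h) - np.cos(t + 0.5 * h)  # integral of sin over the theta-band
        for p in (np.arange(n_phi) + 0.5) * dphi:
            x = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
            total += psi(x) * band * dphi
    return total

# sanity check: integrating 1 recovers the area of the unit sphere, 4*pi
assert abs(sphere_quadrature(lambda x: 1.0) - 4.0 * np.pi) < 1e-9
```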
The main difficulty lies in the computation of the values of the functions denoted above as Ψ(S; σ(u(x 0 )), σ(u(x 0 ))), Ψ(D -1 ; σ(u(x 0 )), σ(w(x 0 ))) and Ψ(D -1 ; σ(u(x 0 )), σ(v(x 0 ))), which, in general, is difficult to obtain in the closed form, in contrast with the two dimensional case. Therefore we can approximate them using numerical quadrature. It is possible, because we may calculate the values of integrands at any point on the sphere. This makes the computations more involved, but does not increase the numerical complexity in comparison to evaluating single closed form expression. Remark 2 The tensor S in the definition of J σ may, in fact, be arbitrary, not only isotropic. However, it is difficult to imagine such a need for the isotropic material. Anyway, in the general case, we would have to transform S according to the known rules for the fourth order tensor, connected with the rotation of the reference frame. Topological derivatives in 3D elasticity The shape functionals J u , J σ are defined in the same way as presented in section 2.2 with the exception, that J σ is now the energy stored in a 3D elastic body (see remark 1). The weak solutions to the elasticity system as well as adjoint equations are defined also analogously to the section 2.2. Then, considering the expansions presented in Appendix A.2, we may state the following result [START_REF] Novotny | Topological Sensitivity Analysis for three-dimensional linear elasticity problems[END_REF] (see also [START_REF] Garreau | The Topological Asymptotic for PDE Systems: The Elasticity Case[END_REF]): Theorem 2 The expressions for the topological derivatives of the functionals J u , J σ have the form T J u (x 0 ) = -F (u) + 3 2E 1 -ν 7 -5ν (10(1 + ν)σ(u) • σ(w) -(1 + 5ν)trσ(u)trσ(w)) x=x 0 , (33) T J σ (x 0 ) = 3 4E 1 -ν 7 -5ν 10(1 + ν)σ(u) • σ(u) -(1 + 5ν)(trσ(u)) 2 x=x 0 . ( 34 ) Topological derivatives in 2D elasticity For the convenience of the reader we recall here the results derived in [START_REF] Soko Lowski | On topological derivative in shape optimization[END_REF] for the 2D case. The principal stresses associated with the displacement field u are denoted by σ I (u), σ II (u), the trace of the stress tensor σ(u) is denoted by trσ(u) = σ I (u) + σ II (u). The shape functionals J u , J σ are defined in the same way as presented in section 2.2, with the tensor S isotropic (that is similar to D). The weak solutions to the elasticity system as well as adjoint equations are defined also analogously to the section 2.2. Then, from the expansions presented in Appendix A.1, we may formulate the following result [START_REF] Soko Lowski | On topological derivative in shape optimization[END_REF]: Theorem 3 The expressions for the topological derivatives of the functionals J u ,J σ have the form T J u (x 0 ) = -F (u) + 1 E (a u a w + 2b u b w cos 2δ) x=x 0 = -F (u) + 1 E (4σ(u) • σ(w) -trσ(u)trσ(w)) x=x 0 (35) T J σ (x 0 ) = -η(a 2 u + 2b 2 u ) + 1 E (a u a v + 2b u b v cos 2δ) x=x 0 = -η(4σ(u) • σ(u) -(trσ(u)) 2 ) + 1 E (4σ(u) • σ(v) -trσ(u)trσ(v)) x=x 0 (36) Some of the terms in ( 35), (36) require explanation. According to eq. ( 15) for N = 2, constant η is given by η = l + 2 m + γ ν E . (37) Furthermore, we denote a u = σ I (u) + σ II (u), b u = σ I (u) -σ II (u), a w = σ I (w) + σ II (w), b w = σ I (w) -σ II (w), a v = σ I (v) + σ II (v), b v = σ I (v) -σ II (v). 
(38) Finally, the angle δ denotes the angle between principal stress directions for displacement fields u and w in (35), and for displacement fields u and v in (36). Remark 3 For the energy stored in a 2D elastic body, tensor S is given by eq. (17), γ = 0 and η = 1/(2E). Thus, since v = -(u -g), we obtain the following well-known result T J σ (x 0 ) = 1 2E 4σ(u) • σ(u) -(trσ(u)) 2 x=x 0 (39) Compare these expressions to the 3D case. Their simplicity comes from the fact, that on the plane the rotation of one coordinate system with respect to the other is defined by the single value of the angle (here δ). This is a purely 2D phenomenon and it makes the explicit computations possible. Uncertain input data In reality, the values of input data(loading, material parameters) are guaranteed only in some given intervals. One of the simplest remedy is to apply the worst scenario or maximum range scenario method [START_REF] Hlaváček | Uncertain Input Data Problems and the Worst Scenario Method[END_REF]. In what follows, we present the methods for the traction problem (1) with ∂Ω = Γ N and the criterion corresponding to the topological derivatives (34) or (39), respectively. Traction problem in 3D-elasticity Let us consider a bounded domain Ω ⊂ R 3 with Lipschitz boundary ∂Ω ≡ Γ, occupied by a homogeneous and isotropic elastic body. Let the body be loaded by surface forces T ∈ [L ∞ (Γ)] 3 and the body forces be zero. We introduce sets of admissible uncertain input data as follows : (i) Lamé coefficients λ ∈ U λ ad = λ, λ , 0 ≤ λ < λ < ∞, µ ∈ U µ ad = µ, µ , 0 < µ < µ < ∞; (ii) surface loading forces T i ∈ U T i ad = τ ∈ L ∞ (Γ) : τ Γp ∈ C (0),1 Γ p , |τ | ≤ C 1 , |∂τ /∂s j | ≤ C 2 a.e. on Γ, j = 1, 2 , where Γ = P p=1 Γ p , Γ k ∩ Γ m = ∅ for k = m, i = 1, 2, 3, s j are local coordinates of the surface Γ p and C 1 , C 2 are given constants, T ≡ (T 1 , T 2 , T 3 ) ∈ U T ad = {T i ∈ U T i ad , i = 1, 2, 3 and Γ T dS = 0, Γ x × T dS = 0}. Finally, we define U ad = U λ ad × U µ ad × U T ad and A ≡ {A, T }, A = {λ, µ}. We will consider the following criterion-functional based on the topological derivative associated to the energy shape functional (34) Φ(A, σ) = σ T H(A)σ where σ ≡ σ(y) is the stress tensor of a full body at the center y ∈ Ω of a spherical cavity, H(A) = 3(1 -ν) 4E (Λ 1 + 10(1 + ν) 7 -5ν Λ 2 ), (40) Λ 1 = 1 3 I ⊗I, Λ 2 = II -Λ 1 , ν is the corresponding Poisson's constant and E the Young's modulus. Note that ν = λ 2(λ+µ) , E = µ(3λ+2µ) λ+µ . Continuous dependence of the criterion on the input data Our main result of the present section is given by the following theorem Theorem 4 Let A n ∈ U ad , A n → A in R 2 × [L ∞ (Γ)] 3 as n → ∞. Then Φ(A n , σ(A n )) → Φ(A, σ(A)). Proof. The estimate (47) follows from the Cauchy-Schwartz inequality and the boundedness of sets U λ ad , U µ ad . To justify (48), we write a(A; u, u) ≥ 2µ Ω ε ij (u)ε ij (u)dx and use the Korn's inequality Ω ε ij (u)ε ij (u)dx ≥ c u 2 1,Ω ∀u ∈ V 0 (see e.g. [7]-Lemma 7.3.3). Lemma 2 Let λ n ∈ U λ ad , µ n ∈ U µ ad , λ n → λ and µ n → µ as n → ∞. Then ν n → ν and T * ij (ν n ) → T * ij (ν) in [L 2 (Γ)] 3 . ( 49 ) Proof. Since λ n + µ n ≥ λ + µ > 0, ν n = λ n 2(λ n + µ n ) → λ 2(λ + µ) = ν. We infer that s ij k (ν n ) → s ij k (ν) in L 2 (Γ), k = 1, 2, 3. (50) Indeed, we have λ n /κ n → λ/κ, µ n /κ n → µ/κ and u ij0 (ν n ) -u ij0 (ν) H 1 (Γ) ≤ C|ν n -ν| → 0, so that (50) holds. Since the field w ij is independent of A, we arrive at (49). 
Lemma 3 Let λ n ∈ U λ ad , µ n ∈ U µ ad , λ n → λ and µ n → µ as n → ∞ and u * ij (A n ) ∈ V 0 . Then u * ij (A n ) → u * ij (A) in [H 1 (Ω)] 3 . Proof. For brevity, let us denote T * n = T * ij (ν n ), T * = T * ij (ν), u * n = u * ij (A n ), u * = u * ij (A). By definition, we have a(A n ; u * n , v) = Γ T * n v dS (51) a(A; u * , v) = Γ T * v dS (52) for all v ∈ [H 1 (Ω)] 3 . Let us consider also solutions ûn ∈ V 0 of the following problem a(A; ûn , v) = Γ T * n v dS ∀v ∈ [H 1 (Ω)] 3 . (53) From ( 53) and (52) we obtain a(A; ûn -u * , v) = Γ (T * n -T * )v dS. Proof. We may write |σ kl (A n ) -σ kl (A)| = | Γ T n • (c klij (A n )G ij y (A n ))dS -Γ T • (c klij (A)G ij y (A))dS| ≤ | Γ T n • ((c klij (A n ) -c klij (A))G ij y (A n ))dS| +| Γ T n • (c klij (A)(G ij y (A n ) -G ij y (A)))dS| +| Γ (T n -T ) • (c klij (A)G ij y (A))dS| ≡ I 1 + I 2 + I 3 , where I 1 ≤ Γ C A n -A 0,∞ |G ij y (A n )|dS → 0 and I 2 ≤ Γ C|G ij y (A n ) -G ij y (A)|dS → 0 due to Proposition 1 and the boundedness of T n in [L ∞ (Γ)] 3 . I 3 tend to zero by assumption. Proof of Theorem 1. We have |Φ(A n , σ(A n )) -Φ(A, σ(A))| ≤ |σ(A n ) T H(A n )(σ(A n ) -σ(A))| +|σ(A n ) T (H(A n ) -H(A))σ(A)| +|(σ(A n ) T -σ(A) T )H(A)σ(A)| = J 1 + J 2 + J 3 . By Proposition 2 we infer that J 1 and J 3 tend to zero. We also use the continuity of the function A → H(A), which follows from Lemma 2 and the convergence E n = 2µ n (1 + ν n ) → 2µ(1 + ν) = E ≥ 2µ > 0. As a consequence, J 2 tends to zero, as well. The worst scenario and the maximum range scenario Suppose that we wish to be "on the safe side", taking uncertain input data A and T in consideration. Then we solve either the worst scenario problem A 0 = arg max A∈U ad Φ(A, σ(A)) (59) or the maximum range scenario problem: find (i) A 0 according to (59) and (ii) A 0 = arg min A∈U ad Φ(A, σ(A)). (60) In other words, we seek exact upper and lower bounds of the criterion functional (see the monograph [START_REF] Hlaváček | Uncertain Input Data Problems and the Worst Scenario Method[END_REF] for applications of problem (60) within the frame of the fuzzy set theory). Theorem 5 Problems (59) and ( 60) have at least one solution. Proof. The set U ad is compact in R 2 × ( Traction problem in 2D-elasticity Let us consider a plane elasticity, i.e., either the case of plane strain or that of plane stress. It is well-known, that both cases have the same stress-strain relations, where only the coefficient λ varies It is either λ or λ , see [START_REF] Novotny | Topological Sensitivity Analysis for three-dimensional linear elasticity problems[END_REF]. λ = Eν (1 + ν)(1 -2ν) for plane strain, whereas λ = λ * = Eν 1 -ν 2 for plane stress. Let us consider a bounded domain Ω ⊂ R 2 with a Lipschitz boundary ∂Ω ≡ Γ, occupied by a homogeneous and isotropic elastic body, loaded only by surface loads T ∈ [L ∞ (Γ)] 2 . Assume that λ ∈ U λ ad , µ ∈ U µ ad and T i ∈ U T i ad , i = 1, 2, with U λ ad , U µ ad and U T i ad defined in section 1. Moreover, assume that the forces T are in equilibrium, i.e. Γ T dS = 0, Γ (x 1 T 2 -x 2 T 1 )dS = 0. ( 61 ) We define U T ad = {T ≡ (T 1 , T 2 ) : T i ∈ U T i ad , i = 1, 2, T satisfy (61)}, U ad = U λ ad × U µ ad × U T ad , A = {λ, µ}, A = {A, T } and introduce the criterion-functional based on the topological derivative associated to the energy shape functional (39) Φ(A, σ) = σ T H(A)σ, (62) where σ ≡ σ(y) is the stress tensor of a full body at the center y ∈ Ω of a circular cavity, and H(A) = (K + µ) 2Kµ (Λ 1 + 2Λ 2 ), (63) where K = λ + µ is the bulk modulus. 
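To make the extremal problems (59)-(60) concrete: since the admissible set is a compact box in the material parameters, the bounds can be approximated by a direct search once the stress at the cavity center is available. A minimal sketch for the 3D criterion (40), where in practice the stress σ = σ(A) would be recomputed for each candidate (λ, μ) by an elasticity solve (this dependence is exactly what Theorem 4 controls); here σ is held fixed purely for illustration, and the interval bounds are invented:

```python
import numpy as np
from itertools import product

def phi_3d(lam, mu, sigma):
    """Criterion (40): Phi(A, sigma) = sigma^T H(A) sigma, with
    H = 3(1-nu)/(4E) (Lambda1 + 10(1+nu)/(7-5nu) Lambda2),
    Lambda1 = (1/3) IxI, Lambda2 = II - Lambda1."""
    nu = lam / (2.0 * (lam + mu))                  # Poisson's ratio
    E = mu * (3.0 * lam + 2.0 * mu) / (lam + mu)   # Young's modulus
    tr2 = np.trace(sigma) ** 2                     # (tr sigma)^2
    ss = np.tensordot(sigma, sigma)                # sigma : sigma
    coef = 3.0 * (1.0 - nu) / (4.0 * E)
    return coef * (tr2 / 3.0 + 10.0 * (1.0 + nu) / (7.0 - 5.0 * nu) * (ss - tr2 / 3.0))

sigma = np.diag([100.0, 40.0, -20.0])              # stress at y (MPa), illustrative
lams = np.linspace(55e3, 65e3, 21)                 # U^lambda_ad (MPa), invented bounds
mus = np.linspace(40e3, 45e3, 21)                  # U^mu_ad (MPa), invented bounds
values = {(l, m): phi_3d(l, m, sigma) for l, m in product(lams, mus)}
worst = max(values, key=values.get)                # A0 of the worst scenario (59)
low = min(values, key=values.get)                  # lower bound for the range in (60)
print("worst:", worst, values[worst], " min:", low, values[low])
```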
Continuous dependence of the criterion on the input data The main result of the present section will be represented by an analogue of Theorem 1 as follows. Theorem 6 Let A n ∈ U ad , A n → A in R 2 × [L ∞ (Γ)] 2 as n → ∞. Then Φ(A n , σ(A n )) → Φ(A, σ(A)). For the proof we shall employ the following integral representation formula, analogous to (41), namely ∂u i ∂y j (y) = Γ T • G ij y dS, i, j ∈ {1, 2}. (64) By Reciprocity theorem, we obtain Γ (T • u * ij -T * ij • u)dS = 0. (71) Then ( 69) and (71) yield ∂u i ∂y j (y) = Γ T • (u * ij -u ij )dS + Γ u • (s ij -T * ij )dS. The last integral vanishes by virtue of normalization conditions, since s ij -T * ij = -w ij . As a consequence, we arrive at the formula (64), where G ij y = u * ij -u ij . (72) Now we may go on in proving Theorem 3 as in the prooof of Theorem 1. We establish an analogue of Lemma 1, where the subspace V 0 is defined by V 0 = v ∈ [H 1 (Ω)] 2 : Γ v dS = 0, Γ (x 1 v 2 -x 2 v 1 )dS = 0 . For the Korn's inequality in V 0 , see e.g. Section 10.2.2 in [START_REF] Nečas | Mathematical Theory of Elastic and Elasto-plastic Bodies : An Introduction[END_REF]. As far as an analogue of Lemma 2 is concerned, we use the formula ν = λ 2(λ + µ) for plane strain and ν = λ * λ * + 2µ for plane stress. It is readily seen that s ij ≡ s ij (ν), i.e., it does not depend on the modulus E. Then we can prove that λ * n → λ * , ν n → ν and s ij (ν n ) → s ij (ν) in L 2 (Γ)] 2 as ν n → ν , since λ n (K n + 2µ n )/κ 0n → λ(K + 2µ)/κ 0 (73) and λ n K n /κ 0n → λK/κ 0 for K n = λ n + µ n , λ n ∈ U λ ad , µ n ∈ U µ ad , A n → A. The field w ij is independent of A, so that we arrive at T * ij (ν n ) → T * ij (ν) in L 2 (Γ)] 2 . An analogue of Lemma 3 can be proved in the same way as Lemma 3. We infer that u * ij (A n ) → u * ij (A) in [H 1 (Ω)] 2 . ( 74 ) Using again (73), we observe that u ij (A n ) → u ij (A) in L 2 (Γ)] 2 . (75) Combining (72) with (74), the Trace theorem and (75), we obtain G ij y (A n ) → G ij y (A) in L 2 (Γ)] 2 . ( 76 ) Theorem 3 follows in a way parallel to the proof of Theorem 1, from (76), the uniform convergence of surface loads on Γ and the continuity of the function A → H(A). The worst scenario and the maximum range scenario Both the worst scenario problem (59) and the maximum range scenario problem (60) have at least one solution. This assertion is a consequence of Theorem 3 and the compactness of the set U ad in R 2 × 2 i=1 P p=1 C(Γ p ). • for i = 1 σ rr 1 (ξ ρ ) = σ I (ξ) 14 -10ν ρ 3 r 3 - ρ 5 r 5 + 14 -10ν -10(5 -ν) ρ 3 r 3 + 36 ρ 5 r 5 sin 2 θ sin 2 ϕ , (84) σ rθ 1 (ξ ρ ) = σ I (ξ) 14 -10ν -5ν + 5(1 + ν) ρ 3 r 3 -12 ρ 5 r 5 sin 2θ sin 2 ϕ , (85) σ rϕ 1 (ξ ρ ) = σ I (ξ) 14 -10ν -5ν + 5(1 + ν) ρ 3 r 3 -12 ρ 5 r 5 sin θ sin 2ϕ , (86) • for i = 2 Remark 4 It is important to mention that the stress distribution for i = 1, 2 was obtained from a rotation of the stress distribution for i = 3. In addition, the derivation of this last result (for i = 3) can be found in [START_REF] Kachanov | Handbook of Elasticity Solutions[END_REF], for instance. p )), so that the assertion follows from Theorem 1. r 5 -r 5 r 5 555 14 -10ν + 25(1 -2ν) cos 2θ sin 2 ϕ , (87)σ θϕ 1 (ξ ρ ) = σ I (ξ) 14 -10ν -5ν + 5(1 -2ν) ρ 3 r 3 + 3 ρ 5 r 5 cos θ sin 2ϕ ,(88)σ ϕϕ 1 (ξ ρ ) = σ 1 (ξ) 56 -40ν -20ν + (11 -10ν) ρ 3 r 3 + 9 ρ 5 r 5 + 28 -20ν + 5(1 -2ν) cos 2θ sin 2 ϕ , r 5 5 sin 2 θ , (101) where σ I (ξ), σ III (ξ) and σ III (ξ) are the principal stress values of tensor σ(ξ), for ξ = u, ξ = w or ξ = v associated to the original domain without hole Ω. 
1,Ω ∀u ∈ V 0 ,(48)whereV 0 = {v ∈ [H 1 (Ω)] 3 : Γ v dS = 0, Γ v × x dS = 0}. Γ (s ij • u -T • u ij )dS, (69)which follows by differentiating the so-colled Somigliana's identityu i (y) = Γ (T • u i y -u • s i y )dS,(70)where ∂s i y /∂y j = -s ij . Acknowledgments 4 Conclusions We have seen that the worst case and maximal range scenario problems are solvable with criterions of energy-based topological derivative. The same methodology, considering topological derivatives of different shape functional may be applied to derive similar analysis for criteria dependent for example on displacement (kinematic constraints) and yield constraints. Proof is based on the formulas ([7]-Theorem 10.1.1) and where and r = x -y. Since (κ(A n )) -1 -→ (κ(A)) -1 and the components u ij0 k are bounded on Γ, The vector field u * ij (A) is the displacement solving the first boundary value problem with zero body forces and the equilibriated surface loading where and represents a rigid body displacement such that (e i denote unit vectors in the directions of Cartesian coordinates). The field w ij is uniquely determined by the conditions shown. Inserting (43) in (45), we observe that since u ij0 , λ κ and µ κ are independent of the modulus E. Lemma 1 Let us define Inserting v := ûn -u * and using Lemma 1, we infer that so that ûn -u * 1,Ω → 0 follows from Lemma 2. We can show that Indeed, (51) and Lemma 1 yield that so that (55) follows from Lemma 2. We can use ( 51), ( 53) and Lemma 1 to obtain Then ( 55) and ( 56) yield The convergence u * n → u * in [H 1 (Ω)] 3 follows from the triangle inequality, (54) and (57). as n → ∞. Proof. Since by (42) we have the assertion follows from Lemma 3, the Trace theorem and (44). Proposition 2 Let the stress components at the point y be We can construct the vector function G ij y in a way parallel to that of the proof of Theorem 10.1.1 in [START_REF] Nečas | Mathematical Theory of Elastic and Elasto-plastic Bodies : An Introduction[END_REF]. First, we consider the well-known Kelvin's solution where κ 0 = 4πµ(K + µ), r = x -y and define u ij = -∂u i y /∂y j . The corresponding surface forces on Γ are then We can find that Let us construct the rigid body translation Note that the field w ij is uniquely determined by conditions (67). If we define the forces T * ij are in equilibrium, i.e., they satisfy conditions (61). There exists a unique displacement field u * ij , which solves the first boundary value problem of elasticity with zero body forces and surface loads T * ij and satisfies the normalization conditions Next, we assume that the field u fullfils conditions (68) as well and consider the so-called Love's formula ∂u i ∂y j (y) = A Stress distribution around cavities We present in this appendix the analytical solution for the stress distribution around a circular (N = 2) and spherical (N = 3) cavities respectively for two and three-dimensional linear elastic bodies. A.1 Circular cavity Considering a polar coordinate system (r, θ), we have the following expansion for the stress distribution σ(ξ ρ ) around a free boundary circular cavity (σ rr (ξ ρ ) = σ rθ (ξ ρ ) = 0 on ∂B ρ (x 0 )), with where the angle θ u = θ and θ w = θ + δ, with δ denoting the angle between principal stress directions for displacement fields u and w in (35). 
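As a point of reference for the a_ξ, b_ξ notation used in this appendix: on the traction-free boundary r = ρ of the circular cavity, the expansion above reduces to the classical Kirsch result, quoted here in its standard textbook form (with θ measured from the principal direction of σ_I(u)) rather than from the paper:
$$\sigma_{\theta\theta}(u_\rho)\big|_{r=\rho} \;=\; a_u - 2\,b_u\cos 2\theta + O(\rho),$$
so the hoop stress peaks at a_u + 2b_u for θ = ±π/2; for uniaxial tension (σ_II(u) = 0) this is the familiar stress concentration factor 3.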
In addition, the following expansion for σ(v ρ ) satisfying the boundary condition on ∂B ρ (x 0 ) given by σ rθ (v ρ ) = 0 and σ rr (v ρ ) = -2γσ θθ (u ρ ), holds where the angle θ v = θ + δ, with δ denoting the angle between principal stress directions for displacement fields u and v in (36). Finally, where σ I (ξ) and σ II (ξ) are the principal stress values of tensor σ(ξ), for ξ = u, ξ = w or ξ = v associated to the original domain without hole Ω. A.2 Spherical cavity Let us introduce a spherical coordinate system (r, θ, ϕ). Then, the stress distribution around the spherical cavity B ρ (x 0 ) is given by σ rr (ξ ρ ) = σ rr 1 (ξ ρ ) + σ rr 2 (ξ ρ ) + σ rr 3 (ξ ρ ) + O(ρ) , σ rθ (ξ ρ ) = σ rθ 1 (ξ ρ ) + σ rθ 2 (ξ ρ ) + σ rθ 3 (ξ ρ ) + O(ρ) , σ rϕ (ξ ρ ) = σ rϕ 1 (ξ ρ ) + σ rϕ 2 (ξ ρ ) + σ rϕ 3 (ξ ρ ) + O(ρ) , σ θθ (ξ ρ ) = σ θθ 1 (ξ ρ ) + σ θθ 2 (ξ ρ ) + σ θθ 3 (ξ ρ ) + O(ρ) , σ θϕ (ξ ρ ) = σ θϕ 1 (ξ ρ ) + σ θϕ 2 (ξ ρ ) + σ θϕ 3 (ξ ρ ) + O(ρ) , σ ϕϕ (ξ ρ ) = σ ϕϕ 1 (ξ ρ ) + σ ϕϕ 2 (ξ ρ ) + σ ϕϕ 3 (ξ ρ ) + O(ρ) , where ξ ρ = u ρ , ξ ρ = w ρ or ξ ρ = v ρ ; σ rr i (ξ ρ ), σ rθ i (ξ ρ ), σ rϕ i (ξ ρ ), σ θθ i (ξ ρ ), σ θϕ i (ξ ρ ) and σ ϕϕ i (ξ ρ ), for i = 1, 2, 3, are written, as:
32,187
[ "5275" ]
[ "176771", "4626", "23", "82314" ]
01749050
en
[ "spi" ]
2024/03/05 22:32:07
2011
https://hal.univ-lorraine.fr/tel-01749050/file/BAO.Lei.SMZ1114.pdf
Lei Bao, Christophe Schuman, Jean-Sébastien Lecomte, Marie-Jeanne Philippe, Xiang Zhao, Liang Zuo, Claude Esling. Study of deformation mechanisms in titanium by interrupted rolling and channel die compression tests. Keywords: titanium, deformation mechanism, twinning, gliding, orientation, EBSD, rolling, rotation flow field, texture, twin variants, twin growth, Schmid factor, double twinning, variant selection, interrupted in situ SEM/EBSD orientation determination. Titanium and its alloys are widely used in the aviation, space, military, construction and biomedical industries because of their high fracture strength, high ductility and good biocompatibility. The mechanisms of plastic deformation in titanium have been studied in detail, especially deformation twinning, since it has a great influence on ductility and fracture strength. In this study, an interrupted "in situ" SEM/EBSD investigation based on a split sample of commercial titanium T40 was proposed and performed in rolling and channel die compression. This approach makes it possible to obtain time-resolved information on the appearance of the twin variants, their growth, the interactions between them and the interactions with the grain boundaries or twin boundaries. With the orientation data acquired by the EBSD technique, we calculated the Schmid factor, the crystallographic geometry and the plastic energy associated with each variant of primary twins, secondary twins and double twins, in order to investigate the lattice rotation, the activation of twins, the growth of twins and the variant selection criterion. In this observation, two types of twin systems were activated: {10-12} tension and {11-22} compression twins. Secondary twins were also activated, especially the twin variants with the highest Schmid factors (e.g. higher than 0.4). The growth of the two types of twin is quite different: the {11-22} twin shows a Multiple Variants System (MVS), whereas the {10-12} twin shows a Predominant Variant System (PVS). Twinning occurs in grains that have particular orientations. Generally, the reorientation induced by the twinning aligns the c-axis of the twinned part with the stable rolling texture orientations, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twinning orients the c-axis of the primary twins far away from the stable orientations. For twinned grains, the lattice rotation of the matrix is similar to that of grains having a similar crystallographic orientation but without any twin. Two sets of double twins were observed in this study, classified as C-T1 and T1-C double twins respectively. All the variants of C-T1 and T1-C double twins were classified into three groups, A, B and C, according to crystallographic symmetry. The misorientations of these three groups with respect to the matrix are 41.34°, 48.44° and 87.85°. Strong variant selection took place in double twinning: in C-T1 double twins, 78.9% of the variants belong to group B, whereas in T1-C double twins, 66.7% of the variants belong to group C. The plastic energy and the Schmid factor both play important roles in the variant selection of double twinning. Geometrical characteristics, like the common volume or strain accommodation, do not contribute significantly to the variant selection.
Résumé Titanium and its alloys are widely used in the aeronautical, space, military and civil engineering fields, as well as in commercial and biomedical applications, owing to their high fracture strength, good ductility and high biocompatibility. The mechanisms of plastic deformation of titanium have been studied in detail in the past, particularly deformation twinning, because it has a great influence on the mechanical properties. An "in situ" EBSD testing method based on polished sheets glued together was developed in this study and used in rolling and in channel die compression. With this method, EBSD measurements are carried out at each deformation step over the same area containing a large number of grains; consequently, the orientation information of these grains at each deformation step is measured. Twinning appears in grains that have particular orientations. As a general rule, the reorientation induced by twinning aligns the c-axis of the twinned part with the stable orientations of the rolling texture, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twin sends the c-axis far from the stable orientations. For twinned grains, the lattice rotation of the matrix is similar to that of grains having an identical crystallographic orientation but without twins. Two types of twinning systems were activated during deformation at room temperature: {10-12} tension twins and {11-22} compression twins. For primary twinning, the results show that twin variants with Schmid factors higher than 0.4 have a good chance of being active. The behaviors of the two types of twinning are completely different: in compression, the {11-22} twins show Multiple Variants System (MVS) behavior, whereas the {10-12} twins show Predominant Twin System (PTS) behavior. This study presents two types of double twins, denoted C-T1 (primary compression twin followed by a secondary tension twin) and T1-C (primary tension twin followed by a secondary compression twin). All the variants are classified into only three groups, A, B and C, by crystallographic symmetry. The misorientations of these three groups with respect to the orientation of the matrix are 41.34°, 48.44° and 87.85°, respectively. Strong variant selection takes place in double twinning: for C-T1 double twins, 78.9% of the variants belong to group B, and for T1-C, 66.7% of the variants belong to group C. The Schmid factor plays a predominant role in the variant selection of double twins. Geometrical characteristics, such as "common volumes" and strain accommodation, do not contribute significantly to the variant selection.

I would like to give my heartfelt thanks to all the members of this laboratory for their kind help. I would like to sincerely thank my supervisors, Prof. Claude Esling and Dr. Christophe Schuman at the University of Metz, and Prof. Xiang Zhao at Northeastern University, for guiding me into this research field and for their constant help and support of my research work.
Every step of progress in this work reflects their care and guidance. Special thanks should be given to Prof. Marie-Jeanne Philippe for her enriching ideas and the fruitful discussions. I am also deeply indebted to Dr. Jean-Sébastien Lecomte, a friendly co-worker of outstanding practical ability and active thinking. I would also like to give my heartfelt thanks to Dr. Yudong Zhang for her constant help, not only with my research work but also with my daily life in France. As an illuminating guide, she is reliable, modest, serious and sympathetic. I thoroughly enjoyed working with them all. During my study, I constantly received direct help from the French Ph.D. students at LETAM, especially Pierre Blaineau and Jean-Christophe Hell. Their kind care, their selfless help and the deep friendship between us made up a large part of the support and enjoyment of my study. I will cherish our friendship deeply. Last but not least, I would like to give my heartfelt thanks to my mother and my wife. Their deep love, understanding, and constant support and encouragement over the years have been the great impetus for my study.

Metals with hexagonal close-packed structure

Hexagonal-structure materials such as titanium and magnesium are of particular interest because of their properties. The properties of titanium are especially appreciated by the aerospace and biomedical industries. Magnesium is applied in automotive parts, computers and sports equipment. Zirconium is studied for its use in nuclear reactors. However, since their slip systems are not as numerous as those of cubic metals, twinning becomes more common in these materials. It is therefore important to know their deformation characteristics, including the activity of twinning and the critical resolved shear stress on the different slip systems. These issues in hexagonal-structure materials have been the subject of many research works from the 1950s until recent years [START_REF] Schmid | Plasticity of Crystals, FA Hughes and Co[END_REF].

Hexagonal close-packed crystal structure

The atom positions in the hexagonal close-packed structure are shown in Figure 1-1. If the atoms are assumed to be hard spheres, the closest arrangement within an atomic plane is a close-packed hexagonal array. Stacking such close-packed atomic planes one upon another produces the sequence ABAB (Figure 1-1). In an ideal close-packed structure, the axial ratio is c/a = √(8/3) ≈ 1.633. The coordination number of an ideal hexagonal close-packed structure is 12, the same as in the FCC structure. However, no pure metal has the ideal 1.633 axial ratio. Pure metals with an axial ratio higher than 1.633 have their 6 nearest atoms in the basal plane; in the other case, metals with an axial ratio lower than 1.633 have their 6 nearest atoms out of the basal plane, three above and three below [START_REF] Hume-Rothery | The structure of metals and alloys[END_REF][START_REF] Christian | The theory of transformation in metals and alloys[END_REF]. Because the Miller indices of crystallographic planes and directions belonging to the same family can appear quite dissimilar, and in order to avoid confusion and inconvenience, the Miller-Bravais indices were developed to describe the crystallographic planes and directions in the hexagonal system [START_REF] Taylor | X-ray Metallography[END_REF][START_REF] Reed-Hill | Physical metallurgy principles[END_REF][START_REF] Barrett | Structure of metals[END_REF].
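The four-index notation just introduced (and detailed in the next paragraph) admits a simple conversion from three-index directions [UVW] to [uvtw], using the standard rule u = (2U - V)/3, v = (2V - U)/3, t = -(u + v), w = W. A short helper illustrating it (the function name is ours):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_to_bravais(U, V, W):
    """Convert a 3-index direction [UVW] into the 4-index Miller-Bravais
    form [uvtw], with u = (2U - V)/3, v = (2V - U)/3, t = -(u + v), w = W,
    then rescale to the smallest integers."""
    u = Fraction(2 * U - V, 3)
    v = Fraction(2 * V - U, 3)
    idx = [u, v, -(u + v), Fraction(W)]            # t = -(u + v), since a3 = -(a1 + a2)
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in idx))
    ints = [int(f * lcm) for f in idx]
    g = reduce(gcd, (abs(i) for i in ints)) or 1   # common factor (guard for the zero direction)
    return tuple(i // g for i in ints)

# examples: [100] -> [2-1-10] and [110] -> [11-20]
assert miller_to_bravais(1, 0, 0) == (2, -1, -1, 0)
assert miller_to_bravais(1, 1, 0) == (1, 1, -2, 0)
```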
The Miller-Bravais indices are based on a 4-axis system in which the coplanar vectors a1, a2 and a3 lie at 120° to each other and the vector c is perpendicular to the plane containing a1, a2 and a3 (Figure 1-2). The vector a3 is redundant, since a3 = -(a1 + a2). In the Miller-Bravais indices, a crystallographic direction d has 4 indices [uvtw], such that d = u a1 + v a2 + t a3 + w c, and crystallographically equivalent directions then have similar indices.

Twinning modes

Twinning in hexagonal metals is of great importance because the limited slip modes in these metals make twinning a necessary way to accommodate deformation. Unlike slip, twinning produces a specific new orientation in the crystal by shear. The geometry of the twinning shear is illustrated in Figure 1-3. All points of the lattice on the upper side of the plane K1 are displaced in the direction η1 by an amount u1 proportional to their distance above K1. The plane containing η1 and the normal to K1 is called the plane of shear S. The plane K1 is neither rotated nor distorted, and the vectors lying in the plane K2 are unchanged in length although they are rotated; K2 in Figure 1-3 is therefore conventionally called the second undistorted plane. When a crystal is completely converted to a twin, all directions lying in the initially acute sector between K1 and K2 are shortened, while all directions lying in the obtuse sector are lengthened. Since a twin can be seen as a specific crystallographic orientation relationship between the twinned part and the matrix, a twin can be described in the same way as the relationship between two crystals, by a rotation angle and a rotation axis. The {10-12}, {11-21} and {11-22} twins in titanium are illustrated in Figure 1-4. In hexagonal metals, many types of twinning are exhibited, and the type can be related to the c/a ratio of the metal. Generally speaking, the lower the c/a ratio, the greater the variety of twinning exhibited. Figure 1-4: The {10-12}, {11-21} and {11-22} twins in titanium. The observed types of twinning in hexagonal metals are shown in Table 1-1.
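Anticipating the shear formulas collected in Table 1-2 below, the magnitudes for the two modes most relevant to titanium can be evaluated directly at its axial ratio (a worked check, using the standard formulas):
$$s_{\{10\text{-}12\}} = \frac{3-\gamma^{2}}{\sqrt{3}\,\gamma} \approx 0.175, \qquad s_{\{11\text{-}22\}} = \frac{2(\gamma^{2}-2)}{3\gamma} \approx 0.218 \qquad (\gamma = c/a \approx 1.587).$$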
The formulas to calculate shear of observed twin mode from [START_REF] Christian | Deformation twinning[END_REF] are given in Table 1-2. The relationship between the shear and the c/a ratio of various types of twinning is illustrated in Figure 1-6. Because the twinning only can provide shear in single direction, the load direction and crystal orientation influence the activation of twinning greatly. The twins with positive slope in Figure 1-6 ({11-22} and {10-11}) only activate under the compression force in the direction of c axis. On the other hand, the twins with negative slope ({10-12} and {11-21}) only activate under the tension force in the direction of c axis. Table 1-2: Shear of various twin modes [START_REF] Christian | Deformation twinning[END_REF]] Twin modes Shear {10-12}<10-1-1> (3 -)/√3 {10-11}<10-1-2> (4 -9)/4√3 {11-21}<11-2-6> 1 ⁄ {11-22}<11-2-3> ( -)/ 1-3 1-1 1-2 ~ 9 ~ Slip modes The crystallographic slip is a mechanism that operates in all crystalline materials, metals and metal alloys, calcite or crystalline polymers. This mechanism has already been observed before the twentieth century. Metalworkers observed deformed lines or streaks regular on polycrystals under an optical microscope, and they called them "slip lines". In fact, later observed in the Scanning Electron Microscope, these slip lines were actually steps. The formation of these steps is a direct result of the mechanism of deformation of parts of the crystal (or polycrystal) slide over each other on well defined crystallographic planes (slip systems). This mechanism is due to the movement of dislocations in these slip planes. A dislocation can be 2) [11.3] (11.1) [11.6] Range of hexagonal metals The relationship between the shear and the c/a ratio of various types of twinning. ~ 10 ~ activated when a stress applied to the crystal. Under sufficient stress, the dislocations glide through the crystal. It produces a small displacement (Burgers vector b ⃑ ) on the surface. When a dislocation slips, the volume of metals remains unchanged, because the shift occurs by shear between parallel planes of the crystal. In the face-centered cubic lattice, the Burgers vectors are in the direction of <110>, and <111> in the bcc. In materials with hexagonal structures, there are several families of slip systems with Burgers vectors of type <a> and <c+a>. A slip system is defined by a glide plane and a direction slip contained in this plan. Table 1-3 and Figure 1-7 below show the different families of slip system operating in hexagonal structures. ~ 11 ~ Hexagonal metals In this section, we provide a presentation of the main works related to crystal orientation, evolution of texture and different deformation mechanisms in titanium, zirconium, magnesium and zinc. Titanium The plastic deformation in titanium have been studied all the time and especially during rolling (hot and cold). First, it should be mentioned the for the rapid development of texture and the transitions in texture. The formation of the stable end texture is thought to be due to slip. In 1994, Kailas et al. (Prasad, Biswas et al. 1994) studied the influence of initial texture on the instability of the microstructure of titanium during compression at temperatures between 25 and 400C. 
They found that at strain rates  1 s -1 , both sets of specimens, in the rolling direction specimens and in the long transverse direction specimens, exhibited adiabatic shear bands, but the intensity of shear bands was found to be higher in the rolling direction specimens than in the long transverse direction specimens. At strain rates  ~ 13 ~ 0.1s -1 the material deformed in a micro structurally inhomogeneous fashion. For the rolling direction specimens, cracking was observed at 100 °C and at strain rates  0.1 s -1 . This is attributed to dynamic strain aging. Such cracking was not observed in the long transverse specimens. The differences in the intensity of adiabatic shear bands and that of dynamic strain aging between the two sets of test specimens are attributed to the strong crystallographic texture present in these plates. Subsequently, in 1997, Lebensohn and Canova (Lebensohn and Canova 1997) proposed a self-consistent model to simulate the evolution of texture in titanium and apply to the rolling. This model accounts for crystallographic textures and grain morphologies, as well as for the phase correlation, both in space and orientation. In their experiment, the two phases, (α + β) Ti alloys, exhibit specific morphologic and crystallographic correlations. Their study showed that the model leads to better texture predictions when all these correlations are accounted for. In 1999, Singh et al. (Singh, Bhattacharjee et al. 1999) ~ 18 ~ Zirconium In 1991, Tomé et al. [START_REF] Tome | A model for texture development dominated by deformation twinning: application to zirconium alloys[END_REF] propose a new method for modeling grain reorientation due to twinning, they deal with tension and compression in zirconium alloys. This new model, called "Volume Transfer Scheme", a part of the volume of the crystal transferred to the twinned position directly in the Euler space. Their model predicts the evolution of texture more accurately than traditional models when the twinning is the predominant mechanism of deformation. However, the disadvantage of this model is that you cannot take into account the work hardening. In 1993, Lebensohn et al. (Lebensohn and Canova 1997) present an approach anisotropic viscoplastic self-consistent model for the plastic deformation of polycrystals. This approach is different from that presented earlier by Molinari et al. in his formulation 'stiffness'. The self-consistent model is particularly suitable for highly anisotropic materials such as hexagonal. The authors applied their model to predict the evolution of texture in rolled zirconium and get better results after compared their prediction with experiments. In 1994 and 1995, Philippe et al. [START_REF] Philippe | Modelling of texture evolution for materials of hexagonal symmetry--I. Application to zinc alloys[END_REF] propose a model of the evolution of texture in hexagonal materials. The authors describe a first step in the evolution of texture and microstructure during cold rolling in zinc, subsequently, Philippe et al. [START_REF] Philippe | Modelling of texture evolution for materials of hexagonal symmetry--II. application to zirconium and titanium [alpha] or near [alpha] alloys[END_REF] extended their studies to titanium and zirconium, taking both the sliding plastic modeling and also twinning into account. The authors performed simulations using models of Sachs and Taylor and compared their results to the available literature. 
The simulation results were in good agreement with the experiments for reductions between 0 and 80%. Francillette et al. [START_REF] Francillette | Experimental and predicted texture evolutions in zirconium alloys deformed in channel die compression[END_REF] performed channel die compression tests on polycrystalline zirconium at room temperature, up to a plastic deformation of 40%. They tested 5 samples with different initial textures and simulated the evolution of texture using a self-consistent crystal plasticity model; the simulations agreed very well with the experiments. In 2000, Kaschner and Gray [START_REF] Kaschner | The influence of crystallographic texture and interstitial impurities on the mechanical behavior of zirconium[END_REF] studied the influence of texture on the mechanical behavior of zirconium. They evaluated the response of the material in compression as a function of the predominant orientation of the c-axis of the hexagonal cell, for several test temperatures and strain rates. They found that the compressive-yield responses of both high-purity (HP) crystal-bar and lower-purity (LP) zirconium depend on the loading orientation relative to the c-axis of the hcp cell, on the applied strain rate, which varied between 0.001 and 3500 s⁻¹, and on the test temperature, which varied between 77 and 298 K. The rate of strain hardening in zirconium was seen to depend on the controlling defect-storage mechanism as a function of texture, strain rate and temperature. The substructure evolution of HP zirconium was also observed to be a function of the applied strain rate and test temperature: the substructure of HP zirconium displayed a greater incidence of deformation twinning when deformed at a high strain rate at 298 K, or at 77 K. In 2001, Sanchez et al. [START_REF] Sanchez | Torsion texture development of zirconium alloys[END_REF] studied the torsion texture development of zirconium alloys. Orientation imaging microscopy in a scanning electron microscope and defect analysis via transmission electron microscopy were used to characterize the defect microstructures as a function of initial texture, deformation temperature and plastic strain. They found that the observed deformation mechanisms correlate with the measured mechanical response of the material.

Magnesium

In this section, we propose a review of the literature on textures and their evolution in magnesium. In 2000, Kaneko et al. [START_REF] Kaneko | Effect of texture on the mechanical properties and formability of magnesium wrought materials[END_REF] examined the effect of texture on the mechanical properties of AZ31 magnesium. They found that a strong texture is observed in magnesium wrought materials, in which the basal plane is oriented parallel to the extrusion or rolling direction. As a result, the tensile strength of magnesium wrought materials is increased by 15 to 20% due to texture hardening at room temperature. The AZ31 alloy sheet is highly anisotropic at room temperature, with a high r-value above 4, with the result that the forming limits in biaxial tension are much lower than those in uniaxial tension. However, this anisotropy decreases with increasing forming temperature and no texture hardening is found at 473 K. In this respect, the formability of magnesium alloy sheets in terms of Erichsen and conical cup values is remarkably poor at room temperature but improves appreciably with increasing temperature.
Sheet forming of magnesium alloys is practically possible only in the high temperature range where the plastic anisotropy disappears. In 2006, Yi et al. [START_REF] Yi | Deformation and texture evolution in AZ31 magnesium alloy during uniaxial loading[END_REF] studied the behavior of the magnesium alloy AZ31 under uniaxial loading (tension and compression). The specimens were cut at 0°, 45° and 90° to the extrusion direction. The authors highlighted the various deformation modes acting as a function of the loading direction relative to the initial texture. They found that the activity of basal <a> slip and of tensile twinning exerts a significant effect on the mechanical anisotropy during tension, while the importance of <c+a> slip increases during compression. Helis et al. [START_REF] Helis | Microstructure evolution and texture development during high-temperature uniaxial compression of magnesium alloy AZ31[END_REF] studied the microstructure evolution and texture development of AZ31 during high-temperature uniaxial compression, and Watanabe et al. [START_REF] Watanabe | Effect of temperature of differential speed rolling on room temperature mechanical properties and texture in an AZ31 magnesium alloy[END_REF] studied the texture evolution and ductility of AZ31 magnesium under various rolling conditions. It was found that the strain rate was inversely proportional to the square of the grain size and proportional to the second power of the stress. The activation energy was close to that for grain boundary diffusion at 523-573 K, and close to that for lattice diffusion at 598-673 K. From the analysis of the stress exponent, the grain size exponent and the activation energy, it was suggested that the dominant diffusion process was grain boundary diffusion at the lower temperatures and lattice diffusion at the higher temperatures.

Material

The as-received material was a hot-rolled and then annealed commercially pure titanium sheet (mean grain size 10 µm) of 1.5 mm thickness, with the composition given in Table 2-1.

Heat treatment

In order to obtain a coarse-grained microstructure with equiaxed grains, a grain growth annealing was performed on some samples at 800 °C for 2 hours, after which the grain size had grown to 200 µm.

Samples preparation

The samples were first mechanically polished with silicon carbide paper (600#, 1200#, 2400# and finally 4000# paper, Struers standard) and then electrolytically polished in a solution of 200 ml perchloric acid in 800 ml methanol at 17 V for 30 seconds, at a temperature of 20 °C.

Electron Back Scattered Diffraction (EBSD)

This part introduces our most important experimental method and equipment, Electron Back Scattered Diffraction (EBSD), in detail. We explain how an EBSD system works, describe the experiments that can be performed and how to undertake them, and finally outline the basic crystallography needed for EBSD. In this work, a field-emission JEOL-6500F FEG-SEM with an EBSD camera and the HKL CHANNEL5 software was used to perform the EBSD measurements and analysis. HKL CHANNEL5 uses a modular approach: all software modules interact seamlessly with one another and form a powerful and versatile suite with which to perform microstructural characterization. The HKL CHANNEL5 Flamenco module allows image collection, versatile EBSD analysis and phase identification within a single program.

Introduction

EBSD is a technique which allows crystallographic information to be obtained from samples in the scanning electron microscope (SEM).
In EBSD, a stationary electron beam strikes a tilted crystalline sample and the diffracted electrons form a pattern on a fluorescent screen. This pattern is characteristic of the crystal structure and orientation of the sample region from which it was generated. The diffraction pattern can be used to measure the crystal orientation, measure grain boundary misorientations, discriminate between different materials, and provide information about local crystalline perfection. When the beam is scanned in a grid across a polycrystalline sample and the crystal orientation is measured at each point, the resulting map reveals the constituent grain morphology, orientations and boundaries. These data can also be used to show the preferred crystal orientations (texture) present in the material. A complete and quantitative representation of the sample microstructure can thus be established with EBSD.

Basics of EBSD

The principal components of an EBSD system are shown in the schematic of the EBSD set-up. The crystal orientation is calculated from the Kikuchi band positions by the computer processing the digitized diffraction pattern collected by the CCD camera. The Kikuchi band positions are found using the Hough transform. The transform between the coordinates (x, y) of the diffraction pattern and the coordinates (ρ, θ) of Hough space is given by:

ρ = x cos θ + y sin θ

A straight line is characterized by ρ, the perpendicular distance from the origin, and θ, the angle made with the x-axis, and so is represented by a single point (ρ, θ) in Hough space. Kikuchi bands transform to bright regions in Hough space, which can be detected and used to calculate the original positions of the bands. Using the system calibration, the angles between the planes producing the detected Kikuchi bands can be calculated. These are compared with a list of inter-planar angles for the analyzed crystal structure to allocate Miller indices to each plane. The final step is to calculate the orientation of the crystal lattice with respect to coordinates fixed in the sample. This whole process takes no more than a few milliseconds with modern computers.
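As an illustration of this band-detection step, the following minimal Python sketch (our own, not thesis code; function and variable names are ours) accumulates the Hough transform ρ = x cos θ + y sin θ of a pattern image. The Kikuchi band centre lines then appear as bright peaks in (ρ, θ) space.

import numpy as np

def hough_accumulator(pattern, n_theta=180):
    # Map every bright pixel (x, y) of the diffraction pattern onto the
    # sinusoid rho = x*cos(theta) + y*sin(theta); a straight Kikuchi
    # band centre line collapses to one bright point (rho, theta).
    h, w = pattern.shape
    thetas = np.deg2rad(np.arange(n_theta))
    rho_max = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * rho_max + 1, n_theta))
    ys, xs = np.nonzero(pattern > pattern.mean())  # keep bright pixels only
    for x, y in zip(xs, ys):
        rhos = np.rint(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += pattern[y, x]
    return acc  # peak positions give (rho, theta) of the detected bands

In a real system, the peak (ρ, θ) pairs are then converted, via the calibration, into plane normals whose inter-planar angles are matched against the known crystal structure.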
Basic crystallography for EBSD

Crystal orientation. A crystal orientation is measured with respect to an orthogonal coordinate system fixed in the sample. The sample system is normally aligned with prominent directions of the sample, for instance the rolling direction (RD), transverse direction (TD) and normal direction (ND) of a rolled sheet. The relationship between a crystal coordinate system and the sample system is described by an orientation matrix G: a direction measured in the crystal system, r_c, is related to the same direction measured in the sample system, r_s, by

r_c = G r_s

The rows of the matrix G are the direction cosines of the crystal system axes in the coordinates of the sample system.

Misorientation. The orientation between two crystal coordinate systems can also be defined by an angle-axis pair (θ, [uvw]): one coordinate system can be superimposed onto the other by rotating it by an angle θ around the common axis [uvw] (Figure 2-4). Because it is the axis of rotation, the direction [uvw] is the same in both coordinate systems. The angle-axis pair notation is normally used to describe grain boundary misorientations. The orientation between two coordinate systems can equally be defined by a set of three successive rotations about specified axes. These rotations are the Euler angles φ1, Φ, φ2, shown in Figure 2-5, which illustrates the rotations necessary to superimpose the crystal coordinate system (red) onto the sample system (blue). The first rotation, φ1, is about the z-axis of the crystal coordinate system; the second rotation, Φ, is about the new x-axis; the third rotation, φ2, is about the new z-axis. The dotted lines in the figure show the positions of the axes before the last rotation. Note that the orientation can also be defined by an equivalent set of Euler angles which superimpose the sample coordinate system onto the crystal coordinate system.
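These two descriptions can be sketched numerically as follows (our own illustration, not thesis code). The first function builds the Bunge orientation matrix from (φ1, Φ, φ2); the second recovers the misorientation angle of the angle-axis pair relating two orientations. Crystal symmetry, which would reduce the angle to its minimum over all equivalent descriptions, is deliberately omitted here.

import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    # Orientation matrix G (r_c = G r_s) for Bunge Euler angles in
    # radians: phi1 about z, Phi about the new x, phi2 about the new z.
    c1, s1 = np.cos(phi1), np.sin(phi1)
    c,  s  = np.cos(Phi),  np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    return np.array([
        [ c1*c2 - s1*s2*c,  s1*c2 + c1*s2*c,  s2*s],
        [-c1*s2 - s1*c2*c, -s1*s2 + c1*c2*c,  c2*s],
        [ s1*s,            -c1*s,             c   ]])

def misorientation_angle(G1, G2):
    # Rotation angle (degrees) of the angle-axis pair superimposing
    # crystal system 1 onto crystal system 2: dG = G2 G1^T.
    dG = G2 @ G1.T
    cos_t = np.clip((np.trace(dG) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_t))

# Example: two grains differing only by a 30 deg rotation about z
G1 = bunge_matrix(*np.radians([10.0, 35.0, 0.0]))
G2 = bunge_matrix(*np.radians([40.0, 35.0, 0.0]))
print(misorientation_angle(G1, G2))  # 30.0 (crystal symmetry ignored)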
In-situ EBSD test and interrupted "in-situ" EBSD test

Since automated EBSD systems are used in combination with other equipment within the scanning electron microscope (SEM), it is possible to perform an in-situ tension test in the SEM sample chamber and to apply EBSD measurements simultaneously. While changes in a single EBSD pattern can be observed during in-situ deformation of a sample, much greater insight is gained from full EBSD scan data acquired during the experiment. This chapter briefly introduces the in-situ EBSD tension test and the interrupted "in-situ" EBSD test used in this study.

In-situ EBSD test

The arrangement of an in-situ EBSD test is illustrated in the corresponding figure. The EBSD measurements are taken during the in-situ experiment. In order to obtain high-quality Kikuchi patterns, the sample must be maintained at the standard 70° tilt throughout the experiment, which requires good compatibility between the SEM, the EBSD camera and the deformation stage.

Interrupted "in-situ" EBSD test

Although the in-situ EBSD test is an excellent method for collecting orientation data of grains during deformation, it imposes strict conditions, such as a low deformation speed and a small measured area; most importantly, tension is the only deformation mode it can provide. Therefore, an interrupted "in-situ" EBSD investigation method was proposed and applied. In this method, we concentrate on a sufficient number of grains and perform EBSD measurements in the same zone before and after each deformation step; we can thus acquire detailed orientation information on these grains at each interrupted step and identify the deformation modes active during the deformation process. This method can be applied to rolling, tension, compression and other deformation modes.

Experiment arrangement

The samples were cold rolled or compressed in channel die in several passes, first to a certain amount of deformation and then to further amounts. To perform the interrupted "in-situ" measurement, a 500×300 µm² area was carefully polished and marked out with micro-indentations.

In this chapter, we continue this study of the deformation modes in rolling and channel die compression at room temperature, to provide information on the lattice rotation in the course of the deformation. The role of twinning and the formation and evolution of mechanical twins are studied with the method of interrupted "in-situ" SEM/EBSD measurements.

Introduction

The mechanism of plastic deformation of titanium has been studied in some detail in the past [Yoo (1981)]: {10-12} and {11-21} tension twins as well as {11-22} compression twins are activated during plastic deformation at room temperature. Because the axial ratio of titanium, c/a, is below the ideal value of 1.633, prismatic glide is the easiest glide mode at room temperature, but basal and pyramidal glide have also been observed [Pochettino, Gannio, Edwards and Penelle (1992)]. However, the previous studies concerning texture and deformation modes were mostly performed after a certain amount of deformation, either by X-ray diffraction (XRD) or by transmission electron microscopy (TEM). Therefore, the initial orientation of the individual grains and the evolution of the orientation flow during deformation were not documented. Moreover, after a certain amount of deformation, twinning and gliding are both active and interact with each other, so that it is difficult to resolve the specific orientation conditions that activate each deformation mode (either twinning or gliding). The present work is devoted to these aspects, filling this gap in the literature. In order to follow the evolution of individual orientations during the deformation and to determine the effect of the initial orientation on the deformation modes, an interrupted "in-situ" EBSD investigation method was proposed. In this method, we follow a sufficient number of grains and perform EBSD measurements in the same area, prior to and after the deformation. Thus it is possible to acquire detailed orientation information on these grains at each interrupted step of deformation and to identify the active deformation modes.

Experimental

The as-received material was a hot-rolled and then annealed commercial purity titanium sheet of 1.5 mm thickness, with the composition given in Table 1. In order to obtain a twin-free microstructure with equiaxed grains, a grain growth annealing was performed at 750 °C for 2 hours. After the annealing, the samples were mechanically and then electrolytically polished in a solution of 200 ml perchloric acid in 800 ml methanol at 17 V (30 seconds) at a temperature of 5 °C before deformation. The samples were then cold rolled or compressed in channel die in two passes, first to 10% and then to 20% reduction. To perform the interrupted "in-situ" measurement, a 500×300 µm² area was carefully polished and marked out with four micro-indentations. The orientation of all the grains in this area was measured before and after each deformation step.

Deformation in rolling

The orientation map of the grain growth annealed sample shown in Fig. 2 reveals a completely recrystallized, twin-free microstructure. The lattice rotation was studied after each rolling pass. The orientation of each individual grain was carefully determined, so that the lattice rotation of each grain could be related to its own orientation as well as to that of its neighbouring grains. It was found that {11-22} compression twinning [Philippe, Esling and Hocheid (1988)] was predominant at this stage of deformation. This result is reasonable considering that the initial orientation favors the activation of this compression twin. When rolling continued to 20%, {10-12} tension twinning increased remarkably (Fig. 4 (c)). The orientation analysis could be studied from the microscopic point of view of the crystal reorientation, step by step, in terms of the rotation flow field. A small arrow is plotted in the Euler space between the initial grain orientation and the final grain orientation; this field of small arrows offers a graphical representation of the rotation flow field. The flow field can be defined and plotted in the Euler space, and represents an efficient tool to describe the texture evolution through modeling, e.g. [START_REF] Clement | Eulerian simulation of deformation textures[END_REF], Bunge and Esling (1984), Knezevic, Kalidindi and Fullwood (2008). In order to identify the activated glide systems corresponding to the observed traces, the possible traces of all glide planes [Partridge (1967)] were calculated in the crystal coordinate system, using the orientation data of the related grains, and compared with the observed traces. Consequently, basal <a>, prismatic <a> and pyramidal <a> or <c+a> glide systems were identified in this work.
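The trace comparison just described can be sketched numerically. In this illustrative Python fragment (our notation, not thesis code: plane normals are assumed to be given in an orthonormal crystal basis, and G is the orientation matrix with r_c = G r_s), the predicted surface trace of a candidate glide plane is the intersection of that plane with the observation surface, and its in-plane angle is compared with the measured slip-line angle.

import numpy as np

def predicted_trace_angle(n_crystal, G):
    # Predicted slip-line angle on the sample surface for one glide
    # plane. n_crystal: plane normal in an orthonormal crystal basis
    # (Miller-Bravais indices converted to Cartesian beforehand);
    # with r_c = G r_s, crystal vectors map to the sample frame by G^T.
    # Assumes the glide plane is not parallel to the surface.
    n_sample = G.T @ np.asarray(n_crystal, float)
    trace = np.cross(n_sample, [0.0, 0.0, 1.0])  # intersection with surface (ND = z)
    trace /= np.linalg.norm(trace)
    return np.degrees(np.arctan2(trace[1], trace[0])) % 180.0

def matching_planes(candidates, G, observed_angle, tol=5.0):
    # candidates: dict {name: plane normal}; returns the planes whose
    # predicted trace lies within tol degrees of the observed slip line.
    hits = {}
    for name, n in candidates.items():
        d = abs(predicted_trace_angle(n, G) - observed_angle)
        d = min(d, 180.0 - d)  # traces are undirected lines
        if d <= tol:
            hits[name] = round(d, 2)
    return hits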
Deformation in channel die compression

In channel die compression, only one type of twin, the {10-12} tension twin, is observed, and the fraction of twinned grains is very low, only 1.07% of the observed grains, which cannot be clearly resolved from the misorientation-angle distribution diagram in Fig. 7. Compared with rolling, numerous slip traces are observed in a great number of grains in the EBSD maps after channel die compression. Using the above trace comparison method, basal <a>, prismatic <a> and pyramidal <a> or <c+a> glide were identified. A statistical set of 100 randomly selected grains with slip traces was studied; the occurrence of the various glide systems in these grains is listed in Table 2. Among the activated glide systems, 11% are basal <a>, 51% are prismatic <a> and 38% are pyramidal <c+a> or <a>. The Schmid factors for the three glide systems were calculated; the calculation indicated that most grains have high Schmid factors for the pyramidal <c+a> glide system but low Schmid factors for basal <a> and prismatic <a>. This is due to the strong texture, which means that a majority of grains belong to one main orientation mode.

Table 2: Activated glide systems in 100 randomly selected grains.
Activated glide system — frequency
Basal <a> — 11%
Prismatic <a> — 51%
Pyramidal <a> or <c+a> — 38%

In the pole mechanism of twin growth, the pole can be seen as a dislocation source, and partial dislocations are produced continuously in the planes parallel to the twin plane during deformation. If the path along which the partial dislocations move is short, the dislocations are easily blocked, pile up and react back on the source, disabling it. Deformation twins always have a lenticular shape, since the interface can deviate from the twinning plane without greatly increasing the twin-interface energy [Partridge (1967)]. In addition, the strain energy increase should also be taken into account; it is approximately equal to (c/r)µS², where c and r represent the thickness and length of the twin, µ the shear modulus and S the twinning shear. Hence, secondary twins are already confined to a small volume due to the lenticular shape and cannot provide a free path long enough to develop higher order twinning. In fact, in titanium alloys it is difficult to induce twinning at room temperature once the size of the matrix drops below about 10 µm. A major benefit of the interrupted "in-situ" method is that we can follow the deformation process step by step. For example, in the grains having their initial c-axis close to ND, for which secondary twinning occurs inside the primary twins, we can clearly discriminate the initial matrix from the primary twins thanks to the in-situ orientation information. In this case the primary twinned area is much larger than the remaining matrix and thus represents the "new matrix" for possible subsequent secondary twinning. The effect of the neighboring grains slightly modifies the orientation in the vicinity of the grain boundary. This leads to a larger spread of the orientations measured in the vicinity of the grain boundary when the neighboring grain is strongly misoriented with respect to the considered grain. Compared with rolling, channel die compression showed a simple deformation mode, including only {10-12} tension twinning and various glide systems. The primary reason is the simple stress state applied in channel die compression. Therefore, we used the channel die compression results to clarify the effect of grain size on twinning.
357 grains with orientations favorable to the activation of {10-12} twinning were selected and divided into three groups according to their diameter: Group 1, 0 to 10 µm (221 grains); Group 2, 10 to 20 µm (129 grains); Group 3, 20 to 30 µm (7 grains). The calculated percentage of grains with twins in the three groups is shown in Fig. 11. It is clear that for similar orientations (c-axis tilted 70°~90° from the normal direction ND), no twin occurs in grains smaller than 10 µm, and the occurrence of twins increases with increasing grain size. Hence, grain size is an important factor affecting twin activation. The reason can be understood from the geometric and energetic considerations introduced above.

Conclusion

Twinning occurs in grains having specific orientations. Generally, the reorientation induced by twinning aligns the c-axis of the twinned part with the stable rolling texture orientation, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twinning orientates the c-axis of the primary twins far away from the stable orientations (this is generally the case for the {11-22} primary twins); the subsequent {10-12} secondary twinning within such a primary twin results in a reorientation of the c-axis of the secondary twin to a stable orientation. Only a small amount of second order twins could be observed, and twinning of higher than second order was not found. The rotation of the matrix part of the grains having twins is similar to that of the non-twinned grains with similar orientation. The twinned part of a grain can be considered as a new grain; when twins grow within the grain, they can consume practically the whole matrix.

Abstract: The present work was undertaken to provide information, lacking in the literature, on the lattice rotation and the role of twinning during cold rolling of commercial purity titanium (T40). The proposed method consists of determining the individual rotation of the grains induced by low to intermediate deformation (up to 30% in thickness reduction) and following the rotation field using electron backscatter diffraction (EBSD) measurements in a high resolution FEG-SEM at different steps of deformation (10 and 20%). We especially studied the formation, the evolution and the role of mechanical twins. In agreement with former research, three different types of twin systems were activated during deformation at room temperature, depending on the grain orientation: {10-12} tensile twinning, {11-22} compression twinning and, to a small extent, {11-21} tensile twinning. Secondary twins (often {10-12} within {11-22} twins) were activated in the grains oriented favourably for this secondary twinning. This resulted in a heterogeneous microstructure in which grains were refined in some areas. It also induced a reorientation of the c-axes to stable orientations. No twins of higher order than second order twins could be found.

Introduction

The titanium textures observed at room temperature in the hexagonal close-packed (HCP) structure are inherited to some extent from the prior texture in the body centred cubic (BCC) structure [1]. However, most of the research effort has concentrated on the strong deformation textures that titanium [2-4], like other HCP metals, develops during rolling at room temperature, and which lead to a pronounced plastic anisotropy of the polycrystalline material [5]. The mechanical response of HCP metals is strongly dependent on the combination of active deformation modes: slip and twinning.
The specific deformation mechanisms depend on the c/a ratio, the available deformation modes, the critical resolved shear stress (CRSS) for slip and the twin activation stress, as well as on the imposed deformation relative to the crystallographic texture. For pure titanium, {10-10}<11-20> prismatic slip is the primary deformation mode. This slip mode alone, however, cannot accommodate the imposed strain in the grains of a polycrystalline aggregate, because it cannot provide 5 independent slip systems [4, 6-9]. Additional deformation mechanisms, such as slip on pyramidal planes with <c+a> Burgers vector, or twinning, usually have to be activated. Chun et al. [10] studied the effect of deformation twinning on the microstructure during cold rolling of commercially pure (CP) titanium; the primary twinning systems activated were the {11-22} compression and {10-12} tension twins. In order to describe the texture evolution, different models of polycrystalline plasticity are used. Polycrystal plasticity models are routinely employed to predict deformation textures. Wu et al. [11] employed a new Taylor type crystal plasticity model to predict the texture evolution and the anisotropic stress-strain curves in α-titanium. The main features of this model include: (i) the incorporation of slip inside twins as a significant contributor to accommodating the overall imposed plastic deformation; and (ii) the extension of the slip and twin hardening laws to treat separately the hardening behaviour of the different slip families (prismatic <a>, basal <a> and pyramidal <c+a>) using hardening parameters that are all coupled to the extent of deformation twinning in the sample. Proust et al. [12] developed a model which takes into account the texture evolution associated with twin reorientation and the effect of the twin barriers on dislocation propagation. The role of the twins as barriers to dislocations was explicitly incorporated into the hardening description via geometrically necessary dislocations (GNDs) and a directional Hall-Petch mechanism. However, with these complex models, the lack of direct experimental information on the slip and twinning systems imposes difficulties for the related modelling practices.

Experimental

The as-received material was a hot-rolled and then annealed commercially pure titanium sheet of 1.5 mm thickness, with the composition given in Table 1. In order to obtain a microstructure with a mean grain size of 30 µm, a grain growth anneal was performed at 750 °C for 2 hours. After annealing, the samples were mechanically ground and then electrolytically polished in a solution of 20 ml perchloric acid in 80 ml methanol at 17 V (30 seconds) at a temperature of 5 °C before the cold rolling. The samples were then cold rolled in two passes, first to 10% and then to 20% reduction. The cold rolling was performed in the transverse direction of the former hot rolling in order to induce significant reorientations of the c-axes of the grains. To perform the interrupted "in-situ" measurement, a selected area was carefully polished and marked out with four micro-indentations, and the orientations of all the grains in this area were measured before and after each rolling pass.

Results

Initial microstructure and texture

The orientation map of the grain growth annealed sample shown in Fig. 2 reveals a completely recrystallized microstructure. No twins were observed, as metals with HCP structure do not undergo recrystallization twinning. The {0002} pole figure (PF) in Fig. 3 shows two strong maxima tilted ±35° from ND towards TD, the setting of the coordinate systems and the definition of the Euler angles being in accordance with Bunge's convention (see e.g. Ref. [13]). The {10-10} PF displays the maximum pole densities parallel to RD.
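As a quick numerical check of this texture description (our own worked example, with the sample frame taken as x = RD, y = TD, z = ND), the c-axis of a crystal with Bunge angles (φ1, Φ, φ2) maps into the sample frame as the third row of the orientation matrix, so a grain with Φ = 35° has its c-axis tilted 35° from ND:

import numpy as np

# c-axis (crystal [0001]) in sample coordinates: with r_s = G^T r_c it
# is the third row of G, i.e. (sin(phi1)sin(Phi), -cos(phi1)sin(Phi), cos(Phi)).
phi1, Phi = np.radians(0.0), np.radians(35.0)  # phi2 leaves the c-axis unchanged
c_axis = np.array([np.sin(phi1) * np.sin(Phi),
                   -np.cos(phi1) * np.sin(Phi),
                   np.cos(Phi)])
print(np.degrees(np.arccos(c_axis[2])))  # 35.0: pole tilted 35 deg from ND
# phi1 = 0 or 180 deg places the pole towards -TD or +TD, reproducing
# the two symmetric {0002} maxima at +/-35 deg from ND towards TD.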
Lattice rotation fields and texture development

The orientation analysis could be studied from the microscopic viewpoint of the crystal reorientation, step by step, in terms of the lattice rotation flow field. A small arrow is plotted between the initial grain orientation and the final grain orientation; this field of small arrows offers a graphical representation of the flow field. The orientation flow field can be defined and plotted in the Euler space, and represents an efficient tool to describe the texture evolution through modeling, e.g. [START_REF] Clement | Eulerian simulation of deformation textures[END_REF] [13], [START_REF] Bunge | Texture development by plastic deformation[END_REF] [14]. In the present case of a hexagonal material, due to the particular importance of the c-axes, we chose to plot the small arrows linking the initial orientation and the final orientation in the two-dimensional pole figures of the c-axes. For the further discussion, it was of interest to plot separately the rotation flow field of the grains having no twinned part (Fig. 5 (a)) and the rotation flow field of the matrix part of the grains presenting twinned parts (Fig. 5 (b)). Both orientation flow fields are similar, apart from a smaller amplitude of the rotation of the matrix of the twinned grains compared with the grains without twins. The orientation analysis could also be studied from the macroscopic point of view of the overall texture evolution. Interestingly, it is found that tension twins and compression twins can coexist in one and the same grain (Fig. 9). From studies of the literature, we could not find an explanation for this initially unexpected coexistence.

Effect of the orientation of the neighbouring grains: heterogeneous deformation

In large grains, different domains of one and the same grain may undergo different lattice rotations. Fig. 12 presents a large grain (green color) in which several parts have experienced different lattice rotations. The numbers inside the small neighboring grains give the misorientations of their c-axes with respect to the c-axis corresponding to the mean orientation of the large green grain. We can study here the respective rotations of the domains inside the large green grain, notably in the neighbourhood of the grain boundary, and thus estimate the influence of the neighbouring grains. In any case, the rotation of the grain interior is smaller than that of the part close to the grain boundary, especially when the neighbouring grains are highly misoriented with respect to the investigated domain.

Discussion

Comparison of the rotation of the matrix part of the grains containing twins with the rotation of the untwinned grains indicates that the lattice rotations in the two cases are similar, even if the amplitude of the rotation of the non-twinned part of the twinned grains is relatively smaller. In the grains undergoing secondary twinning, the c-axis of the secondary twin orientates towards the stable orientation. We could hardly observe the presence of any third-order twin, even after much higher deformation. This can be understood from the relation between the geometric and energetic characteristics of twinning (and from the mean free path necessary for the formation of a twin). In fact, in titanium alloys it is difficult to induce twinning once the grain size of the matrix drops below about 10 µm. The great benefit of the present method is that we can follow the deformation process step by step.
For example, in the grains having their initial c-axis close to ND, for which secondary twinning occurs inside the primary twins, we can clearly discriminate the initial matrix from the primary twins thanks to the in-situ orientation information. In this case the primarily twinned area is much larger than the remaining matrix and thus represents the "new matrix" for the subsequent secondary twinning. The effect of the neighboring grains slightly modifies the orientation in the vicinity of the grain boundary, and leads to a larger spread in orientation when the neighboring grain is strongly misoriented with respect to the considered grain. The detailed experimental study of the complex twinning conditions in hexagonal materials may be helpful for the implementation of mechanical twinning in models and codes of polycrystalline plasticity. A thorough study of the implementation of mechanical twinning in a Grain Interaction Model and its application to magnesium alloys [16] can be read with interest in this special volume of AEM.

Conclusion

Twinning occurs in grains that have particular orientations. Generally, the reorientation induced by twinning aligns the c-axis of the twinned part with the stable rolling texture orientations, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twinning orientates the c-axis of the primary twins far away from the stable orientations (this is generally the case for the {11-22} primary twins). The calculation of the deformation energy of each variant should be adopted as the main criterion in predicting the variant selection, and the Schmid factor used as an additional, secondary criterion. We also extended our discussion to the different twinning behaviors of the {11-22} and {10-12} primary twins: the {11-22} twin shows a multiple variant system (MVS) behaviour whereas the {10-12} twin shows a predominant variant system (PVS) behaviour, and these mainly result from the orientation relationship between the stress components and the parent grain.

Introduction

Twinning is a particularly important deformation mode in α titanium [1-6]. The twinning process strongly depends on the crystallographic orientation of the matrix. The simple Schmid factor allows the resolved shear stress on a twin system to be calculated from the applied macroscopic stress. Titanium exhibits two major twinning systems at room temperature, the {10-12} twin and the {11-22} twin. The {10-12} system is referred to as a tension twin (and {11-22} as a compression twin) because it activates only under a tension (respectively compression) load along the c-axis of the matrix [7]. Hence, Schmid's law loses effectiveness in selecting the twin type because of the directionality of twinning, and it is not clear whether Schmid's law is an applicable criterion for selecting twin variants. This work is concerned with the experimental verification of Schmid's law as a criterion for selecting twin variants, and extends to a study of twin growth as well. In order to trace the evolution of the individual orientations during the deformation and the growth of the twins, an interrupted "in-situ" EBSD investigation method [8] was proposed and applied. In this method, we concentrate on a sufficient number of grains and perform EBSD measurements on these grains at each step of the deformation. Therefore, we can acquire the detailed orientation information of these grains at each interrupted deformation step.

Experimental

The material used in this investigation is a cold-rolled commercially pure titanium T40 sheet with 1062 ppm (wt.) oxygen.
The sheet was annealed at 800 °C for 2 hours to allow the deformed microstructure to recrystallize fully. The final average grain size is 200 µm. A four-step compression test was performed at room temperature at a rate of 0.5 mm/min; the total thickness reduction after each step was 8%, 16%, 24% and 35%, respectively. The EBSD orientation measurements were performed on the same sample area after each deformation step with a JEOL 6500F field-emission-gun SEM equipped with an EBSD acquisition camera and the HKL Channel 5 software. In order to obtain statistically representative results, 100 twinned grains were randomly selected to analyze the twin variant selection rule and the twin growth behavior. To eliminate the effect of grain size, the selected grains have a similar size.

Results

Initial texture and the selection of twin variants

In the present study, after 8% deformation, both {10-12} tension twins and {11-22} compression twins are spotted in some grains, but the {11-22} compression twin is more frequent. This result is coherent with the initial texture. Theoretically, there are six twin variants in each twin family [1], but in an initially undeformed grain, only a limited number of variants are active in each twinning mode. Due to the crystallographic symmetry, the maximum number of variants for the {10-12} tension twin is two, whereas that for the {11-22} compression twin can reach four, depending on the proximity of the c-axis to the applied force. With increasing amount of deformation, twinning occurs in more grains.

The growth of twins

Figure 2 shows the EBSD maps of two typical grains at the initial state and at 8% and 16% deformation, one containing a {11-22} compression twin (Figure 2 (a)) and the other a {10-12} tension twin (Figure 2 (c)). In order to study the mechanisms that lead to the selection of specific twin variants, the misorientations between the variants of the respective {11-22} and {10-12} twins were calculated and listed in Table 1. The maximum Schmid factor of the active twin variants in the studied grains is 0.43 and the minimum is 0.32. From these results it can be deduced that Schmid's law is determinant for the activation of the twin variants. However, in some cases, variants with a non-highest SF (NSF < 1) are activated. Their activation may be attributed to the microscopic local stress deviating from the macroscopic load; in these cases, the Schmid factor of the active twin system with respect to the local stress may be the highest.

Table 1: The misorientation angle and axis between each pair of variants of respectively the {11-22} twin and the {10-12} twin (identical variants: 0°). The six {11-22} variants, e.g. (11-22), (-12-12), (-2112), (-1-122) and (2-1-12), are pairwise misoriented by 60.00° (about axes such as [-1-12-1] or [-12-1-1]), by 51.20°, or by 77.29° (about axes such as [8-1-70] or [8-7-10]).

Conclusions

In this work, we performed an analysis of twin variant selection and twin growth in polycrystalline titanium during small-reduction compression by interrupted "in-situ" EBSD measurement. The following conclusions can be drawn: 1. Schmid's law is an appropriate criterion for twin variant selection; twin variants with Schmid factors higher than 0.4 have a good chance of being activated. 2. In compression, the {11-22} twin shows a multiple variant system (MVS) behaviour whereas the {10-12} twin shows a predominant variant system (PVS) behaviour.
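The Schmid factor bookkeeping used throughout this analysis can be sketched as follows (an illustrative Python fragment, not the thesis code; all vectors are assumed to be already expressed in one common Cartesian frame, e.g. the sample frame after rotating the crystal-frame twin elements with the EBSD orientation matrix):

import numpy as np

def schmid_factor(plane_normal, shear_dir, load_dir):
    # Geometric Schmid factor m = cos(chi)*cos(lambda) of a twin (or
    # slip) system under a uniaxial load. For twinning the SIGN matters:
    # only a shear resolved in the proper twinning sense can activate
    # the variant (directionality of twinning).
    n = plane_normal / np.linalg.norm(plane_normal)
    d = shear_dir / np.linalg.norm(shear_dir)
    l = load_dir / np.linalg.norm(load_dir)
    return float(np.dot(l, n) * np.dot(l, d))

def normalized_schmid_factors(variants, load_dir):
    # NSF: SF of each variant divided by the highest SF among the
    # variants of the same twinning mode in the grain, so the best-
    # placed variant has NSF = 1 (assumes at least one positive SF).
    sf = np.array([schmid_factor(n, d, load_dir) for n, d in variants])
    return sf / sf.max()

Ranking the variants of a grain by this factor, and noting the rank of the variant actually observed, gives the type of variant selection statistics reported here.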
Introduction

Mechanical twinning is a particularly important deformation mode in α titanium [[START_REF] Vedoya | Plastic Anisotropy of Titanium, Zirconium and Zircaloy 4 Thin Sheets[END_REF]; Philippe et al. (1995); [START_REF] Fundenberger | Modelling and prediction of mechanical properties for materials with hexagonal symmetry (zinc, titanium and zirconium alloys)[END_REF]; Zaefferer (2003)]. It also offers a means to control the properties of titanium, such as ductility and fracture strength [[START_REF] Kocks | The importance of twinning for the ductility of HCP polycrystals[END_REF]; Partridge (1967); [START_REF] Mahajan | Deformation twinning in metals and alloys[END_REF]]. At room temperature, three major twinning systems are commonly observed in titanium: the {10-12} twin, the {11-21} twin and the {11-22} twin [Philippe et al. (1988)]. Previous studies of variant selection in primary twinning indicated that the variants with the higher resolved shear stress are more likely to be selected. However, in the case of secondary twinning, Barnett et al. (2008) have shown that the variant selection does not follow Schmid's law, and [START_REF] Capolungo | Variant selection during secondary twinning in Mg-3%Al[END_REF] have shown that in magnesium Schmid's law is not the only criterion controlling the variant selection: shared volumes and primary-to-secondary accommodation shears are prevalent. In this paper, calculations of the Schmid factors, of the crystallographic geometry and of the plastic energy associated with double twinning were performed to investigate the variant selection in double twinning and to reveal the relevant microscale features responsible for the formation of secondary twins. A study of the influence of twinning on the texture evolution was also carried out. Moreover, an interrupted "in-situ" SEM/EBSD orientation determination [Bao et al. (2010b)] was adopted. This approach allows one to obtain time-resolved information on the appearance of the twin variants, their growth, the interaction between them and the interaction with the grain boundaries or twin boundaries.

Experimental

The material used was a commercially pure titanium T40 sheet of 1.5 mm thickness, with the composition given in Table 1. First, the sheet was annealed at 800 °C for 2 hours to allow the deformed microstructure to recrystallize fully, with a final average grain size of about 200 µm. Samples with dimensions of 15 mm × 10 mm were prepared, mechanically polished and further electrolytically polished at a temperature of 5 °C in a solution of 200 ml perchloric acid in 800 ml methanol at 17 V (30 seconds) for the subsequent SEM/EBSD measurements. On the polished surface, a 1250×950 µm² area was selected and delimited by four micro-indentations. The microstructure and OIM of this selected area were measured by SEM/EBSD before and after each channel die compression step (0, 8, 16, 24 and 35% reduction). The surface with the selected area was firmly stuck to another polished sample (sandwich like) to avoid any surface sliding during the compression and to maintain a good surface quality. The channel die compression layout is illustrated in Fig. 1. The SEM/EBSD measurements were performed with a JEOL 6500F field-emission-gun SEM equipped with an EBSD acquisition camera, with a step size of 0.6 µm at 15 kV. The data were acquired and processed with the HKL Channel 5 software from HKL Technology. In order to obtain a statistical representation of the results, 100 twinned grains were randomly selected to analyze the twin variant selection.
To eliminate the effect of the grain size, only grains having similar sizes, ranging from 180 µm to 220 µm, were selected.

Results

Microstructure and texture

The OIM of the initial sample shown in Fig. 2 (a) reveals a completely recrystallized microstructure. From the pole figures (PF) in Fig. 2 (c), it can be seen that the initial texture is characterized by two strong maxima tilted ±35° from ND towards TD.

Variant selection of the double twinning

Since the T2 twin (the {11-21} tension twin) was not found in this study, only the combination of T1 and C was investigated in the experimental study of variant selection in double twinning. To perform a statistical analysis, 100 grains with double twins were systematically studied. C-T1 and T1-C double twins were analyzed separately, although they induce the same misorientations through symmetrically equivalent rotations, namely Group A (41.3° about <1-543>), Group B (48.4° about <5-503>) and Group C (87.9° about <4-730>). The frequency of occurrence of each variant group in both C-T1 and T1-C double twins is presented in Fig. 4. It can be seen that Group B (78.9% in frequency) predominated over the other two in the C-T1 double twins, Group A amounted to 20%, and Group C was nearly inactive (1.8%). In the case of the T1-C double twins, the dominant variant group was Group C (66.7% in frequency), whereas Group A amounted to 33.3%.

Schmid factor analysis

Since the Schmid factor (SF) plays an important role in twin variant selection [START_REF] Lebensohn | A study of the stress state associated with twin nucleation and propagation in anisotropic materials[END_REF], this study first examined the effect of the Schmid factor on the variant selection in the primary and secondary twinning, respectively. For each selected grain, the geometric SF was calculated to examine the stress resolved on the twin plane and along the twin shear direction, given the applied macroscopic compressive stress.

Primary twinning

The ranking of the SF corresponding to the active variants of the primary twins is shown in Fig. 5 (a) in the form of a histogram. The normalized Schmid factor (NSF) [START_REF] Capolungo | Nucleation and growth of twins in Zr: A statistical study[END_REF] was adopted in the present work to investigate whether Schmid's law is conclusive for the variant selection: in a given grain, the NSF of an active variant equals its SF divided by the highest SF among all the variants of the same twinning mode in that grain.

Plastic energy analysis

A twin variant will be activated if the deformation energy available to create the twin is sufficient and if the internal energy of the material decreases through this operation. We consider here that the material is an ideal (i.e. non-strain-hardening) rigid-plastic body in order to calculate the deformation energy. Because the elastic energy is restored when the twin is created, we restrict ourselves to the plastic deformation energy, calculated as:

W = τ′ᵢⱼ εᵢⱼ    (1)

where τ′ᵢⱼ is the critical resolved shear stress required to activate the twinning system (the shear stress τ expressed in the sample frame) and εᵢⱼ is the corresponding twinning deformation. In the case of channel die compression, the deformation is equivalent to in-plane compression and the compressive force is applied along the sample normal direction (the third axis, 33). In a grain, the stress applied to a twinning system is composed of the macroscopic applied stress and an additional local stress resulting from the interaction of the considered grain with the neighboring grains.
Since we restrict ourselves to relatively small deformation degrees, we neglect the local stress resulting from the interaction with the neighboring grains. The stress applied to a twinning system is thus restricted to the macroscopic compressive stress σ₃₃, which corresponds to the Sachs (or static model) hypothesis. The twinning system becomes active when the resolved shear stress reaches the corresponding critical value τ′₃₃. When the twinning system is active, the corresponding deformation energy expressed in the macroscopic coordinate system is given by Eq. (1). We introduce the grain size effect by expressing the critical resolved shear stress according to a Hall-Petch (HP) type equation:

τ′ = τ₀ + k/√L    (2)

with τ₀ and k constants, τ₀ representing the stress when the length of the grain is infinite, and L the free path of the twin before encountering an obstacle (grain boundary, precipitate or other twins). The deformation energy can then be expressed as:

W = τ′₃₃ ε₃₃ = (τ₀ + k/√L) ε₃₃    (3)

In Eq. (3), τ₀ and k are unknowns. Taking into account that twinning is activated only when the size of a grain exceeds a certain value, below which only crystal glide can be activated, we can deduce that the second term of the equation is dominant. Rearranging Eq. (3) we obtain:

W ≈ k ε₃₃/√L    (4)

In the right hand term of Eq. (4), ε₃₃ and L are accessible to experiment; in the following we will mainly focus on the term ε₃₃/√L. The displacement gradient tensor eᵢⱼ = ∂uᵢ/∂xⱼ, where u, v and w are the displacement components and x, y and z the coordinates in the sample system, was first expressed in an orthonormal reference frame defined by the related twinning elements: the unit vector in the twinning direction, the unit vector normal to the shear plane and the unit vector normal to the twinning plane. In this frame the displacement gradient tensor has the particularly simple form:

e = | 0 0 s |
    | 0 0 0 |
    | 0 0 0 |    (6)

With s = (3 − γ²)/(√3 γ) for the {10-12} twin and s = 2(γ² − 2)/(3γ) for the {11-22} twin, where γ = c/a is the axial ratio of titanium, the only non-zero component of the displacement gradient tensor for the two types of twins becomes

e₁₃ = 0.218 for the {11-22} twin, and e₁₃ = 0.175 for the {10-12} twin.    (7)

Through a coordinate transformation, this displacement gradient tensor can be expressed in the crystal coordinate system (here we choose the orthonormal reference system attached to the hexagonal crystal basis; the setting follows the Channel 5 convention, i.e. e₂//a₂ and e₃//c). With the Euler angles measured by SEM/EBSD, which represent a set of rotations from the sample coordinate system to the orthonormal crystal basis, this tensor can be further transformed into the macroscopic sample coordinate system. If G is the coordinate transformation matrix from the macroscopic sample coordinate system to the orthonormal twin reference system, the displacement gradient tensor with respect to the sample coordinate system can be expressed as:

e(sample) = Gᵀ e(twin) G    (8)

Thus the deformation tensor in the macroscopic sample coordinate system can be obtained as the symmetrized displacement gradient:

ε = (e + eᵀ)/2    (9)

With Eq. (9), the energy term in Eq. (4) can be calculated.
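The chain of Eqs. (6)-(9) lends itself to a compact numerical sketch (our own illustration, not the thesis code; G has to be built from the measured Euler angles, and the twin frame is taken with e1 along the twinning direction and e3 along the twin plane normal):

import numpy as np

GAMMA = 1.587  # axial ratio c/a of titanium
TWIN_SHEAR = {
    "{11-22}": 2 * (GAMMA**2 - 2) / (3 * GAMMA),       # ~0.218, Eq. (7)
    "{10-12}": (3 - GAMMA**2) / (np.sqrt(3) * GAMMA),  # ~0.175, Eq. (7)
}

def energy_term(G, twin_type, L):
    # G: 3x3 transformation matrix from the macroscopic sample frame
    # to the orthonormal twin frame; L: free path of the variant before
    # it meets an obstacle (grain boundary, other twin), cf. Eq. (2).
    e_twin = np.zeros((3, 3))
    e_twin[0, 2] = TWIN_SHEAR[twin_type]   # simple shear, Eq. (6)
    e_sample = G.T @ e_twin @ G            # Eq. (8)
    eps = 0.5 * (e_sample + e_sample.T)    # symmetrized strain, Eq. (9)
    return eps[2, 2] / np.sqrt(L)          # quantity ranked in Eq. (4)

Evaluating this term for the six variants of a grain and ranking them in decreasing order of deformation energy reproduces the energy-based variant selection statistics discussed next.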
The energy term of Eq. (4) was calculated for all the examined grains. The ranks referring to the decreasing order of the energy term corresponding to each twin variant of the primary and secondary twins were calculated separately. Further, for each energy term rank, the frequency of being selected was calculated and plotted in Fig. 7. The results indicate that, using the energy term as a variant selection criterion, the prediction is correct in 85% of the cases for the primary twins, and in 95% of the cases for the secondary twins.

Fig. 7: Frequency of energy term ranks ("1" is the highest and "6" is the lowest) corresponding to the active variants of secondary twins.

The variant selection strongly depends on the energy term because the free path length of each twin variant is included in this calculation. According to the previous study, the selection of the twin variant normally depends on the grain shape. In equiaxed grains, the free path for each variant is almost the same, so several variants can form together in one grain. However, in most instances of elongated grains, only one variant can be activated because of its longer free path. Moreover, in the case of elongated grains, although the activation of the twin variant changes the dimensions of the grain, it does not change the free path length of this variant. Thus this variant can form repeatedly as long as it does not create conditions more favorable for another variant. Under such conditions, the appearing twins can continue to grow until the parent grain is completely twinned. In most cases, the primary twins that accommodate the secondary twins show the appearance of lamellae, i.e. secondary twins always form in parent grains (primary twins) of uneven shape. The conditions for a secondary twin variant to become active are thus largely dependent on the free path. This is also the reason why the energy term ε₃₃/√L is highly accurate in predicting the variant selection of secondary twinning.

Geometric features of double twins

The geometric features of the combination of the C and T1 twins are illustrated in Fig. 8, where the twin elements of the primary and secondary twins, twin plane (TP), shear direction (SD) and shear plane (SP), are plotted as poles in a pole figure, in a reference frame bound to the primary twinning system. It has been reported that in magnesium the growth potential is strongly related to the angles of the shear plane normal (SPN) and SD between the primary and secondary twins, because primary twins grow very efficiently along the SPN and SD, and the growth of the secondary twins is primarily limited by the lengths of the primary twin along these directions. Therefore, the variants of double twins with small angles between the SPN and SD of the primary and secondary twins should be easily activated from the point of view of the growth potential. For titanium, the corresponding angles are summarized in Table 3. The C-T1 double twin agrees well with this theory (Fig. 8 (a)): Group B has the lowest angles of TP and SD between the primary and secondary twins (see Table 3), i.e. the growth of the secondary twin variants belonging to Group B suffers the least limitation from their primary twins. In Fig. 4, it is seen that Group B was dominant, with a proportion of up to 78.9%. However, this theory seems less convincing in the case of the T1-C double twin: from Table 3 it can be deduced that Group B should be predominant according to the theory, whereas our experiment exhibited the reverse result. In Fig. 4, Group C took a high proportion of 66.7% and Group B was not spotted. It has also been proposed that the shear carried by a twin can be transferred across a grain boundary and stimulate a corresponding shear in the adjacent grain [START_REF] Yoo | Interaction of slip dislocations with twins in HCP metals[END_REF].
On the OIM micrographs, it appears that this twin shear sometimes crosses the grain boundaries and continues in the adjacent grain. In our observations, however, the most common case is that twins do not cross the boundaries. Therefore, focusing on those twins straddling boundaries, a statistical analysis of the angle between the twin planes on each side of the boundary was carried out. From the results displayed in Fig. 9, it can be deduced that those twins whose twin planes do not deviate by more than 20° have a high tendency to cross the boundary; on the contrary, those twins with a larger deviation generally terminate at the boundary.

Discussion

The occurrence of twinning is governed by various factors, such as the orientation, the size and the shape of the parent grain, the boundaries, and even the slip activity [Yoo (1981)]. Due to the symmetries of the crystal lattice, several potential twins are in competition, so that it is necessary to clarify the mechanisms of the variant selection process. In this study, as verified by the experimental results, the deformation energy gave an excellent accuracy of up to 85% as the variant selection criterion, and the Schmid factor (SF) a still acceptable accuracy of 50%. The reason for the high accuracy of the deformation energy criterion, as compared to the SF criterion, is that the calculation involves not only the deformation energy associated with the creation of a twin variant, but also the effect of the size and shape of the parent grain. This is quite effective in the case of a non-homogeneous microstructure or non-equiaxed parent grains. When a twin forms in a grain, it cuts the parent grain and thus changes the shape and dimensions of this grain, generally subdividing the initial parent grain into three domains. The twin can be considered as a new grain with its own dimensions and crystallographic orientation. When a twin forms, the size of the parent grain is modified and thus the critical stress on each possible twin variant changes according to the HP law. As a result, variants which did not have a sufficient level of resolved stress according to their SF can nevertheless be activated in a newly created domain, even though the orientation of the initial parent grain has not changed. This explains why several variants can appear in equiaxed grains. However, in elongated grains, although the activation of a twin variant changes the dimensions of the grain, it does not change the length of the free path of this variant. Thus this variant can form repeatedly, at least as long as it does not create conditions more favorable for another variant. Under such conditions, the activated twins can continue to grow until the whole grain is twinned. A twin can thus be regarded as a new grain, which is a slightly different point of view when considering secondary twinning. Generally, this new grain (i.e. the primary twin) presents an elongated shape (at least at the early stage of its formation), so there will generally be only one activated variant in this primary twin. In fact, this concept strongly depends on the size of the twin since, as previously discussed, when a grain is fully twinned there can be several secondary twin variants thereafter. With respect to a homogeneous microstructure, the geometric SF, i.e. the factor transforming the applied stress into the resolved shear stress on the twin plane and along the twin direction, should in principle be conclusive in the variant selection.
But the local stress tensor effectively applied to the grain does not coincide with that derived from the macroscopic applied stress tensor. Thus, the geometric SF may not be pertinent in some cases, notably when, in one parent grain, the first and second highest SFs of the variants are very close (NSF very close to 1). As seen in Fig. 5 and Fig. 6, the high frequency of the rank 2 SF probably results from this deviation between the local stress tensor and the macroscopic applied stress tensor. Moreover, the high proportion of NSF values ranging from 0.8 to 1.0 (not including 1.0) in Fig. 5 (b) and Fig. 6 (b) also supports this interpretation. It can be inferred that the true frequency of the rank 2 SF should be about 15 to 20% lower than the calculated value in Fig. 5 and Fig. 6. If it were possible to calculate the true local stress state, the accuracy of the SF in variant selection could increase dramatically. It can therefore be concluded that the deformation energy can be adopted as an effective criterion in variant selection, and that the SF should not be completely abandoned but used as a complementary criterion in the case of a homogeneous microstructure.

Conclusion

In this study, an investigation of the effect of various factors on twin variant selection in double twins was performed. Twinning affects the texture evolution by the reorientation of the crystallographic lattice from the former stable orientation of the parent grain into the new stable orientation of the twin. The stable orientations depend on the deformation mode, for example rolling or channel die compression.

Prospects

The following are some suggestions for future work based on the findings, conclusions and problems identified in the course of the present work. In order to study the deformation mechanisms in titanium, further efforts are required on twinning, on gliding and on the interaction between them. In the present study we focused on twinning and extended the discussion only briefly to gliding. The following prospects can therefore be suggested: 1. The interrupted "in-situ" EBSD method is an effective way to study deformation twinning in detail. With this approach, various deformation modes could be studied, such as shear, tensile testing and ECAP. So far, the deformation in this approach has been limited to 35% reduction.

Twinning appears in grains that have particular orientations. As a general rule, the reorientation induced by twinning aligns the c-axis of the twinned part with the stable orientations of the rolling texture, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twin sends the c-axis far away from the stable orientations. For the twinned grains, the lattice rotation of the matrix is similar to that of grains having an identical crystallographic orientation but without twins. Two types of twin systems were activated during deformation at room temperature: {10-12} tension twins and {11-22} compression twins. In primary twinning, the results show that the twin variants with the highest Schmid factors are preferentially activated. In this study, an interrupted "in-situ" SEM/EBSD investigation based on a split sample of commercial titanium T40 was proposed and performed in rolling and channel die compression.
This approach makes it possible to obtain time-resolved information on the appearance of the twin variants, their growth, their mutual interaction and their interaction with grain or twin boundaries. From the orientation data acquired by the EBSD technique, we calculated the Schmid factor, the crystallographic geometry and the plastic deformation energy associated with each variant of the primary, secondary and double twins, in order to investigate the lattice rotation, the activation and growth of twins, and the variant selection criterion.

In these observations, two types of twin systems were activated: {10-12} tension and {11-22} compression twins. Secondary twins were also activated, especially the twin variants with the highest Schmid factors (e.g. higher than 0.4). The growth of the two types of twin is quite different: the {11-22} twins show a Multiple Variant System (MVS), whereas the {10-12} twins show a Predominant Variant System (PVS).

Twinning occurs in grains that have particular orientations. Generally, the reorientation induced by twinning aligns the c-axis of the twinned part with the stable rolling texture orientations, so that no further secondary twinning can be induced. Secondary twinning occurs only when the primary twinning rotates the c-axis of the primary twin far away from the stable orientations. For twinned grains, the lattice rotation of the matrix is similar to that of grains having a similar crystallographic orientation but without any twin.

Two sets of double twins were observed in this study, classified as C-T1 and T1-C double twins respectively. All the variants of the C-T1 and T1-C double twins were classified into three groups, A, B and C, according to the crystallographic symmetry. The misorientations of these three groups with respect to the matrix are 41.3°, 48.4° and 87.9°, respectively.

In the twinning operation, the mirror plane is the twinning plane, and the induced shear is given by the geometry of the twinning system (twinning plane and crystallographic twinning direction) in the crystal lattice. In the final step of each interrupted-test cycle (step 4), the sandwich assembly is placed in acetone to dissolve the glue; the sample then returns to step 0, where an EBSD measurement is performed, and the cycle can start again.
Chapter 1: Basic understanding and review of the literature on the plastic deformation of hexagonal materials

This chapter introduces the basic concepts and definitions essential to the understanding of the present work. It also proposes a review of the literature on the plastic deformation mechanisms in materials with a hexagonal crystal structure, especially titanium. First, the hexagonal close-packed structure is introduced and the deformation modes, twinning and slip, are described in detail. A review of some hexagonal metals frequently studied in materials science research is then provided.

Figure 1-1: Atoms in the hexagonal close-packed structure.
Figure 1-2: Four-axis system in Miller-Bravais indices.
Figure 1-3: Geometric description of the twinning shear.
Figure 1-5: Schematic description of twinning in titanium.
Figure 1-7: The families of slip systems operating in hexagonal structures.

Earlier work described the evolution of texture in the titanium alloy Ti-10V-4.5Fe-1.5Al during rolling and annealing; the rolling and recrystallisation textures obtained were compared with those of other β titanium alloys and of bcc metals and alloys such as tantalum and low-carbon steel. More recently, Chun et al. (2005) studied the effect of deformation twinning on the microstructure and the evolution of texture during cold rolling. They found that for low to intermediate deformation, up to 40% thickness reduction, the external strain was accommodated by slip and deformation twinning. In this stage, both compressive {11-22} and tensile {10-12} twins, as well as secondary and tertiary twins, were activated in the grains of favorable orientation, resulting in a heterogeneous microstructure in which grains were refined in local areas. For heavy deformation, between 60 and 90%, slip overrode twinning and shear bands developed. The crystal texture of the deformed specimens was weakened by twinning but strengthened by slip, resulting in a split-basal texture in heavily deformed specimens. Also in 2005, Bozzolo et al. examined the microstructure and texture of a titanium alloy rolled to 80% in order to study grain enlargement and the effects of recrystallization. They found that recrystallization of 80% cold-rolled sheets and subsequent grain growth lead to equiaxed microstructures. The texture obtained at the end of primary recrystallization is very close to that of the cold-rolled state, with the maximum of the orientation distribution function at {0°, 35°, 0°}. The orientations developing during grain growth correspond to a broad peak centered around {0°, 35°, 30°}, which is a minor component in the initial texture. The disappearing orientations are widely scattered throughout orientation space and present two major disadvantages in the growth competition. The grain boundaries remaining after extended grain growth are characterized by an increasing proportion of misorientations below 30° and random rotation axes. In 2001, work on the behavior of titanium before and after equal channel angular pressing (ECAP) was carried out.
Stolyarov et al. studied the microstructure and properties of ultrafine-grained pure titanium processed by ECAP, which improved the mechanical properties: the grain size obtained by ECAP alone is about 260 nm, and the strength of pure Ti was improved from 380 to around 1000 MPa by a two-step process combining ECAP and cold deformation. The microstructures, microhardness, tensile properties and thermal stability of these Ti billets were also reported. In 2002, Shin et al. studied the mechanisms accommodating the shear induced in titanium during ECAP. They observed and identified the active twinning systems: TEM analysis of the twins revealed that their twin plane is {10-11} and that the twins are accompanied by dislocations on non-basal planes. These results suggest that the severe plastic deformation imposed on titanium by ECA pressing is accommodated mainly by {10-11} twinning rather than by the dislocation slip commonly observed when pressing other metals such as aluminum and steel. Twinning modes that might accommodate the severe strain were proposed on the basis of the dislocation slip systems observed during pressing. In 2003, Kim et al. also observed deformation twinning in pure titanium during ECAP by means of EBSD measurements. In the same year, Shin et al. (2003) analyzed the microstructure of titanium samples after ECAP, highlighting the twinning systems and the slip-mediated deformation. Transmission electron microscopy (TEM) revealed that the strain imposed by pressing was accommodated mainly by {10-11} deformation twinning. During the second pass, the deformation mechanism changed to dislocation slip on a system that depended on the specific route. For route C (in which the shear is fully reversed during successive passes by a 180° rotation of the sample between passes), prismatic <a> and pyramidal <c+a> slip occurred within alternating twin bands. For route B (in which the sample is rotated by 90° after the first pass), prismatic <a> slip was the main deformation mechanism. For route A (in which the billet is not rotated between passes), deformation was controlled by basal <a> slip and micro-twinning in alternating twin bands. The variation in deformation behavior was interpreted in terms of the texture formed during the first pass and the Schmid factors for slip during subsequent deformation. More recently, in 2006, Perlovich et al. studied titanium rods after ECAP at 200 °C and 400 °C and found heterogeneity of deformation, microstructure and texture through the thickness of the rod. More generally, in 2001, Bache and Evans studied the influence of texture on the mechanical properties of a titanium alloy. They concluded that highly textured, unidirectionally rolled Ti-6/4 plate shows significantly different monotonic strength characteristics depending on the direction of the principal stress relative to the predominant basal plane texture.
Loading in the transverse orientation, perpendicular to the basal planes preferentially lying coincident with the longitudinal-short transverse plane, promotes a relatively high yield stress and ultimate tensile strength (UTS). Under strain-controlled fatigue loading, the longitudinal orientation was found to offer the optimum cyclic response. The differences in mechanical behavior were related to the ability to induce slip in the various plate orientations. In addition, stress relaxation is encouraged under cyclic loading parallel to the longitudinal direction because of the preferential arrangement of the {10-10} prismatic planes. In 2003, Zaefferer studied the activity of the deformation mechanisms in different titanium alloys and its dependence on their composition. The main results are as follows. In TiAl6V4, <a> basal slip has a lower critical resolved shear stress, τc, than prismatic slip, and <c+a> pyramidal glide shows a very high τc, up to two times larger than that of prismatic slip. Nevertheless, <c+a> glide systems were only rarely activated and twinning systems were never activated, so deformation with c-components may be accommodated by β-phase deformation or grain boundary sliding; the observed c-type texture is due to the strong basal glide. In T40, τc for <c+a> glide is up to 13 times higher than that for prismatic glide; however, <c+a> glide and twinning were strongly activated, leading to the observed t-type texture. In T60, the high oxygen content completely suppressed twinning and strongly reduced <c+a> glide; the less developed t-type texture is due to the combination of <c+a> and basal glide. In 2007, Wu et al. simulated the evolution of texture and the behavior of titanium during large plastic deformation. A new crystal plasticity model was formulated to simulate the anisotropic stress-strain response and texture evolution of α-titanium during large plastic strains at room temperature. The major new features of the model are: (i) the incorporation of slip inside twins as a significant contributor to accommodating the overall imposed plastic deformation; and (ii) the extension of the slip and twin hardening laws to treat separately the hardening behavior of the different slip families (prismatic <a>, basal <a> and pyramidal <c+a>), using hardening parameters that are all coupled to the extent of deformation twinning in the sample. Reasonable agreement between model predictions and experimental measurements was observed for both the anisotropic stress-strain responses and the evolved deformation textures along three different monotonic deformation paths: (1) simple compression along ND; (2) simple compression along TD; and (3) simple shear in the RD-TD plane. Other work on magnesium showed a behavior influenced by temperature and grain size, and demonstrated that the notion of effective diffusivity explains the experimental results. In 2005, Hartig et al. measured the texture of rolled AZ31 and performed simulations with a viscoplastic self-consistent crystal plasticity model. They considered that the anisotropy of the plastic flow stresses can be explained by the off-basal character of the texture and by the activation of prismatic slip in addition to basal and pyramidal slip and the (01-1-2)<011-1> twinning system, and they discussed the results by comparing the simulations with the experiments.
Gehrmann et al. (2005) studied the effect of texture on the plastic deformation of magnesium AZ31 in plane strain compression at 100 and 200 °C. The measured flow curves and the microstructure investigation reveal that the plastic deformation of magnesium at these temperatures is generally inhomogeneous and dominated by the appearance of shear bands. However, if the initial texture is chosen such that the formation of a basal texture is slowed down or even suppressed, substantial ductility can be achieved at temperatures as low as 100 °C. The texture development due to crystallographic slip can be reasonably modelled by a relaxed-constraints Taylor simulation and yields information on the activated slip systems. Walde and Riedel (2005, 2007) simulated the texture evolution during rolling of AZ31 with the finite element code ABAQUS/Explicit, in which they implemented a viscoplastic self-consistent model. This model is able to describe the softening behavior of the magnesium alloy AZ31 during hot compression and allows the development of the typical basal texture during hot rolling of this alloy to be simulated; the simulations agree well with the experimental texture measurements.

Chapter 2 is devoted to the materials, sample preparation, equipment and experimental techniques used in the present work, especially the SEM/EBSD technique and the associated analysis. A detailed description of the interrupted "in situ" experimental arrangement used in the rolling and channel die compression tests is then provided.

Figure 2-1: Components of an EBSD system.
Figure 2-2: EBSD geometry.

For texture measurements on rolled sheet materials, the x axis is taken parallel to the rolling direction (RD) of the sample, the y axis parallel to the transverse direction (TD) and the z axis parallel to the normal direction (ND) (Figure 2-3).

Figure 2-3: Relationship between the crystal and sample coordinate systems; α1, β1 and γ1 are the angles between the crystal direction [100] and RD, TD and ND respectively.
Figure 2-4: Two interpenetrating lattices can be realigned by a single rotation about a common axis [uvw] by an angle θ; in the figure, the axis is the common [111] direction and the rotation angle 60°.
Figure 2-5: Definition of the Euler angles.

The in-situ arrangement (Figure 2-9) includes a mini tensile testing machine installed in the SEM chamber; a load and displacement recorder for controlling the tensile speed, loading and unloading, and for recording the force and displacement data; an automated EBSD system to collect the Kikuchi patterns; and a computer to save the collected Kikuchi patterns and index them with dedicated software.

Figure 2-6: An in-situ EBSD tension test and the corresponding orientation flow fields.
Figure 2-7: Preparation of the samples and arrangement of the rolling and channel die compression tests.

Related work on the texture and deformation of rolled sheets includes McDarmaid, Bowen and Partridge (1984); Vedoya et al. (Plastic Anisotropy of Titanium, Zirconium and Zircaloy 4 Thin Sheets); Philippe, Serghat, Van Houtte and Esling (1995); Panchanadeeswaran, Doherty and Becker (1996); Kalidindi, Bhattacharyya and Doherty (2004); Prasannavenkatesan, Li, Field and Weiland (2005); Merriman, Field and Trivedi (2008); Skrotzki, Toth, Kloden, Brokmeier and Arruffat-Massion (2008); and Quey, Piot and Driver (2010).
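To make the orientation conventions of Figures 2-4 and 2-5 concrete, the following minimal Python sketch converts Bunge Euler angles into an orientation matrix and computes the misorientation angle between two orientations, minimizing over the twelve proper rotations of the hexagonal point group 622. It is an illustration only; the example Euler angles are arbitrary, and the passive (sample-to-crystal) convention is assumed:

    import numpy as np

    def rz(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

    def rx(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

    def bunge_to_matrix(phi1, Phi, phi2):
        # Orientation matrix g (sample -> crystal), Bunge convention, radians.
        return rz(phi2) @ rx(Phi) @ rz(phi1)

    def hex_ops():
        # Twelve proper rotations of point group 622 (Cartesian crystal frame).
        c2x = np.diag([1.0, -1.0, -1.0])  # two-fold axis along x
        ops = []
        for k in range(6):
            r = rz(k * np.pi / 3.0)
            ops += [r, r @ c2x]
        return ops

    def disorientation_deg(g1, g2):
        # Minimum misorientation angle between two hexagonal crystals.
        dg = g2 @ g1.T
        angles = []
        for s in hex_ops():
            tr = np.clip((np.trace(s @ dg) - 1.0) / 2.0, -1.0, 1.0)
            angles.append(np.degrees(np.arccos(tr)))
        return min(angles)

    # Arbitrary example orientations (Euler angles given in degrees).
    gA = bunge_to_matrix(*np.radians([10.0, 35.0, 0.0]))
    gB = bunge_to_matrix(*np.radians([40.0, 80.0, 25.0]))
    print(round(disorientation_deg(gA, gB), 2))

Because the two crystals share the same symmetry group, minimizing over one-sided products suffices for the angle: the trace of S2 Δg S1ᵀ equals the trace of (S1ᵀS2) Δg, and S1ᵀS2 again belongs to the group.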
Many efforts have been devoted to studying deformation twinning and glide in single crystals and polycrystalline metals [Akhtar (1975); Akhtar, Teghtsoonian and Cryst (1975); Kalidindi (1998); Field, True, Lillo and Flinn (2004); Jiang, Jonas, Mishra, Luo, Sachdev and Godet (2007); Jiang and Jonas (2008)]. The mechanical response of titanium, like that of other HCP metals, depends strongly on the combination of active deformation modes: glide and twinning. The specific deformation mechanisms depend on the c/a ratio, the available deformation modes, the critical resolved shear stress (CRSS) for glide, the twin activation stress, and the imposed deformation relative to the crystallographic texture.

The polished area was measured by SEM/EBSD before and after each deformation step. The rolling and channel die compression layout is illustrated in Fig. 1. Both sheets of the sandwich were firmly glued together to avoid any surface sliding during rolling and to maintain a good surface quality. EBSD measurements were performed with a JEOL-6500F SEM with a step size of 0.4 µm. The evolution of the grain orientations during deformation is presented below by pole figures and orientation flow fields. Fig. 2(a) reveals a completely recrystallized initial microstructure with an average grain size of 10 µm. The {0002} pole figure (PF) in Fig. 2(b) shows two strong maxima tilted ±30° from ND towards TD, whereas the {01-10} PF displays the maximum pole densities parallel to RD. In this work, the coordinate systems and the Euler angles {φ1, Φ, φ2} are defined according to Bunge's convention (see Bunge, Esling and Muller, 1980).

Fig. 1: Preparation of the samples and arrangement of the rolling and channel die compression tests.

The misorientation between a {11-22} compression twin and its matrix corresponds to a 65° rotation about their common <10-10>-type axis; these twins were the most frequently observed (Fig. 3). The respective amounts of the two types of twinning are further presented by the misorientation-angle distributions in Fig. 4: after 10% rolling (Fig. 4(b)), the 65° misorientation occurs most frequently, indicating that {11-22} twinning dominates at this stage.

Fig. 2: EBSD orientation mapping (a) and {0002}, {10-10} pole figures (b).
Fig. 5: Rotation flow field in grains showing no twins (a); lattice rotation field in the untwinned part (matrix) of grains showing twins (b).
Fig. 6: Initial grain with its c-axis close to ND (dark red) (a); at 10% deformation, {11-22} compression twins outlined by red lines (b); at 20% deformation, part of the {11-22} twins undergoes secondary {10-12} tension twinning, outlined by blue twin boundaries delimiting the tension twin (yellow) (c).
Fig. 7: Misorientation-angle distribution before compression (a) and after 20% compression (b).
Fig. 9: {0002} pole figure schematically delimiting the orientation domains of the c-axes of the grains in which the indicated twinning modes are expected to be activated (theoretical expectation).
Fig. 10: Schematic reorientation of the c-axes by {10-12} twinning (blue) and {11-22} twinning (red arrow).
Fig. 11: Percentage of twinned grains in the three different grain-size groups.

Secondary twins, mainly of the tensile type, were also activated.
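Since the activation analyses in these studies rest on resolving the load onto each twin system, a minimal sketch of that computation may be useful. The fragment below converts Miller-Bravais plane and direction indices into Cartesian vectors for a given c/a ratio and evaluates the geometric Schmid factor; the loading axis and the assumed twinning sense of the shear direction are illustrative choices, not prescriptions from this work:

    import numpy as np

    C_OVER_A = 1.587  # c/a ratio of alpha titanium

    def plane_normal(h, k, l, ca=C_OVER_A):
        # Cartesian normal of an (h k i l) plane; i = -(h + k) is redundant.
        return np.array([h, (h + 2.0 * k) / np.sqrt(3.0), l / ca])

    def direction(u, v, t, w, ca=C_OVER_A):
        # Cartesian vector of a [u v t w] direction (u + v + t = 0).
        U, V = u - t, v - t
        return np.array([U - V / 2.0, V * np.sqrt(3.0) / 2.0, w * ca])

    def schmid_factor(load, n, d):
        # Signed Schmid factor; twinning is unidirectional, so only a
        # positive value drives the twinning shear.
        l = load / np.linalg.norm(load)
        return np.dot(l, n / np.linalg.norm(n)) * np.dot(l, d / np.linalg.norm(d))

    # Example: a {10-12} tension twin system under a load along the c-axis.
    n = plane_normal(1, 0, 2)          # (10-12)
    d = direction(-1, 0, 1, 1)         # [-1011], assumed twinning sense
    load = np.array([0.0, 0.0, 1.0])   # uniaxial load parallel to [0001]
    print(round(schmid_factor(load, n, d), 3))  # about 0.498

The value near 0.5 for tension along the c-axis is consistent with the tension-twin character of the {10-12} mode discussed above.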
The present work was undertaken to provide information, lacking in the literature, on the lattice rotation and the role of twinning during cold rolling of commercially pure titanium (T40). The proposed method consists in determining the individual rotations of the grains induced by low to intermediate deformation (up to 30% thickness reduction) and in following the rotation field by electron backscatter diffraction (EBSD) measurements in a high-resolution FEG-SEM at different deformation steps (10 and 20%). The formation, evolution and role of mechanical twins were studied in particular. Additional work on the identification of slip systems by two different approaches (deformation experiments on single crystals as well as numerical ab initio and molecular dynamics calculations) is being developed in parallel.

To follow the rotation of the individual grains during deformation, a 500×300 µm² area was carefully polished and marked out with four micro-indentations. The orientations of all the grains in this polished area (about 800 grains) were measured by SEM/EBSD before and after each deformation step. The rolling layout is illustrated in Fig. 1. Both sheets of the sandwich were firmly glued together to avoid any surface sliding during rolling and to maintain a good surface quality. EBSD measurements were performed with a JEOL-6500F SEM with a step size of about 0.77 µm. The evolution of the grain orientations during deformation is presented below by pole figures and lattice rotation fields.

Fig. 1: Successive cold rolling performed on sandwich samples; the inner surface was initially polished and the grain orientations were measured by EBSD before and after each rolling step.
Fig. 4: Misorientation-angle distributions of samples deformed to (a) 0%, (b) 10% and (c) 20% reduction.
Fig. 5: (a) Lattice rotation field in grains showing no twins. (b) Lattice rotation field in the untwinned part (matrix) of grains showing twins.
Fig. 6: (a) {0002} pole figure of grains having {10-12} twins and (b) {0002} pole figure of grains having {11-22} twins (10% deformation).

From the viewpoint of the development of the crystallographic texture, the results are presented in the classical pole-figure representation. Orientation analysis indicates that at 10% reduction, the c-axes of the grains containing {10-12} twins (tension along the c-axis) are oriented close to the rolling direction, as shown in Fig. 6(a), whereas those of the grains containing {11-22} twins (compression along the c-axis) are oriented close to the sample normal direction, as shown in Fig. 6(b). These results are consistent with theory (Fig. 7). The twinned part of each {10-12}-twinned grain is systematically reoriented close to a stable orientation belonging to the rolling texture component; by contrast, {11-22} twinning reorients the c-axis of the twinned part close to the rolling direction, i.e. to an unstable orientation. As shown in Fig. 8, {10-12} twinning produces an 84.78° reorientation of the c-axis (blue arrow), while {11-22} twinning produces a 64.62° reorientation (red arrow). The green area delimits the stable orientations belonging to the rolling texture (characterized by c-axes tilted 30° from ND towards TD). The figure shows clearly that {10-12} twinning transfers the c-axis into the stable region.
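These two reorientation angles follow directly from the twin geometry: a twin reorients the c-axis by a 180° rotation about the twin-plane normal, and the resulting angle depends only on the c/a ratio. The short sketch below is an illustrative check, not part of the experimental procedure:

    import numpy as np

    CA = 1.587  # c/a of alpha titanium; slightly different values shift the angles

    def plane_normal(h, k, l, ca=CA):
        # Unit Cartesian normal of an (h k i l) plane of a hexagonal crystal.
        n = np.array([h, (h + 2.0 * k) / np.sqrt(3.0), l / ca])
        return n / np.linalg.norm(n)

    def c_axis_reorientation_deg(n):
        # Angle between the c-axes of matrix and twin (180 deg rotation about n).
        c = np.array([0.0, 0.0, 1.0])
        c_twin = 2.0 * np.dot(c, n) * n - c
        return np.degrees(np.arccos(np.clip(abs(np.dot(c, c_twin)), -1.0, 1.0)))

    print(round(c_axis_reorientation_deg(plane_normal(1, 0, 2)), 2))  # {10-12}: ~85.0
    print(round(c_axis_reorientation_deg(plane_normal(1, 1, 2)), 2))  # {11-22}: ~64.4

With c/a = 1.587 the sketch returns about 85.0° and 64.4°; the values of 84.78° and 64.62° quoted above correspond to a slightly smaller lattice-parameter ratio.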
Fig. 8: Reorientation of the c-axes caused by {10-12} twinning (blue arrow) and {11-22} twinning (red arrow).
Fig. 12: Effect of the neighboring grains on the lattice rotation. (a) Misorientation of the c-axes of the neighboring grains (numbers inside the neighboring grains) with respect to the large grain they surround, and reorientation of the c-axis in the regions A, B, ... near the grain boundary (color code in the inset). (b) Illustration of the reorientation of the c-axes in the various boundary regions A, B, ... of the large green grain.

This tends to show that the main effect of twinning is to create a newly oriented zone (a new grain) and that it does not induce additional deformation mechanisms in the remaining matrix part of the grain. In other words, the deformation mechanisms in the matrix part of the twinned grains remain the same as those in the untwinned grains. The reorientation induced by {10-12} twinning tends to bring the c-axes close to stable orientations, so there is no tendency for secondary twinning to occur within such twins. Secondary twinning only takes place in the {11-22} compression twins, whose c-axes are reoriented far away from the stable texture orientations. In that case, the new and major twins appearing inside the {11-22} twins are {10-12} tension twins. This explains why the amount of {10-12} tension twins increases drastically when the deformation is increased from 10% to 20%. The reorientation induced by this secondary twinning brings the c-axis of the secondary twin to a stable orientation. Only a small amount of second-order twinning could be observed, and twinning of order higher than two was never found.

The rotation of the matrix part of the twinned grains is similar to that of the non-twinned grains of similar orientation. The twinned part of a grain can be considered as a new grain. When twins grow within a grain, they can consume almost the whole matrix, so special attention must be paid when determining the twinned volume fraction. Thanks to the EBSD measurements, a strong increase of the twinned volume could be demonstrated. This contradicts the conventional judgement, based on optical microscopy, that the twinned part is always the smaller part of a twinned grain. Only step-by-step EBSD orientation mapping allows an unambiguous determination of the twinned volume fraction. The confirmation that the order of twinning never exceeds two is very useful for the modelling of plasticity in polycrystalline metals: the order allowed for twins can be restricted to the first and second order (the latter also called double twins).

Chapter 4: Variant selection in primary twins, secondary twins and double twins

In this chapter, with the purpose of studying the mechanisms governing the selection of specific twin variants, we calculate the Schmid factor (SF), a deformation energy factor and several geometrical factors for primary and secondary twins. We come to the conclusion that the Schmid factor and the energy factor play an important role in variant selection during primary and secondary twinning, and we examine the rank (or level) of each variant in the selection. The development of an efficient activation criterion is essential for modelling the deformation behavior of materials, such as HCP metals, for which twinning is an important deformation mechanism.

Figure 1 shows the {0002} and {10-10} pole figures (PFs) of the as-annealed T40 sheet.
The initial texture is characterized by two strong maxima tilted ±30° from ND towards TD in the {0002} PF, with the maximum {10-10} pole densities parallel to RD.

Figure 1: The initial texture presented as {0002} and {10-10} pole figures.

Figure 2(b) and (d) give the SF of each twin variant (active variants highlighted in red). The growth of the {11-22} compression twin variants and of the {10-12} tension twin variants exhibits different features. Once {11-22} compression twin lamellae have formed, they grow and expand rapidly along the shear direction. This rapid expansion causes the growing variants to intersect one another, and the collision between lamellae of different variants blocks their respective growth. Subsequent deformation then proceeds by the formation of new twin lamellae of the same variants (repeated twin nucleation) in the undeformed matrix bounded by the already formed lamellae (Figure 2(a)). For the {10-12} tension twins, by contrast, repeated twin nucleation occurs at the very beginning of the deformation. The twin lamellae also grow rapidly along the shear direction until they intersect the grain boundaries; but unlike compression twinning, further deformation is accommodated mainly by the thickening of the already formed twin lamellae, which transforms the whole initial grain into twin, as shown in Figure 2(c).

Discussion: in this study, {11-22} compression twinning and {10-12} tension twinning show quite different variant selection and growth characters. The {11-22} twins always exhibit three or four variants in one grain, and these variants possess large misorientations with respect to each other; we call this a multiple variant system (MVS), as shown in Figure 2(a). The {10-12} twins exhibit only one or two variants; we call this a predominant variant system (PVS), as shown in Figure 2(c). After growth, they consume almost the whole volume of the initial grain.

Figure 2: (a) and (c) EBSD maps of two grains at the initial, 8% and 16% deformation steps, (a) with {11-22} compression twin variants and (c) with {10-12} tension twin variants; (b) and (d) the Schmid factors of each twin variant (active variants highlighted in red).
Figure 3: Schematic presentation of the twin variants of the {11-22} twin (a) and of the {10-12} twin (b).
Figure 4: Distribution of the NSFs (a) and their frequency (b).

As shown in Figure 4, the Schmid factors of all the active variants (SFa) lie in the range from 0.32 to 0.5 (x axis in Figure 4(a)), and the NSF equal to one has the highest occurrence. When the c-axis of the initial grain is close to the compression load, the SFs of the six {11-22} variants are close to one another and high, favoring an MVS. In contrast, the {10-12} twins always exhibit only one variant per grain; this predominant twin variant grows fast until almost no matrix is left, giving a PVS.

The {10-12} and {11-21} systems are referred to as tension twins (and {11-22} as a compression twin) because they are activated only under tension (respectively compression) along the c-axis of the parent crystal [Rosi et al. (1956)].
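The NSF statistics discussed above are simple to reproduce. The sketch below shows the bookkeeping on a toy set of six variant SFs (the numbers are invented for illustration): the NSF is the SF of the observed active variant divided by the highest SF among the six possible variants, and the rank records the position of the active variant in the descending SF order:

    import numpy as np

    def nsf_and_rank(sf_all, active_index):
        # Normalized Schmid factor and rank (1 = highest) of the active variant.
        sf_all = np.asarray(sf_all, dtype=float)
        order = np.argsort(sf_all)[::-1]          # variant indices, descending SF
        rank = int(np.where(order == active_index)[0][0]) + 1
        nsf = sf_all[active_index] / sf_all.max()
        return nsf, rank

    # Toy example: six hypothetical variant SFs, variant 2 observed active.
    sfs = [0.41, 0.47, 0.45, 0.12, 0.03, 0.28]
    nsf, rank = nsf_and_rank(sfs, 2)
    print(round(nsf, 3), rank)  # -> 0.957 2

Accumulating these pairs over all observed twins gives exactly the rank histograms and NSF distributions plotted in Figures 4 to 6.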
Once a twin forms, the crystal lattice of the twin is misoriented with respect to the parent lattice; this misorientation is represented by a rotation that depends on the twin system. The orientation change can promote secondary twinning inside the primary twin because of a more favorable orientation. The combination of a primary and a secondary twin is called a double twin; in other words, a double twin can be regarded as a secondary twin formed inside a pre-existing primary twin [Barnett et al. (2008); Capolungo et al., Variant selection during secondary twinning in Mg-3%Al]. In titanium, double twins combining the {10-12} tension twin and the {11-22} compression twin are commonly observed in rolling and compression deformation [Bao et al. (2010a)]. It is well established that variant selection in the primary twin generally follows the Schmid law [Lebensohn et al., A study of the stress state associated with twin nucleation and propagation in anisotropic materials].

Fig. 1: Schematic description of the channel-die set-up used.

Fig. 2(b) shows the same area as Fig. 2(a) after 8% reduction. The boundaries of the {10-12} tension twins are marked in blue and those of the {11-22} compression twins in red. Two types of primary twins and two types of double twins were observed: the {10-12} tension twin (T1 twin) and the {11-22} compression twin (C twin) as primary twins, and the corresponding C-T1 and T1-C double twins.

Fig. 2: (a) OIM of the initial microstructure of the T40 titanium. (b) OIM of the microstructure after 8% deformation; red lines represent {11-22} twin boundaries and blue lines {10-12} twin boundaries. (c) {0002} and {10-10} pole figures of the initial texture.

Each set of double twins is represented in a pole figure with a symbol and a color code (Fig. 3). Note that this pole figure is referenced in an XYZ coordinate frame: it is not used to describe an orientation with respect to the sample frame, but to represent the misorientation with respect to the parent grain. This is a direct and effective way to study the texture changes produced by double twinning.

Fig. 3: {0002} pole figure of all possible double twin variants. The color code relates the secondary twin variants (six symbols) to their respective primary twin variants (filled circles); each primary twin variant and its six possible secondary twin variants are presented in the same color.
Fig. 4: Frequency of occurrence of the double twin variant groups A, B and C.

The NSF is the SF of the active variant (SFa) divided by the highest SF (SFh) of the six possible variants. If NSF = 1, the active twin variant is the one with the highest SF; if NSF < 1, another twin variant is activated instead of the one with the highest SF. The frequency of the NSF is displayed in Fig. 5(b); its interest is that it quantifies how far the SF of the active variant deviates from the highest SF when a non-highest-SF variant is activated. As shown in Fig. 5, Schmid's law gives a 50% accuracy in predicting the variant selection of primary twinning; the majority of the primary twins form on the variants with the first or second SF rank.

Secondary twinning: the SFs of the secondary twins in the C-T1 and T1-C double twins were examined separately and plotted in different colors in Fig. 6.
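The misorientation carried by a double twin can be obtained by composing the two twinning rotations. The sketch below is an illustration under stated assumptions: each twin is modelled as a 180° rotation about its K1 plane normal (adequate for the {10-12} and {11-22} modes considered here), and a C primary twin on (11-22) is composed with each of the six possible T1 secondary twins. Reducing the resulting rotations by hexagonal symmetry should recover the three variant groups discussed in the text:

    import numpy as np

    CA = 1.587

    def nrm(v):
        return v / np.linalg.norm(v)

    def plane_normal(h, k, l, ca=CA):
        return nrm(np.array([h, (h + 2.0 * k) / np.sqrt(3.0), l / ca]))

    def rot180(n):
        # 180 deg rotation about unit axis n (special case of Rodrigues formula).
        return 2.0 * np.outer(n, n) - np.eye(3)

    def rz(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

    def hex_ops():
        c2x = np.diag([1.0, -1.0, -1.0])
        return [op for k in range(6)
                for op in (rz(k * np.pi / 3.0), rz(k * np.pi / 3.0) @ c2x)]

    def min_angle_deg(r):
        best = 180.0
        for s in hex_ops():
            tr = np.clip((np.trace(s @ r) - 1.0) / 2.0, -1.0, 1.0)
            best = min(best, np.degrees(np.arccos(tr)))
        return best

    # Primary C twin on (11-22); the six {10-12}-family secondary T1 variants.
    r1 = rot180(plane_normal(1, 1, 2))
    t1_family = [(1, 0, 2), (0, 1, 2), (-1, 1, 2), (-1, 0, 2), (0, -1, 2), (1, -1, 2)]
    for hkl in t1_family:
        r2 = rot180(plane_normal(*hkl))
        # Total parent -> double-twin rotation is r1 @ r2 (secondary normal
        # expressed in the un-rotated reference, then carried by the primary).
        print(hkl, round(min_angle_deg(r1 @ r2), 1))

The six combinations should cluster, pairwise, around the three misorientation angles near 41°, 48° and 88° reported for groups A, B and C, within the rounding introduced by the exact c/a.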
Fig. 6(a) shows the frequency of each SF rank as histograms, and Fig. 6(b) the frequency of the NSF associated with the secondary twin variants. The accuracy of the SF in predicting the variant selection of secondary twinning declines to 40%. Note that the variants having the second SF rank in the C-T1 secondary twins still occur in a very high proportion (red bars in Fig. 6(a)).

Fig. 5: (a) Frequency of the SF ranks (1 = highest, 6 = lowest) of the active primary twin variants and (b) frequency of the NSF associated with the primary twin variants.
Fig. 6: (a) Frequency of the SF ranks (1 = highest, 6 = lowest) of the active secondary twin variants in the C-T1 (red) and T1-C (blue) double twins and (b) frequency of the NSF associated with the secondary twin variants.

The length L of the free path of a twin lamella in a grain can be visualized through its boundary traces on the sample observation plane: in the present work, the maximum longitudinal length of the twin lamella appearing on the observation plane is taken as L for each twin variant.

The poles corresponding to the secondary twin elements are represented in stereographic projection by the symbols defined in the caption of Fig. 8. In each variant group there are two geometrically equivalent variants, one represented by a filled symbol and the other by an open symbol (cf. Martin et al. (2010)).

Fig. 8: Stereographic plots of the twin plane normal, shear direction and shear plane normal for groups A, B and C of the C-T1 and T1-C double twins, represented by different symbols.
Fig. 9: Frequency of the angles between the twin planes on each side of the boundary.
Fig. 11: {0002} and {10-10} pole figures measured at (a) the initial state and after (b) 8%, (c) 16%, (d) 24% and (e) 35% deformation.

All possible misorientations corresponding to the double twin combinations of {10-12}, {11-21} and {11-22} were calculated with respect to the sample coordinate frame, leading to the crystallographic characterization of each double twin variant. The main conclusions can be summarized as follows:

1. The twinning order does affect the resulting misorientation induced by double twinning, even when the combinations are identical in misorientation angle and have symmetrically equivalent axes. All the double twin variants are classified into 15 orientation variant groups rather than 10 geometric variant groups.
2. Strong variant selection takes place in the C-T1 and T1-C double twinning. Group B is predominant over the two others in the C-T1 double twins (78.9%); in the T1-C double twins, the predominant variant group is Group C (66.7%).
3. Schmid factor analysis was performed on the primary and secondary twins separately. The SF gave an accuracy of about 50% for predicting the primary twin variant selection and about 40% for the secondary twin variant selection; the relative inaccuracy is probably due to the deviation between the local stress tensor and the macroscopic applied stress tensor.
4. A new calculation based on the deformation energy was introduced to assess its influence on variant selection. It gave an excellent accuracy of up to 85% for predicting the primary twin variant selection and about 95% for the secondary twin variant selection.
5. Twins formed in one grain induce, in the adjacent grains, variants that are activated across the boundary as long as the angle between the twin planes does not exceed 20°. No influence of geometric features was found on the variant selection in double twinning.
6. It is suggested that the deformation energy rank of each variant be adopted as the main criterion for predicting variant selection, with the Schmid factor used as an additional, secondary criterion.
7. Twinning affects the texture evolution through the reorientation of the crystallographic lattice from the former stable orientation of the parent grain into the new stable orientation of the twin. The stable orientations depend on the deformation mode, for example rolling or channel die compression.

Conclusions and prospects

This chapter lists the main conclusions obtained in the present work and some suggestions for future work based on the findings, conclusions and problems identified in its course.

The present work attempts to improve the understanding of the contribution of deformation twinning to the plastic deformation of the hexagonal T40 titanium alloy, and to establish a criterion for twin variant selection. From the experimental data and theoretical investigations, the following important conclusions can be drawn.

Twinning occurs in grains having specific orientations. Generally, in rolling and compression, compression twinning occurs in the grains whose c-axis is close to the compressive force, while tension twinning occurs in the grains whose c-axis is perpendicular to the compressive force.

The twinned part of a grain can be considered as a new grain. When twins grow within the grain, they can consume almost the whole matrix; the primary twinned area is then much larger than the remaining matrix and thus represents the "new grain" for possible subsequent secondary twinning. Special attention should be paid when determining the twinned volume fraction. With the EBSD technique, a large twinned volume fraction could be demonstrated. This contradicts the conventional judgement, generally drawn from optical microscopy, that the twinned part of a twinned grain is always smaller than the remaining part of the parent grain. Only step-by-step EBSD orientation mapping allows an unambiguous determination of the twinned volume fraction.

In rolling and compression, the growth of the {10-12} tension twin variants and of the {11-22} compression twin variants exhibits different characteristics. The {10-12} tension twins usually exhibit only one variant per grain; even if another variant is activated, it is rapidly absorbed by the first one. This predominant twin variant grows very fast until almost no original parent is left, giving a predominant variant system (PVS). This is because the {10-12} tension twins form under a resolved compressive force that is almost perpendicular to the c-axis of the parent grain; in this situation, only two variants have a high Schmid factor, and the misorientation between these two variants is only 9.98°, so that in a given grain the {10-12} twin lamellae of these two variants can easily merge into one another. In contrast, the {11-22} compression twins easily activate more than one variant; the variants collide with each other and block one another's growth, giving a multiple variant system (MVS). This is because the {11-22} compression twins form under a resolved compressive force that is parallel to the c-axis of the matrix.
Since all six {11-22} twin variants are related by symmetry through a 60° rotation about the c-axis, they should have the same Schmid factor under a compressive force exactly parallel to the c-axis. In our experimental observations, for the grains with {11-22} compression twins, the applied compressive force always deviates to some extent from the c-axes, so that usually three or four variants have relatively high Schmid factors and are activated simultaneously. The PVS tends to occur in elongated grains and the MVS in equiaxed grains.

The order of the twinning does affect the resulting misorientation induced by double twinning, even when the combinations are identical in misorientation angle and have symmetrically equivalent axes. The set of all the double twin variants is classified into 15 orientation variant groups rather than 10 geometric variant groups. In this study, two sets of double twins were observed, the C-T1 and the T1-C double twins. All the variants of these two sets are classified into three groups of symmetrically equivalent rotations with respect to the parent crystal: Group A (41.3° about <1-543>), Group B (48.4° about <5-503>) and Group C (87.9° about <4-730>). Strong variant selection takes place in these two double twinning systems: Group B is predominant over the two others in the C-T1 double twins (78.9%), while in the T1-C double twins the predominant variant group is Group C (66.7%).

Schmid factor (SF) analysis was performed on the primary and secondary twins separately. The SF gave an accuracy of about 50% for predicting the primary twin variant selection and about 40% for the secondary twin variant selection; the relative inaccuracy is probably due to the deviation between the local stress tensor and the macroscopic applied stress tensor. A new calculation based on the deformation energy was introduced to assess its influence on variant selection; it gave an improved accuracy of up to 85% for the primary twin variant selection and about 95% for the secondary twin variant selection.

Twins formed in one grain induce, in the adjacent grains, variants that are activated across the boundary as long as the angle between the twin planes does not exceed 20°. No influence of geometric features was found on the variant selection in double twinning. It is suggested that the deformation energy rank of each variant be adopted as the main criterion for predicting variant selection, with the Schmid factor used as an additional, secondary criterion.

Prospects:
1. The interrupted "in situ" EBSD method is an effective way to study deformation twinning in detail; various deformation modes could be studied with it, such as shear, tensile testing and ECAP. So far, the deformation achievable with this approach is limited to about 35%, owing to the poor EBSD indexing ratio after larger plastic deformation; a solution to this problem would be quite useful for future work.
2. In-situ deformation experiments (tensile and shear tests) in the SEM/EBSD chamber, with microgrids deposited on the sample surface, would be another interesting approach for studying the local strain (and thus stress) distribution in polycrystalline titanium during deformation.
3. Further studies could concentrate on dislocations and slip using the TEM technique; additional detailed investigation of the deformation mechanisms could be performed using Burgers vector identification methods in the TEM.
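The 20° transfer criterion above involves only the angle, measured in the sample frame, between the active twin planes of two neighboring grains (the statistic of Fig. 9). A minimal sketch of this computation follows; the two orientations and the plane indices are arbitrary examples:

    import numpy as np

    CA = 1.587

    def plane_normal(h, k, l, ca=CA):
        # Unit Cartesian (crystal frame) normal of an (h k i l) hexagonal plane.
        n = np.array([h, (h + 2.0 * k) / np.sqrt(3.0), l / ca])
        return n / np.linalg.norm(n)

    def rz(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

    def rx(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

    def bunge(phi1, Phi, phi2):
        return rz(phi2) @ rx(Phi) @ rz(phi1)   # sample -> crystal

    def twin_plane_angle_deg(g1, g2, hkl1, hkl2):
        # Angle between the two twin planes expressed in the sample frame.
        n1 = g1.T @ plane_normal(*hkl1)
        n2 = g2.T @ plane_normal(*hkl2)
        return np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))

    # Arbitrary example: two neighbors, each with an active {11-22}-type plane.
    g1 = bunge(*np.radians([12.0, 33.0, 5.0]))
    g2 = bunge(*np.radians([20.0, 41.0, 355.0]))
    ang = twin_plane_angle_deg(g1, g2, (1, 1, 2), (1, -1, 2))
    print(round(ang, 1), "twin transfer likely" if ang <= 20.0 else "twin stops")

Applied to every twin observed to straddle a boundary, this angle is what populates the histogram of Fig. 9 and motivates the 20° threshold.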
Summary

Titanium and its alloys are widely used in the aviation, space, military, construction and biomedical industries because of their high fracture strength, high ductility and good biocompatibility. The mechanisms of plastic deformation in titanium have been studied in detail, especially deformation twinning, since it has a great influence on ductility and fracture strength. An interrupted "in situ" EBSD test method, based on polished sheets glued together, was developed in this study and applied in rolling and in channel die compression. With this method, EBSD measurements are performed at each deformation step over the same area containing a large number of grains, so that the orientation information of these grains is available at every step of the deformation. In primary twinning, the variants having Schmid factors higher than 0.4 have a good chance of being active. The behaviors of the two twinning types are completely different: in compression, the {11-22} twins show a multiple variant system (MVS) behavior, whereas the {10-12} twins show a predominant variant system (PVS) behavior. Two types of double twins were identified, denoted C-T1 (primary compression twin with a secondary tension twin) and T1-C (primary tension twin with a secondary compression twin). By crystallographic symmetry, all their variants fall into only three groups, A, B and C, whose misorientations with respect to the matrix are 41.34°, 48.44° and 87.85°, respectively. Strong variant selection takes place in double twinning: for the C-T1 double twins, 78.9% of the variants belong to group B, and for the T1-C double twins, 66.7% belong to group C. The Schmid factor plays a predominant role in the variant selection of double twins, while the geometric characteristics associated with "common volumes" and strain accommodation do not contribute significantly to the selection.

Figure 1: Illustration of twinning.
Figure 3: Flow chart of an interrupted test; at step 0, after polishing, an EBSD measurement is performed.
Table 1-1: Types of twinning in hexagonal metals

Metal   c/a     Types of twinning
Cd      1.886   {10-12}
Zn      1.586   {10-12}
Mg      1.624   {10-12}, {11-21}
Zr      1.593   {10-12}, {11-21}, {11-22}
Ti      1.588   {10-12}, {11-21}, {11-22}

Table 1-3: Slip systems in h.c.p. metals

Slip system         Slip plane and direction
Basal <a>           {0001}<11-20>
Prismatic <a>       {10-10}<1-210>
Pyramidal <a>       {10-11}<1-210>
Pyramidal <c+a>     {10-11}<11-2-3>
Pyramidal <c+a>     {2-1-12}<-2113>
Table II-1: Chemical composition of commercially pure titanium T40

Element                  H    C    N    O     Fe   Ti
Composition (ppm wt.)    3    52   41   1062  237  Balance

Table 3: Angles between the twinning elements associated with the variant groups of the double twins

C-T1 double twins
Variant group   Angle between primary and     Angle between primary and     Angle between primary and
                secondary twin planes         secondary shear directions    secondary shear plane normals
A               84.1°                         103.5°                        30°
B               27.4°                         23.7°                         30°
C               66.9°                         123.6°                        90°

T1-C double twins
Variant group   Angle between primary and     Angle between primary and     Angle between primary and
                secondary twin planes         secondary shear directions    secondary shear plane normals
A               84.1°                         76.5°                         30°
B               27.4°                         23.6°                         30°
C               66.9°                         55.3°                         90°

Acknowledgements

This work was carried out at the Laboratoire d'Etude des Textures et Application aux Matériaux (LETAM, CNRS FRE 3143), University of Metz, France. It was supported by the Federation of Research for Aeronautics and Space (Fédération de Recherche pour l'Aéronautique et l'Espace, Thème Matériaux pour l'Aéronautique et l'Espace), project OPTIMIST (optimisation de la mise en forme d'alliages de titane).
Keywords: Ferromagnetic shape memory alloys, Ni-Mn-Ga, Microstructure, Twinning, Martensitic transformation, EBSD

Since the large magnetic-field-induced strain in Ni-Mn-Ga ferromagnetic shape memory alloys is reached through the reorientation of martensite variants, the microstructural configuration and crystallographic correlation of the constituent variants have a strong influence on the activation of the magnetic shape memory effect. In this work, the microstructural and crystallographic features of Ni-Mn-Ga alloys were thoroughly studied. For the 5M martensite in a Ni50Mn28Ga22 alloy, the modulated superstructure information was applied for the first time to perform EBSD measurements, which enables a deeper insight into the microstructural characters. Consequently, four types of twin-related variants (A, B, C and D) were unambiguously revealed, whereas only two variants can be identified when using the simplified tetragonal non-modulated crystal structure for the Kikuchi pattern indexation. Based on the exact local orientations of the martensite variants and on crystallographic calculations, the twinning elements and the twin interface planes were fully determined. Furthermore, with no residual austenite, the most favorable orientation relationship governing the martensitic transformation was revealed to be the Pitsch relation, i.e. (101)A//(1 2 5)5M and [10 1]A//[5 5 1]5M. Automatic orientation mappings of the 7M martensite in a Ni50Mn30Ga20 alloy were realized by using the incommensurate superstructure information, which provided an alternative means of verifying the crystal structure information. Four types of alternately distributed martensite variants (A, B, C and D) in one martensite colony were determined to be twin-related: A and C (or B and D) possess a type I twin relation, A and B (or C and D) a type II twin relation, and A and D (or B and C) a compound twin relation. All the twin interfaces coincide with the respective twinning plane (K1). The energetically favorable orientation relationship between the austenite and the 7M martensite was revealed to be the Pitsch relation, with (101)A//(1 2 10)7M and [10 1]A//[10 10 1]7M. Notably, the ambiguity of the geometrically favorable orientation relationships was resolved by examining the lattice discontinuity caused by the phase transformation and the structural modulation. In a Ni54Mn24Ga22 alloy with NM martensite, four types of plates are identified locally, and each plate consists of nano-scaled paired fine variants. In total, eight orientation variants are found in one martensite colony. The paired fine variants in each plate were found to be compound-twin related, with {112}Tet as the twinning plane and <11 1>Tet as the twinning direction. The inter-plate interfaces are close to the {1 1 2}Tet plane but with a deviation of about 3°, while the interfaces of two paired fine variants are in good agreement with the {112}Tet twinning plane. In a Ni53Mn22Ga25 alloy with transformation temperatures around room temperature, the microstructure evolution in the continuous austenite-7M-NM transformation was observed for the first time. The initially formed self-accommodated 7M martensite was characterized by a diamond-shaped morphology composed of four variants. The 7M martensite was revealed to be an intermediate, thermodynamically metastable phase, and it possesses an independent crystal structure rather than being a nanotwinned combination of NM martensite.
The transformation from the 7M to the NM martensite is realized by a lattice distortion following the (001)7M // (112)Tet and [100]7M // [1 1 -1]Tet relation, and is accompanied by the thickening of the 7M plates. The role of the 7M martensite in bridging the austenite to NM martensite transformation is to relieve the large lattice mismatch between the austenite and the NM martensite and to avoid the formation of incoherent NM plate interfaces, which would represent an insurmountable energy barrier.
Chapter 1 Literature Review

General introduction

In human history, the development of science and technology has always been marked by innovations and evolutions in the use of materials. New advanced materials are always stepping stones to human progress and have brought great convenience to people's daily lives. Among the numerous advanced materials, the so-called smart materials are uniquely attractive owing to their ability to sense and respond to their environment in a predictable manner. Their shape can be changed by external fields, such as electric, magnetic or temperature fields, transforming one form of energy into another. Hence, they find wide application as actuators or sensors in various domains [1-4], such as the medical, civil engineering, aerospace and marine industries. Piezoelectric ceramics, magnetostrictive materials and shape memory alloys are the main groups of smart materials that have been utilized in practical applications.

Piezoelectric ceramics are the most widely used smart materials. They present two distinct abilities due to electro-mechanical coupling [5]: they become electrically charged when subjected to a mechanical stress, and they expand or contract when a voltage is applied. Piezoelectric materials have high response frequencies, on the order of tens to hundreds of kHz, making them ideal for precise and high-speed actuation. The most common piezoelectric material used these days is lead zirconate titanate (PZT) [6]. Nevertheless, piezoelectric ceramics have some limitations: they tend to be brittle, and the induced strain in these materials is relatively small. The best piezoelectric ceramics only exhibit a strain of about 0.19% [5].
Magnetostrictive materials such as Terfenol-D are similar to piezoelectric ceramics, but they respond to a magnetic field rather than an electric field. When a magnetic field is applied, the electron spins try to align along the field direction and the corresponding electron orbits also tend to be reoriented, which results in considerable lattice distortion, hence magnetostriction. Magnetostrictive materials can operate at high frequencies, up to 10 kHz, but they share the drawback of a small output strain: the leading magnetostrictive material, (Tb0.3Dy0.7)Fe2, shows a field-induced strain of about 0.24%. Moreover, they are expensive to produce and highly brittle.

Conventional shape memory alloys (SMAs) are another class of smart materials; they can produce very high recoverable strains as a result of a reversible martensitic transformation. The most common commercially available shape memory alloy is Ni-Ti. Ni-Ti alloys can produce strains of up to 8% [2], can be deformed easily, and are biocompatible. When these shape memory alloys are cooled to martensite, the martensite phase has a multi-variant structure and, in general, almost no noticeable shape change is observed between the high-temperature and low-temperature phases, owing to the small difference between the volumes of the unit cells and to the self-accommodation of the martensite variants. The application of a uniaxial stress breaks this arrangement, and the twin boundaries between the martensite variants are activated to move. The variants favorably oriented with respect to the stress grow at the expense of the other variants, which results in a macroscopic shape change. On heating the alloy back to the parent phase, the crystallographically reversible transformation causes the recovery of the original macroscopic shape. However, a major inconvenience for the practical application of the thermally induced shape memory effect is the low working frequency (less than 1 Hz) inherent to the thermal control of the effect.

Recently, a significant breakthrough in the research on shape memory alloys came about with the discovery of ferromagnetic shape memory alloys (FSMAs). The realization of large field-induced strains under an external magnetic field at high working frequencies has aroused intensive research attention to these materials [7]. So far, the reported magnetic-field-induced strain (MFIS) has reached up to ~10% [8], which is of the same order of magnitude as in conventional shape memory alloys and much higher than the strains generated in piezoelectric and magnetostrictive materials. Meanwhile, the possibility of controlling the shape change by applying a magnetic field enables relatively high working frequencies (kHz) compared with conventional shape memory alloys [9]. Owing to these features, ferromagnetic shape memory alloys are regarded as potential candidates for a new class of magnetic actuator materials.
The magnetic-field-induced strain effect, also termed the magnetic shape memory effect, in ferromagnetic shape memory alloys can be achieved through different mechanisms driven by an external magnetic field: (1) the rearrangement of ferromagnetic martensite variants by twinning and detwinning, as in Ni-Mn-Ga [10] and Fe-Pd alloys [11]; (2) the phase transition from the paramagnetic parent phase to ferromagnetic martensite, as in Fe-Mn-Ga alloys [12]; and (3) the reverse phase transition from antiferromagnetic martensite to the ferromagnetic parent phase, as in Ni(Co)-Mn-In [13] and Ni(Co)-Mn-Sn alloys [14]. Among the ferromagnetic shape memory alloys, the most significant magnetic-field-induced strain was found in Ni-Mn-Ga alloys, and these materials have thus become the most attractive ones in the family of ferromagnetic shape memory alloys. The magnetic shape memory phenomenon in Ni-Mn-Ga alloys has been demonstrated in two of the martensitic structures, with the phase transformation close to ambient temperature [8,15].

Martensitic transformation and twin relationship between variants

Since the fundamental condition for the shape memory phenomena is the occurrence of a thermoelastic martensitic transformation, it is useful to give a brief overview of this transformation. Many solids undergo a solid-to-solid phase transformation. Solid-state transformations are usually of two types: diffusional and displacive. Diffusional transformations take place by long-range diffusion resulting from thermally activated atomic movements; the new phase has a chemical composition different from that of the parent phase. Displacive (diffusionless) transformations, on the other hand, do not require long-range diffusion during the phase transformation; only small atomic movements, usually smaller than the interatomic distances, are needed. The atoms are rearranged into a new structure in a cooperative manner, without change of the chemical composition or of the atomic ordering.

Martensitic transformation is a shear-dominant, diffusionless solid-state phase transformation that occurs by nucleation and growth of the new phase from the parent phase [16]. Typically, upon cooling, the high-temperature phase (austenite), with higher symmetry, transforms to a low-temperature phase (martensite), with lower symmetry, through a first-order phase transition. There is a rigorous crystallographic connection between the lattices of the initial and final phases. The martensitic transformation is called thermoelastic when it is thermally reversible and associated with mobile interfaces between the parent and martensitic phases, with a small transformation temperature hysteresis [17, Otsuka]. In general, four characteristic temperatures define the martensitic transformation process. The forward transformation start and finish temperatures, from austenite to martensite, are called Ms and Mf, respectively, while the reverse transformation start and finish temperatures, from martensite to austenite, are called As and Af, respectively. The transformation temperatures usually differ on heating and on cooling: there is a hysteresis associated with the phase transformation. The transformation temperatures depend mainly on the alloy composition and processing history.
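As an aside, the role of the four characteristic temperatures can be made concrete with a minimal numerical sketch (not part of the original work; the temperature values and the assumption of a linear transformation progress are purely illustrative):

```python
def martensite_fraction(T, cooling, Ms=318.0, Mf=308.0, As=315.0, Af=325.0):
    """Idealized martensite volume fraction at temperature T (K).

    The forward transformation (cooling) runs between Ms and Mf; the reverse
    transformation (heating) runs between As and Af. A linear progress is
    assumed purely for illustration; real transformations are not linear.
    """
    if cooling:
        if T >= Ms:
            return 0.0
        if T <= Mf:
            return 1.0
        return (Ms - T) / (Ms - Mf)
    if T <= As:
        return 1.0
    if T >= Af:
        return 0.0
    return (Af - T) / (Af - As)

# The same temperature gives different fractions on cooling and on heating:
# this difference is the thermal hysteresis of the transformation.
for T in (310.0, 316.0, 322.0):
    print(T, martensite_fraction(T, cooling=True), martensite_fraction(T, cooling=False))
```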
Microstructural defects, the degree of order and the grain size of the parent phase can also alter the transformation temperatures by several degrees [Macqueron]. Martensitic transformation is usually accompanied by a change in the shape of the transformed region, which manifests itself as a characteristic relief on the surface where the martensite plate appears. Moreover, numerous physical properties change with the occurrence of the martensitic transformation [17]. During the transformation, a latent heat associated with the transformation is absorbed or released, depending on the transformation direction. The two phases also have different resistivities owing to their different crystallographic structures, so the phase transformation is associated with a change in the electrical resistivity. These changes allow the transformation temperatures to be measured by differential calorimetry and by electrical resistivity measurements, respectively. In addition, changes in specific volume, mechanical properties and magnetization also allow the transformation temperatures to be determined [Delaey, Webster].

The martensite phase usually takes the form of plates, and the interface separating the martensite from the parent phase is called the habit plane. A careful analysis of the surface relief reveals that any vector in the habit plane is left unrotated and undistorted during the transformation [Delaey]. The habit plane is thus essentially "undistorted", and the shape change resulting from the martensitic transformation is an "invariant plane strain". To illustrate the transformation mechanism, Bain proposed in 1924 that the change in structure from the FCC lattice to the BCC lattice could be achieved by a simple homogeneous deformation, as illustrated in Fig. 1.1 [22]. A unit cell of the BCT structure is drawn within two FCC cells; the transformation to a BCC unit cell can then be achieved by a compression along the z axis and an expansion along the x and y axes. This homogeneous deformation is often called the Bain distortion or Bain strain.

Although the Bain distortion simply explains how the BCC unit cell can be obtained from FCC unit cells with minimum atomic movement, the Bain deformation alone would cause enormous transformation strains, depending on the volume transformed, and it does not leave any plane invariant. To fulfill the requirement of an invariant plane, Wechsler, Lieberman and Read [23], and Bowles and Mackenzie [24], independently developed the so-called "phenomenological crystallographic theory of martensitic transformation" to explain the shape change of the martensitic transformation. Although the mathematical treatment is slightly different in the two theories, they are essentially equivalent [25]. In the phenomenological theories, the shape deformation is artificially decomposed into the following three components: a lattice deformation, i.e. the Bain strain; a lattice-invariant shear (slip, twinning or stacking faults); and a rigid-body rotation. The pure lattice strain transforms the parent lattice into the product lattice; its combination with the lattice-invariant shear will, in general, leave one plane undistorted but rotated to a new position, and the rigid-body rotation is applied to bring the undistorted plane back to its original position.
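The Bain distortion described above is easy to quantify. The following short sketch (the lattice parameters are assumed example values, not data from this thesis) computes the three principal stretches and the associated volume change:

```python
import numpy as np

# Minimal sketch of the Bain distortion: a BCT cell carved out of the FCC
# austenite (a' = a0/sqrt(2), c' = a0) is mapped onto the tetragonal
# martensite cell (a_m, c_m).
a0 = 5.84               # cubic austenite lattice parameter (angstrom, assumed)
a_m, c_m = 3.87, 6.60   # tetragonal martensite parameters (angstrom, assumed)

eta1 = eta2 = np.sqrt(2.0) * a_m / a0   # principal stretches along x and y
eta3 = c_m / a0                          # principal stretch along z
B = np.diag([eta1, eta2, eta3])          # Bain distortion matrix

print("Bain matrix:\n", np.round(B, 4))
print("volume change: %.2f %%" % ((np.linalg.det(B) - 1.0) * 100.0))
```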
The phenomenological theories have successfully predicted crystallographic data associated with the martensitic transformation, such as the habit plane, the orientation relationship and the magnitude of the shape deformation, in various alloy systems, and these predictions have been confirmed by experimental measurements.

Since the martensitic transformation is diffusionless and is completed by atomic movements in a coordinated manner, the austenite and martensite lattices should be intimately related, leading to a reproducible orientation relationship between them. The orientation relationship can be described by specifying the parallelism between certain crystallographic planes and directions. Representative orientation relationships of martensitic transformation are given below (the subscript A represents austenite and M martensite):

Bain relation [22]: (001)A // (001)M and [100]A // [1 -1 0]M
Kurdjumov-Sachs (K-S) relation [26]: (111)A // (011)M and [1 -1 0]A // [1 -1 1]M
Nishiyama-Wassermann (N-W) relation [27,28]: (111)A // (011)M and [-2 1 1]A // [0 -1 1]M
Greninger-Troiano (G-T) relation [29]: (111)A about 1° from (011)M and [1 -1 0]A about 2.5° from [1 -1 1]M
Pitsch relation [30]: (110)A // (1 -1 -2)M and [1 -1 0]A // [-1 1 -1]M
Burgers relation (BCC to HCP) [Delaey]: (111)A // (0001)M and [1 -1 0]A // [1 -2 1 0]M

The orientation relationship between the two phases changes from one alloy system to another and, within a given alloy system, from one composition to another. Under different orientation relationships, the number of induced martensite variants is different. As the crystal lattice of the martensite phase usually has a lower symmetry than that of the parent austenite phase, elastic strains associated with the transformation accompany the nucleation and growth of the martensite, and they increase with increasing martensite fraction. To compensate for the transformation strains, differently oriented variants are formed from the same parent phase. To maintain lattice continuity, neighboring martensite variants usually develop a twin relationship with each other. Such twins make a substantial contribution to the lattice-invariant deformation [Christian].

The classical definition of twinning states that the twin and matrix lattices are related by a reflection with respect to a plane or by a rotation of 180° about an axis [Christian]; these two types of twins are often designated "reflection" twins and "rotation" twins. By convention [33], a twinning mode is defined by six elements: (1) K1 - the twinning or composition plane, i.e. the invariant (unrotated and undistorted) plane of the simple shear; (2) η1 - the twinning direction, i.e. the direction of shear lying in K1; (3) K2 - the reciprocal or conjugate twinning plane, i.e. the second undistorted but rotated plane of the simple shear; (4) η2 - the reciprocal or conjugate twinning direction, lying in K2; (5) P - the plane of shear, which is perpendicular to K1 and K2 and intersects K1 and K2 along the directions η1 and η2, respectively; (6) s - the magnitude of the shear. The geometrical configuration of K1, K2, η1, η2 and P is shown in Fig. 1.2.
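To illustrate how the parallelism conditions listed above define a rotation between the two lattices, the sketch below constructs orthonormal frames from the plane/direction pairs of the Kurdjumov-Sachs relation and recovers its well-known misorientation angle of about 42.85°. Treating both phases as cubic here is a simplification made for the illustration only:

```python
import numpy as np

def triad(plane, direction):
    """Orthonormal frame built from a plane normal and an in-plane direction."""
    n = np.asarray(plane, float)
    n /= np.linalg.norm(n)
    d = np.asarray(direction, float)
    d -= n * np.dot(d, n)                 # enforce that d lies in the plane
    d /= np.linalg.norm(d)
    return np.column_stack([d, np.cross(n, d), n])

# Kurdjumov-Sachs relation quoted above: (111)A // (011)M, [1 -1 0]A // [1 -1 1]M
FA = triad([1, 1, 1], [1, -1, 0])         # austenite frame
FM = triad([0, 1, 1], [1, -1, 1])         # martensite frame
R = FM @ FA.T                             # rotation carrying one frame to the other

theta = np.degrees(np.arccos((np.trace(R) - 1.0) / 2.0))
print("K-S misorientation angle: %.2f deg" % theta)   # ~42.85 deg
```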
According to the rationality of the Miller indices of K1, K2, η1 and η2, crystal twins are usually classified into three categories: type I twins (K1 and η2 are rational), type II twins (K2 and η1 are rational) and compound twins (K1, K2, η1 and η2 are all rational). Twins are usually formed by a homogeneous simple shear of the matrix lattice when the material is strained, the resulting structure being identical to that of the matrix but differently oriented. This type of twinning is termed deformation twinning, and it is an important plastic deformation mechanism in many materials. On the other hand, twin structures are often found in the products of many martensitic transformations, i.e. transformation twins. Transformation twinning produces highly organized structures with alternating twin lamellae of fixed thickness ratios. In many martensites of shape memory alloys, the twin boundaries are highly glissile. Upon the application of a load, the twin boundaries between martensite variants are easily activated to move, thus accommodating the induced strain; this results in the detwinning of the martensite variants, with the growth of some energetically favored variants at the expense of the others.

Ni-Mn-Ga Ferromagnetic Shape Memory Alloys

Introduction

Ni-Mn-Ga ferromagnetic shape memory alloys have emerged as a promising new class of smart materials owing to their large magnetic-field-induced strains. They can be driven at higher actuation frequencies than conventional shape memory alloys. Ni-Mn-Ga alloys have already been studied for more than 50 years, since the initial work identified Ni2MnGa as one of the Heusler alloys having the formula X2YZ [34-37]. Soltys was the first to concentrate exclusively on the Ni-Mn-Ga alloy system [38]. In 1984, Webster et al. first investigated the martensitic transformation and magnetic order of Ni2MnGa [Webster]. More systematic studies were carried out in the 1990s by Kokorin et al. [39,40] and by Chernenko et al. [41], when Ni-Mn-Ga alloys were investigated as potential shape memory alloys. In 1996, Ullakko et al. first demonstrated a 0.19% field-induced strain in a Ni2MnGa single-crystal sample at 265 K under a magnetic field of 8 kOe [7]. Since then, research interest in magnetic shape memory alloys has grown rapidly. In 2000, Murray et al. achieved a large magnetic-field-induced strain of 6% in a non-stoichiometric Ni-Mn-Ga single crystal with a five-layered martensite [15]. In 2002, a much larger strain, close to 10%, was reported by Sozinov et al. [8] in a single crystal with a seven-layered martensite, which is the largest magnetic-field-induced strain obtained in ferromagnetic shape memory alloys.

Owing to the high fabrication cost of single crystals, research was also directed towards polycrystalline materials. However, the magnetic-field-induced strain is almost zero in fine-grained alloys. Innovative fabrication routes, such as films, textured polycrystals and foams, were therefore designed to improve the magnetic-field-induced strain of polycrystalline alloys. A strain of 0.065% was reported for 0.1-1 μm thick films on a 10 μm thick Mo substrate [42]. By preparing a highly textured bulk alloy, Gaitzsch et al. [43] observed a 1% field-induced strain in a polycrystalline alloy. By producing a porous material, a magnetic-field-induced strain of 0.12% was obtained [44]; after a certain training, the magnetic-field-induced strain in such magnetic shape memory alloy foams reached 8.9% [45].
So far, numerous experimental and theoretical studies have been performed on Ni-Mn-Ga alloys, covering the crystal structure, phase transformations, magnetic properties, the magnetic shape memory effect, mechanical behavior, martensite stability and alloying, and interesting phenomena have been revealed.

Magneto-structural transformation in Ni-Mn-Ga alloys

Ni2MnGa is a Heusler alloy exhibiting both a paramagnetic-ferromagnetic transition and a thermoelastic martensitic transformation on cooling. The so-called Heusler alloys are highly ordered intermetallics with the common formula X2YZ and L21 order. On cooling from the melt, Ni-Mn-Ga alloys first transform into the B2' phase at about 1100 °C. Below the ordering temperature of about 750-800 °C, the Heusler phase (austenite) with L21 long-range order is formed. The austenite of Ni-Mn-Ga alloys has a cubic structure with space group Fm-3m (No. 225) [46], where the Ni atoms occupy the 8c (0.25, 0.25, 0.25) Wyckoff sites, and the Mn and Ga atoms the 4a (0, 0, 0) and 4b (0.5, 0.5, 0.5) sites, respectively [46], as shown in Fig. 1.3.

On further cooling, the paramagnetic austenite orders ferromagnetically at the Curie temperature (T_C). This magnetic transition is one of the main limits when considering the service temperatures of the magnetic shape memory effect. The T_C of stoichiometric Ni2MnGa was reported to be 376 K [Webster]. The T_C of off-stoichiometric Ni-Mn-Ga alloys is not very sensitive to the chemical composition [41], having a value of around 370 K over a wide range of compositions.

Ni-Mn-Ga alloys are a special case of shape memory alloys, and they exhibit a thermoelastic martensitic transformation on cooling from the parent phase. In some near-stoichiometric alloys, prior to the martensitic transformation at low temperatures, a premartensitic transformation occurs and the austenite transforms to a micromodulated three-layered (3M) martensite [46]. The premartensitic transformation is characterized by anomalies in the elastic [47-49], thermal [47,48], resistivity [50,51] and magnetic properties [48,50-52]. Inelastic neutron scattering experiments performed on stoichiometric Ni2MnGa showed the existence of a soft [ζ ζ 0] TA2 phonon mode over a wide temperature interval. An important observation in these measurements was that the TA2 phonon branch at a wave vector of 1/3 incompletely condenses at the premartensitic transition temperature, well above the martensitic transition temperature [53,54]. This sudden soft-mode phonon freezing is the typical signature of the premartensite phase and can only be observed in the premartensite phase. Weak spots (or peaks) were found using electron [47,49], neutron [46] and X-ray [55] diffraction techniques, which can be indexed using a propagation vector of (1/3, 1/3, 0) with cubic symmetry. Based on Landau theory, first-principles simulations and a phenomenological model have revealed that the premartensitic transformation is a weakly first-order transformation driven by the magnetoelastic interaction [56,57].

The martensitic transformation in Ni-Mn-Ga alloys may result in several martensites. The most frequently observed are the five-layered modulated (5M) martensite, the seven-layered modulated (7M) martensite and the non-modulated (NM) martensite. The kind of martensite that appears on cooling depends on the composition, but the relative stability of the structures seems to be always the same: 5M martensite is the least stable, 7M martensite is intermediate, and NM martensite is the most stable [58].
The alloys displaying a direct transformation to NM martensite typically have martensitic transformation temperatures close to or above their Curie temperatures [59,60]. The martensitic transformation temperatures (T_M) of Ni-Mn-Ga alloys are highly dependent on the chemical composition [41,61,62]. Generally, the increase of the martensitic transformation temperature can be described as a linear function of the average number of valence electrons per atom (electron concentration), e/a. Taking the numbers of valence electrons of the Ni (3d8 4s2), Mn (3d5 4s2) and Ga (4s2 4p1) atoms as 10, 7 and 3, respectively, e/a can be calculated as follows:

e/a = (10 x_Ni + 7 x_Mn + 3 x_Ga) / 100    (1.1)

where x_Ni, x_Mn and x_Ga are the atomic percentages of the respective elements.

In addition to the martensitic transformation, a first-order inter-martensitic transformation (IMT), which transforms one type of martensite into another, also exists in some alloys [63-67]. Depending on the composition and the thermal history of the alloys [63,66], the typical path of the inter-martensitic transformation on cooling was reported to run from modulated martensite to the final NM martensite, such as 5M-7M-NM or 7M-NM; thus, NM martensite is the ground state of these alloys. The inter-martensitic transformations are usually accompanied by anomalies in the calorimetric, mechanical, magnetic and electrical resistivity behavior, both on cooling and on heating [63]. Moreover, the occurrence of an inter-martensitic transformation has a certain influence on practical applicability: the working range of ferromagnetic shape memory alloys could be restricted to the temperature range where inter-martensitic transformations are not observed, i.e. the inter-martensitic transformation may set the lower limit for the service temperature of the magnetic shape memory effect [68].
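The electron-concentration rule of Eq. (1.1) above can be evaluated directly; in the sketch below the compositions are example values (the stoichiometric alloy and off-stoichiometric compositions of the kind studied in this work):

```python
# Sketch evaluating Eq. (1.1): valence electrons per atom, with 10 e for Ni,
# 7 e for Mn and 3 e for Ga; the compositions are illustrative examples.
def e_per_a(ni, mn, ga):
    """Average number of valence electrons per atom for at.% contents."""
    return (10 * ni + 7 * mn + 3 * ga) / 100.0

for ni, mn, ga in [(50, 25, 25), (50, 28, 22), (50, 30, 20), (54, 24, 22)]:
    print("Ni%g Mn%g Ga%g : e/a = %.2f" % (ni, mn, ga, e_per_a(ni, mn, ga)))
```

For the stoichiometric composition Ni2MnGa (50, 25, 25 at.%) this gives the familiar value e/a = 7.50.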
Crystal structure of martensite

Among the three types of martensite formed by the martensitic transformation, the NM martensite has a tetragonal crystal structure with space group I4/mmm (No. 139) [69], while the 5M and 7M martensites possess monoclinic superstructures subjected to periodic shuffling along the (110)A[1 -1 0]A system. Shuffling means that the transformation shear occurs periodically on the (110)A plane in the [1 -1 0]A or [-1 1 0]A direction. In electron diffraction patterns, the lattice modulation is reflected by satellite spots between two main spots: the distance between two main reflections is divided into five parts by four satellite spots for 5M martensite, or into seven parts by six satellite spots for 7M martensite [70]. Conventionally, these diffraction data are interpreted as the superstructure being composed of five consecutive subcells for 5M martensite and seven consecutive subcells for 7M martensite.

Generally, two models are commonly used to represent layered martensitic structures. The first one is the long-period stacking order approach [71], a well-known method of describing close-packed layered martensite structures that is widely accepted in studies on Ni-Al and Cu-based shape memory alloys. This method is based on a uniform shear occurring on each basal plane, which originates from the {110}A planes. It is necessary to use the Zhdanov notation to indicate the stacking sequence explicitly, such as (5 -2)2 for the seven-layered martensite. The second approach is to consider the layered structure as a modulation function superimposed onto a basic structure [72]; the deviations of the atoms from their ideal positions are defined by the modulation function.

The distinction between these two methods is that the long-period stacking order model assumes a uniform shear between two neighboring atomic planes, which is the case only when the modulation function is considered with zero-order harmonic coefficients [73]. In this context, the stacking order model is a simple case of the modulation function model, valid only for a commensurate modulated structure [73]. In Ni-Mn-Ga alloys, recent investigations have confirmed that the superstructure of the modulated martensite can be commensurate or incommensurate [74,75]; the lattice modulation model is thus more appropriate.

In spite of the numerous structural characterization works, the crystal structures of the modulated martensites are still in dispute. Martynov et al. [72,76] proposed that the structural modulation could be interpreted with a sinusoidal function applied to the (001) planes of the distorted martensite lattice. The displacement Δx_j of the j-th plane from its regular position along the shuffling direction can be expressed as:

Δx_j = A sin(2πj/L) + B sin(4πj/L) + C sin(6πj/L)    (1.2)

where L is the modulation period and A, B and C are constants chosen to achieve the best fit with the observed intensities of the main and satellite spots. For 5M martensite (L = 5), A = -0.06, B = 0.002, C = -0.007 [72]; for 7M martensite (L = 7), A = 0.083, B = -0.027, C = 0 [76].

Ignoring the lattice modulation, Wedel et al. used a simplified tetragonal crystal structure (I4/mmm) and an orthorhombic crystal structure (Fmmm) to describe the 5M and 7M martensites [77]. By elastic neutron scattering measurements, Zheludev et al. [78] found that the 5M modulation of a near-stoichiometric Ni2MnGa alloy was incommensurate, with a modulation wave vector (δ, δ, 0), δ = 0.43. Brown et al. [46] performed a structural refinement of the thermally induced Ni2MnGa martensite based on powder neutron diffraction and suggested a commensurate seven-layered modulation with an orthorhombic crystal structure (Pnnm); however, this structure was more likely an incommensurate 5M modulation. Righi et al. applied superspace theory to fit powder X-ray diffraction patterns and demonstrated that the 5M modulation in Ni-Mn-Ga alloys can be commensurate or incommensurate [74,79]. It was revealed that the incommensurate 5M modulated structure (Pnmn) of the stoichiometric Ni2MnGa alloy is composed of seven consecutive subcells related to the orthorhombic structure [79], while the Mn-rich alloy has a commensurate structure with five consecutive subcells related to the monoclinic lattice basis (I2/m) [74]. Glavatskyy et al. reported refinement results of the powder neutron diffraction pattern of the 5M martensite in a non-stoichiometric alloy and also suggested a commensurate monoclinic superstructure, but with a different space group (P2/m) [80]. For 7M martensite, Righi et al. reported a monoclinic (P2/m) crystal structure with an incommensurate superlattice composed of ten subcells [75]. Recently, Kaufmann et al. proposed that the so-called 7M long-period structure is simply composed of nanotwinned tetragonal NM martensite lamellae with a (5 -2)2 stacking sequence, ruling out the existence of an independent modulated structure [81]. Further confirmation and clarification of the most appropriate crystal structures of the modulated martensites are still needed.
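As a concrete illustration of Eq. (1.2), the following sketch evaluates the displacement profile using the coefficients of Martynov et al. quoted above; it simply shows that the shuffle pattern repeats every L planes:

```python
import numpy as np

# Sketch of the modulation function of Eq. (1.2), evaluated with the
# coefficients quoted above for 5M (L = 5) and 7M (L = 7) martensite.
def shuffle(j, L, A, B, C):
    """Displacement of the j-th (001) plane from its regular position."""
    return (A * np.sin(2 * np.pi * j / L)
            + B * np.sin(4 * np.pi * j / L)
            + C * np.sin(6 * np.pi * j / L))

j = np.arange(10)
print("5M:", np.round(shuffle(j, 5, -0.06, 0.002, -0.007), 4))
print("7M:", np.round(shuffle(j, 7, 0.083, -0.027, 0.0), 4))
# The printed profiles repeat with the modulation period L, i.e. every five
# planes for 5M and every seven planes for 7M.
```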
Ferromagnetism of Ni-Mn-Ga alloys

The ferromagnetism of Ni-Mn-Ga alloys arises mainly from the contribution of the magnetic moment of the Mn atoms; the Ni atoms carry only a small magnetic moment, and the magnetic moment of Ga is negligible [Webster]. From magnetization measurements, Webster et al. [Webster] determined the total magnetic moment of the cubic Heusler phase of Ni2MnGa and its distribution over the sublattices.

The ferromagnetism of Ni-Mn-Ga alloys is interesting because Mn is antiferromagnetic in its pure elemental state. The change from the antiferromagnetic behavior of pure Mn to the ferromagnetic behavior of Ni-Mn-Ga alloys is due to the increased distance between the Mn sites in the L21 structure compared with the distance between Mn atoms in pure Mn, which changes the Mn-Mn exchange interaction from antiferromagnetic to ferromagnetic [83].

Magnetocrystalline anisotropy is an intrinsic property of a material by which the magnetization favors preferred directions (easy directions). In Ni-Mn-Ga alloys, the easy axis of magnetization of the parent phase was reported to be <100>A [84]. The easy axes of magnetization of the modulated martensites, i.e. 5M and 7M, correspond to the shortest axis of the distorted austenite, i.e. the b-axis of the superlattice. NM martensite has an easy magnetization plane perpendicular to its c-axis. The magnetocrystalline anisotropy energy, defined as the work necessary to rotate the magnetization from the easy axis to the hard axis with an applied magnetic field, is an important parameter for the achievement of giant magnetic-field-induced strain effects in Ni-Mn-Ga alloys. It is usually expressed through the magnetocrystalline anisotropy constants, which can be obtained by measuring magnetization curves along different crystalline directions. The magnetocrystalline anisotropy constant of the austenite of Ni-Mn-Ga alloys is reported to be relatively low, of the order of 10^3 J/m^3, whereas the anisotropy constants of the martensites are larger by two orders of magnitude [85]; reported values include K_2 = 0.9×10^5 J/m^3 (referring to the hard and mid-hard axes) for modulated martensite, and K_1 = -2.3×10^5 J/m^3 and K_2 = 0.55×10^5 J/m^3 for Ni50.5Mn30.4Ga19.1 NM martensite. It should be noted that the magnetic anisotropy constants are temperature and composition dependent [86,87].

Mechanism of magnetic-field-induced strain

The macroscopic shape memory effects induced by a magnetic field in Ni-Mn-Ga alloys are the result of field-induced twin-variant reorientation [10]. Twin boundary motion can occur when the difference in magnetization energy (ΔE_mag) between different martensite variants exceeds the elastic energy needed for twin boundary motion [88]:

ΔE_mag ≥ ε_0 σ_tw    (1.3)

where ε_0 is the reorientation strain and σ_tw is the twinning stress. If the material is magnetized to saturation perpendicular to the easy axis of one variant and along the easy axis of the adjacent variant, the difference in magnetic energy is equal to the difference in magnetocrystalline anisotropy energy (K_u). The condition for twin boundary motion under a magnetic field can then be rewritten as [88]:

K_u / ε_0 > σ_tw + (σ_ext)    (1.4)

The term in brackets, σ_ext, is the external stress applied in the direction perpendicular to the magnetic field direction. This equation describes the usual setup of an actuator.
When an external magnetic field is applied, the variants whose magnetization vectors are aligned along the applied field are energetically more favorable, and they increase their volume fraction at the expense of the other variants through twin boundary motion, leading to a macroscopic shape change of the sample, namely the magnetic-field-induced strain [58]. A schematic illustration of the rearrangement of the martensite variants under a magnetic field is shown in Fig. 1.4. So far, large strains of up to ~6% and ~10% have been reported in 5M [15] and 7M martensite [8], respectively, while the strain induced by a magnetic field in NM martensite is negligible [89].

In general, the shape change achieved by the rearrangement of martensite variants under an applied field is not recovered when the magnetic field is switched off. To recover the strain induced by the magnetic field, one possibility is to rotate the magnetic field [90]; another is to apply an external stress perpendicular to the direction of the field [91]. Assume that, in the absence of the magnetic field, the martensite is in a single-variant state favored by the stress. On applying a magnetic field, if the maximum magnetic stress (K_u/ε_0) overcomes the sum of the twinning stress and the external stress, the stress-favored variant transforms into a different variant favored by the magnetic field, resulting in a large shape change. When the magnetic field is switched off, if the twinning stress is very low and the external stress exceeds the twinning stress, fully reversible behavior can be obtained. The condition for full reversibility of the magnetic shape memory effect can be written as [88]:

K_u / ε_0 - σ_tw > σ_ext > σ_tw    (1.5)

This equation reflects the fact that a high magnetic anisotropy (K_u) and a low twinning stress (σ_tw) are prerequisites for the occurrence of the magnetic shape memory effect; it also shows that too large an external stress will inhibit twin boundary motion.

In principle, the magnetocrystalline anisotropy energy can be increased by increasing the saturation magnetization and the Curie temperature (T_C). It has been reported that the Curie temperature and the saturation magnetization of Ni-Mn-Ga alloys are not very sensitive to the composition [41,92]. The easiest way to increase the magnetocrystalline anisotropy energy is therefore to choose an operating temperature (T_O) significantly lower than T_C, since the magnetocrystalline anisotropy energy increases with decreasing temperature below T_C [91]. Similarly, the twinning stress also increases with decreasing temperature below the transformation temperatures [93]. It is therefore critical to understand and quantify the relationships between the magnetocrystalline anisotropy energy, the twinning stress, the operating temperature, T_C and the transformation temperatures, in order to select materials and operating temperatures that provide a high blocking stress (the external stress level above which magnetic-field-induced reorientation is no longer possible) together with a high magnetocrystalline anisotropy energy. In many cases, the observed twinning stress exceeds the stress induced by the magnetic field because variants with various orientations are interlocked [90]; in one such case, after training, the magnetic-field-induced strain increased from the initial 6% to 9.7% [90].
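The actuation and reversibility conditions of Eqs. (1.4) and (1.5) lend themselves to a quick numerical check; all input values in the sketch below are assumed order-of-magnitude figures, not measured data:

```python
# Numerical check of Eqs. (1.4) and (1.5); inputs are illustrative assumptions.
K_u   = 1.7e5    # magnetocrystalline anisotropy energy, J/m^3 (assumed)
eps0  = 0.06     # reorientation strain (6%)
s_tw  = 1.0e6    # twinning stress, Pa (assumed)
s_ext = 1.2e6    # external restoring stress, Pa (assumed)

max_mag_stress = K_u / eps0                          # maximum magnetic stress, Pa
actuation  = max_mag_stress > s_tw + s_ext           # Eq. (1.4)
reversible = (max_mag_stress - s_tw) > s_ext > s_tw  # Eq. (1.5)

print("maximum magnetic stress: %.2f MPa" % (max_mag_stress / 1e6))
print("field-induced reorientation possible:", actuation)
print("fully reversible two-way effect:", reversible)
```

With these example numbers the maximum magnetic stress is about 2.8 MPa, so both conditions are satisfied; raising s_ext above roughly 1.8 MPa would block the reorientation, as stated in the text.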
Crystallographic orientation relationships of Ni-Mn-Ga alloys

As the field-induced effect originates from the reorientation of martensite variants through a detwinning and twinning process, the microstructural configuration and the crystallographic correlation of the constituent martensite variants have a strong influence on the activation of the magnetic shape memory effect (output strain and dynamic response). Comprehensive crystallographic knowledge of the microstructural features of the martensite variants and variant boundaries is therefore desired, from which a clear image of the structure-property relation can be built up and the necessary indications for the further precise control and improvement of the properties can be derived. So far, constant efforts have been devoted to the study of the orientation relationships between martensite variants by means of X-ray diffraction (XRD) [97] and transmission electron microscopy (TEM) [98-100]. Mogylnyy et al. successfully determined the twinning mode in a Ni-Mn-Ga single crystal with 5M martensite by X-ray diffraction methods [97]. Han et al. [98,99] investigated the orientation relationship between the martensite variants in 5M and 7M martensite by TEM; a twin relationship was found to exist between the variants in both 5M and 7M martensite, the twinning plane being {1 -2 -5}5M in 5M martensite and {1 -2 -7}7M in 7M martensite. More systematic TEM investigations of the twin types and twinning elements in 5M and 7M martensite were presented by Nishida et al. [100].

However, XRD analysis suffers from the limitation that it cannot simultaneously acquire the spatial microstructural information associated with the measured orientations, which prevents the identification of the orientation-microstructure correlation. In contrast, TEM analysis enables one to reveal the variant morphology and the inter-variant orientation relationships, but the exploitable area is very local; hence, it is difficult to obtain a global orientation image of the multi-variant microstructure. It has also been noted that, for incommensurate modulations, the number of observed satellites plus the main reflection does not conform to the number of subcells in the superstructure [75]. Apparently, the habitual interpretation of the structural modulation based on TEM observations is not suited for the determination of the lattice constants, and hence of the orientations of the martensite variants [100], and needs to be reconsidered. Moreover, due to the limitations of TEM observation, the crystallographic nature of the inter-variant interfaces and their statistical distributions have also not been well addressed.

For NM martensite, EBSD-based investigations have shown that the twinning elements could be fully determined, the inter-plate interfaces being found to be close to the {112}Tet twinning plane with a deviation of about 2.6°. In modulated martensite, some pioneering work has been done to characterize the global microstructures and variant orientations using approximate structure information [103-107]. For instance, a simplified tetragonal crystal structure was used to approximate the 5M modulated monoclinic superstructure, considering that the monoclinic angle of the modulated structure is very close to 90° and that the difference between the basis vectors a and b is very small. As a consequence, the crystallographic nature was not fully disclosed; in particular, the number of variants was not determined accurately.
Further efforts should be made toward the full disclosure of the microstructural features and crystallographic characteristics concerning the number and shape of the martensite variants, the inter-variant orientation relationships and the twin interfaces.

On the other hand, the key factor behind the magnetic shape memory effect in Ni-Mn-Ga alloys is the martensitic transformation below a certain temperature. The microstructural configuration of the martensite variants and their crystallographic correlation result directly from the martensitic transformation, which follows certain specific orientation relationships (ORs) between the parent phase and the product phase. Thus, the determination of the possible phase transformation orientation relationships in Ni-Mn-Ga alloys is of practical importance for microstructure control, and of theoretical interest for insight into the martensitic transformation process. However, in most of the alloys the transformation goes to completion; it is difficult to obtain a mixed microstructure consisting of austenite and martensite, and thus to determine the orientation relationship between the two phases directly. An alternative option for determining the transformation OR is to calculate the orientations of the parent austenite from the orientations of the transformed martensite variants under an assumed OR [108], and then to identify the most favorable OR. Recently, a specific OR between austenite and NM martensite, namely the K-S relation, was determined in this way, representing a substantial advance in the study of the martensitic transformation in Ni-Mn-Ga alloys [102]. Further efforts should be made toward a complete description of the phase transformation OR between austenite and the modulated martensites.

Content of the present work

During the last twenty years, extensive investigations have been devoted to understanding the shape memory behavior of Ni-Mn-Ga alloys. However, as Ni-Mn-Ga ferromagnetic shape memory alloys are newly emerging materials, there are still many fundamental issues that need to be further explored, and some reported results remain controversial owing to the limitations of the characterization techniques. Electron backscatter diffraction (EBSD) measurement, a promising tool for crystallographic investigation, directly correlates the spatial crystallographic orientations with the morphological characters and allows larger-scale measurements on bulk samples. However, owing to the complexity of the structural modulation of the modulated martensite phases, EBSD characterization using the superstructure information has seldom been performed; consequently, many crystallographic features have not been correctly revealed. Against this background, the present work was designed to perform a thorough crystallographic investigation of the different kinds of martensite in Ni-Mn-Ga alloys, i.e. 7M, 5M and NM martensite, based on EBSD orientation and microstructural examination techniques. The superstructure information of the modulated martensites was applied for the orientation determination. The main content of the present work focuses on the following points:

(1) Precisely determine the number of variants, the orientation relationships between adjacent variants and the variant interface planes of 5M and 7M martensite, by using the precise superstructure information for the EBSD automatic orientation mapping.

(2) Determine the most favorable orientation relationship between austenite and the modulated martensites, based on the experimentally acquired orientation data and crystallographic calculations.
(3) Examine the morphology and crystallographic features of NM martensite by EBSD measurements and crystallographic calculations.

(4) Study the self-accommodation mechanism of the modulated martensite; reveal the thermodynamic stability and the crystal structure nature of the modulated martensite; and clarify the role of the modulated martensite in the transformation from austenite to NM martensite.

Chapter 2 Experimental and calculation methods

Alloy preparation

Ni-Mn-Ga polycrystalline alloys with different nominal compositions were specially prepared with the aim of obtaining different kinds of martensite at room temperature. High-purity elements Ni (99.97 wt.%), Mn (99.9 wt.%) and Ga (99.99 wt.%) were used as raw materials to prepare the as-cast ingots. The alloys were melted in an arc-melting furnace with a water-cooled copper crucible under an argon atmosphere. The weight of each target ingot was about 70-80 g. Prior to melting, the pure raw materials were weighed with an electronic balance, according to the designed nominal composition, with a precision of 0.01 g. To reduce the loss of Mn caused by its strong evaporation during melting, the pure Mn was placed at the bottom of the crucible before melting. Each alloy was melted four times, with electromagnetic stirring applied during the melting process for composition homogenization. The final as-cast ingots had a button-shaped appearance, as shown in Fig. 2.1. To further reduce the composition inhomogeneity of the prepared alloys, the alloys were sealed in evacuated quartz tubes and homogenized at 900 °C for 24 h. Finally, they were quenched in water by breaking the quartz tubes.

Sample preparation

The samples for compressive testing, with dimensions of 6 mm × 5 mm, were cut by electrical-discharge wire cutting. These small cylindrical samples were mechanically polished with SiC grinding paper. To prepare the samples for the powder X-ray diffraction (XRD) measurements, parts of the homogenized alloys were crushed and ground into powder. To release the stress introduced by grinding, the powder was sealed in evacuated quartz tubes and annealed at 600 °C for 5 hours. Rectangular parallelepiped bulk samples for microstructure observation and electron backscatter diffraction (EBSD) measurements were cut out of the as-cast buttons or suction-cast rods by electrical-discharge wire cutting. To prepare a suitable observation plane, all the samples were first mechanically polished with SiC grinding paper and then electrolytically polished in a solution of 20% nitric acid in methanol, at room temperature, at a voltage of 12 V for 30 seconds. The samples for transmission electron microscopy (TEM) observation were mechanically thinned to ~100 μm; the thin foils were then electrolytically thinned in a twin-jet device at room temperature with the same solution as mentioned above.

Characterization methods

Optical microscopy

The microstructure observations after polishing were first performed with an OLYMPUS BX61 optical microscope equipped with polarized light. The microstructure evolution during the reverse martensitic transformation of the alloys with martensitic transformation temperatures around room temperature was also observed with the optical microscope: the sample was first pre-cooled into the fully martensitic state and then observed under the optical microscope at room temperature.
Mechanical property testing

The mechanical properties of the as-cast and suction-cast alloys were measured by compressive testing using a CMT5305 Electronic Universal Testing Machine. The load was applied under displacement control at a compression rate of 0.2 mm/min, and each sample was loaded until fracture.

Differential scanning calorimetry

The forward and reverse martensitic phase transformation temperatures of the prepared alloys were measured by differential scanning calorimetry (DSC), using a Q100 DSC device (TA Instruments). During the DSC experiments, two crucibles were placed next to each other: one crucible was empty, functioning as a reference, and the other was filled with the sample to be measured. The two crucibles were heated and cooled under the same experimental settings. The heat flux into and out of the specimen at a specific temperature was determined and recorded by comparing the sample temperature with that of the reference. The heating and cooling rates were set to 10 °C/min under a constant flow of argon. The martensitic transformation temperatures were determined by the tangent method, using the heat flux peaks of the DSC curves recorded on heating and on cooling.

X-ray diffraction

The crystal structures of the alloys were determined by means of X-ray diffraction (XRD). The XRD measurements were performed with a PANalytical X'Pert Pro MPD diffractometer equipped with a heating and cooling stage, using the stress-free powder samples. The XRD patterns were measured in the 2θ range of 20-120° with a step size of 0.0334°, using Cu Kα radiation. The lattice parameters of the alloys were determined from the experimental XRD patterns by fitting the diffraction profiles with the Powder Cell software [109].

Scanning electron microscopy

For detailed microstructural analyses, a field-emission-gun scanning electron microscope (SEM), Jeol JSM 6500 F, was used. In addition to the detection of secondary electrons, the SEM is equipped with a backscattered electron detector, an energy-dispersive X-ray spectrometer (EDS, BRUKER, Germany) and an electron backscatter diffraction camera. The morphological features were characterized by secondary electron (SEI) imaging and backscattered electron (BSE) imaging, and the compositions of the alloys were verified by EDS.

Electron backscatter diffraction (EBSD) was applied for the orientation acquisition. In the present work, the orientation measurements were performed in the field-emission-gun scanning electron microscope (Jeol JSM 6500 F) with an EBSD acquisition camera and the Channel 5 software, at a working voltage of 15 kV. The detailed crystal structure information was input into the Channel 5 software to construct the corresponding database for Kikuchi pattern indexing. The orientation data were acquired both manually and automatically, the beam-control mode being applied for the automatic orientation mapping. To ensure the indexation accuracy when using the monoclinic superlattices of the 7M and 5M martensites, the minimum number of detected bands was set to 10 and 15, respectively, and the maximum allowed MAD (the mean angular deviation between the calculated and the experimental EBSD patterns) was set to 1°. In the case of 5M martensite, as the monoclinic angle is very close to 90°, some misindexation in the orientation maps is unavoidable. For the determination of the correct crystallographic orientations and of the number of variants, the orientation data were therefore first judged and acquired manually, and the misindexed points in the maps were then replaced during post-treatment.
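The kind of post-treatment just mentioned can be sketched as follows (this is an illustrative, generic cleanup routine, not the actual procedure used in this work): isolated misindexed pixels in a map of variant labels are replaced by the most frequent label among their neighbours.

```python
import numpy as np

def clean_variant_map(labels, passes=1):
    """Replace isolated labels that disagree with a clear local majority."""
    lab = labels.copy()
    for _ in range(passes):
        out = lab.copy()
        for i in range(1, lab.shape[0] - 1):
            for j in range(1, lab.shape[1] - 1):
                window = lab[i - 1:i + 2, j - 1:j + 2].ravel()
                vals, counts = np.unique(window, return_counts=True)
                majority = vals[np.argmax(counts)]
                if counts.max() >= 6 and lab[i, j] != majority:
                    out[i, j] = majority
        lab = out
    return lab

m = np.ones((6, 6), int)
m[2, 3] = 2                      # a single misindexed pixel inside variant "1"
print(clean_variant_map(m))      # the isolated pixel is replaced
```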
Coordinate transformation

The crystallographic orientation of each variant is described by a set of Euler angles (φ1, Φ, φ2) transforming the macroscopic sample coordinate system into the orthonormal crystal coordinate system. The transformation between the two coordinate systems can also be represented by the following rotation matrix G and its inverse:

$$G=\begin{pmatrix} \cos\varphi_{1}\cos\varphi_{2}-\sin\varphi_{1}\sin\varphi_{2}\cos\Phi & \sin\varphi_{1}\cos\varphi_{2}+\cos\varphi_{1}\sin\varphi_{2}\cos\Phi & \sin\varphi_{2}\sin\Phi \\ -\cos\varphi_{1}\sin\varphi_{2}-\sin\varphi_{1}\cos\varphi_{2}\cos\Phi & -\sin\varphi_{1}\sin\varphi_{2}+\cos\varphi_{1}\cos\varphi_{2}\cos\Phi & \cos\varphi_{2}\sin\Phi \\ \sin\varphi_{1}\sin\Phi & -\cos\varphi_{1}\sin\Phi & \cos\Phi \end{pmatrix} \quad (2.1)$$

The orthonormal crystal basis {i, j, k} is attached to the monoclinic lattice basis {a_M, b_M, c_M}, as shown in Fig. 2.2. Let (h_M, k_M, l_M) and [u_M, v_M, w_M] be the respective Miller indices of a given lattice plane and direction in the monoclinic crystal system. Their corresponding coordinates (n_1, n_2, n_3) and [u, v, w] in the orthonormal crystal system are then given by (β being the monoclinic angle):

$$\begin{pmatrix}n_{1}\\ n_{2}\\ n_{3}\end{pmatrix}=\begin{pmatrix}\dfrac{1}{a_{M}\sin\beta} & 0 & -\dfrac{\cos\beta}{c_{M}\sin\beta}\\ 0 & \dfrac{1}{b_{M}} & 0\\ 0 & 0 & \dfrac{1}{c_{M}}\end{pmatrix}\begin{pmatrix}h_{M}\\ k_{M}\\ l_{M}\end{pmatrix} \quad (2.2)$$

$$\begin{pmatrix}u\\ v\\ w\end{pmatrix}=\begin{pmatrix}a_{M}\sin\beta & 0 & 0\\ 0 & b_{M} & 0\\ a_{M}\cos\beta & 0 & c_{M}\end{pmatrix}\begin{pmatrix}u_{M}\\ v_{M}\\ w_{M}\end{pmatrix} \quad (2.3)$$

Fig. 2.2 Schematic representation of the monoclinic crystal system and the orthonormal crystal system.

Misorientation calculation

The misorientation between two crystals (described by their orthonormal reference systems) is defined by the set of rotations from one of the symmetrically equivalent coordinate systems of one crystal to an equivalent coordinate system of the other crystal. Let us consider two adjacent variants, 1 and 2. The misorientation between them can be expressed in matrix notation as [101]:

Δg = S_i G_1 G_2^{-1} S_j^{-1}    (2.4)

where Δg is the misorientation matrix; G_1 and G_2 are the rotation matrices transforming the orthonormal sample coordinate system to the orthonormal coordinate systems set to the lattice bases of the respective variants; S_i and S_j are rotational symmetry elements; and the superscript "-1" denotes the inverse of a matrix. Each misorientation Δg, with elements g_ij, can be characterized by a rotation angle θ around a rotation axis d = (d_1, d_2, d_3), where θ = arccos[(g_11 + g_22 + g_33 - 1)/2] and the axis d is obtained as follows:

(1) for 0° < θ < 180°:
d = (d_1, d_2, d_3) = ( (g_23 - g_32)/(2 sin θ), (g_31 - g_13)/(2 sin θ), (g_12 - g_21)/(2 sin θ) )    (2.7)

(2) for θ = 180°:
d = (d_1, d_2, d_3) = ( ±√((g_11 + 1)/2), ±√((g_22 + 1)/2), ±√((g_33 + 1)/2) )    (2.8)
with d_m = max(|d_i|, i = 1, 2, 3) taken positive by convention, and sgn(d_i) = sgn(g_im);

(3) for θ = 0°:
d = (d_1, d_2, d_3) = (1, 0, 0) by convention.    (2.9)
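A compact numerical sketch of the misorientation calculation of Eqs. (2.4)-(2.9) is given below. The two orientations are arbitrary examples; only the identity and the two-fold rotation about b are used as monoclinic symmetry elements, and the θ = 180° special case of Eq. (2.8) is omitted for brevity:

```python
import numpy as np

def rot(axis, deg):
    """Rotation matrix about a given axis (Rodrigues formula)."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return c * np.eye(3) + s * K + (1.0 - c) * np.outer(a, a)

SYM = [np.eye(3), rot([0, 1, 0], 180.0)]   # identity and 2-fold about b

def misorientation(G1, G2):
    """Minimum rotation angle/axis between two variants, Eqs. (2.4) and (2.7)."""
    best = (np.inf, None)
    for Si in SYM:
        for Sj in SYM:
            dg = Si @ G1 @ G2.T @ Sj.T                                  # Eq. (2.4)
            ang = np.degrees(np.arccos(np.clip((np.trace(dg) - 1) / 2, -1, 1)))
            if ang < best[0]:
                best = (ang, dg)
    ang, dg = best
    axis = np.array([dg[1, 2] - dg[2, 1], dg[2, 0] - dg[0, 2], dg[0, 1] - dg[1, 0]])
    n = np.linalg.norm(axis)
    return ang, (axis / n if n > 0 else np.array([1.0, 0.0, 0.0]))

G1, G2 = rot([0, 0, 1], 10.0), rot([1, 1, 0], 95.0)   # arbitrary example orientations
ang, axis = misorientation(G1, G2)
print("theta = %.2f deg, d = %s" % (ang, np.round(axis, 3)))
```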
Crystal structure of 5M martensite

The off-stoichiometric Mn-rich Ni-Mn-Ga polycrystalline alloy, with a nominal composition of Ni50Mn28Ga22 (at.%), was prepared by arc-melting. By means of energy-dispersive X-ray analysis (SEM/EDX), the composition of the alloy was verified to be Ni50.1Mn28.3Ga21.6 (at.%).

It is noted that, in the pioneering EBSD orientation measurements on the same kind of alloys [103-107], the 5M modulated martensite was usually approximated by a non-modulated tetragonal structure (I4/mmm, No. 139) [77], since the monoclinic angle is very close to 90° and the difference between a_5M and c_5M/5 is very small. Using this simplified structure (a = b = 4.226 Å, c = 5.581 Å), the XRD pattern was also recalculated in a similar way, as shown in Fig. 3.1c. It is evident that the diffraction peaks related to the monoclinic distortion, the orthorhombic distortion and the lattice modulation (arrowed positions) disappear in the recalculated pattern. This indicates that approximating the 5M modulated superstructure with a non-modulated tetragonal structure may lead to incorrect EBSD orientation identification, owing to the loss of the important structure-modulation information.

Microstructural features of 5M martensite

Orientation identification of 5M martensite

Orientation relationships between martensite variants

With the correct orientation data of the individual martensite variants determined by the EBSD measurements, the inter-variant orientation relationships can be further calculated in terms of misorientation angle/axis, i.e. a set of misorientation angles around the corresponding rotation axes. Taking into account the possible combinations of the four types of variants A, B, C and D in Fig. 3.3a, the complete sets of misorientation angles (ω) and corresponding rotation axes (d) were calculated according to Eq. (2.4) and are shown in Table 3.1. Owing to the monoclinic symmetry, there are two sets of distinct misorientations (ω/d) for each variant pair.

Based upon the minimum shear criterion, the complete twinning elements (K_1, the twinning plane; η_1, the twinning direction; K_2, the reciprocal or conjugate twinning plane; η_2, the reciprocal or conjugate twinning direction; P, the plane of shear; s, the amount of shear) of the above three types of twins were unambiguously determined using a recently developed general method [33] and are displayed in Table 3.2. It is seen that type I twin (rational K_1 and η_2) and type II twin (rational K_2 and η_1) have the same magnitude of shear s, but K_1 and K_2, and η_1 and η_2, are interchanged. Type I and type II twins can therefore be regarded as conjugate or reciprocal to each other [Christian]. Among the three types of twins, the compound twin (rational K_1, K_2, η_1 and η_2) possesses the smallest twinning shear. Geometrically, all these twin relationships can be equivalently expressed by a minimum misorientation between the two twinned variants. This minimum misorientation may be used as a simple criterion to judge possible twinning relationships between two variants in post-EBSD orientation analysis: ~86° around the [501]_5M direction for the type I twin, ~94° around the normal of the (105)_5M plane for the type II twin, and ~180° around the [5̄01]_5M direction or ~180° around the normal of the (105)_5M plane for the compound twin.
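This minimum-misorientation screening lends itself to a small piece of code. The sketch below (our illustration, not the thesis workflow) computes all symmetry-equivalent misorientation angles of a variant pair using the two proper rotations of the monoclinic point group and classifies the pair with the thresholds quoted above; the tolerance value is an assumption.

```python
import numpy as np

S_MONO = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]  # identity, 2-fold about b (// j)

def rotation_angle(g):
    return np.degrees(np.arccos(np.clip((np.trace(g) - 1.0) / 2.0, -1.0, 1.0)))

def twin_label(G1, G2, tol=3.0):
    """Tentative twin type from the minimum misorientation angle (Eq. 2.4)."""
    angles = [rotation_angle(Si @ G1 @ np.linalg.inv(Sj @ G2))
              for Si in S_MONO for Sj in S_MONO]
    w = min(angles)
    if abs(w - 86.0) <= tol:
        return w, "type I (~86 deg about [501]_5M)"
    if abs(w - 94.0) <= tol:
        return w, "type II (~94 deg about the (105)_5M normal)"
    if abs(w - 180.0) <= tol:
        return w, "compound (~180 deg)"
    return w, "not twin-related"
```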
Characters of twin interface planes

Because the magnetic-field-induced strains in Ni-Mn-Ga alloys are realized by the reorientation of martensite variants through interface motion, the crystallographic nature of the interface planes is of major significance for the efficiency of the magnetic shape memory effect. Here, the twin interfaces between neighboring variants were further analyzed using the indirect two-trace method [113]. The Miller indices of the individual twin interface planes were unambiguously determined from the orientation data of the adjacent variants and the trace vectors of the twin interfaces in the sample coordinate system. To achieve statistical reliability, five groups of twin interface normals were calculated, and the mean values are shown in Table 3.3. For the above three types of twins, the calculated interface planes coincide with their respective twinning planes (K_1 planes) with only a slight deviation, being close to (1 2̄ 5̄)_5M for the type I twin, (1.0569 −2 −4.7155)_5M for the type II twin and (105)_5M for the compound twin. All three types of interfaces can therefore be considered coherent; hence they should have the highest mobility and reversibility compared with the general incoherent high-angle boundaries in polycrystalline materials.

Table 3.3 Mean values of the twin interface normals calculated by the indirect two-trace method [113], expressed in the orthonormal crystal coordinate system, and their deviations from the ideal interface plane normals.

According to the accurate orientation identification of the martensite microstructure (Fig. 3.3a), the geometrical combination of the four differently oriented variants in the plate microstructure is clearly not an optimal solution for the maximum shape memory effect. The existence of a large amount of compound twin interfaces could be considered one of the major reasons for the high actuation stress required in these alloys [43,96,106]. In this connection, the compound twin interfaces should be eliminated to achieve a large strain output. Owing to the very small twinning shear of compound twins, the compound twin interfaces might be readily removed by detwinning via mechanical training, with the training force applied in the inverse twinning direction, i.e. the [5 0 1̄]_5M direction. One can further conceive that, in the most ideal case, only two variants, in either a type I or a type II twin relationship, are left in the plate microstructure after complete detwinning of the compound twins. In such a circumstance, the shape memory process would be achieved by the detwinning or twinning of one variant with respect to the other. As the two types of twins have the same twinning shear, the intensity of the external actuating field would be the same in both cases. The angle between the easy magnetization directions of the two variants is 86.04° in the type I twin relation and 93.96° in the type II relation. To obtain the maximum strain output under a minimum actuating field, the field should be applied along the easy magnetization direction of either of the two variants.

Hereafter, the averaged subcell of the 5M modulated martensite (obtained by ignoring the lattice modulation) will be referred to as the unit cell of a so-called "1M martensite", and the austenite, 1M martensite and 5M martensite will be denoted by the subscripts "A", "1M" and "5M" in this section.
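Before moving on to the OR analysis, the geometric core of the interface determination used above can be sketched. In the indirect two-trace method [113], the second in-plane vector is derived from the orientation data of the adjacent variants rather than from a second sectioning; the simplified version below (our sketch, under that caveat) simply crosses two known in-plane vectors and converts the normal into monoclinic Miller indices, with M the structure matrix introduced with Eq. (2.3).

```python
import numpy as np

def interface_plane(t1_sample, t2_sample, G, M):
    """Return the (h k l) of a planar interface, up to a common factor.

    t1_sample, t2_sample: two independent vectors lying in the interface,
                          expressed in the sample frame;
    G: sample -> orthonormal-crystal rotation of one adjacent variant;
    M: monoclinic structure matrix (Eq. 2.3)."""
    n_sample = np.cross(t1_sample, t2_sample)   # plane normal, sample frame
    n_crystal = G @ n_sample                    # normal in the crystal frame
    hkl = M.T @ n_crystal                       # inverting Eq. (2.2): hkl = M^T n
    return hkl / np.abs(hkl).max()
```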
Orientation relationship between austenite and 5M martensite

Since the martensitic transformation is diffusionless and realized by coordinated displacements of atoms, specific ORs between the parent and product phases are required to minimize the lattice discontinuity across the phase boundary. In most cases, these ORs are determined by finding a plane and in-plane direction parallelism, making use of the coexistence of retained parent austenite and product martensite. For the present Ni50Mn28Ga22 alloy, however, the martensitic transformation is complete at room temperature, i.e. there is no residual austenite, and a direct determination of the OR between the austenite and the martensite is not possible. An alternative is to verify the austenite orientations calculated, under an assumed OR, from the orientations of the martensite variants induced from the same initial austenite grain [108]. If the austenite orientations calculated from all individual martensite variants inherited from the same parent grain share a common orientation, the assumed OR could be the one governing the martensitic transformation, and the resultant common orientation is the orientation of the initial austenite grain. The austenite orientations G_A^l with respect to the sample coordinate system can be recalculated by the following equation in matrix notation [108]:

$$G_A^l = G_M^k \left( S_M^i \, T \, S_A^j \right)^{-1} \qquad (3.1)$$

where G_M^k represents the measured orientation of the k-th martensite variant with respect to the sample coordinate system; T is the rotation matrix transforming the orthonormal crystal coordinate system fixed to the monoclinic martensite lattice to the austenite lattice basis under the given OR; and S_A^j (j = 1, 2, ..., 24) and S_M^i (i = 1, 2) are the respective cubic (austenite) and monoclinic (martensite) symmetry elements. For a fixed T, a total of 48 different austenite orientations may be generated from one martensite variant with Eq. (3.1) on account of the symmetry elements of the cubic and monoclinic bases, but they are not all symmetrically independent and reduce to a smaller number of physically distinct orientations.

From a survey of the literature, the widely addressed Bain [22], K-S [26], N-W [27,28] and Pitsch [30] ORs are presumed as possible ORs between the parent austenite and the product martensite in the present work, as listed in Table 3.4. The assumed plane and in-plane direction parallelisms are first used to specify the ORs between the austenite and the 1M martensite:

K-S relation: (111)_A // (011)_1M & [10 1̄]_A // [1̄ 1̄ 1]_1M
N-W relation: (111)_A // (011)_1M & [11 2̄]_A // [0 1̄ 1]_1M
Pitsch relation: (101)_A // (1 2̄ 1̄)_1M & [10 1̄]_A // [1̄ 1̄ 1]_1M

Based on the above considerations, the orientations of the parent austenite were calculated from the measured orientations of locally adjacent, four twin-related martensite variants, using the possible ORs listed in Table 3.4. Here, the four twin-related variants in one broad plate were treated as a variant group. To achieve statistical significance, the variant orientation data measured from six different groups (denoted g1, g2, ..., g6) served as the initial input data. For easy visualization, the calculated austenite orientations under a given OR were plotted in the {001} standard stereographic projection in the macroscopic sample coordinate frame.
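A compact sketch of Eq. (3.1) is given below (Python/NumPy, our illustration). The 24 proper cubic rotations are generated as signed permutation matrices; the rotation T encoding the assumed OR is deliberately left as an input, since constructing it from the plane/direction parallelisms involves choices not spelled out here.

```python
import numpy as np
from itertools import permutations, product

def cubic_rotations():
    """The 24 proper rotations of the cube: signed permutations with det = +1."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            R = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                R[row, col] = s
            if np.isclose(np.linalg.det(R), 1.0):
                mats.append(R)
    return mats

S_A = cubic_rotations()                        # 24 austenite symmetry rotations
S_M = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]  # 2 monoclinic proper rotations

def austenite_candidates(G_M, T):
    """All 48 candidate austenite orientations from one variant, Eq. (3.1)."""
    return [G_M @ np.linalg.inv(Si @ T @ Sj) for Si in S_M for Sj in S_A]
```

Variants inherited from the same grain must share (close to) one common candidate; the assumed OR giving the smallest scatter of these common candidates is retained.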
As the resultant martensite of the present material possesses a modulated crystal structure, the information on structural modulation should be taken into account when evaluating the deviation from the ideal OR between the two phases. The structural modulation can be considered an atomic reshuffling in each subcell of the supercell with respect to the "averaged unit cell". Clearly, this modulation results in certain angular deviations of the same-indexed planes and directions in each subcell of the 5M martensite with respect to those in the averaged unit cell.

Based on the determined Pitsch OR between the two phases, the theoretical number of martensite variants induced from the same austenite grain can be predicted. The possible orientations of the martensite variants with respect to the sample coordinate system are given by:

$$G_M^k = G_A^l \left( S_A^j \, T \, S_M^i \right)^{-1} \qquad (3.2)$$

Due to the cubic symmetry of the austenite and the monoclinic symmetry of the martensite, at most 24 martensite variants can be inherited from the same austenite grain under the Pitsch OR. As the former EBSD analysis has revealed that only 4 variants appear in one variant group and that they are twin-related to one another, these 24 variants can be divided into 6 groups. Thus, the formation of self-accommodated martensite in one initial austenite grain is realized first by the combination of 4 twin-related variants in an individual group and then by the combination of different groups over the entire grain. Such a configuration of the product microstructure should ensure a minimum lattice discontinuity between the parent austenite and the martensite.

Summary

The crystal structure, microstructural features and twin relationships of the 5M martensite, together with the phase transformation OR, were investigated in the bulk polycrystalline Ni50Mn28Ga22 alloy. Considering the minimum misorientation angles between the calculated austenite orientations, the most favorable OR governing the transformation from austenite to 5M martensite was revealed to be the Pitsch relation, with (101)_A // (1 2̄ 5̄)_5M and [10 1̄]_A // [5̄ 5̄ 1]_5M, in the absence of residual austenite. Under such an OR, at most 24 variants can be induced from the same austenite grain by the martensitic transformation.

7M martensite

7M martensite, the other kind of modulated martensite in Ni-Mn-Ga alloys, can generate a much larger magnetic-field-induced strain (~10%) than 5M martensite. So far, crystal structure studies of these materials by TEM have suffered from uncertainties in determining the number of subcells of the modulated superstructure, i.e. commensurate versus incommensurate, and consequently from improper interpretations of the orientation correlations of the martensite variants. In this section, the microstructural and crystallographic characteristics of 7M martensite in a polycrystalline Ni50Mn30Ga20 alloy were investigated by EBSD with the application of the correct incommensurate superstructure information of 7M martensite (7M(IC)). The orientation relationships of adjacent martensite variants and their twin interface characters in the incommensurate 7M martensite were unambiguously determined. With accurate orientation measurements on the inherited martensite variants, the local orientations of the parent austenite grains were predicted using four classical ORs for the martensitic transformation. Furthermore, a specific OR between austenite and 7M martensite was unambiguously determined by considering the magnitude of the discontinuity between the lattices of the product and parent phases and the structural modulation of the incommensurate 7M modulated martensite.
Phase transformation temperatures and mechanical properties

A polycrystalline Ni-Mn-Ga alloy with a nominal composition of Ni50Mn30Ga20 (at.%) was prepared. To reduce microcracks and porosity, the ingots were remelted and suction-cast into a chilled copper mold. The actual composition was determined to be Ni49.6Mn30.4Ga19.9 by energy-dispersive X-ray spectrometry (EDS). The phase transformation temperatures were measured by differential scanning calorimetry (DSC): the martensitic transformation start and finish temperatures (M_s, M_f) were determined to be 88.7 °C and 78.3 °C, and the reverse transformation start and finish temperatures (A_s, A_f) were 87.7 °C and 97.2 °C, respectively. The compressive stress-strain curves of the as-cast and suction-cast alloys are presented in Fig. 3.8. For the button without suction casting, the maximum compressive strength and compressive strain are 385 MPa and 8.6%, respectively, while for the suction-cast rod the corresponding values reach 705 MPa and 10.1%. The compressive strength and the deformation capacity of the suction-cast alloy are thus increased by 83.1% and 17.4%, respectively, compared with common casting. This can be attributed to the smaller grain size and the fewer microcracks and pores in the suction-cast rod, owing to its faster cooling rate.

[Figure: measured pattern and its indexation with the incommensurate 7M modulation (7M(IC)) structure model.]

Crystal structure of 7M martensite

Determination of twin relationships and twin interfaces of 7M martensite

To determine the orientation relationships between adjacent variants, the misorientations were calculated according to Eq. (2.4) using the orientation data of the corresponding variants in Fig. 3.11a; the results are shown in Table 3.6. For each variant pair there exist two sets of misorientations (ω/d) owing to the monoclinic point symmetry. Taking possible experimental errors into account, there is only one 180° rotation between A and C (or B and D) and between A and B (or C and D), but two 180° rotations, with mutually perpendicular rotation axes, between A and D (or B and C). This suggests that all the variant pairs are twin-related according to the classical definition of twins [Christian, 112]. For variant pair A:C, the 180° rotation axis is close to the normal of the rational plane (1 2̄ 10̄)_7M (with 0.41° deviation) of the monoclinic lattice, indicating that the two variants are reflections of each other with respect to this rational plane; for variant pair A:B, the 180° rotation axis is close to a rational direction of the monoclinic lattice, indicating a type II twin relation.

According to the minimum shear criterion [33], the complete twinning elements of the above three types of twins were unambiguously determined, as displayed in Table 3.7. As in the case of the 5M martensite, type I and type II twins have the same magnitude of shear s, with K_1 and K_2, and η_1 and η_2, interchanged. It is noted that the local variant number and the twin types of the 7M martensite are consistent with those of the 5M martensite, which should be due to the fact that both possess monoclinic crystal structures. However, the twinning shear of the 7M martensite for each kind of twin is much larger than the corresponding value of the 5M martensite, meaning that a larger force is needed for variant reorientation in the 7M martensite.
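Returning briefly to the property data at the head of this section, the quoted improvements follow directly from the stated numbers; the hysteresis estimate below uses one common convention and is our addition, not a value from the thesis.

```python
Ms, Mf, As_, Af = 88.7, 78.3, 87.7, 97.2   # transformation temperatures (deg C)
hysteresis = (As_ + Af) / 2.0 - (Ms + Mf) / 2.0
print(round(hysteresis, 2))                 # ~8.95 K between forward/reverse peaks

s_cast, s_suction = 385.0, 705.0            # maximum compressive strength (MPa)
e_cast, e_suction = 8.6, 10.1               # maximum compressive strain (%)
print(round(100 * (s_suction / s_cast - 1), 1))   # 83.1 % strength increase
print(round(100 * (e_suction / e_cast - 1), 1))   # 17.4 % strain increase
```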
Since the magnetic-field-induced strains of Ni-Mn-Ga alloys are achieved by the reorientation of martensite variants through the motion of variant interfaces, insight into the crystallographic nature of these interface planes is of both theoretical interest and practical significance. Here, using the indirect two-trace method [113], the Miller indices of the individual twin interface planes were unambiguously determined. To achieve statistical reliability, the results were collected from five groups of variants with different orientations; the mean values are shown in Table 3.8.

Orientation relationship between austenite and 7M martensite

Since the martensitic transformation is complete at room temperature for the Ni50Mn30Ga20 alloy, the orientation relationship (OR) between austenite and 7M martensite was determined by the indirect method, as in the case of the 5M martensite. The classical Bain [22], K-S [26], N-W [27,28] and Pitsch [30] ORs listed in Table 3.4 are presumed as possible ORs between the parent austenite and the product martensite, with the assumed plane and in-plane direction parallelisms first used to specify the ORs between the austenite and the 1M martensite. On this basis, the orientations of the austenite were calculated according to Eq. (3.1), and the minimum deviation angles between the austenite orientations calculated from each martensite pair were estimated, as shown in Table 3.9. Among all the selected variant groups, both the K-S and the Pitsch OR deliver the smallest deviation angle, with almost no difference in the resultant austenite orientations, suggesting that both could be considered possible transformation ORs. This result differs from that of the 5M martensite, where the Pitsch OR is favorable without any ambiguity.

It is seen from Table 3.10 that the deformation components representing the discontinuity between the lattices of austenite and 1M martensite are not the same under the K-S and Pitsch ORs. Compared with the K-S OR, the Pitsch OR involves a relatively small lattice discontinuity (only the last two shear components in Table 3.10 are slightly elevated) for the austenite to 1M martensite transformation. Considering that an elongation or contraction requires a volume change whereas a simple shear does not, deformation by simple shear may occur more easily. In this sense, the Pitsch OR is somewhat energetically advantageous over the K-S OR for the austenite to 1M martensite transformation.

Furthermore, the information on structural modulation should be exploited to discriminate the energetically more favorable OR. For the incommensurate 7M modulation, the tenfold superstructure can be produced in such a way that a set of ten consecutive average unit cells of the 1M martensite is sheared into the corresponding waved subcells of the 7M martensite by a monoclinic angle. The same-indexed planes and directions - expressed respectively in the 5 distinct subcells of the 7M martensite and in the average unit cell of the 1M martensite - therefore show certain angular deviations, both for the plane (Fig. 3.17a) and for the in-plane direction (Fig. 3.17b).

Once an assumed OR is proved valid for the martensitic transformation, the possible number of martensite variants within one original austenite grain can be predicted according to Eq. (3.3).
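The variant count implied by this prediction can be verified numerically. The sketch below (our illustration) enumerates the candidate orientations over all symmetry elements and counts the equivalence classes under the martensite point-group rotations; any generic rotation T standing in for the Pitsch correspondence yields 24 classes.

```python
import numpy as np
from itertools import permutations, product

def cubic_rotations():
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            R = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                R[row, col] = s
            if np.isclose(np.linalg.det(R), 1.0):
                mats.append(R)
    return mats

S_A = cubic_rotations()
S_M = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]

def count_variants(T, G_A=np.eye(3), tol=1e-8):
    """Number of physically distinct martensite orientations from one grain."""
    candidates = [Sm @ T @ Sa @ G_A for Sa in S_A for Sm in S_M]
    classes = []
    for g in candidates:
        if not any(np.allclose(g, Sm @ h, atol=tol) for h in classes for Sm in S_M):
            classes.append(g)
    return len(classes)

# A generic T (here built from two arbitrary rotations) gives 24 distinct variants:
rx, rz = np.radians(11.0), np.radians(23.0)
Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
print(count_variants(Rz @ Rx))    # -> 24
```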
Because of the cubic symmetry of the austenite and the monoclinic symmetry of the 7M martensite, at most 24 physically distinct martensite variants can be inherited from the same austenite grain under the Pitsch OR. As the EBSD analysis has revealed that only 4 variants appear in each variant colony (Fig. 3.11a) and that they are twin-related to one another, these 24 variants can be divided into 6 groups. One may thus envisage that the formation of self-accommodated martensite variants in one initial austenite grain is realized first by the combination of 4 twin-related variants in an individual group and then by the combination of different groups over the entire grain, ensuring the minimum transformation strain and hence the lowest transformation energy consumed.

Summary

It has been demonstrated that the EBSD technique can be used as an advanced tool for unambiguously determining the orientation relationships of martensite variants with modulated superstructures. Such an approach to determining the OR from the measured orientations of martensite variants - irrespective of the presence of retained austenite - can easily be adapted to various martensitic transformations that produce martensite with a modulated superstructure.

Non-modulated martensite

NM martensite exhibits only a negligible magnetic shape memory strain; however, NM martensitic alloys are reported to possess better mechanical properties than the modulated martensites, and some of them show a good magnetocaloric effect owing to the co-occurrence of the magneto-structural transition. With proper composition modification to raise the martensitic transformation temperatures, NM martensitic alloys also show potential as high-temperature shape memory alloys. In this section, non-modulated Ni54Mn24Ga22 alloys were prepared by arc-melting and suction casting. The grain size was refined by suction casting, improving the mechanical properties. The morphology and the crystallographic features were further analyzed by electron backscatter diffraction (EBSD) and crystallographic calculation.

Phase transformation temperatures and crystal structure

Compressive properties

The compressive stress-strain curves of the as-cast and suction-cast alloys are shown in Fig. 3.20. For the as-cast button, the maximum compressive strength and compressive strain are 670 MPa and 8.5%, respectively; for the suction-cast rod, the corresponding values are 990 MPa and 18%. Better mechanical properties are thus achieved by suction casting, which can be attributed to the relatively small grain size and the fewer microcracks and pores in the suction-cast rod compared with the as-cast button.

Cracks were found to propagate mainly along the austenite grain boundaries, extending to the boundary of the neighboring grain. It is also found that some cracks appear among the martensite plates, as shown in Fig. 3.21b, suggesting that cracks can also nucleate and propagate inside the martensite plates, leading to intragranular fracture. (The coordinate frame X0-Y0-Z0 in the corresponding maps refers to the macroscopic sample coordinate frame.)

The paired fine lamellae in each plate were found to be compound-twin related according to the misorientation calculations. An example of the misorientation calculation results for two fine lamellae in plate P1 is given in Table 3.11. There are two sets of ~180° rotations whose rotation axes are perpendicular to each other, indicating that the two variants are twin-related.
One ~180° rotation axis is near the normal of {112}_Tet, with 0.40° deviation, and the other is near <11 1̄>_Tet, with 0.32° deviation. The twinning elements were unambiguously determined according to the minimum shear criterion as follows: K_1 = {112}_Tet; K_2 = {11 2̄}_Tet; η_1 = <11 1̄>_Tet; η_2 = <111>_Tet; P = {110}_Tet; s = 0.393.

Determination of inter-plate and inter-lamellar interfaces

The interfaces between the neighboring plates (marked with red lines in Fig. 3.23b) were determined using the indirect two-trace method [113]. Table 3.12 shows the calculated indices of 14 inter-plate interfaces expressed in the orthonormal basis. The mean interface indices were determined to be {0.565129, -0.499292, 0.656763}. In the tetragonal basis, the mean value of the determined inter-plate interface plane is {1, -0.883501, 1.998241}_Tet, which is 3.15° from the {1 1̄ 2}_Tet plane, indicating that the inter-plate interface is not fully coherent. The interfaces between the neighboring lamellae (marked with green lines in Fig. 3.23b) were also calculated; the twin interface planes are in good agreement with the {112}_Tet twinning plane and are thus coherent.

Summary

Ni54Mn24Ga22 alloys were prepared by arc-melting and suction casting. The two processed alloys have the identical tetragonal crystal structure, but the grain size was refined by suction casting, resulting in an increase of the transformation hysteresis and an improvement of the mechanical properties. Fracture occurs mainly along the austenite grain boundaries. Adjacent plates have misorientations of 80°-85° around <110>_Tet axes. Locally, there are four types of martensite plates and each plate consists of paired fine variants; a martensite colony thus contains a total of eight variants. The paired fine variants in each plate are compound-twin related, with {112}_Tet as the twinning plane and <11 1̄>_Tet as the twinning direction. The inter-plate interfaces are close to the {1 1̄ 2}_Tet plane but with ~3° deviation, while the interfaces between two paired fine variants are in good agreement with the {112}_Tet twinning plane.

Chapter 4 Austenite-7M-NM transformation

4.1 Formation of self-accommodated 7M martensite

Generally, in a transformation from a high-symmetry austenite to a low-symmetry martensite, more than one martensite variant can be induced in the same austenite grain to minimize the macroscopic transformation strain. Since the crystal structures of the two phases are different, elastic strain is generated by the crystal lattice mismatch between the martensite and its parent phase, as well as between differently oriented variants of the martensite. Reducing the strain energy is an essential factor in the nucleation and growth processes of the martensitic transformation; this can be achieved through the formation of unrotated and undistorted phase boundaries and the self-accommodation of martensite variants. In this section, polycrystalline Ni-Mn-Ga alloys with the nominal composition Ni53Mn22Ga25 were selectively prepared so that the morphological characters associated with the martensitic transformation could be observed at room temperature. A detailed microstructural investigation of the coexisting austenite and incommensurate 7M modulated martensite was performed by EBSD measurements. The formation of diamond-shaped martensite was evidenced at the beginning of the transformation, and the mechanism of self-accommodation during nucleation and growth is further discussed.
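As a brief aside before examining the transformation sequence: the ~3° interface deviation quoted in the NM analysis above can be reproduced from the reciprocal metric of the tetragonal lattice. In the sketch below the lattice parameters are placeholders (the thesis values for the NM phase are not restated in this excerpt), so the printed angle illustrates the procedure rather than the exact 3.15°.

```python
import numpy as np

def plane_angle_tetragonal(hkl1, hkl2, a, c):
    """Angle (deg) between two plane normals in a tetragonal lattice."""
    def g(hkl):
        h, k, l = hkl
        return np.array([h / a, k / a, l / c])   # reciprocal-lattice vector
    g1, g2 = g(hkl1), g(hkl2)
    cosang = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Measured mean interface vs. the nearest rational plane (placeholder a, c):
print(plane_angle_tetragonal((1.0, -0.883501, 1.998241), (1.0, -1.0, 2.0),
                             a=3.9, c=6.5))
```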
Martensitic transformation temperatures and crystal structure

The martensite first appears in diamond-shaped groups of four variants (Fig. 4.5a). The "diamond" then gradually grows into paired plates, as shown in Fig. 4.5b. Occasionally, the growth of the martensite variants is realized by the formation of a fork configuration (A:D or B:C pair), as shown in Fig. 4.5c, where one thin plate bifurcates away from the other plate; the two adjacent bifurcated thin plates also tend to form a spear. With the above-mentioned two types of growth manner for the "diamond", a large number of martensite plates form, always keeping a spear configuration adjacent to the austenite matrix. In TEM observations (Fig. 4.7), stacking faults on the basal plane can be recognized in the martensite plates. The stacking faults should act as a kind of complementary shear [117] to accommodate the transformation strain and to achieve the formation of an invariant plane between the austenite and the martensite.

Confirmation of the orientation relationship between austenite and 7M martensite

By crystallographic calculation using the orientation data of the two coexisting phases, the energetically favorable OR between the austenite and the 7M martensite was confirmed to be the Pitsch relation, consistent with the indirect determination above.

Experimental determination of the habit plane

The habit plane is believed to be an invariant plane, unrotated and undistorted on the macroscopic scale. In the present study, the habit plane between the austenite and the 7M martensite was determined by the indirect two-trace method [113]. To achieve statistical reliability, sixteen habit plane normals were calculated; the results, expressed in the coordinate frame of the austenite, are listed in Table 4.1.

Calculation based on the crystallographic phenomenological theory

In general, the martensitic transformation tends to proceed in a self-accommodated manner to minimize the total transformation strain when no external stress is applied. To further analyze the crystallography of the martensitic transformation and the self-accommodation mechanism of the 7M martensite, theoretical calculations were performed with the crystallographic phenomenological theory of martensitic transformation [23], which has been successfully applied to the prediction of crystallographic parameters in many shape memory alloys [118-122]. The prediction is based on the assumption that the interface between the austenite and its product phase is invariant on a macroscopic scale. In the present austenite (cubic) - 7M martensite (monoclinic) system, the lattice-invariant shear was supposed to occur on the basal plane in either of the two opposite crystallographic directions, corresponding to the {101}_A <10 1̄>_A system in the parent phase. As the 7M plate possesses a modulated structure, the structure itself (lattice modulation) provides the main contribution to the lattice-invariant shear, and the remainder is balanced by stacking faults; the faulted nature of the martensite plates has been revealed by TEM observation, as illustrated in Fig. 4.7. Inputting the lattice parameters of the austenite and the martensite into the phenomenological calculation yields 24 pairs of habit plane normal and shape deformation direction, owing to the symmetry of the cubic system, corresponding to 24 variants. The 24 variants are divided into 6 groups, each composed of four twin-related variants distributed around one of the {101}_A poles.
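In the phenomenological theory, the shape deformation of a single variant is an invariant-plane strain, P = I + m·d⊗p, with habit-plane normal p, shape-deformation direction d and magnitude m. The sketch below uses the theoretical magnitude and habit normal quoted in the next paragraph; the direction d is a normalized placeholder, since Table 4.2 is not reproduced here.

```python
import numpy as np

m = 0.09596                                    # shape deformation magnitude
p = np.array([0.720332, 0.692379, 0.041627])   # habit plane normal (austenite frame)
d = np.array([-0.66, 0.74, 0.11])              # hypothetical deformation direction
d /= np.linalg.norm(d)

P = np.eye(3) + m * np.outer(d, p)             # invariant-plane strain

# Any vector x lying in the habit plane (x . p = 0) is left invariant:
x = np.cross(p, np.array([0.0, 0.0, 1.0]))
print(np.allclose(P @ x, x))                   # True
```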
As an example, Table 4.2 shows the habit plane normals, shape deformation directions and shape deformation matrices of variants A, B, C and D in the group around the (101)_A pole. The magnitude of the shape deformation is 0.09596 for each variant, but the shape deformation direction differs from variant to variant. The theoretical habit plane normal was predicted to be {0.720332, 0.692379, 0.041627}_A, consistent with the experimental results within a 2.11° deviation. The shape deformation matrix of a single variant is clearly far from the identity matrix, and the magnitude of the deformation, 0.09596, is quite large. Supposing that four variants constitute a diamond group with equal volume fractions (1/4), the total shape deformation matrix can be approximated by summation. Taking the (101)_A group as an example, the resulting total shape deformation matrix is quite close to the identity matrix; the transformation strains are thus effectively cancelled out by the combination of the four variants. Hence, the "diamond" as a whole is an energetically feasible, self-accommodated combination when formed from the parent phase. Further compensation of the transformation strain can be achieved by combining different variant groups: the total shape deformation matrix over all 6 variant groups is still closer to the identity matrix.

The total shape deformation matrices of the type I and type II twin pairs are also quite close to the identity matrix compared with the deformation matrix of a single variant; these pairs are therefore self-accommodated. However, the total shape deformation matrix of the compound twin pair is not close to the identity matrix, so the compound twin cannot effectively accommodate the transformation strain.

Self-accommodation is a process that minimizes the transformation strain and hence the strain energy, and it should guarantee an invariant interface between austenite and martensite. Considering the diamond model, and without losing coherence at the phase interfaces, the initial growth proceeds through the coordinated mutual movement of the four habit planes into the austenite matrix. This expansion becomes blocked by the elastic strain accumulated with the increasing volume fraction of transformed martensite, as the elastic strain energy is proportional to the volume of the martensite [123]. Further growth of the diamond can then proceed through the extension of either the type I twin (A:C or B:D) or the compound twin (A:D or B:C), if the chemical driving force is sufficient. As the compound twin is not self-accommodated, growth of a variant group through the extension of the compound twin is not favorable; the spears (type I twin) should therefore be responsible for the variant growth. Comparing the total shape deformation matrix over variants A and C (or B and D) with that over variants A, B, C and D, some residual unaccommodated elastic strain still exists [124]. This residual strain can be further accommodated by introducing the type II twin pair, as in the situation of Fig. 4.5a and b, which renders the growth of the martensite through the forward progression of the spear (type I twin). As the martensite "diamond" consists only of type I and compound twin systems, the type II twin should be a secondary twin formed by further shear of variant pair A and D (or B and C) during the growth process.
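The near-cancellation argument can be illustrated numerically. In the toy example below the four (p, d) pairs are constructed to compensate exactly - the real pairs are those of Table 4.2, which is not reproduced here - so the averaged matrix returns to the identity by construction; with the actual variant data the average is close to, but not exactly, the identity.

```python
import numpy as np

m = 0.09596
p0 = np.array([0.720332, 0.692379, 0.041627])
d0 = np.array([-0.66, 0.74, 0.11])
d0 /= np.linalg.norm(d0)

# Illustrative compensating set standing in for variants A, B, C, D:
pairs = [(p0, d0), (-p0, d0), (p0, -d0), (-p0, -d0)]
P_total = sum(np.eye(3) + m * np.outer(d, p) for p, d in pairs) / 4.0
print(np.round(P_total, 6))    # identity matrix: the shears cancel
```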
Locally, the type I and type II twins are bridged by the compound twin, and in the final martensite microstructure the compound twin interface always appears together with its neighboring type I and type II twin interfaces as a whole. If the chemical driving force cannot overcome the elastic barrier to induce the forward movement of the spears, the martensite tends to form much thinner plates to reduce the elastic strain energy, as with the thin plate bifurcating from the other plate in Fig. 4.5c.

Summary

The structural and microstructural characters of the transformation from austenite to 7M martensite in the Ni53Mn22Ga25 alloy were investigated by EBSD measurements and crystallographic analysis. The results showed that the modulated martensite has its own crystal structure with an incommensurate 7M modulation, rather than a nanotwin-combined structure.

Based upon XRD measurements and the concept of the adaptive phase, Kaufmann et al. [81] examined the coexisting austenite, 7M and NM martensite in an epitaxial Ni-Mn-Ga film. They concluded that the 7M modulated martensite can simply be constructed from nanotwinned lamellae of a tetragonal martensite phase with a (5 2̄)₂ stacking sequence, ruling out the existence of an independent modulated structure. The 7M modulated martensite phase built up from microscopic twins may evolve into the final NM martensite phase by thickening of the nanotwin width through branching [81]. Notably, this controversy on the long-period modulated structure remains unresolved for many alloy systems; indeed, it is very difficult to discriminate directly between the two structure models by either diffraction or thermodynamic measurements.

Lattice modulation and nanotwin combination

The two structure models of the modulated phase were further tested against the adaptive-phase criteria [130]. The lattice constants of the 7M(IC) structure and of the nanotwin combination were expressed in the cubic austenite coordinate system and compared with those calculated using the relation proposed by the nanotwin combination theory [130] (a_ad = c_Tet + a_Tet − a_A; b_ad = a_A; c_ad = a_Tet). The results are given in Table 4.3. Clearly, both sets of lattice constants agree very well with the calculated ones. This indicates that the verification criterion of the nanotwin combination theory is not sufficient to discriminate between the two structures of the modulated phase. The two models also give the same results concerning the types of twins, but some differences appear in the twinning elements, as detailed in Table 4.4.

EBSD measurements on the coexistence of three phases

Inter-plate interface

To visualize the atomic match at the plate interface, the atomic correspondences of the type I twin interface were constructed under the two structure models and are displayed in Fig. 4.12. For the 7M(IC) structure (Fig. 4.12a), some of the atoms on the interface deviate slightly from their exact equilibrium positions due to the structure modulation. The plate interface is basically coherent and should thus have low interfacial energy. This coherence surely provides good mobility in the twinning and detwinning processes. The twinning configuration (the twinning shear plane and direction parallel to the plate interface) ensures that, if only two variants (plates) exist in the material, one variant can easily be reoriented into the other by twinning or detwinning through plate interface movement. This feature has been well observed in many magnetic-field-driven shape memory experiments on Ni-Mn-Ga alloys [8,89,90].
In contrast, if the modulated martensite had the (5 2̄)₂ nanotwin combination structure, then although the crystal structure would differ only slightly from the 7M(IC) structure, the plate interfacial features and the in-plate structure (atomic shuffling in 7M(IC) plates versus nanotwins in nanotwin-combined plates) would be very different, as shown in Fig. 4.12b. To further quantify the atomic misfits at the plate interface under the two structure models, the average atomic displacement (r̄) of the atoms on the plate interface from their equilibrium positions was calculated using the L2 norm:

$$\bar{r} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} r_i^2} \times 100\% \qquad (4.7)$$

Here, N represents the number of atomic layers in one modulation period (N = 20 for the 7M(IC) structure and N = 14 for the nanotwin combination structure), and r_i is the displacement of the i-th atom on the plate interface from its equilibrium position. The results show that the average displacement is 1.52% for the 7M(IC) structure and 3.81% for the nanotwin combination structure. The deviation at the plate interface is thus much larger for the nanotwin combination than for the 7M(IC) structure. Consequently, a high plate interfacial energy resulting from the atomic displacements would be expected for the nanotwin combination model. The thickening of the NM plates to reduce the specific interface area, displayed in Fig. 4.10a, indirectly proves that the plate interfaces of the nanotwin-combined martensite should possess high interfacial energy.

Furthermore, the large mismatch at the plate interfaces under the nanotwin combination model would impose significant constraints on variant reorientation. For plate reorientation to achieve the shape memory effect, two barriers have to be overcome. First, no uniform shear magnitude and shear system is available for the easy, coordinated atomic displacement in plate reorientation that is essential for a reversible external-field-induced shape change. Second, the coherent nanotwin interfaces densely distributed inside the plate act as pins against the movement of the plate interface. Under such circumstances, the plate reorientation resistance is inevitably enhanced. The loss of magnetic-field-induced shape change in tetragonal NM martensite is also well recognized in many Ni-Mn-Ga alloys [89]; such experimental evidence indirectly supports the above analysis.

The main argument for the nanotwin combination model is that the nanotwin-combined structure could offer a reduced lattice mismatch with respect to the lattice of the parent austenite [130]. In fact, the 7M(IC) model offers an even smaller lattice mismatch. The lattice misfits of the two structure models with respect to the cubic austenite are expressed as stress-free strain tensors [130]; the non-zero terms of these tensors (ε11, ε22, ε33 and the shear component) are shown in Table 4.5. It is evident that the nanotwin combination has a higher lattice mismatch than the 7M(IC) modulated structure. All the above results demonstrate that the modulated martensite in the Ni-Mn-Ga alloy has its own crystal structure, rather than being a nanotwin combination of the tetragonal NM martensite. The transformation from the modulated martensite to the final NM martensite is realized by further lattice distortion, and this might significantly degrade the magnetic-field-induced shape memory performance.

Summary

In summary, we present experimental evidence for the thermodynamic stability and the crystal structure of the long-period modulated martensite in Ni-Mn-Ga alloys.
It is proved that the modulated martensite is an intermediate state between the austenite and the NM martensite. It possesses its own crystal structure, instead of being composed of nanotwins of the NM martensite (simple tetragonal structure) as proposed by the nanotwin combination theory. The 7M(IC) structure generates a reduced number of local variants and a more favorable configuration for twinning and detwinning, which is essential to the attainability of the magnetic-field-induced shape change. The present results provide unambiguous evidence to clarify the long-debated issue concerning the nature of the long-period modulated phases.

In the microstructure where the two martensites coexist, the 7M and NM plates stretch in roughly the same direction. In general, the plates of the 7M martensite are thinner than those of the NM martensite. On approaching the NM martensite, some 7M plates tend to thicken until their width is close to that of the NM plates, indicating that the transformation from 7M to NM martensite is accompanied by a reduction of the specific area of the NM plate interface. This tendency suggests that the plate interfacial energy of the NM martensite is higher than that of the 7M martensite: by thickening the plates, the total interfacial area, and thus the interfacial energy, is lowered. (The coordinate frame X0-Y0-Z0 in the corresponding maps refers to the macroscopic sample coordinate system.)

For the tetragonal NM martensite, as analyzed above, the neighboring fine lamellae in each plate are also found to be compound-twin related, with (112)_Tet as the twinning plane and [11 1̄]_Tet as the twinning direction; the full twinning elements are given in Table 4.6. The inter-lamellar interfaces (marked with green lines in Fig. 4.13a) were determined to be the twinning planes ((112)_Tet) and are thus coherent. At the inter-plate interfaces of the NM martensite (marked with red lines in Fig. 4.14a), two thick lamellae and two thin lamellae from neighboring plates intersect. Where the thick lamellae meet, i.e. L1 and L3, as illustrated at the upper right corner of Fig. 4.14a, they appear to have a (1 1̄ 2)_Tet[1̄ 1 1]_Tet twin relationship, but with certain deviations in the twinning plane and the twinning direction. The angular deviations calculated from the experimental orientation data are 5.91° between the twinning planes and 3.89° between the twinning directions, respectively. Further calculation shows that the plate interface of the NM martensite is oriented close to the (112)_Tet planes of the thick lamellae, with ~3° deviation.

Orientation relationship between 7M martensite and NM martensite

To reveal more precisely the transformation process and the atomic arrangements at the 7M and NM plate interfaces, the OR between the 7M and NM martensites was determined. Calculations show that the 7M and NM martensites possess a specific OR with (001)_7M // (112)_Tet and [100]_7M // [11 1̄]_Tet, i.e. the shuffling plane in one 7M variant is parallel to the twinning plane of its corresponding NM lamellae and the shuffling direction is parallel to the twinning direction. This OR secures a one-to-one correspondence between the 7M plate and the NM plate.

Transformation mechanism from 7M martensite to NM martensite

The present experimental results allow the transformation sequence from austenite to NM martensite, bridged by the 7M martensite, and the accompanying microstructure evolution to be evidenced.
The intrinsic OR between the 7M and NM martensites, with (001)_7M // (112)_Tet and [100]_7M // [11 1̄]_Tet, indicates that the structure change from the 7M (monoclinic) to the NM (tetragonal) can be viewed as a further reshuffling of the atoms on the (001)_7M plane along the [100]_7M or [1̄00]_7M direction. As the crystal structure of the 7M differs from that of the NM, the reshuffling should also be accompanied by a certain distortion of these planes and of their interplanar distance. Consequently, one 7M variant finally transforms into one NM plate composed of paired twin-related fine variants; four 7M variants correspond to 8 NM lamellae.

Based on this ideal OR between the two phases, the plate interfaces of the 7M and NM martensites were reconstructed to further reveal the structure change at the plate interface during the transition. As an example, Fig. 4.16 displays the atomic correspondence of the plate interface between variants A and C (type I twin) of the 7M martensite and of the plate interface between plates P1 and P3 of the NM martensite, following the transformation OR. It is seen from Fig. 4.16a that, although some atoms on the interface deviate slightly from their exact equilibrium positions due to the structure modulation, the 7M plate interface is basically coherent, so the interfacial energy should be relatively low. However, when the 7M plates transform to NM, the misfit at the interface becomes non-negligible, as shown in Fig. 4.16b, and the plate interface becomes incoherent. This atomic misfit will surely result in a high interfacial energy and thus contributes an energy barrier to the transformation. It is because of this high atomic misfit that the 7M to NM transformation is accompanied by the thickening of the NM martensite plates, which reduces the specific interfacial area of the NM martensite and thus lowers the total interfacial energy of the NM plates, as evidenced in Fig. 4.13. The 7M modulated structure is therefore advantageous in terms of plate interfacial energy.

Another energetic advantage of the 7M modulated structure is that it offers a smaller lattice distortion with respect to the parent lattice. This lattice distortion can be quantified with the corresponding stress-free strain tensors [130] for the transformation from the austenite to the 7M martensite and from the austenite directly to the NM martensite; the non-zero terms of these tensors (ε11, ε22, ε33 and the shear component) are shown in Table 4.7. The lattice distortion accompanying the transformation from the cubic austenite to the tetragonal martensite is clearly much larger than that occurring in the transformation from the austenite to the 7M martensite. This lattice distortion gives rise to a volume-dependent elastic energy that contributes a further energy barrier to the transformation. Together with the large plate interfacial energy, this volume-dependent elastic energy imposes insurmountable barriers to a direct transformation from the austenite to the normal tetragonal (NM) martensite. The formation of the intermediate 7M modulated phase is therefore unavoidable in bridging the parent austenite to the final NM martensite, and a two-step lattice distortion is experienced in the transformation from the austenite to the NM martensite.

From the microstructural configurations of the two kinds of martensite, it is clear that the reorientation of the 7M plates (variants), which gives rise to the shape memory effect, is much easier than that of the NM plates. The 7M martensite has two advantages.
First, each 7M plate corresponds to one orientation variant and the plate interface is coherent. Second, the shear system, possessing a uniform and much smaller twinning shear, is parallel to the plate interface. If only two alternately distributed variants can be obtained, the magnetic field can easily reorient one variant into the other by twinning or detwinning through plate interface movement. In contrast, the situation in the NM configuration is very different. Two NM plates correspond to four orientation variants (lamellae); the inter-plate interfaces and the lamellar interfaces are mutually pinned. Moreover, the twinning system is not consistent with the plate interface, and the twinning shear of the lamellar variants is much higher. The pinning of the interfaces, the large atomic mismatch at the plate interface and the much larger twinning shear all act as resistances to plate or lamella reorientation. In such a context, it is not difficult to understand why the modulated martensite of Ni-Mn-Ga alloys offers a superior magnetic-field-induced reversible shape change whereas the tetragonal martensite does not, as demonstrated by many experiments [89].

As the 7M modulated martensite of Ni-Mn-Ga alloys is essentially important for field-induced shape memory applications, its thermodynamic metastability has become a critical issue for the development of these alloys, and the temperature window in which it exists is of particular importance; measures should be taken to enlarge this window. From the examination of the transformation path and the possible energy barriers in this study, the lattice misfit between the 7M and NM martensites (Table 4.7) and the atomic mismatch at the NM plate interfaces appear to be the controlling factors that can postpone the 7M to NM transformation and hence enlarge the temperature range over which the 7M exists. This could be achieved by adjusting the composition or by alloying with new elements.

Summary

In summary, we clarified the role of the Ni-Mn-Ga 7M modulated martensite in bridging the austenite and the NM martensite. The available temperature window for the stable existence of the 7M martensite depends on the energy barriers related to the lattice mismatch between the 7M and the NM martensite and the atomic misfit on the plate interfaces of the NM martensite.

Perspective

Since Ni-Mn-Ga ferromagnetic shape memory alloys are a newly developed system, some research margins remain. In these alloys, giant magnetic-field-induced strains have only been obtained in bulk single crystals with modulated structures. The complexity of producing single crystals, their high fabrication cost and the severe composition segregation hinder practical application. In contrast, the manufacture of polycrystalline alloys is much simpler and easier to implement in production, but the random crystallographic orientation of polycrystalline alloys leads to a loss of the magnetic shape memory performance. Texturing polycrystalline alloys has become a promising way to attain a shape memory performance comparable to that of single crystals; to achieve this goal, innovative fabrication routes or external-field training processes must be introduced. Moreover, the low blocking stress and the intrinsic brittleness of Ni-Mn-Ga alloys greatly hinder practical applications; appropriate alloying is necessary to increase the blocking stress and improve the ductility.
Since the inter-variant interfaces are crucial for the magnetic shape memory effect, further work should pay more attention to revealing the details of the inter-variant interfaces by taking the lattice modulation into account. Detailed TEM or HRTEM characterizations of the interface structure are therefore needed.

Résumé Étendu (Extended Summary, translated from the French)

Ferromagnetic shape memory alloys are new materials ... The lattice misfits of the two structure models with respect to that of the cubic austenite were calculated and expressed as stress-free strain tensors (ε11, ε22, ε33 and the shear term) [9]. The non-zero terms of these tensors are given in Table 5. It is evident that the nanotwin combination has a larger lattice mismatch than the 7M(IC) modulated structure. All the above results show that the modulated martensite in the Ni-Mn-Ga alloy has its own crystal structure, distinct from the nanotwin combination of the tetragonal NM martensite.

Table 5 Strain tensors between the parent phase and the product phases for the transformation from austenite to 7M and from austenite to the adaptive phase (nanotwin combination structure): austenite - 7M modulation; austenite - nanotwin combination.

Fig. 1.1 Bain distortion (FCC to BCC) of martensite. The transformation involves a contraction of the parent phase along the z direction and expansions along the x and y directions, respectively.

Fig. 1.2 The geometrical configuration of K_1, K_2, η_1, η_2 and P.

Fig. 1.3 Crystal structure of the Ni2MnGa L2_1 cubic austenite.

Chernenko et al. [41] carefully investigated the composition dependence of the martensitic transformation temperatures of Ni-Mn-Ga alloys and concluded that (i) at a constant Mn content, Ga addition lowers T_M; (ii) Mn addition at a constant Ni content increases T_M; and (iii) substitution of Ni by Mn at a constant Ga content lowers T_M. Depending on the martensitic transformation temperatures and the transformation latent heat, these alloys have conventionally been classified into three groups [61]. Group I (e/a < 7.55) is composed of alloys with transformation temperatures well below room temperature and the Curie point. Alloys in Group II (7.55 ≤ e/a ≤ 7.7) have transformation temperatures around room temperature and below the Curie point, while alloys in Group III (e/a > 7.7) have transformation temperatures above the Curie point.

Fig. 1.4 Illustration of the rearrangement of martensite variants under a magnetic field in ferromagnetic shape memory alloys (FSMAs) [58].

With the maturation of the SEM/EBSD technique, large-scale spatially resolved orientation examination has made it possible to correlate microstructure with crystallographic orientation, which overcomes the above-mentioned limitations.
The merits of EBSD measurements are threefold. First, EBSD provides an alternative means of verifying crystal structure information, as EBSD-based orientation determination requires the complete crystal structure information, including the lattice constants and the atomic position of each atom in the unit cell. Second, it enables the automatic orientation mapping of individual martensite variants, correlating the crystal structure and orientation information with the morphological features on an individual-variant basis. Third, it allows an unambiguous determination of the orientation relationships of adjacent variants and of the twin interface planes, and thus a full crystallographic analysis of a bulk sample with statistical reliability. Cong et al. systematically investigated the twinning relationships of NM martensite in a Ni53Mn25Ga22 alloy by means of SEM/EBSD [101,102]. The relation (111)_A // (101)_Tet and [1 1̄ 0]_A // [11 1̄]_Tet was predicted by Cong et al. based on experimental measurements and crystallographic calculations, representing a specific orientation relationship between the austenite and the NM martensite.

The as-cast button is shown in Fig. 2.1a. To reduce microcracks and porosity, some ingots were remelted and then rapidly solidified by suction casting into a chilled copper mold 6 mm in diameter, through which a rod was obtained, as shown in Fig. 2.1b.

Fig. 2.1 Macroscopic appearance of the prepared alloys: (a) as-cast; (b) suction-cast.

The composition of the alloy was verified to be Ni50.1Mn28.3Ga21.6 (at.%). According to the DSC measurements, the austenite to martensite transformation started at 43.8 °C (M_s) and finished at 30.0 °C (M_f) upon cooling. The reverse transformation started at 41.1 °C (A_s) and finished at 51.3 °C (A_f) upon heating.

Fig. 3.1a shows the powder X-ray diffraction (XRD) pattern of the Ni50Mn28Ga22 alloy at room temperature.

Fig. 3.1 XRD patterns of the Ni50Mn28Ga22 alloy at room temperature: (a) measured; (b) recalculated according to the commensurate 5M superstructure; (c) recalculated according to the non-modulated tetragonal crystal structure. The insets show the unit cells.

Fig. 3.2a shows a typical backscattered electron (BSE) image of the 5M martensite taken at room temperature. According to the gray-level contrasts in the image, the microstructural features can be characterized as alternately distributed broad plates, 10-20 μm in width, separated by inter-plate boundaries. Moreover, the plates with light mean contrast consist of sub-plates and those with dark mean contrast consist of blocks. It is seen from Fig. 3.2b - the zoomed image of the marked frame in Fig. 3.2a - that the broad plates are composed of pairs of thin lamellae with thicknesses in the nanometer range. In each pair, one lamella is thicker than the other. The thin lamellae are bounded by long and short inter-lamellar interfaces. The bending at the short inter-lamellar interfaces is accompanied by a contrast change in the BSE image; the short interfaces mark the sub-plate and block interfaces. According to the BSE contrast of the nanoplates, there are always four types of martensite variants that are locally interconnected. As the martensite is a single phase with a homogeneous chemical composition, the contrast changes in the BSE image should be attributed to orientation variations. This will be clarified by the EBSD orientation analysis below.

Fig. 3.3 EBSD maps of the fine lamellae reconstructed according to orientation identification with (a) the commensurate 5M superstructure and (b) the non-modulated tetragonal crystal structure.
There are four types of variants (designated A, B, C and D) in (a), but only two types of variants in (b). The coordinate frame (X0-Y0-Z0) refers to the macroscopic sample coordinate system.

Fig. 3.4 Kikuchi patterns showing the orientation difference across bent fine lamellae: (a) and (b) acquired at the two measured positions marked in the inset BSE image of Fig. 3.4a; (c) and (d) patterns recalculated using the commensurate 5M superstructure; (e) and (f) patterns recalculated using the non-modulated tetragonal crystal structure. The set of three Euler angles with respect to the sample coordinate system is listed in each recalculated pattern.

The complete sets of rotations (angle-axis pairs) for each variant pair are given in Table 3.1. Among them, the variant pairs A:C and B:D, A:B and C:D, and A:D and B:C have almost identical sets of rotations. Each pair of variants possesses at least one 180° rotation, indicating that all the variant pairs are twin related according to the classical definition of twinning [Christian, 112]. By further transforming the rotation axes into the monoclinic crystal basis, one finds that for variant pair A:C (or B:D) the 180° rotation axis is close to the normal of the rational plane (1 -2 -5)5M (0.39° deviation), whereas for variant pair A:B (or C:D) the 180° rotation axis is close to the rational direction [-5 -5 1]5M (0.33° deviation). For variant pair A:D (or B:C), two mutually perpendicular 180° rotation axes exist, one close to the normal of the rational plane (105)5M (0.23° deviation) and the other close to the rational direction [-5 0 1]5M (0.15° deviation). The small deviations from the rational low-index planes and directions may be attributed to the experimental inaccuracy of the EBSD measurements. Following the rationality criterion of the Miller indices of the twinning elements for the different twin types [Christian, 112], one can infer that variant pair A:C (or B:D) is in a type I twin relationship, A:B (or C:D) in a type II twin relationship, and A:D (or B:C) in a compound twin relationship.

In the orientation map (Fig. 3.3a), the bent thin lamellae are revealed as martensite variants connected by compound twin interfaces. Locally, a compound twin interface intersects its neighboring type I and type II twin interfaces. This morphology of bent lamellae with changing orientation results from the self-accommodation of the thin lamellae during the martensitic transformation, which reduces the transformation strain and minimizes the resistance to transformation. However, such a configuration does not favor the global reorientation of variants through twin boundary motion under an actuating magnetic field, since a compound twin interface acts as an interface pin. Moreover, the (105)5M plane, which contains the [010]5M direction (the b axis, i.e. the easy magnetization direction), is the invariant twinning plane of the compound twins. As two twin-related crystals are in mirror symmetry with respect to the twinning plane (105)5M, their b axes are parallel to each other. It is therefore almost impossible to trigger the reorientation of one crystal with respect to the other through motion of the compound twin interface under a unidirectional actuating magnetic field; the compound twin interface is inert.
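The angle-axis pairs discussed above follow from a pairwise comparison of the EBSD orientation matrices under the rotational symmetry of the monoclinic lattice. As an illustration only, the computation can be sketched in Python as follows (a minimal sketch, not the thesis code; the function name and the sample-to-crystal matrix convention are assumptions):

```python
# Minimal sketch: misorientation angle-axis pairs between two martensite
# variants, as in Table 3.1. g1, g2 are 3x3 orientation matrices
# (sample -> crystal) in the orthonormal frame attached to the monoclinic
# lattice (c_M // k, b_M // j).
import numpy as np
from scipy.spatial.transform import Rotation as R

# Rotational symmetry of the monoclinic system (point group 2/m): the
# identity and a two-fold rotation about the unique b axis (here // j).
SYM_MONO = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]

def misorientations(g1, g2):
    """Return all symmetry-equivalent (angle in degrees, unit axis) pairs
    for the rotation carrying variant 1 onto variant 2."""
    pairs = []
    for s1 in SYM_MONO:
        for s2 in SYM_MONO:
            dg = (s2 @ g2) @ (s1 @ g1).T            # rotation variant 1 -> 2
            rotvec = R.from_matrix(dg).as_rotvec()  # robust even near 180 deg
            angle = np.linalg.norm(rotvec)
            axis = rotvec / angle if angle > 1e-9 else np.array([0.0, 0.0, 1.0])
            pairs.append((np.degrees(angle), axis))
    return pairs
```

Identifying, among the returned pairs, a 180° rotation whose axis is the normal of a rational plane (type I twin), a rational direction (type II twin), or both (compound twin) reproduces the classification given above.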
Fig. 3.5 Illustrations of (a) the unit cell of the cubic austenite, (b) the supercell of the monoclinic 5M modulated martensite consisting of five subcells (outlined by dashed lines), and (c) the reduced average unit cell (ignoring the lattice modulation).

Fig. 3.5a and b illustrate the lattice correspondence between the austenite and the 5M martensite. The high-temperature austenite possesses a cubic L21 Heusler structure (Fm-3m, No. 225) with the lattice parameter aA = 5.84 Å [21, 114]. The 5M martensite (a5M = 4.226 Å, b5M = 5.581 Å, c5M = 21.052 Å and β = 90.3°) has a monoclinic superstructure (P2/m, No. 10), and the supercell of the 5M martensite can be decomposed into five consecutive subcells along the c-axis (denoted C1, C2, ..., C5 in Fig. 3.5b). The monoclinic crystallographic axes of the 5M martensite align along low-index directions of the parent cubic lattice, as illustrated in Fig. 3.5.

Fig. 3.6 {001} standard stereographic projections of the austenite orientations in the macroscopic sample coordinate frame calculated from the four martensite variants in variant group g1 under the Bain, K-S, N-W and Pitsch relations, respectively. The common poles are enclosed in squares.

The angular deviations from the Pitsch OR caused by the structural modulation are shown in Fig. 3.7. The angular deviation of both the (1 -2 -1)1M plane and the [-1 -1 1]1M direction increases with the difference in monoclinic angle between the average unit cell and each subcell. The structural modulation generates angular deviations of ~1-2.6° for the corresponding plane and ~1-2.5° for the in-plane direction in each subcell.

Fig. 3.7 Angular deviations of the identically indexed (1 -2 -1)1M plane and [-1 -1 1]1M direction in each subcell from those of the average unit cell.

In summary, the crystal structure, the microstructure of the martensite and the phase transformation OR in a bulk polycrystalline Ni50Mn28Ga22 alloy were investigated. The crystal structure analysis showed that the martensite possesses a commensurate 5M modulated structure whose superlattice is composed of 5 subcells. The microstructure of the 5M martensite is characterized by broad plates with alternately distributed fine lamellar-shaped variants. Correct and accurate EBSD orientation indexing was achieved using the precise crystal structure information. From the microstructure reconstructed with the individually measured orientations, four types of martensite variants A, B, C and D with distinct orientations were revealed. All the variants are twin related: the variant pair A and C (or B and D) is in a type I twin relation, A and B (or C and D) in a type II twin relation, and A and D (or B and C) in a compound twin relation. The twin interface planes were determined to coincide with their respective twinning planes (K1). Based on the local orientations of the individual martensite variants measured by EBSD, the orientations of the parent austenite were calculated using the assumed ORs between the cubic austenite and the monoclinic martensite.

Fig. 3.8 Compressive stress-strain curves of the as-cast and suction-cast alloys.

Fig. 3.9 displays the measured and recalculated powder X-ray diffraction (XRD) patterns of the Ni50Mn30Ga20 suction-cast alloy at room temperature. The profile of the measured XRD pattern (Fig. 3.9a) is consistent with that reported by L. Righi et al. [75], suggesting that the alloy may have an incommensurate 7M modulated (7M(IC)) structure. The superlattice consists of 10 unit cells along the c-axis and belongs to the monoclinic space group P2/m (No. 10).
With the information on the atomic coordinates in the superlattice (Appendix II) [75], the measured XRD pattern was recalculated (Fig. 3.9b).

Fig. 3.9 XRD patterns of the Ni50Mn30Ga20 suction-cast alloy at room temperature: (a) measured; (b) recalculated using the incommensurate 7M modulated (7M(IC)) structure model. The inset shows a schematic illustration of the superstructure.

3.2.3 Microstructure

Fig. 3.10a shows a typical backscattered electron (BSE) image of the Ni50Mn30Ga20 suction-cast alloy taken at room temperature. The alloy presents plate-like morphological features. The martensite plates are clustered in equiaxed colonies, and one or several colonies are located within an original austenite grain. This differs from the microstructure of the 5M martensite, for which the martensite colonies are lamellar in shape. Further TEM observations show that the 7M martensite plates contain stacking faults as internal sub-structures, as shown in Fig. 3.10b.

Fig. 3.11 (a) Orientation map of the Ni50Mn30Ga20 alloy taken at room temperature. Four types of martensite variants with different colours are designated as variants A, B, C and D, respectively. The coordinate frame X0-Y0-Z0 refers to the macroscopic sample coordinate frame. (b) Kikuchi line pattern acquired from one of the variants; (c) […].

For variant pair A:B (or C:D), the 180° rotation axis is close to the rational direction [-10 -10 1]7M (0.62° deviation), indicating that the two variants are rotated by 180° with respect to each other about this rational direction. Following the definition of twinning [Christian, 112], one can deduce that variant pair A:C (or B:D) is a type I twin, while A:B (or C:D) is a type II twin. For variant pair A:D (or B:C), one of the 180° rotation axes is close to the normal of the rational plane (1 0 10)7M (0.25° deviation) and the other is close to the rational direction [-10 0 1]7M (0.39° deviation), indicating that the variants A:D (or B:C) are compound twins.

Fig. 3.12a and b illustrate the lattice correspondences between the cubic L21 structure of the austenite and the monoclinic structure of the incommensurate 7M modulated martensite. The high-temperature austenite possesses the cubic L21 Heusler structure (Fm-3m, No. 225).

The orientations of the parent austenite were calculated through Eq. (3.1), g_A = S_j^A · T · S_i^M · g_M^k, using the measured orientations of the inherited martensite variants. To achieve statistical significance, the EBSD orientation data of seven groups of martensite variants (numbered g1, g2, ..., g7) were measured from different colonies and served as input data; the results are presented in the {001} standard stereographic projection of the cubic austenite in the macroscopic sample coordinate frame. As an example, Fig. 3.13 displays the three {001} austenite poles calculated from the respective martensite variants A, B, C and D of variant group g1.

Fig. 3.13 {001} standard stereographic projections of the austenite calculated from the martensite variants A, B, C and D in variant group g1 under (a) Bain, (b) K-S, (c) N-W and (d) Pitsch relations, respectively, in the macroscopic sample coordinate frame. The common austenite orientations are enclosed in the open squares.

Fig. 3.14 Schematic stacking sequences of the respective planes in the austenite and the 1M martensite viewed along [1 0 -1]A and [-1 -1 1]1M.

Fig. 3.16 Illustration of the lattice deformation achieving the transformation from austenite to 1M martensite under (a) the K-S OR and (b) the Pitsch OR. The solid line represents the unit cell of the austenite and the dashed line the average unit cell of the 1M martensite.
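Eq. (3.1) lends itself to direct implementation. The sketch below (illustrative only; the matrix convention and function names are assumptions, not the thesis code) enumerates all candidate parent orientations from one measured variant; the candidate shared, within about one degree, by all four variants of a group is taken as the parent orientation, and the OR giving the smallest scatter is retained:

```python
# Minimal sketch of Eq. (3.1): g_A = S_A . T . S_M . g_M, with g_M the
# measured martensite orientation (sample -> crystal), T the rotation
# encoding the assumed OR (Bain, K-S, N-W or Pitsch), and S_A, S_M the
# rotational symmetry operators of the cubic and monoclinic systems.
import numpy as np
from scipy.spatial.transform import Rotation as R

SYM_CUBIC = R.create_group('O').as_matrix()          # 24 cubic rotations
SYM_MONO = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]   # monoclinic 2/m

def parent_candidates(g_m, T):
    """All candidate austenite orientations inherited from one martensite
    variant orientation g_m under the OR rotation matrix T."""
    return [s_a @ T @ s_m @ g_m for s_a in SYM_CUBIC for s_m in SYM_MONO]
```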
The angular deviations (Fig. 3.17a and b) increase with the increasing difference in monoclinic angle between the average unit cell of the 1M martensite and each individual subcell of the 7M martensite. Obviously, the K-S and Pitsch ORs differ from each other only in the plane deviation, since the [1 0 -1]A//[-1 -1 1]1M direction parallelism holds for both. As shown in Fig. 3.17a, the K-S OR ((011)1M) exhibits a larger deviation than the Pitsch OR ((1 -2 -1)1M), which means that the K-S OR would require larger atomic reshuffling to achieve the structural modulation. This further confirms that the Pitsch OR, i.e. (101)A//(1 -2 -1)1M and [1 0 -1]A//[-1 -1 1]1M between the austenite and the 1M martensite, or (101)A//(1 -2 -10)7M and [1 0 -1]A//[-10 -10 1]7M if referred to the 7M martensite, is energetically more favorable than the K-S OR for the martensitic transformation.

Fig. 3.17 Angular deviations of the identically indexed (a) (1 -2 -1)1M (Pitsch OR) and (011)1M (K-S OR) planes and (b) the [-1 -1 1]1M direction in the 5 subcells (C1, C2, C3, C4 and C5) of distinct monoclinic angle from those of the average unit cell.

In summary, an EBSD-based approach was demonstrated for identifying martensite variants and the crystallographic characteristics of twin interfaces in materials having a modulated superstructure. Detailed analyses of the incommensurate 7M martensite of the Ni50Mn30Ga20 alloy showed that there exist four types of twin-related martensite variants (A, B, C and D) that are alternately distributed. All the pairs of variants can be categorized into three twinning modes: variants A and C (or B and D) are in a type I twin relation, variants A and B (or C and D) in a type II twin relation, and variants A and D (or B and C) in a compound twin relation. All the twin interfaces coincide with the respective twinning planes (K1). Based on the local orientations of the individual martensite variants measured by EBSD, the orientations of the parent austenite were evaluated under the assumed ORs. Detailed crystallographic analysis revealed the energetically favorable OR governing the austenite to incommensurate 7M martensite transformation to be the Pitsch relation, (101)A//(1 -2 -10)7M and [1 0 -1]A//[-10 -10 1]7M. Under this OR, at most 24 physically distinct martensite variants may result from an initial austenite grain during the martensitic transformation. Notably, in the present work, a first attempt was made to resolve the ambiguity between geometrically favorable ORs by examining the lattice discontinuity caused by the phase transformation and the structural modulation.

Fig. 3.18 DSC curves of the as-cast and suction-cast Ni54Mn24Ga22 alloys.

Fig. 3.19 Powder X-ray diffraction patterns measured at room temperature: (a) as-cast alloy; (b) suction-cast alloy. The inset shows illustrations of the crystal structure.

Fig. 3.20 Compressive stress-strain curves of the as-cast and suction-cast alloys.

Fig. 3.21 shows the secondary electron images of the fracture surface of the suction-cast rod after uniaxial compressive loading. It can be seen in Fig. 3.21a that […]

Fig. 3.22 (a) Orientation micrograph of the Ni54Mn24Ga22 suction-cast alloy; (b) band contrast image; (c) misorientation angle distribution over the measured region; (d) distribution of the corresponding rotation axes for misorientation angles of 80°-85°.
Fig. 3.22 displays the EBSD measurement results of the suction-cast alloy taken at room temperature. In the orientation map of Fig. 3.22a, the plates are colored according to their orientation. Some plates are straight, while others are bent with slight orientation variation. Fig. 3.22b displays the band contrast image of the same area.

Fig. 3.23 (a) BSE image obtained in a martensite colony; four kinds of plates are numbered P1, P2, P3 and P4. (b) Zoomed image of the squared region showing the paired fine lamellae in the martensite plates. The inter-plate and inter-lamellar interfaces are marked with red and green lines, respectively.

An off-stoichiometric Ni-Mn-Ga polycrystalline alloy with the nominal composition Ni53Mn22Ga25 (at.%) was prepared. The composition of the alloy was verified by energy dispersive X-ray spectrometry (EDS) to be Ni53.4Mn21.8Ga24.8. Fig. 4.1 shows the DSC curves measured on heating and cooling. The forward martensitic transformation start (Ms) and finish (Mf) temperatures were determined to be 13.5 °C and 4.9 °C, respectively; the reverse martensitic transformation start (As) and finish (Af) temperatures were 16.3 °C and 30.1 °C, respectively. In addition, much smaller peaks appear in the DSC curves on cooling and heating at temperatures below -10 °C (arrowed in the figure), indicating a possible further transformation of the already formed martensite. As the martensitic transformation temperatures are around room temperature, the co-existence of austenite and martensite at room temperature is possible. The DSC curves also indicate that the Curie temperature (Tc) of the alloy is ~68.8 °C on cooling and ~74.6 °C on heating. The decrease of the magnetic transition temperature with respect to that of the stoichiometric Ni2MnGa alloy should be attributed to the excess of Ni [116].

Fig. 4.1 DSC curves of the Ni53Mn22Ga25 alloy on heating and cooling.

Fig. 4.2 Powder XRD patterns of the Ni53Mn22Ga25 alloy measured at (a) 25 °C, (b) -30 °C and (c) -120 °C.

Fig. 4.3 (a) BSE image of co-existing austenite and 7M martensite. (b) EBSD micrograph of a martensite "diamond" composed of four variants (denoted A, B, C and D) surrounded by austenite. The coordinate frame (X0-Y0-Z0) refers to the sample coordinate frame.

Fig. 4.4 shows the microstructure evolution of the martensite "diamond" during the transformation.

Fig. 4.5 shows the orientation micrographs reflecting the further growth of the martensite variants from the "diamond". As seen in Fig. 4.5a, the growth of the "diamond" proceeds by elongation through the forward motion of the spears, i.e. the A:C or B:D pair. This elongation results in the formation of type II twins behind the spears. Finally, a large number of martensite plates form while keeping the spear configuration adjacent to the austenite matrix, as shown in Fig. 4.5d. Locally, a cluster of the four types of martensite variants A, B, C and D constitutes one variant group, and several variant groups are usually observed within the same austenite grain. An example is shown in Fig. 4.6.

Fig. 4.5 EBSD orientation maps with the two phases co-existing: (a) elongation of the martensite "diamond"; (b) formation of paired plates from the "diamond"; (c) formation of the fork configuration by bifurcation; (d) formation of a large number of martensite plates keeping the spear configuration adjacent to the austenite matrix.
Fig. 4.6 Formation of various variant groups (G1, G2 and G3 in the figure) in an austenite grain.

Fig. 4.7 TEM bright-field image showing the internal sub-structure of a 7M martensite plate. The corresponding SAED patterns are shown in the lower right corner.

The OR between the austenite and the 7M martensite was confirmed to be the Pitsch relation, i.e. (101)A//(1 -2 -10)7M and [1 0 -1]A//[-10 -10 1]7M, as found in the Ni50Mn30Ga20 alloy. The corresponding pole figures are shown in Fig. 4.8, plotted using the orientation data of the two phases in Fig. 4.5b. Fig. 4.8a displays the {101}A pole figure of the austenite and the {1 -2 -10}7M pole figures of the four martensite variants in the macroscopic sample coordinate system. One {1 -2 -10}7M pole of each of the four martensite variants overlaps, and they are all common to one {101}A pole of the austenite, as marked in Fig. 4.8a, demonstrating {101}A//{1 -2 -10}7M. Accordingly, the four variants share a common <-10 -10 1>7M pole that coincides with one <1 0 -1>A pole of the austenite, as shown in Fig. 4.8b, indicating <1 0 -1>A//<-10 -10 1>7M. The orientation relationship between the austenite and the 7M martensite is therefore confirmed to be the Pitsch OR, consistent with our previous result.

Fig. 4.8 Identification of the Pitsch OR from the corresponding pole figures of the two phases.

This combination of all six self-accommodation groups gives the minimum transformation strain energy. Turning to the more local scale within a self-accommodated variant group, the total deformation matrices were calculated for the three types of twin pairs: the type I pair A:C (or B:D), the type II pair A:B (or C:D) and the compound pair A:D (or B:C).

In summary, the martensitic transformation and the microstructure of the Ni53Mn22Ga25 alloy, whose martensitic transformation temperatures lie around room temperature, were investigated. The formation of the characteristic diamond-like martensite microstructure with four variants (A, B, C and D) during the martensitic transformation was evidenced. As revealed by the EBSD measurements, the martensite "diamond" consists of type I twins (A:C and B:D pairs) and compound twins (A:D and B:C pairs); the long ridge of the martensite "diamond" corresponds to a type I twin interface and the short ridge to a compound twin interface. The "diamond" can thus be regarded as two spears (the A:C and B:D pairs) placed back to back. The favorable growth mode of the "diamond" is the forward progression of the spears. The OR of the martensitic transformation was further confirmed to be the Pitsch relation by crystallographic calculation. The habit plane normals were determined experimentally to be {0.736130, 0.673329, 0.068855}A by the indirect double-trace method, consistent with the prediction of the phenomenological theory within 2.11°. Further calculation indicates that the four characteristic variants of a "diamond" group cluster around one {101}A pole and that the elastic strains around the martensite are effectively cancelled by forming such a group. Both the A:C (or B:D) and the A:B (or C:D) variant pairs are self-accommodated, whereas the A:D (or B:C) variant pair is not.
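The self-accommodation statement can be checked numerically from the shape deformation matrices of Table 4.2. The following minimal sketch (illustrative only; equal volume fractions of the four variants are assumed) averages the four matrices and measures the residual deformation:

```python
# Minimal sketch: average shape deformation of a four-variant "diamond"
# group (matrices from Table 4.2, expressed in the austenite basis).
import numpy as np

P_A = np.array([[0.951753, -0.002901, -0.050195],
                [0.002635,  1.000158,  0.002742],
                [0.045603,  0.002742,  1.047444]])
P_B = np.array([[1.047444,  0.002742,  0.045603],
                [0.002742,  1.000158,  0.002635],
                [-0.050195, -0.002901, 0.951753]])
P_C = np.array([[1.047444, -0.002742,  0.045603],
                [-0.002742, 1.000158, -0.002635],
                [-0.050195,  0.002901, 0.951753]])
P_D = np.array([[0.951753,  0.002901, -0.050195],
                [-0.002635, 1.000158, -0.002742],
                [0.045603, -0.002742,  1.047444]])

avg = (P_A + P_B + P_C + P_D) / 4.0
print(np.round(avg, 4))                 # close to the identity matrix
print(np.linalg.norm(avg - np.eye(3)))  # small residual deformation
```

The near-identity average is the numerical counterpart of the statement that the transformation strains of a four-variant group largely cancel out.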
The configuration of martensite variants is very sensitive to local constraints. On the other hand, different characterization techniques require different sample geometries and sizes, which impose different internal constraints. Clearly, significant inconsistency may result from different experimental techniques, and the characterization information (global and local) suffers from a lack of generality. So far, numerous experimental studies of modulated structures have been performed almost exclusively by diffraction techniques. The role of the microstructural correlations between martensite plates has seldom been taken into account, either in experimental examinations or in the nanotwin combination theory. These microstructural features surely have a strong influence on the stability of the modulated martensites and on their functionality as shape memory materials. In this context, insight into the microstructure-property correlation may bring convincing clues to clarify the structure of the long-period modulated martensite. Against this background, the polycrystalline bulk Ni53Mn22Ga25 alloy, with its martensitic transformation near room temperature, was selected as an ideal test material for the following reasons: the alloy displays a transformation sequence from the austenite to the modulated martensite and then to the NM martensite during continuous cooling, as detected by X-ray diffraction (XRD) measurements, and islands of modulated martensite and NM martensite coexist in some initial austenite grains when the alloy is kept at room temperature. Using local electron backscatter diffraction (EBSD) orientation determination, the crystal structure nature of the long-period modulated martensite was unambiguously clarified through microstructural and crystallographic examination.

The XRD pattern of the modulated phase in the Ni53Mn22Ga25 alloy measured at -30 °C, shown in Fig. 4.9a, was first solved and then refined with the incommensurate 7M modulated structure model (hereafter denoted 7M(IC) in this section) [75]. The modulated phase has a monoclinic long-period superstructure (P2/m, No. 10) with the lattice constants a7M = 4.222 Å, b7M = 5.537 Å, c7M = 41.982 Å and β = 92.5°. The unit cell of the combined nanotwins was artificially constructed from the lattice constants of the tetragonal NM unit cell (aTet = bTet = 3.879 Å, cTet = 6.511 Å) and the (5 -2)2 stacking sequence of (112)Tet twins [81]. The resolved lattice constants of the adaptive phase (with the combined-nanotwin structure) [81, 130] are a_ad = 4.257 Å, b_ad = 5.486 Å, c_ad = 29.446 Å and β_ad = 94.2°. The calculated atomic coordinates of the nanotwinned superstructure are given in Appendix III. The XRD patterns recalculated using the two structure models are displayed in Fig. 4.9b and c. Both structures yield patterns very close to the measured one. Close examination reveals that the 7M(IC) model delivers a slightly better fit to the measured profile. The distinguishable difference between the two structures appears in the secondary minor peaks (arrowed in the figure) around the three main diffraction peaks in the 2θ range from 40 to 50°.

Fig. 4.9 XRD patterns of the modulated martensite: (a) measured at -30 °C; (b) recalculated with the 7M(IC) superstructure; (c) recalculated with the tetragonal nanotwin-combined unit cell. The insets show the unit cells of the two structures.

Fig. 4.10 EBSD map showing (a) the coexistence of austenite, 7M(IC) and NM martensite within an initial austenite grain, and (b) four 7M variants designated A, B, C and D. The coordinate frame (X0-Y0-Z0) refers to the macroscopic sample coordinate system.
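A quick numerical cross-check of the adaptive-phase criteria quoted with Table 4.3 is sketched below (an illustration under stated assumptions: a_A = 5.811 Å is inferred from the b_ad = a_A entry of Table 4.3, and the tetragonal basal edge is re-expressed in the cubic parent coordinate system as sqrt(2)·a_Tet):

```python
# Minimal sketch: theoretical adaptive-phase lattice constants in the cubic
# parent frame, a_ad = c_Tet + a'_Tet - a_A; b_ad = a_A; c_ad = a'_Tet.
import math

a_Tet, c_Tet = 3.879, 6.511        # tetragonal NM lattice constants (Angstrom)
a_A = 5.811                        # cubic austenite lattice constant (assumed)
a_prime = math.sqrt(2.0) * a_Tet   # NM basal edge in parent coordinates

a_ad = c_Tet + a_prime - a_A       # ~6.186 A
b_ad = a_A                         # ~5.811 A
c_ad = a_prime                     # ~5.486 A
print(round(a_ad, 3), round(b_ad, 3), round(c_ad, 3))
```

The computed values reproduce the "Theoretical" column of Table 4.3, against which the refined 7M(IC) and nanotwin-combination constants are compared.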
Fig. 4.11 (a) Measured EBSD Kikuchi pattern of the modulated martensite; (b) the pattern recalculated with the 7M(IC) structure; (c) the pattern recalculated with the nanotwin combination structure.

Fig. 4.10b presents the EBSD orientation micrograph corresponding to Fig. 4.10a.

Under the nanotwin combination model (Fig. 4.12b), the two orientation plates correspond to four tetragonal NM orientation variants of lamellar shape. The in-plate variants are twin related. The twinning elements of the (112)Tet nanotwin in the plate are determined to be K1 = (112)Tet, K2 = (11-2)Tet, η1 = [11-1]Tet and η2 = [111]Tet, with the twinning shear s = 0.344. This twinning shear is almost twice that of the 7M(IC) twins (type I and type II). The inter-lamellar interfaces are the (112)Tet twinning planes and are coherent. Due to this in-plate twin configuration, however, the atomic coherency at the plate interface is greatly deteriorated. At the plate interface, two thick lamellae and two thin lamellae from neighboring plates intersect, as illustrated in Fig. 4.12b. Where the thick lamellae meet, they appear to have a (1-12)Tet[-111]Tet twin relationship, but with some deviation in the twinning plane and twinning direction, i.e. 4.97° between the (112)Tet planes and 2.62° between the [111]Tet directions. Taking the two (112)Tet planes as reference, the closest planes of the thin lamellae from the neighboring plates are (010)Tet, with an angular deviation of 11.44° between the two planes. The parallelism of the planes is thus far from perfect.

Fig. 4.12 Atomic correspondences of the type I twin interface of the modulated martensite viewed along the [210]7M direction, constructed under (a) the 7M(IC) structure model and (b) the nanotwin combination structure model. For clarity, only Mn atoms are displayed.

Fig. 4.13a shows the EBSD phase micrograph covering the co-existing austenite, incommensurate 7M martensite and NM martensite within the same initial austenite grain. The 7M and the NM martensite are both plate shaped, and the neighboring 7M and NM […]

Fig. 4.13b shows the EBSD orientation micrograph corresponding to Fig. 4.13a, with the three phases colored according to their orientations. For the 7M martensite there are 4 orientation variants. For the NM martensite, one plate seems to correspond to one orientation variant, but in reality it is composed of two alternately distributed fine lamellae, as displayed in Fig. 4.14a. Of the two lamellae, one is so much thinner than the other that its thickness is beyond the resolution of the present automated EBSD orientation mapping. Representing the orientation of each NM plate by the orientation of its thicker lamella, one finds 4 plate variants, as shown in Fig. 4.13b. Considering that each NM plate is composed of two orientation variants, there are in total 8 orientation variants for the NM martensite.

Fig. 4.13 (a) EBSD phase index map of the co-existing austenite, 7M(IC) and NM martensite in the same austenite grain; (b) EBSD orientation map; the four 7M variants are designated A, B, C and D, and the four NM plates are denoted P1, P2, P3 and P4.

Fig. 4.14 (a) Zoomed BSE image of the squared region in Fig. 4.13b showing the fine lamellae in plates P1 and P3 of the NM martensite. The paired fine lamellae in the two plates are denoted L1, L1', L3 and L3', respectively, as shown in the inset schema. L1 and L3 are the thicker lamellae, L1' and L3' the thinner ones.
The red and green lines represent the traces of the inter-plate interface and of the (112)Tet twinning interface, respectively. (b) {112}Tet pole figure of the two thick lamellae (L1 and L3) and {010}Tet pole figure of the two thin lamellae (L1' and L3'). The red line represents the trace of the plate interface normal. (c) <2 0 -1>Tet pole figure of the four fine lamellae in plates P1 and P3.

The corresponding pole figures are shown in Fig. 4.14b (plane pole figure) and Fig. 4.14c (direction pole figure). The projection of the plate interface normal (the red straight line) is displayed in the plane pole figure (Fig. 4.14b). The corresponding plane and direction poles are enclosed in the open squares in the pole figures, and their deviations are given in Table 4.6. The parallelisms are far from perfect, especially those of the planes. With such large deviations, elastic displacements of the atoms from their equilibrium positions in the vicinity of the interface are to be expected. This large mismatch at the plate interfaces imposes a significant constraint on a direct transformation from austenite to NM martensite, and this may be the reason why the 7M martensite forms at the intermediate state of the transformation, mitigating the transformation constraint.

Fig. 4.15 BSE image showing the transition from 7M variant A to NM plate P1.

Fig. 4.16 Atomic projection of (a) the type I twin interface along [210]7M and its cross-sectional atomic correspondences for the 7M martensite; (b) the corresponding inter-plate interface of the NM martensite and its cross-sectional atomic correspondences.

Chapter 5 examined the transformation from the cubic austenite to the tetragonal NM martensite through crystal structure, crystallographic orientation and microstructure examinations. In this transformation, the lattice mismatch between the cubic austenite and the tetragonal NM martensite and the formation of incoherent NM plate interfaces represent an insurmountable energy barrier for a direct transformation from the austenite to the NM martensite. The formation of the 7M modulated martensite is evidenced as an intermediate step in this transformation: it introduces an intermediate crystal structure that greatly mitigates the large lattice mismatch and forms coherent plate interfaces. During this two-step transformation, one austenite variant gives rise to 4 twin-related 7M orientation variants and one 7M variant results in 2 twin-related NM variants, hence 8 NM orientation variants in total. The transformation from the 7M to the NM martensite is realized by a lattice distortion following the (001)7M//(112)Tet and [100]7M//[11-1]Tet OR, and is accompanied by a degradation of the atomic coherency in the vicinity of the NM plate interfaces and a complete change of the twin configuration. This microstructural change correlates well with the experimentally observed degradation of the field-induced shape memory performance from the 7M to the NM state. The temperature window available for the stable existence of the 7M martensite depends on the energy barriers related to the lattice mismatch between the 7M and the NM martensite and to the atomic misfit at the plate interfaces of the NM martensite.

Conclusion and perspective

Conclusion

In this dissertation, four off-stoichiometric Ni-Mn-Ga polycrystalline alloys were prepared, and their microstructures and martensitic transformation features were thoroughly investigated.
Through this dissertation, it has been demonstrated that the EBSD technique can serve as an advanced characterization tool for accurate crystallographic analyses of materials having modulated superstructures. The conclusions of the present study are summarized as follows:

(1) Ni50Mn28Ga22 alloy: The martensite possesses a monoclinic 5M modulated superstructure whose superlattice is composed of 5 subcells. The microstructure of the 5M martensite is characterized by broad plates with alternately distributed fine variants. EBSD measurements using the monoclinic superstructure information revealed four twin-related variants A, B, C and D with distinct orientations in one broad plate: the variant pair A:C (or B:D) is in a type I twin relation, A:B (or C:D) in a type II twin relation, and A:D (or B:C) in a compound twin relation. The variant interfaces were shown to be the corresponding twinning planes (K1). Based on the local orientations of the individual martensite variants measured by EBSD and on crystallographic calculation, the favorable transformation OR from austenite to 5M martensite was revealed to be the Pitsch relation, (101)A//(1 -2 -5)5M and [1 0 -1]A//[-5 -5 1]5M, with no residual austenite.

(2) Ni50Mn30Ga20 alloy: Kikuchi pattern indexing confirmed that the 7M martensite possesses an incommensurate monoclinic superstructure whose superlattice is composed of ten subcells. From a parent austenite grain, one or several martensite colonies are inherited, each consisting of 4 types of twin-related variants (A, B, C and D) according to the EBSD measurements. All the pairs of variants can be categorized into three twinning modes: variants A and C (or B and D) are in a type I twin relation, variants A and B (or C and D) in a type II twin relation, and variants A and D (or B and C) in a compound twin relation. All the twin interfaces coincide with the respective twinning planes (K1). Furthermore, based on the local orientations of the individual martensite variants measured by EBSD and on detailed crystallographic analysis, the energetically favorable OR governing the austenite to incommensurate 7M martensite transformation was revealed to be the Pitsch relation, (101)A//(1 -2 -10)7M and [1 0 -1]A//[-10 -10 1]7M. Under this OR, at most 24 physically distinct martensite variants may result from an initial austenite grain during the martensitic transformation. Notably, a first attempt was made in the present work to resolve the ambiguity between geometrically favorable ORs by examining the lattice discontinuity caused by the phase transformation and the structural modulation.

(3) Ni54Mn24Ga22 alloy: The alloy is composed of self-accommodated NM martensite plates with a tetragonal crystal structure. Adjacent plates are related by rotations of 80°-85° around <110>Tet axes. Locally, four types of plates are identified, and each plate consists of paired fine variants; in total, eight orientation variants are found in one martensite colony. The paired fine variants in each plate are compound twin related, with {112}Tet as the twinning plane and <11-1>Tet as the twinning direction. The inter-plate interfaces are close to the {1-12}Tet plane but with ~3° deviation, whereas the interfaces between two paired fine variants agree well with the {112}Tet twinning plane.

(4) Ni53Mn22Ga25 alloy: At room temperature, austenite and martensite co-exist in the Ni53Mn22Ga25 alloy.
The formation of the characteristic diamond-like martensite microstructure with four variants (A, B, C and D) during the austenite to 7M martensite transformation was evidenced. As revealed by EBSD measurements, the martensite "diamond" consists of type I twins (A:C and B:D pairs) and compound twins (A:D and B:C pairs); the long ridge of the martensite "diamond" corresponds to a type I twin interface and the short ridge to a compound twin interface. The "diamonds" finally transform into martensite plates. The favorable growth mode of the "diamond" is the forward motion of the spears. Crystallographic calculation shows that the four characteristic variants of a "diamond" group cluster around one {101}A pole and that the elastic strains around the martensite are effectively cancelled by forming such a group. Both the A:C (or B:D) and the A:B (or C:D) variant pairs are self-accommodated, while the A:D (or B:C) variant pair is not.

A microstructure with three co-existing phases (austenite, modulated martensite and final NM martensite) was observed in some austenite grains. The three phases occur with a fixed adjacency: austenite, modulated martensite, NM martensite. The long-period 7M martensite appears on cooling as a thermodynamically metastable phase intermediate between the parent austenite and the final stable NM martensite. The modulated martensite possesses an independent crystal structure, rather than being a nanotwin combination of the normal non-modulated martensite as proposed by the nanotwin combination theory. The modulated structure of the 7M martensite provides a reduced local number of variants and a configuration favorable for twinning and detwinning, which is essential for attaining the magnetic-field-induced shape change.

The role of the 7M martensite in the transformation from the cubic austenite to the tetragonal NM martensite has been clarified. In this transformation, the lattice mismatch between the cubic austenite and the tetragonal NM martensite and the formation of incoherent NM plate interfaces represent an insurmountable energy barrier for a direct transformation from the austenite to the NM martensite. The formation of the 7M modulated martensite is evidenced as an intermediate step: it introduces an intermediate crystal structure that greatly mitigates the large lattice mismatch and forms coherent plate interfaces. During this two-step transformation, one austenite variant gives rise to 4 twin-related 7M orientation variants and one 7M variant results in 2 twin-related NM variants, hence 8 NM orientation variants in total. The transformation from the 7M to the NM martensite is realized by a lattice distortion following the (001)7M//(112)Tet and [100]7M//[11-1]Tet OR, accompanied by a degradation of the atomic coherency in the vicinity of the NM plate interfaces and a complete change of the twin configuration. This microstructural change correlates well with the experimentally observed degradation of the field-induced shape memory performance from the 7M to the NM state.

Fig. 1 Schematic representations of (a) the unit cell of the cubic austenite, (b) the monoclinic supercell of the 5M modulated martensite consisting of five subcells (outlined by dashed lines), and (c) the reduced average unit cell (ignoring the lattice modulation).

Fig. 2 (a) BSE image of the broad martensite plates in an initial austenite grain; (b) orientation micrograph of the fine lamellae.
The orientation g_A of an austenite grain with respect to the orthonormal sample frame can be expressed in matrix form as

g_A = S_j^A · T · S_i^M · g_M^k,

where g_M^k is the measured orientation of the k-th martensite variant with respect to the orthonormal sample frame; T is the rotation matrix transforming the orthonormal frame attached to the monoclinic martensite lattice basis into that of the cubic austenite according to the given OR; and S_j^A (j = 1, 2, ..., 24) and S_i^M are the rotational symmetry elements of the cubic and monoclinic crystal systems, respectively. Based on a literature survey, the Bain, K-S, N-W and Pitsch ORs are assumed as the possible transformation ORs in the present work. For simplicity, the ORs expressed as parallelisms of planes and in-plane directions are first verified between the austenite and the average 1M martensite (Fig. 1c).

Fig. 3 {001} standard stereographic projections of the austenite orientations calculated from the martensite variants A, B, C and D of variant group g1 under the Bain, K-S, N-W and Pitsch OR assumptions, respectively. The common austenite orientations are framed.

Fig. 4 Schematic representation of (a) the unit cell of the austenite; (b) the supercell of the 7M martensite, where each elementary cell (1M martensite) is indicated by dashed lines and the five elementary cells with distinct constants are denoted C1, C2, ..., C5; (c) the average cell of the modulated martensite.

Fig. 5 (a) A typical BSE contrast image of the polycrystalline Ni50Mn30Ga20 alloy. (b) EBSD orientation map determined locally on one colony. The coordinate system (X0-Y0-Z0) refers to the macroscopic sample coordinate system.

Besides the classical Bain OR, the K-S, N-W and Pitsch ORs are assumed as possible ORs between the parent austenite and the 7M martensite. The assumed plane and in-plane direction parallelisms are first used to specify the ORs between the austenite and the 1M martensite (Fig. 4c).

Fig. 6 {001} pole figures of the austenite calculated from the martensite variants A, B, C and D under the assumption of the (a) Bain, (b) K-S, (c) N-W and (d) Pitsch ORs, respectively. The poles corresponding to the common austenite orientation are framed.

Fig. 7 Atomic correspondences of the respective planes in the austenite and the 1M martensite under the assumption of (a) the K-S relation, with (111)A//(011)1M and [1 0 -1]A//[-1 -1 1]1M; (b) the Pitsch relation, with (101)A//(1 -2 -1)1M and [1 0 -1]A//[-1 -1 1]1M.

Fig. 8 Angular deviations of (a) the (1 -2 -1)1M (Pitsch relation) and (011)1M (K-S relation) planes and (b) the [-1 -1 1]1M direction in the 5 subcells (C1, C2, ..., C5) from those of the average 1M cell.

3. Crystallographic characteristics of the NM martensite

The Ni54Mn24Ga22 alloy, which exhibits non-modulated martensite at room temperature, was prepared. The powder X-ray diffraction patterns measured at room temperature show that the alloy contains only non-modulated martensite with a tetragonal crystal structure; the patterns can be indexed with the space group I4/mmm (No. 139) [10].
The lattice constants of the alloy were determined to be aT = bT = 3.853 Å and cT = 6.625 Å, respectively. EBSD measurements revealed that adjacent martensite plates are related by rotations of 80°-85° around <110>T axes. Additional SEM observations at high magnification showed that the NM plates are composed of thin lamellae and that there are four types of martensite plates (denoted P1, P2, P3 and P4) in a martensite colony, according to the orientation of the thin lamellar interfaces, as shown in Fig. 9. The fine lamellae appear in pairs, in which one is thicker than the other. Thus, eight orientation variants can always be found within a martensite colony. The paired fine variants in each plate were found to be twin related, with the {112}T plane as the twinning plane and <11-1>T as the twinning direction. The inter-plate interfaces are close to the {1-12}T plane but with ~3° deviation, whereas the interfaces between two paired fine variants agree well with the {112}T twinning plane.

Fig. 9 (a) Backscattered electron image obtained in a martensite colony; four kinds of plates are numbered P1, P2, P3 and P4. (b) Enlarged image of the rectangle-delimited region showing the paired fine lamellae in the martensite plates. The inter-plate and inter-lamellar interfaces are marked with red and green lines, respectively.

Fig. 10 (a) EBSD micrograph of a martensite "diamond" composed of four variants denoted A, B, C and D. (b) Formation of the paired plates from the "diamond".

5. Clarification of the structure and stability of the long-period modulated martensite

An apparent contradiction in the interpretation of the long-period martensite structure has recently arisen for Ni-Mn-Ga ferromagnetic shape memory alloys. Righi et al. [4] made a detailed study of the 7M modulated martensite by applying the superspace approach to powder X-ray diffraction (XRD) analysis. The results showed that the modulated martensite possesses its own crystal structure with an incommensurate 7M modulation (hereafter denoted 7M(IC)). Based on XRD measurements and the adaptive phase concept [12], Kaufmann et al. [13] examined the co-existence of austenite, 7M and NM martensite in epitaxial Ni-Mn-Ga films. They concluded that the 7M modulated martensite can simply be constructed from nanotwinned variants of the tetragonal NM martensite with the (5 -2)2 stacking sequence, which excludes the existence of an independent modulated structure. In fact, it is very difficult to discriminate directly between the validity of these two structure models. So far, numerous experimental studies of modulated structures have been performed almost exclusively by diffraction techniques. The role of the microstructural correlations between martensite plates has seldom been taken into account, either in experimental examinations or in the nanotwin combination model. These naturally occurring microstructural features undoubtedly have a strong influence on the stability and the functionality of the modulated martensite.
In this context, the bulk polycrystalline Ni53Mn22Ga25 alloy, with a martensitic transformation temperature close to room temperature, was selected as an ideal test material. The alloy displays a transformation sequence from the austenite to the modulated martensite and then to the NM martensite during continuous cooling, as detected by X-ray diffraction (XRD) measurements, and islands of modulated martensite and NM martensite coexist in some initial austenite grains when the alloy is kept at room temperature.

The XRD pattern of the modulated phase in the Ni53Mn22Ga25 alloy, measured at -30 °C and shown in Fig. 11a, was first solved and then refined with the 7M(IC) structure model [4] using the PowderCell software [14]. The modulated phase possesses a monoclinic long-period superstructure (P2/m, No. 10) with the lattice constants a7M = 4.222 Å, b7M = 5.537 Å, c7M = 41.982 Å and β = 92.5°. The elementary cell of the nanotwin combination was also constructed artificially from the lattice constants of the tetragonal NM cell (aT = bT = 3.879 Å, cT = 6.511 Å) and the (5 -2)2 stacking sequence of the (112)T twins [13]. The resolved lattice constants of the adaptive phase are a_ad = 4.257 Å, b_ad = 5.486 Å, c_ad = 29.446 Å and β_ad = 94.2°. The XRD patterns recalculated using the two structure models are presented in Fig. 11b and c. Both structures yield diffraction patterns very close to the measured one. Close examination reveals that the 7M(IC) model provides a slightly better fit to the measured profile. The distinguishable difference between the two structures appears in the secondary minor peaks (arrowed in the figure) located near the three main diffraction peaks in the 2θ range from 40 to 50°.

Fig. 11 XRD patterns of the modulated martensite: (a) measured at -30 °C; (b) recalculated with the 7M(IC) superstructure; (c) recalculated with the tetragonal nanotwin-combined elementary cell. The insets show the elementary cells of the two structures.

Fig. 12 (a) EBSD phase micrograph showing austenite, 7M(IC) martensite and NM martensite co-existing within the same austenite grain; (b) EBSD orientation micrograph, where the four 7M variants are designated A, B, C and D and the four NM plates are denoted P1, P2, P3 and P4. The coordinate system (X0-Y0-Z0) refers to the macroscopic sample coordinate system.

Fig. 13 Atomic correspondence at the type I twin interface of the modulated martensite viewed along the [210]7M direction, constructed under (a) the 7M(IC) structure model and (b) the nanotwin combination structure model. For clarity, only Mn atoms are drawn.

Fig. 14 Backscattered electron (BSE) image showing the transition from 7M variant A to NM plate P1.

Fig. 15 (a) Atomic projection of the type I twin interface along the [210]7M direction and the cross-sectional atomic correspondences for the 7M martensite; (b) the corresponding interface between the NM martensite plates and the cross-sectional atomic correspondences.
(2) Ni50Mn30Ga20 alloy: Kikuchi pattern indexing confirmed that the 7M martensite possesses an incommensurate monoclinic superstructure whose superlattice is composed of ten subcells. Within a parent austenite grain, one or several martensite colonies form, each consisting of 4 variants (A, B, C and D) that are mutually twin related. All the pairs of variants can be classified into three twinning modes: variants A and C (or B and D) form type I twins, variants A and B (or C and D) type II twins, and variants A and D (or B and C) compound twins. All the interfaces coincide with the respective twinning plane (K1). Furthermore, based on the local orientations of the individual martensite variants measured by EBSD and on detailed crystallographic analyses, the energetically most favorable orientation relationship governing the transformation of the austenite into incommensurate 7M martensite was identified as the Pitsch relation, (101)A//(1 -2 -10)7M and [1 0 -1]A//[-10 -10 1]7M. Notably, the ambiguity between the geometrically most favorable orientation relationships was resolved by examining the lattice discontinuity caused by the phase transformation and by the structural modulation.

(3) Ni54Mn24Ga22 alloy: The alloy is composed of self-accommodated NM martensite plates with a tetragonal crystal structure. Adjacent plates exhibit a misorientation of 80°-85° around the <110>T axes. Locally, four types of plates are identified, and each plate consists of pairs of fine variants; in total, eight orientation variants are found within a martensite colony. The paired fine variants in each plate were found to be in a compound twin relation, with the {112}T plane as the twinning plane and the <11-1>T direction as the twinning direction. The inter-plate interfaces are close to the {112}T plane but with ~3° deviation, whereas the interfaces between two paired fine variants agree well with the {112}T twinning plane.

(4) Ni53Mn22Ga25 alloy: At room temperature, austenite and martensite co-exist in the Ni53Mn22Ga25 alloy. The formation of the characteristic diamond-shaped martensite microstructure with four variants (A, B, C and D) during the transformation of the austenite into 7M modulated martensite was evidenced. As revealed by the EBSD measurements, the "diamond" is composed of martensite variants in type I twin relation (A:C and B:D) and compound twin relation (A:D and B:C); the long ridge of the "diamond" corresponds to the type I twin interface and the short ridge to the compound twin interface. The diamonds finally transform into martensite plates. Crystallographic calculations show that the four characteristic variants of a diamond group cluster around one {101}A pole and that the elastic strains arising from the transformation are effectively cancelled by the formation of the group. The A:C (or B:D) and A:B (or C:D) variant pairs are both self-accommodated, whereas the A:D (or B:C) pairs are not. A microstructure with the three phases (austenite, modulated martensite and final NM martensite) co-existing was observed in some austenite grains.
The three phases occur with a fixed adjacency: austenite, modulated martensite, NM martensite. The long-period 7M martensite appears on cooling as a thermodynamically metastable phase intermediate between the parent austenite and the final stable NM martensite. The modulated martensite phase possesses an independent crystal structure, rather than being the nanotwin combination of the normal non-modulated martensite proposed by the nanotwin combination theory. The modulated structure of the 7M martensite offers a reduced number of local variants and a configuration favorable for twinning and detwinning, which is essential for achieving the magnetic-field-induced shape change. The role of the 7M martensite in the transformation of the cubic austenite into tetragonal NM martensite has been clarified. In this transformation, the lattice mismatch between the cubic austenite and the tetragonal NM martensite and the formation of incoherent NM inter-plate interfaces represent an insurmountable energy barrier for a direct transformation of the austenite into NM martensite. The formation of the 7M modulated martensite is evidenced as an intermediate step in this transformation […]

2.3.6 Transmission electron microscopy

TEM is capable of imaging at a significantly higher resolution than optical microscopy and SEM, which makes it possible to examine fine details of the microstructure. Here, a Philips CM 200 TEM with a LaB6 cathode was used to observe the stacking faults inside the martensite plates; a Gatan MSC 792 CCD camera was used to acquire the images. The working voltage was set at 200 kV.

2.4 Crystallographic calculation method

2.4.1 Coordinate transformation between the orthonormal reference system and the monoclinic system

By convention, individual orientations acquired by EBSD measurements are represented by a set of rotations expressed in Euler angles (φ1, Φ, φ2) [110]. The corresponding orientation matrix is

g = | cosφ1·cosφ2 - sinφ1·sinφ2·cosΦ     sinφ1·cosφ2 + cosφ1·sinφ2·cosΦ     sinφ2·sinΦ |
    | -cosφ1·sinφ2 - sinφ1·cosφ2·cosΦ    -sinφ1·sinφ2 + cosφ1·cosφ2·cosΦ    cosφ2·sinΦ |
    | sinφ1·sinΦ                         -cosφ1·sinΦ                        cosΦ       |   (2.1)

In this study, the orthonormal crystal coordinate system is linked to the monoclinic crystal coordinate system by setting c_M//k, b_M//j and a_M in the i-O-k plane.

Table 3.1 Misorientation angles (ω) and rotation axes (d) between variants A, B, C and D in Fig. 3.3a. The rotation axes refer to the orthonormal crystal coordinate frame set on the monoclinic martensite lattice basis.

Variant pair   ω [°]    Rotation axis d (d1, d2, d3)
A:C            86.80    (-0.70846, 0.00241, -0.70575)
               179.81   (0.48490, -0.72659, -0.48676)
B:D            86.79    (-0.71242, -0.00221, -0.70175)
               179.83   (-0.48214, 0.72661, 0.48947)
A:B            93.32    (0.70870, 0.00098, 0.70551)
               179.92   (-0.51312, -0.68633, 0.51543)
C:D            93.09    (0.70882, 0.00435, 0.70538)
               179.64   (-0.51205, -0.68778, 0.51455)
A:D            179.64   (0.70487, -0.00098, -0.70933)
               179.89   (-0.70933, 0.00317, -0.70487)
B:C            179.88   (0.70978, 0.00079, 0.70443)
               179.91   (-0.70443, -0.00102, 0.70978)

Table 3.2 Twinning elements of the commensurate 5M modulated martensite in Ni50Mn28Ga22.

Element   Type I (A:C and B:D)     Type II (A:B and C:D)    Compound (A:D and B:C)
K1        (1 -2 -5)                (1.0569 -2 -4.7155)      (1 0 5)
K2        (1.0569 -2 -4.7155)      (1 -2 -5)                (1 0 -5)
η1        [-5.2504 -5 0.9499]      [-5 -5 1]                [-5 0 1]
η2        [5 5 -1]                 [5.2504 5 -0.9499]       [5 0 1]
P         (1 0.0514 5.2568)        (1 0.0514 5.2568)        (0 1 0)
s         0.1387                   0.1387                   0.0074

Table 3.4 A selection of possible ORs between austenite and martensite.

Transformation OR   Plane and in-plane direction parallelism
Bain relation       (001)A//(010)1M & [010]A//[101]1M
K-S relation        (111)A//(011)1M & [1 0 -1]A//[-1 -1 1]1M
N-W relation        …
Pitsch relation     (101)A//(1 -2 -1)1M & [1 0 -1]A//[-1 -1 1]1M
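For reference, the conversion defined by Eq. (2.1) above and the monoclinic frame setting of Section 2.4.1 can be implemented as follows (a minimal sketch assuming the Bunge convention; scipy's intrinsic Z-X-Z sequence is used, and the function names are illustrative):

```python
# Minimal sketch of Section 2.4.1: Euler angles -> orientation matrix g
# (Eq. 2.1), and the structure matrix linking the monoclinic basis to the
# orthonormal frame with c_M // k, b_M // j, a_M in the i-O-k plane.
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_to_g(phi1, Phi, phi2):
    """Bunge Euler angles (degrees) -> orientation matrix g of Eq. (2.1);
    g is the transpose of the active intrinsic Z-X-Z rotation matrix."""
    return R.from_euler('ZXZ', [phi1, Phi, phi2], degrees=True).as_matrix().T

def mono_structure_matrix(a, b, c, beta_deg):
    """Columns are the monoclinic basis vectors a_M, b_M, c_M expressed in
    the orthonormal frame (c_M // k, b_M // j, a_M in the i-O-k plane)."""
    beta = np.radians(beta_deg)
    return np.array([[a * np.sin(beta), 0.0, 0.0],
                     [0.0,              b,   0.0],
                     [a * np.cos(beta), 0.0, c  ]])

# Example: the average 1M cell of the 5M martensite
# mono_structure_matrix(4.226, 5.581, 21.052 / 5, 90.3)
```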
Note that the Miller indices of planes and in-plane directions for the product martensite with monoclinic structure are referred to the average unit cell illustrated in Fig. 3.5c.

Table 3.5 Minimum misorientation angles between the austenite orientations calculated from variant A and from the other three variants in six variant groups under the Bain, K-S, N-W and Pitsch ORs.

Group No.   Variant pair   Bain (°)   K-S (°)   N-W (°)   Pitsch (°)
g1          A:B            3.45       0.75      2.09      0.75
            A:C            3.52       1.61      1.74      0.50
            A:D            0.44       1.58      2.44      0.25
g2          A:B            3.81       0.34      2.34      0.33
            A:C            3.30       1.89      1.86      0.66
            A:D            1.07       1.81      2.86      0.81
g3          A:B            3.33       0.66      1.80      0.66
            A:C            3.22       1.72      1.62      0.81
            A:D            0.41       1.90      2.66      0.20
g4          A:B            3.53       0.48      1.83      0.48
            A:C            3.42       1.44      1.50      0.70
            A:D            0.49       1.52      2.46      0.26
g5          A:B            3.20       0.79      1.81      0.79
            A:C            3.22       1.56      1.44      0.86
            A:D            0.84       1.38      2.38      0.40
g6          A:B            3.70       0.43      4.17      0.43
            A:C            3.66       1.90      3.94      0.36
            A:D            0.40       1.98      2.86      0.28

Table 3.6 Misorientation angles (ω) and rotation axes (d) among the four types of variants (A, B, C and D) in Fig. 3.11a, where the coordinates of the rotation axes refer to an orthonormal crystal coordinate frame fixed to the monoclinic martensite lattice basis.

Variant pair   ω [°]      Rotation axis d (d1, d2, d3)
A:C            82.6319    (-0.728809, -0.00337, -0.684708)
               179.745    (-0.452053, 0.751082, 0.481169)
B:D            82.9978    (-0.725021, -0.002634, -0.688722)
               179.8      (-0.456351, 0.74897, 0.480404)
A:B            97.783     (0.723592, 0.003766, 0.690218)
               179.675    (-0.520057, -0.65749, 0.545204)
C:D            96.5924    (0.719889, -0.001061, 0.694089)
               179.909    (0.518203, 0.66528, -0.537465)
A:D            179.219    (0.724602, 0.004305, 0.689154)
               179.507    (-0.689145, -0.006812, 0.724592)
B:C            179.588    (0.723417, 0.003106, 0.690404)
               179.644    (-0.690403, -0.003594, 0.723416)

The full twinning elements are given in Table 3.7. It is seen that variant pair A:C (or B:D) possesses the rational K1 and η2 (type I twin), A:B (or C:D) the rational η1 and K2 (type II twin), and A:D (or B:C) the rational K1, η1, K2 and η2 (compound twin). Among the three twin types, the compound twin has the smallest twinning shear, while the other two possess twinning shears of the same order. Type I and type II twins are conjugate (reciprocal) to each other, having a common s but with K1 and K2, and η1 and η2, interchanged.

Table 3.7 Full twinning elements of the Ni50Mn30Ga20 7M martensite twin variants. All indices are expressed in the incommensurate 7M superlattice basis.

Element   Type I (A:C / B:D)        Type II (A:B / C:D)       Compound (A:D / B:C)
K1        (1 -2 -10)                (1.0621 -2 -9.3785)       (1 0 10)
K2        (1.0621 -2 -9.3785)       (1 -2 -10)                (1 0 -10)
η1        [-10.5541 -10 0.9446]     [-10 -10 1]               [-10 0 1]
η2        [10 10 -1]                [10.5541 10 -0.9446]      [10 0 1]
P         (1 0.057 10.5699)         (1 0.057 10.5699)         (0 1 0)
s         0.2299                    0.2299                    0.0135

It can be seen from Table 3.8 that, for the three types of twin interfaces, the calculated interface planes coincide with their respective twinning planes (K1) within reasonable deviation, being close to {1 -2 -10}7M for the type I twin, {1.0621 -2 -9.3785}7M for the type II twin and {1 0 10}7M for the compound twin. All three types of twin interfaces can therefore be considered coherent. Individual movement of such interfaces generates the smallest atomic mismatch, and hence the highest mobility and reversibility compared with other types of boundaries; this is a prerequisite for the fast dynamic response of these materials to the actuating field.
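The minimum misorientation angles collected in Tables 3.5 and 3.9 measure how well the parent orientations calculated from different variants of one group agree under each assumed OR. A sketch of this disorientation computation (illustrative only, not the thesis code) is:

```python
# Minimal sketch: smallest rotation angle between two candidate austenite
# orientations, minimized over the 24 cubic symmetry rotations.
import numpy as np
from scipy.spatial.transform import Rotation as R

SYM_CUBIC = R.create_group('O').as_matrix()

def min_misorientation_deg(gA1, gA2):
    """Disorientation angle (degrees) between two austenite orientations."""
    best = 180.0
    for s in SYM_CUBIC:
        dg = gA1 @ (s @ gA2).T
        cos_w = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, np.degrees(np.arccos(cos_w)))
    return best
```

The OR that minimizes these angles across all variant pairs and groups (here the Pitsch relation) is taken as the operative transformation OR.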
Twin type | Variant pair | Twin interface normal | Deviation from related K1 plane
Type I | A:C | (0.46507, -0.75038, -0.46972) | 1.09° from {1 2 -10}
Type I | B:D | (0.46842, -0.74778, -0.47054) | 1.14° from {1 2 -10}
Type II | A:B | (0.48386, -0.74886, -0.45287) | 0.27° from {1.0621 2 -9.3785}
Type II | C:D | (0.49928, -0.74497, -0.44242) | 1.02° from {1.0621 2 -9.3785}
Compound | A:D | (0.72807, 0.01186, 0.68540) | 0.73° from {1 0 10}
Compound | B:C | (0.72073, 0.02296, 0.69284) | 1.35° from {1 0 10}

Table 3.9 Minimum misorientation angles between austenite orientations calculated from martensitic variant pairs (A:B, A:C and A:D) in seven variant groups under the Bain, K-S, N-W and Pitsch relations.

Group No. | Variant pair | Bain (°) | K-S (°) | N-W (°) | Pitsch (°)
g1 | A:B | 6.68 | 0.38 | 2.05 | 0.39
g1 | A:C | 6.78 | 0.33 | 1.38 | 0.30
g1 | A:D | 0.72 | 0.30 | 1.74 | 0.21
g2 | A:B | 7.47 | 1.14 | 2.31 | 1.14
g2 | A:C | 7.39 | 1.85 | 2.51 | 1.52
g2 | A:D | 0.35 | 0.60 | 1.43 | 0.32
g3 | A:B | 6.46 | 0.27 | 2.00 | 0.27
g3 | A:C | 6.71 | 0.89 | 1.54 | 0.57
g3 | A:D | 0.47 | 0.81 | 1.50 | 0.49
g4 | A:B | 7.38 | 1.30 | 3.18 | 1.27
g4 | A:C | 7.81 | 1.41 | 2.49 | 1.31
g4 | A:D | 1.13 | 1.12 | 1.98 | 1.03
g5 | A:B | 6.43 | 0.27 | 1.80 | 0.27
g5 | A:C | 6.43 | 0.72 | 1.22 | 0.42
g5 | A:D | 0.83 | 0.89 | 1.89 | 0.53
g6 | A:B | 6.13 | 0.89 | 1.09 | 0.89
g6 | A:C | 5.52 | 1.50 | 1.02 | 1.36
g6 | A:D | 0.58 | 1.09 | 1.15 | 0.93
g7 | A:B | 6.69 | 0.24 | 1.89 | 0.25

Table 3.10 Components of lattice deformation for the austenite to 1M martensite transformation under the K-S OR and the Pitsch OR.

K-S | | Pitsch |
Dilatation in [111]A | -0.4834% | Dilatation in [101]A | -0.3923%
Dilatation in [1 -2 1]A | 0.0531% | Dilatation in [0 -1 0]A | -0.0428%
Dilatation in [10 -1]A | 0.2664% | Dilatation in [10 -1]A | 0.2664%
Shear in (111)A [-1 0 1]A | 0.0701 | Shear in (101)A [-1 0 1]A | 0.0051
Shear in (1 -2 1)A [10 -1]A | 0.0910 | Shear in (0 -1 0)A [10 -1]A | 0.1149
Shear in (10 -1)A [1 -2 1]A | 0.0018 | Shear in (10 -1)A [0 -1 0]A | 0.0045

Table 3.11 Misorientation calculation results for the two fine lamellae in plate P1, expressed in the orthonormal basis.

Misorientation angle (°) | Rotation axis (d1, d2, d3)
79.48 | (-0.705085, 0.709088, -0.007046)
100.53 | (0.711234, -0.702951, 0.002353)
113.69 | (-0.763601, 0.002162, 0.645685)
114.56 | (0.002151, 0.759855, -0.650089)
126.09 | (0.005053, -0.862696, -0.505697)

Table 3.12 Coordinates of the inter-plate interface normal n expressed in the orthonormal crystal basis.

No. | n1 | n2 | n3
1 | 0.564232 | -0.503713 | 0.654153
2 | 0.577535 | -0.474879 | 0.664036
3 | 0.582612 | -0.484456 | 0.652583
4 | 0.531444 | -0.52204 | 0.667114
5 | 0.577133 | -0.491117 | 0.652473
6 | 0.579255 | -0.506573 | 0.63863
7 | 0.576666 | -0.487242 | 0.655783
8 | 0.553357 | -0.498681 | 0.667168
9 | 0.580171 | -0.499824 | 0.6431
10 | 0.557753 | -0.508412 | 0.65607
11 | 0.549901 | -0.503883 | 0.666116
12 | 0.56102 | -0.516914 | 0.646573
13 | 0.563209 | -0.507294 | 0.652264
14 | 0.555645 | -0.483403 | 0.676447
Mean value | 0.565129 | -0.499292 | 0.656763
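The angle/axis pairs of Tables 3.1, 3.6 and 3.11 follow from pairs of measured orientation matrices. A minimal sketch of such a misorientation calculation, assuming orientations are given as Bunge matrices in the orthonormal frame of Section 2.4.1; the symmetry set below (identity plus the two-fold axis along b // j) is the proper-rotation part of point group 2/m:

```python
import numpy as np

def misorientation(gA, gB, sym_ops):
    """Smallest rotation (angle in degrees, axis in the crystal frame)
    relating two orientations gA, gB (sample -> crystal matrices)."""
    best_angle, best_axis = 360.0, np.array([0.0, 0.0, 1.0])
    for S in sym_ops:
        dg = gB @ (S @ gA).T                       # candidate misorientation
        cos_t = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_t))
        if angle < best_angle:
            w, v = np.linalg.eig(dg)               # rotation axis: the real
            axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvector
            best_angle = angle                     # of eigenvalue +1
            best_axis = axis / np.linalg.norm(axis)
    return best_angle, best_axis

# proper rotations of monoclinic point group 2/m: identity and C2 about b (// j)
sym_2m = [np.eye(3), np.diag([-1.0, 1.0, -1.0])]
```

Scanning all symmetry-equivalent descriptions, rather than keeping only the minimum, reproduces the multiple angle/axis entries listed per variant pair in the tables.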
The calculated results, listed in Table 4.1, manifest that the habit planes between the austenite and the 7M martensite are irrational. The mean value of the calculated habit plane normals, determined using the formula $\bar{n} = \sum_i n_i / \lVert \sum_i n_i \rVert$, is {0.736130, 0.673329, 0.068855}A, which is near {110}A with a 4.7° deviation.

Table 4.1 Coordinates (n1, n2, n3) of the calculated habit plane normals and the mean habit plane. All the indices are expressed in the coordinate frame of austenite.

No. | n1 | n2 | n3
1 | 0.738894 | 0.665652 | 0.104611
2 | 0.738351 | 0.669307 | 0.082866
3 | 0.756754 | 0.650963 | 0.059750
4 | 0.723577 | 0.687154 | 0.065253
5 | 0.742925 | 0.662117 | 0.098304

Table 4.2 Habit plane normals, shape deformation directions and shape deformation matrices of variants A, B, C and D in the group around the (101)A pole, calculated according to the phenomenological theory. The results are expressed in the austenite basis.

Variant | Habit plane normal | Shape deformation direction | Shape deformation matrix (rows)
A | [0.692379, 0.041627, 0.720332] | [-0.726173, 0.039664, 0.686367] | (0.951753, -0.002901, -0.050195); (0.002635, 1.000158, 0.002742); (0.045603, 0.002742, 1.047444)
B | [0.720332, 0.041627, 0.692379] | [0.686367, 0.039664, -0.726173] | (1.047444, 0.002742, 0.045603); (0.002742, 1.000158, 0.002635); (-0.050195, -0.002901, 0.951753)
C | [0.720332, -0.041627, 0.692379] | [0.686367, -0.039664, -0.726173] | (1.047444, -0.002742, 0.045603); (-0.002742, 1.000158, -0.002635); (-0.050195, 0.002901, 0.951753)
D | [0.692379, -0.041627, 0.720332] | [-0.726173, -0.039664, 0.686367] | (0.951753, 0.002901, -0.050195); (-0.002635, 1.000158, -0.002742); (0.045603, -0.002742, 1.047444)

Table 4.3 Lattice constants of the modulated martensite in the frames of the 7M(IC) and nanotwin-combination models, compared with those calculated according to the adaptive-phase criteria (a_ad = c_Tet + a_Tet - a_A; b_ad = a_A; c_ad = a_Tet). Note that all the lattice parameters are expressed in the cubic parent phase coordinate system.

Lattice constant | 7M(IC) (Å) | Nanotwin combination (Å) | Theoretical (Å)
a | 6.082 | 6.200 | 6.186
b | 5.823 | 5.761 | 5.811
c | 5.537 | 5.486 | 5.486

Table 4.4 Twinning elements of the three types of twins in Ni53Mn22Ga25 7M martensite under the 7M(IC) and nanotwin-combination models.

Model | Element | Type I (A:C / B:D) | Type II (A:B / C:D) | Compound (A:D / B:C)
7M(IC) | K1 | (1 2 -10) | (1.0632 2 -9.3676) | (1 0 10)
7M(IC) | K2 | (1.0632 2 9.3676) | (1 2 10) | (1 0 10)
7M(IC) | η1 | [10.5719 10 0.9428] | [10 10 1] | [10 0 1]
7M(IC) | η2 | [10 10 -1] | [10.5719 10 -0.9428] | [10 0 1]
7M(IC) | P | (1 0.0589 10.5888) | (1 0.0589 10.5888) | (0 1 0)
7M(IC) | s | 0.1883 | 0.1883 | 0.0113
Nanotwin combination | K1 | (1 2 -7) | (1.1023 2 -6.2839) | (1 0 7)
Nanotwin combination | K2 | (1.1023 2 6.2839) | (1 2 7) | (1 0 7)
Nanotwin combination | η1 | [-7.6497 7 0.9072] | [-7 7 1] | [-7 0 1]
Nanotwin combination | η2 | [7 7 -1] | [7.6497 7 -0.9072] | [7 0 1]
Nanotwin combination | P | (1 0.0973 7.6813) | (1 0.0973 7.6813) | (0 1 0)
Nanotwin combination | s | 0.2458 | 0.2458 | 0.0239

Table 4.5 Strain tensors between matrix and product phases for the austenite to 7M and austenite to adaptive phase (nanotwin-combination structure) transformations.

Component | Austenite to 7M modulation | Austenite to nanotwin combination
ε11 | 0.0275 | 0.0360
ε22 | -0.0472 | -0.0559
ε33 | 0.0207 | 0.0210
ε13 (shear) | -0.0446 | -0.0750

Table 4.6 Orientation relationships between the NM fine lamellae connected by the inter-plate interface between plates P1 and P3. The orientation data of the thin lamellae were acquired manually by EBSD measurement.

Pair | Orientation relationship
L1/L3 | twin-related: K1 = (112)Tet; K2 = (11 -2)Tet; η1 = [11 -1]Tet; η2 = [111]Tet; s = 0.344
L3/L3 | twin-related (same twinning elements)
L1/L3 | (1 -1 2)L1 5.91° from (1 -1 2)L3; [2 0 -1]L1 1.62° from [2 0 -1]L3
L1′/L3 | (010)L1′ 11.05° from (010)L3; [2 0 -1]L1′ 1.07° from [2 0 -1]L3
L1′/L3′ | (010)L1′ 2.70° from (1 -1 2)L3′; [2 0 -1]L1′ 0.97° from [2 0 -1]L3′

Table 4.7 Strain tensors between matrix and product phases for the austenite to 7M, austenite to NM and 7M to NM transformations.

Component | Austenite to 7M | Austenite to NM | 7M to NM
ε11 | 0.0275 | -0.0560 | 0.0083
ε22 | -0.0472 | 0.1204 | -0.0092
ε33 | 0.0207 | -0.0560 | 0.0002
ε13 (shear) | -0.0446 | 0 | 0.0252

Appendix II: atomic coordinates of the incommensurate 7M superstructure
a_7M = 4.2651 Å; b_7M = 5.5114 Å; c_7M = 42.365 Å; β = 93.27°; space group: P2/m (No. 10). Atom type, Wyckoff position and fractional coordinates x, y, z are tabulated for the Mn, Ga and Ni sites.

Appendix III: atomic coordinates of the nanotwinned 7M superstructure
a_ad = 4.257 Å; b_ad = 5.486 Å; c_ad = 29.446 Å; β_ad = 94.2°; space group: Pm (No. 6). Atom type, Wyckoff position and fractional coordinates x, y, z are tabulated for the Mn, Ga and Ni sites.
Tableau 1 Misorientation angles (ω) and rotation axes (d) between variants A, B, C and D in Figure 3(a). The rotation axes are referred to the orthonormal crystal frame attached to the monoclinic martensite lattice. (The data are those of Table 3.1 above.)

Tableau 2 Twinning elements of the 5M modulated martensite in Ni50Mn28Ga22. (The data are those of Table 3.2 above.)

Since the martensitic transformation is accomplished by coordinated atomic displacements, specific orientation relationships (ORs) between the parent and product phases are required to eliminate lattice discontinuity. These ORs have generally been determined by taking advantage of the coexistence of retained austenite and product martensite. However, for the alloy studied here, since the …

To quantify the deviations for an assumed OR, the minimum misorientation angles between two calculated austenite orientations were estimated. Indeed, among all the selected variant groups, the Pitsch relation exhibits the smallest deviation angle.
Consequently, the Pitsch OR, i.e. (101)A // (1 2 -1)1M and [1 0 -1]A // [-1 1 1]1M between the austenite and the 1M martensite, or (101)A // (1 2 -5)5M and [1 0 -1]A // [-5 5 1]5M when referred to the 5M martensite, should be considered the most favorable OR for the transformation.

Based on the foregoing considerations, we calculated the orientations of the parent austenite from the measured orientations of the four martensite variants that are locally twin-related. Here, a set of four twin-related variants within a broad plate is treated as one variant group. The calculated austenite orientations under a given OR were plotted in the standard {001} stereographic projection of austenite in the macroscopic sample frame. As an example, Figure 3 displays the calculation results for one variant group, where an austenite orientation is represented by three {001} poles. One sees that there is one distinct austenite orientation calculated from the martensite under the assumption of the Bain relation, and two under the K-S, N-W and Pitsch relations, respectively. If the assumed OR is the true transformation OR, all three austenite {001} poles of one martensite variant should coincide with those of the three other variants in the same variant group. It appears that the K-S and Pitsch relations give the least dispersion among the respective {001} poles for a common austenite orientation.

Tableau 3 Twinning elements of the 7M modulated martensite of the Ni50Mn30Ga20 alloy. All indices are expressed using the supercell model composed of 10 subcells. (The data are those of Table 3.7 above.)

This also appears in the enlargements of the squared regions in Figures 6(b) and (d). To quantify the quality of the agreement of the orientation relationships, the minimum misorientation angles between the orientations of the various solutions calculated for the austenite were estimated. The results showed that the K-S and Pitsch ORs give the smallest deviation angles, with almost no difference between the two, suggesting that both could be considered possible transformation ORs. To further discriminate between the two relations and determine which is the more favorable for the transformation, the lattice deformations during the transformation under each assumed relation (cf. Figure 7) were examined. The lattice deformation is decomposed into dilatation or contraction and shear, as indicated in Figure 7. The deformation amounts under the two assumed relations were calculated and are reported in Table 4. In the K-S case, the elongation or contraction is assigned along [111]A, [1 -2 1]A and [1 0 -1]A, and the simple shear lies in (111)A along [-1 0 1]A, in (1 -2 1)A along [1 0 -1]A, and in (1 0 -1)A along [1 -2 1]A. For the Pitsch relation, the elongation or contraction is assigned along [101]A, [0 -1 0]A and [1 0 -1]A, and the simple shear lies in (101)A along [-1 0 1]A, in (0 -1 0)A along [1 0 -1]A, and in (1 0 -1)A along [0 -1 0]A, respectively. Compared with the K-S relation, the Pitsch relation involves a relatively small lattice distortion for the transformation from austenite to the average 1M martensite (only the last two shear components are slightly higher). Considering that elongation or contraction requires a volume change whereas simple shear does not, deformation by simple shear can occur more easily. From this point of view, the Pitsch relation is energetically advantageous over the K-S relation for the transformation of austenite into 1M martensite. To further distinguish the deformation amounts under the two assumed relations, the deviations of the (1 2 -1) (Pitsch) / (011) (K-S) planes and of the [-1 1 1] (Pitsch and K-S) directions of each 7M subcell of the supercell with respect to those of the average 1M cell were evaluated. As shown in Figure 8, the K-S and Pitsch relations differ from each other only in the plane deviation, insofar as the two orientation relationships possess the same parallelism between directions. The K-S OR causes a larger deviation than the Pitsch OR, meaning that the K-S OR requires a larger atomic rearrangement to achieve the structural modulation. This result also confirms that the Pitsch orientation relationship, i.e. (101)A // (1 2 -10)7M and [1 0 -1]A // [-10 10 1]7M between the austenite and the martensite, is more favorable than the K-S relation for the martensitic transformation.

Acknowledgements
This work is financially supported by the National Natural Science Foundation of China (Grant No. 50820135101), the Ministry of Education of China (Grant Nos. 2007B35, 707017 and IRT0713), the Fundamental Research Funds for the Central Universities of China (Grant No. N090602002), the CNRS of France (PICS No. 4164) and the joint Chinese-French project OPTIMAG (No. ANR-09-BLAN-0382). I would like to give my sincere thanks to these institutions. I also gratefully acknowledge the French Embassy in Beijing for providing a grant for my study in France. This work was completed at LEM3 (former LETAM, University of Metz, France) and the Key Laboratory for Anisotropy and Texture of Materials (Northeastern University, China). I had the honor to work with numerous colleagues in the two labs and I would like to give my heartfelt thanks for their kind help. I would like to sincerely thank the reviewers, Prof. Y. Fautrelle and Prof. Z. Q. Hu, for taking time out of their busy schedules to review my dissertation and provide constructive suggestions and comments. I would like to give my special thanks to my supervisors, Prof. Claude Esling, Dr. Yudong Zhang at University of Metz, and Prof. Liang Zuo at Northeastern University, not only for their support, ideas, guidance and organizational help during the last three years, but also for making me a better person and scientist by setting high standards and good examples. I would also like to thank Prof. Xiang Zhao and Dr. Changshu He for encouraging, supporting and mentoring me during my PhD study. I would like to thank all my group members who treated me with dignity and respect. Finally, I want to thank my parents, friends and family, especially my wife Ms. Xiaoliang Li, for their continuous support and understanding.

With the method described above, we calculated the orientations of the austenite from the measured orientations of the martensitic variants, using the four classical ORs.
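A sketch of that back-calculation for one variant under one assumed OR (here Pitsch, using the plane/direction parallelism quoted above; the 1M lattice constants below are illustrative placeholders, not measured values):

```python
import numpy as np

def mono_basis(a, b, c, beta_deg):
    """Direct lattice basis (columns) with c // k, b // j, a in the i-O-k plane."""
    beta = np.radians(beta_deg)
    return np.array([[a*np.sin(beta), 0.0, 0.0],
                     [0.0,            b,   0.0],
                     [a*np.cos(beta), 0.0, c  ]])

def cart_dir(B, uvw):                    # lattice direction -> unit Cartesian vector
    v = B @ np.asarray(uvw, float)
    return v / np.linalg.norm(v)

def cart_normal(B, hkl):                 # plane normal via the reciprocal basis
    n = np.linalg.inv(B).T @ np.asarray(hkl, float)
    return n / np.linalg.norm(n)

def triad(n, d):                         # orthonormal frame from plane + direction
    d = d - (d @ n) * n
    d /= np.linalg.norm(d)
    return np.column_stack([d, np.cross(n, d), n])

B_M = mono_basis(4.22, 5.54, 4.20, 93.3)     # illustrative 1M cell only
B_A = np.eye(3)                              # cubic austenite (unit cell edges)

# Pitsch OR: (101)_A // (1 2 -1)_1M and [1 0 -1]_A // [-1 1 1]_1M
T_A = triad(cart_normal(B_A, [1, 0, 1]),  cart_dir(B_A, [1, 0, -1]))
T_M = triad(cart_normal(B_M, [1, 2, -1]), cart_dir(B_M, [-1, 1, 1]))
dG = T_M @ T_A.T                             # rotation: austenite -> martensite frame

g_M = np.eye(3)                              # a measured variant orientation (placeholder)
g_A = dG.T @ g_M                             # back-calculated austenite orientation
```

Repeating this for each variant of a group and each assumed OR, and comparing the resulting austenite orientations, gives the dispersion measure discussed in the text.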
To facilitate visualization, the calculated austenite orientations are shown in {001} pole figures in the macroscopic sample frame (Figure 6). If the assumed OR is indeed that of the transformation, the austenite {001} poles calculated from the different martensitic variants should fall into coincidence in the pole figure. Apparently, the K-S and Pitsch orientation relationships give rise to the best agreement among the corresponding {001} pole figures for a common austenite orientation.

Clarification of structure and stability of long-period modulated martensite
As one of the common characteristics of the beta-phase alloys [71], Heusler alloys [70,125] and piezoelectric materials [126], the transformation process from the parent austenite to the product martensite produces long-period lattice-modulated structures. The thermodynamic stability and the crystallographic nature of these modulated structures have been questioned [81], because the product martensite was found to have a non-modulated (NM) structure in several alloy systems [65,[127][128][129][130]. By analyzing the crystal lattice mismatch between the austenite and the NM martensite, Khachaturyan et al. [130,131] interpreted the modulated structure with a plate-like macroscopic shape as actually being the combination of twin-related variants of the normal NM martensite phase, wherein the twin domains have a lamellar shape on the order of a few atomic layers. The main argument for this structural model is that the geometrical requirement for an invariant habit plane between the parent and the product lattices can be fulfilled, which lowers the volume-dependent elastic energy. If the long-period modulated martensite is composed of nanotwin-related lamellae with a simple NM structure, the lattice constants of the modulated martensite phase should satisfy specific relations with those of the parent austenite and the NM martensite [130,131]. Such relations have been taken as the criteria for the verification of the nanotwin-combined structure (i.e. the adaptive phase). For instance, excellent fits between the measured and calculated lattice constants were found in Ni-Al, Fe-Pd and lead-based ferroelectric perovskites [130,131]. However, the validity of this concept is strongly questioned in lead titanate [126], because Raman spectroscopy measurements demonstrated that the modulated phase possesses an independent crystal structure. Recently, this contradiction arose in the newly developed Ni-Mn-Ga ferromagnetic shape memory alloys. Righi et al. [75] made a detailed study of the crystal structure of the so-called seven-layered (7M) modulated martensite by applying the superspace approach to the powder X-ray diffraction (XRD)
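The adaptive-phase criteria just mentioned can be tested directly against the lattice constants reported in Table 4.3 above. A minimal sketch of that check (values copied from the table, in the cubic parent coordinate system):

```python
# Adaptive-phase criteria (Khachaturyan et al.): a_ad = c_Tet + a_Tet - a_A,
# b_ad = a_A, c_ad = a_Tet. Measured 7M(IC) constants vs. the theoretical
# adaptive-phase constants, both taken from Table 4.3 (in angstroms).
measured_7M    = {"a": 6.082, "b": 5.823, "c": 5.537}
theoretical_ad = {"a": 6.186, "b": 5.811, "c": 5.486}

for axis in "abc":
    m, t = measured_7M[axis], theoretical_ad[axis]
    print(f"{axis}: 7M(IC) {m:.3f} A, adaptive {t:.3f} A, "
          f"deviation {100.0 * (m - t) / t:+.2f} %")
# Misfits up to ~1.7 % (largest on a) indicate that the measured modulated
# lattice is not fully reproduced by the nanotwin-combination construction.
```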
200,004
[ "778337" ]
[ "178323" ]
01749215
en
[ "spi" ]
2024/03/05 22:32:07
2012
https://hal.univ-lorraine.fr/tel-01749215/file/DDOC_T_2012_0049_SADIQ.pdf
ACKNOWLEDGEMENTS
First of all, I would like to thank Dr. Mohammed Cherkaoui for providing me an opportunity to work under his supervision. His support, guidance and motivation have always been a great inspiration during the entire work. I would like to thank Dr. Karim Inal, Dr. Olaf Van der Sluis, Dr. El Mostafa Daya, Dr. Raphael Pesci, Dr. Suriyakan Kleitz and Mme. Stephanie Blanc, who accepted my request to be part of my PhD committee. I strongly appreciate their suggestions and remarks. I would like to thank the whole techno-group of Schlumberger, Paris, for providing me an opportunity to work on such a challenging project. I would like to thank Jean-Sebastien Lecomte for his help in nanoindentation, Claude Guyomard for the die design, Olivier Naegelen for the sample casting and Marc Wary for the polishing and etching processes. I am also thankful to Abderrahim Nachit and Ammar for their help in the electrical characterization of the solder alloys. I am thankful to Dr. Min Pei for his help with the coarsening models and to David Macel for his support in wettability testing. My experience at University of Lorraine was made more pleasant by my colleagues Wei, Mathieu, Bhasker, Malek and Liaqat, Sajid, Rafiq, Armaghan, Aamer, Jawad, Khuda Bux, Mohsin and Fahd. I would like to thank all of them. I would like to offer special thanks to my brother Muhammad Arif for all his help and support in editing and organizing my thesis. I would like to thank the whole team of University of Lorraine who helped me in everything to make this period more interesting and enjoyable.

PILE-UP AREA CALCULATIONS
8.7 CREEP CHARACTERIZATIONS BY NANOINDENTATION

CHAPTER 9 WETTABILITY TESTING

Wettability of Solders
Wetting is the behavior of a liquid spreading over a solid surface. This behavior is very important for the interconnection of electronic packages, especially in soldering, for making a strong bond between different solid components. In order to attain successful soldering, a certain degree of wetting of the molten solder on the solid substrate surface is required. A wetting or solderability test is therefore generally used to measure:
- the initial wettability of the component termination materials;
- the wetting properties of a newly developed solder.
A proper metallurgical bond must always form before the wetting performance of any solder alloy can be analyzed. This bond, of course, varies for different substrates. An interfacial reaction takes place which forms a certain amount of IMCs or, in some cases, an intermetallic layer (IML) at the two interfaces. These IMCs or the IML act as an adhesion layer between the substrate and the solder and hence keep them firmly together. The wettability of any lead-free solder alloy varies with its chemical composition and the substrate. For SnPb, the wettability is good over many metallic substrates, including copper. Thus, in changing from SnPb to lead-free solders, wettability is an important concern. The substitute must have sufficient wettability under severe service conditions and for different soldering processes such as reflow and wave soldering.

Wettability Measurement Methods
There are mainly two methods for wettability measurement: the spread-area test (suited to reflow soldering) and the wetting balance test (suited to wave soldering). They are described below.

Spread-Area Test
A solder disc (e.g.
6 mm in diameter and 1 mm in thickness) is coated with flux and melted, and then allowed to solidify on a substrate (e.g. Cu). When a bond is formed, the free energy is reduced and hence the solder changes its shape [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]. This change in shape causes an increase in the contact area, which indicates the wetting behavior of the solder. In some cases, the ratio of the as-bonded area to this new area (after soldering) is taken as the wettability of the solder. The wetting is characterized by the wetting (or contact) angle, θc, and is calculated using the Young-Dupré equations (9.1) and (9.2) [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]:

$\gamma_{SF} = \gamma_{SL} + \gamma_{LF} \cos\theta_c$  (9.1)
$\cos\theta_c = (\gamma_{SF} - \gamma_{SL}) / \gamma_{LF}$  (9.2)

where γSF, γLF and γSL refer to the surface tensions of the substrate/flux, liquid (solder)/flux, and substrate/liquid (solder) interfaces, respectively, as shown in Figure 9.1 [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]. For good wetting, the contact or wetting angle should be small (i.e. γLF and γSL smaller and γSF larger). The spread-area test is a good approximation of the reflow soldering process. The solder disc in the spread-area test has a shape similar to the layer of solder paste deposited by screen printing or a similar process. Besides, the solder disc in the spread-area test and the solder paste in reflow soldering undergo the same heating process: they are heated above the melting point and then allowed to solidify.

Wetting Balance Test
The wetting balance test is the second important technique to evaluate solder wettability. In this method, a coupon (for example Cu) is dipped into the molten solder held in a crucible at a temperature above its melting point. The molten solder climbs up the Cu coupon due to the wetting force exerted on it, as shown in Figure 9.2 [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]. As the spread-area test is a good approximation of reflow soldering, the wetting balance test is a good approximation of wave soldering, in which the substrate is brought into contact with the molten solder. The height of the solder on the Cu coupon depends on many parameters, including the soldering temperature, the alloy composition and the wetting time. Generally, the wetting force is given by

$F = \gamma_{LF} \, p \cos\theta_c - \rho V g$  (9.3)

from which the contact angle follows as

$\cos\theta_c = (F + \rho V g) / (\gamma_{LF} \, p)$  (9.4)

where p is the perimeter of the coupon, γLF the surface tension of the solder in contact with the flux, θc the contact angle, ρ the density of the solder, g the acceleration due to gravity, and V the immersed volume. A typical wetting curve is shown in Figure 9.3, which represents different types of wetting for solders; the wetting time, tw, is the time at which the solder contact angle to the coupon is 90°, as shown at point B. Wetting occurring within a short wetting time is considered good. Hence, the wetting force and time of each solder system are revealed by the wetting balance test. The wettability of lead-free solders becomes crucial when high solder joint reliability is needed. As presented in equation (9.4), the contact angle is one of the most important parameters characterizing the solderability (wettability) of any lead-free solder. A smaller contact angle θc, and thus a higher cos θc, represents good wettability. Ideally, cos θc can reach 1, which makes θc equal to 0°, the best possible wettability.
But in practice it is somewhere in the range of 40 to 50°, as shown in Figure 9.4, depending on the temperature applied, the flux used and many other experimental parameters. On the other hand, a contact angle of more than 90° is not recommended for good joining. A schematic of a contact angle greater than 90° is provided in Figure 9.5. In [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF], the impact of RE elements, mainly Ce and La, on the wettability performance of SnPb, SnAg, SnCu and SAC has been studied. It has been reported that they can all be soldered successfully using a rosin mildly activated (RMA) flux. The wettability of SnPb solder is quite good in comparison to the SnAg, SnCu and SAC alloys. In any case, on copper substrates SAC alloys perform well, whereas RE dopings have shown significant improvements in the wetting performance of all SnAg, SnCu and SAC alloys [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]. In most cases, the addition of RE elements helps to reduce the wetting angle. It can be seen that the amount of RE elements needs to be optimized for the best wetting performance (Figure 9.6). An extensive experimental wettability study on the Sn-3.5Ag eutectic and Sn-3.5Ag-RE doped alloys was made by [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]. The wetting curves of SnAg, SnAg-0.25RE and SnAg-0.5RE are compared with that of SnPb in Figure 9.6. It can be seen that all four solders have roughly the same wetting time. However, the wetting force of SnPb is the highest among the four alloys. The effect of adding RE elements (mainly Ce and La) is clearly demonstrated by a significant increase in the wetting force of the Sn-3.5Ag-0.25RE alloy, which approaches that of the SnPb solder. However, an excessive amount of RE addition can lower the wetting force. In the same study, this effect also applied to the wetting angle: SnAg-0.25RE had the lowest wetting angle among SnAg, SnAg-0.25RE and SnAg-0.5RE.

Sample Preparation and Experimental Conditions
The equipment used for the wetting balance test was from METRONELEC, a French manufacturer of soldering equipment, as shown in Figure 9.7. The copper foils used as substrates were 0.025 mm thick and 12 mm wide. They were purchased from "Good Fellows", France, with a purity of 99.9%. The samples were cleaned with acetone and dried. They were then dipped into a 20% HNO3 solution for 20 seconds and placed in the flux for about 1 second. The flux used was rosin mildly activated (RMA), manufactured by "Metaux Blancs Ouvres" with fluxing code BC 310.15. The immersion depth into the crucibles, for the copper substrate, was 3 mm, as shown in Figure 9.8. The wetting time used was 10 s and the wetting temperatures applied were 250°C and 260°C. The contact angle, being directly linked to the surface tensions (equation 9.2), is representative of the wetting quality. For a smaller contact angle, and thus better wettability, γLV should be as small as possible. This is the main function performed by the added RE elements, which decrease the surface tension and thereby reduce the contact angle, as observed during the experiments. It is important to mention that the contact angle is only conceivable for alloys in the molten state.

Meniscography Principle
As described earlier, after immersing the Cu coupon into the molten solder bath, the surface tensions are highest at the solder/flux interface.
The measurement of the resultant force is representative of the meniscus and, consequently, of the wetting angle and of the solderability. The principle set forth above is described by equation (9.3), where ρ is the mass density of the solder alloys, provided in Table 9.1, P is the perimeter of the coupon immersed into the solder crucible and g is the acceleration due to gravity (9.81 m/s²). Thus F is the measured force from which the contact angle is deduced.

Wettability Measuring Parameters
Generally, there are three important parameters for wettability measurements: the surface tension, the wetting force and the contact (or wetting) angle. These parameters, with the required description and evaluation, are provided below.

Flux-Solder Surface Tension
In order to determine the wettability in terms of wetting force and contact angles, it is necessary to calculate the surface tension, γLV. The results for the surface tension are given in …

The degradation observed when RE is in excess (>0.25%) is mainly due to oxidation during wetting, owing to the strong affinity of RE to oxidize. Oxidation at the solder surface inhibits the molten solder from contacting the solid substrate, which usually causes non-wetting behavior. When the oxide is massive, or forms quickly, the flux cannot completely remove the oxides, and the wettability degrades. This has also been confirmed by the work performed by [START_REF] Zhou | Investigation on properties of Sn 8Zn 3Bi lead-free solder by Nd addition[END_REF], who showed (after TGA analysis) that 0.1% of Nd is the optimum amount, as the surface tension increases if the Nd content exceeds 0.1%, which finally increases the contact angle. Similarly, [START_REF] Shi | Effects of small amount addition of rare earth Er on microstructure and property of SnAgCu solder[END_REF] have suggested a maximum RE (Er) doping of 0.1-0.4%, and [Wang et al. 2002] have demonstrated an optimum RE (Ce) content of 0.25-0.5% in SnAg alloys. In the work performed by [START_REF] Noh | Effects of cerium content on wettability, microstructure and mechanical properties of Sn Ag Ce solder alloys[END_REF], it is stated that RE is liable to oxidation [START_REF] Huang | Creep behavior of eutectic Sn-Ag lead-free solder alloy[END_REF] due to its strong affinity for oxygen, and 0.3% of Ce is suggested as the optimum doping in SAC alloys.

Summary
Extensive work has been performed on the wettability testing of SAC and SAC-La doped alloys at two different temperatures, 250°C and 260°C. It was noticed that increasing temperature gives better wettability for the same composition. This is attributed to the decrease of surface tension at elevated temperatures. For the SAC-La doped alloys, the surface tension decreased with RE doping at both 250°C and 260°C compared with the SAC alloy. RE doping further increases the wetting force from 5.7 mN (SAC) up to 6.7 mN (SAC-0.25La). SAC-0.5La has a smaller wetting force than SAC-0.25La, which means the RE content has to be optimized for the best alloy composition. Wetting (contact) angle measurements were made on the basis of the surface tensions already evaluated. An appreciable decrease in the contact angles was observed, and once again SAC-0.25La has a better (smaller) contact angle than the SAC and SAC-0.5La alloys.
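A minimal sketch of the contact-angle extraction described in this chapter, inverting equations (9.3)/(9.4). The coupon geometry, the SAC densities (Table 9.1) and the wetting forces (Summary above) are taken from the text, while the surface tension γLF below is an illustrative placeholder, not a measured value:

```python
import math

def contact_angle_deg(F, gamma_LF, p, rho, V):
    """Invert F = gamma_LF * p * cos(theta_c) - rho * g * V  (Eq. 9.3).
    Units: F [mN], gamma_LF [mN/mm = N/m], p [mm], rho [mg/mm^3], V [mm^3]."""
    g = 9.81e-3                           # converts mg * (m/s^2) into mN
    cos_t = (F + rho * g * V) / (gamma_LF * p)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Cu coupon: 12 mm wide, 0.025 mm thick, immersed 3 mm deep
p = 2 * (12.0 + 0.025)                    # immersed perimeter [mm]
V = 12.0 * 0.025 * 3.0                    # immersed volume [mm^3]

# densities from Table 9.1; wetting forces from the Summary; gamma_LF assumed
print(contact_angle_deg(F=5.7, gamma_LF=0.55, p=p, rho=7.45, V=V))   # SAC
print(contact_angle_deg(F=6.7, gamma_LF=0.55, p=p, rho=7.46, V=V))   # SAC-0.25La
```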
I am extremely thankful to Dr. Raphael Pesci for all his contributions, support and motivation during the entire experimental work. His attitude gave me the real motivation to work in a friendly environment. I am thankful to Dr. El-Mostafa Daya and Dr. Jean-Marc Raulot from LEM3, University of Lorraine, France, for their enriching discussions and valuable remarks.

Figure 9.1: Schematic diagram of the spread-area test
Figure 9.2: Coupon geometry for the wetting balance test on a molten solder alloy [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]
Figure 9.3: Variation of wetting force with time [METRONELEC]
Figure 9.5: Schematic of a contact angle greater than 90°
Figure 9.6: Wetting curves of Sn-3.5Ag-RE solders compared with Sn-Pb solder [START_REF] Wu | Properties of lead-free solder alloys with rare earth element additions[END_REF]
Figure 9.7: METRONELEC wettability setup
Figures 9.10-9.16: Surface tension vs. wetting time for the different compositions at the different temperatures; good reproducibility was obtained
Figure 9.10: SAC surface tension vs. wetting time at 250°C
Figures 9.17-9.20: Wetting force vs. wetting time; the average values with 5% error bars are given in Figure 9.21
Figure 9.17: SAC wetting force vs. wetting time at 250°C
Figure 9.22: SAC contact angle vs. wetting time at 250°C

The presence of oxides is also confirmed by our EDS mapping on the free surface of unpolished specimens.

$\gamma_{SF} = \gamma_{SL} + \gamma_{LF} \cos\theta_c$  (9.1)
$\cos\theta_c = (\gamma_{SF} - \gamma_{SL}) / \gamma_{LF}$  (9.2)

Table 9.1: Densities of SAC and SAC-La alloys

Alloy | Density (mg/mm³)
SAC | 7.45
SAC-0.05 | 7.453
SAC-0.25 | 7.46
SAC-0.5 | 7.48

CHAPTER 6 MECHANICAL PROPERTIES
16,240
[ "770552" ]
[ "178323" ]
01749219
en
[ "spi" ]
2024/03/05 22:32:07
2012
https://hal.univ-lorraine.fr/tel-01749219/file/DDOC_T_2012_0052_ZHANG.pdf
Keywords: Magnetic Field, Phase Transformation, Magnetic Dipolar Interaction, Texture, Orientation Relationship

In this work, the influence of the magnetic field on diffusional phase transformation in high-purity Fe-C alloys has been investigated theoretically and experimentally. The magnetic-field-induced microstructural features and crystallographic orientation characteristics have been thoroughly studied in three alloys with different carbon contents: Fe-0.12C, Fe-0.36C and Fe-1.1C. The magnetic field induces different aligned and elongated microstructures along the field direction, namely aligned and elongated pearlite colonies in the Fe-0.12C alloy and elongated proeutectoid ferrite grains in the Fe-0.36C alloy, due to the two-scale magnetic dipolar interaction. The magnetic field increases the amount of ferrite in hypoeutectoid alloys, and this field effect becomes more pronounced as the carbon content increases. The magnetic field inhibits the formation of Widmanstätten ferrite by introducing an additional driving force for the ferritic transformation, thereby reducing the need for the low-energy interfaces required to overcome the transformation barriers during slow cooling. The magnetic field promotes the formation of the abnormal structure by increasing the driving force of the transformation from carbon-depleted austenite to ferrite, and it enhances the spheroidization of pearlite through its influence in accelerating carbon diffusion (a consequence of the increased transformation temperature), together with its effect in increasing the relative ferrite/cementite interface energy. The field-induced enhancement of carbon solubility in ferrite is evidenced through WDS-EPMA measurements for the first time. Ab-initio calculations reveal that the presence of an interstitial carbon atom in bcc Fe modifies the magnetic moments of its neighboring Fe atoms. This leads to a decrease of the demagnetization energy of the system and makes the system energetically more stable under the magnetic field. Due to the atomic-scale magnetic dipolar interaction, the magnetic field favors the nucleation and growth of the ferrite grains with their distorted <001> direction parallel to the transverse field direction, and thus induces the enhancement of the <001> fiber component in the transverse field direction. This field effect is related to the crystal lattice distortion induced by carbon in solution, and its impact becomes stronger with increasing carbon content and field intensity. Three ORs between pearlitic ferrite and cementite have been found in the present work, namely the Isaichev (IS) OR and two close Pitsch-Petch (P-P) ORs. The magnetic field hardly changes the types of ORs that appear, but it considerably increases the occurrence frequency of the P-P2 OR, especially in the Fe-1.1C alloy, by favoring the nucleation of ferrite.

Résumé
In this work, the influence of the magnetic field on diffusional phase transformation in high-purity Fe-C alloys has been studied theoretically and experimentally. The field-induced microstructural and crystallographic orientation characteristics have been carefully studied in three Fe-C alloys with different carbon contents, namely Fe-0.12C, Fe-0.36C and Fe-1.1C.
The magnetic field induces different aligned and elongated microstructures along the field direction, namely aligned and elongated pearlite colonies in the Fe-0.12C alloy and elongated proeutectoid ferrite grains in the Fe-0.36C alloy, owing to the magnetic dipolar interaction at two different scales. The magnetic field increases the amount of ferrite in hypoeutectoid alloys, and this field effect is more pronounced with increasing carbon content. The magnetic field inhibits the formation of Widmanstätten ferrite by introducing an additional driving force for the ferritic transformation, thus reducing the need for the low-energy interfaces required to overcome the transformation barriers during slow cooling. The magnetic field promotes the formation of the abnormal structure by increasing the driving force of the transformation from carbon-depleted austenite to ferrite, and it enhances the spheroidization of pearlite owing to its influence in accelerating carbon diffusion, a consequence of the increased transformation temperature, together with its effect in increasing the relative ferrite/cementite interface energy. The field-induced increase of carbon solubility in ferrite is evidenced through WDS-EPMA measurements for the first time. Ab-initio calculations show that the presence of an interstitial carbon atom in bcc Fe modifies the magnetic moments of the neighboring Fe atoms. This leads to a decrease of the demagnetization energy of the system and makes the system energetically more stable in the magnetic field. Owing to the atomic-scale magnetic dipolar interaction, the magnetic field favors the nucleation and growth of ferrite grains having their distorted <001> direction parallel to the transverse field direction, and thus induces the enhancement of the <001> fiber component in the direction transverse to the field. This field effect is related to the crystal lattice distortion induced by carbon in solution, and its impact becomes stronger with increasing carbon content and field intensity. Three orientation relationships (ORs) between pearlitic ferrite and cementite have been found in this work, namely the Isaichev (IS) OR and two ORs close to the Pitsch-Petch (P-P) OR. The magnetic field hardly changes the types of ORs that appear, but it considerably increases the occurrence frequency of the P-P2 OR, particularly in the Fe-1.1C alloy, by favoring the nucleation of ferrite.

General introduction
The earliest known descriptions of magnets and their properties date back around 2500 years, to Greece, India and China, when people first noticed lodestones and their affinity for iron. Since then, mankind has gone through a long journey: from curiosity about their magnetic power, through attempts to uncover its physical essence, to a full understanding and even the establishment of theories that put it to use in every aspect of daily life and scientific research. Magnetic fields are considered a powerful tool for studying the properties of matter, because they couple directly to the electronic charge and to the magnetic moments of the protons, neutrons and electrons of which matter is made. In fact, all matter is magnetic, but some materials are much more magnetic than others.
The origin of magnetism lies in the orbital and spin motions of electrons and in the way the electrons interact with one another. The magnetic behavior of materials can be classified into five major groups: diamagnetism, paramagnetism, ferromagnetism, ferrimagnetism and antiferromagnetism (the ordering of magnetic moments in these five classes is shown in Figure 1.1). Diamagnetic substances are composed of atoms which have no net magnetic moments: all the orbital shells are filled and there are no unpaired electrons. Many everyday materials, e.g. water, wood, glass, and many elements, e.g. H2, Ar, Cu, Pb, are diamagnetic. When a diamagnetic material is exposed to an external field, it becomes magnetized: a negative magnetization is produced. If the field is removed, the magnetization of the diamagnetic material vanishes. In paramagnetic materials, some of the atoms or ions have a net magnetic moment due to unpaired electrons in partially filled orbitals. The individual magnetic moments do not interact magnetically, so the total magnetization is zero without a field. In the presence of a field, there is a partial alignment of the atomic magnetic moments in the direction of the field, resulting in a net positive magnetization and a positive susceptibility. In addition, the efficiency of the field in aligning the moments is opposed by the randomizing effect of thermal agitation. This results in a temperature-dependent susceptibility, known as the Curie Law. Unlike in paramagnetic materials, the atomic moments in ferromagnetic materials exhibit very strong interactions. These interactions are produced by electronic exchange forces and result in a parallel or antiparallel alignment of atomic moments. Ferromagnetic materials exhibit parallel alignment of moments, resulting in a large net magnetization even in the absence of a magnetic field. Even though the electronic exchange forces in ferromagnets are very large, thermal energy eventually overcomes the exchange and produces a randomizing effect. This occurs at a particular temperature called the Curie temperature (Tc). Below Tc, the ferromagnet is ordered; above it, disordered. The saturation magnetization goes to zero at the Curie temperature. In ferrimagnetic materials, the magnetic moments of the atoms on different sublattices are antiparallel. However, the opposing moments are unequal and a net spontaneous magnetization remains. If the opposing moments are exactly equal, the net moment is zero; this type of magnetic ordering is called antiferromagnetism.

Figure 1.1 Magnetic ordering in diamagnetic, paramagnetic, ferromagnetic, ferrimagnetic and antiferromagnetic materials.

Under a magnetic field, the response of a material depends on its magnetic behavior; this underlies the essence of magnetic field effects in materials science. With the development of magnetic field theories and techniques, the application of high magnetic fields has been greatly encouraged. A high magnetic field, as a clean, non-contacting and powerful source of energy, can affect atomic behavior, such as atom arrangement, matching and migration, and can thus exert a powerful influence on the microstructures and properties of materials. Consequently, more and more meaningful results and new phenomena, together with economic benefits, have been obtained, which in turn inspires a stronger desire to conduct broader and deeper research.
Recently, magnetic fields have been introduced into the study of solid phase transformations of metallic materials, especially Fe-C based alloys, for the purpose of microstructure control. As is well known, steels are the most widely used materials, relevant to every industry and to daily life. Any small improvement in the properties of Fe-C alloys means great progress and a huge amount of interest. Together with a bright perspective based on mature knowledge, the study of phase transformations of Fe-C based alloys under high magnetic field has attracted special attention from all over the world and will remain a popular topic in the coming years.

Basic mechanisms of the magnetic field influence on solid phase transformations in steels
Solid phase transformations in steels often refer to the decomposition of austenite, which is usually classified into three groups from the viewpoint of atom movement: diffusionless, semi-diffusional and diffusional transformations. The diffusionless phase transformation is the martensitic transformation, which occurs without long-range diffusion of either Fe or C atoms; the semi-diffusional transformation refers to the bainitic transformation, in which only C atoms diffuse; the diffusional transformations usually correspond to the pearlitic and ferritic transformations, in which both Fe and C atoms can undergo long-range diffusion. The phases involved in these solid phase transformations have different magnetic properties. There are three equilibrium phases of great engineering importance, located within different carbon composition and temperature ranges: the high-temperature austenite, and ferrite and cementite, which are the basic constituents of the products of austenite decomposition. The high-temperature phase austenite is paramagnetic; ferrite is ferromagnetic below ~770°C; and cementite is paramagnetic at its formation temperature but becomes ferromagnetic below ~210°C. Due to this inherent magnetic difference between the parent and product phases, the transformation process and the resultant microstructure can be modified by an external magnetic field: the field modifies the energy terms through its magnetization effect during the phase transformation, and thus affects the transformation both thermodynamically and kinetically.

Take the phase transformation from austenite to ferrite as an example. When an external magnetic field is applied, the phases are magnetized and their Gibbs free energies are changed by the introduction of a so-called intrinsic magnetization energy. This magnetization energy, also denoted the "magnetic Gibbs free energy", can be written as

$-\mu_0 V \int_0^M H_0 \, dM$

where V is the volume of the phase, μ0 the permeability of free space (vacuum), H0 the magnetic field strength in free space and M the induced magnetization per unit volume. Therefore, for a transformation from austenite to ferrite in the presence of a magnetic field, the total Gibbs free energy change $\Delta G_{\gamma\to\alpha}^{M}$ contains two terms, the "chemical Gibbs free energy difference" $\Delta G_C$ and the "magnetic Gibbs free energy difference" $\Delta G_M$, as follows:

$\Delta G_{\gamma\to\alpha}^{M} = \Delta G_C + \Delta G_M$  (1.1)

$\Delta G_M = -\mu_0 \int_0^{M_\alpha} H_0 \, dM - \left( -\mu_0 \int_0^{M_\gamma} H_0 \, dM \right) = -\mu_0 \left( \int_0^{M_\alpha} H_0 \, dM - \int_0^{M_\gamma} H_0 \, dM \right)$  (1.2)

where the superscripts and subscripts α and γ denote ferrite and austenite, and M the magnetization.
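A numerical sketch of Eq. (1.2); the M(H) laws below are toy placeholders standing in for measured magnetization curves of the two phases at the transformation temperature (all numbers illustrative):

```python
import numpy as np

mu0 = 4e-7 * np.pi                       # vacuum permeability [T m/A]
H = np.linspace(0.0, 12.0 / mu0, 4000)   # field strengths up to ~12 T

M_alpha = 1.4e6 * np.tanh(H / 2.0e5)     # saturating law, ferromagnetic ferrite [A/m]
M_gamma = 5.0e-3 * H                     # linear law, paramagnetic austenite [A/m]

# Eq. (1.2): Delta G_M = -mu0 * ( int H dM_alpha - int H dM_gamma ),
# evaluated per unit volume via dM = (dM/dH) dH
int_alpha = np.trapz(H * np.gradient(M_alpha, H), H)
int_gamma = np.trapz(H * np.gradient(M_gamma, H), H)
dG_M = -mu0 * (int_alpha - int_gamma)    # [J/m^3]; negative: ferrite is stabilized
print(dG_M)
```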
As the magnetization of ferrite is higher than that of austenite at all the transformation temperatures concerned, the magnetic Gibbs free energy difference is negative and the equilibrium temperature is shifted from T0 to T0M, as schematized in Figure 1.2.

Figure 1.2 Schematics of the Gibbs free energy of γ and α without and with a magnetic field (γ: austenite; α: ferrite; M: magnetic field).

The Gibbs free energy difference between the parent and product phases is also denoted the phase transformation driving force, which governs the nucleation and growth of the product phases; thus the magnetic field affects not only the transformation thermodynamics but also the transformation kinetics, for example by modifying the transformation rate. In addition to the thermodynamic and kinetic effects, there also exist magnetization and demagnetization effects of the magnetic field, resulting from the strong magnetic interaction among the magnetic moments of Fe atoms placed in a magnetic field. This is called the magnetic dipolar interaction. Each Fe atom can be treated as a magnetic dipole. When an external magnetic field is applied, the atomic moments tend to align along the field direction, as schematized in Figure 1.3. There then exists a dipolar interaction between neighboring dipoles: they attract each other along the field direction (magnetization) but repel each other in the transverse direction (demagnetization). This dipolar interaction has mainly a microstructural effect, as it acts on the nucleation and growth of ferrite during the transformation from austenite to ferrite, especially when ferrite is ferromagnetic with a high magnetization. By minimizing the demagnetization energy (the repulsion among magnetic moments), it may bring about grain shape anisotropy through preferential grain growth in the field direction, or crystallographic texture through selective grain nucleation or growth.

Solid phase transformations in steels under magnetic field
As mentioned above, since the magnetic natures of the parent and product phases involved in solid phase transformations in steels are usually different, the transformation process is expected to be modified by an applied magnetic field, and the field influence is of great importance to investigate. In the 1960s, studies of the effects of magnetic fields on austenite decomposition began with the diffusionless martensitic transformation. For the martensitic transformation, the transformation temperature is relatively low, and the transformation could at first only be studied under low magnetic fields. Once superconducting techniques matured, direct-current high magnetic fields became available. From then on, a number of high-temperature phase transformations under magnetic fields were studied, including semi-diffusional and diffusional transformations. Although studies of diffusional phase transformations under magnetic fields had a relatively late start, the results on this topic are far more abundant than for the other two.

Diffusionless transformation: martensitic transformation
The martensitic transformation is the representative diffusionless transformation in ferrous alloys, in which austenite with a face-centered cubic crystal structure transforms into carbon-oversaturated martensite with a basically body-centered cubic or body-centered tetragonal crystal structure, depending on the degree of carbon oversaturation. In the martensitic transformation, the parent austenite is paramagnetic, whereas the martensite is ferromagnetic.
The induced magnetization of martensite is much higher than that of austenite. When a magnetic field is applied, the Gibbs free energy of austenite does not change much, but that of martensite is greatly lowered. Thus, the phase equilibrium between the parent and product phases is changed by the magnetic field. For many years, investigations of the martensitic transformation under magnetic field focused on the effect of the field in raising the Ms temperature and enhancing the amount of transformed martensite. In the early 1960s, studies on the athermal martensitic transformation reported that a magnetic field promotes the martensitic transformation and increases the Ms temperature in ferrous alloys. Based on their thermodynamic analysis, Krivoglaz et al. [1] suggested that the effect of the magnetic field on martensitic transformations is due only to the magnetostatic (Zeeman) energy. They proposed the following formula to estimate the shift of Ms:

$\delta T_0 = \delta M \cdot H \cdot T_0 / q$  (1.3)

where δM is the magnetization difference between the product and the parent phases, H the strength of the applied magnetic field, δV the volume change between the two phases and q the latent heat of transformation. The calculated shifts of Ms for Fe-Ni alloys fitted the experimental results well [2,3]. Based on this, Kakeshita et al. [4][5][6][START_REF] Kakeshita | Magnetoelastic martensitic transformation in an ausaged Fe-Ni-Co-Ti alloy[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformations in Fe-Ni-C invar and non-invar alloys[END_REF][START_REF] Kakeshita | Magnetic Field-Induced Martensitic Transformation in Single Crystals of Fe-31.6 at%Ni Alloy[END_REF][START_REF] Kakeshita | Magnetic field-induced transformation from paramagnetic austenite to ferromagnetic martensite in an Fe-3.9Mn-5.0C (at%) alloy[END_REF][START_REF] Kakeshita | Effect of magnetic fields on martensitic transformations in alloys with a paramagnetic to antiferromagnetic transition in the austenitic state[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformations in a few ferrous alloys[END_REF][START_REF] Kakeshita | A new model explainable for both the athermal and isothermal natures of martensitic transformations in Fe-Ni-Mn alloys[END_REF][START_REF] Kakeshita | Effect of magnetic fields on athermal and isothermal martensitic transformations in Fe-Ni-Mn alloys[END_REF][START_REF] Kakeshita | Effects of static magnetic field and hydrostatic pressure on the isothermal martensitic transformation in a Fe-Ni-Mn alloy[END_REF][START_REF] Kakeshita | Effects of magnetic field and hydrostatic pressure on martensitic transformation process in some ferrous alloys[END_REF][START_REF] Kakeshita | Effect of magnetic field and hydrostatic pressure on martensitic transformation and its kinetics[END_REF][START_REF] Kakeshita | Martensitic transformations in some ferrous and non-ferrous alloys under magnetic field and hydrostatic pressure[END_REF][START_REF] Kakeshita | Effects of hydrostatic pressure and magnetic field on martensitic transformations[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformation and giant Magnetostriction in Fe-Ni-Co-Ti and ordered Fe 3 Pt shape memory alloys[END_REF][START_REF] Kakeshita | Time-dependent nature of displacive transformations in Fe-Ni and Fe-Ni-Mn alloys under magnetic field and hydrostatic pressure[END_REF] carried out more systematic and deeper studies on this issue.
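Before turning to those refinements, Eq. (1.3) already fixes the order of magnitude of the field-induced Ms shift. A rough numerical sketch (all inputs illustrative; μ0 is inserted here to express the Zeeman term in SI units, which is an assumption about the unit system of Eq. (1.3)):

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability [T m/A]
delta_M = 1.0e6             # magnetization difference martensite - austenite [A/m]
H = 10.0 / mu0              # field strength corresponding to a 10 T induction [A/m]
T0 = 500.0                  # equilibrium temperature [K]
q = 1.0e8                   # latent heat of transformation per unit volume [J/m^3]

delta_T = mu0 * delta_M * H * T0 / q     # Eq. (1.3) written in SI form
print(delta_T)              # ~50 K: tens of kelvins for a 10 T field
```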
Kakeshita and co-workers used pulsed magnetic fields to induce larger increases in Ms and noticed that a critical magnetic field strength exists for effectively inducing the martensitic transformation at temperatures above Ms; the higher the temperature, the stronger this critical field. They also found that the experimentally measured increases in the Ms temperature of Fe-Pt ordered and disordered alloys [5,[START_REF] Kakeshita | Magnetic field-induced martensitic transformation and giant Magnetostriction in Fe-Ni-Co-Ti and ordered Fe 3 Pt shape memory alloys[END_REF], Fe-Ni-Co-Ti thermo-elastic alloys [START_REF] Kakeshita | Magnetoelastic martensitic transformation in an ausaged Fe-Ni-Co-Ti alloy[END_REF] and Fe-Ni alloys [6,[START_REF] Kakeshita | Magnetic Field-Induced Martensitic Transformation in Single Crystals of Fe-31.6 at%Ni Alloy[END_REF] were not in agreement with the results calculated with Krivoglaz's formula (Eq. 1.3). They therefore proposed a more accurate expression for the magnetic Gibbs free energy change, identifying the effects that Krivoglaz [1] could not detect under low fields as the high-field susceptibility and forced volume magnetostriction effects, and gave the following new estimation formula for the increase of Ms [START_REF] Kakeshita | Effect of magnetic field and hydrostatic pressure on martensitic transformation and its kinetics[END_REF]:

$$\Delta G(M_s) - \Delta G(M_s') = -\Delta M(M_s')\cdot H_c - \frac{1}{2}\,\chi_{hf}\,H_c^{2} + \varepsilon_0\left(\frac{\partial w}{\partial H}\right) H_c\, B \qquad (1.4)$$

where Hc is the critical magnetic field at the temperature Ms'; the three terms on the right-hand side represent, respectively, the magnetostatic (Zeeman) energy, the high-field susceptibility energy and the forced volume magnetostriction energy. The critical magnetic fields Hc have been calculated for Fe-Pt alloys [5], Invar Fe-Ni [6], non-Invar Fe-Ni-C [START_REF] Kakeshita | Magnetic field-induced martensitic transformations in Fe-Ni-C invar and non-invar alloys[END_REF] and Invar Fe-Mn-C alloys [START_REF] Kakeshita | Magnetic field-induced transformation from paramagnetic austenite to ferromagnetic martensite in an Fe-3.9Mn-5.0C (at%) alloy[END_REF], and are in good agreement with the experimental values. The authors also clarified the influence of composition [6], grain boundaries [START_REF] Kakeshita | Magnetic Field-Induced Martensitic Transformation in Single Crystals of Fe-31.6 at%Ni Alloy[END_REF], crystal orientation [START_REF] Kakeshita | Magnetic field-induced martensitic transformations in Fe-Ni-C invar and non-invar alloys[END_REF][START_REF] Kakeshita | Magnetic Field-Induced Martensitic Transformation in Single Crystals of Fe-31.6 at%Ni Alloy[END_REF], Invar characteristics [START_REF] Kakeshita | Magnetic field-induced martensitic transformations in Fe-Ni-C invar and non-invar alloys[END_REF], thermo-elastic nature [START_REF] Kakeshita | Magnetoelastic martensitic transformation in an ausaged Fe-Ni-Co-Ti alloy[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformation and giant Magnetostriction in Fe-Ni-Co-Ti and ordered Fe 3 Pt shape memory alloys[END_REF] and austenitic magnetism [START_REF] Kakeshita | Effect of magnetic fields on athermal and isothermal martensitic transformations in Fe-Ni-Mn alloys[END_REF] on the magnetic field-induced martensitic transformations. The results showed that Eq. (1.4) holds well under all these conditions; the validity of the newly derived equation is thus quantitatively verified. As for the influence of the magnetic field on the kinetics of the martensitic transformation, Kakeshita et al.
[START_REF] Kakeshita | A new model explainable for both the athermal and isothermal natures of martensitic transformations in Fe-Ni-Mn alloys[END_REF][START_REF] Kakeshita | Effect of magnetic fields on athermal and isothermal martensitic transformations in Fe-Ni-Mn alloys[END_REF] conducted their investigation on Fe-Ni-Mn alloys. Their results suggest that, under high magnetic fields, the originally isothermal kinetics of the martensitic transformation can be changed into athermal kinetics, and that the kinetics in both cases can be evaluated with the same equation. They also suggested [START_REF] Kakeshita | Effects of static magnetic field and hydrostatic pressure on the isothermal martensitic transformation in a Fe-Ni-Mn alloy[END_REF] that the nose temperature shifts and the incubation time increases with increasing magnetic field. The martensite morphologies under magnetic field have also been studied in a variety of Fe-based alloys. Many studies showed that the structure of the field-induced martensite does not differ much from that formed without field [4][5][6][START_REF] Kakeshita | Magnetoelastic martensitic transformation in an ausaged Fe-Ni-Co-Ti alloy[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformations in Fe-Ni-C invar and non-invar alloys[END_REF][START_REF] Kakeshita | Magnetic Field-Induced Martensitic Transformation in Single Crystals of Fe-31.6 at%Ni Alloy[END_REF][START_REF] Kakeshita | Magnetic field-induced transformation from paramagnetic austenite to ferromagnetic martensite in an Fe-3.9Mn-5.0C (at%) alloy[END_REF][START_REF] Kakeshita | Effect of magnetic fields on martensitic transformations in alloys with a paramagnetic to antiferromagnetic transition in the austenitic state[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformations in a few ferrous alloys[END_REF][START_REF] Kakeshita | A new model explainable for both the athermal and isothermal natures of martensitic transformations in Fe-Ni-Mn alloys[END_REF][START_REF] Kakeshita | Effect of magnetic fields on athermal and isothermal martensitic transformations in Fe-Ni-Mn alloys[END_REF][START_REF] Kakeshita | Effects of static magnetic field and hydrostatic pressure on the isothermal martensitic transformation in a Fe-Ni-Mn alloy[END_REF][START_REF] Kakeshita | Effects of magnetic field and hydrostatic pressure on martensitic transformation process in some ferrous alloys[END_REF][START_REF] Kakeshita | Effect of magnetic field and hydrostatic pressure on martensitic transformation and its kinetics[END_REF][START_REF] Kakeshita | Martensitic transformations in some ferrous and non-ferrous alloys under magnetic field and hydrostatic pressure[END_REF][START_REF] Kakeshita | Effects of hydrostatic pressure and magnetic field on martensitic transformations[END_REF][START_REF] Kakeshita | Magnetic field-induced martensitic transformation and giant Magnetostriction in Fe-Ni-Co-Ti and ordered Fe 3 Pt shape memory alloys[END_REF][START_REF] Kakeshita | Time-dependent nature of displacive transformations in Fe-Ni and Fe-Ni-Mn alloys under magnetic field and hydrostatic pressure[END_REF], whereas the amount of martensite is increased by the magnetic field [6,[START_REF] Kakeshita | Magnetic field-induced martensitic transformations in a few ferrous alloys[END_REF]. This increase in the amount of martensite is very meaningful and important, as it results in further strengthening, martensite being a hard phase whereas austenite is a soft one.
Semi-diffusional transformation - Bainitic transformation

The study of the effects of magnetic fields on the bainitic transformation had a relatively late start, and only a few investigations have been carried out. Here too the parent austenite is paramagnetic while the product bainite, like martensite, is ferromagnetic; in this respect the effect of the magnetic field on the bainitic transformation is similar to that on the martensitic transformation, in both its thermodynamic and kinetic aspects. Grishin [START_REF] Grishin | Structure and properties of structural steels after isothermal treatment in a magnetic field[END_REF] investigated the transformation from austenite to bainite under magnetic field in three different structural steels and found that the bainitic transformation is accelerated by the field. The acceleration was also observed by microstructural examination in a Fe-0.52C-0.24Si-0.84Mn-1.76Ni-1.27Cr-0.35Mo-0.13V alloy: as shown in Figure 1.4 [START_REF] Bhadeshia | Bainite in Steels[END_REF], the amount of transformed bainite (black) was greatly increased under a 10 T magnetic field. Later, the effects of magnetic fields on the transformation temperature, the transformation behavior and the transformed microstructure were investigated for the bainitic transformation in a Fe-3.6Ni-1.45Cr-0.5C steel [START_REF] Ohtsuka | Effects of a high magnetic field on bainitic transformation in Febased alloys[END_REF][START_REF] Ohtsuka | Effects of strong magnetic fields on bainitic transformation[END_REF]. It was found that the Bs temperature increases with increasing applied magnetic field and that the bainitic transformation is accelerated by the field. Although elongated and aligned microstructures were observed for the austenite-to-ferrite transformation in a Fe-0.4C alloy, no elongation or alignment of the transformed structure was observed for the transformations to bainite and lath martensite.

Diffusion-controlled phase transformations under magnetic field

Compared with the diffusionless and semi-diffusional transformations, the diffusion-controlled phase transformations in steels are more complicated and intricate, owing to the variety of proeutectoid phases arising with carbon composition. As diffusional phase transformations usually take place in a relatively high temperature range, experimental studies are difficult to conduct, especially when high magnetic fields are hard to obtain; studies on this subject therefore began with theoretical simulation calculations. Only in the 1980s were magnetic fields first applied to diffusion-controlled transformations. Later, with the development of magnetic field technology, more experiments on diffusion-controlled transformations under magnetic field were carried out on various Fe-based alloys under the guidance of the calculated results. Since then, more and more new phenomena have been uncovered and verified, and corresponding theories have been explored for a better understanding of the mechanisms by which the magnetic field acts. Unlike the martensitic and bainitic transformations, for which the field effects stem largely from energy considerations and act on the transformation thermodynamics and kinetics but little on morphology, the influence of the magnetic field on diffusional phase transformations is broader and more complicated: it also involves the magnetic dipolar interaction, and thus produces effects from both the crystallographic and the morphological points of view.
New Fe-C phase equilibrium under the magnetic field

Attempts to understand the Fe-C solid phase transformations under magnetic field started with the field effects on phase equilibria. Even in the early days, when experimental studies of the effects of magnetic fields on diffusion-controlled solid phase transformations in steel were not feasible, thermodynamic calculations were carried out to explore the field effects, and the calculated results were later used to guide experimental studies. The influence of the magnetic field on phase equilibrium is expressed first as a shift of the phase transformation temperature. In 1965, Sadovskii et al. [START_REF] Sadovskii | Magnetic field and phase transformations in steel[END_REF] noted that with increasing intensity of the magnetic field, the amount of the magnetic phase formed increases and the range of existence of that phase extends with the degree of magnetization, in a way similar to the effect of pressure. The change in the phase transformation temperature under the influence of the magnetic field is expressed by the equation

$$\frac{dT}{dH} = \frac{\Delta J\, T_0}{q} \qquad (1.5)$$

where H is the intensity of the field, ΔJ the difference in magnetization between the phases involved in the transformation, T0 the phase equilibrium temperature and q the heat of transformation. Equation (1.5) is in fact a magnetic analogue of the Clausius-Clapeyron relation: it follows from differentiating the equilibrium condition G_γ = G_α along the phase boundary, with dG = -S dT - J dH per unit volume. Later, Ghosh et al. [START_REF] Ghosh | Phase tranformation of steel in magnetic field[END_REF] investigated the change of the TTT diagram during the austenite-to-ferrite transformation using a similar equation. Their analysis did not take the magnetization of austenite into account and is thus only suitable for low magnetic fields; when the field is strong, neglecting the magnetization of austenite deviates considerably from the real situation, let alone the influence of composition changes. Although valid only over a limited range, it was a very instructive calculation of the effects of magnetic field on the solid phase equilibria of iron-based alloys. Studies of the phase diagram are meaningful and necessary because of their guiding role in designing heat treatment schedules and developing new materials. The basic idea in constructing a new Fe-C phase diagram is to determine the new phase equilibrium temperatures and compositions under the external magnetic field; to this end, the field-induced change in the Gibbs free energy of the phases concerned has to be evaluated using an appropriate model and magnetic parameters. From the late 1990s, studies of the Fe-C phase diagram under magnetic field were launched by researchers from Korea and Japan. Since the absence of high-field magnetic susceptibility data around Tc makes it impossible to construct the Fe-C phase diagram under high magnetic field from experimental data, they resorted to molecular field theory, which provides a theoretical means of estimating the high-field magnetization in this temperature region. The early studies were therefore based on the Weiss molecular field theory together with the Curie-Weiss law to evaluate the field-induced change in the Gibbs free energy of the individual phases. Meanwhile, experimental measurements remained important and necessary to verify the validity of the calculated results.
To this end, experimental measurements of the phase transformation temperatures and examination of the transformed microstructures have served as powerful and direct methods of comparison. The first comprehensive prediction of the new Fe-C phase diagram under magnetic field was carried out by Joo et al. [START_REF] Joo | An effect of high magnetic field on phase transformation in Fe-C system[END_REF]. Considering that the susceptibility of ferrite given by the Curie-Weiss law disagrees with experimental data, the magnetization calculated with the Weiss theory was used instead of the susceptibility to compute the magnetic Gibbs free energy of ferrite. In this work the influence of the magnetic field on all phases was considered; the Fe-C phase diagram was calculated for various applied magnetic fields, and the eutectoid temperature, the eutectoid composition and the γ/α transformation temperature were determined. The results showed that both the Ac1 and Ac3 temperatures increase when the magnetic field is applied, while the Acm temperature is almost independent of the field. The eutectoid point is consequently shifted towards higher carbon content and higher temperature, as seen in Figure 1.5.

Figure 1.5 Equilibrium transformation for various applied magnetic fields [START_REF] Joo | An effect of high magnetic field on phase transformation in Fe-C system[END_REF].

This is regarded as the first simulation of the Fe-C phase diagram under magnetic field. The study is nevertheless rather limited, because the magnetic Gibbs free energy of ferrite is obtained by a purely mathematical procedure that does not interpret the underlying physical relations between M, H and T; moreover, the magnetic Gibbs free energy cannot be predicted at arbitrary field strengths, which restricts the applicability of the study to some extent. Despite this, the results obtained are of great importance and significance. They showed that an applied magnetic field can affect the phase equilibrium temperatures as well as the eutectoid composition, and thus change the room-temperature microstructure; this enables the use of magnetic fields as a means of microstructure control and materials property optimization. In addition, the field-induced shift of the eutectoid point opens the possibility of improving mechanical properties by increasing the carbon content without hypereutectoid transformation. Joo et al. [START_REF] Joo | An effect of a strong magnetic field on the phase transformation in plain carbon steels[END_REF] measured the γ-α transformation temperature under a 10 T magnetic field with a thermo-dilatometer; the increases of the γ-α transformation temperature of pure iron and of the eutectoid temperature calculated on the basis of molecular field theory turned out to be in good agreement with the measured results [START_REF] Joo | An effect of high magnetic field on phase transformation in Fe-C system[END_REF]. Similar calculations using the Weiss molecular field theory to compute the Gibbs free energy changes of the phases were also carried out by Choi et al.
[START_REF] Choi | Effects of a strong magnetic field on the phase stability of plain carbon steels[END_REF], Guo and Enomoto [START_REF] Enomoto | Influence of magnetic field on the kinetics of proeutectoid ferrite transformation in iron alloys[END_REF][START_REF] Guo | Influence of magnetic fields on γ-α equilibrium in Fe-C(-X) alloys[END_REF] as well as Hao and Ohtsuka [START_REF] Ohtsuka | Effects of a high magnetic field on bainitic transformation in Febased alloys[END_REF][START_REF] Ohtsuka | Effects of strong magnetic fields on bainitic transformation[END_REF][START_REF] Hao | Effect of High Magnetic Field on Phase Transformation Temperature in Fe-C Alloys[END_REF][START_REF] Hao | Quantitative Characterization of the Structural Alignment in Fe-0.4C Alloy Transformed in High Magnetic Field[END_REF][START_REF] Hao | Structural elongation and alignment in a Fe-0.4C alloy by isothermal ferrite transformation in high magnetic fields[END_REF][START_REF] Ohtsuka | Alignment of ferrite grains during austenite to ferrite transformation in a high magnetic field[END_REF][START_REF] Ohtsuka | Effects of magnetic field and prior austenite grain size on the structure formed by reverse transformation from lath martensite to austenite in an Fe-0.4C alloy[END_REF][START_REF] Ohtsuka | Structural control of Fe-based alloys through diffusional solid/solid phase transformations in a high magnetic field[END_REF] for the Fe-C system. Choi et al. [START_REF] Choi | Effects of a strong magnetic field on the phase stability of plain carbon steels[END_REF] combined calculations with experimental examination and showed that the magnetic field also increases the eutectoid carbon content and the carbon solubility in ferrite. Guo and Enomoto further extended these studies to Fe-C-Mn and Fe-C-Si alloys [START_REF] Enomoto | Influence of magnetic field on the kinetics of proeutectoid ferrite transformation in iron alloys[END_REF][START_REF] Guo | Influence of magnetic fields on γ-α equilibrium in Fe-C(-X) alloys[END_REF], taking into account the interaction free energy of the alloying elements as well as their influence on the Curie temperature and on the magnetic moments of the iron atoms. Since the Curie-Weiss law can hardly provide reliable susceptibility data for ferrite around the Curie temperature, these calculations are rather approximate around and above it. Valuable results were nevertheless obtained: the α-γ transformation temperature is raised by 1-3 °C per tesla, depending on the alloy composition and the intensity of the applied field, whereas the γ-δ transformation temperature is decreased by about 0.4 °C per tesla. They also predicted that at about 100 T pure bcc iron may become more stable than fcc iron over the whole temperature range. Hao and Ohtsuka [START_REF] Hao | Effect of High Magnetic Field on Phase Transformation Temperature in Fe-C Alloys[END_REF] found that, for magnetic fields below 10 T, the austenite-to-ferrite transformation temperature of pure Fe increases linearly with the field strength, by about 0.8 °C per tesla; a similar relationship holds for the eutectoid transformation in the Fe-0.8C alloy, whose transformation temperature increases by about 1.5 °C per tesla. Hao and Ohtsuka [START_REF] Hao | Effect of High Magnetic Field on Phase Transformation Temperature in Fe-C Alloys[END_REF] used a thermocouple and a digital recorder to measure the transformation temperatures.
It was found that the measured transformation temperatures are not consistent with the results calculated using the Weiss molecular field theory, and, moreover, that using susceptibilities measured experimentally under low magnetic fields for simulation calculations at high magnetic fields is not appropriate. The Weiss model allows basic calculations of ferromagnetism and provides relatively accurate magnetization data for bcc Fe below Tc, but it has several shortcomings [START_REF] Guo | Influence of magnetic fields on γ-α equilibrium in Fe-C(-X) alloys[END_REF] that limit its application around and above Tc, especially when the applied magnetic field is high. New models were therefore needed for the simulation of the Fe-C equilibrium under magnetic field. To solve this problem, Zhang [START_REF] Zhang | Calculation of magnetization and phase equilibrium in Fe-C binary system under a magnetic field[END_REF] revised the Weiss model by substituting the molecular field coefficient λ with a short-range-ordering coefficient γ around and above Tc; applied to the susceptibility of bcc Fe above Tc, the revised model turned out to be in good agreement with experimental measurements. Moreover, the electronic band model was used to calculate the temperature variation of the susceptibility of fcc Fe. On this basis, the magnetizations and then the magnetic Gibbs free energies of ferrite and austenite were calculated; the Fe-C phase diagram under magnetic field was thereby computed, and the shifts of the Ae3 temperature and of the eutectoid temperature and composition were predicted. The magnetic field is found to enlarge the ferritic phase field of the diagram and to shrink the austenitic one. For pure iron, Ae3 is raised by about 7 °C under a 10 T magnetic field, in good agreement with the values measured by Ohtsuka et al. [START_REF] Hao | Effect of High Magnetic Field on Phase Transformation Temperature in Fe-C Alloys[END_REF]. Generally, for Fe and Fe-0.8C the experimentally measured average γ-α transformation temperature could readily be compared with the results calculated from the Weiss molecular field theory. The most widely accepted result is that the change of the γ-α transformation temperature is almost proportional to the magnetic field when ferrite is ferromagnetic at the transformation temperature (as in Fe-0.8C), whereas it is proportional to the square of the magnetic field when ferrite is paramagnetic (as in pure iron). For intermediate proeutectoid steels, however, whose (γ+α) region is large, this method is no longer valid [START_REF] Fukuda | Magnetic field dependence of γ-α equilibrium temperature in Fe-Co alloys[END_REF]: owing to the large hysteresis of the transformations in these alloys, it would be meaningless to define an average γ/α transformation temperature and compare it with a calculated γ/α equilibrium temperature. Another approach was therefore proposed to discuss the magnetic field dependence of the diffusional transformation temperatures, taking into account the temperature hysteresis caused by the large driving force required for the transformation [START_REF] Farjami | Effect of Magnetic Field on γ-α Transformation in Fe-Rh Alloys[END_REF].
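The linear and quadratic field dependences mentioned above can be rationalized schematically from the form of the magnetic Gibbs free energy gained by ferrite. Neglecting the austenite contribution and all numerical prefactors (this is only an order-of-magnitude argument, not the full molecular-field treatment):

$$\text{ferromagnetic }\alpha:\quad M_\alpha \approx M_s \ (\text{saturated}) \;\Rightarrow\; \Delta G_M \approx -M_s\,B \;\Rightarrow\; \Delta T \propto B$$

$$\text{paramagnetic }\alpha:\quad M_\alpha \approx \chi H \;\Rightarrow\; \Delta G_M \approx -\tfrac{1}{2}\,\mu_0\,\chi\, H^{2} \;\Rightarrow\; \Delta T \propto B^{2}$$

with ΔT following ΔG_M through a Clausius-Clapeyron-type relation such as Eq. (1.5).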
More recently, this hysteresis-aware approach was applied to study the non-equilibrium α-γ and γ-α transformation temperatures separately in Fe-Rh alloys [START_REF] Farjami | Effect of Magnetic Field on γ-α Transformation in Fe-Rh Alloys[END_REF], Fe-C-Mn alloys [START_REF] Garcin | Experimental evidence and thermodynamics analysis of high magnetic field effects on the austenite to ferrite transformation temperature in Fe-C-Mn alloys[END_REF] and Fe-Ni alloys [START_REF] Garcin | Thermodynamic analysis using experimental magnetization data of the austenite/ferrite phase transformation in Fe-xNi alloys (x= 0, 2, 4 wt%) in a strong magnetic field[END_REF][START_REF] Garcin | In situ characterization of phase transformations in a magnetic field in Fe-Ni alloys[END_REF]. Alongside the new analysis approach, new devices for monitoring high-temperature phase transformations under magnetic field have emerged. Rivoirard and Garcin [START_REF] Garcin | Experimental evidence and thermodynamics analysis of high magnetic field effects on the austenite to ferrite transformation temperature in Fe-C-Mn alloys[END_REF][START_REF] Garcin | Kinetic effects of magnetic field on the γ/α interface controlled reaction in iron[END_REF][START_REF] Rivoirard | High temperature dilatation measurements by in situ laser interferometry under high magnetic field[END_REF] developed a high-magnetic-field in situ dilatometer, operating from room temperature up to 1500 K, based on a high-resolution Michelson laser interferometer. This new device was used to monitor the austenite-to-ferrite transformation in pure iron under a magnetic field of 16 T, and the measured Ar3 temperatures were compared with calculated values. In these studies, experimental magnetization data measured up to 3.5 T and 1100 K with a high-sensitivity magnetometer [START_REF] Gaucherand | Magnetic susceptibility of high-Curie-temperature alloys near their melting point[END_REF] were used to calculate the magnetic contribution to the Gibbs free energy, instead of values computed from the Weiss molecular field theory. With this approach, the calculated Ar3 temperatures were found to be in good agreement with the dilatometric measurements. Besides the modification of the γ-α transformation temperature, the eutectoid point shift has also been calculated and experimentally evidenced in a hypereutectoid carbon steel [START_REF] Zhang | Effect of a high magnetic field on eutectoid point shift and texture evolution in 0.81C-Fe steel[END_REF][START_REF] Zhang | Shift of the eutectoid point in the Fe-C binary system by a high magnetic field[END_REF]. Microstructural observation, which reveals the formation of ferrite in the hypereutectoid carbon steel under field, evidences the shift of the eutectoid point in the Fe-C system: under a 12 T magnetic field it shifts from 0.77 wt.%C to 0.8287 wt.%C. Furthermore, Zhang et al. [START_REF] Zhang | Effect of a high magnetic field on eutectoid point shift and texture evolution in 0.81C-Fe steel[END_REF][START_REF] Zhang | Shift of the eutectoid point in the Fe-C binary system by a high magnetic field[END_REF] proposed a general and comprehensive calculation method, combining well-established statistical thermodynamic models with magnetism theory, to predict this eutectoid point shift on both the carbon content and the temperature scales, as shown in Figure 1.6. The eutectoid carbon contents and temperatures calculated without and with a 12 T magnetic field are 0.779 wt.%C at 725.71 °C and 0.847 wt.%C at 754.68 °C, respectively.
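From these calculated values, the per-tesla magnitude of the predicted shift can be read off directly:

```python
# Per-tesla magnitude of the calculated eutectoid point shift quoted above.
B = 12.0                               # applied magnetic field (T)
dT_per_tesla = (754.68 - 725.71) / B   # temperature shift (deg C per tesla)
dC_per_tesla = (0.847 - 0.779) / B     # carbon content shift (wt.%C per tesla)
print(f"eutectoid temperature shift: {dT_per_tesla:.2f} deg C per tesla")
print(f"eutectoid carbon shift:      {dC_per_tesla:.4f} wt.%C per tesla")
```

This gives roughly 2.4 °C and 0.006 wt.%C per tesla, consistent in magnitude with the per-tesla temperature shifts quoted earlier in this section.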
The eutectoid carbon contents calculated without and with the magnetic field appear to be very close to the values determined experimentally, which shows that the proposed equations offer a relatively accurate prediction of the eutectoid point shift under a high magnetic field on both the carbon content and the temperature scales.

New microstructural features under the magnetic field

The most remarkable microstructural feature induced by the magnetic field is the formation of aligned and elongated microstructures during the transformations between austenite and ferrite. Studies on this subject have been drawing increasing attention, as it may lead to control of the microstructure and hence of the texture and mechanical properties. The first observation of an aligned microstructure was reported by Shimotomai [START_REF] Shimotomai | Aligned of two-phase structures in Fe-C alloys[END_REF] in Fe-0.1C and Fe-0.6C alloys during the α to γ inverse transformation: chains or columns of paramagnetic austenite were found along the field direction in the ferromagnetic ferrite phase during the ferrite-to-austenite inverse transformation in an 8 T magnetic field, as shown in Figure 1.7. The formation of the alignment was attributed to the dipolar interactions between the magnetic moments of pairs of paramagnetic austenite nuclei, which may be regarded as magnetic holes in the ferrite matrix. The alignment of the γ nuclei under the field was later noticed even above the Curie temperature during the inverse transformation [START_REF] Maruta | Magnetic field-induced alignment of steel microstructures[END_REF]. Shimotomai and Maruta then studied the magnetic field-induced aligned structures in Fe-0.6C, Fe-0.2C-0.2Si-1.3Mn-0.1Ti and Fe-0.1C-2.0Si-2.0Mn alloys [START_REF] Maruta | Magnetic field-induced alignment of steel microstructures[END_REF][START_REF] Maruta | Alignment of two-phase structures in Fe-C alloys by application of magnetic field[END_REF][START_REF] Shimotomai | Formation of aligned two-phase microstructures by applying a magnetic field during the austenite to ferrite transformation in steels[END_REF] during the γ-α transformation. A special experimental setup was designed, combining roller dice for hot deformation with a superconducting magnet for applying a magnetic field during the phase transformation; the concept is to deform the steel prior to the transformation under field in order to introduce more nucleation sites. In these studies, the formation mechanism of the aligned structures was discussed from the viewpoint of the nucleation and growth of the ferrite grains. Observing the ferrite nucleation sites under field by SEM, the authors concluded that whether nucleation occurs at a grain boundary or in the austenite grain interior, the long axis of the ferrite grain is always parallel to the field direction, as shown in Figure 1.8. Further analysis of the ferrite morphology showed that the ferrite particles nucleated and grown along the field direction are mostly ellipsoidal in shape, the shape being determined by a competition between the magnetic field energy, which favors an elongated shape, and the interfacial energy, which favors a spherical shape. Moreover, it was reported that although the ferrite grains were elongated and aligned by the applied field, no preferred crystallographic orientation of ferrite developed.
Shimotomai and Maruta considered that a combination of prior rolling and transformation in an external field is essential for producing an aligned structure during the γ-α transformation. However, similar aligned structures were reported by Ohtsuka et al. in a Fe-0.4C alloy without prior deformation, during continuous slow cooling under a 10 T magnetic field [START_REF] Ohtsuka | Alignment of ferrite grains during austenite to ferrite transformation in a high magnetic field[END_REF]. As seen in Figure 1.9, each ferrite grain is elongated and the grains are distributed head to tail along the field direction. The effects of the magnetic field strength, the cooling rate and the austenite grain size on the transformed structure were also examined for a Fe-Mn-C-Nb alloy, but no elongation or alignment of the ferrite grains was observed [START_REF] Ohtsuka | Alignment of ferrite grains during austenite to ferrite transformation in a high magnetic field[END_REF]; on this basis, the presence of alloying elements may completely suppress the elongated and aligned morphology. Later, Zhang et al. [START_REF] Zhang | Thermodynamic and kinetic characteristics of the austenite-to-ferrite transformation under high magnetic field in medium carbon steel[END_REF][START_REF] Zhang | New microstructural features occurring during transformation from austenite to ferrite under the kinetic influence of magnetic field in a medium carbon steel[END_REF][START_REF] Zhang | A new approach for rapid annealing of medium carbon steels[END_REF], studying the austenite decomposition of a hot-rolled 42CrMo steel under magnetic field, found that the aligned structure forms during slow rather than fast cooling. They suggested that at a cooling rate of 10 °C/min, owing to the inhomogeneous deformation introduced by rolling and the dipolar attraction between ferrite nuclei, a microstructure of ferrite grains and pearlite colonies alternately distributed along the field direction is obtained. When cooling at 46 °C/min, nucleation at high temperature is strongly inhibited and postponed to lower temperatures, at which more sites inside the austenite grains, besides the grain boundaries, become available; the resulting microstructure is characterized by randomly distributed ferrite grains and pearlite colonies of smaller size, as seen in Figure 1.11. The field-induced grain elongation mechanism was also analyzed from the physical point of view [START_REF] Zhang | Magnetic-fieldinduced grain elongation in a medium carbon steel during its austenitic decomposition[END_REF]: the grain elongation results from the opposing contributions of the atomic dipolar interaction energy of the Fe atoms and the interfacial energy. The effect of magnetic field on the formation of elongated and aligned pearlite was investigated by Song et al. [START_REF] Song | Effects of high magnetic field strength and direction on pearlite formation in Fe-0.12%C steel[END_REF]: in a Fe-0.12C alloy, the pearlite formed during diffusional decomposition under a 12 T magnetic field was elongated and aligned along the field direction; this tendency increased with increasing field strength, and the effect depends on the specimen orientation with respect to the field direction.
It should be noted that several works indicate that the nucleation and growth of ferrite grains occur preferentially along the field direction, yet without any preferred crystallographic orientation being reported. Recently, the field-induced microstructural features have been investigated from the crystallographic point of view. Zhang et al. applied a 12 T magnetic field to a medium plain carbon steel during the diffusional decomposition of austenite and investigated the effect of the high magnetic field on the distribution of misorientation angles, the grain boundary characteristics and the texture formation in the ferrite produced [START_REF] Zhang | Grain boundary characteristics and texture formation in a medium carbon steel during its austenitic decomposition in a high magnetic field[END_REF]. The magnetic field was reported to cause a considerable decrease in the frequency of low-angle misorientations and an increase in the occurrence of low-Σ coincidence boundaries of ferrite, especially Σ3 [START_REF] Zhang | Grain boundary characteristics and texture formation in a medium carbon steel during its austenitic decomposition in a high magnetic field[END_REF]; this was attributed to the field-induced elevation of the transformation temperature and, therefore, the reduction of the transformation stress. It was also found that the magnetic field enhances the <001> texture component along the transverse field direction, owing to the dipolar interaction between the magnetic moments of the Fe atoms.

Significance and content of this work

Investigations of phase transformations under magnetic field have been carried out for more than fifty years: many meaningful experimental phenomena have been discovered, some related theories have been developed, and magnetic fields are nowadays widely applied as a promising technique in materials processing. However, most former studies of the effect of magnetic field on diffusional phase transformations lack regularity and are not systematic, and the fundamental theories of the magnetic field influence on phase transformations still need to be established. As alloying elements affect the phase transformations in steels to a large extent, such fundamental theories are hard to establish on the basis of the existing studies, most of which were conducted on Fe-based alloys containing considerable amounts of alloying elements. With the development of magnetic field generation techniques, the application of magnetic fields is becoming more and more extensive, and a deeper understanding of the underlying mechanisms is necessary and imperative. Against this background, the present work was carried out as fundamental research to explore the mechanisms by which magnetic fields influence diffusional phase transformations, and thereby to enrich the existing phase transformation theory. To examine the field effect without the involvement of impurities or alloying elements, and to obtain systematic results, three high-purity Fe-C alloys covering both the hypo- and hypereutectoid composition ranges were prepared as experimental materials. The field-induced microstructural features and crystallographic characteristics have been thoroughly investigated. The main content of the present work can be summarized as follows: (1) Experimental examination of the microstructural features induced by the magnetic field in both hypo- and hypereutectoid Fe-C alloys: the field-induced aligned and elongated microstructures are analyzed through a comparative study of the Fe-0.12C and Fe-0.36C alloys.
Microstructural modifications caused by the thermodynamic influence of the magnetic field on the phase equilibrium are examined in the hypo- and hypereutectoid Fe-C alloys, respectively. (2) Experimental examination of the modification of the carbon solubility in ferrite under the magnetic field; calculations of the magnetic moments of the atoms concerned underline the physical essence of the field influence in terms of the magnetic dipolar interaction. (3) Examination, by means of SEM/EBSD, of the effect of the magnetic field on the crystallographic orientation characteristics: the field-induced preferred orientation distribution of ferrite and the corresponding texture formation mechanism are investigated, and the types of orientation relationships in pearlite and their occurrence frequencies are examined.

Chapter 2 Experimental Methods and Calculations

Materials preparation

The materials used in this work are three high-purity Fe-C alloys with different carbon contents, namely Fe-0.12C, Fe-0.36C and Fe-1.1C. To minimize the introduction of impurities during preparation, the alloys were prepared by vacuum induction melting from high-purity constituent elements at the EPM laboratory of Northeastern University, China. The preparation of the high-purity Fe-C alloys involved the following two steps. (1) Preparation of a high-purity cast iron as the high-carbon constituent for tuning the carbon content of the alloys: high-purity (99.99%) electrolytic iron was melted in a high-purity graphite crucible (99.99% carbon) by repeated vacuum induction melting, allowing carbon to diffuse from the graphite into the iron melt to produce cast iron. The chemical composition of the cast iron obtained is given in Table 2.1. (2) Preparation of the final alloys with the assigned carbon contents: the cast iron obtained in step (1) was melted together with high-purity (99.99%) electrolytic iron in a water-cooled copper crucible by vacuum induction levitation melting. By varying the quantity of cast iron, the final Fe-C alloys with the assigned carbon contents (0.12, 0.36 and 1.1 wt.% C) were obtained. Figure 2.1 shows the top view of one high-purity Fe-C alloy ingot.

Pre-treatment of the high-purity Fe-C alloys

In order to study the influence of the magnetic field on the phase transformations without interference from inhomogeneity of the initial microstructure, the three high-purity alloys were pre-treated to obtain homogeneous, equilibrium microstructures for the subsequent magnetic field heat treatments. First, the ingots were multidirectionally forged to homogenize the microstructure: the ingots were forged into cubes, which were then further forged along their three perpendicular edge directions so as to recover the initial cube shape of the workpieces; this process was repeated four times, and finally each cube was forged along one of its diagonal directions into a pancake shape. The multidirectional forging was conducted in the temperature range from 1273 K to 1073 K, without annealing between the forging steps; after forging, the workpieces were air cooled to room temperature. To exclude the outer oxide scale and the decarburized layer from the specimens, a plate of 70 mm × 70 mm × 35 mm was cut from the centre of each forged pancake by electrical spark cutting. The plates were then annealed in vacuum at the set temperature for 10 hours to homogenize the composition.
(The annealing temperature was 1373 K for the Fe-0.12C and Fe-0.36C alloys and 1323 K for the Fe-1.1C alloy.) After that, full annealing was performed to obtain equilibrium microstructures: the alloys were fully austenitized for 45 min, cooled to 973 K at 0.3 K/min for the proeutectoid and eutectoid transformations, and then cooled to room temperature. The corresponding phase equilibrium temperatures of the three alloys were calculated with the Thermo-Calc software: the Ae3 temperatures of the Fe-0.12C and Fe-0.36C alloys are 1135 K and 1069 K, respectively, and the Aecm temperature of the Fe-1.1C alloy is 1128 K.

Magnetic field heat treatment equipment and experiments

All the field heat treatments in this work were conducted at the EPM laboratory of Northeastern University, China. The magnetic field heat treatment furnace is set in a 12 T cryo-cooled superconducting magnet with a bore of 100 mm in diameter (Figure 2.2 shows a photo (a) and a schema (b) of the furnace). The field ramping time from 0 T to 12 T is 39 minutes. For this furnace, the highest heating temperature is 1473 K and the maximum heating rate is 5 K/min. For the field heat treatments, the magnetic field was applied during the whole heating, isothermal holding and cooling process. The 12 T field was applied to the Fe-0.12C and Fe-1.1C alloys, while both 8 T and 12 T fields were applied to the Fe-0.36C alloy. The non-field heat treatments used for comparison were carried out in the same furnace with the same heat treatment parameters, simply without switching on the magnetic field. During the field heat treatments, the specimens were kept in the centre (zero magnetic force) area with one of their length directions parallel to the field direction.

To obtain an appropriate observation surface for the microstructural examinations, the treated specimens were first mechanically ground with SiC papers from 800# to 2000# and then polished with diamond suspensions with particle sizes from 3 µm down to 1 µm. For the optical microstructural examinations and the carbon content measurements, the specimens were further etched with 4% Nital at room temperature for several seconds.

Specimen cutting and specimen geometry

For the SEM microstructural and EBSD crystallographic orientation examinations, the specimens were electrolytically polished in an 8% perchloric acid ethanol solution at room temperature at a voltage of 30 V for 10-15 seconds after mechanical polishing (paper grinding up to 2000#). It should be emphasized that the preparation procedures for the non-field and field treated specimens (electrolytic polishing or etching, and rinsing) were strictly controlled to ensure identical preparation conditions, so that the results from the non-field and field treated specimens are comparable.

Optical microstructural examinations

The optical microstructure observations were performed with an optical microscope; the lamellar spacings of pearlite were measured over 20 areas.

Wavelength-dispersive spectroscopic (EPMA) analysis

The carbon concentration of the proeutectoid ferrite in the Fe-0.36C alloy was measured by means of wavelength-dispersive spectroscopy using a Shimadzu 1610 electron probe microanalyzer (WDS-EPMA). Six standard samples with carbon contents ranging from 0.0075% to 0.978% were used to establish the calibration curve for the WDS measurements.
Specimens treated without and with the magnetic field were examined during the same specimen loading, to secure identical measurement conditions for the non-field and field treated specimens. In each specimen, 20 proeutectoid ferrite areas were randomly selected and measured so as to reach a globally representative result.

Hardness tests

The hardness of the proeutectoid ferrite in the Fe-0.36C alloy was tested with a micro-indenter (diamond Vickers indenter, Leitz, Wetzlar, Germany) under an applied load of 25 g for 20 s. The mean hardness was averaged over 8 measurements for each specimen.

Scanning electron microscopic and crystallographic orientation analysis

In this work, a field emission gun scanning electron microscope (Jeol JSM 6500F) was used; a photo and a schematic illustration of the instrument are displayed in Figure 2.6. Two different beam-control modes were applied, for different orientation analysis purposes.

(1) Automated orientation mapping, for massive orientation acquisition and for correlated microstructure and texture analyses. The automated orientation mapping was performed to examine the orientation distribution of ferrite. Ferrite is a simple-crystal-structured phase, and its orientation determination by EBSD is usually efficient and accurate, thanks to the high-quality Kikuchi patterns attainable from bulk ferrite grains and even from lamellar pearlitic ferrite; this ensures that the orientations over a large measurement area can be determined in the automatic mode within a short time. In this work, the automated mappings were performed on the whole cross section of the specimens at different step sizes, namely 0.5 µm, 1 µm, 2 µm and 3 µm. As the crystal structure of cementite is relatively complicated and cementite is subject to a high level of internal constraint, the Kikuchi patterns acquired from cementite within the acquisition time used for one ferrite pattern are of very poor quality; the measurement points on cementite therefore always remain as non-indexed zero solutions. The orientation information of the pearlite colonies thus comes only from their pearlitic ferrite, and in this way each pearlite colony is simply identified as a ferrite grain. Finally, only the texture of the ferrite (pearlitic and proeutectoid) was analyzed.

(2) Interactive mode, for individual orientation measurements. To investigate the orientation relationships (ORs) between ferrite and cementite in pearlite and in the abnormal structure, the individual orientations of ferrite and of its neighboring cementite were measured. This mode is required by the above-mentioned nature of cementite, together with the geometrical constraints imposed by the anisotropic shape of pearlitic cementite (thin lamellae, ~20 nm in width) and the anisotropic footprint of the electron beam on the specimen surface under the EBSD measurement chamber geometry. To obtain a representative result for the occurrence frequency of the appearing ORs, a large number of ferrite-cementite pairs were measured.

Determination of orientation relationships between ferrite and cementite

In this work, the basic principle of the OR determination is to verify the direction-direction and plane-plane parallelisms between ferrite and cementite against the ferrite-cementite ORs reported in the literature. The ORs used are summarized in Table 2.5.

Table 2.5 Summary of the well-known and new ORs.
Well-known ORs (expressed in the conventional way) | New ORs [START_REF] Zhang | New insights into crystallographic correlations between ferrite and cementite in lamellar eutectoid structures obtained by SEM-FEG/EBSD and an indirect two-trace method[END_REF] (expressed in terms of close-packed plane and in-plane direction)
Bagaryatsky (BAG): [100]C // [-110]F, [010]C // [111]F | Unknown

The direction-direction and plane-plane parallelism verifications were performed by first transforming the corresponding direction vectors or plane normal vectors from the lattice basis of cementite (orthorhombic) to that of ferrite (cubic), by coordinate transformation using the orientations of ferrite and cementite determined by EBSD, and then comparing them with the corresponding ferrite direction vectors or plane normal vectors to verify the parallelism between them and conclude on the possible OR. The full OR determination process is detailed as follows.

(1) Setting of the coordinate systems. Three orthonormal coordinate systems were chosen for convenience, in addition to the crystal bases of ferrite and cementite. The first is the sample coordinate system; the second is attached to the ferrite crystal and coincides with the ferrite lattice basis; the third is attached to the cementite crystal, with its basis vectors parallel to the corresponding cementite lattice basis vectors. The difference between the two cementite bases is that the cementite crystal coordinate system is orthonormal, whereas the lattice basis is not.

(2) Construction of the coordinate transformation matrices. The coordinate transformation matrix G_j (j = F or C, where F denotes ferrite and C cementite) from the sample coordinate system to the ferrite/cementite crystal coordinate system can be constructed from the Euler angles φ1, Φ, φ2 of ferrite/cementite with respect to the sample coordinate system, as determined by the EBSD measurements:

$$G_j=\begin{pmatrix} \cos\varphi_1\cos\varphi_2-\sin\varphi_1\sin\varphi_2\cos\Phi & \sin\varphi_1\cos\varphi_2+\cos\varphi_1\sin\varphi_2\cos\Phi & \sin\varphi_2\sin\Phi\\ -\cos\varphi_1\sin\varphi_2-\sin\varphi_1\cos\varphi_2\cos\Phi & -\sin\varphi_1\sin\varphi_2+\cos\varphi_1\cos\varphi_2\cos\Phi & \cos\varphi_2\sin\Phi\\ \sin\varphi_1\sin\Phi & -\cos\varphi_1\sin\Phi & \cos\Phi \end{pmatrix} \qquad (2.1)$$

Then the coordinate transformation matrix ΔG^(F→C) from the ferrite crystal coordinate system to the cementite crystal coordinate system (as illustrated in Figure 2.9) can be calculated in matrix notation:

$$\Delta G^{F\rightarrow C}\cdot [S_F][G_F] = [S_C][G_C] \qquad (2.2)$$

$$\Delta G^{F\rightarrow C} = [S_C][G_C][G_F]^{-1}[S_F]^{-1} \qquad (2.3)$$

where S_F and S_C are the rotational symmetry matrices of the crystal systems of ferrite and cementite.

Figure 2.9 Coordinate system transformation from ferrite to cementite.

(3) Coordinate transformation of vectors. To transform direction vectors or plane normal vectors from the cementite lattice basis to that of ferrite, the coordinate transformation proceeds first from the cementite lattice basis to the cementite crystal coordinate system, and then from the cementite crystal coordinate system to the ferrite coordinate system (lattice basis). For a direction vector [u v w] in the cementite lattice basis, the corresponding direction vector in the cementite crystal coordinate system is

$$\vec{v}_C = u\,a\,\vec{e}_1 + v\,b\,\vec{e}_2 + w\,c\,\vec{e}_3 \qquad (2.4)$$

where e_i (i = 1, 2, 3) are the unit vectors of the cementite crystal coordinate system and a, b, c are the crystal lattice parameters of cementite. The same vector expressed in the ferrite coordinate system, v_F, can then be calculated with the transformation matrix:

$$\vec{v}_F = \Delta G^{F\rightarrow C}\,\vec{v}_C \qquad (2.5)$$
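To illustrate how Eqs. (2.1)-(2.5) and the 3° parallelism test of Eq. (2.14) below fit together, here is a minimal Python sketch with NumPy. The Euler angles are made-up example values, the crystal symmetry operators are omitted (S_F = S_C = I, whereas a real analysis would scan all symmetry variants), the cementite lattice parameters are approximate textbook-like values, and plane normals are built with the orthorhombic reciprocal basis described in the next step.

```python
import numpy as np

def g_from_euler(phi1, Phi, phi2):
    """Bunge-convention coordinate transformation matrix (Eq. 2.1):
    sample frame -> crystal frame, angles in radians."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    C, S = np.cos(Phi), np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    return np.array([
        [ c1*c2 - s1*s2*C,  s1*c2 + c1*s2*C, s2*S],
        [-c1*s2 - s1*c2*C, -s1*s2 + c1*c2*C, c2*S],
        [ s1*S,            -c1*S,            C  ]])

# Hypothetical EBSD Euler angles (degrees) for one ferrite/cementite pair
G_F = g_from_euler(*np.radians([30.0, 45.0, 60.0]))
G_C = g_from_euler(*np.radians([12.0, 80.0,  5.0]))

# Eq. (2.3) with the symmetry operators omitted (S_F = S_C = identity)
dG_FC = G_C @ np.linalg.inv(G_F)

# Approximate cementite (orthorhombic) lattice parameters (nm)
a, b, c = 0.4524, 0.5089, 0.6743

def cem_direction(uvw):
    """Eq. (2.4): [u v w] in the cementite lattice basis, expressed in
    the orthonormal cementite crystal frame."""
    u, v, w = uvw
    return np.array([u*a, v*b, w*c])

def cem_plane_normal(hkl):
    """Normal of (h k l) via the reciprocal basis; for an orthorhombic
    cell the reciprocal vectors reduce to 1/a, 1/b, 1/c along the axes."""
    h, k, l = hkl
    return np.array([h/a, k/b, l/c])

def angle_deg(v1, v2):
    """Eq. (2.14); abs() because directions are defined up to sign."""
    cosd = abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

# Transform a cementite direction and plane normal into the ferrite
# frame, following the convention of Eq. (2.5) as written in the text.
v_F = dG_FC @ cem_direction([0, 1, 0])
n_F = dG_FC @ cem_plane_normal([0, 0, 1])

d1 = angle_deg(v_F, np.array([1.0, 1.0, 1.0]))    # [010]C // [111]F ?
d2 = angle_deg(n_F, np.array([1.0, 1.0, -2.0]))   # implied plane parallelism
print(f"deviation from [010]C // [111]F:  {d1:.2f} deg",
      "-> fulfilled" if d1 <= 3.0 else "-> not fulfilled")
print(f"deviation from (001)C // (11-2)F: {d2:.2f} deg")
```

With random orientations, as here, the test will normally fail; in the actual analysis the measured orientation pairs are screened in this way against each candidate OR.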
For the normal vector of a plane (h k l) in the cementite lattice basis, the coordinate transformation is more complicated. First, the normal vector is expressed in terms of the reciprocal lattice vectors of cementite:

$$\vec{g}_{hkl} = h\,\vec{a}^{*} + k\,\vec{b}^{*} + l\,\vec{c}^{*} \qquad (2.6)$$

with the reciprocal basis vectors defined as

$$\vec{a}^{*}=\frac{\vec{b}\times\vec{c}}{V},\qquad \vec{b}^{*}=\frac{\vec{c}\times\vec{a}}{V},\qquad \vec{c}^{*}=\frac{\vec{a}\times\vec{b}}{V} \qquad (2.7)$$

where V is the volume of the unit cell built on the three cementite crystal lattice vectors a, b, c:

$$V=\vec{a}\cdot(\vec{b}\times\vec{c})=\vec{b}\cdot(\vec{c}\times\vec{a})=\vec{c}\cdot(\vec{a}\times\vec{b}) \qquad (2.8)$$

For the orthorhombic cell of cementite, the reciprocal basis vectors expressed in the cementite crystal coordinate system reduce to

$$\vec{a}^{*}=\frac{1}{a}\,\vec{e}_1,\qquad \vec{b}^{*}=\frac{1}{b}\,\vec{e}_2,\qquad \vec{c}^{*}=\frac{1}{c}\,\vec{e}_3 \qquad (2.9)$$

With Eqs. (2.6) and (2.9), g_hkl is obtained in the cementite crystal coordinate system, and the same vector in the ferrite coordinate system is then calculated with Eq. (2.5).

(4) Angle between two vectors. Once the direction vectors or plane normal vectors of cementite have been transformed into the same coordinate system as the corresponding vectors of ferrite, they can be compared. Let v1 and v2 be two vectors in the ferrite coordinate system. Since

$$\vec{v}_1\cdot\vec{v}_2 = \left|\vec{v}_1\right|\left|\vec{v}_2\right|\cos(\vec{v}_1,\vec{v}_2) \qquad (2.12)$$

we have

$$\cos(\vec{v}_1,\vec{v}_2) = \frac{\vec{v}_1\cdot\vec{v}_2}{\left|\vec{v}_1\right|\left|\vec{v}_2\right|} \qquad (2.13)$$

and the angle Δδ between the two vectors can be calculated as

$$\Delta\delta = \arccos\!\left(\frac{\vec{v}_1\cdot\vec{v}_2}{\left|\vec{v}_1\right|\left|\vec{v}_2\right|}\right) \qquad (2.14)$$

In the present work, if Δδ ≤ 3° for the nominally parallel directions and plane normals of ferrite and cementite, the direction and plane parallelisms are considered fulfilled and the OR between ferrite and cementite is confirmed. For the present study, the habit plane between ferrite and cementite in each OR was determined by the "indirect two-trace method" [START_REF] Zhang | Indirect two-trace method to determine a faceted low-energy interface between two crystallographically correlated crystals[END_REF].

Ab-initio calculations

In order to interpret the working mechanism of the magnetic field on carbon solution in ferrite, the magnetic moments of bcc Fe and carbon atoms and the atomic magnetic dipolar interaction energies of Fe clusters without and with an interstitial carbon atom were evaluated. The calculations were carried out within the framework of density functional theory (DFT) using the Vienna ab-initio simulation package (VASP) [START_REF] Kresse | Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set[END_REF][START_REF] Kresse | From ultrasoft pseudopotentials to the projector augmented-wave method[END_REF]. The interaction between ions and electrons was described by the projector augmented wave (PAW) method [START_REF] Blöchl | Projector augmented-wave method[END_REF], and the exchange-correlation potential was treated in the generalized gradient approximation (GGA) [START_REF] Perdew | Generalized gradient approximation made simple[END_REF]. Pseudopotentials with 3d^7 4s^1 and 2s^2 2p^2 as the respective valence states for Fe and carbon were used, with a spin-polarized representation of the electronic charge densities allowing a collinear description of the magnetic moments. The kinetic energy cutoff was set to 400 eV, and a Monkhorst-Pack [START_REF] Monkhorst | Special points for Brillouin-zone integrations[END_REF] grid was employed to sample the Brillouin zone.

In section 3.1, the magnetic-field-induced aligned and elongated microstructures are studied through a comparative examination of the Fe-0.12C and Fe-0.36C alloys; the field influence on these microstructures is analyzed theoretically with the magnetic dipole model and discussed as a function of carbon content. In section 3.2, the area fraction of the transformed ferrite is measured in both the Fe-0.12C and the Fe-0.36C alloys.
The effect of the magnetic field on the amount of ferrite is studied and discussed as a function of carbon content, and the field effect on Widmanstätten ferrite is examined in the Fe-0.36C alloy. In section 3.3, the effects of the magnetic field on the abnormal structure and on the spheroidization of pearlite are investigated in the Fe-1.1C alloy; both the morphological features and the crystallographic characteristics of the abnormal structure and of the pearlite are studied. For the Fe-0.36C alloy, the transformed microstructures without and with the magnetic field are likewise composed of proeutectoid ferrite and pearlite.

Magnetic-field-induced aligned and elongated microstructures

Discussion

The field-induced elongated and aligned microstructure formed during the proeutectoid ferritic transformation has been studied in several iron-based alloys [34, 36, 50-53, 56, 58, 59]. It can be qualitatively explained by the magnetic dipole model [START_REF] Shimotomai | Formation of aligned two-phase microstructures by applying a magnetic field during the austenite to ferrite transformation in steels[END_REF][START_REF] Zhang | Grain boundary characteristics and texture formation in a medium carbon steel during its austenitic decomposition in a high magnetic field[END_REF]. Under a magnetic field, the magnetic moments tend to align along the field direction, as schematically illustrated in Figure 3.3, so that a dipolar interaction exists between neighboring moments. The dipolar interaction energy E_D can be expressed as

$$E_D = -\frac{\mu_0}{4\pi r_{12}^{3}}\left[3(\vec{m}_1\cdot\vec{e}_{12})(\vec{m}_2\cdot\vec{e}_{12}) - \vec{m}_1\cdot\vec{m}_2\right] \qquad (3.1)$$

where μ0 is the vacuum magnetic permeability, e12 a unit vector parallel to the line joining the centers of the two dipoles, and r12 the distance between the two dipoles m1 and m2. If the magnetic moments align along the field direction, with m1 = m2 = m and r12 = r, the dipolar interaction energy E_D can be considered in the following two situations.

(1) The two magnetic dipoles are aligned along the magnetic field direction, as shown in Figure 3.4(a). The dipolar interaction energy between them is then

$$E_D = -\frac{\mu_0 m^{2}}{2\pi r^{3}} \qquad (3.2)$$

Since E_D is negative (E_D < 0), the magnetic dipolar interaction between them is attractive; as a result, the magnetic dipoles tend to attract each other along the field direction.

(2) The two magnetic dipoles are aligned in the transverse field direction, as shown in Figure 3.4(b). The dipolar interaction energy then becomes

$$E_D = \frac{\mu_0 m^{2}}{4\pi r^{3}} \qquad (3.3)$$

E_D is positive (E_D > 0), which implies that the magnetic dipolar interaction makes the dipoles repel each other in this direction.

Figure 3.4 (a) Two magnetic dipoles aligned along the magnetic field direction; (b) two magnetic dipoles aligned in the transverse magnetic field direction.

As a result, under the magnetic field, magnetic dipoles attract each other along the field direction but repel each other along the transverse field direction owing to the dipolar interaction.
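As a quick numerical sanity check of Eqs. (3.1)-(3.3), the short Python sketch below evaluates E_D for the two configurations; the moment is taken as roughly the ~2.2 μB atomic moment of bcc Fe and the spacing as a near-neighbour distance, both as illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)
MU_B = 9.274e-24     # Bohr magneton (J/T)

def dipolar_energy(m1, m2, r12):
    """Eq. (3.1): dipole-dipole interaction energy in joules."""
    r = np.linalg.norm(r12)
    e12 = r12 / r
    return -MU0 / (4*np.pi*r**3) * (3*(m1 @ e12)*(m2 @ e12) - m1 @ m2)

m = 2.2 * MU_B * np.array([0.0, 0.0, 1.0])   # moments aligned with H // z
r_nn = 2.48e-10                              # ~bcc Fe near-neighbour distance (m)

E_along = dipolar_energy(m, m, np.array([0.0, 0.0, r_nn]))  # Eq. (3.2)
E_trans = dipolar_energy(m, m, np.array([r_nn, 0.0, 0.0]))  # Eq. (3.3)

print(f"along field:      {E_along: .3e} J  (attractive, E_D < 0)")
print(f"transverse field: {E_trans: .3e} J  (repulsive,  E_D > 0)")
assert np.isclose(E_along, -2 * E_trans)  # consistency of Eqs. (3.2)-(3.3)
```

The signs confirm the attraction along the field and the repulsion transverse to it, with the attractive configuration twice as strong in magnitude.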
During the austenite to ferrite and pearlite transformation, the magnetic dipolar interaction works on two scales: the atomic scale and the micro scale. On the atomic scale, each Fe atom in a ferrite grain carries a magnetic moment and can be regarded as a magnetic dipole. The magnetic dipolar interaction makes these atoms attract each other along the field direction and repel each other in the transverse field direction. To minimize the demagnetization energy (caused by the repulsion), Fe atoms tend to align along the field direction. In this case, the effect of the magnetic field acts mainly on the grain growth process, favoring the elongation of the ferrite grains along the field direction. However, the elongation increases the interfacial area and hence the interfacial energy, which opposes the elongation and favors a spherical shape. The balance between these two effects determines the final elongation degree of the grains [Shimotomai et al.; Zhang et al., grain elongation]. On the micro scale, each magnetized ferrite grain can be regarded as a magnetic dipole. Due to the same magnetic dipolar interaction, new ferrite nuclei tend to nucleate next to the existing ones along the field direction and form ferrite chains to reduce the demagnetization energy. In this way, the nucleation of ferrite is affected and the alignment is induced. As the alignment of both the Fe atoms and the ferrite nuclei reduces the demagnetization energy, both are energetically favored by the magnetic field. This is the reason for the formation of the elongated and aligned microstructure under the magnetic field.

In the present work, the carbon content of the Fe-0.12C alloy is relatively low. In this case, proeutectoid ferrite forms at high temperature, far above the Curie temperature $T_C$. At the early stage of the proeutectoid ferritic transformation, above $T_C$, the magnetic dipolar interaction on both the atomic and the micro scale is weak, as the induced magnetizations of ferrite and austenite are very close. Hence, ferrite nuclei form randomly at the prior austenite boundaries and grow uniformly. When the temperature drops to $T_C$, ferrite becomes ferromagnetic and the magnetic dipolar interaction on the two scales becomes stronger. Both nucleation and grain growth during the subsequent cooling are then strongly influenced by the magnetic field: new ferrite nuclei form preferentially next to the existing ferrite grains along the field direction, building ferrite chains. It should be mentioned that, as the carbon content of the alloy is low, the relative amount of proeutectoid ferrite is high and ferrite is densely distributed in the specimen. This ensures a small spacing between the existing ferrite grains and the new ferrite nuclei, and hence a strong micro-scale magnetic dipolar interaction between the ferrite grains. Meanwhile, the existing ferrite grains grow preferentially along the field direction under the atomic dipolar interaction and form elongated grains. Since ferrite is present in large quantity, its growth along the field direction may hinder the growth of the newly formed ferrite in that direction. Therefore, the growth of the new ferrite nuclei is restricted and occurs along the transverse field direction.
As a result, ferrite grains with two different elongation orientations are formed: those transformed at the early stage are elongated along the field direction, whereas those transformed at the late stage are elongated along the transverse field direction. As ferrite is carbon-depleted, the preferential nucleation and growth of proeutectoid ferrite along the field direction drives carbon diffusion in the transverse field direction. Hence, the remaining austenite between the ferrite chains, especially next to the field-direction-elongated ferrite grains, is rich in carbon and provides ideal sites for the formation of pearlite. When the temperature drops below Ar1, this carbon-rich austenite located between the ferrite chains decomposes into pearlite.

For the Fe-0.36C alloy, the magnetic field elongates the proeutectoid ferrite grains along the field direction through the atomic-scale dipolar interaction during grain growth. This demonstrates that the magnetic field effect of inducing elongated and aligned microstructures is carbon-content dependent.

Magnetic-field-induced phase fraction modification of ferrite

Results

The measured ferrite area fractions of the Fe-0.12C and Fe-0.36C alloys treated without and with the magnetic field are shown in Figure 3.5. The magnetic field increases the amount of ferrite in both alloys: the field-induced increment in ferrite area fraction is 6.0% in the Fe-0.12C alloy and 14.0% in the Fe-0.36C alloy. In this context, the magnetic field enhances the formation of ferrite by increasing its area fraction, and this field effect becomes more pronounced with increasing carbon content. The area fraction of Widmanstätten ferrite in the Fe-0.36C alloy was also measured; the result is given in Table 3.1. The amount of Widmanstätten ferrite is greatly decreased by the application of the magnetic field.

Discussion

When a magnetic field is applied during the proeutectoid ferritic phase transformation, both the parent austenite and the product ferrite are magnetized, and their Gibbs free energies are lowered according to their magnetization. The decrease in Gibbs free energy is expressed as

$$\Delta G_M=-\int_{0}^{B_0}\vec{M}\cdot \mathrm{d}\vec{B} \qquad (3.4)$$

where $M$ is the magnetization of the phase concerned and $B_0$ is the induction of the applied magnetic field. Because the magnetization of the product ferrite is higher than that of the parent austenite over the whole transformation temperature range, the decrease in the Gibbs free energy of ferrite is larger. Hence, the energy difference between the parent austenite and the product ferrite is increased by the magnetic field: an extra energy term is introduced as an additional driving force for the austenite to ferrite transformation. Therefore, the phase transformation is accelerated by the magnetic field.
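The magnitude of this extra term can be sketched by numerically integrating Eq. (3.4); the magnetization curves below are assumed, purely illustrative shapes (a rapidly saturating ferromagnetic ferrite and a weakly paramagnetic austenite), not measured data from this work.

```python
import numpy as np

def delta_g_magnetic(B, M):
    """Magnetic Gibbs-energy decrease of a phase, Eq. (3.4): -integral of M dB
    (per unit volume, J/m^3, if M is in A/m and B in T); trapezoidal rule."""
    return -float(np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(B)))

B = np.linspace(0.0, 12.0, 241)           # applied induction, T
M_ferrite   = 1.2e6 * np.tanh(B / 0.5)    # assumed: saturates quickly
M_austenite = 8.0e3 * B                   # assumed: weak, linear response

dG_extra = delta_g_magnetic(B, M_ferrite) - delta_g_magnetic(B, M_austenite)
print(dG_extra)  # negative: extra driving force for austenite -> ferrite
```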
Previous simulation studies [28-33, 43, 68] have shown that the magnetic field modifies the Fe-C phase equilibrium by shifting the α/(α+γ) and γ/(α+γ) boundaries in the Fe-C phase diagram towards higher carbon content and higher temperature, as shown in Figure 3.6; moreover, the field influence on the γ/(α+γ) boundary is proved to be the more pronounced.

According to the lever rule (the "metallurgical lever law"), the amount of ferrite at the eutectoid transformation temperature is proportional to the difference between the carbon solubility in austenite and the carbon content of the steel, and inversely proportional to the carbon solubility difference between austenite and ferrite at the same temperature, i.e. $f_\alpha=\overline{CP'}/\overline{F'P'}$, with the points $F'$, $P'$ and $C$ defined in Figure 3.6. Therefore, the amount of ferrite is increased by the magnetic field through its influence on the phase equilibrium.

The effect of the magnetic field on the phase transformation temperature has also been discussed as a function of carbon content in Fe-based alloys. Garcin et al. [Garcin et al., transformation temperature] revealed that the increment of the Ae3 temperature rises with increasing carbon content, whereas the increment of the Ae1 temperature does not change with carbon content. As a result, the field effect of enhancing the ferrite area fraction becomes more pronounced as the carbon content increases, as confirmed by the present work.

In addition, during the transformation from the parent austenite to ferrite, when the carbon content is suitable and the cooling rate is low, a kind of acicular ferrite known as Widmanstätten ferrite, whose formation follows a K-S orientation relationship with the parent austenite, sometimes forms instead of the normal equiaxed ferrite. The formation of Widmanstätten ferrite is considered to result from the need to reduce the formation energy barrier when the driving force is low [Wang et al.]. According to solid-state phase transformation theory, during the austenite to ferrite transformation the Gibbs free energy change associated with the uniform nucleation of a new phase can be expressed as

$$\Delta G=-V\Delta G_V+S\sigma+V\varepsilon \qquad (3.5)$$

where $V$ is the volume of the product phase, $\Delta G_V$ is the volume Gibbs free energy difference between the parent and product phases, $S$ is the surface area of the nucleus, $\sigma$ is the interfacial energy between the parent and product phases, and $\varepsilon$ is the elastic strain energy of the new phase. $V\Delta G_V$ is the driving force of the transformation, whereas $S\sigma$ (the total interfacial energy) and $V\varepsilon$ (the total elastic strain energy) constitute the transformation barrier. When austenite transforms into Widmanstätten ferrite, the K-S OR guarantees a low-energy semi-coherent interface between the parent austenite and the product ferrite, which greatly reduces the barrier related to the interfacial energy. It is thus clear that Widmanstätten ferrite forms in order to reduce the interfacial energy through coherent or semi-coherent interfaces when the driving force is insufficient, as in the case of slow cooling. As mentioned before, when an external magnetic field is applied, an additional driving force term is introduced by the magnetic field:

$$\Delta G=-V(\Delta G_V+\Delta G_M)+S\sigma+V\varepsilon \qquad (3.6)$$

Consequently, the need to form low-energy interfaces to compensate for the insufficient driving force is reduced, and the chance to form Widmanstätten ferrite is considerably lowered, resulting in the reduced amount observed in the transformed microstructure of the Fe-0.36C alloy under the magnetic field.
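A rough sketch, assuming spherical nuclei and illustrative (not measured) energy values, shows how the magnetic term of Eq. (3.6) lowers the classical nucleation barrier and hence the incentive to form Widmanstätten ferrite:

```python
import numpy as np

def critical_barrier(sigma, dG_v, dG_m=0.0, eps=0.0):
    """Critical nucleation barrier for a spherical nucleus derived from Eq. (3.6):
    dG(r) = -(4/3)*pi*r^3*(dG_v + dG_m - eps) + 4*pi*r^2*sigma
    =>  dG* = 16*pi*sigma^3 / (3*(dG_v + dG_m - eps)^2).
    Energies per unit volume (J/m^3); sigma per unit area (J/m^2)."""
    driving = dG_v + dG_m - eps
    return 16 * np.pi * sigma**3 / (3 * driving**2)

# Illustrative numbers only (assumed for the sketch, not from this work)
sigma = 0.5       # incoherent interface energy, J/m^2
dG_v  = 1.0e7     # chemical driving force, J/m^3
dG_m  = 2.0e6     # extra magnetic driving force under 12 T, J/m^3

print(critical_barrier(sigma, dG_v) / critical_barrier(sigma, dG_v, dG_m))
# ~1.44 > 1: the field lowers the barrier, reducing the need for the low-energy
# semi-coherent interfaces that Widmanstätten ferrite provides.
```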
As Widmanstätten ferrite is brittle and harmful to the ductility of steels, the field effect of reducing its amount is positive for practical applications.

Summary

The magnetic field increases the phase fraction of ferrite owing to its thermodynamic influence on the phase equilibrium. This field effect becomes more pronounced as the austenite to ferrite transformation temperature decreases, which corresponds to an increased carbon content of the steel. The magnetic field reduces the formation of Widmanstätten ferrite by introducing an additional transformation driving force.

Magnetic field-induced microstructure features in Fe-1.1C alloy

Results

The microstructures of the Fe-1.1C alloy treated without and with the magnetic field, together with the corresponding pole figures of ferrite, are displayed in Figure 3.7. As seen from the micrograph in Figure 3.7(a), without the magnetic field the microstructure is composed of pearlite and a small amount of abnormal structure [McQuaid] (indicated by the arrow), which consists of coarse proeutectoid cementite distributed along the initial austenite grain boundaries and a border of ferrite surrounding it. The pole figures in Figure 3.7(a) show that the ferrite (both the ferrite in the abnormal structure and the pearlitic ferrite) does not display any preferred crystallographic orientation. When the 12T magnetic field is applied, as shown in Figure 3.7(b), the microstructural constituents of the alloy remain the same (pearlite plus abnormal structure) and the crystallographic orientations of ferrite stay random, indicating that the magnetic field has no particular effect on the components of the microstructure or on the crystallographic texture of ferrite. However, the magnetic field clearly influences the amount of the microstructural constituents, the morphology of the pearlite and the occurrence of the orientation relationships between ferrite and cementite in pearlite.

Table 3.2 displays the total area percentage of the abnormal structure obtained without and with the 12T magnetic field. The total area percentage of the abnormal structure is increased under the magnetic field, indicating that the field promotes the formation of the abnormal structure. Nevertheless, no specific OR between the cementite and the ferrite in the abnormal structure has been found in either the non-field- or the field-treated specimen. This differs from the report of Chairuangsri et al. [Chairuangsri et al.], who found specific orientation relationships (ORs) between the cementite and the ferrite in the abnormal structure that are close to the Pitsch-Petch OR and the Bagaryatsky OR found in pearlite.

The area percentages of spherical pearlite measured without and with the field are given in Table 3.3. The amount of spheroidal pearlite is increased under the magnetic field, indicating that the field promotes the spheroidization of cementite in pearlite. In addition, the average pearlite lamellar spacing is found to be enlarged from 0.47 µm in the non-field-treated specimen to 0.9 µm in the field-treated specimen.
This phenomenon has been observed in other steels and has been thoroughly analyzed [Zhang et al., hypereutectoid steel; Zhang et al., thermal processing]. It is clear that the measured area percentage of the abnormal structure and the apparent morphology of the pearlitic cementite strongly depend on the orientation of the abnormal structures and of the lamellar pearlite with respect to the observation plane (i.e. on the angle at which they intersect it). As there are no preferential crystallographic orientations in either the non-field- or the field-treated specimen, and as the abnormal structures and pearlite colonies were randomly selected in substantial numbers, the corresponding results on the abnormal structure and the pearlite can be attributed to the magnetic field alone.

Discussion

For a fully austenitized hypereutectoid steel cooled slowly, proeutectoid cementite first precipitates along the original austenite grain boundaries when the temperature drops into the range between Ar_cm and Ar1; pearlite, composed of lamellar cementite and ferrite, then forms from the remaining austenite when the temperature drops below Ar1. The ferrite border around the proeutectoid cementite that constitutes the so-called abnormal structure forms because the carbon-enriched proeutectoid cementite along the initial austenite grain boundaries generates a carbon-depleted zone in its vicinity. If the depleted carbon cannot be resupplied from the interior of the austenite grain by carbon diffusion, the local composition becomes hypoeutectoid, and this part of the austenite transforms into ferrite when the temperature falls below Ar3, forming the abnormal structure.

The increase in the amount of abnormal structure under the magnetic field may be related to the influence of the field on the phase equilibrium between austenite and ferrite. It is known that a magnetic field promotes the formation of phases with higher induced magnetization during a transformation. In the austenite to ferrite transformation, ferrite has the higher magnetization, so its formation is promoted by the field. In the present work, once the proeutectoid cementite forms along the prior austenite boundaries, the carbon-depleted austenite in its vicinity obtains a higher driving force under the magnetic field to transform to ferrite [Choi et al.; Garcin et al., in situ; Garcin et al., kinetic effects; Zhang et al., thermodynamic and kinetic].
As the magnetic field also shifts the eutectoid point towards the high-carbon side [Joo et al.; Zhang et al., eutectoid point], the amount of ferrite obtained is also increased, as widely observed in hypoeutectoid steels after the austenite to ferrite transformation [Choi et al.; Enomoto et al.; Garcin et al., transformation temperature; Zhang et al., thermodynamic and kinetic]. Consequently, the amount of abnormal ferrite surrounding the proeutectoid cementite is increased under the magnetic field.

For lamellar pearlite, spheroidization is a natural tendency: it reduces the total interfacial area between the ferrite and the lamellar cementite and thus makes the system thermodynamically more stable [Tian et al.]. The influence of the magnetic field on the spheroidization of pearlite may be related to two factors. First, spheroidization of pearlite is a carbon diffusion process [Tian et al.; Nam et al.; Zhang et al., deformation; Kamyabi-Gol et al.]. As the magnetic field elevates the pearlitic transformation temperature [Joo et al., Fe-C; Garcin et al., transformation temperature; Zhang et al., eutectoid point; Zhang et al., hypereutectoid steel], pearlite forms at a higher temperature under the field, and high temperature favors carbon diffusion. Therefore, the spheroidization of pearlite, which proceeds by cementite fragmentation and granulation through carbon diffusion, is enhanced by the magnetic field. Second, it has been revealed that the magnetic field increases the relative interfacial energy between ferrite and cementite [Garcin et al., kinetic effects; Zhang et al., tempering], as the field has a different impact on the magnetization of the boundary area and of the grain interior. It is known that the Gibbs free energy of a crystal is lowered when it is magnetized.
In crystalline materials, the degree of magnetization depends on the perfection of the crystal. As the boundary area possesses a high density of defects, the magnetization induced there by a magnetic field is limited with respect to that of the grain interior, so the Gibbs free energy drop in the grain interior is much larger than that in the boundary area. As a result, the relative boundary energy is raised. Consequently, reducing the total boundary area, and thus the total boundary interfacial energy, by spheroidization is energetically favored by the magnetic field.

Chapter 4 Magnetic Field-Enhanced Carbon Solution in Ferrite

Introduction

Strengthening is widely required as one of the key properties in the development of high-performance structural materials. The most common strengthening mode is solid solution strengthening by the addition of alloying elements, such as carbon in iron. As a promising technique for microstructure modification and texture control [Ohtsuka et al.; Shimotomai et al.; Zhang et al., grain elongation; Zhang et al., grain boundary characteristics; Bacaltchuk et al.; Li et al.], a high magnetic field has shown its capacity to modify the solubility of phases in the Fe-C system, and hence great potential to enhance the solid solution strengthening of steels. According to thermodynamic calculations of the phase equilibrium [28-32, 39, 68], the solubility of carbon in ferrite is increased under an applied magnetic field. Moreover, the imposition of a magnetic field increases the proeutectoid ferrite transformation temperature [Joo et al.; Garcin et al., transformation temperature; Garcin et al., kinetic effects; Zhang et al., grain boundary characteristics] and enhances the phase fraction of the transformed ferrite [Choi et al.; Enomoto et al.; Garcin et al., transformation temperature; Zhang et al., thermodynamic and kinetic].
As ferrite is carbon-depleted and cementite carbon-enriched in the Fe-C system, a net increase in the amount of ferrite under a magnetic field requires a balanced carbon repartition between the ferrite and cementite phases to maintain the fixed carbon content of the material. Hardness tests of ferrite have shown that ferrite formed under a magnetic field is harder, suggesting that the hardness increase is due to increased carbon solution. Until now, however, there has been no direct experimental evidence for this assertion, and the underlying physical mechanism of the field-induced carbon solubility remains to be addressed.

In this chapter, the carbon concentrations of ferrite in the Fe-0.36C alloy treated without and with a 12T magnetic field were measured by wavelength-dispersive spectroscopy using a Shimadzu 1610 electron probe microanalyzer (WDS-EPMA). The hardness of the proeutectoid ferrite was tested using a micro-indenter with a load of 25 g; the mean hardness was calculated by averaging over 8 measurements for each specimen. In order to interpret the physical mechanism of the magnetic field effect on carbon solution in ferrite, the magnetic moments of bcc Fe and carbon atoms and the atomic magnetic dipolar interaction energy of Fe clusters without and with an interstitial carbon atom were evaluated. The calculations were carried out within the framework of density functional theory (DFT) using the Vienna ab-initio simulation package (VASP) [Kresse and Furthmüller; Kresse and Joubert]. The interaction between ions and electrons was described by the projector augmented wave (PAW) method [Blöchl], and the exchange-correlation potential was treated by the generalized gradient approximation (GGA) [Perdew et al.]. Pseudopotentials with 3d⁷4s¹ and 2s²2p² as the respective valence states of Fe and carbon were used, together with a spin-polarized representation of the electronic charge densities that allows a collinear description of magnetic moments. The kinetic energy cutoff was set to 400 eV and a Monkhorst-Pack [Monkhorst and Pack] grid was employed to sample the Brillouin zone: 4×4×4 k-point sampling within a supercell containing 54 Fe atoms (27 bcc Bravais cells) and 1 carbon atom located in the octahedral interstice of the bcc Bravais cell at the center of the supercell. For comparison, the same calculations were performed on the same supercell but without the carbon atom.

For both specimens, the measured carbon contents are higher than the theoretical values. This may come from two factors: residual ethanol left from etching and rinsing during specimen preparation, and carbon contamination inside the EPMA column during the WDS measurements. Since the experimental conditions for the non-field- and field-treated specimens were strictly controlled to be identical, the carbon increments resulting from these two factors can be regarded as the same for the two specimens. Thus, the carbon content difference between the two specimens should be attributed to the magnetic field, as indirectly confirmed by the hardness tests (Table 4.3). Under the 12T magnetic field, the hardness of ferrite increases from HV0.025 88.1 to 102.1, i.e. by 15.9%, reflecting the solid solution strengthening by the excess carbon atoms in ferrite. In this regard, the field-induced increase in the amount of carbon-depleted ferrite is balanced by the elevation of its carbon content, so as to maintain the fixed carbon content of the alloy.
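A simple carbon mass balance, using the measured values of Tables 4.1 and 4.2 and assuming that area fractions approximate volume fractions, illustrates this repartition:

```python
# Carbon mass balance at fixed overall composition: C0 = f_a*C_a + (1 - f_a)*C_rest
# Values (wt% C, fractions) from Tables 4.1 and 4.2 of this work
C0 = 0.36

def carbon_in_rest(f_ferrite, C_ferrite):
    """Carbon content of the remaining (pearlitic) constituent."""
    return (C0 - f_ferrite * C_ferrite) / (1.0 - f_ferrite)

print(carbon_in_rest(0.57, 0.14))   # 0 T:  ~0.65 wt% C in the non-ferrite constituent
print(carbon_in_rest(0.65, 0.19))   # 12 T: ~0.68 wt% C
```

The carbon content of the remaining constituent stays nearly unchanged, confirming that the larger ferrite fraction is compatible with the fixed overall composition once the ferrite carbon content rises.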
Results

As summarized in Tables 4.4 and 4.5, the first (closest) Fe neighbors of the carbon atom carry a reduced moment (1.615 µB/atom) and the second neighbors a slightly enhanced one (2.178 µB/atom); starting from the third neighbors, the magnetic moments of the Fe atoms resume the value of the carbon-free cell (2.151 µB/atom). As a consequence of the modified magnetic moments, the magnetic interaction between these atoms is changed.

Discussion

It is known that when a magnetic field is applied, the moments of the Fe atoms tend to align along the field direction. Each Fe or carbon atom can be regarded as a magnetic dipole, and the interaction energy $E_D$ between two magnetic dipoles can be calculated by Eq. (3.1), as discussed in Chapter 3. The interaction energy $E_D$ can be decomposed into two opposite contributions: the one giving rise to magnetization is negative, whereas the one resulting in demagnetization is positive. Clearly, the atomic configuration that enhances the magnetization contribution is preferred by the system.

The total magnetic interaction energy of the atom cluster is given by integrating the dipolar interaction over all atom pairs in the cluster, under the consideration that, starting from the third neighbors, the Fe moments in the carbon-containing cell are the same as those of their counterparts in the carbon-free cell. Accordingly, the magnetic interaction energies over the atom cluster without and with an interstitial carbon atom were calculated. The magnetic interactions in the two cases are both positive, indicating that the demagnetization contribution outweighs that of magnetization. However, when a carbon atom occupies the octahedral interstice, the interaction energy is decreased, i.e. the system becomes energetically more stable. This means that the magnetic field favors the solution of carbon atoms in bcc Fe by reducing the demagnetization energy. In this way, the carbon content in ferrite is increased when a magnetic field is applied during the austenite to ferrite transformation.
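The bookkeeping of such a pairwise summation can be sketched for a toy cluster restricted to the first and second neighbor shells of one octahedral interstice (geometry assumed from Figure 4.2, moments from Tables 4.4 and 4.5, all aligned with the field). This truncated sum only illustrates the procedure; the result reported in this work comes from summing over the full 54-atom supercell, so the sign of the carbon-induced change in this small cluster should not be over-interpreted.

```python
import numpy as np
from itertools import combinations

MU0 = 4 * np.pi * 1e-7          # vacuum permeability, T*m/A
MU_B = 9.274e-24                # Bohr magneton, A*m^2

def pair_energy(m1, m2, r12):
    """Eq. (3.1) for two moments both aligned along the field (z) axis;
    m1, m2 are signed magnitudes in A*m^2, r12 the joining vector in m."""
    r = np.linalg.norm(r12)
    ez = (r12 / r)[2]
    return -MU0 / (4 * np.pi * r**3) * m1 * m2 * (3 * ez**2 - 1)

def cluster_energy(a, m_first, m_second, m_center=None):
    """Dipolar energy summed over one octahedral-interstice cluster:
    two first neighbours along the flattened <001> (z) axis and four
    second neighbours in the mid-plane (moments given in Bohr magnetons)."""
    sites = [((0.0, 0.0,  a / 2), m_first), ((0.0, 0.0, -a / 2), m_first),
             (( a / 2,  a / 2, 0.0), m_second), (( a / 2, -a / 2, 0.0), m_second),
             ((-a / 2,  a / 2, 0.0), m_second), ((-a / 2, -a / 2, 0.0), m_second)]
    if m_center is not None:                      # interstitial carbon
        sites.append(((0.0, 0.0, 0.0), m_center))
    return sum(pair_energy(m1 * MU_B, m2 * MU_B, np.array(p2) - np.array(p1))
               for (p1, m1), (p2, m2) in combinations(sites, 2))

# Moments (Bohr magnetons) and lattice constants (m) from Tables 4.4 and 4.5
E_no_C   = cluster_energy(2.81199e-10, 2.151, 2.151)
E_with_C = cluster_energy(2.84018e-10, 1.615, 2.178, m_center=-0.119)
print(E_no_C, E_with_C)   # both positive: demagnetization dominates in this shell
```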
Summary

The field-induced carbon content increase in ferrite has been experimentally demonstrated by WDS-EPMA for the first time. According to the ab-initio calculations, the modified Fe magnetic moments originate from the carbon solution. When the magnetic moments of the Fe atoms are aligned by an external magnetic field, the demagnetization energy due to the atomic dipolar magnetic interaction is reduced by the carbon solution. This is the physical mechanism underlying the field-enhanced carbon solution in ferrite.

The underlying mechanism was analyzed as a function of magnetic field intensity and carbon content.

Results

The inverse pole figures of the proeutectoid ferrite in the Fe-0.12C alloy treated without and with the 12T magnetic field are shown in Figure 5.1; the sample coordinate system is shown in Figure 5.2. No obvious preferential orientation of ferrite appears in the Fe-0.12C alloy. The inverse pole figures of ferrite in the Fe-0.36C alloy treated without and with the magnetic field are shown in Figure 5.3 (same sample coordinate system as in Figure 5.2). In the Fe-0.36C alloy, an enhancement of the <001> fiber component along the ND (transverse field direction) appears in the field-treated specimens. Though this enhancement is not obvious in the 8T specimen, it becomes much stronger when the field is increased to 12T. Based on the above results, the field-induced ferrite texture is related to both the field intensity and the carbon content of the alloy.

Discussion

It is known that when carbon atoms dissolve into ferrite, they preferentially occupy the octahedral interstices, which are flattened in the <001> direction. As the atomic spacing between the two summit Fe atoms of the octahedral interstice along <001> is smaller than the diameter of the carbon atom, the dissolved carbon generates lattice distortion and hence distortion energy [Zhang et al., grain boundary characteristics]. Since, under the magnetic field, Fe atoms attract each other along the field direction and repel each other along the transverse field direction (the atomic dipolar interaction), the lattice distortion can be reduced by the increased atom spacing in the transverse field direction. If the distorted <001> direction lies along the transverse field direction, the nucleation and growth of such ferrite grains are energetically favored by the magnetic field. In this way, the <001> fiber component along the transverse field direction is enhanced. Obviously, this field effect is strongly related to the degree of lattice distortion of the ferrite: the more the crystal lattice is distorted, the larger the distortion energy, and the greater the need to reduce it by favoring the growth of grains with their distorted <001> direction parallel to the transverse field direction. Consequently, the field effect appears stronger and the intensity of the <001> fiber component along the transverse field direction is enhanced.

The degree of lattice distortion is determined by two factors: the thermal expansion of the lattice, which is temperature dependent, and the amount of supersaturated carbon in the ferrite when the transformation is out of equilibrium. For the Fe-0.36C alloy, the austenite to ferrite transformation happens at a relatively lower temperature than for the Fe-0.12C alloy. Carbon diffusion is restrained, so more supersaturated carbon atoms remain in the formed ferrite; moreover, the thermal expansion that the ferrite lattice can reach is also reduced. A higher degree of lattice distortion can therefore be expected in the Fe-0.36C alloy than in the Fe-0.12C alloy. As a result, the effect of the magnetic field on the texture of the proeutectoid ferrite is enhanced in the Fe-0.36C alloy, resulting in a visible enhancement of the <001> fiber texture along the transverse field direction.

Orientation relationships of pearlite under the magnetic field

Introduction

Pearlitic transformation, regarded as the most classical solid-solid state transformation, has been widely studied under the effect of magnetic fields. Investigations of the field influence on the transformation kinetics, on the mechanism of pearlite formation and on the structure evolution have been fruitful, and it has been proved that the magnetic field has considerable effects on the pearlitic transformation.
It elevates the transformation temperature [Zhang et al., calculation; Garcin et al., transformation temperature; Zhang et al., eutectoid point; Zhang et al., thermodynamic and kinetic], increases the eutectoid carbon composition [Zhang et al., eutectoid point] and modifies the morphology of pearlite [Zhang et al., hypereutectoid steel]. Nowadays, the mechanism of the pearlitic transformation is also widely investigated from the crystallographic point of view [Zhang et al., new insights; Zhang et al., indirect two-trace method]. Several orientation relationships (ORs) between pearlitic ferrite and cementite have been consistently reported, such as the Isaichev OR [Isaichev], the Bagaryatsky OR [Bagaryatsky] and the Pitsch-Petch OR [Petch; Pitsch]. Recently, new ORs have been confirmed by Zhang et al. [Zhang et al., new insights]. However, the effect of the magnetic field on the ORs of pearlite has been little studied. Therefore, in this section, the ORs of pearlite in the three high-purity Fe-C alloys treated without and with the magnetic field are examined by means of SEM/EBSD, and the effect of the magnetic field on the pearlitic ORs and their occurrence frequencies is analyzed.

Results

The types of ORs found in pearlite in this work are summarized in Table 5.1. Three ferrite/cementite ORs were detected: the Isaichev OR, denoted IS, and two near Pitsch-Petch ORs, denoted P-P1 and P-P2, respectively [Zhang et al., new insights]. The types of the appearing ORs are the same in all three alloys, without and with the application of the magnetic field. It is noticed that, without the magnetic field, the IS OR is the most favored OR in the low-carbon steel (Fe-0.12C alloy), whereas with increasing carbon content the P-P1 OR increases in occurrence (Fe-0.36C alloy: 46.7%; Fe-1.1C alloy: 60.0%), mainly at the expense of the P-P2 OR.
For all the alloys, the occurrence of the P-P2 OR is much lower than that of the other two ORs. When the magnetic field is applied, the occurrence frequencies of the appearing ORs are modified, and the effect of the field varies with the carbon content of the alloys. For the very low-carbon alloy (Fe-0.12C), the effect of the field is rather limited, indicated by a slight decrease of the IS OR and small increases of both P-P1 and P-P2. As the carbon content increases, the effect of the magnetic field becomes pronounced: the field consistently increases the occurrence of the P-P2 OR, and this effect is especially noticeable in the Fe-1.1C alloy.

Discussion

It is known that when fcc austenite decomposes into the dual-phase pearlite consisting of bcc ferrite and orthorhombic cementite, there exist two transformation barriers: one is the transformation strain energy caused by the lattice misfit at the austenite/ferrite [Garcin et al., kinetic effects] and austenite/cementite interfaces [Zhou et al.]; the other is the interfacial energy of the ferrite/cementite interface, which depends on the atomic misfit at the ferrite/cementite connecting plane [Zhang et al., Widmanstätten cementite]. To minimize the total transformation energy barrier, special ORs between the ferrite and the cementite in pearlite are required that ensure a low-misfit interface between the parent and the product phases and a low-misfit habit plane connecting the product phases. The three ORs obtained in the present work (IS, P-P1 and P-P2) all share a common feature of close-packed plane parallelism between the ferrite and the cementite, namely {103}_C // {101}_F. Since the interplanar spacings of {103}_C and {101}_F are both close to that of the close-packed plane {111}_A of austenite [Zhang et al., new insights], the transformation strains at the austenite/ferrite and austenite/cementite interfaces are minimized, which leads to the minimum transformation strain energy. In addition, it has been shown [Zhang et al., new insights] that the atoms of the pearlitic ferrite and the pearlitic cementite are well matched at the connecting interface, which in general guarantees a small interfacial energy under the three ORs. In view of the strain and the interfacial atomic mismatch, the IS and P-P1 ORs have the lowest formation energy barriers, so they are the more energetically favored and appear in large numbers. For the P-P2 OR, however, there is a 3.5° deviation from plane parallelism between the pearlitic ferrite and the pearlitic cementite at the connecting interface; its occurrence is therefore much lower than that of the other two. It has also been found that different ORs correspond to different nucleation situations.
Previous work [Zhang et al., new insights] has suggested that the IS OR can occur with either pearlitic ferrite or cementite nucleating first; the P-P1 OR occurs when pearlitic ferrite and cementite nucleate simultaneously; and the P-P2 OR appears when pearlitic ferrite nucleates prior to pearlitic cementite. From this point of view, the frequency of the P-P2 OR is expected to be low in high-carbon alloys, as it is difficult to provide large low-carbon regions in which ferrite can nucleate first. This is consistent with the results of this study: in a hypereutectoid steel with relatively high carbon content, the austenite possesses a high carbon content when the austenite to pearlite transformation takes place, a compositional situation that favors the formation of cementite or the simultaneous formation of cementite and ferrite. This is in good accordance with the present observation: as displayed in Table 5.2, the P-P1 and IS ORs account for the majority of the occurrences of the three ORs when the magnetic field is not applied. However, when the magnetic field is applied, the occurrence of the P-P2 OR is increased. As mentioned above, the magnetic field promotes the formation of high-magnetization phases [Zhang et al., hypereutectoid steel]. In the pearlitic transformation temperature range, ferrite is ferromagnetic with a higher magnetization, whereas cementite is paramagnetic; the field therefore promotes the nucleation of pearlitic ferrite, increases the chance that ferrite nucleates first, and hence raises the occurrence of the P-P2 OR.

Chapter 6 Conclusion and Perspectives

Conclusion

In this dissertation, the effect of the magnetic field on diffusional phase transformations has been thoroughly investigated in high-purity Fe-C alloys, both theoretically and experimentally. Three high-purity Fe-C alloys with different carbon contents, from both the hypo- and hyper-eutectoid ranges, were deliberately prepared, and the effects of the magnetic field on the microstructural features and the crystallographic orientation characteristics of the transformed microstructures were analyzed. The main achievements and conclusions are as follows.

Magnetic field induces new morphological features in the microstructures.

(1) Due to the magnetic dipolar interaction, the magnetic field induces elongated and aligned microstructures in hypoeutectoid alloys. This field effect is carbon-content dependent. The magnetic dipolar interaction works on two scales, atomic and micro: on the atomic scale it affects the grain growth process and produces the elongation; on the micro scale it influences the nucleation process and introduces the alignment.

(2) The magnetic field promotes the formation of ferrite owing to its thermodynamic influence on increasing the eutectoid carbon composition. This field effect becomes more pronounced with increasing carbon content. The magnetic field inhibits the formation of Widmanstätten ferrite by introducing an additional driving force.

(3) The magnetic field promotes the formation of the abnormal structure by increasing the driving force of the transformation from carbon-depleted austenite to ferrite. There is no specific OR between the ferrite and the cementite in the abnormal structure.
(4) The magnetic field enhances the spheroidization of pearlite through the combined effect of enhanced carbon diffusion, resulting from the elevation of the transformation temperature, and the increased relative ferrite/cementite interfacial energy arising from the magnetization difference between boundary areas and grain interiors.

Magnetic field enhances carbon solution in ferrite.

Carbon solution in ferrite modifies the neighboring Fe magnetic moments; this decreases the demagnetization energy caused by the atomic dipolar magnetic interaction and makes the system more stable under the magnetic field. The field-induced carbon content enhancement offers a new possibility for material strengthening.

Magnetic field modifies the crystal orientation distribution of ferrite and affects the orientation relationships of pearlite.

(1) The magnetic field favors the nucleation and growth of ferrite grains with their distorted <001> direction parallel to the transverse field direction, owing to the atomic-scale magnetic dipolar interaction; this leads to the enhancement of the <001> fiber component in the transverse field direction. This field effect is carbon-content dependent: for the low-carbon alloy (Fe-0.12C), it is greatly reduced because of the lower carbon supersaturation in ferrite and the higher formation temperature, whereas for the Fe-0.36C alloy a noticeable enhancement of the <001> fiber component along the transverse field direction is detected under the 12T magnetic field. Meanwhile, this field effect is also strongly related to the field intensity, as the enhancement of the <001> fiber component along the transverse field direction becomes more pronounced with increasing magnetic field intensity.

(2) The magnetic field hardly changes the types of ORs appearing in pearlite. However, it promotes the nucleation of the high-magnetization phase, the pearlitic ferrite, and thus increases the occurrence of the P-P2 OR, which corresponds to the situation in which pearlitic ferrite nucleates first. This enhancement of the occurrence of the P-P2 OR is more pronounced in high-carbon alloys.

Perspectives

To date, the application of magnetic fields has become popular in many areas of materials science. As magnetic field theory has matured, many field-induced phenomena have been discovered and well explained. However, a better understanding of the behavior of materials under high magnetic fields is still needed, and more physical phenomena await discovery. Moreover, magnetic data, such as magnetic moments, Curie temperatures, magnetic anisotropy and magnetostriction, remain to be completed, and new devices need to be developed for the corresponding experimental measurements.
Abstract

This work systematically investigates, both experimentally and theoretically, the microstructural morphology and the crystallographic characteristics developed during diffusional solid-state phase transformations in three high-purity Fe-C alloys (Fe-0.12C, Fe-0.36C and Fe-1.1C) treated under a high magnetic field.

It is found that, owing to the magnetic dipolar interaction on the micro and atomic scales, the high magnetic field induces microstructures aligned and elongated along the field direction in hypoeutectoid steels, with features that depend on the carbon content of the alloy: in the low-carbon Fe-0.12C alloy the pearlite colonies tend to be aligned and elongated along the field direction, whereas in the Fe-0.36C alloy it is the proeutectoid ferrite that shows this morphology. Through its influence on the phase equilibrium, the field raises the eutectoid carbon content, so that the amount of ferrite in the transformation products is markedly increased, and this promotion of the ferrite transformation strengthens with increasing carbon content. In addition, the field markedly inhibits the formation of proeutectoid Widmanstätten ferrite, because the extra driving force it provides reduces the need to overcome the transformation barrier by forming low-energy interfaces. Furthermore, by raising the driving force of the ferrite transformation, the field promotes the decomposition of the carbon-depleted austenite around the grain-boundary cementite and hence the formation of the abnormal structure in the Fe-1.1C alloy. Meanwhile, since the field elevates the pearlitic transformation temperature and thus accelerates carbon diffusion, and since it raises the ferrite/cementite interfacial energy and thereby the driving force of spheroidization, the spheroidization tendency of pearlite in the Fe-1.1C alloy is significantly enhanced under the field. Three orientation relationships, IS, P-P1 and P-P2, are found between pearlitic ferrite and cementite. The field promotes the nucleation of the high-magnetization phase, ferrite, increasing the probability that ferrite nucleates first and hence the occurrence of the corresponding P-P2 OR; this effect is most evident in the Fe-1.1C alloy. The field, however, has no marked influence on the types of the pearlitic ORs that appear.

Keywords: high magnetic field, phase transformation, magnetic dipolar interaction, texture, orientation relationship

Chapter 1 Literature Review

The absolute value of the Gibbs free energy difference between the two phases is increased. This leads to new phase equilibria and phase stabilities under the magnetic field, through modification of the equilibrium temperature and the equilibrium composition, as shown in Figure 1.2.

Figure 1.3 A pair of magnetic dipoles in a magnetic field. FD: magnetic field direction.

Figure 1.4 A Fe-0.52C-0.24Si-0.84Mn-1.76Ni-1.27Cr-0.35Mo-0.13V (mass) alloy austenitized at 1000°C and transformed isothermally to bainite, followed by helium quenching to ambient temperature. (a) 0T and (b) 10T [23].

Figure 1.5 Fe-C phase diagram associated with the α/(γ+α), γ/α and γ/Fe3C boundaries.

Figure 1.6 Eutectoid carbon content and temperature as a function of the magnetic field induction [49].

Figure 1.7 Microstructures of Fe-0.1C alloy (a) and Fe-0.6C alloy (b) subjected to the α/γ inverse transformation under an 8T magnetic field [50]. (The dark spots are the γ phase, transformed to fresh martensite on quenching, while the light regions represent the annealed α phase, the initial martensite. The arrow indicates the direction of the magnetic field.)

Figure 1.8 SEM micrographs of Fe-0.1C alloy under a 12T magnetic field: (a) a ferrite particle nucleated inside an austenite grain, (b) ferrite particles nucleated at an austenite grain boundary.

Figure 1.9 Microstructures of Fe-0.4C alloy during continuous slow cooling through the γ to α transformation under a 10T magnetic field (the field direction is vertical) [36].

Figure 1.10 The degree of elongation of ferrite grains as a function of isothermal holding time and temperature in a 10T magnetic field in Fe-0.4C alloy [Hao et al.].

Figure 1.11 Microstructures after heating at 880°C for 33 min and cooling at 10°C/min with magnetic fields of (a) 6T, (b) 10T, (c) 14T, and (d) cooling at 46°C/min with a 14T magnetic field (the magnetic field direction and the rolling direction are vertical in the pictures) [56].
... and preferred crystallographic orientations were formed in the microstructure [Ohtsuka et al.; Shimotomai et al., aligned two-phase structures; Maruta et al.; Shimotomai et al.; Zhang et al., thermodynamic and kinetic; Zhang et al., new microstructural features; Zhang et al., rapid annealing].

Figure 2.1 Photo of the ingot of the Fe-C alloy.

Figure 2.2 Photo (a) and schema (b) of the magnetic field heat treatment furnace.

Figure 2.3 Heat treatment parameters for the Fe-C alloys: (a) Fe-0.12C, (b) Fe-0.36C and (c) Fe-1.1C. The respective Ae3 and Aecm temperatures are calculated with Thermo-Calc; the Ae1 temperatures are taken as the commonly accepted theoretical value (1000 K).

For the final heat treatments, small specimens were cut out from the center of the plates after the pre-heat treatment. The dimensions of the specimens are 7 mm × 7 mm × 1 mm for the Fe-0.12C and Fe-0.36C alloys, and 10 mm × 10 mm × 3 mm for the Fe-1.1C alloy, as illustrated in Figure 2.4.

Figure 2.4 Illustration of the specimens for the heat treatments (a) and their sizes: (b) Fe-0.12C and Fe-0.36C alloys; (c) Fe-1.1C alloy.

The transformed microstructures were examined with an OLYMPUS/BX61 microscope (the observation surface and the sample reference frame are shown in Figure 2.5). The phase fractions of the transformed microstructures and the lamellar spacing of pearlite were analyzed with analySIS and averaged over a certain number of analyzed areas; the details of the measurements are given in Table 2.3.

Figure 2.5 Schema of the observation area of the specimen.

A field emission gun SEM (Jeol JSM 6500F) with an EBSD acquisition camera and the Oxford-HKL Channel 5 software was used for the microscopic and crystallographic orientation analyses. The working voltage was set at 15 kV and the working distance was 15 mm. For the microscopic observations, the morphology of the pearlitic cementite was examined and the area of spherical pearlite was measured in 20 randomly selected areas. The crystallographic orientation analyses comprised texture analyses and individual orientation measurements. The orientation data were obtained by acquiring and indexing the electron back-scatter diffraction Kikuchi patterns; the orientations are represented by the three Euler angles (φ1, Φ, φ2) in Bunge notation. The crystal structure data of ferrite and cementite used for the EBSD orientation measurements are given in Table 2.4.

Figure 2.6 Photo (a) and schema (b) of the field emission gun SEM Jeol JSM 6500F.
The raw pattern in Figure 2.8(a) clearly displays the line details, and the recalculated pattern in (b) matches the raw pattern, demonstrating that our strategies ensure both the acquisition quality and the orientation determination quality.

Figure 2.7 Geometric relation between the electron beam and the specimen surface under the EBSD measurement conditions (the electron beam is elongated along the Y direction of the specimen coordinate system).

Figure 2.8 Kikuchi pattern of cementite (a) and the same pattern superposed with the recalculated pattern (b).

Figure 2.10 Supercell containing 54 Fe atoms and 1 carbon atom.

Figure 3.1 shows the microstructures of the Fe-0.12C alloy treated without and with the 12T magnetic field. The transformed microstructures are both composed of proeutectoid ferrite (white) and pearlite (dark). Without the field, most ferrite grains and pearlite colonies are equiaxed; though some have elongated shapes, their major axes are randomly oriented. With the application of the 12T magnetic field, the pearlite colonies are obviously elongated along the field direction and many proeutectoid ferrite grains are aligned in chains along the field direction. Some of the ferrite grains are also elongated, but with two elongation orientations: one along the field direction and the other perpendicular to it. Most of the elongated pearlite colonies are located between the ferrite chains.

Figure 3.1 Microstructures of Fe-0.12C alloy without (a) and with (b) a 12T magnetic field. The arrow indicates the magnetic field direction.

The microstructures of the Fe-0.36C alloy are shown in Figure 3.2. There are two kinds of proeutectoid ferrite with different morphologies: one is equiaxed, the other is acicular, known as Widmanstätten ferrite. Without the field, most of the proeutectoid ferrite is of the Widmanstätten type, stretching through the thickness of the specimen, and the proeutectoid ferrite and the pearlite colonies are basically randomly distributed; with the field, the proeutectoid ferrite grains are obviously elongated along the field direction.

Figure 3.2 Microstructures of Fe-0.36C alloy without (a) and with (b) a 12T magnetic field. The arrow indicates the magnetic field direction.

Figure 3.3 Schematic of the magnetic moments under the magnetic field.

Figure 3.4 Illustration of the dipolar interaction between magnetic dipoles.

Figure 3.5 Ferrite area fractions of Fe-0.12C and Fe-0.36C alloys treated without and with the 12T magnetic field.

Figure 3.6 Effects of the magnetic field on the Fe-C phase equilibrium [49].

Figure 3.7 Optical micrographs and the corresponding pole figures of ferrite of Fe-1.1C alloy treated (a) without and (b) with a 12T magnetic field (the abnormal structure is indicated by the arrow; the field direction is horizontal), and (c) the corresponding sample coordinate system.

Figure 3.8 SEM secondary electron micrographs of the transformed microstructures in Fe-1.1C alloy treated (a) without and (b) with a 12T magnetic field; the field direction is horizontal.
Figure 4.1 shows the microstructures of the specimens treated without and with the 12T magnetic field. The microstructures of both specimens are composed of proeutectoid ferrite (white) and pearlite (dark).

Figure 4.1 Microstructures of the Fe-0.36C alloy treated without (a) and with (b) a 12T magnetic field. The arrow indicates the magnetic field direction.
Figure 4.2 Octahedral interstice and its first and second neighboring Fe atoms in the bcc structure.

Figure 4.2 illustrates an atom cluster that contains one octahedral interstice and its first and second neighboring Fe atoms in the bcc structure. When a carbon atom is present, it occupies the octahedral interstice. Suppose that the magnetic moments of the Fe atoms and the carbon atom are parallel to the field direction under the applied magnetic field; the carbon atom displaces the two first-neighboring Fe atoms in opposite directions.

Figure 5.1 Inverse pole figures of ferrite in the Fe-0.12C alloy treated without (a) and with (b) a 12T magnetic field.
Figure 5.3 Inverse pole figures of the ferrite in the Fe-0.36C alloy: (a) 0T, (b) 8T and (c) 12T.

Table 2.1 Chemical composition of cast iron, %.
C       S       O       N       Fe
4.39    0.001   0.0025  0.0003  Bal.

Table 2.3 Information on the microstructural measurements.
Fe-C alloy   Measurement                                  No. or area of measurements
Fe-0.12C     Area percentage of proeutectoid ferrite      15 areas
Fe-0.36C     Area percentage of proeutectoid ferrite      15 areas
             Area percentage of Widmanstätten ferrite     whole cross section (7mm×1mm)
Fe-1.1C      Area percentage of abnormal structure        whole cross section (10mm×3mm)

Table 2.4 Crystal structure data of ferrite and cementite.
Phase      Crystal structure / lattice parameters   Space group (No.)   Atom   Wyckoff   x      y      z      Occupancy
Ferrite    bcc, a=0.28665 nm                        Im-3m (229)         Fe     2a        0      0      0      1
Cementite  orthorhombic, a=0.5090 nm,               Pnma (62)           C      4c        0.881  0.25   0.431  1
           b=0.6748 nm, c=0.4523 nm                                     Fe     4c        0.044  0.25   0.837  1
                                                                        Fe     8d        0.181  0.063  0.837  1

Table 3.1 Area fraction of Widmanstätten ferrite in the Fe-0.36C alloy treated without and with the magnetic field.
Magnetic field   Area fraction of Widmanstätten ferrite, %
0T               48.44
12T              24.36

Table 3.3 Area percentage of the spherical pearlite in the Fe-1.1C alloy treated without and with a 12T magnetic field.
Magnetic field   Area percentage of the spherical pearlite, %
0T               12.4
12T              27.1

The area percentages and the carbon contents of proeutectoid ferrite are given in Table 4.1 and Table 4.2, respectively. It is seen that the magnetic field increases the amount of proeutectoid ferrite by 14% and the carbon content in proeutectoid ferrite by 35.7%.

Table 4.1 Area percentages of proeutectoid ferrite in the Fe-0.36C alloy treated without and with the 12T magnetic field (number of measurements: 10).
Magnetic field   Area percentage of proeutectoid ferrite (standard deviation)   Increment
0T               57 (±1.1)%                                                     --
12T              65 (±1.2)%                                                     14%

Table 4.2 Carbon contents of proeutectoid ferrite in the Fe-0.36C alloy treated without and with the 12T magnetic field (number of measurements: 20).
Magnetic field   Carbon content of proeutectoid ferrite (standard deviation)   Increment
0T               0.14 (±0.017)%                                                --
12T              0.19 (±0.024)%                                                35.7%

Table 4.3 Mean hardness of proeutectoid ferrite in the Fe-0.36C alloy treated without and with the 12T magnetic field (number of measurements: 8).
Magnetic field   Mean hardness of proeutectoid ferrite, HV0.025 (standard deviation)   Increment
0T               88.1 (±2.71)                                                          --
12T              102.1 (±6.86)                                                         15.9%

Considering that the magnetic natures of Fe and carbon atoms are not the same, introducing a carbon atom into the bcc Fe structure may modify the magnetic interaction between local atoms. The magnetic moments of Fe and carbon atoms calculated from ab-initio simulations are summarized in Tables 4.4 and 4.5.

Table 4.4 Calculated magnetic moments carried by Fe atoms. The ab-initio calculations were carried out using a supercell consisting of 54 Fe atoms.
       Magnetic moments (µB/atom)   Lattice constant (Å)
Fe     2.151                        2.81199

Table 4.5 Calculated magnetic moments carried by Fe and carbon atoms. The ab-initio calculations were carried out using a supercell consisting of 54 Fe atoms with one carbon atom in the octahedral interstice of the centre bcc Bravais cell.
                         Magnetic moments (µB/atom)   Lattice constant (Å)
Fe      First neighbor   1.615                        2.84018
        Second neighbor  2.178
        Others           2.151
Carbon                   -0.119

It is seen that without the carbon atom in the bcc Fe structure, the magnetic moments of the Fe atoms are identical. When a carbon atom is introduced, the magnetic moments of the Fe atoms vary with the neighborhood order to the carbon atom. The first (closest) Fe neighbors of the carbon atom possess a smaller moment (1.615 µB/atom) with respect to that (2.151 µB/atom) of the Fe atoms in the carbon-free cell, whereas the second neighbors carry a slightly larger moment (2.178 µB/atom).

Table 5.1 Orientation relationships in pearlite.
          IS OR                 P-P1 OR                P-P2 OR
Planes    (103)C // (101)F      (103)C // (110)F       (103)C // (011)F
Dirs      [010]C // [111]F      [010]C // [-113]F      [311]C // [-111]F
Habit     (101)C // (-112)F     (001)C // (125)F       (001)C 3.5° from (-125)F

Although no change in the type of the appearing ORs is obtained, the corresponding occurrence frequency of each OR is evidently varied by the carbon content and the application of the magnetic field, as illustrated in Table 5.2.

Table 5.2 Occurrence frequency of the appearing ORs in the different Fe-C alloys.
Carbon content   Magnetic field   IS OR    P-P1 OR   P-P2 OR
Fe-0.12C         0T               43.8%    37.5%     18.8%
                 12T              40.0%    40.0%     20.0%
Fe-0.36C         0T               46.7%    46.7%     6.7%
                 12T              46.7%    40.0%     13.3%
Fe-1.1C          0T               33.3%    60.0%     6.7%
                 12T              26.7%    53.3%     20.0%
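As a quick consistency check on Tables 4.4 and 4.5 above, one can estimate the average moment of the carbon-doped supercell from the shell-resolved values. In the minimal sketch below, the neighbor counts (two first-neighbor and four second-neighbor Fe atoms around a bcc octahedral interstice) are an assumption from the bcc geometry, not values quoted in the text:

```python
# Shell-resolved moments from Tables 4.4/4.5 (Bohr magnetons per atom)
mu_pure_fe = 2.151               # Fe in the carbon-free 54-atom cell
mu_first, n_first = 1.615, 2     # assumed: 2 Fe first neighbors of the octahedral C
mu_second, n_second = 2.178, 4   # assumed: 4 Fe second neighbors
mu_carbon = -0.119

n_fe = 54
n_far = n_fe - n_first - n_second   # remaining Fe atoms keep the bulk-like value

total_doped = (n_first * mu_first + n_second * mu_second
               + n_far * mu_pure_fe + mu_carbon)
print(f"C-doped cell:  {total_doped:.2f} mu_B total, "
      f"{total_doped / (n_fe + 1):.3f} mu_B/atom")
print(f"Pure Fe cell:  {n_fe * mu_pure_fe:.2f} mu_B total, {mu_pure_fe:.3f} mu_B/atom")
```

Under these assumptions the carbon atom lowers the cell-averaged moment only slightly, consistent with the local character of the perturbation described in the text.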
Acknowledgements

This work is financially supported by the National Natural Science Foundation of China (Grant No. 50971034, 50901015 and 50911130365), the Program for Changjiang Scholars and Innovative Research Team in University (Grant No. IRT0713), the "111" Project (Grant No. B07015), the CNRS of France (PICS No. 4164) and the joint Chinese-French project OPTIMAG (No. ANR-09-BLAN-0382). I would like to give my sincere thanks to these institutions. I also gratefully acknowledge the China Scholarship Council for providing the scholarship to support my PhD study in France. This work was completed at LEM3 (former LETAM, University of Lorraine, France) and the Key Laboratory for Anisotropy and Texture of Materials (Northeastern University, China). I had the honor to work with numerous colleagues in the two labs and I would like to give my heartfelt thanks for their kind help. I would like to thank the reviewers, Prof. Y. Fautrelle and Prof. Z. M. Ren, for taking time out of their busy schedules to review my dissertation and provide constructive suggestions and comments. I would like to give my special thanks to my supervisors, Prof. Claude Esling and Prof. Yudong Zhang at the University of Lorraine, and Prof. Liang Zuo and Prof. Xiang Zhao at Northeastern University, not only for their support, ideas, guidance and organizational help during the last three years, but also for making me a better person and scientist by setting high standards and good examples. I would like to thank all my group members, who treated me with dignity and respect. Last but not least, I want to thank my parents, friends and family, especially my husband Mr. Lijun Zhang, for his constant support, understanding and encouragement.

As the growth of the pearlite is restricted between the ferrite chains, the elongated pearlite colonies along the field direction are finally obtained. As for the Fe-0.36C alloy, the proeutectoid ferrite transformation temperature is much lower than that of the Fe-0.12C alloy due to the increased carbon content. Though the ferrite transformation temperature is still above T_C, the difference between them is reduced. Thus, the two-scale magnetic dipolar interaction operates and affects the nucleation and growth of the proeutectoid ferrite, as most proeutectoid ferrite is transformed around and below T_C. However, as the carbon content is high, the relative amount of proeutectoid ferrite is low and the interspacing between ferrite grains is large. As the magnetic dipolar interaction energy E_D is proportional to r^-3 (seen from Eq. 3.1), the micro-scale dipolar interaction is reduced. As a result, the transformed ferrite is mainly elongated in the field direction, and no obvious alignment along the field direction is obtained compared with the case of the Fe-0.12C alloy.

Summary
The magnetic field induces elongated and aligned microstructures through the effect of the magnetic dipolar interaction. During the austenitic decomposition process, the magnetic dipolar interaction works on two scales: the atomic scale and the micro scale. On the atomic scale, the grain growth process is influenced and the nucleated grains become elongated in the field direction. On the micro scale, the nucleation of ferrite along the field direction is favored and the alignment is induced. For the low-carbon Fe-0.12C alloy, the pearlite colonies elongated along the field direction are induced by the chains of elongated ferrite grains in the field direction, owing to the effects of the atomic- and micro-scale magnetic dipolar interaction on the nucleation and growth of the ferrite around and below T_C.

Summary
The magnetic field promotes the formation of the abnormal structure through its influence on increasing the driving force of the carbon-depleted austenite to ferrite transformation. No specific OR between the cementite and the ferrite in the abnormal structure is found in the present study. The magnetic field enhances the spheroidization of pearlite through enhanced carbon diffusion resulting from the elevation of the transformation temperature and from the increased relative ferrite/cementite interface energy due to the magnetization difference between boundary areas and grain interiors.
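The two-scale dipolar argument summarized above can be made concrete with the classical dipole-dipole energy, E = (μ0/4πr³)[m1·m2 − 3(m1·r̂)(m2·r̂)]. The sketch below is illustrative only (it is not claimed to reproduce the thesis's Eq. 3.1, and the moment and spacing values are arbitrary); it shows the attraction for moments aligned head-to-tail along the field and the repulsion for side-by-side moments, which is the driving force for chain alignment:

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability (T*m/A)
MU_B = 9.274e-24          # Bohr magneton (J/T)

def dipole_energy(m1, m2, r_vec):
    """Classical dipole-dipole interaction energy (J); note the 1/r^3 scaling."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0 / (4 * np.pi * r**3) * (np.dot(m1, m2)
                                       - 3 * np.dot(m1, r_hat) * np.dot(m2, r_hat))

# Two moments of 2.2 mu_B each, both aligned with the field (z axis)
m = np.array([0.0, 0.0, 2.2 * MU_B])
d = 0.3e-9   # arbitrary 0.3 nm separation

print(dipole_energy(m, m, np.array([0.0, 0.0, d])))  # head-to-tail: negative (attractive)
print(dipole_energy(m, m, np.array([d, 0.0, 0.0])))  # side-by-side: positive (repulsive)
```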
The field-induced texture has been noticed and studied during the magnetic annealing of Fe-based alloys. Martikainen et al. [START_REF] Martikainen | Observations on the effect of magnetic field on the recrystallization in ferrite[END_REF] reported an increase of the <001> texture component along the field direction and attributed this texture formation to the anisotropic magnetization along different crystallographic directions, as the <100> direction is the easiest magnetization direction. Later, Zhang et al. [START_REF] Zhang | Grain boundary characteristics and texture formation in a medium carbon steel during its austenitic decomposition in a high magnetic field[END_REF] studied the effect of the magnetic field on the texture of ferrite during the austenitic decomposition process and found a field-induced enhancement of the <001> component along the transverse field direction in a medium carbon steel (0.49C). However, a similar texture component was not obtained in a 42CrMo steel [START_REF] Zhang | New microstructural features occurring during transformation from austenite to ferrite under the kinetic influence of magnetic field in a medium carbon steel[END_REF]. This reveals that the mechanism by which the magnetic field influences texture formation works differently in different alloys, and this field effect remains to be addressed. In this chapter, the field-induced texture of ferrite was therefore studied in two hypoeutectoid alloys (Fe-0.12C and Fe-0.36C), together with the influential mechanism of the magnetic field.

In addition, it is clear that the intensity of the <001> fiber component along the transverse field direction is directly related to the field intensity. For the present Fe-0.36C alloy, a visible enhancement of the <001> fiber texture requires a high field intensity, i.e., 12T.

Summary
Due to the atomic-scale magnetic dipolar interaction, the magnetic field favors the nucleation and growth of the ferrite grains with their distorted <001> direction parallel to the transverse field direction, and thus induces the enhancement of the <001> fiber component in the transverse field direction. This field effect is carbon-content dependent: for the low-carbon alloy (Fe-0.12C), it is greatly reduced due to the reduced carbon oversaturation in ferrite and the elevated formation temperature. At the same time, this field effect is strongly related to the field intensity, as the enhancement of the <001> fiber component along the transverse field direction becomes more pronounced with increasing magnetic field intensity. For the Fe-0.36C alloy, a noticeable enhancement of the <001> fiber component along the transverse field direction is detected under the 12T magnetic field.

Under the magnetic field, ferrite, with its higher magnetization, is energetically favored over austenite with lower magnetization. The formation of ferrite is thus promoted by the magnetic field. Therefore, some low carbon concentration areas in the austenite would transform to ferrite with the help of the magnetic field before the formation of cementite. Consequently, the occurrence of the P-P2 OR, which corresponds to the case where the pearlitic ferrite nucleates first, is increased by the magnetic field.

Summary
To minimize the transformation barriers, pearlitic ferrite and cementite follow specific ORs that guarantee small transformation strains and low atomic misfits during the pearlite formation and growth process. Though the magnetic field is able to offer an additional driving force for this transformation and considerably raises the transformation temperature, it can hardly overcome the transformation strain energy barrier and the interfacial energy barrier to produce new ORs between the ferrite and the cementite in pearlite. As a result, the magnetic field has little influence on the type of the ORs that appear. As the magnetic field favors the nucleation of the high-magnetization phase, pearlitic ferrite, the occurrence of the P-P2 OR, which corresponds to the situation where pearlitic ferrite nucleates first, is increased by the magnetic field. This enhancement of the occurrence of the P-P2 OR by the magnetic field is more pronounced in high carbon content alloys, i.e. the Fe-1.1C alloy.
150,662
[ "773217" ]
[ "178323" ]
01749243
en
[ "spi" ]
2024/03/05 22:32:07
2012
https://hal.univ-lorraine.fr/tel-01749243/file/DDOC_T_2012_0173_CHENTOUF.pdf
Iron-rich intermetallic alloys of the iron-aluminium system, Fe3Al, have very attractive characteristics for high-temperature mechanical applications. Like most intermetallic compounds, they possess high mechanical strength, good oxidation resistance and low density. However, the main factors limiting their applications are their brittleness at room temperature and a sharp drop in strength at temperatures above 550°C. An interesting aspect of these alloys is their behaviour towards transition metals. Certain elements, such as Ti, can increase the stability of the D03 phase by shifting the D03/B2 transition towards higher temperatures. The situation is less clear in the case of Zr: despite the beneficial effect of Zr doping on grain boundary cohesion and ductility, no experimental data exist concerning its effect on the stability of the D03 structure of the Fe3Al compound. This thesis aims to study the effect of these two transition metals, Ti and Zr, on the properties of the D03-Fe3Al intermetallic compound using ab initio pseudopotential calculations based on Density Functional Theory (DFT). Two main themes are addressed: (i) the understanding of the role of these two transition metals in terms of the stability of the D03 phase in the light of their site preference in the D03-Fe3Al structure; (ii) the behaviour of the Ti and Zr transition metals in the Σ5 (310)[001] grain boundary and their effect on the structural stability of this interface. An important issue when studying these aspects is to take into account the effect of temperature. This requires a molecular dynamics treatment of the atoms in the supercell. The technique of ab initio molecular dynamics (AIMD) solves these problems by combining electronic structure calculations with finite-temperature dynamics. Thus, our study was conducted both using static ab initio calculations at 0 K and by taking into account the effect of temperature up to 1100 K (Ab Initio Molecular Dynamics).

To my parents

Acknowledgment
I would like to express my gratitude to the many people who have supported me as I completed my graduate studies and dissertation. First, I would like to sincerely thank my advisor, Pr. Thierry GROSDIDIER, for providing continuous support and instructive guidance throughout my doctoral studies. I feel lucky to have an adviser like him, with great patience and understanding of the difficult conditions that I had during this thesis. His effort, encouragement and advice are greatly appreciated.
I gratefully acknowledge Pr. Hafid AOURAG, who introduced me to research; I will always be highly grateful to him for his commitment to my academic success. I also would like to thank my co-director, Dr. Jean-Marc RAULOT, for the many interesting discussions we had, and for his ability to inspire thoughts and suggestions on a large variety of topics, ranging from the fundamentals of theoretical condensed matter physics to applications and computational issues.

Introduction

Fe3Al-based intermetallic compounds are promising materials for structural applications at high temperature. Their advantageous properties originate from their low density and their high corrosion resistance in oxidizing and sulfidizing environments. At ambient and intermediate temperatures, Fe3Al shows higher strength than other single-phase iron alloys due to its ordered D03 superlattice structure. However, at about 550 °C, disordering of the D03 structure as well as a sharp drop in the flow stresses occur in the binary stoichiometric Fe3Al compound, which is detrimental to this material with regard to structural applications. An interesting aspect of these alloys is their behavior towards transition metal impurities. Some elements like Ti increase the stability of the D03 phase by shifting the D03/B2 transition towards higher temperature. The situation is less clear for Zr additions; indeed, despite the beneficial effect of small Zr additions on grain boundary cohesion and ductility, there are no experimental data available concerning its effects on the stability of the D03-Fe3Al compound.

In this thesis, the effect of the Ti and Zr transition metals on the D03-Fe3Al intermetallic compound has been investigated by means of ab initio pseudopotential numerical simulations based on Density Functional Theory. Two main issues will be addressed: (i) the understanding of the role of these two transition metals in terms of stability of the bulk in the light of their site preference in the D03-Fe3Al structure; (ii) knowing that the ductility of iron aluminides is affected by grain boundary brittleness and that experimental information on segregation at grain boundaries (G.B.) is hardly available because of the resolution of the measuring tools, the behaviour of the Ti and Zr transition metals in the Σ5 (310)[001] grain boundary will then be studied to point out their effect on the structural stability of this interface.

The particular Σ5 (310)[001] grain boundary has been selected because of its short period and the presence of a single type of (310) planes in the D03 structure. These two factors make numerical simulation much easier by keeping the calculation time within reasonable limits. It is also reasonable to consider, however, that due to its high degree of coincidence this specific grain boundary can be representative of a wide range of boundaries in Fe3Al alloys. Thus, the behavior depicted here for Ti and Zr atoms with respect to their neighbor atoms should not change drastically with the nature of the grain boundary.

In principle, and this is the approach emphasized in this thesis, a theory attempting to realistically describe the structural properties (stability of the D03 structure and grain boundary relaxation) needs to take into account the effect of temperature. This requires a molecular dynamics treatment of the atoms in the supercell.
The technique known as ab initio molecular dynamics (AIMD) solves these problems by combining 'on the fly' electronic structure calculations with finite temperature dynamics. Thus, our study was conducted both using conventional static ab initio calculations (0 K) and by taking into account the effect of temperature (Ab Initio Molecular Dynamics). The most important element in an AIMD calculation is the representation of the electronic structure. The calculation of the exact ground-state electronic wave-function is intractable, and approximations must be used. In the present study, we chose to use pseudopotential methods implemented in the Vienna Ab Initio Simulation Package (VASP). It is one of the most powerful ab initio DFT pseudopotential-based packages available at present. It has already been applied to a wide range of problems and materials, including bulk systems, surfaces and interfaces.

This thesis is organized as follows. In Chapter I we provide a literature review concerning the effect of alloying elements on the properties of FeAl-based intermetallic compounds. In Chapter II an outline of the theoretical background that serves as a foundation of our calculations is given. The implementation of Ab Initio Molecular Dynamics within the framework of plane wave pseudopotential density functional theory is presented in detail, and a short introduction to density functional and pseudopotential theory is given in the second part of this chapter. Chapters III and IV are devoted to the application of these methods and tools. In Chapter III, the results of the static ab initio calculations of the substitutions of the Ti and Zr transition metals in the bulk as well as at the Σ5 grain boundary are presented. Chapter IV gives the results of the Ab Initio Molecular Dynamics calculations that were set up to investigate the effect of temperature on the structural stabilities of the two transition metal impurities in the bulk and at the Σ5 (310)[001] grain boundary of the D03-Fe3Al compound. Finally, we conclude the document and summarize the basic insights.

Chapter I Background for the analysis of iron aluminide based intermetallics

I. Intermetallic compounds

Intermetallic compounds can be simply defined as ordered alloy phases formed between two or more metallic elements. These materials have crystal structures different from those of their constituent metals and exhibit long-range ordered superlattices. In comparison with conventional metallic materials, intermetallic compounds have the advantages of a high melting point and a high specific strength, which make them promising high-temperature structural materials for automotive, aircraft and aerospace applications. The ordered nature of intermetallic compounds generates good high-temperature properties due to the presence of the long-range-ordered superlattice, which reduces dislocation mobility and diffusion processes at elevated temperatures [1,2]. Aluminide-based intermetallics are low-density materials which are distinctly different from conventional solid-solution alloys.
For example, Ni3Al exhibits an increase in yield strength with increasing temperature, whereas conventional alloys exhibit a general decrease in strength with temperature [3,4]. Nickel and iron aluminides also possess a sufficiently high concentration of aluminium, so that the formation of a continuous and adherent alumina scale on the external surface of the material can always be achieved. In contrast, most of the alloys capable of operating above 700 °C in oxygen-containing environments contain less than 2 wt.% aluminium, and invariably contain a high concentration of chromium for oxidation protection with chromia. Nickel and iron aluminides could therefore provide excellent oxidation resistance at temperatures ranging from 1100 to 1400 °C owing to their high aluminium contents and high melting points [4].

In this chapter, we give a survey of the literature concerning the effect of alloying elements on the properties of FeAl-based intermetallic compounds. After a brief introduction to the properties of intermetallics, and more especially of the intermetallic compounds based on the Fe-Al system, in Sections I and II, the different strengthening mechanisms of the alloying elements in the bulk are presented in Section III. In Section IV the effect of additions at the grain boundaries is also discussed, with a description of the different tools for the characterization of grain boundaries.

II. Iron aluminides

Among the big family of intermetallic compounds, the Fe-Al, Ni-Al and Ti-Al systems attract most of the attention. The Fe-Al system is attractive because of specific features. Due to their excellent oxidation resistance, first noted in the 1930s, iron aluminides have been subjected to extensive studies with respect to structural and functional applications [6]. In addition to their superior oxidation and sulfidation resistance, iron aluminides also offer the advantage of low material cost; consisting of non-strategic elements, their density is also lower in comparison with stainless steels. Therefore they have long been considered for applications in the automotive and petrochemical industries as well as in conventional power plants and coal conversion plants, for components such as shafts and pipes, as well as coatings for heat exchangers and molten-salt applications [7]. However, their poor ductility at room temperature and a significant drop in strength above 600 °C, together with an inadequate high-temperature creep resistance, have limited their potential for structural applications.

The phase diagram of the binary Fe-Al system, according to Kubaschewski [8], is shown in Fig. I-1. The solid solubility of Al in f.c.c. γ-Fe is limited to 1.3 at.% at 1180 °C. In contrast, in the disordered b.c.c. α-Fe (A2) up to 45 at.% Al can be dissolved at high temperature (1310 °C). Between 0 and 54 at.% Al two ordered compounds exist. The D03-ordered Fe3Al is stable at compositions around 27 at.% Al and from room temperature to 550 °C (823 K). Above 550 °C the ordered Fe3Al with D03 structure transforms to an imperfectly ordered B2 (α2') structure, which ultimately changes to a disordered solid solution, A2 (α). On the other hand, FeAl exists with the B2 structure and is stable from about 36 to 48 at.% Al, and the transition from B2 (α2(I)) to A2 occurs well above 1100 °C. In contrast to the newer diagrams by Massalski [9], the ordered α2 phase field has been subdivided into three separate modifications, with α2' and α2(I) regions at lower and an α2(h) one at higher temperatures.
The subdivision is a result of measurements by Köster and Gödecke [10], who recorded the energy evolution as well as expansion coefficients and elastic moduli as a function of composition and temperature. In the present work, we study the stoichiometric Fe3Al alloy with the D03 structure. According to the phase diagram in Fig. I-1, it is expected that, at this composition, the structure undergoes the D03 to B2 change at about 550 °C (823 K), followed by the change to A2 at about 790 °C (1063 K).

Figure I-1 Fe-rich part of the Fe-Al system according to Kubaschewski [8]. In addition to the phase boundaries for γ (disordered A1), α (disordered A2), Fe3Al (ordered D03) and FeAl (α2; ordered B2), additional lines are shown for the Curie temperature (Tc), for different variants of α2 and for the area in which the so-called 'k-state' is observed.

During the last decade, efforts have been made to enhance the room-temperature ductility, high-temperature strength and high-temperature creep resistance of iron aluminides by alloying. Two approaches, namely solid-solution strengthening and precipitation strengthening, were considered for the strengthening of iron aluminides. Elements such as Nb, Cu, Ta, Zr, B and C were considered for precipitation strengthening, while Cr, Ti, Mn, Si, Mo, V and Ni were added to iron aluminides for solid-solution strengthening. In general, the addition of elements either for precipitation strengthening or for solid-solution strengthening to improve the high-temperature tensile strength and creep resistance resulted in low room-temperature tensile elongations [11]. It is well established that the fracture of iron-based intermetallics is often brittle in nature, occurring by cleavage and/or intergranular (i.e. occurring along grain boundaries) fracture. In this context, it is important to determine the effect of ternary element additions both on (i) bulk strengthening as well as (ii) grain boundary "softening".

III. Bulk strengthening

III.1 Strengthening by solid-solution hardening

The Fe-Al-Cr system is an example where, within a large range of compositions, only solid-solution hardening is possible. In Fig. I-2 the isothermal section at 1000 °C is shown [12]. At this temperature the phase boundary between α-(Fe,Al) and FeAl has not been determined, and the term α-(Fe,Al)/FeAl is used here, and also otherwise in this chapter, when no distinction between the two phases is made. Complete solid solubility between α-(Fe,Al)/FeAl and α-Cr exists, and according to the 1000 °C isotherm solid-solution hardening is the only strengthening mechanism available from the phase diagram for compositions up to 50 at.% Al. Only at higher Al contents does the possibility of precipitating a second phase, e.g. hexagonal Al8Cr5, exist. There are a number of Fe-Al-X systems, e.g. with X = Si, V, Mn, Co, Ni, Cu, Zn, where an extended solid solubility for X in α-(Fe,Al)/FeAl exists.

Figure I-2 Isothermal section of the Fe-Al-Cr system at 1000 °C [12]. The phase boundary between α-(Fe,Al) and FeAl has not been determined and is therefore given by a broken line.

It has been reported that the addition of Cr, which is beneficial for increasing the room-temperature ductility [13,14], does not show any effect on the yield stress at 600 °C for alloys with 25 at.% Al [15]. This is not corroborated by the results of Stein et al. [16], who investigated the effect of solid-solution hardening for various alloying additions in alloys containing 26 at.% Al (Fig. I-3).
At 600 °C an increase of the yield stress is observed on adding 2 at.% of Ti, V, Cr or Mo. If higher amounts are added, a further increase of the yield stress is only observed for Mo, while for V and Cr even a decrease of the yield stress is reported [16]. At 700 °C the yield stress increases continuously with the amount of alloying addition for Ti, V and Mo, while for Cr only an addition of 2 at.% increases the yield stress. At 800 °C again a continuous increase of the yield stress with the amount of alloying addition is observed for Ti, V and Mo within the investigated composition range, while the addition of Cr has no marked effect on the yield stress at this temperature (Fig. I-3).

Figure I-3 0.2%-yield stress (in compression; 10^-4 s^-1 deformation rate) at 600, 700 and 800 °C for Fe-26Al with additions of 2 and 4 at.% X (X = Cr, V, Mo, Ti) [16] and for Fe-28Al-5Mo [17]. Black symbols denote B2-type ordering while grey symbols denote D03/L21-type ordering at the respective temperature.

Several experimental and theoretical studies have focused on the structural and magnetic properties of Fe-Cr-Al with the D03-type structure. X-ray, neutron, magnetization and Mössbauer effect studies [18] carried out on Fe3-xCrxAl alloys with x < 0.6 showed that chromium atoms preferentially occupy the FeI sites and also enter Al positions. Their magnetic moments are small, if any, and they diminish the moments of the neighbouring iron atoms by roughly 0.1 µB per chromium atom. Reddy et al. [20] found that Cr couples antiferromagnetically with the Fe atoms and occupies the FeI site. More recently, self-consistent TB-LMTO calculations [19] confirmed the result of Reddy et al., indicating that a strong preference exists for FeI-site occupation by chromium in Fe3Al. The TB-LMTO calculations [19] also show the effect of the surroundings on the magnetic moment, and they confirm the negative magnetic moment of chromium found experimentally in [18].

III.2. Strengthening by incoherent precipitates

III.2.1. Precipitation of intermetallic phases

In many Fe-Al-X systems the solid solubility of the third element within the Fe-Al phases is limited, and the possibility exists of strengthening Fe-Al-based alloys by precipitation of another intermetallic compound. In several systems this intermetallic phase is a Laves phase, e.g. in the Fe-Al-X systems with X = Ti, Zr, Nb and Ta. In order to study the effect of precipitates on strengthening, the Fe-Al-Zr system may be considered as a prototype system, as only a limited solid solubility for Zr in the Fe-Al phases has been found, which is independent of temperature, at least between 800 and 1150 °C [21]. Fig. I-4 presents the partial isothermal section of the Fe corner at 1000 °C, which is shown by way of example for the phase equilibria in the temperature range 800-1150 °C. The isothermal section reveals that for α-(Fe,Al)-based alloys strengthening by a Laves phase is possible in this temperature range, while FeAl-based alloys may be strengthened by precipitates of the tetragonal phase (Fe,Al)12Zr (τ1). As the solubility for Zr in α-(Fe,Al)/FeAl does not increase with temperature, no possibility exists for generating fine and evenly distributed precipitates from a solid solution.

Figure I-4 Partial isothermal section of the Fe-Al-Zr system at 1000 °C [21].
The two ternary compounds which are in equilibrium with the Fe-Al phases are either a Laves phase (λ, with λ0 denoting the cubic C15 structure and λ1 denoting the hexagonal C14 structure) or the tetragonal ThMn12-type phase (Fe,Al)12Zr (τ1).

A set of experiments on the Laves phases in the Fe-Al system has been performed. However, the dependence of their mechanical properties on the chemical composition is not yet understood. Thus a deeper understanding of the structural stability and mechanical properties of the Laves phases is essential to control the material properties of the iron aluminides. Recently, experimental investigations of the pure Laves phases Fe2Nb [22] and (Fe,Al)2Nb [23] have been performed and the phase equilibria in the respective ternary systems have been studied. In addition to experiments, the structural properties of the Fe2Nb Laves phase (hexagonal C14 structure) have been investigated by quantum-mechanical ab initio calculations [24]. Fig. I-5 shows the calculated single-crystal elastic constant tensor Cij, which gives direct insight into the directional dependence of the Young's modulus. It indicates a rather strong elastic anisotropy, i.e. deviations from an ideal sphere. The Young's modulus derived at T = 0 K using a self-consistent crystal homogenization method is about 250 ± 22 GPa. The authors also determined the site preference of Al between the Fe sublattices of the C14 structure using a combined experimental and theoretical approach.

Figure I-5 Quantum-mechanically calculated directional dependence of the single-crystal Young's modulus of Fe2Nb with the hexagonal C14 structure. Shape deviations from an ideal sphere identify the elastic anisotropy of the studied Laves phase compound.

In the ab initio investigation, the site preference was investigated at both low and elevated temperatures, making use of CALPHAD-like statistical sublattice models to determine the configurational entropy. The resulting free energies are shown for (Fe0.75Al0.25)2Nb in Fig. I-6 (considering a double-layer antiferromagnetic structure). The x axis at the bottom/top indicates the fraction of 2a/6h sites occupied by Al. The authors show that the 2a site has the lowest solution free enthalpy already at T = 0 K (see Fig. I-6). With increasing temperature, the larger number of sites, and thus of configurations, in the 6h sublattice and the corresponding gain in configurational entropy make the occupation of this sublattice more and more attractive. However, even for temperatures up to 1500 K the minimum of the free enthalpy curve lies above x = 0.25 (the value corresponding to the statistical distribution), yielding a net preference for the 2a sites.

Figure I-6 The ab initio calculated solution free enthalpies for (Fe0.75Al0.25)2Nb at different temperatures. The black dots indicate results of ab initio calculations; solid lines combine these results with a sublattice regular solution model [24].
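The enthalpy-versus-entropy competition just described can be sketched with an ideal two-sublattice mixing model. The sketch below is not the CALPHAD model of Ref. [24]; the preference energy DE is a hypothetical placeholder, and only the qualitative trend (configurational entropy pushing Al onto the more numerous 6h sites as temperature rises) is meant to be illustrated:

```python
import numpy as np

R = 8.314                  # gas constant, J/(mol*K)
N_2A, N_6H = 0.25, 0.75    # site fractions of the 2a and 6h Fe sublattices (1:3)
X_AL = 0.25                # overall Al fraction on the Fe sublattices
DE = -20.0e3               # hypothetical preference energy of Al for 2a over 6h (J/mol Al)

def ideal_mix(y):
    """Ideal mixing entropy term -[y ln y + (1-y) ln(1-y)] per mole of sites."""
    y = np.clip(y, 1e-12, 1 - 1e-12)
    return -(y * np.log(y) + (1 - y) * np.log(1 - y))

def free_enthalpy(y2a, T):
    # Occupancy of 6h follows from the mass balance N_2A*y2a + N_6H*y6h = X_AL
    y6h = (X_AL - N_2A * y2a) / N_6H
    h = N_2A * y2a * DE                      # enthalpy, referenced to Al on 6h
    s = R * (N_2A * ideal_mix(y2a) + N_6H * ideal_mix(y6h))
    return h - T * s

for T in (300.0, 900.0, 1500.0):
    grid = np.linspace(0.0, min(1.0, X_AL / N_2A), 2001)
    g = np.array([free_enthalpy(y, T) for y in grid])
    print(f"T = {T:6.0f} K: minimizing 2a occupancy y2a = {grid[np.argmin(g)]:.3f}")
# y2a > 0.25 (the statistical value) signals a net preference for the 2a sites,
# which is progressively reduced, but not eliminated, at high temperature.
```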
III.2.2. Precipitation of carbides

Besides hardening by precipitation of intermetallic phases, carbides can also act as strengthening phases. Fig. I-7 shows two partial isothermal sections, at 800 and 1000 °C, of the Fe corner of the Fe-Al-C system [25]. At both temperatures α-(Fe,Al) and FeAl are in equilibrium with the cubic k phase Fe3AlC. The solid solubility of carbon in α-(Fe,Al)/FeAl changes only slightly between 800 and 1200 °C but is considerably lower at lower temperatures; e.g. it drops from about 1 at.% C at 1000 °C to about 50 ppm at 320 °C [26]. This leads to the precipitation of fine needle-shaped precipitates of the k phase at the grain boundaries during cooling. As the carbon diffusivity is high even at ambient temperatures, these grain boundary precipitates are found at room temperature in all alloys of appropriate compositions, even after quenching, and they strongly affect the mechanical properties at low temperatures [26]. The effect of k phase precipitates on the mechanical behaviour of Fe-Al-based alloys with Al contents between 25 and 30 at.% has been studied in detail by Schneider et al. [27].

Figure I-7 Partial isothermal sections of the Fe-Al-C system at (a) 800 and (b) 1000 °C [15]. The exact course of the α-(Fe,Al)/FeAl phase boundary has not been determined within the ternary system, and therefore only its position in the binary Fe-Al system is indicated by a bar on the Fe-Al axis.

To control the precipitation and the microstructures in the carburizing process of Fe-Al alloys, it is necessary to rely on the thermodynamic properties of the iron-rich phases of the Fe-Al-C system, and to know the fundamental properties of these phases. Several experimental and theoretical results concerning the k carbide are available in the literature. The k phase is associated with a perovskite-type structure; the stoichiometric Fe3AlC has, in fact, never been observed. Experimentally, the stoichiometry proposed for k is Fe4-yAlyCx, where 0.8 < y < 1.2 and 0 < x < 1 [28]. Other results indicate that the composition of the different synthesized compounds is probably close to Fe3AlCx with x = 1/2 [29,30]. In addition, the experimental magnetic nature of the compound (ferro- or non-magnetic) is not yet well established. Since the investigations of Morral (1934) [31], it has been stated several times that the kappa phase is ferromagnetic; the reported Curie temperature values lie between 125 [32] and 290 °C [33]. However, the investigations of Parker et al. [34] indicate that the k phase might not be magnetic. Later, the investigations of Andryushchenko et al. [28] seem to have confirmed these observations. These authors observed that the distribution of aluminium on the corners of the cube and of iron on the faces of the cube is apparently not perfect. Antisite defects (aluminium atoms on iron sites and, reciprocally, iron on aluminium sites) seem to be at the origin of the reduced magnetic moment. Ohtani et al. [35] published a Fe-Al-C phase diagram based on ab initio calculations within an all-electron approach, and Maugis et al. [36] discussed the relative stability of various phases in aluminium-containing steels through ab initio calculations using the VASP package. More recently, Connétable et al. [37] investigated the influence of carbon on different properties of the Fe3Al system using ab initio calculations. The authors found that the insertion of the carbon atom decreases the magnetism of the iron atoms and strongly modifies the heat capacity and the elastic constants of the k phase compared with the Fe3Al-L12 structure. The interactions between Fe and C are the main origin of these modifications.
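As an aside, the two solubility values quoted at the beginning of this subsection (about 1 at.% C at 1000 °C and about 50 ppm at 320 °C [26]) are enough for a crude two-point Arrhenius estimate, c = c0·exp(−Q/RT). This is purely illustrative back-of-the-envelope arithmetic, not a fit performed in the cited work:

```python
import math

R = 8.314                          # J/(mol*K)
T1, c1 = 1000.0 + 273.15, 1.0e-2   # ~1 at.% C at 1000 degC [26]
T2, c2 = 320.0 + 273.15, 50.0e-6   # ~50 ppm at 320 degC [26]

# Solve c = c0 * exp(-Q / (R*T)) through the two data points
Q = R * math.log(c1 / c2) / (1.0 / T2 - 1.0 / T1)
c0 = c1 * math.exp(Q / (R * T1))

T = 600.0 + 273.15
print(f"effective dissolution enthalpy Q ~ {Q / 1e3:.0f} kJ/mol")
print(f"interpolated solubility at 600 degC ~ {c0 * math.exp(-Q / (R * T)):.2e} (atom fraction)")
```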
Kellou et al. also investigated the structural and thermal properties of the Fe3AlC k carbide [78]. These authors found that C addition has the strongest effect in strengthening the cohesion of the Fe3Al base among several additions, and that the bulk modulus (166 GPa) and cohesive energy (5.7 eV/atom) of the Fe3AlC (k carbide) phase are the highest among all the investigated Fe3AlX compounds (X = H, B, C, N, O) [78].

III.3. Strengthening by coherent precipitates

In the Fe-Al-Ni system a miscibility gap between disordered α-(Fe,Al) (A2) and ordered NiAl (B2) exists at temperatures below about 1200 °C [38]. The lattice mismatch of the two phases is sufficiently small that it is possible to produce very fine-scale coherent two-phase microstructures of disordered α-(Fe,Al) (A2) + ordered (Ni,Fe)Al (B2). Except for the Fe-Al-Ni system, the mechanical properties of such coherent two-phase microstructures have not been studied in detail. The coherent precipitates have a strong strengthening effect, and the microstructures can be varied such that the hard (Ni,Fe)Al phase is either the matrix or the precipitate; in both cases a strengthening effect has been achieved. The deformation behaviour of ternary Fe-Al-Ni alloys at high temperatures has been studied [39]. These studies have been extended to quaternary Fe-Al-Ni-Cr alloys, and first results, especially on the creep behaviour of these alloys, are reported by Stallybrass et al. [40].

III.4. Strengthening by order

An additional possibility for strengthening Fe-Al-based alloys is to stabilise the D03 structure with respect to the B2 structure up to higher temperatures. Nishino et al. [41,42] determined the D03-B2 transformation temperatures in (Fe1-xMx)3Al with M = Ti, V, Cr, Mn and Mo. In particular, the transformation temperatures T0 for M = Ti and V increase rapidly with increasing x, reaching T0 values as high as 1300 K for x = 0.15 (approx. 11 at.% Ti) and x = 0.25 (approx. 19 at.% V). Anthony and Fultz [43] reviewed the solute effects on T0 in Fe3Al and also measured the changes in T0 for a large number of solutes, but only in the dilute limit (see Fig. I-9).

Figure I-9 Effect of ternary concentrations on ΔT(D03-B2) [43].

Among the transition elements, the addition of Ti gives rise to the sharpest increase in T0, at a rate of 55 K/at.% Ti [43,41]. Likewise, the additions of V and Mo increase T0, but only at rates of 35 K/at.% V [43,41] and 25 [44,42] or 30 [43] K/at.% Mo. The initial rise in T0 for M = Ti, V and Mo tends to moderate at higher compositions. An approximately linear dependence of T(D03-B2) on ternary concentration is observed for all but two of the transition metal elements. The two exceptions, Nb and Ta, have a limited solubility in Fe3Al, and the increase in T(D03-B2) saturates at a 1% concentration [43] (as seen from Fig. I-9). In contrast, Cr, Hf and Zr additions have been reported to have no significant effect on T0 [43], except for a slight increase in T0 reported by Mendiratta and Lipsitt [45]. It is worth mentioning here that an increase in the D03-B2 transformation temperature T0 can lead to an improvement in the high-temperature strength of Fe3Al-based alloys [2,46]. Nishino et al. [42] have indeed demonstrated that a peak in hardness extends to higher temperatures in parallel with the increase in T0 for M = Ti, V and Mo. At present, there is no clear understanding of why these solutes have their characteristic effects on the transformation temperature. The effect of a solute atom on the D03-B2 T0 is expected to be related to its crystallographic site preference in the D03 structure. Transition elements are known to substitute on the Fe sites selectively [47]. The elements to the left of Fe in the periodic table, i.e.
Ti, V, Cr and Mn, substitute for the FeI site, while those to the right, i.e. Ni and Co, substitute for the FeII site.

III.4.1. Site preference

Such a selective site substitution has also been found for Fe3Ga-based alloys [48,49], where the D03 phase is always stabilized, although Fe3Ga, unlike Fe3Si and Fe3Al, forms an L12 phase in the low-temperature equilibrium state.

Figure I-10 The unit cell of D03-ordered Fe3Al.

The site preference of transition elements in Fe3Al may follow that of Fe3Si and Fe3Ga, as supported by band calculations [50]. Nevertheless, the site preference data on Fe3Al are still incomplete. While Ti, Mo [44,43,51] and probably Mn [51] occupy the FeI site, Mössbauer experiments on the substitution of V [52] and Cr [53] provided tentative exceptions to the above trend. Other Mössbauer experiments have reported that Cr preferentially occupies the FeI site and also enters the Al site [54]. More recently, Reddy et al. [20], using ab initio calculations, found that Cr can occupy the FeI and FeII sites with nearly equal energies. However, the authors [20] confirmed that Cr couples antiferromagnetically with the Fe atoms and prefers to occupy the FeI site. The results of Reddy et al. [20] also show that V occupies the FeI site (see Fig. I-11). Comparatively, for Co and Ni, though the energy difference between FeI and FeII occupation is small, it has been found that these two impurities prefer the FeII site [20]. Furthermore, X-ray analysis [41] and ab initio calculations [55] have indeed demonstrated the FeI-site selection of V atoms. The situation is less clear concerning an element such as Zr, for which there are no data on site preference in D03-Fe3Al.

Figure I-11 The energy gain/loss when the FeI/FeII sites are replaced by various 3d transition-metal atoms. The negative energies correspond to a gain in energy, while the positive energies correspond to a lowering of the overall binding energy with respect to the pure clusters [20].

After all, we believe that the site preference plays an important role in stabilizing the D03 phase of Fe3Al-based alloys. Therefore, it is of the utmost interest to determine the site preference, and ab initio calculations coupled with an analysis of temperature effects are a good tool for this.

III.4.2. Solute effects on D03 ordering

Although pseudopotential calculations [56] predicted some of the solute effects on the transformation temperature T0, further questions should be addressed as to why certain transition elements cause their own characteristic increases in T0. Fortnum and Mikkola [44] suggested that the difference between solutes in raising T0 is most probably caused by the difference between their electronic structures and/or atomic sizes. Anthony and Fultz [43] have shown that the solute effect on T0 is related to the difference between the metallic radii of the solute atom and an Al atom: the closer the metallic radius of a ternary solute to that of Al, the greater its effectiveness in raising T0, as shown in Fig. I-12. A qualitative support for this atomic-size argument is provided by their lattice parameter results: additions of Mo, W and Ta are effective both in raising T0 and in increasing the lattice parameter.

Figure I-12 The relationship between the efficiency of a ternary additive in raising ΔT(D03-B2) and the absolute value of the difference in metallic radii between that ternary element and Al [43].
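The site-preference comparisons of Fig. I-11 boil down to the sign of a substitution-energy difference, to which a finite-temperature occupancy ratio can be attached with simple Boltzmann statistics. In the sketch below the energy value is hypothetical (the actual values belong to Ref. [20] and are not reproduced here); the 1:2 degeneracy ratio reflects the one FeI and two FeII sites per D03 formula unit:

```python
import math

K_B = 8.617e-5   # Boltzmann constant (eV/K)

def fei_fraction(dE, T, g_fei=1, g_feii=2):
    """Fraction of solute atoms sitting on FeI sites.

    dE = E(X on FeI) - E(X on FeII) in eV (negative -> FeI favored).
    g_fei:g_feii = 1:2 is the FeI:FeII site ratio in D03 Fe3Al.
    Non-interacting Boltzmann statistics -- a sketch, not the model of Ref. [20].
    """
    w_i = g_fei * math.exp(-dE / (K_B * T))
    return w_i / (w_i + g_feii)

dE = -0.30   # hypothetical substitution-energy difference (eV)
for T in (300, 700, 1100):
    print(f"T = {T:5d} K: fraction of solute on FeI = {fei_fraction(dE, T):.3f}")
# A strong preference at 300 K is gradually reduced at high temperature,
# which is why a finite-temperature (AIMD) treatment is of interest here.
```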
However, the atomic-size effect inevitably assumes FeI-site occupation for any solute atom, regardless of the site preference rule, and is also in conflict with the lattice contraction for M = V. Taking the site preference into consideration, attention is directed to the variation of electron concentration, as proposed by Nishino et al. [41,49]. The results of Reddy et al. [20] suggest that it is the sign of the magnetic coupling that determines the preferential location of the impurity atoms. Impurities to the left of Fe couple antiferromagnetically to Fe and prefer FeI sites, while the impurities to the right (Co and Ni) couple ferromagnetically and prefer FeII sites. For the case of Cr, despite the small difference between the energies for substitution on the FeI and FeII sites, it is found to couple antiferromagnetically to Fe and to occupy the FeI site (Table I-1). However, these calculations have been carried out at 0 K, and whether the nature of the coupling changes with increasing temperature has not been investigated.

Table I-1 The bond length (BL), binding energy (BE), and the nature of the coupling in various dimers. The corresponding bond lengths in the relaxed 35-atom clusters are also given [20].

III.5 Objective of our work for bulk analysis

The above analysis concerned the effect of the transition metals in raising the D03-B2 transition temperature. In this framework, the purpose of the present work is to compare the behaviours of Ti and Zr in the bulk of Fe3Al. In particular we aim at determining:
- the temperature dependence of the site preference of Ti and Zr;
- the effect of temperature on the stability of the D03 phase of pure as well as Ti- and Zr-doped Fe3Al;
- the effect of temperature on the structural properties of pure Fe3Al as well as of the Ti- and Zr-doped compounds.

IV. Effect of alloying elements on ductility

IV.1 Boron addition and grain boundary strength

In the case of many intermetallic alloys, small boron additions modify their ambient-temperature properties. In fact, these alloys, which present an intrinsic intergranular brittleness in their 'pure' state, change their fracture mode when boron-doped. In some cases, as in the B-doped Ni3Al alloys, the fracture becomes ductile. In other cases, as in FeAl-B2 alloys, a brittle fracture is still observed even in the B-doped alloys, but it takes place by cleavage in a transgranular manner. While the first (intergranular) type of room-temperature brittleness of intermetallic alloys is commonly considered as an intrinsic one, the second one (transgranular) seems in fact to be due to an extrinsic embrittling action of atomic hydrogen, created during the oxidation reaction on the sample surface:

2Al + 3H2O → Al2O3 + 6H    (Eq. I-1)

This phenomenon, first identified by Liu et al. [60], is known as the 'environmental effect'. The boron effect in intermetallic alloys is typically attributed to its intergranular segregation. This hypothesis is a direct conclusion from experimental measurements of intergranular boron enrichment, mainly in Ni3Al alloys, by the Auger Electron Spectroscopy (AES) method [START_REF] Fraczkiewicz | A[END_REF][START_REF] Hondros | Physical metallurgy[END_REF]. This hypothesis was confirmed by the experimental results of Fraczkiewicz et al. [START_REF] Gay | [END_REF].

Figure I-15 Effect of temperature on the intergranular concentration of boron. Fe-45Al + 400 appm B alloy; annealing for 24 h [START_REF] Gay | [END_REF].
The authors [START_REF] Gay | [END_REF] used AES to measure the concentration dependence of boron segregation at grain boundaries of polycrystalline Fe-40 at.% Al based intermetallics. In a series of alloys containing different boron contents (80-2000 at. ppm), a maximum grain boundary concentration of about 13 at.% boron was measured for bulk concentrations above 800 at. ppm boron, i.e. close to its solid solubility limit under these conditions. The Fowler approach [64] was used to fit the experimental results, providing the values of the Gibbs free energy of segregation, ΔG_B^0 = -41 kJ/mol, and of the Fowler interaction parameter, zw = +96 kJ/mol [START_REF] Gay | [END_REF]. The numerical values of the thermodynamic parameters describing this system were corrected later by the same authors [START_REF] Mckamey | Physical metallurgy and processing of intermetallic compounds[END_REF]. In fact, a strong non-equilibrium segregation was still present in the studied materials under the applied heat-treatment conditions. After a prolonged annealing, however, all the boron remaining at the grain boundaries can be considered to be in segregation equilibrium. The true equilibrium grain boundary fractions of boron, EX_B^GB, are shown in Table I-2. Based on this correction, values of ΔG_B^0 ranging from -30 to -34 kJ/mol and of zw ranging from +220 to +320 kJ/mol were obtained. More recently, by means of ab initio pseudopotential calculations [START_REF] Mckamey | [END_REF], the comparison between the formation energies of boron insertions in the bulk and at a Σ5 grain boundary showed that boron atoms prefer to segregate at the grain boundary of the FeAl intermetallic compound.

Table I-2 Measured dependence of the grain boundary atomic fraction of boron, MX_B^GB, in a polycrystalline Fe-40 at.% Al alloy at 673 K on the bulk boron concentration, c_B [START_REF] Gay | [END_REF], and corresponding equilibrium values of the grain boundary atomic fraction, EX_B^GB [START_REF] Mckamey | Physical metallurgy and processing of intermetallic compounds[END_REF].

IV.2. Transition metal additions

The room-temperature ductility of Fe3Al can also be significantly improved by Cr alloying [13,14], together with increasing the Al content from the stoichiometric 25 towards 28 at.% [START_REF] Mckamey | Physical metallurgy and processing of intermetallic compounds[END_REF][START_REF] Mckamey | [END_REF]. A small addition of Zr (up to about 1 at.%) is also beneficial for the ductility, by increasing the grain boundary strength and by trapping hydrogen at zirconium-rich precipitates [67]. Furthermore, the results of combined experimental and finite element modeling simulations of intergranular fracture indicate that adding 0.5% Zr to the ternary Fe-28%Al-5%Cr alloy increases the intrinsic fracture resistance at room temperature [68].
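The Fowler fit quoted above can be illustrated numerically: with ΔG_B^0 = -41 kJ/mol and zw = +96 kJ/mol, the grain boundary fraction X solves a self-consistent isotherm. The sketch below uses one common form of the Fowler isotherm, X/(1-X) = c/(1-c)·exp[-(ΔG0 + zw·X)/RT]; sign conventions in the literature vary and the exact form used in the cited works is not reproduced here, so this is an illustration of the fitting procedure rather than a reproduction of it:

```python
import math

R = 8.314   # J/(mol*K)

def fowler_x_gb(c_bulk, dG0, zw, T):
    """Solve X/(1-X) = c/(1-c) * exp(-(dG0 + zw*X)/(R*T)) for X by bisection."""
    A = c_bulk / (1.0 - c_bulk)
    f = lambda X: X / (1.0 - X) - A * math.exp(-(dG0 + zw * X) / (R * T))
    lo, hi = 0.0, 0.999
    for _ in range(80):                 # f is monotonic in X for zw > 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Parameters quoted in the text: dG0 = -41 kJ/mol, zw = +96 kJ/mol, T = 673 K
for c in (80e-6, 400e-6, 800e-6, 2000e-6):
    X = fowler_x_gb(c, -41.0e3, 96.0e3, 673.0)
    print(f"bulk B = {c * 1e6:6.0f} at. ppm -> grain boundary fraction ~ {X:.3f}")
```

With this convention the 800 at. ppm case lands close to the ~13 at.% plateau quoted above, which illustrates how the measured saturation constrains the two fitted parameters.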
IV.3 Modelling approach and objectives of our G.B. simulations

All in all, we believe that the grain boundaries are the key parameters determining the macroscopic mechanical properties of iron aluminides, and they must therefore be characterized as regards their intrinsic structural properties and the influence of the alloying elements. For this purpose, experimental techniques (High-Resolution Transmission Electron Microscopy (HRTEM) and Auger Electron Spectroscopy, among others) are particularly well adapted to yield structural information. However, Fe-Al samples (high-purity bicrystals prepared in well-controlled conditions, for instance), because of their extreme sensitivity to impurities, are very difficult to obtain and manipulate, which certainly contributes to limiting experiments. In a complementary manner, atomic-scale simulations offer a valuable way of investigating both the structure and the thermodynamics of model interfaces. This can yield material for comparison with available or future experimental results and/or provide new insights for the understanding of the related mechanisms.

In performing atomic-scale simulations, special care has to be taken over the choice of the potential-energy model. Ab initio methods provide the reputedly most accurate state-of-the-art potentials, but require a high computational power compared with semi-empirical (e.g. tight-binding) or empirical (e.g. embedded atom method, EAM) models. In spite of the continuous enhancement of the available computer power, the high computational cost of ab initio calculations limits the size of tractable systems to about 50 transition-metal atoms, hindering comprehensive grain boundary studies that have to include point-defect thermodynamics, chemical and segregation effects, in addition to the more common studies generally limited to a few specific grain boundary variants. In particular, in order to determine the ground-state properties of a given grain boundary, the configurational atomic phase space that has to be investigated must include in-plane rigid-body translations (RBTs) and local compositions that can differ from the bulk one (if segregation occurs). Owing to the difficulty of this task, there is no example of an ab initio study embracing the full problem, including both the chemical and the translational degrees of freedom, a deficiency that still makes the use of empirical potentials relevant.

Although alloy interfacial segregation is a well-known phenomenon, deeper insight into it was recently gained from ab initio calculations, explaining in particular the effect of the grain boundary segregation of boron and sulphur on grain boundary cohesion [69] and the co-segregation of boron, titanium and oxygen at the grain boundaries of α-iron [70]. Segregated gallium was found to draw charge from the surrounding aluminium atoms and thus to reduce the cohesion of aluminium [71]. On the basis of the local density functional equations, several phenomena were also determined: the structure and electronic properties of boron and sulphur at the coherent twin boundary in ferritic iron [72], the embrittlement of the same boundary induced by phosphorus segregation [73], hydrogen segregation [74] and the effect of boron [75] on the cohesion of iron. First-principles quantum mechanical calculations showed that large bismuth atoms weaken the interatomic bonding by pushing apart the copper atoms at the interface [76]. Density functional theory was further applied to study the geometric and magnetic structures of the fully relaxed symmetrical tilt {013} grain boundary in iron and the {012} grain boundary in nickel. In both cases, enhancements of the local magnetic moments of the atoms in the grain boundary plane were found. The calculated values of the segregation enthalpy of silicon and tin at these grain boundaries were in good agreement with experiment [77]. Concerning more specifically the atomistic simulation of grain boundary segregation in intermetallics, apart from some studies by Besson et al. [79] and Raulot et al. [64], the calculations were done on
Concerning the modeling work carried out on Fe-Al based alloys, the majority of the calculations were performed without taking into account the vibrational effects related to temperature. In this context, the goals of the present modeling work on the $\Sigma 5$ (310)[001] grain boundary in D0$_3$-Fe$_3$Al are to determine:
- the site preference of the two transition metals (Ti and Zr) among different configurations at the grain boundary interface;
- the effect of the transition metals on the stability of the grain boundary;
- the effect of relaxation on the structural deformations of the grain boundary interface;
- the effect of temperature on the structural relaxation of the grain boundary;
- the temperature dependence of the site preference of the two transition metals Ti and Zr.

Chapter II: Theoretical tools

In this Chapter we review the theoretical models that were used in our simulations. Two major theories will be described: Ab Initio Molecular Dynamics and Density Functional Theory. In the first part, the focus is on the temporal evolution of a molecular system. In particular, the methods that treat the nuclei as classical particles are described. The goal, however, is not to list the different methods of molecular dynamics exhaustively, but to clarify the context within which ab initio molecular dynamics takes place. In the second part, the Density Functional Theory (DFT) used in our simulations is described. Finally, we detail the computational techniques that were used to carry out our pseudopotential calculations with the VASP (Vienna Ab initio Simulation Package) code.

Part A: Ab Initio Molecular Dynamics

I.1. Introduction

Modern theoretical methodology, aided by the advent of high-speed and massively parallel computing, has advanced to a level where the microscopic details of chemical processes in condensed phases can now be treated on a relatively routine basis. One of the most commonly used theoretical approaches for such studies is the Molecular Dynamics (MD) method, in which the classical Newtonian equations of motion for a system are solved numerically, starting from a specified initial state and subject to a set of boundary conditions appropriate to the problem. MD methodology allows both equilibrium thermodynamic and dynamical properties of a system at finite temperature to be computed. The quality of an MD calculation rests largely on the method by which the forces are specified. In many applications, these forces are computed from an empirical model or "force field", an approach that has enjoyed tremendous success in the treatment of systems ranging from simple liquids and solids to polymers and biological systems including proteins, membranes, and nucleic acids. Since most force fields do not include electronic polarization effects [1] and can treat chemical reactivity only through specialized techniques [2], it is often necessary to turn to the methodology of ab initio MD (AIMD). AIMD is a rapidly evolving and growing technique that constitutes one of the most important theoretical tools developed in the last decades. In an AIMD calculation, finite-temperature dynamical trajectories are generated by using forces obtained directly from electronic structure calculations performed "on the fly" as the simulation proceeds. Thus, AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effects [3,4].
AIMD has been successfully applied to a wide variety of important problems in physics and chemistry and is now beginning to influence biology as well. In numerous studies, new physical phenomena have been revealed and microscopic mechanisms elucidated that could not have been uncovered by using empirical methods, often leading to new interpretations of experimental data and even suggesting new experiments to be performed.

Figure II-1: Ab initio molecular dynamics unifies approximate ab initio electronic structure theory (i.e. solving Schrödinger's wave equation numerically using, for instance, Hartree-Fock theory or the Local Density Approximation (LDA) within Kohn-Sham theory) and classical molecular dynamics (i.e. solving Newton's equations of motion numerically for a given interaction potential, as reported by Fermi, Pasta, Ulam, and Tsingou for the one-dimensional anharmonic chain model of solids and published by Alder and Wainwright for the three-dimensional hard-sphere model of fluids [5]).

I.2. Quantum Molecular Dynamics

I.2.1. Deriving Classical Molecular Dynamics

The starting point of the following discussion is non-relativistic quantum mechanics as formalized via the time-dependent Schrödinger equation

$$ i\hbar\,\frac{\partial}{\partial t}\,\Phi(\{\mathbf{r}_i\},\{\mathbf{R}_I\};t) = \mathcal{H}\,\Phi(\{\mathbf{r}_i\},\{\mathbf{R}_I\};t) $$ (Eq. A-1)

in its position representation, in conjunction with the standard Hamiltonian for the electronic $\{\mathbf{r}_i\}$ and nuclear $\{\mathbf{R}_I\}$ degrees of freedom

$$ \mathcal{H} = -\sum_I \frac{\hbar^2}{2M_I}\nabla_I^2 - \sum_i \frac{\hbar^2}{2m_e}\nabla_i^2 + \sum_{i<j}\frac{e^2}{|\mathbf{r}_i-\mathbf{r}_j|} - \sum_{I,i}\frac{e^2 Z_I}{|\mathbf{R}_I-\mathbf{r}_i|} + \sum_{I<J}\frac{e^2 Z_I Z_J}{|\mathbf{R}_I-\mathbf{R}_J|} = -\sum_I \frac{\hbar^2}{2M_I}\nabla_I^2 - \sum_i \frac{\hbar^2}{2m_e}\nabla_i^2 + V_{n\text{-}e}(\{\mathbf{r}_i\},\{\mathbf{R}_I\}) = -\sum_I \frac{\hbar^2}{2M_I}\nabla_I^2 + \mathcal{H}_e(\{\mathbf{r}_i\},\{\mathbf{R}_I\}) $$ (Eq. A-2)

The goal of this section is to derive classical molecular dynamics [7,8,9] starting from Schrödinger's wave equation, following the elegant route of Tully [10,11]. To this end, the nuclear and electronic contributions to the total wave function $\Phi(\{\mathbf{r}_i\},\{\mathbf{R}_I\};t)$, which depends on both the nuclear and electronic coordinates, have to be separated. The simplest possible form is a product ansatz

$$ \Phi(\{\mathbf{r}_i\},\{\mathbf{R}_I\};t) \approx \Psi(\{\mathbf{r}_i\};t)\,\chi(\{\mathbf{R}_I\};t)\,\exp\!\left[\frac{i}{\hbar}\int_{t_0}^{t} dt'\,\tilde{E}_e(t')\right] $$ (Eq. A-3)

where the nuclear and electronic wave functions are separately normalized to unity at every instant of time, i.e. $\langle\chi;t|\chi;t\rangle = 1$ and $\langle\Psi;t|\Psi;t\rangle = 1$, respectively. In addition, a convenient phase factor

$$ \tilde{E}_e = \int d\mathbf{r}\,d\mathbf{R}\;\Psi^*(\{\mathbf{r}_i\};t)\,\chi^*(\{\mathbf{R}_I\};t)\,\mathcal{H}_e\,\Psi(\{\mathbf{r}_i\};t)\,\chi(\{\mathbf{R}_I\};t) $$ (Eq. A-4)

was introduced at this stage such that the final equations will look nice; $\int d\mathbf{r}\,d\mathbf{R}$ refers to the integration over all $i = 1,\dots$ and $I = 1,\dots$ variables $\{\mathbf{r}_i\}$ and $\{\mathbf{R}_I\}$, respectively. It is mentioned in passing that this approximation is called a one-determinant or single-configuration ansatz for the total wavefunction, which at the end must lead to a mean-field description of the coupled dynamics. Note also that this product ansatz (excluding the phase factor) differs from the Born-Oppenheimer ansatz [12,13] for separating the fast and slow variables

$$ \Phi_{BO}(\{\mathbf{r}_i\},\{\mathbf{R}_I\};t) = \sum_{k=0}^{\infty} \tilde{\Psi}_k(\{\mathbf{r}_i\},\{\mathbf{R}_I\})\,\tilde{\chi}_k(\{\mathbf{R}_I\};t) $$ (Eq. A-5)

even in its one-determinant limit, where only a single electronic state k (evaluated for the nuclear configuration $\{\mathbf{R}_I\}$) is included in the expansion. Inserting the separation ansatz Eq. (A-3) into Eqs. (A-1)-(A-2) yields (after multiplying from the left by $\langle\Psi|$ and $\langle\chi|$ and imposing energy conservation $d\langle\mathcal{H}\rangle/dt \equiv 0$) the following relations:

$$ i\hbar\,\frac{\partial\Psi}{\partial t} = -\sum_i \frac{\hbar^2}{2m_e}\nabla_i^2\,\Psi + \left\{\int d\mathbf{R}\;\chi^*(\{\mathbf{R}_I\};t)\,V_{n\text{-}e}(\{\mathbf{r}_i\},\{\mathbf{R}_I\})\,\chi(\{\mathbf{R}_I\};t)\right\}\Psi $$ (Eq. A-6)

$$ i\hbar\,\frac{\partial\chi}{\partial t} = -\sum_I \frac{\hbar^2}{2M_I}\nabla_I^2\,\chi + \left\{\int d\mathbf{r}\;\Psi^*(\{\mathbf{r}_i\};t)\,\mathcal{H}_e(\{\mathbf{r}_i\},\{\mathbf{R}_I\})\,\Psi(\{\mathbf{r}_i\};t)\right\}\chi $$ (Eq. A-7)
This set of coupled equations defines the basis of the Time-Dependent Self-Consistent Field (TDSCF) method introduced as early as 1930 by Dirac [14]. Both electrons and nuclei move quantum-mechanically in time-dependent effective potentials (or self-consistently obtained average fields) obtained from appropriate averages (quantum mechanical expectation values $\langle\dots\rangle$) over the other class of degrees of freedom (by using the nuclear and electronic wavefunctions, respectively). Thus, the single-determinant ansatz Eq. (A-3) produces, as already anticipated, a mean-field description of the coupled nuclear-electronic quantum dynamics. This is the price to pay for the simplest possible separation of electronic and nuclear variables. The next step in the derivation of classical molecular dynamics is the task of approximating the nuclei as classical point particles. How can this be achieved in the framework of the TDSCF approach, given one quantum-mechanical wave equation describing all nuclei? A well-known route to extract classical mechanics from quantum mechanics in general starts with rewriting the corresponding wavefunction

$$ \chi(\{\mathbf{R}_I\};t) = A(\{\mathbf{R}_I\};t)\,\exp\!\left[\,i\,S(\{\mathbf{R}_I\};t)/\hbar\,\right] $$ (Eq. A-8)

in terms of an amplitude factor A and a phase S, which are both considered to be real, with A > 0 in this polar representation [15]. After transforming the nuclear wave function in Eq. (A-7) accordingly and separating the real and imaginary parts, the TDSCF equation for the nuclei

$$ \frac{\partial S}{\partial t} + \sum_I \frac{1}{2M_I}(\nabla_I S)^2 + \int d\mathbf{r}\;\Psi^*\mathcal{H}_e\Psi = \hbar^2 \sum_I \frac{1}{2M_I}\,\frac{\nabla_I^2 A}{A} $$ (Eq. A-9)

$$ \frac{\partial A}{\partial t} + \sum_I \frac{1}{M_I}(\nabla_I A)(\nabla_I S) + \sum_I \frac{1}{2M_I}\,A\,(\nabla_I^2 S) = 0 $$ (Eq. A-10)

is (exactly) re-expressed in terms of the new variables A and S. This so-called "quantum fluid dynamical representation", Eqs. (A-9)-(A-10), can actually be used to solve the time-dependent Schrödinger equation [16]. The relation for A, Eq. (A-10), can be rewritten as a continuity equation [15] with the help of the identification of the nuclear density $|\chi|^2 \equiv A^2$, as directly obtained from the definition Eq. (A-8). This continuity equation is independent of $\hbar$ and ensures locally the conservation of the particle probability $|\chi|^2$ associated with the nuclei in the presence of a flux. More important for the present purpose is a more detailed discussion of the relation for S, Eq. (A-9). This equation contains one term that depends on $\hbar$, a contribution that vanishes if the classical limit

$$ \frac{\partial S}{\partial t} + \sum_I \frac{1}{2M_I}(\nabla_I S)^2 + \int d\mathbf{r}\;\Psi^*\mathcal{H}_e\Psi = 0 $$ (Eq. A-11)

is taken as $\hbar \to 0$; an expansion in terms of $\hbar$ would lead to a hierarchy of semiclassical methods. The resulting equation is now isomorphic to the equations of motion in the Hamilton-Jacobi formulation [17]

$$ \frac{\partial S}{\partial t} + \mathcal{H}(\{\mathbf{R}_I\},\{\nabla_I S\}) = 0 $$ (Eq. A-12)

of classical mechanics, with the classical Hamilton function

$$ \mathcal{H}(\{\mathbf{R}_I\},\{\mathbf{P}_I\}) = T(\{\mathbf{P}_I\}) + V(\{\mathbf{R}_I\}) $$ (Eq. A-13)

defined in terms of (generalized) coordinates $\{\mathbf{R}_I\}$ and their conjugate momenta $\{\mathbf{P}_I\}$. With the help of the connecting transformation

$$ \mathbf{P}_I \equiv \nabla_I S $$ (Eq. A-14)

the Newtonian equation of motion $\dot{\mathbf{P}}_I = -\nabla_I V(\{\mathbf{R}_I\})$ corresponding to Eq. (A-11),

$$ \frac{d\mathbf{P}_I}{dt} = -\nabla_I \int d\mathbf{r}\;\Psi^*\mathcal{H}_e\Psi \quad\text{or}\quad M_I\,\ddot{\mathbf{R}}_I(t) = -\nabla_I \int d\mathbf{r}\;\Psi^*\mathcal{H}_e\Psi $$ (Eq. A-15)

$$ \phantom{M_I\,\ddot{\mathbf{R}}_I(t)} = -\nabla_I\,V_e^{E}(\{\mathbf{R}_I(t)\}) $$ (Eq. A-16)

can be read off. Thus, the nuclei move according to classical mechanics in an effective potential $V_e^{E}$ due to the electrons. This potential is a function of only the nuclear positions at time t, as a result of averaging $\mathcal{H}_e$ over the electronic degrees of freedom, i.e. computing its quantum expectation value $\langle\Psi|\mathcal{H}_e|\Psi\rangle$, while keeping the nuclear positions fixed at their instantaneous values $\{\mathbf{R}_I(t)\}$.
However, the nuclear wave function still occurs in the TDSCF equation for the electronic degrees of freedom and has to be replaced by the positions of the nuclei for consistency. In this case the classical reduction can be achieved simply by replacing the nuclear density |χሺሼ ூ ሽ; ݐሻ| ଶ in equation Eq. (A-6) in limit ℏ→0 by a product of delta functions Π ூ ߜ൫ ூ -ݐܫ centered at the instantaneous positions ݐܫ of the classical nuclei as given by Eq. (A-15). This yields e.g. for the position operator ݀ χ * ሺሼ ூ ሽ; ݐሻ ூ χሺሼ ூ ሽ; ݐሻ ℏ→ ሱۛሮ ூ ሺݐሻ (Eq. A-17) the required expectation value. This classical limit leads to a time-dependent wave equation for the electrons ݅ℏ ߲ψ ݐ߲ = - ℎ ଶ 2݉ ∇ ଶ ψ + ܸ ି ሺሼ ሽ, ሼ ூ ሺݐሻሽሻψ = ܪ ሺሼ ሽ, ሼ ሺݐሻሽሻψሺሼ ሽ, ሼ ூ ሽ; ݐሻ (Eq. A-18) Which evolve self-consistently as the classical nuclei are propagated via Eq. (A-15). Note that now ܪ and thus ψ depend parametrically on the classical nuclear position ሼ ூ ሺݐሻሽ at the time t through ܸ ି ሺሼ ሽ, ሼ ூ ሺݐሻሽሻ. The approach relying on solving Eq. (A-15) together with Eq. (A-18) is sometimes called "Ehrenfest molecular dynamics" in honor of Ehrenfest who was the first to address the question of how Newtonian classical dynamics can be derived from Schrodinger's wave equation [18]. In the present case this leads to a hybrid or mixed approach because only the nuclei are forced to behave like classical particles, whereas the electrons are still treated as quantum objects. I.2.2. « Ehrenfest » Molecular Dynamics Although the TDSCF approach underlying Ehrenfest molecular dynamics clearly is a mean-field theory, transitions between electronic states are included in this scheme. This can be made evident by expanding the electronic wavefunction Ψ (as opposed to the total wave function φ according to Eq. (A-5)) in terms of many electronic states or determinants Ψ k Ψሺሼ ሽ, ሼ ூ ሽ; ݐሻ = ∑ ܿ ሺݐሻΨ ሺሼ ሽ, ሼ ூ ሽሻ ஶ ୀ (Eq. A-19) with complex coefficients ሼܿ ሺݐሻሽ. In this case, the coefficients ሼ|ܿ |)ݐ( ଶ ሽ) (with ∑ |ܿ |)ݐ( ଶ ≡ 1) describe explicitly the time evolution of the populations (occupations) of the different states ሼ݇ሽ whereas interferences are included via the ሼܿ * ܿ ≠ ݇ሽ contributions. One possible choice for the basis functions ሼΨ ሽ is the adiabatic basis obtained from solving the time-independent electronic Schrodinger equation ܪ (ሼ ሽ; ሼ ூ ሽΨ = ܧ (ሼ ூ ሽ)Ψ (ሼ ሽ; ሼ ூ ሽ) (Eq. A-20) where ሼ ୍ ሽ are the instantaneous nuclear positions at time t according to Eq. (A-15). Thereby, the a priori construction of any type of potential energy surface is avoided from the outset by solving the time-dependent electronic Schrodinger equation "on the fly". This allows one to compute the force from ∇ ூ ܪ〈 〉 for each configuration ሼ ூ ()ݐሽ generated by molecular dynamics. The corresponding equations of motion in terms of the adiabatic basis Eq. (A-20) and the time-dependent expansion coefficients Eq. (A-19) ܯ ூ ሷ ூ )ݐ( = -∑ |ܿ |)ݐ( ଶ ∇ ூ ܧ -∑ ܿ * ܿ ܧ( -ܧ )݀ ூ , (Eq. A-21) ݅ℏܿሶ )ݐ( = ܿ ܧ)ݐ( -݅ℏ ∑ ܿ )ݐ( ሶ ூ ݀ ூ ூ, (Eq. A-22) Where the coupling terms are given by ݀ (ሼ ூ ()ݐሽ) = ݀ Ψ * ∇ ூ Ψ ூ (Eq. A-23) with the property d ୍ ୩୩ ≡ 0. The Ehrenfest approach is thus seen to include rigorously nonadiabatic transitions between different electronic states Ψ and Ψ within the framework of classical nuclear motion and the mean-field (TDSCF) approximation to the electronic structure. The restriction to one electronic state in the expansion Eq. (A-19), which is in most cases the ground state Ψ , leads to ܯ ூ ሷ ூ )ݐ( = -∇ ூ ۦΨ ܪ| |Ψ ۧ (Eq. A-24) ݅ℏ డΨ బ డ௧ = ܪ Ψ (Eq. 
Note that $\mathcal{H}_e$ is time-dependent via the nuclear coordinates $\{\mathbf{R}_I(t)\}$. A point worth mentioning here is that the propagation of the wavefunction is unitary, i.e. the wavefunction preserves its norm and the set of orbitals used to build up the wavefunction stays orthonormal. Ehrenfest molecular dynamics is certainly the oldest approach to "on the fly" molecular dynamics and is typically used for collision- and scattering-type problems. However, it was never in widespread use for systems with many active degrees of freedom typical of condensed matter problems (although a few exceptions exist [19,20], the number of explicitly treated electrons there is fairly limited, with the exception of [21]).

I.2.3. "Born-Oppenheimer" Molecular Dynamics

An alternative approach to include the electronic structure in molecular dynamics simulations consists in straightforwardly solving the static electronic structure problem in each molecular dynamics step, given the set of fixed nuclear positions at that instance of time. Thus, the electronic structure part is reduced to solving a time-independent quantum problem, e.g. by solving the time-independent Schrödinger equation, concurrently to propagating the nuclei via classical molecular dynamics. In this way, the time-dependence of the electronic structure is a consequence of nuclear motion, and not intrinsic as in Ehrenfest molecular dynamics. The resulting Born-Oppenheimer molecular dynamics method is defined by

$$ M_I\,\ddot{\mathbf{R}}_I(t) = -\nabla_I\,\min_{\Psi_0}\{\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle\} $$ (Eq. A-26)

$$ E_0\,\Psi_0 = \mathcal{H}_e\,\Psi_0 $$ (Eq. A-27)

for the electronic ground state. A deep difference with respect to Ehrenfest dynamics concerning the nuclear equation of motion is that the minimum of $\langle\mathcal{H}_e\rangle$ has to be reached in each Born-Oppenheimer molecular dynamics step according to Eq. (A-26). In Ehrenfest dynamics, on the other hand, a wavefunction that minimized $\langle\mathcal{H}_e\rangle$ initially will also stay in its respective minimum as the nuclei move according to Eq. (A-24). A natural and straightforward extension [22] of ground-state Born-Oppenheimer dynamics is to apply the same scheme to any excited electronic state $\Psi_k$ without considering any interferences. In particular, this means that also the "diagonal correction terms" [23]

$$ D_k(\{\mathbf{R}_I(t)\}) = -\int d\mathbf{r}\;\Psi_k^*\,\nabla_I^2\,\Psi_k $$ (Eq. A-28)

are always neglected; the inclusion of such terms is discussed, for instance, in Refs. [10,11]. These terms renormalize the Born-Oppenheimer or "clamped nuclei" potential energy surface $E_k$ of a given state $\Psi_k$ (which might also be the ground state $\Psi_0$) and lead to the so-called "adiabatic potential energy surface" of that state [23]. Whence, Born-Oppenheimer molecular dynamics should not be called "adiabatic molecular dynamics", as is sometimes done. It is useful, for the sake of later reference, to formulate the Born-Oppenheimer equations of motion for the special case of effective one-particle Hamiltonians. This might be the Hartree-Fock approximation, defined to be the variational minimum of the energy expectation value $\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle$ given a single Slater determinant $\Psi_0 = \det\{\psi_i\}$, subject to the constraint that the one-particle orbitals remain orthonormal,

$$ \langle\psi_i|\psi_j\rangle = \delta_{ij} $$ (Eq. A-29)

The corresponding constrained minimization leads to the equations of motion

$$ M_I\,\ddot{\mathbf{R}}_I(t) = -\nabla_I\,\min\{\langle\Psi_0|\mathcal{H}_e^{HF}|\Psi_0\rangle\} $$ (Eq. A-33)

$$ 0 = -\mathcal{H}_e^{HF}\,\psi_i + \sum_j \Lambda_{ij}\,\psi_j $$ (Eq. A-34)

for the Hartree-Fock case. A similar set of equations is obtained if Hohenberg-Kohn-Sham density functional theory [24,25] is used, where $\mathcal{H}_e^{HF}$ has to be replaced by the Kohn-Sham effective one-particle Hamiltonian $\mathcal{H}_e^{KS}$. Instead of diagonalizing the one-particle Hamiltonian, an alternative but equivalent approach consists in directly performing the minimization under the constraint Eq. (A-29) via nonlinear optimization techniques.
Early applications of Born-Oppenheimer molecular dynamics were performed in the framework of a semi-empirical approximation to the electronic structure problem [26,27]. Only a few years later, an ab initio approach was implemented within the Hartree-Fock approximation [28]. Born-Oppenheimer dynamics started to become popular in the early nineties with the availability of more efficient electronic structure codes, in conjunction with sufficient computer power to solve "interesting problems". Undoubtedly, the breakthrough of Hohenberg-Kohn-Sham density functional theory in the realm of chemistry - which took place around the same time - also helped a lot by greatly improving the "price/performance ratio" of the electronic structure part [29,30]. A third, and possibly the crucial, reason that boosted the field of ab initio molecular dynamics was the pioneering introduction of the Car-Parrinello approach [31]. This technique opened novel avenues to treat large-scale problems via ab initio molecular dynamics and catalyzed the entire field by making "interesting calculations" possible; see also the closing section on applications.

I.2.4. "Car-Parrinello" Molecular Dynamics

A non-obvious approach to cut down the computational expenses of molecular dynamics that includes the electrons in a single state was proposed by Car and Parrinello in 1985 [31]. In retrospect, it can be considered to combine the advantages of both Ehrenfest and Born-Oppenheimer molecular dynamics: as in Born-Oppenheimer dynamics, the equations of motion can be integrated on the time scale given by the nuclear motion. In Born-Oppenheimer dynamics, however, this means that the electronic structure problem has to be solved self-consistently at each molecular dynamics step, whereas this is avoided in Ehrenfest dynamics thanks to the possibility of propagating the wavefunction by applying the Hamiltonian to an initial wavefunction (obtained e.g. by one self-consistent diagonalization). The basic idea of the Car-Parrinello approach can be viewed as exploiting the quantum-mechanical adiabatic time-scale separation of fast electronic and slow nuclear motion by transforming it into a classical-mechanical adiabatic energy-scale separation in the framework of dynamical systems theory. In order to achieve this goal, the two-component quantum/classical problem is mapped onto a two-component purely classical problem with two separate energy scales, at the expense of losing the explicit time-dependence of the quantum subsystem dynamics. Furthermore, the central quantity, the energy of the electronic subsystem $\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle$ evaluated with some wavefunction $\Psi_0$, is certainly a function of the nuclear positions $\{\mathbf{R}_I\}$. But at the same time it can be considered to be a functional of the wavefunction $\Psi_0$, and thus of a set of one-particle orbitals $\{\psi_i\}$. Now, in classical mechanics the force on the nuclei is obtained from the derivative of a Lagrangian with respect to the nuclear positions. This suggests that a functional derivative with respect to the orbitals, which are interpreted as classical fields, might yield the force on the orbitals, given a suitable Lagrangian. In addition, possible constraints within the set of orbitals have to be imposed, such as e.g. orthonormality. The corresponding Car-Parrinello Lagrangian can be postulated as

$$ \mathcal{L}_{CP} = \sum_I \frac{1}{2}M_I\,\dot{\mathbf{R}}_I^2 + \sum_i \frac{1}{2}\mu_i\,\langle\dot{\psi}_i|\dot{\psi}_i\rangle - \langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle + \{\text{constraints}\} $$ (Eq. A-35)

from which the equations of motion are obtained via the associated Euler-Lagrange equations

$$ \frac{d}{dt}\,\frac{\partial\mathcal{L}}{\partial\dot{\mathbf{R}}_I} = \frac{\partial\mathcal{L}}{\partial\mathbf{R}_I} $$ (Eq. A-36)

$$ \frac{d}{dt}\,\frac{\delta\mathcal{L}}{\delta\dot{\psi}_i^*} = \frac{\delta\mathcal{L}}{\delta\psi_i^*} $$ (Eq. A-37)

as in classical mechanics, but now for both the nuclear positions and the orbitals; note that the constraints are holonomic [32]. Following this route of ideas, generic Car-Parrinello equations of motion are found to be of the form

$$ M_I\,\ddot{\mathbf{R}}_I(t) = -\frac{\partial}{\partial\mathbf{R}_I}\,\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle + \frac{\partial}{\partial\mathbf{R}_I}\,\{\text{constraints}\} $$ (Eq. A-38)

$$ \mu_i\,\ddot{\psi}_i(t) = -\frac{\delta}{\delta\psi_i^*}\,\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle + \frac{\delta}{\delta\psi_i^*}\,\{\text{constraints}\} $$ (Eq. A-39)
where $\mu_i$ (= $\mu$) are the "fictitious masses" or inertia parameters assigned to the orbital degrees of freedom; the units of the mass parameter $\mu$ are energy times a squared time, for reasons of dimensionality. Note that the constraints

$$ \{\text{constraints}\} = \{\text{constraints}\}(\{\psi_i\},\{\mathbf{R}_I\}) $$ (Eq. A-40)

might be a function of both the set of orbitals $\{\psi_i\}$ and the nuclear positions $\{\mathbf{R}_I\}$. These dependencies have to be taken into account properly in deriving the Car-Parrinello equations following from Eq. (A-35) using Eqs. (A-36)-(A-37). According to the Car-Parrinello equations of motion, the nuclei evolve in time at a certain (instantaneous) physical temperature $\propto \sum_I M_I\dot{\mathbf{R}}_I^2$, whereas a "fictitious temperature" $\propto \sum_i \mu_i\langle\dot{\psi}_i|\dot{\psi}_i\rangle$ is associated with the electronic degrees of freedom. In this terminology, "low electronic temperature" or "cold electrons" means that the electronic subsystem is close to its instantaneous minimum energy $\min_{\{\psi_i\}}\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle$, i.e. close to the exact Born-Oppenheimer surface. Thus, a ground-state wavefunction optimized for the initial configuration of the nuclei will stay close to its ground state also during the time evolution, if it is kept at a sufficiently low temperature.

I.3. Integration of the equations of motion

In classical MD [7,9,33], the trajectory of the various components of the system is generated by integrating the Newton equations of motion, which, for each particle I, write:

$$ M_I\,\frac{d^2\mathbf{R}_I(t)}{dt^2} = \mathbf{f}_I(t), \qquad \mathbf{f}_I(t) = -\frac{\partial V(\mathbf{R}^N)}{\partial \mathbf{R}_I(t)} $$ (Eq. A-41)

$V(\mathbf{R}^N)$ is the potential energy function of the N-particle system, which only depends upon the Cartesian coordinates $\{\mathbf{R}_I\}$. Eqs. (A-41) are integrated numerically using a small but finite time-step, $\delta t$, chosen so as to guarantee the conservation of the total energy of the system. Hoping to generate exact trajectories over long times is, however, illusory, considering that the Newton equations of motion are solved numerically with a finite time-step. The exactness of the solution of Eqs. (A-41) is, nevertheless, not as crucial as it would seem. What really matters in practice is the correct statistical behavior of the trajectory, which ensures that the thermodynamic and dynamic properties of the system are reproduced with sufficient accuracy. This pivotal condition is fulfilled only if the integrator employed to propagate the motion possesses the property of symplecticity [34,35]. A so-called symplectic propagator conserves the invariant metric of the phase space, $\Gamma$. As a result, the error associated with this propagator is bounded:

$$ \lim_{n_{step}\to\infty}\left(\frac{1}{n_{step}}\right)\sum_{k=1}^{n_{step}}\left|\frac{\epsilon(k\,\delta t)-\epsilon(0)}{\epsilon(0)}\right| \le \varepsilon_M $$ (Eq. A-42)

Here, $n_{step}$ denotes the number of steps of the simulation, $\epsilon(0) \equiv \mathcal{H}(\mathbf{R}^N,\mathbf{P}^N;0)$ the initial total energy of the equilibrated system, and $\varepsilon_M$ the upper bound for energy conservation - viz. $10^{-4}$ constitutes an acceptable value. Provided that the time-step remains small enough, the integration of the equations of motion does not lead to an erratic growth of the error associated with the conservation of the total energy, which could affect significantly the statistical behavior of MD over long times. In order to clarify the statistical equivalence between the numerical solution and the exact solution of the equations of motion, it is useful to recall some points of Hamilton's approach and its relationship with statistical mechanics.

I.3.1. Hamilton's point of view and statistical mechanics

The Hamiltonian of a system with N particles moving under the influence of a potential function U is defined as

$$ \mathcal{H}(\mathbf{P}^N,\mathbf{R}^N) = \sum_{I=1}^{N}\frac{\mathbf{P}_I^2}{2M_I} + U(\mathbf{R}^N) $$ (Eq. A-43)

where $\{\mathbf{P}_I\}$ are the momenta of the particles, defined as $\mathbf{P}_I = M_I\mathbf{V}_I$.
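As an illustration of the diagnostic in Eq. (A-42), the following minimal Python sketch computes the average relative deviation of the total energy along a trajectory; the synthetic energy series merely stands in for the output of an actual MD run.

import numpy as np

# Minimal sketch of the energy-conservation diagnostic of Eq. (A-42):
# average relative deviation of the total energy along a trajectory.
# `energies` would come from an MD run; here it is synthetic data.
def energy_drift(energies):
    e0 = energies[0]                       # initial total energy, eps(0)
    return np.mean(np.abs((energies - e0) / e0))

rng = np.random.default_rng(0)
energies = 1.0 + 1e-5 * rng.standard_normal(10_000)  # mock trajectory
print(f"mean relative energy error: {energy_drift(energies):.2e}")
print("acceptable (< 1e-4):", energy_drift(energies) < 1e-4)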
$\mathbf{R}^N$ ($\mathbf{P}^N$) denotes the union of all positions (momenta) $\{\mathbf{R}_1,\mathbf{R}_2,\dots,\mathbf{R}_N\}$. The forces on the particles are derived from the potential:

$$ \mathbf{F}_I(\mathbf{R}^N) = -\frac{\partial U(\mathbf{R}^N)}{\partial\mathbf{R}_I} $$ (Eq. A-44)

The equations of motion are, according to Hamilton's equations,

$$ \dot{\mathbf{R}}_I = \frac{\partial\mathcal{H}}{\partial\mathbf{P}_I} = \frac{\mathbf{P}_I}{M_I} $$ (Eq. A-45)

$$ \dot{\mathbf{P}}_I = -\frac{\partial\mathcal{H}}{\partial\mathbf{R}_I} = -\frac{\partial U}{\partial\mathbf{R}_I} = \mathbf{F}_I(\mathbf{R}^N) $$ (Eq. A-46)

from which we get Newton's second law using $\mathbf{P}_I = M_I\dot{\mathbf{R}}_I$:

$$ M_I\,\ddot{\mathbf{R}}_I = \mathbf{F}_I(\mathbf{R}^N) $$ (Eq. A-47)

The equations of motion can also be derived using the Lagrange formalism. The Lagrange function is

$$ \mathcal{L}(\mathbf{R}^N,\dot{\mathbf{R}}^N) = \sum_{I=1}^{N}\frac{1}{2}M_I\dot{\mathbf{R}}_I^2 - U(\mathbf{R}^N) $$ (Eq. A-48)

and the associated Euler-Lagrange equation

$$ \frac{d}{dt}\,\frac{\partial\mathcal{L}}{\partial\dot{\mathbf{R}}_I} = \frac{\partial\mathcal{L}}{\partial\mathbf{R}_I} $$ (Eq. A-49)

leads to the same final result. The two formulations are equivalent, but the ab initio molecular dynamics literature almost exclusively uses the Lagrangian techniques.

I.3.2. Microcanonical Ensemble

The equations of motion are time-reversible (invariant under the transformation t → -t) and the total energy is a constant of motion:

$$ \frac{\partial E}{\partial t} = \frac{\partial\mathcal{H}(\mathbf{R}^N,\dot{\mathbf{R}}^N)}{\partial t} = 0 $$ (Eq. A-50)

These properties are important for establishing a link between molecular dynamics and statistical mechanics. The latter connects the microscopic details of a system to physical observables such as equilibrium thermodynamic properties, transport coefficients, and spectra. Statistical mechanics is based on Gibbs' ensemble concept. That is, many individual microscopic configurations of a very large system lead to the same macroscopic properties, implying that it is not necessary to know the precise detailed motion of every particle in a system in order to predict its properties. It is sufficient to simply average over a large number of identical systems, each in a different microscopic configuration; i.e. the macroscopic observables of a system are formulated in terms of ensemble averages. The thermodynamic variables that characterize an ensemble can be regarded as experimental control parameters that specify the conditions under which an experiment is performed. Now consider a system of N particles occupying a container of volume V and evolving under Hamilton's equations of motion. The Hamiltonian will be constant and equal to the total energy E of the system. In addition, the number of particles and the volume are assumed to be fixed. Therefore, a dynamical trajectory (i.e. the positions and momenta of all particles over time) will generate a series of classical states having constant N, V, and E, corresponding to a microcanonical ensemble. If the dynamics generates all possible states, then an average over this trajectory will yield the same result as an average in a microcanonical ensemble. The energy conservation condition, $\mathcal{H}(\mathbf{R}^N,\dot{\mathbf{R}}^N) = E$, which imposes a restriction on the classical microscopic states accessible to the system, defines a hypersurface in the phase space called a constant-energy surface. A system evolving according to Hamilton's equations of motion will remain on this surface. The assumption that a system, given an infinite amount of time, will cover the entire constant-energy hypersurface is known as the ergodic hypothesis. Thus, under the ergodic hypothesis, averages over a trajectory of a system obeying Hamilton's equations are equivalent to averages over the microcanonical ensemble. In addition to equilibrium quantities, dynamical properties are also defined through ensemble averages. Time correlation functions are important because of their relation to transport coefficients and spectra via linear response theory [36,37].
The important points are: by integrating Hamilton's equations of motion for a number of particles in a fixed volume, we can create a trajectory; time averages and time correlation functions of the trajectory are directly related to ensemble averages of the microcanonical ensemble.

I.3.3. The molecular dynamics propagators

Several approaches for integrating the Newton equations of motion (Eq. A-41) numerically are currently available. Among them, three will be detailed here. Unquestionably the simplest, the Verlet algorithm relies upon the knowledge of the triplet $\{\mathbf{R}_I(t), \mathbf{R}_I(t-\delta t), \mathbf{a}_I(t)\}$, where $\mathbf{a}_I(t) = \ddot{\mathbf{R}}_I(t)$ denotes the acceleration of particle I [38]. Updating the positions of the particles is achieved through a Taylor expansion of the position at $t-\delta t$ and at $t+\delta t$, leading to:

$$ \mathbf{R}_I(t+\delta t) = 2\,\mathbf{R}_I(t) - \mathbf{R}_I(t-\delta t) + \mathbf{a}_I(t)\,\delta t^2 $$ (Eq. A-51)

which implies errors of order $\mathcal{O}(\delta t^4)$. It is worth noting that the velocities, $\mathbf{V}_I(t) = \dot{\mathbf{R}}_I(t) = d\mathbf{R}_I(t)/dt$, do not appear explicitly in this scheme: they cancel out in the Taylor expansions of $\mathbf{R}_I(t+\delta t)$ and $\mathbf{R}_I(t-\delta t)$. Though unnecessary for describing the trajectory, their evaluation is an obligatory step for computing the kinetic energy, which depends upon the sole momentum variables, and, consequently, the total energy of the system, according to:

$$ \mathbf{V}_I(t) = \frac{\mathbf{R}_I(t+\delta t) - \mathbf{R}_I(t-\delta t)}{2\,\delta t} $$ (Eq. A-52)

At each time-step, the associated error is of order $\mathcal{O}(\delta t^2)$. The so-called leap-frog algorithm, derived from the preceding one, makes use of the triplet $\{\mathbf{R}_I(t), \mathbf{V}_I(t-\delta t/2), \mathbf{a}_I(t)\}$. The origin of its name appears clearly in the writing of the algorithm:

$$ \mathbf{R}_I(t+\delta t) = \mathbf{R}_I(t) + \mathbf{V}_I\!\left(t+\tfrac{\delta t}{2}\right)\delta t, \qquad \mathbf{V}_I\!\left(t+\tfrac{\delta t}{2}\right) = \mathbf{V}_I\!\left(t-\tfrac{\delta t}{2}\right) + \mathbf{a}_I(t)\,\delta t $$ (Eq. A-53)

In practice, the first step is the computation of $\mathbf{V}_I(t+\delta t/2)$, from which $\mathbf{V}_I(t)$ is deduced, a requisite for evaluating the total energy. The third scheme, the velocity Verlet algorithm, propagates positions and velocities at the same time t, following:

$$ \mathbf{R}_I(t+\delta t) = \mathbf{R}_I(t) + \mathbf{V}_I(t)\,\delta t + \tfrac{1}{2}\,\mathbf{a}_I(t)\,\delta t^2, \qquad \mathbf{V}_I(t+\delta t) = \mathbf{V}_I(t) + \frac{\mathbf{a}_I(t)+\mathbf{a}_I(t+\delta t)}{2}\,\delta t $$ (Eq. A-54)

This scheme involves the two following steps:

$$ \mathbf{V}_I\!\left(t+\tfrac{\delta t}{2}\right) = \mathbf{V}_I(t) + \tfrac{1}{2}\,\mathbf{a}_I(t)\,\delta t $$ (Eq. A-55)

from which the thermodynamic forces, $\mathbf{f}_I$, and accelerations, $\mathbf{a}_I$, at time $t+\delta t$ can be evaluated. It ensures that:

$$ \mathbf{V}_I(t+\delta t) = \mathbf{V}_I\!\left(t+\tfrac{\delta t}{2}\right) + \tfrac{1}{2}\,\mathbf{a}_I(t+\delta t)\,\delta t $$ (Eq. A-56)

The kinetic energy may then be deduced at time $t+\delta t$, while the potential energy, V, is computed in the force loop.

I.3.4. Extended System Approach

In the framework of statistical mechanics, all ensembles can be formally obtained from the microcanonical NVE ensemble - where particle number, volume and energy are the external thermodynamic control variables - by suitable Laplace transforms of its partition function. Thermodynamically this corresponds to Legendre transforms of the associated thermodynamic potentials, where intensive and extensive conjugate variables are interchanged. Physically, this task is achieved by a "sufficiently weak" coupling of the original system to an appropriate infinitely large bath or reservoir via a link that establishes thermodynamic equilibrium. The same basic idea is instrumental in generating distribution functions of such ensembles by computer simulation. Additional degrees of freedom that control the quantity under consideration are added to the system. The simulation is then performed in the extended microcanonical ensemble with a modified total energy as a constant of motion. This system has the property that, after the correct integration over the additional degrees of freedom has been performed, the distribution function of the targeted ensemble is recovered.
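The velocity Verlet scheme of Eqs. (A-54)-(A-56) is compact enough to be sketched directly. The following minimal Python example propagates a one-dimensional harmonic oscillator (the force model, mass and time-step are illustrative) and shows that the total energy stays essentially constant:

import numpy as np

# Minimal sketch of the velocity Verlet scheme (Eqs. A-54 to A-56) for a
# 1D harmonic oscillator; force(), mass and time-step are illustrative.
def force(x, k=1.0):
    return -k * x                     # f = -dV/dx for V = k x^2 / 2

m, dt, steps = 1.0, 0.01, 1000
x, v = 1.0, 0.0
a = force(x) / m
for _ in range(steps):
    v_half = v + 0.5 * a * dt         # Eq. (A-55): half-step velocity
    x = x + v_half * dt               # position update
    a = force(x) / m                  # new acceleration at t + dt
    v = v_half + 0.5 * a * dt         # Eq. (A-56): complete the velocity
e_tot = 0.5 * m * v**2 + 0.5 * x**2   # total energy, ~conserved
print(f"x = {x:+.4f}, v = {v:+.4f}, E = {e_tot:.6f}")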
Two important special cases are thermostats and barostats, which are used to impose temperature instead of energy and/or pressure instead of volume as external control parameters [7,9,39,40,41,42,43].

I.3.4.1. Barostats

Keeping the pressure constant is a desirable feature for many applications of molecular dynamics. The concept of barostats, and thus constant-pressure molecular dynamics, was introduced in the framework of extended system dynamics by Andersen [42]. His method was devised to allow for isotropic fluctuations in the volume of the supercell. An extension of Andersen's method consists in allowing for changes of the shape of the computational cell as a result of applying external pressure [41], including the possibility of non-isotropic external stress; the additional fictitious degrees of freedom in the Parrinello-Rahman approach [41] are the lattice vectors of the supercell. These variable-cell approaches make it possible to study structural phase transitions in solids at finite temperatures dynamically. The basic idea to allow for changes in the cell shape consists in constructing an extended Lagrangian where the lattice vectors a1, a2 and a3 of the simulation cell are additional dynamical variables. Using the 3 × 3 matrix h = [a1, a2, a3] (which fully defines the cell with volume det h), the real-space position $\mathbf{R}_I$ of a particle in this original cell can be expressed as

$$ \mathbf{R}_I = h\,\mathbf{S}_I $$ (Eq. A-57)

where $\mathbf{S}_I$ is a scaled coordinate with components $S_{I,u} \in [0,1]$ that defines the position of the Ith particle in a unit cube (i.e. $\Omega_{unit} = 1$), which is the scaled cell [41]. The resulting metric tensor $G = h^t h$ converts distances measured in scaled coordinates into distances as given by the original coordinates. The variable-cell extended Lagrangian can be postulated as

$$ \mathcal{L} = \sum_I^N \frac{1}{2}M_I\left(\dot{\mathbf{S}}_I^t\,G\,\dot{\mathbf{S}}_I\right) - U(h,\mathbf{S}^N) + \frac{1}{2}\,W\,\mathrm{Tr}\,\dot{h}^t\dot{h} - p\,\Omega $$ (Eq. A-58)

with nine additional dynamical degrees of freedom that are associated with the lattice vectors of the supercell h. Here, p defines the externally applied hydrostatic pressure, and W defines the fictitious mass or inertia parameter that controls the time scale of the motion of the cell h. In particular, this Lagrangian allows for symmetry-breaking fluctuations - which might be necessary to drive a solid-state phase transformation - to take place spontaneously. The resulting equations of motion read

$$ M_I\,\ddot{S}_{I,u} = -\sum_{v=1}^{3}\frac{\partial U(h,\mathbf{S}^N)}{\partial R_{I,v}}\,(h^t)^{-1}_{vu} - M_I\sum_{v=1}^{3}\sum_{s=1}^{3} G^{-1}_{uv}\,\dot{G}_{vs}\,\dot{S}_{I,s} $$ (Eq. A-59)

$$ W\,\ddot{h}_{uv} = \Omega\sum_{s=1}^{3}\left(\Pi^{tot}_{us} - p\,\delta_{us}\right)(h^t)^{-1}_{sv} $$ (Eq. A-60)

where the total internal stress tensor

$$ \Pi^{tot}_{us} = \frac{1}{\Omega}\sum_I M_I\left(\dot{\mathbf{S}}_I^t\,G\,\dot{\mathbf{S}}_I\right)_{us} + \Pi_{us} $$ (Eq. A-61)

is the sum of the thermal contribution due to the nuclear motion at finite temperature and the internal stress tensor $\Pi$ derived from the interaction potential. A modern formulation of barostats that combines the equations of motion also with thermostats (see next section) was given by Martyna et al. [43].
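The bookkeeping behind Eq. (A-57) can be illustrated with a few lines of Python: positions are stored as scaled coordinates, and the metric tensor G = hᵗh reproduces real-space distances. The slightly sheared cell below is illustrative:

import numpy as np

# Minimal sketch of the Parrinello-Rahman bookkeeping (Eq. A-57):
# real-space positions R = h S and distances via the metric G = h^t h.
h = np.array([[10.0, 0.5, 0.0],
              [0.0, 10.0, 0.0],
              [0.0, 0.0, 10.0]]).T     # columns = lattice vectors a1,a2,a3
G = h.T @ h                            # metric tensor

S1 = np.array([0.10, 0.20, 0.30])      # scaled coordinates in [0,1]
S2 = np.array([0.15, 0.25, 0.35])
R1, R2 = h @ S1, h @ S2                # Eq. (A-57)

dS = S1 - S2
d_metric = np.sqrt(dS @ G @ dS)        # distance from scaled coordinates
d_real = np.linalg.norm(R1 - R2)       # same distance in real space
print(f"volume = {np.linalg.det(h):.1f}, |d| = {d_metric:.6f} = {d_real:.6f}")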
I.3.4.2. Thermostats

Standard molecular dynamics generates the microcanonical or NVE ensemble, where in addition the total momentum is conserved [9]. The temperature is not a control variable and cannot be preselected and fixed. But it is evident that, also within molecular dynamics, the possibility to control the average temperature (as obtained from the average kinetic energy of the nuclei and the energy equipartition theorem) is welcome for physical reasons. A deterministic algorithm for achieving temperature control in the spirit of extended system dynamics [42], by a sort of dynamical friction mechanism, was devised by Nosé and Hoover [40,44]. Thereby, the canonical or NVT ensemble is generated in the case of ergodic dynamics. It is well known that the standard Nosé-Hoover thermostat method suffers from non-ergodicity problems for certain classes of Hamiltonians, such as the harmonic oscillator [44]. A closely related technique, the so-called Nosé-Hoover-chain thermostat [45], cures that problem and assures ergodic sampling of phase space even for the pathological harmonic oscillator. This is achieved by thermostatting the original thermostat by another thermostat, which in turn is thermostatted, and so on. In addition to restoring ergodicity even with only a few thermostats in the chain, this technique is found to be much more efficient in imposing the desired temperature. The underlying equations of motion read

$$ M_I\,\ddot{\mathbf{R}}_I = -\nabla_I E^{KS} - M_I\,\dot{\xi}_1\,\dot{\mathbf{R}}_I $$ (Eq. A-62)

$$ Q_1\,\ddot{\xi}_1 = \left[\sum_I M_I\,\dot{\mathbf{R}}_I^2 - g\,k_B T\right] - Q_1\,\dot{\xi}_1\,\dot{\xi}_2 $$ (Eq. A-63)

$$ Q_k\,\ddot{\xi}_k = \left[Q_{k-1}\,\dot{\xi}_{k-1}^2 - k_B T\right] - Q_k\,\dot{\xi}_k\,\dot{\xi}_{k+1}\left(1-\delta_{kK}\right), \qquad k = 2,\dots,K $$ (Eq. A-64)

By inspection of Eq. (A-62) it becomes intuitively clear how the thermostat works: $\dot{\xi}_1$ can be considered as a dynamical friction coefficient. The resulting "dissipative dynamics" leads to non-Hamiltonian flow, but the friction term can acquire a positive or negative sign according to its equation of motion. This leads to damping or acceleration of the nuclei, and thus to cooling or heating, if the instantaneous kinetic energy of the nuclei is higher or lower than the preset $k_B T$. As a result, this extended system dynamics can be shown to produce a canonical ensemble in the subspace of the nuclear coordinates and momenta. In spite of being non-Hamiltonian, Nosé-Hoover(-chain) dynamics is also distinguished by conserving an energy quantity of the extended system; see Eq. (A-67). The desired average physical temperature is given by T, and g denotes the number of dynamical degrees of freedom to which the nuclear thermostat chain is coupled (i.e. constraints imposed on the nuclei have to be subtracted). It is found that this choice requires a very accurate integration of the resulting equations of motion (for instance by using a high-order Suzuki-Yoshida integrator [46]). The integration of these equations of motion is discussed in detail in Ref. [46] using the velocity Verlet algorithm. One of the advantages of the velocity Verlet integrator is that it can easily be used together with higher-order schemes for the thermostats. The choice of the "mass parameters" assigned to the thermostat degrees of freedom should be made such that the overlap of their power spectra and those of the thermostatted subsystems is maximal [46]. The relations

$$ Q_1 = \frac{g\,k_B T}{\omega_n^2} $$ (Eq. A-65)

$$ Q_k = \frac{k_B T}{\omega_n^2} $$ (Eq. A-66)

assure this if $\omega_n$ is a typical phonon or vibrational frequency of the nuclear subsystem (say, of the order of 2000 to 4000 cm$^{-1}$). There is a conserved energy quantity in the case of thermostatted molecular dynamics. For Nosé-Hoover-chain thermostatted molecular dynamics, this constant of motion reads

$$ E^{NVT}_{cons} = \sum_I \frac{1}{2}M_I\,\dot{\mathbf{R}}_I^2 + U(\mathbf{R}^N) + \sum_{k=1}^{K}\frac{1}{2}Q_k\,\dot{\xi}_k^2 + \sum_{k=2}^{K} k_B T\,\xi_k + g\,k_B T\,\xi_1 $$ (Eq. A-67)
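For illustration only, the Nosé-Hoover-chain equations (Eqs. A-62 to A-64) can be integrated crudely for a single harmonic degree of freedom; a production implementation would use the high-order Suzuki-Yoshida/velocity Verlet schemes of Ref. [46]. Reduced units with k_B = 1 are assumed:

import numpy as np

# Crude explicit integration of the Nose-Hoover-chain equations
# (Eqs. A-62 to A-64) for a 1D harmonic oscillator, K = 2 thermostats.
# Illustrative only; reduced units with k_B = 1.
M, k, T, g = 1.0, 1.0, 0.5, 1.0
omega2 = k / M
Q1, Q2 = g * T / omega2, T / omega2     # Eqs. (A-65), (A-66)

dt, steps = 1e-3, 500_000
x, v, eta1, eta2 = 1.0, 0.0, 0.0, 0.0   # eta_k = d(xi_k)/dt
v2_sum = 0.0
for step in range(steps):
    a = -omega2 * x - eta1 * v                       # Eq. (A-62)
    d_eta1 = (M * v * v - g * T) / Q1 - eta1 * eta2  # Eq. (A-63)
    d_eta2 = (Q1 * eta1 * eta1 - T) / Q2             # Eq. (A-64), k = K
    x, v = x + v * dt, v + a * dt
    eta1, eta2 = eta1 + d_eta1 * dt, eta2 + d_eta2 * dt
    v2_sum += M * v * v
print(f"target T = {T}, estimated <M v^2> = {v2_sum / steps:.3f}")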
Part B: The Electronic Structure Methods

II.1. Introduction

Up to this point, the electronic structure method used to calculate the ab initio forces $\nabla_I\langle\Psi|\mathcal{H}_e|\Psi\rangle$ was not specified in detail. It is immediately clear that ab initio molecular dynamics is not tied to any particular approach, although very accurate techniques are of course prohibitively expensive. It is also evident that the strength or weakness of a particular ab initio molecular dynamics scheme is intimately connected to the strength or weakness of the chosen electronic structure method. Over the years, a variety of different approaches have been combined with molecular dynamics. The focus of the present review is classical molecular dynamics in conjunction with Hohenberg-Kohn-Sham density functional theory [1,2]. In the following, only those parts of density functional theory are presented that impact directly on our static ab initio and molecular dynamics calculations.

II.2. Density Functional Theory

The total ground-state energy of the interacting system of electrons with classical nuclei fixed at positions $\{\mathbf{R}_I\}$ can be obtained as

$$ \min_{\Psi_0}\{\langle\Psi_0|\mathcal{H}_e|\Psi_0\rangle\} = \min_{\{\phi_i\}} E^{KS}[\{\phi_i\}] $$ (Eq. B-1)

i.e. as the minimum of the Kohn-Sham energy [1,2]

$$ E^{KS}[\{\phi_i\}] = T_s[\{\phi_i\}] + \int d\mathbf{r}\;V_{ext}(\mathbf{r})\,n(\mathbf{r}) + \frac{1}{2}\int d\mathbf{r}\;V_H(\mathbf{r})\,n(\mathbf{r}) + E_{xc}[n] $$ (Eq. B-2)

which is an explicit functional of the set of auxiliary functions $\{\phi_i(\mathbf{r})\}$ satisfying the orthonormality relation $\langle\phi_i|\phi_j\rangle = \delta_{ij}$. This is a dramatic simplification, since the minimization with respect to all possible many-body wavefunctions $\{\Psi\}$ is replaced by a minimization with respect to a set of orthonormal one-particle functions, the Kohn-Sham orbitals $\{\phi_i\}$. The associated electronic one-body density or charge density

$$ n(\mathbf{r}) = \sum_i f_i\,|\phi_i(\mathbf{r})|^2 $$ (Eq. B-3)

is obtained from a single Slater determinant built from the occupied orbitals, where $\{f_i\}$ are integer occupation numbers. The first term in the Kohn-Sham functional (Eq. B-2) is the kinetic energy of a non-interacting reference system,

$$ T_s[\{\phi_i\}] = \sum_i f_i\,\left\langle\phi_i\left|-\frac{1}{2}\nabla^2\right|\phi_i\right\rangle $$ (Eq. B-4)

consisting of the same number of electrons exposed to the same external potential as in the fully interacting system. The second term comes from the fixed external potential

$$ V_{ext}(\mathbf{r}) = -\sum_I \frac{Z_I}{|\mathbf{R}_I-\mathbf{r}|} + \sum_{I<J}\frac{Z_I Z_J}{|\mathbf{R}_I-\mathbf{R}_J|} $$ (Eq. B-5)

in which the electrons move, which comprises the Coulomb interactions between electrons and nuclei and, in the definition used here, also the internuclear Coulomb interactions; this is the term that changes in the first place if core electrons are replaced by pseudopotentials. The third term is the Hartree energy, i.e. the classical electrostatic energy of two charge clouds stemming from the electronic density, and is obtained from the Hartree potential

$$ V_H(\mathbf{r}) = \int d\mathbf{r}'\;\frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} $$ (Eq. B-6)

which in turn is related to the density via Poisson's equation:

$$ \nabla^2 V_H(\mathbf{r}) = -4\pi\,n(\mathbf{r}) $$ (Eq. B-7)

The last contribution in the Kohn-Sham functional, the exchange-correlation functional $E_{xc}[n]$, is the most intricate contribution to the total electronic energy. The electronic exchange and correlation effects are lumped together and basically define this functional as the remainder between the exact energy and its Kohn-Sham decomposition in terms of the three previous contributions. The minimum of the Kohn-Sham functional is obtained by varying the energy functional (Eq. B-2) for a fixed number of electrons with respect to the density (Eq. B-3), or with respect to the orbitals subject to the orthonormality constraint. This leads to the Kohn-Sham equations

$$ \left\{-\frac{1}{2}\nabla^2 + V_{ext}(\mathbf{r}) + V_H(\mathbf{r}) + \frac{\delta E_{xc}[n]}{\delta n(\mathbf{r})}\right\}\phi_i(\mathbf{r}) = \sum_j \Lambda_{ij}\,\phi_j(\mathbf{r}) $$ (Eq. B-8)

$$ \left\{-\frac{1}{2}\nabla^2 + V_{eff}(\mathbf{r})\right\}\phi_i(\mathbf{r}) = \sum_j \Lambda_{ij}\,\phi_j(\mathbf{r}) $$ (Eq. B-9)

$$ \mathcal{H}_e^{KS}\,\phi_i(\mathbf{r}) = \sum_j \Lambda_{ij}\,\phi_j(\mathbf{r}) $$ (Eq. B-10)

which are one-electron equations involving an effective one-particle Hamiltonian $\mathcal{H}_e^{KS}$ with the effective potential $V_{eff}$. Note that $\mathcal{H}_e^{KS}$ nevertheless embodies the electronic many-body effects by virtue of the exchange-correlation potential

$$ \frac{\delta E_{xc}[n]}{\delta n(\mathbf{r})} = V_{xc}(\mathbf{r}) $$ (Eq. B-11)

A unitary transformation within the space of the occupied orbitals leads to the canonical form

$$ \mathcal{H}_e^{KS}\,\phi_i = \epsilon_i\,\phi_i $$ (Eq. B-12)

of the Kohn-Sham equations, where $\{\epsilon_i\}$ are the eigenvalues. In conventional static density functional or "band structure" calculations, this set of equations has to be solved self-consistently in order to yield the density, the orbitals and the Kohn-Sham potential for the electronic ground state [3]. The corresponding total energy (Eq. B-2) can be written as

$$ E^{KS} = \sum_i \epsilon_i - \frac{1}{2}\int d\mathbf{r}\;V_H(\mathbf{r})\,n(\mathbf{r}) + E_{xc}[n] - \int d\mathbf{r}\;\frac{\delta E_{xc}[n]}{\delta n(\mathbf{r})}\,n(\mathbf{r}) $$ (Eq. B-13)

where the sum over the Kohn-Sham eigenvalues is the so-called "band-structure energy".
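The self-consistency cycle described above can be made concrete with a deliberately minimal one-dimensional model: a soft-Coulomb "atom" with two electrons, a Hartree term, and no exchange-correlation (omitted for brevity). Everything in this sketch - the grid, the potentials, the mixing - is illustrative and is not the scheme used in VASP:

import numpy as np

# Minimal 1D sketch of the self-consistent solution of the Kohn-Sham
# equations (Eqs. B-8 to B-10): build n, update the effective potential,
# rediagonalize, and mix until self-consistency.
N, L = 201, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy: -1/2 d^2/dx^2 by finite differences.
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                      + np.diag(np.ones(N - 1), -1)
                      - 2.0 * np.eye(N))
v_ext = -2.0 / np.sqrt(x**2 + 1.0)          # soft-Coulomb "nucleus", Z = 2
soft = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # e-e kernel

n = np.zeros(N)                              # initial density guess
for it in range(100):
    v_H = soft @ n * dx                      # Hartree potential (cf. Eq. B-6)
    H = T + np.diag(v_ext + v_H)             # effective 1-particle H
    eps, phi = np.linalg.eigh(H)
    phi0 = phi[:, 0] / np.sqrt(dx)           # normalize lowest orbital
    n_new = 2.0 * phi0**2                    # 2 electrons in one orbital
    if np.max(np.abs(n_new - n)) < 1e-8:
        break
    n = 0.5 * n + 0.5 * n_new                # linear mixing for stability
print(f"iterations: {it + 1}, lowest eigenvalue = {eps[0]:.4f}")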
II.3. Energy functionals

Crucial to any application of density functional theory is the approximation of the unknown exchange and correlation functional. The exchange-correlation functionals that will be considered in the implementation part belong to the class of the "Generalized Gradient Approximation" (GGA) [4],

$$ E_{xc}^{GGA}[n] = \int d\mathbf{r}\;n(\mathbf{r})\,\varepsilon_{xc}^{GGA}\big(n(\mathbf{r});\nabla n(\mathbf{r})\big) $$ (Eq. B-15)

where the unknown functional is approximated by an integral over a function that depends only on the density and its gradient at a given point in space. The combined exchange-correlation function is typically split into two additive terms, $\varepsilon_x$ and $\varepsilon_c$, for exchange and correlation, respectively. In the simplest case it is the exchange and correlation energy density $\varepsilon_{xc}(n)$ of an interacting but homogeneous electron gas, evaluated at the "local" density n(r) at the space point r in the inhomogeneous system. This simple but astonishingly powerful approximation [5] is the famous local density approximation, LDA [6] (or local spin density, LSD, in the spin-polarized case [7]). The self-interaction correction (SIC) [8], as applied to the LDA, was critically assessed for molecules in Ref. [9], with a disappointing outcome. A significant improvement of the accuracy was achieved by introducing the gradient of the density, as indicated in Eq. (B-15), beyond the well-known straightforward gradient expansions. These so-called GGAs (also denoted as "gradient-corrected" or "semilocal" functionals) extended the applicability of density functional calculations to the realm of chemistry. Another considerable advance was the successful introduction of "hybrid functionals" [10,11] that include to some extent "exact exchange" [12] in addition to a standard GGA. Although such functionals can certainly be implemented within a plane wave approach [13], they are prohibitively time-consuming. A more promising route in this respect is offered by those functionals that include higher-order powers of the gradient (or the local kinetic energy density) in the sense of a generalized gradient expansion beyond the first term. Promising results could be achieved by including Laplacian or local kinetic energy terms [14], but at this stage a sound judgment concerning their "price/performance ratio" has to await further scrutinizing tests. The "optimized potential method" (OPM) or "optimized effective potentials" (OEP) are another route to include "exact exchange" within density functional theory. Here, the exchange-correlation functional $E_{xc}^{OPM} = E_{xc}[\{\phi_i\}]$ depends on the individual orbitals instead of only on the density or its derivatives.
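As a concrete instance of a "local" functional evaluation in the spirit of Eq. (B-15), the following sketch computes the LDA exchange energy $E_x = \int n\,\varepsilon_x(n)\,dr$ with $\varepsilon_x(n) = -\tfrac{3}{4}(3/\pi)^{1/3} n^{1/3}$ (atomic units) for an illustrative mock density profile; a GGA would additionally feed the gradient of n, e.g. np.gradient(n, dx), into the integrand:

import numpy as np

# Minimal sketch of a local functional evaluation: LDA exchange energy
# with the homogeneous-electron-gas exchange density
# eps_x(n) = -(3/4)*(3/pi)**(1/3) * n**(1/3) (atomic units).
# The Gaussian density profile below is illustrative.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
n = np.exp(-x**2) / np.sqrt(np.pi)          # mock density, integrates to 1

eps_x = -0.75 * (3.0 / np.pi)**(1.0 / 3.0) * np.cbrt(n)
E_x = np.sum(n * eps_x) * dx                # pointwise (local) integration
print(f"N_e = {np.sum(n)*dx:.4f}, E_x(LDA) = {E_x:.4f} Ha")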
II.4. The plane wave pseudopotential method

The Kohn-Sham equation, (Eq. B-12), can be solved in practice by expanding the Kohn-Sham orbitals in a finite set of basis functions. The Schrödinger equation then transforms into an algebraic equation for the expansion coefficients, which may be solved by various well-established numerical methods. Among these methods, we limit our discussion to the Plane Wave (PW) basis set. Plane waves are the exact eigenfunctions of the homogeneous electron gas. Therefore, plane waves are the natural choice for a basis expansion of the electron wave functions of simple metals, where the ionic cores can be viewed as rather small perturbations to the homogeneous electron gas ("nearly free electron" metals). Plane waves are orthonormal and energy-independent. Hence, upon a basis set expansion, the Schrödinger equation transforms into a simple matrix eigenvalue problem for the expansion coefficients. A further advantage of plane waves is that they are not biased towards any particular atom. Any region in space is treated on an equal footing, so that calculations do not have to be corrected for a basis set superposition error. Since plane waves do not depend on the positions of the atoms, the Hellmann-Feynman theorem can be applied directly to calculate atomic forces: even for a non-complete basis set, the Pulay terms are identically zero. In practical calculations, only plane waves up to a certain cutoff wave vector are included in the basis set. The convergence of the calculations with respect to the basis set size is therefore controlled by a single parameter and can be checked simply by increasing the length of the cutoff wave vector. However, due to the nodal structure of the valence wave functions in the core region of the atoms, a prohibitively large number of plane waves would be needed for a good representation of these fast oscillations. For plane wave approaches to be of practical use, we have to replace the Coulomb potential of the electron-nucleus interaction by pseudopotentials. By introducing pseudopotentials we are able to achieve two goals. First, we can remove the core electrons from our calculations. The contribution of the core electrons to the chemical bonding is negligible, but they contribute most to the total energy of the system (typically a thousand times more than the valence electrons). Hence, the removal of the core electrons from the calculation means that total energy differences between ionic configurations are taken between much smaller numbers, so that the required accuracy of the total energy calculations will be much less demanding than in an all-electron calculation. Second, by introducing pseudopotentials we replace the true valence wave functions by so-called pseudo wave functions, which match exactly the true valence wave functions outside the ionic core region but are nodeless inside. These pseudo wave functions can be expanded using a much smaller number of plane wave basis states. A further advantage of pseudopotentials is that relativistic effects can be incorporated easily into the potential, while treating the valence electrons non-relativistically. In spite of introducing pseudopotentials, the number of basis functions $N_{pw}$ needed for an accurate calculation is still an order of magnitude larger than for approaches using localized orbitals. This disadvantage, however, is more than compensated by the possibility of evaluating many expressions with the help of the Fast Fourier Transform (FFT) algorithm. The most time-consuming step in solving the single-particle Schrödinger equations is to apply the Hamilton operator to the valence wave functions.
In a traditional matrix representation of the Hamilton operator, this step scales quadratically with the number of basis functions. With plane waves and the FFT algorithm, this operation reduces to an $N_{pw}\ln(N_{pw})$ scaling. Hence, for large systems the use of plane wave basis functions becomes much more efficient than localized basis sets. Furthermore, the total charge density and the Hartree potential are easily calculated in a plane wave representation.

II.4.1. Plane waves

II.4.1.1. Supercell

Although we have simplified the complicated many-body problem of interacting electrons in the Coulomb potentials of fixed nuclei to a set of single-particle equations, the calculation of the one-electron wave functions for an extended (or even infinite) system is still a formidable task. To make the problem tractable, we assume that our system of interest can be represented by a box of atoms which is repeated periodically in all three spatial directions. The box is described by three vectors a1, a2, and a3. The volume of the box is given by

$$ \Omega = \mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3) $$ (Eq. B-16)

The three vectors define a lattice in real space. General lattice vectors T are integer multiples of the primitive lattice vectors:

$$ \mathbf{T} = N_1\,\mathbf{a}_1 + N_2\,\mathbf{a}_2 + N_3\,\mathbf{a}_3 $$ (Eq. B-17)

where N1, N2, N3 can be any integer numbers. The box can be, for example, either the primitive unit cell of a crystal or a large supercell containing a sufficient number of independent atoms to mimic locally an amorphous solid or a liquid phase. By using supercells, atomic point defects, surfaces or isolated molecules can also be modeled.

II.4.1.2. Fourier representations

The translational symmetry of the atomic arrangement can now be exploited to reduce the computational cost of solving the Kohn-Sham equations. The effective potential (as well as the electron density) is a periodic function with the periodicity of the lattice, i.e.

$$ V_{eff}(\mathbf{r} + \mathbf{T}) = V_{eff}(\mathbf{r}) $$ (Eq. B-18)

for any lattice vector T of the form of (Eq. B-17). Therefore, $V_{eff}$ can be expanded into a Fourier series

$$ V_{eff}(\mathbf{r}) = \sum_{\mathbf{G}} V_{eff}(\mathbf{G})\,e^{i\mathbf{G}\cdot\mathbf{r}}, \qquad V_{eff}(\mathbf{G}) = \frac{1}{\Omega}\int_\Omega V_{eff}(\mathbf{r})\,e^{-i\mathbf{G}\cdot\mathbf{r}}\,d^3r $$ (Eq. B-19)

The sum runs over all vectors G which fulfill the condition $\mathbf{G}\cdot\mathbf{T} = 2\pi M$ for all lattice vectors T, with M being an integer number. The vectors G form a lattice, the so-called reciprocal lattice, which is generated by the three primitive vectors b1, b2, b3 defined by [15]

$$ \mathbf{a}_i \cdot \mathbf{b}_j = 2\pi\,\delta_{ij}, \qquad i,j = 1,2,3 $$ (Eq. B-20)

The volume of the unit cell of the reciprocal lattice is given by

$$ \mathbf{b}_1 \cdot (\mathbf{b}_2 \times \mathbf{b}_3) = \frac{(2\pi)^3}{\Omega} $$ (Eq. B-21)
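The construction of the reciprocal lattice vectors in Eq. (B-20) amounts to one matrix inversion: with A = [a1 a2 a3] as columns, B = 2π(A⁻¹)ᵀ. A minimal numerical check of Eqs. (B-20) and (B-21) for an illustrative cubic cell:

import numpy as np

# Minimal sketch: build the reciprocal lattice vectors b_i of Eq. (B-20)
# from the cell vectors a_i; the condition a_i . b_j = 2*pi*delta_ij
# gives B = 2*pi*(A^-1)^T. The cubic cell below is illustrative.
A = np.diag([10.0, 10.0, 10.0])            # a1, a2, a3 as columns
B = 2.0 * np.pi * np.linalg.inv(A).T       # b1, b2, b3 as columns

omega = np.linalg.det(A)                   # Eq. (B-16)
omega_rec = np.linalg.det(B)               # should be (2*pi)^3 / omega
print(np.round(A.T @ B / (2.0 * np.pi), 6))  # identity matrix (Eq. B-20)
print(f"omega = {omega:.1f}, omega_rec = {omega_rec:.6f},"
      f" (2pi)^3/omega = {(2*np.pi)**3 / omega:.6f}")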
II.4.1.3. Bloch's Theorem

The solutions of a single-particle Schrödinger equation with a periodic potential are by no means themselves necessarily periodic. However, the eigenstates can be chosen in such a way that with each wave function $\varphi$ a wave vector k is associated, such that

$$ \varphi(\mathbf{r} + \mathbf{T}) = e^{i\mathbf{k}\cdot\mathbf{T}}\,\varphi(\mathbf{r}) $$ (Eq. B-22)

for every lattice vector T (Bloch's theorem). From now on, all eigenstates of the single-particle Schrödinger equation will be labeled with their corresponding vector k. From the form of the exponential factor in (Eq. B-22) it is obvious that the values of k can be restricted to one unit cell of the reciprocal lattice. By convention, this unit cell is usually taken to be the first Brillouin Zone (BZ) [15]. Different solutions for the same vector k will be labeled with the band index j. Bloch's theorem is often stated in an alternative form: the property in (Eq. B-22) is equivalent to the statement that all eigenfunctions $\varphi_{\mathbf{k}j}$ of a single-particle Schrödinger equation with a periodic potential can be written as a periodic function $u_{\mathbf{k}j}$ modulated by a plane wave with wave vector k [15]:

$$ \varphi_{\mathbf{k}j}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\,u_{\mathbf{k}j}(\mathbf{r}) $$ (Eq. B-23)

This allows us to restrict the calculation of the eigenfunctions to a single unit cell; the form of the eigenfunctions in all other unit cells is determined by applying (Eq. B-22). From now on we will assume that the eigenfunctions are normalized with respect to a single unit cell:

$$ \int_\Omega |\varphi(\mathbf{r})|^2\,d^3r = 1 $$ (Eq. B-24)

Since the functions $u_{\mathbf{k}j}$ are periodic, they can be expanded in a set of plane waves. Together with the exponential prefactor we get

$$ \varphi_{\mathbf{k}j}(\mathbf{r}) = \sum_{\mathbf{G}} c_{j\mathbf{k}}(\mathbf{G})\,e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}} $$ (Eq. B-25)

Before we make use of the plane wave expansion of the wave functions, we write the Kohn-Sham equations of density functional theory in the notation of Bloch states:

$$ \left(-\frac{1}{2}\Delta + V_{eff}(\mathbf{r})\right)\varphi_{\mathbf{k}j}(\mathbf{r}) = \epsilon_{\mathbf{k}j}\,\varphi_{\mathbf{k}j}(\mathbf{r}) $$ (Eq. B-26)

with

$$ V_{eff}(\mathbf{r}) = V_{ext}(\mathbf{r}) + V_H[n(\mathbf{r})] + V_{xc}[n(\mathbf{r})] $$ (Eq. B-27)

and

$$ n(\mathbf{r}) = \frac{2\,\Omega}{(2\pi)^3}\sum_j \int_{BZ} |\varphi_{\mathbf{k}j}(\mathbf{r})|^2\,\Theta(E_F - \epsilon_{\mathbf{k}j})\,d^3k $$ (Eq. B-28)

$V_{ext}$, $V_H$ and $V_{xc}$ are the external potential of the nuclei, the Hartree potential and the exchange-correlation potential, respectively. The factor of 2 in (Eq. B-28) takes the electron spin into account. $\Theta$ is a step function, which is one for positive and zero for negative arguments. $E_F$ is the Fermi energy, up to which the single-particle states have to be occupied. The Fermi energy is defined by the number of electrons $N_e$ in the unit cell:

$$ \int_\Omega n(\mathbf{r})\,d^3r = N_e $$ (Eq. B-29)

For an insulator, the Fermi energy lies in a band gap. Hence, at each k-point exactly $N_e/2$ bands will be occupied. For metals, one or more bands cross the Fermi energy, so that the number of occupied states changes between k-points.

II.4.1.4. k-Point Sampling

By making use of Bloch's theorem, we have transformed the problem of calculating an infinite number of electronic states extended infinitely in space into one of calculating a finite number of eigenstates at an infinite number of k-points which are extended over a single unit cell. At first glance this seems to be only a minor improvement, since still an infinite number of calculations is needed for the different k-points. However, the electronic wave functions at k-points that are close together will be very similar. Hence, it is possible to represent the wave functions of a region of k-space by the wave function at a single k-point. We thus define a regular mesh of $N_{kpt}$ k-points and replace the integral over the Brillouin zone by a discrete sum over the chosen k-point mesh:

$$ \frac{\Omega}{(2\pi)^3}\int_{BZ} \dots\;\Theta(E_F - \epsilon_{\mathbf{k}j})\,d^3k \;\to\; \frac{1}{N_{kpt}}\sum_{\mathbf{k}} f_{\mathbf{k}j}\,\dots $$ (Eq. B-30)

The $f_{\mathbf{k}j}$ are occupation numbers, which are either one or zero. Several schemes to construct such k-point meshes have been proposed in the literature [16,17]. Usually only a small number of k-points is required to obtain well-converged results. For increasing size of the supercell, the volume of the Brillouin zone becomes smaller and smaller (see Eq. B-21); therefore, with increasing supercell size, fewer and fewer k-points are needed. From a certain supercell size on, it is often justified to use just a single k-point, which is usually taken to be k = 0 (the Γ-point approximation). For metallic systems, on the other hand, much denser k-point meshes are required in order to obtain a precise sampling of the Fermi surface. In these cases the convergence with respect to the k-point density can often be accelerated by introducing fractional occupation numbers [18,19].
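A regular k-point mesh of the kind entering Eq. (B-30) can be generated in a few lines. The sketch below builds a Monkhorst-Pack-like 4×4×4 mesh in fractional coordinates of the reciprocal cell (the subdivision is illustrative; Refs. [16,17] describe the actual schemes):

# Minimal sketch: regular (Monkhorst-Pack-like) k-point mesh in
# fractional coordinates, to be used with weight 1/N_kpt in Eq. (B-30).
n1 = n2 = n3 = 4
mesh = [((2 * i - n1 + 1) / (2 * n1),
         (2 * j - n2 + 1) / (2 * n2),
         (2 * k - n3 + 1) / (2 * n3))
        for i in range(n1) for j in range(n2) for k in range(n3)]
print(f"N_kpt = {len(mesh)}")
print(mesh[:3])                      # first few fractional k-points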
II.4.1.5. Fourier representation of the Kohn-Sham equations

In a plane wave representation of the wave functions, the Kohn-Sham equations assume a particularly simple form. If we insert (Eq. B-25) into (Eq. B-26), multiply from the left with $e^{-i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}$ and integrate over r, we get the matrix eigenvalue equation

$$ \sum_{\mathbf{G}'}\left[\frac{1}{2}\,\|\mathbf{k}+\mathbf{G}\|^2\,\delta_{\mathbf{G}\mathbf{G}'} + V_{eff}(\mathbf{G}-\mathbf{G}')\right] c_{j\mathbf{k}}(\mathbf{G}') = \epsilon_{\mathbf{k}j}\,c_{j\mathbf{k}}(\mathbf{G}) $$ (Eq. B-32)

In practical calculations, the Fourier expansion (Eq. B-25) of the wave functions is truncated by keeping only those plane wave vectors (k+G) with a kinetic energy lower than a given cutoff energy:

$$ \frac{1}{2}\,\|\mathbf{k}+\mathbf{G}\|^2 \le E_{pw} $$ (Eq. B-33)

The convergence of all calculations with respect to the basis set size can be tested simply by increasing the plane wave cutoff energy step by step. The electron density in the Fourier representation is given by

$$ n(\mathbf{G}) = \frac{2}{N_{kpt}}\sum_{\mathbf{k}j} f_{\mathbf{k}j}\sum_{\mathbf{G}'} c^*_{j\mathbf{k}}(\mathbf{G}'-\mathbf{G})\,c_{j\mathbf{k}}(\mathbf{G}') $$ (Eq. B-34)

Since we have truncated the wave functions at a maximum wave vector, it is obvious from Eq. (B-34) that the electron density has only non-vanishing Fourier components up to twice the length of this cutoff wave vector. In Fourier space, the calculation of the Hartree potential is particularly simple. It is given by

$$ V_H(\mathbf{G}) = 4\pi e^2\,\frac{n(\mathbf{G})}{\|\mathbf{G}\|^2} $$

Like the electron density, the Hartree potential has a finite Fourier expansion. To calculate the exchange-correlation potential, we have to Fourier transform the electron density to real space, evaluate the given functional, and Fourier transform the result back.

II.4.1.6. Fast Fourier Transformation (FFT)

The main advantage of working with plane waves is that the evaluation of various expressions can be sped up significantly by using FFTs. In particular, since the wave functions and the electron density have a finite Fourier representation, this can be done without any loss in accuracy, as long as we use in our real-space Fourier grid twice as many grid points in each spatial direction as the number of points in the Fourier space grid [20]. For example, the calculation of the electron density according to Eq. (B-34) scales quadratically with the number $N_{pw}$ of plane waves. However, if we Fourier transform the wave functions to real space (which scales as $N_{pw}\ln(N_{pw})$), calculate $|\varphi_{\mathbf{k}j}(\mathbf{r})|^2$ on the real-space Fourier grid ($N_{pw}$ scaling) and then Fourier transform the result back, we significantly reduce the computational cost. Along the same lines, we can also reduce the number of operations for the evaluation of the term $\sum_{\mathbf{G}'} V_{eff}(\mathbf{G}-\mathbf{G}')\,c_{j\mathbf{k}}(\mathbf{G}')$ in Eq. (B-32) from an $N_{pw}^2$ to an $N_{pw}\ln(N_{pw})$ scaling.
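The FFT shortcut just described can be sketched in one dimension: transform the expansion coefficients to the real-space grid, square there, and transform back, instead of evaluating the quadratic-scaling convolution of Eq. (B-34) directly. Grid size and coefficients are illustrative; as noted above, production codes use a real-space grid twice as dense to avoid aliasing:

import numpy as np

# Minimal 1D sketch of the FFT evaluation of the density: O(N log N)
# instead of the direct convolution of Eq. (B-34).
n_g = 64
rng = np.random.default_rng(1)
c = rng.standard_normal(n_g) + 1j * rng.standard_normal(n_g)
c /= np.linalg.norm(c)                  # normalized expansion coefficients

phi_r = np.fft.ifft(c) * n_g            # psi(r) on the real-space grid
density_r = np.abs(phi_r) ** 2          # |psi(r)|^2 on the grid
n_G = np.fft.fft(density_r) / n_g       # back to Fourier components n(G)
print(f"average density: {density_r.mean():.6f}")  # = sum |c|^2 = 1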
II.4.2. Pseudopotentials

II.4.2.1. Norm-conserving Pseudopotentials

The norm-conserving pseudopotential approach provides an effective and reliable means for performing calculations on complex molecular, liquid and solid state systems using plane wave basis sets. In this approach only the chemically active valence electrons are dealt with explicitly. The inert core electrons are eliminated within the frozen-core approximation and are considered, together with the nuclei, as rigid non-polarizable ion cores. In turn, all electrostatic and quantum-mechanical interactions of the valence electrons with the cores (the nuclear Coulomb attraction screened by the core electrons, the Pauli repulsion, and exchange and correlation between core and valence electrons) are accounted for by angular-momentum-dependent pseudopotentials. These reproduce the true potential and valence orbitals outside a chosen core region but remain much weaker and smoother inside. The valence electrons are described by smooth pseudo orbitals which play the same role as the true orbitals, but avoid the nodal structure near the nuclei that keeps the core and valence states orthogonal in an all-electron framework. The corresponding Pauli repulsion largely cancels the attractive parts of the true potential in the core region and is built into the therefore rather weak pseudopotential. This pseudoization of the valence wavefunctions, along with the removal of the core states, greatly facilitates a numerically accurate solution of the Kohn-Sham equations and the Poisson equation, and enables the use of plane waves as an expedient basis set in electronic structure calculations. By virtue of the norm-conservation property, carefully constructed pseudopotentials represent a rather marginal approximation, and indeed allow for an adequate description of the valence electrons over the entire chemically relevant range of systems.

• Pseudopotentials should be additive and transferable. Additivity can most easily be achieved by building pseudopotentials for atoms in reference states. Transferability means that one and the same pseudopotential should be adequate for an atom in all possible chemical environments. This is especially important when a change of environment is expected during a simulation, as in chemical reactions or phase transitions.

• Pseudopotentials replace electronic degrees of freedom in the Hamiltonian by an effective potential. They reduce the number of electrons in the system and thereby allow for faster calculations or the treatment of bigger systems.

• Pseudopotentials allow for a considerable reduction of the basis set size. Valence states are smoother than core states and therefore need fewer basis functions for an accurate description. The pseudized valence wavefunctions are nodeless functions (for the type of pseudopotentials considered here) and allow for an additional reduction of the basis. This is especially important for plane waves. Consider the 1s function of an atom,

$\varphi_{1s}(r) \sim e^{-Z^{*} r}$   (Eq. B-35)

with $Z^{*} \approx Z$, the nuclear charge. The Fourier transform of the orbital is

$\varphi_{1s}(G) \sim \frac{16\pi\, Z^{*5/2}}{(G^2 + Z^{*2})^2}$   (Eq. B-36)

From this formula we can estimate the relative cutoffs needed for different elements in the periodic table (see the sketch after this list).

• Most relativistic effects are connected to the core electrons. These effects can be incorporated into the pseudopotentials without complicating the calculations of the final system.
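The following is a small sketch of the cutoff estimate based on Eq. B-36: the Fourier amplitude of a hydrogen-like 1s orbital decays more slowly for larger effective charge $Z^{*}$, so heavier elements treated all-electron would require far larger plane-wave cutoffs. The $Z^{*}$ values and the decay tolerance are illustrative assumptions, not values from the text.

```python
import numpy as np

def phi_1s_G(G, Zs):
    """Fourier transform of a 1s orbital, up to a constant (Eq. B-36)."""
    return 16.0 * np.pi * Zs ** 2.5 / (G ** 2 + Zs ** 2) ** 2

def cutoff_for_decay(Zs, tol=1e-3):
    """Smallest |G| at which the amplitude falls below tol * phi(0)."""
    G = np.linspace(0.0, 200.0, 20001)
    amp = phi_1s_G(G, Zs) / phi_1s_G(0.0, Zs)
    return G[np.argmax(amp < tol)]

for Zs in (1.0, 6.0, 26.0):   # roughly H-, C- and Fe-like effective charges
    print(f"Z* = {Zs:5.1f}: |G| cutoff ~ {cutoff_for_decay(Zs):7.1f} a.u.^-1")
```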
II.4.2.1.1. Hamann-Schluter-Chiang conditions

Norm-conserving pseudopotentials are derived from atomic reference states, calculated from the atomic Kohn-Sham equation (Eq. B-4). This equation is replaced by a valence-electron-only equation of the same form,

$\left(-\nabla^2 + V_v\right) |\varphi^{ps}\rangle = \hat{\epsilon}\, |\varphi^{ps}\rangle$   (Eq. B-37)

Hamann, Schluter and Chiang [21] proposed a set of requirements for the pseudo wavefunction and pseudopotential. The pseudopotential should have the following properties:

1. Real and pseudo valence eigenvalues agree for a chosen prototype atomic configuration: $\epsilon_l = \hat{\epsilon}_l$.

2. Real and pseudo atomic wave functions agree beyond a chosen core radius $r_c$: $\varphi_l(r) = \varphi_l^{ps}(r)$ for $r \ge r_c$.

3. The integrals from 0 to $R$ of the real and pseudo charge densities agree for $R \ge r_c$ for each valence state (norm conservation): $\langle \varphi_l^{ps} | \varphi_l^{ps} \rangle_R = \langle \varphi_l | \varphi_l \rangle_R$ for $R \ge r_c$, where

$\langle \varphi_l^{ps} | \varphi_l^{ps} \rangle_R = \int_0^R r^2\, |\varphi_l^{ps}(r)|^2\, dr$   (Eq. B-38)

4. The logarithmic derivatives of the real and pseudo wave functions and their first energy derivatives agree for $r \ge r_c$.

Properties 3) and 4) are related through the identity

$-\frac{1}{2} \left[ (r\varphi^{ps})^2\, \frac{d}{d\epsilon} \frac{d}{dr} \ln \varphi^{ps} \right]_R = \int_0^R r^2\, |\varphi^{ps}|^2\, dr$   (Eq. B-39)

They also gave a recipe that allows one to generate pseudopotentials with the above properties:

1. $V_l^{(1)}(r) = V(r)\left[1 - f_1\!\left(\frac{r}{r_{cl}}\right)\right]$   (Eq. B-40)

where $r_{cl}$ is the core radius, $\approx 0.4$-$0.6\, R_{max}$, with $R_{max}$ the outermost maximum of the real wave function.

2. $V_l^{(2)}(r) = V_l^{(1)}(r) + c_l\, f_2\!\left(\frac{r}{r_{cl}}\right)$   (Eq. B-41)

where $c_l$ is determined so that $\epsilon_l = \hat{\epsilon}_l$ in

$\left(-\nabla^2 + V_l^{(2)}(r)\right) w_l^{(2)}(r) = \hat{\epsilon}_l\, w_l^{(2)}(r)$   (Eq. B-42)

3. $\varphi_l^{ps}(r) = \gamma_l \left[ w_l^{(2)}(r) + \delta_l\, r^{l+1} f_3\!\left(\frac{r}{r_{cl}}\right) \right]$   (Eq. B-43)

where $\gamma_l$ and $\delta_l$ are chosen such that $\varphi_l^{ps}(r) \rightarrow \varphi_l(r)$ for $r \ge r_{cl}$ and

$\gamma_l^2 \int \left| w_l^{(2)}(r) + \delta_l\, r^{l+1} f_3\!\left(\frac{r}{r_{cl}}\right) \right|^2 dr = 1$   (Eq. B-44)

4. Invert the Schrödinger equation for $\hat{\epsilon}_l$ and $\varphi_l^{ps}(r)$ to get $V_{v,l}(r)$.

5. Unscreen $V_{v,l}(r)$ to get $V_l^{ps}(r)$:

$V_l^{ps}(r) = V_{v,l}(r) - V_H(n_v) - V_{xc}(n_v)$   (Eq. B-45)

where $V_H(n_v)$ and $V_{xc}(n_v)$ are the Hartree and exchange-correlation potentials of the pseudo valence density. Hamann, Schluter and Chiang chose the cutoff functions $f_1(x) = f_2(x) = f_3(x) = \exp(-x^4)$.

These pseudopotentials are angular-momentum dependent. Each angular momentum state has its own potential, which can be determined independently of the other potentials. It is therefore possible to have a different reference configuration for each angular momentum; this allows one, for example, to use excited or ionic states to construct the pseudopotential for l states that are not occupied in the atomic ground state. The total pseudopotential in a solid state calculation then takes the form

$V^{ps}(r) = \sum_L V_L^{ps}(r)\, P_L$   (Eq. B-46)

where $L$ is a combined index $\{l, m\}$ and $P_L$ is the projector onto the angular momentum state $\{l, m\}$.

II.4.2.1.2. Bachelet-Hamann-Schluter (BHS) form

Bachelet et al. [22] proposed an analytic fit to the pseudopotentials generated by the HSC recipe of the following form:

$V^{ps}(r) = V_{core}(r) + \sum_l \Delta V_l^{ion}(r)$   (Eq. B-47)

$V_{core}(r) = -\frac{Z_v}{r} \sum_{i=1}^{2} c_i^{core}\, \mathrm{erf}\!\left(\sqrt{\alpha_i^{core}}\, r\right)$   (Eq. B-48)

$\Delta V_l^{ion}(r) = \sum_{i=1}^{3} \left(A_i + r^2 A_{i+3}\right) \exp(-\alpha_i r^2)$   (Eq. B-49)

The cutoff functions were slightly modified to $f_1(x) = f_2(x) = f_3(x) = \exp(-x^{3.5})$. They generated pseudopotentials for almost the entire periodic table (within the local density approximation), including generalizations of the original scheme to account for spin-orbit effects in heavy atoms. Their list of atomic reference states is also useful. BHS did not tabulate the $A_i$ coefficients, which are often very large numbers, but another set of numbers $C_i$, where

$C_i = -\sum_{j=1}^{i} A_j\, Q_{ji}$   (Eq. B-50)

and

$A_i = -\sum_{j=1}^{i} C_j\, Q^{-1}_{ji}$   (Eq. B-51)

with

$Q_{ij} = \begin{cases} 0 & i > j \\ \left[ S_{ii} - \sum_{k=1}^{i-1} Q_{ki}^2 \right]^{1/2} & i = j \\ \frac{1}{Q_{ii}} \left[ S_{ij} - \sum_{k=1}^{i-1} Q_{ki} Q_{kj} \right] & i < j \end{cases}$   (Eq. B-52)

where $S_{ij} = \int_0^\infty r^2\, \varphi_i(r)\, \varphi_j(r)\, dr$ and

$\varphi_i(r) = \begin{cases} e^{-\alpha_i r^2} & i = 1, 2, 3 \\ r^2\, e^{-\alpha_{i-3} r^2} & i = 4, 5, 6 \end{cases}$   (Eq. B-53)
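The transformation of Eqs. B-50 to B-53 is, in effect, a Cholesky factorization of the overlap matrix, $S = Q^{\mathsf T} Q$ with $Q$ upper triangular, so that $C = -Q^{\mathsf T} A$ and $A = -(Q^{\mathsf T})^{-1} C$. The following is a minimal numerical sketch of this identity; the Gaussian exponents and the $A_i$ values are illustrative placeholders.

```python
import numpy as np

alpha = np.array([0.5, 1.5, 4.0])       # placeholder Gaussian exponents
r = np.linspace(1e-6, 12.0, 6001)

# Basis of Eq. B-53: exp(-a_i r^2) for i=1..3, r^2 exp(-a_{i-3} r^2) for i=4..6
phi = [np.exp(-a * r ** 2) for a in alpha] + \
      [r ** 2 * np.exp(-a * r ** 2) for a in alpha]
S = np.array([[np.trapz(r ** 2 * fi * fj, r) for fj in phi] for fi in phi])

Q = np.linalg.cholesky(S).T                     # upper triangular, S = Q^T Q
A = np.array([1.0, -2.0, 0.5, 0.1, -0.3, 0.2])  # placeholder A_i coefficients
C = -Q.T @ A                                    # Eq. B-50
A_back = -np.linalg.solve(Q.T, C)               # Eq. B-51, recovers A exactly
assert np.allclose(A, A_back)
```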
II.4.2.1.3. Kerker Pseudopotentials

In this approach [23], pseudopotentials with the HSC properties are also constructed, but instead of using cutoff functions ($f_1$, $f_2$, $f_3$) the pseudo wavefunctions are constructed directly from the all-electron wavefunctions, by replacing the all-electron wavefunction inside some cutoff radius with a smooth analytic function that is matched to the all-electron wavefunction at the cutoff radius. The HSC properties then translate into a set of equations for the parameters of the analytic form. After the pseudo wavefunction has been determined, the Schrödinger equation is inverted and the resulting potential is unscreened. Note that the cutoff radius in this construction scheme is considerably larger than the one used in the HSC scheme; typically it is chosen slightly smaller than $R_{max}$, the outermost maximum of the all-electron wavefunction. The analytic form proposed by Kerker is

$\varphi^{ps}(r) = r^{l+1}\, e^{p(r)}$   (Eq. B-54)

with

$p(r) = \alpha r^4 + \beta r^3 + \gamma r^2 + \delta$   (Eq. B-55)

The term linear in $r$ is omitted to avoid a singularity of the potential at $r = 0$. The HSC conditions can be translated into a set of equations for the parameters $\alpha$, $\beta$, $\gamma$, $\delta$.

II.4.2.1.4. Troullier-Martins Pseudopotentials

The Kerker method was generalized by Troullier and Martins [24] to polynomials of higher order. The rationale behind this was to use the additional parameters (the coefficients of the higher-order terms in the polynomial) to construct smoother pseudopotentials. The Troullier-Martins wavefunction has the following form:

$\varphi^{ps}(r) = r^{l+1}\, e^{p(r)}$   (Eq. B-56)

with

$p(r) = c_0 + c_2 r^2 + c_4 r^4 + c_6 r^6 + c_8 r^8 + c_{10} r^{10} + c_{12} r^{12}$   (Eq. B-57)

The coefficients $c_n$ are determined from:

• norm conservation;

• continuity of the wavefunction and its first four derivatives at $r_c$: for $n = 0, \ldots, 4$,

$\left. \frac{d^n \varphi^{ps}}{dr^n} \right|_{r=r_c} = \left. \frac{d^n \varphi}{dr^n} \right|_{r=r_c}$   (Eq. B-58)

• zero curvature of the screened pseudopotential at the origin:

$\left. \frac{d^2 V^{ps}_{scr}}{dr^2} \right|_{r=0} = 0$   (Eq. B-59)

II.4.2.1.5. Kinetic-Energy-Optimized Pseudopotentials

This scheme is based on the observation that the total energy and the kinetic energy have similar convergence properties when expanded in plane waves. Therefore, the kinetic energy expansion is used as an optimization criterion in the construction of the pseudopotential. This type [25] also uses an analytic representation of the pseudo wavefunction within $r_c$,

$\varphi^{ps}(r) = \sum_{i=1}^{n} a_i\, j_l(q_i r)$   (Eq. B-60)

where the $j_l(q_i r)$ are spherical Bessel functions with $i - 1$ zeros at positions smaller than $r_c$. The values of $q_i$ are fixed such that

$\frac{j_l'(q_i r_c)}{j_l(q_i r_c)} = \frac{\varphi'(r_c)}{\varphi(r_c)}$   (Eq. B-61)

The conditions used to determine the values of $a_i$ are:

• $\varphi^{ps}$ is normalized;

• the first and second derivatives of $\varphi^{ps}$ are continuous at $r_c$;

• $\Delta E_K(\{a_i\}, q_c)$ is minimal, where

$\Delta E_K = -\int d^3r\; \varphi^{ps*}(\mathbf{r})\, \nabla^2 \varphi^{ps}(\mathbf{r}) - \int_0^{q_c} dq\; q^2\, |\varphi^{ps}(q)|^2$   (Eq. B-62)

$\Delta E_K$ is the kinetic energy contribution above a target cutoff value $q_c$. The value of $q_c$ is an additional parameter (like $r_c$) that has to be chosen at a reasonable value. In practice, $q_c$ is varied until $\Delta E_K$ can be minimized to a sufficiently small value.
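The following is a minimal sketch of fixing the expansion wave vectors $q_i$ of Eq. B-60 through the matching condition Eq. B-61 (with the radial derivative taken as $d/dr$, so a factor $q$ appears). The logarithmic derivative of the all-electron wavefunction at $r_c$ would come from an atomic calculation; the value used here is a made-up placeholder.

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

l, r_c = 0, 2.2              # angular momentum and core radius (a.u.), examples
log_deriv_ae = -0.35         # placeholder: phi'(r_c)/phi(r_c) from atomic data

def mismatch(q):
    """q * j_l'(q r_c) / j_l(q r_c) minus the target logarithmic derivative."""
    x = q * r_c
    return q * spherical_jn(l, x, derivative=True) / spherical_jn(l, x) \
        - log_deriv_ae

# Bracket sign changes on a grid and refine with brentq; sign changes at the
# poles of j_l (where the ratio jumps through infinity) are discarded by
# checking the residual at the returned point.
qs = np.linspace(0.05, 12.0, 2400)
vals = np.array([mismatch(q) for q in qs])
roots = []
for a, b, fa, fb in zip(qs[:-1], qs[1:], vals[:-1], vals[1:]):
    if np.isfinite(fa) and np.isfinite(fb) and fa * fb < 0:
        q0 = brentq(mismatch, a, b)
        if abs(mismatch(q0)) < 1e-6:
            roots.append(q0)
print("first q_i:", [round(q, 4) for q in roots[:4]])
```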
II.4.2.2. Pseudopotentials in the Plane Wave Basis

With the methods described in the last section we are able to construct pseudopotentials for the states $l = s, p, d, f$, using reference configurations that are either the ground state of the atom or of an ion, or excited states. In principle, higher angular momentum states could also be generated, but their physical significance is questionable. In a solid or molecular environment there will be wavefunction components of all angular momenta at each atom. The general form of a pseudopotential is

$V(\mathbf{r}, \mathbf{r}') = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} V_l(r)\, P_{lm}(\omega)$   (Eq. B-63)

where $P_{lm}(\omega)$ is a projector onto the angular momentum functions. A good approximation is to use

$V_l(r) = V_{loc}(r) \quad \text{for } l > l_{max}$   (Eq. B-64)

With this approximation one can rewrite

$V(\mathbf{r}, \mathbf{r}') = V_{loc}(r) \sum_{L} P_L(\omega) + \sum_{L} \left[ V_l(r) - V_{loc}(r) \right] P_L(\omega) = V_{loc}(r) + \sum_{L} \delta V_l(r)\, P_L(\omega)$   (Eq. B-65)

where the combined index $L = \{l, m\}$ has been used. The pseudopotential is now separated into two parts: the local (or core) pseudopotential $V_{loc}(r)$ and the non-local pseudopotentials $\delta V_l(r) P_{lm}(\omega)$. Pseudopotentials of this type are also called semilocal, as they are local in the radial coordinate, the nonlocality being restricted to the angular part. The contribution of the local pseudopotential to the total energy in a Kohn-Sham calculation is of the form

$E_{loc} = \int V_{loc}(r)\, n(\mathbf{r})\, d\mathbf{r}$   (Eq. B-66)

It can easily be calculated together with the other local potentials. The non-local part needs special consideration, since in the plane wave basis this operator has no simple structure in either real or reciprocal space. There are two approximations that can be used to calculate its contribution to the energy: one is based on numerical integration and the other on a projection onto a local basis set.

II.4.2.2.1. Gauss-Hermite Integration

The matrix element of the non-local pseudopotential is

$V^{nl}(\mathbf{G}, \mathbf{G}') = \frac{1}{\Omega} \int d\mathbf{r}\; e^{-i\mathbf{G}\cdot\mathbf{r}}\, \Delta V_l(r)\, e^{i\mathbf{G}'\cdot\mathbf{r}} = \sum_L \int_0^{\infty} dr\; \langle \mathbf{G} | Y_L \rangle_\omega\, r^2\, \Delta V_l(r)\, \langle Y_L | \mathbf{G}' \rangle_\omega$   (Eq. B-67)

where $\langle \cdot | \cdot \rangle_\omega$ stands for an integration over the unit sphere. These integrals still depend on $r$. The integration over the radial coordinate is replaced by a numerical approximation,

$\int_0^{\infty} r^2 f(r)\, dr \approx \sum_i w_i\, f(r_i)$   (Eq. B-68)

The integration weights $w_i$ and integration points $r_i$ are calculated using the Gauss-Hermite scheme. In this approximation the non-local pseudopotential becomes

$V^{nl}(\mathbf{G}, \mathbf{G}') = \frac{1}{\Omega} \sum_L \sum_i w_i\, \Delta V_l(r_i)\, \langle \mathbf{G} | Y_L \rangle_\omega \langle Y_L | \mathbf{G}' \rangle_\omega = \frac{1}{\Omega} \sum_L \sum_i w_i\, \Delta V_l(r_i)\, P^{*}_{Li}(\mathbf{G})\, P_{Li}(\mathbf{G}')$   (Eq. B-69)

where the projectors

$P_{Li}(\mathbf{G}) = \langle Y_L | \mathbf{G} \rangle_\omega$   (Eq. B-70)

have been introduced. The number of projectors per atom is the number of integration points (5 to 20, for low to high accuracy) multiplied by the number of angular momenta; for s and p non-local components and 15 integration points this amounts to 60 projectors per atom. The angular integration of the projectors can be done analytically:

$P_{Li}(\mathbf{G}) = \int_\omega Y_L^{*}(\omega)\, e^{i\mathbf{G}\cdot\mathbf{r}_i}\, d\omega = 4\pi\, i^l\, j_l(G r_i)\, Y_L(\hat{\mathbf{G}})$   (Eq. B-71)

where the expansion of a plane wave in spherical harmonics has been used; the $j_l$ are the spherical Bessel functions and $\hat{\mathbf{G}}$ denotes the angular components of the Fourier vector $\mathbf{G}$.

II.4.2.2.2. Kleinman-Bylander Scheme

The other method is based on the resolution of the identity in a local basis set,

$\sum_{\alpha} |\chi_\alpha\rangle \langle \chi_\alpha| = 1$   (Eq. B-72)

where the $\{\chi_\alpha\}$ are orthonormal functions. This identity can be introduced into the integrals for the non-local part:

$V^{nl}(\mathbf{G}, \mathbf{G}') = \sum_L \int dr\; \langle \mathbf{G} | Y_L \rangle_\omega\, r^2\, \Delta V_l(r)\, \langle Y_L | \mathbf{G}' \rangle_\omega = \sum_L \sum_{\alpha, \beta} \langle \mathbf{G} | \chi_\alpha \rangle \int_0^{\infty} dr\; \langle \chi_\alpha | Y_L \rangle_\omega\, r^2\, \Delta V_l(r)\, \langle Y_L | \chi_\beta \rangle_\omega\, \langle \chi_\beta | \mathbf{G}' \rangle$   (Eq. B-73)

The angular integrations are easily performed using the decomposition of the basis into spherical harmonics,

$\chi_\alpha(\mathbf{r}) = \chi_\alpha(r)\, Y_L(\omega)$   (Eq. B-74)

This leads to

$V^{nl}(\mathbf{G}, \mathbf{G}') = \sum_{\alpha, \beta} \langle \mathbf{G} | \chi_\alpha \rangle \int_0^{\infty} dr\; \chi_\alpha(r)\, r^2\, \Delta V_l(r)\, \chi_\beta(r)\, \langle \chi_\beta | \mathbf{G}' \rangle = \sum_{\alpha, \beta} \langle \mathbf{G} | \chi_\alpha \rangle\, \Delta V_{\alpha\beta}\, \langle \chi_\beta | \mathbf{G}' \rangle$   (Eq. B-75)

which is the non-local pseudopotential in fully separable form. The coupling elements of the pseudopotential,

$\Delta V_{\alpha\beta} = \int_0^{\infty} dr\; \chi_\alpha(r)\, r^2\, \Delta V_l(r)\, \chi_\beta(r)$   (Eq. B-76)

are independent of the plane wave basis and can be calculated once for each type of pseudopotential, as soon as the expansion functions $\chi$ are known.
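The following is a minimal sketch of Eq. B-76: the coupling elements are plain radial integrals, computed once per pseudopotential and independent of the plane wave basis. The radial basis functions and the nonlocal potential below are smooth placeholders, not a real pseudopotential.

```python
import numpy as np

r = np.linspace(1e-6, 10.0, 4001)   # uniform radial grid (log grids are common too)

def chi(n):
    """Placeholder normalized-shape radial basis functions chi_alpha(r)."""
    return r ** (n + 1) * np.exp(-r)

dV_l = -1.5 * np.exp(-(r / 1.2) ** 2)   # placeholder nonlocal potential dV_l(r)

basis = [chi(0), chi(1)]
dV = np.empty((2, 2))
for a, chi_a in enumerate(basis):
    for b, chi_b in enumerate(basis):
        # Eq. B-76: dV_ab = int_0^inf chi_a(r) r^2 dV_l(r) chi_b(r) dr
        dV[a, b] = np.trapz(chi_a * r ** 2 * dV_l * chi_b, r)
print(dV)   # small symmetric coupling matrix, reused for every G, G' pair
```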
The final question is what constitutes an optimal set of basis functions $\chi$. Kleinman and Bylander [26] proposed to use the eigenfunctions of the pseudo atom, i.e. the solutions of the atomic reference state calculated with the pseudopotential Hamiltonian. This choice of a single reference function per angular momentum nevertheless guarantees the correct result for the reference state. Assuming that in the molecular environment only small perturbations of the wavefunctions close to the atoms occur, this minimal basis should still be adequate. The Kleinman-Bylander form of the resolution of the identity is

$\sum_L \frac{|\chi_L\rangle \langle \Delta V_l\, \chi_L|}{\langle \chi_L | \Delta V_l | \chi_L \rangle} = 1$   (Eq. B-77)

where the $\chi_L$ are the atomic pseudo wavefunctions. The plane wave matrix elements of the non-local pseudopotential in Kleinman-Bylander form are

$V^{nl}(\mathbf{G}, \mathbf{G}') = \sum_L \frac{\langle \mathbf{G} | \Delta V_l\, \chi_L \rangle \langle \Delta V_l\, \chi_L | \mathbf{G}' \rangle}{\langle \chi_L | \Delta V_l | \chi_L \rangle}$   (Eq. B-79)

Generalizations of the Kleinman-Bylander scheme to more than one reference function were introduced by Blöchl [27] and Vanderbilt [28]; they make use of several reference functions, calculated at a set of reference energies.

When transforming a semilocal pseudopotential into the corresponding Kleinman-Bylander (KB) form, one must make sure that the KB form does not lead to unphysical "ghost" states at energies below or near those of the physical valence states, as these would undermine its transferability. Such spurious states can occur for specific (unfavorable) choices of the underlying semilocal and local pseudopotentials. They are an artefact of the KB-form nonlocality, by which the nodeless reference pseudo wavefunctions need not be the lowest eigenstates, unlike in the semilocal form [29]. Ghost states can be avoided by using more than one reference state or by a proper choice of the local component and of the cutoff radii in the basic semilocal pseudopotentials. The appearance of ghost states can be analyzed by investigating the following properties:

• deviations of the logarithmic derivatives of the energy of the KB pseudopotential from those of the respective semilocal pseudopotential or all-electron potential;

• comparison of the atomic bound-state spectra for the semilocal and KB pseudopotentials;

• ghost states below the valence states are identified by a rigorous criterion given by Gonze et al. [29].

II.4.2.3. Non-linear Core Correction

The success of pseudopotentials in density functional calculations relies on two assumptions: the transferability of the core electrons to different environments and the linearization of the exchange-correlation energy. The second assumption is only valid if the frozen core electrons and the valence states do not overlap. If there is significant overlap between core and valence densities, the linearization leads to reduced transferability and systematic errors. The most straightforward remedy is to include "semi-core states" in addition to the valence shell, i.e. one more inner shell (which is, from a chemical viewpoint, an inert core level) is treated explicitly. This approach, however, leads to quite hard pseudopotentials, which call for high plane wave cutoffs. Alternatively, it was proposed to treat the non-linear parts of the exchange-correlation energy $E_{xc}$ explicitly [30]. This idea does not increase the cutoff but considerably ameliorates the problems mentioned above. The non-linear core correction dramatically improves results on systems with alkali and transition metal atoms. For practical applications, one should keep in mind that the non-linear core correction should only be applied together with pseudopotentials that were generated using the same energy expression.
II.4.2.4. Ultrasoft Pseudopotentials Method

For norm-conserving pseudopotentials the all-electron wavefunction is replaced inside some core radius by a soft nodeless pseudo wavefunction, with the crucial restriction that the pseudo wavefunction must have the same norm as the all-electron wavefunction within the chosen core radius; outside the core radius the pseudo and all-electron wavefunctions are identical. It is well established that good transferability requires a core radius around the outermost maximum of the all-electron wavefunction, because only then are the charge distribution and the moments of the all-electron wavefunction well reproduced by the pseudo wavefunction. Therefore, for elements with strongly localized orbitals (like first-row, 3d and rare-earth elements) the resulting pseudopotentials require large plane wave basis sets. To work around this problem, compromises are often made by increasing the core radius significantly beyond the outermost maximum of the all-electron wavefunction. This is usually not a satisfactory solution, because the transferability is always adversely affected when the core radius is increased, and for any new chemical environment additional tests are required to establish the reliability of such soft pseudopotentials.

An elegant solution to this problem was proposed by Vanderbilt [31]. In his method, the norm-conservation constraint is relaxed and, to make up for the resulting charge deficit, localized atom-centered augmentation charges are introduced. These augmentation charges are defined as the charge difference between the all-electron and pseudo wavefunctions; for convenience they are pseudized to allow an efficient treatment on a regular grid. The core radius of the pseudopotential can now be chosen around the nearest-neighbor distance, independent of the position of the maximum of the all-electron wavefunction. Only for the augmentation charges must a small cutoff radius be used, in order to restore the moments and the charge distribution of the all-electron wavefunction accurately. The pseudized augmentation charges are usually treated on a regular real-space grid, which is not necessarily the same as the one used for the representation of the wavefunctions. The relation between the ultrasoft pseudopotential method and other plane-wave-based methods was discussed by Singh [32].

Tools

The most important factors determining the level of theory of a quantum-mechanical computer experiment are the choice of an exchange-correlation functional, the choice of a basis set for the expansion of the Kohn-Sham orbitals, charge and spin densities and potentials, and the algorithms adopted for solving the Kohn-Sham equations and for calculating energies, forces and stresses. The degree to which the chosen functional accounts for many-electron correlations and the completeness of the basis set determine the accuracy of the calculation; the numerical algorithms are decisive for its efficiency. Our calculations, both static and ab initio molecular dynamics, were performed using the Vienna Ab initio Simulation Package (VASP), as detailed in the following chapters.

Chapter III
Static ab initio calculations (0 K)

In this chapter, the results of the static calculations of the substitutions of the Ti and Zr transition metals in the bulk as well as at the ∑5 (310)[001] grain boundary are presented. To this end, the chapter is divided into three sections. In section I, the computational details of our static calculations are given.
Section II gives the results for the energetics of point defects in the bulk of the D0₃-Fe₃Al structure. Emphasis is also placed on the importance of relaxation when determining the formation energies of the site-preference configurations. In the last section, the behaviour of the two transition metals Ti and Zr at the ∑5 (310) grain boundary is discussed; the formation energies, the interface energies and the electronic charge density transfer associated with the presence of the impurity have been investigated.

I. Computational details

I.1. Computational method

Our calculations were performed with Vanderbilt-type UltraSoft PseudoPotentials (USPP) [1] within the formalism of Density Functional Theory as implemented in the Vienna Ab initio Simulation Package (VASP) [2,3]. The electronic wave functions were expanded in plane waves with a kinetic energy cutoff of 240 eV. The USPPs employed in this work explicitly treated eight valence electrons for Fe (4s²3d⁶), three valence electrons for Al (3s²3p¹) and four valence electrons for both Ti (4s²3d²) and Zr (5s²4d²). Spin polarization was taken into account in all calculations. The Generalized Gradient Approximation (GGA-PW91) in the Perdew and Wang version [4,5] was employed for the evaluation of the exchange-correlation energy. The Brillouin zone integrations were performed using Γ-centered Monkhorst-Pack [6] k-point meshes and the Methfessel-Paxton [7] technique with a 0.3 eV smearing of the electron levels. Tests were carried out for the Fe₃Al unit cell (four atoms per cell) using different k-point meshes to ensure convergence of the total energy to a precision of 10⁻³ eV/atom. As a result, a (16x16x16) k-point mesh was adopted for the Fe₃Al unit cell. Depending on the structure and the size of the cell, the number of k-points changes as a consequence of the resulting modification of the Brillouin zone size. For the total energy calculations of the Fe₃Al supercells with 32 atoms (2x2x2 unit cells) and 108 atoms (3x3x3 unit cells), (8x8x8) and (4x4x4) k-point meshes were chosen, respectively. For the grain boundary configurations, the Monkhorst-Pack grid was adapted to the supercell parameters using (4x2x5) k-points. The ground-state atomic geometries were obtained by minimizing the Hellmann-Feynman forces with a conjugate-gradient algorithm, until the force on each atom reached a convergence level of 0.1 eV/Å and the external pressure was lower than 0.3 GPa.
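The following is a minimal sketch of how the convergence tests just described can be scripted. The helper `total_energy` is hypothetical, not a real VASP API; in practice one would write INCAR/KPOINTS files and parse the total energy from the OUTCAR of each single-point run.

```python
def total_energy(encut_eV, kmesh):
    """Hypothetical wrapper around one single-point DFT run (eV/atom)."""
    raise NotImplementedError("wrap your DFT code here")

def converge(values, run, tol=1e-3):
    """Return the first parameter whose energy change is below tol (eV/atom)."""
    last = None
    for v in values:
        e = run(v)
        if last is not None and abs(e - last) < tol:
            return v
        last = e
    return values[-1]

# Example usage (commented out, since it needs a working DFT wrapper):
# encut = converge([180, 210, 240, 270, 300],
#                  lambda ec: total_energy(ec, (16, 16, 16)))
# nk = converge([8, 12, 16, 20],
#               lambda n: total_energy(240, (n, n, n)))
```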
I.2. Structural properties

All the results presented below were obtained with the computational settings described in the previous paragraph. However, for the Fe₃Al-D0₃ unit cell, additional calculations were conducted with alternative settings to gauge the overall accuracy of the reported results. Specifically, test calculations were performed with the Local Density Approximation (LDA) as well as the GGA, in order to compare the two approximations and determine the influence of the analytical representation of the exchange-correlation functional. The values are given in Table III-1, along with the available theoretical and experimental data for D0₃-Fe₃Al.

Figure III-1. The total energy within LDA and GGA according to the pseudopotential calculations.

Not surprisingly, the GGA functional gives a larger value for the equilibrium lattice constant than the LDA functional, as it is well known that the frequent overbinding of transition metals and their compounds in LDA is in most cases corrected by GGA. Furthermore, fairly good agreement with the experimental values [13] is found for the lattice parameter and the bulk modulus calculated with USPP-GGA. The present lattice parameter of 5.76 Å is also comparable to other theoretical values obtained with GGA exchange-correlation functionals [13,14,15]. In summary, the results show that the ultrasoft pseudopotential and the GGA exchange-correlation approximation successfully reproduce the structural properties of the D0₃-Fe₃Al structure.

I.3. Energetics

In this section, we first define the energies used in our calculations. To examine the preferential site occupations of the transition metal atoms, both in the bulk and at the grain boundary, their formation energies $E_f$ were evaluated at different sites as

$E_f = E_{solid}(\mathrm{Fe_3AlX}) + E_{\mathrm{Fe\ or\ Al}} - E_{solid}(\mathrm{Fe_3Al}) - E_X$   (Eq. III-1)

for substitutional impurities and

$E_f = E_{solid}(\mathrm{Fe_3AlX}) - E_{solid}(\mathrm{Fe_3Al}) - E_X$   (Eq. III-2)

for interstitial impurities. The formation energy for vacancies was evaluated using the following equation [16]:

$E_f = E_{solid}(\mathrm{Fe_3Alv}) + E_{\mathrm{Fe\ or\ Al}} - E_{solid}(\mathrm{Fe_3Al})$   (Eq. III-3)

In these equations, X and v represent the T.M. atoms and vacancies, respectively. $E_{solid}(\mathrm{Fe_3AlX})$ (solid referring to the bulk as well as the G.B.) is the energy of the supercell containing one substitutional or interstitial impurity, and $E_{solid}(\mathrm{Fe_3Al})$ is the energy of the pure supercell without defect. $E_{\mathrm{Fe\ or\ Al}}$ (Fe or Al being the atom which is substituted) and $E_X$ are the calculated total energies of the pure metals in their equilibrium lattices: bcc Fe, fcc Al, hcp Ti and hcp Zr. Note that, since it is very unlikely that Ti and Zr reside on interstitial sites in the bulk, Eq. III-2 will only be used for the analysis of Ti and Zr interstitials within the G.B.

The grain boundary energy $\gamma_{GB}$ of the undoped system is defined as

$\gamma_{GB} = (E_{GB} - E_{bulk}) / 2A$   (Eq. III-5)

where $E_{GB}$ and $E_{bulk}$ are the total energies of the grain boundary and bulk supercells, respectively, and A is the area of the interface (the factor of ½ accounts for the presence of two symmetrically equivalent grain boundaries per simulation cell). In our computer simulations, the energies $E_{GB}$ and $E_{bulk}$ are calculated for simulation blocks consisting of an equal number of atoms of each species. For the impurity-doped system, the G.B. energy is evaluated from

$\gamma_{GB} = (E_{GBX} - E_{\mathrm{Fe_3AlX}}) / 2A$   (Eq. III-6)

In this formulation, as presented in [17,18], $E_{GBX}$ and $E_{\mathrm{Fe_3AlX}}$ are the total energies of the supercells with the impurity-doped G.B. and the impurity-doped bulk Fe₃Al, respectively.
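The following is a short sketch of the bookkeeping behind Eqs. III-1 to III-6. All inputs are total energies (in eV) from separate DFT runs; no numerical values from the thesis are hard-coded.

```python
def e_form_subst(e_doped, e_host_atom, e_pure, e_impurity_atom):
    """Eq. III-1: substitutional impurity formation energy."""
    return e_doped + e_host_atom - e_pure - e_impurity_atom

def e_form_interstitial(e_doped, e_pure, e_impurity_atom):
    """Eq. III-2: interstitial impurity formation energy."""
    return e_doped - e_pure - e_impurity_atom

def e_form_vacancy(e_vac, e_host_atom, e_pure):
    """Eq. III-3: vacancy formation energy."""
    return e_vac + e_host_atom - e_pure

def gamma_gb(e_gb, e_bulk, area, n_boundaries=2):
    """Eqs. III-5/III-6: grain-boundary energy per unit interface area.

    The factor n_boundaries = 2 reflects the two symmetrically equivalent
    boundaries per simulation cell.
    """
    return (e_gb - e_bulk) / (n_boundaries * area)
```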
II. Point defects in bulk Fe₃Al

We first investigate the properties of point defects in the bulk of the D0₃-Fe₃Al structure. Because of their atomic size, Ti and Zr must occupy substitutional rather than interstitial sites within bulk Fe₃Al. The three types of possible point defects, i.e. substitutional impurities or vacancies on the Al, FeI and FeII sites (see Fig. III-2), were modeled within supercells containing 32 and 108 atoms. It must be noted that, when substituting an impurity atom (Ti or Zr) on a site of the 32-atom supercell, the corresponding impurity concentration is about 3 at.%; for the 108-atom supercell, the impurity concentration is about 1 at.%.

II.1. Importance of relaxation

In order to gain some insight into the effect of relaxation and to underline its importance for calculations on intermetallics, Fig. III-3 compares the formation energies of the substitutional defects calculated using unrelaxed and relaxed supercells (with 32 atoms). It is clear that relaxation leads to an overall reduction of the formation energies. Additionally, though the curves have the same profile, it appears clearly that the differences between the formation energies determined with unrelaxed and relaxed supercells are lower for the Ti substitutions than for Zr. At first approach, this difference may be attributed to the difference in size between the Ti and Zr atoms (the Ti atom is smaller than the Zr one). What is important to stress here is the magnitude of the differences: while the overall differences between the relaxed and unrelaxed values are in the range of 15 to 50% for the various Ti substitutions, relaxation decreases the formation energy by more than 140% in the case of the substitution of a Zr atom on a FeII site. For vacancies, the effect of relaxation is more pronounced when the vacancy is created on the Al site. This can also be related to the difference in size between the Fe and Al atoms: since the Al atom is bigger than the Fe one, the void created by a vacancy on the Al site is larger and, consequently, the relaxation is also more important. Thus, in the following, only data obtained from relaxed configurations are discussed.

Figure III-3. Energy profile of the point-defect formation energies for unrelaxed and relaxed D0₃-Fe₃Al supercells.

II.2. The site preference of point defects in the bulk D0₃-Fe₃Al

The formation energies of substitutions and vacancies in relaxed supercells are given in Table III-2. It can be seen that, although vacancies have positive formation energies in the three configurations, the lowest formation energy corresponds to the vacancy on the FeII site. This trend agrees well with the conclusion of Mayer et al. [19], obtained with an ab initio pseudopotential method. Our value of the formation energy of a vacancy on the FeII sublattice (1.09 eV) is also comparable to the value of 1.18 ± 0.04 eV obtained by Schaefer et al. [20] from positron annihilation experiments. By contrast, Jiraskova et al. [21], who investigated the Mössbauer spectra of a Fe₇₂Al₂₈ compound, found that the Fe vacancies appear on the FeI sublattice, which conflicts with our result and with that of Mayer et al. [19]. Since the results of Jiraskova were obtained at ambient temperature, unlike our calculations and those of Mayer et al. [19], which were carried out at 0 K, the difference in the site occupation of vacancies could be related to the effect of temperature. To check this possibility, we calculated the defect energies of the vacancies on the FeI and FeII sites at 300 K, using ab initio molecular dynamics. The defect energies are defined as the change in energy of the pure cell when a defect replaces a FeI or FeII atom, namely,

$E_{def} = E(d) - E_p$   (Eq. III-7)

where $E(d)$ and $E_p$ are the energies of the supercell with and without the defect, respectively.
The preferential site then corresponds to the case where energy is gained by replacing the FeI/FeII atom. More details about the calculation of the temperature dependence of the defect energies will be presented in Chapter IV; some results are shown here in Table III-3, together with the defect energies calculated at 0 K. The values of the defect energies are larger than those of the formation energies because the total energy of pure iron in its equilibrium lattice (bcc Fe) does not enter the defect energy, in contrast to Eq. III-3. As seen from Table III-3, the FeII site remains the favoured site for vacancies even at 300 K. This means that temperature changes in the range 0 to 300 K do not modify the stability of vacancies in the bulk D0₃-Fe₃Al intermetallic compound. Here, both the static ab initio results and the molecular dynamics results at 300 K conflict with the Mössbauer conclusions. This indicates that the disagreement with the experimental results is not related to the effect of temperature, but is most likely related to the high sensitivity of these alloys to the vacancy concentration. Indeed, the difference between the formation energies of 3% and 1% of vacancies, calculated in supercells with 32 and 108 atoms, is substantial: from Table III-2, one can see that the formation energies on the different sites are reduced by about 60% when the vacancy concentration is reduced from 3% (32 atoms) to 1% (108 atoms). Additionally, the difference between the formation energies of vacancies on the FeI and FeII sites is also reduced with the vacancy concentration: it is about 0.6 eV for 3% of vacancies, while for a concentration of 1% it is only about 0.03 eV.

The calculated formation energies for the substitutions in the relaxed scheme are also shown in Table III-2. For the three configurations of the Ti substitution, negative formation energies are obtained. This means that these defects are stable; in other words, 1 at.% as well as 3 at.% of Ti impurities are miscible in Fe₃Al. This is consistent with the solubility data obtained experimentally [22] and from the phase diagram calculations available in the literature [23]. The calculations also show that the most stable configuration for Ti is to reside on the FeI site, with the lowest formation energies for both concentrations; this correctly reproduces the experimentally observed behaviour [24]. For the case of Zr, the most favoured configuration is also the substitution on a FeI site. However, contrary to Ti, at 3% concentration all the formation energies are positive, indicating an energy cost to introduce Zr on a substitutional site in the bulk. It is interesting to note that these values tend to diminish when the calculation is carried out with 108 atoms, i.e. when the defect concentration decreases. This is consistent with the experimental observations indicating that the solubility of Zr is below 0.1% and that second phases, such as the Laves phase Zr(Fe,Al)₂ and the τ₁ phase Zr(Fe,Al)₁₂, precipitate upon adding small amounts of Zr [25].

III. Impurity segregation at grain boundaries

III.1. Crystal structures and location of structural defects

The atomic structure of the ∑5 (310)[001] symmetric tilt grain boundary and the location of the structural defects considered here are described in this section.
III.2. Site preference and effect of Ti and Zr on the grain boundary cohesion

The calculated impurity formation energies of the two transition metals in the grain boundary are given in Table III-4 for each type of defect. This recalls the behaviour of B atoms, which preferred to be inserted along a ∑5 (310) grain boundary at locations where they could be surrounded by Fe near neighbors [26]. There is, however, a clear difference between the behaviours of the Ti and Zr impurity atoms here. For Ti interstitials the formation energy is positive for both configurations, indicating that it costs energy to insert Ti at the grain boundary interface. Comparatively, the atomic configuration becomes more stable (−0.18 eV) when Zr is inserted at a (1) site (i.e. the iron-rich configuration). As for B in FeAl [26], the negative formation energy (−0.18 eV) obtained here for Zr indicates that this atom is more stable when inserted among Fe neighbors at the grain boundary than in the bulk of the material. It is important to recall that small additions of these two elements tend to bring some ductility to iron aluminides [27,28,29].

In the case of substitutions, the results given in Table III-4 and the comparison of the formation energies within the bulk material and within the grain boundary show that Ti is stable with the same order of magnitude both in the bulk and at the grain boundary interface. However, the Zr impurity, which is clearly not stable in the bulk, prefers to segregate at the grain boundary, with lower formation energies. Thus, the effect of Zr at the grain boundary really has to be taken into account to understand the overall properties of these ternary iron aluminides. Comparing the Al and Fe sites, it is clear that substitution of the transition metals on Al sites is never the favoured configuration. For the Fe substitutions, as in the bulk, the Ti impurity always prefers to reside on the FeI sites rather than the FeII ones. The situation is less pronounced for Zr: while Fe sites are always preferred to Al ones, the FeII sites are favoured when the Zr atom is located at the G.B. interface (−0.51 eV) and within the second plane (−0.78 eV). Finally, it is also interesting to note that, for both transition metals, the most stable substitution is obtained on a FeI site located in the first plane away from the interface (−1.32 eV for Ti / −0.97 eV for Zr).

Impurity formation energies (in eV) for different substitution sites in (a) Titanium-doped systems and (b) Zirconium-doped systems.
The influence of the segregated impurities on the interfacial energetics has been calculated for different substitutional configurations using Eq. III-6. Our calculated results are listed in Table III-5; the lowest value is obtained for Zr (0.29 J/m²). It can also be noticed that the interface energies for the Zr-doped grain boundaries are systematically lower than those for the Ti-doped ones [Fig. III-7]. Thus, the principal trend that can be drawn here is that zirconium is a stronger cohesion enhancer than titanium for the ∑5 (310) Fe₃Al grain boundary. Indeed, the presence of Zr instead of Ti on the various sites systematically decreases the interface energy further, by 0.02 to 0.03 J/m², that is, by 5% to 8%.

Table III-5. The calculated interface energies (in J/m²) for impurities in substitutional sites.

In order to investigate the interaction between the impurities and their adjacent atoms, the formation energies were calculated for vacancies on the first-nearest-neighbour sites of a 1FeI site substituted by a transition metal impurity (i.e. the most stable configuration of the doped grain boundary). For comparison, we calculated the formation energies of these vacancies in the clean grain boundary. The results are listed in Tables III-6 and III-7 for the Ti and Zr impurities, respectively. From Table III-6, it can be seen that, for the Ti impurity, the formation energy increases only when a FeII atom is replaced by a vacancy (1.15 eV), compared to the value calculated in the clean grain boundary (1.02 eV). This means that it is more expensive to remove a FeII atom in the presence of the Ti impurity, which indicates that Ti strengthens the interactions with its FeII first neighbours. This may originate from the antiferromagnetic coupling that forms between Ti and its FeII atoms [30]. However, for the case of the Zr impurity (see Table III-7), the creation of the neighbouring vacancies is favoured, indicating that Zr rather weakens the bonds with its Fe neighbours.

III.3. Charge density distribution

In order to gain new insight, at the microscopic level, into how the bonding charge density at the grain boundary differs from that in the bulk, we calculated the charge density difference both for the bulk system and for the grain boundary. The charge density difference is defined as the difference between the total charge density of the solid and the superposition of the atomic charge densities placed at the lattice sites.

III.4. Impurity-induced bonding charge density

To understand the effect of the impurities on the bonding charge properties of the grain boundary, we consider the redistribution of bonding charge induced by the impurity atom when placed at the substitutional site (1FeI). This configuration is considered because it is the most stable one among the different tested configurations. The bonding charge properties can best be described by the difference in charge density between the pure and impurity-doped grain boundaries, namely,

$\Delta\rho(\mathbf{r}) = \rho(\mathrm{Fe_3AlX}) - \sum \rho(\mathrm{Fe}) - \sum \rho(\mathrm{Al}) - \rho(\mathrm{X})$   (Eq. III-8)

The impurity-induced bonding charge density was examined in the (004) plane. As in the clean grain boundary, the atoms closest to the interface show different charge density distributions, because they have moved from their initial positions under the effect of relaxation. This will be discussed in the following sections.
III.5. The relaxation of the clean grain boundary

In this section, the effect of relaxation on the structural deformation of the ∑5 (310) grain boundary is discussed. After examination of all the substitution configurations, it was found that the behaviour of the impurities can be classified into three categories according to their distance from the G.B. interface. Three representative configurations are therefore presented: impurity substitutions within the G.B. interface, substitutions in the first plane from the interface, and substitutions in the second plane from the interface. All the configurations are presented in Appendix B.

IV. Summary and Conclusion

In this chapter, the effects of the Ti and Zr transition metal impurities located in the bulk as well as at the ∑5 (310)[001] tilt grain boundary of the D0₃-Fe₃Al intermetallic compound were studied by means of static ab initio calculations. The main conclusions are as follows:

- Relaxation is extremely important for determining accurate formation energies and the stability of the point defects. For example, a 140% error in the formation energy is obtained for a Zr substitution on a FeII site when the cell is not relaxed.

- In the bulk, the calculated formation energies reveal that the FeII sites are the preferential sites for vacancies, while both Ti and Zr prefer to reside on FeI sites.

- The interface energy of the clean ∑5 (310) interface is 0.36 J/m². The lowest formation energies for the T.M. substitutions are always obtained on FeI sites in the first plane away from the exact grain boundary interface. The Ti and Zr impurities reduce the interface energies on the various sites of this ∑5 (310) grain boundary by an average of 14% and 22%, respectively. However, significant differences between the behaviours of the Zr and Ti atoms were revealed. In particular, Ti can reside in bulk and grain boundary configurations with the same order of stability, whereas Zr is stable within the grain boundary both as an interstitial and as a substitutional element (on FeI and FeII sites). Also, the creation of FeII vacancies in a Ti-doped boundary is energetically costly, while it is favoured in a Zr-doped one.

- The expansion of the G.B. region occurs as an effect of relaxation in the pure grain boundary. The presence of the impurities at the G.B. interface does not affect the relaxation of the grain boundary. For substitutions in the first and second planes, the important displacements correspond to the first-nearest-neighbour atoms of the impurities, while the G.B. region is not affected. The Ti impurity prefers to reside in an iron-rich environment, while the Zr impurity tends to relax towards the grain boundary interface.

Chapter IV
Effect of temperature on the structural stabilities of Ti and Zr in the bulk and at the ∑5 grain boundary: an Ab Initio Molecular Dynamics study

In this chapter, the results of the AIMD calculations of the Ti and Zr substitutions in the bulk and at the ∑5 grain boundary are given. The chapter is divided into three major sections. In the first section (section I), the calculation details together with the preliminary calculations are presented. Section II gives the results for the temperature dependence of the site preference of the two transition metals and their effect on the structural properties of bulk D0₃-Fe₃Al.
The pair distribution functions were also calculated at different temperatures to gain insight into the phase stability of the D0₃ structure upon transition metal additions at higher temperatures. In Section III, the defect energies of the Ti and Zr transition metals are calculated for different configurations within the ∑5 grain boundary, in order to compare their stability with the bulk. In the second part of section III, the effect of temperature on the structural relaxation of the grain boundary is given. Finally, the limitations of the CSL model at higher temperature are also discussed.

I. Calculation details

I.1. Computational methods

In our molecular dynamics simulations, the atomic forces were calculated from Density Functional Theory (DFT) as implemented in the VASP (Vienna Ab initio Simulation Package) code [1,2]. The calculations are based on the Generalized Gradient Approximation (GGA) [3,4] and make use of the UltraSoft PseudoPotential approach (USPP) [5]. The Verlet algorithm was used to integrate Newton's equations of motion with a time step of Δt = 5 fs, and each simulation was run for a total of 200 steps. To ensure that any memory of the initial configuration is completely erased, the first 100 time steps were reserved for equilibrating the system and were discarded from the subsequent analysis. We used a cutoff energy of 250 eV, which was found to be required in order to reach convergence of the total energy. The valence states used are 4s²3d⁶ for Fe, 3s²3p¹ for Al, 4s²3d² for Ti and 5s²4d² for Zr. The large dimensions of the supercells used in our calculations (108 atoms for the bulk supercell and 80 atoms for the ∑5 (310) G.B. supercell) allowed us to limit the sampling of the Brillouin zone to the Γ point. The simulations were performed in the canonical NVT ensemble (particle number, volume and temperature fixed).

I.2. Preliminary calculations

Preliminary test calculations were performed at each temperature in order to obtain the lattice parameter of the D0₃ structure at zero pressure. In a first step, the lattice parameter of the Fe₃Al-D0₃ supercell was dilated/compressed towards the equilibrium lattice constant at zero pressure. From this value of the lattice parameter, a new calculation was carried out to obtain the new pressure. In a second step, the resulting value of the pressure, which is close to zero, was reintroduced into the graph of the evolution of the lattice parameter with pressure to obtain a more accurate lattice parameter, as shown in Fig. IV-1. The residual pressure difference was found to be smaller than 1.5 kBar over the simulated temperature range from 100 to 1100 K. Such a small pressure difference should not introduce significant errors in the comparison of the structural properties at different temperatures.

Figure IV-1. The evolution of pressure as a function of the expansion/compression of the lattice parameter (at temperature T = 100 K). The diamond and square symbols represent the calculated pressure and the statistical uncertainty, respectively.
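The following is a minimal sketch of the two-step zero-pressure search described above: the time-averaged pressure is computed on a small grid of lattice parameters and the lattice parameter is interpolated at P = 0. The (a, P) pairs are placeholders standing in for values read from separate AIMD runs; P(a) is close to linear over such a narrow range, which is what justifies the interpolation.

```python
import numpy as np

a_grid = np.array([5.72, 5.76, 5.80, 5.84])    # lattice parameter (Angstrom)
p_grid = np.array([28.0, 12.0, -2.5, -16.0])   # time-averaged pressure (kBar)

# np.interp needs increasing abscissae, so sort by pressure first.
order = np.argsort(p_grid)
a0 = np.interp(0.0, p_grid[order], a_grid[order])
print(f"zero-pressure lattice parameter ~ {a0:.3f} Angstrom")
```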
I.3. Energetics

In order to examine the temperature dependence of the site preference of the Ti and Zr transition metals, their defect energies were calculated both in the bulk and in the ∑5 (310)[001] grain boundary over a wide range of temperatures (100 to 1100 K). The defect energies are defined as the change in energy of the pure supercell when an impurity replaces an atom in the supercell, namely,

$E_{def} = E_{solid}(\mathrm{Fe_3AlX}) - E_{solid}(\mathrm{Fe_3Al})$   (Eq. IV-1)

where X represents the T.M. impurities and $E_{solid}(\mathrm{Fe_3AlX})$ and $E_{solid}(\mathrm{Fe_3Al})$ (solid referring to the bulk as well as the G.B.) are the energies of the supercell with and without the transition metal impurity, respectively. The preferential site then corresponds to the case where energy is gained by replacing an atom. Unlike the formation energy used in the previous chapter (Chapter III, Section I.3), the energies of the pure elements (Fe, Al, Ti and Zr) do not enter the defect energy. Thus, comparisons (i) between the defect energies of the two transition metals and (ii) between substitutions on the Fe and Al sites are not possible. It is important to recall that the formation energy is defined as

$E_f = E_{solid}(\mathrm{Fe_3AlX}) + E_{\mathrm{Fe\ or\ Al}} - E_{solid}(\mathrm{Fe_3Al}) - E_X$   (Eq. IV-2)

In this equation, $E_{\mathrm{Fe\ or\ Al}}$ (Fe or Al being the atom which is substituted) and $E_X$ are the calculated total energies of the pure metals in their equilibrium lattices: bcc Fe, fcc Al, hcp Ti and hcp Zr. In the following, we only examine the replacement of FeI or FeII sites by the transition metal impurities. The case of substitution on the Al site was not treated, both because the comparison is not possible and because the static calculations (at 0 K) showed that substitutions on the Al sites are not favourable, either in the bulk or at the ∑5 (310)[001] grain boundary.

II. Transition metal impurities in the bulk D0₃-Fe₃Al

II.1. Site preference of the Ti and Zr substitutions

The defect energies of the two transition metals were calculated using a system of 108 atoms (81 atoms of Fe and 27 atoms of Al), i.e. 3x3x3 D0₃-Fe₃Al unit cells, under periodic boundary conditions. By substituting one impurity in this supercell, the impurity concentration is about 1% (exactly 0.92%). The results of the defect energies are summarized in Table IV-1 for the Ti and Zr substitutions. To better indicate the trends, the defect energies for the substitutions on the FeI and FeII sites are also plotted as a function of temperature.

The replacement of Ti on the FeI site leads to a significant gain in energy (−1.62 to −2.55 eV) over the whole temperature range. Comparatively, the replacement on the FeII site is energetically more expensive (+0.05 to −0.7 eV). This also indicates that, over the entire tested temperature range, the Ti impurity is always more stable when seated on the FeI site. For the case of the Zr impurity, the gains in energy are even more substantial (from −2.04 eV downwards).

Figure. The average bond lengths (Å) as a function of temperature (K) in pure, Ti-doped and Zr-doped Fe₃Al (Ti-Al, Ti-FeI, FeII-Al, FeII-FeI and Zr-Al, Zr-FeI, FeII-Al, FeII-FeI).

To quantify the interactions between the Zr impurity and its adjacent atoms at higher temperatures, the defect energies were calculated for vacancies on the first-neighbour sites of a FeII site substituted by the Zr impurity. We chose to perform these calculations for the most stable configurations, i.e. substitutions on the FeII site at 900 K and 1000 K. For comparison, the defect energies of these vacancies were also calculated in the pure supercell. This method is detailed in the previous chapter (Chapter III, Section III).
The results of the defect energies are presented in Table IV-2, together with the relative difference between the defect energies of vacancies in doped and pure Fe₃Al. The defect energies of vacancies in Ti-doped Fe₃Al are also listed. The results show that the vacancies are more favoured in Zr-doped Fe₃Al than in the pure supercell: at 900 K, their defect energies decrease by 39% and 66% when the vacancies are created on the Al and FeI sites, respectively. This suggests that the presence of Zr on a FeII site weakens both the Fe-Al and the Fe-Fe bonds. Despite a small increase in the vacancy defect energy at 1000 K, it remains low compared to the case of the Ti additions. One can see from Table IV-2 that it is energetically more expensive (about +60%) to create a vacancy on the FeI site when the FeII site is occupied by the Ti impurity. Comparatively, the defect energy of a vacancy created on the Al site is 65% smaller than in pure Fe₃Al at 900 K. Furthermore, the Ti-Al average bond length is about 2.6 Å, compared to a Ti-Fe average bond length of 2.45 Å. The occupancy of a FeII site by Ti therefore not only leads to reduced interactions but also induces strain on the neighbouring Al atoms. Ti atoms therefore occupy the FeI site even at higher temperatures. The above results reveal that the behaviour of the Zr impurity differs from that of the Ti one: at higher temperature (about 900 K), while the Ti impurity remains stable on a FeI site, the Zr impurity can reside on both the FeI and FeII sites with the same order of stability.

II.2. Structural and stability results

II.2.1. Equilibrium lattice parameters

Figure IV-5. The temperature dependence of the lattice parameter (Å) of D0₃-Fe₃Al.

From Fig. IV-5 it can be seen that a small decrease in the lattice parameter takes place between 100 and 300 K, before the curve levels off at 400 K. This small decrease may be due to the changes of the magnetic moment at these temperatures. According to a theoretical calculation [7] performed with the TB-LMTO code, there is a close relationship between the Fe magnetic moment and the lattice parameter in the FeAl intermetallic compounds. On the other hand, magnetization measurements [8] and neutron diffraction analysis [9] have reported a magnetization decrease with temperature below 170 K in D0₃-Fe₃Al. Fig. IV-6 shows our calculated magnetic moment of the Fe atoms as a function of temperature. In agreement with the experimental results [8,9], a small decrease in the magnetic moment is observed below 200 K. Another interesting point is the increase in the magnetic moment that sets in smoothly from about 600 K and sharpens at 800 K. This is related to the structural disorder that occurs in the D0₃ structure above about 830 K, and it is also reflected in the lattice parameter: from Fig. IV-5, it can be seen that from about 300 K the lattice parameter increases gradually with temperature up to 700 K, while between 700 and 900 K the rate of increase is slightly larger. Thus, in agreement with the experimental results, there is a direct relationship between the magnetic and structural properties of the D0₃-Fe₃Al intermetallic compound.

Figure IV-6. The temperature dependence of the total magnetic moment in D0₃-Fe₃Al.

Figure IV-7. The fractional length change 10²×ΔL/L as a function of temperature: present work, experiment [10] and Debye model [11].
From Fig. IV-7 it can be seen that our results are very close to the experimental data, which indicates the validity of the AIMD method used. Our results are also comparable with the pioneering results obtained by Seletskaia et al. [11], who used the Debye model to take the effect of temperature into account. At low temperatures, the fractional length changes calculated using the Debye model are consistent with our results as well as with the experimental data. However, an increasing divergence is observed as the temperature increases. This is due to the growing contribution of the optical phonons to the thermal expansion, since the anharmonic effects (the vibrational contributions) are neglected in the Debye model. Considerable effort has lately gone into understanding the role of lattice vibrations in the thermodynamics of chemical ordering in binary substitutional alloys [12,13]. It has been suggested in this connection [14,15] that the contribution of the vibrational entropy to the phase stability of alloys can be quite significant. Nix and Shockley [16] first suggested that the state of order could affect the lattice vibrations through a change in the Debye temperature. Surprisingly enough, the vibrational entropy contribution remained largely neglected in favour of the configurational entropy for quite some time; several theoretical calculations [17,18] followed, emphasizing its significance. This indicates that the approach used in the present work is rather powerful in taking into account the effect of temperature in this kind of material.

Figure. Lattice parameters (Å) as a function of temperature (K) for pure Fe₃Al and for Ti- and Zr-doped Fe₃Al.

It is clear that the addition of Ti and Zr leads to an increase of the lattice parameter. It also appears that the effect of the Zr addition on the calculated lattice parameter is more important than that of Ti. This difference is related to the difference in size between the Zr and Ti atoms (the Ti atom is smaller than the Zr one). What is important to stress here is the profile of the curves above 800 K. For the case of the Ti substitution, the lattice parameter increases linearly up to the highest temperatures. Comparatively, the rate of increase becomes lower for the pure Fe₃Al compound and for the Zr-doped compound between 900 and 1000 K; there is also a crossover between the curves corresponding to the Ti- and Zr-doped compounds between 1000 and 1100 K. This suggests that the Zr impurity does not affect the structural stability of pure Fe₃Al from about 800 K, despite the large increase that it brings to the lattice parameter. In order to explore in more detail the effect of these transition metals on the stability of the D0₃ structure, we examine in the following section the pair distribution functions for the pure Fe₃Al structure and for the Ti- and Zr-doped compounds, respectively.

II.2.2. Pair distribution function

The Pair Distribution Function (PDF), or pair correlation function, g(r), is a very suitable measure to analyze the structure of a material. Its variation with temperature and pressure gives information about structural phase transitions. Assuming a homogeneous repartition of the particles in space, g(r) represents the probability of finding a particle in the shell dr at the distance r from another particle. By discretizing space in intervals dr
II.2.2. Pair distribution function

The pair distribution function (PDF), or pair correlation function, g(r), is a very suitable quantity for analyzing the structure of a material: its variation with temperature and pressure gives information about structural phase transitions. Considering a homogeneous repartition of the particles in space, g(r) represents the probability of finding a particle in the shell dr at the distance r from another particle (Fig. IV-10). By discretizing the space in intervals dr (Fig. IV-10), it is possible to compute, for a given atom, the number of atoms dn(r) at a distance between r and r + dr of this atom:

dn(r) = (N/V) g(r) 4π r² dr    (Eq. IV-4)

where N represents the total number of particles, V the volume and g(r) the radial distribution function. In this notation the volume of the shell of thickness dr is approximated by V_shell = (4/3)π(r + dr)³ − (4/3)πr³ ≈ 4π r² dr. By distinguishing the chemical species it is possible to compute the partial radial distribution functions g_αβ(r):

g_αβ(r) = dn_αβ(r) / (4π r² ρ_α dr),  with ρ_α = N_α/V = N c_α/V    (Eq. IV-5)

where c_α represents the concentration of species α. These functions give the probability density for an atom of the α species to have a neighbour of the β species at a given distance r. In the following, we have computed the evolution of g_αβ(r) with temperature (T = 100-1000K) in order to investigate the effect of temperature on the structural phase stability of the D0 3 structure.

II.2.2.1. Pair distribution functions for pure Fe 3 Al

Fig. IV-11 shows the PDFs g(r) in D0 3 -Fe 3 Al for the Fe-Fe [Fig. IV-11(a)], Al-Al [Fig. IV-11(b)] and Fe-Al [Fig. IV-11(c)] pairs. From the comparison of each profile with the corresponding g(r) at 100K, it can be seen that all the peaks of the g(r) become broader when the temperature increases, owing to the larger thermal motion of the atoms in the supercell. For g_FeFe(r), the noticeable feature is that the characteristic double-peak structure in the range 2.5-3 Å disappears as the temperature reaches 800K. The first peak is located at around 2.5 Å ((√3/4)a, the distance between a FeI atom and its eight first nearest neighbour FeII atoms). This peak decreases and widens gradually with increasing temperature; however, its position is almost unchanged, meaning that the distance between the first nearest neighbour Fe atoms remains constant. The second peak is located at around 2.87 Å (a/2, the distance between a FeII atom and its second nearest neighbour FeII atoms). With increasing temperature its position shifts slightly towards 3 Å and its height clearly decreases. In the Al-Al PDF curve [Fig. IV-11(b)], the first peak is located around 4.1 Å ((√2/2)a, the distance between the first nearest neighbour Al atoms in the D0 3 -Fe 3 Al structure), and its height decreases with increasing temperature. Furthermore, a new peak appears around 3.9 Å in the profile of g_AlAl(r) from about 800 K. The appearance of this new peak indicates that the distance between the first nearest neighbour Al atoms decreases as a consequence of the large displacements of the Al atoms in the supercell, and hence that the order of the D0 3 superlattice is affected from this temperature. This result is remarkable as it points to a direct correlation with the experimentally observed phase transition (D0 3 -B2) that occurs around 820K in the Fe 3 Al compound. As shown in Fig. IV-11(c), the disappearance of the second peak from 800K is also noticed in the g_FeAl(r) pair distribution function.

II.2.2.2. Pair distribution functions for Ti- and Zr-doped Fe 3 Al

Based on the results obtained from the defect energy calculations, the pair distribution functions were first calculated for the Ti and Zr substitutions on the FeI site in the D0 3 structure; Figs. IV-12 and IV-14 represent the corresponding PDFs for the (a) Fe-Fe, (b) Al-Al and (c) Fe-Al pairs. For the case of Ti substitutions, the g_AlAl(r) (Fig. IV-13) remains unchanged up to 1000K, indicating that, in agreement with the experimental observations, 1% of Ti increases the stability of the D0 3 structure up to 1000K. For the case of Zr, and knowing that the stability of Zr on the FeII site increases at higher temperatures and tends to become equivalent to that of the FeI substitution, the pair distribution functions were also calculated for Zr placed on the FeII site. As shown in Fig. IV-15, the profiles of the two g_AlAl(r) curves show no significant difference between the substitutions on the FeI and FeII sites. This indicates that the presence of Zr, whether on a FeI site or a FeII site, has no effect on the stability of the D0 3 structure.
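A minimal sketch of how a partial PDF of the Eq. IV-5 type can be accumulated from an MD snapshot is given below; a cubic periodic supercell is assumed, and the normalisation follows the shell-volume approximation above:

```python
# Sketch: partial pair distribution function g_ab(r) in the spirit of Eqs. IV-4/IV-5.
# Assumes a cubic periodic supercell of edge `box`; positions are (n, 3) arrays in Angstrom.
import numpy as np

def partial_pdf(pos_a, pos_b, box, r_max, dr):
    """Histogram b-neighbours around each a-atom, normalised by 4*pi*r^2*dr*rho_b."""
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for ra in pos_a:
        d = pos_b - ra
        d -= box * np.round(d / box)               # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r > 1e-8], bins=edges)[0]   # skip the self-distance
    rho_b = len(pos_b) / box**3                    # number density of species b
    shell = 4.0 * np.pi * edges[:-1]**2 * dr       # shell volume ~ 4*pi*r^2*dr
    shell[0] = np.inf                              # avoid dividing by zero at r = 0
    return edges[:-1] + dr / 2, counts / (len(pos_a) * shell * rho_b)
```

In practice such histograms would be averaged over the retained MD steps at each temperature before comparing peak positions, as done for Figs. IV-11 to IV-15.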
III. Transition metals segregation in the ∑5 (310)[001] grain boundary

After having determined the differences in behaviour between the two transition metals (Ti and Zr) in the bulk, their stabilities in the ∑5 (310)[001] grain boundary are treated in the following. The calculations were performed using a supercell with 80 atoms; more details about the method used for the construction of the grain boundary and the choice of the size of the supercell are given in the previous chapter (Chapter III, Section III-1). The configurations of the transition metal substitutions considered in our calculations are given in Fig. IV-16. They are grouped in three categories: (i) three substitutions in the grain boundary interface, (ii) three substitutions in the first plane from the interface and (iii) three substitutions in the second plane from the interface. Unlike the bulk, each calculation was run for 400 molecular dynamics steps, in order to reach a compromise between the size of the supercell and the computation time. The first 300 steps were reserved for equilibrating the system and were discarded from the subsequent analysis.

III.1. Site preference of Ti and Zr in the ∑5 (310)[001]

Because of the large number of configurations to be taken into account in the ∑5 (310)[001] grain boundary and the time-consuming AIMD calculations, the defect energies of the two transition metal impurities were determined only for three temperatures (300 K, 600 K and 900 K), using Eq. IV-1. However, unlike the bulk, the defect energies were also calculated for the transition metal substitutions on the Al site in addition to the substitutions on the FeI and FeII sites. These calculations were carried out to determine the site preference of the transition metals, including the Al configurations, and then to examine the relaxation of the grain boundary. It is important to recall that direct comparisons (i) between the defect energies of the two transition metals and (ii) between substitutions on the Fe and Al sites are not possible. The results are represented in Figs. IV-17 and IV-18 for substitutions on the (a) Al sites, (b) FeI sites and (c) FeII sites, taking into account the distance from the G.B. interface. For comparison, the values obtained from the static calculations at 0K (Chapter III) are also recalled.

Figure IV-17. The calculated defect energies of the Ti impurity in the ∑5 (310)[001] grain boundary at 300 K, 600 K and 900 K, for substitutions on (a) Al sites, (b) FeI sites and (c) FeII sites, taking into account the distance from the G.B. interface.

Figure IV-18. The calculated defect energies of the Zr impurity in the ∑5 (310)[001] grain boundary at 300 K, 600 K and 900 K, for substitutions on (a) Al sites, (b) FeI sites and (c) FeII sites, taking into account the distance from the G.B. interface.

As shown in Fig. IV-17(a) for the Ti-on-Al configurations, for the three temperatures the most stable configuration corresponds to substitution in the G.B. interface. For the Fe configurations, a comparison between the substitutions of Ti on the FeI and FeII sites [Figs. IV-17(b) and (c)] shows that, for the three temperatures, the defect energies are generally smaller for the FeI substitutions, except for the three configurations 2FeII at 300K (−3.33 eV), 1FeII at 600K (−3.77 eV) and 0FeII at 900K (−6.44 eV), for which the Ti impurity occupies the FeII sites with lower defect energies. The fact that the Ti impurity occupies FeII sites near the interface may be related to the effect of the environment, knowing that the Ti impurity prefers to occupy sites where it is surrounded only by Fe atoms as first neighbours (i.e. an iron-rich environment). Note also that, contrary to the bulk, the geometry of the grain boundary changes greatly under the effect of temperature, which leads to changes in the first nearest neighbours of the Ti impurity. The structural environment of the impurities will be discussed below (Section III.2).
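The defect energies quoted above follow from total-energy differences between doped and reference supercells (Eq. IV-1). Since the exact form of Eq. IV-1 is not restated in this chapter, the chemical-potential bookkeeping in the minimal sketch below is an assumption, and all numbers are illustrative only:

```python
# Sketch of a substitutional defect energy of the Eq. IV-1 type (assumed form):
# E_def = E(doped supercell) - E(pure supercell) + mu(removed species) - mu(added species)
def defect_energy(e_doped, e_pure, mu_removed, mu_added):
    return e_doped - e_pure + mu_removed - mu_added

# Illustrative numbers only (eV): e.g. Ti replacing one FeII atom in the 80-atom GB supercell.
print(defect_energy(e_doped=-652.40, e_pure=-648.10, mu_removed=-8.30, mu_added=-10.20))
```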
On the other hand, the classification of the most favourable configurations for the three temperatures, taking into account the distance from the interface, is Fe substitution in the second plane at 300K (2FeII) → the first plane from the interface at 600K (1FeII) → the G.B. interface at 900K (0FeII). This indicates that the stability of the Ti impurity changes with increasing temperature: while at intermediate temperature (300K) it is stable in a configuration close to the bulk, with increasing temperature the Ti impurity tends to occupy a configuration in the grain boundary interface. In other words, a higher temperature is required for the Ti impurity to be stable at the G.B. interface.

For the case of Zr substitutions, it can be seen from Fig. IV-18 that the most favourable configurations correspond to the substitutions in the G.B. interface (0Al, 0FeI and 0FeII) for the three temperatures. Contrary to the Ti configurations at the grain boundary, the Zr impurity thus prefers to reside in the G.B. interface even at intermediate temperature. It is important to recall that, from the static calculations at zero temperature, the most favourable configurations were found to be the substitutions in the first plane from the grain boundary interface (1Al, 1FeI and 1FeII) for the two transition metals. Therefore, a small increase in temperature (to the intermediate temperature) leads to the migration of the Zr impurity to the G.B. interface, whereas for the Ti substitutions a higher temperature (~900K) was needed to cause relaxation to the grain boundary. Now, the comparison between the defect energies for substitutions on the FeI and FeII sites [Figs. IV-18(b) and (c)] shows that at intermediate temperature (300K) the Zr impurity prefers to occupy the FeI sites, while at higher temperatures it tends to occupy the FeI and FeII sites with nearly equal energies. This indicates that, as in the bulk, the stability of Zr on the FeII site increases with increasing temperature. From the calculated defect energies, it is clear that the two transition metals have different behaviours: while Ti is more stable in the bulk and requires a higher temperature to relax to the grain boundary, the Zr impurity prefers to segregate at the grain boundary, with lower defect energies, even at intermediate temperature.

III.2. The effect of temperature on the structural relaxation of the ∑5 grain boundary

In this section, the influence of temperature on the structural deformations of the Ti- and Zr-doped grain boundaries will be treated. Before attempting to model the geometrical relaxation of the grain boundary induced by the impurity segregation, it is important to examine the relaxation of the pure grain boundary structure.

III.2.1. Relaxation of the clean grain boundary

To quantify the relaxation of the atoms we have calculated their displacements in the supercell. The displacements were calculated as the difference between the initial positions and the positions averaged over the last 100 MD steps. Only the last 100 steps were considered, after having verified that the atoms settle into an almost stable configuration once relaxed towards their final positions. For example, Fig. IV-19 represents the positions of the atoms during the relaxation at 300K for (a) all 400 molecular dynamics (MD) steps, (b) the last 200 MD steps and (c) the last 100 MD steps. It is clear from Fig. IV-19(b) that the apparent motion of the atoms is less important when the first 200 steps are removed, and even less important when only the last 100 steps are considered [Fig. IV-19(c)]. This suggests that the atoms have relaxed to their final positions after the first 300 thermalization steps.

Fig. IV-20 represents the displacements of the atoms obtained from the static calculations at 0K and from the AIMD calculations at the three temperatures 300K, 600K and 900K. It is important to note, before further analysis, that the scales indicating the magnitude of the displacement vectors are rather different between 0K, 300K-600K and 900K. Clearly, the relaxations of the grain boundary structure with temperature are important when compared to that at 0K. For the temperatures 300 and 600K, the displacements of the atoms are in the range 0.2-1.6 Å. Comparatively, for the relaxation at 0K, the displacements are between 0 and 0.35 Å, i.e. on average 70% smaller than at 300K. This is due to the larger vibrations of the atoms as an effect of temperature: increasing the temperature increases the internal energy, which is reflected in an increase of the average motion of the atoms in the system. For the relaxation at 900K, the displacements are even more important and can reach up to 2.5 Å (≈ the distance between Fe nearest neighbour atoms in Fe 3 Al) in the interface region.
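A minimal sketch of the displacement analysis just described, under the assumption of a cubic periodic supercell and coordinates that are not wrapped back into the box during the run:

```python
# Sketch: per-atom displacement magnitudes, taken as the difference between the
# positions averaged over the last 100 MD steps and the initial positions.
import numpy as np

def displacements(traj, ref, box, n_avg=100):
    """traj: (n_steps, n_atoms, 3) unwrapped positions; ref: (n_atoms, 3) initial positions."""
    mean_pos = traj[-n_avg:].mean(axis=0)   # time average over the last n_avg steps
    d = mean_pos - ref
    d -= box * np.round(d / box)            # minimum-image convention (cubic box edge `box`)
    return np.linalg.norm(d, axis=1)        # one displacement magnitude per atom
```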
Fig. IV-21 gives the relative difference between the average displacements at each temperature and the average positions in the relaxed grain boundary at 0K. One can see that the displacements of the atoms at 900K are very important, about 90%, which indicates that a local disorder occurs in the grain boundary supercell at this temperature. This trend is consistent with the experimental and theoretical results presented in the literature. Some authors indeed consider, from experimental results on grain boundary relaxation, that pronounced relaxation takes place when the temperature is higher than about T 0 ≅ 0.4 T m (T m being the melting temperature) [19]. Note that, for the Fe 3 Al intermetallic compound, T 0 ≅ 614K ≅ 0.4 × 1536K; this is consistent with our estimation, which shows a sharp change between 600 and 900K, and indicates that significant local disordering only occurs above this temperature. Also, according to the results of molecular dynamics simulations for bicrystals, the CSL structure becomes disordered above this temperature [20,21]. This signifies that T 0 is the transformation temperature at which the structure of the grain boundary transforms from a CSL structure to a disordered one. Thus, in the following, the study will be limited to the structural relaxation at the intermediate temperature of 300K.

Figure IV-21. Average displacements of the atoms in the relaxed supercells at 0K, 300K, 600K and 900K. The relative differences are marked in the graph.

As reported by T.S. Ke et al. [22], a coincidence lattice grain boundary constructed on the basis of geometrical considerations cannot remain stable when the temperature increases, because atomic overlaps or crowding of atoms will be produced in the boundary plane, so that atomic readjustments or rigid translations will take place to reduce the energy. Then, although the periodicity of the boundary can be maintained, it is expected that the coincidence sites will no longer coincide with atoms. Especially when the misorientation is large, or when there are deviations from coincidence site lattice misorientations, the disordered grain boundary region formed can be complicated.

Fig. IV-22 gives the relaxed pure grain boundary (a) at 0K and (c) at 300K; the displacements of the atoms from their initial positions are represented in Fig. IV-22(b). At first sight, the magnitude of the displacements is such that the centre of the simulation box (away from the G.B.) has a different aspect. This different aspect is in fact partially related to the image of the grain boundary in Fig. IV-22(c) corresponding to the superposition of four different planes contained in the depth of the simulation box, so that even small displacements of the atomic positions lead to a higher number of atoms being visible in Fig. IV-22(c). However, the comparison between Figs. IV-22(a) and (c) shows that, despite the apparently important relaxation of the atoms from their initial positions, the global structure of the ∑5 (310) is preserved; in particular, the misorientation angle of the grain boundary is not affected.

III.2.2. Relaxation of the doped grain boundary at 300K

Now we turn to the structures of the grain boundaries doped with the substitutional Ti and Zr transition metal impurities. First of all, we examine the displacements of the impurities in the different substitutional configurations. Figs. IV-23(a) and (b) show the calculated displacements of the impurities, with the configurations classified according to their positions from the grain boundary interface; for comparison, the calculated displacements of the Ti and Zr impurities at 0K are also represented in Figs. IV-23(c) and (d), respectively. The first point to be made is that the displacements of the impurities at 300K are more important than those at 0K: at 0K the displacements are between 0.1 and 0.6 Å, while at 300K the amplitudes are between 0.35 and 1.1 Å. On the other hand, it can be seen that the most significant displacements take place when the impurities are incorporated in the first and second planes from the grain boundary interface. These displacements, between 0.8 and 1.1 Å, are close to the distance between two parallel planes (0.92 Å) in the grain boundary supercell, suggesting that the impurities tend to relax towards the grain boundary interface. It is important to recall that, at 0K, the largest displacements of the impurities were found for the substitutions on the FeII sites [Figs. IV-23(c) and (d)]. The reason is that the impurities, particularly Ti, in these configurations (an iron-poor environment, 4FeI+4Al) tend to relax towards an iron-rich environment. However, at 300K, it can be seen from Figs. IV-23(a) and (b) that the displacements of the impurities are important for the substitutions on the FeII sites as well as on the Al and FeI sites.
Additionally, in the first and second planes from the interface, the displacements of the impurities on the Al and FeI sites are larger than those on the FeII sites, despite the fact that these configurations are initially richer in iron. Knowing that the atoms in the grain boundary supercell have moved from their initial positions, it is very likely that the environment around the impurities has changed due to the large displacements of the first nearest neighbours; in this case, it is likely that the impurities also tried to move towards a new iron-rich environment. To check this possibility we have analyzed the environment (the first nearest neighbours) corresponding to each configuration after relaxation. Fig. IV-24 represents the structural environment distributions for the different configurations of the Ti and Zr substitutions; for comparison, the structural environments of the impurities in the unrelaxed grain boundary are also represented, and the defect energies obtained in Section III.1 are also listed to get more insight into the most stable configurations. For the configurations in the second plane from the interface (2Al, 2FeI and 2FeII), the structural environments become similar to that of the bulk.

For the case of the Ti substitutions, the comparison between the defect energies of the three configurations (i.e. G.B. interface, first plane and second plane) for each type of substitution (i.e. Al, FeI and FeII) shows that the most stable configurations correspond to the substitutions with an iron-rich environment. The three most stable configurations are, depending on the nature of the substitution: substitution on the 0Al site within the G.B. interface with (5FeII+3FeI) as first neighbours, substitution on the 1FeI site in the first plane from the interface with (3FeII+2FeI+2Al) as first neighbours, and substitution on the 2FeII site in the second plane from the interface with (3FeII+3FeI+2Al) as first neighbours. On the other hand, the comparison between the substitutions on the FeI and FeII sites reveals that the most stable configuration corresponds to the substitution on the FeII site in the second plane from the interface (2FeII), with the lowest defect energy. This indicates that Ti, in addition to the nature of the environment, prefers to occupy a configuration close to the bulk. Comparatively, for the case of the Zr substitutions, the most stable configurations correspond to the substitutions in the G.B. interface (0Al, 0FeI and 0FeII). Except for the substitution on the 0FeI site with (5FeII+3Al), the other configurations correspond to substitutions with iron-rich environments. This indicates that, contrary to the Ti impurity, the Zr atom prefers to reside in the configurations within the G.B. interface whatever the nature of its environment.
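A minimal sketch of how such first-neighbour environments (e.g. "5FeII+3FeI") can be extracted from the relaxed configurations is given below; the 3.0 Å cutoff is an assumption, chosen between the first- and second-neighbour shells of D0 3 -Fe 3 Al:

```python
# Sketch: classify the first-neighbour shell of an impurity after relaxation.
import numpy as np
from collections import Counter

def first_shell(pos, species, i_imp, box, cutoff=3.0):
    """pos: (n_atoms, 3); species: labels such as 'FeI', 'FeII', 'Al'; cubic box edge `box`."""
    d = pos - pos[i_imp]
    d -= box * np.round(d / box)              # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    near = (r > 1e-8) & (r < cutoff)          # exclude the impurity itself
    return Counter(s for s, keep in zip(species, near) if keep)  # e.g. {'FeII': 5, 'FeI': 3}
```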
IV. Summary and conclusion

In this chapter, an ab initio molecular dynamics study was carried out to investigate the effect of temperature on the structural stabilities of Ti and Zr impurities in the bulk and in the ∑5 (310)[001] grain boundary of the D0 3 -Fe 3 Al intermetallic compound. The results of the defect energies indicate that the stability of Ti on the FeI site increases with temperature, while Zr impurities tend to occupy the FeI and FeII sites nearly equally at higher temperatures. The calculated defect energies of vacancies created at the first nearest neighbour sites of the impurities allow us to conclude that the Ti impurity strengthens the interaction with its first-neighbour Fe atoms, whereas Zr additions weaken the interactions with their first nearest neighbours compared to the Fe-Al and Fe-Fe bonds in pure Fe 3 Al. The calculated pair distribution functions in pure Fe 3 Al reveal that the structural stability of the D0 3 phase is affected from about 800K.

We have also shown that, in agreement with the experimental results, there is a direct relationship between the magnetic moment and the lattice parameter of the D0 3 -Fe 3 Al intermetallic compound: the structural disorder of the D0 3 phase starting from about 800K alters the values of the magnetic moments of the Fe atoms. In agreement with the trends observed experimentally, it is found here that 1% of Ti increases the stability of the D0 3 structure up to 1000K, whereas the Zr addition does not affect the stability of the D0 3 structure. The calculated defect energies of the impurities in the ∑5 (310) grain boundary show that the Ti impurity prefers to reside in an iron-rich environment further away from the interface at the intermediate temperature of 300K; it becomes stable at the G.B. interface only at high temperature (900K). The results also show that, contrary to the Ti impurity, Zr segregates at the G.B. interface, with the lowest defect energies, even at intermediate temperatures. The relaxation of the ∑5 (310) grain boundary at the temperatures of 300K and 600K is 70% more important than that at 0K; however, the structural geometry of the interface remains unchanged. In agreement with the theoretical assumptions and experimental data for metals, the relaxation of the grain boundary becomes pronounced only above 0.4 T m (T m being the melting temperature). At the intermediate temperature of 300K, the segregation of impurities in the ∑5 (310) grain boundary affects its relaxation in an irregular manner, and a local disorder occurs at the grain boundary interface.

Conclusion

Combining low density with creep and corrosion resistance, the D0 3 -ordered iron aluminides are promising materials for a wide variety of medium- to high-temperature structural applications. However, their application is still limited by their room-temperature intergranular brittleness and their low strength and creep resistance at higher temperatures. Efforts to improve the high-temperature mechanical properties have often included ternary additions to Fe 3 Al, in order to extend the temperature range over which the D0 3 phase is stable. In this work, the effects of the Ti and Zr transition metals on the stability of the D0 3 structure were studied by means of ab initio calculations. Knowing that, in these compounds, small additions of transition elements such as Zr can strengthen the grain boundary cohesion, the behaviour of these two transition metals when placed at the ∑5 (310)[001] grain boundary was then compared. Our study was performed using both static ab initio and ab initio molecular dynamics calculations, the latter to take the effect of temperature into account.

The results obtained from the static ab initio calculations can be summarized as follows. The analysis of site occupancy in the bulk confirms, consistently with previous literature results, that the FeII sites are the preferential sites for vacancies, while both Ti and Zr prefer to reside on FeI sites. The positive formation energies calculated for all Zr substitutions suggest, however, that Zr has very little miscibility in bulk D0 3 -Fe 3 Al, while 3% of Ti impurities are miscible in Fe 3 Al. The interface energy of a clean ∑5 (310) interface was found to be 0.36 J/m². The presence of transition metal impurities on various sites is found to reduce the interface energy by about 14% and 22% for Ti and Zr, respectively.
The maximum expected reductions are obtained when the transition metals are located on a FeII site in the first plane away from the exact interface. This suggests that both the Ti- and Zr-doped grain boundaries are more stable than the parent "clean" grain boundary. The interface energies for the Zr-doped grain boundaries are systematically lower than for the Ti-doped boundary, so that the stabilization induced by Zr is more important than that induced by Ti. The interface energy also depends on the relaxation of the multilayers of the G.B. interface: the larger the interface expansion, the higher the interface energy.

At the grain boundary, the analysis of the interstitial configurations indicates that the most favourable site for both Zr and Ti is the one for which the transition metal impurity interacts only with Fe atoms as first neighbours. Interestingly, the negative formation energy (−0.18 eV) obtained for Zr indicates that this atom is more stable when inserted on such a site at a ∑5 (310) grain boundary than in the bulk of the material; comparatively, Ti is clearly more stable within the bulk than inserted at the grain boundary. Contrary to Ti, which is not stable as an insertion, the results indicate that Zr is stable within the grain boundary both as an insertion and as a substituting element (on FeI and FeII sites). This brings a high potential for introducing Zr at the grain boundary, with a large domain of stability in terms of the exact location within a relaxed grain boundary.

The bonding charge distribution in the boundary region is different from that in the bulk due to the different atomic rearrangement. For the clean grain boundary, the bonding normal to the grain boundary develops between the FeI atoms in the (001) planes, whereas in the (002) plane the accumulation of interstitial bonding charge across the neighbouring FeII pairs is parallel to the grain boundary. The transition metal impurities (when substituted), in their most stable configuration, are found to reduce the general bonding with their first-neighbour FeI atoms in the (004) plane and to enhance the bonding charge normal to the interface between the FeII atoms in the pure FeII plane. Therefore, the beneficial effect of the impurities on the cohesion of the ∑5 (310)[001] grain boundary may originate from the FeII-FeII covalent bonding normal to the interface in the pure FeII planes, which holds the two grains together.

Selected results from the ab initio molecular dynamics are summarized below. The analysis, as a function of temperature (100-1100 K), of the site preference for substitutions of Ti and Zr impurities on the FeI and FeII sites of Fe 3 Al has revealed interesting differences between the two transition metal elements: Ti is more stable on a FeI site and the stability of this site increases with temperature, whereas Zr impurities tend to occupy the FeI and FeII sites nearly equally at high temperature. The calculated bond lengths between the impurities and their first nearest neighbours show that the Ti impurity induces a strain on its neighbouring Al atoms when it is placed on the FeII site. In addition, it was found to be more expensive to create a vacancy on a Fe site that is first nearest neighbour to the Ti impurity than on an Al one, which indicates that the Ti impurities strengthen the interactions with their first Fe nearest neighbour atoms. For the case of Zr additions, however, the vacancies are favoured on both the Fe and Al sites.
Therefore, the Zr impurities weaken the interactions with their first nearest neighbours in the Fe 3 Al structure. The appearance of new peaks in the Al-Al pair distribution function calculated for pure Fe 3 Al from about 800K indicates that the stability of the D0 3 -Fe 3 Al structure is affected from this temperature and that local disorder occurs. For the case of Ti substitutions, the pair distribution function g_AlAl(r) remains unchanged up to 1000K, indicating that, in agreement with the experimental observations, 1% of Ti increases the stability of the D0 3 structure up to 1000K. For the case of Zr substitutions, on the other hand, the analysis of the pair distribution functions shows that the Zr additions have no effect on the stability of the D0 3 structure, whether Zr is placed on the FeI or the FeII sites. The calculated defect energies of the impurities in the ∑5 (310)[001] grain boundary show that the Ti impurity prefers to occupy the iron-rich configurations further away from the interface at the intermediate temperature of 300K; the relaxation of the Ti impurity to the G.B. interface occurs only at high temperature (900K).

The relaxation of the grain boundary becomes significant as an effect of temperature. It was found that the relaxation at intermediate temperatures is 70% more important than that at 0K; however, despite the large displacements of the atoms, the structural stability of the ∑5 (310)[001] grain boundary remains unchanged. From about 600K the relaxation becomes pronounced, 90% more important than that at 0K, and disorder occurs in the grain boundary supercell. Based on the theoretical and experimental assumptions, this temperature corresponds to the transformation temperature T 0 ≅ 0.4 T m (T m , the melting temperature) at which the CSL grain boundary structure transforms into a disordered structure. Thus, in our study, the analysis of the structural geometries induced by the segregated impurities was limited to the intermediate temperature of 300K. It was found that the segregated impurities (Ti and Zr) affect the relaxation process of the grain boundary: the relaxation of both the Ti- and Zr-doped grain boundaries occurs in an irregular manner. However, despite the bigger size of the Zr impurity, the distortions created by the Ti addition are more important than those produced by the Zr segregation, because the Ti impurity tends to strengthen the interactions with its first-neighbour Fe atoms.

On a more general basis, the work carried out in this PhD brings additional information to the complex field dealing with the understanding of the effect of transition metal additions in intermetallic compounds. Indeed, it is worth remembering that the iron aluminide based intermetallic compounds still present several open questions, despite the numerous experimental and theoretical studies in the literature. For example, it is well known that their ductility and fracture toughness can be modified by the addition of ternary elements, and that the associated grain boundary segregation can change the fracture mode in several compounds of the Fe-Al system. However, little is known from the microscopic point of view about dislocation nucleation, mobility or pile-up at grain boundaries, and about the influence of interstitial and substitutional solutes. A lot of work is still required before mastering the modelling of crack-tip plasticity or the atomistics of brittle fracture in the presence of ductilizing additives. The type of work carried out in this thesis is just one of the numerous stages needed to understand, in the future and in combination with experimental investigation, the effect of segregated elements on the ductility/hardening of the FeAl intermetallic compounds. Our work, using first-principles density functional theory calculations, has shown that Ti and Zr are two transition metals having significantly different effects in Fe 3 Al. Their site preferences with temperature have been determined in the bulk and for a specific ∑5 grain boundary. Such information on the location of site defects, and on the way different transition metals relax a grain boundary, is the type of information that will be required by mechanical engineers and metallurgists dealing with dislocation interactions to understand plasticity (or the lack of it) in these alloys.

Fig. 1. The description of tilt GBs in the framework of edge dislocations.
A model which has received much success in the interpretation of experimental data is the coincidence site lattice (CSL) model [4,5]. This is the model upon which the present work is based, so it seems necessary to explain the basic idea of the CSL in more detail.

The coincidence site lattice

In general the CSL represents lattice points of two discrete lattices that coincide if one imagines that the two lattices interpenetrate each other. If both crystals have the same crystal structure and are aligned without any misorientation or translation, then the CSL would be the underlying lattice of the material. Introducing a misorientation between the two lattices, described by a rotation matrix R, will only give CSLs for certain misorientation angles and axes. For a better understanding of the underlying mathematics, let us recall here that the position of an atom i can only be found in real space if its coordinates (x i , y i , z i ) as well as the unit vectors e x , e y and e z spanning the actual coordinate system are known. Here r i = (x i , y i , z i ) only gives the coordinates of atom i with respect to a certain coordinate system, and its real-space vector would be given by r i = x i e x + y i e y + z i e z . The same is true for two misoriented crystals that interpenetrate each other: for either of the two crystals the internal coordinates (x i , y i , z i ) of any atom i are exactly the same when bearing in mind that they are defined with respect to each coordinate system. This now offers a simple way of mathematically expressing the basic equation for coincidence, namely

r L2 = R · r L1 = r L1 + t L1    (Eq. 1)

where r L1 represents the internal coordinates of a certain lattice point with respect to coordinate system No. 1 and t L1 is a translation vector of lattice No. 1. Here one should note that the CSL is confined to discrete lattice points. In order to find the vectors that span the CSL, Eq. 1 needs to be rewritten, and one thus obtains Eq. 2; inserting the translation vectors of lattice No. 1 into Eq. 2 then gives the CSL vectors:

R · r L1 − r L1 = (R − 1) r L1 = t L1  ⇔  r L1 = (R − 1)⁻¹ t L1    (Eq. 2)

Extending the discussion to any points r L1 and r L2 within the lattices leads to the concept of the O-lattice [6]. It is a more general concept than the CSL, and the CSL is a sub-lattice of the O-lattice. There are several different approaches to describe a misorientation relationship between two coordinate systems; in this work the misorientation relationship is expressed by a rotation axis [H, K, L] and a rotation angle θ. For certain misorientation relationships between two crystals, depending on the rotation axis and angle, well-defined CSL lattices exist that are characterized by a single parameter, namely their ∑ value. Since this work only deals with cubic materials, the further discussion is restricted to cubic materials. Well-defined misorientations between both crystals can for instance be expressed by a rational Rodrigues vector ρ = (m/n)[H, K, L], where m, n, H, K and L are integers [6]. Here the misorientation angle is not explicitly set but rather defined through the distinct set of m, n, H, K and L. In terms of rotation axis and angle, the Rodrigues vector represents a rotation about an axis [H, K, L] by θ, where θ is given by

tan(θ/2) = (m/n) √(H² + K² + L²)    (Eq. 3)

Furthermore, the ratio of the volume of the primitive CSL cell with respect to the atomic volume of the material is characterized by ∑ and formally given by

∑ = n² + m² (H² + K² + L²)    (Eq. 4)

Eqs. 3 and 4 can be used as master equations to calculate the misorientation angle θ and ∑ for a chosen rotation axis [H, K, L] and integers m and n.
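A minimal sketch of Eqs. 3 and 4, including the reduction of even ∑ values described next, is given below; the ∑5 boundary studied in this work is recovered with ρ = (1/3)[001]:

```python
# Sketch: misorientation angle and sigma value from Eqs. 3 and 4 for a rational
# Rodrigues vector rho = (m/n)[H, K, L] (cubic crystals).
import math

def csl(m, n, H, K, L):
    s2 = H * H + K * K + L * L
    theta = 2.0 * math.degrees(math.atan((m / n) * math.sqrt(s2)))   # Eq. 3
    sigma = n * n + m * m * s2                                       # Eq. 4
    while sigma % 2 == 0:        # divide by 2^beta until the odd sigma is reached
        sigma //= 2
    return theta, sigma

print(csl(1, 3, 0, 0, 1))        # ~36.87 degrees about [001], sigma = 5: the Sigma5 (310) GB
```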
If ∑ as given by Eq. 4 is an even number, ∑ is to be divided by 2^β, with β being the smallest integer for which the resulting (largest odd) ∑ number is obtained. The CSL scheme itself is not unique concerning the misorientation path: theoretically, a given CSL can be generated by 24 different misorientation paths for cubic materials, due to the symmetry operations possible for cubic materials [7]. Concerning the nature of GBs, some general statements can be made with respect to the CSL procedure if one restricts the discussion to one distinct CSL misorientation representation out of the 24 possible. Generally, it can then be stated that a pure twist GB follows the rule that its GB normal and the misorientation axis are parallel. For pure tilt GBs a similar rule exists, namely that the GB normal and the tilt axis are perpendicular to each other. These rules are rather simple and are once again illustrated by Figs. 3(a) and (b); for any pure twist or tilt GB the rule will apply for at least one of the 24 misorientation representations.

Résumé en français

The iron-rich intermetallic alloys of the iron-aluminium system, Fe 3 Al,

I. Computational details

I.1. Calculation methods

Our calculations were performed using Vanderbilt ultrasoft pseudopotentials (USPP) [1]

II. Results of the static calculations at 0K

II.1. Point defects in bulk Fe 3 Al

In this section the results for the point defects in bulk D0 3 -Fe 3 Al are presented. The three types of configurations for impurity substitutions or vacancy formation

II.1.1. Importance of relaxation

In order to get an overview of the relaxation effect and to underline its importance for calculations on intermetallics, Fig. [13] and from the phase diagram calculations available in the literature [14]. The calculation also shows that the most stable configuration for Ti corresponds to the substitution on the FeI site, with the lowest effective formation energies for both types of calculation (of concentration, for example). This correctly predicts the experimentally observed behaviour [15]. For the case of Zr, the most favoured configuration is also the substitution on a site

II.2.2. Preferential sites and the effect of Ti and Zr on grain boundary cohesion

The formation energies of the two transition metals (

III.2. Preferential site of Ti and Zr in the Σ5 (310)[001] grain boundary

Because of the large number of configurations to be taken into account in the Σ5 (310)[001] grain boundary and the prohibitive AIMD computation time, the defect energies of the two transition metals were determined for only three temperatures (300 K, 600 K and 900 K). The defect energies of the two transition metals in the grain boundaries were calculated using Eq. 6. However, contrary to the bulk, the defect energies were also

The κ-Fe 3 AlC structure has the Strukturbericht designation E2 1 (a perovskite-type structure). This carbide is based on the fcc-ordered Fe 3 Al-L1 2 structure, where the iron atoms are located at the centre of each face and the aluminium atoms sit on the corners of the cube (see Fig. I-8). The carbon atom occupies the central octahedral interstitial position formed by the six iron atoms as first nearest neighbours.

Figure I-8. Conventional cell of the κ-Fe 3 AlC carbide.
The carbon atom is represented in white in the octahedral position, the aluminium atoms in grey and the iron atoms in dark grey.

Fe 3 Al crystallizes in a D0 3 -type structure, shown in Fig. I-10. In this structure there are two inequivalent Fe sites with specific neighbour configurations, named the FeI and FeII sites. The former has eight Fe nearest neighbours in an octahedral configuration, and the latter has four Fe and four Al nearest neighbours in a tetrahedral configuration. It is known in Fe 3 Si, which is isomorphous with Fe 3 Al, that transition-metal impurities occupy the FeI or the FeII site

Figure I-13. Lattice parameters of the D0 3 phase in (Fe 1-x M x ) 3 Al as a function of composition x for M = Ti, V, Cr, Mn and Mo. The D0 3 single-phase state is obtained in the range shown by the solid lines [41].

Fig. I-14 shows the D0 3 -B2 transformation temperatures T 0 in (Fe 1-x M x ) 3 Al with M = Ti, V, Cr, Mn and Mo as a function of the average electron concentration e/a. The D0 3 -B2 transformation temperature always increases for values of e/a lower than 6.75 for Fe 3 Al, since the Fe atoms are substituted by transition elements with less than half-filled d states. As mentioned in Section III-4.2, the rate of increase of T 0 for the Mo addition is lower than that for V. In contrast, as shown in Fig. I-14, the curve for the Mo addition almost coincides with that for the V addition in the range of the D0 3 single-phase state when plotted as a function of e/a. The electron concentration effect further demonstrates that the curves for M = V and Mo are rather close to that for M = Ti, despite the sharpest increase in T 0 being observed for the Ti addition. Consequently, the variation of the electron concentration plays a dominant role in determining the values of T 0 . The situation is less clear for M = Cr, since the site preference of Cr is not as clear as for M = Ti, V and Mo. Finally, it was also suggested that the electron concentration effect could predict the increase in T 0 even for additions of two or more kinds of transition elements, if these complex alloys have a D0 3 single-phase structure.

Figure I-14. D0 3 -B2 transformation temperatures in (Fe 1-x M x ) 3 Al as a function of the average electron concentration (e/a) for M = Ti, V, Cr, Mn and Mo [49].

The increase in the D0 3 →B2 transition temperature T c , and consequently in the strength of the Fe 3 Al intermetallic, is related to the increased ordering of the D0 3 superlattice caused by specific site substitutions by the solutes. The most effective solutes in raising T c (Ti, V and Mo) have been found to occupy the FeI site in the D0 3 structure. The situation is different and contradictory for Cr additions, whereas for Zr there is no experimental data on its site preference in D0 3 -Fe 3 Al. Additionally, the majority of the calculations for intermetallic compounds are performed at zero temperature; the effect of temperature is rarely included because of the computational cost.

Stein et al. have measured the flexural fracture strains of as-cast samples as a function of temperature in four-point bending tests, in order to determine their brittle-to-ductile transition temperature (BDTT). The results show that the BDT temperatures of the ternary alloys strongly increase with increasing volume fraction of the second phase τ1.
The effect of the Al content on the BDT of the investigated ternary Fe-Al-Zr alloys is qualitatively very similar to that of the binary Fe-Al alloys, even in the presence of 50 vol% of the second phase τ1. At Al contents below 40 at.% there is only a very weak dependence of the temperature range of the BDT on the Al content, whereas the alloy series with 40 at.% Al shows a strong increase of the BDT temperatures.

Figure I-16. Flexural fracture strains as a function of temperature for two series of Fe-Al-Zr alloys (a) with varying Al content in Fe-Al + 50 vol% Laves phase alloys and (b) with varying volume fraction of the second phase in Fe-40Al + x vol% τ1 phase (the arrows indicate fracture strains above 3%).

The one-particle orbitals ψ i are orthonormal, ⟨ψ i |ψ j ⟩ = δ ij . The corresponding constrained minimization of the total energy with respect to the orbitals,

min {⟨Ψ|H e |Ψ⟩} subject to ⟨ψ i |ψ j ⟩ = δ ij    (Eq. A-29)

can be cast into Lagrange's formalism,

L = −⟨Ψ|H e |Ψ⟩ + ∑ ij Λ ij (⟨ψ i |ψ j ⟩ − δ ij )    (Eq. A-30)

where the Λ ij are the associated Lagrange multipliers. Unconstrained variation of this Lagrangian with respect to the orbitals,

δL/δψ i * = 0    (Eq. A-31)

leads to the well-known Hartree-Fock equations

H HF ψ i = ∑ j Λ ij ψ j    (Eq. A-32)

The diagonal canonical form H HF ψ i = ε i ψ i is obtained after a unitary transformation, and H HF denotes the effective one-particle Hamiltonian. The corresponding equations of motion follow from Eqs. (A-26)-(A-27). Car and Parrinello postulated the following class of Lagrangians [31],

L CP = ∑ I (1/2) M I Ṙ I ² + ∑ i μ ⟨ψ̇ i |ψ̇ i ⟩ − ⟨Ψ|H e |Ψ⟩ + constraints    (Eq. A-35)

to serve this purpose. The corresponding Newtonian equations of motion are obtained from the associated Euler-Lagrange equations, as in classical mechanics, but here for both the nuclear positions and the orbitals; note that ψ i * = ⟨ψ i |.

Statistical ensembles are usually characterized by fixed values of thermodynamic variables such as energy, E; temperature, T; pressure, P; volume, V; particle number, N; or chemical potential, µ. One fundamental ensemble, called the microcanonical ensemble, is characterized by constant particle number N, constant volume V and constant total energy E, and is denoted the NVE ensemble. Other examples include the canonical or NVT ensemble, the isothermal-isobaric or NPT ensemble, and the grand canonical or µVT ensemble. Thus, Eqs. (B-8)-(B-10) together with Eqs. (A-33)-(A-34) define Born-Oppenheimer molecular dynamics within Kohn-Sham density functional theory. For the functional derivative of the Kohn-Sham functional with respect to the orbitals — the Kohn-Sham force acting on the orbitals — and the connection to Car-Parrinello molecular dynamics, see Eq. A-39.

It is essential to make the supercells large enough to prevent the defects, surfaces or molecules in neighbouring cells from interacting appreciably with each other (Fig. II-1). The independence of the configurations can be checked systematically by increasing the volume of the supercell until the computed quantity of interest has converged.

Figure II-2. Schematic illustration of a supercell geometry (a) for a vacancy in a bulk crystalline solid, (b) for a surface and (c) for an isolated molecule. The boundaries of the supercells are shown by dashed lines.
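Returning to the molecular dynamics machinery of Eqs. (A-33)-(A-35), the ionic equations of motion are typically propagated with a velocity Verlet scheme; the sketch below is a generic illustration (not the VASP implementation), where `forces` stands for the electronic-structure call that returns the ionic forces at each step:

```python
# Sketch: velocity Verlet integration of the ionic equations of motion in
# Born-Oppenheimer MD; `forces(x)` is a placeholder for the per-step SCF force call.
import numpy as np

def velocity_verlet(x, v, masses, forces, dt, n_steps):
    """x, v: (n_atoms, 3) arrays (modified in place); masses: (n_atoms, 1) array."""
    f = forces(x)
    for _ in range(n_steps):
        v += 0.5 * dt * f / masses   # first half-kick
        x += dt * v                  # drift
        f = forces(x)                # new forces on the updated geometry
        v += 0.5 * dt * f / masses   # second half-kick
    return x, v
```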
For metallic systems, on the other hand, much denser k-point meshes a precise sampling of the Fermi surface. In these cases the point density can often be accelerated by introducing Sham equations assume a 26), multiply from left with we get the matrix eigenvalue equation ᇱ (Eq. B-31)25) of the wave functions is truncated by) with a kinetic energy lower than a given cutoff (Eq. B-32) Generalized Gradient Approximation formulated by Perdew-Wang functional (GGA-PW91) and the UltraSoft PseudoPotential (USPP) method and plane wave basis set. These calculations were carried out using the Vienna Ab initio Simulation Package (VASP). The VASP code is developed at the Institute fur Materialphysik at the University of Wien by Kresse, Furthmüller and Hafner. More details about the calculation methods will be given in the beginning of the two following chapters. Fig. III- 1 1 Fig. III-1 shows the total energy plotted, as function of the volume of the unit cell, for the GGA and LDA functionals. The solid lines are the result of fit to Birch-Murnaghan equation of state[8]. The equilibrium lattice constants and bulk modulli were determined from these fitted curves. Figure. III- 2 2 Figure. III-2 (a) The un-doped structure of the bulk Fe 3 Al and the three point defects substitutions as well as vacancies on (b) the Al site (c) the FeI site and (d) the FeII site. Fe 3 3 Al was obtained using geometrical rules of the Coincidence Site Lattice model (CSL). An overview about the CSL theory is given in Appendix A. Fig. III-4 gives a view of the resultant cell showing the symmetry of the ∑5 (310) [001] grain boundary. Along the [001] direction, the supercell contains in fact four alternating (001) layers separated by the fourth lattice constant, a 0 . The four layers consist of two pure FeII layers and two mixed FeI/Al layers as shown in Fig. III-4 (b). The cell size was chosen in order to preserve a large amount of bulk crystal between the two interfaces visible in Fig. III-4 (a), and, thereby, reasonable energy convergence. Several calculations were made preliminarily in order to estimate the sufficient number of planes.Following these energy convergence calculations, it was estimated that 20 planes parallel to the grain boundary plane were required. This configuration leads to a total of 80 atoms per calculation cell. It must also be noted that, considering a grain boundary region having a thickness of five atomic planes, the local concentration of impurities at the grain boundary when substituting one atom is about 1.25 at.%. Figure. III- 4 4 Figure. III-4 Atomic structure of the Fe 3 Al ∑5 (310) [001] grain boundary (a) viewed along the [001] direction (b) viewed along the (130) direction. The D0 3 structure being more complex than the B2 one [26], the number of G.B. defects to be taken into account is significantly larger. The location of the G.B. defects is given in Fig.III-5. Figure. III- 5 5 Figure. III-5 Typical (a) substitutional sites and (b) the interstitial sites within the G.B.interface. 4 are also plotted in Figs. III-6 (a) and III-6 (b) by taking into account the distance from the G.B interface (Fig. III-5). For comparison the formation energies of the transition metals substitutions in the bulk (supercell with 108 atoms) are represented in Figs. III-6 (a) and III-6 (b). Figure. III- 6 6 Figure. III-6 Impurity formation energies (in eV) for different substitution sites in (a) Titanium doped systems and (b) Zirconium : Static ab initio calculations(0K) Figure. III- 8 Figure. III- 9 : 89 Figure. 
Figure III-2. (a) The undoped structure of bulk Fe 3 Al and the three point-defect substitutions, as well as vacancies, on (b) the Al site, (c) the FeI site and (d) the FeII site.

The ∑5 (310)[001] grain boundary of Fe 3 Al was obtained using the geometrical rules of the coincidence site lattice (CSL) model; an overview of the CSL theory is given in Appendix A. Fig. III-4 gives a view of the resulting cell, showing the symmetry of the ∑5 (310)[001] grain boundary. Along the [001] direction, the supercell contains four alternating (001) layers separated by a quarter of the lattice constant a 0 . The four layers consist of two pure FeII layers and two mixed FeI/Al layers, as shown in Fig. III-4(b). The cell size was chosen in order to preserve a large amount of bulk crystal between the two interfaces visible in Fig. III-4(a) and, thereby, reasonable energy convergence. Several preliminary calculations were made in order to estimate the sufficient number of planes; following these energy convergence calculations, it was estimated that 20 planes parallel to the grain boundary plane were required. This configuration leads to a total of 80 atoms per calculation cell. It must also be noted that, considering a grain boundary region having a thickness of five atomic planes, the local concentration of impurities at the grain boundary when substituting one atom is about 1.25 at.%.

Figure III-4. Atomic structure of the Fe 3 Al ∑5 (310)[001] grain boundary (a) viewed along the [001] direction and (b) viewed along the (130) direction.

The D0 3 structure being more complex than the B2 one [26], the number of G.B. defects to be taken into account is significantly larger. The location of the G.B. defects is given in Fig. III-5.

Figure III-5. Typical (a) substitutional sites and (b) interstitial sites within the G.B. interface.

The formation energies are also plotted in Figs. III-6(a) and (b), taking into account the distance from the G.B. interface (Fig. III-5). For comparison, the formation energies of the transition metal substitutions in the bulk (supercell with 108 atoms) are represented in Figs. III-6(a) and (b) as well.

Figure III-6. Impurity formation energies (in eV) for different substitution sites in (a) titanium-doped and (b) zirconium-doped systems: static ab initio calculations (0K).

Figure III-8. Difference charge density of bulk Fe 3 Al on (a) the (004) FeI/Al mixed plane and (b) the (001) pure FeII plane. Positive (negative) contours represent increased (decreased) charge density. Contours start from ±0.25 e/Å³.

Figure III-9. Difference charge density of the ∑5 (310) clean grain boundary on (a) the (004) FeI/Al mixed plane and (b) the (001) pure FeII plane. Positive (negative) contours represent increased (decreased) charge density. Contours start from ±0.25 e/Å³.

Figs. III-9(a) and (b) show the charge density difference for the clean ∑5 (310)[001] grain boundary. It can be seen from Fig. III-9(a) that in the grain boundary a depletion of charge density occurs at the interface, and that the directionality of the bonding of the FeI atoms further away from the interface has changed as a result of the misorientation of the crystals by an angle of 36.8° (the angle of the ∑5 (310) grain boundary), except for one atom in the third plane from the interface, which tends to create a bond with its nearest FeI atom in the first plane from the grain boundary interface. The bonding charge distribution in the boundary region is different from that in the bulk due to the different atomic rearrangement: the accumulation of interstitial bonding charge between the FeI atoms across the boundary plane [Fig. III-9(a)] increases the covalent bonding normal to the grain boundary, indicating that these Fe atoms bond covalently across the interface.

The charge density difference on the (001) plane in Fig. III-9(b) shows a different charge distribution between the nearest-neighbour FeII-FeII atoms across the grain boundary plane. The bonding parallel to the interface develops between the FeII atoms (refer to the rectangle), which contributes very little, if anything, to the grain boundary cohesion. This accumulation spans a very thin range and extends only about 0.2 Å away from the grain boundary plane. In the first plane from the interface, it can be seen that the FeII atoms [Fig. III-9(b)] have different charge density distributions. This is due to the effect of relaxation, as will be demonstrated in Section III-5: during the relaxation the atoms move from their initial positions, and since only a portion of the total charge density is represented in the vertical section [Fig. III-9(b)], their charge density distributions in the sections normal to the interface differ.

Figs. III-10(a) and III-10(c) represent the redistribution of the bonding charge density for the Ti-doped grain boundary, and Figs. III-10(b) and III-10(d) for the Zr-doped grain boundary.

Fig. III-11(a) represents the calculated magnitudes of the displacements of the atoms, obtained for each configuration by subtracting the unrelaxed atomic positions from the relaxed ones. The positions of the various atoms (Al, FeI and FeII) are given on the x axis depending on their location (n) away from the exact interface at n = 0; due to the cell symmetry, 0 and ±10 label the two interfaces in the supercell. From Fig. III-11 it can be seen that the largest displacements correspond to atoms located in the vicinity of the interface. This is particularly true for the first and second planes, labelled ±1 and ±2, as well as ±8 and ±9. Note also that the displacements decay away from the grain boundary interfaces (i.e. towards the bulk). The displacement of each atom from its initial position is also represented by a solid arrow in Fig. III-11(b); the large displacements of the atoms in the first and second planes from the grain boundary interface are clearly visible as the largest arrows.
As pointed out in a previous study by Wolf et al. [33], the large atomic displacements seen at the grain boundary interface are mainly due to the strong repulsive forces between the facing atoms on both sides of the mirror plane. This gives rise to a shift of the first and second planes parallel to the interface, causing an expansion of the grain boundary. It was found that the grain boundary expansion significantly affects the kinetics of formation and migration of point defects, as well as the interaction between lattice dislocations and grain boundaries [34,35].

Figure III-11. (a) The calculated displacements of the atoms in the relaxed pure grain boundary as a function of the distance from the interface (0 and −10 label the two interfaces in the supercell); these displacements are represented with solid arrows in (b).

III.5. The relaxation of the doped grain boundary

Figure III-12. The displacements of the impurities (a) Ti and (b) Zr in the different configurations of substitution.

As seen from Figs. III-12(a) and (b), for both the Ti and Zr impurities the largest displacements correspond to the substitutions on the FeII sites. This may be related to the environment effect: in the FeII substitutions, the impurities are surrounded by four FeI and four Al atoms. Based on the assumption that both Ti and Zr impurities prefer to reside on the FeI site, where they are surrounded by eight FeII atoms, their large displacements reveal that the impurities tend to relax towards an iron-rich configuration.

Figure III-13. (a) The displacements of the atoms in the relaxed supercell for the case of the Zr substitution on the 2Al site; these displacements are represented with solid arrows in (b).

Figure III-14. (a) The displacements of the atoms in the relaxed supercell for the case of the Ti substitution on the 2Al site; these displacements are represented with solid arrows in (b).

Figure III-15. (a) The displacements of the atoms in the relaxed supercell with Ti substitutions on the 0Al site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).

Figure III-16. (a) The displacements of the atoms in the relaxed supercell with Zr substitutions on the 0Al site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).

Figure III-17. (a) The displacements of the atoms in the relaxed supercell with Ti substitutions on the 1FeI site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).

Figure III-18. (a) The displacements of the atoms in the relaxed supercell with Zr substitutions on the 1FeI site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).

Figure III-19. (a) The displacements of the atoms in the relaxed supercell with Ti substitutions on the 2FeII site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).
Figure III-20. (a) The displacements of the atoms in the relaxed supercell with Zr substitutions on the 2FeII site, as a function of the positions of the planes parallel to the grain boundary interface; these displacements are represented with solid arrows in (b).

The lattice parameter deduced from the calculations at 100K is indicated by a red arrow in Fig. IV-1.

Figure IV-2. The energy differences (in eV) between the substitutions

Figure IV-3. The bond lengths in the relaxed pure and Ti-doped Fe 3 Al supercells.

Figure IV-4. The bond lengths in the relaxed pure and Zr-doped Fe 3 Al supercells.

Figure IV-5. The calculated lattice parameters at different temperatures for pure D0 3 -Fe 3 Al. The error bars represent the statistical uncertainty.

Figure IV-7. Fractional length change data of D0 3 -Fe 3 Al.

Figure IV-9. Fractional length change data of D0 3 -Fe 3 Al, Ti-doped Fe 3 Al and Zr-doped Fe 3 Al (Ti and Zr placed on the FeI site).

Figure IV-10. Space discretization for the evaluation of the radial distribution function.
Figure IV-11. Pair distribution functions for the (a) Fe-Fe, (b) Al-Al and (c) Fe-Al pairs in pure D0 3 -Fe 3 Al, in the temperature range 100-1000K.

Figure IV-12. Pair distribution functions for the (a) Fe-Fe, (b) Al-Al and (c) Fe-Al pairs in Ti-doped D0 3 -Fe 3 Al (Ti on a FeI site), in the temperature range 100-1000K.

Figure IV-13. The g AlAl (r) for pure and Ti-doped D0 3 -Fe 3 Al (Ti on the FeI site) at 600K, 800K and 1000K.

Figure IV-14. Pair distribution functions for the (a) Fe-Fe, (b) Al-Al and (c) Fe-Al pairs in Zr-doped D0 3 -Fe 3 Al (Zr on the FeI site), in the temperature range 100-1000K.

Figure IV-15. The g AlAl (r) for pure and Zr-doped D0 3 -Fe 3 Al, with Zr on the FeI and FeII sites, at 600K, 800K and 1000K.

Figure IV-16. Sketch showing the nine substitutional sites located within three different planes within the ∑5 (310)[001] grain boundary.
We now turn to the Σ5 (310)[001] grain boundary, whose geometry was described in the previous chapter (Chapter III, Section III-1). The configurations of the transition-metal substitutions considered in our calculations are given in Fig. IV-16. These configurations are grouped in three categories: (i) three substitutions in the grain boundary interface, (ii) three substitutions in the first plane from the interface and (iii) three substitutions in the second plane from the interface.

Figure IV-16. Sketch showing the nine substitutional sites located within three different planes of the Σ5 (310)[001] grain boundary.

Since direct comparisons with the substitutions on the Al sites are not possible, the results are represented separately in Figs. IV-17 and IV-18 for substitutions on the (a) Al sites, (b) FeI sites and (c) FeII sites, taking into account the distance from the G.B. interface. For comparison, the values obtained from the static calculations at 0 K (Chapter III) are also recalled.

Figure IV-17. The calculated defect energies of Ti in the Σ5 (310)[001] grain boundary at 300 K, 600 K and 900 K for substitutions on the (a) Al, (b) FeI and (c) FeII sites, taking into account the distance from the G.B. interface.

Figure IV-18. The calculated defect energies of Zr in the Σ5 (310)[001] grain boundary at 300 K, 600 K and 900 K for substitutions on the (a) Al, (b) FeI and (c) FeII sites, taking into account the distance from the G.B. interface.

As shown in Fig. IV-17(a) for the Ti-on-Al configurations, the most stable configuration at all three temperatures corresponds to substitution in the G.B. interface. For the Fe configurations, a comparison between the substitutions of Ti on the FeI and FeII sites is given in Figs. IV-17(b) and (c). It can be seen from Fig. IV-18 that the most favorable configurations for Zr correspond to the substitutions in the G.B. interface (0Al, 0FeI and 0FeII) for the three temperatures.

Contrary to the case of the Ti configurations at the grain boundary, the Zr impurity prefers to reside in the G.B. interface even at intermediate temperature. It is important to recall that, in the static calculations at zero temperature, the most favorable configurations were found to be the substitutions in the first plane from the grain boundary interface (1Al, 1FeI and 1FeII) for both transition metals. Therefore, even the small increase to the intermediate temperature leads to the migration of the Zr impurity to the G.B. interface.

As an example, Fig. IV-19 represents the positions of the atoms during the relaxation at 300 K for (a) 400 molecular dynamics (MD) steps, (b) the last 200 MD steps and (c) the last 100 MD steps. It is clear from Fig. IV-19(b) that the relaxation of the atoms is less pronounced when the first 200 steps are removed, and even less so when only the last 100 steps are considered [Fig. IV-19(c)]. This suggests that the atoms have relaxed to their final positions after the first 300 thermalization steps of the system.

Fig. IV-20 represents the displacements of the atoms calculated from the static calculations at 0 K and from the AIMD calculations at the three temperatures 300 K, 600 K and 900 K. It is important to note, before further analysis, that the scales indicating the magnitude of the displacement vectors are rather different between 0 K, 300 K, 600 K and 900 K.

Figure IV-19. The atomic positions in the pure Σ5 (310) grain boundary for (a) 400 MD steps, (b) the last 200 MD steps and (c) the last 100 MD steps.

Figure IV-20. The displacements of the atoms in the relaxed grain boundary supercell from their initial positions (a) at 0 K, (b) at 300 K, (c) at 600 K and (d) at 900 K.

Fig. IV-21 gives the relative difference between the average displacements at each temperature and the average positions in the relaxed grain boundary at 0 K. One can see that the displacements of the atoms at 900 K are very large, about ~90%. This indicates that a local disorder occurs in the grain boundary supercell at this temperature, a trend consistent with the experimental and theoretical literature.

Fig. IV-22 gives the relaxed pure grain boundary (a) at 0 K and (c) at 300 K. The displacements of the atoms from their initial positions are represented in Fig. IV-22(b). At first sight, the magnitude of the displacements is such that the center of the simulation box (away from the G.B.) seems to have a different aspect. This different aspect is in fact partially related to the fact that the image of the grain boundary in Fig. IV-22(c) corresponds to the superposition of four different planes contained in the depth of the simulation box, so that even small displacements of the atomic positions lead to a higher number of atoms being visible in Fig. IV-22(c). However, a comparison between Figs. IV-22(a) and (c) shows that, despite the apparently important relaxation of the atoms from their initial positions, the global structure of the Σ5 (310) boundary is preserved. In particular, the misorientation angle of the grain boundary is not affected.

Figure IV-22. (a) The initial positions of the relaxed grain boundary at 0 K, (b) the displacements of the atoms from their initial positions in the relaxed grain boundary at 300 K and (c) the final positions of the atoms in the relaxed grain boundary at 300 K.
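The displacement maps of Figs. IV-19 to IV-22 amount to subtracting the initial atomic positions from the relaxed (or time-averaged) ones, with a minimum-image correction so that atoms crossing the periodic boundaries do not produce spurious jumps. A minimal sketch, assuming a cubic box (the variable names in the usage comment are illustrative):

import numpy as np

def displacements(initial, final, box):
    """Per-atom displacement vectors and magnitudes between two
    configurations of the same cubic periodic box (edge `box`),
    using the minimum-image convention to unwrap boundary crossings."""
    d = np.asarray(final) - np.asarray(initial)
    d -= box * np.round(d / box)
    return d, np.linalg.norm(d, axis=1)

# Typical use for Figs. IV-19 to IV-22:
# avg_300K = trajectory[300:].mean(axis=0)   # positions averaged after the
#                                            # thermalization steps
# vectors, magnitudes = displacements(pos_relaxed_0K, avg_300K, box_edge)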
Fig. IV-23(a) and (b) show the calculated displacements of the impurities in the different substitutional configurations, classified according to their positions with respect to the grain boundary interface. For comparison, the calculated displacements of the Ti and Zr impurities at 0 K are also represented in Fig. IV-23(c) and (d), respectively.

Figure IV-23. The displacements of the impurities (a) Ti and (b) Zr at 300 K, and (c) Ti and (d) Zr at 0 K, in the different substitutional configurations.

Fig. IV-24 represents the structural environment distributions for the different configurations of the Ti and Zr substitutions. For comparison, the structural environments of the impurities in the unrelaxed grain boundary are also represented. In this figure, the calculated defect energies obtained in Section III.1 (Chapter IV) are also listed to get more insight into the most stable configurations.

Figure IV-24. Snapshots of the structural environment of the impurities before and after relaxation in the Σ5 (310)[001] grain boundary.

Figure V-24. The left plots (a), (b) and (c) are the relaxed grain boundary geometries with Ti substituted on the most stable configurations 0Al, 1FeI and 2FeII, respectively. The right plots (d), (e) and (f) show the displacement of each atom in the grain boundaries represented in (a), (b) and (c) from its initial position.

Figure V-25. The left plots (a), (b) and (c) are the relaxed grain boundary geometries with Zr substituted on the most stable configurations 0Al, 1FeI and 2FeII, respectively. The right plots (d), (e) and (f) show the displacement of each atom in the grain boundaries represented in (a), (b) and (c) from its initial position.

The change in the pair distribution functions of pure Fe3Al from about 800 K indicates that the stability of the D03-Fe3Al structure is affected from this temperature and that local disorder occurs. For the case of the Ti substitutions, the pair distribution function gAlAl(r) remains unchanged up to 1000 K. This indicates that, in agreement with the experimental observations, 1% of Ti increases the stability of the D03 structure up to 1000 K. For the case of the Zr substitutions, however, the analysis of the pair distribution functions shows that the Zr additions have no effect on the stability of the D03 structure, whether Zr is placed on the FeI or the FeII site. The calculated defect energies of the impurities in the Σ5 (310)[001] grain boundary show that the Ti impurity prefers to occupy the iron-rich configurations further away from the interface at the intermediate temperature of 300 K. The relaxation of the Ti impurity to the G.B. interface occurs only at high temperature (900 K).
Comparatively, the Zr impurities occupy the configurations at the G.B. interface, with the lowest defect energies, even at intermediate temperatures.

Fig. 2. (a) Two interpenetrating lattices, misoriented by 36.87°/[001], forming a dichromatic pattern, viewed down [001]. (b) GB generated from the dichromatic pattern in (a). (c) Coincidence site lattice, Σ = 5, generated by the misorientation shown in (a) and (b). (d) GB formed by a misorientation of 22.62°/[001], which gives a CSL of Σ = 13. The coincidence sites are denoted by solid symbols throughout. [8]

Fig. 3. Schematic of twist and tilt GB geometries. (a) illustrates the generation scheme of symmetrical twist GBs and (b) illustrates the generation scheme of symmetrical tilt GBs.

The possible substitutional defects on the Al, FeI and FeII sites (see Fig. 1) were modeled in supercells containing 32 and 108 atoms. Note that substituting one impurity (Ti or Zr) on a site of the 32-atom supercell corresponds to a defect concentration of about 3 at.%; for the 108-atom supercell, the defect concentration is about 1 at.%.

Fig. 1. (a) The undoped D03-Fe3Al structure and the three substitutional defects, as well as the vacancies, on the (b) Al, (c) FeI and (d) FeII sites.

Fig. 3. The atomic structure of the Σ5 (310)[001] grain boundary in the D03-Fe3Al intermetallic, (a) viewed along the [001] direction and (b) viewed along the (130) plane.

Since the D03 structure is more complex than the B2 one [17], the number of defect configurations to be taken into account in the grain boundary is much larger. The configurations of the defect substitutions in the grain boundary are given in Fig. 4. As is clearly visible from Fig. 4, nine substitution configurations are present within the Σ5 (310) grain boundary. They are grouped in three categories according to their distance from the interface: (i) three substitutions at the interface, 0Al, 0FeI and 0FeII; (ii) three substitutions in the first plane from the interface, 1Al, 1FeI and 1FeII; and (iii) three substitutions in the second plane from the interface, 2Al, 2FeI and 2FeII. In Fig. 4(b), the two insertion sites are presented: they correspond to site (1), with only FeII atoms as first neighbours, and site (2), with both Al and FeI atoms as first nearest neighbours.

Fig. 4. The different (a) substitution configurations and (b) interstitial sites in the grain boundary interface.

For comparison, the formation energies of the transition metals in the bulk (108-atom supercell) are also represented in Figs. 5(a) and 5(b). Between the two types of interstitials (1 and 2), the most favorable configuration is the interstitial site (1) for both transition metals, Ti and Zr. This may be explained simply as a consequence of surrounding effects: in this configuration the impurities are surrounded only by FeII neighbouring atoms. The same behavior was obtained in B-doped FeAl: the boron atom prefers to be inserted into the Σ5 (310) grain boundary in an iron-rich environment [26]. There is, however, a clear difference here between the behaviors of the Ti and Zr atoms. The formation energy is positive for both configurations when Ti is placed on the interstitial sites, indicating that it costs energy to insert Ti at the grain boundary interface. Comparatively, the atomic configuration becomes more stable (-0.18 eV) when Zr is inserted into a site (1), i.e. an iron-rich configuration. As for the case of the B additions in FeAl [26], the negative formation energy (-0.18 eV) obtained here for Zr indicates that this atom is more stable when inserted into an iron-rich configuration at the grain boundary than in the bulk. It is important to recall that small additions of these two elements tend to improve the ductility of iron aluminides [18-19, 20].

For the substitutions, the results listed in Table III are also plotted in Figs. 5(a) and 5(b), taking into account the distance from the interface (Fig. 4).

Fig. 5. Formation energies for (a) the Ti-doped and (b) the Zr-doped system.

Fig. 6. The interface energies (in J/m2) calculated for the doped and undoped grain boundaries.

Fig. 7. The energy differences (in eV) between the substitutions on the FeI and FeII sites.

Replacing Ti on the FeI site leads to a significant energy gain (-1.62 to -2.55 eV) over the whole temperature interval. Comparatively, the replacement on the FeII site is energetically more costly (0.05 to -0.70 eV). This also indicates that, for all the temperatures tested, the Ti impurity is always more stable on a FeI site. For the Zr impurity, the energy gains are even larger: (-2.04 to -2.65 eV) and (-1.29 to -2.55 eV) on the FeI and FeII sites, respectively. The most interesting point revealed by these data is the temperature evolution of the relative stability of the different sites for the two transition metals. The difference in behavior between the two elements is best illustrated in Fig. 7. For Ti, the preference for the FeI site over the FeII site is confirmed, with an energy difference in the range of -1.16 to -2.27 eV. Moreover, the negative slope of the data points in Fig. 7 indicates a tendency for the stability of the FeI site relative to the FeII site to increase with increasing temperature. The trends observed for Zr are quite different. First, the magnitude of the energy difference between the two sites does not exceed 0.75 eV (-0.75 at most, at 100 K).
Second, while the FeI site is favored at low temperature, increasing the temperature tends to stabilize the FeII site more and more. Finally, with the very stable configurations found for the FeII site at 900 K (-2.55 eV) and 1000 K (-2.28 eV), the preferences for the FeI and FeII sites become very close.

In summary, the temperature analysis (100-1100 K) of the preferential sites for the Ti and Zr substitutions on the FeI and FeII sites of the Fe3Al supercell revealed interesting differences in behavior between the two transition-metal elements. Ti is more stable on a FeI site and its stability on this site increases with temperature. Comparatively, Zr tends to occupy the FeI and FeII sites with the same order of stability at high temperature.

The defect energies were also calculated for the substitutions of the two transition metals on the Al site, in addition to the substitutions on the FeI and FeII sites. These calculations were performed to determine the preferential sites of the transition metals among the Al configurations and then to examine the relaxation of the grain boundaries. It is important to recall that comparisons (i) between the defect energies of the two transition metals and (ii) between substitutions on the FeI and Al sites are not possible. The results are represented in Figs. 8 and 9 for substitutions on the (a) Al, (b) FeI and (c) FeII sites, taking into account the distance from the grain boundary interface. For comparison, the values obtained from the static calculations at 0 K (Chapter III) are also recalled.

Fig. 8. Defect energies of Ti at 300, 600 and 900 K on the different (a) Al, (b) FeI and (c) FeII sites, taking into account the distance from the interface.

Fig. 9. Defect energies of Zr at 300, 600 and 900 K on the different (a) Al, (b) FeI and (c) FeII sites, taking into account the distance from the interface.

With increasing size of the supercell, the volume of the Brillouin zone becomes smaller and smaller (see Eq. B22). Therefore, with increasing supercell size fewer and fewer k-points are needed, and it is often justified to use just a single k-point. Within this approximation, the electronic states at only a finite number of k-points are needed to calculate the charge density and hence the total energy of the solid. The error induced by this approximation can be reduced systematically by increasing the density of the k-point mesh. For insulators it turns out that usually only a small number of k-points is needed, whereas for metallic systems much denser k-point meshes are required in order to get convergence with respect to the k-point sampling, in combination with fractional occupation numbers [18].

II.4.1.5. Fourier representation of the Kohn-Sham equations

In a plane-wave representation of the wave functions, the Kohn-Sham equations take a particularly simple form. If we insert the plane-wave expansion (Eq. B...), multiply by exp(-i(k+G')r) and integrate over r, we obtain a matrix eigenvalue problem in the plane-wave expansion coefficients. In practical calculations the Fourier expansion is truncated, keeping only those plane-wave vectors (k+G) whose kinetic energy lies below a cutoff value E_pw:

(hbar^2 / 2m) |k+G|^2 <= E_pw
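For orientation, the size of the plane-wave basis implied by a given cutoff can be estimated by counting the reciprocal-lattice vectors G that satisfy the truncation criterion above. The sketch below does this brute-force count for a simple cubic cell; it illustrates the criterion only and is not how VASP actually constructs its basis.

import numpy as np

HBAR2_OVER_2M = 3.81   # hbar^2 / (2 m_e) in eV * angstrom^2 (approximate)

def count_plane_waves(a, e_cut, kpt=(0.0, 0.0, 0.0)):
    """Brute-force count of plane waves with
    HBAR2_OVER_2M * |k + G|^2 <= e_cut for a simple cubic cell of edge a
    (angstrom); e_cut in eV, kpt in fractional reciprocal coordinates."""
    b = 2.0 * np.pi / a                       # reciprocal lattice constant
    g_max = int(np.ceil(np.sqrt(e_cut / HBAR2_OVER_2M) / b)) + 1
    k = np.asarray(kpt, dtype=float) * b
    count = 0
    for i in range(-g_max, g_max + 1):
        for j in range(-g_max, g_max + 1):
            for l in range(-g_max, g_max + 1):
                g = np.array([i, j, l], dtype=float) * b
                if HBAR2_OVER_2M * np.dot(k + g, k + g) <= e_cut:
                    count += 1
    return count

# Rough basis size for the 240 eV cutoff used in this work and a
# cubic cell of edge 5.76 angstrom:
print(count_plane_waves(5.76, 240.0))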
Table III-1. Lattice parameter and bulk modulus for D03-Fe3Al.

                                                      a (Å)    B (GPa)
Present work   USPP-GGA                               5.76     143
               USPP-LDA                               5.59     157
Theory         FPLAPW-GGA-sp [Gonzales et al. 2002]   5.77     -
               USPP-PBE [Connétable et al. 2008]      5.76     159
               MBPP-PBE-sp [Lechermann et al. 2002]   5.78     151
               MBPP-CAPZ-sp [Lechermann et al. 2002]  5.60     192
               PWSCF-PBE [Kellou et al. 2010]         5.78     139
Experiment                                            5.79 [Nishino et al. 1997]   144 [Leamy et al. 1967]

Table III-2. The calculated effective formation energies (in eV) for point defects in D03-ordered bulk Fe3Al.

                                   E_Ti                   E_Zr                   E_v
                             Al     FeI    FeII      Al     FeI    FeII      Al     FeI    FeII
Present work
  Relaxed (108 atoms)      -0.46  -1.01  -0.39     0.45  -0.14    0.39     0.94    0.34   0.31
  Relaxed (32 atoms)       -0.20  -0.72  -0.06     0.70   0.17    0.76     2.26    1.72   1.09
Theory [19]
  Unrelaxed (32 atoms)        -      -      -        -      -       -      1.97    2.44   1.71

Table III-3. The defect energies of vacancies on the FeI and FeII sites. E(d) and E0 are the energies of the supercell with and without the defect, respectively; Ed = E(d) - E0 (in eV).

                                             FeI      FeII
0 K     Supercell with 32 atoms (3 at.%)    10.59     9.78
        Supercell with 108 atoms (1 at.%)    7.85     7.55
300 K   Supercell with 108 atoms (1 at.%)    9.55     8.89

Table III-4. The calculated effective formation energies (in eV) for the relaxed grain boundary supercells.

              Insertions       G.B. interface          First plane from G.B.    Second plane from G.B.
              (1)    (2)     0Al    0FeI   0FeII     1Al    1FeI   1FeII      2Al    2FeI   2FeII
Ti-doped GB   0.31   0.90    0.05  -0.86  -0.63     -0.44  -1.32  -1.21       0.31  -0.98  -0.93
Zr-doped GB  -0.18   2.09    0.49  -0.47  -0.51     -0.04  -0.97  -0.90       0.05  -0.66  -0.78

As seen in Table III-4, between the two types of interstitials (1 and 2), the most favorable configuration is the interstitial site (1) for both the Ti- and Zr-doped grain boundaries. This may be explained simply as a consequence of surrounding effects: in this configuration the impurities are surrounded only by FeII neighbouring atoms. The same behavior was obtained for B in FeAl [26].

The calculated interface energies are listed in Table III-5 and plotted in Fig. III-7. Our calculations indicate that the interface energy of a clean Σ5 (310)[001] D03-Fe3Al grain boundary is 0.37 J/m2. Comparatively, this is about three times lower than the value obtained for a Σ5 (310)[001] iron aluminide grain boundary in the B2-FeAl state (1.12 J/m2) [26]. Except for one case (0Al for Ti, γGB = 0.38 J/m2), it can be seen that the Ti and Zr impurities lower the interface energy compared to the clean grain boundary (0.37 J/m2). This suggests that the alloying elements Ti and Zr can stabilize the grain boundary in D03-Fe3Al. The maximum expected reductions are obtained when the transition metals are on a FeII site located in the first plane away from the exact interface: 14% for Ti (0.32 J/m2) and 22% for Zr (0.29 J/m2).

Fig. III-7. The calculated interface energies γGB (in J/m2, between 0.2 and 0.45) for the clean grain boundary and for the Ti- and Zr-doped grain boundaries, for the configurations 0Al, 1Al, 2Al, 0FeI, 1FeI, 2FeI, 0FeII, 1FeII and 2FeII.

Table III-6. The calculated formation energies (in eV) for vacancies in the clean grain boundary and in the Ti-doped grain boundary.

                        E_f^v (clean GB)    E_f^v (Ti-doped GB)    Relative difference
Vacancy on Al site           3.40                 3.26                    -3%
Vacancy on FeI site          1.44                 1.08                   -25%
Vacancy on FeII site         1.02                 1.15                   +13%

Table III-7. The calculated formation energies (in eV) for vacancies in the clean grain boundary and in the Zr-doped grain boundary.
                        E_f^v (clean GB)    E_f^v (Zr-doped GB)    Relative difference
Vacancy on Al site           2.35                 2.37                    +2%
Vacancy on FeI site          1.43                 0.75                   -47%
Vacancy on FeII site         1.11                 1.01                    -9%

As shown in Table III-7, the formation energy increases slightly (~+2%) when a vacancy is produced on an Al site.

The calculations of the temperature-dependent defect energies were performed for a system of 108 atoms (a 3x3x3 repetition of the Fe3Al unit cell) under periodic boundary conditions. By substituting one impurity in this supercell, the impurity concentration is about 1 at.%. The results of the defect energies are summarized in Table IV-1 for the Ti and Zr substitutions. To better indicate the trends, the energy differences between the substitutions on the FeI and FeII sites are also listed and presented in Fig. IV-2.

Table IV-1. The defect energies (in eV) when the FeI/FeII sites are replaced by the Ti and Zr transition metals, together with the differences Ed(FeI) - Ed(FeII).

T (K)          Ti                                    Zr
         FeI     FeII   Ed(FeI)-Ed(FeII)      FeI     FeII   Ed(FeI)-Ed(FeII)
100     -1.79    0.05       -1.84            -2.04   -1.29       -0.75
200     -1.62   -0.46       -1.16            -2.34   -1.62       -0.72
300     -2.54   -0.56       -1.98            -2.35   -1.76       -0.59
400     -1.90   -0.65       -1.25            -2.62   -1.93       -0.69
500     -1.77   -0.34       -1.43            -2.30   -1.60       -0.70
600     -2.16   -0.48       -1.68            -2.36   -1.69       -0.67
700     -1.98   -0.55       -1.43            -2.23   -1.71       -0.52
800     -1.99    0.16       -2.15            -2.29   -1.79       -0.50
900     -2.35   -0.08       -2.27            -2.08   -2.55       +0.47
1000    -2.55   -0.70       -1.85            -2.65   -2.28       -0.37
1100    -1.71   -0.31       -1.40            -2.58   -2.14       -0.31

Table IV-2. The calculated defect energies (in eV) for vacancies in pure Fe3Al as well as in Ti- and Zr-doped Fe3Al. The relative differences between the defect energies of the vacancies in the pure and doped Fe3Al are given in parentheses.

                                        Pure Fe3Al    Ti-doped Fe3Al    Zr-doped Fe3Al
900 K    Vacancies on Al sites            -2.64        -7.61 (-65%)      -4.18 (-39%)
         Vacancies on FeI sites            1.14         2.79 (+59%)       0.38 (-66%)
1000 K   Vacancies on Al sites            -8.16        -7.12 (+13%)      -8.08 (+1.2%)
         Vacancies on FeI sites           -4.61        -1.78 (+61%)      -4.24 (+8%)

In Eq. III-4, r_L1 denotes the coordinates of a lattice point of lattice No. 1, r_L2 the internal coordinates of that lattice point with respect to coordinate system No. 2, and t_L1 the coordinates of a translation vector of lattice No. 1 with respect to coordinate system No. 1. Thus any lattice point r_L1 of lattice No. 1 that satisfies Eq. III-4 is a lattice point of both lattice No. 1 and lattice No. 2 and therefore represents a coincident point. Note that the CSL misorientation is not unique: for instance, one obtains a Σ5 CSL by a 36.87° [001] rotation (see Fig. 2) as well as by a 53.13° [001] rotation. These two representations are linked by the 270° [001] symmetry operation of the cubic lattice. So far a GB has not yet appeared in the discussion of the CSL scheme; the plain CSL scheme is therefore only associated with misorienting two crystals. Utilizing the CSL concept to misorient two crystals, the final step in geometrically generating symmetrical CSL twist GBs, symmetrical CSL tilt GBs and asymmetrical CSL tilt GBs is to define the GB plane that will separate the two crystals from each other. This means that each crystal then only exists on one side of the GB, and here the discussion of interpenetrating crystals comes to an end. Obviously the scheme itself is rather theoretical, since, for instance, in the sample preparation of experiments such a working sequence of misorienting interpenetrating crystals can only be regarded as imaginary.
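The coincidence criterion can be checked numerically: under a rotation about [001] with cos θ = 4/5 and sin θ = 3/5 (θ = 36.87°), exactly one in five points of a simple cubic lattice is mapped onto another lattice point, which is the defining property of a Σ5 CSL. A small sketch using exact rational arithmetic (it illustrates the coincidence criterion only, not the full Eq. III-4 with translation vectors):

from fractions import Fraction

# Rotation by 36.87 deg about [001]: cos = 4/5, sin = 3/5 (exact rationals).
c, s = Fraction(4, 5), Fraction(3, 5)

def rotated_is_lattice_point(x, y):
    """True if the rotated image of the lattice point (x, y, 0) has
    integer coordinates, i.e. coincides with a point of lattice No. 2."""
    xr = c * x - s * y
    yr = s * x + c * y
    return xr.denominator == 1 and yr.denominator == 1

# Scan two full periods of the pattern (the z coordinate is unaffected).
hits = sum(rotated_is_lattice_point(Fraction(x), Fraction(y))
           for x in range(10) for y in range(10))
print(100 / hits)   # -> 5.0, i.e. one coincident site in five: Sigma = 5

Repeating the scan with cos θ = 12/13 and sin θ = 5/13 (θ = 22.62°) gives one coincident site in thirteen, the Σ13 case of Fig. 2(d).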
Iron aluminides, such as Fe3Al, have very attractive characteristics for high-temperature mechanical applications. Like most intermetallic compounds, they possess high mechanical strength, good oxidation resistance and low density. However, the main reasons limiting their applications are their brittleness at room temperature and a strong decrease of their strength at temperatures above 550°C. An interesting aspect of these alloys is their behavior towards transition metals. Some elements, such as Ti, can increase the stability of the D03 phase by shifting the D03/B2 transition to higher temperatures. The situation is less clear in the case of Zr. Indeed, despite the beneficial effect of Zr doping on grain boundary cohesion and ductility, there are no experimental data concerning its effect on the stability of the D03 structure of the Fe3Al compound.

This thesis aims to study the effect of the two transition metals Ti and Zr on the properties of the D03-Fe3Al intermetallic compound using ab initio pseudopotential calculations based on density functional theory (DFT). Two main themes were addressed: (i) understanding the role of these two transition metals in the stability of the D03 phase in the light of their preferential sites in the D03-Fe3Al structure, and (ii) the behavior of Ti and Zr in the Σ5 (310)[001] grain boundary and their effect on the structural stability of this interface. An important element in studying these aspects is to take the effect of temperature into account, which requires a molecular dynamics treatment of the atoms in the supercell. The ab initio molecular dynamics (AIMD) technique addresses this by combining electronic structure calculations with finite-temperature dynamics. Our study was therefore carried out both with static ab initio calculations at 0 K and by taking the effect of temperature into account up to 1100 K (ab initio molecular dynamics).

The calculations are based on the density functional theory formalism as implemented in the VASP code (Vienna Ab initio Simulation Package) [2, 3]. The electronic wave functions were expanded on a plane-wave basis with a cutoff energy of 240 eV. The ultrasoft pseudopotentials (USPPs) employed in this work explicitly treat eight valence electrons for Fe (4s2 3d6), three valence electrons for Al (3s2 3p1) and four valence electrons for both Ti (4s2 3d2) and Zr (5s2 4d2). All calculations were spin polarized. The generalized gradient approximation (GGA-PW91) in the version of Perdew and Wang [4, 5] was used to describe the exchange-correlation energy. The Brillouin zone integrations were performed using a Monkhorst-Pack sampling grid [6]. Tests were performed for the Fe3Al unit cell (four atoms per cell) using different numbers of k-points in the grid, to ensure convergence of the total energy to within 10-3 eV/atom. Consequently, a (16x16x16) k-point grid was adopted for the Fe3Al unit cell. Depending on the structure and the size of the cell, the number of k-points changes as a consequence of the modification of the Brillouin zone size. For the total energy calculations of the Fe3Al supercells with 32 atoms (2x2x2 unit cells) and 108 atoms (3x3x3 unit cells), the grids were generated with (8x8x8) and (4x4x4) k-points, respectively. In the case of the grain boundary, the Monkhorst-Pack grid was adapted to the cell parameters using (4x2x5) k-points. The ground-state atomic geometries were obtained by minimizing the Hellmann-Feynman forces using a conjugate gradient algorithm.

I.2. Energies

In this section, we first define the energies used in our calculations. To examine the preferential sites of the transition metals both in the bulk and at the grain boundary, their formation energies E_f were calculated in the different configurations using the following equation:

E_f = E_solid(Fe3AlX) + E_Fe or Al - E_solid(Fe3Al) - E_X    (Eq. 1)

for the substitutional configurations, and

E_f = E_solid(Fe3AlX) - E_solid(Fe3Al) - E_X    (Eq. 2)

for the insertion configurations. Here X and ν denote the transition-metal impurities and the vacancies, respectively. E_solid(Fe3AlX) ("solid" refers to the bulk as well as to the grain boundary) is the energy of the supercell containing one impurity and E_solid(Fe3Al) is the energy of the defect-free supercell. E_Fe or Al (Fe or Al being the atom that is substituted) and E_X are the total energies calculated for the pure metals in their equilibrium lattices: bcc Fe, fcc Al, hcp Ti and hcp Zr. Note that, since it is very unlikely that Ti and Zr occupy interstitial sites in the bulk, Eq. 2 is only used for the analysis of the interstitials in the grain boundary. The formation energy for the case of the vacancies was evaluated using the following equation [7]:

E_f = E_solid(Fe3Alν) + E_Fe or Al - E_solid(Fe3Al)    (Eq. 3)

The interface energy γ_GB for the undoped system is defined as:

γ_GB = (E_GB - E_bulk) / 2A    (Eq. 4)

where E_GB and E_bulk are the total energies of the grain boundary and bulk supercells, respectively, and A is the interface area (the factor 1/2 accounts for the presence of two symmetrically equivalent grain boundaries in the supercell). In our simulations, the energies E_GB and E_bulk are calculated for simulation blocks comprising equal numbers of atoms of each species. For the doped system, the interface energy is calculated using:

γ_GB = (E_GBX - E_Fe3AlX) / 2A    (Eq. 5)

where, as presented in [8, 9], E_GBX and E_Fe3AlX are the total energies of the doped grain boundary and doped bulk Fe3Al supercells, respectively.
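Eqs. 1-5 reduce to simple arithmetic on DFT total energies. The following Python sketch spells this out; every numerical input is a hypothetical placeholder, not a total energy computed in this work.

EV_PER_A2_TO_J_PER_M2 = 16.022   # 1 eV/angstrom^2 expressed in J/m^2

def e_f_substitution(e_doped, e_pure, e_host_atom, e_impurity_atom):
    # Eq. 1: X replaces a Fe or Al atom; e_host_atom and e_impurity_atom
    # are per-atom energies of the pure metals in their equilibrium
    # lattices (bcc Fe, fcc Al, hcp Ti, hcp Zr).
    return e_doped + e_host_atom - e_pure - e_impurity_atom

def e_f_insertion(e_doped, e_pure, e_impurity_atom):
    # Eq. 2: X inserted on an interstitial site (grain boundary only).
    return e_doped - e_pure - e_impurity_atom

def e_f_vacancy(e_vacancy, e_pure, e_host_atom):
    # Eq. 3: vacancy on a Fe or Al site.
    return e_vacancy + e_host_atom - e_pure

def gamma_gb(e_gb, e_bulk, area):
    # Eqs. 4-5: the factor 1/2 accounts for the two symmetrically
    # equivalent boundaries in the periodic supercell; area in angstrom^2.
    return (e_gb - e_bulk) / (2.0 * area) * EV_PER_A2_TO_J_PER_M2

print(e_f_substitution(-850.3, -845.1, -8.3, -12.5))   # eV, placeholder inputs
print(gamma_gb(-1700.2, -1702.6, 100.0))               # J/m^2, placeholder inputs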
II.1.2. Preferential sites of point defects in bulk D03-Fe3Al

Figure 2 compares the formation energies of the impurities calculated using relaxed and unrelaxed supercells (with 32 atoms). It is clear that the relaxation leads to an overall reduction of the formation energies. Moreover, although the curves have the same profile, the differences between the formation energies calculated with the relaxed and unrelaxed supercells are smaller for the Ti substitutions than for the Zr ones. In a first approach, this difference can be related to the size difference between Ti and Zr (Ti being smaller than Zr). What is important to underline here is essentially the magnitude of the differences: while the differences between the formation energies calculated in relaxed and unrelaxed supercells lie in the range of 15 to 50% for the various Ti substitutions, the relaxation decreases the formation energy by more than 140% in the case of the Zr substitution on a FeII site. For the vacancies, the relaxation is more pronounced when the vacancy is created on the Al site. This can also be related to the size difference between the Fe and Al atoms: knowing that Al is larger than Fe, the void created when the vacancy is produced on the Al site is larger and, consequently, the relaxation is also more important. In the following, only the results obtained from the relaxed configurations are therefore discussed.

Figure 2. Profile of the formation energies E_f (in eV) of the point defects (Ti, Zr and vacancies on the Al, FeI and FeII sites) calculated in relaxed and unrelaxed supercells.

The values of the formation energies for the substitutions and vacancies in the relaxed supercells are given in Table I. The results obtained by Mayer et al. [10] with the ab initio pseudopotential method are also represented in Table I for comparison.

Table I. The calculated effective formation energies (in eV) for point defects in bulk D03-Fe3Al.

                                   E_Ti                   E_Zr                   E_v
                             Al     FeI    FeII      Al     FeI    FeII      Al     FeI    FeII
Present work
  Relaxed (108 atoms)      -0.46  -1.01  -0.39     0.45  -0.14    0.39     0.94    0.34   0.31
  Relaxed (32 atoms)       -0.20  -0.72  -0.06     0.70   0.17    0.76     2.26    1.72   1.09
Theory [10]
  Unrelaxed (32 atoms)        -      -      -        -      -       -      1.97    2.44   1.71

From Table I, although the vacancies occur with positive formation energies in the three different configurations, the lowest formation energy corresponds to the vacancy on the FeII site. This trend agrees with the conclusion of Mayer et al. [10], obtained with a 32-atom supercell and the ab initio pseudopotential method. The value of the formation energy of a vacancy on the FeII sublattice (1.09 eV) is also comparable to the value (1.18 ± 0.04 eV) obtained by Schaefer et al. [11] using the positron annihilation method. Comparatively, Jiraskova et al. [12], who used Mössbauer measurements on a Fe72Al28 compound, found that the Fe vacancies appear on the FeI sublattice, which conflicts with our result and with that of Mayer et al. [10]. Since the results of Jiraskova et al. were obtained at room temperature, in contrast to our calculations and those of Mayer et al. [10] which were performed at 0 K, the difference in the vacancy occupation might be related to the effect of temperature. To check this possibility, we calculated the defect energies of the vacancies on the FeI and FeII sites at 300 K, using ab initio molecular dynamics. The defect energies are defined as the differences in energy of the pure system when a defect replaces the FeI or FeII atom, namely

E_d = E(d) - E_0    (Eq. 6)

where E(d) and E_0 are the energies of the supercell with and without the defect, respectively. The preferential site then corresponds to the case where energy is gained by replacing the FeI/FeII site. More details of the calculations of the temperature dependence of the defect energies will be presented in Section III; some results are given in Table II together with the defect energies calculated at 0 K. The values of the defect energies are larger than those of the formation energies, because the total energy of pure iron in its equilibrium lattice (bcc Fe) has not been subtracted, as defined in Eq. 2.

As shown in Table II, the FeII site remains the preferred vacancy site even at 300 K. This means that a temperature change in the range 0 → 300 K does not modify the stability of the vacancies in the bulk of the D03-Fe3Al intermetallic compound. Here, the results obtained by both methods, static ab initio and ab initio molecular dynamics at 300 K, conflict with the Mössbauer conclusions. This indicates that the disagreement with the experimental results should not be related to the effect of temperature, but is most likely related to the high sensitivity of these alloys to the vacancy concentration. Indeed, the difference between the formation energies of 3% and 1% of vacancies, calculated in the 32- and 108-atom supercells, is important: from Table I, one can see that the formation energies on the different sites are reduced by about 60% when the vacancy concentration decreases. Moreover, from 3% (32 atoms) to 1% (108 atoms), the difference between the formation energies of the vacancies on the FeI and FeII sites is also reduced with the vacancy concentration: the difference is about 0.6 eV for 3% of vacancies, whereas for the 1% concentration it is only about 0.03 eV.

The formation energies calculated for the substitutions in the relaxed supercell are also given in Table I. For Ti, the three substitution configurations give negative formation energies. This means that Ti is a stable defect or, in other words, that 1% as well as 3% of Ti is miscible in Fe3Al. This is consistent with the experimentally obtained solubility data. As in the bulk calculations of Chapter III, Ti prefers the FeI site. However, unlike Ti, at a concentration of 3% all the Zr formation energies are positive, indicating an energy cost to introduce Zr on a substitutional site in the bulk. Interestingly, these values tend to decrease when the calculation is performed with 108 atoms, i.e. when the defect concentration decreases. This is consistent with the experimental observation that the solubility of Zr in these alloys is very low.

The formation energies of the two transition metals (Ti and Zr) calculated in the different configurations of the Σ5 (310)[001] grain boundary are given in Table III (same data as Table III-4 above).

The comparison of the formation energies of the two transition metals in the bulk and at the grain boundary shows that Ti is generally stable both in the bulk and at the grain boundary. However, the Zr impurity, which is certainly not stable in the bulk, prefers to segregate to the grain boundary, with negative formation energies. Thus, the effect of Zr at the grain boundary must be taken into account to understand the global properties of these iron aluminides. It is clear that the substitution of the transition metals on the Al sites is never the favorable configuration. For the substitutions on the Fe sites, as in the bulk, Ti always prefers to reside on FeI sites rather than FeII sites. The situation is less pronounced for Zr: while the Fe sites are always more favorable than the Al ones, the FeII sites are more favorable when Zr is located at the interface (-0.51 eV) and in the second plane (-0.78 eV). Finally, it is also interesting to note that, for the substitutions of both transition metals, the most stable configuration is the substitution on a FeI site in the first plane from the interface (-1.32 eV for Ti and -0.97 eV for Zr).

The effect of the impurities on the interface energies was calculated for the different configurations using Eq. 5. Our results are grouped in Table IV and plotted in Fig. 6. Our calculations indicate that the interface energy of the clean Σ5 (310)[001] grain boundary is 0.37 J/m2, about three times lower than the value obtained for the Σ5 (310)[001] grain boundary of the B2-FeAl iron aluminide (1.12 J/m2) [26]. Except for one case (0Al for Ti, γGB = 0.38 J/m2), the presence of the Ti and Zr impurities lowers the interface energy compared to that of the clean grain boundary (0.37 J/m2). This suggests that Ti and Zr can stabilize the grain boundary of the D03-Fe3Al intermetallic compound. The maximum expected reductions are obtained when the transition metals are on a FeII site located in the first plane from the exact interface: 14% for Ti (0.32 J/m2) and 22% for Zr (0.29 J/m2), respectively. It can also be noticed that the interface energies of the Zr-doped grain boundary are systematically lower than those of the Ti-doped one [Fig. 6]. Thus, the main trend that can be drawn here is that the stabilization brought by the presence of Zr is more important than that brought by Ti.

Table IV. The calculated interface energies (in J/m2) for the different substitution configurations.
                                     Ti-doped GB    Zr-doped GB
Clean GB                                0.37           0.37
G.B. interface           0Al            0.38           0.36
                         0FeI           0.36           0.34
                         0FeII          0.34           0.31
First plane from G.B.    1Al            0.36           0.33
                         1FeI           0.34           0.31
                         1FeII          0.32           0.29
Second plane from G.B.   2Al            0.36           0.34
                         2FeI           0.36           0.33
                         2FeII          0.33           0.30

I would also like to thank the other members of the jury, Pr. Ghouti MERAD and Dr. Jacques LACAZE, as it is a great honor for me to have them evaluate my work. This thesis has been carried out in the framework of the French/Algerian CMEP PHC Tassili project No. 12053TL and the Eiffel Excellence Scholarship No. 690532C. I would like to acknowledge EGIDE (Centre Français pour l'Accueil et les Échanges Internationaux) for the financial support. I also present my sincere thanks to Pr.

For the substitutions in the first and second planes, the atomic displacements are more important in the bulk region than for the configurations where the impurities are placed in the G.B. interface. However, the expansion of the interface for these configurations (substitutions in the 1st and 2nd planes) is lower than that produced both at the clean interface and at the doped G.B. with impurity substitutions in the G.B. interface. In terms of energy, these configurations correspond to the lower interface energies (listed in Table III-7 and presented in Fig. III-5). This is in agreement with the results of Shiga et al. [36], who showed that the interface energy varies with the multilayer relaxation of the G.B. interface: the larger the interface expansion, the higher the interface energy.

Appendix A

In this Appendix, the calculated displacements of the atoms in the doped grain boundary supercells are presented. It is important to recall that the displacement of an atom is calculated as the difference between its final position in the relaxed supercell and its initial position in the unrelaxed supercell. The left plots below represent the calculated displacements of the atoms.
The positions of the various atoms (Al, FeI and FeII) are given on the x axes depending on their location (n) away from the exact interface at n = 0; due to the cell symmetry, 0 and -/+10 label the two interfaces in the supercell. In the right plots, the displacements are represented by solid arrows.

Appendix B

This appendix gives a brief general introduction to the field of two-dimensional defects in materials and to the Coincidence Site Lattice theory. Generally, one speaks of an interface being present in a system if the physical properties change discontinuously across that interface. Phase boundaries, for example, exist between phases of different states of order; this is the case for liquid-solid or liquid-gaseous interfaces. Limiting the discussion to internal interfaces in solid-state materials, one differentiates between phase boundaries and grain boundaries. Phase boundaries separate grains of different phases, for instance grains having different lattice structures. Grain boundaries (GBs) separate almost stress-free grains having the same lattice structure but different orientations. Throughout history, the structure of GBs was viewed quite differently. The first models by Rosenhain [1] understood a GB as an amorphous, disordered region separating the grains. A succeeding model by Mott [2] assumed that GBs are two-dimensional defects containing both zones of good lattice match and zones of bad lattice match. A different approach is to discuss GBs in the framework of dislocations. The Read-Shockley model [3] explained GBs on the basis of dislocations; such an approach is well suited for low-angle GBs, since it is well known that low-angle GBs can be formed by arrays of lattice dislocations. Although discrete lattice dislocations have long-range stress fields, dislocation theory is able to show that, by linear superposition of the stress fields of the GB dislocations, GBs can be formed that exhibit only a short-range stress field. For instance, Fig. 1 demonstrates how stable, almost stress-free GBs can be formed. One of the major drawbacks of this model is the non-inclusion of any rigid-body translations between the crystals [9].
270,392
[ "1188568" ]
[ "178323" ]
01749405
en
[ "spi" ]
2024/03/05 22:32:07
2012
https://hal.univ-lorraine.fr/tel-01749405/file/DDOC_T_2012_0326_SHASHKOV.pdf
Keywords: plastic deformation, dislocation dynamics, Portevin-Le Chatelier effect, twinning, acoustic emission, dynamical systems, self-organization, statistical analysis, multifractal analysis

Recent studies of plastic deformation using high-resolution experimental techniques testify that deformation processes are often characterized by collective effects that emerge on a mesoscopic scale, intermediate between the scale of individual crystal defects and that of the macroscopic sample. In particular, the acoustic emission (AE) method reveals intermittency of plastic deformation under various experimental conditions, which is manifested by the property of scale invariance, a characteristic feature of self-organized phenomena. The objective of the dissertation was to study the inherent structure of AE for different mechanisms of plastic deformation, to examine its dependence on the strain rate and strain hardening of the material, and to understand the relationships between short time scales, related to the organization of defects, and those relevant to the continuous approach to plasticity. The study was performed on AlMg and Mg-based alloys, the plastic deformation of which is accompanied by strong acoustic activity and controlled by different physical mechanisms: the Portevin-Le Chatelier (PLC) effect in the first case and a combination of twinning and dislocation glide in the second case. Application of a technique of continuous AE recording ("data streaming") allowed proving that the apparent behavior, discrete or continuous, of the AE accompanying the PLC effect depends on the time scale of observation and the physical parameters surveyed. However, unlike the traditional view, it appears that AE has an intermittent character during both stress serrations and macroscopically smooth flow. Using methods of the theory of nonlinear dynamical systems, such as the multifractal analysis, a tendency to a transition between scale-invariant dynamics and behaviors characterized by intrinsic scales was detected during work hardening. Finally, we proved that the power-law statistical distributions persist over wide ranges of variation of the parameters conventionally used to individualize acoustic events. This result is of general importance because it applies to all avalanche-like processes emerging in dynamical systems.
Introduction

The plasticity of crystalline materials results from the motion of defects of the crystal structure: dislocations, twins, point defects, and so on. Until recently, research in plasticity was divided into two major parts, that of the microscopic motions of defects and that of the macroscopic behavior of the material. The latter was considered as resulting from averaging over local random fluctuations of the distribution and mobility of defects, dislocations par excellence, which statistically compensate each other and give rise to spatially uniform and continuous plastic flow. Since the 1980s, however, many studies have shown that the ensemble of crystal defects represents an example of nonlinear dissipative dynamical systems, in which the interaction between the various constituents may lead to self-organization phenomena. The properties of the collective dynamics appear to be common to dynamical systems of different origin, coming from various fields such as physics, mechanics, chemistry and biology [Nicolis, Self-organization in nonequilibrium systems; Haken, Synergetik]. Each example of collective effects, interesting by itself, is also interesting as a representative of a class of phenomena characterized by universal behavior. The complex dynamics of such systems is often associated with the property of scale invariance, or self-similarity, which manifests itself through power-law relationships. It was found that self-organization of dislocations concerns both spatial and temporal behavior and results in the formation of dislocation structures and/or intermittency of plastic deformation, characterized by non-Gaussian statistics and invalidating the averaging operations.
These phenomena give rise to a very complex problem because, depending on the material and deformation conditions, the collective dynamics of defects may show up at different scales and lead to various collective effects, displaying both universal and unique properties. The main challenges for investigations of this problem are to determine the limits of the continuous approach to the description of plasticity and to find a link between the elementary mechanisms of plastic deformation and the macroscopic behavior of deforming materials. Understanding such multiscale behavior is especially important nowadays because technological developments are turning towards micro- and nanosystems with dimensions comparable to the scales imposed by collective processes in the dislocation system.

In this framework, the present doctoral research mostly concerns the problem of the intermittency of plastic deformation of crystals. The intermittent collective motion of defects generates jumps in the plastic strain rate, which are characterized by nonrandom statistics and, in particular, by power-law statistical distributions. Such properties were first identified for different mechanisms giving rise to macroscopic plastic instabilities, mostly the Portevin-Le Chatelier (PLC) effect, jerky flow in dilute alloys caused by the interaction between dislocations and impurities [Portevin; Kubin, Dislocations in Solids]. In tensile tests with constant strain rate, this effect displays a complex spatiotemporal behavior, associated with repetitive strain localization in deformation bands and concomitant abrupt variations of the deforming stress. Various approaches to the analysis of serrated deformation curves were proposed [Lebyodkin; 6-11]. They all showed that the spatiotemporal patterning corresponds to nontrivial dynamical regimes. In particular, the dynamical [12] and statistical [Lebyodkin] analyses testified to the existence of deterministic chaos [13] in some range of strain rates, and a transition, at higher strain rates, to self-organized criticality (SOC) [14], which is generally considered as a paradigm of avalanche-like processes. These two modes demonstrate different statistics of the amplitudes and durations of the serrations: distributions with characteristic peaks in the case of chaos and power-law distributions in the case of SOC. It should be noted, however, that chaos is also associated with scale invariance, reflected in the geometry of the phase trajectory of the corresponding dynamical system. Moreover, application of the multifractal analysis [15] revealed scale-invariant behaviors in the entire strain-rate range in which the PLC effect is observed. Recently, the acoustic emission (AE) technique was applied to study the PLC effect in an AlMg alloy [Bougherira, Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique; Lebyodkin; 18]. AE stems from transient elastic waves generated within a material due to localized changes of microstructure; therefore, it reflects the motion of groups of defects. Surprisingly, power-law distributions were found for the amplitudes of AE events in all experimental conditions.
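For reference, the exponent of a power-law amplitude distribution such as those reported for AE can be estimated by maximum likelihood rather than by fitting a histogram slope. A minimal Python sketch follows (continuous Hill estimator; the threshold a_min is an input of this illustration, not the procedure used in the cited works):

import numpy as np

def powerlaw_exponent(amplitudes, a_min):
    """Maximum-likelihood exponent alpha for P(A) ~ A**(-alpha), A >= a_min
    (continuous Hill estimator), with a rough standard error."""
    a = np.asarray(amplitudes, dtype=float)
    a = a[a >= a_min]
    n = len(a)
    alpha = 1.0 + n / np.sum(np.log(a / a_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# Sanity check on synthetic data drawn from P(A) ~ A**-2
# (inverse-transform sampling with a_min = 1):
rng = np.random.default_rng(0)
a = (1.0 - rng.random(100_000)) ** (-1.0 / (2.0 - 1.0))
print(powerlaw_exponent(a, 1.0))   # close to (2.0, ~0.003)

Such a synthetic check is a convenient way to validate the estimator before applying it to experimental AE amplitudes.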
This result indicates, on the one hand, that plastic deformation is an inherently intermittent, avalanche-like process at the mesoscopic scales relevant to AE and, on the other hand, that scale invariance may not spread out to the macroscopic scale. Furthermore, another group of investigations showed that "regular" plastic flow is also intermittent, although the jumps in the strain rate are smaller than those leading to macroscopic stress serrations. These jumps can nevertheless be detected using high-resolution extensometry or the AE technique. The AE statistics was studied in some detail during macroscopically homogeneous deformation of pure crystalline solids, such as ice single crystals and polycrystals and copper single crystals, and displayed a persistent power-law character [19, 20]. This conclusion was later confirmed in experiments on local extensometry during plastic flow of Cu single crystals [21, 22]. Power-law statistics were also found for the AE accompanying twinning of single crystals of Zn and Cd [23], as well as for the stress serrations observed during compression of microscopic pillars of pure metals [24]. All the above-described results led to a growing recognition of the ubiquitous character of self-organization phenomena in dislocation ensembles. Moreover, various data on small-scale intermittency in pure single crystals provided approximately the same value of the power-law exponent, thus giving rise to a "universality" conjecture [25]. However, different exponent values were reported for AE in ice polycrystals [26]. Besides, the exponents obtained for the AE accompanying the PLC effect were found to depend on the strain rate and the microstructure [27, 17, 18]. These differences raise a question about the relationship between the general laws governing the collective dislocation dynamics at the relevant mesoscopic scales and the role of the specific mechanisms of plasticity. Furthermore, as follows from the above, the range of scale invariance is limited from above, and the transition to the macroscopic scale is not well understood, because the macroscopic behaviors include not only smooth deformation curves but also jerky flow, with statistics depending on the experimental conditions. On the other hand, the scale invariance must also break down at small scales because of the limited experimental resolution. For example, the classical analysis of AE considers as elementary the acoustic events occurring on time scales below 1 ms. However, a recent study under conditions of the PLC effect showed that these "elementary" events may possess a fine structure detectable by the multifractal analysis [28]. It can be supposed that such temporal structures characterize short-time correlations between the motions of defects, which have not been studied to date. With these questions in view, the objective of the present doctoral research was to study the intrinsic structure of AE at different time scales and for different mechanisms of plasticity; to characterize the relationships between the correlations of deformation processes at very short time scales, corresponding to "elementary" acoustic events, and the long-time correlations, up to the macroscopic scale of the deformation curve; and to examine the influence of strain and/or strain rate on the observed statistical behavior. Mg- and Al-based alloys were chosen as the main objects of this study.
Both these alloys exhibit a highly cooperative character of plastic deformation, leading to strong acoustic activity governed by distinct microscopic mechanisms: mechanical twinning and the PLC effect, respectively. In order to obtain a comprehensive description of the complex behavior, the data treatment combined various methods, including statistical, spectral, and multifractal analyses. The thesis contains six chapters. The first chapter reviews literature data concerning the problem of intermittency of plasticity, gives examples of similar behaviors in solid state physics, introduces notions of nonlinear dynamical systems, and describes the mechanisms of the PLC effect and twinning. Experimental details on the materials, testing, and data recording are given in Chapter 2. Chapter 3 introduces the relevant types of data analysis. The results of the investigations are presented in Chapters 4 to 6. Chapter 4 dwells on the effect of the parameters of individualization of acoustic events on their apparent statistics. This study treats a general question arising in experimental investigations of avalanche processes of various natures: are the results of the statistical analysis affected by the superposition of the analyzed events? Indeed, superposition may occur for various reasons, for example, because of an almost instantaneous emergence of avalanches or because of insufficient resolution of individual events. Understanding this influence is an indispensable basis for the quantitative evaluation of the critical exponents characterizing scale invariance. However, there are virtually no pertinent studies in the literature, or at best this question has only been considered theoretically. Chapter 5 utilizes a technique of continuous AE recording to compare the nature of AE during jerky and smooth flow in an AlMg alloy. As a matter of fact, the AE accompanying unstable plastic deformation is usually considered to be composed of discrete bursts associated with the motion of large dislocation ensembles, giving rise to stress serrations, and of a continuous emission generated during macroscopically smooth plastic flow. This traditional viewpoint, however, contradicts the observation of intermittency of plastic flow in smoothly deforming pure materials. A minute examination of the AE records reported in Chapter 5 allows overcoming this contradiction and provides a basis for the subsequent analysis of AE statistics. The chapter is completed with first results of similar investigations on Mg alloys. The last chapter describes the results of the statistical and multifractal analysis of AE in Al and Mg alloys. The general conclusions and a discussion of the perspectives for future investigations complete the dissertation.

Chapter 1
BACKGROUND

The current research into self-organization of crystal defects and collective deformation processes explores various directions and is developing into a large field of investigation. Without trying to give an exhaustive overview of the related problems, this chapter treats those aspects which concern the above-formulated objectives of the doctoral study and are necessary for understanding its results. We first outline observations of scale-invariant behavior, associated with power laws, during plastic flow of crystalline solids.
In order to position the plasticity phenomena within the more general problem of avalanche-like processes in physics and mechanics, we further present examples of power-law statistical behavior in solid state physics and introduce some general concepts of nonlinear dynamical systems, as well as theoretical frameworks proposed to explain power-law statistics. This general presentation is followed by a more detailed description of the Portevin-Le Chatelier effect, which is one of the main focuses of the investigation. After briefly introducing the macroscopic manifestations and the microscopic mechanism of the effect, we present experimental observations and computer modeling of the complexity of its spatiotemporal behavior, with an accent put on statistical properties and on the application of the AE method to the experimental investigation of jerky flow. The chapter ends with a description of the phenomenon of twinning, another mechanism of plastic deformation studied in this dissertation, characterized by strong AE whose features are qualitatively different from those observed for the PLC effect.

1.1 Intermittency and power-law scaling in plastic flow

Plastic deformation of crystalline solids is governed by the multiplication and motion of crystal defects: dislocations, twins, point defects, and so on. Consequently, it is intrinsically heterogeneous and discontinuous at the microscopic scale. By contrast, the macroscopically smooth plastic deformation of crystals is conventionally viewed as homogeneous and continuous plastic "flow". This view is understood as a result of averaging over independent motions of the very large number of defects contained in the material. Indeed, typical dislocation densities in deformed samples are of the order of 10^10 per cm^2. The approximate character of this viewpoint, which supposes that the interaction between defects can be neglected, has been recognized for a long time. On the one hand, the application of electron microscopy to the investigation of dislocations revealed the formation of complex spatial structures (e.g., [29, 30]). However, the development of this spatial heterogeneity was observed during macroscopically smooth deformation and was investigated regardless of the problem of discontinuity of plastic flow, although the numerical models proposed to explain the spatial aspect of self-organization of dislocations are also promising for understanding its temporal aspect [31, 32, 33]. On the other hand, owing to elaborate experimental techniques, several observations of sporadic bursts in the plastic strain rate were reported very early, even before the emergence of the concept of plastic deformation through the motion of defects (e.g., [34, 35]). These bursts were, however, considered as resulting from large stochastic fluctuations in the system of defects. Finally, in some cases the continuous plastic flow becomes unstable at the macroscopic scale and serrated deformation curves are observed.
The jerky flow may be caused by various microscopic mechanisms which have been extensively studied, e.g., the Portevin-Le Chatelier (PLC) effect controlled by the interaction of dislocations with solutes [3, 4, 36, 37], the thermomechanical instability caused by insufficient heat evacuation from samples deformed at very low temperatures [38, 39], twinning [40, 41], martensitic transformations [42, 43], and so on. The observation of jerky flow testifies that, at least in certain conditions, the interaction between defects may lead to intermittent plastic flow caused by the short-time cooperative motion of large groups of defects. The creation of the theory of nonlinear dissipative systems, which are characterized by self-organization [1, 2], made it possible to analyze discontinuous plastic deformation from the viewpoint of collective processes. Such investigations started in the 1980s, mostly using the example of the PLC effect, which displays complex spatiotemporal behavior associated with strain localization within deformation bands and stress serrations of various kinds. A detailed description of this phenomenon will be given in § 1.5. Essentially, various approaches to the analysis of stress drops showed that the spatiotemporal patterning corresponds to nontrivial dynamical regimes [5, 6, 7, 8, 9, 10, 11]. In particular, power-law Fourier spectra of series of stress serrations and power-law statistical distributions of their amplitudes and durations were found in a range of high strain rates [5, 6, 7, 8]. Power-law dependences are equivalent to scale-free behavior, as can be seen from the relationship

p(kx) \propto (kx)^{\beta} = k^{\beta} x^{\beta} \propto k^{\beta} p(x),

which displays self-similarity upon scaling. This scale-free behavior bears witness to a possible manifestation of self-organized criticality. The concept of SOC was introduced by P. Bak et al. [44] as a general framework for explaining avalanche-like phenomena in spatially extended dynamical systems and, more specifically, the flicker noise characterized by a 1/f power density spectrum. Although some authors contest the application of SOC to explain the 1/f-noise [45], it is widely used to model earthquakes and dry friction, phenomena presenting certain similarities with jerky flow. It should be noted that such a hypothesis is consistent with the infinite number of degrees of freedom of dislocation ensembles. At lower strain rates, histograms with characteristic peaks were observed. They were shown (e.g., [4, 7, 8]) to be associated not with stochastic behavior but with the so-called deterministic chaos [46]. Interestingly, this observation implies a drastic reduction in the number of degrees of freedom, because chaos occurs in low-dimensional systems. Chaos is also characterized by scale invariance, which is reflected in the so-called fractal geometry (see § 3.3 and the Appendix) of the attractor of the phase trajectories of the dynamical system.
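The relationship written above shows that a power law has no characteristic scale; the converse also holds and is worth making explicit, since power-law fits are used throughout this work. The following short derivation is a standard argument, reproduced here for completeness. Suppose that p is scale invariant, i.e., p(kx) = g(k) p(x) for every rescaling factor k > 0 and some function g. Differentiating with respect to k and setting k = 1 yields

x \, \frac{dp}{dx} = g'(1)\, p(x),

whose solution is

p(x) = p(1)\, x^{\beta}, \qquad \beta = g'(1).

Hence a power law is not merely an example of scale-free behavior: it is the only form a scale-invariant distribution can take.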
Observing scale invariance in the distributions of stress serrations at the macroscopic level suggests extending the analysis to finer event scales by using more sensitive techniques. Recently, the acoustic emission (AE) technique was used to investigate the statistics of the PLC effect [16, 17, 18]. Surprisingly, it was found that the acoustic emission is characterized by power-law statistics of event sizes at all strain rates. Moreover, a similar statistical behavior of AE was found for both jerky flow and macroscopically smooth plastic flow in the same tests. Such persistence of the statistical behavior testifies to an invariant nature of the deformation processes during both stable and unstable deformation, and implies an inherently intermittent, scale-invariant character of plastic activity at the mesoscopic scale relevant to AE. It is noteworthy that this conjecture is consistent with earlier observations of power-law statistics of series of electric signals accompanying stress serrations during low-temperature catastrophic dislocation glide or twinning in pure metals [47, 48]. Although the above research testifies to an important role of self-organization of dislocations, the PLC effect is often considered as a particular, exotic case. Nevertheless, application of the AE technique to pure materials, either displaying serrations caused by twinning (Cd, Zn) or deforming by smooth dislocation glide (ice, Cu), showed that the intermittency of plastic deformation is rather the rule than the exception and results from avalanche-like collective dislocation motion [19, 49, 21, 25, 23]. An example of power-law statistical distributions of acoustic energy bursts in ice single crystals deformed by creep is illustrated in Fig. 1.1.

Figure 1.1: Statistical distributions of acoustic energy bursts recorded in ice single crystals under constant stress [49].

The results of the AE studies were recently confirmed using another sensitive method based on high-resolution extensometry [21, 22]. In this case, power-law distributions were found for the local strain-rate bursts detected during plastic flow of Cu single crystals. All these results bear evidence to the intermittent, scale-invariant character of macroscopically uniform plastic activity, albeit with sizes of the local strain-rate bursts much smaller than in jerky flow. Finally, serrated deformation curves and power-law distributions of serration amplitudes were recently observed in the compression of microscopic pillars of various pure metals (Fig. 1.2) [24, 50, 51]. These works provide a direct proof of collective dislocation dynamics, which shows up on the deformation curves when the size of the deformed specimens is small enough, so that their plastic deformation can no longer be considered as a result of averaging over many independent plasticity events. The collection of all the above-described results led to the growing recognition of a ubiquitous character of self-organization phenomena in dislocation ensembles. The various data on small-scale intermittency in pure single crystals provided approximately the same value of the power-law exponent. Namely, distributions p(E) \propto E^{\beta} with \beta \approx -(1.5 \div 1.8) were reported for the energy E of AE events recorded during deformation of single crystals of various materials with hexagonal and cubic crystal structures [19, 20, 23].
A similar range of \beta was found for the jerky displacements in the tests on micropillars. As an abrupt displacement takes place at an approximately constant force level, it determines the mechanical work produced and, therefore, also characterizes the energy dissipated in the process of plastic deformation.

Figure 1.2: Left: intermittency of the shear-stress vs. shear-strain curve for a Ni sample with [269] orientation; the numbers designate the pillar diameter. Right: distribution of slip events plotted in logarithmic coordinates. Open circles: data for a single sample with diameter ∼20 µm; solid circles: aggregate data from several samples [24].

The entirety of these data gave rise to a "universality" conjecture [25]. Concurrently, the value \beta^* \approx -1.35 was found for the amplitude distribution of AE in ice polycrystals [26]. This value corresponds to an even higher estimate of the exponent \beta for the energy distribution, as can be illustrated with the aid of the approximation suggested in [20], according to which the energy dissipated during an acoustic event is proportional to the square of its peak amplitude. Using this simplification, i.e., E \propto A^2, conservation of probability, p(E)dE = p(A)dA, yields p(E) \propto p(A)\,dA/dE \propto A^{\beta^*-1} \propto E^{(\beta^*-1)/2}, so that the relationship \beta = (\beta^*-1)/2 is easily deduced (see, e.g., [22]) and gives an estimate \beta \approx -1.2. Much lower values, \beta \approx -(2 \div 3), which in addition depend on the strain rate and evolve with work hardening, were reported for the AE accompanying the PLC effect [16, 17, 18]. Taking into account that at the macroscopic scale of stress serrations power-law distributions were only found at high strain rates, with exponents varying in rather large intervals for various samples (\beta \approx -(1 \div 1.7) was typically reported for the amplitude distributions of stress serrations), the questions of the relationship between the general laws governing the collective dislocation dynamics and of the role of the specific mechanisms of plasticity remain open.
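In practice, all the exponents quoted above are extracted from finite sets of event amplitudes or energies. As an aside for the reader, the sketch below illustrates one standard way of doing this: maximum-likelihood estimation of the exponent, following the estimator popularized by Clauset et al. It is not necessarily the fitting procedure used in the works cited above (the analysis applied in this study is described in Chapter 3), and the synthetic data merely stand in for measured AE energies.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "AE energies" drawn from p(E) ~ E^beta for E >= E_min, beta = -1.6
beta_true, e_min, n = -1.6, 1.0, 5000
energies = e_min * rng.random(n) ** (1.0 / (beta_true + 1.0))  # inverse-CDF sampling

def fit_power_law(x, x_min):
    """Maximum-likelihood estimate of beta in p(x) ~ x^beta for x >= x_min."""
    x = x[x >= x_min]
    beta_hat = -1.0 - x.size / np.log(x / x_min).sum()
    std_err = (-beta_hat - 1.0) / np.sqrt(x.size)   # asymptotic standard error
    return beta_hat, std_err

beta_hat, err = fit_power_law(energies, e_min)
print(f"estimated beta = {beta_hat:.2f} +/- {err:.2f} (true value {beta_true})")

Binning-based fits on log-log histograms are common as well, but the likelihood estimator avoids the bias introduced by the arbitrary choice of bins.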
1.2 Analogues in physics

The observations of power-law statistics in plasticity are reminiscent of similar features in nonlinear phenomena of various natures, often referred to as "crackling noise". Numerous investigations following the development of the theory of dissipative dynamical systems proved that the emergence of scale-invariant behavior is one of the fundamental properties of self-organization phenomena in systems consisting of a large number of interacting elements. Their behavior is expressed by a jerky response to smoothly changing external conditions. Well-known examples in physics and materials science include the Barkhausen noise in magnetic materials [54], vortex avalanches in type II superconductors [55], charge density waves [56], fracture [57, 58], martensitic transformations [59], dry friction [60], earthquakes [61], and so on. All these phenomena are characterized by avalanche-like relaxation processes alternating with periods of slow loading, and by power-law distributions of the sizes and durations of the avalanches. Several examples of such behavior are presented below in some detail.

The Barkhausen effect is an instability responsible for the jerky character of the magnetization of ferromagnets [62]. Discovered in 1919, it provided direct experimental evidence for the existence of magnetic domains. Let us consider a ferromagnet below the Curie temperature. In zero magnetic field it is divided into domains whose magnetic moments tend to be disordered, so as to compensate each other and minimize the internal energy, which is lowest in the unmagnetized state. When an external magnetic field is applied to the material, its magnetization proceeds through the displacement of the domain walls. If the crystal structure were free of defects, the domain walls would move in infinitesimal magnetic fields. In reality they interact with various pinning centers, such as dislocations, polycrystalline grains, stacking faults, surface roughness, etc. In particular, this explains the existence of permanent magnets, i.e., materials having a spontaneous magnetic moment in the absence of an external magnetic field: their full demagnetization is impeded by the pinning of the domain walls. The other consequence of pinning is the Barkhausen effect, which is caused by jump-like displacements of the domain walls. The corresponding jumps in magnetization can be observed, e.g., with the help of inductive techniques. Recently, the development of magneto-optical methods has allowed direct (in situ) observations of the intermittent motion of domain walls [63]. However, the most reliable experiments in terms of signal statistics are still based on inductive measurements; a typical example of such measurements is presented in Fig. 1.3. The analogy between the Barkhausen effect and the intermittency of plastic flow is rather profound, as both phenomena are related to the problem of collective pinning. It is not surprising that authors who studied the phenomenon of plastic instability, such as McCormick, also suggested models of unstable magnetization processes. However, most of the works treating the analogy between plasticity and magnetization considered the motion of individual domain walls through pinning centers (e.g., [66]). An avalanche dynamics of magnetic flux, characterized by power-law statistics, was also observed in type II superconductors [55, 67] and in discrete superconductors, i.e., Josephson junction arrays [68, 69]. The magnetization of these materials is determined by the dynamics of a system of vortices (called, respectively, Abrikosov or Josephson vortices) carrying magnetic flux. There are evident qualitative similarities between the magnetic flux flow in such materials and plastic flow. Indeed, both vortices and dislocations are linear "defects" which are subjected to external forces (the Lorentz force [70] and the Peach-Koehler force [71], respectively), pinning forces, and mutual interaction forces. It should be noted, however, that the case of dislocations presents more complexity. In particular, the magnetic vortices are oriented in the direction of the applied magnetic field and their interactions mostly reduce to repulsion, whereas the interactions between dislocations depend on their type, Burgers vector, mutual orientation, and bounding to certain slip systems, and include annihilation and multiplication [71]. Perhaps the most famous example of avalanche-like behavior is rock fracture.
Whereas the above examples are typically characterized by self-similar statistics over only 1-3 orders of magnitude of the measured variable, because of the strong limitations on the sample size in laboratory conditions, in geology the avalanches can reach the size of giant earthquakes. Consequently, the distribution of sizes of seismic events obeys a power law, known as the Gutenberg-Richter law, over more than eight orders of magnitude [61]. The dynamics of earthquakes is generally related to a stick-slip mechanism, which involves the sliding of a plate of the earth crust along another plate [60]. Its key feature is that the friction force, which acts along the fault line between the plates, decreases with increasing slip velocity. As argued in [72], the mechanism responsible for such behavior is similar to that responsible for the negative strain-rate sensitivity of the stress under conditions of the PLC effect.

1.3 Application of the AE technique to statistics of deformation processes

Sudden changes in the internal structure of materials (cracking, motion of dislocation pile-ups, phase transitions, twinning, etc.) under the action of external forces lead to the emission of sound waves. This phenomenon has been known for a long time because the sound is sometimes audible, as in the well-known case of mechanical twinning during bending of a tin bar ("tin cry"). The recording of the acoustic signal provides information about these processes, which take place on various spatial and temporal scales. The regular application of the AE technique as an in situ method of investigation started with a comprehensive study by Kaiser [73, 74]. It has proven to be a powerful tool for studying the mechanisms of plastic deformation and failure in wide time and space ranges, from microscopic scales corresponding to the motion of groups of defects to the macroscopic scale of the deforming sample (see, e.g., reviews [75, 76] and references therein). In particular, simple estimates show that a measurable stress drop of a few hundredths of MPa requires the cooperative motion of the order of 10^5 dislocations through the sample cross-section, whereas the motion of a few hundred dislocations through a polycrystalline grain can provoke a measurable acoustic event [77]. Indeed, significant AE accompanies even macroscopically smooth deformation curves of pure materials [21]. Consequently, this technique is particularly useful for the study of the collective motion of dislocations. In spite of the wide application of AE to monitoring microstructural changes in deforming materials, its interpretation is not straightforward because the signal is affected by the transfer function of the AE sensor, sound reflections, modulations by the propagating medium, and so on. In some cases, comparison of AE data with microstructural studies may provide information on the possible sources of the elastic waves. In other cases, the interpretation depends on physical assumptions. Dislocation-based models of AE usually consider three mechanisms: (i) relaxation of stress fields caused by dislocation motion; (ii) annihilation of dislocations; (iii) "bremsstrahlung" radiation from accelerated dislocations [76].
Estimates of the energy released by these sources prove a preponderant role of the first type of AE source (see, e.g., [75]). A model of such a source was proposed by Rouby et al. [78], who considered the motion of a straight dislocation line at a constant velocity. Pursuing this approach, Richeton et al. [79] deduced the following relationship for the amplitude of the acoustic wave generated by a dislocation avalanche with a total dislocation length L and average velocity v:

A(t) = \frac{3 C_T^2}{4 \pi C_L^2} \, \frac{\rho b}{D^2} \, L v    (1.1)

where C_T and C_L are, respectively, the transverse and longitudinal wave velocities, ρ is the density of the material, b is the Burgers vector, and D is the distance between the AE source and the sensor. The term Lv corresponds to the rate of area sweeping by dislocations: Lv = dS/dt. Normalized by volume and combined with the Orowan equation [80] for the plastic strain rate, ε̇ = k ρ_m b v, where ρ_m is the density of mobile dislocations and k is a geometrical factor, this relationship shows that A(t) ~ ε̇. In statistical investigations, the maximum amplitudes A of acoustic events are usually examined. Integration of Eq. 1.1 under the assumption that the avalanche velocity v decays exponentially with time leads to a proportionality relationship between A and the strain increment caused by the dislocation avalanche: A ~ Δε. Its feasibility is confirmed, e.g., by experiments on ice single crystals, which showed proportionality between the global AE activity and the plastic deformation [19]. Weiss et al. [21] provide arguments that this relationship should remain valid in a more general case, e.g., for a decay following a power law, as could be expected for avalanche-like behavior. It is often supposed that the radiated acoustic energy is a more relevant characteristic of the deformation processes, also applicable to non-dislocation sources, e.g., twinning or cracking [81, 20, 82]. This characteristic is considered to scale with A^2, as follows from comparison of Eq. 1.1 with the estimate of the energy dissipated at the source by a single screw dislocation [83]:

E = K L^2 b^2 v^2    (1.2)

where K is given by a combination of material constants. As specified in § 3.1, this approach is used for the statistical analysis in the present study.

1.4 Theoretical approaches to power-law statistics

The theory of nonlinear dynamical systems can be found in many books and journal reviews, starting from the classical ones [1, 2]. The most striking feature of such systems is that they self-organize. The result of self-organization may be a very complex behavior which cannot be understood in terms of a summation of random or periodic motions. Three aspects of complexity, i.e., fractals, deterministic chaos, and self-organized criticality, have already been mentioned above. Since SOC is characteristic of infinite-dimensional systems and leads to power-law statistics, it seems to be relevant to the studied problem and will be described below in some detail. Alternative interpretations of power laws will also be presented. At the same time, the correlation between the elements of a system may drastically reduce the effective number of degrees of freedom controlling its dynamics.
It was shown in the case of the PLC effect that this reduction may lead to a transition from SOC to low-dimensional chaos [7, 84]. Moreover, it is known that a system composed of many oscillators may reach perfect synchronization leading to simple periodic behavior, when all the oscillators move in phase [85]. Another realization of synchronization can occur through the propagation of a kind of switching waves [86], giving rise to the so-called "relaxation oscillations", which are characterized by an alternation of fast and slow kinetics [87]. The relationships between SOC and these dynamical modes will be discussed briefly. Finally, another remarkable feature of dynamical systems is universality, i.e., the similar behavior of systems governed by distinct mechanisms and even coming from different fields of science, which allows classifying various systems into universality classes. Thanks to this feature, simple models, like the ones presented below, often prove useful for understanding and modeling real behaviors.

1.4.1 Self-organized criticality

Despite a vast literature on self-organized criticality, there is no clear definition of this concept, which is usually explained using examples of specific dynamical systems. The SOC concept was proposed by Bak et al. to explain the behavior of a simple cellular automaton model of a "sandpile" [44]. It was shown that the sandpile reaches a kind of critical state, characterized by power-law correlations, similar to second-order phase transitions. The salient feature of this behavior is that, in contrast to phase transitions, the sandpile reaches the critical state spontaneously, without fine tuning of a control parameter. A simple sandpile model can be demonstrated using a square grid, each element of which is assigned an integer variable z(x, y) representing the number of sand grains accumulated on this site. At each time step a grain is added on a randomly selected site. If z(x, y) reaches a critical value (equal to 4 in the 2D case) on a given site, its grains are redistributed among its nearest neighbors or, eventually, leave the system through the grid boundaries. This redistribution may trigger chain processes leading to the formation of an avalanche, with a size s defined as the total number of toppling sites and a duration T given by the number of time steps during which the chain process develops. It can be said that the system performs avalanche-like transitions between different metastable states. The avalanche behavior is characterized by several power laws which describe the fractal structure of the avalanche patterns, the probability densities P(s) and P(T), the relationship s(T) between these variables, and the Fourier spectra of the time series representing the evolution of the system variables. The last feature explains why SOC is often considered as a possible mechanism of 1/f-noise. More specifically, the dependences demonstrate a cut-off at large scales because of the finite dimensions of the system. For example, P(s) is generally described by the relationship

P(s) = s^{\beta_s} f_c(s/s_0),    (1.3)

where \beta_s usually varies between -1 and -2, and the cut-off scale s_0 is associated with the linear system size. The cellular automata present stochastic models of SOC.
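It may help to see how little is needed to produce the behavior just described. The following minimal sketch of the two-dimensional sandpile automaton (threshold z_c = 4, random driving, open boundaries) reproduces a power-law P(s) with a finite-size cut-off. The grid size, the number of added grains, and the discarded transient are arbitrary choices, and the avalanche size is measured here as the total number of topplings, a common convention.

import numpy as np

rng = np.random.default_rng(1)
L = 16                                # linear grid size
z = np.zeros((L, L), dtype=int)       # grains accumulated on each site
sizes = []

for step in range(20_000):            # slow driving: one grain at a time
    i, j = rng.integers(L, size=2)
    z[i, j] += 1
    s = 0
    while True:                       # fast relaxation: topple all unstable sites
        unstable = np.argwhere(z >= 4)
        if unstable.size == 0:
            break
        for i, j in unstable:
            z[i, j] -= 4
            s += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    z[ni, nj] += 1    # grains falling off the edge are lost
    if step > 5_000 and s > 0:        # discard the transient towards the SOC state
        sizes.append(s)

# Log-binned estimate of P(s); the slope of log P vs log s approximates beta_s
sizes = np.asarray(sizes)
bins = np.unique(np.logspace(0, np.log10(sizes.max()), 15).astype(int))
hist, edges = np.histogram(sizes, bins=bins, density=True)
mask = hist > 0
slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print(f"{sizes.size} avalanches, apparent exponent beta_s ~ {slope:.2f}")

Repeating the run for several grid sizes L makes the cut-off s_0 of Eq. 1.3 grow with the system size, illustrating the finite-size scaling contained in that relationship.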
Another kind of model is based on a deterministic approach: the so-called spring-block models, which were proposed to explain the Gutenberg-Richter statistical law for earthquakes [88, 89, 60]. Such models usually consider an array of blocks connected to their neighbors by springs. The blocks are pulled across an immobile plate by a driving plate, to which they are also connected with the aid of springs. In this case the role of the parameter z is played by the local force acting on a given block. Moving the driving plate adds a small force to each block. As mentioned in § 1.2, the nonlinear friction law between the blocks and the immobile plate (see Fig. 1.4) results in stick-slip behavior. If the local force exceeds the friction threshold, the corresponding block slips and the resulting rearrangement of the local forces may trigger a chain process. Again, it was found that the dynamics of the system is described by power laws, provided that the driving rate is low. This simple scheme demonstrates the three basic ingredients controlling the dynamics of such models: threshold friction, the separation of a slow time scale (loading) and a fast time scale (avalanche-like relaxation), and spatial coupling between the blocks. Using 2D and 3D spring-block models, good agreement was found between the simulated power-law exponents and the values following from earthquake catalogues. As earthquakes are related to the accumulation of stresses in the earth crust, there is a direct analogy between this natural phenomenon and the plastic deformation of solids [5, 72]. Consequently, the spring-block schemes are of special interest for modeling intermittent plastic flow. In particular, they were successfully used to simulate the PLC behavior, as will be discussed in § 1.5.4.

1.4.2 Alternative explanations of power-law behavior

Although SOC was proposed as a general framework for the explanation of numerous observations of power-law statistics, the systems for which it has been clearly established are rare. Various alternative models were proposed in the literature to explain such critical-like behavior within the concept of "plain old criticality", by adding some uncertainty. For example, Sethna et al. [90] studied a random-field Ising model to explain the Barkhausen effect. The Ising model considers a lattice of magnetic spins and represents a classical model of a first-order transition in a ferromagnet below the Curie temperature: when the magnetic field passes through zero, the equilibrium magnetization reverses abruptly. By adding disorder in the form of a random magnetic field, the authors not only reproduced magnetic hysteresis, instead of a sharp transition, but also found scale-free fluctuations of the magnetization for certain values of the disorder. Another model giving power laws without parameter tuning, based on the slow sweeping of a control parameter, was proposed by Sornette [91]. The idea of this approach can be illustrated using a fiber bundle made of N independent parallel fibers with identically distributed independent random failure thresholds. The bundle is subjected to a force F. The ratio F/N plays the role of the control parameter, whose value changes because more and more fibers break down as the applied force F is increased. Again, numerical simulation reveals power-law distributions of rupture sizes for a low loading rate.
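A minimal sketch of this fiber-bundle argument in its simplest, equal-load-sharing version is given below; the uniformly distributed thresholds and the sample size are arbitrary choices. The sorted-threshold bookkeeping is a standard trick: with k fibers already broken, the external force needed to break the next weakest fiber is (N - k) times its threshold, and an avalanche is the run of fibers that break without any further increase of the force.

import numpy as np

rng = np.random.default_rng(2)
N = 300_000
thresholds = np.sort(rng.random(N))      # individual failure thresholds, sorted
f = (N - np.arange(N)) * thresholds      # force needed to break fiber k, given
                                         # that the k weaker fibers are broken
sizes = []
k = 0
while k < N:
    fmax = f[k]                          # raise the force just enough to break fiber k
    s = 1
    k += 1
    while k < N and f[k] <= fmax:        # further failures at constant force
        s += 1
        k += 1
    sizes.append(s)

sizes = np.asarray(sizes[:-1])           # drop the final catastrophic avalanche

# Log-binned avalanche-size distribution and its apparent exponent
bins = np.unique(np.logspace(0, np.log10(sizes.max()), 15).astype(int))
hist, edges = np.histogram(sizes, bins=bins, density=True)
mask = hist > 0
slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print(f"{sizes.size} avalanches, apparent exponent ~ {slope:.2f}")

For equal load sharing, the known mean-field result for the avalanche-size exponent is -5/2, and the fitted slope should come out close to this value, up to binning effects.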
These models are consistent with a general consideration by Weissman [45], who examined various mechanisms of 1/f-noise and concluded that it can occur because of smeared kinetic features. In particular, a system described by characteristic rates which are determined by a large number of independent factors should have a log-normal distribution of these rates. If this distribution is wide enough, its flat top may provide many octaves of power-law dependence. The author suggested that 1/f-noise may be viewed as an extreme limiting case of many types of broadened kinetics.

1.4.3 Role of finite loading rate and overlapping

The models presented above bear evidence that scale-invariant statistics may stem from various mechanisms. Therefore, determination of the underlying mechanism necessitates a detailed statistical analysis and a careful measurement of various critical exponents and of the relationships between them. This requirement raises, above all, the question of the reliability of the experimental determination of power-law exponents. As a matter of fact, the above models suppose either a very slow driving rate or, in the ideal case, halting the loading during the propagation of an avalanche, so that no two avalanches propagate simultaneously. The experiment cannot always approach this ideal situation, so that avalanches can overlap in time and in space and be recorded as a single event. Few works have considered the possible effect of avalanche superposition on the critical exponents. Durin and Zapperi [92] experimentally measured the Barkhausen noise in soft ferromagnetic materials and distinguished two cases: when the exponent \beta_T describing the probability distribution of avalanche durations tends to -2 at zero rate of the external magnetic field, the material shows a linear increase in \beta_s and \beta_T with an increase in the field rate; when \beta_T(0) > -2, the slopes are independent of the field rate. White and Dahmen [93] proposed a theoretical explanation of this behavior by considering a linear superposition of avalanches. According to their model, the decrease in the slopes of the power-law dependences in the former case occurs because small events are absorbed into larger ones. In the latter case the superposition is strong starting from very low field rates, so that no changes are observed when the rate is varied. The authors also analyzed theoretically the case \beta_T < -2 and found that the exponents must be rate-independent for small enough rates, but a crossover to one-dimensional percolation on the time axis should occur when the rate is increased. To our knowledge, the effect of overlapping on the statistics has not been explored in the case of AE investigations of plasticity. Recent data on the distributions of AE amplitudes during the PLC effect [18] showed similar \beta_s values for low and intermediate strain rates and somewhat flatter dependences at higher strain rates, in qualitative agreement with the above predictions. However, even at small strain rates the PLC instability results in the merging of AE events during stress serrations, the latter being short on the macroscopic time scale but quite long in comparison with individual AE events. This merging causes distinct bursts in the duration of AE events, so that a clear power-law behavior is actually observed only for the AE amplitudes.
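How strongly such merging can affect the apparent statistics is easy to see already at the level of the individualization procedure itself. The sketch below is a schematic illustration only (the acquisition parameters of the present work are specified in Chapter 2): events are extracted from a synthetic rectified signal by thresholding, and threshold crossings separated by less than a "hit definition time" (HDT) are merged into one event. Varying the HDT changes the apparent number and durations of the events, which is exactly the kind of sensitivity studied in Chapter 4.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic rectified AE record: exponentially decaying bursts over background noise
fs = 1.0e6                                       # sampling frequency, Hz
n = 1_000_000                                    # one second of signal
signal = np.abs(rng.normal(0.0, 0.01, n))        # background noise
decay = np.exp(-np.arange(2000) / (1e-4 * fs))   # 0.1 ms decay time
for i0 in rng.integers(0, n - 2000, size=400):   # 400 bursts at random instants
    amp = 0.05 * rng.random() ** (-0.5)          # heavy-tailed burst amplitudes
    signal[i0:i0 + 2000] += amp * decay

def extract_events(sig, fs, threshold, hdt):
    """Group supra-threshold samples into events; gaps shorter than the
    hit definition time `hdt` (in seconds) do not terminate an event."""
    idx = np.flatnonzero(sig > threshold)
    if idx.size == 0:
        return np.empty(0), np.empty(0)
    new = np.diff(idx) > hdt * fs                # a long gap starts a new event
    starts = np.r_[idx[0], idx[1:][new]]
    ends = np.r_[idx[:-1][new], idx[-1]]
    durations = (ends - starts + 1) / fs
    amplitudes = np.array([sig[s:e + 1].max() for s, e in zip(starts, ends)])
    return durations, amplitudes

for hdt in (1e-5, 1e-4, 1e-3):
    d, a = extract_events(signal, fs, threshold=0.05, hdt=hdt)
    print(f"HDT = {hdt:.0e} s: {d.size:4d} events, mean duration {d.mean():.2e} s")

Chapter 4 applies the same idea to real AE records, varying the threshold and the HDT over wide ranges and examining the robustness of the resulting power-law exponents.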
Moreover, any investigation of plasticity faces a fundamental problem: the behavior is never statistically stationary because of the microstructural changes which are at the origin of the work hardening of the material. As a result, the values of the critical exponents evolve during deformation. Since no two samples have identical microstructures and, besides, the microstructure evolution depends on various factors including the strain rate, a rigorous comparison of results obtained at different strain rates is hardly possible. To study the effect of overlapping, another strategy was therefore adopted in the present study (see Chapter 4): the criteria used to extract events from a signal were varied, so that various sets of events were extracted from each of the recorded signals and analyzed.

1.4.4 Relation to other dynamical regimes

As mentioned above, the analysis of stress serrations has led to the conjecture that the PLC effect presents a rare example of a transition from an infinite-dimensional (SOC) to a low-dimensional (chaotic) behavior when the driving rate is decreased. Another known example of a transition from a scale-free to a chaotic state concerns hydrodynamic turbulence [94]. It can be conjectured that spatially extended dynamical systems may find themselves in various regimes which present different manifestations of complexity. The complexity associated with deterministic chaos is related to the sensitivity to initial conditions, which can be quantified using the so-called Lyapunov exponents. To illustrate this, one can take an infinitesimal initial distance between two phase trajectories and consider the evolution of its projections on the principal directions. In the limit of small times, it is expressed in terms of e^{λ_i t}, where λ_i is the Lyapunov exponent in the i-th direction. Chaos occurs when at least one λ_i becomes positive, which reflects instability in the sense of the exponential divergence of trajectories, while the dynamics remains stable in the other directions. Although the language of phase trajectories is impractical for the description of infinite-dimensional systems, SOC is often associated with almost zero Lyapunov exponents, which reflect a slower, power-law divergence. Another nontrivial dynamical regime is associated with the phenomenon of collective synchronization in a system of coupled oscillators, which spontaneously lock to a common phase, despite the initially different phases of the individual oscillators [85]. In fact, this phenomenon is modeled using the same lattice models as those proposed for SOC, i.e., models characterized by a threshold dynamics, the separation of slow and fast time scales, and spatial coupling. Perez et al. [86] showed that such models allow for a transition between the scale-free and synchronized behaviors. Their dynamics is governed by the strength of the spatial coupling and the degree of nonlinearity of the driving force. More specifically, when the nonlinearity of the driving force is strong and the coupling strength is weak, large avalanches repetitively sweep the whole system, giving rise to relaxation oscillations. Moving away from these conditions leads to a gradual transition to a discrete distribution of a few avalanche sizes, then to the coexistence of a discrete and a continuous distribution, and finally to power-law behavior. These two examples suggest that dynamical chaos is related to the synchronization of various elements of the dynamical system. However, since chaos is commonly studied in low-dimensional systems, such a relationship has not been examined so far.

1.5 Portevin-Le Chatelier effect

1.5.1 General behavior

The PLC effect is a plastic instability observed in dilute alloys and caused by the interaction between dislocations and solute atoms.
It was discovered in the early 20th century [95, 96] but continues to attract considerable attention of researchers up to now. One of the fundamental reasons for this interest is that the PLC effect offers a striking example of complex spatiotemporal dynamics of dislocations [4]. Indeed, as will be seen below, it is one of the cases when the heterogeneity of plastic flow shows up at the macroscopic scale, which requires collective behavior of large numbers of dislocations. Besides, understanding this behavior is of high practical interest because of its detrimental effects on the formability of materials widely used in industry, e.g., steels and aluminum alloys [97]. Most of the research into the PLC effect has been realized in the geometry of uniaxial tensile tests. In this case, the effect consists in the repetitive localization of the plastic strain rate within transversal deformation bands, which may either propagate along the tensile axis or remain immobile. The static bands have a lifetime of the order of milliseconds [98]. The traces of the deformation bands, which usually have widths from fractions of a millimeter to several millimeters, can be observed on the side surface of the specimens using an optical microscope or even with the naked eye [99]. In tests with a constant velocity of the mobile grip, i.e., a constant imposed overall strain rate ε̇_a, the elastic reaction of the "deformation machine-specimen" system to the unstable plastic flow of the specimen gives rise to abrupt stress variations on the stress-time or stress-strain curves (see Fig. 1.5). As can be recognized in Fig. 1.5, the serrations usually set in after macroscopically uniform deformation up to a certain critical strain ε_cr [102, 103]. Figure 1.6 represents a typical dependence of the critical strain on the imposed strain rate. It can be seen that ε_cr increases with ε̇ in the region of high strain rates (the so-called "normal behavior") but decreases at low strain rates ("inverse behavior"). Figure 1.6 also indicates the strain-rate range in which the instability is observed. As a matter of fact, the PLC effect occurs in limited domains of temperature and strain rate. The morphology of the stress variations strongly depends on the experimental conditions, as illustrated in Fig. 1.5 for variation of ε̇_a at room temperature. In general, the shape of the deformation curves also depends on the material composition, the microstructure, and the sample dimensions. Nevertheless, several generic types of behavior have been identified (e.g., [105]). Usually three persistent types, called A, B, and C, are distinguished, depending on the shape of the deformation curve and the spatial pattern of the deformation bands [4]. A salient feature is the gradual transition between the different behaviors, which takes place when either T or ε̇_a is varied from one boundary of the respective instability range to the other. At a given temperature, type A behavior occurs close to the upper bound of the strain-rate interval of existence of the PLC effect, typically above 10^{-3} s^{-1} for AlMg alloys. It is characterized by irregular stress fluctuations without a visible characteristic amplitude, which are associated with deformation bands usually nucleating near one specimen end and propagating quasi-continuously along the tensile axis.
Some stress fluctuations are likely to result from fluctuations of the width and velocity of a propagating band. Besides, the band propagation through the sample is followed by a phase during which the stress increases until a new band is nucleated, upon which the stress falls back towards the envelope deformation curve, the imaginary prolongation of the smooth curve prior to ε_cr. Such a pattern of stress drops preceded by stress rises gave rise to the name "locking serrations" for type A behavior. Type B instability is observed around ε̇_a = 10^{-4} s^{-1} and is characterized by rather regular stress drops, such that the stress oscillates around the envelope curve. Each stress drop results from a static deformation band nucleated ahead of the previous band, for which reason this regime is often referred to as "hopping propagation" of a band. As the localized plastic deformation results in work hardening of the material within the band, the serrations often form groups corresponding to the hopping band propagation along the entire specimen, separated by periods of smoother deformation during which the stress increases until it finally triggers a new band. When the strain rate is further decreased, a transition to type C instability occurs upon approaching ε̇_a = 10^{-5} s^{-1}. This behavior is associated with "unlocking serrations": sharp stress drops below the envelope curve. Like type B serrations, such stress drops display a characteristic size. They are usually attributed to randomly nucleated bands, although the analysis of serrations bears evidence to the existence of some correlation [106]. Finally, mixed behaviors are observed at intermediate strain rates.

1.5.2 Microscopic mechanism

Since the dislocation mechanism of plasticity is governed by the thermally activated motion of dislocations through obstacles [71], it is characterized, like any other activated process, by a positive dependence of the driving force on the rate of the process: a higher force is necessary to sustain a higher rate. The occurrence of the PLC instability is generally associated with a negative value of the strain-rate sensitivity (SRS) of the applied stress σ:

S = ∂σ / ∂ln(ε̇) < 0.

The inversion of the sign of S is attributed to dynamic strain aging (DSA) of dislocations [107, 108], engendering a recurrent process of pinning and unpinning of dislocations from solute atoms. The DSA mechanism can be schematically described as follows. On the microscopic scale the motion of dislocations is discontinuous: short periods of free motion alternate with long arrests on local obstacles, such as forest dislocations, during the waiting time t_w for thermal activation. The impurity atoms diffuse to the arrested dislocations during t_w and additionally anchor them. This additional pinning stress is thus determined by the competition between two time scales, the waiting time t_w and the solute diffusion time t_a, which depend on the strain rate and temperature. The PLC effect occurs when the two time scales are comparable. This is illustrated in Fig. 1.7(a), which schematically represents the σ(ε̇) dependence for a given temperature. If the strain rate is very high, t_w is much smaller than t_a and the dislocations move as if there were no impurity atoms. In the opposite limit (t_w ≫ t_a) the dislocations are constantly saturated with solute atoms and move together with their solute clouds.
In both cases a normal positive-slope σ(ε̇) dependence is observed for the thermally activated process, as shown in the figure for the intervals ε̇ < ε̇_1 and ε̇ > ε̇_2. However, the left-hand side of the curve corresponds to a higher stress level. Consequently, a negative slope occurs in the interval between ε̇_1 and ε̇_2, where the concentration C of solutes on dislocations diminishes with an increase in ε̇. The overall behavior is thus represented by an N-shaped σ(ε̇) dependence. The interval of instability covers several orders of magnitude of ε̇ (the scheme in Fig. 1.7(a) would look realistic if the ε̇-axis were logarithmic). Such a stress-rate dependence leads to an instability in the form of relaxation oscillations [87], provided that the imposed strain rate falls in the range between ε̇_1 and ε̇_2. Indeed, as demonstrated by Penning [109], when the loading brings the system to the threshold stress, ε̇ undergoes a jump from the "slow" to the "fast" positive-slope branch of the characteristic N-curve. In velocity-driven experiments (constant ε̇_a) the elastic reaction of the deformation machine converts the strain-rate jump into a drastic unloading, which terminates with a backward jump of ε̇ to the slow branch. The alternation of slow loading and abrupt stress drops gives rise to a saw-toothed deformation curve, as illustrated in Fig. 1.7(b). Such cyclic behavior is similar, for example, to the Gunn effect in a medium with negative differential resistance [110]. Different mathematical descriptions of this behavior use variants of the following constitutive equation composed of three additive terms:

σ = σ_h(ε) + S_0 \ln\left(\frac{ε̇}{ε̇_0}\right) + σ_{DSA} \left\{ 1 - \exp\left[ -\left( \frac{Ω(ε)}{ε̇ τ} \right)^{p} \right] \right\},    (1.4)

where the first term reflects the strain hardening, S_0 is the positive SRS in the absence of DSA, and the third term is due to DSA [111, 112]. In the last term, σ_DSA is the maximum pinning stress corresponding to the saturation of dislocations with solutes, τ is the characteristic diffusion time of the solutes, p is equal to 2/3 for bulk diffusion [113] or 1/3 for diffusion in dislocation cores [114], and Ω(ε) = bρ_m ρ_f^{-1/2} is the quantity introduced in [112] to describe the "elementary plastic strain", i.e., the strain produced when all mobile dislocations are activated and move to the next pinned configuration (b and ρ_m are the Burgers vector and the density of mobile dislocations, ρ_f is the density of forest dislocations). This equation makes it possible to explain the existence of a critical strain for the onset of the PLC effect. Kubin and Estrin [112] considered the evolution of Ω with deformation to find the conditions under which the SRS becomes negative. Alternatively, the evolution of τ due to the generation of vacancies during deformation was examined [115]. These analyses provided qualitative explanations of the normal and inverse behaviors of ε_cr, but the quantitative predictions were quite loose, especially for the inverse behavior. Recently, several attempts were made to improve the quantitative predictions. Some of these approaches are based on taking into account the strain dependence of either σ_DSA or the solute concentration on dislocations, the latter leading to a modification of the argument of the exponential function in Eq. 1.4 [116, 117]. Mazière and Dierke [118] showed that the agreement with experimental results can be improved by replacing the condition S < 0 by a stronger condition leading to an exponential growth of the instability. This question remains a matter of debate.
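The N-shaped dependence contained in Eq. 1.4 is easy to visualize numerically. The sketch below evaluates the rate-dependent part of Eq. 1.4 (the hardening term σ_h is omitted) and locates the interval of negative SRS; all parameter values are invented for illustration only and are chosen merely to place the turning points ε̇_1 and ε̇_2 inside the scanned range.

import numpy as np

# Rate-dependent part of Eq. 1.4: sigma(v) = S0*ln(v/v0) + s_dsa*(1 - exp(-(w/v)**p)),
# with v the strain rate and w standing for Omega/tau; illustrative values only.
S0, s_dsa, v0, p, w = 2.0, 40.0, 1e-8, 1.0 / 3.0, 1e-3

v = np.logspace(-8, 1, 2000)                    # strain rates, 1/s
sigma = S0 * np.log(v / v0) + s_dsa * (1.0 - np.exp(-(w / v) ** p))

# Strain-rate sensitivity S = d(sigma)/d(ln v); S < 0 marks the unstable interval
S = np.gradient(sigma, np.log(v))
unstable = v[S < 0]
if unstable.size:
    print(f"S < 0 between about {unstable[0]:.1e} and {unstable[-1]:.1e} 1/s,")
    print("i.e., the turning points of the N-curve of Fig. 1.7(a) fall inside the scanned range")

With these (arbitrary) numbers the negative-slope branch spans roughly three orders of magnitude of the strain rate, in qualitative agreement with the wide instability domains discussed above.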
1.5.3 Observations of complex behavior

Complexity on the macroscopic scale of stress serrations

The qualitative prediction of plastic instability, associated with propagating PLC bands and saw-toothed deformation curves, was one of the important successes of the microscopic models. However, the experimental observations display more diverse and complex patterns. Some aspects of this complexity were mentioned above (see also the Introduction). Various observations are shortly summarized below. During the last few decades quite a number of different approaches were proposed to quantify the complexity of serrated deformation curves, e.g., dynamical analysis [12], statistical analysis [5], wavelet analysis [119], multifractal analysis [120], and so on, each method highlighting one or another aspect of the behavior. The entirety of the results proved the correlated nature of stress serrations, with correlations strongly depending on the material and the experimental conditions (temperature, strain rate, microstructure). In particular, power-law statistical distributions of the stress-drop sizes and durations, as well as power-law Fourier spectra of the deformation curves, were found in a range of high strain rates [5, 7, 84]. The analysis of the critical exponents allowed for a conjecture of SOC-type behavior. However, this hypothesis remains a matter of debate. For example, Ananthakrishna and Bharathi [121] pointed out that SOC models require a slow driving rate, while the behavior of the PLC effect is peculiar in this sense: whereas power-law statistics are observed at high ε̇_a values, a decrease in ε̇_a leads to peaked histograms. These authors suggested that the critical behavior at high ε̇_a is similar to that in hydrodynamic turbulence [94]. For type B serrations the emergence of deterministic chaos was proved [12, 7, 84]. In this case the statistical distributions of the stress drops acquire an asymmetrical peaked shape. Finally, type C serrations are characterized by nearly Gaussian distributions. However, even these serrations are not completely uncorrelated, as revealed by the multifractal analysis [106]. As a matter of fact, multifractal scaling was found for stress-time curves in the entire strain-rate range of the instability [84, 122, 120]. Recent studies revealed that power-law distributions may appear even under conditions of type C behavior [18]. Unexpected in such conditions, this conclusion concerns stress fluctuations that occur on a smaller scale than the deep type C serrations. Indeed, upon magnifying the deformation curves, one can distinguish two distinct scales of stress drops at low strain rates. The mechanism responsible for the occurrence of the small drops has attracted little attention so far. Usually, they are considered as "noise" caused by the material heterogeneity. This point of view is supported, e.g., by their dependence on the surface treatment [123]. Nevertheless, such fluctuations show a nontrivial statistical behavior. When they are not disregarded, the statistical analysis results in bimodal histograms with two distinct peaks [11, 16].
Application of the analysis separately to the two groups of events shows that whereas the large stress drops follow rather symmetrical peaked distributions, power-law behavior with an exponent between -1 and -1.5 is found for the small serrations (Fig. 1.8) [START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF]18]. This observation proves their non-random nature. It thus reveals the multiscale character of the deformation processes and adds interest to investigations with a higher resolution, e.g., with the aid of the AE technique.

Complexity on a mesoscopic scale of AE

Until recently, AE studies of the PLC effect mostly concentrated on the analysis of average characteristics, such as the AE count rate [124,125,126]. Here, a count is recorded each time the amplitude of an acoustic oscillation exceeds a given threshold value, usually set at the level of the background noise, as described in § 2.4. This dichotomy between discrete and continuous AE under conditions of the PLC effect has been questioned recently. First, the amplitudes of acoustic events were found to vary in the same range during smooth and jerky flow in an AlMg alloy, whereas bursts were observed for their durations and related characteristics such as the count rate or energy [START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF]. It was supposed that the PLC band is formed through a process of synchronization of glide events in a similar amplitude range, due to the propagation of elastic waves, which leads to the merging of the corresponding acoustic events. This conjecture is consistent with the optical observations of the complex evolution of the forming deformation bands [127,128], although it should be remarked that the optical methods applied were limited to a millisecond time scale. It also conforms to the results of the multifractal analysis of individual waveforms recorded during jerky flow in an AlCu alloy [28], which proved that single AE events may not be elementary but possess a fine structure. On the other hand, Vinogradov and Lazarev [129], applying the data streaming technique, which allows recording the entire AE signal (see § 2.3), proved a continuous character of the AE accompanying the PLC effect in α-brass. However, only type A behavior was examined in that work. The statistical analysis of AE during the PLC effect started several years ago [27,[START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF][START_REF] Lebyodkin | [END_REF]18]. As already mentioned above in this Chapter, one of the major results of these works is that the AE is characterized by power-law statistics of event size in all experimental conditions. Moreover, the AE events extracted separately during stress serrations and during smooth intervals revealed very close distributions. These results led to the conjecture that the deformation processes relevant to the mesoscopic scale (uncovered through AE measurements) have a similar nature during both stress serrations and smooth plastic flow.
This unexpected result was also confirmed by the application of the multifractal analysis to the series of AE events, which testified that the temporal correlations characterizing the AE show no peculiarities associated with the PLC effect [START_REF] Lebyodkin | [END_REF]. At the same time, the exponents of the power-law distributions of AE amplitudes are clearly different from those observed for pure materials and are not unique. In particular, the values of the exponents depend on ε̇_a and evolve in the course of deformation.

Numerical modeling

The numerical models of the PLC effect aimed at reproducing the observed complex behavior of serrated deformation curves by allowing for strain heterogeneity and spatial coupling between differently strained regions in the deforming specimen. The general framework for phenomenological models was proposed by Zbib and Aifantis, who suggested a spatial coupling term in the form of the second gradient of strain [130]. It appears either as an additional internal stress,

\sigma = \sigma_h(\varepsilon) + F(\dot{\varepsilon}) + c\,\frac{\partial^2 \varepsilon}{\partial x^2},   (1.5)

or in the form proposed by Jeanclaude and Fressengeas [131], describing the dislocation transport due to double cross-slip of dislocations [START_REF] Friedel | Dislocations[END_REF],

\sigma = \sigma_h(\varepsilon) + F\!\left(\dot{\varepsilon} - D\,\frac{\partial^2 \dot{\varepsilon}}{\partial x^2}\right),   (1.6)

where F(ε̇) is the above-described N-shaped characteristic given by the sum of the 2nd and 3rd terms in Eq. 1.4, c is a constant having an elastic nature, and D is a diffusion-like coefficient. Various mechanisms governing the coupling via internal stresses were examined in the literature: plastic strain incompatibility, which must be compensated by an elastic strain and thus gives rise to internal stresses [132]; elastic fields of dislocations [START_REF] Canova | Large Plastic Deformations[END_REF]; nonlocal strain hardening [130]; rotation of the specimen axis in the case of single crystals [START_REF] Hähner | [END_REF]; variation of the specimen cross-section, leading to a triaxial stress state [START_REF] Bridgman | Studies in large plastic flow and fracture[END_REF]. The comparison of the various mechanisms showed that long-range incompatibility stresses play a preponderant role in the most frequent case of polycrystals [START_REF] Hähner | [END_REF][START_REF] Lebyodkin | [END_REF], although the transport of dislocations may manifest itself as well. Similar ideas, based on the combination of the nonlocal approach with the above-described microscopic model of DSA, were used to implement 3D finite-element models (e.g., [140,141,142,137]). Whereas 3D models are very expensive with regard to computation time, they provide an opportunity to avoid explicit conjectures on the nature of the spatial coupling. For example, Kok et al. suggested a model of the PLC effect in polycrystals in which spatial coupling appears due to the incompatibility of plastic strains between differently oriented adjacent grains, provided that dislocation glide is considered in the various slip systems characteristic of fcc metals [140]. This model described the salient spatiotemporal features of the PLC effect, as well as the power-law and chaotic dynamical regimes. The entirety of these results proves that the dynamics of the PLC effect is essentially determined by two factors: the negative strain-rate sensitivity, stemming from DSA and leading to instability of the homogeneous plastic flow, and strain heterogeneity, which tends to disappear due to spatial coupling but is recurrently resumed because of the instability.
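A one-dimensional sketch of this class of models is given below. Rather than inverting the N-shaped characteristic, it uses a McCormick-type formulation in which DSA enters through an aging-time state variable, a common regularization in the PLC modeling literature; all parameter values are illustrative assumptions. Each cell obeys an Arrhenius-type flow rule, cells are coupled through the second gradient of strain as in Eq. 1.5, and the stress follows the machine equation.

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted to a specific material)
N, L = 100, 30e-3                  # number of cells, gauge length (m)
dx = L / N
S0, sDSA, Omega, tau_n, p = 2.0, 30.0, 1e-4, 5.0, 2.0 / 3.0  # MPa, MPa, -, s, -
h, c = 300.0, 1e-4                 # hardening modulus (MPa), coupling (MPa*m^2)
M, edot_a, edot0 = 3e4, 1e-4, 1e-8
dt, nsteps = 1e-3, 200_000

eps = np.zeros(N)                  # local plastic strain
ta = np.full(N, 100.0)             # aging time of solute clouds (McCormick)
sigma = 0.0
rng = np.random.default_rng(0)
eps += 1e-5 * rng.random(N)        # small heterogeneity to seed band nucleation
stress = np.empty(nsteps)

for k in range(nsteps):
    lap = (np.roll(eps, 1) - 2 * eps + np.roll(eps, -1)) / dx**2
    lap[0] = lap[-1] = 0.0         # free-end boundary conditions
    pin = sDSA * (1.0 - np.exp(-(ta / tau_n) ** p))      # DSA pinning stress
    s_eff = sigma - h * eps - pin + c * lap
    edot = edot0 * np.exp(np.clip(s_eff / S0, -50, 12))  # clipped flow rule
    eps += edot * dt
    ta += (1.0 - ta * edot / Omega) * dt                 # aging kinetics
    ta = np.maximum(ta, 0.0)
    sigma += M * (edot_a - edot.mean()) * dt             # machine equation
    stress[k] = sigma
# `stress` develops serrations once the negative SRS supplied by the aging
# term overcomes S0, while `eps` localizes into band-like profiles.
```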
Twinning

Mechanisms and dynamics

In some cases plastic deformation may proceed by twinning. The term "twinning" describes a situation where two crystals with the same crystal lattice but different orientations adjoin each other and are separated by an interface. The crystallography of twins is described in various manuals [START_REF] Hirth | Theory of dislocations[END_REF][START_REF] Friedel | Dislocations[END_REF]. A simple scheme is presented in Fig. 1.11, where the open circles denote the atomic positions in a perfect crystal lattice and the black circles show atoms in twinned positions, so that the upper half of the crystal is a mirror reflection of the original lattice. It is noteworthy that although the shear of the two parts of the crystal is macroscopic, this scheme shows that it can be realized by shifting each atom over only a part of the interatomic distance. Such a shift in one atomic plane may be produced by the displacement of a partial dislocation with a Burgers vector smaller than a translation vector of the lattice. In contrast, dislocation glide involves the motion of perfect dislocations, which shift the lattice by a complete translation vector and do not create an interface. As the energy of a dislocation is proportional to b², a partial dislocation carries less energy and thus seems to provide a favorable mechanism of plasticity. However, the interfaces occurring during deformation of real crystals are not perfect and their creation requires much energy. Consequently, twinning requires high local overstresses and occurs when dislocation glide is impeded for some reason. For example, it is often observed at low temperatures, even in materials with high symmetry, e.g., face-centered cubic and body-centered cubic crystals, which possess many slip systems [41]. The low-temperature twinning is usually explained by a faster increase of the flow stress for perfect dislocations than for partial dislocations. Furthermore, twinning is important in materials with a limited number of slip systems, such as hexagonal close-packed crystals [START_REF] Mathis | Acoustic Emission, chapter Exploring Plastic Deformation of Metallic Materials by the Acoustic Emission Technique[END_REF]. It should be underlined that twinning and dislocation glide are interdependent processes. On the one hand, pile-ups of dislocations provide the internal stress concentration necessary for twin nucleation. On the other hand, twinning locally changes the orientation of the crystal lattice and makes it more favorable for slip [144]. Deformation (or mechanical) twins occur in the form of thin layers confined by two boundaries. In this sense, there is a similarity between twinning and unstable plastic flow, because both are associated with localized shear. Various twinning mechanisms were suggested in the literature. Perhaps the best known is the Cottrell-Bilby pole mechanism, which considers the formation of a twin layer due to the rotation of a twinning dislocation around a fixed point, so that the dislocation shifts with each turn to an adjacent crystallographic plane [START_REF] Hirth | Theory of dislocations[END_REF]. Among other proposed mechanisms are the breakaway of partial dislocations from sessile dislocation configurations, the nucleation of partial dislocations due to splitting and double cross-slip of dislocations near dislocation pile-ups [START_REF] Hirth | Theory of dislocations[END_REF], an autocatalytic mechanism of twin nucleation similar to the martensite transformation [145], and so on.
Even this incomplete list shows that different models often aim at describing different aspects of the process, which is likely to include several mechanisms. Indeed, numerous experimental observations proved that the nucleation of twinning dislocations and their motion leading to the formation of interfaces is a very fast process (the twin tip velocity can reach hundreds of meters per second) [146,147,41], in agreement with the need for high local overstresses for its triggering. This can hardly be explained by the pole mechanism, which can, however, account for the subsequent twin widening with velocities of about 10^{-3} cm/s [41]. Twinning is known to be accompanied by high-amplitude discrete AE and, therefore, it offers another model system for the present study. Moreover, as it involves fast nucleation, multiplication, and motion of dislocations, twinning possesses features of avalanche processes. In particular, it can give rise to serrations on deformation curves. Such jerky flow occurs as a rule at low temperatures [40] but is also observed at room temperature [82]. Finally, it is generally suggested that the fast twin nucleation process is responsible for the occurrence of AE during twinning, whereas the slow twin growth is virtually silent [START_REF] Mathis | Acoustic Emission, chapter Exploring Plastic Deformation of Metallic Materials by the Acoustic Emission Technique[END_REF]. In the present thesis twinning was studied using polycrystalline Mg alloys with hcp crystal structure. There are three main slip planes in Mg (Fig. 1.12): basal $(0001)$, prismatic $\{1\bar{1}00\}$, and first-order pyramidal $\{10\bar{1}1\}$, which provide a total of four independent slip systems. According to the von Mises criterion [148], the plastic deformation of polycrystals necessitates the operation of five independent slip systems. The dislocations in the second-order pyramidal plane have a very large Burgers vector and their activation was only observed above 200°C [149]. Consequently, twinning is an important deformation mode of Mg alloys at room temperature. The main twinning system in a Mg crystal deformed by tension is $\{10\bar{1}2\}\langle 10\bar{1}1\rangle$, which provides extension along the c-axis of the hexagonal lattice. However, as the polycrystalline grains are differently constrained, twins were also observed in other planes, mainly $\{10\bar{1}1\}$ and $\{10\bar{1}3\}$ [150].

Figure 1.12: Slip systems in the hexagonal close-packed lattice. The prismatic and first-order pyramidal planes are hatched. The second-order pyramidal plane is outlined by a dash-and-dot line.

AE studies

Acoustic emission in single crystals and polycrystals of magnesium and its alloys was studied in a number of works (e.g., [151,[START_REF] Heiple | [END_REF]152,153,154]). However, all these papers were devoted to the investigation of the overall AE behavior as a function of the material and the experimental conditions: material composition, orientation of single crystals, strain rate, and temperature. It was proven that both deformation twinning and dislocation glide contribute to the AE. To our knowledge, no statistical investigation was carried out on this material, except for the paper [155] included in this thesis. Several works on the statistics of AE in other hexagonal metals (cadmium and zinc) appeared recently [82,23]. Figure 1.13 displays examples of stress-strain curves for Zn-0.08%Al single crystals with different orientations with respect to the tension direction. The deep serrations observed on the deformation curves testify to intense twinning in these crystals.
Usually, there are too few stress drops before fracture, so that a statistical analysis of their amplitudes is hardly possible. In contrast, such materials manifest strong AE, the statistics of which was studied in the cited papers. The AE amplitude distributions were found to follow a power law with an exponent ranging from -1.4 to -1.6. This β-value is similar to the data for ice single crystals deformed by basal glide [20]. Moreover, no difference was found in [82] between the distributions obtained for the stage of easy basal glide and for the subsequent stage characterized by intense twinning, as illustrated in Fig. 1.14 for a Cd single crystal. This result testifies that, from the viewpoint of complex dynamical systems, twinning and dislocation glide may belong to the same universality class. Another interesting result consists in the observation of two distinct AE waveforms, depicted in Fig. 1.15. This result will be discussed in § 5.2.3 in relation to the present study. However, some aspects are noteworthy here. Based on a similarity with the data obtained on sapphire single crystals [156], the authors supposed that the abrupt events (upper figure) are generated by dislocation avalanches and the complex patterns (bottom figure) are indicative of twinning. In the latter case, the initial high-frequency burst is supposed to correspond to the abrupt twin nucleation, and the subsequent low-frequency behavior is associated with the twin growth. The last statement contradicts the above-mentioned opinion that the twin growth generates virtually no AE [START_REF] Mathis | Acoustic Emission, chapter Exploring Plastic Deformation of Metallic Materials by the Acoustic Emission Technique[END_REF]. On the other hand, by analyzing the evolution of the ratio between the AE peak amplitude and the AE root-mean-square amplitude (Fig. 1.16), which takes on higher values for short events, the authors found that such events indeed dominate during basal slip, whereas the second type of waveform becomes preponderant when twinning occurs (see Fig. 1.16). Whatever the exact nature of each of these types of events, their separation allowed for the important observation that they mutually trigger each other. It can be concluded that twinning and dislocation glide are interdependent processes not only on the macroscopic time scale, associated with work hardening, but also on a mesoscopic scale related to avalanche deformation processes.

AlMg alloys. Tensile tests were carried out using aluminium alloys of the 5000 series with 5 wt.% and 3 wt.% of Mg. The average grain size in these two kinds of alloys was about 4-6 µm and 30 µm, respectively (Fig. 2.1). Specimens with a dog-bone shape and gauge parts of 30 × 7 × 1 mm³ (Al5Mg) or 25 × 6.8 × 2.5 mm³ (Al3Mg) were cut from cold-rolled sheets in two orientations, along and across the rolling direction, and tested either in as-rolled conditions or after annealing for 2 h at 400°C, followed by water quenching. Such heat treatment is well known in the literature. It is aimed at the dissolution of second-phase inclusions and at obtaining a virtually homogeneous solid solution with a uniform distribution of magnesium atoms [157]. For both alloys, it led to partial recrystallization and resulted in a grain size about twice as large as the initial one (Fig. 2.2).

Mg alloys. A thorough study was realized using MgZr alloys, which are known to be prone to twinning. Samples with different Zr content were taken, since the Zr content strongly influences the grain size.
Specimens with a rectangular cross-section of 5 × 5 mm² and a gauge length of 25 mm were prepared from material with 0.04 wt%, 0.15 wt%, and 0.

Mechanical testing

The present work presents data obtained in tensile tests. The AlMg samples were deformed in a highly sensitive screw-driven Zwick/Roell 1476 machine controlled by the software package testXpert. Either a 10 kN or a 100 kN load cell was used, depending on the specimen cross-section, in order to warrant the maximum resolution. The tests were carried out at a constant crosshead speed V, i.e., in the hard-machine configuration (the machine stiffness was approximately 10⁷ N/m). This loading mode is known to be characterized by complex deformation curves and acoustic emission, for which various nontrivial dynamical regimes have been found (see Chapter 1) [12,[START_REF] Lebyodkin | [END_REF]82]. It is also worth recalling the analogy between the PLC effect in a hard machine and the dynamical "stick-slip" models of earthquakes, which was suggested in several works [START_REF] Lebyodkin | [END_REF]6] and adds interest to the analysis of AE from the general viewpoint of the investigation of avalanche-like processes. Since the PLC effect displays various types of behavior depending on the imposed strain rate and/or temperature, V was varied in a wide range corresponding to nominal applied strain rates (referred to the initial specimen length) from ε̇_a = 2 × 10⁻⁵ s⁻¹ to 2 × 10⁻² s⁻¹. All tests were performed at room temperature. The choice of the acquisition time for recording the stress-time curves was a compromise between the time resolution and the resulting data file size.

Acoustic emission measurements

Since the present work was aimed at investigating the intrinsic structure of the global AE signals generated in the deformed sample, it was possible to use relatively short specimens and a single acoustic sensor in each experiment. Spatial localization of the sources of AE events, i.e., mapping the sites where local plastic processes take place, will be a task for future work. The AE recording equipment utilized in the present study was based on systems which allow continuous sampling of the AE signal arriving from the piezoelectric transducer. This made possible a comprehensive post-processing of both the complete stored signal and the waveforms of individual acoustic events. Since such continuous recording results in huge data files, the entire data stream was only gathered for high enough strain rates, ε̇_a > 2 × 10⁻⁴ s⁻¹. The signal was recorded piecewise in the slower tests. In all cases, the equipment additionally used a standard procedure of picking out acoustic events ("hits") during the entire test by applying preset criteria (without data streaming), as described below (see § 2.4). In the experiments on AlMg and MgZr alloys, the signal from the transducer was pre-amplified by 40 dB.

Individualization of acoustic events

As follows from the discussion of the possible overlapping of avalanche processes in § 1.4.3, the problem of the extraction of individual events from a continuous signal is very important for the application of quantitative analyses to experimental data. For example, acoustic events may either closely follow each other or be generated almost simultaneously at different locations in the sample, and thus be recorded as a single event. It is not clear a priori how such an overlap of individual events would affect the results of the statistical analysis.
On the other hand, each AE event might give rise to several echoes due to sound reflections from interfaces and, consequently, be recorded as a few separate events. Another source of errors stems from the insufficient resolution of the individual events against noise. All these factors are affected by the criteria utilized to identify the events within the acoustic signal. Up to now, the sensitivity of the apparent statistics to these criteria has not been verified experimentally, although the problem is general and concerns a vast range of dynamical systems of different nature which are characterized by depinning transitions and avalanche-like behavior. Since data streaming leads to huge data files (many hundreds of gigabytes), standard procedures are used in most AE applications to extract significant acoustic events in real time, without recording the continuous signal itself. In order to ensure the continuity of the present results with literature data, as well as the conformity of the results obtained by different methods and for different datasets, we applied the same approach to the identification of events in a continuously stored AE signal. Namely, the record is scanned and the procedure makes use of four preset parameters (Fig. 2.6):

• The threshold voltage U_0. This parameter is aimed at cutting off the part of the acoustic signal below the noise level. An event is considered to start when the signal surpasses U_0.

• The hit definition time (HDT). The event is considered to have come to an end if the signal remains below U_0 for longer than the HDT.

• The peak definition time (PDT), which determines the event peak amplitude A. Namely, the software detects the local maxima of the signal and compares them to the current value of its absolute maximum. The current global maximum is recorded as the event peak amplitude A if it has not been exceeded during a period equal to the PDT. Otherwise, it is assigned the new value and the time counting is restarted. In what follows, this parameter is set equal to half the HDT, if not stated explicitly otherwise.

• The hit lockout time (HLT), or dead time. After detecting the end of an event, no measurement is performed during the HLT in order to filter out sound reflections. The HLT is triggered at the end of the HDT. Consequently, the sum of HDT and HLT represents the minimum time between the end of one event and the start of the next one.

Besides the peak amplitude A, the following characteristics were determined for each event:

• The duration δ;

• The dissipated energy E, computed as the integral of the squared signal amplitude over the event duration: E = \int_{t_b}^{t_b+\delta} U^2(t)\,dt;

• The count rate, defined as the number of crossings of the acoustic signal over the noise threshold.

It is obvious that the choice of U_0, HDT, and HLT may influence the identification of the AE events and, therefore, the apparent statistical distributions of their characteristics. Conventional AE studies apply the following rules of thumb to set the time parameters. In one approach, a high value is chosen for the HDT in order to include all sound reflections into the event. The HLT can then be taken small. The disadvantage of this approach is that the event duration and the related parameters, such as the AE energy, are loosely determined. In the opposite case, a small HDT is set in order to separate the hit from the sound reflections, which are then cut off by choosing a large HLT. An illustration of such an extraction procedure is sketched below.
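The following Python sketch is a simplified reading of the procedure described above (function and variable names are hypothetical); it extracts hits from a continuously sampled record u with sampling frequency fs, using U_0, HDT, and HLT, while the PDT handling is reduced to taking the maximum over the event.

```python
import numpy as np

def extract_hits(u, fs, U0, HDT, HLT):
    """Individualize AE events ("hits") in a continuously sampled signal.

    u   : array of transducer voltages
    fs  : sampling frequency (Hz)
    U0  : noise threshold (same units as u)
    HDT : hit definition time (s), maximal sub-threshold gap inside one event
    HLT : lockout (dead) time (s) applied after each event end

    Returns a list of (t_start, duration, peak_amplitude, energy) tuples.
    """
    above = np.abs(u) > U0
    hdt_n, hlt_n = int(HDT * fs), int(HLT * fs)
    hits, i, n = [], 0, len(u)
    while i < n:
        if not above[i]:
            i += 1
            continue
        start = i
        last_above = i
        while i < n and i - last_above <= hdt_n:   # event lasts while gaps < HDT
            if above[i]:
                last_above = i
            i += 1
        end = last_above + 1                       # event ends at last crossing
        seg = u[start:end].astype(float)
        energy = np.sum(seg ** 2) / fs             # ~ integral of U^2 over time
        hits.append((start / fs, (end - start) / fs,
                     np.max(np.abs(seg)), energy))
        i = end + hdt_n + hlt_n                    # HDT + HLT dead time
    return hits
```

For real-time equipment the peak is tracked via the PDT; for post-processing of a stored record, the maximum over the event gives the same value whenever the PDT exceeds the rise time of the event.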
The small-HDT/large-HLT method is not free from drawbacks either, because it results in the loss of a part of the useful signal, particularly the "aftershocks" that may follow the initial plastic event [82]. In any case, the criteria of "smallness" or "largeness" are approximate. These difficulties justify the need for an investigation of the influence of the parameters of event individualization on their statistics, which was undertaken in the present thesis.

Chapter 3

Data analysis

Statistical analysis

The statistical analysis proved to be rather useful for studying plastic deformation because of the numerous examples of intermittent deformation behavior, which is associated with non-Gaussian, power-law statistical distributions of plasticity processes (see Chapter 1). The statistical analysis of AE accompanying various physical processes is usually applied to the event amplitudes (e.g., [57,82,[START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF]) because the duration and other allied characteristics, in particular the dissipated energy obtained by integration over the AE event duration, do not properly reflect the properties of the source signal. Indeed, the time characteristics may be affected by the transfer function of the piezoelectric transducer, reflections and interference of sound waves, and so on. An additional reason for such a restriction in the case of the PLC effect is the merging of acoustic events during stress serrations, which was conjectured in [START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF] and will be justified by explicit proofs in the present thesis. It is found that the merging of events results in bursts in their apparent durations, whereas the amplitude range remains essentially unchanged. For these reasons, most attention will be paid to the analysis of the amplitude distributions. Two approaches to the statistical analysis of AE amplitudes are reported in the literature. The first approach is based on the qualitative analogy between the plastic deformation of solids and seismic processes. Analysis of a large number of seismic records proved that earthquake statistics obey the Gutenberg-Richter relationship between the magnitude M, a characteristic roughly corresponding to the logarithmic peak amplitude of AE events, and the number N of earthquakes greater than or equal to magnitude M: \log_{10} N = a - bM [61]. Here, the seismic b-value characterizes the power-law scaling. As is readily seen, a disadvantage of such a direct projection of the methods of seismology onto the study of AE during plastic deformation is that power-law behavior is expected a priori. Another approach to the analysis of AE statistics was proposed by Weiss et al., who studied plastic deformation in pure materials (e.g., [20]). The authors argued that the squared peak amplitude A of an acoustic wave provides a physically based measure of plastic activity, reflecting the energy E dissipated by the plastic processes in the deforming sample: E ∼ A². Both approaches were used in the present thesis and led to consistent results. In what follows, the results obtained by the latter method are presented. The following procedure was used to compute the distributions of (squared) amplitudes and durations of acoustic events. For a given quantity x, its probability density function P(x) is estimated as

P(x) = \frac{1}{\delta x}\,\frac{\delta N(x)}{N},

where N is the total number of data in the statistical sample and δN(x) the number of events with x-values in the interval [x - δx/2, x + δx/2]. To handle the statistics of rare events, e.g., events with high amplitude, the method of variable bin sizes was used. As a rule of thumb, when an initial-size bin contains less than a preset minimum number of events, it is merged with the next bins until this minimum number is reached (cf. [START_REF] Lebyodkin | [END_REF]6]). In the present work, this minimum was chosen equal to 5. In the cases when the data were distributed according to a power law in a certain interval of values, the power-law exponent was determined as the slope of the corresponding linear dependence in double-logarithmic coordinates using the least-squares method.
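A compact implementation of the variable-bin density estimate and the least-squares exponent described above might look as follows (Python sketch; the minimum bin occupancy of 5 follows the text, while the initial log-spaced binning is an assumption):

```python
import numpy as np

def variable_bin_pdf(x, nbins=40, min_count=5):
    """Probability density on log-spaced bins, widening sparse bins to the
    right until each kept bin holds at least `min_count` events."""
    x = np.sort(np.asarray(x, dtype=float))
    edges = list(np.logspace(np.log10(x[0]), np.log10(x[-1]), nbins + 1))
    merged = [edges[0]]
    for e in edges[1:]:
        lo = merged[-1]
        if np.count_nonzero((x >= lo) & (x < e)) >= min_count or e == edges[-1]:
            merged.append(e)       # close the bin; otherwise keep widening it
    edges = np.array(merged)
    counts, _ = np.histogram(x, bins=edges)
    dens = counts / (np.diff(edges) * x.size)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = dens > 0
    return centers[keep], dens[keep]

def powerlaw_exponent(x, fit_range=None):
    """Least-squares slope of log P vs. log x, as used for beta in the text."""
    c, p = variable_bin_pdf(x)
    if fit_range is not None:
        m = (c >= fit_range[0]) & (c <= fit_range[1])
        c, p = c[m], p[m]
    slope, _ = np.polyfit(np.log10(c), np.log10(p), 1)
    return slope

# e.g. beta = powerlaw_exponent(A**2) for the squared peak amplitudes of AE events
```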
Recently, a new generalized method, based on the Monte Carlo technique, was proposed to evaluate the closeness of experimental statistics to a power law [158]. Test calculations showed that the evaluation of the exponent and its uncertainty by these two methods provides similar results. In order to examine the strain dependence of the power-law distributions, the time intervals where both the AE and the deformation curves look visually steady were first searched for. This partition was then refined by varying the width of the intervals and repeating the calculation of the distribution functions until finding intervals where the magnitudes of the exponents remained constant within the error determined by the least-squares method.

Fourier analysis

Another well-known fundamental tool for processing time series is the Fourier transformation, which allows determining the frequency spectrum by decomposing the analyzed signal f(t) into a sum of harmonic functions {e^{ikt}}:

f(t) = \sum_{k=-\infty}^{+\infty} f_k e^{ikt},   (3.1)

where f_k is the contribution of the frequency k, given by the following relationship:

f_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t)\, e^{-ikt}\, dt.   (3.2)

For the numerical analysis of discrete time series, the respective discrete Fourier transformation (DFT) reads:

F_m = \sum_{n=0}^{N-1} f_n\, e^{-\frac{2\pi i}{N} mn},   (3.3)

where m = 0, 1, ..., N - 1. Equation (3.3) translates the mathematical meaning of the transform, but direct computation using this definition is often too slow. The fast Fourier transformation (FFT) is usually applied in practice [159]. In the thesis, the MATLAB realization of this method was utilized to determine the spectra of the typical AE waveforms observed in the experiments. This analysis is particularly efficient when the analyzed signal is composed of several harmonics. Then the output of the computation provides their frequencies and intensities. However, FFT applications are not limited to these cases. In fact, the Fourier transformation often provides important pieces of information on the nature of complex signals. For example, white noise possesses a continuous spectrum where all harmonics have the same amplitude. The spectra of deterministic signals describing non-periodic motion are also continuous; however, various harmonics give different contributions. For example, the spectrum of a chaotic signal typically decreases with increasing frequency, with broad peaks or narrow lines often superposed on this background [START_REF] Kubin | Dislocations in Solids[END_REF]. In the case of SOC, the continuous spectrum is described by a power law S(f) ∼ 1/f^α, with the exponent α in the range between 1 and 2 [44].
It is worth noting that in practice these idealized spectral patterns are perturbed by the contribution of noise, whose reduction is often a difficult task. In addition to the FFT analysis of the AE waveforms, Fourier spectral analysis was also applied to the continuously recorded signal in order to examine the evolution of the emitted energy and the characteristic frequency of the signal. The details of the procedure can be found in the original paper by Vinogradov [160]. The data set was divided into half-overlapping windows of 4096 data points. After subtraction of the laboratory noise, which was recorded before each test, the power spectral density function P(f) of each subset was computed using FFT, and two characteristics were obtained from P(f): the sound "energy"

E = \int_{f_{min}}^{f_{max}} P(f)\, df   (3.4)

and the median frequency f_{med}, defined by

\int_{0}^{f_{med}} P(f)\, df = \int_{f_{med}}^{\infty} P(f)\, df.   (3.5)

The so-obtained time dependences of the energy and the median frequency were smoothed over ten to a thousand points, depending on the applied strain rate. As argued in [160], the variations in the average energy and median frequency reflect the variations in the degree of correlation in the motion of the dislocations generating the AE.

Multifractal analysis

The multifractal (MF) analysis [15] has been widely used to detect self-similarity in natural systems with complex dynamics. Its applications concern a wide range of problems; examples in the field of materials science, although less numerous, can also be cited. In particular, it was successfully used to characterize the serrated deformation curves under conditions of the PLC effect [84,[START_REF] Lebyodkin | Multifractal analysis of unstable plastic flow[END_REF]. The approach described in [START_REF] Lebyodkin | Multifractal analysis of unstable plastic flow[END_REF] is adopted in this thesis. A short outline of the method, necessary for understanding the data analysis, is given below. A more detailed description, including the basic concepts of fractal dimensions, is presented in Appendix 1. The application of the MF analysis requires defining a so-called local probability measure. This quantity is constructed to characterize the intensity of the signal locally, in a time window, and allows detecting a scaling law when the window width is varied. As the papers [84,106,[START_REF] Lebyodkin | [END_REF] dealt with other kinds of time series, in particular with stress fluctuations, one issue to be discussed here concerns the choice of the measure for the analyzed signals. In the present case, two kinds of time series were dealt with: the raw AE signal U(t) measured "continuously" with a given sampling time dt, i.e., at t = j·dt (j is the serial number), and the series of amplitudes of AE events, A_j. Note that in the latter case the index j does not order the recorded data points but the extracted AE events, characterized by their amplitude and time of occurrence. The same calculation procedure was utilized in both cases. In what follows, both time series will be designated as ψ_j (j = 1...N, where N is the total number of data points in the respective series). The calculations were performed using the fixed-size box-counting algorithm [START_REF] Falconer | Fractal Geometry, Mathematical Foundations and Applications[END_REF]. For this purpose, the analyzed time interval T is covered by a grid with division δt, and the normalized measure of the i-th interval is defined as

\mu_i(\delta t) = \frac{\sum_{k=1}^{n} \psi_k}{\sum_{j=1}^{N} \psi_j},   (3.6)

where the sum in the numerator runs over the n data points falling in the i-th interval.
Using this definition for the measure, the value of δt is varied and the scaling of the partition functions

Z_q(\delta t) = \sum_i \mu_i^q, \quad q \neq 1; \qquad Z_1(\delta t) = \sum_i \mu_i \ln\mu_i, \quad q = 1,   (3.7)

is studied for several real q-values. It should be noted for clarity that the series A_j do not fill the time interval continuously. Obviously, only nonempty intervals, corresponding to nonzero measure, are present in these sums. As can easily be shown (e.g., [START_REF] Lebedkina | [END_REF]), in the trivial case of self-similarity of a constant signal, Z_q is proportional to δt^{q-1} and the dependences log(Z_q)/(q - 1) vs. log(δt) are represented by straight lines with the same slope, equal to 1, for all q ≠ 1 (Z_1 ∝ log(δt) for q = 1). A purely stochastic or a periodic signal tends on average to this case, due to homogeneously filling the time axis above a certain characteristic scale (the magnitude of the period or the average spacing). A fractal signal would also be characterized by a unique slope. However, its value, which is called the fractal dimension, differs from unity. As insignificant as it may seem, this quantitative difference in the scaling law reflects very complex behavior. For multifractal objects, most often met in real complex systems, the slopes of the straight lines depend on q. In this case, the following relationships are fulfilled:

Z_q(\delta t) \propto \delta t^{(q-1)D(q)}, \quad q \neq 1; \qquad Z_1(\delta t) = D(1)\,\ln\delta t,   (3.8)

where D(q) are the generalized fractal dimensions (the designation D_q will also be used when needed to facilitate reading). Besides the spectra of generalized dimensions, an equivalent description in terms of singularity spectra f(α) [169] was used as well. Here, the singularity strength α (the Lipschitz-Hölder index) describes one more feature of self-similar structures, namely the singular behavior of the local measure, expressed as a scaling law:

\mu_i \sim \delta t^{\alpha}.   (3.9)

For multifractal objects, the exponent α can take on a range of values corresponding to different regions of the heterogeneous object analyzed. The f-value gives the fractal dimension of the close-to-uniform subset corresponding to singularity strengths between α and α + dα. By representing a heterogeneous object as consisting of interpenetrating fractal subsets, this description clarifies the physical meaning of the MF analysis. At the same time, the singularity spectra are more difficult to calculate. For the sake of clarity, further details are deferred to Appendix 1. Essentially, the MF analysis allows uncovering the presence of correlations between both the amplitudes and the times of occurrence of events in the signal through their scaling behavior, and characterizing the heterogeneity of the scaling properties over a range of subsets of events. Focusing on a given subset is achieved through the choice of q: for example, large positive q-values tend to select large measures in the partition function, while large negative q-values highlight small events. Variation of q over a set of real numbers thus provides a continuous characterization of the heterogeneity, a property known as a "mathematical microscope". A wide spectrum of the values of D, f, or α indicates a substantial shift in the correlation characteristics between large events on the one hand and small events on the other hand. Note that most natural fractals are multifractals, since (homogeneous) fractality is a more demanding property than (heterogeneous) multifractality.
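A minimal sketch of the fixed-size box-counting estimate of D(q) described by Eqs. (3.6)-(3.8) is given below (Python; the dyadic range of window widths and the function names are assumptions made for illustration).

```python
import numpy as np

def generalized_dimensions(times, weights, qs, n_scales=12):
    """Estimate D(q) from partition functions Z_q(dt) on dyadic window widths.
    `times` are event occurrence times, `weights` the measure carried by each
    event (e.g. AE amplitudes); empty boxes are discarded as in Eq. (3.7)."""
    T = times.max() - times.min()
    t = times - times.min()
    w = weights / weights.sum()                    # normalized measure
    scales = T / 2 ** np.arange(2, 2 + n_scales)   # window widths dt
    logZ = {q: [] for q in qs}
    for dt in scales:
        idx = np.minimum((t / dt).astype(int), int(T / dt))
        mu = np.bincount(idx, weights=w)
        mu = mu[mu > 0]                            # nonempty intervals only
        for q in qs:
            if q == 1:
                logZ[q].append(np.sum(mu * np.log(mu)))
            else:
                logZ[q].append(np.log(np.sum(mu ** q)) / (q - 1.0))
    x = np.log(scales)
    # Slopes of log Z_q/(q-1) vs log dt (Z_1 itself vs log dt) give D(q)
    return {q: np.polyfit(x, np.array(logZ[q]), 1)[0] for q in qs}

# D = generalized_dimensions(event_times, amplitudes, qs=np.arange(-5, 6))
# A q-dependent D(q) signals multifractal correlations in the event series.
```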
In the case of real signals, the above scaling laws are satisfied only in bounded ranges of δt. Indeed, experimental data always possess characteristic scales implied by the size of the system, the dimensions of its structural elements, the length of the time interval analyzed, the resolution of the equipment, the experimental noise, and so on. In order to reduce the noise, two approaches were applied. First, test calculations showed that the MF analysis of an entire AE signal leads to stochastic-type behavior. By applying a threshold U_0 and gradually increasing its height, it was found that multifractal behavior can already be detected for the data exceeding 24 dB, in spite of the incomplete suppression of the noise. This threshold level was usually applied. A complementary method of noise reduction made use of the wavelet analysis described in the next section. The MF spectra were plotted by varying q over a large interval.

Wavelet analysis

The wavelet transformation is a powerful tool of signal processing, whose principles and numerical implementation have been described in detail in many books and original papers (e.g., Refs. [START_REF] Daubechies | Ten Lectures on Wavelets[END_REF][START_REF] Percival | Wavelet Methods for Time Series Analysis[END_REF]). Only the basic concepts, which are of relevance to the subsequent analysis, will be recapitulated in this section. The wavelet transformation decomposes a signal into constituent parts with different frequencies and evaluates the contribution of each part on different time scales [START_REF] Daubechies | Ten Lectures on Wavelets[END_REF]. It is defined through the relationship

W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt,   (3.10)

where a is a scale factor, b is a time shift, and ψ is a soliton-like function, known as the "mother" wavelet, which satisfies the condition

\int_{-\infty}^{\infty} \psi(t)\, dt = 0.   (3.11)

The computation of the transform can be summarized as follows:

1. Take a wavelet and compute the coefficient from relationship (3.10).
2. Shift the wavelet to the right and repeat step 1 until the whole signal is covered.
3. Scale (stretch) the wavelet and repeat steps 1 through 2.
4. Repeat steps 1 through 3 for all scales.

In the present work, the MATLAB discrete wavelet transformation (DWT) based on dyadic scales was used. It is applied to series with a length equal to a power of 2, and the wavelets are stretched, or dilated, by a factor of 2. At each level of the analysis, the analyzed signal is decomposed into two parts: the approximation (low-frequency components) and the details (high-frequency components), as shown schematically in Fig. 3.2. The decomposition can be iterated, so that the signal becomes represented as a sum of many components with specified sets of frequencies. It is worth noting that for many real signals, the low-frequency content is the most essential. This remark concerns, for example, the human voice. Indeed, although a voice cleared of the high-frequency components sounds different, the content of the speech remains recognizable. On the contrary, the removal of the low-frequency components results in a voice sounding like noise. Thus, the wavelet transformation, by filtering out the high-frequency components, may uncover the inherent elements of the signal.
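As an illustration of this use of the DWT for noise reduction, the following Python sketch relies on the PyWavelets package; the wavelet family, decomposition depth, and the universal soft-thresholding rule are assumptions chosen for illustration, not a description of the exact scheme used in the thesis.

```python
import numpy as np
import pywt

def wavelet_denoise(u, wavelet="db4", level=6):
    """Suppress high-frequency noise by soft-thresholding the detail
    coefficients of a dyadic DWT and reconstructing the signal."""
    coeffs = pywt.wavedec(u, wavelet, level=level)
    # Robust noise-level estimate from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(u)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(u)]

# u_clean = wavelet_denoise(raw_record)  # then feed u_clean to the MF analysis
```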
Chapter 4

Role of superposition on the statistics of AE events

In this Chapter, the effect of the parameters of individualization of acoustic events on their statistics is examined using various materials whose plastic deformation is accompanied by strong acoustic activity and controlled by different microscopic mechanisms: a combination of twinning and dislocation glide in hexagonal MgZr alloys, and the PLC effect in face-centered cubic AlMg alloys. These materials present a particular interest for the analysis because, on the one hand, as pointed out in Chapter 1, both the waveforms of the individual acoustic events and the overall AE behavior have different signatures in the cases of twinning and dislocation glide [82,[START_REF] Vinogradov | [END_REF]. On the other hand, recent investigations reveal a persistent power-law character of the amplitude statistics of AE under different experimental conditions, although the corresponding exponent may vary with the conditions. Such robustness allows analyzing the sensitivity of the experimental estimates to the event identification criteria. A detailed analysis of the AE statistics in relation to the relevant deformation processes will be the scope of the next chapters. To study the effect of the parameters applied to individualize acoustic events on their apparent statistics, the continuous acoustic signal was recorded using the "data streaming" technique. As described in Chapter 2, in order to ensure the conformity of the results obtained by different methods and for different datasets, the identification of events in the continuously stored AE signal was designed to mimic the standard procedure used by the acoustic system to extract AE hits in the course of the measurement. Each of the three parameters, U_0, HDT, and HLT, was varied in a wide range of magnitudes while keeping the other parameters constant. The entirety of the experiments showed that for all samples, and similar to literature data on various materials (e.g., [152,174,124]), a nonstationary AE activity is observed at small strains, in the region of the elastoplastic transition. The activity further decreases and displays roughly stationary behavior, which allows performing the statistical analysis in a steady-state range. The conformity of the statistics for the events which are either extracted from the continuously recorded signal or detected by the equipment using preset parameters was first verified, as illustrated in Fig. 4.1. The time intervals where the statistical analysis was performed for this specimen were selected for t > 120 s. An example of the comparison of the calculated statistical distributions is shown in Fig. 4.1(c). In this test, the threshold U_0 was chosen equal to 16.48 mV for the continuously measured signal, which corresponds to the logarithmic threshold of 45 dB in the case of real-time measurement. The time parameters were set at HDT = 800 µs and HLT = 100 µs. Importantly, despite the discreteness of the logarithmic measure allowed by the standard procedure, the series of events selected with the aid of the two different methods coincide with high accuracy. Furthermore, it can be seen in Fig. 4.1(c) that both methods result in power-law dependences over more than three orders of magnitude of A² and give close values for the exponent β, determined as the least-squares estimate of the slope of the dependences in double-logarithmic coordinates: β = -1.80 ± 0.03 and β = -1.78 ± 0.03 for the "continuous" and "discrete" methods, respectively.
This verification justifies the application of the continuously stored signals to the investigation of the effect of the conventional parameters of identification of AE events on their statistics. We further proceed to a detailed examination of their role for the two kinds of alloys.

MgZr alloys

Using families of such curves, the dependences of β on U_0, HDT, and HLT were traced over large ranges of variation of each parameter. It can be recognized that β depends relatively strongly on U_0 only in a narrow range U_0 < 10 mV (even there, all changes in β do not exceed 0.2). The rate of the dependence falls with increasing U_0. A weak or no dependence (within the error bars) is observed for U_0 in the range from 10 mV to the maximum value of 90 mV, above which the amount of data becomes too small for the statistical analysis. Most probably, the initially higher β-value is explained by the merging of successive AE events when U_0 is low, so that the fraction of smaller events is decreased and the apparent power law is flatter than the true one. Indeed, such merging can take place for low U_0 values if the individual AE hits are linked to each other by the presence of a (quasi)continuous background, e.g., noise, as illustrated in Fig. 4.4. By admitting this conjecture, the end of the fast changes in β can be explained by U_0 exceeding the continuous background level, so that the AE events become essentially isolated. The merging of such isolated hits is weaker and depends on the relationships between all the identification parameters and the temporal arrangement (clustering) of the hits, so that some further slight (if any) evolution of β with increasing U_0 may be attributed to the diminishing merging of successive events. This suggestion is consistent with the changes observed when the HDT is reduced from 800 µs to 50 µs: since less effective overlapping of successive hits is expected for the lower HDT, the initial fall in the dependence becomes sharper and β quickly saturates at an approximately constant value. The weak dependences which can be detected in the almost saturated region may also be due to the influence of statistics depletion. In any case, an important conclusion following from this figure is that the observed effect is weak in a wide U_0-range. Furthermore, for each HDT, a threshold value can be found above which its influence is insignificant. Another interesting observation is that whereas two of the three kinds of samples display close β-values, the curve obtained for the material with the largest grain size lies apart. In spite of this quantitative difference, all the curves have the same shape described above. Thus, although the different power law indicates a different specific structure of the AE signal in Mg0.04%Zr, a similar effect of U_0 is found. Such robustness testifies that the quantitative difference between the materials with different microstructures is not due to artefacts of the AE method but reflects physically sound changes in the correlation of the deformation processes (see, e.g., [154]). The physical consequences of this and similar observations will be discussed in the next chapters. The companion Fig. 4.3(b) represents the effect of the HDT for the same tests and for two U_0 levels: one selected just above the range of the sharp β(U_0) dependence (17 mV) and another taken in the far saturation region (67 mV). It can be recognized that the effect of the HDT is weak in both cases and almost negligible for U_0 = 17 mV.
Some variation of β in a narrow range of small HDT values can be detected for U_0 = 67 mV and is most likely due to statistics depletion. On the whole, the analysis is quite robust against HDT variation. As far as the HLT is concerned (see Fig. 4.3(c)), no significant dependence was found over the whole range studied, from 0 to 1 ms. The entirety of these results testifies that the power laws observed for MgZr alloys are robust with regard to the criteria used to select the AE events.

AlMg alloys

The case of AlMg alloys is particularly delicate for performing such an analysis because the corresponding AE commonly has a low intensity and is known to be characterized by strong overlapping and merging of neighboring hits [START_REF] Bougherira | Etude des phénomènes d'auto-organisation des ensembles de dislocations dans un alliage au vieillissement dynamique[END_REF]. On the other hand, these features allow attacking the investigated problem under conditions of very noisy signals. Additional interest in this material is caused by the presence of transitions between different types of the PLC effect, which lead to different behaviors of the instability at the macroscopic scale.

Figure 4.4: The colours show the events detected for two choices of the noise threshold U_0 and the same choice of HDT, which is deliberately taken very large. Application of a large threshold U_01 gives four separate events (magenta colour) with relatively short durations. Decreasing U_0 leads to the merging of consecutive events (blue colour). The number of detected events and the resulting stack of amplitude values do not change significantly, but the apparent durations increase drastically.

First of all, it is worth noting that the β-range found for the Al5Mg alloy (typically from -2 to -3) lies remarkably below the similar range for MgZr alloys (-1.5 to -2). This difference is much more significant than any variation in β observed upon modification of the event identification criteria. Furthermore, the β-values are almost always higher for the as-delivered specimens (circles) than for the annealed ones (squares). Taking into account that annealing leads to an increase in the average grain size, this observation is consistent with the above-mentioned grain-size effect observed for MgZr alloys. Importantly, the change in β upon annealing is usually more significant than the uncertainty of the β determination caused by the influence of the identification criteria. More specifically, the first row of Fig. 4.5 represents the β(U_0) dependences. For the high strain rate (Fig. 4.5(a)) and large HDT and HLT values (blue color), β displays behavior similar to the case of the Mg alloys, namely a fast initial drop followed by almost no dependence. The fast initial change, which was attributed above to the linking of successive AE events by the agency of a continuous background, disappears when the HDT is reduced, so that β becomes nearly constant in the entire U_0-range. It is natural to suppose that when the imposed strain rate is decreased, the measurable AE events, which require the motion of sufficiently powerful dislocation ensembles, become rarer and better separated (the on-off time ratio becomes larger). As a result, the fast initial change does not occur for lower ε̇_a (Fig. 4.5(b) and (c)). The β(U_0) dependences obtained for the lowest ε̇_a (Fig. 4.5(c)) using a large HDT show a somewhat opposite tendency: a slow increase with U_0. This trend is also consistent with the discussed framework.
Indeed, the increase in U_0 leads to discarding small events and to splitting big compound events into two or more (big) components, thus resulting in a larger fraction of big events and an apparently higher β-value. This effect is illustrated in Fig. 4.6, which demonstrates the change in the slope of the power law for the truncated distribution. It should be noted that alongside the left truncation, obviously stemming from discarding small hits, some truncation from the right is also seen. It is a common effect for a limited observation time, as it is caused by the occasional loss of rare large events when they fall within an HLT interval. The companion curves in Fig. 4.5(c) display the behavior for a reduced value of HDT and demonstrate that the increasing trend in the β(U_0) dependences is suppressed when the AE events are better separated due to a more appropriate choice of HDT. In this case, the U_0 increase only leads to (left) truncation of the distributions. The second row of Fig. 4.5 concerns the effect of the HDT. A fall in β at small HDT values is illustrated by the two upper curves in Fig. 4.5(e), which correspond to an as-delivered sample deformed at ε̇_a = 2 × 10⁻⁴ s⁻¹. This fall can be explained if one recalls that a small HDT may lead to erroneously taking one of the first local maxima of an event as its peak amplitude. This error would enhance the fraction of small events and reduce β correspondingly. Note that the strongest variations in the β(HDT) curves obtained for the MgZr alloys were observed in the same HDT range (see Fig. 4.3(b)). Thus, the value of 40 µs determines the lower limit for selecting the HDT. The subsequent growth of the exponent, more significant for the higher strain rates, is obviously due to the above-discussed effect of AE event merging. The optimum HDT value does not seem to be the same for each strain rate. However, as the β(HDT) dependences are rather weak above HDT = 40 µs, this influence can be neglected in most cases. The third row of Fig. 4.5 illustrates the effect of the HLT for two choices of HDT. As could be expected, the power-law exponent strongly depends on the HLT for small HDT values, thus confirming the above discussion of the effect of the HDT. However, for HDT ≥ 50 µs, the influence of the HLT is insignificant, similar to the MgZr case. Its effect mostly consists in the impoverishment of the statistics, which should not influence the scale-invariant distributions, provided that enough data remain in the dataset. The effect of HDT and HLT on the AE amplitude statistics was also verified in the usual tests (without data streaming) on Al3%Mg samples, in which series of acoustic events were recorded using preset parameters. In these experiments, U_0 was set at 27 dB (the value corresponding to the noise level for the free-running deformation machine) and the effect of changing the HDT and HLT by a factor of ten was checked at different strain rates. Such tests are illustrated in Fig. 4.7, which presents examples of AE amplitude statistics for three AlMg samples. All samples were deformed under the same experimental conditions, but the time settings used to detect the AE events were different. The analyzed data are normalized with regard to the average over the respective dataset, and all three dependences fall onto one master curve, except perhaps for some deviations from the power law for the largest events.
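The robustness checks above amount to sweeping the identification parameters and re-fitting the exponent. Reusing the extract_hits and powerlaw_exponent sketches introduced earlier (both hypothetical helpers), such a sweep can be written as:

```python
import numpy as np

def beta_vs_threshold(u, fs, U0_values, HDT=50e-6, HLT=100e-6):
    """Power-law exponent of the squared-amplitude distribution as a function
    of the noise threshold U0, at fixed HDT and HLT (cf. Fig. 4.3(a))."""
    betas = []
    for U0 in U0_values:
        hits = extract_hits(u, fs, U0, HDT, HLT)
        A2 = np.array([h[2] for h in hits]) ** 2   # squared peak amplitudes
        if len(A2) < 100:          # too few events for a meaningful fit
            betas.append(np.nan)
            continue
        betas.append(powerlaw_exponent(A2))
    return np.array(betas)

# Example: U0_mV = np.linspace(5, 90, 18)
#          beta = beta_vs_threshold(u, fs, U0_mV * 1e-3)
# A flat beta(U0) curve outside the lowest thresholds reproduces the
# robustness reported above.
```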
Such a collapse, together with similar results obtained for different ε̇_a, testifies that the power law observed for the Al3Mg alloys is quite robust against the variation of the event identification parameters.

Conclusion

In summary of this chapter, power-law statistics are found for the amplitudes of the AE events accompanying the plastic flow of various materials under different experimental conditions. The data obtained confirm the hypothesis that plastic deformation is an inherently intermittent, critical-type process at the scale relevant to AE [82,21,27,26]. The major result of the above investigation is that the criteria used to identify the individual AE events have only a weak influence on the apparent AE statistics. This conclusion has been verified using Mg and Al alloys, which are characterized by distinct deformation mechanisms and display different AE behaviors. A very weak influence is detected for the Mg alloys, consistent with the literature data reporting well-separated abrupt AE hits accompanying mechanical twinning. More specifically, almost no effect was discerned when the HDT and HLT were varied. A somewhat stronger, albeit still weak, effect is observed in a wide range of the voltage threshold U_0. The only case in which a relatively strong influence on the power law is found corresponds to a narrow range of the lowest U_0 values and is most likely due to the aggregation of successive AE events. A less favorable situation for the application of the AE method is found under conditions of the PLC effect, which is known to generate lasting AE events, most likely due to the merging of many hits because of the successive triggering of many dislocation ensembles during either the repetitive formation of deformation bands at low strain rates or the propagation of deformation bands at high strain rates. Nevertheless, the influence of the event identification parameters is rather weak over wide ranges of the parameters even in this case. In particular, the variations in the β-value caused by the variation of the identification parameters are typically bounded within a range of several tenths, which is much narrower than the difference (about 1 to 1.5) between the β-values observed for the AlMg and MgZr alloys. This robustness not only justifies the quantitative estimates of the critical indices but also provides an additional proof of the scale-invariant statistics of AE during plastic deformation. Indeed, one of the consequences of the variation of the event identification parameters is the cutoff of a part of the data from the entire statistical sample. The robustness of the statistics against such a cutoff is consistent with the scale invariance reflected in the power-law dependences.

Chapter 5

Multiscale study of AE during smooth and jerky flow

Since the pioneering work by Kaiser [73] in 1953, the AE technique has been used as a powerful tool to study plastic deformation in various materials. However, before the advent of the data streaming technique these studies were based on the extraction of individual acoustic events, as illustrated in the previous chapter, followed either by an analysis of characteristics averaged over a series of events or, only recently, by an analysis of their statistics.
The data streaming method has opened possibilities for investigating AE on various time scales, without prior assumptions on the structure of the signal.

Plastic instability in an Al5Mg alloy

As described in Chapter 1, very recent investigations [Bougherira, Lebyodkin, 18, 129] questioned the conventional point of view that, in the case of the PLC effect, the burst-like AE events are caused by the motion of large dislocation ensembles giving rise to stress serrations, whereas smaller-size dislocation avalanches occur randomly during the macroscopically smooth plastic flow and generate virtually continuous AE. In the present research the data streaming technique is applied to study the nature of the AE accompanying jerky flow. On the one hand, it is aimed at providing insight into the kinematics of the formation of deformation bands in a wide strain-rate range corresponding to the various types of behavior of the PLC effect. On the other hand, the nature of AE is compared during jerky and smooth plastic flow.

AE patterns

The material used for this study was the Al5Mg alloy (see Chapter 2). As shown in Fig. 5.1, it presents mechanical behavior typical of aging alloys. The plastic deformation starts with a Lüders plateau, which is generally interpreted as due to the propagation of a deformation band through the gauge length of the specimen, a process associated with the unpinning of dislocations from their solute atmospheres in a statically aged material. The Lüders plateau is followed by the PLC effect, governed by dynamic strain aging of dislocations. All three conventional types of deformation curves, C, B, and A, were found as ε̇_a was varied. The solution treatment led to some softening of the material, resulting in a lower yield point and enhanced ductility, but did not visibly affect the plastic instability. At high and intermediate strain rates, the plastic instability started practically immediately after the end of the Lüders plateau. At the lowest strain rate the onset of type C instability was observed after a significant critical strain ε_cr of around 15%. Besides the deep type C serrations, the initial portions (below ε_cr) of the deformation curves displayed the stress drops described in § 1.5.1, which had a much lower amplitude and frequency of occurrence (cf. [27]). The analysis of AE for the low strain rate will be presented in most detail because it best serves the purpose of comparing the AE generated before and during jerky flow.

The high resolution of the elongation and force measurements makes it possible to observe the first deviation from linear elastic behavior, which begins very early with respect to the conventional yield strength. It can be recognized that AE is almost absent before this "true elastic limit". Some sporadic AE events recorded in this region may be considered as due to the occasional breakthrough of dislocation pile-ups. To be precise, such events might also be due to noise pickup; however, verification records during idling showed that noise events of such amplitude occur much more rarely. The AE activity manifestly starts increasing after this initial period. Both the AE activity and intensity grow abruptly in the region corresponding to a slight inflection on the deformation curve, which is shown at higher resolution in the figure inset.
Such an inflection was characteristic of the studied Al5Mg alloy and was not observed for samples with exactly the same size and geometry made from other materials. Thus, it is likely to correspond to enhanced microplastic deformation in this material. This observation confirms that the recorded AE is generated by the (correlated) motion of dislocations. It also attests to AE being a sensitive indicator of the onset of dislocation motion. The data streaming reveals that already at this stage the AE appears at all strain rates in the form of discrete bursts superimposed on a continuous signal, as illustrated in Fig. 5.3. At the lowest strain rate (Fig. 5.3(c)), short isolated bursts with a rise time of several microseconds and a duration of several tens of microseconds appear above the continuous noise level. Such bursts are also detected at higher ε̇_a. At the same time, a tendency toward increasing AE is observed, as could be expected from the intensification of deformation processes required to sustain the higher imposed strain rate. However, the increase is not uniform. In particular, long-duration complex events are formed by the clustering and merging of individual bursts. The merging is relatively weak at ε̇_a = 2 × 10⁻⁴ s⁻¹ (Fig. 5.3(b)), so that many of these events have a short rise time, which allows the main initial shock to be distinguished. At the highest strain rates, the large bursts have much longer wave fronts, durations approaching a millisecond, and often a complex structure, all of which indicates that such a burst reflects a developing deformation process (Fig. 5.3(a)). Besides, the overall increase in the frequency of small pulses adds to the continuous-type signal exceeding the noise level. These patterns already contain many aspects of the signals observed at the later deformation stages. More examples presenting details of the typical waveforms will be given below.

Behavior during the Lüders plateau

The AE observed during the Lüders plateau is weak and similar to that observed before the elasto-plastic transition, whereas very strong acoustic events with durations reaching fractions of a second accompany the serrations. Consecutive zooming in on such events shows that they appear almost stationary on a millisecond scale (Fig. 5.5(b)).

Behavior under PLC conditions

Figure 5.6 illustrates the overall evolution of the AE signal recorded during deformation of an annealed specimen at ε̇_a = 2 × 10⁻⁵ s⁻¹. As can be seen in Fig. 5.6(a), both the activity and the intensity of AE diminish after the end of the Lüders phenomenon. However, the qualitative features of the signal do not change: it is composed of a quasi-continuous background with an amplitude varying above the noise level, on which larger-amplitude bursts are superimposed. What attracts attention is that these bursts are not correlated with the deep stress serrations caused by the PLC effect. Indeed, most of them occur during the initial plastic deformation characterized by small stress serrations. Moreover, even the continuous background level decreases during deformation, in spite of the onset of the PLC instability [124]. Important information on this counterintuitive behavior is obtained from consecutive zooming-in steps uncovering the AE at various scales, which reveal its essentially burst-like character. The burst-like events occur all along the deformation curve, i.e., not only at the instants of stress drops but also during the smooth plastic flow between them.
As can be seen in Fig. 5.6(b), the small stress drops observed at the initial deformation stage are accompanied by acoustic events with relatively high amplitudes. However, the instants of the stress drops are not exceptional: AE events with similar amplitudes also occur during smooth plastic flow. Moreover, the correlation between AE and stress drops becomes weaker in the course of deformation. Namely, the difference between the amplitude of the acoustic events accompanying type C serrations and that of the neighboring events either becomes less significant, as illustrated by the example of Fig. 5.6(c), or even disappears completely. The latter behavior is similar to the data on the Al3Mg alloy studied in [Bougherira], where no difference was found between the AE events extracted with the help of standard acoustic equipment during stress drops and during the intervals between them.

Further increasing the time resolution allows two principal waveforms of the observed AE events to be distinguished. One of them consists of long events, with a duration varying from hundreds of microseconds to tens or even hundreds of milliseconds, on a background from which some short bursts can be separated, as illustrated in Figs. 5.8(a) and 5.8(b). As in the case of the pulses accompanying the Lüders plateau (Fig. 5.5), such long events appear at a finer scale as dense sequences of shorter pulses closely following each other. In particular, small stress drops are often accompanied by dense sequences of short bursts which can still be isolated from each other. Similarly, their more frequent occurrence often precedes deep stress drops. The beginning of a very long AE event usually just precedes an abrupt stress drop, thus testifying to a developing deformation process. Another example is given in Fig. 5.8(b), which reveals a short higher-amplitude burst superimposed on a long event. This picture is fully consistent with the results of [28], in which individual long-duration AE events observed during the PLC effect in an AlCu alloy were shown to possess a complex correlated structure revealed by multifractal analysis. A quantitative analysis of such short-time correlations, which confirms the complex nature of these signals, will be presented in Chapter 6.

The samples not subjected to solution treatment demonstrated the same AE patterns at each scale of observation as those illustrated in Figs. 5.6-5.8. Visual inspection of the overall AE intensity level also did not reveal a noticeable difference. However, the maximum amplitude of the AE bursts appeared to reach values up to twice as high as in the case of the annealed specimens. For this reason, statistical analyses were performed to reveal possible quantitative changes caused by the thermal treatment of the material. The statistical analysis was performed for one parameter set optimized using tests with the so-called Hsu-Nielsen source (pencil lead break): the amplitude threshold was chosen equal to 27 dB, HDT = 300 µs, HLT = 40 µs, and PDT = 40 µs. Power-law probability functions were found for both kinds of samples over almost the entire range of AE amplitudes, except for the largest events, which showed a tendency toward an increased probability. An example of a comparison of such dependences is given in Fig. 5.9 for the time interval from 2000 s to 4000 s, which corresponds to the region before the onset of the PLC effect. It can be seen that the slope of the power-law function is steeper in the case of the annealed sample.
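To make such slope comparisons quantitative, the exponent β of the probability density P(A) ~ A^β can be estimated either from a fit to the logarithmically binned histogram or by maximum likelihood. The sketch below is a hypothetical Python illustration of the latter approach, following the standard estimator of Clauset et al. rather than the exact fitting procedure used in the thesis; the function name and the choice of lower cutoff are illustrative.

```python
import numpy as np

def powerlaw_beta(amps, a_min):
    """Maximum-likelihood estimate of beta in P(A) ~ A**beta (beta < 0).

    amps  : AE peak amplitudes (e.g., normalized by the dataset average)
    a_min : lower cutoff above which the power law is assumed to hold
    Returns (beta, standard_error).
    """
    a = np.asarray(amps, dtype=float)
    a = a[a >= a_min]
    if a.size < 10:
        raise ValueError("too few events above the cutoff")
    alpha = 1.0 + a.size / np.log(a / a_min).sum()   # Clauset et al. estimator
    return -alpha, (alpha - 1.0) / np.sqrt(a.size)

# Example with synthetic power-law data generated for beta = -2.5:
rng = np.random.default_rng(1)
a = (1.0 - rng.random(20000)) ** (-1.0 / 1.5)        # P(A) ~ A**-2.5, A >= 1
print(powerlaw_beta(a, a_min=1.0))                    # approx (-2.5, 0.01)
```

The maximum-likelihood route avoids the binning bias of histogram fits, which matters when comparing slopes that differ only by a few tenths, as in Fig. 5.9.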
In other words, annealing leads to an increased probability of smaller events. A consecutive analysis of similar time intervals along the entire deformation curve showed that this tendency does not persist during the test: the exponent β for the annealed specimens approaches the value for the as-delivered specimens, which changes little with strain hardening (see Chapter 6), so that the difference between the dependences for annealed and as-delivered samples gradually decreases and finally vanishes beyond ε_cr.

The overall behavior of AE on the time scale of the test duration shows similar features at higher strain rates. Although the stress drops are often associated with rather intense AE events, events with similar amplitudes are also observed in the intervals between stress drops. Finally, at the highest strain rate bursts of moderate amplitude are more difficult to isolate, so that a virtually continuous AE signal, on which rare strong bursts are superimposed, is observed on a similar time scale (see Fig. 5.11(b)).

Spectral analysis

As described in the previous section, visual examination of the AE signals allows the main waveforms of the acoustic signals accompanying plastic deformation in AlMg to be distinguished. Additional quantitative information can be derived from the spectral analysis of both individual waveforms and coarse signals. Figure 5.13 illustrates typical waveforms with the respective power spectra. No significant evolution in the spectra of individual events was revealed during deformation. A more delicate analysis based on the multifractal formalism and aimed at revealing possible changes will be presented in the next chapter. The portions of the close-to-noise signals selected at different strains give spectral shapes close to the Fourier spectrum of the noise recorded before the start of loading, represented in Fig. 5.13(a). The comparison of the power spectra obtained for the various types of events (Fig. 5.13(a)-(d)) reveals a persistent peak around 320 kHz, which can therefore be supposed to reflect the properties of sound propagation in the investigated medium for the given specimen geometry. The width and height of this peak, however, depend on the event type. For example, the spectrum of a burst-like event followed by a dozen regular oscillations with decaying amplitude is mostly determined by this peak, as shown in Fig. 5.13(b). The merging and overlapping of such "elementary" bursts gives rise to complex waveforms and leads to the occurrence of new peaks and a resultant broadening of the spectra (see Fig. 5.13(c),(d)).

The time dependences of the AE energy E and the median frequency f_med illustrate that, besides the strong discontinuities caused by large AE bursts, practically all stress drops give rise to distinct rises in the acoustic energy and simultaneous falls in the median frequency. The reduction in f_med is usually attributed to an enhanced correlation in dislocation processes and strain localization and, therefore, reflects highly cooperative processes (see, e.g., [129,178]). It should be noted that the changes in E and f_med begin well before the stress drop. Closer analysis relates this effect to the AE increase prior to the drop, as was specified above. The existence of such precursors of "catastrophes" also agrees with the data of the correlation analysis of series of AE events in [18].
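Both descriptors are straightforward to compute from the streamed signal. The following sketch, a minimal Python illustration rather than the exact processing chain used in the thesis, evaluates the energy E and the median frequency f_med (the frequency splitting the power spectral density into two halves of equal power) in sliding windows; the window lengths and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def energy_and_fmed(u, fs, win=0.1, step=0.05):
    """Sliding-window AE energy and median frequency.

    u   : streamed AE signal (V); fs : sampling frequency (Hz)
    win : window length (s); step : window shift (s)
    Returns arrays of window centers t, energies E, and frequencies f_med.
    """
    wn, sn = int(win * fs), int(step * fs)
    t, E, fmed = [], [], []
    for i0 in range(0, len(u) - wn + 1, sn):
        w = u[i0:i0 + wn]
        f, pxx = welch(w, fs=fs, nperseg=min(4096, wn))
        cum = np.cumsum(pxx)
        # frequency below which half of the spectral power lies
        fmed.append(f[np.searchsorted(cum, 0.5 * cum[-1])])
        E.append(np.sum(w * w) / fs)          # integral of u^2 dt
        t.append((i0 + 0.5 * wn) / fs)
    return np.array(t), np.array(E), np.array(fmed)

# Sanity check: f_med of a pure 320 kHz tone is recovered.
fs = 2e6
x = np.sin(2 * np.pi * 320e3 * np.arange(int(fs)) / fs)
print(energy_and_fmed(x, fs)[2][:3])          # approx 320 kHz in each window
```

A drop in f_med computed this way corresponds to the enhanced correlation and strain localization discussed above, while the rise in E marks the bursts themselves.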
Finally, at high strain rates the evolution of E and f_med also displays fluctuations on the scale of the test duration but, since the almost continuous character of the AE makes it difficult to distinguish individual events, an accurate analysis on the scale of the propagation of one band is difficult.

Discussion

The overall behavior of the AE observed in the present work is consistent with the results of earlier studies on the PLC effect, interpreted from the viewpoint of the generation of AE by the multiplication and motion of dislocations (e.g., [124,125,18]). The discussion below is developed within the same framework of the dislocation mechanism of AE. However, the possible role of the cracking of second-phase particles in the overall AE signal, which is usually disregarded in the literature on AE accompanying the PLC effect, should be mentioned. Indeed, post-mortem electron microscopy of the fracture surfaces, although displaying ductile dimpled fracture, revealed some sparse broken inclusions, mainly in the non-annealed specimens (Fig. 5.17). Besides the scarcity of such sites found in the microscopy investigation, some other experimental observations suggest a minor role of cracks. In particular, the exponents of the power-law distributions determined in the present work are close to the data for an AlMg alloy in which no cracks were observed [18], and much higher in magnitude than the typical values reported for AE caused by cracking (e.g., [179]), which are similar to those found for dislocation glide or twinning in pure materials (e.g., [21]). However, the role of inclusions as AE sources during plastic deformation of aging alloys remains an open question and would require a special study.

As was underlined in Chapter 1, the conventional vision of AE accompanying the PLC effect relates the burst-like events to the stress serrations. However, short burst-like events are observed from the very beginning of plastic flow (Fig. 5.3), whereas the increasing density of obstacles in the work-hardened material would reduce the free path of mobile dislocations. As argued in [124], the waveform of such bursts is mainly determined by the properties of sound propagation in the material. More precisely, their leading front time is supposed to arise from the different propagation speeds of the different (bulk and surface) modes of stress waves, whereas the rear front of these bursts may reflect the developing deformation process and last for a long duration. On the other hand, the merging of many burst-like events during the development of stress drops leads to a nearly continuous appearance of such composite events when the time scale is further refined (Fig. 5.8). The occurrence of long composite events explains the usually reported observation of drastic count-rate bursts accompanying stress serrations [124, Bougherira], whereas the AE amplitude shows weaker bursts or none at all. As suggested in [Bougherira], the merging of AE events reflects the synchronization of the deformation processes which are responsible for large stress serrations. Indeed, at a low enough strain rate, the internal stress caused by the strain incompatibility produced by a deformation band relaxes efficiently during the slow reloading following the corresponding stress drop. Therefore, when the threshold of instability is reached at some site in the crystal, the neighboring sites are also close to the threshold.
In particular, this suggestion is confirmed by the observation of an increasing number of AE bursts preceding the large event (cf. [124,18]), which is quantitatively corroborated by the results of the spectral analysis. Eventually, local dislocation glide may trigger the propagation of plastic activity and the formation of a deformation band. The band development is stopped later on by the fall in stress, which moves the system far from the conditions of instability. When ε̇_a is increased, short AE bursts are still distinguishable on the relevant time scale, both during the relatively silent intervals corresponding to deformation band propagation and against the background of more intense AE during band nucleation. It can thus be suggested that the "elementary" plasticity events in the investigated material are essentially collective avalanche-like processes, similar at all strain rates, in agreement with the earlier observations of the same AE waveforms for different types of serrations [124]. At the same time, the growth of the overall plastic activity required to sustain the faster loading results in enhanced merging and superposition of AE events and gives rise to virtually continuous AE.

It can also be suggested that these changes are in fact more profound and reflect changes in the correlations between the elementary processes, which lead to a transition to a different dynamical pattern on the macroscopic scale. Indeed, at high strain rate not only the AE but also the (type A) stress serrations are characterized by power-law statistics, whereas peaked distributions are observed for type B and type C serrations [Lebyodkin, 84, 18]. This transition fits into the above-described framework. Namely, when the imposed strain rate is increased, the internal stresses caused by strain heterogeneity do not have enough time to relax, so that most dislocation ensembles are constantly close to the threshold of instability. Consequently, an increase in the strain rate leads to a transition from distinct serrations caused by repetitive synchronization of dislocations to a critical-type behavior characterized by stress fluctuations of all sizes.

The other important observation follows from a comparison of the AE during smooth and jerky flow at a given strain rate, including the regions before the onset of the PLC effect. It suggests that the AE has an essentially intermittent nature over the whole deformation curve, although the AE events show different degrees of synergy, appearing as series of individual bursts during smooth plastic flow and displaying a high degree of correlation during jerky flow. This conclusion is consistent with the results of a concurrent multifractal analysis of series of stress serrations and series of AE events in [Lebyodkin], as well as with the observation [27] of the closeness of the AE statistics calculated separately for AE events gathered during stress serrations and between them. It can thus be conjectured that the elementary processes of plastic deformation are the same not only for different types of serrations but also for macroscopically uniform flow. However, this behavior may not be universal for all materials. Specifically, essentially continuous AE at all scales was observed in [129] for the PLC effect in α-brass under type A conditions.

The persistent nature of the power-law statistical distributions of the AE amplitudes supports the above results, leading to the conjecture of an intrinsically intermittent character of the dislocation processes in the investigated material.
One feature of the observed distributions is, however, unusual and deserves special discussion. Namely, the power-law statistics characterizing the dynamics of various real systems usually manifest a cut-off at the large scale of the analyzed variable. Alongside various specific cut-off mechanisms, it is caused by general factors, such as the limitation imposed on the size of avalanches by the system dimensions, the impossibility of waiting long enough to accumulate sufficient statistics for rare large events, and so on. In contrast, the present data often manifest an enhanced probability of large AE events (see Fig. 5.9). This behavior sheds light on the effect of grain boundaries on the collective dislocation dynamics. Indeed, being effective obstacles to dislocation motion, the grain boundaries may play a dual role. On the one hand, the stress concentration caused by dislocation pile-ups may trigger dislocation sources in the neighboring grains and, therefore, promote large dislocation avalanches. Such a situation seems to be at the origin of the observation in [26] of an increase in the power-law exponent in ice polycrystals, i.e., an increase in the probability of larger AE events, in comparison with single crystals of the same material. On the other hand, the β-values reported in [26] and similar works (β ≈ -(1.3÷1.7)) are much higher than those found in the case of the PLC effect (β ≈ -(2÷3)). In [Bougherira], this difference was ascribed to a more important role of other hardening mechanisms caused by forest dislocations and solutes. The present data provide further arguments confirming this hypothesis. First, although a detailed comparison with the data of [Bougherira] is not possible because the Al5Mg alloy investigated in the present chapter has a different Mg content, initial dislocation density, and grain structure, the observation of a similar β-range for the two alloys confirms the limitation of the avalanche size in such materials. Furthermore, a tendency toward an increased probability of large dislocation avalanches was also observed in [Bougherira], but this effect is considerably stronger in the present study. Both these observations may be explained within the framework of the discussed hypothesis, taking into account the small grain size in the investigated material compared with the typical grain size of about 30÷70 µm in the alloy studied in [Bougherira]. Indeed, the decrease in grain size may reinforce the role of grain boundaries in promoting the transfer of plastic activity to neighboring grains, at least for strong dislocation avalanches. The discussed hypothesis also agrees both with the above-described decrease in β upon doubling the grain size by thermal treatment and with its disappearance after work hardening of the material. Finally, it can be suggested that this effect is not specific to AlMg alloys: a similar effect of grain size was reported in Chapter 4 for MgZr alloys.
In summary, in the literature on the intermittency of plastic flow the phenomenon of jerky flow is usually considered an exotic case, because of the huge instabilities resulting in macroscopic stress fluctuations. Moreover, these fluctuations show scale-free power-law distributions only at high strain rates, whereas characteristic scales appear for slow deformation. The results of the AE study presented above show that the large stress drops are accompanied by bursts in the duration of the acoustic events but, rather counterintuitively, the amplitudes of these events are confined to the same amplitude range as in the absence of macroscopic instabilities. The latter observation indicates some general limitations of the collective dislocation dynamics. In [21], it was argued that the dislocation avalanche size is mainly limited by the sample dimensions in the case of single crystals of ice and pure metals. The present data testify that the grain size and the dislocation microstructure (e.g., forest dislocations) may impose important limiting effects. Within this limited range, the amplitudes of the AE events obey power laws, thus confirming the ubiquitous nature of intermittency and unifying the cases of smooth and jerky flow. At the same time, the power-law exponents are larger in magnitude than the values found for pure materials, and depend on the material microstructure. As far as the bursts in duration are concerned, they may be caused by the synchronization of dislocation avalanches, which is realized as a propagation process similar to relaxation oscillations [86]. Consequently, although the traditional representation of AE using duration-dependent characteristics displays bursts during stress drops, dense successions of acoustic events may lead to a virtually continuous appearance of the AE signal itself, provided that the proper time and voltage resolutions are chosen.

Twinning and dislocation glide in Mg alloys

As described in § 1.6.1, the plastic deformation of hcp metals is essentially governed by twinning and dislocation glide. It is known that in some cases twinning manifests itself through macroscopic stress serrations, e.g., at low temperatures or in single crystals. However, deformation curves obtained for polycrystals tested at room temperature are often macroscopically smooth. To date, the question of the relative roles of twins and dislocations in the total plastic deformation and in the concomitant AE remains controversial. Several works have demonstrated the high potential of analyzing the event waveforms for the separation of AE events [Vinogradov, 82]. For this reason, the application of the data streaming technique seems promising. This section presents the first results of the investigation of the AE accompanying deformation of Mg alloys, using the approach described above for the case of the PLC effect.

MgZr alloys

The deformation curves of the MgZr samples appear macroscopically smooth at a global scale (Fig. 5.18(a)). However, the microstructural analysis of the deformed samples reveals a large number of differently oriented twins (Fig. 5.19). That twinning plays a significant role in the plastic flow of these alloys is also confirmed by the character of the deformation curves. Indeed, the tensile curve presented in Fig. 5.18(a) demonstrates substantial strain hardening, which can be attributed to twin-related mechanisms, e.g., by taking into account that twin boundaries are strong obstacles to the motion of dislocations. Further magnification, shown in Fig. 5.18(b), allows irregular stress fluctuations to be discerned, which may be produced by twins.
The data streaming records show that the AE is essentially burst-like in this material. Indeed, magnification of a portion of the signal that appears continuous at the global scale of Fig. 5.18(a) displays a large number of isolated bursts with strongly varying amplitudes, as illustrated in Fig. 5.18(b). The first acoustic events occur during the elasto-plastic transition. The AE sharply increases, reaches a maximum, and gradually decreases to a constant average level which persists up to the failure of the specimen. However, strong AE bursts are observed throughout the whole test.

Typical waveforms are presented in Fig. 5.20 together with their Fourier spectra; note that the amplitude scale is not the same in the different panels. The comparison of two such panels shows that the amplitudes of the bursts described by a similar waveform vary over a remarkably wide range. The shape of the rear front of these events does not show a smooth transient but is rather complex. Consequently, the corresponding spectra are broad and vary strongly. Nevertheless, they possess a generic feature, namely, an intense peak in the interval 200-300 kHz, which dominates the spectrum. Another waveform, presented in Fig. 5.20(d), occurs much more rarely. It possesses a relatively small amplitude and often a large duration, up to a millisecond. However, in this case too, the signal shows abrupt fluctuations giving rise to a wide spectrum, which often consists of several peaks of similar intensity. Moreover, some of these events follow the short bursts. Taking into account the presence of abrupt fluctuations in the long events, it can be suggested that deformation processes with different kinetics, giving rise to the two different kinds of AE responses, mutually trigger each other.

In spite of the fact that the waveforms and their spectra remain qualitatively similar in the course of deformation, inspection of the time evolution of the average energy and the median frequency reveals clear quantitative trends, in addition to the fluctuations caused by strong AE events. An example of such dependences is presented in Fig. 5.21. It can be seen that the AE energy rises rapidly from the onset of plastic deformation, passes through a maximum, and decreases to an approximately constant level, while the median frequency shifts gradually to higher values. Similar trends were also observed in the case of the AlMg alloy (see § 5.1.2), although in a less pronounced manner.

AZ31 alloy

As described in Chapter 2, these samples were prepared using different procedures resulting in two different microstructures. Figure 5.22 presents examples of the deformation curves and the accompanying AE signals for both kinds of samples deformed at ε̇_a = 1 × 10⁻³ s⁻¹. It can be seen that the specimen with a heterogeneous grain structure (sample s1) is harder than that with a uniform grain structure (sample s2). In both cases the acoustic response shows similar patterns in the elastic and elasto-plastic regions, characterized by very strong bursts. However, the behaviors become completely different after yielding. For sample s1 the AE almost vanishes, reducing to rare high-amplitude bursts on a continuous background (Fig. 5.22(a)). In contrast, the response of sample s2 gradually decreases with strain and retains a high burst-like activity up to fracture (Fig. 5.22(b)). The post-mortem microstructure analysis revealed substantial differences in the nature of the plastic deformation of the two kinds of samples, as shown in Fig. 5.23. No twins and no significant changes in the grain size were found in sample s1 (compare with the initial grain structure in Fig. 2.4).
In the second case many large twins are observed, and their presence makes the analysis of the grain structure in sample s2 impossible.

The Fourier spectra of the short bursts are dominated by an intense peak, which almost reduces the spectrum to one narrow peak. The spectra of the long events are more complex. Besides peaks located in approximately the same frequency range (350-400 kHz), they also present rather high peaks at other frequencies. However, in comparison with the spectra obtained for the long events in the MgZr samples (Fig. 5.20(d)), these components are shifted toward the low-frequency part of the spectrum (Fig. 5.24(f)), thus indicating the absence of abrupt fluctuations in the corresponding waveforms. The PSD analysis of the entire AE signals was complicated in these tests because of the low resolution of the equipment. Nevertheless, its results confirm the same qualitative trends as discussed above.

Discussion

The observed overall evolution of the AE is similar to the literature data reported for magnesium alloys of different chemical compositions (e.g., [152,174]). Mathis et al. [152] distinguished three stages of AE evolution corresponding to distinct mechanisms of deformation:

• At the beginning of deformation, basal slip (the easy glide system in Mg) and primary {10-12}<10-11> twins in the grains unfavorably oriented for slip are found. Both these processes are considered to be effective sources of sound waves, contributing to the initial strong AE.

• The maximum of AE corresponds to the activation of twinning in other systems.

• At larger strains the AE decreases, as the growing density of obstacles reduces the free path of dislocations.

Two distinct types of AE waveforms, differing strongly in duration, were found by Tymiak et al. [156] for sapphire single crystals. In that work the long events were only observed in crystals displaying twinning. Therefore, these data provided explicit proof that such events are generated by twins. The authors suggested that the sharp transient is due to twin nucleation and the slowly decaying part is controlled by its growth, whereas the events displaying only sharp transients may be due to dislocation avalanches. Based on the similarity of the observed waveforms, a similar hypothesis was adopted in [82] to interpret the results obtained for single crystals of hexagonal metals. An opposite conclusion was drawn by Vinogradov et al. for Cu alloys containing various percentages of Ge. It was shown that twinning occurs only in the material with a high Ge content. These alloys present sharp AE bursts with durations of about 100 µs, while the samples with a low Ge content deform by dislocation glide and are characterized by essentially continuous emission, in which individual bursts can hardly be isolated. However, the authors admit that some of the short bursts may be due to dislocation avalanches. The existence of diverse hypotheses on the waveforms generated by dislocation avalanches is not surprising in view of the above results of the investigation of the Al5Mg alloy, which prove that both sharp transients and almost continuous signals can occur during dislocation glide. It can be suggested that the kinematics of the deformation processes may vary considerably even within the same test and under conditions of a single mechanism of plasticity, because of the variation of the local conditions controlling the spatial correlations. Obviously, the characteristic waveforms may also strongly depend on the studied material. It is also noteworthy that the analysis of the correlation between different kinds of AE events, reported by Richeton et al. [82], indicated that twinning and dislocation glide may mutually trigger each other.
The results obtained in the present study are consistent with the hypothesis that twins are responsible for the essentially burst-like behavior displaying short transients and amplitudes reaching very high values. This conclusion follows directly from the comparison of the two types of AZ31 samples, one deformed by dislocation glide (Fig. 5.22(a)) and the other showing intense twinning (Fig. 5.22(b)). It is also confirmed by the observation of burst-like behavior in the MgZr alloys characterized by twinning. The occurrence of strong bursts at the beginning of deformation in Fig. 5.22(a) proves that dislocation avalanches can also produce very intense events, at least at small strains, when the dislocations can move over long distances. However, these events usually last much longer than those associated with twinning. The short duration and the distinct isolation of the AE events from each other in the case of twinning bear witness that the deformation processes involving twinning provide a more efficient relaxation of the constrained microstructure and lead to an abrupt interruption of the localized fast deformation.

Another noteworthy feature is that the Fourier spectra of the waveforms observed for the Mg alloys are usually remarkably wider than those found for AlMg (cf. Figs. 5.13, 5.20, and 5.24). This difference reflects a more complex shape of the waveforms for Mg alloys and may be due to the conditions for plastic flow being more constrained in hcp than in fcc metals. This view is consistent with the conjecture of a complex nature of the processes determining the shape of an individual burst in a Mg alloy, which include mutually triggering twinning and dislocation glide. Such coupling may arise from various mechanisms. For example, the elastic wave generated by the formation of a twin may trigger other twins, even in remote regions, whereas the concomitant reorientation of the crystal lattice facilitates the motion of dislocations within the twinned region.

Finally, it is noteworthy that the PSD analysis revealed a persistent feature of plastic deformation, consisting in a gradual increase of the median frequency f_med during deformation, observed for both Mg and Al alloys. Such changes are usually attributed to a decrease in the dislocation mean free path because of the increasing density of obstacles to dislocation motion. Indeed, the corresponding reduction of the mean free flight time of dislocations would result in an increase in f_med. The same reasoning can also be applied to twinning. Another aspect of this behavior, discussed in § 5.1.2, should also be noted: a higher f_med means a weaker correlation of the processes giving rise to AE. Therefore, the observed increase in the average median frequency suggests a progressive stochastization of the deformation processes on the global time scale. This question, concerning the changes in the correlations between deformation processes in the course of work hardening, will be addressed in the next chapter using statistical and multifractal analyses of AE on different time scales.

Chapter 6
Statistical and multifractal analysis

This chapter presents the results of the statistical and multifractal analysis of the AE accompanying plastic deformation of the materials used in the study. The investigation is performed at various strain rates, at various stages of deformation, and on various time scales.
Until recently, only the statistical analysis of AE was used to reveal information about the collective dislocation dynamics [20, 82, 18, Bougherira]. In the case of the PLC effect [Bougherira, 18], our attention was attracted by the observation of considerable changes in the indices of the power-law statistical distributions during deformation of an Al3%Mg alloy. The increase in the slope of the dependences indicates a tendency toward a transition from scale invariance to a behavior characterized by an intrinsic scale. To verify this hypothesis we performed a similar analysis for an Al5%Mg alloy as well as for MgZr alloys. The statistical investigation is complemented by multifractal analysis. It should be noted in this respect that the statistical distributions only characterize the probability of plastic activity of a given intensity during the test duration; they do not provide information on the relative arrangement of the plastic events. The multifractal analysis has the advantage of uncovering the presence of correlations and characterizing their scaling properties.

Statistical analysis

AlMg alloy

The analysis was performed for Al5%Mg samples in both the as-delivered and annealed states, for four strain-rate values. As said above, at the highest strain rate the AE initially displays virtually continuous signals with superimposed large discrete bursts, so that only a few events are extracted by the applied numerical procedure. However, the AE signal acquires a more discrete character after some work hardening, providing enough data for statistical analysis. The results shown in Fig. 6.2(a) were obtained on statistical samples containing about 1000 events, which allowed for reliable detection of the power law. Statistical samples at least ten times larger were typically obtained for the lower strain rates, except for the quasi-elastic region before the Lüders plateau and the late intervals preceding the rupture of the specimen. The data for the lower strain rates, represented in Fig. 6.2(b-d), testify that for a given ε̇_a the exponent β evolves during deformation. Both the overall behavior and the β-range are similar to the data for the Al3%Mg alloy [Bougherira]. Depending on ε̇_a, the value of β varies between -1.8 and -2.4 at the onset of plastic deformation. It then decreases, varying from -2 to less than -3. Finally, an inverse trend (an increase in β) occurs after some deformation at ε̇_a = 2 × 10⁻⁵ s⁻¹. The comparison of the data obtained in similar strain intervals for different strain rates reveals a tendency toward a decrease in β with decreasing ε̇_a in the strain-rate range corresponding to type A and type B behavior (Fig. 6.2(a-c)). At this stage of the investigation it is difficult to say whether this trend is meaningful. It qualitatively agrees with the influence of the rate of change of the magnetizing field on the power-law statistics of the Barkhausen effect, as well as with theoretical predictions of the role of overlapping [93] (see § 1.4.3). Importantly, this trend does not show up when ε̇_a is further reduced (cf. Figs. 6.2(c) and (d)).
It can thus be conjectured that correct values of β, unaffected by the overlapping, are obtained for ε̇_a ≤ 2 × 10⁻⁴ s⁻¹. This observation explains the remarkable robustness of β with respect to the event individualization parameters, which was reported in [Bougherira] for a similar ε̇_a-range.

The data of Fig. 6.2 make more precise the observation noted in Chapter 5 in regard to the difference between the β-values for as-delivered and solution-treated samples. Namely, the slope of the power-law dependence is generally steeper for the annealed samples, i.e., annealing leads to an increased probability of smaller events, although in the case of type C instability this difference decreases during the test or even vanishes. The difference between the β-values for as-delivered and annealed samples conforms to the conjecture that the grain boundaries may promote powerful avalanches by triggering new dislocation sources in the neighboring grains. Indeed, the thermal treatment leads to a growth of the grain size and a reduction of the stress concentration at the grain boundaries, which would reduce the triggering effect. The observation of a decrease in this difference after some deformation in the tests at ε̇_a = 2 × 10⁻⁵ s⁻¹ does not seem to contradict this conjecture, because the low strain rate provides more time for the relaxation of local overstresses at the grain boundaries. Consequently, the accumulation of dislocation density would bring forest dislocations to the forefront as obstacles to slip and reduce the role of grain boundaries.

MgZr alloys

As reported in Chapter 4, power-law statistical distributions of AE were observed for all the Mg alloys studied in the dissertation.

Multifractal analysis

General approach

Two kinds of time series were used to reveal the multifractal properties of the AE signals. One approach was based on the analysis of series of peak amplitudes of the events extracted from a signal (cf. [Lebyodkin, Bougherira]). This method allows large time intervals to be analyzed. It also provides a way to filter out events that do not belong to the detected multifractal set. For this purpose, the calculation of the partition functions Z_q(δt) (see § 3.3) is repeated several times for different choices of the threshold which serves to cut off events with either the lowest or the highest amplitudes. In the second approach, the time series is represented by the AE signal itself. Such time series are usually limited to short time intervals (< 5 s) to keep the computation time reasonable, but this limit can be considerably widened through the removal of the noise component below a threshold. This approach is particularly useful for the analysis of individual waveforms, and also in the case of high strain-rate data for which the extraction of individual events is impeded by their merging; examples may be found in [Lebyodkin, Multifractal analysis of unstable plastic flow]. The removal of noise allows the multifractality to be detected in an interval of large enough δt, from 30 ms to about 1 s (Fig. 6.5(b)), as testified by the linear behavior of the log Z_q(δt)/(q-1) vs. log δt dependences. Noise filtering with the aid of wavelets led to similar scaling. Consequently, truncation was mostly used in the present dissertation, due to its simplicity. The research with the aid of wavelet analysis will be continued in the future.
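For reference, the box-counting computation of the partition functions is sketched below in Python. This is a minimal illustration of the standard formalism assumed from § 3.3 (normalized box measures μ_i and the sums Z_q = Σ μ_i^q), not the exact code used in the dissertation; the function name and the q = 1 convention follow common practice.

```python
import numpy as np

def log_partition_functions(signal, box_sizes, qs):
    """log Z_q(dt) for a nonnegative time series (box counting).

    signal    : nonnegative measure, e.g. |u| after noise truncation,
                or a series of event amplitudes
    box_sizes : box lengths dt, in samples
    qs        : list of moment orders q
    Returns a dict {q: list of log-values}. The generalized dimension
    D(q) is the slope of log Z_q / (q-1) vs. log dt; for q = 1 the sum
    of mu*log(mu) directly plays the role of (q-1) log Z_q.
    """
    s = np.asarray(signal, dtype=float)
    s = s / s.sum()                            # normalized measure
    out = {q: [] for q in qs}
    for m in box_sizes:
        nbox = len(s) // m
        mu = s[:nbox * m].reshape(nbox, m).sum(axis=1)
        mu = mu[mu > 0]                        # empty boxes carry no measure
        for q in qs:
            if abs(q - 1.0) < 1e-12:
                out[q].append(np.sum(mu * np.log(mu)))
            else:
                out[q].append(np.log(np.sum(mu ** q)))
    return out
```

A linear dependence of log Z_q/(q-1) on log δt over a sufficiently wide range of box sizes is then taken as the signature of multifractal scaling, exactly as discussed above.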
AlMg alloy

Analysis in large time intervals

As reported in Chapter 1, application of the MF analysis to series of stress serrations revealed the multifractality of the deformation curves at all strain rates [84, 106, 103, Lebyodkin]. In [Lebyodkin], multifractal behavior was also found for the corresponding series of amplitudes of AE events, except at the highest strain rate, ε̇_a = 6 × 10⁻³ s⁻¹. In this last case the failure of the MF analysis was explained by a strong overlapping of the AE events, which made it difficult to resolve individual bursts. For this reason we mostly used direct processing of the AE signals whenever possible, in order to avoid the errors associated with event individualization. In wide ranges of deformation, linear log Z_q(δt)/(q-1) vs. log δt dependences are found over intervals covering noticeably more than an order of magnitude of δt. The dependences deviate from straight lines when δt is decreased to a scale corresponding to separate events. The upper scaling limit is related to the finite length of the analyzed time series. Using families of such dependences, spectra of generalized dimensions, D(q), and singularity spectra, f(α), were calculated for different ε̇_a. Smooth MF spectra were found for all applied strain rates. It should be noted, however, that at low and intermediate strain rates there exist time windows during which multifractality was not detected.

The evolution of the singularity spectra was followed by calculating them for consecutive time intervals (one of which, selected beyond ε_cr, is shown in Fig. 6.8(b)). It can be seen that the spectra gradually expand and deteriorate in the course of deformation. Furthermore, the multifractality is usually not detected for the latest portions of the deformation curves. That the branches corresponding to negative q-values (the right-hand parts of the singularity spectra) are particularly sensitive to any deviation from fractal behavior and are difficult to obtain in a reliable way is a general problem for the treatment of real signals, because the negative q's correspond to the subsets with the poorest statistics (see, e.g., [122,106]). Nevertheless, the increase in the spectrum width in the range q > 0 indicates an increasing heterogeneity of the signal. In order to verify the conclusions on the evolution of the MF spectra, the calculations were also made for series of amplitudes of AE events. Figure 6.9 presents the resulting singularity spectra for time intervals including those processed in Fig. 6.7 (larger intervals are taken to provide statistically significant numbers of extracted AE events). As could be expected, the spectra do not coincide with those in Fig. 6.7. However, they fall into a similar range of singularity strength α. Importantly, the qualitative effect of work hardening is the same for the two kinds of time series.

Despite the increase in AE activity with increasing ε̇_a, the overlapping of AE events at intermediate strain rates seems insignificant during the macroscopically smooth parts of the deformation curves. Taking into account that the test duration diminishes accordingly when ε̇_a is increased, both approaches to the MF analysis of AE can be applied comfortably. Moreover, under these conditions the numerical procedure allows for processing time intervals covering several sequences of stress serrations associated with the relay-race propagation of type B deformation bands. Examples of MF spectra for ε̇_a = 2 × 10⁻⁴ s⁻¹ are given in Fig. 6.10.
For one segment, T = [520 s; 700 s], the results obtained using both kinds of time series are presented. It can be seen that the entire AE signal yields in this case an apparently distorted spectrum (open circles), thus questioning the suggestion, based on visual examination of the signal, that the events are only weakly overlapped. In contrast, the series of amplitudes in the same time interval produces a smooth spectrum (solid circles). In any case, the comparison of the MF spectra obtained for different time segments using the same approach proves that their evolution with deformation is similar to that found for type C behavior. In particular, the AE is depressed at the latest stages of deformation, so that many stress serrations show no acoustic response, and the multifractality completely disappears. For illustration purposes, the dependences marked by open squares show the results of a formal calculation of the MF spectra in this last case. It can be recognized that no smooth curve is found.

Besides the treatment of large time intervals, the relatively high AE activity observed at intermediate strain rates allows for the examination of scaling on the scale of one "period" of the serrated curve, i.e., the reloading between successive stress serrations (Fig. 6.11). It can be seen that the spectrum has a similar shape and width as its counterpart in Fig. 6.10, determined for δt ≈ [10 s; 100 s]. Although there is a gap between these two δt-ranges, it is likely to be an artifact of the truncation procedure, which imposes the small-scale limit of the scaling. Indeed, this was verified by repeating the analysis using intervals of intermediate lengths. Thus, the observed similarity testifies to the same mechanism of correlations operating over a rather wide time range, from milliseconds to tens of seconds, although further verification of this conjecture is needed. Unfortunately, the number of AE events occurring during the reloadings decreases with deformation, which makes a systematic analysis difficult.

A further increase in ε̇_a leads to strong overlapping of the acoustic events. Nevertheless, multifractal scaling is found for type A instability, too. Although the evolution of the spectra with strain makes an evaluation of the effect of ε̇_a difficult, the comparison of MF spectra obtained in similar strain intervals for different ε̇_a shows a trend toward more heterogeneous behavior (wider spectra) at higher strain rates. This observation agrees with the results of the analysis of stress serrations in the literature [84,8,122]. It can be illustrated using the example of a test at 6 × 10⁻³ s⁻¹, which displays type A serrations at the beginning of the test and a progressive transition to type B serrations. Figure 6.12 shows the shapes of the analyzed AE signals and the results of the MF analysis. The AE initially appears essentially continuous on the scale of the figure (signal 1). The transition to type B instability is also reflected in the AE as a transition to more discrete behavior (signals 2 and 3). The comparison of the MF spectra for signals 1 and 2 shows a larger spectrum width for type A behavior, although it corresponds to an earlier deformation stage. Finally, signal 3, selected at a late deformation stage, does not possess an MF spectrum, consistent with the above results for type B instability. Interestingly, the deterioration of correlations illustrated by the case of signal 3 concerns the coarse time scale (T = 3 s) of Fig. 6.12, but the correlations persist on a finer time scale.
Indeed, Figure 6.13 presents the results of an analysis in a much shorter interval (T = 0.1 s) corresponding to the reloading between two stress serrations. The approach is similar to that used in Fig. 6.11, with the only difference that the latter corresponds to a much lower driving velocity, for which the reloading time is about 2 s.

The characteristic waveforms observed at the lowest strain rate are presented in Fig. 6.14 (see also Chapter 5). The small stress fluctuations below ε_cr are accompanied by merging sequences of closely following events (Fig. 6.14(a)). The deep PLC serrations generate long waveforms with a millisecond duration (Fig. 6.14(b)), which apparently present a finer structure but are usually extracted as single events by the standard AE methods. The macroscopically smooth regions of the deformation curve between two successive stress drops usually display separate short events. However, sequences of events are also observed and are of interest for the analysis. The corresponding pattern is shown in Fig. 6.14(c). Figure 6.15 shows the MF spectra, which testify to the presence of multifractal scaling in all the signals presented in Fig. 6.14. Scaling behavior was found over δt intervals between a few microseconds and 0.1 ms for the first two waveforms and between 60 µs and 1 ms for the last signal. It seems important that the AE activity is weak at the low strain rate, and the above-described events are followed by periods without activity, when only noise is present. Consequently, increasing the analyzed time interval leads to the disappearance of scaling. Scaling occurs again for time intervals long enough to contain several stress serrations, as described in the previous paragraph. Thus, at this strain rate the AE is not globally multifractal: the detected correlations correspond either to a separate event (or cluster of events) or to rather long series of events.

Individual bursts with a short rise time, like the one displayed in Fig. 6.16(a), are usually observed during the reloading parts of the deformation curves. This type of signal was studied in detail in [28]. Its structure is obviously not multifractal because it involves two distinct scales corresponding to the burst itself and the background signal. It was shown that some deviations from the trivial scaling occur, seemingly because of the presence of some fine structure during the burst decay. However, no smooth MF spectra were found.

The partition functions calculated for more complex signals are illustrated in Fig. 6.17(a). They exhibit approximately straight segments at small time scales and gradually converge to unity slope for δt above several hundreds of microseconds. The corresponding singularity spectrum, presented in Fig. 6.18 (open circles), testifies to the multifractality of the considered signal but reveals strong imperfections even for q > 0. By selecting several values of the threshold U_tr and truncating the signal below the threshold, it was possible to uncover approximate scaling at larger scales, as illustrated in Fig. 6.17(b), at the cost of degrading the small-scale scaling. Linear segments were found in a similar δt-range for all trial values U_tr = 0.5 mV, 0.75 mV, and 1 mV, but the corresponding spectra vary considerably depending on U_tr (Fig. 6.18). Thus, a good approximation of the true MF spectrum was not found. Nevertheless, the obtained results testify with certainty to the presence of correlations in the treated signal.
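The singularity spectra f(α) referred to throughout this chapter can be obtained from the same box measures by the direct method of Chhabra and Jensen, and the sensitivity to the truncation threshold U_tr can then be checked by a simple scan. The sketch below is a hypothetical Python illustration under these assumptions (the direct method in place of whatever Legendre-transform route was actually used, and a synthetic signal in place of real data); the trial thresholds follow the values quoted above.

```python
import numpy as np

def singularity_spectrum(signal, box_sizes, qs):
    """Direct (Chhabra-Jensen) estimate of alpha(q) and f(q).

    alpha(q) and f(q) are the slopes, vs. log dt, of the q-weighted
    sums of log mu_i and log mu_i(q), where mu_i are the normalized
    box measures and mu_i(q) = mu_i**q / sum(mu_j**q).
    """
    s = np.asarray(signal, dtype=float)
    s = s / s.sum()
    logd = np.log(np.asarray(box_sizes, dtype=float))
    alpha, f = [], []
    for q in qs:
        num_a, num_f = [], []
        for m in box_sizes:
            nbox = len(s) // m
            mu = s[:nbox * m].reshape(nbox, m).sum(axis=1)
            mu = mu[mu > 0]
            w = mu ** q
            w /= w.sum()                       # normalized weights mu_i(q)
            num_a.append(np.sum(w * np.log(mu)))
            num_f.append(np.sum(w * np.log(w)))
        alpha.append(np.polyfit(logd, num_a, 1)[0])
        f.append(np.polyfit(logd, num_f, 1)[0])
    return np.array(alpha), np.array(f)

# Scan of the truncation threshold U_tr on a synthetic stand-in signal.
rng = np.random.default_rng(0)
u = rng.lognormal(mean=-8.0, sigma=1.5, size=2 ** 16)   # "AE-like" amplitudes
box_sizes = [2 ** k for k in range(4, 11)]
qs = np.arange(-3.0, 3.5, 0.5)
for utr in (0.5e-3, 0.75e-3, 1.0e-3):                   # volts, as in the text
    v = np.where(u >= utr, u, 0.0)                      # truncation below U_tr
    a, fa = singularity_spectrum(v, box_sizes, qs)
    print(f"U_tr = {utr*1e3:.2f} mV -> alpha in [{a.min():.2f}, {a.max():.2f}]")
```

As in Fig. 6.18, the spread of the recovered α-ranges across U_tr values gives a quick measure of how strongly the truncation distorts the estimated spectrum.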
The calculation performed separately for two time domains yields good spectra for the small-scale range (Fig. 6.19, open circles), proving the existence of short-range correlations in the signal structure. Multifractal behavior for positive q-values is found for the next time interval as well (solid circles), but the negative-q branches of the spectra are strongly distorted. The latter behavior suggests two hypotheses: (1) taking into account that the slopes differ only slightly for q > 0, the second linear segment may be due to a slow deviation from the scaling law established for the first segment, thus reflecting a decay of correlations on larger time scales; (2) there is a crossover between two scaling domains. The latter situation might reflect a change in the physical mechanism of correlation. It is also noteworthy that, similar to the low strain-rate case and in spite of the higher overall AE activity, the analysis in time intervals intermediate between the scale of the individual waveforms and that of the series of events often did not reveal scaling behavior.

Finally, under conditions of type A behavior at ε̇_a = 6 × 10⁻³ s⁻¹, the signals are essentially continuous, similar to that presented in Fig. 6.16(c), and yield similar MF spectra. Some of the treated signals also showed a crossover between two scaling laws, but this was the exception rather than the rule. In contrast to the cases of lower strain rates, multifractality was found on all time scales. This universality may indicate the formation of a globally correlated behavior, consistent with the conjecture of self-organization toward a critical state. However, this hypothesis needs very careful verification because, as follows from the above, whereas the detection of multifractality is relatively simple, a reliable quantitative determination of the spectra and their comparison could only be made in rare cases. Further investigation, perhaps using different methods of analysis, is needed.

Mg alloys

The behavior of the AE accompanying deformation of the MgZr alloys differs from that for AlMg, but it is also characterized by multifractal correlations. The salient features observed are similar for all samples. At the beginning of deformation the AE consists of bursts superimposed on a continuous background. After truncation of this background (U_tr equal to 3 mV was used), the calculations yield smooth spectra for the entire q-range (Fig. 6.20, open symbols), indicating that the main signal is characterized by a unique physical mechanism of correlation. At the same time, the relatively large span of α-values qualifies the signal as a complex heterogeneous structure. This initial AE activity is followed by a period of very intense non-stationary emission ascribed to the occurrence of primary twins. In this case the analysis was only performed over short time intervals, as described below, because of the non-stationary character of the AE signal. This stage is succeeded by a stage of dominant secondary twinning, giving rise to a stationary pattern of incessant burst-like AE, as shown at the bottom of Fig. 6.20. This dense pattern gives a narrower MF spectrum in the range of positive q's, which testifies to a more uniform AE activity (Fig. 6.20, solid symbols). However, the spectrum is obtained after truncation of a significant low-amplitude part of the signal, using a threshold of 50 mV, which leads to a deterioration of the negative-q branch. Finally, similar to the experiments on AlMg, the scaling fails at late stages of deformation.

As discussed in relation to Fig. 6.16(a), the MF method cannot be applied to signals combining two distinct scales. Taking into account that the spectrum in Fig. 6.20 was obtained after truncation of a large background, it is clear that the low-amplitude and high-amplitude parts of this signal do not belong to the same set of events.
Taking into account that the spectrum in Fig. 6.20 was obtained after truncation of a large background, it is clear that the low-amplitude and high-amplitude parts of this signal do not belong to the same set of events. However, this does not mean that the low-amplitude component is uncorrelated. Indeed, the treatment of short segments (about 1 ms) between intense bursts proved that the background signal also gives multifractal spectra of relatively small width (α min ≈ 0.8). Moreover, a similar check in the range of basal glide showed that the background, having in this case a much lower level, cannot be attributed to random noise either. It reveals scaling of the partition functions, resulting in a narrow MF spectrum (α min ≈ 0.9). It can be concluded on the whole that the deformation processes are essentially correlated, but these correlations cannot be described by unique scaling dependencies and may be due to various mechanisms.

On long time scales, the observed correlations are most probably governed by two competing processes: that of the generation of microstructural heterogeneities because of the intermittent localized deformation, and that of plastic relaxation of the resulting incompatibility stresses (which is also realized through motion of dislocations). It is likely that this picture should also apply to mesoscopic-scale processes during macroscopically smooth plastic flow of pure materials. It would be of interest to carry out a similar investigation in this case too, since only the statistical method has been applied to such data so far.

The case studied in the dissertation is peculiar in the sense that the deformation of DSA alloys is unstable. One consequence of the macroscopic instability is that it may give rise to characteristic scales, associated with the ideal case of cyclic relaxation oscillations, according to the N-shaped SRS function (see § 1.5.2), or, in other words, with the tendency to synchronization of dislocations. This is a possible reason why scaling is not found over the entire test duration for type C and type B behaviors, but there exist time windows of non-fractal behavior. Second, the plastic instability leads to the emergence of correlations in a range of short time scales (from microseconds to milliseconds), associated with the development of catastrophic processes of plastic instability. In this case it is natural to suggest that the correlations may be governed not only by the (fast) changes in the internal stress field but also by a direct impact of elastic waves, involving dislocations in a chain process. Actually, such a pattern corresponds well to the general statement that multifractal time sequences may be generated by cascade processes [162]. It is more surprising that the language of cascade processes also applies to long time scales. It can also be conjectured that there is no principal difference between these two cases, although at low and intermediate strain rates there is a "gap" in the observation of scaling between the short and long ranges. Indeed, this gap disappears when the strain rate is increased.

Another interesting observation for small-scale behavior concerns the crossover between two slopes illustrated in Fig. 6.19. It implies that an additional mechanism of correlations may act on small scales, for example, the mechanism of double cross-slip of dislocations, as was recently justified theoretically [137]. It is noteworthy that investigation of heterogeneous distributions of dislocation densities in ice single crystals [180] led to a similar conjecture on a possible effect of this mechanism on short-range spatial correlations in dislocation arrangements. This analogy raises the question of a relationship between the self-organized dynamics of dislocations and the resulting dislocation microstructure.
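To make the choice between a gradual decay of scaling and a genuine crossover less subjective, one can compare a single-slope fit of the log-log partition functions with the best two-segment fit. The sketch below (Python with NumPy; the function and variable names are hypothetical, and this is an illustration rather than the procedure used to produce Fig. 6.19) returns the break point minimizing the total squared residual; nearly equal slopes on both sides point to a single scaling law, well-separated slopes to a crossover:

    import numpy as np

    def detect_crossover(log_dt, log_z, min_pts=3):
        """Best two-segment linear fit of a log-log partition function.
        Returns (residual, break point, small-scale slope, large-scale slope)."""
        log_dt, log_z = map(np.asarray, (log_dt, log_z))
        best = None
        for k in range(min_pts, log_dt.size - min_pts):
            s1, r1 = np.polyfit(log_dt[:k], log_z[:k], 1, full=True)[:2]
            s2, r2 = np.polyfit(log_dt[k:], log_z[k:], 1, full=True)[:2]
            sse = (r1[0] if r1.size else 0.0) + (r2[0] if r2.size else 0.0)
            if best is None or sse < best[0]:
                best = (sse, log_dt[k], s1[0], s2[0])
        return best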
Effect of work hardening

Two competing factors seem to determine the evolution of the statistics and of the multifractal scaling of AE during deformation. On the one hand, work hardening creates obstacles to the motion of mobile dislocations and must cause deterioration of the correlations between deformation processes. On the other hand, it leads to homogenization of the internal stress field and will therefore promote synchronization of dislocation avalanches. The increase in the probability of low-amplitude AE events, which is reflected in the decrease in the exponent β of the power-law statistical distributions (Fig. 6.2), suggests stochastization of the dislocation dynamics. The stochastization might also be responsible for the failure of multifractal scaling at late stages of deformation. However, the conjecture of stochastization cannot account for the totality of observations. First, it would lead to a collapse of the multifractal behavior into the trivial scaling with unity fractal dimension, whereas experiment shows an increase in the width of MF spectra with deformation. A possible explanation of this observation evokes the tendency to synchronization of dislocations under conditions of the PLC effect. Indeed, it may lead to the emergence of distinct scales which would enhance the heterogeneity. Moreover, it may eventually cause the failure of scaling and, therefore, provides an alternative interpretation of the final non-fractal behavior. Second, attention should be drawn to the inverse trend observed for β at large strains in the tests at εa = 2 × 10 -5 s -1 . A similar change was reported in [Bougherira] for a region close to fracture and attributed to localization of the PLC bands because of the developing necking. Indeed, the localization enhances simultaneity of slip, which results in the superposition of AE events and a higher probability of large-amplitude bursts. In the present case this change was observed to take place before the beginning of necking. To explain this effect, it should be taken into account that as far as the slow loading provides conditions for efficient plastic relaxation of internal stresses, the resulting homogenization would promote the effect of synchronization. Finally, it is noteworthy that synchronization may also contribute to the tendency to an increased probability of the largest events, observed for type B and type C behavior (Fig. 6.1).
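For reference, the power-law index β quoted throughout (e.g., in Fig. 6.2) can be estimated from a logarithmically binned probability density function by least squares. The following is a minimal sketch, assuming a NumPy array a2 of squared event amplitudes; it mirrors the spirit, not the letter, of the processing used in this work:

    import numpy as np

    def power_law_index(a2, nbins=20):
        """Estimate beta of P(A^2) ~ (A^2)^beta from a log-binned PDF,
        together with the least-squares error of the slope."""
        a2 = np.asarray(a2, dtype=float)
        a2 = a2[a2 > 0]
        edges = np.logspace(np.log10(a2.min()), np.log10(a2.max()), nbins + 1)
        counts, _ = np.histogram(a2, bins=edges)
        density = counts / (np.diff(edges) * a2.size)   # normalized PDF
        centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
        keep = density > 0
        coef, cov = np.polyfit(np.log10(centers[keep]),
                               np.log10(density[keep]), 1, cov=True)
        return coef[0], np.sqrt(cov[0, 0])              # beta and its error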
Magnesium based alloys

The results obtained for MgZr alloys are qualitatively similar to those for AlMg. First, the slope of the power-law statistical distributions of AE amplitudes increases (β decreases) with deformation. Second, the multifractal scaling fails at large strains. At the same time, the width of the MF spectra does not increase during deformation, in contrast to the above-described behavior under the PLC effect. Actually, the data obtained for MgZr may be interpreted using the conjecture of stochastization of the deformation processes. This also concerns the behavior on short time scales, namely, the observation of a gradually increasing contribution of low-amplitude events, which are characterized by narrow MF spectra. In the case of AlMg the stochastization was explained by a decaying correlation between dislocations, caused by the increasing density of obstacles to slip. It is not obvious whether the same logic can be applied to twinning. However, as far as twinning may be considered from the viewpoint of the motion of twinning dislocations, it can be suggested that work hardening results in a reduction of the width of the twin nuclei, which are supposed to be the main AE sources during twin formation, while the further twin growth may not generate acoustic waves. This conjecture is indirectly confirmed by the observation of thinner twins in the secondary twinning systems, which start operating after some deformation [154]. However, post-mortem microscopy can only provide indirect proofs because it reveals the ultimate size of the twins. In situ investigations will perhaps be able to verify the discussed hypothesis. Besides, the growing density of twins may cause an increase in the contribution of dislocation glide, which would lead to efficient relaxation of internal stresses and to a decrease in the correlation of the deformation processes.

General conclusions and perspectives for future research

Since the 1980s, the application of the concepts of nonlinear dynamical systems to plasticity problems has revealed a variety of complex behaviors which cannot be predicted by the traditional theory of dislocations. In particular, using the AE technique, evidence was found for an intermittent collective motion of dislocations on a mesoscopic scale, which obeys power-law statistics characteristic of avalanche-like dynamics. Furthermore, the values of the power-law exponents were observed not to vary considerably for single crystals of various pure materials and to be similar to the critical exponents describing avalanche processes in other fields of physics, e.g., the motion of domain walls in magnetic materials or martensitic transformations. The observed universality of dynamics allows generalizing some of the findings obtained in the study of the collective motion of dislocations to other dynamical systems. At the same time, recent investigations on polycrystals and alloyed materials showed limitations of the concept of universality. In the present dissertation, the effect of the experimental conditions and microstructure on the statistical and fractal properties of AE was studied using alloys characterized by different mechanisms of deformation. Alongside the statistical investigation, the evolution of the acoustic signal during deformation was examined and the possible mechanisms of the observed changes were discussed. The main results of this research are briefly summarized below.

Power-law amplitude distributions of the AE accompanying plastic flow in AlMg and MgZr alloys were obtained under different experimental conditions. An important result of the statistical analysis is that the criteria used to extract individual AE events have little effect on the apparent statistics. A very weak effect was found for MgZr, consistent with literature data showing that materials deforming by twinning generate well-separated abrupt AE hits. A less favorable situation for the statistical analysis of AE appears under conditions of the PLC effect. The entirety of the results obtained in the dissertation shows that this difficulty is caused by the plastic instability being related to the tendency to synchronization of the dynamics of dislocations, which leads to localization of deformation on the macroscopic scale and merging of acoustic events on a mesoscopic scale. Nevertheless, even in this case the effect of the criteria used to identify AE events is not crucial and does not prevent detection of the changes in the power-law exponents when the experimental conditions are modified. It should be noted that this result is of general importance for the study of avalanche processes in various fields of science.
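To illustrate what these event-individualization criteria amount to in practice, the following is a minimal sketch (Python, assuming a NumPy voltage array; the parameter names follow Chap. 4, but the implementation itself is a hypothetical reading of how a voltage threshold U0, a hit definition time HDT and a hit lockout time HLT act, not the firmware of the acquisition systems used):

    import numpy as np

    def extract_events(u, fs, u0, hdt, hlt):
        """Individualize AE events in a continuous record u sampled at fs.
        Crossings of u0 closer than hdt belong to the same event; after an
        event closes, crossings within hlt are discarded. Returns a list
        of (start_index, end_index, peak_amplitude) tuples."""
        u = np.abs(np.asarray(u, dtype=float))
        max_gap = int(hdt * fs)
        lockout = int(hlt * fs)
        events, start, last, dead_until = [], None, None, -1
        for i in np.flatnonzero(u > u0):
            if i < dead_until:
                continue                   # crossing falls in the lockout window
            if start is None:
                start = last = i           # first crossing opens an event
            elif i - last <= max_gap:
                last = i                   # the event continues
            else:                          # gap exceeded: close the event
                events.append((start, last, u[start:last + 1].max()))
                dead_until = last + max_gap + lockout
                if i >= dead_until:
                    start = last = i       # a new event starts immediately
                else:
                    start = last = None    # wait for the lockout to expire
        if start is not None:
            events.append((start, last, u[start:last + 1].max()))
        return events

Rerunning such an extraction for several (u0, hdt, hlt) triples and recomputing the amplitude statistics reproduces, in principle, the robustness test described above.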
Statistical analysis of the amplitudes of AE events showed that the slope of the power-law distributions depends on the microstructure, in particular on the grain size and on the microstructural changes induced by deformation and related to the accumulation of dislocations and twins. More specifically, these results provided a direct proof of the important role of local stress concentrations at grain boundaries. Indeed, reduction of the grain size in both kinds of materials led to an increase in the probability of higher-amplitude AE events. This effect can be explained by an efficient transfer of plastic activity to neighboring grains, promoting the formation of powerful avalanches. On the contrary, forest hardening results in an increase in the probability of lower-amplitude AE events, which indicates a gradual weakening of correlations between dislocations, i.e., stochastization of the plastic deformation processes. The conjecture of stochastization is also confirmed by the observation of a progressive increase in the median frequency of the acoustic signal during deformation, usually associated with a decrease in the mean free path of the defects generating the AE, which was also observed for both kinds of alloys.
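The median frequency used as a tracer here is a simple functional of the power spectral density. A minimal sketch (Python, requiring NumPy and SciPy; the function names and the FFT segment length are illustrative):

    import numpy as np
    from scipy.signal import welch

    def median_frequency(u, fs):
        """Median frequency f_med: the frequency dividing the power
        spectral density into halves of equal energy."""
        f, psd = welch(u, fs=fs, nperseg=4096)
        cum = np.cumsum(psd)
        return f[np.searchsorted(cum, 0.5 * cum[-1])]

    def average_energy(u, fs):
        """Energy of the realization, up to an apparatus-dependent factor."""
        return np.trapz(np.square(u), dx=1.0 / fs)

Evaluating both quantities over consecutive windows of the continuous record gives evolution curves of the kind shown in Figs. 5.14 to 5.16 and 5.21.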
The data streaming technique allowed investigating the jerky flow of an AlMg alloy on different time scales and provided various evidence to support the hypothesis of a relation between the type C and type B PLC instability and the phenomenon of synchronization in dynamical systems. In particular, it allowed clarifying a contradiction existing in the literature on the PLC effect with regard to the discrete and continuous attributes of the accompanying AE. Namely, the observation of huge AE count-rate bursts at the instants of type B or type C stress serrations had led to a contraposition between discrete AE associated with the PLC instability and continuous AE during stable flow. Among other things, this contraposition contradicts the experimental fact that the amplitudes of AE events do not show peculiarities during stress serrations. The analysis of continuously recorded AE signals in the present work proved that the AE has a burst-like character during both stress serrations and smooth flow, with the amplitudes of the bursts varying in the same range. At the same time, the apparent behavior, discrete or continuous, of the AE accompanying stress serrations is found to depend on the scale of observation. When viewed on a traditionally used time scale, which does not resolve the structure of individual AE bursts, their amplitudes are indistinguishable during the periods of smooth plastic flow and at the instants of stress drops. However, the synchronization of the dynamics of dislocations at the moments of stress drops leads to the generation of AE events with millisecond durations, which exceed by more than an order of magnitude the durations of the hits observed during smooth flow.

The application of the multifractal analysis to continuous AE records permitted us to quantitatively characterize the correlations between deformation processes in various ranges of time scales. Under conditions of the PLC effect, the time correlations observed in a very wide range, from about a hundred milliseconds to hundreds of seconds, are most probably governed by internal stresses. Importantly, evidence was found that synchronization of dislocations leads to the emergence of a distinct time scale associated with abrupt stress drops (types B and C of behavior) and corresponding to a microsecond range. The observation of a crossover in the multifractal scaling suggests that in this case, besides the changes in the internal stress field (and perhaps a direct impact of elastic waves), another mechanism of correlations may be present, e.g., the transfer of plastic activity due to double cross-slip of dislocations.

Finally, in the case of MgZr alloys, the MF analysis revealed the existence of two ranges of amplitudes of AE events, which correspond to different correlations as revealed by the MF spectra. High-amplitude bursts are generally attributed to mechanical twinning and are usually the ones considered in AE studies of plastic deformation of such materials. However, we found that the low-amplitude events are by no means random noise, because they also show correlations leading to the emergence of MF spectra. The nature of the two different sets of AE events is not clear yet. However, as dislocation pile-ups are believed to yield lower-amplitude AE bursts than twins, we hope that such a quantitative analysis will help to distinguish the AE related to twins from that related to dislocations.

Perspectives for further research

The present dissertation is one of the first works aiming at a multiscale quantitative analysis of plastic deformation with the help of continuously recorded AE signals. The results obtained testify to the occurrence of complex temporal patterns during plastic deformation of fcc and hcp metals and raise various questions related to the description of the collective dynamics of dislocations. Some possible directions of future research are listed below:

- A detailed investigation of the role of cracking of second-phase particles, using both the analysis of the microstructure at different stages of deformation and the comparison of the statistical properties of AE with the results obtained for cracking in brittle materials;
- Investigation of the spatial aspect of collective processes with the aid of localization of AE sources not only in time but also in space, using two acoustic sensors, and reconstruction of the spatio-temporal patterns of deformation processes in various materials. For this purpose, it would be of interest to combine the AE technique with high-resolution local extensometry methods;
- Investigation of the initial stage of plastic deformation during the elastoplastic transition. This transition is almost unstudied experimentally from the viewpoint of collective phenomena. It can be expected that the AE technique would make it possible to assess not only the collective motion of dislocations but also the processes of their multiplication;
- Statistical analysis of the durations of AE events, which remained beyond the scope of this dissertation;
- Multifractal analysis of AE signals in pure single crystals, for which only data on the amplitude distributions are available in the literature. It should be underlined that the observation of power-law amplitude distributions does not provide sufficient criteria to verify or falsify the hypothesis of SOC, which is most often applied to interpret these data;
- Investigation of the AE accompanying deformation in compression. This might be especially interesting in the case of hcp materials, which present a strong asymmetry of plastic flow, e.g., different twinning systems operate in tension and in compression.
Figure 1.3: Barkhausen effect. Left: voltage signal measured in an annealed Fe73Co12B15 amorphous ribbon and its time integral showing a staircase magnetization curve. Right: distribution of Barkhausen jump durations and amplitudes for Si-Fe [64].
Figure 1.4: The velocity-weakening slip-stick friction law [60].
Figure 1.5: Examples of deformation curves for Al-3at.%Mg samples, recorded at room temperature for three different εa-values and corresponding to the three commonly distinguished types of the PLC effect [18]: type C (εa = 2 × 10 -5 s -1 ), type B (εa = 2 × 10 -4 s -1 ), and type A (εa = 6 × 10 -3 s -1 ). The arrows indicate the critical strain ε cr for the onset of plastic instability.
Figure 1.6: Variation of the critical strain as a function of strain rate for an Al-4.8%Mg alloy [104].
Figure 1.7: (a) A scheme explaining the occurrence of an N-shaped SRS as a result of competition between two microscopic mechanisms; (b) the resulting instability in the form of a saw-toothed deformation curve.
Figure 1.8: Distribution of the amplitudes of (a) large stress drops and (b) low-amplitude serrations observed in an AlMg alloy at εa = 2 × 10 -5 s -1 [18].
Figure 1.9: (a) Stress-strain curves and (b) the corresponding time dependences of the AE count rate for Al-1.5%Mg alloys deformed at room temperature and various strain rates: (1) εa = 2.67 × 10 -6 s -1 ; (2) εa = 1.33 × 10 -5 s -1 ; (3) εa = 5.33 × 10 -5 s -1 ; (4) εa = 1.33 × 10 -4 s -1 [124].
Figure 1.10: AE signal waveforms observed during (a) the Lüders phenomenon and (b) the PLC effect [124]. The total time interval length is 2.5 ms.
Figure 1.11: Rearrangement of the crystal lattice due to twinning. The twinning plane and the corresponding shear direction are denoted by K1 and η1.
Figure 1.14: Probability density functions of AE energies for the stage of easy basal glide (stars) and the stage manifesting twinning (circles). Cd single crystal.
Figure 1.15: The main types of AE waveforms observed in [82].
Figure 1.16: Time evolution of the averaged ratio between the peak amplitude and the square root of the energy of AE events recorded for a Cd single crystal [82].
Figure 2.1: Microstructure of Al-5wt.%Mg (left) and Al-3wt.%Mg (right) before solution treatment.
Figure 2.2: Microstructure of Al-5wt.%Mg (left) and Al-3wt.%Mg (right) after solution treatment.
Figure 2.3: Microstructure of Mg-0.35wt.%Zr (left) and Mg-0.04wt.%Zr (right) alloys.
Figure 2.4: Microstructure of the AZ31 alloy: structure with a fraction of finer grains (left), structure with coarse grains (right).
Figure 2.5: Scheme of the acoustic sensor disposition for an AlMg sample (left) and an AZ31 sample (right).
Figure 2.6: Scheme of event selection.
Figure 3.1: Example of an AE time series. The measure is distributed on the time axis; the red line shows the division into sections δt.
Figure 3.2: Signal decomposition scheme.
Figure 4.1: Examples of series of AE events which are either (a; red) extracted from the continuously recorded signal using the software developed in the present doctoral research or (b; blue) detected by the acoustic equipment during the test, using parameters preset before the measurement. In the latter case, the logarithmic values recorded in dB are converted to the linear scale in order to facilitate the comparison. (c) The corresponding statistical distributions. The red line is arbitrarily shifted to the left to avoid superposition of the curves. εa = 3.5 × 10 -4 s -1 .
Figure 4.2: Effect of the voltage threshold on the statistics of the amplitudes of AE events for the same specimen. 1 - U 0 = 9 mV, 2 - U 0 = 15.3 mV, 3 - U 0 = 30.5 mV, 4 - U 0 = 60.1 mV.
Figure 4.3: Effect of (a) U 0 , (b) HDT and (c) HLT settings on the power-law index β for MgZr specimens deformed at εa = 3.5 × 10 -4 s -1 . 1 and 2 - Mg0.35%Zr, 3 - Mg0.15%Zr, 4 - Mg0.04%Zr. (a) HLT = 100 µs; HDT = 800 µs except for case (2), where HDT = 50 µs. (b) HLT = 0 µs; U 0 = 17 mV except for case (2), where U 0 = 67 mV. (c) U 0 = 17 mV; HDT = 100 µs except for case (2), where HDT = 20 µs.
Figure 4.4: Illustration of the merging of AE events. The colours show the events detected for two choices of the noise threshold U 0 and the same choice of HDT, which is deliberately taken very large. Application of a large threshold U 01 gives four separate events (magenta) with relatively short durations. Decreasing U 0 leads to merging of consecutive events (blue). The number of detected events and the resulting stack of amplitude values do not change significantly, but the apparent durations increase drastically.
Figure 4.5: Effects of the U 0 , HDT and HLT parameters on the power-law index β for Al5Mg specimens deformed at (a), (d), (g): εa = 6 × 10 -3 s -1 ; (b), (e), (h): εa = 2 × 10 -4 s -1 ; (c), (f), (i): εa = 2 × 10 -5 s -1 . Circles and squares designate results for as-delivered and annealed specimens; colours designate the different choices of the parameters which are kept constant. For (a), (b) and (c): blue - HDT = 300 µs, HLT = 300 µs; red - HDT = 30 µs, HLT = 100 µs. For (d), (e) and (f): blue - U 0 = 1.85 mV, red - U 0 = 3.4 mV (HLT = 300 µs). For (g), (h) and (i): blue - HDT = 10 µs, red - HDT = 50 µs (U 0 = 1.85 mV).
Figure 4.7: Probability density function for the squared amplitude of AE events collected in the same strain range for three different samples deformed at a driving strain rate εa = 2 × 10 -5 s -1 .
Figure 5.1: Examples of deformation curves for three values of the imposed strain rate corresponding to different types of the PLC instability: 2 × 10 -5 s -1 (type C); 2 × 10 -4 s -1 (type B); 2 × 10 -2 s -1 (type A). The two upper curves are deliberately shifted along the ordinate axis to better discern the shape of the serrations.
Figure 5.2: Illustration of the onset of AE in Al5Mg specimens: portions of deformation curves (top) and the accompanying acoustic events (bottom). Colors: blue - εa = 6 × 10 -3 s -1 ; red - εa = 2 × 10 -4 s -1 ; black - εa = 2 × 10 -5 s -1 .
Figure 5.3: Examples of AE data streaming during macroscopically elastic parts of deformation curves. (a) εa = 6 × 10 -3 s -1 ; (b) εa = 2 × 10 -4 s -1 ; (c) εa = 2 × 10 -5 s -1 .
Figure 5.4: Examples of the Lüders plateau and the accompanying AE: (a) εa = 2 × 10 -2 s -1 ; (b) εa = 2 × 10 -4 s -1 ; (c) εa = 2 × 10 -5 s -1 .
Figure 5.5: Example of an acoustic event accompanying a stress drop during the Lüders plateau. εa = 2 × 10 -4 s -1 .
Figure 5.6: (a) Superposition of an entire load-time curve and the accompanying AE signal for an annealed AlMg specimen. The arrow indicates the critical strain ε cr for the onset of the PLC effect; (b) zoom on a time interval corresponding to the deformation stage below ε cr ; (c) zoom on a time interval beyond ε cr . εa = 2 × 10 -5 s -1 .
Figure 5.7: Example of a short isolated AE burst at two magnifications. The front of the signal is as short as approximately 2 µs, the main shock lasting about 30 µs. Some aftershocks can be seen after the main shock.
Figure 5.8: (a), (b) Examples of two composed AE events with large durations; (c), (d) respective magnifications of these events showing the virtually continuous character of the AE on a sufficiently fine time scale.
Figure 5.9: Probability density function for the squared amplitude of AE events for an annealed specimen (solid symbols, slope β ≈ -2.9) and an as-delivered specimen (open symbols, β ≈ -2.5). The events were collected in the time interval [2000 s; 4000 s], corresponding to the strain range ε < ε cr . εa = 2 × 10 -5 s -1 .
Figure 5.10: (a) Superposition of an entire force-time curve of type B and the accompanying AE signal; (b), (c) consecutive zoom-in steps presenting a sequence of serrations and individual stress drops. The arrows (1) and (2) correspond to Figs. 5.12(a) and (b), respectively. εa = 2 × 10 -4 s -1 .
Figure 5.12: Examples of AE waveforms during nucleation (a) and propagation (b) of type B deformation bands; (c), (d) magnifications of figures (a) and (b), respectively. Arrows indicate the sites magnified in the bottom figures. εa = 2 × 10 -4 s -1 .
Figure 5.13: Typical patterns of AE and their power spectral density (PSD) functions: (a) noise signal during idling of the deformation machine; (b) burst during the period separating two regular series of stress drops; (c) event during reloading between two successive stress drops; (d) portion of a large event recorded during a stress drop. εa = 2 × 10 -4 s -1 .
Figure 5.14: Superposition of a portion of the deformation curve with the time evolution of the average AE energy E and the median frequency f med . Annealed specimen; εa = 2 × 10 -5 s -1 .
Figure 5.15: (a) Evolution of the PSD function in terms of energy and median frequency; (b) close-up of a portion of the upper figure. Annealed specimen; εa = 2 × 10 -4 s -1 .
Figure 5.16: Same as Fig. 5.15 for an as-delivered specimen deformed at εa = 2 × 10 -2 s -1 .
Figure 5.17: SEM image of the fractured surface of a non-annealed AlMg specimen. The arrow shows a broken particle appearing in white color.
Figure 5.18: Example of a load vs. time curve and the simultaneously recorded acoustic response for Mg0.04%Zr deformed at εa = 3.5 × 10 -4 s -1 .
Figure 5.19: Microstructure of polycrystalline Mg samples after deformation. Left: Mg0.35%Zr sample with an average grain size <d> = 170 µm; right: Mg0.04%Zr sample with an average grain size <d> = 550 µm.
Figure 5.20: Examples of typical waveforms and their power spectral density functions for AE signals in MgZr alloys. (a) noise-like pattern; (b) and (c) type 1 pattern; (d) type 2 pattern.
Figure 5.21: Time evolution of the AE energy E (top) and median frequency f med (bottom) for Mg0.35%Zr deformed at εa = 3.5 × 10 -4 s -1 . The deep drop in both E and f med around 240 s is due to accidental noise pick-up.
Figure 5.22: Example of deformation curves and AE signals for the AZ31 alloy. (a) sample s1; (b) sample s2.
Figure 5.23: Microstructure of AZ31 specimens after the tension test. Left: sample s1; right: sample s2.
Figure 5.24: (a), (b) Examples of two AE events in the AZ31 alloy; (c), (d) magnifications of the same events; (e), (f) the corresponding Fourier spectra.
Figure 6.1: Probability density functions for the squared amplitude of AE events, for an annealed Al5%Mg specimen deformed at εa = 2 × 10 -4 s -1 . Open circles: events extracted in the time interval T = [100 s; 400 s]; solid circles: T = [1000 s; 1500 s].
Figure 6.2: Example of collation of deformation curves with the evolution of the power-law indexes β for the AE energy distribution. The rectangles designate the time intervals corresponding to statistically stationary series of AE events; their heights give the error of the β determination, as defined by the least-squares method. Data for two samples are displayed for each strain rate: blue - annealed specimens, red - as-delivered specimens. (a) εa = 2 × 10 -2 s -1 ; (b) εa = 6 × 10 -3 s -1 ; (c) εa = 2 × 10 -4 s -1 ; (d) εa = 2 × 10 -5 s -1 .
Figure 6.3: Load vs. time curves and variation of the power-law index β for the AE energy distribution. Blue - Mg0.04%Zr (grain size about 550 µm); red - Mg0.15%Zr (360 µm); magenta - Mg0.35%Zr (170 µm).
Figure 6.4: Examples of partition functions for experimental noise: (a) total signal; (b) after removal of the part of the signal below 1 mV. The red, blue, and black colors correspond to q = 10, 5, and 0.
Figure 6.5: Example of the effect of noise on the partition functions for a signal recorded during deformation of an AlMg sample and displaying multifractal features: (a) total signal; (b) after truncation of the part of the signal below 1 mV (the noise level in the tests on AlMg samples was about 1.5 mV).
Figure 6.7: Examples of MF spectra for three portions of an AE signal recorded at εa = 2 × 10 -5 s -1 : open circles - T = [2700 s; 2900 s], before ε cr , U > 1.7 mV; solid circles - T = [5300 s; 5600 s], before ε cr , U > 1.7 mV; open squares - T = [9650 s; 10050 s], beyond ε cr , U > 1.5 mV. The deformation curve and the AE signal are illustrated in Fig. 6.8 for the first and the third intervals.
Figure 6.8: Examples of portions of the deformation curve and the accompanying acoustic signal (only the positive half-waves of the oscillations are shown) for an annealed Al5Mg specimen deformed at εa = 2 × 10 -5 s -1 : (a) and (b) correspond to the regions before and beyond ε cr , respectively. Red lines mark the analyzed time intervals. U tr = 1.52 mV.
Figure 6.9: Examples of singularity spectra for series of event amplitudes: open circles - T = [2100 s; 3100 s]; solid circles - T = [5000 s; 6000 s]; open squares - T = [9200 s; 10200 s]. The event selection settings were as follows: U 0 = 6 mV; HDT = HLT = 300 µs.
Figure 6.10: Examples of (a) generalized dimension spectra D(q) and (b) singularity spectra f(α) for portions of an AE signal from an annealed Al5%Mg specimen deformed under type B conditions; εa = 2 × 10 -4 s -1 . Solid squares - T = [100 s; 260 s], before ε cr ; open circles - T = [520 s; 700 s], beyond ε cr ; open squares - T = [1300 s; 1600 s]. The solid circles illustrate MF spectra for series of amplitudes in the interval T = [520 s; 700 s]. U tr = 1.8 mV.
Figure 6.11: (a) Example of a burst-like AE signal accompanying reloading between two stress serrations (the same test as in Fig. 6.10); (b) the corresponding singularity spectrum obtained after cutting off the background noise below U tr = 1.5 mV. δt = [40 ms; 0.6 s].
Figure 6.12: Examples of AE signals and their singularity spectra for a specimen deformed at εa = 6 × 10 -3 s -1 .
Figure 6.13: Example of (a) an AE signal accompanying a sequence of reloading followed by a type B serration at εa = 6 × 10 -3 s -1 ; (b) the corresponding singularity spectrum, revealed after truncation of the background below U tr = 0.8 mV.
Figure 6.14: Examples of long AE events observed during deformation at εa = 2 × 10 -5 s -1 .
Figure 6.15: Singularity spectra of acoustic events exhibiting a complex temporal structure at εa = 2 × 10 -5 s -1 . The notations (a), (b) and (c) correspond to Fig. 6.14.
Figure 6.16: Examples of acoustic emission events observed during jerky flow of an annealed specimen deformed at εa = 2 × 10 -4 s -1 .
Figure 6.18: Singularity spectra f(α) for the AE signal in Fig. 6.16(b): open circles - entire signal; open squares - after truncation of the signal below U tr = 0.5 mV; solid circles - U tr = 0.75 mV; solid squares - U tr = 1 mV.
Figure 6.19: (a) Partition functions and (b) singularity spectra for the AE signal from Fig. 6.16(c). Open circles represent the spectrum for small time scales (3 µs ≤ δt < 30 µs); solid circles denote that for larger time scales (30 µs ≤ δt < 1 ms).
Figure 6.20: Examples of AE signals (left panel) and the corresponding singularity spectra f(α) (right panel) for a Mg0.35%Zr specimen. Open symbols: T = [40 s; 50 s], U tr = 3 mV; solid symbols: T = [120 s; 160 s], U tr = 50 mV. The scaling dependences were found for δt ∈ [0.3 s; 4 s] and δt ∈ [2 s; 16 s], respectively.
Figure 6.21: AE signals (left) and the corresponding MF spectra (right) in shorter time intervals for the same Mg0.35%Zr specimen. Solid symbols - T = [70.3 s; 70.31 s], δt ∈ [0.1 ms; 2 ms]; open symbols - T = [140.54 s; 140.56 s], δt ∈ [0.24 ms; 4 ms]. U tr = 0 mV.
Figure A.1: Example of a self-similar object: a snowflake under the microscope.
Notes

1. The AE measurements during deformation of the AZ31 samples were carried out with the aid of the DAKEL-CONTI-4 AE system developed by ZD RPETY-DAKEL Rpety (sampling frequency of 2 MHz; 4-channel data acquisition with a 12-bit A/D converter for each channel), which allows recording data in four channels with different amplification, in order to avoid overamplification and saturation of the measured voltage. The signals were pre-amplified by 26 dB; the total gain was varied between 26 and 80 dB.
2. It is noteworthy that recent investigations using optical methods bear evidence to another generic behavior observed in various materials [22, 52, 53]. Namely, it is shown that the intermittent strain localization may self-organize in space so as to give rise to a kind of excitation waves. However, this aspect goes beyond the scope of the present thesis.
3. It is noteworthy that unstable plastic flow under conditions of a constant loading rate was discovered by F. Savart and A. Masson as early as the 1830s [100, 101]. However, this experimental scheme is out of the scope of the present work because it leads to specimen fracture after only several strain jumps.
4. In the first tests on AlMg alloys, an older Physical Acoustics LOCAN 320 system was utilized, which did not provide data streaming but recorded series of acoustic events using preset parameters.
5. It should be reminded that the set-ups used in the tests on different materials did not have the same total gain, so that the absolute U-ranges may differ between figures for different materials.
6. The experimental observations explicitly confirming this expectation will be described in detail in Chap. 5.
7. This is a rather unusual mode of propagation of the Lüders band. However, its analysis goes beyond the scope of the doctoral research.
8. A detailed statistical analysis of AE for various strain rates will be presented in the next chapter.

Acknowledgments

The work reported in the present dissertation was carried out in the Laboratoire d'Etude des Microstructures et de Mécanique des Matériaux at the Université de Lorraine and in the Institute of Solid State Physics of the Russian Academy of Sciences during the years 2009-2012. First of all, I wish to thank my advisors Mikhail Lebedkin and Vladimir Gornakov for their inspiring and active guidance during these years, without which completing this thesis would not have been possible. I would also like to thank Tatiana Lebedkina, who has repeatedly expressed her encouragement and moral support. The work presented in this thesis was done in fruitful collaboration with Prof. František Chmelík and his colleagues, whom I wish to thank for their kind hospitality and interesting discussions during my two visits to Charles University in Prague. My sincere thanks also go to Prof. Joel Courbon and Dr. Benoit Devincre, who accepted the heavy task of reviewing this work. I would like to thank the rest of my doctoral committee, Prof. Claude Fressengeas and Dr. Nikolay Kobelev, for their encouragement, insightful comments, and hard questions. I also want to thank all the members of the LEM3 for interesting conversations and a good atmosphere. Finally, I am grateful to my family and especially to my wife Alena for encouraging and supporting me during this project.
It should also be noted from this point of view that in all three figures, (a)-(c), the β(U_0)-dependences corresponding to the higher HDT (blue symbols) generally lie above their homologs for the lower HDT (red symbols), in conformity with the discussed influence of the merging of AE events on the seeming β-value. The above data prove that the choice of the time parameters may be quite important, as it can entail considerable changes in the power-law exponents. The β(HDT)-dependences are displayed in the second row of Fig. 4.5. It is noteworthy that, as illustrated in Figs. 4.5(a) and (d) for the high strain rate, choosing a small U_0-value corresponding to the range of fast changes on the β(U_0)-dependence may lead to a considerable shift of the β(HDT)-curve with regard to its counterpart for a higher U_0-value. Such a shift is observed for the annealed sample (Fig. 4.5(d), squares). The difference is inessential, though, for the as-delivered sample. It is also weak for the low strain rate and practically negligible for the intermediate strain rate, which is characterized by weak β(U_0)-dependences. In spite of these quantitative changes, similar shapes of the curves are obtained for all three strain rates: first, β rapidly decreases by approximately 0.2 with increasing HDT from 10 µs to 40 µs, then it slowly grows. The initial fast fall may be absent, as …

Appendix A. Multifractal analysis

A.1 Fractals, fractal dimension

An accurate mathematical description of (multi)fractal analysis can be found in a number of books and reviews, e.g., [162]. The aim of this section is to provide a brief qualitative consideration highlighting the physical meaning of the concept of fractals and its usefulness for the characterization of complex structures and signals. The name "fractals" was proposed by Benoît Mandelbrot [START_REF] Mandelbrot | The Fractal Geometry of Nature[END_REF] to describe specific non-Euclidean geometrical constructions which present self-similar, or scale-invariant, patterns. Soon afterwards, it was understood that various natural objects manifesting complex spatial structures or evolution patterns possess the property of self-similarity in some range of scales and may be described using the concepts of fractal geometry. Self-similar objects are abundant in nature and are found in everyday life as well as in various fields of science, starting from the obvious self-similarity of the hierarchic structure of snowflakes (see Fig. A.1). However, the application of equation (A.1) results in a fractional positive D_0 value, which is called the fractal dimension and is usually denoted D_f. Indeed, choosing segments of length l = (1/3)^n to cover the Cantor set readily gives N(l) = 2^n and, therefore, D_f = -ln N(l)/ln l = ln 2/ln 3 ≈ 0.63. This value is greater than the (zero) topological dimension of the Cantor set and smaller than that of the embedding space (the initial 1D segment). This property leads to various peculiar features of fractals. In particular, it follows that L ~ l^(1-D_f), i.e., measuring the length of the Cantor set gives results depending on the scale of observation. A modification of the recursion rule will lead to the generation of a set with a different value of D_f. Thus, the fractal dimension allows not only checking whether the set is self-similar, but also characterizing it quantitatively.
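As an illustration of this definition, the following small sketch (ours, not part of the dissertation) estimates D_f for the triadic Cantor set by box counting; the construction depth and the range of scales are arbitrary choices.

import numpy as np

def cantor_midpoints(level):
    """Midpoints of the 2**level intervals remaining after `level`
    recursions of the triadic Cantor construction."""
    lefts = np.array([0.0])
    w = 1.0
    for _ in range(level):
        w /= 3.0
        lefts = np.concatenate([lefts, lefts + 2.0 * w])
    return lefts + w / 2.0

def box_counting_dimension(points, scales):
    """Estimate D_f as the slope of log N(l) versus log (1/l)."""
    x = np.log(1.0 / np.asarray(scales))
    y = [np.log(np.unique(np.floor(points / l)).size) for l in scales]
    slope, _ = np.polyfit(x, y, 1)
    return slope

points = cantor_midpoints(10)
scales = [3.0 ** -k for k in range(1, 9)]
print(box_counting_dimension(points, scales))  # about 0.6309 = ln 2 / ln 3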
At the same time, the fractal dimension is a global characteristic of the occupancy of the space (a hypercube is either occupied or not), which disregards local properties, such as the event amplitudes or the clustering of events, the latter leading to inhomogeneous filling of the space and varying occupancy of the hypercubes at a given scale of observation. This problem is attacked with the aid of the multifractal formalism described in the following section.

A.2 Multifractals

The description of natural objects usually requires more than one scaling index. The reason for this is that, besides the underlying fractal geometry characterized by the fractal dimension, they may carry a locally fluctuating physical property. For such objects, the dependence f(α), often called the singularity spectrum, is a continuous function. Obviously, f varies in a range between 0 and 1 for a 1D signal. The spectrum f(α) degrades to a single point in the case of a uniform fractal. The singularity spectrum makes clear the physical meaning of the multifractal formalism, but the above definitions do not provide a method to calculate it. A convenient numerical procedure was proposed in [183]. Using a normalized measure

\tilde{\mu}_i(l, q) = \mu_i^q / \sum_j \mu_j^q, where q ∈ Z,

the values of f(α) can be found from the following scaling relationships:

f(q) = \lim_{l \to 0} \frac{\sum_i \tilde{\mu}_i(l, q) \ln \tilde{\mu}_i(l, q)}{\ln l}, \qquad \alpha(q) = \lim_{l \to 0} \frac{\sum_i \tilde{\mu}_i(l, q) \ln \mu_i(l)}{\ln l}.

As presented in Section 1.3, there also exists an alternative description in terms of generalized dimensions D(q), which are found from the scaling laws

Z_q(l) \sim l^{(q-1)D(q)} \quad (q \neq 1), \qquad Z_1(l) \sim D(1) \ln l,

where the partition functions Z_q(l) are defined by the relationships

Z_q(l) = \sum_i \mu_i^q \quad (q \neq 1), \qquad Z_1(l) = \sum_i \mu_i \ln \mu_i \quad (q = 1). \qquad (A.6)

D(q) is constant for simple fractals, while a decreasing D(q) is a signature of a multifractal object. The two kinds of multifractal spectra, D(q) and f(α), are related to each other by the Legendre transform: f(α) = qα - τ(q) and α = dτ(q)/dq, where τ(q) = (q - 1)D(q).
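For illustration, the following sketch (ours, not from the dissertation) estimates the generalized dimensions D(q) of a deterministic binomial cascade, a standard multifractal toy measure, from the scaling of the partition functions in (A.6); the measure, the depth, and the q-range are arbitrary choices.

import numpy as np

def binomial_measure(levels, p=0.7):
    """Deterministic binomial cascade on [0, 1]: 2**levels boxes of size 2**-levels."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([p * mu, (1.0 - p) * mu])
    return mu

def generalized_dimensions(mu, qs):
    """Estimate D(q) from the scaling of Z_q(l), coarse-graining the finest
    dyadic partition into boxes of size l = 2**-k."""
    n = int(np.log2(mu.size))
    out = []
    for q in qs:
        xs, ys = [], []
        for k in range(2, n + 1):
            boxes = mu.reshape(2 ** k, -1).sum(axis=1)  # exact coarse measure
            xs.append(np.log(2.0 ** -k))
            if abs(q - 1.0) > 1e-9:
                ys.append(np.log(np.sum(boxes ** q)) / (q - 1.0))
            else:
                ys.append(np.sum(boxes * np.log(boxes)))  # Z_1 branch
        slope, _ = np.polyfit(xs, ys, 1)
        out.append(slope)
    return np.array(out)

mu = binomial_measure(14)
qs = np.linspace(-5, 5, 11)
print(generalized_dimensions(mu, qs))  # decreasing with q: a multifractal signature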
250,101
[ "788391" ]
[ "178323" ]
01749470
en
[ "info" ]
2024/03/05 22:32:07
2017
https://theses.hal.science/tel-01749470/file/SLAMA_Olfa.pdf
Conclusion & Perspectives

This thesis is the first to propose a fuzzy extension of the SPARQL language aimed at improving its expressiveness and allowing i) fuzzy RDF databases to be queried and ii) complex preferences to be expressed on data values and on the structure of the graph. The results presented in this manuscript are promising and show that the extra cost induced by the introduction of fuzzy search conditions remains limited and acceptable. Many perspectives can be envisaged. A first perspective concerns the extension of the FURQL and FUDGE languages with more sophisticated preferences, some of which call on notions from social network analysis (centrality or prestige of a node) or from graph theory (e.g., cliques). We then plan to study other, more complex types of quantified queries, for instance finding the authors who have published a paper in most of the renowned database journals (or, more generally, finding the x such that x is connected (by a path) to Q nodes of a given type T satisfying a condition C). The SURF and SUGAR prototypes can also be improved in order to make them more user-friendly, which raises the question of the elicitation of complex fuzzy queries. It is also worth studying how our framework could be applied to the management of data quality dimensions (e.g., accuracy, consistency, etc.), which are generally of a gradual nature.

Extended Abstract

The publication of open (and possibly linked) data on the Web is a rapidly expanding phenomenon. The study of models and languages for exploiting these data has therefore greatly intensified in recent years. The Resource Description Framework (RDF) has established itself as the standard data model, proposed by the W3C, for representing Semantic Web data [W3C, 2014]. RDF is a particular case of directed labeled graph, in which each labeled edge (representing a predicate) connects a subject to an object. SPARQL [START_REF] Prud | SPARQL query language for RDF[END_REF] is the standard query language recommended by the W3C for querying RDF data; it is based on graph pattern matching. The work we present aims to introduce more flexibility into the language (here, SPARQL) by offering the possibility of integrating user preferences into queries. The motivations for integrating user preferences into database queries are manifold. First, it seems desirable to offer users the possibility of expressing queries whose form is as close as possible to their formulation in natural language. Second, the introduction of user preferences in a query yields a ranking of the answers, by decreasing level of satisfaction, which is very useful when a large number of answers is obtained. Finally, where a classical Boolean query may return no answer at all, a preference-based version (which can be seen as a relaxed, hence less restrictive, version) may produce answers close to the ideal objects aimed at.
Numerous flexible approaches have been proposed in the relational database setting [START_REF] Bruno | Top-k selection queries over relational databases: Mapping strategies and performance evaluation[END_REF], Chomicki, 2002, Torlone and Ciaccia, 2002, Borzsony et al., 2001, Kieÿling, 2002, Tahani, 1977, Bosc and Pivert, 1995, Pivert and Bosc, 2012]. The literature on preference queries in the context of RDF databases is not as abundant, since this issue has only recently started to attract attention. Most existing approaches are direct adaptations of proposals made in the relational database context. In particular, they are limited to the expression of preferences on the values present in the nodes. In an RDF graph context, the need to express conditions on the structure of the data, and thus to extract relationships between resources in the RDF graph, has motivated work aiming to extend SPARQL and make it more expressive. In [START_REF] Kochut | SPARQLer: Extended SPARQL for semantic association discovery[END_REF], Anyanwu et al., 2007, Alkhateeb et al., 2009] and [START_REF] Pérez | nSPARQL: A navigational language for RDF[END_REF], the authors mainly extend SPARQL by allowing RDF to be queried with graph patterns involving regular expressions. In these approaches, however, the RDF graph and the search conditions remain crisp (Boolean). Indeed, the basic RDF model can natively represent only data of a Boolean nature, whereas the real-world concepts to be handled are often gradual. A more flexible language is therefore needed, one that takes into account RDF graphs in which the data are intrinsically described in a weighted way. The weights may represent gradual notions such as an intensity or a cost. For instance, a person may be the friend of another with a degree reflecting the intensity of the friendship relation. In order to represent such information, several authors have proposed fuzzy extensions of the RDF data model. However, the fuzzy extensions of SPARQL that can be found in the literature remain very limited in terms of preference expression. Our objective in this thesis is to define a much more expressive query language allowing i) both fuzzy and crisp RDF databases to be handled and ii) complex preferences to be expressed on the values of the nodes and on the structure of the graph. An example of such a query is: find the actors a such that most of the recent movies in which a played are highly rated and were recommended by a close friend of a. Our main contributions are described hereafter.

A fuzzy extension of SPARQL with fuzzy navigational capabilities

Our objective in the first contribution is to extend the SPARQL language so as to allow user preferences to be expressed in flexible queries over RDF data that may or may not convey gradual notions. First, we propose an extension of the notion of graph pattern, based on fuzzy set theory, called a fuzzy graph pattern. This extension builds on that of a SPARQL graph pattern introduced in [START_REF] Pérez | Semantics and complexity of SPARQL[END_REF] and [START_REF] Arenas | Querying semantic web data with SPARQL[END_REF].
In these works, the authors define a SPARQL graph pattern in a more traditional algebraic formalism than the one introduced in the official standard. A graph pattern is recursively defined as being either a graph containing variables, or a complex graph obtained by applying operations to graph patterns. Then, based on this notion of fuzzy graph pattern, we propose the FURQL language, which is more expressive than all existing proposals in the literature and which makes it possible:

1. to query fuzzy RDF databases ([START_REF] Mazzieri | A fuzzy semantics for semantic web languages[END_REF], Udrea et al., 2006, Mazzieri and Dragoni, 2008, Lv et al., 2008, Straccia, 2009, Udrea et al., 2010, Zimmermann et al., 2012]), whose common principle consists in attaching a degree in [0, 1] to each RDF triple, formalized either by embedding a fuzzy degree in each triple or by adding to the model a function associating a satisfaction degree with each triple (these two representations are semantically equivalent and have the same expressiveness). A degree attached to a triple (s, p, o) expresses the extent to which the object o satisfies the property p on the subject s;

2. to express fuzzy preferences. The FURQL language is based on fuzzy graph patterns, which allow fuzzy preferences to be expressed on the data of a fuzzy F-RDF graph via fuzzy conditions (e.g., the release year of a movie is recent) and on its structure via fuzzy regular expressions (e.g., the path between two friends must be short). Syntactically, FURQL allows fuzzy graph patterns in the where clause and fuzzy conditions in the filter clause. The syntax of a fuzzy graph expression is close to that of the property paths defined in SPARQL 1.1 [Harris and Seaborne, 2013], which exhibit nodes connected by paths expressed as regular expressions; here, a fuzzy property over the connected nodes may additionally be expressed. A path property involves classical notions of fuzzy graph theory [Rosenfeld, 2014]: the distance and the strength of the connection between two nodes, where the distance between two nodes is the length of the shortest path between them, and the strength of a path is defined as the weight of its weakest arc.

This work was published in the proceedings of the 25th IEEE International Conference on Fuzzy Systems (Fuzz-IEEE'16), Vancouver, Canada, 2016.

Fuzzy structural quantified queries in FURQL

The second contribution deals with fuzzy quantified queries addressed to a fuzzy RDF database. Fuzzy quantified queries have been thoroughly studied in a relational database context for their ability to express different types of imprecise information needs, see in particular [START_REF] Kacprzyk | FQUERY III +:a "human-consistent" database querying system based on fuzzy logic with linguistic quantiers[END_REF], Bosc et al., 1995], where they serve to express conditions on the attribute values of the stored objects. However, in the specific setting of RDF/SPARQL, the current approaches in the literature dealing with quantified queries consider crisp quantifiers only [START_REF] Bry | SPARQLog: SPARQL with rules and quantication[END_REF], Fan et al., 2016], over crisp RDF data.
We study a particular form of fuzzy structural quantified query and show how it can be expressed in the FURQL language defined previously. More precisely, we consider fuzzy quantified propositions of the type QB X are A over fuzzy RDF databases, where Q is the quantifier, represented by a fuzzy set and either relative (e.g., most) or absolute (e.g., at least three), B is a fuzzy condition, X is the set of nodes in the RDF graph, and A denotes a fuzzy condition. An example of such a fuzzy quantified proposition is: most of the recent albums are highly rated. In this example, Q corresponds to the relative fuzzy quantifier most, B is the fuzzy condition being recent, X corresponds to the set of albums present in the RDF graph, and A corresponds to the fuzzy condition being highly rated. Conceptually, the interpretation of such a fuzzy quantified proposition in a FURQL query can be based on one of the approaches of the literature proposed in [Zadeh, 1983, Yager, 1984, Yager, 1988]. Its evaluation involves three steps:

1. the compilation of the fuzzy quantified query R into a crisp query R′,
2. the interpretation of the SPARQL query R′,
3. the computation of the result of R (which is a fuzzy set) based on the result of R′.

This work was published in the proceedings of the 26th IEEE International Conference on Fuzzy Systems (Fuzz-IEEE'17), Naples, Italy, 2017.

Implementation and experimentation

In this thesis, we also address the implementation of the FURQL language. To this end, we considered two aspects: 1. the storage of fuzzy graphs (the extended data model we consider) and 2. the evaluation of FURQL queries. The first point can be solved by the use of the reification mechanism, which allows a fuzzy degree to be attached to a triple, a solution proposed in [Straccia, 2009]. Concerning the evaluation of FURQL queries, we developed a software layer, associated with a standard SPARQL engine, that handles FURQL queries. This software layer, called SURF, is mainly composed of the two following modules:

• In a preprocessing step, a FURQL query compiler module produces the query-dependent functions that allow the satisfaction degree of each returned answer to be computed, together with a classical SPARQL query, which is then sent to the SPARQL query engine to retrieve the information needed to compute the satisfaction degrees. The compilation uses the derivation principle introduced in [START_REF] Pivert | Fuzzy Preference Queries to Relational Databases[END_REF] in a relational database context, which consists in translating a fuzzy query into a crisp one.

• In a post-processing step, a fuzzy data processing module computes the satisfaction degree of each returned answer, ranks the answers, and filters them qualitatively if an alpha-cut was specified in the initial fuzzy query.

A proof of concept of the proposed approach, the SURF prototype, is available and can be downloaded at https://www-shaman.irisa.fr/furql/. To assess the performance of the SURF prototype, we carried out two series of experiments on fuzzy RDF databases of various sizes. The first experiments aim to measure the extra cost induced by the introduction of fuzziness into SPARQL, and the results obtained show the efficiency of our proposal.
The second series of experiments, which concerns fuzzy quantified queries, shows that the extra cost induced by the presence of a fuzzy quantifier in a query remains very limited, even in the case of complex queries.

Fuzzy structural quantified queries in FUDGE

At the end of this thesis, we place ourselves in a more general setting: that of graph databases [START_REF] Angles | Survey of graph database models[END_REF]. So far, only one approach in the literature, described in [START_REF] Castelltort | Fuzzy queries over NoSQL graph databases: Perspectives for extending the Cypher language[END_REF], considers fuzzy quantified queries in such an environment, and only in a rather limited way. A limitation of this approach lies in the fact that only the quantifier is fuzzy (whereas, in general, in a fuzzy quantified proposition of the form QB X are A, the predicates A and B may be fuzzy as well). We propose instead to study fuzzy quantified queries involving fuzzy predicates (in addition to the quantifier) over fuzzy graph databases. We consider the same type of fuzzy structural quantified query as the one considered in FURQL, but in a more general framework. This contribution is based on our work described in [Pivert et al., 2016e], in which we showed how such fuzzy quantified queries can be integrated into a language named FUDGE, previously defined in [Pivert et al., 2014a]. FUDGE is a fuzzy extension of Cypher [START_REF] Cypher | Cypher[END_REF], a declarative language for querying classical graph databases. An evaluation strategy based on a compilation mechanism that derives classical queries for accessing the data is also described. It relies on a software layer on top of the Neo4j system, named SUGAR, a first version of which, described in [Pivert et al., 2015, [START_REF] Pivert | SUGAR: A graph database fuzzy querying system[END_REF], efficiently evaluates FUDGE queries that do not contain quantified propositions. We therefore updated this software, a layer implementing the FUDGE language on top of the Neo4j DBMS, to allow it to evaluate FUDGE queries containing fuzzy quantified conditions. As a proof of concept of the proposed approach, the SUGAR prototype is available and can be downloaded at www-shaman.irisa.fr/fudge-prototype. In order to confirm the efficiency of the proposed approach, we performed some experiments with the SUGAR prototype using fuzzy graph databases of various sizes. The results obtained are promising and show that the cost of processing fuzzy quantification in a query is very limited compared with the cost of the overall evaluation.

Introduction

The relational model, introduced in 1970 by Edgar F. Codd [Codd, 1970], has been the most popular model for database management for many decades in academic, financial and commercial pursuits. In this framework, data can be stored and accessed thanks to a database management system like Oracle, Microsoft SQL Server, MySQL, etc. However, in recent decades, the traditional relational model has faced new challenges, mainly related to the development of the Internet. The data to be searched are more and more accessible on the Web (i.e., an open environment) and never stop increasing in volume and complexity. As a solution, an alternative family of models, called NoSQL (Not only Structured Query Language), came into existence and has attracted a lot of attention since 2007.
It aims to efficiently process and store huge, distributed, and unstructured data such as documents, e-mails, multimedia and social media [Leavitt, 2010, Robinson et al., 2015]. Among NoSQL database systems, we may find the famous Google BigTable [START_REF] Chang | Bigtable: A distributed storage system for structured data[END_REF], Facebook's Cassandra [START_REF] Lakshman | Cassandra: structured storage system on a P2P network[END_REF], Amazon's Dynamo [START_REF] Decandia | Dynamo: amazon's highly available key-value store[END_REF], LinkedIn's Project Voldemort, Oracle's BerkeleyDB [Berkeley, 2010] and, most notably, graph database systems (e.g., Neo4j (http://www.neo4j.org/), AllegroGraph, etc.), which are designed to store data in the form of a graph. In the last decade, there has been increased attention to graphs for representing social networks, web site link structures, and others. Recently, database research has witnessed much interest in the W3C's Resource Description Framework (RDF) [W3C, 2014], which is a particular case of directed labeled graph, in which each labeled edge (called a predicate) connects a subject to an object. It is considered to be the most appropriate knowledge representation language for representing, describing and storing information about resources available on the Web. This graph data model makes it possible to represent heterogeneous Web resources in a common and unified way, taking into consideration the semantic side of the information and the interconnectedness between entities. The SPARQL Protocol and RDF Query Language (SPARQL) [START_REF] Prud | SPARQL query language for RDF[END_REF] is the official W3C recommendation as an RDF query language. It plays the same role for the RDF data model as SQL does for the relational data model and provides basic functionalities (such as union and optional queries, value filtering and ordering of results, etc.) in order to query RDF data through graph patterns, i.e., RDF graphs containing variables. RDF data are usually composed of large heterogeneous data of varying quality, e.g., regarding relevance, trustworthiness, precision or timeliness (see [START_REF] Zaveri | Quality assessment for linked data: A survey[END_REF]). It is then necessary to offer convenient query languages that improve the usability of such data. A solution is to integrate user preferences into queries, which allows users to use their own vocabulary in order to express their preferences and retrieve data in a more flexible way. This idea may be illustrated by a real-life online movie booking scenario stated as follows: I want to find a recent movie with a high rating. In order to process such a query, fuzzy predicates such as recent and high, which model user preferences, have to be taken into account during database querying. These terms are vague, and their satisfaction is a question of degree rather than an all-or-nothing notion. The motivations for integrating user preferences into database queries are manifold [START_REF] Hadjali | Database preference queriesa possibilistic logic approach with symbolic priorities[END_REF]. First, it appears desirable to offer more expressive query languages that can be more faithful to what a user intends to say. Second, the introduction of preferences in queries provides a basis for rank-ordering the retrieved items, which is especially valuable in the case of large sets of items satisfying a query.
Third, a classical query may also have an empty set of answers, while a relaxed (and thus less restrictive) version of the query might be matched by some items. Introducing user preferences in queries has been a research topic for quite a long time already in the context of the relational database model. In the literature, one may find many flexible approaches suited to the relational data model: top-k queries [START_REF] Bruno | Top-k selection queries over relational databases: Mapping strategies and performance evaluation[END_REF], the winnow [Chomicki, 2002] and Best [START_REF] Torlone | Finding the best when it's a matter of preference[END_REF] operators, skyline queries [START_REF] Borzsony | The skyline operator[END_REF], Preference SQL [Kieÿling, 2002], as well as approaches based on fuzzy set theory [Tahani, 1977, Bosc and Pivert, 1995, Pivert and Bosc, 2012]. The literature about preference SPARQL queries to RDF databases is not as abundant, since this issue has started to attract attention only recently. Most of these approaches are straightforward adaptations of proposals made in the relational database context. In particular, they are limited to the expression of preferences over the values present in the nodes. In an RDF graph context, the need to query the structure of the data and to extract relationships between resources in the RDF graph has motivated research aimed at extending SPARQL and making it more expressive. In [START_REF] Kochut | SPARQLer: Extended SPARQL for semantic association discovery[END_REF], Anyanwu et al., 2007, Alkhateeb et al., 2009] and [START_REF] Pérez | nSPARQL: A navigational language for RDF[END_REF], the authors mainly extend SPARQL by allowing crisp RDF to be queried through graph patterns using regular expressions, but in these approaches, both the graph and the search conditions remain crisp (Boolean). However, in the real world, many notions are not of a Boolean nature but are rather gradual (as illustrated by the example above), so there is a need for a flexible SPARQL that takes into account RDF graphs where the data are described by intrinsic weights attached to edges or nodes. Such a weight may denote any gradual notion like a cost, a truth value, an intensity or a membership degree. For instance, relationships between entities may be gradual (e.g., close friend, highly recommends, etc.), and an associated degree may express their intensity. A statement involving a gradual relationship is, for instance, an artist recommends a movie with a degree of 0.8 (roughly, this movie is highly recommended by this artist). In order to represent such information, several authors have proposed fuzzy extensions of the RDF data model. However, the fuzzy extensions of SPARQL that can be found in the literature appear rather limited in terms of expressiveness of preferences. Our aim in this thesis is to define a much more expressive query language that i) deals with both crisp and fuzzy RDF graph databases and ii) supports the expression of complex preferences on the values of the nodes and on the structure of the graph. An example of such a query is: most of the recent movies recommended by an actor are highly rated and have been featured by a close friend of this actor.

Contributions

In this thesis, our main contributions are as follows.

1. We first propose a fuzzy extension of the SPARQL query language that improves its expressiveness and usability.
This extension, called FURQL, allows (1) a fuzzy RDF data model involving fuzzy relationships between entities (e.g., close friends) to be queried, and (2) fuzzy preferences to be expressed on the data (e.g., the release year of a movie is recent) and on the structure of the data graph (e.g., the path between two friends is required to be short). A prototype, called SURF, has been implemented, and some experiments have been performed which show that introducing fuzziness in SPARQL does not come at a high price.

2. We then focus on the notion of fuzzy quantified statements for their ability to express different types of imprecise and flexible information needs in a (fuzzy) RDF database context. We show how a particular type of fuzzy quantified structural query can be expressed in the FURQL language that we previously proposed, and we study its evaluation. SURF has been extended to efficiently process fuzzy quantified queries. It has been shown through experimental results that introducing fuzzy quantified statements into a SPARQL query entails a very small increase of the overall processing time.

3. In the same way as we did with FURQL, we deal with fuzzy quantified queries in a more general (fuzzy) graph database context (RDF being just a special case). We study the same type of fuzzy quantified structural query and show how it can be expressed in an extension of the Neo4j Cypher query language, namely FUDGE, previously proposed in [Pivert et al., 2014a]. A processing strategy based on a compilation mechanism that derives regular (non-fuzzy) queries for accessing the relevant data is also described. Then, some experimental results are reported which show that the extra cost induced by the fuzzy quantified nature of the queries remains very limited.

Structure of the thesis

The remainder of the thesis is organized as follows:

• Chapter 1 introduces background concepts and notations that are necessary to understand the rest of this thesis. We start with the RDF data model and SPARQL, the standard query language for RDF data, and briefly touch upon fuzzy set theory. Readers familiar with RDF, SPARQL and fuzzy set theory may want to skip this chapter.

• Chapter 2 discusses the state-of-the-art research related to this thesis. We give a classified overview of approaches from the literature that have been proposed to make SPARQL querying of RDF data more flexible. Then, we summarize the main features of these approaches and point out their limits.

• Chapter 3 is devoted to the presentation of our first contribution, which consists of a fuzzy extension of the SPARQL query language. First, we define the notion of a fuzzy RDF database. Second, we provide a formal syntax and semantics of FURQL, an extension of the SPARQL query language. To do so, we extend the concept of a SPARQL graph pattern, defined over a crisp RDF data model, into the concept of a fuzzy graph pattern that allows: (1) a fuzzy RDF data model to be queried, and (2) fuzzy preferences to be expressed on the data (through fuzzy conditions) and on the structure of the data graph (through fuzzy regular expressions).

• Chapter 4 is directly related to our second contribution, which addresses the issue of integrating the notion of fuzzy quantified statements into the FURQL language introduced in Chapter 3 for querying fuzzy RDF databases. We first recall important notions about fuzzy quantifiers and present different approaches from the literature for interpreting fuzzy quantified statements.
Then, we introduce the syntactic format for expressing a specific type of fuzzy quantified structural query in FURQL and show how such queries can be evaluated in an efficient way.

• Chapter 5 provides a detailed description of the architecture and implementation of the SURF prototype and reports experimental results related to the approaches described in the previous chapters. These results are promising and show the feasibility of the presented approaches.

• Chapter 6 concerns fuzzy quantified queries in a more general (fuzzy) graph database context. We start by recalling important notions about graph databases, fuzzy graph theory, fuzzy graph databases, and the FUDGE query language, which is a fuzzy extension of the Neo4j Cypher query language. We then discuss related work about fuzzy quantified statements in a graph database context and point out its limits. In this chapter, we again consider a particular type of fuzzy quantified structural query addressed to a fuzzy graph database. We define the syntax and semantics of an extension of the Cypher query language that makes it possible to express and interpret such queries in the FUDGE language. A query processing strategy based on the derivation of non-quantified fuzzy queries is also proposed, and some experiments are performed in order to study its performance.

• Finally, we conclude the thesis by summarizing our main contributions. Then, we discuss our perspectives for future research in order to improve and extend the proposed approach.

Publications & Software

Parts of this thesis have been published as i) regular papers at the IEEE International Conference on Fuzzy Systems [Pivert et al., 2016c] [Pivert et al., 2017], at the International Conference on Scalable Uncertainty Management [Pivert et al., 2016e], and at the ACM Symposium on Applied Computing [Pivert et al., 2016g], and ii) as posters and demos at the IEEE International Conference on Research Challenges in Information Science [Pivert et al., 2016a] [START_REF] Pivert | SUGAR: A graph database fuzzy querying system[END_REF]. Moreover, some works were published in French conferences [START_REF] Pivert | FURQL : une extension oue du langage SPARQL[END_REF]. The SURF and SUGAR prototypes are available and downloadable respectively at https://www-shaman.irisa.fr/furql/ and www-shaman.irisa.fr/fudge-prototype.

Introduction

In this chapter, we introduce some background notions that will be used throughout the thesis. Section 1.1 presents the RDF graph data model, Section 1.2 presents the SPARQL language used for querying this model, and Section 1.3 presents fuzzy set theory.

Let us consider an album as a resource of the Web. Characteristics may be attached to the album, like its title, its artist, its date or its tracks. In order to express such a characteristic, the RDF data model uses a statement in the form of an RDF triple. Definition 1 provides a more formal definition.

Definition 2 (RDF graph). An RDF graph is a finite set of triples of (U ∪ B) × U × (U ∪ L ∪ B), where U, B and L respectively denote the sets of URIs, blank nodes and literals. An RDF graph is said to be ground if it does not contain blank nodes. An RDF graph can be modeled by a directed labeled graph where, for each triple (s, p, o), the subject s and the object o are nodes, and the predicate p corresponds to an edge from the subject node to the object node. RDF is thus a graph-structured data model that makes it possible to exploit the basic notions of graph theory (such as node, edge, path, neighborhood, connectivity, distance, in-degree, out-degree, etc.).
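To illustrate the triple/graph duality just described, here is a minimal sketch (ours, not from the thesis), assuming the rdflib Python library; the namespace and the sample values are invented for the example, echoing the album data used in this chapter.

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")              # hypothetical namespace
DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
# Each add() inserts one (subject, predicate, object) triple, i.e., one labeled edge.
g.add((EX.beyonce, DC.title, Literal("Beyonce")))
g.add((EX.beyonce, DC.creator, EX.album1))
g.add((EX.album1, DC.title, Literal("Lemonade")))  # illustrative album title

# The same data seen as a graph: subjects/objects are nodes, predicates are edges.
for s, p, o in g:
    print(s, p, o)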
Moreover, RDF provides a schema definition language called RDF Schema (RDFS), which allows semantic deductive constraints to be specified on the subjects, properties and objects of an RDF graph. It makes it possible to declare objects and subjects as instances of given classes, and to state inclusion relations between classes and between properties. It is also possible to relate the domain and range of a property to classes. RDFS defines a set of reserved words (URIs) with a predefined semantics (the RDFS vocabulary). Among the RDFS vocabulary, we can mention the following:
• rdf:type: represents membership in a class;
• rdfs:subClassOf: represents the subclass relationship between classes;
• rdfs:subPropertyOf: represents the subproperty relationship between properties;
• rdfs:domain: represents the domain of properties;
• rdfs:range: represents the range of properties;
• rdfs:Class: represents the meta-class of classes;
• rdf:Property: represents the meta-class of properties;
• etc.

RDF also declares entailment rules that make it possible to derive new triples from the explicit triples appearing in an RDF graph. Such implicit triples are part of the RDF graph even if they do not explicitly appear in it, and they can be explicitly added to the graph. When all implicit triples are made explicit in the graph, the graph is said to be saturated. In this thesis, we only consider saturated RDF graphs. A database which stores RDF graphs, containing statements of the form (subject, predicate, object), is called a triple store (or simply an RDF database). A significant number of RDF databases have appeared over the last years, mainly divided into two categories [START_REF] Faye | A survey of RDF storage approaches[END_REF]:
• Native RDF stores implement their own database engine without reusing the storage and retrieval functionalities of other database management systems. Some examples of native RDF stores are AllegroGraph (commercial) and Apache Jena TDB (open-source).
• Non-native RDF stores use the storage and retrieval functionalities provided by other database management systems. Among the non-native RDF stores, we find Apache Jena SDB (open-source), which uses conventional relational databases, etc.

1.2 SPARQL

Example 4 [Basic Graph Pattern] The albums featuring the artist Beyonce, together with their names, are described by the following graph pattern.

?artist dc:creator ?album .
?artist dc:title "Beyonce" .
?album dc:title ?name .

According to the graph of Figure 1.1, two subgraphs isomorphic to this graph pattern may be found; they are given in Figure 1.3. A classical SPARQL query has the general form given in Listing 1.4, where the prefix clause abbreviates URIs (and will be omitted in the following examples), the select clause specifies which variables should be returned, the from clause defines the datasets to be queried, and the where clause contains the triples of the pattern searched for.

prefix ...                  # URI abbreviations
select ...                  # variables to return
from ...                    # datasets to query
where { ... }               # graph pattern
order by ..., distinct ..., limit ..., offset ..., projection ...   # modifiers

Listing 1.4: Skeleton of a SPARQL query

SPARQL also provides solution modifiers, which make it possible to modify the result set by applying classical operators like order by for ordering the result set in ascending (asc(.),
the default ordering) or descending (desc(.)) order, distinct for removing duplicate answers, limit for limiting the number of answers to a fixed number (chosen by the user), projection for choosing certain variables and eliminating others from the solutions, or offset for defining the position of the first returned answer. Finally, the output of a SELECT SPARQL query is a set of mappings of variables which match the patterns in the where clause.

• Optional graph pattern: uses the optional clause and allows for a partial matching of the query. The query tries to match a graph pattern and does not discard a candidate answer when some part of the optional patterns is not satisfied.
• Filter graph pattern: uses the filter clause followed by an expression to select answers according to some criteria. This expression may contain classical operators (e.g., =, +, *, -, /, <, >, ≥, ≤) and functions (e.g., isURI(?x), isLiteral(?x), isBlank(?x), regex(?x, "A.*")).

• SELECT query: is the equivalent of an SQL SELECT, used to return a set of variables from the query pattern using the select clause. For instance, all the aforementioned examples of SPARQL queries are of the SELECT form;
• CONSTRUCT query: returns a single RDF graph by creating new triples that satisfy a specific template from the query pattern.

Example 9 [CONSTRUCT query] Let us assume that, if a person X knows a person Y and this person Y knows a person Z, then we can say that X knows Z, or any person known by Y. We can create this relationship thanks to the following CONSTRUCT query.

construct { ?x foaf:knows ?z . }
where { ?x foaf:knows ?y . ?y foaf:knows ?z . }

Listing 1.10: An example of a CONSTRUCT query

• ASK query: is used to return a Boolean result: true if there exists at least one result that matches the query pattern, and false otherwise.

Example 10 [ASK query] The following query illustrates the use of the ASK query: Is Beyonce the name of the resource uri:beyonce?

ask { uri:beyonce dc:title "Beyonce" . }

Listing 1.11: An example of an ASK query

This query returns true, since the resource uri:beyonce is indeed the artist Beyonce.

• DESCRIBE query: is used to return a single RDF graph with information about the selected resources.

Example 11 [DESCRIBE query] An example of a DESCRIBE query is given in Listing 1.12.

describe uri:beyonce

Listing 1.12: An example of a DESCRIBE query

This query returns information about the resource <uri:beyonce>, such as its name, its age, its rating, its type, etc.

SPARQL 1.1 [Harris and Seaborne, 2013] is a more recent version of SPARQL supporting new features such as property paths, update functionalities, subqueries, negation, value assignments, aggregate functions, etc.

• Property paths: these correspond to the regular expressions tackled in [START_REF] Kochut | SPARQLer: Extended SPARQL for semantic association discovery[END_REF], Anyanwu et al., 2007, Pérez et al., 2008, Alkhateeb et al., 2009, Pérez et al., 2010].
• Assignments: the value of a complex expression can be added to a solution mapping by binding a new variable to the value of this expression. The variable can then be used in the query and can also be returned in the result. An assignment is of the form (expression as ?var).

Example 16 [Query with assignment] The following query aims to return the albums released less than 6 years before 2017.

1.3 Fuzzy Set Theory

In classical set theory, there are only two possible situations for an element: to belong or not to belong to a subset.
In 1965, Lotfi Zadeh [Zadeh, 1965] proposed to extend classical set theory by introducing the concept of gradual membership in order to model classes whose borders are not clear-cut. A fuzzy set is associated with a membership function which takes its values in the range of real numbers [0, 1]; that is to say, graduations are allowed and an element may belong more or less to a fuzzy subset. The theory of fuzzy sets has found advanced applications in artificial intelligence, computer science, decision theory, expert systems, robotics, etc. Fuzzy sets also play an important role in expressing fuzzy preference queries to relational databases [Dubois and Prade, 1997, Pivert and Bosc, 2012]. In the following, we first give a formal definition and some characteristics of the notion of a fuzzy set, and then detail the main operations over fuzzy sets.

1.3.1 Definition

Let X be a classical set of objects, called the universe, and x be any element of X. If A is a classical subset of X, the membership degree of every element can take only the extreme values 0 or 1. This corresponds to the classical definition of a characteristic function: µ_A(x) = 1 iff x ∈ A, 0 otherwise. When A is a fuzzy subset of X [Zadeh, 1965], it is denoted by A = {(x, µ_A(x)), x ∈ X} with µ_A : X → [0, 1], where µ_A(x) is a degree of membership (simply called degree in the following) that quantifies the membership grade of x in A. The closer the value of µ_A(x) is to 1, the more x belongs to A. Therefore, three situations may occur: µ_A(x) = 0, meaning that x does not belong to A at all; 0 < µ_A(x) < 1, meaning that x belongs partially to A; and µ_A(x) = 1, meaning that x belongs entirely to A. In practice, the membership function of A often has a trapezoidal shape (see Figure 1.4) and is expressed by the quadruplet (A - a, A, B, B + b). Over a finite universe, a fuzzy subset may be written in the form A = {µ_A(x_1)/x_1, ..., µ_A(x_n)/x_n}. It is worth mentioning that, in practice, the elements for which the degree equals 0 are omitted.

Remark 1. A fuzzy subset of X is called normal if there exists at least one element x ∈ X such that µ_A(x) = 1. Otherwise it is called subnormal.

1.3.2 Characteristics of a Fuzzy Set

Several notions can be used to describe a fuzzy set. Among them, we can cite the following.

1.3.2.1 Support, height and core

The support of a fuzzy subset A of the universal set X, denoted by supp(A), is the crisp set that contains all the elements of X having a strictly positive degree in A (i.e., which belong somewhat to A). More formally: supp(A) = {x | x ∈ X, µ_A(x) > 0}. The core of a fuzzy subset A, denoted by core(A), is the crisp subset of X containing all the elements with a degree equal to 1 (i.e., that completely belong to A). More formally: core(A) = {x | x ∈ X, µ_A(x) = 1}.

Remark 2. Note that, in the case of a crisp set, the support and the core collapse, since if x is somewhat in A it belongs (totally) to A.

Example 19 Let us consider two fuzzy subsets A and B of the set X, with X = {x_1, x_2, x_3, x_4, x_5}, A = {1/x_1, 0.3/x_2, 0.2/x_3, 0.8/x_4, 0/x_5} and B = {0.6/x_1, 0.9/x_2, 0.1/x_3, 0.3/x_4, 0.2/x_5}. The supports of the two subsets A and B are supp(A) = {x_1, x_2, x_3, x_4} and supp(B) = {x_1, x_2, x_3, x_4, x_5}. Their cores are core(A) = {x_1} and core(B) = ∅.

The height of a fuzzy subset A of X, denoted by hgt(A), is the largest degree attained by any element of X. More formally: hgt(A) = sup_{x ∈ X} µ_A(x).
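To fix ideas, here is a small illustrative sketch (ours, not from the thesis) computing these characteristics for a discrete fuzzy subset; it also anticipates the α-cut and the standard min/max operations defined in the following subsections. The trapezoidal helper and the sample degrees are assumptions chosen for illustration.

import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function given by the quadruplet (a, b, c, d):
    0 outside [a, d], 1 on [b, c], linear in between."""
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((d - x) / (d - c), 0.0, 1.0)
    return float(np.minimum(left, right))

# Discrete fuzzy subsets as dicts {element: degree}; degree-0 elements omitted.
A = {"x1": 1.0, "x2": 0.3, "x3": 0.2, "x4": 0.8}
B = {"x1": 0.6, "x2": 0.9, "x3": 0.1, "x4": 0.3, "x5": 0.2}

def support(F):          return {x for x, mu in F.items() if mu > 0}
def core(F):             return {x for x, mu in F.items() if mu == 1.0}
def height(F):           return max(F.values())
def alpha_cut(F, alpha): return {x for x, mu in F.items() if mu >= alpha}

# Standard fuzzy union (max) and intersection (min) over the common universe.
universe = set(A) | set(B)
union        = {x: max(A.get(x, 0.0), B.get(x, 0.0)) for x in universe}
intersection = {x: min(A.get(x, 0.0), B.get(x, 0.0)) for x in universe}

print(support(A), core(A), height(B), alpha_cut(A, 0.5))
print(trapezoid(2012, 2008, 2010, 2015, 2017))  # e.g., membership of a 'recent' year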
A fuzzy subset A is said to be normalized iff ∃x ∈ X, µ_A(x) = 1, which means that hgt(A) = 1.

1.3.2.2 α-cut

The ordinary set of elements x ∈ X having a membership degree greater than or equal to a threshold α ∈ ]0, 1] is the α-cut A_α of the fuzzy subset A, defined as: A_α = {x | x ∈ X, µ_A(x) ≥ α}.

Example 20 Let us consider X = {x_1, x_2, x_3} and a fuzzy subset A = {0.3/x_1 + 0.5/x_2 + 1/x_3}. The α-cuts of this subset include: A_0.5 = {x_2, x_3}, A_0.1 = {x_1, x_2, x_3}, A_1 = {x_3}.

The membership function of a fuzzy subset A can be expressed in terms of the characteristic functions of its α-cuts according to the following formula: µ_A(x) = sup_{α ∈ ]0,1]} min(α, µ_{A_α}(x)), where µ_{A_α}(x) = 1 iff x ∈ A_α, 0 otherwise. The strict (or strong) α-cut of A, denoted by A_ᾱ, contains all the elements of X that have a membership degree in A strictly greater than α: A_ᾱ = {x | x ∈ X, µ_A(x) > α}. The following properties hold:
• A_0̄ = supp(A),
• A_1 = core(A),
• α_1 > α_2 ⇒ A_{α_1} ⊆ A_{α_2}.
It can easily be checked that (A ∪ B)_α = A_α ∪ B_α and (A ∩ B)_α = A_α ∩ B_α.

1.3.3 Operations on Fuzzy Sets

Classical operations on crisp sets have been extended to fuzzy sets. These extensions coincide with the classical operations of set theory when the membership functions only take the values 0 or 1. The most commonly used operations are presented hereafter; the interested reader may refer to [Dubois, 1980].

1.3.3.1 Complementation

The complement of a fuzzy set A, denoted by Ā, is defined as: ∀x ∈ X, µ_Ā(x) = 1 - µ_A(x).

Example 21 Let us consider the fuzzy subset A = {1/x_1 + 0.3/x_2 + 0.2/x_3 + 0.8/x_4 + 0/x_5}. Its complement is Ā = {0/x_1 + 0.7/x_2 + 0.8/x_3 + 0.2/x_4 + 1/x_5}.

This operation is involutive, i.e., the complement of Ā is A itself.

1.3.3.2 Inclusion

Let us consider two fuzzy sets A and B defined on X. If, for any element x of X, x belongs to A less than to B or with the same membership, then A is said to be included in B (A ⊆ B). Formally, A ⊆ B if and only if ∀x ∈ X, µ_A(x) ≤ µ_B(x). When the inequality is strict, the inclusion is said to be strict and is denoted by A ⊂ B. Obviously, A = B iff A ⊆ B and B ⊆ A.

1.3.3.3 Intersection and union of fuzzy sets

The intersection of two fuzzy subsets A and B of the universe of discourse X, denoted by A ∩ B, is the fuzzy set given by µ_{A∩B}(x) = ⊤(µ_A(x), µ_B(x)), where ⊤ is a triangular norm (abbreviated t-norm), usually the minimum. The union of two fuzzy subsets A and B of the universe X, denoted by A ∪ B, is the fuzzy subset given by µ_{A∪B}(x) = ⊥(µ_A(x), µ_B(x)), where ⊥ is a triangular co-norm (abbreviated t-conorm), usually the maximum. T-norm and t-conorm operators satisfy the properties shown in Table 1:

Property       | t-norm (∧)                          | t-conorm (∨)
Identity       | 1 ∧ x = x                           | 0 ∨ x = x
Commutativity  | x ∧ y = y ∧ x                       | x ∨ y = y ∨ x
Associativity  | x ∧ (y ∧ z) = (x ∧ y) ∧ z           | x ∨ (y ∨ z) = (x ∨ y) ∨ z
Monotonicity   | v ≤ w and x ≤ y imply v ∧ x ≤ w ∧ y | v ≤ w and x ≤ y imply v ∨ x ≤ w ∨ y

With the fuzzy subsets A and B of Example 19, the intersection, taking ⊤ = min, is A ∩ B = {0.6/x_1, 0.3/x_2, 0.1/x_3, 0.3/x_4, 0/x_5}, and the union, taking ⊥ = max, is A ∪ B = {1/x_1, 0.9/x_2, 0.2/x_3, 0.8/x_4, 0.2/x_5}.

Remark 3. A t-norm is associated with a t-conorm (e.g., min/max), and together they satisfy De Morgan's laws. Later, compensatory operators, such as the averaging operators, have proved useful for aggregating fuzzy sets, especially in the context of decision making [Zimmermann, 2011]. Averaging operators for intersection (resp., union) are considered to be more optimistic (resp., pessimistic) than t-norms (resp., t-conorms).
Let us also mention many other operators that may be used for expressing different kinds of trade-offs, such as the weighted conjunction and disjunction [START_REF] Dubois | Weighted minimum and maximum operations in fuzzy set theory[END_REF], fuzzy quantifiers [START_REF] Fodor | Fuzzy-set theoretic operators and quantiers[END_REF] (which will be explained in Chapter 4), or the non-commutative connectives described in [START_REF] Bosc | On four noncommutative fuzzy connectives and their axiomatization[END_REF].

1.3.3.4 Difference between fuzzy sets

The difference between two fuzzy sets A and B is defined as ∀x ∈ X, µ_{A-B}(x) = ⊤(µ_A(x), µ_B̄(x)) = ⊤(µ_A(x), 1 - µ_B(x)), which leads to:
• µ_{A-B}(x) = min(µ_A(x), 1 - µ_B(x)) if ⊤(x, y) = min(x, y) is chosen,
• µ_{A-B}(x) = max(µ_A(x) - µ_B(x), 0) if ⊤(x, y) = max(x + y - 1, 0) is chosen.

Example 23 Consider the fuzzy sets A = {1/a, 0.3/b, 0.7/c, 0.2/e} and B = {0.3/a, 1/c, 1/d, 0.6/e}. Using the minimum for the conjunction, one obtains A - B = {0.7/a, 0.3/b, 0.2/e}, while A - B = {0.7/a, 0.3/b} with the other choice.

1.3.3.5 Cartesian product of fuzzy sets

The Cartesian product of two fuzzy sets A and B is defined as µ_{A×B}(x, y) = ⊤(µ_A(x), µ_B(y)), where ⊤ is a triangular norm.

Introduction

In recent years, with the rapid growth in size and complexity of RDF graphs, querying RDF data in a flexible, expressive and intelligent way has become a challenging problem. In the following, we present the contributions from the literature that make SPARQL querying of RDF data more flexible. Three categories of approaches may be associated with the following objectives: i) introducing user preferences into queries (which is directly related to this thesis), ii) relaxing user queries, and iii) computing an approximate matching of two RDF graphs. These approaches are discussed further in the following sections. A part of this chapter, related to introducing user preferences inside SPARQL queries, was published in the form of a survey in the proceedings of the 31st ACM Symposium on Applied Computing (SAC'16).

2.1 Preference Queries on RDF Data

Introducing user preferences into queries has been a research topic for quite a long time already in the context of the relational database model. The motivations for integrating preferences are manifold [START_REF] Hadjali | Database preference queriesa possibilistic logic approach with symbolic priorities[END_REF]. First, it has appeared desirable to offer more expressive query languages that can be more faithful to what a user intends to say. Second, the introduction of preferences in queries provides a basis for rank-ordering the retrieved items, which is especially valuable in the case of large sets of items satisfying a query. Third, a classical query may also have an empty set of answers, while a relaxed (and thus less restrictive) version of the query might be matched by some items. The literature about preference queries to RDF databases is not as abundant as in the relational context, since this issue has started to attract attention only recently. In this section, we present an overview of approaches that have been proposed to extend SPARQL by integrating user preferences in queries, followed by a classification of these approaches into two categories according to their qualitative or quantitative nature. We first present quantitative approaches (Subsection 2.1.1), then qualitative ones (Subsection 2.1.2).
2.1.1 Quantitative Approaches

The quantitative approaches share the following principle: each preference involved is defined via an atomic scoring function allowing a score (aka satisfaction degree) to be associated with each answer, making it possible to get a total ordering of the answers (i.e., tuple t_1 is preferred to tuple t_2 if the score of t_1 is higher than the score of t_2). Among the works which belong to the quantitative approaches, we may find those that are based on fuzzy set theory [Zadeh, 1965] and aim at a flexible extension of the query language (SPARQL) [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF], Wang et al., 2012, Ma et al., 2016]. We can also find those based on top-k querying of RDF data, which aim to extend the SPARQL language with top-k queries [START_REF] Bozzon | Towards and ecient SPARQL top-k query execution in virtual RDF stores[END_REF], Bozzon et al., 2012, Magliacane et al., 2012, Wang et al., 2015].

Fuzzy set-based approach

The standard version of the SPARQL query language supports only a few classical ways of retrieval, all based on Boolean logic. In order to meet user needs more effectively, [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF] proposes a syntactical fuzzy extension of SPARQL, called f-SPARQL (fuzzy SPARQL), which supports the expression of fuzzy conditions including (possibly compound) fuzzy terms, e.g., recent or young, and fuzzy operators, e.g., close to or at least, interpreted in a gradual manner. For instance, the membership function of the fuzzy operator at least Y may be defined as

µ_{at least Y}(u) = 0 if u ≤ w; (u - w)/(Y - w) if w < u < Y; 1 if u ≥ Y.

In f-SPARQL, a fuzzy filter condition takes the form [(?X θ FT) | (?X θ̃ Y)] [with α], where FT denotes a fuzzy term, θ denotes a classical operator (e.g., >, <, =, ≥, ≤, !=), θ̃ denotes a fuzzy operator (such as close to (around), at least, and at most), and Y is a string, an integer, or another type allowed in RDF. The optional parameter [with α] specifies the smallest acceptable membership degree in the interval [0, 1]. Each f-SPARQL query is prefixed by #FQ#.

Example 25 The fuzzy query retrieve the names of the recent albums by Beyonce is formulated by the f-SPARQL query of Listing 2.1.

#FQ# select ?name
where { ?artist dc:title "Beyonce".
?artist dc:creator ?album .
?album dc:title ?name.
?album dc:date ?date.
filter (?date = recent).}

Let us now assume that the database of the running example embeds a rating value for each album, through a property named dc:rate connecting an album (a URI resource) to a rating value (a label). When a user wants to express preferences on several attributes (e.g., date, rating, ...), he/she may assign an importance to every partial preference. If no importance is specified, it is implicitly assumed that the partial degrees are aggregated by means of the triangular norm minimum, which is commonly used in fuzzy logic to interpret the conjunction. In [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF], the authors propose to use a weighted mean in order to combine the partial scores coming from different atomic preference criteria:

score(A) = Σ_{i=1}^{n} µ(A_i) × w(F_i)   (2.2)

where F = (F_1, ..., F_n) is the set of filter conditions, A_i is the property concerned by F_i in the candidate answer A, µ(A_i) denotes the membership degree of the answer for F_i, and w(F_i) denotes the weight assigned to F_i, assuming that Σ_{i=1}^{n} w(F_i) = 1.

Example 26 Consider the query retrieve the name of the recent (importance 0.…
It is also possible to apply a threshold α_i to an atomic fuzzy condition F_i (this threshold is associated with the underlying attribute in the select clause). Then, an answer is qualified only if its membership degree relative to F_i is at least equal to α_i. Surprisingly, it does not seem that f-SPARQL makes it possible to specify a threshold on the global satisfaction degree. As in SQLf, introduced in [Bosc and Pivert, 1995], two types of queries exist in f-SPARQL depending on the type of calibration:
• a qualitative calibration in the case of flexible queries (#fq#) (see Listing 2.2);
• a quantitative calibration in the case of top-k flexible queries (#top-k fq# with k) (see Listing 2.3); then, only the top-k answers are returned.
The query type has to be declared before the select clause: #fq# (flexible query) in the first case, and #top-k fq# with k (top-k flexible query) when a quantitative threshold is used.

Example 27 Let us consider again the query from Example 26 and assume that a user only wants to get the 10 best answers.

The authors of [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF] exhibit a set of translation rules to convert f-SPARQL queries into Boolean ones, so as to be able to benefit from the existing implementations of standard SPARQL. The same principle was initially proposed in [START_REF] Bosc | SQLf query functionality on top of a regular relational database management system[END_REF] in the context of relational databases (under the name derivation principle) to process SQLf (fuzzy) queries. It aims to derive a crisp (SQL) query from an SQLf query involving a (global) qualitative threshold α, in order to return only the answers whose satisfaction degree is greater than or equal to α (i.e., the α-cut). Different types of translation rules are used in [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF] depending on the types of fuzzy terms (including simple atomic terms, e.g., recent, modified fuzzy terms, e.g., very recent, and compound fuzzy terms, e.g., popular and very recent) and fuzzy operators.

Some of the authors of [START_REF] Cheng | f-SPARQL: a exible extension of SPARQL[END_REF] proposed two variants of f-SPARQL. The first one, called fp-SPARQL (fuzzy and preference SPARQL) [START_REF] Wang | fp-Sparql: an RDF fuzzy retrieval mechanism supporting user preference[END_REF], involves an alternative way of (i) interpreting modified fuzzy terms (i.e., an atomic fuzzy term modified by an adverb such as extremely, rather, etc.), and (ii) interpreting compound fuzzy conditions where atomic predicates are assigned a priority. The second query language, called SPARQLf-p [START_REF] Ma | SPARQL queries on RDF with fuzzy constraints and preferences[END_REF], makes it possible to express i) more complex conditions including fuzzy relations (e.g., physical health is a fuzzy relation between height and weight) besides fuzzy terms and fuzzy operators, and ii) multidimensional user preferences.

From another point of view, the authors of [START_REF] Buche | Flexible querying of fuzzy RDF annotations using fuzzy conceptual graphs[END_REF], Buche et al., 2009, Buche et al., 2013] defined a flexible querying system using fuzzy RDF annotations based on the notions of similarity and imprecision. This approach is beyond the scope of our work since it does not explicitly propose an extension of the SPARQL query language.
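To make the derivation principle concrete, here is a minimal Python sketch (not taken from the cited works; the membership function of recent and its parameters are hypothetical) that derives the crisp filter corresponding to a global threshold α:

# Minimal sketch of the derivation principle: from a fuzzy condition
# "?date IS recent" with threshold alpha, derive a crisp SPARQL filter.
# The membership function of "recent" is a hypothetical increasing one:
# 0 up to 2010, 1 from 2015 on, linear in between.

def mu_recent(year, a=2010, b=2015):
    """Increasing (right-shoulder) membership function."""
    if year <= a:
        return 0.0
    if year >= b:
        return 1.0
    return (year - a) / (b - a)

def alpha_cut_bound(alpha, a=2010, b=2015):
    """Smallest year y such that mu_recent(y) >= alpha (0 < alpha <= 1)."""
    return a + alpha * (b - a)

def derive_crisp_filter(alpha):
    """Rewrite the fuzzy filter into a crisp SPARQL filter clause."""
    return f"filter (?date >= {alpha_cut_bound(alpha):.1f})"

if __name__ == "__main__":
    print(derive_crisp_filter(0.6))  # filter (?date >= 2013.0)
    print(mu_recent(2014))           # 0.8 (degree attached to the answer)

The crisp query returned by derive_crisp_filter can be processed by any standard SPARQL engine; the satisfaction degree of each returned answer is then computed a posteriori with mu_recent.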
Top-k-based approach

Top-k query approaches have been proposed for many years in a relational database context (cf. the survey of [START_REF] Ilyas | A survey of topk query processing techniques in relational database systems[END_REF]). They have been useful in several application areas such as system monitoring, information retrieval, multimedia databases, sensor networks, etc. Top-k queries [START_REF] Bruno | Top-k selection queries over relational databases: Mapping strategies and performance evaluation[END_REF] are a popular class of queries that return only the k most relevant (best) tuples according to the user's preferences. The attribute values of each tuple are associated with a value or score using a simple linear function. Top-k queries can be viewed as a special case of fuzzy queries limited to conditions of the form attribute ≈ constant. The distance between an attribute value and the ideal value is computed by means of a difference (absolute value), after a normalization step (which yields domain values between 0 and 1). The overall distance is calculated by aggregating the elementary distances using a function which can be the minimum, the sum, or the Euclidean distance. The steps of the computation are the following:
1. using k, and taking into account both the chosen aggregation function and statistics about the considered relation, a threshold α over the global distance is deduced;
2. a Boolean query computing the desired α-cut, or a superset of this α-cut, is determined;
3. this query is processed and the score attached to every element of the result is calculated;
4. if at least k tuples with a score greater than or equal to α have been obtained, the k best are returned to the user; otherwise the procedure is run again (from step 2) using a lower threshold α.

For efficiently processing top-k queries in the context of relational databases, several algorithms have been proposed (e.g., the Threshold Algorithm (TA) and the No Random Access Algorithm (NRA) [START_REF] Fagin | Optimal aggregation algorithms for middleware[END_REF], the Best Position Algorithm [START_REF] Akbarinia | Best position algorithms for top-k queries[END_REF], LPTA [START_REF] Das | Answering top-k queries using views[END_REF], LPTA+ [START_REF] Xie | Ecient top-k query answering using cached views[END_REF] and IV-Index [START_REF] Xie | Ecient top-k query answering using cached views[END_REF]).

In the Semantic Web community, top-k queries have raised a growing interest in the last few years [START_REF] Bozzon | Extending SPARQL algebra to support ecient evaluation of top-k SPARQL queries[END_REF], Magliacane et al., 2012, Dividino et al., 2012, Wang et al., 2015] for alleviating information overload problems. A major challenge is to make the processing of such queries efficient in a SPARQL-like setting. Classical top-k SPARQL queries can be expressed in SPARQL 1.1 by solution modifiers, such as the order by and limit clauses, that respectively order the result set and limit the number of results.

Example 28 The top-k SPARQL query of Listing 2.4 aims to find the best five offers of albums, ordered by a function of user ratings and offer date, where g_1 and g_2 are scoring functions.

select ?album ?offer (g 1 (?rating) + g 2 (?date) AS ?score)
where { ?album rdf:type mo:Album.
        ?album dc:rating ?rating.
        ?album dc:date ?date.
        ?album dc:hasOffers ?offer.
}
order by desc(?score) limit 5
Listing 2.4: Standard top-k SPARQL query

Naive query processing then relies on a materialize-then-sort procedure which entails an evaluation of all the candidate answers (i.e., those satisfying the condition in the where clause), followed by a computation of the ranking function for each of them, even if only a small number (typically, k = 5) of answers is requested. As a consequence, this processing strategy yields poor performance, especially in the case of a large number of answers matching the selected query. A smart processing should stop as soon as the top-k results are returned. In this respect, recent works have proposed solutions to optimize the evaluation of these queries. For instance, the authors of [START_REF] Bozzon | Towards and ecient SPARQL top-k query execution in virtual RDF stores[END_REF], Bozzon et al., 2012, Magliacane et al., 2012] introduced a SPARQL-RANK algebra, which is an extension of the SPARQL algebra [START_REF] Pérez | Semantics and complexity of SPARQL[END_REF], and an incremental rank-aware execution model for top-k SPARQL queries. This algebra enables splitting the scoring function, which may then be interleaved with other binary operators. The general objective is to derive an optimized query execution plan and to restrict the evaluation, as much as possible, to a limited number of answers. [START_REF] Bozzon | Towards and ecient SPARQL top-k query execution in virtual RDF stores[END_REF] first applied this algebra to the processing of top-k SPARQL queries addressed to virtual RDF datasets through query rewriting, using the rank-aware relational algebra presented in [START_REF] Li | RankSQL: query algebra and optimization for relational top-k queries[END_REF]. Then, [START_REF] Bozzon | Extending SPARQL algebra to support ecient evaluation of top-k SPARQL queries[END_REF] proposed a detailed version of the SPARQL-RANK algebra, which can be applied to both RDBMSs and native RDF datasets. They introduced a rank-aware operator denoted by ρ for evaluating a ranking criterion, and redefined unary and binary operators (such as selection (σ), join (⋈), union (∪), difference (\) and left outer join (⟕)) for processing the ranked set of mappings in this context. New algebraic equivalence laws involving this operator have also been proposed, among which pushing ρ over binary operators, splitting the criteria of a scoring function into a set of rank operators, and using the commutativity of ρ with itself. In [START_REF] Magliacane | Ecient execution of top-k SPARQL queries[END_REF], an incremental execution model for the SPARQL-RANK algebra is proposed and a rank-aware SPARQL query engine named ARQ-RANK, based on this algebra, is implemented. This engine significantly improves the performance of top-k queries. Later, in [START_REF] Zahmatkesh | Towards a top-k SPARQL query benchmark generator[END_REF], the authors presented top-k DBPSB, an extension of DBPSB (the DBpedia SPARQL benchmark) that makes it possible to automatically generate top-k queries from the queries of DBPSB and its datasets. According to [START_REF] Wang | Top-k queries on RDF graphs[END_REF], the SPARQL-RANK algebra proposed by [START_REF] Bozzon | Towards and ecient SPARQL top-k query execution in virtual RDF stores[END_REF], Bozzon et al., 2012, Magliacane et al., 2012] suffers from frequent unnecessary input and output in the rank-join operation, which is a drawback in the case of a large dataset.
To deal with this issue, they proposed in [START_REF] Wang | Top-k queries on RDF graphs[END_REF] a graph-exploration-based method for efficiently processing top-k queries on crisp RDF graphs. They introduced a novel tree index called an MS-tree. Based on this MS-tree, candidate entities are constructed (ranked and filtered) in an appropriate way and the process stops as soon as the top-k answers are generated. In the case of complex scoring functions, a cost-model-based optimization method is used in order to improve the query processing performance. An evaluation of the approach on both synthetic and real-world datasets, using SPARQL-RANK as a competitor, is presented in the paper. The experimental results confirm that the model proposed in [START_REF] Wang | Top-k queries on RDF graphs[END_REF] significantly outperforms the SPARQL-RANK approach in the case of large datasets to be cached in memory.

From an RDF data model point of view, in [START_REF] Dividino | Ranking RDF with provenance via preference aggregation[END_REF] the authors introduce an approach for top-k querying of RDF data annotated with provenance information. In this context, annotations may concern the origin, history, truthfulness, or validity of an RDF statement. An annotated RDF statement is considered as a tuple S = ⟨α : θ_1, ..., θ_n⟩, where α is an RDF triple and the θ_i are annotation values. For instance, statement #1 says that the artist TAL plays at Le Grand Rex, and that this information has been published on 03.02.17, has 0.9 as a certainty degree, and was picked up from the Web site www.legrandrex.com. The presence of multiple independent annotation dimensions in the query can induce different rankings of the answers. In this regard, [START_REF] Dividino | Ranking RDF with provenance via preference aggregation[END_REF] discusses the problem of preference aggregation (or judgment aggregation) and proposes a framework to aggregate all the annotation dimensions into a single joint ranking, using different aggregation methods. Finally, the authors of [START_REF] Dividino | Ranking RDF with provenance via preference aggregation[END_REF] perform top-k querying using these ranking methods in offline (i.e., available results) and online (i.e., aggregation of streaming data) settings.

Qualitative Approaches: Skyline-based Approaches

In the relational database domain, qualitative approaches to preference queries have attracted a large interest, in particular skyline queries [START_REF] Borzsony | The skyline operator[END_REF], which aim to filter an n-dimensional dataset S according to a set of user preference relations and return only the tuples of S that are not dominated in the sense of Pareto order. Note that these approaches only yield a partial order, contrary to the quantitative ones. Let us consider two tuples t = (u_1, ..., u_n) and t' = (u'_1, ..., u'_n) from S (reduced to the attributes on which a preference is expressed). The tuple t dominates (in the sense of Pareto order) the tuple t', denoted by t ≻ t', iff t is at least as good as t' in all dimensions and strictly better than t' in at least one dimension. This may be represented by:

t ≻ t' ⇔ ∀i ∈ {1, ..., n}, t.u_i ⪰_i t'.u_i and ∃j ∈ {1, ..., n} such that t.u_j ≻_j t'.u_j    (2.3)

Example 30 Let us assume that a user is looking for an album to listen to, and prefers an album which is recent and highly rated. For each preference (recent, resp. highly rated), the higher the date (resp. the rating) is, the more preferred the tuple is. Consider three albums A_1 (date 2015, rating 5.8), A_2 (date 2013, rating 4) and A_3 (date 2014, rating 8).
Album A_1 is more recent and has a higher rating than A_2, so A_1 dominates A_2. Nevertheless, A_1 does not dominate A_3 since A_1 is more recent than A_3 but has a worse rating. Hence, the skyline result is {A_1, A_3}.

In the literature, few works [START_REF] Siberski | Querying the semantic web with preferences[END_REF], Gueroussova et al., 2013] have dealt with the expression and evaluation of skyline queries in a SPARQL-like language. In [START_REF] Siberski | Querying the semantic web with preferences[END_REF], Siberski et al. extend SPARQL with a preferring clause in order to support the expression of multidimensional user preferences. This extension is based on the principle underlying skyline queries, i.e., it aims to find the non-dominated objects. The main syntax of this extension is as follows:

select ... where ... { filter (A or B) }
preferring P and P' ... and P*
Listing 2.5: Extension of SPARQL using Skyline

Two types of preferences may be distinguished: Boolean preferences, where the answers that meet the condition are favored over those which do not, and scoring preferences (introduced by the keywords highest or lowest), where the elements with a higher value are favored over those with a lower value, and vice versa.

Example 31 Let us consider a user with the following preferences: (P_1) prefer the artists rated excellent over the very good ones (Boolean preference), (P_2) prefer the artists whose concert takes place between 9pm and 1am (Boolean preference), and (P_3) prefer the artists whose concert takes place the latest (scoring preference), provided that it takes place between 9pm and 1am. In the absence of a skyline functionality, one would use the classical SPARQL query of Listing 2.6, which returns the artists satisfying the Boolean conditions, ordered according to the starting time of their concert. As we can see, such a query can be expressed in SPARQL with the clauses filter, order by and desc. However, the classical query of Listing 2.6 also returns dominated artists, though only at the bottom of the list of answers. In the extended SPARQL version of [START_REF] Siberski | Querying the semantic web with preferences[END_REF], lines 5 to 7 of Listing 2.6 are replaced by:

5 preferring
6 ?rating = ft:excellent
7 and
8 (?startingTime >= 9pm && ?endingTime <= 1am)
9 cascade highest(?startingTime)
Listing 2.7: Skyline extension of SPARQL [START_REF] Siberski | Querying the semantic web with preferences[END_REF]

Lines 1 to 4 represent the graph patterns and hard constraints. Line 6 corresponds to preference P_1, line 8 corresponds to P_2, and line 9 corresponds to P_3. The cascade clause in line 9 specifies that P_3 is evaluated if and only if two answers are equivalent with respect to P_2. The authors of [START_REF] Siberski | Querying the semantic web with preferences[END_REF] give the semantics and the implementation of the new constructs aimed at computing a skyline query with SPARQL, and extend the SPARQL implementation ARQ in order to process these types of queries. Nevertheless, no optimization aspects are discussed in the paper.
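As an aside, the Pareto dominance test of Formula (2.3), on which all these skyline proposals rest, is easy to state in code. A minimal Python sketch using the albums of Example 30 (the naive quadratic skyline below is for illustration only, not one of the optimized algorithms of the cited works):

# Minimal sketch of the Pareto dominance test (Formula 2.3) and of a
# naive skyline computation, on the albums of Example 30.
# Preferences: the higher the date and the rating, the better.

def dominates(t1, t2):
    """True iff t1 is at least as good as t2 on every dimension
    and strictly better on at least one (Pareto dominance)."""
    return all(a >= b for a, b in zip(t1, t2)) and \
           any(a > b for a, b in zip(t1, t2))

def skyline(tuples):
    """Naive O(n^2) skyline: keep the non-dominated tuples."""
    return [t for t in tuples
            if not any(dominates(u, t) for u in tuples if u is not t)]

albums = {"A1": (2015, 5.8), "A2": (2013, 4.0), "A3": (2014, 8.0)}
result = skyline(list(albums.values()))
print([name for name, t in albums.items() if t in result])  # ['A1', 'A3']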
The approach proposed in [START_REF] Gueroussova | SPARQL with qualitative and quantitative preferences[END_REF] is based on [START_REF] Siberski | Querying the semantic web with preferences[END_REF] and i) introduces user preferences in the filter clause, ii) replaces the cascade clause by a prior to clause in the spirit of Preference SQL [START_REF] Kieÿling | The preference SQL system-an overview[END_REF], and iii) introduces new comparators for specifying atomic preferences: between, around, more than, and less than. The authors of [START_REF] Gueroussova | SPARQL with qualitative and quantitative preferences[END_REF] show that PrefSPARQL preference queries can be expressed in SPARQL 1.0 and SPARQL 1.1, using an optional clause or features available in SPARQL 1.1 such as not exists. Nevertheless, they do not deal with implementation issues or query processing/optimization aspects.

In [START_REF] Rosati | Preference queries with ceteris paribus semantics for linked data[END_REF], the authors are also interested in qualitative preferences, but the preferences are represented by means of a CP-net. CP-nets (networks of conditional preferences) were suggested earlier by [START_REF] Boutilier | CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements[END_REF] for modeling relational database preference queries. A CP-net is a powerful graphical representation of statements that express conditional ceteris paribus (everything else being equal) preferences.

Example 34 Let us consider the following ceteris paribus preferences on clothes: i) P_1: black (b) jackets are preferred to white (w) jackets; ii) P_2: black (b) pants are preferred to white (w) pants; iii) P_3: if the jacket and the pants are of the same color, red (r) shirts are preferred to white (w) ones; otherwise, white shirts are preferred. These preferences are modeled by means of a CP-net.

The authors of [START_REF] Rosati | Preference queries with ceteris paribus semantics for linked data[END_REF] propose an RDF vocabulary to represent qualitative preference triples formulated under the ceteris paribus semantics. Inspired by [START_REF] Gueroussova | SPARQL with qualitative and quantitative preferences[END_REF], they present an algorithm to encode a CP-net into a standard SPARQL 1.1 query able to retrieve a ranked set of answers satisfying the user preferences. To the best of our knowledge, this work is the first attempt to translate the semantics of a CP-net into a SPARQL query. Let us also mention that there exist some works (cf. [START_REF] Chen | Eciently evaluating skyline queries on RDF databases[END_REF]) that propose methods for the optimization of skyline queries in an RDF data context.

Query Relaxation

Nowadays, the size and the complexity of databases (relational, semantic, etc.) increase over time at a sustained pace. In such circumstances, users querying these databases do not always have enough knowledge about their content and structure, so they may fail to formulate queries that return the expected result, or even to avoid empty answers.
In order to cope with these issues, some Semantic Web systems include a query relaxation process for triple-pattern queries (i.e., queries addressed to data represented in the RDF format), sharing the same principle as the cooperative querying systems [START_REF] Gaasterland | Relaxation as a platform for cooperative answering[END_REF] [Godfrey, 1997] [Chu et al., 1996] [Kleinberg, 1999] that operate on relational databases. These systems aim to automate the relaxation of user queries when the selection criteria in the query do not make it possible to obtain answers that meet the user's needs.

In a SPARQL/RDF setting, several works have been carried out [START_REF] Hurtado | A relaxed approach to RDF querying[END_REF], Hurtado et al., 2008, Huang et al., 2008, Poulovassilis and Wood, 2010, Calì et al., 2014, Frosini et al., 2017] that propose a relaxation framework for RDF data through RDFS entailment, using information provided by a given ontology (see Figure 2.4) and characterized by the RDFS inference rules of Table 2.2. These rules enable a generalization of the SPARQL query in order to loosen its conditions in case of an empty result.

Table 2.2: RDFS inference rules
Group A (Subproperty): (1) (a,sp,b)(b,sp,c) ⊢ (a,sp,c); (2) (a,sp,b)(x,a,y) ⊢ (x,b,y)
Group B (Subclass): (3) (a,sc,b)(b,sc,c) ⊢ (a,sc,c); (4) (a,sc,b)(x,type,a) ⊢ (x,type,b)
Group C (Typing): (5) (a,dom,c)(x,a,y) ⊢ (x,type,c); (6) (a,range,d)(x,a,y) ⊢ (y,type,d)
For instance, rule (4) states that if a is a subclass of b and x is an instance of a, then x is an instance of b.

[START_REF] Hurtado | A relaxed approach to RDF querying[END_REF], Hurtado et al., 2008] is interested in the relaxation of a conjunctive fragment of queries over RDF data (see e.g. [START_REF] Gutierrez | Foundations of semantic web databases[END_REF], Haase et al., 2004]). Queries of this type have the form H ← B, where B is a graph pattern (i.e., a set of triples including URIs, literals, blank nodes, and variables) and H = H_1, ..., H_n is a list of variables. Evaluation first finds matchings from the graph pattern (the body B of the query) to the data and, second, applies these matchings to the head H of the query in order to get the final answers. The authors propose to extend these conjunctive queries by introducing (one or several) relax clauses in the place of optional clauses. This extension is detailed in the following example.

Example 37 In order to avoid empty answers in some cases, a relaxation of some conditions using a specific ontology (see Figure 2.4) is needed. This ontology is represented in the form of an RDF graph based on an RDFS vocabulary that models documents, along with properties that model the different ways people contribute to them (e.g., as authors, editors, etc.). Thanks to this ontology, the following query may be generalized and relaxed: ?Z, ?Y ← {(?X, name, ?Z), relax {(?X, proceedingsEditorOf, ?Y)}}. The relax clause first returns the editors of conference proceedings. Then, one can automatically rewrite the triple (?X, proceedingsEditorOf, ?Y) into (?X, editorOf, ?Y) or (?X, contributorOf, ?Y), since proceedingsEditorOf is a subproperty of editorOf and editorOf is a subproperty of contributorOf, according to the ontology and the rules of Table 2.2. The relaxed query thus makes it possible to obtain people who are editors of a publication or, more generally, contributors to a document.
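The rewriting at work in Example 37 amounts to following the subproperty hierarchy of the ontology. Below is a minimal Python sketch of this predicate relaxation (rule (2) of Table 2.2); the SP dictionary is a hypothetical encoding of the sp edges of the ontology of Figure 2.4:

# Minimal sketch of predicate relaxation based on rule (2) of Table 2.2:
# a triple (?X, p, ?Y) may be relaxed into (?X, q, ?Y) whenever
# (p, sp, q) belongs to the closure of the ontology.

SP = {  # hypothetical subPropertyOf edges of the ontology
    "proceedingsEditorOf": ["editorOf"],
    "editorOf": ["contributorOf"],
}

def relaxations(prop):
    """Enumerate the increasingly general predicates reachable from prop."""
    frontier, seen = [prop], []
    while frontier:
        p = frontier.pop(0)
        for q in SP.get(p, []):
            if q not in seen:
                seen.append(q)
                frontier.append(q)
    return seen

for p in relaxations("proceedingsEditorOf"):
    print(f"(?X, {p}, ?Y)")
# (?X, editorOf, ?Y)
# (?X, contributorOf, ?Y)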
The query relaxation strategy involves two types of relaxations:
• simple relaxations, which do not use an ontology and include dropping triple patterns using the optional clause, replacing constants with variables in a triple pattern, etc.;
• more complex relaxations, which use an ontology and inference rules, and include:
– type relaxation, following rule (4) of Table 2.2;
– predicate-to-range relaxation: for example, using rule (6) of Table 2.2, the triple pattern (JohnRobert, editorOf, ?Y) can be relaxed into (?Y, type, Publication), since (editorOf, range, Publication) ∈ cl(O).

For the purpose of incrementally computing the relaxed answers to the query, an algorithm is presented which efficiently orders the answers according to how closely they meet the query conditions.

In [START_REF] Huang | Computing relaxed answers on RDF databases[END_REF], the authors point out that the approaches proposed in [START_REF] Hurtado | A relaxed approach to RDF querying[END_REF], Hurtado et al., 2008] may still be insufficient. They propose a new similarity measure that requires computing the semantic similarity between the relaxed query and the original one. This measure makes it possible to reduce the number of answers as much as possible (or to the desired cardinality) and, thus, to ensure the quality of the answers during the relaxation process. More recently, [START_REF] Reddy | Ecient approximate SPARQL querying of web of linked data[END_REF] proposed an extension of the work of [START_REF] Huang | Computing relaxed answers on RDF databases[END_REF] to the web of linked data, with an optimized query processing algorithm in which the relaxed queries are generated and answered on the fly during query execution (at run time). This differs from the approach of [START_REF] Huang | Computing relaxed answers on RDF databases[END_REF], which is dedicated to centralized RDF repositories only and generates multiple relaxed queries that are executed sequentially, one by one.

Another related work is that of [START_REF] Dolog | Robust query processing for personalized information access on the semantic web[END_REF], Dolog et al., 2009], where the authors present a user-centered process for automatically relaxing over-constrained RDF queries. The relaxation is carried out by rewriting rules that make patterns optional, replace values, patterns or predicates, and delete patterns or predicates. Background knowledge about the domain of interest and the preferences of the user are taken into account during query relaxation in order to refine and guide the process.

From a different perspective, [START_REF] Poulovassilis | Combining approximation and relaxation in semantic web path queries[END_REF] introduce a framework wherein relaxations and approximations of regular path queries are combined in order to allow a more flexible querying of RDF data when the user lacks knowledge of its structure. [START_REF] Frosini | Flexible query processing for SPARQL[END_REF], Calì et al., 2014] rely on the work of [START_REF] Poulovassilis | Combining approximation and relaxation in semantic web path queries[END_REF] and propose a formal syntax and semantics of SPARQL^AR, an extension of the query language SPARQL 1.1 (i.e., SPARQL with property path queries) with query approximation and query relaxation operators.
A relaxation operator relies on RDF inference rules and follows the principle presented in [START_REF] Hurtado | Query relaxation in RDF[END_REF], while the approximation operator transforms a regular expression pattern P into a new pattern P' using a set of edit operations (e.g., deletion, insertion and substitution). In [START_REF] Hogan | Towards fuzzy query-relaxation for RDF[END_REF], the authors base their relaxation framework on an industrial use-case from the European Aeronautic Defence and Space Company (EADS) that involves human observations, which are expressed in natural language and may be imprecise.

Some contributions also address the problem of guiding the user in relaxing his/her query. [START_REF] Elbassuoni | Query relaxation for entity-relationship search[END_REF] propose a novel approach for automated query relaxation over structured RDF data, based on statistical language models (LMs). This approach generates a set of relaxation candidates which can be derived from the RDF data and also from external sources like ontologies and textual documents. From another angle, [START_REF] Fokou | Cooperative techniques for SPARQL query relaxation in RDF databases[END_REF], Fokou et al., 2017, Fokou et al., 2017] take inspiration from prior works in relational databases [Godfrey, 1997, Pivert and [START_REF] Pivert | [END_REF] and recommendation systems [Jannach, 2009], and deal with the problem of explaining the failure of RDF queries in order to help the user relax his/her query. In [START_REF] Fokou | Endowing semantic query languages with advanced relaxation capabilities[END_REF], the authors initially proposed an extension of SWDB (Semantic Web Database) query languages with new operators that make it possible to control the relaxation process. These operators describe the relaxation by specifying the part of the query to relax and the relaxation technique to be used. Then, in [START_REF] Fokou | Cooperative techniques for SPARQL query relaxation in RDF databases[END_REF]] [Fokou et al., 2017], the authors addressed the problem of computing the Minimal Failing Subqueries (MFSs) and the Maximal Succeeding Subqueries (XSSs), i.e., the subqueries that return non-empty answers, which are used to find, on the one hand, the parts of an RDF query that are responsible for its failure and, on the other hand, relaxed queries that are guaranteed to return a non-empty result.

Approximate Matching

In the literature, the concept of graph isomorphism has been studied for a long time, cf. [Read and Corneil, 1977] [Fortin, 1996] [START_REF] Zhu | An approach for semantic search by matching RDF graphs[END_REF], De Virgilio et al., 2013, De Virgilio et al., 2015, Zheng et al., 2016]. [START_REF] Zhu | An approach for semantic search by matching RDF graphs[END_REF] introduces an approach for semantic search. The idea is to match RDF graphs in order to verify whether each candidate resource RDF graph matches the query RDF graph. The resource RDF graph is built from domain-specific Web information and the query RDF graph corresponds to a user query. To this end, a new semantic similarity measure between two RDF graphs, based on an ontology, is defined. This measure takes into account the similarities between edges as well as nodes. However, the approach proposed in [START_REF] Zhu | An approach for semantic search by matching RDF graphs[END_REF] only considers the similarity of nodes and edges in an RDF graph and ignores the structure formed by the nodes and the edges.
To deal with this issue, [De Virgilio et al., 2013, De Virgilio et al., 2015] propose an approach to approximate query answering in the context of large RDF datasets. This approach measures the similarity between a portion of a (large) graph representing an RDF dataset and a sub-graph representing a query, by applying substitutions and transformations to the paths of the latter. This operation is based on a scoring function that estimates the relevance of answers by taking into account two aspects: i) quality, which measures how well the retrieved paths align with the paths of the query, and ii) conformity, which measures how similar the combination of retrieved paths is to the combination of the paths of the query. A more recent work is [START_REF] Zheng | Semantic SPARQL similarity search over RDF knowledge graphs[END_REF], where the authors focus on the problem of semantic SPARQL similarity search over RDF knowledge graphs. They propose a metric, the semantic graph edit distance, in order to measure the similarity between RDF graphs. This metric considers graph structural, concept-level and semantic similarities in a uniform manner.

Conclusion

In this chapter, we reviewed several approaches from the literature that aim to query RDF data in a more expressive and flexible way, either by introducing fuzzy user preferences, relaxing some preferences, or applying approximate matching. A summary of these approaches is given in Table 2.3 (Main features of the preference query approaches).

A first observation concerns the limited expressiveness of these approaches. Indeed, all of them are straightforward adaptations of proposals made in the relational database context: they make it possible to express preferences on the values of the nodes, but not on the structure of the RDF graph (structural preferences may concern the strength of a path, the centrality of nodes, etc.). Some of the relaxation approaches (e.g., [START_REF] Poulovassilis | Combining approximation and relaxation in semantic web path queries[END_REF], [START_REF] Calì | Flexible querying for SPARQL[END_REF] and [START_REF] Frosini | Flexible query processing for SPARQL[END_REF]) and approximation approaches (e.g., [De Virgilio et al., 2013], [De Virgilio et al., 2015] and [START_REF] Zheng | Semantic SPARQL similarity search over RDF knowledge graphs[END_REF]) have considered this issue, but only in a crisp way.

A second important remark is that all of the approaches presented above deal with crisp RDF data only. However, we believe that there is a real need for a flexible SPARQL that takes into account RDF graphs where data is described by intrinsic weighted values attached to edges or nodes. Such a weight may denote any gradual notion, like a cost, a truth value, an intensity or a membership degree. The RDF data model should thus be enriched in order to represent gradual information, and new query languages should be defined. A first step in this direction is the approach proposed in [START_REF] Cedeño | R2DF framework for ranked path queries over weighted RDF graphs[END_REF], where the authors propose an extension of the RDF model embedding weighted edges and an extension of SPARQL supporting this feature, with new path predicates to express node reachability and the ability to express ranked queries. This approach takes the weights into account in order to rank the answers, but does not propose any means to express preferences in user queries.
To the best of our knowledge, none of the existing approaches aims to define a general-purpose flexible version of SPARQL for weighted RDF databases, which is the first contribution of this thesis.

Introduction

In the literature, several types of approaches have been devoted to extending the SPARQL language, among which: i) those that extend the search patterns with paths involving regular expressions, and ii) those that consider fuzzy conditions. However, to the best of our knowledge, no approach covers both aspects at the same time. In this chapter, we tackle this issue and propose the FURQL query language, a fuzzy extension of SPARQL that improves its expressiveness and usability. In the following, we first present the fuzzy RDF data model (Section 3.1) and then provide the syntax and the semantics of the FURQL query language (Section 3.2).

Fuzzy RDF (F-RDF) Graph

The classical crisp RDF model is only capable of representing Boolean notions, whereas real-world concepts are often of a vague or gradual nature. This is why several authors have proposed fuzzy extensions of the RDF model. Throughout the thesis, we consider the data model based on Definition 4 below, which synthesizes the existing fuzzy RDF models of the literature ([START_REF] Mazzieri | A fuzzy semantics for semantic web languages[END_REF], [START_REF] Udrea | Annotated RDF[END_REF], [START_REF] Mazzieri | A fuzzy semantics for the resource description framework[END_REF], [START_REF] Lv | Fuzzy RDF: A data model to represent fuzzy metadata[END_REF], [Straccia, 2009], [START_REF] Udrea | Annotated RDF[END_REF], [START_REF] Zimmermann | A general framework for representing, reasoning and querying with annotated semantic web data[END_REF]), whose common principle consists in adding a fuzzy degree to edges, modeled either by a value embedded in each triple or by a function associating a satisfaction degree with each triple, expressing the extent to which the fuzzy concept attached to the edge is satisfied.

Example 38 [Fuzzy RDF triple] The fuzzy RDF triple (⟨Beyonce, recommends, Euphoria⟩, 0.8) states that ⟨Beyonce, recommends, Euphoria⟩ is satisfied to the degree 0.8, which could be interpreted as Beyonce strongly recommends Euphoria.

Definition 4 (Fuzzy RDF (F-RDF) graph). An F-RDF graph is a tuple (T, ζ) such that (i) T is a finite set of triples of (U ∪ B) × U × (U ∪ L ∪ B), and (ii) ζ is a membership function on triples, ζ : T → [0, 1].

According to the classical semantics associated with fuzzy graphs, ζ(t) qualifies the intensity of the relationship involved in the statement t. Intuitively, ζ attaches fuzzy degrees to the edges of the graph. A value of 0 for ζ is equivalent to not belonging to the graph, while a value of 1 is equivalent to fully satisfying the associated concept. In the graph G_MB of Figure 3.1, such edges appear as classical ones, i.e., with no degree attached. The fuzzy degrees associated with edges are either given or calculated. A simple case is when each degree is based on a simple statistical notion; e.g., the intensity of friendship between two artists may be computed as the number of their common friends over the total number of friends of each artist.

Remark 5. In the same way as an RDF graph, an F-RDF graph is said to be ground if it contains no blank nodes. Such a graph may be ground at the outset or made ground, e.g., by a skolemization procedure. In the following, we only consider ground fuzzy RDF graphs.
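Definition 4 suggests a direct in-memory representation: a dictionary mapping each triple of T to its degree ζ. A minimal Python sketch (the triples and degrees below are illustrative):

# Minimal sketch of an F-RDF graph as per Definition 4: a set of triples T
# together with a membership function zeta mapping each triple to [0, 1].
# Triples with zeta = 1 behave as crisp edges.

graph = {
    ("Beyonce", "recommends", "Euphoria"): 0.8,  # fuzzy edge
    ("Shakira", "creator", "Euphoria"):    1.0,  # crisp edge
}

def zeta(triple):
    """Membership degree of a triple; 0 means 'not in the graph'."""
    return graph.get(triple, 0.0)

print(zeta(("Beyonce", "recommends", "Euphoria")))  # 0.8
print(zeta(("Beyonce", "creator", "Euphoria")))     # 0.0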
Example 39 [Fuzzy RDF graph] Figure 3.1 shows an example of a fuzzy RDF graph inspired by MusicBrainz. This graph, denoted by G_MB in the following, mainly contains artists and albums as nodes. For readability reasons, each URI node contains the value of its name instead of the URI itself. Literal values may be attached to a URI, like the age of an artist, or the release date or global rating of an album. The graph contains fuzzy relationships (e.g., friend, likes, recommends, memberOf) as well as crisp ones (e.g., creator, date, ...). We limit our example to some entities, including artists and albums, and omit URI prefixes to avoid overcrowding the figure. In order to create this graph, we started from a MusicBrainz non-fuzzy subgraph, in which every relationship between nodes was Boolean, and made it fuzzy by adding satisfaction degrees denoting the intensity of some relationships. Here, for instance:
• the degree associated with an edge of the form Art1 −friend→ Art2 is the proportion of common friends (i.e., a Boolean relationship) between Art1 and Art2 over the total number of friends of Art1;
• the degree associated with an edge of the form Art −memberOf→ Group is the number of years the artist stayed in this group over the number of years this group has existed;
• the degree associated with an edge of the form Art1 −likes→ Art2 is the number of albums by Art2 that Art1 has liked over the total number of albums by Art2;
• the degree associated with an edge of the form Art −recommends→ Alb is the number of stars given by Art to Alb over the maximum number of stars.

In the following, we rely on classical notions from fuzzy graph theory [Rosenfeid, 2014], namely the path, the distance and the strength (ST) of the connection between two nodes, respectively given in Definitions 5, 6 and 7.

Definition 5 (Path between two nodes). Let G be an F-RDF data graph. Classically, a path p in G corresponds to a possibly empty sequence of triples (t_1, ..., t_k, ..., t_n) such that {t_i | 1 ≤ i ≤ n} ⊆ G and, for all 1 ≤ k ≤ n − 1, the object of t_k is the subject of t_{k+1}. Given two nodes x and y, Paths(x, y) denotes the set of cycle-free paths in G connecting x to y, i.e., the set of paths of the form (t_1, ..., t_k, ..., t_n) such that x is the subject of t_1 and y is the object of t_n.

Definition 6 (Distance between two nodes). The distance between two nodes x and y is defined by

distance(x, y) = min_{p ∈ Paths(x,y)} length(p)    (3.1)

where length(p) is the length of a path p in a fuzzy graph [Rosenfeid, 2014], defined by

length(p) = Σ_{t ∈ p} 1/ζ(t).    (3.2)

The distance between two nodes is thus the length of the shortest path between these two nodes.

Remark 6. In a crisp RDF graph (where ζ(t) ∈ {0, 1}), which is a special case of a fuzzy RDF graph, the distance between two nodes x and y given in Definition 6 is still valid and expresses the number of edges between these nodes (which corresponds to the classical definition).

Definition 7 (Strength between two nodes). The strength between two nodes x and y is defined by

ST(x, y) = max_{p ∈ Paths(x,y)} ST_path(p)    (3.3)

where ST_path(p) is the strength of a path connecting x and y in a fuzzy graph [Rosenfeid, 2014], defined by

ST_path(p) = min({ζ(t) | t ∈ p}).    (3.4)

The strength of a path is thus the weight of its weakest edge.

Example 40 For one of the paths p between Beyonce and Euphoria, one has ST_path(p) = min(0.8, 0.3, 0.5, 1) = 0.3. Taking the maximum over all the paths connecting these two nodes, the strength between the pair of nodes (Beyonce, Euphoria) is ST(Beyonce, Euphoria) = 0.8. Here, the distance and the strength correspond to the same path, but it is of course not necessarily the case in general.
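Definitions 6 and 7 translate directly into code. A minimal Python sketch, under the simplifying assumption that Paths(x, y) has already been enumerated and that each path is represented by the list of the degrees ζ(t) of its edges:

# Minimal sketch of Definitions 6 and 7: length/distance and strength/ST
# over a fuzzy graph. A path is represented by the list of the degrees
# zeta(t) of its edges; Paths(x, y) is assumed to be already enumerated.

def length(path):
    """length(p) = sum over the edges of 1/zeta(t)  (Eq. 3.2)."""
    return sum(1.0 / z for z in path)

def distance(paths):
    """distance(x, y) = length of the shortest path  (Eq. 3.1)."""
    return min(length(p) for p in paths)

def st_path(path):
    """Strength of a path = weight of its weakest edge  (Eq. 3.4)."""
    return min(path)

def st(paths):
    """ST(x, y) = strength of the strongest path  (Eq. 3.3)."""
    return max(st_path(p) for p in paths)

# Two hypothetical paths between a pair of nodes:
paths = [[0.8, 0.3, 0.5, 1.0], [0.8]]
print(round(distance(paths), 2))  # 1.25 (the one-edge path)
print(st(paths))                  # 0.8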
Let us also mention that, besides introducing a degree of truth within an RDF triple in order to handle imprecise information, several other extensions of RDF have been proposed in the literature in order to deal with:
• time ([START_REF] Gutierrez | Introducing time into RDF[END_REF], [START_REF] Pugliese | Scaling RDF with time[END_REF], [START_REF] Tappolet | Applied temporal RDF: Ecient temporal querying of RDF data with SPARQL[END_REF]), to represent the validity period of the information carried by a triple, defined by an interval (containing the start and the end point of validity of this information);
• trust [Hartig, 2009], used in case of uncertainty about the trustworthiness of RDF triples; it is represented by a trust value which is either unknown or a value in the interval [−1, 1], where −1 encodes a full disbelief in the triple, 1 a total belief in the triple, and 0 signifies the lack of belief as well as the lack of disbelief;
• provenance [START_REF] Dividino | Querying for provenance, trust, uncertainty and other meta knowledge in RDF[END_REF], i.e., information attached to an RDF triple such as its origin/source (Where is this information from?), authorship (Who provided the information?), time (When was this information provided?), and others.

Moreover, [START_REF] Udrea | Annotated RDF[END_REF] and [START_REF] Zimmermann | A general framework for representing, reasoning and querying with annotated semantic web data[END_REF] provided a single theoretical framework to handle the aforementioned extensions, along with an extension of the RDF query language to deal with such a framework.

FUzzy RDF Query Language (FURQL)

In this section, we introduce the FURQL query language and formally study its expressiveness. FURQL is based on the notion of a fuzzy graph pattern, which is a fuzzy extension of the SPARQL graph pattern notion introduced in [START_REF] Pérez | Semantics and complexity of SPARQL[END_REF] and [START_REF] Arenas | Querying semantic web data with SPARQL[END_REF], which present it in a more traditional algebraic formalism than the official syntax does [W3C, 2014]. In the following, we redefine the associated syntax and semantics in order to introduce fuzzy preferences expressed over the F-RDF data model of Definition 4.

Syntax of FURQL

FURQL (FUzzy RDF Query Language) consists in extending SPARQL graph patterns into fuzzy graph patterns. Before formally introducing the syntax of FURQL, we first need to define the notion of a fuzzy graph pattern. A fuzzy graph pattern makes it possible to express fuzzy preferences on the entities of an F-RDF graph (through fuzzy conditions) and on the structure of the graph (through fuzzy regular expressions). It considers the following binary operators: and (SPARQL concatenation), union (SPARQL union), opt (SPARQL optional) and filter (SPARQL filter). We fully parenthesize expressions, making explicit the precedence and association of operators. In the following, we assume the existence of an infinite set V of variables such that V ∩ (U ∪ L) = ∅. By convention, we prefix the elements of V by a question mark symbol. Let us first define the notion of a fuzzy regular expression.

Definition 8 (Fuzzy regular expression). The set F of fuzzy regular expression patterns, defined over the set U of URIs, is recursively defined by:
• ε is a fuzzy regular expression of F;
• u ∈ U and '_' are fuzzy regular expressions of F;
• if A ∈ F and B ∈ F then A|B, A.B, A*, A^cond are fuzzy regular expressions of F.
Above, ε denotes the empty pattern, the character '_' denotes any element of U, A|B denotes alternative expressions, A.B denotes the concatenation of expressions, A* stands for the classical repetition of an expression (the Kleene closure), and A^cond denotes the paths satisfying the pattern A along with a condition cond, where cond is a Boolean combination of atomic formulas of the form sprop is Fterm, in which sprop is a structural property of the path defined by the expression and Fterm denotes a predefined or user-defined fuzzy term like short (see Figure 3.3). In the following, we limit the path structural properties to ST (see Definition 7) and distance (see Definition 6). Examples of conditions of this form are distance IS short and ST IS strong. We denote by A+ the classical shortcut for A.A*.

Definition 9 (Fuzzy graph pattern). Fuzzy graph patterns are recursively defined as follows:
• a fuzzy triple from (U ∪ V) × (U ∪ F ∪ V) × (U ∪ L ∪ V) is a fuzzy graph pattern;
• if P_1 and P_2 are fuzzy graph patterns then (P_1 and P_2), (P_1 union P_2) and (P_1 opt P_2) are fuzzy graph patterns.

Fuzzy connectives include of course the fuzzy conjunction ∧ (resp. disjunction ∨), usually interpreted by the triangular norm minimum (resp. maximum), but also many other operators that may be used for expressing different kinds of trade-offs, such as the weighted conjunction and disjunction [START_REF] Dubois | Weighted minimum and maximum operations in fuzzy set theory[END_REF], mean operators, fuzzy quantifiers [START_REF] Fodor | Fuzzy-set theoretic operators and quantiers[END_REF], or the non-commutative connectives described in [START_REF] Bosc | On four noncommutative fuzzy connectives and their axiomatization[END_REF]. Given a pattern P (which can be a fuzzy triple pattern in particular), var(P) denotes the set of variables occurring in P.

Example 42 [Fuzzy graph pattern] Let us consider P_rec_low, the fuzzy graph pattern defined by (?Art1, (friend+)^{distance is short}.creator, ?Alb) AND (?Art1, recommends, ?Alb) AND ((?Alb, rating, ?r) FILTER (?r IS low)).

Syntactically, FURQL naturally extends SPARQL by allowing the occurrence of fuzzy graph patterns (which may contain fuzzy regular expressions) in the where clause and the occurrence of fuzzy conditions in the filter clause. A fuzzy regular expression is close to a property path, as defined in SPARQL 1.1 [Harris and Seaborne, 2013], but involves a fuzzy structural property (e.g., distance or strength over fuzzy graphs). The general syntactic form of a FURQL query is given in Listing 3.1. A FURQL query may start with a list of define clauses that make it possible to define the fuzzy terms. If a fuzzy term fterm has a trapezoidal membership function defined by the quadruple (A−a, A, B, B+b), meaning that its support is [A−a, B+b] and its core [A, B], then the clause has the form define fterm as (A-a,A,B,B+b). If fterm has a decreasing membership function, like the term low of Figure 3.5, then the clause has the form definedesc fterm as (δ,γ) (there is a corresponding defineasc clause for increasing functions).

1 definedesc low as (2, 8)
2 definedesc short as (3, 5)
3 select ?art1 where {
4 { ?art1 (friend+ | distance is short) ?art2 .
5 ?art2 creator ?alb .
6 ?alb rating ?r .
7 ?art1 recommends ?alb . }
8 filter (?r is low)
9 } cut 0.4
Listing 3.2: A FURQL query containing P_rec_low

In this example, the definedesc clause of line 1 defines the fuzzy term low of Figure 3.5. The pattern from lines 3 to 8 is the fuzzy pattern of Example 42. Line 9 specifies an α-cut of the fuzzy pattern, retaining the answers whose satisfaction degree is greater than or equal to 0.4.
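The define clauses boil down to piecewise-linear membership functions. A minimal Python sketch of their semantics (the shoulder functions below reproduce the degrees used in the computations of the next section; the trapezoidal variant corresponds to the define clause):

# Minimal sketch of the semantics of the define clauses: piecewise-linear
# membership functions for fuzzy terms such as low and short.

def mu_desc(x, delta, gamma):
    """definedesc fterm as (delta, gamma): 1 up to delta, 0 beyond gamma."""
    if x <= delta:
        return 1.0
    if x >= gamma:
        return 0.0
    return (gamma - x) / (gamma - delta)

def mu_trapezoid(x, a, b, c, d):
    """define fterm as (a, b, c, d): support [a, d], core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

mu_low = lambda r: mu_desc(r, 2, 8)    # definedesc low as (2, 8)
mu_short = lambda l: mu_desc(l, 3, 5)  # definedesc short as (3, 5)

print(round(mu_low(4), 2))  # 0.67
print(mu_short(3.5))        # 0.75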
Semantics of FURQL

To define the semantics of FURQL, we need to define the semantics of fuzzy graph patterns. Intuitively, given an F-RDF data graph G, the semantics of a fuzzy graph pattern P defines a set of mappings, where each mapping (from var(P) to the URIs and literals of G) maps the pattern to an isomorphic subgraph of G. In order to introduce this concept, the notion of satisfaction of a fuzzy regular expression must first be defined.

Definition 10 (Fuzzy regular expression matching of a path). Let G = (T, ζ) be an F-RDF graph and exp be a fuzzy regular expression. Let p = (⟨s_1, p_1, o_1⟩, ..., ⟨s_n, p_n, o_n⟩) be a path of G. The degree to which p satisfies exp, denoted by sat_exp(p), is defined according to the form of exp:
• exp is of the form ε: if p is empty then sat_exp(p) = 1, else sat_exp(p) = 0;
• exp is of the form u ∈ U (resp. '_'): if p is reduced to a single triple whose predicate p_1 is u (resp. any u ∈ U) then sat_exp(p) = ζ(⟨s_1, p_1, o_1⟩), else 0;
• exp is of the form f_1.f_2: let P be the set of all pairs of paths (p_1, p_2) s.t. p is of the form p_1p_2; one has sat_exp(p) = max_P(min(sat_{f_1}(p_1), sat_{f_2}(p_2)));
• exp is of the form f_1 ∪ f_2: one has sat_exp(p) = max(sat_{f_1}(p), sat_{f_2}(p));
• exp is of the form f*: if p is the empty path then sat_exp(p) = 1; otherwise, denoting by P the set of all tuples of paths (p_1, ..., p_n) (n > 0) s.t. p is of the form p_1···p_n, one has sat_exp(p) = max_P(min_{i∈[1..n]}(sat_f(p_i)));
• exp is of the form f^Cond where Cond is a fuzzy condition: sat_exp(p) = min(sat_f(p), µ_Cond(p)), where µ_Cond(p) denotes the degree of satisfaction of Cond by p.
Again, not satisfying is equivalent to getting a degree of 0.

Definition 11 (Satisfaction of a fuzzy regular expression by a pair of nodes). Let G = (T, ζ) be an F-RDF graph and exp be a fuzzy regular expression. Let (x, y) be a pair of nodes of G. The statement "the pair (x, y) satisfies exp with a satisfaction degree of sat_exp(x, y)" is defined by sat_exp(x, y) = max_{p ∈ Paths(x,y)} sat_exp(p). Note that only cycle-free paths need to be considered in order to compute the satisfaction degree.

Example Expression f_1 = (friend+).creator is a fuzzy regular expression. A pair of nodes (x, y) satisfies f_1 if x has a friend-linked artist (an artist connected to x by a path made of friend edges) who created the album y. All of the pairs of nodes (EnriqueI, Justified), (Shakira, Butterfly), (Beyonce, Euphoria), (Rihanna, Euphoria), (MariahC, Euphoria) and (Shakira, Euphoria), illustrated in Figure 3.6, match f_1; for instance, sat_{f_1}(Shakira, Euphoria) = min(0.5, 1) = 0.5.

Expression f_2 = (friend+)^{distance is short}.creator is a fuzzy regular expression. A pair of nodes (x, z) satisfies f_2 if x has a close friend artist y who created an album z, close meaning that x is connected to y by a short path made of friend edges (the term short is defined in Figure 3.3). It is worth noticing that expression f_1 is a sub-expression of f_2, so we make use of the satisfaction degree of f_1, denoted by sat_{f_1}, in order to calculate the satisfaction degree of f_2, denoted by sat_{f_2}.
According to the paths depicted in Figure 3.6:
• the length of the pair (EnriqueI, Justified) is 1/0.4 + 1 = 3.5, µ_short(3.5) = 0.75 and sat_{f_1}(EnriqueI, Justified) = 0.4; hence sat_{f_2}(EnriqueI, Justified) = min(0.75, 0.4) = 0.4;
• the length of the pair (Shakira, Butterfly) is 1/0.7 + 1 = 2.4, µ_short(2.4) = 1 and sat_{f_1}(Shakira, Butterfly) = 0.7; hence sat_{f_2}(Shakira, Butterfly) = min(1, 0.7) = 0.7;
• the length of the pair (Beyonce, Euphoria) is 1/0.6 + 1/0.2 + 1 = 7.7, µ_short(7.7) = 0 and sat_{f_1}(Beyonce, Euphoria) = 0.3; hence sat_{f_2}(Beyonce, Euphoria) = min(0, 0.3) = 0;
• the length of the pair (Rihanna, Euphoria) is 1/0.2 + 1 = 6, µ_short(6) = 0 and sat_{f_1}(Rihanna, Euphoria) = 0.2; hence sat_{f_2}(Rihanna, Euphoria) = min(0, 0.2) = 0;
• the length of the pair (MariahC, Euphoria) is 1/0.3 + 1/0.5 + 1 = 6.33, µ_short(6.33) = 0 and sat_{f_1}(MariahC, Euphoria) = 0.3; hence sat_{f_2}(MariahC, Euphoria) = min(0, 0.3) = 0;
• the length of the pair (Shakira, Euphoria) is 1/0.5 + 1 = 3, µ_short(3) = 1 and sat_{f_1}(Shakira, Euphoria) = 0.5; hence sat_{f_2}(Shakira, Euphoria) = min(1, 0.5) = 0.5.

Then, the pairs of nodes (EnriqueI, Justified), (Shakira, Butterfly) and (Shakira, Euphoria) are the only ones that match the fuzzy regular expression f_2, with respective satisfaction degrees sat_{f_2}(EnriqueI, Justified) = 0.4, sat_{f_2}(Shakira, Butterfly) = 0.7 and sat_{f_2}(Shakira, Euphoria) = 0.5.

Expression f_3 = (friend+)^{ST>0.65}.creator is a fuzzy regular expression. A pair of nodes (x, y) satisfies f_3 if x has a friend artist (an artist connected to x by a path made of friend edges whose strength is higher than 0.65) who created the album y. It is worth noticing that expression f_1 is a sub-expression of f_3, so we make use of the satisfaction degree of f_1 (denoted by sat_{f_1}) in order to calculate the satisfaction degree of f_3 (sat_{f_3}). The pair of nodes (Shakira, Butterfly), shown in Figure 3.6, is the only one that matches the fuzzy regular expression f_3 with a non-zero satisfaction degree: the strength of the pair (Shakira, Butterfly) is min(0.7, 1) = 0.7 and sat_{f_1}(Shakira, Butterfly) = 0.7, hence sat_{f_3}(Shakira, Butterfly) = min(0.7, 0.7) = 0.7.

Let us now come to the definition of a mapping. A mapping is a pair (m, d) where m : V → (U ∪ L) and d ∈ [0, 1]. Intuitively, m maps the variables of a fuzzy graph pattern into a subgraph (answer) of the F-RDF data graph, and d denotes the satisfaction degree associated with the mapping (the more satisfactory the subgraph, the higher the satisfaction degree). The expression m(t), where t is a triple pattern, denotes the triple obtained by replacing each variable x of t by m(x). The domain of a mapping m, denoted by dom(m), is the subset of V on which m is defined. Two mappings m_1 and m_2 are compatible iff for all ?v ∈ dom(m_1) ∩ dom(m_2), one has m_1(?v) = m_2(?v). Intuitively, m_1 and m_2 are compatible if m_1 can be extended with m_2 to obtain a new mapping m_1 ⊕ m_2, and vice versa. Let M_1 and M_2 be two fuzzy sets of mappings. We define the join, union, difference and left outer join of M_1 with M_2 as follows:

Join M_1 ⋈ M_2 = {(m_1 ⊕ m_2, min(d_1, d_2)) | (m_1, d_1) ∈ M_1, (m_2, d_2) ∈ M_2 and m_1, m_2 are compatible}. The operation M_1 ⋈ M_2 denotes the set of new mappings that result from extending the mappings of M_1 with their compatible mappings of M_2.
Union M_1 ∪ M_2 = {(m, d) | (m, d) ∈ M_1 and m ∉ support(M_2)} ∪ {(m, d) | (m, d) ∈ M_2 and m ∉ support(M_1)} ∪ {(m, max(d_1, d_2)) | (m, d_1) ∈ M_1 and (m, d_2) ∈ M_2}. Here, ∪ corresponds to the classical set-theoretic union and support denotes the support of a fuzzy set of mappings, i.e., the set of all elements of the universe of discourse whose grade of membership is greater than zero.

Difference M_1 \ M_2 = {(m_1, d_1) | (m_1, d_1) ∈ M_1 and ∀(m_2, d_2) ∈ M_2, m_1 and m_2 are not compatible}. M_1 \ M_2 returns the set of mappings of M_1 that cannot be extended with any mapping of M_2.

Left outer join M_1 ⟕ M_2 = (M_1 ⋈ M_2) ∪ (M_1 \ M_2). A mapping m is in M_1 ⟕ M_2 if it is the extension of a mapping of M_1 with a compatible mapping of M_2, or if it is in M_1 and cannot be extended with any mapping of M_2.

Definition 12 (Mapping satisfying a fuzzy condition). Let m be a mapping and C be a fuzzy condition. Then m satisfies the fuzzy condition C with a satisfaction degree defined as follows, according to the form of C:
• C is of the form bound(?x): if ?x ∈ dom(m) then m satisfies the condition C with a degree of 1, else 0;
• C is of the form ?x θ c (where θ is a (possibly fuzzy) comparator and c is a constant): if ?x ∈ dom(m) then m satisfies the condition C with a degree of µ_θ(m(?x), c), else 0;
• C is of the form ?x θ ?y: if ?x ∈ dom(m) and ?y ∈ dom(m), then m satisfies the condition C with a degree of µ_θ(m(?x), m(?y)), else 0;
• C is of the form ?x is Fterm: if ?x ∈ dom(m) then m satisfies the condition C to the degree µ_Fterm(m(?x)) (which can be 0);
• C is of the form ¬C_1 or C_1 ⊗ C_2, where ⊗ is a fuzzy connective: we use the usual interpretation of the fuzzy operator involved (complement to 1 for the negation, minimum for the conjunction, maximum for the disjunction, etc. [START_REF] Fodor | Fuzzy-set theoretic operators and quantiers[END_REF]).

Definition 13 (Evaluation (interpretation) of a fuzzy graph pattern). The evaluation of a fuzzy graph pattern P over an F-RDF graph G, denoted by ⟦P⟧_G, is recursively defined by:
• if P is of the form of a (crisp) triple graph pattern t ∈ (U ∪ V) × (U ∪ V) × (U ∪ L ∪ V) then ⟦P⟧_G = {(m, 1) | dom(m) = var(t) and m(t) ∈ G};
• if P is of the form of a fuzzy triple graph pattern t ∈ (U ∪ V) × F × (U ∪ L ∪ V), denoted by ⟨?x, exp, ?y⟩ (where variables occur as subject and object), then ⟦P⟧_G = {(m, d) | dom(m) = {?x, ?y} and (m(?x), m(?y)) satisfies exp with the satisfaction degree d = sat_exp(m(?x), m(?y))}; the case where the subject (resp. the object) of t is a constant of U (resp. U ∪ L) is trivially induced from this definition;
• if P is of the form (P_1 and P_2) then ⟦P⟧_G = ⟦P_1⟧_G ⋈ ⟦P_2⟧_G;
• if P is of the form (P_1 opt P_2) then ⟦P⟧_G = ⟦P_1⟧_G ⟕ ⟦P_2⟧_G;
• if P is of the form (P_1 union P_2) then ⟦P⟧_G = ⟦P_1⟧_G ∪ ⟦P_2⟧_G;
• if P is of the form (P_1 filter C) then ⟦P⟧_G = {(m, min(d, d')) | (m, d) ∈ ⟦P_1⟧_G and m satisfies C to the degree d'}.

Intuitively, the expressions (P_1 and P_2), (P_1 union P_2), (P_1 opt P_2), and (P_1 filter C) refer to conjunction graph patterns, union graph patterns, optional graph patterns, and filter graph patterns respectively. Optional graph patterns allow for a partial match of the query (i.e., the query tries to match a graph pattern and does not discard a solution when some part of the optional pattern is not satisfied).

Remark 7. Note that a crisp graph pattern is a special case of a fuzzy graph pattern where no fuzzy term or condition occurs (and thus, according to the previous definition, an answer necessarily has a satisfaction degree of 1).
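For illustration, the join operation on fuzzy sets of mappings defined above can be sketched as follows (a minimal Python illustration; representing a fuzzy set of mappings as a list of (mapping, degree) pairs is our own choice, not part of the definition):

# Minimal sketch of the join M1 ⋈ M2 of two fuzzy sets of mappings:
# degrees are combined with min, and two mappings are compatible iff
# they agree on their common variables.

def compatible(m1, m2):
    """Mappings agree on every shared variable."""
    return all(m1[v] == m2[v] for v in m1.keys() & m2.keys())

def join(M1, M2):
    """Extend each mapping of M1 with its compatible mappings of M2."""
    result = {}
    for m1, d1 in M1:
        for m2, d2 in M2:
            if compatible(m1, m2):
                m = {**m1, **m2}          # m1 ⊕ m2
                result[frozenset(m.items())] = min(d1, d2)
    return [(dict(k), d) for k, d in result.items()]

M1 = [({"?art": "Shakira"}, 0.7)]
M2 = [({"?art": "Shakira", "?alb": "Butterfly"}, 0.5),
      ({"?art": "Rihanna", "?alb": "Euphoria"}, 0.9)]
print(join(M1, M2))
# [({'?art': 'Shakira', '?alb': 'Butterfly'}, 0.5)]  (key order may vary)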
Example 45 [Evaluation of a fuzzy graph pattern] Let us recall the fuzzy graph pattern P_rec_low from Example 42, defined by (?Art1, (friend+)_{distance is short}.creator, ?Alb) AND (?Art1, recommends, ?Alb) AND ((?Alb, rating, ?r) FILTER (?r is low)), for which Figure 6.3 is a graphical representation. Its evaluation can be represented as follows:
⟦P_rec_low⟧_{G_MB} = { ({?Art1 → EnriqueI, ?Alb → Justified, ?r → 6}, 0.33), ({?Art1 → Shakira, ?Alb → Butterfly, ?r → 4}, 0.66) }.
Note that the mapping {?art1 → Shakira, ?alb → Euphoria, ?r → 9} is excluded from the result of the evaluation of the pattern P_rec_low since µ_low_rating(9) = 0.

Conclusion

In this chapter, we have introduced a new query language named FURQL, which is a fuzzy extension of SPARQL that goes beyond the previous proposals in terms of expressiveness inasmuch as it makes it possible i) to deal with crisp and fuzzy RDF data, and ii) to express fuzzy structural conditions beside more classical fuzzy conditions on the values of the nodes present in the graph. We first presented the notion of a fuzzy RDF graph that makes it possible to model gradual relationships between entities, and then we defined the syntax and semantics of FURQL based on the notion of fuzzy graph pattern, which extends the Boolean graph patterns introduced by several authors in a crisp querying context. Associated implementation issues and experiments will be presented in Chapter 5. In the following chapter, we propose to extend the FURQL query language so as to be able to express more sophisticated fuzzy conditions, namely fuzzy quantified statements.

Introduction

Fuzzy quantified queries have long been recognized for their ability to express different types of imprecise and flexible information needs in a relational database context. However, in the specific RDF/SPARQL setting, the current approaches from the literature that deal with quantified queries consider crisp quantifiers only [START_REF] Bry | SPARQLog: SPARQL with rules and quantication[END_REF], Fan et al., 2016] over crisp RDF data. In the present chapter, we integrate fuzzy quantified statements in FURQL queries addressed to a fuzzy RDF database. We show how these statements can be defined and implemented in FURQL, the fuzzy extension of the SPARQL query language that we presented in Chapter 3. This work has been published in the proceedings of the 26th IEEE International Conference on Fuzzy Systems (Fuzz-IEEE'17), Naples, Italy, 2017.

In the following, we first present in Section 4.1 a refresher on fuzzy quantified statements in a relational database context; then, in Section 4.2, we introduce the syntactic format for expressing fuzzy quantified statements in the FURQL language and we describe their interpretation using different approaches from the literature.

4.1 Refresher on Fuzzy Quantified Statements

In this section, we recall important notions about fuzzy quantifiers; then, we present three approaches that have been proposed in the literature for interpreting fuzzy quantified statements.

4.1.1 Fuzzy Quantifiers

Fuzzy logic extends the notion of quantifier from Boolean logic (e.g., ∃ and ∀) and makes it possible to model quantifiers from the natural language such as most of, at least half, few, around a dozen, etc. In [Zadeh, 1983], the author distinguishes between absolute and relative fuzzy quantifiers. Absolute quantifiers refer to a number while relative ones refer to a proportion. Quantifiers may also be increasing, as at least half, or decreasing, as at most three.
An absolute quantifier Q is represented by a function µ_Q from an integer range to [0, 1], whereas a relative quantifier is a mapping µ_Q from [0, 1] to [0, 1]. In both cases, the value µ_Q(j) is defined as the truth value of the statement Q X are A when exactly j elements from X fully satisfy A (whereas it is assumed that A is fully unsatisfied for the other elements).

According to [Yager, 1988], fuzzy quantifiers can be increasing (proportional), which means that if the criteria are all entirely satisfied, then the statement Q X are A is entirely true, and if the criteria are all entirely unsatisfied, then the statement Q X are A is entirely false. Moreover, the transition between those two extremes is continuous and monotonous. Therefore, when Q is increasing (e.g., most, at least a half), function µ_Q is increasing. Similarly, decreasing quantifiers (e.g., at most two, at most a half) are defined by decreasing functions. The characteristics of monotonous fuzzy quantifiers are given in Table 4.1.

Table 4.1: Characteristics of monotonous fuzzy quantifiers
Increasing quantifier: µ_Q(0) = 0; ∃k such that µ_Q(k) = 1; ∀a, b, if a > b then µ_Q(a) ≥ µ_Q(b).
Decreasing quantifier: µ_Q(0) = 1; ∃k such that µ_Q(k) = 0; ∀a, b, if a > b then µ_Q(a) ≤ µ_Q(b).

Calculating the truth degree of the statement Q X are A raises the problem of determining the cardinality of the set of elements from X which satisfy A. If A is a Boolean predicate, this cardinality is a precise integer k, and the truth value of Q X are A is µ_Q(k). If A is a fuzzy predicate, this cardinality cannot be established precisely, and computing the quantification then corresponds to establishing the value of function µ_Q for an imprecise argument.

Fuzzy quantified queries have been thoroughly studied in a relational database context, see e.g. [START_REF] Kacprzyk | FQUERY III +:a "human-consistent" database querying system based on fuzzy logic with linguistic quantiers[END_REF], Bosc et al., 1995], where they serve to express conditions about data values. The authors distinguished between two types of uses of fuzzy quantifiers:
• horizontal quantification: the quantifier is used as a connective for combining atomic conditions in a where clause; this use was originally suggested in [START_REF] Kacprzyk | FQUERY III +:a "human-consistent" database querying system based on fuzzy logic with linguistic quantiers[END_REF];
• vertical quantification: the quantifier appears in a having clause in order to express a condition on the cardinality of a fuzzy subset of a group, as in find the departments where most of the employees are well-paid. This is the type of use we make in our approach.

4.1.2 Interpretation of Fuzzy Quantified Statements

We now present different proposals from the literature for interpreting quantified statements of the type Q B X are A (which generalizes the case Q X are A by considering that the set to which the quantifier applies is itself fuzzy), where X is a (crisp) referential and A and B are fuzzy predicates.

4.1.2.1 Zadeh's interpretation

Let X be the usual (crisp) set {x_1, x_2, ..., x_n} and n the cardinality of X. Zadeh [Zadeh, 1983] defines the cardinality of the set of elements of X which satisfy A, denoted by Σcount(A), as:

Σcount(A) = Σ_{i=1}^{n} µ_A(x_i)    (4.1)

The truth degree of the statement Q X are A is then given by:

µ(Q X are A) = µ_Q(Σcount(A)) if Q is absolute, and µ(Q X are A) = µ_Q(Σcount(A)/n) if Q is relative.    (4.2)

One may notice, however, that a large number of elements with a small degree µ_A(x) has the same effect as a small number of elements with a high degree µ_A(x), due to the definition of Σcount.
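As an illustration, the following sketch (our own code; the quantifier passed as mu_Q is a plain Python function) computes Zadeh's interpretation of Q X are A, with A given as a dict from elements to their membership degrees.

# Sketch (ours) of Zadeh's sigma-count interpretation (Formulas 4.1 and 4.2).

def sigma_count(A):
    return sum(A.values())

def zadeh_truth(mu_Q, A, relative=True):
    """Truth degree of 'Q X are A' for a quantifier represented by mu_Q."""
    sc = sigma_count(A)
    return mu_Q(sc / len(A)) if relative else mu_Q(sc)

# 'Most' modeled here as the identity on proportions (an assumption of ours):
A = {"x1": 0.9, "x2": 0.8, "x3": 0.1}
print(round(zadeh_truth(lambda p: p, A), 2))  # 0.6

The example also exhibits the weakness noted above: the small degree of x3 contributes to the sigma-count exactly as a fraction of a fully satisfying element would.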
Example 46 Let us consider the following sets:
X_1 = {0.9/x_1, 0.9/x_2, 0.9/x_3, 0.8/x_4, 0.8/x_5, 0.7/x_6, 0.6/x_7},
X_2 = {1/x_1, 1/x_2, 0.3/x_3, 0.2/x_4, 0.1/x_5, 0/x_6, 0/x_7},
X_3 = {1/x_1, 1/x_2, 1/x_3, 1/x_4, 1/x_5, 0.8/x_6, 0.3/x_7},
and the quantifier at least five represented in Figure 4.1.

As for quantified statements of the form Q B X are A (with Q relative), their interpretation is as follows:

µ(Q B X are A) = µ_Q(Σcount(A ∩ B) / Σcount(B)) = µ_Q(Σ_{x∈X} ⊤(µ_A(x), µ_B(x)) / Σ_{x∈X} µ_B(x))    (4.3)

where ⊤ denotes a triangular norm (for instance the minimum).

Example 47 Let us evaluate the quantified statement Q B X are A where B = {0.6/x_1, 0.3/x_2, 1/x_3, 0.1/x_5}, A = {0.8/x_1, 0.4/x_2, 0.9/x_3, 1/x_4, 1/x_5} and Q(x) = x^2. Then, µ(Q B X are A) = µ_Q((0.6 + 0.3 + 0.9 + 0 + 0.1) / (0.6 + 0.3 + 1 + 0 + 0.1)) = µ_Q(1.9/2) = µ_Q(0.95) = 0.90.

Yager's Competitive Type Aggregation

The interpretation by decomposition described in [Yager, 1984] was originally limited to increasing quantifiers. It was later generalized to all kinds of fuzzy quantifiers in [Bosc et al., 1995], but hereafter we consider the basic case where Q is increasing. The proposition Q X are A is true if an ordinary subset C of X satisfies the conditions c_1 and c_2 given hereafter:
c_1: there are Q elements in C,
c_2: each element x of C satisfies A.
The truth value of the proposition Q X are A is then defined as:

µ(Q X are A) = sup_{C ⊆ X} min(µ_c1(C), µ_c2(C))    (4.4)

with

µ_c1(C) = µ_Q(|C|) if Q is absolute, and µ_c1(C) = µ_Q(|C|/n) if Q is relative,    (4.5)

and

µ_c2(C) = inf_{x ∈ C} µ_A(x).    (4.6)

It has been shown in [Yager, 1984] that:

µ(Q X are A) = sup_{1 ≤ i ≤ n} min(µ_Q(i), µ_A(x_i))    (4.7)

where the elements of X are ordered in such a way that µ_A(x_1) ≥ ... ≥ µ_A(x_n). Formula (4.7) corresponds to a Sugeno integral [Sugeno, 1974].

For quantified statements of the form Q B X are A, the principle is similar. The statement is true if there exists a crisp subset C of X that satisfies the conditions c_1 and c_2 hereafter:
c_1: Q B X are in C,
c_2: each element x of C satisfies the implication (x is B) ⇒ (x is A).
The truth value of the proposition Q B X are A is then defined as:

µ(Q B X are A) = sup_{C ⊆ X} min(µ_c1(C), µ_c2(C))    (4.8)

with

µ_c1(C) = µ_Q(Σ_{x∈C} µ_B(x)) if Q is absolute, and µ_c1(C) = µ_Q(Σ_{x∈C} µ_B(x) / Σ_{x∈X} µ_B(x)) if Q is relative,    (4.9)

and

µ_c2(C) = inf_{x ∈ C} (µ_B(x) → µ_A(x))    (4.10)

where → is a fuzzy implication (see e.g. [START_REF] Fodor | Fuzzy-set theoretic operators and quantiers[END_REF]). Notice that µ(Q B X are A) is undefined when ∀x ∈ X, µ_B(x) = 0, since this would result in a division by zero in Formula (4.9).

Interpretation based on the OWA operator

In [Yager, 1988], Yager considers the case of an increasing monotonous quantifier and proposes an ordered weighted averaging (OWA) operator to evaluate quantifications of the type Q X are A. It is shown in [Bosc et al., 1995] i) how it can be extended in order to evaluate decreasing quantifications, and ii) that this interpretation boils down to using a Choquet fuzzy integral. The OWA operator is defined in [Yager, 1988] as:

OWA(x_1, ..., x_n; w_1, ..., w_n) = Σ_{i=1}^{n} w_i × x_{k_i}    (4.11)

where x_{k_i} is the i-th largest value among the x_k's and Σ_{i=1}^{n} w_i = 1. Let n be the crisp cardinality of X. The truth value of the statement Q X are A is computed by an OWA of the n values µ_A(x_i).
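Anticipating the weight definition given just below (Formula 4.12), the following sketch of ours evaluates Q X are A for an absolute increasing quantifier; the membership function used for at least five is our reconstruction, chosen so as to yield the weights used in Example 48 below.

# Sketch (ours) of the OWA-based interpretation of 'Q X are A' (Formula 4.11)
# with weights w_i = mu_Q(i) - mu_Q(i-1) for an absolute quantifier Q.

def owa_truth(mu_Q, degrees):
    cs = sorted(degrees, reverse=True)            # mu_A values, descending
    ws = [mu_Q(i) - mu_Q(i - 1) for i in range(1, len(cs) + 1)]
    return sum(w * c for w, c in zip(ws, cs))

# 'At least five' reconstructed as a linear ramp between 2 and 5 (assumed):
def at_least_five(i):
    return min(max((i - 2) / 3, 0.0), 1.0)

X1 = [0.9, 0.9, 0.9, 0.8, 0.8, 0.7, 0.6]
print(round(owa_truth(at_least_five, X1), 2))     # 0.83, cf. Example 48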
The weights w_i involved in the calculation of the OWA are given by:

w_i = µ_Q(i) - µ_Q(i-1) if Q is absolute, and w_i = µ_Q(i/n) - µ_Q((i-1)/n) if Q is relative.    (4.12)

The aggregated value which is calculated is:

OWA(µ_A(x_1), µ_A(x_2), ..., µ_A(x_n); w_1, ..., w_n) = Σ_{i=1}^{n} w_i × c_i    (4.13)

where c_i is the i-th largest value among the µ_A(x_k)'s.

Example 48 Let us consider the sets X_1, X_2, and X_3, and the quantifier at least five from Example 46. We have: w_1 = 0, w_2 = 0, w_3 = 1/3, w_4 = 1/3, w_5 = 1/3, w_6 = 0, w_7 = 0. We evaluate the statement at least five elements of X_1 are A and we get the degree 0.83 (= 0.9 × 1/3 + 0.8 × 1/3 + 0.8 × 1/3). The same way, we get the degrees 0.2 for X_2 and 1 for X_3. This interpretation corresponds to using a Choquet integral [Choquet, 1954], see also [START_REF] Murofushi | [END_REF]Sugeno, 1989, Grabisch et al., 1992].

As for statements of the form Q B X are A, Yager suggests to compute their truth degree by an OWA aggregation of the implication values µ_B(x) →_KD µ_A(x), where →_KD denotes Kleene-Dienes implication (a →_KD b = max(1 - a, b)). Let X = {x_1, ..., x_n} be such that µ_B(x_1) ≤ µ_B(x_2) ≤ ... ≤ µ_B(x_n), and let d = Σ_{i=1}^{n} µ_B(x_i). The weights of the OWA operator are defined by w_i = µ_Q(S_i) - µ_Q(S_{i-1}), with S_i = (Σ_{j=1}^{i} µ_B(x_j)) / d. The implication values are denoted by c_i and ordered decreasingly: c_1 ≥ c_2 ≥ ... ≥ c_n. Finally:

µ(Q B X are A) = Σ_{i=1}^{n} w_i × c_i.

As an illustration, let us reconsider the sets A and B and the quantifier Q(x) = x^2 from Example 47. We first order the elements of X such that µ_B(x_{k_1}) ≤ ... ≤ µ_B(x_{k_n}), which gives the degrees e_1 = 0, e_2 = 0.1, e_3 = 0.3, e_4 = 0.6, e_5 = 1, and d = 2. Thus, we get S_1 = 0, S_2 = 0.05, S_3 = 0.2, S_4 = 0.5, S_5 = 1, hence µ_Q(S_1) = 0, µ_Q(S_2) = 0.0025, µ_Q(S_3) = 0.04, µ_Q(S_4) = 0.25, µ_Q(S_5) = 1. Therefore, the weights of the OWA operator are:
w_1 = µ_Q(S_1) - µ_Q(S_0) = 0,
w_2 = µ_Q(S_2) - µ_Q(S_1) = 0.0025,
w_3 = µ_Q(S_3) - µ_Q(S_2) = 0.04 - 0.0025 = 0.0375,
w_4 = µ_Q(S_4) - µ_Q(S_3) = 0.25 - 0.04 = 0.21,
w_5 = µ_Q(S_5) - µ_Q(S_4) = 1 - 0.25 = 0.75.
For each x_i we calculate the implication value c_i = max(1 - µ_B(x_i), µ_A(x_i)), and these values are ordered decreasingly such that c_1 ≥ ... ≥ c_n:
c_1 = max(0.4, 0.8) = 0.8, c_2 = max(0.7, 0.4) = 0.7, c_3 = max(0, 0.9) = 0.9, c_4 = max(1, 1) = 1, c_5 = max(0.9, 1) = 1.
Reordering the implication values, we get c_1 = 1 (c_4), c_2 = 1 (c_5), c_3 = 0.9 (c_3), c_4 = 0.8 (c_1), c_5 = 0.7 (c_2). Finally, the satisfaction degree using the OWA aggregation is:
µ = 0 × 1 + 0.0025 × 1 + 0.0375 × 0.9 + 0.21 × 0.8 + 0.75 × 0.7 ≈ 0.73.

4.2 FURQL with Fuzzy Quantified Statements

In this section, we first present some recent proposals from the literature for incorporating quantified statements into SPARQL queries, and then we propose to integrate fuzzy quantified statements in the FURQL language.

Related Work: Quantified Statements in SPARQL

In an RDF database context, quantified statements have only recently attracted the attention of the database community. In [START_REF] Bry | SPARQLog: SPARQL with rules and quantication[END_REF], Bry et al. propose an extension of SPARQL (called SPARQLog) with first-order logic (FO) rules and existential and universal quantification over node variables. This query language makes it possible to express statements such as: for each lecture there is a course that practices this lecture and is attended by all students attending the lecture. This statement can be expressed in SPARQLog as follows:

all ?lec ex ?crs all ?stu
construct { ?crs uni:practices ?lec .
            ?stu uni:attends ?crs . }
where { ?lec rdf:type uni:lecture .
        ?stu uni:attends ?lec . }

More recently, in [START_REF] Fan | Adding counting quantiers to graph patterns[END_REF], Fan et al. introduced quantified graph patterns, an extension of the classical SPARQL graph patterns using simple counting quantifiers on edges. Quantified graph patterns make it possible to express numeric and ratio aggregates, and negation, besides existential and universal quantification. The authors also showed that quantified matching in the absence of negation does not significantly increase the cost of query processing. However, to the best of our knowledge, there does not exist any work in the literature that deals with fuzzy quantified statements in the SPARQL query language, which is the main goal of the present chapter.

Fuzzy Quantified Statements in FURQL

In this subsection, we show how fuzzy quantified statements may be expressed in FURQL queries. We first propose a syntactic format for these queries, and then we show how they can be evaluated in an efficient way.

Syntax of a Fuzzy Quantified Query in FURQL

In the following, we consider fuzzy quantified statements of the type Q B X are A over fuzzy RDF graph databases, where the quantifier Q is represented by a fuzzy set and denotes either a relative quantifier (e.g., most) or an absolute one (e.g., at least three), B is the fuzzy condition to be connected to a node x, X is the set of nodes in the RDF graph, and A denotes a (possibly compound) fuzzy condition.

Example 50 [Fuzzy quantified statement] An example of a fuzzy quantified statement of the type Q B X are A is: most of the recent albums are highly rated. In this example, Q corresponds to the relative quantifier most, B is the fuzzy condition to be recent, X corresponds to the set of albums present in the RDF graph, and A corresponds to the fuzzy condition to be highly rated.

Since the FURQL query language supports the expression of fuzzy preferences involving fuzzy structural properties (like, for example, the distance and strength between two nodes of a fuzzy graph), fuzzy quantified structural queries can be expressed in the FURQL language; an example of such a query is given hereafter.

Example 52 [Fuzzy Quantified Structural Query in FURQL] We now consider a slightly more complex version of the above example by adding a fuzzy structural condition on the strength of the authors' recommendation: retrieve every artist (?art1) such that most of the recent albums (?alb) that he/she strongly recommends are highly rated and have been created by a young friend (?art2) of his/hers. The syntactic form of this query, denoted by R_mostAlbums_ST, is given in Listing 4.3.

defineqrelativeasc most as (0.3,0.8)
defineasc recent as (2010,2015)
defineasc high as (2,5)

The interpretation of a fuzzy quantified statement in a FURQL query can be based on one of the formulas (4.3), (4.8), or (4.17). Its evaluation involves three stages:
1. the compiling of the fuzzy quantified query R into a crisp query denoted by R_flatBoolean,
2. the interpretation of the crisp SPARQL query R_flatBoolean,
3. the calculation of the result of R (which is a fuzzy set) based on the result of R_flatBoolean.

Compiling. The compiling stage translates the fuzzy quantified query R into a crisp query denoted by R_flatBoolean. This compilation involves two translation steps. First, R is transcribed into an intermediate query R_flat that allows the interpretation of the fuzzy quantified statement embedded in R.
The query R_flat, whose general form is given in Listing 4.4, is obtained by removing the group by and having clauses from the initial query and adding the optional clause for the A part. This query aims to retrieve the elements of the B part of the initial query, matching the variables ?res and ?x, and possibly the elements of the A part of the initial query, matching the variable ?x, for which we will then need to calculate the final satisfaction degree.

select ?res ?X I_B I_A
where {
  B(?res,?X)
  optional { A(?X) }
}
Listing 4.4: Derived query R_flat of R_mostAlbums

For each pair (?res, ?x), we retrieve all the information needed for the calculation of µ_B and µ_A, i.e., the combination of fuzzy degrees associated with relationships and node attribute values involved in B(?res,?X) and in A(?X), respectively denoted by I_B and I_A. Listing 4.5 of Example 53 below presents the derived query associated with the query R_mostAlbums.

The evaluation of R_flat is based on the derivation principle introduced by [START_REF] Pivert | Fuzzy Preference Queries to Relational Databases[END_REF] in the context of relational databases: R_flat is in fact derived into another query denoted by R_flatBoolean. The derivation translates the fuzzy query into a crisp one by transforming its fuzzy conditions into Boolean ones that select the support of the fuzzy statements. For instance, following this principle, the fuzzy condition ?year IS recent, with recent defined as defineasc recent as (2013,2016), becomes the crisp condition ?year > 2013, in order to discard the answers that necessarily do not belong to the support. In the general case of a membership function having a trapezoidal form defined by a quadruple (a, b, c, d), the derivation introduces two crisp conditions (?var > a and ?var < d). Listing 4.6 of Example 53 below is an illustration of the derivation of the query R_flat.

Crisp interpretation. The previous compiling stage translates the fuzzy quantified query R, embedding a fuzzy quantified statement and fuzzy conditions, into a crisp query R_flatBoolean, whose interpretation is the classical Boolean one. For the sake of simplicity, we consider in the following that the result of R_flat, denoted by ⟦R_flat⟧, is made of the quadruples (?res_i, ?x_i, µ_Bi, µ_Ai) matching the query.

Final result calculation. The last stage of the evaluation calculates the satisfaction degrees µ_B and µ_A according to I_B and I_A. If the optional part does not match a given answer, then µ_A = 0. The answers of the initial fuzzy quantified query R (involving the fuzzy quantifier Q) are answers of the query R_flat derived from R, and the final satisfaction degree associated with each element e can be calculated according to the three different interpretations mentioned earlier in Subsection 4.1.2. Hereafter, we illustrate this using [Zadeh, 1983]'s and [Yager, 1988]'s approaches (which are the most commonly used for interpreting fuzzy quantified statements).

• Following Zadeh's Sigma-count-based approach (cf. Subsection 4.1.2.1), we have:

µ(e) = µ_Q( Σ_{(?res_i, ?x_i, µ_Bi, µ_Ai) ∈ ⟦R_flat⟧ | ?res_i = e} min(µ_Ai, µ_Bi) / Σ_{(?res_i, ?x_i, µ_Bi, µ_Ai) ∈ ⟦R_flat⟧ | ?res_i = e} µ_Bi )    (4.18)

In the case of a fuzzy absolute quantified query, the final satisfaction degree associated with each element e is simply µ(e) = µ_Q( Σ_{(?res_i, ?x_i, µ_Bi, µ_Ai) ∈ ⟦R_flat⟧ | ?res_i = e} µ_Ai ).

Example 53 [Evaluation of a Fuzzy Quantified Query] Let us consider the fuzzy quantified query R_mostAlbums of Listing 4.2. We evaluate this query according to the fuzzy RDF data graph G_MB of Figure 4.5.
In order to interpret R_mostAlbums, we first derive the following query R_flat from R_mostAlbums, which retrieves the artists (?art1) who recommended at least one recent album (corresponding to B(?art1,?alb) in lines 2 and 3), possibly (optional) highly rated and created by a young friend (corresponding to A(?alb) in lines 5 to 7). In these listings, µ_p denotes the membership degree of the predicate p and ζ(t) denotes the membership value associated with the triple t (cf. Definition 4 on page 62). For the sake of readability, the query of Listing 4.6 is a simplified version of the real derived query (cf. Listing A.1 in Appendix A).

According to the fuzzy RDF data graph G_MB of Figure 4.5, ⟦R_flat⟧ concerns three artists {JustinT, Shakira, Beyonce}. EnriqueI, Drake, Mariah and Rihanna do not belong to the result set of R_flat because EnriqueI, Drake and Mariah have not recommended any album made by any of their friends, and Rihanna did not recommend any somewhat recent album. Then, the set of answers of the query R_flat, denoted by ⟦R_flat⟧, is as follows:

⟦R_flat⟧ = { (?art1 → JustinT, ?alb → One dance, µ_B → 0.4, µ_A → 0.3),
(?art1 → JustinT, ?alb → Home, µ_B → 0.1, µ_A → 0.6),
(?art1 → Shakira, ?alb → Euphoria, µ_B → 0.1, µ_A → 0.07),
(?art1 → Shakira, ?alb → Butterfly, µ_B → 0.2, µ_A → 0),
(?art1 → Shakira, ?alb → Justified, µ_B → 0.3, µ_A → 0.4),
(?art1 → Beyonce, ?alb → Home, µ_B → 0.4, µ_A → 0.3) }.

Finally, assuming for the sake of simplicity that µ_most(x) = x, the final result of the query R_mostAlbums evaluated on G_MB using Formula (4.18) is:

⟦R_mostAlbums⟧ = { ({?art1 → JustinT}, 0.80), ({?art1 → Beyonce}, 0.75), ({?art1 → Shakira}, 0.62) }.

• Using Yager's OWA-based approach, for each element e returned by R_flat we calculate:

µ(e) = Σ_{(?res_i, ?x_i, µ_Bi, µ_Ai) ∈ ⟦R_flat⟧ | ?res_i = e} w_i × c_i.    (4.19)

Let us consider condition B = {µ_B1/x_1, ..., µ_Bn/x_n} such that µ_B1 ≤ ... ≤ µ_Bn, condition A = {µ_A1/x_1, ..., µ_An/x_n} and d = Σ_{i=1}^{n} µ_Bi. The weights of the OWA operator are defined by w_i = µ_Q(S_{x_i}) - µ_Q(S_{x_{i-1}}), with S_{x_i} = (Σ_{j=1}^{i} µ_Bj) / d. The implication values are denoted by c_{x_i} = max(1 - µ_Bi, µ_Ai) and ordered decreasingly such that c_1 ≥ ... ≥ c_n.

For instance, for Shakira, with µ_most(x) = x, we get µ_Q(S_Euphoria) = 0.17, µ_Q(S_Butterfly) = 0.5 and µ_Q(S_Justified) = 1. Therefore, the weights of the OWA operator are: w_1 = µ_Q(S_Euphoria) - µ_Q(S_0) = 0.17, w_2 = µ_Q(S_Butterfly) - µ_Q(S_Euphoria) = 0.33, and w_3 = µ_Q(S_Justified) - µ_Q(S_Butterfly) = 0.5. The implication values are: c_Euphoria = max(1 - 0.1, 0.07) = 0.9, c_Butterfly = max(1 - 0.2, 0) = 0.8, and c_Justified = max(1 - 0.3, 0.36) = 0.7. Thus, c_1 = 0.9, c_2 = 0.8 and c_3 = 0.7. Finally, we get: µ(Shakira) = 0.17 × 0.9 + 0.33 × 0.8 + 0.5 × 0.7 = 0.15 + 0.26 + 0.35 = 0.77.

Finally, assuming for the sake of simplicity that µ_most(x) = x, the final result of the query R_mostAlbums evaluated on G_MB using Formula (4.19) is:

⟦R_mostAlbums⟧ = { ({?art1 → Shakira}, 0.77), ({?art1 → JustinT}, 0.66), ({?art1 → Beyonce}, 0.6) }.

Conclusion

In this chapter, we have investigated the issue of integrating fuzzy quantified structural queries of the type Q B X are A into the FURQL query language (the fuzzy extension of SPARQL that we proposed in Chapter 3) aimed at querying fuzzy RDF databases. We have defined the syntax and semantics of an extension of FURQL that makes it possible to deal with such queries.
A query processing strategy based on the derivation of non-quantified fuzzy queries has also been proposed, using different interpretations from the literature previously discussed in Section 4.1. The following chapter discusses implementation issues and presents some experiments.

Introduction

Chapters 3 and 4 contain the main contributions of the thesis, which consist of the definition of the FURQL query language, a fuzzy extension of SPARQL with fuzzy preferences (including fuzzy quantified statements) addressed to fuzzy RDF databases as well as crisp ones. In the present chapter, we describe in Section 5.1 a prototype implementation of FURQL built on top of a classical SPARQL engine; then, in Section 5.2, we present a performance evaluation of the prototype system using different sizes of fuzzy RDF databases. The main objective behind these experiments is to show that the extra cost due to the introduction of fuzziness remains limited/acceptable.

The reification mechanism creates, for each fuzzy triple, a new statement described by the properties rdf:type, rdf:subject, rdf:predicate, rdf:object and uri:degree, that model respectively the type, the subject, the predicate, the object and the degree of the new statement. In order to create a fuzzy RDF database, we start from a non-fuzzy RDF graph database for which every relationship between nodes is Boolean, and then we make it fuzzy by adding satisfaction degrees denoting the intensity of some relationships, using the reification mechanism (as illustrated in Example 55).

Evaluation of FURQL Queries

Concerning the evaluation of FURQL queries, two architectures may be thought of:
• A first solution consists in implementing a specific query evaluation engine inside the data management system. Figure 5.2 is an illustration of this architecture. The advantage of this solution is that optimization techniques implemented directly in the query engine should make the system very efficient for query processing. An important downside is that the implementation effort is substantial, but the strongest objection to this solution is that the evaluation of a FURQL query in a distributed architecture would imply having a FURQL query evaluator available at each SPARQL endpoint, which is not realistic at the time being.
• An alternative, more realistic architecture consists in adding a software add-on layer over a standard, possibly distant, classical SPARQL engine (endpoint), which is the evaluation strategy that we adopted for processing FURQL queries. This software, called SURF (Sparql with fUzzy quantifieRs for rdF data), is implemented within the Jena Semantic Web Java Framework for creating and manipulating RDF data. It works in two steps:
1. In a pre-processing step, the Query compiler module produces the query-dependent functions that allow computing the satisfaction degree of each returned answer, as well as a (crisp) SPARQL query, which is then sent to the SPARQL query engine for retrieving the information needed to calculate the satisfaction degrees. The compilation uses the derivation principle introduced in [START_REF] Bosc | SQLf query functionality on top of a regular relational database management system[END_REF] in a relational database context, which consists in translating a fuzzy query into a Boolean one.
2. In a post-processing step, the Score calculator module calculates the satisfaction degree of each returned answer, ranks the answers, and qualitatively filters them if an α-cut has been specified in the initial fuzzy query.

SURF makes it possible to process FURQL queries (including quantified ones) as well as regular SPARQL queries.
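As a rough illustration of this two-step strategy, the sketch below is our own pseudo-implementation: the helper compile_to_crisp and the endpoint object are hypothetical stand-ins (the real SURF modules are written in Java on top of Jena).

# Sketch (ours) of SURF's two-step evaluation strategy; compile_to_crisp()
# and endpoint are assumed helpers, to be provided by the caller.

def evaluate_furql(furql_query, endpoint, alpha=0.0):
    # Pre-processing: derive a crisp SPARQL query together with a function
    # computing the satisfaction degree of a result row.
    crisp_query, score = compile_to_crisp(furql_query)
    # Delegate the crisp query to any standard SPARQL engine.
    rows = endpoint.query(crisp_query)
    # Post-processing: compute degrees, apply the alpha-cut, rank.
    scored = [(row, score(row)) for row in rows]
    kept = [(r, d) for r, d in scored if d > alpha]
    return sorted(kept, key=lambda rd: rd[1], reverse=True)

The design choice is apparent here: all the fuzziness is confined to the compile and score functions, so any off-the-shelf SPARQL endpoint can serve as the evaluation engine.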
The different evaluation scenarios are presented hereafter.
1. For a FURQL query (that does not involve any quantified statement), the principle is simple: we first evaluate the corresponding (crisp) SPARQL query returned by the Query compiler module (obtained using the derivation rules). For each tuple x from the result of the crisp SPARQL query, we calculate its satisfaction degree using the Score calculator module. Finally, a set of answers ranked in decreasing order of their satisfaction degree is returned.
2. For a fuzzy quantified FURQL query, the principle is similar, the final degrees being computed according to one of the interpretations of fuzzy quantified statements. At the current time, Zadeh's approach [Zadeh, 1983] and Yager's OWA-based approach [Yager, 1988] have been implemented, and the choice of the interpretation to be used is made through the configuration tool of the system. Finally, we get a set of answers ranked in decreasing order of their satisfaction degree.

The SURF GUI was created using Vaadin, a web framework for Java, under NetBeans IDE 8.2. It is mainly composed of two frames:
• an input text area for entering and running a FURQL query, and
• a table for visualizing the results of a query.

Example 56 Figure 5.4 presents a screenshot of the SURF GUI, which contains the final result of the evaluation of a FURQL query.

Experimentations

In order to demonstrate the performance of our approach in the case of fuzzy graph pattern queries, we ran two experiments so as to calculate the execution time of each step of the evaluation of FURQL queries with and without quantified statements, and to assess the cost of adding fuzzy preferences for each type of query.

Experimental Setup

All of the experiments were carried out on a personal computer running Windows 7 (64 bits) with 8 GB of RAM. For these experiments, we used four different sizes of fuzzy RDF datasets containing crisp and fuzzy triples, as described in Table 5.1. In the following, we first present experiments on non-quantified FURQL queries (Section 5.2.2) and then on quantified ones (Section 5.2.3).

Experiments for non-quantified FURQL Queries

For this experiment, we considered different kinds of non-quantified FURQL queries (summarized in Table 5.2), based on the typology presented in [START_REF] Umbrich | Link traversal querying for a diverse web of data[END_REF]. Three types have been used. For each kind of query, we consider two fuzzy subtypes: 1) a subtype for which a condition concerns a value, and 2) a subtype for which a condition concerns the intensity of the relationships; such a subtype is called structural in the following.
• Edge queries: they consist in retrieving an entity e by means of a pattern where e may appear either i) in the subject (denoted by edge-s), ii) in the object (denoted by edge-o), or iii) both (denoted by edge-so). We consider in the following four edge queries of the form edge-so, given in Figure 5.5. Query Q_1.2 is a fuzzy edge query containing a fuzzy condition that aims to find the recent albums recommended by an artist; its corresponding crisp query is denoted by Q_1.1.

Table 5.2: Types of the queries used in the experiment
Edge query: crisp Q_1.1, Q_1.3; fuzzy Q_1.2, Q_1.4.
Star query: crisp Q_2.1, Q_2.3; fuzzy Q_2.2, Q_2.4.
Simple path query: crisp Q_3.1, Q_3.3; fuzzy Q_3.2, Q_3.4.

Figure 5.8.(a) (resp., Figure 5.8.(b)) presents the execution time in milliseconds of the processing of the edge queries (resp., star queries) from Table 5.2. Figure 5.8.(c) presents the execution time in milliseconds of the processing of the path queries from Table 5.2.
The execution time is the elapsed time between submitting the query to the system and obtaining the query answers; it is measured in milliseconds using the system command time. A first (and predictable) observation is that, for each crisp and fuzzy query presented in Table 5.2, the processing time of the overall process is proportional to the size of the dataset, the number of results and the complexity of the query. It is straightforward to see that for all the crisp queries the query compiler and the score calculator modules do not play any role in the processing of the queries; thus, the corresponding execution times in Figure 5.8 are equal to 0. In the case of fuzzy queries, these modules, which are directly related to the introduction of flexibility into the query language, are strongly dominated in time by the crisp SPARQL evaluator (which includes the time for executing the query and getting the result set). As we can see in Figure 5.8, the time of the evaluation of the initial query by the SPARQL evaluator engine represents at least 89% of the overall process. Moreover, the FURQL compiling module takes so little time compared to the other two steps that it cannot even be seen in Figure 5.8. This time remains almost constant, and is thus independent of the size of the dataset. As to the score calculation module, the time used for calculating the final satisfaction degrees is slightly higher than that of the compiling step and depends on the size of the result set and the nature of the query. Comparing the pairwise queries (Q_i.1 with Q_i.2 and Q_i.3 with Q_i.4), we see that the processing time of a fuzzy query is slightly higher than that of its crisp version; the increase is 10% on average. Finally, the results obtained tend to show that introducing fuzziness into a SPARQL query entails a rather small increase of the overall processing time: according to our experimentations, it represents around 11% of the overall time needed for evaluating a FURQL query in the worst case.

Fuzzy quantified queries involving crisp conditions

We processed four fuzzy quantified queries involving crisp conditions (of the type Q B X are A), changing each time the nature of the patterns corresponding to conditions B and A from simple to complex ones; Table 5.3 presents these queries (Q4_crisp combining a complex B pattern with a complex A pattern), where a simple pattern contains between two and four triple patterns. Each Q_crisp contains three crisp conditions. These queries are detailed in Appendix A. In order to evaluate these queries, we used Yager's OWA-based interpretation. The results, depicted in Figure 5.9, present the execution time in milliseconds of the processing of the fuzzy quantified queries involving crisp conditions from Table 5.3 over the RDF datasets from Table 5.1 on page 103.

Fuzzy quantified queries involving fuzzy conditions

We processed again four fuzzy quantified queries, this time with fuzzy conditions (of the type Q B X are A), by changing each time the nature of the patterns corresponding to conditions B and A from simple to complex ones. Table 5.4 presents these queries. A complex pattern differs from a simple one by the number and the nature (including structural properties) of its statements. During these experiments, a complex pattern is composed of nine triple patterns at most, while a simple pattern contains between two and four triple patterns. For each complex pattern, a fuzzy structural property (e.g., involving the notions of strength or distance) is involved. Each Q_fuzzy contains three fuzzy conditions. These queries are detailed in Appendix A. The results of these experiments, using Yager's OWA-based interpretation, are depicted in Figure 5.10, which presents the execution time in milliseconds of the processing of the fuzzy quantified queries from Table 5.4 over the RDF datasets from Table 5.1 on page 103.
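As an aside, a per-stage timing harness in the spirit of these measurements could look as follows (a sketch of ours; the thesis' actual measurements rely on the system command time, and the stage functions are placeholders).

# Hypothetical per-stage timing harness: measures compiling, crisp
# evaluation and score calculation separately.
import time

def timed(label, fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

# Usage sketch:
#   crisp_q, score = timed("compiling", compile_to_crisp, furql_query)
#   rows = timed("crisp evaluation", endpoint.query, crisp_q)
#   answers = timed("score calculation", rank, rows, score)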
Results interpretation. A first and obvious observation from Figure 5.9 and Figure 5.10 is that, for all the fuzzy quantified queries, the processing time taken by the overall process is proportional to the size of the dataset and the complexity of the pattern in the query. One can see that the processing times taken by the compiling and the score calculation modules, which are directly related to the introduction of flexibility into the query language, are very strongly dominated by the time taken by the SPARQL evaluator (which includes the time for executing the query and getting the result set). As shown in Figure 5.9 and Figure 5.10, the time of the evaluation of the initial query by the SPARQL evaluator engine represents 99% on average of the overall process. Indeed, the FURQL compiling step takes so little time compared to the score calculation and the SPARQL evaluator modules that it cannot even be seen in Figure 5.9 and Figure 5.10. This time remains almost constant and is independent of the size of the dataset, while slightly increasing in the presence of complex patterns or fuzzy conditions. Moreover, the time needed for calculating the final satisfaction degree in the score calculator module depends on the size of the result set and the nature of the patterns.

Again, these experimental results, even though preliminary, appear promising. They tend to show that introducing fuzzy quantified statements into a SPARQL query does not come at a high price (i.e., it entails a very small increase of the overall processing time). Finally, this conclusion can be extended to the case of Zadeh's interpretation [Zadeh, 1983], inasmuch as it is even more straightforward, in terms of computation, than Yager's OWA-based approach [Yager, 1988]; thus, the processing time of the score calculation step can only be smaller than in the case of Yager's OWA-based interpretation.

Conclusion

In this chapter, we discussed in Section 5.1 implementation issues related to the FURQL language and we presented an architecture consisting of a software add-on layer (called SURF) over a classical SPARQL engine. Then, in Section 5.2, we performed two sets of experiments over different sizes of datasets in order to study the performance of our proposed approach. The first experiments aimed to measure the additional cost induced by the introduction of fuzziness into SPARQL, and the results obtained show the efficiency of our proposal. The second experiments, which concerned fuzzy quantified queries, show that the extra cost induced by the fuzzy quantified nature of the queries remains very limited, even in the case of rather complex fuzzy quantified queries.

The results of the experiments performed in this chapter are summarized in Table 5.5. Each cell of the table contains three values corresponding to the percentage of time devoted to the compilation, the crisp evaluation and the score calculation stages respectively. They show that in both experiments the compilation and the score calculation stages are strongly dominated by the crisp SPARQL evaluation; the latter represents at least 95% of the overall process. Thus, these results confirm the hypothesis that the extra cost due to the introduction of fuzziness remains limited/acceptable. Finally, these experiments are preliminary, and more work is required to further assess FURQL using a wider variety of queries (e.g., complex path queries of undetermined length) and considering large databases.
Introduction

In the previous chapters, we mainly addressed the issue of defining an efficient approach for flexible querying of a particular type of graph databases, namely RDF databases. This approach makes it possible to express fuzzy non-quantified and quantified queries in an extension of the SPARQL language. In the present chapter, we place ourselves in a more general framework: graph databases [START_REF] Angles | Survey of graph database models[END_REF]. An efficient approach for flexible querying of fuzzy graph databases has been proposed in [START_REF] Pivert | On a fuzzy algebra for querying graph databases[END_REF]. This approach makes it possible to express only fuzzy non-quantified conditions. However, fuzzy quantified queries have a high potential in this setting since they can exploit the structure of the graph, beside the attribute values attached to the nodes or edges. So far, only one approach from the literature, described in [START_REF] Castelltort | Fuzzy queries over NoSQL graph databases: Perspectives for extending the Cypher language[END_REF], has considered fuzzy quantified queries to graph databases, but only in a rather limited way.

This chapter is based on our work reported in [Pivert et al., 2016e], in which we showed how it is possible to integrate fuzzy quantified queries in a framework named FUDGE that was previously defined in [Pivert et al., 2014a]. FUDGE is a fuzzy extension of Cypher [START_REF] Cypher | Cypher[END_REF], a declarative language for querying (crisp) graph databases. This work is closely related to the work presented in Chapter 4, in which we dealt with the same type of fuzzy quantified structural queries, but in a more specific type of graph databases, namely RDF databases.

The remainder of this chapter is organized as follows. Section 6.1 presents the different elements that constitute the context of the work. Section 6.2 discusses related work. In Section 6.3, we propose a syntactic format for expressing fuzzy quantified queries in the FUDGE language, and we describe their interpretation. Section 6.4 deals with query processing and discusses implementation issues. In Section 6.5, some experimental results showing the feasibility of the approach are presented.

6.1 Background Notions

In this section, we recall important notions about graph databases, fuzzy graph theory, fuzzy graph databases, and the query language FUDGE.

6.1.1 Graph Databases

In the last few years, graph databases have attracted a lot of attention for their ability to handle complex data in many application domains, e.g., social networks, cartographic databases, bibliographic databases, etc. [Angles and Gutierrez, 2008, Angles, 2012]. They aim to efficiently manage networks of entities where each node is described by a set of characteristics (for instance a set of attributes), and each edge represents a link between entities. A graph database management system enables managing data for which the structure of the schema is modeled as a graph, and data is handled through graph-oriented operations and type constructors. Several graph data models have been proposed (see [START_REF] Angles | Survey of graph database models[END_REF] for an overview), including the attributed graph (aka property graph), aimed to model a network of entities with embedded data. In this model, nodes and edges can be described by data in attributes (aka properties).

Example 57 Figure 6.1 is an example of an attributed graph, inspired by DBLP, with crisp edges. Nodes are assumed to be typed.
If n is a node of V, then Type(n) denotes its type. In Figure 6.1, the nodes IJIS16 and IJIS10 are of type journal, the nodes IJIS16-p, IJIS10-p and IJIS10-p1 are of type paper, and the nodes Maria, Claudio and Susan are of type author. For nodes of type journal, paper and author, a property called name contains the identifier of the node. Information about the title and the pages may be attached to nodes of type paper, and information about the volume and the date may be attached to nodes of type journal. In Figure 6.1, the value of the property name for a node appears inside the node.

Such a model may be extended into the notion of a fuzzy graph database, where a degree may be attached to edges in order to express the intensity of any kind of gradual relationship (e.g., likes, is friends with, is about). In the following section, we introduce the notion of fuzzy graphs.

Fuzzy Graphs

A graph G is a pair (V, R), where V is a set and R is a relation on V. The elements of V (resp. R) correspond to the vertices (resp. edges) of the graph. Similarly, any fuzzy relation ρ on a set V can be regarded as defining a weighted graph, or fuzzy graph, see [Rosenfeld, 1975], where the edge (x, y) ∈ V × V has weight or strength ρ(x, y) ∈ [0, 1]. Having no edge between x and y is equivalent to ρ(x, y) = 0. A fuzzy data graph may contain both fuzzy edges and crisp edges, as a fuzzy edge with a degree of 0 or 1 can be considered as crisp. Along the same line, a crisp data graph is simply a special case of a fuzzy data graph (where ρ : V × V → {0, 1} is Boolean). We then only deal with fuzzy edges and data graphs in the following.

An important operation on fuzzy relations is composition. Assume ρ_1 and ρ_2 are two fuzzy relations on V. Their composition ρ = ρ_1 ∘ ρ_2 is also a fuzzy relation on V s.t. ρ(x, z) = max_y min(ρ_1(x, y), ρ_2(y, z)). The composition operation can be shown to be associative: (ρ_1 ∘ ρ_2) ∘ ρ_3 = ρ_1 ∘ (ρ_2 ∘ ρ_3). The associativity property allows us to use the notation ρ^k = ρ ∘ ρ ∘ ... ∘ ρ for the composition of ρ with itself k - 1 times. In addition, following [Yager, 2013], we define ρ^0 to be s.t. ρ^0(x, y) = 0, ∀(x, y).

Useful notions related to fuzzy graphs are those of strength and length of a path. These notions were previously used in Chapter 3 in the RDF context; their definition, drawn from [Rosenfeld, 1975], is recalled hereafter.

Strength of a path. A path p in G is a sequence x_0 → x_1 → ... → x_n (n ≥ 0) such that ρ(x_{i-1}, x_i) > 0, 1 ≤ i ≤ n. The strength of p is ST(p) = min_{1 ≤ i ≤ n} ρ(x_{i-1}, x_i); in other words, the strength of a path is defined to be the weight of the weakest edge of the path. Two nodes for which there exists a path p with ST(p) > 0 between them are called connected. We call p a cycle if n ≥ 2 and x_0 = x_n. It is possible to show that ρ^k(x, y) is the strength of the strongest path from x to y containing at most k links. Thus, the strength of the strongest path joining any two vertices x and y (using any number of links) may be denoted by ρ^∞(x, y).

Length and distance. The length of a path p = x_0 → x_1 → ... → x_n in the sense of ρ is defined as follows:

Length(p) = Σ_{i=1}^{n} 1 / ρ(x_{i-1}, x_i).    (6.2)

The distance between two nodes x and y is then:

distance(x, y) = min_{p path from x to y} Length(p).    (6.3)

It is the length of the shortest path from x to y.

A (fuzzy) graph pattern describes a DB subgraph where variables can occur. An answer maps the variables to elements of DB.
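The following small sketch (our own illustration, with a toy edge-weight dictionary) computes the strength and length notions just defined.

# Toy illustration (ours) of the strength and length of a path in a fuzzy
# graph, the fuzzy relation rho being given as an edge-weight dictionary.

rho = {("a", "b"): 0.7, ("b", "c"): 0.5, ("a", "c"): 0.2}

def strength(path):
    """ST(p): weight of the weakest edge along the path."""
    return min(rho[(x, y)] for x, y in zip(path, path[1:]))

def length(path):
    """Length(p) = sum of 1/rho over the edges (Formula 6.2)."""
    return sum(1.0 / rho[(x, y)] for x, y in zip(path, path[1:]))

print(strength(["a", "b", "c"]))  # 0.5
print(length(["a", "c"]))         # 5.0
print(length(["a", "b", "c"]))    # ~3.43: the shortest a-c path, cf. (6.3)

Note how the 1/ρ definition of length makes a two-hop path of strong edges shorter than a direct but weak edge, which is precisely what conditions like distance is short exploit.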
A fuzzy graph pattern expressed à la Cypher consists of a set of expressions (n1:Type1)-[exp]->(n2:Type2) or (n1:Type1)-[e:label]->(n2:Type2), where n1 and n2 are node variables, e is an edge variable, label is a label of E, exp is a fuzzy regular expression, and Type1 and Type2 are node types. Such an expression denotes a path satisfying a fuzzy regular expression exp (reduced to a single labeled edge in the second form) going from a node of type Type1 to a node of type Type2. All its arguments are optional, so the simplest form of an expression is ()-[]->(), denoting a path made of two nodes connected by any edge. Conditions on attributes are expressed on node and edge variables in a where clause.

A FUDGE query is composed of:
1. A list of define clauses declaring the fuzzy terms involved in the query. If fterm is a trapezoidal fuzzy term, the clause has the form define fterm as (A-a,A,B,B+b), meaning that its support is [A-a, B+b] and its core [A, B]. If fterm is a decreasing function, then the clause has the form definedesc fterm as (δ,γ), meaning that the support of the term is [0, γ] and its core [0, δ] (there is the corresponding defineasc clause for increasing functions).
2. A match clause, which has the form match pattern where conditions, that expresses the fuzzy graph pattern.

Example 60 [FUDGE query] Listing 6.2 is an example of a FUDGE query. This pattern aims to retrieve the authors (au2) who have, among their close contributors (connected by a short path (Length is short) made of contributor edges), an author (au1) who published a paper (ar1) in IJWS12 and also published a paper (ar2) in a journal (j2) which has a high impact factor (i.value is high). The fuzzy terms short and high are defined on line 1. Figure 6.4 is a graphical representation of this pattern, where a dashed edge denotes a path and information in italics denotes a node type or an additional condition on node or edge attributes.

6.2 Related Work

In the last decades, fuzzy quantified queries have proved useful in a relational database context for expressing different types of imprecise information needs [Bosc et al., 1995]. Recently, in a graph database context, such statements have started to attract increasing attention from researchers [Yager, 2013, Castelltort and Laurent, 2014, Castelltort and Laurent, 2015], since they can exploit the structure of the graph, beside the attribute values attached to the nodes or edges. In [Yager, 2013], R.R. Yager briefly mentions the possibility of using fuzzy quantified queries in a social network database context, such as the question of whether most of the people residing in western countries have strong connections with each other.

In [START_REF] Castelltort | Extracting fuzzy summaries from NoSQL graph databases[END_REF], Castelltort and Laurent propose an approach aimed to summarize a (crisp) graph database by means of fuzzy quantified statements of the form Q X are A, in the same spirit as what [START_REF] Rasmussen | Summary SQL -A fuzzy tool for data mining[END_REF] did for relational databases. Again, they consider that the degree of truth of such a statement is obtained by a sigma-count (according to Zadeh's interpretation) and show how the corresponding queries can be expressed in Cypher. More precisely, given a graph database G and a summary S = ⟨a-[r]->b, Q⟩, the authors consider two degrees of truth of S in G defined as follows:

truth_1(S) = µ_Q(count(distinct S) / count(distinct a))    (6.4)

truth_2(S) = µ_Q(count(distinct S) / count(distinct a-[r]->(?)))    (6.5)

They illustrate these notions using a database representing students who rent or own a house or an apartment.
The degree of truth (in the sense of the second formula above) of the summary S = ⟨student-[rents]->apartment, most⟩, meaning most of the students rent an apartment (as opposed to a house), is given by the membership degree to the fuzzy quantifier most of the ratio: (number of times a relationship of type rents appears between a student and an apartment) over (number of relationships of type rents starting from a student node). The authors show how this ratio can be computed by means of a Cypher query based on count aggregates.

A limitation of this approach is that only the quantifier is fuzzy (whereas in general, in a fuzzy quantified statement of the form Q B X are A, the predicates A and B may be fuzzy too). The work most closely related to that presented here is [START_REF] Pivert | Fuzzy quantied queries to fuzzy RDF databases[END_REF], described in Chapter 4, where we introduced the notion of fuzzy quantified statements in a (fuzzy) RDF database context. We showed how these statements could be expressed in the FURQL language (a flexible extension of the SPARQL query language) that we previously proposed in [Pivert et al., 2016c].

6.3 Fuzzy Quantified Statements in FUDGE

In this section, we show how a specific type of fuzzy quantified statements may be expressed in the FUDGE query language. We first propose a syntactic format for these queries; then we show how they can be efficiently evaluated.

6.3.1 Syntax of a Fuzzy Quantified Query

In the following, we consider fuzzy quantified queries involving fuzzy predicates (beside the quantifier) over fuzzy graph databases. The fuzzy quantified statements considered are of the same type as those used in Chapter 4 in the context of RDF databases. They are of the form Q B X are A, where the quantifier Q is represented by a fuzzy set and denotes either an increasing/decreasing relative quantifier (e.g., most) or an increasing/decreasing absolute one (e.g., at least three), B is the fuzzy condition to be connected (according to a given pattern) to a node x, X is the set of nodes in the graph, and A is the fuzzy (possibly compound) condition. An example of such a statement is: most of the recent papers of which x is a main author have been published in a renowned database journal. The general syntactic form of a fuzzy quantified query of the form Q B X are A in the FUDGE language is given in Listing 6.3.

Example 61 [Fuzzy Quantified Query] The query, denoted by Q_mostAuthors, that consists in retrieving every author (a) such that most of the recent papers (p) of which he/she is a main author have been published in a renowned database journal (j), may be expressed in FUDGE as shown in Listing 6.4.

The evaluation of such a query involves three stages:
1. the compiling of the fuzzy quantified query Q into a crisp query denoted by Q_derivedBoolean,
2. the interpretation of the crisp query Q_derivedBoolean,
3. the calculation of the answers to Q based on the answers to Q_derivedBoolean.

Compiling. The compiling stage translates the fuzzy quantified query Q into a crisp query denoted by Q_derivedBoolean. This compilation involves two translation steps. First, Q is transcribed into a derived query Q_derived whose aim is to retrieve the elements necessary to the interpretation of the fuzzy quantified statement from Q. The query Q_derived, whose general form is given in Listing 6.5, makes it possible to get the elements of the B part of the initial query, matching the variables res and x, for which we will then need to calculate the final satisfaction degree. It is obtained by removing the with and having clauses from the initial query, and adding the optional match clause before the fuzzy graph pattern in condition A.
The processing of Q_derived is based on the derivation principle introduced by [START_REF] Pivert | Fuzzy Preference Queries to Relational Databases[END_REF] in the context of relational databases: Q_derived is in fact derived into another query denoted by Q_derivedBoolean. The derivation step translates the fuzzy query into a crisp one by transforming its fuzzy conditions into Boolean ones that select the support of the fuzzy statements. For instance, following this principle, the fuzzy condition p.year IS recent (where recent is defined as defineasc recent as (2013,2016)) becomes the crisp condition p.year > 2013, in order to remove the answers that do not belong to the support. Listing 6.7 of Example 62 below is an illustration of the derivation of the query Q_derived. The derivation principle applied to the FUDGE language is detailed in [Pivert et al., 2015].

Crisp interpretation. The previous compiling stage translates the fuzzy quantified query Q, embedding fuzzy quantifiers and fuzzy conditions, into a crisp query Q_derivedBoolean that can be processed by a classical graph DBMS (e.g., Neo4j). For the sake of simplicity, we consider in the following that the result of Q_derived, denoted by ⟦Q_derived⟧, is made of the quadruples (res_i, x_i, µ_Bi, µ_Ai) matching the query.

Final result calculation. The last stage of the evaluation calculates the satisfaction degrees µ_B and µ_A according to I_B and I_A. If the optional part does not match a given answer, then µ_A = 0. The answers of the initial fuzzy quantified query Q (involving the fuzzy quantifier Q) are answers of the query Q_derived derived from Q, and the final satisfaction degree associated with each element e can be calculated according to the three different interpretations mentioned earlier in Section 4.1. Hereafter, we illustrate this using [Zadeh, 1983]'s and [Yager, 1988]'s approaches (which are the most commonly used when it comes to interpreting fuzzy quantified statements).

Following Zadeh's Sigma-count-based approach (cf. Subsection 4.1.2.1), we have:

µ(e) = µ_Q( Σ_{(res_i, x_i, µ_Bi, µ_Ai) ∈ ⟦Q_derived⟧ | res_i = e} min(µ_Ai, µ_Bi) / Σ_{(res_i, x_i, µ_Bi, µ_Ai) ∈ ⟦Q_derived⟧ | res_i = e} µ_Bi )    (6.6)

In the case of a fuzzy absolute quantified query, the final satisfaction degree associated with each element e is simply µ(e) = µ_Q( Σ_{(res_i, x_i, µ_Bi, µ_Ai) ∈ ⟦Q_derived⟧ | res_i = e} µ_Ai ).

Example 62 [Evaluation of a Fuzzy Quantified Query] Let us consider the fuzzy quantified query Q_mostAuthors of Listing 6.4. We evaluate this query according to the fuzzy data graph DB of Figure 6.5. Assuming for the sake of simplicity that µ_most(x) = x, the final result of the query Q_mostAuthors evaluated on DB using Formula (6.6) is ⟦Q_mostAuthors⟧ = { µ(Peter) = µ_most(0. …), … }.

Using Yager's OWA-based approach (cf. Subsection 4.1.2.2), for each element e returned by Q_derived we calculate:

µ(e) = Σ_{(res_i, x_i, µ_Bi, µ_Ai) ∈ ⟦Q_derived⟧ | res_i = e} w_i × c_i.

The weights of the OWA operator are defined by w_i = µ_Q(S_{x_i}) - µ_Q(S_{x_{i-1}}), with S_{x_i} = (Σ_{j=1}^{i} µ_Bj) / d. The implication values are denoted by c_{x_i} = max(1 - µ_Bi, µ_Ai) and ordered decreasingly such that c_1 ≥ ... ≥ c_n.

Example 63 In order to calculate µ(Maria) from ⟦Q_derived⟧, let us consider B (resp. A) the set of satisfaction degrees corresponding to condition B (resp. A) of element Maria, as follows: B = {0.33/IJAR14, 0.6/IJIS16} and A = {1/IJAR14, …}.
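To make the final-result stage concrete, here is a small sketch of ours implementing Formula (6.6): it groups the quadruples returned by Q_derived by candidate answer and applies Zadeh's sigma-count. The numeric values are illustrative only (the second degree of A for Maria is an assumption of ours, the original example being truncated).

# Sketch (ours) of the final-result stage based on Formula (6.6).
from collections import defaultdict

def zadeh_final(quads, mu_Q):
    """quads: iterable of (res, x, mu_B, mu_A) matching Q_derived."""
    groups = defaultdict(list)
    for res, _x, mu_b, mu_a in quads:
        groups[res].append((mu_b, mu_a))
    # Sigma-count ratio per candidate answer (undefined if all mu_B are 0).
    return {res: mu_Q(sum(min(a, b) for b, a in pairs) /
                      sum(b for b, _ in pairs))
            for res, pairs in groups.items()}

# Illustrative values only (0.5 for IJIS16 is assumed):
quads = [("Maria", "IJAR14", 0.33, 1.0), ("Maria", "IJIS16", 0.6, 0.5)]
print(zadeh_final(quads, lambda r: r))  # {'Maria': ~0.89}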
6.4 About Query Processing

The evaluation strategy is the same two-step one as for FURQL:
1. In a pre-processing step, the Query compiler module derives a crisp Cypher query from the initial FUDGE query. The compilation uses the derivation principle introduced in [START_REF] Bosc | SQLf query functionality on top of a regular relational database management system[END_REF] in the context of relational databases.
2. In a post-processing step, the Score calculator module performs a grouping (according to the with clause of the initial query) of the elements, then calculates µ_B, µ_A and µ for each returned answer, and finally ranks the answers.

For quantified queries of the type introduced in the previous sections (i.e., using relative quantifiers), the principle is to first evaluate the query Q_derivedBoolean derived from the original query. For each element x ∈ ⟦Q_derivedBoolean⟧, we return the satisfaction degrees related to conditions A and B, denoted respectively by µ_A and µ_B. The final satisfaction degree µ can then be calculated according to Formulas (4.3), (4.8) or (4.17) (presented in Subsection 4.1.2 of Chapter 4) using the values of µ_B and µ_A. At the current time, [Zadeh, 1983]'s approach and [Yager, 1988]'s OWA-based approach have been implemented, and the choice of the interpretation to be used is made through the system configuration tool. Finally, a set of answers ranked in decreasing order of their satisfaction degree is returned.

As a proof of concept of the proposed approach, the FUDGE prototype is available at www-shaman.irisa.fr/fudge-prototype. A screenshot of this prototype is shown in Figure 6.7, which contains the final result of the evaluation of the query Q_mostAuthors of Example 61. The GUI is composed of two frames:
• a central frame for visualizing the graph and the results of a query, and
• an input field frame (placed under the central one) for entering and running a FUDGE query.

Figure 6.7: Screenshot of the FUDGE prototype

6.5 Experimental Results

In order to confirm the effectiveness and efficiency of the approach, we carried out some experiments on a computer running Windows 7 (64 bits) with 8 GB of RAM. The queries used in these experiments are based on the typology of [Angles, 2012], which considers three categories of queries:
• Adjacency query: tests whether two nodes are adjacent (or neighbors), i.e., whether there exists an edge between them, or whether two edges are adjacent, i.e., whether they have a common node.
• Reachability query: tests whether two given nodes are connected by a path. Two types of paths are considered: fixed-length paths, which contain a fixed number of nodes, and paths of undetermined length.

• The fourth query Q_4 (Listing 6.11), where A is a pattern matching condition, aims to find the authors (a) such that most of the recent papers (p) of which they are main authors have been published in a renowned database journal (j).

The results of the processing of these queries over the datasets from Table 6.1 are depicted in Figure 6.8. The main result is that the processing time taken by the compiling and the score calculation stages, which are related to the introduction of flexibility into the query language, is very strongly dominated by the time taken by the crisp Cypher evaluator. Moreover, the FUDGE compiling stage takes so little time compared to the other two stages that it cannot even be seen in Figure 6.8.

Conclusion

In this chapter, we have dealt with fuzzy quantified structural queries addressed to fuzzy graph databases. We have first defined the syntax and semantics of a fuzzy extension of the query language Cypher. This extension makes it possible to express and interpret such queries with different approaches from the literature.
6.6 Conclusion

In this chapter, we have dealt with fuzzy quantified structural queries addressed to fuzzy graph databases. We first defined the syntax and semantics of a fuzzy extension of the query language Cypher; this extension makes it possible to express such queries and to interpret them with different approaches from the literature. A query processing strategy based on the derivation of nonquantified fuzzy queries has also been proposed. Then, we updated the SUGAR software described in [Pivert et al., 2015; Pivert et al., SUGAR: A graph database fuzzy querying system] so as to be able to express such queries, and performed some experiments using different sizes of fuzzy graphs in order to study its performance. The results of these experiments show that the cost of dealing with fuzzy quantification in a query is reasonable w.r.t. the cost of the overall evaluation.

Conclusion

The last decade has witnessed an increasing interest in expressing preferences inside database queries, for their ability to provide the user with the best answers according to his/her information need. Even though most of the work in this area has been devoted to relational databases, several proposals have also been made in the Semantic Web area in order to query RDF databases in a flexible way. However, it appears that these approaches are mainly straightforward adaptations of proposals made in the relational database context: they make it possible to express preferences on the values of the nodes, but not on the structure of the RDF graph. Structural preferences are quite important in a graph database context and may concern the strength of a path, the distance between two nodes, etc. Moreover, these approaches consist of flexible extensions of the SPARQL query language that only deal with crisp RDF data. In the real world, though, Semantic Web data often carry gradual notions such as friendship in social networks, aboutness in a bibliographic context, etc. Such notions can be modeled by fuzzy sets, which leads to attaching a degree in [0, 1] to the edges of the graph.

Motivated by these concerns, we addressed in this thesis the issue of efficient querying of (fuzzy) RDF data, with the aim of extending the SPARQL query language so as to be able to express (i) fuzzy preferences on data (e.g., the release year of a movie is recent) and on the structure of the data graph (e.g., the path between two friends is required to be short), and (ii) fuzzy quantified preferences (e.g., most of the albums that are recommended by an artist are highly rated and have been created by a young friend of this artist). To the best of our knowledge, this thesis is the first attempt in this direction providing solutions for these different issues.

After motivating our work, we presented in Chapter 1 basic notions related to this thesis, namely the RDF data model, the SPARQL query language and fuzzy set theory. In Chapter 2, we provided an overview of the main proposals made in the literature for flexible extensions of SPARQL based on user preference queries, relaxation techniques and approximate matching; we discussed these approaches, classified them and pointed out their limits. Chapter 3 was dedicated to the definition of a fuzzy extension of SPARQL that goes beyond the previous proposals in terms of expressiveness, inasmuch as it makes it possible (i) to deal with both crisp and fuzzy RDF databases, and (ii) to express fuzzy structural conditions beside more classical fuzzy conditions on the values of the nodes present in the RDF graph. The language, called FURQL, is based on the notion of fuzzy graph pattern, which extends the Boolean graph patterns introduced by several authors in a crisp querying context.
Then, in Chapter 4, we proposed to integrate more complex conditions, namely fuzzy quantified statements of the type Q B X are A, into the FURQL language (addressed to fuzzy RDF databases) previously introduced in Chapter 3. We defined the syntax and semantics of an extension of the FURQL query language that makes it possible to deal with such queries. A query processing strategy based on the derivation of nonquantified fuzzy queries has also been proposed. These functionalities were successfully implemented in a prototype called SURF. Experimental results, described in Chapter 5, show the validity of the approach. In the case of fuzzy nonquantified queries, the results obtained indicate that introducing fuzziness into a SPARQL query comes with a very limited cost; in the case of fuzzy quantified queries, the results show that the extra cost induced by the fuzzy nature of the queries also remains very limited, even for rather complex fuzzy quantified queries.

Finally, the last chapter was devoted to integrating fuzzy quantified queries into an extension of the Neo4j Cypher language, called FUDGE (described in [Pivert et al., On a fuzzy algebra for querying graph databases; Pivert et al., SUGAR: A graph database fuzzy querying system]), in a more general (fuzzy) graph database context, of which fuzzy RDF databases are a special case. We first proposed a syntactic format for expressing these queries in the FUDGE language, and we described their interpretation using different approaches from the literature. Then, we carried out some experiments in order to assess the performance of the evaluation method. The results of these experiments show that the cost of dealing with fuzzy quantification in a query is very reasonable w.r.t. the cost of the overall evaluation.

Future Work

In this thesis, we have proposed a fuzzy extension of the SPARQL query language that makes it possible to express fuzzy structural conditions and fuzzy quantified statements in an efficient way. This work serves as a baseline: it leaves some open questions and sets the basis for further extensions. Different perspectives for short-term and long-term work have been identified and are outlined hereafter.

Extend the FURQL and FUDGE languages with more sophisticated preferences. In this thesis, we limited fuzzy structural properties to the distance and the strength, where the distance between two nodes is the length of the shortest path between these two nodes and the strength of a path is defined to be the weight of the weakest edge of the path. It would also be worth considering other structural properties, like:
• the centrality, the prestige and the influence used in social network analysis [Rusinowska et al., Social networks: prestige, centrality, and influence]. For instance, the degree of centrality of a node measures the extent to which this node is connected with other nodes in a given social network; the question to answer is how central this node is in this network. The degree of prestige measures the extent to which a social actor in a network receives or serves as the object of relations sent by others in the network: persons who are chosen as friends by many others have a special position (prestige) in the group.
• the clique, which is one of the basic concepts of classical graph theory; Ronald R. Yager [Yager, 2014] redefined this notion in the case of a fuzzy graph.
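As a reference point for such extensions, here is a small illustrative sketch (our code, not part of the prototypes) of the two structural properties already supported, using the fuzzy path length Length(p) = Σ 1/ρ over the edges of p, so that Length(p) is at least the number of edges; the edge weights follow the Beyonce -> Euphoria paths of Figure 3.2.

def length(edge_weights):
    # fuzzy length of a path: sum of 1/rho over its edges
    return sum(1.0 / w for w in edge_weights)

def strength(edge_weights):
    # strength of a path: weight of its weakest edge
    return min(edge_weights)

def distance(paths):
    # distance between two nodes: minimal fuzzy length over all cycle-free paths
    return min(length(p) for p in paths)

paths = [[0.8],                   # p1: recommends(0.8)
         [0.6, 0.2, 1.0],         # p2: friend(0.6), friend(0.2), creator
         [0.8, 0.3, 0.5, 1.0]]    # p3: friend(0.8), friend(0.3), friend(0.5), creator
print(distance(paths))                     # 1.25, via the direct edge p1
print(max(strength(p) for p in paths))     # 0.8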
Moreover, we introduced a specific type of fuzzy quantified queries of the form: Q nodes, among those that are connected to a node x according to a certain pattern, satisfy a fuzzy condition c. An example of such a statement is: most of the papers of which x is a main author have been published in a renowned database journal. It would be interesting to study other types of fuzzy quantified queries, in particular those that aim to find the nodes x such that x is connected (by a path) to Q nodes reachable by a given pattern and satisfying a given condition c (an example of such a query is: find the authors x that had a paper published in most of the renowned database journals), and also those that aim to find whether there exists a path from x to a node satisfying c such that this path contains Q nodes (where Q is an absolute quantifier).

Make SURF and SUGAR more user-friendly. The software tools that we developed make it possible to express fuzzy user preferences in queries written explicitly in the syntax of the formal query language (FURQL for RDF data and FUDGE for graph data), with the fuzzy terms defined in the query by a predefined define clause. These tools could be further improved in order to make them more user-oriented. One can first think about proposing a way to help non-expert users define fuzzy terms easily. There is, therefore, a definite need for a user interface helping casual users define their preferences and the underlying fuzzy membership functions in an easier way, following the work of [Smits et al., ReqFlex: Fuzzy Queries for Everyone], in which the authors describe ReqFlex, an intuitive user interface for the definition of preferences and the construction of fuzzy queries in a relational context. Moreover, we may also think about integrating and analysing user profiles in order to focus more on the user's interests and preferences, and about taking the user's context into account in order to personalise the retrieved information.

Add quality-related metadata. Another important perspective concerns the management of quality-related metadata [Fürber et al., Using semantic web resources for data quality management]. Since the RDF model that we used in this thesis makes it possible to model fuzzy notions, we can extend this model to represent data quality dimensions (e.g., accuracy, completeness, timeliness, consistency and so on). Indeed, these dimensions are of a fuzzy nature, and the values returned by the associated metrics may be viewed as satisfaction degrees. It would then also be worth investigating how our framework FURQL could be extended:
1. to express fuzzy preference queries concerning some quality dimensions;
2. to associate quality information with the answers to a query. This would make it possible to rank-order the answers according to their quality level (on one or several dimensions) and to warn the user about the presence of suspect answers, for instance.

Define a fuzzy RDF benchmark. Several benchmarks have been proposed (e.g., the Berlin SPARQL Benchmark [Bizer et al., The Berlin SPARQL Benchmark], the DBpedia SPARQL Benchmark (DBPSB) [Morsey et al., DBpedia SPARQL Benchmark: Performance Assessment with Real Queries on Real Data], etc.) that provide data generators and benchmark queries in order to evaluate the performance of RDF stores. However, none of the existing benchmarks provides fuzzy RDF data or explicitly deals with fuzzy user preferences.
For that purpose, in this thesis, we initially performed the evaluation of our approaches using a fuzzy RDF database inspired by MusicBrainz, with synthetic data generated by a script that allowed us to create datasets of different sizes. A future work would be to consider further evaluation using some of the existing real-world data benchmarks, or ideally to create our own fuzzy RDF benchmark. Obviously, many research problems remain open, and this thesis is only a first step which will help, we hope, to convince the database community of the interest of using fuzzy logic for the flexible/intelligent management of data in information systems.

Example 2 [RDF graph] Let us consider an example of an RDF subgraph extracted from the MusicBrainz database, an open music encyclopedia that collects music metadata. The resource uri:lemonade is an album, entitled Lemonade. It was released in 2016, with genre R&B and rating 8.7. It was created by the resource uri:beyonce, named Beyonce, 38 years old and with a rating of 7. The resources uri:sorry and uri:holdup (entitled hold up) are tracks of this album. The resource uri:beyonce also created the resource uri:B'Day, entitled B'Day, which was released in 2006.

Figure 1.1: Sample RDF graph extracted from MusicBrainz

In fact, RDF data may be represented by different syntaxes, such as RDF/XML (eXtensible Markup Language), N-Triples, Notation 3 (N3) and Turtle (Terse RDF Triple Language).

Example 3 [RDF representations] Listing 1.1 is the RDF/XML representation corresponding to the resource uri:Lemonade from the RDF graph of Figure 1.1 (lines 6 to 9 are restored here from the N-Triples counterpart of Listing 1.2).

1  <?xml version="1.0"?>
2  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
3           xmlns:mo="http://purl.org/ontology/mo/"
4           xmlns:dc="http://purl.org/dc/elements/1.1/">
5    <rdf:Description rdf:about="uri:Lemonade">
6      <dc:date>2016</dc:date>
7      <dc:title>Lemonade</dc:title>
8      <dc:rating>8.7</dc:rating>
9      <dc:genre>R &amp; B</dc:genre>
10     <rdf:type rdf:resource="mo:album"/>
11     <dc:track rdf:resource="uri:sorry"/>
12     <dc:track rdf:resource="uri:hold up"/>
13   </rdf:Description>
14 </rdf:RDF>
Listing 1.1: RDF/XML document

In this listing, line 1 contains an XML declaration and line 2 states that the following XML document is about RDF. Lines 2-4 declare namespaces, indicating the URIs that will be used later. Lines 5-14 (the tag opened in line 5 is closed in line 14) present the description of a resource, in which lines 6-12 describe the characteristics of this resource. The corresponding N-Triples representation is given in Listing 1.2.

Listing 1.3: A SPARQL Basic Graph Pattern. A graphical representation of this graph pattern is depicted in Figure 1.2.

Figure 1.2: A graphical representation of the graph pattern from Listing 1.3
Figure 1.3: Possible subgraphs from Figure 1.1

4. The set of t-norms (resp. t-conorms) has an upper (resp. lower) element, which is the minimum (resp. maximum) operator.

Example 22 Let us come back to Example 19. The intersection of the two fuzzy subsets is obtained by taking the t-norm ⊤ = min.

Most fuzzy terms are assumed to be represented by a trapezoidal membership function (see for instance a possible representation of recent in Figure 2.1).

Figure 2.1: Membership function of recent
Figure 2.2: Membership function of the fuzzy number at least Y

Listing 2.1: An f-SPARQL query. This query aims to retrieve from a music database the albums by Beyonce that have been recently released. If the MusicBrainz RDF database of Figure 1.1 is queried, then the album entitled Lemonade belongs to the answer with a satisfaction degree of 0.66, which corresponds to the degree of membership of the value 2016 to the fuzzy term recent (see Figure 2.1). The other album from Figure 1.1, released in 2006, does not belong to the answer, as it is not at all recent according to Figure 2.1.
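The defineasc/definedesc clauses seen throughout the thesis describe exactly such trapezoid edges. Here is a tiny sketch (ours) of the underlying membership functions, using the recent term of Chapter 6, defineasc recent as (2013,2016), as an assumed example:

def mu_asc(x, a, b):
    # increasing fuzzy term defined as (a, b): 0 below a, 1 above b, linear between
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def mu_desc(x, a, b):
    # decreasing fuzzy term (e.g., young): mirror image of mu_asc
    return 1.0 - mu_asc(x, a, b)

print(mu_asc(2015, 2013, 2016))   # ~0.67: 2015 is fairly recent
print(mu_desc(30, 25, 40))        # ~0.67 with definedesc young as (25,40)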
Example 34's preferences are represented by the CP-net of Figure 2.3, where J, P and S are binary variables corresponding to the colors of the jacket, the pants and the shirt, respectively.

Figure 2.3: CP-net of Example 34
Figure 2.4: An RDFS Ontology

They propose a conceptual framework to relax RDF queries relying on a matcher function (i.e., a distance function) that assigns a relaxation score in [0, 1] to a pair of values.

As stated in the previous chapter, RDF is a graph-based standard data model for representing Semantic Web information, and SPARQL is a standard query language for querying RDF data. Because of the huge volume of linked open data published on the web, these standards have aroused a large interest in the last years.

This extension allows (1) querying both crisp and fuzzy RDF data models, and (2) expressing fuzzy preferences on values present in the graph as well as on the structure of the data graph, which has not been proposed in any previous fuzzy extension of SPARQL.

3.1 Fuzzy RDF (F-RDF) Graph. This work has been published in the proceedings of the 25th IEEE International Conference on Fuzzy Systems (Fuzz-IEEE'16), Vancouver, Canada, 2016.

Figure 3.1: Fuzzy RDF graph G_MB inspired by MusicBrainz

Example 41 [Distance and strength between two nodes] Let us consider the cycle-free paths from G_MB connecting Beyonce to Euphoria, depicted in Figure 3.2, and let us compute the distance and the strength between the pair of nodes (Beyonce, Euphoria). The distance is calculated as distance(Beyonce, Euphoria) = min(length(p1), length(p2), length(p3)), with length(p1) = 1/ζ(Beyonce, recommends, Euphoria) = 1/0.8 = 1.25.

Figure 3.3: A possible representation of the fuzzy term short
Figure 3.5: Representation of the fuzzy term low applied to a rating value

One clause defines the fuzzy term low of Figure 3.5, and the following clause defines the fuzzy term short of Figure 3.3.

Figure 3.6: Some paths from G_MB

The paths of Figure 3.6 satisfy f1 with the following satisfaction degrees:
sat_f1(EnriqueI, Justified) = min(ζ(EnriqueI, friend, JustinT), ζ(JustinT, creator, Justified)) = min(0.4, 1) = 0.4,
sat_f1(Shakira, Butterfly) = min(ζ(Shakira, friend, MariahC), ζ(MariahC, creator, Butterfly)) = min(0.7, 1) = 0.7,
sat_f1(Beyonce, Euphoria) = max(min(ζ(Beyonce, friend, Rihanna), ζ(Rihanna, friend, EnriqueI), ζ(EnriqueI, creator, Euphoria)), min(ζ(Beyonce, friend, MariahC), ζ(MariahC, friend, Shakira), ζ(Shakira, friend, EnriqueI), ζ(EnriqueI, creator, Euphoria))) = max(min(0.6, 0.2, 1), min(0.8, 0.3, 0.5, 1)) = 0.3,
sat_f1(Rihanna, Euphoria) = min(ζ(Rihanna, friend, EnriqueI), ζ(EnriqueI, creator, Euphoria)) = min(0.2, 1) = 0.2,
sat_f1(MariahC, Euphoria) = min(ζ(MariahC, friend, Shakira), ζ(Shakira, friend, EnriqueI), ζ(EnriqueI, creator, Euphoria)) = min(0.3, 0.5, 1) = 0.3, and
sat_f1(Shakira, Euphoria) = min(ζ(Shakira, friend, EnriqueI), ζ(EnriqueI, creator, Euphoria)) = min(0.5, 1) = 0.5.
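These computations follow a simple scheme: the degree of one path combines its edge degrees with min, and alternative paths between the same pair of nodes combine with max. A minimal sketch (ours) reproducing sat_f1(Beyonce, Euphoria):

def path_degree(edge_degrees):
    # one path: conjunction of its edge degrees (min)
    return min(edge_degrees)

def sat(paths):
    # several alternative paths between the same pair of nodes: disjunction (max)
    return max(path_degree(p) for p in paths)

print(sat([[0.6, 0.2, 1.0], [0.8, 0.3, 0.5, 1.0]]))   # 0.3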
Figure 3.8 gives the set of subgraphs of G_MB satisfying the pattern P_rec_low. The matching value of Art1 is either Shakira or EnriqueI, who are the only artists that have liked a low-rated album created by another artist among their close friends. Note that (friend+)|distance is short.creator is the fuzzy regular expression f2 of Example 44, with sat_f2(EnriqueI, Justified) = 0.4, sat_f2(Shakira, Butterfly) = 0.7 and sat_f2(Shakira, Euphoria) = 0.5, and we consider µ_low_rating(4) = 0.66, µ_low_rating(6) = 0.33 and µ_low_rating(9) = 0, as defined in Figure 3.5. Then, the evaluation of the pattern P_rec_low over the RDF graph G_MB includes two mappings with their respective satisfaction degrees.

Figure 3.8: Subgraphs satisfying P_rec_low
Figure 4.1: Two examples of monotonous (decreasing and increasing, respectively) fuzzy quantifiers
Figure 4.2: The fuzzy quantifier at least five

Let us consider a quantified statement of the form Q B X are A from Example 47, with Q(x) = x².

Figure 4.3: Membership functions of Example 51
Figure 4.4: Membership function of the fuzzy term strong
Figure 4.5: Fuzzy RDF graph G_MB inspired by MusicBrainz

Listing 4.6: Query R_flatBoolean derived from R_flat. This query returns a list of artists (?art1) with their recommended albums (?alb) satisfying the conditions of query R_flat, along with their respective satisfaction degrees µ_B = min(µ_recent(?alb), ζ(?art1, recommends, ?alb)) and µ_A = min(µ_high(?rating), µ_young(?age), ζ(?art1, friend, ?art2)).

Figure 5.1: Reification of the fuzzy triple of Example 55
Figure 5.2: Implementation of a specific FURQL query evaluation engine
Figure 5.3: SURF software architecture

3. For a classical SPARQL query, we skip the Query compiler and Score calculator modules, and the original query is transferred directly to the classical SPARQL engine. All the answers returned by the SPARQL engine are kept in the final result set with a satisfaction degree equal to 1.

Figure 5.4: Screenshot of SURF
Figure 5.5: Edge query of the form edge-so
Figure 5.8: Experimental results about the evaluation of FURQL queries
Figure 5.9: Experimental results of fuzzy quantified queries involving crisp conditions
Figure 5.10: Experimental results of fuzzy quantified queries involving fuzzy conditions

..., where n is the number of links in the path. The strength of the path is defined as ST(p) = min_{i=1..n} ρ(x_{i-1}, x_i).

...
(au1)-[:author_of]->(ar1:paper), (ar1)-[:published]->(j1),
(au1)-[:author_of]->(ar2:paper), (ar2)-[:published]->(j2)
where j1.name="IJWS12" and j1.name <> j2.name
Listing 6.1: Pattern expressed à la Cypher

This pattern models information concerning authors (au2) who have, among their contributors, an author (au1) who published a paper (ar1) in IJWS12 and also published a paper (ar2) in another journal (j2). Figure 6.3 is a graphical representation of P.
Figure 6.3: Pattern P

...-[(contributor+)|Length is short]->(au1:author),
(au1)-[:author_of]->(ar1:paper), (ar1)-[:published]->(j1), ...

Listing 6.5: Derived query Q_derived. Such a query allows retrieving the pairs {res, x} that belong to the graph, together with all the information needed for the calculation of µ_B and µ_A, i.e., the combination of fuzzy degrees associated with relationships and node attribute values involved in B(res,x) and in A(x), respectively denoted by I_B and I_A. Listing 6.6 of Example 62 presents the derived query associated with the query Q_mostAuthors.

In order to interpret Q_mostAuthors, we first derive the following query Q_derived from Q_mostAuthors, which retrieves the authors (a) who highly contributed to at least one recent paper (p) (corresponding to B(a,p), lines 1 and 2), possibly (optional) published in a renowned database journal (corresponding to A(p), lines 3 to 5):

1 match (a:author)-[author_of|ST IS strong]->(p:paper)
2 where p.year is recent
3 optional match (p)-[:published]->(j:journal),
4   (j)-[:impact_factor]->(i:impact_factor), (j)-[:domain]->(d:dom)
5 where i.value is high and d.name="database"
6 return a, p, µA, µB
Listing 6.6: Query Q_derived derived from Q_mostAuthors

For the running example, Q_derived returns the four answers {Peter, Maria, Claudio, Michel}. The authors Andreas, Susan and Bazil do not belong to the result of Q_mostAuthors because Susan has not written a journal paper yet, and Andreas and Bazil do not have a recent paper. We then have ⟦Q_derived⟧(Peter) = {((0.2, 1)/IJAR14_p)}, ⟦Q_derived⟧(Maria) = {((0.33, 1)/IJAR14_p), ((0.6, 0.33)/IJIS16_p)}, ⟦Q_derived⟧(Claudio) = {((0.33, 1)/IJAR14_p), ((0.3, 0.07)/IJUFK15_p)}, and ⟦Q_derived⟧(Michel) = {((0.3, 0.07)/IJUFK15_p)}.

The query Q4 itself, whose opening lines mirror those of Q_mostAuthors, reads:

1 match (a:author)-[author_of|ST IS strong]->(p:paper)
2 where p.year is recent
3 with a
4 having most(p) are (
6   (p)-[:published]->(j:journal),
7   (j)-[:impact_factor]->(i:impact_factor), (j)-[:domain]->(d:dom)
8   where i.value is high and d.name="database" )
9 return a
Listing 6.11: Fuzzy quantified query with pattern matching

In summary, this thesis proposes a flexible querying framework allowing one:
1. to query an RDF data model in which the triples carry gradual notions (of which the non-fuzzy RDF model is a special case), and
2.
to express fuzzy preferences bearing not only on the data but also on the structure of the graph, whether the graph is fuzzy or not.

Fuzzy RDF model. In this thesis, we consider a data model, called F-RDF, which synthesizes the fuzzy RDF models of the literature. For example, the fuzzy triple ⟨Beyonce, recommends, Euphoria⟩, to which the degree 0.8 is attached, indicates that ⟨Beyonce, recommends, Euphoria⟩ is satisfied at the level 0.8, which can be interpreted as "Beyonce strongly recommends Euphoria". Fuzzy degrees may be given or computed, materialized or not. In its simplest form, a degree may correspond to the computation of a statistical notion reflecting the intensity of the relation to which the degree is attached. For example, the intensity of a friendship relation from a person p1 towards another person p2 can be computed as the proportion of common friends with respect to the total number of friends of p1.

[Pivert et al., 2016f] Pivert, O., Slama, O., and Thion, V. (2016f). Requêtes quantifiées floues structurelles sur des bases de données graphe. In Actes des Rencontres Francophones sur la Logique Floue et ses Applications (LFA'16), La Rochelle, France, pages 9-16.

Definition 1 (RDF triple). Let U be the set of URIs, B the set of blank nodes, and L the set of literals. An RDF triple t := ⟨s, p, o⟩ ∈ (U ∪ B) × U × (U ∪ L ∪ B), where the subject s denotes the resource being described, the predicate p denotes a property of the resource, and the object o denotes the property value. A triple t states that the subject s has a property p with a value o.

Example 1 [RDF triple] For instance, the triple ⟨Beyonce, creator, Lemonade⟩ states that Beyonce has Lemonade as a creator property, which can be interpreted as "Beyonce is a creator of Lemonade".

The N-Triples representation corresponding to the resource uri:Lemonade is the following:

<uri:Lemonade> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <mo:album> .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/date> "2016" .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/title> "Lemonade" .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/rating> "8.7" .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/genre> "R & B" .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/track> <uri:sorry> .
<uri:Lemonade> <http://purl.org/dc/elements/1.1/track> <uri:hold up> .
Listing 1.2: N-Triples file

1.2 SPARQL: Crisp Querying of RDF Data

In order to efficiently query RDF data, the SQL-like language SPARQL [Prud'hommeaux and Seaborne, 2008] is promoted by the W3C as a standard query language. It is a declarative query language based on graph pattern matching, in the sense that the query processor searches for sets of triples in the data graph that satisfy a graph pattern expressed in the query. A Basic Graph Pattern (BGP) is the basic building block of SPARQL, containing a set of triple patterns. A triple pattern is an RDF triple where variables may occur in the subject, predicate, or object position. Each variable is prefixed by the question mark symbol.
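Taking up the suggestion above that a friendship degree can be materialized as a proportion of common friends, here is a tiny sketch (ours) of one way it could be computed:

def friendship_degree(friends_p1, friends_p2):
    # proportion of friends p1 shares with p2, over p1's total number of friends
    if not friends_p1:
        return 0.0
    return len(set(friends_p1) & set(friends_p2)) / len(set(friends_p1))

print(friendship_degree({"a", "b", "c", "d"}, {"b", "c", "e"}))   # 0.5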
Definition 3 (SPARQL property path expressions). SPARQL property path expressions are recursively defined by: an IRI is a property path expression that denotes a path of length one; if exp1 and exp2 are property path expressions, then exp1|exp2 and exp1/exp2 are property path expressions; if exp is a property path expression, then exp*, exp+, exp? and ^exp are property path expressions. Here, exp1|exp2 denotes alternative expressions, exp1/exp2 denotes a concatenation of exp1 and exp2, exp* denotes a path that connects the subject and object of the path by zero or more matches of exp, exp+ is a shortcut for exp*/exp and denotes a path that connects the subject and object by one or more matches of exp, exp? denotes a path that connects the subject and object by zero or one match of exp, and ^exp is an inverse path (from an object to the subject).

These works proposed more expressive languages and extended SPARQL by allowing path extraction queries (generally of unknown length) within RDF datasets. Property paths came with the same principle, which is to allow navigational querying over RDF graphs, and are officially integrated in SPARQL 1.1. A property path is a possible path through a graph between two nodes; it can be of variable length. A property path of length exactly 1 is a triple pattern.

Example 12 [SPARQL property path query] Find the names of the artists that recommend albums made by friends or related friends of friends.

select ?name where {
  ?art1 dc:title ?name .
  ?art1 dc:recommends ?alb .
  ?art2 dc:creator ?alb .
  ?art1 dc:friend+ ?art2 . }
Listing 1.13: SPARQL query with property path

• Update functionalities: in addition to querying and manipulating RDF data, SPARQL 1.1 Update [Gearon et al., 2012] offers the possibility to modify the graph by adding/deleting triples, loading/clearing/creating/dropping an RDF graph, and many other facilities.

Example 13 [Update query] The following query aims to add some information about a new artist "Ed Sheeran" in the default graph.

insert data {
  uri:EdSheeran dc:title "Ed Sheeran" .
  uri:EdSheeran dc:age "26" .
  uri:EdSheeran dc:rating "8" . }
Listing 1.14: An example of an update query

• Subqueries: the principle is the same as for subqueries in SQL: a query may use the output of other queries for achieving complex results.

Example 14 [Subqueries in SPARQL] Return a name (the one with the lowest sort order) for all the artists who are friends with Beyonce and have a name.

select ?art ?minName where {
  uri:beyonce uri:friend ?y .
  { select ?art (min(?name) as ?minName) where {
      ?art uri:title ?name . }
    group by ?art } }
Listing 1.15: Query involving a subquery

• Negation: can be expressed in two ways. The first uses the not exists clause and aims to filter out the results that do not match a given graph pattern. The second one uses the minus clause and aims to remove answers related to another pattern.

Example 15 [Negation queries] The following query aims to find the artists who have issued no albums in 2015.

select ?name where {
  ?artist dc:creator ?album .
  filter not exists { ?artist dc:date "2005" . } }
Listing 1.16: SPARQL query with negation (not exists)

The query that aims to retrieve the names of artists having no albums is depicted in Listing 1.17.

select ?name where {
  ?artist dc:title ?name .
  minus { ?artist dc:creator ?album . } }
Listing 1.17: SPARQL query with negation (minus)

A more convenient notation, when X is a finite set {x1, ..., xn}, is A = {µ_A(x1)/x1, ..., µ_A(xn)/xn}.

Example 18 Let us consider the example of the predicate tall described in Table 1.3. Tall can be defined by a Boolean condition (height ≥ 180).
It corresponds to the crisp (non-fuzzy) set of Figure 1.5, and the result is in the third column of Table 1.3. However, it seems more natural to define the predicate tall as a fuzzy set (cf. Figure 1.6); the membership degrees associated with some individuals are shown in the fourth column of Table 1.3.

Table 1.3: Tall men
Name  | Height (cm) | Crisp membership | Fuzzy membership
Chris | 210         | 1                | 1
Marc  | 200         | 1                | 1
John  | 190         | 1                | 1
Tom   | 180         | 1                | 0.66
David | 170         | 0                | 0.33
Tom   | 160         | 0                | 0
David | 150         | 0                | 0

Figure 1.4: Trapezoidal membership function (defined by the points A - a, A, B, B + b)
Figure 1.5: Graphical representation of the predicate tall (crisp set)
Figure 1.6: Graphical representation of the predicate tall (fuzzy set)
Table 1.4: Properties of t-norm and t-conorm operators

An annotated RDF statement takes the form α : θ1, ..., θy, with α being an RDF statement and θ1, ..., θy being its annotations over a fixed set Γ = {p1, ..., py} of independent annotation dimensions.

Example 29 Let us consider the RDF statements about music concerts shown in Table 2.1. Each statement is annotated by a set of dimensions Γ = {Time, Source, Certainty}.

Table 2.1: The set of annotated RDF statements
Id | Statement                | Time     | Source                 | Certainty
#1 | TAL playsIn Le Grand Rex | 03.02.17 | www.legrandrex.com     | 0.9
#2 | KUNGS playsIn L'OLYMPIA  | 15.01.17 | www.fnacspectacles.com | 0.7
#3 | TAL hasRating 7          | 10.01.17 | www.itunes.apple.com   | 0.5
#4 | KUNGS hasRating 8        | 08.02.17 | www.itunes.apple.com   | 0.5

Example 32 In order to illustrate the form taken by skyline queries in PrefSPARQL, let us consider again the query from Example 2.7, expressed in PrefSPARQL in Listing 2.8.

select ?artist ?concert where {
  ?artist dc:concert ?concert .
  ?concert dc:starts ?startingTime .
  ?concert dc:ends ?endingTime .
  ?artist dc:rating ?rating .
  preferring ( ?rating = ft:excellent and
    (?startingTime between (9pm, 1am) and ?endingTime between (9pm, 1am)
     prior to highest (?endingTime))) }
Listing 2.8: Skyline query in PrefSPARQL

This extension of SPARQL, called PrefSPARQL, supports not only the expression of qualitative preferences (skyline) but also conditional ones (if-then-else). A PrefSPARQL query returns a set of partially ordered tuples according to the satisfaction of the preferences.

Example 33 So as to illustrate conditional preferences, let us now assume that a user prefers a concert which takes place after 7:30pm on weekdays and before 7pm during weekends, as formulated in Listing 2.9.

select ?concert where {
  ?concert dc:day ?D .
  ?concert dc:starts ?startingTime .
  preferring (if (?D = "Saturday" || ?D = "Sunday")
              then ?startingTime < 7pm
              else ?startingTime >= 7:30pm) }
Listing 2.9: Conditional preference in PrefSPARQL

Table 2.2: RDFS Inference Rules

Example 36 Rule (4) from Table 2.2 states that if a is a subclass of b and a resource x is of type a, then x is also of type b.

Predicate relaxation: for example, using rule (2) from Table 2.2, the triple pattern (?X, proceedingsEditorOf, ?Y) can be relaxed into (?X, editorOf, ?Y) and then into (?X, contributorOf, ?Y), since we have (proceedingsEditorOf, sp, editorOf) ∈ cl(O) and then (editorOf, sp, contributorOf) ∈ cl(O). Predicate-to-domain relaxation: for example, using rule (5) from Table 2.2, the triple pattern (a, p, b) can be relaxed into the triple pattern (a, type, c), since we have the triple pattern (p, dom, c) ∈ cl(O). Class relaxation: for example, using Table 2.2, the triple pattern (?X, type, ConferenceArticle) can be relaxed into (?X, type, Article) and then into (?X, type, Publication), since we have (ConferenceArticle, sc, Article) ∈ cl(O) and then (Article, sc, Publication) ∈ cl(O).
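Back to the tall predicate of Table 1.3: its two membership columns can be reproduced with a few lines of Python. This is our sketch, with the fuzzy ramp's bounds (160 cm, 190 cm) inferred from the tabulated degrees:

def crisp_tall(h):
    return 1.0 if h >= 180 else 0.0               # Boolean condition of Example 18

def fuzzy_tall(h):
    return min(1.0, max(0.0, (h - 160) / 30.0))   # linear ramp from 160 to 190 cm

for h in (210, 200, 190, 180, 170, 160, 150):
    print(h, crisp_tall(h), round(fuzzy_tall(h), 2))   # matches Table 1.3 up to rounding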
Graph matching has as a principle to determine whether two given graphs are the same and, if they are, to find a matching (mapping) between them, i.e., which nodes from one graph correspond to which nodes in the other. Similarity measures based on graph matching are commonly used in this context: essentially, queries are represented as a graph (called the query graph), and the aim is to find an appropriate matching between the query graph and the resource graph. However, all of the existing classical graph isomorphism algorithms do not fit the semantic characteristics of RDF graphs (i.e., directed graphs with labeled edges and nodes) [Carroll, 2002]; an efficient semantic similarity measure over RDF graphs is thus required. Therefore, a few approaches have proposed new techniques dealing with approximate querying over RDF data.

..., where x is the subject of t1 and y is the object of tn.

Example 40 [Path between two nodes] The (cycle-free) paths between the nodes Beyonce and Euphoria from the fuzzy RDF graph G_MB of Figure 3.1 are shown in Figure 3.2:
(p1) Beyonce -recommends(0.8)-> Euphoria
(p2) Beyonce -friend(0.6)-> Rihanna -friend(0.2)-> EnriqueI -creator-> Euphoria
(p3) Beyonce -friend(0.8)-> MariahC -friend(0.3)-> Shakira -friend(0.5)-> EnriqueI -creator-> Euphoria

Figure 3.2: Cycle-free paths from G_MB connecting Beyonce to Euphoria

Definition 6 (Distance between two nodes). The distance between two nodes x and y is defined by distance(x, y) = min_{p ∈ Paths(x,y)} length(p).

• If P is a fuzzy graph pattern and C is a fuzzy condition, then (P filter C) is a fuzzy graph pattern. A fuzzy condition is a logical combination of fuzzy terms defined by: if {?x, ?y} ⊆ V and c ∈ (U ∪ L), then bound(?x), ?x θ c and ?x θ ?y are fuzzy conditions, where θ is a fuzzy or crisp comparator; if ?x ∈ V and Fterm is a fuzzy term, then ?x is Fterm is a fuzzy condition; if C1 and C2 are fuzzy conditions, then (¬C1) and (C1 ⊗ C2) (where ⊗ is a fuzzy connective) are fuzzy conditions.

Let ⟨s1, p1, o1⟩, ..., ⟨sn, pn, on⟩ ⊆ G be a path p of G. The statement "p satisfies exp with a satisfaction degree of sat_exp(p)" is defined as follows, according to the form of exp (in the following, f, f1 and f2 are fuzzy regular expressions).

Finally, the result of the query of Example 43 (Listing 3.2) over G_MB is the singleton {Shakira}, which is m(?art1) in the mapping {?art1 → Shakira, ?alb → Butterfly, ?r → 4}, i.e., the only mapping of P_rec_low over G_MB having a satisfaction degree greater than or equal to 0.4.

Figure 3.7: Graphical representation of pattern P_rec_low

The two subgraphs satisfying the pattern, with their respective degrees, are:
g1: EnriqueI -friend(0.4)-> JustinT -creator-> Justified (rating 6), recommends(0.6): degree 0.33
g2: Shakira -friend(0.7)-> MariahC -creator-> Butterfly (rating 4), recommends(0.8): degree 0.66

Table 4.1: Characteristics of monotonous fuzzy quantifiers

Then, we evaluate the SPARQL query R_flatBoolean given in Listing 4.6, derived from the FURQL nonquantified query R_flat of Listing 4.5.

1 select ?art1 ?alb µB µA where {
2   ?art1 recommends ?alb . ?alb date ?date .
3   filter (?date is recent)
4   optional {
5     ?art1 friend ?art2 . ?art2 creator ?alb .
6     ?alb rating ?rating . ?art2 age ?age .
7     filter (?rating is high && ?age is young) } }
Listing 4.5: Query R_flat derived from R_mostAlbums

1 select ?art1 ?alb µB µA where {
2   ?art1 recommends ?alb . ?alb date ?date .
3   filter ( ?date > 2010.0 )
4   optional {
5     ?art1 friend ?art2 . ?art2 creator ?alb .
6     ?alb rating ?rating . ?art2 age ?age . ... } }
Listing 4.6: Query R_flatBoolean derived from R_flat
Example 54 In order to calculate µ(Shakira) from R_flat, let us consider B (resp. A) the set of satisfaction degrees corresponding to condition B (resp. A) of the element Shakira, as follows: B = {0.1/Euphoria, 0.2/Butterfly, 0.3/Justified} and A = {0.07/Euphoria, 0/Butterfly, 0.4/Justified}. We have d = 0.6 and: S_Euphoria = 0.1/0.6 = 0.17, S_Butterfly = (0.1 + 0.2)/0.6 = 0.5, and S_Justified = (0.1 + 0.2 + 0.3)/0.6 = 1.

The RDF data is inspired by MusicBrainz linked data (which is originally crisp); for representing fuzzy information, we used the reification mechanism that makes it possible to attach fuzzy degrees to triples, as discussed earlier in Subsection 5.1.1. A Java script has been developed to create random fuzzy RDF data of different sizes.

Table 5.1: Fuzzy RDF datasets
Dataset | Size           | Reified triples
DB1     | 11796 triples  | 47185 triples
DB2     | 65994 triples  | 263977 triples
DB3     | 112558 triples | 450393 triples
DB4     | 175416 triples | 701665 triples

Table 5.2: Different types of FURQL queries (crisp query, fuzzy condition, or fuzzy structural condition, for edge, star and path queries)

Table 5.3: Set of fuzzy quantified queries with crisp conditions
Query   | P_B     | P_A     | Conditions
Q1crisp | simple  | simple  | crisp
Q2crisp | complex | simple  | crisp
Q3crisp | simple  | complex | crisp
Q4crisp | complex | complex | crisp

Table 5.4: Set of fuzzy quantified queries with fuzzy conditions
Query   | P_B     | P_A     | Conditions
Q1fuzzy | simple  | simple  | fuzzy
Q2fuzzy | complex | simple  | fuzzy
Q3fuzzy | simple  | complex | fuzzy
Q4fuzzy | complex | complex | fuzzy

Table 5.5: Summary of the experimental results. For each query family and dataset (DB1 to DB4), the triple of values gives the share of the evaluation time spent in the (compiling, crisp evaluation, score calculation) stages; e.g., nonquantified queries over DB1: (1.04, 96.66, 2.30), and quantified queries with fuzzy conditions over DB4: (0.00, 99.96, 0.04).

Among the existing systems, let us mention AllegroGraph [allegrograph, 2017], InfiniteGraph [infinitegraph, 2017], Neo4j [Neo4j, 2017] and Sparksee [sparksee, 2017]. Different models of graph databases have been proposed in the literature.

Figure 6.1: An attributed graph inspired from DBLP (journals such as IJIS16 and IJIS10, papers such as IJIS16_p, IJIS10_p and IJIS10_p1 with volume, title and pages properties, and authors Susan, Maria, Basil and Claudio linked by author_of and contributor edges)

Clearly, Length(p) ≥ n (it is equal to n if ρ is Boolean, i.e., if G is a non-fuzzy graph). We can then define the distance between two nodes x and y in G as Distance(x, y) = min over all paths p from x to y of Length(p).

This query contains a list of define clauses for the fuzzy quantifier and fuzzy term declarations, a match clause for fuzzy graph pattern selection, a having clause for the fuzzy quantified statement definition, and a return clause for specifying which elements should be returned in the result set. B(res, x) denotes the fuzzy graph pattern involving the nodes res and x and expressing the (possibly fuzzy) conditions in B.
B(res, x) takes the form of a fuzzy graph pattern expressed à la Cypher, P_B where C_B (see Section 6.1.4). A(x) denotes the fuzzy graph pattern involving the node x and expressing the (possibly fuzzy) conditions in A; it takes the form of a fuzzy graph pattern expressed à la Cypher, P_A where C_A (see Section 6.1.4).

1 define... in
2 match B(res, x)
3 with res having Q(x) are A(x)
4 return res
Listing 6.3: Syntax of a fuzzy quantified query

Table 6.1: Fuzzy graph datasets
Dataset | Size
DB1     | 700 nodes & 1447 edges
DB2     | 2100 nodes & 4545 edges
DB3     | 3500 nodes & 7571 edges
DB4     | 4900 nodes & 10494 edges

Considering paths containing a cycle would not change the result of expressions (3.1) and (3.3). Hereafter, the define clauses are omitted for the sake of simplicity.

Footnote URLs cited in the text:
http://franz.com/agraph/allegrograph/ (AllegroGraph)
https://musicbrainz.org/ (MusicBrainz)
http://www.w3.org/TR/rdf-syntax-grammar/ (RDF/XML)
http://www.w3.org/2001/sw/RDFCore/ntriples/ (N-Triples)
http://www.w3.org/DesignIssues/Notation3 (N3)
http://www.w3.org/TR/turtle/ (Turtle)
https://www-shaman.irisa.fr/surf/ (SURF prototype)
https://jena.apache.org (Apache Jena)
https://vaadin.com/home (Vaadin)
http://www.informatik.uni-trier.de/~ley/db/ (DBLP)

The general syntactic form of a fuzzy quantified query of the type Q B X are A in the FURQL language is given in Listing 4.1.

define ...
select ?res where {
  B(?res,?x)
  group by ?res
  having Q(?x) are ( A(?x) ) }
Listing 4.1: Syntax of a FURQL quantified query R

The define clause allows defining the fuzzy terms and the fuzzy quantifier (denoted here by Q); fuzzy quantifiers are declared in the same way as fuzzy terms (see Subsection 3.2.1 of Chapter 3). The select clause specifies which variables ?res should be returned in the result set. The group by clause contains the variables (here ?res) that should be partitioned. Expression B(?res,?x) (in the where clause) denotes the fuzzy graph pattern, defined in the FURQL language (see Definition 9), involving the variables ?res and ?x and expressing the (possibly fuzzy) conditions in B, and expression A(?x) (in the having clause) denotes the fuzzy graph pattern involving the variable ?x that appears in A.

Example 51 [Fuzzy quantified query in FURQL] The query, denoted by R_mostAlbums, that aims to retrieve every artist (?art1) such that most of the recent albums (?alb) that he/she recommends are highly rated and have been created by a young friend (?art2) of his/hers, may be expressed in FURQL as follows (lines 5 to 10 are restored here from the derived query R_flat of Listing 4.5):

1  defineqrelativeasc most as (0.3,0.8), defineasc high as (2,5)
2  definedesc young as (25,40), defineasc recent as (2010,2015)
3  select ?art1 where {
4    ?art1 recommends ?alb . ?alb date ?date .
5    filter (?date is recent) }
6  group by ?art1
7  having most(?alb) are (
8    ?art1 friend ?art2 . ?art2 creator ?alb .
9    ?alb rating ?rating . ?art2 age ?age .
10   filter (?rating is high && ?age is young) )

where the defineqrelativeasc clause defines the fuzzy relative increasing quantifier most of Figure 4.3(c), the defineasc clauses define the (increasing) membership functions associated with the fuzzy terms high and recent of Figures 4.3(a) and (b), and the definedesc clause defines the (decreasing) membership function associated with the fuzzy term young of Figure 4.3(d). In this query, ?art1 corresponds to ?res of Listing 4.2, ?alb corresponds to ?x of Listing 4.2, lines 4 to 5 correspond to B(?res,?x) of Listing 4.2, and lines 8 to 10 correspond to A(?x) of Listing 4.2.
5.1 Implementation of FURQL

In this section, we discuss implementation issues related to the FURQL query language. Two aspects have to be considered: (i) the storage of fuzzy RDF graphs (see Subsection 5.1.1), and (ii) the evaluation of FURQL queries with and without fuzzy quantified statements (see Subsection 5.1.2).

5.1.1 Storage of Fuzzy RDF Graphs

In this thesis we deal with fuzzy RDF graphs, for which we need to attach fuzzy degrees to some edges of the RDF graph.

Example 55 The fuzzy RDF triple (⟨Shakira, friends, MariahC⟩, 0.7) states that ⟨Shakira, friends, MariahC⟩ is satisfied to the degree 0.7, which could be interpreted as "Shakira is a close friend of MariahC". The representation of this fuzzy RDF triple using reification is given in Listing 5.1; the satisfaction degree 0.7 is given by the statement in line 5. A possible graphical representation of this reification is depicted in Figure 5.1, where the nodes in dashed lines represent reified nodes with the properties rdf:type, subject, predicate, object and degree.

• Star queries (star-shaped queries): consist of three acyclic triple patterns that share the same node (called the central node). The central node may appear in different positions, i.e., it can be the subject of the three triple patterns (denoted by star-s3), the object of the three triple patterns (star-o3), the subject of one triple pattern and the object of the two others (star-s1-o2), or the subject of two triple patterns and the object of the remaining one (star-s2-o1). Again, we used four queries of the form star-s2-o1, shown in Figure 5.6.

• Path queries: consist of two or three triple patterns that form a path such that two triples share a variable. Path-shaped queries of length two or three may be considered; we consider in the following an example of a path-shaped query of length three, of the form given in Figure 5.7.

Query Q3.4 is a fuzzy structural simple path query containing a fuzzy structural condition; it aims to find every artist who has, among his close friends, an artist who created an album (cf. Listing 5.13). Its crisp counterpart, denoted by Q3.3, aims to find every artist who has, among his friends (with a friendship degree greater than 0.8), an artist who created an album (cf. Listing 5.12).

select ?art1 where {
  ?art2 creator ?alb . ?alb date ?d .
  /* reification */
  ?X1 subject ?art1 . ?X1 predicate friend .
  ?X1 object ?art2 . ?X1 degree ?degree .
  filter ( ?degree > 0.8 ) }
Listing 5.12: Crisp structural path query Q3.3

defineasc strong as (0.7, 0.9)
select ?alb where {
  ?art2 creator ?alb . ?alb date ?d .
  /* structural condition */
  ?art1 (friend | ST is strong) ?art2 . }
Listing 5.13: Fuzzy structural path query Q3.4

We evaluated each type of query separately over the different database sizes given in Table 5.1. The results of these queries are depicted in Figure 5.8. Although these experimental results are preliminary observations, they appear very encouraging, since they show that our approach does not entail any important overhead cost.

5.2.3 Fuzzy quantified queries

The main objective of these experiments is to assess the cost of each stage involved in the evaluation of fuzzy quantified queries and to show that the extra cost due to the introduction of fuzzy quantified statements remains limited and acceptable.

Fuzzy quantified queries involving crisp conditions. In the first experiment, we processed four fuzzy quantified queries with crisp conditions (of the type Q B X are A), changing each time the nature of the patterns corresponding to conditions B and A from simple to complex ones. These queries are summarized in Table 5.3. A complex pattern differs from a simple one by the number of its statements: here, a complex pattern is composed of at most nine triple patterns, while a simple pattern has fewer.
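As a side note on the storage scheme above, here is a minimal sketch (ours, with property names echoing the reified query patterns rather than any fixed vocabulary) of how a fuzzy triple unfolds into reified statements:

def reify(s, p, o, degree, stmt_id):
    # one fuzzy triple becomes four crisp statements about a statement node
    return [(stmt_id, "subject", s), (stmt_id, "predicate", p),
            (stmt_id, "object", o), (stmt_id, "degree", str(degree))]

for t in reify("uri:Shakira", "uri:friend", "uri:MariahC", 0.7, "_:st1"):
    print(t)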
6.1.3 Fuzzy Graph Databases

We are interested in fuzzy graph databases where nodes and edges can carry data (e.g., key-value pairs in attributed graphs). So, we consider an extension of the notion of a fuzzy graph: the fuzzy data graph, as defined in [Pivert et al., 2014a].

Definition 14 (Fuzzy data graph). Let E be a set of labels. A fuzzy data graph G is a quadruple (V, R, κ, ζ), where V is a finite set of nodes (each node n is identified by n.id), $R = \bigcup_{e \in E} \{\rho_e : V \times V \rightarrow [0,1]\}$ is a set of labeled fuzzy edges between nodes of V, and κ (resp. ζ) is a function assigning a (possibly structured) value to the nodes (resp. edges) of G. In the following, a graph database is meant to be a fuzzy data graph. The following example illustrates this notion.

Example 58 [Fuzzy data graph] Figure 6.2 is an example of a fuzzy data graph inspired from DBLP, with some fuzzy edges (with a degree in brackets) and crisp ones (degree equal to 1). In this example, the degree associated with A -contributor-> B is the proportion of journal papers co-written by A and B over the total number of journal papers written by B. The degree associated with J -domain-> D is the extent to which the journal J belongs to the research domain D. Nodes are assumed to be typed; if n is a node of V, then Type(n) denotes its type. In Figure 6.2, the nodes IJWS12, IJAR14, IJIS16, IJIS10 and IJUFK15 are of type journal; the nodes IJWS12-p, IJAR14-p, IJIS16-p, IJIS10-p, IJIS10-p1 and IJUFK15-p are of type paper; the nodes Andreas, Peter, Maria, Claudio, Michel, Bazil and Susan are of type author; the nodes named Database are of type domain; and the other nodes are of type impact_factor. For nodes of type journal, paper, author and domain, a property called name contains the identifier of the node, and for nodes of type impact_factor, a property called value contains the value of the node. In Figure 6.2, the value of the property name or value appears inside each node.

6.1.4 The FUDGE Query Language

FUDGE, based on the algebra described in [Pivert et al., 2015], is an extension of the Cypher language [Cypher]. A first attempt to extend Cypher with fuzzy quantified queries in the context of a regular (crisp) graph database is described in [Castelltort and Laurent], although the authors do not propose any formal language for expressing such queries. In [Castelltort and Laurent, Fuzzy queries over NoSQL graph databases: Perspectives for extending the Cypher language], the authors take as an example a graph database representing hotels and their customers, and consider a fuzzy quantified query over it.

In the query Q_mostAuthors, a corresponds to res, p corresponds to x, lines 3 and 4 correspond to B, and lines 6 to 8 correspond to A. According to the general syntax introduced in Listing 6.3, the variable a instantiates res and the variable p instantiates x.

In Section 6.3.2, µ_q denotes the membership degree of the predicate q and ρ_e(x, y) denotes the weight of the edge (x, y). Let us denote by ⟦Q_derived⟧(a) the set of answers of the query Q_derived for a given author a: the set ⟦Q_derived⟧(a) provides the list of his/her papers with their respective satisfaction degrees.
Sample of Queries

The following listings give examples of the queries used in the experiments, including a derived nonfuzzy query.

• Q4crisp: a fuzzy quantified query with a complex pattern in B and a complex pattern in A, involving crisp conditions (see Listing A.5).

• Q2fuzzy: a fuzzy quantified query with a complex pattern in B and a simple pattern in A, involving fuzzy conditions (see Listing A.7), of which the following fragment is recoverable:

defineqrasc most AS (0,1)
defineasc strong AS (0. ...)
...
  ?art2 <uri:creator> ?alb2 . ?alb2 <uri:rating> ?rating2 .
  filter ( ?rating2 is high && ?age2 is young ) )
Listing A.7: A fuzzy quantified query Q2fuzzy

• Q3fuzzy: a fuzzy quantified query with a simple pattern in B and a complex pattern in A, involving fuzzy conditions (see Listing A.8):

defineqrasc most AS (0,1)
defineasc recent AS (2014,2016)
definedesc young AS (25,32)
defineasc high AS (3,6)
defineasc high AS (2,5)
select ?art1 where {
  ?art1 <uri:recommends> ?alb2 . ?alb2 <uri:date> ?date2 .
  filter ( ?date2 is recent ) }
group by ?art1
having most(?alb2) are (
  ?art1 ( <uri:friend> | ST IS strong ) ?art2 .
  ?art2 <uri:age> ?age2 . ?art2 <uri:creator> ?alb2 .
  ?alb2 <uri:rating> ?rating2 . ?art2 <uri:rating> ?r2 .
  ?art2 <uri:memberOf> ?m2 . ?art2 <uri:gender> ?g2 .
  ?art2 <uri:type> ?t21 . ?alb2 <uri:type> ?t22 .
  filter ( ?rating2 is high && ?age2 is young ) )
Listing A.8: A fuzzy quantified query Q3fuzzy

• Q4fuzzy: a fuzzy quantified query with a complex pattern in B and a complex pattern in A, involving fuzzy conditions (see Listing A.9):

defineqrasc most AS (0,1)
defineasc recent AS (2014,2016)
definedesc young AS (25,32)
defineasc high AS (3,6)
defineasc high AS (2,5)
select ?art1 where {
  ?art1 (recommends | ST is strong) ?alb2 . ?alb2 <uri:date> ?date2 .
  ?art1 <uri:rating> ?r1 . ?art1 <uri:memberOf> ?m1 .
  ?art1 <uri:gender> ?g1 . ?art1 <uri:age> ?age1 . ?art1 <uri:type> ?t11 .
  filter ( ?date2 is recent ) }
group by ?art1
having most(?alb2) are (
  ?art1 ( <uri:friend> | ST IS strong ) ?art2 .
  ?art2 <uri:age> ?age2 . ?art2 <uri:creator> ?alb2 .
  ?alb2 <uri:rating> ?rating2 . ?art2 <uri:rating> ?r2 .
  ?art2 <uri:memberOf> ?m2 . ?art2 <uri:gender> ?g2 .
  ?art2 <uri:type> ?t21 . ?alb2 <uri:type> ?t22 .
  filter ( ?rating2 is high && ?age2 is young ) )
Listing A.9: A fuzzy quantified query Q4fuzzy
INTRODUCTION

The progress of modern analytical chemistry is closely associated with increasing the sensitivity and selectivity of analysis, as well as with the development of new equipment that allows quick and easy analysis of complex objects outside the laboratory. These requirements have led to the emergence of a new trend in analytical chemistry: chemical sensors. One of the most promising types of chemical sensors is biosensors, which contain a bio-recognition element. Biosensors are characterized, above all, by extraordinary selectivity due to the high specificity of the bio-recognition element, such as an enzyme, allowing in some cases the separate identification of stereoisomers. Secondly, biosensors demonstrate a fast response time, which enables express analysis. Thirdly, analysis with biosensors is highly sensitive because, in most cases, it belongs to the kinetic methods based on measuring a reaction rate.

Biosensors are particularly promising in the analysis of organic compounds, including biologically active substances. The acute need for such analysis arises from the high demands of quality control of food and medicines, and from the importance of determining such substances in biological fluids for the diagnosis of a large number of diseases. However, routine analysis of these objects is complicated by the presence of a large variety of components in the matrix, which reduces the sensitivity and selectivity of the analysis. In fact, nowadays only chromatography allows such objects to be analyzed with confidence, and it suffers from disadvantages such as the long duration and high cost of analysis, complex sample-preparation procedures and high demands on the purity of the reagents. Being free of these disadvantages, biosensors are a real alternative to chromatographic methods in the analysis of organic substances in complex objects.

The research conducted so far on the development of new biosensors is limited, in most cases, to glucose oxidase as the bio-recognition element. This enzyme allows different approaches to enzyme immobilization on the electrode surface to be implemented, owing to its high activity and stability. At the same time, other available enzymes from the oxidoreductase class offer more opportunities for the selection of the analyte and the object of analysis. However, these enzymes are often characterized by lower stability and activity, which prevents the application of traditional immobilization approaches and directs research efforts towards finding new matrices for bio-encapsulation.

One of the promising methods of enzyme immobilization is their encapsulation in a thin SiO2 film on the electrode surface by sol-gel technology. This keeps the enzymes active and ensures unhindered access of substrate molecules to the enzyme, increasing the sensitivity of the modified electrodes. The electrodeposition method has acquired a good reputation for the deposition of such films on the electrode surface, as it allows uniform and homogeneous coatings to be obtained. This method was previously proposed in our laboratory for encapsulating glucose oxidase and hemoglobin. Another pathway for increasing the sensitivity of biosensors consists in the use of metallic and carbonaceous nanomaterials, which are characterized by high surface area and catalytic properties.

CHAPTER 1. LITERATURE SURVEY

General overview of amperometric biosensors

The development of electrochemical sensors is one of the most popular areas of analytical chemistry and has expanded widely in recent years.
In such sensors the electrode is used as the transducer, which contributes to their low cost and availability. Owing to the easy registration of an electrical signal, electrochemical sensors were the first to reach commercialization and are widely used in clinical, industrial, agrochemical and environmental analysis [1]. Electrochemical biosensors combine the analytical power of electrochemical techniques with the unprecedented specificity of biological recognition processes. The general idea of such sensors is the appropriate immobilization of a bio-recognition element in close proximity to the electrode surface and the generation of an electrochemical (most often amperometric or potentiometric) signal whose magnitude depends on the concentration of the analyte. The current level of technology makes it possible to produce tiny, cheap and easy-to-use biosensors, which are already exploited in many fields of analytical chemistry: food analysis [2,3], environmental [4] and clinical analysis [Monošík]. Unanimously adopted by the world scientific community as powerful analytical tools, biosensors have earned their own chapters in modern textbooks on analytical chemistry [Otto; Harvey; 8].

The term "biosensor" usually refers to the use of any material of biological origin as the recognition element. Microorganisms, organelles, nucleic acids and antibodies all find application in biosensors. The most popular, however, are biosensors containing one or more enzymes, owing to the relative simplicity of their preparation and the fact that enzymes, as natural catalysts, are characterized by high specificity, efficiency and reaction rate. Therefore, in what follows the term "biosensor" will refer to enzyme-based biosensors.

1.1.1. Concept of amperometric biosensors

Unlike many chemical reactions, electrochemical processes always occur at the interface between the electrode and the solution. Depending on the conditions, electrochemical measurements can be carried out in a potentiometric (equilibrium) or potentiostatic (non-equilibrium) mode. In the first case, the experiment is carried out in a static mode with no current flowing through the electrochemical cell; the established electrode potential allows the concentration of the analyte in solution to be determined. Potentiometry is an important method in analytical chemistry, and the new ion-selective membranes developed over the past 10-20 years allow direct monitoring of many ions in complex samples [Evans].

Potentiostatic, or voltammetric, methods of analysis are based on a dynamic non-equilibrium situation, in which the potential applied to the electrode induces electrochemical reactions at the electrode-solution boundary. The current passing through the cell is then different from zero; it can be used to characterize the occurring reaction and to detect electroactive substances in solution. The advantages of voltammetry are high sensitivity and selectivity, a wide linear range, portable and cheap equipment, and a large number of available electrodes [Wang].
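As a brief illustration of the potentiometric mode described above, the sketch below evaluates the Nernst equation, which links the measured equilibrium potential to analyte activity. All parameter values are hypothetical, chosen only to show the characteristic ~59 mV-per-decade response at 25 °C; they are not taken from this work.

```python
import numpy as np

# Illustrative sketch (not from the thesis): the Nernst relation that
# underlies potentiometric sensing. All numbers below are hypothetical.
R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant
T = 298.15     # K, room temperature

def nernst_potential(E0, n, activity):
    """Electrode potential (V) for an ion of charge n at a given activity."""
    return E0 + (R * T / (n * F)) * np.log(activity)

# Example: a hypothetical monovalent-cation electrode with E0 = 0.200 V
for a in (1e-1, 1e-3, 1e-5):
    print(f"a = {a:.0e}  ->  E = {nernst_potential(0.200, 1, a):.3f} V")
# The ~59 mV change per decade of activity (at 25 °C) is what lets a
# potentiometric sensor translate a measured potential into a concentration.
```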
The enzyme immobilized on the electrode catalyzes a reaction that can be schematically represented as follows:

Substrate + Co-reactant (coenzyme) --(enzyme)--> Product + Co-reactant′ (coenzyme′)

The choice of transducer therefore depends primarily on the enzymatic system used in each case. For example, the enzymatic reaction of urease changes the pH, so pH-sensitive electrodes are the best choice, while decarboxylases, which release carbon dioxide, can be combined with potentiometric gas sensors. For most enzymes, however, voltammetry is advantageous, because electroactive substances are released or consumed and are easily detected at electrodes.

Historically, amperometric biosensors can be divided into three generations [Eggins] (Fig. 1.1):

1) In first-generation biosensors (Fig. 1.1a) the enzymatic oxidation of the substrate involves dissolved oxygen. The first such biosensor was based on glucose oxidase and the Clark oxygen electrode [Clark]. It allowed the glucose concentration to be measured by detecting the decrease in the concentration of dissolved oxygen consumed by the enzymatic reaction [Updike]. However, these biosensors have significant limitations imposed by the need to maintain a constant concentration of dissolved oxygen and by the very negative potential of its reduction (-0.7 V). The cathodic reduction of oxygen was therefore replaced by the anodic oxidation of the hydrogen peroxide released in the reaction.

2) Second-generation biosensors (Fig. 1.1b) use a so-called mediator, or electron carrier, involved in the enzymatic reaction. Mediators are usually molecules that can be easily and reversibly oxidized and reduced at the electrode at low potential (e.g., ferrocene derivatives and ferrocyanides). Their role is to transfer electrons from the enzyme molecule to the electrode, inducing a current that depends on the substrate concentration. The use of mediators gave a significant boost to the development of new types of biosensors, although problems remain, such as the effective immobilization of the mediator on the electrode surface and the strict demands on the mediator molecule (low redox potential, pH independence, lack of reaction with other components of the biosensor) [Dzyadevych].

3) Third-generation biosensors (Fig. 1.1c) operate on the basis of direct electron transfer between the electrode and the active center of the enzyme. This approach dispenses with any intermediary molecules and converts the substrate concentration into a measurable electrochemical signal almost directly. However, the design of such biosensors is not an easy task, especially because the electrochemically active group of the enzyme is usually located deep inside the protein molecule, shielded by protein groups [Freire].

Regardless of the biosensor generation, the enzyme must be firmly fixed in close proximity to the electrode, and unobstructed diffusion of the substrate to the enzyme, as well as of the reaction products to the electrode, must be ensured. As noted above, the concentration of the analyte in solution is determined from the current flowing through the working electrode.
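Because the enzymatic reaction follows saturation kinetics, the steady-state current of an amperometric biosensor is commonly described by an electrochemical Michaelis-Menten expression, I = I_max·C/(K_M(app) + C). The sketch below, with entirely invented calibration data, shows how I_max and the apparent Michaelis constant can be extracted from measured currents; it is an illustration of the general principle, not a procedure from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch: fitting an amperometric biosensor calibration curve
# with the electrochemical Michaelis-Menten equation. The "measured" data
# below are invented for illustration only.
def mm_current(c, i_max, km_app):
    """Steady-state current (uA) vs. substrate concentration (mM)."""
    return i_max * c / (km_app + c)

c = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])   # mM, standards
i = np.array([0.45, 1.0, 1.8, 3.0, 4.4, 6.2, 7.1])    # uA, fictitious currents

(i_max, km_app), _ = curve_fit(mm_current, c, i, p0=(8.0, 2.0))
print(f"I_max ~ {i_max:.2f} uA, K_M(app) ~ {km_app:.2f} mM")
# At C << K_M(app) the response is nearly linear in C (the useful analytical
# range); at C >> K_M(app) the enzyme saturates and the current levels off.
```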
There are two ways of measuring this current that give the best results. Voltammograms (linear-sweep or cyclic) are obtained by changing the potential of the working electrode at a certain scan rate. At some point the electrochemical reaction occurs, which manifests itself as an increase in current and the appearance of a peak in the voltammogram. The peak current is proportional to the concentration of the electrochemically active substance according to the Randles-Ševčík equation [Eggins] (see Appendix B). However, potential-scanning methods are not very practical for biosensor applications, because of the contribution of the charging current of the electrical double layer [Budnikov] and the considerable time required for one scan. A more convenient method is amperometry, in which a constant potential applied to the working electrode induces the electrochemical reaction. Under such conditions the current first decreases sharply, owing to the depletion of the near-electrode layer, and then settles at a constant value that can be calculated from the modified Cottrell equation (see Appendix B). The value of this quasi-stationary current depends on the concentration of the electrochemically active substance, and stirring the solution reduces the thickness of the diffusion layer and thus increases the current.

1.1.2. Enzyme types used in biosensors

Enzymes are substances of protein nature that catalyze chemical reactions in living systems. Several thousand individual enzymes existing in living organisms are known at present [17]. The International Union of Biochemistry and Molecular Biology has developed a four-level system of classification and nomenclature according to the type of reaction accelerated by the corresponding enzyme [18]. In this classification the oxidoreductases are the first, and probably one of the largest, of the six enzyme classes. Such enzymes catalyze biological redox reactions, transferring electrons from one molecule to another, which makes them ideal for the creation of electrochemical biosensors.

The oxidoreductase class can be divided into two subclasses depending on the type of oxidant used. If molecular oxygen acts as the electron acceptor, being converted into hydrogen peroxide, the enzymes belong to the oxidases. If a special molecule (a coenzyme) acts as the oxidant, the enzymes belong to the subclass of dehydrogenases. Among the variety of coenzymes (PQQ, FMN, TPP, coenzyme A, etc.), two are the most widespread in the oxidoreductase class: nicotinamide adenine dinucleotide (NAD or NADP) and flavin adenine dinucleotide (FAD). These molecules can be oxidized and reduced in a reversible redox process. The wide popularity of oxidases as sensitive elements of biosensors stems primarily from the relative ease of electrochemical detection of hydrogen peroxide at metallic electrodes. The electrochemical detection of the dehydrogenase coenzyme couple NAD+/NADH is more complicated (see paragraph 1.5.2), but dehydrogenases are also widely used in the development of biosensors.
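Since the text refers the reader to Appendix B for both equations, a short numerical sketch may help. The standard textbook forms at 25 °C are i_p = 2.69×10^5 n^(3/2) A D^(1/2) v^(1/2) C (Randles-Ševčík) and i(t) = nFAC·sqrt(D/(πt)) (Cottrell); the parameter values below are illustrative, not taken from this work.

```python
import numpy as np

# Illustrative estimates of the two limiting currents mentioned above.
# Standard textbook forms at 25 C; all parameter values are hypothetical.
F = 96485.0          # C/mol
n = 1                # electrons transferred
A = 0.071            # cm^2, e.g. a 3 mm diameter disc electrode
D = 6.7e-6           # cm^2/s, typical small-molecule diffusion coefficient
C = 1e-6             # mol/cm^3 (= 1 mM)

# Randles-Sevcik peak current for a potential sweep at scan rate v (V/s)
v = 0.05
i_p = 2.69e5 * n**1.5 * A * np.sqrt(D) * np.sqrt(v) * C   # amperes
print(f"Randles-Sevcik peak current: {i_p*1e6:.1f} uA at {v} V/s")

# Cottrell current decay after a potential step, at time t (s)
for t in (1.0, 10.0, 60.0):
    i_t = n * F * A * C * np.sqrt(D / (np.pi * t))
    print(f"Cottrell current at t = {t:4.0f} s: {i_t*1e6:.2f} uA")
# In practice stirring fixes the diffusion-layer thickness, so the measured
# quasi-stationary current is higher and essentially time-independent.
```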
The methods of biomolecule immobilization on the electrode surface

Biomolecules such as enzymes may quickly lose their activity in aqueous solutions because of gradual oxidation or destruction of their quaternary structure at the liquid/air interface [Nikolas]. Given the relatively high cost of pure enzyme preparations, their cost-effective use requires reusable enzyme-based biosensors. For this reason the use of enzymes in solution is an exception, and research efforts are aimed at finding new ways of enzyme immobilization, i.e., attachment to a surface. When choosing an immobilization method, attention must be paid to the retention of enzyme activity and of the conformation of the active center. In addition, the enzyme should be in a biocompatible environment, protected from microbial attack and pollutants, while substrate molecules should be able to diffuse freely to it from the external solution [Gill; Sassolas].

The characteristics of a biosensor depend strongly on the method of enzyme immobilization. The purpose of immobilization is to ensure close contact between the enzyme and the transducer while maintaining (and sometimes even improving) the stability of the enzyme. There are physical and chemical methods of immobilization, as follows [Tischer; Krajewska]:

1) Adsorption. A simple and cheap method, but often reversible: the enzyme gradually desorbs from the surface during measurements, which leads to poor biosensor stability.

2) Micro-encapsulation behind solid or liquid membranes. This method, often used in the first biosensors, places the biomolecules behind a semipermeable membrane in close contact with the transducer. Its disadvantages are the complexity and high cost of such membranes and, most importantly, the hindered diffusion of reactants through the membrane.

3) Encapsulation. The enzyme is mixed with a polymer solution and polymerization is then initiated, giving a gel containing the encapsulated enzyme. The absence of chemical bonding and the biocompatible environment keep the enzyme more active. Unfortunately, the enzyme can leach from the gel over time, resulting in signal loss.

4) Cross-linking. The enzyme is bound using so-called bifunctional reagents such as glutaraldehyde, which forms Schiff bases with the amino groups of the protein. This mild binding method hardly changes the steric configuration of the enzyme, but such electrodes have poor mechanical properties, and diffusion of the substrate through the material is rather slow.

5) Covalent binding. The enzyme is chemically bound to the carrier through various functional groups. This method provides the best stability of the immobilized enzyme; however, it is difficult and tedious and suffers from poor reproducibility. In addition, chemical binding disturbs the steric configuration of the enzyme molecule, degrading its activity and leading to denaturation.
Summing up, the choice of immobilization method depends on each particular case [Cao]. It can nevertheless be concluded that very strong or very weak binding of the enzyme does not give good results, because of the lack of electrode activity or stability, respectively. In the search for the optimal immobilization method, preference should be given to the "golden mean": encapsulation and cross-linking, which can provide firm fixation of the enzyme together with preservation of its activity [Sassolas]. One such soft method is the encapsulation of the enzyme in a polymer film of silicon oxide (SiO2). This material is biocompatible, highly porous and has several other important advantages that make the development of SiO2-based biosensors promising [Gupta].

Amperometric SiO2-based biosensors

Hybrid silica materials obtained by sol-gel technology are widely used in electroanalytical chemistry [Functionalized sol-gel materials with electrochemistry]. Interest in these materials stems from the ease of their synthesis and from their unique properties, including a wide variety of chemical compositions and structures (monoliths or thin films). Being solid inorganic substances, they combine a high specific surface area (200-1500 m2/g) with a three-dimensional structure consisting of a large number of open, interconnected pores. This provides a high diffusion rate of analytes inside the material and, together with the large number of available active sites, is a key factor in the development of highly sensitive electrochemical sensors [Walcarius]. Another advantage of silica-based materials is the ease of their modification with various mediators, which can alter their characteristics, increase the selectivity of analysis or provide electrocatalytic properties [Walcarius].

Recently it was shown that these materials can also be used to encapsulate biomolecules while preserving their activity [Biochemically active sol-gel glasses; Dave; Lev]. Silica materials possess several key characteristics that make them promising for bio-encapsulation. The simple low-temperature synthesis avoids protein denaturation, and the unique way in which the polymer chains form around the enzyme molecule does not disturb its steric configuration [Gupta]. These materials may also contain a large amount of water in their structure, which improves the long-term stability of the immobilized bio-recognition elements [Enzymes and other proteins entrapped in sol-gel materials].
In addition, silica materials have excellent biocompatibility and the ability to protect against microbial attack [Kandimalla]. However, SiO2-based materials have several shortcomings that need to be addressed. The first is the gradual leaching of modifier molecules from the film and the destruction of the film; this problem can be solved by introducing structuring and stabilizing agents (e.g., surfactants and polyelectrolytes) [Nadzhafova; Rottman; silica nanocomposite films on pyrolytic graphite]. The second is the need for a uniform distribution of the enzyme in the film (without the formation of conglomerates), which can be achieved by selecting an appropriate method of SiO2 film preparation.

Biomolecule encapsulation in a silica matrix on the electrode surface

In general, the sol-gel method involves hydrolysis of a precursor (an alkoxide) in acidic or alkaline medium, followed by condensation and polycondensation of the monomers, leading to the formation of a porous gel [Brinker]. The standard reaction scheme, reconstructed here in its usual textbook form, is:

Si(OR)4 + H2O → (RO)3Si-OH + ROH (hydrolysis)
≡Si-OH + HO-Si≡ → ≡Si-O-Si≡ + H2O (water condensation)
≡Si-OR + HO-Si≡ → ≡Si-O-Si≡ + ROH (alcohol condensation)

The properties of the formed gel, such as porosity, surface area, polarity and hardness, largely depend on the rates of the hydrolysis and condensation reactions (Fig. 1.2), as well as on the choice of precursor, molar ratios, solvent, temperature, and the drying and aging processes [Wang]. Moreover, aging may continue long after gel formation, creating additional bonds inside the sol-gel matrix. During aging the solvent can be removed from the pores, which changes the polarity and viscosity and reduces the pore diameter [Brinker; Flora].

For analytical purposes, sol-gel materials can be obtained either as monoliths or as thin films [Gupta]. Monoliths can be from hundreds of micrometers to several centimeters thick and can effectively immobilize a large number of biomolecules, which are retained inside because of their size and molecular weight. However, the main drawback of monoliths is a very long response time due to slow diffusion. In addition, they usually find no use in electrochemistry, because thick silica layers are not conductive. One way around this problem is the creation of composite bioelectrodes [Wang] by mixing the sol-gel precursor with the enzyme and conductive materials: carbon paste [Pankratov], graphite [Gun]
or metal particles [Bai]. Thin sol-gel films less than one micrometer thick, by contrast, offer significantly faster diffusion of the analyte to the bio-recognition centers, guaranteeing a fast response, and are therefore considered more promising for application in electrochemical sensors [Lev].

There are several ways of immobilizing an enzyme within a sol-gel film on the electrode surface. Covalent binding of the enzyme to the SiO2 matrix (Fig. 1.3) via the carbodiimide coupling reaction was used for glucose oxidase [Covalent immobilization of GOx on a carbon sol-gel composite] and lactate dehydrogenase [Cheng-Li], but this method has not gained widespread use, because the enzyme loses activity as a result of binding to the rigid matrix. The so-called "sandwich" configuration (Fig. 1.3) places the enzyme between two layers of sol-gel film. This configuration was first used for glucose oxidase [Narang] and showed higher activity and faster response than conventional methods; it was later also used for lactate dehydrogenase [Ramanathan; LDH on TEOS-derived sol-gel films]. Unfortunately, this configuration results in an uneven distribution of the enzyme through the modifying film, reducing the reproducibility of the analytical response. A double-layer SiO2 film (Fig. 1.3) was used for the immobilization of lactate oxidase [Os-polymer lactate biosensor] and peroxidase [Stephen; 50], together with an osmium redox mediator. In this configuration, however, the enzyme and the mediator can only meet at the boundary between the two layers, making such biosensors significantly less efficient.

The encapsulation of the enzyme throughout a thin SiO2 film (with or without a mediator) is one of the most popular methods, since it gives a uniform distribution of the modifier molecules in the film and close contact with the electrode. The key factors in thin-film formation are the homogeneity and thickness of the film, its adhesion to the electrode, its resistance to cracking and the minimization of possible enzyme leaching. The film thickness is a major parameter of the modified electrodes: increasing it slows the diffusion of the analyte to the active centers inside the film, reducing the response [Kandimalla], whereas very thin films contain only a small amount of immobilized enzyme, which also lowers the signal.
Methods of obtaining thin sol-gel films on the electrode surface

The main methods for obtaining thin sol-gel coatings on electrode surfaces are dip-coating and spin-coating [Brinker]; less common are drop-coating and spray-coating (Table 1.1). Despite this variety of methods, several difficulties must be overcome before such films can be successfully applied in biosensors [Gupta]. First, most of the techniques do not allow reproducible modification of surfaces with complex morphology, such as fibers. Second, to achieve a measurable signal level, thin films require a high content of biomolecules, which creates problems for enzymes that are insoluble or precipitate in the sol. Third, homogeneous films often require significant amounts of alcohol as a viscosity modifier, which can denature the immobilized biomolecules. Finally, unlike in monoliths, where drying and aging are slow, in thin films these processes occur simultaneously and very fast, which can lead to film cracking and dehydration of the immobilized biomolecules. Researchers therefore face the task of developing new, simple and effective ways of modifying electrodes with SiO2-enzyme bio-composite films that solve the above problems. One promising method for this purpose is electrochemically-assisted deposition.

The electrochemically-assisted deposition method

Electrochemically-assisted sol-gel deposition (EAD) is a relatively new way of obtaining thin coatings, first described in 1999 [Shacham]. At present this method can be applied only to conductive surfaces, but it solves the basic problem of traditional sol-gel processing: the impossibility of modifying surfaces of complex morphology and small size [Shacham]. The method consists in immersing the electrode in a solution of a pre-hydrolyzed sol-gel precursor and applying a sufficiently negative potential (EAD at positive potential is also possible [Collinson]) for some time (Fig. 1.4). When a negative potential is applied to the electrode, water (and oxygen) reduction reactions occur according to the scheme [Electrodeposited silicate films]:

2H2O + 2ē → H2 + 2OH−
O2 + 2H2O + 4ē → 4OH−
O2 + 2H2O + 2ē → H2O2 + 2OH−   (1.1)

The hydroxide ions generated by these reactions raise the pH in the near-electrode region, significantly accelerating the polycondensation of the SiO2 precursor (Fig. 1.2b, c) and the formation of a SiO2 film on the electrode surface. Since the potential is typically applied for less than a few minutes, or even seconds, the overall pH of the bulk solution does not change significantly.
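The amount of catalyst (OH−) generated, and hence the local pH jump, is set by the charge passed. The back-of-the-envelope sketch below applies Faraday's law with invented but typical values of current density, deposition time and reaction-layer thickness; it is an order-of-magnitude illustration only, not data from this work (diffusion and buffering are neglected).

```python
import numpy as np

# Order-of-magnitude sketch of OH- generation during EAD (hypothetical values).
# Each reaction in scheme (1.1) yields one OH- per electron, so Faraday's
# law gives n(OH-) = Q / F regardless of which reaction dominates.
F = 96485.0            # C/mol
j = 1.0e-3             # A/cm^2, assumed cathodic current density
t = 10.0               # s, assumed deposition time
delta = 10e-4          # cm, assumed ~10 um reaction layer near the electrode

q = j * t                          # C/cm^2, charge passed per unit area
n_oh = q / F                       # mol OH- per cm^2 of electrode
c_oh = n_oh / delta                # mol/cm^3 within the near-electrode layer
print(f"OH- produced: {n_oh:.2e} mol/cm^2")
print(f"Local OH- concentration: {c_oh*1000:.3f} mol/L "
      f"-> pH ~ {14 + np.log10(c_oh*1000):.1f}")
# Even a few seconds of mA/cm^2-scale current shifts the local pH by several
# units, triggering fast polycondensation only at the electrode surface.
```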
The thickness of the formed film can easily be tuned by controlling the value [Shacham] and duration [Qu] of the applied negative potential, which determine the amount of catalyst (OH− ions) formed and thus the rate of precursor polycondensation. In particular, the EAD method has been used to obtain ultra-thin SiO2 films with ordered, vertically oriented pores [Walcarius]. Since gelation and drying during EAD occur independently of each other, coatings obtained by this method are much more porous than those obtained by classical sol-gel deposition [Functionalized sol-gel materials with electrochemistry; Collinson]. The larger number of pores facilitates the diffusion of reactants through the film, improving the sensitivity and response time, which is especially important for electrochemical sensors.

Additives introduced into the sol during EAD can be used to form films containing encapsulated mediators such as ferrocenedimethanol and ruthenium bipyridyl complexes [Electrodeposited silicate films]. The EAD method is also suitable for the encapsulation of biomolecules, in particular because of the smaller amount of alcohol produced during deposition [Collinson; Electrodeposited silicate films], which could otherwise denature proteins. It has been used to immobilize biomolecules such as glucose oxidase [chitosan-GOx-AuNP biosensor; Jia; Nadzhafova], hemoglobin [silica nanocomposite films on pyrolytic graphite; Nadzhafova] and peroxidase [Yang]. These biomolecules are among the most stable and can withstand relatively harsh chemical conditions, which is why they are widely used in the development of new types of biosensors [Wilson]. The application of the EAD method to the immobilization of other types of enzymes, however, remains an unresolved and poorly researched issue.
In addition, owing to the wide possibilities for controlling the thickness of the formed film, this method is of interest for the modification of nanostructured electrodes [Shacham; Qu; Mandakini; Stefanie].

1.3. Application of nanomaterials in biosensor development

Nanomaterials, having a size of less than 100 nm in at least one dimension, are widely used in modern analytical chemistry, particularly in the field of chemical sensors. Their uniqueness lies in the fact that the properties of nanosized particles differ significantly from those of the bulk material [Pratibha], which makes it possible to tune the properties of nanomaterials by varying their size [Asefa]. They have exceptional thermal and electrical properties, high activity and high surface area, and can thus be used to improve the sensitivity and response time of (electrochemical) sensors [Wang]. Nanoscale materials have been used to achieve direct electron transfer between the electrode and the enzyme, to accelerate electrochemical reactions, to amplify the signal of the bio-recognition element, etc. [68].

1.3.1. General properties of nanomaterials: nanoparticles, nanotubes, nanofibers

Nanomaterials that have found application in sensor development can be classified by dimensionality [Valentini; 1D nanomaterials in nanoscale biosensors]:

- zero-dimensional (0D) nanomaterials: particles whose size in all three dimensions is less than 100 nm, i.e., nanoparticles and quantum dots;

- one-dimensional (1D) nanomaterials: particles larger than 100 nm in only one dimension, the other two not exceeding 100 nm; these include various nanofibers, nanotubes and nanowires;

- two-dimensional (2D) nanomaterials: materials larger than 100 nm in two dimensions, i.e., various nanosheets and nanofilms with a thickness of less than 100 nm. The rapid development of sensors based on this type of material in recent years is associated with the discovery of graphene [71,72].

Although nanomaterials may be made of any material, mainly metal and carbon nanomaterials and conductive polymers are used in electrochemistry, because of their high electrical conductivity. Gold and platinum nanoparticles are the most widely used type of nanomaterial for the development of amperometric biosensors [Katz]. Several layers of nanoparticles deposited on an electrode form a porous layer with a large surface area, which can adsorb and concentrate a large number of substances [Fang].
Such a layer can also be considered as an array of nanoelectrodes, with its own advantages [Wenlong]. Gold nanoparticles are able to provide stable immobilization of enzymes while preserving their activity, and allow direct electron transfer between the electrode and the enzyme to be achieved without added mediators [Pingarrón; Guo]. Platinum nanoparticles are mainly used in amperometric biosensors based on oxidases, owing to their ability to greatly facilitate the oxidation and reduction of hydrogen peroxide and oxygen [GOx-PtNP/mesoporous carbon biosensor; Hrapovic; Shao].

Of great interest for the development of electrochemical biosensors are also one-dimensional materials [Wang; 1D nanomaterials in nanoscale biosensors]. Owing to their considerable length-to-diameter ratio, they can act as nanowires, increasing the conductivity of the modifying film and connecting the electrode surface with molecules encapsulated in the film [Wang]. Metal nanowires can be used for the direct detection of biological and chemical substances [Patolsky], but their use in the design of enzymatic biosensors, apart from a few examples [Pt nanowire nanoelectrode array; Qu], has not been sufficiently explored.

Among carbon nanomaterials, carbon nanotubes (CNTs) have attracted great interest since their discovery [Iijima]. Single-walled CNTs consist of a single atomic layer of graphite rolled into a cylinder with a large length-to-diameter ratio; multi-walled CNTs consist of several cylinders of different diameters placed one inside another (Fig. 1.5). CNTs have unique electrical, mechanical and structural properties that make them very attractive for use in electrochemical sensors [Merkoçi; Agüí]. The high sensitivity of CNT conductivity to adsorbed molecules allows their use as nanoscale DNA sensors, and their ability to accelerate the electron transfer of many important biomarkers can significantly improve the characteristics of CNT-based enzyme electrodes [Wang].
CNTs can also accumulate important biomolecules (e.g., nucleic acids [Wang]) and mitigate the poisoning of the electrode surface by reaction products [Musameh]. The possibility of detecting H2O2 and NADH at low potential, with limited surface passivation during NADH oxidation, makes CNTs an ideal material for amperometric biosensors based on oxidases and dehydrogenases [Wang]. Moreover, studies have shown that vertically oriented CNTs covalently bound to the active center of an enzyme can serve as a direct conductor between the electrode and that center (which is usually located deep inside the molecule and isolated by protein chains) [aligned CNT arrays; Patolsky].

1.3.2. Methods of electrode modification with carbon nanotubes. The electrophoretic deposition method

Successful implementation of CNTs in amperometric biosensors requires adequate control of their chemical and physical properties, their functionalization and their immobilization on the surface [Wang]. Simply mixing CNTs with an enzyme solution in most cases leads to a very low final concentration of nanotubes and often to enzyme denaturation or CNT aggregation. There are two basic approaches to surface modification with CNTs: direct synthesis of CNTs from metal catalysts located on the electrode surface, or modification of the surface with pre-synthesized CNTs obtained by various techniques. Direct synthesis methods allow a large number of CNTs firmly fixed on the electrode surface to be obtained, sometimes even vertically oriented [vertically aligned CNTs on glassy carbon]. However, most synthesis methods require high temperatures, pressures and sophisticated equipment, which limits their use in ordinary laboratories. Therefore, most modified electrodes are obtained by deposition of a CNT dispersion followed by drying, or by the preparation of composite electrodes based on CNTs mixed with carbon paste [CNT paste electrodes; María], Teflon [Wang], polymers [CNT-polymer composites] or ceramics [Habibi].

Because as-synthesized CNTs usually contain many impurities of other allotropic modifications of carbon, purification by treatment with oxidizing acids is often necessary before use [98]. Besides eliminating impurities, this treatment also creates carboxyl functional groups at the defect sites in the CNT structure.
The presence of these groups enables the covalent immobilization of bio-recognition molecules and the integration of CNTs into polymer structures [99]. A limitation to the broad application of CNTs in biosensor development is their negligible solubility in most solvents (including complete insolubility in inorganic solvents) [100], which makes the preparation of composite electrodes with a high CNT content quite challenging. Moreover, the difficulty of manipulating CNTs is related to their small size and tendency to aggregate, which prevents the formation of homogeneous and reproducible coatings on the electrode surface [101]. Additives such as surfactants, Nafion, chitosan and DNA are therefore often used to improve the dispersion of CNTs in solvents [102].

Two conditions are desirable for the fabrication of CNT-based enzyme electrodes: (a) a sufficiently high CNT content in the final modifying film, in order to take full advantage of their properties, and (b) biocompatibility and mild conditions of bio-composite electrode fabrication, to ensure retention of enzymatic activity (hence the undesirability of organic solvents and large quantities of surfactants). Given this, the simultaneous deposition and concentration of CNTs from aqueous solution would be optimal; electrophoretic deposition (EPD), in which charged CNTs migrate in a constant electric field and accumulate on an electrode, meets both requirements.

To date, CNT coatings obtained by EPD have mainly been used for the creation of new composite materials [108], field-emission devices [109,110], supercapacitors [111,112] and fuel cells [113-115], and for biomedical applications [116-118]. However, the EPD method, with its inherent advantages such as the possibility of depositing CNTs from low-concentration aqueous solutions, can also be used to fabricate CNT matrices for amperometric biosensors. Currently there are only a few examples in the literature of electrophoretically deposited CNTs (as part of a CNT-polyaniline composite) being applied to the construction of biosensors, based on cholesterol oxidase [119] and glycerol dehydrogenase [120], and the influence of the characteristics of the CNT layer is not discussed in these works.

1.4. Amperometric biosensors based on SiO2 and oxidases

1.4.1. Overview of oxidases used in amperometric biosensors

The oxidase subclass contains more than 100 enzymes that catalyze the specific oxidation of many important biologically active organic compounds by molecular oxygen [Wang]. In most cases, the enzymatic reactions catalyzed by oxidases follow the scheme:

Substrate + O2 --(oxidase)--> Product + H2O2   (1.2)

The enzyme glucose oxidase (GOx) is the "pioneer" of amperometric biosensors; the rapid development of this field began with the report of the first GOx-based biosensor [121]. The reason for its popularity lies in the spread of diabetes, which affects people around the globe: regular screening of the blood glucose concentration becomes a daily routine for patients, and amperometric biosensors may be the solution to this problem [122]. The popularity of GOx is, however, also linked to its unique properties of high specificity, activity and stability [Wilson], which make it suitable for testing new immobilization methods. In addition to the above, biosensors exist based on xanthine oxidase, ascorbate oxidase, bilirubin oxidase, lysine oxidase and other enzymes [1].
There is a considerable body of work on the immobilization of the mentioned enzymes in SiO2 films by the sol-gel method for the development of amperometric biosensors. This particularly concerns cholesterol oxidase [139,141,155-157], lactate oxidase [Os-polymer lactate biosensor; 126,158,159], tyrosinase [160-163] and glucose oxidase [Narang; Nadzhafova; 164-167]. Among the other enzymes of this type, choline oxidase has an important substrate, choline, whose determination is needed in biomedical practice as well as in food analysis. In addition, this enzyme can be used to analyze important inhibitors such as pesticides. However, we found only a few reports in the literature on ChOx immobilization in SiO2 films [168,169]. Taking into account the above-mentioned properties of SiO2 materials, the development of biosensors based on ChOx and SiO2 is a promising possibility.

1.4.2. Problems of electrochemical detection of H2O2

As noted above (see section 1.1.1), the indicator reaction of first-generation oxidase-based biosensors may be the electrochemical detection of consumed oxygen through its reduction at the electrode at a potential of -0.7 V. Although the first biosensors used this principle, the approach has significant drawbacks that led to its abandonment: first, the oxygen content varies significantly with the pH, temperature and composition of the solution; second, hydrogen peroxide is reduced simultaneously at the same potential. The electrochemical detection of hydrogen peroxide, which is a product of the enzymatic reactions of almost all oxidases, is therefore considered more promising [170]. The detection can be performed by its oxidation at a potential of about 0.6 V, where dissolved oxygen does not interfere. However, the use of such biosensors in the analysis of real samples faces problems associated with side reactions that may occur at the electrode at this potential. For example, the analysis of biological fluids is complicated because many of their components, such as ascorbic acid, dopamine, bilirubin and some others, are oxidized at the electrode at 0.6 V [Eggins].

1.5. Amperometric biosensors based on SiO2 and dehydrogenases

1.5.1. Overview of dehydrogenases used in amperometric biosensors

The dehydrogenase subclass includes more than 300 enzymes that use the coenzyme NAD+/NADH (or its modification NAD(P)+/NAD(P)H) as the electron acceptor. Owing to the reversibility of the NAD+/NADH couple, dehydrogenases can in most cases also catalyze the reverse reaction [187]. In general, the enzymatic reaction of dehydrogenases can be represented as:

Substrate + NAD+ <--(dehydrogenase)--> Product + NADH   (1.3)

The number and diversity of dehydrogenases provide more options for the construction of various amperometric biosensors, and the absence of oxygen from the reaction scheme eliminates most of the disadvantages associated with it (difficult detection, interfering influences, etc.) [188].
The reversibility of the enzymatic reactions broadens the analytical applications, since the reaction product can be determined instead of the substrate. Nevertheless, biosensors based on dehydrogenases are much less common than those based on oxidases, owing to the difficulty of electrochemical detection of NAD+/NADH (see paragraph 1.5.2).

The dehydrogenases most widely used in biosensor development (alcohol dehydrogenase, glucose dehydrogenase, lactate dehydrogenase, glutamate dehydrogenase) more or less duplicate the functions and applications of the corresponding enzymes of the oxidase subclass (see paragraph 1.4.1). However, alcohol dehydrogenase is used more than its oxidase analogue because of its higher activity and stability in sensors and its much higher specificity towards ethanol [145]. Lactate dehydrogenase is likewise characterized by higher selectivity than lactate oxidase [189].

(Form)aldehyde dehydrogenase is an enzyme that catalyzes the oxidation of formaldehyde to formic acid. Besides the coenzyme NAD+, the reaction also requires the coenzyme glutathione. Biosensors based on this enzyme can be used to determine formaldehyde, which has allergenic, mutagenic and toxic effects, in the food, pharmaceutical and cosmetic industries [190-193]. The disadvantages of this enzyme are its low activity, high cost and the difficulty of its application due to the need for two coenzymes [193].

Glycerol dehydrogenase provides the oxidation of glycerol by the coenzyme NAD+. Biosensors based on this enzyme may be useful for wine quality control during fermentation [194,195] as well as in the clinical analysis of blood [196]. At the same time, a lack of selectivity of this enzyme and the reversibility of the enzymatic reaction have been reported [197].

Sorbitol dehydrogenase (DSDH) is an enzyme that catalyzes the conversion of the polyhydric alcohol sorbitol to fructose. Measurement of the sorbitol content is important in the analysis of diabetic food and in clinical analysis to prevent the development of diabetes. To date there are only a few reports on the development of biosensors based on this enzyme [198-201], and among them only in [198] was it used for the analysis of real samples. In addition, immobilized DSDH can be used in the design of bioreactors for electro-enzymatic synthesis [202,203].

Malate dehydrogenase catalyzes the oxidation of malic acid and its salts. Biosensors based on this enzyme can be used to determine malic acid in foods: fruits, juices and wines, where it affects the organoleptic properties [187,204,205]. 3-Hydroxybutyrate dehydrogenase can be used in biosensors for the clinical analysis of blood, where the determination of 3-hydroxybutyrate is important to avoid life-threatening diabetic ketoacidosis in diabetic patients [206-208].

As mentioned above, data on the immobilization of dehydrogenases in SiO2 films by the sol-gel method and on the development of amperometric biosensors based on such modified electrodes are largely missing from the literature. Among all the above enzymes, such a procedure has been described only for lactate dehydrogenase [209-211] and malate dehydrogenase [212]. Given the small number of publications and the importance of the substrate, the development of a biosensor based on sorbitol dehydrogenase immobilized in a SiO2 film is of particular interest.
1.5.2. Problems of electrochemical detection of NAD+/NADH

The coenzyme NAD+ serves as the electron acceptor in most enzymatic reactions of dehydrogenases. The direct electrochemical oxidation of NADH at bare electrodes, however, requires a considerable overpotential and leads to fouling of the electrode surface, so redox mediators are commonly used to shuttle electrons at lower potentials. Of particular interest is the well-developed aromatic system of phenothiazine dyes, which enables them to adsorb easily and firmly on carbon surfaces, including carbon nanotubes [233,234]. Such a combination of a nanostructured electrode and a mediator leads to a synergistic effect: a decrease in the detection potential and an increase in the sensitivity and stability of NADH detection [235,236]. This makes the approach applicable to the development of dehydrogenase-based biosensors [189,237-239].

Conclusions from the literature survey

Analysis of the literature data has shown that electrochemical biosensors are a promising branch of sensor development. An approach to the immobilization of biomolecules on the electrode surface that deserves attention is their encapsulation in a thin polymer film of silica, owing to the large porosity and biocompatibility of the latter. However, unresolved issues remain concerning the use of such bio-composite films in biosensor development, e.g., achieving firm fixation of the enzyme in the film while maintaining sufficient activity. In addition, the fabrication of SiO2 films by traditional methods does not always give reproducible results. The electrochemically-assisted deposition method is an alternative to these methods that allows reproducible porous films with controlled thickness to be obtained. This method has also been applied successfully to bio-encapsulation, as exemplified by glucose oxidase and hemoglobin, but information about its application to other types of enzymes, particularly dehydrogenases, is absent from the literature.

The use of nanomaterials can significantly improve the analytical characteristics of biosensors based on oxidases and dehydrogenases, including sensitivity and selectivity. Platinum nanoparticles are promising for choline oxidase immobilization, since they can increase the sensitivity of hydrogen peroxide detection. Carbon nanotubes are suitable for the stable, low-potential detection of the coenzyme NADH, so they can be used for the immobilization of sorbitol dehydrogenase. However, there is little information in the literature about the selection of nanomaterials and the methods of their immobilization.

CHAPTER 2. EXPERIMENTAL PART

Chemicals and reagents

Enzymes. Three enzymes of the oxidoreductase class were used in this work: glucose oxidase (GOx, EC 1.1.3.4), choline oxidase (ChOx) and D-sorbitol dehydrogenase (DSDH); the latter was supplied as a solution with a concentration of 10 mg/mL (activity 100 units/mg) and has an isoelectric point of 4.3. Enzyme solutions (except DSDH) were prepared by dissolving an appropriate amount of enzyme (final concentration 10 mg/mL) in 0.067 M PBS (pH 6.0) and were stored at 4 °C when not in use.

Reagents for sol-gel synthesis. For the synthesis of the SiO2-based sol, tetraethoxysilane (TEOS, 98%, Alfa Aesar), concentrated hydrochloric acid (HCl, 36%, Prolabo) and deionized water from a Purelab Option water purification system were used. Polyelectrolytes and surfactants were added to the sol as additives: poly(diallyldimethylammonium chloride) (PDDA), polyethyleneimine (PEI) and cetyltrimethylammonium bromide (CTAB). The addition of positively charged polyelectrolytes and surfactants was intended to improve the interaction of the enzyme with the silanol groups of the sol-gel film. Silanol groups are negatively charged at the pH used during sol-gel synthesis (5-7) owing to their deprotonation [240], which results in electrostatic repulsion, given the negative charge of the enzyme molecules at such pH.
Therefore, the use of a positively charged polyelectrolyte, which plays the role of a stabilizing agent between SiO2 and the protein, leads to significant improvements in enzyme encapsulation [241]. In addition, cationic surfactants such as CTAB improve the structure of the sol-gel film and can enhance the response stability of biomolecules encapsulated in the SiO2 film owing to their inclusion in micelles [Nadzhafova; silica nanocomposite films on pyrolytic graphite]. For the immobilization of the coenzyme NAD+ in the SiO2 film, 3-glycidoxypropyltrimethoxysilane (GPS, 98%, Sigma) was used as a bonding agent; it is able to bind the adenine residue of the coenzyme molecule and to react with silanol groups in the condensation reaction [203].

Reagents for voltammetric measurements. All solutions were prepared in deionized water by dissolving accurately weighed amounts and/or by appropriate dilution; solutions were standardized by titrimetry. 1,1′-Ferrocenedimethanol (98%, Aldrich), hydrogen peroxide (H2O2, 35%, Acros) and the oxidized and reduced forms of β-nicotinamide adenine dinucleotide (NAD+/NADH, 98%, Sigma) were used as electrochemical probes. Glucose (Acros), D-sorbitol (98%, Sigma) and choline chloride (97%, Fluka) were used as substrates of the corresponding enzymes. Glucose solutions were left to mutarotate for at least 24 h before use. The voltammetric measurements were conducted in phosphate and Tris-HCl buffer solutions as background electrolytes unless otherwise specified. The acidity and composition of the buffer solutions were chosen taking into consideration the optimal pH of enzymatic activity and the absence of interfering influence of solution components on the substrate determination. Adsorption of the dye methylene green (MG, >80%, Sigma) was used to determine the electroactive surface area of the electrodes.

Substances studied for interfering influence. To investigate interferences in the determination of sorbitol, a number of carbohydrates that can be found in food were chosen, as well as representatives of the homologous series of polyhydric alcohols, including the sorbitol stereoisomer mannitol. Components of cosmetic products (glycerol, sodium lauryl sulfate) and of biological fluids (ascorbic acid, urea) were tested as well. Iron(III) nitrate, acidified to prevent its hydrolysis, was used for masking ascorbic acid. To investigate interferences in the determination of choline, we studied mono- and disaccharides that can be present in food (glucose, sucrose, lactose), substances present in biological fluids (uric and ascorbic acids, urea), ethanol and some metal ions (Pb(II), Zn(II), Cu(II), Fe(III)), which are classical enzyme inhibitors. All solutions were prepared by dissolving accurate amounts of the substances or by dilution of stock solutions.

Apparatus. For the purpose of comparison, platinum and gold macroelectrodes were used. For Pt-Nfb and GCE, a specially designed Teflon cell was used in which the working electrode is located on the bottom side and its exposed surface area is limited by a rubber ring (ø = 6 and 9.5 mm). The electrode was connected to the potentiostat using a copper wire and silver glue (see
Scheme 2.1 for Pt-Nfb). Scheme 2.1. Connection of the Pt-Nfb electrode to the electrochemical cell. For the electrophoretic deposition of CNT, a specially designed device was used, consisting of a DC power source with adjustable voltage and two steel plate electrodes placed strictly parallel to each other at a distance of 6 mm (Fig. 1.6). The plate acting as the anode was shorter, allowing connection of the GCE. The electrode surfaces were studied by scanning electron microscopy (commercial microscope Hitachi FEG S4800, SCMEM, University of Nancy) and by atomic force microscopy (Asylum Research MFP-3D-Bio microscope). To ensure continuous mixing of solutions, magnetic stirrers with adjustable speed were used. The acidity of solutions was checked with Mettler Toledo S220 and Piccolo HI 1290 pH meters. Rationale for the choice of objects and methods. Modification of electrodes by the electrochemically-assisted deposition method. The modification of electrodes with a SiO2 film was performed by the electrochemically-assisted deposition method using an alcohol-free sol composition, which does not inhibit enzyme activity [Nadzhafova et al.]. - For the preparation of the sol for GOx immobilization, 2.28 mL of TEOS, 2.0 mL of H2O, and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h. Then, prior to introduction of the enzyme into the medium, 1.66 mL of 0.1 M NaOH was added to neutralize the sol (to avoid possible enzyme denaturation in acidic medium). The enzyme solution (50 μL of PBS (0.067 M, pH 6.0) and 100 μL of 10 mg/mL GOx solution) was added to 0.5 mL of the hydrolyzed sol and left to stand for 1 h. - For the preparation of the sol for ChOx immobilization, 0.21 mL of TEOS, 0.15 mL of H2O and 0.26 mL of 0.01 M HCl were mixed with a magnetic stirrer for 12 h. Prior to EAD, 0.4 mg of CTAB, 0.03 mL of 0.067 M PBS (pH 6.0) and 0.01 mL of ChOx solution (10 mg/mL) were added to 0.5 mL of the hydrolyzed sol. - For the preparation of the sol for DSDH immobilization, 2.28 mL of TEOS, 2.0 mL of H2O, and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h. The final sol was diluted three times with pure water, and a 100 μL aliquot of this solution was then mixed with 100 μL of PDDA (20 wt% in water) and 100 μL of DSDH solution. - For the preparation of the sol for the co-immobilization of DSDH and NAD+ [203], 2.28 mL of TEOS, 2.0 mL of H2O, and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h. The sol was diluted five times with water, and a 20 μL aliquot was mixed with 10 μL of 20% PEI solution, 10 μL of NAD+-GPS solution (prepared by mixing 25 mg of NAD+ and 37.6 mg of GPS in 400 μL of Tris-HCl buffer solution, pH 7.5, for 14 h at room temperature) and 20 μL of DSDH solution. The prepared sol was introduced into the electrochemical cell, where a negative potential (typically from -1.1 to -1.3 V) was applied to the working electrode in order to initiate the generation of OH- ions. The potential and/or the duration of its application were optimized for each individual case. After electrodeposition, the working electrode was kept in the sol for 5 minutes, then gently washed with water and dried at room temperature for 1 hour before use.
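For clarity, the chemistry behind this electrochemically-assisted deposition step can be summarized as follows; the reactions are written in their commonly used textbook form rather than taken verbatim from this work:

$$
\begin{aligned}
2\,\mathrm{H_2O} + 2e^- &\rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} &&\text{(cathodic generation of the catalyst)}\\
\mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- &\rightarrow 4\,\mathrm{OH^-} &&\text{(parallel oxygen reduction)}\\
\mathrm{Si(OC_2H_5)_4} + 4\,\mathrm{H_2O} &\rightarrow \mathrm{Si(OH)_4} + 4\,\mathrm{C_2H_5OH} &&\text{(hydrolysis of TEOS in the sol)}\\
\equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv \; &\rightarrow\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv{} + \mathrm{H_2O} &&\text{(condensation, accelerated at high local pH)}
\end{aligned}
$$

The local pH increase at the polarized electrode confines gelation to the immediate vicinity of the electrode surface, which is what makes the one-step encapsulation selective.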
Modification of electrodes with carbon nanotubes by the electrophoretic deposition method. This work used single-walled carbon nanotubes functionalized with carboxylic groups (CNT, >90%, 4-5 nm × 0.5-1.5 μm, 1-3 atom.% COOH groups, «Sigma»). The presence of carboxyl groups improves the dispersibility of CNT in water and gives them the negative charge required for their movement in a constant electric field. For the suspension preparation, the appropriate amount of CNT was weighed on an analytical balance (to give a concentration of 0.1 mg/mL), deionized water was added, and the suspension was treated in an ultrasonic bath for 12 hours. Glassy carbon plates were polished before modification with wet 4000-grit emery paper and Al2O3 powder (0.05 μm, «Buehler»). The same procedure was used to renew the electrode surface after successful modification and measurements. The two parallel electrodes were introduced into a 4 mL aliquot of the dispersion, and a constant potential difference of 60 V (corresponding to an electric field of 100 V/cm between the electrodes) was applied for the required time, usually from 5 to 120 s. The applied voltage of 60 V was used throughout because it constitutes a good compromise between a high speed of deposition and limited decomposition of ultra-pure water (which would generate oxygen bubbles that may affect CNT assembly). The depth of electrode immersion was kept constant in order to ensure the same contact area with the dispersion (1 cm2) and the reproducibility of deposition. After deposition, the glassy carbon plate was carefully removed from the remaining dispersion, gently washed in water, first dried horizontally at room temperature and then placed in an oven at 450 °C for 1 h. For the preparation of macroporous CNT layers, a mixture of CNT (0.1 mg/mL) and polystyrene beads (0.05 mg/mL to 0.5 mg/mL) was used. It was obtained by adding to the CNT suspension aliquots of a concentrated (5%) suspension of polystyrene beads (PS beads, 500 nm) synthesized according to [243]. Polystyrene was chosen because homogeneous beads are easy to synthesize and can be removed by heating. This mixture was stirred, brought into the cell and subjected to EPD as described above. After the deposition, the electrode was carefully removed from the suspension and left at room temperature until dry. The template removal was carried out in an oven at 450 °C for 1 h with a 15 °C/min ramp. For the purpose of comparison, GCE was also modified with a CNT layer by the drop-coating method: the CNTs were deposited by dropping 10 μL of the same aqueous suspension as used for EPD and leaving it to dry completely. For the electrochemical generation of platinum nanoparticles on the surface of CNT, we used the technique described in [Local electrocatalytic induction of sol-gel deposition at Pt nanoparticles]. The electrode was dipped in a solution of 1 mM Pt(NO3)2, also containing 0.1 M NaNO3, and was exposed to a series of pulses. The sequence of pulses in each series was as follows: 0.035 V for 1 s, -0.7 V for 0.2 s, 0.035 V for 1 s. The electrochemical reduction of platinum and the formation of nanoparticles occur during the application of the negative potential; the positive potential serves as a resting ("dumping") step, at which no electrochemical reaction takes place. The choice of the number of pulse series is justified in section 3.2.1.1.
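As an illustration of this pulse sequence, a minimal sketch is given below. The `potentiostat` object and its `set_potential` method are hypothetical placeholders for whatever instrument-control API is actually available; only the pulse amplitudes and durations are taken from the text (the reduction-step length is parameterized, since it is later optimized in section 3.2.1.1).

```python
import time

# Pulse amplitudes from the text; the instrument API is a hypothetical placeholder.
REST_POTENTIAL = 0.035       # V, resting ("dumping") step, no reaction occurs
REDUCTION_POTENTIAL = -0.7   # V, Pt precursor reduction -> nanoparticle growth

def deposit_pt_nanoparticles(potentiostat, n_series=3, reduction_time=0.2):
    """Apply n_series of (rest 1 s / reduce / rest 1 s) pulse triplets."""
    for _ in range(n_series):
        potentiostat.set_potential(REST_POTENTIAL)
        time.sleep(1.0)
        potentiostat.set_potential(REDUCTION_POTENTIAL)
        time.sleep(reduction_time)   # nanoparticles form during this step
        potentiostat.set_potential(REST_POTENTIAL)
        time.sleep(1.0)
```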
Voltammetric measurements. To study the properties of the modified electrodes, we used cyclic and linear-sweep voltammetry, amperometry and hydrodynamic voltammetry. All voltammograms were recorded against a saturated silver chloride (Ag/AgCl) reference electrode. The sensitivity of a modified electrode to its substrate was calculated as the slope of the current versus substrate concentration plot [244]. Surface observations. For scanning electron microscopy (SEM), the modified electrode was attached to the holder with special conductive tape. Since the conductivity of the electrodes was sufficient, no metal sputter coating was applied. The accelerating voltage of the electron beam ranged from 1 to 15 kV and the magnification from 1,000× to 400,000×. If necessary, the electrode was placed at an angle to the electron beam. Atomic force microscopy (AFM) investigations were conducted in a temperature-controlled room using a V-shaped silicon nitride probe (MLCT-EXMT-BF, «Veeco Instruments», USA) with a tip radius of curvature of 50 nm and a spring constant of 0.1 N/m. Scanning was carried out in contact mode at rates from 0.5 to 2 Hz; the data were captured with a laser optical detector. A thin scratch was made with a needle for profile and film-thickness measurements; AFM images were captured at the border of the scratch, allowing the film thickness to be estimated from the step height. AFM images were processed using the WSxM 5.0 software («Nanotec Electronica SL», Spain) [245]. Scanning electrochemical microscopy (SECM), with a setup built around a commercial Sensolytics instrument (Germany), was used to study the conductivity of the bio-composite films. The measurements were performed with a platinum microelectrode (ø = 25 μm) in a solution containing 0.1 M KCl and 1 mM ferrocenedimethanol. SECM profilometry was also applied to study the morphology of CNT layers thicker than 500 nm; in this case a glass needle was used, and the distance from it to the surface was measured with piezoelectric sensors [246]. For electrochemical studies of the electroactive surface area, the dye methylene green was adsorbed on the electrode surface by immersing the electrode in a 1 mM solution of the dye and stirring on a magnetic stirrer for 12 hours. The electrode was then washed thoroughly with water and dried at room temperature. Calculations based on electrochemical measurements. Calculation of the apparent Michaelis constant. The apparent Michaelis constant was calculated in order to characterize the affinity of the corresponding enzyme for its substrate, as well as to determine the upper limit of the linear range of the modified electrodes. It was assumed that the enzymatic reaction is single-substrate, or that the concentration of the second substrate (in the case of a coenzyme) is saturating, which allows Michaelis-Menten kinetics to be applied [247]:

$$V = \frac{V_{\max}\,S}{K_M + S} \qquad (2.1)$$

where V is the rate of the enzymatic reaction; V_max is the maximum rate of the enzymatic reaction under the given conditions; S is the equilibrium substrate concentration, mM; K_M is the Michaelis constant, mM. The current flowing through the electrochemical cell at a known substrate concentration was taken as the rate of the enzymatic reaction. In this case, the V-S plot of equation (2.1) has the form of a hyperbola approaching the straight line V = V_max.
The processing of such curves is difficult (although possible with modern software), so the equation was linearized by the Lineweaver-Burk method [248]:

$$\frac{1}{V} = \frac{1}{V_{\max}} + \frac{K_M}{V_{\max}}\cdot\frac{1}{S} \qquad (2.2)$$

In the coordinates 1/V versus 1/S, the graph of equation (2.2) is a straight line that intersects the abscissa at the point -1/K_M, which makes it possible to find the Michaelis constant. Accordingly, in all cases the apparent Michaelis constant was obtained from the least-squares fit of the 1/V versus 1/S plot and the intersection of this line with the abscissa. As an alternative, direct fitting of the V-S plot with the «Hill» function in the «Origin 8.5» software was used. Calculation of the electroactive surface area using dye adsorption. The adsorption of the dye methylene green (MG) was used to estimate the approximate electroactive surface area of electrodes modified with CNT. It was assumed that the dye molecules form a monolayer on the surface of the nanotubes [234]. The area of the anodic or cathodic peak of the adsorbed dye on the voltammogram of the modified electrode was calculated using the «Origin 8.5» program (e.g. Fig. 4.5a). The electroactive surface area was calculated using the formula:

$$S_{el.act} = \frac{S_{peak}\times 6.24\cdot 10^{18}\times S_1}{\nu \times n} \qquad (2.3)$$

where S_peak is the area of the MG peak on the voltammogram, A·V; ν is the potential scan rate, V/s; n is the number of electrons transferred in the electrochemical reaction (n = 2 for MG); 6.24·10^18 is the number of electrons per 1 C of charge; S_1 is the area occupied by one adsorbed dye molecule (0.8 nm2, i.e. 8·10^-13 mm2, for MG [249]), mm2. Conclusions to the chapter. This chapter has described the objects of study and justified the choice of reagents and methods for carrying out the necessary experiments. The choice of the interfering substances investigated in developing the amperometric methods for the detection of sorbitol and choline was substantiated. The techniques of sol-gel processing and of the generation of bio-composite SiO2 films on the electrode surface by the electrodeposition method were described. The method of forming CNT coatings, including macroporous CNT layers, by electrophoretic deposition on the surface of the glassy carbon electrode was defined. The techniques of voltammetric measurements and spectroscopic studies, as well as the formulas used for calculations, were listed. CHAPTER 3. ENCAPSULATION OF OXIDASES BY ELECTROCHEMICALLY-ASSISTED DEPOSITION OF SOL-GEL BIOCOMPOSITE. One of the needs of electroanalytical chemistry is the development of new, simple and effective methods of enzyme immobilization on the surface of electrodes. The EAD method allows biomolecules to be encapsulated in a SiO2 film on the electrode surface by a simple one-step process, which has so far been applied only to certain enzymes. Meanwhile, the immobilization of less active enzymes, such as ChOx, requires ways to increase the sensitivity of the modified electrodes. As mentioned in the literature survey (paragraph 1...), one possible approach for achieving this goal is the use of platinum-based nanomaterials, including nanoparticles and nanofibers [Platinum nanowire nanoelectrode array for the fabrication of biosensors; 253]. Such materials can exhibit electrocatalytic properties towards hydrogen peroxide oxidation due to their small size and large surface area. This chapter covers the results of applying the EAD method to the immobilization of oxidases in a bio-composite SiO2 film on the electrode surface.
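Before moving on, equations (2.2) and (2.3) introduced above can be illustrated with a short computational sketch; the numerical values below are placeholders, not data from this work.

```python
import numpy as np

def apparent_km_lineweaver_burk(S, V):
    """Apparent K_M and V_max from a least-squares 1/V vs 1/S fit (eq. 2.2)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(S), 1.0 / np.asarray(V), 1)
    v_max = 1.0 / intercept      # intercept of the fit equals 1/V_max
    k_m = slope * v_max          # slope of the fit equals K_M / V_max
    return k_m, v_max

def electroactive_area_mg(peak_area_AV, scan_rate_Vs, n_electrons=2,
                          s1_mm2=8e-13):
    """Electroactive surface area from the adsorbed-MG peak (eq. 2.3), mm^2."""
    charge = peak_area_AV / scan_rate_Vs             # A*V / (V/s) = C
    molecules = charge * 6.24e18 / n_electrons       # adsorbed MG molecules
    return molecules * s1_mm2                        # monolayer footprint

# Placeholder data: substrate concentrations (mM) and steady-state currents (uA)
S = [0.5, 1.0, 2.0, 4.0, 8.0]
V = [0.9, 1.6, 2.5, 3.4, 4.1]
km, vmax = apparent_km_lineweaver_burk(S, V)
print(f"K_M = {km:.2f} mM, V_max = {vmax:.2f} uA")
```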
The prospects of combining this method with nanomaterials to improve the sensitivity of modified electrodes were demonstrated, and the influence of platinum nanostructured materials on the analytical signal of the modified electrodes was studied. Immobilization of glucose oxidase into the SiO2 film on the surface of platinum nanofibers. Platinum nanofibers (Pt-Nfb), forming a conductive network with a large number of intersections, can be quite easily obtained by electrospinning, which allows the thickness and density of the network to be controlled [242,254,255]. Despite attractive characteristics such as the small diameter and high density of the fibers, there is no information in the literature about the use of electrospun Pt-Nfb for the development of biosensors. The first part of this section presents the results of the modification of Pt-Nfb with a bio-composite SiO2 film containing an enzyme of the oxidase class. As a model, we chose the enzyme glucose oxidase (GOx), which is often used in amperometric biosensors [122]. This is due to the acute need for quick monitoring of the glucose concentration in blood (and food) for people with diabetes. Another reason for the popularity of GOx is its high stability, which allows different immobilization approaches to be applied without loss of enzyme activity. Given this, we chose this particular enzyme as a model to verify the applicability of the Pt-Nfb network for oxidase immobilization in the SiO2 film by the electrochemically-assisted deposition method. In this case, the advantage of the method is the ability to selectively modify only the Pt-Nfb, without affecting the glass substrate. Morphological and electrochemical characteristics of the platinum nanofiber network. Given that electrospun Pt-Nfb had not previously been used as an electrode, we investigated their morphological and electrochemical properties, as well as the response of such electrodes to hydrogen peroxide. Electrochemical properties of the platinum nanofiber network. The mechanical stability of the assembly in solution is good enough to perform electrochemical measurements. As shown in Fig. 3.2, the fiber density affects the electroactive surface area. Samples displaying various densities of Pt nanofibers were characterized by cyclic voltammetry in sulfuric acid solution [256]. The electroactive surface area was determined from the area of the anodic oxidation peak of adsorbed hydrogen (Fig. 3.2a, shaded), which corresponds to the charge Q_H, using the standard conversion for polycrystalline platinum, S_el.act = Q_H/(210 μC cm^-2). The estimation made from the integration of the hydrogen desorption peak shows that one fiber layer exhibits an electroactive surface area (0.29 cm2) similar to the geometric surface area defined by the O-ring of the electrochemical cell (0.28 cm2). Immobilization of glucose oxidase using the electrochemically-assisted deposition method. In an attempt to cover only the surface of the Pt-Nfb, sol-gel electrochemically-assisted deposition was preferred over the classical evaporation method, which is basically restricted to film deposition onto flat surfaces. We carried out the immobilization of the model enzyme GOx on the surface of Pt-Nfb by the EAD method, and the influence of the EAD parameters on the thickness of the formed SiO2 film and on the electrochemical response of the immobilized enzyme was studied.
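A minimal sketch of this estimate follows, assuming the commonly used monolayer-hydrogen charge density of 210 μC/cm2 for polycrystalline platinum; the peak area and scan rate in the example are placeholders chosen to reproduce the order of magnitude quoted above.

```python
Q_H_UC_PER_CM2 = 210.0  # uC/cm^2, commonly assumed for polycrystalline Pt

def pt_electroactive_area(peak_area_AV, scan_rate_Vs):
    """Electroactive Pt area (cm^2) from the integrated H-desorption peak."""
    charge_uC = peak_area_AV / scan_rate_Vs * 1e6  # A*V / (V/s) = C -> uC
    return charge_uC / Q_H_UC_PER_CM2

# e.g. a 6.1e-6 A*V peak recorded at 0.1 V/s gives ~0.29 cm^2
print(f"{pt_electroactive_area(6.1e-6, 0.1):.2f} cm^2")
```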
Voltammetric characteristics of platinum nanofibers modified with a SiO2-glucose oxidase film. The cyclic voltammogram of Pt-Nfb in phosphate buffer has a complex shape typical of platinum electrodes (Fig. 3.4, curve 1). An increase in current is observed in the anodic region at a potential of 0.7 V, which can be attributed to the formation of platinum oxides. A significant peak at a potential of 0.0 V can be noticed in the cathodic region, caused by several reactions, including the reduction of platinum oxides. As mentioned in the literature survey (paragraph 1.2.3), the properties of the formed film, and therefore the response of the immobilized enzyme, are affected by the electrochemically-assisted deposition parameters, such as potential and duration. Therefore, the EAD duration was further optimized to achieve the best response of the immobilized enzyme. Influence of the duration of electrochemically-assisted deposition on the amperometric response of immobilized glucose oxidase. To assess the optimal conditions for obtaining the electrogenerated SiO2-GOx film, we investigated the dependence of the amperometric response of the modified electrode to glucose on the duration of EAD at a constant potential of -1.2 V. For 1-2 s depositions the response is negligible; the signal then increases sharply for a 3 s deposition, after which the sensitivity gradually decreases, approaching zero at 10 s (Fig. 3.5). To conclude this section, it can be stated that silica deposition occurs essentially on the Pt nanofibers at very short deposition times (<3 s). More prolonged electrolysis results in larger amounts of the OH- catalyst diffusing over the overall surface, thereby inducing film formation on both the platinum nanofibers and the glass substrate, which can then lead to the full encapsulation of the fibers in the silica material. Such morphological variations affect the electroactivity of the immobilized GOx: when the biocomposite film totally covers the nanofiber assembly (i.e., 10 s deposition, Fig. 3.6f), it totally blocks the electrochemical detection of glucose. Comparison of platinum nanofibers and a platinum macroelectrode for the immobilization of glucose oxidase. A comparison between Pt-Nfbs and a bare Pt electrode was performed using a 3 s electrochemically-assisted deposition under the same conditions. Similar results were obtained, which suggests that here the electrochemical response of the assembly is mainly limited by diffusion of glucose in the sol-gel layer (Fig. 3.8). Under these conditions, Pt-Nfbs behave as a partially-blocked electrode [257], and the diffusion layers of individual nanofibers overlap to form a common layer similar to the diffusion layer of a macroelectrode [258,260,261]. However, such a control experiment has to be interpreted very carefully, as the electrolysis of the sol occurs at a more negative potential on the bare Pt electrode than on Pt-Nfbs, potentially affecting the thickness of the deposit. In any case, these results highlight the interest of low-dimensional Pt-Nfbs combined with well-controlled electrochemically-assisted sol-gel deposition for improving the response of biomolecule-doped thin silica films. Immobilization of choline oxidase into the SiO2 film. Choline oxidase (ChOx), like glucose oxidase, belongs to the oxidase class, which makes it possible to use the same approach of sensitive hydrogen peroxide detection to develop a biosensor based on it.
However, unlike GOx, ChOx is characterized by much lower stability, which makes it necessary to find ways to increase its stability in the immobilized state. Thanks to the biocompatibility of SiO2-based materials, they are promising as a matrix for encapsulation of the enzyme on the electrode surface. The rapid one-step EAD method allows immobilization under mild conditions and shortens the exposure of the enzyme molecule to unfavorable conditions. Electrochemical generation of platinum nanoparticles on the surface of a glassy carbon electrode for choline oxidase immobilization. Pt-Nfb did not demonstrate significant advantages in sensitivity or selectivity compared with a planar platinum electrode, and the method for their preparation requires sophisticated equipment. Therefore, in order to obtain a choline biosensor, we decided to investigate another type of electrode: a glassy carbon electrode modified with platinum nanoparticles (Pt-Nps). Nanoparticles were chosen because they can easily be obtained on the electrode surface by electrochemical generation. At the same time, the size of such particles can be much smaller than the diameter of Pt-Nfb, which may contribute to an increased electrocatalytic effect compared with nanofibers. Optimization of the conditions of glassy carbon electrode modification with platinum nanoparticles. For the electrochemical generation of nanoparticles on the surface of GCE, we used the method described in [Local electrocatalytic induction of sol-gel deposition at Pt nanoparticles], which involves the alternation of reducing and resting ("dumping") pulses of short duration. During the negative pulse, reduction of the platinum precursor occurs on the electrode surface, leading to the formation of platinum nanoparticles; during the resting pulse, no electrochemical reaction occurs. The SEM image of the modified electrode (Fig. 3.9a) reveals the presence of nanoparticles uniformly distributed over the electrode surface. It is noticeable that unmodified GCE is almost inactive toward hydrogen peroxide in the potential range from -0.5 to 1.0 V: there is no difference between the voltammograms of this electrode in the presence and in the absence of H2O2 (Fig. 3.9b, curves 1, 2). A slight increase in current is seen from 0.9 V, which apparently precedes the H2O2 oxidation peak observed at potentials above 1 V. In contrast, the voltammogram of the modified GCE-Pt-NPs in the presence of hydrogen peroxide differs significantly from the voltammograms in the buffer solution (Fig. 3.9c). Two half-waves, cathodic and anodic, are visible at a potential of about 0.35 V, corresponding to the reduction and oxidation of hydrogen peroxide, respectively (Fig. 3.9). They appear due to the presence of platinum nanoparticles on the surface of GCE, which catalyze the electrochemical reactions of H2O2. Thus, the electrodeposition of Pt-NPs on the surface of GCE significantly shifts the potential of H2O2 oxidation in the negative direction and improves the electrochemical response. Such behavior makes the electrode a promising sensor for hydrogen peroxide. It can be assumed that the pulse duration affects the size of the nanoparticles and the number of pulses affects their quantity; changing these parameters may affect the catalytic properties. Therefore, to increase the sensitivity of GCE-Pt-NPs to hydrogen peroxide, the number and duration of the pulses were optimized.
The sensitivity was calculated as described in paragraph 2.3.3. The results are shown in Table 3.1 (Influence of the duration (1) and number of pulses (2) of Pt-NPs electrogeneration on the sensitivity of GCE-Pt-NPs to H2O2). The sensitivity grows as the pulse duration is shortened, which can be explained by the fact that smaller particles are likely to exhibit stronger catalytic properties. Sensitivity also increases with the number of pulses from one to three, while a further increase leads to a decrease in the signal, possibly due to the formation of a large number of particles forming agglomerates. Thus, the optimal parameters of the electrochemical generation of Pt-NPs were three pulses of 0.3 s each. Investigation of the stability of the amperometric response of the glassy carbon electrode modified with platinum nanoparticles. In addition to high sensitivity to hydrogen peroxide, an electrochemical transducer must demonstrate stability and reproducibility of the response. Therefore, we tested the stability of the amperometric and voltammetric responses of GCE-Pt-Nps to hydrogen peroxide (Fig. 3.10). In an effort to improve the stability and to prevent leaching of the nanoparticles from the electrode surface, we coated the GCE-Pt-Nps electrode with the biocomposite SiO2-ChOx film using the same electrodeposition parameters as explained in section 3.1. This modified electrode was active toward choline (Appendix C), but its stability was still quite low. Thus, the glassy carbon electrode modified with platinum nanoparticles is characterized by poor response stability. This can be explained by the gradual leaching of the platinum nanoparticles from the electrode surface or by their deactivation. Therefore, given the high demands on the stability of amperometric sensors, GCE-Pt-NPs was not suitable for use as a transducer in an amperometric choline sensor. The choice of electrode for choline oxidase immobilization. Given the poor stability of the nanoparticles, a comparative study of other types of electrodes was conducted in order to find a new type of transducer for the immobilization of ChOx (Fig. 3.11). As objects of comparison we selected screen-printed electrodes, as they are cheap and easy to fabricate and have a small size, integrating the working, auxiliary and reference electrodes. At the same time, being specially designed for sensor applications, they usually have high electrochemical activity. Gold and platinum were chosen as working electrode materials because of their high sensitivity to H2O2. Cyclic voltammograms of the platinum (Fig. 3.11c) and gold (Fig. 3.11d) screen-printed electrodes show active oxidation of hydrogen peroxide: in both cases the current increase is noticeable starting from a potential of 0.2 V (Fig. 3.11c,d, curve 2). The current value is approximately the same for these two electrodes and commensurate with the current on GCE modified with platinum nanoparticles (Fig. 3.11b). Given this, either of the two screen-printed electrodes can be used as a transducer for the immobilization of ChOx. However, taking into account the slightly higher response to H2O2, the lower cost and the advantages of the gold electrode [251], we chose the gold screen-printed electrode (AuSPE) for the immobilization of ChOx. Immobilization of choline oxidase into the SiO2 film on the surface of the gold screen-printed electrode. For the one-step immobilization of ChOx in the SiO2 film on the surface of AuSPE, the EAD method was proposed, with parameters similar to those used for the immobilization of GOx on the surface of Pt-Nfb.
However, to achieve a significant response, the duration of deposition was increased to 20 seconds. The AuSPE modified with the SiO2-ChOx film demonstrates a significant increase of the anodic current on the voltammogram in the presence of choline (Fig. 3.12a). The method of hydrodynamic voltammetry (Fig. 3.12b) confirms this finding. Indeed, the voltammogram of AuSPE-SiO2-ChOx in the presence of choline (Fig. 3.12b, curve 2) depicts two half-waves, whose presence can be explained by the complex mechanism of hydrogen peroxide oxidation on gold electrodes [262]. From this voltammogram the optimum working potential (0.7 V) was selected. Choice of the deposition potential of the SiO2-choline oxidase film. The negative potential imposed during EAD affects the rate of formation of the electrogenerated catalyst and hence the parameters of the formed SiO2 films, such as thickness and porosity. In turn, these characteristics influence the response of the immobilized enzyme. The results shown in Fig. 3.13 indicate that changing the deposition potential changes the sensitivity of the modified electrode. The greatest response is observed for EAD at potentials of -1.1 to -1.2 V, which correlates with the optimal parameters for the deposition of the GOx-containing film on the surface of Pt-Nfb (section 3.1). At less negative potentials (Fig. 3.13b, curve 1), the response is almost absent due to the slow rate of electrogeneration, resulting in the absence of a SiO2 film on the electrode. However, more negative potentials (-1.3 V) (Fig. 3.13b, curve 4) lead to too vigorous electrolysis of water and active formation of hydrogen bubbles, which prevents the formation of the film and causes its destruction. Investigation of the response stability of the gold screen-printed electrode modified with the SiO2-choline oxidase film. Stability is an important parameter of biosensors, so we studied the long-term stability of AuSPE-SiO2-ChOx. The study was conducted by recording calibration curves every 4 days for 2 weeks and comparing the sensitivity of the modified electrode to choline with the initial value immediately after modification. As is known, the main causes of the sensitivity loss of biosensors are inactivation of the enzymes in the film and their leaching, including as a result of film destruction. The presence of a surfactant in the sol during EAD significantly affects the latter factor, enhancing the interaction of the enzyme with the silanol groups of the film, as well as acting as a structuring agent and improving its morphological properties (paragraphs 1.2, 2.1.2). Therefore, we investigated the effect of the CTAB concentration in the sol on the long-term stability of AuSPE-SiO2-ChOx. To assess the stability of the biosensor, the relative sensitivity of the electrode modified with the SiO2-CTAB-ChOx film was calculated using the formula:

$$S = \frac{S_x}{S_1}\times 100\% \qquad (3.5)$$

where S is the relative sensitivity, %; S_1 is the sensitivity of the electrode to the substrate on the first day after modification, μA/mM; S_x is the sensitivity of the electrode to the substrate on the x-th day after modification, μA/mM. Between experiments, the modified electrode was stored in a refrigerator at +4 °C. It should be noted that the free enzyme in buffer solution under these conditions completely lost its activity within a week, in contrast to the preserved activity of the immobilized enzyme. The data are presented in Table 3.2.
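Before turning to the data in Table 3.2, note that the relative-sensitivity calculation of equation (3.5) amounts to the following short computation; the sensitivities in the example are placeholder values, not measurements from this work.

```python
def relative_sensitivity(s_day_x, s_day_1):
    """Relative sensitivity (eq. 3.5), % of the day-1 value."""
    return s_day_x / s_day_1 * 100.0

# Placeholder sensitivities (uA/mM) recorded every 4 days over 2 weeks
for day, s in [(1, 2.10), (5, 2.02), (9, 1.95), (13, 1.88)]:
    print(f"day {day}: {relative_sensitivity(s, 2.10):.0f} %")
```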
The data obtained show that the greatest stability of the modified electrode is observed at a CTAB concentration in the sol of 12.2 mM, which is much greater than its CMC in aqueous solutions (≈ 1 mM [Rottman et al.]). This correlates with the reported range of CTAB concentrations in the sol giving the highest analytical response of immobilized biomolecules [Nadzhafova et al.; silica-nanocomposite films electrogenerated on pyrolytic graphite electrode]. Higher concentrations of CTAB lead to strong foaming and flotation effects and may cause restructuring of the micelles and of the film, reducing the amperometric response of the immobilized enzyme. The operational stability of AuSPE-SiO2-ChOx, which is necessary during continuous operation of the biosensor, was investigated in voltammetric and amperometric modes at the optimum CTAB concentration in the sol (Fig. 3.14). A constant response is also observed in the amperometric mode: the catalytic oxidation current of the hydrogen peroxide formed by the enzymatic oxidation of choline decreases by no more than 6% after 50 minutes of continuous measurements under stirring (Fig. 3.14b). Thus, these data indicate the high operational stability of AuSPE-SiO2-ChOx, which is achieved through the sol-gel encapsulation of the enzyme and the addition of the surfactant CTAB. The resulting stability is quite sufficient for the application of the developed modified electrode as the sensitive element of a biosensor and for its use in the analysis of different objects. In what follows, the optimum parameters were EAD at a potential of -1.1 V for 20 s in the presence of 12.2 mM CTAB in the sol. Dependence of the amperometric response of the gold screen-printed electrode modified with the SiO2-choline oxidase film on the choline concentration. The dependence of the amperometric response of AuSPE-SiO2-ChOx on the choline concentration resembles a saturation curve, and the calibration plot is linear at low concentrations (R2 = 0.998). The detection limit, calculated by the 3S criterion, was 5 μM choline (0.7 mg/L). Thus, the sensitivity of the modified electrode is sufficient for the determination of choline in foods and biological fluids. Conclusions to chapter 3. The oxidation current of hydrogen peroxide on the surface of platinum... Characterization of the electrophoretically deposited carbon nanotube layers. The electrophoretic deposition method allows the accumulation of charged particles from their suspension onto one of the two electrodes used in the device. For this reason, carbon nanotubes bearing a high content of negatively-charged carboxylic groups on their surface were chosen, to facilitate their dispersion (as non-modified CNT of this sort are almost insoluble in pure water) and to make them mobile in the electric field, in order to achieve fast deposition of high-quality CNT layers. The rate of deposition of the charged particles depends on the applied potential difference, the surface area of the electrodes, the distance between them and the process duration [266]. Therefore, the quantity of deposited nanotubes can be finely controlled by varying the time of potential application while keeping the other parameters constant.
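To make this controllability concrete, a first-order estimate of the deposited mass can be written with the Hamaker relation m = f·μ·E·A·c·t (deposition efficiency f, electrophoretic mobility μ, field E, electrode area A, particle concentration c, time t). The mobility and efficiency values in the sketch below are illustrative assumptions, not measured quantities; only the field, area, concentration and time come from the text.

```python
def epd_mass_estimate(mobility_cm2_Vs, field_V_cm, area_cm2,
                      conc_mg_mL, time_s, efficiency=1.0):
    """Hamaker estimate of electrophoretically deposited mass (mg)."""
    velocity = mobility_cm2_Vs * field_V_cm          # particle drift, cm/s
    # velocity (cm/s) * area (cm^2) * concentration (mg/cm^3) * time (s)
    return efficiency * velocity * area_cm2 * conc_mg_mL * time_s

# Conditions from the text: 100 V/cm, 1 cm^2 contact area, 0.1 mg/mL, 60 s;
# mobility of carboxylated CNT assumed to be ~1e-4 cm^2/(V*s)
print(f"{epd_mass_estimate(1e-4, 100.0, 1.0, 0.1, 60.0):.3f} mg")  # ~0.06 mg
```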
The optimal applied potential difference of 60 V was used throughout because it constitutes a good compromise between a high speed of deposition and limited decomposition of ultra-pure water (which would generate oxygen bubbles that may affect the CNT assembly). Morphological characteristics. A necessary condition for the efficiency of electrode modification with CNT is the formation of a homogeneous and reproducible coating. The EPD method allows such coatings to be obtained. All images clearly show the presence of distinct, horizontally-oriented nanotubes, indicating the absence of the aggregation that occurs, for example, when a CNT dispersion is simply dropped onto the electrode. The influence of the deposition duration on the CNT-layer thickness. The formation of the CNT assemblies was also monitored by atomic force microscopy. In spite of the difficulty of recognizing single nanotubes (for a more detailed image see Appendix D), the overall thickness and uniformity of the film can be assessed. The film thickness grows linearly with the electrophoretic deposition time (equation (4.1)), which makes it possible to control the quantity of deposited carbon nanotubes, and consequently the film thickness, by tuning the deposition time. Such variation could influence the electrochemical characteristics of the modified electrode. Measurement of the electroactive surface area of the CNT layer. The electroactive surface area is one of the most important parameters of an electrode: its increase usually leads to an increase in current, and hence in the sensitivity of the electrode. To estimate the surface area of the GCE-CNT electrodes, we used the adsorption of the dye methylene green (MG). It is known that phenothiazine dyes adsorb readily onto the surface of carbon nanotubes owing to favorable electrostatic and π-π stacking interactions [233]. We therefore exploited the adsorption of methylene green. The present electrophoretic method thus allows a porous conductive matrix with an extended electroactive surface area to be obtained in a controlled way, which might be of interest for the immobilization of a large quantity of enzyme. According to formula (2.3), by integrating the peak area of the adsorbed MG one may calculate the approximate electroactive surface area of the electrode modified with CNT. For the electrode modified with CNT by EPD for 120 s (Fig. 4.5a, curve 5), the electroactive surface area was 1033 mm2, while the geometric area of an electrode of the same diameter (6 mm) is 28 mm2 (according to the formula S = πr2). Thus, modification of the electrode with CNT by the EPD method increases its electroactive surface area by almost 40 times, leading to improved sensitivity toward electroactive substances. Electrocatalytic properties of the CNT assemblies toward NADH oxidation. To assess the prospects of the deposited carbon nanotube film for bioelectrode development, we examined its electrocatalytic effect toward the oxidation of NADH, the enzymatic cofactor required by enzymes of the dehydrogenase group. The voltammogram of NADH at the bare GC electrode shows no significant anodic current below 0.6 V (actually a peak, not shown on the figure, can be observed at ca. 0.8 V), in agreement with previous literature [Musameh et al.]. In contrast, the cyclic voltammogram of NADH recorded using the GC-CNT electrode shows the oxidation at a considerably lower potential. Comparing parts a and b in Fig. 4.6 and curves 1 and 2 in Fig.
4.7 reveals a significant decrease in the overpotential for NADH oxidation on the carbon electrodes, mainly due to the expected electrocatalytic action of CNT. These results are similar to those obtained in [Musameh et al.]. The potential of NADH oxidation is not as low as the ca. 0.0 V observed by some researchers on nanotubes specially treated to induce the formation of quinone-like functional groups [219,220]. The absence of visible peaks in the low-potential range argues that here the high-voltage excitation during the EPD does not produce quinone species, and that the catalytic effect of the carbon nanotubes relies mostly on their edge-plane sites exposed to the solution [217]. Comparison of the voltammetric characteristics of NADH oxidation on CNT-modified and non-modified glassy carbon electrodes. A comparison with CNT simply drop-coated on the electrode surface was also performed. It should be noted that it is not straightforward to obtain a homogeneous film by drop-coating on the electrode surface, because the carbon nanotubes tend to form aggregates during the drying process. Here, we deposited the CNTs from aqueous solution by dropping 10 μL of the same water suspension as used for EPD (deposition was also performed from a DMF dispersion of the CNT, with a similar result). The electrochemical behavior of the drop-coated CNT film was then compared with that of the film deposited by EPD. On the voltammogram in PBS of the GC electrode with drop-coated CNTs (Fig. 4.6c), an anodic peak at a potential of about -0.05 V can be noticed that is absent in the case of the electrophoretically-deposited CNTs (Fig. 4.6b). This peak can be attributed to the presence of catalytic impurities or of some easily-oxidized (e.g. quinone-like) groups on the surface of the as-produced CNTs. One should also notice that the quantity of CNTs deposited by drop-coating (compared indirectly from the magnitude of the background current) was here several times higher than that deposited electrophoretically. Nevertheless, the NADH oxidation current at 0.42 V was higher for the electrophoretically-deposited CNTs, possibly because of the better distribution of the CNT in the porous structure obtained with EPD, allowing good access to the internal surface of the electrode. Stability of the NADH voltammetric response. The stability of the signal is an important biosensor parameter. Oxidation of NADH at non-modified electrodes often leads to contamination of the surface with reaction products and a decrease of the response over time [Musameh et al.]. The stability of the NADH oxidation peak at GCE and GCE-CNT was investigated (Fig. 4.8). The NADH oxidation current on the unmodified GCE is significantly reduced after the second scan, and after the 3rd cycle the signal is only 50% of the original one (Fig. 4.8a). At the same time, the current on the GCE-CNT decreases much more slowly: after the 3rd potential scan the current decrease is not more than 15% of the original response (Fig. 4.8b). It should be noted that on subsequent scans the current approaches a constant value and remains constant over time. Summing up the above, we can conclude that the modification of GCE with CNT by the EPD method significantly improves its electrochemical characteristics toward NADH oxidation.
In particular, the peak potential is shifted by 0.25-0.30 V toward negative values, which facilitates the coenzyme oxidation at the electrode. In addition, the stability of the electrode response to NADH increases about 3-fold, owing to the absence of electrode-surface contamination with reaction products. It should be emphasized that the EPD method yields more homogeneous and porous coatings than the conventional drop-coating method, which leads to an approximately 2.5-fold increase in the sensitivity of the electrode to NADH. Electrochemically-assisted deposition of a SiO2-sorbitol dehydrogenase film on the CNT-modified electrode. To highlight the potential interest of such assemblies for designing bioelectrochemical devices, the electrophoretically-deposited CNT layer was used as a support for the deposition of a sol-gel biocomposite comprising a dehydrogenase enzyme (D-sorbitol dehydrogenase, DSDH). Due to the heterogeneous topography of the porous CNT layer, the evaporation-based sol-gel deposition approach is of limited use, at least as far as uniform film formation around the CNT is expected, so the EAD method (which had already led to successful sol-gel film formation on non-flat supports) was applied here. One of the advantages of sol-gel EAD is the possibility of modifying only the required part of the electrode. For instance, we showed earlier the feasibility of controlled modification of platinum nanofibers lying on a glass substrate with a sol-gel biocomposite (see paragraph 3.1). The situation of the GC-CNT electrode is more complex, as the glassy carbon used as a substrate is conductive, and the formation of the silica film could be triggered not only by the carbon nanotubes but also by the underlying support. Therefore, linear sweep voltammograms were recorded at the different types of electrodes dipped in the silica sol, in order to examine the activity of the reactions responsible for the generation of the catalyst for the silica condensation (i.e., water and/or oxygen reduction (1.1)). Comparison of the curves obtained for the bare GC and GC-CNT electrodes (curves 1 and 2 in Fig. 4.9) clearly indicates much higher reduction rates on the carbon nanotubes than on the bare electrode (e.g., more than one order of magnitude at -1.3 V). The reason for this lies in the much larger electroactive surface area of GC-CNT, as well as in the catalytic effect of the carbon nanotubes, leading to a faster reaction rate. Thus, by applying a negative potential of -1.3 V to the GC-CNT electrode, one is able to induce the formation of hydroxyl ions mainly in the vicinity of the nanotube walls and much less on the glassy carbon support, which should lead to preferential CNT coverage with DSDH-doped silica, as demonstrated before for localized sol-gel deposition on Pt nanoparticles lying on a glassy carbon surface [Local electrocatalytic induction of sol-gel deposition at Pt nanoparticles]. The electrochemical responses were then compared using either the bare GC or the GC-CNT electrode covered with a DSDH-containing silica film generated by the EAD method. Note that to make the data comparable, it was necessary to apply a more cathodic potential to the bare GC electrode (i.e., -1.8 V, with respect to -1.3 V for GC-CNT) to ensure the generation of the same electrolysis current and thus the same amount of electrogenerated hydroxyl-ion catalyst [268] (which would lead to similar quantities of deposited biocomposite on both electrodes).
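This "same current, same catalyst" argument can be made quantitative with Faraday's law: integrating the electrolysis current gives the charge, and water reduction produces one mole of OH- per mole of electrons. The sketch below shows the estimate for an illustrative, placeholder current trace, not for the actual recorded data.

```python
import numpy as np

F = 96485.0  # C/mol, Faraday constant

def hydroxide_generated(t_s, i_A):
    """Moles of OH- from a current trace (2H2O + 2e- -> H2 + 2OH-)."""
    charge = np.trapz(i_A, t_s)   # total charge passed, C
    return charge / F             # 1 mol of electrons yields 1 mol of OH-

# Placeholder: a 16 s deposition at a roughly constant 50 uA
t = np.linspace(0.0, 16.0, 161)
i = np.full_like(t, 50e-6)
print(f"{hydroxide_generated(t, i):.2e} mol OH-")  # ~8e-9 mol
```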
Thus, NADH oxidation was facilitated (by ca. 0.3 V) when performed using the GC-CNT electrode. Also, significantly larger bioelectrocatalytic currents were obtained in the presence of CNT. Assuming the same amount of deposited material in both cases (as discussed above), and thus the same quantity of enzyme in the film, the smaller current observed with bare GC can be explained by diffusional restrictions for the analyte to reach the active enzymatic centers through the silica layer, whereas the porous structure of the carbon nanotube assemblies resembles more a large-surface-area 3D electrode, ensuring fast diffusion of reactants to the proximal electrode surface. Thereby, the biocomposite-modified electrode constructed on the basis of electrophoretically deposited carbon nanotubes offers the advantages of a lower detection potential and a larger current, due to its porous structure combined with the intrinsic electrocatalytic properties of the nanotubes. Optimization of the parameters of the biocomposite film. In order to obtain the best response of the immobilized enzyme and to study the influence of the main parameters affecting the characteristics of either the silica-based biocomposite layer or the underlying carbon nanotube assemblies, a two-step optimization of the deposition parameters was performed. First, we examined the effect of the electrodeposition time (in the range 5-25 s), which directly affects the quantity of deposited biocomposite, while keeping the parameters of the carbon nanotube layer constant (electrophoretic deposition time of 60 s). The naked-eye appearance of the electrode did not change when the potential was applied for 5 s, whereas a clearly visible silica layer could be observed after a 25 s deposition. Analysis of the amperometric response of such electrodes to the D-sorbitol analyte (Fig. 4.11a) revealed a very strong impact of the deposition time on the bioelectrode sensitivity. The highest responses appear only in a narrow time window, i.e., between 12 and 20 s, with a maximum reached at 16 s (Fig. 4.11b). Such a great influence of the EAD process duration relies on the catalytic nature of this sol-gel deposition process. As previously assessed using AFM and EQCM techniques [269], the sol-gel deposition can basically be divided into two successive regimes, the first characterized by a slow rate of deposition and the second by the fast deposition of much larger quantities of sol-gel material. As a consequence, the particular shape of the variation depicted in Fig. 4.11b can be rationalized as follows: short biocomposite deposition times result in low amounts of incorporated enzyme and thus in a low bioelectrochemical response, whereas long deposition times lead to thick deposits preventing fast mass transport into the biocomposite film, thereby resulting in sensitivity loss, as described before for the sol-gel deposition on Pt-Nfbs (see paragraph 3.1) and on a macroporous gold electrode [Qu et al.]. The optimal situation thus arises from the best compromise between a sufficiently high amount of incorporated enzyme and a porous structure open enough to ensure fast diffusion of reactants. A second important point to be investigated is the impact of the carbon nanotube film thickness on the electrochemically-assisted bioencapsulation.
Fig. 4.12 shows the dependence of the bioelectrode sensitivity on the time of electrophoretic deposition of the CNT assemblies, keeping the sol-gel deposition parameters constant (-1.3 V, 16 s). The relation also has a peaked shape, with the best sensitivities in the range from 20 to 60 s of electrophoretic deposition. According to equation (4.1), this corresponds to a CNT layer thickness of 100 to 150 nm. In fact, not only the time of the sol-gel electrodeposition but also the quantity of carbon nanotubes on the glassy carbon electrode affects the quantity of sol-gel material deposited, because sol electrolysis is facilitated on the carbon nanotubes (see Fig. 4.9). A higher quantity of carbon nanotubes on the electrode surface leads to faster sol-gel deposition. Great care has thus to be taken when optimizing sol-gel electrodeposition on porous and catalytically active materials. At the same time, the results are reproducible as long as the parameters are kept constant. Analytical performance of the modified electrode. The D-sorbitol dehydrogenase immobilized in the silica film on the surface of the carbon nanotubes showed typical Michaelis-Menten kinetics, with gradual saturation of the response above a D-sorbitol concentration of 5 mM (Fig. 4.13a). The apparent Michaelis-Menten constant (K_m) was calculated from the Lineweaver-Burk plot and found to be 4.1 mM, which is very similar to the value measured in solution [270], indicating a good activity of the DSDH entrapped in the electrogenerated silica film and rapid mass transfer of reagents. Finally, the analytical characteristics of the bioelectrode were determined by recording its amperometric response at an applied potential of +0.5 V as a function of the D-sorbitol concentration. Using the optimal thickness of the carbon nanotube layer and the best parameters of sol-gel EAD, the calibration plot for the oxidation of D-sorbitol was obtained and found to be linear in the range from 0.5 mM to 3.5 mM. The linear regression equation was I (μA) = (-0.09 ± 0.15) + (2.77 ± 0.07) × C (mM), with a correlation coefficient of 0.995. The detection limit by the 3S criterion was found to be 0.16 mM. The modification method can be reproduced on different electrodes and on different days with a precision of 9% (n = 3). The bioelectrode demonstrated sufficient stability for measurements during one month, as shown before for a film of the same composition [241]; the signal drop after 20 consecutive measurements did not exceed 10%. Electrophoretic deposition of macroporous CNT assemblies for electrochemical applications. The application of macroporous electrodes is a promising approach to increase the sensitivity of electrochemical analysis [271]. They can be used in particular for the immobilization of biomolecules [Qu et al.; 272-274]. Due to the porous structure, the electrode has a higher electroactive surface area, and the pore size is large enough to allow quick and unobstructed diffusion of reagents from the solution to the electrode. In particular, a macroporous gold electrode was used to co-immobilize an enzyme and its coenzyme, which was possible thanks to the large pore volume [275].
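Returning briefly to the analytical characterization above: the calibration parameters and the 3S detection limit can be computed as in the following sketch. The arrays hold idealized placeholder readings, chosen only so that the output matches the order of magnitude of the figures quoted in the text.

```python
import numpy as np

def calibration_and_lod(conc_mM, current_uA, blank_sd_uA):
    """Least-squares calibration line and 3S detection limit (mM)."""
    slope, intercept = np.polyfit(conc_mM, current_uA, 1)
    lod = 3.0 * blank_sd_uA / slope   # 3S criterion
    return slope, intercept, lod

# Placeholder calibration points over the linear range 0.5-3.5 mM
c = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
i = 2.77 * c - 0.09                   # idealized responses, uA
slope, intercept, lod = calibration_and_lod(c, i, blank_sd_uA=0.15)
print(f"I(uA) = {intercept:.2f} + {slope:.2f}*C(mM); LOD = {lod:.2f} mM")
```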
The application of carbon nanotubes as nanoscale building blocks for the construction of analogous macroporous electrodes appears to be a promising way to combine the advantages of the macroporous structure [276] with the electrocatalytic properties of the individual nanotubes [Agüí et al.]. Nevertheless, there are only a few examples dealing with three-dimensional (3D) carbon nanotube ensembles with engineered and controlled macroporosity. The method of direct synthesis allows the creation of 3D CNT ensembles with long tube-to-tube distances [277,278], but it offers rather limited control over the porosity. Interesting results can be obtained by the method of ice-segregation-induced self-assembly (ISISA), which has been applied to the fabrication of self-supported porous CNT monoliths [279]. Another widespread method is the template approach, which consists in mixing single-walled carbon nanotubes with sacrificial particles, forming a composite, and subsequently removing the particles [280]. A two-dimensional network can also be obtained by using patterning methods from solution [281]. To date, the above-mentioned ensembles have not been applied in the area of electrochemical biosensors. The method of vacuum filtration allows rather thick porous CNT films to be obtained [282], but it can lead to segregation of the particle and nanotube layers during the filtration process [283], and it does not permit films to be obtained directly on a solid support. A possible way to overcome these limitations could be to resort to the electrophoretic deposition method for the fabrication of macroporous CNT electrodes. One drawback of dense CNT assemblies is the uncontrolled packing of the material, which hinders access to the CNT at the bottom of the film, especially for biosensor applications. In this section, the formation of 3D macroporous CNT assemblies in the form of thin films on electrode surfaces by means of electrophoretic deposition is described. With a view to evaluating the prospects of such films for possible application in biosensing, their electrochemical response to hydrogen peroxide and nicotinamide adenine dinucleotide was investigated, and the first example of enzyme immobilization on an electrophoretically deposited 3D macroporous CNT electrode is shown. Fabrication of macroporous CNT layers on the electrode surface. The electrophoretic deposition method basically allows the formation of deposits from suspensions of charged particles. The carbon nanotubes are negatively charged over a broad pH range due to the presence of carboxylic groups on their surface. On the other hand, the polystyrene beads are also negatively charged, owing to the presence of sulfonate groups on their surface [284] (potassium persulfate was used as the initiator of the polymerization in their synthesis), and can be electrophoretically deposited as well. However, we did not succeed in fabricating a uniform 3D film consisting of PS beads only, as they tended to form a monolayer with big gaps between the deposited beads (see Appendix A). This likely occurs because of the weak adhesion between the polystyrene and the glassy carbon, together with unfavorable packing originating from the spherical shape of the beads and the electrostatic repulsion between the particles.
Considering the above observations, the deposition of a mixture of carbon nanotubes and PS beads could overcome the shortcomings of each individual component: the polymeric particles bring volume, while the nanotubes reinforce the composite. The appearance of the film formed in the presence of larger amounts of PS beads is different, however, demonstrating the absence of well-defined 500 nm macropores and showing rather a disordered, sparse packing of nanotubes (Fig. 4.15C and D), suggesting the collapse of such thick macroporous films. Optimization of the PS-bead concentration in the suspension. To find the optimal concentration of PS beads leading to thick and highly porous CNT films, we analyzed different samples obtained with the same 135 s deposition time and PS-bead contents varying from 0.05 to 0.5 mg/mL. The dependence of the deposit thickness on the PS-bead concentration is linear with a rather steep slope, since higher amounts of PS beads significantly increase the volume of the deposit (Fig. 4.16a). At the same time, when the PS template is removed, this dependence has a dramatically different shape (compare curves 1 and 2 in Fig. 4.16a) and can basically be divided into three parts. In the initial part of the plot, the thickness of the template-free macroporous CNT deposit is close to (yet lower than) that of the CNT-PS one, demonstrating that burning out the template does not significantly influence the macrostructure of the deposit, owing to the small quantity of beads. For the same reason, such films suffer from rather small thicknesses and low pore volumes. In the second part, the thickness increase slows down and reaches a maximum at a PS-bead concentration of 0.2 mg/mL. At higher concentrations, the thickness suddenly drops below 1 μm and does not change significantly in the third part of the plot. This demonstrates that too large concentrations of polystyrene have to be avoided, as they lead to pore collapse during the calcination. In conclusion, the maximal thickness of the macroporous film is obtained at a PS-bead concentration of about 0.2 mg/mL, defining a maximal PS-bead/CNT ratio above which collapse of the macropores starts to occur. As a rule, the quantity of electrophoretically deposited material depends strongly on the deposition time [266]. We therefore examined the influence of the deposition time on the feasibility of macroporous film formation. Fig. 4.16b shows the dependence of the film thickness on the time of electrophoretic deposition. This dependence is linear when only carbon nanotubes (no PS beads) are present in the dispersion (see curve 3 in Fig. 4.16b). The deposit thickness grows in the same manner, but with a much steeper slope, when PS beads are introduced into the mixture (Fig. 4.16b, curve 1). This increase continues up to 100 s of deposition, then tends to level off, and could even decrease at much longer deposition times. We suppose that the rate of PS-bead deposition decreases significantly above 100 s because of depletion of the particle concentration consumed during the deposition. The loss in thickness at longer times is more difficult to explain, but it might be due to destruction of the film by oxygen bubbles formed over a longer time window during the deposition (they were clearly observed with the naked eye). In agreement with the observation made in Fig. 4.16a for a PS-bead content of 0.2 mg/mL, the thickness of the deposit decreases by ca.
At low deposition times (i.e., less than 75 s), the thickness of the macroporous CNT film is below 500 nm, which likely corresponds to a sparse thin film similar to the one that can be obtained by patterning from solution. After 75 s of deposition, the thickness of the deposit increases considerably, up to 2 μm, indicating the formation of a true macroporous film with a cellular structure.

Methylene green adsorption and electrochemical characterization

The strong adsorption of MG on the CNT surface and its electroactive nature were exploited to compare the surface areas of the different films prepared here (Fig. 4.17).

Co-immobilization of sorbitol dehydrogenase and coenzyme on the macroporous CNT-assembly

Though the macroporous structure does not provide a true direct advantage for the direct detection of NADH in solution, it could still be useful for the development of biosensors based on NAD-dependent enzymes. In that case, an enhancement can be achieved by immobilizing both the enzyme and the cofactor inside the porous film. The larger inner surface area, together with the larger pore volume, should increase the number of active sites where the enzymatic reaction can take place. We used here a recently described protocol for the immobilization of NAD+ together with DSDH inside an electrodeposited silica film. Thus, the porous structure of the macroporous CNT film provides higher sensitivity for the detection of D-sorbitol when NAD+ and DSDH are immobilized together in the silica film, and this could be further exploited for the construction of advanced dehydrogenase-based biosensors.

Conclusions to chapter 4

Modification of a glassy carbon electrode with carbon nanotubes by the electrophoretic deposition method significantly improves its efficiency for the detection of the coenzyme NADH. The presence of carbon nanotubes shifts the NADH oxidation potential by 0.25-0.30 V toward negative potentials and significantly increases the stability of the response. The electrochemically-assisted deposition method allows the immobilization of dehydrogenases in an SiO2 film with preservation of enzymatic activity, as illustrated with sorbitol dehydrogenase. The electrode modified with this enzyme shows a stable and reproducible response to changes in the sorbitol concentration in solution. The electrode modified with carbon nanotubes and sorbitol dehydrogenase demonstrates significant advantages in comparison with the electrode without carbon nanotubes. In particular, the sensitivity and selectivity of sorbitol detection are increased thanks to the use of a low working potential of 0.5 V. The response of the modified electrode depends on the thickness of the carbon nanotube layer and on the SiO2-film deposition parameters. The best analytical characteristics were obtained using a carbon nanotube layer thickness of 100-150 nm and electrochemically-assisted deposition parameters of -1.3 V and 16 s. A macroporous layer of carbon nanotubes obtained by electrophoretic deposition is a promising matrix for developing biosensors based on oxidases and dehydrogenases. In particular, it improves the sensitivity and selectivity of an electrode modified with an SiO2-dehydrogenase film when combined with adsorbed methylene green dye and immobilized coenzyme.
CHAPTER 5. APPLICATION OF MODIFIED ELECTRODES AS SENSITIVE ELEMENTS OF BIOSENSORS FOR THE DETERMINATION OF SORBITOL AND CHOLINE

Selective determination of biologically active organic substances is an important task of modern analytical chemistry. Such substances are often used in food production, resulting in their significant intake with food. However, an excess or deficiency of such substances in the human body can cause serious physiological disorders. At present, their analytical determination is a complex task owing to the similarity of the chemical properties of individual substances within homologous series (e.g., polyhydric alcohols, monosaccharides and polysaccharides all carry OH groups). In practice, the only method that allows sensitive and selective analysis of such organic substances is chromatography. However, it has disadvantages, such as long analysis times, complex sample preparation, expensive and complex equipment, and the need for high-purity reagents. All this makes the screening of organic substances impractical. At the same time, a sensitivity at the level of 10⁻⁵-10⁻⁴ M is usually quite sufficient for the analysis of such substances, particularly in foods [285]. Examples of substances requiring determination in food are sorbitol and choline, important participants in human metabolism.

Electrochemical biosensors are a promising alternative for the determination of organic compounds in complex matrices with minimal sample preparation. They are characterized by sufficient sensitivity, speed of analysis and, most importantly, high selectivity of determination thanks to the use of enzymes as recognition elements. The price of a single analysis is rather low owing to the possibility of multiple use, the small amount of enzyme required and the cheapness of the electrodes (e.g. screen-printed electrodes). This chapter describes the results of applying the developed modified electrodes to the determination of choline and sorbitol. The determination of sorbitol was carried out using the glassy carbon electrode modified with a non-porous CNT layer and a biocomposite SiO2-DSDH film with optimal parameters (as described in chapter 4, paragraph 4.3). The choline analysis was conducted using a gold screen-printed electrode modified with a biocomposite SiO2-ChOx film with optimal parameters (as described in chapter 3, paragraph 3.2). In both cases the silica biocomposite was deposited by the electrochemically-assisted deposition method.

Determination of sorbitol using glassy carbon electrode modified with CNT and SiO2-sorbitol dehydrogenase film

Sorbitol is a sweet-tasting alcohol contained in fruits. It is widely used in the food industry as a sweetener (food additive E 420) [286], as well as in the cosmetic and pharmaceutical industries. It is often used together with fructose as a sweetener in foods for people with diabetes. In the human body, sorbitol is formed from glucose and is converted to fructose by DSDH. Its accumulation in tissues leads to the swelling that occurs in diabetes [287]. It is also fermented by the bacteria of the large intestine to acetate and H2, which, when sorbitol is consumed in excess, leads to diarrhea, severe disorders of the gastrointestinal tract and weight loss [288]. These symptoms have been reported for sorbitol intakes of more than 10 g/day. Thus, monitoring of sorbitol in food is an important analytical task. For verification of the technique, the "added-found" (spike-recovery) method was used.
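Since the quantification throughout this chapter relies on the method of standard additions, a minimal sketch of the underlying computation is given below; the arrays are hypothetical placeholders, not the data of Table 5.1.

```python
# Method of standard additions: fit current vs. added concentration and
# read the unknown as intercept/slope (assumes negligible dilution and a
# linear response). Hypothetical numbers for illustration only.
import numpy as np

c_added = np.array([0.0, 0.2, 0.4, 0.6])        # added sorbitol, mM
i_meas  = np.array([20.5, 30.2, 40.1, 49.8])    # steady-state current, nA

slope, intercept = np.polyfit(c_added, i_meas, 1)
c_x = intercept / slope                          # unknown concentration, mM
print(f"C(sorbitol) in the sample = {c_x:.2f} mM")
```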
Results of the determination of sorbitol additions in buffer solution are presented in Table 5.1.

Interference of some substances with the determination of sorbitol

As objects of study, we selected products of the food and cosmetic industries that may contain sorbitol: dietary sweets, chewing gum, toothpaste and shower gel. Carbohydrates and polyhydric alcohols do not interfere with the determination of 1 mM sorbitol, including mannitol, a stereoisomer of sorbitol, which indicates the high selectivity of the modified electrode. Ascorbic acid exerts a rather significant interfering influence owing to its non-enzymatic oxidation on the electrode at the operating potential of 0.5 V. This relatively low working potential, however, enables an approach for masking it: the introduction of Fe(III). Even a fivefold excess of Fe(III) was found not to interfere with the determination of sorbitol (Table 5.2). Given that the objects of study contain sorbitol in significantly higher amounts than ascorbic acid, 3 mM Fe(NO3)3 is sufficient for masking and does not interfere with the determination of sorbitol. According to literature data [292,293], the masking effect of Fe(III) can be explained by two factors: the formation of a transient complex of Fe(III) with ascorbic acid, and the subsequent oxidation of ascorbic acid by the ferric iron within the complex, forming Fe(II) and dehydroascorbic acid. The presence of anionic surfactant leads to a gradual decrease in signal when the contact time of the electrode with the solution exceeds 10 minutes, probably owing to leaching of the enzyme from the film [see "silica-nanocomposite films electrogenerated on pyrolytic graphite electrode"]. Consequently, when analyzing cosmetic products containing surfactants, prolonged (over 10 minutes) contact of the modified electrode with the test solutions should be avoided. Thus, given the absence of interfering effects from most of the investigated compounds and the possibility of masking ascorbic acid, the developed modified electrode can be used for the determination of sorbitol in food and cosmetic samples.

Determination of sorbitol in food and cosmetic samples

As objects of study we used the toothpaste for children «Oral B Stages berry bubble» (manufacturer «Procter & Gamble Co.», Germany), the chewing gum «Orbit» (manufacturer LLC «Wrigley», Russia), the biscuits "Diabetic" (producer PJSC "Kharkiv Biscuit Factory", Ukraine) and the shower gel "White honey" (produced by "Yaka", Ukraine). The composition of the objects is given in Appendix G. Results of the sorbitol determination in these food and cosmetic products are shown in Table 5.3. The obtained data are characterized by satisfactory accuracy and reproducibility. The sorbitol content found in the "shower gel" object is perhaps somewhat underestimated owing to the presence of anionic surfactants, which lead to a decrease in the signal.

Comparison of the standard methods of sorbitol determination with the developed technique

For the determination of sorbitol, electrochemical, titrimetric, polarimetric, chromatographic and enzymatic methods are used. The titrimetric method of sorbitol analysis used in the pharmacopoeia is based on the formation of its complex with copper, the concentration of which is determined by iodometric titration [295]. There is also a method of back iodometric titration of the periodate excess remaining after sorbitol oxidation in sulfuric acid medium [296]. However, these methods are neither sensitive nor selective: in fact, they determine the total content of polyols and glucose.
Among the electrochemical methods, direct detection of sorbitol on platinum electrodes has been described [297,298]. Although chromatographic methods of analysis provide selective and sensitive determination of sorbitol in many objects, their common drawbacks are expensive equipment, complex sample preparation and a significant duration of analysis, as well as the need for specially trained personnel and high-purity reagents. In enzymatic methods of analysis, DSDH is most often used together with diaphorase, which catalyzes the reduction of iodonitrotetrazolium chloride to iodonitrotetrazolium formazan (Fig. 5.2); the absorption of the latter is measured photometrically at λ = 492 nm [303]. Ready-made test kits based on this method are available for the determination of sorbitol and xylitol in food [304,305].

Sorbitol + NAD⁺ →(sorbitol dehydrogenase) Fructose + NADH + H⁺
NADH + iodonitrotetrazolium chloride →(diaphorase) NAD⁺ + iodonitrotetrazolium formazan

Fig. 5.2. Scheme of the reactions of the enzymatic determination of sorbitol

This method is very selective and sensitive, but the significant consumption of enzymes and cofactors, and especially their high cost (particularly that of diaphorase), makes it economically unviable. In addition, the method is not rapid, and the error of determination increases because of the use of a complex two-enzyme system. The advantages of the developed method for sorbitol determination using the modified GCE-CNT-SiO2-DSDH electrode, compared with titrimetric, polarimetric and electrochemical methods, are the high selectivity of determination in the presence of other polyhydric alcohols and carbohydrates, as well as the lower limit of detection. Compared with chromatographic methods, the technique requires minimal sample preparation, is rapid, and can be used by non-specialist personnel. The sensitivity of this method is sufficient for determining sorbitol in the majority of objects. The method can be applied for screening the sorbitol content of food and cosmetic products, as well as for the analysis of biological fluids, provided the sensitivity is increased. To date, there are only a few examples of amperometric biosensors based on DSDH for sorbitol determination (Table 5.4); in most cases, such sensors were used only for the analysis of model solutions, not real objects. Compared with them, the developed GCE-CNT-SiO2-DSDH electrode has high stability and reproducibility of results, and its limit of detection is sufficient for determining sorbitol in most objects. Further advantages are the simple modification procedure and the absence of a mediator.

Determination of choline using gold screen-printed electrode modified with film SiO2-choline oxidase

Choline is a quaternary ammonium base belonging to the B vitamins (B4), although the human body is able to synthesize it. It should nevertheless be present in the human diet; the recommended intake is 425 and 550 mg per day for women and men, respectively [306]. In the body, choline participates in the synthesis of the important neurotransmitter acetylcholine; it also participates in the regulation of insulin levels, in the transport of fat in the liver (as part of some phospholipids) and in the formation of cell membranes. Choline deficiency leads to serious disorders: fat deposition (lipopexia) and liver damage, kidney damage and bleeding. Excessive consumption of choline, in turn, leads to the so-called "fish odor syndrome", sweating, excessive salivation and reduced blood pressure [129]. Foods rich in choline are veal liver, egg yolks, milk, spinach and cauliflower.
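Before turning to the determination itself, the detection chemistry exploited in this section can be summarized by the generally accepted reaction sequence for ChOx-based amperometric biosensors; this is a textbook scheme, not quoted from [129] or [306].

```latex
\begin{align}
  \text{choline} + \mathrm{O_2} &\xrightarrow{\text{ChOx}}
      \text{betaine aldehyde} + \mathrm{H_2O_2} \\
  \text{betaine aldehyde} + \mathrm{O_2} + \mathrm{H_2O} &\xrightarrow{\text{ChOx}}
      \text{betaine} + \mathrm{H_2O_2} \\
  \mathrm{H_2O_2} &\xrightarrow{\;+0.7\ \mathrm{V}\;}
      \mathrm{O_2} + 2\mathrm{H^+} + 2e^-
\end{align}
```

The anodically detected H2O2 current is thus proportional to the choline concentration, which is the signal used throughout this section.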
In addition, choline is artificially added to some specialized foods such as baby food, vitamin formulas and sports drinks. Control of the choline content in these foods is an important analytical task. For example, infant formula is almost the only source of choline for infants, and its lack in food can lead to severe developmental disorders. Results of the determination of choline additions in phosphate buffer solution are presented in Table 5.5. They are characterized by satisfactory accuracy and reproducibility.

Table 5.5. Results of the determination of choline additions in phosphate buffer

Interference of some substances with the determination of choline

The influence of substances contained in food and potentially able to interfere with the determination of choline by the modified electrode was studied. The effect of the substances was investigated at their average concentrations in the corresponding objects (Appendix G); since urea and uric acid are lacking in food, their test concentrations could not be tied to real samples. As seen from the results, the main macrocomponents do not interfere with the determination of choline. Notably, ethanol does not interfere even at relatively high concentrations. The presence of equimolar amounts of Cu(II) leads to a decrease in the signal; it is known from the literature that copper cations inhibit ChOx [309], but the presence of copper at such high concentrations in the studied objects can be ruled out. Ascorbic acid significantly interferes with the determination of choline owing to its non-enzymatic oxidation on the electrode at 0.7 V.

Elimination of the interfering influence of ascorbic acid

Given that ascorbic acid is a common interfering component in the development of biosensors, the literature offers several ways to eliminate its impact. The most common is the use of semipermeable membranes that limit the access of interfering substances to the electrode according to the size or charge of their molecules [310,311]. However, this approach often leads to a decrease in the sensitivity of the biosensor toward the analyte [176]. Another promising approach is therefore the elimination of the effect of reductants by their prior oxidation on oxidant membranes [175]. The composition of such membranes can vary, but one of the best oxidants for this purpose is manganese(IV) oxide [173,174]. The disadvantages of such membranes are the low reproducibility of the signal and the slow response time. Therefore, to simplify the fabrication of the modified electrode, we decided to add MnO2 powder directly to the analyte solution so that it could oxidize ascorbic acid before the determination. Dry MnO2 powder was added (10 mg per 10 mL of solution) to the solution, which was stirred with a magnetic stirrer for 30 min. The sample was then filtered through a paper filter and used for the determination of choline as described in paragraph 5.2.1, at an applied potential of +0.7 V. To test the impact of MnO2 on the choline signal, choline was determined with AuSPE-SiO2-ChOx in a series of model solutions (Table 5.7). From the obtained data it can be concluded that MnO2 does not interact with choline, because the amount of choline found remains constant before and after treatment with MnO2 (solutions 1 and 2, Table 5.7). At the same time, the presence of ascorbic acid leads to an overestimation of the choline determination results (solution 3, Table 5.7). After treatment of this solution with MnO2, however, the amount of choline found agrees with the amount added (solution 4, Table 5.7).
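As a rough consistency check of the stoichiometric argument developed in the next paragraph, the usual two-electron oxidation of ascorbic acid by MnO2 can be assumed; this is our sketch, and ref. [312] was not re-derived here.

```latex
% Assumed two-electron redox step
% (C6H8O6 = ascorbic acid, C6H6O6 = dehydroascorbic acid):
\begin{equation}
  \mathrm{MnO_2} + \mathrm{C_6H_8O_6} + 2\mathrm{H^+} \rightarrow
  \mathrm{Mn^{2+}} + \mathrm{C_6H_6O_6} + 2\mathrm{H_2O}
\end{equation}
% Capacity estimate: 1 mg/mL of MnO2 (M = 86.9 g/mol) corresponds to
% 1/86.9 = 0.0115 mmol/mL, i.e. ~11.5 mM; with a 1:1 stoichiometry this
% covers ascorbate concentrations up to ~11.5 mM, consistent with the
% 10 mM figure quoted below.
```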
Thus, treatment of a 0.1 mM ascorbic acid solution with MnO2 eliminates its interfering impact on choline determination using AuSPE-SiO2-ChOx. According to the stoichiometry of the reaction [312], an MnO2 amount of 1 mg/mL is sufficient to eliminate the interfering effect of ascorbic acid up to a concentration of 10 mM in solution. Since the ascorbic acid content after sample preparation should not exceed 5 mM, this quantity of MnO2 is enough to mask its influence.

Determination of choline in food products

The infant formula «Bebi» was chosen as the object of study; its composition is listed in Appendix G. Attempts to determine choline in the food mixture using the standard photometric method with Reinecke salt [314] were unsuccessful owing to the low sensitivity and poor reproducibility of the method, while the chromatographic technique requires the use of special columns.

Comparison of the standard methods of choline determination with the developed technique

For the determination of choline in foods, spectrophotometric, chromatographic and enzymatic methods of analysis are used. The analysis is complicated by the fact that choline can be present either in free form or as ester derivatives of phosphoric acid, so preliminary hydrolysis is necessary before determining total choline [129]. Historically, the first photometric method for choline determination was based on the formation of an insoluble colored compound with Reinecke salt, whose absorption is measured in methanolic solution at 520 nm [314]. A gravimetric variant of this method is also possible [315]. Despite its simplicity, this method is time-consuming, of low sensitivity (choline may be lost during washing) and requires the use of toxic reagents. In addition, the method is not selective: most other amines interfere with the determination of choline. Chromatographic determination of choline is possible using liquid [316-318] and gas [318,319] chromatography, ion chromatography [320] and capillary electrophoresis [321]; mass-spectrometric detection [316,317], ion-exchange membrane detection [320] and indirect UV detection [321] can be applied. The general drawbacks of chromatographic techniques are the long and complex sample preparation, the high cost and the impossibility of analysis in the field. NMR spectroscopy can also be used for the determination of choline [322]. This method allows the choline content in its various forms to be determined with great accuracy, but it is unsuitable for most laboratories because of the high cost of the equipment and its low linearity and sensitivity. Enzymatic methods of choline determination are based on its oxidation by ChOx (after preliminary hydrolysis of the esters by phospholipase) with release of H2O2. The hydrogen peroxide reacts with phenol and 4-aminoantipyrine contained in the mixture in the presence of peroxidase, giving a colored reaction product, which is detected photometrically at a wavelength of 505 nm [313,323]. Modifications of this method with other dyes also exist [324]. This method of choline determination is quite expensive, given the high cost of the enzymes, and carries a significant error owing to the photometric mode of signal detection. In addition, the determination of choline by this method is affected by many reductants that can react with hydrogen peroxide; their influence can be removed using activated carbon [325]. The developed method of choline determination with AuSPE-SiO2-ChOx has significant advantages over the existing methods. In particular, the use of planar technology, relatively cheap reagents and small quantities of enzyme allows a large number of electrodes to be produced at low cost. The sensitivity and selectivity of the modified electrode are sufficient to determine choline in foods and biological fluids with minimal sample preparation.
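A minimal sketch of how the calibration parameters quoted in the conclusions below (sensitivity, detection limit) are typically extracted is given here; the data arrays and the blank noise are assumed placeholder values, not the thesis measurements.

```python
# Linear calibration and detection limit via the common 3*sigma/slope rule.
import numpy as np

c = np.array([1e-5, 5e-5, 1e-4, 3e-4, 6e-4])   # choline concentration, M
i = np.array([0.08, 0.41, 0.79, 2.4, 4.8])     # response current, uA (assumed)

slope, intercept = np.polyfit(c, i, 1)
sigma_blank = 0.013                             # std. dev. of blank, uA (assumed)
lod = 3 * sigma_blank / slope                   # detection limit, M
print(f"sensitivity = {slope:.3g} uA/M, LOD = {lod:.1g} M")
```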
The short analysis time and the simplicity of the technique allow it to be used by non-specialist personnel. Compared with other known biosensors based on ChOx, the developed modified electrode is characterized by a fast response and a low detection limit, which can be explained by the porous structure of the SiO2 film facilitating the diffusion of reactants inside it (Table 5.9). The modification procedure is simple and, unlike most other biosensors, does not rely on a mediator, which could worsen the analytical characteristics and the reproducibility. The sensitivity of the developed electrode is slightly lower than in some works [168,307,326], but it is quite sufficient for determining choline in foods, as shown by the analysis of real samples. It should be noted that ChOx immobilized on the electrode surface is characterized by a low apparent Michaelis constant (Table 5.9), indicating preservation of its native structure owing to the biocompatibility of SiO2 materials. In addition, immobilization in the SiO2 film increases the stability of the ChOx structure; for example, large amounts of ethanol do not lead to its deactivation. The developed electrode is stable and can be used repeatedly (as shown in section 3.2.5), which is particularly important for biosensors.

Conclusions to chapter 5

The prospects of applying the developed modified electrodes as amperometric biosensors for the analysis of real samples were demonstrated. A technique for the amperometric determination of sorbitol using a glassy carbon electrode modified with a biocomposite SiO2-sorbitol dehydrogenase film was developed. The calibration graph was linear in the concentration range from 3.5×10⁻⁵ to 5×10⁻⁴ M, and the detection limit was 1.6×10⁻⁴ M. Equimolar amounts of sucrose, glucose, urea, mannitol and glycerol do not interfere with the determination of sorbitol. The interfering effect of ascorbic acid can be eliminated by the introduction of trivalent iron. The results of sorbitol determination by the developed technique in food and cosmetic products are characterized by satisfactory accuracy and reproducibility. A technique for the amperometric determination of choline using a gold screen-printed electrode modified with an SiO2-choline oxidase film was also proposed. The calibration graph was linear in the concentration range 1×10⁻⁵-6×10⁻⁴ M, with a detection limit of 5×10⁻⁶ M. A tenfold excess of sucrose, lactose, urea and heavy metals, except Cu(II), does not interfere with the choline determination. The interfering effect of ascorbic acid may be eliminated by prior mixing with MnO2. The results of choline determination in food using the developed technique are characterized by satisfactory accuracy and reproducibility. The advantages of the developed techniques in comparison with known methods of sorbitol and choline determination are the simple sample preparation, the higher selectivity and speed of analysis, and the low cost of a single determination.

Fig. 1.1. Schemes of amperometric biosensors of the 1st (a), 2nd (b) and 3rd (c) generations.
Fig. 1.2. Scheme of the reactions occurring during sol-gel synthesis (example of TEOS): hydrolysis (a), condensation (b), polycondensation (c).
Fig. 1.3. Scheme of different biosensor configurations based on sol-gel materials [33].
Fig. 1.4. Scheme of electrochemically-assisted deposition.
Fig. 1.5. Structure of single-walled (a) and multi-walled (b) carbon nanotubes.
Fig. 1.6. Scheme of electrophoretic deposition of CNT.

The coenzyme NAD+ participates in the catalytic reactions of dehydrogenases, turning into its reduced form NADH. Owing to the electrochemical activity of the NAD+/NADH couple, amperometric detection of the concentration change of either component of this couple can be linked to the initial concentration of the substrate analyte. However, NADH interacts with the electrode, making the position and height of its peak dependent on the material and structure of the electrode as well as on the pH and the buffer solution [213]. Although the formal redox potential of the NAD+/NADH couple is quite low (approximately -0.515 V vs. Ag/AgCl [213-215]), the process is characterized by a significant overvoltage on conventional electrodes (0.7 V on platinum, 1.0 V on gold) because of the irreversible oxidation of NADH [215].
Electrochemical detection at such high potentials would lead to significant interference from other electroactive substances and even to water electrolysis, preventing any analytical application of such biosensors. The electrochemical oxidation of NADH at carbon electrodes has been reported to lead to electrode poisoning due to contamination of the surface by reaction products [216], including adsorbed NAD+ dimers [217]. This causes a decrease of the anodic current of NADH oxidation and consequently a loss of electrode sensitivity with time, greatly complicating the development of stable biosensors. Research efforts are therefore aimed at finding new materials and ways of reducing the NADH oxidation potential. Carbon nanotubes have established a reputation as a promising material for such enzyme electrodes [Joseph, "Carbon-Nanotube Based Electrochemical Biosensors"; 100]. Thus, the use of an electrode modified with carbon nanotubes reduces the NADH oxidation potential by almost 0.5 V compared to an unmodified carbon electrode [Musameh, "Low-potential stable NADH detection at carbon-nanotube-modified glassy carbon electrodes"]. Even better results can be obtained by pre-activation of the carbon nanotubes by anodic oxidation [218] or microwave processing [219]. Such treatment leads to partial destruction of the nanotube structure and the emergence of highly active centers [217,218], as well as to the appearance of quinone-like groups on the surface of the nanotubes, which act as mediators [219,220]. Another way to solve the problem of the high oxidation potential of NADH is the use of electrochemical mediators, including various quinones, aromatic diamines, phthalocyanines, ruthenium complexes and others [215,221]. Such mediators may simply be added to the solution or immobilized on the electrode by adsorption, covalent binding or (electro)polymerization [221,222]. Phenothiazine and phenoxazine redox dyes [223] attract attention as mediators because of the high rate of their heterogeneous electron-transfer reactions with NADH and are considered among the most promising electrochemical mediators for coenzyme oxidation [222,224]. Biosensors based on such dyes have already been developed: Meldola Blue [225,226], Nile Blue [227], Methylene Blue [228-230] and Methylene Green [231,232].

The following enzymes were used:
- glucose oxidase (GOx, EC 1.1.3.4, «Sigma») from Aspergillus niger (Mw ≈ 160000, activity 15-50 units/mg), isoelectric point 4.2;
- choline oxidase (ChOx, EC 1.1.3.17, «Sigma») from Alcaligenes sp. (Mw ≈ 95000, activity ≥ 10 units/mg), isoelectric point 4.1;
- D-sorbitol dehydrogenase (DSDH, EC 1.1.1.15), synthesized at the Department of Microbiology of Saarland University (Germany).
Poly(dimethyldiallylammonium chloride) (PDDA, low molecular weight, 20 wt% in water, «Aldrich»), poly(ethyleneimine) (PEI, high molecular weight, water-free, «Aldrich») and the cationic surfactant cetyltrimethylammonium bromide (CTAB, «Sigma») were also used. Electrochemical experiments were carried out on a PalmSens potentiostat/galvanostat («Palm Instruments BV», Netherlands), an EmStat2 potentiostat («Palm Instruments BV», Netherlands) and a μStat 400 bipotentiostat/galvanostat («DropSens», Spain). A three-electrode cell was used, containing (except for the gold screen-printed electrode) the corresponding working electrode, an Ag/AgCl reference electrode (Ag/AgCl, 3 M KCl, «Metrohm», Germany) and a platinum auxiliary electrode.
Data from the potentiostat were transferred to a personal computer and processed using the «PSTrace», «DropView» and «Origin» software. The following working electrodes were used:
- platinum nanofibers (Pt-Nfb), synthesized and deposited on the surface of a glass substrate at the Physico-Chemical Institute of the University of Giessen (Germany) by electrospinning [242]; one-, two- and four-layer Pt-Nfb assemblies with resistances of 10, 2 and 0.5 kΩ, respectively, were available;
- a glassy carbon electrode (GCE) in the form of plates (Sigradur®, «HTW Hochtemperatur-Werkstoffe», Germany), modified with CNTs;
- a gold screen-printed electrode (AuSPE) («DropSens», Spain), which combines a gold working electrode, a silver reference electrode and a gold auxiliary electrode.
The reference electrode was Ag/AgCl, and all potentials are quoted versus it. All voltammograms were obtained in buffer solutions of appropriate pH (unless otherwise specified) in static mode (without stirring). The potential scan rate was varied from 20 to 100 mV/s; in some cases an extended range from 5 to 200 mV/s was used. In the study of the voltammetric characteristics of immobilized DSDH, the coenzyme NAD+ (1 mM) was added to the solution during the measurement. Amperograms were obtained under constant stirring, with the applied potential kept constant throughout the measurement; at the beginning of each measurement, a settling time of at least 200 s was allowed. Hydrodynamic voltammetry was carried out in dynamic mode (with constant stirring): the potential was changed stepwise by 50 mV every 30 s, and the equilibrium current at the end of each interval was measured. When necessary, voltammetric measurements were performed under anaerobic conditions: oxygen was removed by bubbling argon for 15 min prior to the experiment, and this atmosphere was maintained in the cell during the whole measurement.

A general feature of all oxidases is the release of hydrogen peroxide in the enzymatic reaction. The initial concentration of the substrate can therefore be determined by registering the change in the H2O2 concentration. One of the best materials for the electrochemical detection of H2O2 is platinum [250]. The oxidation of hydrogen peroxide on a platinum surface occurs with high signal reproducibility and at low potential owing to the catalytic action of platinum oxides. This can be exploited to avoid the interfering effects of reductants, thereby increasing the selectivity of platinum electrodes in comparison with other types of electrodes [251]. Nevertheless, even with this relatively low reaction potential [252], a further reduction of the H2O2 oxidation potential would be desirable (see the reaction sketch below).
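The electrode reactions behind this discussion can be sketched as follows; the Pt-oxide-mediated pathway shown is one commonly invoked picture for H2O2 oxidation on platinum, not a result of this work.

```latex
\begin{align}
  \mathrm{H_2O_2} &\rightarrow \mathrm{O_2} + 2\mathrm{H^+} + 2e^-
      && \text{(overall anodic oxidation)} \\
  \mathrm{Pt(OH)_2} + \mathrm{H_2O_2} &\rightarrow \mathrm{Pt} + 2\mathrm{H_2O} + \mathrm{O_2}
      && \text{(chemical step on the oxide)} \\
  \mathrm{Pt} + 2\mathrm{H_2O} &\rightarrow \mathrm{Pt(OH)_2} + 2\mathrm{H^+} + 2e^-
      && \text{(electrochemical regeneration)}
\end{align}
```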
3.1.1.1. Morphological characterization of platinum nanofiber assemblies

Fig. 3.1 depicts typical microscopic characterization data for the Pt-Nfb assemblies. SEM (Fig. 3.1a) and AFM (Fig. 3.1b) imaging reveals that the diameter of the individual fibers is in the range 30-60 nm. Under the conditions used here, the overall thickness of the two-layer assembly remains limited to ca. 100 nm, as shown in the AFM profile (Fig. 3.1b). Platinum nanofibers deposited by electrospinning are highly interconnected, forming a 2D network of Pt nanoelectrodes on the glass substrate, suggesting good conductivity. This is indeed the case: the resistance of the assembly typically varied from 0.5 to 10 kΩ/cm (two-point measurement), depending on the film density (which can be tuned by adjusting the deposition time and/or the number of deposited layers). As also shown, individual Pt nanofibers are easily evidenced by AFM, and their diameter can be evaluated quite accurately when they are deposited directly onto the glass support (see the first two features around X = 1 μm in the line scan in Fig. 3.1b).

Fig. 3.1. Characterization of platinum nanofibers: (a) SEM picture; (b) AFM image with cross-section profile.
Fig. 3.2. (a) Cyclic voltammogram of Pt-Nfbs in 0.5 M H2SO4 solution, scan rate 100 mV/s; (b) dependence of the estimated electroactive surface area (cm²) of Pt-Nfbs on the number of nanofiber layers.
Fig. 3.4. (a) Cyclic voltammograms of Pt-Nfb (1), Pt-Nfb-SiO2 (2) and Pt-Nfb-SiO2-GOx (3) in PBS (pH 6.0) containing 10 mM glucose (2, 3), scan rate 20 mV/s; (b) amperometric response of Pt-Nfb (1), Pt-Nfb-SiO2 (2) and Pt-Nfb-SiO2-GOx (3) to additions of glucose (indicated on the graph); supporting electrolyte PBS pH 6.0, applied potential +0.6 V.
Fig. 3.5. (a) Amperometric response of Pt-Nfb-SiO2-GOx prepared with EAD times of 1 s (1), 2 s (2), 3 s (3), 4 s (4), 5 s (5) and 10 s (6) to additions of glucose (indicated on the graph); supporting electrolyte PBS pH 6.0, applied potential +0.6 V; (b) sensitivity of the Pt-Nfb-SiO2-GOx electrode to glucose at +0.6 V as a function of the sol-gel electrodeposition time.
Fig. 3.6. AFM images and cross-section profiles (along the white lines shown in the images) of silica-modified Pt-Nfbs prepared using increasing electrolysis times: 1 s (a), 2 s (b), 3 s (c), 4 s (d), 5 s (e) and 10 s (f).
Fig. 3.7. SEM images of silica-modified Pt-Nfbs prepared with various electrodeposition times: 1 s (a), 2 s (b), 3 s (c) and 10 s (d).
Fig. 3.8. Amperometric response of a bare Pt electrode (1), two-layer Pt-Nfb (2) and four-layer Pt-Nfb (3) modified with an SiO2-GOx film by EAD (-1.2 V, 3 s) to additions of glucose (indicated on the graph); supporting electrolyte PBS pH 6.0, applied potential +0.6 V.
Fig. 3.9. (a) SEM image of a GCE modified with Pt-Nps by electrochemical generation; (b, c) cyclic voltammograms of the GCE (b) and GCE-Pt-Nps (c) in 0.025 M PBS (pH 7.0) in the absence (1) and presence (2) of 3 mM H2O2, scan rate 50 mV/s.
Fig. 3.10. (a) Amperometric response of GCE-Pt-Nps to additions of H2O2 (indicated on the graph); supporting electrolyte 0.025 M PBS (pH 7.0), applied potential +0.7 V; (b) cyclic voltammograms (20 cycles) of GCE-Pt-Nps in the presence of 1 mM H2O2; supporting electrolyte 0.025 M PBS (pH 7.0), scan rate 50 mV/s.
Fig. 3.11. Comparative cyclic voltammograms of the GCE (a), GCE-Pt-Nps (b), and platinum (c) and gold (d) screen-printed electrodes in the absence (1) and presence (2) of 1 mM H2O2; supporting electrolyte 0.025 M PBS (pH 7.0), scan rate 50 mV/s. The dotted line shows the proportionality of the scales of (a, b) and (c, d).
Fig. 3.12. Cyclic voltammograms (a) and hydrodynamic voltammograms (b) of the AuSPE modified with an SiO2-ChOx film in the absence (1) and presence (2) of 2 mM choline; supporting electrolyte 0.025 M PBS (pH 7.5), scan rate 50 mV/s.
Fig. 3.13. Amperometric curves (a) and the corresponding dependences of current on choline concentration in the initial range (b) for AuSPE-SiO2-ChOx obtained at deposition potentials of -1.0 (1), -1.1 (2), -1.2 (3) and -1.3 (4) V and a deposition time of 20 s; supporting electrolyte 0.025 M PBS (pH 7.5), applied potential +0.7 V.
Fig. 3.14. (a) Cyclic voltammograms of AuSPE-SiO2-ChOx (at the optimum CTAB concentration) in the absence (1) and presence of 2 mM choline: 1st (2), 10th (3), 20th (4) and 30th (5) potential scan cycles (50 mV/s); (b) amperometric curve of AuSPE-SiO2-ChOx in the presence of 0.5 mM choline, applied potential +0.7 V; supporting electrolyte 0.025 M PBS (pH 7.5).
Fig. 3.15. (a) Dependence of the amperometric response of AuSPE-SiO2-ChOx at +0.7 V on the choline concentration; supporting electrolyte 0.025 M PBS (pH 7.5); (b) Lineweaver-Burk plot for the determination of the apparent Michaelis constant.
Fig. 3.16. Dependence of the amperometric response of AuSPE-SiO2-ChOx at +0.7 V on the choline concentration in solution; supporting electrolyte 0.025 M PBS (pH 7.5).

The sensitivity to hydrogen peroxide of the electrode based on platinum nanofibers increases roughly tenfold compared with a bare platinum electrode, making the use of such nanofibers promising for the development of oxidase-based biosensors. At the same time, platinum fibers coated with an SiO2 film have no advantage over a bare platinum electrode. The response of glucose oxidase immobilized in an SiO2 film on the surface of platinum nanofibers depends significantly on the thickness of the biocomposite film. Electrogeneration of platinum particles on the surface of a glassy carbon electrode increases the sensitivity of the latter to hydrogen peroxide; however, the response of such a modified electrode is unstable because of the gradual leaching of nanoparticles from the electrode surface, which makes its application in the development of biosensors unpromising. The use of a gold screen-printed electrode is promising for the immobilization of choline oxidase in an SiO2 film. The simple one-step electrochemically-assisted deposition method allows immobilization of the enzyme on the surface of the gold electrode with preservation of its activity. The stability and sensitivity of the modified electrode toward choline can be increased by altering the deposition parameters and by introducing surfactants into the sol. The best performance of the modified electrode was obtained using deposition at -1.1 V for 20 s and a CTAB concentration in the sol of 12.2 mM. The developed sensitive element of the choline biosensor has a wide linear range and a low detection limit.

CHAPTER 4. IMMOBILIZATION OF SORBITOL DEHYDROGENASE ON ELECTROPHORETICALLY DEPOSITED CARBON NANOTUBES

Dehydrogenases are less commonly used in biosensors than the corresponding oxidases because of their lower stability and the difficulty of detecting the NAD+/NADH coenzyme couple. For example, there are only a few works on the development of biosensors based on sorbitol dehydrogenase (DSDH), an enzyme that can be used to determine its substrate sorbitol; these works are limited to the determination of sorbitol in model solutions, and data on the analysis of real objects are missing. Encapsulation in a thin silica film can increase the stability of enzymes in the immobilized state, making this method appropriate for the development of dehydrogenase-based biosensors.
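Since electrochemically-assisted deposition (EAD) is used below to build the SiO2-DSDH films, the generally accepted picture of the process is sketched here as background (a standard description, not a result of this chapter): a cathodic potential generates hydroxide ions at the electrode surface, and the local pH jump catalyzes the polycondensation of the hydrolyzed sol.

```latex
\begin{align}
  2\mathrm{H_2O} + 2e^- &\rightarrow \mathrm{H_2}\!\uparrow + 2\mathrm{OH^-} \\
  \mathrm{O_2} + 2\mathrm{H_2O} + 4e^- &\rightarrow 4\mathrm{OH^-} \\
  \equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv
      \;&\xrightarrow{\mathrm{OH^-}}\;
      \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv + \mathrm{H_2O}
\end{align}
```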
Given the lack of literature data on the application of the EAD method to the immobilization of dehydrogenases, it is of interest to develop methodological approaches for applying EAD to the immobilization of DSDH in an SiO2 film, as well as to use the developed biosensor for sorbitol determination in real objects. The development of sensitive dehydrogenase-based biosensors requires selective, sensitive and reliable detection of the coenzyme NADH formed in the enzymatic reaction (see paragraph 1.5.2). As noted in the literature review, metal electrodes (including platinum and gold) are unsuitable for this purpose because of the high NADH oxidation potential and the poisoning of the electrode surface by reaction products, which leads to a loss of sensitivity. Electrodes based on allotropic modifications of carbon are a better choice. Carbon nanotubes (CNTs), for example, are promising as a matrix for such biosensors owing to the low NADH oxidation potential on them. However, their handling is difficult because of their small size, negligible solubility in inorganic solvents and tendency to aggregate. As previously stated in the literature review (paragraph 1.3.2), the electrophoretic deposition method can solve these problems, allowing controlled electrode modification with CNTs from dispersions of low concentration. The CNT coatings thus obtained have not yet been used as a matrix for enzyme immobilization, but they have great potential and are promising for electrode modification, including by the sol-gel method. This chapter presents the results of a study of the CNT layer obtained by EPD on the surface of a GCE, including its activity toward NADH oxidation, and demonstrates the prospects of combining it with the EAD method for the immobilization of dehydrogenases.

Fig. 4.1. SEM images of the nanotube deposit on the surface of the GCE for (a) a short potential application time (t = 5 s) and (b) a long potential application time (t = 60 s).
Fig. 4.2. (a) AFM image and (b) cross-section profile (along the white line) of a CNT layer deposited for 30 s; image size 20 × 20 μm.
Fig. 4.3. Dependence of the thickness of the carbon nanotube layer, measured by AFM, on the electrophoretic deposition time.
Fig. 4.4. (a) Cyclic voltammograms under anaerobic conditions of GCE-CNT with adsorbed methylene green at different scan rates: (1) 5 mV/s, (2) 10 mV/s, (3) 20 mV/s, (4) 50 mV/s, (5) 100 mV/s, (6) 200 mV/s; (b) oxidation (1) and reduction (2) peak currents of methylene green versus scan rate.
Fig. 4.6. Cyclic voltammograms of (a) the GCE, (b) GCE-CNT (prepared by EPD) and (c) GCE-CNT (drop-coated) in 0.067 M PBS (pH 7.5) without (solid lines) and with (dashed lines) 5 mM NADH; scan rate 20 mV/s.
Fig. 4.7. Hydrodynamic voltammograms for 5 mM NADH in 67 mM PBS (pH 7.5) at (1) the GCE and (2) the GCE-CNT electrode.
Fig. 4.8. Cyclic voltammograms of the GCE (a) and GCE-CNT (b) in the presence of 5 mM NADH: 1st (1), 2nd (2) and 3rd (3) cycles; supporting electrolyte 0.067 M PBS (pH 7.5), scan rate 20 mV/s.
Fig. 4.9. Linear sweep voltammograms of the GC electrode (1) and the GC-CNT electrode (2) in the sol-gel solution; scan rate 100 mV/s.
Fig. 4.10. Cyclic voltammograms of the GCE (1) and the GCE-CNT electrode (2) modified with a thin SiO2-DSDH film by sol-gel EAD, in 0.1 M Tris-HCl buffer (pH 9.0) containing 5 mM D-sorbitol and 1 mM NAD+; scan rate 20 mV/s.
The conditions of sol-gel EAD were -1.8 V, 16 s for the GCE and -1.3 V, 16 s for the GCE-CNT electrode.
Fig. 4.11. (a) Amperometric response to additions of D-sorbitol of the GCE-CNT electrode modified with an SiO2-DSDH film using different EAD times: (a) 0 s, (b) 10 s, (c) 14 s, (d) 16 s, (e) 17 s, (f) 25 s; E = +0.5 V. (b) Sensitivity of the GCE-CNT electrode modified with the SiO2-DSDH film to D-sorbitol at +0.5 V as a function of the sol-gel EAD time. Supporting electrolyte 0.1 M Tris-HCl buffer (pH 9.0) containing 1 mM NAD+; the electrophoretic deposition time of the carbon nanotubes was 60 s for all samples.
Fig. 4.12. Electrochemical response to D-sorbitol at +0.5 V of the GCE-CNT electrode modified with SiO2-DSDH under optimal parameters (-1.3 V, 16 s), as a function of the electrophoretic deposition time and the corresponding carbon nanotube layer thickness (from equation (4.1)); supporting electrolyte 0.1 M Tris-HCl buffer (pH 9.0) containing 1 mM NAD+.
Fig. 4.13. (a) Dependence of the amperometric response of GCE-CNT-SiO2-DSDH on the D-sorbitol concentration at +0.5 V; supporting electrolyte 0.1 M Tris-HCl (pH 9.0) with 1 mM NAD+.
Fig. 4.14. SEM images, on the edge (left column) and at an angle of 35° (right column), of a CNT-PS composite film deposited in the presence of PS beads at 0.2 mg/mL, before (A, B) and after (C, D) template removal; electrophoretic deposition time 135 s.
Fig. 4.16. (a) Variation of the CNT-PS layer thickness (135 s deposition) as a function of the concentration of PS beads in the initial dispersion, before (1) and after (2) template removal. (b) Variation of the thickness of the CNT-PS (0.2 mg/mL) deposit as a function of the electrophoretic deposition time, as measured by AFM or profilometry, before (1) and after (2) template removal, as well as for samples deposited without PS beads (3).
Fig. 4.17. (a) Cyclic voltammograms of MG adsorbed on macroporous CNT layers ([PS beads] = 0.1 mg/mL) for different electrophoretic deposition durations: (1) 15 s, (2) 45 s, (3) 75 s, (4) 105 s, (5) 135 s; supporting electrolyte 0.067 M PBS (pH 7.5), scan rate 20 mV/s. (b) Peak current of MG as a function of the electrophoretic deposition time for non-macroporous CNT layers (1, 1') and macroporous CNT layers ([PS beads] = 0.1 mg/mL) (2, 2').
Fig. 4.22.
Anodic peak current values corresponding to the adsorbed MG alone or in the presence of 1 mM NADH, as a function of the electrophoretic deposition time applied to prepare the macroporous CNT electrodes.
Fig. 4.23. Cyclic voltammograms of (a, b) a non-macroporous and (c, d) a macroporous CNT film ([PS beads] = 0.2 mg/mL) prepared by electrophoretic deposition for (a, c) 15 s and (b, d) 135 s and modified with adsorbed MG and an SiO2-DSDH film.
Fig. 5.1. Typical amperogram obtained during D-sorbitol determination with GCE-CNT-SiO2-DSDH using the method of standard additions; applied potential +0.5 V.

Sample preparation was performed in accordance with the State Standard [294]. Samples of the objects were weighed on an analytical balance so that the sorbitol concentration in the final volume would be between 20 and 50 mM. The sample was manually crushed in a porcelain mortar, transferred quantitatively into a 100 mL beaker, mixed with 40 mL of bidistilled water heated to 60-70 °C, and stirred for one hour on a heated magnetic stirrer. Additions of a standard 50 mM sorbitol solution were injected into some samples. The resulting suspension was filtered through a paper filter into a 50 mL flask, and the volume was adjusted to the mark. The determination of sorbitol in the resulting solution was then performed by the method of standard additions, as described in paragraph 5.1.1.

Fig. 5.3. Typical amperogram obtained during choline determination with AuSPE-SiO2-ChOx using the method of standard additions; applied potential +0.7 V.
Fig. 5.4 shows that the current increase on the AuSPE upon addition of an ascorbic acid solution that had been in contact with MnO2 is very small (Fig. 5.4a).

Sample preparation was carried out according to [313]. The sample was weighed on an analytical balance, transferred to a beaker, and 30 mL of 1 M HCl was added. The beaker was covered with a watch glass and heated at 60-70 °C in a water bath for 4 hours with occasional stirring; heating with acid promotes the separation of milk proteins and the hydrolysis of ester-bound choline to its free form. After cooling, the mixture was filtered through filter paper into a 50 mL flask. The pH of the solution was adjusted to 3.0-4.0 with 10 M NaOH. Treatment with 50 mg of MnO2 was performed as indicated in section 5.2.2.1, and the volume of the solution was adjusted to the mark with water. The resulting solution was stored in a refrigerator at 4 °C for up to one week. The determination of choline in the resulting solution was performed by the method of standard additions as described in paragraph 5.2.1. The results of the determination are given in Table 5.8; the data obtained are characterized by satisfactory accuracy and reproducibility and correlate with the choline content declared by the manufacturer (Appendix G).
CONCLUSIONS

A simple one-step procedure for the immobilization of oxidases and dehydrogenases in an SiO2 film on the surface of different types of electrodes by the electrodeposition method was unified. The examples of choline oxidase and sorbitol dehydrogenase showed that the immobilized enzymes bind the substrate more actively and retain their activity longer than in solution.
- Electrodes modified with platinum nanomaterials are more sensitive to hydrogen peroxide than platinum macroelectrodes. At the same time, these electrodes coated with a biocomposite SiO2-enzyme film exhibit analytical characteristics similar to those of platinum macroelectrodes.
- Modification of a glassy carbon electrode with carbon nanotubes (CNTs) by the electrophoretic deposition method increases the sensitivity and stability of the amperometric response to the coenzyme NADH, which is promising for the development of dehydrogenase-based biosensors. The use of CNTs shifts the NADH oxidation potential by 0.25-0.3 V toward negative values, thereby increasing the selectivity of its detection.
- The use of a gold screen-printed electrode is promising for the immobilization of choline oxidase in an SiO2 film. An increase in the sensitivity of the modified electrode to choline is achieved by optimizing the parameters of electrochemically-assisted deposition; the maximum response was obtained for deposition for 20 s at a potential of -1.1 V.
- The electrode modified with CNTs and sorbitol dehydrogenase shows significant advantages in terms of the sensitivity and selectivity of sorbitol detection compared with the electrode containing no CNTs. The analytical signal of the modified electrode depends on the electrodeposition parameters and the thickness of the CNT layer. The best characteristics were obtained using electrochemically-assisted deposition at -1.3 V for 16 s and a CNT layer thickness of 100 to 150 nm.

Fig. D.1. AFM image of the surface of the GCE modified with CNTs by EPD.
Table 1.1. Main methods of obtaining sol-gel films. Columns: Method | Advantages | Drawbacks.

Table 3.2. Influence of the CTAB concentration in the sol on the long-term response stability of AuSPE-SiO2-ChOx
C(CTAB), mM | Relative sensitivity after two weeks, %
0.0  | < 1
6.1  | 9.8
12.2 | 52.7
21.3 | 18.9
42.7 | 8.5

Table 5.1. Results of the determination of sorbitol additions in buffer solution using GCE-CNT-SiO2-DSDH (n = 3, P = 0.95)
Added, mM | Found, mM | Sr
0.40 | 0.41 ± 0.04 | 0.04
0.60 | 0.60 ± 0.05 | 0.03
0.90 | 0.90 ± 0.04 | 0.02

Table 5.3. Results of sorbitol determination in food and cosmetics samples (n = 3, P = 0.95)
Sample | Added, mg/g | Found, mg/g | Sr
Toothpaste for children «Oral B Stages berry bubble» | 0 | 117 ± 20 | 0.06
Toothpaste for children «Oral B Stages berry bubble» | 296 | 411 ± 33 | 0.04
Chewing gum «Orbit» | 0 | 549 ± 46 | 0.03
Chewing gum «Orbit» | 694 | 1251 ± 122 | 0.04
Cookies "Diabetic":

Table 5.4. Comparative characteristics of known amperometric biosensors based on DSDH. Columns: Electrode | Modifier | Response time, s | Linear range, mM | Limit of detection, μM | Detection potential, mV | Stability* | RSD, % | Real-object analysis | Ref.
* The stability of the electrode is expressed in the format X% / Y, where X is the percentage of the response (relative to the original) that persists after use of the electrode during the time interval Y.

Table 5.7. Influence of MnO2 on the results of 0.8 mM choline determination using AuSPE-SiO2-ChOx
No. | Solution content | Choline found, mM
1 | Choline | 0.81 ± 0.06
2 | Choline + 10 mg MnO2 | 0.81 ± 0.06
3 | Choline + 0.1 mM ascorbic acid | 0.91 ± 0.09
4 | Choline + 0.1 mM ascorbic acid + 10 mg MnO2 | 0.81 ± 0.07

Table 5.8. Results of choline determination in a model solution and in food products (n = 3, P = 0.95)
Sample | Added | Found, x ± Δx | Sr
Model solution*, mM | 0.30 | 0.34 ± 0.04 | 0.05
Model solution*, mM | 1.00 | 1.09 ± 0.12 | 0.05
Infant formula «Bebi», mg/g | 0 | 0.95 ± 0.14 | 0.06
Infant formula «Bebi», mg/g | 1.12 | 1.99 ± 0.19 | 0.04

Table 5.9. Comparative characteristics of known amperometric biosensors based on ChOx
Electrode | Modifier | Response time, s | Km, mM | Linear range, mM | LOD, μM | Detection potential, mV | Stability* | RSD, % | Real-object analysis | Ref.
Pt | Diaminobenzene-Prussian blue | 30 | 1.2 | 0.05 ÷ 2 | 50 | 0 | 85% / 2 m. | 8 | – | [327]
Pt | Polyvinylferrocene chlorate | 70 | 2.32 | 0.004 ÷ 1.2 | 4 | 750 | – | 4.6 | – | [328]
Carbon paste | Prussian blue | 30 | 2 | 0.02 ÷ 2 | 20 | 50 | 1 m. | 14 | – | [329]
Pt | Au-Nps-PVA-glutaraldehyde | 20 | 0.78 | 0.02 ÷ 0.4 | 10 | 400 | 80% / 14 d. | 7.4 | – | [264]
GCE | PDDA-FePO4-Prussian blue | 2 | 0.47 | 0.002 ÷ 3.2 | 0.4 | 0 | 95% / 14 d. | 3.2 | – | [307]
Pt | SiO2-CNT | 15 | – | 0.005 ÷ 0.1 | 0.1 | 160 | 75% / 1 m. | 4.8 | + | [168]
* The stability of the electrode is expressed in the format X% / Y, where X is the percentage of the response (relative to the original) that persists after use of the electrode during the time interval Y.
ABBREVIATIONS
DPC - diphenylcarbazide;
-ED3A - ethylenediaminetriacetic acid groups immobilized at the surface of a silica-based material;
FTIR - Fourier transform infrared spectroscopy;
ICP-AES - inductively coupled plasma atomic emission spectroscopy;
TPD-MS - temperature-programmed desorption with mass-spectrometric detection;
XPS - X-ray photoelectron spectroscopy;
XFS - X-ray fluorescence spectroscopy;
MCM-41 - silica-based mesoporous material with structurally ordered hexagonal pores (d = 1.5-10 nm);
MCM-41-SH - thiol-functionalized mesoporous silica of MCM-41 type;
MCM-41-SH/SO3H-X - a series of partially oxidized (by H2O2) thiol-functionalized mesoporous silicas, where X = 1-6 (6 corresponds to the most fully oxidized sample and 1 to the sample not oxidized by H2O2);
MCM-41-SH/SO3-2.X - a series of partially oxidized (by air oxygen) thiol-functionalized mesoporous silicas exposed to air for different periods of time (from 1 day (X = 1) to 2 months (X = 3));
SBA-15 - structured mesoporous silica-based material with two-dimensional hexagonal pores (d = 5-9 nm);
-SH - mercaptopropyl groups immobilized at the surface of a silica-based material;
SiO2-ED3A - silica gel grafted with ethylenediaminetriacetic acid groups;
SiO2-SH - silica gel grafted with mercaptopropyl groups;
SiO2-SH/ED3A - silica gel simultaneously grafted with mercaptopropyl and ethylenediaminetriacetic acid groups;
-SO3H - propylsulfonic acid groups immobilized at the surface of a silica-based material;
-S-S - dipropyl disulfide groups immobilized at the surface of a silica-based material.

Introduction

Mainly used in industry for various purposes, chromium compounds are classified as persistent environmental pollutants. Chromium is used in the manufacture of alloys and steel (up to 136 thousand tonnes per year), in pigments and paints, and in leather tanning and electroplating processes (up to 90 thousand tonnes per year). The occurrence of chromium in the environment is mostly attributable to anthropogenic activities. As an example, the chromium concentration measured in industrial effluents in India is 2-5 g L⁻¹, with the total chromium discharge thus reaching up to one tonne per year. Unlike most other metals, the toxicity of chromium depends on its oxidation state. Chromium(III) compounds, which exist mainly as positively charged species in aqueous solution, are characterized by low kinetic lability and are therefore less prone to biological uptake. In contrast, chromium(VI), which is a strong oxidant, exists in solution in anionic form. Being isomorphous with important mineral salts, chromium(VI) is a hundred times more toxic than chromium(III), which translates into a strong carcinogenic activity. Modern methods for the treatment of effluents containing toxic chromium(VI) are mainly based on the immobilization of chromium compounds in solid form. These include: (1) electrodialysis or electrocoagulation, and (2) selective adsorption on activated carbon, functionalized resins or mineral oxides, or on natural biopolymers. These methods are often applied to the removal of chromium from concentrated solutions (100 mg L⁻¹).
A significant breakthrough in the field of selective Cr(VI) adsorption methods is the use of polyfunctional materials exhibiting both reducing and binding properties. Examples of such materials are adsorbents composed of sulfides from biomass, iron hydroxides and metal sulfides, or organic-inorganic hybrid adsorbents. At the surface of such materials, Cr(VI) can be reduced to Cr(III), which can in turn be specifically bound by a functional group or co-precipitated as an oxide. The efficiency of such polyfunctional materials depends on a set of factors, such as the reducing capacity, the affinity of the functional groups towards the Cr(III) and Cr(VI) species, and the surface engineering of the materials. That said, adsorbents of this type can present drawbacks, the main ones being non-reproducible adsorption properties and slow kinetics (particularly true for biomass), rather high operational pH ranges (pH > 5, whereas most chromium-containing effluents are acidic), poor adsorption quantitativity (< 80 %) and the formation of huge amounts of sludge.
The intelligent design of polyfunctional materials could make it possible to circumvent the main problems mentioned above. In this thesis, we propose to examine the behaviour of two bifunctionalized silicas, either mesostructured (i.e., MCM-41) or not (i.e., silica gel, denoted here SiO2), as well as their reactivity towards chromium species. The functional groups selected for modifying the silica samples to this end are, on the one hand, mercaptopropyl and propylsulfonic acid (MCM-SH,SO3H) and, on the other hand, mercaptopropyl and ethylenediaminetriacetate (SiO2-SH/ED3A).
The research started with structurally ordered materials of MCM-41 type, which offer a very high specific surface area while ensuring fast and easy access to the functional groups. Starting from an MCM-41 modified with thiol functions oxidized to various degrees, a set of adsorbent samples characterized by different ratios of grafted thiol/sulfonic acid groups (constant sulfur content = 1 mmol g^-1) was synthesized. Particular attention was paid to the characterization of the surface chemical composition, which is expected to strongly influence the sorption properties. A simple method, based on a single instrumental technique (conductimetric titration), was applied for the simultaneous determination of thiol and sulfonic acid groups on MCM-SH,SO3H. In a second step, the experimental conditions likely to allow effective trapping of Cr(VI) on MCM-SH,SO3H were defined, by studying in particular the effect of pH, of the solid-to-solution ratio and of the adsorbent composition (i.e., the SH/SO3H ratio). On the basis of the collected data, a reduction-sorption mechanism explaining the immobilization process was proposed. In a second approach, another type of bifunctional silica (SiO2-SH,ED3A) was suggested in order to improve the affinity (sorption properties) of the material towards the Cr(III) species generated during Cr(VI) reduction.
Silica gel was chosen as the matrix for grafting controlled amounts of mercaptopropyl and ethylenediaminetriacetate groups onto its surface. The performance of such bifunctional adsorbents was evaluated with respect to various experimental parameters likely to influence the sorption-reduction process (pH, solid-to-solution ratio, concentration) in order to determine the sequestration mechanism and to compare it with the previous adsorbents. Finally, it is shown that the second adsorbent also offers the advantage of being usable under dynamic conditions (column experiments).
This thesis is structured as follows:
Chapter I deals with the literature review;
Chapter II lists the materials and reagents used for the synthesis of the bifunctionalized silicas, as well as the methods implemented to examine their physico-chemical properties and the procedures applied for the study of the sorption processes;
Chapter III presents a conductimetric method for the simultaneous determination of thiol and sulfonic acid groups on MCM-SH,SO3H, while discussing the effects of the SH/SO3H ratio (set by partial, controlled oxidation of thiolated silicas with variable amounts of H2O2);
Chapter IV is dedicated to the study of Cr(III) adsorption on thiolated (MCM-SH) and sulfonated (MCM-SO3H) silicas, as well as the reduction-sorption processes of Cr(VI) on MCM-SH and the discussion of the chromium binding mechanisms at the surface of bifunctional MCM-SH,SO3H;
Chapter V is devoted to the evaluation of the optimal conditions for the removal of Cr(VI).
Aqueous chemistry of chromium
Chromium oxidation states range from 0 to +VI. In the environment, however, it occurs mainly in two oxidation states, Cr(III) and Cr(VI); Cr(IV) and Cr(V) are formed as intermediates during the oxidation of Cr(III) or the reduction of Cr(VI), respectively. The most stable oxidation state is Cr(III), and considerable energy is required to convert it into other states (see Fig. 1.1). The negative standard potential (E0) of the Cr(III)/Cr(II) couple indicates that Cr(II) is readily oxidized to Cr(III), and Cr(II) is stable only in the absence of oxidants (anaerobic conditions). In acidic medium, Cr(VI) shows a very high positive redox potential [START_REF] Ball | Critical evaluation and selection of standard state thermodynamic properties for chromium metal and its aqueous ions, hydrolysis species, oxides, and hydroxides[END_REF], which means that it is strongly oxidizing and unstable in the presence of electron donors [START_REF] Shriver | Inorganic Chemistry 2nd Edition[END_REF].
When considering the equilibria between Cr(III) and Cr(VI), the decisive role played by pH and redox potential must be emphasized. The formation of hydroxo complexes by Cr(III), and of polynuclear species by both Cr(III) and Cr(VI), should also be taken into account. To show the conditions of pH and potential under which each species is thermodynamically stable, a Pourbaix diagram is useful (see Fig. 1.2, drawn for a total chromium concentration in solution of 10^-4 M). The approach, however, does not take kinetic constraints into account, and when Cr is introduced into, or exists in, the natural environment, its actual form may differ from that predicted by the diagram.
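As a worked illustration of the pH dependence underlying the Pourbaix diagram, one can combine the Cr(VI)/Cr(III) half-reaction with the Nernst equation; the standard potential used below is a commonly tabulated textbook value, not a number taken from this work:

$$\mathrm{HCrO_4^-} + 7\,\mathrm{H^+} + 3\,e^- \rightleftharpoons \mathrm{Cr^{3+}} + 4\,\mathrm{H_2O}, \qquad E^\circ \approx 1.35\ \mathrm{V}$$

$$E = E^\circ - \frac{0.0592}{3}\log\frac{[\mathrm{Cr^{3+}}]}{[\mathrm{HCrO_4^-}]} - \frac{7 \times 0.0592}{3}\,\mathrm{pH}$$

At equal Cr(VI) and Cr(III) activities, the potential thus drops by about 0.138 V per pH unit, which is why Cr(VI) is a strong oxidant in acidic media but a much weaker one in neutral or alkaline media.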
[START_REF] Nieboer | Biologic chemistry of chromium[END_REF] The composition of aerated waters containing trivalent chromium changes over time, since such solutions can be affected by hydrolysis, complexation, redox reactions and sorption. In the absence of complexing agents (other than H2O and OH^-), Cr(III) exists as the hexaaquachromium(III) complex (a relatively strong acid, pK ~ 4) and its hydrolysis products [START_REF] Baes | The Hydrolysis of Cations[END_REF]: Cr(OH)^2+, Cr(OH)2^+ and Cr(OH)3. These dominate consistently within pH 4-10. This range contains the pH values characteristic of natural waters, so the cited hydroxo complexes are the main forms in which Cr(III) exists in the environment. Increasing the Cr(III) concentration (> 10^-4 M) leads to the formation of polynuclear hydroxo complexes (Cr2(OH)2^4+, Cr3(OH)4^5+, Cr4(OH)6^6+) [START_REF] Rai | Chromium(III) hydrolysis constants and solubility of chromium hydroxide[END_REF]. This process of condensation through the formation of hydroxo bridges is known as "olation" (eq. 1.4). Cr(III) aqua complexes are highly inert, meaning a low rate of exchange of water molecules for other ligands; the half-time of such exchange can reach several days [START_REF] Perrin | Organic analytical reagents[END_REF]. Besides complexation with water molecules and hydroxide ions, the mobility of trivalent chromium may also decrease owing to binding with macromolecular systems such as humic acids [START_REF] Geisler | [END_REF].
The main forms of Cr(VI) found in natural waters are HCrO4^- and CrO4^2-. At pH < 1, the strong acid H2CrO4 is formed [START_REF] Saleh | Kinetics of chromium transformation in the environment[END_REF][START_REF] Sperling | Determination of chromium(III) and chromium(VI) in water using flow injection on-line pre-concentration with selective adsorption on activated alumina and flame atomic absorption spectrometric detection[END_REF]. The distribution of these species as a function of pH is described by the equations and constants given in [START_REF] Allison | MINTEQA2/PRODEFA2, A Geochemical Assessment Model for Environmental Systems: Version 3[END_REF] (eqs. 1.5-1.7) and is illustrated in Fig. 1.3. In acidic media, when the Cr(VI) concentration exceeds 10 mM, the chromate species dimerize to form the orange-red dichromate ion [START_REF] Allison | MINTEQA2/PRODEFA2, A Geochemical Assessment Model for Environmental Systems: Version 3[END_REF]:
2 HCrO4^- = Cr2O7^2- + H2O,  pK = -1.54  (1.8)
Cr(VI) does not give rise to the extensive series of polyacids and polyanions [START_REF] Cotton | Advanced Inorganic Chemistry[END_REF][START_REF] Greenwood [END_REF] characteristic of somewhat less acidic oxides, such as those of V(V), Mo(VI) or W(VI). The reason for this is perhaps the greater extent of multiple bonding (Cr=O) for the smaller chromium atom. Polymerization beyond dichromate is apparently limited to the formation of tri- (Cr3O10^2-) and tetrachromate (Cr4O13^2-) [START_REF] Cotton | Advanced Inorganic Chemistry[END_REF]. Cr(VI) compounds are readily soluble in water and are therefore highly mobile in the environment. They can be reduced to Cr(III) by electron donors present in water in the form of organic matter and/or reduced inorganic species. Since HCrO4^- reduction is accompanied by H^+ consumption (eq. 1.9), a decrease in acidity decreases the formal potential (see Fig. 1.4).
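The Cr(VI) distribution just described can be sketched numerically. In the minimal sketch below, the dimerization constant comes from eq. 1.8, while the two acid dissociation constants are commonly tabulated approximate values and should be read as assumptions, not as data from this work.

import math

K_DIM = 10 ** 1.54   # 2 HCrO4- = Cr2O7^2- + H2O, from eq. 1.8 (pK = -1.54)
KA1 = 10 ** -0.7     # H2CrO4 = H+ + HCrO4-   (assumed literature value)
KA2 = 10 ** -6.5     # HCrO4- = H+ + CrO4^2-  (assumed literature value)

def cr6_speciation(pH: float, cr_total: float) -> dict:
    """Molar concentrations of Cr(VI) species at a given pH.

    cr_total is the total analytical Cr(VI) concentration (mol/L).
    """
    h = 10 ** -pH
    # Mass balance in terms of x = [HCrO4-]:
    #   cr_total = x*(1 + h/KA1 + KA2/h) + 2*K_DIM*x**2
    a, b, c = 2 * K_DIM, 1 + h / KA1 + KA2 / h, -cr_total
    x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return {
        "H2CrO4": h / KA1 * x,
        "HCrO4-": x,
        "CrO4^2-": KA2 / h * x,
        "Cr2O7^2-": K_DIM * x * x,
    }

# The dimer only matters at high Cr(VI) levels, as stated in the text:
for c_tot in (1e-4, 1e-2):
    s = cr6_speciation(pH=2.5, cr_total=c_tot)
    print(c_tot, {k: f"{v:.2e}" for k, v in s.items()})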
The high redox potential of the Cr(VI)/Cr(III) couple makes the oxidation of Cr(III) by dissolved oxygen insignificant; only the presence of manganese oxides (γ-MnO2, β-MnO2, α-MnO2, δ-MnO2) or H2O2 leads to its effective oxidation in ecological systems [START_REF] Pettine | Digestion treatments and risks of Cr(III)-Cr(VI) interconversions during Cr(VI) determination in soils and sediments - a review[END_REF]. The probability of Cr(III) being transformed into Cr(VI) increases with increasing pH [15]. The transformation becomes less effective for "aged" Cr(III) solutions, in which non-labile precipitates of chromium hydroxides have formed [START_REF] Csoba | Sorption of Cr(III) on silica and aluminium oxide: experiments and modelling[END_REF]. Yet, if precipitates have not formed, manganese oxide can effectively oxidize Cr(III) (10^-5 M) over 90 days (3 < pH < 10) [START_REF] Pettine | Digestion treatments and risks of Cr(III)-Cr(VI) interconversions during Cr(VI) determination in soils and sediments - a review[END_REF]. Cr(III) complexes with organic ligands are also not as easily oxidized as their aqua/hydroxo analogues, which means that the trivalent state is better stabilized by ligands other than H2O and/or OH^-. In contrast to Cr(III) complexes, Cr(VI) species are only weakly sorbed on inorganic surfaces, making Cr(VI) the most mobile form of Cr in the environment. Hydrogen peroxide, which reduces Cr(VI) to Cr(III) under acidic conditions [START_REF] Pettine | Hydrogen peroxide interference in the determination of chromium(VI) by the diphenylcarbazide method[END_REF][START_REF] Pettine | Reduction of Hexavalent Chromium by H2O2 in Acidic Solutions[END_REF], is one of the possible reductants in aqueous solutions; others include Fe(II), sulfide, sulfite and a number of organic compounds [START_REF] Pettine | Digestion treatments and risks of Cr(III)-Cr(VI) interconversions during Cr(VI) determination in soils and sediments - a review[END_REF].
The nature and behaviour of the various Cr forms found in wastewater can be quite different from those present in natural waters because of the altered physicochemical conditions of effluents originating from various industrial sources. The presence and concentration of Cr forms in discharged effluents depend mainly on the Cr compounds applied in the technological process, on pH, and on the organic and/or inorganic wastes coming from the material processing. Thus, hexavalent Cr will dominate in wastewater from the metallurgical industry, the metal finishing industry (hard chromium plating), the refractory industry and the production or application of pigments (chromate colour pigments and corrosion inhibition pigments). Cr(III) will be found mainly in wastewater from the tannery, textile (printing, dyeing) and decorative plating industries. The presence of various inorganic and organic ligands, as well as the pH value of the effluent, determines the Cr forms by influencing their solubility, sorption and redox reactions. For example, although Cr(III) is the most expected Cr form in tannery wastewater, redox reactions occurring in sludge can increase the concentration of the hexavalent form.
Under slightly acidic or neutral pH conditions in this type of wastewater, poorly soluble Cr(OH)3·aq should be the preferred form, but a high content of organic matter originating from hide processing is effective in forming soluble organic Cr(III) complexes [START_REF] Stein | Chromium speciation in the wastewater from a tannery[END_REF][START_REF] Walsh | Chromium speciation in tannery effluent. An assessment of techniques and role of organic Cr(III) complexes[END_REF].
Occurrence and biological effects of chromium
In nature, chromium occurs quite extensively, mostly in its trivalent state in the form of minerals, mainly chromite. Cr(III) can also be found in fruits, vegetables and meat. It is recognised as an essential trace element in human and animal diets and is important in glucose metabolism. Most diets are considered deficient in chromium, for which the recommended daily intake is 200 µg for adults [START_REF] Anderson | Chromium as an Essential Nutrient, The Chromium File[END_REF]. Chromium can also occur as hexavalent chromium and persist in polyatomic anionic form, as CrO4^2-, under strongly oxidizing conditions. Natural chromates are rare; Cr(VI) and Cr(0) are mainly formed as a result of manufacturing processes. Chromium at zero valency exists as metallic chromium and in many chromium-containing alloys, including stainless steels. In these cases, chromium at the surface is spontaneously oxidized to Cr(III), creating a passive film which prevents further oxidation and is responsible for corrosion resistance.
Hexavalent chromium occurs predominantly in chemical manufacturing processes and, to a much lesser extent, in metallurgical processes such as ferrochromium and stainless steel production, stainless steel welding and some high-temperature furnace operations that use chromium-containing refractories. The chemical manufacturing processes that cause the formation of Cr(VI) are:
- The manufacture of chromates and dichromates through the roasting of chromite ore. All other industrial chromium chemicals are in turn made from sodium dichromate or, to a much lesser extent, from sodium chromate.
- Chromium plating and surface treatment of metals. Conventional electrolytic processes for chromium plating use chromic acid to deposit chromium metal on the surfaces of other metals.
- Leather tanning. Basic chromium sulphate, in which chromium is present in the trivalent state, has been used in leather tanning for nearly 150 years. Although hexavalent chromium salts were historically converted to chromium sulphate by the tanners themselves, this practice is now rare in most countries, where tanneries are supplied with chromium tanning agents containing no detectable levels of hexavalent chromium.
- Spray painting. There is evidence of an increased risk of lung cancer in workers engaged in the manufacture of zinc chromate and sparingly soluble chromate compounds, which are classified as carcinogenic. These materials are used in anticorrosion primer paints applied to metal surfaces. It is therefore essential to minimise exposure by using local exhaust ventilation and personal protective equipment.
- Refractory industries. Although chromium-based refractories are generally considered inert, some hexavalent chromium compounds may be present during the manufacturing stages. Many chromium-containing refractories are used in processes whose conditions may lead to the formation of hexavalent chromium, particularly high-temperature operations in oxygen-containing atmospheres.
- Wood preservation industry. Chromium-based wood preservatives are commonly used in the treatment of timber to extend its useful life; the chromium acts to fix the other preservative components in the timber.
The toxicity of the trivalent and metallic forms of chromium by conventional exposure routes is low. Trivalent chromium is poorly absorbed by the body and does not easily cross cell membranes, and metallic and alloyed forms need to be ionized in order to cross any cell membrane. The most significant occupational health effects are related to hexavalent chromium compounds. In aqueous solutions, Cr(VI) exists as oxo-anions which are isomorphic with the vitally important sulfates and phosphates [START_REF] Langrrd | One hundred years of chromium and cancer: A review of epidemiological evidence and selected case reports[END_REF]. The carcinogenic effect of hexavalent chromium is thought to relate to the ability of chromate ions to cross cell membranes, where the subsequent reduction in valence is accompanied by genetic damage (see Fig. 1.5). The relationship between exposure and effect is complicated by the fact that extracellular body fluids can detoxify hexavalent chromium by reducing it to the trivalent state [START_REF] Kotas | Chromium Occurrence in the Environment and Methods of Its Speciation[END_REF].
Fig. 1.5. Hypothetical model of chromium transport and toxicity in plant roots [START_REF] Shanker [END_REF]
Exposure to such compounds may result in acute effects such as skin and nasal irritation, ulceration, nasal septum perforation and respiratory sensitisation. The most serious health effect is respiratory cancer. Epidemiological studies have confirmed that long-term exposure to high levels of hexavalent chromium, as encountered historically in the manufacture of chromate chemicals and chromate pigments and in electrolytic plating processes using chromic acid, has led to a measurable excess incidence of respiratory cancer with a latency period in excess of 15 years [25, 26, 27, 28, 29]. Because of these health effects, all commercially available hexavalent chromium compounds are heavily regulated and in many areas are classified as occupational carcinogens. Table 1.1 shows the classification of the carcinogenicity of chromium compounds established by the international organization IARC [START_REF]Monographs on the Evaluation of Carcinogenic Risks to Humans[END_REF].
Chromium removal techniques
Several well-documented reviews and monographs are available dealing with chromium removal from wastewaters, at concentrations ranging from mg L^-1 to µg L^-1 levels [START_REF] Beszedits | Chromium removal from industrial wastewaters[END_REF][START_REF] Soundararajan | Biosorption of chromium ion[END_REF][START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Gode | Removal of chromium ions from aqueous solutions by adsorption method[END_REF][START_REF] Sharma | Chromium removal from water: a review[END_REF][START_REF] Sen | Chromium removal using various biosorbents[END_REF]
[START_REF] Miretzky | Cr(VI) and Cr(III) removal from aqueous solution by raw and modified lignocellulosic materials: A review[END_REF][START_REF] Malaviya | Physicochemical technologies for remediation of chromium-containing waters and wastewaters[END_REF]. The remediation schemes proposed first were directed at reducing the carcinogenic, soluble and mobile Cr(VI) (i.e., in acidic medium, pH ~2) to the less toxic and less mobile Cr(III), which forms insoluble or sparingly soluble precipitates (i.e., in alkaline medium, above pH ~9-10). Such methods, however, are applicable only to concentrated industrial wastewater and have come to be considered undesirable owing to the use of expensive chemicals, poor removal efficiency with respect to regulatory standards, and the production of large amounts of chemical sludge [START_REF] Cabatingan | Potential of biosorption for the recovery of chromate in industrial wastewaters[END_REF]. This has generated a huge body of work investigating other approaches to chromium remediation. Existing technologies are nowadays mainly based on immobilization on solid supports, or on separation and filtration processes, associated or not with reduction/precipitation. They include:
- membrane filtration (membrane separation [START_REF] Kozlowski | Removal of chromium(VI) from aqueous solutions by polymer inclusion membranes[END_REF], ultrafiltration [START_REF] Ghosh | Hexavalent chromium ion removal through micellar enhanced ultrafiltration[END_REF][START_REF] Mungray | Removal of heavy metals from wastewater using micellar enhanced ultrafiltration technique: a review[END_REF], reverse osmosis [START_REF] Ozaki | Performance of an ultra-low-pressure reverse osmosis membrane (ULPROM) for separating heavy metal: effects of interference parameters[END_REF], dialysis/electrodialysis [START_REF] Mohammadi | Modeling of metal ion removal from wastewater by electrodialysis[END_REF][START_REF] Li | Concentration and purification of chromate from electroplating wastewater by two-stage electrodialysis processes[END_REF] or electro-deionization [START_REF] Xing | Variable effects on the performance of continuous electrodeionization for the removal of Cr(VI) from wastewater[END_REF][48]) and other separation techniques (coagulation/electrocoagulation [START_REF] Parga | Characterization of electrocoagulation for removal of chromium and arsenic[END_REF][START_REF] Akbal | Comparison of electrocoagulation and chemical coagulation for heavy metal removal[END_REF], reduction/coagulation/filtration [START_REF] Qin | Hexavalent chromium removal by reduction with ferrous sulfate, coagulation, and filtration: a pilot-scale study[END_REF], flotation [START_REF] Matis | Recovery of metals by ion flotation from dilute aqueous solutions[END_REF]) or extraction processes (solvent extraction [START_REF] Salazar | Equilibrium and kinetics of Cr(VI) extraction with Aliquat 336[END_REF], chemical or electrochemical precipitation [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of oxoanions[END_REF], electrokinetic extraction [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of oxoanions[END_REF], sedimentation [START_REF] Song | Sedimentation of tannery wastewater[END_REF], coagulation-flocculation-sedimentation [START_REF] Haydar | Coagulation-flocculation studies of tannery wastewater using combination of alum with cationic and anionic polymers[END_REF]);
- ion exchange (resins, mainly anion exchangers for Cr(VI) [START_REF] Shi | Removal of hexavalent chromium from aqueous solutions by D301, D314 and D354 anion-exchange resins[END_REF][START_REF] Rafati | Removal of chromium (VI) from aqueous solutions using Lewatit FO36 nano ion exchange resin[END_REF][START_REF] Neagu | Removal of hexavalent chromium by new quaternized crosslinked poly(4-vinylpyridines)[END_REF] but also cation exchangers for Cr(III) [60, [START_REF] Cavaco | Evaluation of chelating ion-exchange resins for separating Cr(III) from industrial effluents[END_REF]], and ion-exchange columns [START_REF] Kabir | Removal of chromate in trace concentration using ion exchange from tannery wastewater[END_REF][START_REF] Sahu | Removal of chromium(III) by cation exchange resin, Indion 790 for tannery waste treatment[END_REF][START_REF] Tang | Column study of Cr(VI) removal by cationic hydrogel for in-situ remediation of contaminated groundwater and soil[END_REF]);
- selective adsorption (on various, often low-cost, adsorbents [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Pollard | Low cost adsorbents for waste and wastewater treatment: a review[END_REF][START_REF] Babel | Low cost adsorbents for heavy metals uptake from contaminated water: a review[END_REF] such as activated carbon [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Fang | Cr(VI) removal from aqueous solution by activated carbon coated with quaternized poly(4-vinylpyridine)[END_REF], mineral oxides [START_REF] Bois | Experimental study of chromium adsorption on minerals in the presence of phthalic and humic acids[END_REF], functionalized resins [START_REF] Misra | Iminodiacetic acid functionalized cation exchange resin for adsorptive removal of Cr(VI), Cd(II), Ni(II) and Pb(II) from their aqueous solutions[END_REF][START_REF] Gode | Column study on the adsorption of Cr(III) and Cr(VI) using Pumice, Yarikkaya brown coal, Chelex-100 and Lewatit MP 62[END_REF], sol-gel-derived functional materials [71, [START_REF] Park | Adsorption of chromium (VI) from aqueous solutions using an imidazole functionalized adsorbent[END_REF][START_REF] Liu | Removal of Cr(III, VI) by quaternary ammonium and quaternary phosphonium ionic liquids functionalized silica materials[END_REF]] or natural (bio)polymers [START_REF] Miretzky | Cr(VI) and Cr(III) removal from aqueous solution by raw and modified lignocellulosic materials: A review[END_REF][START_REF] Crini | Recent developments in polysaccharide-based materials used as adsorbents in wastewater treatment[END_REF]) and bioadsorption (on various biological materials [START_REF] Soundararajan | Biosorption of chromium ion[END_REF][START_REF] Moussavi | Biosorption of chromium(VI) from industrial wastewater onto pistachio hull waste biomass[END_REF][START_REF] Saha | Biosorbents for hexavalent chromium elimination from industrial and municipal effluents[END_REF][START_REF] Sahmoune | Advanced biosorbents materials for removal of chromium from water and wastewaters[END_REF]);
- and some other processes (photocatalytic reduction [START_REF] Kajitvichyanukul | Sol-gel preparation and properties study of TiO2 thin film for photocatalytic reduction of chromium(VI) in photocatalysis process[END_REF], phytoremediation, … [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of oxoanions[END_REF]).
All these methods exhibit advantages and disadvantages, and they are most often applied to the removal of chromium from solutions containing relatively high initial chromium concentrations (i.e., > 100 mg L^-1). Adsorptive filtration and ion exchange are suitable for small-scale applications. Membrane technology is effective in removing both hexavalent and trivalent chromium species, but it can suffer from membrane fouling and high costs [START_REF] Malaviya | Physicochemical technologies for remediation of chromium-containing waters and wastewaters[END_REF]. Adsorption, though likely to generate non-negligible amounts of sludge with the associated disposal problems, has recently emerged among the most promising approaches for simple, efficient and selective chromium removal [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Gode | Removal of chromium ions from aqueous solutions by adsorption method[END_REF].
An interesting breakthrough in the field is the possibility of combining reductive and sorption properties in a single solid, giving rise to chromium immobilization according to a reduction-sorption process. In such adsorbents, one part of the material has the propensity to reduce the most toxic Cr(VI) species, while another part immobilizes the Cr(III) moieties so generated. Examples of materials reported to exhibit such reduction-sorption capabilities mainly include sludge and/or sulfur-containing biomass and other biosorbents [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF][82, [START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Escudero | Modeling of kinetics of Cr(VI) sorption onto grape stalk waste in a stirred batch reactor[END_REF][START_REF] Wu | Cr(VI) removal from aqueous solution by dried activated sludge biomass[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF][START_REF] Liu | Polyethylenimine modified eggshell membrane as a novel biosorbent for adsorption and detoxification of Cr(VI) from water[END_REF]], but also organic-inorganic hybrids [71] or inorganic iron metal or sulfides [START_REF] Demoisson | Pyrite oxidation by hexavalent chromium: investigation of the chemical processes by monitoring of aqueous metal species[END_REF][START_REF] Cao | Remediation of Cr(VI) using zero-valent iron nanoparticles: kinetics and stoechiometry[END_REF].
Even if some redox-active centers (for Cr(VI) reduction) and/or complexing groups (for Cr(III) binding) can be identified [71, [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF][START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF]], the intrinsic complexity of these materials has often hindered a deep understanding of the main chemical parameters affecting the overall uptake process.
Functionalized adsorbents for chromium sequestration
A variety of modified sorbents created for the preconcentration of heavy metals exists today. They can be classified by matrix or by modifier type. Modifiers include organic reagents and their complex compounds, mineral acids (heteropolyacids) and their salts, natural compounds and certain microorganisms [START_REF] Ostrovska | Voda. Indikatornye sistemy (Water, Indicator systems)[END_REF]. Matrices for adsorbents include synthetic and natural polymers, and mineral carriers or inert materials onto which modifying compounds can be grafted. The chemical modification of silica-based adsorbents can be divided into groups according to the method of synthesis:
1. Covalent (chemical) bonding of functional groups at the surface of an adsorbent [START_REF] Lisichkin | Khimiya privitykh poverkhnostnykh soedinenii (Chemistry of Grafted Surface Compounds)[END_REF]:
- by grafting [START_REF] Zaitsev [END_REF][93] (scheme 1.1, a): modification of the adsorbent matrix by attachment of functional molecules to the pore surface, so that the groups are relatively isolated. By employing just enough water in the process to form a monolayer on the pore surface, more continuous coatings of organosilanes may nevertheless be obtained, leading to a high concentration of organic groups in the product. Excess water must be avoided, because it can lead to uncontrolled polymerization of the silylation reagents within the channels or outside the mesoporous adsorbent;
- by the co-condensation principle [START_REF] Melde | Hybrid Inorganic-Organic Mesoporous Silicates Nanoscopic Reactors Coming of Age[END_REF] (polymerization, sol-gel technology; scheme 1.2): in this case the modifier can be incorporated into the matrix of the final material during its synthesis. (1.2)
2. Non-covalent immobilization [START_REF] Nadzhafova | Test paper for the determination of aluminim in solutions[END_REF] of functional groups at the surface, by:
- impregnation (soaking) of the adsorbent matrix with a solution of the modifier;
- dispersion, electrostatic, dipole-dipole or hydrogen-bonding interactions.
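Whichever modification route is chosen, the resulting loading of grafted groups is commonly back-calculated from elemental analysis. The short sketch below shows the conversion from sulfur weight percent to mmol g^-1 for S-bearing groups; the wt% values are illustrative assumptions, not measured data from this work.

# Minimal sketch: converting elemental sulfur analysis (wt% S) into the
# loading of S-bearing functional groups (mmol per gram of adsorbent).

M_S = 32.06  # g/mol, molar mass of sulfur

def s_loading_mmol_per_g(wt_percent_s: float) -> float:
    """Assumes one S atom per grafted group (-SH or -SO3H)."""
    return wt_percent_s / 100.0 / M_S * 1000.0

for label, wt in [("thiol silica, example A", 2.9), ("thiol silica, example B", 3.2)]:
    print(f"{label}: {s_loading_mmol_per_g(wt):.2f} mmol S per g")

With 2.9 wt% S the conversion gives about 0.90 mmol g^-1, consistent in magnitude with the loadings quoted elsewhere in this work.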
Ion exchange
The exchange reactions of Cr(VI) with anion exchangers are described by the following equations:
2 R^+ -OH^- + CrO4^2- ↔ (R^+)2 -CrO4^2- + 2 OH^-   (1.11)
(R^+)2 -CrO4^2- + CrO4^2- + H^+ ↔ (R^+)2 -Cr2O7^2- + OH^-
where R is the ion-exchanger matrix. NaOH solutions are applied for anion-exchanger regeneration. As mentioned earlier, the industrial wastewaters formed during the chromium plating of other metal surfaces also contain ions of other elements besides chromium. Therefore, the problem of separating the individual components, and above all of recovering both chromium and the other metal ions, occupies an important position. The performance of the chromate ion-exchange process has been reported to be greatly influenced by the properties of the anion-exchange resins. Commercially available polymeric sorbents bearing weak and/or strong base groups on the surface are used as anion exchangers. Examples of strong anion exchangers are Amberlite IRA 400 [START_REF] Mustafa | Selectivity reversal and dimerization of chromate in the exchanger Amberlite IRA-400[END_REF] and IRA-900 [97]; weak anion exchangers are represented by resins such as Amberlite [START_REF] Tenorio | Treatment of chromium plating process effluents with ion exchange resins[END_REF]. The capacity of ion exchangers depends on the type of their functional groups, the type of matrix and the concentration of functional groups.
A high distribution coefficient for the removal of Cr(VI) ions was also found when applying anion-exchange fibres with pyridine groups [110]. Investigations showed that the basicity of the sorbent does not affect its selectivity for chromium ions. The capacity of the resin obtained with pyridine groups was estimated at 130 mg g^-1. It was also found that fibres containing pyridine groups of low basicity are resistant to the oxidizing effect of Cr(VI) ions, which stabilizes their ion-exchange capacity over sorption-desorption cycles.
The principle of cation exchange (eq. 1.12) is used in order to isolate chromium in its trivalent form:
n R-SO3^- H^+ + M^n+ ↔ (R-SO3^-)n M^n+ + n H^+   (1.12)
where M^n+ is either hexaaquachromium(III) or one of its hydrolysis products. For this purpose, cation exchangers with strong sulphonic acid groups (Lewatit 100 S), iminodiacetic acid groups (Amberlite IRC 718) or weak carboxylic acid groups (Chelex-100, Amberlite IRC 76) are used [115, 116]. The removal of Cr(III) from industrial wastewaters is accompanied by some difficulties, particularly in the presence of sulphates. Literature data indicate the existence of many Cr(III) complexes in solution; Cr(III) complexes amounting to 10% to 32% of the total were found not to be sorbed by sulphonic cation exchangers [117]. Besides, in acidic solutions the behaviour of the Cr(III)/cation-exchanger system becomes more complicated owing to the formation of differently charged complexes, whose relative amounts depend on the composition of the solutions and the conditions of their preparation. Thus, purple aqueous solutions of Cr(III) salts are highly inert owing to the formation of aqua complexes, while heating such solutions in the presence of Cl^-, Br^-, I^- or NO3^- promotes ligand substitution in the coordination sphere.
The adsorption of Cr(III) by cation exchangers is complicated by the formation of coordination bonds between the acidic groups (carboxylic and sulfonic acid groups) and the main Cr(III) complexes, which affects its quantitative removal. In particular, the adsorption isotherm of Cr(III) on a strongly acidic (sulfonic) cation exchanger [115] (see Figure 1.7) shows that in the concentration range 0.1-1 mmol L^-1 the binding of Cr(III) is not as effective as one would predict, owing to such bonding between the acid groups of the cation exchanger and the Cr(III) aqua complexes (even in the case of acid groups as strong as sulfonic and phosphinic ones), in comparison with the anion exchange applied for Cr(VI) removal.
Moreover, the sorption properties of cation exchangers are strongly affected by increasing ionic strength, so they cannot be recommended for the effective removal of chromium in its trivalent form after preliminary reduction of Cr(VI).
Chelating adsorbents
Alternatively, selective adsorption of chromium can be performed on chelating adsorbents. Reduction of Cr(VI) prior to its adsorption was used by Sumida [124] for the selective determination of both forms of chromium. The method is based on the selective adsorption of Cr(III) on the surface of a polymer sorbent modified with iminodiacetic acid groups. Two identical columns were used, with a cartridge of reducing agent placed between them: Cr(III) was adsorbed on the first column, while Cr(VI) was reduced in the cartridge and then adsorbed on the second column in the form of Cr(III). Elution was performed with a 2 M solution of HNO3.
To achieve the selective binding and extraction of heavy metal ions, 4-amino-3-hydroxy-2-(2-chlorobenzene)-azo-1-naphthalenesulfonic acid (AHCANSA) was used as a chelating agent [125]. AHCANSA groups were immobilized either physically (I) or covalently (II) on the surface of two silica samples, at concentrations of 0.488 and 0.473 mmol g^-1, respectively. These adsorbents were found to adsorb heavy metals such as Cr(III), Ni(II), Cu(II), Zn(II), Cd(II) and Pb(II) effectively, with sorption capacities in the range 0.250 to 0.483 mmol g^-1. Both silicas were found to be effective metal adsorbents (adsorption yields of 95.2-98.1% and 92.5-97.1%, respectively). Under optimal conditions, the sorbents can remove trace amounts of metal ions (up to 1.0 and 2.00-2.50 µg mL^-1, respectively). Quinolinol was also immobilized on a silica surface for the removal of trace amounts of metals.
In contaminated surface waters at pH lower than neutral, chromates can be accompanied by other heavy metal ions such as Cu(II), Zn(II) and Ni(II). The chelating ion exchanger described above can be applied for the simultaneous removal of both heavy metal ions and chromates(VI) from a given medium (Fig. 1.9). The application of organic ligands capable of forming complex compounds with Cr(III) can be affected by other heavy metal ions, but the examples given in this section have demonstrated that the use of appropriate experimental conditions (pH, chromatographic separation) can guarantee adequate chromium adsorption parameters. Quantitative sequestration of Cr(VI) on chelating adsorbents after its reduction to Cr(III) cannot, however, be considered a satisfactory solution to the problem: Cr(VI) reduction in solution requires specific optimization in each and every case (the qualitative and quantitative composition of the effluent must be considered) to avoid incomplete reduction, and additional impurities, such as the reducing agent and the products of the redox reaction, are introduced into the treated wastewater.
Low-cost (bio)sorbents from natural materials
Various natural and/or biological materials can adsorb different metal ions relatively efficiently, a process generally referred to as biosorption [130]. A huge amount of work has been devoted to the sorption of Cr(VI) on materials such as
sludge and/or sulfur-containing biomass [START_REF] Wu | Cr(VI) removal from aqueous solution by dried activated sludge biomass[END_REF], non-living bacteria [START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF], algae [START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF], fungi [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF], bio-waste products [82, [START_REF] Escudero | Modeling of kinetics of Cr(VI) sorption onto grape stalk waste in a stirred batch reactor[END_REF][START_REF] Liu | Polyethylenimine modified eggshell membrane as a novel biosorbent for adsorption and detoxification of Cr(VI) from water[END_REF]] and iron sulfides [START_REF] Demoisson | Pyrite oxidation by hexavalent chromium: investigation of the chemical processes by monitoring of aqueous metal species[END_REF][START_REF] Cao | Remediation of Cr(VI) using zero-valent iron nanoparticles: kinetics and stoechiometry[END_REF]. Over the past decade, more than 200 articles have been published in various international journals on Cr(VI) biosorption. Most of the early studies suggested that Cr(VI) is removed from the aqueous phase by electrostatic interaction of chromates with positively charged groups of the biomaterials. More recently it has been argued that this interpretation was incorrect [131], since no attention was paid to chromium speciation in the equilibrium solution and at the surface of the biomaterials and, in many cases, the contact time was insufficient to establish equilibrium.
The latest review on biosorbents [139] proposes the existence of four models of Cr(VI) biosorption, among them:
1. Anionic adsorption [137, 138, 140, 141]. Negatively charged chromium species (CrO4^2-/HCrO4^-) are bound to positively charged groups of the biomass; the amount of adsorption depends on the nature of the biomass.
2. Anionic and cationic adsorption [146, 147, 148]. According to this mechanism, part of the hexavalent chromium is reduced to trivalent chromium, and both the hexavalent (anionic) and trivalent (cationic) chromium are adsorbed on the biomass.
3. Reduction and anionic adsorption [149]. According to this mechanism, part of the hexavalent chromium is reduced to Cr(III) by the biosorbent; mainly Cr(VI) is adsorbed on the biomass, while Cr(III) remains in the solution.
The fourth model, "adsorption-coupled reduction", in which adsorbed Cr(VI) is reduced to Cr(III) that is retained by the solid, is the mechanism exploited by the silica-based materials discussed later in this chapter.
It is instructive to consider the removal efficiency of the different forms of chromium (Table 1.7) and of total chromium (Table 1.8) on bio-adsorbents as a function of pH. The efficiency of Cr(III) removal increases in more basic media, while the effectiveness of Cr(VI) reduction is then restricted. The highest removal efficiencies for total chromium are observed when spruce bark (85%) and pine cones (71.8%) are used at pH = 2.17, and when coal (97%) is used at pH = 5; for the other materials the adsorption efficiency is quite low (from 25 to 60%).
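Distinguishing between these models in practice rests on the speciation point raised in [131]: both Cr(VI) (e.g., by the DPC method) and total Cr (e.g., by ICP-AES) must be measured in the equilibrium solution. A minimal mass-balance sketch, with illustrative numbers rather than experimental data, follows.

# Minimal mass-balance sketch for a batch biosorption/reduction experiment.
# Inputs: initial Cr(VI), equilibrium Cr(VI) (DPC method) and equilibrium
# total Cr (ICP-AES). All concentrations in mmol/L; numbers are illustrative.

def chromium_mass_balance(c0_cr6, ce_cr6, ce_total, volume_l, mass_g):
    """Split the fate of chromium; returns mmol per g of solid.

    q_total: Cr retained on the solid (any oxidation state).
    reduced_min: lower bound on Cr(VI) reduced, since Cr(III)
    held on the solid is not visible in the solution analysis.
    """
    q_total = (c0_cr6 - ce_total) * volume_l / mass_g
    ce_cr3 = ce_total - ce_cr6           # dissolved Cr(III) formed by reduction
    reduced_min = ce_cr3 * volume_l / mass_g
    return q_total, reduced_min

q, r = chromium_mass_balance(c0_cr6=1.0, ce_cr6=0.2, ce_total=0.5,
                             volume_l=0.025, mass_g=0.05)
print(f"retained on solid: {q:.3f} mmol/g; reduced (lower bound): {r:.3f} mmol/g")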
Several attempts have been made to identify the active redox centers (responsible for Cr(VI) reduction) and the binding groups (responsible for chromium binding) present at the surface of bio-materials [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF][START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF]. Still, the chemical composition of systems as complicated as biological materials is never fixed and may vary from sample to sample, depending on many factors (origin, method of preparation). Consequently, bioadsorbents are characterized by a low reproducibility of their properties, and the mechanism of chromium binding on (bio)sorbents is not fully understood. For these reasons, biosorption of Cr(VI) is still mainly confined to laboratory studies. To understand the mechanism, a synthetic system reproducing the qualities of bio-adsorbents should be used; this would also help to obtain systems with better sorption capacity and specificity towards the target metal ion.
Application of silica for Cr(VI) removal via the 'adsorption-coupled reduction' mechanism
The pioneering work applying silica-based materials to Cr(VI) sequestration by a reduction-adsorption mechanism was the paper published by Deshpande et al. in 2005 [71]. They introduced a bifunctional organic-inorganic hybrid material, chemically modified with amino and/or thiol groups, for one-step sequestration of Cr(VI). A comparative study of mono- and bifunctionalized materials showed the advantages of the materials modified with both amino and thiol groups. In that work, however, no attention was paid to controlling the distribution of the reductively generated Cr(III) between the solid and liquid phases: chromium was considered to be adsorbed as Cr(III) on the basis of the green colour developed by the treated adsorbent, and the adsorption capacity of the bifunctionalized adsorbent was not calculated.
Another approach, a reductive-precipitation mechanism of Cr(VI) removal involving silica, was described in 2007 [151] and pursued in a subsequent series of works [152, 153, 154]. In the presence of silica gel, Cr(VI) was proposed to be reduced by a zero-valent metal (Fe(0), Zn(0)), with the silica suggested to catalyze the reduction and to assist the subsequent precipitation of the freshly generated Cr(III). This approach of combining a strong inorganic reductant with an advanced OH-bearing adsorbent revealed several disadvantages: the formation of huge amounts of sludge and the simultaneous contamination of the waters by the products of reduction (Zn(II), Fe(III), Cr(III)). Nevertheless, the Fe@SiO2 nanocomposite designed in [154] should be singled out as the most efficient among the materials proposed for Cr(VI) sequestration.
Cr(III) can form complexes with numerous inorganic ligands (I^-, CN^-, SCN^-, OH^-, NO3^-, NO2^-, SO3^2-, SO4^2-, CO3^2-, C2O4^2-, etc.); the ability of these ligands to form complexes decreases in the order: OH^- > C2O4^2- > SO3^2- > CH2COO^- > HCOO^- > SO4^2- > Cl^- > NO3^-. Thiol-disulfide modified silica has also been proposed for Cr(VI) adsorption: as a result, Cr(III) and disulfide groups are formed, with the possibility of detecting chromium in the sorbent phase.
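A simplified overall stoichiometry for this thiol-to-disulfide route can be written by combining standard half-reactions; this balanced form is a reconstruction rather than an equation given in this work, and it neglects the further oxidation of sulfur to sulfonic acid reported below:

$$2\,\mathrm{HCrO_4^-} + 6\,\mathrm{RSH} + 8\,\mathrm{H^+} \longrightarrow 2\,\mathrm{Cr^{3+}} + 3\,\mathrm{RSSR} + 8\,\mathrm{H_2O}$$

On this stoichiometry, three surface thiols are consumed per Cr(VI) reduced, which places an upper bound on the reduction capacity achievable for a given -SH loading.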
Among the articles dedicated to the mechanism of Cr(VI) interaction with thiol-bearing organic compounds, it was found that Cr(VI) reacts stepwise, with the formation of thiochromates as intermediates (see scheme 1.3) [159]. First, Cr(VI) is reduced to Cr(IV); the thiyl radicals then combine to form disulfide bonds [160]. More recent work has shown that sulfonic acids can also be formed [161]. The great difference between the standard redox potentials of the Cr(VI)/Cr(III) couple (E° = 1.47 V) and the S^2-/SO4^2- (E° = 0.303 V) or S^2-/S2^2- (E° = -0.524 V) couples means that the reduction of Cr(VI) by surface sulfur species is thermodynamically highly favourable.
Generally, the interaction of a metal cation with the EDTA ligand can be written as a 1:1 complexation: EDTA can form a complex with Cr(III) with coordination number 6 in a 1:1 ratio. The high stability of these complexes (see Table 1.9) is explained by the presence of six donor functional groups (containing nitrogen and oxygen) in the EDTA molecule. Trivalent chromium forms complexes with EDTA and its derivatives, but only very slowly, and the literature contains differing information about the time of complex formation. A study of Cr(III)-DTPA complex formation [180] mentions that five main types of species may exist in aqueous solution, characterized by the constants 22.05, 28.18, 31.03 and 32.48. One could assume that the distribution diagram for similar processes occurring on the surface of silica gel would look slightly different, with the curves shifted toward lower pH, since the strength of an immobilized acid, and with it the values of the protonation constants, generally changes upon grafting. Elemental analysis of the silica synthesized in this way revealed that the surface layer contained a mixture of ethylenediamine, ethylenediamineacetic acid, ethylenediaminediacetic acid and ethylenediaminetriacetic acid groups. Finally, the technology for synthesizing the ED3A-containing silane was improved and it has become commercially available, so that silica can now be grafted with ED3A groups in a single stage. Although this silane is supplied as a 55-65% aqueous solution of its sodium salt, it was established that the ED3A groups become covalently bound [185, 186, 187].
Reagents and solutions
All reagents were of analytical grade, and solutions were prepared with high-purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. Working solutions were prepared by dilution, and the pH was adjusted with nitric acid. The reagents used included K2Cr2O7, Cr(NO3)3·9H2O, Cu(NO3)2·10H2O, Pb(NO3)2, Co(NO3)2·6H2O, Ni(NO3)2·6H2O and FeCl3·6H2O.
- Stock solutions of Cr(VI) were prepared by dissolving accurately weighed portions of K2Cr2O7 powder in deionized water, from which working solutions were prepared by dilution and pH adjustment with nitric acid.
- Other metal solutions were prepared by dissolving the appropriate analytical-grade reagents.
- Ag(I) solutions were obtained from the nitrate salt (AgNO3). They were prepared by diluting a 0.10 M stock solution (in 0.1 M HNO3) with distilled water and were stored in dark glass. Their concentrations were checked by conductimetric titration against a certified standard solution of NaCl.
- The DPC solution was prepared by dissolving 0.1 g of DPC in 50 mL of ethanol and adding 20 mL of H2SO4 (1:9). The solution obtained was stored for up to 1 month at 3-5 °C.
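The DPC reagent just described is used for the spectrophotometric determination of Cr(VI). A minimal sketch of the usual linear-calibration workflow follows, assuming measurement at 540 nm (the standard wavelength for the Cr(VI)-DPC complex) and using illustrative absorbance values rather than data from this work.

# Minimal sketch: Cr(VI) quantification by the DPC method via a linear
# Beer-Lambert calibration. All numbers below are illustrative.

def linfit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Calibration standards (mg/L Cr(VI)) and absorbances at 540 nm:
conc = [0.0, 0.1, 0.2, 0.4, 0.8]
absb = [0.002, 0.055, 0.108, 0.212, 0.421]
m, b = linfit(conc, absb)

a_sample = 0.150   # measured absorbance of an unknown (illustrative)
print(f"Cr(VI) = {(a_sample - b) / m:.3f} mg/L")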
Preparation of functionalized mesoporous silicas
Synthesis of thiol-functionalized mesoporous silicas
The thiol-functionalized mesoporous silicas were prepared by co-condensation of TEOS and MPTMS at room temperature in the presence of a CTAB template, according to a published procedure [189, 190]. Briefly, 0.9 mole of a TEOS/MPTMS mixture (9:1 molar ratio) was added under stirring to a surfactant solution made of CTAB (0.3 mole), water (50 mole), ethanol (150 mole) and ammonia (10 mole). After precipitation, the medium was left stirring for 2 hours, and the resulting solid was filtered, washed with ethanol and dried under vacuum (< 10^-2 bar) for 24 h. Template extraction was achieved in ethanol/HCl (1 M) under reflux for 24 h. The whole synthetic procedure was carried out under an inert atmosphere (Ar) in order to avoid any oxidation of the thiol groups. The resulting solid was denoted MCM-41-SH; it contained 0.9 mmol g^-1 of -SH groups and was characterized by a well-defined hexagonal mesostructure (powder XRD d100 spacing of 33 Å), a specific surface area of 1598 m^2 g^-1, a total pore volume of 0.76 cm^3 g^-1 and a pore diameter of 20 Å. Another sample was prepared without taking care to prevent oxidation and was used for comparison purposes (denoted MCM-41-SH/SO3H-1, as it contained some sulfonic acid moieties; see Chapter IV for explanation). A further sample, MCM-41-SH/SO3H-2.0, was synthesized according to the aforementioned procedure for the MCM-41-SH/SO3H series.
After oxidation, the material was still mesostructured, albeit with a small lattice contraction (d100 at 30 Å), but its specific surface area fell to 860 m^2 g^-1 and the total pore volume to 0.42 cm^3 g^-1, owing to the larger size of the sulfonic acid groups and possible degradation of the long-range order in the material [191] (the parameters of the functionalized materials are listed in Table 2.1). Materials oxidized to different extents were obtained by adjusting the amount of H2O2 in the oxidizing medium (i.e., 0, 0.19, 0.38, 1.15, 1.9 and 3.6, respectively). These materials are denoted in the text as MCM-41-SH/SO3H-X, where X is the sample number (from 1 to 6).
Pretreatment of silica gel before functionalization
In order to remove possible trace metal impurities from the silica, a pretreatment was carried out as follows: 400 mL of a concentrated H2SO4/HNO3 (9:1) acid mixture was poured onto 20 g of pure silica and stirred with a mechanical agitator overnight. The SiO2 was then filtered, washed with distilled water (up to neutral pH) and dried at 100 °C. The purified silica was stored in a tightly closed vessel. Before silanization it was annealed in a muffle furnace at 450 °C for 3 hours. To avoid absorption of water from the air, the calcined silica was cooled in a vacuum desiccator over P2O5 before being transferred into the reactor.
Silica gel with covalently attached mercaptopropyl groups
Thiol-functionalized silica samples (SiO2-SH) were synthesized according to a grafting procedure described in the literature [192, 193]. Silica gel (5 g) was placed into a flask, which was then filled with toluene (60 mL) and left to blend for several minutes. A selected aliquot of MPTMS solution (2 mL) in toluene was then added to the slurry to reach a final content of 1 mmol of organosilane per 1 g of silica gel.
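As a convenience check on the molar co-condensation recipe given above (TEOS/MPTMS 9:1, 0.9 mol of silanes per 0.3 mol of CTAB), the sketch below converts the molar quantities to masses for a scaled-down batch; the molar masses are standard values, and the 1/100 scale factor is an arbitrary illustration.

# Minimal sketch: scaling the published co-condensation recipe to masses.
# Molar ratios from the text: TEOS:MPTMS = 9:1, 0.9 mol silanes total,
# CTAB 0.3 mol. The 1/100 batch scale is an arbitrary illustration.

MOLAR_MASS = {          # g/mol, standard values
    "TEOS": 208.33,
    "MPTMS": 196.34,
    "CTAB": 364.45,
}
RECIPE_MOL = {"TEOS": 0.9 * 9 / 10, "MPTMS": 0.9 * 1 / 10, "CTAB": 0.3}

scale = 1 / 100
for name, mol in RECIPE_MOL.items():
    grams = mol * scale * MOLAR_MASS[name]
    print(f"{name}: {mol * scale:.4f} mol = {grams:.2f} g")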
Instrumentation
Solution analysis
Solution-phase analysis of total chromium was performed by inductively coupled plasma with atomic emission spectroscopy detection (ICP-AES, Plasma 2000, Perkin-Elmer). The distinction between Cr(VI) and Cr(III) species was made using the conventional diphenylcarbazide (DPC) UV/Vis spectrometric method [195]. Other metal ions were analyzed by atomic absorption spectrometry (AAS) using a flame-atomization "Saturn" apparatus and a propane-butane-air flame.
Characterization of materials
The composition of the adsorbents was determined by elemental analysis (CHONS) using a Thermo Finnigan FlashEA 1112 analyzer. The amounts of thiol and sulfonic acid groups in the MCM-41-SH/SO3H-X materials were also determined by conductimetric titration, performed in an Arrhenius cell. FTIR spectra were measured from self-supporting transparent tablets on a "Nexus-470" spectrometer (Thermo-Nicolet) under nitrogen, at 120 °C. X-ray fluorescence (XRF) spectra were taken with an "ElvaX-Light" spectrometer equipped with an energy-dispersive detector sensitive to radiation of 2-45 keV; the accumulation time for each sample was 300 seconds. The tablets for XRF spectra were made by pressing the sorbent with a filler (poly(vinyl alcohol)) in a 1:1 ratio (0.05 g : 0.05 g) using a 10 mm diameter mold. Diffuse reflectance spectra (DRS) were recorded on a "Specord M-40" spectrometer in the wavenumber region 12000-40000 cm^-1; the precision on the position of the absorption maximum is ± 40 cm^-1.
Equilibration procedures
Static mode
Batch equilibrations were performed at room temperature under magnetic stirring. The solid and liquid phases were separated by filtration and analyzed by the spectroscopic methods described in section 2.4.1. Equilibrated Cr(VI) solutions were analyzed for the content of either Cr(VI) or Cr(III).
Sorption of Cr(III) and Cr(VI) versus pH. The sorption/reduction efficiency for Cr(III) and/or Cr(VI) as a function of pH was studied in suspensions containing selected amounts of adsorbents and constant volumes (see Table 2.2) over the pH range 1-7.
Sorption of Cr(III) and Cr(VI) versus solid-to-solution ratio. The effect of the solid-to-solution ratio on the sorption of Cr(III) and Cr(VI) was studied in suspensions with constant concentrations of Cr species, constant volumes and selected amounts of adsorbents (see Table 2.3).
Sorption of Cr(III) and Cr(VI) versus interaction time. These experiments were conducted in suspensions with constant amounts of sorbents, constant concentrations of Cr species and constant pH (corresponding to optimal sorption conditions); for details see Table 2.4.
Sorption of Cr(III) and Cr(VI) versus the concentration of chromium species in solution. Adsorption isotherms were measured at the optimal pH, with constant volumes and amounts of sorbents; detailed information is listed in Table 2.5.
Sorption of Cr(VI) on SiO2-SH/ED3A versus ionic strength. Equilibrations were performed for 24 hours in 25 mL of solution with a constant SiO2-SH/ED3A amount (0.05 g), a constant Cr(VI) concentration (10^-3 mol L^-1) and pH = 2.5. The ionic strength was adjusted with NaCl (from 0.01 to 1 M).
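For the batch experiments described in this section, the quantities usually reported are the uptake q and the distribution coefficient Kd; a minimal sketch of the computation, with illustrative numbers rather than measured results, follows.

# Minimal sketch: uptake and distribution coefficient for a batch experiment.
# c0, ce in mmol/L; volume in L; mass in g. Example numbers are illustrative.

def uptake(c0: float, ce: float, volume_l: float, mass_g: float) -> float:
    """Adsorbed amount q in mmol per gram of sorbent."""
    return (c0 - ce) * volume_l / mass_g

def distribution_coefficient(c0: float, ce: float,
                             volume_l: float, mass_g: float) -> float:
    """Kd in L/g: ratio of solid-phase to solution-phase concentration."""
    return uptake(c0, ce, volume_l, mass_g) / ce

q = uptake(c0=1.0, ce=0.15, volume_l=0.025, mass_g=0.05)
kd = distribution_coefficient(c0=1.0, ce=0.15, volume_l=0.025, mass_g=0.05)
print(f"q = {q:.3f} mmol/g, Kd = {kd:.2f} L/g")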
Sorption of Cr(VI) versus solution flow rate
Solutions (25 mL) containing 0.2 mmol L⁻¹ of Cr(VI) were passed through the column at different flow rates: 0.1, 0.2, 0.5, 0.7 and 1.0 mL min⁻¹. Aliquots of 2 mL collected at the output of the column were adjusted to 25 mL, and the concentrations of total chromium and Cr(VI) were determined in each sample by ICP-AES and the (DPC)-UV/Vis spectrometric method, respectively.

Study of the breakthrough volume for the column filled with SiO2-SH/ED3A
The experiments were performed at two Cr(VI) concentrations (2 mM and 4 mM), at pH 2.5. 25 mL of such solutions were passed through the column (parameters given in section 2.5.2) at three different flow rates (0.1 mL min⁻¹, 0.4 mL min⁻¹ and 1.0 mL min⁻¹). Aliquots (2 mL) collected at the output of the column were diluted to 25 mL and analyzed by ICP-AES.

Desorption of metals from the column filled with SiO2-SH/ED3A
A series of metals — Cu(II), Pb(II), Co(II), Ni(II), Fe(III) and Cr(VI) — was preadsorbed on SiO2-SH/ED3A from aqueous solutions with C = 80 µmol L⁻¹. Desorption was accomplished by passing hydrochloric acid of variable concentration through the column filled with SiO2-SH/ED3A in its salt (Na⁺) form. In order to achieve a smooth, continuous change in the hydrochloric acid concentration, a system of "communicating vessels" was built (Scheme 2.1: two interconnected cylinders containing H2O and HCl, fed to the column by a peristaltic pump). Two 25 mL cylinders were interconnected with a teflon tube, so that the liquids in both cylinders always stood at the same level. When liquid was drawn off from the first cylinder (at a rate of 1 mL min⁻¹), liquid from the second cylinder flowed through the tube into the first and the levels in both cylinders balanced again. In this way the hydrochloric acid solution (0.1 M) from the second cylinder was gradually added to the first cylinder, which was filled with distilled water and constantly mixed with a magnetic stirrer. As liquid moved from one cylinder to the other, the hydrochloric acid concentration in the first cylinder increased monotonically. This solution with a gradient of hydrochloric acid concentration was passed through the column filled with SiO2-SH/ED3A, and the pH of each 2 mL aliquot at the output of the column was checked. The heavy metals were then preconcentrated under the same conditions as Cr(VI) and eluted by the technique just described. Each aliquot was diluted to 25 mL and analyzed by the spectroscopic methods.

Intensity of the Cr Kα line (XRF spectroscopy) versus the chromium content in the sorbent matrix
Solutions (25 mL) with different concentrations of Cr(VI) were passed through the column at a flow rate of 1 mL min⁻¹. The solutions at the output of the column were analyzed by the (DPC)-UV/Vis spectrometric method. The solid samples were dried in air and tablets were prepared for solid-phase XRF detection of Cr(VI).

Influence of the Cr(VI) solution volume on chromium sorption in the column filled with SiO2-SH/ED3A
Different volumes of solution (from 0.01 to 1 L) with a constant pH of 2.5 and a constant amount of Cr(VI) (10⁻⁵ mole) were passed through the column at a flow rate of 1 mL min⁻¹. After passing through the column, the solution was analyzed by ICP-AES. The solids were taken out of the column and dried in air, and tablets were prepared for XRF analysis by the technique described in section 2.4.2.
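Assuming ideal, instantaneous mixing and equal cylinder cross-sections, a mass balance on the stirred cylinder of the communicating-vessel device described above predicts a linear HCl gradient, c(v) = c_res·v/(2V0). This is an idealized sketch of that behaviour, not data from this work:

```python
# Idealized model of the two-cylinder gradient device: withdrawing at rate Q
# from the stirred cylinder pulls half that flow from the HCl reservoir, and
# the mass balance integrates to a linear gradient in delivered volume.
import numpy as np

C_RES = 0.1    # M, HCl in the reservoir cylinder
V0 = 25.0      # mL, initial volume in each cylinder
Q = 1.0        # mL/min, withdrawal rate to the column (for reference)

def hcl_concentration(delivered_mL: np.ndarray) -> np.ndarray:
    """HCl concentration at the mixer outlet after delivering a given volume."""
    return np.clip(C_RES * delivered_mL / (2.0 * V0), 0.0, C_RES)

v = np.arange(0, 51, 2.0)          # 2 mL aliquots, as collected experimentally
for vol, c in zip(v, hcl_concentration(v)):
    print(f"{vol:5.1f} mL -> {c:.4f} M HCl")
```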
Most commonly, the contents of immobilized thiol or sulfonic acid groups are calculated from the results of elemental analysis of the adsorbent [189,204]. However, this method cannot selectively determine S-containing groups with different degrees of oxidation (Scheme 3.1: stepwise oxidation of surface propylthiol groups, species 0 to 5, by H2O2 in methanol).

The features of a direct acid-base titration were first studied on a material (SiO2-SO3H) bearing a monofunctional layer of sulfonic acid groups [212], so as to be free from any overlap with the deprotonation of thiol groups. As for the neutralization of a strong acid in solution, the conductimetric titration curve of SiO2-SO3H in aqueous suspension is V-shaped (see curve 1, Fig. 3.1). A smooth curve without a sharp inflection, by contrast, is typical of the neutralization of a weak acid with pKa ≥ 6 in solution and was also expected for immobilized mercaptopropyl groups. The concentration of bonded -SO3H groups was therefore determined from the position of the V-type minimum on the titration curve as 0.28 mmol g⁻¹, in good agreement with the concentration calculated for the same material in [212]. The sharp V-type minimum on the acid-base titration curve of SiO2-SO3H, together with the weak interference from other immobilized groups and from the silica matrix, suggests that conductimetry is well suited for such determinations. The MCM-41-SH sample (not treated with H2O2, but stored in air) also exhibits a small V-shaped minimum, corresponding to 30 µmol g⁻¹ of -SO3H (Fig. 3.3, solid square). The origin of the sulfonic acid groups present at the MCM-41-SH surface lies in spontaneous oxidation in air and is discussed hereafter.

Complete transformation of -SH into -SO3H groups was never reached, neither in this work nor in other known sources [191,214]. Indeed, from Table 3.1, where the compositions of the immobilized layer for the different MCM-41-SH/SO3H-X samples are summarized, it can be seen that under the selected conditions the maximum degree of transformation (ωH+) from -SH to -SO3H groups is 65%. The results obtained from conductimetric titration of MCM-41-SH/SO3H-X agree with the data obtained from elemental analysis of the same samples (see Table 3.1): for all MCM-41-SH/SO3H-X samples the overall concentration of -SH and -SO3H groups (ΣCL) determined from conductimetric titration is approximately equal to the concentration of immobilized groups determined from elemental analysis of sulfur. In this regard, all MCM-41-SH/SO3H-X samples synthesized by hydrogen peroxide oxidation are polyfunctional and contain at least two types of immobilized groups: propylthiol and sulfonic acid.

The concentration of -SH groups was determined by back-titration with silver ions, according to the reaction (reconstructed from the garbled source equation):

SiO2(CH2)3SH + Ag⁺ → SiO2(CH2)3SAg + H⁺
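The position of the V-type minimum used above to quantify -SO3H can be located numerically as the intersection of the two linear branches of the conductimetric curve. The following sketch uses synthetic data for illustration; it is not a thesis measurement:

```python
# Locate the V-shaped minimum of a conductimetric titration curve by fitting
# straight lines to the two branches and intersecting them.
import numpy as np

def v_minimum(vol: np.ndarray, cond: np.ndarray, split: int) -> float:
    """Equivalence volume from the intersection of the two linear branches."""
    m1, b1 = np.polyfit(vol[:split], cond[:split], 1)   # descending branch
    m2, b2 = np.polyfit(vol[split:], cond[split:], 1)   # ascending branch
    return (b2 - b1) / (m1 - m2)

# Synthetic example: minimum near 1.2 mL of titrant
vol = np.linspace(0, 3, 31)
cond = np.where(vol < 1.2, 5.0 - 2.0 * vol, 2.6 + 1.5 * (vol - 1.2))
print(f"equivalence volume = {v_minimum(vol, cond, split=12):.2f} mL")
# mmol of -SO3H per gram then follows as C_NaOH * V_eq / m_sample
```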
To explain the increased oxidation stability of the immobilized layer on TSFH, its FTIR, XPS and TPD-MS spectra were studied.

FTIR spectra of MCM-41-SH/SO3H-X
FTIR spectra of the MCM-41-SH/SO3H-X samples show the characteristic stretching (2982, 2929, 2898 and 2858 cm⁻¹) and deformation (1450, 1411 and 1344 cm⁻¹) vibrations of aliphatic CH2 groups. This is evidence of the stability of MCM-41-SH/SO3H-X against the destructive effect of oxidation. A low-intensity S-H stretching band from propylthiol was also registered at 2569 cm⁻¹, for MCM-41-SH only (see Fig. 3.4).

XPS study of MCM-41-SH/SO3H-X
As an alternative approach to confirm the above titration results, we performed XPS analysis of the same series of materials. The XPS spectra (S2p core level) of the MCM-41-SH/SO3H-X series were fitted using 2p1/2 and 2p3/2 doublets separated by a spin-orbit splitting of 1.2 eV, the S(2p1/2) peak area being constrained to one half of the area of the major S(2p3/2) band. For the MCM-41-SH sample, the energy of the major band of the doublet is observed at 163.3 ± 0.1 eV (Fig. 3.5) and is typically attributed to S(II) in mono- or poly-sulfide species [162]. This position remains constant over the whole MCM-41-SH/SO3H-X set, while the peak area decreases as the oxidation degree of the immobilized layer increases. For the oxidized MCM-41-SH/SO3H-X samples, another asymmetric peak at higher energy is observed together with the S(II) doublet (Fig. 3.5). There is, however, no direct evidence for the absolute selectivity of the back-conductimetric titration used in this work to determine the concentration of -SH groups; therefore the methods applied above can neither prove nor refute the presence of disulfide bridges on the TSFH surface.

Study of MCM-41-SH/SO3H-X by TPD-MS
In order to reveal the products of incomplete oxidation of the immobilized thiols, temperature-programmed desorption with mass-spectrometric detection (TPD-MS) was applied. Some results of the TPD-MS study of MCM-41-SH/SO3H-X are presented in Fig. 3.7; the H2S desorption peak shows an evident shoulder at 400 °C. All samples also release H2S2 at 440-450 °C, but the intensity of the thermal desorption of this compound is very low (less than 5% of the H2S intensity). H2S2 can only be generated by the thermal decomposition of disulfides such as 1-3 (Scheme 3.1). From the TPD-MS spectra of MCM-41-SH/SO3H-X it can therefore be suggested that the contribution of disulfide moieties to the composition of the immobilized layer on oxidized MCM-41-SH is not significant. In this context, the hypothesis of disulfide bridge formation as an explanation for the incomplete -SH to -SO3H transformation appears to be incorrect, at least for the MCM-41-SH/SO3H-X materials synthesized by co-condensation, in which the propylthiol groups are incorporated into the matrix during the sol-gel synthesis. In contrast, the intensity of the high-temperature shoulder of the H2S release for MCM-41-SH/SO3H-6, which has only 35% of its -SH groups left, is higher than that of the low-temperature one. For the latter sample a sulfonic acid micro-environment around the residual -SH moieties is more likely than for the former; such a micro-environment can stabilize the -SH fragments through the formation of thiosulfonate, according to Scheme 3.5. A similar effect is observed on the SO2 thermo-desorption curve: for the MCM-41-SH/SO3H-6 sample the high-temperature shoulder is twice as high as the low-temperature one (Fig. 3.7).

As can be seen from the titration curves of the air-exposed samples, a typical minimum for sulfonic acid groups is observed on each curve. The results from all the curves are in close agreement, indicating the presence of 15 µmol g⁻¹ of sulfonate groups and 572 µmol g⁻¹ of thiol groups. Comparison of these curves with that of the sample kept under an argon atmosphere shows that exposure of thiol groups to air causes only insignificant oxidation, which does not increase upon prolonged contact.
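The constrained S2p doublet model used in the XPS analysis above (1.2 eV spin-orbit splitting, 2:1 area ratio, S(VI) shifted ~5 eV above S(II)) can be written compactly. The sketch below assumes Gaussian line shapes, which is a simplification relative to typical XPS practice, and fits synthetic data:

```python
# Constrained S2p fit: each chemical state is a 2p3/2 / 2p1/2 doublet with a
# fixed 1.2 eV splitting and a 2:1 area ratio; two states (S(II), S(VI)).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, center, fwhm):
    sigma = fwhm / 2.3548
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def s2p_doublet(x, area, e_3_2, fwhm):
    # 2p1/2 component: half the 2p3/2 area, shifted +1.2 eV
    return gauss(x, area, e_3_2, fwhm) + gauss(x, area / 2, e_3_2 + 1.2, fwhm)

def model(x, a_SII, a_SVI, fwhm):
    # S(II) thiol near 163.3 eV; S(VI) sulfonate ~5 eV higher
    return s2p_doublet(x, a_SII, 163.3, fwhm) + s2p_doublet(x, a_SVI, 168.3, fwhm)

np.random.seed(0)
x = np.linspace(158, 174, 400)                     # binding energy, eV
y = model(x, 1.0, 0.6, 1.6) + np.random.normal(0, 0.005, x.size)
popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0, 1.5])
print(f"S(VI)/S(II) area ratio = {popt[1] / popt[0]:.2f}")
```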
Conclusions
The proposed single instrumental method is applicable to the simultaneous determination of thiol and sulfonic acid groups present at the surface of silica-based mesoporous organic-inorganic materials containing up to 1 mmol g⁻¹ of propylthiol groups and 15-600 µmol g⁻¹ of sulfonic acid groups. The reliability of the results is confirmed by the consistent agreement of the total concentration of thiol and sulfonic acid groups calculated from conductimetric titration with the data of elemental analysis. It is demonstrated that thiol and sulfonic acid groups are present at the surfaces of all studied samples except the one synthesized under an inert atmosphere. The concentrations of strong acid moieties calculated from the conductimetric titration curves were found to be linearly proportional to the quantity of added oxidant, which proves the transformation of one type of group into the other. At the same time, in agreement with the XPS and TPD-MS data, the proposed titration method shows that the transformation of thiol groups into sulfonic acid moieties remains incomplete (65%), even at a 100-fold molar excess of oxidant. TPD-MS analysis of the MCM-41-SH/SO3H-X samples did not reveal any formation of disulfide bonds during oxidation by hydrogen peroxide. On the contrary, it is suggested that thiol groups surrounded by sulfonic acid groups are stabilized against oxidation by hydrogen peroxide owing to the formation of thiosulfonate bonds. It is also shown that thiol-functionalized materials synthesized in air contain small concentrations of sulfonic acid groups (up to 3% of the total concentration of sulfur-containing groups) and do not undergo further oxidation upon exposure to air at room conditions for 2 months.

Quantitative Cr(III) uptake required about 1.5 g L⁻¹ of adsorbent, whereas lower contents of material in suspension gave rise to lower Cr(III) adsorption (e.g., 53% at 0.5 g L⁻¹, a value nevertheless corresponding to an excess of sulfonate groups with respect to the amount of chromium in solution). These results indicate that rather high contents of adsorbent are necessary to enable the uptake of all Cr(III) species from dilute solutions (the sorption yield being a function of pH) and that the experimentally observed capacities are much lower than the amount of binding sites in the material. On the other hand, the uptake process was very fast, equilibrium being reached within 1 min, indicating minimal mass-transfer resistance, in agreement with other observations made for metal-ion binding to ordered mesoporous silica bearing organo-functional groups [189,191]. Cr(III) sorption on MCM-41-SH/SO3H-6 at pH 2 was further characterized by the corresponding isotherm (inset in Fig. 4.1.B), indicating a maximum adsorption capacity of 32 mg g⁻¹ (0.62 mmol g⁻¹), which is of the same order of magnitude as those reported for other cation exchangers bearing sulfonic acid groups (see Fig. 1.7 in Chapter I) and for solid-phase extractants used for Cr(III) removal (for comparison see Table 1.8 in Chapter I). This value corresponds to an adsorbed quantity exactly equal to the amount of sulfonic acid groups in the material (as determined by titration; more details in Chapter IV), confirming once more the optimal accessibility of the binding sites in ordered mesoporous adsorbents [189,191]. By comparison, and unsurprisingly, no Cr³⁺ binding to the thiol-functionalized mesoporous silica (MCM-41-SH) was detected, at least when the adsorbent was prepared under an inert atmosphere (see curve "a" in Fig. 4.1.A). This indicates that Cr³⁺ is not likely to be complexed by thiol groups (under these conditions) and further confirms negligible binding to the surface silanol groups in acidic medium. Interestingly, some Cr³⁺ binding was observed on the thiol-functionalized mesoporous silica prepared without strict atmosphere control (i.e., in air), but the uptake yield was significantly lower than that observed on MCM-41-SH/SO3H-6 (compare curves "b" and "c" in Fig. 4.1.A).
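The isotherm parameters quoted above (maximum capacity of about 0.62 mmol g⁻¹, i.e. 32 mg g⁻¹ of Cr) are what a standard Langmuir fit would return. A sketch on synthetic data follows; the affinity constant K_L below is an arbitrary illustrative value, not a thesis result:

```python
# Langmuir fit of an adsorption isotherm: q = q_max * K_L * Ce / (1 + K_L * Ce)
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    return q_max * k_l * ce / (1.0 + k_l * ce)

np.random.seed(1)
ce = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])              # mmol/L
q = langmuir(ce, 0.62, 8.0) + np.random.normal(0, 0.01, ce.size)  # mmol/g
(q_max, k_l), _ = curve_fit(langmuir, ce, q, p0=[0.5, 5.0])
print(f"q_max = {q_max:.2f} mmol/g (~{q_max * 52:.0f} mg Cr/g), K_L = {k_l:.1f} L/mmol")
```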
This can be explained by assuming that preparation of the material in the presence of oxygen (especially the template-removal step in acidic medium) led to a bi-functionalized system containing both thiol and sulfonic acid groups. Conversely, the remaining amount of Cr(III) in solution was found to be larger at lower pH (see Fig. 4.3.B). XPS analysis of the adsorbent after contact with an equimolar quantity of Cr(VI) gave a chromium signal too weak to be detected (Fig. 4.4, curve "a"). The experiment was thus repeated using a large excess of Cr(VI) in solution, and the resulting spectrum (curve "b") was now clearly visible, consisting of two main peaks corresponding to the 2p3/2 and 2p1/2 core levels of chromium. The main 2p3/2 peak was located at a binding energy of 577.3 eV, which corresponds to Cr(III) on the basis of Cr2p3/2 values between 577 and 577.5 eV reported for Cr(III)-containing materials [Fang et al., 218, 219]. The Cr2p1/2 signal located at 586.7 eV also supports the presence of Cr(III) [218]. This demonstrates that the Cr(VI) species have indeed been reduced by the thiol groups and that the adsorbed species are really present as Cr(III) on the material. This is advantageous in comparison with the commonly used activated-carbon adsorbents, for which the presence of both Cr(III) and the more toxic Cr(VI) species has been identified on the solid [Fang et al.].

The sorption mechanism, however, appears to be more complicated than a simple ion exchange of Cr³⁺ species at the sulfonic acid centers of the material, as total desorption cannot be achieved (i.e., only 60% desorption in a 2 M HCl solution, as measured after Cr(VI) reduction / Cr(III) sorption on MCM-SH/SO3H-1). No attempt was made to characterize the exact coordination of Cr(III) in the material, but it could involve the participation of silanol groups, as suggested by IR spectroscopic measurements (a decrease in the absorption band of free silanol groups at 3750 cm⁻¹ was observed after reductive adsorption of Cr(VI) on MCM-SH/SO3H). It should also be noted that a small contribution of Cr(VI) cannot be ruled out from the XPS data (possible contribution of a Cr2p3/2 signal around 579.5-580 eV [218,219]), which could be due to some impregnation of Cr(VI) that was not washed out after the treatment in excess Cr(VI).

Influence of solid-to-solution ratio
Sorption yields can be dramatically improved by increasing the solid-to-solution ratio (see inset in Fig. 4.3.A). Again, pH was found to play an important role in the reduction-sorption process (Fig. 4.5), and the trend was similar to that observed with less solid in suspension (Fig. 4.3).

Overall mechanism and optimization of the process
It is clear from the above results that MCM-41-SH/SO3H-1 is likely to reduce Cr(VI) via its thiol groups and to immobilize the Cr(III) species generated by this reaction via its sulfonate groups. It may be surprising, however, that the Cr(III) sorption yields exceeded 50% binding of the reduced chromium. This suggests that the reaction of the thiol groups with Cr(VI) itself increased the amount of sulfonate groups in the material (in agreement with the higher Cr(III) binding capacities observed for adsorbents containing larger amounts of sulfonate groups, see Fig. 4.1.A).
To demonstrate that point, experiments were performed with MCM-41-SH (i.e., a thiol-functionalized mesoporous silica containing no oxidized groups), thus maintaining the reduction ability of the material but not its sequestration properties (no sulfonate groups in the starting material). In that case, Cr(VI) reduction was always quantitative (at pH 2.2) and some of the generated Cr(III) species were indeed sequestered in the material, in a proportion reaching up to about 35%, depending on the solid-to-solution ratio (see part "a" in Fig. 4.6). Meanwhile, XPS measurements made on the solid before and after reaction point to an oxidation of some thiol groups into sulfonic acid moieties (a decrease in the S2p line located at 163.4 eV (-SH) with a concomitant increase of that situated at 168.5 eV (-SO3H), the more pronounced the higher the Cr(VI) concentration in solution; see Fig. 4.7). One can rationalize the above data by the following equation (Eq. 4.1), written here in the balanced form implied by the two-Cr(III)-per-SO3H stoichiometry noted in Chapter V (the equation image is garbled in the source):

SiO2(CH2)3SH + 2 HCrO4⁻ + 8 H⁺ → SiO2(CH2)3SO3H + 2 Cr³⁺ + 5 H2O (4.1)

This also explains why the use of a bi-functionalized material containing both thiol and sulfonic acid groups (i.e., MCM-41-SH/SO3H) gave rise to better performance in the sequestration process (inset in Fig. 4.3.A). In an attempt to optimize the performance of the reduction/sequestration scheme, we therefore evaluated the influence of the SH/SO3H ratio on the sorption yields (see parts "b-d" in Fig. 4.6). As shown, the presence of sulfonic acid groups in the starting material is of major importance for reaching the best performance (sorption yields close to 100% under optimized conditions and at high solid-to-solution ratio), but their amount relative to the thiol group content is not critical, as no distinguishable variation was found between MCM-41-SH/SO3H-X samples oxidized at 15, 20 and 40%, respectively (for more information see Table 3.1). Only small amounts of sulfonate groups are thus necessary to ensure efficient reduction-sorption in the adsorbent. As reduction is equally quantitative in all cases, one can conclude that the limiting step in the total sequestration of chromium is always the binding of the reduced products (i.e., Cr³⁺).

Conclusions
Once optimized (adsorbent content higher than 5 g L⁻¹, pH 2.2), the reduction-sorption process using MCM-41-SH/SO3H-1 is efficient enough to ensure residual chromium concentrations as low as the tolerance level accepted for industrial wastes (0.05-0.1 mg L⁻¹, see Table 1.2 in Chapter I).

The ED3A-functionalized adsorbents were prepared by reaction with the ED3A-silane coupler in water-methanol medium, as described in section 2.3.5. Again, it was possible to adjust the amount of immobilized groups by tuning the ED3A-silane coupler-to-silica ratio (see Fig. 5.1). A maximum ED3A content of ca. 0.7 mmol g⁻¹ was achieved. When using the SiO2-SH material instead of bare silica, the amount of immobilized ED3A groups was lower, owing to the SH groups occupying a significant portion of the silica surface. This is confirmed by chemical analysis, indicating the presence in the final bi-functional adsorbent (SiO2-SH/ED3A) of SH groups (0.38 ± 0.02 mmol g⁻¹ from elemental analysis; 0.40 ± 0.03 mmol g⁻¹ from silver titration) and ED3A groups (0.39 ± 0.02 mmol g⁻¹, from elemental analysis).
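Tuning the coupler-to-silica ratio, as described above, amounts to a one-line dose calculation. The sketch below is hypothetical (the stock concentration is invented for illustration and is not a thesis value):

```python
# Hypothetical dose calculation for silane grafting: volume of coupler
# solution needed to deliver a target loading (mmol of groups per g silica).

def coupler_volume_mL(target_mmol_per_g: float, silica_g: float,
                      coupler_conc_mmol_per_mL: float) -> float:
    """Volume of coupler solution delivering the target functional loading."""
    return target_mmol_per_g * silica_g / coupler_conc_mmol_per_mL

# Example: 0.4 mmol/g of ED3A-silane on 5 g silica from a 0.5 mmol/mL stock
print(f"{coupler_volume_mL(0.4, 5.0, 0.5):.1f} mL of coupler solution")  # 4.0 mL
```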
The presence and integrity of the organo-functional groups were further checked by FTIR (see Fig. 5.2). The spectra of the thiol-bearing samples show the C-H stretching vibrations of the propyl chains at 2855 and 2960 cm⁻¹ and a weak vibration corresponding to the -SH group at 2578 cm⁻¹ [221]. The ED3A groups can be identified via the vibrations of their carboxylate/carboxylic acid moieties, leading to a band at 1728 cm⁻¹ (COOH) and two others (COO⁻) at 1631 cm⁻¹ (the latter superimposed on that of weakly physisorbed water [222]) and 1405 cm⁻¹, in agreement with previous observations [223], whereas the characteristic band of ED3A at 1332 cm⁻¹ [224] was almost invisible because it lies too close to the huge signal of the siloxane moieties in the 1000-1300 cm⁻¹ range [221].

For Cr(III) sorption on SiO2-ED3A, the accumulation process is very slow, requiring several hundred hours to reach significant sorption yields (see curve a in Fig. 5.4) [217]. This is explained by the kinetic inertness of Cr(H2O)6³⁺ ions with respect to ligand-exchange reactions (see section 1.4.2), which constitutes the rate-determining step in complex formation between Cr(III) and ethylenediaminetriacetic acid. On the other hand, no measurable Cr(III) uptake was observed up to pH 4 using the SiO2-SH adsorbent, confirming the absence of any interaction between thiol groups and Cr(III) species (discussed in section 3.1).

Cr(VI) reduction-sorption on SiO2-SH/ED3A
While no interaction between the SiO2-ED3A material and Cr(VI) species was observed, Figure 5.3 (part b) reveals that using the bi-functionalized SiO2-SH/ED3A adsorbent led to significant chromium uptake, especially between pH 1 and 3. At this stage it is difficult to distinguish unambiguously between the two immobilization processes, but several features indicate that the ED3A chelate plays an important role. First, the stoichiometry of the redox reaction (Eq. 5.1) shows that two Cr(III) species are formed for each SO3H group generated, whereas sorption yields as high as 90% were observed (Fig. 5.3), demonstrating that ion exchange (Eq. 5.2b) cannot be the only process explaining Cr(III) immobilization and that complex formation with ED3A must therefore occur. Secondly, the pH range for effective reduction-sorption using SiO2-SH/ED3A (i.e., pH 1-3, see Fig. 5.3) is consistent with a strong binding of the generated Cr(III) to the ED3A chelates (which are indeed present in the material at a content of 0.4 mmol g⁻¹) via complex formation (Eq. 5.2a), followed by a weaker binding to the generated sulfonic acid species via ion exchange/electrostatic interactions (Eq. 5.2b). Only this second (weak-interaction) binding process was likely to occur with the mono-functionalized SiO2-SH material. These results also suggest the formation of a 1:1 complex between Cr(III) and ED3A on silica, consistent with the stoichiometry of the corresponding Cr(III)-HEDTA complex in solution. The characteristic blue/violet colour of the Cr(III)-ED3A complex can be distinguished on SiO2-SH/ED3A, its intensity increasing with the chromium loading. This is evidenced more quantitatively by UV-Vis diffuse reflectance spectra (see Fig. 5.7), in which the main peak at 565 nm increases linearly up to a chromium loading of 0.4 mmol g⁻¹ (corresponding to the content of ED3A groups in the material) and then tends to level off.
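Because the 565 nm band grows linearly up to the ED3A content (0.4 mmol g⁻¹), a simple calibration converts band intensity into chromium loading. The slope below is an arbitrary placeholder, not a value fitted from the Fig. 5.7 data:

```python
# Sketch of quantifying adsorbed chromium from the 565 nm DRS band,
# valid only within the reported linear range (up to ~0.4 mmol/g).

SLOPE = 2.5          # band intensity (arb. units) per (mmol/g) - illustrative
LINEAR_LIMIT = 0.4   # mmol/g, saturation of the ED3A sites

def cr_loading_from_drs(band_intensity: float) -> float:
    """Cr loading (mmol/g) from the 565 nm band, within the linear range."""
    loading = band_intensity / SLOPE
    if loading > LINEAR_LIMIT:
        raise ValueError("Beyond linear range: ED3A sites saturated")
    return loading

print(f"{cr_loading_from_drs(0.75):.2f} mmol/g")  # 0.30 mmol/g
```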
Column sorption experiments
The potential of SiO2-SH/ED3A for flow-through treatment was evaluated in column experiments performed at two Cr(VI) concentrations (2 mM and 4 mM, pH 2.5) and three flow rates (0.1, 0.4 and 1.0 mL min⁻¹; see Fig. 5.9). Several conclusions can be drawn from these results. First, while the maximum sorption yields are independent of the solution flow rate (in the 0.1-1.0 mL min⁻¹ range), this parameter affects the overall speed of the reduction-sorption process, more markedly at 4 mM Cr(VI) than at 2 mM. This is evidenced by the breakthrough data (Fig. 5.9.A), in which the S-shaped curves were better defined at lower flow rates, as expected from the longer contact times for reaching steady state. It is also evident from the variations in adsorbent capacity (Fig. 5.9.B), showing that higher flow rates required larger solution volumes to fill the column. Secondly, the maximum uptake capacity was ca. 0.37 mmol g⁻¹ (i.e., near the content of ED3A groups in the material), suggesting that only the strongly chelating ED3A groups are able to durably retain the Cr(III) species reduced by the thiol groups. Under such dynamic conditions, the lability of the SiO2-SO3⁻,Cr³⁺ ion pair does not allow durable immobilization of Cr(III) species via electrostatic interactions with the sulfonic acid moieties. This confers a definite advantage on the SiO2-SH/ED3A adsorbent over the previously reported SiO2-SH/SO3H one, which cannot be used in a column (no measurable Cr(III) retention). Thirdly, under optimal conditions (i.e., low flow rates), there is an inverse relationship between the breakthrough volume and the chromium solution concentration (i.e., 0.02 L for [Cr(VI)] = 2 mM and 0.01 L for [Cr(VI)] = 4 mM, see Fig. 5.9.A), suggesting that the chromium concentration does not affect the kinetics of the reduction-sorption process.

To establish the optimal volume for effective preconcentration of Cr(VI), different volumes of solution (10-1000 mL) containing a constant amount of Cr(VI) (10 µmole) were passed through the column filled with SiO2-SH/ED3A. As shown in Figure 5.10 (curve 2), the sorption efficiency decreases by 15% as the volume increases to 100 mL. On diluting the solution up to 500 mL the sorption efficiency remains constant (71-75%), but it drops sharply (by 10%) when the volume of Cr(VI) solution passed through the column reaches 1000 mL. The efficiency of Cr(VI) reduction (curve 1, Fig. 5.10) exceeds 90% when low volumes (10-25 mL) are passed through the column and decreases to 80% when 1000 mL are used. Apparently, dilution of the Cr(VI) solution has little effect on the reduction ability of the SiO2-SH/ED3A column, but the sorption efficiency for the reductively generated Cr(III) is somewhat reduced when the solution is diluted to 500 mL and is rather low (56%) at a dilution of 1000 mL.

It should also be mentioned that the intensity of the Cr Kα lines (XRF) of chromium adsorbed on the SiO2-SH/ED3A surface from different volumes varies only slightly for solutions diluted up to 250 mL (see Figure 5.11). The signal intensities obtained after preconcentration of Cr(VI) from 500 and 1000 mL differ by 500 conventional units, which cannot be regarded as a method error. Thus, under the conditions of the described experiment (4-fold excess of ED3A groups over the amount of chromium, flow rate 1 mL min⁻¹, column diameter 6 mm and m(SiO2-SH/ED3A) = 100 mg), the efficiency of chromium adsorption is sufficient when up to 250 mL of Cr(VI) solution are passed. This makes it possible to apply columns filled with SiO2-SH/ED3A for the adsorption of toxic Cr(VI) from dilute solutions.

Influence of the presence of concomitant foreign species
The effect of high ionic strength on the adsorbent capacity was first considered under static conditions, working in excess Cr(VI): a decrease in the reduction-sorption capacity by about 40% was observed at high ionic strength (>0.5 M, see Fig. 5.12). At the same time, the distribution coefficients (Kd values) decreased by a factor of about 2 (see inset in Fig. 5.12) but remained rather high even at high ionic strength (e.g., 435 mL g⁻¹ in the presence of 1 M NaCl). This behaviour can be rationalized by the reduction-sorption mechanism discussed above: only the Cr(III) species weakly bonded to the -SO3⁻ moieties contribute to the loss in capacity (i.e., from 0.69 to 0.43 mmol g⁻¹). This confirms again the major importance of the strong ED3A chelates in maintaining the immobilization properties of the bi-functional adsorbent.
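The dynamic capacity of ~0.37 mmol g⁻¹ quoted above is, operationally, the area between the feed and effluent concentration curves of a breakthrough experiment. A sketch with a synthetic S-shaped curve (not thesis data) illustrates the integration:

```python
# Dynamic capacity from breakthrough data (cf. Fig. 5.9): integrate (C0 - C)
# over the delivered volume with the trapezoidal rule, then divide by bed mass.
import numpy as np

def column_capacity(vol_mL, c_out_mM, c_feed_mM, bed_mass_g):
    """Adsorbed amount (mmol/g) from a breakthrough curve."""
    v = np.asarray(vol_mL, dtype=float)
    deficit = c_feed_mM - np.asarray(c_out_mM, dtype=float)      # mmol/L
    removed_mmol = np.sum(0.5 * (deficit[1:] + deficit[:-1]) * np.diff(v)) / 1000.0
    return removed_mmol / bed_mass_g

# Synthetic S-shaped breakthrough for a 0.1 g bed fed with 2 mM Cr(VI);
# the result (~0.36 mmol/g) is of the order of the reported capacity.
v = np.linspace(0, 40, 81)                        # mL
c_out = 2.0 / (1.0 + np.exp(-(v - 18.0) / 2.0))   # mM
print(f"capacity = {column_capacity(v, c_out, 2.0, 0.1):.2f} mmol/g")
```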
Because ED3A-functionalized materials are likely to adsorb other metal ions, we then investigated the possible effect of such species, which can be present along with chromium in typical acidic wastewaters (e.g., from electroplating). Figure 5.13.A shows that under purely thermodynamic competition conditions (i.e., excess adsorbent over solute), the presence of foreign species (Fe(III), Cu(II), Ni(II)) at concentrations ranging from half to twice that of chromium did not dramatically affect the sorption yields (less than 20% decrease when the interfering species were in two-fold excess over Cr(VI)), even though these species did bind to the material (Fig. 5.13.B), albeit at ca. 10 times lower contents than chromium. More importantly, pH was found to play a major role in the selectivity series.

Conclusion
Adequately engineered bi-functional silica adsorbents, SiO2-SH/ED3A, bearing one component designed to reduce toxic Cr(VI) (thiol groups in this case) and another specifically selected to immobilize the generated, less toxic Cr(III) species (i.e., ethylenediaminetriacetate moieties), have proven to offer good performance for the efficient removal of chromium from aqueous medium according to a reduction-sorption mechanism. The produced Cr(III) species can be sequestered in the material either via complexation with the ethylenediaminetriacetate ligand (ED3A) or via ion exchange/electrostatic interactions with the sulfonic acid moieties generated concomitantly with Cr(VI) reduction by the thiol groups. The strong chelating properties of ED3A towards Cr(III), however, give rise to a significant improvement (in terms of operational pH range, durable adsorbent capacity and low residual chromium concentration in solution) in comparison with the former SiO2-SH/SO3H material, which suffers from the weak interactions between Cr(III) and the sulfonic acid groups. This was notably exploited here in dynamic mode (column experiments), for which the presence of ED3A groups was essential to immobilize the reduced Cr(III) species, which were not retained by "simple" electrostatic interactions with the sulfonate moieties. Cr(VI) reduction and subsequent Cr(III) uptake were also possible in the presence of other metal species, and the immobilized Cr(III) was found to be more stable at low pH values than the other metal ions. Finally, optical (UV-Vis) and spectroscopic (XRF) techniques can be used to quantify the adsorbed species and to distinguish between the strong ED3A-Cr³⁺ complex and the weaker SO3⁻,Cr³⁺ ion pair on the solid.

General conclusions
This work proposes the bi-functionalized silica-based adsorbents MCM-SH,SO3H-X and SiO2-SH/ED3A as effective adsorbents for the selective removal of Cr(VI). For this purpose, methods for the synthesis of the corresponding mono- and bi-functional materials were developed. In order to distinguish between the concentrations of the mercaptopropyl and propylsulfonic acid groups simultaneously present at the surface of MCM-SH,SO3H-X, a conductimetric titration method was proposed. It allows the detection of sulfonic acid groups at concentrations as low as 15-600 µmol g⁻¹, which is suitable for the surveillance of thiol-containing adsorbents. Using the conductimetric method, it was shown that prolonged exposure of a thiol-containing adsorbent to an oxic atmosphere does not cause the formation of sulfonic acid groups. The compositions of the MCM-SH,SO3H-X layers were refined using spectroscopic methods (TPD-MS, IR and XPS), which confirmed their bifunctionality and the various -SH to -SO3H ratios at their surfaces.
The bi-functionalized adsorbents were found to operate much more effectively than the corresponding mono-functionalized silica-based materials. It was shown that the presence of minor amounts (0.2 mmol g⁻¹) of binding groups (e.g. SO3H) improves the sorption efficiency up to 100%. Although the adsorption capacities (20-30 mg g⁻¹) are not greater than those of conventional sorbents (yet of the same order of magnitude), the proposed bi-functionalized adsorbents both reduce toxic Cr(VI) (thiol groups) and selectively immobilize the generated, less toxic Cr(III). In the case of MCM-SH,SO3H-X the binding of Cr(III) is achieved by ion exchange with the SO3H groups, while at the surface of SiO2-SH/ED3A the binding occurs mainly through complexation with the ED3A groups (if the quantity of Cr(VI) exceeds the molar ratio with ED3A, the excess Cr(III) is bound by ion exchange with the generated SO3H groups). Owing to the reductive-sorption mechanism of Cr(VI) sequestration, the selectivity of the bi-functionalized adsorbents is higher than that of conventional Cr(VI) adsorbents: it was shown that high concentrations of either competing anions (high ionic strength being typical of plating-industry wastewaters) or heavy metals do not impair the efficiency of Cr(VI) removal. The selectivity of Cr(VI) extraction rests on the kinetic inertness of the Cr(III)-ED3A complex under the applied conditions (pH 1-3). Both bi-functionalized adsorbents are advantageous in comparison with anion exchangers, which can suffer from competition with anions such as sulfate, nitrate or phosphate; moreover, they operate under conditions typical of industrial wastewaters (pH 1-3), which facilitates the treatment process. Experiments in dynamic mode revealed the possibility of using SiO2-SH/ED3A for the column treatment of Cr(VI)-contaminated waters: such a column can achieve complete sequestration of Cr(VI) down to residual concentrations as low as the tolerance level accepted for industrial wastes (0.05-0.1 mg L⁻¹). Preconcentration of Cr(VI) at the surface of SiO2-SH/ED3A, in combination with X-ray fluorescence analysis, can also serve as the basis of a test method for Cr(VI) detection; the appearance of the characteristic colour of the Cr(III)-ED3A complex in the SiO2-SH/ED3A phase after contact with Cr(VI)-containing solutions allows its presence to be detected visually.

(Translated from the French chapter summary, which begins mid-sentence in the source:) "...the removal of Cr(VI) from wastewaters using the bi-functionalized adsorbent SiO2-SH/ED3A, and discusses the mechanism of chromium binding on this adsorbent via two types of interactions (electrostatic with -SO3H, or through complex formation with -ED3A); the possibility of using SiO2-SH/ED3A for solid-phase Cr(VI) analysis by X-ray fluorescence spectrometry is also demonstrated."

Fig. 1.1. The Frost diagram for chromium (Cr) species in acidic solution [2].
Fig. 1.2. A Pourbaix diagram for the Cr species dominating in aerated aqueous solutions in the absence of any complexing agents other than OH⁻ and H2O.

Cr(OH)3 exhibits amphoteric properties, so an increase in pH leads to the formation of the soluble tetrahydroxo complex Cr(OH)4⁻. The mole-ratio distribution of Cr(III) ions in aqueous solution, calculated for a 5·10⁻⁴ M solution over the pH range 2-12, is shown in Fig. 1.3.

Fig. 1.3. Mole ratios of Cr(III) species in aqueous solution with a total Cr(III) concentration of 5·10⁻⁴ M (calculated with the help of ScQuery v.5.34).
Fig. 1.4. Distribution of Cr(VI) species in aqueous solutions versus pH (C(Cr(VI)) = 10⁻⁶ M).

(Scheme 1.1, b.) In the grafting processes noted above, silylation reagents were typically added under dry conditions to avoid hydrolysis and condensation away from the pore walls; under anhydrous conditions the hydrophilic portion of the silica surface is preserved during silylation.

Fig. 1.7. Isotherm of Cr(III) sorption on Lewatit S 100 (C0(Cr(III)) = 0.1-1 mmol L⁻¹; m(adsorbent) = 0.5 g; V(solution) = 30 mL; pH 3.8; contact time 150 min).

Since chromium is able to form complexes only in the trivalent state, Cr(VI) should be reduced before contact with a chelating adsorbent. The methods of chemical reduction were listed in Part 1.3; besides these, photocatalytic reduction on the organic-inorganic hybrid material ZnS(en)0.5 (Zn(II) sulfide modified with ethylenediamine, EN) [122] and γ-ray irradiation in the presence of TiO2, Al2O3 or SiO2 [123] have been used for Cr(VI) transformation. The resin shown in Fig. 1.8 is a specific ion exchanger based on a chelating adsorbent: a crosslinked polystyrene-divinylbenzene matrix covalently bearing bis-picolylamine functional groups, whose nitrogen atoms donate electron pairs to the Lewis-acid Cu(II) cation. The copper ions are coordinated in such a way that complete neutralization of their positive charge does not occur.

Fig. 1.10. Proposed mechanism of Cr(VI) biosorption by natural biomaterials: Cr(VI) is removed from an aqueous system through both direct (I) and indirect (II) reduction mechanisms.
1. Anionic adsorption: negatively charged chromium species bind to positively charged functional groups on the surface of biosorbents. This mechanism is based on the observation that Cr(VI) adsorption increases at low pH and decreases at high pH: at low pH the functional groups of the biosorbent become protonated and readily attract negatively charged chromium, whereas at high pH deprotonation occurs and the negatively charged groups repel the chromium anions.
2. Adsorption-coupled reduction [131,142,143]: reduction followed by cationic adsorption was first proposed by Volesky for algal Sargassum biomass [144] and was popularized by Prabhakaran on the basis of experiments [145]. According to this mechanism, Cr(VI) is totally reduced to Cr(III) by the biomass in the presence of acid, and part of the Cr(III) is then adsorbed onto the biomass.

Fig. 1.11. Sorption efficiency of Cu(II) (1), Bi(II) (2), Cd(II) (3), Co(II) (4), Ni(II) (5) and Zn(II) (6) by silica gel chemically modified with mercaptopropyl groups, versus HCl concentration and pH (C(Me) = 5 mg L⁻¹, V = 10 mL, mc = 0.1 g, t = 5 min).
Fig. 1.12. UV-Vis spectra of chromium solutions: Cr(III), 1000 mg L⁻¹; CrO4²⁻, 25 mg L⁻¹; Cr2O7²⁻, 50 mg L⁻¹; Cr(III)-EDTA, 250 mg L⁻¹.

In the pH range from 4 to 7, together with the main CrY²⁻ form (where Y is DTPA), the monoprotonated CrHY⁻ species is present. At pH 3 the complex carries two protons. At pH above 7 the monohydrate complex predominates, the metal being coordinated with one water molecule (Fig. 1.14). The Cr(III) complexes with DTPA (CrY²⁻, CrHY⁻, CrH2Y, CrH3Y⁺) are characterized by the corresponding stability constants.

Fig. 1.13. The structure of the Cr(III)-DTPA complex.

Conductivities of suspensions were measured with an AC conductivity bridge R-5058 at an operating frequency of 1000 Hz at room temperature. For acid-base conductimetric titration, 0.011-0.024 M NaOH solutions were used to titrate the residual and bulk concentrations of sulfonic acid groups, respectively. A batch of each sample (~0.15 g) was previously soaked in 25 mL of deionised water, and titration of the equilibrated suspension was performed within 24 h. For back conductimetric titration, a 0.1 M NaCl solution was used to titrate the excess of silver ions: the sample (~0.15 g) was equilibrated with a mixture of 10 mL of 0.04 M AgNO3 and 20 mL of deionised water, the suspension was kept away from light for 12 h, then filtered, and an aliquot of the equilibrated solution was titrated in the Arrhenius cell. The pH of the silver nitrate mixture, checked before and after contact with the samples, was always 4-5. Solid desiccation under reduced pressure (over P2O5) preceded all titration experiments.

For XPS, powders were pressed at room temperature onto the adhesive side of a copper electrical tape. The binding energies were corrected against the standard value of the C1s line of contaminants at 284.6 eV. Narrow-scan spectra were used to obtain chemical-state information for sulfur and chromium.

Thermogravimetric studies were performed on a Q-1500 D derivatograph of the Paulik-Paulik-Erdey system (Hungary) at atmospheric pressure, with open platinum crucibles and Al2O3 as the reference. The heating rate was 10 °C min⁻¹ between 20 and 800 °C.
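Distribution diagrams such as Figs. 1.3 and 1.4 above follow from simple mass-balance expressions. The sketch below reproduces the mononuclear Cr(VI) speciation using approximate literature constants (pKa1 ≈ 0.7 for H2CrO4, pKa2 ≈ 6.5 for HCrO4⁻ — assumptions, not values from this work); dimerization to Cr2O7²⁻ is neglected, which is reasonable at the dilute 10⁻⁶ M level of Fig. 1.4:

```python
# Mononuclear Cr(VI) speciation versus pH from the two stepwise acidity
# constants of chromic acid (approximate literature values).
PKA1, PKA2 = 0.7, 6.5

def cr6_fractions(pH: float) -> dict:
    h = 10.0 ** (-pH)
    ka1, ka2 = 10.0 ** (-PKA1), 10.0 ** (-PKA2)
    denom = h * h + h * ka1 + ka1 * ka2
    return {"H2CrO4": h * h / denom,
            "HCrO4-": h * ka1 / denom,
            "CrO4 2-": ka1 * ka2 / denom}

# At the working pH of 2.5, HCrO4- dominates (~98 %), consistent with Fig. 1.4
for pH in (1, 2.5, 5, 7, 9):
    print(pH, {k: f"{v:.2f}" for k, v in cr6_fractions(pH).items()})
```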
Interference studies were performed over 24 hours by adding metal ions (Fe(III), Cu(II), Ni(II)) at concentrations ranging from 0.004 to 0.48 mmol L⁻¹ to suspensions made of 0.05 g of adsorbent in 25 mL of solution containing 0.23 mmol L⁻¹ of Cr(VI).

(Translated from the French chapter summary:) This chapter is devoted to the quantitative determination, by conductimetric titration, of the thiol and sulfonic acid groups present at the surface of organically modified mesoporous silicas (MCM-SH,SO3H) with different thiol/sulfonic acid ratios. To this end, a set of MCM-41-type samples containing 1 mmol g⁻¹ of mercaptopropyl groups (MCM-SH) was oxidized with selected amounts of hydrogen peroxide so as to produce materials with different SH/SO3H ratios. Conductimetric titration was proposed as a technique capable of distinguishing the respective contents of the thiol and sulfonic acid groups simultaneously present at the material surface; this method allows the quantitative estimation of sulfonic acid at contents below 1 wt%. The concentrations of strong acids (SO3H), also calculated from the conductimetric titration curves, proved to be directly proportional to the amount of added oxidant. The titration data were confirmed by XPS analyses, demonstrating a good correlation between the results obtained by the two techniques. On this basis, and in agreement with the literature, the incomplete transformation of the thiol groups into sulfonic acid (65%), even with a 100-fold molar excess of oxidant, was confirmed. Temperature-programmed desorption mass spectrometry (TPD-MS) was also used to complete the study: it showed that oxidation of MCM-SH by increasing concentrations of hydrogen peroxide does not lead to the formation of appreciable amounts of disulfide groups. The formation of stable poly-sulfonate species was suggested as one possible reason for the incomplete oxidation of the thiol groups at the material surface.

Mesoporous silicas functionalized with thiol groups (an obvious example, discussed in Chapter III, being the surfactant-templated MCM-41-SH), which are generally used to remove heavy metals [164,196], often serve as precursors for the preparation of alkylsulfonic acid functionalized silicas [197,198] (MCM-41-SH/SO3H-X in Chapters II and III, where X is the sample number, from 1 to 6). The extensive use of both thiol-functionalized materials and their oxidized derivatives has prompted a strong interest in the composition of their immobilized layer. It was shown that oxidation of immobilized propylthiol species with H2O2 generates a polyfunctional layer [199,200] consisting of both unreacted propylthiol groups and S-containing groups in different oxidation states [201] (as shown in Scheme 3.1). With the help of 13C CP/MAS NMR [201] and Raman, IR and XPS methods [202] it was demonstrated that the functional layer of materials such as MCM-41-SH/SO3H-X consists mainly of propylthiol (type 0, Scheme 3.1) and propylsulfonate (type 5, Scheme 3.1) groups. On the other hand, it has been demonstrated that the surface layer of MCM-41-SH itself can also be polyfunctional.
For instance, during the sol-gel synthesis of MCM-41-SH in alkaline aqueous solution, atmospheric oxygen can cause oxidation of the surface alkylthiol groups to disulfide moieties (type 1, Scheme 3.1) [203].

At the same time, the conductivity of an aqueous suspension of an unmodified, structurally ordered silica-based material of the SBA type (see curve 2, Fig. 3.1) increases, over the selected range, in linear proportion to the quantity of added base. Like the silanol groups, the immobilized propylthiol groups do not perturb this smooth increase of the suspension conductivity [162]. Indeed, acid-base titration of MCM-41-SH shows no inflection in the conductimetric curve, similar to the titration of SBA (see later discussion).

Fig. 3.1. Conductimetric titration curves of SiO2 with covalently attached ethylsulfonic acid groups (1) and of an unmodified, structurally ordered silica-based material (SBA type) (2).
Fig. 3.2. Direct conductimetric titration curves of differently oxidized samples MCM-41-SH/SO3H-X, where (1) X = 1, (2) X = 2, (3) X = 3, (4) X = 4, (5) X = 5.
Fig. 3.4. Fragments of the FTIR spectra of MCM-41-SH (a), MCM-41-SH/SO3H-4 (b) and MCM-41-SH/SO3H-6 (c).
Fig. 3.5. XPS spectra (S2p core level) of MCM-41-SH (1), MCM-41-SH/SO3H-3 (2), MCM-41-SH/SO3H-4 (3), MCM-41-SH/SO3H-5 (4) and MCM-41-SH/SO3H-6 (5).

The intensity of this peak grows with increasing oxidation degree of the immobilized layer (Fig. 3.5). The latter peak is shifted by 5 eV to higher binding energy with respect to the S(II) position and is attributed to S(VI) species in alkylsulfonic fragments [162]. Consequently, the XPS data confirm the formation of alkylsulfonic acid species on the MCM-41-SH/SO3H-X surface. In line with the results of conductimetric titration, the XPS data indicate incomplete transformation of propylthiol into propylsulfonic groups under H2O2 treatment: the spectrum of the MCM-41-SH/SO3H-6 sample, obtained at the highest H2O2 concentration, is still characterized by both S(II) and S(VI) bands (Fig. 3.5). The XPS data cannot be used directly for the quantitative determination of the concentrations of bonded species, because of the unavailability of standards (and because this technique analyzes only the outermost surface of the materials), but the intensity ratio (h) of the different XPS peaks correlates with the concentration ratio of the corresponding fragments. Table 3.2 summarizes the results obtained from the XPS spectra.

Fig. 3.6. Correlation of the -SO3H to -SH ratio, calculated from the conductimetric titration curves (hollow circles) and from the heights of the XPS peaks (solid squares), with the concentration of H2O2.
Fig. 3.7. TPD-MS study of MCM-41-SH/SO3H-1 (a), MCM-41-SH/SO3H-3 (b) and MCM-41-SH/SO3H-6 (c).

Under thermal treatment the MCM-41-SH/SO3H-X samples release H2S, H2S2, SO2 and CH3-CH=CH2. Hydrogen sulfide is observed for all MCM-41-SH/SO3H-X samples at rather high temperature (337 °C, Fig. 3.7); this peak is asymmetric, with an evident shoulder at 400 °C. The predominance of the low-temperature H2S release for the lightly oxidized MCM-41-SH/SO3H-X (X = 1 and 3) assigns the peak observed at 320-330 °C to the thermal decomposition of thiol groups, in accordance with Scheme 3.2. H2S2 is generated by the decomposition of dipropyl disulfides (type 1, Scheme 3.1), according to Scheme 3.3.
The SO2 release observed from MCM-41-SH/SO3H-X at low temperature (260 °C) is assigned to the thermal decomposition of propylsulfonic acid groups according to Scheme 3.4. As the high-temperature peaks of hydrogen sulfide and sulfur dioxide release occur at the same temperature (400 °C), the simultaneous decomposition of a surface fragment yielding both gases can be suggested; this fragment is presented in Scheme 3.1 as the thiosulfonate of formula 4. The formation and thermal decomposition of the immobilized thiosulfonate under TPD-MS conditions are presented in Scheme 3.5.

Fig. 4.1. (A) Variation of Cr(III) sorption yields as a function of pH, using various adsorbents: MCM-41-SH (a), MCM-41-SH/SO3H-1 (b) and MCM-41-SH/SO3H-6 (c, d); experimental conditions: solid-to-solution ratios of 0.8 g L⁻¹ (a-c) or 6 g L⁻¹ (d); starting Cr(III) concentration in solution equal to 50 µM. (B) Variation of Cr(III) sorption yields on MCM-41-SH/SO3H-6 at pH 2 as a function of the solid-to-solution ratio (inset: adsorption isotherm obtained under the same conditions).
Fig. 4.2. XPS spectra (S2p core level) of MCM-41-SH (a), MCM-41-SH/SO3H-1 (b) and MCM-41-SH/SO3H-6 (c).
In fact, the variation in sorption yields for Cr(VI) on MCM-41-SH/SO 3 H-1 (inset in Fig.4.3.A) follows a trend rather similar as that for Cr(III) sorption on most fully oxidized sample MCM-41-SH/SO 3 H-6 (Fig.4.1.B), suggesting that the limiting step would be the immobilization of Cr(III) species arising from Cr(VI) reduction and not the redox transformation of Cr(VI) by thiol groups. This is also sustained by the fact that complete Cr(VI) reduction was already achieved at a solid-to-solution ratio of 1 g L -1 , although only 64% of the generated Cr(III) species were bonded to the adsorbent in these conditions. 4. 3 ) 83 !Fig. 4 . 5 . 38345 Fig. 4.5. Variation as a function of pH of (A) total chromium sorption yield and (B) equilibrium concentration of Cr(VI) and Cr(III) in solution after reaction with MCM-41-SH/SO 3 H-1); experimental conditions: solid-to-solution ratios of 7.5 g L -1 ; starting Cr(VI) concentration in solution equal to 50 µM. allowing MCM-41-SH/SO 3 H-1 to react with Cr(VI) than directly with Cr(III) (compare data on Fig. 4.3.A with curve "b" on Fig. 4.1.A). For example at pH 2, 0.8 g L -1 of MCM-41-SH/SO 3 H-1 was likely to immobilize less than 20% of Cr(III) from a 50 µM solution whereas 0.5 g L -1 of MCM-41-SH/SO 3 H-1 in 50 µM of Cr(VI) Fig. 4 . 6 . 46 Fig. 4.6. Variation of chromium sorption yields, as a function of the solid-tosolution ratio, using MCM-41-SH/SO 3 H-X particles suspended in 50 µM Cr(VI) solution at pH 2.2: (a) MCM-41-SH, (b, ○) MCM-41-SH/SO 3 H-2, (b, □) MCM-41-SH/SO 3 H-3, (b, ∇) MCM-41-SH/SO 3 H-5. Fig. 4 . 7 . 47 Fig. 4.7 . XPS spectra (S 2p core-level) of MCM-41-SH after reaction with solutions containing Cr(VI) at increasing concentrations: (a) 50 µM, (b) 2 mM, (c) 5 mM and (d) 0.5 M. Fig. 5 . 1 . 51 Fig. 5.1. Variation of the amount of ED3A groups attached to the silica surface, as a function of the concentration of the ED3A-silane coupler solution. Fig. 5 . 2 . 52 Fig. 5.2. FTIR spectra of SiO 2 -SH (a), SiO 2 -SH/ED3A (b) and SiO 2 -ED3A (c) in the 1250-3000 cm -1 range. Fig. 5 . 5 Fig. 5.3 (a) Variation of Cr(III) sorption yields as a function of pH, using the SiO 2 -ED3A adsorbent; experimental conditions: solid-to-solution ratio of 2.0 g L -1 , starting Cr(III) concentration in solution equal to 100 µM (25 mL solution). (b) Variation of Cr(VI) reduction-sorption yields as a function of pH, using the SiO 2 -SH/ED3A adsorbent; experimental conditions: solid-to-solution ratio of 2.0 g L -1 , starting Cr(VI) concentration in solution equal to 100 µM (25 mL solution). Fig. 5 . 4 .Figure 5 . 5 . 5455 Fig. 5.4. Effect of contact time between (a) a Cr(III) solution and the SiO 2 -ED3A adsorbent, or (b) a Cr(VI) solution and the SiO 2 -SH/ED3A adsorbent, on the chromium sorption yields; experimental conditions: solid-to-solution ratios of 4.0 g L -1 (a) or 2.0 g L -1 (b), starting Cr(III) and Cr(VI) concentrations in solution equal to 192 µM and 100 µM, respectively (25 mL solution), and pH in the medium equal to 5.0 (a) or 2.5 (b). Figure 5 . 5 . 98 !Figure 5 . 5 . 4 Fig. 5 . 6 . 559855456 Figure 5.5.B shows that SiO 2 -SH/ED3A is likely to decrease the residual chromium concentration below low threshold values, but this requires the use of solid/solution ratios high enough. For instance, working in a solution containing initially 0.1 mM Cr(VI), the use of SiO 2 -SH/ED3A contents increasing from 2 to 4 and then to 6 g L -1 resulted in sorption yields of 88, 97, and 99 %, respectively. 
The adsorbent is thus likely to reduce chromium concentration under the µM concentration level. In view of the importance of the solid-to-solution ratio (Fig. 5.5.B) and considering the fast sorption kinetics (Fig. 5.4.b), flow-through experiments could be Fig. 5 . 7 .Fig. 5 . 8 . 5758 Fig. 5.7. UV-Vis diffuse reflectance spectra of Cr(VI)-treated SiO 2 -SH/ED3A at various chromium contents, ranging from 0.04 to 25 mg g -1 . Fig. 5 . 9 . 59 Fig. 5.9. Cr(VI) reduction-sorption onto SiO 2 -SH/ED3A in dynamic mode. Variation of (A) the concentration of remaining (non adsorbed) chromium in solution at the output of the column, and (B) the amount of adsorbed chromium onto the solid phase, as a function of the solution volume passed through the column; the experiments have been performed at two Cr(VI) concentrations (2 mM (■,▲,•) and 4 mM (□,∆,○), at pH 2.5), Figure 5 . 5 Figure 5.10 (curve 2), the sorption efficiency decreases by 15% while volume is increasing to 100 mL. While diluting solution to 500 mL, sorption efficiency remains constant (71-75%) but it is reduced sharply (10%) when the volume of Cr(VI) solution passed through the column is 1000 mL. Efficiency of Cr (VI) reduction (curve 1, Fig.5.10) exceeds 90% when low volumes (10-25 mL) are passed through the column and is reduced up to 80% when 1000 mL are used.Apparently, dilution of Cr (VI) solution has little effect on reduction ability of SiO 2 -SH/ED3A column, but the sorption efficiency of reductively-generated Cr (III) is somewhat reduced when solution is diluted to 500 mL and is rather low (56%) when diluted to 1000 mL. Fig. 5 . 10 .Fig. 5 . 11 . 510511 Fig. 5.10. Efficiency of Cr(VI) reduction (1) and sorption (2) after interaction with SiO 2 -SH/ED3A in dynamic mode versus the volume of Cr(VI) solution; Inset: corresponding variation of distribution coefficients (mSiO2-SH/ED3A = 100 mg, n Cr(VI) = 10 µmole, ν = 1 mL min -1 )! Fig. 5 .Fig. 5 . 55 Fig. 5.12. Influence of the ionic strength on adsorption capacity SiO 2 -SH/ED3A adsorbent versus Cr(VI) (2.0 g L -1 ; starting Cr(VI) concentration in solution equal to 2 mM; pH 2.5), and corresponding variation of distribution coefficients (inset). Fig. 5 . 5 Fig. 5.14. Variations of the desorption yields, as a function of pH, of a column made of 100 mg of SiO 2 -SH/ED3A material pre-treated with a solution (25 mL) containing 80 µM of various metal species; desorption was made at a flow rate of 1.0 mL min -1 in a pH-gradient elution mode (see variation in pH values as a function of the elution volume in the inset). Metal species were Pb(II) (○), Co(II) (▼), Fe(III) (◊), Ni(II) (▲), Cu(II) (□), and Cr(VI) (adsorbed as Cr(III) species, •). 2 -SH/ED3A, the binding mainly occurs through complexation with ED3A groups (if the quantity of Cr(VI) exceeds the molar ratio with ED3A, the binding of Cr(III) excess is reached by ion-exchange with the generated SO 3 H groups). Due to the reductive sorption mechanizm of Cr(VI) sequestration, the selectivity of bifunctionalized adsorbents is higher in comparison to conventional Cr(VI) adsorbents. It is shown that high concentrations of either ! 111! competing anions (high ionic strength is typical for wastewaters from plating industry) or heavy metals doesn't prevent the efficiency of Cr(VI) removal. 
The selectivity of Cr(VI) extraction is based on the kinetic inertness of Cr(III)ED3A complex at the applied conditions (pH=1-3).Both bifunctionalized adsorbents are advantageous in comparison to anion exchangers that can suffer from competition with other anions such as sulfate, nitrate or phosphate. Besides they operate under conditions typical for industial wastewaters (pH = 1-3), which facilitate the process of wastewater treatment. Experiments in dynamic mode revealed the possibility to use SiO 2 -SH/ED3A for column treatment of Cr(VI) contaminated waters. It was shown that such column can be used for complete sequestration of Cr(VI) from solutions with Cr(VI) concentrations as low as the tolerance level accepted for industrial wastes (0.05-0,1 mg L -1 ). Pre-concentration of Cr(VI) at the surface of SiO 2 -SH/ED3A in combination with X-ray fluorescence analysis can also be used for creation of a test method of Cr(VI) detection. Occurrence of characteristic color of Cr(III)ED3A complex in the phase of SiO 2 -SH/ED3A after interaction with solutions containing Cr(VI) allows detecting its presence visually. : Cr(OH) 2 (H 2 O) 4+ + H 2 O = Cr(OH) 3 + H 3 O + (1.3) abbreviated as Cr(OH) ! Cr(H 2 O) 6 3+ + H 2 O = Cr(OH)(H 2 O) 5 2+ + H 3 O + (1.1) Cr(OH)(H 2 O) 5 2+ + H 2 O = Cr(OH) 2 (H 2 O) + + H 3 O + (1.2) ! 8! Table 1 .1 Evaluation of carcinogenicity of chromium compounds [30] 1 Taking in account drastically different biochemical reactivity of Cr(III) and Cr(VI), the present-day regulations and quality guidelines call for accurate distinction between Cr(VI) and Cr(III) species. As shown at Table 1.2., maximum allowable concentrations (MAC) of hexavalent chromium in some countries can be 1000-times lower than concentration of total chromium. ! 16! *Group 1: Carcinogenic to humans Group 2: Probably carcinogenic to humans Group 3: Not classifiable as to its carcinogenicity to humans Table 1 .2 Maximum allowable concentrations for waste waters, mg L -1 [30] 1 Total Cr 0,5 1 France, Ukraine, Germany (metallurgy, chemical industry), South Africa (waste waters) Germany (tanning) 2 Japan (public water systems) 0,005 South Africa (ecosystems) Cr(VI) 0,05-0,1 France, Ukraine, Germany (metallurgy, chemical industry), South Africa (agricultural use) 0,5 Germany (tanning), Japan (public water systems), USA At the same time, concentrations of hexavalent chromium at the output of the regular galvanizing plant (see, Table 1.3) reach values thousand times higher than MAC. ! 17! Table 1 .3 Concentration of the magor components in the effluent from galvanic plant [31] 1 Ingridient Concentration, mg L -1 Ingridinent Concentration, mg L -1 Cr(VI) 1000 Sn 10 Cu 30 Cl - 500 Ni 50 SO 4 2- 1000 Zn 50 CN - 30 Cd 15 NO 3 - 60 Pb 10 - - Considering afore cited toxicity of Cr(VI), technology used for purification of wastewaters ought to be that much effective to certify reduction of Cr(VI) Table 1 1 .4. 
Table 1.4. Comparison of the adsorption capacity of several amine- or quaternary-ammonium-based adsorbents
Sample                  Functional groups              Sorption capacity, mg g-1   Reference
Amberlite IRA 400       quaternary ammonium            60                          [102]
Amberlite IRA 900       quaternary ammonium            153                         [103]
Amberlite IR 67 RF      imine                          56                          [98]
Amberlite IRA-96        amine                          32                          [102]
QAPAN fiber             amine, quaternary ammonium     248                         [111]
APAN fiber              amine                          35                          [104]
APAN fiber              amine                          133                         [105]
4-VP grafted PET fiber  4-vinylpyridine                72                          [106]
PANI-jute fiber         amine and imine                63                          [107]
RQA resin               amine, quaternary ammonium     48                          [108]
RCl resin               N-methylimidazolium            132                         [109]
VION AN-1               pyridinium ion                 130                         [110]

Transformation of pyridine groups into more basic forms decreases the chemical stability of the resin in Cr(VI) adsorption from solutions. A novel fibrous adsorbent functionalized with both amino and quaternary ammonium groups [111] can be used repeatedly for the removal of aqueous Cr(VI); it has also been stated that Cr(VI) is partially reduced to Cr(III) at its surface.

Table 1.5. Distribution coefficients of Cr(III) and Cr(VI) between the anion exchanger AG-1X8 and H2SO4 solutions of different concentrations [113]

Anions such as ClO3- and ClO4- cause the formation of complex ions and change the colour from purple to green. The distribution coefficients of Cr(III) between cation exchangers and acid solutions depend on the acid concentration (see Table 1.6).

Table 1.6. Cr(III) distribution coefficients between cation exchangers and acid solutions [118]

Table 1.7. Efficiency of adsorption of different chromium species on low-cost (bio)adsorbents at different pH [150]
Adsorption yields, %
Adsorbent      pH 2: Cr(VI) / Cr(III)    pH 5: Cr(VI) / Cr(III)
Wool           69.3 / 0.0                5.8 / 58.3
Olive pits     47.1 / 0.0                8.4 / 74.8
Sawdust        53.5 / 0.0                13.8 / 96.8
Pine needles   42.9 / 0.0                13.0 / 79.4
Almond pits    23.5 / 0.0                2.3 / 60.0
Coal           23.6 / -                  2.4 / 99.4
Cactus         19.8 / 0.0                8.2 / 55.2

Table 1.8. Final solution pH, removal efficiency and sorption capacity of total Cr at equilibrium on different types of low-cost (bio)sorbents [77, 131]
Adsorbent        Removal efficiency of total Cr, %   Final solution pH   Sorption capacity, mg g-1
Pine needle      38.0                                2.21                21.5
Pine bark        85.0                                2.17                -
Pine cone        71.8                                2.17                -
Banana skin      25.5                                2.37                -
Green tea waste  64.8                                2.27                5.7
Oak leaf         48.7                                2.29                -
Walnut shell     24.6                                2.21                5.88
Rice straw       26.3                                2.24                -
Peanut shell     41.0                                2.19                -
Sawdust          19.9                                2.24                53.5
Orange peel      49.9                                2.27                -
Rice husk        25.2                                2.21                45.6
Rhizopus         27.2                                2.32                23.9
Ecklonia         77.2                                2.5                 -
Sargassum        64.1                                2.46                32.6
Enteromorpha     15.8                                2.37                -

As one can see from Table 1.8, the sorption capacities of polyfunctional (bio)sorbents are of the same order of magnitude as those reported for cation exchangers. In 2011, Guo et al. published data on the reductive-adsorption capability of bacteria immobilized in a sol-gel [153]. Unfortunately, the chromium adsorption parameters were not examined in detail; moreover, the proposed adsorbent is characterized by slow interaction kinetics, taking days to reach complete reduction of Cr(VI). At a dose of 15 g L-1, 70 mg L-1 of Cr(VI) can be removed at pH 6; the removal capacity of Fe@SiO2 reaches 467 mg of Cr per g.

In this work we propose to examine the application of bifunctionalized silica-based materials for selective Cr(VI) removal from acid solutions. By planning the chemical composition of the surface and the structure of the bifunctionalized adsorbent, we will try to improve the sorption properties and increase the efficiency of sequestration of chromium in its less toxic form. By analogy to [71], in this work we propose to consider silica-based
materials modified by thiolpropyl as a reductant group and either propylsulfonic acid or ethylenediaminetriacetate as a binding group. We now review the literature concerned with the interaction of the aforementioned ligands with Cr(III), Cr(VI) and other heavy metals.

1.4 Justification for the choice of functional groups

1.4.1 Interaction of Cr(III) and Cr(VI) with thiols

Cr(III) belongs to the hard acids [155]; it forms inert complexes with coordination number 6 and an octahedral configuration of ligands containing oxygen, nitrogen and sulfur as donor atoms. They include complexes with neutral groups (H2O, NO, NH3, NO2, SO2, S, P, CO, N2H4, NH2OH, C2H5OH, C6H6, etc.) and ions (F-, Cl-, Br-, ...). In aqueous solutions, because of the kinetic inertness of its aqua complexes, Cr(III) does not interact with sulfides [155] or thiols. Chromium(III) sulfide (Cr2S3), formed by treating CrCl3 with gaseous H2S, is a poorly soluble substance that decomposes very slowly in contact with water. This is the reason why there are no publications devoted to the adsorption of Cr(III) by thiol-functionalized materials from aqueous solutions. Nevertheless, thiol- and sulfide-bearing materials are used for Cr(VI) reduction [156, 157]. Trofimchuk reports in his work [158] the possibility of using

Potassium dichromate in the presence of EDTA in weakly alkaline medium is gradually reduced; the process is significantly accelerated on boiling in the presence of manganese salts [168]. As a result, a purple compound is formed, which is likely to be the CrY- complex. If a reducing agent (iodide) is present in solution, a violet complex is formed immediately, most likely due to the formation of the anhydrous Cr3+ ion [176]. The UV spectrum of this complex (Fig. 1.12) reveals two absorption peaks at pH 2 (396 and 538 nm) [177]. In basic medium (pH 11) the absorption peaks are shifted to 390 and 590 nm. This change in colour can easily be detected by eye. At higher pH, the purple complex CrY- turns into the dark blue complexes Cr(OH)Y2- and Cr(OH)2Y3-.

Complex formation follows

M^n+ + H4Y -> MY^(n-4)+ + 4H+   (1.13)

and requires 30 min [170, 171], 45 min [172] or 5 min [173]; the process can be accelerated by heat. Complex formation at ordinary temperatures can be sped up by adding a catalyst such as acetate or bicarbonate [174], as well as trace amounts of Cr(II) [175].

Table 1.9. Stability constants of some EDTA complexes; t = 20 °C; µ = 0.1 in KNO3 [179]
Cation   Complex   log K
Cu2+     CuY2-     18.8
Cd2+     CdY2-     16.46
Pb2+     PbY2-     18.04
Co2+     CoY2-     16.31
Ni2+     NiY2-     18.62
Zn2+     ZnY2-     16.5
Al3+     AlY-      16.13
Fe3+     FeY-      25.1
Cr3+     CrY-      23.4

The CrY- complex is very stable (see Table 1.9) and is characterized by constant light absorption values at pH 1.5-4. The optimal light absorption is observed at a Cr-to-EDTA ratio of 1:6. Under such conditions, the spectrophotometric method enables detection of up to 6 mg of Cr(III) per litre [178].
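The pH window quoted for a stable CrY- complex (pH 1.5-4) can be rationalized with Ringbom's side-reaction treatment. The sketch below estimates the conditional constant from the Table 1.9 value of log K = 23.4, using standard literature pKa values for EDTA as a rough proxy for the immobilized ED3A ligand; it is an order-of-magnitude illustration, not a measured property of the adsorbent.

import numpy as np

# Conditional (effective) stability of CrY- at a given pH: K' = K / alpha_Y(H).
# log K from Table 1.9; the four EDTA pKa values are standard literature numbers.
LOG_K_CRY = 23.4
PKA_EDTA = [2.0, 2.67, 6.16, 10.26]

def log_conditional_k(pH):
    h = 10.0 ** (-pH)
    ka = 10.0 ** (-np.asarray(PKA_EDTA))
    # alpha_Y(H) = 1 + h/Ka4 + h^2/(Ka3*Ka4) + ...
    alpha, term = 1.0, 1.0
    for k in ka[::-1]:          # start from the weakest acid step (pKa4)
        term *= h / k
        alpha += term
    return LOG_K_CRY - np.log10(alpha)

for pH in (1.5, 2.5, 4.0):
    print(f"pH {pH}: log K' = {log_conditional_k(pH):.1f}")

Even at pH 1.5 the conditional constant stays around 10^8, which is consistent with the observation that the complex survives strongly acidic working conditions.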
Chemicals used to prepare the adsorbents were: (1) the silica support, a chromatographic-grade silica Kieselgel 60 obtained from Merck (characterized by an average pore size of about 7 nm, a surface hydroxyl content of 3.6 mmol OH per g, an average particle size of 125 ± 25 µm, and a specific surface area of 425 ± 25 m2 g-1); (2) organosilanes: tetraethoxysilane (TEOS, >98 %, Merck), mercaptopropyltrimethoxysilane (MPTMS, 95 %, Lancaster) and N-[(3-trimethoxysilyl)propyl]-ethylenediamine triacetate (ED3A, 50 % (w/w) aqueous solution of the sodium salt of the ED3A-silane coupler, Petrarch Systems Inc.); (3) solvents: toluene (95 %, Merck), ethanol and methanol (Merck); and (4) reagents: cetyltrimethylammonium bromide (CTAB, Merck), ammonia (28 %, Merck), hydrogen peroxide (35 %, Merck), NaHSO3, diphenylcarbazide (DPC), concentrated HCl (37 %), HNO3 (65 %), H2SO4 (95-97 %), and standard solutions of NaCl, NaOH and HNO3.

Solutions. A stock solution of Cr(III) was prepared by dissolving 0.7073 g of K2Cr2O7 in 100 mL of 2 M H2SO4 and adding 5 g of NaHSO3 in small portions under vigorous stirring. The obtained medium was then boiled to complete the reduction of Cr(VI) into Cr(III) and to expel SO2, and was diluted to 1 L to get a final Cr(III) concentration of 0.250 g L-1.

Oxidation of thiol groups into sulfonic acid moieties. MCM-41-SH was partially oxidized by adapting a previously published procedure involving the use of H2O2 in a methanol-water mixture [191]. To get a fully oxidized material (MCM-41-SH/SO3H-6), 0.7 g of MCM-41-SH was suspended in a mixture of 10 mL of aqueous 35 % H2O2 (3.5 g) and 35 mL of methanol. After 24 h of stirring, the solid particles were filtered and washed with water and ethanol. The wet materials were re-suspended in 0.1 M H2SO4 and stirred for another 4 h, filtered again, washed with water and ethanol, and dried under vacuum (<10-2 bar) for 24 h. The MCM-41-SH/SO3H-1 sample was synthesized without taking care to exclude oxidation by air oxygen; it was divided into three parts that were titrated after different storage times in air. Conductimetric titration was performed on samples kept in air for 1 day (MCM-41-SH/SO3H-2.1), 2 weeks (MCM-41-SH/SO3H-2.2) and 2 months (MCM-41-SH/SO3H-2.3). Structural parameters of the functionalized materials are given in Table 2.1.

The grafting reaction was conducted at 100 °C for 6 hours. The resulting precipitate was then washed in a Soxhlet apparatus with toluene (1 h) and then with methanol for 24 h. The final material contained about 0.4 mmol of covalently immobilized thiol groups per gram of adsorbent (for more information about structural parameters see Table 2.1).

2.3.5 Silica gel with covalently attached mercaptopropyl and ethylenediaminetriacetate groups. Thiol- and ethylenediaminetriacetate-functionalized silica samples (SiO2-SH/ED3A) were prepared according to a procedure similar to that reported for ethylenediaminetriacetate-bonded silica gel (SiO2-ED3A) [194]. Briefly, 5.0 g of SiO2-SH was placed into 20 mL of a 1:1 (v/v) water-methanol solution and blended for 30 minutes. Then 4.0 mL of the ED3A-silane coupler solution was added to the SiO2-SH slurry, and the mixture was blended for 3 days at room temperature. The adsorbent was filtered, washed with distilled water and dried at 100 °C under reduced pressure. It contained about 0.4 mmol of immobilized ED3A groups per gram of material, i.e., the same amount as SH groups (for other adsorbent parameters see Table 2.1).
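A quick way to verify the stated stock concentration is the dichromate stoichiometry: each mole of K2Cr2O7 carries two moles of chromium, so 0.7073 g diluted to 1 L should indeed give 0.250 g L-1 of Cr(III). A minimal check:

# Stoichiometric check of the Cr(III) stock solution described above.
M_K2CR2O7 = 294.18   # g/mol
M_CR = 51.996        # g/mol

n_dichromate = 0.7073 / M_K2CR2O7      # mol K2Cr2O7 dissolved
n_cr = 2.0 * n_dichromate              # two Cr atoms per dichromate
print(f"{n_cr * M_CR * 1000:.1f} mg Cr per litre")   # -> ~250 mg/L, i.e. 0.250 g/L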
Such optimal conditions for the functionalization of SiO2-SH with ED3A groups were defined after investigating the binding of the ED3A-silane coupler to pure silica gel (1 g mixed with 4 mL of a 1:1 water-methanol solution containing the ED3A-silane coupler at concentrations varying from 13 to 290 g L-1).

Table 2.1. Parameters of the functionalized materials
Sample           C(-SO3H), mmol g-1   C(-SH), mmol g-1   C(-ED3A), mmol g-1   BET surface area, m2 g-1   Mesopore volume, cm3 g-1   Pore diameter, Å
MCM-SH/SO3H-1    0.035                0.898              -                    ≥860                       ≥0.42                      ≥20
MCM-SH/SO3H-6    0.606                0.327              -                    ≥860                       ≥0.42                      ≥20
MCM-SH           -                    0.931              -                    1598                       0.76                       20
SiO2-SH/ED3A     -                    0.382              0.401                346                        -                          5
SiO2-SH          -                    0.392              -                    448                        -                          6
SiO2-ED3A        -                    -                  0.181                364                        -                          5

Table 2.2. Conditions applied in the experiments studying the effect of pH on Cr uptake
Cr species   Sorbent                                   m, g        V, mL   t, hours
Cr(III)      MCM-41-SH/SO3H-6                          0.02-0.15   25      24
             MCM-41-SH/SO3H-1, MCM-41-SH, SiO2-ED3A    0.02-0.05   25      24
Cr(VI)       MCM-41-SH/SO3H-1                          0.01-0.15   25      20
             SiO2-SH/ED3A                              0.05        25      20

Table 2.3. Conditions applied in the experiments studying the effect of the solid-to-solution ratio on Cr uptake
Cr species   Sorbent                    m, g        V, mL   C, mol L-1            pH    t, hours
Cr(III)      MCM-41-SH/SO3H-6           0.01-0.35   50      5·10^-5 - 10^-4       2.2   24
             MCM-41-SH                  0.01-0.20
Cr(VI)       MCM-41-SH/SO3H-2, -3       0.03-0.2
             MCM-41-SH/SO3H-5           0.05-0.2
             SiO2-SH/ED3A               0.01-0.3                                  3

Table 2.4. Conditions applied in the experiments studying the kinetics of Cr(III) and Cr(VI) interaction with the sorbents
Cr species   Sorbent        m, g   C, mol L-1    pH    t, hours
Cr(III)      SiO2-ED3A      0.1    1.92·10^-4    5     72
Cr(VI)       SiO2-SH/ED3A   0.05   1.00·10^-4    2.5   24

Table 2.5. Conditions applied in the experiments studying the effect of chromium concentration on sorption
Cr species   Sorbent            V, mL   m, g    C, mol L-1             pH    t, hours
Cr(III)      MCM-41-SH/SO3H-6   25      0.02    10^-5 - 1.7·10^-3      2     24
Cr(VI)       SiO2-SH            25      0.125   4·10^-5 - 4·10^-3      2.5   24
             SiO2-SH/ED3A       25      0.125   2·10^-6 - 2·10^-2      2.5   24

Table 3.1. Concentrations of different sulfur groups in the MCM-41-SH/SO3H-X series
Sample              C(H2O2), mmol L-1   C(-SO3H), mmol g-1   C(-SH), mmol g-1   ΣCL, mmol g-1   ΣCL(c), mmol g-1   ωH+, %
MCM-41-SH           0                   0                    0.931              0.931           -                  0
MCM-41-SH/SO3H-1    0 (a)               0.035                0.898              0.933           -                  4
MCM-41-SH/SO3H-2    0.16                0.164                0.769 (b)          -               1.06               18
MCM-41-SH/SO3H-3    0.31                0.194                0.739 (b)          -               -                  21
MCM-41-SH/SO3H-4    0.91                0.354                0.599              0.953           -                  37
MCM-41-SH/SO3H-5    1.45                0.361                0.550              0.911           -                  40
MCM-41-SH/SO3H-6    2.49                0.606                0.327 (b)          -               -                  65
(a) oxidation in air; (b) values calculated from the total concentration of sulfur-bearing groups; (c) concentration determined by elemental analysis.

As illustrated in Fig. 3.3, a linear correlation exists between the H2O2 concentration and the -SO3H loading of MCM-41-SH/SO3H-X. One might thus expect that adding MCM-41-SH to a 5 mM solution of H2O2 would lead to complete oxidation of the surface layer to -SO3H groups. Nevertheless, such a transformation is not observed at any H2O2 concentration and can only be achieved when the concentration of thiol groups at the surface is low [201]. Such a restriction, in the case of mild oxidation with H2O2, is generally attributed to the formation of disulfide groups that are stable towards oxidation [215].
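The linear correlation just mentioned can be checked directly on the Table 3.1 values; the sketch below fits a line to the H2O2-treated samples (the air-oxidized sample is left out since no peroxide was added). This is a consistency check on the tabulated data, not an analysis from the thesis.

import numpy as np

# Sulfonic-acid loading versus oxidant dose, samples -2 to -6 of Table 3.1.
c_h2o2 = np.array([0.16, 0.31, 0.91, 1.45, 2.49])        # mmol/L H2O2
c_so3h = np.array([0.164, 0.194, 0.354, 0.361, 0.606])   # mmol/g -SO3H

slope, intercept = np.polyfit(c_h2o2, c_so3h, 1)
r = np.corrcoef(c_h2o2, c_so3h)[0, 1]
print(f"slope = {slope:.3f} (mmol/g per mmol/L), intercept = {intercept:.3f}, r = {r:.3f}")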
[Figure: S 2p XPS spectra of the MCM-41-SH/SO3H-X samples (curves 1-5); intensity in CPS versus binding energy, 160-172 eV.]

Table 3.2 summarizes the results obtained from the XPS spectra of different MCM-41-SH/SO3H-X samples together with the data obtained from their conductimetric titration. A better correlation between the S(VI)/S(II) ratio and ωH+ (%) is observed for the data obtained from the peak intensities than for the mass concentrations given by the quantification report of the XPS analyzer.

Table 3.2. Data from the XPS spectra
Sample              h(S(II)), cm   h(S(VI)), cm   S(VI)/S(II), %   (a)   C(SH), mmol g-1   C(SO3H), mmol g-1   ωH+, %
MCM-41-SH/SO3H-6    1.7            2.6            60               68    0.33              0.61                65
MCM-41-SH/SO3H-5    2.1            1.9            48               65    0.55              0.36                40
MCM-41-SH/SO3H-4    2.4            1.5            38               48    0.60              0.35                37
MCM-41-SH/SO3H-3    4.4            1.6            26               40    0.74              0.19                21

Table 3.3. Concentrations of different sulfur groups in the MCM-41-SH/SO3H-2.X series

from 35 to 100 % of propylthiol groups. No essential amount of dipropyl disulfide is detected for any sample with an oxidation degree from 0 to 65 %, so the stabilization towards complete oxidation by H2O2 cannot be explained by surface disulfide formation.

(Table 1.2), holding great promise for wastewater treatment by reducing the volume of toxic sludges with respect to the conventional reduction-

[Figure: S 2p XPS spectra (a-d) showing the sulfonic acid and thiol components; intensity in cps versus binding energy, 160-175 eV.]

Chapter V. Cr(VI) removal via reduction-sorption by SiO2-SH/ED3A

5.1 Adsorbent preparation and characteristics

Attachment of mercaptopropyl groups (SH), on one hand, and ethylenediaminetriacetate moieties (ED3A), on the other hand, onto the silica surface requires two distinct grafting procedures. The first one is the "classical" grafting reaction of MPTMS on silica in refluxing toluene (see section 2.3.4), but special care should be taken to avoid complete coverage of the whole silica surface, so as to enable further immobilization of ED3A groups; this can easily be achieved by adjusting the MPTMS/silica ratio [192]. Then, ED3A groups were attached to the SiO2-SH material.

5.2 Batch sorption experiments

5.2.1 Cr(III) sorption on SiO2-ED3A

Cr(III) species (Cr3+, or more accurately the aqua complex Cr(H2O)6^3+ in acidic medium) can be immobilized on SiO2-ED3A and, as illustrated in Figure 5.3 (part a), this process is pH-dependent. As shown, no adsorption was observed at pH lower than 2.5, but sorption yields then increased rapidly at pH 3, to reach maximum values above pH 5. The sorption process is expected to be due to metal-ligand complex formation (on the basis of known Cr(III)-chelate complexes with chelators such as N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or ethylenediaminetetraacetic acid (EDTA)), but one cannot exclude a contribution of non-selective bonding of Cr(H2O)6^3+ species to silanolate groups, especially at higher pH values (i.e., above pH 4). The sorption on SiO2-SH/ED3A (Fig. 5.3) is significantly larger than for SiO2-SH, owing to complex formation with ED3A (even at pH values as low as 0.5-1, consistent with highly stable Cr(III) complexes with HEDTA [170] or EDTA [226] in strongly acidic media). Kinetics associated with Cr(VI) reduction and subsequent Cr(III) immobilization onto SiO2-SH/ED3A are very fast (see curve b in Fig. 5.4), showing steady-state values for maximum sorption yields in less than 15 min (i.e., at the first measured data point). This supports the idea of fast binding of freshly generated Cr(III) in the form of Cr3+ species, which are formed close to ED3A groups in the porous material and are thereby likely to undergo rapid complexation, contrary to as-prepared Cr(III) solutions in which chromium is present as Cr(H2O)6^3+ and requires slow ligand exchange before being complexed by the ED3A chelate (as supported by curve a in Fig. 5.2).
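Uptake curves that level off within about 15 min are commonly summarized with a pseudo-first-order model. The sketch below fits such a model to invented placeholder points consistent with the fast kinetics described above; it does not use the measured curves.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, q_e, k1):
    """Pseudo-first-order uptake: q(t) = q_e * (1 - exp(-k1 * t))."""
    return q_e * (1.0 - np.exp(-k1 * t))

# Placeholder time series (min, mmol/g), levelling off within ~15 min.
t = np.array([2.0, 5.0, 10.0, 15.0, 30.0, 60.0])
q = np.array([0.21, 0.33, 0.38, 0.395, 0.40, 0.40])

(q_e, k1), _ = curve_fit(pseudo_first_order, t, q, p0=(0.4, 0.3))
print(f"q_e = {q_e:.2f} mmol/g, k1 = {k1:.2f} min^-1")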
(i.e., between pH 2 and 3 for MCM-SH/SO3H-X), indicating more efficient Cr3+ binding in acidic medium for the SiO2-SH/ED3A sorbent, which can be explained by complex
136,397
[ "788583" ]
[ "411849" ]
01749959
en
[ "sdv" ]
2024/03/05 22:32:07
2012
https://hal.univ-lorraine.fr/tel-01749959/file/DDOC_T_2012_0397_SANTINI.pdf
Karen Burga, Anna Maria Becker, Cédric Mondy, Hugues Clivot, Audrey Cordi, Anne-Sophie Foltete, Eric Gismondi, Nelly Jacquet, Stéphane Jomini, Huong Thi Thuy Ngo, Nubia Quiroz, Guillermo Restrepo, Benjamin Schmidt, Didier Techer, Rosy Toda, Hela Toumi, Philippe Wagner

Santini O. 1&2, Vasseur P. 1&2, Frank H. 1&2

Article 2: Copper effects on Na+/K+-ATPase and H+-ATPase in the freshwater bivalve Anodonta anatina
Keywords: calcium homeostasis, copper, Na+/K+-ATPase, H+-ATPase, Anodonta anatina

Keywords: metal homeostasis, freshwater bivalve, phytochelatins, Anodonta cygnea

Article 4: Phytochelatins, a group
Keywords: copper, freshwater bivalve, phytochelatins, Anodonta cygnea, metal tolerance

Effects of copper on the activities of the cell plasma membrane H+-ATPase and Na+/K+-ATPase of the freshwater mussel Anodonta anatina were assessed after 4, 7, and 15 days of exposure to Cu2+ at the environmentally relevant concentration of 0.35 µmol L-1. The H+-ATPase was measured in the mantle, and the Na+/K+-ATPase in the gills, the digestive gland and the mantle. The Na+/K+-ATPase activities showed significant inhibition upon 4 days in the gills (72 %) and in the digestive gland (80 %) relative to control mussels; in the mantle, no inhibition of the enzyme activities was noted. Incipient recovery of the Na+/K+-ATPase activity was registered after 7 and 15 days in the gills and the digestive gland, yet in the gills the activity did not return to the basal level within 15 days. H+-ATPase activity remained unaffected by Cu2+ at the test concentration.

List of figures
Metallothionein redox cycle and function (Kang); MT: metallothionein, ROS: reactive oxygen species, GSH: glutathione, GSSG: glutathione disulfide.
Fig. 8: PC synthesis (Vatamaniuk, "Worms take the 'phyto' out of 'phytochelatins'").
Peak "a" is an unidentified compound originating from the derivatization reaction with the reagent. Standard concentrations were 10 µmol L-1 for cysteine (Cys), glutathione (GSH) and γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetylcysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

List of tables
Background. Table 1: Functions of the main cuproproteins (Tapiero et al., 2003).
Extended summary. Table 1: Basal enzymatic activities of plasma membrane ATPase(s) (µmol Pi/mg protein/min) and cytoplasmic carbonic anhydrase (U/mg protein) in Anodonta anatina. Table 2: Basal levels of phytochelatins 2-4 (µg PC/g tissue wet weight), γ-GluCys (µg γ-GluCys/g tissue wet weight), and metallothionein (mg MT/g protein) in Anodonta cygnea.
Article 1. Table 1: PMCA activities (µmol Pi L-1 mg-1 protein min-1) in different tissues of freshwater and marine organisms. Table 2: Carbonic anhydrase activities in freshwater and marine organisms evaluated by measurement of the pH decrease caused by enzymatic hydration of CO2.
Article 2. Table 1: Plasma membrane Na+/K+-ATPase and H+-ATPase activities (µmol/L Pi/mg protein/min) in different tissues of freshwater and marine organisms.
Article 3. Table 1: Average retention times in minutes (n = 20) ± SD, limit of detection (LOD), and limit of quantification (LOQ) in pmol per 20 µL injected, for cysteine-rich metal-binding peptide standards. Standard curves were run with 7 concentrations: 1 to 10 µmol L-1 for Cys, GSH and γ-GluCys, and 0.2 to 2 µmol L-1 for PC2-5. Table 2: PC content [µg PC g-1 tissue wet weight] in the digestive gland and gills of Anodonta cygnea, means (n = 6) ± SD.
Article 4. Table 1: γ-GluCys and MT content in the digestive gland and the gills of A. cygnea exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD. * = significant difference between the exposed group and its respective control (Kruskal-Wallis and Mann-Whitney two-sided tests, n = 6, α = 0.05).

Summary
Copper (Cu) is one of the metals contaminating European freshwater ecosystems. Filter-feeding bivalves have a high bioaccumulation potential for transition metals such as Cu. While copper is an essential micronutrient for living organisms, it causes serious metabolic and physiological impairments when in excess. The objectives of this thesis are to gain knowledge of the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two mussel species widely distributed in continental waters. Because Ca plays a fundamental role in shell formation and in numerous biological processes, Cu2+ effects on cellular plasma membrane calcium transport were studied first. In the second step, the investigations focused on Cu2+ detoxification mechanisms involving cysteine (Cys)-rich compounds known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions, copper inhibition of Ca2+-ATPase activity was observed in the gills and the kidneys, and inhibition of Na+/K+-ATPase in the gills and the digestive gland (DG), upon 4 d of exposure. At day 7 of exposure to environmental Cu2+ concentrations, total recovery was observed in the kidneys and the gills for Ca2+-ATPase activity, and in the DG for Na+/K+-ATPase, but not at high doses. Inhibition of Ca and Na transport may entail disturbance of osmoregulation and lead to a continuous under-supply of Ca. The recovery of Na+/K+-ATPase and Ca2+-ATPase function suggests that metal detoxification is induced. Phytochelatins (PC) are Cys-rich oligopeptides synthesized by phytochelatin synthase from glutathione in plants and fungi. Phytochelatin synthase genes have recently been identified in invertebrates; this allows us to hypothesize a role of PC in metal detoxification in animals. In the second part of this work, PC and their precursors as well as metallothionein were analyzed in the gills and in the DG of Anodonta cygnea exposed to Cu2+. Our results showed for the first time the presence of PC2-4 in invertebrates. PC were detected in control mussels not exposed to metal, suggesting a role in essential metal homeostasis. Compared to controls, PC2 induction was observed during the first 12 h of Cu2+ exposure. These results confirm the role of PC as a first-line detoxification mechanism in A. cygnea.
Key words: calcium homeostasis, copper, Anodonta freshwater bivalve, phytochelatins

Zusammenfassung
Copper is one of the transition metals most widely distributed in European freshwater ecosystems, occurring at concentrations that can be of ecotoxicological significance.
Filter-feeding bivalve mussels have the capacity to accumulate transition and trace metals such as copper through bioaccumulation; copper is also an essential metal for all living organisms, but excessive exposure to this trace metal causes severe metabolic and physiological disturbances. This doctoral thesis aims to extend the knowledge of the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two freshwater bivalve species widely distributed in European waters. Since calcium plays a fundamental role in shell composition and in numerous biological processes, the effects of Cu2+ on cellular calcium transport at the plasma membrane level were studied first. The study then focused on copper detoxification mechanisms, with emphasis on cysteine-rich peptides and proteins; cysteine is known for its functional role in the homeostasis of essential trace metals and in cellular metal detoxification. Under the chosen experimental conditions, after four days of Cu2+ exposure at 0.35 µmol L-1, inhibition of the Ca2+-ATPase in the gills and kidneys and of the Na+/K+-ATPase in the gills and the digestive gland was observed. After seven days of exposure, recovery of enzyme activity was observed in the kidneys and gills for the Ca2+-ATPase and in the digestive gland for the Na+/K+-ATPase. This recovery points to the induction of metal detoxification capacity. At twice this copper concentration, inhibition of all these enzymes persisted over the entire 15-day exposure period. Inhibition of calcium and sodium transport can cause osmoregulatory disturbances and lead to a calcium deficit. Phytochelatins (PC) are cysteine-rich polypeptides synthesized from glutathione by PC synthase in plants and yeasts. Genes capable of yielding functional PC synthases have been identified in invertebrates, which led us to assume that PC could also play a role in metal detoxification in animals. In the second part of this work, PC and their precursors as well as metallothioneins were therefore analyzed in the gills and the digestive gland of copper-exposed Anodonta cygnea. The results show for the first time the presence of PC2-4 in invertebrates. PC were also found in control mussels, pointing to their role in essential metal homeostasis. An induction of PC2 was observed within the first 12 hours of Cu2+ exposure, confirming its role as a first-line metal detoxification mechanism in A. cygnea.
Keywords: calcium homeostasis, ionic copper, freshwater bivalve Anodonta, phytochelatins

Résumé
Copper is one of the most prevalent metal contaminants of freshwater ecosystems in Europe. Filter-feeding bivalves have a high capacity to bioaccumulate transition metals such as copper. Copper is an essential trace element for living organisms, but in excess it causes serious metabolic and physiological disturbances.
The objective of this thesis is to gain knowledge of the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two freshwater bivalve species widely distributed in continental waters. Because calcium plays a fundamental role in shell composition and in numerous biological processes, the effects of Cu2+ were studied first on cellular calcium transport at the plasma membrane. In a second step, the study focused on Cu2+ detoxification mechanisms involving cysteine (Cys)-rich compounds, known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions (0.35 and 0.64 µmol L-1), inhibition of the Ca2+-ATPase by Cu2+ was observed in the gills and kidneys, and inhibition of the Na+/K+-ATPase in the gills and the digestive gland, after 4 days of exposure. From 7 days of exposure to 0.35 µmol L-1 Cu2+ onwards, a total recovery of enzymatic activity was observed in the kidneys and gills for the Ca2+-ATPase, and in the digestive gland for the Na+/K+-ATPase. At the higher dose (0.64 µmol L-1), the inhibition persisted. Inhibition of calcium and sodium transport can cause osmoregulatory disturbances and lead to calcium deficiency. The recovery of Ca2+-ATPase and Na+/K+-ATPase activity suggests the induction of metal detoxification functions. Phytochelatins (PC) are Cys-rich oligopeptides synthesized from glutathione by phytochelatin synthase in plants and fungi. Genes encoding functional phytochelatin synthases have been identified in invertebrates such as the compost worm. This prompted us to look for PC in bivalves. Our results showed for the first time the presence of PC2-4 in invertebrates. PC were detected in control mussels not exposed to metals, which suggests a function in essential metal homeostasis. We therefore studied their possible role in metal detoxification in these organisms and in animals in general; until now, phytochelatins were considered to play a role only in plants. In the second part of this work, PC and their precursors were sought in the gills and the digestive gland of Anodonta cygnea exposed to Cu2+. An induction of PC2 was observed within the first 12 hours of Cu2+ exposure, compared with control bivalves. These results confirm the role of PC as a first-line metal detoxification mechanism in A. cygnea. Metallothioneins were analyzed in parallel, but no induction was demonstrated in the presence of copper. (Tapiero et al., 2003). The properties that make this metal essential

Bivalves are in close contact with their environment and are widely used for pollution monitoring in aquatic ecosystems. Among these animals, Unionidae bivalves are widely used as indicator organisms for the bioaccumulation and toxic effects of metallic and organic pollutants (Winter, 1996; Falfushynska et al., 2009).
For these reasons, we chose Anodonta cygnea and Anodonta anatina, which belong to the Unionidae, as biological models for studying copper toxicity (Coimbra et al., 1993). In our study, inhibitions of the activity of the Ca

Following the identification of genes capable of yielding functional PCS in invertebrates (Clemens et al., 2001; Vatamaniuk et al., 2001; Brulle et al., 2008), we searched for PC in animal organisms. Homologous phytochelatin synthase genes have been found to be widespread among invertebrates (Clemens and Peršoh, 2009).

Materials and methods
Acclimatization of the bivalves. Mussel maintenance has been described in detail in our first article (Santini et al., 2011a).
Copper exposure. The effects of copper in A. anatina were assessed on the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase and of the cytosolic CA. The Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase activities were determined in each organ from the resuspended C75600 pellet (Fig. 1). The inorganic phosphate released during ATP hydrolysis was quantified by the method of Chifflet et al. (1988), i.e., spectrophotometric determination of the ammonium molybdate complex at 850 nm. The CA activity was determined by the method of Vitale et al. (1999), by measuring the pH drop in the presence of tissue extracts and CO2.
Analysis of thiol-rich compounds

Results
Plasma membrane Ca2+-ATPase and carbonic anhydrase. The mean Ca2+-ATPase activity (± standard deviation) in control animals was 0.087 ± 0.023 µmol Pi/mg protein/min in the kidneys, 0.45 ± 0.13 in the gills, and 0.095 ± 0.03 in the digestive gland. Copper significantly inhibited PMCA activities in the kidney after 4 days of exposure at all tested concentrations, from 0.26 to 1.15 µmol L-1 Cu2+. At 0.35 µmol L-1 Cu2+, recovery of PMCA activity was observed beyond 7 days of exposure. At the high concentration (0.64 µmol L-1), a 20 % inhibition of PMCA activity persisted over the 15 days of exposure, without any recovery. In the gills, a profile similar to that of the kidneys was observed, but with non-significant variations. No significant effect of exposure to 0.35 µmol L-1 Cu2+ on carbonic anhydrase was noted, except for a slight but non-significant inhibition after 15 days of exposure.
Na+/K+-ATPase and H+-ATPase. The mean Na+/K+-ATPase activity (± standard deviation) in control animals in September was 0.098 ± 0.006 µmol Pi/mg protein/min in the gills, 0.045 ± 0.007 in the digestive gland, and 0.042 ± 0.009 in the mantle. The H+-ATPase activity was 0.002 ± 0.0012 in the mantle. A significant inhibition of Na+/K+-ATPase activity relative to controls was observed after 4 days of exposure to 0.35 µmol L-1 Cu2+ in the gills (72 % inhibition) and the digestive gland (80 % inhibition). Activity recovered between the fourth and fifteenth day of exposure to 0.35 µmol L-1 Cu2+, but at the end of the test an inhibition (54 %) persisted. In the mantle, Na+/K+-ATPase activity decreased similarly (26 % inhibition at 4 days), but the difference was not significant relative to controls.
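The specific activities quoted above (µmol Pi/mg protein/min) follow from a phosphate calibration of the 850 nm absorbance in the Chifflet-type assay. The sketch below shows the arithmetic; the calibration and sample values are assumed placeholders, not the thesis data.

import numpy as np

# Placeholder phosphate standard curve for the 850 nm readout.
std_pi   = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # nmol Pi per assay
std_a850 = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # absorbance at 850 nm
slope, intercept = np.polyfit(std_pi, std_a850, 1)

def specific_activity(a850, protein_mg, minutes):
    """µmol Pi liberated per mg protein per min."""
    pi_nmol = (a850 - intercept) / slope
    return pi_nmol / 1000.0 / protein_mg / minutes

# e.g. A850 = 0.35 after 30 min with 0.05 mg protein in the assay
print(f"{specific_activity(0.35, 0.05, 30.0):.3f} µmol Pi/mg protein/min")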
No significant effect of copper on the H+-ATPase was noted in the mussel mantle after 15 days of exposure. In general, and in all tissues, the activities of both ATPases were higher in July and September than the values measured in January, March and April.
Metallothionein, phytochelatins and their precursors. The mean elution times of the phytochelatin standards were 13.09 min for PC2, 16.62 min for PC3, 18.59 min for PC4, and 19.75 min for PC5. mBBr-labelled compounds with retention times corresponding to the PC2, PC3 and PC4 standards were detected in digestive gland and gill extracts of control mussels. PC5 was below the detection limit. PC2 showed the highest concentration, with 2.17 ± 0.59 and 0.88 ± 0.15 mg PC2/g fresh weight in the digestive gland and gill tissues, respectively. In both organs, the PC2-4 concentrations ranked as follows: PC2 > PC3 > PC4. The PC2 and PC3 levels were two to three times higher in the digestive gland than in the gills, while the PC4 level was roughly equivalent in these tissues. The PC2 level increased clearly and significantly in the gills of mussels exposed to 0.35 µmol L-1 Cu2+. Compared with the respective control bivalves, a 50 % induction of PC2 was observed from the first 12 hours up to 4 days

Discussion
We studied the effects of copper on the enzymes involved in calcium transport and ionoregulation processes, and the detoxification mechanisms by phytochelatins and metallothioneins in Anodonta bivalves. Calcium plays a fundamental role in numerous biological processes (energy production, cellular metabolism, muscle contraction, reproduction) and has important mechanical functions (shell, skeleton) in living organisms (Mooren and Kinne, 1998). Unlike molluscs of marine ecosystems, which are generally hypo-osmotic and for which calcium uptake is facilitated by higher ambient calcium concentrations, freshwater bivalves are hyperosmotic and require strict regulation of their calcium metabolism. The present study allowed us to determine baseline levels of the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase and of the cytosolic carbonic anhydrase in the organs involved in calcium homeostasis. Physiological data are lacking for freshwater bivalves compared with marine models such as Mytilus edulis or Mytilus galloprovincialis. These data are necessary for understanding calcium transport mechanisms, since freshwater bivalves are subject to an osmo-iono-regulation different from that of marine bivalves. In our study, the plasma membrane Ca2+-ATPase (PMCA) activity in the gills of A. anatina was four times higher than that found in Mytilus edulis (Burlando et al., 2004). This reflects the importance of active calcium transport in Anodonta anatina. Calcium concentrations in fresh water are lower than those found in seawater, where calcium uptake is easier for Mytilus edulis.
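Returning to the phytochelatin determinations reported above: since every run carries 5 µmol L-1 of NAC as internal standard, a natural way to quantify the PC peaks is to normalize analyte areas to the NAC area before applying a response factor from the standard runs. The sketch below is one plausible implementation; the function name, response factor and peak areas are assumptions for illustration, not the method as published.

# Internal-standard quantification of mBBr-labelled phytochelatins.
NAC_CONC = 5.0          # µmol/L internal standard, as in the method above

def quantify(area_pc, area_nac, rf):
    """Concentration of a PC in the extract (µmol/L).

    rf: response factor = (area_pc_std / conc_pc_std) / (area_nac_std / NAC_CONC),
    determined from the standard runs.
    """
    return (area_pc / area_nac) * NAC_CONC / rf

print(quantify(area_pc=3.2, area_nac=8.0, rf=1.4))  # -> µmol/L PC2 in the extract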
PMCA activity is also high in the kidneys and the digestive gland, favouring calcium uptake from food and calcium reabsorption from the renal ultrafiltrate. These physiological data allowed us to understand how important the PMCA is for calcium homeostasis in freshwater bivalves compared with marine organisms. Moreover, in unionids calcium plays a direct role in reproduction, since the glochidia are incubated in the marsupium (gills); it is therefore decisive for the growth of mussel populations. The Na+/K+-ATPase maintains the transmembrane cellular sodium gradient required for facilitated calcium diffusion through the Na+/Ca2+ antiporter. A significant increase in Na+/K+-ATPase activity was measured in the gills and the digestive gland in July and September compared with the rest of the year. These two months correspond to the biomineralization period in Anodonta sp. (Moura et al., 2000). The kidneys play an essential role in the filtration and reabsorption of ions, water and organic molecules from the ultrafiltrate. Since freshwater bivalves are hyperosmotic, the osmotic pressure resulting from the concentration gradient between internal compartments and the environment leads to water uptake by osmosis and to ionic loss by diffusion. Water uptake is compensated by urine production, and ionic loss is limited by ionic reabsorption. In freshwater bivalves, daily urine production is high. The kidneys play an essential role in Ca homeostasis by limiting ionic losses in the urine, through active reabsorption of Ca2+ ions from the filtrate (Turquier, 1994). Inhibition of calcium reabsorption can therefore considerably disturb calcium homeostasis. Our study showed inhibition of PMCA enzymatic activity in the gills and kidneys of mussels exposed to 0.35 µmol L-1 Cu2+. A total recovery of PMCA activity was observed from 7 d of exposure to 0.35 µmol L-1 Cu2+ onwards, but the inhibition persisted at the higher concentration (0.64 µmol L-1). Because of analytical problems inherent to the small mass of the kidneys, this tissue had so far been little studied in freshwater bivalves. Yet this organ plays an important role in detoxification processes (Viarengo and Nott, 1993); moreover, the kidney is essential for ionoregulation. Our results showed the high sensitivity of this organ to Cu2+ at environmentally realistic concentrations (0.35 µmol L-1 = 22.3 µg L-1). In the present work, inhibition of the Na+/K+-ATPase was observed in the gills and the digestive gland. Such a disturbance of ionoregulation could lead to calcium deficiency; it may also affect calcium-dependent cell signalling pathways, in addition to disturbing biomineralization and shell formation in mussels and glochidia. The recovery of enzymatic functions (Ca2+-ATPase and Na+/K+-ATPase) suggests that a metal detoxification mechanism is induced. Consequently, in the second part of this study, we focused on detoxification mechanisms involving Cys-rich compounds, which chelate metals through their thiol group.
Unionid freshwater bivalves are widely recognized for their capacity to accumulate a large variety of environmental contaminants in their tissues, including metals (Bonneris et al., 2005). This metal tolerance is enabled by biochemical strategies involving metal sequestration. Intracellular metal sequestration follows a cascade of events involving different ligands of increasing binding strength towards metallic elements. At high concentration, metals can inhibit these detoxification mechanisms. In our results, the return of the Ca2+-ATPase and Na+/K+-ATPase to basal levels beyond 7 days of exposure to 0.35 µmol L-1 Cu2+ indicates adaptation capacities of the bivalves, via detoxification systems that are effective at low copper concentration. At a higher Cu2+ concentration (0.64 µmol L-1), no recovery of enzymatic activity was noted. Interestingly, recovery was observed only at the low, environmentally relevant concentration (0.35 µmol L-1). This underlines the importance of using environmental concentrations in ecotoxicological research; extrapolating results observed at high doses to environmental situations is questionable because of the different toxicity mechanisms at low and high doses. The study of phytochelatins in Anodonta cygnea led us to develop and optimize an HPLC analytical protocol for PC quantification in animal tissues, based on the method of Minocha et al. (2008). The present work confirmed the presence of phytochelatins in invertebrates. To our knowledge, this is the first time that PC have been found in animals and in invertebrates. In plants, PC are rapidly induced in cells and tissues exposed to transition metals. PC play an important role in metal detoxification; they could also be involved in essential metal homeostasis (Hirata et al., 2005). In our study, PC2, PC3 and PC4 were detected in the gills and the digestive gland of unexposed control A. cygnea. In both organs, PC2 was found at the highest concentration, followed by PC3, itself at a higher concentration than PC4. PC2 and PC3 concentrations were about two to three times higher in the digestive gland than in the gills. The basal level of PC2-4 in the absence of exposure to copper or other metals suggests their role in essential metal homeostasis. At 12 h of copper exposure, a significant induction of PC2 was observed relative to the corresponding controls in the gills, and to a lesser extent in the digestive gland. These results confirm the role of PC as metal chelators involved in first-line detoxification mechanisms in A. cygnea. Beyond 7 days and up to 21 d, in the gills of copper-exposed mussels, PC2 decreased back to the basal level, identical to that of the controls. This decrease suggests that copper detoxification was taken over by other mechanisms in the long term. In the Unionidae, metallothioneins and granules (insoluble intracellular concretions, most often mineral but also organic) are known to play this role.
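The significance statements in this section rest on a Kruskal-Wallis test followed by two-sided Mann-Whitney comparisons (n = 6, α = 0.05), as stated for the article tables. A minimal sketch of that test sequence, with invented placeholder values rather than the thesis data:

import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Placeholder PC2 values (e.g. mg PC2/g fresh weight), six mussels per group.
control = np.array([0.82, 0.91, 0.88, 0.95, 0.85, 0.90])
exposed = np.array([1.30, 1.25, 1.41, 1.18, 1.35, 1.28])   # 12 h Cu2+ exposure

h, p_kw = kruskal(control, exposed)
u, p_mw = mannwhitneyu(control, exposed, alternative="two-sided")
print(f"Kruskal-Wallis p = {p_kw:.4f}; Mann-Whitney p = {p_mw:.4f}")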
Bonneris et al. (2005) showed that the cadmium, zinc and copper concentrations in the granule fraction of the gills were significantly correlated with the environmental concentrations of these metals. Granules are known to be preferential sites for copper storage in unionid gills. About 65 % of the total copper in the gills was found sequestered in granules in Anodonta grandis grandis, where calcium concretions represent 51 % of the gill dry weight. Similar values were found in Anodonta cygnea (Bonneris et al., 2005). No significant variation in MT level was observed during the 21-day copper exposure of the bivalves in the present study. The MT isoform (<10 kDa) eluted from the mussel extract after separation under our HPLC conditions was not induced by copper. Detoxification by other MT isoforms not detected by our HPLC method in A. cygnea cannot be excluded; indeed, substantial MT polymorphism is known in invertebrates (Amiard et al., 2006). Metallothioneins, granules and antioxidant systems have been described as being involved in the detoxification mechanisms of freshwater bivalves. Cossu-Leguille et al. (1997) showed the major role played in the Unionidae by antioxidants, and in particular by reduced glutathione (GSH), in metal detoxification. In the gills of Unio tumidus, a 45 % decrease in GSH was observed in mussels exposed at metal-contaminated sites, compared with controls. This decrease in GSH level in copper-exposed Unionidae was confirmed by Doyotte et al. (1997) under controlled laboratory conditions, indicating metal sequestration directly by the SH group of GSH or its use as a substrate by antioxidant enzymes. Indeed, a parallel involvement of antioxidant enzymes has been described: the activity of these enzymes increases upon exposure to low metal concentrations (Vasseur and Leguille, 2004). GSH also plays a role in PC synthesis. This dual role, in direct metal detoxification and as a precursor for PC synthesis, could explain the observed decrease. Unionidae populations represent the largest part of the total biomass in many aquatic systems. They take an active part in water purification, in sedimentation, and in modifying phytoplankton and detritivorous invertebrate communities (Aldridge, 2000). Consequently, the disappearance of the Unionidae could produce structural and functional disturbances in aquatic ecosystems. The potentially negative effects of competition between native and invasive bivalves are a matter of debate. The invasive freshwater bivalves Corbicula fluminea and Dreissena polymorpha, which do not belong to the Unionidae, colonize freshwater hydrosystems with detrimental effects on other invertebrates. These invasive species do not play the same functional role in ecosystems as the unionids, and their presence is one of the hypotheses explaining the decline of unionid populations. A combination of different factors explains the progressive disappearance of the Unionoidea in favour of invasive species.
The depletion of salmonids, which are potential host fish for unionid larvae, is presented as another possible hypothesis alongside competition by invasive species. Salmonid populations are affected by water pollution, which can indirectly reinforce natural diseases such as parasitism, known for its negative impact on salmonid populations (Voutilainen et al., 2009). The other main reasons for this decline are the physical degradation of streams through the modification of river beds and channels, as well as the degradation of water quality. Indeed, interactions between pollutants are a cause of disturbance even at low concentration (Vighi et al., 2003). In order to protect native Unionidae populations, it is important to determine how and to what extent these factors are involved in their disappearance. The results acquired in this thesis work contribute to understanding the effects of metal pollution on calcium homeostasis in freshwater bivalves. Aquatic metal pollution is likely to be one of the reasons for the generalized decline of Unionoidea populations in European rivers (Frank and Gerstmann, 2007). Our results on the effects of copper on A. anatina support this hypothesis.

Conclusion
Macrobenthos communities are excellent indicators of water quality (Ippolito et al., 2010). The objectives of our work were the study of the toxic effects of copper and of detoxification mechanisms in freshwater bivalve species of the genus Anodonta, belonging to the Unionidae. First, Cu2+ was studied as a potential inhibitor of enzymatic proteins playing a role in calcium transport and biomineralization processes in Anodonta anatina. Second, the study turned to the mechanisms of Cu2+ detoxification by metal-chelating Cys-rich compounds in Anodonta cygnea. In the first part of the present study, the effects of Cu2+ exposure on the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase, and of the cytosolic CA, were assessed in the freshwater bivalve Anodonta anatina. From 4 days onwards, inhibition of the Ca2+-ATPase activity in the gills and kidneys, and of the Na+/K+-ATPase in the gills and the digestive gland, was observed in Anodonta anatina exposed to 0.35 µmol L-1 Cu2+. A total recovery of Ca2+-ATPase activity was observed at 7 days of exposure to 0.35 µmol L-1 Cu2+, indicating the induction of metal detoxification mechanisms. In bivalves exposed to 0.64 µmol L-1 copper, no recovery of enzymatic activity was observed over 15 days. It would be interesting to study the long-term consequences of the disturbance of the osmoregulatory functions of the kidneys in A. anatina exposed to 0.64 µmol L-1 Cu2+. In a second step, we looked for the presence of phytochelatins in bivalves. In the course of our research we identified the phytochelatins PC2, PC3 and PC4. This is the first time that the presence of PC has been demonstrated in animal organisms, and in A. cygnea in particular.
In Anodonta cygnea exposed to 0.35 µmol L-1 Cu2+, PC2 was induced relative to controls at 12 h in the digestive gland and the gills, and the induction persisted after 7 days in the gills. These results suggest a role of PC in essential metal homeostasis and confirm the role of PC as a first-line mechanism for metal detoxification in A. cygnea. Copper has been reported to be a weak inducer of metal-chelating thiol-rich polypeptides (Zenk, 1996). Exposure to a non-essential metal and strong PC inducer such as cadmium could be of interest for studying PC3-4 induction in A. cygnea. Moreover, comparing the effects of exposure to an essential metal such as copper and to a non-essential metal such as cadmium could allow us to determine the role played by PC in essential metal homeostasis and in detoxification mechanisms. Our HPLC analyses of metallothionein (MT) showed no variation in its expression level after copper exposure, indicating that the MT isoform quantified with our method is not induced by this metal. Complementary MT analyses in A. cygnea exposed to copper under the same conditions, using a spectrometric method allowing quantification of all MT isoforms, would be of interest in order to determine whether other MT isoforms are induced in this species. In parallel, it would be necessary to optimize our HPLC protocol so as to separate and measure the MT isoforms induced by copper in A. cygnea. Copper is an inducer of reactive oxygen species (ROS) through the Fenton reaction. The antioxidant role played by PC (Hirata et al., 2005) would be interesting to study for a better understanding of the mechanism of copper detoxification by PC in A. cygnea. Chemical pollution is one cause of aquatic environmental degradation responsible for the decline of freshwater bivalve populations. Toxicity depends not only on the bioavailability of pollutants and their intrinsic toxicity, but also on the efficiency of detoxification systems in eliminating reactive chemical species. This thesis is a significant contribution to the study of metal detoxification systems in the freshwater bivalves A. cygnea and A. anatina. The originality of our work is the demonstration, for the first time, of phytochelatins in animal organisms. We demonstrated their role in protection against Cu2+ toxicity at realistic, environmentally relevant concentrations.

Introduction
1 General introduction
Human activity is associated with the development of industry and agriculture, which have become indispensable. These sectors are responsible for the production and release of many pollutants. The chemical and physical properties and the different types of transport determine pollutant diffusion in all compartments of ecosystems. The aquatic environment is a sink for most pollutants, including metals. Copper belongs to the metals most commonly used because of its physical (particularly its electrical and thermal conductivity) and chemical properties. As pure metal, as alloy, or in the ionic state it is employed in many industrial and agricultural sectors.
Therefore, copper is a metal frequently detected in the aquatic continental environment, where it is present in the water column and accumulates in sediments (INERIS). The chemical properties of copper (including its catalytic activity) make it essential to many biological processes involving such vital functions as respiration or photosynthesis (Tapiero et al., 2003). The properties that make this metal an essential element are at the same time the reasons for its toxicity when in excess. Copper is bioaccumulative and can become a threat to biocenoses. Mechanisms regulating the copper concentration and detoxification are essential to all living organisms. Molluscs represent an important group of macroinvertebrates in aquatic ecosystems. Among this group, bivalves are particularly interesting. Through their intensive filtering activity for respiration and nutrition, bivalves have the capacity to accumulate a variety of environmental contaminants. In freshwater ecosystems they play an important role in matter transfer from the water column to the sediment. Faeces and pseudofaeces of bivalves make the plankton fraction available to detritivores and can change the sediment quality by pollutant sedimentation and concentration. In close contact with their environment, they are widely used for pollution monitoring in aquatic ecosystems. Among these animals, freshwater mussels of the Unionidae family are employed for monitoring the bioaccumulation and toxic effects of metallic and organic pollutants (Winter, 1996; Falfushynska et al., 2009). For these reasons we chose Anodonta cygnea and Anodonta anatina, which belong to the Unionidae, as biological models to study copper toxicity. These invertebrates are autochthonous species in European hydrosystems, but a regression of Unionidae populations, as of many other freshwater bivalves, has been observed in the last decades, the main reason for ecotoxicological studies such as this one. The objective of this thesis was to gain knowledge of the mechanisms by which copper disturbs calcium metabolism in Anodonta anatina. Calcium is a critical element in the functioning of eukaryotic organisms. It controls multiple processes of reproduction, life, and death (Ermak and Davies, 2001). Absorption, biomineralization, and maintenance of intracellular calcium concentrations are effected and controlled by its passage through cell membranes. This takes place by simple diffusion and also by means of transport proteins. In bivalves, calcium is also very important for exoskeleton synthesis by biomineralization. In addition to calcium, biomineralization requires carbonate ions, produced by carbonic anhydrase (CA) catalysis. The effects of Cu2+ on calcium transport were tested in Anodonta anatina by assessment of the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase, and of the cytosolic CA. These enzymes are involved in calcium absorption and in biomineralization processes. The organs studied were the gills, the digestive gland, the kidney, and the mantle, which play an important role in the absorption of calcium and the synthesis of the shell (Coimbra et al., 1993). Enzymatic inhibition was observed in A. anatina exposed to Cu2+ at concentrations ranging from 0.26 to 1.15 µmol L-1.
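For comparison with water-quality data reported in mass units, these molar exposure levels convert via the atomic mass of copper (63.546 g/mol); the 0.35 µmol L-1 level corresponds to roughly 22 µg L-1, consistent with the 22.3 µg L-1 quoted in the discussion. A trivial helper, shown only to make the unit conversion explicit:

# Conversion between molar exposure levels and mass concentrations.
M_CU = 63.546   # g/mol

def umol_to_ug_per_L(c_umol):
    """µmol/L Cu2+ -> µg/L."""
    return c_umol * M_CU

for c in (0.26, 0.35, 0.64, 1.15):
    print(f"{c:.2f} µmol/L = {umol_to_ug_per_L(c):.1f} µg/L")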
At the low concentration of 0.35 µmol L-1, a total recovery following the inhibition indicated the induction of detoxification mechanisms. The second part of this study focused specifically on metal-binding Cys-rich compounds. These compounds are oligopeptides with high Cys content, such as phytochelatins, or proteins, such as metallothioneins. They play a major role in animals and plants as metal chelators, in essential metal homeostasis, and in non-essential metal detoxification. Phytochelatins (PC) are thiol-rich oligopeptides with the general structure (γ-Glu-Cys)n-Gly, synthesised from glutathione by phytochelatin synthase. PCs bind metal ions and form metal complexes that reduce the intracellular free metal ion concentration in cells of plants, fungi, and microalgae. Since the identification of homologous genes encoding a functional phytochelatin synthase in invertebrates (Clemens et al., 2001; Vatamaniuk et al., 2001; Brulle et al., 2008), PCs are strongly suspected to play a wider role in trace metal detoxification in animals. In invertebrates, including bivalves, PC synthase genes were found to be widespread (Clemens and Peršoh, 2009). Therefore invertebrates such as the Unionidae, known to bioaccumulate metals, are likely to express PCs. Induction of PCs and their precursors and of MT was assessed in the gills and the digestive gland of Anodonta cygnea under Cu2+ exposure. This thesis manuscript is divided into eight chapters. After this general introduction, the second chapter contains a literature review on freshwater bivalves and data on copper: its chemical and biological properties, its distribution in aquatic ecosystems, bioaccumulation, toxic effects, and detoxification mechanisms. An extended summary is presented in the third chapter, followed by a conclusion (chapter 4). A published article on the inhibition of Ca2+-ATPase and CA activities by Cu2+ is presented in the fifth chapter, and the sixth chapter is on the inhibition of the Na+/K+-ATPase and H+-ATPase activities; both deal with the perturbation of calcium metabolism by copper.

Origin and use

Copper (symbol Cu, atomic number 29, atomic mass 63.546, two stable isotopes) is a transition metal belonging to group 11 (IB: silver, gold) of the periodic table of elements. Copper is a ductile metal with good electrical and thermal conductivities. It occurs naturally in oceans, lakes, rivers, and soils. The average natural levels of copper in the Earth's crust vary from 40 to 70 mg kg-1. Values in ores range from 80 to 150 mg kg-1 (IFEN, 2011). Copper is found mainly as the sulfides CuS and Cu2S in tetrahedrite (Cu12Sb4S13) and enargite (Cu3AsS4), and as the oxide Cu2O (cuprite). The most important ore is chalcopyrite (CuFeS2). It is also found in malachite (Cu2CO3(OH)2), azurite (Cu3(CO3)2(OH)2), chalcocite (Cu2S), and bornite (Cu5FeS4). Since 1980, an increase in global mining production has been observed, due to the growing demand for copper in fields such as renewable energy. Copper is commonly used because of its physical properties, particularly its electrical and thermal conductivity and its resistance to corrosion. It is mainly used as metal in electronics (42 %), construction (28 %), and vehicles (12 %). Seventy percent of pure copper is used for electrical wire, laminates, and pipes. In alloys, the main families are the brasses (copper-zinc), the bronzes (copper-tin), copper-aluminium alloys, and copper-nickel blends.
Many other alloys are also in use for special purposes, such as copper-nickel-zinc, copper-silicon, copper-lead, copper-silver, copper-gold, and copper-zinc-aluminium-magnesium, as well as copper alloyed with cadmium, tellurium, chromium, or beryllium (Blazy). Salts of copper at different oxidation levels are also used for their chemical and biochemical properties. Copper is encountered in many fields of technical application:
- Electricity: used pure or alloyed in the wires of electrical equipment (coils, generators, transformers, connectors) (Blazy).
- Electronics and communication: pure or alloyed, it plays an important role in communications technology, computers, and mobile phones (Blazy), in printed circuit boards, in solar cells of solar panels, and in the semiconductor industry.
- Building: copper and brass are used for pipes and plumbing fixtures, in heating and guttering, and as copper sheets for roofing.
- Transportation: alloys of copper and nickel are used in boat hulls and in anti-fouling paint to reduce hull fouling by algae. Copper is also found in motors, radiators, connectors, and brakes; an automobile contains about 22.5 kilograms of copper, and trains can contain between 1 and 4 tons (INERIS).
- In various industrial equipment, copper and copper alloys are used in the manufacture of turbine blades, gears and bearings, heat exchangers, tanks and pressure equipment, marine equipment, pressed steel, and smelters.
Some commercial preparations are particularly polluting for the aquatic compartment since they contain copper in ionic form, directly soluble in water. Salts of copper are used in many preparations and in industrial quantities:
- Copper acetate, cupric chloride, and copper sulphate are used for colouring textiles, glass, ceramics, paints, and varnishes, and for tanning (OECD, 1995). Copper acetate, cuprous and cupric oxide, and cuprous and cupric chloride are used as catalysts in petrochemical organic synthesis and for rubber production. They are also used in solder pastes, in electroplating, as polishing agents for optical glass, and for refining copper, silver, and gold.
- In pyrotechnics, cupric oxides and salts are used to colour fireworks, and brass is used in ammunition.
- In general consumer products: pool algicides, decorative objects made of copper and its alloys (buttons, zippers, jewelry), and coins.
- In agriculture, the farming of cattle, pigs, and poultry is a large sector using copper as a dietary supplement (INERIS). Copper acetate, copper sulphate, copper hydroxide, cuprous oxide, and tetracupric oxychloride are used in fungicides, herbicides, insecticides, molluscicides, and as antiseptics. Copper treatment is practiced in vineyards and on fruit trees, with average doses of 1000 to 2500 g Cu/ha/year and 3750 to 5000 g Cu/ha/year respectively (ADEME-SOGREAH, 2007).
- In wood preservation, copper sulphate, cupric oxide, and cupric chloride are used (INERIS, 2005). Copper is also used as a substitute for other substances such as lead, arsenic, and tributyltin.
This enumeration illustrates the widespread use of copper.
Sources of emissions and environmental levels

Natural inputs, in order of decreasing importance, are volcanic eruptions, plant decomposition, forest fires, and marine aerosols (INERIS). Water and soil surrounding sites of agricultural and industrial activities are most heavily exposed to copper and, to a lesser extent, the air contaminated by road and rail traffic. In the European Union, some countries account for a large share of copper emissions; France, for example, represents 43 % of emissions to water, 15 % to soil, and 13 % to air. According to IFEN (2011), anthropogenic inputs of copper originating from industrial activities go mostly into waters and soils, while urban and agricultural activities as well as road traffic emit mostly into the air. In 2007, the emissions of copper into the environment in Europe were estimated at 371,000 kg year-1 to water, 146,000 kg year-1 to air, and 139,000 kg year-1 to soil (INERIS). Air emissions are mainly non-industrial. The road transport sector is a major source, representing 51 % of emissions in France in 2004; these are mainly caused by wear of brake pads containing copper. The railway transport sector represents 32 % of total air emissions of copper, caused by wear of overhead lines (ADEME-SOGREAH, 2007). Copper emissions related to road transport are increasing with the development of this sector. Industrial discharges of copper arise mainly from the synthesis of organic chemicals, the production of nonferrous metals from ore, ferrous metal smelting, and the production of iron and steel. In the atmosphere, copper metal oxidizes slowly to Cu2O, which covers the metal with a protective layer against corrosion. Copper is released into the atmosphere as particulate oxide, sulphate, or carbonate. Releases to the aquatic environment are mainly due to corrosion of equipment made from copper or brass; urban waste is another important source. One of the most significant non-industrial discharges of copper is the treatment of urban wastewater. This sector represents 28 % (15,617 kg year-1) of the total French emissions of copper and its derivatives to the aquatic environment (INERIS). Copper is one of the compounds always detected at the input and output of secondary sewage treatment plants (but not always at plants with tertiary treatment). Copper is among the most concentrated compounds in sewage treatment plant inflow, with concentrations generally greater than 10 µg L-1; at the outlet it is usually found at concentrations between 1 and 10 µg L-1. In Europe in 2007, the main industrial emitters were the United Kingdom, France, Germany, and Romania, representing respectively 35 % (129,000 kg), 15 % (55,700 kg), 11 % (39,800 kg), and 7 % (24,500 kg) of the European industrial emissions to water (INERIS). In the European Union, the most significant sectors are thermal power plants and other combustion plants.
In France, another important sector is the production of nonferrous metals from ore, from concentrates, and from secondary materials. The speciation and behaviour of copper in the environment directly influence its bioavailability. In aquatic environments, the fate of copper is influenced by many processes and factors, such as chelation by organic ligands (especially at NH2 and SH groups, and to a lesser extent at OH groups). Adsorption can also occur on metal oxides, clays, or particulate organic matter; bioaccumulation and the presence of competing cations (Ca2+, Fe2+, Mg2+) or of anions (OH-, S2-, PO43-, CO32-) also play a role in copper behaviour (INERIS). The greater portion of copper released into water is in particulate form and tends to settle, to precipitate, or to adsorb onto organic matter, hydrous iron and manganese oxides, or clay particles (ATSDR, 1990; INERIS, 2005). In hard water (carbonate concentration up to 1 mg L-1), the largest fraction of copper is precipitated as insoluble compounds. Cuprous oxide Cu2O is insoluble in water. Except in the presence of a stabilizing ligand such as sulfide, cyanide, or fluoride, the oxidation state Cu(I) is easily oxidized to Cu(II), whose compounds such as CuSO4, Cu(OH)2, and CuCl2 are more soluble in water. The Cu2+ ion forms many stable complexes with inorganic ligands such as chloride or ammonium, or with organic ligands. When copper enters the aquatic environment, the chemical equilibrium of oxidation states and of soluble and insoluble species is usually reached within 24 hours (INERIS, 2005). In aquatic environments, copper is mainly adsorbed onto particles, and suspended solids are often heavily loaded. In Europe, according to the Forum of the European Geological Surveys (FOREGS, 2010), copper concentrations up to 3 µg L-1 are found in continental water. In regions with intensive human activities (agricultural, urban), copper concentrations of 40 µg L-1 and even 100 µg L-1 can be found seasonally (Neal et al.; Falfushynska et al., 2009). Trace metals such as copper are mainly transported to the marine environment by rivers through estuaries. The magnitude of metal input to the marine environment depends on the levels in the river waters and on the physico-chemical processes that take place in the estuaries (Waeles et al.). In sea water, copper is found at concentrations ranging from 0.1 to 4 µg L-1 (Waeles et al.; Levet). Depositions of copper into soils come from different sources. With respect to industrial activities in Europe, in 2007 the major industrial emitters to soil were the United Kingdom, France, and Germany with respectively 74 tons, 60 tons, and 5 tons. The most significant activities are landfilling or recycling of non-hazardous waste (45 %), abattoirs (21 %), and sewage sludge from urban waste water treatment plants (12 %) (E-PRTR, 2010).
The main agricultural sources identified for the release of copper (ADEME-SOGREAH, 2007) are animal wastes (faeces and manure), sewage sludge from water treatment plants, compost, mineral fertilizers, and lime and magnesium soil amendments. Animal waste represents an important input into soils (53 %). This reflects the fact that the feed in cattle, pig, and poultry production is supplemented with copper for growth promotion and disease prevention. As copper is poorly absorbed, it is added in great quantities to the feed (sometimes at levels 30 times in excess of the needs of the animal), and the surplus is then found in the droppings. The spreading of animal manure is thus an important input to agricultural soils (Jondreville). Thirty-four percent of the copper found in agricultural soils comes from phytosanitary treatment (ADEME-SOGREAH, 2007). Furthermore, mineral fertilizers and lime and magnesium soil amendments contribute to copper inputs to agricultural soils. The input of copper to soil from urban activities occurs mainly through spreading and composting of sludge from wastewater treatment plants. For example, the average concentration of copper in sewage sludge of the Rhone Mediterranean basin was about 350 mg kg-1 in 2004 (Agence de l'eau Rhône Méditerranée Corse, 2004). In soils, copper occurs in the oxidation states I or II in the form of sulfides, sulphates, carbonates, and oxides. The behaviour of copper in soils depends on many factors such as soil pH, redox potential, cation exchange capacity, type and distribution of organic matter, presence of oxides, rate of decomposition of the organic matter, proportions of clay, silt, and sand, climate, and type of vegetation (INERIS). Acidification, for instance, causes a decrease of the metal bound to solids and thus its release from solid matter (Hlavackova). Copper binds preferentially to organic matter (which holds between 25 and 50 % of soil copper), to iron oxides, manganese oxides, and carbonates, and to clays. This characteristic keeps the majority of copper strongly adsorbed in the upper few centimeters of soil, especially on organic matter. Copper does not migrate deeply, except under special circumstances of drainage or in highly acidic environments (INERIS). In the European Union, in 2010 the most polluted solid matrices were the sediments of alluvial plains with 25 mg kg-1, the sediments of rivers with 22 mg kg-1, and soil with 17 mg kg-1. The maximum concentration determined in a river sediment was 877 mg kg-1, more than 40 times the average (FOREGS, 2010). In France, concentrations of copper in the surface layers of soil can reach 46 mg kg-1. The highest copper levels in surface and deep soil layers are found mainly in the South of France and the west of Bretagne.

Copper metabolism and its physiological role in animals (generalities)

Copper is an essential trace element in microorganisms, plants, and animals.
It plays a fundamental role in the active centre of enzymes involved in connective tissue formation (lysyl oxidase), in respiration (cytochrome c oxidase) (López de Romana, 2011), in photosynthesis (plastocyanin) (Grotz), and in controlling the level of oxygen radicals (Cu/Zn-superoxide dismutase) (Table 1). It also allows the transport of oxygen in the haemolymph of many invertebrates. Copper is used in several cell compartments, and the intracellular distribution of copper is regulated in response to metabolic demands and changes in the cell environment (Tapiero et al., 2003). The same properties that make transition metal ions indispensable for life at low exposure levels are also the ones responsible for toxicity when present in excess. Copper metabolism must be tightly regulated, ensuring a sufficient supply without toxic accumulation. Copper homeostasis involves a balance between absorption, distribution, use, storage, and detoxification.

Table 1: Functions of the main cuproproteins (Tapiero et al., 2003): Cu/Zn-superoxide dismutase, dismutation of superoxide radicals; cytochrome c oxidase, respiration; plastocyanin, photosynthesis; lysyl oxidase, connective tissue formation.

In the mammalian intestine, where low pH and the presence of ligands promote its solubility, copper is taken up by the enterocytes. Copper is absorbed by saturable active transport through membrane transporters. In the cytosol of enterocytes, copper binds to metallothioneins, which play a role in copper sequestration and transport. Once absorbed, copper is transported to the liver, predominantly by albumin, but also by transcuprein and free amino acids including histidine. In the liver, depending on the status of the animal, copper is bound to metallothioneins for storage, incorporated into ceruloplasmin (Cousins) and then transported to other organs, or secreted into the intestine via the bile. In monogastric animals, copper homeostasis is largely maintained by increased excretion, which avoids excessive accumulation in the liver (fig. 1). Biliary excretion is the major route; losses via urine, skin, or cellular desquamation in the gut are minor (Arredondo). In aquatic organisms, especially bivalves, copper is ingested, solubilized in the gastrointestinal tract, and absorbed by the intestinal epithelium (whether associated with the particulate phase or in solution). Copper in the dissolved phase can also be directly absorbed by the tissues (gills, mantle, etc.) in contact with the surrounding water (Viarengo and Nott, 1993).

Fig. 1: Copper metabolism in mammals (Jondreville).

The entry of copper into the cell occurs mainly through copper-transporting membrane channels. In this process, the Cu2+ ion is first reduced to Cu+ by membrane-associated reductases to facilitate its entry. Once inside the cytoplasm, it is likely that reduced glutathione (GSH) and metallothioneins (MT) bind the copper, serving as intracellular stores. Copper appears to be interchanged between MT, with which it forms a stable complex, and its GSH-bound state. As copper bound to GSH turns over more rapidly than copper bound to MT, it becomes available for other uses and for transport.
The delivery of copper ions to their specific pathways within the cell is mediated by metallochaperones, which protect the metal from intracellular scavengers and deliver it directly to the respective target proteins and cellular compartments (Kozlowski). The copper chaperone for Cu/Zn superoxide dismutase guides copper to superoxide dismutase (Cu/Zn SOD), which participates in the defence against oxidative stress within the cytoplasm. The cytochrome c oxidase copper chaperone is another protein that channels copper to cytochrome c oxidase in the inner mitochondrial membrane, which plays a critical role in the electron transport chain for cellular respiration. Antioxidant protein 1 presents copper to the P-type ATPases ATP7A and ATP7B (Lutsenko). These transporters play an essential role in copper homeostasis. They perform distinct functions depending on their cellular localization: when at the Golgi apparatus, they enable the loading of copper onto ceruloplasmin and the incorporation of copper into enzymes which require it as cofactor; when localized in vesicular compartments, they allow the removal of copper from the cell and thus participate in copper homeostasis (Arredondo; Hejl).

Toxicology of copper (general)

In living organisms, the cupric ion (Cu2+) is fairly soluble whereas cuprous (Cu+) solubility is in the sub-micromolar range. Cu is present mainly as Cu2+ since, in the presence of oxygen or other electron acceptors, Cu+ is readily oxidized. Strong reductants such as ascorbate or reduced glutathione can reduce Cu2+ back to Cu+ (Arredondo). As in the case of iron, free copper ions can catalyse the production of hydroxyl radicals (HO•) through the Haber-Weiss and Fenton reactions. Copper toxicity also results from nonspecific binding, which can inactivate important regulatory enzymes by displacing other essential metal ions from catalytic sites, by binding to catalytic Cys groups, or by allosterically altering the functional conformation of proteins (Mason). Thus, the mechanisms of toxicity are associated both with oxidative stress and with direct interactions with cellular compounds. Free copper ions have a high affinity for sulfur-, nitrogen-, and oxygen-containing functional groups in biological molecules, which they can inactivate and damage. The cytotoxicity observed in copper poisoning results from inhibition of the pyruvate oxidase system through competition for the protein's sulfhydryl groups. Glucose-6-phosphate dehydrogenase and GR are also inhibited (competitive inhibition) in proportion to the concentration of intracellular copper (Barceloux). The same applies to some transporters such as the ATPases, which are also inhibited by copper, causing disruption of the homeostasis of the respective transported species.
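The copper-catalysed radical chemistry invoked above can be sketched as follows; these are the standard Fenton and Haber-Weiss reactions, added here for clarity, not results of this thesis:

\[ \mathrm{Cu^{+} + H_2O_2 \rightarrow Cu^{2+} + OH^{-} + HO^{\bullet}} \qquad \text{(Fenton-type reaction)} \]
\[ \mathrm{Cu^{2+} + O_2^{\bullet-} \rightarrow Cu^{+} + O_2} \qquad \text{(re-reduction by superoxide)} \]

The net copper-catalysed Haber-Weiss cycle is therefore:

\[ \mathrm{O_2^{\bullet-} + H_2O_2 \rightarrow O_2 + OH^{-} + HO^{\bullet}} \]

Because the copper ion is regenerated at each turn of the cycle, even trace amounts of free copper can sustain continuous hydroxyl radical production.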
Toxic effects of copper can also result from its affinity for DNA (Agarwal; Bremner; Sagripanti). Another mechanism of toxicity of excessive copper concentrations is the modification of the zinc finger structures of transcription factors, which can then no longer bind to DNA (Pena). Copper in excess can also promote apoptosis (Kozlowski), while copper deficiency may cause many diseases through inhibition of cuproproteins and copper-dependent reactions. In mammals, copper homeostasis is primordial. Wilson's and Menkes diseases are caused by genetic mutations in copper transporter proteins. The former results from accumulation of copper in several organs and tissues; there are different clinical presentations, the most common being liver disease and anaemia (Hejl). The accumulation of copper arises from a defect in the P-type Cu-ATPase ATP7B (called the Wilson protein), a specific transporter of copper. The gene which encodes this protein is located on autosome number 13 in humans. The protein also allows the incorporation of copper into cuproproteins and the excretion of copper into the bile. Accumulation of copper leads to liver cirrhosis and neurodegeneration. Menkes disease is a neurodegenerative disease: after ingestion, copper accumulates in the intestine, and its absorption by other organs and tissues (blood, liver, brain) is defective. Menkes syndrome is caused by a mutation in the ATP7A gene located on chromosome X, which encodes the Cu-transporting P-type ATPase ATP7A. This membrane protein was the first specific transporter of copper found in eukaryotes. Copper (Cu), besides cadmium, is one of the major metals causing environmental problems in freshwater ecosystems. Since it is highly toxic to fish, it is also used as a piscicide (Manzl).

For Anodonta cygnea, common names are: swan mussel (English), anodonte des cygnes (French), Große Teichmuschel (German); for Anodonta anatina: duck mussel (English), mulette des canards (French), Gemeine Teichmuschel (German).

Description: In both species, the shell is nacreous, and there are no teeth on the ligament. Anodonta anatina (fig. 2A): shell slightly elongated, triangular because of the presence of a large rear wing or crest; the top and bottom edges form an angle of greater or lesser extent; the upper edge rises in an almost straight line towards the back to the highest point of the crest, then descends in a more concave line towards the posterior end; the anterior end is broadly rounded and rather short, the posterior longer, ending in an obtuse rostrum; the ligament is rather long and prominent; the top is a little bulging, covered with fine wrinkles, a little curved, obliquely cutting the growth lines; the shell is rather thick, solid, shiny, greenish gray, or brownish.
(Length: 14 cm, height: 10.5 cm, thickness: 6 cm) (Vrignaud). Anodonta cygnea (fig. 2B): shell oval, more or less elongated, with the top and bottom edges roughly parallel or convex; the upper edge is straighter than the lower, the posterior area being much longer; the ligament is rather long and prominent; the top is a little bulging, covered with fine wrinkles parallel to the growth lines; the shell is thin, not strong, fairly shiny, greenish-yellow. (Length: 20 cm, height: 10 cm, thickness: 6 cm) (Vrignaud).

Fig. 2: A: Anodonta anatina, B: Anodonta cygnea (www.biopix.eu)

Geographic distribution

Anodonta cygnea and Anodonta anatina are autochthonous bivalves, widely distributed below the polar circle in European continental hydrosystems. The two mussels have a similar distribution, though wider for Anodonta cygnea. The geographical distribution (fig. 3) of Anodonta anatina extends from the British Isles to the eastern limit of Europe and from Sweden to Spain. Anodonta cygnea is present from the British Isles to Siberia and from Sweden to North Africa (Başçınar; www.discoverlife.org).

Life cycle

Among the unionids, individuals are usually gonochoric. No age limit is known for gametogenesis. The spermatozoids are released into the water through the exhalant siphon and are filtered by mussels located downstream. After fertilization, the eggs are incubated in the marsupium, a modification of the gills of the mussel, where the larvae or glochidia hatch (fig. 4). They are produced in large quantities, from 50,000 to 2 million (Başçınar); some of them attach themselves with hooks at the end of their valves to the gills of a host fish and then live as encysted parasites. After a few weeks, the cyst bursts and releases a small bivalve similar in its anatomy to an adult mussel. This juvenile grows buried in the substratum before becoming adult and gradually rising to the surface. The expulsion of glochidia begins at the end of winter and may continue until September (Mouthon). The Unionidae produce only one generation per year. It is estimated that mussels of the genus Anodonta can live very long, about fifteen to twenty years (Taskinen).

Diet

According to the conventional nomenclature, freshwater molluscs are divided into vegetarians, detritivores, and, more rarely, omnivores; there is no true carnivore. Most bivalves have a mixed diet dominated by detritivory and herbivory throughout the year, as is the case for the Unionidae.
Except during their larval stage, when the glochidium parasitizes a host fish and feeds on its plasma (Uthaiwan), Anodonta mussels are filter feeders, although their diet is not precisely known. They feed on seston (phytoplankton, filamentous algae, detritus, protists, epipelic organisms) suspended in the water. Water enters the mantle cavity through the inhalant siphon, passes through the gills, and is exhaled through the exhalant siphon. Food particles in the water are intercepted by rows of cilia or cirri which extend between the gill filaments. The particles are either transferred and incorporated into mucus strings or carried in a concentrated suspension as they are transported to the labial palps. Particles selected for ingestion are transferred to the mouth, while rejected particles are bound up into a mucus ball and expelled from the inhalant siphon or the pedal gap (gap between the valves) as pseudofaeces.

Anatomy and ecology

Anatomy

The mantle cavity is divided by the gills into a ventral inhalant chamber and a dorsal exhalant chamber. Water passes through the inhalant siphon to enter the branchial chamber, flows across the gills, and then into the exhalant suprabranchial chamber. As the water crosses the gills, food particles are filtered from it and oxygen is removed. From the suprabranchial chamber the water flows out of the exhalant siphon. The gills consist of a sheet of coalescent filaments folded into a "W" shape and attached to the dorsal wall of the mantle. This sheet divides the mantle cavity into the ventral chamber and the dorsal suprabranchial chamber. To get from the ventral chamber to the dorsal one, water must pass through pores in the gills.

Ecology

Anodonta cygnea and Anodonta anatina are species that frequent ponds, oxbow lakes, slow rivers, and canals with weak current, which often have a high trophic level. They prefer bottoms with fine granulometry (silt, sand, gravel), often with accumulations of organic matter. These species tolerate a wide range of average water temperatures, as evidenced by their presence from Spain to Sweden. They are found in areas of the bed away from strong, erosive currents, generally at depths between 0.2 and 2.5 m. They are burrowing organisms, living vertically, partially embedded in the substrate. These bivalves are generally sedentary: in Spain, a displacement of about ten meters per year was found for marked A. cygnea and A. anatina mussels (Aldridge, 2000). However, flooding can cause passive dispersal from upstream to downstream. The most mobile stage is the larval stage, which is capable of travelling long distances via the host fish.

Threats

Over the past 50 years, a general decline of freshwater mussels has been observed. Several factors seem to combine to cause the gradual disappearance of these species: among the potential causes, human activities leading to chemical pollution and habitat change have been suggested (Aldridge, 2000; Kádár). Given the lack of ecological and ecotoxicological knowledge for these bivalves (including their tolerance to pollutants), the following threats are suspected:
- The virtual disappearance or severe depletion of potential host fish.
- The physical degradation of streams and the reworking of the beds of rivers and canals (Aldridge, 2000): rectification, dams, and channel dredging, but also the impact of intensive agriculture on the quality and quantity of sediments and their transit.
- Degradation of water quality, including eutrophication, as well as pollutants from human activity.
- The fragmentation of populations, one of the main causes of biodiversity loss.
- Introduced species: the potential effects of competition from bivalves such as Corbicula fluminea, Sinanodonta woodiana, and Dreissena polymorpha are poorly documented. Zebra mussels appear to have a negative impact by binding to the valves of certain freshwater mussels, hampering their opening. S. woodiana also seems to have a negative impact on native unionid populations (Adam).

Ecotoxicological interest

Molluscs are common, highly visible, and ecologically and commercially important on a global scale, as food and as non-food resources (Rittschof). In some aquatic ecosystems (lakes, slow streams) molluscs can represent up to 80 % of the total biomass of the benthic macroinvertebrates, so their impact can become major. Populations of bivalves filter large amounts of water (Unionidae: 300 mL per individual per hour) and take an active part in sedimentation and water purification. Faeces and pseudofaeces sometimes concentrate a large fraction of unused planktonic microorganisms, making them accessible to detritivorous invertebrates such as oligochaetes and many dipterans. But they also change the quality of the sediment by concentrating and excreting many substances (metals, pesticides, radionuclides). Because of their sedentary lifestyle, their filtration capacity, and their wide distribution, molluscs and bivalves are excellent sentinels for monitoring the fraction of bioavailable pollutants in their environment (Hayer). In close contact with water, suspended particles, and sediment, they are widely used for monitoring the bioaccumulation and toxic effects of metallic and organic pollutants in aquatic ecosystems (Viarengo; Rittschof). Several unionid mussels, especially from the genus Anodonta, have thus been used as biomonitor organisms in toxicity assessment of numerous compounds released into continental waters. The anatomy and physiology of these animals have long been studied, allowing the study not only of the toxic effects of compounds but also of the mechanisms of detoxification. Anodonta cygnea and Anodonta anatina are biological models widely used in ecotoxicology (Falfushynska et al., 2009). They are present in large numbers, sedentary, easy to collect, and easy to acclimatize in aquaria.

2.3 Effects of copper exposure and detoxification mechanisms

Copper exposure effects

Copper bioaccumulation

Trace elements are known to be highly accumulated by aquatic molluscs.
Bivalves are in close contact with sediments, which constitute a major environmental sink for metals; they have an important filtering activity to satisfy respiration and nutrition, and their tolerance mechanisms involve metal sequestration rather than metal exclusion or elimination. They provide accurate and integrated information about the environmental impact and bioavailability of chemicals. They are therefore extensively applied in marine environments, using mussels and oysters, but are also employed in freshwater systems, using other bivalve species such as Anodonta sp., Dreissena polymorpha, Elliptio complanata, and Asiatic clams. Among freshwater organisms, unionid molluscs are widely recognised for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues (Winter, 1996; Bilos; Kádár; Falfushynska et al., 2009). The widespread recent decline in the species diversity and population density of freshwater mussels (Lydeard et al., 2004) may be partly related to chronic, low-level exposure to toxic metals (Frank and Gerstmann, 2007). Freshwater mussels are exposed to metals that are dissolved in water, associated with suspended particles, and deposited in bottom sediments. Thus, freshwater mussels can bioaccumulate certain metals to concentrations that greatly exceed those dissolved in water. In adult mussels, the most common site of metal uptake is the gills, followed by the digestive gland, the mantle, and the kidneys (Pagliarani; Bonneris et al., 2005). Bioaccumulation of metals varies strongly according to water chemistry conditions, pH and water hardness being important parameters. Low aqueous concentrations of calcium probably enhance the bioavailability and toxicity of metal cations, because the permeability of membranes is inversely related to the aqueous calcium concentration; calcium ions apparently compete with other metal cations for binding sites on the gill surface, decreasing the direct uptake of other cationic metals. The pH influences both the chemistry of metals and the macromolecules of surface structures. A modification of membrane permeability causes an alteration in metal diffusion. Additionally, changes in membrane potential modify the transport of polar metal species (Winter, 1996). In bivalves, the biological barriers are the gill epithelium, the wall of the digestive tract, and the shell (which is often reported as a site of bioaccumulation). Metals or metalloids in solution are more easily absorbed by the surfaces in direct contact with the external environment, while those associated with the particulate phase are rather ingested and internalized after solubilization in the digestive tract, or transferred by endocytosis to then undergo lysosomal digestion (Wang and Rainbow). Once past the first barrier, the mechanisms of transfer of metals or metalloids into the cell involve intracellular diffusion (passive or facilitated), active transport, and endocytosis (phagocytosis and pinocytosis).
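The extent to which tissue concentrations exceed dissolved concentrations is conventionally expressed as a bioconcentration factor; the following is the standard definition, added here for clarity rather than taken from this thesis:

\[ \mathrm{BCF} = \frac{C_{\mathrm{tissue}}}{C_{\mathrm{water}}} \]

with C_tissue in mg kg-1 and C_water in mg L-1, so that the BCF is expressed in L kg-1; values much greater than 1 indicate net accumulation from the dissolved phase.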
The oxygen concentration of the water and the density of microalgae are parameters that directly influence the ventilatory activity of the bivalve and thus its exposure to metals in dissolved or particulate form (Tran et al.). Copper and metal bioaccumulation can differ between individuals of the same species because of differences in size, or between different mussel species because of differences in physiology (breathing, feeding) (Winter, 1996; Bilos; Gundacker, 2000; Hédouin; Falfushynska et al., 2009). In bivalves, the determination of trace metal concentrations in whole individuals is of little interest, since determination of bioconcentration factors in various tissues indicates that the principal accumulating organs are the gills, the digestive gland, the kidney, and the mantle, the shell also acting as a storage matrix (Viarengo and Nott, 1993; Roméo; Das; Bonneris et al., 2005). The intracellular sequestration of metals is based on a sequence of cellular events involving a cascade of different ligands with increasing metal-binding strengths. Anodonta sp. and other molluscs accumulate metals to high levels in their tissues (Falfushynska et al., 2009). Metal tolerance in such accumulator organisms involves sequestration of metals in non-toxic forms. Among the best-studied intracellular sites involved in the sequestration of essential and non-essential metals in aquatic invertebrates are lysosomes, granules, and soluble Cys-rich ligands such as metal-binding peptides and proteins. Unionids also lay down calcium microspherule concretions, particularly in the connective tissue of the gills, in the mantle, and in the digestive gland (Pynnönen; Moura et al., 2000; Lopes-Lima). Dissimilar mechanisms for copper and metal uptake, storage, mobilisation, and excretion, performed by different cell types in different organs, explain the pattern of metal accumulation and tissue distribution (Soto).

Oxidative stress

Reactive oxygen species (ROS) comprise a variety of unstable oxygen derivatives, some of which are free radicals (Viarengo; Géret et al., 2002a; Sheehan).
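For orientation, the main ROS arise from successive one-electron reductions of molecular oxygen; this is standard chemistry, added here for clarity:

\[
\mathrm{O_2 \xrightarrow{+e^-} O_2^{\bullet-} \xrightarrow{+e^-,\,2H^+} H_2O_2 \xrightarrow{+e^-} HO^{\bullet} + OH^- \xrightarrow{+e^-,\,H^+} H_2O}
\]

The hydroxyl radical HO•, the most reactive of these species, is finally reduced to water; the superoxide anion O2•- and hydrogen peroxide H2O2 are the substrates of the Fenton and Haber-Weiss reactions given earlier.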
An excess of ROS exposes the cell to a risk of oxidative stress. Free radicals are naturally produced and may also have a physiological role. A major role of ROS is the regulation of molecules containing sulfhydryl groups. This type of regulation can in particular affect molecules implicated in signal transduction mechanisms, such as protein kinase C (Lander; Dalton). Significant sources of free oxygen radicals are redox cycles and oxidations catalysed by cytochrome P450 monooxygenases implicated in detoxification. Like many exogenous compounds that can stimulate the production of ROS, copper is known as a hydroxyl radical inducer (Company). Metal deficiency can also lead to oxidative stress: copper enters into the molecular composition of antioxidant enzymes such as Cu/Zn-SOD, so copper deficiency disrupts the dismutation of the superoxide anion. ROS can potentially react with every cellular component. They affect in particular the thiol groups of proteins, leading to the formation of intra- or inter-molecular disulphide bridges. The most widely studied action of ROS is lipid peroxidation, mainly carried out by HO•. After rearrangement and addition of oxygen, ROO• and RO• radicals are generated. Oxidation of phospholipids modifies the membranes, with loss of permeability and membrane potential, leading to inactivation of membrane receptors and enzymes. In general, various compounds are produced during this reaction, such as malondialdehyde (MDA) and 4-hydroxynonenal, both able to bind to proteins and to form DNA adducts (Blair). Thus, lipid peroxidation may trigger endogenous DNA damage, as nuclear DNA and mitochondrial DNA are targets of ROS. Four main classes of damage can be noted: single and double strand breaks, modified bases, DNA-DNA and DNA-protein bridges, and abasic sites. Proteins are essential structural and functional cellular components which may undergo oxidative modification, inducing their aggregation or their digestion by proteases (Davies). Oxidation of amino acids, particularly of sulphur-containing and aromatic amino acids, results in structural modifications of proteins. HO• is the main initiator of the oxidation of polypeptide chains, producing free radicals. In the absence of oxygen, two radicals can react together to form intra- or inter-chain cross-links. In the presence of oxygen, an addition reaction may take place to yield a peroxyl radical. A series of reactions follows, leading to the formation of alkoxyl radicals, precursors to the fragmentation of polypeptide chains. The oxidation of glucose can occur in the presence of metal ions, leading to the release of aldehydes and hydrogen peroxide. This leads to glycation of proteins by attachment of aldehydes, often entailing the cleavage of protein chains (Wolff). Glycation of proteins promotes their oxidizability. The structural modifications induce functional changes, in particular of cellular metabolism.
The oxidation by ROS can disturb ionic transport, enzymatic activities, and calcium homeostasis. Antioxidant defense systems can directly inhibit the production of ROS, limit their propagation, or destroy them. These antioxidant systems may act by reducing ROS species or by trapping them to form stable compounds. Two categories of antioxidant systems are generally defined: antioxidant enzymes and scavenger molecules. In the antioxidant enzyme system formed by the glutathione peroxidases (GPxs) and glutathione reductase (GR), the GPxs are able to reduce H2O2 and other peroxides; their enzymatic activities are coupled with the oxidation of glutathione (GSH). Oxidized glutathione (GSSG) is then reduced by GR using NADPH. These enzymes are localised in the cytoplasm and the mitochondrial matrix (Lushchak). Antioxidant molecules, or free radical scavengers, such as Cys thiol-rich compounds, principally GSH and the metallothioneins, will be discussed later. Their antioxidant capacity is conferred by the thiol group of the Cys residue. Vitamin E is also known for its powerful anti-radical activity operating in lipid membranes. Other scavenger molecules include vitamin C, carotenoids, and uric acid.

Calcium transport and perturbation of biomineralization

Calcium transport perturbation

Normally, intracellular Ca homeostasis is maintained by a balance between extrusion and compartmentation systems (Mooren and Kinne, 1998). Alteration of these processes during cell injury can result in inhibition of Ca2+ extrusion or intracellular compartmentation mechanisms, as well as in enhanced Ca2+ influx and release of Ca2+ from intracellular stores such as the endoplasmic reticulum and mitochondria (Marchi). This can lead to uncontrolled rises in cytosolic Ca2+ concentration. The biological functions of Ca2+ are versatile; they control multiple processes of birth, life, and death. Location, duration, and amplitude of calcium signals form a complex code that controls vital physiological processes. Any disturbance in these signals would change the "Ca2+ code" and modify multiple life processes, which is usually associated with loss of cell viability (Ermak and Davies, 2001). Filter-feeding bivalves have a high bioaccumulation potential for trace metals, which at higher concentrations can cause serious metabolic, physiological, and structural impairments. The continuous decline of freshwater mussels during the past decades could be partly attributed to calcium homeostasis perturbation by trace metals (Frank and Gerstmann, 2007). Slight deregulations of Ca2+ homeostasis, like those deriving from low-dose trace metal contamination, can affect the cellular ability to maintain and modulate Ca2+ signaling. Copper is known to increase the intracellular Ca2+ concentration by releasing the calcium stock of the endoplasmic reticulum, leading to lysosomal membrane destabilisation in mussels (Marchi) and to apoptosis, or to cytotoxicity and necrosis (Ermak and Davies, 2001). Cell Ca2+ extrusion systems, which maintain physiological calcium concentrations, thus appear to be essential for cell viability.
The absorption and release of calcium to and from the environment, biomineralization, and the maintenance of intracellular calcium concentrations involve its passage through cell membranes. Ca2+ passively enters at the apical membrane down its concentration gradient, through carrier-mediated facilitated diffusion (Na+/Ca2+ antiporters) and via simple diffusion through Ca2+ channels. Intracellular Ca2+ gradients are maintained by membrane Ca2+-ATPases across the plasma membrane (PMCA) and across the membranes of intracellular stores. In Mytilus edulis, copper is known to inhibit PMCA: thiol groups of PMCA proteins are directly oxidized, or they bind copper because of their high affinity for the metal, and hydroxyl radicals HO• induced by copper also indirectly cause PMCA protein impairment (Viarengo; Viarengo et al., 1996; Burlando et al., 2004). Besides the Ca2+-ATPases, another important system for the maintenance of calcium homeostasis is the plasma membrane Na+/Ca2+ antiporter, the activity of which is based upon the transmembrane Na+ electrochemical gradient. Inhibition of the Na+/K+-ATPase modifies the Na+ gradient and can therefore also affect the activity of the Na+/Ca2+ antiporter. Na+/K+-ATPase inhibition was observed in Mytilus edulis and Perna viridis after Ag, Cr, and copper exposure (Viarengo et al., 1996; Vijayavel). Owing to their physiological functions in respiration, nutrient absorption, and excretion, the gills, the digestive gland, and the kidneys are preferential sites of metal uptake and bioaccumulation in bivalves. These organs play important roles in iono-regulation, in particular in calcium homeostasis in freshwater bivalves (Coimbra et al., 1993). Effects on the Ca2+ transport systems in these tissues lead to perturbation of calcium homeostasis in the whole organism.

Perturbation of biomineralization

Biomineralization is a complex process; in bivalves it principally enables the formation, growth, and repair of the shell. In addition to the shell, mineral concretions are produced which also play a role in detoxification mechanisms (Viarengo and Nott, 1993). In freshwater molluscs such as Anodonta, several internal tissues bathed by haemolymph also produce calcified structures, namely microspherules (Lopes-Lima). Microspherules are usually present between the two epithelia of the mantle, as CaCO3 and/or Ca3(PO4)2 deposits. These transitory calcium reserves are devoted to shell growth or to the glochidia larvae. The mantle of lamellibranchs is a leaflet that covers the internal surface of the shell and surrounds the body of the animal. It consists of two epithelia: an internal one in contact with the external medium, and an external one, the outer mantle epithelium (OME), facing the shell. The mantle is the tissue responsible for shell synthesis (Coimbra et al., 1993). Shell growth and the maintenance of its mineral content are thus in a dynamic equilibrium involving a continuous exchange of Ca2+ between the shell and the OME through the extrapalleal fluid dividing the two.
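The inorganic chemistry underlying this exchange can be sketched with the standard carbonate equilibria (added here for clarity; the CA-catalysed step is discussed below):

\[ \mathrm{CO_2 + H_2O \rightleftharpoons HCO_3^{-} + H^{+}} \qquad \text{(catalysed by carbonic anhydrase)} \]
\[ \mathrm{Ca^{2+} + HCO_3^{-} \rightleftharpoons CaCO_3 + H^{+}} \]

Because protons appear on the product side of the calcification step, pumping H+ into or out of the extrapalleal fluid shifts these equilibria towards dissolution or deposition of shell CaCO3, which is consistent with the roles attributed below to the H+-ATPases and to CA of the outer mantle epithelium.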
The dynamic calcium exchange may result in a net accumulation of Ca2+ in the shell or in its re-absorption, depending on the developmental stage and the metabolic state of the animal. The biomineralization mechanism depends on pH changes and trends (Lopes-Lima). H+-ATPases in cells of the OME regulate the deposition of calcium, or its reuptake into the haemolymph, by pH control. The H+-ATPases probably play a role equivalent to those of the two bone cell lines, the osteoblasts and the osteoclasts (Machado). H+-ATPases of the OME induce calcium mobilisation from the shell mainly by pumping protons into the extrapalleal fluid. H+-ATPase inhibition has been observed under bis(tributyltin) oxide exposure (Machado et al., 1989). CA is a ubiquitous enzyme; in the mantle of bivalves it controls the bicarbonate balance between haemolymph and extrapalleal fluid for CaCO3 formation. CA is a bioindicator of biomineralization already studied in the ecotoxicology of metals (Vitale et al., 1999; Rousseau). Thus, any alteration of the ion transport mechanisms across the mantle of bivalves may result in a direct inhibition of biomineralization. In freshwater bivalves, trace metals, and in particular copper, seem to interfere with shell calcification conditions by direct action on the proton pump and on CA. Perturbation of biomineralization could cause subsequent shell thickening (Machado et al., 1989) or thinning.

Detoxification mechanisms

Cysteine thiol-rich compounds

Glutathione

The tripeptide glutathione (GSH, γ-glutamyl-cysteinyl-glycine) is a non-protein thiol. The GSH/glutathione disulfide (GSSG) system is the most abundant redox system in eukaryotic cells, playing a fundamental role in cell homeostasis and metal detoxification, and being involved in the signalling processes associated with programmed cell death, termed apoptosis (Canesi; Camera). It is present mainly in its reduced form, GSH, and represents the most significant thiol in eukaryotic cells (0.2 to 10 mM). GSH biosynthesis requires two enzymes (fig. 6): first, glutamate-cysteine ligase (γ-ECS) catalyses the fusion of glutamic acid (Glu) and Cys into γ-glutamyl-cysteine (γ-GluCys), which in turn is converted into GSH by the addition of glycine (Gly) by glutathione synthase (GS). γ-ECS is feedback-inhibited by GSH (Monostori). GSH exerts many functions in the cell. It intervenes in reduction processes such as the synthesis and degradation of proteins and the formation of deoxyribonucleotides. GSH plays a role as a co-enzyme of various reactions, and it is also conjugated with either endogenous (oestrogens, prostaglandins, and leucotrienes) or exogenous compounds (drugs and xenobiotics), thus taking part in their metabolism.
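The two ATP-dependent biosynthesis steps just described, and the redox cycle that maintains the cellular GSH pool, can be sketched as follows (standard biochemistry, added here for clarity):

\[ \mathrm{Glu + Cys + ATP \xrightarrow{\gamma\text{-ECS}} \gamma\text{-}Glu\text{-}Cys + ADP + P_i} \]
\[ \mathrm{\gamma\text{-}Glu\text{-}Cys + Gly + ATP \xrightarrow{GS} GSH + ADP + P_i} \]
\[ \mathrm{2\,GSH + H_2O_2 \xrightarrow{GPx} GSSG + 2\,H_2O} \qquad \mathrm{GSSG + NADPH + H^{+} \xrightarrow{GR} 2\,GSH + NADP^{+}} \]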
GSH is thus regarded as a central component of the antioxidant defense (Cossu; Manduzio). The biologically active site of GSH is the thiol group of the Cys residue. The high nucleophilicity of the thiol function underlies the role of GSH as a free radical scavenger, both under physiological conditions and in xenobiotic toxicity. GSH also helps in the regeneration of other antioxidants such as vitamin E, ascorbic acid, and metallothionein (Knapen). GSH is a cofactor for glutathione peroxidase in the decomposition of hydrogen peroxide and organic peroxides (fig. 6), for glyoxalase 1 in the detoxification of methylglyoxal and other oxo-aldehydes, and for maleylacetoacetate isomerase in the conversion of maleylacetoacetate and maleylpyruvate to the corresponding fumaryl derivatives (Monostori). GS-conjugates of endogenous compounds are involved in metabolism, transport, and storage in the cell. In addition, the conjugation of xenobiotics to GSH initiates a detoxification pathway that generally leads to excretion or compartmentation of the biotransformed compound. Although some adducts can be formed directly, glutathione-S-transferase (GST)-mediated reactions generally predominate. GST belong to the cellular mechanisms of detoxification and elimination of molecules (Wünschmann et al., 2009). In animals, GS-conjugates are hydrolyzed and degraded by γ-glutamyltransferases, followed by carboxypeptidation of the glycine residue. Glutathione reductase (GR) reduces GSSG to GSH, which, together with the synthesis of new GSH, maintains the cellular GSH stock (fig. 6).

Metallothioneins

Metallothioneins (MT) are cysteine-rich cytosolic proteins of low molecular weight (Vašák). The designation MT reflects the extremely high thiolate sulfur and metal content, both of the order of 10 % (w/w). MT have been identified in a wide range of organisms, from bacteria to mammals, in many fish and aquatic invertebrates, mainly molluscs. Classically, these extremely heterogeneous polypeptides were grouped into three classes of MT (Fowler; Viarengo and Nott, 1993). Metallothioneins play a role in the homeostatic control of essential metals (Cu, Zn), as they can act as essential metal stores ready to fulfil enzymatic and other metabolic demands (Amiard et al., 2006). Their involvement in metal metabolism is based on their capacity to complex metals, effectively buffering free metal ion concentrations in the intracellular environment. Additionally, the biosynthesis of these metalloproteins may be induced by exposure to essential and non-essential metals (Bonneris et al., 2005). The vital roles of this pleiotropic protein lie in the homeostasis of the essential trace metals zinc and copper and in the sequestration of non-essential environmental metals. Moreover, MT can protect cells from oxidative stress (Géret et al., 2002b). In the presence of ROS, zinc can be removed from the protein (Vašák); the underlying redox couple can be summarized as follows.
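A schematic summary of this couple, corresponding to the cycle shown in fig. 7 (qualitative, not stoichiometrically balanced):

\begin{align*}
\mathrm{Zn{-}MT}\,(\text{thiol clusters}) + \text{oxidant (ROS, GSSG)} \;&\longrightarrow\; \text{MT-disulfide (thionin)} + \mathrm{Zn^{2+}} \\
\text{MT-disulfide} + 2\,\mathrm{GSH} \;&\longrightarrow\; \mathrm{MT}\,(\text{thiols}) + \mathrm{GSSG}
\end{align*}

The reduced thiol form can then rebind Zn2+, so the GSH/GSSG status of the cell controls the metal-binding capacity of the MT pool.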
The release of zinc is accompanied by the formation of the MT-disulfide (or thionin, the oxidized form of the protein), which in turn can be reduced by the GSH/GSSG couple to restore the ability of the protein to bind zinc. This redox cycle of MT (fig. 7) plays a crucial role in maintaining the physiological homeostasis of metals, in the detoxification of toxic metals, and in the protection against oxidative stress (Kang). MT also plays an essential role in other metabolic processes, since MT expression is rapidly induced by a variety of factors such as cold, heat, hormones, and cytokines (Monserrat); in some molluscs, important seasonal variations in MT levels also correlate with the gametogenesis process. MT, like other proteins, are degraded in lysosomes (Ng).

Phytochelatins

Phytochelatins (PC) are thiol-rich oligopeptides; the available results suggest that PC play important roles not only in the chelation of trace metals but also in antioxidant defence (Hirata et al., 2005). PC are synthesized from glutathione, homo-glutathione, hydroxymethyl-glutathione, or γ-glutamylcysteine, catalysed by a transpeptidase named phytochelatin synthase (fig. 8), a constitutive enzyme requiring post-translational activation by transition metals (Grill et al.; Clemens, 2006). Phytochelatin synthase has been shown to be activated by a broad range of metals and metalloids, in particular Cd, Ag, Pb, Cu, Hg, Zn, Sn, Au, and As, both in vivo and in vitro. After the completion of the full genome sequence of the nematode Caenorhabditis elegans, two publications independently described a functional PC synthase able to synthesize PC in this model invertebrate (Clemens et al., 2001; Vatamaniuk et al., 2001). In addition, sequences similar to the PC synthase gene have been identified in the aquatic midge Chironomus and in an earthworm species (Brulle et al., 2008; Cobbett). There is as yet no evidence that these animal genes encode PC synthase activity; however, it seems likely that they do. It has become clear that PC could play a wider role in trace metal detoxification than previously thought. Organisms with an aquatic or soil habitat are more likely to express PC (Cobbett, 2000). Clemens et al. (2001) hypothesized that PC may be ubiquitously involved in the tolerance and homeostasis of metals in all eukaryotic organisms. Clemens and Peršoh (2009) showed that PC synthase genes are far more widespread than anticipated; homologous sequences are found throughout the entire animal kingdom, including in bivalves. The canonical PC synthase reaction is sketched below.
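A minimal way to write the transpeptidation catalysed by PC synthase (with GSH as donor; the same scheme applies to the GSH homologues mentioned above) is:

\[
(\gamma\text{-Glu-Cys})_n\text{-Gly} \;+\; \mathrm{GSH} \;\xrightarrow{\;\text{PC synthase, metal-activated}\;}\; (\gamma\text{-Glu-Cys})_{n+1}\text{-Gly} \;+\; \mathrm{Gly}, \qquad n \geq 1
\]

Chain elongation thus proceeds by transfer of a γ-Glu-Cys unit from GSH onto a growing PCn acceptor, releasing glycine at each step.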
Metal detoxification mechanisms in bivalves

Bivalves, and particularly freshwater unionid mussels, are widely recognised for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues and yet survive in these polluted environments (Winter, 1996; Cossu et al., 1997; Das; Bonneris et al., 2005). Such tolerance depends on the ability of these animals to regulate essential metal concentrations and to detoxify non-essential metals. In molluscs, three physiological and biochemical routes allow metal regulation: binding to specific, soluble, Cys-rich ligands; compartmentalization within organelles; and formation of insoluble, non-toxic precipitates (Viarengo and Nott, 1993; Rainbow). The sub-cellular partitioning of metals depends on the organ and on the nature of the metal. For example, in mussels, copper bound to granules is dominant in the gills, whereas in the digestive gland copper is principally bound to soluble Cys-rich compounds (Bonneris et al., 2005). Over long exposure periods, metals may be displaced from soluble metal-binding ligands to granules (Roméo).

Insoluble storage, compartmentalization, and elimination of metals

In molluscs, an important storage and detoxification system consists in the sequestration of metals in intracellular, cytosolic, and compartmentalized precipitated structures named granules (Howard; Viarengo and Nott, 1993). Granules are observed in different tissues of bivalves, mainly in the digestive gland, the gills, and the kidneys, which are implicated in metal homeostasis and detoxification (Wang and Rainbow). Lysosomes are mainly involved in the catabolism of both endogenous and exogenous molecules; together with granules, which can incorporate metals such as Ni, they appear to be linked with metal detoxification. These insoluble granules are often present in cells from which they can be eliminated by exocytosis (Howard; Viarengo and Nott, 1993). Granules, alone or together with lysosomes, are finally eliminated, principally by exocytosis, in urine, haemolymph, and faeces. In addition, a specificity of the unionids is extracellular calcium concretions named microspherules. They are found in the extrapallial fluid and the haemolymph, particularly in the gills and in the mantle. The calcium of these microspherules is usually bound to phosphate and carbonate, as orthophosphate Ca3(PO4)2 or calcium carbonate CaCO3 (Moura et al., 2000; Lopes-Lima).
These microspherules appear to act as a calcium reservoir, serving as a source of calcium for embryonic shell development, but they can also play a role in the detoxification of metals (Pynnönen; Bonneris et al., 2005). Another site of metal storage in molluscs is the shell, which may act as a safe storage matrix for toxic contaminants resistant to soft-tissue detoxification mechanisms (Das). Substantial bioaccumulation of copper and other metals has been shown in the shells of bivalves and unionids (Gundacker, 2000).

Detoxification by cysteine-rich metal-binding compounds

A strategy for cells to detoxify non-essential metal ions, and an excess of essential metal ions, is the synthesis of high-affinity binding sites that prevent the blockage of physiologically important functional groups (Clemens, 2006). Metal ions have a high reactivity with thiol, amino, or hydroxyl groups, making molecules carrying these functional groups candidates for metal detoxification processes (Viarengo and Nott, 1993). Therefore, the best-known and presumably first line of chelation of Cu, Zn, and non-essential metals comprises the Cys-containing peptides glutathione and phytochelatins and the small Cys-rich proteins, the metallothioneins. Metal cations with a high affinity for SH residues displace Zn2+ from the physiological metallothionein pool always present in the cells. An excess of trace metal cations, including the Zn2+ released from pre-existing metallothioneins, induces in the nucleus the synthesis of the mRNA coding for metallothioneins, consequently increasing MT in the cytosolic compartment. These MT chelate the trace metal cations, thus reducing their cytotoxic effects (Viarengo and Nott, 1993). The metal-thiolate clusters within the MT molecules allow rapid exchanges of metal ions between clusters and with other MT molecules (Monserrat). MT are usually not saturated by a single metal but contain several atoms of Cu, Zn, Cd, or, when present, Hg and Ag (Amiard et al., 2006). In Zn-thioneins, the seven metal atoms of Zn or Cd are distributed between two clusters: a three-metal β-cluster and a four-metal α-cluster (Maret). Functionally, the two clusters show different affinities for metal cations: at pH 7 or below, the α-cluster is the first to be saturated and the β-cluster the first to release the metal. The situation is different for copper bound to MT (Géret et al., 2002b). Indeed, the stability constant for copper is 100 times higher than for cadmium and 1000 times higher than for zinc. Due to the high affinity of Cu(I) for the SH residues, the complex is stable and the metal is not easily released. It is important to note that Cu-thionein has distinct chemical characteristics, including the capacity to form oxidized insoluble polymers. In the digestive gland of bivalves, the metal is detoxified in the lysosomes, trapped in the form of oxidized insoluble Cu-thioneins, which are subsequently eliminated by exocytosis of residual bodies. Nevertheless, literature data indicate that MT synthesis is not always induced in freshwater mussels: laboratory studies in A.
cygnea by Tallandini et al. and in Dreissena polymorpha by Lecoeur et al. showed no induction of MT after Cu2+ exposure. The MT response to a metal exposure might be reflected in an increased turnover (synthesis and breakdown) of the protein, but not necessarily in changes of the MT concentration (Mouneyrac). The Cys-containing peptides PC and GSH have been shown to form complexes with various metals through their thiolate sulfur atoms. In aquatic organisms, glutathione is believed to play a fundamental role in detoxifying metals. The soluble tripeptide GSH complexes and detoxifies trace metal cations soon after their entry into the cells, thus representing a first line of defence against trace metal cytotoxicity (Canesi). In particular, the reduction of Cu(II) by GSH produces a stable Cu(I)-SG complex, which is also a physiological donor of Cu(I) to copper apoproteins in both mammalian and marine invertebrate cells. Methylmercury also has a high affinity for the SH group of GSH, and methylmercury-SG complexes have been identified in different animal tissues. Glutathione-metal complexes are transported across the plasma membrane and therefore represent a carrier for the elimination of the metal from the cells (Viarengo and Nott, 1993). The conjugation of glutathione to metals prevents them from interacting deleteriously with other cellular components. Enhanced cadmium toxicity after glutathione depletion has been observed in both in vitro and in vivo mammalian studies. There is also evidence that glutathione depletion enhances metal toxicity in aquatic organisms: glutathione depletion by the inhibitor of glutathione synthesis, buthionine sulfoximine, enhances copper toxicity in the oyster Crassostrea virginica (Conners). Biochemical and genetic studies have confirmed that GSH is the substrate for PC biosynthesis. PC have been assumed to function in the cellular homeostasis of essential transition metal nutrients, particularly Cu and Zn (Schat). PC are high-affinity chelators of metals and play major roles in detoxification. Unequivocal evidence has been provided for the involvement of PC synthesis in metal detoxification (Courbot; Morelli and Scano, 2004; Thangavel). The induction of PC is triggered by exposure to various physiological and non-physiological metal ions. Some of these (Cd, Ag, Pb, Cu, Hg, Zn, Sn, Au, and As) form complexes with PC in vivo in algae, plants, and fungi (Clemens, 2006).
PC are suspected to play a role in animals as well as in plants (Cobbett, 2000; Vatamaniuk; Clemens and Peršoh, 2009). In the structural model of a PC-Cd complex, for example, the Cd coordinately binds one, two, three or, at maximum capacity, four sulfur atoms from either single or multiple PC molecules, resulting in amorphous complexes. In vivo, the size of the PC chain molecules and the pH stability are essential and determine the metal-binding capacity per PC molecule (Hirata et al., 2005). Konishi et al. introduced mRNA coding for the PC synthase of Arabidopsis thaliana into early embryos of the zebrafish (Danio rerio). As a result, the heterologous expression of PC synthase and the synthesis of PC from GSH could be detected in the embryos, and the developing embryos expressing PC synthase became more tolerant to Cd exposure.

3 Extended summary

The present work was undertaken in order to investigate some potential causes of the Europe-wide observed decline of various freshwater bivalve species. As stated in the IUCN Red List of Threatened Species, 44 % of all freshwater molluscs are under threat (http://www.iucnredlist.org/news/european-red-list-press-release). One of the potential causes is the increasing use, and the corresponding local or regional release, of industrially and technologically important metals, besides the many other factors potentially involved in such a complex ecotoxicological sequence of events. Clearly, a limited toxicological and patho-biochemical study such as the few investigations described here is far too restricted to arrive at a distinct and exclusive delineation of one single, individual explanation for the widespread decline of freshwater molluscs, including the bivalves, one class of this important, threatened phylum. On the other hand, without such detailed and necessarily limited studies concentrating on the potential patho-biochemical and toxicological mechanisms of a single likely pollutant, the delineation of the multitude of facets of such ecotoxicological effects will remain blurred. Therefore, the overall conclusions derived from the investigations presented in more detail in the following are not meant as proof that the increasingly used, technologically important metal copper is the only relevant factor, but as a demonstration of its potential relevance when considering the various possibilities which can have an impact on the stability of bivalve populations, including the most spectacular and most strongly threatened one, the European pearl mussel.

Optimization of analytical protocols

Bivalve maintenance and copper exposure

Particular attention was paid to mussel maintenance in order to carry out the tests under stable, homogeneous, and reproducible conditions (detailed in article 1). The bottom of the tank was covered with a layer of glass beads of 10 mm diameter, so the mussels could find conditions for burying.
This was important for giving the animals a substrate for close-to-nature behaviour, while at the same time avoiding the use of natural river sediment as substrate; thus, unpredictable or unaccountable metal accumulation by adsorption onto sediment particles, or interference of microorganisms through metal species conversion, could be avoided without excessively disturbing the behaviour of the animals, as indicated by their regular ventilation activity. Moreover, the large beads facilitated a thorough daily cleaning of the tank by suction. The water, artificially reconstituted from deionised water, was renewed daily in order to ensure equal conditions during the whole experimental course for all replicates and to avoid any uncontrolled metal accumulation or cross-contamination. Under these conditions, the copper concentrations could be maintained within close limits of the target concentration during the whole exposure period, as verified by graphite furnace atomic absorption spectrometry (GFAAS) or inductively coupled plasma mass spectrometry (ICP-MS).

Enzymatic analyses

The enzymatic analyses were carried out as described in articles 1 and 2. The activities of Ca2+-ATPase and Na+/K+-ATPase were determined in the suspension of the microsomal pellet (as shown in fig. 1) obtained by homogenization of the mantle, digestive gland, gills, and kidney and by centrifugation at 75600 g; the activity of H+-ATPase was determined in the supernatant of the first centrifugation step at 2000 g (fig. 1). The inorganic phosphate released by the ATPases was quantified by spectrophotometry of the ammonium molybdate complex according to Chifflet et al. (1988). The CA activity was evaluated in the supernatants (S 2000, fig. 1) by measuring the pH decrease according to Vitale et al. (1999). The study of phytochelatins in Anodonta cygnea led us to develop and optimise an HPLC protocol for phytochelatin (PC) quantification in animal tissues, based on the method for plants developed by Minocha et al. (2008). A protein removal step by acidification and centrifugation was added (fig. 2), and the mobile phase gradient profile was modified as described in articles 3 and 4. PC were quantified in the cytosolic fractions of deproteinized tissue homogenates, and metallothionein (MT) in the non-deproteinized cytosolic fractions, after thiol reduction with tris-(2-carboxyethyl)-phosphine hydrochloride (TCEP) and 1,4-dithiothreitol (DTT), respectively. Monobromobimane (mBBr) was used as a fluorescent tag. Initially non-fluorescent, mBBr becomes fluorescent after binding to thiol groups under dehalogenation. It is selective for thiol groups and allows their quantification with high sensitivity by fluorimetry. Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to confirm the PC identity. The quantification limits per 20 µL of sample injected into the HPLC column were 1.5 pmol for PC2-3, 2.7 pmol for PC4, and 5 pmol for PC5.

Effects of copper on calcium transport in Anodonta anatina

Physiological data

A good physiological knowledge of the biomarkers is important for their ecotoxicological interpretation. The present study (articles 1 and 2) allowed us to determine the basal levels of the enzymatic activities of Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase of the plasma membrane. These enzymes were studied in the gills, the digestive gland, the kidneys, and the mantle, organs playing an important role in calcium input. The Ca2+-ATPase controls the cellular calcium concentration by active transport; a minimal computational sketch of how such specific activities are derived from the inorganic phosphate measurements is given below.
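The following sketch illustrates the activity calculation (all variable names and example numbers are hypothetical, not data from the articles; the differential-inhibitor logic and the units, µmol Pi per mg protein per min, follow the assay descriptions given in articles 1 and 2):

# Sketch: specific ATPase activity from a differential inhibitor assay.
# Activity = Pi released without inhibitor minus Pi released with inhibitor,
# normalized to protein amount and incubation time.

def pi_from_absorbance(a850, slope, blank):
    """Convert A850 of the ammonium molybdate complex into umol Pi,
    using a linear phosphate standard curve (hypothetical slope)."""
    return (a850 - blank) / slope

def specific_activity(pi_total, pi_inhibited, protein_mg, time_min):
    """Inhibitor-sensitive ATPase activity, umol Pi / mg protein / min."""
    return (pi_total - pi_inhibited) / (protein_mg * time_min)

# Hypothetical example values (not measured data):
slope, blank = 2.5, 0.05                              # A850 per umol Pi
pi_total = pi_from_absorbance(0.80, slope, blank)     # no inhibitor
pi_inhib = pi_from_absorbance(0.35, slope, blank)     # with inhibitor
print(specific_activity(pi_total, pi_inhib, protein_mg=0.020, time_min=20))
# -> approximately 0.45 umol Pi/mg protein/min for this synthetic example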
The Na+/K+-ATPase and the H+-ATPase maintain the sodium and proton gradients necessary for the Na+/Ca2+ and H+/Ca2+ calcium transporters. The H+-ATPase and the cytoplasmic CA were also studied for their implication in the biomineralization process of the shell and in the maturation of the glochidia. There is a lack of physiological data on freshwater bivalves compared to marine models such as Mytilus edulis or M. galloprovincialis. These data are useful for understanding the calcium transport mechanisms, since freshwater bivalves are subject to osmo- and iono-regulation different from marine mussels. The enzymatic activities found in the gills of A. anatina (table 1) were consistent with those reported in the gills of A. cygnea for the Na+/K+-ATPase, 0.11 µmol Pi/mg protein/min (Lagerspetz and Senius, 1979), and the Ca2+-ATPase, 0.58 µmol Pi/mg protein/min (Pivovarova), respectively, and in the gills of A. anatina for CA, 2.1 U/mg protein (Ngo). The plasma membrane Ca2+-ATPase (PMCA) activity in the gills (article 1) was fourfold higher than in the marine bivalve Mytilus edulis (Burlando et al., 2004). This reflects how important active calcium transport is for the freshwater mussel Anodonta anatina. Calcium concentrations in seawater are higher than in freshwater, which explains why calcium absorption is easier for Mytilus edulis. The PMCA activity is also high in the kidneys and the digestive gland, favouring calcium absorption from food and calcium reuptake from the renal filtrate. These physiological data allowed us to understand how crucial PMCA is for calcium homeostasis in freshwater bivalves compared to marine organisms. Moreover, in the Unionidae, calcium plays a direct role in reproduction, since the glochidia are incubated in the marsupium (gills), and it is therefore important for the population growth of the mussels. The Na+/K+-ATPase maintains the transmembrane sodium gradient necessary for the facilitated diffusion of calcium through Na+/Ca2+ antiporters. In Anodonta anatina, the Na+/K+-ATPase activity varied in a consistent manner as a function of the seasons (article 2). The enzymatic activities of Na+/K+-ATPase and H+-ATPase were found to be maximal in summer, the season of shell and glochidia calcification in Anodonta mussels (Taskinen, 1998; Moura et al., 2000). This seasonal variation is an important aspect which must be considered in the interpretation of ecotoxicological data. Calcium is moreover involved in signalling pathways, in addition to biomineralization and shell formation, in mussels and in glochidia. The kidneys have an essential role in the filtration and reabsorption of ions, water, and organic molecules from the ultrafiltrate. As freshwater organisms are hyperosmotic, the osmotic pressure resulting from the concentration gradient between the internal and the surrounding compartments leads to water uptake by osmosis and to ionic loss by diffusion. The water uptake is compensated by urine production, and the ionic loss is limited by active ion reabsorption. In freshwater bivalves, the daily output of urine is high. The kidneys play an essential role in calcium homeostasis by limiting the ionic loss in urine through active Ca2+ reuptake from the filtrate (Turquier, 1994).
An inhibition of calcium reabsorption may therefore dramatically disturb calcium homeostasis. Because of analytical problems inherent to the small biomass of the kidneys, this tissue has been poorly studied in freshwater bivalves. This organ plays a role in detoxification (Viarengo and Nott, 1993) and is essential to iono-regulation. Our results (article 1) showed the high sensitivity of the Ca2+-ATPase to Cu2+ in this organ. The unionid freshwater bivalves are widely recognized for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues (Bonneris et al., 2005). This marked metal tolerance is effected by biochemical strategies that involve metal sequestration. The intracellular sequestration of metals is based on a sequence of cellular events involving a cascade of different ligands with increasing metal-binding strengths. High metal concentrations could also inhibit the detoxification mechanisms. In our results (articles 1 and 2), the recovery of the Ca2+-ATPase and Na+/K+-ATPase activities within 7 days at the low Cu2+ concentration (0.35 µmol L-1) indicated an adaptive ability. This suggested the mobilisation of detoxification systems efficient at low Cu2+ concentrations. At the higher Cu2+ concentration (0.64 µmol L-1), no recovery was noted. It is interesting to note that recovery was observed only at 0.35 µmol L-1, a concentration environmentally more relevant than the higher concentration studied (article 1). Therefore, it is important to use environmental concentrations in ecotoxicological research; the extrapolation of results observed at high doses to environmental situations may be critical and not pertinent, due to the different mechanisms of adaptation and toxicity at high and low doses.

Metal detoxification mechanisms in Anodonta cygnea

Aquatic molluscs have a number of properties that make them one of the most popular sentinels for monitoring water quality. Molluscs take up and accumulate high levels of trace metals, although the body concentrations show a wide variability across metals and invertebrate taxa. The interest of bivalves for the assessment of water quality is primarily linked to their tendency to accumulate trace metals even at high concentrations (Doyotte et al., 1997). Tolerance depends on the ability to regulate the metal concentrations in the cells and to accumulate excess metals in non-toxic forms (Viarengo and Nott, 1993). In our study, the recovery of the Ca2+-ATPase (article 1) and Na+/K+-ATPase (article 2) activities observed within 7 days of exposure indicates an induction of detoxification mechanisms. Copper belongs to the transition metals, most of which are known to show various degrees of affinity for thiol groups. This property makes chelating ligands carrying thiol functions the first mechanism of metal detoxification. In articles 3 and 4, the study focused on the mechanisms of metal detoxification by thiol-rich compounds. In most animals, the tolerance to trace metals depends on the induction of MT, a family of thiol-rich proteins of low molecular weight. These metalloproteins are known to play an important role in the homeostatic control of essential metals such as Cu and Zn, but also in the detoxification of excessive amounts of essential and non-essential trace metals. In the tissues of metal-exposed mussels there is a rapid increase of metallothioneins, the soluble proteins involved in transition metal detoxification, which acts synergistically with lysosomal compartmentalization (Amiard et al., 2006); the underlying chelation step can be sketched as shown below.
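The chelation step can be written as a displacement reaction (a schematic illustration: the seven-metal Zn7-MT stoichiometry is the one classically described for the two-cluster MT discussed above and is assumed here; Cd2+ is taken as the example of an incoming high-affinity cation):

\[
\mathrm{Zn_7\text{-}MT} + \mathrm{Cd^{2+}} \;\longrightarrow\; \mathrm{CdZn_6\text{-}MT} + \mathrm{Zn^{2+}}
\]

The Zn2+ released in this way can in turn trigger the transcription of MT genes, so that the chelation reaction and the induction of new MT synthesis are coupled.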
Phytochelatins (PC) are thiol-rich oligopeptides which have been characterized in a wide range of plant species (Grill et al., 1985). PC play a major role in the detoxification of trace metals in plants by chelating the metals with high affinity. In 2001, two publications independently described a functional phytochelatin synthase in the invertebrate nematode Caenorhabditis elegans (Clemens et al., 2001; Vatamaniuk et al., 2001). Brulle et al. (2008) provided the first evidence of a phytochelatin synthase from the earthworm Eisenia fetida implicated in dose-dependent cadmium detoxification. Neither study furnished evidence of the existence of the PC peptides themselves in invertebrates or other animals. The recent cloning of the gene encoding phytochelatin synthase in E. fetida (Brulle et al., 2008; Bernard) suggested the existence of this detoxification pathway in this species. A superficial view of the limited selection of species in which such sequences have been identified might suggest that invertebrates with an aquatic or soil habitat are more likely to express PC (Cobbett, 2000). The objectives of our study were to determine the ability of A. cygnea to synthesize PC (article 3) and to study the possible role played by PC in copper detoxification in this freshwater bivalve (article 4). In plants, PC are rapidly induced in cells and tissues exposed to a range of transition metals and play an important role in detoxification. The results obtained in the present work (articles 3 and 4) showed the presence of phytochelatins in an invertebrate, to our knowledge for the first time. Basal levels were determined for PC2-4, γ-GluCys, and MT in the gills and the digestive gland (table 2). Besides the interest of these data for freshwater bivalve studies, the detection of PC2-4 in the absence of excessive copper and other metals suggests their role in essential metal homeostasis. In the gills and the digestive gland, PC2 was found in the highest concentration, followed by PC3, which in turn was higher in concentration than PC4. PC2 and PC3 were found to be two to three times higher in the digestive gland than in the gills.

Table 2: Basal levels of phytochelatins 2-4 (µg PC/g tissue wet weight), γ-GluCys (µg γ-GluCys/g tissue wet weight), and metallothionein (mg MT/g protein) in Anodonta cygnea. G: gills; DG: digestive gland; PC: phytochelatin; MT: metallothionein.

In European continental hydrosystems affected by agricultural and/or urban activities, levels of up to 0.6 µmol L-1 of Cu2+ are seasonally found (Neal; Falfushynska et al., 2009). In our work (articles 1 and 2), an inhibition followed by a recovery of the Ca2+-ATPase and Na+/K+-ATPase activities was observed in mussels exposed to 0.35 µmol L-1 Cu2+, an environmentally relevant concentration. Therefore, in our studies on the detoxification mechanisms by metal-binding Cys-rich compounds (articles 3 and 4), the mussels were exposed to the same Cu2+ concentration. A statistically significant PC2 induction was observed in the gills of A. cygnea exposed to 0.35 µmol L-1 Cu2+ (article 4), i.e. 50 % from 12 h to 4 d of exposure, and 30 % after 7 d.
In the digestive gland, a significant PC2 induction was observed only at 12 h of Cu2+ exposure. Relative to the respective controls, γ-GluCys increased significantly in the gills at 48 h and 7 d of exposure, and in the digestive gland at 48 h and 4 d. The induction of PC2 in A. cygnea exposed to Cu2+ suggests its key role as a metal chelator in a first line of detoxification, together with other compounds such as GSH. The higher sensitivity of the PC2 induction in the gills could be explained by the water route of the Cu exposure: the gills may play a more important role than the digestive gland in the uptake of copper dissolved in the test media. However, beyond 7 d of exposure, PC2 declined to control values within 21 d of exposure. This decrease suggests that long-term Cu2+ detoxification shifts to other mechanisms. In the Unionidae, MT and insoluble granules are known to play a role in metal detoxification in the long term. Bonneris et al. (2005) showed that the cadmium, zinc, and copper concentrations in the gill granule fraction were significantly correlated with the environmental concentrations of these metals. The granules are known to be a preferential site of copper storage in the gills of the Unionidae. Around 65 % of the total copper in the gills was found to be bound to granules in Anodonta grandis grandis, where the calcium concretions represented 51 % of the gill dry weight. Similar values were found in Anodonta cygnea (Bonneris et al., 2005). No increase of MT was observed in the present study upon Cu2+ exposure for 21 d. The MT isoform (< 10 kDa) identified in the mussel extract by HPLC separation in our study was not induced by Cu2+. Detoxification in A. cygnea by other MT isoforms not detected by the HPLC method cannot be excluded. Indeed, MT polymorphism is known to be important in invertebrates (Géret et al., 2002b; Amiard et al., 2006). A good example of such polymorphism and functional divergence is found for the MT of the snail Helix pomatia. These snails can tolerate exceptionally high concentrations of cadmium. Additionally, they accumulate relatively high amounts of copper, needed for the biosynthesis of the oxygen carrier hemocyanin. The specific metal accumulation is paralleled by the presence of MT forms which bind specifically one type of metal (Cd or Cu), both containing 18 conserved Cys residues but differing in other amino acids (Vašák). Metallothioneins, granules, and antioxidant systems have been described as being involved in the detoxification mechanisms of freshwater bivalves. Cossu et al. (1997) have shown that in the Unionidae the antioxidants, and especially GSH, play major roles in metal detoxification. In the gills of Unio tumidus, a decrease of GSH by 45 % was found in mussels exposed at a metal-contaminated site. A decrease of the GSH level in Unionidae exposed to copper was confirmed by Doyotte et al. (1997) under controlled laboratory conditions, indicating either a metal blockage of the SH groups or the use of GSH as a substrate of antioxidant enzymes. Indeed, a parallel involvement of antioxidant enzymes has been described, with an increased activity at low metal concentrations (Vasseur and Leguille, 2004). GSH also plays a role in PC synthesis. This double role in direct metal detoxification and as a PC precursor could explain this decrease (fig. 1).

Decline of the Unionidae populations

Over the past 50 years, a world-wide decline of the autochthonous freshwater molluscs has been observed (Lydeard et al., 2004).
Among the different species, the Unionoida taxon seems to be particularly endangered. Species such as the pearl mussels Margaritifera margaritifera and Margaritifera auricularia are declining dramatically in European rivers. Other species belonging to the Unionidae family are included in the red list of threatened species established by the World Conservation Union; this is the case for Anodonta anatina and Anodonta cygnea. The Unionidae populations represent the largest part of the total biomass in many aquatic systems. As detritivorous invertebrates, they take an active part in sedimentation and water purification and modify the phytoplankton community (Aldridge, 2000; Vaughn). Therefore, the disappearance of the Unionidae may result in structural and functional perturbations of aquatic ecosystems. The competition with invasive bivalves is also a matter of debate. Invasive freshwater bivalves such as Corbicula fluminea or Dreissena polymorpha, which do not belong to the Unionidae, colonise freshwater hydrosystems with detrimental effects on other invertebrates. These invasive species do not play the same functional role as the Unionidae in the ecosystems, and their presence is hypothesized to contribute to the decline of the Unionidae. A combination of various factors may explain the gradual disappearance of the Unionoidea in their competition with invasive species. The depletion of salmonids as host fish has also been mentioned as a possible cause. Salmonid fish populations are affected by water pollution; moreover, pollutants may indirectly promote natural disease factors such as parasitism, which is known to endanger salmonids (Voutilainen et al., 2009) and bivalves; the trematode parasitism of Anodonta populations entails complete infertility (Taskinen). Other major reasons for the decline are the physical degradation of streams and the reworking of river beds and canals, as well as the degradation of water quality. Indeed, interactions between pollutants may disturb these biocoenoses even at low concentrations (Vighi et al., 2003). In order to protect the autochthonous Unionidae, it is important to determine how, and to which extent, those factors contribute to their disappearance. The present work is meant as a contribution to understanding the role of metal pollution in affecting calcium homeostasis in freshwater mussels, likely to be one of the reasons for the widespread decline of mussel populations in European rivers (Frank and Gerstmann, 2007).

4 Conclusion

A macrobenthos community is an excellent indicator of water quality (Ippolito et al., 2010). The objectives of the present work were to study some toxic effects of ionic copper and its detoxification mechanisms in freshwater mussels of the genus Anodonta, belonging to the Unionidae. First, Cu2+ was assessed in A. anatina as a potential inhibitor of proteins playing a role in calcium transport and biomineralization. Secondly, the mechanisms of copper detoxification by MT, PC, and γ-GluCys were studied in A. cygnea. In the present study, the effects of Cu2+ exposure on the activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase, and of the cytosolic CA, have been evaluated in the freshwater bivalve A. anatina.
Upon 4 d of exposure, an inhibition of the Ca2+-ATPase activity in the gills and the kidneys, and of the Na+/K+-ATPase activity in the gills and the digestive gland, was observed in A. anatina exposed to 0.35 µmol L-1 Cu2+. A total recovery of the Ca2+-ATPase was observed upon 7 d of exposure, indicating that detoxification mechanisms are activated, except at the higher Cu2+ concentration of 0.64 µmol L-1. The Ca2+-ATPase inhibition was particularly marked in the kidney, the organ playing an important role in calcium reuptake and iono-regulation. It would be interesting to study the long-term consequences of the disturbances of the osmoregulatory functions in the kidney of A. anatina exposed to 0.64 µmol L-1 Cu2+. The present work is the first to identify phytochelatins, i.e. PC2, PC3, and PC4, in freshwater mussels. In A. cygnea exposed to 0.35 µmol L-1 of Cu2+, PC2 was induced in the digestive gland and the gills within 12 h. The induction in the gills persisted for 7 days. These results suggest a role of PC in essential metal homeostasis and their involvement in first-line detoxification. Our HPLC results showed no variation of the MT levels after Cu2+ exposure, at least for the MT isoform quantified with our method. Additional MT experiments with A. cygnea exposed to Cu2+ under the same conditions, employing a spectrometric method which allows the quantification of all the MT isoforms, would be interesting in order to compare our results with those found in A. anatina by Nugroho and Frank (2012). In parallel, the HPLC method should be optimized to allow the detection of other MT isoforms induced by copper. Copper has been reported as a weak inducer of metal-binding thiol peptides (Zenk, 1996). Exposure to a non-essential metal that is a strong inducer of PC, such as cadmium, could be interesting. Comparing the effects of an essential metal like copper and of a non-essential metal like cadmium could allow determining the roles played by PC in essential metal homeostasis and in the detoxification mechanisms. Copper is an inducer of reactive oxygen species (ROS) through Fenton-type reactions. The activity of PC in scavenging ROS in A. cygnea exposed to copper would be interesting to study. Chemical pollution is one of the environmental factors that may impact bivalve populations. Toxicity depends not only on the bioavailability of the pollutants and their intrinsic toxicity, but also on the efficiency of the detoxifying systems in eliminating reactive chemical species.

Articles

INTRODUCTION

Na+/K+-ATPase and H+-ATPase are important enzymes mainly involved in osmoregulation and acid-base balance. As a proton pump, the H+-ATPase also plays an essential role in shell synthesis through pH control, a decisive factor in the biomineralization process (Machado et al., 1989). The plasma membrane Na+/K+-ATPase and H+-ATPase maintain the sodium and proton gradients necessary for the calcium antiporter systems. Although calcium uptake is mainly achieved by the active flux generated by the Ca2+-ATPase, facilitated diffusion through Ca2+/Na+ or Ca2+/H+ antiporters is another important route of entry of calcium into the cell (Wheatly et al., 2002). The perturbation of calcium homeostasis has been hypothesized as a possible cause of the decline of the pearl mussel populations (Frank and Gerstmann, 2007). Another freshwater mussel belonging to the Unionoida order known to bioaccumulate copper is Anodonta anatina (Cossu et al., 1997; Nugroho and Frank, 2011).
An inhibition of the Ca2+-ATPase activity in the plasma membrane was shown in A. anatina exposed to 0.35 µmol L-1 of copper (Santini et al., 2011a). Both the H+-ATPase and the Na+/K+-ATPase of the plasma membrane are metal-sensitive enzymes with functional SH groups. Due to its thiol affinity, copper is likely to affect the activity of these enzymes (Viarengo et al., 1991), and calcium homeostasis could thus be indirectly affected by copper; the alteration of the Na+ and H+ gradients would entail a decreased efficacy of the Ca2+/Na+ and Ca2+/H+ antiporters. Therefore, it seemed interesting to evaluate the activities of both enzymes in mussels, in order to determine their sensitivity to this transition metal, which nowadays is widely found in the environment at elevated levels. Thus, in the present study, the effects on the activities of the Na+/K+-ATPase and H+-ATPase have been evaluated in A. anatina mussels exposed to copper ions (Cu2+) at 0.35 µmol L-1. The H+-ATPase activity was measured in the mantle, which plays the dominant role in the generation, growth, and repair of the shell. The Na+/K+-ATPase activity was determined in the mantle, the gills, and the digestive gland, organs which are mainly involved in calcium uptake and homeostasis (Coimbra et al., 1993).

MATERIAL AND METHODS

Chemicals

All chemicals used for the maintenance and exposure of the bivalves, for the sample preparation, and for the biochemical assays were of analytical grade. Ethylene glycol-bis(β-aminoethylether)-N,N,N',N'-tetraacetic acid (EGTA), ouabain, phenylmethylsulfonyl fluoride (PMSF), sodium azide, sodium dodecyl sulphate (SDS), N-ethylmaleimide (NEM), and sodium orthovanadate were from Fluka (Schnelldorf, Germany). Adenosine-5'-triphosphate (ATP), N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) (HEPES), and tris-(hydroxymethyl)-aminomethane (Tris) were from Carl Roth (Karlsruhe, Germany).

Animal maintenance and copper exposure

The mussel maintenance has been described in detail in a previous article (Santini et al., 2011a). Briefly, adult A. anatina mussels with shell lengths of 6.5-7.5 cm were kept at 17 °C in a glass aquarium containing 1.5 L per animal of filtered and aerated artificial pond water. The water was renewed every 48 h. The bottom of the tank was covered with a 5-cm layer of glass beads (about 10 mm diameter), so the mussels could find conditions for burying. They were fed daily with the unicellular alga Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2 × 10^5 cells mL-1. The animals were acclimatized for 3 weeks in September before any experiment. For the copper (Cu2+) exposure, the animals were placed in a 20 L aquarium lined with dye- and pigment-free high-density polyethylene foil, with a glass bead layer, and filled with 1.5 L per mussel of experimental medium (pH 7.2, temperature 17 ± 0.5 °C). The bivalves were fed daily as described above, and the test media were renewed every day. A concentration of 0.35 µmol L-1 Cu2+ was chosen for the exposure test. The concentration was controlled by inductively coupled plasma mass spectrometry, with a limit of Cu detection of 0.5 µg L-1 = 8 nmol L-1. The test was conducted with 12 mussels randomly assigned in groups of 3 to each treatment. At the beginning (time 0), a group of 3 mussels unexposed to Cu2+ was used as control. A group of 3 mussels was taken from the aquarium after 4 d, 7 d, and 15 d of Cu2+ exposure. The mussels of each treatment were dissected separately for the biochemical analyses, i.e.
3 replicates per treatment.

Tissue sample preparation for enzymatic analysis

Tissue samples of the gills, the mantle, and the digestive gland were prepared. The samples were suspended in 6 vol. of ice-cold HEPES buffer (10 mmol L-1 HEPES, 250 mmol L-1 sucrose, 1 mmol L-1 EDTA, 1 mmol L-1 phenylmethylsulfonyl fluoride (PMSF) as protease inhibitor, adjusted to pH 7.4 with 1 mol L-1 HCl) and homogenized by means of a motor-driven Teflon-pestle homogeniser with 30 up-and-down strokes. The resulting homogenates were centrifuged at 2000 g (10 min, 4 °C); the supernatants (S2000) were diluted with 12 mL of ice-cold HEPES buffer per g of tissue wet weight and centrifuged at 10000 g (20 min, 4 °C). The supernatants (S10000) were ultracentrifuged at 75600 g (60 min, 4 °C), and the final pellets were suspended in 6 vol. of ice-cold Tris-HCl buffer (25 mmol L-1 Tris, 1 mmol L-1 PMSF, adjusted to pH 7.4 with 1 mol L-1 HCl) for the plasma membrane Na+/K+-ATPase test. The plasma membrane H+-ATPase activity was determined in the S2000 supernatant used as mantle extract. All samples were frozen in liquid nitrogen and stored at -80 °C until the analyses were carried out, not later than one week after sampling. The protein concentrations were determined according to Bradford (1976), using bovine serum albumin as standard.

Plasma membrane H+-ATPase and Na+/K+-ATPase activity assay

The plasma membrane H+-ATPase and Na+/K+-ATPase activities were determined by measurement of the inorganic phosphate released (Chifflet et al., 1988), quantified by spectrophotometry of the ammonium molybdate complex at 850 nm (spectrophotometer Unicon, Kontron 930). The H+-ATPase reaction medium contained (final concentrations) 6 mmol L-1 MgSO4, 50 mmol L-1 HEPES, 0.5 mmol L-1 sodium orthovanadate (P-ATPase inhibitor), 0.5 mmol L-1 sodium azide as inhibitor of the mitochondrial ATPase activities, and 4 mmol L-1 Na2ATP, adjusted to pH 7.4. The mantle extract (160 µg protein) was incubated in 1 mL final volume for 60 min in a shaking water bath at 25 °C, with or without the addition of NEM as H+-ATPase inhibitor (Lin). The reaction was stopped by the addition of 400 µL of sample to 400 µL of a 12 % solution of sodium dodecyl sulphate (SDS). Blanks were prepared in the same way, except that the tissue samples were added after the SDS dilution step. The H+-ATPase activity, expressed in micromoles of Pi released per mg protein and min, was determined as the difference between the ATPase activity without NEM and the ATPase activity in the presence of 10 mmol L-1 NEM, i.e. the NEM-sensitive activity. The Na+/K+-ATPase reaction medium contained (final concentrations) 100 mmol L-1 NaCl, 0.5 mmol L-1 EGTA, 5 mmol L-1 MgCl2, 25 mmol L-1 Tris, 0.5 mmol L-1 sodium azide as inhibitor of the mitochondrial ATPase activities, and 4 mmol L-1 Na2ATP, adjusted to pH 7.4. Samples of the membrane fractions (suspensions of the 75600 g pellet) of the gills, the digestive gland, and the mantle (20, 70, and 60 µg protein, respectively) were incubated in 1 mL final volume for 20 min in a shaking water bath at 37 °C, with or without the addition of K+ and of ouabain as Na+/K+-ATPase inhibitor (Lionetto et al., 1998). The reaction was stopped by the addition of 400 µL of sample to 400 µL of a 12 % SDS solution. Blanks were prepared in the same way, except that the tissue samples were added after the SDS dilution step.
The Na+/K+-ATPase activity, expressed in micromoles of Pi released per mg protein per min, was determined as the difference between the ATPase activity in the presence of 20 mmol L-1 KCl and the ATPase activity without KCl and in the presence of ouabain (1 mmol L-1).

Statistical analyses

The data distributions were not normal, so the statistical analysis for the comparison of the enzymatic activities between treated and control mussels was done using the non-parametric Mann-Whitney test (Statistica, StatSoft France 2001, data analysis software, version 6, Maison-Alfort, France). All data are reported as mean (n = 3) ± standard deviation (S.D.). Differences were considered as significant when p < 0.05.

RESULTS

No significant effect of copper on the H+-ATPase activity was noted (fig. 2) in the mantle of the mussels upon 15 days of exposure. The H+-ATPase activities exhibited a continuously declining trend, although statistically not significant due to the high variability between mussels. The basal activities (expressed in micromoles of Pi released per mg protein per min) varied in a consistent manner as a function of the seasons. The activities were highest in July and September compared to the values measured in January, March, and April (fig. 3). In July, the Na+/K+-ATPase activity, expressed as µmol Pi/mg protein/min, was 0.107 ± 0.027 in the gills, 0.047 ± 0.050 in the digestive gland, and 0.053 ± 0.025 in the mantle (fig. 3). The H+-ATPase activity of the mantle did not differ significantly over the study period (from July to April), with 0.0025 ± 0.001 µmol Pi/mg protein/min in July (fig. 4). Minimal values of the Na+/K+-ATPase and H+-ATPase activities were observed in April in every tissue, with a strong decrease to between one half and one tenth of the July values, depending on the organ.

DISCUSSION

Within the general decline observed for freshwater bivalves (Lydeard et al., 2004), the Unionoida taxon is particularly endangered. The alteration of ionic transport systems such as the Na+/K+-ATPase and H+-ATPase by copper is poorly investigated in the freshwater mussel Anodonta anatina, which belongs to this taxon. Copper, a metal extensively used in industry, the building sector, public energy supply and transportation systems, and agriculture, was studied in the present work as a potential factor implicated in this complex phenomenon of decline. The order of magnitude of the mean basal activities of the Na+/K+-ATPase in the different organs studied was gills > mantle > digestive gland. The high activity measured in the gills reflects the specialization and implication of this organ in iono-regulation. The gills are known to play a major role in the active uptake of mineral ions from the surrounding water (Turquier, 1994). The mantle ensures the ionic balance between the haemolymph and the extrapallial fluid necessary for shell growth or for calcium reabsorption into the body (Coimbra et al., 1993). The basal activities of the Na+/K+-ATPase determined in the gills of control A. anatina mussels were of the same order of magnitude as those found by Lagerspetz and Senius (1979) in A. cygnea, and as in the freshwater fish Bidyanus bidyanus (Alam and Frankel, 2006) (table 1). The hyperosmotic status of A. anatina and A. cygnea is expressed by the high basal activities of the enzyme. In comparison, the marine mussel Mytilus galloprovincialis showed a Na+/K+-ATPase activity threefold lower than A. anatina (Viarengo et al., 1996). There is little information in the scientific literature on the H+-ATPase activity in the mantle of mussels.
Investigations on the enzyme activity, expressed as µmol Pi/mg protein/min (table 1), were carried out mostly in the gills of crustaceans (0.023) and of fish (0.003 to 0.041). In the present work, the mean H+-ATPase activity of 0.0025 ± 0.001 found in the mantle of A. anatina was lower. In this study, seasonal fluctuations of the Na+/K+-ATPase basal activity were observed, with clearly and significantly elevated levels in July and September. It is known that freshwater mussels show seasonal changes in calcification (Taskinen, 1998). Our results correspond to the cycle of calcification found in Anodonta cygnea, a closely related species. The growth of the clam shells and the level of glycosaminoglycans, known to be important for biomineralization, both increase in summer with a maximum in July and August (Taskinen, 1998; Moura et al., 2000). These fluctuations, parallel to the calcification cycle, suggest an indirect implication of the Na+/K+-ATPase in the calcium transport by the Ca2+/Na+ antiporter. The H+-ATPase activities showed comparable, though not statistically significant, seasonal fluctuations. The Na+/K+-ATPase of the gills and the digestive gland was strongly affected by exposure to 0.35 µmol L-1 of Cu2+ upon 4 days of exposure, and to a lesser extent in the mantle. Therefore, even though our results do not provide direct evidence of an inhibition of the Ca2+/Na+ antiporter, a decrease of the ionic calcium transport can be assumed. This is consistent with the results observed by Viarengo et al. (1996) in M. galloprovincialis exposed for 4 d to 0.6 µmol L-1 of Cu2+. In A. anatina, the inhibition of the Na+/K+-ATPase modified the cellular Na+ gradient, which could result in a reduced activity of the Ca2+/Na+ antiporter and a disturbance of calcium homeostasis. A similar profile of inhibition was observed for the H+-ATPase activity in the mantle, but the inter-individual variations were too high to be statistically significant. Some recovery of the Na+/K+-ATPase was observed in the digestive gland of A. anatina within 7 days of Cu2+ exposure. Viarengo et al. (1996) also showed a return to the basal level of the Na+/K+-ATPase activity in M. galloprovincialis exposed during 7 d to 0.6 µmol L-1 of Cu2+. This may indicate the occurrence of detoxification systems mobilized in the first days of exposure to cope with the stressors through metal sequestration or binding to metallothioneins. Despite a trend indicating a partial recovery beyond 4 days, the Na+/K+-ATPase activity in the gills of A. anatina was still significantly inhibited upon 15 days of the experiment. An inhibition of the Ca2+-ATPase activity was also observed in the gills and the kidneys of A. anatina exposed to Cu2+ under the same conditions as in the present study (Santini et al., 2011a). Altogether, these results indicate an inhibition of the ionic transport by Cu2+, which may perturb calcium homeostasis and, more generally, iono-regulation. In the long term, this may lead to a perturbation of biomineralization (thinner shells, decreased glochidia viability) and to physiological impairments.

INTRODUCTION

Several transition metals (Cr, Mn, Fe, Co, Cu, Zn, Mo) have chemical properties that make them essential for biological systems but toxic in excess. Essential and non-essential metals pose the problem of being toxic in the micromolar concentration range. For the homeostasis and detoxification of such metals, bacteria, plants, and animals employ a strategy of synthesizing cysteine-rich peptides and proteins with high-affinity metal-binding sites in the form of thiol groups (Clemens, 2006), viz.
glutathione, phytochelatins, and metallothioneins. Until recently, animal studies on trace metal detoxification took only reduced glutathione (GSH) and the family of metallothioneins (MT) into consideration. The ubiquitous tripeptide GSH (γ-Glu-Cys-Gly) is present in most eukaryotic cells at concentrations of about 0.2 to 10 mmol L-1. Metallothioneins are cysteine-rich, ubiquitous cytosolic proteins of low molecular weight (~4-14 kDa). In addition to GSH and MT, in plants, some fungi, and yeasts, phytochelatins (PCn) represent a third type of thiol-bearing entity (Grill et al., 1985; Clemens, 2006). PCn are rapidly induced in cells and tissues exposed to a range of transition metal ions, including the cations Cd, Ni, Cu, Zn, Ag, Hg, and Pb, and the metalloid arsenic (Clemens, 2006). Homologous genes encoding a functional PCS have been identified in the nematode Caenorhabditis elegans (Clemens et al., 2001; Vatamaniuk et al., 2001) and more recently in the oligochaete Eisenia fetida (Brulle et al., 2008). These findings suggest that PCn may play a role in metal homeostasis in animals as well. Bivalves are known to bioaccumulate persistent organic pollutants and metals, which is one of the suspected causes of the general decline of freshwater bivalves (Franck and Gerstmann, 2007). According to Clemens and Peršoh (2009), bivalves also present a homologous gene of PCS. Anodonta cygnea is a freshwater bivalve belonging to the Unionidae, a filter-feeding and burrowing species living at the water/sediment interface. As bivalves are in close contact with the aquatic environment, they are known to bioaccumulate transition metals (Gundacker, 2000; Nugroho and Frank, 2011). The aim of the present study was to determine whether PCn are synthesised by A. cygnea in the gills and the digestive gland. Indeed, the presence of PCn in animals is established here for the first time.

MATERIALS AND METHODS

Chemicals

Phytochelatin standards (PC2, PC3, PC4, and PC5) [PCn, (γ-Glu-Cys)n-Gly, where n = 2-5] and monobromobimane (mBBr) were from Anaspec (San Jose, CA, USA); acetonitrile (ACN) was also used. For each sample, the organs of two mussels were pooled in order to obtain sufficient mass for analysis. The samples were analysed immediately after sampling or frozen in liquid N2 and stored at -80°C for no more than 4 weeks until analysis. To avoid enzymatic degradation or PCn oxidation, tissues were homogenised in acid buffer (6.3 mmol L-1 DTPA, 0.1% TFA) with a manual Potter-Elvehjem homogenizer with glass pestle. The homogenization was performed with 500 mg of tissue in 1 mL of buffer. The homogenate was centrifuged at 3500 g for 10 min (model 1-15PK, Sigma-Aldrich, St. Quentin Fallavier, France). The resulting supernatant (500 µL) was deproteinized by addition of 125 µL of 5 mol L-1 perchloric acid and centrifuged again at 13000 g for 30 min. The supernatant (500 µL) was neutralised with 100 µL of 5 mol L-1 NaOH and used for PCn analyses. Reduction and derivatization: Just after extraction, 99 µL of tissue extract was mixed with 244 µL of HEPPS buffer (200 mmol L-1 HEPPS, 6.3 mmol L-1 DTPA, pH 8.2), 10 µL of TCEP solution (20 mmol L-1 TCEP in HEPPS buffer, pH 8.2) as disulfide reductant, and 4 µL of a solution of 0.5 mmol L-1 NAC as internal standard. Reduction was conducted in a water-bath at 45°C for 10 min. The sample injection volume was 20 µL. Fluorescence of mBBr-labeled compounds was monitored at an excitation wavelength of 382 nm and an emission wavelength of 470 nm.
Derivatized PCn were separated on a reversed-phase column (Phenomenex-Synergi-Hydro RP C18), 100 mm × 4.6 mm, 4 µm particle size, protected by a C18 guard column, 4 mm × 3 mm, 5 µm (Phenomenex SecurityGuard cartridge). The temperature of the column oven was 40°C. Peak areas were integrated using dedicated software (UniPoint system, version 1.90, Gilson, Villiers-le-Bel, France). The bimane derivatives were separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (89.9 vol-% water, 10 vol-% ACN, 0.1 vol-% TFA). The gradient profile was a linear gradient of mobile phase A from 0 to 10.6% run over 11.2 min at 1 mL min-1. The proportion of solvent A was then raised linearly from 10.6 to 28.6% over 13.6 min. Before injecting a new sample, the column was rinsed with 100% solvent A for 5.5 min at a flow rate of 2.5 mL min-1 and equilibrated with 100% solvent B for a total of 10 min at 1 mL min-1. The total run time for each sample was 40.3 min, including column rinsing and re-equilibration. Purified standards at seven increasing concentrations, ranging from 0.2 to 2 µmol L-1 for PC2-5 and from 1 to 10 µmol L-1 for Cys, GSH, and γ-GluCys, were used to plot the calibration curves. Thiol compound concentrations in the samples were determined using the calibration curve equations. The PC content was expressed in µg of PC per g of tissue wet weight. Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to certify PC identity.

Statistical analyses

Homogeneity of variances and normality of the data were not verified (Bartlett and Shapiro-Wilk tests), so statistical analysis was done using the non-parametric Kruskal-Wallis and Mann-Whitney two-sided tests (R Development Core Team, 2010, R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org). All data means were reported with standard deviations (SD). Differences were considered significant when p < 0.05.

RESULTS AND DISCUSSION

Table 1: Average retention time in minutes (n = 20) ± SD, limit of detection (LOD), and limit of quantification (LOQ) in pmol per 20 µL injected for cysteine-rich metal-binding peptide standards. Standard curves were run with 7 concentrations: 1 to 10 µmol L-1 for Cys, GSH, γ-GluCys, and 0.2 to 2 µmol L-1 for PC2-5.

Component    Retention time (min)   LOD (pmol/20 µL)   LOQ (pmol/20 µL)   r2
Cys          3.14 ± 0.08            0.21               1.09               0.999
GSH          5.99 ± 0.12            0.24               1.40               0.999
γ-GluCys     6.43 ± 0.13            0.37               1.46               0.999
NAC          9.16 ± 0.23            -                  -                  -
PC2          13.09 ± 0.33           0.59               1.39               0.999
PC3          16.62 ± 0.28           0.79               1.52               0.998
PC4          18.59 ± 0.25           1.93               2.70               0.991
PC5          19.75 ± 0.18           4.71               5.09               0.961

-: not determined for the NAC used as internal standard; r2: Pearson correlation coefficients of the standard curves.

In this study, with precolumn mBBr derivatization and reversed-phase HPLC analyses, excellent linearity was obtained for the calibration curves, as shown by the Pearson coefficients (table 1). The quantification limits per 20 µL sample injected into the HPLC column were 1.5 pmol for PC2-3, GSH, and γ-GluCys, 2.7 pmol for PC4, 5 pmol for PC5, and 1 pmol for Cys. In the samples, a part of the mBBr underwent reductive dehalogenation with TCEP to give tetramethylbimane (Me4B) (Graham et al., 2003). Table 2 shows the PCn content of the gills and the digestive gland in µg per g tissue wet weight; the concentrations were highest for PC2 in both organs, i.e. 2.2 ± 0.6 and 0.9 ± 0.2 µg g-1 wet weight in the digestive gland and the gills, respectively. In both organs, the PCn levels decreased in the order PC2 > PC3 > PC4. The concentrations of PC2 and PC3 were two to three times higher in the digestive gland than in the gills, whereas the PC4 levels were nearly equivalent in both organs. The Wilcoxon-Mann-Whitney test showed no significant difference in PC2 content (fig. 4) or PC3-4 (data not shown) between fresh and frozen tissues. PC are important for detoxification of the non-essential metal Cd in plants and fungi.
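As an illustration of the calibration and quantification workflow above, the following sketch (Python/NumPy) fits a linear standard curve and converts a sample peak area into µg PC2 per g of tissue wet weight; the peak areas, dilution factor, and molar mass are assumptions for illustration, not the thesis data:

    import numpy as np

    # Hypothetical PC2 calibration: 7 standard levels (µmol/L) vs. peak area (a.u.)
    conc = np.array([0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0])
    area = np.array([410.0, 1020.0, 1660.0, 2230.0, 2870.0, 3480.0, 4090.0])

    slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
    r2 = np.corrcoef(conc, area)[0, 1] ** 2        # Pearson r^2, as in Table 1

    def quantify(peak_area):
        """Concentration (µmol/L) in the injected, derivatized sample."""
        return (peak_area - intercept) / slope

    # Back-calculation to tissue content. Assumptions: 500 mg tissue per 1 mL
    # buffer; 99 µL extract brought to 401 µL during reduction/derivatization;
    # deproteinization dilution ignored for simplicity; PC2 molar mass ~540 g/mol.
    c_sample = quantify(1500.0)          # µmol/L in the derivatized sample
    dilution = 401.0 / 99.0              # derivatization dilution factor
    mw_pc2 = 540.0                       # g/mol (approximate, assumed)
    ug_per_g = c_sample * dilution * mw_pc2 / 1000.0 / 0.5   # µg per g wet weight
    print(f"r^2 = {r2:.4f}; PC2 = {ug_per_g:.2f} µg/g wet weight")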
The test media were renewed every day, and the bivalves were fed daily as previously described. For sampling, the 12 mussels used in each treatment were dissected. The gills and digestive glands of two mussels were pooled, giving 6 replicates per treatment for analysis.

Determination of PC and other cysteine-rich metal-binding peptides

PC were determined according to Minocha et al. (2008), with a slight modification in the protein removal step and in the HPLC mobile phase gradient profile (Santini et al., 2011b). Cysteine-rich peptides were determined in the cytosolic fractions of the deproteinized tissue homogenates after reduction. Extraction: All dissection, extraction, and centrifugation steps were carried out at 4°C. The gills and digestive gland were carefully dissected, the digestive content was removed, and the tissues were washed in Tris buffer (25 mmol L-1 Tris, 50 mmol L-1 NaCl, pH 8.0). The organs from two mussels were pooled in order to obtain sufficient mass for analysis. The samples were analysed immediately after sampling, or frozen in liquid N2 and stored at -80°C until analysis. To avoid enzymatic degradation or PC oxidation during extraction, the tissues were homogenised in acid buffer (6.3 mmol L-1 DTPA, 0.1% TFA) with a manual Potter-Elvehjem homogenizer with glass pestle. The homogenization was performed with 500 mg of tissue in 1 mL of buffer. The homogenate was centrifuged at 3500 g for 10 min (model 1-15PK, Sigma-Aldrich, St. Quentin Fallavier, France). The resulting supernatant (500 µL) was deproteinized by addition of 125 µL of 5 mol L-1 perchloric acid and centrifuged again at 13000 g for 30 min. The supernatant (500 µL) was neutralised with 100 µL of 5 mol L-1 aqueous NaOH and used as the tissue extract for PC analyses. Peak areas were integrated using dedicated software (UniPoint system version 1.90, Gilson, Villiers-le-Bel, France). The bimane derivatives were separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (89.9 vol-% water, 10 vol-% ACN, 0.1 vol-% TFA). The gradient profile was a linear gradient of mobile phase A from 0 to 10.6 vol-% run over 11.2 min at 1 mL min-1. The proportion of solvent A was then raised linearly from 10.6 to 28.6 vol-% over 13.6 min at 1 mL min-1. Before injecting a new sample, the column was rinsed with 100% solvent A for 5.5 min at a flow rate of 2.5 mL min-1 and equilibrated with 100% solvent B for a total of 10 min at 1 mL min-1. The total run time for each sample was 40.3 min, including column rinsing and re-equilibration. Purified standards at seven increasing concentrations, ranging from 0.2 to 2 µmol L-1 for PC2-5 and from 1 to 10 µmol L-1 for Cys, GSH, and γ-GluCys, were used to plot the calibration curves. Thiol compound concentrations in the samples were determined using the calibration curve equations. The PC content was expressed in µg of PC per g of tissue wet weight. Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to certify PC identity. The quantification limits per 20 µL sample injected into the HPLC column were 1.5 pmol for PC2-3, GSH, and γ-GluCys, 2.7 pmol for PC4, 5 pmol for PC5, and 1 pmol for Cys.

Determination of metallothionein

The MT content was determined by the method of Romero-Ruiz et al. (2008). All steps were carried out at 4°C. Organs from two mussels were pooled in order to obtain sufficient mass for analysis. The samples were immediately frozen in liquid N2 and stored at -80°C until analysis. The gills and digestive gland were homogenised in buffer (0.1 mol L-1 Tris, 1 mmol L-1 DTT, 50 mmol L-1 PMSF, 6 mmol L-1 leupeptin, pH 9.5) at a ratio of 5 mL g-1 with a manual Potter-Elvehjem homogenizer with glass pestle.
The homogenate was centrifuged at 3500 g for 10 min. The resulting supernatant was centrifuged again at 25000 g for 30 min, and this supernatant was used as the extract for MT analyses. Tissue extract (125 µL) was mixed with 30 µL of 0.23 mol L-1 Tris buffer (pH 9.5), 10 µL of 0.3 mol L-1 DTT, 5 µL of 0.1 mol L-1 EDTA (pH 7), and 63 µL of a 12% SDS aqueous solution. Reduction and denaturation were conducted in a water-bath at 70°C for 20 min. For derivatization, 16.7 µL of a 0.18 mol L-1 mBBr solution in ACN was added and the sample was incubated for 15 min at room temperature, protected from light. The derivatized samples were analysed chromatographically on a VWR reversed-phase column (LiChroCART LiChrospher RP C18, Strasbourg, France), 250 mm × 4 mm, 5 µm particle size, fitted with a C18 guard column, 4 mm × 3 mm, 5 µm (Phenomenex SecurityGuard cartridge). The fluorescence detector parameters were the same as for the PC analyses. MT was separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (99.9 vol-% water, 0.1 vol-% TFA). After injection, mobile phase A was kept at 30% for 10 min and then raised to 70% by a 1-min linear gradient. These conditions were maintained for 10 min before the initial conditions (30% phase A) were re-established by a 1-min linear decrease. Before injecting a new sample, the column was rinsed and equilibrated under the initial conditions for 8 min. The flow was kept at 1 mL min-1 throughout the analysis. The injection volume was 20 µL. A plot of peak area versus purified standard content was established using seven increasing concentrations, from 0.4 to 1 µmol L-1, of rabbit liver MT-I. The equation obtained from this calibration line was used for MT quantification in the samples. The MT content was expressed in mg per g of protein. Protein was determined according to Bradford (1976) using bovine serum albumin as standard.

Statistical analyses

As homogeneity of variances and normality of the data were not verified (Bartlett and Shapiro-Wilk tests), statistical analysis was done using the non-parametric Kruskal-Wallis and Mann-Whitney two-sided tests (R Development Core Team, 2010, R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org). All data were reported as means ± standard deviation (SD). Differences were considered significant when p < 0.05.

RESULTS

A broad peak of tetramethylbimane, resulting from the reaction between mBBr and TCEP, eluted at approximately 10.5 min (fig. 1 A). At 17.96 min an unidentified compound (peak "a") from the reagents eluted. In the standards profile (fig. 1 B), the mean phytochelatin elution times were: PC2 at 13.09 min, PC3 at 16.62 min, PC4 at 18.59 min, and PC5 at 19.75 min. In the present work, MT identification and quantification were carried out with a purified rabbit liver MT-I standard. After HPLC separation, the MT in the mussel extract eluted at the same time as the rabbit liver standard, i.e. it has a molecular weight and cysteine content similar to those of mammalian MT-I (Alhama et al., 2006). This MT isoform from A. cygnea does not appear to be induced by copper. Cu2+ induction of MT in A. anatina was shown using a spectrophotometric method which allowed total MT isoform quantification (Nugroho and Frank, 2012). Since the discovery of genes encoding a functional PC synthase in invertebrates, the question of the implication of PC in metal detoxification and homeostasis in animals has arisen.
The present study shows a clear PC2 induction in freshwater mussels under Cu2+ exposure, indicating a key role of this cysteine-rich polypeptide as a metal chelator. PC2 increased in the gills from 12 h to 7 d, so it may serve as a first line of defense together with other compounds such as GSH. The decline of PC2 to control values by 21 d suggests that at later times copper is handled by other detoxification mechanisms such as MT and insoluble granules (Viarengo and Nott, 1993). Sequestration of 65% of the intracellular copper in granules has been shown in the gills of unionid bivalves (Bonneris et al., 2005). In the digestive gland, PC2 increased significantly only up to 12 h of Cu2+ exposure. The waterborne route of Cu exposure could explain the higher sensitivity of PC2 induction in the gills, which may play a more important role than the digestive gland in the uptake of copper dissolved in the surrounding water. Moreover, PC induction depends on the metal species, Cu having been reported as a weak inducer compared to Cd (Zenk, 1996). Further studies with Cd would be interesting in order to determine whether PC2-4 may be efficient in the detoxification of such a non-essential metal. Additional investigations are necessary to assess the relevance of PC as a potential biomarker of metal exposure in molluscs.

Annexe 2: Algae culture medium

Summary: Copper (Cu) is one of the metals contaminating European freshwater ecosystems. Filter-feeding bivalves have a high bioaccumulation potential for transition metals such as Cu. While Cu is an essential micronutrient for living organisms, it causes serious metabolic and physiological impairments when in excess. The objectives of this thesis are to gain knowledge on the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two mussel species widely distributed in continental waters. Because calcium (Ca) plays a fundamental role in shell formation and in numerous biological processes, Cu2+ effects on cellular plasma membrane Ca transport were studied first. In a second step, the investigations focused on Cu2+ detoxification mechanisms involving Cys-rich compounds known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions, Cu2+ inhibition of Ca2+-ATPase activity was observed in the gills and the kidneys, and inhibition of Na+/K+-ATPase in the gills and the digestive gland, after 4 d of exposure. At day 7 of exposure to 0.35 µmol L-1 Cu2+, total recovery was observed in the kidneys and the gills for Ca2+-ATPase activity and in the digestive gland for Na+/K+-ATPase, but not at the higher dose, where inhibition persisted. Inhibition of Ca and Na transport may entail disturbance of osmoregulation and lead to a continuous under-supply of Ca. The recovery of Na+/K+-ATPase and Ca2+-ATPase function suggests that metal detoxification is induced. Phytochelatins (PC) are Cys-rich oligopeptides synthesised from glutathione by phytochelatin synthase in plants and fungi. Phytochelatin synthase genes have recently been identified in invertebrates; this allowed us to hypothesize a role of PC in metal detoxification in animals. In the second part of this work, PC and their precursors, as well as metallothionein, were analyzed in the gills and the digestive gland of A. cygnea exposed to Cu2+. Our results showed for the first time the presence of PC2-4 in invertebrates.
PC were detected in control mussels not exposed to metal, suggesting a role in essential metal homeostasis. Compared to the controls, PC2 induction was observed during the first 12 h of Cu2+ exposure. These results confirm the role of PC as a first-line detoxification mechanism in A. cygnea.

Key words: calcium homeostasis, copper, Anodonta freshwater bivalve, phytochelatins

List of figures

Fig. 1: Copper metabolism in mammals (Jondreville et al.).
Fig. 2: A: Anodonta anatina, B: Anodonta cygnea (www.biopix.eu).
Fig. 3: Geographical distribution of Anodonta anatina, and distribution of Anodonta cygnea.
Fig. 4: Biological cycle of Unionidae (http://biodidac.bio.uottawa.ca).
Fig. 5: Longitudinal section of a unionid (Mouthon, 1982); p.a.m.: posterior adductor muscle, a.a.m.: anterior adductor muscle.
Fig. 6: GSH metabolism (Mendoza-Cózatl and Moreno-Sánchez, 2006).
Fig. 7: Redox cycle of MT.

Materials and methods:
Fig. 1: Protocol of tissue fraction preparation for enzyme analyses (Ca2+-ATPase, Na+/K+-ATPase, H+-ATPase, and carbonic anhydrase, CA) in the mantle (M), digestive gland (DG), gills (G), and kidneys (K) of the freshwater mussel A. anatina.
Fig. 2: Protocol of tissue fraction preparation for thiol-rich compound analyses, i.e. metallothionein (MT), phytochelatins (PC) and precursors, in the digestive gland (DG) and gills (G) of the freshwater mussel A. anatina.
Fig. 3: Phytochelatin and precursor biosynthesis in plants (Mendoza-Cózatl and Moreno-Sánchez, 2006).

Article 1:
Fig. 1: Schematic presentation of the Ca flow through the different compartments of a freshwater bivalve, and inhibition of PMCA by Cu in the kidneys and gills. MCE: mantle cavity epithelium, DG: digestive gland, PMCA: plasma membrane Ca2+-ATPase.
Fig. 2: PMCA activity in kidneys (K), gills (G), and digestive gland (DG) of A. anatina upon 0 (control), 4, 7, and 15 days of exposure to 0.35 µmol L-1 (a) and 0.64 µmol L-1 Cu2+ (b). Means of results (n = 3) ± SE, presented as ratios of the enzymatic activity at the respective Cu concentration and time vs. control. For K, the error bars reflect the variability of the analytical determination of the pooled tissue sample; for G and DG, the biological variation between animals. * = significantly lower than control, # = significantly higher than control (U test, p < 0.05).

Article 2:
Fig. 1: Na+/K+-ATPase activity determined in September in the gills (G), digestive gland (DG), and mantle (M) of A. anatina upon 0 (control), 4, 7, and 15 days of exposure to 0.35 µmol L-1 Cu2+. Means (n = 3) ± SD, in µmol Pi/mg protein/min. * = significantly lower than control (two-sided Mann-Whitney test, α = 0.05).
Fig. 2: H+-ATPase activity determined in September in the mantle (M) of A. anatina under the same exposure conditions. Means (n = 3) ± SD, in µmol Pi/mg protein/min. * = significantly lower than control (two-sided Mann-Whitney test, α = 0.05).
Fig. 3: Na+/K+-ATPase basal activity in the gills (G), digestive gland (DG), and mantle (M) of A. anatina in July, September, January, March, and April 2007/2008. Means (n = 3) ± SD, in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (two-sided Mann-Whitney test, α = 0.05).
Fig. 4: H+-ATPase basal activity in the mantle (M) of A. anatina over the same months. Means (n = 3) ± SD, in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (two-sided Mann-Whitney test, α = 0.05).

Article 3:
Fig. 1: Chromatograms of (A) reagents blank with homogenisation buffer and (B) mix of the eight cysteine-rich peptide standards; the broad peak is tetramethylbimane (Me4B).
Fig. 2: Chromatograms of digestive gland samples obtained with the same extract: alone (A), and spiked with 0.6 µmol L-1 of each standard PC2-5 (B).
Fig. 3: Chromatograms of gill samples made with the same extract: alone (A), and spiked with 0.6 µmol L-1 of each standard PC2-5 (B).
Fig. 4: PC2 content in the digestive gland (DG) and the gills (G) of Anodonta cygnea, means (n = 6) ± SD. No significant difference was found between fresh (white bars) and frozen (grey bars) tissues (Wilcoxon-Mann-Whitney test).

Article 4:
Fig. 1: Chromatograms of (A) reagent blank with homogenisation buffer and (B) mix of the eight cysteine-rich peptide standards; the broad peak is tetramethylbimane (Me4B).
Fig. 2: Chromatograms of gill samples: mussels exposed for 4 days to 0.35 µmol L-1 Cu2+ (A), mussel controls at 4 days (B), mussel controls at 4 days spiked with 0.6 µmol L-1 of each standard PC2-5 (C); a.u.: area unit.
Fig. 3: PC2 content in the digestive gland (DG) and the gills (G) of A. cygnea exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD. * = significant difference between the exposed group and its respective control (Kruskal-Wallis and two-sided Mann-Whitney tests, n = 6, α = 0.05).

The properties that make copper a biological catalyst (respiration via cytochrome c oxidase, connective tissue formation via lysyl oxidase) are also at the origin of its toxicity when it is in excess. Copper is bioaccumulative and can become a threat to the biocoenosis; mechanisms regulating copper concentration and detoxifying copper are therefore essential for living organisms. Molluscs account for a large proportion of the macroinvertebrates in aquatic ecosystems, and among this group the bivalves are of particular interest. Because of the intense filtering activity required for their respiration and nutrition, bivalves have the capacity to accumulate numerous contaminants. In the ecosystem, they play an important role in the transfer of matter from the water column to the sediments; bivalve faeces and pseudofaeces make phytoplankton available to detritivores and can modify sediment quality by concentrating pollutants. Inhibition of Ca2+-ATPase and Na+/K+-ATPase activities was observed in A. anatina exposed to Cu2+. An inhibition followed by a total recovery of the enzymatic activity, observed at an exposure concentration of 0.35 µmol L-1 Cu2+ but not at the higher concentration (0.64 µmol L-1), indicates a detoxification mechanism; these results suggest the induction of metal detoxification mechanisms. The second step of this study focused specifically on metal-chelating Cys-rich compounds. We hypothesized that phytochelatins could be present in bivalves and play a role in copper detoxification. Cys-rich compounds are polypeptides or proteins, such as phytochelatins or metallothioneins, with a high Cys content.
They play a major role as metal chelators, in the homeostasis of essential metals and in the detoxification of non-essential metals. Phytochelatins (PC) are thiol-rich polypeptides of general formula (γ-Glu-Cys)n-Gly, synthesised by phytochelatin synthase (PCS) from glutathione. PC complex metal ions, reducing the intracellular concentration of free metal ions in plants, fungi and microalgae. Briefly, adult A. cygnea and A. anatina (7.5 ± 0.5 cm long) were placed in aquaria at 20 ± 0.5°C. The artificial pond water, 1.5 L per mussel, pH 7.25 ± 0.10 (in mmol L-1: 0.40 Ca2+, 0.20 Mg2+, 0.70 Na+, 0.05 K+, 1.35 Cl-, 0.20 SO42-, 0.20 HCO3-), was renewed daily. The bivalves were fed daily with a culture of Chlorella kessleri in the exponential growth phase, added to a final algal density of 2 × 10^5 cells/mL. The animals were acclimatized to these conditions for two weeks before any experiment.

Fig. 2: Protocol for the analysis of thiol-rich compounds. GD: digestive gland, B: gills.

PC2 increased in the gills upon copper exposure, with a 30% increase at 7 days of exposure. Beyond 7 days, PC2 returned to the level measured in the controls. In the digestive gland, no significant variation of PC2 was evidenced except at 12 h of Cu2+ exposure. The γ-GluCys concentrations were significantly higher in the gills of mussels exposed to copper for 48 hours and 7 days. The γ-GluCys concentration also increased in the digestive gland of the bivalves at 48 h and 4 d of copper exposure. No significant variation of the MT level was observed in the gills or the digestive gland of the mussels over the 21 days of copper exposure. The disruption of calcium transport and biomineralization mechanisms by copper, and the detoxification mechanisms involving metal-binding thiol compounds, are presented in the following sections. Copper is widely used in industry, construction, and transport (LME, 2011). Primary production of copper increased from 9.6 million tons in 1980 to 16.9 million tons in 2006 (INERIS, 2010). In 2007, according to INERIS (2010), 35% of world copper consumption came from recycled copper. In Europe, the recycling rate of copper in 2007 was 41%, and in 2006 copper consumption was about 4.7 million tons (21% of global demand). The demand for copper in Europe was estimated in 2007 at 3.85 million tons (European Copper Institute, 2009).

Fig. 3: Geographical distribution (enclosed by the red line) of Anodonta anatina, and distribution of Anodonta cygnea in European continental hydrosystems (Başçınar; www.discoverlife.org).
Fig. 4: Biological cycle of Unionidae (http://biodidac.bio.uottawa.ca).
Fig. 5: Longitudinal section of a unionid (Mouthon, 1982); p.a.m.: posterior adductor muscle, a.a.m.: anterior adductor muscle.

Especially for larvae and juveniles, little is available to define the main threats and causes of extinction. Some ROS are free radicals presenting unpaired electrons (e.g. the hydroxyl radical HO•); others are non-radical species such as hydrogen peroxide, H2O2. To reach a more stable state, these radicals capture electrons from reductant molecules, so that ROS reduction causes chain oxidations.
All the biomolecules of the cell (nucleic acids, lipids, proteins, polysaccharides) are potential reducing substrates for ROS. The instability of a ROS determines its diffusion capacity: a weakly reactive form tends to act far from its site of production, as it has a significant diffusion radius, whereas a very reactive species acts very quickly and its diffusion is limited. ROS include the superoxide anion radical (O2•-), the hydroperoxyl (ROO•) and alkoxyl (RO•) radicals, nitric oxide (NO•), and the hydroxyl radical (HO•). Molecular oxygen, O2, can also be regarded as a radical species since it has two unpaired electrons. The superoxide anion is produced during various reactions with transition metals, and enzymes are implicated in its formation. H2O2 reacts weakly, diffuses freely, and has a long lifetime. The ROO• and RO• radicals arise from the peroxidation of lipids; these radicals allow the gradual propagation of lipid peroxidation. Free radicals are produced physiologically during normal cell metabolism. They can also be formed in response to a wide range of exogenous agents including radiation, metal ions, solvents, particulate matter, nitrogen oxides, and ozone. NO• plays a role both in the destruction and in the production of radicals: it is not very reactive with cellular components and reacts with radicals to generate less reactive species, but combined with the superoxide anion radical it may be involved in the formation of peroxynitrite, a highly toxic species. Due to its high reactivity, HO• is quite non-specific in its targets for oxidation, whereas ROS with lower rate constants react more specifically. ROS are at the origin of lipid peroxidation. Copper ions are involved in redox reactions which result in the production of ROS. In the Fenton reaction, cuprous ions react with H2O2, giving rise to the extremely reactive HO• (Labieniec et al.). In the case of organic hydroperoxides (ROOH), a homologous reaction is thought to occur, leading to the formation of ROO• and of the more reactive RO•. Copper ions may participate both in the initiation and in the propagation steps of lipid peroxidation, thus stimulating the in vivo degradation of membrane lipids.
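The copper-driven radical chemistry just described can be summarized by the following equations (a standard textbook formulation, given here for illustration rather than quoted from the articles):

\begin{align*}
\mathrm{Cu^{2+} + O_2^{\bullet-}} &\longrightarrow \mathrm{Cu^{+} + O_2} \\
\mathrm{Cu^{+} + H_2O_2} &\longrightarrow \mathrm{Cu^{2+} + HO^{\bullet} + OH^{-}} \quad \text{(Fenton reaction)} \\
\mathrm{Cu^{+} + ROOH} &\longrightarrow \mathrm{Cu^{2+} + RO^{\bullet} + OH^{-}} \quad \text{(organic hydroperoxides)}
\end{align*}

The redox cycling between Cu2+ and Cu+ is what allows catalytic amounts of copper to sustain HO• and RO• production.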
Cellular antioxidant defenses comprise antioxidant enzymes and molecules without enzymatic activity. The antioxidant enzymes are superoxide dismutases (SODs) and catalase (CAT): SODs catalyze the dismutation of the superoxide anion radical with formation of molecular oxygen and H2O2, the latter being detoxified by CAT. SOD and CAT are localised in peroxisomes, and also in mitochondria and the cytosol. Isoenzymes of SOD are found in various compartments of the cell.

Fig. 6: GSH metabolism (Mendoza-Cózatl and Moreno-Sánchez, 2006); γ-ECS: glutamate-cysteine ligase, GS: glutathione synthase, GR: glutathione reductase, GPx: glutathione peroxidase, GST: glutathione transferase, Xe: xenobiotic, GSH: glutathione, GSSG: glutathione disulfide.

A first classification distinguished three classes: Class I, comprising the MT which show biochemical homology and a close chromatographic elution time to horse MT; Class II, comprising the remaining MT with no homology to horse MT; and Class III, which includes the phytochelatins, Cys-rich enzymatically synthesised peptides. A second classification (Binz; http://www.bioc.uzh.ch/mtpage/classif.html) takes into account taxonomic parameters and the patterns of distribution of Cys residues along the MT sequence. Cysteine (Cys) residues are distributed in typical motifs consisting of Cys-Cys, Cys-X-Cys or Cys-X-X-Cys sequences (X denoting amino-acid residues other than Cys). This results in a classification of 15 families for proteinaceous MT; mollusc MT belong to family 2, divided into two subfamilies.

Fig. 7: Redox cycle of MT (Kang); MT: metallothionein, ROS: reactive oxygen species, GSH: glutathione, GSSG: glutathione disulfide.

Fig. 8: PC synthesis (Vatamaniuk et al.).

Data for Zn/Cu-thionein indicate that copper is present in the form of Cu(I). Unlike Zn, Cu is arranged differently in the MT clusters: Cu(I) atoms are bound via divalent coordination, which allows 10 copper atoms to bind per MT protein. PCn are synthesised by phytochelatin synthase (PCS), which catalyzes the transpeptidation of the γ-Glu-Cys moiety of GSH onto a second GSH molecule or onto a PCn peptide, (γ-Glu-Cys)n-Gly (n = 2-11), to form PCn+1.
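The PCS-catalysed transpeptidation described above can be written schematically as follows (the notation is chosen here for illustration; PCS activation by metal ions is indicated over the arrow):

\begin{align*}
\underbrace{(\gamma\text{-Glu-Cys})\text{-Gly}}_{\text{GSH}} \;+\; (\gamma\text{-Glu-Cys})_n\text{-Gly}
\;\xrightarrow{\;\text{PCS},\ \text{metal ions}\;}\;
(\gamma\text{-Glu-Cys})_{n+1}\text{-Gly} \;+\; \text{Gly}
\end{align*}

For n = 1 the acceptor is a second GSH molecule and the product is PC2; repeated transfers yield the longer PCn chains (n up to 11).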
PCn are rapidly induced in cells and tissues exposed to a range of transition metal ions, including the cations Cd, Ni, Cu, Zn, Ag, Hg, and Pb, and the metalloid arsenic. Derivatization was carried out by addition of 4 µL of a solution of 50 mmol L-1 mBBr in acetonitrile and incubation in a water-bath at 45°C for 30 min in the dark. The reaction was stopped by addition of 40 µL of an aqueous solution of 1 mol L-1 MSA. The high-performance liquid chromatographic analysis was performed with an HPLC instrument (Gilson, Roissy, France) equipped with a dual solvent pump (model 322), an autosampler (model 234), a 100 µL injection loop, and a fluorescence detector (model 122).

Fig. 1: Chromatograms of (A) reagents blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B). Peak "a" is an unidentified compound originating from the derivatization reaction with the reagent. Standard concentrations were 10 µmol L-1 for cysteine (Cys), glutathione (GSH), and γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetyl-cysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

Fig. 4: PC2 content in the digestive gland (DG) and the gills (G) of Anodonta cygnea, means (n = 6) ± SD. No significant difference was found between fresh (white bars) and frozen (grey bars) tissues (Wilcoxon-Mann-Whitney test).

Reduction and derivatization: Just after extraction, 244 µL of HEPPS buffer (200 mmol L-1 HEPPS, 6.3 mmol L-1 DTPA, pH 8.2) was mixed with 10 µL of TCEP solution (20 mmol L-1 TCEP in HEPPS buffer, pH 8.2) as disulfide reductant, 4 µL of 0.5 mmol L-1 NAC as internal standard, and 99 µL of tissue extract. Disulfide reduction was conducted in a water-bath at 45°C for 10 min. Derivatization was carried out by addition of 4 µL of mBBr (50 mmol L-1 in acetonitrile) and incubation in a water-bath at 45°C for 30 min in the dark. The reaction was stopped by addition of 40 µL of an aqueous solution of 1 mol L-1 MSA. Analyses were performed using an HPLC instrument (Gilson, Roissy, France) equipped with a dual solvent pump (model 322), an autosampler (model 234), a 100 µL injection loop, and a fluorescence detector (model 122). The fluorescence of mBBr-labeled molecules was monitored at an excitation wavelength of 382 nm and an emission wavelength of 470 nm. The injection volume was 20 µL. The derivatized PC were separated on a reversed-phase column (Phenomenex-Synergi-Hydro RP C18), 100 mm × 4.6 mm, 4 µm particle size, fitted with a C18 guard column (Phenomenex SecurityGuard cartridge), 4 mm × 3 mm, 5 µm. The temperature of the column oven was 40°C. Peak areas were integrated using dedicated software.

Fig. 1: Chromatograms of (A) reagent blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B). Peak "a" is an unidentified compound originating from the derivatization reaction with the reagent. Standard concentrations were 10 µmol L-1 for cysteine (Cys), glutathione (GSH), and γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetyl-cysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

Fig. 2: Chromatograms of gill samples: mussels exposed for 4 days to 0.35 µmol L-1 Cu2+ (A), mussel controls at 4 days (B), mussel controls at 4 days spiked with 0.6 µmol L-1 of each standard PC2-5 (C); a.u.: area unit.

Fig. 3: PC2 content in the digestive gland (DG) and the gills (G) of A. cygnea exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD.
* = significant difference between the exposed group and its respective control (Kruskal-Wallis and two-sided Mann-Whitney tests, n = 6, α = 0.05).

Abbreviations: PC: phytochelatin(s); PCS: phytochelatin synthase; Pi: inorganic phosphate; PMCA: plasma membrane Ca2+-ATPase; R: kidneys; S: supernatant; SH: thiol group; TCEP: tris-(2-carboxyethyl)-phosphine hydrochloride.

Introduction

Human activity is associated with the development of industry and agriculture, which have become indispensable. These sectors are responsible for the production and release of numerous pollutants, whose chemical and physical properties and modes of transport determine their spread into all compartments of ecosystems. The aquatic environment is the final reservoir for most pollutants, including metals. Copper is among the most commonly used metals because of its physical and chemical properties (particularly its electrical and thermal conductivity). As a pure metal, in alloys, or in ionic form, it is used in a large number of industrial and agricultural sectors. As a result, copper is frequently detected in continental aquatic environments, where it is present in the water column and accumulates in the sediments (INERIS, 2010). The chemical properties of copper make it an element used above all as a biological catalyst of enzymatic reactions, essential to many biological processes involving vital functions such as respiration or photosynthesis. These two invertebrates are native to European hydrological systems. Over recent decades, a regression of unionid populations has been observed in Europe. The objective of this thesis was to acquire knowledge on the mechanisms by which copper perturbs calcium metabolism in Anodonta anatina. Calcium is an essential element in the functioning of eukaryotic organisms; it controls several vital processes (Ermak and Davies, 2001). Its uptake, the maintenance of its intracellular concentration in the organism, and biomineralization are made possible by the control of its passage across cell membranes, which occurs by simple diffusion but also through transport proteins. In addition to calcium, biomineralization requires carbonate ions, produced in part by carbonic anhydrase (CA). We studied the effects of Cu2+ on calcium transport in Anodonta anatina by assessing the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase and the cytosolic activity of CA, enzymes involved in calcium uptake and in biomineralization. The organs studied were the gills, the digestive gland, the kidneys, and the mantle, which play an important role in calcium uptake and shell synthesis. Invertebrates belonging to the Unionidae, known to readily bioaccumulate metals, might be able to synthesize PC; this is the hypothesis we put forward in the second part of the thesis. The presence of PC is known in plants, not in animal organisms. The induction of PC and of their precursors was evaluated in the gills and the digestive gland of Anodonta cygnea exposed to copper.
Statistical analyses

As homogeneity of variances and normality of the data were not verified (Bartlett and Shapiro-Wilk tests), statistical analysis was carried out with the non-parametric Kruskal-Wallis and Mann-Whitney tests. Differences were considered significant when p < 0.05.

Analyses

Preliminary experiments were carried out with three bivalves exposed for four days to each of the Cu2+ concentrations 0.26, 0.54 and 1.15 µmol L-1, in order to identify the most relevant concentrations. To study the effects of copper on the enzymatic activities, exposures were performed at 0.35 and 0.64 µmol L-1 Cu2+ over 15 days. Three mussels were sampled and dissected for each treatment: day 0 (control), 4 d, 7 d and 15 d. The organs of each individual were prepared and analysed separately, giving 3 replicates per treatment. The effects of copper on metal detoxification mechanisms were evaluated in A. cygnea through the following parameters: PC and their precursors, and MT. The bivalves were exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d and 21 d; controls were kept in parallel, for the same durations, in artificial pond water. The test media were renewed every day and the bivalves were fed daily as described previously. Twelve mussels were sampled per treatment and dissected; the gills and the digestive gland of two mussels were pooled, giving 6 replicates per treatment for analysis.

Fig. 1: Protocol of enzymatic analysis. M: mantle, GD: digestive gland, B: gills, R: kidneys, S: supernatant, C: pellet.

In molluscs, intracellular granules composed of calcium/magnesium orthophosphate (Ca3(PO4)2, Mg3(PO4)2) and pyrophosphate (Ca2P2O7, Mg2P2O7) can contain Mn, Zn, Cu, Fe, Co and Cd. This organelle accumulates high concentrations of trace metals in non-toxic granule form and thus represents an important detoxification pathway. The ferritin-rich and copper-sulphur granules are related to Fe and Cu metabolism in the respiratory pigment, and also to copper detoxification. Lipofuscins are mainly lipid-peroxidation end-products which accumulate in the lysosomes as insoluble lipoprotein granules. Metals such as Cu, Cd, and Zn are trapped by the lipofuscin and sterically prevented from moving in or out of the granule.

Table 1: Basal enzymatic activities of plasma membrane ATPases (µmol Pi/mg protein/min) and cytoplasmic CA (U/mg protein) in Anodonta anatina.

                    G       DG      M       K       Reference
Ca2+-ATPase         0.45    0.095   -       0.087   Article 1
Na+/K+-ATPase       0.098   0.015   0.042   -       Article 2
H+-ATPase           -       -       0.002   -       Article 2
CA                  2.4     1.76    -       -       Article 1

G: gills; DG: digestive gland; M: mantle; K: kidneys; CA: carbonic anhydrase; -: not determined.

Table 1: Plasma membrane Na+/K+-ATPase and H+-ATPase activities (µmol Pi/mg protein/min) in different tissues of freshwater and marine organisms.

Species                        Na+/K+-ATPase   H+-ATPase   Tissues   References
Freshwater bivalves
  Anodonta anatina             0.098           -           G         The present study
                               0.015           -           DG
                               0.042           0.002       M
  Anodonta cygnea              0.109           -           G         Lagerspetz and Senius, 1979
  Asellus aquaticus            0.018           -           G         Bouskill et al., 2006
  Dreissena polymorpha         0.006           -           G
Marine bivalve
  Mytilus galloprovincialis    0.033           -           G         Viarengo et al., 1996
Freshwater crustacean
  Dilocarcinus pagei           -               0.023       G         Firmino et al., 2011
Brackish-water crustacean
  Acartia tonsa                0.014           -           total     Pedroso et al., 2007
Freshwater fish
  Oncorhynchus mykiss          -               0.025       G         Lin and Randall, 1993
  Trichogaster microlepis      0.015           0.003       G         Huang et al., 2010
  Bidyanus bidyanus            0.124           0.041       G         Alam and Frankel, 2006
  Macquaria ambigua            0.052           0.032       G
  Perca flavescens             0.064           -           G         Packer and Garvin, 1998

G: gills, DG: digestive gland, M: mantle.

Table 2: PC content (µg PC/g tissue wet weight) in the digestive gland and the gills of Anodonta cygnea, means (n = 6) ± SD.

                    PC2           PC3           PC4           PC5
Digestive gland     2.17 ± 0.59   1.10 ± 0.12   0.47 ± 0.52   < LOD
Gills               0.88 ± 0.15   0.72 ± 0.57   0.40 ± 0.44   < LOD

< LOD: below limit of detection.

Clemens and Peršoh (2009) and Gonzalez-Mendoza et al. (2007) support the hypothesis that PC have dual functions: metal detoxification and essential metal homeostasis (Zn, Cu). In our work, the mussels were kept before the analyses for at least 2 weeks in artificial pond water (see animal maintenance) without exposure to non-essential metals such as Cd, but PC2-4 could nevertheless be detected. The presence of PC2-4 without the animals being exposed to Cd or other transition metals suggests that PC could have functions comparable to those of other cysteine-rich compounds, i.e. in the homeostasis of essential trace metals or as reducing agents. The evidence of homologous genes for functional PCS in animals suggests that PCn play a wider role in heavy-metal detoxification than previously thought. The results obtained in this study highlight, for the first time, the ability of the freshwater bivalve Anodonta cygnea to synthesize PC2-4.

INTRODUCTION

Copper is extensively used for various technical applications, especially as conducting metal for electro-technical equipment and electrical power lines, as a catalyst in the chemical industry, and in the building industry for water pipes and roofing; a small amount goes into its use as a fungicide, by which route it is directly emitted into the environment. Annual copper consumption in Europe alone is 3.5 × 10^6 tons, and worldwide about 17 × 10^6 tons (INERIS, 2010). In the present work, 0.35 µmol L-1 Cu2+ (controlled by graphite furnace atomic absorption spectrometry, GFAAS; limit of detection for Cu = 0.5 µg L-1 = 8 nmol L-1) was chosen as the concentration for the exposure test. The experiments were performed in aquaria lined with dye- and pigment-free high-density polyethylene foil, filled with 1.5 L of experimental medium (pH 7.25) per mussel. The mussels were kept at 20°C in a thermo-regulated room with a photoperiod of 16 h light and 8 h darkness. The animals were divided into groups of 12 mussels. These groups were exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d, or kept for the same duration in artificial pond water as corresponding time controls. The test media were renewed every day.
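The mass-to-molar conversion for the GFAAS detection limit quoted above can be checked with a one-line computation (Python; the atomic mass is the standard value, assumed here):

    # Convert the Cu detection limit from mass to molar concentration.
    CU_MOLAR_MASS = 63.55                      # g/mol, standard atomic mass of Cu
    limit_ug_per_L = 0.5                       # µg/L
    limit_nmol_per_L = limit_ug_per_L / CU_MOLAR_MASS * 1000.0   # -> nmol/L
    print(f"{limit_ug_per_L} µg/L Cu = {limit_nmol_per_L:.1f} nmol/L")   # 7.9, i.e. ~8 nmol/L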
Acknowledgements

I am very grateful to Professor Konrad Dettner, Professor Britta Planer-Friedrich, and Professor Jouni Taskinen for participating in the jury for the defense of this Ph.D. and for accepting to review this Ph.D. thesis. I thank Professor Pascale Bauda, Professor Laure Giambérini, and Professor Jean-François Férard, the present and past directors of the laboratory, for welcoming me into the Laboratory Interactions Ecotoxicology Biodiversity Ecosystems (LIEBE). I wish to express my sincere gratitude to my two supervisors, Professor Hartmut Frank and Professor Paule Vasseur, who welcomed me into their teams. I am indebted to my supervisors for their guidance and for all of their valuable advice and remarks throughout this Ph.D. thesis. I thank Doctor Naima Chahbane, Lecturer-HDR Carole Cossu-Leguille, and Doctor Silke Gerstmann for their interest in this research and their help. My warm thanks go to Mrs Agnes Bednorz for her kindness and her help. I am indebted to Lecturer Sylvie Cotelle for allowing me to use the HPLC equipment. I acknowledge Mr Philippe Rousselle for the copper determinations by GFAAS and FAAS. Many thanks to Mrs Catherine Drui, Mrs Maryline Goergen and Mrs Irmgard Lauterbach for their help in administrative issues. I am pleased to acknowledge all the colleagues for their kindness and their cheerfulness.

ACKNOWLEDGEMENTS

This research was supported by the Universities of Bayreuth (Germany) and Lorraine (France) and the CPER (Contrat de Projet Etat Région) in Lorraine. The authors thank Dr. Silke Gerstmann for helpful discussion. Financial support by the Oberfranken-Stiftung and by Dr. Robert Klupp of the fisheries department of the Regional Government of Upper Franconia is appreciated.

ACKNOWLEDGEMENTS

This research was supported by the Universities of Lorraine (France) and Bayreuth (Germany), the PRST Region Lorraine, and the French Ministry of Research.

ACKNOWLEDGEMENTS

This research was supported by the Universities of Lorraine (France) and Bayreuth (Germany), the PRST Region Lorraine in France, and the French Ministry of Research.

List of abbreviations

a.a.m.: anterior adductor muscle
ACN: acetonitrile

Copper and enzymatic perturbation

The enzymatic activity of the plasma membrane Ca2+-ATPase was significantly inhibited in the kidneys of A. anatina upon 4 days of exposure at all Cu2+ concentrations tested in the range of 0.26 to 1.15 µmol L-1 (article 1). In the kidneys and the gills, a significant inhibition of Ca2+-ATPase activity was observed upon 4 d of exposure at 0.35 µmol L-1, followed by a recovery at 7 d of exposure. Significant Ca2+-ATPase inhibition with no recovery was observed up to 15 d in the kidneys at the higher concentration of 0.64 µmol L-1 Cu2+. No significant effect was noted on CA activity in the gills and digestive gland of A. anatina exposed to Cu2+ (article 1). Compared to control mussels, the Na+/K+-ATPase activity was significantly inhibited in the gills (72% inhibition) and the digestive gland (80% inhibition) of A. anatina upon 4 days of exposure to 0.35 µmol L-1 Cu2+ (article 2). The Na+/K+-ATPase activity was inhibited by 26% in the mantle of mussels exposed for 4 d under the same conditions, though not significantly relative to controls. No recovery was observed over the 15 days of exposure in the gills, which still showed 54% inhibition at the end of the test, whereas a partial recovery was observed at day 7 in the digestive gland.
In the mantle, the H+-ATPase activity declined continuously, but the decline was not statistically significant due to the high variability between mussels (article 2). Calcium plays a fundamental role in numerous biological processes (energy production, cellular metabolism, muscle contraction, reproduction) and has important mechanical functions (shell, skeleton) in many organisms (Mooren and Kinne, 1998). Unlike molluscs from marine ecosystems, which are generally hyposmotic and for which calcium uptake is easy, freshwater bivalves are hyperosmotic and require tight regulation of their calcium metabolism. Despite some recovery beyond 4 days in the digestive gland, the Na+/K+-ATPase activity in the gills of A. anatina was still significantly inhibited after 15 days of the experiment; an inhibition of Ca2+-ATPase activity was also observed in the gills and the kidneys under the same conditions (articles 1 and 2). The inhibition of these enzymes was consistent with the results obtained with Mytilus galloprovincialis exposed to copper by Viarengo et al. (1996) and Burlando et al. (2004). Inhibition of Na+/K+-ATPase modified the cellular Na+ gradient, which could result in a reduced activity of the Ca2+/Na+ antiporter; this, in addition to the direct inhibition of Ca2+-ATPase, may lead to a disturbance of calcium homeostasis. Such a disturbance of ionoregulation could lead to a continuous under-supply of calcium, which may also affect the Ca2+ balance.

Water was purified (18.2 MΩ cm) with a Milli-Q system (Millipore, France).

Animal maintenance

Mussel maintenance is described in detail in a previous article (Santini et al., 2010a). Adult mussels, A. cygnea with shell lengths of 7.5 ± 0.5 cm, were provided by a commercial supplier (Amazon fish, Pfaffenhoffen, France). The mussels were kept in 10 L aquaria under a photoperiod of 16 h illumination and 8 h darkness. The bottom of the tank was covered with a layer of glass beads (10 mm diameter) so the mussels could find conditions for burying. The artificial pond water, 1.5 L per mussel, pH 7.25 ± 0.10 (in mmol L-1: 0.40 Ca2+, 0.20 Mg2+, 0.70 Na+, 0.05 K+, 1.35 Cl-, 0.20 SO42-, 0.20 HCO3-), was renewed every day. The bivalves were fed daily with the unicellular alga Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2 × 10^5 cells mL-1. Air was bubbled continuously to ensure aeration and water-column homogeneity. The animals were acclimatized to these conditions for 2 weeks before any experiment.

Determination of PCn and other cysteine-rich metal-binding peptides

PCn were determined according to Minocha et al. (2008), with a slight modification in the protein removal step and in the gradient profile of the HPLC mobile phase. Cysteine-rich peptide levels were studied in the cytosolic fractions of deproteinized tissue homogenates after reduction. Extraction: All dissection, extraction, and centrifugation steps were carried out at 4°C. The gills and the digestive gland were carefully dissected, the digestive content was removed, and the tissues were washed in Tris buffer (25 mmol L-1 Tris, 50 mmol L-1 NaCl, pH 8.0). Organs from two mussels were pooled in order to obtain sufficient mass for analysis.
However, copper becomes toxic when excessive intracellular accumulation occurs (Viarengo and Nott, 1993). Copper toxicity results both from non-specific metal binding to proteins and from its involvement in Fenton reactions leading to the formation of reactive oxygen species (ROS) and oxidative stress. Bivalves satisfy their respiratory and nutritional needs through an intense filtering activity, through which they also have the capacity to accumulate a variety of environmental contaminants. The freshwater bivalve Anodonta cygnea belongs to the Unionidae family and is a species well distributed in continental waters. Unionidae are widely recognised for metal bioaccumulation, including copper (Cossu et al., 1997). The level up to which mussels can tolerate transition metals depends on their ability to regulate the metal cation concentration in cells. The proteins of the metallothionein (MT) group and the polypeptides glutathione (GSH) and phytochelatins (PC) are protective compounds rich in the amino acid cysteine, which contains a thiol group (SH). Cu, like other transition metals, has a high affinity for SH groups, making cysteine-rich peptides the principal biological reagents for transition metal sequestration (Viarengo and Nott, 1993; Clemens, 2006). PC bind transition metals with high thiol complexation constants, reducing the intracellular concentration of free ions of such metals in plants, fungi and microalgae. The general structure of PC is (γ-Glu-Cys)n-Gly (n = 2 to 11); they are synthesized from GSH by the constitutive enzyme phytochelatin synthase, which is activated by the presence of metal ions (Grill et al., 1985). A PC synthase homologous sequence has been found in the genomes of the invertebrates Caenorhabditis elegans (Clemens et al., 2001; Vatamaniuk et al., 2001), Eisenia fetida (Brulle et al., 2008) and Chironomus (Cobbett, 2000), and more generally throughout the invertebrates (Clemens and Peršoh, 2009). In our previous article (Santini et al., 2011b) (…)

Animal maintenance and copper exposure

Mussel maintenance was described in detail in our previous article (Santini et al., 2011a). Briefly, adult individuals of A. cygnea with shell lengths of 7.5 ± 0.5 cm were kept in aquaria at a temperature of 20 ± 0.5 °C. The bottom of the tank was covered with a layer of glass beads so the mussels could find conditions for burying. Artificial pond water (1.5 L per mussel; pH 7.25 ± 0.10; in mmol L-1: 0.40 Ca2+, 0.20 Mg2+, 0.70 Na+, 0.05 K+, 1.35 Cl-, 0.20 SO42-, 0.20 HCO3-) was renewed every day. The bivalves were fed daily with Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2×10^5 cells mL-1. The animals were acclimatized to these conditions for two weeks before any experiment. Previous results (Santini et al., 2011a) showed inhibition of enzymes involved in osmoregulation in A. cygnea exposed to 0.35 µmol L-1 of Cu2+. A total recovery of Ca2+-ATPase activity followed after 7 d, indicating the induction of detoxication mechanisms. PC levels in the organs varied as follows: PC2 > PC3 > PC4; no significant induction of PC3-4 was observed. The quantities of PC2-4 found in the digestive gland were higher than in the gills (data not shown). In the gills of mussels exposed to Cu2+ for 48 h and 7 d, γ-GluCys levels (Table 1) were significantly higher than in the respective controls. γ-GluCys increased as well in the digestive gland of bivalves at 48 h and 4 d of exposure.
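The PC nomenclature used in these results follows the general structure given above, (γ-Glu-Cys)n-Gly, in which each repeat contributes one cysteine thiol available for metal binding. A small, purely illustrative Python sketch:

```python
def phytochelatin(n):
    """Sequence and thiol count of PC_n, general structure (γ-Glu-Cys)n-Gly."""
    if not 2 <= n <= 11:
        raise ValueError("PC_n is described for n = 2 to 11")
    sequence = "γ-Glu-Cys-" * n + "Gly"
    return sequence, n  # one free SH group per (γ-Glu-Cys) repeat

for n in (2, 3, 4):
    seq, thiols = phytochelatin(n)
    print(f"PC{n}: {seq} ({thiols} SH groups)")
```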
Table 1: (…)

No induction of MT was observed upon Cu2+ exposure for 21 d, a result similar to the one found in A. cygnea (Amiard et al., 2006). MT polymorphism appears to be particularly important in invertebrates compared to mammals. Different isoforms of MT play (…)

Notes: Cu concentrations controlled by inductively coupled plasma mass spectrometry (ICPMS) (detection limit: Cu = 0.5 µg L-1 = 0.008 µmol L-1); means (n = 3) ± standard deviations (SD). Cu concentrations controlled by graphite furnace atomic absorption spectrometry (GFAAS) (detection limit: Cu = 0.5 µg L-1 = 8 nmol L-1); means (n = 3) ± standard deviations (SD).

Annexe 4: Pictures of mussel maintenance and exposure. Anodonta cygnea in the acclimatization aquarium; Anodonta cygnea copper exposure in aquaria lined with polyethylene foil, in a thermoregulated room.
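The two detection limits quoted in the notes above are the same mass concentration expressed in molar units; dividing by the molar mass of copper (63.546 g mol-1) shows that 0.5 µg L-1 corresponds to about 7.9 nmol L-1, i.e. 0.008 µmol L-1. A one-line check in Python:

```python
CU_MOLAR_MASS = 63.546  # g mol-1

def ug_l_to_nmol_l(c_ug_per_l):
    """Convert a Cu mass concentration (µg L-1) into nmol L-1."""
    # µg/L divided by g/mol (= µg/µmol) gives µmol/L; x1000 gives nmol/L
    return c_ug_per_l / CU_MOLAR_MASS * 1000.0

print(f"{ug_l_to_nmol_l(0.5):.1f} nmol L-1")  # -> 7.9 nmol L-1
```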
222,778
[ "788631" ]
[ "117611" ]
00175042
en
[ "shs" ]
2024/03/05 22:32:07
2007
https://shs.hal.science/halshs-00175042/file/R07036.pdf
Alex Coad (email: coad@econ.mpg.de), Rekha Rao
Hans Gersbach, Karin Hoisl, Camilla Lenzi, Pierre Mohnen, Christian Seiser

The employment effects of innovation *

JEL codes: L25, O33, J01. Keywords: Technological Unemployment, Innovation, Firm Growth, Weighted Least Squares, Aggregation, Quantile Regression.

The issue of technological unemployment receives perennial popular attention. Although there are previous empirical investigations that have focused on the relationship between innovation and employment, the originality of our approach lies in our choice of method. We focus on four 2-digit manufacturing industries that are known for their high patenting activity. We then use Principal Components Analysis to generate a firm- and year-specific 'innovativeness' index by extracting the common variance in a firm's patenting and R&D expenditure histories. To begin with, we explore the heterogeneity of firms by using semi-parametric quantile regression. Whilst some firms may reduce employment levels after innovating, others increase employment. We then move on to a weighted least squares (WLS) analysis, which explicitly takes into account the different job-creating potential of firms of different sizes. As a result, we focus on the effect of innovation on the total number of jobs, whereas previous studies have focused on the effect of innovation on firm behavior. Indeed, previous studies have typically taken the firm as the unit of analysis, implicitly weighting each firm equally according to the principle of 'one firm equals one observation'. Our results suggest that firm-level innovative activity leads to employment creation that may have been underestimated in previous studies.

Firm innovation and employment growth (Résumé, translated from the French): We present an analysis of the relationship between innovation and employment whose originality lies in the statistical methodology. We focus on four manufacturing industries with a strong propensity to patent. We then use principal components analysis to construct an innovation index. First, we explore the heterogeneity of firms by applying quantile regression. Whereas innovation may lead to job reductions in some firms, in others it is the source of employment gains. We then carry out a weighted least squares (WLS) analysis that takes into account the different job-creating capacities of firms of various sizes. We therefore focus on the effect of innovation on the total number of jobs, whereas previous studies have taken the firm as the unit of analysis.

Introduction

Whilst firm-level innovation can be expected to have a positive influence on the growth of a firm's sales, the overall effect on employment growth is a priori ambiguous. Innovation is often associated with increases in productivity that lower the amount of labour required for the production of goods and services. In this way, an innovating firm may change the composition of its productive resources, to the profit of machines and at the expense of employment. As a result, the general public has often expressed concern that technological progress may bring about the 'end of work' by replacing men with machines. Economists, on the other hand, are usually more optimistic.
To begin with, theoretical discussions have found it useful to decompose innovation into product and process innovation. Product innovations are often associated with employment gain, because the new products create new demand (although it is possible that they might replace existing products). Process innovations, on the other hand, often increase productivity by reducing the labour requirement in manufacturing processes (e.g. via the introduction of robots [START_REF] Fleck | The Adoption of Robots in Industry[END_REF]). Thus, process innovations are often suspected of bringing about 'technological unemployment'. The issue becomes even more complicated, however, when we consider that there are not only direct effects of innovation on employment, but also a great many indirect effects operating through various 'substitution channels'. For example, the introduction of a labour-saving production process may lead to an immediate and localized reduction in employees inside the plant (the 'direct effect'), but it may lead to positive employment changes elsewhere in the economy via an increased demand for new machines, a decrease in prices, and increase in incomes, an increase in new investments, or a decrease in wages (for an introduction to the various 'substitution channels', see Spiezia and Vivarelli, 2000). As a result, the overall effect of innovation on employment needs to be investigated empirically. Although Van Reenen recently lamented the "dearth of microeconometric studies on the effect of innovation on employment" (Van Reenen, 1997: 256), the situation has improved over the last decade. Research into technological unemployment has been undertaken in different ways and at various levels of aggregation. The results emerging from different studies are far from harmonious though -"[e]mpirical work on the effect of innovations on employment growth yields very mixed results" (Niefert 2005:9). Doms et al. (1995) analyse survey data on US manufacturing establishments, and observe that the use of advanced manufacturing technology (which would correspond to process innovation) has a positive effect on employment. At the firm-level of analysis, Hall (1987) observes that employment growth is related positively and significantly to R&D intensity in the case of large US manufacturing firms. Similarly, [START_REF] Greenhalgh | Technological Activity and Employment in a Panel of UK Firms[END_REF] observe that R&D intensity and also the number of patent publications have a positive effect on employment for British firms. Nevertheless, [START_REF] Evangelista | Innovation, Employment and Skills in Services: Firm and Sectoral Evidence[END_REF] observe a negative overall effect of innovation on employment in the Italian services sector. When the distinction is made between product and process innovation, the former is usually linked to employment creation whereas the consequences of the latter are not as clearcut. Evidence presented in Brouwer et al. (1993) reveals a small positive employment effect of product-related R&D although the combined effect of innovation is imprecisely defined. Relatedly, work by [START_REF] Van Reenen | Employment and Technological Innovation: Evidence from UK Manufacturing Firms[END_REF] on listed UK manufacturing firms and Smolny (1998) for West German manufacturing firms shows a positive effect on employment for product innovations. Smolny also finds a positive employment effect of process innovations, whereas Van Reenen's analysis yields insignificant results. 
[START_REF] Harrison | Does innovation stimulate employment? A firm-level analysis using comparable micro data on four European countries[END_REF] consider the relationship between innovation and employment growth in four European countries (France, Italy, the UK and Germany) using data for 1998 and 2000 on firms in the manufacturing and services industries. Whilst product innovations are consistently associated with employment growth, process innovation appears to have a negative effect on employment, although the authors acknowledge that this latter result may be attenuated (or even reversed) through compensation effects. To summarize, therefore, we can consider that product innovations generally have a positive impact on employment, whilst the role of process innovations is more ambiguous (Hall et al., 2006). We must emphasize, however, that investigations at the level of the firm do not allow us to infer the aggregated and cumulative effect of innovation on 'total jobs' -this is because datasets are composed of firms of different sizes which need to be weighted accordingly. Previous research in this area, however, has implicitly given equal weights to firms, by treating each firm as one 'observation' in a larger database. These studies can shed light on the effect of innovation on employment decisions in the 'average firm', but they do not yield conclusions on the total employment effects of innovation, for society as a whole.1 We have strong theoretical motivations for suspecting that the relationship between innovation and employment is not invariant over the firm size distribution. For example, it may be the case that larger firms are more prone to introduce labour-saving process innovations, whereas smaller firms are often associated with product innovations. In this way, innovation in larger firms may be associated with job destruction whereas the innovative activity of small firms would be associated with job creation. On the other hand, smaller firms have less restrictive hiring-and-firing regulations, and so innovation may lead to reductions in employment that are more frequent in smaller firms than in their larger counterparts. Although there may be a relationship between the size of a firm and the employment effects of innovation, however, we consider the sign and magnitude to be an empirical question. Our empirical framework enables us to evaluate the effect of innovation on total employment by attributing weights to firms of different sizes. "Linking more explicitly the evidence on the patterns of innovation with what is known about firms growth and other aspects of corporate performance -both at the empirical and at the theoretical level -is a hard but urgent challenge for future research" [START_REF] Cefis | The persistence of innovative activities: A cross-countries and cross-sectors comparative analysis[END_REF]Orsenigo, 2001:1157). We are now in a position to rise to this challenge. In Section 2 we discuss the methodology, focusing in particular on the shortcomings of using either patent counts or R&D figures individually as proxies for innovativeness. We describe how we use Principal Component Analysis to extract a synthetic 'innovativeness' index from patent and R&D data. Section 3 describes how we matched the Compustat database to the NBER innovation database, and we describe how we created our synthetic 'innovativeness' index. Indeed, we have made efforts to obtain the best possible observations for firm-level innovative activity. 
Whilst our database does not allow any formal distinction between 'product' and 'process' innovation, however, we do not consider this to be a fatal caveat for the purposes of this investigation. Section 4 contains the semi-parametric quantile regression analysis, where we can observe how the influence of innovation on employment change varies across the conditional growth rate distribution. We then move on to the parametric analysis in Section 5. In particular, we compare the estimates obtained from conventional regressions (OLS and FE) with those obtained from weighted least squares (WLS). We observe that WLS estimation consistently yields a slightly more positive (although never statistically significant) estimate than other techniques, which suggests that previous studies may have underestimated the total employment gains from innovation. Section 6 concludes. 2 Methodology -How can we measure innovativeness? Activities related to innovation within a company can include research and development; acquisition of machinery, equipment and other external technology; industrial design; and training and marketing linked to technological advances. These are not necessarily identified as such in company accounts, so quantification of related costs is one of the main difficulties encountered during the innovation studies. Each of the above mentioned activities has some effect on the growth of the firm, but the singular and cumulative effect of each of these activities is hard to quantify. Data on innovation per se has thus been hard to find [START_REF] Van Reenen | Employment and Technological Innovation: Evidence from UK Manufacturing Firms[END_REF]. Also, some sectors innovative extensively, some don't innovative in a tractable manner, and the same is the case with organizational innovations, which are hard to quantify in terms of impact on the overall growth of the firms. However, we believe that no firm can survive without at least some degree of innovation. We use two indicators for innovation in a firm: first, the patents applied for by a firm and second, the amount of R&D undertaken. [START_REF] Cohen | Protecting their intellectual assets: Appropriability conditions and why US manufacturing firms patent (or not)[END_REF] suggest that no industry relies exclusively on patents, yet the authors go on to suggest that the patents may add sufficient value at the margin when used with other appropriation mechanisms. Although patent data has drawbacks, patent statistics provide unique information for the analysis of the process of technical change [START_REF] Griliches | Patent Statistics as Economic Indicators: A Survey[END_REF]. We can use patent data to access the patterns of innovation activity across fields (or sectors) and nations. The number of patents can be used as an indicator of inventive as well as innovative activity, but it has its limitations. One of the major disadvantage of patents as an indicator is that not all inventions and innovations are patented (or indeed 'patentable'). Some companies -including a number of smaller firms -tend to find the process of patenting expensive or too slow and implement alternative measures such as secrecy or copyright to protect their innovations [START_REF] Archibugi | Patenting as an indicator of technological innovation: a review[END_REF][START_REF] Arundel | What percentage of innovations are patented? Empirical estimates for European firms[END_REF]. 
Another bias in the study using patenting can arise from the fact that not all patented inventions become innovations. The actual economic value of patents is highly skewed, and most of the value is concentrated in a very small percentage of the total (OECD, 1994). Furthermore, another caveat of using patent data is that we may underestimate innovation occuring in large firms, because these typically have a lower propensity to patent [START_REF] Dosi | Sources, Procedures, and Microeconomic Effects of Innovation[END_REF]. The reason we use patent data in our study is that, despite the problems mentioned above, patents would reflect the continuous developments within technology. We complement the patent data with R&D data. R&D can be considered as an input into the production of inventions, and patents as outputs of the inventive process. R&D data may lead us to systematically underestimate the amount of innovation in smaller firms, however, because these often innovate on a more informal basis outside of the R&D lab [START_REF] Dosi | Sources, Procedures, and Microeconomic Effects of Innovation[END_REF]. For some of the analysis we consider the R&D stock and also the patent stock, since the past investments in R&D as well as the past applications of patents have an impact not only on the future values of R&D and patents, but also on firm growth. [START_REF] Hall | Exploring the Patent Explosion[END_REF] suggests that the past history of R&D spending is a good indicator of the firms technological position. Taken individually, each of these indicators for firm-level innovativeness has its drawbacks. Each indicator on its own provides useful information on a firm's innovativeness, but also idiosyncratic variance that may be unrelated to a firm's innovativeness. One particular feature pointed out by [START_REF] Griliches | Patent Statistics as Economic Indicators: A Survey[END_REF] is that, although patent data and R&D data are often chosen to individually represent the same phenomenon, there exists a major statistical discrepancy in that there is typically a great randomness in patent series, whereas R&D values are much more smoothed. Figure 1 shows that the variable of interest (i.e. ∆K -additions to economically valuable knowledge) is measured with noise if one takes either innovative input (such as R&D expenditure or R&D employment) or innovative output (such as patent statistics). In order to remove this noise, one needs to collect information on both innovative input and output, and to extract the common variance whilst discarding the idiosyncratic variance of each individual proxy that includes noise, measurement error, and specific variation. In this study, we believe we have obtained useful data on a firm's innovativeness by considering both innovative input and innovative output simultaneously in a synthetic variable.2 Principal Component Analysis (PCA) is appropriate here as it allows us here to summarize the information provided by several indicators of innovativeness into a composite index, by extracting the common variance from correlated variables whilst separating it from the specific and error variance associated with each individual variable [START_REF] Hair | Multivariate Data Analysis: Fifth Edition[END_REF]. 
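As a minimal sketch of this extraction step, on simulated data (all numbers below are hypothetical; the paper's actual inputs are the firm-level patent and R&D intensities and their stocks), the indicators can be standardized and the first principal component retained as the composite index:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latent = rng.gamma(2.0, 1.0, size=200)  # unobserved 'true' innovativeness

# four noisy indicators sharing the latent factor (stand-ins for R&D
# intensity, patent intensity and their depreciated stocks)
X = np.column_stack([latent + rng.normal(0.0, s, 200)
                     for s in (0.8, 1.2, 0.5, 0.6)])

Z = StandardScaler().fit_transform(X)    # PCA on standardized indicators
pca = PCA(n_components=1)
index = pca.fit_transform(Z).ravel()     # first component = composite index

print("share of variance explained:", pca.explained_variance_ratio_[0])
print("factor loadings:", pca.components_[0])
```

On real data, the share of variance explained by the first component and the factor loadings correspond to the kind of figures reported in Table 6 below.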
We are not the only ones to apply PCA to studies into firm-level innovation however -this technique has also been used by [START_REF] Lanjouw | Patent Quality and Research Productivity: Measuring Innovation with Multiple Indicators[END_REF] to develop a composite index of 'patent quality' using multiple characteristics of patents (such as the number of citations, patent family size and patent claims). Another criticism of previous studies is that they have lumped together firms from all manufacturing sectors -even though innovation regimes (and indeed appropriability regimes) vary dramatically across industries. In this study, we focus on specific 2-digit sectors that have been hand-picked according to their intensive patenting and R&D activity. However, even within these sectors, there is significant heterogeneity between firms, and using standard regression techniques to make inferences about the average firm may mask important phenomena. Using quantile regression techniques, we investigate the relationship between innovativeness and growth at a range of points of the conditional growth rate distribution. We observe three types of relationship between innovation and employment. First, most firms do not experience much employment change in any given year, and what little change they have appears to be largely idiosyncratic and not strongly related to innovative activity. Second, for those firms that grow the fastest, we observe that innovation seems to be strongly positively associated with increases in employment. Third, for those firms that are rapidly shedding workers, this is strongly associated with innovative activity. We note that this heterogeneity of the response of employment change to innovation cannot be detected if we focus on conventional regression estimators that estimate 'the average effect for the average firm'. We only consider certain specific sectors, and not the whole of manufacturing. This way we are not affected by aggregation effects; we are grouping together firms that can plausibly be compared to each other. We are particularly interested in looking at the growth of firms classified under 'complex' technology classes. We base our classification of firms on the typology put forward by [START_REF] Hall | Exploring the Patent Explosion[END_REF] and [START_REF] Cohen | Protecting their intellectual assets: Appropriability conditions and why US manufacturing firms patent (or not)[END_REF]. The authors define 'complex product'3 industries as those industries where each product relies on many patents held by a number of other firms and the 'discrete product' industries as those industries where each product relies on only a few patents and where the importance of patents for appropriability has traditionally been higher. 4 We chose four sectors that can be classified under the 'complex products' class. The two digit SIC codes that match the 'complex technology' sectors are 35, 36, 37, and 38. 5 By choosing these sectors that are characterised by high patenting and high R&D expenditure, we hope that we will be able to get the best possible quantitative observations for firm-level innovation. 3 Database description Database We create an original database by matching the NBER patent database with the Compustat file database, and this section is devoted to describing the creation of the sample which we will use in our analysis. 
The patent data has been obtained from the NBER database (Hall et al., 2001b), and we have used the updates available on Bronwyn Hall's website 6 to obtain data until 2002. The NBER database comprises detailed information on almost 3 416 957 U.S. utility patents in the USPTO's TAF database granted during the period 1963 to December 2002 and all citations made to these patents between 1975 and 2002. A firm's patenting history is analysed over the whole period represented by the NBER patent database. The initial sample of firms was obtained from the Compustat 7 database for the aforementioned sectors comprising 'complex product' sectors. These firms were then matched with the firm data files from the NBER patent database and we found all the firms8 that have patents. The final sample thus contains both patenters and non-patenters. The NBER database has patent data for over 60 years and the Compustat database has firms' financial data for over 50 years, giving us a rather rich information set. As Van Reenen (1997) mentions, the development of longitudinal databases of technologies and firms is a major task for those seriously concerned with the dynamic effect of innovation on firm growth. Hence, having developed this longitudinal dataset, we feel that we will be able to thoroughly investigate whether innovation drives sales growth at the firm-level. Table 1 shows some descriptive statistics of the sample before and after cleaning. Initially using the Compustat database, we obtain a total of 4274 firms which belong to the SICs 35-38 6 See http://elsa.berkeley.edu/∼bhhall/bhdata.html 7 Compustat has the largest set of fundamental and market data representing 90% of the world's market capitalization. Use of this database could indicate that we have oversampled the Fortune 500 firms. Being included in the Compustat database means that the number of shareholders in the firm was large enough for the firm to command sufficient investor interest to be followed by Standard and Poor's Compustat, which basically means that the firm is required to file 10-Ks to the Securities and Exchange Commission on a regular basis. It does not necessarily mean that the firm has gone through an IPO. Most of them are listed on NASDAQ or the NYSE. and this sample consists of both innovating and non-innovating firms. These firms were then matched to the NBER database. After this initial match, we further matched the year-wise firm data to the year-wise patents applied by the respective firms (in the case of innovating firms) and finally, we excluded firms that had less than 7 consecutive years of good data. Thus, we have an unbalanced panel of 1920 firms belonging to 4 different sectors. Since we intend to take into account sectoral effects of innovation, we will proceed on a sector by sector basis, to have (ideally) 4 comparable results for 4 different sectors. Summary statistics and the 'innovativeness' index Table 2 provides some insights into the firm size distribution for each of the four sectors. We can observe a certain degree of heterogeneity between the sectors, with SIC 37 (Transportation equipment) containing relatively large proportion of large firms. Figure 2 shows the number of patents per year in our final database. For some of the sectors there appears to be a strong structural break at the beginning of the 1980s which may well be due to changes in patent regulations (see [START_REF] Hall | Exploring the Patent Explosion[END_REF] for a discussion). 
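The cleaning rule described above, keeping only firms with at least 7 consecutive years of good data, can be expressed compactly. A hedged pandas sketch with hypothetical column names ('firm', 'year'):

```python
import pandas as pd

def longest_consecutive_run(years):
    """Length of the longest run of consecutive years in a firm's record."""
    ys = sorted(set(years))
    best = run = 1
    for a, b in zip(ys, ys[1:]):
        run = run + 1 if b == a + 1 else 1
        best = max(best, run)
    return best

# df is a hypothetical firm-year panel with columns 'firm' and 'year'
df = pd.DataFrame({"firm": ["A"] * 8 + ["B"] * 5,
                   "year": list(range(1980, 1988))
                           + [1980, 1981, 1983, 1984, 1985]})
runs = df.groupby("firm")["year"].apply(longest_consecutive_run)
kept = runs[runs >= 7].index                    # firms with >= 7 consecutive years
print(df[df["firm"].isin(kept)]["firm"].unique())  # -> ['A']
```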
Table 3 presents the firm-wise distribution of patents, which is noticeably right-skewed. We find that 46% of the firms in our sample have no patents. Thus the intersection of the two datasets gave us 1028 patenting firms who had taken out at least one patent between 1963 and 1998, and 892 firms that had no patents during this period. The total number of patents taken out by this group over the entire period was 317 116, where the entire period for the NBER database represented years 1963 to 2002, and we have used 274 964 of these patents in our analysis i.e. representing about 87% of the total patents ever taken out at the US Patent Office by the firms in our sample. Though the NBER database provides the data on patents applied for from 1963 till 2002, it contains information only on the granted patents and hence we might see some bias towards the firms that have applied in the end period covered by the database due the lags faced between application and the grant of the patents. Hence to avoid this truncation bias (on the right) we consider the patents only till 1997 so as to allow for a 5-year gap between application and grant of the patent.9 Concerning R&D, 1867 of the 1920 firms report positive R&D expenditure. Table 4 shows that patent numbers are well correlated with (deflated) R&D expenditure, albeit without controlling for firm size. To take this into account, we take employees as a measure of firm size and scale down the R&D and patents measures. 10 Table 5 reports the rank correlations between firm-level patent intensity and R&D intensity. For each of the sectors we observe positive and highly significant rank correlations, which nonetheless take values lower than 0.25. These results would thus appear to be consistent with the idea that, even within industries, patent and R&D statistics do contain large amounts of idiosyncratic variance and that either of these variables taken individually would be a rather noisy proxy for 'innovativeness'. 11 Indeed, as discussed in Section 2, these two variables are quite different not only in terms of statistical properties (patent statistics are much more skewed and less persistent than R&D statistics) but also in terms of economic significance. However, they both yield valuable information on firm-level innovativeness. application. However, we allow for a five-year gap here because it has been suggested that this gap has become longer in recent years. 10 We also investigate the robustness of our results by scaling down a firm's R&D and patents by its sales instead of its employees, and obtain similar results. For a brief discussion, see the Appendix (Section A). 11 Further evidence of the discrepancies between patent statistics and R&D statistics is presented in the regression results in Tables 5 and6 of Coad and Rao (2006a). Our synthetic 'innovativeness' index is created by extracting the common variance from a series of related variables: both patent intensity and R&D intensity at time t, and also the actualized stocks of patents and R&D. These stock variables are calculated using the conventional amortizement rate of 15%, and also at the rate of 30% since we suspect that the 15% rate may be too low [START_REF] Hall | Does the market value R&D investment by European firms? Evidence from a panel of manufacturing firms in France, Germany, and Italy[END_REF]. Information on the factor loadings is shown in Table 6. 
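The depreciated stocks mentioned above follow the standard perpetual-inventory recursion K_t = (1 - delta) * K_{t-1} + flow_t, with delta = 0.15 or 0.30. A minimal sketch with invented flows:

```python
def knowledge_stock(flows, delta=0.15):
    """Perpetual-inventory stock: K_t = (1 - delta) * K_{t-1} + flow_t."""
    stock, series = 0.0, []
    for flow in flows:
        stock = (1.0 - delta) * stock + flow
        series.append(stock)
    return series

rd_flows = [10.0, 12.0, 9.0, 15.0]            # hypothetical yearly R&D spending
print(knowledge_stock(rd_flows, delta=0.15))  # depreciated R&D stock per year
print(knowledge_stock(rd_flows, delta=0.30))  # faster amortization variant
```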
We consider that the summary 'innovativeness' variable is a satisfactory indicator of firm-level innovativeness in all the sectors under analysis because it loads reasonably well with the stock variables and explains between 50% to 83% of the total variance. Our composite variable has worked well in previous studies (e.g. Coad and Rao 2006a,b,c) and in this study we find that it works reasonably well. Nevertheless, we check the robustness of our results in the Appendix (Section B) by taking either a firm's R&D stock or its patent stock as alternative indicators of 'innovativeness'. An advantage of this composite index is that a lot of information on a firm's innovative activity can be summarized into one variable (this will be especially useful in the following graphs). A disadvantage is that the units have no ready interpretation (unlike 'one patent' or '$1 million of R&D expenditure'). In this study, however, we are less concerned with the quantitative point estimates than with the qualitative variation in the importance of innovation over the conditional growth rate distribution (i.e. the 'shape' of the graphs). Semi-parametric analysis In this section we use semi-parametric quantile regression techniques to explore the heterogeneity between firms with regards to their innovation and employment behavior. We begin with an introduction to quantile regression before presenting the results. An Introduction to Quantile Regression Standard least squares regression techniques provide summary point estimates that calculate the average effect of the independent variables on the 'average firm'. However, this focus on the average firm may hide important features of the underlying relationship. As Mosteller and Tukey explain in an oft-cited passage: "What the regression curve does is give a grand summary for the averages of the distributions corresponding to the set of x's. We could go further and compute several regression curves corresponding to the various percentage points of the distributions and thus get a more complete picture of the set. Ordinarily this is not done, and so regression often gives a rather incomplete picture. Just as the mean gives an incomplete picture of a single distribution, so the regression curve gives a correspondingly incomplete picture for a set of distributions" (Mosteller and Tukey, 1977:266). Quantile regression techniques can therefore help us obtain a more complete picture of the underlying relationship between innovation and employment growth. In our case, estimation of linear models by quantile regression may be preferable to the usual regression methods for a number of reasons. First of all, we know that the standard least-squares assumption of normally distributed errors does not hold for our database because growth rates follow an exponential rather than a Gaussian distribution. The heavy-tailed nature of the growth rates distribution is illustrated in Figure 3 (see also [START_REF] Stanley | Scaling behavior in the growth of companies[END_REF] and [START_REF] Bottazzi | Common Properties and Sectoral Specificities in the Dynamics of U. S. Manufacturing Companies[END_REF] for the growth rates distribution of Compustat firms). Whilst the optimal properties of standard regression estimators are not robust to modest departures from normality, quantile regression results are characteristically robust to outliers and heavytailed distributions. 
In fact, the quantile regression solution $\hat{\beta}_\theta$ is invariant to outliers of the dependent variable that tend to ±∞ [START_REF] Buchinsky | Changes in the U.S. Wage Structure 1963-1987: Application of Quantile Regression[END_REF]. Another advantage is that, while conventional regressions focus on the mean, quantile regressions are able to describe the entire conditional distribution of the dependent variable. In the context of this study, high-growth firms are of interest in their own right; we do not want to dismiss them as outliers but, on the contrary, believe it worthwhile to study them in detail. This can be done by calculating coefficient estimates at various quantiles of the conditional distribution. Finally, a quantile regression approach avoids the restrictive assumption that the error terms are identically distributed at all points of the conditional distribution. Relaxing this assumption allows us to acknowledge firm heterogeneity and consider the possibility that estimated slope parameters vary at different quantiles of the conditional growth rate distribution. The quantile regression model, first introduced in the seminal contribution of [START_REF] Koenker | Regression Quantiles[END_REF], can be written as:

$$y_{it} = x_{it}\beta_\theta + u_{\theta it} \quad \text{with} \quad \mathrm{Quant}_\theta(y_{it} \mid x_{it}) = x_{it}\beta_\theta \qquad (1)$$

where $y_{it}$ is the dependent variable, $x$ is a vector of regressors, $\beta$ is the vector of parameters to be estimated, and $u$ is a vector of residuals. $\mathrm{Quant}_\theta(y_{it} \mid x_{it})$ denotes the $\theta$-th conditional quantile of $y_{it}$ given $x_{it}$. The $\theta$-th regression quantile, $0 < \theta < 1$, solves the following problem:

$$\min_\beta \frac{1}{n}\Bigg[\sum_{i,t:\, y_{it} \ge x_{it}\beta} \theta\,|y_{it} - x_{it}\beta| + \sum_{i,t:\, y_{it} < x_{it}\beta} (1-\theta)\,|y_{it} - x_{it}\beta|\Bigg] = \min_\beta \frac{1}{n}\sum_{i=1}^{n} \rho_\theta(u_{\theta it}) \qquad (2)$$

where $\rho_\theta(\cdot)$, which is known as the 'check function', is defined as:

$$\rho_\theta(u_{\theta it}) = \begin{cases} \theta\, u_{\theta it} & \text{if } u_{\theta it} \ge 0 \\ (\theta - 1)\, u_{\theta it} & \text{if } u_{\theta it} < 0 \end{cases} \qquad (3)$$

Equation (2) is then solved by linear programming methods. As one increases θ continuously from 0 to 1, one traces the entire conditional distribution of y, conditional on x [START_REF] Buchinsky | Recent Advances in Quantile Regression Models: A Practical Guide for Empirical Research[END_REF]. More on quantile regression techniques can be found in the surveys by [START_REF] Buchinsky | Recent Advances in Quantile Regression Models: A Practical Guide for Empirical Research[END_REF] and [START_REF] Koenker | Quantile Regression[END_REF]; for some applications see the special issue of Empirical Economics (Vol. 26 (3), 2001).

Quantile regression results

We now apply quantile regression to estimate the following linear regression model:

$$GROWTH_{i,t} = \alpha + \beta_1\, INN_{i,t-1} + \beta_2\, CONTROL_{i,t-1} + y_t + \epsilon_{i,t} \qquad (4)$$

where $INN$ is the 'innovativeness' variable for firm i at time t. $CONTROL$ includes all of the control variables that may potentially influence a firm's employment growth;12 namely, lagged growth, lagged size and 3-digit industry dummies. We also control for common macroeconomic shocks by including year dummies ($y_t$). The regression results for each of the four 2-digit sectors can be seen in Figure 4 (see also Table 7). We observe considerable variation in the regression coefficient over the conditional quantiles. At the upper quantiles, the coefficient is observed to increase. This means that innovation has a strong positive impact on employment for those firms that have the fastest employment growth (a minimal sketch of such a quantile estimation is given below).
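Here is a minimal, self-contained sketch of estimating a model of this form at several quantiles. It uses simulated data rather than the Compustat panel, and the coefficients and error process are invented; the real specification also includes lagged growth, industry dummies and year dummies:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
inn = rng.normal(size=n)    # lagged 'innovativeness' index (simulated)
size = rng.normal(size=n)   # lagged log firm size (simulated control)

# heavy-tailed growth shocks whose spread widens with innovation, so the
# innovation coefficient differs across quantiles (illustrative only)
growth = 0.01 * inn + 0.02 * size + rng.laplace(
    scale=0.10 + 0.05 * np.abs(inn), size=n)

X = sm.add_constant(np.column_stack([inn, size]))
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    res = sm.QuantReg(growth, X).fit(q=q)
    print(f"q={q:.2f}  beta_inn={res.params[1]: .4f}")
```

Because the simulated error spread widens with the innovation variable, the estimated slope rises across quantiles, mimicking the qualitative pattern visible in Figure 4.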
At the lower quantiles, however, the coefficient on our 'innovativeness' variable often becomes negative (although not statistically significant), which indicates that innovation is associated with job destruction for those firms that are losing the most jobs. To sum up, it may be useful to distinguish between three groups of firms. First of all, the 'average firm' stays at roughly the same size. Such firms do not change their employment levels by much, and furthermore innovation seems to have little effect on their employment decisions. This is indicated by the fact that the coefficient on 'innovativeness' is close to zero at the median quantile. The second group consists of those fast-growing firms that are experiencing the largest increases in employment. For these firms, innovation has a strong positive effect on employment. The third group contains firms that are losing the most jobs. In this case, increases in firm-level innovative activity are associated with subsequent reductions in employment. This could be due to two effects, however. On the one hand, it could be due to innovation leading to a reduction in the required labour inputs (this effect is the bona fide 'technological unemployment' argument). On the other hand, it could be because some firms are unsuccessful in their attempts at innovation. This is the 'tried and failed' category of innovators described in Freel (2000) and discussed in [START_REF] Coad | Innovation and Firm Growth in High-Tech Sectors: A Quantile Regression Approach[END_REF]. We suspect that both of these effects are present for this third group of firms. In the Appendix (Section B), we check the robustness of our results by using alternative (cruder) measures of firm-level innovation. These measures are 3-year R&D and patent stocks, depreciated at the conventional rate of 15%. As expected, these two variables taken on their own are less clear-cut than our preferred composite 'innovativeness' variable. Broadly speaking, however, the results from this exercise appear to support our main results presented in this section. We have thus observed that in some cases innovation is associated with employment creation whilst in other cases it is associated with job destruction. It is of interest to see if these two categories are correlated with a firm's size. For example, we could suspect that the latter category corresponds to the largest firms who are more likely to introduce process innovations. Previous studies have not been able to test this hypothesis because they implicitly attribute equal weights to firms of different sizes. We argue that this approach is flawed, however, given that larger firms have a greater impact on the absolute number of jobs, because of their large size. We investigate this issue in the next section. Parametric analysis We begin this section with a brief introduction to the weighted least squares estimator, and then apply it to our dataset. An Introduction to Weighted Least Squares "As Mosteller and Tukey (1977, p346) suggested, the action of assigning "different weights to different observations, either for objective reasons or as a matter of judgement" in order to recognize "some observations as 'better' or 'stronger' than others" has an extensive history." 
Willett and Singer (1988:236)

Consider the regression equation:

$$y_i = \beta x_i + \epsilon_i \qquad (5)$$

The OLS regression solution seeks to minimize the sum of the squared residuals, i.e.:

$$\min_\beta Q = \sum_{i=1}^{n} (y_i - \beta x_i)^2 \equiv \sum_{i=1}^{n} \epsilon_i^2 \qquad (6)$$

Implicit in the basic OLS solution is that the observations are treated as equally important, being given equal weights. Weighted Least Squares, however, attributes weights $w_i$ to specific observations that determine how much each observation influences the final parameter estimates:

$$\min_\beta Q = \sum_{i=1}^{n} w_i\,(y_i - \beta x_i)^2 \qquad (7)$$

It follows that WLS estimators are functions of the weights $w_i$. Although WLS can be used in situations where observations are attributed different levels of 'importance', it is most often used for dealing with heteroskedasticity. In the context of this study, the weight $w_i$ corresponds to the firm's size, measured in terms of employees.

Regression results

We estimate equation (4) using conventional estimators such as OLS and the fixed-effect estimator, as well as the Weighted Least Squares estimator; the results are presented in Table 7. The main feature of the regression results is that the coefficients obtained from the standard OLS and fixed-effect (FE) estimators are positive (though not always significant) for each of the four sectors. Furthermore, we observe that the R2 coefficients are rather low, always lower than 7%. The standard interpretation of these results would be that, if anything, innovation seems to be positively associated with subsequent employment growth. However, our preferred interpretation of these results is informed by the quantile regression analysis presented in the preceding section. We observed that, for the fast-growing firms, innovation is positively associated with employment, whilst increases in innovation may also be associated with job destruction for those firms shedding the most jobs. This heterogeneity is indeed masked by standard regression techniques that focus on 'the average effect for the average firm'. We also observe that the WLS coefficient estimates are in most cases higher than the results obtained by either OLS or FE. This evidence hints that innovation in large firms is more likely to be associated with employment creation than innovation in small firms. This is an interesting finding given that larger firms have a greater potential for large increases in the absolute number of new jobs. In addition, this result is perhaps surprising given that large firms are usually associated with process innovations (see for example [START_REF] Klepper | Entry, Exit, Growth, and Innovation over the Product Life Cycle[END_REF]), and process innovations, in turn, are usually classified as labour-saving.
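A matching sketch of the size-weighted estimation, again on simulated data (the employee counts and coefficients are hypothetical): WLS simply receives firm size as the observation weight.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
employees = rng.lognormal(mean=6.0, sigma=2.0, size=n)  # skewed firm sizes
inn = rng.normal(size=n)
growth = 0.01 * inn + rng.normal(scale=0.10, size=n)

X = sm.add_constant(inn)
ols = sm.OLS(growth, X).fit()
wls = sm.WLS(growth, X, weights=employees).fit()  # one employee, one weight unit

print(f"OLS beta: {ols.params[1]:.4f}   WLS beta: {wls.params[1]:.4f}")
```

When large firms respond differently from small ones, the two estimates diverge; this is precisely the comparison reported in Table 7.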
Conclusion

Our main results are twofold. Our first main result emerges when we apply semi-parametric quantile regressions to explore the relationship between innovation and employment growth. We observe three categories of firms. First, most firms do not grow by much, and what little they do grow seems to be unrelated to innovation. Second, those firms that experience rapid employment growth owe a large amount of this to their previous attempts at innovation. Third, for those firms that are shedding the most jobs, increases in innovative activity seem to be associated with job destruction. The distinction between these three categories is effectively masked whenever conventional parametric regressions are used, because these latter focus on 'the average effect for the average firm' and are relatively insensitive to heterogeneity between firms. Our second main result is observed when we investigate whether the relationship between innovation and employment varies with firm size. Our previous observations on the heterogeneity of firm behavior vis-à-vis innovation and employment effectively fuelled such suspicions. Our results indicate that, if anything, innovative activity in large firms is more positively associated with employment growth than innovative activity undertaken by their smaller counterparts. We should mention the limitations of our results that are brought on by the specificities of our dataset. In the US, the labour market is more fluid than in other countries, and this may reduce the generality of our results. Furthermore, we focus only on high-tech manufacturing sectors. Although this particular sectoral focus allows us to get relatively accurate measures of firm-level innovation, it reduces the scope of our analysis. It may be the case that the relationship between innovation and unemployment is different for other sectors of the economy.

The fact that the WLS R2 is higher than the OLS or FE R2 may simply be a spurious statistical result [START_REF] Willett | Another Cautionary Note About R 2 : Its Use in Weighted Least-Squares Regression Analysis[END_REF].

Figure 1: The Knowledge 'Production Function': A Simplified Path Analysis Diagram (based on Griliches 1990:1671).
Figure 2: Number of patents per year. SIC 35: Machinery & Computer Equipment, SIC 36: Electric/Electronic Equipment, SIC 37: Transportation Equipment, SIC 38: Measuring Instruments.
Figure 3: The (annual) employment growth rates distribution for our four two-digit sectors.
Figure 4: Variation in the coefficient on 'innovativeness' (i.e. β1 in Equation (4)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. SIC 35: Machinery & Computer Equipment (top-left), SIC 36: Electric/Electronic Equipment (top-right), SIC 37: Transportation Equipment (bottom-left), SIC 38: Measuring Instruments (bottom-right). Graphs made using the 'grqreg' Stata module [START_REF] Azevedo | grqreg: Stata module to graph the coefficients of a quantile regression[END_REF].
Figure 5: Variation in the coefficient on a firm's 3-year R&D stock (i.e. γ1 in Equation (8)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals.
SIC 35: Machinery & Computer Equipment (top-left), SIC 36: Electric/Electronic Equipment (top-right), SIC 37: Transportation Equipment (bottom-left), SIC 38: Measuring Instruments (bottom-right).Graphs made using the 'grqreg' Stata module[START_REF] Azevedo | grqreg: Stata module to graph the coefficients of a quantile regression[END_REF]. Figure 6 : 6 Figure 6: Variation in the coefficient on a firm's 3-year patent stock (i.e. γ 1 in Equation (8)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. SIC 35: Machinery & Computer Equipment (top-left), SIC 36: Electric/Electronic Equipment (top-right), SIC 37: Transportation Equipment (bottom-left), SIC 38: Measuring Instruments (bottom-right).Graphs made using the 'grqreg' Stata module[START_REF] Azevedo | grqreg: Stata module to graph the coefficients of a quantile regression[END_REF]. Table 1 : 1 Summary statistics before and after data-cleaning sample before cleaning sample used n=4274 firms n=1920 firms mean std. dev. mean std. dev. Total Sales 1028.156 6775.733 1178.169 7046.354 Patent applications 6.387029 45.35137 9.267999 56.86579 R&D expenditure 58.6939 363.402 57.06897 351.7708 Total Employees 8.5980 40.1245 10.21704 46.0217 Table 2: Firm size distribution in SIC35-38, 1963-1998 No. of Employees SIC 35 SIC 36 SIC 37 SIC 38 ≤250 Mean 0.104332 0.112787 0.101951 0.095604 Std. Dev 0.069861 0.06795 0.071023 0.070228 obs 2570 3196 266 3667 > 250 & ≤500 Mean 0.371858 0.36585 0.375686 0.36347 Std. Dev 0.071885 0.071307 0.075044 0.069809 obs 969 1347 204 879 > 500 & ≤5000 Mean 1.802632 1.684009 2.091483 1.641482 Std. Dev 1.187339 1.109291 1.174895 1.094161 obs 3317 2941 937 2018 > 5000 Mean 33.91514 43.34083 91.30289 25.02034 Std. Dev 50.19058 64.77395 165.8062 28.9475 obs 1729 1322 1312 935 Note: employee numbers given in thousands. Table 3 : 3 The Distribution of Firms by TotalPatents, 1963-1998 (SIC's 35-38 only) 0 or more 1 or more 10 or more 25 or more 100 or more 250 or more 1000 or more Firms 1920 1028 641 435 195 119 53 Table 4 : 4 Contemporaneous rank correlations between Patents and R&D expenditure SIC 35 SIC 36 SIC 37 SIC 38 ρ 0.4277 0.4560 0.4326 0.4591 p-value 0.0000 0.0000 0.0000 0.0000 Obs. 8533 8751 2696 7475 Table 5 : 5 Contemporaneous rank correlations between 'patent intensity' (patents/employees) and 'R&D intensity' (R&D/employees) SIC 35 SIC 36 SIC 37 SIC 38 ρ 0.1631 0.2321 0.2248 0.1990 p-value 0.0000 0.0000 0.0000 0.0000 Obs. 7906 8119 2505 6935 Table 6 : 6 Extracting the 'innovativeness' index used for the quantile regressions -Principal Component Analysis results(first component only, unrotated) SIC 35 SIC 36 SIC 37 SIC 38 Table 7 : 7 Regression estimation of equation (4). Quantile regression estimates obtained using 1000 bootstrap replications. Quantile regression OLS FE WLS 10% 25% 50% 75% 90% SIC 35 β 1 -0.0124 -0.0003 0.0111 0.0196 0.0392 0.0114 0.0195 0.0261 Std. Error 0.0082 0.0043 0.0035 0.0046 0.0124 0.0035 0.0057 0.0086 t-stat -1.51 -0.06 3.11 4.24 3.16 3.23 3.42 3.03 R 2 within 0.0424 R 2 between 0.0001 R 2 overall 0.0723 0.0582 0.0479 0.0735 0.0894 0.0599 0.0273 0.1989 obs (groups) 601 obs 6682 6682 6682 6682 6682 6682 6682 6682 SIC 36 β 1 0.002 0.0052 0.0067 0.0179 0.0255 0.0103 0.0153 0.0282 Std. 
Error 0.0063 0.002 0.0024 0.004 0.0044 0.0024 0.0056 0.0057 t-stat 0.32 2.58 2.8 4.5 5.73 4.29 2.75 4.96 R 2 within 0.043 R 2 between 0.0005 R 2 overall 0.0429 0.041 0.0361 0.0487 0.048 0.0479 0.018 0.1427 obs (groups) 614 obs 6891 6891 6891 6891 6891 6891 6891 6891 SIC 37 β 1 -0.0017 0.0024 0.0038 0.0096 0.0179 0.0043 0.0149 0.0149 Std. Error 0.0176 0.0031 0.0026 0.0035 0.0114 0.005 0.008 0.0056 t-stat -0.1 0.75 1.47 2.72 1.56 0.85 1.86 2.67 R 2 within 0.0548 R 2 between 0.0048 R 2 overall 0.1036 0.0787 0.065 0.0617 0.0716 0.0685 0.0261 0.2417 obs (groups) 178 obs 2154 2154 2154 2154 2154 2154 2154 2154 SIC 38 β 1 -0.0011 -0.0008 0.0125 0.041 0.0688 0.0073 0.0136 0.0044 Std. Error 0.011 0.0058 0.0086 0.0099 0.0191 0.0074 0.0107 0.0068 t-stat -0.1 -0.14 1.45 4.13 3.6 0.99 1.27 0.65 R 2 within 0.0329 R 2 between 0.0884 R 2 overall 0.0459 0.0288 0.028 0.0507 0.065 0.0284 0.0049 0.1247 obs (groups) 527 obs 5870 5870 5870 5870 5870 5870 5870 5870 Note: Table 8 : 8 Extracting the 'innovativeness' index used for the quantile regressions -Principal Component Analysis results (first component only, unrotated) SIC 35 SIC 36 SIC 37 SIC 38 R&D / Sales 0.1631 0.1351 0.3076 0.0302 Patents / Sales 0.2669 0.1239 0.4294 0.1614 R&D stock / Sales (δ=15%) 0.4628 0.4945 0.3530 0.4645 Pat. stock / Sales (δ=15%) 0.4840 0.4958 0.4830 0.5199 R&D stock / Sales (δ=30%) 0.4659 0.4888 0.3540 0.4653 Pat. stock / Sales (δ=30%) 0.4865 0.4870 0.4877 0.5200 Prop n Variance explained 0.5031 0.6155 0.4752 0.3762 No. Obs. 7858 8079 2559 6940 Table 9 : 9 Regression estimation of equation (4). Note that the quantile regression SEs have not been bootstrapped here. quantile regression OLS FE WLS 10% 25% 50% 75% 90% SIC 35 β 1 -0.03868 -0.00390 0.00225 0.00114 0.01109 -0.00127 0.00276 0.02374 Std. Error 0.00154 0.00142 0.00066 0.00066 0.00145 0.00346 0.00494 0.01522 t-stat -25.10 -2.74 3.40 1.71 7.64 -0.37 0.56 1.56 R 2 within 0.0401 R 2 between 0.0006 R 2 overall 0.06880 0.05350 0.04020 0.06060 0.07350 0.05360 0.02260 0.16560 obs (groups) 661 obs 7273 7273 7273 7273 7273 7273 7273 7273 SIC 36 β 1 -0.01129 -0.00984 -0.00063 0.01048 0.01895 -0.00240 -0.00235 0.02051 Std. Error 0.00178 0.00091 0.00049 0.00085 0.00123 0.00276 0.00279 0.01403 t-stat -6.33 -10.84 -1.29 12.28 15.45 -0.87 -0.84 1.46 R 2 within 0.0352 R 2 between 0.0050 R 2 overall 0.04190 0.0368 0.0332 0.0436 0.0400 0.0408 0.0135 0.1333 obs (groups) 614 obs 7495 7495 7495 7495 7495 7495 7495 7495 SIC 37 β 1 0.00479 0.00148 0.00304 0.00673 0.02609 0.00658 0.00362 0.02215 Std. Error 0.00376 0.00129 0.00118 0.00165 0.00269 0.00379 0.00490 0.01533 t-stat 1.28 1.14 2.57 4.06 9.72 1.74 0.74 1.44 R 2 within 0.0588 R 2 between 0.0046 R 2 overall 0.0841 0.0750 0.0594 0.0529 0.0661 0.0659 0.0233 0.2213 obs (groups) 178 obs 2389 2389 2389 2389 2389 2389 2389 2389 SIC 38 β 1 0.00038 -0.00128 0.00540 0.00347 0.00121 0.00173 0.00164 0.00174 Std. Error 0.00110 0.00061 0.00063 0.00117 0.00122 0.00268 0.00333 0.00276 t-stat 0.35 -2.11 8.52 2.97 0.99 0.64 0.49 0.63 R 2 within 0.0261 R 2 between 0.0502 R 2 overall 0.04030 0.02570 0.02560 0.03900 0.03940 0.02510 0.00610 0.11840 obs (groups) 527 obs 6421 6421 6421 6421 6421 6421 6421 6421 Note: Note however that[START_REF] Evangelista | The Impact of Innovation on Employment in Services: evidence from Italy[END_REF] estimate the effect of firm-level innovation on total employment by attributing observation-specific weights to firms. 
Nonetheless, their analysis is rather limited because their employment growth variable is a qualitative survey response instead of a quantitative growth rate. Following[START_REF] Griliches | Patent Statistics as Economic Indicators: A Survey[END_REF], we consider here that patent counts can be used as a measure of innovative output, although this is not entirely uncontroversial. Patents have a highly skew value distribution and many patents are practically worthless. As a result, patent numbers have limitations as a measure of innovative output -some authors would even prefer to consider raw patent counts to be indicators of innovative input. During our discussion, we will use the terms 'products' and 'technology' interchangeably to indicate generally the same idea. 4 It would have been interesting to include 'discrete technology' sectors in our study, but unfortunately we did not have a comparable number of observations for these sectors. This remains a challenge for future work.5 The 'complex technology' sectors that we consider are SIC 35 (industrial and commercial machinery and computer equipment), SIC 36 (electronic and other electrical equipment and components, except computer equipment), SIC 37 (transportation equipment) and SIC 38 (measuring, analyzing and controlling instruments; photographic, medical and optical goods; watches and clocks). The patent ownership information (obtained from the above mentioned sources) reflects ownership at the time of patent grant and does not include subsequent changes in ownership. Also attempts have been made to combine data based on subsidiary relationships. However, where possible, spelling variations and variations based on name changes have been merged into a single name. While every effort is made to accurately identify all organizational entities and report data by a single organizational name, achievement of a totally clean record is not expected, particularly in view of the many variations which may occur in corporate identifications. Also, the NBER database does not cumulatively assign the patents obtained by the subsudiaries to the parents, and we have taken this limitation into account and have subsequently tried to cumulate the patents obtained by the subsidiaries towards the patent count of the parent. Thus we have attempted to create an original database that gives complete firm-level patent information. The gap between application and grant of a patent has been referred to by many authors, among others[START_REF] Bloom | Patents, Real Options and Firm Performance[END_REF] who mention a lag of two years between application and grant, andHall et al. (2001a) who state that 95% of the patents that are eventually granted are granted within 3 years of For a survey of firm growth, see[START_REF] Coad | Firm Growth: A Survey[END_REF]. [START_REF] Scherer | Size of Firm, Oligopoly, and Research: A Comment[END_REF] discusses the possibility of scale measurement errors entering into various firm-level data. Although he is unable to verify the hierarchy of these errors, he speculates that the measurement problems are likely to be larger for assets followed by sales and (to a lesser extent) employment(Scherer 1965: 259). Appendices A Scaling down according to firm size There are at least two ways of scaling down indicators of innovative activity according to firm size [START_REF] Small | R&D performance of UK Companies[END_REF]. The first, and perhaps most common, way is to use a firm's sales as an indicator of its size. 
The second involves scaling down according to a firm's employment. Our analysis in this paper uses the second approach, which some authors have identified as the preferable method. However, we also investigate the robustness of our analysis by scaling down according to total sales. Table 8 contains the corresponding results for the generation of our composite 'innovativeness' indicator. We observe that this indicator does not appear to perform as well when we scale down innovative activity by a firm's total sales (compare the results here with those in Coad and Rao (2006b,c)). We nonetheless pursue the analysis using this indicator, and we obtain similar results (see Table 9).

B. Alternative measures of innovative activity

In this section we verify the robustness of the quantile regression results presented in Section 4 by using simpler and cruder measures of firm-level innovative activity. We now estimate the following linear regression model:

    GROWTH_{i,t} = α + β1 · INNOV_{i,t-1} + ε_{i,t}

where INNOV_{i,t-1} refers to either a firm's 3-year stock of R&D intensity (i.e. R&D / Sales) or patent intensity (Patents / Sales); the conventional depreciation rate of 15% has been used for both of these variables. The results are presented in Figures 5 and 6. Broadly speaking, these results offer support to our earlier analysis. In general, we observe that the coefficient is close to zero at the median quantile. The coefficient decreases at the very lowest quantiles, often becoming negative. In contrast, the coefficient becomes increasingly positive at the upper quantiles.
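For readers who want to replicate this kind of exercise, a minimal sketch of the Appendix B regression using statsmodels' quantile regression is given below. The synthetic data are a stand-in for the firm-level panel (generated so that innovation mostly widens the growth distribution), and only the bivariate model written above is estimated; bootstrapped standard errors, as noted for Table 9, are omitted here as well.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
# Synthetic stand-in: lagged innovation intensity and sales growth
innov_lag = rng.exponential(scale=0.05, size=n)
growth = (0.01 * innov_lag
          + innov_lag * rng.normal(0.0, 0.5, size=n)   # innovation widens the spread
          + rng.normal(0.0, 0.05, size=n))
df = pd.DataFrame({"growth": growth, "innov_lag": innov_lag})

# OLS benchmark, then conditional quantiles as in the 10%...90% columns
ols = smf.ols("growth ~ innov_lag", data=df).fit()
print(f"OLS      beta1 = {ols.params['innov_lag']:+.4f}")
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    qr = smf.quantreg("growth ~ innov_lag", data=df).fit(q=q)
    print(f"q = {q:.2f}  beta1 = {qr.params['innov_lag']:+.4f}")
```

With data of this shape, the estimated β1 is near zero at the median, negative at the lowest quantiles and increasingly positive at the upper quantiles - the qualitative pattern reported in the text.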
57,005
[ "841477" ]
[ "15080", "494477", "494477" ]
01750444
en
[ "chim" ]
2024/03/05 22:32:07
2013
https://hal.univ-lorraine.fr/tel-01750444/file/DDOC_T_2013_0189_GHACH.pdf
Keywords: sol-gel, hybrid materials, bioencapsulation, electrodeposition, bacteria, artificial biofilm, membrane-associated enzyme, bacteriophage, electron transfer, cytochrome c

Abbreviations

Chemicals
TEOS: tetraethoxysilane
TMOS: tetramethoxysilane
PEI: poly(ethylenimine)
PBS: phosphate buffer solution
PEG: poly(ethylene glycol)
SWCNT: single-walled carbon nanotubes
SWCNT-(EtO)8-Fc: single-walled carbon nanotubes chemically modified with a poly(ethylene glycol) linker and a ferrocene mediator
MWCNT: multi-walled carbon nanotubes
MWCNT-Os: multi-walled carbon nanotubes wrapped with an osmium polymer
AuNP: gold nanoparticles
GFP: green fluorescent protein
PI: propidium iodide
PVA: poly(vinyl alcohol)
PVA-g-P(4-VP): poly(vinyl alcohol)-graft-poly(4-vinylpyridine) copolymer

Electrodes
ITO: indium tin oxide electrode
PGE: pyrolytic graphite electrode

Techniques
BacLight: Live/Dead BacLight bacterial viability test
AFM: atomic force microscopy
TEM: transmission electron microscopy

Scientific terms
EC: electrochemical communication
ET: electron transfer
DET: direct electron transfer
MET: mediated electron transfer

General introduction

The work reported in this thesis was developed at the interface between three disciplines: electrochemistry, materials science and microbiology. The first purpose of this research was to study the activity of bacteria immobilized in silica-based films prepared by the sol-gel process on electrode surfaces. The film can be prepared either electrochemically or by drop-coating on the electrode surface. In such systems, the key challenges are to retain cell viability in the sol-gel film [1] and to promote electron transfer reactions between the entrapped bacteria and the electrode material [2]. The immobilization of membrane-associated redox proteins in sol-gel films was then considered and applied to electrocatalysis. These proteins are associated with membrane fragments or vesicles that can be stabilized in sol-gel films using strategies similar to those reported for whole cells [3]. The different studies presented here on bacteria and membrane-associated proteins are fundamental in nature, but can find applications, for example, in electrochemical biosensors or bioreactors. A peripheral part of this PhD work is also presented, namely the influence of encapsulation in a sol-gel matrix on bacteriophage infectivity.
Electrochemistry was first used to induce the encapsulation of bacteria in hybrid sol-gel films (Chapter 2). Indeed, the controlled cathodic electrolysis of the sol can be used to produce locally, at the electrode surface, a basic pH that catalyzes the rapid gelation of the sol. The approach was initially proposed by Schacham et al. at the end of the nineties [4] before being applied by Walcarius et al. to the generation of mesostructured films [5]. The same approach was also applied in 2007 to the immobilization of redox proteins by the ELAN group of the LCPME [6]. The application of electrochemically assisted sol-gel deposition to the encapsulation of bacteria is a continuation of these previous investigations, the challenge here being to protect the viability of the bacteria in such a thin film despite the stress of electrolysis. Two approaches were developed for electrochemically assisted bacteria encapsulation (EABE): a one-step approach, in which the bacteria are introduced into the sol before deposition, and a two-step approach, in which the bacteria are first immobilized on the electrode surface (ITO) before applying the electrolysis in a sol that, in this case, does not contain any bacteria. In this first set of experiments, the viability of the microorganisms was assessed with the commercial Live/Dead BacLight test, based on the staining of bacterial DNA with fluorescent dyes. Metabolic activity was also tested by using genetically modified bacteria that express fluorescent proteins or emit luminescence in response to their environment. Electrochemistry was then considered as an analytical method: bacteria were encapsulated in silica-based films and the electron transfer reactions from the bacteria to different redox mediators were monitored (Chapter 3). The first approach simply involved ferricyanide as a soluble mediator, as reported by the group of Dong [7]. The immobilization of the mediator was then considered, and two different strategies were implemented. The first one involved carbon nanotubes functionalized with ferrocene moieties through a long poly(ethylene oxide) chain. The second strategy involved a natural mediator, cytochrome c from bovine heart, which was immobilized inside the silica gel, mimicking, in some respects, strategies developed by natural biofilms to increase electron transfer towards a final acceptor, whether mineral or electrode. This latter strategy was initially tested with Shewanella putrefaciens before being studied systematically with Pseudomonas fluorescens. The third topic of this thesis was the electrochemistry of membrane-associated proteins immobilized in a sol-gel film (Chapter 4). Several reports in the literature have shown that such proteins, associated with membrane fragments, can be stabilized in sol-gel materials, but applications in bioelectrochemistry remain rare [8,9] or nonexistent, depending on the protein considered. Two systems were studied. The first one was a cytochrome P450 (CYP1A2), which is currently considered for biosensor development [10,11]. One major interest of this type of redox protein is its ability to undergo direct electron transfer (DET) during the bioelectrochemical reaction. Here, the potential of sol-gel chemistry to provide a protective environment for the protein, in order to promote a stable direct electron transfer reaction, was studied. The second system studied in this section was the membrane-associated mandelate dehydrogenase (ManDH).
The electrochemical application of this protein involves mediated electron transfer (MET). The stability of the electrochemical response, notably in a convective environment, was tested and compared with the response obtained by simple adsorption of the protein-containing vesicles on glassy carbon electrodes. A final section considers the influence of encapsulation in a hybrid sol-gel matrix on the infectivity of bacteriophage ΦX174 (Chapter 5). This virus, discovered at the beginning of the 20th century [12], displays infectivity towards specific bacterial strains and can therefore be considered for antibacterial treatment. The encapsulation of viruses in sol-gel materials has recently been considered in order to achieve controlled release in the organism for cancer gene therapy [13]. Silica gel-based delivery moreover has the advantage of slowing down the development of anti-adenovirus antibodies. With the appearance of bacterial strains displaying resistance to antibiotics, interest in bacteriophages for the development of antibacterial treatments has recently reemerged [14], and their controlled immobilization could help extend their applications. Here, the effect of their encapsulation in a sol-gel matrix was studied, taking into account the effect of drying on their infectivity. Materials were prepared in the form of monoliths and stored in a humid or dry atmosphere. After storage, the gels were introduced into solution to release the bacteriophages, whose infectivity was measured in the presence of E. coli. Different organic additives, such as glycerol or a positively charged polyelectrolyte, were introduced into the gel, and their effects on the infectivity of the released bacteriophages were studied. All these studies are preceded in the thesis by a brief introduction to the interest of biocomposites in the biomedical and biotechnological fields (Chapter 1), covering the fundamentals of sol-gel based materials, the different organic additives, and the advantages of incorporating such additives to fabricate sol-gel based hybrid materials. The advantages of encapsulating proteins or microorganisms, their applications in the bio-electrochemical and biotechnological fields, and the principles of wiring redox proteins or microorganisms for bioelectrochemical devices are illustrated with examples from the literature. Finally, the positioning of the subject with respect to the literature is described. A methods and techniques section then describes the physico-chemical properties of the studied systems, the various sol-gel preparation protocols, notably for electrode modification, and the experimental techniques and analyses used in this work. A general conclusion closes the manuscript.

Interest of biocomposite materials

Enzymes, bacteria and viruses are nowadays attractive systems for technological devices owing to their simplicity of manipulation, high selectivity and sensitivity. Their bioanalytical features are attracting increasing attention in the pharmaceutical, environmental, medical and industrial fields. The real challenge for the efficiency of such biological devices is to fulfill the ever-growing requirements of environmental legislation in terms of rapidity of response, selectivity, stable reactivity and cost [1,2].
To achieve this, enzymes, bacteria and viruses should be entrapped in a biodegradable composite in order to maintain a high viability/activity rate during storage and to enhance communication with the surrounding environment [1]. Entrapped cells indeed provide advantages over free cells, such as increased metabolic activity, protection from environmental stresses and toxicity, and increased plasmid stability, and they may act as cell reservoir systems prolonging reaction times [3,4]. Several chemical or physical methods for bio-entrapment have been proposed in the literature. In chemical methods, enzymes and bacteria can be attached to an inert and biocompatible matrix through cross-linking using a bifunctional reagent. Protein supports, e.g., bovine serum albumin [5,6] and gelatin [7,8], are typically used to constitute the network, with glutaraldehyde (GA) as the cross-linking agent. This method is rather simple and rapid. Nevertheless, it also has some drawbacks, especially when considering microorganisms. In particular, cross-linking involves the formation of covalent bonds between GA and the functional groups located on the outer membrane of the microorganisms or on the enzymes. This mode of immobilization is consequently not suited when cell viability is absolutely required or when the enzymes involved in the detection are expressed at the cell surface [2]. Among physical methods, physical adsorption is the simplest and mildest way to immobilize biomolecules and microorganisms. Nevertheless, it results in weak electrostatic bonding that may cause easy desorption of the proteins and microorganisms from the surface during storage or analysis [2,9]. Another way to avoid leakage is to entrap them in an adequate material, and different strategies have been developed in this context. In a first strategy, the cells are filtered through a porous membrane, e.g., Teflon [10], polycarbonate [11], cellulose nitrate [12], silicon [13] or nylon [14], which is subsequently fixed to the electrode. In a second strategy, cells are retained near the transducer surface by a dialysis membrane [15]. These two modes of immobilization have been widely used for biosensor construction. In a third approach, proteins and microorganisms are entrapped in a chemical or biological polymeric matrix. Sol-gel silica [16-18] or hydrogels, such as poly(vinyl alcohol) (PVA) [19] and alginate [20], are typically used for that purpose. These polymers can efficiently protect microorganisms from external aggression. On the other hand, they may form a diffusion barrier that restricts the accessibility of the cells to the substrate and/or decreases electron transfer reactions [2]. The swelling properties of hydrogels may also limit their practical application in some cases. The combination of sol-gel and hydrogel materials, using a sol-gel-derived composite based on a silica sol and a PVA-4-vinylpyridine copolymer, prevents the silica glass from cracking during the sol-gel transition, limits hydrogel swelling and protects bacterial viability [21]. To date, natural or engineered proteins, antibodies, antigens, DNA, RNA, cell membrane fractions, organelles and whole cells have been encapsulated in a diverse range of inorganic, organic and hybrid materials [22].
The realm of sol-gel biocomposites offers significant advances in technologies associated with biology, from the fabrication of biosensors, biocatalysts and bioartificial organs to the fabrication of high-density bioarrays and bioelectronic devices. Indeed, it is expected that the coming years will witness the realization of a variety of research and industrial applications, especially those aimed at the catalysis, sensing/monitoring, diagnostics, biotechnology and biocomputing sectors [22]. The next section provides some general information about the sol-gel process.

Fig I-1. Applications and issues of bioencapsulation. (a) Summary of the major fabrication requirements of bio-immobilization technologies. Critical demands are broad applicability to a diverse range of biological materials, accommodation of specific co-immobilization, activation and/or stabilization prerequisites, availability of a variety of polymer chemistries, and amenability to macro- and micro-fabrication in different formats. (b) Overview of the performance characteristics of ideal bio-immobilizates. The major issues are the physico-chemical robustness of the polymer matrix and the stabilization of the immobilized biological material. These are especially important in relation to storage stability, sustained performance in gaseous and liquid media and heterogeneous environments, efficient functioning in biologically, chemically and physically aggressive environments, sustained catalytic performance in large-scale bioreactors, and close-tolerance functioning of electronics-interfaced micro-fabricated immobilizates (adapted from reference [23]). Abbreviations: liq, liquid; UF, ultrafiltration; NF, nanofiltration; NCF, near-critical fluid; SCF, super-critical fluid.

Principles of sol-gel chemistry and processing

The sol-gel process is a wet-chemical technique widely used in the fields of materials science and ceramic engineering. In this procedure, the initial precursor 'sol' gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and a solid phase, whose morphologies range from discrete particles to continuous polymer networks. A sol is a stable dispersion of colloidal particles in a liquid; colloids are solid particles with diameters of 1-1000 nm. A gel is an interconnected, rigid network with pores of submicrometer dimensions and polymeric chains whose average length is greater than a micrometer [24]. In the sol-gel process, the starting precursor can be either an alkoxide or an aqueous silicate. Alkoxides are available as pure, single molecules whose reactivity toward hydrolysis can be efficiently controlled, so that the nature of the species that will effectively condense to form a gel can be selected [25]. Overall, the alkoxide-based sol-gel route is more flexible in terms of reaction conditions, chemical nature, functionality and processing. However, when one considers biology-related applications of sol-gel chemistry, alkoxide precursors exhibit several limiting features. Alkoxide hydrolysis leads to the release of parent alcohol molecules that may be detrimental to biological systems. This is a major concern for material design itself, but also in terms of ecological impact when large-scale applications are to be developed [25,26]. Considering the current knowledge in the field of sol-gel materials, the 'alkoxide' and 'aqueous' routes can be regarded as suitable for different applications, depending on their processing.
Silicon alkoxides, in particular, appear to be convenient precursors in this respect.

Fig I-2. Overview of the sol-gel process (adapted from references [25,27]).

Hydrolysis

Sol-gel silica synthesis is based on the controlled condensation of Si(OH)4 entities. These may be formed by hydrolysis of soluble alkali metal silicates or alkoxysilanes. The commonly used compounds are sodium silicates, tetraethoxysilane (TEOS) and tetramethoxysilane (TMOS) [23].

Hydrolysis of aqueous silica
Silicate solution species are controlled by the pH of the medium and the silica concentration [25]. Monomolecular Si(OH)4 is the predominant solution species below pH 7, while at higher pH anionic and polynuclear species are formed. For this reason, sodium silicate solutions are acidified in order to generate the Si(OH)4 species that will then condense to form siloxane bonds. The characteristics of a sodium silicate solution depend on its SiO2:Na2O ratio.

Hydrolysis of alkoxysilanes
The preparation of a silica glass can begin with an appropriate alkoxide, such as Si(OR)4, where R is mainly CH3, C2H5 or C3H7, which is mixed with water and/or a mutual solvent to form a homogeneous solution. Hydrolysis leads to the formation of silanol groups (SiOH) (Eq. I-1). It is well established that the presence of H3O+ in the solution increases the rate of the hydrolysis reaction [24,25].

Eq I-1. Hydrolysis of the silica alkoxide precursor in acidic medium:
    Si(OR)4 + 4 H2O → Si(OH)4 + 4 ROH

Condensation

In a condensation reaction, two hydrolyzed or partially hydrolyzed molecules link together by forming siloxane bonds (Si-O-Si) (Eq. I-2). This type of reaction can continue to build larger and larger silicon-containing molecules and eventually results in a continuous silica network. Under acidic conditions, the rate of hydrolysis is faster than that of condensation [24,25].

Eq I-2. Condensation of silanol groups into siloxane bonds:
    ≡Si-OH + HO-Si≡ → ≡Si-O-Si≡ + H2O

Sol-gel materials, whether purely inorganic or hybrid organic-inorganic in nature, are generated by polycondensation induced either by evaporation of the sol solvent or by electrochemically assisted deposition [28,29].

Solvent evaporation
- Coating modes (drop-, spin- and dip-coating): the substrate (biomolecules or microorganisms) is mixed with the sol, which is then dropped, spun or dipped onto the surface, allowing the solvent to evaporate (Fig. I-3).
- Monoliths: defined as bulk gels (smallest dimension ≥ 1 mm). Monolithic gels are potentially of interest because they are stable at room and low temperature [25]. In addition, a bulk gel can be more convenient for the entrapment of microorganisms and viruses, since its thickness provides a higher degree of protection [17].

Electrochemically assisted deposition
An alternative method for the polycondensation of hydrolyzed silica precursors has been proposed by Shacham et al., combining electrochemistry with sol-gel processing [30]. The basic idea of electrodeposition is to facilitate the polycondensation of sol precursors through electrochemical control of the pH at the electrode/solution interface [31], thereby affecting the kinetics of the sol-gel process. Starting from a sol in which hydrolysis is optimal (i.e., pH 3) and condensation very slow, applying a negative potential increases the pH at the electrode/solution interface (Fig. I-4), thereby catalyzing polycondensation at the electrode surface [29]. The thickness of the film deposited on the electrode surface is affected by the applied potential, the electrodeposition time and the nature of the electrode [30].
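To give a feel for the magnitudes involved, the short sketch below combines Faraday's law with the cathodic water reduction (2 H2O + 2 e- → H2 + 2 OH-) to estimate the hydroxide generated at the electrode and the resulting interfacial pH. All numerical values (current density, time, area, diffusion-layer thickness) are assumed round numbers chosen for illustration - they are not parameters from this work - and buffering as well as OH- diffusion are deliberately neglected.

```python
import math

F = 96485.0               # C/mol, Faraday constant

# Assumed electrolysis conditions (illustrative values only)
j = 1e-3                  # A/cm^2, cathodic current density
t = 30.0                  # s, deposition time
A = 1.0                   # cm^2, electrode area

# 2 H2O + 2 e- -> H2 + 2 OH-  =>  one OH- generated per electron
n_oh = j * A * t / F      # mol of OH-
print(f"OH- generated: {n_oh * 1e6:.2f} micromol")

# Assume the OH- stays confined within a ~100 micrometer diffusion layer
layer = 100e-4            # cm
V = A * layer / 1000.0    # L (cm^3 -> L)
c_oh = n_oh / V           # mol/L
print(f"local [OH-] ~ {c_oh:.3f} M, i.e. pH ~ {14 + math.log10(c_oh):.1f}")
```

Even this crude estimate shows how a sol kept at pH ≈ 3 in the bulk can reach a strongly basic pH within a thin layer at the cathode, which is precisely the local catalysis of polycondensation exploited by the electrochemically assisted deposition described above.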
Electrochemically assisted deposition can also be applied to non-silica precursors such as zirconia or titania [32,33]. In addition, it can be advantageously combined with surfactant templating to generate highly ordered mesoporous sol-gel films with a unique mesopore orientation normal to the underlying support [34,35], and this has also been exploited to prepare vertically aligned silica mesochannels bearing organo-functional groups [36,37].

Gelation

The hydrolysis and condensation reactions discussed in the preceding sections lead to the growth of clusters that eventually collide and link together to form a network called a gel. Gels are described as 'strong' or 'weak' according to the stability of the bonds formed, which may be permanent or reversible; the difference between the two is related to the time of gelation [25]. The gelation rate also influences the pore structure: fast gelation gives an open structure because the particles are quickly connected and cannot undergo further rearrangements [38].

Aging

Aging takes place when the solution starts to lose its fluidity and takes on the appearance of an elastic solid. During aging, four processes affect the porous structure and the surface area of the silica gel [25,45]:
- Polycondensation: the further reaction of silanol and alkoxy groups in the structure to form siloxane bonds, resulting in densification and stiffening of the siloxane network.
- Syneresis: the shrinkage of the gel network, caused by the condensation of surface groups inside pores, resulting in pore narrowing. In aqueous gel systems, the pore size is controlled by the equilibrium between electrostatic repulsion and van der Waals forces; in this case, shrinkage is produced by the addition of an electrolyte.
- Coarsening (Ostwald ripening): the dissolution and re-deposition of small particles; necks between particles grow and small pores may be filled in. This results in an increase in the average pore size and a decrease in the specific surface area.
- Phase transformation: several types can occur during aging, such as separation of a solid phase from the liquid (microsyneresis), segregation of partially reacted alkoxide droplets (the gel turns white and opaque), crystallization and precipitation.

In summary, the pore structure, surface area and stiffness of the network are changed and controlled by the following parameters: time, temperature, pH and pore fluid [9].

Drying

The gel drying process consists of the removal of water from the interconnected pore network, with simultaneous collapse of the gel structure, under conditions of constant temperature, pressure and humidity. Large capillary stresses can develop during drying when the pores are small (< 20 nm). These stresses cause the gels to crack catastrophically unless the drying process is controlled: by decreasing the liquid surface energy through the addition of surfactants or the elimination of very small pores, by supercritical evaporation, which avoids the solid-liquid interface, or by obtaining monodisperse pore sizes through control of the rates of hydrolysis and condensation [24,25].

Sol-gel based bio-hybrid materials

Sol-gel is a suitable matrix for the immobilization of enzymes, viruses and bacteria in biotechnological applications. However, sol-gel precursors and processing conditions are sometimes not mild enough for designing a stable and safe biocomposite.
For instance, the release of alcohol during the hydrolysis-condensation of silicon alkoxides has been considered an obstacle, due to its potential detrimental effects on the entrapped proteins and microorganisms [39,40]; the ionic strength of aqueous precursors has also been considered a harmful environment for microorganism entrapment [39,41]. In addition, the mechanical constraints and shrinkage occurring during the gelation and drying processes, respectively, lead to fracture of the material, pore collapse and loss of biological activity [42]. In order to overcome the limitations of inorganic matrices, sol-gel based hybrid materials have been proposed as stable systems containing an inorganic precursor and organic and/or polymeric additives [22]. Hybrid materials can effectively eliminate both the brittleness of a purely inorganic sol-gel and the swelling of a pure polymer or hydrogel.

Biocompatible silica precursors

Biocompatible silica precursors have to be used in order to avoid denaturation of the entrapped biological material. TMOS [Si(OMe)4] and TEOS [Si(OEt)4] are commonly used as biocompatible precursors after evaporation of the released methanol and ethanol, respectively, under vacuum. Alternatively, aqueous sol-gel precursors are commonly used in order to avoid any trace of alcohol, for example a sodium silicate solution [43] or a mixture of sodium silicate and Ludox® suspension [18,39]. Another way to avoid denaturation by alcohol is to use silanes bearing biocompatible, polyol-based hydrolyzable groups, which can be hydrolyzed under mild pH conditions [44].

Organic additives

Sugars
Sugars can be used as additives to stabilize biological material within sol-gel matrices. Chymotrypsin and ribonuclease T1 have been entrapped in the presence of sorbitol and N-methylglycine: glucose and amino acids significantly increase the thermal stability and biological activity of the proteins by altering their hydration state and increasing the pore size of the silica matrix [45]. D-glucolactone and D-maltolactone have also been covalently bonded to the silica network via the coupling reagent aminopropyltriethoxysilane (APTES), giving rise to non-hydrolyzable sugar moieties; firefly luciferase trapped in such matrices has been used for the ultra-sensitive detection of ATP via bioluminescent reactions [46]. Trehalose and sucrose are non-reducing disaccharides that may be accumulated at high concentrations by many organisms capable of surviving complete dehydration. They have been shown to be excellent stabilizers of many biomolecules and living cells in the dry state, and appear to be superior to other sugars [47,48]. The stabilization effects of trehalose have been attributed to several mechanisms and interactions with critical biomolecules or cellular structures such as membranes and proteins. The stabilization of membranes is often explained by the water replacement hypothesis: the formation of hydrogen bonds between trehalose and membrane phospholipids and proteins is thought to replace the water molecules evaporated during drying [49]. This, in turn, inhibits van der Waals interactions between adjacent lipids, thus preventing the elevation of the membrane phase transition temperature (Tm). Such a Tm change could cause membranes to shift from their native liquid-crystalline state to a gel state at environmentally relevant temperatures. As the membranes of biomolecules and
microorganisms pass through this phase transition, regions with packing defects make the membranes leaky. Trehalose may also interact through hydrogen bonding of its -OH groups to polar residues in proteins, hence preserving their conformational state against dehydration [50]. Trehalose has also been shown to decrease the oxidative damage caused by oxygen radicals [51] and to accumulate in response to heat shock and cold exposure, allowing bacterial viability at low temperatures. It has been shown in the literature that trehalose considerably increases the tolerance of E. coli to drying processes [41].

Glycerol
Glycerol (or glycerin) is a simple polyol compound. It is a water-soluble, sweet-tasting and non-toxic compound widely used in pharmaceutical formulations. As known from the literature, glycerol is an example of a compatible solute, which can be accumulated in cells under hyperosmotic conditions to allow them to tolerate osmotic stresses [47]. Glycerol is thus used as a powerful osmotic stabilizer: it lowers water activity, prevents contraction of the gel, and also protects cells against encapsulation and freezing stresses [17,18,41,47]. Thanks to its water solubility, biocompatibility and ability to maintain the cohesion of the silica gel, glycerol in aqueous sol-gel monoliths has made it possible to preserve 60% of bacterial viability against the stresses of sol gelation and aging over one month [17].

Polymer additives

Poly(ethylene glycol) (PEG)
PEG is a flexible, water-soluble polyether polymer with applications ranging from industrial manufacturing to medicine. It can play several important roles in sol-gel materials, apart from structure modification: it decreases the non-specific binding of proteins and living cells to the surrounding matrix and fills the pores and spaces in the matrix, thereby preventing matrix shrinkage and collapse [52]. In turn, this contributes to protecting the entrapped biomolecules and microorganisms from denaturation and loss of activity [42,47,53]. The introduction of PEG into sol-gel materials has proven efficient for the safe immobilization of bacteria in stable sol-gel films [54].

Poly(ethylenimine) (PEI)
Cationic polymers such as PEI, poly(allylamine) and poly(dimethyldiallylammonium chloride) have shown critical effects on the stability of encapsulated enzymes [55]. PEI, which
On the basis of IR studies evidencing interactions between chitosan and silica materials, a model proposed H-bonds of silanol groups with amide-and oxy-groups of chitosan, ionic bonds between chitosan amino groups and silanolate as well as covalent links resulting from transesterification of chitosan hydroxygroups by silanol (Fig. I-5) [57,58]. However, the fact that amino groups remain available for further binding suggests that these functions are only weakly bonded to the silica network [59]. Due to the strong compatibility and interactions between chitosan and sol-gel, these hybrid materials were also shown to be suitable for the encapsulation of enzymes [START_REF] Tan | [END_REF][61][62]. Moreover, chitosan has shown a considerable possibility to be electrodeposited with proteins for biosensor constructions [63,64]. The electrodeposition mechanism involves a cathodic electrolysis as for the electrodeposition of silica which could be interesting for co-deposition of both chitosan and silica to form electrodeposited hybrid materials. First, chitosan can be protonated and dissolved in acidic solutions. At low pH, protonated chitosan becomes a cationic polyelectrolyte (Equation 1) [63]. Eq I-3. Protonation of chitosan polymer in acidic medium. However, the increase in solution pH results in a decreasing charge and at pH=6.5 amino groups of chitosan become deprotonated. High pH can be generated at the cathode surface using the cathodic reduction of water discussed before in section 2.2.2 (Fig. I-4) [63]. Eq I-4. Cathodic reduction of water molecule. Then electric field provides electrophoretic motion of charged chitosan macromolecules to the cathode, where chitosan forms an insoluble deposit (Equation 3) [63]. Eq I-5. Cathodic electrodeposition of chitosan polymer by precipitation. The development of electrochemically-assisted codeposition between silica gel and chitosan polymer could provide a stable biodegradable and safe composite for biomolecules and cell entrapment towards biosensing applications [62]. Finally, the sol-gel based hybrid materials have gained the interest in combining the attractive properties of both organic and inorganic materials for biotechnologies and electrochemistry [65]. The hybrid materials exhibit chemical and physicochemical features that might be readily exploited when used to modify electrode surfaces due to the versatility of sol-gel chemistry in the design of electrochemical devices. Applications cover various fields such as sensors, reactors, batteries, and fuel cells [27]. Next section will discuss the interesting features of sol-gel based hybrid materials for entrapment of biological life and their potential applications in biotechnologies. Sol-gel bioencapsulation and applications The last decade has seen a revolution in the area of sol-gel-derived biocomposites since the demonstration that these materials can be used to encapsulate biological species in a safe nucleic acids and even whole cells in these applications relies largely on the successful immobilization of the biological objects in a physiologically active form [26,66]. Encapsulation of proteins Proteins can be broadly categorized as either soluble or membrane-associated. Soluble proteins, which include enzymes, antibodies, regulatory proteins and many others, reside in an aqueous environment, and thus are generally amenable to aqueous processing methods for immobilization. 
The solubility of these proteins arises from the presence of polar or charged amino acid residues on their exterior surface. Therefore, immobilization techniques for soluble proteins must provide a hydrated environment at a pH that does not alter the protein structure and does not present significant polarity differences relative to water. The group of Avnir was one of the pioneers in developing the sol-gel encapsulation technique for soluble proteins such as enzymes [67], a field that has since become an important area of research and technology [68]. More recently, the group of Walcarius has succeeded in preserving and stabilizing the activity of different enzymes in sol-gel films, based on a solvent evaporation protocol [69,70] and on an electrochemically assisted protocol [71,72], for bio-electrochemical applications. In addition, the bioencapsulation field has included the entrapment of the redox protein cytochrome c in silica nanoparticles for the design of electrochemical biosensors [73]. Membrane-associated proteins, on the other hand, are either completely (intrinsic membrane proteins) or partially (extrinsic membrane proteins) embedded within cellular lipid membranes [66] (Fig. I-7). The distinguishing characteristic of membrane proteins, with respect to soluble ones, is the presence of both hydrophobic residues, which associate with the lipophilic membrane, and hydrophilic amino acid residues, which associate with the aqueous environment on either side of the membrane [66]. Therefore, successful immobilization of membrane proteins must address two important issues. Firstly, the method must allow retention of the tertiary folded structure of the protein, as is the case for soluble proteins. Secondly, it must accommodate the phospholipid membrane structure, which is held together mainly through hydrophobic interactions [66]. These requirements imply keeping the membrane intact, or at a minimum the protein-associated lipids, in order to accommodate both the hydrophilic and hydrophobic portions of the protein. This latter requirement suggests that the use of organic solvents should be avoided and highlights the need for aqueous processing methods during immobilization [66]. A wide range of strategies has been directed at the immobilization of bilayer lipid membranes [75], including physical adsorption of a bilayer through deposition, covalent attachment of a monolayer or bilayer of phospholipids to a solid surface, and attachment via avidin-biotin linkages. Some of these typical strategies are illustrated in Fig. I-8. The biocompatibility and stability of silica-based gels have attracted attention for the immobilization of membranes and membrane proteins such as pyrene-labeled liposomes and dye-encapsulating liposomes [76,77], the photoactive proton pump bacteriorhodopsin (bR) [78], two ligand-binding receptors, the nicotinic acetylcholine receptor (nAChR) and the dopamine D2 receptor [79], hydrogenases [80] and cytochrome P450s [81]. In most cases, water-soluble poly(ethylene glycol) (PEG) or glycerol was required as an additive to maintain the receptors in an active state (ca. 40-80% activity relative to solution) [74,80]. The transition from traditional silica precursors and processing methods toward more biocompatible processing methods has been critical for the successful entrapment of many membrane-bound proteins.
The use of polyols such as glycerol or poly(ethylene glycol) has also proven highly beneficial for the stabilization of entrapped membrane proteins, although the underlying mechanisms of stabilization by these additives are still not fully understood. Fundamental and technological advances would expedite the use of these membrane-protein-doped materials for applications including microarrays, bioaffinity chromatography, biosynthesis, biocatalysis, bioselective solid-phase microextraction, and energy storage [66].

Encapsulation of bacteria

An ideal immobilization matrix should be functional at ambient temperature and biocompatible, should enable complete retention of the cells, and should allow the flow of nutrients, oxygen and analytes through the matrix. In theory, encapsulated cells should remain viable but not growing [21]. This aspect is of paramount importance when immobilized cells have to be stored for long periods of time without loss of viability. In this manner, living cells can be regarded as a 'ready-to-use reagent' that can be stored for a prolonged time under carefully controlled conditions; once activated, e.g., by a defined temperature change or the addition of nutrients, a constant and reproducible number of viable cells can be obtained for selected applications (e.g., biosensing, bioremediation, fermentation) [1]. Cells have been successfully encapsulated in organic polymers (e.g., hydrogels such as agar) [82], in inorganic matrices (e.g., sol-gel) [41] and in sol-gel based organic-inorganic hybrid materials [17,18,21,41,47]. All of these performed well in terms of solute diffusion and cell retention, although encapsulation in organic polymers may suffer from cell leakage due to their swelling behavior. The real challenges in this field are to limit the sol-gel constraints during the gelation and aging processes and the impact of gel drying on long-term cell viability. Most of the cell encapsulations described in the literature have used solvent evaporation and a monolithic format [18,21,39,83,84], or thick sol-gel layers [85,86], in order to maintain the water/solvent ratio as long as possible and thereby preserve cell viability. Only a limited number of works have succeeded in keeping cells viable in thin films, either by encapsulating cells in multilayered silica thin films [87], which protect the encapsulated cells from direct contact with the surrounding media and avoid leaching and rapid drying, or by a cell-directed assembly protocol [88], in which phospholipids self-organize around the cells and confine water in their vicinity, allowing long-term preservation under ambient conditions. To date, only a limited number of works have relied on electrochemically assisted cell encapsulation. One approach succeeded in encapsulating proteins and bacteria in a thick electrodeposited hybrid sol-gel film, but the authors only confirmed the conserved membrane integrity of the encapsulated bacteria without providing precise details such as the PEG-to-silica ratio, the duration, or the percentage of conserved viability [89]. Other approaches performed the deposition with an alginate polymer instead of a sol-gel polymer [90,91]. Biofuels have also attracted growing interest as a renewable alternative to replace the rapidly depleting fossil fuels [92], and encapsulation of photosynthetic microorganisms in sol-gel materials is likely to retain their photosynthetic activity for the conversion of carbon dioxide into biofuels under light energy [83,84].
The scientific impact of bioencapsulation in sol-gel thus relies on cell viability, together with the cell integrity, metabolic activity and protein synthesis of the encapsulated bacteria, for environmental biotechnologies [4,17]. All these works exploited sol-gel deposition protocols based on solvent evaporation and a monolithic format.

Bioreporter

A genetically engineered reporter consists of a gene promoter, acting as a sensing element for chemical or physical changes, fused to one or more reporter genes coding for a measurable protein; for instance, the zntA promoter can be coupled to a luminescent or GFP reporter for the early detection of toxic metals such as cadmium or mercury [95]. The encapsulation approach in sol-gel materials has opened a route for the design of long-term, efficient fluorescent and luminescent reporters based on cells genetically modified to express green fluorescent protein (GFP) or to emit luminescence in response to general toxicity and genotoxicity [85,86,96]. All these works used a sol-gel deposition protocol based on solvent evaporation and thick sol-gel films.

Fig I-10. Bacterial toxicity assay principle. A quantifiable molecular reporter is fused to specific gene promoters known to be activated by the target chemical(s) (adapted from reference [93]).

Biosensor

A biosensor is a device that enables the identification and quantification of an analyte of interest in a sample matrix, for example water, food, blood or urine. As a key feature of the biosensor architecture, biological recognition elements that selectively react with the analyte of interest are employed (Fig. I-11) [Borgmann et al.; Souza et al.]. The encapsulation approach in sol-gel matrices has also opened a route for the development of sol-gel based biosensors, for example for biochemical oxygen demand (BOD) or for the detection of toxic organophosphates, which result from the extensive use of pesticides and insecticides and are also potential chemical warfare agents [Yu et al.]. BOD is used to characterize the organic pollution of water and wastewater; it is estimated by determining the amount of oxygen required by aerobic microorganisms to degrade the organic matter in wastewater [21,100,101]. The encapsulation approach has succeeded in preserving cell viability and metabolic activity in solvent evaporation based protocols [Souza et al.]. Moreover, electrochemically assisted cell encapsulation in a sol-gel matrix could extend this biotechnology to the micro- and nano-scale level of environmental and diagnostic analysis for the development of bioelectronics and biosensors [90,91].

Biofuel cells

A new form of green energy based on the efficient conversion of organic matter into electricity is now feasible using biofuel cells. In these devices, microorganisms support their growth by oxidizing organic compounds, with the anodic electrode serving as the sole electron acceptor, so that electricity can be harvested. These electrons then flow from the anode, through the device to be powered (or a resistor in experimental studies), and onto the cathode (Fig. I-12).
Encapsulation of virus The possibility to immobilize biological active agents like virus and bacteriophage within silica gels has opened the route for development of biomedical applications [107][108][109]. These microorganisms are important agents for the medical field such as oncolytic viruses including replication-competent adenoviruses which are emerging as a promising tool for the treatment of cancer [110] and bacteriophages which reduce food-borne pathogens during the preharvest and postharvest stages of food production [111]. The key factors for these biomedical applications are the conservation of viral infectivity and sometimes the extended release of virus to reach the specific release rate and target. For that, the possibility of immobilizing viruses into sol-gel materials without the loss of biological activity has led to the development of vaccination and viral epitopes carriers for treatment of diseases [107,109]. The process is based on sol-gel polymerization in conditions compatible with biological life. Silica materials are biodegradable in vivo to control the release rate of encapsulated viruses then they are subsequently secreted into urine [112]. The sol-gel encapsulation of viruses is designed to contain substantial amounts of water to ensure large pores in an aqueous environment, which can be a hydrophilic environment for viruses. In principle, viruses might therefore remain Investigation of bioelectrochemical communication As discussed above, redox proteins and bacteria have supplied the driving force for development of bio-electrochemical systems such as electrochemical bioreactors, biosensors and biofuel cells. The key criterion in bio-electrochemical devices is the electrochemical communication (EC) between these biomolecules/cells and an electrode surface [106,[START_REF] Borgmann | Advances in Electrochemical Science and Engineering[END_REF]. In case of redox proteins (such as cytochromes and enzymes), the active site is the critical domain for electron transfer (ET) between the protein, its cofactor and the electrode surface [START_REF] Kohlmann | [END_REF]. In case of bacteria, the EC between its intracellular enzymes and electrode surface is expected to occur via extracellular electron transfer (EET) [118,119]. The EET is a catalytic mechanism of organic substrates with different intracellular enzymes (respiration or fermentation for aerobic and anaerobic species, respectively). Electrons produced from microbial respiration at microbial ET chain and transported through the periplasmic space and cell wall to the outer electron acceptor (i.e., electrode). Electrodes can serve either as electron Chapter I. Knowledge and literature survey 2013 35 donors or electron acceptors for microorganisms/redox protein, depending on whether the electrode is considered as a cathode or anode, respectively [120]. An electron transfer pathway is generally divided into 2 categories: 1) Direct electron transfer (DET). 2) Mediated electron transfer (MET). The DET takes place via a physical contact of the bacterial cell membrane, or protein active site, with the electrode surface without requirements of any electron shuttle [106,[START_REF] Kohlmann | [END_REF]. 
The main drawbacks of DET are that (i) it is limited to electroactive bacteria - since most living cells are assumed to be electronically non-conducting, such a transfer mechanism has long been considered impossible - and (ii) the working electrode overpotential can be toxic to microbial and enzymatic life [Borgmann et al.; 106]. MET, as an alternative to direct-contact pathways, involves an additional molecule capable of ensuring a redox cycling process (i.e., an electron shuttle or mediator). An artificial redox mediator can enhance ET by transferring electrons between enzymatic active sites or microbial cell membranes and the electrode surface. Moreover, it can operate at low overpotentials, avoiding the drawbacks discussed for DET [Borgmann et al.; 106]. In the case of bacteria, the MET mechanism of electron shuttling can be further divided into two categories: (a) endogenous shuttling and (b) exogenous shuttling. An endogenous shuttle, which is microbially produced, is secreted outside the bacteria to exchange electrons with the electrode surface through a redox mechanism; this process is known as self-mediated ET [118]. Few bacteria possess this self-mediated ET property; one example is S. oneidensis strain MR-1, which is able to excrete a quinone/flavin molecule for this purpose [121,122; Marsili et al.]. An exogenous shuttle is an artificial mediator that enters or contacts the bacteria to shuttle electrons to the electrode surface through redox cycling, such as the chemical mediator ferricyanide [Alferov et al.]. The development of these different mechanisms for facilitating ET between enzymes/bacteria and an electrode surface has extended electrochemical technologies such as sensors and fuel cells to biological systems.

Electrical wiring of proteins

The development of EC between redox proteins and an electrode surface has attracted considerable attention for the fabrication of biosensors and enzymatic fuel cells [71,75,125,126]. Some redox proteins, such as laccase, bilirubin oxidase, hydrogenase, hemoglobin and c-type cytochromes, can exchange electrons directly with the electrode surface [125,127-129]. Mediators such as ferrocene dimethanol (FDM) [72], osmium polymers [55] or vitamin K3 [70] attached to carbon nanotubes (CNT) have been investigated for the wiring of redox proteins. Furthermore, biological mediators such as cytochromes have also been utilized to wire enzymes efficiently and to avoid the use of toxic chemical substances [130-133]. In addition, the immobilization of cytochrome c in layer-by-layer systems has improved the EC between enzymes and the electrode surface compared with monolayer systems [134,135]. Research works have shown that an enzyme can be durably immobilized by simple entrapment in porous gel networks without requiring any covalent bond between the support and the enzyme, thus maintaining its native properties and biological activity [68,70,136]. The immobilization of an artificial mediator along with the enzyme and its cofactor for the fabrication of reagentless biosensors has gained attention, owing to the stability of the enzymatic activity and of the electrical wiring in porous materials [16,70].

Electrical wiring of bacteria

Whole cells have also gained increasing attention for EC with electrode surfaces in the fabrication of biosensors and microbial fuel cells [20,103].
Whole cells can easily and rapidly communicate with their surrounding environment, and their use avoids the expensive and time-consuming purification of enzymes and their preservation outside the natural environment [Souza et al.]. In the case of DET, only electroactive bacteria are able to exchange electrons directly with the electrode surface [106]. DET in electroactive species is facilitated by outer-membrane (OM) redox proteins (c-cytochromes), which allow ET from the ET chain located in the cell membrane to the electron acceptor outside the cell, as in Shewanella species [102], or by conductive appendages such as the nanowires formed by flagella and pili, as in Geobacter species [103]. Electroactive bacteria have been considered as building blocks for the construction of microbial fuel cells because of their efficiency in DET with electrode materials [Kim et al.]. In the case of MET, soluble mediators such as ferricyanide K3[Fe(CN)6] have attracted attention, especially for the wiring of gram-negative bacteria in BOD sensors, because of their solubility (higher than that of oxygen) and their ability to enter through the porins of the cell wall to accept electrons directly from the bacterial electron transfer chains [Alferov et al.]. However, a high ferricyanide concentration (≥ 50 mM) can also damage the membrane of these bacteria [138]. One approach immobilized ferricyanide in an ion-exchangeable polysiloxane to fabricate a BOD sensor for the detection of organic pollution in industrial wastewater, effluents or natural water. This approach improved the BOD assay compared to the conventional one, which requires a 5-day period, complicated procedures and skilled analysts to obtain reproducible results [100]. Polymeric mediators have also gained significant attention in the electrical wiring field, especially flexible osmium polymers, which can be stably bound to the electrode surface [139,140]; they can wire different gram-negative bacteria efficiently owing to their cationic charge and flexibility [Alferov et al., 139,140]. Electrode modification with osmium polymers for wiring bacteria has been described either with organic polymers, with or without CNT, in the presence of glutaraldehyde as cross-linker [Alferov et al., 139,140], or with conducting polymers to enhance the ET with the electrode surface [141]. Up to now, c-cytochrome has not been used as an artificial mediator for wiring bacteria. Only limited work has addressed the electrochemical communication of bacteria entrapped in sol-gel materials [100,142]. Following the development of EC between planktonic bacteria and electrode surfaces (in particular the current responses of electrochemical biosensors and the electrical output of microbial fuel cells), the building of multilayers of electroactive bacteria as biofilms has gained growing interest, especially for microbial and hybrid fuel cells [125,143,144]. A biofilm can improve bacterial ET efficiency in biofuel cells, primarily thanks to much higher biomass densities and a higher bacterial viability sustained by anode respiration. However, natural anodic biofilms usually have a considerable thickness, varying from several to tens of microns, which may cause diffusion limitation of nutrients and insufficient interaction of the bacteria with the anode material [144].
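This diffusion-limitation argument can be made quantitative with a steady-state reaction-diffusion balance: for zero-order substrate uptake, the depth to which the substrate penetrates a respiring biofilm scales as the square root of D·C/q. The sketch below illustrates this estimate; the diffusivity, bulk concentration and volumetric uptake rate are assumed values for illustration, not parameters measured in this work.

```python
import math

def penetration_depth(D, c_bulk, q_vol):
    """Depth (m) at which a substrate is exhausted inside a biofilm,
    from the steady-state balance D * d2c/dx2 = q_vol with zero-order
    uptake: delta = sqrt(2 * D * c_bulk / q_vol)."""
    return math.sqrt(2 * D * c_bulk / q_vol)

# Illustrative values (assumptions, not measurements from this work):
D = 5e-10        # m^2 s^-1, substrate diffusivity in a biofilm
c_bulk = 20.0    # mol m^-3 (20 mM formate, a concentration used later)
q_vol = 50.0     # mol m^-3 s^-1, volumetric uptake rate (assumed)

delta = penetration_depth(D, c_bulk, q_vol)
print(f"penetration depth ~ {delta * 1e6:.0f} um")
# ~20 um here: a biofilm thicker than this estimate is partly starved,
# which rationalizes the diffusion limitation discussed above.
```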
For these reasons, mimicking a biofilm within a silica matrix and/or conducting polymers eliminates the long cultivation times and the variability typically associated with natural biofilm formation, avoids the loss of biomass, and can serve as efficiently as a natural biofilm [105,141,143]. The mechanisms discussed above for ET between bacteria or biofilms and the electrode surface are summarized in Figure I-14.

Fig I-14. Summary of the proposed mechanisms for ET to the anode. Red dots represent outer-surface cytochromes, black lines represent nanowires, and the blue clouds represent the possible extracellular matrix containing c-type cytochromes that confer conductivity (adapted from reference [102]).

Positioning of the thesis

As discussed in the literature survey, the encapsulation of bacteria, proteins [55,70] or even viruses [107,113] in silica-based materials prepared by the sol-gel process has already been described. The electrochemistry of bacteria and redox proteins is also a major field of research, providing opportunities for applications in biosensor, bioreactor and biofuel cell technologies. In these active areas, the complementary expertise in electrochemistry, sol-gel chemistry and microbiology still offers opportunities for original research. Several directions are considered in this thesis. First, we investigate the use of electrochemistry to trigger the encapsulation of bacteria in thin hybrid sol-gel films (Chapter 2). Electrochemistry is then considered as an analytical method to study the electron transfer reactions of encapsulated bacteria and membrane-associated proteins (Chapters 3 and 4).

Chapter II. Electrochemically-assisted bacteria encapsulation in thin hybrid sol-gel film

Introduction

There is a tremendous interest in the immobilization of living cells (bacteria, yeasts, microalgae, ...) in porous matrices for biotechnological applications [1-5]. Bacteria can be encapsulated in organic polymers (e.g., agar hydrogels) [6] or in inorganic matrices (e.g., sol-gel-derived silica-based materials) [7,8]. Compared to organic polymers, silica gel offers several advantages, such as improved mechanical strength, chemical inertness, biocompatibility, resistance to microbial attack and retention of bacterial viability in a non-growing state [9,10]. In order to combine the advantages of both organic and inorganic components, sol-gel-based hybrid materials have also been proposed (e.g., sol-gel including gelatin [9], glycerol [11], or copolymers of poly(vinyl alcohol) and poly(4-vinyl pyridine) [10]). A challenge in this field is to avoid the impact of drying on cell viability, so that bacteria have essentially been encapsulated in monoliths [9,11] or thick films [7,8] keeping enough water in the final material. Encapsulation in monoliths and in thick films has succeeded in preserving the viability and activity of the encapsulated microorganisms for several months, whereas only a limited number of works have succeeded in preserving cell viability in thin films (i.e., a configuration often required for practical applications). This was notably achieved with multilayered silica thin films [12] or by introducing phospholipids in cell-directed assembly [13]. On the other hand, the incorporation of organic polymers, especially those bearing amine or amide groups (e.g., chitosan), in the sol-gel matrix allows the formation of organic-inorganic hybrids (often stabilized by strong hydrogen bonding) with improved protection against cracking upon aging/drying [14].
The incorporation of glycerol or poly(ethylene glycol) prevents excessive contraction of sol-gel matrices and protects the cells against osmotic stresses [15,16]. Finally, sugars (particularly trehalose) have been shown to stabilize bacteria during freezing and aging [17,18]. At the end of the nineties, besides the classical evaporation methods applied to generate thin sol-gel films [19], a novel approach was described to deposit such thin films on electrode surfaces, based on an electrochemically-driven pH increase likely to accelerate the gelification of a sol, and hence film deposition [20]. This approach has rapidly been extended to the generation of functionalized sol-gel layers [21] or of ordered and oriented mesoporous silica films [22], as well as to the encapsulation of biological objects (e.g., haemoglobin [23] or dehydrogenases [24]) in thin sol-gel films. It offers some advantages over the other sol-gel deposition methods, which are based on solvent evaporation (spin-, dip- and spray-coating), especially by enabling homogeneous film deposition on small electrodes (e.g., ultramicroelectrodes) [25,26], on non-flat supports [27] and on conducting supports, which can be useful for bioprocessing and electrochemical biosensing applications [28-31]. Electrochemical encapsulation has recently been developed for bacteria in alginate and studied for short-term analysis [30,31], but it has not been reported yet for bacteria in sol-gel films. The aim of this chapter is to present the application of the electro-assisted deposition technique to the encapsulation of bacteria in thin hybrid sol-gel films. This technique utilizes low voltages for the base-catalyzed condensation of a sol doped with suitable additives on an indium tin oxide (ITO) electrode. The approach involves the immersion of the electrode into a stable hydrolyzed sol in moderately acidic medium, followed by the application of a constant negative potential to the electrode surface in order to induce a local pH increase, leading to the polycondensation of the hydrolyzed monomers and thus to film deposition along with the entrapment of bacteria in the final deposit. This leads to thin films, in contrast with the thick films described elsewhere [26]. Here, we also show how organic additives can improve the electrochemical encapsulation of bacteria in hydrophilic thin sol-gel films by protecting the bacteria against both electrochemical and aging stresses. In order to assess the state of the encapsulated bacteria in the film, the Live/Dead BacLight viability kit was applied; it enables differentiation between bacteria with intact and damaged cytoplasmic membranes (membrane integrity) [32]. E. coli C600 was used as a simple model for adjusting the optimum protocol of electrochemically-assisted encapsulation in sol-gel films. In addition, the metabolic activity of the encapsulated bacteria was evaluated by measuring the luminescence of E. coli MG1655 pUCD607 [33] and the fluorescence of E. coli MG1655 pZNTA-GFP, as expressed in the presence of a low concentration of Cd2+.

Factors affecting the electrochemically assisted deposition of the hybrid sol-gel film

The electrochemically assisted bacteria encapsulation was optimized with respect to the parameters required for a successful entrapment of the bacteria and an optimal viability.
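The deposition principle lends itself to a simple order-of-magnitude check: the cathodic current reduces species in the bath (water, and possibly dissolved oxygen or nitrate, depending on its composition) to hydroxide ions, and Faraday's law links the charge passed to the amount of base generated near the electrode. A minimal sketch of this estimate is given below; the current density and diffusion-layer thickness are assumed values for illustration, not parameters reported in this chapter.

```python
F = 96485.0  # C mol^-1, Faraday constant

def hydroxide_generated(j, t, thickness):
    """Estimate the local OH- concentration (mol L^-1) produced by a
    cathodic current density j (A cm^-2) applied for t seconds,
    assuming 1 electron per OH- and confinement of the generated base
    within a diffusion layer of the given thickness (cm)."""
    q = j * t                      # charge per unit area, C cm^-2
    n_oh = q / F                   # mol OH- per cm^2
    return n_oh / thickness * 1e3  # mol cm^-3 -> mol L^-1

# Assumed values (illustrative only):
j = 1e-3       # A cm^-2 of cathodic current
t = 30.0       # s, the deposition time used in this chapter
delta = 1e-2   # cm, ~100 um diffusion layer (assumption)

c_oh = hydroxide_generated(j, t, delta)
print(f"local [OH-] ~ {c_oh:.3f} M")
# ~0.03 M here: largely enough to raise the interfacial pH and trigger
# the polycondensation of the hydrolyzed silica precursors.
```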
The sol-gel deposition was triggered by applying a constant cathodic potential of -1.3 V to the electrode. This potential was found optimal in preliminary experiments, as more negative potentials were detrimental to bacterial viability and less negative potentials did not produce rapidly enough the films containing, or covering, the bacteria. Under these conditions, the thickness of the material could be controlled from less than 100 nm to more than 2 µm by varying the deposition time from 10 to 60 s. The film thickness was also significantly affected by the sol composition, especially the presence of the polymeric additives (chitosan, PEG) used to improve the cell viability in the final films. To assess the effect of these components, thickness measurements were made using AFM on films deposited under the same conditions, i.e., at -1.3 V for 30 s, for various sol compositions. Electrolysis of the starting sol containing only the silica precursor (0.25 M) did not produce a visible deposit under these conditions. Note that changing these conditions can allow faster film deposition for other applications [23]. The addition of PEG (12.5% m/v as final concentration in the hybrid sol-gel film) increased the deposition rate, but the thickness remained very small, i.e., 30 ± 7 nm. In contrast, the introduction of chitosan (0.25% m/v as final concentration in the hybrid sol-gel film) in the sol led to a dramatic increase in the film thickness, by a factor of 10, reaching 350 ± 70 nm. The thickness increased even more when the three components, i.e., the silica precursor (TMOS), chitosan and PEG, were mixed together, reaching 1.9 ± 0.1 µm (Fig. II-1). Such variations can be basically explained by different polycondensation rates, which are affected by the presence of the organic additives, in agreement with previously reported observations for other related systems [22,24]. But this is not the only reason. Indeed, chitosan has the ability to be electrodeposited by applying a negative potential to the conducting electrode [34,35]. The thickness of a film prepared with chitosan and PEG only (no silica precursor) was rather small (24 ± 4 nm), suggesting a minor contribution of this process. In the presence of the silica precursor, however, chitosan can be co-deposited to form a hybrid material displaying improved properties compared to the individual components taken separately. Moreover, PEG strongly influences the film drying and limits shrinkage during the aging process, as reported before [36], effects that could also be beneficial in preventing a mechanical stress on the membrane of the bacteria encapsulated in the film.

Electrochemically assisted bacteria encapsulation in two steps (EABE(2S))

Principle and preliminary characterization

Immobilization of the bacteria was first performed using a two-step procedure (EABE(2S)), implying first the adhesion of the bacteria on the ITO surface, as described elsewhere [37], followed by the electrochemically assisted sol-gel film deposition and bacteria entrapment. The thicknesses of the films deposited with various electrolysis times were measured using AFM. Values of 82 ± 5 nm, 130 ± 20 nm and 1.9 ± 0.1 µm were obtained for deposition times of 10, 20, and 30 s, respectively (Fig. II-3). This variation was not linear with time, as reported previously for other electrochemically assisted sol-gel depositions [21].
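To visualize how strongly non-linear this growth is, the three AFM data points can be fitted to a simple power law; the sketch below does this in log space. The power-law form is only a convenience for quantifying the acceleration, not a mechanistic model proposed in this work.

```python
import numpy as np

# AFM thicknesses of the EABE(2S) films (data from this chapter)
t = np.array([10.0, 20.0, 30.0])          # deposition time, s
h = np.array([82e-9, 130e-9, 1.9e-6])     # thickness, m

# Fit h = a * t^b in log space (illustrative power law)
b, log_a = np.polyfit(np.log(t), np.log(h), 1)
a = np.exp(log_a)
print(f"h(t) ~ {a:.2e} * t^{b:.1f}")

for ti, hi in zip(t, h):
    print(f"t = {ti:4.0f} s  measured {hi * 1e9:7.0f} nm  "
          f"fit {a * ti ** b * 1e9:7.0f} nm")
# An exponent b well above 1 confirms the strongly non-linear growth
# noted above for the electro-assisted deposition.
```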
Viability of bacteria immobilized by EABE(2S)

Live/Dead BacLight viability analysis is a dual DNA fluorescent staining method used to determine simultaneously the total population of bacteria and the fraction of damaged bacteria. It was applied here to the bacteria encapsulated in the sol-gel films. The BacLight kit is composed of two fluorescent nucleic acid-binding stains: SYTO 9 and propidium iodide (PI). SYTO 9 penetrates all bacterial membranes and stains the cells green, while PI only penetrates cells with damaged membranes. The combination of the two stains produces red-fluorescing cells (damaged cells), whereas cells stained with SYTO 9 alone (green-fluorescent cells) are considered viable [32]. At first, the short-term viability, i.e., one hour after the encapsulation of E. coli C600 in the optimal sol-gel containing silica precursor, PEG and chitosan, was studied, and a high viability was observed.

Influence of film components on the long-term viability of EABE(2S)

The positive influence of silica gel on cell viability was already discussed in the literature [9,10]. In addition, the presence of the silica precursor, PEG and chitosan in the electrolysis bath allows here the rapid gelification of a hybrid film thick enough to protect the bacteria, as discussed in section (4.1), where the thickness reached 2 µm (approximately 4 times the ≈0.5 µm diameter of a bacterial cell). In particular, chitosan, which carries -NH3+ groups, can interact with silicate monomers to facilitate the deposition of a stable hybrid sol-gel film [38]. As a result, the samples prepared with chitosan exhibited much larger film thicknesses (≥ 100 times) than those without chitosan (see the first section of Results and discussion). In addition, PEG lowers the interaction of the entrapped bacteria with the silicate matrix, minimizes their exposure to the toxic solvent produced during hydrolysis and prevents excessive contraction during the sol-gel transition; it fills the pores and spaces in the matrix, thereby preventing matrix shrinkage and collapse [36,39].

Influence of storage conditions on the long-term viability of EABE(2S)

Fig II-7. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4 × 10⁶ cells mL⁻¹) stored in 1 mM KCl (a), in moist air only (b), and in moist air with trehalose inside the sol-gel film (c). All films were prepared by electrolysis at -1.3 V for 30 s. N.B.: time zero corresponds to 1 hour after encapsulation.

According to the results of Fig. II-7, storage in moist air without trehalose in the film did not preserve bacterial viability for more than 3 days, owing to the complete drying of the sol-gel film (curve b). Storage in KCl solution solved the problem of bacterial damage due to film drying (curve a). On the other hand, storage in a wet medium also has some disadvantages in terms of long-term viability, because of the aqueous dissolution of the sol-gel material over long storage periods [12] and of a higher probability of contamination than in dry storage. Storage in moist air with trehalose introduced inside the sol-gel film solved these problems and gave the best preservation of membrane integrity, with 95% of viable cells after 30 days of encapsulation (curve c).
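Viability percentages such as those quoted above follow directly from counts of green (SYTO 9 only) and red (PI-positive) cells in the fluorescence micrographs. A minimal sketch of this bookkeeping is given below; the counts are made-up numbers used only to illustrate the arithmetic.

```python
def baclight_viability(n_green, n_red):
    """Percentage of membrane-intact ('live') cells from Live/Dead
    BacLight counts: green = intact membrane, red = damaged membrane."""
    total = n_green + n_red
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * n_green / total

# Hypothetical counts from one field of view (illustration only):
print(baclight_viability(190, 10))   # -> 95.0, e.g. a well-preserved film
print(baclight_viability(120, 180))  # -> 40.0, a partly damaged film
```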
Here, trehalose acted as the water-replacing molecule during dry storage. As described in the literature [17,18], it provides a hydrophilic, humid environment by replacing water hydrogen bonding during gel drying, thereby avoiding protein denaturation and the lipid phase transition of the bacterial membrane. This, in turn, inhibits van der Waals interactions between adjacent lipids, preventing the elevation of the membrane phase-transition temperature, and stabilizes the structural conformation of the bacterial membrane proteins against denaturation. The conditions used in this experiment are not favourable to cell division. We suppose that the observed increase in cell viability at short times, i.e., between 0 and 5 days (curve c), could be due to a recovery of cell membrane integrity. The error affecting these measurements is higher at short times than at longer times. This variability seems to be related to the stress that affected some cells; the long-term measurements showed a higher ratio of viable cells and a lower variability.

Influence of the electrodeposition time on the long-term viability of EABE(2S)

Finally, we studied the effect of the electrolysis time, from 10 to 30 s, on the viability of encapsulated E. coli C600 (Fig. II-8). The films were produced from the optimal sol composition defined before, in the presence of silica gel, PEG, chitosan and trehalose. All samples were stored at +4 ºC in humid air. No viability could be found after 3 days when the film was produced with a 10 s electrolysis (curve a). The viability after 15 days was greatly improved by increasing the deposition time, as 40% (curve b) and 96% (curve c) viability were observed in the films prepared with 20 s and 30 s electrolysis, respectively. The effect of the electrolysis time has to be correlated with the film thickness, which increases dramatically from 82 ± 5 nm and 132 ± 17 nm for 10 and 20 s, respectively, to 1.9 ± 0.1 µm for 30 s. Here, the film deposited with a 30 s electrolysis was thick enough for an effective protection of the encapsulated bacteria, with no significant cell-membrane destruction by the electrochemical stress. Thinner films did not ensure enough protection because the bacteria were not properly covered by the hybrid layer. Thicker films can be produced by using longer electrolysis times, but it was observed that too long electrodeposition became detrimental to cell integrity (data not shown). Practical applications of immobilized bacteria for biosensing could involve a large number of bacteria. A large amount of encapsulated bacteria is also necessary for studying the metabolic activity, notably in the luminescent mode of analysis. As shown in Fig. II-5d-f, the cell density of the bacteria can be controlled during the first adhesion step. However, one disadvantage of the EABE(2S) protocol is the limited preservation of viability when such a high bacterial density is encapsulated. We observed that a dense bacterial film strongly limits the electrodeposition process and/or alters the film stability. For this reason, we investigated the electrochemically assisted bacterial encapsulation in one step (EABE(1S)), in order to achieve larger amounts of encapsulated bacteria and to be able to analyze their metabolic activity once entrapped in the hybrid sol-gel films. The main results are described hereafter.

Electrochemically assisted bacteria encapsulation in one step (EABE(1S))

Principle and preliminary characterization

As explained before, EABE(1S) implies the electrodeposition of a hybrid sol containing the bacterial suspension directly onto the electrode surface. The thicknesses of the films deposited with various electrolysis times at the optimal potential of -1.3 V were measured by AFM.
Using the same parameters as for EABE(2S) (optimal potential, hybrid sol-gel composition and optimized deposition times), the AFM measurements for EABE(1S) showed much higher film thicknesses than for EABE(2S), probably because the presence of the bacteria in the sol affected the electrodeposition rate on the ITO surface. AFM characterization also provided additional information on the gradual encapsulation of E. coli C600 at a cell density of 8 × 10⁹ cells mL⁻¹. With the one-step protocol, bacteria could be observed on all samples, even when using a long electrodeposition time (i.e., 30 s) leading to relatively thick films. This confirms the random distribution of the bacteria encapsulated by EABE(1S) inside the deposited gel. The roughness of the film (measured after drying) also strongly increased with increasing film thickness.

Viability analysis of bacteria immobilized by EABE(1S)

Analysis of bacterial metabolic activity

In order to confirm the viability of the encapsulated bacteria, the EABE(1S) protocol was applied to the bioluminescent bacterial biosensor E. coli MG1655 pUCD607. The pUCD607 plasmid contains the lux-CDABE operon under the control of the constitutive tetracycline-resistance promoter [41], which is responsible for a continuous light production. The bioluminescent reaction is catalyzed by luciferase (encoded by the luxA and luxB genes), which requires a long-chain aldehyde (provided by the luxCDE gene products) as substrate, oxygen, and a source of reducing equivalents, usually reduced flavin mononucleotide (FMNH2) [42]. Since FMNH2 production depends on a functional electron transport system, only metabolically active cells produce light. Here, the E. coli MG1655 pUCD607 system was used to assess the effect of encapsulation and subsequent storage conditions on the overall metabolic activity of the bacteria. E. coli MG1655 pUCD607 (8 × 10⁹ cells mL⁻¹) was encapsulated in the sol-gel matrix with the optimal composition (as described above) and compared to non-encapsulated E. coli (simply deposited on a PEI-modified ITO electrode). All samples were stored at -80 ºC prior to the reactivation step of the bacteria in growth medium for the long-term luminescence analysis. All results are presented in Table II-1. After four weeks of storage, the electrochemical encapsulation of E. coli MG1655 pUCD607 in a film of optimal composition preserved 50% of the continuous luminescence production, compared to an almost null metabolic activity for the non-encapsulated bacteria, most probably due to the absence of cell stabilizers against freezing stresses [18]. On the other hand, the same encapsulation protocol was applied to the biosensor strain E. coli MG1655 pZNTA-GFP, which expresses GFP in response to cadmium exposure (P. Billard, unpublished data). This bacterial strain could be used for the design of fluorescent microbial sensors for the assessment of heavy-metal bioavailability [43].

Conclusions

Three E. coli strains (C600, MG1655 pUCD607 and MG1655 pZNTA-GFP) were used as models of microorganisms encapsulated in thin sol-gel films generated in a controlled way by electrochemical protocols. The encapsulation can be performed with bacteria pre-immobilized on the electrode surface or from a suspension of the bacteria in the starting sol.
The possibility of utilizing electrochemistry for an effective bacteria encapsulation in sol-gel films, while preserving a high percentage of viability and activity of the entrapped microorganisms, was demonstrated here. The presence of organic additives proved to be essential, as they contributed to modify significantly both the reactivity and the stability of the deposited films.

Chapter III. Investigations of the electrochemical communications between bacteria and the electrode surface

Introduction

The development of living cell-based electrochemical devices holds great promise for biosensors, bioreactors and biofuel cells. It eliminates the need for the isolation of individual enzymes and allows the active biomaterial to work under conditions close to its natural environment, thus at high efficiency and stability [1-4]. Living cells are able to metabolize a wide range of chemical compounds. Microbes are also amenable to genetic modification, through mutation or recombinant DNA technology, and serve as an economical source of intracellular enzymes [2]. However, the main issues are the preservation of cell viability and an efficient electron transfer (ET) between bacteria and electrodes, both required for high performance and for a feasible large-scale implementation of these electrochemical devices [1,4]. The preservation of cell viability can be achieved by encapsulation in biocompatible matrices resembling the natural environment, keeping the cells alive in an open porous matrix that allows the passage of oxygen, solvents and nutrients while avoiding cell leaching [3]. Whole cells have been successfully encapsulated by either physical or chemical methods. The latter include covalent interactions between functional groups located on the outer membrane of the microorganisms and glutaraldehyde as cross-linking agent. This encapsulation mode is consequently not suited when cell viability is absolutely required, as excessive degrees of cross-linking can be toxic to the cells [4,5]. On the other hand, physical entrapment is a simple and soft method to immobilize microorganisms onto a transducer. Microorganisms can be encapsulated in organic or inorganic polymeric matrices, e.g., sol-gel [6,7], or in hydrogels such as poly(vinyl alcohol) (PVA) [8], alginate [9] or agarose [10]. In order to combine the advantages of both organic and inorganic matrices, microorganisms have been entrapped in hybrid materials based on silica sol and poly(vinyl alcohol) grafted with 4-vinylpyridine (PVA-g-P(4-VP)) [11,12]. The encapsulation can protect the cells from external aggression, but at the same time it may form a barrier that decreases the electrochemical interaction with the electrode surface [4]. Immobilization of the bacteria onto an electrode surface is also a key issue in the development of such devices. DET has been achieved between electrodes and bacterial monolayers through c-type cytochromes (c-Cyts) associated with the bacterial outer membrane (OM), as in Shewanella oneidensis [14], through putatively conductive nanowires, as in Geobacter sulfurreducens strains [15], or through conductive biofilms [16]. Mediated ET has been performed either with bacterial self-produced mediators, as in Pseudomonas aeruginosa [17], or with an artificial mediator added to enhance the ET between the microbial cells and the electrode. The natural final electron acceptor (e.g., dioxygen in the case of aerobic bacteria, or other oxidants such as Fe(III) oxides in the case of anaerobically respiring organisms) can thus be replaced, preventing the problem of limiting concentrations of the electron acceptors [18].
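For such mediated systems, the amperometric operating potential is typically set a couple of hundred millivolts positive of the mediator formal potential, so that the re-oxidation of the reduced mediator at the electrode is mass-transport limited. The sketch below illustrates this rule of thumb with the two mediators used later in this chapter; the formal potentials are approximate literature ballpark values, assumed here for illustration, not measurements from this work.

```python
# Approximate formal potentials vs Ag/AgCl (assumed literature values):
mediators = {
    "ferricyanide Fe(CN)6 3-/4-": 0.23,  # V
    "bovine heart cytochrome c":  0.02,  # V
}

def operating_potential(e_formal, margin=0.13):
    """Suggested applied potential (V): mediator formal potential plus a
    margin ensuring that mediator re-oxidation is diffusion-limited."""
    return e_formal + margin

for name, e0 in mediators.items():
    print(f"{name}: E0' ~ {e0:+.2f} V -> apply ~ "
          f"{operating_potential(e0):+.2f} V")
# Compare with the potentials actually applied in this chapter:
# +0.35 V with ferricyanide and +0.15 V with cytochrome c.
```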
Soluble artificial mediators such as ferricyanide, p-benzoquinone and 2,6-dichlorophenol indophenol (DCPIP) have been successfully used in microbial fuel cells and electrochemical biosensors [19-21], owing to their ability to circumvent the cell membrane and to collect and transfer a maximum of current to the electrode surface. Ferricyanide (Fe(CN)6 3-) has received particular attention for gram-negative bacteria, because the high solubility of this synthetic electron acceptor overcomes the limitation set by the lower solubility of oxygen, and because of its ability to enter through the porins of the cell wall. However, a high ferricyanide concentration (≥ 50 mM) can also damage the membrane of some bacteria like E. coli [22]. Polymeric mediators have also succeeded in facilitating the electrochemical communication (EC) of different bacterial species, through flexible polymers stably bound to the electrode surface, thereby avoiding the release of potentially toxic compounds into the environment [20,23-25]. The conductive properties of osmium polymers promote a good EC between the electron-donating system and the electrode surface, thanks to the permanent positive charge of the redox polymer, which favors its electrostatic interactions with the charged cell surface [25]. In addition to the mediators, carbon nanotubes (CNT) have attracted considerable attention in electrochemical biosensors, as they provide large surface areas enhancing the ET between bacteria and the electrode surface, either alone [26,27] or in the presence of an osmium polymer [28]. It actually seems of interest to consider covalently-linked chemical mediators or biological mediators for wiring bacteria to electrode surfaces, but this approach has not been described so far in the literature.

Investigation of bioelectrochemical communication

Electrochemical activity of bacteria in a silica gel layer

The cyclic voltammetry of S. putrefaciens encapsulated in the silica gel layer displayed a main cathodic signal around -0.22 V and a smaller signal at -0.32 V (peak potentials). On the reverse scan, the anodic response appears even more complex, with three successive redox signals visible at -0.28 V, -0.15 V and +0.06 V. One can reasonably suppose that only the bacteria located close to the electrode can transfer electrons efficiently to the GCE. These signals are compatible with redox proteins secreted by the bacteria or present in the outer membrane of the S. putrefaciens cells, such as membrane-bound cytochromes [28,29] or secreted riboflavin molecules [30], which could participate in electron transfer reactions to the electrode surface. Curve "a" in Fig. III-2B (see inset for a closer view) reports the chronoamperometric response of this electrode upon two successive additions of 20 mM sodium formate. Despite the visible electrochemistry of S. putrefaciens in the sol-gel layer, the resulting current in the presence of the electron donor remains very low, almost undetectable, in the range of a few nA. Such a low response can certainly be explained by the limited number of bacteria able to transfer electrons to the GCE in the insulating environment formed by the silica gel [2,4]. On the other hand, soluble mediators like ferricyanide are already known to shuttle electrons successfully between gram-negative bacteria physically encapsulated in a sol-gel layer and an electrode surface [21,31,32], notably for the fabrication of biochemical oxygen demand (BOD) sensors [21,32].
Ferricyanide can indeed diffuse through the porins of the outer cell membrane to collect the electrons from the bacteria [20,31]. Curve "b" in Fig. III-2B reports the same measurement performed in the presence of ferricyanide in the solution. In these conditions, a well-defined current increase, in the range of hundreds of nA, was measured after each addition of 20 mM formate to the solution, which is an indication of the good activity of the bacteria in this environment. The comparison between the experiments performed in the absence (curve "a") and in the presence (curve "b") of ferricyanide clearly indicates that the electron transfer from bacteria encapsulated in a sol-gel layer is strongly hindered if no electron shuttle is added. The presence of a soluble mediator is indeed an effective way to improve this reaction, but other strategies can be implemented in order to reach a more efficient electrochemical communication. The next sections discuss the possible interest of SWCNT for the enhancement of the electrochemical communication, first in the presence of a soluble mediator and then with the mediator, i.e., ferrocene, attached to the surface of the SWCNT. Finally, a last strategy, based on the encapsulation of cytochrome c as a natural mediator in the sol-gel layer along with the bacteria, is investigated and compared with the strategies involving SWCNT and chemical mediators.

Influence of SWCNT on the electrochemical communication with entrapped bacteria in a silica gel layer

The introduction of carbon nanotubes was performed following two different protocols: dispersion of SWCNT-COOH in the sol together with the bacteria (case B in Fig. III-1), or deposition of the bacteria-containing sol-gel layer on top of a chitosan/SWCNT-COOH underlayer (case C in Fig. III-1); the amperometric responses of both configurations are reported in Fig. III-3 (curve b in Fig. III-3B corresponding to case C in Fig. III-1). It is noteworthy that the two-layer configuration suffers from a longer response time, probably because of some resistance to mass and charge transfer through the insulating silica layer before reaching the conductive electrode surface (a phenomenon expected to be less constraining in the composite layer, where the CNTs are distributed throughout the film). For both configurations, the electrochemical response recorded in the presence of carbon nanotubes was improved by comparison to that observed with the bacteria alone in the gel (i.e., without SWCNT, see curve "b" in Fig. III-2B). SWCNT could thus in principle be appropriate for networking bacteria encapsulated in the sol-gel material. The good electrochemical responses measured in the presence of soluble mediators indicate that the bacteria kept a good activity in the sol-gel layer and were not disturbed by the presence of the SWCNT or of the additives used to disperse them. However, the small improvement observed in the absence of soluble mediators indicates that an optimal use of this nanomaterial cannot be reached here, possibly because the limited dispersion of the SWCNT in the sol leads to an insufficient networking. Moreover, it was observed that silica is likely to cover the nanotube surface, an effect that could hinder the electron transfer reactions between the nanotubes or with the bacteria. One way to overcome this limitation is the surface functionalization of the SWCNT with a suitable mediator that would allow an efficient interaction between the bacteria and the neighboring carbon nanotubes.
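Amperometric traces such as those discussed above are commonly reduced to a sensitivity by dividing the calibration slope (steady-state current step per concentration step) by the electrode area. A minimal sketch of that reduction is shown below; the current values are invented for illustration and are not the measured data of this chapter.

```python
import numpy as np

# Hypothetical steady-state currents after successive 20 mM formate
# additions (illustration only, not the measured data):
conc = np.array([0.0, 20e-3, 40e-3, 60e-3])            # mol L^-1
current = np.array([0.0, 0.21e-6, 0.40e-6, 0.57e-6])   # A

area = 0.071  # cm^2, e.g. a 3 mm diameter glassy carbon disk (assumed)

slope = np.polyfit(conc, current, 1)[0]   # A per (mol L^-1)
sensitivity = slope * 1e3 / area          # mA M^-1 cm^-2
print(f"sensitivity ~ {sensitivity:.2f} mA M^-1 cm^-2")
# Values in the 0.1-1 mA M^-1 cm^-2 range are typical of the systems
# compared in this chapter.
```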
Fig. III-3. Amperometric current responses measured (A) in the absence and (B) in the presence of 5 mM Fe(CN)6 3- mediator in solution, upon successive additions of sodium formate (arrows), using glassy carbon electrodes modified with (a) a sol-gel layer containing both SWCNT-COOH and S. putrefaciens (as reported in Fig. III-1B) and (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-COOH underlayer film (as reported in Fig. III-1C). The measurements were performed in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.

Application of SWCNT-(EtO)8-Fc for electrochemical communication with bacteria entrapped in a silica gel layer

Our group recently described the functionalization of SWCNT with a ferrocene mediator through flexible poly(ethylene glycol) linkers, a spacer length of 8 ethylene glycol units being reported as the most effective one [33]. This functionalization provides a good mobility to the ferrocene moieties, allowing a well-defined cyclic voltammetric response (Fig. III-4A) and a suitable electrocatalysis, as well as an improved dispersion of the nanotubes in the aqueous environment used here for the bacterial immobilization. The poly(ethylene glycol) chain could also provide a hydrophilic protection preserving the microbial viability [12].

Figure: Amperometric current responses measured upon addition of sodium formate using glassy carbon electrodes modified with (a) a sol-gel layer containing SWCNT-(EtO)8-Fc and S. putrefaciens (as reported in Fig. III-1B); (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-(EtO)8-Fc underlayer film (as reported in Fig. III-1C). The measurement was performed under stirring in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.

Effect of a biological mediator on cell communication

The electrochemical behavior of S. putrefaciens in the sol-gel layer was then studied in the presence of cytochrome c (a biological mediator), to be compared with the results obtained with SWCNT-Fc described in the previous section. This film displayed a behavior significantly different from the previous systems with chemical mediators (ferrocene or Fe(CN)6 3-). The amperometric response was indeed measured upon successive additions of 0.2 mM formate to the solution, i.e., a concentration 100 times lower than those used in the investigations reported above. Under these conditions, a maximum sensitivity of 1 mA M⁻¹ cm⁻² was observed, ten times higher than the best sensitivity observed with SWCNT-(EtO)8-Fc. Moreover, the operating potential was lower with cytochrome c (i.e., +0.15 V) than with the chemical mediators (i.e., +0.35 V), which is clearly advantageous for several applications, including biosensing and biofuel cells. On the other hand, the linear range of the response was much narrower with cytochrome c as mediator (mM range) than with the other mediators (in the range of 100 mM). In the present state of our investigations, it is difficult to explain unambiguously the different behaviors of the two classes of mediators, but their different natures could be at the origin of distinct interactions with the bacteria. The interaction mechanism of chemical mediators such as Fe(CN)6 3-, or of flexible osmium polymers linked to CNT, has been described in the literature as the entrance of the mediator through the porins of the outer cell membrane, where it accepts electrons from the electron transport chain located at the cell membrane [20,31] or even from cytosolic enzymes [35], whereas a biological mediator can collect electrons only from the outer cell membrane, without entering through its porins.
Figure: (A) Cyclic voltammogram measured with a GCE modified with a sol-gel layer containing bovine heart cytochrome c; the measurement was performed in PBS at room temperature at a scan rate of 20 mV s⁻¹. (B) Amperometric current responses recorded upon successive additions of 0.2 mM sodium formate, measured with a GCE modified with a sol-gel layer containing bovine heart cytochrome c and S. putrefaciens; the measurement was performed under stirring in PBS at room temperature by applying +0.15 V versus the Ag/AgCl reference electrode.

As a matter of comparison, a recent report involving S. putrefaciens CIP 8040 connected through an osmium redox polymer displayed a maximum current of 45 µA cm⁻² in the presence of 18 mM sodium lactate [24], which corresponds to a sensitivity of 2.5 mA M⁻¹ cm⁻², i.e., twice the sensitivity reported here with cytochrome c as electron mediator in a sol-gel film. Interestingly, the artificial immobilization in the same osmium polymer led to a sensitivity comparable to the one reported in our work. Some limitations arise from the forced immobilization of the bacteria in the sol-gel material in this kind of artificial biofilm, and further improvements in the immobilization strategies will be necessary to make sol-gel-encapsulated bacteria competitive with freely growing biofilms. One also has to consider, however, that freely growing biofilms take more than a day to deliver their maximum current, while the artificial biofilm reported here operates rapidly, which is a clear advantage for biosensing applications. To summarize this section, different ways of establishing an electrochemical communication between bacteria encapsulated in a sol-gel layer and a glassy carbon electrode have been investigated. In the absence of mediator, the bacteria display a poor ability to transfer electrons to the electrode surface, because of the insulating nature of the encapsulation matrix. The introduction of SWCNT on the glassy carbon surface or in the sol-gel layer does not improve this electron transfer significantly, although the bacteria kept a good activity, as demonstrated in the presence of a soluble mediator. SWCNT-(EtO)8-Fc and cytochrome c both demonstrated their ability to facilitate the electron transfer between S. putrefaciens encapsulated in the silica layer and the glassy carbon electrode. The covalent linkage of ferrocene to SWCNT through a poly(ethylene glycol) linker provided the flexibility suitable for an efficient interaction with S. putrefaciens. The best sensitivity (1 mA M⁻¹ cm⁻²) was obtained with cytochrome c as mediator, while the widest linear response versus concentration was observed with SWCNT-Fc.

Investigation of biologically-mediated bioelectrochemical communication

The following experiments were carried out with another strain, Pseudomonas fluorescens; this change was only motivated by practical reasons, the latter strain being simply found easier to cultivate than Shewanella putrefaciens. The electrochemical investigation of cytochrome c as a redox-active protein at electrodes [36] has become an interesting area of research. Indeed, cytochrome c has successfully mediated the ET between an electrode surface and some enzymes [37,38] without any chemical mediator, forming stable protein-protein systems for the construction of biosensors [39,40]. Up to now, this biological mediator had not been tested for wiring bacteria, especially those having membrane-bound cytochromes. Cytochrome c displays many important features, such as a harmless effect on bacterial viability and good electroactive properties, and it is of prime importance for the electrochemical communication within biofilms [16].
The encapsulation of both cytochrome c and bacteria in porous sol-gel matrices could mimic natural biofilms while avoiding their limitations, such as the insufficient interaction of the bacteria with the electrode material or the poor control over the bacterial strains involved. The silica matrix mimics, in these conditions, the exopolysaccharide "glue" that binds cells in a natural biofilm. The goal of this study is to evaluate an original strategy based on the ET between Pseudomonas fluorescens CIP 69.13 cells and an electrode, using bovine heart cytochrome c as biological mediator. The encapsulation of this bacteria-protein couple in a sol-gel film produces an artificial biofilm likely to be considered for electrochemical biosensors, biofuel cells or bioreactors. Glucose was used as a model analytical probe to investigate this novel system.

Electrochemistry of bovine heart cytochrome c in sol-gel

The immobilization of cytochrome c was achieved in situ by physical entrapment in the silicate film in the course of the sol gelification. The redox-active protein cytochrome c is likely to mediate the ET between other proteins and an electrode surface [36]. The first experiments were thus directed to the evaluation of the electrochemical characteristics of the entrapped cytochrome mediator in the sol-gel film. The encapsulated bacteria alone displayed a small signal in the same potential region (anodic peak located at +0.08 V versus Ag/AgCl), but a more complex feature could also be observed at -0.20 V (see inset in Fig. III-7A). This signal could be assigned to redox proteins present in the bacterial membrane. The electrochemical comparison of P. fluorescens with S. putrefaciens species (anodic and cathodic peaks located in the ranges of -0.1 to +0.1 V and -0.2 to -0.4 V, respectively, depending on the bacterial strain) [29,42] demonstrated the compatibility between the redox peaks of P. fluorescens and the peaks related to the membrane-bound cytochromes present in electroactive bacteria. These membrane-bound cytochromes are able to transfer electrons to the electrode surface and are probably involved in the electron transfer reactions to the cytochrome c introduced in the sol-gel film.

Feasibility of bacteria/protein communication in sol-gel

It is well known that viable cells in microbial biosensors and biofuel cells are likely to catalyze the oxidation of various substrates (such as glucose, lactose, methanol, etc.), producing electrons in a process called microbial respiration [2,43]. The role of artificial mediators is to enhance the ET between the microbial cells and the electrode. In order to study the feasibility of the bacteria/cytochrome c ET in the sol-gel layer, amperometric measurements were applied to detect the microbial communication with the electrode upon successive additions of glucose as a substrate for respiration (Fig. III-8). As shown in Fig. III-8, a significant amperometric response to glucose was obtained only in the presence of cytochrome c in the film (compare Fig. III-8B and III-8C). This large difference of intensity between the measurements reported in Figures III-8B and III-8C can be explained by the critical role of the cytochrome c mediator added into the film, which ensures an electron relay between the microorganism and the electrode surface, while in the absence of this mediator the extracellular ET in the sol-gel film only occurs for the bacteria located very near to the electrode surface. The mediation by cytochrome c probably occurs via electron hopping between the heme centers of the proteins [44], which obviously improves the microbial electrochemical communication with the electrode. A direct relation was found between increasing glucose concentration and current density, up to a maximal saturation value (see the inset in Fig. III-8C).
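Saturating calibration curves of this kind are commonly summarized by fitting an apparent Michaelis-Menten expression, I = Imax·C/(Km,app + C), which yields the maximal current density and an apparent affinity constant. A minimal sketch is given below; the data points are invented for illustration only and are not the values of Fig. III-8C.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, i_max, km):
    """Apparent Michaelis-Menten response of the artificial biofilm."""
    return i_max * c / (km + c)

# Hypothetical calibration data (illustration only):
c = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])        # glucose, mM
i = np.array([0.05, 0.11, 0.18, 0.26, 0.35, 0.40])   # uA cm^-2

(i_max, km), _ = curve_fit(michaelis_menten, c, i, p0=(0.5, 2.0))
print(f"Imax ~ {i_max:.2f} uA cm^-2, apparent Km ~ {km:.1f} mM")
# Km,app sets the upper end of the useful linear range of the biosensor.
```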
Comparable responses have been reported for other bacterial species in the presence of glucose as the substrate [5,27].

Communication disruption/competition

In order to further confirm the mechanism of the microbial communication with the electrode, some toxic chemicals were tested as inhibitors of the ET between P. fluorescens and the electrode surface, acting either by killing the bacteria [46] or by inhibiting cell respiration [45]. On the basis of an amperometric measurement performed upon addition of NaClO (200 mg L⁻¹), a rapid inhibition of the electrochemical communication was observed, with a decrease of the current response by approximately 80% within 1 minute (Fig. III-9A). In view of potential applications in the biosensor field, efforts should also be made to increase the sensitivity of the bioelectrode response. In this context, it was checked whether the electron transfer pathway between P. fluorescens, cytochrome c and the glassy carbon electrode surface could be further improved by the introduction of gold nanoparticles in the hybrid sol-gel layer.

Factors affecting the bioelectrochemical communication

Fig III-10. Amperometric current responses measured with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c, with respect to (A) the P. fluorescens cell density in the presence of 1 mM cytochrome c, (B) the cytochrome c concentration in the presence of P. fluorescens (9 × 10⁹ cells mL⁻¹ initial cell density), (C) the temperature and (D) the pH of the medium. Other conditions are similar to those described above.

In the presence of gold nanoparticles (i.e., 0.04 mg mL⁻¹ in the starting sol), the electrochemical response was doubled (Fig. III-11B). However, the introduction of more nanoparticles, i.e., 0.08 mg mL⁻¹ in the sol, led to a significant decrease of the electrochemical responses. This limitation could be related to a toxicity towards biological life, or to a modification of the composite structure, and hence of the charge transfer properties, that would lower the current values [28]. In an attempt to circumvent this limitation, the incorporation of multi-walled carbon nanotubes (instead of gold nanoparticles) was also evaluated as an alternative way of improving the electrochemical communication between the encapsulated bacteria and the electrode (using 0.4 and 0.8 mg mL⁻¹ in the sol), but no significant enhancement of the current responses was observed under these conditions. As a final control experiment, the introduction of gold nanoparticles in the absence of added cytochrome c in the gel showed only a small current response, which confirms the importance of cytochrome c as an electron shuttle between P. fluorescens and the electrode surface in the construction of the artificial biofilm for electrochemical biosensing. The sensitivity of the bioelectrode was found to be 0.38 × 10⁻³ mA mM⁻¹ cm⁻² of glucose in the presence of the optimal concentration of gold nanoparticles. Note that the concept of artificial biofilm was reported only recently in the literature, with planktonic bacteria wrapped with graphite fibers and encapsulated in polypyrrole [16], and with a biofilm covered by a sol-gel layer [47].
Both approaches relied on a facilitated EC between the bacteria and the electrode surface, through the accumulation of a flavin mediator in the polypyrrole or sol-gel layer, respectively. The strategy proposed in this thesis is original, providing a new approach for the elaboration of artificial biofilms to be applied in electrochemical studies or devices.

Conclusion

The poor electrochemical communication between bacteria encapsulated in a silica-based sol-gel film and a glassy carbon electrode can be enhanced using the soluble mediator Fe(CN)6 3-, or a ferrocene mediator chemically linked to single-walled carbon nanotubes. This "chemical" strategy, as tested with Shewanella putrefaciens, leads to rather comparable results for the two mediators, which would be indicative of a similar underlying mechanism. In agreement with other reports from the literature, these results can be interpreted by the ability of the mediators to enter the outer membrane of the bacteria, through the porins, to collect electrons [48]. Another strategy, of "biological" nature, involves cytochrome c co-immobilized with the bacteria in the silica-based gel. Cytochrome c, however, cannot enter the outer membrane of the bacteria and can only collect the electrons from the outer membrane.

Chapter IV. Sol-gel encapsulation of membrane-associated proteins for bioelectrochemical applications

Introduction

The performance of bioelectrochemical devices depends largely on the successful immobilization of the biomolecule in a physiologically active form [1]. In aqueous solutions, biological molecules such as enzymes rapidly lose their catalytic activity [2]. For this reason, the utilization of immobilized enzymes as active sensing elements for the fabrication of biosensors has gained considerable attention, owing to their stable inherent properties such as specific recognition and catalytic activity [1,2]. In general, the key challenge of immobilization techniques is to stabilize the enzyme without altering its catalytic activity. The limitation of adsorption techniques is the weakness of the bonding: adsorbed enzymes lack stabilization and leak easily into the solution. On the other hand, strengthening the bonds between the support and the enzyme by chemical linkage can inactivate the enzyme or strongly reduce its activity [2]. For these reasons, the physical entrapment of enzymes in porous carriers such as sol-gel materials has drawn great interest in recent years, owing to the simplicity of the encapsulation process, the chemical inertness and biocompatibility of silica, its tunable porosity, its mechanical stability and its negligible swelling behavior [2,3]. The sol-gel encapsulation approach has succeeded in maintaining the enzymatic activity for the fabrication of reagentless biosensors and bioreactors [4-8]. A diversity of works has reported the sol-gel encapsulation of enzymes as soluble proteins, but only a limited number concern membrane-associated proteins. Membrane-associated proteins are either completely (intrinsic membrane proteins) or partially (extrinsic membrane proteins) embedded within an amphiphilic bilayer lipid membrane (BLM). It is thus important to retain at least the essential bound lipids, and often the entire BLM, to keep the membrane-associated protein properly folded and functional. This makes such species particularly difficult to immobilize by conventional methods and highlights the need for protein-compatible immobilization methods [1,9].
Some membrane-associated proteins have been successfully encapsulated in sol-gel materials, as reported for photoactive membrane proteins [10], membrane-bound receptors [11], a two-ligand channel receptor [12] and cytochromes P450 [13,14]. The use of polyols such as glycerol or poly(ethylene glycol) with sol-gel materials has proven highly beneficial for the stabilization of entrapped membrane proteins [1]. The goal of this study is to evaluate the interest of such approaches for the application of membrane-associated proteins in bioelectrochemical processes, the aim being to immobilize these proteins in an active form on the surface of electrodes. Two systems have been considered. First, cytochrome P450 was immobilized in a sol-gel layer on pyrolytic graphite electrodes, with the idea to promote and maintain direct electron transfer reactions between the electrode and the protein (this work was done in collaboration with the group of G.M. Almeida, University of Lisbon). The second system was mandelate dehydrogenase, isolated as vesicles from Pseudomonas putida; this protein was kindly provided by Gert Kohring (Saarland University, Saarbrücken, Germany). Direct electron transfer reactions are not possible with this protein, which had to be wired using mediators, either soluble or immobilized on the electrode surface.

Direct communication of membrane-associated cytochrome P450

The basis of third-generation biosensors is the direct communication between electrodes and proteins, especially those containing heme groups such as cytochromes [15]. For biosensors based on DET, the absence of mediator is the main advantage: it provides a superior selectivity and allows operation in a potential window closer to the redox potential of the enzyme itself, making the device less prone to interfering reactions [16]. Cytochromes P450 belong to the group of external monooxygenases [17,18]. In nature, the iron-heme cyt P450s utilize oxygen and electrons delivered from NADPH by a reductase enzyme to oxidize substrates stereo- and regioselectively. Significant research has been directed toward achieving these events electrochemically [19]. Improving the electrochemical performance of cytochrome P450 enzymes is highly desirable, owing to their versatility in the recognition of different biological and xenobiotic compounds, for applications in biosensing [20] and biocatalysis [21,22]. Interfacing these enzymes with electrode surfaces and electrochemically driving their catalytic cycle has proven to be very difficult [23]. The protein has been immobilized directly on electrode surfaces by taking advantage of electrostatic interactions on graphite electrodes, by covalent linkage on gold surfaces, in lipid membranes, or using additional additives, i.e., surfactants, polymers or nanoparticles [23]. The application of sol-gel chemistry to the immobilization of CYP has rarely been considered. One report by Iwuoha et al. described the immobilization of P450cam in a complex hybrid matrix prepared from methyltriethoxysilane in the presence of didodecyldimethylammonium bromide (DDAB) vesicles, bovine serum albumin (BSA) and glutaraldehyde (GA) [13].
Direct electron transfer reactions between the electrode surface and the immobilized protein were described and applied to camphor or pyrene oxidation, and an improved stability was discussed in view of applications in organic solvents. However, DDAB is already known to stabilize P450 proteins, and the critical role of the inorganic precursor is not clearly demonstrated in this complex matrix including BSA and GA. Another report, by Waibel et al., described a screen-printed bienzymatic sensor based on sol-gel-immobilized acetylcholinesterase and a cytochrome P450 BM-3 (CYP102-A1) mutant [14]. P450 was stabilized in the sol-gel matrix and applied to the catalytic oxidation of parathion in the presence of NADPH and O2. The resulting molecule, i.e., paraoxon, could inhibit the enzymatic activity of acetylcholinesterase, which was monitored electrochemically. Different protocols for sol-gel immobilization were described, but no attempt at direct electron transfer reactions was made with the immobilized CYP.

Critical role of the sol-gel matrix for direct electron transfer electrochemistry

Figure IV-1A shows the electrochemical response of CYP simply adsorbed on the pyrolytic graphite electrode (PGE) surface. A small signal could be observed in some experiments, corresponding to the DET from the electrode to the CYP protein. This signal was not systematically observed, and when present it was of very small intensity. This is probably due to the small amount of protein presenting an orientation and conformation suitable for the DET reaction. PEG, which is known as a stabilizer for the electrochemistry of P450 at high temperature [24], was tested here to immobilize CYP1A2 (Fig. IV-1B). However, no signal was observed under these conditions. Immobilization in silicate brought a slight improvement of the measured electrochemical signal (Fig. IV-1C).

Electrocatalytic reduction of O2

CYPs are very sensitive to oxygen, which can be used to probe the DET reactions [25]. Figure IV-3A reports the electrochemical response of CYP1A2 immobilized in the sol-gel-PEG hybrid film to increasing amounts of oxygen. The experiment was initiated in an oxygen-free solution, deaerated with argon. Then, small volumes of air-saturated solution were introduced into the cell, leading to a gradual increase of the electrocatalytic response of the P450 protein due to oxygen reduction. The mechanism of oxygen reduction by CYP can involve the production of H2O2 species, with a detrimental effect on the protein activity [19]. This was verified here by further increasing the oxygen concentration in the solution through equilibration with the atmosphere (Fig. IV-3B). The first cyclic voltammogram was well defined, with a peak current intensity higher than 8 µA and a peak potential at -0.355 V vs Ag/AgCl. However, continuous cycling at 50 mV s⁻¹ led to a continuous degradation of both the peak current intensity and the peak potential: the last CV measured here was located at -700 mV vs Ag/AgCl and displayed only a 3 µA intensity.
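The amount of protein actually in electrical contact with the electrode can be estimated from the charge under the baseline-corrected voltammetric peak, via Gamma = Q/(nFA). The sketch below illustrates the arithmetic; the charge and electrode area are assumed values, not data extracted from Fig. IV-3.

```python
F = 96485.0  # C mol^-1, Faraday constant

def surface_coverage(peak_charge, n, area):
    """Electroactive protein coverage Gamma = Q / (n F A), in mol cm^-2,
    from the baseline-corrected charge under a voltammetric peak."""
    return peak_charge / (n * F * area)

# Hypothetical values (illustration only):
Q = 5e-8   # C, integrated charge under the heme Fe(III)/Fe(II) peak
n = 1      # one-electron heme couple
A = 0.07   # cm^2, electrode area (assumed)

gamma = surface_coverage(Q, n, A)
print(f"Gamma ~ {gamma:.1e} mol cm^-2")
# As an order-of-magnitude guide, a close-packed monolayer of a small
# heme protein is ~1e-11 mol cm^-2; coverages well below this support
# the view that only a fraction of the immobilized P450 communicates
# with the electrode.
```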
To summarize, the direct electrochemistry of P450 proteins in a water-based sol-gel thin film is described here for the first time. The electrochemistry of the heme center and the electrocatalytic reaction with O₂ are greatly improved in the presence of PEG. A careful comparison of the film compositions shows that only the combination of the inorganic matrix and the PEG additive allows the observation of the DET reaction and of the electrocatalytic activity of the immobilized P450. A second system has thus been studied here, involving another membrane protein, i.e. L-mandelate dehydrogenase. The immobilization of the vesicles containing this protein in the silica-based layer developed in this section was first considered, in order to evaluate the interest of this approach to stabilize the electrochemical response of a protein whose response is based on mediated electron transfer.

Mediated communication of membrane-associated L-mandelate dehydrogenase

L-Mandelate dehydrogenase (ManDH) could be applied to the production of enantiopure α-hydroxy acids as educts for the preparation of semi-synthetic antibiotics [26,27]. Their catalytic activity is expressed by several proteins, which can be soluble or membrane-bound and use different cofactors like nicotinamide adenine dinucleotide (NAD⁺) or flavin mononucleotide (FMN). The UniProt databank lists 63 ManDH-related proteins from bacteria and fungi, but there are many more α-hydroxy acid dehydrogenases expressing mandelate-oxidizing or benzoylformate-reducing activity. Here we chose the L-ManDH from Pseudomonas putida (EC 1.1.99.31), encoded by the mdlB gene, as a membrane-bound FMN-dependent protein [28]. The enzyme has been cloned and heterologously expressed in Escherichia coli as an active catalyst, and it was characterized in detail [29-31]. The enzymatic activity, determined by the reduction of 2,6-dichlorophenol indophenol, could be demonstrated with S-ManDH bound to membrane vesicles.

In the case of mediated communication, redox mediators shuttle electrons between the active site of the enzyme, i.e. containing the FMN cofactor, and the electrode. The main advantage of employing mediated electron transfer (MET) within a biosensor device is that the electron transfer process is independent of the presence of natural electron acceptors or donors [32]. Most dehydrogenases communicate directly with an external mediator rather than with the electrode surface. As mentioned above, they are interesting enzymes for electrosynthesis applications, especially for the production of rare sugars as building blocks for the pharmaceutical and food industries [33]. One of the main requirements for electrosynthesis is the stable immobilization of a large amount of active proteins on the electrode surface of the reactor. A huge diversity of soluble dehydrogenases has been encapsulated in sol-gel materials without altering their enzymatic activity and stability [2,34]. However, the sol-gel encapsulation of a membrane-associated dehydrogenase has not been reported yet. All the membrane-associated dehydrogenases studied in the literature rely on simple adsorption on the electrode surface for bioelectrochemical biosensors [35-37].

Feasibility of the encapsulation of L-mandelate dehydrogenase in a hybrid SiO₂/PEG film

The sol used for the encapsulation of L-mandelate dehydrogenase was prepared according to the protocol optimized previously for the encapsulation of cytochrome P450 studied in section A (SiO₂ 55 mM / PEG 6 wt% - pH 7). The L-mandelate dehydrogenase activity in the sol-gel film was measured in the presence of ferrocenedimethanol (FDM) as a soluble mediator in buffer medium, using electrochemical measurements (Fig. IV-4).

Fig IV-4. (A) Cyclic voltammetry for a sol-gel film containing L-mandelate dehydrogenase. The measurement was performed in phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature; the potential scan rate was 20 mV s⁻¹. (B) Amperometric measurements for (a) a bare GCE, (b) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE in the absence of FDM, and (c) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE.
The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature, by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.

The CV measured in the absence of mandelic acid only shows the typical electrochemical response of the ferrocene mediator, whose signal is controlled by diffusion into the sol-gel film (Fig. IV-4A). The addition of mandelic acid, from 0.5 to 3 mM, leads to a gradual increase in the anodic signal concomitant with a decrease of the cathodic signal, which is the signature of an electrocatalytic process. This electrocatalytic response was further tested under hydrodynamic conditions, using chronoamperometric measurements at an applied potential of +0.3 V (Fig. IV-4B). The comparison with the bare GCE (curve "a" in Fig. IV-4B) and with the GCE modified with the ManDH vesicles in sol-gel but in the absence of electrochemical mediator in solution (curve "b" in Fig. IV-4B) confirms that the electrochemical response was due to the enzymatic activity, and that it could be observed only in the presence of the mediator.

Encapsulation of L-ManDH using other sol-gel protocols

The experiments made in the presence of P450 have shown that some components like PEG could be necessary for the stabilization of the proteins or to favor a conformation suitable for the direct electron transfer reaction. Two different conditions have been tested: 1) the same sol as in section 3.1, but in the absence of PEG, and 2) a sol based on TEOS, prepared at slightly acidic pH (5), in which a small amount of EtOH was produced during the hydrolysis reactions (Fig. IV-7). Surprisingly, all systems led to quite comparable responses, showing an increasing current upon addition of increasing amounts of mandelic acid, measured by CV or by chronoamperometry under hydrodynamic conditions. The vesicles as prepared from E. coli are rather robust and were efficiently encapsulated using these different protocols. The better stability of the ManDH response compared to P450 could be due to the lower requirements of MET by comparison with DET. Indeed, DET requires a suitable conformation to be achieved by the protein on the electrode surface in order to operate the final electron transfer reaction. At the opposite, in the MET mechanism, the mediator can diffuse through the gel to the protein located in the vesicle. Here the only requirement is that the vesicle should not be dissolved in the sol-gel environment, which is obviously the case.

Conclusion

The electrochemistry of membrane-associated proteins can be promoted in a sol-gel environment. Direct electron transfer (DET) has been found more sensitive to this environment than the mediated electron transfer (MET) reaction. DET requires the proteins to be located at a close distance to the electrode surface, in a conformation that is suitable to operate this transfer of electrons. We suppose that a fraction of the total P450 CYP1A2 proteins introduced in the sol-gel material are not able to react with the electrode and that only those located at the film/electrode interface lead to a measured current. Nevertheless, the sol-gel material and the presence of PEG clearly allowed the electrocatalytic signal for oxygen reduction. Further experiments should involve the electrochemical detection of other substrates of the P450 proteins, e.g. propranolol or paracetamol. Mediated electron transfer does not display the same requirements.
Proteins located far from the electrode surface can transfer their electrons to the electrode provided the mediator can diffuse in the silica gel to the redox center of L-mandelate dehydrogenase, react with the FMN cofactor and then react again with the electrode. The immobilization of the vesicles with ManDH was found rather insensitive to the sol-gel route used for the immobilization on the electrode surface. The benefit of the encapsulation has been shown for electrochemical measurements under convection: the electrochemical response remained stable for at least 15 hours, while no response was observed with the vesicles simply dropped on the glassy carbon electrode surface. These experiments were all performed using a mediator in solution, i.e. ferrocenedimethanol. We could finally show that a mediator, in this case an osmium polymer wrapped on carbon nanotubes, could be co-immobilized with the protein in a reagentless configuration. Further studies could involve the implementation of such biohybrid materials in electrochemical reactors for electroenzymatic synthesis.

The effect of the encapsulation of bacteriophage ΦX174 in silica-based sol-gel matrices on its infectivity has been studied. The purpose of this study is to investigate strategies to protect the bacteriophage against deactivation. The encapsulation was achieved via an aqueous-based sol-gel route, similar to those used in Chapters 3 and 4, but the protocol is applied here to the preparation of small monoliths. The materials were aged and then introduced into water for the release of the virus and the assessment of its infectivity. Organic and polymeric additives were introduced to design a protective environment for the encapsulation of the sensitive bacteriophage. The efficiency of the sol-gel encapsulation has been evaluated by studying the viral infectivity as a function of the lysis plaques formed in the presence of Escherichia coli.

Introduction

Bacteriophages were discovered by Félix d'Hérelle in 1917. They have the ability to selectively infect target bacteria, allowing, e.g., the detection and control of foodborne diseases [1,2]. They can also find applications in biosensing, as molecular biology tools, or for drug delivery [3]. Upon infection of a host bacterium, a bacteriophage utilizes the cell machinery for the amplification of new virions, thereby causing the death of the host bacterium. Bacteriophages can be produced in large quantities, they display a better stability than antibodies, and they are able to recognize only living bacteria [2]. Research on bacteriophages as antibacterial agents was important before the discovery of antibiotics, but then decreased, remaining active only in a few locations, notably in Georgia. The recent emergence of bacterial strains that have developed resistance to existing antibiotic treatments has refocused research on this biological antibacterial treatment. Maintaining the infectious character of the virus is also crucial when considering its use in medical applications [4]. Bacteriophages can be produced, concentrated and stored in aqueous solutions at low temperature, showing in these conditions a slow decay of the infectivity with time. Immobilization in a protective matrix is another approach that could be used to maintain the infectivity of the virus in conditions that would be suitable for specific applications [3].
As discussed in previous chapters, sol-gel encapsulation technology has already been successfully used to preserve the viability and activity of many encapsulated biological species such as plant cells [5], enzymes [6,7], bacteria [8,9], algae [10] and yeasts [11]. Several examples of the encapsulation of viruses in sol-gel matrices can be found in the literature, with the goal of using this biological entity as a template during the gelification of the inorganic matrix [12,13], but it is important to consider that these studies did not address the infectious character of the immobilized virus. Only recently was this approach used to encapsulate an adenovirus with the goal of extending the release time from implants [4]. One has to note that in this example the gel containing the virus was stored in aqueous solution before implantation in mice, conditions that are favorable for keeping a high ratio of infectious virus. The present approach involves the polymerization of aqueous silicate monomers in the presence of silica nanoparticles for the encapsulation of the bacteriophage in a stable bio-composite matrix. Various additives were introduced in the sol with the goal of improving the protection of the immobilized bacteriophage, especially after drying of the gel. After aging, the gel was reintroduced in solution for the phage release and the infectivity of ΦX174 was analyzed.

Assessment of viral infectivity

Principle of the infectivity determination for the encapsulated bacteriophage

The infectivity of encapsulated ΦX174 has been tested with the single-agar-layer plaque assay [17]. The protocol is reported in Figure V-2. First, ΦX174 was encapsulated in a sol-gel monolith and aged at +4 °C. The monolith containing the bacteriophage was then dissolved in water to release free ΦX174, which was subsequently entrapped together with E. coli CN in an agar plate and incubated overnight at 37 °C to allow the interactions between ΦX174 and E. coli and the formation of lysis plaques in the agar plate.

Fig V-2. Schematic representation of the bacteriophage ΦX174 encapsulation and of the determination of its infectivity against E. coli CN.

As shown in the picture reported in Figure V-2, the lysis plaques can be counted, this parameter being used to compare the influence of aging and the impact of the additives on the viral infectivity after release in solution (plaque-forming units per mL of solution, UFP mL⁻¹).

Encapsulation in a semi-dry silica monolith

The compatibility of the aqueous sol-gel route (sodium silicate solution and Ludox nanoparticles), i.e. not involving alcohol [18], with bacteriophage encapsulation was first studied by dispersing the bacteriophage in a saturated solution of sodium silicate at pH 7. The infectivity was then tested for 6 days (Fig. V-3). The infectivity of ΦX174 against the bacteria remained approximately constant (250-300 UFP mL⁻¹) during this time, which confirms that the sol was harmless to the viral infectivity and could be applied to the encapsulation of bacteriophages. By contrast, the infectivity measured for gels prepared with poly(ethyleneimine) (PEI) or poly(diallyldimethylammonium chloride) (PDDA) decreased from 180 UFP mL⁻¹ to 10 UFP mL⁻¹ and 1 UFP mL⁻¹, respectively, after 6 days at +4 °C. The introduction of PEI into the sol-gel thus apparently improved the protection against the rapid deactivation of the viral infectivity over one week, while PDDA did not.
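The titers quoted above follow from a short calculation once the lysis plaques have been counted: the plaque count is divided by the plated volume and corrected by the dilution factor, averaging over replicates. A minimal Python sketch is given below; the plaque counts and dilution factor are hypothetical examples, not measurements from this work.

import numpy as np

def titer_ufp_per_ml(plaques, volume_ml, dilution_factor):
    # UFP/mL = mean plaque count / plated volume, corrected for dilution
    return np.mean(plaques) * dilution_factor / volume_ml

# Hypothetical duplicate counts for a sample plated at 1 mL, undiluted
counts = [252, 287]
print(titer_ufp_per_ml(counts, volume_ml=1.0, dilution_factor=1))  # ~270 UFP/mL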
The beneficial effect of PEI could be explained by the favorable electrostatic and H-bond interactions between PEI and the bacteriophage capsids, protecting the bacteriophage from the surrounding sol-gel material, as reported before for some redox proteins [19].

Encapsulation in a dried silica monolith

In a second set of experiments, the bacteriophage ΦX174 was encapsulated in a monolith of hybrid sol-gel and stored at 4 °C for 6 days in a Petri dish without lid, allowing a complete drying of the material. A TEM analysis of the resulting material was performed in order to evaluate the influence of the additives on the gel texture, but no obvious influence could be observed, as the texture of the material was dominated by the silica nanoparticles of the Ludox (Fig. V-6). No bacteriophage could be observed, due to their relatively low concentration. In the present state of our investigation, we suppose that the beneficial influence of PEI and glycerol on the bacteriophage activity is due to interactions established between the additives and the viral capsid, preventing a complete deactivation in the harsh conditions of the study. Note that the composition of the gel could also affect the liberation of the bacteriophages from the entrapped to the free active state. To the best of our knowledge, this work is the first report on the preservation of bacteriophage infectivity upon encapsulation in a sol-gel material.

Conclusion

The bacteriophage ΦX174 has been used as a model for a systematic investigation of the conditions favoring infectivity after encapsulation in a sol-gel inorganic material. The encapsulation was performed by mixing a suspension of the bacteriophage into the starting sol, followed by a rapid gelification. The presence of glycerol and polyethyleneimine proved essential to significantly modify the viral infectivity against bacteria in a dry environment for 6 days, the better protection being obtained with the polyelectrolyte. The favorable electrostatic and H-bond interactions between the bacteriophage and the polymer could explain this better resistance to inactivation. This work could provide more opportunities for applications of immobilized bacteriophages in antibacterial protection and in medicine.

Conclusion and perspectives

The main topic of this thesis was the immobilization and the characterization of the activity of bacteria in silica-based sol-gel films. Electrochemistry was successfully used to induce the immobilization of Escherichia coli strains in a hybrid sol-gel layer, by using the controlled electrolysis of a starting sol containing the silica precursors, polyethylene glycol, chitosan and trehalose. The bacteria were either adsorbed on the electrode surface before the sol-gel electrodeposition or codeposited during the electrolysis. We could show that the membrane integrity of the bacteria was kept for more than one month and that the bacteria were able to express luminescence or fluorescence in response to their environment. Electrochemistry was then considered for the communication with Shewanella putrefaciens and Pseudomonas fluorescens strains in the presence of sodium formate and glucose as electron donors, respectively. Both strains are able to transfer electrons from their outer membrane. The immobilization of these bacteria strongly limits the electron transfer reactions from the bacteria to the electrode surface because of the insulating character of the silica matrix.
Strategies have been proposed to increase this electron transfer pathway in the gel, using either ferrocene mediators immobilized on carbon nanotubes or the co-immobilization of a natural mediator, i.e. cytochrome c, in the silica gel together with the bacteria. The two strategies lead to very different behaviors from the point of view of the sensitivity to the addition of the substrate, as ferrocene is able to cross the outer membrane of the bacteria, through the porins, and to collect electrons in the inter-membrane space. At the opposite, cytochrome c only collects electrons from the outer membrane. The implication of cytochrome c within a silica-based gel entrapping bacteria mimics in some respects the strategies developed in natural biofilms to favor electron exchanges with a solid substrate, mineral or electrode, and this system can for this reason be defined as an artificial biofilm. The experience developed on the immobilization of bacteria was further applied to the immobilization of membrane-associated proteins for bioelectrochemistry. Two different proteins have been evaluated, P450 (CYP1A2) and L-mandelate dehydrogenase (ManDH). Both systems display electrochemical activity after immobilization, by direct electron transfer and mediated electron transfer, respectively. We observed that the electrochemistry of P450 (CYP1A2) was more sensitive to the sol-gel environment than that of L-ManDH, a result that can be explained by the nature of the electron transfer reaction between the protein and the electrode. Nevertheless, in both systems, the immobilization in the sol-gel matrix improved the intensity and the stability of the electrocatalytic response. Finally, some preliminary studies on the encapsulation of a bacteriophage in a hybrid sol-gel inorganic material have shown that the infectivity of this virus could be improved by a careful control of the material composition. The presence of glycerol and polyethyleneimine proved essential to significantly modify the viral infectivity against bacteria in a dry environment for 6 days, the better protection and/or release being obtained with the polyelectrolyte. The favorable interaction between the negatively charged bacteriophage and the positively charged polymer could partially explain this better efficiency (resistance to inactivation). This work could provide more opportunities for applications of immobilized bacteriophages in antibacterial protection and in medicine.

In an attempt to propose some perspectives to this thesis, we can first note that many different systems have been studied, each of them providing new opportunities for continuation. These future developments concern the application of electrochemically-assisted bacteria encapsulation, further development of the artificial biofilm, the application of immobilized membrane-associated proteins, and new developments on bacteriophages. The reader has certainly noticed that the electrochemically-assisted bacterial encapsulation developed in Chapter II was not applied to the electrochemical studies reported in Chapter III. These two studies were developed in parallel and the link between the two approaches was not made here. The difficulty is that the film composition suitable for electrodeposition is different from the one used in the electrochemical communication studies, so optimization is necessary for the application of electrodeposited biohybrid films to electrochemical devices.
Artificial biofilms appear as a promising route for the controlled elaboration of bioelectrodes. Such a concept has some drawbacks, as the immobilization hinders the natural growth of the bacteria, but it also provides distinct qualities. It allows a careful control of the bacterial strain, and it would allow fundamental studies in which several strains are mixed together in a controlled manner in order to study their synergy in bioelectrochemical reactions. Biofilm growth needs time; the artificial biofilm would therefore be a valuable strategy for the rapid elaboration of microbial devices for electrochemical applications such as biosensors or bioreactors.

The immobilization of membrane-associated proteins in sol-gel materials could be a valuable approach for increasing the stability of the electrochemical response, which could be further tested in electrochemical reactors for electroenzymatic synthesis. The sol could be further optimized to improve as much as possible the stability of the electrocatalysis without affecting the enzymatic activity. Other membrane proteins, such as hydrogenases, are of interest for energy conversion, and it could be worthwhile to evaluate sol-gel routes for the immobilization of this latter class of proteins. Finally, studies on bacteriophage immobilization in sol-gel materials for controlled release are in their infancy. Additional studies should consider more deeply the effect of the additives on the release of infective bacteriophages from the silica matrix and the effect of their concentrations on the viral infectivity. Further studies could consider the potential application of such biomaterials in a form suitable for controlled release as an antibacterial treatment.

Methods and Techniques

To conduct the work presented in this thesis, several chemicals and techniques have been used to prepare and characterize the active films. This chapter describes the sol-gel precursors, the polymeric and organic additives, the chemical and biological mediators and the conducting nanomaterials used for the fabrication of the biocomposites. The preparation protocols of the biological species, i.e. bacteria, bacteriophage and membrane-associated enzymes, are described in detail. A series of protocols used to encapsulate biomolecules and microorganisms in pure inorganic and organic/inorganic hybrid sol-gels are also described. In addition, the analyses used to investigate the viability and bioactivity (electrochemical measurements, thickness measurements, membrane integrity and metabolic activity) are presented, as are the techniques and analytical methods used to support these studies.

Chemicals and biological species

Additives

A series of additives with different physical properties have been used in this work. These additives were introduced either for the adsorption of bacteria on the electrode surface or for the optimization of the hybrid sol-gel materials. The effects of the introduction of additives into the sol-gel for bioencapsulation are also investigated later. Table 3 provides some information on the microorganisms used:

Escherichia coli K12 derivative C600: an aerobic Gram-negative, rod-shaped bacterium. Optimal growth occurs at 37 °C.
Bacteriophage ΦX174: an isometric virus having a single-stranded circular DNA and an icosahedral shape of 20 triangular faces and 12 vertices.
Optimal growth occurs at 37 °C in the presence of host bacteria and nalidixic acid (25 mg/mL).

Redox mediators

The redox mediators used in this work are listed with their formula, molecular weight (g mol⁻¹) and supplier.

Preparation of nano-material suspensions

1) MWCNT-COOH: 2 mg of MWCNT-COOH were dispersed in 1 mL of deionized water. The suspension was sonicated for 10 min and then stirred overnight at room temperature.
2) SWCNT-EtO: 2 mg of SWCNT were dispersed in 1 mL of chitosan solution. The suspension was sonicated for 10 min and then stirred overnight at room temperature.
3) SWCNT-(EtO)₈-Fc: the chemical modification of SWCNT with a poly(ethylene glycol) linker and ferrocene was performed by the SRSMC laboratory [3,4]. The dispersion in chitosan was carried out using the same protocol as for SWCNT-EtO.
4) MWCNT-Os: 2 mg of MWCNT-COOH were first dispersed in 1 mL of deionized water by sonication. Then, a mixture of 150 µL of osmium polymer P0072 and 250 µL of dispersed MWCNT-COOH was sonicated for 1 h and stirred overnight at room temperature. Finally, the MWCNT wrapped with the osmium polymer P0072 were dispersed in chitosan 0.2 wt% (1:1 ratio) [5].
5) Gold nanoparticles: HAuCl₄ was dissolved in 20 mL of deionized water to a final concentration of 1 mM and heated. When the solution started boiling, trisodium citrate dihydrate (1 wt%, 2 mL) was added. The solution was stirred and boiled until its color turned brick-red. The final concentration of gold nanoparticles was estimated at 0.2 mg mL⁻¹ [6].

Electrodes

Glassy carbon electrodes and ITO plates were used as working electrodes (see the electrochemical measurements below).

Preparation of sol-gels for bioencapsulation

We have explored various starting sol compositions for bioencapsulation (Table 5).

Aqueous sol B [7]: a sodium silicate solution (0.27 M, 13 mL) was mixed with LUDOX HS-40 (40 wt%, 7 mL) and 5 wt% of pure glycerol. HCl (1 M, 2.4 mL) was added to adjust the pH to 7. Finally, this sol was diluted 10 times with deionized water before further use.
Sol C [8]: the starting sol was prepared by mixing 4 mL of TMOS with 0.5 mL of 0.1 M HCl and 2 mL of distilled water. The mixture was sonicated (Transsonic T 080) for 10 min. The acidified sol was then diluted to a concentration of 1 M with deionized water and evaporated to a weight loss of 3.52 g of methanol. The alcohol-free sol was stored at 4 °C until further use.
Sol D: a sodium silicate solution (55 mM, 20 mL) was mixed with PEG (6 wt%). HCl (1 M, 0.7 mL) was added to adjust the pH to 7.
Sol E: the starting sol was prepared by mixing 2.25 mL of TEOS with 2.5 mL of 0.01 M HCl and 2 mL of distilled water. The mixture was stirred overnight at room temperature. NaOH (1 M, 0.7 mL) was added to adjust the pH to 5. Finally, this sol was diluted 3 times with deionized water before further use.

Table 5. Composition of the different starting sols used in this work.

Microorganism cultures

Bacteria

All the strains used in the encapsulation experiments are from our laboratory collection. All the bacterial strains (except E. coli CN) were streaked on TSA plates from cultures stored at -80 ºC on beads supplemented with glycerol. When needed, a 0.1 mL overnight culture from a single colony was inoculated in 200 mL of TSB and incubated at the optimal growth temperature of each strain under stirring (160 rpm) for 24 h (stationary growth phase). The S. putrefaciens strain, however, requires stirring for 48 h (stationary growth phase) in order to reach oxygen depletion inside the growth medium, whereupon the strain starts to produce cytochromes to adapt to the anaerobic medium.
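Several of the sol preparations above rely on fixed dilution factors (sol B diluted 10 times, sol E diluted 3 times). As a minimal illustration of the underlying C1V1 = C2V2 bookkeeping, the Python helper below computes the volume of diluent required; the function name and the example volumes are illustrative, not values taken from the protocols.

def diluent_volume_ml(stock_volume_ml, dilution_factor):
    # Water to add so that final volume = stock volume x dilution factor
    return stock_volume_ml * (dilution_factor - 1)

# Diluting 2 mL of sol B ten times would require 18 mL of deionized water
print(diluent_volume_ml(2.0, 10))  # 18.0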
Ampicillin sodium salt (100 µg/mL) and kanamycin (25 µg/mL) were added to the growth media for the cultivation of E. coli MG1655 pUCD607 and E. coli MG1655 zntA-GFP, respectively, in order to allow the growth of the genetically-modified bacteria only. Bacterial growth (turbidity) was measured by monitoring the optical density at 600 nm. The culture was harvested by centrifugation at 5000 g for 10 min at room temperature. The pellet was washed twice with 1 mM KCl and then suspended in 1 mM KCl. This washing procedure eliminates nutrients so as to avoid any bacterial growth in the sol-gel. Finally, the bacterial suspension in KCl (1 mM) could be used directly for sol-gel bioencapsulation or stored at +4 °C for a few days. The strain E. coli CN was cultured following a different protocol in order to adapt its physiological properties to the exponential growth phase. Bacteria were stored at -80 ºC on beads supplemented with glycerol. When needed, a 0.1 mL overnight culture from a single colony was inoculated in 200 mL of MSB Nal medium and incubated at 37 ºC under stirring (160 rpm) for 3 h (exponential growth phase). Bacterial growth was measured by monitoring the optical density at 600 nm.

Bacteriophage

The bacteriophage ΦX174 was produced according to ISO 10705-2:2000(F). A suspension of ΦX174 (10⁷ PFU/mL final concentration) was incubated in MSB Nal medium containing E. coli CN cells (adapted to the exponential growth phase) for 5 h at 37 °C. The culture was harvested by centrifugation at 3000 g for 20 min at room temperature. The supernatant was then filtered through a membrane (0.22 µm pores), divided into aliquots and stored at 4 °C in the dark until further use.

Sol-gel bioencapsulation protocols

6.1. Bacteria encapsulation by electrodeposition (Chapter II)

The sol used for the EAD protocols is sol C described in the previous section, without any additives. The introduction of additives enhanced the deposition and the stability of the sol-gel film, in addition to its protective role for the bacteria. EABE(1S): the bacteria, i.e. E. coli C600, E. coli MG1655 pUCD607 or E. coli MG1655 zntA-GFP (2 × 10⁹ cells/mL), were introduced directly into the sol before electrolysis. EABE(2S): this protocol is divided into 2 steps, the adsorption of the bacteria on the ITO plate surface with the aid of the polycationic PEI polymer, followed by the electrodeposition of the hybrid sol-gel film through/over the bacterial layer (the scheme is illustrated later).

6.2. Bacteria encapsulation by drop-coating (Chapter III)

The sol used for the drop-coating protocol is sol B described in the previous section. The protocol is divided into two branches, one-layer and two-layer encapsulation.

6.3. Bacteriophage encapsulation (Chapter V)

The sol used for the monolith bioencapsulation is sol A described in the previous section. Organic additives were introduced individually into sol A in each experiment, i.e. PEI (0.2 wt%), glycerol (10 wt%), trehalose (10 wt%), PDDA (5 wt%) or a trehalose-glycerol mixture (5 wt% - 5 wt%), as final concentrations in the sol-gel monolith. 2 mL of HCl solution (1 M) was added to raise the pH to 7.4. Then, ΦX174 (1 mL, 6 × 10⁷ UFP/mL) was added to the sol and/or hybrid sol. Finally, aliquots (15 µL) of the mixture containing sol and bacteriophage were aged at 4 °C for 6 days in humid air or in harsh dry air for further analysis. Note that a control experiment was performed in the absence of any additive.

Electrochemical measurements (cyclic voltammetry and chronoamperometry) were performed using glassy carbon electrodes (GCE) as working electrodes, with a platinum counter electrode and an Ag/AgCl reference electrode.
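The cell densities quoted in these protocols derive from the OD600 monitoring described above. As an illustration only, a rough conversion for E. coli assumes about 8 × 10⁸ cells/mL per OD600 unit; this factor is strain- and instrument-dependent and is an assumption, not a calibration from this work.

def cells_per_ml(od600, factor=8e8):
    # 'factor' is an assumed calibration constant (~8e8 cells/mL per OD unit)
    return od600 * factor

# An OD600 of 2.5 would correspond to ~2 x 10^9 cells/mL with this factor
print("%.1e" % cells_per_ml(2.5))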
Amperometric measurements (Chapter II) were performed using ITO plates as working electrodes, with a platinum counter electrode and an Ag/AgCl reference electrode, for the electrodeposition.

Methods of analysis

Atomic force microscopy (AFM)

Atomic force microscopy experiments were carried out using an MFP3D-BIO instrument (Asylum Research Technology, Atomic Force F&E GmbH, Mannheim, Germany). Triangular cantilevers with square pyramidal tips (radius of curvature 15 nm) were purchased from Atomic Force (OMCL-TR400PSA-3, Olympus, Japan). The spring constant of the cantilevers, measured using the thermal noise method, was found to be 20 pN/nm. The films were imaged in AFM contact mode, and the images were collected at a fixed scan rate of 1 Hz with a resolution of 512 × 512 pixels. Experiments were performed in air at room temperature. The film was scratched and its thickness was measured from the cross-section profiles of the AFM topographical images. Values were calculated by statistical analysis, as averages of 5 measurements obtained from several regions of the film.

Infectivity analysis

The infectivity of encapsulated ΦX174 was tested with the single-agar-layer plaque assay [16]. The aliquots of encapsulated ΦX174 samples were dissolved in sterile water (50 mL) at room temperature for testing of the preserved infectivity against bacteria. 10 mL of MSA Nal soft medium (MSB containing 10 g/L agar) were steamed to liquefy and adjusted to 55 °C in a water bath. 1 mL of E. coli CN (adapted to the exponential phase) and 1 mL of the dissolved aliquot samples were introduced into the melted MSA. The final mixture was gelified in a Petri dish, which was incubated overnight at 37 °C, and the lysis plaques were counted by the naked eye. Values were calculated as averages of 2-3 samples. Note that a control test was performed using the same protocol for a ΦX174 suspension dispersed in a saturated solution of silica.

Alkoxide-based sols are rather used for the encapsulation of proteins and enzymes, which are not highly sensitive to alcoholic byproducts, while silicates and colloidal silica are more adapted to the encapsulation of sensitive microorganisms such as bacteria and viruses [26]. The sol-gel process consists of hydrolysis, condensation, gelation, drying and aging steps that lead to gels or xerogels, as summarized in Figure I-2. The H₂O (or alcohol) expelled from the reaction remains in the pores of the SiO₂ network. When sufficient interconnected Si-O-Si bonds are formed in a region, they respond cooperatively as colloidal (submicrometer) particles, i.e. a sol [25] (Eq I-2, condensation of silica precursors). The gel morphology is influenced by the temperature, the concentration of each species (especially the hydrolysis ratio R = [H₂O]/[Si(OR)₄]) and mainly the pH. Hydrolysis and condensation occur simultaneously.

Fig I-3. Schematic representation of coating processes via solvent evaporation: A) dip-coating; B) spin-coating; C) drop-coating (adapted from reference [28]).
Fig I-4. The principle of the electrochemically-assisted generation of sol-gel on the electrode surface.
Fig I-5. Possible interactions between chitosan and silicates (adapted from reference [57]).
Fig I-6. Chemical structures of the components of the sol-gel based hybrid materials, i.e. A: tetramethyl orthosilicate; B: glycerol; C: poly(ethylene glycol); D: chitosan; E: trehalose dihydrate; F: poly(ethyleneimine); G: sodium silicate with colloidal silica.
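The hydrolysis ratio R mentioned above can be computed directly from a sol recipe. The sketch below does this for the sol C recipe given earlier (4 mL TMOS, 0.5 mL of 0.1 M HCl, 2 mL of water); the TMOS density (about 1.03 g/mL) and molar mass (152.22 g/mol) are standard literature values, i.e. assumptions not taken from this work.

# Hydrolysis ratio R = [H2O]/[Si(OR)4] for the sol C recipe
TMOS_DENSITY_G_PER_ML = 1.03   # literature value (assumption)
TMOS_MW = 152.22               # g/mol, literature value (assumption)
WATER_MW = 18.02               # g/mol

mol_tmos = 4.0 * TMOS_DENSITY_G_PER_ML / TMOS_MW   # 4 mL of TMOS
mol_water = (0.5 + 2.0) / WATER_MW                 # 2.5 mL of aqueous phase (~1 g/mL)
print("R = %.1f" % (mol_water / mol_tmos))         # about 5

An R value of about 5 means the water content is slightly above the 4 equivalents nominally needed to fully hydrolyze each TMOS molecule.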
The technological advancements in the immobilization of biological species over several decades have resulted in a revolution in the use of biological objects for the selective extraction, delivery, separation, conversion and detection of a wide range of chemical and biochemical reagents.

Fig I-7. Simplified illustration showing the different types of proteins used in the various immobilization formats. Soluble proteins are located within the hydrophilic intracellular compartment of the cell. Intrinsic trans-membrane proteins span the cellular phospholipid bilayer membrane, whereas extrinsic membrane proteins are partially embedded within the membrane and are exposed to either the intracellular or extracellular regions (adapted from reference [74]).
Fig I-8. Illustration of some typical strategies used for the immobilization of membrane proteins. (A) Direct physical adsorption of the lipid bilayer and membrane protein onto a solid support. (B) Adsorption or covalent attachment of the lipid bilayer onto a solid support, with an intermediate layer of hydrated polymer. (C) Immobilization of the lipid bilayer and membrane protein in the pores of a solid support. (D) Immobilization of a phospholipid vesicle through covalent or avidin-biotin bioaffinity attachment (adapted from reference [66]).
Fig I-9. Photosynthesis of cyanobacteria encapsulated in porous silica (adapted from reference [83]).

The promoter senses the presence of the target molecule(s) and activates the transcription of the reporter gene; the subsequent translation of the reporter mRNA produces a protein or light as a detectable signal (Fig. I-10) [93,94]. Genetic engineering has generated a diversity of cells for designing bioreporters, such as bacteria modified with zntA.

Fig I-11. Typical biosensor setup (adapted from Borgmann et al., Advances in Electrochemical Science and Engineering: Bioelectrochemistry).

Since the efficiency of biofuel cells depends on the bacterial density, most biofuel cells have been operated with microbial biofilms (Fig. I-12) [102,103].

Fig I-12. Schematic illustration of a microbial fuel cell (adapted from reference [106]).

Viruses have been kept encapsulated and infective for months [113]. In addition, the encapsulation of viruses and bacteriophages in sol-gel materials has been developed for biotechnologies such as ordered mesoporous silicas with pores designed according to the size and morphology of the biomaterials (Fig. I-13) [114,115].

Fig I-13. Mesoporous template synthesis of hierarchically structured composites (adapted from reference [114]).

A further objective of this work concerns the electrochemical communication between bacteria immobilized in a sol-gel layer and the electrode material.
The immobilization of these bacteria could limit the electron transfer reactions to the electrode surface because of the insulating character of the silica matrix; strategies to favor the electron transfer reactions will thus be implemented for the preparation of an artificial biofilm (Chapter 3). The expertise developed for the electrochemistry of whole cells will then be applied to the electrochemistry of membrane-associated proteins, i.e. P450 and mandelate dehydrogenase, with the goal of improving the stability of the electrocatalytic response (Chapter 4). Finally, a last topic has been investigated, which concerns the infectivity of bacteriophages after their encapsulation in a silica-based sol-gel monolith (Chapter 5).

Chapter II. Electrochemically-assisted bacteria encapsulation in a thin hybrid sol-gel film

In this chapter, a novel method, based on the electrochemical manipulation of the sol-gel process, was developed to immobilize bacteria in thin hybrid sol-gel films. This enabled the safe immobilization of Escherichia coli on electrode surfaces. E. coli strains C600, MG1655 pUCD607 and MG1655 pZNTA-GFP were incorporated and physically encapsulated in a hybrid sol-gel matrix, and the metabolic activity and membrane integrity of the bacteria were assessed as a function of the aging time in the absence of nutrients at +4 °C or +80 °C. Live/Dead BacLight bacterial viability analysis detected by epifluorescence microscopy indicated a good preservation of the E. coli C600 membrane integrity in the sol-gel film. The presence of the chitosan, trehalose and polyethylene glycol additives was shown to strongly improve the viability of the E. coli cells in the electrodeposited matrix for 1 month after encapsulation. Finally, the bioluminescent activity of E. coli MG1655 pUCD607 was preserved by approximately half of the cells present in such composite films.

The sol-gel transition is catalysed by the local variation of pH that can be induced by the electrolysis of the sol. In this work, a potential of -1.3 V versus Ag/AgCl (3 M) was applied to the working ITO electrode.

Fig II-1. AFM profiles measured on films composed of (a) TMOS, PEG and chitosan, (b) TMOS and PEG, (c) TMOS and chitosan, (d) chitosan and PEG. N.B. E = -1.3 V, deposition time = 30 s.
Fig II-3. AFM profiles for the electrodeposition of the optimal hybrid sol-gel films at varying deposition times: (a) 10 s, (b) 20 s, (c) 30 s. N.B. E = -1.3 V.

AFM characterization also provides additional information concerning the gradual encapsulation of E. coli C600 (cell density 4 × 10⁶ cells mL⁻¹, bacteria of about 500 nm diameter) with the electrodeposition time (Figures II-3 and II-4). Bacterial cells were still visible on the planar substrate after the deposition of an 80-nm thick film (Figs. II-3a and II-4a) but were gradually masked after 20 s (Figs. II-3b and II-4b) and 30 s (Fig. II-3c) of deposition. Note that the roughness of the film (measured after drying) also strongly increased when increasing the film thickness. A high short-term viability was observed (close to 100 %). One great advantage of EABE(2S) is that the bacterial density can be easily controlled by the first bacteria adsorption step, and the uniform orientation of the cells allows a precise analysis of the viability (Fig. II-5). At the opposite, the encapsulation in one step (EABE(1S)) leads to bacteria randomly distributed in the whole volume of the film (see next section).

Fig II-5. Short-term BacLight analysis of EABE(2S) for six E. coli C600 concentrations encapsulated in the optimal sol-gel: a) 4 × 10⁴ cells mL⁻¹, b) 4 × 10⁵ cells mL⁻¹, c) 4 × 10⁶ cells mL⁻¹, d) 4 × 10⁷ cells mL⁻¹, e) 4 × 10⁸ cells mL⁻¹ and f) 4 × 10⁹ cells mL⁻¹. Same magnification for the three upper pictures (a, b, c) and for the three lower pictures (d, e, f). N.B. E = -1.3 V, deposition time = 30 s.

The effects of several parameters, i.e. the hybrid sol-gel components (Fig. II-6), the storage conditions (Fig. II-7) and the deposition time of EABE(2S) (Fig. II-8), were then analyzed by Live/Dead BacLight over several weeks.

Fig II-6. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4 × 10⁶ cells mL⁻¹) in films composed of (a) sol-gel silica, PEG and chitosan, (b) PEG and chitosan, (c) sol-gel silica and chitosan, (d) sol-gel silica and PEG. Trehalose was also introduced in all films. The first measurement was done 1 hour after the electrochemically-assisted bacteria encapsulation. All films were prepared by electrolysis at -1.3 V for 30 s. All samples were stored at +4 °C in moist air.

Fig. II-7 reports the evolution with time of the bacterial viability of encapsulated E. coli C600 under different storage conditions, i.e. 1 mM KCl solution (curve a) or moist air (curves b and c). All samples were stored at +4 ºC, which was given as the optimal temperature for the storage of encapsulated E. coli inside inorganic matrices in the literature [40].

Fig II-8. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4 × 10⁶ cells mL⁻¹) with 10 s (a), 20 s (b) and 30 s (c) deposition times. All films were prepared by electrolysis at -1.3 V. N.B. zero: 1 hour after encapsulation.

For all samples, the short-term viability measured one hour after encapsulation was high, close to 100 %. However, this viability dramatically decreased for the thinner films, for which almost no viable bacteria could be detected at longer times.

Fig. II-9 illustrates the EABE(1S) protocol. Here, the bacteria are introduced in the starting sol, in the presence of the silica precursor, PEG, chitosan and trehalose. When the electrolysis is performed, the sol-gel transition occurs and the bacteria are immobilized in the course of the hybrid sol-gel layer formation. During the immobilization, the bacteria are expected to be homogeneously distributed in the material but randomly oriented.

Fig II-10. AFM profiles for the electrodeposition of the optimal hybrid sol-gel films obtained with various deposition times: (a) 10 s, (b) 20 s (30 s was not measurable). All films were prepared by electrolysis at -1.3 V with a final cell density of 2 × 10⁹ cells/mL.
Fig II-11. AFM images of EABE(1S) for E. coli C600 (8 × 10⁹ cells mL⁻¹) encapsulated in the optimal sol-gel: a) 10 s, b) 20 s and c) 30 s. All films were prepared by electrolysis at -1.3 V.

Fig. II-12 reports the short-term viability of E. coli C600 immobilized by EABE(1S) in films produced with electrolysis times varying from 10 s (Fig. II-12a) to 60 s (Fig. II-12d). The bacterial cells exhibited a high viability, but a precise counting of the viable bacteria was difficult because of their random orientation, especially for the thickest films. The quantity of bacteria immobilized on the electrode surface was controlled by the bacterial density introduced in the sol and by the time of electrodeposition. As observed previously for EABE(2S), a too long electrolysis time in EABE(1S) can be detrimental for the bacteria, as shown in Fig. II-12d (corresponding to the film prepared with 60 s of electrolysis). In this experiment, a significant number of non-viable cells were observed (but it was not possible to quantify them accurately). Note that a cell density of 4 × 10⁸ cells per mL was used for the EABE(1S) experiments.

Fig II-12. Short-term BacLight analysis of EABE(1S) for E. coli C600 (4 × 10⁸ cells mL⁻¹) at different deposition times in the optimal sol-gel: a) 10 s; b) 20 s; c) 30 s and d) 60 s. Same magnification for all pictures. All films were prepared by electrolysis at -1.3 V.
Fig II-13. Fluorescence metabolic activity for EABE(1S) of MG1655 pZNTA-GFP (4 × 10⁸ cells mL⁻¹) encapsulated in the optimal sol-gel after 1 week: a) stressed by CdCl₂ and b) unstressed (control).

This protocol allows the elaboration of electrode-surface biocomposite films, notably by providing high protection levels for the encapsulated microorganisms over a rather long period (1 month). Bacteria immobilized in such films can be reactivated for the expression of luminescent or fluorescent proteins, and they keep their sensitivity to their environment, as demonstrated by the fluorescent signal in the presence of cadmium. The development of such an electrochemical protocol provides new opportunities for bacteria immobilization on electrode surfaces, with possible applications in the field of electrochemical biosensors for environmental monitoring, or to optimize the electrochemical communication between bacteria and electrode collectors, which constitutes a major challenge in the development of biotechnological devices.

Chapter III. Investigations of the electrochemical communications between bacteria and the electrode surface

In this chapter, the work focuses on the electrochemical communication between Shewanella putrefaciens CIP 8040 or Pseudomonas fluorescens CIP 69.13 encapsulated in a silica-based sol-gel film and a glassy carbon electrode. As silica is an insulating material, strategies have been developed to improve the electron transfer from the bacteria to the electrode in this environment. Several configurations have been considered, i.e. direct electron transfer from the bacteria to the glassy carbon electrode, introduction of the soluble Fe(CN)₆³⁻ mediator in solution, introduction of carbon nanotubes on the electrode surface or in the sol-gel layer, functionalization of the carbon nanotubes with a ferrocene mediator on a long ethylene glycol arm and, finally, introduction of a natural mediator, i.e. cytochrome c, in the gel, in close interaction with the bacteria. The comparison of these different approaches allows a discussion of the interest and limitations of each strategy. Sodium formate and glucose have been used as electron donors for S. putrefaciens and P. fluorescens, respectively.

The immobilization of cells can improve the performance of cell-based electrochemical devices by providing close proximity between the viable cells and the solid electrode surface to achieve fast and efficient electron transfer (ET) reactions [4]. The challenge is to enhance the link between the microbial catabolism and the communication with the electrode surface. Microorganisms can adopt different strategies for this purpose, the most common of which are (1) direct electron transfer (DET) and (2) mediated electron transfer (MET), through either exogenous or endogenous diffusive electron mediators [13].
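The Live/Dead BacLight viability percentages reported above (Figs. II-5 to II-8) reduce to a simple ratio once the cells appearing green (intact membranes) and red (damaged membranes) have been counted in the epifluorescence images. A minimal Python sketch, with hypothetical counts from one image:

def viability_percent(n_green, n_red):
    # Fraction of cells with intact membranes (green / total), in percent
    return 100.0 * n_green / (n_green + n_red)

# Hypothetical counts from a single epifluorescence image
print(round(viability_percent(184, 21), 1))  # 89.8

Averaging this ratio over several images of the same film is what yields the time-course viability curves discussed above.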
The goal of this study is to investigate an original strategy based on ET between Shewanella putrefaciens and an electrode material using novel artificial mediators. Sodium formate was used as substrate for this bacterium. In a first investigation, Fe(CN)₆³⁻ was used to optimize the electrochemical communication (EC) of the encapsulated bacteria, or of films modified with bacteria and bare CNT, with the glassy carbon electrode (GCE). In addition, bare CNT and CNT covalently linked to a flexible ferrocene mediator have been compared in different entrapment protocols in order to investigate and evaluate the EC of the encapsulated bacteria with the GCE. The different electrode configurations are schematically described in Figure III-1.

Fig III-1. The various electrode configurations used in this study: in (A) only the bacteria are immobilized in the sol-gel layer deposited on the GCE; in (B) SWCNT are introduced in the sol-gel layer with the bacteria; in (C) a first layer of SWCNT is deposited on the GCE and a second layer of silica gel containing the bacteria is overcoated; and in (D) cytochrome c is introduced in the sol-gel layer with the bacteria in order to serve as electron mediator.

Figure III-2A shows a typical cyclic voltammetric response of S. putrefaciens encapsulated in a sol-gel layer deposited onto the GCE surface (i.e. the configuration of Fig. III-1A). In these conditions a complex cathodic response can be observed, with a well-defined redox signal.

Fig III-2. (A) Cyclic voltammograms recorded with a GCE modified with a sol-gel layer containing S. putrefaciens. The measurement was performed in phosphate buffer at room temperature at a scan rate of 20 mV s⁻¹. (B) Amperometric response obtained upon successive additions of 20 mM sodium formate (arrows) with a similar electrode as in (A). The measurements were performed at room temperature in PBS under stirring, by applying +0.35 V versus the Ag/AgCl reference electrode, (a) in the absence and (b) in the presence of 5 mM Fe(CN)₆³⁻.

Figure III-2B confirms this observation through the electrochemical response measured with the same system as reported in curve "a", but this time in the presence of 5 mM Fe(CN)₆³⁻. Carbon nanotubes were then introduced, i.e. cases B and C as reported in Figure III-1. In case B, SWCNT were introduced in the sol-gel layer with the idea of promoting a network of bacteria and nanotubes. In case C, a layer of SWCNT was first deposited on the glassy carbon electrode and covered by a second layer containing the bacteria, in order to increase the underlying electrode surface area, the objective being to improve the interactions between the bacteria and the SWCNT at the interface between these two layers. The electrochemical responses of these assemblies were first tested in the absence of mediator in solution. The electrochemical response observed upon additions of 20 mM formate with the one-layer configuration (curve "a" in Fig. III-3A, corresponding to case B in Fig. III-1) was slightly improved when compared to the response observed in the absence of SWCNT (curve "a" in Fig. III-2B), but remained very low. The current response was enhanced with the two-layer configuration (curve "b" in Fig. III-3A, corresponding to case C in Fig. III-1); however, this response dropped dramatically and rapidly with time. The same systems, i.e. cases B and C (Fig. III-1), were then tested in the presence of 5 mM ferricyanide. As expected, the current increased dramatically upon addition of 50 mM formate, the current value in the one-layer configuration (curve "a" in Fig. III-3B, corresponding to case B in Fig. III-1) being two times lower than in the two-layer configuration (curve "b" in Fig. III-3B).

Figure III-3. Amperometric current responses measured (A) in the absence and (B) in the presence of 5 mM Fe(CN)₆³⁻ mediator in solution upon successive additions of sodium formate (arrows), using glassy carbon electrodes modified with (a) a sol-gel layer containing both SWCNT-COOH and S. putrefaciens (as reported in Fig. III-1B) and (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-COOH underlayer film (as reported in Fig. III-1C). The measurements were performed in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.

Fig III-4. (A) Scheme of SWCNT functionalized with ferrocene linked by a flexible poly(ethylene glycol) linker. (B) Cyclic voltammogram measured with a GCE modified with SWCNT-(EtO)₈-Fc. The measurement was performed in PBS at room temperature and at a scan rate of 20 mV s⁻¹.

Fig. III-6A reports the electrochemical response of cytochrome c in the sol-gel layer. A pair of well-defined current peaks is observed, located at 0.07 V. Contrarily to the response measured with S. putrefaciens, which displays several irreversible peaks (Fig. III-2A), the signal coming from cytochrome c is clearly reversible. Investigations have shown that the electrochemical response of cytochrome c was controlled by diffusion, indicative of the good mobility of this small protein in the silica gel (see section 3 of this chapter).

Fig III-6. (A) Cyclic voltammogram measured with a GCE modified with a sol-gel layer containing bovine heart cytochrome c. The measurement was performed in PBS at room temperature and at a scan rate of 20 mV s⁻¹. (B) Amperometric current responses recorded upon successive additions of 0.2 mM sodium formate, as measured using a GCE modified with a sol-gel layer containing bovine heart cytochrome c and S. putrefaciens. The measurement was performed under stirring in PBS at room temperature by applying +0.15 V versus the Ag/AgCl reference electrode.

The different behaviors observed in the presence of SWCNT-(EtO)₈-Fc and cytochrome c are indicative of the different mechanisms involved by the bacteria to transfer the electrons to the mediator: soluble mediators or ferrocene are susceptible to enter the porins of the outer bacterial membrane, while cytochrome c does not, the latter thus collecting electrons only from the outer membrane of the bacteria. Further optimization has been done using cytochrome c as the natural mediator. Experiments have been performed with Pseudomonas fluorescens and are presented in the next section.

Fig III-7. (A) Cyclic voltammograms for (a) a bare GCE, (b) bovine heart cytochrome c in a sol-gel film on GCE, (c) cytochrome c and P. fluorescens in a sol-gel film on GCE. Inset: cyclic voltammetric response of P. fluorescens alone in the sol-gel film. (B) Variation of the current response measured by cyclic voltammetry at different scan rates for a sol-gel film.

As shown in Fig. III-8A, cytochrome c alone was not able to generate any current response upon addition of glucose, as expected from the absence of bacterial respiration, while P. fluorescens CIP 69.13 alone generated a very small current response upon addition of glucose (Fig. III-8B). By contrast, P. fluorescens CIP 69.13 encapsulated with bovine heart cytochrome c showed much higher current responses upon successive additions of glucose (> 10 times that of the bacteria alone, Fig. III-8C).

Fig III-8. Amperometric current responses measured upon the addition of glucose using glassy carbon electrodes modified with a sol-gel film containing (A) bovine heart cytochrome c, (B) P. fluorescens, (C) P. fluorescens with bovine heart cytochrome c and (D) E. coli with cytochrome c.

Fig III-9. Amperometric current responses measured in the presence of 3 mM glucose with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c.

For further optimization of the conditions improving the electrochemical communication between P. fluorescens and bovine heart cytochrome c immobilized in sol-gel films, the effects of the cytochrome c concentration, cell density, temperature, pH and applied potential have been investigated via amperometric measurements (Fig. III-10). Figure III-10A shows that increasing the cell density (from 4 to 9 × 10⁹ cells mL⁻¹) encapsulated with a constant concentration of cytochrome c generated a higher current response. Fig. III-10B displays the electrochemical response of the biofilm as a function of the initial concentration of bovine heart cytochrome c (from 0.1 to 1.5 mM) in the sol used for the electrode modification, co-encapsulated with a constant density of P. fluorescens (9 × 10⁹ cells mL⁻¹). The highest response was obtained using 1 mM cytochrome c. The small decrease in the current response observed at higher concentration could be explained by some leaking of cytochrome c due to an alteration of the sol-gel film stability at this high protein concentration (indeed, too high contents of additives are expected to result in more open silica structures and, thereby, easier leaching of the entrapped species). Temperature variations from 20 to 35 °C resulted in significant variations in sensitivity (Fig. III-10C), with a maximum current response for the electrochemical communication obtained at 30 °C, which actually corresponds to the temperature used for the growth of this bacterium. Finally, pH 7 was observed as optimal for this bioelectrode (Fig. III-10D). The optimal pH and temperature are here in line with the growth conditions of the strain.

Fig III-10. Amperometric current responses measured with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c, with respect to (A) the P. fluorescens cell density in the presence of 1 mM cytochrome c, (B) the cytochrome c concentration in the presence of P. fluorescens (9 × 10⁹ cells mL⁻¹ as initial cell density), (C) the temperature and (D) the pH of the medium. Other conditions are similar as in Figure III-8, with the exception of the parameter under study.
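The scan-rate study of Fig. III-7B can be analyzed with the classical diffusion criterion: for a diffusion-controlled signal the peak current scales with the square root of the scan rate, whereas for a surface-confined species it scales linearly with the scan rate. A minimal sketch of this check is given below; the peak-current values are hypothetical placeholders, not data from this work.

import numpy as np

# Hypothetical peak currents (uA) at scan rates (mV/s)
v = np.array([10., 20., 50., 100., 200.])
ip = np.array([1.1, 1.5, 2.4, 3.3, 4.7])

r_sqrt = np.corrcoef(np.sqrt(v), ip)[0, 1]   # diffusion-controlled test
r_lin = np.corrcoef(v, ip)[0, 1]             # surface-confined test
print("R(ip vs v^1/2) = %.4f, R(ip vs v) = %.4f" % (r_sqrt, r_lin))
# The better linear correlation indicates the dominant regime.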
Influence of (A) the quantity of artificial biofilm and cytochrome c (volume of deposited sol solution) and (B) the presence of gold nanoparticles in the biocomposite film on the electrochemical response of glassy carbon electrodes modified with sol-gel films containing P. fluorescens and bovine heart cytochrome c, upon the addition of 3 mM glucose as substrate. Other conditions are similar to those in Figure III-8. (N.B.: PF and AuNP stand for P. fluorescens and gold nanoparticles, respectively.)

Cytochrome c cannot cross the outer bacterial membrane, leading to a different behavior by comparison with chemical mediators, both in terms of sensitivity and of range of response to the addition of the electron donor. The application of cytochrome c for electron transfer from bacteria to the electrode was then studied systematically with Pseudomonas fluorescens. The resulting system can be considered an artificial biofilm, the silica matrix mimicking the exopolymers involved in natural biofilms, but not only: the strategy used here to increase the electron transfer rate also mimics one found in natural biofilms, in which cytochromes can be expressed to relay electrons from the bacteria to a final acceptor, the latter being a mineral or the electrode of a microbial fuel cell. The system faces strong competition from O2 as electron acceptor, and it displayed a clear response to its environment, as demonstrated by the dependence of the current on glucose concentration. The introduction of gold nanoparticles slightly enhanced the electron transfer reaction between the electrode and the bacteria/cytochrome c composite. This concept of artificial biofilm is probably an important topic to be developed in future works for applications as biosensors or bioreactors.

Technological advancement in the immobilization of biological molecules over several decades has resulted in a revolution of biodevices for the selective extraction, delivery, separation, conversion and detection of a wide range of chemical and biochemical reagents. The use of biological molecules such as proteins, peptides and nucleic acids in these applications relies on their stable immobilization. This work was performed by Patrícia Rodrigues in cooperation with Maria Gabriela Almeida (Chemistry Department, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa) under our co-supervision. It reports the immobilization of cytochrome P450 1A2 (CYP1A2) in sol-gel thin films prepared from a water-based precursor in the presence of polyethylene glycol (PEG), with a special focus on the direct electron transfer reaction. The use of sodium silicate prevents the introduction of alcohol in the starting sol, while PEG improves the stabilization of the membrane proteins. The direct electron transfer reaction was characterized electrochemically in the absence and in the presence of O2.

Fig. IV-1. Electrochemical responses of CYP1A2 (A) simply adsorbed on the PGE surface, (B) immobilized in PEG400, (C) in sodium silicate and (D) in a hybrid sol-gel matrix made of sodium silicate and PEG400. All measurements were done in 50 mM phosphate buffer at pH 7 after degassing the solution for 20 min with argon. The 40 successive scans were performed at 50 mV s-1.

Fig. IV-3. (A) Influence of O2 addition on the electrochemical response of CYP1A2 immobilized in the sodium silicate-PEG400 hybrid sol-gel matrix.
(B) Detrimental influence of higher concentrations of O2 on the direct electron transfer reaction. All measurements were done in 50 mM phosphate buffer at pH 7 after degassing the solution for 20 min with argon. The potential scan rate was 50 mV s-1.

This catalytic activity is expressed by several proteins, which can be soluble or membrane-bound and use different cofactors such as nicotinamide adenine dinucleotide (NAD+) or flavin mononucleotide (FMN). The UniProt databank lists 63 ManDH-related proteins from bacteria and fungi, but there are many more α-hydroxy acid dehydrogenases expressing mandelate-oxidizing or benzoylformate-reducing activity. Here we chose L-ManDH from Pseudomonas putida (EC 1.1.99.31), encoded by the mdlB gene, as a membrane-bound FMN-dependent protein [28]. The enzyme has been cloned and heterologously expressed in Escherichia coli as an active catalyst and has been characterized in detail [29-31]. The enzymatic activity, determined by reduction of 2,6-dichlorophenol indophenol, could be demonstrated with (S)-ManDH bound to membrane vesicles.

Fig. IV-4. (A) Cyclic voltammetry for a sol-gel film containing L-mandelate dehydrogenase. The measurement was performed in phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature; the potential scan rate was 20 mV s-1. (B) Amperometric measurements for (a) a bare GCE, (b) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE in the absence of FDM, and (c) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE. The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.

The comparison with the bare GCE (curve "a" in Fig. IV-4B) and with the GCE modified with ManDH vesicles in sol-gel but without electrochemical mediator in solution (curve "b" in Fig. IV-4B) confirms that the electrochemical response was due to the enzymatic activity, and that it could be observed only in the presence of the mediator.

Fig. IV-6. (A) Amperometric measurement for a hybrid sol-gel film containing L-mandelate dehydrogenase and MWCNT-Os on GCE. The measurements were performed under stirring.

Fig. IV-7. Cyclic voltammetry for L-mandelate dehydrogenase in (A) SiO2 (55 mM, pH 7) and (C) TEOS (0.25 M, pH 5). The measurements were performed in phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature; the potential scan rate was 20 mV s-1. Amperometric measurements for L-mandelate dehydrogenase in (B) SiO2 (55 mM, pH 7) and (D) TEOS (0.25 M, pH 5). The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) with FDM (0.1 mM) at room temperature by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.

The influence of sol-gel encapsulation on the infectivity of bacteriophages has not been reported. The elaboration of hybrid materials preserving this infectivity would be of great interest for the development of antibacterial coatings or to control the release of bacteriophages for biomedical purposes, enhancing the persistence of the phage at the site to be treated. The aim of this work is to study the influence of encapsulation in an inorganic matrix on the biological activity of a bacteriophage, considering both wet and dried gels. Bacteriophage ΦX174 was chosen as the model system. It is an isometric virus about 25 nm in diameter, with a single-stranded circular DNA and an icosahedral shape of 20 triangular faces and 12 vertices (Fig. V-1) [14].
It is capable of infecting a number of enterobacteria strains, such as Shigella sonnei, Salmonella typhimurium Ra, or E. coli C, recognizing its target cells through the lipopolysaccharides (LPS) present on the bacterial membrane [15,16].

Fig. V-1. Representation of bacteriophage ΦX174 (image by Jean-Yves Sgro ©2004, virology.wisc.edu/virus world).

Fig. V-3. Assessment of bacteriophage ΦX174 infectivity over 6 days of dispersion in a saturated solution of sodium silicate.

Bacteriophages ΦX174 were encapsulated in sol-gel monoliths (thickness > 1 mm) and then stored at 4 °C in a Petri dish covered with a lid in order to avoid, in this first set of experiments, the complete drying of the gel. Some additives were introduced so as to protect the bacteriophage against constraints during the sol-gel transition and aging phases, and to evaluate their influence on the viral infectivity after release in solution. Fig. V-4 reports the effect of encapsulation in pure and hybrid sol-gels on the preservation of infectivity against bacteria over 6 days. The infectivity of ΦX174 encapsulated in a pure sol-gel (no additives) decreased from 180 PFU mL-1 (initial value) to 13 PFU mL-1 and 1 PFU mL-1 after 1 and 6 days, respectively; this rapid loss of viral infectivity over 6 days could be explained by capsid protein denaturation during the sol-gel transition and aging phases or, as this cannot be totally excluded, by a somewhat limited release in solution. At first, we made the hypothesis that deactivation mainly occurred during encapsulation in the gel. In order to improve the protection against this rapid viral deactivation in the sol-gel matrix, trehalose and glycerol, known in the literature as stabilizers of microorganisms against sol-gel constraints and aging stresses [19-21], were introduced into the sol-gel material. The infectivity of ΦX174 encapsulated in hybrid sol-gels containing either glycerol or trehalose is reported in Fig. V-4.

Fig. V-4. Assessment of bacteriophage ΦX174 infectivity over 6 days of encapsulation in hybrid sol-gel monoliths stored in a semi-dry environment.

Fig. V-5. Assessment of bacteriophage ΦX174 infectivity over 6 days of encapsulation in hybrid sol-gel monoliths stored in a harsh-dry environment.

Fig. V-5. Representative transmission electron micrographs (TEM) of (A) a gel made with sodium silicate and Ludox nanoparticles, (B) with glycerol and (C) with PEI as additives. The scale is 20 nm for all images.

Aqueous-based sol-gel (Sols A, B and D) was applied in order to avoid any trace of alcohol in the drop-coating and monolith bioencapsulation protocols. Alkoxide-based sol-gel was applied for electrochemically-assisted bioencapsulation (Sol C) and for the bioencapsulation of a membrane-associated enzyme (Sol E). The release of alcohol upon hydrolysis of TMOS was considered an obstacle, owing to its potential denaturing effect on the entrapped bacteria; methanol was therefore removed from the TMOS sol by evaporation. A natural polymer (chitosan) and PEG were used as additives, which were advantageous in providing biological protection and improving the deposition in electrochemically-assisted bioencapsulation.
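The alcohol mentioned here originates from the hydrolysis step of the alkoxide route; schematically (standard sol-gel chemistry, recalled here for clarity, not a result of this work):

\[
\mathrm{Si(OCH_3)_4} + 4\,\mathrm{H_2O} \longrightarrow \mathrm{Si(OH)_4} + 4\,\mathrm{CH_3OH}
\]
\[
\equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv \;\longrightarrow\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv + \mathrm{H_2O}
\]

Each mole of TMOS can thus release up to four moles of methanol during hydrolysis, which is why it is evaporated from the sol before the microorganisms are introduced.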
In addition, trehalose and glycerol were used for better bacterial protection against sol-gel constraints and aging. PEI was used either for better protection of the viral infectivity in sol-gel monoliths or for bacterial adsorption in electrochemically-assisted bioencapsulation. Note that control experiments were performed in the absence of one of the hybrid sol-gel components in order to check its effective role in the sol-gel bioencapsulation.

Materials

Procedures used to prepare the starting sols for film formation

Sol A: sodium silicate solution (0.22 M, 10 mL) was mixed with LUDOX HS-40 (40 wt%, 10 mL). HCl (1 M, 2.4 mL) was added to adjust the pH to 7.

One-step protocol: electrodeposition of a film from a sol containing the bacteria in suspension (the scheme is illustrated later). The hybrid sol for the one-step protocol was prepared as follows: 5 mL of sol C (1 M) was mixed with 5 mL of 50 % PEG (w/v), 5 mL of chitosan (1 % w/v) and 1 % (w/v) trehalose. Then, NaOH solution (0.2 M, 1.6 mL) was added to raise the pH to 5.3, and 5 mL of E. coli MG1655 pUCD607 or E. coli MG1655 zntA-GFP was introduced into the hybrid sol. Finally, the mixture was poured into the electrochemical cell, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for several tens of seconds (deposition time). The ITO plates were then rinsed immediately and carefully with deionized water in order to remove the non-deposited sol, and stored at -80 °C for further analysis. N.B.: the final concentrations of the hybrid sol-gel components are TMOS (0.25 M), PEG (12.5 % w/v), chitosan (0.25 % w/v), trehalose (0.25 % w/v) and E. coli MG1655 pUCD607.

Two-step protocol:
1- Adsorption step: the ITO plate was successively treated with nitric acid (65 %) for 1 h, sodium hydroxide solution (1 M) for 20 min, and PEI (0.2 % w/v) for 3 h. Between each step the ITO plate was washed with ultrapure water and dried in sterilized air at room temperature. The treated ITO plate was then introduced into an E. coli C600 suspension (4 × 10^6 cells mL-1) and incubated at room temperature in a sterile environment for 2 h. Finally, the ITO plate was washed with and stored in a KCl solution (1 mM) for the two-step EAB [9]. The hybrid sol for the two-step protocol was prepared as follows: 5 mL of sol C (1 M) was mixed with 5 mL of 50 % PEG (w/v), 5 mL of chitosan (1 % w/v) and 1 % (w/v) trehalose. Then, NaOH solution (0.2 M, 1.6 mL) was added to raise the pH to 5.3. Note that some control experiments were done by replacing PEG, chitosan or sol C with deionized water. N.B.: the final concentrations of the hybrid sol-gel components are TMOS (0.25 M), PEG (12.5 % w/v), chitosan (0.25 % w/v) and trehalose (0.25 % w/v).
2- Electrodeposition step: the ITO plate modified with bacteria was introduced into the hybrid sol-gel mixture (described above) and electrochemically-assisted deposition was performed at -1.3 V for several tens of seconds (deposition times). The ITO plates were then rinsed immediately and carefully with deionized water in order to remove the non-deposited sol, and stored at 4 °C in humid air for further analysis. Note that some control samples were stored either in KCl solution (1 mM) or in humid air without introduction of trehalose into sol C.

One-layer encapsulation: in section A, 10 μL of sol B was mixed with 10 µL of S. putrefaciens CIP 8040 (9 × 10^9 cells mL-1) and/or 10 µL of SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg mL-1) dispersed in chitosan (0.5 wt%). Finally, 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis. N.B.: the final concentrations of the hybrid components are S. putrefaciens (3 × 10^9 cells mL-1) and SWCNT-EtO/SWCNT-(EtO)8-Fc (0.67 mg mL-1).
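The final concentrations quoted in these N.B. notes are plain volume-dilution values. As a worked check, the short sketch below (plain Python written for this text, not part of the thesis protocols) reproduces the numbers quoted for the section-A drop-coating mixture.

def final_concentration(stock_conc, stock_volume, total_volume):
    # Simple dilution: C_final = C_stock * V_stock / V_total.
    return stock_conc * stock_volume / total_volume

total = 10 + 10 + 10  # microlitres: sol B + cell suspension + SWCNT dispersion
print(final_concentration(9e9, 10, total))  # cells/mL -> 3e9
print(final_concentration(2.0, 10, total))  # mg/mL    -> ~0.67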
In section B, 30 μL of sol B was mixed with 10 µL of bovine heart cytochrome c (1 mM), 10 µL of one of the strains E. coli C600, P. fluorescens CIP 69.13 or S. putrefaciens CIP 8040 (9 × 10^9 cells mL-1), and/or 10 µL of MWCNT-COOH dispersed in water (2 mg mL-1) or 20 µL of gold nanoparticles. Finally, 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis. N.B.: the final concentrations of the hybrid components are bacteria (1.8 × 10^9 cells mL-1), SWCNT-EtO/SWCNT-(EtO)8-Fc (0.4 mg mL-1) and bovine heart cytochrome c (0.2 mM).

Two-layer encapsulation: first, 5 µL of SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg mL-1) dispersed in chitosan was drop-coated on the glassy carbon electrode and left for 2 h to dry fully. Then, 5 µL of a suspension containing 10 μL of sol B and 10 µL of S. putrefaciens CIP 8040 (9 × 10^9 cells mL-1) was drop-coated on the electrode and kept in humid air at 4 °C for further analysis. N.B.: the final concentrations of the hybrid components are S. putrefaciens (4.5 × 10^9 cells mL-1) and SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg mL-1).

Membrane-associated protein encapsulation (Chapter IV). Cytochrome P450: 10 μL of sol D was mixed with 10 µL of cytochrome P450 (CYP1A2) membrane protein; 5 µL of the obtained mixture was drop-coated on the pyrolytic graphite electrode and kept in humid air at 4 °C for further analysis. L-Mandelate dehydrogenase: 10 μL of sol D or E was mixed with 10 µL of L-ManDH membrane protein; 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis.

7.1. Electrochemical measurements

7.1.1. Cyclic voltammetry (CV) [10]

Cyclic voltammetry is the most widely used technique for acquiring qualitative information about electrochemical reactions. It is often the first experiment performed in an electroanalytical study. In particular, it offers a rapid location of the redox potentials of the electroactive species, and a convenient evaluation of the effect of various parameters on the redox process. This technique is based on varying the applied potential at a working electrode in both forward and reverse directions (at selected scan rates) while monitoring the resulting current. The corresponding plot of current versus potential is termed a cyclic voltammogram. Figure II-1 shows the response of a reversible redox couple during a single potential cycle. It is assumed that only the oxidized form O is present initially. Thus, a negative-going potential scan is chosen for the first half-cycle, starting from a value where no reduction occurs. As the applied potential approaches the characteristic E0 for the redox process, a cathodic current begins to increase, until a peak is reached. After traversing the potential region in which the reduction process takes place, the direction of the potential sweep is reversed. During the reverse scan, R molecules (generated in the forward half-cycle and accumulated near the surface) are reoxidized back to O and an anodic peak results. The important parameters in a cyclic voltammogram are the two peak potentials (Epc, Epa) and the two peak currents (ipc, ipa) of the cathodic and anodic peaks, respectively.
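For a diffusion-controlled reversible couple, these peak parameters obey two textbook relations, quoted here for reference (standard electroanalytical results, not derived in this thesis): the Randles-Ševčík equation for the peak current and the peak-separation criterion,

\[
i_p = 0.4463\, nFAC \sqrt{\frac{nFvD}{RT}} \approx (2.69 \times 10^{5})\, n^{3/2} A D^{1/2} C\, v^{1/2} \quad (25\ ^{\circ}\mathrm{C}),
\]
\[
\Delta E_p = E_{pa} - E_{pc} \approx \frac{59}{n}\ \mathrm{mV}, \qquad \frac{i_{pa}}{i_{pc}} \approx 1,
\]

where n is the number of electrons exchanged, A the electrode area (cm2), D the diffusion coefficient (cm2 s-1), C the bulk concentration (mol cm-3) and v the scan rate (V s-1). The linearity of i_p versus v^(1/2) is the diffusion-control test invoked, for example, for cytochrome c in Chapter III.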
Cyclic voltammetry can be used for the study of reaction mechanisms and adsorption processes; it can also be useful for quantitative purposes, based on measurements of the peak current.

Figure 1. Typical cyclic voltammogram for a reversible redox reaction (O + ne- → R and R → O + ne-).

7.1.2. Amperometric measurements [10]

The basis of the amperometry technique is the measurement of the current response to an applied potential. A stationary working electrode and a stirred solution are used, and the resulting current-time dependence is monitored. As mass transport under these conditions is solely by convection (steady-state diffusion), the current-time curve reflects the change in the concentration gradient in the vicinity of the surface, which is directly related to the concentration in solution (Fig. 2).

Figure 2. Typical amperometric detection of a chemical reaction upon successive additions of substrate.

All the electrochemical measurements (amperometric or voltammetric) were carried out with a PalmSens potentiostat monitored by its software (PSTrace), using a conventional three-electrode cell.

Amperometric and cyclic voltammetry measurements (Chapters III and IV)

Figure 3. Typical AFM 3D image of a sol-gel film scratched on the left side.

Figure 4. Transmission electron microscopy of an E. coli cell trapped within a silica gel (adapted from reference [11]).

7.4. BacLight™ bacterial viability analysis

LIVE/DEAD BacLight viability analysis is a dual DNA fluorescent staining method used to determine simultaneously the total population of bacteria and the ratio of damaged bacteria. It was applied here to bacteria encapsulated in sol-gel films. BacLight is composed of two fluorescent nucleic acid-binding stains: SYTO 9 and propidium iodide (PI). SYTO 9 penetrates all bacterial membranes and stains the cells green, while PI only penetrates cells with damaged membranes. The combination of the two stains produces red fluorescing cells (damaged cells), whereas those stained with SYTO 9 only (green fluorescent cells) are considered viable [12]. The two dye solutions, SYTO 9 (1.67 µM) and PI (1.67 µM), were mixed together in equal volumes. Then, 3 µL of this solution was diluted in 1 mL of non-pyrogenic water. The sample of sol-gel film encapsulating bacteria was covered with 0.2 mL of the diluted dye solution and incubated for 15 min in the dark. Finally, the sample was washed in KCl solution (1 mM) and the counting was performed with an epifluorescence microscope (Olympus BX51) with immersion oil at 100× magnification. Viability values were calculated as averages of ten fields obtained from two samples.

Figure 5. Image series for comparison of dye combinations in mixed E. coli suspensions (adapted from reference [13]).

Figure 6. Typical luminescence scheme for a bacterium modified with plasmid pUCD607.

Figure 7. Typical fluorescence scheme for a bacterium modified with plasmid zntA-GFP.

Figure 8. Typical plaque assay for encapsulated bacteriophage upon exposure to bacteria.

4.2.1. Bioreactor

Some cells like cyanobacteria are classified as photosynthetic prokaryotes, employing the same reactions as plants to synthesize bio-organic compounds such as sucrose, starch and cellulose from CO2 and water in the presence of light (Fig. I-9).
A cyanobacteria-based bioreactor could thus minimize the CO2 level in the environment and produce a novel, reusable carbon source. More generally, there is a tremendous interest in the immobilization of living cells (bacteria, yeast, microalgae, …) in porous matrices for biotechnological applications such as bioreactors, bioreporters, biosensors and biofuel cells [1,2].

Fig. II-2. Mechanism of the two-step electrochemically-assisted bacteria encapsulation, EABE(2S), in a thin sol-gel film: first step, bacterial immobilization on the PEI-modified ITO electrode; second step, sol electrolysis leading to the hybrid gel. PEI: poly(ethylene imine); ITO: indium tin oxide.

Table II-1. Long-term metabolic activity for EABE(1S) of E. coli MG1655 pUCD607 stored at -80 °C. All films were prepared by electrolysis at -1.3 V for 30 s.

Conclusion and perspectives

The main subject of this thesis was the immobilization and the characterization of the activity of bacteria in silica films prepared by the sol-gel process. Electrochemistry was successfully used to induce the immobilization of Escherichia coli in a hybrid sol-gel film, using the controlled electrolysis of a starting sol containing the silica precursors, poly(ethylene glycol), chitosan and trehalose. The bacteria were either adsorbed on the electrode surface before the sol-gel deposition or co-deposited during the electrolysis. We showed that the membrane integrity was preserved for more than one month and that the immobilized bacteria could express luminescence or fluorescence in response to their environment.

Electrochemistry was then used to characterize the electronic communication with Shewanella putrefaciens and Pseudomonas fluorescens in the presence of electron donors, sodium formate and glucose respectively. These two bacterial strains are able to transfer electrons through their outer membrane, but the immobilization of these bacteria strongly limits the electron transfer from the bacterium to the electrode because of the insulating nature of the silica matrix. Two strategies were proposed to enhance these electron transfer processes in the gel: using carbon nanotubes functionalized with ferrocene groups, or co-immobilizing in the gel, together with the bacteria, a natural mediator, cytochrome c. These two approaches led to markedly different results in terms of sensitivity to the addition of a substrate, because of the mechanisms involved: ferrocene can cross the outer membrane, for example through the porins, to collect the electrons of respiration, whereas the cytochrome can only collect the electrons transmitted through the outer membrane. The use of cytochrome c within the silica matrix that immobilizes the bacteria mimics, to some extent, a strategy developed in some natural biofilms to favor electron transfer towards a final acceptor, mineral or electrode. Our approach can thus be considered to lead to the formation of an artificial biofilm.
The experience gained on the immobilization of bacteria was then applied to the immobilization of membrane proteins for bioelectrochemistry. Two types of redox proteins were studied, a cytochrome P450 (CYP1A2) and a mandelate dehydrogenase (ManDH). For both systems, an electrocatalytic activity could be measured after encapsulation in a sol-gel film at the electrode surface, through direct electron transfer (P450-CYP1A2) or mediated transfer (ManDH). We observed that the cytochrome was more sensitive than the dehydrogenase, a result that can be explained by the direct nature of the electron transfer between the electrode and the protein. However, for both systems, immobilization in a sol-gel film increased the intensity and the stability of the electrocatalytic response.

Finally, the preliminary studies on the encapsulation of bacteriophage ΦX174 within a hybrid sol-gel matrix showed that the infectivity of the virus could be improved by judicious modification of the material. The presence of glycerol or polyethyleneimine protected this infectivity in a dry atmosphere for more than 6 days, the best infectivity being observed in the presence of the polyelectrolyte. The favorable interaction between the negatively charged bacteriophage and the positively charged polymer could explain, at least in part, this better resistance to inactivation. This work could provide new application opportunities for immobilized bacteriophages in the fight against antibiotic-resistant bacterial strains or in medicine.

In an attempt to propose perspectives for this thesis, we can first say that many different systems were studied, each of them suggesting new research directions. These potential developments concern the application of electrochemically-assisted bacterial encapsulation, the exploitation of artificial biofilms, the application of immobilized membrane proteins for electrocatalysis, and new developments with bacteriophages. The reader will certainly have noted that the electrochemically-assisted bacterial encapsulation protocol developed in Chapter 2 was not applied to the studies described in Chapter 3. These two studies were indeed carried out in parallel and the link between the two works could not be made here. One difficulty is that the film composition suited to electrodeposition is different from the one used for the electronic communication studies. New optimizations will therefore be necessary to apply this original approach to the electrodeposition of artificial biofilms applicable in electrochemistry.

Artificial biofilms appear to be an interesting research route for the controlled elaboration of bacteria-based bioelectrodes. This approach has its limits, insofar as it prevents the bacterium or the biofilm from growing naturally. However, it also has its qualities. It first allows a very good control of the bacterial strains present in the biofilm, which would make it possible to conduct fundamental studies in which several bacterial strains are associated in a well-controlled way in order to evaluate their synergy in a biofilm configuration. The growth of a natural biofilm can take a long time; the artificial biofilm is thus an interesting strategy to rapidly elaborate microbiological devices for electrochemical applications such as biosensors or bioreactors.
The immobilization of membrane redox proteins in a sol-gel material is an interesting route to increase the stability of their electrochemical response; it could then be tested in an electrochemical reactor for enzymatic electrosynthesis. The sol could be optimized to increase as much as possible the stability of the electrocatalytic response without disturbing the enzymatic activity. Other membrane proteins are of interest for energy conversion, such as hydrogenases, and it would be interesting to consider the benefit of the sol-gel route for the immobilization of this latter class of proteins.

Finally, the studies on the encapsulation of bacteriophages to control their release in an infectious form are only at their beginning. The parameters that influence both the release of the bacteriophages and the maintenance of their infectivity should be studied in more detail. Future studies could consider the application of this type of material in the form of films for the controlled release of this antibacterial agent.

1.1. Sol-gel reagents

This work focuses on developing safe bioencapsulation techniques based on pure silica or hybrid sol-gel films to construct effective whole-cell biosensors. A series of precursors with different properties was used (Table 1). Tetramethoxysilane is the most common precursor used to electrodeposit sol-gel films on conducting electrodes. Sodium silicate solution and Ludox® HS-40 colloidal silica were used for bacteria and protein encapsulation by drop-coating, in order to avoid any trace of alcohol (i.e., an aqueous sol-gel route).

Table 1. Silica precursors.
Chemical | Formula | Grade | MW (g mol-1) | Supplier
Tetramethoxysilane (TMOS) | Si(OCH3)4 | 98 % | 152.22 | Aldrich
Sodium silicate solution | Na2O 14 %, SiO2 27 % | - | - | Aldrich
Ludox® HS-40 colloidal silica (40 wt% suspension in H2O) | SiO2 | 40 % | 60.08 | Aldrich
Tetraethoxysilane (TEOS) | Si(OC2H5)4 | 98 % | 208.33 | Alfa Aesar

Table 2 provides some technical information on the additives.

Table 2. Additives for the preparation of hybrid materials.

Bacteriophage ΦX174 and a series of microbial species such as Escherichia coli, Shewanella putrefaciens and Pseudomonas fluorescens were used in this work. These microorganisms were encapsulated following different sol-gel protocols and materials, and the physical properties of the microorganisms encapsulated in sol-gel were also investigated.

Table 3. Bacteria and bacteriophage species.

Table 4. Chemical components of the culture media.
Minimal salt broth (MSB): 10 g/L peptone, 3 g/L yeast extract, 12 g/L beef extract, 3 g/L NaCl, 5 mL of Na2CO3 (150 g/mL), 0.3 mL of [MgCl2, 6H2O] (2 g/mL) and 10 mL of nalidixic acid (25 mg/mL), dissolved in 1 L of deionized water (pH 7.3 ± 0.2).
Minimal salt agar (MSA): 10 g/L peptone, 3 g/L yeast extract, 12 g/L beef extract, 3 g/L NaCl, 5 mL of Na2CO3 (150 g/mL), 0.3 mL of [MgCl2, 6H2O] (2 g/mL), 10 mL of nalidixic acid (25 mg/mL), and 10 g/L agar, dissolved in 1 L of deionized water (pH 7.3 ± 0.2).

A glassy carbon electrode (GCE, 3 mm in diameter), indium-tin-oxide glass slides (ITO, Delta Technologies) or a pyrolytic graphite electrode (PGE) served as the working electrode.
Prior to each measurement, the GCE was first polished on wet emery paper 4000 using Al2O3 powder (0.05 µm, Buehler), then rinsed thoroughly with water to remove the embedded alumina particles. ITO electrodes were cleaned only with acetone to remove any chemical or biological contamination. Basal-plane pyrolytic graphite electrodes (3 mm diameter disks) were treated by abrasion with sandpaper and polished with alumina slurry (0.3 µm), followed by brief sonication in deionized water. Finally, the electrodes were washed with deionized water and dried with compressed air. The counter electrode was a platinum wire and the reference electrode was Ag/AgCl in 3 M KCl.

In this chapter, the work focuses on the immobilization of membrane-associated proteins in a hybrid sol-gel matrix for bioelectrochemical applications. According to the literature, the sol-gel material can provide a suitable environment for the safe immobilization of these proteins, which is beneficial for bioelectrocatalysis. The bioencapsulation process was achieved by drop-coating a starting sol prepared from aqueous silicate and including the enzymes. PEG was introduced to provide a protective environment for the encapsulation of the membrane fragments. Two different proteins were chosen for this study, i.e., cytochrome P450 (CYP1A2), for its ability to transfer electrons directly to the electrode surface, and L-mandelate dehydrogenase, which can transfer electrons only through electrochemical mediators. The biohybrid materials developed could be further used for the construction of reagentless bioelectrochemical devices such as biosensors and bioreactors.

Escherichia coli MG1655 pUCD607 and Escherichia coli MG1655 zntA-GFP were kindly provided by P. Billard (Laboratory of interactions between microorganisms, minerals and organic materials in soils, LIMOS, Lorraine University, France). Escherichia coli CN and bacteriophage ΦX174 were kindly provided by Prof. C. Gantzer (Laboratory of physical chemistry and microbiology for the environment, LCPME, Lorraine University, France).

Membrane-associated enzymes. The membrane-associated L-mandelate dehydrogenase (L-ManDH) solution (30 units/mg), isolated from Pseudomonas putida, was provided by Prof. G. W. Kohring (Laboratory of microbiology, Saarland University, Germany). The membrane-associated cytochrome P450 CYP1A2 was provided by G. Almeida (Faculty of Science and Technology, Universidade Nova de Lisboa, Portugal).

Culture media for microorganisms. All the culture media used for bacteria and bacteriophage growth were sterilized prior to each experiment in an autoclave for 15 min at 121 °C to avoid contamination with other species. Table 4 shows the chemical composition of the culture media.

The immobilization of membrane redox proteins was also considered in these inorganic thin layers to improve the stability of the electrocatalytic response. The proteins considered involve different electron transfer mechanisms, either direct for cytochrome P450 (CYP1A2) or mediated for mandelate dehydrogenase. Finally, the influence of encapsulation in a hybrid sol-gel matrix on the infectivity of bacteriophage ΦX174 was studied, showing the protective effect of polyethyleneimine or glycerol.

Keywords: sol-gel, hybrid materials, bioencapsulation, electrodeposition, bacteria, artificial biofilm, membrane redox protein, bacteriophage, electron transfer, cytochrome c.
Abstract

The work reported in this thesis has been developed at the interface between three disciplines, i.
Development of new types of composite electrodes based on natural clays and their analytical applications

"We are what we repeatedly do. Excellence, then, is not an act, but a habit." ~Aristotle~

To my supervisors...

I would like to express my deep and special gratitude to my thesis supervisor Prof. Robert Săndulescu for providing me the opportunity to join his research group and for making everything possible during all these four years. His optimism and dedication to science are impressive, and I must thank him for his continuous and endless support and also for all the fruitful scientific and non-scientific discussions. I am grateful to Dr. Alain Walcarius, also my PhD supervisor, for receiving me in his research group and for his key contributions to my studies. I respect his contagious dedication to science. I dedicate this thesis to both my supervisors, hoping I have fulfilled their ambitions and expectations. Special thanks to Mr. and Mrs. Iuliu and Ana Marian for their endless support, encouragement, availability, understanding, and priceless lessons in science and life. I would also like to thank Dr. Cecilia Cristea and Dr. Mathieu Etienne for their help and guidance in the research lab, for their support and encouraging ideas. Special thanks to my colleagues Mihaela Tertiş and Luminiţa Fritea for their help, support, friendship, and hard work. I would also like to acknowledge the valuable contribution of all co-authors and collaborators from Romania and France for making this thesis possible, especially Tamara Topală, Dr. Emil Indrea, Dr. Cosmin Farcău, Ludovic Mouton, and Pierrick Durand. For the financial support I am grateful to the Agence Universitaire de la Francophonie and to UMF "Iuliu Haţieganu" for the research project POS-DRU 88/1.5/S/58965. And, of course, I thank my family for their encouragement and support throughout my life.

INTRODUCTION

Along with the progress in the industrial and technological fields, pollution has been one of the main concerns all over the world. Its impact on the environment has led over time to the development of different approaches to detect, prevent, or minimize its damaging effects. In this respect, electrochemistry offers a wide variety of techniques that aim to control, by different means, the impact of pollution on the living world. Heavy metals are among the most important soil and biological contaminants. The interest in their detection has therefore increased considerably in recent years. Heavy metals show high toxicity, they can accumulate in human, animal, and plant tissues, and they are not biodegradable, so their quantitative determination in different media is an issue of primary importance nowadays.
Electrochemistry is likely to cope with this high demand by offering new types of electrodes for the real-time detection of trace metal contaminants in natural waters and also in biological and biomedical samples. Clay-modified electrodes are mostly used for this type of application. If in past centuries clays were just used in cosmetics or to produce ceramics, in the last three decades they have attracted the interest of electrochemists due to their catalytic, adsorbent, and ion-exchange properties, which have been exploited in the development of chemical sensors. Moreover, the adsorption of proteins and enzymes on clay mineral surfaces has been intensively applied in biosensor fabrication. This research was directed towards the modification of different types of electrodes using indigenous Romanian clays for the development of sensors applied to the detection of heavy metals from matrices of biopharmaceutical and biomedical interest, and of biosensors for the detection of different pharmaceuticals. The studies presented here are mainly focused on electrochemical methods (voltammetry, differential pulse methods, anodic stripping) and ion-selective electrodes based on different types of clays, in order to improve the performance of existing devices and also to develop other methods for the determination of heavy metals in various matrices. The aims of this thesis are summarized as follows:

• To confirm the physico-chemical and structural characterization of the indigenous Romanian clays
• To develop new clay-modified electrodes by employing different methods to immobilize the clay at the electrode surface:
a. the embedment of the clay in carbon paste;
b. the incorporation of the clay in polymeric conductive films;
c. the immobilization of the clay using a semipermeable membrane;
d. the entrapment of the clay in a sol-gel matrix.
• To study the electrochemical behavior of the new electrode configurations and to apply them to trace heavy metal detection or to pharmaceutical analysis:
a. the analysis of acetaminophen, ascorbic acid, and riboflavin using clay-carbon paste-modified electrodes;
b. the development of a HRP/clay/PEI/GCE biosensor for acetaminophen detection;
c. the development of tetrabutylammonium-modified clay film electrodes covered with cellulose membranes for heavy metal detection;
d. the development of a copper(II) sensor using clay-mesoporous silica composite films generated by electro-assisted self-assembly.

Clay-modified electrodes

A "chemical sensor is a small device that, as the result of a chemical interaction or process between analyte and the sensor device, transforms chemical or biochemical information of a quantitative or qualitative type into an analytically useful signal" [Stetter]. A chemical sensor consists of two basic components: a chemical recognition system (receptor) and a transducer [2]. In the case of biosensors, the recognition system uses a biological mechanism instead of a chemical process. The role of the transducer is to transform the response measured at the receptor into a detectable signal. Due to their remarkable sensitivity, experimental simplicity, and low cost, electrochemical sensors are the most attractive chemical sensors reported in the literature. The signal detected by the transducer can be a current (amperometry), a voltage (potentiometry), or impedance/conductance changes (conductimetry) [Mousty].
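For orientation, the ideal potentiometric response follows the textbook Nernst relation (a standard result recalled here for context, not taken from this thesis):

\[
E = E^{0} + \frac{RT}{zF}\ln a_{i} \approx E^{0} + \frac{59.2\ \mathrm{mV}}{z}\log_{10} a_{i} \quad (25\ ^{\circ}\mathrm{C}),
\]

where z is the charge of the detected ion and a_i its activity; an amperometric sensor, in contrast, delivers a current directly proportional to the analyte concentration under mass-transport control.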
An important feature of chemically modified electrodes consists in their ability to preconcentrate the analyte into a small volume on the electrode, allowing lower concentrations to be measured than is possible in the absence of a preconcentration step (adsorptive stripping voltammetry) [2]. Among the wide range of electrode modifiers, clays have attracted the interest of electrochemists, in particular for their analytical applications [Navrátilová; Lee].

Clays - definition, classification, properties

The use of clay minerals as electrode modifiers is the result of electrochemists' desire to achieve a high-quality electrode surface with the required properties, but also the result of their three decades of effort to understand and control the processes that take place at the electrode surface. The clay minerals employed in this case belong to the class of phyllosilicates, layered hydrous aluminosilicates. Their layered structure comprises either one sheet of SiO4 tetrahedra and one sheet of AlO6 octahedra (1:1 phyllosilicates), or an Al-octahedral sheet sandwiched between two Si-tetrahedral sheets (2:1 phyllosilicates) [Navrátilová; Bedioui]. The role of the exchangeable cations (Na+, K+, NH4+, etc.), bound on the external surfaces of 1:1 phyllosilicates and also in the interlayer in the case of 2:1 phyllosilicates, is to balance the positive charge deficiency of the layers. An important characteristic of a clay mineral is the basal distance, which depends on the number of intercalated water molecules and exchangeable cations within the interlayer space. Other important properties of this structure are the relatively large specific surface, the ion-exchange properties, and the ability to adsorb and intercalate organic compounds. All these features recommend phyllosilicates, especially the group of smectites, for the preparation of clay-modified electrodes [Navrátilová].

There are two classes of clays: cationic clays, which have negatively charged aluminosilicate layers, and anionic clays, with positively charged hydroxide layers, the neutrality of these materials being ensured by ions (cations or anions, depending on the clay type) in the interlayer space that balance the charge [Mousty]. Cationic clays are among the most common minerals on the earth's surface, being well known for their applications in ceramic production, pharmacy, cosmetics, catalysts, adsorbents, and ion exchangers [Vaccari].
Smectite clays are mostly used in the development of electrochemical sensors due to their ability to incorporate ions by an ion-exchange process and also due to their adsorption properties. Hence, protein adsorption is intensively used in the development of biosensors [Gianfreda]. Due to its cation exchange capacity (CEC), typically between 0.80 and 1.50 mmol g-1, its anion exchange capacity being about four times lower, montmorillonite is the most used smectite. On the other hand, its thixotropy allows montmorillonite to be used as a stable and adhesive clay film. Besides smectites (montmorillonite, nontronite, hectorite), the literature describes some other clay minerals (vermiculite, kaolinite, sepiolite) which can be exploited for electrode modification [Navrátilová].

Clay-modified electrode preparation

The preparation method is the one that most influences the electrochemical behavior of CLMEs. Several techniques have been employed to cast the clay films, such as the slow evaporation of colloidal suspensions on electrode surfaces (i.e., platinum, glassy carbon, indium tin oxide, and screen-printed electrodes), physical adsorption (the most widely used technique due to its simplicity and ease), spin-coating of thin clay films, and clay-carbon paste-modified electrodes [Mousty]. A more sophisticated strategy uses silane linkages to couple the clay to the underlying electrode surface [Rong]. The Langmuir-Blodgett method has been applied to prepare thin films of clays on electrode surfaces [Hotta; Okamoto; He]. Another study reported a hybrid film of a chiral metal complex and clay, prepared by the Langmuir-Blodgett method for the purpose of chiral sensing [He]. The electrodeposition of kaolin and montmorillonite using rotating quartz crystal microbalance disk electrodes or indium tin oxide (ITO) was also realized [Shirtcliffe; Song].
Electrochemistry at clay-modified electrodes

Electron transfer at clay-modified electrodes (CLMEs) was extensively studied in the 1990s [Baker; Bard; Macha; Therias]. Poor charge transport is an important issue when using nonconductive solids to modify electrodes, and it is related either to physical diffusion in the channels and/or to charge hopping. In the case of clays, electroactive species can be accumulated within the clay layer at different places, but only a small fraction of these species display redox activity (≈10-30 %) (Figure 1). Mass transport processes through cationic and anionic clays were investigated by electrochemical quartz crystal microbalance measurements [Yao]. The results showed that the charge balancing during a redox reaction was accomplished by the leaching or insertion of mobile ions at the clay-solution interface. Taking into account that this phenomenon depends on the nature and the concentration of the electrolyte, modifications of the swelling properties of the clays can occur [Mousty].

The enhancement of charge transport within the clay film can be achieved by delamination processes, which give less ordered coatings and consequently allow access to the channels [Mousty]. Different strategies can be applied for this: the use of smaller-size particles (laponite instead of montmorillonite), pillaring agents (alumina, silicate) to obtain a porous clay heterostructure, or the intercalation of molecules (surfactants or polymers) [Carrero; Falaras]. Other methods describe the enhancement of electron hopping using electron relays, making use of the redox-active cation sites within the crystal lattice (i.e., iron, cobalt, or copper for cationic clays and nickel, cobalt or manganese for anionic clays) to transfer electrons from intercalated ions to the conductive substrate [Qiu; Xiang; Xiao].
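Charge propagation through such films is commonly summarized by an apparent diffusion coefficient combining the physical displacement of the electroactive ion with electron hopping between neighboring redox sites; a standard formulation is the Dahms-Ruff expression (a textbook relation quoted here for context, not a result of this thesis):

\[
D_{app} = D_{phys} + \frac{1}{6}\, k_{ex}\, \delta^{2} C,
\]

where k_ex is the self-exchange rate constant, δ the distance between adjacent redox sites and C their concentration in the film. Delamination, pillaring or electron relays, discussed above and below, act on one or the other term of this sum.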
Other possibilities for delivering charges consist of using a conductive polymer (polypyrrole) within the clay interlayer [Rudzinski; Faguy] or a composite conducting material (V2O5) [Anaissi]. In spite of all the inconveniences regarding electron transfer at CLMEs, many applications have been found for these modified electrodes, such as electrocatalysts, photocatalysts, sensors, and biosensors. The direct electrochemistry of heme proteins (i.e., cytochrome c, cytochrome P450, myoglobin (Mb), etc.) was reported at CLMEs [Lei; Sallez; Bianco; Scheller; Dai]. In most of these cases, the protein-clay films were prepared by depositing a certain concentration of protein onto the CLME, the heterogeneous electron transfer process between the protein and the electrode surface being facilitated by the clay modification of the electrode surface [Mousty]. It is stated that the interaction of the proteins with the clay particles did not seriously affect the heme Fe(III)/(II) electroactive group of the incorporated proteins.

Heavy metals are well known as important environmental and biological contaminants: they are not bio- or photodegradable, they accumulate in human, animal, and plant tissues, and they have highly toxic effects. Their density is 6.0 g/cm3 or more (much higher than the average particle density of soils, which is 2.65 g/cm3). Even though they occur naturally in rocks, their concentrations are frequently elevated as a result of contamination. Arsenic, cadmium, chromium, mercury, lead, and zinc are the most important heavy metals with regard to potential hazards and occurrence in contaminated soils [41].
Metal mining, metal smelting, metallurgical industries and other metal-using industries, waste disposal, corrosion of metals in use, agriculture and forestry, fossil fuel combustion, and sports and leisure activities are the main sources of heavy metal pollutants. Large areas worldwide are affected by heavy metal contamination. Heavy metal pollution is mostly present close to industrial sites, around large cities and in the vicinity of mining and smelting plants. Heavy metals can transfer into crops and subsequently into the food chain, which seriously affects agriculture in these areas [41].

The increasing requirement for efficient and real-time monitoring of the trace heavy metals that pollute the environment has led to the development of new detection methods and new specific sensors capable of performing in situ measurements with minimum disturbance of the natural system. The ion-exchange capacity and the ion-exchange selectivity of clays recommend them for the accumulation of charged electroactive analytes [Mousty]. Cation-exchanging clays are essentially used for sensor devices. The electroactivity of the adsorbed ions depends on the soaking time of the CLME in the analyte solution (accumulation time), on the nature and concentration of the analyte, on the mode of preparation of the CLME, on the electrolyte nature, etc. Smectite clays are frequently employed at CLMEs because they can serve as matrices for electroactive ions, as they are able to incorporate ions by an ion-exchange process, like polymeric ionomers [Mousty]. Due to its cation exchange capacity, typically between 0.80 and 1.50 mmol g-1, while its anion exchange capacity is about four times lower, montmorillonite is the most often used smectite [Mousty]. Carbon paste electrodes (CPEs) were generally used for the preconcentration of cationic heavy metals at CLMEs. The detection limits depend on the nature of the metal cations or on the clay species and are lower than the European norms required for drinking water [2].

Inorganic clay heavy metal detection sensors

The determination of inorganic analytes (metal ions in most cases) can be achieved through the ion-exchange reactions on the clay modifier [Navrátilová]. Thus, the procedure consists of an accumulation step under open-circuit conditions with subsequent voltammetric determination after medium exchange (Figure 2). The fact that the accumulation process is separated from the measurement step, so that optimum conditions can be found and applied for each of them, represents a great advantage of this procedure [Navrátilová]. Preconcentration conditions must be optimized owing to the ion-exchange and sorption reactions of clays and zeolites. After being removed from the preconcentration medium, the electrode is carefully washed with redistilled water before it is transferred into the measurement cell containing the background electrolyte. The composition and concentration of the background electrolyte have a great influence on the current response of the clay electrode [Navrátilová].
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] There is competition between electrolyte cations and the analyte for ion-exchange sites, while electrolyte anions can influence the ion-pairing mechanism. The electrolyte concentration can also affect the structure of the clay modifier. These aspects should therefore be taken into account during optimization of the analytical method. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The ion exchange capability of clays was first demonstrated using clay CPEs. The preconcentration of free Fe(III) cations into montmorillonite [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] takes place through the replacement of the exchangeable cations in the interlayer of the clay. A preconcentration model of Ag+ and Cu2+ on a vermiculite modified paste electrode under open circuit conditions [START_REF] Kalcher | The vermiculite-modified carbon paste electrode as a model system for preconcentrating mono and divalent cations[END_REF] was investigated on the basis of a simplified ion exchange process involving the negatively charged groups of the clay minerals. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The values obtained for the electrochemically determined concentration equilibrium constant of the Ag+ ion exchange corresponded with those evaluated by atomic absorption spectroscopy, so the model is expected to be applicable to other systems. The exchange of metal cations onto montmorillonite, vermiculite, and kaolinite was studied by repetitive CV on clay CPEs. [START_REF] Kula | Voltammetric Study of Clay Minerals Properties[END_REF] The metal sorption is reflected in the dependence of the current on time or on potential cycling; on this basis, individual clay minerals and metal cations can be distinguished, at least to a first approximation. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Copper(II) could be determined under open circuit conditions owing to its preconcentration by means of cation exchange. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] The exchange of other cations present in the electrolyte can compete with that of the analyte [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF][START_REF] Švegl | Vermicullite clay mineral as an effective carbon paste electrode modifier for the preconcentration and voltammetric determination of Hg(II) and Ag(I) ions[END_REF][START_REF] Navrátilová | Cation and anion exchange on clay modified electrodes[END_REF], and the formation of metal complexes must also be considered in this case (e.g., the exchange of the cationic complexes [Cu(ac)]+ occurred on montmorillonite and vermiculite 47). A vermiculite CPE was employed as a model for a soil-like phase and the binding interactions of Cu(II) ions with the mineral were studied. [START_REF] Švegl | A methodological approach to the application of a vermiculite modified carbon paste electrode in interaction studies: Influence of some pesticides on the uptake of Cu(II) from a solution to the solid phase[END_REF]
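As an illustration of the simplified ion-exchange model invoked in these preconcentration studies, the exchange of a divalent analyte cation M2+ with a Na+-saturated clay, and the corresponding concentration equilibrium (selectivity) constant, can be written in the generic textbook form below; this is a schematic formulation given for orientation, not the exact notation of the cited works (overbars denote clay-bound species):

\[ 2\,\overline{\mathrm{Na^{+}}} + \mathrm{M^{2+}} \rightleftharpoons \overline{\mathrm{M^{2+}}} + 2\,\mathrm{Na^{+}} \]
\[ K_{c} = \frac{[\overline{\mathrm{M^{2+}}}]\,[\mathrm{Na^{+}}]^{2}}{[\overline{\mathrm{Na^{+}}}]^{2}\,[\mathrm{M^{2+}}]} \]

The competition effects discussed above follow directly from this equilibrium: any electrolyte cation with a sufficiently high selectivity constant displaces the analyte from the exchange sites.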
When investigating the influence of selected pesticides on the Cu(II) uptake from solution to vermiculite, it was shown that the Cu(II) uptake depended significantly on the binding affinity of the pesticides to vermiculite, that is, on their ability to form coordination compounds with Cu(II) ions. The different effects that some substances of environmental importance have on metal ion uptake by the clay mineral were also described. The research was then extended by employing, for the first time, different soils in their native form as electrode modifiers in order to study soil-heavy metal ion interactions by means of a CPE. [START_REF] Švegl | Soil-modified carbon paste electrode: a useful tool in environmental assessment of heavy metal ion binding interactions[END_REF] The binding capabilities of the soils were examined in a model solution of Cu(II) ions, together with the correlation between copper ion accumulation and the standard soil parameters (ion exchange capacity, soil pH, organic matter, and clay content). [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The nature of the soil samples should be considered when developing an environmental sensor based on a soil modified CPE suitable for on-site testing of soils. [START_REF] Sallez | Electrochemical behavior of c-type cytochromes at claymodified carbon electrodes: a model for the interaction between proteins and soils[END_REF] For example, the soil used as a modifier should be fully characterized, because the response of soil modified electrodes strongly depends on the type of soil. [START_REF] Švegl | Soil-modified carbon paste electrode: a useful tool in environmental assessment of heavy metal ion binding interactions[END_REF] Renewal of the CPE surface is important for obtaining reproducible measurements. The carbon paste surface can easily be removed by polishing and a new surface prepared; it was shown that the standard deviation is 5% if the measurement is always performed on a mechanically renewed surface. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] Regeneration by washing the electrode with water, with measurements performed on a single surface during one day, has also been described. [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] In the case of Cu(II) determination on a vermiculite modified electrode [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF], a short-term regeneration step (0.05 M KCN for 2 min and 0.1 M HClO4 for 1 min) was applied after each voltammetric measurement, while a long-term preconditioning step (0.1 M HClO4 for 24 hours, 0.05 M KCN for 2 min and 0.1 M HClO4 for 1 min) was applied after mechanical renewal of the surface. The regenerated surface remained usable for one week without mechanically renewing the paste. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Another study described renewal of the surface by exposing the electrode to 0.2 M KCN in 2 M NH3 for 2 min.
[START_REF] Kalcher | The vermiculite-modified carbon paste electrode as a model system for preconcentrating mono and divalent cations[END_REF] During the preconcentration step, the ion exchange reaction takes place by replacing the cations initially present in the clay, as in the case of Fe(III). [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] The peak in the reduction current results from the reduction of the surface-bound iron, while the enrichment of iron in the carbon paste is indicated by the increase of the peak height at longer preconcentration times. Renewal of the electrode surface is needed if a smoothed electrode surface is to be reused. Bathing the electrode for 1 min in 0.1 M CH3COONa or 0.1 M Na2CO3 achieved 100% removal of iron from montmorillonite, although this regeneration step can be a limiting point of the procedure, because the stated time could be insufficient at higher concentrations of Fe(III). [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The method did not show interferences from coexisting metal ions, except that Mg(II) and Al(III) in a 2-fold excess caused a 32% and a 100% decrease of the current response, respectively. This can be explained by the ability of high-valence cations to replace exchangeable cations in clays. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Historically, the greatest attention has been given to copper(II) determination by means of clay CPEs. The first works reported the voltammetric response of zeolite modified CPEs for Cu(II). [START_REF] Shaw | Carbon composite electrodes containing alumina, layered double hydroxides, and zeolites[END_REF][START_REF] El Murr | The zeolite-modified carbon paste electrode[END_REF] However, the corresponding detection limits (about 10^-4 mol L^-1) were not low enough for practical use, and regeneration of the electrode surfaces was not an easy step. Over time, Cu(II) determination methods based on clay modified CPEs achieved better limits of detection. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Both the vermiculite CPE [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] and the montmorillonite modified CPE 45 used the anodic current response of the accumulated Cu(II). In the first method, the clay CPE was prepared using the dry clay modifier, so the activation of the electrode surface complicated the measurement. [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] The electrode was exposed to 0.1 M HClO4 for 24 h, which can damage the electrode surface, even though the authors do not state this. The electrode was treated with 0.05 M KCN for 2 min and with 0.1 M HClO4 for 1 min before use. A short-term (3 min) regeneration step in 0.05 M KCN and 0.1 M HClO4 was needed to remove the copper residue from the vermiculite, given that the electrode surface was used for at least one week without mechanical renewal of the paste.
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] This step can also be an inconvenience in the Cu(II) determination. Nevertheless, the method was promising, with a relative error of 2% for ppm-level Cu(II) concentrations. When the wetted modifier was used, no activation of the electrode surface was necessary in the case of the montmorillonite modified CPE. No regeneration step was required in this method, since the paste was regularly removed mechanically and a newly smoothed surface was used for each measurement. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] The limit of detection achieved by this method was ten times higher than that of the vermiculite CPE, although the area of the montmorillonite CPE was 20 times smaller than that of the vermiculite electrode. Some interferences were present for both methods: bivalent metal cations in a 100-fold excess (Pb, Hg, Cd, Zn) [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF][START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF], and the cations Fe(II, III), Co(II), Ni(II), Mn(II), and Bi(III) in a 200-fold excess [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF], while surfactants such as Triton X-100, butylammonium bromide, and methylammonium bromide did not perturb the copper signal even in a 1000-fold excess. [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] On the contrary, the Cu(II) current response decreased by 50% in a 4-fold excess of humic ligands. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] Thus, when vermiculite and montmorillonite CPEs are employed for Cu(II) determination, the methods are sensitive enough, while their selectivity remains questionable. However, the different selectivities of Cu(II) and Zn(II) towards nontronite are expected to eliminate the interference of Cu(II) in the determination of Zn(II) in their mixture. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The nafion/nontronite modified mercury film electrode proved suitable for this purpose [START_REF] Zen | Disposable claycoated screen-printed electrode for amitrole analysis[END_REF], as it exhibited square-wave stripping peak currents linear over the ranges 0-5 and 6-12 µM Zn(II) in the presence of 10 µM Cu(II). Monovalent silver and divalent mercury could be determined using a vermiculite modified CPE. [START_REF] Švegl | Vermicullite clay mineral as an effective carbon paste electrode modifier for the preconcentration and voltammetric determination of Hg(II) and Ag(I) ions[END_REF] Preconditioning in this case was achieved by exposing the modified electrode to 0.1 M HClO4 for 24 h. Electrode regeneration was performed in 0.1 M KCN for 2 min, followed by activation with 0.1 M HClO4 for 2 min before each measurement. The limits of detection were around 10^-8 mol/L for both silver and mercury.
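Detection limits such as those quoted in this section are conventionally estimated from the calibration line as three times the blank standard deviation divided by the slope. The short Python sketch below illustrates the calculation; all numerical values are invented for the example and are not taken from the cited works:

    import numpy as np

    # Hypothetical calibration data: concentration (mol/L) vs. stripping peak current (µA)
    conc = np.array([2e-8, 5e-8, 1e-7, 2e-7, 5e-7])
    current = np.array([0.21, 0.52, 1.05, 2.08, 5.11])

    # Least-squares calibration line: i = slope * c + intercept
    slope, intercept = np.polyfit(conc, current, 1)

    # Standard deviation of replicate blank measurements (µA)
    blank = np.array([0.010, 0.013, 0.008, 0.011, 0.012])
    sigma_blank = blank.std(ddof=1)

    # 3-sigma limit of detection
    lod = 3 * sigma_blank / slope
    print(f"slope = {slope:.3e} µA L/mol, LOD = {lod:.2e} mol/L")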
The anionic complexes HgCl4^2- and HgCl3^- could be determined by means of a montmorillonite modified CPE at the same concentration level. [START_REF] Kula | Sorption and determination of Hg(II) on clay modified carbon paste electrodes[END_REF] Chlorocomplexes were also used to determine gold on the montmorillonite modified CPE [START_REF] Navratilova | Determination of gold using clay modified carbon paste electrode[END_REF][START_REF] Kula | Anion exchange of gold chloro complexes on carbon paste electrode modified with montmorillonite for determination of gold in pharmaceuticals[END_REF], the determination of gold in pharmaceuticals also being possible with this method. 100 A sodium modified montmorillonite was applied to cadmium sorption from nitrate solutions in order to simulate a cadmium polluted clay mineral. The cadmium concentration adsorbed by the clay was analyzed at equilibrium by means of differential pulse polarography, showing a Freundlich sorption profile. After the cadmium sorption procedure, the clay mineral was included in a carbon paste in order to investigate the cadmium content by voltammetric determination. A linear response of the CPE was observed in the 5.0×10^-5 to 1.8×10^-4 mol g^-1 range, with good reproducibility. [START_REF] Marchal | Determination of cadmium in bentonite clay mineral using a carbon paste electrode[END_REF] A selective analysis of copper(II) was also achieved at a carbon microelectrode coated with monolayers of laponite clay and polythiophene, using an originally developed double-step voltcoulometry. [START_REF] Barančok | Surface modified microelectrodes for selective electroanalysis of metal ions in environmental components[END_REF] The Langmuir-Blodgett technique was used for the surface modification of the electrodes, and a detection limit of 5 mg L^-1 was reached. The authors discuss the characteristic features of the "memory effect" of the laponite coating. [START_REF] Barančok | Surface modified microelectrodes for selective electroanalysis of metal ions in environmental components[END_REF] When the effect of montmorillonite on a calcium ion electrode was tested, by combining and immobilizing an ionophore and montmorillonite in a polymer membrane, it was shown that the montmorillonite-modified electrode exhibited higher performance than the one without montmorillonite. [START_REF] Wang | Immobilized ionophore calcium ion sensor modified by montmorillonite[END_REF] Good accumulation ability for the [Cu(NH3)4]2+ complex ion via the ion-exchange ability of nontronite was obtained when testing a nontronite/cellulose acetate-coated GCE for the preconcentration and electroanalysis of Cu2+ in ammoniacal medium by square wave voltammetry (SWV). The cellulose acetate permselective membrane was applied to strengthen the mechanical stability of the nontronite coating on the GCE and to prevent interference from surface-active compounds. The detection limit obtained was 1.73 ppb in pH 10 ammonia solution. The practical application of the developed sensor was illustrated by the measurement of Cu2+ in tap water, groundwater, and pond water. [START_REF] Zen | Multianalyte sensor for the simultaneous determination of hypoxanthine, xanthine and uric based on a preanodized nontronite-coated screen-printed electrode[END_REF] An electrodeposition method was used to develop a CPE modified with montmorillonite and covered with a mercury film.
In this case, both in situ electrodeposition and preliminary electrodeposition were applied to deposit the mercury film. Anodic stripping of metals deposited on Hg film electrodes is said to be a complicated process, limited by factors such as the time and potential of the metal electrodeposition, while the accumulation and stripping of the metals depend on the nature of the metal. By using open-circuit sorption of Cd, Pb, and Cu with subsequent anodic stripping voltammetry, higher current responses were obtained. Besides the enhanced sensitivity, superior separation of the current responses during simultaneous stripping of the metals was achieved. [START_REF] Navrátilová | Electrodeposition of mercury film on electrodes modified with clay minerals[END_REF] Ag(I) could be detected using two natural clays (kaolinite and montmorillonite) deposited onto a platinum electrode surface, by two deposition techniques and under different experimental variables. For both clays, complete coverage of the electrode surface was achieved using the spin-coating technique. The detection limit for Ag(I) ions was as low as 10^-10 M. [START_REF] Issa | Deposition of two natural clays on a Pt surface using potentiostatic and spin-coating techniques: a comparative study[END_REF] Recently, the simultaneous determination of trace amounts of Pb2+ and Cd2+ with montmorillonite-bismuth-carbon electrodes was developed. The method was applied to determining the Pb2+ and Cd2+ contents of real water samples by square wave anodic stripping voltammetry. The detection limits were 0.2 μg L^-1 for Pb2+ and 0.35 μg L^-1 for Cd2+. [START_REF] Luo | Voltammetric determination of Pb 2+ and Cd 2+ with montmorillonite-bismuth-carbon electrodes[END_REF] A different approach to heavy metal detection was proposed by investigating the feasibility of amperometric sucrose and mercury biosensors based on the immobilization of invertase, glucose oxidase, and mutarotase entrapped in a clay matrix (laponite). Platinum electrodes were used to deposit the enzyme-clay gel crosslinked with glutaraldehyde (GA). The inhibition of the invertase activity enabled the measurement of the mercury concentration. It was shown that the use of the clay matrix, a cationic exchanger, for the invertase immobilization allows the accumulation of metal cations in the vicinity of the enzyme, enhancing the inhibition effect, though at the cost of a decrease in the biosensor recovery. The biosensor inhibition by inorganic and organic mercury was evaluated, and good selectivity towards mercury was observed when studying interferences with other metal ions. Under optimized conditions, Hg(II) was determined in the concentration range of 10^-8 to 10^-6 M. 64
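For enzyme-inhibition biosensors of this kind, the analytical signal is conventionally the degree of inhibition computed from the steady-state currents measured before (I0) and after (I) incubation with the inhibitor; the relation below is the generic definition, not a formula reproduced from the cited work:

\[ I\,(\%) = \frac{I_{0} - I}{I_{0}} \times 100 \]

The Hg(II) concentration is then read from a calibration of the inhibition degree against the logarithm of the inhibitor concentration.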
Organo-clay heavy metal detection sensors

Organo-clays have found applications in the treatment of wastewaters contaminated with several inorganic cations. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] 'Organo-clays' are, in fact, clay materials with enhanced sorption capacity obtained by introducing suitable organic molecules. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] An important feature here is that, depending on the nature of the organic molecule introduced, the surface of the clay becomes more hydrophobic or more hydrophilic. [START_REF] Yariv | Organo-clay complexes and interactions[END_REF] The organic molecule incorporated relates closely to the specific purpose of the modification; the organic functionalities therefore contain selective functional groups with high affinity towards the target species. In this perspective, organosilanes intercalated into clays for Hg2+ detection 66 demonstrated that, besides the enhanced uptake of this cation at the surface charge sites of the clays, the intercalated functional groups were also involved in the binding of the metal ion, which accounted for the enhanced uptake. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] In a similar way, the enhanced removal of Hg(II) from aqueous solutions using clay materials grafted with 1,3,4-thiadiazole-2,5-dithiol 67 or 3-mercaptopropyl 68 groups has also been described. A CPE modified with a natural 2:1 phyllosilicate clay functionalized with either amine or thiol groups was also evaluated as a sensor for Hg(II). The functionalization was achieved by grafting the pristine clay via its reaction with 3-aminopropyltriethoxysilane or 3-mercaptopropyltrimethoxysilane. The electroanalytical procedure followed two steps: chemical accumulation of the analyte under open-circuit conditions, and electrochemical detection of the preconcentrated species using differential pulse anodic stripping voltammetry (DPASV). The detection limits were 8.7•10^-8 and 6.8•10^-8 M for the amine- and thiol-functionalized clays, respectively. The authors suggest that the sensor can be useful as an alerting device in environmental monitoring or for rapid screening of areas polluted with mercury species. [START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF] Na montmorillonite was rendered organophilic by exchanging the inorganic interlayer cations for hexadecyltrimethylammonium ions (HDTA). Based on the high affinity of an organo-clay for non-ionic organic molecules, 1,3,4-thiadiazole-2,5-dithiol was loaded on the HDTA-montmorillonite surface, resulting in the 1,3,4-thiadiazole-2,5-dithiol-HDTA-montmorillonite complex, which has been shown to be an effective selective solid-phase sorbent for Hg(II) that can also be applied in the preparation of a chemically modified CPE. The detection limit was estimated as 0.15 µg L^-1. [START_REF] Dias Filho | Study of an organically modified clay: Selective adsorption of heavy metal ions and voltammetric determination of mercury(II)[END_REF] A lower detection limit for Hg(II) was obtained using thiol-functionalized porous clay heterostructures, prepared by intragallery assembly of mesoporous organosilica in a natural smectite clay. The electrode assembly consisted of a surfactant-directed co-condensation of tetraethoxysilane (TEOS) and 3-mercaptopropyltrimethoxysilane (MPTMS), at various MPTMS/TEOS ratios, in the interlayer region of the clay, deposited as thin films onto the surface of GCEs and applied to the voltammetric detection of Hg(II) subsequent to open-circuit accumulation. These films displayed attractive features, such as fast diffusion rates, and thus high sensitivity and good mechanical stability, owing to the layered morphology.
[START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF] The use of clays and humic acids to simulate the soil clay-organic complex (clay humate) represents another approach to understanding the soil processes related to the transport and accumulation of heavy metals. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The clay-humate modified CPE was employed to study the clay humate system and its reaction with copper in comparison with the clays alone [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF], and it was concluded that the Cu(II) sorption was significantly decreased in the case of montmorillonite-humate. Organic molecules with different functional groups (-NH2, -COOH, -SH, or -CS2) have been introduced within the interlayer space of montmorillonite using organic compounds such as hexamethylenediamine, 2-(dimethylamino)ethenethiol, 5-aminovaleric acid, and hexamethylenediamine-dithiocarbamate. [START_REF] Stathi | Physicochemical study of novel organoclays as heavy metal ion adsorbents for environmental remediation[END_REF] These organo-intercalated montmorillonite samples showed an increased uptake capacity for cations (i.e., Cd2+, Pb2+ and Zn2+). The suggested mechanism was that the increase in the interlayer space (between 0.3-0.4 nm) occurred due to the intercalation of the organic compound, which allowed easier access of the M2+ and M(OH)+ species to the intercalated organic compound. Meanwhile, the strong binding of the M(OH)+ species to the organic compounds contributed additionally to the metal uptake, as proved by the binding constants obtained. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] A voltammetric sensor based on a chemically modified bentonite-porphyrin CPE has been introduced for the determination of trace amounts of Mn(II) in wheat flour, wheat rice, and vegetables. The method showed good Mn(II) responses in the pH range 3.5-7.5 and a detection limit of 1.07•10^-7 mol L^-1 Mn(II). The authors showed that a 1000-fold excess of additive ions did not interfere with the determination of Mn(II). [START_REF] Rezaei | A selective modified bentonite-porphyrin carbon paste electrode for determination of Mn(II) by using anodic stripping voltammetry[END_REF] By combining the ion exchange capability of Na montmorillonite with the electron-conducting function of an anthraquinone, a sensitive electrochemical technique for the simultaneous determination of trace levels of Pb2+ and Cd2+ by DPASV was developed. This method used a non-electrolytic preconcentration, via an ion exchange mechanism, followed by an accumulation period, via complex formation in the reduction stage at -1.2 V, followed by an anodic stripping process. The detection limits were 1 and 3 nM for Pb2+ and Cd2+, respectively. This method found application in the detection of trace levels of Pb2+ and Cd2+ in milk powder and lake water samples. [START_REF] Yuan | Simultaneous determination of cadmium (II) and lead (II) with clay nanoparticles and anthraquinone complexly modified glassy carbon electrode[END_REF] A low-cost electrochemical sensor for lead detection was developed using an organoclay obtained by the intercalation of 1,10-phenanthroline within montmorillonite.
The results showed that the amount of accumulated Pb(II) increased with increasing accumulation time and remained constant after saturation. The limit of detection for lead was in the sub-nanomolar range (4•10^-10 M), and there was no interference from copper at concentrations of 0.1×[Pb(II)]. [START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF]

Amperometric biosensors based on clays applied in pharmaceutical and biomedical analysis

Enzyme modified electrodes are considered the most popular and reliable kind of biosensors. A crucial issue in the commercial development of biosensors is the stable immobilization of an enzyme on an electrode surface, with complete retention of its biological activity and good diffusional properties for substrates. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] The various enzyme immobilization methods reported in the literature include cross-linking of proteins by bifunctional reagents, covalent binding, and entrapment in a suitable matrix. Due to their hydrophilicity, swelling behavior, and porosity, clays occupy a privileged place among all the inorganic and organic matrices described. They have therefore been used as host matrices for enzymes (first generation biosensors), as a means of concentrating different redox mediators (second generation biosensors), then as mediators themselves (clay and LDH for electron transfer), and finally for direct enzyme regeneration (third generation biosensors).

Oxygen based biosensors (first generation)

In the first generation of biosensors, the flavin-adenine dinucleotide (FAD) component of glucose oxidase (GOX) is converted into FADH2 during the glucose oxidation process. After glucose oxidation, FADH2 is reconverted to FAD in the presence of oxygen (Figure 3A). Besombes et al. [START_REF] Besombes | Improvement of analytical characteristic of an enzyme electrode for free and total cholesterol via laponite clay additives[END_REF][START_REF] Besombes | Improvement of poly(amphiphilic pyrrole) enzyme electrodes via the incorporation of synthetic laponite-clay-nanoparticles[END_REF] have shown that the analytical performance of amperometric biosensors based on polyphenol oxidase (PPO) and cholesterol oxidase (CO) can be much improved by the incorporation of laponite particles within an electrogenerated polypyrrole matrix. A similar study showed that a laponite matrix improved the analytical characteristics and the long-term stability of biosensors compared with the corresponding biosensors obtained simply by chemical crosslinking of GOX or PPO on the electrode surface. [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Shan | A new polyphenol oxidase biosensor mediated by Azure B in laponite clay matrix[END_REF] An inexpensive, fast, and easy method for the elaboration of enzyme electrodes is the entrapment of biomolecules in a clay matrix, which consists of the adsorption of an enzyme/clay aqueous colloid mixture onto the electrode surface. 2 However, slow release of the enzyme into solution can considerably reduce the biosensor lifetime.
This problem was overcome by using enzyme cross-linking agents, such as glutaraldehyde (GA) [START_REF] Poyard | A new method for the controlled immobilization of enzyme in inorganic gels (laponite) for amperometric glucose biosensing[END_REF], polymethyl methacrylate, or poly(o-phenylenediamine) [START_REF] Shyu | Characterization of iron containing clay modified electrodes and their applications for glucose sensing[END_REF], which were added to the enzyme/clay coating film. 2 GOX 80 was immobilized by the clay sandwich method. A polycationic organosilasesquioxane laponite clay matrix was adopted by Coche-Guérente et al. [START_REF] Coche-Guérente | Characterization of organosilasesquioxaneintercalated-laponite-clay modified electrodes and (bio)electrochemical applications[END_REF][START_REF] Coche-Guérente | Amplification of amperometric biosensor responses by electrochemical substrate recycling: Part II. Experimental study of the catechol-polyphenol oxidase system immobilized in a laponite clay matrix[END_REF][START_REF] Coche-Guérente | Amplification of amperometric biosensor response by electrochemical substrate recycling. 3. Theoretical and experimental study of phenol-polyphenol oxidase system immobilized in laponite hydrogels and layer-by-layer selfassembled structures[END_REF] for biosensor development, and the group proposed an enzymatic kinetic model for the PPO amplification process. The amperometric detection of glucose has been studied intensively, mainly because of the physiological importance of this analyte, the stability of glucose oxidase, and the diversity of applicable sensing methods. The oxidizing agent used by the GOX electrode is molecular oxygen, and the amperometric detection of glucose can be carried out via the electrooxidation of the enzymatically generated H2O2 at a Pt electrode: 2

glucose + O2 --(GOX)--> gluconic acid + H2O2
H2O2 → 2H+ + O2 + 2e-

Interferents like ascorbic acid and uric acid, which are commonly present in biological fluids, can also be oxidized at the high polarizing voltage applied (Eapp ≈ 0.6-0.8 V), leading to nonspecific signals. 2 To overcome this problem, Poyard et al. [START_REF] Poyard | Optimization of an inorganic/bio-organic matrix for the development of new glucose biosensor membranes[END_REF][START_REF] Poyard | Association of a poly(4vinylpyridine-costyrene) membrane with an inorganic/organic mixed matrix for the optimization of glucose biosensors[END_REF] used clay/semipermeable-polymer composite electrodes to decrease the permeability to interfering organic compounds. Since hydrogen peroxide can also be reduced by redox mediators immobilized within the CLME, another possibility consists in using redox mediators to decrease the electrode potential. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]
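The steady-state response of such enzyme electrodes is commonly described by a Michaelis-Menten-type dependence on the substrate concentration; the expression below is the generic textbook relation for immobilized-enzyme electrodes, quoted for illustration rather than taken from the cited works:

\[ I_{ss} = \frac{I_{max}\,[S]}{K_{M}^{app} + [S]} \]

where [S] is the glucose concentration, I_max the current under substrate saturation, and K_M^app the apparent Michaelis constant of the immobilized enzyme; the linear range of the calibration corresponds to [S] << K_M^app.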
Based on this mediator concept, cationic methylviologen (MV2+) [START_REF] Zen | A glucose sensor made of an enzymatic clay modified electrode and methyl viologen mediator[END_REF], ruthenium complexes [START_REF] Shyu | Characterization of iron containing clay modified electrodes and their applications for glucose sensing[END_REF][START_REF] Ohsaka | A new amperometric glucose sensor based on bilayer film coating of redox-active clay film and glucose oxidase enzyme film[END_REF] associated with structural iron cations, or underlying TiO2 films [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF] were employed in the development of CLME based glucose biosensors. A particular example of an amperometric biosensor for glucose detection is the GOX/luminol/clay electrode, which works by means of electrochemiluminescence. 87 A biosensor for phosphate detection has been fabricated by the coimmobilization of three enzymes (GOX, maltose phosphorylase, and mutarotase) with complementary activities. [START_REF] Mousty | Trienzymatic biosensor for the determination of inorganic phosphate[END_REF] The amperometric detection in this case corresponded to H2O2 oxidation. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Since a large percentage of environmental pollutants are known to act as enzyme inhibitors, a series of sensors has been developed based on the measurement of this property. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] In this way, a novel hypoxanthine sensor based on xanthine oxidase immobilized within a polyaniline film was developed by Hu et al. [START_REF] Hu | Biosensor for detection of hypoxanthine based on xanthine oxidase immobilized on chemically modified carbon paste electrode[END_REF] Detection in this case is based on the oxygen consumed by the enzymatic reaction, measured at a montmorillonite-MV2+ carbon paste-modified electrode. 2

Mediator based biosensors (second generation)

In the second generation of biosensors, O2 was replaced as the oxidizing agent by redox mediators (e.g., tetrathiafulvalene or dopamine). [START_REF] Lei | Hydrogen peroxide sensor based on coimmobilized methylene green and horseradish peroxidase in the same montmorillonite-modified bovine serum albuminglutaraldehyde matrix on a glassy carbon electrode surface[END_REF][START_REF] Zen | A selective voltammetric method for uric acid and dopamine detection using clay-modified electrodes[END_REF][START_REF] Zen | An enzymatic clay modified electrode for aerobic glucose monitoring with dopamine as mediator[END_REF] The amperometric measurement of glucose concentration (Figure 3B) therefore reflects the current produced by the reoxidation of the electron transfer agents at the electrode surface. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] In a similar way, bienzymatic configurations, consisting of GOX, horseradish peroxidase (HRP), and redox mediators coated on a CLME, could be applied to the detection of glucose at 0.0 V with good sensitivity, without interferences, and preventing any possible oxidation of ascorbate and urate.
[START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF] Laponite in particular has been used as an immobilization matrix for several other oxidase enzymes on CLMEs, and the resulting biosensors have been used to detect NADH, lactate, and ethanol, and also a wide variety of water pollutants, such as cyanide. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF][START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Shan | A new polyphenol oxidase biosensor mediated by Azure B in laponite clay matrix[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF][START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF][START_REF] Cosnier | An original electroenzymatic system: flavin reductase-riboflavin for the improvement of dehydrogenase-based biosensors. Application to the amperometric detection of lactate[END_REF][START_REF] Shan | A composite poly azure B-Clay-enzyme sensor for the mediated electrochemical determination of phenols[END_REF][START_REF] Shan | Layered double hydroxides: an attractive material for electrochemical biosensor design[END_REF][100] The biosensors achieved by Cosnier's group were based on laponite-electrogenerated polymer composites. [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF][START_REF] Cosnier | An original electroenzymatic system: flavin reductase-riboflavin for the improvement of dehydrogenase-based biosensors. Application to the amperometric detection of lactate[END_REF] Hydrogenase 101 was immobilized by the clay sandwich method, and bovine serum albumin plus GA [START_REF] Lei | Hydrogen peroxide sensor based on coimmobilized methylene green and horseradish peroxidase in the same montmorillonite-modified bovine serum albuminglutaraldehyde matrix on a glassy carbon electrode surface[END_REF] were used as enzyme cross-linking agents. The redox mediators methylene green, poly(3,4-dihydroxybenzaldehyde), and 2,2'-azinobis-3-ethylbenzothiazoline-6-sulfonate were co-immobilized within the clay matrix as electron shuttles between the redox center of HRP and the electrode. [START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF]102
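The mediated HRP pathway underlying these H2O2-sensing layers can be summarized by the generic reaction sequence below (a textbook sketch of second-generation peroxidase electrodes, with Med denoting the co-immobilized mediator, not a scheme reproduced from the cited works):

\[ \mathrm{H_2O_2 + HRP_{red} \rightarrow HRP_{ox} + H_2O} \]
\[ \mathrm{HRP_{ox} + Med_{red} \rightarrow HRP_{red} + Med_{ox}} \]
\[ \mathrm{Med_{ox} + e^{-} \rightarrow Med_{red}} \] (at the electrode)

The cathodic current of mediator re-reduction is then proportional to the H2O2 concentration at low substrate levels.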
A biosensor configuration based on the entrapment of diaphorase or dehydrogenase within a laponite gel containing methylene blue (MB) as an electropolymerized mediator for NADH oxidation has also been described. [START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF] The presence of poly(MB) in the biolayer allowed electron transfer communication between the enzymes and the electrode surface, and the poly(MB)/dehydrogenase/laponite-modified electrode was successfully applied to the electroenzymatic detection of lactate and ethanol via the mediated oxidation of NADH at 0 V. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] A comparative study described the properties of two different biosensors based on the immobilization of PPO within two different kinds of clay matrices: one cationic (laponite) and the other anionic (layered double hydroxides, LDHs). [START_REF] Shan | Layered double hydroxides: an attractive material for electrochemical biosensor design[END_REF] Owing to the high permeability and to the structure and charge of the particles of the LDH-based biosensors, the PPO/[Zn-Al-Cl] LDH biosensor showed remarkable properties such as high sensitivity and good storage stability. 2

Directly coupled enzyme electrodes (third generation)

The enzymatic detection of hydrogen peroxide has been reported either at mediated HRP-CLMEs or by direct enzyme regeneration at the CLME (third generation, Figure 3C). [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] The synthesis of carbon nanotubes (CNTs) on clay mineral layers and the preparation of an H2O2 sensor based on a CNT/Nafion/Na-montmorillonite composite film for the detection of H2O2 were achieved, and the developed sensor showed high sensitivity and applicability. 103 Step-by-step self-assembly was employed to incorporate HRP into a laponite/chitosan-modified glassy carbon electrode (GCE). The self-assembled enzyme maintained its biological activity and showed excellent electrocatalytic performance for the reduction of H2O2, with a fast amperometric response (10 s), a broad linear range, good sensitivity, and a low detection limit. 104 The direct electron transfer of heme proteins (HRP, myoglobin, hemoglobin, cytochrome c) has found applications in sensors for the determination of H2O2, as well as nitrite, NO, and trichloroacetic acid. [105][106][107][108][109] The direct electron transfer of Mb was realized by immobilizing Mb onto an ionic liquid-clay composite film modified GCE. The composite was biocompatible and promoted the direct electron transfer between Mb and the electrode. The reduction of hydrogen peroxide demonstrated the biocatalytic activity of Mb in the composite film. [START_REF] Dai | Direct electrochemistry of myoglobin based on ionic liquid-clay composite films[END_REF] A nanocomposite matrix based on chitosan/laponite was employed to construct an amperometric glucose biosensor.
It is stated that GOX immobilized in this material maintained its activity well, since the use of GA was avoided. 110 Considering that most amperometric glucose biosensors need stabilization membranes to prevent enzyme release and to improve selectivity, a comparative study between some of the most common membranes and two more innovative systems, based on Ti and Pd hexacyanoferrate hydrogels, was carried out on electrodes modified with a Ni/Al hydrotalcite (HT) electrochemically deposited on Pt surfaces together with the GOX enzyme. The results showed that the Pd hexacyanoferrate hydrogel was the best system in terms of sensitivity and selectivity, and it allowed the determination of glucose levels in bovine serum samples in full agreement with those declared in the analysis certificate. 111 The direct electron transfer between GOX and the underlying GCE could be achieved using colloidal laponite nanoparticles as the immobilization matrix. Exploiting the decrease of the oxygen electrocatalytic signal, the laponite/GOX/GCE was successfully applied to reagentless glucose sensing at -0.45 V. The electrode exhibited a fast amperometric response (8 s), a low detection limit (1.0•10^-5 M), and very good sensitivity and selectivity. 112 An amperometric biosensor for phenol determination was fabricated based on a chitosan/laponite nanocomposite matrix. The composite film enabled the PPO immobilization on the surface of a GCE. The role of the chitosan was to improve the analytical performance of the pure clay-modified bioelectrode. The biosensor had good affinity for its substrate, high sensitivity to catechol, and remarkable long-term storage stability (it retained 88% of its original activity after 60 days). 113 A comparison between four amperometric biosensors for phenol, prepared by immobilizing tyrosinase in organic and inorganic matrices, has also been reported. In this case, the enzyme was entrapped within two organic matrices (polyacrylamide microgels and polyvinylimidazole). On the other hand, the enzyme was adsorbed on two inorganic matrices (laponite clay and a calcium phosphate cement called brushite) and subsequently cross-linked with GA. Phenolic compounds were detected in aqueous and organic media by direct electrochemical reduction of the enzymatic product, o-quinone, at -0.1 V versus the saturated calomel electrode (SCE). The authors attributed the large differences found in biosensor performance to the environment surrounding the enzyme, or to the biomaterial layers used in the fabrication of the biosensor. The best results in the detection of monophenols and catechol in aqueous solution were achieved with the sensors based on inorganic matrices. 114 The immobilization of lactate oxidase on a GCE modified with laponite/chitosan hydrogels for the quantification of L-lactate in alcoholic beverages and dairy products has also been described. The study used ferrocene-methanol as an artificial mediator and aimed to determine the best hydrogel composition from the analytical point of view. 115

Mesoporous silica materials and their applications in electrochemistry

Silica-based mesoporous materials have had a great impact on electrochemistry research in the past two decades, especially owing to the tremendous development in the field of mesoporous materials. 116 Ordered mesoporous silicas (MPS) present attractive intrinsic features which have enabled their application in electrochemical science.
Being the first reported class of mesoporous materials, ordered MPS were usually obtained by inorganic polymerization in the presence of a liquid-crystal-forming template (an ionic or non-ionic molecular surfactant or a block copolymer). These materials are in fact amorphous solids with cylindrical mesopores ranging from 20 to more than 100 Å in diameter, spatially organized into periodic arrays that often mimic the liquid crystalline phases exhibited by the templates. [117][118][119][120] The removal of the hybrid mesophases can be achieved by calcination or extraction. This process gives rise to stable mesoporous materials with extremely high surface area (up to 1400 m^2 g^-1), mesopore volume greater than 0.7 ml g^-1, and narrow pore size distribution. 121 In the large field of chemically modified electrodes, researchers have recently included sol-gel materials [121][122][123][124][125] as inorganic modifiers (along with clays [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF],126 or zeolites 127,128) as a result of their need to couple the intrinsic properties of a suitable modifier to a particular redox process. More recently, following the developments in nanoscience and nanotechnology, nanomaterials have found applications in electrochemistry, in particular to build nanostructured electrodes with improved performance. [129][130][131][132] The most important features that recommend nanomaterials for applications in electrochemistry are: (i) large surface areas (a high number of surface-active sites, an ideal support for the immobilization of suitable reagents); (ii) widely open and interconnected porous structures (fast mass transport); (iii) good conductivity and intrinsic electrocatalytic properties (for noble metals and carbons); and (iv) high mechanical stability owing to their multidimensional structure. 116 The two approaches used to generate nanoarchitectures on electrode surfaces are: (i) the assembly of one-dimensional nanostructures (nanoparticles, nanowires, or nanorods) into functional 2D or 3D architectures 133 or nanoparticulate films, 134 and (ii) the preparation of single-phase continuous ordered and porous mesostructures from supramolecular template assemblies and their integration into electrochemical devices. 116

The sol-gel process

Silica-based materials are robust inorganic solids displaying a high specific surface area and a three-dimensional structure made of highly open spaces interconnected to each other via SiO4 tetrahedra, generating highly porous structures. 116 Generally, the sol-gel process involves the following steps:

• Hydrolysis

The preparation of a silica glass begins with an appropriate alkoxide, such as Si(OR)4 (R = CH3, C2H5, or C3H7), which is mixed with water and a mutual solvent to form a solution. Hydrolysis leads to the formation of silanol groups (SiOH). The presence of H3O+ in the solution increases the rate of the hydrolysis reaction. 121

• Condensation

In a condensation reaction, two partially hydrolyzed molecules link together through the formation of siloxane bonds (Si-O-Si). This type of reaction can continue to build larger and larger silicon-containing molecules (linkage of additional Si-OH groups) and eventually results in an SiO2 network. The H2O (or alcohol) expelled by the reaction remains in the pores of the network. When sufficient interconnected Si-O-Si bonds are formed in a region, they respond cooperatively as colloidal (submicrometer) particles or a sol. 121
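The hydrolysis and condensation steps just outlined can be summarized by the usual textbook reaction scheme (a generic formulation for a tetraalkoxysilane precursor, R = alkyl; this is a standard sketch, not notation taken from the cited works):

\[ \mathrm{Si(OR)_4 + n\,H_2O \rightarrow Si(OR)_{4-n}(OH)_n + n\,ROH} \] (hydrolysis)
\[ \mathrm{{\equiv}Si{-}OH + HO{-}Si{\equiv} \rightarrow {\equiv}Si{-}O{-}Si{\equiv} + H_2O} \] (water condensation)
\[ \mathrm{{\equiv}Si{-}OR + HO{-}Si{\equiv} \rightarrow {\equiv}Si{-}O{-}Si{\equiv} + ROH} \] (alcohol condensation)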
The gel morphology is influenced by temperature, the concentrations of each species (attention focuses on the R ratio, R = [H2O]/[Si(OR)4]), and especially acidity:
- Acid catalysis generally produces weakly cross-linked gels which easily compact under drying conditions, yielding low-porosity microporous (smaller than 2 nm) xerogel structures;
- Conditions of neutral to basic pH result in relatively mesoporous xerogels after drying, as rigid clusters a few nanometers across pack to form mesopores; the clusters themselves may be microporous;
- Under some conditions, base-catalyzed and two-step acid-base catalyzed gels (initial polymerization under acidic conditions and further gelation under basic conditions) exhibit hierarchical structure and complex network topology. 121

• Aging

Gel aging is an extension of the gelation step in which the gel network is reinforced through further polymerization, possibly at different temperature and solvent conditions. During aging, polycondensation continues along with localized dissolution and reprecipitation of the gel network, which increases the thickness of interparticle necks and decreases the porosity. The strength of the gel thereby increases with aging. An aged gel must develop sufficient strength to resist cracking during drying. 116

• Drying

The gel drying process consists in the removal of water from the interconnected pore network, with simultaneous collapse of the gel structure, under conditions of constant temperature, pressure, and humidity. Large capillary stresses can develop during drying when the pores are small (<20 nm). These stresses will cause the gels to crack catastrophically unless the drying process is controlled: by decreasing the liquid surface energy through the addition of surfactants or the elimination of very small pores, by hypercritical evaporation, which avoids the solid-liquid interface, or by obtaining monodisperse pore sizes through control of the rates of hydrolysis and condensation. 121

• Electrochemically-assisted generation of silica films

Electrochemistry has recently proven attractive for synthesizing ordered mesoporous (and macroporous) deposits on electrode surfaces. The principle is based on the electrochemical manipulation of the pH at the electrode surface, thereby affecting the kinetics associated with the sol-gel process. The electrode is immersed in a stable silica sol (mild acidic medium: pH 3-4) and a negative potential is applied to increase the pH locally at the electrode/solution interface, thereby inducing the polycondensation of the silica precursors only on the electrode surface, which makes the process applicable to depositing thin films on non-planar surfaces (Figure 4). 121

Silica and silica-based organic-inorganic hybrids

Well-ordered MPS have monodisperse pore sizes (typically between 2 and 10 nm), and various ionic or non-ionic surfactants can be employed for the preparation of these materials. [135][136][137] It is well known that the morphology of mesoporous silica materials depends on the synthesis conditions. "Evaporation-Induced Self-Assembly" (EISA) is the most common method employed to obtain mesoporous silica thin films with controlled mesostructure and pore size. 138,139 The EISA process involves a diluted sol; the film is formed at the electrode surface by evaporation of the solvent, followed by extraction of the solid template (the sol being applied, e.g., by dip-coating, spin-coating, or dropping techniques). 116
The sol solution consists of inorganic precursors, alone or mixed with organically-modified metal oxides, and a surfactant, usually dissolved in a water-ethanol mixture. The evaporation of the volatile components takes place at the air/film interface, and the concomitant self-assembly and polycondensation of the surfactant and the precursors give rise to a homogeneous mesostructured deposit (with typical thicknesses in the range of 50 nm-700 nm). 116 While in past decades the generation of ordered porous films on solid electrode surfaces led to poorly permeable deposits (probably due to unfavourable pore orientation) and suffered from crack formation arising from surfactant extraction 140, the ability to precisely control the structural arrangement of silica films on the mesoscale has recently led to more accurately engineered mesoporous film electrodes, with continuous structural order over wide areas and variable permeation properties depending on the film structure. 141 Mesostructures with different symmetries (cubic, hexagonal, double gyroid, rhombohedral, etc.) could be obtained by controlling the sol composition, the nature of the template, and the post-treatment temperature and humidity level. 139 The use of hard templates (nano- or microbeads) enabled the generation of ordered macroporous thin films 142 together with the achievement of multimodal hierarchical porosity. 143 Owing to their defined multiscale porous networks with adjustable pore size and connectivity, and their high surface area and accessibility, these rigid three-dimensional matrices can be applied on solid supports (such as electrode surfaces) and are very promising for effective transport reactions at electrode/solution interfaces. 116 Organically-functionalized silica-based materials are of interest in electrochemistry firstly because of their highly porous and regularly ordered 3D structure, which ensures good accessibility and fast mass transport to the active centres. 116 This can be useful for improving sensitivity in preconcentration electroanalysis (voltammetric detection subsequent to open-circuit accumulation) and for enabling the immobilization of a great variety of organo-functional groups, improving at the same time the selectivity of the recognition event. Secondly, through redox-active moieties, these materials should be able to support intra-silica electron transfer chains, or they can be applied in electrocatalysis as electron shuttles or mediators. 116 Third, these materials can also find applications in the field of electrochemical biosensors, given the possibility of nanobioencapsulation (e.g., enzyme immobilization [144][145][146][147]), which can lead to the development of integrated systems combining molecular recognition, catalysis, and signal transduction. 116

Ordered and oriented mesoporous sol-gel films

It was recently shown that electrochemistry can be used to prepare ordered and oriented mesoporous silica thin films, by the so-called electro-assisted self-assembly (EASA) method. 148,149 In this method, the formation of surfactant assemblies under potential control, with concomitant growth of a templated inorganic film, also takes place, but, unlike in EISA, the electrogenerated species (e.g., OH-) do not serve to precipitate a metal hydroxide; instead, they act as catalysts to gelify a sol onto the electrode surface. 116
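The local pH increase exploited here originates from standard cathodic reactions at the polarized electrode; the reactions usually invoked for such electrochemically induced base generation are listed below (a generic textbook list given for illustration; the cited works should be consulted for the conditions actually used):

\[ \mathrm{2\,H_2O + 2\,e^{-} \rightarrow H_2 + 2\,OH^{-}} \]
\[ \mathrm{O_2 + 2\,H_2O + 4\,e^{-} \rightarrow 4\,OH^{-}} \]
\[ \mathrm{NO_3^{-} + H_2O + 2\,e^{-} \rightarrow NO_2^{-} + 2\,OH^{-}} \]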
The mechanism of EASA involves applying a suitable cathodic potential to an electrode immersed in a hydrolyzed sol solution, in order to locally generate OH⁻ species at the electrode/solution interface, which then induce the polycondensation of the silane precursors and the growth of silica or organically-modified silica films onto the electrode surface. [150][151][152][153] When a cationic surfactant (i.e., cetyltrimethylammonium bromide, CTAB) is present in the sol, the obtained configuration is that of hexagonally-packed mesopore channels growing perpendicularly to the electrode surface, the result of the electrochemically-driven cooperative self-assembly of surfactant micelles with simultaneous silica formation. 154 The main advantages of EASA are the possibility to obtain homogeneous deposits over wide areas (cm²), even on non-flat surfaces (with thicknesses typically in the 50-200 nm range), and vertically-aligned mesoporous silica films on various electrode materials such as glassy carbon, platinum, gold, copper, or ITO. This type of orientation, with pores accessible from the surface, is expected to enhance mass transport rates through the film and hence improve the sensitivity of voltammetric analysis. 116 A short deposition time is required in order to avoid the formation of aggregates. 149 Insulating supports can also be employed to prepare these films, using higher electric fields. 155
Mass transport in mesoporous (organo)silica particles
The regular 3D structure consisting of mesochannels of monodisperse dimensions favors fast mass transport, in contrast to diffusion in the non-ordered silica gel homologues. 116 This could be demonstrated through electrochemical methods, by means of which the kinetics associated with mass transfer reactions between a solution and solid particles in suspension can be monitored in situ and in real time. 156 By potentiometric pH monitoring of aqueous suspensions of mesoporous silica particles grafted with aminopropyl groups, the kinetics associated with the protonation of the amine groups located in the material could thus be determined. 156,157 This approach aimed to study the variation of the apparent diffusion coefficients (D_app) as a function of the reaction progress, by assuming the diffusion of protons (and the associated counter anion) inside the functionalized particles to be the rate-determining step, and fitting the "proton consumption versus time" plot to a spherical diffusion model (silica particles being considered spherical to a first approximation). 157 This study was carried out on two amine-functionalized mesoporous silica samples and one non-ordered silica gel grafted with the same aminopropyl groups. It was concluded that mass transfer was faster in the well-ordered mesoporous samples than in the non-ordered homologues, but only at low protonation levels; D_app values decreased dramatically in the mesostructured materials due to major electrostatic shielding effects when charged moieties were generated on the internal surface of the regular mesochannels. 157
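The fitting procedure just described can be illustrated numerically. The sketch below is not the code of the cited study; it assumes Crank's classical series solution for diffusion into a sphere as the "spherical diffusion model", and the particle radius, initial guess and data points are hypothetical placeholders.

```python
# Sketch: estimate an apparent diffusion coefficient D_app by fitting a
# fractional "proton consumption versus time" curve to Crank's solution
# for diffusion into a sphere. All numerical values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

R_PARTICLE = 250e-9  # m, assumed particle radius (~0.5 um diameter spheres)

def sphere_uptake(t, D, n_terms=50):
    """Fractional uptake M_t/M_inf for diffusion into a sphere of radius R."""
    n = np.arange(1, n_terms + 1)[:, None]  # series index as a column vector
    series = np.exp(-(n * np.pi) ** 2 * D * t / R_PARTICLE ** 2) / n ** 2
    return 1.0 - (6.0 / np.pi ** 2) * series.sum(axis=0)

# Hypothetical potentiometric data: time (s), fractional proton consumption
t_data = np.array([5.0, 10, 20, 40, 80, 160, 320])
f_data = np.array([0.18, 0.27, 0.39, 0.55, 0.72, 0.87, 0.96])

(d_app,), _ = curve_fit(sphere_uptake, t_data, f_data, p0=[1e-16])
print(f"D_app = {d_app:.2e} m^2/s")
```

In the cited work the apparent diffusion coefficient was evaluated as a function of reaction progress, so a fit like this one would be applied repeatedly over successive portions of the uptake curve rather than once globally.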
2.4 Selected applications of mesoporous silica materials in electrochemistry
2.4.1 Electroanalysis, sensors and biosensors
Applications in this field include mesoporous silica and organically-modified silicates in electroanalysis, 128 mesoporous silica-based materials for sensing 158 or biosensing, 159 and templated porous film electrodes in electrochemical analysis. 160,161
2.4.1.1 Direct detection - electrocatalysis
Mesoporous silica is a non-conductive oxide which can be used as such or functionalized with appropriate catalysts. 116 In this way, mesoporous silica-based materials have been employed to host mediators, being dispersed in carbon paste electrodes for electrocatalytic purposes. 116 The immobilization of polyoxometalates or Prussian Blue derivatives has been achieved in (protonated) amine-functionalized mesoporous silica, owing to favorable electrostatic interactions. [162][163][164] The amperometric detection of NO₂⁻ 164 and ClO₃⁻/BrO₃⁻ 163 was thus achieved. A redox polymer based on the poly(4-vinylpyridine) complex of [Os(bpy)₂Cl]⁺ quaternized with methyl iodide was immobilized onto a mesoporous silica for the electrocatalytic reduction of nitrite ions. 165 A more sophisticated approach is that of zinc phthalocyanine adsorbed on Ag/Au noble metal nanoparticles (NPs) anchored onto thiol-functionalized MCM-41 (MCM = Mobil Composition of Matter), which exhibited synergistic effects for the electrocatalytic reduction of molecular oxygen. 166
2.4.1.2 Preconcentration electroanalysis
Due to their sorption properties, ordered mesoporous silicas have been employed for the preconcentration electroanalysis of metal cations, 167 nitroaromatic compounds, 168 bisphenol A, 169 ascorbic and uric acids and xanthine, 170 nitro- and aminophenol derivatives, 171,172 and some drugs. 173,174 Chlorophenol was also successfully accumulated onto mesoporous titania prior to sensitive detection. 175 Heavy metal species can accumulate on mesoporous silica by adsorption (e.g., Hg(II)). 176 Even so, the adsorbent properties of this material can be significantly improved by functionalization with suitable organic groups. Mesoporous organosilicas have thus found many applications as nanoengineered adsorbents for pollutant removal, 177 but also as electrode modifiers for preconcentration electroanalysis. 128 Their main advantages are, firstly, the possibility of tuning the selectivity of the recognition event (and therefore the selectivity of the detection) by an appropriate selection of the organo-functional group and, secondly, the well-ordered and rigid mesostructure ensuring fast transport of reactants (and thus high preconcentration efficiency and good sensitivity for the sensor). 116 The sensitivity was significantly enhanced when using an electrode modified with a mesostructured adsorbent instead of its non-ordered homologue, as shown by comparing Hg(II) detection subsequent to open-circuit accumulation at thiol-functionalized silica materials (Figure 5). This also proved that mass transport processes are faster in the mesostructured silica. 178 Early works focused on simple functional groups such as thiol [179][180][181] or amine. 182 Several other organo-functional groups have subsequently been used in attempts to improve the selectivity of the accumulation step, such as quaternary ammonium, 183 sulfonate, 184 glycinylurea, 185 salicylamide, 186 carnosine, 187 acetamide phosphonic acid, 188 benzothiazolethiol, 189 acetylacetone, 190 cyclam derivatives, 191,192 β-cyclodextrin, 193 5-mercapto-1-methyltetrazole, 194,195 or ionic liquids. 196 These materials are likely to accumulate analytes via complexation and ion exchange. 116 Electrodes modified with mesoporous organic-inorganic hybrid silica have been employed to detect several analytes, such as Ag(I), 197 Cu(II), 192 Cd(II), 188 Hg(II), 180,189 Pb(II), 179,194 Eu(III), 186 and U(VI). 198
The procedure generally involves first the preconcentration of the analyte in the mesostructured hybrid material, usually followed by its desorption in the detection medium and subsequent quantitative electrochemical detection. 116 The detection of cations in mixtures (e.g., {Cd(II) + Pb(II) + Cu(II)} 181,190 or {Hg(II) + Cd(II) + Pb(II) + Cu(II)} 196) was recently reported; the cations were accumulated together before being selectively detected via voltammetric signals located at distinct potential values. Selectivity was sometimes achieved through the preferential recognition of the organo-functional group for the target metal in the presence of potentially-interfering species, which can be tuned by molecular engineering of the immobilized ligands. 116 Good selectivity for Cu(II) 191 was thus obtained by using mesoporous silica bearing cyclam groups, while the addition of acetamide arms on the cyclam centres shifted the selectivity towards Pb(II). 192 Both film and bulk composite electrode configurations were tested for preconcentration/voltammetric detection. The response of film electrodes proved to be often slower, especially at low analyte concentration, because accumulation starts on the upper part of the film contacting the solution; only very low amounts of species are accumulated when operating in dilute solutions, some of them being lost after desorption in the detection medium and therefore not detectable at the electrode surface. 195 When very thin films with mesopore channels oriented normal to the underlying electrode are employed, however, this effect is minimized and the recognition event is accelerated. 116 It should be noted that, unlike non-ordered silica gels, ordered mesoporous adsorbents can give rise to more sensitive detection in preconcentration electroanalysis, but only when the rate-determining step is the diffusion of the analyte in the material, not when the kinetics are dominated by the complexation reaction itself. 199 Organic species such as bisphenol A 200 or dihydroxybenzene isomers 201 could also be determined using functionalized mesoporous silica, after accumulation at aminopropyl-grafted SBA-15 (SBA = Santa Barbara Amorphous). Selectivity can also be improved by rejecting interferences with organically-modified mesoporous silica films used as permselective barriers, as demonstrated for aminopropyl-functionalized films exhibiting ion-permselective pH-switchable behavior. [202][203][204]
2.4.1.3 Electrochemical biosensors and related devices
An electrochemical biosensor requires an effective and durable immobilization of large amounts of biomolecules in an active form, and a favorable environment for efficient electron transfer reactions. Due to their attractive properties (large surface areas for the immobilization of reactants and biocomponents, interconnected pore systems ensuring fast access to active centres, intrinsic electrocatalytic properties or support for electrocatalysts, possibility of functionalization), silica-based mesoporous materials have been intensively applied in this field. 116
• Immobilization of heme proteins, direct electrochemistry and electrocatalysis
Direct electrochemistry could be observed for heme proteins, such as cytochrome c, 205,206 hemoglobin, 207,208 or myoglobin, 209,210 when immobilized in non-conductive mesoporous silica particles or in continuous mesoporous silica layers deposited as thin films on electrode surfaces.
The well-defined voltammetric responses obtained in these cases indicate that the proteins retain their biological activity once immobilized within the mesoporous material. The amount of immobilized protein is influenced by the pore structure and size of the host material (e.g., hemoglobin is almost excluded from MCM-41 while fitting well inside SBA-15 211) and, at similar loadings, interconnected 3D or bimodal mesostructures resulted in higher current responses. 211,212 Additives were used in some cases to improve the electron transfer reactions (e.g., ionic liquids 213) or to prevent leakage of the protein out of the material (e.g., chitosan 212). As a result of their peroxidase activity, the immobilized heme proteins were sensitive to the presence of hydrogen peroxide, showing an electrocatalytic response towards it. 205,207,209,210,212,213 The electrode configuration set the sensitivity of the device, while the addition of gold nanoparticles or CdTe quantum dots was reported to enhance the biosensor response. 214,215
• Enzyme immobilization and electrochemical biosensors
Mesoporous silica-based materials are suitable immobilization matrices for enzymes, which keep their biological activity, [144][145][146][147] and for biosensor construction. 159 In first-generation biosensors, the enzyme was immobilized on the material and the electrode was used to detect the enzymatically-generated products (e.g., glucose oxidase (GOX) or horseradish peroxidase (HRP) entrapped in mesoporous silica particles deposited as thin films on glassy carbon electrodes (GCEs) [216][217][218]). Larger pore sizes were necessary for the non-conductive silica matrices to ensure fast mass transport processes. 116 In order to overcome this limitation, the conductivity was increased by incorporating gold nanoparticles, 219 by using conducting polymers, 220 or by forming silica-carbon composite nanostructures. 216 A hydrogen peroxide biosensor was also constructed using a purely organic ordered mesoporous polyaniline film, fabricated by electrodeposition from a lyotropic liquid crystalline phase, as an immobilization matrix for HRP. 221 The development of bienzymatic systems based on the co-immobilization of two enzymes (e.g., tyrosinase and HRP 222 or 2-hydroxybiphenyl 3-monooxygenase and GOX 223) suggested that the confined environment of the mesoporous silica host preserves the necessary interactions.
• Electrochemical immunosensors and aptasensors
Due to their hosting properties, MPS have been employed in fabricating ultrasensitive electrochemical immunosensors. Two different approaches can be distinguished for the electrochemical detection of the antigen-antibody recognition event. The first one uses mesoporous silica nanoparticles (in which an enzyme, e.g., HRP, a mediator and a first antibody have been immobilized) as nanolabels. 224 These bind to an electrode surface (bearing a second antibody) in the presence of the analyte (the antigen, in a sandwich configuration between the two complementary antibodies), and the resulting current response is directly proportional to the amount of nanolabels on the electrode surface, and thus to the analyte concentration. 224 By improving the electronic conductivity of the nanolabel and of the electrode surface, better performance of the device could be achieved. [225][226][227]
The second approach uses an antibody still immobilized on the mesoporous material, but this time coated on the electrode surface, while the mediator is in solution; as the immunoconjugates are formed, the electrode becomes progressively blocked and the electrochemical response decreases proportionally to the target analyte concentration. [228][229][230] More recently, new kinds of label-free aptasensors were described, based on graphene-mesoporous silica-gold NP hybrids as an enhanced element of the sensing platform [231][232][233] for the detection of ATP 231 or DNA. 232
• Other electrochemical sensors
Due to their sorption properties, mesoporous silicas could be exploited in the development of electrochemiluminescence sensors. 234 Some examples relying on other electrical detection methods (i.e., conductivity changes, surface photovoltage measurements) include mesoporous silica for sensing alcohol vapours 235 or detecting humidity changes, 236 and tin-doped silica for NO₂ sensing. 237
2.4.2 Energy conversion and storage
Nanoscale engineered materials have become an important means of designing various devices for energy conversion and storage. 116 Taking into consideration the increasing need for improved systems like batteries, supercapacitors, fuel cells, or dye-sensitized solar cells, finding innovative electrode materials with architecturally tailored nanostructures is an important focus in this research field. 116 For instance, proton-conducting organic-inorganic hybrids, like mesoporous silica containing sulfonic, phosphonic or carboxylic acid groups, can be used as membranes for fuel cells, because they are likely to exhibit good thermostability and efficient proton conductivity at high temperature and low humidity. 238

PERSONAL CONTRIBUTION

1 Clays - physico-chemical and structural characterization

Introduction

"Clay" is a collective term for a large group of sedimentary rocks with clay minerals as main components. Clay minerals are generally fine-grained crystalline hydrated aluminum silicates, and they exhibit plasticity when mixed with water in certain proportions. Bentonites are clay materials representing secondary rocks formed from the devitrification, hydration and hydrolysis of other underlying rocks (e.g., volcanic tuffs, pegmatite, etc.). An important feature of all bentonites is their high content of montmorillonite, with its cryptocrystalline aggregate structure, to which low fractions of quartz, feldspar, volcanic glass, amphibole, pyroxene, chlorite, limonite, halloysite, etc. are added. 239 The name "montmorillonite" was assigned to the clay mineral with the theoretical formula Al₂(Si₄O₁₀)(OH)₂, which has a relatively high content of water molecules adsorbed between its layers. The most commonly accepted structure for montmorillonite minerals is similar to that of pyrophyllite, from which it differs only in the distribution of constitutive ions and in the overlapping of multiple sheets. Montmorillonite thus consists of two tetrahedral silica planes and a central octahedral aluminum plane. An invariable feature of the montmorillonite structure is that water and other polar organic molecules can enter the interlayer space, resulting in an expansion along the c axis. The dimensions of the c axis in montmorillonite are not fixed, but vary from 9.6 Å, when there are no polar molecules in the interlayer space, up to 15 Å in the presence of polar molecules (Figure 6).
The thickness of the water layer between the structural units depends on the nature of the adsorbed cation and on the water vapor pressure of the working environment. 240 Romanian clays are cationic clays, with negatively-charged aluminosilicate layers. The clays presented in this thesis are bentonites obtained from the Răzoare and Valea Chioarului deposits in operation (Maramureş County, Romania). In order to relate their electrochemical behavior to their internal structure and properties, the structural characterization of these clays was carried out.

Materials and methods

Physico-chemical composition studies, as well as X-ray diffraction (XRD) and transmission electron microscopy (TEM), established the properties of the two clays. For the structural characterization of the Răzoare and Valea Chioarului bentonites, their impurities were first removed, in order to obtain a higher content of montmorillonite in the resulting samples. The refinement was achieved by washing and decantation, yielding a more homogeneous product, rich in montmorillonite, the main component of their structure. All of the following characterization procedures and analytical experiments are based on these refined clay samples. Separation into different granulometric particle-size fractions was performed by sedimentation, decantation, centrifugation, and ultracentrifugation, following the procedures reported in the literature, 242 according to Stokes' law (see the sketch at the end of this section). Several fractions, below 50 µm, 20 µm, 8 µm, 5 µm, 2 µm, 1 µm, and below 0.2 µm, were separated and characterized. The chemical composition, the ion exchange capacity, the surface area, and the structural characteristics, e.g., particle size and shape, of each separated fraction were determined by XRD and TEM. The chemical composition was determined by gravimetry (Si), complexonometry (Fe, Al, Ca, Mg), colorimetry (Ti at 436 nm), and flame photometry (Na at 589 nm, K at 768 nm). The ion exchange capacity was obtained by treating the clay sample with an ammonium chloride solution, followed by filtration and determination of Ca and Mg (by complexonometry) and Na and K (by flame photometry). For TEM studies, an aqueous suspension of clay was deposited on a thin layer of collodion, followed by evaporation. Examination of the samples was performed using a JEOL JEM 1010 microscope. Diffractometry was carried out on fine powder material using a Shimadzu XRD 6000 X-ray diffractometer, equipped with a monochromator and a position-sensitive detector. The X-ray source was a Cu anode (40 kV, 30 mA). The diffractograms were recorded in the 0-90° 2θ range, with a 0.02° step size and a collection time of 0.2 s per point. Surface area analyses were recorded with a Thermo Finnigan type Surf 9600 analyzer by the single-point method, on the Răzoare clay fraction below 20 µm and on the Valea Chioarului clay sample below 50 µm, respectively, without any previous thermal treatment. 239 Thermal behavior was also studied by differential thermal analysis, performed with a MOM derivatograph, type 1500D, at a heating rate of 10 °C/min, in the range of 20 °C to 1000 °C. 243,244 IR analyses were recorded with a Bruker FTIR spectrometer, on the Răzoare clay fraction below 20 µm and the Valea Chioarului clay fraction below 0.2 µm, in a KBr matrix, from 4000 to 400 cm⁻¹. 243
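As a numerical illustration of the sedimentation step, Stokes' law gives the settling velocity of a spherical particle and hence the decantation time needed to isolate each fraction. The sketch below is indicative only: the particle density, water properties and settling height are assumed values, not parameters taken from the experimental protocol.

```python
# Sketch: Stokes-law settling times for clay fractions in water.
# Assumed values: particle density 2.6 g/cm^3 (typical for montmorillonite),
# water at 20 C (density 998 kg/m^3, viscosity 1.0e-3 Pa s), 10 cm column.
G = 9.81        # m/s^2, gravitational acceleration
RHO_P = 2600.0  # kg/m^3, assumed clay particle density
RHO_F = 998.0   # kg/m^3, water density
MU = 1.0e-3     # Pa s, water viscosity
H = 0.10        # m, assumed settling height

def settling_time_hours(d_um):
    """Time for a sphere of diameter d_um (micrometres) to fall H metres."""
    d = d_um * 1e-6
    v = (RHO_P - RHO_F) * G * d ** 2 / (18.0 * MU)  # Stokes settling velocity
    return H / v / 3600.0

for d in (50, 20, 8, 5, 2, 1, 0.2):
    print(f"fraction < {d} um: decant supernatant after ~{settling_time_hours(d):.2g} h")
```

With these assumptions, the < 2 µm fraction requires about 8 h of settling over 10 cm, and the < 0.2 µm fraction roughly 100 times longer, which is why the finest fractions were separated by centrifugation and ultracentrifugation rather than by gravity alone.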
Results and discussions

Elemental analyses confirmed the chemical composition of the clays and revealed the differences between the two materials (Table 1). The higher SiO₂ content in the composition of the Răzoare bentonite indicates the presence of an increased quantity of impurities (e.g., cristobalite). TEM images of the Răzoare (Figure 7A) and Valea Chioarului clays (Figure 7C) at higher magnification showed a diffusive, irregular, and opalescent surface. The very finely dispersed montmorillonite (below 0.2 µm) formed extremely thin lamellar layers with nanometer dimensions (Figure 7B). The XRD diffractogram obtained for the Valea Chioarului clay using a sample with particle size below 0.2 μm (Figure 8B) showed a high content of montmorillonite (with its characteristic peaks at 2θ: 6.94°, 19.96°, 21.82°, 28.63°, 36.14°, and 62.01°). The presence of montmorillonite was also proved by comparing the XRD data of the below 0.2 μm sample before and after treatment with ethylene glycol. After adsorption of ethylene glycol, an increase in the reticular distance along the c axis from 12.72 Å to 17.18 Å was noticed, corresponding to the peaks at 6.9434° and 5.137°, respectively, characteristic of montmorillonite. The XRD diffraction patterns (see Figures 8A and 8B) illustrate the fact that the two investigated samples are multiple-phase bentonite materials containing mainly hexagonal montmorillonite (Na₀.₃Al₂Si₄O₁₆H₁₀) and cristobalite (SiO₂) crystalline phases. The fact that the intense peak appearing in the XRD pattern at 2θ = 7.9° (d = 1.23 nm) for the montmorillonite compounds in the X1 sample is shifted to 2θ = 8.2° (d = 1.21 nm) for the R1 sample (where it appears less intense) seems to indicate that the layered montmorillonite structure in the Răzoare bentonite is largely deformed by different impurities. The specific surface area of the Răzoare clay sample below 20 μm was 50 m²/g. Due to the aggregation effect which occurred during the drying of the finely granulated clay, the specific surface area of the Valea Chioarului clay was determined on the sample below 50 μm; the obtained value was 190.86 m²/g. The ion exchange capacity of the analyzed clays was determined by replacing the compensatory ions with NH₄⁺ ions, followed by their quantitative determination. The ion exchange capacity of the Răzoare clay was thus 68.32 meq/100 g, while for the Valea Chioarului clay it was estimated at 78.03 meq/100 g. 239 In order to complete the structural characterization of the Romanian clays, several further experiments were performed. The FTIR studies and the thermodifferential analysis showed the presence of a high percentage of montmorillonite. Thus, the FTIR spectrum of the Răzoare clay (Figure 9A) revealed the characteristic groups of montmorillonite. The broad band at 3447 cm⁻¹, with its specific peak at 3620 cm⁻¹, was attributed to the stretching vibration of the hydroxyl group. The bands at 1000-1200 cm⁻¹ and 466 cm⁻¹ were produced by the Si-O stretching vibration. The bands at 793 cm⁻¹ and 519 cm⁻¹ were assigned to the Si-O-Al group. 243,246 The FTIR spectrum of the Valea Chioarului clay (Figure 9B) presented a broad band at 3446 cm⁻¹, with the specific peak at 3625 cm⁻¹, attributed to the stretching vibration of the hydroxyl group. 244 The broad band at 1000-1200 cm⁻¹ was assigned to the Si-O stretching vibration and the band at 520 cm⁻¹ to the Si-O-Al group, all characteristic of montmorillonite. In both cases the band at 1637 cm⁻¹ was assigned to the bending vibration of the H-O-H group. 243,246
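The interlayer distances quoted above for the basal reflection follow directly from Bragg's law, d = λ/(2 sin θ). A minimal check of the reported values before and after the ethylene glycol treatment, assuming Cu Kα radiation (λ = 1.5406 Å):

```python
# Sketch: convert 2-theta peak positions into d-spacings via Bragg's law,
# d = lambda / (2 sin(theta)), with Cu K-alpha radiation.
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha1

def d_spacing(two_theta_deg):
    return WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

print(f"2theta = 6.9434 deg -> d = {d_spacing(6.9434):.2f} A")  # 12.72 A, air-dried
print(f"2theta = 5.1370 deg -> d = {d_spacing(5.1370):.2f} A")  # 17.19 A, glycolated
```

Both computed values match the reported 12.72 Å and 17.18 Å spacings to within rounding, confirming the internal consistency of the glycolation test.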
[Table: unit-cell parameters a, b, c (Å) and Rp factors of the montmorillonite phases in samples X1 and R1; for the X1 montmorillonite, a = b = 5.17 Å.]

Regarding the thermodifferential analysis (Figure 10) of the Răzoare and Valea Chioarului clays, a good superposition with the characteristic thermodifferential curves of montmorillonite was noticed, confirming that montmorillonite is the main constituent of both Romanian clays. The thermogravimetric (TGA) and thermodifferential (TDA) analysis of the Răzoare clay (Figure 10A) showed the loss of the adsorbed water between 60 °C and 200 °C, accompanied by a strong endothermic effect at 115 °C. The elimination of hydroxyl groups from the mineral network, with an endothermic effect and a decrease in mass, occurs between 600 °C and 750 °C. The last endothermic effect, at 900 °C, immediately followed by an exothermic one, showed a modification of the crystal structure to a lower energetic state. For the Valea Chioarului clay, the first pronounced endothermic effect in the TDA curve (Figure 10B) appeared between 60 °C and 250 °C, due to the water loss. A substantial decrease in the clay mass occurred at the same time, according to the TG curve. Dehydration was a reversible process, the clay at 250 °C still being able to readsorb water molecules, which could then be eliminated by a new temperature exposure. 243,247,248 The second endothermic effect appeared between 600 °C and 750 °C, followed by a loss in the clay mass due to the elimination of hydroxyl groups. A third endothermic effect occurred at 880 °C, immediately followed by an exothermic effect at 900 °C due to the crystal structure modifications. 243,247,248

Conclusions

Two Romanian clays from the Răzoare and Valea Chioarului deposits (Maramureş County) were refined and characterized by X-ray diffraction, transmission electron microscopy, FTIR, and thermogravimetric and thermodifferential analysis. The ion exchange capacity of the purified clays was determined by replacing the compensatory ions with NH₄⁺ ions. 239,243 The clay chemical composition was confirmed by elemental analyses, and the characteristic formulas for the two clays could be calculated. The diffusive, irregular, and opalescent surface of the Răzoare and Valea Chioarului clays was evidenced by TEM images, and the extremely thin lamellar layers of montmorillonite with nanometer dimensions were also described. The XRD diffractograms of both clays showed the characteristic diffraction peaks of montmorillonite. The presence of other minerals like quartz and feldspar was also evidenced. The specific surface area for the Răzoare clay was 50 m²/g, while the value obtained for the Valea Chioarului clay was 190.86 m²/g. The ion exchange capacity of the Răzoare clay was 68.32 meq/100 g and for the Valea Chioarului clay it was estimated at 78.03 meq/100 g. FTIR spectra of both clays revealed the characteristic peaks of montmorillonite, and the good superposition of the thermodifferential characteristic curves confirmed its presence in both analyzed clays. In conclusion, all the characterization methods employed revealed montmorillonite as the main component of the structure of the Răzoare and Valea Chioarului clays. Due to its higher cation exchange capacity (CEC) and larger specific surface area, together with a negligible quantity of impurities, the Valea Chioarului clay is highly recommended for further electroanalytical applications.
2 Electroanalytical characterization of two Romanian clays with possible applications in pharmaceutical analysis

Introduction

The high demand for simple, fast, accurate, and sensitive detection methods in pharmaceutical analysis has driven the development of novel electrochemical sensors. Clay-modified electrodes (CLMEs) are likely candidates for this application. Montmorillonite, notably, has been used for centuries to produce ceramics, and its applications in pharmacy and as adsorbent and ion exchanger are also reported in the literature. [START_REF] Vaccari | Preparation and catalytic properties of cationic and anionic clays[END_REF][START_REF] Vaccari | Clays and catalysis: a promising future[END_REF] These latter applications are particularly useful for the development of electrochemical sensors. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]239 Clay-modified electrodes have attracted considerable attention in attempts to control the path and scale of electrode reactions. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Bard | Electrodes modified with clays, zeolites and related microporous solids[END_REF][START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF][249][250][251] The composition of the carbon paste electrode (CPE) modified with clay can be described as a complex heterogeneous system consisting of conductive solids, semiconductors, and insulators, including a clay-induced aqueous phase. Phenomena of charge and mass transfer in such mixtures are extremely complicated and require thorough characterization, especially since the clays included in the electrode material are natural compounds whose composition and structure depend on their place of origin. In spite of the wide range of electrode modifiers, clays have attracted the interest of electrochemists, in particular for their analytical applications. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]239,[252][253][254] Electrochemical methods are well known as very sensitive, but they lack selectivity. Electrode modification is therefore the main route to improving selectivity, which is why the different ways of modifying the electrode surface in order to obtain improved electrochemical signals represent a concern for researchers all over the world. Electrode modifications aim either at improving sensor selectivity, by increasing the affinity for a specific analyte while rejecting, at the same time, other interfering chemical species, or at improving the electroanalytical performance (higher accuracy and reproducibility, lower detection and quantification limits, the possibility of determining several electroactive species without any separation process, at different oxidation or reduction potentials). Generally, chemical sensors contain two basic functional units: a receptor part, which transforms the chemical information into a measurable form of energy, and a transducer part, capable of converting the energy carrying the chemical information about the sample into a useful analytical signal. Biological sensors can be defined either as devices able to detect the presence, the movement, and the number of organisms in a given environment, or as sensors which contain in their structure a biological component (bacteria, algae, tissues, cells) as receptor, this latter type of device being known as a biosensor. 255
In the living world, there are many examples of sensors consisting of biological receptors (proteins, nucleic acids, signaling molecules) located on the cell membrane, in all tissues and organs, or even in the circulating blood stream. Enzymes have been used for decades in sensor development, and this led to the emergence of a niche research field: biosensors. More specifically, biosensors can be defined as sensitive and selective analytical devices which associate a biocomponent with a transducer. 256 Biosensors are applied with success in several fields (environment, food security, biomedical and pharmaceutical analysis), especially because of the stable source of the biomaterial (enzymes produced by bacteria, plants, or animals as by-products), their catalytic properties, and the possibility of modifying the surface of transducers in various ways. A key step in the development and optimization of biosensors is the entrapment of the enzymes at the electrode surface, another challenge being to preserve the microenvironment of the enzyme and hence the lifetime of the biosensor. Besides previously used methods, like adsorption, cross-linking, covalent binding, biological membranes, magnetic microparticles, entrapment in sol-gel, etc., immobilization into an electrochemically-generated polymer or polymerizable matrix has been successfully used in the development of amperometric biosensors. 256 In this case, the procedure is effective and simple, and the enzyme is less affected than by other methods of entrapment. Adsorption of proteins on clay mineral surfaces represents an important application in fields related to the agricultural and environmental sciences, but also in pharmaceutical and biomedical analysis. [START_REF] Gianfreda | Enzymes in soil: properties, behavior and potential applications[END_REF]243 Organic molecules, macromolecules, and biomolecules can be easily intercalated in solids with a 2D structural arrangement and an open structure. Clay minerals can therefore be exploited to improve the analytical characteristics of biosensors. Such biosensors have been based on three smectite clays (laponite, montmorillonite, and nontronite) and on layered double hydroxides. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]243 Due to their low price and accessibility, clays offer a new immobilization method for biomolecules like enzymes. Hydrated clays represent a good environment for enzyme functioning and can improve the lifetime of the biosensor. On the other hand, many electrochemical processes are controlled by diffusion, and clays, due to their adsorbent properties, can improve or accelerate the diffusion of different molecules (pharmaceuticals in our case) at the electrode surface. In this way, the electrochemical parameters can also be improved, allowing higher currents to be recorded at lower potentials and thus enabling new electroanalytical methods with enhanced performance. Clays also show some disadvantages, the clay deposition and the thickness of the clay film on the electrode surface being factors that can decrease the electrical conductivity. In this case, the use of film-forming or conductive polymers, like polyethyleneimine (PEI) or polypyrrole, for immobilizing the clay film at the electrode surface represents a good alternative.
In spite of the water and alcohol solubility of PEI, this polymer does not require any further polymerization step (such as the use of heat, polymerization initiators, or potential scanning) which could damage the enzyme structure and functioning. The development of composite electrodes for biosensor construction, based on HRP and clay films, for acetaminophen (N-acetyl-p-aminophenol) detection is described here. Acetaminophen is widely used as an analgesic and antipyretic drug, having actions similar to aspirin. It is a suitable alternative for patients who are sensitive to aspirin and is safe in therapeutic doses. HRP has been a powerful tool in biomedical and pharmaceutical analysis, and many biosensors based on HRP applied in these fields are mentioned in the literature. 256,257 The enzyme immobilization was performed by retention in a PEI and clay porous gel film, a technique that offers good entrapment and, at the same time, a "protective" environment for the biocomponent. 243 In this study, the refined bentonites obtained from the Răzoare and Valea Chioarului deposits were used to modify CPEs. The electrochemical behavior of acetaminophen, ascorbic acid, and riboflavin phosphate was tested by cyclic voltammetry on the clay-modified CPEs with different clay particle sizes. The resulting CPEs revealed either better electroanalytical signals or oxidation at lower potential. The exploitation of the Romanian clays in developing a biosensor for acetaminophen detection with good sensitivity and reproducibility was also achieved. The development of new clay-modified sensors using such composite materials based on micro- and nanoparticles could be applied in pharmaceutical analysis. 239 The studies presented in this chapter are preliminary. By employing the three standard pharmaceutical substances, these studies did not aim at the development of new analytical methods with improved sensitivity and selectivity (even though they showed this could be achieved later on), but rather at emphasizing the adsorbent and ion exchange properties of the Romanian clays. Based on these principles, the tested molecules were chosen to be neutral (acetaminophen), anionic (ascorbic acid), and cationic (riboflavin).

Materials and methods

Clay water suspensions of 50 mg/mL were prepared for the fractions below 20 µm and 0.2 µm for the Valea Chioarului clay and below 20 µm for the Răzoare clay. Standard solutions of acetaminophen, riboflavin, and ascorbic acid were prepared to provide a final concentration of 10⁻³ M. For biosensor development, standard solutions of acetaminophen and hydrogen peroxide were prepared to provide final concentrations of 10⁻⁴ M acetaminophen and 0.1 mM hydrogen peroxide. The stock solutions of acetaminophen were dissolved in phosphate buffer and kept in the refrigerator. All the experiments were performed in PBS (phosphate buffered saline; pH 7.4, 0.1 M) at room temperature (25 °C). CPEs were modified by mixing different Răzoare clay concentrations (1%, 2.5%, 5%, and 10%) with "homemade" carbon paste prepared with solid paraffin. 239,258 Electrochemical studies, namely cyclic voltammetry (CV) and chronoamperometry, were performed in a conventional three-electrode system: the new modified carbon-based electrodes (working electrodes), platinum (auxiliary electrode), and Ag/AgCl, 3 M KCl (reference electrode), under stirring conditions. All the CV experiments were recorded at 100 mV s⁻¹. During the chronoamperometry experiments, the biosensor potential was kept at 0 V vs. Ag/AgCl under continuous stirring conditions. The working potential was imposed and the background current was allowed to reach a steady-state value. Different amounts of acetaminophen standard solution were then added, every 100 seconds, into the stirred electrochemical cell, and the current was recorded as a function of time. The resulting configuration was used to study the biocatalytic oxidation of acetaminophen in the presence of hydrogen peroxide. 243
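Each standard addition in this protocol produces a current step, and the calibration uses the steady-state current reached at the end of each 100 s interval. The sketch below shows how such plateau currents could be extracted from an exported current-time trace; the sampling rate, window length and the simulated trace are hypothetical, not values from the actual experiment.

```python
# Sketch: extract plateau currents from a staircase chronoamperogram
# recorded with standard additions every 100 s. All values hypothetical.
import numpy as np

SAMPLING_RATE = 10.0     # Hz, assumed acquisition rate
ADDITION_PERIOD = 100.0  # s, interval between standard additions
WINDOW = 10.0            # s, averaging window at the end of each step

def plateau_currents(current, n_additions):
    """Average current over the last WINDOW seconds of each addition step;
    the first period (background, before any addition) is skipped."""
    step = int(ADDITION_PERIOD * SAMPLING_RATE)
    win = int(WINDOW * SAMPLING_RATE)
    return [current[(k + 1) * step - win:(k + 1) * step].mean()
            for k in range(1, n_additions + 1)]

# Hypothetical trace: exponential approach to a new plateau per addition
t = np.arange(0, 500, 1 / SAMPLING_RATE)
trace = sum((t > 100 * k) * 2e-8 * (1 - np.exp(-(t - 100 * k) / 5))
            for k in range(1, 5))
print([f"{i:.1e} A" for i in plateau_currents(trace, 4)])
```

Plotting these plateau currents against the cumulative acetaminophen concentration (corrected for dilution) yields the calibration curve discussed below.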
The experiments were performed with an AUTOLAB PGSTAT 30 (EcoChemie, Netherlands) equipped with GPES and FRA2 software. The pH of the solutions was measured using a ChemCadet pH-meter. All solutions were prepared using high-purity water obtained from a Millipore Milli-Q water purification system. Paraffin (Ph Eur, BP, NF), graphite powder, acetaminophen (minimum 99.0%), L-ascorbic acid (99.0%), and riboflavin (Ph Eur) were provided by Merck, and KCl (analytical grade) by Chimopar Bucureşti. HRP (peroxidase type II from horseradish, EC 232-668-6), acetaminophen, hydrogen peroxide, monosodium phosphate, and disodium phosphate were provided by Sigma-Aldrich; PEI (50% in water, Mr 600000-1000000, density 1.08 g/cm³ at 20 °C) was purchased from Fluka. All reagents were of analytical grade and were used as received, without further purification. Composite film electrodes (PEI/clay/GCE) were prepared as follows: PEI (5 mg) was stirred for 15 minutes in absolute ethanol (125 μL) and distilled water (120 μL), then 6.5 μL of nanoporous clay gel were added and the mixture was stirred again for 15 minutes. Two different suspensions (20 μL), containing Valea Chioarului clay particles with diameters below 20 μm and below 0.2 μm, were deposited on the surfaces of two different GCEs and dried for 4 hours at 4 °C. 243 GCEs were provided by BAS Inc. (West Lafayette, USA) and were carefully washed with demineralized water and polished using diamond paste (BAS Inc.).

Results and discussions

The electrochemical behavior of several clay-modified electrodes was tested in the presence of some pharmaceutical compounds: acetaminophen, ascorbic acid, and riboflavin phosphate (Figure 11). The electrochemical behavior of these three selected redox probes (neutral, negatively, and positively charged, respectively) is widely discussed in the literature on different electrode materials (Pt, carbon paste, glassy carbon, etc.), which enables comparison of the resulting electrochemical performances with those described by other authors. Taking into consideration the adsorbent properties of the investigated clays, the study aimed to improve the oxidation and reduction potentials obtained on unmodified CPEs. Acetaminophen (N-acetyl-p-aminophenol) is one of the most commonly used analgesics in pharmaceutical formulations, for the reduction of fever and also as a painkiller for the relief of mild to moderate pain associated with headache, backache, arthritis, and postoperative pain. Acetaminophen is metabolized primarily in the liver, into toxic and non-toxic products. The hepatic cytochrome P450 enzyme system metabolizes acetaminophen, resulting in a minor yet significant alkylating metabolite known as NAPQI (N-acetyl-p-benzoquinone imine). NAPQI is then irreversibly conjugated with the sulfhydryl groups of glutathione. Ascorbic acid is a naturally occurring organic compound with antioxidant properties. As a mild reducing agent, it degrades upon exposure to air, converting the oxygen to water.
The redox reaction is accelerated by the presence of metal ions and light. Ascorbic acid can be oxidized by one electron to a radical state or doubly oxidized to the stable form called dehydroascorbic acid:

Ascorbic acid → Dehydroascorbic acid + 2H⁺ + 2e⁻

Riboflavin, also known as vitamin B₂, is a colored micronutrient, easily absorbed, with an important role in human and animal health maintenance. This vitamin is the central component of the cofactors FAD and FMN, being therefore required by all flavoproteins. As such, vitamin B₂ is implicated in a wide variety of cellular processes, playing a key role in energy metabolism, but also in the metabolism of fats, ketone bodies, carbohydrates, and proteins. The reversible oxidation and reduction of riboflavin proceeds as a two-electron, two-proton process:

Riboflavin + 2H⁺ + 2e⁻ ⇌ Dihydroriboflavin

Acetaminophen and ascorbic acid (Figures 12A and 12B) showed relatively similar electrochemical behavior in the CV investigations, one irreversible oxidation peak being obtained at 0.78 V, with 150 μA for acetaminophen and 200 μA for ascorbic acid on the 1% clay-modified CPEs (due to the two-electron transfer described in the reactions above). In both cases, the increase in the clay content was followed by an important shift of the oxidation potential towards lower values, 0.70 V for acetaminophen and 0.60 V for ascorbic acid, showing that the increasing clay concentration facilitates the oxidation process (ΔE = 100-150 mV). In both cases, during the anodic potential sweep, voltammetric peaks were formed, revealing that the electrochemical oxidation reactions of acetaminophen and ascorbic acid are diffusion-controlled processes. Moreover, also in both cases, no significant currents were detected during the cathodic potential scan; both oxidation processes are thus irreversible. In the case of acetaminophen, an increase in the current from 150 to 350 μA could be observed in the oxidation range at the 2.5% clay-modified CPE, while for the 5% clay-modified CPE the current had the same order of magnitude as for the 1% clay-modified CPE (Figure 12A). Ascorbic acid showed a different behavior, the increase in the clay content having no influence on the current range, but facilitating the oxidation reactions, as proved by the above-mentioned shift of the anodic potential towards lower values (Figure 12B). This can be attributed to the electrostatic repulsion between the negatively-charged clay sheets and the negatively-charged molecule. Riboflavin phosphate exhibited a typical reversible cyclic voltammetric response at the unmodified carbon paste electrode, with an oxidation peak at -0.45 V and a reduction peak at -0.60 V, as presented in Figure 13. The electrostatic attraction between the positively-charged molecule and the negatively-charged clay sheets is very clear in this case. Anodic currents increased proportionally with the clay content, by about 5 nA and 10 nA for the 5% and the 10% clay-modified CPEs versus the unmodified CPEs, respectively. The increase in the cathodic current was higher (20 nA) than that in the anodic current (10 nA) for the 10% clay-modified CPEs. Thus, it can be concluded that an increase in the clay concentration favors riboflavin detection. A significant difference could be observed when the current at the 5% clay-modified CPE was compared with the current at the unmodified CPE.
In the oxidation range, the current was 5 nA higher than the current measured at the unmodified electrode, while in the reduction range the value was about 10 nA lower than the one measured at the unmodified electrode. This proved that a lower concentration of clay was not sufficient for riboflavin detection.

Biosensor for acetaminophen detection with Romanian clays and conductive polymers

Two types of transducers were studied (CPE and GCE) using two types of clay particles (below 20 μm and below 0.2 μm). CPEs were prepared by adding various amounts of clay (1, 2.5, 5, and 10%). CPEs made from a mixture of graphite powder and solid paraffin are simple to prepare and offer a renewable surface essential for the electron transfer. 258 The use of CPEs in electroanalysis stems from their simplicity, minimal cost, and the possibility of facile modification by adding other compounds, thus giving the electrodes certain predetermined properties like high selectivity and sensitivity. 243,259 In order to realize a biosensor for the detection of acetaminophen, the electrochemical behavior of the doped CPEs was compared with that of the thin-PEI-film GCEs. The thin PEI film deposited on the surface of a GCE exhibited better mechanical stability, in spite of its relative water solubility, and an improved hydration layer, essential for the immobilization of the enzyme. The difference between the two electrode configurations was clear. In the CV recording of acetaminophen on the modified CPE, only the oxidation process could be observed (Figure 12A). By comparison, the PEI film electrodes showed a reversible oxidation and reduction process (Figure 14). The porosity of the clay film is obvious in this case, as it lets the neutral acetaminophen species reach the electrode surface quite easily. The current obtained on the polymeric film electrodes was, however, lower (10-15 μA) than that obtained on the clay-modified CPEs (100-350 μA). An increase of the current was noticed for the different particle sizes of the Valea Chioarului clay, the best response for acetaminophen being recorded for the fraction below 0.2 μm, due to its greater active surface area (Figure 14). In the human body, acetaminophen is metabolized to N-acetylbenzoquinonimine (NAPQI). 260 The same conversion can be achieved in vitro by HRP in the presence of hydrogen peroxide. The amperometric studies (Figure 15) were performed by recording the electrochemical reduction of the enzymatically generated electroactive oxidized form of acetaminophen (NAPQI), in the presence of hydrogen peroxide, after stepwise additions of small amounts of 10⁻⁴ M acetaminophen solution: 243,260

Acetaminophen + H₂O₂ → NAPQI + 2H₂O (catalyzed by HRP)
NAPQI + 2H⁺ + 2e⁻ → Acetaminophen

The detection limit was calculated from the standard deviation of the blank baseline (0.1 M phosphate buffer, pH 7.4) noise and the biosensor's response to acetaminophen. 256 The amperometric assays were performed at -0.2 V, which corresponds to the reduction potential of NAPQI. The HRP/clay/PEI/GCE biosensor had a detection limit of 6.28×10⁻⁷ M and a linear range between 5.25×10⁻⁶ M and 4.95×10⁻⁵ M. The calibration curve equation of the amperometric biosensor was y = 0.0139x + 3×10⁻⁸ (R² = 0.996), with a linear range between 5×10⁻⁶ and 4.95×10⁻⁵ M. The reproducibility was also tested on the same electrode after 10 successive analyses on three different days; the RSDs of the slopes of the linear responses, calculated by the Lineweaver-Burk method, were less than 15%. 243
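Given the calibration line above, an unknown concentration is obtained by inverting the linear fit. A minimal sketch, assuming the slope units are A per M (the units are not stated explicitly in the text) and using a hypothetical blank noise chosen so that the 3σ criterion roughly reproduces the reported 6.28×10⁻⁷ M figure:

```python
# Sketch: convert a measured current into concentration with the reported
# calibration line y = 0.0139 x + 3e-8 (assumed units: y in A, x in M),
# and recompute a 3-sigma detection limit from a hypothetical blank noise.
SLOPE = 0.0139      # A/M, slope of the reported calibration equation
INTERCEPT = 3e-8    # A, intercept of the reported calibration equation

def concentration(i_measured):
    """Invert the linear calibration; valid only inside the linear range."""
    return (i_measured - INTERCEPT) / SLOPE

SD_BLANK = 2.9e-9   # A, hypothetical standard deviation of the blank baseline
print(f"i = 2.0e-7 A -> c = {concentration(2.0e-7):.2e} M")
print(f"LOD = 3*SD/slope = {3 * SD_BLANK / SLOPE:.2e} M")
```

The example current of 2.0×10⁻⁷ A maps to about 1.2×10⁻⁵ M, which lies inside the reported linear range (5×10⁻⁶ to 4.95×10⁻⁵ M).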
The results are comparable with those recently presented in the literature.

Conclusions

New composite materials based on clay micro- and nanoparticles were developed for the construction of electrochemical sensors. The electrochemical behavior of acetaminophen and riboflavin phosphate was tested for the first time on clay-modified CPEs with different clay particle sizes using CV, and new electrochemical methods were elaborated for their detection and applied in pharmaceutical analysis. The obtained results emphasized the large active surface area and the adsorbent and ion exchange properties, and showed the advantages offered by the Răzoare and Valea Chioarului clays for the development of novel modified electrodes applied in pharmaceutical analysis. 239 Carbon paste was chosen in this work because it is accessible, low-cost, and easy to manipulate, and it showed good electrochemical properties. It offered, at the same time, good entrapment of the clay at the electrode surface. The main disadvantage of the carbon paste was the impossibility of quantifying the amount of clay at the electrode surface that comes into contact with the electrolyte and is actually involved in the electrochemical measurements. Therefore, the results obtained in this study did not show higher performance in comparison with already existing methods. A novel biosensor for acetaminophen, based on HRP immobilization in a PEI and clay porous gel film, was developed. When comparing the CV recordings of acetaminophen on the modified CPE and on the PEI film electrodes, it was concluded that in the first case only the oxidation process could be observed, while in the second case reversible oxidation and reduction processes were visible. For the modified GCE, the current response increased proportionally with the active surface area, as a result of the decrease in particle size. The amperometric detection of acetaminophen was successfully achieved, with a detection limit of 6.28×10⁻⁷ M and a linear range between 5×10⁻⁶ M and 4.95×10⁻⁵ M. The clay offered both good entrapment and a "protective" environment for the biocomponent. This immobilization strategy could be exploited for the immobilization of other biomolecules in the detection of pharmaceuticals. 243 These are preliminary results, as the studies did not aim at developing new analytical methods for the detection of the selected pharmaceutical probes (acetaminophen, ascorbic acid, and riboflavin), but at emphasizing the specific properties of the Romanian clays employed, which recommend them for further exploitation in electrode modification and in the development of new composite materials applied in sensor and biosensor fabrication.
3 Tetrabutylammonium-modified clay film electrodes: characterization and application to the detection of metal ions

Introduction

Even though their development started a few decades ago, clay-modified electrodes (CLMEs) still represent a notable field of interest, especially for their applications in electroanalysis, as discussed in several reviews [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF][START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]126,249,261 and illustrated in recent research papers where CLMEs have been used as sensors or biosensors (see examples, e.g., from our group 239,243,262 and from those of Ngameni [START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF][START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF][START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF][263][264][265][266][267], Mousty 268,269, or some others [START_REF] Dias Filho | Study of an organically modified clay: Selective adsorption of heavy metal ions and voltammetric determination of mercury(II)[END_REF]270,271). Clay minerals used as electrode modifiers are primarily (but not only) phyllosilicates, i.e., layered hydrous aluminosilicates. An important characteristic of these minerals is their interlayer distance, which depends on the amount of intercalated water and exchangeable cations within the interlayer space. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] They also exhibit attractive properties such as a relatively large specific surface area, ion exchange capacity, and the ability to adsorb and intercalate organic species. Smectites, especially montmorillonite (MMT), have been mostly used for CLME preparation in thin-layer configuration, due to their high cation exchange capacity (typically 0.80-1.50 mmol g⁻¹) and thixotropy, which are likely to generate stable and adhesive clay films on electrode surfaces. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]249 One can recall here that clays are insulating materials, so that their use in electrochemistry requires close contact with an electrode surface, which can be achieved either via the dispersion of clay powders in a conductive composite matrix (e.g., a carbon paste electrode 272) or via the deposition of clay particles as thin films on solid electrode surfaces. An advantage of the clay-film modified electrodes is that they are binder-free, thanks to the particular platelet morphology of the clay particles, which gives them self-adhesive properties toward polar surfaces; [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF] this ensures a better interaction with most electrode materials and, consequently, a more durable immobilization.
Clay films can be attached to solid electrode surfaces by physical means (through solvent casting, spin-coating, layer-by-layer assembly [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249,273, or electrophoretic deposition [START_REF] Song | Preparation of clay-modified electrodes by electrophoretic deposition of clay films[END_REF]), by covalent bonding (via silane or alkoxysilane coupling agents) [START_REF] Rong | Electrochemistry and photoelectrochemistry of pillared claymodified-electrodes[END_REF]274, or, more recently, in the form of clay-silica composite films 275. At the beginning, CLMEs were mainly prepared from bare (unmodified) clay materials, [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249 but recent advances have been mainly based on organically-modified clays (obtained either by intercalation or by grafting of organic moieties in the interlayer region of the clay [276][277][278][279]), because the latter make it possible to tune, control and extend the clay properties, therefore resulting in better analytical performance in terms of selectivity and sensitivity [START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF][START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF][START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF][263][264][265][266][267]. Regarding sensitivity, preconcentration electroanalysis at modified electrodes (in which the analyte is first accumulated at open circuit and then electrochemically detected) has proven to be a powerful method to improve the performance of electrochemical sensors. In this respect, the ion exchange capacity of clays and the binding properties of organoclays have been exploited for the detection of metal ions using CLMEs (see examples in reviews [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]126). So far, however, very few examples have been based on the use of intercalated organoclay materials for that purpose, [START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF]265 in spite of the simpler modification procedure for intercalation compared with grafting, for instance (the latter requiring the use of particular organoalkoxysilane reagents [START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF]267). Here, we have thus examined the interest of CLMEs based on clay particles intercalated with tetrabutylammonium (TBA⁺) moieties for the preconcentration electroanalysis of some metal ions (i.e., Cd²⁺, Pb²⁺ and Cu²⁺).
The choice of this tetraalkylammonium intercalation reagent was motivated by at least two features: (1) TBA+ ions can easily be intercalated in the interlayer region of smectite clays by ion exchange 280, and (2) their presence modifies the packing configuration in the clay interlayer, thus influencing the sorptive properties of the organoclay 281, notably with respect to the adsorption of metal ions such as Cu2+ or Cd2+ 282,283, which could be advantageously exploited here with CLMEs. The present study describes the deposition of tetrabutylammonium-modified clay particles (a montmorillonite-rich natural clay from Romania) onto a glassy carbon electrode surface, subsequently covered with a dialysis tubing cellulose membrane, a configuration ensuring fast mass transport of analytes from the solution through the film to the electrode surface. The permeability of the resulting thin clay films towards selected redox probes (cationic, neutral, and anionic) was characterized, and the permeability and long-term operational stability of the modified electrodes are discussed in relation to their physico-chemical characteristics. Surprisingly, the fully doped tetrabutylammonium clay showed lower electrochemical performance towards cations than the unmodified clay, which is explained by the positive electrostatic barrier and by the blockage of interlayer adsorption sites by the tetrabutylammonium ion. The TBA+ ions were therefore partially removed, which lowered the electrostatic barrier, freed adsorption sites, and improved the electrochemical performance of the modified clay. The modified electrode was then applied to the detection of metal ions chosen as relevant biological and environmental contaminants (Cd2+, Pb2+ and Cu2+). After optimization of various experimental parameters, a stable and reliable sensor was obtained.

Experimental

Clays, reagents, and electrochemical instrumentation

NaNO3 (99%, Fluka), HCl (37%, Riedel de Haen), and tetrabutylammonium bromide (TBAB, 99%, Sigma) were used as received without further purification. The redox probes employed for the permeability tests were of analytical grade: ferrocene dimethanol (Fc(MeOH)2, Alfa Aesar), potassium hexacyanoferrate(III) (K3Fe(CN)6, Fluka), and hexaammineruthenium chloride (Ru(NH3)6Cl3, Sigma-Aldrich). Single-component and multicomponent cation solutions were prepared daily by diluting standardized stock solutions (1000 mg L-1 of each metal ion, from Sigma-Aldrich). These standards were also used to check the copper(II) solutions prepared from Cu(NO3)2·3H2O in 0.05 M HNO3, the lead(II) solutions prepared from Pb(NO3)2 in 0.5 M HNO3, and the cadmium(II) solutions prepared from Cd(NO3)2·4H2O in 0.5 M HNO3 (all salts from Sigma-Aldrich), which served to prepare the diluted solutions for the preconcentration studies (final pH in the electrolyte 5.5, unless stated otherwise). The supporting electrolyte was 0.1 M NaNO3. All solutions were prepared with high purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. The clay sample used in this study was a natural Romanian clay from Valea Chioarului (Maramureş County), consisting mainly of MMT with minor amounts of quartz. Its physico-chemical characterization is provided elsewhere 239. The structural formula is (Ca0.06Na0.27K0.02)Σ=0.35(Al1.43Mg0.47Fe0.10)Σ=2.00(Si3.90Al0.10)Σ=4.00O10(OH)2·nH2O.
It is characterized by a surface area (N2, BET) of 190 m2 g-1. Only the MMT-rich fine fraction of the clay (< 1 μm, collected by sedimentation according to Stokes' law after the raw clay had been suspended in water, ultrasonicated for about 15 min and allowed to settle, followed by centrifugation and ultracentrifugation of the supernatant phase) was used here. This fine fraction has a CEC of 0.78 meq g-1 and was used previously in an amperometric biosensor for acetaminophen detection 243. Its XRD diffractogram showed a high content of MMT (with its characteristic peaks at 2θ (°): 6.94; 19.96; 21.82; 28.63; 36.14; 62.01) and also confirmed the almost negligible presence of other minerals.

Apparatus and characterization procedures

Electrochemical experiments were carried out using a PGSTAT-12 potentiostat (EcoChemie) equipped with the GPES software. A conventional three-electrode cell configuration was employed for the electrochemical measurements: film-modified GCEs as working electrodes, an Ag/AgCl/3 M KCl electrode (Metrohm) as reference, and a platinum wire as counter electrode. Cyclic voltammetry (CV) determinations were carried out in 1 mM K3Fe(CN)6, 0.1 mM Ru(NH3)6Cl3, and 5 mM Fc(MeOH)2 (in 0.1 M NaNO3). CV curves were typically recorded in multisweep conditions at a potential scan rate of 20 mV s-1 and used to qualitatively characterize accumulation/rejection phenomena and mass transport through the various films. Accumulation-detection experiments were also performed using copper(II), lead(II), and cadmium(II) as model analytes. Typically, open-circuit accumulation was made from diluted cation solutions (5×10-7-10-6 M) at pH 5.5, and voltammetric detection was achieved after medium exchange to a cation-free electrolyte solution (0.1 M NaNO3) by square wave voltammetry (SWV), at a scan rate of 5 mV s-1, a pulse amplitude of 50 mV and a pulse frequency of 100 Hz. CNH elemental analysis was performed on the unmodified clay and on the fully TBAB-doped MMT before and after partial TBAB extraction, using an Elementar Vario Micro Cube with the following experimental conditions: combustion temperature 950 °C; reduction temperature 550 °C; He flow 180 mL min-1; O2 flow 20 mL min-1; pressure 1290 mbar. The film structure was characterized by X-ray diffraction (XRD), FTIR and Raman spectroscopy. XRD measurements were performed with a BRUKER D8 Advance X-ray diffractometer whose goniometer is equipped with a germanium monochromator in the incident beam, using Cu Kα1 radiation (λ = 1.54056 Å) in the 2θ range 15-85°. FTIR spectra were measured on a Jasco FT/IR-4100 spectrophotometer equipped with the Jasco Spectra Manager Version 2 software (550-4000 cm-1). Raman spectra were acquired with a confocal Raman microscope (Alpha 300R from WiTec) using the WiTec Control software for data interpretation (-1000-3600 cm-1, resolution > 0.5 cm-1). Electrochemical impedance spectroscopy (EIS) was used to characterize the electron transfer properties of the modified electrodes; the Nyquist plots were recorded with an Autolab potentiostat equipped with a FRA2 module and version 4.9 software.

Clay modification with TBAB

A MMT sample (10 g, particle size < 1 μm) was suspended in ultrapure demineralized water (clay concentration in water 4%). A quantity of Na2CO3 equivalent to 100 meq per 100 g of clay was then added to the clay suspension, which was stirred for 30 min at 97 °C. An aqueous solution of TBAB (corresponding to 1.1 times the montmorillonite cation exchange capacity (CEC of MMT = 0.78 meq g-1), i.e., 0.85 meq of TBAB per gram of clay) was then added, and the suspension was stirred for a further 30 min at room temperature. The resulting solid was separated by centrifugation and washed until free of residual Br-. The organoclay material was then dried for 48 h at 60 °C. Depending on the analytical purpose, the TBAB was either kept in the clay or partially solvent-extracted from the powder with an ethanol solution containing 0.1 M NaClO4 for 30 min under moderate stirring.
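As a quick cross-check of this exchange stoichiometry, the required TBAB mass can be computed from the CEC; a minimal sketch in Python (molar mass of TBAB from standard atomic masses; all other values taken from the procedure above):

# Sketch: mass of TBAB supplying 1.1 x CEC for the 10 g clay batch above
M_TBAB = 322.37              # g/mol, C16H36NBr
cec = 0.78e-3                # eq/g (0.78 meq/g); TBA+ is monovalent, so eq = mol
m_clay = 10.0                # g
n_tba = 1.1 * cec * m_clay   # ~8.6e-3 mol TBA+ (i.e., ~0.86 meq per gram of clay)
m_tbab = n_tba * M_TBAB      # ~2.8 g of TBAB to be dissolved in the added solution
print(f"{n_tba * 1e3:.2f} mmol TBA+ -> {m_tbab:.2f} g TBAB")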
Electrode assembly

Glassy carbon electrodes purchased from e-DAQ (GCEs, 1 mm in diameter) were first polished using 1 and 0.05 μm alumina powder, then washed with water and sonicated for 15 min in distilled water to remove any trace of alumina, leading to an electrochemically active surface area of about 7.85×10-3 cm2. Clay or organoclay suspensions (5 mg/mL) were then prepared in distilled water, stirred for 20 min, sonicated for 10 min, and left to stand at room temperature. The clay or organoclay film was deposited on the GCE by spin-coating: a 2.5 μL volume of the suspension (5 mg/mL) was placed on the electrode surface, which was then spun for 20 min at 2000 rpm. The electrode was subsequently dried at room temperature for 1 h. The clay or organoclay film was covered with a dialysis tubing cellulose membrane (Sigma), held in place first with a rubber O-ring and then with laboratory film to prevent solution penetration under the membrane. The five systems characterized in this study are: the bare glassy carbon electrode (GCE), the bare glassy carbon electrode with cellulose membrane (GCE/M), the unmodified MMT film coated on GCE with cellulose membrane (GCE+MMT/M), and the TBAB-modified MMT film coated on GCE with cellulose membrane, before (GCE+MMT+TBAB/M) and after partial TBAB extraction (GCE+MMT+TBAB(-TBAB)/M).

Results and discussions

Physical-chemical characterization of clays

XRD was first used to characterize possible structural changes of the smectite clay upon intercalation of TBAB. As expected, prior to surfactant entrapment, the clay film exhibited the same MMT characteristics as those reported for the raw clay particles in the experimental section (diffraction lines at 2θ values (°) of 6.9; 19.9; 21.8; 28.6; 36.1; 62.0; data not shown). The unit cell parameters and the profile discrepancy indices (Table 5) indicate an expansion of the interlayer region between the clay sheets. As the clay was in contact with a TBAB solution, this expansion is certainly due to the incorporation of TBA+ species into the clay interlayer (in agreement with a similar process described elsewhere for other surfactants such as cetyltrimethylammonium bromide (CTAB) 284). After partial removal of the TBAB, the clay interlayer distance remained almost at the values obtained for the fully doped clay, MMT+TBAB (see Table 5). The FTIR spectrum of the Valea Chioarului clay (Figure 16, black line) presented the typical bands attributed to the characteristic groups of MMT, described elsewhere. 243,244 The FTIR spectrum of TBAB (Figure 16, blue line) showed a band at 739 cm-1 corresponding to the C-H rocking vibration of the alkyl chains; the band near 1020-1250 cm-1 can be assigned to C-N stretching vibrations, and the bands near 1450-1470 cm-1 to C-H bending vibrations of the alkyl chains.
The bands near 1350-1370 cm-1 correspond to C-H rocking vibrations of the alkyl chains, and the bands near 2969-2945 cm-1 to C-H stretching vibrations; the bands near 600-1300 cm-1 can be assigned to vibrations of the C-C aliphatic chains. 246 For the fully doped TBAB clay sample (Figure 16, red line), the peak at 3625 cm-1 attributed to the stretching vibration of the hydroxyl group, the band near 1637 cm-1 due to the bending vibration of the H-O-H group, and the broad band near 1000-1200 cm-1 assigned to the Si-O stretching vibration are common with the MMT spectrum. At the same time, the bands near 2969-2945 cm-1 due to C-H stretching vibrations, the bands near 1475 cm-1 assignable to C-H bending vibrations, and the bands near 1350 cm-1 corresponding to C-H rocking vibrations all indicate the presence of the surfactant in the clay. It can be assumed that the broad band at 1000-1200 cm-1 also includes the C-N stretching vibrations. 243,244,246 After the partial removal of the TBAB, the FTIR spectrum (Figure 16, green line) presents almost all the bands attributed to the surfactant, but with lower intensities. The Raman spectrum of MMT is characterized by three strong bands near 200, 425 and 700 cm-1 (Figure 17, black line). [285][286][287] The sharp Raman peak at 706 cm-1 is due to SiO4 vibrations, and the broader feature near 420 cm-1 has been assigned to M-OH bending vibrations 285,287 and to Si-O-Si(Al) bending modes. 286,287 The position of the strong band near 201 cm-1 varies with the clay mineral type 285,287; it probably arises from SiO4 vibrations and is influenced by Al substitution and by the dioctahedral or trioctahedral character of the mineral. Weaker bands due to the OH bending vibration are observed near 850-920 cm-1. 287 In the TBAB spectrum (Figure 17, blue line), the sharp Raman bands near 2800-3000 cm-1 are due to C-H stretching vibrations. The strong bands near 250-400 cm-1 can be assigned to vibrations of the C-C aliphatic chains. The broader feature near 1380 cm-1 can be assigned to the CH3 bending vibration, while the feature near 1460 cm-1 is due to asymmetric CH3 vibrations. The band near 1331 cm-1 corresponds to the C-N bending vibration, and the weaker band near 1057 cm-1 is due to C-C vibrations of the aliphatic chain. 288 The Raman spectrum of the surfactant-modified MMT shows, as expected, characteristic bands of both MMT and TBAB. The sharp peak at 705 cm-1 attributed to SiO4 vibrations, the broader feature near 421 cm-1 due to M-OH bending vibrations (Figure 17, red line) 285,287 and to Si-O-Si(Al) bending modes 286,287, and the weaker bands near 850-920 cm-1 due to the OH bending vibration 287 all correspond to the MMT Raman spectrum. The sharp Raman bands near 2800-3000 cm-1 due to C-H stretching vibrations and the bands near 250-400 cm-1 assigned to the C-C aliphatic chain vibrations are common with the TBAB Raman spectrum. Likewise, the feature near 1450 cm-1 due to asymmetric CH3 vibrations, the band near 1318 cm-1 corresponding to the C-N bending vibration, and the weaker band near 1058 cm-1 due to C-C aliphatic chain vibrations 288 clearly demonstrate the incorporation of the surfactant into the clay interlayer.
After the partial removal of the surfactant from MMT, the bands assigned to TBAB are still present in the Raman spectrum (Figure 17, green line), though with lower intensities, which explains why the interlayer spacing was maintained (Table 5). The X-ray diffraction, FTIR and Raman data were corroborated by the elemental analysis results presented in Table 6. Since the doping agent is a quaternary ammonium salt, the TBA+ content can be calculated from the N% values: it amounts to 10.17% for the fully doped clay and decreases to 2.07% after partial extraction of the surfactant.
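Because TBA+ is the only nitrogen-containing species in the composite, this conversion is a simple stoichiometric one; a minimal sketch (the N% inputs shown are back-calculated from the Table 6 loadings, for illustration only):

# Sketch: TBA+ weight fraction from the CNH nitrogen content
# (one N atom per TBA+ cation; M(TBA+) = 242.46 g/mol, M(N) = 14.007 g/mol)
M_TBA, M_N = 242.46, 14.007

def tba_content(pct_N):
    """Convert measured nitrogen wt% into TBA+ wt%, assuming all N stems from TBA+."""
    return pct_N * M_TBA / M_N

print(tba_content(0.59))   # ~10.2 wt%, fully doped clay
print(tba_content(0.12))   # ~2.1 wt%, after partial extraction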
Clay film permeability

The presence of the MMT film at the GCE surface increases the voltammetric response to Ru(NH3)6 3+ (Figure 18C), owing to the accumulation of this positively-charged electroactive probe by ion exchange in the clay film. For GCE+MMT/M, GCE+MMT+TBAB/M, and GCE+MMT+TBAB(-TBAB)/M, the stationary current value is reached after 30 potential scans at 0.020 V s-1 (Figures 18C, 18D, 18E). This behavior was observed for all the MMT samples used (unmodified or modified with different compounds) and corresponds to diffusion from the solution/clay interface to the electrode surface, in agreement with other authors 251. It confirms the cation exchange capacity of the modified clay, which can be exploited in ion exchange voltammetry (detection of cations after preconcentration). The first voltammetric cycles recorded with the MMT-modified GCE show less intense peaks than those obtained on bare GCE, because the clay film reduces the active area of the electrode. This was observed only when potential cycling was started immediately after immersion of the electrode in the analyte solution; if the electrode was kept in the electrolyte before testing, the first voltammetric cycle showed peak currents comparable with those obtained on the bare electrode (data not shown). The presence of TBA+/TBAB in the clay is indicated by the lower response of the film to the Ru(NH3)6 3+ probe (less accumulation by cation exchange, see Figure 18D). This signal decrease also tends to support the presence of some holes in the film (with a crack-free film, the response to Ru(NH3)6 3+ would have been totally suppressed). On the other hand, after partial removal of the surfactant, the accumulation of Ru(NH3)6 3+ species upon continuous potential cycling is more marked (compare Figures 18E and 18C), owing to the increased clay interlayer spacing and to cation exchange in the clay. An important feature here is that the surfactant was extracted using ethanol and NaClO4 (not the classically used ethanol/HCl mixture) to avoid any chemical degradation of the clay (i.e., acid hydrolysis of the aluminum sites in the aluminosilicate) and to maintain its cation exchange capacity. These results indicate a promising use of GCE+MMT+TBAB(-TBAB)/M for the preconcentration electroanalysis of cationic analytes (see section 3.3 for confirmation). The negatively charged clay repels the negatively charged [Fe(CN)6]3- (Figure 19C) and the electrochemical signal decreases. On the basis of the net negative charge of the MMT clay, negatively charged species would be expected to be totally rejected by the clay; the results in Figure 19 thus also indicate that the MMT and TBAB-modified MMT films may contain some cracks allowing [Fe(CN)6]3- to diffuse to the electrode surface. Moreover, some studies in the literature report that even cation-exchanging clays can accumulate anionic species to some extent. [START_REF] Navrátilová | Cation and anion exchange on clay modified electrodes[END_REF][START_REF] Kula | Sorption and determination of Hg(II) on clay modified carbon paste electrodes[END_REF][START_REF] Navratilova | Determination of gold using clay modified carbon paste electrode[END_REF]251,289,290 Prior to surfactant extraction, a very small but clearly visible signal was observed with the Fe(CN)6 3- probe (see Figure 19D), growing slightly and continuously upon successive cycling. This behavior suggests some accumulation of the negatively-charged Fe(CN)6 3- species, which can be attributed to the fact that the tetrabutylammonium cation, TBA+, is likely to accommodate on the clay particles. 284 It is known that when smectite-type clays are treated with a cationic surfactant at a concentration ranging between 0.5 and 1.5 times the CEC of the clay, the surfactant molecules adopt a bilayer or pseudotrimolecular arrangement within the clay platelets, with both vertical and horizontal orientations of the alkylammonium chains. 291 The aggregation of the organic cations therefore occurs via both ion exchange and hydrophobic bonding, which creates positive charges in the clay layer and on the clay surface, thereby allowing uptake of anionic species by the clay composite via the formation of surface-anion complexes. 292,293 In the present case, the slight accumulation of redox species observed in Figure 19D can be ascribed to the formation of ion pairs between TBA+ cations and Fe(CN)6 3- anions. The accumulation effect is, however, much lower than for classical surfactant-modified clay films (e.g., smectite clay modified with hexadecyltrimethylammonium 265), perhaps because of the presence of the cellulose membrane around the surfactant-modified clay particles. After partial removal of the surfactant, the absence of a signal for the negatively-charged Fe(CN)6 3- species (see Figure 19E) is again explained by electrostatic repulsion from the negatively-charged clay sheets. All the clay films used in this work strongly attenuate the current response in comparison with the bare GCE (about 70% signal suppression). With the neutral redox probe Fc(MeOH)2, the signal recorded at the MMT-modified GCE is quite similar to that obtained at the bare GCE (compare Figures 20C and 20A), which proves that the clay film is still porous, letting the neutral Fc(MeOH)2 species reach the electrode surface quite easily. The reduction potential is shifted in the negative direction in the presence of the clay films, from 0.034 V at the bare GCE to -0.09 V at the MMT-modified GCE and -0.02 V at GCE+MMT+TBAB(-TBAB)/M. The oxidation peak potential is shifted to less positive values, from 0.16 V at the bare GCE to 0.03 V for GCE+MMT/M. The neutral Fc(MeOH)2 probe is still detectable at the GCE+MMT+TBAB(-TBAB)/M film electrode, but less easily than at GCE+MMT/M (compare Figures 20E and 20C), because the increased interlayer spacing makes it more difficult for the probe molecules to reach the underlying electrode surface.

EIS determinations

EIS was employed to characterize the electron transfer properties of the modified electrodes (Figure 21).
The typical Nyquist plot of EIS includes a semicircle and a linear region, which correspond to the electron-transfer-limited process and to the diffusion-limited process, respectively. The proposed equivalent circuit is R1(Q1[R2W1]), both for the unmodified GCE and for the GCE modified with MMT (bare or with TBAB). The conventional double-layer capacitance Cdl is replaced by a constant phase element (CPE), which accounts for the non-uniform behavior of adsorbed species on an irregular and small electrode surface. The reaction appears to occur in a single step, and a combination of kinetic and diffusion processes with infinite diffusion thickness describes the whole process. The high-frequency section of the Nyquist curves describes an arc whose diameter gives the Rct value (the electron transfer resistance), which increases in the presence of MMT from 128.58 Ω (GCE) to 395.25 Ω (GCE+MMT+TBAB(-TBAB)/M), owing to the coverage of the electrode surface with non-conductive clay films that obstruct electron transfer. The best electrode surface coverage is achieved using TBAB-modified MMT films after surfactant extraction. The EIS results confirmed that the clay films were fixed at the GCE surface and that their presence decreases the electron transfer rate of [Fe(CN)6]3-.
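The impedance of the R1(Q1[R2W1]) circuit can be reproduced numerically to check such fits; a minimal sketch (except for R2, taken close to the Rct value above, the parameter values are illustrative placeholders, not the fitted ones):

# Sketch: impedance of the R1(Q1[R2 W1]) equivalent circuit
import numpy as np

def Z_circuit(f, R1=50.0, R2=395.25, Q=1e-5, n=0.9, sigma=200.0):
    w = 2 * np.pi * f
    Z_cpe = 1.0 / (Q * (1j * w) ** n)        # constant phase element
    Z_w = sigma * (1 - 1j) / np.sqrt(w)      # semi-infinite Warburg element
    return R1 + 1.0 / (1.0 / Z_cpe + 1.0 / (R2 + Z_w))

f = np.logspace(5, -1, 60)                   # 100 kHz down to 0.1 Hz
Z = Z_circuit(f)
nyquist = np.column_stack((Z.real, -Z.imag)) # semicircle plus linear diffusion tail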
Optimization of experimental conditions

Supporting electrolyte for optimal metal ion detection

Different supporting electrolytes are known to give different electroanalytical responses at CLMEs towards certain analytes, by increasing or decreasing the catalytic current response and/or lowering the detection potential. To choose a suitable detection medium, the electrochemical behavior of the studied ions was examined by SWV at GCE+MMT+TBAB(-TBAB)/M in different media (0.1 M HCl; 0.2 M HNO3; 0.1 M KCl (pH 4.0); 0.1 M NaNO3 (pH 5.5 in ultrapure water)). The Cu(II) oxidation peak current is higher in 0.2 M HNO3 (Figure 22), where the peak potential is also shifted to less negative values (-0.17 V vs. Ag/AgCl). In 0.1 M NaNO3, the peak potential appears at -0.22 V vs. Ag/AgCl, but the peak shape is more appropriate for analytical work; moreover, an acidic medium might destroy the internal structure of the clay. Therefore, 0.1 M NaNO3 was used as the supporting electrolyte in all subsequent experiments on Cu(II) detection.

Accumulation time

The effect of accumulation time was studied in the range from 0 to 25 min in Cu2+ and Cd2+ solutions at different concentrations; the results are presented in Figure 23. The peak current increases with the accumulation time for both Cu(II) and Cd(II) and, at higher concentrations, reaches a steady state after more than 15 min, which can be attributed to the saturation of the accessible adsorption sites in the clay film. The accumulation time was therefore set according to the amount of analyte in solution: for analyses of Cu2+ and Cd2+ in solutions of lower cation concentration, a longer period may be required to accumulate the metal ions to detectable levels. Focusing on the most sensitive system (GCE+MMT+TBAB(-TBAB)/M), the relatively wide linear response range (from 0 to 20-25 min) shows that the system is not easily saturated with metal ions and tolerates longer accumulation periods. These variations are typical of preconcentration electroanalysis at modified electrodes: they involve accumulation by analyte binding to active centers, with a first linear increase of the signal followed by leveling off at steady state due to ion exchange equilibrium and saturation of the ion exchange sites. This is in agreement with previous observations for copper(II) and cadmium(II) electroanalysis at other kinds of modified electrodes [START_REF] Marchal | Determination of cadmium in bentonite clay mineral using a carbon paste electrode[END_REF]191,294,295. For the subsequent determinations, preconcentration times of 5 to 10 min were used, depending on the type of experiment.

TBAB effect on montmorillonite

The effect of TBAB insertion into the clay structure was studied for both Cu2+ and Cd2+ aqueous solutions after 5 min of accumulation at open circuit. Figures 24A and 24B show that the modification of MMT with TBAB (after surfactant removal) approximately doubles the electroanalytical signal for all the cations tested (compare curves a and b in Figures 24A and 24B). This confirms that the interlayer distance between the clay sheets increased as a result of TBA+ ion exchange. Ten consecutive voltammetric measurements were performed with the same modified electrode in order to assess the stability of the system towards Cu2+ and Cd2+, respectively. The MMT+TBAB(-TBAB)/M-modified GCE shows good long-term operational stability for Cu2+ (Figure 25) and Cd2+ (Figure 26), and good reproducibility for successive preconcentration-detection steps on the same electrode (see some example signals in the insets of Figures 25 and 26). This can be ascribed to the durable mechanical immobilization of the clay material by the cellulose membrane. Both the unmodified and the TBAB-modified MMT films (the latter tested after partial TBAB extraction) present good stability, but the clay modification improves the preconcentration properties towards Cu(II) and Cd(II) ions. After each measurement, the modified electrode was regenerated by immersing it in 0.1 M NaNO3 solution and washing under magnetic stirring for 2 min. The resulting relative standard deviation (RSD) is 0.37% for Cu(II) and 1.554% for Cd(II) detection, indicating that the sensor obtained by fixing the clay film on the electrode with the cellulose membrane can readily be used for repetitive measurements.

Calibration data

SWV was used to determine the relationship between the analyte concentration (Cu(II) and Cd(II)) and the peak current intensity in the oxidation potential range, using GCE+MMT+TBAB(-TBAB)/M. As shown in Figures 27 and 28, no anodic peaks appear when the experiments are performed in Cu(II)- and Cd(II)-free solutions. In the case of Cu(II), an anodic peak appears at about 0.1 V vs. Ag/AgCl after open-circuit accumulation in Cu2+ solution, and the peak current increases with the cation concentration; the oxidation peak potential is slightly shifted to more positive values as the analyte concentration in the preconcentration solution increases. The LOD values were estimated on the basis of a signal-to-noise ratio of 3 267, while the LOQ values were estimated on the basis of a signal-to-noise ratio of 10.
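As an illustration of how the calibration parameters reported below are obtained, a minimal sketch of the linear fitting and of the S/N-based LOD/LOQ estimation (the current values and the blank noise level are hypothetical, chosen to be consistent with the fit reported below):

# Sketch: linear calibration and LOD/LOQ from signal-to-noise criteria
import numpy as np

c = np.array([0.12, 0.5, 1.0, 2.5, 5.0, 7.5])         # [Cu2+] / uM
i_p = np.array([0.58, 1.72, 3.20, 7.70, 15.1, 22.6])  # SWV peak current / uA (hypothetical)
slope, intercept = np.polyfit(c, i_p, 1)              # ~2.98 uA/uM and ~0.23 uA

s_noise = 0.036                                       # blank std. deviation / uA (assumed)
lod = 3 * s_noise / slope                             # S/N = 3  -> ~0.036 uM, i.e., ~3.6e-8 M
loq = 10 * s_noise / slope                            # S/N = 10 -> ~0.12 uM
print(f"I = {slope:.2f}*c + {intercept:.2f}; LOD = {lod*1e3:.0f} nM, LOQ = {loq*1e3:.0f} nM")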
The relationship between the Cu(II) concentration and the current intensity obtained by SWV was linear in the range from 1.2×10-7 M to 7.5×10-6 M, according to the equation I (μA) = 2.9823±0.1077 × [Cu2+] (μM) + 0.2333±0.03601, with a correlation coefficient of 0.9922 (8 points considered for the linear fit). For Cu(II), the LOD was estimated at 3.62×10-8 M and the LOQ at 10.86×10-8 M. In the case of Cd(II), an anodic peak appears at about -0.9 V vs. Ag/AgCl after open-circuit accumulation in Cd2+ solution, and the peak current increases with the cation concentration; no shift of the oxidation peak potential was observed when increasing the Cd2+ concentration in the preconcentration solution. The relationship between the Cd(II) concentration and the current intensity obtained by SWV was linear in the range from 2.16×10-7 M to 2.5×10-6 M, according to the equation I (μA) = 0.69791±0.0144 × [Cd2+] (μM) - 0.0277±0.0167, with a correlation coefficient of 0.9983 (6 points considered for the linear fit). The Cd(II) detection limit was estimated at 7.2×10-8 M, while the calculated LOQ value was 21.6×10-8 M.

Simultaneous determination in multicomponent cation solutions

Figure 29 shows the voltammetric response of a multi-component solution of Cd2+, Pb2+, and Cu2+ recorded at GCE+MMT+TBAB(-TBAB)/M after 5 min of preconcentration in a 10-5 M solution of each metal ion, compared with the signals obtained for monocomponent solutions determined under the same conditions. The interactions between the three cations are minimal in terms of peak current intensity, but the peak potential is shifted to less negative values for the Cd(II) signal, while for Cu(II) the shift is towards more negative values. The position of the Pb(II) peak is less affected by the presence of the other metal ions.

Interference study

The selectivity of the MMT-based sensor for Cu2+ and Cd2+ was evaluated in the presence of some common metal ions expected to influence the signals of electrochemical sensors: Pb2+, Co2+, Ni2+, Zn2+, Ba2+, Na+, and K+, tested at the same concentration. 10-6 M solutions of Cu2+ and Cd2+ and of the possible interfering ions were tested by SWV using the GCE+MMT+TBAB(-TBAB)/M system. The current intensity was measured for each determination and the results are presented in Table 7. The presence of Pb2+, Co2+, Zn2+, Ba2+, Na+, and K+ does not significantly influence the electrochemical signal of Cu2+ and Cd2+, but an important decrease (12-16%) in current intensity was observed in the presence of Ni2+ at the same concentration as the main analytes. To avoid such interferences in real sample analysis, dilution of the samples prior to measurement is recommended when they contain high concentrations of cations.

Conclusions

This work demonstrates that a low-cost, organically modified clay is an attractive electrode modifier for electrochemical sensors. TBAB-modified montmorillonite was prepared, and the effective modification and functionalization of MMT with TBAB were confirmed by XRD, FTIR, Raman, and EIS determinations. After partial removal of the surfactant, the resulting material preserved the basic properties of the clay (i.e., cation exchange) and exhibited excellent permeability properties and long-term mechanical stability, which could notably be exploited in preconcentration electroanalysis.
The selectivity of the MMT-based sensor for Cu2+ and Cd2+ was not significantly influenced by the presence of Pb2+, Co2+, Zn2+, Ba2+, Na+, and K+ ions. The detection limits were estimated at 3.62×10-8 M for Cu2+ and 7.2×10-8 M for Cd2+, respectively. The new material exhibited good permeability properties towards selected redox probes (cationic, neutral, and anionic). The partial removal of TBA+ ions minimized the positive electrostatic barrier towards cation adsorption and created free adsorption sites, thus improving the electrochemical performance of the new sensor material. In conclusion, the sensor was easily fabricated and showed linear response ranges and good reproducibility, making it a promising platform for the accumulation of other toxic cationic species; this low-cost device should be a useful alternative for the environmental monitoring of highly toxic contaminants.

4 Clay-mesoporous silica composite films generated by electro-assisted self-assembly

Introduction

Structuring electrode surfaces with inorganic thin films has become a well-established field of interest, notably for applications in electroanalysis. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]116,121,123,126,128,160,249,253,262,[296][297][298][299][300][301][302][303][304][305][306] Various materials have been used for this purpose, including zeolites 128,297-299, clays [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]126,249,262,300 and layered double hydroxides 300,301, silica 128,253,302 and silica-based organic-inorganic hybrids 121,302, sol-gel materials 121,123,303,304 and, more recently, ordered mesoporous materials. 116,128,160,305,306 The choice of one or another of these electrode modifiers is often driven by the particular properties (ion exchange, selective recognition, hosting capacity, size selectivity, redox activity, permselectivity, etc.) useful for the final application (preconcentration electroanalysis, electrocatalysis, permselective coatings, biosensors, ...). As most of these materials are electronic insulators, their use in connection with electrochemistry requires close contact with an electrode surface, which can basically be achieved by dispersing the as-synthesized powders in a conductive composite matrix (e.g., CPEs 272) or by depositing them as thin films on solid electrode surfaces. In the latter case, a critical point is the uniformity and long-term mechanical stability of the thin coatings, which can be challenging when particulate materials are deposited onto electrodes and often requires an additional polymeric binder. 307 This is especially the case for zeolite film modified electrodes; the situation is somewhat less problematic for clay film modified electrodes because the particular platelet morphology of the clay particles and their self-adhesive properties toward polar surfaces [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF] ensure better interaction with most electrode materials and, consequently, more durable immobilization.
Nevertheless, besides the classical physical attachment of a clay film to a solid electrode surface (through solvent casting, spin-coating or layer-by-layer assembly as the mainstream techniques [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249,273, or electrophoretic deposition [START_REF] Song | Preparation of clay-modified electrodes by electrophoretic deposition of clay films[END_REF]), other strategies based on covalent bonding (via silane or alkoxysilane coupling agents) have also been developed. [START_REF] Rong | Electrochemistry and photoelectrochemistry of pillared claymodified-electrodes[END_REF]274 On the other hand, the versatility of the sol-gel process makes it especially suitable for coating electrode surfaces with uniform deposits of metal or semimetal oxides (mainly silica) and organic-inorganic hybrid thin films of controlled thickness, composition and porosity. The method is intrinsically simple and exploits the fluidic character of a sol (typically made of alkoxysilane precursors for silica-based materials): the sol is cast on the surface of a solid electrode and allowed to gel and dry into a xerogel film 121,123,303,304. The driving force for film formation is thus solvent evaporation, and this approach is often sufficient to yield high quality films on flat surfaces 308, including the ordered mesoporous ones generated by evaporation-induced self-assembly in the presence of surfactant templates 139. Sol-gel thin films can also be generated onto electrode surfaces by electrolytic deposition. The method involves immersing the electrode in a hydrolyzed sol solution (typically in moderately acidic medium) and applying a suitable cathodic potential to generate hydroxyl ions locally at the electrode/solution interface; the resulting local pH increase catalyzes the polycondensation of the precursors and the growth of the silica film on the electrode surface. 150,309,310 This approach can be extended to the generation of organically-functionalized silica films 152,153 or to the co-deposition of sol-gel/metal nanocomposites 311 or conductive polymer-silica hybrids 312, and it is compatible with the encapsulation of biomolecules to build bioelectrocatalytic devices. 313,314 When applied in the presence of a surfactant template (i.e., CTAB), the method enables the deposition of highly ordered mesoporous silica films with mesopore channels oriented normal to the underlying support 148,149, a configuration ensuring fast mass transport of analytes from the solution through the film to the electrode surface, thus offering great promise for the elaboration of sensitive electroanalytical devices 182. Moreover, contrary to the evaporation method, the electro-assisted generation approach yields uniformly deposited sol-gel layers on electrode surfaces of complex geometry or with complex conductive patterns (i.e., gold CD-trodes 153, macroporous electrodes 315, metal nanofibers 316 or printed circuits 317), or at the local scale using ultramicroelectrodes 318,319. The method has also proven suitable for generating sol-gel materials through nano- or micro-objects deposited onto electrode surfaces, such as nanoparticles 320 or bacteria 321, which then act somewhat as templates for film growth.
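The amount of base generated at the interface during such a deposition scales with the cathodic charge; a rough estimate via Faraday's law (a minimal sketch: the current density is an assumed, illustrative value, and one electron per hydroxide ion is taken as the simplest case):

# Sketch: hydroxide electrogenerated per unit area during deposition
F = 96485.0            # C/mol, Faraday constant
j = 1.0e-3             # A/cm2, assumed cathodic current density (illustrative)
t = 10.0               # s, deposition time as used later in this work
n_OH = j * t / F       # mol/cm2 of OH-, assuming 1 e- per OH- generated
print(f"{n_OH:.2e} mol OH- per cm2")   # ~1e-7 mol/cm2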
In the present study, we have further extended the electro-assisted deposition method to generate clay-mesoporous silica composite films, and we have characterized their permeability towards selected redox probes (cationic, neutral, and anionic). The synthesis procedure involves first the deposition of clay particles (a montmorillonite-rich natural clay from Romania) onto a glassy carbon electrode surface, followed by the electro-assisted self-assembly of a surfactant-templated mesoporous silica around the clay particles. The use of a cationic surfactant (i.e., CTAB) in the synthesis medium was motivated by at least two points: it contributes to templating the silica film (which exhibits distinct permeability properties before and after extraction 148,149,182), and it can be incorporated into the interlayer region of the clay by cation exchange (which is known to modify the interlayer spacing between the clay sheets 284). Even though some silica-clay composites have been described in the literature [322][323][324], including two as thin layers on electrodes 325,326, the present work provides, to the best of our knowledge, the first example of electrogenerated clay-mesoporous silica composite films. The permeability properties and long-term operational stability of the modified electrodes are discussed in relation to their physico-chemical characteristics. 275

Experimental

Reagents and materials

Tetraethoxysilane (TEOS, 98%, Alfa Aesar), ethanol (95-96%, Merck), NaNO3 (99%, Fluka), HCl (37%, Riedel de Haen), and CTAB (99%, Acros) were used as received for the sol-gel film synthesis. The redox probes employed for the permeability characterization were of analytical grade: ferrocene dimethanol (Fc(MeOH)2, Alfa Aesar), potassium hexacyanoferrate(III) (K3Fe(CN)6, Fluka), and hexaammineruthenium chloride (Ru(NH3)6Cl3, Sigma-Aldrich); they were typically used in 0.1 M NaNO3 solution. A certified copper(II) standard solution (1000 ± 4 mg L-1, Sigma-Aldrich) was used to prepare diluted solutions for the preconcentration studies. The electrolytes KCl (99.8%) and HCl (36% solution) were obtained from Reactivul Bucureşti. All solutions were prepared with high purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. The clay sample used in this study was a natural Romanian clay from Valea Chioarului (Maramureş County), consisting mainly of smectite with minor amounts of quartz. Its physico-chemical characterization is provided elsewhere 239. The structural formula is (Ca0.06Na0.27K0.02)Σ=0.35(Al1.43Mg0.47Fe0.10)Σ=2.00(Si3.90Al0.10)Σ=4.00O10(OH)2·nH2O. It is characterized by a surface area (N2, BET) of 190 m2 g-1. Only the montmorillonite-rich fine fraction of the clay (< 0.2 μm, collected by sedimentation according to Stokes' law after the raw clay had been suspended in water, ultrasonicated for about 15 min and allowed to settle, followed by centrifugation and ultracentrifugation of the supernatant phase) was used here. This fine fraction has a cation exchange capacity (CEC) of 0.78 meq g-1. Its XRD diffractogram showed a high content of montmorillonite (with its characteristic peaks at 2θ (°): 6.9; 19.9; 21.8; 28.6; 36.1; 62.0).

Preparation of the clay-mesoporous silica films

Glassy carbon electrodes (GCE, 5 mm in diameter) were first polished on wet silicon carbide paper using 1 and 0.05 μm Al2O3 powder sequentially, and then washed in water and ethanol for a few minutes.
GCE were afterwards coated with a clay film, prepared by depositing a 10 μL aliquot of an aqueous clay suspension (5 mg mL-1) by spin-coating onto the GCE surface, as in 251. The film was dried for 1 h at room temperature prior to any further use. This electrode is denoted below as "GCE-clay". A mesoporous silica film was then electrogenerated through this film, around the clay particles, onto the electrode surface under potentiostatic conditions. This was typically achieved from a precursor solution containing 20 mL ethanol and 20 mL of an aqueous solution of 0.1 M NaNO3 and 0.1 M HCl, to which 13.6 mmol TEOS and 4.35 mmol CTAB were added under stirring (optimized conditions as in 148,149,182); the resulting sol was aged for 2 h prior to use. The GCE-clay electrode was then immersed in this sol, and electro-assisted deposition was performed by applying -1.3 V for 10 s. The electrode was then quickly removed from the solution, rinsed with water, and dried/aged overnight in an oven at 130 °C. The resulting composite film electrode is denoted "GCE-clay-mesopSiO2". It can be used as such or after template removal; in the latter case, the CTAB template is solvent-extracted with an ethanol solution containing 0.1 M NaClO4 for 5 min under moderate stirring. For comparison purposes, a template-free silica film was also deposited onto the GCE-clay electrode, under exactly the same conditions as above but without CTAB in the starting sol; it is denoted "GCE-clay-SiO2". 275
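For reference, the starting sol composition given above translates into the following approximate concentrations; a minimal sketch (the total volume is approximated by the 40 mL of mixed ethanol/water, neglecting the volume of the added reagents):

# Sketch: approximate concentrations in the starting sol
V = 0.040                  # L, ~20 mL ethanol + 20 mL aqueous solution
c_TEOS = 13.6e-3 / V       # ~0.34 M
c_CTAB = 4.35e-3 / V       # ~0.11 M (consistent with the "ca. 0.1 M" quoted below)
ratio = 4.35 / 13.6        # CTAB/TEOS molar ratio, ~0.32
print(f"TEOS ~{c_TEOS:.2f} M, CTAB ~{c_CTAB:.2f} M, CTAB/TEOS ~{ratio:.2f}")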
Apparatus and characterization procedures

All electrochemical experiments were performed using a PGSTAT-12 potentiostat (EcoChemie) monitored by the GPES software. A conventional three-electrode cell configuration was employed for the electrochemical measurements: film-modified GCEs as working electrodes, with a saturated Ag/AgCl electrode (Metrohm) and a platinum wire as reference and counter electrode, respectively. CV measurements were carried out in 5 mM Fc(MeOH)2, 1 mM K3Fe(CN)6, or 0.1 mM Ru(NH3)6Cl3 (in 0.1 M NaNO3). CV curves were typically recorded in multisweep conditions (ca. 20 cycles) at a potential scan rate of 20 mV s-1 and used to qualitatively characterize accumulation/rejection phenomena and mass transport through the various films. Accumulation-detection experiments were also performed using copper(II) as a model analyte. Typically, open-circuit accumulation was made from diluted copper(II) solutions (10-7-10-6 M) at pH 5.5, and voltammetric detection was achieved after medium exchange to a copper(II)-free electrolyte solution (0.1 M KCl + 0.1 mM HCl) by SWV, at a scan rate of 5 mV s-1, a pulse amplitude of 50 mV and a pulse frequency of 100 Hz. The film morphology was observed by scanning electron microscopy (SEM); micrographs were recorded with a Hitachi FEG S-4800 apparatus. The film structure was characterized by X-ray diffraction (XRD) in Bragg-Brentano geometry using a Panalytical X'Pert Pro diffractometer operating with a copper cathode (λKα = 1.54056 Å). Atomic force microscopy (AFM, Thermomicroscope Explorer Ecu+, Veeco Instruments SAS) was also used to evaluate the film thickness. 275

Results and discussions

Films preparation and permeability properties evaluated by cyclic voltammetry

The various electrode configurations investigated here are schematically represented in Figure 30. In the raw clay modified electrode (GCE-clay, Figure 30a), the clay particles are expected to lie randomly on the electrode surface [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249. The non-occupied volume between these particles is then filled upon electro-assisted generation of the surfactant-templated silica material (GCE-clay-mesopSiO2) or of the non-templated silica layer (GCE-clay-SiO2). It was previously proven that the electro-assisted deposition of sol-gel films involves the generation of hydroxyl ions under potential control, leading to a pH increase at the electrode/solution interface that catalyzes the sol-gel film deposition 148-150, 152, 182, 309, 310, 320. The method is thus conceptually different from the direct electrodeposition of composites such as clay-metal films (i.e., using the clay particles as quasi-templates 327) or conductive polymer-clay films 328, for which the deposited matter results from the direct electrochemical transformation of the precursors (reduction of metal ions or electropolymerization of monomers), so that film growth necessarily starts from the underlying electrode surface. In the present case, the electron transfer reaction does not electrochemically transform the precursors; it generates the catalyst (i.e., hydroxyl ions) that induces the polycondensation of the precursors and the formation of the silica network. These catalytic species are expected to be present in a rather thick region at the electrode/solution interface (i.e., the diffusion layer), so that gelification can occur simultaneously in the whole non-occupied volume between the clay particles to form the clay-mesoporous silica film (GCE-clay-mesopSiO2, Figure 30b-c) or the clay-silica composite film (GCE-clay-SiO2, Figure 30d). Indeed, it has been reported that the electro-assisted deposition of mesoporous silica through an assembly of microparticles deposited on an electrode surface proceeds in a multi-step process: the material starts to deposit around the particles (in agreement with the faster gelification on solid surfaces compared to bulk gelification from homogeneous sols 329) and then tends to fill the entire interstitial volume to give the final composite film 320. The schemes in Figure 30, though simplified, thus provide a realistic view of the different systems studied here. No real morphological difference is expected between GCE-clay-mesopSiO2 electrodes before and after surfactant extraction, except that the mesoporosity is revealed after template removal. 148 The cyclic voltammetry characterization of these four film electrodes (GCE-clay, GCE-clay-mesopSiO2 before and after surfactant extraction, and the non-templated GCE-clay-SiO2 film), using three relevant redox probes (Fe(CN)6 3-, Fc(MeOH)2, Ru(NH3)6 3+), is summarized in Figure 31. Several observations and conclusions can be drawn from these data.

Figure 30 Schematic view of the various electrode configurations used in this work

First, the raw clay deposit acts as a barrier to the negatively-charged Fe(CN)6 3- species (see part B1 in Figure 31), as a result of electrostatic repulsion by the negatively-charged clay sheets.
Nevertheless, the clay film is still porous, as it lets the neutral Fc(MeOH)2 species reach the electrode surface quite easily, resulting in a CV response almost similar to that on bare GCE, at least once steady state is reached after 20 cycles (compare parts B2 and A2 in Figure 31). Finally, the CV signal of the positively-charged Ru(NH3)6 3+ species was found to be larger than at the bare electrode (compare parts B3 and A3 in Figure 31) and to grow continuously upon multiple successive potential scans (see part B3 in Figure 31), owing to the preconcentration of these Ru(NH3)6 3+ species by cation exchange in the clay particles. Overall, this behavior is consistent with previous observations on similar systems 251, and the preconcentration capacity has been exploited in ion exchange voltammetry. 330 The situation was definitely different for the GCE-clay-mesopSiO2 electrode. Let us first consider the case prior to surfactant extraction. Using the Fe(CN)6 3- probe resulted in a very small yet clearly visible signal (see part C1 in Figure 31), growing slightly and continuously upon successive cycling. This behavior suggests some accumulation of the negatively-charged Fe(CN)6 3- species. To interpret this result, one should recall that the sol-gel electro-assisted deposition was performed in the presence of CTAB in solution (at ca. 0.1 M), and that the cetyltrimethylammonium cation, CTA+, is likely to accommodate on the clay particles. 284 Actually, when smectite-type clays are treated with a cationic surfactant at a concentration ranging between 0.5 and 1.5 times the CEC of the clay, the surfactant molecules adopt a bilayer or pseudotrimolecular arrangement within the clay platelets, with both vertical and horizontal orientations of the alkylammonium chains. 291 The aggregation of the organic cations thus occurs via both ion exchange and hydrophobic bonding, which creates positive charges in the clay layer and on the clay surface, thereby allowing uptake of anionic species by the clay composite via the formation of surface-anion complexes. 292,293 In the present case, even though it is impossible to quantitatively determine the amount of surfactant molecules in the clay, one can reasonably ascribe the slight accumulation of redox species observed in part C1 of Figure 31 to the replacement of the weakly retained counterions (Br-) of the surfactant in the clay by the Fe(CN)6 3- probe, leading to the formation of ion pairs between CTA+ cations and Fe(CN)6 3- anions. The accumulation effect is, however, much lower than for classical surfactant-modified clay films (e.g., smectite clay modified with hexadecyltrimethylammonium 265), because of the presence of the surfactant-templated mesoporous material around the clay particles in the present case. An additional indication of the presence of CTA+/CTAB in the clay is the almost totally suppressed response of the film to the Ru(NH3)6 3+ probe (no accumulation by cation exchange is possible any more, see part C3 in Figure 31). This signal suppression also supports a good coverage of the whole electrode surface with a crack-free clay-mesoporous silica composite film (the presence of holes would have resulted in a noticeable CV response). Finally, the response to the neutral Fc(MeOH)2 species was lower than previously, but still significant (40% of the intensity on bare GCE), and shifted by ca. 0.1 V towards more anodic potentials (compare parts B2 and C2 in Figure 31).
This is explained by the solubilization of the neutral probe in the surfactant phase, as previously reported for electrodes covered with surfactant-templated mesoporous silica films 141,148. After surfactant removal, the GCE-clay-mesopSiO2 electrode again exhibited a distinct behavior (compare parts C1-3 and D1-3 in Figure 31). Note that the extraction of the surfactant template was made using ethanol and NaClO4 (not the classically used ethanol/HCl mixture) to avoid any chemical degradation of the clay (i.e., acid hydrolysis of the aluminum sites in the aluminosilicate) and to maintain its cation exchange capacity. The voltammetric characteristics of the surfactant-extracted GCE-clay-mesopSiO2 electrode somewhat resemble those observed for the raw clay film electrode (GCE-clay), yet with some differences, as can be seen by comparing parts B1-3 and D1-3 in Figure 31. The absence of a signal for the negatively charged Fe(CN)6 3- species (see part D1 in Figure 31) is again explained by electrostatic repulsion from both the negatively-charged clay sheets and the negatively-charged silica surface (as also evidenced for pure mesoporous silica films 141). The neutral Fc(MeOH)2 probe is still detectable on the GCE-clay-mesopSiO2 film electrode, but less easily than for GCE-clay (compare parts D2 and B2 in Figure 31), because the probe molecules now have to cross the mesoporous silica binder to reach the underlying electrode surface. On the other hand, the accumulation response of Ru(NH3)6 3+ species upon continuous potential cycling is more marked (compare parts D3 and B3 in Figure 31), owing to the synergistic properties of the composite film (cation exchange in the clay and favorable electrostatic interactions with the negatively charged silica surface; the latter phenomenon was previously demonstrated for the accumulation of both Ru(NH3)6 3+ and Ru(bpy)3 2+ at mesoporous silica modified electrodes 141,148). These results indicate a promising use of GCE-clay-mesopSiO2 for the preconcentration electroanalysis of cationic analytes (see section 3.3 for confirmation). Some control experiments were also performed using a non-templated GCE-clay-SiO2 composite film electrode. The results indicate a behavior comparable to that of GCE-clay-mesopSiO2 (suppressed response to Fe(CN)6 3-; significant signal for Fc(MeOH)2, suggesting the existence of some porosity; good response to the positively-charged Ru(NH3)6 3+ species, in agreement with previous observations made for sol-gel derived clay-silicate film electrodes using Fe(CN)6 3- and methylviologen as redox probes 325), but a less effective accumulation of the Ru(NH3)6 3+ probe in comparison with the templated composite film (compare parts E3 and D3 in Figure 31). 275

Physico-chemical characterization

XRD was first used to characterize possible structural changes of the smectite clay upon entrapment within the CTAB-templated mesoporous silica film. As expected, prior to sol-gel electrodeposition, the clay film exhibited the same montmorillonite characteristics as those reported for the raw clay particles in the experimental section (diffraction lines at 2θ values (°) of 6.9; 19.9; 21.8; 28.6; 36.1; 62.0; data not shown). Focusing on the low-angle range (Figure 32), corresponding to the d001 reflection, one can see what happened as a result of the various treatments (electro-assisted deposition in the presence of CTAB and template removal, respectively).
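The d spacings discussed below follow directly from the measured 2θ positions via Bragg's law; a minimal sketch of the conversion (first-order reflection, Cu Kα1 wavelength as given in the experimental section):

# Sketch: interlayer d spacing from the d001 reflection position (Bragg's law, n = 1)
import numpy as np

lam = 1.54056  # Angstrom, Cu K-alpha1

def d_spacing(two_theta_deg):
    return lam / (2 * np.sin(np.radians(two_theta_deg / 2)))

for tt in (6.9, 4.59, 2.19):
    print(f"2theta = {tt:4.2f} deg -> d = {d_spacing(tt):5.1f} A")
# 6.9 deg gives ~12.8 A, in line with the raw-clay interlayer distance quoted below;
# the lower-angle lines correspond to a markedly expanded interlayer.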
Prior to any treatment, the d001 reflection appears at 2θ = 6.9° (see curve "a" in Figure 32), which corresponds to a d spacing of 12.9 Å (i.e., a classical interlayer distance for montmorillonite clays 331,332). After electro-assisted deposition of the mesoporous silica, this line at 2θ = 6.9° almost disappears and is replaced by new lines at much lower 2θ values (i.e., 4.59° and 2.19°, see curve "b" in Figure 32), indicating an expansion of the interlayer region between the clay sheets. As the clay was in contact with a CTAB solution prior to and during the electro-assisted deposition of the mesoporous silica material, this expansion is certainly due to the incorporation of CTA+ and CTAB species into the clay interlayer (a process documented elsewhere 284,333,334). Actually, the XRD pattern (curve "b" in Figure 32) is very similar to those previously observed for montmorillonite treated with CTAB at a concentration at least 3 times higher than the clay cation exchange capacity (which is the case here), indicating the existence of a CTAB-clay material with the surfactant in a paraffin-bilayer configuration (i.e., with the surfactant bound to the clay by ion exchange and via hydrophobic interactions). 284 This result supports the above interpretation of the CV data obtained for the Fe(CN)6 3- probe at GCE-clay-mesopSiO2 (part C1 in Figure 31), for which the observed small increase of the voltammetric signal was attributed to the formation of ion pairs between CTA+ cations immobilized in/on the clay particles and Fe(CN)6 3- anions. After removal of the surfactant template from GCE-clay-mesopSiO2, the clay interlayer distance was found to recover almost its initial value (see curve "c" in Figure 32), with a d spacing of 13.2 Å (i.e., close to the 12.9 Å value measured for the raw clay). This indicates that the CTAB loading/removal from the clay is reversible and that no silica condensation occurred in the clay interlayer (as might occur in porous clay heterostructures 335,336). Control XRD measurements performed on samples prepared without clay or without CTAB (see curves "d" and "e" in Figure 32) further confirmed the phenomena discussed above. SEM characterization of films similar to those analyzed by XRD provided additional information on their texture; both top and cross-sectional views are shown (Figure 33). Clay particles are clearly visible in the cross-sectional view of the spin-coated clay film (Figure 33B). In contrast, the composite material prepared by electro-assisted deposition of mesoporous silica with CTAB is characterized by a more homogeneous texture, which can be explained by a rather good filling of the inter-particle regions of the clay film with the surfactant-templated silica matrix (Figure 33D). Some increase in the film thickness is also evident (as confirmed by the AFM measurements in Figure 34), consistent with the expansion of the clay in the presence of CTAB. On the other hand, the composite sample prepared by electro-assisted deposition of silica from a CTAB-free sol shows a texture (Figure 33F) quite comparable to that of the initial clay film. This is explained by the much slower electro-assisted deposition of silica in the absence of CTAB 148, resulting in smaller amounts of silica binder being deposited around the clay particles.
SEM characterization of films similar to those analyzed by XRD provided additional information on their texture. Both top views and cross-sectional views are shown (Figure 33). Clay particles are clearly visible in the cross-sectional view of the spin-coated clay film (Figure 33B). In contrast, the composite material prepared by electro-assisted deposition of mesoporous silica with CTAB is characterized by a more homogeneous texture, which can be explained by a rather good filling of the inter-particle region of the clay film with the surfactant-templated silica matrix (Figure 33D). Some increase in the film thickness can also be seen (as confirmed by AFM measurements in Figure 34), consistent with the expansion of the clay in the presence of CTAB. On the other hand, the composite sample prepared by electro-assisted deposition of silica from a CTAB-free sol leads to a texture (Figure 33F) quite comparable to that observed with the initial clay film. This is explained by the much slower electro-assisted deposition of silica in the absence of CTAB 148, resulting in smaller amounts of silica binder being deposited around the clay particles. Note that SEM top views did not show different film features between the initial clay film and the composite material prepared by electro-assisted deposition of mesoporous silica with CTAB (compare parts A and C in Figure 33), which is also supported by AFM imaging (Figure 34), indicating that mesoporous silica deposition was essentially restricted to the clay layer. It can be concluded that electro-assisted deposition of a surfactant-templated mesoporous silica matrix through a clay film electrode allows the fabrication of a composite material displaying good homogeneity across the whole thickness of the film, which is consistent with the voltammetric behavior described above and suggests good mechanical stability, as discussed below on the basis of successive uses in preconcentration electroanalysis. 275

Effect on copper(II) preconcentration and detection
To further characterize the novel composite films and to assess their potential interest for voltammetric sensing, the modified electrodes were subjected to preconcentration analysis using copper(II) as a model analyte. Accumulation was made at open circuit from an unbuffered copper(II) solution, and detection was made by SWV after medium exchange to a slightly acidic chloride solution (pH 4) likely to desorb the previously accumulated copper(II) species. The detection sensitivity was in fact highly pH-dependent (signal intensity increasing continuously when decreasing the pH from 5 to 1) but, as far as multiple analyses with the same electrode are concerned, too acidic media have to be avoided in order to maintain the chemical integrity of the clay (i.e., its ion exchange capacity), so that pH 4 was chosen as the best compromise between sufficient sensitivity and good reusability. Under these conditions, well-defined SWV curves can be obtained, and the peak intensity depended on the electrode type, growing from GCE-clay-SiO2 to GCE-clay and to surfactant-extracted GCE-clay-mesopSiO2, in agreement with the trend observed in their CV response to the cationic Ru(NH3)6^3+ redox probe (Figure 31C). Focusing on the most sensitive system (GCE-clay-mesopSiO2), one can see in Figure 35 that the SWV response was a function of both the copper(II) concentration in the accumulation medium and the preconcentration time. The variations were as expected for preconcentration electroanalysis at modified electrodes involving accumulation by analyte binding to active centers (i.e., a first linear increase of the signal followed by a leveling off when reaching steady state, corresponding to ion exchange equilibrium or saturation of the ion exchange sites), in agreement with previous observations made for copper(II) electroanalysis at other kinds of modified electrodes. 191,294,295 Interestingly, the time response was very fast (the voltammetric response of the electrode increased linearly with accumulation time from the earliest accumulation times), in contrast to the delay sometimes observed before a detectable signal is obtained, e.g., in the case of restricted diffusion rates or slow binding processes. 191,294 Such good performance can be explained by the highly porous structure of the composite film after surfactant extraction.
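The shape of such accumulation transients (an initial linear rise followed by a plateau at ion exchange equilibrium or site saturation) can be summarized by a simple first-order approach-to-equilibrium model. The sketch below fits such a model to illustrative data; both the functional form and the numbers are assumptions for demonstration, not values extracted from Figure 35.

```python
# Sketch: fitting an open-circuit accumulation transient with a first-order
# approach-to-equilibrium model, I(t) = Imax * (1 - exp(-t/tau)), whose
# initial slope Imax/tau plays the role of the 'linear regime' sensitivity.
# Both the model and the data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def binding_model(t, i_max, tau):
    return i_max * (1.0 - np.exp(-t / tau))

t_min = np.array([0.5, 1, 2, 3, 5, 7, 10, 15, 20])              # hypothetical
i_ua = np.array([0.7, 1.3, 2.3, 3.0, 3.9, 4.3, 4.6, 4.8, 4.9])  # hypothetical

(i_max, tau), _ = curve_fit(binding_model, t_min, i_ua, p0=(5.0, 3.0))
print(f"Imax = {i_max:.2f} uA, tau = {tau:.1f} min, "
      f"initial slope = {i_max / tau:.2f} uA/min")
```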
Another attractive feature is the rather good long-term operational stability of GCE-clay-mesopSiO2, as illustrated in Figure 36 (curve "b"), together with the good reproducibility observed for successive preconcentration-detection steps with the same electrode (see some typical signals in the inset of Figure 36). This can be ascribed to the durable immobilization of the clay material by physical entrapment in the surrounding mesoporous silica matrix. By contrast, the response of GCE-clay to successive experiments was found to decrease rapidly to almost zero (see curve "a" in Figure 36), probably as a result of progressive leaching of the clay particles into the solution owing to rather poor mechanical stability in stirred medium (it should be recalled here that preconcentration was performed in stirred solution to facilitate mass transport of the analyte). Finally, it is noteworthy that a clay-free GCE-mesopSiO2 electrode also gave rise to stable signals upon successive preconcentration-detection experiments (see curve "c" in Figure 36), but with much lower sensitivity, as a result of the poor Cu2+ preconcentration efficiency of the mesoporous silica matrix, confirming again the interest of the clay-mesoporous silica composite material developed here. The modified electrode developed here thus offers good performance in terms of sensitivity and long-term stability but, of course, not in terms of selectivity, since Cu2+ recognition proceeds through ion exchange, which is obviously not very selective with respect to other metal ions. Nevertheless, the method could be applied to generate composite films based on other clay materials, such as organically grafted clays, which are known to exhibit much better selectivity towards target metal ions depending on the nature of the organo-functional groups used to modify the clay (Tonlé et al.). 251,267,275

Conclusions
This work has demonstrated the possible electro-assisted generation of clay-mesoporous silica composite films on electrodes by combining the electro-assisted self-assembly process 148,149 with spin-coated clay films. After surfactant removal, the resulting materials kept the basic properties of the clay (i.e., cation exchange) and exhibited excellent permeability and long-term mechanical stability, which could notably be exploited in preconcentration electroanalysis. A particular feature of the developed method was the reversible intercalation/exchange of the cationic surfactant in the interlayer region of the clay (with a concomitant variation of the interlayer distance), which avoided any deposition of silica between the clay sheets and probably also ensured fast mass transport through the film by creating high porosity upon surfactant extraction. The method appears rather general and could be applied, for example, to the preparation of organically functionalized clay-mesoporous silica materials by adding suitable organosilanes to the synthesis medium, which would lead to multifunctional composite films. 275

Final conclusions
The high demand for simple, fast, accurate, and sensitive detection methods in pharmaceutical and environmental analysis has led to the development of novel electrochemical sensors. Owing to their ion exchange capacity and adsorbent properties, clay-modified electrodes are well suited to such applications.
Montmorillonite-rich indigenous Romanian clays were employed for the modification of different types of electrodes, in order to develop sensors for the detection of heavy metals in matrices of biopharmaceutical and biomedical interest, as well as biosensors for the detection of various pharmaceuticals. Bentonites obtained from the Răzoare and Valea Chioarului deposits (Maramureş County, Romania) were refined and characterized by X-ray diffraction, transmission electron microscopy, FTIR, and thermodifferential analysis. The ion exchange capacity of the purified clays was determined by replacing the compensatory ions with NH4+ ions. All physico-chemical studies revealed montmorillonite as the main component of their structure.

The electrochemical behavior of acetaminophen, ascorbic acid, and riboflavin phosphate was tested by cyclic voltammetry on clay-modified CPEs with different clay particle sizes. The resulting CPEs showed either improved electroanalytical signals or oxidation at lower potential values. These results support the application of the new clay-modified sensors in pharmaceutical analysis.

The development of a biosensor for acetaminophen detection, based on the immobilization of HRP within a Romanian clay-polyethylenimine film at the GCE surface, is also described. In this case, HRP was immobilized on the surface of the GCE by retention in a porous polyethylenimine-clay gel film, a technique that offered good entrapment and a protective environment for the biocomponent, owing to the hydration properties of the immobilization layer. The amperometric detection of acetaminophen was successfully achieved, with a detection limit of 6.28×10^-7 M and a linear range between 5.25×10^-6 M and 4.95×10^-5 M.

The use of a low-cost pillared clay as an electrode modifier for the development of electrochemical sensors is also demonstrated in this work. For this purpose, montmorillonite was modified with TBAB. After partial removal of the surfactant, the resulting material preserved the basic properties of the clay (i.e., cation exchange) and could therefore be exploited in the development of sensors able to detect toxic cationic species (e.g., Cu(II), Cd(II)) in different matrices with good reproducibility and sensitivity.

Finally, this thesis presents the electro-assisted generation of clay-mesoporous silica composite films on GCEs. The method involved the deposition of clay particles by spin-coating on the GCE and the subsequent growth of a surfactant-templated silica matrix around these particles by EASA. EASA consists of applying a cathodic potential to the electrode immersed in a hydrolyzed sol (containing TEOS as the silica source and CTAB as the surfactant) in order to generate the hydroxyl ions that catalyze the formation of the mesoporous silica. Under these conditions, alongside the silica deposition process, the interlayer distance between the clay sheets was found to increase as a result of CTAB ion exchange. After removal of the surfactant template, the composite film became highly porous (as evidenced with redox probes) and the clay recovered its pristine interlayer distance and cation exchange properties. This made it promising for applications in preconcentration electroanalysis, as pointed out here using copper(II) as a model analyte, especially because it offered much better long-term operational stability than the conventional (i.e., silica-binder-free) clay film electrode.
This is the first example of electrogenerated clay-mesoporous silica composite films in the literature, and the promising applications of these new composite materials are also discussed here. The rather general methods presented in this thesis can be further exploited to develop new, reliable methods for the environmental and pharmaceutical monitoring of highly toxic contaminants with improved selectivity.

Originality of the thesis
This thesis aims at developing new composite materials by exploiting the ion exchange and adsorbent properties of two montmorillonite-rich Romanian natural clays (from the Răzoare and Valea Chioarului deposits, Maramureş County) for the construction of sensors and biosensors. I believe that the results of these studies will have an important impact on the pharmaceutical and environmental fields, as they:
1) Present for the first time the applications of Romanian clays in electroanalysis, for heavy metal detection and for the development of biosensors with applications in pharmaceutical analysis;
2) Describe for the first time the complete structural characterization of the Răzoare and Valea Chioarului bentonites;
3) Establish new levels of performance for existing sensor devices, as the developed systems show good long-term operational stability and good reproducibility (e.g., GCE-clay-mesopSiO2);
4) Provide new approaches for the determination of heavy metals in various matrices;
5) Give the first example of electrogenerated clay-mesoporous silica composite films with promising applications;
6) Test acetaminophen and riboflavin phosphate for the first time on clay-modified CPEs and propose new electrochemical methods for their detection in pharmaceutical analysis.
This study describes reliable methods for the environmental monitoring of highly toxic contaminants. The methods presented here are rather general and could be further exploited to generate composite films based on other clay materials with improved selectivity.

List of figures
Figure 1 Different adsorption sites at a clay-modified electrode
Figure 2 Preconcentration step at a clay-modified electrode
Figure 3 Amperometric biosensors: (A) first generation, (B) second generation, (C) third generation
Figure 4 Electrochemically-assisted generation of silica films
Figure 5 Influence of the preconcentration time on the voltammetric response of carbon paste electrodes modified with (A) a mercaptopropyl-functionalized mesoporous silica prepared from a TEOS/MPTMS 80:20 mixture and (B) an amorphous silica gel grafted with the mercaptopropyl group; accumulation medium: 5×10^-7 M Hg(NO3)2 in 0.1 M HNO3; detection medium: 3 M HCl; other conditions of the detection process: 1 min electrolysis at -0.7 V, followed by anodic stripping differential pulse voltammetric detection. Inset: variation of the stripping peak area with the accumulation time 116
Figure 6 Smectite structure 241
Figure 7 TEM characterization of Răzoare (A, B) and Valea Chioarului (C) clays

The XRD diffractogram of Răzoare clay, fraction below 20 μm (Figure 8A), displayed the characteristic diffraction peaks of montmorillonite at 2θ (7.12°; 19.68°; 21.57°; 28.14°; 36.04°; 61.66°) and also the presence, in smaller quantities, of other minerals, such as cristobalite at 2θ (20.68°; 26.50°; 36.36°; 42.10°; 54.70°; 59.67°), feldspar at 2θ (23.22°; 24.10°; 27.74°; 35.08°), etc.
The XRD diffractogram of Valea Chioarului clay displayed the characteristic montmorillonite diffraction peaks at 2θ (6.94°; 19.96°; 21.82°; 28.63°; 36.14°; 62.01°), which confirmed the position of the diffraction peaks, in agreement with literature data 245, and also the almost negligible presence of other minerals.

Figure 10 Thermodifferential analysis of Răzoare (A) and Valea Chioarului (B) clays
Figure 11 Chemical structures of the investigated pharmaceuticals: (A) acetaminophen, (B) ascorbic acid, and (C) riboflavin 239
Figure 12 Cyclic voltammograms of 10^-3 M acetaminophen (A) and 10^-3 M ascorbic acid (B) on 1% (solid line), 2.5% (square line), and 5% (dot line) Răzoare clay-modified CPEs (0.1 M KCl, 100 mV s^-1) 239
Figure 13 Cyclic voltammograms for 10^-3 M riboflavin phosphate at an unmodified CPE (dot line) and at 5% (square line) and 10% (solid line) Răzoare clay-modified CPEs (0.1 M KCl, 100 mV s^-1) 239
Figure 14 Cyclic voltammograms of a 10^-4 M acetaminophen solution using 20 μm (dot line) and 0.2 μm (solid line) Valea Chioarului clay immobilized in a 1 mg/mL PEI film (0.1 M phosphate buffer pH 7.4, 50 mV s^-1) 243
Figure 15 Amperometric response of the clay-modified electrode (0.2 μm Valea Chioarului clay in 1 mg/mL PEI/HRP/GCE) after successive additions of 50 μL of 10^-4 M acetaminophen in phosphate buffer pH 7.4 and 0.2 mM H2O2 243
Figure 16 FTIR spectra of MMT samples: plain (black line), modified with TBAB (red line), and after partial TBAB extraction (green line), compared with the TBAB spectrum (blue line)
Figure 17 Raman spectra of MMT samples: plain (black line), modified with TBAB (red line), and after partial TBAB extraction (green line), compared with the TBAB spectrum (blue line)
Figure 18 Multisweep cyclic voltammograms recorded in 10^-3 M Ru(NH3)6Cl3 in 0.1 M NaNO3 using: bare GCE (10 cycles) (A); GCE/M (10 cycles) (B); GCE+MMT/M (30 cycles) (C); GCE+MMT+TBAB/M (30 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (30 cycles) (E)
Figure 19 Multisweep CVs recorded in 10^-3 M [Fe(CN)6]^3- (in 0.1 M NaNO3) using: bare GCE (5 cycles) (A); GCE/M (10 cycles) (B); GCE+MMT/M (10 cycles) (C); GCE+MMT+TBAB/M (10 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (10 cycles) (E)
Figure 20 Multisweep cyclic voltammograms recorded in 10^-3 M Fc(MeOH)2 using: bare GCE (5 cycles) (A); GCE/M (5 cycles) (B); GCE+MMT/M (10 cycles) (C); GCE+MMT+TBAB/M (10 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (10 cycles) (E)
Figure 21 Nyquist plots for GCE/M (e-DAQ, d = 1 mm) (black), GCE+MMT/M (< 1 μm; 5 mg mL^-1 suspension) (red), GCE+MMT+TBAB/M (5 mg mL^-1 suspension) (green), and GCE+MMT+TBAB(-TBAB)/M (5 mg mL^-1 suspension) (blue), in 10 mM K3[Fe(CN)6] in PBS (0.1 M; pH 7.4). Inset: Rct variation with the electrode type (5 mg mL^-1 water suspensions; amplitude 0.005; begin frequency: 100 kHz; end frequency: 0.01 Hz; number of frequencies: 71)
Figure 22 Electroanalytical response of Cu(II) in different electrolytes. SWVs recorded using GCE+MMT+TBAB(-TBAB)/M after 5 min accumulation at open circuit in a 10^-5 M Cu(II) solution in ultrapure water; detection performed in the supporting electrolyte after 180 s electrolysis at -0.6 V
Figure 23 Variation of the peak current intensity with accumulation time using GCE+MMT+TBAB(-TBAB)/M after open-circuit accumulation of Cu(II) (A) and Cd(II) (B) at different concentrations: (1) 5×10^-7 M; (2) 10^-6 M; and (3) 5×10^-6 M; detection performed in 0.1 M NaNO3
Figure 24 SWVs recorded using clay/GCE: (a) unmodified MMT; (b) GCE+MMT+TBAB(-TBAB) (after TBAB removal), after 5 min accumulation at open circuit in 10^-6 M Cu(II) (A) and Cd(II) (B) solutions in ultrapure water. Detection performed in 0.1 M NaNO3 after 180 s electrolysis at -0.6 and -1.0 V. Regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution ((c) and (d) represent the clay film regeneration curves)
Figure 25 SWV responses obtained with (a) GCE+MMT/M and (b) GCE+MMT+TBAB(-TBAB)/M to successive preconcentrations of a 10^-6 M Cu2+ solution (5 min accumulation at open circuit; detection in 0.1 M NaNO3 after 180 s electrolysis at -0.6 V; regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution). The inset shows SWVs obtained using GCE+MMT+TBAB(-TBAB)/M after preconcentration in Cu2+ solution
Figure 27 Variation of the current intensity at GCE+MMT+TBAB(-TBAB)/M for Cu(II) at different concentrations: 0 (1); 0.25 (2); 0.50 (3); 0.75 (4); 1 (5); 2.5 (6); 5 (7); 7.5 (8); and 10 μM (9) in 0.1 M NaNO3 solution. Inset: calibration curve for Cu(II) obtained under optimized conditions. Experimental conditions: 10 min accumulation at open circuit in Cu2+ solutions in ultrapure water; detection performed in 0.1 M NaNO3 solution after 180 s electrolysis at -0.6 V; regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution
Figure 28 Variation of the current intensity at GCE+MMT+TBAB(-TBAB)/M for Cd(II) at different concentrations: 0 (1); 0.25 (2); 0.50 (3); 0.75 (4); 1 (5); 5 (6); and 7.5 μM (7) in 0.1 M NaNO3 solution. Inset: calibration curve for Cd(II) obtained under optimized conditions. Experimental conditions: 10 min accumulation at open circuit in Cd2+ solutions in ultrapure water; detection performed in 0.1 M NaNO3 solution after 180 s electrolysis at -1.1 V; regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution
Figure 29 SWVs recorded using GCE+MMT+TBAB(-TBAB)/M after 5 min accumulation at open circuit in 10^-5 M Cd2+ (black), Pb2+ (red), Cu2+ (green), and a mixture of these three cations (blue), in ultrapure water. Detection performed in 0.1 M NaNO3 solution after 180 s electrolysis at -1.1 V (frequency: 100 Hz; amplitude: 0.05 V; potential step: 0.02 V; counter electrode: Pt/Ti; reference electrode: Ag/AgCl without internal solution)
Figure 31 Cyclic voltammetric curves recorded at 20 mV s^-1 for 20 successive cycles using (A) bare GCE, (B) GCE-clay, (C, D) GCE-clay-mesopSiO2, respectively before (C) and after (D) surfactant removal, and (E) GCE-clay-SiO2 electrodes, for three redox probes (1: Fe(CN)6^3-; 2: Fc(MeOH)2; 3: Ru(NH3)6^3+) in 0.1 M NaNO3
Figure 32 X-ray diffractograms for (a) GCE-clay, (b, c) GCE-clay-mesopSiO2, respectively before (b) and after (c) surfactant removal, (d) GCE-clay-SiO2, and (e) GCE-mesopSiO2
Figure 33 SEM images of a spin-coated clay film (A, B), composite materials obtained by electro-assisted deposition of surfactant-templated silica around the clay (clay-mesopSiO2: C, D), and non-templated composite materials (E, F). Both top views (A, C, E) and cross-sections (B, D, F) are shown
Figure 34 AFM measurements of the same films
Figure 35 (A) Variation of SWV peak currents recorded using GCE-clay-mesopSiO2 after open-circuit accumulation of Cu2+ at various concentrations: (a) 10^-7 M, (b) 5×10^-7 M, and (c) 10^-6 M. Detection medium composition: 0.1 M KCl + 0.1 mM HCl.
(B) Typical SWV curves obtained at the above different concentrations of Cu2+ but at the same preconcentration time (5 min)
Figure 36 SWV responses obtained with (a) GCE-clay, (b) GCE-clay-mesopSiO2 after surfactant removal, and (c) GCE-mesopSiO2 to successive preconcentrations of 10^-6 M Cu2+ (2 min accumulation at open circuit; detection in 0.1 M KCl + 0.1 mM HCl). The inset shows some typical curves obtained with GCE-clay-mesopSiO2 after surfactant removal (corresponding to the data in part "b" of the figure)

Contents (state-of-the-art chapters)
1 Clay-modified electrodes
1.1 Clays: definition, classification, properties
1.2 Clay-modified electrode preparation
1.3 Electrochemistry at clay-modified electrodes
1.4 Applications in environmental and biomedical analysis
1.4.1 Heavy metal detection using clay-modified electrodes
1.4.1.1 Clays' implication in heavy metal detection
1.4.1.2 Inorganic clay heavy metal detection sensors
1.4.1.3 Organo-clay heavy metal detection sensors
1.4.2 Amperometric biosensors based on clays applied in pharmaceutical and biomedical analysis
1.4.2.1 Oxygen-based biosensors (first generation)
1.4.2.2 Mediator-based biosensors (second generation)
1.4.2.3 Directly coupled enzyme electrodes (third generation)
2 Mesoporous silica materials and their applications in electrochemistry
2.1 The sol-gel process
2.2 Silica and silica-based organic-inorganic hybrids
2.2.1 Ordered and oriented mesoporous sol-gel films
2.3 Mass transport in mesoporous (organo)silica particles
2.4 Selected applications of mesoporous silica materials in electrochemistry
2.4.1 Electroanalysis, sensors and biosensors

Bentonites have a high content of SiO2 and Al2O3 and also a significant water content. The components present in small quantities and in varying proportions are MgO, CaO, K2O, Na2O, Fe2O3 and TiO2. Elements such as Mg2+ and Fe3+ act as substitutes for Al3+ in the octahedral configuration. Alkaline metals and Ca2+ can be fixed by adsorption in the spaces between the structural packages of the clay. The structural formulas of the clay minerals are: Răzoare (Ca0.03Na0.30K0.06)Σ=0.39(Al1.54Mg0.37Fe0.10)Σ=2.01(Si3.84Al0.16)Σ=4.00O10(OH)2·nH2O, and Valea Chioarului (Ca0.06Na0.27K0.02)Σ=0.35(Al1.43Mg0.47Fe0.10)Σ=2.00(Si3.90Al0.10)Σ=4.00O10(OH)2·nH2O.

Table 1 Chemical composition of Răzoare and Valea Chioarului bentonites 239
Sample            SiO2   TiO2  Al2O3  Fe2O3  CaO   MgO   Na2O  K2O   L.C.*
Răzoare           68.60  0.22  13.89  1.36   0.30  3.38  1.50  0.45  11.30
Valea Chioarului  59.82  0.25  16.14  1.67   0.70  3.92  1.75  0.25  15.50
* Loss on the calcination process at 1000 °C
Table 2 Quantitative crystalline phase analysis, effective crystallite mean size Deff (nm), root mean square (rms) microstrains <ε^2>^1/2_m, and profile (Rp) discrepancy indices calculated by Rietveld refinement for the montmorillonite crystalline phases (X1 = Valea Chioarului clay; R1 = Răzoare clay)
Sample  Montmorillonite [% vol.]  Cristobalite [% vol.]  Amorphous [% vol.]  Deff [nm]  <ε^2>^1/2_m ×10^3  Rp
X1      74.7                      14.5                   10.8                15.2       0.413              12.2
R1      19.6                      43.0                   17.4                10.4       2.846              14.3

Table 3 Unit cell parameters and profile (Rp) discrepancy indices calculated by Rietveld refinement for the montmorillonite and cristobalite crystalline phases (X1 = Valea Chioarului clay; R1 = Răzoare clay)
Sample              a [Å]  b [Å]  c [Å]    Rp
X1 montmorillonite  5.17   5.17   12.624   17.2
X1 cristobalite     6.46   6.46   6.46     22.3
R1 montmorillonite  5.06   5.06   12.544   19.6
R1 cristobalite     6.64   6.64   6.64     26.8

Figure 9 FTIR spectra of Răzoare (A) and Valea Chioarului (B) clays

Table 4 HRP amperometric biosensors for acetaminophen analysis
Electrode configuration | Enzyme/Transducer | Sensitivity (μA M^-1) | Linear range | LOD (LOQ) | Reference
Zr alkoxide in PEI with HRP on GC | HRP/GC | 1.17×10^-7 | 1.96×10^-5 to 2.55×10^-4 M | not published | Sima et al., 2008
Nanoporous magnetic microparticles (MMPs)-HRP | HRP/carbon paste | not published | 2×10^-6 to 5.7×10^-5 M | not published | Yu et al., 2006
Zr alkoxide in PEI with HRP on SPE | HRP/SPE | not published | 4.35×10^-7 to 4.98×10^-6 M | 6.21×10^-8 M (2.07×10^-7 M) | Sima et al., 2010
HRP/PEI/SWCT/GCE | HRP/GC | not published | 9.99 to 79.01 μM | 7.82×10^-6 M | Tertis et al., 2013
HRP/Ppy/SWCT/SPE | HRP/SPE | not published | 19.96 to 118.06 μM | 8.09×10^-6 M | Tertis et al., 2013

Table 5 Unit cell parameters and profile (Rp) discrepancy indices calculated by Rietveld refinement for the clay (MMT), the clay with TBAB (MMT+TBAB), and the clay with TBAB after TBAB removal (MMT+TBAB(-TBAB))
Sample           a [Å]  b [Å]  c [Å]    Rp
MMT              5.17   5.17   12.624   17.2
MMT+TBAB         5.24   5.24   15.487   18.6
MMT+TBAB(-TBAB)  5.23   5.23   15.475   17.6

Table 6 CHN elemental analysis of the modified clay before and after partial TBAB extraction
Sample | C% | N% | H% | Assignment
MMT (unmodified) | 0.14 | 0.14 | 2.839 | organic and/or inorganic impurities containing C and N; H2O and Si-OH groups for H
MMT+TBAB (fully doped) | 8.82 | 0.73 | 4.068 | TBAB+ and impurities
MMT+TBAB(-TBAB) (after extraction) | 3.85 | 0.26 | 3.386 | TBAB+ and impurities

Table 7 Interference study: analyte signal (%)
Interfering ion   Cu2+     Cd2+
Cu2+              -        98.89
Cd2+              99.59    -
Co2+              100.03   99.54
Pb2+              101.22   99.76
Ni2+              84.45    88.59
Zn2+              97.28    93.19
Ba+               99.36    99.10
Na+               98.88    99.32
K+                100.47   99.05

Acknowledgements
Now, when I have reached the end of the path I started four years ago, I look back to express a special gratitude, and I realize that the list of the people I have to thank is quite long. My work is the result of a fruitful collaboration between two research groups: the Department of Analytical Chemistry, part of the Faculty of Pharmacy, University of Medicine and Pharmacy "Iuliu Haţieganu", Cluj-Napoca, Romania, and the Laboratoire de Chimie Physique et Microbiologie pour l'Environnement, Université de Lorraine, Villers-lès-Nancy, France.
Alex Coad (email: coad@econ.mpg.de), Thomas Brenner, Tom Broekel, Guido Buenstorf, Andreas Chai, Christian Cordes, Giovanni Dosi, Corinna Manig, Dick Nelson, Jason Potts, Kerstin Press, Rekha Rao, Andrea Roventini, Erik Stam, Vera Troeger, Marco Valente, Karl Wennberg, Claudia Werker, Ulrich Witt

Exploring the « mechanics » of firm growth: evidence from a short-panel VAR

JEL codes: L25, L20
Keywords: Firm Growth, Panel VAR, Employment Growth, Industrial Dynamics, Productivity Growth
Mots-clés : Croissance des firmes, Panel VAR, Création d'emplois, Économie industrielle, Croissance de productivité

This paper offers many new insights into the processes of firm growth by applying a vector autoregression (VAR) model to longitudinal panel data on French manufacturing firms. We observe the co-evolution of key variables such as growth of employment, sales, gross operating surplus, and labour productivity growth. Preliminary results suggest that employment growth is succeeded by the growth of sales, which in turn is followed by growth of profits. Generally speaking, however, growth of profits is not followed by much employment growth or sales growth.

Une investigation des processus de croissance des entreprises
Résumé : Cet article offre un certain nombre de résultats concernant les processus de croissance des firmes, en appliquant un modèle VAR (Vector Autoregression) à des données longitudinales sur des entreprises manufacturières françaises. Nous observons la co-évolution de variables-clés telles que la croissance d'emplois, du chiffre d'affaires, de l'excédent brut d'exploitation (« profits »), et de la productivité de la main-d'oeuvre. Nos résultats suggèrent que la croissance d'emplois est suivie par la croissance du chiffre d'affaires, qui est ensuite suivie par la croissance des profits. Il semblerait toutefois que la croissance des profits n'est pas suivie par la croissance d'emplois ou du chiffre d'affaires.

Introduction
The literature on firm growth, at present, consists mainly of empirical investigations within the framework of Gibrat's Law, where firm growth features as the dependent variable and firm size as an independent variable. In such regressions, different indicators of firm growth (e.g. sales growth or employment growth) are considered almost interchangeably as proxies for the same underlying phenomenon (i.e. firm growth). There are also many other 'augmented' versions of Gibrat's Law, in which other characteristics of the firm at time (t - 1) are included in the regression to explain the firm's growth from (t - 1) to t (for an extensive survey of the literature on firm growth, see Coad, 'Firm growth: A survey'). Regressions of this kind have had limited success, however, because their explanatory power is typically very low, and the characteristics (in levels) of firms at a point in time (t) seem to have limited influence on the rate of change of firm size. Geroski thus remarked, in despair: "The most elementary 'fact' about corporate growth thrown up by econometric work on both large and small firms is that firm size follows a random walk" (Geroski, 2000, p. 169). The empirical framework presented here is admittedly rather simple but, we argue, has the potential to shed light on what happens inside growing firms. The approach we take is quite different from conventional empirical analysis of firm growth.
Whilst sales, employment and profits are usually taken as alternative proxies for firm growth, we consider each of these indicators to be essentially different, each yielding unique information on a different facet of firm growth. We therefore view firm growth as a multidimensional phenomenon. By considering the co-evolution of these series, we can improve our understanding of the processes of firm growth. Implicit in our model is the idea that the growth of profits is not just a final outcome for firms but also an input, because it provides firms with the means for expansion. Furthermore, employment growth can be seen as an input (in the production process) but also as an output if, for example, the policy maker is interested in the generation of new jobs. We suggest that this conception of the growing firm as a dynamic, co-evolving system of interdependent variables is best described in the context of a panel vector autoregression (VAR) model.

Theoretical considerations
Whilst many theoretical propositions about firm growth have been made, these have largely escaped empirical investigation. For example, it has long been supposed that the evolutionary principle of 'growth of the fitter' should apply to firms, such that the more productive or profitable firms should grow, and the least productive or profitable should shrink and exit (see, for example, Alchian, 'Uncertainty, evolution and economic theory'; Friedman, 'The methodology of positive economics'; and Nelson, An Evolutionary Theory of Economic Change). Many (evolutionary) economists would probably accept this idea without much thought. However, a growing empirical literature casts doubt on the relevance of these theoretical assertions. Recent work on productivity dynamics suggests that, if anything, there appears to be a mild negative relationship between productivity and firm growth, with relatively low-productivity firms growing more. Similarly, (scant) evidence suggests that profits do not appear to lead to higher firm growth (see Coad, 'Testing the principle of growth of the fitter: the relationship between profits and firm growth', and Bottazzi et al., 'Productivity, profitability and financial fragility: evidence from Italian business firms'; for a review of the empirical literature on the influence of productivity and profitability on firm growth, see Dosi, 'Statistical Regularities in the Evolution of Industries', in Perspectives on Innovation, and Coad, 'Firm growth: A survey').

Another topic that we feel is under-developed in the current literature is our understanding of the microfoundations of employment growth decisions (i.e. at the firm level). For instance, we could expect that the benefits of employment growth for profits may not be manifest immediately if it takes time for firms to adequately train new employees. Instead, new employees and new positions in the organization may make their most significant contribution to firm profitability only after a certain time lag. It is also of interest to investigate the elasticity of employment growth with respect to profit growth. Do profitable firms create new jobs? Is there any justification in the popular vision of industrial development as being characterized by 'jobless growth'?
Our empirical framework also allows us to investigate firm-level productivity dynamics. Previous theoretical contributions have suggested that there may be 'dynamic increasing returns' (à la Kaldor-Verdoorn), according to which firm growth would be positively associated with productivity growth. On the other hand, Penrose (The Theory of the Growth of the Firm) suggested that firm growth is associated with decreases in productive efficiency, because planning for growth takes managerial focus away from keeping production costs down. The association between firm growth and productivity has not been resolved in theoretical discussions, and we therefore consider it to be an empirical question.

Structure of the paper
In Section 2 we present the database along with some summary statistics. In Section 3 we discuss our regression methodology. In Section 4 we present our main results. The robustness of these results is explored in Section 5. In Section 6 we discuss these results, and we conclude in Section 7.

Database and summary statistics
This research draws upon the EAE databank collected by SESSI and provided by the French Statistical Office (INSEE). 1,2 This database contains longitudinal data on a virtually exhaustive panel of French firms with 20 employees or more over the period 1989-2004. We restrict our analysis to the manufacturing sectors. 3 For statistical consistency, we only utilize the period 1996-2004 and we consider only continuing firms over this period. Firms that entered midway through 1996 or exited midway through 2004 have been removed. Since we want to focus on internal, 'organic' growth rates, we exclude firms that have undergone any kind of modification of structure, such as a merger or acquisition. In contrast to some previous studies (e.g. Bottazzi et al. (2001)), we do not attempt to construct 'super-firms' by treating firms that merge at some stage during the period under study as if they had been merged from the start of the study, because of limited information on restructuring activities. To start with, we had observations for around 22,000 firms per year for each year of the period, 4 but at this stage we have a balanced panel of 8503 firms for each year. In order to avoid misleading values and the generation of NaNs 5 whilst taking logarithms and ratios, we retain only those firms with strictly positive values for Gross Operating Surplus (GOS), 6 Value Added (VA), and employees in each year. This creates some missing values, especially for our growth of gross operating surplus variable (see Table 2). By restricting ourselves to strictly positive values for the gross operating surplus, we lose 13-14% of the observations in 1997 and 2000, whereas we lose about 26% of the observations in 2004.

In keeping with previous studies, our measure of growth rates is calculated by taking differences of the logarithms of size:

GROWTH_it = log(X_it) - log(X_i,t-1)    (1)

where, to begin with, X is measured in terms of employment, sales, gross operating surplus, or labour productivity 7 for firm i at time t.

Table 2 Summary statistics of the growth rate distributions (mean, standard deviation, and the 10%, 25%, 50%, 75% and 90% quantiles)
Year  Variable      mean    s.d.    10%      25%      50%      75%     90%
1997  Empl. growth  0.0000  0.1352  -0.1049  -0.0437  -0.0096  0.0417  0.1156
      Sales growth  0.0000  0.2314  -0.1759  -0.0740  -0.0038  0.0785  0.1803
      GOS growth    0.0000  0.8068  -0.7630  -0.3152   0.0043  0.3191  0.7675
      Prod. growth  0.0000  0.2173  -0.1956  -0.0910  -0.0019  0.0861  0.1987
2000  Empl. growth  0.0000  0.1333  -0.1168  -0.0526  -0.0117  0.0466  0.1317
      Sales growth  0.0000  0.2084  -0.1708  -0.0800  -0.0077  0.0752  0.1845
      GOS growth    0.0000  0.7988  -0.7688  -0.2995  -0.0029  0.3232  0.7550
      Prod. growth  0.0000  0.2181  -0.2025  -0.0936  -0.0007  0.0905  0.2068
2004  Empl. growth  0.0000  0.1295  -0.1157  -0.0373   0.0164  0.0475  0.1050
      Sales growth  0.0000  0.2191  -0.1763  -0.0729  -0.0002  0.0810  0.1779
      GOS growth    0.0000  0.8603  -0.8465  -0.3151   0.0176  0.3257  0.8312
      Prod. growth  0.0000  0.2580  -0.2138  -0.0904   0.0001  0.0931  0.2150

In keeping with previous work (e.g. Bottazzi et al., 'Corporate growth and industrial dynamics: Evidence from French manufacturing'), the growth rate distributions have been normalized around zero in each year, which effectively removes any common trends such as inflation. 8

Summary statistics
Table 1 presents some year-wise summary statistics, which give the reader a rough idea of the range of firm sizes in our dataset. Table 2 presents summary statistics of the growth rate distributions. Figure 1 shows the unconditional growth rate distributions for our four variables of interest. These growth rate distributions are visibly heavy-tailed. 9 This gives an early hint that standard regression estimators such as OLS, which assume Gaussian residuals, may perform less well than Least Absolute Deviation (LAD) techniques, which are robust to extreme observations. Indeed, estimates of the functional form of the growth rates density in terms of the Subbotin family of distributions (of which the Gaussian (normal) and the Laplace (symmetric exponential) distributions are special cases) show that, in the case of French manufacturing firms, the growth rates density is even fatter-tailed than the Laplace. We also observe that the distribution of growth rates of gross operating surplus has a particularly wide support, which indicates considerable heterogeneity between firms in terms of the dynamics of their profits.

Table 3 and Figure 2 show the correlations between our indicators of firm growth and firm performance. Spearman's rank correlation coefficients are also shown, since these are more robust to outliers. All of the series are correlated among themselves at levels that are highly significant. However, the correlations are far from perfect, as has been noted elsewhere (Delmar et al., 'Arriving at the high-growth firm'). The largest correlation (0.5959) is between growth of gross operating surplus and growth of labour productivity. Indeed, a positive correlation between profits and productivity has also been observed in work on Italian data (see Bottazzi et al., 'Productivity, profitability and financial fragility: evidence from Italian business firms'). We also observe relatively large positive correlations between these two variables and the growth of sales (0.3922 and 0.4452, respectively). Although there is a large degree of multicollinearity between these series, the lack of persistence in firm growth rates (despite a high degree of persistence of firm size) will, we hope, aid identification in the regression analysis. Furthermore, the large number of observations will also be helpful for identification. Multicollinearity has the effect of making the coefficient estimates unreliable, in the sense that they may vary considerably from one regression specification to another. With this in mind, we pursue a relatively lengthy robustness analysis in Section 5.
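Before turning to the methodology, a minimal sketch of the growth-rate construction of equation (1) and the year-wise normalization may be useful. The column names ('firm', 'year', 'employees') are illustrative and are not those of the EAE files.

```python
# Sketch of eq. (1), g_it = log(X_it) - log(X_i,t-1), followed by the
# year-wise demeaning that removes common trends such as inflation.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "firm":      [1, 1, 1, 2, 2, 2],
    "year":      [1996, 1997, 1998, 1996, 1997, 1998],
    "employees": [25, 27, 26, 120, 132, 125],
}).sort_values(["firm", "year"]).reset_index(drop=True)

log_size = np.log(df["employees"])
df["empl_growth"] = log_size - log_size.groupby(df["firm"]).shift(1)

# Normalize the growth rate distribution around zero in each year.
df["empl_growth_norm"] = (df["empl_growth"]
                          - df.groupby("year")["empl_growth"].transform("mean"))
print(df)
```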
Methodology

Introducing the VAR
The regression equation of interest is of the following form:

w_it = c + β w_i,t-1 + ε_it    (2)

where w_it is an m × 1 vector of random variables for firm i at time t, β is an m × m matrix of slope coefficients to be estimated, and ε_it is an m × 1 vector of disturbances. In our particular case, m = 4 and w_it corresponds to the vector (employment growth(i,t), sales growth(i,t), GOS growth(i,t), labour productivity growth(i,t))'.

We do not include any dummy control variables (such as year dummies or industry dummies) in the VAR equation because we anticipate that, if indeed there are any temporal or sectoral effects at work, dummy variables will be of limited use in detecting them. Instead, we suspect that the specificities of individual years or sectors may have non-trivial consequences for the structure of interactions of the VAR series, and these cannot be detected through appended dummy variables alone. We explore the influence of temporal disaggregation and sector of activity in detail in Section 5. Furthermore, since previous work on this dataset has not observed any dependence of sales growth on size (Bottazzi et al., 'Corporate growth and industrial dynamics: Evidence from French manufacturing'), we do not attempt to clean the series of size dependence before applying the VAR. However, we explore how our results change across firm size groups in Section 5.

We estimate equation (2) via 'reduced-form' VARs, which do not impose any a priori causal structure on the relationships between the variables and are therefore suitable for the preliminary nature of our analysis. These reduced-form VARs effectively correspond to a series of m individual OLS regressions (Stock and Watson, 'Vector autoregressions'). One problem with OLS regressions in this particular case, however, is that firm growth rates are typically exponentially distributed, with much heavier tails than the Gaussian. In this case OLS may provide unreliable results and, as argued in Bottazzi et al., we would prefer Least Absolute Deviation (LAD) estimation.

Allowing for firm-specific fixed effects
A further reason why OLS (and also LAD) estimation of equation (2) is likely to perform poorly is unobserved heterogeneity between firms in the form of time-invariant firm-specific effects. If these 'fixed effects' are correlated with the explanatory variables, then OLS (and LAD) estimates will be biased. One way of accounting for these fixed effects would be to introduce a dummy variable for each firm and include it in the regression equation, to obtain a standard 'fixed-effects' panel data model. The drawback, however, is that the inclusion of lagged dependent variables is a source of bias in fixed-effects estimation of dynamic panel data models. The intuition is that the fixed effect would in some sense be 'double-counted', since the dependent variable enters the regression equation both at time t and at previous times through the lag structure (a problem known as 'Nickell bias', after Nickell (1981)). Nickell bias is often observed to be rather small, however, and so its importance is a matter of debate.
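To make the Nickell-bias problem concrete before discussing remedies, the following small Monte Carlo (a sketch with arbitrary parameter values, not our data) applies the within (fixed-effects) estimator to a simulated dynamic panel with a short time dimension.

```python
# Sketch: Monte Carlo illustration of Nickell bias for the within (fixed-
# effects) estimator in a dynamic panel. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, T, PHI = 2000, 8, 0.3                   # many firms, short panel

mu = rng.normal(size=N)                    # firm-specific fixed effects
w = np.empty((N, T))
w[:, 0] = mu + rng.normal(size=N)
for t in range(1, T):
    w[:, t] = (1 - PHI) * mu + PHI * w[:, t - 1] + rng.normal(size=N)

y, x = w[:, 1:], w[:, :-1]                 # regress w_t on w_{t-1}
y_dm = y - y.mean(axis=1, keepdims=True)   # within transformation
x_dm = x - x.mean(axis=1, keepdims=True)
phi_within = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(f"true phi = {PHI}, within estimate ~ {phi_within:.2f}")
# The estimate falls short of PHI by roughly (1 + PHI) / (T - 1).
```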
The Nickell-bias problem can be dealt with by using instrumental variables (IV) techniques, such as the 'System GMM' estimator (Blundell and Bond, 'Initial conditions and moment restrictions in dynamic panel data models'). The performance of instrumental variables estimators, however, depends on the quality of the instruments. If the instruments are effective, the estimates will be relatively precisely defined. If the instruments are weak, the confidence intervals surrounding the resulting estimates will be large. The latter is likely to be the case in this study, because it is difficult to find suitable instruments for firm growth rates, which are characteristically random and lack persistence (see the discussion in Geroski, 'The growth of firms in theory and practice', in Competence, Governance and Entrepreneurship, and Coad, 'Testing the principle of growth of the fitter'). IV estimation of a panel VAR with weak instruments thus leads to imprecise estimates.

Binder, Hsiao and Pesaran ('Estimation and inference in short panel vector autoregressions with unit roots and cointegration'; hereafter BHP) present a panel VAR model that can include firm-specific fixed effects but does not require the use of instrumental variables. The model is estimated using quasi-maximum-likelihood optimization techniques. They propose the following specification:

w_it = (I_m - Φ) μ_i + Φ w_i,t-1 + ε_it    (3)

where μ_i corresponds to the firm-specific fixed effects, Φ is the m × m coefficient matrix to be estimated, and ε_it is the usual vector of disturbance terms. BHP (2005) present evidence from Monte Carlo simulations demonstrating that their estimator is more efficient (i.e. the estimates have lower standard errors) than IV GMM. The drawback of the BHP estimator for this particular application, however, is that it assumes normally distributed errors, whereas the distributions of firm growth rates are approximately Laplace-distributed. In this paper our estimator of choice is therefore the LAD estimator, which is best suited to the case of Laplacian error terms.

Causality or association?
Our intentions in this paper are to summarize the comovements of the growth series. We remind the reader of the important distinction between correlation and causality. We have no strong a priori theoretical positions, and we make no attempt at any serious identification of the underlying causality at this early stage, instead preferring to describe the associations. Indeed, much can be learned simply by considering the associations between the variables without mentioning issues of causality (see Moneta, 'Causality in macroeconometrics: some considerations about reductionism and realism', for a discussion).
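As a concrete illustration of this estimation strategy, the sketch below builds the first lags of the four growth series and estimates each equation of the reduced-form VAR by LAD, implemented as a median (q = 0.5) quantile regression; all variable names are illustrative.

```python
# Sketch: reduced-form panel VAR(1) estimated equation by equation, with LAD
# obtained as a median (q = 0.5) quantile regression. Column names are
# illustrative; 'panel' is a long-format DataFrame of firm-year growth rates.
import pandas as pd
import statsmodels.api as sm

SERIES = ["empl_g", "sales_g", "gos_g", "prod_g"]

def estimate_var1_lad(panel: pd.DataFrame) -> dict:
    panel = panel.sort_values(["firm", "year"])
    lags = panel.groupby("firm")[SERIES].shift(1).add_suffix("_lag1")
    data = pd.concat([panel, lags], axis=1).dropna()
    X = sm.add_constant(data[[s + "_lag1" for s in SERIES]])
    # One LAD regression per equation of the VAR.
    return {s: sm.QuantReg(data[s], X).fit(q=0.5) for s in SERIES}

# Usage: results = estimate_var1_lad(panel)
#        print(results["sales_g"].params)  # one row of the estimated beta
```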
Aggregate analysis
The regression results obtained from the OLS, fixed-effects, and LAD estimators are presented in Tables 4, 5 and 6, respectively. It is encouraging to observe that the results obtained from these estimators, and from the different regression specifications (one or two lags), are not too dissimilar. One major difference between the Gaussian estimators (OLS and FE) and LAD is that the magnitudes of the autocorrelation coefficients (along the 'diagonals') are much smaller with the LAD estimator. This was observed by Bottazzi et al. and is explored in Coad, 'A closer look at serial growth rate correlation'. We also note that the fixed-effects regressions yield fewer significant results than the OLS regressions, which in turn yield fewer significant results than the LAD regressions. The coefficients on the variables lagged twice are, roughly speaking, less significant than those on the first lag.

It is also worth mentioning that whilst the growth of GOS seems to be slightly negatively associated with subsequent growth of sales and employment in the LAD results, these coefficients appear to be positive in the OLS and FE regressions (we are therefore cautious in interpreting this result). We base our interpretations mainly on the LAD results. A first observation is that most of the series (except for employment growth) exhibit negative autocorrelation, as shown along the diagonals of the coefficient matrices for the lags. This is in line with previous work (see Coad, 'A closer look at serial growth rate correlation'). The autocorrelation coefficients for the growth of profits and of labour productivity display a particularly large negative sign. Whilst a substantial previous literature has emphasized the 'persistence of profits', the growth of profits has little persistence. This pronounced negative autocorrelation for profits and productivity growth may well be due to 'behavioural' factors, whereby an increase (or decrease) in performance in one year may be followed by a 'slackening off' (or 'extra effort') of the workforce. Indeed, a period of successful achievement may be followed by a renegotiation of the organization's goals in the direction of a redistribution of rents towards the employees, or the fostering of a more relaxed working environment.

Our results suggest that growth of a firm's employment is associated with previous growth of sales and of labour productivity. Sales growth and labour productivity growth have a relatively small positive effect, and the magnitude is of a similar order even at the second lag. Employment growth, however, appears to be relatively strongly associated with subsequent growth of sales and of profits. As could be expected, sales growth and productivity growth also appear to make a relatively large contribution to the subsequent growth of profits. Indeed, sales growth has a sizeable impact on GOS growth even at the second lag.

It is straightforward to interpret the magnitudes of the coefficients. If the employment growth rate increases by 1 percentage point, then ceteris paribus we can expect sales growth to rise by about 0.15 percentage points in the following year. Similarly, a 1 percentage point increase in sales growth can be expected to be followed by a 0.04 percentage point increase in employment growth. This latter result is far more modest than the results reported for a sample of Dutch manufacturing firms in Brouwer et al. (1993, p. 156), who observe that a 1% increase in sales leads to a (statistically significant) 0.33% increase in employment. However, we warn against putting too much faith in specific point estimates at this early stage.

We also observe that growth in labour productivity tends to be preceded by growth of employment and of sales, although the (positive) coefficients are rather small. In addition, it appears that growth of profits is associated with a relatively small subsequent growth of sales, and an even smaller growth of employment. Growth of profits may have a more persistent effect on employment growth than on sales growth, however. Growth of sales, on the other hand, is very strongly associated with subsequent growth of profits. We also observe that the R^2 statistics are rather low, always below 5% in our preferred LAD specification (Table 6).
Robustness analysis
In this section we explore the robustness of our results in a number of ways. First, we consider a simpler regression specification and investigate whether we obtain similar coefficient estimates when we exclude one of the VAR series (Section 5.1). We also investigate the robustness of our findings by repeating the analysis at a more disaggregated level, splitting firms according to size (Section 5.2) and sector of activity (Section 5.3), as well as repeating our regressions for individual years (Section 5.4). Finally, we explore potential asymmetries in the growth process between growing and shrinking firms (Section 5.5).

Sensitivity to specification
In Table 3 we observed that the highest contemporaneous correlations between the VAR series were between profits growth and labour productivity growth. This high degree of multicollinearity may lead to excessively sensitive coefficient estimates. To explore this sensitivity, we repeat the analysis excluding either the productivity growth or the GOS growth variable, and we hope to obtain coefficient estimates similar to those obtained earlier. Table 7 presents the regression results when productivity growth is excluded, and Table 8 presents the results when GOS growth is excluded. It is encouraging that we still find, in all specifications, that employment growth is relatively strongly associated with subsequent growth of sales, which in turn is relatively strongly associated with the growth of profits. Sales growth is also observed to have a feedback effect on subsequent employment growth, of a similar magnitude to that found in Table 6. However, in this simplified specification we no longer observe the direct influence of employment growth on subsequent profits growth (or on productivity growth). Another difference concerns the relationship between growth of GOS and subsequent growth of employment or sales. The results obtained from the different specifications are admittedly different in a few respects, and so we should be especially cautious in drawing conclusions from this rather preliminary analysis.

Size disaggregation
Due care needs to be taken over how growth dynamics vary with factors such as firm size. We cannot suppose that it is meaningful to take a 'grand average' over a large sample of firms and assume a common structural specification. Coad ('A closer look at serial growth rate correlation') shows how the time scale of growth processes varies between small and large firms. For example, whilst small firms display significant negative autocorrelation in annual growth rates, larger firms experience positive autocorrelation, which is consistent with the idea that they plan their growth projects over a longer time horizon. As a result, before we can feel confident about the robustness of our results, we should investigate the possible coexistence of different growth patterns for firms of different sizes. We split our sample into five size groups according to their sales in 1996; the results are presented in Table 9. Sorting growing entities into size groups is not a straightforward statistical task, however. In Table 10, therefore, we use an alternative methodology for sorting the firms into size groups (i.e. according to the mean number of employees over 1996-2004).
Although similar patterns are observed in each of the size groups, we observe that the autocorrelation coefficients (along the diagonals) do seem to vary with firm size (more on this in Coad, 'A closer look at serial growth rate correlation'). We also observe that, as we move towards larger firms, the contribution of sales growth to both employment growth and growth of profits seems to increase in magnitude. It is also interesting to observe that employment growth has less of an effect on subsequent productivity growth for larger firms, which is consistent with the idea that small firms must struggle to reach the minimum efficient scale (MES): until they reach the MES, increases in employment will be associated with increases in productivity.

Sectoral disaggregation
One possibility that deserves investigation is that there may be a sector-specific element in the dynamics of firm growth. For example, the evolution of the market may be easier to foresee in some industries (with mature technologies, for example) than in others. Industries may also vary in the importance of employment growth for the growth of output. We explore how our results vary across industries by loosely following Bottazzi et al. ('Corporate growth and industrial structure: Some evidence from the Italian manufacturing industry') and comparing the results from four particular sectors: precision instruments, basic metals, machinery and equipment, and textiles. These sectors have been chosen to represent the different sectors of Pavitt's taxonomy of industries (Pavitt, 'Sectoral patterns of technical change: towards a taxonomy and a theory'); that is, science-based industries, scale-intensive industries, specialized supply industries, and supplier-dominated industries, respectively. 10 The regression results are presented in Table 11. Our results reveal a certain degree of heterogeneity across sectors. For example, in the Pharmaceuticals and Machinery/Equipment sectors, employment growth seems to make a particularly large contribution to subsequent sales growth. In addition, sales growth appears to have a relatively large influence on subsequent growth of profits in the Machinery/Equipment and Textiles sectors. We also observe that productivity growth is relatively strongly associated with growth of profits in the Textiles sector.

Temporal disaggregation
It may well be the case that the processes of firm growth are not insensitive to the business cycle. To investigate this possibility, we repeat our analysis for individual years (i.e. the years 1998, 2000, 2002 and 2004). The results are presented in Table 12. We do indeed observe that the regression results vary over time. In particular, the contribution of sales growth and employment growth to the growth of profits seems to vary considerably. Employment growth seems to have a relatively consistent effect on the growth of sales, however.

Asymmetric effects for growing or shrinking firms
One potential caveat of the preceding analysis is that there may be asymmetric effects for firms that increase employment and for firms that decrease employment. It may be relatively easy for firms to hire new employees, while firing costs may limit their ability to lay workers off. In this section we therefore explore differential effects of the explanatory variables over the employment growth distribution.
To do this, we perform quantile regressions, which can describe variation in the regression coefficients over the conditional quantiles of the employment growth distribution. (For an introduction to quantile regression, see [START_REF] Koenker | Quantile regression[END_REF].) Figure 3 and Table 13 present the quantile regression coefficient estimates. Roughly speaking, the lower quantiles (closer to 0) represent firms with net employment losses, whilst the upper quantiles (closer to 1) represent firms with net employment gains. We observe that the coefficient on lagged growth of profits is slightly higher at the lower quantiles. This suggests that, for those firms that are shedding employees, growth of profits attenuates the firing of employees. Put differently, if a firm is firing employees, it can be expected to fire even more workers if it is experiencing poor financial performance. The magnitude of this effect is not very large, however. We also check for analogous effects in the relationships between other pairs of variables by looking at the quantile regression plots. Concerning the autocorrelation coefficients, we find results similar to those reported in [START_REF] Coad | A closer look at serial growth rate correlation[END_REF]. For the other relationships, we sometimes obtain interesting results.11 We therefore conclude this section by acknowledging that, although there may be asymmetric effects for growing and shrinking firms, these asymmetries do not appear to be large enough to make our previous estimates unhelpful.

Figure 3: Quantile regression analysis of the relationship between growth of profits (t-1) and employment growth (t). Variation in the coefficient on lagged growth of profits over the conditional quantiles of the employment growth rate distribution. Conditional quantiles (on the x-axis) range from 0 (for the extreme negative-growth firms) to 1 (for the fastest-growing firms). Confidence intervals (non-bootstrapped) are at the 95% level in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. Graphs made using the 'grqreg' Stata module ([START_REF] Azevedo | grqreg: Stata module to graph the coefficients of a quantile regression[END_REF]).
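For illustration, the quantile-regression exercise of Figure 3 and Table 13 can be sketched as follows. The published graphs were produced with the 'grqreg' Stata module, so the Python version below is only an assumed equivalent, using the same hypothetical variable names as in the earlier sketch.

```python
# A sketch of tracing the coefficient on lagged profits (GOS) growth over
# the conditional quantiles of the employment growth distribution.
import statsmodels.formula.api as smf

def profits_coeff_by_quantile(df, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    formula = "empl_gr ~ empl_gr_l1 + sales_gr_l1 + gos_gr_l1 + prod_gr_l1"
    results = {}
    for q in quantiles:
        fit = smf.quantreg(formula, df).fit(q=q)
        coeff = fit.params["gos_gr_l1"]
        lower, upper = fit.conf_int().loc["gos_gr_l1"]  # 95% interval
        results[q] = (coeff, lower, upper)
    return results
```

Plotting the returned coefficients against the quantiles, with an OLS estimate as a horizontal reference line, reproduces the layout of Figure 3.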
Discussion

The coefficient estimates from the preceding section have allowed us to observe the comovements of the four series -- employment growth, sales growth, growth of the gross operating surplus, and growth of labour productivity. Figure 4 provides a simple summary representation of our results, based on an (admittedly subjective) synthesis of the LAD estimates reported in Tables 6, 7, and 8. It should certainly be remembered, however, that we cannot rely too heavily on our regression estimates, because our results do appear to be sensitive to the regression specification, firm size, sector of activity, and year. Figure 4 illustrates that employment growth appears to contribute positively to sales growth, which in turn is associated with subsequent profits growth. These early results provide (limited) support to the idea that employment growth may perhaps be seen as the 'stimulus' which drives growth in other domains of the firm. Indeed, among the series that we consider here, employment growth is the firm's main decision variable. Our results allow us to comment on two theories of firm growth. First, the replicator dynamics model, frequently found in neo-Schumpeterian simulation models, supposes that retained profits are the main source of firm growth. In this vein, we should expect profitable firms to grow whilst struggling firms lose market share (see [START_REF] Coad | Testing the principle of 'growth of the fitter': the relationship between profits and firm growth[END_REF] for a discussion). Second, and not altogether unrelated, the 'accelerator' models of firm investment suppose that growth of sales leads to subsequent reinvestment in the firm, which would thus result in employment growth. The results presented here do not offer much support to these two theories of firm growth. Instead, it seems that firm growth is very much a discretionary phenomenon. The decision to take on new employees seems to be largely exogenous, and the mere generation of profits certainly does not automatically imply that these profits will be reinvested in the firm.

Two stories of firm growth

Our results are consistent with (at least) two possible stories of firm growth. First, one may believe that firms are incapable of accurately seeing into the future. At any time some firms may take a risk and decide to grow, and this increase in resources eventually results in an increase in sales and also an increase in profits. Other firms may be hesitant about hiring new employees, and thus they may miss out on growth opportunities.12 Second, an alternative view is that firms can accurately anticipate the evolution of the market (demand shocks or technology shocks, for example). These rational firms take on new employees with the aim of exploiting these anticipated opportunities. In this case, employment growth is merely a response to new information about market conditions, and it would be quite incorrect to say that employment growth causes sales growth, because it is the successful anticipation of sales growth opportunities that leads to employment growth. We note, however, that many intermediate cases are also possible, whereby managers do not know for sure how the business climate will evolve but are willing to take a bet on a 'hunch' they might have. In order to decide upon the level of foresight of business firms (i.e., is employment growth an exogenous event?), we note here that qualitative empirical work (interviews and questionnaires) may be informative.

Conclusion

We have presented some preliminary investigations of a regression framework that, hopefully, will allow us to better understand the growth behaviour of business firms. The application of a VAR framework to firm growth has been introduced here, and we have investigated the robustness of our results along a number of dimensions, but our analysis should be seen as preliminary. In particular, there is a considerable degree of multicollinearity between the individual statistical series that make up the PVAR model, and this seems to make our results rather 'wobbly'. We are therefore wary of putting too much confidence in any specific point estimates. We can identify (at least) three important caveats in our analysis. First, despite our efforts to conduct a robustness analysis, there are still unresolved issues of sample selection bias that stem from the fact that we have not included firms with negative values for their gross operating surplus. Furthermore, our results are obtained from analyzing a balanced panel of surviving firms and do not deal with entry or exit. Second, we observe that the R2 statistics are rather low (typically lower than 5%).
Could this be due to measurement error, aggregation effects, or perhaps some statistical fallacy? How does the R2 improve when we include contemporaneous effects or longer lags? Does the R2 statistic improve when we use data covering shorter time periods (e.g. quarterly data)? This issue clearly deserves to be explored. Third, we do not know to what extent our results are specific to the case of French manufacturing firms. A next step would be to begin thinking about moving from a reduced-form VAR to a structural VAR, in which some of the contemporaneous relationships are presented in more detail. If anything, our results would inform a structural VAR in which profits growth at (t) depends on sales growth and employment growth at (t), and where sales growth at (t) depends on employment growth at (t). Employment growth at (t), however, would not depend on any contemporaneous values of the other variables. One unresolved question concerns how we should aggregate over firms. Our results seem to suggest that, at the firm level, employment growth precedes sales growth, and sales growth is associated with subsequent growth in profits. At the aggregate level, however, there is some evidence (from monthly US data) that increases in output are followed by a less than proportionate increase in labour hours, and that this increase occurs mostly within a 6-month interval (Sims, 1974). We also outline some directions for future research. It may be fruitful to use data at quarterly intervals, in line with macroeconomic applications of VAR models. The Compustat database provides quarterly data on sales, investment, and other series, and may be a suitable database in this respect. Furthermore, series such as R&D expenditure could be added to the VAR model. These would give additional information on the relationship between innovation and firm growth.13 In addition, we might want to include investment in fixed assets in our VAR framework, even though there are specific issues related to this variable that would have to be dealt with.14

Figure 1: Distribution of the unconditional growth rates of our sample of French manufacturing firms. Top left: employment growth. Top right: sales growth. Bottom left: growth of gross operating surplus. Bottom right: growth of labour productivity. Note the log scale on the y axis.

Figure 4: A stylized depiction of the process of firm growth, based on the PVAR(1) specification in Table 6.

Table 1: Summary statistics after cleaning the data

              Mean     Std. Dev.  10%     25%     Median   75%     90%
1996  Sales   99328    340574     11733   17531   30693    68306   179629
      Empl    101.01   235.79     25      32      45       86      190
2000  Sales   125609   447165     13670   21199   38342    84011   227723
      Empl    106.16   234.71     27      34      47       93      200
2004  Sales   135671   527168     13237   21128   40046    88751   239982
      Empl    104.35   238.96     25      33      47       92      200

Table 2: Summary statistics for the growth rate series. Columns: Mean, Std Dev, 10%, 25%, 50%, 75%, 90%, obs; one row per year, starting with 1997.

Table 3: Matrix of contemporaneous correlations for the indicators of firm growth. Scatterplot matrix of contemporaneous values of employment growth, sales growth, growth of GOS and growth of productivity in a typical year (2000).

Table 4: OLS estimation of equation (2).

Table 5: Fixed-effect estimation of equation (2).

Table 6: LAD estimation of equation (2).

Table 7: LAD estimation of equation (2) where m=3 and w corresponds to the vector (Empl. growth(i,t), Sales growth(i,t), GOS growth(i,t))'.
w(t)            Empl.gr(t-1)  Sales.gr(t-1)  GOS.gr(t-1)   R2      obs
Empl. growth    -0.0205       0.0529         0.0013        0.0060  50277
  t-stat        -5.64         20.86          2.32
Sales growth    0.1194        -0.0767        0.0014        0.0054  50279
  t-stat        23.06         -21.25         1.77
GOS growth      -0.0073       0.2144         -0.2845       0.0341  46826
  t-stat        -0.31         13.10          -73.61

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  GOS.gr(t-1)  Empl.gr(t-2)  Sales.gr(t-2)  GOS.gr(t-2)   R2      obs
Empl. growth    -0.0216       0.0581         0.0021       0.0119        0.0260         0.0013        0.0087  40924
  t-stat        -6.03         22.11          3.35         3.32          10.00          2.09
Sales growth    0.1381        -0.0850        0.0024       0.0655        -0.0358        0.0004        0.0066  40925
  t-stat        24.62         -20.72         2.50         11.72         -8.83          0.36
GOS growth      -0.0004       0.3269         -0.3633      -0.0276       0.1059         -0.1475       0.0472  38084
  t-stat        -0.02         19.82          -89.52       -1.25         6.54           -36.76

Table 8: LAD estimation of equation (2) where m=3 and w corresponds to the vector (Empl. growth(i,t), Sales growth(i,t), labour productivity growth(i,t))'.

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  Prod.gr(t-1)  R2      obs
Empl. growth    -0.0034       0.0428         0.0186        0.0075  59332
  t-stat        -0.97         17.84          8.98
Sales growth    0.1286        -0.0888        0.0230        0.0053  59334
  t-stat        24.93         -25.02         7.50
Prod. growth    -0.0030       0.0111         -0.2287       0.0258  59266
  t-stat        -0.52         2.80           -65.88

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  Prod.gr(t-1)  Empl.gr(t-2)  Sales.gr(t-2)  Prod.gr(t-2)  R2      obs
Empl. growth    -0.0068       0.0473         0.0217        0.0221        0.0178         0.0172        0.0109  50809
  t-stat        -1.81         17.59          9.44          5.83          6.61           7.21
Sales growth    0.1496        -0.0982        0.0221        0.0620        -0.0391        0.0025        0.0066  50810
  t-stat        26.46         -24.43         6.43          10.90         -9.69          0.69
Prod. growth    -0.0085       0.0298         -0.2837       -0.0313       0.0145         -0.1486       0.0354  50751
  t-stat        -1.25         6.15           -68.03        -4.59         3.00           -34.52

Table 9: LAD estimation of equation (2) across different size groups. Firms sorted into size groups according to their initial size (sales in 1996). Group 1 contains the smallest firms. Standard errors (and hence t-statistics) obtained using 500 bootstrap replications.

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  GOS.gr(t-1)  Prod.gr(t-1)  R2      obs
Size group 1
Empl. growth    -0.0697       0.0306         0.0001       0.0308        0.0079
  t-stat        -3.81         2.42           0.06         2.01
Sales growth    0.1547        -0.1489        -0.0012      0.0405        0.0106
  t-stat        5.44          -5.95          -0.49        1.54
GOS growth      0.2102        0.0945         -0.3039      0.0564        0.0412
  t-stat        1.87          1.19           -12.87       0.49
Prod. growth    0.0990        -0.0030        0.0003       -0.2480       0.0388
  t-stat        3.39          -0.15          0.10         -10.85
Size group 2
Empl. growth    -0.0116       0.0348         -0.0037      0.0490        0.0069  10292
  t-stat        -0.71         3.79           -2.35        4.18
Sales growth    0.1800        -0.1452        0.0005       0.0339        0.0106  10292
  t-stat        7.28          -8.68          0.19         1.75
GOS growth      -0.0223       0.1304         -0.2861      0.0880        0.0348
  t-stat        -0.27         2.56           -12.04       1.06
Prod. growth    0.0203        0.0048         0.0008       -0.2284       0.0260  10288
  t-stat        0.79          0.27           0.35         -11.48
Size group 3
Empl. growth    0.0004        0.0230         -0.0015      0.0503        0.0076  10166
  t-stat        0.03          3.03           -1.18        5.25
Sales growth    0.1623        -0.1155        -0.0040      0.0753        0.0065  10166
  t-stat        8.75          -7.68          -1.45        3.91
GOS growth      0.2829        0.0045         -0.3263      0.3062        0.0357
  t-stat        3.17          0.06           -11.47       2.73
Prod. growth    0.0400        -0.0035        -0.0025      -0.2028       0.0239  10166
  t-stat        1.49          -0.20          -0.71        -8.10
Size group 4
Empl. growth    0.0053        0.0491         -0.0015      0.0158        0.0071
  t-stat        0.40          5.67           -1.02        1.46
Sales growth    0.1147        -0.0641        -0.0027      0.0323        0.0030
  t-stat        5.63          -4.08          -1.02        2.26
GOS growth      0.0652        0.2084         -0.3691      0.2732        0.0422
  t-stat        0.66          3.20           -11.34       2.66
Prod. growth    -0.0296       0.0462         -0.0048      -0.1850       0.0157
  t-stat        -1.09         2.85           -1.42        -7.04
Size group 5
Empl. growth    0.1309        0.0486         -0.0016      0.0253        0.0237  10092
  t-stat        8.67          7.66           -1.01        3.65
Sales growth    0.1587        -0.0040        -0.0014      0.0266        0.0061  10093
  t-stat        5.60          -0.26          -0.55        1.79
GOS growth      0.1851        0.2978         -0.2939      0.2161        0.0233
  t-stat        2.05          4.57           -8.91        2.46
Prod. growth    -0.0406       0.0254         -0.0005      -0.1566       0.0102  10080
  t-stat        -1.38         1.06           -0.10        -5.52

Table 10: LAD estimation of equation (2) across different size groups.
Firms sorted into size groups according to their mean size (average number of employees over 1996-2004). Group 1 contains the smallest firms. Standard errors (and hence t-statistics) obtained using 500 bootstrap replications.

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  GOS.gr(t-1)  Prod.gr(t-1)  R2      obs
Size group 1
Empl. growth    -0.0826       0.0319         0.0003       0.0309        0.0129
  t-stat        -5.36         3.56           0.18         3.08
Sales growth    0.1127        -0.1397        0.0019       0.0248        0.0089
  t-stat        5.46          -7.66          0.78         1.45
GOS growth      0.1853        0.0108         -0.2909      0.1741        0.0370  9294
  t-stat        1.94          0.15           -11.07       2.25
Prod. growth    0.0828        0.0020         -0.0005      -0.2310       0.0309
  t-stat        2.69          0.12           -0.20        -9.73
Size group 2
Empl. growth    -0.0612       0.0273         -0.0005      0.0174        0.0051
  t-stat        -3.90         3.14           -0.28        1.53
Sales growth    0.1282        -0.1417        -0.0022      0.0221        0.0107
  t-stat        5.71          -7.78          -0.94        1.20
GOS growth      0.1707        0.0909         -0.3208      0.1344        0.0386  9454
  t-stat        1.91          1.82           -11.90       1.42
Prod. growth    0.0789        -0.0019        0.0002       -0.2298       0.0279
  t-stat        3.03          -0.12          0.06         -10.19
Size group 3
Empl. growth    -0.0006       0.0308         -0.0043      0.0443        0.0072
  t-stat        -0.05         4.67           -3.07        5.13
Sales growth    0.1226        -0.0929        -0.0047      0.0426        0.0049
  t-stat        6.76          -5.90          -1.72        2.64
GOS growth      0.0554        0.1877         -0.3332      0.2393        0.0386  9512
  t-stat        0.58          3.11           -14.53       2.73
Prod. growth    0.0310        0.0097         -0.0006      -0.2156       0.0228
  t-stat        1.23          0.55           -0.17        -9.28
Size group 4
Empl. growth    0.0670        0.0531         -0.0012      0.0309        0.0121  9903
  t-stat        4.80          5.52           -1.17        3.09
Sales growth    0.1927        -0.0696        -0.0033      0.0642        0.0076  9903
  t-stat        8.99          -4.41          -1.20        3.37
GOS growth      0.2840        0.0991         -0.3577      0.3899        0.0379  9166
  t-stat        2.89          1.76           -9.80        3.35
Prod. growth    -0.0009       0.0194         -0.0073      -0.1558       0.0163  9900
  t-stat        -0.05         1.18           -2.87        -7.71
Size group 5
Empl. growth    0.1335        0.0606         -0.0035      0.0402        0.0263
  t-stat        7.26          6.82           -1.96        3.99
Sales growth    0.1832        -0.0035        -0.0022      0.0408        0.0086
  t-stat        7.74          -0.22          -0.81        2.93
GOS growth      0.1909        0.2519         -0.2857      0.2412        0.0235  9392
  t-stat        1.66          2.93           -9.08        2.12
Prod. growth    -0.0421       0.0289         0.0003       -0.1595       0.0105
  t-stat        -1.44         1.39           0.07         -5.96

Table 11: LAD estimation of equation (2) across different industries. Standard errors (and hence t-statistics) obtained using 1000 bootstrap replications.

w(t)            Empl.gr(t-1)  Sales.gr(t-1)  GOS.gr(t-1)  Prod.gr(t-1)  R2
NAF 33: Precision instruments
Empl. growth    0.0939        0.0721         0.0008       0.0149        0.0195
  t-stat        2.53          2.95           0.26         0.55
Sales growth    0.2458        -0.0766        -0.0046      0.0739        0.0120
  t-stat        3.63          -1.41          -0.95        1.26
GOS growth      -0.0172       0.2630         -0.2924      0.0852        0.0406
  t-stat        -0.07         1.34           -4.23        0.36
Prod. growth    -0.0583       0.0741         -0.0025      -0.3129       0.0322
  t-stat        -0.99         1.52           -0.45        -5.95
NAF 27: Basic metals
Empl. growth    0.0120        0.0553         -0.0023      0.0024        0.0074
  t-stat        0.33          2.50           -0.50        0.10
Sales growth    0.1060        -0.0875        -0.0111      0.0717        0.0069
  t-stat        1.67          -1.77          -1.71        1.42
GOS growth      0.3084        -0.0253        -0.3701      0.2307        0.0552
  t-stat        1.36          -0.24          -4.94        1.00
Prod. growth    -0.1107       -0.0387        -0.0108      -0.1921       0.0342
  t-stat        -1.66         -0.83          -1.27        -3.27
NAF 29: Machinery and equipment
Empl. growth    -0.0016       0.0307         0.0002       0.0140        0.0038
  t-stat        -0.08         3.00           0.12         1.10
Sales growth    0.2059        -0.1896        0.0010       0.0431        0.0151
  t-stat        5.30          -7.29          0.22         1.36
GOS growth      0.1371        0.2054         -0.3506      0.0937        0.0493
  t-stat        0.96          2.05           -8.28        0.65
Prod. growth    0.0072        -0.0136        0.0005       -0.2332       0.0278
  t-stat        0.17          -0.54          0.13         -6.08
NAF 17: Textiles
Empl. growth    0.0133        0.0512         0.0000       0.0103        0.0061
  t-stat        0.72          2.70           -0.01        0.78
Sales growth    0.0660        0.0429         0.0030       -0.0491       0.0051
  t-stat        1.78          1.54           0.55         -1.76
GOS growth      0.2778        0.3242         -0.3617      0.2510        0.0383
  t-stat        1.89          3.25           -7.27        2.36
Prod. growth    0.0521        0.0411         -0.0064      -0.1773       0.0178
  t-stat        0.84          0.99           -0.80        -3.43

Table 13: Quantile regression estimation of equation (2), focusing on the relationship between profits growth (t-1) and employment growth (t). 50269 observations. Standard errors (and hence t-statistics) obtained using 100 bootstrap replications.

          10%      25%      50%      75%      90%
Coeff.    0.0007   -0.0003  -0.0019  -0.0030  -0.0033
t-stat    0.40     -0.34    -2.94    -2.61    -1.37
R2        0.0098   0.0067   0.0069   0.0074   0.0084

Notes

The EAE databank has been made available to the author under the mandatory condition of censorship of any individual information. This database has already featured in several other studies into firm growth -- see [START_REF] Bottazzi | Corporate growth and industrial dynamics: Evidence from french manufacturing[END_REF], [START_REF] Coad | Testing the principle of 'growth of the fitter': the relationship between profits and firm growth[END_REF] and [START_REF] Coad | A closer look at serial growth rate correlation[END_REF].

More specifically, we examine firms in the two-digit NAF sectors 17-36, where firms are classified according to their sector of principal activity (the French NAF classification matches the international NACE and ISIC classifications). We do not include NAF sector 37, which corresponds to recycling industries. The yearly samples contain 22 319, 22 231, 22 305, 22 085, 21 966, 22 053, 21 855, 21 347 and 20 723 firms, respectively.

NAN is shorthand for Not a Number, which refers to the result of a numerical operation that cannot return a valid number value. In our case, we may obtain a NAN if we try to take the logarithm of a negative number, or if we try to divide a number by zero.

GOS is sometimes referred to as 'profits' in the following. Labour productivity is calculated in the usual way, by dividing value added by the number of employees.

In fact, this choice of strategy for deflating our variables was to some extent imposed upon us, since we were unable to find a suitable sector-by-sector series of producer price indices to be used as deflators. [START_REF] Bottazzi | Corporate growth and industrial dynamics: Evidence from french manufacturing[END_REF] present a parametric investigation of the distribution of sales growth rates of French manufacturing firms.

The sectors we study are NAF 33 (manufacturing of medical, precision and optical instruments, watches and clocks), NAF 27 (manufacturing of basic metals), NAF 29 (manufacturing of machinery and equipment, nec.) and NAF 17 (manufacturing of textiles). Note that we do not follow exactly the methodology in [START_REF] Bottazzi | Corporate growth and industrial structure: Some evidence from the italian manufacturing industry[END_REF], because we consider only 2-digit sectors, for want of a suitable number of observations for our empirical model.

For example, there is considerable variation in the coefficient on lagged employment growth on growth of profits: at the lower quantiles of profits growth, lagged employment growth has a positive effect, whilst having a negative effect at the upper quantiles. Remember, however, that we have excluded those unfortunate firms that obtained negative profits -- which is a source of sample selection bias. We should thus be extremely wary of saying that employment growth always leads to sales or profits growth, because it may be that in some cases employment growth leads to failure.
One hypothesis we could test here (recently put forward by Giovanni Dosi) is that firms have 'behavioural' decision rules for R&D expenditures, whereby they make no attempt to 'maximize expected return on all future innovation opportunities', as some neoclassical economists might suggest, but instead simply try to adjust their R&D expenditures in an attempt to keep a roughly constant R&D/sales ratio. This line of research is currently being pursued with Rekha Rao.

First, there are problems distinguishing between expansionary and replacement investment, which obscures the relationship between investment in fixed assets and firm growth. Second, there is a remarkable lumpiness in the time series of investment in fixed assets. For example, Doms and Dunne (1998) consider a large sample of US manufacturing plants over 1972-1988 and observe that, on average, half a plant's total investment was performed in just three years.
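Read together with the notes above, the construction of the four growth series can be sketched as follows. This is an illustrative reconstruction under the stated definitions (log-differences, GOS as the profits measure, labour productivity as value added per employee) with hypothetical column names, not the original data-preparation code.

```python
# A minimal sketch of building the four growth series from a firm-year
# panel with hypothetical column names ('firm_id', 'year', 'employees',
# 'sales', 'gos', 'value_added').
import numpy as np
import pandas as pd

def build_growth_series(panel: pd.DataFrame) -> pd.DataFrame:
    panel = panel.sort_values(["firm_id", "year"]).copy()
    # Labour productivity = value added per employee.
    panel["productivity"] = panel["value_added"] / panel["employees"]
    for var, gr in [("employees", "empl_gr"), ("sales", "sales_gr"),
                    ("gos", "gos_gr"), ("productivity", "prod_gr")]:
        # The log of a non-positive value is NaN ('Not a Number'), which is
        # why firm-years with negative GOS drop out of the estimation sample.
        logs = np.log(panel[var].where(panel[var] > 0))
        # Growth rate = within-firm log-difference.
        panel[gr] = logs.groupby(panel["firm_id"]).diff()
    return panel
```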
Dr Claude Esling, Yudong Zhang, Dr Yuping Ren, Dr Song Li, Dr Na Xiao, Dr Jing Bai, Ms Nan Xu, Ms Shiying Wang, Dr Zhangzhi Shi, Mr Yan Haile, Ms Xiaorui Liu, Ms Jianshen

From the orientations of the martensite variants obtained by EBSD and from crystallographic calculation, the two NM martensite variants within one plate are in a compound twin relationship. In the straight, low-contrast zones, the compound-twinned NM martensite variants are distributed symmetrically about the plate interfaces, whereas in the curved, high-contrast zones the NM martensite variants are distributed asymmetrically about the plate interfaces. In the NiMnGa thin film there are four NM martensite variant groups, each containing eight NM martensite variants, so that the film as a whole contains 32 differently oriented NM martensite variants. For the seven-layer modulated (7M) martensite there are likewise four variant groups, but each group contains four differently oriented 7M variants, giving 16 7M martensite variants in total. This indicates that the NM martensite variants in the NiMnGa film can only be obtained via the 7M martensite: each 7M martensite variant transforms into two differently oriented NM variants.

Keywords: ferromagnetic shape memory alloys (FSMAs), Ni-Mn-Ga thin films, martensitic transformation, EBSD, misorientation, texture

Epitaxial Ni-Mn-Ga thin films have attracted considerable attention, since they are promising candidates for magnetic sensors and actuators in micro-electro-mechanical systems. Comprehensive information on the microstructural and crystallographic features of the NiMnGa films, and on their relationship with the constraints of the substrate, is essential for further property optimization. In the present work, epitaxial Ni-Mn-Ga thin films were produced by DC magnetron sputtering and then characterized by X-ray diffraction (XRD) and electron backscatter diffraction in a scanning electron microscope (SEM-EBSD). Epitaxial NiMnGa thin films with the nominal composition Ni50Mn30Ga20 and a thickness of 1.5 µm were successfully fabricated on MgO monocrystalline substrates by DC magnetron sputtering, after optimization in the present work of sputtering parameters such as the sputtering power, the substrate temperature and the seed layer. XRD measurements demonstrate that the epitaxial NiMnGa thin films are composed of three phases: austenite, NM martensite and 7M martensite. With the optimized measurement geometries, the maximum number of diffraction peaks of the phases concerned, especially of the low-symmetry 7M martensite, were acquired and analyzed. The lattice constants of all three phases in the films, under the constraints of the substrate, were fully determined. These serve as prerequisites for the subsequent EBSD crystallographic orientation characterizations. SEM-EBSD analyses through the film depth further verified the coexistence of the three constituent phases: austenite, 7M martensite and NM martensite. NM martensite is located near the free surface of the film, austenite above the substrate surface, and 7M martensite in the intermediate layers between austenite and NM martensite. Microstructure characterization shows that both the 7M martensite and the NM martensite are of plate morphology and are organized into two characteristic zones featuring low and high relative secondary electron image contrast. Local martensite plates with similar morphological orientation are organized into plate groups, or variant colonies. Further EBSD characterization indicates that there are four distinct martensite plates in each variant group for both NM and 7M martensite. Each NM martensite plate is composed of paired major and minor lamellar variants (in terms of their thicknesses) having a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation 7M martensite variants and eight orientation NM martensite variants in one variant group. According to the crystallographic orientations of the martensites and crystallographic calculation, for NM martensite the inter-plate interfaces are composed of compound twins in adjacent NM plates.
The symmetrical distribution of the compound twins results in long, straight plate interfaces in the low relative contrast zones. The asymmetrical distribution leads to changes of the inter-plate interface orientation in the high relative contrast zones. For 7M martensite, both the Type I and the Type II twin interfaces are nearly perpendicular to the substrate surface in the low relative contrast zones, and the Type I twin pairs appear with a much higher frequency than the Type II twin pairs. However, there are two Type II twin interface trace orientations and one Type I twin interface trace orientation in the high relative contrast zones, where the Type II twin pairs are more frequent than the Type I twin pairs. The different occurrences of the different types of twins in the different zones originate from the substrate constraint. The crystallographic calculation also indicates that the martensitic transformation sequence runs from austenite to 7M martensite, which then transforms into NM martensite (A→7M→NM). The present study intends to offer deep insights into the crystallographic features and the martensitic transformation of epitaxial NiMnGa thin films.

Résumé

Epitaxial NiMnGa thin films with the nominal composition Ni50Mn30Ga20 and a thickness of 1.5 µm were successfully grown on MgO single-crystal substrates by DC magnetron sputtering, after optimization in the present work of parameters such as the sputtering power, the substrate temperature and the seed layer. XRD diffraction measurements show that the epitaxial NiMnGa thin films are composed of three phases: austenite, NM martensite and 7M modulated martensite. With the optimized measurement geometries, the maximum possible number of diffraction peaks of the relevant phases, in particular given the low symmetry of the 7M martensite, were acquired and analyzed. The lattice constants of all three phases in the films, under the constraints of the substrate, were fully determined. SEM-EBSD analysis through the film depth further verified the coexistence of the three constituent phases: austenite, 7M martensite and NM martensite. The NM martensite lies near the free surface of the film, the austenite above the substrate surface, and the 7M martensite in the intermediate layers between austenite and NM martensite. Microstructural characterization shows that both the 7M martensite and the NM martensite have a plate morphology and are organized into two characteristic zones of low and high relative contrast in secondary electron images. Local martensite plates with similar morphological orientation are organized into plate groups, or variant colonies. Further EBSD characterization indicates that there are four distinct martensite plates in each variant colony for both NM and 7M martensite. Each NM martensite plate is composed of paired major and minor lamellar variants (in terms of their thicknesses) with a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation variants of 7M martensite and eight orientation variants of NM martensite in one variant colony.
According to the crystallographic orientation of the martensites and to crystallographic calculations, for the NM martensite the inter-plate interfaces consist of compound twins in adjacent NM plates. The symmetrical distribution of the compound twins results in long, straight plate interfaces in the low relative contrast zones, while the asymmetrical distribution leads to changes of the inter-plate interface orientation in the high relative contrast zones. For the 7M martensite, both the Type I and the Type II twin interfaces are roughly perpendicular to the substrate surface in the low relative contrast zones, and the Type I twin pairs appear with a much higher frequency than the Type II twin pairs. However, there are two Type II twin interface trace orientations and one Type I twin interface trace orientation in the high relative contrast zones, where the Type II twin pairs are more frequent than the Type I twin pairs. The different occurrences of the different types of twins in the different zones are due to the substrate constraint. The crystallographic calculation also shows that the martensitic transformation sequence runs from austenite to 7M martensite, which is then transformed into NM martensite (A→7M→NM). The present study offers an in-depth view of the crystallographic features and of the martensitic transformation of epitaxially grown NiMnGa thin films.

Mots clés (keywords): ferromagnetic shape memory alloys (FSMAs); Ni-Mn-Ga thin films; martensitic transformation; EBSD; misorientation; texture.

General introduction

The miniaturization of electronic systems and the increase of their functionality require the implementation of active and sensitive devices on a small scale [START_REF] Yang | Substrate constraint induced variant selection of 7M martensite in epitaxial NiMnGa thin films[END_REF]. Micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS) are among the potential technologies for decreasing the size of these devices, and have been established in wide-ranging domains such as information technology, automotive, aerospace and bio-medical applications [2,3]. Developing excellent active and sensitive materials is of technological interest and also represents a dominating challenge for the design of MEMS and NEMS. In particular, active and sensitive materials that can exhibit large strains with rapid response (also referred to as smart materials) are desirable [4,5]. A number of active materials, such as magnetostrictive materials, piezoelectric ceramics and shape memory alloys, that show strains of a few percent under an applied external field have been proposed as such actuator and sensor materials [6][7][8][START_REF] Fukuda | Giant magnetostriction in Fe-based ferromagnetic shape memory alloys[END_REF][START_REF] Handley R C | Modern magnetic materials: principles and applications[END_REF][START_REF] Kohl | Ferromagnetic Shape Memory Microactuators[END_REF].
Among the numerous advanced materials, the ferromagnetic shape memory alloys (FSMAs), also referred to as magnetic shape memory alloys (MSMAs), are a fascinating group of materials, which can provide a significant and reversible strain at high frequency, driven by an external magnetic field [START_REF] Faehler | An Introduction to Actuation Mechanisms of Magnetic Shape Memory Alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF][START_REF] Jiles | Recent advances and future directions in magnetic materials[END_REF][START_REF] Wilson | New materials for micro-scale sensors and actuators: An engineering review[END_REF][START_REF] Nespoli | The high potential of shape memory alloys in developing miniature mechanical devices: A review on shape memory alloy mini-actuators[END_REF][START_REF] Hauser | Chapter 8 -Thin magnetic films[END_REF][START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF]. FSMAs not only overcome the low efficiency of thermally controlled shape memory actuators, but also exhibit much larger output strains than magnetostrictive, piezoelectric or electrostrictive materials. The unique properties of FSMAs have attracted extensive research interest during the past few years. The discovery of the ferromagnetic Ni-Mn-based Heusler alloys quickly promoted a breakthrough in the application of ferromagnetic shape memory alloys [START_REF] Sozinov | 12% magnetic field-induced strain in Ni-Mn-Ga-based non-modulated martensite[END_REF][START_REF] Sozinov | Giant magnetic-field-induced strain in NiMnGa seven-layered martensitic phase[END_REF][START_REF] Kohl | Recent Progress in FSMA Microactuator Developments[END_REF]. To date, magnetic-field-induced strains as high as 12% have been achieved in bulk NiMnGa single crystals. However, although extensive research has been centered on bulk polycrystalline NiMnGa alloys to understand their mechanical and magnetic properties and their phase transformation behavior [START_REF] Xu | Ni-Mn-Ga shape memory alloys development in China[END_REF][START_REF] Cong | Crystal structures and textures in the hot-forged Ni-Mn-Ga shape memory alloys[END_REF][START_REF] Cong | Crystal structure and phase transformation in Ni53Mn25Ga22 shape memory alloy from 20 K to 473 K[END_REF][START_REF] Cong | Crystal structures and textures of hot forged Ni48Mn30Ga22 alloy investigated by neutron diffraction technique[END_REF][START_REF] Li | Composition-dependent ground state of martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | Microstructure and magnetocaloric effect of melt-spun Ni52Mn26Ga22 ribbon[END_REF][START_REF] Li | Twin relationships of 5M modulated martensite in Ni-Mn-Ga alloy[END_REF][START_REF] Wang | Twinning stress in shape memory alloys: Theory and experiments[END_REF][START_REF] Pond R C | Deformation twinning in Ni2MnGa[END_REF][START_REF] Matsuda | Transmission Electron Microscopy of Twins in 10M Martensite in Ni-Mn-Ga Ferromagnetic Shape Memory Alloy[END_REF][START_REF] Murray | 6% magnetic-field-induced strain by twin-boundary motion in ferromagnetic Ni--Mn--Ga[END_REF], magnetic-field-induced strains in bulk NiMnGa polycrystals are still of the order of 1%, owing to their complex microstructure. In addition, bulk NiMnGa alloys are very brittle in the polycrystalline state, making them difficult to deform into a desirable shape.
Ductility can be improved in single-crystal or thin-film form. NiMnGa thin films deposited by various physical vapor deposition methods have shown superior mechanical properties [3,[START_REF] Backen | Epitaxial Ni-Mn-Ga Films for Magnetic Shape Memory Alloy Microactuators[END_REF][START_REF] Eichhorn | Microstructure of freestanding single-crystalline Ni2MnGa thin films[END_REF][START_REF] Dunand | Size Effects on Magnetic Actuation in Ni-Mn-Ga Shape-Memory Alloys[END_REF][START_REF] Chernenko | Magnetic domains in Ni-Mn-Ga martensitic thin films[END_REF][START_REF] Hakola | Ni-Mn-Ga films on Si, GaAs and Ni-Mn-Ga single crystals by pulsed laser deposition[END_REF][START_REF] Dong | Shape memory and ferromagnetic shape memory effects in single-crystal Ni[sub 2]MnGa thin films[END_REF][START_REF] Castano | Structure and thermomagnetic properties of polycrystalline Ni--Mn--Ga thin films[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF][START_REF] Dong | Molecular beam epitaxy growth of ferromagnetic single crystal (001) Ni[sub 2]MnGa on (001) GaAs[END_REF]. However, most as-deposited NiMnGa thin films possess a complex microstructure and several coexisting phases, which are the main obstacles to achieving a huge magnetic-field-induced strain [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Kallmayer | Compositional dependence of element-specific magnetic moments in Ni 2 MnGa films[END_REF][START_REF] Annadurai | Composition, structure and magnetic properties of sputter deposited Ni-Mn-Ga ferromagnetic shape memory thin films[END_REF]. Revealing the local crystallographic orientations and the interfaces between the microstructural constituents of epitaxial NiMnGa thin films has therefore been an essential issue, in order to provide useful guidance for post-treatments to eliminate the undesired martensite variants [START_REF] Witherspoon | Texture and training of magnetic shape memory foam[END_REF][START_REF] Wang | A variational approach towards the modeling of magnetic field-induced strains in magnetic shape memory alloys[END_REF][START_REF] Gaitzsch | Magnetomechanical training of single crystalline Ni-Mn-Ga alloy[END_REF][START_REF] Chulist | Twin boundaries in trained 10M Ni-Mn-Ga single crystals[END_REF][START_REF] Chmielus | Training, constraints, and high-cycle magneto-mechanical properties of Ni-Mn-Ga magnetic shape-memory alloys[END_REF].

Ferromagnetic shape memory effect

Of all the shape memory effects discovered, the magnetic-field-induced shape memory effect is the most conducive to application, due to its high response frequency and giant actuation strain.
The giant magnetic-field-induced strain arises either from the orientation rearrangement of martensite variants or from a structural transition of these materials [START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF][START_REF] Liu | Giant magnetocaloric effect driven by structural transitions[END_REF][START_REF] Kainuma | Magnetic-field-induced shape recovery by reverse phase transformation[END_REF]. Orientation rearrangement of martensite variants, also referred to as magnetic-field-induced reorientation, is based on a magnetostructural coupling which takes advantage of the large uniaxial magnetocrystalline anisotropy of the martensite phase [START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF]. Normally, the c-axis (short axis) is the easy axis in the modulated martensites, whereas it is the hard axis in the non-modulated martensite, which displays easy magnetization planes perpendicular to the c-axis. As shown in Fig. 1.1, in practical materials the twin variants of the martensite have magnetic moments pointing in different directions. On the application of a magnetic field, the variants that are not aligned with the applied field de-twin to align their moments with the external magnetic field. This movement results in a macroscopic change in length, i.e. a strain. This unique mechanism of magnetic-field-induced reorientation has been observed in NiMnGa alloys [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF].

Fig. 1.1: The microstructure evolution due to magnetic-field-induced variant rearrangement [START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF].

The other mechanism to achieve giant magnetic-field-induced strain is a structural transition under an external magnetic field. Such structural transitions comprise the transition from the paramagnetic parent phase to ferromagnetic martensite, observed in Fe-Mn-Ga alloys [START_REF] Zhu | Magnetic-field-induced transformation in FeMnGa alloys[END_REF], and the reverse transition from antiferromagnetic martensite to the ferromagnetic parent phase, observed in Ni-Mn-In(Sn) alloys [START_REF] Hernando | Thermal and magnetic field-induced martensite-austenite transition in Ni(50.3)Mn(35.3)Sn(14.4) ribbons[END_REF][START_REF] Krenke | Martensitic transitions and the nature of ferromagnetism in the austenitic and martensitic states of Ni-Mn-Sn alloys[END_REF]. Since the external magnetic field required to induce a structural transition is much larger than that required to induce the reorientation of martensite variants, the NiMnGa Heusler alloys are preferable for application in practical actuators and sensors.

Bulk NiMnGa Heusler alloys

NiMnGa alloys are typical Heusler alloys and exhibit magnetic-field-induced strains as large as 6-12% at frequencies of the order of kHz [START_REF] Sozinov | 12% magnetic field-induced strain in Ni-Mn-Ga-based non-modulated martensite[END_REF][START_REF] Murray | 6% magnetic-field-induced strain by twin-boundary motion in ferromagnetic Ni--Mn--Ga[END_REF], even under relatively low bias magnetic fields.
However, the phase constituents, the crystal structures of the various phases, the crystallographic features and the mechanical properties of Ni-Mn-Ga alloys are highly sensitive to chemical composition and temperature. A deep and complete understanding of NiMnGa alloys is crucial to dealing with the magnetic-field-induced reorientation of NiMnGa martensite variants. Their crystal structures, crystallographic features and some specific properties are introduced here.

Phase constituents and their crystal structures of NiMnGa alloys

NiMnGa alloys have an L21 structure in the austenite state, which is based on the bcc structure and consists of four interpenetrating face-centered cubic (FCC) sublattices, as shown for Ni2MnGa in Fig. 1.3(a). When the temperature is decreased, Ni2MnGa and off-stoichiometric Ni-Mn-Ga Heusler alloys undergo a martensitic transformation and transform to the L10 tetragonal structure at low-enough Ga concentrations, as shown in Fig. 1.3(b) [START_REF] Pons | Structure of the layered martensitic phases of Ni-Mn-Ga alloys[END_REF][START_REF] Pons | Crystal structure of martensitic phases in Ni-Mn-Ga shape memory alloys[END_REF][START_REF] Banik | Structural studies of Ni_{2+x}Mn_{1-x}Ga by powder x-ray diffraction and total energy calculations[END_REF]. Modulated structures other than the tetragonal structure can also be found in the martensite state, especially at higher Ga concentrations. The most common crystal structures of the modulated martensites are the 5M and 7M modulated structures (also denoted 10M and 14M), as shown in Fig. 1.3(c) and Fig. 1.3(d) [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Righi | Commensurate and incommensurate "5M" modulated crystal structures in Ni-Mn-Ga martensitic phases[END_REF][START_REF] Righi | Incommensurate modulated structure of the ferromagnetic shape-memory Ni2MnGa martensite[END_REF]. From Fig. 1.3(c) and Fig. 1.3(d), the generated modulations can be seen for the 5M and 7M cases; the letter 'M' refers to the monoclinic symmetry resulting from the distortion associated with the modulation. Although this particular example is given for Ni2MnGa, similar modulated states are observed in off-stoichiometric martensitic Heusler alloys incorporating other Z (Z = Ga, In, Sn) elements as well. The crystal structures of martensite given here are only typical examples, since for individual alloys the structure parameters of the martensite depend on the particular composition.

Twin boundaries of martensites in NiMnGa alloys

For the shape memory performance, a low twinning stress is the key property of the ferromagnetic shape memory alloys, since magnetically induced rearrangement is mediated by twin boundary motion. In the 5M and 7M modulated NiMnGa martensites, magnetically induced rearrangement can be realized by the motion of Type I or Type II twin boundaries. However, Type I twin boundaries exhibit a strongly temperature-dependent twinning stress, typically of about 1 MPa at room temperature.
Type II twin boundaries exhibit a much lower and almost temperature-independent twinning stress of 0.05-0.3 MPa [START_REF] Wang | Twinning stress in shape memory alloys: Theory and experiments[END_REF][START_REF] Chulist | Diffraction study of bending-induced polysynthetic twins in 10M modulated Ni-Mn-Ga martensite[END_REF][START_REF] Heczko | Different microstructures of mobile twin boundaries in 10M modulated Ni-Mn-Ga martensite[END_REF][START_REF] Chulist | Characterization of mobile type I and type II twin boundaries in 10M modulated Ni-Mn-Ga martensite by electron backscatter diffraction[END_REF]. Therefore, Type II twin boundaries are preferred for achieving giant magnetic-field-induced strain at a lower external magnetic field.

Orientation relationships between martensite variants in NiMnGa alloys

Clarification of the orientation relationships between the martensitic variants is quite helpful for understanding the rearrangement of martensitic variants and for finding a training process or loading scheme that will eliminate or control the twin boundaries. To date, the orientation relationships between the martensite variants of both non-modulated and modulated martensite have been successfully determined by electron backscatter diffraction (EBSD) [START_REF] Cong | Microstructural and crystallographic characteristics of interpenetrating and non-interpenetrating multiply twinned nanostructure in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF][START_REF] Cong | Experiment and theoretical prediction of martensitic transformation crystallography in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF] orientation determination, using accurate crystal structure information. For NM martensite, it has been shown that, within one plate, there are two martensite variants related by a compound twin relationship.

Orientation relationship of the martensitic transformation in NiMnGa alloys

The martensitic transformation is a diffusionless, displacive phase transformation proceeding under a specific orientation relationship between the parent phase and the product phase. Determination of the orientation relationship during the martensitic transformation enables us not only to predict the microstructural configuration of the martensite variants and their crystallographic correlation, but also to control the microstructure of the martensite via thermal treatment. Depending on the chemical composition, there are three possible transformation sequences for NiMnGa alloys: A-5M-7M-NM, A-7M-NM and A-NM. Thus, the martensitic transformation may result in three different types of martensite (i.e. 5M, 7M, and NM martensite), which also depends on the chemical composition. Up to now, Li et al. [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF] have determined the orientation relationship between austenite and the incommensurate 7M modulated martensite in bulk Ni-Mn-Ga alloys.

NiMnGa thin films

Shape memory materials in thin-film form have appeared with the advancement of fabrication technology, whereby shape memory alloys are deposited directly onto micromachined materials or as stand-alone thin films. Micro-actuation models taking advantage of magnetic-field-induced reorientation have been proposed for NiMnGa thin films. Magnetically induced reorientation in a constrained NiMnGa film has been reported by Thomas et al. [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stress induced martensite in epitaxial Ni-Mn-Ga films deposited on MgO(001)[END_REF].
Driven by a magnetic field, such films are bound to have the advantages of fast response and high-frequency operation. Since Ni-Mn-Ga thin films are promising candidates for magnetic sensors and actuators in MEMS, they have attracted considerable attention, focused on the fabrication and release of freestanding films, the characterization of crystallographic features and microstructure, as well as the martensitic transformation of NiMnGa thin films.

Fabrication process

The fabrication method needs to tune both microstructure and composition well, since the microstructure and chemistry of NiMnGa films affect the phase constituents, the mobility of the martensite variants and the critical stress. To date, a variety of techniques, such as sputtering, melt-spinning, pulsed laser deposition, molecular beam epitaxy and flash evaporation, have been used to deposit NiMnGa thin films [START_REF] Dong | Shape memory and ferromagnetic shape memory effects in single-crystal Ni[sub 2]MnGa thin films[END_REF][START_REF] Castano | Structure and thermomagnetic properties of polycrystalline Ni--Mn--Ga thin films[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF][START_REF] Dong | Molecular beam epitaxy growth of ferromagnetic single crystal (001) Ni[sub 2]MnGa on (001) GaAs[END_REF][START_REF] Hakola | Pulsed laser deposition of NiMnGa thin films on silicon[END_REF]. Via sputtering, it is easy to tailor the composition, which is a critical parameter for many desired properties related to actuation and the shape memory effect. For instance, using co-sputtering, the composition of NiMnGa can be finely tuned by adding Ni or Mn to vary the overall composition. Because the NiMnGa thin film is found to show a ~3-5% increase in Ni with respect to the target, and a corresponding decrease in Mn and Ga depending upon the deposition parameters, the desired thin-film composition can also be obtained by appropriate adjustment of the target alloy composition. Significant texturation, through reducing the number of variant colonies or groups in the film, can be realized by epitaxial growth. The preparation of epitaxial NiMnGa thin films by magnetron sputtering on single-crystal substrates such as MgO, Al2O3 and GaAs [START_REF] Hakola | Ni-Mn-Ga films on Si, GaAs and Ni-Mn-Ga single crystals by pulsed laser deposition[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF] has been performed. To date, the procedure to fabricate continuous films with homogeneous chemical composition and controllable thickness has been parameterized [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Yeduru S R, Backen | Microstructure of free-standing epitaxial Ni-Mn-Ga films before and after variant reorientation[END_REF][START_REF] Jetta | Phase transformations in sputtered Ni-Mn-Ga magnetic shape memory alloy thin films[END_REF]. For instance, Heczko et al.
[START_REF] Heczko | Epitaxial Ni-Mn-Ga films deposited on SrTiO[sub 3] and evidence of magnetically induced reorientation of martensitic variants at room temperature[END_REF] showed that thin films deposited on an SrTiO3 substrate exhibited an epitaxial structure with a twinned orthorhombic martensite. Khelfaoui et al. [START_REF] Khelfaoui | A fabrication technology for epitaxial Ni-Mn-Ga microactuators[END_REF] reported epitaxial thin-film deposition on an MgO (100) substrate using DC magnetron sputtering at a substrate temperature of 350 °C.

Phase constituents, microstructural and crystallographic features

In spite of the progress made in studying epitaxial NiMnGa alloys, the microstructural and crystallographic characterization of the produced films has been challenging, due to the specific constraints from the substrate, the specific geometry of the film, and the ultra-fineness of the microstructural constituents. Many pioneering efforts [34, 42-44, 55, 77, 81-88] have been made to characterize the phase constituents and the microstructural and crystallographic features, in order to advance the understanding of their physical and mechanical behavior. Significant scientific and technical challenges nevertheless remain.

Crystal structure of the phases in epitaxial NiMnGa thin films

X-ray diffraction (XRD) [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stray-Field-Induced Actuation of Free-Standing Magnetic Shape-Memory Films[END_REF] was first used to analyze the phase constituents and texture components of epitaxial NiMnGa films. Multiple phases (austenite, modulated martensite and non-modulated martensite) have been identified in most of the epitaxial NiMnGa films, and strong textures of each phase have been revealed. For austenite and the non-modulated martensite, which have relatively simple crystal structures, the lattice constants have been unambiguously determined from the limited number of XRD reflection peaks [START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF].
For the modulated martensite (7M or 14M), the maximum attainable number of observed reflection peaks has in most cases not been sufficient [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], so that a complete lattice constant determination has not been possible. Without information on the monoclinic angle, the modulated martensite has to be simplified as pseudo-orthorhombic, rather than the monoclinic structure that has been identified in the bulk materials [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. However, it is commonly accepted that 7M (or 14M) modulated NiMnGa martensite is of monoclinic crystal structure, as in bulk NiMnGa materials [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Righi | Commensurate and incommensurate "5M" modulated crystal structures in Ni-Mn-Ga martensitic phases[END_REF][START_REF] Righi | Incommensurate modulated structure of the ferromagnetic shape-memory Ni2MnGa martensite[END_REF]. Generally, to describe the lattice parameters of 7M martensite, two different coordinate reference systems are frequently used in the literature, i.e. an orthogonal coordinate system attached to the original cubic austenite lattice and a non-orthogonal coordinate system attached to the conventional monoclinic Bravais lattice [27, 28, 63-65, 72, 73]. Since the monoclinic angle is very close to 90°, the modulated structure may be approximated as a pseudo-orthorhombic structure under the former scheme [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF].
Fig. 1.4(a) illustrates the settings of the cubic coordinate system for the austenite, the pseudo-orthorhombic coordinate system [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF] for the 14M modulated martensite under the adaptive phase model [START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], and the monoclinic coordinate system for the 7M modulated martensite [START_REF] Li | Twin relationships of 5M modulated martensite in Ni-Mn-Ga alloy[END_REF][START_REF] Li | Evidence for a monoclinic incommensurate superstructure in modulated martensite[END_REF][START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF]. It is seen from Fig. 1.4(b) that the deviations of the respective monoclinic basis vectors (amono, bmono and cmono) from the corresponding directions in the cubic coordinate system are very small (ranging from 1° to 3°), depending on the lattice constants of materials with different chemical compositions. The two settings each have their own disadvantages. On the one hand, a direct atomic or lattice correspondence between the austenite structure and the martensite structure is not conveniently accessible in the monoclinic coordinate system. On the other hand, in the pseudo-orthorhombic setting the basis vectors shown in Fig. 1.4(a) are not true lattice vectors, because the basis vectors of the original cubic cell become irrational in the newly formed structure after the martensitic transformation. This makes the precise description of the twin relationships of martensitic variants using well-defined twinning elements (twinning plane, twinning direction, etc.) particularly difficult, especially when there are irrational twinning elements, as shown in Table 1.2, which lists the twinning elements in both the 14M-adaptive and the 7M monoclinic crystal reference systems. It can be clearly seen that the precise twinning elements of the Type II twin and the compound twin have not been determined in the 14M-adaptive crystal system. Moreover, the pseudo-orthorhombic cell does not possess the same symmetry group as the monoclinic Bravais lattice. This affects the determination of the number of martensitic variants and of the orientation relationships between adjacent variants.

Microstructure and crystallographic features of epitaxial NiMnGa thin films

For epitaxial NiMnGa thin films, microstructure examinations have so far been made mainly using scanning electron microscopy (SEM), atomic force microscopy (AFM) and high resolution scanning tunneling microscopy (STM). The crystallographic aspects have been deduced by examining the surface corrugations of the films. Microstructure analyses by SEM revealed that the martensite is plate-shaped and organized in groups, as in the bulk material, but much finer. An important feature is that there are two kinds of martensite groups in terms of secondary electron (SE) image contrast. One shows relatively homogeneous contrast and consists of long strips running parallel to the two edge directions (MgO[100] and MgO[010]) of the substrate.
The other shows relatively high contrast and contains plates whose length direction runs roughly at 45° with respect to the substrate edges [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF]. Close observations using atomic force microscopy [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF] revealed that the different SE contrasts correspond to different surface corrugations of the films: the zones of low relative SE contrast correspond to low surface corrugation, whereas the zones of high SE contrast correspond to high surface relief. From the height profile of the surface relief in the high surface corrugation zones, which are considered to consist of 7M martensite, a twin relationship between adjacent martensite plates has been deduced: the so-called a-c twin relationship [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF]. It means that the pseudo-orthorhombic unit cells of neighboring plates share a common b axis. The plate boundaries correspond to (101)orth planes and are inclined roughly 45° to the substrate surface. Later, high resolution scanning tunneling microscopy (STM) examination indicated that within the 7M martensite plates there are still fine periodic corrugations [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF]. These fine corrugations have been considered to be related to the structure modulation of the 7M martensite [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF], and the 7M modulated martensite has preferentially been considered to be composed of nano-twinned NM martensite (the so-called adaptive phase), as the lattice constants of the three phases fulfill the relation proposed by the adaptive phase theory (a14M = cNM + aNM − aA, aA = b14M, c14M = aNM [START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]). For the low contrast zones, composed of strips running parallel to the substrate edges (MgO[100] and MgO[010]), the nature of the martensite has not been clearly revealed. Some consider these zones to consist of NM tetragonal martensite [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF], whereas others regard them as 7M martensite [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF].
The neighboring plates were deduced to be a-c twin related as well, with the plate interfaces ((101)orth) perpendicular to the substrate surface, according to the epitaxial relationship between the MgO substrate and the austenite and the Bain relationship between the austenite and the NM martensite. As for the substructure within the NM plates, no significant corrugations have been detected, so the microstructural and crystallographic features within the NM plates remain unclear. Recently, further attempts have been made to reveal the characteristic microstructures of NiMnGa films [START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF]. Two types of twins have been proposed, namely type I and type II twins, and built as paper models. Apparently, the type I twin is the so-called a-c twin with (101)A as the twinning plane. However, it is not easy to correlate the proposed "type II twin" with the twin relationships published for NiMnGa in the literature, as no clear crystallographic description (such as twinning plane and twinning direction) has been given [START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF]. It has been found that the type I twins occur mainly in the low surface corrugation zones (the so-called Y pattern) and the type II twins mainly in the high surface corrugation zones (the so-called X pattern) [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. Cross-section observations of the films [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF] have evidenced that in the low surface corrugation zones there are two interface orientations, one perpendicular and the other parallel to the substrate surface. In the high corrugation zones, there is only one interface orientation, tilted by roughly 45° to the substrate surface [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. The different occurrences of the different twins in the two variant zones are attributed to the constraints imposed by the substrate [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. So far, the short intra-plate interfaces in NiMnGa films have not been identified, and the orientation relationship between the variants connected by such interfaces is not known. Clearly, due to the lack of a direct correlation between the microstructure and the crystallographic orientations of the martensite variants (either 7M or NM martensite), and the inaccuracy of the orientation relationship between the parent austenite and the martensite, precise information on the crystallographic configuration of the martensite lamellae inside each martensite plate, and further on the crystallographic organization of the martensite plates in each colony or group, especially the lamellar interfaces and plate interfaces, is still unavailable. In consequence, the possible number of variants in one variant group, their precise orientation relationships and their interface planes remain unclear, which hinders the uncovering of the microstructural singularities of NiMnGa films with respect to their counterparts in bulk materials.
Content of the present work

Based on the state of the art outlined above, epitaxial Ni-Mn-Ga thin films were produced by DC magnetron sputtering and then characterized by X-ray diffraction (XRD) and by electron backscatter diffraction in a scanning electron microscope (SEM-EBSD). The scientific aim of the present work is first to produce qualified NiMnGa films with designed chemical compositions and film thicknesses, and then to clarify the crystal structures of the constituent phases, the configurations of the martensite variants and their orientation correlations. The main contents of the present work are the following:

(1) Produce NiMnGa thin films with designed chemical compositions and film thicknesses by magnetron sputtering, by optimizing the deposition parameters such as sputtering power, substrate materials and seed layer materials.

Chapter 2 Experimental and crystallographic calculations

In this chapter, the techniques used to prepare and characterize the Ni-Mn-Ga thin films are summarized. The essential crystallographic calculation methods are also described in detail.

Sample preparation

Preparation of sputtering target

In order to obtain NiMnGa thin films with different phases at ambient temperature, NiMnGa alloys with various nominal compositions were prepared by arc melting from high purity elements Ni (99.97 wt.%), Mn (99.9 wt.%) and Ga (99.99 wt.%). The ingots with the designed nominal compositions were arc-melted in a water-cooled copper crucible under argon atmosphere. In order to obtain ingots with homogeneous composition, each ingot was re-melted four times and electromagnetic stirring was applied during the melting process. To further reduce the compositional inhomogeneity of the as-cast ingots, the ingots were homogenized at 900 °C for 12 h in a vacuum quartz tube and then furnace-cooled.

Magnetron sputtering is a physical vapor deposition technique in which ions accelerated under a high voltage bombard the target surface and eject atoms towards the substrate to be coated, as illustrated in Fig. 2.2(a). In a typical deposition, a substrate with a clean surface is attached to the substrate holder. Sputtering is carried out under high vacuum to avoid contamination of the film; the base pressure is pumped down to 10⁻⁶ to 10⁻⁹ Torr using turbomolecular pumps. When a very high negative potential is applied to the cathode target, a plasma is generated. The argon atoms in the plasma are ionized and accelerated towards the target at very high velocities. Through the resulting collision cascade, sputtered atoms with sufficient velocity are deposited on the substrate surface. The residual kinetic energy of the target atoms is converted into thermal energy, which increases their mobility and facilitates surface diffusion; a continuous network of atoms is thus formed, resulting in a high quality film. A magnetic field produced by a permanent magnet is used to lengthen the electron paths and hence to increase the number of ionized inert gas atoms. It also deflects the accelerated ions, which results in ring-like erosion zones on the target surface. In the present thesis, a custom-built multi-source magnetron sputtering system (JZCK-400DJ) was used for thin film deposition. As illustrated in Fig. 2.2(b), the system has three sputtering sources arranged in confocal geometry and is capable of multilayer deposition, co-sputtering and deposition with a seed layer or cap layer.
The base pressure before deposition is pumped below 9.0×10⁻⁵ Pa. In order to obtain continuous NiMnGa thin films, the working Ar pressure is fixed at 0.15 Pa during the whole deposition process. The deposition parameters, such as sputtering power, substrate material, substrate temperature, deposition time and seed layer, have been optimized by examining their effects on the film thickness, film composition, phase constituents and microstructure.

(Fig. 2.3(b): the X-ray diffractometer equipped with a curved position sensitive detector. Here, ω is the angle between the incident beam and the sample surface, θ is the angle between the incident beam and the diffracting plane, and 2θ is the angle between the incident and the reflected beams; ψ is the tilt angle with respect to the substrate surface and φ is the rotation angle about the film normal, with φ = 0° corresponding to the non-rotated sample position.)

Characterization

X-ray diffraction

X-ray diffraction is a well-developed technique for determining the crystal structure and the macroscopic crystallographic features of crystalline materials. In the present study, three X-ray diffractometers were employed to determine both the crystal structures and the macroscopic crystallographic orientations. All are four-circle X-ray diffractometers with a cobalt cathode source (Co Kα, λ = 0.178897 nm). For the determination of the crystal structures of the phases in the NiMnGa thin films, two X-ray diffractometers were employed: a conventional diffractometer with a point detector (Fig. 2.3a) and a diffractometer with a curved position sensitive (CPS) detector (Fig. 2.3b). Compared with conventional diffractometers, a diffractometer with a curved position sensitive detector offers fast data collection over a wide 2θ range. As illustrated in Fig. 2.3(b), the reflection-mode Debye-Scherrer geometry used with CPS detectors is unique in the sense that the angle of the incident X-ray beam is kept fixed with respect to the normal of the flat diffracting sample, while the reflected beams are measured at multiple angles with respect to the sample normal. Considering that the thin films possess in-plane texture, the diffractometer with the curved position sensitive detector was used to obtain the maximum number of diffraction peaks from the NiMnGa films.

SEM and SEM-EBSD

The microstructure, composition and microscopic crystallographic features of the NiMnGa thin films were investigated with a field emission gun scanning electron microscope (FEG-SEM, JEOL-6500F). The composition of the NiMnGa films was measured with the energy dispersive X-ray spectrometry system (EDS, Bruker) attached to this SEM. The morphology and microstructure of the NiMnGa thin films were characterized by secondary electron and backscattered electron imaging. The microstructure and the microscopic crystallographic orientations of the NiMnGa martensite variants were analyzed using the same SEM equipped with an EBSD acquisition camera (Oxford HKL) and acquisition software (Oxford Channel 5). The EBSD patterns from the martensite variants were manually acquired using Channel 5 Flamenco's interactive option. The samples for SEM-EBSD analysis were electrolytically polished with a solution of 20 vol.% HNO3 in CH3OH at room temperature.

TEM

A Philips CM 200 TEM with 200 kV accelerating voltage and a LaB6 filament was used to observe the fine martensite variants in the NiMnGa thin films. A JEOL-2100F HR-TEM with a field-emission gun was used to investigate the atomic-level stacking faults between variants within the martensite plates.
Specimens for TEM investigation were cut out of the freestanding NiMnGa thin films and further thinned by twin-jet electrolytic polishing with an electrolyte of 20 vol.% HNO3 in CH3OH at ambient temperature.

Crystallographic calculations

With the precise crystallographic orientations of the martensite variants, the orientation relationships between the martensite variants and their interfaces, as well as the orientation relationship between austenite and martensite, can be calculated. The following presents the calculation methods used in the present work.

Fundamentals and definitions

Reference systems

The reference systems are chosen in view of the characteristics of the crystal structures of the phases concerned in the present study (austenite, modulated martensite and non-modulated martensite), as given in Table 2. A lattice vector in reciprocal space can be expressed through the lattice vectors of the direct space:

$$\mathbf{a}^{*}=\frac{\mathbf{b}\times\mathbf{c}}{\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})},\quad \mathbf{b}^{*}=\frac{\mathbf{c}\times\mathbf{a}}{\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})},\quad \mathbf{c}^{*}=\frac{\mathbf{a}\times\mathbf{b}}{\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})} \quad (2\text{-}1)$$

For the orthonormal crystal coordinate systems, the basis vectors are three orthogonal unit vectors.

Metric tensors

The metric tensor contains the same information as the set of crystal lattice parameters, but in a form that allows direct computation of the dot product between two vectors. The computational and algebraic aspects of the mutually reciprocal bases can be conveniently expressed in terms of their metric tensors. The matrix of the metric tensor of the direct basis G, or briefly the direct metric tensor, is

$$G=\begin{pmatrix}\mathbf{a}\cdot\mathbf{a} & \mathbf{a}\cdot\mathbf{b} & \mathbf{a}\cdot\mathbf{c}\\ \mathbf{b}\cdot\mathbf{a} & \mathbf{b}\cdot\mathbf{b} & \mathbf{b}\cdot\mathbf{c}\\ \mathbf{c}\cdot\mathbf{a} & \mathbf{c}\cdot\mathbf{b} & \mathbf{c}\cdot\mathbf{c}\end{pmatrix}=\begin{pmatrix}a^{2} & ab\cos\gamma & ac\cos\beta\\ ab\cos\gamma & b^{2} & bc\cos\alpha\\ ac\cos\beta & bc\cos\alpha & c^{2}\end{pmatrix} \quad (2\text{-}2,\ 2\text{-}3)$$

The corresponding reciprocal metric tensor is the inverse of the direct one,

$$G^{*}=G^{-1}, \quad (2\text{-}4)$$

whose explicit entries involve the cell volume V, with V² = a²b²c²(1 − cos²α − cos²β − cos²γ + 2 cosα cosβ cosγ).

With the metric tensors, vectors can be easily converted between direct and reciprocal space according to Eq. (2-5). Let [u v w]ᵀ be a direction vector in direct space, and [h k l]ᵀ the Miller indices of a plane in direct space (i.e. a direction vector in reciprocal space); then

$$[u\ v\ w]^{T}=G^{*}\,[h\ k\ l]^{T} \quad\text{and}\quad [h\ k\ l]^{T}=G\,[u\ v\ w]^{T}. \quad (2\text{-}5)$$

In the present study, for the austenite the metric tensors in direct and reciprocal space are

$$G_{A}=\mathrm{diag}(a_{A}^{2},\,a_{A}^{2},\,a_{A}^{2}),\qquad G_{A}^{*}=\mathrm{diag}(a_{A}^{-2},\,a_{A}^{-2},\,a_{A}^{-2}). \quad (2\text{-}6)$$

For the NM martensite,

$$G_{NM}=\mathrm{diag}(a_{NM}^{2},\,a_{NM}^{2},\,c_{NM}^{2}),\qquad G_{NM}^{*}=\mathrm{diag}(a_{NM}^{-2},\,a_{NM}^{-2},\,c_{NM}^{-2}). \quad (2\text{-}7)$$

For the 7M modulated martensite, with the monoclinic angle β between the a and c axes,

$$G_{7M}=\begin{pmatrix}a^{2} & 0 & ac\cos\beta\\ 0 & b^{2} & 0\\ ac\cos\beta & 0 & c^{2}\end{pmatrix},\qquad G_{7M}^{*}=G_{7M}^{-1}. \quad (2\text{-}8,\ 2\text{-}9)$$
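To make the use of Eqs. (2-2) to (2-5) concrete, a minimal Python sketch is given below. It is only an illustration of the formalism: the function names are our own and the 7M-like lattice constants are round placeholder values, not the refined constants of this work.

```python
import numpy as np

def metric_tensor(a, b, c, alpha, beta, gamma):
    """Direct metric tensor G of Eqs. (2-2)/(2-3); cell angles in degrees."""
    al, be, ga = np.radians([alpha, beta, gamma])
    return np.array([
        [a * a,              a * b * np.cos(ga), a * c * np.cos(be)],
        [a * b * np.cos(ga), b * b,              b * c * np.cos(al)],
        [a * c * np.cos(be), b * c * np.cos(al), c * c],
    ])

# Monoclinic 7M-like cell (placeholder lattice constants, in nm)
G = metric_tensor(0.43, 0.55, 2.95, 90.0, 93.3, 90.0)
G_star = np.linalg.inv(G)     # reciprocal metric tensor, Eq. (2-4)

# Direction [u v w] parallel to the normal of the (h k l) plane, Eq. (2-5)
hkl = np.array([1.0, 0.0, 1.0])
uvw = G_star @ hkl
print(uvw / np.abs(uvw).max())
```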
Coordinate transformation

As presented in Fig. 2.6, the sample coordinate system and the Bravais lattice cell are related by two coordinate transformations. The first transformation is from the sample coordinate system to the orthonormal crystal coordinate system. Normally, this transformation is expressed by the Euler angles (φ1, Φ, φ2) in Bunge notation [START_REF] Bunge | The role of the inversion centre in texture analysis[END_REF][START_REF] Bunge | Texture analysis in materials science-mathematical methods[END_REF], which can be directly determined by the EBSD system. According to Eq. (2-10), the transformation matrix can be constructed from the acquired Euler angles:

$$M_{E}=\begin{pmatrix}\cos\varphi_{1}\cos\varphi_{2}-\sin\varphi_{1}\sin\varphi_{2}\cos\Phi & \sin\varphi_{1}\cos\varphi_{2}+\cos\varphi_{1}\sin\varphi_{2}\cos\Phi & \sin\varphi_{2}\sin\Phi\\ -\cos\varphi_{1}\sin\varphi_{2}-\sin\varphi_{1}\cos\varphi_{2}\cos\Phi & -\sin\varphi_{1}\sin\varphi_{2}+\cos\varphi_{1}\cos\varphi_{2}\cos\Phi & \cos\varphi_{2}\sin\Phi\\ \sin\varphi_{1}\sin\Phi & -\cos\varphi_{1}\sin\Phi & \cos\Phi\end{pmatrix} \quad (2\text{-}10)$$

The second transformation is from the orthonormal crystal coordinate system to the Bravais lattice cell. In general, let [u v w] be a vector in the old coordinate system (a b c) and [U V W] the same vector expressed in the new coordinate system (i j k). Since the coordinate transformation does not change the vector itself,

$$u\,\mathbf{a}+v\,\mathbf{b}+w\,\mathbf{c}=U\,\mathbf{i}+V\,\mathbf{j}+W\,\mathbf{k}. \quad (2\text{-}11)$$

The coordinate transformation is then made by

$$[U\ V\ W]^{T}=M_{C}\,[u\ v\ w]^{T}. \quad (2\text{-}12)$$

If the transformation is from the orthonormal crystal coordinate system to a triclinic Bravais lattice cell,

$$M_{C}=\begin{pmatrix}a & b\cos\gamma & c\cos\beta\\ 0 & b\sin\gamma & c(\cos\alpha-\cos\beta\cos\gamma)/\sin\gamma\\ 0 & 0 & cV'/\sin\gamma\end{pmatrix}, \quad (2\text{-}13)$$

where V′ = (1 − cos²α − cos²β − cos²γ + 2 cosα cosβ cosγ)^(1/2).

In the present study there are three phases: austenite, NM martensite and 7M martensite. The transformation matrices from the orthonormal coordinate system to their Bravais crystal bases are given in Eqs. (2-14) to (2-16). For the austenite (a = b = c = a_A, α = β = γ = 90°), the coordinate transformation matrix is

$$M_{C}^{A}=\mathrm{diag}(a_{A},\,a_{A},\,a_{A}). \quad (2\text{-}14)$$

For the NM martensite (a = b = a_NM, c = c_NM, α = β = γ = 90°),

$$M_{C}^{NM}=\mathrm{diag}(a_{NM},\,a_{NM},\,c_{NM}), \quad (2\text{-}15)$$

and for the 7M martensite the matrix follows from Eq. (2-13) with the monoclinic angle of the 7M lattice. (2-16)
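The two transformations of Eqs. (2-10) and (2-13) translate directly into code. The sketch below (our own naming; it follows the Bunge convention quoted above) returns the matrices used throughout the orientation calculations; the printed example uses a placeholder cubic cell.

```python
import numpy as np

def euler_matrix(phi1, Phi, phi2):
    """Bunge-convention orientation matrix M_E of Eq. (2-10); angles in degrees."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1 = np.cos(p1), np.sin(p1)
    cP, sP = np.cos(P), np.sin(P)
    c2, s2 = np.cos(p2), np.sin(p2)
    return np.array([
        [ c1*c2 - s1*s2*cP,  s1*c2 + c1*s2*cP, s2*sP],
        [-c1*s2 - s1*c2*cP, -s1*s2 + c1*c2*cP, c2*sP],
        [ s1*sP,            -c1*sP,            cP   ],
    ])

def structure_matrix(a, b, c, alpha, beta, gamma):
    """M_C of Eq. (2-13): Bravais basis vectors expressed in the orthonormal frame."""
    al, be, ga = np.radians([alpha, beta, gamma])
    v = np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                + 2 * np.cos(al) * np.cos(be) * np.cos(ga))
    return np.array([
        [a, b * np.cos(ga), c * np.cos(be)],
        [0, b * np.sin(ga), c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)],
        [0, 0,              c * v / np.sin(ga)],
    ])

# Eq. (2-14): for the cubic austenite the structure matrix reduces to a_A * I
print(structure_matrix(0.58, 0.58, 0.58, 90, 90, 90))   # a_A = 0.58 nm, placeholder
```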
The orientation of each variant is expressed by the Euler angles (φ1, Φ, φ2) in Bunge's convention [START_REF] Bunge | The role of the inversion centre in texture analysis[END_REF]. The rotations relate the orthonormal sample coordinate system to the orthonormal crystal coordinate system, as described in the coordinate transformation part (2.3.1.3). In the present study, this method was employed to calculate the misorientation between two martensite variants. If the orientations of variant A and variant B are specified by the rotation matrices G_A and G_B, the misorientation Δg going from variant A to variant B can be defined as follows:

$$\Delta g=S_{i}\,G_{A}^{-1}\,G_{B}\,S_{j}, \quad (2\text{-}17)$$

where S_i and S_j run over the rotational symmetry elements of the respective crystal lattices (2-18). The misorientation angle θ and the rotation axis d = (d1, d2, d3) can then be calculated:

$$\theta=\arccos\!\left(\frac{g_{11}+g_{22}+g_{33}-1}{2}\right). \quad (2\text{-}19)$$

(1) If θ = 180°:

$$d_{i}=\pm\sqrt{\frac{g_{ii}+1}{2}},\quad i=1,2,3, \quad (2\text{-}20)$$

with, by convention, d_m = max|d_i| taken positive and sgn(d_i) = sgn(g_im).

(2) If θ = 0°, the rotation axis is undetermined and may be taken as d = (1, 0, 0). (2-21)

(3) If θ ≠ 0° and θ ≠ 180°:

$$\mathbf{d}=\left(\frac{g_{23}-g_{32}}{2\sin\theta},\ \frac{g_{31}-g_{13}}{2\sin\theta},\ \frac{g_{12}-g_{21}}{2\sin\theta}\right). \quad (2\text{-}22)$$

The misorientation relationship, expressed in rotation axis/angle pairs, can be used to determine the twinning type and certain twinning elements. According to the definition of a twin orientation relationship, there is at least one 180° rotation for a twin. As shown in Table 2.3, if there is one 180° rotation and the Miller indices of the plane normal to the rotation axis are rational, the twin is of type I and K1 is identified. If there is one 180° rotation and the Miller indices of the rotation axis are rational, the twin is of type II and η1 is identified. If there are two independent 180° rotations and the planes normal to the rotation axes are also rational, the twin is a compound twin, with one rotation axis being the twinning direction and the other being the normal to the twinning plane [START_REF] Zhang | A general method to determine twinning elements[END_REF]. With the determined twin type and rotation axis, the other twinning elements can be calculated using the general method developed by our group [START_REF] Zhang | A general method to determine twinning elements[END_REF].

In addition, if the twin type and twinning elements are known, the rotation angle and rotation axis are also known, and the misorientation matrix can be constructed by Eq. (2-23). Suppose the rotation angle is θ and the rotation axis is the unit vector d = (d1, d2, d3)ᵀ expressed in the orthonormal crystal coordinate system; then

$$M_{R}=\begin{pmatrix}\cos\theta+d_{1}^{2}(1-\cos\theta) & d_{1}d_{2}(1-\cos\theta)-d_{3}\sin\theta & d_{1}d_{3}(1-\cos\theta)+d_{2}\sin\theta\\ d_{1}d_{2}(1-\cos\theta)+d_{3}\sin\theta & \cos\theta+d_{2}^{2}(1-\cos\theta) & d_{2}d_{3}(1-\cos\theta)-d_{1}\sin\theta\\ d_{1}d_{3}(1-\cos\theta)-d_{2}\sin\theta & d_{2}d_{3}(1-\cos\theta)+d_{1}\sin\theta & \cos\theta+d_{3}^{2}(1-\cos\theta)\end{pmatrix} \quad (2\text{-}23)$$
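A numerical sketch of the misorientation analysis of Eqs. (2-17), (2-19) and (2-22) is given below for two tetragonal variants. The symmetry set is the proper rotation group of the tetragonal lattice, and all names are our own; near θ = 180° the axis formula of Eq. (2-22) degenerates and Eq. (2-20) should be used instead.

```python
import numpy as np
from itertools import product

def rot(axis, theta_deg):
    """Rotation matrix about `axis` by theta (Eq. 2-23)."""
    t = np.radians(theta_deg)
    d = np.asarray(axis, float)
    d = d / np.linalg.norm(d)
    K = np.array([[0, -d[2], d[1]], [d[2], 0, -d[0]], [-d[1], d[0], 0]])
    return np.cos(t) * np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * np.outer(d, d)

# Proper rotations of the tetragonal lattice (point group 4/mmm): 8 elements
SYM_TET = [rot([0, 0, 1], 90 * k) for k in range(4)] + \
          [rot(ax, 180) for ax in ([1, 0, 0], [0, 1, 0], [1, 1, 0], [1, -1, 0])]

def min_misorientation(gA, gB):
    """Minimum rotation angle (deg) and axis between two variants, Eq. (2-17)."""
    best_theta, best_axis = 181.0, None
    for Si, Sj in product(SYM_TET, SYM_TET):
        dg = Si @ np.linalg.inv(gA) @ gB @ Sj
        theta = np.degrees(np.arccos(np.clip((np.trace(dg) - 1) / 2, -1, 1)))
        if theta < best_theta:
            # Eq. (2-22), up to the sign of the axis
            best_axis = np.array([dg[1, 2] - dg[2, 1],
                                  dg[2, 0] - dg[0, 2],
                                  dg[0, 1] - dg[1, 0]])
            best_theta = theta
    return best_theta, best_axis
```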
Stereographic projection and trace determination

The stereographic projection is a particular mapping that projects a sphere onto a plane; it is one of the projection methods used to calculate pole figures. To calculate the pole figure of crystal directions or plane normals, the vectors first have to be expressed in the macroscopic coordinate system set with respect to the equatorial plane (for example the sample coordinate system). For instance, for a plane normal vector V_P = [h k l]*, the corresponding vector V_S in the sample coordinate system can be calculated using Eq. (2-24):

$$\mathbf{V}_{S}=M_{E}\,M_{C}\,S_{i}\,G^{*}\,\mathbf{V}_{P}, \quad (2\text{-}24)$$

where M_E is the Euler angle matrix, M_C is the coordinate transformation matrix from the orthonormal crystal coordinate system to the Bravais lattice basis, S_i is a symmetry element of the crystal, and G* is the metric tensor in reciprocal space. The vector V_S = [t1, t2, t3]ᵀ is then normalized to a unit vector. As presented in Fig. 2.8a, the vector OP represents V_S in the sphere (V_S = OP). P is the intersection of the vector OP with the sphere and is defined as the pole of the plane (hkl). Let a line connect point P and the south pole S; the line PS intersects the equatorial plane at point P′, which is the stereographic projection of the pole P. On the equatorial plane, P is expressed by the polar angle ρ and the azimuth angle φ (ρ, φ), where

$$\rho=\arccos\!\left(\frac{\mathbf{ON}\cdot\mathbf{OP}}{\lVert\mathbf{ON}\rVert\,\lVert\mathbf{OP}\rVert}\right) \quad (2\text{-}25)$$

and

$$\varphi=\arccos\!\left(\frac{\mathbf{OE}\cdot\mathbf{OP'}}{\lVert\mathbf{OE}\rVert\,\lVert\mathbf{OP'}\rVert}\right). \quad (2\text{-}26)$$

The line HK perpendicular to the vector OP′ in the equatorial plane represents the trace of the plane (hkl). As shown in Fig. 2.9, the orthonormal reference frame is set such that the basis vectors coincide with the sample coordinate axes, so that the calculated traces can be compared directly with the observed microstructure.

Determination of the orientation relationship between austenite and martensite

Knowing the orientations of the martensite variants, the orientation of the austenite with respect to the sample coordinate system, expressed as a matrix, can be calculated with Eq. (2-28):

$$M_{E}^{A}=M_{E}^{m}\,M_{C}^{m}\,S_{j}^{m}\,\big(T^{A\rightarrow M}\big)^{-1}\,\big(S_{i}^{A}\big)^{-1}\,\big(M_{C}^{A}\big)^{-1}, \quad (2\text{-}28)$$

where M_E^m and M_C^m are the Euler angle matrix and the coordinate transformation matrix of the measured martensite variant, S_i^A and S_j^m are symmetry elements of the austenite and the martensite, and T^(A→M) is the transformation matrix corresponding to the assumed orientation relationship. For the NiMnGa alloy, the possible orientation relationships between austenite and martensite are shown in Table 2.4. If an orientation relationship between austenite and martensite is assumed, the orientation of the austenite can be derived with Eq. (2-28) from the measured orientations of the martensitic variants. A schematic illustration of the determination of the OR between austenite and martensite is shown in Fig. 2.10: if the distinct austenite orientations calculated from the observed variants have at least one orientation in common, it can be deduced that the assumed orientation relationship indeed exists, and the common orientation is the orientation of the austenite.

Table 2.4 Possible orientation relationships between austenite (A) and martensite (M):
Bain: (001)A || (001)M, [110]A || [100]M [96]
Kurdjumov-Sachs (K-S): (111)A || (101)M, [1-10]A || [11-1]M [97]
Nishiyama-Wassermann (N-W): (111)A || (101)M, [-211]A || [10-1]M [98, 99]
Pitsch: (110)A || (1-12)M, [1-10]A || [-11-1]M [100]
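The consistency check of Fig. 2.10 can also be expressed in a few lines of Python. This is a schematic rendering only: `T_AM` stands for the rotation encoding the assumed OR of Table 2.4, `sym` is the rotational symmetry set of the martensite, and the clustering tolerance is an arbitrary choice of ours.

```python
import numpy as np

def candidates(g_M, T_AM, sym):
    """Austenite orientations compatible with one measured variant g_M
    under the assumed OR rotation T_AM (schematic form of Eq. 2-28)."""
    return [g_M @ S @ np.linalg.inv(T_AM) for S in sym]

def common_parent(variant_gs, T_AM, sym, tol_deg=3.0):
    """Keep candidates from the first variant that recur (within tol) for
    every other variant; the survivor is the parent austenite orientation."""
    def ang(g1, g2):
        c = np.clip((np.trace(g1 @ g2.T) - 1) / 2, -1, 1)
        return np.degrees(np.arccos(c))
    survivors = candidates(variant_gs[0], T_AM, sym)
    for g in variant_gs[1:]:
        cs = candidates(g, T_AM, sym)
        survivors = [s for s in survivors if min(ang(s, c) for c in cs) < tol_deg]
    return survivors
```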
Displacement gradient tensor

It is well known that the martensitic transformation is a displacive transformation realized by coordinated atomic movements. As shown in Fig. 2.11, the transformation can be regarded as the deformation, by a certain lattice distortion, of a reference cell set in the austenite on the basis of the orientation relationship plane and direction into the corresponding reference cell of the martensite. The reference cells of austenite and martensite are built according to the orientation relationship ((hkl)A || (hkl)M, [uvw]A || [uvw]M); normally, the reference cell of the austenite is an orthogonal unit cell. To analyze the atomic displacements during the martensitic transformation, orthonormal coordinate systems (i-j-k) are fixed in both the austenite and the martensite. The lattice vectors of both austenite and martensite can then be expressed in the orthonormal coordinate system:

$$\mathbf{x}_{A}=x_{A}\,\mathbf{i},\qquad \mathbf{y}_{A}=y_{A}\,\mathbf{j},\qquad \mathbf{z}_{A}=z_{A}\,\mathbf{k} \quad (2\text{-}29)$$

$$\mathbf{x}_{M}=x_{x}\mathbf{i}+x_{y}\mathbf{j}+x_{z}\mathbf{k},\qquad \mathbf{y}_{M}=y_{x}\mathbf{i}+y_{y}\mathbf{j}+y_{z}\mathbf{k},\qquad \mathbf{z}_{M}=z_{x}\mathbf{i}+z_{y}\mathbf{j}+z_{z}\mathbf{k} \quad (2\text{-}30)$$

The displacement describing the lattice change from austenite to martensite is D = Δx + Δy + Δz, where

$$\Delta\mathbf{x}=(x_{x}-x_{A})\mathbf{i}+x_{y}\mathbf{j}+x_{z}\mathbf{k},\quad \Delta\mathbf{y}=y_{x}\mathbf{i}+(y_{y}-y_{A})\mathbf{j}+y_{z}\mathbf{k},\quad \Delta\mathbf{z}=z_{x}\mathbf{i}+z_{y}\mathbf{j}+(z_{z}-z_{A})\mathbf{k}. \quad (2\text{-}31)$$

Therefore, the displacement D can be rewritten as

$$\mathbf{D}=\big((x_{x}-x_{A})+y_{x}+z_{x}\big)\mathbf{i}+\big(x_{y}+(y_{y}-y_{A})+z_{y}\big)\mathbf{j}+\big(x_{z}+y_{z}+(z_{z}-z_{A})\big)\mathbf{k}=D_{x}\mathbf{i}+D_{y}\mathbf{j}+D_{z}\mathbf{k}. \quad (2\text{-}32)$$

According to its definition, the displacement gradient tensor with respect to the orthonormal coordinate system is then obtained by referring each displacement component to the corresponding austenite lattice dimension:

$$E=\begin{pmatrix}(x_{x}-x_{A})/x_{A} & y_{x}/y_{A} & z_{x}/z_{A}\\ x_{y}/x_{A} & (y_{y}-y_{A})/y_{A} & z_{y}/z_{A}\\ x_{z}/x_{A} & y_{z}/y_{A} & (z_{z}-z_{A})/z_{A}\end{pmatrix}. \quad (2\text{-}33)$$

The displacement gradient tensor expressed in the substrate coordinate frame is obtained by the frame change of Eq. (2-34), chaining the transformation matrices between the MgO substrate, the austenite and the Pitsch-related martensite with the crystal symmetry elements S_i (i = 1, 2, ...):

$$E_{MgO}=\big(M_{MgO}^{A}\,S_{i}\,M_{A}^{pitsch}\big)\,E\,\big(M_{MgO}^{A}\,S_{i}\,M_{A}^{pitsch}\big)^{-1}, \quad (2\text{-}34)$$

where the epitaxial MgO-austenite relation corresponds to a 45° rotation about the film normal,

$$M_{MgO}^{A}=\begin{pmatrix}\cos 45^{\circ} & \sin 45^{\circ} & 0\\ -\sin 45^{\circ} & \cos 45^{\circ} & 0\\ 0 & 0 & 1\end{pmatrix},\qquad M_{A}^{pitsch}=\begin{pmatrix}0 & \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\\ 1 & 0 & 0\\ 0 & -\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}\end{pmatrix}.$$

Experimental procedure

In the present study, the films were produced in the DC magnetron sputtering machine (JZCK-400DJ) with a base pressure below 9.0×10⁻⁵ Pa and the working pressure fixed at a low argon pressure (0.15 Pa), in order to obtain continuous films. For the Si substrates, the substrate temperature, target composition, sputtering power and deposition time used to produce the NiMnGa films are given in Fig. 3.1. The deposition times were chosen to ensure the same film thickness at all the sputtering powers used; the targeted film thickness is about 1 µm. The film deposited at 75 W was additionally annealed at 750 °C, 800 °C and 900 °C. Using the sputtering power optimized in these tests, the other deposition parameters were further refined on MgO substrates without and with a seed layer.

(Fig. 3.1: NiMnGa thin films deposited on Si substrates at various sputtering powers.) As presented in Fig. 3.1, the concentrations of Ni and Mn in the NiMnGa thin films increase slightly with the sputtering power, whereas the concentration of Ga decreases slightly; the changes are, however, not linear in the sputtering power.

Influence of annealing

The films deposited at 75 W were further annealed at different temperatures for 2 h.

Influence of seed layer

To obtain continuous films, a 100 nm seed layer of either Ag or Cr was deposited on the substrate. The misfit between MgO and the NiMnGa austenite is 2.42%, whereas that between Cr and NiMnGa is 0.75%; epitaxial thin films are more easily generated when the misfit is small.
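The quoted misfit values can be rationalized with nominal lattice constants of the materials involved. In the sketch below the lattice constants are assumed textbook values (a_MgO ≈ 0.421 nm, a_Cr ≈ 0.2885 nm, NiMnGa austenite a_A ≈ 0.582 nm), so the computed numbers only approximate the 2.42% and 0.75% quoted above.

```python
import numpy as np

def misfit(d_film, d_substrate):
    """Relative in-plane lattice misfit between matching spacings."""
    return (d_film - d_substrate) / d_substrate

a_MgO, a_Cr, a_A = 0.421, 0.2885, 0.582   # nm, assumed nominal values

# 45-degree rotated cube-on-cube epitaxy on MgO: a_A/sqrt(2) matches a_MgO
print(f"NiMnGa on MgO: {misfit(a_A / np.sqrt(2), a_MgO):+.2%}")
# On bcc Cr the half-cell of the Heusler austenite matches the Cr cell edge
print(f"NiMnGa on Cr:  {misfit(a_A / 2, a_Cr):+.2%}")
```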
Discussion

When the sputtered atoms arrive at the substrate, a thin film is formed. The morphology, microstructure and crystallographic orientation of the film depend on the sputtering conditions, as depicted in the so-called structure zone model [START_REF] Charpentier | X-Ray diffraction and Raman spectroscopy for a better understanding of ZnO:Al growth process[END_REF]. This model correlates the argon pressure and the substrate temperature with the morphological properties of the produced films. As shown in Fig. 3.12, a low argon pressure produces films with smooth, featureless microstructures, while a high argon pressure produces films with microvoids surrounding amorphous islands. In addition, grain growth is promoted at elevated substrate temperatures.

Summary

In summary, NiMnGa thin films with continuous microstructures were successfully produced by DC magnetron sputtering after optimization of the sputtering parameters, such as substrate temperature, sputtering power, substrate material, seed-layer material, film composition and thickness. A low argon pressure, a high substrate temperature and a seed layer are necessary to fabricate epitaxial NiMnGa thin films with continuous microstructure. The optimum fabrication parameters to produce NiMnGa films with a continuous microstructure and with the martensite phase at room temperature are:

 Target material: Ni46at.%-Mn32at.%-Ga22at.%;
 Substrate: MgO(100) substrate with a 100 nm Cr seed layer;
 Working pressure: 0.15 Pa;
 Sputtering power: 75 W;
 Substrate temperature: 500 °C.

Chapter 4 Determination of crystal structure and crystallographic features by XRD

Introduction

A key requirement for considerable magnetic field-induced strains in ferromagnetic shape memory alloys, as reported for bulk NiMnGa samples, is the presence of modulated martensite. Precise knowledge of the martensites in terms of crystal structure and crystallographic texture is therefore of utmost importance. For such characterizations, XRD has been one of the privileged techniques for analyzing the phase constituents and texture components in many previous studies on NiMnGa epitaxial thin films [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stray-Field-Induced Actuation of Free-Standing Magnetic Shape-Memory Films[END_REF]. For the crystal structure analyses of the constituent phases, the crystal structures of the cubic austenite and of the tetragonal non-modulated (NM) martensite have been unambiguously determined from the attainable XRD reflection peaks [START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF].
However, for the monoclinic modulated martensite (7M or 14M), the determination of the lattice constants has suffered from an insufficient number of measured reflection peaks [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], leaving the monoclinic angle of the crystal undetermined. Without precise knowledge of the monoclinic angle, the monoclinic modulated martensite has had to be simplified to a pseudo-orthorhombic structure [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. This simplification makes the precise description of the orientation relationships between martensitic variants using well-defined twinning elements (twinning plane, twinning direction, etc.) particularly difficult, especially when there are irrationally indexed twinning elements. For the crystallographic texture analyses, several studies have addressed the preferred orientations of the two kinds of martensite. Special attention has been paid to linking the orientation relationship between the two martensites and to providing experimental evidence for the "adaptive phase model".

In this chapter, the phase constituents and texture components of the NiMnGa thin films were investigated by X-ray diffraction techniques. The crystal structure of the modulated martensite, with the lattice constants expressed in the monoclinic Bravais cell, was fully determined, and the texture characteristics were precisely detected.

Experimental

Ni-Mn-Ga thin films with a nominal composition of Ni50Mn30Ga20 and a nominal thickness of 1.5 µm were deposited from a cathode target of Ni46Mn32Ga22 by DC magnetron sputtering. A Cr buffer layer, 100 nm thick, was pre-coated on the MgO(100) monocrystal substrate. The crystal structures of the as-deposited NiMnGa thin films were determined by X-ray diffraction using Co-Kα radiation (λ = 0.178897 nm). Considering that the thin films may possess in-plane texture, two four-circle X-ray diffractometers, one with a conventional θ-2θ coupled scan and the other with a rotating anode generator (RIGAKU RU300) and a large-angle position sensitive detector (INEL CPS120), were used to collect a sufficient number of diffraction peaks. The geometrical configurations of the two X-ray diffractometers are schematically illustrated in Fig. 2.3.
In the former case (Fig. 2.3a), the θ-2θ coupled scans were performed between 45° and 90° at tilt angles ψ ranging from 0° to 10° with a step size of 1°. In the latter case (Fig. 2.3b), the 2θ scans were conducted at tilt angles ψ from 0.75° to 78.75° with a step size of 1.25°. At each tilt angle, the sample was rotated (φ) from 0° to 360° with a step size of 5°. Two incident angles ω (27.9° and 40°) were selected in order to obtain the possible diffraction peaks in the low-2θ (48°-58°) and high-2θ (around 82.5°) regions. The final diffraction patterns were obtained by integrating all the diffraction patterns acquired at the different sample positions.

Results and discussion

Determination of crystal structure

Fig. 4.1a shows the ψ-dependent XRD patterns of the as-deposited Ni-Mn-Ga thin films obtained by conventional θ-2θ coupled scanning at ambient temperature. At each tilt angle ψ, there are only a limited number of diffraction peaks. Besides, no diffraction peaks are visible over the 2θ range of 48°-55°. This can be attributed to the texture of the as-deposited thin films, as several peaks at these positions were well observed in previous powder XRD measurements [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF]. A close comparison of the measured and calculated peak positions shows that fairly good matches can be found between the measured and recalculated positions for the two kinds of martensite, although deviations of 0.2-2° exist. The good fits confirm that the modulated martensite and the NM martensite possess the same crystal structures as their counterparts in bulk materials, whereas the angular deviations indicate that the lattice constants of the 7M and NM martensite in the as-deposited thin films are not exactly the same as those derived from their powder counterparts, as the substrate imposes a constraint on the as-deposited thin films during the phase transitions. Using the measured peak positions of each phase, the lattice constants of the 7M and NM martensite were resolved. Table 4.1 summarizes the complete crystal structure information on the three phases (austenite, 7M martensite and NM martensite) present in the as-deposited thin films. The unit cells of the 7M and NM martensite are shown in Fig. 4.2. This information is a prerequisite for the subsequent EBSD orientation analyses. It should be noted that the macroscopic crystallographic features obtained in the present work are similar to those identified in the previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]; the only difference is that the choice of the lattice cells for the two martensites is not the same.
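The peak-position fitting described above amounts to comparing the measured 2θ values with those computed from trial lattice constants. A minimal Python sketch of that comparison is given below; the lattice constants are illustrative placeholders, not the refined values of Table 4.1.

```python
import numpy as np

WAVELENGTH = 0.178897  # Co K-alpha (nm)

def d_monoclinic(h, k, l, a, b, c, beta_deg):
    """Interplanar spacing of a monoclinic cell (unique b axis)."""
    be = np.radians(beta_deg)
    inv_d2 = ((h / a)**2 + (l / c)**2 - 2 * h * l * np.cos(be) / (a * c)) \
             / np.sin(be)**2 + (k / b)**2
    return 1.0 / np.sqrt(inv_d2)

def two_theta(d):
    """Bragg angle 2-theta (deg) for spacing d (nm)."""
    return 2.0 * np.degrees(np.arcsin(WAVELENGTH / (2.0 * d)))

# (0 4 0) of a 7M-like cell lands in the high-2theta window around 82.5 deg
print(two_theta(d_monoclinic(0, 4, 0, 0.43, 0.542, 2.95, 93.3)))
```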
The (2 0 20)mono, (2 0 -20)mono and (0 4 0)mono planes in the present work refer to the respective (0 4 0)orth, (4 0 0)orth and (0 0 4)orth planes in the pseudo-orthorhombic coordinate system [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], whereas the (004)Tetr and (220)Tetr planes refer to the respective (004)NM and (400)NM planes in the previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF].

Summary

X-ray crystal structure analysis shows that three different phases, i.e. austenite, 7M modulated martensite and NM martensite, co-exist in the as-deposited Ni50Mn30Ga20 thin films. The austenite phase has a cubic L21 crystal structure (space group Fm-3m).

Experimental procedure

The microstructures and crystallographic orientations of the thin films, with a nominal composition (at.%) of Ni50Mn30Ga20 and a nominal thickness of 1.5 µm, were analyzed using the same SEM equipped with an EBSD system; the EBSD patterns from the martensitic variants were manually acquired using Channel 5 Flamenco's interactive option. Prior to the microstructural observations and orientation measurements, the thin film samples were subjected to thickness-controlled electrolytic polishing with a solution of 20% HNO3 in CH3OH at 12 V at room temperature. The NiMnGa thin films were also investigated by transmission electron microscopy (TEM) at 200 kV accelerating voltage. Specimens for TEM investigation were cut out of the freestanding NiMnGa thin film and further thinned by twin-jet electrolytic polishing with an electrolyte of 20 vol.% HNO3 in CH3OH at ambient temperature.

Results

Microstructure of epitaxial NiMnGa thin films

As shown in the secondary electron (SE) images (Fig. 5.1(a)), the high relative contrast zones consist of shorter and somewhat bent plates oriented roughly at 45° with respect to the substrate edges. In fact, the SE image contrast is related to the surface topography of the observed object; the low and high relative contrast zones are thus expected to have low and high surface reliefs, respectively. Further examination at higher magnification in backscattered electron imaging (BSE) mode reveals that fine lamellae with two different brightness levels are distributed alternately inside each plate, as highlighted with green and blue lines in Fig. 5.1(b). Of two contrasted neighboring lamellae, one is thicker and the other thinner. As the BSE image contrast of a monophase microstructure with homogeneous chemical composition originates from the orientation differences of the microstructural components, the thicker and thinner lamellae distributed alternately in each plate should correspond to two distinct orientations. The crystal structures of the coarse plates in the near-surface layer of the film and of the fine plates well below the film surface were further verified by EBSD, using the determined crystal structure information for the 7M and NM martensite. In contrast, the pattern in Fig. 5.2(f) represents a single pattern and can be well indexed with the monoclinic superstructure of the 7M martensite (Fig. 5.2(g)); large misfits between the acquired and calculated patterns are generated if the tetragonal crystal structure of the NM martensite is used to index this pattern (Fig. 5.2(h)).
In this context, the coarse plates in the top layer of the film are composed of NM martensite, whereas the fine plates in the film interior are of 7M martensite. Taking the X-ray measurement results into account, one may deduce that the NM martensite is located near the free surface of the film, the austenite just above the substrate surface, and the 7M martensite in the intermediate layers between them. Using the determined lattice constants of the NM and 7M martensites and the published atomic position data [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF] as initial input, the constituent phases in the surface layers of the as-deposited thin films were verified by EBSD analysis. The Kikuchi line indexation evidenced that both the low and the high relative contrast zones displayed in Fig. 5.3(a) are of the tetragonal NM martensite rather than the 7M martensite. Apparently, there is a difference between the present result and previous work by other groups [42-44, 55, 82, 85, 86, 88, 89] in the interpretation of the microstructural features of Ni-Mn-Ga thin films. This may be related to the different thicknesses of the thin films used for the microstructural characterizations. In the present study, the NM martensite was found at the surface of the produced films of about 1.5 µm thickness, while the others reported the existence of 7M martensite at the surface of films with a maximum thickness of about 0.5 µm. Indeed, the formation of the different types of martensite is very sensitive to local constraints. At the film surface, the constraint from the substrate obviously decreases with increasing film thickness; thus, the surface layers of thick films, without much constraint from the substrate, easily transform into the stable NM martensite, as demonstrated in the present case.

Crystallographic features of NM martensite

The TEM samples were tilted to bring the {-112}Tetr lamellar interfaces in P1 and P2 edge-on, as shown in Fig. 5.7. The selected area diffraction patterns acquired from plates P1 and P3 in Fig. 5.6 and from plates P1 and P2 in Fig. 5.7 clearly revealed that the adjacent fine lamellae in the same plate have their {-112}Tetr planes in parallel. This further confirms the compound twin relationship between them.

Orientation correlation

Detailed EBSD orientation analyses were conducted on the NM martensite plates in the low and high relative contrast zones (Z1 and Z2 in Fig. 5.8). It is noted that the orientations of the major and minor variants in the low relative contrast zones are different from those in the high relative contrast zones. For the low relative contrast zones (Z1), the major and minor variants are oriented with their (110)Tetr planes and their (001)Tetr planes, respectively, nearly parallel to the substrate surface (Fig. 5.8(c)). In the high relative contrast zones (Z2), such plane parallelisms hold for plates 2 and 4, but with an exchange of the planes between the major and minor variants, whereas both the major and minor variants in plates 1 and 3 are oriented with their (110)Tetr planes nearly parallel to the substrate surface (Fig. 5.8(d)). In correlation with the microstructural observations, plates 2 and 4 are featured with higher brightness and plates 1 and 3 with lower brightness.

Misorientation relationships
Based on the misorientation calculations in part 2.3.2 and Eqs. (2-17) to (2-23), the orientation relationships between adjacent lamellar variants were further determined from their orientation data manually acquired by EBSD. For both the low and the high relative contrast zones, the in-plate lamellar variants are found to have a compound twin relationship with the twinning elements K1 = (112)Tetr, K2 = (11-2)Tetr, η1 = [11-1]Tetr, η2 = [111]Tetr, P = (1-10)Tetr and s = 0.412. Here, all the crystallographic elements are expressed in the crystal basis of the tetragonal Bravais lattice, for convenience in interpreting the EBSD orientation data and in introducing the symmetry elements of the tetragonal structure to find the proper misorientation axes and angles. The present twin relationship is the same as that reported for the bulk alloys [START_REF] Cong | Microstructural and crystallographic characteristics of interpenetrating and non-interpenetrating multiply twinned nanostructure in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF][START_REF] Zhang | A general method to determine twinning elements[END_REF], and it is also consistent with the so-called a-c twin relationship found in the thin films [START_REF] Eichhorn | Microstructure of freestanding single-crystalline Ni2MnGa thin films[END_REF][START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. By means of the indirect two-trace method [START_REF] Zhang | Indirect two-trace method to determine a faceted low-energy interface between two crystallographically correlated crystals[END_REF], the in-plate interlamellar interface planes in the two relative contrast zones were further determined; they coincide well with the (112)Tetr twinning planes (K1). As for the high relative contrast zones (Fig. 5.8(d)), the interlamellar interfaces in one plate (e.g. plate 1) are roughly perpendicular (88.6°) to the substrate surface, whereas the interlamellar interfaces in its neighboring plates (e.g. plate 2) are inclined at 44.4° to the substrate surface.
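That both the 180° rotation about the (112)Tetr plane normal and the 180° rotation about [11-1]Tetr describe the same orientation relationship, which is the signature of a compound twin (cf. part 2.3.2), can be checked numerically. In the sketch below the tetragonal lattice constants are placeholders; the two rotations differ exactly by the two-fold lattice rotation about [1-10].

```python
import numpy as np

a, c = 0.386, 0.656   # placeholder tetragonal lattice constants (nm)

def rot180(axis):
    """180-degree rotation about `axis` (Eq. 2-23 with theta = 180)."""
    d = np.asarray(axis, float)
    d = d / np.linalg.norm(d)
    return 2.0 * np.outer(d, d) - np.eye(3)

n_K1 = np.array([1 / a, 1 / a, 2 / c])   # Cartesian normal of K1 = (112)
eta1 = np.array([a, a, -c])              # Cartesian direction of [1 1 -1]

R_plane = rot180(n_K1)       # type-I description of the twin
R_direction = rot180(eta1)   # type-II description of the twin
S = rot180([1, -1, 0])       # 2-fold lattice rotation about [1 -1 0]

# For a compound twin the two descriptions coincide up to a symmetry rotation
print(np.allclose(R_plane, S @ R_direction))   # -> True
```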
Moreover, the orientation relationships between two lamellae connected by an inter-plate interface in the low and high relative contrast zones were calculated. The respective minimum rotation angles and the corresponding rotation axes are displayed in Table 5.1 and Table 5.2. The counterpart major and the counterpart minor variants in adjacent plates are related by rotations of ~83° around <110>Tetr axes and of ~11-14° around <301>Tetr axes, respectively, with certain deviations. For the low relative contrast zones, the closest atomic planes of the counterpart major variants are the (-121)Tetr-type planes, with an angular deviation of 1.9°, and those of the counterpart minor variants are the {010}Tetr planes, with an angular deviation of 4.2°, as outlined with black dotted rectangles in the {010}Tetr and {112}Tetr pole figures in Fig. 5.8(c). These characteristic planes are all nearly perpendicular to the substrate surface, and the corresponding planes of the counterpart major and minor variants are positioned symmetrically with respect to the (010)MgO plane. As for the high relative contrast zones (Fig. 5.8(d)), the closest atomic planes of the counterpart major and minor variants are respectively of the (-121)Tetr and (010)Tetr type, similar to the case of the low relative contrast zones; however, they are no longer perpendicular to the substrate surface. Detailed calculations on plates 1 and 2 show that their (-121)Tetr and (010)Tetr planes are inclined to the substrate surface.

It should be noted that the present results on the orientation relationships between the lamellar variants in two neighboring NM plates (e.g. plates A and B in the low relative contrast zones and plates 1 and 2 in the high relative contrast zones), on the orientations of the in-plate interlamellar interfaces, and on those of the inter-plate interfaces with respect to the substrate are similar to those reported in the literature [42-44, 82, 89]. However, our direct EBSD orientation measurements have clarified that plates C and D in the low relative contrast zones (or plates 3 and 4 in the high relative contrast zones) are not a repetition of plates A and B (or plates 1 and 2) in terms of the orientations of the lamellar variants, the intra-plate interfaces and the inter-plate interfaces.

Discussion

As demonstrated above, the morphology and surface topology of the NM martensite plates in the low and high relative contrast zones are clearly different, although both appear to be composed of the same (112)Tetr compound twins acting as the primary microstructural elements. In essence, the crystallographic orientations of the in-plate martensitic variants with respect to the substrate surface are not the same. This may be the origin of the morphological and topological differences observed for the two relative contrast zones, as discussed below.

(1) Low relative contrast zone

Fig. 5.9(a) illustrates the atomic correspondences of the eight lamellar variants organized in four NM plates (representing one variant group) in the low relative contrast zones (Z1), viewed from the top of the as-deposited thin films. The atomic correspondences were constructed from the individually measured orientations of the lamellar variants and the determined intra- and inter-plate interface planes. The width ratio (expressed as the number ratio of atomic layers) between the minor and major variants is 0.492, as determined according to the phenomenological theory of martensitic transformation (known as the WLR theory [START_REF] Wechsler | On the Theory of the Formation of Martensite[END_REF][START_REF] Wayman | The phenomenological theory of martensite crystallography: Interrelationships[END_REF]) under the assumption that the invariant plane is parallel to the MgO substrate surface.
This width ratio is very close to 2:4 (0.5), i.e. that of the ideal $(5\bar{2})$ stacking sequence. It should be noted that, for the sake of saving space, only two atomic layers for the minor variants and five atomic layers for the major variants were drawn in Fig. 5.9(a) to illustrate the thickness ratio between the minor and major variants, the structures of the intra- and inter-plate interfaces, and the orientations of the lamellar variants with respect to the substrate. In reality, the lamellar variants are much thicker, in the nanometer range. One should not confuse this with the structure model from the "adaptive phase theory" [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF][START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF], where the unit cell of the monoclinic 7M martensite is built on a fixed number of atomic layers in each tetragonal variant. Fig. 5.9(b) presents a 3D display of the atomic correspondences between two alternately distributed lamellar variants with the (112)Tetr compound twin relationship in one NM martensite plate. It is seen from Figs. 5.9(a) and 5.9(b) that the in-plate interlamellar interfaces are coherent, with a perfect atomic match. To further reveal the inter-plate interface features, a 3D configuration of two adjacent NM plates was constructed using the twinned lamellar variants as blocks, as illustrated in Fig. 5.9(c). From Figs. 5.9(a) and 5.9(c), it is seen that the inter-plate interfaces are incoherent, with a certain amount of atomic mismatch. These inter-plate interfaces correspond to the so-called a-c twin interfaces mentioned in the literature [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF]. Interestingly, the atoms from two adjacent NM plates are not totally disordered at the inter-plate interfaces but show some periodicity. For instance, each pair of major and minor lamellae constitutes one period, and the two end atoms of one period possess a perfect match at the inter-plate interface. If those coherent atoms are chosen as reference, the atoms within one period experience an increasing mismatch, symmetric with respect to the plate interface, when approaching the interlamellar interface enclosed in the period. Both the periodic coherence and the symmetrical mismatch define a straight inter-plate interface, which acts as another invariant plane for the NM martensite in the low relative contrast zones. Indeed, the width ratio required by this invariant plane is the same as that required by the invariant plane parallel to the MgO substrate surface. Due to such an atomic construction, the inter-plate interfaces are always straight, without needing to bend to accommodate unbalanced interfacial atomic misfits. This boundary character has been evidenced in the microstructural observations (Fig. 5.3(a)). As the combination of the lamellar variants ((112)Tetr compound twins) is the same in neighboring plates, the atomic structures of all inter-plate interfaces are the same.
Therefore, all plate interfaces in the low relative contrast zones are parallel to one another, as demonstrated in Fig. 5.3(b). Moreover, as the major and the minor lamellar variants ((112)Tetr compound twins) have the same orientation combination for all NM plates and are distributed symmetrically to the inter-plate interfaces (Fig. 5.9(c)), no microscopic height misfits appear across the inter-plate interfaces in the film normal direction. The NM plates in these zones are relatively flat, without pronounced surface relief or corrugation. Therefore, no significant relative contrast is visible between adjacent NM plates in the SE image.

(2) High relative contrast zone

Following the above analysis scheme, the atomic correspondences of the eight lamellar variants in four NM plates, the 3D atomic correspondences of the twinned lamellar variants in one NM plate, and the 3D configuration of two adjacent NM plates were also constructed for the high relative contrast zone (Z2), as shown in Fig. 5.10. Note that each high relative contrast zone contains two distinct orientation plates (e.g. plates 1 and 2, or plates 3 and 4, in Fig. 5.10(a)) in terms of the orientations of the paired lamellar variants (the (112)Tetr compound twins). Due to the orientation differences, the width ratios between the minor and major variants in plate 1 and plate 2 are respectively 0.47 and 0.48, compared with 0.492 for the low relative contrast zones. Here, the ideal width ratio 0.5 was used to construct Fig. 5.10; for a real material, the deviations from the ideal width ratio may be accommodated by stacking faults. From Fig. 5.10(a), it is seen that both the major and minor lamellar variants in plate 1 have their (110)Tetr planes nearly parallel to the substrate surface, whereas the major and minor lamellar variants in plate 2 have their (001)Tetr and (110)Tetr planes, respectively, nearly parallel to the substrate surface. Taking the coherent (112)Tetr compound twin interfaces in plate 2 as reference, the (112)Tetr compound twin interfaces in plate 1 can be generated by a rotation of about 90° around the [110]MgO axis. In this manner, the (112)Tetr compound twin interfaces become perpendicular to the substrate surface. It is commonly considered that the (112)Tetr compound twinning may bring about atomic corrugations in the NM plates, as illustrated in Fig. 5.10(b). As the direction of the atomic corrugations in plate 1 lies in the film plane, no significant surface relief is created on the free surface of the thin films. Scanning tunneling microscopy (STM) imaging has evidenced that the free surface of such NM plates (equivalent to plate 1 in the present work) stays smooth [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF]. Unlike in the low relative contrast zones, the $(121)_{Tetr}$-type and $(010)_{Tetr}$ planes of the neighboring plates are not in mirror relation with respect to the inter-plate interface. As the lengths of one pair of major and minor variants (considered as one period) in the two plates are not the same along the macroscopic plate interface (Fig. 5.10(c)), an unbalanced atomic misfit can be expected. This misfit is cumulative and increases with the length (parallel to the film surface) and the height (normal to the film surface) of the plates. Therefore, the plate interface orientation could be dominated by the orientation of the $(121)_{Tetr}$-type plane of the major variant in either plate 1 or plate 2, depending on local constraints.
This may be the reason why the inter-plate interfaces in the high relative contrast zones are bent after running a certain length. As displayed in Figs. 5.10(b) and 5.10(c), the atomic misfit also arises in the film normal direction. For example, the planar spacing of the major and minor variants in the film normal direction is 0.272 nm in plate 1, but 0.334 nm (major) and 0.272 nm (minor) in plate 2. If we assume that the major variants in plate 2 make the dominant contribution to the atomic misfit between the two plates at the plate interface, the region of plate 2 is elevated in height by about 23% with respect to that of plate 1 ($(0.334 - 0.272)/0.272 \approx 0.23$). In the present work, the as-deposited thin films were subjected to electrolytic polishing before microstructural observation. The constraints induced by the atomic misfits in the height direction may be fully released at the free surface; thus, a significant height difference between plate 1 and plate 2 can be expected. In the high relative contrast zones of Fig. 5.3(a), the plates with higher brightness are those with the larger planar spacing in the film normal direction (major variants in plate 2), whereas the plates with lower brightness are those with the smaller planar spacing in the film normal direction. This could well account for the distinct levels of brightness between neighboring plates observed in the high relative contrast zones.

Crystallographic features of 7M martensite

The angle between the interface traces marked by the dotted yellow and green lines is about 15°, being bisected by one solid black line that is parallel to the [1 1 0]MgO direction. The observed microstructural features are similar to those reported in recent studies [42-44, 82, 86]. A low relative contrast zone (Group 1) corresponds to the so-called Type Y pattern, and a high relative contrast zone (Group 2 or Group 3) to the Type X pattern [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. In most cases, the neighboring plates in both the low and high relative contrast zones have almost the same width. The microstructure-correlated characterizations of the crystallographic orientations of the 7M martensite plates were conducted by EBSD, with the macroscopic reference frame set to the crystal basis of the MgO monocrystal substrate. It is found that each 7M martensite plate is specified by a single crystallographic orientation, designated as one orientation variant. There are a total of four different orientation variants distributed in one plate group, as illustrated in Fig. 5.12(a) and Fig. 5.12(b). Here, the four orientation variants, representing one plate group with low or high relative contrast, are denoted by the symbols 7M-V1, V2, V3, V4 (Fig. 5.12(a)) and 7M-VA, VB, VC, VD (Fig. 5.12(b)), respectively. Fig. 5.12(c) presents the individually measured orientations of the 7M variants in Group 1 and Group 2 in the form of $\{2\,0\,20\}_{mono}$, $\{2\,0\,\overline{20}\}_{mono}$ and $\{0\,4\,0\}_{mono}$ pole figures. The subscript "mono" stands for monoclinic, as the three indices of the crystallographic planes are defined in the monoclinic Bravais lattice frame. Clearly, in the low relative contrast zone (Group 1), the four variants 7M-V1, V2, V3, V4 all have their $(2\,0\,20)_{mono}$ plane nearly parallel to the substrate surface (Fig. 5.12(c), left).
These variants correspond to the a-c twins with their common b axis perpendicular to the substrate surface, the so-called b-variants in the literature [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF]. However, in the high relative contrast zone (Group 2), the variants VA and VD (colored in yellow and green in Fig. 5.12(b)) have their $(2\,0\,\overline{20})_{mono}$ plane nearly parallel to the substrate surface (Fig. 5.12(c), middle), and the variants VB and VC (colored in blue and red in Fig. 5.12(b)) their $(0\,4\,0)_{mono}$ plane nearly parallel to the substrate surface (Fig. 5.12(c), right). Furthermore, the variants VB and VC and the variants VA and VD are of low brightness and high brightness, respectively. They correspond to the a-c twins with their common b axis parallel to the substrate surface, the so-called a-variants for VA and VD and c-variants for VB and VC [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF]. It should be noted that in the high relative contrast zones, one plate group is composed of two $(2\,0\,\overline{20})_{mono}$ variants (VA and VD) and two $(0\,4\,0)_{mono}$ variants (VB and VC). But in the literature [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF], a pair of the $(2\,0\,\overline{20})_{mono}$ variants (VA and VD) was designated as one single a-variant and a pair of the $(0\,4\,0)_{mono}$ variants (VB and VC) as one single c-variant (here, the $(2\,0\,20)_{mono}$, $(2\,0\,\overline{20})_{mono}$ and $(0\,4\,0)_{mono}$ planes refer to the $(0\,4\,0)_{orth}$, $(4\,0\,0)_{orth}$ and $(0\,0\,4)_{orth}$ planes in the pseudo-orthorhombic coordinate system, respectively [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]). In consequence, only two variants were found in each plate group [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF], instead of four variants as evidenced in the present work. From the calculated misorientation data, the complete twinning elements ($K_1$, $K_2$, $\eta_1$, $\eta_2$, $P$ and $s$) of the above three types of twins were derived using a general method developed by our group [START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF][START_REF] Zhang | A general method to determine twinning elements[END_REF], as shown in Table 5.5. The twinning modes are exactly the same as those in the bulk materials [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF].
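The three twin types can also be told apart directly from a measured misorientation: up to lattice symmetry, a Type-I twin is a 180° rotation about the $K_1$ plane normal, a Type-II twin a 180° rotation about the $\eta_1$ direction, and a compound twin satisfies both. The Python sketch below illustrates such a check; it assumes the plane normal and direction have already been converted to Cartesian crystal coordinates (for the monoclinic lattice this conversion requires the metric tensor), and the axes in the toy example are hypothetical:

```python
import numpy as np

def rot180(axis):
    """180 deg rotation about a unit axis; Rodrigues reduces to 2*n*n^T - I."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    return 2.0 * np.outer(a, a) - np.eye(3)

def angle_between(R1, R2):
    """Angular distance (deg) between two rotation matrices."""
    cos_t = np.clip((np.trace(R1 @ R2.T) - 1) / 2, -1, 1)
    return np.degrees(np.arccos(cos_t))

def classify_twin(dg, k1_normal, eta1, sym_ops, tol=2.0):
    """Compare a measured misorientation dg with the ideal Type-I / Type-II
    twin rotations, modulo the lattice symmetry (simplified one-sided scan)."""
    is_type1 = any(angle_between(S @ dg, rot180(k1_normal)) < tol for S in sym_ops)
    is_type2 = any(angle_between(S @ dg, rot180(eta1)) < tol for S in sym_ops)
    if is_type1 and is_type2:
        return "compound twin"
    if is_type1:
        return "Type-I twin"
    if is_type2:
        return "Type-II twin"
    return "not a twin (within tolerance)"

# Monoclinic proper rotations: identity and the 2-fold about b (here the y axis)
MONO_SYM = [np.eye(3), rot180([0, 1, 0])]

# Toy check with hypothetical Cartesian K1 normal and eta1 (eta1 lies in K1)
k1_n = np.array([0.0, 0.0, 1.0])
eta1 = np.array([1.0, 1.0, 0.0])
print(classify_twin(rot180(k1_n), k1_n, eta1, MONO_SYM))  # -> "Type-I twin"
```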
Here, an attempt is made to correlate the present twin relationships (identified by EBSD) under the monoclinic crystal system with the published ones (deduced from XRD) under the pseudo-orthorhombic crystal system. It is seen from Table 5.6 that the so-called a-c twins with the $(1\,0\,1)_A$ plane as the twin interface are the Type-I twins of the present work; the Type-I $K_1$ plane of the monoclinic description is parallel to the $(1\,0\,1)_A$ plane of the austenite and corresponds to the so-called a14M-c14M twin with $K_1$ close to $(1\,0\,1)_{14M}$. Interestingly, the Type-II twins of the present work also belong to the so-called a-c twins. Clearly, the specification of twin relationships under the pseudo-orthorhombic crystal system has difficulty distinguishing the subtle differences between the two types of twins. Apart from this, the compound twins could not be differentiated, owing to the simplification of the 7M martensite to a pseudo-orthorhombic crystal structure. To obtain a general view of the configuration of the 7M variants, the occurrences of the three types of twins in the two relative contrast zones were further examined. In the low relative contrast zone (Fig. 5.12(a)), the Type-I twins (V1:V3 and V2:V4) and the Type-II twins (V1:V2 and V3:V4) are those with the interface traces marked by solid black lines and dotted white lines, respectively. In these zones, the traces of the Type-I and Type-II interfaces are parallel, and the majority of variant pairs are Type-I twins. Moreover, no compound twin relationship (V1:V4 and V2:V3) is observed between two adjacent variants. However, in the high relative contrast zone (Fig. 5.12(b)), the majority of variant pairs are Type-II twins. The interface traces of the Type-I twins have only one orientation (marked by the solid black lines), whereas those of the Type-II twins have two orientations (marked by the dotted yellow and green lines). The interfaces with a compound twin relationship (VA:VD and VB:VC) are intra-plate interfaces, occurring where the martensite plates bend. Using the indirect two-trace method [START_REF] Zhang | Indirect two-trace method to determine a faceted low-energy interface between two crystallographically correlated crystals[END_REF][START_REF] Cong | Determination of microstructure and twinning relationship between martensitic variants in 53 at.%Ni-25 at.%Mn-22 at.%Ga ferromagnetic shape memory alloy[END_REF], the planes of the twin interfaces in the low and high relative contrast zones were calculated from the measured crystallographic orientations of the individual variants and the orientations of their interface traces on the film surface. Since the epitaxial NiMnGa thin films were prepared on the MgO monocrystal substrate at elevated temperature, the thin films consisted of monocrystalline austenite when the deposition process was finished. According to the scanning tunneling microscopy (STM) analysis [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF], the orientation relationship between the MgO substrate and the NiMnGa austenite is $(0\,0\,1)_{MgO}[1\,1\,0]_{MgO} \parallel (0\,0\,1)_A[1\,0\,0]_A$, as illustrated in Fig. 5.14(a). Upon cooling, the cubic austenite transforms to the monoclinic 7M modulated martensite below the martensitic transformation start temperature.
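For bookkeeping purposes, this epitaxial relationship can be encoded as a single rotation linking the two cubic frames, i.e. a 45° rotation about the common [0 0 1] axis. The short sketch below merely verifies the direction correspondence and is illustrative only:

```python
import numpy as np

# Rotation taking MgO lattice directions to austenite lattice directions for
# (001)MgO[110]MgO || (001)A[100]A: a 45 deg rotation about the common z axis.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[ c, s, 0],
              [-s, c, 0],
              [ 0, 0, 1]])

v_mgo = np.array([1, 1, 0]) / np.sqrt(2)   # unit vector along [110]MgO
print(R @ v_mgo)                           # -> [1, 0, 0], i.e. [100]A
```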
Due to the displacive nature of martensitic transformation, the phase transformation from austenite to 7M martensite in the present thin films is realized by a coordinated lattice deformation of the parent phase following a certain orientation relationship between the two phases. The Pitsch orientation relationship ($\{1\,1\,0\}_A\langle 1\,1\,0\rangle_A \parallel \{1\,2\,1\}_{mono}\langle 1\,1\,1\rangle_{mono}$), previously revealed for the austenite to 7M martensite transformation in bulk NiMnGa [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF], is further confirmed in the present work, as illustrated in Fig. 5.14(b). It has been demonstrated that the austenite to 7M martensite transformation should be self-accommodated, so as to minimize the macroscopic strain, by forming specific variant pairs or groups with specific variant volume fractions [START_REF] Hane | Microstructure in the cubic to monoclinic transition in titanium-nickel shape memory alloys[END_REF][START_REF] Yang | Self-accomodation and shape memory mechanism of ϵ-martensite -II. Theoretical considerations[END_REF][START_REF] Bhattacharya | Wedge-like microstructure in martensites[END_REF]. This self-accommodation character, acting in accordance with the substrate constraint, may be the origin of the characteristic organizations of the 7M variants in the different variant zones, notably the low and high relative contrast zones of the present work. With the aid of the Pitsch orientation relationship, the lattice deformation during the martensitic transformation can be described by the displacement gradient tensor (M) in the orthonormal basis referenced to the parallel plane and direction of the orientation relationship. Due to the cubic crystal symmetry of the austenite phase, there are six distinct but equivalent lattice deformations, each associated with one of the $\{1\,1\,0\}_A$ planes; the orientation calculations confirmed that the four 7M variants in Group 1 in the low relative contrast zone originate from two of these austenite planes. Clearly, the 7M variants that originate from the two differently oriented austenite planes should be subject to different constraints from the MgO substrate. To quantify these differences and their influence on the occurrences of different variant pairs in different martensite groups, the displacement gradient tensor of each variant was calculated and presented in the macroscopic reference frame set by the substrate edges, i.e. $[1\,0\,0]_{MgO}$-$[0\,1\,0]_{MgO}$-$[0\,0\,1]_{MgO}$. For the low relative contrast zone (Group 1), the displacement gradient tensors of the four 7M variants (V1, V2, V3, V4) in the macroscopic reference frame are expressed in Eq. (5-2). It is seen that the diagonal components of the four displacement gradient tensors are exactly the same, whereas the off-diagonal components have the same absolute values but sometimes opposite signs. The maximum deformation appears as the shear $e_{21}$ in the $(1\,0\,0)_{MgO}$ plane, as highlighted in bold in the matrices of Eq. (5-2). According to our experimental observations on the low relative contrast zones, the variant pair V1:V3 (or V2:V4) belongs to the Type-I twins, and the variant pair V1:V2 (or V3:V4) to the Type-II twins. To estimate the overall transformation deformation of the two types of twins, a mean tensor matrix is defined as $\bar{M} = d_1 M_1 + (1-d_1)\,M_{2\ \mathrm{or}\ 3}$, where $d_1$ represents the relative volume fraction of one variant. Fig. 5.15 plots the non-zero $e_{ij}$ components of the matrix $\bar{M}$ against $d_1$ for the Type-I and Type-II twin variant pairs. In the case of the Type-I twins, the $e_{ii}$ components representing the dilatations in the three principal directions (Fig. 5.15(a)), as well as the $e_{23}$ component representing the shear in the $(0\,0\,1)_{MgO}$ plane along the $[0\,1\,0]_{MgO}$ direction (Fig. 5.15(b)), remain unchanged with the variation of $d_1$. This means that these transformation deformations cannot be accommodated by forming twin variants. However, when the variant volume ratio $d_1$ reaches 0.5, the largest shear $e_{21}$ and the shear $e_{32}$ are cancelled out. This volume ratio is very close to the thickness ratio observed in the present work. For the case of the Type-II twins, the $e_{ii}$ and $e_{31}$ terms stay unchanged under variant volume change; when the volume ratio reaches 0.5, the largest shear $e_{21}$ and the shear $e_{23}$ are both cancelled out. This may be the reason why, in the low relative contrast zones, Type-I twins appear with much higher frequency compared with the Type-II twins. Moreover, the formation of the Type-I twins does not create any height difference between adjacent variants in the film normal direction ($e_{31}$ and $e_{32}$ = 0, and $e_{33}$ is the same for the constituent variants). Thus, a homogeneous SE image contrast should be obtained for all the variants.
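The volume-fraction averaging behind Fig. 5.15 is straightforward to reproduce numerically. The sketch below uses invented placeholder matrices, since the Eq. (5-2) values are not reproduced here; it only mirrors the structure described above (identical diagonals, sign-flipping shears) and locates the fraction at which the shears cancel:

```python
import numpy as np

# Placeholder displacement gradient tensors for a twin-related variant pair in
# the MgO frame: identical diagonals, dominant shear e21 (and e23) of opposite
# sign between the two variants. The numerical values are invented.
M1 = np.array([[ 0.021,  0.000,  0.000],
               [ 0.105,  0.021,  0.012],
               [ 0.000,  0.000, -0.042]])
M2 = M1.copy()
M2[1, 0] *= -1.0   # e21 flips sign between the two variants
M2[1, 2] *= -1.0   # e23 flips sign as well

def mean_tensor(d1):
    """Volume-weighted mean deformation, Mbar = d1*M1 + (1 - d1)*M2."""
    return d1 * M1 + (1.0 - d1) * M2

# Scan the volume fraction and report where the large shear components vanish;
# the dilatation (diagonal) terms are unaffected by this averaging.
for d1 in np.linspace(0.0, 1.0, 101):
    Mbar = mean_tensor(d1)
    if abs(Mbar[1, 0]) < 1e-9 and abs(Mbar[1, 2]) < 1e-9:
        print(f"shears e21 and e23 cancel at d1 = {d1:.2f}")
```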
For the high relative contrast zone (Group 2), the displacement gradient tensors of the four 7M variants (VA, VB, VC, VD) in the macroscopic reference frame are given in Eq. (5-3). Here, all the components of each displacement gradient tensor are non-zero and their absolute values are of the same order of magnitude; the maximum deformation again appears as a shear component, as highlighted in bold in the matrices of Eq. (5-3). Following the above analysis scheme for the low relative contrast zones, the non-zero $e_{ij}$ components of the matrix $\bar{M} = d_1 M_A + (1-d_1)\,M_{B\ \mathrm{or}\ C}$ were calculated as a function of the volume fraction $d_1$ of variant VA, as shown in Figs. 5.17(a-b) (Type-I twin pair) and Figs. 5.17(c-d) (Type-II twin pair). For the Type-I twin pair, it is seen from Fig. 5.17(a) that the dilatation deformations can be accommodated to some extent through variant volume change, but they cannot be accommodated at one single volume ratio: the $e_{11}$ terms are nearly cancelled out at $d_1$ = 0.4876, but the $e_{33}$ term at $d_1$ = 0.5935. Similarly, shear deformation accommodation exists but cannot be achieved at one single volume ratio, as shown in Fig. 5.17(b): the $e_{13}$, $e_{21}$, $e_{23}$ and $e_{31}$ terms are roughly cancelled at $d_1$ = 0.4698, but the $e_{12}$ and $e_{32}$ terms at $d_1$ = 0.5940. If we consider the $e_{3j}$ terms to be the most important components to accommodate, due to the constraints from the rigid substrate, no volume ratio can be reached that cancels them all: when the $e_{31}$ component of the Type-I twin pair equals zero, the $e_{32}$ component is -0.0111; when the $e_{32}$ component equals zero, the $e_{31}$ component is 0.0111. Therefore, the formation of Type-I twins in the high relative contrast zone is energetically costly. When the Type-II twin pair forms, the situation for the dilatation deformations is similar to that for the Type-I twin pair. Fig. 5.17(c) shows that the $e_{22}$ and $e_{33}$ terms are nearly cancelled at $d_1$ = 0.4964, whereas the $e_{11}$ term at $d_1$ = 0.6020. For the shear deformations, all the components converge towards zero around $d_1$ = 0.5 (Fig. 5.17(d)). The $e_{3j}$ terms can nearly be cancelled at two volume ratios that are very close, $d_1$ = 0.5284 and $d_1$ = 0.5364: when $e_{31}$ equals zero, $e_{32}$ is -0.0006; when $e_{32}$ equals zero, $e_{31}$ is 0.0008. This indicates that the formation of the Type-II twins in the high relative contrast zone can effectively eliminate the shear deformation along the $[0\,0\,1]_{MgO}$ direction. Moreover, at these volume ratios the other components have relatively small values, as shown in the matrices in Fig. 5.17(d). These volume ratios are very close to the effectively observed thickness ratio and close to the values reported in the literature [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF]. In this connection, the Type-II twins should form with higher frequency, which is in accordance with our experimental observations. In addition, there exist height differences between adjacent variants, as shown by the $e_{33}$ components of the matrices in Eq. (5-3) for the four 7M variants. This accounts for the high relative contrast between the pairs of variants in the SE images.

Crystallography and sequence of martensitic transformation

As mentioned above, the local microstructural and crystallographic features of specific martensite variant groups were correlated by EBSD. To obtain the macroscopic crystallographic features, several martensite variant groups in the present NiMnGa thin films were analyzed by EBSD. It is seen that, macroscopically, the martensite variants are organized in colonies or groups, shown with different brightness in Fig. 5.18. There are two differently oriented bright long straight strips parallel to the substrate edges ($[1\,0\,0]_{MgO}$ or $[0\,1\,0]_{MgO}$) that correspond to the low relative contrast zones described above.

Summary

Microstructure characterization shows that both the 7M martensite and the NM martensite are of plate morphology and are organized into two characteristic zones featuring low and high relative SE image contrast. The low relative contrast zones consist of long straight plates running with their length direction parallel to the substrate edges, whereas the high relative contrast zones consist of shorter and bent plates with their length direction roughly at 45° to the substrate edges. In the low relative contrast zones, the Type-I twin pairs appear with much higher frequency than the Type-II twin pairs. The rigid constraint from the substrate accounts for this preference: the formation of a Type-I twin pair cancels the shear deformation in the film normal direction at the macroscopic scale (this shear tends to "peel" the film off the substrate) and therefore requires much lower effort. In the high relative contrast zones, there exists one interface trace orientation for the Type-I twins but two interface trace orientations for the Type-II twins on the film surface. In contrast to the low relative contrast zones, the Type-II twin pairs are more frequent than the Type-I twin pairs, due to the constraint from the rigid substrate.
The formation of a Type-II twin pair allows cancelling the shear deformation in the film normal direction at the macroscopic scale. Large relative height differences between adjacent variants account for the high relative contrast. Crystallographic calculation indicates that the martensitic transformation sequence is from austenite to 7M martensite and then to NM martensite (A→7M→NM). The present study intends to offer deep insights into the crystallographic features and the martensitic transformation of epitaxial NiMnGa thin films.

Epitaxial Ni-Mn-Ga thin films have attracted considerable attention, as they are promising candidates for magnetic sensors and actuators in microelectromechanical systems. Complete information on the microstructural and crystallographic characteristics of NiMnGa films, and on their relation to the substrate constraints, is essential for the optimization of their properties. In the present work, epitaxial Ni-Mn-Ga thin films were produced by direct-current magnetron sputtering and then characterized by X-ray diffraction (XRD) and by electron backscatter diffraction in a scanning electron microscope equipped with EBSD analysis (SEM-EBSD).

The NiMnGa thin films prepared to date contain phases with several crystal structures, martensite variants in many orientations and complex microstructures, which makes it difficult to obtain large magnetic-field-induced strains. Revealing the intrinsic relation between the microstructure and the crystallographic orientations in NiMnGa thin films, and clarifying the martensitic transformation process in these films, are prerequisites for eliminating the unfavorable martensite variants by training and thus obtaining large magnetic-field-induced strains. However, the martensite plates in NiMnGa thin films are far smaller than those in bulk materials, which makes the crystallographic orientation characterization of the martensite phases extremely difficult; detailed orientation information on local martensite variants in NiMnGa thin films had therefore not yet been reported. Addressing this problem, this thesis presents a systematic and comprehensive crystallographic analysis of epitaxially grown NiMnGa thin films using X-ray diffraction (XRD), scanning electron microscopy, electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM). The results are as follows. First, by adjusting the magnetron sputtering process parameters (sputtering power, gas pressure, substrate temperature) as well as the film composition and thickness, both columnar-grained and strongly oriented epitaxial NiMnGa thin films were successfully prepared. Since the grain boundaries of columnar-grained films restrict the magnetically induced shape change and hence the attainable magnetic-field-induced strain, this thesis focuses on the epitaxial NiMnGa thin films. The XRD results show that the epitaxial NiMnGa thin films are composed of three phases: austenite, non-modulated martensite (NM) and seven-layer modulated martensite (7M), the 7M martensite having a monoclinic crystal structure. By analyzing the EBSD Kikuchi patterns, the coarse martensite plates in the surface layer were identified as NM martensite and the fine plates in the bottom layer as seven-layer modulated martensite. Each martensite variant group is composed of four differently oriented martensite plates, and each plate consists of two differently oriented lamellar variants.

Fig. 1.1 Schematic illustration of the magnetic field induced deformation due to the rearrangement of martensite variants.

Fig. 1.2 shows the magnetic field-induced variant rearrangement occurring in a rectangular bar specimen under an applied field along the [100] direction of the specimen. The twinned microstructure is visible using polarized microscopy on a polished (001) surface of the specimen.

Fig. 1.2 The microstructure evolution due to magnetic field induced variant rearrangement [START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF].

Fig. 1.3 The illustration of the crystal structures of NiMnGa alloys in austenite (a), non-modulated martensite (b) and modulated martensite (c, d).

In the bulk alloys, twin relationships exist in each martensite plate, with in total eight martensite variants in each variant group. The twin-related variants have a minimum misorientation angle of ~79° around the <110>Tetr axis [69, 70]. For the modulated martensites (5M and 7M), four types of alternately distributed martensite variants (A, B, C, and D) in one martensite variant group were determined to be twin-related: A and C (or B and D) possess a type I twin relation, A and B (or C and D) a type II twin relation, and A and D (or B and C) a compound twin relation. All the twin interfaces coincide with the respective twinning planes (K1) [26-28, 71-74].

Fig. 1.4 (a) The relationship between the crystal coordinate systems of austenite, the adaptive phase and the monoclinic seven-layer modulated martensite.
(b) The pole figure presents the projection of the monoclinic seven-layer modulated martensite coordinate system along cA.

(2) Determination of the phase constituents, their crystal structures and the macroscopic crystallographic features of the various types of martensite in NiMnGa thin films by the X-ray diffraction technique. (3) Correlation of the microstructure with the local crystallographic orientations of NM martensite and 7M modulated martensite by means of EBSD. (4) Investigation of the orientation relationships between martensite variants and of the orientation relationship for the martensitic transformation, based on the determined local crystallographic orientations and crystallographic calculation. (5) Examination of the influence of the substrate constraint on the martensitic transformation and the preferential selection of martensite variants in NiMnGa thin films, based on crystallographic analyses.

Fig. 2.1 The schematics of the fabrication process for the NiMnGa target.

Fig. 2.2 Illustration of the direct current magnetron sputtering process (a) and the confocal magnetron sputtering system (b).

Fig. 2.3 Schematic illustration of the X-ray diffractometers. (a) The conventional four-circle X-ray diffractometer.

As shown in Fig. 2.4, in connection with the SEM/EBSD orientation definition (Oxford Channel 5 convention), two crystal coordinate systems are selected for convenience. One is referred to the Bravais lattice cell of each phase, and the other is a Cartesian coordinate system set to the lattice cell. The macroscopic sample coordinate system is referenced to the three edges of the MgO substrate.

Fig. 2.4 Schematic representation of a general (triclinic or anorthic) unit cell.

The Cartesian coordinate system is defined by the vectors i, j and k, with j parallel to b (j // b) and i perpendicular to the b-O-c plane (i ⊥ bOc), following the Channel 5 convention, as shown in Fig. 2.5 for the three phases of the present work.

Fig. 2.5 The schematics of the selection of the orthonormal coordinate systems. The metric tensors in direct and reciprocal space are given in Eq. (2…).

Fig. 2.6 Illustration of the coordinate transformation from the sample coordinate system to the Bravais lattice cell.

Fig. 2.7 The schematics of the misorientation between two martensite variants ($G$ denotes an Euler angle, i.e. orientation, matrix). The misorientation matrix is $\Delta G = S_j\, G_B\, G_A^{-1}$, where $G_A^{-1}$, the inverse of $G_A$, transforms from crystal frame A back to the sample frame, and the $S_j$ are the generic elements of the rotation symmetry group. This provides a description of the misorientation as the successive operation of transforming from the first variant frame (A) back to the sample frame and subsequently to the new variant frame (B). Various methods can be used to represent this transformation operation, such as Euler angles, axis/angle (where the axis is specified as a crystallographic direction), or unit quaternions. The misorientation expressed as an angle and a rotation axis is of particular interest for the present work, as it gives first information about the twin orientation relationship between martensite variants. The determination of the angle and axis is explained as follows [START_REF] Humbert | Determination of the Orientation of a Parent [beta] Grain from the Orientations of the Inherited [alpha] Plates in the Phase Transformation from Body-Centred Cubic to Hexagonal Close Packed[END_REF]: if we denote the misorientation matrix by $\Delta G$, the rotation angle follows from its trace and the rotation axis from its antisymmetric part.

Fig. 2.8 The stereographic projection of the normal of crystal 2.
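Under these conventions, the direct metric tensor of a general cell (and its inverse, the reciprocal metric tensor) is all that is needed to compute interplanar angles from Miller indices. A minimal Python sketch follows, using the monoclinic 7M lattice constants reported later in this work; the specific plane pair chosen is only an example:

```python
import numpy as np

def metric_tensor(a, b, c, alpha, beta, gamma):
    """Direct metric tensor G of a general (triclinic) cell; angles in degrees."""
    al, be, ga = np.radians([alpha, beta, gamma])
    return np.array([
        [a * a,               a * b * np.cos(ga), a * c * np.cos(be)],
        [a * b * np.cos(ga),  b * b,              b * c * np.cos(al)],
        [a * c * np.cos(be),  b * c * np.cos(al), c * c],
    ])

def plane_angle(h1, h2, G):
    """Angle (deg) between the normals of planes (h k l)1 and (h k l)2,
    computed with the reciprocal metric tensor G* = inv(G)."""
    Gs = np.linalg.inv(G)
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    cos_t = (h1 @ Gs @ h2) / np.sqrt((h1 @ Gs @ h1) * (h2 @ Gs @ h2))
    return np.degrees(np.arccos(np.clip(cos_t, -1, 1)))

# Monoclinic 7M cell reported in this work (nm and degrees)
G7M = metric_tensor(0.4262, 0.5442, 4.1997, 90.0, 93.7, 90.0)
print(plane_angle([2, 0, 20], [0, 4, 0], G7M))  # angle between (2 0 20) and (0 4 0)
```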
Fig. 2.9 Schematic illustration of the determination of the OR between austenite and martensite. $G_M$ is the Euler angle matrix of the martensite, which can be determined by SEM-EBSD. $S_i$ and $S_j$ are the symmetry elements of austenite and martensite, respectively. $T_{M \to A}$ is the transformation matrix from martensite to austenite. $T_A$ and $T_M$ are the transformation matrices from the Bravais lattice cells of austenite and martensite to the orthonormal reference frames set with respect to the parallel plane and direction; their inverses transform the orthonormal crystal coordinate systems back to the respective Bravais lattice cells of austenite and martensite.

Fig. 2.10 Schematic illustration of the determination of the orientation relationship between austenite and martensite. $g^A_{1n}$ and $g^A_{2n}$ ($n = 1, 2, \ldots, N$) represent the possible distinct orientations of the austenite.

Fig. 2.11 The schematics of the martensitic transformation. By examining the change of the lengths of the basis vectors of the reference cell before and after the transformation, the displacement gradient tensor can be readily built. To clearly analyze the constraint from the MgO substrate during the martensitic transformation, the deformation gradient tensors of the variants are transformed to the basis coordinate system of the MgO substrate. The components of the displacement gradient tensor in the MgO substrate coordinate system represent dilatation and shear: the components $e_{ii}$ stand for the normal strains, which represent the dilatations along the axes of the MgO frame, while the components $e_{ij}$ ($i \neq j$) are the shear strains, which indicate the shear magnitude in the $j$ plane (the plane whose normal is the $j$ axis) along the $i$ direction.

Fig. 2.12 Illustration of the coordinate transformations from the coordinates of the MgO substrate to the austenite and from the coordinates of the austenite to the Pitsch reference frame.

Fig. 3.1 Composition of the NiMnGa thin films deposited on Si substrates at various sputtering powers.

Fig. 3.2 Microstructure of the thin films deposited on silicon substrates at various sputtering powers: (a) 30 W, (b) 50 W, (c) 75 W, (d) 100 W.

Fig. 3.3 shows the microstructures of the thin film annealed at different temperatures. It can be seen that the column thicknesses increase with the annealing temperature. The film annealed at 900 °C for 2 h shows a plate structure in each column, with a plate width of about 200 nm.

Fig. 3.3 SEM microstructure of the NiMnGa thin films deposited on Si and annealed at different temperatures for 2 h: (a) as-deposited; (b) 750 °C; (c) 800 °C; (d) 900 °C.

Fig. 3.6 NiMnGa thin films deposited on MgO(100) at different substrate temperatures: (a) 400 °C, (b) 500 °C.

Fig. 3.6 shows the X-ray diffraction patterns of the NiMnGa thin film deposited on the MgO substrate. The X-ray diffraction spectra show that these thin films consist of 7M martensite and austenite.

Figure 3.7 shows the microstructures of the NiMnGa thin films deposited on MgO (100) and on MgO (100) with an Ag or Cr seed layer. As can be seen in Fig. 3.7, only the NiMnGa thin films deposited on MgO\Cr show a continuous character. On the one hand the Cr seed layer improves element diffusion on the substrate, and on the other hand it decreases the lattice misfit between the substrate and the NiMnGa thin films.
The lattice parameters of Cr and of the NiMnGa austenite are 2.888 Å and 5.803 Å, respectively, so that the Cr interlayer provides a close lattice match; the orientation relationship between MgO, Cr and NiMnGa follows from this matching.

Fig. 3.7 NiMnGa thin films deposited on MgO (100) with and without seed layers: (a) MgO(100), (b) MgO\Ag (100 nm), (c) MgO\Cr (100 nm).

3.3.3 Influence of film composition and thickness

Fig. 3.8 SEM images of MgO\Cr\NiMnGa thin films with different compositions: (a-b) Ni51.42 at.%-Mn29.25 at.%-Ga19.33 at.%, (c-d) Ni55.45 at.%-Mn26.62 at.%-Ga17.93 at.%.

Figure 3.10 presents the microstructures of MgO\Cr\NiMnGa thin films of different thicknesses prepared in the present work. It is seen from the figures that, with increasing film thickness, the amount of terrace-shaped constituents increases. SEM-EDX analyses show that the composition of the terraces is the same as that of the matrix.

Fig. 3.11 XRD patterns of MgO\Cr\NiMnGa thin films with two different thicknesses: (a) 1.5 μm, (b) 3 μm.

In the Thornton structure zone model, the microstructure of deposited films is classified into different zones depending on the argon pressure and the substrate temperature TS. A high substrate temperature also produces films with a continuous microstructure. In the present study, in order to fabricate NiMnGa thin films with a continuous rather than columnar microstructure, both a low argon pressure (0.15 Pa) and a high substrate temperature (500 °C) were employed. Possibly due to the limitations of the present deposition equipment, films with a continuous microstructure were not attainable directly on the substrates without any seed layer. Introducing a seed layer proved to be an alternative: for the NiMnGa deposited on the MgO single crystal substrate, the Cr seed layer improves the element diffusion on the substrate, which results in the continuous microstructure.

Fig. 3.12 Thornton structure zone model correlating the argon pressure and the substrate temperature to the morphological properties of the films [101].

Fig. 4.1 XRD patterns of the as-deposited Ni-Mn-Ga thin films on the MgO/Cr substrate: (a) measured by conventional θ-2θ coupled scanning at different tilt angles ψ; (b) measured by 2θ scanning at two incidence angles ω and integrated over the rotation angle φ.

Fig. 4.1b presents the XRD patterns measured using a large-angle position sensitive detector under two different incident beam conditions. Compared with Fig. 4.1a, extra diffraction peaks are clearly seen in the 2θ range of 48°-55° and at higher 2θ positions (around 82°).
These additional peaks in the 2range of 48°-55° were not detected in the previous studies on the Ni-Mn-Ga thin films[START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Backen | Comparing properties of substrate-constrained and freestanding epitaxial Ni-Mn-Ga films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. By combining all the characteristic diffraction peaks in Figs.4.1a and 4.1b, we are able to conduct more reliable phase identification and lattice constant determination on the constituent phases of the as-deposited thin films, especially for the modulated martensite. Fig. 4 . 2 42 Fig.4.2 the illustration of crystal structure of NM martensite and 7M martensite in NiMnGa thin films Fig. 4 . 4 Fig.4.3 presents the pole figures of NM martensite and 7M martensite. Let the edges of MgO substrate as the macroscopic sample coordinate system as shown in Fig.4.3(a) and the crystal direction of Fig. 4 . 3 43 Fig. 4.3 Pole figures of NM and 7M martensite in NiMnGa thin films (a) NM martensite, (b) 7M martensite figures and local surface morphologies. To determine the twin relationship and precise twinning elements, local crystallographic orientation correlated with microstructure is the prerequisite information. In this chapter, a spatially-resolved orientation analysis is conducted on martensites of NiMnGa thin films by means of electron backscatter diffraction (EBSD)a SEM based microstructural-crystallographic characterization technique. The microstructural features of both NM martensite and 7M modulated martensitic variants are directly correlated with their crystallographic orientations. The roles of substrate constraint in the preferential selection of martensitic variants are addressed. Based on the crystallographic calculation the martensitic transformation sequence in the present NiMnGa thin films was verified. Fig. 5 . 1 51 Fig.5.1 SE image of electrolytically polished Ni-Mn-Ga thin films, showing martensite plates that are clustered in groups with low and high relative contrasts. The sample coordinate system is set in accordance with the basis vectors of the MgO substrate. (b) High-magnification BSE image of the squared area Z1 in Fig. 5.1a, showing fine lamellae distributed alternately inside each plate. The inter-plate interfaces are marked with white dotted lines, and the intra-plate interfaces with blue and green solid lines. Fig. 5 . 5 Fig.5.1(a) shows a SE image of the as-deposited thin films after slight electrolytic polishing. It is clearly seen that the martensite appears in plate shape, and individual plates are clustered into groups that can be distinguished locally by the alignment of parallel or near-parallel inter-plate boundaries. According to the SE image contrast of neighboring martensite plates, the clustered groups are characteristic of two different relative contrasts -low relative contrast (Z1) or high relative contrast (Z2), as illustrated in Fig.5.1(a). 
The low relative contrast zones consist of long and straight plates running with their length direction parallel to one edge of the substrate, whereas the high relative contrast zones consist of shorter and bent plates running roughly at 45° to the substrate edges.

Fig. 5.2(a) presents the secondary electron (SE) image acquired from a sample with gradient film thickness. This sample was prepared by controlled polishing to obtain an increasing polishing depth from the left to the right, as schematically illustrated in Fig. 5.2(b). The right side and the left side of Fig. 5.2(a) represent the microstructure near the film surface and that deep inside the film, respectively. It can be seen from Fig. 5.2(a) that the thin film has an overall plate-like microstructure, with the plates thickening from the film interior to its surface. Although the polishing depth of the film changes smoothly, an abrupt change in the plate thickness occurs without any transition zone. This indicates a complete change of microstructural constituents or phases along the film thickness.

Fig. 5.2(c) and Fig. 5.2(f) present the respective Kikuchi patterns acquired from one coarse plate (with high brightness, in the dotted blue square of Fig. 5.2(a)) and one thin plate (with high brightness, in the dotted yellow square of Fig. 5.2(a)). Obviously, the measured patterns bear intrinsic differences, as arrowed in Fig. 5.2(c) and Fig. 5.2(f). Fig. 5.2(d) and Fig. 5.2(e) demonstrate that each of the two overlapping patterns appearing in Fig. 5.2(c) can be indexed with the tetragonal crystal structure of the NM martensite. This means that one coarse plate in the top surface layer of the film contains two NM martensite variants, which is coherent with the BSE observations shown in Fig. 5.1(b).

Fig. 5.2. (a) SE image showing the plate-like microstructure of an electropolished sample with gradient film thickness. (b) Schematic illustration of the sample thickness change from the left side to the right side, produced by gradual electrolytic polishing. (c-e) Kikuchi patterns acquired from one coarse plate with high brightness in the dotted blue square of Fig. 5.2a. The mixed patterns in Fig. 5.2c are indexed as two NM variants (Fig. 5.2d and Fig. 5.2e) with Euler angles of (156.7°, 175.2°, 38.1°) and (163.8°, 96.7°, 44.7°), respectively. (f) Kikuchi pattern acquired from one fine plate with high brightness in the dotted yellow square of Fig. 5.2a. (g and h) Calculated Kikuchi patterns using the monoclinic superstructure of 7M martensite and the tetragonal structure of NM martensite, respectively. Note that in the latter case large mismatches with the measured Kikuchi pattern appear.

Fig. 5.3 (a) SE image of the as-deposited thin films after slight electrolytic polishing. (b) High-magnification BSE image of the squared area Z1 in Fig. 5.3(a). The insets represent the crystallographic orientations of the major and minor lamellar variants. V1 and V2 are the major and minor variants in one martensite plate; V3 and V4 are the major and minor variants in the adjacent martensite plate to the left.

Fig. 5.4 presents an example of the Kikuchi line pattern acquired from one martensite plate with high brightness in the high relative contrast zone, together with the calculated patterns using the tetragonal NM martensite structure. As the in-plate lamellar variants are too fine, beyond the resolution of the present EBSD analysis system, we could not obtain a single set of Kikuchi lines from one variant.
For each acquisition, there are always mixed patterns from two neighboring variants in one image, as displayed in Fig. 5.4(a). However, the high intensity reflection lines belonging to the two different variants can be well separated in the image, as outlined by the green and red triangles in Fig. 5.4(b). By comparing Figs. 5.4(c) and 5.4(d) with Fig. 5.4(a), perfect matches between the acquired Kikuchi lines for the two variants and the calculated ones using the tetragonal NM martensite structure are evident.

Fig. 5.4. (a) Kikuchi line pattern acquired from one martensite plate in the high relative contrast zone (Z2) in Fig. 5.3a. (b) Indication of the high intensity reflection lines from two adjacent variants, 1 (the green triangle) and 2 (the red triangle). (c-d) Calculated Kikuchi line patterns using the tetragonal structure for variants 1 and 2, respectively.

Fig. 5.5 The TEM bright field image of the NiMnGa thin films: (a) the low relative contrast zone, (b) the high relative contrast zone. The yellow dotted lines represent the traces of the inter-plate interfaces; the blue dashed lines represent the traces of the intra-plate interfaces.

Fig. 5.6 The TEM bright field images in the high relative contrast zones of the NiMnGa thin films (a-b) and the corresponding diffraction patterns of P1 and P3 (c-d). The yellow dotted lines represent the traces of the inter-plate interfaces; the blue dashed lines represent the traces of the intra-plate interfaces. Here the electron beam is parallel to the <110>Tetr zone axes of P1 and P3.

Fig. 5.7 The TEM bright field images in the high relative contrast zones of the NiMnGa thin films (a-b) and the corresponding diffraction patterns of P1 and P2 (c-d). Here the electron beam is parallel to the <201>Tetr zone axes of P1 and P2.

Figs. 5.6 and 5.7 show the TEM bright field images from the same area displayed in Fig. 5.3(a). The individual plates were identified to be composed of two alternately distributed orientation variants (lamellae) with different thicknesses, as schematically illustrated in Figs. 5.8(a) and 5.8(b). The clustered martensite plates running parallel to the substrate edges (Z1) represent one kind of variant group, whereas those running roughly at 45° to the substrate edges (Z2) represent another kind. According to the orientations of the thicker variants in the plates, two sets of orientation plates can be distinguished, i.e. the set of plates A, B, C and D in the low relative contrast zones and the set of plates 1, 2, 3 and 4 in the high relative contrast zones. Thus, there are in total eight orientation variants of the NM martensite in one variant group. For easy visualization, they are denoted as V1, V2, …, V8 in Fig. 5.8(a) and SV1, SV2, …, SV8 in Fig. 5.8(b), where the symbols with odd subscripts correspond to the thicker (major) variants and those with even subscripts to the thinner (minor) variants. Taking the basis vectors of the MgO(100) monocrystal substrate as the sample reference frame, as defined in the "Experimental" part, the measured orientations of the NM variants in the two relative contrast zones are presented in the form of {001}Tetr, {110}Tetr, {010}Tetr and {112}Tetr pole figures, as displayed in Figs. 5.8(c) and 5.8(d).

Fig. 5.8. (a-b) Schematic illustration of the geometrical configuration of the NM plates. V1, V2, …, V8 and SV1, SV2, …, SV8 denote the eight orientation variants in the low relative contrast zones (Z1) and in the high relative contrast zones (Z2), respectively.
(c-d) Representation of the measured orientations of the in-plate lamellar variants in the form of {001}Tetr, {110}Tetr, {010}Tetr and {112}Tetr pole figures. The orientations of the intra- and inter-plate interface planes are respectively indicated by light green solid squares and black dotted rectangles in the {010}Tetr and {112}Tetr pole figures.

The interlamellar interfaces coincide with the (112)Tetr twinning plane (roughly corresponding to the (101) planes of the austenite [44, 55, 88, 89]) and are perfectly coherent, as indicated by the light green solid squares in Figs. 5.8(c) and 5.8(d). According to the {112}Tetr pole figure of the low relative contrast zones shown in Fig. 5.8(c), the interlamellar interfaces in each plate are inclined roughly 47.5° toward the substrate surface, and two interlamellar interfaces from adjacent plates are positioned symmetrically either to the (010)MgO plane (e.g. plates A and B) or to the (100)MgO plane (e.g. plates B and C).

Fig. 5.9. (a) Atomic correspondences of the eight lamellar variants in four NM plates for the low relative contrast zones (viewed from the top of the as-deposited thin films). Only Mn and Ga atoms are displayed. (b) 3D atomic correspondences of two alternately distributed lamellae with the (112)Tetr compound twin relationship in one NM plate. The coherent twinning planes are outlined in green. The two marked angles are the dihedral angle between the (110)Tetr plane of the major variants and the MgO substrate, and the dihedral angle between the (001)Tetr plane of the minor variants and the MgO substrate. (c) 3D configuration of two adjacent NM plates using the in-plate lamellar variants as blocks to illustrate the plate interface misfits.

Fig. 5.10. (a) Atomic correspondences of the eight lamellar variants in four NM plates for the high relative contrast zones (viewed from the top of the as-deposited thin films). Only Mn and Ga atoms are displayed. (b) 3D atomic illustration of the combination of two lamellar variants ((112)Tetr compound twins) in plates 1 and 2. The twinning planes between the lamellar variants are colored in green. The two marked angles are the dihedral angle between the (001)Tetr plane of the major variants and the MgO substrate, and the dihedral angle between the (110)Tetr plane of the minor variants and the MgO substrate. (c) 3D construction of two adjacent plates showing the plate interface misfits.

Fig. 5.11 presents a selected SE image of 7M martensite plates with full-featured microstructural constituents. It is evident in Fig. 5.11(a) that the individual martensite plates are clustered into groups exhibiting either low relative contrast (Group 1) or high relative SE contrast (Group 2 and Group 3), as in the case of the NM martensite. The low relative contrast zones contain long straight plates parallel to one of the substrate edges, whereas the high relative contrast zones contain shorter plates running roughly at 45° to the substrate edges.

Fig. 5.11. (a) SE image of plate groups of 7M martensite with low relative contrast (Group 1) and high relative contrast (Group 2 and Group 3). (b) Magnified image of individual plates belonging to Group 2. Note that the traces of the inter-plate interfaces have three distinct spatial orientations, as highlighted by the dotted yellow lines (Type-II twin interfaces), the dotted green lines (Type-II twin interfaces), and the solid black lines (Type-I twin interfaces).

Fig. 5.12. (a and b) Illustration of 7M martensite plates with four orientation variants in Group 1 (denoted V1, V2, V3, V4) and Group 2 (denoted VA, VB, VC, VD). The insets show the corresponding microstructures in the same zones.
(c) Representation of the individually measured orientations of the 7M variants in Group 1 and Group 2 in the form of $\{2\,0\,20\}_{mono}$, $\{2\,0\,\overline{20}\}_{mono}$ and $\{0\,4\,0\}_{mono}$ pole figures.

The same situation also occurred in the identification of the constituent variants in the low relative contrast zones, e.g. a pair of the $(2\,0\,20)_{mono}$ variants (V1 and V4, or V2 and V3) was previously taken as one single b-variant. Such a discrepancy in specifying the number of variants in one plate group arises from the fact that, in the other works, the crystallographic orientations of the constituent variants in thin films were determined by conventional X-ray diffraction (XRD); the limited spatial resolution of this technique has prevented differentiating the corresponding $(2\,0\,20)_{mono}$, $(2\,0\,\overline{20})_{mono}$ and $(0\,4\,0)_{mono}$ planes of the variant pairs in the pole figures.

For the low relative contrast zone (Group 1), it is seen from Fig. 5.13(a) (left) that the Type-I twin interfaces are nearly perpendicular to the substrate surface and that their interface traces on the film surface are parallel to one another. Similar cases can be found for the Type-II twin interfaces. As shown in Fig. 5.13(a) (middle), the two Type-II twin interfaces that have parallel traces on the film surface intersect the substrate surface at +85° (V3 and V4) and -85° (V1 and V2), respectively. Apparently, the four Type-I and Type-II twin interfaces are oriented differently through the film thickness, but they have the same interface trace orientation on the film surface. As for the compound twin interfaces, the interfaces between adjacent variants are all parallel to the film surface, as shown in Fig. 5.13(a) (right). That is why no compound twin interfaces could be observed by examining solely the film surface microstructure in the present work. Such compound twin interfaces have been detected by cross-section SEM observation [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF].

For the high relative contrast zone (Group 2), it is seen from Fig. 5.13(b) (left) that the Type-I twin interfaces between variants VA and VC and those between variants VB and VD possess roughly the same orientation in the film. They incline at about 45° to the substrate surface, and their traces on the film surface share a single orientation. For the Type-II twin interfaces, as shown in Fig. 5.13(b) (middle), although the interface between VA and VB and that between VC and VD are also inclined at about 45° to the substrate surface, their interface traces on the film surface do not possess the same orientation: one trace (VA and VB) lies 37.3° from the substrate edge and the other (VC and VD) about 15° further, so that the angle between the two differently oriented Type-II twin interface traces on the film surface is about 15° (as shown in Fig. 5.11(b)), in accordance with the results reported in the literature [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. As for the compound twin interfaces, the interface between VA and VD and that between VB and VC possess roughly the same orientation (Fig. 5.13(b), right).
They are nearly perpendicular to the substrate surface, and their traces on the film surface are nearly parallel to one another. It is found that the bending of the martensite plates in the high relative contrast zone is associated with the orientation change from VA to VD or from VB to VC, which results in a change of the plate interface from a Type-I twin interface to a Type-II twin interface, or vice versa.

Fig. 5.14. (a) Illustration of the orientation relationship between the NiMnGa austenite and the MgO substrate. (b) Schematic of the Pitsch orientation relationship between the cubic austenite lattice and the monoclinic 7M martensite lattice. Note that only the average monoclinic unit cell [73] is used for describing the 7M structure.

In the displacement gradient tensor, the diagonal terms $e_{ii}$ represent the dilatations along the $i$ directions, and the off-diagonal terms $e_{ij}$ represent the shears along the $i$ direction on the plane whose normal is the $j$ direction. It is seen that the largest deformation during the transformation occurs as a shear on the twinning plane and along the twinning direction of the 7M martensite. Here, for simplicity, we ignore the structural modulation of the 7M martensite, and the 7M superstructure is reduced to an average monoclinic unit cell that corresponds to one cubic unit cell [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF].

Fig. 5.15. Displacement gradient tensor components of (a-b) Type-I twins and (c-d) Type-II twins in the low relative contrast zone, averaged with the volume fractions of the two constituent variants.

Fig. 5.16. Illustration of the shear deformations from austenite to 7M martensite in the low relative contrast zone with respect to the substrate, viewed on (a) the (0 0 1)MgO plane and (b) the (0 1 0)MgO plane.
For the Type-I twin pair, it is seen from Fig. 5.17(a) that the dilatation deformations can be accommodated to some extent through variant volume change, but they cannot be accommodated at one single volume ratio. Indeed, the e11 terms are nearly cancelled out at d1 = 0.4876, but the e33 term at d1 = 0.5935. Similarly, shear deformation accommodation exists but cannot be achieved at one single volume ratio, as shown in Fig. 5.17(b). The e13, e21, e23 and e31 terms are roughly cancelled at d1 = 0.4698, but the e12 and e32 terms at d1 = 0.5940. If we consider the e3j terms to be the most important components to accommodate, owing to the constraints from the rigid substrate, no volume ratio can be reached that cancels all of them: when the e31 component of the Type-I twin pair equals zero, the e32 component is -0.0111; when the e32 component equals zero, the e31 component is 0.0111. Therefore, the formation of Type-I twins in the high relative contrast zone is energetically costly.

Fig. 5.17. Displacement gradient tensor components of (a-b) Type-I twins and (c-d) Type-II twins in the high relative contrast zone, averaged with the volume fractions of the two constituent variants.

Fig. 5.18. BSE image of the NiMnGa thin films: (a) overview; (b) magnified view of area A; (c) magnified view of area B. The dotted lines are the variant group boundaries.

Fig. 5.18 presents the BSE images of the NiMnGa thin films after slight electrolytic polishing. Adjacent to the long strips (G5 and G6), there are several bright or dark polygonal zones, which are the corresponding high relative contrast zones. Figs. 5.18(b) and 5.18(c) are magnified views showing the martensite groups. As shown in Fig. 5.18(c), although the traces of the plate interfaces in G3 are close to those in G4, the image brightness of the two martensite variant groups (G3 and G4) differs, indicating that G3 and G4 possess different crystallographic features. As the contrast of a BSE image of a monophase microstructure with homogeneous chemical composition originates from the orientation differences of the microstructural components, the difference in contrast between two martensite variant groups indicates a difference in their crystallographic orientations.

Fig. 5.19. (0 0 4)Tetr pole figure of the NM martensite from the six variant groups, determined by EBSD.

Fig. 5.22. Pole figures of the NM martensite variants in the six variant groups.

The NM martensite variants are of plate morphology and are organized into two characteristic zones featured with low and high relative contrast in SEM secondary electron images. Local martensite plates with similar plate orientations are organized into variant groups. The low relative contrast zones consist of long straight plates running with their length direction parallel to the substrate edges, whereas the high relative contrast zones consist of shorter and bent plates with their length direction roughly at 45° to the substrate edges. SEM-EBSD analyses through the film depth further verified the co-existence of the three constituent phases: austenite, 7M martensite and NM martensite.
The NM martensite is located near the free surface of the film, the austenite just above the substrate surface, and the 7M martensite in the intermediate layers between the austenite and the NM martensite. Further EBSD characterization indicates that there are four distinct martensite plates in each variant group for both the NM and the 7M martensite. Each NM plate is composed of paired major and minor lamellar variants (major and minor in terms of thickness), i.e. (1 1 2)Tetr compound twins sharing a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation variants of 7M martensite and eight orientation variants of NM martensite in one variant group.

For the NM martensite, in the low relative contrast zones, the long and straight inter-plate interfaces between adjacent NM plates result from the configuration of the counterpart (1 1 2)Tetr compound twins, which have the same orientation combination and are distributed symmetrically with respect to the macroscopic plate interfaces. As there are no microscopic height misfits across the plate interfaces in the film normal direction, the relative contrasts of adjacent NM plates are not distinct in the SE image. However, in the high relative contrast zones, the asymmetrically distributed (1 1 2)Tetr compound twins in adjacent NM plates lead to a change of the inter-plate interface orientation. The pronounced height misfits across the inter-plate interfaces in the film normal direction give rise to surface reliefs, hence the high relative contrast between adjacent plates.

For the 7M martensite, both the Type-I and the Type-II twin interfaces are nearly perpendicular to the substrate surface in the low relative contrast zones, and the Type-I twin pairs appear with much higher frequency than the Type-II twin pairs. In the high relative contrast zones, by contrast, there are two Type-II twin interface trace orientations and one Type-I twin interface trace orientation, and the Type-II twin pairs are more frequent than the Type-I twin pairs. The inconsistent occurrences of the different types of twins in the different zones originate from the substrate constraint.

The crystallographic calculation also indicates that the martensitic transformation sequence is from austenite to 7M martensite and then to NM martensite (A → 7M → NM). The present study intends to offer deep insights into the crystallographic features and the martensitic transformation of epitaxial NiMnGa thin films.

In conclusion, NiMnGa thin films with both columnar and continuous microstructures were successfully fabricated by DC magnetron sputtering, after optimization of sputtering parameters such as substrate temperature, sputtering power, substrate, seed layer, film composition and thickness. X-ray diffraction analysis demonstrates that three different phases, i.e. austenite, 7M modulated martensite and NM martensite, co-exist in the as-deposited epitaxial Ni50Mn30Ga20 thin films. The austenite phase has a cubic L21 crystal structure (Fm-3m, No. 225) with lattice constant aA = 0.5773 nm. The 7M martensite phase has an incommensurate monoclinic crystal structure (P2/m, No. 10) with lattice constants a7M = 0.4262 nm, b7M = 0.5442 nm, c7M = 4.1997 nm and β = 93.7°. The NM martensite phase has a tetragonal crystal structure (I4/mmm, No. 139) with lattice constants aNM = 0.3835 nm and cNM = 0.6680 nm.
By combining the XRD and EBSD orientation characterizations, it is revealed that the as-deposited microstructure is mainly composed of the tetragonal NM martensite at the film surface and of the monoclinic 7M martensite beneath the surface layer. For the NM martensite, there are two characteristic zones featured respectively with low and high relative SE image contrast. The NM martensite in the low relative contrast zones consists of long straight plates running with their length direction parallel to the substrate edges, whereas the NM martensite in the high relative contrast zones consists of shorter and bent plates with their length direction roughly at 45° to the substrate edges. Each NM plate is composed of paired major and minor lamellar variants, i.e. (1 1 2)Tetr compound twins, having a coherent interlamellar interface. There are in total eight orientation variants in one variant group. Indeed, in the low relative contrast zones, the long and straight inter-plate interfaces between adjacent NM plates result from the configuration of the counterpart (1 1 2)Tetr compound twins that have the same orientation combination and are distributed symmetrically with respect to the macroscopic plate interfaces. As there are no microscopic height misfits across the plate interfaces in the film normal direction, the relative contrasts of adjacent NM plates are not distinct in the SE image. However, in the high relative contrast zones, the asymmetrically distributed (1 1 2)Tetr compound twins in adjacent NM plates lead to a change of the inter-plate interface orientation. The pronounced height misfits across the inter-plate interfaces in the film normal direction give rise to surface reliefs, hence the high relative contrast between adjacent plates. The plate-like microstructures of the 7M martensite are composed of two distinct kinds of plate groups as well. Each 7M martensite plate contains one orientation variant, and there are four orientation variants in one plate group. The inter-plate interfaces are either Type-I or Type-II twin interfaces, and the Type-I twin pairs appear with much higher frequency, as compared with that of the Type-II twin pairs.

Table 2.1. List of NiMnGa targets with various compositions for deposition.
| No. | Nominal composition of the target (at.%) | Phase of target at room temperature | Targeted composition of thin film (at.%) | Targeted phase of thin film at room temperature |
| A | Ni46Mn32Ga22 | A | Ni50Mn30Ga20 | 7M |
| B | Ni48Mn30Ga22 | A | Ni50Mn28Ga22 | 5M |
| C | Ni50Mn28Ga22 | 5M | Ni52Mn26Ga22 | 7M |
| D | Ni50Mn30Ga20 | 7M | Ni52Mn28Ga20 | NM |

2.1.2 Thin film deposition

The sputtering technique also produces films with better adhesion than evaporation techniques do. Substrate heating can be applied by attaching a heater to the sample holder for epitaxial depositions; the thermal energy imparted to the deposited atoms increases their mobility, so that the growing film attains a nearly perfect atomic stacking during deposition. Several deposition methods have been developed to deposit thin films with thicknesses ranging from several nanometers to several microns. Compared with other deposition techniques, the magnetron sputtering process has a number of advantages. First of all, the plasma can be sustained at lower argon pressures, resulting in a low level of impurities.
In addition, since the sputtering mechanism is not a thermal evaporation process, even the materials with the highest melting points can be deposited at ambient temperature. Usually, the deposited alloy films have a composition close to that of their target.

Table 2.2. The selection of coordinate systems.
| Phase | Crystal structure | Lattice parameters | Coordinate system |
| Austenite | Cubic (L21) | … | Crystal coordinate system … |

Table 2.3. Characteristic features of the different twinning types.
| Twinning element | Type I twin | Type II twin | Compound twin |
| Number of 180° rotation axes | 1 | 1 | 2 |
| Twinning plane K1 | Rational | Irrational | Rational |
| Conjugate twinning plane K2 | Irrational | Rational | Rational |
| Twinning shear direction η1 | Irrational | Rational | Rational |
| Conjugate twinning direction η2 | Rational | Irrational | Rational |

Table 2.4. Orientation relationships of the martensitic transformation.
| Relationship | Austenite plane | Austenite direction | Martensite plane | Martensite direction | Ref. |
| Bain | (0 0 1)A | … | (0 0 1)M | … | … |

Chapter 3. Fabrication of NiMnGa thin films

3.1 Introduction

In spite of the efforts made recently to use sputtering technology for the preparation of thick Ni-Mn-Ga films (from the sub-micron to the micron range, < 2 µm), some open questions remain on the scientific and technical sides of producing NiMnGa thin films, such as the control of the film composition during sputtering and the attainability of ultra-thick epitaxial films (several microns thick). The present work is dedicated to tackling these issues by optimizing the substrate parameters and the deposition parameters. Two substrate materials were used in the present work: one is thermally oxidized silicon and the other is monocrystalline MgO [START_REF] Pitsch | Der Orientierungszusammenhang zwischen Zementit und Austenit[END_REF]. It is known that, compared with MgO, a Si substrate is not optimal for producing qualified epitaxial NiMnGa films in terms of the monocrystallinity of the austenite at high temperature, but it is of low cost. It allows the optimum deposition parameters, whose determination requires a large number of trials, to be found at a relatively low cost. Thus, in the present work, the Si substrates were used to optimize the primary deposition parameters, and the MgO substrates were used to finally produce the qualified films with a limited number of tests.

Table 3.1. Parameters of the NiMnGa thin films deposited on Si with various sputtering powers.
| Sample | Power (W) | Time (h) | Thickness (µm) | Sputtering rate (nm/s) | Ni (at.%) | Mn (at.%) | Ga (at.%) |
| Si 1 | 30 | 6 | 1.142 | 0.059 | 54.85 | 23.92 | 21.23 |
| Si 2 | 50 | 3.5 | 1.142 | 0.091 | 55.36 | 24.22 | 20.42 |
| Si 3 | 75 | 1.5 | 0.714 | 0.132 | 55.76 | 23.92 | 20.02 |
| Si 4 | 100 | 2 | 1.285 | 0.159 | 55.17 | 24.97 | 19.87 |

Table 4.1. Calculated lattice constants of austenite, 7M martensite and NM martensite for the as-deposited Ni-Mn-Ga thin films.
| Phase | Space group | Crystal system | a (nm) | b (nm) | c (nm) | Angles |
| 7M | P2/m (No. 10) | Monoclinic | 0.4262 | 0.5442 | 4.199 | α = γ = 90°, β = 93.7° |
| NM | I4/mmm (No. 139) | Tetragonal | 0.3835 | 0.3835 | 0.6680 | α = β = γ = 90° |
| Austenite | Fm-3m (No. 225) | Cubic L21 | 0.5773 | 0.5773 | 0.5773 | α = β = γ = 90° |

4.3.2 Determination of crystallographic texture

… the (1 2 1)Tetr planes are inclined to the substrate surface at 42.57° and 47.24°, and the (0 1 0)Tetr planes at 48.29° and 41.68°, respectively.
Table 5.1. Minimum rotation angle and rotation axis between two lamellar variants connected by an inter-plate interface in the low relative contrast zones.
| Neighboring plates | Variant pair | Misorientation angle ω (°) | Rotation axis |
| A/B | V1-V3 | 82.6518 | 3.4° from the <110>Tetr direction |
| A/B | V2-V4 | 13.6865 | 5.4° from the <031>Tetr direction |
| C/D | V5-V7 | 83.0965 | 3.7° from the <110>Tetr direction |
| C/D | V6-V8 | 13.1146 | 4.6° from the <301>Tetr direction |
| B/C | V3-V5 | 83.1537 | 3.6° from the <110>Tetr direction |
| B/C | V4-V6 | 14.4210 | 4.0° from the <301>Tetr direction |
| D/A | V7-V1 | 83.0093 | 3.7° from the <110>Tetr direction |
| D/A | V8-V2 | 14.4189 | 3.1° from the <301>Tetr direction |

Table 5.2. Minimum rotation angle and rotation axis between two lamellar variants connected by an inter-plate interface in the high relative contrast zones.
| Neighboring plates | Variant pair | Misorientation angle ω (°) | Rotation axis |
| 1/2 | SV1-SV3 | 82.9699 | 3.9° from the <110>Tetr direction |
| 1/2 | SV2-SV4 | 14.1760 | 2.1° from the <301>Tetr direction |
| 3/4 | SV5-SV7 | 82.7963 | 3.4° from the <110>Tetr direction |
| 3/4 | SV6-SV8 | 14.8473 | 3.6° from the <031>Tetr direction |
| 2/3 | SV3-SV5 | 82.5782 | 4.7° from the <110>Tetr direction |
| 2/3 | SV4-SV6 | 12.2859 | 3.5° from the <031>Tetr direction |
| 4/1 | SV7-SV1 | 82.8110 | 5.0° from the <110>Tetr direction |
| 4/1 | SV8-SV2 | 11.8090 | 9.9° from the <031>Tetr direction |

With the correctly determined crystallographic orientations (represented with three Euler angles in Bunge's notation [92]) of the four 7M variants in each plate group, the orientation relationships between neighboring variants can be further calculated. Table 5.3 and Table 5.4 show the calculated misorientations of adjacent variants in the low and high relative contrast zones, expressed in terms of rotation axis and angle. Detailed analysis has revealed that each pair of adjacent variants can be identified as either a Type-I, a Type-II or a Compound twin, depending on whether the rotation axis is close to the normal of a rational plane or/and to a rational direction in the monoclinic crystal basis. It is seen from Table 5.3 that, in the low relative contrast zone, V1:V3 and V2:V4 belong to Type-I twins, V1:V2 and V3:V4 to Type-II twins, and V1:V4 and V2:V3 to Compound twins; Table 5.4 shows that, in the high relative contrast zone, VA:VC and VB:VD belong to Type-I twins, VA:VB and VC:VD to Type-II twins, and VA:VD and VB:VC to Compound twins.

Table 5.3. Calculated misorientations between adjacent 7M variants in Group 1, expressed as rotation axis (d) and angle (ω) in the orthonormal crystal reference frame; each pair admits two equivalent angle-axis descriptions. The Euler angles of the four variants V1, V2, V3 and V4 are respectively (137.94°, 46.93°, 90.45°), (221.82°, 133.15°, 270.46°), (41.68°, 133.18°, 270.44°) and (318.32°, 46.80°, 90.50°).
| Variant pair | ω (°) | Rotation axis (d1, d2, d3) | Twin type |
| V1:V3 | 83.1337 | (-0.7342, -0.0010, -0.6789) | Type I |
| V1:V3 | 179.9229 | (0.4504, 0.7482, -0.4871) | Type I |
| V2:V4 | 82.8455 | (-0.7339, -0.0007, -0.6791) | Type I |
| V2:V4 | 179.9432 | (0.4493, 0.7498, -0.4856) | Type I |
| V1:V2 | 96.7436 | (0.7265, -0.0005, 0.6871) | Type II |
| V1:V2 | 179.9519 | (-0.5136, 0.6643, 0.5430) | Type II |
| V3:V4 | 97.2851 | (0.7254, 0.0002, 0.6882) | Type II |
| V3:V4 | 179.9806 | (0.5166, -0.6607, -0.5445) | Type II |
| V1:V4 | 179.3063 | (0.6837, 0.0036, -0.7297) | Compound |
| V1:V4 | 179.5858 | (-0.7297, -0.0060, -0.6837) | Compound |
| V2:V3 | 179.3435 | (0.6841, -0.0011, -0.7293) | Compound |
| V2:V3 | 179.8736 | (0.7293, 0.0057, 0.6841) | Compound |

Table 5.4. Calculated misorientations between adjacent 7M variants in Group 2, expressed as rotation axis (d) and angle (ω) in the orthonormal crystal reference frame. The Euler angles of the four variants VA, VB, VC and VD are respectively (48.43°, 136.75°, 95.28°), (178.19°, 92.35°, 2.78°), (271.81°, 87.65°, 182.78°) and (41.57°, 43.25°, 275.28°).
| Variant pair | ω (°) | Rotation axis (d1, d2, d3) | Twin type |
| VA:VC | 82.3098 | (-0.7414, 0.0195, -0.6707) | Type I |
| VA:VC | 178.5255 | (-0.4414, -0.7529, 0.4880) | Type I |
| VB:VD | 82.2390 | (-0.7476, -0.0277, -0.6635) | Type I |
| VB:VD | 177.9083 | (0.4364, 0.7534, -0.4917) | Type I |
| VA:VB | 97.8691 | (0.7295, 0.0049, 0.6839) | Type II |
| VA:VB | 179.5722 | (-0.5156, -0.6569, 0.5500) | Type II |
| VC:VD | 97.6586 | (0.7246, -0.0032, 0.6891) | Type II |
| VC:VD | 179.7170 | (0.5187, 0.6583, -0.5454) | Type II |
| VA:VD | 177.3972 | (0.6871, -0.0004, -0.7265) | Compound |
| VA:VD | 179.9432 | (0.7263, 0.0227, 0.6869) | Compound |
| VB:VC | 179.6341 | (0.6693, -0.0013, -0.7429) | Compound |
| VB:VC | 179.8470 | (0.7429, 0.0031, 0.6693) | Compound |

Table 5.5. Twinning elements of the 7M variants represented in the monoclinic crystal coordinate frame. K1 is the twinning plane, K2 the reciprocal or conjugate twinning plane, η1 the twinning direction, η2 the conjugate twinning direction, P the shear plane, and s the magnitude of shear.
| Twinning element | Type I twin (VA:VC, VB:VD; V1:V3, V2:V4) | Type II twin (VA:VB, VC:VD; V1:V2, V3:V4) | Compound twin (VA:VD, VB:VC; V1:V4, V2:V3) |
| K1 | (1 -2 10)mono | (1.1240 -2 8.7602)mono | (1 0 10)mono |
| K2 | (1.1240 -2 8.7602)mono | (1 -2 10)mono | (-1 0 10)mono |
| η1 | [11.0973 -10 0.8903]mono | [-10 10 1]mono | [-10 0 1]mono |
| η2 | [-10 10 1]mono | [11.0973 -10 0.8903]mono | [10 0 1]mono |
| P | (1 0.1161 11.1610)mono | (1 0.1161 11.1610)mono | (0 1 0)mono |
| s | 0.2537 | 0.2537 | 0.0295 |

Table 5.6. Comparison of the twin orientation relationships identified in the present work with those reported in the literature (columns: 14M-adaptive approach [44, 82, 91]; 7M modulated super cell; 7M average unit cell; notes). a14M-c14M twin: Type I twin with K1 = (1 0 1)14M …

Table 5.8. Published orientation relationships for the martensitic and intermartensitic transformations in NiMnGa alloys (columns: orientation relationship; austenite plane and direction; martensite plane and direction). Bain: (0 0 1)A || (0 0 1)M … [START_REF] Pitsch | Der Orientierungszusammenhang zwischen Zementit und Austenit[END_REF]

This work is supported by the Sino-French Cai Yuanpei Program (N° 24013QG), the Northeastern University Research Foundation for Excellent Doctor of Philosophy Candidates (No. 200904), and the French State through the program "Investment in the future" operated by the National Research Agency (ANR) and referenced by ANR-11-LABX-0008-01 (LabEx DAMAS). The work presented in this thesis was completed at LEM3 (former LETAM, University of Lorraine, France) and at the Key Laboratory for Anisotropy and Texture of Materials (Northeastern University, China).

… seed layer (100 nm Cr or Ag). The corresponding parameters are displayed in Table 3.2. The film thickness was verified with a stylus profiler (DEKTAK 150). An SEM (JEOL 6500F) equipped with EDS was used to analyze the microstructure and the composition of these films. A home-made X-ray diffraction machine (FSM) with a cobalt cathode source (λ = … nm) was used to determine the phase constituents of the produced films.

… for Compound twins, as is the case for bulk materials. For easy visualization of the through-film-thickness orientations of the twin interfaces in the two relative contrast zones, the …

With the crystallographic orientations of the NM martensite variants in the six variant groups, the crystallographic orientations of the 7M martensite variants can be calculated based on the published orientation relationship between the 7M and NM martensites [71].
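The angle-axis entries of Tables 5.3 and 5.4 follow directly from the measured Euler angles. A minimal Python sketch is given below for the V1:V3 pair of Table 5.3; the function names are ours, not from the thesis. The monoclinic symmetry operators (identity and the two-fold rotation about the b axis) are deliberately omitted, so the routine returns a single equivalent angle-axis description; recovering both rows listed per pair in the tables would require looping over those symmetry operators.

```python
import numpy as np

def g_bunge(phi1, Phi, phi2):
    """Orientation matrix from Bunge Euler angles (degrees), ZXZ convention."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1, c, s, c2, s2 = np.cos(p1), np.sin(p1), np.cos(P), np.sin(P), np.cos(p2), np.sin(p2)
    return np.array([
        [ c1*c2 - s1*s2*c,  s1*c2 + c1*s2*c, s2*s],
        [-c1*s2 - s1*c2*c, -s1*s2 + c1*c2*c, c2*s],
        [ s1*s,            -c1*s,            c   ]])

def angle_axis(dg):
    """Rotation angle (deg) and unit axis of a misorientation matrix.
    Note: the axis formula degenerates very close to 180 degrees."""
    ang = np.degrees(np.arccos(np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)))
    axis = np.array([dg[2, 1] - dg[1, 2], dg[0, 2] - dg[2, 0], dg[1, 0] - dg[0, 1]])
    n = np.linalg.norm(axis)
    if n > 1e-8:
        axis = axis / n
    return ang, axis

g1 = g_bunge(137.94, 46.93, 90.45)    # variant V1 (Table 5.3)
g3 = g_bunge(41.68, 133.18, 270.44)   # variant V3 (Table 5.3)
dg = g3 @ g1.T                        # misorientation V1 -> V3
print(angle_axis(dg))                 # should match one Table 5.3 entry (~83.1 deg or ~179.9 deg)
```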
Fig. 5.20 presents the (0 0 1)mono pole figures of the 7M martensite calculated from the orientations of the NM martensite variants. As shown in Fig. 5.20, there are also six 7M martensite variant groups, each containing four distinct variants, i.e. twenty-four 7M martensite variants in total.

Crystallographic orientation of the 7M martensite variants

The calculated crystallographic orientations of the 7M martensite variants are consistent with those acquired by EBSD in Section 5.3.3. This indicates that the intermartensitic transformation from 7M martensite to NM martensite follows the orientation relationship determined by Li et al. [71]. In order to compare the pole figures obtained by EBSD with those obtained by XRD (Fig. 4.3) in the present study and in previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], the (2 0 20)mono, (2 0 -20)mono and (0 4 0)mono pole figures of the 7M variants were constructed from the measured orientations.
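Such pole figures amount to a stereographic projection of crystal plane normals into the sample frame. A minimal sketch is given below, reusing the g_bunge helper from the misorientation sketch above; for the monoclinic cell the exact plane normal should be taken along the reciprocal-lattice vector, which is approximated here by a plain index vector in an orthonormal frame.

```python
import numpy as np

def pole_xy(g, n_crystal):
    """Stereographic projection (upper hemisphere) of a crystal plane normal
    for an orientation matrix g (sample -> crystal, Bunge convention)."""
    n = np.asarray(n_crystal, dtype=float)
    n /= np.linalg.norm(n)
    v = g.T @ n                 # plane normal expressed in the sample frame
    if v[2] < 0.0:              # fold the lower hemisphere upward
        v = -v
    return v[0] / (1.0 + v[2]), v[1] / (1.0 + v[2])

# Example: (0 0 1)mono pole of variant V1, using g_bunge() defined above.
print(pole_xy(g_bunge(137.94, 46.93, 90.45), [0.0, 0.0, 1.0]))
```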
173,609
[ "760084" ]
[ "178323" ]
00175129
en
[ "phys" ]
2024/03/05 22:32:07
2008
https://hal.in2p3.fr/in2p3-00175129/file/adc_publi_ieee_lpc.pdf
Gérard Bohner Roméo Bonnefoy Rémi Cornat Pascal Gay Jacques Lecoq Samuel Manen Laurent Royer A very-front-end ADC for the electromagnetic calorimeter of the International Linear Collider Keywords: ADC, pipeline, CMOS integrated circuit, comparator, differential, amplifier, ILC, CALICE, calorimeter, very-front-end electronics A 10-bit pipeline Analog-to-Digital Converter (ADC) is introduced in this paper, and the measurements carried out on prototypes produced in a 0.35 µm CMOS technology are presented. This ADC is a building block of the very-front-end electronics dedicated to the electromagnetic calorimeter of the International Linear Collider (ILC). Based on a 1.5-bit-per-stage architecture, it reaches 10-bit precision at a sampling rate of 4 MSamples/s with a consumption of 35 mW. The Integral and Differential Non-Linearities obtained are within ±1 LSB and ±0.6 LSB respectively, and the measured noise is 0.47 LSB at 68% C.L. The performance obtained confirms that the pipeline ADC architecture is suitable for the ECAL readout requirements. I. INTRODUCTION The Electromagnetic Calorimeter (ECAL) of the International Linear Collider (ILC) requires high-performance very-front-end readout electronics, which implies an ambitious R&D effort within the CALICE collaboration [START_REF] Brinkmann | Tesla Technical Design Report, part II[END_REF]. This integrated electronics has to process 10^8 channels, each delivering a signal with a 15-bit dynamic range to be measured with a precision of 8 bits. Moreover, the minimal cooling available for the embedded readout electronics imposes an ultra-low power budget limited to 25 µW per channel. This goal will be reached thanks to the timing structure of the ILC, which allows power pulsing with a duty ratio of 1% to be implemented. A key component of the very-front-end electronics is the Analog-to-Digital Converter (ADC), which has to reach a precision of 10 bits. In order to save die area and to limit the power consumption, one ADC will be shared by several channels. To fulfill this requirement, an ADC operating at a sampling rate of the order of one MSample/s has been designed. This paper presents the design and the performance of a 35-mW, 4-MSamples/s, 10-bit ADC. After a description of the 1.5-bit-per-stage architecture, the gain-2 amplifier and the comparator are detailed. Measurement results from the engineering samples fabricated in a 0.35 µm CMOS technology are then reported. II. 1.5-BIT/STAGE PIPELINE ADC ARCHITECTURE Among the various efficient ADC architectures developed and improved over the last decades, the pipeline architecture is well adapted to reach high resolution, speed and dynamic range with relatively low power consumption and a low component count. Typically, resolutions in the range of 10-14 bits at sampling frequencies up to 100 MSamples/s are achieved in CMOS technologies with a power below 100 mW [START_REF] Sumanen | A 10-bit 200-MS/s CMOS Parallel Pipeline A/D Converter[END_REF]-[START_REF] Kurose | 55-mW 200-MSPS 10bit Pipeline ADCs for Wireless Receivers[END_REF]. The block diagram of a conventional m-bit pipeline ADC [START_REF] Van De Plassche | Integrated analog to digital and digital to analog converters[END_REF] with n output bits per stage is given in Fig. 1. Each stage consists of a sample-and-hold (S/H), a low-resolution n-bit flash sub-ADC, a subtractor and a residue amplifier. Each stage converts its input voltage into the corresponding n-bit code and provides the amplified residual voltage to the next stage.
The complete m-bit system is built by cascading m such stages and adding an Error Correction Logic block which delivers the final digital code. The simplest architecture has a resolution of 1 bit per stage. In such a basic ADC, each sub-ADC is composed of one comparator, with the reference signal VREF fixed at the middle of the analog input range of the ADC, and the gain of the residue amplifier is 2. The algorithm for the i-th stage is as follows: if (VIn)i > VREF then bi = 1 and (VIn)i+1 = 2(VIn)i - VREF; else bi = 0 and (VIn)i+1 = 2(VIn)i, where (VIn)i is the input voltage of the i-th stage and bi the output bit of this stage. The accuracy of this type of ADC is limited by three main parameters [START_REF] Cho | Low power, low voltage analog to digital conversion techniques using pipelined architectures[END_REF]-[START_REF] Lee | A 15-bit 1-Ms/s digitally self-calibrated pipeline ADC[END_REF]:
• the interstage gain-2 accuracy, limited by the gain-bandwidth product of the amplifier and by the mismatch of its feedback components;
• the offset voltage of the comparators, caused by the mismatch of the components of their input stage;
• the thermal noise, which varies between samples. The noise contribution of the sampling in each stage, represented by the kT/C expression, is generally predominant, but the noise contribution of a later stage is effectively attenuated by the gain of the previous stages.
An architecture with a resolution of 1.5 bits per stage is a solution to attenuate the contributions of the gain error and of the comparator offset voltage to the non-linearity of the ADC. The Integral Non-Linearity (INL) obtained with algorithmic simulations is displayed in Fig. 2 and Fig. 3; the INL refers to the deviation, in LSB, of each individual output code from the ideal transfer-function value. Fig. 2 shows that, for an input dynamic range of 2 V, the INL of the ADC is not affected by an offset voltage Voff of the comparators of up to ±250 mV. Moreover, as reported in Fig. 3, the precision is less sensitive to the gain-2 accuracy with the 1.5-bit/stage architecture than with a basic 1-bit/stage one. This 1.5-bit/stage pipeline ADC architecture [9] involves two comparators per stage, with separate threshold voltages VLowTh and VHighTh, and two reference voltages VLowRef and VHighRef. A 2-bit word [b2 b1]i is delivered by each stage i. The corresponding algorithm, illustrated in the behavioural sketch just below, is given by:
• if (VIn)i < VLowTh then [b2 b1]i = [00] and (VIn)i+1 = 2(VIn)i
• if VLowTh < (VIn)i < VHighTh then [b2 b1]i = [01] and (VIn)i+1 = 2((VIn)i - VLowRef)
• if (VIn)i > VHighTh then [b2 b1]i = [10] and (VIn)i+1 = 2((VIn)i - VHighRef)
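A short behavioural model of the conversion and of its digital error correction is given below. It is a sketch, not the chip's actual logic: it uses the equivalent bipolar formulation with comparator thresholds at ±VR/4, which maps onto the single-ended thresholds and references above, and the function names are ours.

```python
VR = 1.0  # differential reference: input range is -VR .. +VR (2 Vpp)

def stage_15bit(v, offset=0.0):
    """One 1.5-bit stage: 2-bit output code in {0, 1, 2} plus amplified residue.
    'offset' shifts both comparator thresholds, emulating comparator offset."""
    if v < -VR / 4 + offset:        # [b2 b1] = 00
        return 0, 2 * v + VR
    elif v < VR / 4 + offset:       # [b2 b1] = 01
        return 1, 2 * v
    else:                           # [b2 b1] = 10
        return 2, 2 * v - VR

def convert(v, offsets=None):
    """9 pipelined 1.5-bit stages plus a final sign comparator, recombined
    with the 1-bit-overlap addition performed by the error-correction logic."""
    offsets = offsets if offsets is not None else [0.0] * 9
    code = 0
    for i in range(9):
        d, v = stage_15bit(v, offsets[i])
        code += d << (8 - i)        # overlapping binary weights 2^8 .. 2^0
    return code + (1 if v >= 0 else 0)

# Comparator offsets below VR/4 change per-stage codes but not the result:
print(convert(0.3137), convert(0.3137, offsets=[0.2] * 9))  # same 10-bit code
```

With ideal gain-2 stages, the two calls return the same code (672 here) even though the individual stage decisions differ, which is precisely the ±VR/4 offset tolerance (±250 mV for the 2 V range) shown in Fig. 2.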
III. SCHEMATIC OF ONE STAGE

The global schematic of one ADC stage with a resolution of 2 bits is given in Fig. 4. In order to reject common-mode noise, a fully differential structure has been adopted. In the very-front-end electronics of the electromagnetic calorimeter of the ILC, the differential mode is particularly relevant considering the presence on the chip of digital electronics which could induce common-mode noise into the analogue part. Since the sensor delivers a common-mode signal, a common-mode to differential-mode conversion is required before the digitization stage. This interface should be inserted as early as possible in the analogue processing of the signal, in order to take advantage of the signal-to-noise improvement brought by the differential mode. As represented in Fig. 4, the value of the reference signal added to the (VIn)i input signal is selected by the comparator outputs through switches. The circuit then operates on two clock phases: during the sampling phase, the input signal and the reference signal are summed through the 2C capacitors, while during the hold phase, the summed signal is amplified by a factor 2.

A. The gain-2 amplifier

The gain-2 amplifier is built with a differential amplifier and a capacitive feedback loop. Better matching is obtained with capacitors, so they have been preferred to resistors [11]. This matching is particularly important because it affects the precision of the gain 2, and therefore the linearity of the ADC. Thus, the feedback capacitors must be large enough to minimize both the kT/C thermal noise and the component mismatch, which is proportional to 1/√C. At the same time, a small die area and dynamic performance achieved with a low supply current must also be obtained. Capacitor values of 300 fF and 600 fF are therefore used. To match these components with the required precision of 0.1%, the capacitor array has been drawn using a common-centroid layout composed of six poly-poly capacitor unit cells of 300 fF each. Dummy switches have been introduced in order to counterbalance the parasitic capacitances introduced by the reset switches.

B. The operational amplifier

As represented in Fig. 5, the operational amplifier is based on a fully differential architecture with rail-to-rail inputs and outputs [START_REF] Gharbiya | Operational Amplifiers rail to rail input stages using complementary differential pairs[END_REF]-[START_REF] Kaiser | Introduction to CMOS analog integrated circuit[END_REF]. It includes a resistive Common-Mode FeedBack circuit (CMFB) to control the common-mode output voltage. The main characteristics of the open-loop amplifier are reported in Table I: despite a large gain-bandwidth product of 50 MHz, the consumption is limited to 1.9 mW with a 5 V power supply. The design of the differential amplifier must guarantee stable behaviour of the gain-2 feedback amplifier over process parameter fluctuations. Both the differential output signal and the common-mode output signal must be stable with a total load capacitance evaluated at 1 pF. For that purpose, a pole-zero compensation network is connected at the differential output and its parameters adjusted. Stability simulations then give a phase margin better than 66 degrees for both the differential and the common-mode signals. Statistical simulations with process parameter fluctuations give standard deviations of 3 degrees and 5 degrees for the differential and common-mode phase margins, respectively.

C. The comparator

As represented in Fig. 6, the comparator consists of a latched comparator [START_REF] Crolla | A fast latching current comparator for 12-bit A/D applications[END_REF] followed by a dynamic memory. This architecture is fully differential, with two differential inputs: one for the differential signal (VIn)i from the previous stage and one for the differential reference signal (for simplicity, only one differential input is represented in Fig. 6). The very high impedance of the load represented by the latch gives this comparator its very high sensitivity. The high gain is reached when the switch of the latch is open, causing the latch output to settle into the logical state corresponding to the sign of the differential input voltage, namely (VIn)i - VRef. The intrinsic sensitivity of this comparator is so high that the overall sensitivity is finally dominated by the input comparator noise. The characteristics of the designed comparator are summarized in Table II.
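As a back-of-envelope cross-check of the capacitor sizing discussed in Section III-A, the kT/C noise of one 300 fF sampling capacitor can be compared with the converter's LSB. This is a sketch: the 300 fF value and the 2 V / 10-bit scale are taken from the text, and room temperature is assumed.

```python
import math

k, T = 1.380649e-23, 300.0      # Boltzmann constant (J/K), temperature (K)
C = 300e-15                     # sampling capacitor of one unit cell (F)
lsb = 2.0 / 2**10               # 2 V input range, 10 bits -> ~1.95 mV

v_noise = math.sqrt(k * T / C)  # rms kT/C noise of one sampled capacitor
print(f"kT/C noise: {v_noise*1e6:.0f} uV rms = {v_noise/lsb:.3f} LSB")
# ~117 uV rms, i.e. ~0.06 LSB: each stage's sampling noise is well below the
# measured 0.47 LSB, and later stages are further attenuated by the gain 2.
```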
IV. MEASUREMENT RESULTS

A 10-bit ADC prototype has been fabricated using the Austriamicrosystems 0.35 µm 2-poly 4-metal CMOS process. The total area of the ADC is 1.35 mm² and the chip is bonded into a JLCC 44-pin package. The circuit is measured with a 5.0 V supply and a differential input swing of 2.0 Vpp, at a clock frequency of 4 MHz. The main performance is reported in Table III. A dynamic input range of 2.0 V is measured, with zero and gain errors of 0.5% and 0.8% of the full scale, respectively. The standard deviation of the noise is lower than 0.5 LSB at 68% C.L. The code occurrence histogram at the code value of 512 is given in Fig. 9: all the delivered codes fall within 512 ± 1 LSB, with about 90% of them at the centre value. At a sampling rate of 4 MSamples/s, the dissipation of the chip is 35 mW. With a time of 250 ns to convert one analogue signal, and considering the number of events stored per channel to be less than 5, the power consumption integrated over the ILC duty cycle of 200 ms is evaluated at 0.22 µW per channel. Assuming that the ON-setting time and the pipeline latency of the conversion are negligible when the ADC is shared by tens of channels, the integrated power dissipation of the ADC is then limited to 1% of the total power available for the very-front-end electronics of the ECAL. The static linearity curves of the 1.5-bit/stage ADC are given in Fig. 7 and Fig. 8. The Differential Non-Linearity (DNL) is displayed in Fig. 7.
This error is defined as the difference between an actual step width and the ideal value of one LSB; the measured DNL is within ±0.6 LSB.

V. CONCLUSION

A 10-bit, 4-MSamples/s, 35-mW CMOS ADC based on a pipelined 1.5-bit/stage architecture has been designed and tested. Its performance confirms that this architecture fulfills the ADC requirements of the ECAL at the ILC. Nevertheless, such results could not be obtained without the efficient comparator and amplifier presented in this paper. These building blocks have been optimized to fulfill the offset, sensitivity, speed and stability requirements with low power consumption. Their schematics are kept as simple as possible in order to improve integration and reliability. Bearing in mind that consumption is a key point, the next step will consist of a port to a 3 V supply.

Fig. 1. Conventional m-bit pipeline ADC architecture with n bits delivered per stage.
Fig. 2 and Fig. 3. 1.5-bit/stage ADC: variation of the INL as a function of the comparator offset voltage Voff (Fig. 2) and of the gain-2 accuracy (Fig. 3).
Fig. 4. Simplified schematic of one ADC stage: two comparators give the output bits of the stage and determine the reference signals subtracted from the input signal; the amplifier amplifies the residual voltage by a factor 2 and delivers it to the next stage.
Fig. 5. Schematic of the fully differential operational amplifier.
Fig. 8. Integral non-linearity measurement (INL in LSB versus output code).
Fig. 9. ADC code occurrence histogram at the code centre.

TABLE I. OPEN-LOOP AMPLIFIER CHARACTERISTICS
| Power supply | 5.0 V |
| Consumption | 1.9 mW |
| Gain-bandwidth product | 50 MHz |
| Differential phase margin | 66 ± 3 degrees @ 68% C.L. |
| Common-mode phase margin | 66 ± 5 degrees @ 68% C.L. |
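The static-linearity curves of Figs. 7 and 8 are typically extracted from a code histogram recorded while a slow ramp sweeps the full input range. The paper does not detail its bench procedure, so the following histogram-based routine is an assumption of the usual method, with function names of our own choosing.

```python
import numpy as np

def dnl_inl(codes, n_bits=10):
    """Histogram-based DNL and INL (in LSB) from codes recorded on a slow
    full-scale ramp; the two end codes are dropped, as is customary."""
    hist = np.bincount(codes, minlength=2**n_bits)[1:-1].astype(float)
    dnl = hist / hist.mean() - 1.0   # deviation of each step width (LSB)
    inl = np.cumsum(dnl)             # deviation of the transfer curve (LSB)
    return dnl, inl

# Synthetic example: an ideal ramp through an ideal 10-bit quantizer.
ramp = np.linspace(-1.0, 1.0, 200_000)
codes = np.clip(((ramp + 1.0) / 2.0 * 1024).astype(int), 0, 1023)
dnl, inl = dnl_inl(codes)
print(f"max |DNL| = {np.abs(dnl).max():.3f} LSB, max |INL| = {np.abs(inl).max():.3f} LSB")
```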
14,241
[ "829302" ]
[ "131", "131", "131", "131", "131", "131", "131" ]
01751340
en
[ "sdv" ]
2024/03/05 22:32:07
2017
https://theses.hal.science/tel-01751340/file/Vannier_Nathan.pdf
Christophe Mougel, Denis Tagu, Jean-Christophe Simon, Achim Quaiser, Marie Duhamel, Sarah Ben Maamar, Romain, Olivier, Guillaume, Maxime, Kevin H, Kevin T, Alice, Manon, Youn, Tiphaine, Jean, Kaïna, Armand, Hélène, Gorenka, Delphine, and the PhD students of the lab, Diana, Sarah

Acknowledgments

It is with only moderate relief that I write these acknowledgments before pressing the button that will close these three years of adventure. Moderate because, with the support and help of all those I am about to mention, these three years, although intense in work, were a real pleasure. It was great! I hope I will be lucky enough to meet people as kind throughout my next adventures. I would like to begin by thanking my wonderful supervisors, Cendrine Mony, Anne-Kristel Bittebière and Philippe Vandenkoornhuyse, for their precious help. Thank you for your availability, your kindness and above all your daily good humour. I think we formed a great team, and even if four stubborn people debating can make for complicated meetings, I believe we managed to let ideas emerge from it all. Thank you also, Cendrine, for all that magnesium available in your office (3 years of PhD, 5 of diet) and for the refreshing, if somewhat damp, field trips! Thank you also, Philippe, for your good humour and for the beers of scientific inspiration. Thank you also, Anne-Kristel, for your precious advice (even 700 km away you supported me a great deal), notably thanks to your bionic eyes capable of spotting the slightest change of font or misalignment in a slide.

Summary

For several years now, the scientific community has been sounding the alarm about the increased risks of food crises caused by climatic events that are becoming more frequent and more extreme (Battisti et al., 2009). Global agricultural productivity is thus decreasing even as, in parallel, the human population keeps growing steadily, with an estimated 10 billion people by 2100 (Lutz et al., 2001). To meet such a demand for food, agricultural production will have to almost double [START_REF] Cirera | Income distribution trends and future food demand[END_REF]. In order to increase production, the need for agricultural land will largely be met by converting natural ecosystems (forests) into cropland: the transformation of 10^9 hectares into agrosystems by 2050 is anticipated [START_REF] Tilman | Forecasting agriculturally driven global environmental change[END_REF], inducing a loss of biodiversity and of ecosystem functions and considerably affecting the amount of CO2 emitted to the atmosphere. The challenge for agriculture during this century is therefore to produce more efficiently and more sustainably in order to feed the planet while preserving natural ecosystems as much as possible. In agriculture, local environmental constraints have been circumvented by the use of inputs (fertilizers, pesticides, etc.). However, the availability of the resources needed to produce these fertilizers is limited, which raises the question of the sustainability of conventional agriculture (Duhamel & Vandenkoornhuyse 2013).
Over the last decades, the symbiotic communities of bacteria, fungi and Archaea have been recognized as major drivers of ecosystem functions at the global scale [START_REF] Van Der Heijden | Mycorrhizal fungal diversity determines plant biodiversity, ecosystem variability and productivity[END_REF][START_REF] Urich | A rchaea predominate among ammonia-oxidizing prokaryotes in soils[END_REF][START_REF] Falkowski | The microbial engines that drive Earth's biogeochemical cycles[END_REF] (van der Heijden et al., 2008), notably through the resistance and resource-acquisition functions they provide. Many researchers have thus begun to recognize the potential of plant-associated microorganisms to solve the agricultural productivity versus biodiversity-maintenance conundrum. The next step for the agriculture of tomorrow would therefore be to take the microbial communities of crops into account in order to optimize the functions they ensure. To reach this goal of using microbial communities for sustainable agriculture, it is necessary to grasp the rules that govern the functioning of these communities. This thesis aims to determine the assembly rules of plant-associated microbial communities and their influence on the phenotype of the host plant. It is organized around three chapters, each composed of articles in preparation, submitted or published.

Chapter 1: Ecological and evolutionary consequences of mutualist-induced plasticity

In natura, plants are colonized by a high diversity of microorganisms inside and outside their tissues. These microorganisms constitute their microbiota. This microbiota provides its host with key functions such as resource acquisition or resistance to biotic and abiotic stresses (Friesen et al., 2011; Müller et al., 2016), thereby influencing all aspects of a plant's life, from establishment to growth and reproduction (Müller et al., 2016). The composition of the plant microbiota is determined by a wide range of environmental factors, such as soil properties including pH or the availability of nutrients and water [START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). Indeed, plant-associated microorganisms are mostly recruited from the soil. Because of their sessile (i.e. immobile) lifestyle, plants must therefore cope with the heterogeneous abundance of microorganisms in the soil. Plants have thus developed various mechanisms to buffer the effects of environmental constraints, such as plastic modifications (i.e. the production of several phenotypes from a single genotype) (Bradshaw, 1965). These modifications are classically considered to be determined by the genome of the organism; yet epigenetic mechanisms (which regulate genome expression) and symbiotic microorganisms have repeatedly been identified as sources of phenotypic variation (Richards, 2006; Friesen et al., 2011; Holeski et al., 2012; Vandenkoornhuyse et al., 2015).

In the first chapter of this thesis, we studied the influence of these two sources of phenotypic variation on plant phenotype and adaptation. The literature review presented in this thesis (Article I) highlights the importance of the microbiota and of epigenetics as sources of phenotypic plasticity. Recent advances in epigenetics and on plant symbionts have demonstrated their major impact on plant survival, development and reproduction. These sources of plasticity allow plants to adjust to environmental constraints over short time scales (within the plant's lifetime) as well as long ones (i.e. evolutionary time). Moreover, the phenotypic variations thus produced can be integrated into the genome through genetic accommodation and thereby influence evolutionary trajectories. More generally, this review also indicates that interactions between the microbiota and epigenetics may exist and should therefore be studied alongside the respective effects of these mechanisms on plant survival and evolution.

In the second article presented in this chapter (Article II), we investigated the role of microorganisms in the response of clonal plants to environmental heterogeneity. Clonal plants are particularly plastic organisms. Indeed, their network structure (i.e. connected clonal individuals) allows propagation in space as well as the sharing of resources and information (i.e. physiological integration; Oborny et al., 2001). Many studies have shown that this network permits the development of two main types of plastic responses to resource heterogeneity: foraging (i.e. exploring space for resources) and specialization (i.e. division of labour for resource exploitation) (Slade & Hutchings, 1987; Dong, 1993; Hutchings & de Kroon, 1994; Birch & Hutchings, 1994). By transposition, the presence of symbiotic microorganisms in the soil could induce the same types of plastic responses, given their key role for the plant (see the reviews by Friesen et al., 2011; Müller et al., 2016). We tested this hypothesis experimentally by manipulating the heterogeneity of the presence of arbuscular mycorrhizal (AM) fungi and by measuring the foraging and specialization plastic response traits of the clonal plant Glechoma hederacea. In our experiments, we demonstrated 1) that Glechoma hederacea produces no specialization response and only a limited foraging response to the heterogeneity of AM fungi, and 2) that the architectural traits involved in plant foraging responses are not affected by the AM fungal species tested, in contrast to the resource-allocation traits linked to specialization responses. The weak or absent response of G. hederacea to the heterogeneous distribution of AM fungi suggests that plants could homogenize this distribution by transmitting the fungi from one ramet to another within the network. This hypothesis is supported by scanning electron microscopy observations revealing the presence of hyphae at the surface of stolon samples, and by cross-section observations showing fungus-like structures colonizing stolon cells.
Chapter 2: The heritability of the plant microbiota, toward the meta-holobiont concept

In this chapter, we experimentally tested the hypothesis of microorganism transmission between clonal ramets. This chapter also aims to assess the consequences of such a transmission mechanism for plant performance and for the understanding of plant functioning and evolution. Previous studies have demonstrated the transmission of endophytic symbionts through the colonization of the host plant's seeds. This type of transmission has mainly been described for the fungus Neotyphodium coenocephalum, which colonizes seeds and confers resistance to certain environmental stresses (Clay & Schardl, 2002; Selosse et al., 2004). This example is the only one described in the literature showing vertical transmission of microorganisms between plant generations, and it involves the transfer of only a single fungal species at a specific moment of the host's life. As highlighted in the first chapter, clonal plants could potentially transmit microorganisms, in addition to resources and information, to their clonal offspring, constituting an additional level of physiological integration.

In the first article presented in this chapter (Article III), we sought to demonstrate experimentally the transmission of microorganisms between clonal plant generations. In our experiments, we detected archaea, bacteria and fungi in the endosphere of G. hederacea mother ramets. Some of these microorganisms (fungi and bacteria) were detected in the roots and stolons of daughter ramets, demonstrating the heritability of part of the plant microbiota to its offspring. Moreover, the transmitted communities were similar between daughter ramets, although poorer in OTU numbers than the assemblage present in the roots of the initial mother ramet. This result demonstrates a filtering of microorganisms during transmission, through a decrease in richness and a homogenization of the transmitted communities, which constitute a "core microbiota". From a theoretical point of view, transmitting the microbiota to offspring is advantageous because it ensures the presence of beneficial symbionts and thus secures the quality of the offspring's environment (Wilkinson & Sherratt, 2001). Our results open perspectives on the ability of clonal plants to select and preferentially transmit specific microorganisms. However, the modalities of this filtering are not yet known and therefore represent an important research perspective of this thesis. The transmitted microorganisms could be filtered according to their dispersal abilities (i.e. their capacity to colonize internodes or stolons) or according to the functions they fulfil for the mother ramet (for example, resource acquisition or resistance to a stress).

This vertical transmission of microorganisms provides continuity to the partnership between the plant and its symbionts (Zilber-Rosenberg & Rosenberg, 2008). Recently, the understanding of host-symbiont interactions has evolved toward a holistic approach to the host and its symbionts. The holobiont theory provides a theoretical framework for the study of host-symbiont interactions (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). In the hologenome (i.e. the ensemble formed by the host genome and those of its symbionts), a microorganism can be likened to a gene in a genome. Just as genes are inherited during sexual reproduction, the heritability of microorganisms across host generations is a key parameter of hologenome evolution. In this context, the mechanism demonstrated here brings plants into the range of organisms that can be considered and understood as holobionts. Furthermore, the network structure of clonal plants suggests an additional level of complexity in holobiont assembly, since within this network holobionts can exchange or share part of their microbiota. In the second article of this chapter (Article IV), we propose an extension of the holobiont concept to clonal organisms organized in networks: the meta-holobiont. In addition to the theoretical foundations for understanding clonal networks as meta-holobionts, we propose in this article to transpose bodies of knowledge from the ecological theories of meta-communities and of ecological networks. The meta-community framework should make it possible to grasp the impact of microorganism transmission on the assembly and survival of microbial communities and on the management by plants of microorganisms of interest. The network-ecology framework should provide perspectives on optimizing the resilience and performance of the clonal network as a function of its structure and of the microorganism transmission that depends on this structure.

Chapter 3: Importance of the plant community context for the assembly of a host plant's microbiota

The composition of soil microbial communities depends on various environmental factors including, for example, soil type and its properties (e.g. pH, moisture, nutrient concentrations) [START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). Because plants are immobile, this pool of microorganisms available for recruitment largely determines their microbiota (Lundberg et al., 2012; [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]).
Beyond soil type and its properties, plants can also modify this pool of microorganisms through different mechanisms. Plants are able to recruit certain microorganisms preferentially from the soil (Vandenkoornhuyse et al., 2015) and to promote the most beneficial symbionts [START_REF] Bever | Preferential allocation to beneficial symbiont with spatial structure maintains mycorrhizal mutualism[END_REF] (Kiers et al., 2011). In addition, plant root exudates can increase or decrease the abundance of specific microorganisms depending on the plant species considered (for a review, see Berendsen et al., 2012). Plants are therefore expected to locally modify the composition of soil microbial communities. Following this hypothesis, the neighbourhood of a given plant (that is, the identity and abundance of neighbouring plants) should locally influence soil microbial communities and hence the microorganisms available for recruitment by other plants of the community. Thus, the neighbourhood of a given plant should influence the assembly of the focal plant's microbiota. However, this potential role of the plant community context in microbiota assembly has not been tested experimentally. Furthermore, different plant species [START_REF] Oh | Distinctive bacterial communities in the rhizoplane of four tropical tree species[END_REF][START_REF] Bonito | Plant host and soil origin influence fungal and bacterial assemblages in the roots of woody plants[END_REF] and ecotypes [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012) harbour specific microbiota, suggesting that plants can promote different microorganisms and thus have contrasting effects on soil communities. Considering the wide diversity of functions provided by plant symbionts (Friesen et al., 2011; Müller et al., 2016) and their effects on plant phenotype and performance highlighted in this thesis (Articles I and II), changes in microbiota composition should affect the performance of the focal plant.

In this chapter, which consists of one article in preparation, we experimentally tested the hypothesis of an effect of the plant community context on the assembly of a host plant's microbiota. We used a mesocosm design manipulating grassland plant communities. We sampled the soil fungal assemblages of these mesocosms using Medicago truncatula as a trap plant and mapped the plant cover over several years. In our experiment (Article V), we detected a significant effect of the abundance of neighbouring plant species on the composition of the microbiota colonizing the roots of M. truncatula. This effect depends on the neighbouring plant species, some plants having a positive and others a negative effect on the richness and evenness of the fungal assemblages colonizing M. truncatula. We thus demonstrated that a given plant can filter and determine the local fungal assemblages present and available for recruitment in the soil, as proposed by Valyi et al. (2016). Our results also indicate that this plant-neighbourhood effect is not limited to AM fungi but influences the whole fungal assemblage. The influence of the plant neighbourhood on the root microbiota occurs at a fine scale (i.e. 5 to 25 cm). Moreover, both past and present plant cover impact root fungal assemblages, suggesting that plants can leave a lasting imprint on the composition of soil fungal assemblages. Furthermore, we also demonstrated that changes in the composition of root-colonizing fungal assemblages ultimately impact the performance of the host plant M. truncatula: richer or more even fungal assemblages induced better performance in the host plants. Taken together, these results demonstrate the existence of plant-plant interactions mediated by fungi, ultimately impacting host plant fitness, and open perspectives on manipulating the plant root microbiota through the plant neighbourhood with the aim of improving productivity.

This work has thus improved our understanding of the modalities of microbiota assembly and demonstrates the existence of a plant-neighbourhood imprint on the composition of microbial communities. A second major result is the demonstration of the heritability of a fraction of this microbiota, most likely a transmitted microbiota essential to the development of its host. This work opens new research opportunities that should lead to a better understanding of the plant microbiota, its assembly, its functions and its influence on the host phenotype. Beyond the conceptual reflections prompted by this thesis, this work raises, from an applied point of view, the question of the place of the plant neighbourhood in agriculture as a contributor to the diversity of the reservoir of symbiotic microorganisms. A major perspective of this work is therefore the development of polycultures in which plant species induce facilitation effects via the promotion of microbiota diversity.

Table of contents

GENERAL INTRODUCTION
General context of the thesis
I. Scientific context
II. Structure of the PhD thesis
Literature review
1. Microbiota composition
1.1 Plants and microorganisms, a ubiquitous alliance
1.2 The root microbiota: diversity and composition
2. Endophyte-induced functions and phenotypic modifications
2.1 Resource acquisition
2.2 Resistance to environmental constraints
3. Plant microbiota assembly
3.1 Microbiota recruitment
3.1.1 The soil as a microbial reservoir
3.1.2 Rhizosphere
3.1.3 Endosphere
3.2 Controls of the plant over its microbiota
4. The plant is a "holobiont"
5. Objectives of the thesis
Chapter I: Consequences of mutualist-induced plasticity
I.1 Introduction
I.2 Article I: Epigenetic mechanisms and microbiota as a toolbox for plant phenotypic adjustment to environment
I.3 Article II: AM fungi patchiness and the clonal growth of Glechoma hederacea in heterogeneous environments
Chapter II: The heritability of the plant microbiota, toward the meta-holobiont concept
II.1 Introduction (Scientific context; Objectives of the chapter; Methods; Main results)
II.2 Article III: A microorganisms journey between plant generations
II.3 Article IV: Introduction of the metaholobiont concept for clonal plants
Chapter III: Importance of the plant community context for the individual plant microbiota assembly
III.1 Introduction (Scientific context; Objectives of the chapter)

General context of the thesis

I. Scientific context

Recent studies have raised the alarm on the risk of food crises becoming more and more frequent (Battisti et al., 2009) owing to extreme climatic events and the spread of pests. Agricultural productivity is thus expected to decline worldwide while, in parallel, the human population is continuously increasing and is expected to reach 10 billion people before 2100 (Lutz et al., 2001). By that time, it is estimated that agricultural production will need to almost double to meet world demand (Cirera & Masset). This need for space will be met by the conversion of natural ecosystems into cropland; according to estimates, about 10⁹ ha are likely to be lost by 2050 (Tilman et al., 2001), constituting a massive loss of biodiversity and ecosystem functions. The challenge for agriculture during this century is thus to produce enough food to support world population growth while limiting collateral damage to the environment.
In agriculture, local environmental constraints have long been overcome through the use of inputs (fertilizers, insecticides, etc.). However, the phosphorus used in fertilizers relies on high-quality rock phosphate, which is a finite resource, and major agricultural regions such as India, America and Europe are already dependent on P imports (Duhamel & Vandenkoornhuyse, 2013). The use of inputs has thus failed to solve the current challenges and has even impoverished soils. Over the last decades, the symbiotic communities of bacteria, archaea and fungi have been recognized as major drivers of global ecosystem functions (van der Heijden et al., 1998; Urich et al.; Falkowski et al., 2008; van der Heijden et al., 2008). Researchers have thus started to recognize the potential of plant-associated microorganisms to solve the conundrum of increasing agricultural productivity while limiting biodiversity loss. Together with root nodules, AM fungi are now recognized as one of the important symbioses that help feed the world (Marx, 2004, "The roots of plant-microbe collaborations") and are promising for the development of organic farming (Gosling et al.). Industry and farmers have even started to conduct experiments using microorganism inoculation to enhance crop resistance to pathogens or survival on nutrient-poor soils (de Vrieze, 2015, "The littlest farmhands"). The next step will be to engineer crop microbiota in order to optimize the functional characteristics they provide. For example, flowering time and biomass are suitable candidate plant traits for research on microorganism-provided services. To reach this goal, scientists will have to describe precisely the links between microbiota assembly and functioning and the resulting plant phenotype. In natura, plants are colonized by a high diversity of microorganisms both inside and outside their tissues, constituting the microbiota. These microorganisms affect plant health and growth in beneficial, harmful or neutral ways. Because of their sessile (i.e., immobile) lifestyle, plants rely on these microorganisms for many ecological needs. Indeed, plant-associated microorganisms provide important ecological functions to the plant, such as the acquisition of resources or resistance to biotic and abiotic stresses (Friesen et al., 2011; Müller et al., 2016), throughout the plant's life (development, survival and reproduction) (Müller et al., 2016). Considering the importance of the microbiota, the last decade has seen significant advances in the description of the factors shaping its assembly, and improvements in sequencing technologies allow ever finer descriptions of its composition.
The composition and dynamics of this microbiota are driven by a large range of environmental factors, such as soil properties including pH, nutrient and water availability (Berg & Smalla; Bulgarelli et al., 2012; Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). In addition, the complexity and versatility of plant-microorganism associations make it difficult to disentangle all the rules leading to a given microbiota composition. Determining the assembly rules of the plant microbiota is thus a current conundrum in ecology. Because they are immobile, plants have to cope with environmental heterogeneity, including the heterogeneous distribution of microorganisms in soil. Plants classically respond to environmental heterogeneity through phenotypic plasticity (i.e., the production of different phenotypes from a single genotype; Bradshaw, 1965). In nature, clonal plants represent up to 70% of plant species in temperate ecosystems (van Groenendael & de Kroon, 1990). Clonal plants are organized as a network of ramets (i.e., clonal individuals consisting of leaves and roots) connected by above- or below-ground modified stems (stolons and rhizomes, respectively) (see Harper, 1977 for a description of clonal structure). This network allows the sharing of resources and information (i.e., physiological integration; Oborny et al., 2001) and thus allows plastic responses to be produced at the scale of the network. These plants are able to modify the structure of the network to explore the environment (i.e., foraging), ensuring resource acquisition (e.g., light or nutrients; Slade & Hutchings, 1987; Dong, 1993; Hutchings & de Kroon, 1994; Birch & Hutchings, 1994). Considering that microorganisms can have large effects on plant morphology (for reviews see Friesen et al., 2011; Müller et al., 2016), they may alter the plant's ability to produce such plastic responses, although this has not been demonstrated. In addition, because microorganisms provide key functions to the plant, the expectation is that plants should develop plastic responses to ensure the availability of beneficial symbionts, as they do for resources. Despite the ubiquity of clonal plants, the possibility that they forage for microorganisms to construct their microbiota has never been investigated. A major mechanism ensuring the availability of beneficial microorganisms for macroorganisms is the transmission of microorganisms between individuals (investigated in Chapter 2). Two kinds of transmission have been theorized: vertical (i.e., direct transmission from parents to offspring) and pseudo-vertical (i.e., transmission through sharing the same environment) (Wilkinson, 1997). In plants, vertical transmission has only been evidenced through seed colonization, the best-known example being the transmission of the stress-protective endophyte Neotyphodium coenophialum to the descendants of several grass species (Clay & Schardl, 2002; Selosse et al., 2004). In clonal plants, the progeny is connected to the parent plant within the clonal network. Stuefer et al. (2004) suggested that in these plants, microorganisms could be transmitted in addition to resources and information.
Although this has been suggested and could represent a major factor structuring microbiota assembly, such transmission of microorganisms within the clonal network has never been investigated so far. In grassland ecosystems, many plant species coexist and interact in competitive or facilitative ways. A large number of studies, mainly focusing on Arbuscular Mycorrhizal Fungi (hereafter AM fungi), have investigated how the plant community can be influenced by the diversity of soil fungi (Grime et al., 1987; van der Heijden et al., 1998a; Klironomos et al., 2000; Bever et al.). Reciprocally, microcosm studies based on a focal-plant design have demonstrated that the composition of the surrounding plants influences the fungal microbiota associated with the focal plant (Johnson et al., 2004). Such microcosm-based studies have provided insights into the possible effect of the community context on microbiota assembly. However, microcosms are oversimplifications of the real plant community context, where many species coexist in a spatiotemporally dynamic system. In multispecific plant communities, Hausmann & Hawkes (2009) showed that the plant community composition influenced the composition of a focal plant's microbiota. As each plant harbors a specific microbiota (Hardoim et al.; Redford et al.; Bulgarelli et al., 2012; Lundberg et al., 2012), we can expect a plant to leave a specific fingerprint on the soil pool of microorganisms. A plant's neighborhood could thus determine its microbiota assembly (investigated in Chapter 3). Nevertheless, the spatial and temporal scales of this influence are still unclear, even though they could determine the fingerprint of the plant community on the soil pool of fungi. In addition, the above-described studies mainly focused on a particular type of plant symbiont, the AM fungi. Although many other plant endophytes are known to provide important functions (see Introduction section), the influence of the plant community on bacteria, fungi and archaea has not yet been described. My PhD thesis aimed to address hypotheses related to three current frontiers of research in ecology and plant science: (1) the importance of plant-microorganism interactions for plant phenotype and plasticity, (2) the understanding of the vertical heritability of the plant microbiota and (3) the ability of plants to manipulate the local symbiotic compartment and thus to leave a fingerprint on the pool of microorganisms. This work also aimed to question different theoretical concepts of ecology and evolution and to provide new perspectives on these concepts.

II. Structure of the PhD thesis

This thesis begins with a literature review of the existing knowledge on plant microbiota composition, assembly rules and dynamics, and on the links between this microbiota and plant phenotypes.
This state of the art aims to delimit the current knowledge and the gaps in our understanding of the plant microbiota, and to identify the objectives of this thesis work. The second part of this work consists of articles presented in three chapters addressing the questions raised by the literature review. Each chapter begins with a short presentation of the scientific context and the experimental procedure, as well as a brief summary of the main results. The results are then presented and discussed in the form of articles published, submitted or in preparation for peer-reviewed scientific journals.

Literature review

1. Microbiota composition

1.1 Plants and microorganisms, an ubiquitous alliance

Plants and microorganisms have long been considered separate but interacting organisms. This conception, dividing research into two fields, one on plants and one on microorganisms, has progressively been replaced by an integrated approach to plants and their associated microorganisms. This change in our understanding of the plant has come about slowly, through the numerous studies describing how plants are systematically associated with microorganisms. For example, all plant species that have been investigated were colonized by leaf endophytes (Arnold et al., 2003; Schulz & Boyle; Albrectsen et al., 2010; Gilbert et al.), and parallel studies also highlighted the difficulty of growing transplants of different species without bacteria and fungi (Hardoim et al.). Indeed, microorganisms colonize every known plant species and are found in various parts of these plants: roots, stems, xylem vessels, apoplast, seeds and nodules (Rosenblueth & Martinez-Romero, 2006; Figure 1). Microorganisms living within plant tissues (the endosphere) are called endophytes, and those living on plant surfaces (the episphere) are called epiphytes; together they constitute the plant microbiota. On the surface of leaves, prokaryotes are found at densities of 10⁶ to 10⁷ cells per cm² (Lindow & Brandl, 2003), and lab-based estimates of endophytic bacterial populations range from 10⁷ to 10¹⁰ cells per gram of tissue (Hardoim et al.). Among the endophytic microorganisms colonizing plants, Arbuscular Mycorrhizal Fungi (hereafter AM fungi) have been found in 80% of all terrestrial plants (Bonfante & Anca). Evidence of colonization of plant roots by AM fungi has been found in fossils dating from the early Devonian and the Ordovician, suggesting that these fungi were already associated with plants 400 to 460 million years ago (Remy et al., 1994; Redecker et al., 2000).
AM fungi are thought to be at the origin of major evolutionary events such as the colonization of land and the evolution and diversification of plant phototrophs (Selosse & Le Tacon; Heckman et al.; Brundrett; Bonfante & Genre). All these elements converge towards a view of the plant as an obligate host of numerous microorganisms. In the light of these observations, "[…] a plant that is completely free of microorganisms represents an exotic exception, rather than the (biologically relevant) rule […]" (Partida-Martinez & Heil, 2011). Symbiotic interactions between plants and microorganisms can be cooperative, neutral or antagonistic, forming a continuum of interactions ranging from mutualism to parasitism (an interaction being parasitic if it is disadvantageous for one of the organisms and mutualistic when beneficial for both partners; e.g., Rico-Gray). The level of intimacy (physical association) varies among symbionts, and interactions are considered symbioses when the physical association is strong. This is the case for many known associations in nature, such as corals with the alga Symbiodinium (along with other symbionts; Rosenberg et al., 2007), lichens (an association between fungi and photobionts; Spribille et al.), legumes with rhizobia, and, more generally, plants with Arbuscular Mycorrhizal Fungi. Such symbioses have long been considered beneficial to both organisms. Nevertheless, symbiont behavior cannot be considered binary (i.e., either parasitic or mutualistic) but rather more or less beneficial. This continuum in symbiont behavior has been demonstrated in different plant symbioses. For example, different AM fungal isolates can provide quantitatively different amounts of phosphorus in exchange for a given quantity of carbohydrates (Kiers et al., 2011), with cheaters exploiting their host whereas cooperators display 'fair-trade' behavior (see section 3.2.3 for more details). Furthermore, a given symbiont can behave as a parasite or a mutualist depending on the ecological context. Hiruma et al. demonstrated in an elegant experimental study that the endophyte Colletotrichum tofieldiae transfers phosphorus to the shoots of Arabidopsis thaliana via root-associated hyphae only under phosphorus-deficient conditions. This shift in C. tofieldiae behavior was associated with a phosphate starvation response of the plant, indicating that the behavior of the symbiont depended on the nutrient status of the host. In addition, symbioses generally involve more than two partners, which can either be parasites or mutualists (Graham; Saikkonen et al.) and can colonize different parts of the plant at the same time.
In the above-described experiment, C. tofieldiae was shown to first colonize the roots and then spread to the leaves (Hiruma et al.).

1.3 The root microbiota: Diversity and composition

The diversity of microorganisms associated with plant roots is estimated to be on the order of tens of thousands of species (Berendsen et al., 2012). Tremendous advances in the approaches used to describe the microorganisms living in association with plants have been made in recent decades. Approaches based on the cultivation of organisms were for a long time the only methods available to characterize microbial communities (Vartoukian et al.; Margesin & Miteva; Rosling et al.). However, since most microorganisms, and especially fungi, are not cultivable (Epstein), the diversity of plant-associated microorganisms has long been underestimated. Estimations of plant-associated fungal diversity long relied on spore counts and morphology (see for example Burrows & Pfleger; Landis et al., 2005). Molecular methods developed in recent decades, such as Sanger sequencing of cloned products, PCR (polymerase chain reaction) amplicons or PCR-RFLP (restriction fragment length polymorphism), revolutionized experimental approaches and were rapidly adopted for microorganism identification and classification (Balint et al., 2016). This ability to detect DNA rapidly and at low cost provided information about the diversity of organisms in an environmental sample and led to numerous descriptions of the wide diversity of microorganisms associated with plants. Thanks to these studies, we now know that plants harbor an extreme diversity of archaea (Edwards et al., 2015), bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012) and fungi (Vandenkoornhuyse et al., 2002). These studies extended our knowledge of plant-associated microorganisms and led to an upward revision of their diversity. In 2002, Vandenkoornhuyse et al. demonstrated that the diversity of fungi colonizing plant roots was much greater than previously believed and included non-mycorrhizal fungi of the phyla Basidiomycota and Ascomycota. The composition of this microbiota varies significantly between plant species (Hardoim et al.; Redford et al.; Bulgarelli et al., 2012; Lundberg et al., 2012) and also depends on the genotype of the plant (Inceoğlu et al., 2010, 2011), although to a limited extent (Bulgarelli et al., 2012; Lundberg et al., 2012; see section 3.2.4 for the host genetics effect).
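Throughout this review, diversity figures are expressed in operational taxonomic units (OTUs). As a point of reference for how such units are obtained from amplicon data, the sketch below illustrates greedy, abundance-sorted clustering of reads at a fixed similarity threshold (commonly 97% for markers such as 16S or ITS), which is the general logic behind widely used OTU-picking tools. It is only an illustration: the similarity function is a crude stand-in for a real alignment-based identity score, and the threshold and read set are invented.

    from collections import Counter
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        # Crude stand-in for an alignment-based percent-identity score.
        return SequenceMatcher(None, a, b).ratio()

    def greedy_otu_clustering(reads, threshold=0.97):
        # Process unique reads from most to least abundant; each read joins
        # the first OTU whose centroid it matches at >= threshold, otherwise
        # it founds a new OTU.
        centroids = []        # one representative sequence per OTU
        otus = {}             # centroid sequence -> member sequences
        for seq, _count in Counter(reads).most_common():
            for c in centroids:
                if similarity(seq, c) >= threshold:
                    otus[c].append(seq)
                    break
            else:
                centroids.append(seq)
                otus[seq] = [seq]
        return otus

    # Toy usage: two near-identical reads collapse into one OTU.
    reads = ["ACGTACGTACGT", "ACGTACGTACGT", "ACGTACGTACGA", "TTTTGGGGCCCC"]
    print({c: len(m) for c, m in greedy_otu_clustering(reads, 0.9).items()})

Because the centroid of an OTU is fixed by the most abundant read, the result depends on read ordering, which is why real pipelines sort by abundance first, exactly as above.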
Although the composition of the plant microbiota has been extensively described for bacteria, with estimations of relative abundances and species richness for different organs (Figure 2), such knowledge is lacking for fungal and archaeal communities, for which only a few seminal papers have been published (Vandenkoornhuyse et al., 2002a; Edwards et al., 2015; Coleman-Derr et al., 2016). Within the fungal compartment, the most studied organisms are root-associated mycorrhizae, which are either endomycorrhizal (within the plant cells) or ectomycorrhizal (outside the cell walls, forming hyphae around the roots). All AM fungi belong to the phylum Glomeromycota, whereas the ectomycorrhizal fungi are spread among the Ascomycota and Basidiomycota phyla (Arnold, 2007; Rodriguez et al.; Rudgers et al.). In roots and shoots, endophytic Basidiomycota and Ascomycota also exist in a non-mycorrhizal form. Recent studies enlarging the fungal compartment to groups other than mycorrhizae detected over 3000 OTUs in total in the soil, episphere and endosphere of Agave species, with a third of these OTUs (1007 OTUs belonging to nine orders) found in the endosphere (Coleman-Derr et al., 2016).

2. Endophyte-induced functions and phenotypic modifications

Endophytic microorganisms are involved in a variety of phenotypic changes in plants. These changes have been widely studied, especially in plants of agricultural importance. Because of their sessile lifestyle, plants have to cope with local environmental conditions. Abiotic conditions in particular, such as light and water, are spatially variable, so plants must cope with this heterogeneity (Hodge). There are numerous reports of fungal and bacterial symbionts conferring tolerance to a variety of stresses to host plants, as well as other benefits (e.g., Friesen et al., 2011; Müller et al., 2016). The range of functions provided by root microorganisms is reviewed in the following section.

2.1 Resource acquisition

Microorganisms living in association with plants improve the acquisition of different resources and, more importantly, allow the acquisition of otherwise inaccessible ones. Leaf epiphytic cyanobacteria, for example, transfer atmospherically fixed nitrogen to plants, which can represent 10 to 20% of the leaf nitrogen (Bentley). This ability to fix atmospheric nitrogen is displayed by at least six different bacterial phyla, the most common being Proteobacteria, and by several archaeal lineages (Martinez-Romero; Friesen et al., 2011). Nitrogen acquisition also occurs within tree roots in boreal forests thanks to ectomycorrhizal fungi (Lindahl et al., 2007). In parallel, other nutrients can be more easily obtained with the help of endophytic organisms, and a given organism can provide multiple resources. For instance, the arbuscular mycorrhizal fungi living in association with plants supply them with water, phosphorus, nitrogen and trace elements (Smith & Read; Smith et al., 2009).
In the arbuscular mycorrhizal symbiosis, mineral nutrient uptake can be 5 and 25 times higher for nitrogen and phosphorus, respectively, in mycorrhizal as compared to non-mycorrhizal roots (van der Heijden et al., 2003; Vogelsang et al., 2006). AM fungi acquire these resources more efficiently because hyphae can access narrower soil pores and increase the uptake of immobile resources (especially inorganic phosphate) by acquiring nutrients beyond the depletion zone of the root and stimulating the production of exudates that release immobile soil nutrients (Maiquetía et al., 2009; Courty et al.; Cairney). In addition, AM fungi are able to acquire both organic and inorganic nitrogen, which is not the case for plants (see Hodge et al. for a review of AM fungal nitrogen uptake). One example is the nitrogen in soil organic matter released by hydrolytic and oxidative enzymes produced by ericoid mycorrhizae. This enhanced acquisition of resources not only directly affects the individual plant's fitness but also its phenotype and its competitive interactions with other plants (see section 2.2.1 for examples; for a review see Hodge et al.). Indeed, one phenotypic consequence for the plant of the more efficient nutrient uptake of mycorrhizae is a reduced number of fine roots, together with a lower root:shoot ratio and specific root length (Smith et al., 2009). Improved resource acquisition also allows the plant to cope more efficiently with environmental constraints.

2.2 Resistance to environmental constraints

2.2.1 Abiotic stresses

Numerous studies indicate that plant adaptation to stressful conditions may be explained by the fitness benefits conferred by mutualistic fungi (for example, the resource acquisition described in section 2.1) (Stone et al.; Rodriguez et al.). In addition to these indirect effects of resource acquisition, fungi and bacteria also provide direct resistance to a large range of stresses through the production of specific compounds. Among the stresses alleviated by plant endophytes, the main ones are salinity, extreme heat, drought and heavy-metal pollution. Salinity tolerance, for example, can be increased by metabolites such as trehalose produced by bacteria like rhizobia in nodules (Lopez et al., 2008). Such improved tolerances are not only provided by bacteria: the fungal endophyte Curvularia, found on thermal soils in Yellowstone National Park, has been shown to increase tolerance to extreme heat (Redman et al., 2002). Fungal endophytes such as Fusarium culmorum, colonizing the above- and below-ground tissues and seed coats of Leymus mollis, also confer salinity tolerance (Rodriguez et al., 2008). In addition, a given microorganism can at the same time increase resource acquisition and provide resistance to an abiotic stress (independently of its effect on resource acquisition).
AM fungi increase stomatal conductance when inoculated to plants under either normal or drought conditions. This increase in stomatal conductance has been linked to the greater drought tolerance of rice and tomato plants inoculated with such fungi (Raven; Rodriguez et al., 2008). Following such observations of induced tolerance, several microorganisms have been considered suitable candidates for the bioremediation of polluted soils. This is the case of plant growth-promoting rhizobacteria that elicit tolerance to heavy metals (Glick; Zhuang et al.).

2.2.2 Biotic stresses

In addition to their role in resistance against abiotic stresses, plant-associated microorganisms also mediate plant resistance against biotic constraints, among which the most studied are attacks by pathogens. Indeed, endophytes are able to secrete defensive chemicals into plant tissues (Arnold et al., 2003; Clay & Schardl, 2002). Different studies have identified chemicals providing defense against pathogens, produced by a myriad of microorganisms associated with plants. The range of compounds produced by symbionts consists of antimicrobials with direct effects on the pathogen or indirect effects such as a reduction of its pathogenicity. Compounds with a direct and immediate antibacterial effect include antimicrobial auxin and other phytohormones (Morshed et al.). Compounds with indirect effects include, for example, nonanoic acid produced by the fungus Trichoderma harzianum, which inhibits mycelial growth and spore germination of two pathogens in the tissues of Theobroma cacao (Aneja et al., 2005). Alternatively, microorganisms can also produce compounds such as AHL-degrading lactonases that alter the communication between pathogens and thus prevent the expression of virulence genes (Reading & Sperandio, 2006). In addition, different strains and species can produce different compounds, and each endophyte can itself produce several compounds. Even within a species, each strain can produce multiple antibiotics, as in Bacillus, where these antibiotics have synergistic interactions against pathogens (Haas et al., 2000). Such protection against biotic aggressors can also be mediated by stimulation of the plant immune system. Boller & Felix highlighted mutualist-induced signaling pathways initiated by the flagellin/FLS2 and EF-Tu/EFR recognition receptors, allowing plants to respond to viruses, pests and pathogens. The defensive responses induced by mutualists are not always localized, and given microorganisms such as the fungus Trichoderma may induce both systemic and localized resistance to a variety of plant pathogens (Harman et al.). Such resistance often protects against crop damage, and many plant-associated microorganisms like the latter can be used for biocontrol (Harman et al.).
Biotic constraints are not limited to pathogens, and many other aggressors or competitors can affect plants. One of the most studied biotic stresses alleviated by plant-associated microorganisms is herbivory. Mutualist-induced resistance to herbivory has been identified and described for various plant-feeding herbivores. Such resistance often involves the production by the endophyte of compounds that are toxic to herbivores or that diminish plant palatability. For example, Tanaka et al. showed that the mutualistic fungal endophyte Epichloë festucae of perennial ryegrass produces the secondary metabolite peramine, which protects the symbiotum from insect herbivory. As for abiotic stress tolerance (see section 2.2.1 above), a single microorganism can produce various compounds. For instance, clavicipitaceous endophytes such as Neotyphodium induce the production of alkaloids, lolitrems, lolines and peramines, allowing grasses to defend themselves against herbivores (Rowan; Siegel et al., 1989; Clay; Clay & Schardl, 2002). Such patterns of defense against herbivores have mostly been evidenced in grasses and also include other endophytes limiting mammalian herbivory through the production of lysergic acid amide (White; Gentile et al.; Zhang et al.). Non-toxic compounds conferring antifeeding properties include, for example, alkaloids that reduce rabbit herbivory on plants (Panaccione et al.).

2.3 Growth and reproductive strategy

As has been shown in other hosts, such as insects with Wolbachia, endophytes can also alter plant growth and reproductive strategy. In clonal plants, Streitwolf-Engel et al. (1997, 2001) showed that colonization of the roots by Arbuscular Mycorrhizal Fungi could alter growth and reproduction. In these experiments, the authors found that the production of ramets (i.e., clonal units composed of roots and shoots) by Prunella vulgaris was differentially affected by the inoculation of three AM fungal isolates (see Figure 3 for a description of clonal plant growth). The number of ramets produced changed by a factor of up to 1.8, independently of the isolates' effects on plant biomass. AM fungi can thus alter the trade-off between growth and reproduction. In addition, microorganisms can also alter the trade-off between different compartments of plant growth. In the above-described experiment, the authors also found that branching (lateral ramification) was affected by inoculation, suggesting that foraging by the plant (i.e., its strategy for resource acquisition) was modified by Arbuscular Mycorrhizal Fungi inoculation.
In another experiment, Sudova (2009) showed that the growth and reproductive strategy modifications induced by AM fungi vary both with the fungal identity and with the plant species. Using five co-occurring plant species and three AM fungal isolates, the author showed that plant growth responses to inoculation varied widely from negative to positive depending on the inoculum. AM inoculation led to changes in clonal growth traits, such as an increase in stolon number and length, in only some of the plant species. The effects of microorganisms on the growth and reproductive strategy of plants thus appear to depend on the matching between plant and microorganism identities, although this idea has not been extensively tested. As shown in this section, the vast diversity of plant-associated microorganisms ensures essential functions affecting plant growth, development, survival and resistance to environmental constraints in general. However, the studies described tended to focus on the effects of the microbiota rather than on the use of this microbiota by the plant. Considering the benefits of symbiotic associations, evolution should favor plants that optimize their interactions with microbes. However, the extent to which plants might forage for microorganisms has not been investigated to date.

3. Plant microbiota assembly

The additive ecological functions supplied by the plant mutualists described in the previous section extend the plant's ability to adapt (e.g., Vandenkoornhuyse et al., 2015), leading to fitness benefits for the host in highly variable environments (Conrath et al.) and can therefore affect evolutionary trajectories (e.g., Brundrett). In addition, because microbial communities may produce a mixture of anti-pathogen molecules that are potentially synergistic (see section 2.2.2), we can predict that plant hosts will be better protected against biotic stresses in the presence of more diverse microbial communities (Friesen et al., 2011). The same idea has been proposed for resource acquisition, following results showing complementarity between symbionts in the acquisition of resources (van der Heijden et al., 1998). The composition of the plant microbiota is thus of major importance in determining the ecological success (the fitness) of plants. In this context, the assembly rules shaping microbiota diversity and composition have only started to be described in recent years, and current knowledge is reviewed in the following section (see Figure 4 for an overview of the factors shaping microbiota assembly). These factors, which filter the microbial communities from the soil to the rhizosphere and subsequently within the endosphere, are presented below.

3.1 Microbiota recruitment

3.1.1 The soil as a microbial reservoir

The soil is a highly rich and diversified reservoir of microorganisms (Curtis et al.; Torsvik et al.; Gams; Buee et al., 2009). For example, forest soil samples of 4 g were found to contain approximately 1000 molecular operational taxonomic units assigned to fungal species (Buee et al., 2009).
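Two descriptors recur in the studies cited here and in the experimental chapters of this thesis: richness (the number of OTUs present) and evenness (how equally reads are distributed among them). A minimal sketch of their computation from an OTU count vector is given below; the count vector is invented for illustration, and Pielou's evenness is used as one common choice among several possible indices.

    import math

    def richness(counts):
        # Number of OTUs observed at least once.
        return sum(1 for n in counts if n > 0)

    def shannon(counts):
        # Shannon diversity H' = -sum(p_i * ln(p_i)) over non-empty OTUs.
        total = sum(counts)
        return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

    def pielou_evenness(counts):
        # Pielou's J = H' / ln(S); J = 1 when all OTUs are equally abundant.
        s = richness(counts)
        return shannon(counts) / math.log(s) if s > 1 else 0.0

    # Hypothetical fungal OTU read counts from one root sample.
    sample = [120, 80, 40, 10, 5, 1]
    print(richness(sample), round(shannon(sample), 2), round(pielou_evenness(sample), 2))

Note that both measures depend on sequencing depth, which is why samples are usually rarefied to a common number of reads before such indices are compared.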
Recent studies investigating the composition of the microbiota in different compartments (soil, rhizosphere, root and leaf endosphere) of Agave species showed that the majority of bacterial and fungal OTUs found in the endosphere were present in the soil (Coleman-Derr et al., 2016). This observation was independently evidenced in the common model plant Arabidopsis thaliana, where most of the root-associated bacteria were also found in the soil (Bulgarelli et al., 2012; Lundberg et al., 2012; Schlaeppi et al., 2014). Root-associated microorganisms are thus mainly recruited from the surrounding soil, and the pool of microorganisms available for recruitment in the soil determines the diversity and composition of the plant root microbiota. These recent studies, in line with others, have clearly demonstrated through experimental approaches that the main environmental factors determining the soil pool (and thus the endophytic microbiota of plant roots) are soil type and properties such as pH, temperature or water availability (Berg & Smalla; Bulgarelli et al., 2012; Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014).

3.1.2 Rhizosphere

While the composition of soil microbial communities seems to be mostly determined by soil parameters (see above), the rhizosphere is a thin layer of soil at the periphery of the roots. This specific habitat contains a great abundance of microbes, up to 10¹¹ microbial cells per gram of soil, and is also highly diversified, with more than 30,000 prokaryotic species (e.g., Egamberdieva et al.; Mendes et al.). The rhizosphere corresponds to the zone influenced by plant root exudates and oxygen release (Vandenkoornhuyse et al., 2015). Exudates are highly complex mixtures comprising carbon-rich molecules and other compounds secreted by the plant as products of its metabolism. These exudates have a strong effect on fungal and bacterial communities because they can be resources available to microorganisms, signal molecules (e.g., chemotaxis) and, at the same time, antimicrobial compounds (Vandenkoornhuyse et al., 2015). The large quantity of organic molecules secreted allows plants to shape the rhizosphere from the surrounding bulk soil reservoir (Bais et al., 2006).
Differences in the microbial communities of the rhizosphere in the same soil, depending on the plant species, have been highlighted (Berg et al., 2006; Garbeva et al.; Berg & Smalla; Micallef et al.), demonstrating the strong influence of plants, termed the "rhizosphere effect". Differences have also been detected within species, as genotypes were shown to harbor distinct rhizosphere communities (Micallef et al.). In addition, some species have been shown to create similar communities in different soils (Miethling et al.). While species richness seems to be lower in the rhizosphere than in the surrounding soil, suggesting that organisms are filtered from the original soil pool (Bulgarelli et al., 2012), the intensity of the rhizosphere effect seems to vary (Uroz et al.; Bulgarelli et al., 2012; Lundberg et al., 2012; Schlaeppi et al., 2014). The importance and intensity of the rhizosphere effect still need to be clarified, but it has been hypothesized that plant roots select for specific microorganisms to prosper in the rhizosphere through the composition of the exudates they release (discussed in section 3.2).

3.1.3 Endosphere

The endosphere is the area within the plant roots and is colonized by a large variety of organisms comprising mycorrhizal and non-mycorrhizal fungi (Vandenkoornhuyse et al., 2002; Smith & Read), bacteria (Reinhold-Hurek & Hurek, 2011) and archaea (Sun et al.). The plant's influence on microbial assemblages is stronger in this area than in the rhizosphere because of the influence of the plant immune system (Vandenkoornhuyse et al., 2015). In this area, the diversity and abundance of organisms are lower than in the surrounding soil (Bulgarelli et al., 2012; Lundberg et al., 2012; Schlaeppi et al., 2014), with specific assemblages differing from the soil communities (Vandenkoornhuyse et al., 2015). Several key microorganisms are indeed enriched (i.e., present at higher relative abundance) in the endosphere, as compared to the soil and rhizosphere compartments, whereas others are depleted (i.e., present at lower relative abundance). Lundberg et al. (2012) detected 96 OTUs that were enriched in the endosphere of A. thaliana and 159 OTUs that were depleted.
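Enrichment and depletion of this kind are typically assessed OTU by OTU, by testing whether an OTU's share of the reads differs between compartments. The sketch below uses a 2x2 Fisher's exact test as a simple stand-in for the count models actually used in such studies; the read counts are invented, and in a real analysis the p-values would additionally be corrected for multiple testing (e.g., with the Benjamini-Hochberg procedure).

    from scipy.stats import fisher_exact

    def otu_enrichment(endo_count, endo_total, soil_count, soil_total):
        # 2x2 table: reads of this OTU vs all other reads, per compartment.
        table = [[endo_count, endo_total - endo_count],
                 [soil_count, soil_total - soil_count]]
        odds, p = fisher_exact(table)
        return ("enriched" if odds > 1 else "depleted"), p

    # Invented read counts out of 10,000 reads per compartment.
    for name, endo, soil in [("OTU_1", 500, 50), ("OTU_2", 20, 400)]:
        status, p = otu_enrichment(endo, 10_000, soil, 10_000)
        print(name, status, "p = %.2e" % p)

Run on these toy counts, OTU_1 comes out as significantly enriched in the endosphere and OTU_2 as significantly depleted, which is the pattern of compartment-specific filtering described above.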
These results were confirmed in a more recent study using a larger set of Arabidopsis and Cardamine hosts, where Actinobacteria, Betaproteobacteria and Bacteroidetes dominated the datasets obtained from the rhizosphere and endosphere samples (Schlaeppi et al., 2014). These patterns of enrichment and depletion can be attributed to the plant's ability to control its microbiota, notably through its immune system.

3.2 Controls of the plant over its microbiota

3.2.1 Microorganism recruitment through compound secretion

The microbial communities colonizing the rhizosphere, and those that penetrate the endosphere and form the plant microbiota, have been described with regard to their role in plant health. Studies have highlighted a large panel of mechanisms that allow the plant to control this microbiota by filtering out pathogens and promoting beneficial microbes (Doornbos et al.). These control mechanisms have been identified for both the rhizosphere and the endosphere and involve exudate secretion and the plant's immune system. Berendsen et al. (2012) reviewed the mechanisms that enable the plant to control the composition of its rhizosphere and showed that plants can exude secondary metabolites, such as benzoxazinoids, that inhibit the growth of specific microbes in the rhizosphere (Bais et al., 2002; Zhang et al.). Some studies have also evidenced plant-secreted compounds that suppress microbial cell-to-cell communication (i.e., "quorum sensing") and thus alter the expression of microbial virulence genes. Such compounds have been identified in a variety of species, such as pea (Pisum sativum), rice (Oryza sativa) and the green alga Chlamydomonas reinhardtii (Teplitski et al.; Ferluga et al.). A study on Medicago truncatula (Gao et al.) found that around 20 compounds produced and released in seedling exudates altered quorum sensing. These exudates also exhibited antimicrobial properties against pathogens while at the same time attracting beneficial organisms. An example of this attractive effect is the beneficial rhizobacterium Pseudomonas putida KT2440, which is tolerant to, and chemotactically attracted by, exudate compounds (Neal et al.). Such recruitment of microorganisms mediated by root exudates has also been described for fungal antagonists as a response to colonization of the plant by the pathogen Verticillium dahliae.
Other studies have indicated that upon attack by pathogens, plants respond by recruiting specific beneficial microorganisms (Rudrappa et al.; Ryu et al.). For example, when the leaves of Arabidopsis were infected by Pseudomonas syringae pv., the roots were more heavily colonized by Bacillus subtilis FB17, a beneficial endophyte. These observations led to the suggestion that the recruitment of specific microorganisms within the endosphere constitutes a means for plants to repress pathogens (Berendsen et al., 2012). The plant is thus able to preferentially recruit microorganisms depending on its needs, by repressing or facilitating microorganisms within the rhizosphere. The control of the plant over microbial communities is not limited to the area around the root and is even greater within the roots.

3.2.2 The immune system

Another way for the plant to control the composition of its microbiota is through the innate immune system. The forms of innate immunity and their role in controlling microbiota composition have been reviewed elsewhere (see for example Jones & Dangl, 2006). Two forms of innate immunity exist, depending on the molecules triggering the response: pattern-triggered immunity (PTI), elicited by conserved pathogen-associated molecular patterns, and effector-triggered immunity (ETI), based on the recognition of pathogen effectors. Boller & Felix described how host plant receptors detect molecular patterns of damage and microbes and initiate the immune response by producing reactive oxygen species, thickening cell walls and activating defense genes in response to bacterial flagellin. In the "arms race" between plants and their pathogens, pathogens have evolved the use of effectors to suppress the above-described PTI mechanisms (Kamoun, 2006), whereas plants have developed the perception of these effectors through ETI, which induces a strong response of cell apoptosis and local necrosis (Boller & Felix). In a recent paper, Hiruma et al. (2016) suggested that the immune system has wider physiological roles than simply restricting pathogen growth and may be involved in symbiont behavior. In the experiment described in section 1.2, they used metabolic profiling and experimental inoculations of endophytic fungi to demonstrate that the plant's innate immune system favored colonization by Colletotrichum tofieldiae during phosphate starvation and defense responses under phosphate-sufficient conditions (Hiruma et al., 2016). At the same time, the immune system stimulated the indole glucosinolate metabolism (Pant et al.), which induced a shift in symbiont behavior, triggering the transfer of phosphorus by the fungus from root-associated hyphae to the plant shoots.
The immune system could thus be a tool allowing the plant to manipulate its symbionts for its own good.

3.2.3 Regulation of symbiotic interactions

As explained in section 1.2, symbiosis covers a continuum of partnerships ranging from parasitism to mutualism. The efficiency of a symbiosis is a consequence of the evolutionary trajectory of the symbiont under consideration, meaning that in any kind of symbiosis cheaters and cooperators can co-exist. But why cheat? A large proportion of the plant microbiota provides beneficial functions to the plant and is thus believed to consist of mutualists. Mutualism implies reciprocal benefits with an associated cost for both partners (Cameron; Davitt et al.; Fredericksson et al., 2012). This type of interaction favors a "tragedy of the commons", since it creates a conflict of interest between organisms. From an evolutionary point of view, mutualism is thus expected to be unstable and to evolve toward an asymmetric relationship in which one partner benefits from the interaction at the expense of the other. By providing few benefits (at low cost) to its partner in exchange for high rewards, a cheater symbiont, or a cheating host, would selfishly improve its own fitness and thus invade the community. The stability of mutualism has been proposed to rely on control mechanisms on both sides of the symbiosis (Bever et al.; Kiers et al., 2011). It has been clearly demonstrated that plants can exert a carbon sanction on the less beneficial Arbuscular Mycorrhizal fungal cooperators (Kiers et al., 2011), thus reducing the fitness of any cheater. This mechanism of carbon sanction by the host plant has been demonstrated for both phosphorus (Bever et al.; Kiers et al., 2011) and nitrogen transfer (Fellbaum et al.) by AM fungi. Reciprocally, on the fungal side, phosphorus is only delivered to the host in exchange for photosynthates. In this symbiosis, cheating thus means increasing the carbon cost of the delivered phosphorus (Kiers et al., 2011). Considering that a given host plant is colonized by multiple symbionts at the same time, cheaters can only proliferate if the plant is not able to favor other symbionts. The level of cooperation is thus likely correlated with the level of diversity of symbionts.
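The sanction mechanism described above can be pictured as a repeated exchange in which the plant splits a fixed carbon budget among its fungal partners in proportion to the phosphorus each delivered. The toy model below is only an illustration of this reasoning, not the experimental design or model of Kiers et al. (2011); the delivery rates and the proportional-allocation rule are assumptions. It shows the carbon income of a cheater shrinking round after round.

    def simulate_sanctions(p_per_carbon, carbon_budget=100.0, rounds=5):
        # Start from an equal carbon split, then reallocate each round in
        # proportion to the phosphorus delivered in the previous round.
        carbon = {name: carbon_budget / len(p_per_carbon) for name in p_per_carbon}
        for _ in range(rounds):
            phosphorus = {n: carbon[n] * rate for n, rate in p_per_carbon.items()}
            total_p = sum(phosphorus.values())
            carbon = {n: carbon_budget * p / total_p for n, p in phosphorus.items()}
        return carbon

    # A cooperator returns 1.0 unit of P per unit of C received, a cheater 0.5.
    final = simulate_sanctions({"cooperator": 1.0, "cheater": 0.5})
    print({name: round(c, 1) for name, c in final.items()})
    # After 5 rounds the cheater's carbon share has collapsed (~3 vs ~97).

Under this allocation rule the cheater's share is halved at every round, which captures, in a very schematic way, why preferential allocation can keep mutualism stable when the plant can compare several partners.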
Even within these compartments, communities become increasingly plant-specific during plant growth and development [START_REF] Chaparro | Rhizosphere microbiome assemblage is affected by plant development[END_REF][START_REF] Sugiyama | Changes in the bacterial community of soybean rhizospheres during growth in the field[END_REF] (Edwards et al., 2015; [START_REF] Shi | Successional trajectories of rhizosphere bacterial communities over consecutive seasons[END_REF]). Plants represent habitats for microorganisms, and thus root architecture or nutrient quality and quantity can determine the plant microbiota depending on the plant species [START_REF] Oh | Distinctive bacterial communities in the rhizoplane of four tropical tree species[END_REF][START_REF] Bonito | Plant host and soil origin influence fungal and bacterial assemblages in the roots of woody plants[END_REF]. Differences between microbiota compositions can even be detected at a finer scale, such as the ecotype: A. thaliana ecotypes establish rhizosphere communities that differ from each other at a statistically significant level [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012). However, the same studies, along with others, also highlighted that environmental factors seem to exert greater control on root-associated bacteria than the plant genotype or even the plant species [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Shakya et al., 2013; Schlaeppi et al., 2014). This suggests that plant identity might not be the primary, or at least not the strongest, factor determining differences in plant microbiota. Recent advances indicate that the importance of the host species might be microorganism-specific. [START_REF] Coleman-Derr | Plant compartment and biogeography affect microbiome composition in cultivated and native Agave species[END_REF] found that while prokaryotic communities of Agave species were primarily shaped by plant compartment and soil properties, fungal communities were structured by the biogeography of the plant host species. Such differences in biogeographic effect have been independently found in other studies, where root fungal communities (ectomycorrhiza) were differentiated more by geographic distance than bacterial communities [START_REF] Peay | A strong species-area relationship for eukaryotic soil microbes: island size matters for ectomycorrhizal fungi[END_REF] (Shakya et al., 2013; [START_REF] Meiser | Meta-analysis of deep-sequenced fungal communities indicates limited taxon sharing between studies and the presence of biogeographic patterns[END_REF]).

Biotic interactions
Microorganisms living within the plant interact with each other in different ways, and these interactions may determine their co-existence. The plant itself also interacts with other plants in a community context. In this section we review the importance of biotic interactions in the assembly of the microbiota, focusing primarily on recent discoveries regarding microbe-microbe interactions and then emphasizing the under-appreciated role of the plant community context.

Microbe-microbe interactions
Considering that plant roots constitute a habitat for microorganisms and represent a finite volume, co-existing microorganisms within the plant are expected to compete for space and resources.
Competition between bacterial and fungal species has been demonstrated at the species scale. [START_REF] Werner | Partner selection in the mycorrhizal symbiosis[END_REF] demonstrated that the order of arrival of AM fungi structured the colonization of plant roots, suggesting that mycorrhizal fungi compete for root space. Specifically, they showed that the AM fungi that first colonized plant roots (through an earlier inoculation) benefitted from a priority effect and were thus found in greater abundance. At higher taxonomic levels, a recent study focusing on the whole endospheric root fungal microbiota (Lê Van et al., 2017) highlighted the coexistence of Glomeromycota, Ascomycota and Basidiomycota in Agrostis stolonifera. An important result of this work was the demonstration that the phylogenetic structure and phylogenetic signal measured differed among phyla. Phylogenetic overdispersion was observed for Glomeromycota and Ascomycota, indicating facilitative interactions between fungi, or competitive interactions resulting in the coexistence of dissimilar fungi. Conversely, the fungi within Basidiomycota displayed a clustered phylogenetic pattern suggesting an assemblage governed by environmental filtering (i.e. the plant host) favoring the coexistence of closely related species (Lê Van et al., 2017). These mechanisms of facilitation, competition or environmental filtering acting on the fungal microbiota composition are observed at the level of an individual plant. However, competitive or facilitative interactions between microorganisms in the plant microbiota are not stable over time, and colonization of the root by a new microorganism can bring major changes. For example, colonization by the fungal pathogen Rhizoctonia solani caused a significant shift in the composition of the sugar beet and lettuce rhizosphere communities, often with an increase in the abundance of specific families [START_REF] Chowdhury | Effects of Bacillus amyloliquefaciens FZB42 on lettuce growth and health under pathogen pressure and its impact on the rhizosphere bacterial community[END_REF][START_REF] Chapelle | Fungal invasion of the rhizosphere microbiome[END_REF]. Moreover, plant-associated microorganisms may themselves be colonized by bacteria and viruses (i.e. three-way symbiosis), affecting symbiotic communication. For example, the endophyte Curvularia protuberata contains a virus that is required to induce heat tolerance [START_REF] Márquez | A virus in a fungus in a plant: three-way symbiosis required for thermal tolerance[END_REF], the fungus being asymptomatic without the virus. Current knowledge thus indicates that microbe-microbe interactions can, like host-microbe interactions, drive plant microbiota assembly and its effect on phenotype. However, information about these microbe-microbe interactions is still very limited. An example is the above-described three-way symbioses, which can mediate the effect of a given symbiont but whose occurrence in natural ecosystems is unknown.

The plant community context
The pool of microorganisms available in the soil conditions plant recruitment and thus determines the plant microbiota (see section 3.1.1). This soil pool depends on local environmental filtering and microbial dispersal. Another factor structuring this pool is plant host filtering.
Indeed, a plant is able to selectively recruit and promote microorganisms depending on its needs (see above). A recent study (2016) proposed that the degree of host filtering on the AM fungal community would depend on the spatial scale considered and could be stronger than environmental factors at a local scale. Taking into account that a given plant recruits its microbiota from its local environment, there is a need to evaluate the importance of this plant community context in determining both the AM fungal and the whole-microbiota assembly.

Microbiota transmission
Diverse modes of transmission of microorganisms between generations, also called heritability, have been described, reviewed and organized into different classifications [START_REF] Mcfall-Ngai | Unseen forces: the influence of bacteria on animal development[END_REF] (Zilber-Rosenberg & Rosenberg, 2008). Herein, we present these modes of heritability in two categories: pseudo-vertical (or horizontal) and vertical transmission.

Horizontal or pseudo-vertical transmission
Pseudo-vertical transmissions are indirect transmissions mediated by the environment. Two individuals sharing the same environment are able to recruit similar microorganisms from the soil pool; thus plants releasing microorganisms over short dispersal distances may share part of their microbiota through this environmental transmission. Such transmission has been widely described for a wide range of organisms with significant impact on plant phenotype, such as growth-promoting rhizobacteria [START_REF] Smith | Genetic basis in plants for interactions with disease-suppressive bacteria[END_REF][START_REF] Singh | Unravelling rhizosphere-microbial interactions: opportunities and limitations[END_REF][START_REF] Somers | Rhizosphere bacterial signalling: a love parade beneath our feet[END_REF][START_REF] Egamberdieva | High incidence of plant growth-stimulating bacteria associated with the rhizosphere of wheat grown on salinated soil in Uzbekistan[END_REF], rhizobia [START_REF] Stougaard | Regulators and regulation of legume root nodule development[END_REF][START_REF] Davies | How rhizobial symbionts invade plants: the Sinorhizobium-Medicago model[END_REF] and even arbuscular mycorrhizal fungi and non-photosynthetic fungi when seeds fall close to the mother plant [START_REF] Wilkinson | Mycorrhizal evolution[END_REF] (Wang & Qiu, 2006).

Vertical transmission or heritability
For plants, vertical transmission has primarily been reported for the cytoplasmic inheritance of chloroplasts [START_REF] Margulis | Origins of species: acquired genomes and individuality[END_REF]. Other true vertical transmission in plants does exist, even if it has not been extensively studied. A limited diversity of endophytic fungi and bacteria is contained in the germplasm and is thus vertically transmitted in grasses (Clay & Schardl, 2002), conifers [START_REF] Ganley | Fungal endophytes in seeds and needles of Pinus monticola[END_REF] and tropical trees (U'Ren et al., 2009). Another kind of vertical transmission has been proposed for plants through vegetative propagules [START_REF] Wilkinson | Mycorrhizal evolution[END_REF] (Zilber-Rosenberg & Rosenberg, 2008). Plants are modular organisms, and thus a fragment of a plant (i.e. a propagule) falling on the ground is able to grow and produce a new plant. In this context, it is probable that microorganisms developing within the plant fragment are transmitted to the progeny, even though this has not been experimentally demonstrated.
Another form of vertical microbiota transmission was suggested for clonal plants by Stuefer et al. (2004). Clonal plants are organized as a network of ramets (i.e. clonal, potentially autonomous individuals) connected by above- or below-ground modified stems (i.e. stolons) (Figure 3). This network structure promotes plant propagation in space (i.e. physical integration) and, in some species, the sharing of resources within the network to optimize the fitness of the network as a whole (i.e. physiological integration; Oborny et al., 2001). Another layer of integration could occur in such a network because the stolons could permit the transfer of microorganisms between ramets, through the vascular system or via the surface of the stolons. This hypothesis has never been addressed despite its importance regarding microbial dispersal and plant adaptation to environmental constraints. Considering all the benefits that can be attributed to symbiotic microorganisms (described in section 2), vertical transmission of part of or the whole microbiota would be advantageous because it would limit the costs of searching for suitable symbionts (Wilkinson & Sherratt, 2001) and could thus ensure habitat quality for the progeny. In terms of evolution, the transmission of symbiotic microorganisms between plant generations constitutes a "continuity of partnership" between the plant and the transmitted symbionts (Zilber-Rosenberg and Rosenberg, 2008). There is thus a need in plant science to search for vertical transmission of a significant part of the microbiota influencing the plant's fitness.

The plant is a "holobiont"
Following the observations described in the previous sections, a new understanding of plant-microorganism interactions has been proposed. Experiments and observations have clearly demonstrated the fundamental role of microorganisms in all aspects of host life, with important consequences for host phenotype and fitness. The plant should thus not be considered as a standalone entity but needs to be viewed as a host together with its microbiota. In this context, the holobiont idea, originally introduced in 1994 during a symposium lecture by Richard Jefferson, and the holobiont hypothesis later developed by Zilber-Rosenberg and Rosenberg (2008), allow a collective view of the functions and interactions of a host and its symbionts. Holobiont is a term derived from the Greek word holos (whole or entire) and is used to describe an individual host and its microbial community. This holobiont theory proposes that an entity comprising a host and its associated microbiota can be considered as a unit of selection (i.e. not the only unit) in evolution (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). The host and microbial genomes of a holobiont are collectively defined as its hologenome [START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF] (Bordenstein & Theis, 2015). This concept relies on different tenets, among which the idea that microorganisms can be vertically or horizontally transmitted between generations, constituting a continuity of partnership, which shapes the holobiont phenotype.
Genomic novelties can thus occur within any component of the hologenome (host or microorganisms), and holobiont evolution could lead to variations in either the host or the microbial genomes (Zilber-Rosenberg & Rosenberg, 2008; [START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF]). Furthermore, variations in the hologenome can also arise by recombination in the host and/or microbiome and by acquisition of new microbial strains from the environment [START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF]. In this context, selection acts on the result of the interactions both between the host and its symbionts and between symbionts: the holobiont phenotype. The recent development of the holobiont concept and hologenome theory (Bordenstein & Theis, 2015) has triggered a debate on its validity and its usefulness as a scientific framework (Moran & Sloan, 2015; [START_REF] Douglas | Holes in the hologenome: why host-microbe symbioses are not holobionts[END_REF]; Theis et al., 2016). The primary role of the holobiont concept and the related hologenome theory is to provide adequate vocabulary to describe and study host-microbiota symbiosis (Theis et al., 2016) while blending the effects of complex microbiota on host phenotypes, which may be cooperative or competitive [START_REF] Bosch | Metaorganisms as the new frontier[END_REF] (Vandenkoornhuyse et al., 2015). Holobionts and their hologenomes are theoretical entities that require understanding. Metaphorically, skepticism and criticism are the selection pressures shaping the evolution of science, and "...the hologenome concept requires evaluation as does any new idea" (Theis et al., 2016). One of the main criticisms of, and prospects for, the holobiont and hologenome theory is the lack of studies showing vertical transmission of microorganisms between holobiont generations (i.e. heritability). Observational and experimental studies addressing the ubiquity (i.e. frequency of occurrence in natural populations), the fidelity (i.e. homogeneity in the organisms transmitted) and the significance (i.e. amount of microorganisms transmitted and impact on the plant phenotype) of this heritability are thus needed to build a comprehensive holobiont theory. Another aspect of interest is the role of heritability in holobiont assembly, and especially in the covariance of genomes between hosts and microbiota components, which can induce variation in the phenotypes on which selection acts. In order for the holobiont and hologenome theory to be comprehensive, it should encompass many symbiosis models. The current understanding of the plant as a holobiont suggests another layer of complexity that has not yet been theorized. Many organisms are modular, but plants harbor an additional level of modularity through clonal reproduction. The network structure of clonal plants potentially allows exchanges between holobionts, and thus provides a particular model for understanding holobiont assembly and evolution. An extension of the holobiont and hologenome theory to clonal organisms is thus required.

Objectives of the thesis
This thesis is organized around two major objectives, the second of which follows from the first.
This work aims first at identifying the assembly rules governing the structure of the plant microbiota and the consequences of this assembly for plant phenotypes. More precisely, we aimed at determining: (i) the importance of the microbiota for plant phenotypic plasticity and adaptation (Chapter 1), with the hypothesis that fungal endophytes, and specifically AM fungi, influence the spatial spreading and performance of plants; (ii) the occurrence of microbiota heritability between clonal generations in plants and its impact on the assembly of the microbiota (Chapter 2). Specifically, we tested the hypothesis that clonal plants are able to transmit to their clonal progeny a part of their microbiota, including symbionts important for plant fitness, through the connective structures forming the clonal network. The second objective of the thesis is to determine the importance of the plant community context for plant-microorganism interactions and the outcome for plant performance (Chapter 3). The hypothesis is that the abundance of a given plant in a community can modify the composition and structure of the fungal community colonizing a focal host plant, ultimately determining its performance.

Chapter I: Consequences of mutualist-induced plasticity

I.1. Introduction

Scientific context
Plants are sessile organisms and thus have to cope with environmental constraints. In nature, the spatial distribution of resources, and of environmental constraints in general, is heterogeneous. Plants have thus evolved different mechanisms to buffer these constraints, such as plastic modifications (i.e. the production of different phenotypes from a single genotype) allowing adaptation to local conditions (Bradshaw, 1965). However, the tools available for plant plastic responses are not limited to the genome. Indeed, epigenetic mechanisms (regulating the expression of the genome) and microorganisms have repeatedly been shown to be sources of variation (Richards, 2006; Friesen et al., 2011; Holeski et al., 2012; Vandenkoornhuyse et al., 2015). A large number of studies have separately identified the effects of specific plant-associated microorganisms and of epigenetic marking on plant phenotype. There is, however, a need to bring these findings together to evaluate the importance of these sources of phenotypic variation for the plant's ability to produce plastic responses and adapt to environmental constraints. Clonal plants are particularly plastic organisms. In clonal plants, the network structure (i.e. connected clonal individuals) allows propagation in space together with resource and information sharing (i.e. physiological integration; Oborny et al., 2001). The network thus allows plastic responses to heterogeneity to be developed at modular levels ranging from low, such as a leaf or a root, to higher, such as the ramet. Many studies have evidenced two plastic responses to the heterogeneity of resources that are specific to this network structure: foraging and specialization (Slade & Hutchings, 1987; Dong, 1993; Hutchings & de Kroon, 1994; Birch & Hutchings, 1994). Plant-associated symbionts provide a large range of key functions that can be useful for the plant (Friesen et al., 2011; Müller et al., 2016). From a theoretical point of view, plants should thus treat microorganisms as a resource and develop foraging and specialization behaviors in response to their heterogeneous distribution. Based on the optimal foraging theory (de Kroon & Hutchings, 1995), clonal plants are expected to aggregate ramets (i.e.
clonal units comprising shoots and roots) in favorable patches and to avoid unfavorable patches through elongation of stolons (i.e. connective structures). In agreement with the division of labor theory (Stuefer & de Kroon, 1996), plants are also expected to develop a specialization response by preferentially developing roots in nutrient-rich soil patches. Such plant responses to the heterogeneous distribution of microorganisms have never been investigated to date. In addition, as microorganisms affect the plant phenotype, they could alter its ability to produce plastic responses.

Objectives of the chapter
This chapter aims at determining the role of microorganisms in the production of phenotypic plasticity in response to environmental heterogeneity. More precisely, we address the following questions: 1) Is the genome the only source of phenotypic plasticity for plants? Are microbiota and epigenetics significant sources of phenotypic plasticity allowing plants to adapt to environmental constraints? (Article I) 2) Do plants develop foraging and specialization plastic responses to the spatial heterogeneity of arbuscular mycorrhizal (AM) fungi? Is AM fungal identity likely to alter plant traits, thus impacting the plant's ability to produce foraging and specialization responses? (Article II)

Methods
The work presented in this chapter is based on two different approaches. The first article presented is a review investigating current knowledge, and its limits, on plant phenotypic plasticity induced by epigenetics and the microbiota. This paper aims at identifying perspectives on the roles of microorganisms in plant responses to environmental constraints. The second article is based on experimental cultivation of the clonal plant Glechoma hederacea under controlled conditions (light, temperature, water and nutrient availability, no external contamination). Two different experiments are presented in this article. We tested the foraging and specialization responses of G. hederacea to the heterogeneous distribution of AM fungi. We manipulated the heterogeneity of AM fungal distribution by setting up artificial patches (separate cultivation pots) that were inoculated (presence) or not (absence) with a mixture of three AM fungal species. To test the effect of the fungal species inocula on plant traits, plants were cultivated in larger pots and the treatments consisted of the inoculation of individual fungal species (three treatments) or no inoculation at all.

Main results
The review presented herein (Article I) evidences the importance of microbiota and epigenetics as sources of phenotypic plasticity for plant adaptation to environmental constraints. Recent discoveries on epigenetics and plant-associated microorganisms have demonstrated their impact on plant survival, development and reproduction. Microbiota and epigenetics represent non-genome-based sources of phenotypic variation that can be used by the plant to cope with environmental conditions from very short (within-lifetime) to longer (i.e. evolutionary) time scales. These phenotypic variations can furthermore be integrated into the genome through genetic accommodation and thus impact plant evolutionary trajectories. Our review of the existing knowledge evidenced overall that an interplay between microbiota and epigenetic mechanisms could occur and deserves to be investigated, together with the respective impacts of these mechanisms on plant survival and evolution. In our experiments, we demonstrated 1)
in Glechoma hederacea, no specialization and only a limited foraging response to the heterogeneous distribution of AM fungi, and 2) that the architectural traits involved in the plant's foraging response were not affected by the AM fungal species tested, contrary to resource allocation traits (linked to the specialization response). Two possible explanations were proposed: (i) plant responses are buffered by the differences observed in the individual effects of the fungal species. Indeed, as AM fungal species have contrasting effects on plant traits and resource acquisition, their cumulative effects in mixture could be neutral; (ii) the initially heterogeneous distribution of AM fungi is perceived as homogeneous by the plant, either through reduced physiological integration or due to the transfer of AM fungal propagules through the stolons. Indeed, the fact that G. hederacea does not forage for arbuscular mycorrhizal fungi suggests that the plant could homogenize the distribution of its symbiotic microorganisms through their transmission between ramets, i.e. generations. Additional scanning electron microscopy observations revealed the presence of hyphae on the stolon surface, and several cells close to the external surface of the stolon cross-section were invaded by structures which could be interpreted as fungi. DNA sequencing of stolon samples confirmed these results and demonstrated the presence of AM fungi in the stolons. This suggests that fungi can be transferred from one ramet to another, at least by colonization of the stolon surface and/or within the stolon tissues. The possible transfer of AM fungi between ramets suggests that plants could have evolved a mechanism of vertical transmission of a part of their microbiota. This mechanism would ensure habitat quality for their progeny and constitute a continuity of partnership [START_REF] Wilkinson | Mycorrhizal evolution[END_REF]. To date, examples of vertical transmission concern only a few microorganisms colonizing plant seeds (Clay & Schardl, 2002; Selosse et al., 2004). Such mechanisms would be of importance since induced phenotypic modifications inherited between generations allow short-, medium- and long-time-scale adaptation of plants to environmental constraints (Article I). The hypothesis of microbiota transmission between ramets is addressed in the following chapter.

Understanding the sources of such variation, which has led to the diversification of living organisms, is therefore of major importance in evolutionary biology. Diversification is largely thought to be controlled by genetically-based changes induced by ecological factors ([START_REF] Schluter | Experimental evidence that competition promotes divergence in adaptive radiation[END_REF]; Schluter, 2000). Phenotypic plasticity, i.e., the ability of a genotype to produce different phenotypes (Bradshaw, 1965; [START_REF] Schlichting | The evolution of phenotypic plasticity in plants[END_REF]; [START_REF] Pigliucci | Evolution of phenotypic plasticity: where are we going now?[END_REF]), is a key developmental parameter for many organisms and is now considered as a source of adjustment and adaptation to biotic and abiotic constraints. Because of their sessile lifestyle, plants are forced to cope with local environmental conditions, and their survival subsequently relies greatly on plasticity [START_REF] Sultan | Phenotypic plasticity for plant development, function and life history[END_REF].
Plastic responses may include modifications in morphology, physiology, behavior, growth or life-history traits [START_REF] Sultan | Phenotypic plasticity for plant development, function and life history[END_REF]. In this context, the developmental genetic pathways supporting plasticity allow a rapid response to environmental conditions (Martin and Pfennig, 2010). The links between genotype and phenotypes are often blurred by factors including (i) epigenetic effects inducing modifications of gene expression, and post-transcriptional and post-translational modifications, which allow a quick response to an environmental stress (Shaw and Etterson, 2012), and (ii) the plant symbiotic microbiota recruited to dynamically adjust to environmental constraints (Vandenkoornhuyse et al., 2015). We investigate current knowledge regarding the evolutionary impact of epigenetic mechanisms and the symbiotic microbiota, and call into question the suitability of the current gene-centric view in the description of plant evolution. We also address the possible interactions between responsive epigenetic mechanisms and the symbiotic interactions shaping the biotic environment and phenotypic variations.

Genotype-phenotype link: still appropriate?
In the neo-Darwinian synthesis of evolution (Mayr and Provine, 1997), phenotypes are determined by genes. The underlying paradigm is that phenotype is a consequence of genotype (Alberch, 1991) in a nonlinear interaction due to overdominance, epistasis, pleiotropy and covariance of genes (see Alberch, 1991; [START_REF] Pigliucci | Evolution of phenotypic plasticity: where are we going now?[END_REF]). Both genotypic variations and the induction of phenotypic variation through environmental changes have been empirically demonstrated, thus highlighting the part played by the environment in explaining phenotypes. These phenotypes are consequences of the perception, transduction and integration of environmental signals. The latter depends on environmental parameters, including (i) the reliability or relevance of the environmental signals (Huber and Hutchings, 1997), (ii) the intensity of the environmental signal, which determines the response strength [START_REF] Hodge | The plastic plant: root responses to heterogeneous supplies of nutrients[END_REF], (iii) the habitat patchiness (Alpert and Simms, 2002) and (iv) the predictability of future environmental conditions from current environmental signals [START_REF] Reed | Phenotypic plasticity and population viability: the importance of environmental predictability[END_REF]. The integration of all these characteristics of the environmental stimulus regulates the triggering and outcomes of the plastic response (e.g. Alpert and Simms, 2002). Along these lines, recent work has shown that plant phenotypic plasticity is in fact determined by the interaction between plant genotype and the environment rather than by genotype alone (El-Soda et al., 2014). Substantial variations in molecular content and phenotypic characteristics have been repeatedly observed in isogenic cells (Kaern et al., 2005). Moreover, recent analyses of massive datasets on genotypic polymorphism and phenotype often struggle to identify single genetic loci that control phenotypic trait variation (Anderson et al., 2011). The production of multiple phenotypes is not limited to the genomic information, and the idea of a genotype-phenotype link no longer seems fully appropriate in the light of these findings.
Besides, evidence has demonstrated that phenotypic variations are related to gene transcription and RNA translation, which are often linked to epigenetic mechanisms, as discussed in the following paragraph [START_REF] Rapp | Epigenetics and plant evolution[END_REF].

Epigenetics as a fundamental mechanism for plant phenotypic plasticity
"Epigenetics" often refers to a suite of interacting molecular mechanisms that alter gene expression and function without changing the DNA sequence (Richards, 2006; Holeski et al., 2012). Epigenetics is now regarded as a substantial source of phenotypic variation (Manning et al., 2006; Crews et al., 2007; Kucharski et al., 2008; Bilichak, 2012; [START_REF] Zhang | Epigenetic variation creates potential for evolution of plant phenotypic plasticity[END_REF]) in response to environmental conditions. More importantly, studies have suggested the existence of epigenetic variation that does not rely on genetic variation for its formation and maintenance (Richards, 2006; [START_REF] Vaughn | Epigenetic natural variation in Arabidopsis thaliana[END_REF]). However, to date, only a few studies have demonstrated the existence of pure natural epi-alleles (Cubas et al., 1999), although they are assumed to play an important role in relevant trait variation of cultivated plants [START_REF] Quadrana | Natural occurring epialleles determine vitamin E accumulation in tomato fruits[END_REF]. Such epigenetic variation can confer adaptive value through modifications in the form, regulation or phenotypic integration of the trait. In the "adaptation loop", the effect of the environment on plant performance induces the selection of the most efficient phenotype. Epigenetic processes are not the only engines of plant phenotypic plasticity adjustment. Indeed, plants also maintain symbiotic interactions with microorganisms to produce phenotypic variations.

Plant phenotypic plasticity and the symbiotic microbiota
Plants harbor an extreme diversity of symbionts, including fungi (Vandenkoornhuyse et al., 2002) and bacteria ([START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]; Lundberg et al., 2012). During the last decade, substantial research efforts have documented the range of phenotypic variations allowed by symbionts. Examples of mutualist-induced changes in plant functional traits have been reported (Streitwolf-Engel et al., 1997; 2001; [START_REF] Wagner | Natural soil microbes alter flowering phenology and the intensity of selection on flowering time in a wild Arabidopsis relative[END_REF]), which modify the plant's ability to acquire resources, reproduce, and resist biotic and abiotic constraints. The detailed pathways linking environmental signals to this mutualist-induced plasticity have been identified in some cases. For instance, [START_REF] Boller | A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors[END_REF] highlighted several mutualist-induced signaling pathways allowing a plastic response of plants to viruses, pests and pathogens, initiated by the flagellin/FLS2 and EF-Tu/EFR recognition receptors. Mutualist-induced plastic changes may affect plant fitness by modifying the plant's response to its environment, including (i) plant resistance to salinity (Lopez et al., 2008), drought (Rodriguez et al., 2008) and heat (Redman et al., 2002), and (ii) plant nutrition (e.g., Smith et al., 2009).
These additive ecological functions supplied by plant mutualists extend the plant's adaptation ability (Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015), leading to fitness benefits for the host in highly variable environments [START_REF] Conrath | Priming: getting ready for battle[END_REF], and can therefore affect evolutionary trajectories (e.g. [START_REF] Brundrett | Coevolution of roots and mycorrhizas of land plants[END_REF]). In fact, mutualism is a particular case of symbiosis (i.e. long-lasting interaction) and is expected to be evolutionarily unstable because a mutualist symbiont is expected to improve its fitness by investing less in the interaction. Reciprocally, to improve its fitness a host would provide fewer nutrients to its symbiont. Thus, from a theoretical point of view, a continuum from parasites to mutualists is expected in symbioses. However, the ability of plants to promote the best cooperators through a preferential carbon flux has been demonstrated in both Rhizobium-legume and arbuscular mycorrhiza-Medicago truncatula interactions (Kiers et al., 2007; 2011). Thus, the plant may play an active role in the process of mutualist-induced adaptation to the environment, as it may be able to recruit microorganisms from the soil (for review, Vandenkoornhuyse et al., 2015) and preferentially promote the best cooperators through a nutrient embargo toward less beneficial microbes (Kiers et al., 2011). In parallel, vertical transmission or environmental inheritance of a core microbiota has been suggested (Wilkinson and Sherratt, 2001), constituting a "continuity of partnership" (Zilber-Rosenberg and Rosenberg, 2008). Thus the impact on phenotype is not limited to the individual's lifetime but also extends to reproductive strategies and to the next generation. Indeed, multiple cases of alteration in reproductive strategies mediated by mutualists such as arbuscular mycorrhizal fungi (Sudová, 2009) or endophytic fungi (Afkhami and Rudgers, 2008) have been reported. Such a microbiota, being selected by the plant and persisting through generations, may therefore influence the plant phenotype and be considered as a powerhouse allowing rapid buffering of environmental changes (Vandenkoornhuyse et al., 2015). The idea of a plant as an independent entity on the one hand and its associated microorganisms on the other has therefore recently matured towards an understanding of the plant as a holobiont or integrated "super-organism" (e.g., Vandenkoornhuyse et al., 2015).

Holobiont plasticity and evolution
If the holobiont can be considered as the unit of selection (Zilber-Rosenberg and Rosenberg, 2008), even though this idea is still debated (e.g. Leggat et al., 2007; Rosenberg et al., 2007), then the occurrence of phenotypic variation is enhanced by the versatility of the holobiont composition, both in terms of genetic diversity (i.e. mainly through microbiota genes) and of phenotypic changes (induced by mutualists). Different mechanisms allowing a rapid response of the holobiont to these changes have been identified, among them the recruitment of new mutualists within the holobiont (Vandenkoornhuyse et al., 2015). In this model, genetic novelties in the hologenome (i.e. the combined genomes of the plant and its microbiota, the latter carrying more genes than the host) are a consequence of interactions between the plant and its microbiota.
The process of genetic accommodation described in section 3 impacts not only the plant genome but can also be extended to all components of the hologenome, and may thus be enhanced by the genetic variability of the microbiota. In the holobiont, phenotypic plasticity is produced at different integration levels (i.e., organism, super-organism) and is also genetically accommodated or assimilated at those scales (i.e., within the plant and mutualist genomes and therefore the hologenome). The holobiont thus displays greater potential phenotypic plasticity and a higher genetic potential for mutation than the plant alone, thereby supporting selection and the accommodation process in the hologenome. In this context, the variability of both mutualist-induced and epigenetically-induced plasticity in the holobiont could function as a "toolbox" for plant adaptation through genetic accommodation. Consequently, mechanisms such as epigenetics allowing the production of phenotypic variants in response to the environment should be of importance in the holobiont context.

Do microbiota and epigenetic mechanisms act separately or can they interact?
Both epigenetics and microbiota interactions allow plants to rapidly adjust to environmental conditions and subsequently support their fitness (Figure 1). Phenotypic changes ascribable to mutualists, and mutualist transmission to progeny, are often viewed as epigenetic variation (e.g., [START_REF] Gilbert | Symbiosis as a source of selectable epigenetic variation: taking the heat for the big guy[END_REF]). However, this kind of plasticity is closer to an "interspecies induction of changes" mediated by epigenetics than to "epigenetics-induced changes" based solely on epigenetic heritable mechanisms (see the section on epigenetics for a restricted definition). Apart from the difficulty of drawing a clear line between epigenesis and epigenetics (Jablonka and Lamb, 2002), evidence is emerging of the involvement of epigenetic mechanisms in mutualistic interactions. An experiment revealed changes in DNA adenine methylation patterns during the establishment of symbiosis [START_REF] Ichida | DNA adenine methylation changes dramatically during establishment of symbiosis[END_REF], suggesting an effect of this interaction on the bacterial epigenome or, at least, a role of epigenetic mechanisms in symbiosis development. A correct methylation status also seems to be required for efficient nodulation in the Lotus japonicus-Mesorhizobium loti symbiosis [START_REF] Ichida | Epigenetic modification of rhizobial genome is essential for efficient nodulation[END_REF]. These epigenetic mechanisms and microbiota sources of plant phenotypic plasticity may act synergistically, although this idea has never been convincingly addressed. As far as we know, several important issues bridging epigenetic mechanisms and microbiota remain to be elucidated, such as (1) the frequency of epigenetic marking in organisms involved in mutualistic interactions, (2) the range of phenotypic plasticity associated with these marks, either in the plant or in microorganisms, (3) the consequences of these marks for holobiont phenotypic integration, (4) the functional interplay between epigenetic mechanisms and microbiota in plant phenotype expression, and
(5) the inheritance of epigenetic mechanisms and thus their impact on symbiosis development, maintenance and co-evolution. To answer these questions, future studies will need to involve surveys of plant genome epigenetic states (e.g., the methylome) in response to the presence/absence of symbiotic microorganisms. Recent progress made in bacterial methylome survey methods should provide useful tools for designing future experiments on this topic (Sánchez-Romero et al., 2015). Although research on the interaction between microbiota and epigenetics is in its infancy in plants, recent work, mostly on humans, supports the existence of such linkages. Indeed, a clear link has been evidenced between microbiota and human behavior (Dinan et al., 2015). Other examples of microbiota effects are their (i) deep physiological impact on the host through serotonin modulation [START_REF] Yano | Indigenous bacteria from the gut microbiota regulate host serotonin biosynthesis[END_REF] and (ii) influence on the adaptation and evolution of the immune system (Lee and Mazmanian, 2010). Such findings should echo in plant-symbiont research and encourage further investigations on this topic. More broadly, and despite the above-mentioned knowledge gaps, our current understanding of both epigenetic mechanisms and the impact of microbiota on the expression of the plant phenotype invites us to take these phenomena into consideration in species evolution and diversification. Evolutionary divergence is classically attributed to slow, gene-based changes (Martin and Pfennig, 2010), but rapid shifts in plant traits, as allowed by both microbiota and epigenetics, would provide accelerated pathways for their evolutionary divergence. In addition, such rapid trait shifts also permit rapid character displacement. Induction of DNA methylation may occur more rapidly than genetic modifications and could therefore represent a way to cope with environmental constraints on very short time scales (during the individual's lifetime; [START_REF] Rando | Timescales of genetic and epigenetic inheritance[END_REF]). In parallel, microbiota-induced plasticity is achieved both at a short time scale (i.e. through recruitment) and at larger time scales (i.e. through symbiosis evolution; Figure 2). Because of the observation of transgenerational epigenetic inheritance, the relevance of epigenetically-induced variations is a current hot topic in the contexts of evolutionary ecology and environmental change (Bossdorf et al., 2008; [START_REF] Slatkin | Epigenetic inheritance and the missing heritability problem[END_REF]; [START_REF] Zhang | Epigenetic variation creates potential for evolution of plant phenotypic plasticity[END_REF]; Schlichting and Wund, 2014). This has stimulated renewed interest in the 'extended phenotype' (Dawkins, 1982). The central idea of Dawkins' 'extended phenotype' is that the phenotype cannot be limited to biological processes related to gene/genome functioning but should be 'extended' to consider all effects that a gene/genome (including an organism's behavior) has on its environment. For example, the extended phenotype invites us to consider not only the effect of the plant genome on its resource acquisition but also the effect of the genome on the plant's symbionts as well as on nutrient availability for competing organisms.

No specialization and only a limited foraging response to the heterogeneous distribution of AM fungi was found.
An effect of the AM fungal species on plant mass allocation and ramet production, but not on spacer length, was detected.

In nature, environmental conditions, and especially resources, vary spatially and temporally, even at a fine scale. The spatial variations in resource abundance are perceived by organisms as environmental heterogeneity as long as the patches of resources are smaller than the organism and larger than the response unit 1,2. Plants, because of their sessile lifestyle, have to cope with this heterogeneity and have evolved complex and diverse buffering mechanisms, such as phenotypic plasticity (i.e. the production of different phenotypes from a single genotype 3). Phenotypic plasticity improves the plant's ability to respond to resource heterogeneity during its lifetime by allowing trait adjustment to current environmental conditions 4,5,6,7. Plasticity is expressed at different modular levels in plants 8, ranging from first-order modules such as the leaf or root to a superior modular level such as the ramet (see Harper, 1977 for a description of modular structure 9). This plastic response results from a trade-off between exploration of the environment for a resource (e.g. foraging for nutrient-rich patches) and resource exploitation (e.g. uptake of the resource and establishment in the patches). In clonal plants, each individual consists of a set of ramets connected through belowground (i.e. rhizomes) or aboveground horizontal modified stems (i.e. stolons). These connections result in a network structure and promote plant propagation in space (i.e. physical integration). In some species they also allow the sharing of information and resources within the physical clone (i.e. physiological integration 10). As a result of this network architecture, clonal individuals experience spatial heterogeneity at centimetric scales. They also share information about this environmental signaling between ramets. This leads to plastic responses at the local scale to optimize performance, through resource sharing, at the clone level 11. The response of clonal individuals to this small-scale heterogeneity results from a resource exploitation-exploration trade-off. Exploration responses are mostly linked to ramet positioning and induce modifications in the clonal network architecture to allow foraging for available resources 12,13. The optimal foraging theory predicts that ramets should maximize resource acquisition by aggregating in rich patches and avoiding poor patches 12,14,15,16. Such aggregation may be achieved through modifications of the horizontal architecture of clonal plants, such as internode shortening or increased branching 12,17,18. Exploitative responses involve changes in resource acquisition traits. As a result of physiological integration, each ramet may specialize in acquiring the most abundant resource (division of labor theory 19) and share it throughout the network. This specialization can involve modifications in ramet resource allocation patterns 20,21, whereby a higher root/shoot ratio is observed in ramets developing in nutrient-rich patches, and a lower ratio in light-rich patches 20,22. Clonal foraging and ramet specialization have been demonstrated in response to soil nutrient heterogeneity 22,23,24,25. However, under natural conditions, plant nutrient uptake is mostly mediated by symbiotic microorganisms such as arbuscular mycorrhizal (AM) fungi, which colonize ~80% of terrestrial plants 26. AM fungal symbionts (i.e.
Glomeromycota) colonize roots and develop a dense hyphal network exploring the soil to 'harvest' mineral nutrients for the plant's benefit 26. Plants with mycorrhized roots can thus attain higher rates of phosphorus and nitrogen absorption (roughly ×5 and ×25, respectively) than plants with non-mycorrhized roots 27,28. In turn, AM fungi obtain from plants the carbohydrates required for their survival and growth 29,30. In natural conditions, plant roots are colonized by a complex community of AM fungi 31. These fungi display different levels of cooperation, ranging from good mutualists to more selfish ones (i.e. cheaters 32). Within the root-colonizing fungal assemblage, plants have been shown to preferentially allocate carbon toward the best cooperators, thereby favoring their maintenance over cheaters 33. The additive nutrient supply provided by AM fungi can be regarded as a resource for the plant. An important emerging expectation is thus that plants may respond to the heterogeneous presence of AM fungi as they would to a nutritive resource. The plant could thus forage (optimal foraging theory) or specialize (division of labor theory) in response to AM fungal presence. The opposite hypothesis is that AM fungi on the one hand, and foraging or specialization on the other, are alternative ways to cope with resource heterogeneity, implying that plants with clonal mobility do not rely on AM fungi to respond to heterogeneity. Our aim in this study was to analyze a plant's plastic response to AM fungal heterogeneity by performing two experiments under controlled conditions with the clonal herb Glechoma hederacea. In the first experiment, we tested the plant's foraging and specialization responses to the heterogeneous distribution of AM fungi. The treatments consisted of a mixture of three species of AM fungi that have been shown to display varying degrees of cooperativeness in previous studies. Two assumptions were tested: (i) according to the optimal foraging theory, clones should aggregate ramets in the patches containing AM fungi by reducing their internode lengths, and (ii) according to the division of labor theory, clones should specialize ramets with a higher allocation to roots in the presence of AM fungi than in their absence. To better understand the results obtained in experiment 1, and because of the potential impact of different cooperation levels among the fungi involved in this symbiosis, we carried out a second experiment to test the effect of AM fungal identity on the foraging and specialization responses of G. hederacea. We tested (i) the effect on plant traits of the individual presence of each of the three AM fungal species used in the assemblage treatment, and (ii) the assumption that AM fungal species differ in their effects on the traits involved in foraging and specialization responses. In both experiments, the performance of clonal individuals was expected to be reduced in the absence of AM fungi.

METHODS

Biological material
We used the clonal, perennial herb Glechoma hederacea, a common Lamiaceae of woods and grasslands. G. hederacea clones produce new erect shoots at the nodes, at regular intervals of 5 to 10 cm, on plagiotropic monopodial stolons (i.e. aboveground connections). Each ramet consists of a node with two leaves, a root system and two axillary buds. In climate chambers with constant conditions, G. hederacea does not flower and displays only vegetative growth 12.
This species is known to exhibit foraging behavior 12,22,45 and organ specialization 22 in response to nutrient or light heterogeneity. The ramets used in our experiments were obtained from the vegetative multiplication of 10 clonal fragments collected at 10 different locations sufficiently spaced to obtain different genotypes. Plants were cultivated for three months under controlled conditions to avoid parental effects linked to their original habitats 51. Vegetative multiplication was carried out on a sterilized substrate (50% sand and 50% vermiculite, autoclaved at 120°C for 20 minutes) to ensure the absence of AM fungal propagules. For each experiment, the transplanted clonal unit consisted of a mature ramet (leaves and axillary buds) with one connective internode (to provide resources supporting ramet survival) 52, and without roots (to avoid prior mycorrhization). The AM fungal inocula used in both experiments were Glomus species: Glomus intraradices (see [START_REF] Stockinger | Glomus intraradices DAOM197198', a model fungus in arbuscular mycorrhiza research, is not Glomus intraradices[END_REF] for discussion on G. intraradices reclassification 53), Glomus custos, and Glomus clarum. These AM species were chosen to limit phylogenetic differences in fungal life-history traits 54. G. intraradices has been shown to provide beneficial P uptake in Medicago truncatula 33. The use of three different AM species also ensures a range of cooperativeness among the symbionts. The inocula used in the two experiments consisted of single-species inoculum produced in in vitro root cultures (provided by S.L. Biotechnologia Ecologica, Granada, Spain) or a mixture of equal proportions of all three inocula. The inoculations consisted of an injection of 1 mL of inoculum directly above the roots, administered when the plants had roots of 0.5 to 1 cm in length.

Experimental conditions
Experiment 1 was designed to test the foraging and specialization responses of G. hederacea to the heterogeneous distribution of AM fungi. Experiment 2 tested the effect of the AM fungal species on the plant traits involved in these responses. Both experiments were carried out with cultures grown on the same sterile substrate (50% sand, 50% vermiculite) in a climate-controlled chamber with a diurnal cycle of 12 h day/12 h night at 20°C. Plants were watered with deionized water every two days to control nutrient availability. Necessary nutrients were supplied by watering the plants every 10 days with a fertilizing Hoagland's solution with strongly reduced phosphorus content to ensure ideal conditions for mycorrhization (i.e. phosphorus stress) 55,56,57. At each watering, the volumes of deionized water and fertilizing solution per pot were 25 mL and 250 mL for the first and second experiments, respectively. We also controlled nutrient accumulation during the experimental period by using pierced pots that allowed evacuation of the excess watering solution. To prevent nutrient enrichment due to the inoculum, AM fungi-free pots were also inoculated with a sterilized inoculum (autoclaved at 100°C for five minutes).

Experiment 1: Effect of heterogeneous AM fungal distribution on G. hederacea foraging and specialization responses
The responses of G. hederacea to four different spatial distributions of AM fungi were tested. G.
hederacea was grown in a series of 11 consecutive pots: two homogeneous treatments with the presence (P) or absence (A) of AM fungi in all pots, and two heterogeneous treatments with two patches of 5 pots, either presence then absence (PA) or absence then presence (AP) (Fig. 1). The two latter treatments were designed to take into account a potential effect of ramet age on the plant's response to heterogeneity. These treatments were replicated for 10 clones of Glechoma hederacea (see Methods section "Biological material" for details on the plants used). Each clone was grown in plastic pots (8 × 8 × 7 cm) filled with sterile substrate. Only one ramet was allowed to root in each pot, and plant growth was oriented in a line by removing lateral ramifications. The initial ramet, in all treatments, was planted in a pot without AM fungi. For each treatment, the inoculum consisted of a mixture of the three AM fungal species (G. clarum, G. custos and G. intraradices). Inoculations started with the second pot of each line, which actually contained the fourth ramet of the clone (exceptionally, the first three ramets rooted in the same first pot due to internode shortness, see Fig. 1). Inoculations were administered for each ramet separately, when the ramet had roots 0.5 to 1 cm in length, to avoid a ramet-age effect on the AM fungal colonization process. The clones were harvested when the final ramet (number 13) had rooted in the 11th pot. This ensured that each clone had 10 points for sampling environmental quality. The 5th, 6th, 10th and 11th ramets of each clone in the pot line (Fig. 1) were used for statistical analyses. These ramets corresponded to the second and third ramets experiencing the current patch quality. Indeed, Louâpre et al. (2012) emphasized the role of the "past experience" of the clone in developing a plastic response. The choice of these four ramets thus ensured that the clone had enough sampling points to assess the quality of its habitat, i.e. the patches where AM fungi were present or absent in the heterogeneous treatments, and to adjust accordingly when initiating new ramets 35. Each ramet was carefully washed after harvesting. The foraging response was assessed by measuring the length of the internode immediately following the ramet. An aggregation of ramets with shortened internodes was expected in patches where AM fungi were present, and an avoidance of patches, i.e. the production of longer internodes, was expected where AM fungi were absent. Modifications in ramification production linked to the treatment were checked by recording the number of ramifications produced by the ramets throughout the experiment. The specialization response was examined by measuring the root/shoot ratio (R/S), i.e. the biomass allocated to the below- and above-ground resource acquisition systems, after separating the roots and shoots and oven-drying for 72 h at 65°C. We expected a higher R/S ratio in patches where AM fungi were present than in patches where they were absent. Clone performance was assessed from (i) the total biomass of the clone, calculated as the sum of ramet roots, shoots and stolons after oven-drying for 72 h at 65°C, and (ii) the growth rate, calculated as the number of days needed for the clone to develop the 10 sampling ramets, i.e. the number of days between rooting of the 4th ramet and final harvesting.
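To make the derived variables of experiment 1 concrete, the following short R sketch shows how the specialization and performance metrics described above could be computed from harvest data. This is a minimal illustration under stated assumptions, not the thesis code: the data frames and column names (dat, root_dw, clones, harvest_date, etc.) are hypothetical.

```r
# Hypothetical sketch: computing the derived variables of experiment 1.
# 'dat' holds one row per harvested ramet, with oven-dried biomasses (g)
# in root_dw, shoot_dw and stolon_dw, plus a clone identifier 'clone'.

# Specialization metric: root/shoot (R/S) biomass ratio per ramet
dat$rs_ratio <- dat$root_dw / dat$shoot_dw

# Performance metric 1: total clone biomass (roots + shoots + stolons)
dat$total_dw <- dat$root_dw + dat$shoot_dw + dat$stolon_dw
clone_biomass <- aggregate(total_dw ~ clone, data = dat, FUN = sum)

# Performance metric 2: growth rate as the number of days between rooting
# of the 4th ramet and final harvest; 'clones' holds one row per clone
# with the two dates stored as Date objects.
clones$growth_days <- as.numeric(clones$harvest_date - clones$rooting4_date)
```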
The effects of individual AM fungal species on G. hederacea foraging and specialization traits were tested using four culture treatments: 1) no AM fungi, 2) with Glomus custos, 3) with Glomus intraradices, and 4) with Glomus clarum. Each treatment was replicated eight times with four related ramets assigned to each treatment replicate (32 clones in total), to control for plant-genotype effects. The initial ramet of each clone had previously been cultivated on sterile substrate to ensure root system development and facilitate survival after transplanting. The initial ramets were then transplanted into pots (27.5 × 12 × 35 cm) filled with substrate. The AM fungal inoculations consisted of three injections of 1 mL of inoculum directly onto the roots of the first three rooted ramets, to ensure colonization of the whole pot. The plants were harvested after six weeks. The following traits involved in foraging were measured: (i) the longest primary stolon length (of order 1), as an indicator of the maximum spreading distance of space colonization, and (ii) the number of ramifications, as an indicator of lateral spreading and clone densification. We also measured biomass allocation to the roots, shoots and stolons at the clone level, i.e. traits involved in the specialization response, after oven-drying for 72h at 65°C. Plant performance for the entire clone was determined from: (i) the total biomass, calculated as the sum of the dry weights of the shoots, roots and stolons after oven-drying for 72h at 65°C, and (ii) the number of ramets, i.e. the number of potential descendants. Performance was expected to be higher in pots inoculated with fungi and to differ depending on the fungal species.

Statistical analysis

For experiment 1, to test whether G. hederacea develops a plastic foraging (internode length) or specialization (R/S ratio) response to the heterogeneous distribution of AM fungi, ANOVA analyses were performed using the linear mixed-effects model procedure in R 3.1.3 58 with the packages "nlme" 59 and "car" 60. Ramets of the same age were compared between genotypes to control for a possible effect of ramet age. For experiment 2, to determine whether the species of AM fungus induced changes in plant traits and performance, ANOVA analyses were performed using linear mixed models with the same R packages and version described above. Resource allocation was tested using the total clone biomass (summed dry weights of shoots, roots and stolons) as a covariate, to take into account the trait variance associated with clone growth and performance. For both experiments, genotype-induced variance and data dependency were controlled by considering the treatment (four modalities) as a fixed factor and the plant-clone genotype as a random factor. The effect of genotype was assessed by comparing the intra- and inter-genotype variance and was considered significant when the inter-genotype variance was strictly greater than the intra-genotype variance. When a significant effect of treatment was detected by ANOVA, post hoc contrast tests were performed using the "doBy" package 61 to test for significant differences between modalities. When necessary, the normality of the residuals was ensured by log-transforming the data.
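A minimal sketch of this mixed-model analysis in R (the language and packages cited above) is given below; it is illustrative only, and the data frame dat and its column names (internode_length, treatment, genotype) are hypothetical, as the original scripts are not reproduced in the text.

library(nlme)
library(car)
library(doBy)

# Fixed effect of treatment, random intercept per clone genotype,
# as described for experiment 1 (internode length, log-transformed
# when needed to normalize the residuals).
m <- lme(log(internode_length) ~ treatment, random = ~ 1 | genotype,
         data = dat, method = "REML")
Anova(m)        # ANOVA table for the treatment effect
VarCorr(m)      # inter-genotype (random) vs intra-genotype (residual) variance

# Post hoc contrast between two treatment modalities (here the 3rd
# and 4th levels of a 4-level treatment, e.g. PA vs P), following
# the "doBy" usage described in the text.
esticon(m, c(0, 0, -1, 1))

Treating the clone genotype as a random intercept accounts for the non-independence of ramets belonging to the same clone; the same model structure applies to the experiment 2 traits, with total clone biomass added as a covariate.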
RESULTS

In both experiments, variation in G. hederacea traits was not significantly influenced by plant genotype (i.e. the inter-genotypic variance was not greater than the intra-genotypic variance).

Experiment 1: Effect of heterogeneous AM fungal distribution on G. hederacea foraging and specialization responses

The hypothesis of modified foraging and specialization responses of Glechoma hederacea to the patchiness of AM fungal presence was tested by comparing the internode lengths and the R/S ratios between the treatments for the 5th, 6th, 10th and 11th ramets (see Methods for details on ramet selection and experimental design). We found a significant effect of the AM fungal treatment on the 10th internode length (P=0.005; F=5.74) (Tab. 1, Fig. 2), with a longer internode in the PA treatment (AM fungi present then absent) than in the absence (A) and presence (P) treatments. Conversely, no significant effect was found for the 5th ramets (P=0.71; F=0.45) or the 6th ramets (P=0.15; F=1.92) (Fig. 2). The 11th ramets seemed to display the same response patterns as the 10th ramets, but no significant differences were detected between the treatments (P=0.93; F=0.15), due to a partially bimodal distribution of the data in the P treatment, with a few individuals exhibiting longer stolons. In addition, the number of ramifications of the 5th, 6th, 10th and 11th ramets was not significantly affected by treatment (Tab. 1). No changes in the R/S ratio in response to AM fungal treatment were detected in any of the four tested ramets (Tab. 1).

Experiment 2: Effect of AM fungal identity on G. hederacea performance and traits

Resource allocation depended on the identity of the AM fungal species, with Glomus intraradices allocating significantly fewer resources to stolons (Fig. 3) and more to shoots (P=0.019, F=4.24) than plants without AM fungi. The allocation to roots, however, was not dependent on the treatment (P=0.68; F=0.50).

DISCUSSION

The plants did display some foraging behavior in response to AM fungal heterogeneity, as elongation of the internodes was observed in patches without AM fungi after the plant had experienced patches with AM fungi. This behavior corresponds to an avoidance of resource-poor patches, as expected from optimal foraging theory. However, this behavior was only detected at a particular ramet age (10th ramets), indicating a possible role of the ontogenic state in the development of the plastic response 34. This may be due to a "lag time" in the plant's response, based on the need for environmental sampling. Indeed, Louâpre et al. (2012) demonstrated that clonal plants may need a minimum number of sampling points as benchmarks in order to perceive and respond to resource availability 35. In their study, Potentilla reptans and P. anserina started to respond to the treatment after the 5th internode, suggesting a strong effect of patch size. A similar patch size effect had already been demonstrated in modeling studies 10,36. No plastic modifications corresponding to ramet specialization of G. hederacea in response to AM fungal spatial heterogeneity were found either. Contrary to the expectations of the specialization theory, biomass was not preferentially allocated to the roots in patches with AM fungi or to the shoots in patches without AM fungi. This absence of response was recorded for all the ramet ages tested. These results (a mild foraging response and no specialization) lend support to the theory of Onipchenko & Zobel (2000) that species with high mobility do not rely on AM fungi to cope with resource heterogeneity 37. Glechoma, with its high clonal mobility, should thus show little response to AM fungi. However, our results do not fit the literature predictions for the specialization and foraging responses 38.
This divergence may be explained by two alternative hypotheses that are developed in the following sections. The first is linked to an individual effect of the species of AM fungus on plant traits, which may dominate or modify the response to the presence/absence of AM fungi when all three species occur together (experiment 2); the second is linked to reduced physiological integration, either due to a direct effect of AM fungi on this plant trait, or to the absence of a clear contrast between the different patches sensed by the plant. In our second experiment, we demonstrated that the architectural traits involved in the plant's foraging response were not affected by the species of AM fungus tested, consistent with the weak response detected in the first experiment. In contrast, significant changes in resource allocation traits (linked to the specialization response) were detected, depending on the species of AM fungus. Only one species, G. intraradices, induced a change in allocation by the plant in comparison to the treatment without AM fungi, leading to an increased allocation to shoots at the expense of stolons. Modifications of plant phenotype depending on the AM fungal species have already been observed for such traits 39,40. These authors identified a significant effect of Glomus species isolates on branching, stolon length and ramet production in Prunella vulgaris and Prunella grandiflora. In the first analysis of an AM fungal genome, Tisserant et al. (2013) revealed pathways attributed to the synthesis of phytohormones or analogues 41. Such molecules would have a direct effect on the host phenotype. In the individual effect observed here, the plant response in the presence of G. intraradices was coupled with decreased plant performance, due to a diminution of ramet production relative to biomass in this treatment. In turn, the G. custos treatment led to a decrease in the potential number of descendants of the clone. According to experiment 1, root colonization by an inoculum containing the three species had no effect on the plant traits associated with specialization and foraging. This suggests two alternative hypotheses: i) G. intraradices may be less cooperative than G. custos with Glechoma hederacea, and the result is a consequence of the plant's rewarding of the more cooperative fungus 33; and/or ii) root colonization by G. custos or G. clarum buffers the effect of G. intraradices due to a 'priority effect' (i.e. the order of arrival during colonization as a key determinant of fungal community structure in roots) 41. To test this, the mycorrhization intensity of the three AM fungal species inoculated in the first experiment would need to be assessed by qPCR. Alternatively, the combined effects of the three AM fungal species on the plant phenotype might result in the environment not being perceived as heterogeneous by the plant. This hypothesis is developed in the following section. The intraclonal plasticity predicted by the foraging and division of labor theories is based on the ability of ramets to sense environmental heterogeneity, to share information and resources within the clonal network, and to locally adapt and optimize the performance of the whole clone.
The weak response of G. hederacea to AM fungal heterogeneity could thus be explained by a decrease in physiological integration that reduces the level of resource-sharing within the clone and prevents the plant from developing an optimized foraging or specialization response. This diminution could initially be due to the presence of AM fungi. Only a few studies have examined the effect of AM fungi on the degree of integration 43. These authors demonstrated that AM fungi induced a decrease of physiological integration in the clonal plant Trifolium repens when grown in a heterogeneous environment. This effect depended on the presence and richness of AM fungal species. Whether this observed diminution of physiological integration is due to a direct manipulation of the host plant phenotype by the fungi remains, as far as we know, unknown. Secondly, this diminution may depend on the individual plant's perception of environmental conditions, which might be sensed as homogeneous because the patch contrast is weaker than expected. A reduction of plant integration is expected when the maintenance of high physiological integration is more costly than beneficial 44,45, for example when the environment is resource-rich, not spatially variable 46 or not sufficiently contrasted 10,47. Such a reduced contrast might result from the effect of the three AM fungal species on the plant phenotype (when used as a mixed inoculum), although this is unlikely. A more probable mechanism of environment homogenization is AM fungal transfer through the stolons. Scanning electron microscopy of the clone cultures (see protocol in supplementary material) revealed the presence of hyphae on the stolon surface (Fig. 5). In addition, several cells close to the external surface of the stolon cross-section were invaded by structures which could be interpreted as fungi. DNA sequencing of stolon samples (Fig. 6) confirmed these results and demonstrated the presence of AM fungi in stolons. This suggests that fungi can be transferred from one ramet to another, at least by colonization of the stolon surface (as shown in Fig. 5a) and/or within the stolon (Fig. 5b). Whether fungi are passively or actively transferred through the plant's stolon tissues, and hence to all related ramets, remains an open question. Further studies are therefore needed to confirm these fungal transfers within plant clones and to measure their intensity in contrasted environments. Studies of the response of clonal plants to environmental heterogeneity have classically focused on abiotic heterogeneity 48,49. Our study is the first to investigate the clonal response to a heterogeneous distribution of AM fungi, based on the assumption that AM fungi can be regarded as a resource for the plant. However, in response to the heterogeneous distribution of AM fungi, G. hederacea clones displayed only a weak foraging response and no specialization, which suggests that clones neither aggregate preferentially in patches with AM fungi nor maximize the proportion of their roots in contact with AM fungi. We provide a first explanation by highlighting the impact of AM fungal identity on plant phenotypes, and more particularly on the allocation traits involved in specialization. More importantly, we provide evidence that stolons might be vectors for the transfer of micro-organisms between ramets, thereby buffering (through this dispersion of fungi) the initial heterogeneous distribution.
If this is true, stolons will have to be regarded in a different way, as ecological corridors for the dispersion of micro-organisms allowing a continuity of partnership along the clone. Considering the plant as a holobiont 31,50, this novel view of stolon function is expected to stimulate new ideas and understanding about the heritability of microbiota in clonal plants.

Acknowledgements

This work was supported by a grant from the CNRS-EC2CO program (MIME project), the CNRS-PEPS program (MYCOLAND project) and by the French ministry for research and higher education. We thank S. Le Panse for performing the microscopy analyses and L. Leclercq for performing preliminary experiments. We are also grateful to D. Warwick for helpful comments and suggested modifications on a previous version of the manuscript.

Author contributions

NV, AKB, PV and CM conceived the ideas and experimental design. NV, AKB and CM performed the experiments. NV performed the data analyses. NV, AKB, PV and CM did the interpretation and writing of the publication.

Competing Financial Interests statement

No competing financial interests.

Chapter II: The heritability of the plant microbiota, toward the meta-holobiont concept

II.1 Introduction

Scientific context

The findings of the first chapter suggested that the elongation of clonal plant stolons is accompanied by the transmission of a 'cohort' of microorganisms, including arbuscular mycorrhizal fungi, from the mother ramet to spatially distant descendants (i.e. other developing ramets). This chapter aims at experimentally testing this hypothesis and at reviewing its consequences for both plant performance and the conceptual understanding of plant functioning and evolution. Previous studies have evidenced the existence of vertical inheritance of endophytic symbionts colonizing host plants through seeds. The best described example of such transmission is the stress-protective endophyte Neotyphodium coenophialum, which colonizes plant seeds and is transmitted to descendants in several grass species (Clay & Schardl, 2002; Selosse et al., 2004). However, this process is the only known example of true vertical transmission in plants, and it represents the transfer of only a few plant-associated microorganisms at a single moment of the plant's life. In clonal plant networks, information and resources can be shared within the physical clone (i.e. physiological integration; Oborny et al., 2001). An additional level of integration may then occur through the sharing of microorganisms within the clonal network, as previously proposed by Stuefer et al. (2004) and suggested by our previous results (Article II). From a theoretical point of view, vertical and pseudo-vertical transmissions (i.e. inheritance of conspecific symbionts from parents to offspring sharing the same environment; Wilkinson, 1997) are advantageous because they limit the costs of foraging for suitable symbionts (Wilkinson & Sherratt, 2001). In this context, microbiota heritability allows the plant to ensure environmental quality for its progeny. Vertical transmission would thus permit a "continuity of partnership" between the plant and its symbionts (Zilber-Rosenberg & Rosenberg, 2008). Recently, the understanding of host-symbiont interactions has evolved toward a holistic perception of the host and its associated microorganisms. The holobiont theory provides a theoretical framework for the study of host-symbiont interactions (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016).
In the hologenome, a microorganism can be equated to a gene in the genome. Just as genes are inherited during sexual reproduction, a key parameter for the evolution of the hologenome is the heritability of microorganisms between host generations. In this context, the mechanism suggested in the previous chapter would place clonal plants within the range of organisms that can be considered as holobionts. Furthermore, the network structure of clonal plants suggests another layer of complexity in holobiont assembly, since holobionts would be susceptible to share a part of their microbiota within the clonal network. There is thus a need to develop an extension of the holobiont concept for clonal organisms organized as networks: "the meta-holobiont concept".

Objectives of the chapter

This chapter aims at testing the hypothesis of the heritability of a core microbiota in clonal plants and intends to extend the current theories regarding holobiont assembly and evolution to clonal plant networks. More precisely, we address the following questions: 1) Is there a transmission of a cohort of microorganisms in the clonal plant G. hederacea through stolon elongation? Is this cohort composed of specific microorganisms, thus constituting an inherited core microbiota? (Article III) 2) To what extent does this mechanism redefine the holobiont theory for clonal organisms organized in networks? What are the perspectives of clonal plants as model organisms for the study of holobiont assembly? (Article IV)

Methods

This chapter is composed of two articles (III and IV). The first is based on an experimental approach to demonstrate the existence of a microbiota heritability mechanism in clonal plants. The second is an opinion paper addressing the consequences of this mechanism for the understanding of holobiont assembly in clonal networks. We tested the hypothesis of microorganism transmission to progeny through clonal integration with an experimental approach using the clonal herbaceous species Glechoma hederacea. Plants from 10 ecotypes were grown under controlled conditions in individual pots. The mother pot was filled with field soil to provide an initial microbial inoculum, and newly emitted ramets were forced to root in separate pots containing sterilized substrate. To detect endophytic microorganisms transferred from mother to daughter ramets, we sampled the roots of both the mother ramets (growing in pots containing microorganisms) and the daughter ramets (growing in sterile substrate without microorganisms), as well as the internodes connecting them. High-throughput amplicon sequencing of 16S and 18S rRNA genes was used to detect and identify Bacteria, Archaea and Fungi within the root endosphere and the internodes. We then wrote an opinion paper addressing the significance of the above heritability mechanism for our understanding of plant fitness and evolution. In this review, we mobilize knowledge of clonal plant networks to propose hypotheses on whether they may use microorganism transfer as a tool for adaptation. We also mobilize knowledge from network theory and meta-community ecology to provide directions for future research on clonal networks in relation to microbiota assembly and plant fitness.

Main results

In our experiment (Article III), we detected the presence of archaea, bacteria and fungi within the root endosphere of the mother ramets. Some of these microorganisms were also found within the stolons and the roots of the daughter ramets, comprising fungi and bacteria but not Archaea.
We thus demonstrated the heritability of a part of the plant microbiota to its progeny. In addition, the endophytic communities of daughter ramet roots were found to be similar to each other, while they differed from the original mother communities. This demonstrates a filtration process during the transmission of the microbiota (decrease in richness and homogenization of the transmitted communities). Our results confirm the hypothesis of microorganism transmission between ramets, constituting a heritable core microbiota (Article III). Whether the transmission could occur in the reverse direction, i.e. from the daughter to the mother ramets, remains an open question. Microbiota transmission to the progeny is advantageous because it provides suitable symbionts and thus ensures habitat quality for the progeny (Wilkinson & Sherratt, 2001). Our results open new questions on the ability of clonal plants to preferentially select and transmit particular sets of microorganisms. Indeed, we observed that microorganisms were filtered during the transmission process, but the modalities of this filtration remain unknown. Microorganisms could be filtered based on their ability to colonize the internodes (i.e. dispersal abilities), or alternatively they could be filtered depending on the functions they provide to the mother ramet. The latter case would imply an active filtering by the plant. The expectation is thus that beneficial organisms, such as cooperative AM fungi, would be preferentially transmitted to the progeny. Our results invite a revision of our understanding of clonal plants. In particular, an extension of the holobiont concept toward the meta-holobiont for clonal plant networks is introduced (Article IV). In addition, we propose that the sharing of microorganisms in clonal networks should be approached through the network theory and meta-community frameworks. The network theory framework should provide insights into how the network structure and the associated sharing of microorganisms could enhance the resilience and performance of the network as a whole. The meta-community framework should help to understand the impact of microorganism transmission on the assembly and survival of microbial communities and on the plant's management of useful microorganisms.

Introduction

All living plants experience interactions with ectospheric and endospheric microorganisms and are known to harbor a great diversity of symbionts, including fungi (Vandenkoornhuyse et al., 2002; Lê Van et al., 2017), bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012; Schlaeppi et al., 2014) and Archaea (Edwards et al., 2015), which collectively form the plant microbiota. This microbiota performs ecological functions that extend the plant's ability to adapt to environmental conditions (Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015). Studies using maize cultivars demonstrated that genetic control of the composition of the rhizosphere microbiota by the host plant was detectable, even if limited (Peiffer et al., 2013). Plant microbiota composition is thus, at least in part, not only a consequence of the pool of microorganisms available for recruitment in the surrounding soil but also of plant selective recruitment within the endosphere.
This filtering system includes plant defense mechanisms (Berendsen et al., 2012; Yamada et al., 2016) and the promotion of the best cooperators through a nutrient embargo toward less beneficial fungi (Vandenkoornhuyse et al., 2015; Kiers et al., 2011). From a theoretical point of view, vertical and pseudo-vertical transmissions (i.e. inheritance of conspecific symbionts from parents to offspring sharing the same environment; Wilkinson et al., 1997) are advantageous because they limit the costs of foraging for suitable symbionts (Wilkinson and Sherratt, 2001). Vertical transmission would thus permit a "continuity of partnership" between the plant and its symbionts (Zilber-Rosenberg and Rosenberg, 2008). In this context, microbiota heritability is also a way for the plant to ensure environmental quality for its progeny. In natura, plants can reproduce either by seed production or by clonal multiplication (van Groenendael and de Kroon, 1990; Hendricks et al., 1997). Some studies have evidenced a vertical inheritance of endophytic symbionts colonizing host plants through the seeds: the best-known example is perhaps the transmission of the stress-protective endophyte Neotyphodium coenophialum to the descendants in several grass species (Clay and Schardl, 2002; Selosse et al., 2004). Recent findings suggest that the vegetative elongation of the horizontal stems forming the clonal plant network is accompanied by the transmission of a 'cohort' of microorganisms, including arbuscular mycorrhizal fungi, to spatially distant descendants (Vannier et al., 2016). This form of heritability of microorganisms to the plant progeny is neither environmentally (i.e. through environment sharing) nor sexually mediated. Such a process would support the niche construction of the plant progeny, while microorganisms could benefit from a selective dispersal vector allowing them to reach a similar, and hence suitable, host. Clonal plants have been demonstrated to share information and resources within the physical clone (i.e. physiological integration; Oborny et al., 2001). An additional level of integration might occur through the sharing of microorganisms within the clonal network, as previously proposed by Stuefer et al. (2004). We tested this hypothesis of microorganism transmission to progeny through clonal integration and addressed the new concept of core microbiota heritability in clonal plants, using the clonal herbaceous species Glechoma hederacea as a model. The growth form of this plant consists of a network of ramets connected through horizontal stems (i.e. aerial stolons), one of the most widespread forms of clonality. Plants from 10 ecotypes were grown under controlled conditions. First, a juvenile ramet without roots (mother ramet) was transplanted into a pot containing field soil. Plant growth was oriented by forcing the newly emitted ramets (daughter ramets) of the two ramifications to root into separate pots containing sterilized substrate (Figure 1). Our aim was to detect the endophytic microorganisms present in the mother ramet roots and transferred to the daughter ramets through the clone stolons. High-throughput amplicon sequencing of 16S and 18S rRNA genes was used to detect and identify Bacteria, Archaea and Fungi within the root endosphere and the stolon internodes.
Control pots randomly distributed in the experiment were also analyzed in order to remove from the dataset all operational taxonomic units (OTUs) that could not be attributed to a plant-mediated transfer of microorganisms (see methods in supplementary information).

Material and methods

Biological material

We used the clonal, perennial herb Glechoma hederacea, a common model for studying clonal plant responses to environmental constraints (Slade and Hutchings, 1987; Birch and Hutchings, 1994; Stuefer et al., 1996). G. hederacea clones produce new erect shoots at nodes placed at regular intervals of 5 to 10 cm (the internodes) on plagiotropic monopodial stolons (i.e. aboveground connections). Each ramet consists of a node with two leaves, a root system and two axillary buds. In climate chambers under controlled conditions and in the absence of enriched substrate, G. hederacea does not invest in flowering but displays only vegetative growth (Birch and Hutchings, 1994). The ramets used in our experiments were obtained from the vegetative multiplication of 10 clonal fragments taken at 10 different locations separated by at least 1 km, to sample different ecotypes. Plants were grown for three months under controlled conditions to limit parental effects related to their geographic location and habitats (Dyer et al., 2010). Vegetative multiplication was carried out on a sterilized substrate (50% sand and 50% vermiculite, autoclaved twice at 120°C for 1h).

Experimental conditions

Experiments were carried out with cultures grown on the same sterile substrate (50% sand, 50% vermiculite) in a climate-controlled chamber with a diurnal cycle of 12h day/12h night at 20°C. Plants were watered with deionized water every two days to avoid a bias in nutrient availability. Necessary nutrients were supplied by watering the plants every 10 days with a low-phosphorus watering solution to favor mycorrhization (Oborny et al., 2001). At each watering, the volumes of deionized water and fertilizing solution per pot were 25 mL. To test for the transmission of microorganisms within the clonal network, we transplanted an initial ramet (mother ramet) into a pot with field soil and oriented its growth to force the newly emitted ramets to root in separate individual pots containing sterilized substrate (Figure 1). During the experiment, secondary ramifications of daughter ramets were removed to limit spread and confine the growth of the plant to a simple network of five ramets, comprising the mother ramet and four daughter ramets equally distributed between two stolons (two on each primary stolon). By using two stolons we could test whether the potential transmission was systematic within the clone or whether it varied between stolons (i.e. transfer of random organisms from the mother pool). The transplanted clonal unit (i.e. the mother ramet) consisted of a mature ramet (leaves and axillary buds) with one connective stolon internode (to provide resources to support ramet survival; Huber and Stuefer, 1997), and without roots (to avoid prior colonization of the roots by micro-organisms). Field soil was collected from a grassland harboring native Glechoma hederacea and located in the experimental garden of the University of Rennes. The soil was then sieved through a 0.5 cm mesh to remove stones and roots.
The experiment was stopped and the ramets harvested when the clone had reached the stage of a mother ramet and four rooted daughter ramets. The composition of endospheric microorganisms in the root and internode samples was analyzed by separating the clonal network into stolon internodes, roots and shoots for both the mother and the daughter ramets. Each internode and root sample was meticulously washed, first with water, then with a 1% Triton X-100 (Sigma) solution (three times), and lastly with sterile water (five times). This procedure ensured the removal of ectospheric microorganisms (Vandenkoornhuyse et al., 2007). In order to control for potential contamination, three control pots were also randomized into the experimental design. These pots were filled with the same sterile substrate and watered in the same way as the other pots. Substrate from these control pots was sampled at the end of the experiment so that all contaminant microorganisms that were not plant-transmitted could be removed from the sequence analyses and from all subsequent statistical analyses. All root, internode and substrate samples were frozen at -20°C before DNA extraction and subsequent molecular work.

DNA extraction and amplification

DNA was extracted from the cleaned roots and internodes, as well as from the substrate of the control pots, using the DNeasy Plant Mini Kit (Qiagen). The 18S rRNA gene was PCR-amplified using the fungal primers NS22b (5'-AATTAAGCAGACAAATCACT-3') and SSU817 (5'-TTAGCATGGAATAATRRAATAGGA-3') (Lê Van et al., 2017). The conditions for this PCR comprised an initial denaturation step at 95°C for 4 min, followed by 35 cycles of 95°C for 30 s, 54°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 7 min. The 16S rRNA gene was amplified using the bacterial primers 799F (5'-AACMGGATTAGATACCCKG-3') and 1223R (5'-CCATTGTAGTACGTGTGTA-3'). The conditions for this PCR consisted of an initial denaturation step at 94°C for 4 min, followed by 32 cycles of 94°C for 30 s, 54°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 10 min. The 16S rRNA gene was also amplified using a nested PCR with Archaea primers. The first PCR primers were Wo_17F (5'-ATTCYGGTTGATCCYGSCGRG-3') and Ar_958R (5'-YCCGGCGTTGAMTCCAATT-3'), and the PCR conditions comprised an initial denaturation step at 94°C for 2 min, followed by 35 cycles of 94°C for 30 s, 57.5°C for 50 s and 72°C for 50 s, with a final extension step at 72°C for 10 min. The second PCR primers were Ar_109F (5'-ACKGCTCAGTAACACGT-3') and Ar_915R (5'-GTGCTCCCCCGCCAATTCCT-3'), and the PCR conditions comprised an initial denaturation step at 94°C for 4 min, followed by 32 cycles of 94°C for 30 s, 57°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 10 min. All amplification reactions were prepared using Illumina RTG PCR beads with 2 µL of extracted DNA, and the target PCR products were visualized by agarose gel electrophoresis.

Sequencing and data trimming

All PCR amplification products were purified using the Agencourt AMPure XP kit. After purification, the amplification products were quantified and their quality checked using an Agilent High Sensitivity DNA chip for Bioanalyzer and Invitrogen fluorimetric quantification. All PCR amplification products were then subjected to an end-repair step and adaptor ligation using the NEB library preparation kit.
Multiplexing was done with a PCR step using NEBNext Ultra 2 multiplex oligos (dual index). Multiplexed products were then quantified and quality-checked using an Agilent High Sensitivity DNA chip for Bioanalyzer and quantitative PCR with a Wafergen SmartChip. Amplicon libraries were pooled at equimolar concentration and paired-end sequenced (2x250 bp) on an Illumina MiSeq instrument. Data trimming consisted of different steps: primer removal (Cutadapt) and classical sequence quality filtering. An additional step consisted of checking the sequence orientation using a homemade script. This stringent data trimming resulted in 9,592,312 reads. Trimmed sequences were then analyzed using the FROGS pipeline (Escudié et al., 2018) (SIGENAE, http://www.sigenae.org/). The FROGS pre-process was performed with a custom protocol (Kozich et al., 2013) for Archaea and Fungi and with the FROGS standard protocol for bacterial reads. In this pre-process, bacterial reads were assembled using FLASH (Magoč and Salzberg, 2011). The clustering step was performed with SWARM, which avoids, in an innovative manner, the use of identity thresholds to group sequences into OTUs (Mahé et al., 2014). Following the pipeline designer's recommendations, a de-noising step was performed with a maximum distance of aggregation of 1, followed by a second step with a maximum distance of aggregation of 3. Chimera were filtered out with the FROGS remove-chimera tool. A filter was also applied to keep only those OTUs with sequences in at least three samples, to avoid the presence of artificial OTUs. All statistical analyses were also run with a five-sample filter and the results were similar. We herein present only the R2 Fungi and R1 Archaea results, based on affiliation statistics that indicated a better quality of affiliation. OTU affiliation was performed using Silva 123 16S for Bacteria and Archaea and Silva 123 18S for Fungi. OTUs were then filtered based on the quality of the affiliations, with a threshold of at least 95% coverage and 95% BLAST identity. The stringent parameters used in FROGS enabled us to finally obtain 4,068,634 bacterial reads, 2,222,950 fungal reads and 113,008 archaeal reads. Rarefaction curves were generated using R (package vegan 2.2-1; Oksanen et al., 2015) to determine whether the sequencing depth was sufficient to cover the expected number of OTUs. The sequencing depth was high enough to describe the microbial communities in detail (Supplementary Figure S1). To homogenize the number of reads per sample for subsequent statistical analyses, samples were normalized to the same number of reads based on graphical observation of the rarefaction curves, using the same R package. During this step, samples with fewer reads than the normalization value were removed from the dataset. All OTUs found in the soil of the control pots were then removed from the dataset. Sequence data are available under accession number PRJEB20603 at the European Nucleotide Archive.
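As an illustration of the rarefaction and normalization steps described above, the following R sketch uses the vegan package cited in the text; the object otu (a samples x OTUs count matrix) and the depth value are hypothetical placeholders, since the actual normalization value is not given.

library(vegan)  # version 2.2-1 cited in the text

# Rarefaction curves to check that sequencing depth covers the
# expected OTU richness (cf. Supplementary Figure S1).
rarecurve(otu, step = 1000, xlab = "Number of reads", ylab = "OTU richness")

# Normalize all samples to a common depth chosen from the curves;
# samples below that depth are discarded, as described above.
depth <- 10000                           # assumed normalization value
otu <- otu[rowSums(otu) >= depth, ]
otu_norm <- rrarefy(otu, sample = depth)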
Statistical analyses

The position and stolon of each ramet within the network were recorded as two factors for the statistical analyses. We considered three positions in the network: the mother ramet, the 1st daughter ramet and the 2nd daughter ramet. The stolon was considered as a factor with two levels: the 1st and the 2nd stolon emitted during growth. We analyzed the heritability, richness and composition of microorganism assemblages in the G. hederacea ecotypes. Fungal and bacterial assemblages were analyzed separately. No statistical analyses were performed on the Archaea data, as Archaea were found in the mother ramet roots and in the stolon internodes following the mother ramets, but not in the daughter roots. All statistical analyses were performed using the R software (R Development Core Team).

Heritability calculation and null model construction

Heritability was measured for each taxonomic group in each ecotype as the number of OTUs present in the mother ramet and shared by at least two daughter ramets (we also tested the heritability calculation for three and four daughter ramets). To determine whether the observed heritability could be expected stochastically, we compared the observed heritability against a null model. This procedure is designed to test the null hypothesis that species from the mother ramets are randomly distributed within each daughter ramet and do not reflect the selection or dispersal of a particular set of species from the mother pool. It allows assessment of the probability that the observed heritability indexes are greater than would be expected under a null distribution (Mason et al., 2008). We built a null model for each of the 10 ecotypes by generating daughter ramet communities through random sampling of microorganism species within the mother's pool. The probability of sampling was the same for all species in the mother's pool (i.e. independent of their initial abundance in the mother roots). Only species identity was changed from one model to another, while species richness within the daughter communities remained unmodified. For each daughter ramet community within the 10 ecotypes, 9999 virtual communities were randomly sampled from the mother's pool, and the heritability indexes were calculated for each of these models. Results were similar when a less stringent heritability was used (e.g. OTUs present in at least one daughter ramet), but the heritability could not be made more stringent because this would create null communities with zero inherited OTUs in most cases and thus overestimate the difference between the observed and random heritability values. For each ecotype, we computed the Standardized Effect Size (SES), calculated as described by Gotelli; negative SES values indicate lower heritability than expected by chance, whereas positive SES values reveal higher heritability than expected by chance (heritability of microorganisms from the mother ramet). A one-sample t-test with the alternative hypothesis "greater" was then applied to the SES values to determine whether they were significantly greater than zero, after checking for data normality.
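The heritability index and null model described above can be sketched in R as follows; the objects mother (a vector of OTU identifiers found in the mother roots) and daughters (a list of OTU-identifier vectors, one per daughter ramet) are hypothetical, and the SES is computed with the standard (observed minus null mean) / null standard deviation formula.

# Number of mother OTUs found in at least 'min_daughters' daughter ramets.
heritability <- function(mother, daughters, min_daughters = 2) {
  shared <- table(unlist(lapply(daughters, intersect, mother)))
  sum(shared >= min_daughters)
}

obs <- heritability(mother, daughters)

# Null model: daughter communities redrawn at random (equal sampling
# probabilities) from the mother's pool, keeping richness constant.
null <- replicate(9999, {
  rand <- lapply(daughters, function(d) sample(mother, length(d)))
  heritability(mother, rand)
})

# Standardized effect size; values above 0 indicate more heritability
# than expected at random.
ses <- (obs - mean(null)) / sd(null)

# Across ecotypes, the SES values are then tested against zero:
# t.test(ses_values, alternative = "greater")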
Analyses of richness through linear mixed models

Richness was calculated as the number of OTUs present in a sample, separately for bacteria and fungi, at the scale of the whole community and at the scale of the phylum (OTU richness in each phylum). We chose these two scales to detect general patterns in microorganism richness and also to detect potential variation in these patterns between taxonomic groups (phyla). We conducted our analyses at the phylum scale rather than at a more precise taxonomic level because we were constrained by the sequence affiliation, which produced multi-affiliation of OTUs at lower taxonomic levels. To test whether richness was affected by the sample position in the clonal network, we performed linear mixed-effects models using the R packages "nlme" (Pinheiro et al., 2015) and "car" (Fox and Weisberg, 2011). We initially tested for differences in richness between mother and daughter ramets. We then tested for differences in richness between 1st and 2nd daughter ramets by considering the position in the clone (1st daughter or 2nd daughter) and the stolon (1st stolon, 2nd stolon) within the plant ecotype as explanatory variables. Ecotype-induced variance and data dependency were controlled by considering the position in the clone (mother or daughter) and the stolon as fixed factors and the plant ecotype as a random factor in the mixed models. The normality of the model residuals was verified using a graphical representation of the residuals, and the data were log- or square-root-transformed when necessary. For several fungal and bacterial groups exhibiting low abundances in the samples, the models testing differences in richness did not respect the normality of the residuals, and these results are therefore not presented.

Analysis of microorganism community composition

A PLS-DA analysis was used to test whether the microbiota composition varied significantly between mother and daughter ramets and between daughter ramets. The PLS-DA consists of a Partial Least Squares (PLS) regression analysis in which the response variable is categorical (y-block, describing the position in the ecotype), expressing the class membership of the statistical units (Sjöström et al., 1986; Sabatier et al., 2003; Mancuso et al., 2015). This procedure makes it possible to determine whether the variance of the x-blocks can be significantly explained by the y-block. The x-blocks (OTU abundances) are pre-processed in the PLS-DA analysis using an autoscale algorithm (i.e. columns are centered to zero mean and scaled to unit variance). The PLS-DA procedure includes a cross-validation step producing a p-value that expresses the validity of the PLS-DA method for the dataset. The PLS-DA procedure also reports the statistical sensitivity, indicating the modeling efficiency in the form of the percentage of misclassified samples in the categories accepted by the class model. Our aim in using this model was to test the variance in community composition that could be explained by the position of the ramet in the clone. The entire dataset was subdivided into two or three groups depending on the comparison tested (i.e. mother ramets vs 1st daughter ramets vs 2nd daughter ramets, mother ramets vs all daughter ramets, and 1st daughter ramets vs 2nd daughter ramets).
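The text does not specify which software was used for the PLS-DA; a minimal R sketch with the mixOmics package, one possible implementation, is shown below, assuming otu_norm (the normalized samples x OTUs abundance matrix) and position (a factor: mother, 1st daughter, 2nd daughter) as hypothetical inputs.

library(mixOmics)

# PLS-DA of OTU abundances against ramet position; scale = TRUE
# applies the autoscaling (zero mean, unit variance) described above.
fit <- plsda(X = otu_norm, Y = position, ncomp = 2, scale = TRUE)

# Repeated cross-validation: the per-class classification error rates
# correspond to the misclassification percentages reported by the procedure.
perf_fit <- perf(fit, validation = "Mfold", folds = 5, nrepeat = 50)
perf_fit$error.rate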
Results

Archaeal, bacterial and fungal communities in the roots of Glechoma hederacea

Archaea (Thaumarchaeota), fungi and bacteria were found in the mother ramets. Archaea were not detected in the daughter ramets, but fungi and bacteria were found in the daughter roots (Figure 2). Comparison of the sequences obtained from the roots of mother and daughter ramets revealed a subset of 100% identical reads in both mother and daughter ramets, representing 34% and 15% of the daughter fungal and bacterial reads respectively. Heritability, calculated as the number of OTUs found in the mother and in the roots of at least two daughters, varied from 15 to 374 OTUs (µ = 100.2 ± 118.6) for bacteria and from 0 to 12 OTUs (µ = 6.1 ± 3.63) for fungi, depending on the ecotype. To test whether this observed heritability was higher than would be expected stochastically (i.e. random dispersal of OTUs), we used a null model approach in which the identity of the fungal or bacterial species in the experimental samples was randomized while keeping the OTU richness identical. For each ecotype, we thus generated bacterial and fungal random daughter communities by sampling species from all the mother root communities (regional pool) and compared the observed heritability in our dataset to this distribution of random heritability values. The null model approach indicated that the observed communities displayed significantly higher OTU heritability between the roots of mothers and daughters than expected stochastically (one-sample t-test with alternative hypothesis "greater": P<0.01, t = 3.03, df = 8 for fungi, and P<0.001, t = 6.11, df = 9 for bacteria) (Supplementary Figure S2). In addition to the non-random presence of OTUs in daughter roots, we also found communities of fungi and bacteria in the stolon internodes connecting the ramets in the network (Supplementary Figure S3). These internodes exhibited a phylum richness similar to that observed in the daughter roots. The transmission of bacteria and fungi within the G. hederacea clonal network was thus clearly demonstrated.

Microbial community filtration during transmission

Endophytic microorganisms were strongly filtered during the transmission process. Daughter roots displayed significantly lower fungal OTU richness than mother ramet roots, with mother communities averaging 40 OTUs compared to an average of 10 OTUs in the daughter ramets (linear mixed model, F1,31 = 280, P<0.001; mother ramets: 40±7; daughter ramets: 10±3) (Figure 2; Supplementary Table S1). The same significant pattern was observed for bacteria, with mother communities averaging 800 OTUs compared to an average of 100 OTUs in the daughter ramets (linear mixed model, F1,39 = 410, P<0.001; mother ramets: 800±131; daughter ramets: 100±100) (Figure 2, Supplementary Table S1). The observed 'low' richness of the transmitted communities indicates that the transmitted microbiota is filtered from the original pool (i.e. the mother microbiota). A significant effect of ecotype on the richness of the transmitted microbiota was also found (see Methods for details on the statistics and the random factor used). Comparison of the microorganisms in the roots of mothers and daughters revealed a general decrease in richness for most phyla during the transmission process. The fungal communities colonizing the roots were mostly from the phyla Ascomycota (106 OTUs) and Basidiomycota (39 OTUs) and, to a lesser extent, from the Glomeromycota (24 OTUs), Zygomycota (7 OTUs) and Chytridiomycota (4 OTUs) (Figure 2a). The mean OTU richness of Ascomycota and Glomeromycota was significantly lower in daughter roots than in mother roots, whereas no significant variation was observed in the OTU richness of Basidiomycota (Supplementary Table S1).
This striking observation clearly argues for the presence of a fungus-dependent filtering mechanism. The bacterial communities colonizing the roots were distributed in 3384 OTUs, mostly belonging to the Proteobacteria (2009 OTUs) and Bacteroidetes (715 OTUs), which together represented about 80% of all the sequences, the remaining 20% belonging to 6 additional phyla (Figure 2b). Consistent with the fungi, the bacterial OTU richness was significantly lower in daughter roots than in mother roots for the Proteobacteria, Bacteroidetes, Acidobacteria, Actinobacteria and Firmicutes (Supplementary Table S1). This observation suggests that the bacterial phyla are affected indiscriminately by the filtering mechanism.

The heritability of a core microbiota

The differences in microorganism community composition between mother and daughter roots were assessed using a multi-regression approach with a Partial Least Squares Discriminant Analysis (PLS-DA) procedure (see Material and Methods, supplementary information). The advantage of this analysis is its ability to test a hypothesis based on a grouping factor of the samples in the dataset (i.e. an explanatory factor) and to obtain both the significance of the factor and the part of the variance explained by the factor. With this analysis the entire dataset can be used and most of the variance is conserved, in contrast to NMDS approaches in which distances between samples, such as Bray-Curtis or Jaccard, summarize the variance between samples. Significant differences in the composition of daughter communities compared to mother communities were detected for both fungi (PPLS-DA = 0.001, PMothers vs Daughters < 0.01, explained variance = 87.3%, Figure 3a) and bacteria (PPLS-DA = 0.001, PMothers vs Daughters < 0.01, explained variance = 72.4%, Figure 3b; Supplementary Table S2). These differences in composition between mothers and daughters can be explained by the observed decrease in richness during the transmission process. This result indicates that only a portion of the original pool of microorganisms is transmitted from the mother to the daughters (i.e. a specific set of organisms). To test the validity of the hypothesis that a plant filtering mechanism allows the transmission of a core microbiota, we analyzed the filtering consistency between daughters by comparing the microbiota composition within the daughter roots using a PLS-DA procedure. The composition of the root communities was not significantly different between the 1st and 2nd daughter ramets (PPLS-DA = 0.09 and PPLS-DA = 0.33 for fungi and bacteria respectively, Supplementary Table S2), thus confirming that a specific set of organisms was similarly transmitted to the daughter plants of all ecotypes.

Effect of dispersal distance and dispersal time

We found patterns of richness dilution in the bacterial communities along the stolons (linear mixed model, F1,18 = 6.13, P < 0.05, Supplementary Table S3), showing that the ramets most distant from the mother were less rich in bacteria. This finding suggests that colonization of the daughters by bacteria is limited by dispersal distance. This pattern of richness dilution also followed the course of plant development, as stolons produced earlier in the experiment (i.e. the 1st stolon emitted by the plant) were found to be richer (linear mixed model, F1,9 = 4.92, P < 0.05, Supplementary Table S3), which suggests that the richness of the bacterial community also depends on dispersal time.
Alternatively, these patterns may be linked to a cumulative filtering effect at each node of the clonal network, reducing the pool of transmitted bacteria. Conversely, these richness dilution patterns were not detected for the fungal communities (Supplementary Table S1), suggesting either that dispersal of the transmitted species was not limited or that the fungal community was already strongly filtered during the initial transmission. These two non-exclusive hypotheses are supported by our observation of a variation in the decrease of fungal community richness between mothers and daughters, probably dependent on the life-history and dispersal traits of the different fungal taxonomic groups.

Discussion

This work provides the first demonstration of vertical transmission and heritability of a specific endospheric microbiota (fungi and bacteria) in plants. Along with other studies, it supports an understanding of the plant as a complex, rather than a standalone, entity and is aligned with the idea that the plant and its microbiota have to be considered as holobionts (Zilber-Rosenberg and Rosenberg, 2008; Vandenkoornhuyse et al., 2015; Theis et al., 2016). Our demonstration of core-microbiota transmission supports the idea that microbial consortia and their host constitute a combined unit of selection. This finding does not conflict with the idea that this heritability of microbiota within clonal plants (the microbial components metaphorically called 'singers' in Doolittle and Booth, 2017) in fact consists of the heritability of a selected set of functions (the 'song' in Doolittle and Booth, 2017). Thus, our work reconciles aspects of the ongoing debate regarding the evolutionary process at work within the holobiont entity (Bordenstein et al., 2015; Moran and Sloan, 2015; Theis et al., 2016). For the plant, the transmission of a microbiota along plant clonal networks extends to microorganisms the concept of physiological integration previously demonstrated for information and resources. This integrated network architecture raises the idea of a meta-holobiont organization in which ramets (i.e. holobionts) can act as sinks or sources of micro-organisms. Such a structure may ensure exchanges between the holobionts, and especially between the mother source and the daughter "sinks", thereby increasing the fitness of the clone as a whole. Indeed, the inheritance of a cohort of micro-organisms that has already gone through the plant filtering system provides a pool of microorganisms available for recruitment in the newly colonized environments. This "toolbox" of microorganisms could allow the plant to rapidly adjust to environmental conditions and therefore provide fitness benefits in a heterogeneous environment (Clay and Schardl, 2002). This may be assimilated to plant niche construction and provide a competitive advantage when colonizing new habitats. From the perspective of the microorganisms, the stolons can be seen as ecological corridors facilitating dispersal at a fine scale. In addition to propagule transport in the environment, this process ensures the spread of the transmitted organisms from one suitable host to another. As a consequence, transmitted symbiotic partners may benefit from a priority effect when colonizing the rooting system within the new environment (Werner and Kiers, 2015).
Future work will thus need to address (i) the direction (uni- vs. bidirectional) of microorganism transmission within the clonal network, as well as the modalities of (ii) the transmission mechanism (active or passive) and (iii) the filtering of microorganisms during this transmission, in order to determine (iv) the significance of the process.

The genome is not the only support of phenotypic variations

The theoretical basis of neo-Darwinian evolution (i.e. the modern synthesis) is linked to the idea that a genetic variation, such as a mutation, is an accidental random event having neutral, advantageous or disadvantageous effects on fitness, with natural selection increasing the frequency of the advantageous variants in large populations (e.g. Charlesworth et al., 2017). This sorting of genetic variations by natural selection acts on the individual phenotypic value. It is thus generally believed that an organism, its functions and its ability to adapt to environmental constraints can be addressed by the analysis of its genome (i.e. the information repository of the organism). Behind this idea is the assumption that organisms develop through programmed genes. This view of genomes is an oversimplification (Goldman and Landweber, 2016). There are existing examples of physical transience in genomes (i.e. genome composition and stability are not fixed at all times in every organism), as in ciliates (e.g. Oxytricha) (Bracht et al., 2013). Less anecdotal is the existence of additional information beside the genome supporting phenotype expression. This information is of two main types. First, epigenetic marks (i.e. DNA methylation, histone modifications, histone variants, and small RNAs), which can be heritable and reversible. They induce a suite of interacting molecular mechanisms which impact gene expression and function, and thus phenotypes, without changing the DNA sequence (e.g. Richards, 2006; Holeski et al., 2012; Huang et al., 2016). Second, all macro-organisms, animals and plants, interact as hosts with symbiotic partners forming the microbiota. Such associations deeply impact phenotypic variations and determine host fitness (e.g. Hooper et al., 2012; Tremaroli and Bäckhed, 2012; McFall-Ngai et al., 2013; Blaser, 2014; Vandenkoornhuyse et al., 2015; Vannier et al., 2015). A large proportion of population genetics studies considers the host genome solely, whereas microbe-free plants are artifacts rather than facts (e.g. Partida-Martínez and Heil, 2011). Thus, phenotypic variations are often mistakenly attributed to genome variants, and the neo-Darwinian approach of species evolution fails to explain these variations. A holistic understanding of plant-microbe associations is thus needed.

The holobiont and hologenome concepts

A given macro-organism can no longer be considered as an autonomous entity, but rather as the result of the host and its associated micro-organisms forming a holobiont (e.g. Bordenstein & Theis, 2015), with their collective genome forming the hologenome.
The holobiont is a unit of biological organization and encompasses not solely the host and obligate symbionts but also includes "[…] the facultative symbionts and the dynamic associations with the host […]" (Theis et al., 2016). This new understanding of what a macro-organism is deeply modifies our perception of evolutionary processes in complex organisms. From this, an important idea is that genetic variations occurring in any genomic subunit have to be considered as hologenomic variations, which may be neutral, deleterious or beneficial for the holobiont (Zilber-Rosenberg & Rosenberg, 2008; Bordenstein & Theis, 2015). This parsimonious idea is the keystone of the hologenome theory of evolution (Zilber-Rosenberg & Rosenberg, 2008). Besides these genetic changes in the hologenome, a plant, for example, can recruit micro-organism(s) within the phyllosphere and the rhizosphere to buffer environmental constraints (e.g. Vandenkoornhuyse et al., 2015). The modification of the microbiota of a plant holobiont, and thus of the hologenome, can be seen as a rapid acquisition of new functions to adjust to biotic and abiotic environmental fluctuations. Because the recruited micro-organisms can be vertically or pseudo-vertically transmitted (Rosenberg et al., 2009), this adjustment would amount to a revival of Lamarckian deterministic evolution (i.e. the inheritance of acquired characteristics). In this context, the microbiota assembly is the repository of the information transmitted between generations.

The microbiota assembly of the holobiont

Natural selection acts on the holobiont phenotype, and thus on the hologenome. Cases of neutrality (neither advantage nor disadvantage) or of circum-neutral effects on individual fitness could explain the heterogeneity of microbiota community structure observed among hosts. From ecological theories, both niche partitioning (through environmental filtering or through the sorting of species interactions) and neutral processes (Hubbell, 2001; 2005) are classically invoked to drive community assembly and to explain diversity patterns. Coexistence is mainly assumed to result from complementarity in resource use (Webb et al., 2002). Whether host-microbiota assembly is neutral and stochastic or deterministic has been debated (Nemergut et al., 2013; Bordenstein & Theis, 2015). Under a random community assembly model (Nemergut et al., 2013), if a large proportion of the microbiota is recruited from the environment (i.e. horizontal transmission), the microbiota community composition is not expected to differ from a random assembly. However, because the functions assumed by the microbiota are important for holobiont fitness, a specialization of the microbiota at the metabolic level is expected from a strong selection process on hologenomes. In this case, the resulting observation would be a deterministic host-microbiota assembly (i.e. a non-random assembly). This microbiota specialization could act at two different levels. The first level is the recruitment and filtering of micro-organisms by the host, a process that should culminate in the vertical or pseudo-vertical transmission of microbiota components.
The second level of specialization is, for particular micro-organisms within the microbiota, the relaxation of the selection pressure on 'useless' genes and the accumulation of mutations in these genes, leading to an evolutionary loss of function. In addition, microorganisms within the holobiont experience selection pressures both at the individual level within the holobiont and at the whole-holobiont level.

Holobiont and hologenome: the special case of plants

Plants are sessile macro-organisms. This conditions their interactions with their environment. First, they are not able to escape environmental constraints and need to adapt to either abiotic (e.g. resource limitation) or biotic (e.g. competition, predation) stresses. Second, plants act on their own environmental conditions. For instance, they deplete through their nutrition the mineral and water resources of their habitat, or modify the microclimate and local soil characteristics through the development of above- and belowground organs (Marschner et al., 1986; Orwin et al., 2010; Veen et al.). This retroaction of plants on their environment drives environmental fluctuations occurring at different time scales. In these two above-described situations, the recruitment of microorganisms within the microbiota enables the acquisition of new ecological functions, increasing resource acquisition or buffering stressful conditions (see for review Friesen et al., 2011; Bulgarelli et al., 2013; Müller et al., 2016). Changes in microbiota composition along seasons or years may thus reflect the temporal needs of the plants. Considering that environmental changes can be either continuous or disruptive and occur over short or long timescales, the recruitment of microorganisms represents a quick and long-term adaptation to environmental constraints that is less costly for the plant than genomic plastic responses (see Alpert & Simms, 2002 for a review of the advantages of plasticity). Plant-associated micro-organisms thus condition plant survival and fitness and provide quick phenotypic adjustments allowing adaptation to rapid and long-term environmental changes (Vannier et al., 2015). Plants are modular organisms and can be seen as a discrete organization of units (modules) forming a system (Harper, 1977). Their growth is iterative, through undifferentiated and totipotent tissues (meristems). Such modularity is expressed at different levels of integration: from simple subunits such as a leaf, a root, or a flower (order 1 of modularity) to more complex units such as ramets (i.e. potentially autonomous clonal units composed of root and shoot modules) produced by clonal multiplication (e.g. order 2 of modularity) (Fig. 2a). Most plants are then constituted as reticulated networks that sample the environment and can adjust their structure to it (van Groenendael & de Kroon, 1990). A physiological integration involving information- and resource-sharing within the clonal network has been demonstrated (e.g. Oborny et al., 2001). Physiological integration occurs for all first-order modules within each second-order module, at least partially during its life cycle.
Some species display physiological integration between all or part of the second-order modules (Price & Hutchings, 1992), for a short period or for the whole clonal plant life (e.g. splitter vs. integrator strategies). In response to small-scale environmental heterogeneity, such modularity and integration enable plastic adjustments of order-2 modules, with an impact on the fitness of either or both 1st- and 2nd-order modules depending on the integration level (Stuefer et al., 2004; Hutchings & Wijesinghe, 2008). More specifically, plastic changes have been reported in the network architecture in response to the patchy distribution of favorable habitats (i.e. foraging, Fig. 2b). Such changes occur along a morphological gradient from a 'phalanx' (a dense clonal network with high branching and low internode length) to a 'guerilla' form (a loose network with low branching and high internode length) (Lovett Doust, 1981). Phalanx forms promote the exploitation of nutrient-rich habitats through ramet aggregation, whereas guerilla forms enable space exploration and the colonization of habitats at a distance from the mother plant. In addition to the sampling of resources in the environment, this plastic network also allows the sampling of potentially symbiotic microorganisms within the soil pool. In clonal plants, this process has never been taken into account for microbiota assembly, despite the importance of the microbiota for plant fitness. Pseudo-vertical (i.e. acquisition by environment sharing) and vertical transmission of the microbiota ensure the presence of suitable symbionts and thus limit the associated foraging costs (Wilkinson & Sherratt, 2001). The question of the vertical or pseudo-vertical transmission of the microbiota is then of fundamental importance. In aggregated networks (i.e. phalanx), the clonal progeny is expected to be in contact with a pool of microorganisms similar to that of the mother plant (i.e. strong pseudo-vertical transmission). Conversely, in scattered networks (i.e. guerilla), a weak pseudo-vertical transmission is expected (i.e. the progeny encounters different microorganisms from those of its parents). However, it has recently been demonstrated that clonal plants are able to vertically transmit a core microbiota (i.e. a fraction of the mother microbiota) containing bacteria and fungi through their stolons (Vannier et al., 2016; Vannier et al., submitted). This vertical transmission of a subset of the mother microbiota could be seen as an insurance of habitat quality for the progeny.

From holobiont to meta-holobiont concept

Considering that a particular clone is colonized by a complex microbiota, that at least part of this microbiota is transmitted between clonal generations, and that these transmitted microorganisms alter the clone fitness (e.g., among others, arbuscular mycorrhizal fungi, hereafter AM fungi) (Vannier et al., submitted), a clonal individual thus satisfies the holobiont and hologenome theory (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). The plant clonal network however represents an additional level of organization, in which holobionts are inter-connected and integrated in a higher modularity model (the clonal network). To account for this level of organization, we introduce the concept of the meta-holobiont.
The meta-holobiont concept

Bordenstein & Theis (2015) proposed a framework of ten principles for the holobiont and hologenome theory which do not change the rules of evolutionary biology but redefine what a macro-organism really is. The meta-holobiont concept is primarily dedicated to better understanding and defining clonal organisms forming a network (i.e. like clonal plants) through which holobionts can share molecules and micro-organisms (Fig. 3). The meta-holobiont concept thus posits that, through the network, passive or active transfers of microorganisms, resources and information between holobionts can impact an individual holobiont. It also posits the existence of specific physical host structures building the network used for these exchanges. The physical structures linking clonal plant holobionts can be stolons or rhizomes. This physical link is of crucial importance as it is the support of the plastic responses developed at both the holobiont and the meta-holobiont scales. In the case of mycorrhizal hyphae, for example, the physical link (the hypha) connecting two plants is not produced by the host plants and thus does not allow integrated plastic responses at the meta-holobiont scale (in this case, two plants linked by a mycorrhizal hypha); in this case, selection is unlikely to act on the connected plants as one integrated unit. The inherent physiological integration of the meta-holobiont network (Oborny et al., 2001) and the transmission of microorganisms from mother to clonal progeny (demonstrated in Vannier et al., 2016) underlie the plastic responses of clonal plants to environmental heterogeneity, such as foraging (Wijesinghe & Hutchings, 1999; Roiloa & Retuerto, 2006). Another important response of clonal plants to heterogeneity is their ability to specialize individual modules in the acquisition of the most abundant resource, to the benefit of the network (Fig. 2c; Stuefer et al., 1996; Stuefer, 1998). This specialization process has been formalized as the concept of division of labour (sensu Stuefer et al., 1996), occurring when the spatial distributions of two resources are negatively correlated (Stuefer et al., 1996; van Kleunen & Stuefer, 1999). On the one hand, microbiota transfer within the clonal network may be an alternative to these plastic mechanisms, by providing ramets with the ability to compensate for nutrient limitation. For instance, the transfer of arbuscular mycorrhizal fungi (AM fungi) with high resource-uptake ability at the ramet scale is probably less costly for the plant than developing an increased rooting system or than increasing spacer length to forage for better patches. Specialization may then be seen in a wider context at the holobiont level (i.e. ramets that specialize because of the recruitment of a particular microbiota). Foraging through plant trait modifications may therefore not be as beneficial as using microbiota functions.
Note that such a trade-off between foraging plant traits and the use of microbial activity for resource uptake has already been well described at the individual level for root development (see for instance Eissenstat et al., 2015; Liu et al., 2015). What we suggest herein at the meta-holobiont scale thus only consists in an extension of plant foraging trade-offs. On the other hand, foraging and specialization mechanisms may be indirectly mediated by the holobiont microbiota. Studies have demonstrated that microorganisms often manipulate plant traits (Cheplick et al.), inducing changes in plant architecture (e.g. connection branching or elongation; Streitwolf-Engel et al., 1997; Streitwolf-Engel, 2001; Sudova, 2009; Vannier et al., 2015) and in biomass allocation (Du et al., 2009; Vannier et al., 2015). In parallel, other works have demonstrated that a high diversity of AM fungi can reduce plant physiological integration in heterogeneous environments (Du et al., 2009). These results thus suggest a feedback between plants and their microbiota in the plastic adjustments to environmental heterogeneity developed at the clone level. In both cases, the sharing of microorganisms between holobionts within the meta-holobiont impacts both the holobiont and the meta-holobiont phenotypes. The meta-holobiont concept may shed new light on these integration and plastic response mechanisms, and explain the large range of strategies observed among plants.

Holobiont coexistence and dynamics within the meta-holobiont network: transposition of metacommunity-based theories

Microbiota transmission within the meta-holobiont has consequences at the holobiont scale. As exposed previously, the meta-holobiont could specialize if the environment is heterogeneous, and thus alter the microbiota of individual holobionts. Conversely, a homogenization of holobiont microbiota within the meta-holobiont can be expected in homogeneous environments. This may condition the dynamics of microbiota assembly at the meta-holobiont level because, in the first case, several holobionts may represent poorer pools of genomes than others to be transferred within the meta-holobiont, while in the second case all holobionts represent the same potentialities. Understanding microbiota assembly within meta-holobiont networks, and its consequences on the dynamics of microbial species, is likely to be achieved based on metacommunity theories. Different models of metacommunities have been described (see Leibold et al., 2004 for a review): specialization of the microbiota within the meta-holobiont is close to the source-sink metacommunity model, whereas homogenization corresponds to the neutral or patch-dynamics models. Transposing such theories to the frame of microorganism transfer through plant networks should provide an interesting framework for understanding the impacts of the meta-holobiont on microorganism dispersal in natural ecosystems.

Network theory and meta-holobiont properties

Because microorganisms alter holobiont phenotypes, their recruitment in each holobiont and their transmission dynamics within the meta-holobiont may condition the properties of the meta-holobiont.
Such properties can for instance include productivity (e.g. fitness of the meta-holobiont, plant biomass productivity), adaptation to heterogeneous conditions, or resilience to disturbance. There is a wide theoretical corpus of knowledge, based on graph theory, about the relationships between a network's topology and its properties. For instance, the number of nodes, their connectance or the network's shape (i.e. modularity, nestedness) condition its stability to perturbations, since these determine individual fluxes at the population and community levels (see the review by Proulx et al., 2005). In plants in general, and in clonal plants in particular, module topology has been demonstrated to be shaped by the structural blueprint, ontogeny, and plastic responses to environmental conditions (Huber et al., 1999; Bittebiere et al.). Much research has been carried out specifically on clonal plants to investigate how network topology provides emergent functions to the clonal plant, such as the response to heterogeneous conditions, fluctuating environments, or disturbance (see examples in the review by Oborny et al.). Transposing graph theory to the meta-holobiont concept may for instance allow the identification of keystone holobionts, or of specific network structures that maximize whole-network performance. This could be achieved through the maximization of resilience based on the positions of holobionts within the network, or on the degree of redundancy of microbiota compositions in the network. The meta-holobiont concept may provide a new way to consider these questions while taking plant-microorganism associations into account.

Chapter III: Importance of the plant community context for the individual plant microbiota assembly

III.1 Introduction

Scientific context

The composition of the soil pool of microorganisms depends on different environmental factors, comprising soil type and properties (e.g. pH, water content, nutrient concentration) (Berg & Smalla, 2009; Lundberg et al., 2012; Bulgarelli et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). Because plants are sessile, this pool of microorganisms available for recruitment determines the plant microbiota (Lundberg et al., 2012; Bulgarelli et al., 2012). In addition to soil type and properties, plants can also modify this soil pool of microorganisms through different mechanisms. Plants are able to selectively recruit microorganisms from the soil (Vandenkoornhuyse et al., 2015) and to promote the most beneficial symbionts (Bever et al., 2009; Kiers et al., 2011). In addition, plant exudates have been shown to either enhance or reduce the abundances of particular microorganisms depending on the plant species (for a review see Berendsen et al., 2012). The expectation is thus that plants can locally alter the composition of the soil microbial pool.
Following this idea, the neighborhood of a given plant (i.e. the identity and abundance of the neighboring plants) should influence the local soil pool of microorganisms, and thus the microorganisms available for recruitment by other plants in the community. The neighborhood of a given plant should therefore influence the microbiota assembly of the focal plant. However, this potential role of the plant community context in plant microbiota assembly has not been extensively described yet. In addition, plants have been shown to harbor contrasted microbiota between species (Oh et al.; Bonito et al., 2014) and ecotypes (Bulgarelli et al., 2012; Lundberg et al., 2012). An expectation is thus that plant species select and promote different microorganisms, and thus have contrasted effects on the soil pool and on the different microbial groups within this pool. Considering the diversity of functions provided by plant-associated endophytes (Friesen et al., 2011; Müller et al., 2016) and their effects on plant phenotypes and performance evidenced herein (Articles I and II), changes in endophyte community composition are likely to affect individual plant performance.

Objectives of the chapter

This chapter aims at testing the hypothesis that the plant community context, and especially the close neighborhood of a plant, determines its fungal microbiota assembly, ultimately impacting its performance. More precisely, we address the following questions: 1) Do the abundances of the neighborhood plant species influence the richness and equitability of the focal plant fungal microbiota? Do neighborhood plant species have contrasted effects? (Article V) 2) At which temporal and spatial scales does this neighborhood effect act? (Article V) 3) Do changes in fungal group richness and equitability mediated by the neighboring plants determine the focal plant performance? (Article V)

Methods

This chapter is composed of a single article (V). In this study, we tested the hypothesis of a neighborhood effect on plant microbiota assembly impacting plant performance, using an outdoor experimental mesocosm design with grassland plant communities. This design comprises a range of grassland plant communities varying in composition and richness. Soil cores were sampled within each mesocosm and Medicago truncatula was used as a trap plant to capture soil fungal species. High-throughput amplicon sequencing of 18S rRNA genes was used to detect and identify fungi within the root endosphere of M. truncatula individuals. To determine plant neighborhoods, plant species abundances were mapped in two consecutive years (one year before soil sampling and the year of sampling). We thus explained the richness and equitability (equality of species abundances) of the fungal community trapped by M. truncatula by the abundances of the plant species within the neighborhood of the sampling point. We considered two temporal scales of the neighborhood: the past cover (one year before sampling) and the present cover (the year of sampling). We additionally assessed the effect of the microbiota composition on M. truncatula biomass as a measure of its performance.
Main results

In our experiment (Article V), we detected a significant effect of the abundances of the neighborhood plant species on the composition of the microbiota colonizing M. truncatula's roots. This effect was plant-species dependent, with several plants affecting positively, and others negatively, the richness and the equitability of the M. truncatula fungal endophytes. We thus demonstrated that a particular plant can filter and determine the local fungal pool available for recruitment in the soil, as proposed by Valyi et al. (2016). This was true not solely for AM fungal diversity but also for the other fungal groups and, more widely, for the whole fungal community. The influence of the plant neighborhood on the soil microbiota occurred at a small scale (i.e. a few centimeters).

Soil micro-organisms are key drivers of ecosystem functioning (van der Heijden et al., 2008; Wagg et al., 2014; Fester et al., 2014) through nutrient cycling and symbioses (e.g. Bardgett & van der Putten, 2014). These symbioses between plants and soil microorganisms are widespread in natura and are compulsory for plant development and growth (Hacquard et al., 2016; Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015). Among symbiotic microorganisms, the arbuscular mycorrhizal (AM) fungi have received much attention since they are arguably the world's most abundant mutualists and among the most important terrestrial symbionts 'that help feed the world' (Marx, 2004). In exchange for carbohydrates, the AM association provides many ecological functions to plants, such as nutrient acquisition and protection under stressful conditions. Despite their ubiquity, the existence of such mutualisms is unexpected in terms of evolutionary trajectories (Hardin, 1968), because cheaters (i.e. fungi providing little phosphorus in exchange for carbohydrates) are predicted to spread at the expense of cooperative partners and thus indirectly favor competing strains (West et al., 2002). The ability of plants to preferentially reward AM fungal symbionts according to their level of "cooperativeness" was demonstrated using Medicago truncatula (Kiers et al., 2011). This selective rewarding mechanism mitigates the fitness of less cooperative fungi, stabilizes the AM fungal symbiosis (Kiers et al., 2011) and is supposed to allow plants to filter and exclude part of the colonizers. A consequence of this filtering phenomenon is the observation of a 'host-plant preference' (Vandenkoornhuyse et al., 2002; Duhamel & Vandenkoornhuyse, 2013). This filtering effect occurring at the individual plant level is then likely to influence the local diversity and abundance of fungal propagules. At the plant community scale, the very first experimental studies, based on a matrix-focal plant design, showed that the identity of the plant species in the neighborhood determines the composition of the focal plant fungal community (Johnson et al., 2004; Hausmann & Hawkes, 2009). This observation remains consistent for AM fungi in more complex mixtures of plants and can be extended to their temporal dynamics (Hausmann & Hawkes, 2010). The first established plant species has a filtering effect on the pool of AM fungi colonizing the second established plant species (Hausmann & Hawkes, 2010).
Following these findings, the temporal scale has been suggested as a key parameter in the understanding of whole fungal community assembly (Bennett et al.; Cotton et al., 2015). However, in the above-described experiment, the temporal scale tested was very short (several weeks), and the fingerprint of a plant on later fungal communities has never been tested over a long period of time (Hausmann & Hawkes, 2010). Although the influence of plant community composition on fungal assemblages has been described (e.g. Johnson et al., 2004; Hausmann & Hawkes, 2010), the spatial scale of this influence is still unclear. In a recent review, Valyi et al. (2016) proposed that the relative influence of environmental conditions, dispersal and host filtering on the AM fungal community depends on the spatial scale considered. The effect of the host plant (i.e. the host filter) would then be stronger at a local scale (Valyi et al., 2016). As far as we know, there is very little empirical or experimental support for this hypothesis. A single seminal paper demonstrated that the relationship between plant and AM fungal compositions was detectable at the 25 cm² point scale but not at the 1 m² plot scale (Landis et al., 2005). However, the fact that the influence of a given plant on the composition of fungal endophytes can be detectable at a fine scale suggests that plant communities and their dynamics may condition AM fungal community assembly. At a given location, we can make the parsimonious assumption that the available fungal propagule reservoir is necessarily a consequence of the fungal community structure and composition within the previous hosts. If the fungal diversity pool results from the cumulative influences of plants, the consequence is that we can explain AM fungal community assembly and quantify the respective influences of the surrounding plant species. This assumption drives the idea that the plant neighborhood determines the assembly of the fungal pool available for recruitment by the focal plant. Plant-associated fungal endophytes are known to provide benefits for plant nutrition and resistance to abiotic and biotic constraints (e.g. Friesen et al., 2011). Changes in fungal group richness should therefore affect the ecological functions provided by the fungi to the focal plant and ultimately impact its fitness. We herein address the hypothesis that a plant neighborhood fingerprint on the root fungal community impacts host-plant fitness. We used a mesocosm design comprising experimental assemblages of different grassland species, with varying levels of plant diversity. We spatially sampled soil cores from plots, on which Medicago truncatula was then grown as a trap plant. Root fungal assemblages were characterized by amplicon sequencing. Plant neighborhoods around the soil sampling positions were characterized using centimetric maps of plant species occurrences recorded over two years. We analyzed the effect of past and current plant neighborhoods on the endophyte assemblages of M. truncatula and ultimately on its biomass. In this study, we considered two diversity scales: the sample scale (alpha diversity) and the plot scale (gamma diversity).
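How these two diversity scales are computed from an OTU table can be sketched as follows (toy counts, not real data; the assumption that row 1 is the central core is ours for illustration):

# Minimal sketch (R): alpha diversity is the OTU richness of the central
# sample; gamma diversity pools the five soil cores of the plot and counts
# the OTUs present in at least one of them.
set.seed(3)
otu_plot <- matrix(rpois(5 * 30, 1), nrow = 5)  # 5 soil cores x 30 OTUs
alpha <- sum(otu_plot[1, ] > 0)                 # sample scale (central core)
gamma <- sum(colSums(otu_plot) > 0)             # plot scale (pooled cores)
c(alpha = alpha, gamma = gamma)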
More specifically, we tested the following hypotheses: (i) the root fungal community structure is determined by the local plant neighborhood, and the relationships between plant composition and endophyte diversity and richness should be weaker at the plot scale than at the sample scale, the scale of influence likely being a few centimeters; (ii) this relationship occurs across different fungal taxonomic groups; (iii) the changes in the fungal communities colonizing the plant should ultimately impact the focal plant performance.

Materials and Methods

Analyses were performed in two stages. We first analyzed the influence of past and present plant neighborhoods (i.e. the plant species around the sampling point) on the fungal assemblages at the plot scale and at the sample scale. We based our analyses on indices describing the fungal communities, calculated for different taxonomic groups following a downscaling approach (from the whole assemblage to phyla and classes).

Experimental design

To determine whether fungal communities respond to the overlying plant communities depending on their taxonomic groups, and at which spatio-temporal scales, we used 112 experimental plant communities settled in 1.30 × 1.30 × 0.25 m mesocosms in 2009 (see Benot et al., 2013; Bittebiere et al. for additional information on the experimental design). This study was conducted in the experimental garden of the University of Rennes 1. Communities were constituted from a set of plant species widely distributed in the temperate grasslands of Western France (des Abbayes et al., 1971), with one to 12 plant species in each mixture. Plants were grown on a homogeneous substrate composed of sand (20%) and ground soil (80%). Weeds were regularly removed, and the mesocosms were watered every two days during the dry season. Above-ground vegetation was mown once a year in late September and plant flowers were cut off to suppress sexual reproduction. Plant community dynamics was therefore dependent only on the plants' clonal growth. The present plant neighborhood therefore resulted from the previous one.

Plant neighborhood characterization

The spatial distributions of the plant species in the plots changed over time due to the ongoing community dynamics. To take these dynamics into account, species occurrences were mapped in all mesocosms after two and three years of cultivation of the experimental communities (i.e. early spring 2011 and 2012), using an 80 × 80 cm square lattice centered on the mesocosm (Fig. 1). We recorded presence/absence data in the 5 × 5 cm cells of the lattice (i.e. 256 cells in total per mesocosm). A plant species was considered as present when at least one individual rooted within the cell, with each individual belonging to a single cell. GIS (ArcGIS ver. 9.3, ESRI) was used to calculate the number of cells colonized by each plant species (i.e. their abundances) at each spatial scale tested. Plant species abundances were calculated at the plot scale as the total number of cells occupied over the square lattice. Within the plot, we analyzed plant species abundances at five spatial scales around the central sampling point, ranging from 5 to 25 cm (Fig. 1) (see Bittebiere & Mony, 2015 for details on the method).
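The cell-counting logic can be sketched in a few lines (toy occupancy data for a single species; the cell size and radii follow the design above, the occupancy values are simulated):

# Minimal sketch (R): count the occupied 5 x 5 cm cells of the 80 x 80 cm
# lattice within increasing radii around the central sampling point.
set.seed(2)
cell <- 5                                        # cell size (cm)
occ  <- matrix(rbinom(16 * 16, 1, 0.3), 16, 16)  # presence/absence lattice

d <- (1:16 - 8.5) * cell                         # cell-centre offsets from the lattice centre (cm)
dist_centre <- sqrt(outer(d^2, d^2, "+"))        # 16 x 16 matrix of distances to the centre

sapply(c(5, 10, 15, 20, 25), function(r)
  sum(occ[dist_centre <= r]))                    # species abundance per neighborhood size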
We considered two temporal scales by performing these calculations in 2011 (past plant neighborhood) and 2012 (present plant neighborhood). For each spatial scale, no per-species correlation between 2011 and 2012 exceeding 90% was detected.

Endophyte assemblage analyses through a trap-plant bioassay

To determine the fungal pool of species available for plant recruitment, we sampled five soil cores per mesocosm within the 80 × 80 cm lattice (the four corners and the center) (i.e. 560 soil samples in total). This design enabled the fungal community to be captured both at the sample scale (the sample in the center of the mesocosm) and at the plot scale (the five sampling points of the plot). These soil samples were used as substrates for the cultivation of Medicago truncatula individuals, used as trap plants. M. truncatula was transplanted as seedlings after being germinated under sterile conditions for 3 weeks. This species is known to display a very low host preference, trapping most of the fungal species present in the soil (Cook, 1999). M. truncatula individuals were cultivated for seven weeks under controlled conditions (constant temperature and water availability) with a 12 h day/light cycle, and nutrients were provided with a watering solution before harvesting. To evaluate M. truncatula performance, root and shoot samples from each individual were weighed at the end of the experiment. Plant total fresh mass was used as a proxy of performance.

DNA extraction and amplicon preparation

Medicago truncatula root samples were carefully washed with detergent (Triton X-100, 1% V/V), thoroughly rinsed in sterile distilled water and ground to powder using a pestle and mortar under liquid nitrogen. Total DNA was then extracted using the DNeasy Plant kit (Qiagen) according to the manufacturer's recommendations. A 480 bp fragment of the fungal SSU rRNA gene was specifically amplified by PCR using the NS22/0817 primers (Lê Van et al., 2017) with PuReTaq Ready-To-Go PCR beads (GE Healthcare). All PCRs were done using fusion primers containing sequencing adapters and multiplex identifiers in addition to the PCR primer (more details about the amplifications in Lê Van et al., 2017). For each of the 560 samples, true technical amplicon replicates were performed (i.e. two independent PCRs for each extracted DNA sample). Amplicons were purified using the AMPure XP PCR kit (Agencourt/Beckman-Coulter). Purified amplicons were then quantified (Quant-iT PicoGreen dsDNA assay, Invitrogen). An equimolecular amount of each amplicon was pooled to prepare the sequencing library. Traces of concatemerized primers were removed (LabChip XT, Caliper) before emPCR and sequencing on a GS FLX+ instrument (Roche), following the manufacturer's instructions.

Data trimming and contingency matrix preparation

Trimming, filtering, clustering, OTU identification and taxonomic assignments were performed as described elsewhere (e.g. Ben Maamar et al., 2015; Lê Van et al., 2017). To summarize the strategy: short sequences (<200 bp), sequences with homopolymers (>8 nucleotides) or ambiguous nucleotides, and sequences containing errors in the multiplex identifier or primer were deleted from the dataset. Chimeric sequences detected using chimera.uchime were deleted. After these steps, and from the two replicates, only sequences displaying 100% identity were kept.
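These sequence-level filters can be sketched in a few lines of base R (illustrative only; `seqs` is a toy stand-in for the demultiplexed reads, and the identifier/primer checks are omitted):

# Minimal sketch (R): the length, ambiguity and homopolymer filters
# described above, implemented with base R pattern matching.
seqs <- c(
  strrep("ACGT", 60),                            # 240 bp, clean -> kept
  strrep("ACGT", 30),                            # 120 bp, too short -> removed
  paste0(strrep("ACGT", 55), strrep("A", 10)),   # homopolymer >8 nt -> removed
  paste0(strrep("ACGT", 55), "N")                # ambiguous nucleotide -> removed
)
keep <- nchar(seqs) >= 200 &
        !grepl("[^ACGT]", seqs) &
        !grepl("A{9,}|C{9,}|G{9,}|T{9,}", seqs)
seqs[keep]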
The remaining sequences were grouped into OTUs using DNAclust (Ghodsi et al., 2011) with a 97% sequence-identity threshold, and a contingency matrix was built. After these steps, the sequencing depth (i.e. the number of sequences per sample available to describe the community) was checked from rarefaction curves computed using the VEGAN package (Oksanen et al., 2015) in R (R Core Team, 2013). We removed 154 samples which did not satisfy these criteria, and the remaining samples were normalized to 1351 sequences.

OTU affiliation and taxonomic group selection

A total of 3471 fungal OTUs was obtained for the 406 samples. A large proportion of these OTUs were rare (i.e. >70% of the OTUs were represented by fewer than 25 sequences). Similarly to plant community studies, OTUs occurring in at least 1% of the samples were used for the statistical analyses. To check the sensitivity of the statistical signal to this additional filtering step, we tested other thresholds and obtained similar statistical results (data not shown). The resulting dataset contained 2057 fungal OTUs. All the statistical analyses were performed at three taxonomic levels: all fungi, within phyla (i.e. Ascomycota, Basidiomycota, Glomeromycota), and within classes (i.e. Sordariomycetes, Glomeromycetes, and Agaricomycetes) (i.e. seven datasets in total). Phyla and classes were selected according to their respective dominance in the whole assemblage and within the phyla. The Ascomycota, the Glomeromycota, and the Basidiomycota contained 1587 OTUs (77.2% of the total richness), 308 OTUs (15% of the total richness) and 100 OTUs (4.86% of the total richness), respectively. In each of these three fungal phyla, the Sordariomycetes (186 OTUs), Glomeromycetes (80 OTUs) and Agaricomycetes (86 OTUs) were the dominant classes in terms of OTU richness.

Diversity index calculation

Indices were calculated at the plot scale (i.e. the five samples from each mesocosm pooled) and at the sample scale (i.e. based on the sample from the center of the mesocosm only) for the seven fungal datasets (see above). To characterize the fungal community at the plot scale (gamma diversity), we calculated the OTU richness as the total number of OTUs occurring in at least one of the five samples from the plot. At the sample scale (alpha diversity), we calculated the OTU richness (S), the Pielou equitability index (J), and the Shannon and Simpson diversity indices. Because strong correlations (i.e. >90%) were found between Shannon diversity, Simpson diversity, and equitability, the Shannon and Simpson indices were discarded before further analyses. No strong correlation (i.e. >90%) was found between the two remaining indices, regardless of the taxonomic level analyzed. Indices were calculated using the VEGAN package (Oksanen et al., 2013) in R (R Core Team, 2013).

Statistical analyses

To determine whether the fungal community structure was influenced by the past and present plant neighborhoods at the sample and plot scales, we used multiple regression analyses with plant species abundances as explanatory variables in linear model (LM) procedures (see the sketch below). These analyses were performed on the seven fungal datasets.
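A minimal end-to-end sketch of these index calculations and of the model-selection procedure detailed in the next paragraphs (toy data throughout; sp_A, sp_B and sp_C are hypothetical plant-abundance variables; requires the vegan and MuMIn packages):

# Minimal sketch (R, simulated OTU table and plant abundances, not real data).
library(vegan)   # diversity indices and rarefaction
library(MuMIn)   # second-order AIC (AICc)
set.seed(1)

otu <- matrix(rpois(112 * 50, 3), nrow = 112)        # 112 samples x 50 OTUs
otu <- rrarefy(otu, sample = min(rowSums(otu)))      # normalize to a common depth

S <- specnumber(otu)                                 # OTU richness
H <- diversity(otu, index = "shannon")               # Shannon diversity
J <- H / log(S)                                      # Pielou equitability

# hypothetical abundances of three neighborhood plant species
plants <- data.frame(sp_A = runif(112, 0, 50),
                     sp_B = runif(112, 0, 50),
                     sp_C = runif(112, 0, 50))

full <- lm(S ~ sp_A + sp_B + sp_C, data = plants)    # one model per index, date and scale
best <- step(full, direction = "backward", trace = FALSE)  # backward stepwise selection

AICc(full); AICc(best)   # models differing by delta AICc > 2 are distinguished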
First, to test for the influence of past and present plant neighborhoods on the fungal pool at the plot scale, we tested the effect of the plant abundances for each date (see the section Plant neighborhood characterization above) on the fungal OTU richness. We constructed two models for each taxonomic level analyzed (i.e. 14 models), which were optimized using a backward stepwise selection procedure of the explanatory variables based on Akaike's Information Criterion (described below). Second, to test for the influence of past and present plant neighborhoods on the fungal pool at the sample scale (i.e. corresponding to the soil sample from the mesocosm center), we tested the effect of the plant abundances for each date on the fungal richness and equitability for the five neighborhood sizes examined. This enabled us to determine the spatio-temporal scale of the response of the taxonomic groups of the local fungal pool to the plant neighborhood. One model was developed for each date and neighborhood size. We therefore constructed a total of ten models per index (two indices) and per taxonomic level analyzed (i.e. 140 models in total), and each model was optimized using a backward stepwise selection procedure of the explanatory variables. We used the information-theoretic model comparison approach based on Akaike's Information Criterion (AIC) and compared, for each index, all the optimized models through the second-order AIC corrected for small sample sizes (AICc) (Burnham & Anderson, 2002). In our analyses, we considered the models with smaller AICc values and with a substantial level of empirical support (i.e. a difference of AICc > 2 with the other models) as the most probable (Burnham & Anderson, 2002). This procedure enabled us to compute multiple models and to compare them while minimizing the risk of producing false positives.

Fungal richness at the plot scale

Plant species had contrasted effects on fungal richness between taxonomic levels. For example, one plant species increased the richness of the phylum Basidiomycota (P<0.05, 0.06 ≥ R² ≥ 0.09) but decreased the richness of the class Agaricomycetes (P<0.05, 0.07 ≥ R² ≥ 0.1). The same pattern was observed for F. rubra, which increased the richness of the phylum Ascomycota (P<0.05, 0.04 ≥ R² ≥ 0.12) but decreased the richness of the class Sordariomycetes (P<0.05, 0.1 ≥ R² ≥ 0.11). On the contrary, several species had a consistent effect between taxonomic levels. A. tenuis, for example, significantly increased the richness of the whole fungal community, of both the Ascomycota and the Basidiomycota at the phylum level, and of the Sordariomycetes at the class level. The same pattern was observed for B. pinnatum, which increased the richness of both the phylum Glomeromycota and the class Glomeromycetes.

Fungal equitability at the sample scale

To investigate the effect of the plant neighborhood on the equitability of the fungal community at a fine scale, we produced linear models at the sample scale (center of the mesocosm) with plant species abundances as explanatory variables. The multiple linear model analysis revealed that the equitability of the whole fungal community, as well as that of the Ascomycota, Basidiomycota, Glomeromycota, Sordariomycetes, Glomeromycetes, and Agaricomycetes, was significantly determined by the plant neighborhood (Tab. 2).
At the level of the whole fungal community, equitability significantly decreased with the abundance of Holcus mollis only, whereas the abundances of the other plant species had no significant effect on fungal community equitability (P<0.05; 0.04 ≥ R² ≥ 0.07).

Table 1. Responses of each fungal group richness at the plot scale to the past and present plant neighborhoods (results of linear models). t-values and adjusted R² are presented. Only species participating significantly in model building are shown, as well as their effect on the fungal richness (+ increasing the richness; - decreasing the richness).

Effect of temporal and spatial scales on the link between plant landscape and fungal community

We determined the spatio-temporal scale of the response of the fungal community to the plant neighborhood by producing linear models with past and present plant species distributions at five neighborhood sizes (5 to 25 cm). These analyses were performed at the sample scale (center of the mesocosm) and the best models were selected according to the AICc criteria (see the Materials and Methods section). The selected models showed that fungal community richness and equitability responded equally to present and past plant neighborhoods (i.e. the AICc criteria of the models were not different; Tab. 2). Furthermore, the species explaining the variations in richness and equitability of the fungal community were the same at both temporal scales (i.e. past and present). Only the equitability of the Glomeromycota, Basidiomycota, and Sordariomycetes and the richness of the Glomeromycetes responded to a single temporal scale. The whole fungal community richness and equitability responded indifferently to the plant neighborhood at the five neighborhood sizes analyzed (i.e. 5, 10, 15, 20 and 25 cm around the sampling point) (Tab. 2). Only the Glomeromycota equitability index responded to a specific neighborhood size (i.e. 25 cm), whereas the two other fungal phyla and the classes analyzed responded to at least two of the five neighborhood sizes tested, for both richness and equitability. Thus, a clear neighborhood effect was highlighted, but no specific neighborhood size was detected in the response of the fungal community to the plant neighborhood.

To investigate the effect of changes in fungal community richness and equitability on the fitness of the trap plant, we produced linear models at the sample scale (center of the mesocosm) with fungal group richness or equitability as explanatory variables. The biomass of the trap plant Medicago truncatula was not affected by the richness of its whole fungal community (Tab. 3). However, the biomass increased significantly with the richness of the Basidiomycota and Glomeromycota phyla (P<0.01 and P<0.05, respectively) (Tab. 3), but not with that of the phylum Ascomycota (Tab. 3). The combined effects of Basidiomycota and Glomeromycota richness explained ~12% of the variation in plant biomass (P<0.05; R²=0.12). At the class level, the same positive effect on M. truncatula biomass was found for Glomeromycetes and Agaricomycetes richness, but not for Sordariomycetes richness (P<0.01; R²=0.13; Tab. 3).
If a higher fungal richness of the microbiota positively impacted M. truncatula biomass, we would expect a reciprocal effect of equitability, because the detection of rare OTUs, at the same sequencing depth, is less likely within a more evenly distributed community. As expected, the data analyses indicated that the biomass of the trap plant increased with the equitability of its whole fungal microbiota community (P<0.01; R²=0.07; Tab. 3). However, the biomass increased only with the equitability of the phylum Ascomycota, and not with that of the Basidiomycota and Glomeromycota phyla (P<0.01; R²=0.1; Tab. 3). At the class level, the same positive effect on M. truncatula biomass was found for Sordariomycetes equitability (P<0.01; R²=0.1; Tab. 3), but not for Agaricomycetes and Glomeromycetes equitability.

Discussion

Our results suggest that the biotic interactions structuring the fungal community (i.e. host preference and host filtering) mostly act at a centimetric scale (Hazard et al., 2013; Valyi et al., 2016). At the sample level, our results indicated that the plant neighborhood determined, at least in part, the richness and competitive balance (i.e. equitability) of the whole fungal community colonizing the trap plant, M. truncatula. Such heterogeneity of microbiota composition within a given host-plant species is a recurrent observation (e.g. Davison et al., 2011; Schlaeppi et al., 2014; Lê Van et al., 2017) and has been linked to plant recruitment from the soil "reservoir" (Vandenkoornhuyse et al., 2015). Indeed, root-associated fungi in Agave species have been shown to be mainly recruited from the surrounding soil (Coleman-Derr et al., 2016). The importance of abiotic factors, notably soil properties, as determinants of the composition of the soil microbial pool has been repeatedly demonstrated (Shakya et al., 2013; Schreiter et al., 2014; Coleman-Derr et al., 2016) and is considered as the main source of variation of the plant microbiota (Vandenkoornhuyse et al., 2015). We demonstrated here that the plant neighborhood, at a centimetric scale (i.e. a few centimeters around the plant), also determines the composition of this soil microbial pool.

The fungal community also results from past plant neighborhoods

In agreement with our expectations (hyp. 2), we demonstrated that the past neighborhood also impacted the fungal community structure of the following year, and that this was not due to correlations between past and current neighborhoods. This result held at the level of the whole fungal microbiota community as well as for the three phyla and classes analyzed. The key interpretation of this result is that past plants do leave a "fingerprint" on the composition of the microorganisms colonizing the present plant community. Although the fungal communities responded at both temporal scales, there was no difference in the intensity of the response between the two years. The persistence of the effect of the plant over these years could be due to the short period investigated in the present study (two years), because the majority of fungal spores and propagules can survive in soil for a long time. A study involving a longer period (i.e. >2 years), for example mapping the plant communities 5 years before sampling, would allow the temporal limits of this potential soil fungal bank "memory" to be determined.

The structure of the root endospheric fungal community impacts M. truncatula biomass
In agreement with our expectations (hyp. 3), we demonstrated that the performance of the Medicago truncatula plant was affected by the composition of its fungal endophyte community. The measures of Medicago truncatula performance indicated that individual biomass production depended on the richness of the Glomeromycota and Basidiomycota (but not on the total richness of fungal OTUs or on the Ascomycota richness). This relationship between Glomeromycota richness and plant performance has already been demonstrated for AM fungi (Van der Heijden et al., 1998; Klironomos et al., 2000; Hiiesalu et al., 2014), due to their beneficial effects. An increase in AM fungal diversity has experimentally been shown to result in a more efficient exploitation of available resources such as soil phosphorus (Van der Heijden et al., 1998) and in a decrease in plant pathogens (van der Putten et al., 2009). To our knowledge, however, a positive effect of fungal species richness on plant performance has never been demonstrated for the phylum Basidiomycota or the class Agaricomycetes. As far as we know, little is known about the functions of the endospheric Agaricomycetes in grasses, and the role of this group in plant growth has still to be clarified. Interestingly, Medicago truncatula performance only depended on the equitability of the whole community, of the phylum Ascomycota and of the class Sordariomycetes, which represented the larger fungal groups in the samples. This result suggests that a community dominated by a few Ascomycota and Sordariomycetes species has a less beneficial effect on plant growth. The Ascomycota is a phylum known to be composed of very diversified organisms performing various functions for host plants. In this context, an equitable community could represent a higher diversity of organisms available for recruitment in the plant "toolbox" for adjustment to environmental conditions (Vannier et al., 2015).

A positive effect of the plant neighborhood on the productivity of the trap plant M. truncatula, mediated by an increase in fungal richness or equitability, was evidenced. This change in the fungal soil assemblage may be due to a complementarity of the niches that plants provide to fungal symbionts. The degree of host preference differs between plants (Helgason et al., 2002), which may then constitute different niches for fungi. These differences in host preference can be due to the level of mycotrophy of the host, with generalist hosts favoring a rich fungal community contrary to more specialist hosts. Such positive effects of particular plant identities have already been suggested: for instance, spore abundance in a salt marsh was determined by the proximity of mycotrophic hosts (Carvalho et al., 2003), whereas the presence of the grass Anthoxanthum odoratum increased the abundance of AM fungi in the soil regardless of the plant mixture (De Deyn, 2010). Conversely, we also demonstrated in our experiment negative effects of several plant species on the richness and equitability of key fungal groups, which ultimately reduced the trap plant biomass.
For example, H. mollis, A. stolonifera and H. lanatus indirectly decreased the productivity of the trap plant M. truncatula. This negative impact might be due to a host-preference effect. Indeed, several groups of fungi can harbor a limited range of host plants. A given plant can thus represent either a good or a bad host choice. It can also be interpreted as an allelopathy-like phenomenon, although this has not been evidenced before for these species. For example, a species belonging to the genus Festuca has been shown to produce allelopathic compounds suppressing D. glomerata growth, depending however on the competition context of the community (Viard-Crétat et al., 2012). Plant production of allelopathic compounds may suppress the germination of spores or the formation of mycorrhizal associations (Stinson et al., 2006; Callaway et al., 2008) and thus alter the reservoir of soil microorganisms.

More importantly, we showed that the AM fungal species used have contrasted effects on plant traits. In our experiments, the difficulty of identifying the role of a given symbiont in determining the plant phenotype resided in the fact that a given plant was colonized by different symbionts with contrasted effects at the same time (Article II). It is thus necessary to identify the range of variation due to symbiont identity. A perspective of this work would thus be to test a wider range of AM fungal partners to determine whether they specifically modify plant traits and act on performance. We performed a preliminary study (not presented in this document) testing this hypothesis. We grew individuals of G. hederacea under controlled conditions, either inoculated with a single fungal isolate or not inoculated. We tested the effect of nine fungal species selected to cover a large range of cooperative behavior (i.e. from providing high benefits to low or no benefits). We found significant differences in various plant traits, comprising performance traits, depending on the identity of the fungal partner. In addition to the direct effects described herein (Article II), the microbiota can also indirectly impact the plant phenotype through its interplay with epigenetic mechanisms (Article I). These two sources of phenotypic variation occurring during the plant's life currently constitute two separate fields of research. The few observations of a link between epigenomic modifications and the establishment of symbiosis (Ichida et al.; De Luis et al., 2012) suggest however that interplays do exist. These two separate fields of research will thus need to develop a common framework to determine the occurrence, and the significance for plant phenotypes, of microbiota-epigenetics interplays. Conducting a monitoring of epigenetic marking while inoculating different experimentally engineered microbiota would surely provide insightful results and could help disentangle complex endophyte-induced phenotypic changes.
Plants have evolved mechanisms ensuring the presence of symbionts

Plants have been interacting with microorganisms for a very long time (over 400 million years for AM fungi; Remy et al., 1994; Redecker et al., 2000), allowing the establishment of co-evolution patterns (Brundrett, 2002). Consequently, plants should have evolved mechanisms to optimize their association with beneficial symbionts. Many studies have described how plants have evolved filtration, defense, promotion or regulation mechanisms regarding microorganism colonization. For example, mechanisms allowing plants to regulate the symbiosis and avoid cheating behavior by AM fungi have already been evidenced (Bever et al., 2009; Kiers et al., 2011). However, since plants are sessile organisms, their "recruitment choice" depends on the microorganisms available in the local environment (e.g. Vandenkoornhuyse et al., 2015). In this context, the heterogeneous presence of beneficial microorganisms in the soil can be considered a source of stress for plants (i.e. absence of beneficial microbes). Regarding this heterogeneity, different forms of "continuity of partnership" could be expected to ensure beneficial interactions. Clonal plants are known to display plastic responses maximizing either exploration (foraging) or exploitation (specialization) of heterogeneous resources, relying on their ability to share resources and information within the clonal network. Such responses could then be expected to buffer the heterogeneity of beneficial microorganisms. In our experiments (Article II), however, we demonstrated that Glechoma hederacea only displayed a weak foraging response and no specialization response to AM fungi heterogeneity. Clonal plants have nevertheless evolved another mechanism allowing them to ensure the presence of microorganisms and thus their habitat quality. We evidenced the existence of vertical transmission (or heritability) of microorganisms through the connections between ramets (Article III). This previously unknown mechanism allows the transfer of a core microbiota that ensures the habitat quality of the plant progeny. These results suggest that the physiological integration of plants (Oborny et al., 2001) comprises, in addition to resources and information, the sharing of microorganisms, which constitutes a prospect of interest. These results open a large avenue for future research questions, such as (1) the direction of this transfer (unidirectional or bidirectional, i.e. from progeny to parent plants); (2) the mechanism of transfer, active through the vascular network fluxes or passive by colonization of the stolon surface; and (3) the modalities of microorganism filtering during the process. Considering the latter hypothesis, we indeed evidenced that the transmitted communities are similar, an important consequence of which is the existence of a filtration process during transmission that creates this similarity. This filtration process could be based either on the dispersal ability of microorganisms (i.e. their ability to colonize stolons) or on the functions they provide to plants.
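One simple way to formalize the similarity of transmitted communities is to score the overlap between mother and daughter OTU sets against a null model of random recruitment from the soil pool. The sketch below uses Jaccard similarity and a permutation test; the OTU identifiers and pool sizes are hypothetical, and this is only one possible formalization, not the analysis actually used in Article III.

```python
# Sketch: mother-to-daughter transmission as OTU-set overlap versus a
# null of random recruitment from the available soil propagule pool.
import random

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

mother    = {"otu1", "otu2", "otu3", "otu4", "otu5"}
daughter  = {"otu2", "otu3", "otu5", "otu9"}
soil_pool = {f"otu{i}" for i in range(1, 40)}   # available propagules

obs = jaccard(mother, daughter)

# Null: daughters recruiting the same number of OTUs at random from soil
random.seed(1)
null = [jaccard(mother, random.sample(sorted(soil_pool), len(daughter)))
        for _ in range(999)]
p = (1 + sum(n >= obs for n in null)) / (1 + len(null))
print(f"observed Jaccard = {obs:.2f}, permutation P = {p:.3f}")
```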
Different hypotheses can be drawn from these results, such as the transmission of a core microbiota necessary for clone establishment or, alternatively, the transfer of only the most cooperative and important fraction of the microbiota. We are currently testing the hypothesis that the more beneficial symbionts could be preferentially transmitted. This experiment comprises two phases. The first phase (already completed, see above) aims at identifying the impact of the inoculation of single AM fungal species on plant traits and performance, in order to detect beneficial and deleterious AM species. The second phase consists in inoculating clonal plants with a designed mixture of AM fungal species covering a range of symbiont qualities; it aims at determining which AM fungal species are transmitted to the progeny, depending on their level of cooperation and their effects on plants. Estimating the ecological significance of this mechanism should be a main focus of future research, by determining its ubiquity, its fidelity and its significance for plant phenotype expression. The next step would thus be to test whether specific microbial cores can be recruited depending on plant ecological needs. For example, when the plant is subjected to a stress, it should preferentially transfer organisms conferring an increased resistance to this particular stress.

II. Redefining the individual plant: from holobiont to meta-holobiont

Plants can no longer be considered as standalone entities

Plants are colonized by a high diversity of symbionts that individually (Article II) and collectively (Articles I and V) determine the plant phenotype and its subsequent ability to grow, reproduce and adapt to environmental conditions (Article I). This microbiota is present in every known plant and can be transmitted between generations (Article III). The plant can thus no longer be seen as an independent entity. Regarding selection and evolution processes, the plant has to be considered as a complex and multiple entity comprising the plant and its microbiota, forming the holobiont, as well as their genomes, forming the hologenome (Vandenkoornhuyse et al., 2015). Selection processes can act at different organization levels in such complex organisms (i.e. on the plant phenotype, on the microorganism phenotypes, or on the result of their interactions).

Toward a novel understanding of clonal network organisms

The discovery of a novel microbiota heritability mechanism deeply impacts our understanding of plant phenotype and evolution. Not only can plants be considered as holobionts, but the structure of clonal plants in networks connecting clonal ramets extends the holobiont concept to a more complex scale that we have called the meta-holobiont (Article IV). The clonal network indeed allows the dynamic sharing of microorganisms between ramets: different holobionts are connected in the clonal network, allowing them to exchange part of their hologenome. The network thus constitutes an additional level on which selection can act, either on the phenotype of the individual holobiont or on the phenotype of the network. In addition, these two levels of organization are intricate, because the fitness of one member of the network can affect the fitness of the whole network. This idea could be extended to other clonal networks as soon as organisms can be shared within the network. Future research is needed on clonal organisms in order to determine whether such a structure exists in other organisms or whether the meta-holobiont only fits clonal plants. Even if the meta-holobiont theory developed herein (Article IV) cannot be extended to other organisms, we emphasize the suitability of the clonal plant model (and thus the meta-holobiont) for the study of microbiota assembly in the context of complex organisms. The role of the meta-holobiont in determining microorganism dispersal and plant fitness in natural ecosystems is a major perspective. In particular, in the context of the plant community, transmission and filtration mechanisms in the meta-holobiont could impact the dispersal of beneficial symbionts in the environment.
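A possible analysis for the second phase of the two-phase experiment described above, sketched below with entirely hypothetical benefit scores and transmission outcomes, is a logistic regression of the transmission outcome on the benefit estimated in phase 1. A positive slope would support the preferential transmission of the more cooperative symbionts.

```python
# Sketch: is the probability that an AM fungal species is transmitted to
# daughter ramets predicted by the benefit it conferred in phase 1?
import numpy as np
import statsmodels.api as sm

benefit     = np.array([-0.3, -0.1, 0.0, 0.2, 0.4, 0.5, 0.7, 0.9, 1.1])
transmitted = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1])  # detected in daughters?

X = sm.add_constant(benefit)
fit = sm.GLM(transmitted, X, family=sm.families.Binomial()).fit()
print(fit.summary())   # a positive slope on 'benefit' would support
                       # preferential transmission of cooperative symbionts
```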
If the meta-holobiont is able to selectively facilitate the spreading of the most useful microorganisms within the plant community, it could impact the microbiota assembly of the whole plant community.

III. From individuals to community

The community context shapes the microbiota

At the plant community scale, plants interact with each other in neutral, competitive or facilitative ways. A given plant in the community thus interacts with its neighborhood, and it is likely that this neighborhood affects the assembly of its microbiota. More importantly, at a given location, the fungal propagule reservoir (i.e. the soil pool) available for recruitment by the plant is a consequence of the fungal community within the previous host plants. We demonstrated that the plant neighborhood determines in part this fungal soil pool available for plant recruitment (Article V). We explained the competitive balance and richness of the fungal communities colonizing the trap plant Medicago truncatula based on the close neighborhood of plants surrounding the soil sampling point. Valyi et al. (2016) proposed that the relative influence of host filtering on the AM fungal community is stronger at local scale. Our results fit perfectly with this hypothesis and introduce the role of the local plant community context (i.e. the plant neighborhood) as a factor structuring the assemblage of the plant microbiota, aside from plant genetics and environmental factors such as soil properties. The results obtained herein (Article V) suggest that the host preference and host filtering structuring the fungal community mostly act at the centimetric scale (Hazard et al., 2013; Valyi et al., 2016). The effect of the plant neighborhood on fungal community richness and composition occurred at a finer scale than previously thought (i.e. 5 cm to 25 cm around the sampling point). Previous studies focusing on AM fungi reported that AM fungal community composition was highly heterogeneous, even at local scale (Brundrett & Abbott; Carvalho et al.; Wolfe et al.). Bahram et al. (2015) also reported in a meta-analysis the existence of a spatial autocorrelation of AM fungi, thus suggesting dispersal patterns of fungi at the scale of the meter. This corpus of knowledge, together with our results, suggests that fungi, and possibly more widely microorganisms, are dispersal-limited at the scale of the plant community. However, evidence that isolation and dispersal limitation exist for microbial assemblages is scarce (Telford et al., 2006). There is thus a need to reconsider the scale at which we study plant microbiota assembly rules. In addition, the mechanism of microbiota heritability evidenced herein (Article III) allows the dispersal of fungi in the community and calls for the development of a framework on microorganism dispersal.
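Dispersal limitation of the kind discussed above is commonly tested through the distance decay of community similarity. The toy sketch below runs a simple Mantel-type permutation test correlating pairwise spatial distance with community dissimilarity; the matrices are simulated, and dedicated ecology packages (such as vegan in R) offer more complete implementations.

```python
# Toy Mantel-type test: does community dissimilarity increase with
# spatial distance (a signature of dispersal limitation)?
import numpy as np

rng = np.random.default_rng(0)
n = 12
xy = rng.uniform(0, 25, size=(n, 2))                   # sample coords (cm)
space = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

# Simulated dissimilarity matrix with a built-in distance-decay signal
comm = (space / space.max()) + rng.normal(0, 0.2, (n, n))
comm = (comm + comm.T) / 2
np.fill_diagonal(comm, 0)

iu = np.triu_indices(n, 1)
obs = np.corrcoef(space[iu], comm[iu])[0, 1]

perm = []
for _ in range(999):                                   # permute sample labels
    p = rng.permutation(n)
    perm.append(np.corrcoef(space[p][:, p][iu], comm[iu])[0, 1])
pval = (1 + np.sum(np.array(perm) >= obs)) / 1000
print(f"Mantel r = {obs:.2f}, P = {pval:.3f}")
```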
Toward the concept of a microorganism micro-landscape

The host plant has been shown to affect the microbiota structure, in particular its composition and diversity (Berg & Smalla, 2009; Bulgarelli et al., 2012; Lundberg et al., 2012). In addition, we demonstrated that the plant neighborhood can have contrasting effects on the richness and equitability of fungal taxonomic groups (e.g. different phyla or classes). Theoretically, niche partitioning and neutral processes (Hubbell, 2001, 2005) are classically admitted to drive community assembly and explain diversity patterns. In the context of symbiotic microorganisms, plants can be considered as habitats, and the assembly of their microbial communities is driven by the niche partitioning of microorganisms. Our results thus suggest that a particular plant constitutes a host of a specific quality (favorable or unfavorable) that can be assimilated to a patch of habitat for symbiotic organisms. From there, the plant community can be assimilated to a dynamic mosaic of patches. The composition of the patches and their spatial arrangement could thus determine the microbiota assembly of the plant species within the mosaic. In this context, a major perspective is to transpose and adapt macro-landscape ecological frameworks to the scale of the microorganism landscape. The plant community can be redefined from the fungal perspective, in an attempt to characterize what habitat size, isolation and dispersal should mean in microbial ecology. From our results, we develop the idea that, for a given fungus, the plant community is a dynamic mosaic of plants of various qualities that can be assimilated to patches in a landscape. An important consequence of this understanding of the fungal landscape is the transposition of macro-landscape elements to the micro-scale. For example, a perspective of this approach is to investigate the impact of connectivity in the fungal micro-landscape. Connectivity describes the permeability of landscape elements to the dispersal of organisms. If plants can be considered as habitats of different quality for a fungus, then the abundance of favorable hosts within the fungal landscape should increase the dispersal of the fungus. We are currently writing a review (not presented herein) that aims at transposing this knowledge of the macro-landscape to the micro-scale, thereby introducing the concept of the micro-landscape. In parallel, we conducted a collaborative experiment (not presented herein) using experimental mesocosms to test the hypothesis that landscape parameters such as the connectivity between two hosts (defined as the presence of favorable hosts between the focal plants) could explain the similarity of their microbiota.
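To make the connectivity idea concrete, the toy sketch below transposes a Hanski-type patch-connectivity index to the fungal micro-landscape, treating each plant as a habitat patch of host-dependent quality weighted by an exponential dispersal kernel. Positions, patch qualities and the decay constant are all hypothetical; this is a sketch of the transposition, not an analysis from the thesis.

```python
# Toy connectivity index in a fungal micro-landscape: each plant is a
# habitat patch whose quality for a given fungus depends on host identity.
import numpy as np

rng = np.random.default_rng(42)
n = 20
xy = rng.uniform(0, 100, size=(n, 2))        # plant positions (cm)
quality = rng.choice([1.0, 0.2], size=n)     # favorable vs poor hosts

alpha = 1 / 15.0                             # 1 / mean dispersal distance (cm)
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

# Hanski-type connectivity of each focal plant: sum over the other
# patches of host quality weighted by an exponential dispersal kernel.
kernel = np.exp(-alpha * d)
np.fill_diagonal(kernel, 0.0)
connectivity = kernel @ quality
print(connectivity.round(2))
```

Under this formalization, a plant surrounded by nearby favorable hosts gets a high connectivity score, which is the quantity the mesocosm experiment mentioned above would relate to microbiota similarity.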
IV. Microbiota assembly and agricultural practices

The ability to engineer the plant microbiota is a key challenge for resolving the productivity-biodiversity loss conundrum. Microbiota engineering aims at optimizing specific attributes of the focal host organism through the selection of a specific microbiota (Müller et al., 2016). In plants, several candidate traits have been identified, including for example flowering time and plant biomass (Panke-Buisse et al., 2015). To fulfill this goal, future research will have to describe precisely, on the one hand, the rules driving microbiota assembly and, on the other hand, the host-plant fitness resulting from a given assembly. While beneficial microorganisms have started to be commercialized (Berg, 2009; and see de Vrieze, 2015 for a chronicle), they are not always reliable on crops, depending in particular on the local context. Our results provide evidence that it is not only the presence of one particular beneficial microbe that increases plant performance but also the overall richness of the plant fungal microbiota (Article V). For several years, scientists have proposed improvements to agricultural practices, such as increasing plant crop richness to promote microorganism diversity and thereby enhance the diversity of ecological functions provided to the plant (Duhamel & Vandenkoornhuyse, 2013). The work presented herein demonstrated that the rules of plant microbiota assembly, including the richness of major fungal groups, and the resulting effects on the plant phenotype depend on the plant community context (Article V). There are thus prospects for agricultural practices to consider not only plant richness but also plant composition and spatial arrangement. Studies have until now focused either on the identification of key beneficial microorganisms or on the maximization of microorganism diversity. We additionally demonstrate that opportunities exist to maximize the equitability of the microbiota in order to increase plant fitness. Moreover, the effect on the plant phenotype does not necessarily rely on plant community richness, as previously thought, but can also depend on plant identities in the host neighborhood. This encourages the development of polyculture practices. For example, combinations of facilitative plants could be used in crops to maximize the richness of key fungal groups and thus ultimately increase productivity.

To conclude, I can see two major perspectives arising from this PhD thesis. The first perspective is linked to the above-mentioned limits of the neo-Darwinian synthesis of evolution in describing and understanding plant-microbe interactions and their evolutionary consequences. Both the microbiota and epigenetics are not linked to the genome and are thus not integrated in the synthesis, which also does not encompass biological units of organization like holobionts and meta-holobionts. From my perception, one of the major challenges of the next few years lies in the development of an extension of the neo-Darwinian synthesis of evolution integrating non-genome-based and interaction-mediated evolutionary patterns. The second perspective is linked to the concrete application of the acquired knowledge to the development of a sustainable agriculture. The results obtained during this PhD highlight the complexity and the dynamics of plant microbiota assembly and of its resulting effects on the plant phenotype. From my perception, therefore, the ability to maximize crop resistance, resilience and productivity does not reside in the study of a given beneficial microorganism but rather in the understanding of the mechanisms regulating the emergent properties and functioning of microbiota assemblages.
ABSTRACT

Plants live in association with a wide diversity of microorganisms forming the microbiota. The plant microbiota provides a variety of key functions that influence many aspects of the plant's life, including establishment, growth and reproduction. The present thesis aims at determining the assembly rules of the plant microbiota and its consequences for plant phenotype, adaptation and evolution. To fulfill this objective, we used different experimental approaches involving either clonal plants as model organisms or grassland mesocosms for community-wide analyses. Our results demonstrated that i) arbuscular mycorrhizal fungi induce important phenotypic variations in clonal plant traits involved in space exploration and resource exploitation; these changes depended on the identity of the symbionts and altered the plants' ability to produce plastic responses to environmental heterogeneity. ii) Plants have evolved a mechanism allowing the transmission of a part of their microbiota to their progeny, thus ensuring their habitat quality. iii) The plant community context is a major factor structuring local plant microbiota assembly: particular plant species identities in the neighborhood increase or decrease the microbiota diversity and ultimately determine the focal plant performance. This thesis overall demonstrates the importance of symbiotic microorganisms in the understanding of plant adaptation and evolution. From the knowledge acquired, we developed a novel understanding of symbiotic interactions in clonal plants by extending the holobiont theory to the meta-holobiont theory.

Figure 1. a) Schematic plant showing the different compartments of the plant and its environment colonized by microorganisms, including bacteria, fungi and archaea. Numbers of bacterial cells are indicated for the phyllosphere, atmosphere, rhizosphere, and root and soil bacterial communities, and were taken from Bulgarelli et al. (2013). b) Schematic root cross-section of the root hair zone showing epiphytic colonization and endophytes colonizing the endosphere. c) Schematic longitudinal section of a root and of the rhizosphere and bulk soil around the root. The bulk soil is colonized by a wide diversity of microorganisms, which are filtered within the rhizosphere and subsequently in the endosphere. Modified from Müller et al. (2016).

Figure 2. Phylogenetic structure of bacteria associated with the roots and leaves of different plant species. This figure was produced from sequencing data of different papers, analyzed using a reference-based operational taxonomic unit (OTU) picking method, and subsequently combined at the family level. Abbreviation: n.d., not detected. From Müller et al. (2016).

Figure 3. Growth form of the clonal plant Glechoma hederacea.
The plant is organized as a network of ramets that are potentially independent individual units composed of roots, shoots and a node. These ramets are connected by stolons, and the section of stolon separating two nodes (i.e. two ramets) is called an internode. Modified from Birch & Hutchings, 1994.

Figure 4. Schematic root surface and root endosphere, as well as the surrounding rhizosphere (zone of the soil under plant influence) and bulk soil. The figure shows the different abiotic and biotic factors structuring the assembly of the microorganism community within the different compartments.

A given plant will influence the local soil pool. Thus, at the scale of the plant community, the effect of a plant on the local soil pool of microorganisms indirectly affects the microbiota assembly of the other plants in the community. More importantly, as the filtering effect is dependent on plant identity, studies based on matrix-focal plant designs have demonstrated that this identity determines the composition of the focal plant fungal community (Johnson et al., 2004; Hausmann & Hawkes, 2009). Such host filtering effects have long been considered negligible for plant microbiota assembly in comparison to abiotic environmental factors. However, a recent review by Valyi et al. (2016) proposed that the relative influence of host filtering on the AM fungal community is stronger at local scale.

I.2 Article I: Epigenetic mechanisms and microbiota as a toolbox for plant phenotypic adjustment to environment

Evolution is driven by selection forces acting on variation among individuals. Induced phenotypes are subjected to selection (Pfennig, 2010). If selection acts primarily on the phenotype, the environmental constraints an organism has to face can lead either to directional selection or to disruptive selection of new phenotypes (Pfennig, 2010). Thus, novel traits can result from environmental induction followed by genetic accommodation of the changes (West-Eberhard, 2005). These accommodated novelties, because they act in response to the environment, are proposed to have a greater evolutionary impact than mutation-induced novelties (West-Eberhard, 2005).

Figure 1: (A) Plant phenotypic plasticity is triggered by environmental constraints. The phenotypic changes induced are not solely genetically controlled but are also based either on epigenetic marks (box 2) or on the plant microbiota through recruitment of mutualists. This plant 'toolbox' allows a rapid response to environmental constraints. (B) The control over plant phenotypic plasticity may crosstalk or synergistically interplay through different possible interactions: 1) plant-symbiont co-evolution; 2) genetic/epigenetic interplay; 3) epigenetic/microbiota cross-talk. These mechanisms also act at the modular scale of the plant structure.

Kiers, E. T., Hutton, M. G. and Denison, R. F. (2007). Human selection and the relaxation of legume defences against ineffective rhizobia. Proc. R. Soc. B 274, 3119-3126
Kiers, E. T., Duhamel, M., Beesetty, Y., Mensah, J. A., Franken, O., Verbruggen, E., et al. (2011). Reciprocal rewards stabilize cooperation in the mycorrhizal symbiosis. Science 333, 880-882
Kucharski, R., Maleszka, J., Foret, S., and Maleszka, R. (2008). Nutritional control of reproductive status in honeybees via DNA methylation. Science 319, 1827-1830
Lee, Y. K., and Mazmanian, S. K. (2010). Has the microbiota played a critical role in the evolution of the adaptive immune system?
Science 330, 1768-1773
Leggat, W., Ainsworth, T., Bythell, J., Dove, S., Gates, R., Hoegh-Guldberg, O., et al. (2007). The hologenome theory disregards the coral holobiont. Nat. Rev. Microbiol. 5, doi:10.1038/nrmicro1635C1
Lira-Medeiros, C. F., Parisod, C., Fernandes, R. A., Mata, C. S., Cardoso, M. A., and Ferreira, P. C. G. (2010). Epigenetic variation in mangrove plants occurring in contrasting natural environment. PLoS ONE 5, e10326
Lopez, M., Herrera-Cervera, J. A., et al. (2008). Growth and nitrogen fixation in Lotus japonicus and Medicago truncatula under NaCl stress: Nodule carbon metabolism. J. Plant Physiol. 165, 641-650
De Luis, A., Markmann, K., Cognat, V., Holt, D. B., Charpentier, M., Parniske, M., et al. (2012). Two microRNAs linked to nodule infection and nitrogen-fixing ability in the legume Lotus japonicus. Plant Physiol. 160, 2137-2154
Lundberg, D. S., Lebeis, S. L., Paredes, S. H., Yourstone, S., Gehring, J., Malfatti, S., et al. (2012). Defining the core Arabidopsis thaliana root microbiome. Nature 488, 86-90
Manning, K., Tör, M., Poole, M., Hong, Y., Thompson, A. J., King, G. J., et al. (2006). A naturally occurring epigenetic mutation in a gene encoding an SBP-box transcription factor inhibits tomato fruit ripening. Nat. Genet. 38, 948-952
Martin, R. A. and Pfennig, D. W. (2010). Field and experimental evidence that competition and ecological opportunity promote resource polymorphism. Biol. J. Linn. Soc. 100, 73-88
Mayr, E. and Provine, W. B. (Eds.) (1998). The evolutionary synthesis: perspectives on the unification of biology. Harvard University Press
Molinier, J., Ries, G., Zipfel, C., and Hohn, B. (2006). Transgeneration memory of stress in plants. Nature 442, 1046-1049
Peck, L. S., Thorne, M. A., Hoffman, J. I., Morley, S. A., and Clark, M. S. (2015). Variability among individuals is generated at the gene expression level. Ecology 96, 2004-2014
Pfennig, D. W., Wund, M. A., Snell-Rood, E. C., Cruickshank, T., Schlichting, C. D., and Moczek, A. P. (2010). Phenotypic plasticity's impacts on diversification and speciation. Trends Ecol. Evol. 25, 459-467
Pigliucci, M. (2005). Evolution of phenotypic plasticity: where are we going now? Trends Ecol. Evol. 20, 481-486

I.3 Article II: AM fungi patchiness and the clonal growth of Glechoma hederacea in heterogeneous environments

Université de Rennes 1, CNRS, UMR 6553 EcoBio, Campus Beaulieu, Avenue du Général Leclerc, RENNES Cedex (France)
2 Université de Lyon 1, CNRS, UMR 5023 LEHNA, 43 Boulevard du 11 Novembre 1918, VILLEURBANNE Cedex (France)

Abstract: The effect of the spatial distribution of AM fungi on individual plant development may determine the dynamics of the whole plant community. We investigated whether clonal plants display a foraging or a specialization response, as for other resources, to adapt to the heterogeneous distribution of AM fungi. Two separate experiments were performed to investigate the response of Glechoma hederacea to a heterogeneous distribution of a mixture of 3 AM fungal species, and the individual effects of each species on colonization and allocation traits. No clear foraging or specialization response was detected. Two possible explanations are proposed: (i) the plant's responses are buffered by differences in the individual effects of the fungal species or in their root colonization intensity;
(ii) the initial AM fungi heterogeneity is sensed as homogeneous by the plant, either owing to reduced physiological integration or due to the transfer of AM fungal propagules through the stolons. Microscopic and DNA sequencing analyses provided evidence of this transfer, thus demonstrating the role of stolons as dispersal vectors of AM fungi within the plant clonal network.

Key words: Glechoma hederacea, Arbuscular Mycorrhizal Fungi, Phenotypic Plasticity, Clonality, Heterogeneity, Scanning Electron Microscopy, Patches

Fig. 1: Schematic drawing of the experimental design composed of pots arranged in lines. Ramets were forced to root in different pots and lateral ramifications were removed to orient growth in a line. Four treatments of AM fungal distribution were applied based on the presence or absence of AM fungi in the pots: Absence (A) (10 pots without AM fungi); Presence (P) (10 pots with AM fungi); Presence-Absence (PA) (five pots with AM fungi followed by five pots without AM fungi); Absence-Presence (AP) (five pots without AM fungi followed by five pots with AM fungi).

Fig. 2: Foraging response: internode length under the four treatments applied (cm per gram of ramet total biomass) (A). Specialization response: root:shoot ratio (R/S) of the 5th, 6th, 10th and 11th ramets under the four applied treatments (g of roots per g of shoots after drying) (B). Absence (blue bars), Presence (grey bars), Presence-Absence (orange bars), Absence-Presence (green bars). Statistical significance of the internode length or R/S variations between treatments: NS, not significant; **, P<0.01.

Fig. 3: Allocation traits of the whole clone for the four treatments of AM fungi inoculation: T1 = no AM fungi (white bars), T2 = Glomus custos (blue bars), T3 = Glomus intraradices (yellow bars), T4 = Glomus clarum (red bars). Means of each organ (shoots, roots and stolons) biomass in grams per gram of total clone biomass. Statistical significance of the organ biomass variations between treatments: NS, not significant; *, P<0.05.

Fig. 4: Performance traits of the clone for the four treatments of AM fungi inoculation: T1 = no AM fungi (white bars), T2 = Glomus custos (blue bars), T3 = Glomus intraradices (yellow bars), T4 = Glomus clarum (red bars). Total clone biomass in grams after drying (A). Number of ramets per gram of total clone biomass (B). Statistical significance of the total biomass and number of ramets variations between treatments: NS, not significant; *, P<0.05.

Fig. 5: Maximum likelihood tree of the GTR+I+G model using PhyML. The multiple alignment was produced with MUSCLE. Bootstrap values at the nodes were produced from 200 replicates; only values above 50 are shown. Multiple alignment and tree reconstruction were performed using SEAVIEW. OTUs were obtained from a Glechoma hederacea stolon after DNA extraction using the DNeasy plant mini kit (Qiagen), PCR amplification using the fungal primers NS22b and SSU817, and Illumina MiSeq sequencing. In addition to reference sequences within the Glomeromycota phylum, we sampled 13 sequences among the best BLAST hits (†).

Fig. 6: Results of the microscopy analysis of stolons harvested from G. hederacea pre-cultures. Scanning electron microscopy of the stolon surface showing hyphae attached to the stolon hairs (A). Stolon cross-section observed with an optical microscope.
Arrows indicate cortical cells invaded by structures which may be interpreted as fungi (B).

Figure 1 | Experimental design. (a) Clonal ramets of 10 ecotypes were forced to root in separate individual pots and connected by stolons. At the end of the experiment, the clonal network consisted of the mother ramet and 4 daughter ramets. The daughter ramets (1st and 2nd daughter ramets) were positioned along the two primary stolons produced by the mother ramet. Pots with mother ramets were filled with homogenized field soil, those with daughter ramets contained sterilized substrate, and contact was only through the internode that separated two consecutive ramets. M = mother, D1 = 1st daughter, D2 = 2nd daughter. (b) Picture of the experimental design: the pots are only connected by the internodes.

Figure 2 | Composition of the bacterial and fungal communities within the root endosphere at the different positions in the clonal network. (a) Mean number of OTUs of each fungal phylum and mean total number of OTUs for all phyla together found in the root samples at the different positions in the clonal network (mother, 1st daughter or 2nd daughter). Vertical bars represent the standard error of the mean for each phylum. Significance of the linear mixed models testing the differences in OTU richness between mothers and daughters in the clonal network is indicated: *** P<0.001. (b) Mean number of OTUs of each bacterial phylum and mean total number of OTUs for all phyla together found in the root samples at the different positions in the clonal network (mother, 1st daughter or 2nd daughter). Vertical bars represent the standard error of the mean for each phylum. Significance of the linear mixed models testing the differences in OTU richness between mothers and daughters in the clonal network is indicated: *** P<0.001.

Figure 3 | Partial Least Square Discriminant Analysis (PLS-DA). (a) PLS-DA testing the significance of the position (mothers, 1st daughters and 2nd daughters) on the composition of the root bacterial communities. (b) PLS-DA testing the significance of the position (mothers, 1st daughters and 2nd daughters) on the composition of the root fungal communities. The groups used as grouping factor in the model are represented on the graphs; they correspond to mother, 1st and 2nd daughter ramets, with 1st and 2nd daughter ramets grouped independently of the stolon to which they belonged. This analysis was used to test the hypothesis that roots at different ramet positions in the clonal network exhibit similar compositions of fungal and bacterial communities. The percentage of variance indicated on each axis represents the variance of the community composition explained by the grouping factor.

As regards this last aspect, plant communities are dominated by clonal plants, and our findings demonstrate their fundamental role in the spreading of microorganisms between trophic levels and reveal a new ecological function of plant clonality. Considering that the heritability process demonstrated herein affects different compartments within the ecosystem, this novel ecosystem process, consisting of microbiota filtering and transfer by clonal plants, is of paramount importance.

A microorganisms journey between plant generations
Nathan Vannier, Cendrine Mony, Anne-Kristel Bittebiere, Sophie Michon-Coudouel, Marine Biget, Philippe Vandenkoornhuyse

II.3 Article IV: Introduction of the meta-holobiont concept for clonal plants
Fig 1: Glechoma hederacea phenotypic plasticity induced by its microbiota composition. All the plants have the same genotype (i.e. the same clone) and were grown under controlled conditions (light, temperature, hygrometry, duration, water supply). Only one component of their microbiota differed (i.e. the mycorrhizal colonizer).

Figure 2. a) Schematic clonal plant showing the two levels of modularity within clonal plants: 1st-order modules and 2nd-order modules. b) Schematic clonal plant showing a plastic foraging response in a resource-heterogeneous environment. The foraging response consists in ramet aggregation in good patches through the modification of internode lengths, while avoiding poor patches. c) Schematic clonal plant showing a plastic specialization response to combined heterogeneity of light and nutrients. The response consists in preferentially developing roots in high-nutrient patches and shoots in high-light patches, and in redistributing resources within the network thanks to physiological integration.

Fig 3: The meta-holobiont. The holobiont entity and the meta-holobiont entity and their respective compositions are explained in (A) and (B), respectively. Colored dots correspond to epiphytic or endophytic microorganisms forming the root microbiota. Notice that both roots and shoots are colonized by microorganisms.

The transfer of microbiota between ramets (Vannier et al., Submitted), and possibly reciprocally (i.e. from clonal offspring to ascendant ramets), although this has not been demonstrated yet, clearly represents an important level to consider in the understanding of the clonal holobiont. If holobionts and hologenomes are units of biological organization for the observation and understanding of a given macro-organism (principles 1, 2 and 3 in Bordenstein & Theis, 2015), then the meta-holobiont can be considered akin to a supra-organism.

Holobiont impacts on meta-holobiont structure and reciprocal effects

In patchy resource distributions, many authors have demonstrated that clonal individuals aggregate ramets through internode shortening and increased branching in resource-rich patches, and avoid resource-poor patches through spacer elongation (i.e. foraging, Fig 2b; de Kroon & Hutchings, 1995).

(i.e. from 5 to 25 cm), stressing the importance of the local plant community context for endophyte assembly. In addition, both the present and the past plant neighborhoods impacted the root fungal community, thus demonstrating that plants can leave a durable fingerprint on the composition of the soil fungal pool. Furthermore, we demonstrated that such changes in the fungal composition of the host plant roots impacted M. truncatula performance: higher species richness and more equitable endophyte assemblages increased the fitness of the plant. Both results demonstrated the existence of plant-plant interactions mediated by fungi, ultimately impacting plant fitness. This offers a new understanding of plant microbiota assembly through the underestimated role of the local plant community. The role of leaf endophyte diversity in the diversity-productivity relationship has recently been proposed (Laforest-Lapointe et al., 2017), and our results suggest that it could be extended to the root microbiota and depends on the plant community context.
III.2 Article V: Plant-plant interactions mediated by fungi impact plant fitness

Introduction

The great diversity of soil microorganisms and of the micro- and meso-fauna is a key driver of aboveground ecosystem functioning, which determines vegetation composition and dynamics (e.g. van der

Figure 1. Sampling protocol. Plant neighborhoods were determined by mapping the abundances of the different plant species in the past (one year before sampling) and in the present (the same year as sampling). Five soil cores were sampled within each plot and an individual of M. truncatula was grown on each soil sample as a trap plant.

Effects of fungal richness and equitability on the trap plant biomass

The plant neighborhood (i.e. 5 cm to 25 cm around the sampling point) also determines in part the fungal soil pool available for plant recruitment. Our findings thus open up new avenues in the understanding of plant microbiota assembly rules by introducing the role of the local plant community context (i.e. the plant neighborhood) as a structuring factor.

The results presented herein explain the competitive balance and richness of the fungal communities colonizing Medicago truncatula based on the close neighborhood of plants surrounding the sampling point. These findings expand the rules of plant fungal microbiota assembly to the consideration of the fingerprint that a plant in the local neighborhood can leave on the soil fungal pool. These plant filtering mechanisms operated on the fungal community at a finer scale than previously thought (below 25 cm). The clear influence of the plant neighborhood on the local soil microbial reservoir is an additional ecological force driving microbiota complexity and heterogeneity, with strong consequences for the focal plant fitness. Recent advances demonstrated a significant impact of the leaf-associated microbiota diversity on ecosystem productivity (Laforest-Lapointe et al., 2017). We extend this idea to the root-associated microbiota and support the correlation between plant-associated microbial diversity and ecosystem productivity, suggesting that manipulations of plant neighborhoods can be used to improve biodiversity-ecosystem functioning relationships.

Viard-Crétat F, Baptist F, Secher-Fromell H, Gallet C. 2012. The allelopathic effects of Festuca paniculata depend on competition in subalpine grasslands. Plant Ecology 213: 1963-1973.
Wagg C, Jansa J, Schmid B, van der Heijden MG. 2011. Belowground biodiversity effects of plant symbionts support aboveground productivity. Ecology Letters 14: 1001-1009.
Wagg C, Bender SF, Widmer F, van der Heijden MG. 2014. Soil biodiversity and soil community composition determine ecosystem multifunctionality. Proceedings of the National Academy of Sciences 111: 5266-5270.
West SA, Kiers ET, Pen I, Denison RF. 2002. Sanctions and mutualism stability: when should less beneficial mutualists be tolerated? Journal of Evolutionary Biology 15: 830-837.
Zhang Q, Sun Q, Koide RT, Peng Z, Zhou J, Gu X, Gao W, Yu M. 2014. Arbuscular mycorrhizal fungal mediation of plant-plant interactions in a marshland plant community. The Scientific World Journal: 923610.

Acknowledgements

This work was supported by a grant from the ANR program, by a grant from the CNRS-EC2CO program (MIME project) and by the French ministry for research and higher education. We thank Kevin Potard for advice on statistical analysis. We are also grateful to D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript.
We also thank the Institut National de la Recherche Agronomique (INRA, Montpellier, France) for providing M. truncatula seeds.

Author Contributions

A.K.B., P.V. and C.M. conceived the ideas and experimental design. A.K.B. and C.M. performed the experiments. N.V. did the data analyses. N.V., A.K.B., P.V. and C.M. did the interpretations and writing of the publication.

I. Plants and microorganisms: a tight association

Microbiota matters for plant phenotype

Considering the diversity of functions provided by the plant microbiota, it is clear that endophytes have important impacts on the plant phenotype (Article I). Clonal plants, which represent most of the plants known in temperate ecosystems (van Groenendael & de Kroon, 1990), are no exception to this rule (Article II). We demonstrated that the inoculation of symbionts (AM fungal species) can alter architectural, allocative and reproductive traits (Article II). The effects of these plant-associated microorganisms on the traits involved in clonal plant foraging and specialization have been poorly described to date. Our experiments thus provided indications that AM fungi could be responsible for changes in the ability of clonal plants to produce foraging and specialization responses.

Keywords: Clonal plants; Microbiota; Arbuscular Mycorrhizal Fungi; Holobiont; Plant-plant interactions; Community assembly
The best-known epigenetic mechanisms involve DNA methylation, histone modifications and histone variants, and small RNAs. These epigenetic mechanisms lead to enhanced or reduced gene transcription and RNA translation (e.g. Richards, 2006; Holeski, 2012). A more restricted definition, applied in this paper, considers as epigenetic the states of the epigenome regarding the epigenetic marks that affect gene expression: DNA methylation, histone modifications (i.e. histone amino-terminal modifications that act on affinities for chromatin-associated proteins) and histone variants (i.e. structure and functioning), and small RNAs. These epigenetic marks may act separately or concomitantly, and can be heritable and reversible (e.g. Molinier et al., 2006; Richards et al.; Bilichak, 2012).
The induction of defense pathways and metabolite synthesis against biotic and abiotic constraints by epigenetic marks has been demonstrated during the last decade, mainly in the model plant species Arabidopsis and tomato (e.g. Rasmann et al., 2012; Slaughter et al., 2012; Sahu et al., 2013).

(1) horizontal gene transfer between members of the holobiont (i.e., transfer of genetic material between bacteria; Dinsdale et al., 2008), (2) microbial amplification (i.e., variation of microbe abundance in relation to environmental variation), and

Symbiosis lost: Imperfect vertical transmission of fungal endophytes in grasses. Am. Nat. 172, 405-416
Alberch, P. (1991). From genes to phenotype: dynamical systems and evolvability. Genetica 84, 5-11
Alpert, P., and Simms, E. L. (2002). The relative advantages of plasticity and fixity in different environments: when is it good for a plant to adjust? Evol. Ecol. 16, 285-297
Anderson, J. T., Willis, J. H., and Mitchell-Olds, T. (2011). Evolutionary genetics of plant adaptation. Trends Genet. 27, 258-266
Bilichak, A. (2012). The progeny of Arabidopsis thaliana plants exposed to salt exhibit changes in DNA methylation, histone modifications and gene expression.
PLoS ONE 7, e30515
Boller, T., and Felix, G. (2009). A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors. Ann. Rev. Plant Biol. 60, 379-406
Bossdorf, O., Richards, C. L., and Pigliucci, M. (2008). Epigenetics for ecologists. Ecol. Lett. 11, 106-115
Bradshaw, A. D. (1965). Evolutionary significance of phenotypic plasticity in plants. Adv. Genet. 13, 115-155
Brundrett, M. C. (2002). Coevolution of roots and mycorrhizas of land plants. New Phytol. 154, 275-304
Bulgarelli, D., Rott, M., Schlaeppi, K., Ver Loren van Themaat, E., Ahmadinejad, N., Assenza, F., et al. (2012). Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota. Nature 488, 91-95
Conrath, U., Beckers, G. J., Flors, V., García-Agustín, P., Jakab, G., Mauch, F., et al. (2006). Priming: getting ready for battle. Mol. Plant-Microbe Interact. 19, 1062-1071
Crews, D., Gore, A. C., Hsu, T. S., Dangleben, N. L., Spinetta, M., Schallert, T., et al. (2007). Transgenerational epigenetic imprints on mate preference. Proc. Natl. Acad. Sci. U.S.A. 104, 5942-5946
Cubas, P., Vincent, C., and Coen, E. (1999). An epigenetic mutation responsible for natural variation in floral symmetry. Nature 401, 157-161
Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection. Oxford University Press
Dinan, T. G., Stilling, R. M., Stanton, C., and Cryan, J. F. (2015). Collective unconscious: how gut microbes shape human behaviour. J. Psychiatr. Res. 63
Dinsdale, E. A., Edwards, R. A., Hall, D., Angly, F., Breitbart, M., Brulc, J. M., et al. (2008). Functional metagenomic profiling of nine biomes. Nature 452, 629-632
El-Soda, M., Malosetti, M., Zwaan, B. J., Koornneef, M., and Aarts, M. G. (2014). Genotype × environment interaction QTL mapping in plants: lessons from Arabidopsis. Trends Plant Sci. 19, 390-398
Gilbert, S. F., McDonald, E., Boyle, N., Buttino, N., Gyi, L., Mai, M., et al. (2010). Symbiosis as a source of selectable epigenetic variation: taking the heat for the big guy. Proc. R. Soc. B 365, 671-678
Hodge, A. (2004). The plastic plant: root responses to heterogeneous supplies of nutrients. New Phytol. 162, 9-24
Holeski, L. M., Jander, G., and Agrawal, A. A. (2012). Transgenerational defense induction and epigenetic inheritance in plants. Trends Ecol. Evol. 27, 618-626
Huber, H. and Hutchings, M. J. (1997). Differential response to shading in orthotropic and plagiotropic shoots of the clonal herb Glechoma hirsuta. Oecologia 112, 485-491
Ichida, H., Matsuyama, T., Abe, T., and Koba, T. (2007). DNA adenine methylation changes dramatically during establishment of symbiosis. FEBS J. 274, 951-962
Ichida, H., Yoneyama, K., Koba, T., and Abe, T. (2009). Epigenetic modification of rhizobial genome is essential for efficient nodulation. Biochem. Biophys. Res. Commun. 389, 301-304
Jablonka, E. and Lamb, M. J. (2002). The changing concept of epigenetics. Ann. N. Y. Acad. Sci. 981, 82-96
Kaern, M., Elston, T. C., Blake, W. J. and Collins, J. J. (2005). Stochasticity in gene expression: from theories to phenotypes. Nat. Rev. Genet. 6, 451-464
Primary 5 th in te rn o de l e n g th 6 th in te rn o de l e n g th stolon length (an architectural trait) tended to vary (P=0.07; F=2.83) in response to the presence and species 5.74 <0.01 4.38 <0.05 0.59 / 0.81 / 1.12 0.15 0.93 0.02 0.87 0.96 / 1.34 / 1.86 0.48 0.69 --n/a 0.18 0.9 --n/a 1.09 0.37 --n/a 0.46 0.7 --n/a 1.1 0.36 14.49 <0.01 0.99 / 1.31 / 1.7 0.46 0.7 5.2 <0.01 1.06 / 1.40 / 1.85 0.26 0.84 1.91 0.18 0.89 / 1.18 / 1.55 0.88 0.46 1.08 0.3 0.89 / 1.18 / 1.56 1 0 th Ln te rn o d e l e n g th 11 th ramet number of ramifications 10 th ramet number of ramifications 6 th ramet number of ramifications 5 th ramet number of ramifications 11 th ramet root/shoot 10 th ramet root/shoot 6 th ramet root/shoot 5 th ramet root/shoot 11 th internode length of A M fungi whereas the number of ramifications (P=0.25; F=1.49) did not (Tab. 2). A llocation to stolons 0.43 / 0.81 / 1.53 0.37 / 1.79 / 8.53 0.75 / 1.45 / 2.8 0.34 / 0.83 / 2.05 0.54 / 0.97 / 1.74 0.41 / 0.94 / 2.17 n/a n/a n/a n/a 0.38 / 0.83 / 1.8 0.81 / 1.46 / 2.64 0.36 / 0.77 / 1.66 0.22 / 0.59 / 1.63 F-values and P-values of the treatment and total biomass (when used as covariable) are presented, as well as lower, estimate and upper values of intra and inter genotype variance (random factor). was significantly affected by the presence and species of A M fungi (P=0.017; F=4.51) with plants inoculated Table . - . 2: Results of linear models for each traits linked to the plants resources allocation and performance. Trait Treatment F -value P -value (α =0.05) Total biomass F -value P-value (α =0.05) R andom factor (G enotype) Intra : lower/estimate/upper Inter : lower/estimate/upper Total Biomas s Number of ramets (allocation) P rimary stolon length Number of ramifications S tolons weight (allocation) S hoots weigth (allocation) R oots weight (allocation) 0.67 3.55 2.84 1.49 4.51 3.96 0.5 0.57 <0.05 0.07 0.25 <0.05 <0.05 0.68 -46.6 1.99 5.8 91.37 1528 30.72 -<0.001 0.17 <0.05 <0.001 <0.001 <0.001 0.27 / 0.38 / 0.52 5.97 / 8.45 / 11.96 10.99 / 15.45 / 21.75 0.46 / 0.66 / 0.93 0.08 / 0.11 / 0.17 0.06 / 0.09 / 0.13 0.06 / 0.09 / 0.12 0.03 / 0.14 / 0.67 7.58 / 13.7 / 24.8 4.53 / 10.69 / 25.23 0.24 / 0.53 / 1.18 0.03 / 0.09 / 0.22 0.04 / 0.08 / 0.18 0.006 / 0.03 / 0.19 F-values and P-values of the treatment and total biomass (when used as covariable) are presented, as well as lower, estimate and upper values of intra and inter genotype variance (random factor). Iobs is the observed heritability index value, Inull is the mean of the null distribution and σ null is its standard deviation. SES aims to quantify the direction and magnitude of each ecotype heritability index compared to the null distribution. Negative SES values indicate lower heritability than in the random model (heritability of microorganisms species not present in the mother ramet), and McCabe, 2002: SES 3 I obs -I null σ null where Considering the fungal phyla and classes separately, plant-species effects were better revealed. For example, the equitability of the phylum Ascomycota significantly increased with B. Glomeromycetes, and Agaricomycetes whereas A. stolonifera increased the equitability of Glomeromycetes but decreased the equitability of Agaricomycetes. The explained variance in the models increased when lower as compared to higher fungal taxonomic levels were considered (e.g. 14% of the variance was explained for the Ascomycota phylum (P<0.01) while ~24% of the variance was explained for the Sordariomycetes class (P<0.001). Present pinnatum but decreased with H. 
Several plant species had the same effect on the equitability of every fungal group, whereas others had contrasting effects between groups: the abundance of B. pinnatum always increased fungal equitability within the Sordariomycetes.

Table 1: Responses of each fungal group equitability at the plot scale to the present and past plant neighborhoods (results from linear models). P-values, R² and significant plant species are presented (+ increasing; - decreasing).

Taxonomic group | Present: P | R² | Species | Past: P | R² | Species
All Fungi | 0.04 | 0.04 | Asto (-) | 0.11 | 0.01 | -
Ascomycota | 0.07 | 0.03 | Hmol (-) | 0.09 | 0.02 | -
Glomeromycota | 0.04 | 0.05 | Cnig (-) | 0.08 | 0.02 | -
Basidiomycota | 0.12 | 0.03 | Bpin (+) | 0.03 | 0.04 | Hlan (-)
Agaricomycetes | 0.02 | 0.04 | Hlan (-) | 0.02 | 0.04 | Hlan (-)
Sordariomycetes | 0.03 | 0.07 | Erep (+) Dglo (+) Lper (+) | 0.01 | 0.08 | Aten (+) Cnig (+) Lper (+)
Glomeromycetes | 0.004 | 0.08 | Hmol (-) Cnig (+) | 0.01 | 0.06 | Cnig (-)

Table 2: Responses of each fungal group richness and equitability at the sample scale to the past and present plant neighborhoods (results from linear models). P-values and the range of adjusted R² of the best models are presented. Significant time and spatial scales of neighborhood are also indicated. Only species significantly participating in the model building are shown, together with their effect on fungal richness and equitability (+ increasing; - decreasing). *, P<0.05; **, P<0.01; ***, P<0.001.

Taxonomic group | Metric | Time scale | Spatial scale (radius) | R² | Significant plant species
All Fungi | Richness | Present/Past | 5 to 20 cm | 0.07-0.1 (*) | Aten (+)
All Fungi | Equitability | Present/Past | 10 to 25 cm | 0.04-0.07 (*) | Hmol (-)
Ascomycota | Richness | Present/Past | 5 to 25 cm | 0.04-0.12 (*) | Aten (+) Frub (+)
Ascomycota | Equitability | Present/Past | 5 to 20 cm | 0.11-0.14 (**) | Hmol (-)
Glomeromycota | Richness | Present/Past | 5 to 25 cm | 0.06-0.09 (*) | Bpin (+) Erep (-)
Glomeromycota | Equitability | Present | 25 cm | 0.09 (**) | Bpin (+) Dglo (-) Frub (+)
Basidiomycota | Richness | - | - | - | -

Table 3: Results of linear models testing the effect of each fungal group richness and equitability on the biomass of the trap plant M. truncatula. ANOVA P-values, F-values and adjusted R² of the best models are presented. Only fungal groups significantly participating in the best model building are shown.

Taxonomic group | Richness P | Richness F | Richness R² | Equitability P | Equitability F | Equitability R²
All Fungi | 0.24 | 1.42 | 0.005 | 0.007 | 7.67 | 0.08
Ascomycota | - | - | - | 0.003 | 9.57 | 0.1
Basidiomycota | 0.01 | 2.56 | 0.12 | - | - | -
Glomeromycota | 0.02 | 2.38 | 0.14 | - | - | -
Sordariomycetes | - | - | - | 0.007 | 7.62 | 0.1
Agaricomycetes | 0.01 | 6.83 | - | 0.057 | 3.71 | -
Glomeromycetes | 0.04 | 4.34 | - | - | - | -

Université de Rennes 1, CNRS, UMR 6553 EcoBio, Campus de Beaulieu, Avenue du Général Leclerc, 35042 Rennes Cedex (France); Université de Lyon 1, CNRS, UMR 5023 LEHNA, 43 Boulevard du 11 Novembre 1918, 69622 Villeurbanne Cedex (France). *Corresponding author. Email: philippe.vandenkoornhuyse@univ-rennes1.fr

~In preparation~

Acknowledgments: This work was supported by a grant from the CNRS-EC2CO program (MIME project), the CNRS-PEPS program (MYCOLAND project) and by the French ministry for research and higher education. We also acknowledge E. T. Kiers and D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript, and A. Salmon for helpful discussions about epigenetics.
Summary

The classic understanding of organisms focuses on genes as the main source of species evolution and diversification. The recent concept of genetic accommodation questions this gene-centric view by emphasizing the importance of phenotypic plasticity in evolutionary trajectories. Recent discoveries on epigenetics and the symbiotic microbiota have demonstrated their deep impact on plant survival, adaptation and evolution, suggesting a novel comprehension of the plant phenotype. In addition, interplays between these two phenomena controlling plant plasticity can be proposed. Because epigenetic mechanisms and plant-associated (micro-)organisms are both key sources of phenotypic variation allowing environmental adjustments, we argue that they must be considered in terms of evolution. This 'non-conventional' set of mediators of phenotypic variation can be seen as a toolbox for plant adaptation to the environment over short, medium and long time-scales.

Key-words: Plant, Symbiosis, Phenotypic Plasticity, Epigenetics, Evolution

More recently, the development of the 'hologenome theory' (Zilber-Rosenberg and Rosenberg, 2008) posits that evolution acts on composite organisms (i.e., the host and its microbiome), the microbiota being fundamental for host fitness by buffering environmental constraints. Both the 'extended phenotype' concept and the 'hologenome theory' admit that the environment can leave a "footprint" on the transmission of induced characters. Thus, opportunities exist to revisit our understanding of plant evolution to embrace both environmentally-induced changes and the related 'genetic accommodation' processes.

A microorganisms' journey between plant generations

Nathan Vannier 1, Cendrine Mony 1, Anne-Kristel Bittebiere 2, Sophie Michon-Coudouel 1, Marine Biget 1, Philippe Vandenkoornhuyse 1

~In revision for ISME J~

Abstract

Plants are colonized by a great diversity of symbiotic microorganisms which form a microbiota and perform additional functions for their host. This microbiota can thus be considered a toolbox enabling plants to buffer local environmental changes, with a major influence on plant fitness. In this context, the transmission of the microbiota to the progeny represents a way to ensure habitat quality. However, examples of such transmission are scarce and their importance unclear. We investigated the transmission of symbiotic partners to the plant progeny within the clonal network, using Glechoma hederacea as a plant model. We demonstrated the vertical transmission of a significant proportion of the mother's symbiotic Bacteria and Fungi to the daughters, and the heritability of a specific core microbiota. In this clonal plant, microorganisms are transmitted between individuals through connections, thereby ensuring the availability of symbiotic partners for the newborn plants as well as dispersal between hosts for the microorganisms. This previously unknown ecological process allows the dispersal of microorganisms in space and across plant generations. The vast majority of plants are clonal; this process might therefore be a strong driver of ecosystem functioning and of the assembly of plant and microorganism communities in a wide range of ecosystems.
Supplementary Figure 1 | (a), mean rarefaction curves for the bacterial communities in the mother root samples (red), the daughter root samples (green), the internode samples (brown) and the control pots (blue). (b) and (c), mean rarefaction curves for the fungal communities in the same sample categories. Coloured areas indicate ± SE.

OTU heritability was tested for each ecotype by generating null daughter communities composed of random samples of the microorganism species occurring within the species pool (regional pool) of the mother ramet. Only the species identity was changed, while species richness within the null daughter communities remained unmodified. We created 9999 null datasets for each daughter ramet and measured, for each ecotype created in this way, the OTU heritability as the number of OTUs shared between the mother and at least 2 daughters. We then calculated the standard effect size (SES) values of each ecotype. Negative SES values indicated that the observed heritability was lower than would be expected under the null model (heritability of OTUs not specific to the ecotype's mother ramet), whereas positive SES values revealed a higher heritability than expected (heritability of microorganisms from the mother). The horizontal line represents an SES value of 0 (no difference between the observed heritability and the null heritability). P-values indicate the significance of the one-sample t-test used to determine whether SES values are significantly higher than 0 (alternative hypothesis "greater").
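As an illustration of this null-model procedure, a minimal sketch in R is given below. The data layout is assumed for illustration only: mother and the elements of daughters are vectors of OTU identifiers, pool is the regional species pool of the mother ramet, and all object names are hypothetical rather than taken from the actual analysis scripts.

heritability <- function(mother, daughters) {
  # an OTU counts as heritable if shared between the mother and at least 2 daughters
  shared <- unlist(lapply(daughters, function(d) intersect(d, mother)))
  if (length(shared) == 0) return(0)
  sum(table(shared) >= 2)
}

ses_heritability <- function(mother, daughters, pool, n_null = 9999) {
  i_obs <- heritability(mother, daughters)
  # null model: redraw species identities from the regional pool,
  # keeping the richness of each daughter community unchanged
  i_null <- replicate(n_null, {
    null_daughters <- lapply(daughters, function(d) sample(pool, length(d)))
    heritability(mother, null_daughters)
  })
  (i_obs - mean(i_null)) / sd(i_null)   # standard effect size (SES)
}

# significance across ecotypes (alternative hypothesis "greater"):
# t.test(ses_values, mu = 0, alternative = "greater")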
Plant-plant interactions mediated by fungi impact plant fitness

Summary
• The plant microbiota is now recognized as a major driver of plant health. The rules governing root microbiota assembly have recently been investigated and the importance of abiotic determinants has been highlighted. The role of the biotic context of the plant community, however, remains unclear.
• We tested whether the plant neighborhood may leave a fingerprint on the fungal soil pool. We used outdoor experimental mesocosms comprising various floristic compositions and mapped plant distribution over two years. Medicago truncatula was used as a trap plant on soil samples and root DNA was sequenced to describe fungal communities. The trap plant's performance was estimated through biomass measures.
• Fungal community richness and equitability were influenced by the abundance of key plant species in the neighborhood. A given plant had contrasting effects on different fungal phyla and classes. Additionally, we demonstrated that changes in fungal group richness and equitability influenced the biomass of the trap plant.
• Our results suggest the existence of plant-plant interactions mediated by fungi and impacting plant fitness. The shift between facilitation and competition may be mediated by the fungal community. The ecosystem diversity-productivity relationship could be extended to the root microbiota and depends on the plant community context.

To determine the impact of fungal taxonomic group OTU richness and equitability on trap plant performance, we used linear models with the indices as explanatory variables and M. truncatula biomass as the dependent variable. This was done at the sample scale (central point of the mesocosm) and at the three taxonomic levels analyzed (all fungi, phyla, classes). For all models, data were log- or square-root-transformed when necessary to satisfy the assumption of a normal distribution of the residuals. The model coefficients and the proportion of index variation accounted for by the regression (R²) were calculated. The significance of each explanatory variable was tested with an ANOVA procedure. All statistical analyses were performed using the packages "car" (Fox & Weisberg, 2011) and "AICcmodavg" (Mazerolle) in R (R Core Team, 2013).

Results

Endospheric fungal microbiota in Medicago truncatula roots
The 2057 fungal OTUs found in the M. truncatula root endosphere belonged to five phyla (i.e. Zygomycota, Chytridiomycota, Glomeromycota, Ascomycota, and Basidiomycota), with Zygomycota and Chytridiomycota together accounting for less than 3% of the total number of OTUs.

Fungal community richness response to the plant neighborhood

Fungal richness at the plot scale
To investigate the effect of the plant neighborhood on the richness of the fungal community at the scale of the whole mesocosm (i.e. gamma diversity), we produced linear models at the plot scale (i.e. the five sampling points of the mesocosm) with the plant species abundances as explanatory variables (Tab. 1). The plot richness of the whole fungal community was significantly determined by the present plant neighborhood. However, the proportion of the variation in fungal richness explained by the model was low (P = 0.04; R² = 0.04). In addition, the fungal richness within the Ascomycota and Basidiomycota was not determined by the plant neighborhood, and only 5% of the variance in Glomeromycota richness could be attributed to the present-plant neighborhood (P = 0.04; R² = 0.05) (Tab. 1). When considering the past-plant neighborhood, only Basidiomycota richness was weakly determined by the plant neighborhood (P = 0.03; R² = 0.04), whereas Ascomycota and Glomeromycota, as well as the fungal community as a whole, were not.

Fungal richness at the sample scale
To investigate the effect of the plant neighborhood on the richness of the fungal community at fine scale, we produced linear models at the sample scale (i.e. center of the mesocosm) with plant species abundances as explanatory variables. Our multiple linear model analysis revealed that the richness of the fungal community was significantly determined by the plant neighborhood for all taxonomic groups tested (Tab. 2). In comparison with the models produced at the plot scale, a larger proportion of the variation in fungal community richness could be explained at this alpha-diversity scale. At the level of the whole fungal community, richness increased significantly only with the abundance of Agrostis tenuis in the neighborhood, whereas the abundance of the other plants had no significant effect on fungal community richness (P < 0.05; 0.07 ≥ R² ≥ 0.1). In addition, A. tenuis was one of the rarest species in the experiment and the effect detected could be an artifact of this rarity. Ascomycota richness increased with the abundance of A. tenuis and Festuca rubra (P < 0.05; 0.04 ≥ R² ≥ 0.12), whereas Glomeromycota richness increased with Brachypodium pinnatum and Dactylis glomerata and decreased with Elytrigia repens (P < 0.05; 0.06 ≥ R² ≥ 0.09). The effects (positive or negative) of plant species at the phylum level were not necessarily the same at the class level.
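As a minimal sketch of the model building and testing procedure described in the Methods above (assuming a data frame dat with one row per sample, a fungal richness column rich and neighborhood abundance columns aten, frub and bpin; all names are illustrative and not the actual analysis scripts):

library(car)          # Anova()
library(AICcmodavg)   # aictab()

# candidate linear models relating (log-transformed) fungal richness
# to the abundances of plant species in the neighborhood
m1 <- lm(log(rich) ~ aten, data = dat)
m2 <- lm(log(rich) ~ aten + frub, data = dat)
m3 <- lm(log(rich) ~ aten + frub + bpin, data = dat)

# AICc-based comparison of the candidate models
aictab(cand.set = list(m1, m2, m3),
       modnames = c("aten", "aten+frub", "aten+frub+bpin"))

Anova(m2)                    # significance of each explanatory variable
summary(m2)$adj.r.squared    # proportion of variation explained (adjusted R²)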
The presence of D. glomerata in the plant neighborhood significantly increased the

Discussion

Plant-plant interaction mediated by fungi affects plant performance
Knowing that fungal community composition was determined by the plant neighborhood and that these changes affected plant performance (biomass productivity), this study demonstrates the existence of plant-plant interactions mediated by fungal communities. Our results indicate that these shifts in fungal community composition can have positive or negative effects on the trap plant biomass, suggesting that the neighborhood can have either a facilitative or a competitive effect on the trap plant. Studies investigating the shift between plant-plant facilitation and competition have suggested that this shift is linked to environmental stress or disturbance intensity and is spatially heterogeneous (O'Brien et al.). Previous studies have indicated that plant-plant facilitation (i.e. a beneficial effect of a given plant's presence) may be linked to the composition of their AM fungal communities (Montesinos-Navarro et al.; Zhang et al., 2014). Montesinos-Navarro et al. argued that stronger facilitation occurs between pairs of plant species with different associated AMF. This phenomenon underlies a potential mechanism which increases AM fungal diversity in the shared rhizosphere and promotes complementarity between the beneficial effects of each AM fungus (Van der Heijden et al., 1998; Wagg et al., 2011). We herein provide the first experimental evidence for this assumption by showing that the richness of Glomeromycota increased with the abundance of specific plants in the neighborhood, which ultimately increased M. truncatula's biomass. Our results also indicate that changes in plant fungal communities can be detrimental, as several plant species had a negative effect on fungal richness and/or equitability. We thus propose that the shift from plant-plant facilitation to competition is mediated by changes in the fungal communities available for recruitment in the surrounding soil. More importantly, we demonstrated here that these phenomena of plant-plant facilitation and competition mediated by fungi are not restricted to AM fungi but extend to other fungal groups such as Ascomycota and Basidiomycota. We thus encourage future studies to consider the whole fungal community and to take into account the feedbacks between plant and fungal communities that can affect ecosystem properties such as productivity (Cadotte et al., 2008).

Positive and negative effects of given plants on the trap plant biomass
An increase in the trap plant biomass associated with an increase in fungal group richness (e.g. Glomeromycetes and Agaricomycetes) and equitability (e.g. Ascomycota and Sordariomycetes) was observed.
Extended matter coupled to BF theory

Winston J. Fairbairn (email: winston.fairbairn@ens-lyon.fr), Alejandro Perez (email: perez@cpt.univ-mrs.fr)

In this paper, we discuss various aspects of the four-dimensional theory of string-like matter coupled to BF theory. Firstly, we study classical solutions leading to an interpretation of the theory in terms of strings propagating on a flat spacetime. We also show that the general classical solutions of the theory are in one-to-one correspondence with solutions of Einstein's equations in the presence of distributional matter (cosmic strings). Secondly, we quantize the theory and present, in particular, a prescription to regularize the physical inner product of the canonical theory. We show how the resulting transition amplitudes are dual to evaluations of Feynman diagrams coupled to three-dimensional quantum gravity. Finally, we remove the regulator by proving the topological invariance of the transition amplitudes.

I. INTRODUCTION

Based on the seminal results of 2+1 gravity coupled to point sources [Deser, Jackiw and 't Hooft], recent developments [Freidel and Livine], [Noui and Perez] in the non-perturbative approach to 2+1 quantum gravity have led to a clear understanding of quantum field theory on a three-dimensional quantum geometrical background spacetime. The idea is to first couple free point particles to the gravitational field before going through the second quantization process. In this approach, particles become local conical defects of spacetime curvature and their momenta are recast as holonomies of the gravitational connection around their worldlines. It follows that momenta become group valued, leading to an effective notion of non-commutative spacetime coordinates. The Feynman diagrams of such theories are related via a duality transformation to spinfoam models. Although conceptually very deep, these results remain three-dimensional. The next step is to probe all possible extensions of these ideas to higher dimensions. Two ideas have recently been put forward. The first is to consider that fundamental matter is indeed pointlike and to study the coupling of worldlines to gravity by using the Cartan geometric framework [Wise] of the MacDowell-Mansouri formulation of gravity as a de Sitter gauge theory [MacDowell and Mansouri]. The second is to generalize the description of matter as topological defects of spacetime curvature to higher dimensions. This naturally leads to matter excitations supported by co-dimension two membranes [Baez and Perez], [Baez et al.]. Before studying the coupling of such sources to quantum gravity, one can consider, as a first step, the BF theory framework as an immediate generalization of the topological character of three-dimensional gravity to higher dimensions. This paper is dedicated to the second approach, namely the coupling of string-like sources to BF theory in four dimensions.
The starting point is the action written in [Baez and Perez], generating a theory of flat connections except at the location of two-dimensional surfaces, where the curvature picks up a singularity, or in other words, where the gauge degrees of freedom become dynamical. The goal of the paper is two-fold. Firstly, to acquire a physical intuition of the algebraic fields involved in the theory, which generalize the position and momentum Poincaré coordinates of the particle in three dimensions. Secondly, to provide a complete background independent quantization of the theory in four dimensions, following the work done in [Baez and Perez]. The organization of the paper is as follows. In section II, we study some classical solutions, guided by the three-dimensional example. We show that some specific solutions lead to the interpretation of rigid strings propagating on a flat spacetime. More generally, we prove that the solutions of the theory are in one-to-one correspondence with distributional solutions of general relativity. In section III, we propose a prescription for computing the physical inner product of the theory. This leads us to an interesting duality between the obtained transition amplitudes and Feynman diagrams coupled to three-dimensional gravity. We finally prove in section IV that the transition amplitudes only depend on the topology of the canonical manifold and of the spin network graphs.

II. CLASSICAL THEORY

A. Action principle and classical symmetries

Let G be a Lie group with Lie algebra g equipped with an Ad(G)-invariant, non-degenerate bilinear form noted 'tr' (e.g. the Killing form if G is semi-simple). Consider the principal bundle P with G as structure group and, as base manifold, a (d+1)-dimensional, compact, connected, oriented differential manifold M. We will assume that P is trivial, although this is not essential, and choose once and for all a global trivializing section. We will be interested in the following first order action principle, describing the interaction between closed membrane-like sources and BF theory [Baez and Perez]:

S[A, B; q, p] = S_BF[A, B] - ∫_W tr((B + d_A q) p). (1)

The action of free BF theory in d+1 dimensions is given by

S_BF[A, B] = (1/κ) ∫_M tr(B ∧ F[A]). (2)

Here, B is a g-valued (d-1)-form on M, F is the curvature of a g-valued one-form A, which is the pull-back to M by the global trivializing section of a connection on P, and κ ∈ R is a coupling constant. In the coupling term, W is the (d-2)-brane worldsheet defined by the embedding φ : E ⊂ R^{d-1} → M, d_A is the covariant derivative with respect to the connection A, q is a g-valued (d-2)-form on W and p is a g-valued function on W. The physical meaning of the matter variables p and q will be discussed in the following section. Essentially, p is the momentum density of the brane and q is the first integral of the (d-1)-volume element: the integral of a line and of a surface element in three and four dimensions (d = 2, 3) respectively. The equations of motion governing the dynamics of the theory are those of a topological field theory:

F[A] = κ p δ_W, (3)
d_A B = κ [p, q] δ_W, (4)
φ*(B + d_A q) = 0, (5)
d_A p|_W = 0. (6)

Here, δ_W is a distributional two-form, also called a current, which has support on the worldsheet W. It is defined such that, for all (d-1)-forms α,

∫_W α = ∫_M (α ∧ δ_W).
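As a concrete check of this defining property (an illustrative computation, assuming for simplicity that W is the (t, z)-plane in M = R^4 with its standard orientation), take

δ_W = δ²(x, y) dx ∧ dy.

Then, for d = 3 and any 2-form α, only the α_tz component survives the wedge product, and

∫_M α ∧ δ_W = ∫ dt dz dx dy α_tz(t, x, y, z) δ²(x, y) = ∫_W φ*α = ∫_W α,

up to orientation conventions, as required.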
The symbol φ* denotes the pull-back of forms on W by the embedding map φ. We can readily see that the above action describes a theory of local conical defects along brane-like (d-1)-submanifolds of M through the first equation. The second states that the obstruction to the vanishing of the torsion is measured by the commutator of p and q. The third equation is crucial: it relates the background field B to the dynamics of the brane. For instance, this equation describes the motion of the particle's position in 3d gravity [de Sousa Gerbert]. The last states that the momentum density is covariantly conserved along the worldsheet. It is in fact a simple consequence of equation (3) together with the Bianchi identity d_A F = 0. We will see how this is a sign of the reducibility of the constraints generated by the theory. The total action is invariant under the following (pull-back to M of) vertical automorphisms of P: for all g ∈ C∞(M, G),

B → B' = g B g^{-1}, A → A' = g A g^{-1} + g dg^{-1}, p → g p g^{-1}, q → g q g^{-1}, (7)

and under the 'topological', or reducible, transformations: for all η ∈ Ω^{d-2}(M, g),

B → B + d_A η, A → A, p → p, q → q - η, (8)

where Ω^p(M, g) is the space of g-valued p-forms on M.
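A one-line check of the invariance under the reducible transformations (8), using only the Bianchi identity (and assuming M closed, so that boundary terms can be dropped): the matter coupling is invariant identically, since

B + d_A q → (B + d_A η) + d_A(q - η) = B + d_A q,

while the BF term shifts by

ΔS_BF = (1/κ) ∫_M tr(d_A η ∧ F) = (1/κ) ∫_M d tr(η ∧ F) ± (1/κ) ∫_M tr(η ∧ d_A F) = 0,

by Stokes' theorem and d_A F = 0 (the relative sign is fixed by the degree of η).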
B. Physical interpretation: the flat solution

In this section, we discuss some particular solutions of the theory leading to an interpretation of matter propagating on flat backgrounds. We discuss the d = 2 and d = 3 cases, where the gauge degrees of freedom of BF theory become dynamical along one-dimensional worldlines and two-dimensional worldsheets respectively.

The point particle in 2 + 1 dimensions

We now restrict our attention to the d = 2 case, with structure group the isometry group G = SO(η) of the diagonal form η of a three-dimensional metric on M: η = (σ², +, +) with σ = {1, i} in respectively Riemannian (G = SO(3)) and Lorentzian (G = SO(1, 2)) signatures. We denote (π, V_η) the vector (adjoint) representation of so(η) = R{J_a}, a = 0, 1, 2, i.e., V_η = R³ and V_η = R^{1,2} in Riemannian and Lorentzian signatures respectively. The bilinear form 'tr' is defined such that tr(J_a J_b) = (1/2) η_ab. In this case, the free BF action (2) describes the dynamics of three-dimensional general relativity, where the B field plays the role of the triad e. The matter excitations are 0-branes, that is, particles, and the worldsheet W reduces to a one-dimensional worldline that we will note γ. The degrees of freedom of the particle are encoded in the algebraic variables q and p, which are both so(η)-valued functions with support on the worldline γ. Firstly, we consider the open subset U of M constructed as follows. Consider the three-ball B³ centered on a point x_0 of the worldline γ and call x and y the two punctures ∂B³ ∩ γ. Pick two non-intersecting paths γ_1 and γ_2 on ∂B³, both connecting x to y. The open region bounded by the portion of ∂B³ contained between the two paths and by two arbitrary non-intersecting disks contained in B³ and bounded by the loops γγ_1 and γγ_2 defines the open subset U ⊂ M. Next, we define the coordinate function X : M → V_η, mapping spacetime into the 'internal space' so(η), isomorphic, as a vector space, to its vector representation space V_η. The coordinates are chosen to be centered around a point x in M traversed by the worldline: X(x) = 0. Associated to the coordinate function X, there is a natural solution to the equations of motion (3)-(6) in U:

e = dX = δ, A = 0, q = -X|_γ, p = constant, (9)

where δ is the unit of End(T_pM, V_η), δ(v) = v for all v in T_pM and all p in U. The field configuration e = δ (together with the A = 0 solution) provides a natural notion of flat Riemannian or Minkowskian spacetime geometry via its relation to the spacetime metric g = 2 tr(e ⊗ e). This flat background is defined in terms of a special gauge (notice that one can make e equal to zero by a transformation of the form (8)). From now on, we will call such a gauge a flat gauge. The solution for q is obtained through equation (5), relating the background geometry to the geometry of the worldline. Here, we can readily see that q represents the particle's position X, first integral of the line element defined by the background geometry e. Below we show that equation (4) forces the worldline to be a straight line. Finally, p = constant trivially satisfies the conservation equation (6). In fact, the curvature equation of motion (3) constrains p to remain in a fixed adjoint orbit, so we can introduce a constant m ∈ R*_+ such that p = mv, with v ∈ so(η) such that tr v² = -σ². Consequently, p satisfies the mass shell constraint p² := tr(p²) = -σ²m² and acquires the interpretation of the particle's momentum. We can now relate the position q and momentum p, independent in the first order formulation, by virtue of (4). Indeed, the chosen flat geometry solution e = δ, A = 0 leads to an everywhere vanishing torsion d_A e. Hence, the commutator [p, q] = X × p, where × denotes the usual cross product on V_η, vanishes on the worldline. This vanishing of the relativistic angular momentum (which is conserved by virtue of equation (3)) implies, together with the flatness of the background fields, that the worldline γ of the particle defines a straight line passing through the origin and tangent to its momentum p. Equivalently, we can think of the momentum p as Hodge dual to a bivector *p, in which case the worldline is normal to the plane defined by *p. Note that translating γ off the origin, which requires the introduction of spacetime torsion, can be achieved by the gauge transformation q → q + C with C = constant, which leaves all the other fields invariant. In this way we conclude that the previous solution of our theory can be (locally) interpreted as the particle following a geodesic of flat spacetime. More formally, we can also recover the action of a test particle in flat spacetime by simply 'switching off' the interaction of the particle with gravity. This can be achieved by evaluating the action (1) on the flat solution and neglecting the interactions between geometry and matter, namely the equations of motion linking the background fields to the matter degrees of freedom (e.g. e = dX). This formal manipulation leads to the following Hamilton function

S[p, X, N] = ∫_γ tr(p Ẋ) + N(p² - m²), (10)

which is the standard first order action for a relativistic spinless particle.
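The conical nature of these solutions can be exhibited by a short, schematic holonomy computation (valid to first order in the size of a small disk D transverse to γ):

g_∂D = P exp(∮_∂D A) ≃ exp(∫_D F) = exp(κ p) = exp(κ m v),

so the holonomy around the worldline is generated by v, with a (deficit) angle proportional to κm: spacetime is flat away from γ, with the curvature singularity of equation (3) concentrated on the worldline.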
The string in 3 + 1 dimensions

We now focus on the four-dimensional (d = 3) extension of the above considerations. Here again we consider the isometry group G = SO(η) of a given four-dimensional metric structure η = (σ², +, +, +), in which case the value σ = 1 leads to the Riemannian group G = SO(4), while σ = i encodes a Lorentzian signature, G = SO(3, 1). As in three dimensions, we denote (π, V_η), with V_η = R{e_I}_I, I = 0, ..., 3, the vector representation of so(η) = R{J_ab}, a, b = 0, ..., 3. Finally, we choose the bilinear form 'tr' such that, for all a, b in so(η), it is associated to the trace tr(ab) = (1/2) a_IJ b^IJ in the vector representation. We are using the notation α^IJ = α^ab π(J_ab)^IJ := α^ab J_ab^IJ for the matrix elements of the image of an element α ∈ so(η) in End(V_η) under the vector representation. The dynamics of the theory is governed by the action (1), where the matter excitations are string-like and the worldsheet W is now a two-dimensional submanifold of the four-dimensional spacetime manifold M. The string degrees of freedom are described by an so(η)-valued one-form q and an so(η)-valued function p living on the worldsheet W. As before, we construct an open subset U ⊂ M by cutting out a section of the four-ball B^4, and define the coordinate function X : M → V_η, centered around a point x in W. Consider the following field configurations, which define a flat solution to the equations of motion (3)-(6) in U:

B = *(e ∧ e), with e = dX = δ, A = 0, q = -*X dX, p = constant, (11)

where the star '*' is the Hodge operator * : Ω^p(V_η) → Ω^{4-p}(V_η) acting on the internal space, (*α)_IJ = (1/2) ε_IJ^KL α_KL, with the totally antisymmetric tensor ε normalized such that ε_0123 = +1. The solution B = *(δ ∧ δ) (with A = 0) leads to a natural notion of flat Riemannian or Minkowski background geometry through the standard construction of a metric out of B when B is a simple bivector, B = *(e ∧ e) with e = δ. We can readily see that the q one-form is the first integral of the area element defined by the background field B. As in 3d, the equations of motion constrain p to remain in a fixed adjoint orbit, so that we can introduce a constant τ ∈ R*_+ such that p = τv, where v ∈ so(η) has a fixed norm, tr v² = -σ². We call τ the string tension, or mass per unit length, and p the momentum density, which satisfies a generalized mass shell constraint. The momentum density p is related to the q field by analysis of equation (4). The solution (B = *δ ∧ δ, A = 0) has zero torsion d_A B. Accordingly, the commutator [p, q] = [*X dX, p] vanishes on the worldsheet. This leads to the constraint X^I p_IJ = 0. Putting everything together, we see that the flat solution in the open subset U leads to the picture of a locally flat worldsheet (a locally straight, rigid string) in flat spacetime, dual, as a two-surface, to the momentum density bivector p (if p is simple, namely if it defines a two-plane). If we consider more general solutions admitting torsion, the plane can be translated off the origin. Indeed, equation (5) determines the field q in terms of the geometry of the B field up to the addition of an exact one-form β = dα, which encodes the translational information.
For instance, the translation X → X + C of the plane yields q → q - *C dX and consequently corresponds to a function α defined by dα = -*C dX. This potential is in turn determined by the torsion T = d_A B of the B field via equation (4). More general solutions can be found for arbitrary α's, as discussed below. Following the same path as in the particle case, we can 'turn off' the interaction between the topological BF background and the string by evaluating the action on the flat solution (this implies, here again, that we ignore the equations of motion of the coupled theory, i.e. the relation between the matter and geometrical degrees of freedom). We obtain the following Hamilton function, up to a constant:

S[p, X, N] = ∫_W tr(*p dX ∧ dX) + N(p² - τ²). (12)

This is the Polyakov action on a non-trivial background with metric G_µν = 0 and antisymmetric field b = v ∈ so(η). Now, the previous action leads to trivial equations of motion that are satisfied by arbitrary X (because p = constant, and so the Lagrangian is a total differential). This is to be expected: from the string theory viewpoint, this would be a charged string moving in a constant potential, so the field strength is zero. This seems in sharp contrast with the particle case, where the effective action leads to straight-line solutions. Here any string motion is allowed; however, from the point of view of the full theory, all these possibilities are pure gauge. The reason for this is that in 2+1 dimensions the flat gauge condition e = δ fixes the freedom (8) up to a global translation, and hence gauge considerations are not necessary in interpreting the effective action. In the string case, B = *(δ ∧ δ) only partially fixes the gauge, the remaining freedom being encoded in η = dα for any α.

C. Geometrical interpretation: cosmic strings and topological defects

The above discussion shows that particular solutions of the theory in a particular open subset lead to the standard propagation of matter degrees of freedom on a flat (or degenerate) background spacetime. In fact, we can go further in the physical interpretation by considering other solutions, defined everywhere, which are in one-to-one correspondence with solutions of four-dimensional general relativity in the presence of distributional matter. These solutions are called cosmic strings.

Cosmic strings

It is well known that the metric associated to a massive and spinning particle coupled to three-dimensional gravity is that of a locally flat spinning cone. The lift of this solution to 3+1 dimensions corresponds to the spacetime around an infinitely thin and long straight string (see for instance [Deser et al.] and references therein). Let us endow our spacetime manifold M with a Riemannian structure (M, g) and let x ∈ M label a point traversed by the string. We can choose as a basis of the tangent space T_xM the coordinate basis {∂_t, ∂_r, ∂_ϕ, ∂_z} associated to local cylindrical coordinates such that the string lies along the z axis and goes through the origin. The embedding of the string is given by φ(t, z) = (t, 0, 0, z). Let τ and s respectively denote the mass and intrinsic (spacetime) spin per unit length of the string. Note that τ is the string tension.
Solving Einstein's field equations for such a stationary string carrying the above mass and spin distribution produces a two-parameter (τ, s) family of solutions described by the following line element, written in the specified cylindrical coordinates:

ds² = g_µν dx^µ ⊗ dx^ν = σ²(dt + β dϕ)² + dr² + α² r² dϕ² + dz², (13)

where β = 4Gs, α = 1 - 4Gτ and G is the Newton constant. In fact, this family of metrics is the general solution to Einstein's equations describing a spacetime outside any matter distribution in a bounded region of the plane (r, ϕ) and having a cylindrical symmetry. Exploiting the absence of structure along the z axis, by simply suppressing the z direction, reduces the theory to that of a point particle coupled to gravity in 2+1 dimensions, where the location of the particle is given by the point where the string punctures the z = 0 plane. We will come across such a duality again in the quantization process of the next sections. The dual co-frame for the above metric is written

e^0 = dt + β dϕ, e^1 = cos ϕ dr - α r sin ϕ dϕ, e^2 = sin ϕ dr + α r cos ϕ dϕ, e^3 = dz, (14)

such that ds² = e^I ⊗ e^J η_IJ. If we assume that the connection A associated to the above metric is Riemannian, it is straightforward to calculate its components by exploiting Cartan's first structure equation (d_A e = 0). The result reads

A = A^IJ_µ σ_IJ dx^µ = 4Gτ σ_12 dϕ, (15)

where {σ_IJ}_{I,J} is a basis of Ω²(V_η) ≃ so(η). Using the distributional identity d dϕ = 2π δ²(r) dx dy (x = r cos ϕ, y = r sin ϕ, and dx dy is a wedge product), it is immediate to compute the torsion T = T^0 e_0 and the curvature F = F^12 σ_12 of the cosmic string induced metric:

T^0 = 8πGs δ²(r) dx dy, F^12 = 8πGτ δ²(r) dx dy. (16)

These equations state that the torsion and curvature associated to the cosmic string solution are zero everywhere except where the radial coordinate r vanishes, i.e. at the location of the string worldsheet lying in the zt plane. If we now focus on the spinless cosmic string case s = 0, we can establish a one-to-one correspondence between the above solutions of general relativity and the following solutions of BF theory coupled to string sources:

B_01 = sin ϕ dr dz + α r cos ϕ dϕ dz,
B_02 = -(cos ϕ dz dr - α r sin ϕ dz dϕ),
B_03 = α r dr dϕ,
B_12 = -σ² dz dt,
B_13 = -σ²(sin ϕ dt dr + α r cos ϕ dt dϕ),
B_23 = σ²(cos ϕ dt dr - α r sin ϕ dt dϕ),
A_12 = 4Gτ dϕ, q_12 = σ²(z dt - t dz), p_12 = τ, (17)

where only the non-vanishing components have been written and the coupling constant κ in (1) has been set to 8πG. In this way, solutions of our theory are in one-to-one correspondence with solutions of Einstein's equations. The converse is obviously not true, as our model does not allow for physical local excitations such as gravitational waves. However, augmenting the action (1) with a Plebanski term constraining the B field to be simple would lead, starting from the theory considered in this paper, to the full Einstein equations in the presence of distributional matter:

ε_IJKL e^J ∧ F^KL = 8πGτ ε_IJKL e^J J^KL_12 δ_W, (18)

where J^KL_12 = δ^[K_1 δ^L]_2.
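A quick consistency check of the conical geometry of (13) (a two-line computation on a constant (t, z) slice, in the spinless case and with α = 1 - 4Gτ): a circle of proper radius r centered on the string has circumference

C(r) = ∫_0^{2π} α r dϕ = 2π α r,

so the ratio C(r)/r = 2πα is smaller than 2π and the transverse plane is a cone, with deficit angle

δ = 2π(1 - α) = 8πGτ,

which vanishes, as it should, when the string tension τ goes to zero.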
Many-strings-solution

One can also construct a many-string solution by 'superimposing' solutions of the previous kind at different locations. Here we explicitly show this for two strings; the example will illustrate the geometric meaning of torsion in our model. Assume that we have two worldsheets W_1 and W_2, respectively traversing the points p_1 and p_2. We will work with two open patches U_i ⊂ M, i = 1, 2, such that p_1 and p_2 both belong to the overlap U_1 ∩ U_2. The cylindrical coordinates (t_i, r_i, ϕ_i, z_i) associated to the charts (U_i ⊂ M, X^µ_i : U_i → R^4) are chosen such that the strings lie along the z axis, are separated by a distance x_0 in the x-direction, and are such that r_i(p_i) = 0. The coordinate transformation on the overlap U_1 ∩ U_2 is immediate: t_i = t, x_2 = x_1 + x_0, y_i = y and z_i = z, for i = 1, 2. The two embeddings are consequently given by φ_1(t, z) = (t, 0, 0, z) and φ_2(t, z) = (t, x_0, 0, z). Our notation is such that a field φ expressed in the coordinate system associated to the open subset U_i is noted φ_{U_i}. Our strategy to construct the two-string solution is the following. We need to realize the fact that, regarded from a particular coordinate frame, one of the two strings is translated off the origin. We will choose to observe the translation of W_2 from coordinate frame 1. Now, the study of the flat solution discussed in the previous section showed that translations of the worldsheet are related to the torsion T of the B field. In particular, we know how to recognize a translation of the form X → X + C, with C = x_0 e_1: it corresponds to a torsion of the form T = κ[p, dα], with dα = -*C dX. Hence, the two-string solution is based on the tetrad field which leads to the desired value of the B field torsion, taking into account the separation of the two worldsheets. For simplicity, here we assume that the two strings are parallel, hence that they have the same momentum density,

p_{U_1} = p_{U_2} = τ σ_12, (19)

and accordingly create the same curvature singularity in both coordinate frames 1 and 2. The associated connection yields

A_{U_i} = 4Gτ dϕ_i σ_12, for all i = 1, 2. (20)

The dual co-frame e_{U_i} is defined by the following components:

e^0_{U_i} = dt,
e^1_{U_i} = cos ϕ_i dr_i - α r_i sin ϕ_i dϕ_i,
e^2_{U_i} = sin ϕ_i dr_i + (α r_i cos ϕ_i + δ_{i2} (κ/4π) τ x_0) dϕ_i,
e^3_{U_i} = dz. (21)

By integrating the B = *e ∧ e solution with e given by (21), we can now calculate the q field, up to the addition of an exact form β = dα:

q_{U_i} = σ²(z dt - t dz) σ_12 + dα^IJ_i σ_IJ. (22)

The potential α is derived from the equation of motion (4), relating the commutator of p and q to the B field torsion three-form T = d_A B = *d_A e ∧ e + *e ∧ d_A e:

T_{U_i} = δ_{i2} (1/2) κ τ x_0 δ(r) (dx_2 dy_2 dz σ_01 + σ² dt dx_2 dy_2 σ_13). (23)

This torsion indeed corresponds to a two-string solution, since it yields the desired value -*C dX for the form dα:

dα_i = δ_{i2} (1/2) x_0 (dz σ_02 + σ² dt σ_23). (24)

One can add more than one string in a similar fashion, leading to multiple cosmic string solutions. It is interesting to notice that the torsion of the multiple-string solution is related to the distance x_0 separating the worldsheets. Of course, this is a distance defined in the flat gauge where B = *δ ∧ δ. This concludes our discussion of the physical aspects of the action (1) of string-like sources coupled to BF theory. We now turn to the quantization of the theory.

III. QUANTUM THEORY

For the entire quantization process to be well defined, we will restrict our attention to the case where the symmetry group G is compact. For instance, we can think of G as being SO(4). We will also concentrate on the four-dimensional theory and set the coupling constant κ to one. Also, to rely on the canonical analysis performed in [Baez and Perez], we will work with a slightly different theory where the momentum p is replaced by the string field λ ∈ C∞(W, G).
This new field enters the action only through the conjugation τ Ad_λ(v) of a fixed unit element v in g, and the theory is consequently defined by the action (1) with p set to τ λvλ^{-1}. The field λ transforms as λ → gλ under gauge transformations of the type (7), and the theory acquires a new invariance under the subgroup H ⊆ G generated by v. The link between the two theories is established by the fact that, as remarked before, the equation of motion F = pδ_W implies that p remains in the same conjugacy class along the worldsheet. Here, we choose to label the class by τv and to consider λ as a dynamical field instead of p.

A. Canonical setting

As a preliminary step, we assume that the spacetime manifold M is diffeomorphic to the canonical split R × Σ, where R represents time and Σ is the canonical spatial hypersurface. The intersection of Σ with the string worldsheet W forms a one-dimensional manifold S, which we will assume to be closed. We choose local coordinates (t, x^a) for which Σ is given as the hypersurface {t = 0}. By definition, x^a, a = 1, 2, 3, are local coordinates on Σ. We also choose local coordinates (t, s) on the 2-dimensional worldsheet W, where s ∈ [0, 2π] is a coordinate along the one-dimensional string S. We will note x_S = φ|_Σ the embedding of the string S in Σ. We pick a basis {X_i}, i = 1, ..., dim(g), of the real Lie algebra g, raise and lower indices with the inner product 'tr', and define structure constants by [X_i, X_j] = f^k_ij X_k. Next, we choose a polarization on the phase space such that the degrees of freedom are encoded in the configuration variable (A, λ) ∈ A × Λ, defined by the couples formed by (the pull-back to Σ of) connections and string momenta. The canonical analysis of the coupled action (1) shows that the Legendre transform from configuration space to phase space is singular: the system is constrained. Essentially, the constraints are first class and are given by the following set of equations:

G_i := D_a E^a_i + f^k_ij q^j_a π^a_k δ_S ≈ 0, (25)
H^a_i := ε^abc F_ibc - π^a_i δ_S ≈ 0. (26)

Here, E^a_i = ε^abc B_ibc is the momentum canonically conjugate to A^i_a, and π^a_i = ∂_s x^a_S p_i is conjugate to q and satisfies D_a π^a_i = 0, where p_i = tr(X_i p) denotes the components of the Lie algebra element p in the chosen basis of g. The symbols D and F denote respectively the covariant derivative and the curvature of the spatial connection A. The first constraint (25), the Gauss law, generates kinematical gauge transformations, while the second (26), the curvature constraint, contains the dynamical data of the theory. To quantize the theory, one can follow Dirac's program of quantization of constrained systems, which consists in first quantizing the system before imposing the constraints at the quantum level. The idea is to construct an algebra A of basic observables, that is, simple phase space functions which admit unambiguous quantum analogues, which is then represented unitarily, as an involutive and unital ⋆-algebra of abstract operators, on an unphysical or auxiliary Hilbert space H. Since the classical constraints are simple functionals of the basic observables, they can be unambiguously quantized, that is, promoted to self-adjoint operators on H.
The kernel of these constraint operators is spanned by the physical states of the theory. The structure of the constraint algebra enables us to solve the constraints in different steps. One can first solve the Gauss law to obtain a quantum kinematical setting, and then impose the curvature constraint on the kinematical states to fully solve the dynamical sector of the theory. In [Baez and Perez], the kinematical subset of the constraints is solved and a kinematical Hilbert space H_kin of solutions to the quantum Gauss law is defined. We first review the kinematical setting of [Baez and Perez] before exploring the dynamical sector of the theory.

B. Quantum kinematics: the Gauss law

The Hilbert space H_kin of solutions to the Gauss constraint is spanned by so-called string spin network states. String spin network states are the gauge invariant elements of the auxiliary Hilbert space H of cylindrical functions, which is constructed as follows.

Auxiliary Hilbert space H

Firstly, we define the canonical BF states. Let Γ ⊂ Σ denote an open graph, that is, a collection of one-dimensional oriented submanifolds of Σ called edges e_Γ, meeting, if at all, only at their endpoints, called vertices v_Γ. The vertices forming the boundary of a given edge e_Γ are called the source s(e_Γ) and target t(e_Γ) vertices, depending on the orientation of the edge. We will call n ≡ n_Γ the cardinality of the set of edges {e_Γ} of Γ. Let φ : G^×n → C denote a continuous complex-valued function on G^×n and A(e_Γ) ≡ g_{e_Γ} = P exp(∫_{e_Γ} A) denote the holonomy of the connection A along the edge e_Γ ∈ Γ. The cylindrical function associated to the graph Γ and to the function φ is a complex-valued map Ψ_{Γ,φ} : A → C defined by

Ψ_{Γ,φ}[A] = φ(A(e^1_Γ), ..., A(e^n_Γ)), (27)

for all A in A. The space of such functions is an abelian ⋆-algebra denoted Cyl_{BF,Γ}, where the ⋆-structure is simply given by complex conjugation on C. The algebra of all cylindrical functions will be called Cyl_BF = ∪_Γ Cyl_{BF,Γ}. Next, we define string states. Since the configuration variable is a zero-form, we expect to consider wave functions associated to points x ∈ Σ. Accordingly, we define the ⋆-algebra Cyl_S of cylindrical functions on the space Λ of λ fields as follows. An element Φ_{X,f} of Cyl_S is a continuous map Φ_{X,f} : Λ → C, where X = {x_1, ..., x_n} is a finite set of points in S and f : G^×n → C is a complex-valued function on the Cartesian product G^n, defined by

Φ_{X,f}[λ] = f(λ(x_1), ..., λ(x_n)). (28)

Both algebras Cyl_BF and Cyl_S, regarded as vector spaces, can be given a pre-Hilbert space structure. Fixing a graph Γ ⊂ Σ with n edges and a set of m points X ⊂ Σ, we define the scalar products on Cyl_{BF,Γ} and Cyl_{S,X} respectively as

< Ψ'_{Γ,φ}, Ψ_{Γ,ψ} > = ∫_{G^×n} φ̄ ψ, (29)

and

< Φ'_{X,f}, Φ_{X,g} > = ∫_{G^×m} f̄ g, (30)

where the integration over the group is realized through the Haar measure on G. These scalar products can be extended to the whole of Cyl_BF (resp. Cyl_S), i.e. to cylindrical functions defined on different graphs (resp. sets of points), by redefining a larger graph (resp. set of points) containing the two different ones. The resulting measure, precisely constructed via projective techniques, is the AL measure.
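The orthogonality relations underlying the inner products (29) and (30) are the standard Peter-Weyl relations for a compact group (stated here for completeness; normalization conventions may vary): for unitary irreducible representations ρ, ρ' of G,

∫_G dg ρ(g)_ij ρ'(g)*_kl = (1/dim V_ρ) δ_{ρρ'} δ_ik δ_jl,

so that the rescaled matrix elements √(dim V_ρ) ρ(g)_ij form an orthonormal basis of L²(G). This is what makes cylindrical functions labeled by different irreducible representations orthogonal, and it underlies the decompositions of the next subsection.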
The string Hilbert space was in fact introduced by Thiemann as a model for the coupling of Higgs fields to loop quantum gravity [Thiemann] via point holonomies. Completing these two pre-Hilbert spaces in the respective norms induced by the AL measures, one obtains the BF and string auxiliary Hilbert spaces, respectively denoted H_BF and H_S. Tensoring the two Hilbert spaces yields the auxiliary Hilbert space H = H_BF ⊗ H_S of the coupled system. Using harmonic analysis on G, one can define orthonormal bases in H_BF and H_S, the elements of which are respectively called (open) spin networks and n-point spin states. Using the isomorphism of Hilbert spaces L²(G^×n) ≃ ⊗_{e_Γ} L²(G_{e_Γ}), any cylindrical function Ψ_{Γ,φ} in H_BF decomposes, according to the Peter-Weyl theorem, into the basis of matrix elements of the unitary, irreducible representations of G:

Ψ_{Γ,φ}[A] = Σ_{ρ_1,...,ρ_n} φ_{ρ_1,...,ρ_n} ρ_1[A(e^1_Γ)] ⊗ ... ⊗ ρ_n[A(e^n_Γ)], (31)

where ρ : G → Aut(V_ρ) denotes the unitary, irreducible representation of G acting on the vector space V_ρ and the mode φ_{ρ_1,...,ρ_n} := ⊗^n_{i=1} φ_{ρ_i} is an element of ⊗^n_{i=1}(V_{ρ_i} ⊗ V*_{ρ_i}). The functions appearing in the above sum are called open spin network states. Equivalently, the string cylindrical functions decompose as

Φ_{X,f}[λ] = Σ_{ρ_1,...,ρ_m} f_{ρ_1,...,ρ_m} ρ_1[λ(x_1)] ⊗ ... ⊗ ρ_m[λ(x_m)], (32)

and a given element in the sum is called an n-point spin state.

String spin network states

One can now compute a unitary action of the gauge group C∞(Σ, G) on H by using the transformation properties of the holonomies and of the string fields λ → gλ under the gauge group, and derive the subset of G-invariant states, that is, the states solving the Gauss constraint. A vectorial basis of the vector space of gauge invariant states can be constructed, in analogy with 3d quantum gravity coupled to point particles [Noui and Perez], by tensoring the open spin network basis elements with the n-point spin states. For such a tensorial element to be G-invariant, the following consistency conditions are required. The graph Γ of the open spin network has a set of vertices V_Γ including the points {x_1, ..., x_n} forming the set X. The vertices of Γ are coloured with a chosen element ι_v of an orthonormal basis of the vector space of intertwining operators

Hom_G( ⊗_{e_Γ | t(e_Γ) = v_Γ} V_{ρ_{e_Γ}} , ⊗_{e_Γ | s(e_Γ) = v_Γ} V_{ρ_{e_Γ}} ), (33)

if the vertex v_Γ is not on the string. If a vertex v_Γ is on the string, it coincides with some point x_k ∈ X. In this case, we choose an element ι_{v_Γ} in an orthonormal basis of

Hom_G( ⊗_{e_Γ | t(e_Γ) = v_Γ} V_{ρ_{e_Γ}} , ( ⊗_{e_Γ | s(e_Γ) = v_Γ} V_{ρ_{e_Γ}} ) ⊗ V_{ρ_k} ), (34)

where V_{ρ_k} is the representation space associated to the point x_k. By finally implementing the invariance under the subgroup H ⊆ G generated by v on the n-point spin states, i.e. by choosing the modes to be H-invariant, one obtains a vectorial basis of the kinematical Hilbert space H_kin, where the inner product is that of (BF and string) cylindrical functions. The elements of this basis are called string spin network states and are of the form (see fig. 1)

Ψ_{Γ,X}[A, λ] := (Ψ_Γ ⊗ Φ_X)[A, λ] = ( ⊗_{e_Γ ∈ Γ} ρ_{e_Γ}[A(e_Γ)] ⊗_{x ∈ X} ρ_x[λ(x)] ) · ( ⊗_{v_Γ ∈ Γ} ι_{v_Γ} ), (35)

where the dot '·' denotes tensor index contraction.
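As the simplest illustration of these states (an example assuming a closed graph with no string points), let Γ be a single closed loop ℓ colored by an irreducible representation ρ, with the trace providing the (unique, up to normalization) intertwiner:

Ψ_{ℓ,ρ}[A] = Tr ρ[A(ℓ)] = Tr ρ[P exp(∮_ℓ A)],

which is invariant under the gauge transformations (7), since the holonomy of a closed loop transforms by conjugation at its base point and the trace is conjugation invariant.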
This concludes the quantum kinematical framework of strings coupled to BF theory developed in [Baez and Perez]. We now solve the curvature constraint and compute the full physical Hilbert space H_phys.

C. Quantum dynamics: the curvature constraint

In this section, we explore the dynamics of the theory by constructing the physical Hilbert space H_phys of solutions to the last constraint of the system, that is, the curvature or Hamiltonian constraint (26). Note that the physical states that we construct below are also solutions to the constraints of four-dimensional quantum gravity coupled to distributional matter, as in the classical case. We first underline a crucial property of the curvature constraint of (d+1)-dimensional BF theory with d > 2, namely its reducible character, which has to be taken into account during the quantization process. We then proceed (as in [Noui and Perez]) à la Rovelli and Reisenberger [Reisenberger], [Rovelli] by building and regularizing a generalized projection operator mapping the kinematical states into the kernel of the curvature constraint operator. This procedure automatically provides the vector space of solutions with a physical inner product and a Hilbert space structure, and leads to an interesting duality with the coupling of Feynman loops to 3d gravity [Barrett], [Freidel and Louapre] from the covariant perspective.

The reducibility of the curvature constraint

A naive imposition of the curvature constraint on the kinematical states leads to severe divergences. This is due to the fact that there is a redundancy in the implementation of the constraint: the components of the curvature constraint of (d+1)-dimensional BF theory are not linearly independent (they are said to be reducible) if d > 2. The same is true for the theory coupled to sources under study here. As an illustration of this fact, let us simply count the degrees of freedom of source-free (τ = 0) BF theory in d+1 dimensions. The configuration variable of the theory, A^i_a, is a g-valued connection one-form, thus containing d × dim(g) independent components for each point of Σ. In turn, the number of constraints is given by the dim(g) components of the Gauss law (25) plus the d × dim(g) components of the Hamiltonian constraint (26) for each space point x ∈ Σ. Hence, we have N_C = (d+1) × dim(g) constraints per space point. This leads to a negative number of degrees of freedom. What is happening? The point is that the N_C constraints are not independent: the Bianchi identity (D^(2) F = d^(2) F + [A, F] = 0, where the superscript (p) indicates the degree of the form acted upon) implies the reducibility equation

D_a H^a_i = 0. (36)
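For concreteness, the counting in the four-dimensional case (d = 3, G = SO(4), so dim g = 6) goes as follows:

configuration variables: #(A^i_a) = d dim(g) = 3 × 6 = 18 per point of Σ;
constraints: N_C = (d+1) dim(g) = 4 × 6 = 24, so that 18 - 24 = -6 < 0 (naive count);
reducibility: N_R = dim(g) = 6 identities D_a H^a_i = 0;
independent constraints: N_I = N_C - N_R = 18, hence 18 - 18 = 0 local degrees of freedom,

as expected for a topological theory.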
In the presence of sources, the reducibility equation remains valid because the curvature constraint F = p δ_S together with the Bianchi identity automatically implements the momentum density conservation Dp = 0. We will come back to this reducibility of the matter sector of the theory. The system is said to be (d−2)-th stage reducible in the first-class curvature constraints. This designation is due to the fact that the operator d^{(2)} is itself reducible, since d^{(3)} d^{(2)} ≡ 0. In turn, d^{(3)} is reducible, and so on. The chain stops after precisely d−2 steps, since the action of the de Rham differential d^{(d)} on d-forms is trivial. Accordingly, the N_R = dim(g) reducibility equations (36) imply a linear relation between the components of the curvature constraint. The number N_I of independent constraints is thus given by N_C − N_R = d × dim(g). Using N_I to count the number of degrees of freedom leads to the correct answer, namely zero degrees of freedom for topological BF theory. The standard procedure to quantize systems with such reducible constraints consists in selecting a subset H|_irr of linearly independent constraints and imposing solely this subset on the auxiliary states of H.
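The constraint counting described above is elementary arithmetic; the following minimal sketch (an illustration, not part of the paper's formalism) makes the naive and corrected counts explicit for the case d = 3 studied here.

```python
# Minimal sketch (illustration only): constraint counting for source-free BF
# theory in d+1 dimensions, with d > 2 so that the reducibility (36) applies.
# Each first-class constraint removes one configuration dof per space point.
def bf_dof_per_point(d, dim_g):
    config = d * dim_g          # components of the g-valued connection A_a^i
    n_c = (d + 1) * dim_g       # naive count: Gauss law + curvature constraint
    n_r = dim_g                 # reducibility relations D_a H^a_i = 0, eq. (36)
    n_i = n_c - n_r             # independent first-class constraints
    return config - n_c, config - n_i

naive, corrected = bf_dof_per_point(d=3, dim_g=3)   # 3+1 dimensions, g = su(2)
print(naive, corrected)   # -3 and 0: only the corrected count gives the
                          # expected result for a topological theory.
```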
Keeping this issue in mind, we now proceed to the definition and regularization of the generalized projector on the physical states and construct the Hilbert space H_phys of solutions to all of the constraints of the theory.

Physical projector: formal definition - the particle/string duality

We start by introducing the rigging map

η_phys : Cyl → Cyl*, Ψ ↦ δ(Ĥ|_irr) Ψ,   (37)

where Cyl* is the (algebraic) dual vector space of Cyl = Cyl_BF ⊗ Cyl_S. The range of the rigging map η_phys formally lies in the kernel of the Hamiltonian constraint of the coupled model. The power of the rigging-map technology is that it automatically provides the vector space η_phys(Cyl) = Cyl*_phys ⊂ Cyl* of solutions to the Hamiltonian constraint with a pre-Hilbert space structure, encoded in the physical inner product

⟨η_phys(Ψ_1), η_phys(Ψ_2)⟩_phys = [η_phys(Ψ_2)](Ψ_1) := ⟨Ψ_1, δ(Ĥ|_irr) Ψ_2⟩,   (38)

for any two string spin network states Ψ_1, Ψ_2 ∈ H_kin. The scalar product used in the last equality is the kinematical inner product (29), [START_REF] Perez | On the regularization ambiguities in loop quantum gravity[END_REF]. The physical Hilbert space H_phys is then obtained by the Cauchy completion of the quotient of H_kin by the Gel'fand ideal defined by the set of zero-norm states. Accordingly, the construction of the physical inner product can be achieved explicitly if we can rigorously make sense of the formal expression δ(Ĥ|_irr). This task is greatly simplified by virtue of the following duality. Indeed, we can re-express the above formal quantity as

δ(Ĥ|_irr) = ∏_{x∈Σ} δ(Ĥ|_irr(x)) = ∫_N Dμ[N] exp( i ∫_Σ tr(N ∧ Ĥ) ).   (39)

Here, N ∋ N is the space of regular g-valued one-forms on Σ, and Dμ[N] denotes a formal functional measure on N imposing constraints on the test one-form N so as to remove the redundant delta functions on H. Simply developing the explicit expression of the exponent in (39) leads to

H[N] = ∫_Σ tr(N ∧ H) = ∫_Σ tr(N ∧ F) + ∫_S tr(N p) = S^{3d}_{BF+part}[N, A],   (40)

which, in the case where G = SO(η), with η a three-dimensional metric, is the action of 3d gravity coupled to a (spinless) point particle [START_REF] Ph | On spin and (quantum) gravity in (2+1)-dimensions[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF]: the role of the triad is played by N, while the mass and the worldline of the particle are respectively given by the string tension τ (hidden in the string variable p = τ Ad_λ(v)) and S. Finally, the role of the Cartan subalgebra generator J_0 is played by v (also hidden in p). This relation is reminiscent of the link between cosmic strings in 4d and point particles in three-dimensional gravity discussed in the first sections. More generally, we have in fact the following duality:

P(Ω)^{d+1}_{BF+(d−2)-branes} = Z^{d}_{BF+(d−3)-branes},   (41)

where Ω denotes the ((d+1)-dimensional) no-spin-network vacuum state, Z is the path integral of BF theory in d spacetime dimensions, and we have introduced the linear form⁶ P on Cyl ⊂ A defined by

∀Ψ ∈ Cyl, P(Ψ) = ⟨η_phys(Ω), η_phys(Ψ)⟩_phys = ⟨Ω, δ(Ĥ|_irr) Ψ⟩.   (42)

Furthermore, when d = 3, the formal functional measure Dμ[N] introduced above to take into account the reducible character of the four-dimensional theory corresponds to the Faddeev-Popov determinant gauge-fixing the translational topological symmetry of the 3d theory; the reducibility of the 4d theory is mapped via this duality onto the gauge redundancies of the three-dimensional theory. Now, because of the above duality (41), regularizing the formal expression (39) is, roughly speaking, equivalent to regularizing the path integral for 3d gravity coupled to point particles, up to the insertion of spin network observables. The physical inner product of our theory will therefore be related to amplitudes computed in [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF], [START_REF] Oriti | Group field theory formulation of 3-D quantum gravity coupled to matter fields[END_REF], although they have here a quite different physical interpretation. Following [START_REF] Noui | Three-dimensional loop quantum gravity: coupling to point particles[END_REF][START_REF] Noui | Three-dimensional loop quantum gravity: Physical scalar product and spin foam models[END_REF], we will regularize the Hamiltonian constraint at the classical level by defining a lattice-like discretization of Σ and by constructing holonomies around the elementary plaquettes of the discretization as a first-order approximation of the curvature. However, there are two major obstacles to the direct and naive implementation of such a program. The first is the reducible character of the curvature constraint, and the second is the presence of spin network edges ending on the string. We will use the above duality to treat the first issue, while the second will be dealt with by introducing an appropriate regularization scheme.

Physical projector: regularization

Throughout this section, we will concentrate on the definition of the linear form (42) evaluated on the most general string spin network state Ψ ∈ H_kin, since it contains all the necessary information to compute transition amplitudes between any two arbitrary elements of the kinematical Hilbert space.
We will consider a string spin network basis element Ψ of H_kin defined on the (open) graph Γ. The set of endpoints of the graph living on the string S will be denoted X. We follow the natural generalization of the regularization defined in [START_REF] Noui | Three-dimensional loop quantum gravity: coupling to point particles[END_REF] for 2+1 gravity coupled to point particles. In order to deal with the curvature singularity at the string location, we thicken the smooth curve S into a smooth, non-intersecting tube T_η of torus topology and constant radius η > 0 centered on the string S. The radius η is defined in terms of the local, arbitrary coordinate system. If the string is disconnected, we blow up each string component in a similar fashion. Next, we remove the tube T_η from the spatial manifold Σ. We are left with a three-manifold with torus boundary, Σ \ T_η, denoted Σ_η. For instance, if Σ has the topology of S^3, we know by Heegaard splitting that the resulting manifold has the topology of a solid torus whose boundary surface is the Heegaard surface defined by the string tube. In this way we construct a new three-manifold with boundary, where each boundary component is in one-to-one correspondence with a string component and has the topology of a torus. Finally, the open graph Γ is embedded in the bulk manifold and its endpoints lie on the boundary torus. The next step is to choose a simplicial decomposition⁷ of Σ_η, or more generally any cellular decomposition, i.e., a homeomorphism φ : Σ_η → ∆ from our spatial bulk manifold Σ_η to a cellular complex ∆. The discretized manifold ∆ ≡ ∆_ε depends on a parameter ε ∈ R⁺ controlling the characteristic (coordinate) 'length scale' of the cellular complex. We will see that, by virtue of the three-dimensional equivalence between the smooth, topological and piecewise-linear (PL) categories, together with the background-independent nature of our theory, no physical quantity will depend on this extra parameter. We will denote by ∆_k the k-cells of ∆. To make contact with the literature, we will in fact work with the dual cellular decomposition ∆*. The dual cellular complex ∆* is obtained from ∆ by placing a vertex v in the center of each three-cell ∆_3, linking adjacent vertices with edges e topologically dual to the two-cells ∆_2 of ∆, and defining the dual faces f, punctured by the one-cells ∆_1, as closed sequences of dual edges e. The intersection between ∆* and the boundary tube T_η induces a closed, oriented (trivalent if ∆ is simplicial) graph, which is the one-skeleton of the cellular complex ∂∆* = (v, e, f) dual to the cellular decomposition ∂∆ of the 2d boundary T_η induced by the bulk complex ∆. We will denote by F the set of faces f of the cellular pair (∆*, ∂∆*), and require that each dual face of F admits an orientation (induced by the orientation of Σ_η) and a distinguished vertex. Finally, among all possible cellular decompositions, we select a subsector of two-complexes adapted to the graph Γ. Namely, we consider dual cellular complexes (∆*, ∂∆*) whose one-skeletons admit the graph Γ as a subcomplex. In particular, the open edges of Γ end on the vertices v of the boundary two-complex ∂∆*.
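To make the duality ∆ → ∆* concrete, here is a hedged combinatorial sketch building the dual one-skeleton of a toy two-tetrahedron complex. The data layout (tuples of vertex labels) is our own choice for illustration and is not taken from the paper.

```python
# Minimal sketch (illustration only): dual complex of a tetrahedral decomposition.
# Dual vertices sit in three-cells, dual edges cross shared triangles (two-cells),
# and dual faces wrap around the one-cells of Delta.
from itertools import combinations

# A toy bulk complex: two tetrahedra glued along the triangle (1, 2, 3).
tets = [frozenset({0, 1, 2, 3}), frozenset({1, 2, 3, 4})]

dual_vertices = {t: i for i, t in enumerate(tets)}     # one per three-cell

# One dual edge per interior triangle, connecting the two adjacent tetrahedra.
triangles = {}
for t in tets:
    for tri in combinations(sorted(t), 3):
        triangles.setdefault(frozenset(tri), []).append(t)

dual_edges = [(dual_vertices[a], dual_vertices[b])
              for tri, adj in triangles.items() if len(adj) == 2
              for a, b in [adj]]

# Dual faces are indexed by the one-cells: collect the tetrahedra sharing an edge.
dual_faces = {}
for t in tets:
    for edge in combinations(sorted(t), 2):
        dual_faces.setdefault(edge, set()).add(dual_vertices[t])

print("dual edges:", dual_edges)                  # [(0, 1)]: one shared triangle
print("dual vertices around the one-cell (1, 2):", dual_faces[(1, 2)])
```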
The meaning of the curvature constraint F = p δ_S is that the physical states have support on the space of connections which are flat everywhere except at the location of the string, where they are singular. In other words, the holonomy g_γ = A(γ) of an infinitesimal loop γ circling an empty, simply connected region yields the identity, while the holonomy g_γ circling the string around a point x ∈ S equals exp p(x), the image of the fixed group element u = e^{τv} under the inner automorphism Ad_λ : G → G; u ↦ λ(x) u λ^{-1}(x), with the string field λ evaluated at the point x. The integration over the string field λ appearing in the computation of the physical inner product then forces the holonomy of the connection around the string to lie in the same conjugacy class Cl(u) as the group element u. To impose the F = 0 part of the curvature constraint, we will require that the holonomy

A : F → G, ∂f ↦ g_f = ∏_{e⊂∂f} A(e),   (43)

around all the oriented boundaries of the faces f of F be equal to the identity⁸. Each such flat connection defines a monodromy representation of the fundamental group π_1(Σ_η) in G. Concretely, the holonomies are computed by taking the edges in the boundary ∂f of the face f in cyclic order, following the chosen orientation, starting from the distinguished vertex. Reversing the orientation maps the associated group element to its inverse. It is crucial here to take into account the reducible character of the curvature constraint to avoid divergences due to redundancies in the implementation of the constraints (i.e., coming from an incorrect product of redundant delta functions). As discussed above, the reducibility equation induced by the Bianchi identity implies that the components of the curvature are not independent. In the discretized framework, we know [START_REF] Kawamoto | Lattice Chern-Simons gravity via Ponzano-Regge model[END_REF] that for all sets of faces f forming a closed surface S with the topology of a two-sphere,

∏_{f∈S} g_f = 𝟙,   (44)

modulo orientation and some possible conjugations depending on the base points of the holonomies. Accordingly, there is, for each three-cell of the dual cellular complex ∆*, one group element g_f among the finite number of group variables attached to the faces bounding the bubble which is completely determined by the others. It follows that imposing g_f = 𝟙 on all faces of the cellular complex ∆* is redundant and would create divergences in the computation of the physical inner product. The proper way [START_REF] Freidel | Spin networks for noncompact groups[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF] to address the reducibility issue, or over-determination of the holonomy variables, is to pick a maximal tree T of the cellular decomposition ∆ and impose F = 0 only on the faces of ∆* that are not dual to a one-simplex contained in T. A tree T of a cellular decomposition ∆ is a subcomplex of the one-skeleton of ∆ which never closes to form a loop. A tree T of ∆ is said to be maximal if it is connected and goes through all vertices of ∆. The fact that T is maximal implies that one only removes redundant flatness constraints, thereby taking consistently into account the reducibility of the flatness constraints.
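The selection of a maximal tree is a standard spanning-tree computation. The sketch below (an illustration under our own graph encoding) picks one by breadth-first search and lists the edges whose dual faces keep the flatness constraint.

```python
# Minimal sketch (illustration only): a maximal tree T of the one-skeleton of
# Delta, found by breadth-first search.  The flatness delta functions are then
# kept only on faces dual to edges outside T, removing the redundancy (44).
from collections import deque

def maximal_tree(vertices, edges):
    """Return a maximal (spanning) tree of a connected graph as a set of edges."""
    adjacency = {v: [] for v in vertices}
    for a, b in edges:
        adjacency[a].append(b)
        adjacency[b].append(a)
    root = next(iter(adjacency))
    seen, tree, queue = {root}, set(), deque([root])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in seen:                    # adding (v, w) closes no loop
                seen.add(w)
                tree.add(tuple(sorted((v, w))))
                queue.append(w)
    return tree

vertices = list(range(5))
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (2, 4)]
tree = maximal_tree(vertices, edges)
kept = [e for e in edges if tuple(sorted(e)) not in tree]
print("maximal tree:", sorted(tree))
print("faces dual to these edges keep g_f = 1:", kept)
```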
Finally, we need to impose the F = p part of the curvature constraint. The idea is to require that the holonomy g_γ around any loop γ in ∂∆* based at a point x, belonging to the homology class of loops of the boundary torus T_η normal to the string S (these loops are the ones wrapping around the cycle of the torus circling the string, i.e., the non-contractible loops in Σ_η), be equal to the image of the group element u under the adjoint automorphism λ(x) u λ(x)^{-1}, i.e., belong to Cl(u). Intuitively, this could be achieved by picking a finite set {γ_i}_i of such homologous paths all along the tube T_η and imposing g_{γ_i} = Ad_{λ_i}(u), with the field λ_i evaluated at the base point of the holonomy g_{γ_i}. However, here again, care must be taken in addressing the reducibility issue induced by the equation D_a H^a_i = 0. In the presence of matter, the reducibility implies that the curvature constraint F = p δ_S together with the Bianchi identity DF = 0 induces the momentum density conservation Dp = 0. In our setting, this is reflected in the fact that the holonomies g_{γ_1} and g_{γ_2} associated to two distinct homologous loops γ_1 and γ_2 circling the string satisfy the property Cl(g_{γ_1}) = Cl(g_{γ_2}) on shell. This is due to the Bianchi identity in the interior of the cylindrically shaped section of the torus T_η bounded by γ_1 and γ_2, and to the flatness constraint F = 0 imposing the holonomies around all the dual faces on the boundary of the cylindrical section to be trivial (see e.g. (44)). Accordingly, imposing g_{γ_1} ∈ Cl(u) naturally implies that g_{γ_2} belongs to the conjugacy class labeled by u. In other words, choosing one arbitrary closed path circling the string, say γ_1 based at a point x_1, and imposing F = p only along that path naturally propagates via the Bianchi identity and the flatness constraint, and forces the holonomy g_{γ_2} around any other homologous loop γ_2 based at a point x_2 to be of the form g_{γ_2} = h u h^{-1} ∈ Cl(u) for some h ∈ G. This shows that imposing F = p more than once, e.g. also around γ_2, would lead to divergences which can be traced back to the reducibility of the constraints. However, for the prescription to be complete⁹, it is not sufficient to have g_{γ_2} in Cl(u); we need to recover the fact that the holonomy along the loop γ_2 is the conjugation of the group element u by the dynamical field λ evaluated at the base point x_2 of the holonomy, namely λ_2 = λ(x_2). This suggests an identification of the group element h conjugating u with the value of the string field λ_2, which leads to a relation between the holonomy g_β of a path β connecting the points x_1 and x_2 and the values of the string field λ at x_1 and x_2:

g_β = λ_{s(β)} λ_{t(β)}^{-1},

stating that λ is covariantly constant along the string. We have seen that the Bianchi identity together with the full curvature constraint induces the momentum density conservation. Our treatment of the reducibility issue consists in truncating the curvature constraint, i.e., in imposing F = p only once, and using the Bianchi identity supplemented with the momentum conservation Dp = 0 to recover the truncated components of the curvature constraint without any loss of information. Accordingly, the full prescription is defined via a choice of a closed, oriented path α and a finite set C of open, oriented paths β in ∂∆*. The closed path α circles the string (it is non-contractible in the three-manifold Σ_η). This loop is based at a point x ∈ X lying on a dual vertex v ∈ ∂∆* supporting a spin network endpoint.
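The statement that only the conjugacy class Cl(u) of the string holonomy is fixed can be checked concretely for G = SU(2), where conjugacy classes are labeled by the trace. The snippet below is an illustration only, with toy values of the tension τ and the Cartan generator v.

```python
# Minimal sketch (illustration only): Ad_lambda(u) stays in the class Cl(u).
# For SU(2), conjugacy classes are labeled by the trace, so conjugating
# u = exp(tau v) by any lambda in G leaves the trace unchanged.
import numpy as np

def random_su2(rng):
    """A Haar-random SU(2) matrix from a uniformly random unit quaternion."""
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

rng = np.random.default_rng(0)
tau = 0.7                                              # toy string tension
u = np.diag([np.exp(1j*tau/2), np.exp(-1j*tau/2)])     # u = exp(tau v), v = i sigma_3/2

lam = random_su2(rng)                                  # the string field at a point x
g = lam @ u @ lam.conj().T                             # Ad_lambda(u); lam^{-1} = lam^dagger
print(np.trace(u).real, np.trace(g).real)              # identical: g remains in Cl(u)
```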
The open paths β ∈ C are defined as follows. Let γ ∈ ∂∆* be an oriented loop based at x, non-homologous to α (it winds along the cycle of T_η which is contractible in Σ_η) and connecting all the spin network endpoints x_k ∈ X. Define the open path γ̄ by erasing from γ the segment supported by the edge ē such that x = t(ē). The paths β of C are 1d submanifolds of γ̄, each connecting x to a vertex v traversed by γ̄. If the graph Γ is closed, one reiterates the same prescription, simply dropping the requirements on the spin network endpoints x_k ∈ X; in particular, the base point x is then chosen arbitrarily. We then impose g_α = exp p, with p evaluated at the point x, and g_β = λ_{s(β)} λ_{t(β)}^{-1}, where x = s(β), on each open path β of C. To summarize, we choose a regulator R_(η,ε) = (T_η, (∆_ε, ∂∆_ε), T, α, C) consisting of a thickening T_η of the string, a cellular decomposition (∆_ε, ∂∆_ε) of the manifold (Σ_η, T_η) adapted to the graph Γ, a maximal tree T of ∆, a closed path α in ∂∆*, and a collection C of open paths β in ∂∆*. The associated regularized physical scalar product is then given by

P[Ψ] := lim_{η,ε→0} P[R_(η,ε); Ψ],   (45)

with

P[R_(η,ε); Ψ] = ⟨ Ω, [ ∏_{f∉T} δ(g_f) ∏_α δ(g_α exp p) ∏_{β∈C} δ(g_β λ_{t(β)} λ_{s(β)}^{-1}) ] Ψ ⟩,   (46)

where the product over α takes into account the possibly multiple connected components of the string. It is important to point out that, in addition to the expression of the generalized projection above, we can use the regularization to give an explicit expression of the regularized constraint corresponding to H[N] in equation (40). With the notation introduced so far, the regulated quantum curvature constraint becomes

Ĥ_{η,ε}[N] = ∑_{f∈∆*} Tr[N(x_f) g_f] + ∑_α Tr[N(x_p) g_α exp p],   (47)

where x_f is an arbitrary point in the interior of the face f and x_p an arbitrary point on the string dual to the loop α (the sum over α runs over all the string components). It is easy to check that the regulated quantum curvature constraints satisfy an off-shell anomaly-freeness condition. For instance,

U[g] Ĥ_{η,ε}[N] U†[g] = Ĥ_{η,ε}[g N g^{-1}],   (48)

where U[g] is the unitary generator of G-gauge transformations. Therefore the regulator does not break the algebraic structure of the classical constraints, and the quantization is consistent. The quantum constraint operator is defined as the limit where η and ε are taken to zero. Instead of doing this in detail, we shall simply concentrate on the regulator independence of the physical inner product in the following section.
The inner product is computed using the AL measure, that is, by Haar integration along all edges of the graph (∆*, ∂∆*) ⊃ Γ and along all endpoints x_k ∈ X ⊂ ∂∆* of the graph Γ. We can now promote the classical delta function on the lattice phase space to a multiplication operator on H_kin by using its expansion in irreducible unitary representations:

∀g ∈ G, δ(g) = ∑_ρ dim(ρ) χ_ρ(g),   (49)

where χ_ρ ≡ tr ∘ ρ : G → C is the character of the representation ρ. Each χ_ρ is then promoted to a self-adjoint Wilson loop operator χ̂_ρ on H_kin creating loops in the ρ representation around each plaquette defined by our regularization, the face bounded by the loop α being charged. To summarize, we have, for each face f of the regularization, a sum over the unitary irreducible representations ρ_f of G, a weight given by the dimension dim(ρ_f) of the representation summed over, and a loop around the oriented boundary of the face in the representation ρ_f. See for instance [START_REF] Noui | Three-dimensional loop quantum gravity: Physical scalar product and spin foam models[END_REF], [START_REF] Thiemann | Quantum spin dynamics (QSD)[END_REF], [21] for details. This concludes our regularization of the transition amplitudes of string-like sources coupled to four-dimensional BF theory. Now, the physical inner product constructed above depends manifestly on the regulating structure R_(η,ε). To complete the procedure, we have to compute the limit in which the regulating parameters η and ε go to zero.

IV. REGULATOR INDEPENDENCE

Throughout this section, we will suppose that the cellular complex (∆, ∂∆) is simplicial, i.e., a triangulation of (Σ_η, T_η). We will also make the simplifying assumption that G = SU(2), whose unitary irreducible representations will be denoted (π^j, V_j), with the spin j in N/2. The generalization to arbitrary cellular decompositions and arbitrary compact Lie groups can be achieved with the same techniques developed below. If ∆ is now a regular triangulation, it cannot be adapted to an arbitrary graph Γ: it can only be adapted to graphs with three- or four-valent vertices. However, we can always decompose an n-valent intertwiner, with n > 3, into three-valent intertwiners by using repeatedly the complete reducibility of the tensor product of two representations¹⁰. We will therefore decompose all n-valent intertwiners ι_v with n > 3 into three-valent vertices. We now show how to remove the regulator R_(η,ε) from the regularized scalar product (45). Instead of computing the η, ε → 0 limit, we demonstrate that the transition amplitudes are in fact independent of the regulator. To prove such a statement, we show that the expression (45) does not depend on any component of the regulator. The transition amplitudes are proven to be invariant under any finite combination of elementary moves called regulator moves R : R_(η,ε) → R′_(η,ε′), where each regulator move is a combination of elementary moves acting on the components of the regulator:

• Bulk and boundary (adapted) Pachner moves (∆, ∂∆) → (∆′, ∂∆′),
• Elementary maximal tree moves T → T′,
• Elementary curve moves γ → γ′.

We will see how the invariance under the above moves also implies invariance under dilatation/contraction of the string thickening radius η.
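As a numerical aside on the character expansion (49) used in the regularization above, the sketch below (an illustration for G = SU(2), with an arbitrary smooth test class function) checks that pairing the truncated sum ∑_j dim(j) χ_j with a class function converges to its evaluation at the identity, which is the defining property of the group delta function.

```python
# Minimal sketch (illustration only): truncated version of (49) on SU(2).
# <delta, f>_Haar should tend to f(identity) as the spin cutoff grows.
import numpy as np

theta = np.linspace(1e-6, 2*np.pi - 1e-6, 40001)
dtheta = theta[1] - theta[0]
haar = np.sin(theta/2)**2 / np.pi                    # reduced Haar class measure

chi = lambda j, th: np.sin((2*j + 1)*th/2) / np.sin(th/2)
f = np.exp(np.cos(theta))                            # arbitrary smooth class function
target = np.exp(1.0)                                 # its value at the identity

for j_max in (2, 10, 40):
    spins = np.arange(0, j_max + 0.5, 0.5)
    total = sum((2*j + 1) * np.sum(chi(j, theta) * f * haar) * dtheta
                for j in spins)
    print(f"j_max = {j_max:>3}: {total:.4f}   (target {target:.4f})")
# The truncated sum converges (slowly) to f(identity), as expected from (49).
```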
To conclude from the above elementary moves that the amplitudes are topological invariants, we will furthermore prove that the transition amplitudes are invariant under elementary moves acting on the string spin network graph, G : Γ → Γ′, which map ambient-isotopic PL-graphs into ambient-isotopic PL-graphs. We now detail the regulator and graph topological moves.

A. Elementary regulator moves

The regulator moves are finite combinations of the following elementary moves acting on the simplicial complex (∆, ∂∆), the maximal tree T and the paths α and β ∈ C.

Adapted Pachner moves

The first invariance property that we will need is that under moves acting on the simplicial pair (∆, ∂∆), leaving the one-complex Γ invariant and mapping a Γ-adapted triangulation into a PL-homeomorphic Γ-adapted simplicial structure. We call these moves adapted Pachner moves. There are two types of moves to be considered: the bistellar moves [START_REF] Pachner | Ein Henkeltheorem f 'ur geschlossene semilineare Mannigfaltigkeiten[END_REF], acting on the bulk triangulation ∆ and leaving the boundary simplicial structure unchanged, and the elementary shellings [START_REF] Pachner | homeomorphic manifolds are equivalent by elementary shellings[END_REF], deforming the boundary triangulation ∂∆ with an induced action in the bulk.

a. Bistellar moves. There are four bistellar moves in three dimensions: the (1, 4), the (2, 3) and their inverses. In the first, one creates four tetrahedra out of one by placing a point p in the interior of the original tetrahedron, whose vertices are labeled p_i, i = 1, ..., 4, and by adding the four edges (p, p_i), the six triangles (p, p_i, p_j)_{i≠j}, and the four tetrahedra (p, p_i, p_j, p_k)_{i≠j≠k}. The (2, 3) move consists in splitting two tetrahedra into three: one replaces two tetrahedra (u, p_1, p_2, p_3) and (d, p_1, p_2, p_3) (u and d respectively refer to 'up' and 'down') glued along the (p_1, p_2, p_3) triangle with the three tetrahedra (u, d, p_i, p_j)_{i≠j}. The dual moves, that is, the associated moves in the dual triangulation, follow immediately. See FIG. 2.
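A quick sanity check on the bulk bistellar moves just described: they alter the simplex counts but cannot change the Euler characteristic, consistent with their relating PL-homeomorphic triangulations. The tabulated count changes below follow directly from the move descriptions above; the code itself is only an illustration.

```python
# Minimal sketch (illustration only): bistellar moves preserve the Euler
# characteristic chi = V - E + F - T of a three-dimensional triangulation.
def delta_chi(dv, de, df, dt):
    """Change in chi = V - E + F - T for given simplex-count changes."""
    return dv - de + df - dt

moves = {
    "(1,4) bistellar": (1, 4, 6, 3),   # new point p: +4 edges, +6 triangles, 1 -> 4 tets
    "(2,3) bistellar": (0, 1, 2, 1),   # new edge (u,d): net +2 triangles, 2 -> 3 tets
}
for name, deltas in moves.items():
    print(name, "-> delta chi =", delta_chi(*deltas))
# Both print 0, as required for moves relating PL-homeomorphic triangulations.
```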
b. Elementary shellings. Since the manifold (Σ_η, T_η) has non-empty boundary, extra topological transformations have to be taken into account to prove discretization independence. These operations, called elementary shellings, involve the cancellation of one 3-simplex at a time in a given triangulation (∆, ∂∆). In order to be deleted, the tetrahedron must have some of its two-dimensional faces lying in the boundary ∂∆. The idea is to remove three-simplices admitting boundary components in such a way that the boundary triangulation admits, as new triangles after the move, the faces along which the given tetrahedron was glued to the bulk simplices. Moreover, for each elementary shelling there exists an inverse move which corresponds to the attachment of a new three-simplex to a suitable component of ∂∆. These moves correspond to bistellar moves on the boundary ∂∆, and there are accordingly three distinct moves for a three-manifold with boundary: the (3, 1), its inverse and the (2, 2) shellings, where the numbers (p, q) here correspond to the number of two-simplices of the given tetrahedron lying on the boundary triangulation. In the first, one considers a tetrahedron admitting three faces lying in ∂∆ and erases it, such that the remaining boundary component is the unique triangle which did not belong to the boundary before the move. The inverse move follows immediately. The (2, 2) shelling consists in removing a three-simplex intersecting the boundary along two of its triangles such that, after the move, ∂∆ contains the two remaining faces of the given tetrahedron. These shellings and the associated boundary bistellars are depicted in FIG. 3. The subset of bistellar moves and shellings which map Γ-adapted triangulations into Γ-adapted triangulations will be called adapted Pachner moves, and, considering a local simplicial structure T_k = ∪_{n=1}^k ∆^n_3, k = 1, ..., 4, an adapted (p, q) Pachner move T_p → T_q will be denoted P_(p,q), or more generally P. Any two PL-homeomorphic, Γ-adapted triangulations (∆, ∂∆) of the PL-pair (Σ_η, T_η) are related by a finite sequence of such moves. Here it is important to take into account the PL-embedding of the string spin network graph Γ in the (dual) triangulation (∆, ∂∆). We will call Γ_k = Γ ∩ T*_k the restrictions of the graph Γ to the local simplex configurations T_k appearing in the moves. If the graph Γ_k is not the null graph, we will consider that it is open and does not contain any loop; if this were not the case, the set of adapted moves would reduce to the identity move, under which the transition amplitudes are obviously invariant. Hence, Γ_k, if non-trivial, can only be an edge (or more generally a collection of edges) or a (three-valent) vertex. The associated string spin network functional Ψ_{Γ_k} will be represented by a group function φ_k, which is the constant map φ_k = 1 if Γ_k is the null graph. We will make sure to check that, under a (p, q) Pachner move, φ_k transforms as φ_p → φ_q.

Maximal tree moves

It is also necessary to define topological moves for the trees [START_REF] Freidel | Spin networks for noncompact groups[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited II: Equivalence with Chern-Simons[END_REF]. Any two homologous trees T_1 and T_2 are related by a finite sequence of the following elementary tree moves T : T_1 → T_2.

Definition 1 (Tree move) Considering a vertex ∆_0 belonging to a tree T, choose a pair of edges ∆_1, ∆′_1 in ∆ touching the vertex ∆_0, such that ∆_1 is in T, ∆′_1 is not in T, and such that ∆′_1 combined with the other edges of T does not form a loop. The move T consists in erasing the edge ∆_1 from T and replacing it by ∆′_1.

[Figure: the tree move exchanging the tree edge ∆_1 for the non-tree edge ∆′_1 at the vertex ∆_0.]

There is another operation on trees that we need to define. When acting on the simplicial complex (∆, ∂∆) with a bistellar move or a shelling, one can possibly map (∆, ∂∆) into a simplicial complex (∆′, ∂∆′) with a different number of vertices. Hence, a maximal tree T of ∆ is not necessarily a maximal tree of ∆′: the Pachner moves have a residual action on the trees. This leads us to define the notion of maximal tree extension (or reduction) accompanying Pachner moves which modify the number of vertices of the associated simplicial complex.

Definition 2 (Tree extension or reduction) An extended (or reduced) tree T, associated to a Pachner move P : ∆ → ∆′ locally modifying the number of vertices of a simplicial complex ∆, is a maximal tree of ∆′ obtained from a maximal tree T of ∆ by adding (or removing) the appropriate number of edges to T so as to transform T into a maximal tree T_P of ∆′.
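The tree moves of Definition 1 are easy to experiment with on a small graph. The following sketch (our own encoding, for illustration only) performs one such exchange and checks that the result is again a maximal (spanning) tree.

```python
# Minimal sketch (illustration only): a tree move as in Definition 1, checked
# with a union-find test that the moved edge set is still a maximal tree.
def is_maximal_tree(vertices, tree_edges):
    """True if the edge set is a spanning tree: no loops, touches every vertex."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in tree_edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                         # this edge would close a loop
        parent[ra] = rb
    return len(tree_edges) == len(parent) - 1    # maximal: spans all vertices

vertices = range(4)
tree = {(0, 1), (1, 2), (2, 3)}
# Tree move at the vertex 1: erase (1, 2) from T and insert the non-tree edge
# (1, 3), which touches 1 and closes no loop with the remaining tree edges.
moved = (tree - {(1, 2)}) | {(1, 3)}
print(is_maximal_tree(vertices, tree), is_maximal_tree(vertices, moved))  # True True
```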
Obviously, there is an ambiguity in the operation of tree extension or reduction. However, because the regularized physical inner product will turn out to be independent of the choice of maximal tree, no trace of this ambiguity will remain in the computation of the transition amplitudes.

Curve moves

Finally, we define the PL analogue of the Reidemeister moves, which were in fact a crucial ingredient in the proof of Reidemeister's theorem. Any two ambient-isotopic PL embeddings γ_1 and γ_2 of a curve γ in the dual complex (∆*, ∂∆*) are related by a finite sequence of the following elementary topological moves C : γ_1 → γ_2.

Definition 3 (Curve move) Consider a PL path γ lying along the p boundary edges e_1, e_2, ..., e_p of a two-cell f of the dual pair (∆*, ∂∆*), where f has no other edges or vertices traversed by the curve γ. Erase the path γ along the edges e_1, e_2, ..., e_p and add a new curve along the complement ∂f \ {e_1, e_2, ..., e_p} of the erased segment in f.

[Figure: the curve move sliding a path from the edges e across the face f to the complement ∂f \ e.]

We have now defined all of the elementary regulator moves. To summarize, an elementary regulator move R : R_(η,ε) → R′_(η,ε′) is a finite combination of all the above moves:

R[R_(η,ε)] = (T_η, P[(∆, ∂∆)], T[T_P], C[α], C[C]).

Proving the invariance of the regularized physical inner product (45) under all of these elementary regulator moves is equivalent to showing the independence of the regulating structure R_(η,ε). Note, however, that we have not included contractions or dilatations of the radius η of the string tube T_η among the regulator moves. This is because the invariance under shellings implies the invariance under increase or decrease of η. Indeed, the bistellars and shellings are the simplicial analogues of the action of the homeomorphisms Homeo[(Σ_η, T_η)]. In particular, the topological group Homeo[(Σ_η, T_η)] contains transformations continuously deforming the boundary T_η, for instance maps decreasing or increasing the (non-contractible) radius η > 0 of the boundary torus T_η. Hence, showing the invariance under elementary shellings is sufficient to prove the independence of the string thickening radius, and the moves defined above are sufficient to conclude on the regulator independence of the regularized physical inner product. To push the result further and conclude on the topological invariance of the transition amplitudes, we need the extra ingredients defined in the following section.

B. String spin network graph moves

We now introduce the following elementary moves, respectively acting on the edges and vertices of the open graph Γ. All ambient-isotopic PL embeddings of the one-complex Γ are related by a finite sequence of the following elementary moves, denoted G.

Definition 4 (Edge move) An edge move is a curve move applied to an edge e_Γ of the graph Γ.

Note that these moves also apply to the open edges of the graph Γ. However, there exist other moves which displace the endpoints.

Definition 5 (Endpoint move) Considering an open string spin network edge e_Γ ending on the point x_k ∈ X supported by a dual vertex v of ∂∆*, such that its neighbouring vertex v′ not touched by e_Γ belongs to ∂∆*, an endpoint move consists in adding to e_Γ a section connecting v to v′.

[Figure: the endpoint move extending the edge e_Γ from the vertex v to the neighbouring vertex v′ across the face f.]

We also need similar moves for the vertices.

Definition 6 (Vertex translation) Let v_Γ denote a three-valent spin network vertex sitting on the vertex v of the dual complex (∆*, ∂∆*).
Choose one edge e_Γ among the three edges emerging from v_Γ and call v′ the dual vertex adjacent to v which is traversed by e_Γ. Call e, e′ ⊂ (∆*, ∂∆*) the dual edges locally supporting e_Γ, i.e., such that ∂e = {v, v′} and v′ = e_Γ ∩ (e ∩ e′). The move consists in translating the vertex v_Γ along e from v to v′. This is achieved by choosing one dual face sharing the dual edge e and not containing the dual edge e′, and acting upon it with the edge move.

[Figure: the vertex translation sliding v_Γ along the dual edge e from v to v′.]

Note that the use of rectangular faces in the above picture is only for clarity; the move is defined for faces of arbitrary shape. It is important to remark that the above moves respect the topological structure of the embedding, because no discontinuous transformations are allowed and because the number and nature of the crossings are preserved: the faces used to define the moves are required to have empty intersections with the string or graph apart from the specified ones. The combination of the adapted Pachner moves and the spin network moves are the simplicial analogues of the action of the homeomorphisms Homeo[(Σ_η, T_η)] on the triple ((Σ_η, T_η), Γ).

C. Invariance theorem

We can now prove the following theorem.

Theorem 1 (Invariance theorem) Let Ψ_Γ denote a string spin network element of a given basis of H_kin defined with respect to the one-complex Γ. Choose a regulator R_(η,ε) = (T_η, (∆_ε, ∂∆_ε), T, α, C) consisting of a thickening T_η of the string, a cellular decomposition (∆_ε, ∂∆_ε) of the manifold (Σ_η, T_η) adapted to the graph Γ, a maximal tree T of ∆, a closed path α in ∂∆*, and a collection C of open paths β in ∂∆*. Let R : R_(η,ε) → R′_(η,ε′) (resp. G : Γ → Γ′) denote an elementary regulator move (resp. a string spin network move). The evaluated linear form (46) is invariant under the action of R and G:

P[R_(η,ε); Ψ_Γ] = P[R[R_(η,ε)]; Ψ_Γ] = P[R_(η,ε); Ψ_{G(Γ)}].   (50)

Proof. We proceed by separately showing the invariance under each elementary regulator move, before proving the invariance under graph moves.

• Invariance under maximal tree moves. Here, we simply adapt the proof of invariance under maximal tree moves written in [START_REF] Freidel | Ponzano-Regge model revisited II: Equivalence with Chern-Simons[END_REF]. Firstly, we need to endow the tree T on the left-hand side of the move with a partial order. To this aim, we pick a distinguished vertex r of T, chosen to be the other vertex of the edge ∆′_1. The rooted tree (T, r) thus acquires a partial order: a vertex ∆′_0 of (T, r) is under a vertex ∆_0, written ∆′_0 ≤ ∆_0, if it lies on the unique path connecting r to ∆_0. We can now define the tree T_{∆_0} as the subgraph of T connecting all the vertices above ∆_0: ∆_0 is the root of T_{∆_0}. The second ingredient that we need is the notion of the Bianchi identity (44) applied to trees. Indeed, if T is a tree of the (regular) simplicial complex ∆, its tubular neighborhood has the topology of a 3-ball and its boundary has the topology of a 2-sphere. This surface S can be built as the union of the faces f dual to the edges ∆_1 of ∆ touching the vertices of T without belonging to T. Hence, applying the Bianchi identity to the tree T_{∆_0} yields

g_{f′_0} = ( ∏_{f∈S_1} g_f ) g_{f_0} ( ∏_{f∈S_2} g_f ).   (51)

Here, f_0 and f′_0 are the faces dual to the segments ∆_0 and ∆′_0 (note that ∆_0 does not belong to T_{∆_0}).
The sets S_1, S_2 are the sets of faces dual to the segments ∆_1 touching the vertices of T_{∆_0} without belonging to T_{∆_0}, and which are neither f_0 nor f′_0. The presence of two different sets S_1 and S_2 simply takes into account the arbitrary positioning of the group element g_{f_0} within the product over all faces. As usual, the group elements are defined up to orientation and conjugation. Next, we apply a delta function to both sides of the above equation and multiply the result as follows:

∏_{f∉T, f≠f_0,f′_0} δ(g_f) δ(g_{f′_0}) = ∏_{f∉T, f≠f_0,f′_0} δ(g_f) δ( (∏_{f∈S_1} g_f) g_{f_0} (∏_{f∈S_2} g_f) ),   (52)

whence ∏_{f∉T} δ(g_f) = ∏_{f∉T′} δ(g_f). In the second step, we have simply used the delta functions multiplying the expression to set the group elements associated to the sets S_1 and S_2 to the identity (the faces of S_1 and S_2 are dual to segments not belonging to T). One can then check that the various steps of the proof remain valid if the boundaries of the dual faces carry string spin networks. This shows the invariance of the regularized inner product (46) under maximal tree moves.

• Invariance under adapted Pachner moves. To prove the invariance under Pachner moves, we introduce a simplifying lemma [START_REF] Oeckl | Generalized lattice gauge theory, spin foams and state sum invariants[END_REF], [START_REF] Girelli | Spin foam diagrammatics and topological invariance[END_REF].

Lemma 1 (Gauge fixing identity) To each vertex of the dual triangulation ∆* are associated four group elements {g_a}_{a=1,...,4}, six unitary irreducible representations {j_ab}_{a<b=1,...,4} of G and a string spin network function φ_1({g_a}_{a=1,...,4}). If φ_1 is the constant map φ_1 = 1, or depends on its group arguments only through monomial combinations {g_a g_b}_{a≠b} of degree two, then the following identity holds:

∏_{b>a=1}^4 ∫_G dg_a π^{j_ab}(g_a g_b) φ_1({g_d}_d) = ∏_{b>a=1}^4 ∫_G dg_a δ(g_c) π^{j_ab}(g_a g_b) φ_1({g_d}_d),   (53)

for c = 1, 2, 3 or 4.

Proof of Lemma 1. The above equality is proven straightforwardly by using the invariance of the Haar measure and performing the change of variables γ_cb = g_c g_b for c < b (resp. γ_bc = g_b g_c for c > b) in the left-hand side. This translation is always possible since the group function φ is either the constant map or depends on the group elements only through monomials of degree two.

Let us comment here on the validity of the hypothesis made on the spin network function φ_1 associated to the graph Γ_1 = Γ ∩ T_1, with T_1 = ∆_3, in the above lemma. In fact, φ_1 necessarily depends locally on combinations of the form g_a g_b if the graph Γ_1 is not the null graph. Indeed, Γ_1 can either be a collection of edges, in which case this requirement simply states that the edges are open, or a vertex, in which case one can always use the invariance of the associated intertwining operator to satisfy the desired assumption. Hence, this requirement is always locally satisfied. We can now show the invariance under bistellar moves and shellings.
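The gauge-fixing identity of Lemma 1 can be probed numerically in a scalar (character) version. The Monte Carlo sketch below is an illustration only: it uses the orientation convention g_a g_b^{-1}, for which the simultaneous right translation g_a → g_a h is manifestly a symmetry, and it puts the spin-1 character on every pair so that the integral is non-zero; neither choice is taken from the paper.

```python
# Minimal sketch (illustration only): inserting delta(g_c) leaves the Haar
# integral of a right-translation-invariant function unchanged, the mechanism
# behind the gauge-fixing identity (53).
import numpy as np

def random_su2(n, rng):
    """n Haar-random SU(2) matrices from uniformly random unit quaternions."""
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b, c, d = q.T
    return np.stack([np.stack([a + 1j*b, c + 1j*d], -1),
                     np.stack([-c + 1j*d, a - 1j*b], -1)], -2)

def integrand(g):
    """Product over the six pairs of chi_1(g_a g_b^{-1}), with chi_1 = chi_{1/2}^2 - 1."""
    out = np.ones(g.shape[0])
    for a, b in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]:
        m = g[:, a] @ np.conj(np.swapaxes(g[:, b], -1, -2))   # g_a g_b^{-1}
        t = np.real(np.trace(m, axis1=-2, axis2=-1))
        out *= t**2 - 1                                       # spin-1 character
    return out

rng = np.random.default_rng(1)
n = 200_000
g = random_su2(4*n, rng).reshape(n, 4, 2, 2)
lhs = integrand(g).mean()          # all four group elements integrated
g[:, 3] = np.eye(2)                # insert delta(g_4): gauge-fix one vertex
rhs = integrand(g).mean()
print(f"unfixed {lhs:.3f} vs gauge-fixed {rhs:.3f}")   # agree within MC error
```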
- Bistellar moves:

* The (4, 1) move. Consider the configuration T_4 of four three-simplices in (∆, ∂∆) (FIG. 2). Since the amplitudes do not depend on the maximal tree T of ∆, we are free to choose it. The simplest choice consists in a maximal tree T whose intersection T_4 with the simplex configuration reduces to the four external vertices and a single one-simplex touching the central vertex. We work in the dual picture and label the four external dual edges from one to four. The face dual to the internal tree segment is chosen to be the face 142. We denote by g_a and h_ab, a, b = 1, ..., 4, the group elements¹¹ associated to the external and internal dual edges respectively, while the representations assigned to the dual faces are denoted j_ab. The general PL string spin network state restricted to the configuration T*_4 is denoted φ_4({g_a}_a, {h_ab}_{a<b}). Obviously, φ_4 is not a function of all of its ten arguments (otherwise it would contain a loop), but it can generally depend on any one of these ten group elements, as suggested by the notation. The regularized physical inner product (46) restricted to these four simplices yields

∫_{G^10} dg_1 dg_2 dg_3 dg_4 dh_12 dh_13 dh_14 dh_23 dh_24 dh_34 π^{j_12}(g_1 h_12 g_2) π^{j_13}(g_1 h_13 g_3) π^{j_14}(g_1 h_14 g_4) π^{j_23}(g_2 h_23 g_3) π^{j_24}(g_2 h_24 g_4) π^{j_34}(g_3 h_34 g_4) δ(h_12 h_23 h_31) δ(h_13 h_34 h_41) δ(h_23 h_34 h_24) φ_4({g_a}_a, {h_ab}_{a<b}),   (54)

where we have omitted the sum over representations weighted by the associated dimensions. We start by implementing Lemma 1 at the vertices 2, 3 and 4 to eliminate the three variables h_1b, b ≠ 1. We obtain

∫_{G^7} dg_1 dg_2 dg_3 dg_4 dh_23 dh_24 dh_34 π^{j_12}(g_1 g_2) π^{j_13}(g_1 g_3) π^{j_14}(g_1 g_4) π^{j_23}(g_2 h_23 g_3) π^{j_24}(g_2 h_24 g_4) π^{j_34}(g_3 h_34 g_4) δ(h_23) δ(h_34) δ(h_23 h_34 h_24) φ({g_a}_a, {h_23, h_34, h_24}) = ∫_{G^4} dg_1 dg_2 dg_3 dg_4 π^{j_12}(g_1 g_2) π^{j_13}(g_1 g_3) π^{j_14}(g_1 g_4) π^{j_23}(g_2 g_3) π^{j_24}(g_2 g_4) π^{j_34}(g_3 g_4) φ_1({g_a}_a),   (55)

where we have integrated over the delta functions to eliminate the interior variables in the second step. The right-hand side of the above equality corresponds to the one-simplex configuration of the (4, 1) move, with the associated maximal tree reduction (the obvious removal of the internal tree segment; T_1 is given by the four vertices of the resulting tetrahedron). In other words, we have just proven the invariance under the transformation P_(4,1) : (T_4, T_4) → (T_1, T′_1), with T_k = T ∩ T_k and T′ = T_{P_(4,1)}.

* The (3, 2) move. Here, we consider the configuration T_3 of three three-simplices (see FIG. 2) and choose a tree T intersecting T_3 only on its five vertices. Concentrating on the dual graph, we label the three vertices from one to three and respectively denote by g^α_a, h_ab and j^{αβ}_ab, a, b = 1, ..., 3, α, β = 1, 2, the external and internal group elements, and the representation labels. The associated string spin network state is called φ_3. The transition amplitude, restricted to these three simplices, yields (omitting the sum over representations and the associated dimensions)

∫_{G^9} dg^1_1 dg^2_1 dg^1_2 dg^2_2 dg^1_3 dg^2_3 dh_12 dh_13 dh_23 π^{j^{11}_12}(g^1_1 h_12 g^1_2) π^{j^{11}_13}(g^1_1 h_13 g^1_3) π^{j^{11}_23}(g^1_2 h_23 g^1_3) π^{j^{22}_12}(g^2_1 h_12 g^2_2) π^{j^{22}_13}(g^2_1 h_13 g^2_3) π^{j^{22}_23}(g^2_2 h_23 g^2_3) π^{j^{12}_11}(g^1_1 g^2_1) π^{j^{12}_22}(g^1_2 g^2_2) π^{j^{12}_33}(g^1_3 g^2_3) δ(h_12 h_23 h_13) φ_3({g^α_a}_a, {h_ab}_{a<b}).   (56)

Using the gauge-fixing identity at the vertices 2 and 3 to eliminate the variables h_1a, a ≠ 1, and integrating over the remaining delta function, we obtain

∫_{G^7} dg^1_1 dg^2_1 dg^1_2 dg^2_2 dg^1_3 dg^2_3 dh π^{j^{12}_11}(g^1_1 h g^2_1) π^{j^{12}_22}(g^1_2 h g^2_2) π^{j^{12}_33}(g^1_3 h g^2_3) φ_2({g^α_a}_a, h),   (57)

where we have used the inverse gauge-fixing identity in the last step. This expression corresponds to the two-simplex configuration T_2 of the (3, 2) move.

- Shellings:

* The (3, 1) move. Remarkably, writing the amplitudes associated to the left- and right-hand sides of the (3, 1) shelling leads to the same expression as for the (4, 1) bistellar, even if the geometrical interpretation is obviously different. This is due to the fact that we are imposing the flatness constraint F = 0 also on the faces of the boundary¹² simplicial complex ∂∆ and integrating also over the boundary edges. The only difference is the presence of possible open string spin network edges, reflected in the group function φ = φ({g_a}_a, {λ_a}_a), without any incidence on any step of the proof given for the (4, 1) bistellar. Accordingly, the proof of invariance under the (3, 1) shelling is the one sketched above.

* The (2, 2) move. The same remark applies here: the amplitudes are exactly identical to those of the (3, 2) bistellar. Accordingly, we have proven the invariance under adapted Pachner moves.

• Invariance under curve moves.
We show here that the regularized physical inner product is invariant under curve moves. The proof uses the flatness constraint F = 0. Consider a particular dual face f of (∆*, ∂∆*) containing n boundary edges, positively oriented from vertex 1 to vertex n. Suppose that there are p < n dual edges e_1, ..., e_p supporting a curve positively oriented w.r.t. the face f, to which a spin-j representation is associated. We want to prove that the associated amplitude is equal to the amplitude corresponding to the curve lying along the n−p edges of ∂f \ {e_1, ..., e_p} after the edge move. We start from the initial configuration

∫_{G^n} ∏_{a=1}^n dg_a π^j(g_1...g_p) δ(g_1...g_p g_{p+1}...g_n) δ(g_1 G_1) δ(g_1 H_1) ... δ(g_n G_n) δ(g_n H_n),   (58)

where the capital letters G_a, H_a, a = 1, ..., n, represent the sequences of group elements associated to the two other faces sharing the edge a. We then simply integrate over the group element g_1 to obtain

∫_{G^{n-1}} ∏_{a=2}^n dg_a π^j(g_n^{-1}...g_{p+1}^{-1}) δ(g_n^{-1}...g_2^{-1} G_1) δ(g_n^{-1}...g_2^{-1} H_1) ... δ(g_n G_n) δ(g_n H_n) = ∫_{G^n} ∏_{a=1}^n dg_a π^j(g_n^{-1}...g_{p+1}^{-1}) δ(g_1...g_p g_{p+1}...g_n) δ(g_1 G_1) δ(g_1 H_1) ... δ(g_n G_n) δ(g_n H_n).   (59)

Note the reversal of orientations intrinsic to the move. This closes the proof of invariance under curve moves.

We finish the proof of Theorem 1 by showing its second part, namely the invariance of the transition amplitudes under string spin network graph moves.

• Invariance under edge moves. The proof is the one given for the curve move.

• Invariance under endpoint moves. Here, we use the momentum conservation Dp = 0. Consider a particular dual face f containing n boundary edges, with p < n dual edges e_1, ..., e_p supporting an open string spin network edge (positively oriented w.r.t. f) ending on the boundary of the edge p, to which a spin-j representation is associated. We call λ_k the string field evaluated at the target of the k-th edge. We choose the holonomy starting point x to be on the endpoint of the edge p (we prove below that nothing depends on this choice) and, since nothing depends on the paths β by virtue of the invariance under curve moves, we choose a path β of C along the edge p+1. The relevant amplitude is given by

∫_{G^n} ∏_{a=1}^n dg_a dλ_p dλ_{p+1} π^j(g_1...g_p λ_p) δ(g_{p+1} λ_{p+1} λ_p^{-1}),   (60)

where the notations are the same as above. It is immediate to rewrite the above quantity as

∫_{G^n} ∏_{a=1}^n dg_a dλ_p dλ_{p+1} π^j(g_1...g_p g_{p+1} λ_{p+1}) δ(g_{p+1} λ_{p+1} λ_p^{-1}),   (61)

which concludes the proof of endpoint move invariance.
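The delta-function mechanics behind the curve and endpoint moves can be made tangible on a finite group, where Haar integration is a finite sum. The sketch below (an illustration on S_3 with a length-four face, dropping the spectator faces G_a, H_a of (58)) checks that the flatness delta converts a Wilson line along part of a face boundary into the reversed line along the complement, as in (58)-(59).

```python
# Minimal sketch (illustration only): finite-group analogue of the curve move.
# The flatness delta forces g_1...g_p = (g_{p+1}...g_n)^{-1}, so a character
# along e_1..e_p equals the character of the reversed complement.
from itertools import permutations, product

elements = list(permutations(range(3)))          # the symmetric group S_3
identity = tuple(range(3))

def mul(a, b):
    """Compose two permutations given as tuples: (a o b)(i) = a[b[i]]."""
    return tuple(a[i] for i in b)

def inv(a):
    out = [0] * len(a)
    for i, v in enumerate(a):
        out[v] = i
    return tuple(out)

def chi(g):
    """Character of the 2-dimensional irreducible representation of S_3."""
    return sum(1 for i, v in enumerate(g) if i == v) - 1

n, p = 4, 2                                      # face with 4 edges, line on 2 of them
lhs = rhs = 0
for g in product(elements, repeat=n):
    hol = g[0]
    for gk in g[1:]:
        hol = mul(hol, gk)
    if hol == identity:                          # the flatness delta on the face
        lhs += chi(mul(g[0], g[1]))              # line along e_1 e_2
        rhs += chi(inv(mul(g[2], g[3])))         # reversed line along the complement
print(lhs, rhs)                                  # identical sums
```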
• Invariance under vertex translations. Here, we consider three dual faces f_i, i = 1, 2, 3, of (∆*, ∂∆*), each containing n_i boundary edges positively oriented from vertex 1 to vertex n_i. The three faces meet on the common edge e, which is such that 1 = t(e), i.e., e = e^i_{n_i} for all i. Suppose that there are p_1 < n_1 dual edges e^1_1, ..., e^1_{p_1} (resp. p_2 < n_2 dual edges e^2_1, ..., e^2_{p_2}) of the face f_1 (resp. f_2) supporting a string spin network edge e^1_Γ (resp. e^2_Γ) colored by a spin-j_1 (resp. j_2) representation and oriented negatively w.r.t. the orientation of f_1 (resp. f_2). Suppose also that the face f_3 contains n_3 − p_3 dual edges e^3_{p_3+1}, ..., e^3_{n_3} along which lies a positively oriented (w.r.t. the orientation of the face) string spin network edge e^3_Γ colored by a spin-j_3 representation. Consider that the three edges meet on the vertex v_Γ supported by the vertex 1 of (∆*, ∂∆*). Denoting by g the group element associated to the common edge e, one can write the spin network function associated to the three-valent vertex v_Γ lying on 1 and use the invariance property of the associated intertwining operator ι to 'slide' the vertex along the edge e:

π^{j_1}((g^1_{p_1})^{-1}(g^1_{p_1-1})^{-1}...(g^1_1)^{-1}) π^{j_2}((g^2_{p_2})^{-1}(g^2_{p_2-1})^{-1}...(g^2_1)^{-1}) π^{j_3}(g^3_{p_3+1}...g^3_{n_3-1} g) · ι_{j_1 j_2 j_3} = π^{j_1}((g^1_{p_1})^{-1}...(g^1_1)^{-1} g^{-1}) π^{j_2}((g^2_{p_2})^{-1}...(g^2_1)^{-1} g^{-1}) π^{j_3}(g^3_{p_3+1}...g^3_{n_3-1}) · ι_{j_1 j_2 j_3}.   (62)

It is then possible to use the flatness constraint F = 0 on either of the faces f_1 or f_2 to implement an edge move on e^1_Γ or e^2_Γ, thus completing the vertex move.

By virtue of all the above derivations, we have now fully proven Theorem 1. To be perfectly complete, we need to verify that the amplitudes are also independent of orientation and holonomy base point changes. This leads to the following proposition.

Proposition 1 The regularized physical inner product (46) is independent of the choice of orientations of the dual edges and faces of (∆*, ∂∆*), and does not depend on the choice of holonomy base points.

Proof of Proposition 1. It is immediate to see that the amplitudes do not depend on the orientations of the dual faces and dual edges of (∆*, ∂∆*), nor on the holonomy starting points on the boundaries of the dual faces [START_REF] Freidel | Ponzano-Regge model revisited II: Equivalence with Chern-Simons[END_REF]. Indeed, a dual face and a dual edge orientation change correspond respectively to the changes g_e → g_e^{-1} and g_f → g_f^{-1}, which are respectively compensated by the invariance of the Haar measure, dg_e = dg_e^{-1}, and of the delta function, δ(g_f) = δ(g_f^{-1}). A change in the holonomy base point associated to a dual face f results in the conjugation of the group element g_f by some element h of G. Since the delta function is central, δ(g_f) = δ(h g_f h^{-1}), the regularized physical inner product (46) remains unchanged under such a transformation. Concerning the base point x used to define the holonomies along the loop α and the paths β ∈ C in (46), the situation is similar. Let x be denoted x_1 and suppose that we change the point x_1 to another point x_2 in X neighbouring x_1. Since we have shown the invariance under bistellars and shellings, we are free to choose the simplest discretization¹³ of the manifold (Σ_η, T_η). We choose it such that the cylindrical section of T_η between x_1 and x_2 is discretized by a single dual face with two opposite sides glued along a dual edge e linking x_1 to x_2. By virtue of the curve move invariance, we are also free to choose the path β along e. The amplitude based on x_2 as a starting point for the paths α and β, restricted to this section of T_η, yields

∫_{G^5} ∏_{a=1}^2 dg_a dλ_a dg_β δ(g_2 λ_2 u λ_2^{-1}) δ(g_β λ_1 λ_2^{-1}) δ(g_2 g_β g_1^{-1} g_β^{-1}) f({g_a}_a, {λ_a}_a, g_β),   (63)

where g_a is the holonomy around the disk bounding the tube section at the point x_a, and the function f describes the string spin network function together with the other delta functions containing the group elements g_a and g_β. It is immediate to rewrite the above expression as

∫_{G^5} ∏_{a=1}^2 dg_a dλ_a dg_β δ(g_1 λ_1 u λ_1^{-1}) δ(g_β^{-1} λ_2 λ_1^{-1}) δ(g_1^{-1} g_β^{-1} g_2 g_β) f({g_a}_a, {λ_a}_a, g_β),   (64)

which is the amplitude based on x_1 as a starting point for the paths α and β.
There are two major consequences of the above theorem and proposition. First, there is no continuum limit to be taken in (45). Since the transition amplitudes are invariant under elementary regulator moves, the regularized physical inner product (46) is independent of the regulator and the expression (46) is consequently exact: there is no need to take the limits¹⁴ ε, η → 0. In particular, we have shown that the amplitudes are invariant under any finite sequence of bistellar moves and shellings, which implies, by Pachner's theorem, that the physical inner product is well defined and invariant on the equivalence classes of PL-manifolds¹⁵ (∆, ∂∆) up to PL-homeomorphisms. Accordingly, the transition amplitudes are invariant under triangulation change and thus under refinement. This leads to the second substantial consequence of Theorem 1. The crucial point is that the equivalence classes of PL-manifolds up to PL-homeomorphism are in one-to-one correspondence with those of topological manifolds up to homeomorphism; see for instance [START_REF] Pfeiffer | Quantum general relativity and the classification of smooth manifolds[END_REF] for details. Hence, showing the invariance of the regularized physical inner product under triangulation change is equivalent to showing homeomorphism invariance: the discretized expression (46) is in fact a topological invariant of the manifold (Σ_η, T_η). In particular, the amplitudes are invariant on the equivalence classes of boundary tori T_η up to homeomorphisms. It follows that they do not depend on the embedding of the string S. Combining these results with the second part of Theorem 1, stating that the regularized physical inner product is invariant under string spin network graph moves, we obtain the following corollary.

Corollary 1 The physical inner product (46) is a topological invariant of the triple ((Σ_η, T_η), Γ):

P[R_(η,ε); Ψ_Γ] = P[[(Σ_η, T_η)]; Ψ_[Γ]],   (65)

where [(Σ_η, T_η)] and [Γ] denote the equivalence classes of topological open manifolds and one-complexes up to homeomorphisms and ambient isotopy, respectively.

This corollary concludes our study of the topological invariance of the theory of extended matter coupled to BF theory studied in this paper.

V. CONCLUSION

In the first part of this paper we have studied the geometrical interpretation of the solutions of BF theory with string-like conical defects. We showed the link between solutions of our theory and solutions of general relativity of the cosmic-string type. We provided a complete geometrical interpretation of the classical string solutions and explained (by analyzing the multiple-string solution) how the presence of strings at different locations induces torsion. In turn, torsion can in principle be used to define localization in the theory. We have achieved the full background-independent quantization of the theory introduced in [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF]. We showed that the implementation of the dynamical constraints at the quantum level requires the introduction of regulators. These regulators are defined by a (suitable but otherwise arbitrary) space discretization. Physical amplitudes are independent of the ambiguities associated with the way this regulator is introduced and are hence well defined. There are other regularization ambiguities arising in the quantization process that have not been explicitly treated here. For an account of these, as well as for a proof that they have no effect on physical amplitudes, see [START_REF] Perez | On the regularization ambiguities in loop quantum gravity[END_REF].
The results of this work can be applied to the more general type of models introduced in [START_REF] Montesinos | Two-dimensional topological field theories coupled to four-dimensional BF theory[END_REF], where it is shown that a variety of physically interesting 2-dimensional field theories can be coupled to the string world sheet in a consistent manner. An interesting example is the one where, in addition to the degrees of freedom described here, the world sheet carries Yang-Mills excitations. There is an intriguing connection between this type of topological theories and certain field theories in the 2+1 gravity plus particles case. One would expect a similar connection to exist in the present case. However, due to the higher-dimensional character of the excitations in this model, this relationship allows for the inclusion of more general structures: only spin and mass are allowed in 2+1 dimensions. The study of the case involving Yang-Mills world-sheet degrees of freedom is of special interest. This work provides the basis for the computation of amplitudes in the topological theory. A clear understanding of the properties of string transition amplitudes should shed light on the eventual relation with field theories with infinitely many degrees of freedom.

FIG. 1: A typical string spin network (the string is represented by the bold line).
FIG. 2: The (4, 1) and (3, 2) bistellar moves.
FIG. 3: The (3, 1) and (2, 2) shellings, their dual moves and the associated boundary bistellars.

¹ Note that this is exactly the same result as the one obtained for the point particle, if we think of the 3d momentum as Hodge dual to a bivector.
² In fact, if Σ is compact, the equations of motion (3) imply that the string must be closed (or have zero tension).
³ See the original work [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF] for a detailed canonical analysis.
⁴ More precisely, one usually endows the canonical hypersurface Σ with a real, analytic structure and restricts the edges to be piecewise analytic or semi-analytic manifolds, as a means to control the intersection points.
⁵ We thank Merced Montesinos for pointing out this property of BF theory's constraints.
More precisely, the linear form P, once normalized by the evaluation P(1), is a state P/P(1) : Cyl ⊂ A → C, whose associated GNS construction leads equivalently to the physical Hilbert space H_phys. The associated Gel'fand ideal I is immense by virtue of the topological nature of the theory under consideration. Indeed, one can show that any element of Cyl based on a contractible graph is equivalent to a complex number. The associated physical representation π_phys : Cyl → End(H_phys) is defined such that, for every cylindrical function a_Γ ∈ Cyl defined on a contractible graph Γ, π_phys(a_Γ[A])Ψ = a_Γ[0]Ψ for all Ψ in H_phys. Note that in dimension d ≤ 3, each topological d-manifold admits a piecewise-linear structure (this is the so-called 'triangulation conjecture'). Note that the blow-up of the string, reflected here in the presence of a flatness constraint on the boundary torus, gives us the opportunity to impose that the connection be flat also on the string. One can show that the amplitudes obtained using the three-valent decomposition (where the virtual edges are assumed to be real) are identical to the ones obtained using a suitable non-simplicial cellular decomposition adapted to spin networks with arbitrary-valence vertices. In this sense, the boundary amplitudes are very different from those of a 2+1 quantum gravity model defined on an open manifold. Anticipating the next paragraph, we are using the fact that invariance under Pachner moves implies the topological invariance of the amplitudes. Accordingly, we can use a cellular decomposition which is not necessarily a triangulation. More precisely, we have shown that (46) is invariant when going from R_(η,ε) to R_(η′,ε′) for all (η′, ε′) ≠ (η, ε), which implies regulator independence. More precisely, a triangulation ∆ is not itself a PL-manifold; it is a combinatorial manifold which is PL-isomorphic to a PL-manifold.

Acknowledgements

We thank Romain Brasselet for his active participation in the early stages of this project. WF thanks Etera Livine for discussions. This work was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes) through a visiting professor fellowship.
101,856
[ "842948", "2441" ]
[ "13", "400", "179898" ]
01751474
en
[ "spi" ]
2024/03/05 22:32:07
2014
https://hal.univ-lorraine.fr/tel-01751474/file/DDOC_T_2014_0309_XU.pdf
After all the confusion and epiphany, depression and inspiration, sadness and happiness, one finally reaches the stage of being modest and grateful, which is the last thing I learned from my doctoral studies. I would like to begin by expressing my most sincere gratitude to my Ph.D. advisor, Professor Michel Potier-Ferry, an inspiring and extraordinary person to work with during the entire span of this thesis. In addition to introducing me to the subject of instability and guiding my growth in computational mechanics, his patient guidance and relentless pursuit of excellence have helped shape me into not only a better researcher with a rigorous attitude but also a more responsible and considerate person than I otherwise would have been. Plainly put, he has been a great mentor. It has been an honor and a privilege working with him over the last three years. Sincere appreciation is extended to my co-advisor, Dr. Salim Belouettar, for his kind support and help, as well as his deep trust in giving me full autonomy in my research activity. I would like to express my gratitude to the other thesis committee members, Professor Basile Audoly, Professor Yibin Fu, Professor Martine Ben Amar and Professor Hachmi Ben Dhia, for taking the time to read this thesis and for offering helpful comments and suggestions. Professor Basile Audoly and Professor Yibin Fu deserve special thanks for their inspiring and encouraging reports on this thesis. Many thanks go in particular to Professor Yanping Cao at Tsinghua University for his expert advice and valuable scientific discussions, which enlightened me and cleared up my confusion on some technical points. I would also like to record my gratitude to my colleagues and my friends, Dr. Yu Cong and Dr. Yao Koutsawa, for our scientific (and not-so-scientific) discussions. It has been a pleasure working with them. Special thanks go to Professor Hamid Zahrouni for allowing me to use his powerful workstation to perform heavy simulations. My thanks also go to my friends, Kodjo Attipou, Cai Chen, Yajun Zhao, Junliang Dong, Qi Wang, Dr. Wei Ye, Dr. Jingfei Liu, Dr. Kui Wang, Alex Gansen, Qian Shao, Sandra Hoffmann, Dr. Duc Tue Nguyen, etc., for sharing my joy and sadness, and offering help and support whenever needed. I treasure every minute that I have spent with them. Lastly, I would like to give my most special thanks to my parents and my fiancée for their unconditional love, 24/7 support, shared happiness and for always believing in me. Without their encouragement I would never have made it this far. Financial support for this research was provided by AFR Ph.D. Grants from the Fonds National de la Recherche of Luxembourg (Grant No. FNR/C10/MS/784868).

2.1 An elastic stiff film resting on a compliant substrate under in-plane compression. The wrinkle wavelength λ_x is much larger than the film thickness h_f. The ratio of the substrate thickness h_s to the wavelength, h_s/λ_x, can vary from a small fraction to a large number.

4.4 Buckling of a clamped beam under uniform compression: one real envelope (v_0) and four complex envelopes (v_1^R, v_1^I, v_2^R, v_2^I). The reduction is performed over the domain [π, 29π]. The instability pattern for μ = 2.21.

4.5 Buckling of a clamped beam under uniform compression: one real envelope (u_0) and four complex envelopes (u_1^R, u_1^I, u_2^R, u_2^I).
The reduction is performed over the domain [π, 29π]. The instability pattern for μ = 2.21.

Thin films on hyperelastic substrates under equi-biaxial compression. The left column shows a sequence of wrinkling patterns, while the right column presents the associated instability shapes along the line X = 0.5L_x. Localized folding and checkerboard modes appear in the bulk.

General introduction

Surface wrinkles of stiff thin layers bound to soft materials have been widely observed in nature, for example in hornbeam leaves and human skin, and have raised considerable research interest for several decades. When a stiff thin film is deposited on a polymeric soft substrate, residual compressive stresses develop in the film during the cooling process, owing to the large mismatch in thermal expansion coefficients between the film and the substrate, and are relieved by buckling into wrinkle patterns; this was pioneered by Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] in 1998. These surface wrinkles are a nuisance in some applications, but they are widely exploited in modern industry, in applications ranging from the micro/nano-fabrication of flexible electronic devices with controlled morphological patterns [START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF] to the measurement of mechanical material characteristics [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. Over the last decade, several theoretical, numerical and experimental works have been devoted to stability analyses in order to determine the critical conditions of instability and the corresponding wrinkling patterns [START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF]. Although linear perturbation analyses can predict the wavelength at the onset of instability, determining the subsequent post-buckling morphological evolution requires fully nonlinear buckling analyses. During post-buckling, the wavelength and amplitude of the wrinkles may vary with the externally applied compressive load.
However, instability problems are complex and often involve strong geometrical nonlinearities, large rotations, large displacements, large deformations, loading-path dependence and multiple symmetry-breakings. That is why most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact analytical solutions can be obtained. In most previous works, the 2D or 3D film/substrate system is discretized by a spectral method or a Fast Fourier Transform (FFT) algorithm, which is fairly computationally inexpensive but prescribes periodic boundary conditions and simple geometries. Moreover, within the spectral or FFT framework, it is rather difficult to describe the localized behavior that often occurs in soft matter. By contrast, such systems can be modeled by finite element methods, which are more computationally expensive but more flexible for describing complex geometries and general boundary conditions, and allow the use of commercial computer codes. In addition, there is a shortage of studies concerning the effect of boundary conditions on instability patterns, which is important in practice. Localizations are often caused by stress concentration near the boundary or by symmetry-breakings, and the finite element method is a good way to capture localized behavior such as folding, creasing or ridging, whereas the spectral or FFT technique has difficulty doing so. Overall, pattern formation modeling and post-buckling analysis deserve new numerical investigations, especially by the finite element method, which can provide a whole view of, and insight into, the formation and evolution of wrinkle patterns under any conditions. Therefore, the main objective of the present thesis is to apply advanced numerical methods to multiple-bifurcation analyses of film/substrate systems. These advanced numerical approaches include path-following techniques, bifurcation indicators, bridging techniques, multi-scale analyses, etc. The focus of this thesis lies in, but is not limited to, the application of the following numerical methods to instability pattern formation in film/substrate systems:

• the finite element method, to be able to deal with all geometries, behaviors and boundary conditions;
• path-following techniques for nonlinear problem resolution;
• bifurcation indicators to detect bifurcation points and the associated instability modes;
• model reduction techniques based on multi-scale approaches;
• bridging techniques to couple full models and reduced-order models concurrently.

The thesis is outlined as follows. In Chapter 1, a literature review revisits previous contributions and reports cutting-edge works as well as research trends in the relevant fields. Challenges and problems that need to be overcome and solved are positioned and discussed. The main methods and techniques that will be applied and developed in the following chapters are also introduced.
These include, first, an advanced numerical continuation technique for solving nonlinear differential equations, namely the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF]; second, a Fourier-related multi-scale modeling technique for instability pattern formation; and lastly the well-known Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] for model coupling between different scales or different meshes. In Chapter 2, we apply advanced numerical methods for bifurcation analyses to typical film/substrate systems, focusing on the post-bifurcation evolution involving secondary bifurcations and the associated instability modes. A finite element model based on the ANM is developed for the nonlinear analysis of pattern formation, with particular attention to the effect of boundary conditions. Up to four successive bifurcation points have been detected. The evolution of wrinkling patterns and post-bifurcation modes, including period-doubling, has been observed beyond the first bifurcation. Next, in Chapter 3, following the same strategy, we extend the 2D work to 3D cases. Spatial pattern formation in stiff thin films on compliant substrates is investigated with a 3D finite element model coupling shell elements representing the film and block elements describing the substrate. Typical post-bifurcation patterns include sinusoidal, checkerboard and herringbone shapes, with possible spatial modulations, boundary effects and localizations. Up to four successive bifurcation points have been found on the nonlinear response curves. Chapter 4 presents a very original nonlocal bridging technique between microscopic and macroscopic models for wrinkling analysis. We discuss how to connect a fine and a coarse model within the Arlequin framework. We propose a nonlocal reduction-based coupling operator that allows us to accurately describe the response of the system near the boundary and to avoid locking phenomena in the coupling zone. The proposed method can be viewed as a guide for coupling techniques involving other reduced-order models, and it shows a flexible way to analyze cellular instability problems involving thin boundary layers, e.g. membrane wrinkling, buckling of thin metal sheets, etc. In the last Chapter 5, a macroscopic modeling framework for film/substrate systems is developed based on the technique of slowly varying Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. More specifically, a 2D macroscopic film/substrate model is derived from the 2D model presented in Chapter 2, with all the mechanical fields represented by Fourier coefficients. In this way, the computational cost can be reduced significantly, since only a small number of elements is needed to describe nearly periodic wrinkles.
In the same spirit, a 3D macroscopic film/substrate model that couples a nonlinear envelope membrane-wrinkling model and a linear elastic macroscopic model is then established. Unless particularly mentioned, the computations performed throughout this thesis are based on self-developed computer codes in MATLAB.

Introduction générale

Wrinkling of thin films on a softer substrate has been widely observed in nature, for example in tree leaves and human skin. These phenomena have attracted considerable interest for several decades. When a film is deposited on a soft polymer substrate, residual compressive stresses develop in the film during the cooling phase owing to the large mismatch in thermal coefficients between the film and the substrate, and are then relaxed by buckling; see, for example, the pioneering article by Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] in 1998. These surface wrinkles are a nuisance in some applications, but can be widely used in modern industry for applications ranging from the micro/nano-fabrication of flexible electronic devices with controlled morphological patterns [START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF] to the measurement of the mechanical properties of materials [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. Over the last decade, theoretical, numerical and experimental research has been devoted to stability analyses in order to determine the critical conditions of instability and the corresponding modes [START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF]. Although linear perturbation analyses can predict the wavelength at the onset of the instability, determining the subsequent morphological evolution truly requires nonlinear analyses. During post-buckling, the wavelength and the amplitude may vary.
But instability problems are complex and often involve strong geometrical nonlinearity, large rotations, large displacements, large deformations, loading-path dependence and multiple symmetry-breakings. This is why most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact solutions can be obtained analytically. In most previous works, the 2D or 3D film/substrate system is often discretized by the spectral method or the Fast Fourier Transform (FFT), which are computationally inexpensive but impose periodic boundary conditions and simple geometries. Furthermore, within the spectral or FFT framework, it is rather difficult to describe the localized behavior that often appears in soft matter. By contrast, such systems can be modeled by the finite element method, which is more expensive in computing time but more flexible for describing complex geometries and generic boundary conditions, and allows the use of commercial computer codes. In addition, there are few studies on the effects of boundary conditions on instability modes, which is important in practice. Localizations are often caused by stress concentration near the edges or by multiple symmetry-breakings, and the finite element method is a good way to capture localized behaviors such as folding, creasing or ridge formation, which is more difficult with the spectral or FFT technique. Overall, the modeling of wrinkle formation and post-buckling analysis deserves new numerical investigations, in particular by the finite element method, which can provide a complete view of the formation and evolution of wrinkling modes under all conditions. The main objective of this thesis is therefore to apply advanced numerical methods for multiple-bifurcation analyses to film/substrate systems. These advanced numerical approaches include path-following techniques, bifurcation indicators, model coupling methods, multi-scale analyses, etc. The strength of this thesis is the application of the following numerical methods to instabilities in film/substrate systems:

• the finite element method, to handle all geometries, behaviors and boundary conditions;
• path-following techniques for the resolution of nonlinear problems;
• bifurcation indicators to detect bifurcation points and the associated instability modes;
• model reduction techniques based on multi-scale approaches;
• coupling methods to couple full models and reduced models.

The detailed content of the thesis is described below. In Chapter 1, a literature review presents recent works and research trends in the field. The challenges and problems to be overcome and solved are positioned and discussed.
In addition, the main methods and techniques that will be applied and developed in the following chapters are introduced: first, a numerical path-following technique for solving nonlinear differential equations, namely the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF]; second, a Fourier-type multi-scale modeling technique for the formation of instability patterns; and finally the well-known Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] for coupling models between different scales or different meshes. In Chapter 2, we apply advanced numerical methods for bifurcation analyses to typical models of film/substrate systems, focusing on the post-bifurcation evolution involving secondary bifurcations and the associated instability modes. A finite element model based on the ANM is developed for the nonlinear analysis, with particular attention to the effect of boundary conditions. Up to four successive bifurcation points have been detected. The evolution of wrinkling modes and post-bifurcation modes, including period-doubling, has been observed beyond the first bifurcation. In Chapter 3, following the same strategy, we extend the 2D work to 3D. The formation of spatial patterns in thin films on soft substrates is studied with a 3D finite element model coupling shell elements and solid elements. Typical post-bifurcation patterns include sinusoidal, checkerboard and herringbone shapes, with possible spatial modulations, boundary-condition effects and localizations. Up to four successive bifurcation points have been found on the nonlinear response curves. Chapter 4 presents a very original nonlocal coupling technique between microscopic and macroscopic models for wrinkling analysis. We discuss how to connect a fine model and a coarse model within the Arlequin framework. We propose a nonlocal reduction operator that allows us to accurately describe the response of the system near the boundary and to avoid locking phenomena in the coupling zone. The proposed method can be viewed as a guide for coupling techniques involving other reduced models, and it shows a flexible way to analyze cellular instability problems involving thin layers, e.g. membrane wrinkling, the buckling of thin metal sheets, etc.
In the last Chapter 5, a macroscopic modeling framework for film/substrate systems is developed on the basis of the technique of Fourier series with slowly varying coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. More precisely, a 2D macroscopic film/substrate model is derived from the 2D model presented in Chapter 2, with all the mechanical fields represented by Fourier coefficients. In this way, the computational cost can be considerably reduced, since a small number of elements suffices to describe the periodic wrinkles. In the same spirit, a 3D macroscopic film/substrate model is established, which couples a nonlinear envelope membrane-wrinkling model and a linear elastic macroscopic model. Unless otherwise stated, the computations carried out throughout the thesis are based on our own computer codes written in MATLAB.

Wrinkles of a stiff thin layer attached to a soft substrate have been widely observed in nature (see Fig. 1.1), and these phenomena have raised considerable research interest over the last decade [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Efimenko | Nested self-similar wrinkling patterns in skins[END_REF][START_REF] Dervaux | Morphogenesis of growing soft tissues[END_REF],128,[START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF][START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF]. The underlying mechanism of wrinkling is generally understood as a stress-driven instability, analogous to the Euler buckling of an elastic column under compressive stress. When a stiff film is deposited on an elastomeric soft substrate, the residual compressive stresses that develop in the film during the cooling process, due to the large thermal expansion coefficient mismatch between film and substrate, are relieved by buckling into a pattern of wrinkles while the film remains bonded to the substrate; this was pioneered by Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF] in 1998.
These surface wrinkles are a nuisance in some applications, but can be widely exploited, ranging from micro/nano-fabricated surfaces with ordered patterns and unique wetting properties [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF], buckled single-crystal silicon ribbons [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF], the optical properties of electronic eye cameras, the measurement of the surface mechanical properties of materials [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF], biomedical engineering [START_REF] Genzer | Soft matter with hard skin: From skin wrinkles to templating and material characterization[END_REF] and biomechanics [START_REF] Amar | Swelling instability of surface-attached gels as a model of soft tissue growth under geometric constraints[END_REF][START_REF] Dervaux | Buckling condensation in constrained growth[END_REF][START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF], to the design of flexible semiconductor devices and stretchable electronics [START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF], conformable skin sensors, smart surgical gloves, and structural health monitoring devices [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF]. When subjected to sufficiently large in-plane compressive stresses, in order to minimize the total potential elastic energy, a film/substrate system may buckle into various intricate patterns depending on the loading and boundary conditions, e.g. sinusoidal, checkerboard, herringbone, hexagonal and triangular patterns, as shown in Figs. 1.2 and 1.3. One mathematical expression of these pattern shapes is a combination of trigonometric functions [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF], which can be written as

$$\begin{aligned} \text{1D sinusoidal mode:} \quad & z = \sin(kx),\\ \text{checkerboard mode:} \quad & z = \cos(kx)\cos(ky),\\ \text{herringbone mode:} \quad & z = \cos(kx) + \sin(kx)\cos(ky),\\ \text{hexagonal mode:} \quad & z = \cos(kx) + 2\cos\!\left(\tfrac{1}{2}kx\right)\cos\!\left(\tfrac{\sqrt{3}}{2}ky\right),\\ \text{triangular mode:} \quad & z = -\sin(kx) + 2\sin\!\left(\tfrac{1}{2}kx\right)\cos\!\left(\tfrac{\sqrt{3}}{2}ky\right). \end{aligned} \tag{1.1}$$

First, let us distinguish three typical phenotypes in the morphological instability of soft materials: wrinkling, folding and creasing (see Fig. 1.4). Wrinkling refers to periodic or aperiodic surface undulations appearing on a flat surface. It often occurs during the buckling of thin structures on lateral foundations; for example, in 2D, a stiff film bonded to a compliant substrate may buckle into sinusoidal waves. Folding usually refers to a buckling-induced surface structure with a localized surface valley. Folds are often observable during the post-buckling evolution of surface wrinkles in a hard layer lying on a gel substrate or floating on a liquid. The term folding has been extensively adopted in some fields, such as biomedicine and tectonophysics, to denote traditional wrinkling.
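Before discussing these phenotypes further, it may help to visualize the closed-form patterns of Eq. (1.1). The following minimal MATLAB sketch (purely illustrative and not taken from the thesis codes; the wavenumber k and the grid are arbitrary choices) evaluates the five modes on a uniform grid and plots them:

```matlab
% Illustrative visualization of the wrinkling modes of Eq. (1.1);
% the wavenumber k and the domain are arbitrary, assumed choices.
k = 2*pi;
[x, y] = meshgrid(linspace(0, 2, 200));   % square grid in the plane

z.sinusoidal   = sin(k*x);
z.checkerboard = cos(k*x).*cos(k*y);
z.herringbone  = cos(k*x) + sin(k*x).*cos(k*y);
z.hexagonal    = cos(k*x) + 2*cos(k*x/2).*cos(sqrt(3)*k*y/2);
z.triangular   = -sin(k*x) + 2*sin(k*x/2).*cos(sqrt(3)*k*y/2);

modes = fieldnames(z);
for i = 1:numel(modes)
    subplot(2, 3, i);
    surf(x, y, z.(modes{i}), 'EdgeColor', 'none');   % top view of pattern
    title(modes{i}); axis equal tight; view(2);
end
colormap parula
```

Such a script only reproduces the kinematic shapes of the patterns; which of them is actually selected at a given load is an energetic question addressed by the stability analyses reviewed below.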
Creasing, in contrast to wrinkling and folding, usually appears at the surface of soft materials without a hard skin, when an initially smooth surface forms a sharp self-contacting sulcus [START_REF] Li | Mechanics of morphological instabilities and surface wrinkling in soft materials: a review[END_REF]. The broad interest in the surface morphological instabilities of stiff thin layers attached to soft substrates has motivated recent studies, especially those focusing on stability analysis. Several theoretical, numerical and experimental works have been devoted to linear perturbation analyses and nonlinear buckling analyses in order to determine the critical conditions of instability and the corresponding wrinkling patterns [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF]. In particular, many analytical solutions of models linearized about a homogeneous finite deformation are available, for half-spaces [START_REF] Hayes | Surface waves in deformed elastic materials[END_REF][START_REF] Dowaikh | On surface waves and deformations in a pre-stressed incompressible elastic solid[END_REF] as well as for film/substrate systems [START_REF] Steigmann | Plane deformations of elastic solids with intrinsic boundary elasticity[END_REF][START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF]. Cai and Fu [START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF] analytically studied the buckling of a pre-stressed coated elastic half-space with the aid of the exact theory of nonlinear elasticity, treating the coating as an elastic layer and using its thickness as a small parameter. They also determined the imperfection sensitivity of a neo-Hookean surface layer bonded to a neo-Hookean half-space [START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF].
Although linear perturbation analyses can predict the wavelength at the onset of instability, determining the post-buckling morphological evolution and the mode transitions of surface wrinkles requires nonlinear buckling analyses. During post-buckling, the wrinkle wavelength and amplitude vary with the externally applied compressive load. Because of this well-known difficulty, most post-buckling analyses have resorted to numerical and experimental approaches, since only a limited number of exact analytical solutions can be obtained in very simple or simplified cases. Some researchers have modeled the wrinkling of the thin film with Föppl-von Kármán nonlinear elastic plate theory, applied linear elasticity theory to the substrate, and then minimized the potential energy to characterize the effective instability parameters; this is quite a classical and simple way to model film/substrate systems. Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] elucidated nonlinear aspects of the buckling behavior of some periodic modes and developed a closed-form solution. They calculated the wavelength and amplitude of sinusoidal wrinkles by considering an infinitely thick substrate. Numerically, by using the finite element method and simulating a single elementary cell, a square or a parallelogram with periodic boundary conditions, Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] found that the herringbone pattern has the minimum energy among several patterns, including the sinusoidal and checkerboard ones, which explains why it is frequently observed in experiments. Huang et al. [START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF] extended the work of Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] to the case of a film bound to a substrate of finite thickness. Instead of modeling the substrate as a Winkler foundation (a foundation made of an array of springs and dashpots) [START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF], they developed a spectral method to evolve two-dimensional patterns of wrinkles and represented the three-dimensional elastic field of the substrate in Fourier space. The calculation of wrinkle patterns was performed in a square cell in the plane, with periodic boundary conditions replicating the cell over the entire plane. They observed stripe wrinkles, checkerboard, labyrinth or herringbone patterns depending on the loading conditions, and showed that the wavelength of the wrinkles remains constant as their amplitude increases. Independently, Mahadevan and Rica [START_REF] Mahadevan | Self-organized origami[END_REF] proposed an analysis of herringbone patterns based on amplitude equations, which is suitable for analyzing large-wavelength perturbations on top of straight wrinkles, but rests on an assumption that does not apply to the geometry of herringbones.
Audoly and Boudaoud [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF] proposed a simplified buckling model considering a film attached to an isotropic half-infinite substrate and then solved it in Fourier space. They found that the undulating stripes evolve smoothly towards a pattern similar to the herringbone one under increasing loads. In their companion papers [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], they employed asymptotic methods and a Fast Fourier Transform (FFT) algorithm to explore the behavior expected in the range of very large overstress, with emphasis on the herringbone mode. Most previous studies consider a homogeneous substrate, while systems consisting of a stiff thin layer resting on an elastically graded substrate are often encountered both in nature and in industry. In nature, many living soft tissues, including skin, brain, and the mucosa of the esophagus and pulmonary airway, can be modeled as a soft substrate covered by a stiff thin surface layer [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. It is noted that the sub-surface layer (i.e. the substrate) usually has graded mechanical properties because of the spatial variation in microstructure or composition. Besides, many practical systems in industry have a functionally graded substrate [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. For instance, the deposition process of a stiff thin film on a soft substrate may lead to a variation of the mechanical properties of the substrate along the thickness direction (functionally graded), which would affect the wrinkling of the film/substrate system [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. Lee et al. [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF] performed a stability and bifurcation analysis for the surface wrinkling of an elastic half-space under in-plane compression, with Young's modulus varying arbitrarily along the depth, and developed a finite element method to solve this problem. Cao et al. [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF] carried out theoretical analyses and finite element simulations for a hard film wrinkling on an elastically graded substrate subjected to in-plane compression. In particular, they investigated two typical variations of the substrate modulus along the depth direction, expressed by a power function and an exponential function, respectively. Nevertheless, up to now, there has been a shortage of theoretical or numerical investigations on the instability of a stiff layer lying on an elastically graded substrate.
On the other hand, when the substrate is made of a viscous or viscoelastic material, wrinkling patterns may evolve with time owing to its time-dependent mechanical properties [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF]. In such systems, the instability characteristics can be determined by combining the methods of energetics and kinetics [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. A spectral method was developed by Huang and Im [START_REF] Huang | Dynamics of wrinkle growth and coarsening in stressed thin films[END_REF] for numerical simulations of wrinkle growth and coarsening in stressed thin films on a viscoelastic layer, in which a random perturbation of the lateral deflection is imposed to trigger disordered labyrinth patterns. Later, a Fourier transform method was employed by Im and Huang [START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF] for the wrinkling analysis of an anisotropic crystal film on a viscoelastic substrate layer. Even so, investigations of the effects of viscosity on wrinkling pattern evolution remain very limited and deserve much further effort. Notwithstanding the considerable effort devoted to the modeling of morphological wrinkling in film/substrate systems, most previous studies are mainly limited to determining the critical conditions of instability and the corresponding wrinkling patterns near the instability threshold. The post-buckling evolution and mode transitions of surface wrinkles have only recently begun to be pursued [128,129,[START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF][START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. In experiments on thin elastic membranes supported on a much softer elastic solid or on a fluid, Pocivavsek et al. [128] found a transition from periodic surface wrinkling to symmetry-broken folding when the compression exceeds a critical value. When the fluid substrate is replaced by a polydimethylsiloxane (PDMS) foundation, the film/substrate system shows a distinctly different pattern evolution with increasing compression. Brau et al. [START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF] experimentally discovered that further compression above the onset of buckling triggers multiple bifurcations: one wrinkle grows in amplitude at the expense of its neighbors. These bifurcations create a period-doubling, or even period-quadrupling, surface topography under progressive compression.
Li et al. [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF] numerically reproduced these interesting phenomena in the volumetric growth and surface wrinkling of a mucosa and submucosa by using a pseudo-dynamic solution method. Through numerical simulations, Cao and Hutchinson [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF] uncovered advanced post-bifurcation modes, including period-doubling, folding and a newly identified mountain-ridge mode, in the post-buckling of a bilayer system in which an unstretched film is bonded to a prestretched compliant neo-Hookean substrate, with buckling arising as the stretch in the substrate is relaxed. Then, Zang et al. explored this localized mountain-ridge mode in greater depth by finite element simulation and an analytical film/substrate model. Meanwhile, Cao and Hutchinson [START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF] provided further insight into the connection between wrinkling and an alternative surface mode, the finite-amplitude crease or sulcus. Additionally, hierarchical folding of an elastic membrane on a viscoelastic substrate can be observed under continuous biaxial compressive stress: the folds delineate individual domains, and each domain subdivides into smaller ones over multiple generations [START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF]. By experimentally modifying the boundary conditions and geometry, Kim et al. [START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF] demonstrated control over the final network morphology.

Challenges and discussion

Although considerable progress has been made over the last decade on the modeling of morphological wrinkling in film/substrate systems, many significant and interesting problems remain that deserve further investigation. Advances in theoretical and numerical modeling in this field are impeded by a number of mechanical, mathematical and numerical complexities. For instance, the surface instability of stiff layers on soft materials usually involves strong geometrical nonlinearities, large rotations, large displacements, large deformations, loading-path dependence, multiple symmetry-breakings, nonlinear constitutive relations, localizations, and other complexities, which makes the theoretical and numerical analyses quite difficult. The morphological post-buckling evolution and mode-shape transitions beyond the critical load are incredibly complicated, especially in 3D, and conventional post-buckling numerical methods have difficulty predicting and detecting all the bifurcations and the associated wrinkling modes on their complex response curves. Reliable and robust path-following techniques are in strong demand for the post-buckling analysis of film/substrate systems, especially for predicting and tracing surface mode transitions, an aspect rarely explored in the literature. Moreover, in conventional finite element analysis, the post-buckling simulation may suffer from convergence issues if the film is much stiffer than the substrate. In most previous works, the 2D or 3D spatial problem is discretized by a spectral method or an FFT algorithm, which is fairly computationally inexpensive but prescribes periodic boundary conditions and simple geometries.
Furthermore, within the spectral or FFT framework, it is quite difficult to capture the localized behavior that often occurs in soft matter under complex geometries and boundary conditions. It was recognized early by Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] that such systems can also be studied with finite element methods, which are more computationally expensive but more flexible for describing complex geometries and general boundary conditions, and allow the use of commercial computer codes. However, only a few follow-up contributions can be found in the literature, and 3D finite element simulations of film/substrate buckling have been reported in only a few papers [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. Besides, the post-buckling evolution and mode transitions of surface wrinkles in 3D film/substrate systems have only been studied for periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. In particular, Cai et al. [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF] employed an analytical upper-bound method based on a 3D model coupling a nonlinear Föppl-von Kármán plate to a linear elastic foundation. By performing 3D finite element simulations on a specific unit cell with periodic boundary conditions, a new equilateral triangular mode was identified, and the mode transition from a triangular mode to an asymmetric three-lobed mode under increasing overstress was analyzed. Still, there is a shortage of studies concerning the effect of boundary conditions on instability patterns, which is important in practice. Localizations are often caused by stress concentration due to the real boundary and loading conditions, or by symmetry-breakings, and the finite element method is a good way to capture localized behavior such as folding or ridging, whereas the spectral or FFT technique has difficulty doing so. Overall, pattern formation and evolution deserve further numerical investigations, especially by the finite element method, which can provide an overall view of, and insight into, the formation and evolution of wrinkle patterns under any conditions. Can one obtain the variety of wrinkling patterns reported in the literature by using classical finite element models? Can one predict and trace the whole buckling and post-buckling evolution path of these systems? Can one capture the exact post-buckling modes on strongly nonlinear response curves? Under what loading and boundary conditions can each type of pattern be observed, and at what values of the bifurcation loads? What are the effects of boundary conditions and material properties on pattern formation and evolution? What are the critical parameters influencing the wavelengths and amplitudes? These questions will be addressed in this thesis, from 2D to 3D cases, from analytical to numerical methods, and from classical to multi-scale perspectives.
We will start by developing 2D and 3D classical finite element film/substrate models that consider nonlinear geometry for the film and linear elasticity for the substrate, as often employed in the literature [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], taking into account boundary effects and localizations, following the post-buckling evolution path and predicting the bifurcation points; we will then move to the multi-scale standpoint. The main advanced numerical methods needed to solve these problems are briefly introduced in the following sections: first, an advanced nonlinear resolution perturbation technique, namely the Asymptotic Numerical Method (ANM), used as a path-following approach to predict bifurcations; second, a recent Fourier-series-based multi-scale modeling technique for cellular instability pattern formation; and lastly, the well-known Arlequin method for coupling multiple models/domains across different scales or levels.

Asymptotic Numerical Method for nonlinear resolution

As mentioned in the last section, the post-buckling evolution of film/substrate systems beyond the critical load is usually complicated, and conventional numerical resolution approaches have difficulty obtaining a reliably convergent solution and predicting all the bifurcations, as well as the associated instability patterns, along the evolution paths. These challenges call for an effective nonlinear resolution technique that can follow the post-buckling evolution path of film/substrate instability problems. In this thesis, the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] will be incorporated into the 2D and 3D models of film/substrate systems to resolve extremely strong geometrical nonlinearities. It offers considerable advantages in efficiency and reliability as a robust path-following technique, compared with classical iterative algorithms. In some cases involving multiple bifurcations, it would probably be rather difficult to achieve a convergent solution on very strongly nonlinear response curves using conventional numerical methods. The solution of many physical problems amounts to the resolution of nonlinear problems depending on a real parameter λ. The corresponding nonlinear system of equations can be written as

$$R(U, \lambda) = 0, \tag{1.2}$$

where U ∈ ℝⁿ is the unknown vector and R ∈ ℝⁿ is a vector of n equations assumed to be sufficiently smooth with respect to U and λ.
The main idea of the ANM is to compute a solution path (U, λ) of the nonlinear system (1.2) using a step-by-step method, each step corresponding to a truncated Taylor series. The starting point (U_{j+1}, λ_{j+1}) of step (j + 1) is determined from the last computed solution (U_j, λ_j) of the previous step j (see Fig. 1.5). The ANM has proven to be an efficient path-following technique for dealing with various nonlinear problems in both solid and fluid mechanics [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Cadou | ANM for stationary navierstokes equations ans with petrov-galerkin formulation[END_REF],4,[START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF][START_REF] Nezamabadi | A multilevel computational strategy for handling microscopic and macroscopic instabilities[END_REF][START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF][START_REF] Lazarus | Continuation of equilibria and stability of slender elastic rods using an asymptotic numerical method[END_REF][START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF][START_REF] Cong | Simulation of instabilities in thin nanostructures by a perturbation approach[END_REF]. The idea of the ANM is to associate a perturbation technique with an appropriate numerical resolution scheme, such as the finite element method. This transforms a given nonlinear problem into a set of linear problems to be solved successively, leading to a numerical representation of the solution in the form of power series truncated at relatively high orders. Once the series are fully computed, an accurate approximation of the solution path is available inside a determined radius of convergence. Unlike classical incremental-iterative algorithms, this method does not require iterative corrections, thanks to the high-order predictor [START_REF] Lahman | High-order predictor-corrector algorithms[END_REF],5. For efficiency, all governing equations need to be set in quadratic form before applying the series expansion. Details of these procedures can be found in [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF].

Perturbation technique

Starting from a known solution point (U_j, λ_j), the solution path is represented by truncated power series in a path parameter a:

$$U(a) = U_j + \sum_{p=1}^{\infty} a^p U_p = U_j + aU_1 + a^2 U_2 + \dots, \qquad \lambda(a) = \lambda_j + \sum_{p=1}^{\infty} a^p \lambda_p = \lambda_j + a\lambda_1 + a^2\lambda_2 + \dots \tag{1.3}$$

This path satisfies Eq. (1.2), which can be written as

$$0 = R(U_j + aU_1 + a^2U_2 + \dots,\ \lambda_j + a\lambda_1 + a^2\lambda_2 + \dots) = R(U_j, \lambda_j) + \left.\frac{\partial R}{\partial U}\right|_j (aU_1 + a^2U_2 + \dots) + \left.\frac{\partial R}{\partial \lambda}\right|_j (a\lambda_1 + a^2\lambda_2 + \dots) + \frac{1}{2}\left.\frac{\partial^2 R}{\partial U^2}\right|_j (aU_1 + a^2U_2 + \dots)(aU_1 + a^2U_2 + \dots) + \dots \tag{1.4}$$
(1.4) Considering the fact that R(U j , λ j ) = 0, and after rearranging the terms as increasing power of a, the above equation becomes 0 =a { ∂R ∂U ⌋ j U 1 + ∂R ∂λ ⌋ j λ 1 } + a 2 { ∂R ∂U ⌋ j U 2 + ∂R ∂λ ⌋ j λ 2 + 1 2 ∂ 2 R ∂U 2 ⌋ j U 1 U 1 + 1 2 ∂ 2 R ∂λ 2 ⌋ j λ 2 1 + ∂ 2 R ∂U∂λ ⌋ j λ 1 U 1 } + a 3 { ∂R ∂U ⌋ j U 3 + ∂R ∂λ ⌋ j λ 3 + terms depending on U 1 , U 2 , λ 1 , λ 2 } . . . + a p      ∂R ∂U ⌋ j U p + ∂R ∂λ ⌋ j λ p + terms depending on U 1 . . . U p-1 , λ 1 . . . λ p-1 -F nl p      . . . (1.5) Or in a condensed form: R(U(a), λ(a)) = aR 1 + a 2 R 2 + a 3 R 3 + . . . = 0. (1.6) Eq. (1.6) should be verified for each value of a. Therefore, the resolution of the nonlinear system (1.2) leads to the resolution of a recurrent system of linear equations in the following form: R p = 0 for p 1. (1.7) At each order p, the vector of equations R p = 0 is a linear system with respect to U p and λ p : ∂R ∂U ⌋ j U p + ∂R ∂λ ⌋ j λ p = F nl p , (1.8) where the right-hand sides member F nl p only depends on the terms of previous orders. Path parameter Eq. (1.8) represents a system of n linear equations and (n+1) unknowns. Therefore, a complementary condition is needed as also required in predictor-corrector methods. This complementary condition can be found by defining the path parameter a using the quasiarc-length parameter (by projection of the increment on the tangent direction (U 1 , λ 1 )) as follows: a = (U -U j )U 1 + (λ -λ j )λ 1 . (1.9) After substituting Eq. (1.3) into Eq. (1.9), one can obtain the supplementary condition at each order: { ∥U 1 ∥ 2 + λ 2 1 = 1, U p U 1 + λ p λ 1 = 0. (1.10) The calculation of the step j is achieved by the calculation of N order right-hand sides F nl p and the resolution of N order linear problems in Eqs. (1.8) and (1.10). In contrast to the other predictor-corrector methods, only one matrix ∂R ∂U ⌋ j needs to be calculated and inversed, which can save an important amount of calculation time. Continuation approach To achieve an efficient algorithm, the analysis of validity range and the definition of a new starting point should be adaptive, i.e. the value of path parameter a has to be automatically determined by satisfying a given accuracy tolerance. A simple way to define the value of a max follows the remark: polynomial solutions are very similar inside the radius of convergence of the series, but they tend to rapidly separate when this radius is reached. Thus, a simple criterion is to require that the difference of displacements between two successive orders should be smaller than a given precision parameter δ: Validity range: a max = ( δ ∥u 1 ∥ ∥u n ∥ ) 1/(n-1) , ( 1.11) where the notation ∥•∥ stands for the Euclidean norm. The method for determining a max consists in imposing an accuracy parameter δ, and that the relative difference between solutions of two successive orders be small in comparison with δ [START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF]. Note that a max is computed in a posteriori manner based on the available series coefficients. So the step length determination in the ANM framework can be considered as fully adaptive and completely automatic as opposed to classical iterative algorithms. When there is a bifurcation point on the solution path, the radius of convergence is defined by the distance to the bifurcation. 
Thus, the step length becomes smaller and smaller, and the continuation process appears to "knock" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a bifurcation point on the path. All the bifurcations can easily be identified in this way by the user, without any special tool.

Bifurcation indicator

As mentioned in Section 1.1.1, complex systems such as stiff thin layers bound to soft materials often involve strong geometrical nonlinearities and multiple bifurcations, which makes the numerical resolution quite difficult. The detection of bifurcation points is a real challenge. In the literature, test functions are widely used to compute critical points. They are scalar functions vanishing at a singular point. Once a critical point is detected between two states, its accurate position can be determined using a bisection or secant iteration scheme. Two classes of methods can be distinguished: direct methods and indirect methods. In direct methods, the existence condition of critical points is embedded in the system of equations to be solved [START_REF] Weinitshke | On the calculation of limit and bifurcation points in stability problems of elastic shells[END_REF][START_REF] Wriggers | A quadratically convergent procedure for the calculation of stability point in finite element analysis[END_REF][START_REF] Wriggers | A general procedure for the direct computation of turning and bifurcation point[END_REF], and the solution of the system is exactly the critical point. Indirect methods consist in computing a solution branch and evaluating test functions. The most popular test functions are the determinant or the smallest eigenvalue of the tangent stiffness matrix. To distinguish a bifurcation point from a limit point, the current stiffness parameter is used [START_REF] Wagner | A simple method for the calculation of postcritical branches[END_REF]. Once the bifurcation points are obtained, the associated eigenmodes can be captured. The resulting nonlinear problems are usually solved by the Newton-Raphson method with a suitable piloting strategy [START_REF] Batoz | Incremental displacement algorithms for nonlinear problems[END_REF]. Although much progress has been made with the Newton-Raphson method, an efficient and reliable algorithm remains difficult to establish: considerable computing time is spent in the bisection sequence and in corrector iterations because of the very small step lengths close to the bifurcation. In the framework of the ANM, a bifurcation indicator has been proposed to detect bifurcation points [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. It is a scalar function obtained by introducing a fictitious perturbation force in the problem, and it vanishes exactly at the bifurcation point. Indeed, this indicator measures the intensity of the system response to perturbation forces, and it can be computed explicitly along the equilibrium branch through the perturbation technique.
The roots of this function then characterize the singular points easily and exactly, since the function is known in closed form. By evaluating it along an equilibrium branch, all the critical points existing on this branch, and the associated bifurcation modes, can be determined. In this thesis, these generic bifurcation schemes will be explicitly incorporated into 2D and 3D models of film/substrate systems in order to carry out multiple-bifurcation analyses, capturing exact bifurcation points and the corresponding instability modes.

Multi-scale modeling for instability pattern formation

Instability pattern formation is a very common phenomenon in nature [START_REF] Mahadevan | Self-organized origami[END_REF] and in many scientific fields [START_REF] Cross | Pattern formation out of equilibrium[END_REF][START_REF] Aranson | The world of the complex Ginzburg-Landau equation[END_REF][START_REF] Hoyle | Pattern formation, an introduction to methods[END_REF]. In these cases, the spatial shape of the system response looks like a slowly modulated oscillation. Direct calculation of such cellular instabilities in a large sample often requires numerous degrees of freedom, as in membrane wrinkling [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF], Rayleigh-Bénard convection in large boxes [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF][START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], buckling of long structures [START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF][START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Abdelmoula | Influence of distributed and localized imperfections on the buckling of cylindrical shells[END_REF], microbuckling of carbon nanotubes [START_REF] Ru | Axially compressed buckling of a double-walled carbon nanotube embedded in an elastic medium[END_REF][START_REF] He | Buckling analysis of multi-walled carbon nanotubes: a continuum model accounting for van der Waals interaction[END_REF], fiber microbuckling of composites [START_REF] Kyriakides | On the compressive failure of fiber reinforced composites[END_REF][START_REF] Waas | Compressive failure of composites, part II: Experimental studies[END_REF][START_REF] Drapier | A structural approach of plastic microbuckling in long fibre composites: comparison with theoretical and experimental results[END_REF], surface morphological instabilities of stiff thin films attached to compliant substrates [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF], and flatness defects in sheets induced by industrial processes [START_REF] Fischer | Buckling phenomena related to rolling and levelling of sheet metal[END_REF][START_REF] Jacques | Buckling and wrinkling during strip conveying in processing lines[END_REF][2].
Therefore, from the computational point of view, it is preferable to apply reduced-order models, not only to reach the desired accuracy but also to dramatically cut down the computational time and cost. Classically, such cellular instabilities can be modeled by bifurcation analysis according to the famous Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. The Ginzburg-Landau equation follows from an asymptotic double scale analysis: at the local level, one accounts for the periodic nature of the buckles, while the slow variations of the envelopes are described at the macroscopic scale. Nevertheless, the macroscopic evolutions governed by the Ginzburg-Landau equation have some drawbacks. First, this bifurcation equation is valid only close to the critical state. Second, it cannot account for the coupling between global nonlinear behavior and the appearance of patterns, for instance for structures undergoing both local and global buckling. Third, within the Ginzburg-Landau double scale approach, it is not easy to deduce consistent boundary conditions. A new approach based on the concept of Fourier series with slowly varying coefficients has been presented recently to study instabilities with nearly periodic patterns [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. It has successively been applied to the buckling of a long beam lying on a nonlinear elastic foundation [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF], to the interaction of global and local instabilities in sandwich structures [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF], and to membrane wrinkling [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. This multi-scale approach is rooted in the Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]: the envelope equation is derived from an asymptotic double scale analysis, and the nearly periodic fields (reduced model) are represented by Fourier series with slowly varying coefficients. This mathematical representation yields macroscopic models in the form of generalized continua, in which the macroscopic field is defined by the Fourier coefficients of the microscopic field. It has been established that the models obtained in this way are consistent with the Ginzburg-Landau technique, but they can remain valid away from the bifurcation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Besides, the coupling between global and local buckling can be taken into account in a computationally efficient manner [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF].
Moreover, this approach could be very useful to analyze instability problems like Rayleigh-Bénard convection, whose discretization requires a huge number of degrees of freedom. As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], the multi-scale approach based on the concept of Fourier series with slowly varying coefficients will be adopted in this thesis. Let us consider a physical phenomenon described by the field U(x) (x ∈ R). The instability wave number q is supposed to be known. All the unknowns of the model U = {u(x), v(x), n(x), ...} are described as Fourier series, whose coefficients vary more slowly than the harmonics:
\[ U(x) = \sum_{j=-\infty}^{+\infty} U_j(x)\, e^{jiqx}, \qquad (1.12) \]
where the Fourier coefficient U_j(x) denotes the envelope of the j-th harmonic and U_{-j}(x) denotes its conjugate. The macroscopic unknown fields U_j(x) vary slowly over a period [x, x + 2π/q] of the oscillation. It is worth mentioning that at least two functions, U_0(x) and U_1(x), are necessary to describe the nearly periodic patterns, as depicted in Fig. 1.6 (which compares the mean field with the mean field plus amplitude). The zeroth order variable U_0(x) is identified as the mean value and U_1(x) represents the envelope or amplitude of the spatial oscillations. Notice that U_0(x) is real valued and U_1(x) can be expressed as U_1(x) = r(x) e^{iφ(x)}; this expression represents the first harmonic, where r(x) is the amplitude modulation and φ(x) is the phase modulation. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitudes U_j(x). In the present thesis, this Fourier-related multi-scale modeling methodology will first be applied to the buckling of an elastic beam resting on a nonlinear Winkler foundation. Then, a generalized framework for the macroscopic modeling of film/substrate systems will be proposed. Lastly, specific 2D and 3D cases with simplifications and assumptions will be addressed to save computational cost when studying nearly periodic morphological instabilities in film/substrate systems.

Arlequin method for model coupling

In this thesis, the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] will be used to analyze the influence of boundary conditions on the multi-scale modeling of instability pattern formation. More specifically, the coarse model, obtained from a suitable Fourier-related technique [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] presented in the last section, is inaccurate in the boundary region. Hence, near the boundary, the full model is employed, and it is coupled to the coarse model in the remainder of the domain within the Arlequin framework. This multiple-domain coupling strategy offers a flexible way to design multi-scale models so as to balance the desired accuracy against the computational cost. Despite considerable advances in computational techniques and computing power, direct simulation of these cellular instability problems is still not a viable option.
For instance, the discretization of Rayleigh-Bénard convection problems in large boxes [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF] requires a huge number of degrees of freedom [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], which is a challenge for direct computations. Therefore, there is a need for reliable and efficient techniques that account, in a consistent manner, for the scales most relevant to the goal of the simulation, while allowing one to flexibly choose the desired level of accuracy and detail of description. Generally, direct finite element modeling of structures involving local defects such as cracks, holes and inclusions is very cumbersome when local refinement is required. To overcome these difficulties, important innovations and efficient numerical methods have been developed over several decades to improve the flexibility of the finite element method. Let us list a few: the meshless method [START_REF] Belytschko | Element-free Galerkin methods[END_REF], sequential adaptation methods (i.e. h-adaptation, p-adaptation and hp-adaptation), the multigrid (MG) method, the partition of unity finite element method (PUFEM) [START_REF] Babuška | The partition of unity method[END_REF], the generalized finite element method (GFEM) [START_REF] Strouboulis | The design and analysis of the Generalized Finite Element Method[END_REF], and the extended finite element method (XFEM) [START_REF] Belytschko | Elastic crack growth in finite-elements with minimal remeshing[END_REF][START_REF] Moës | A finite element method for crack growth without remeshing[END_REF]. All these approaches are essentially monomodel and may lack either the flexibility or the relevance to address the above issues. Later, hierarchical global-local strategies, including the s-version method by Fish [START_REF] Fish | The s-version of finite element method[END_REF][START_REF] Fish | Adaptive s-method for linear elastostatics[END_REF][START_REF] Fish | Adaptive and hierarchical modelling of fatigue crack propagation[END_REF] and the Arlequin method by Ben Dhia et al. [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF], were introduced to allow the superposition of different mechanical models and different meshes. The s-version method is a multilevel solution scheme in which each level is discretized using a finite element mesh of arbitrary element size and polynomial order. It superimposes additional locally refined meshes on an existing global one, thus allowing different modeling in the superimposed meshes. Like the s-method, the Arlequin method aims at creating a multimodel framework. As opposed to the s-method, the models are not added but overlapped and glued to each other in the Arlequin framework. In addition, since not only displacement fields but also complete mechanical states (e.g. stress and strain) are allowed to coexist in the superposition zone, the Arlequin method has no redundancy problem.
Besides, iterating the superposition process (taking care of the gluing zones) can potentially lead to relevant multi-scale models. Over the last decade, the Arlequin method and the bridging domain method [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF] have been successfully applied to couple heterogeneous models in various cases. One can couple classical continuum and shell models [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], particle and continuum models [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][130][START_REF] Bauman | Adaptive multiscale modeling of polymeric materials with Arlequin coupling and Goals algorithms[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF], heterogeneous meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] or, more generally, heterogeneous discretizations [START_REF] Ben Dhia | On the use of XFEM within the Arlequin framework for the simulation of crack propagation[END_REF][START_REF] Biscani | Variable kinematic plate elements coupled via Arlequin method[END_REF]. By superposing and gluing models, the Arlequin method offers an extended modeling framework for the design of various mechanical models of engineering materials and structures in a rather flexible way. In this thesis, within the Arlequin framework, the transition between a fine and a coarse model will be discussed, which is a generic but hard topic when applying bridging techniques to reduced-order or multi-scale models. In particular, a new bridging technique based on a nonlocal reduction operator defined by Fourier series will be presented and highlighted.

Energy distribution

In the Arlequin framework, the domain of the whole mechanical system is partitioned into two overlapping sub-zones Ω_1 and Ω_2. Let S_g denote the gluing zone, supposed to be a non-zero measured polyhedral subset of S = Ω_1 ∩ Ω_2. The potential energy contributions of the whole system and of the external load can be respectively expressed as
\[ P^{int}_i(u_i) = \frac{1}{2} \int_{\Omega_i} \alpha_i\, \sigma(u_i) : \varepsilon(u_i)\, d\Omega, \qquad (1.13) \]
\[ P^{ext}_i(u_i) = \int_{\Omega_i} \beta_i\, f \cdot u_i\, d\Omega. \qquad (1.14) \]
In order to obtain a consistent model and not count the energy in the overlapping domain twice, the energy associated with each domain is balanced by weight functions, denoted α_i for the internal work and β_i for the external work. These weight functions are assumed to be positive and piecewise continuous in Ω_i, and to satisfy
\[
\begin{cases}
\alpha_1 = \beta_1 = 1, & \text{in } \Omega_1 \setminus S, \\
\alpha_2 = \beta_2 = 1, & \text{in } \Omega_2 \setminus S, \\
\alpha_1 + \alpha_2 = \beta_1 + \beta_2 = 1, & \text{in } S.
\end{cases}
\qquad (1.15)
\]
One can choose constant, linear, cubic, or higher-order polynomial functions for the energy distribution (see Fig. 1.8).
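As an illustration of the partition-of-unity constraint (1.15), a linear weight distribution over a 1D gluing zone can be sketched as follows. This is a minimal sketch: the geometry of the gluing zone and the floor value alpha_min are hypothetical choices for demonstration only.

```python
import numpy as np

def arlequin_weights(x, x_glue_start, x_glue_end, alpha_min=0.0):
    """Linear weight alpha_1(x) for model 1: equal to 1 on model 1's side
    of the gluing zone, decreasing linearly to alpha_min across it;
    alpha_2 = 1 - alpha_1 enforces Eq. (1.15) in the gluing zone."""
    x = np.asarray(x, dtype=float)
    t = np.clip((x - x_glue_start) / (x_glue_end - x_glue_start), 0.0, 1.0)
    a1 = 1.0 - (1.0 - alpha_min) * t
    return a1, 1.0 - a1

x = np.linspace(0.0, 2.0, 9)
a1, a2 = arlequin_weights(x, 1.0, 1.5)
assert np.allclose(a1 + a2, 1.0)   # partition of unity everywhere
```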
More details on the selection of these functions can be found in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF].

Coupling choices

The Arlequin method aims at connecting two spatial approximations of an unknown field, generally a fine approximation U_f and a coarse approximation U_r. The idea is to require that these two approximations be close in a weak, discrete sense, and to introduce Lagrange multipliers in the corresponding differential problems. A major concern in the Arlequin framework is the definition of an appropriate coupling operator. At the continuous level, a bilinear form must be chosen, which can be of L2 type, H1 type or energy type [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. The H1-type coupling operator C can be defined as
\[ C(\lambda, u) = \int_{S_g} \left( \lambda \cdot u + \ell^2\, \varepsilon(\lambda) : \varepsilon(u) \right) d\Omega. \qquad (1.16) \]
When ℓ = 0, it reduces to an L2-type coupling operator. The choice of the length ℓ and the comparison between H1 and L2 couplings have been well discussed in the literature [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Guidault | On the L 2 coupling and the H 1 couplings for an overlapping domain decomposition method using Lagrange multipliers[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF]. The first and most important application of the Arlequin method is the coupling between two different meshes discretizing the same continuous problem: in this case, the mediator problem should be discretized on the coarse mesh to avoid locking phenomena [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and spurious stress peaks [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF]. In a recent paper [START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF], the origin of these so-called "ghost forces" was carefully analyzed, and corrections were proposed through an appropriate choice of weights and especially by introducing interaction forces between the coarse and fine models. We now present a representative example [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] to show explicitly how the choice of mediator space affects the displacement field. This example was previously studied by Ben Dhia and Rateau [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] to illustrate the relationship between the coupling operator and the deterioration of the condition number.
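Before turning to that example, the discrete form of the coupling term (1.16) can be sketched for a 1D scalar field. This is a minimal illustration assuming linear elements on the gluing zone, not the thesis implementation; for ell = 0 it reduces to the L2 coupling.

```python
import numpy as np

def coupling_matrix(nodes_glue, ell):
    """H1-type coupling on a 1D gluing zone, assembled with linear
    elements: C[i, j] = integral of (Ni*Nj + ell^2 * Ni'*Nj') dx,
    a 1D scalar analogue of Eq. (1.16)."""
    n = len(nodes_glue)
    C = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes_glue[e + 1] - nodes_glue[e]
        m = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # L2 (mass) part
        k = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # gradient part
        C[e:e + 2, e:e + 2] += m + ell**2 * k
    return C

C = coupling_matrix(np.linspace(1.0, 1.5, 6), ell=0.1)
print(C.shape)   # (6, 6); ell=0 would give the pure L2 coupling
```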
Let us consider a 1D small strain linear elastic mechanical problem, which consists in evaluating the vertical displacement field u in a vertical column with uniform cross section, clamped at both ends and loaded by its own weight. For convenience, the Young's modulus E, the section S, the density ρ and the gravity factor g are chosen to satisfy ρg = ES. Continuous 1D linear elements are used to approximate the displacement fields as well as the Lagrange multiplier fields. Different refinements of the superimposed models are considered. Precisely, we define the coarse domain as Ω_c = [0, 2] and the fine domain as Ω_f = [1, 2]. The weight functions α_f and β_f are associated with the fine model. The analytical solution of the displacement field is taken as reference. Figs. 1.9 and 1.10 illustrate the influence of the mediator space used to discretize the Lagrange multiplier field on the mechanical solution. It can be seen that, depending on whether the space of gluing forces is chosen based on the gluing zone of the fine or of the coarse finite element model, the fine mechanical state is either tightly locked to the coarse one (see Fig. 1.9) or linked to an average value in a weak sense (see Fig. 1.10). However, the two connected problems are not always posed in the same space, for example when dealing with particle and continuous problems. In this case, a prolongation operator has to be introduced to convert the discrete displacement into a continuous one, and then a connection between continuous fields is performed [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]: this is consistent because the continuous model can be seen as the coarsest one. A similar approach has been applied to the coupling between shell and 3D models: a prolongation operator is introduced (i.e. from the coarse to the fine level) and the integration is done in the 3D domain, but the discretization of the Lagrange multiplier corresponds to a projection on the coarsest problem; in this sense, this shell/3D coupling is also achieved at the coarse level. In the same spirit, for the coupling between a fine model and a macroscopic envelope model discussed in this thesis, the connection should also be done at the coarse level, i.e. between Fourier coefficients. On the contrary, a prolongation operator from the coarse to the fine model had been introduced in the previous paper [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] and the connection had been done at that level. One can therefore wonder whether the imperfect connection observed in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] could be improved by introducing a coupling at the relevant level. This thesis tries to answer this question by studying again the Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF], which is a simple and representative example of quasi-periodic instabilities.
Very probably, the same ideas can be applied to the 2D macroscopic membrane models that were recently introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. Note that the presented new technique can be considered as a nonlocal coupling, since it connects Fourier coefficients involving integrals over a period. A similar nonlocal coupling was introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF] in the case of an atomic-to-continuum coupling, where the atomic model is reduced by averaging over a representative volume.

Chapter conclusion

This chapter has positioned the context and focus of the present thesis with respect to research trends in the modeling of instability pattern formation in film/substrate systems. The challenges in modeling and post-buckling resolution, especially from the numerical standpoint, were discussed as well. We presented the main technical ingredients needed to solve the problems and to develop the subjects of the following chapters: first, an advanced numerical nonlinear resolution technique, namely the Asymptotic Numerical Method (ANM), which is a robust path-following technique; second, a Fourier-related multi-scale modeling technique for cellular instability pattern formation; and third, the well-known Arlequin method for coupling multiple models/domains between different scales or different meshes. A simple benchmark case was presented to demonstrate the effect of different choices of mediator space when using the Arlequin method. In the following chapters, we will show how to model surface wrinkling of film/substrate systems, from 2D to 3D cases and from classical to multi-scale perspectives; how the ANM framework is adapted and incorporated into various finite element models for nonlinear problem resolution and bifurcation analysis; and how to extend and improve the Arlequin framework for instability pattern formation from a multi-scale standpoint.

Chapter 2

Multiple bifurcations in wrinkling analysis of film/substrate systems

Introduction

Wrinkles of a stiff thin layer attached to a soft substrate have been widely observed in nature, and these phenomena have raised considerable interest over the last decade. The underlying mechanism of wrinkling is generally understood as a stress-driven instability, analogous to the Euler buckling of an elastic column under compressive stress. The pioneering work of Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] showed that surface wrinkling of film/substrate systems has a wide range of applications: micro/nano-fabrication of surfaces with ordered patterns and unique wetting properties, buckled single-crystal silicon ribbons, the optics of electronic eye cameras, the measurement of surface mechanical properties of materials, and the design of flexible semiconductor devices and stretchable electronics.
This has led to several theoretical and experimental stability studies devoted to linear perturbation analysis and nonlinear buckling analysis [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF]. In particular, there are many analytical solutions of models linearized around a homogeneous finite deformation, in the case of half-spaces [START_REF] Hayes | Surface waves in deformed elastic materials[END_REF][START_REF] Dowaikh | On surface waves and deformations in a pre-stressed incompressible elastic solid[END_REF][START_REF] Shield | The buckling of an elastic layer bonded to an elastic substrate in plane strain[END_REF] as well as film/substrate systems [START_REF] Steigmann | Plane deformations of elastic solids with intrinsic boundary elasticity[END_REF][START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF]. It has also been recognized that wrinkling can be the precursor of delamination and failure of the film [START_REF] Shield | The buckling of an elastic layer bonded to an elastic substrate in plane strain[END_REF]. However, most previous studies have been mainly confined to determining the critical conditions of instability and the corresponding wrinkling patterns near the instability threshold. The post-buckling evolution and mode transitions of surface wrinkles have only recently been pursued [START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF]. To the best of our knowledge, the effect of boundary conditions on wrinkling has not yet been investigated.
This study aims at applying advanced numerical methods for bifurcation analysis to typical models of film/substrate systems, and focuses on the post-bifurcation evolution involving secondary bifurcations and advanced wrinkling modes such as the period-doubling mode. For this purpose, a finite element (FE) model based on the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is developed for the nonlinear analysis of wrinkle formation. In this model, the film, undergoing moderate deflections, is described by Föppl-von Kármán nonlinear elastic plate theory, while the substrate is considered to be a linear elastic solid. This idea is analogous to the modeling of sandwich structures, in which each layer is described by its own kinematic formulation and displacement continuity is satisfied at each interface [START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Léotoing | First applications of a novel unified model for global and local buckling of sandwich columns[END_REF][START_REF] Hu | A novel finite element for global and local buckling analysis of sandwich beams[END_REF]. Instead of solving the resulting nonlinear equations using classical predictor-corrector algorithms such as the Newton-Raphson procedure, we adopted the ANM, a significantly efficient continuation technique [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF] requiring no corrector iteration. It has been proven to be an efficient path-following technique for various nonlinear problems both in solid mechanics [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][4][START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF][START_REF] Assidi | Regularization and perturbation technique to solve plasticity problems[END_REF][START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF][START_REF] Lazarus | Continuation of equilibria and stability of slender elastic rods using an asymptotic numerical method[END_REF][START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF] and in fluid mechanics [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Jawadi | Asymptotic numerical method for steady flow of power-law fluids[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF]. The underlying principle of the ANM is to build up the nonlinear solution branch in the form of relatively high order truncated power series.
The resulting series are then introduced into the nonlinear problem, which is thereby transformed into a sequence of linear problems that can be solved numerically. In this way, one obtains approximations of the solution path that are very accurate inside the radius of convergence. Moreover, by taking advantage of the local polynomial approximation of the branch within each step, the algorithm is remarkably robust and fully automatic. Furthermore, unlike incremental-iterative methods, the arc-length step size in the ANM is fully adaptive, since it is determined a posteriori by the algorithm. A small radius of convergence and an accumulation of steps around a bifurcation reveal its presence. The detection of bifurcation points is a real challenge. Direct computation of the eigenvalues of the Jacobian matrix is possible, but it costs considerable computing time in bisection sequences. Such methods are expensive and difficult to manage, their only advantage being their availability in libraries like ARPACK [START_REF] Lehoucq | ARPACK User's Guide: Solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods[END_REF]. A more efficient approach is to iteratively solve the nonlinear system characterizing the bifurcation points. This technique was initiated in [140] for ordinary bifurcations and in [START_REF] Jepson | Numerical Hopf bifurcation[END_REF] for Hopf bifurcations, but the convergence of the process depends strongly on the initial guess. There is another class of methods, named "indirect methods", where one computes a "bifurcation indicator" vanishing at singular points. The determinant of the Jacobian matrix is such an indicator, but it is not easy to compute for large scale problems. That is why we prefer another bifurcation indicator, which provides a measure of the tangent stiffness, is easily implemented and applied in the ANM framework, and yields the bifurcation mode. Its reliability has been assessed by many applications in solid mechanics [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF] as well as for stationary and non-stationary bifurcations in fluid flows [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF]. Alternative techniques are also available within the ANM framework, like the method of Padé approximants [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF] or an attractive method of series re-analysis introduced recently in [START_REF] Cochelin | Power series analysis as a major breakthrough to improve the efficiency of Asymptotic Numerical Method in the vicinity of bifurcations[END_REF], but we limited ourselves to the most secure and validated technique.
Note that it is also possible to combine several methods to obtain a very reliable detection technique [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF]. This chapter explores in greater depth the occurrence and post-bifurcation evolution of aperiodic modes beyond the onset of the primary sinusoidal wrinkling mode. The work presented in this chapter, i.e. multiple bifurcations in the wrinkling analysis of thin films on compliant substrates, is, to our knowledge, the first to address the post-bifurcation response of film/substrate systems from a quantitative standpoint; it has been published in the International Journal of Non-Linear Mechanics [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF].

Mechanical model and dimensional analysis

We consider an elastic stiff film of thickness h_f bound to an elastic compliant substrate of thickness h_s, which can vary by orders of magnitude in applications, and whose surface can buckle under in-plane compression (see Fig. 2.1). Upon wrinkling, the film elastically buckles to relax the compressive stress and the substrate concurrently deforms to maintain perfect bonding at the interface. In the following, the elastic potential energy of the system is considered in the framework of Hookean elasticity. The film/substrate system is considered to be two-dimensional. Let x and z be the longitudinal and transverse coordinates. The top surface of the film is traction free. The deformation of the system is described by a deflection w along the z direction and a displacement u along the x direction. In the literature, most authors model the film by Föppl-von Kármán nonlinear elastic plate theory [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF], which implies moderate rotations and small strains. When the film wrinkles, the wavelength is much larger than the film thickness, so that Föppl-von Kármán nonlinear elastic plate theory can adequately model the thin film [START_REF] Landau | Theory of Elasticity[END_REF]. Moreover, the substrate is generally considered to be a linear elastic solid in 2D plane strain. These assumptions make sense in the case of a large stiffness ratio E_f/E_s, E_f and E_s being the Young's moduli of the film and the substrate, respectively. Typically, we will consider a ratio E_f/E_s in the range O(10^4). In this range, critical strains are very small and thus the linear elastic framework is relevant.
Other studies consider much softer films or stiffer substrates, typically with a stiffness ratio E_f/E_s in the range O(10), where the critical strain is relatively large and the small strain framework is no longer appropriate. Therefore, large strain constitutive laws such as neo-Hookean hyperelasticity have to be chosen for E_f/E_s ≈ O(10) [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF]. In this chapter, we limit ourselves to large stiffness ratios E_f/E_s ≈ O(10^4), so that we can choose the most common framework:

1. The film is represented as a geometrically nonlinear beam with the Föppl-von Kármán approximation.

2. The constitutive law of the substrate is 2D linear elasticity.

The same framework, and especially the beam model, was chosen in most of the previously mentioned papers, but obviously a careful application of 2D finite elements would also be possible, as for instance in [START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF], with the drawback of leading to very large scale problems due to the film thinness. Such limitations do not arise when solving incremental systems analytically, and 2D/3D models can then be considered for the film. Nevertheless, beam/plate models are better suited to numerical studies of very thin films. Their consistency with 3D approaches is well established for film/substrate systems; see for instance [START_REF] Ciarlet | A justification of the von Kármán equations[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF].

Figure 2.1: An elastic stiff film resting on a compliant substrate under in-plane compression. The wrinkle wavelength λ_x is much larger than the film thickness h_f. The ratio of the substrate thickness h_s to the wavelength, h_s/λ_x, can vary from a small fraction to a large number.

Based on the above assumptions, the internal potential energy P_int of the film/substrate system over one cellular wavelength is given by the sum of two parts:
\[ P_{int} = \frac{1}{2} \int_0^{\lambda_x} \left( E_f h_f\, \varepsilon_x^2 + \frac{E_f h_f^3 K^2}{12} \right) dx + \frac{1}{2} \int_0^{\lambda_x} \int_0^{\lambda_z} {}^t\varepsilon_s\, L_s\, \varepsilon_s\, dz\, dx, \qquad (2.1) \]
in which
\[
\begin{cases}
\varepsilon_x = \dfrac{du_f}{dx} + \dfrac{1}{2} \left( \dfrac{dw_f}{dx} \right)^2, \\[6pt]
K = \dfrac{d^2 w_f}{dx^2}, \\[6pt]
{}^t\varepsilon_s = \left\{ \dfrac{\partial u_s}{\partial x},\ \dfrac{\partial w_s}{\partial z},\ \dfrac{\partial u_s}{\partial z} + \dfrac{\partial w_s}{\partial x} \right\},
\end{cases}
\qquad (2.2)
\]
where E_f and L_s are the Young's modulus of the film and the elastic matrix of the substrate, respectively. Let u_f and w_f be the longitudinal and transverse displacements of the film, while u_s and w_s denote the longitudinal and transverse displacements of the substrate, respectively. The response of the film can be considered periodic or nearly periodic, but the wavelength λ_x is not given a priori. The surface instability does not necessarily affect the entire substrate, but only an influence zone whose depth is of the order of λ_z. In Appendix A, it is explained why the ratio α = λ_x/λ_z is of the order of unity.
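For illustration, the film part of the energy (2.1) can be evaluated numerically from sampled displacement fields. The following finite-difference sketch is illustrative only: the substrate term is omitted and all input fields and material data are hypothetical.

```python
import numpy as np

def film_energy(x, u, w, Ef, hf):
    """Membrane + bending energy of the film (first integral of Eq. (2.1)),
    evaluated by finite differences and the trapezoidal rule."""
    ux = np.gradient(u, x)
    wx = np.gradient(w, x)
    wxx = np.gradient(wx, x)
    eps = ux + 0.5 * wx**2              # Foppl-von Karman membrane strain
    K = wxx                             # curvature
    density = Ef * hf * eps**2 + Ef * hf**3 * K**2 / 12.0
    return 0.5 * np.trapz(density, x)

# Hypothetical sinusoidal wrinkle over one wavelength lam_x:
lam_x = 1.0
x = np.linspace(0.0, lam_x, 201)
w = 0.01 * np.sin(2 * np.pi * x / lam_x)
u = np.zeros_like(x)
print(film_energy(x, u, w, Ef=1.3e5, hf=1e-3))
```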
To carry out a dimensional analysis, we first introduce dimensionless variables as follows:
\[
\begin{cases}
x = \lambda_x\, \bar{x}, \qquad z = \lambda_z\, \bar{z}, \\[4pt]
u = \dfrac{h_f^2}{\lambda_x}\, \bar{u}, \qquad w = h_f\, \bar{w}, \\[4pt]
L_s = E_s\, \bar{L}_s,
\end{cases}
\qquad (2.3)
\]
where the notation $\bar{\bullet}$ stands for dimensionless variables. By introducing Eq. (2.3) into Eq. (2.2), one obtains
\[
\begin{cases}
\varepsilon_x = \dfrac{h_f^2}{\lambda_x^2}\, \bar{\varepsilon}_x, \qquad K = \dfrac{h_f}{\lambda_x^2}\, \bar{K}, \\[6pt]
{}^t\varepsilon_s = \dfrac{h_f}{\lambda_x} \left\{ 0,\ \dfrac{1}{\alpha} \dfrac{\partial \bar{w}_s}{\partial \bar{z}},\ \dfrac{\partial \bar{w}_s}{\partial \bar{x}} \right\} + \left( \dfrac{h_f}{\lambda_x} \right)^2 \left\{ \dfrac{\partial \bar{u}_s}{\partial \bar{x}},\ 0,\ \dfrac{1}{\alpha} \dfrac{\partial \bar{u}_s}{\partial \bar{z}} \right\}.
\end{cases}
\qquad (2.4)
\]
Since the surface wrinkling wavelength λ_x is much larger than the film thickness h_f, the term (h_f/λ_x)^2 is much smaller than the term h_f/λ_x and can reasonably be neglected. Substituting Eq. (2.4) into Eq. (2.1) then yields
\[ P_{int} = \frac{E_f h_f^5}{2 \lambda_x^3} \int_0^1 \left( \bar{\varepsilon}_x^2 + \frac{\bar{K}^2}{12} \right) d\bar{x} + \frac{E_s h_f^2 \alpha}{2} \int_0^1 \int_0^1 {}^t\bar{\varepsilon}_s\, \bar{L}_s\, \bar{\varepsilon}_s\, d\bar{z}\, d\bar{x}. \qquad (2.5) \]
The critical wavelength is obtained when the two coefficients $E_f h_f^5/(2\lambda_x^3)$ and $E_s h_f^2 \alpha/2$ are of the same order:
\[ \frac{E_f h_f^5 / (2 \lambda_x^3)}{E_s h_f^2 \alpha / 2} = O(1). \qquad (2.6) \]
The critical wavelength λ_x^c then reads
\[ \lambda_x^c = O\!\left[ h_f \left( \frac{E_f}{\alpha E_s} \right)^{1/3} \right], \qquad (2.7) \]
which is consistent with the analytical solution based on linearized stability analysis in [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. Therefore, only the dimensionless stiffness matrix $\bar{L}_s$ appears in the energy (2.5). For an isotropic substrate, the only significant parameter is Poisson's ratio ν_s, and one can suspect that its influence is relatively weak. In other words, for a very large system (L ≫ λ_x, h_s ≫ λ_x) and a given ν_s, the film/substrate system is generic and all such systems are equivalent under the change of variables in Eq. (2.3). In more general situations, some parameters can affect the response of the film/substrate system, especially the wave number (L/λ_x), the relative thickness of the substrate (h_s/λ_x), the anisotropy of the substrate (L_s) and the heterogeneity of the substrate. In what follows, we will discuss the effect of boundary conditions, graded material properties and anisotropy of the substrate in the case of a not too large wave number.

1D reduced model

The film/substrate system will be studied numerically within the previously described framework: a linear elastic substrate and a film modeled with the Föppl-von Kármán approximation. In the literature, this system is generally discretized by Fast Fourier Transform (FFT) [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], which disregards boundary effects. A standard finite element model would be a good candidate for this problem.
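As a quick sanity check of the scaling law (2.7) before setting up the discretization, one can evaluate the estimate numerically. The numbers below are assumptions chosen only to match the stiffness ratio O(10^4) considered in this chapter.

```python
# Order-of-magnitude check of Eq. (2.7): lam_c = hf * (Ef / (alpha*Es))^(1/3).
Ef, Es = 1.3e5, 1.8      # Young's moduli (hypothetical values), Ef/Es ~ 1e4
hf, alpha = 1e-3, 1.0    # film thickness and aspect ratio (assumed)
lam_c = hf * (Ef / (alpha * Es)) ** (1.0 / 3.0)
print(lam_c / hf)        # wavelength/thickness ~ (Ef/Es)^(1/3), about 42 here
```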
For convenience, we will adapt a finite element procedure [START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Léotoing | First applications of a novel unified model for global and local buckling of sandwich columns[END_REF][START_REF] Hu | A novel finite element for global and local buckling analysis of sandwich beams[END_REF] used for sandwich beams. First, the substrate is divided into layers, which leads to a 1D multilayer model. Then this 1D model is discretized by standard 1D elements. The advantages of this approach are that it ensures perfect continuity between film and substrate, and that it preserves the basic symmetries of the continuous system, as well as the bifurcation points arising from these symmetries. The film/substrate system is considered to be two-dimensional, with the geometry shown in Fig. 2.2. Let x and z be the longitudinal and transverse coordinates. The length of the system is denoted by L. The parameters h_f, h_s and h_t are, respectively, the thickness of the film, of the substrate, and the total thickness of the system. Since the analytical kinematics of the substrate is unknown, a finite element methodology is applied: the substrate is first discretized into sublayers along the z direction, and then along the x direction. Accordingly, h_i denotes the thickness of each sublayer of the substrate. Since the film is bound to the substrate, the displacement must be continuous at the interface, and this continuity condition is the constraint used to derive the general governing equations.

Kinematics

The kinematics of the film/substrate system, considering first the discretization along the z direction, is given in Eqs. (2.8)-(2.10).

Film:
\[
\begin{cases}
U^f(x, z) = u^f(x) - \left( z - \dfrac{h_f}{2} - h_s \right) \dfrac{\partial W^f(x, z)}{\partial x}, & h_s \leq z \leq h_t, \\[6pt]
W^f(x, z) = w^f(x).
\end{cases}
\qquad (2.8)
\]

1st sublayer:
\[
\begin{cases}
U^{s1}(x, \eta) = \dfrac{1 - \eta}{2}\, u_0(x) + \dfrac{1 + \eta}{2}\, u_1(x), & -1 \leq \eta \leq 1, \quad (h_s - h_1) \leq z \leq h_s, \\[6pt]
W^{s1}(x, \eta) = \dfrac{1 - \eta}{2}\, w_0(x) + \dfrac{1 + \eta}{2}\, w_1(x).
\end{cases}
\qquad (2.9)
\]

n-th sublayer:
\[
\begin{cases}
U^{sn}(x, \eta) = \dfrac{1 - \eta}{2}\, u_{n-1}(x) + \dfrac{1 + \eta}{2}\, u_n(x), & -1 \leq \eta \leq 1, \quad 0 \leq z \leq h_n, \\[6pt]
W^{sn}(x, \eta) = \dfrac{1 - \eta}{2}\, w_{n-1}(x) + \dfrac{1 + \eta}{2}\, w_n(x).
\end{cases}
\qquad (2.10)
\]

Here, the longitudinal and transverse displacement fields are represented by U and W, respectively. Let u^f and w^f denote the longitudinal and transverse displacement fields of the neutral fiber of the film, while u_n and w_n are the longitudinal and transverse displacement fields at the interfaces between the sublayers of the substrate. The superscripts f and sn stand for the film and the n-th sublayer of the substrate, respectively. The local coordinate along the z direction is described by η. Note that in the above kinematic model, displacement continuity is automatically satisfied at the interfaces between the sublayers.

Finite element formulation

The expression of the internal virtual work of the film/substrate system can be simplified by neglecting the stresses whose energetic contributions are quite low, i.e. σ^f_zz and σ^f_xz.
Consequently, the following constitutive and geometric equations are taken into account:
\[
\begin{cases}
\sigma^f_{xx} = E_f\, \epsilon^f_{xx}, \\
\sigma^{sn}_{xx} = (\lambda_s + 2G_s)\, \epsilon^{sn}_{xx} + \lambda_s\, \epsilon^{sn}_{zz}, \\
\sigma^{sn}_{zz} = (\lambda_s + 2G_s)\, \epsilon^{sn}_{zz} + \lambda_s\, \epsilon^{sn}_{xx}, \\
\sigma^{sn}_{xz} = G_s\, \gamma^{sn}_{xz},
\end{cases}
\qquad (2.11)
\]
\[
\begin{cases}
\epsilon^f_{xx} = U^f_{,x} + \dfrac{1}{2} \left( W^f_{,x} \right)^2, \\[4pt]
\epsilon^{sn}_{xx} = U^{sn}_{,x}, \qquad \epsilon^{sn}_{zz} = W^{sn}_{,z}, \qquad \gamma^{sn}_{xz} = U^{sn}_{,z} + W^{sn}_{,x},
\end{cases}
\qquad (2.12)
\]
where E_f, E_s and ν_s are the Young's modulus of the film, the Young's modulus of the substrate and the Poisson's ratio of the substrate, respectively. Lamé's first parameter is λ_s = E_s ν_s / [(1 + ν_s)(1 − 2ν_s)], while G_s = E_s / [2(1 + ν_s)] is the shear modulus of the substrate. The notation (•)_{,x} stands for the partial derivative ∂/∂x. The principle of virtual work reads
\[ P_{int}(\delta u) + P_{ext}(\delta u) = 0, \quad \forall\, \delta u \in \text{K.A.}, \qquad (2.13) \]
where δu is the virtual displacement and K.A. represents the space of kinematically admissible displacements, while P_int(δu) and P_ext(δu) are the internal and external virtual work, respectively. The internal virtual work of the system is given by
\[ P_{int}(\delta u) = - \int_{\Omega^f} \sigma^f_{xx}\, \delta\epsilon^f_{xx}\, d\Omega - \sum_{sn} \int_{\Omega^{sn}} \left( \sigma^{sn}_{xx}\, \delta\epsilon^{sn}_{xx} + \sigma^{sn}_{zz}\, \delta\epsilon^{sn}_{zz} + \sigma^{sn}_{xz}\, \delta\gamma^{sn}_{xz} \right) d\Omega, \qquad (2.14) \]
where Ω^f and Ω^{sn} stand for the domains of the film and of the n-th sublayer of the substrate, respectively. Considering a load proportional to a scalar parameter λ, the external virtual work is defined as
\[ P_{ext}(\delta u) = \lambda \int_{\Omega} F\, \delta u\, d\Omega, \qquad (2.15) \]
where F denotes the external load. By substituting Eqs. (2.8)-(2.12) into Eq. (2.13), the film/substrate model can be developed as follows.

Internal virtual work of the substrate

First, let us define the unknown variables in each sublayer:
\[ \langle q^{sn}_1 \rangle = \langle u_{n-1}\ \ w_{n-1}\ \ u_n\ \ w_n \rangle, \qquad (2.16) \]
\[ \langle q^{sn}_2 \rangle = \langle u_{n-1,x}\ \ w_{n-1,x}\ \ u_{n,x}\ \ w_{n,x} \rangle. \qquad (2.17) \]
According to the kinematics (2.10), the displacement field reads
\[ \begin{Bmatrix} U^{sn} \\ W^{sn} \end{Bmatrix} = [N_z]\, \{q^{sn}_1\}, \qquad (2.18) \]
where
\[ [N_z] = \begin{bmatrix} \dfrac{1-\eta}{2} & 0 & \dfrac{1+\eta}{2} & 0 \\[6pt] 0 & \dfrac{1-\eta}{2} & 0 & \dfrac{1+\eta}{2} \end{bmatrix}. \qquad (2.19) \]
The strain vector {ε^{sn}} and stress vector {s^{sn}} can be respectively expressed as
\[ \{\varepsilon^{sn}\} = \begin{Bmatrix} \epsilon^{sn}_{xx} \\ \epsilon^{sn}_{zz} \\ \gamma^{sn}_{xz} \end{Bmatrix} = [B_1]\, \{q^{sn}_1\} + [B_2]\, \{q^{sn}_2\}, \qquad (2.20) \]
\[ \{s^{sn}\} = [C^{sn}]\, \{\varepsilon^{sn}\}, \qquad (2.21) \]
in which
\[ [B_1] = \begin{bmatrix} 0 & 0 & 0 & 0 \\[4pt] 0 & -\dfrac{1}{h_n} & 0 & \dfrac{1}{h_n} \\[6pt] -\dfrac{1}{h_n} & 0 & \dfrac{1}{h_n} & 0 \end{bmatrix}, \qquad (2.22) \]
\[ [B_2] = \begin{bmatrix} \dfrac{1-\eta}{2} & 0 & \dfrac{1+\eta}{2} & 0 \\[4pt] 0 & 0 & 0 & 0 \\[4pt] 0 & \dfrac{1-\eta}{2} & 0 & \dfrac{1+\eta}{2} \end{bmatrix}, \qquad (2.23) \]
\[ [C^{sn}] = \begin{bmatrix} \lambda_s + 2G_s & \lambda_s & 0 \\ \lambda_s & \lambda_s + 2G_s & 0 \\ 0 & 0 & G_s \end{bmatrix}. \qquad (2.24) \]
The internal virtual work of the substrate can be represented as the sum over all the sublayers:
\[ P^s_{int}(\delta u) = - \int_0^L \int_0^{h_s} \langle \delta\varepsilon_s \rangle \{s_s\}\, dz\, dx = - \int_0^L \left[ \sum_{sn} \langle \delta q^{sn}_1 \rangle \underbrace{\int_0^{h_n} {}^T[B_1]\, \{s^{sn}\}\, dz}_{\Phi} + \sum_{sn} \langle \delta q^{sn}_2 \rangle \underbrace{\int_0^{h_n} {}^T[B_2]\, \{s^{sn}\}\, dz}_{\Psi} \right] dx. \qquad (2.25) \]
Considering Eqs. (2.20) and (2.21), one obtains
\[ \Phi = \sum_{sn} \left( \int_0^{h_n} {}^T[B_1][C^{sn}][B_1]\, dz\, \{q^{sn}_1\} + \int_0^{h_n} {}^T[B_1][C^{sn}][B_2]\, dz\, \{q^{sn}_2\} \right), \qquad (2.26) \]
\[ \Psi = \sum_{sn} \left( \int_0^{h_n} {}^T[B_2][C^{sn}][B_1]\, dz\, \{q^{sn}_1\} + \int_0^{h_n} {}^T[B_2][C^{sn}][B_2]\, dz\, \{q^{sn}_2\} \right). \qquad (2.27) \]
The above two equations can also be combined in the following form:
\[ \begin{Bmatrix} \Phi \\ \Psi \end{Bmatrix} = [C_s] \begin{Bmatrix} q^{sn}_1 \\ q^{sn}_2 \end{Bmatrix}. \qquad (2.28) \]
We now consider the discretization of the substrate along the x direction. The unknown vectors can be written as
\[ \{q^{sn}_1\} = [N_s]\, \{v_s\}, \qquad (2.29) \]
\[ \{q^{sn}_2\} = [N_{s,x}]\, \{v_s\}, \qquad (2.30) \]
where {v_s} is the elementary unknown vector of the substrate and [N_s] is the shape function matrix. Note that the longitudinal displacement u is discretized by linear Lagrange functions, while the transverse displacement w is discretized by Hermite functions (a small sketch of these shape functions is given below).
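As a reference, these 1D shape functions can be written down explicitly. The sketch below, on the parent element [−1, 1], is a standard textbook form and only an assumption about the implementation details.

```python
import numpy as np

def lagrange_linear(xi):
    """Linear Lagrange functions on [-1, 1], used here for u."""
    return np.array([(1 - xi) / 2, (1 + xi) / 2])

def hermite_cubic(xi, le):
    """Cubic Hermite functions on [-1, 1] for w and its slope w_,x,
    ensuring C1 continuity of the deflection; le is the element length."""
    N1 = 0.25 * (1 - xi) ** 2 * (2 + xi)        # w at node 1
    N2 = le / 8.0 * (1 - xi) ** 2 * (1 + xi)    # w_,x at node 1
    N3 = 0.25 * (1 + xi) ** 2 * (2 - xi)        # w at node 2
    N4 = -le / 8.0 * (1 + xi) ** 2 * (1 - xi)   # w_,x at node 2
    return np.array([N1, N2, N3, N4])

print(lagrange_linear(-1.0))        # [1, 0]: interpolation property
print(hermite_cubic(-1.0, le=1.0))  # [1, 0, 0, 0]
```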
Consequently, the internal virtual work of the substrate can be written as
\[
P^s_{int}(\delta u) = -\sum_e \langle \delta v_s \rangle \int_0^{l_e} \big( {}^{T}[N_s]\,\Phi + {}^{T}[N_{s,x}]\,\Psi \big)\, dx
= -\sum_e \langle \delta v_s \rangle \int_0^{l_e} \left( \big[\, {}^{T}N_s,\ {}^{T}N_{s,x} \big]\, [C_s] \begin{bmatrix} N_s \\ N_{s,x} \end{bmatrix} \right) dx\; \{v_s\},
\tag{2.31}
\]
where l_e is the length of a 1D element.

Internal virtual work of the film

As for the thin film, the strain energy is mainly generated by the normal strain ε^f_xx, the other two terms ε^f_zz and γ^f_xz being neglected. Consequently, the internal virtual work of the film is expressed as
\[
P^f_{int}(\delta u) = -\int_{\Omega^f} \sigma^f_{xx}\,\delta\epsilon^f_{xx}\, d\Omega
= -\int_0^L \Big[ N^f \big( \delta u^f_{,x} + w^f_{,x}\,\delta w^f_{,x} \big) + M^f\,\delta w^f_{,xx} \Big] dx,
\tag{2.32}
\]
where
\[
N^f = E_f h_f \left( u^f_{,x} + \tfrac{1}{2}\big(w^f_{,x}\big)^2 \right), \qquad
M^f = \tfrac{1}{12} E_f h_f^3\, w^f_{,xx}.
\tag{2.33}
\]
The generalized strain vector {ε^f} of the film is defined as
\[
\{\varepsilon^f\} = \left( [H] + \tfrac{1}{2}[A(q^f)] \right) \{q^f\},
\tag{2.34}
\]
in which
\[
[H] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
[A(q^f)] = \begin{bmatrix} 0 & w^f_{,x} & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad
\{q^f\} = \begin{Bmatrix} u^f_{,x} \\ w^f_{,x} \\ w^f_{,xx} \end{Bmatrix}.
\tag{2.35}
\]
Since [A(q^f)] and {q^f} are linear functions of u^f and w^f, the internal virtual work of the film in Eq. (2.32) is a quadratic form with respect to the displacement and the stress:
\[
P^f_{int}(\delta u) = -\int_0^L \langle \delta q^f \rangle \big( {}^{T}[H] + {}^{T}[A(q^f)] \big) \{s^f\}\, dx,
\tag{2.36}
\]
where the generalized stress of the film reads
\[
\{s^f\} = \begin{Bmatrix} N^f \\ M^f \end{Bmatrix} = [D] \left( [H] + \tfrac{1}{2}[A(q^f)] \right) \{q^f\},
\tag{2.37}
\]
in which
\[
[D] = \begin{bmatrix} E_f h_f & 0 \\ 0 & \tfrac{1}{12} E_f h_f^3 \end{bmatrix}.
\tag{2.38}
\]
Note that the discretization of the unknown variables {q^f} uses the same shape functions as for the substrate, i.e. Lagrange functions for the longitudinal displacement and Hermite functions for the transverse displacement.

Connection between the film and the substrate

In the kinematics (2.8)-(2.10), there are 2(n + 2) unknown functions, depending on the number of sublayers in the substrate. As the film is bonded to the substrate, the displacement should be continuous at the interface. Therefore, the connection between the film and the substrate reads
\[
U^f(x, h_s) = U^{s1}(x, -1), \qquad W^f(x, h_s) = W^{s1}(x, -1).
\tag{2.39}
\]
From Eqs. (2.8)-(2.9) and (2.39), the following relations are obtained:
\[
u^f = u_0 - \frac{h_f}{2}\, w_{0,x}, \qquad w^f = w_0.
\tag{2.40}
\]
Consequently, the above conditions reduce the unknowns to the 2(n + 1) independent functions {u_0, w_0, u_1, w_1, . . . , u_n, w_n}.

Resolution technique and bifurcation analysis

The Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is used to solve the resulting nonlinear equations. The ANM is a path-following technique based on a succession of high order power series expansions (perturbation technique) with respect to a well chosen path parameter; it appears as an efficient continuation predictor without any corrector iteration. Besides, one gets approximations of the solution path that are very accurate inside the radius of convergence. In this chapter, the main interest of the ANM is its ability to detect bifurcation points. First, small steps are often associated with the occurrence of a bifurcation. Then, a bifurcation indicator will be defined, which permits the exact detection of the bifurcation load and of the corresponding nonlinear mode.
Path-following technique

Let us write a generalized nonlinear problem as
\[
R(U, \lambda) = L(U) + Q(U, U) - \lambda F = 0,
\tag{2.41}
\]
where U is a mixed vector of unknowns including displacement and stress, L(•) a linear operator, Q(•, •) a quadratic one, F the external load vector and R the residual vector. The external load parameter is denoted by the scalar λ. The principle of the ANM continuation consists in describing the solution path by computing a succession of truncated power series expansions. From a known solution point (U_0, λ_0), the solution (U, λ) is expanded into truncated power series of a perturbation parameter a:
\[
U(a) = U_0 + \sum_{p=1}^{n} a^p U_p = U_0 + aU_1 + a^2 U_2 + \dots + a^n U_n,
\tag{2.42}
\]
\[
\lambda(a) = \lambda_0 + \sum_{p=1}^{n} a^p \lambda_p = \lambda_0 + a\lambda_1 + a^2 \lambda_2 + \dots + a^n \lambda_n,
\tag{2.43}
\]
\[
a = \langle u - u_0, u_1 \rangle + (\lambda - \lambda_0)\,\lambda_1,
\tag{2.44}
\]
where n is the truncation order of the series. Eq. (2.44) defines the path parameter a, which can be identified with an arc-length parameter. By introducing Eqs. (2.42) and (2.43) into Eqs. (2.41) and (2.44), then equating the terms at the same power of a, one obtains a set of linear problems:

Order 1:
\[
L^0_t(U_1) = \lambda_1 F,
\tag{2.45}
\]
\[
\langle u_1, u_1 \rangle + \lambda_1^2 = 1.
\tag{2.46}
\]
Order p ≥ 2:
\[
L^0_t(U_p) = \lambda_p F - \sum_{r=1}^{p-1} Q(U_r, U_{p-r}),
\tag{2.47}
\]
\[
\langle u_p, u_1 \rangle + \lambda_p \lambda_1 = 0,
\tag{2.48}
\]
where L^0_t(•) = L(•) + 2Q(U_0, •) is the linear tangent operator. Note that this operator depends only on the initial solution and takes the same value at every order p, which leads to only one matrix inversion per step. These linear problems are solved by the FE method. Once the values of U_p are calculated, the path solution at step (j + 1) is obtained through Eq. (2.42). The maximum value of the path parameter a is automatically defined by analyzing the convergence of the power series at each step. The value a_max can be based on the difference of the displacements at two successive orders, which must be smaller than a given precision parameter δ:
\[
\text{Validity range:} \qquad a_{max} = \left( \delta\, \frac{\lVert u_1 \rVert}{\lVert u_n \rVert} \right)^{1/(n-1)},
\tag{2.49}
\]
where the notation ∥•∥ stands for the Euclidean norm. Unlike incremental-iterative methods, the arc-length step size a_max is adaptive, since it is determined a posteriori by the algorithm. When there is a bifurcation point on the solution path, the radius of convergence is defined by the distance to the bifurcation. Thus, the step length defined in Eq. (2.49) becomes smaller and smaller, which looks as if the continuation process "knocks" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a singularity on the path. All the bifurcations can be easily identified in this way by the user, without any special tool. It is worth mentioning that only two parameters control the algorithm. The first one is the truncation order n of the series. It has been shown that the optimal truncation order should be relatively large, typically between 15 and 20, but larger values (e.g. n = 50) lead to good results for large scale problems as well [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF]. The other important parameter is the chosen tolerance δ, which affects the residual. For instance, very small values of the tolerance (e.g. δ = 10^{-6}) ensure a high accuracy and a very robust path-following process.
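To make the recurrence concrete, here is a minimal, self-contained sketch of one ANM step on a scalar toy problem with the same structure L(U) + Q(U, U) - λF = 0 (this is our own illustrative example, not the film/substrate model):

```python
import numpy as np

# Toy problem: R(u, lam) = u + u**2 - lam = 0, expanded as in
# Eqs. (2.42)-(2.49). Here L(u) = u, Q(u, v) = u*v and F = 1.
n, delta = 15, 1e-6                  # truncation order, tolerance
u0, lam0 = 0.0, 0.0                  # known starting point
Lt = 1.0 + 2.0*u0                    # scalar tangent operator L_t^0

u, lam = np.zeros(n + 1), np.zeros(n + 1)
lam[1] = 1.0/np.sqrt(1.0 + 1.0/Lt**2)     # order 1 with u1^2 + lam1^2 = 1
u[1] = lam[1]/Lt
for p in range(2, n + 1):                 # orders p >= 2, Eqs. (2.47)-(2.48)
    Fq = -sum(u[r]*u[p - r] for r in range(1, p))   # quadratic right-hand side
    lam[p] = -Fq*u[1]/(u[1] + lam[1]*Lt)            # orthogonality condition
    u[p] = (lam[p] + Fq)/Lt

a_max = (delta*abs(u[1])/abs(u[n]))**(1.0/(n - 1))  # Eq. (2.49)
u_new = u0 + sum(u[p]*a_max**p for p in range(1, n + 1))
lam_new = lam0 + sum(lam[p]*a_max**p for p in range(1, n + 1))
print(u_new + u_new**2 - lam_new)    # residual stays near the prescribed tolerance
```

The same recurrence applies verbatim to the discretized film/substrate problem, with the scalar Lt replaced by the tangent stiffness matrix, factorized once per step.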
In this chapter, the ANM has been chosen for its ability to detect bifurcation points and to switch branches, not necessarily to reduce the computation time. Nevertheless, let us underline that this computation time may remain moderate even with many terms in the series. For instance, in [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], very large scale problems have been solved and 50 terms of the Taylor series have been computed for a cost smaller than that of two linear problems. Complementary results can be found in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. The implementation of recurrence formulae such as (2.47) is relatively simple for Föppl-von Kármán nonlinear plates or the Navier-Stokes equations, but it can be tedious in a more general constitutive framework. That is why tools have been proposed to compute high order derivatives of constitutive laws [START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF], based on techniques of Automatic Differentiation [START_REF] Griewank | Evaluating derivatives: Principles and techniques of algorithmic differentiation[END_REF]. Very efficient software is available for small systems [START_REF] Karkar | Manlab: an interactive path-following and bifurcation analysis software[END_REF]. In this chapter, the simplest ANM algorithm is considered. A variant with possible correction phases [START_REF] Lahman | High-order predictor-corrector algorithms[END_REF] should lead to a somewhat smaller computation time. Here, the priority is the reliability of the path-following, and it is simpler to choose very small values of the accuracy parameter δ in (2.49) to ensure a secure method for bifurcation analysis.

The presented algorithm belongs to the family of continuation methods, and not to the local bifurcation analyses generally associated with the names of [START_REF] Liapounoff | Problème général de la stabilité du mouvement[END_REF][START_REF] Schmidt | Uber die auflösung der nichtlinearen integralgleichungen und die verzweigung ihrer lösungen[END_REF][START_REF] Koiter | On the stability of elastic equilibrium[END_REF]. It is certainly possible to compute numerically a branch issued from a bifurcation point in the form of high order Taylor series, while the historical papers used only a few terms computed analytically. From such numerical results, see for instance [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF], it appears that the radius of convergence can be large, but finite, so that the standard ANM algorithm remains necessary for the subsequent steps. Note that a recent study [START_REF] Liang | A Koiter-Newton approach for nonlinear structural analysis[END_REF] attempts to deduce a computational method from Koiter's ideas.

Detection of bifurcation points

There are many methods to detect bifurcation points; the most important ones are briefly mentioned in the introduction. None of these methods is perfect, the main difficulty being their reliability.
That is why we have chosen the bifurcation indicator method, which has proven to be very secure in many cases of solid and fluid mechanics, see for instance [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. At each step, one defines a scalar parameter that vanishes only when the tangent stiffness matrix is singular. This indicator is a scalar measure of the tangent stiffness, though it is not an eigenvalue except at the singular point.

Definition of a bifurcation indicator

Let ∆µf be a fictitious perturbation force applied to the structure at a given deformed state (U, λ), where ∆µ is the intensity of the force f and ∆U is the associated response. By superposing the applied load and the perturbation, the fictitious perturbed equilibrium can be described by
\[
L(U + \Delta U) + Q(U + \Delta U, U + \Delta U) = \lambda F + \Delta\mu\, f.
\tag{2.50}
\]
Considering the equilibrium state and neglecting the quadratic terms, one obtains the following auxiliary problem:
\[
L_t(\Delta U) = \Delta\mu\, f,
\tag{2.51}
\]
where L_t(•) = L(•) + 2Q(U, •) is the tangent operator at the equilibrium point (U, λ). If ∆µ is imposed, the displacement tends to infinity in the vicinity of the critical points. To avoid this problem, the following displacement based condition is imposed:
\[
\big\langle L^0_t(\Delta U - \Delta U_0),\, \Delta U_0 \big\rangle = 0,
\tag{2.52}
\]
where L^0_t(•) is the tangent operator at the starting point (U_0, λ_0) and the direction ∆U_0 is the solution of L^0_t(∆U_0) = f. Consequently, ∆µ is deduced from the linear system (2.51) and (2.52):
\[
\Delta\mu = \frac{\langle \Delta U_0, f \rangle}{\big\langle L_t^{-1}(f), f \big\rangle}.
\tag{2.53}
\]
Since the scalar function ∆µ represents a measure of the stiffness of the structure and becomes zero at the singular points, it can serve as a bifurcation indicator. It can be computed directly from Eq. (2.53), but this would require decomposing the tangent operator at every point along the solution path. For this reason, the system (2.51)-(2.52) is more efficiently solved by the ANM:

Goal: find a_b such that L_t(U(a_b)) ∆U = 0.
Method: solve L_t(U(a)) ∆U = ∆µ f together with ⟨L^0_t(∆U - ∆U_0), ∆U_0⟩ = 0.
Output: when ∆µ(a_b) = 0, ∆U(a_b) is the bifurcation mode.

Thus, the bifurcation indicator ∆µ(a) is easily computed from the auxiliary problems (2.51) and (2.52). The function ∆µ(a) depends on the fictitious perturbation force f, but the numerical solutions of the initial problem (2.41), the bifurcation points and the bifurcation modes are quite independent of it. Exceptionally, the bifurcation indicator could miss a bifurcation if the fictitious force vector f were orthogonal to the instability mode; the choice of a random perturbation force avoids this problem. In what follows, each field ∆U(a_b) is called an instability mode.

Computation of the bifurcation indicator by the ANM

The perturbation (∆U, ∆µ) is sought in the form of the following asymptotic series expansions:
\[
\Delta U(a) = \Delta U_0 + \sum_{p=1}^{n} a^p \Delta U_p = \Delta U_0 + a\,\Delta U_1 + a^2 \Delta U_2 + \dots
\]
\[
\dots + a^n \Delta U_n,
\tag{2.54}
\]
\[
\Delta\mu(a) = \Delta\mu_0 + \sum_{p=1}^{n} a^p \Delta\mu_p.
\tag{2.55}
\]
Order 0:
\[
L^0_t(\Delta U_0) = \Delta\mu_0\, f,
\tag{2.56}
\]
where the condition ∆µ_0 = 1 is prescribed a priori at each step.

Order p ≥ 1:
\[
L^0_t(\Delta U_p) = \Delta\mu_p\, f - 2\sum_{j=1}^{p} Q(U_j, \Delta U_{p-j}),
\tag{2.57}
\]
\[
\langle \Delta U_p, f \rangle = 0,
\tag{2.58}
\]
where the vectors U_j are determined during the computation of the equilibrium branch and L^0_t is exactly the same tangent operator as for the equilibrium branch. Even though this procedure requires computing the order p series at each step, the corresponding computation is fast, since the same tangent stiffness matrix is used at every order. The discretization of the problem at order p in Eqs. (2.57) and (2.58) gives
\[
[K^0_t]\,\{\Delta u_p\} = \Delta\mu_p\,\{f\} + \{\Delta F_p\},
\tag{2.59}
\]
\[
{}^{T}\{\Delta u_p\}\,[K^0_t]\,\{\Delta u_0\} = 0,
\tag{2.60}
\]
where [K^0_t] denotes the tangent stiffness matrix computed at the starting point (U_0, λ_0). The vectors {∆u_p} and {∆u_0} respectively represent the nodal displacements at order p and at order 0 associated with the fictitious perturbation {f}. The vector {∆F_p} depends only on the solutions at the previous (p - 1) orders. From Eqs. (2.59) and (2.60), one obtains
\[
\Delta\mu_p = -\frac{\langle \Delta F_p, \Delta u_0 \rangle}{\langle f, \Delta u_0 \rangle}.
\tag{2.61}
\]
Once ∆µ_p is computed, substituting it into Eq. (2.59) yields ∆u_p. In this way, all the asymptotic expansion terms of the bifurcation indicator can be determined.

Results and discussion

The path-following technique described in Section 2.4.1 will be used in all cases, near bifurcation points or away from them. Specific procedures to compute branches emanating from a bifurcation are available [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Guevel | Automatic detection and branch switching methods for steady bifurcation in fluid mechanics[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF], but for simplicity we limit ourselves to the basic continuation algorithm of Section 2.4.1. That is why we introduce a small perturbation force to trigger a continuous transition from the fundamental branch to the bifurcated one: a transverse perturbation f_z = 10^{-6} is imposed in the middle of the film. The introduction of such small perturbation forces is quite a common technique in the solution of bifurcation problems by continuation methods [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF], even when using commercial finite element codes. This artifice could be avoided by applying a specific procedure to compute the bifurcation branch, as in [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. In this chapter, the perturbation force f_z allows us to compute the whole bifurcated branch with a single continuation algorithm. Note that these forces differ from the fictitious perturbation force of Section 2.4.2, which acts only on the bifurcation indicator. To check whether the global tangent stiffness matrix K_t is positive definite, a Crout decomposition is applied at each step during the nonlinear resolution to evaluate the stability of the solution. The cost of this stability test is negligible, since the Crout decomposition also has to be performed to solve the problems (2.45) and (2.47).
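A compact sketch of the indicator computation and of the stability test, for a generic discretized system, could look as follows. This is our own illustration: the callable dFp building {∆F_p} from the equilibrium series is assumed to be available, and the positivity test is written here with an LDL^T factorization as a stand-in for the Crout routine.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, ldl

def indicator_series(Kt0, f, dFp, n):
    """Series of the bifurcation indicator, Eqs. (2.56)-(2.61)."""
    lu = lu_factor(Kt0)                    # one factorization per step
    du0 = lu_solve(lu, f)                  # order 0, with dmu_0 = 1
    dmu, du = [1.0], [du0]
    for p in range(1, n + 1):
        Fp = dFp(p, du)                    # {∆F_p}, from previous orders
        dmu_p = -np.dot(Fp, du0)/np.dot(f, du0)          # Eq. (2.61)
        du.append(lu_solve(lu, dmu_p*f + Fp))            # Eq. (2.59)
        dmu.append(dmu_p)
    return dmu, du        # evaluate ∆µ(a) = sum dmu[p]*a**p along the step

def is_stable(Kt):
    """Stability test: all pivots of the LDL^T factorization positive."""
    _, d, _ = ldl(Kt)     # for an SPD matrix, d is diagonal and positive
    return bool(np.all(np.diag(d) > 0))
```

In practice f is drawn at random, as recommended above, so that it is almost surely not orthogonal to the instability mode.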
The sketch of the film/substrate system under compression forces is illustrated in Fig. 2.3. On the bottom surface of the substrate, the vertical displacement w_n and the tangential traction are taken to be zero. On the left and right sides, two cases will be considered: simply supported and clamped boundary conditions. Precisely, the transverse displacements w_i are locked to zero in the case of simply supported boundary conditions, and their derivatives w_{i,x} vanish as well in the case of clamped boundary conditions. The material and geometric parameters of the film/substrate system are similar to those in [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] and are given in Table 2.1. The huge ratio of Young's moduli, E_f/E_s, determines the critical wavelength λ^c_x, which remains practically unchanged as the amplitude of the wrinkles increases [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. Poisson's ratio is a dimensionless measure of the degree of compressibility. Compliant materials in the substrate, such as elastomers, are nearly incompressible, with ν_s = 0.48. A relatively thin film has been chosen so that the response of an isotropic and homogeneous system is not parameter dependent, as established in Section 2.2.

In what follows, we investigate the effect of the boundary conditions and of the material properties of the substrate on the instability patterns throughout the whole buckling and post-buckling evolution. The dimensional analysis in Section 2.2 demonstrated that the 2D response of film/substrate systems is not parameter dependent within the chosen framework. More precisely, for isotropic materials, a thick and soft substrate (sufficiently large h_s/λ_x and E_f/E_s) and a response with many waves (large L/λ_x), the only influencing parameter is Poisson's ratio of the substrate. Therefore, all the responses of such systems should be similar. Nevertheless, the literature reports a variety of nonlinear responses of these systems, including period-doubling, localized creasing, folding and ridging [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF]. Some of these responses cannot be obtained here, since this study is limited to 2D systems with small strains in the substrate. In this part, we discuss the influence of the substrate anisotropy and two cases of elastically graded materials [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF]. Besides, two cases of boundary conditions for the film/substrate system are studied: simply supported and clamped, respectively. To the best of our knowledge, previous studies in the literature did not consider the effect of boundary conditions and of the anisotropy of the substrate. Thus, we detail most of the possible bifurcation sequences in the basic case of 2D systems with a very soft substrate. Very fine meshes, with at least ten elements within one wavelength, are adopted to discretize the film/substrate system along the x direction.
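For concreteness, the two sets of end conditions can be expressed as lists of fixed degrees of freedom. The sketch below is our own illustration with a hypothetical DOF labeling, not the thesis code; it only records which transverse quantities are blocked at each sublayer interface line i at an end of the domain.

```python
def fixed_dofs(n_interfaces, bc="simply_supported"):
    """Fixed DOF labels (interface index, component) at one end node."""
    fixed = [(i, "w") for i in range(n_interfaces)]          # w_i = 0
    if bc == "clamped":
        fixed += [(i, "w,x") for i in range(n_interfaces)]   # w_i,x = 0
    return fixed

# With Hermite interpolation the slope w_i,x is an explicit nodal DOF,
# so clamping is imposed by simply eliminating it from the system.
print(len(fixed_dofs(16, "clamped")))   # 32 constraints per end here
```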
As for the z direction, the convergence is examined through tests with different numbers of sublayers. Fig. 2.4 shows the applied load versus the transverse displacement with respectively 5, 10, 15, 20 or 25 sublayers in the substrate. It can be observed that fifteen sublayers are sufficient to obtain a convergent solution, which agrees well with the critical load of the full 2D model (CPE8R elements in [START_REF][END_REF]). Besides, the critical load of sinusoidal wrinkles based on a classical linearized stability analysis was presented in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF], with the Föppl-von Kármán nonlinear elastic plate assumption for the film. For a substrate of finite thickness, the critical load is expressed as
\[
F_c = \frac{1}{4}\, h_f\, \bar{E}_f \left( \frac{3\bar{E}_s}{\bar{E}_f} \right)^{2/3},
\qquad \text{where } \bar{E}_f = \frac{E_f}{1 - \nu_f^2} \ \text{and}\ \bar{E}_s = \frac{E_s}{1 - \nu_s^2}.
\]
By introducing the material and geometric parameters of Table 2.1, one obtains the analytical solution for periodic boundary conditions, F_c = 0.048 N/mm, which is very close to our FE results with real boundary conditions (about 0.049 N/mm in Fig. 2.4). The established 1D FE model based on the ANM gives a very fast computing speed, with a small number of steps to reach the secondary or higher bifurcations (see Fig. 2.5). Four bifurcations have been detected from the accumulation of small steps in the ANM framework. Besides, it is found that the transverse displacement along the z direction follows an exponential distribution, as shown in Fig. 2.6. This result is analytically justified in Appendix A.

Although the accumulation of small steps is a good indicator of the occurrence of a bifurcation, the exact bifurcation point may be located between two neighbouring steps and thus cannot be captured directly. Therefore, bifurcation indicators are computed to detect the exact positions of the bifurcation points. By evaluating this indicator along an equilibrium branch, all the critical points existing on this branch (see Fig. 2.7) and the associated bifurcation modes (see Fig. 2.8) can be determined. Overall, four bifurcation points have been found. Fig. 2.8 shows the sequence of wrinkling patterns corresponding to the critical loads and their associated instability modes. The first instability mode is localized near the boundary and starts at λ = 0.04893. Then the pattern tends to a uniform sinusoidal mode at the second bifurcation point, at λ = 0.055681. A period-doubling mode occurs at the third bifurcation point, at λ = 0.086029; besides, a localized ridge mode is evident in the middle. At the fourth bifurcation point, at λ = 0.11595, an aperiodic mode (period-trebling and period-quadrupling) emerges. The stability of all these solutions has been checked by using the Crout decomposition of the tangent stiffness matrices. Lastly, it is noted that the strain is less than 1% at the bifurcation and lower than 4% at the end of the loading process. More accurate results at the end of the loading could likely be obtained in a finite strain hyperelasticity framework. The same remark holds for the following examples.
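As a quick sanity check, the analytical estimate above can be evaluated in a few lines. The parameter values below are the ones commonly used for this benchmark and are our assumption for the content of Table 2.1; with them, the formula indeed returns F_c ≈ 0.048 N/mm.

```python
import numpy as np

E_f, nu_f = 1.3e5, 0.3     # film modulus (MPa) and Poisson ratio (assumed)
E_s, nu_s = 1.8, 0.48      # substrate modulus (MPa) and Poisson ratio (assumed)
h_f = 1e-3                 # film thickness (mm) (assumed)

Ef_bar = E_f/(1 - nu_f**2)                       # plane strain moduli
Es_bar = E_s/(1 - nu_s**2)
F_c = 0.25*h_f*Ef_bar*(3*Es_bar/Ef_bar)**(2/3)   # critical load
lam_c = 2*np.pi*h_f*(Ef_bar/(3*Es_bar))**(1/3)   # classical critical wavelength
print(F_c, lam_c)                                # about 0.048 N/mm
```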
Film/substrate with clamped boundary conditions

Following the same strategy, we study the pattern formation in the case of clamped boundary conditions. Very fine meshes are adopted to discretize the film/substrate system along the x direction, and the convergence of the computational results is carefully examined. In Figs. 2.9 and 2.10, two bifurcation points have been captured by evaluating the bifurcation indicators along the equilibrium branch. The same method is also used in the following examples, but the corresponding indicator curves will no longer be presented. The sequence of wrinkling patterns corresponding to the bifurcation loads and their associated instability modes is illustrated in Fig. 2.11. The two modes correspond to modulated oscillations, the first one with a sinusoidal envelope and the second one with a hyperbolic tangent shape. These modes are quite common [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF] and can be predicted by the asymptotic Ginzburg-Landau equation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. The first bifurcation mode of Fig. 2.8b has been inhibited by the clamped boundary conditions and does not appear in this case. Period-doubling was not observed in the range of the load-displacement curve in Fig. 2.9 either. Nevertheless, the observed patterns are quite similar except near the boundary: compare for instance Fig. 2.11 and Fig. 2.8.

Functionally Graded Material (FGM) substrate with simply supported boundary conditions

In Section 2.2, it was shown that the elastic matrix L_s contributes to the potential energy and would therefore affect the wrinkling patterns to some extent. In what follows, we extend the 1D FE model by considering an elastically graded substrate, in order to study the influence of a variable Young's modulus on the pattern formation. Systems consisting of a stiff thin layer resting on an elastic graded substrate are often encountered both in nature and in industry. In nature, many living soft tissues, including skins, brains, and the mucosa of the esophagus and of the pulmonary airway, can be modeled as a soft substrate covered by a stiff thin surface layer [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. The sub-surface layer (i.e. the substrate) usually has graded mechanical properties, because of the spatial variation of its microstructure or composition. Besides, many practical systems in industry have a functionally graded substrate [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF]. For instance, the deposition process of a stiff thin film on a soft substrate may lead to a variation of the mechanical properties of the substrate along the thickness direction (functionally graded), which affects the wrinkling of the film/substrate system [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. In particular, we explore typical exponential variations of Young's modulus of the substrate along the thickness direction (softening E_s1 and stiffening E_s2, see Fig. 2.12), while Poisson's ratio is kept constant as before.
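In the layerwise model, such a grading is straightforward to implement: each sublayer simply receives its own elastic matrix [C^sn], evaluated from the local modulus. The sketch below is our own illustration; the exponential law and its decay constant c are assumptions, since the text only states that the softening and stiffening variations of Fig. 2.12 are exponential.

```python
import numpy as np

E_s0, nu_s, h_s = 1.8, 0.48, 0.1   # surface modulus (MPa), Poisson, thickness (mm), assumed

def E_graded(z, stiffening=False, c=2.0):
    """Young's modulus at depth z in [0, h_s] below the interface (assumed law)."""
    sign = 1.0 if stiffening else -1.0
    return E_s0*np.exp(sign*c*z/h_s)

def C_matrix(E, nu):
    """Isotropic plane strain matrix of Eq. (2.24)."""
    lam = E*nu/((1 + nu)*(1 - 2*nu))   # Lamé's first parameter
    G = E/(2*(1 + nu))                 # shear modulus
    return np.array([[lam + 2*G, lam,       0.0],
                     [lam,       lam + 2*G, 0.0],
                     [0.0,       0.0,       G]])

# one elastic matrix per sublayer, evaluated at the sublayer midpoint
n_layers = 15
z_mid = (np.arange(n_layers) + 0.5)*h_s/n_layers
C_layers = [C_matrix(E_graded(z), nu_s) for z in z_mid]
```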
In fact, these two typical kinds of exponential variations (softening and stiffening) were also assumed in [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF] for studying the effect of graded material properties on pattern formation. First, we investigate the pattern formation with simply supported boundary conditions and a softening Young's modulus E_s1. The first bifurcation occurs at λ = 0.024935, i.e. much earlier than in the homogeneous case, since the exponentially graded Young's modulus reduces the stiffness of the system (see Fig. 2.13 and Fig. 2.5). The instability modes are again captured by computing bifurcation indicators. The first mode is localized near the boundary (see Fig. 2.14); hence, it seems typically related to the simply supported boundary conditions. The second mode is periodic with little boundary effect. The wavelength is larger than in the homogeneous case due to the softening of the substrate, which is consistent with Eq. (2.7). The third mode corresponds to a period-doubling.

FGM substrate with clamped boundary conditions

We then study the pattern formation of the FGM substrate with clamped boundary conditions and a softening Young's modulus E_s1. Three bifurcations have been observed (see Fig. 2.15). The first bifurcation appears at λ = 0.026732, which, as expected, occurs earlier than in the homogeneous case (see Fig. 2.15 and Fig. 2.9). The sequence of wrinkling patterns corresponding to the bifurcation loads and the associated instability modes is shown in Fig. 2.16. The first bifurcation mode corresponds to a sinusoidal envelope, while the second one looks like a hyperbolic tangent, as in the case of a homogeneous substrate. The period-doubling mode occurs at the third bifurcation point, at λ = 0.099932.

FGM substrate with stiffening Young's modulus

We now explore the wrinkle formation in the case of a stiffening Young's modulus E_s2. Up to now, only one branch has been computed at each bifurcation point; it is proven to be stable by the positivity of the Jacobian matrix. This stability after bifurcation is classical for supercritical bifurcations. In the present case of a stiffening Young's modulus, we try to compute both the fundamental branch and the bifurcated one. The bifurcated branch has always been obtained in the previous cases, as it results naturally from the path-following algorithm. Bifurcation theory provides theoretical means to define all the branches emanating from a bifurcation point, and this can lead to practical algorithms [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. Nevertheless, the fundamental branch can also be found without any change to the path-following algorithm of Section 2.4.1. The idea is to relax the tolerance δ just before the bifurcation. Practically, as recommended in [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF], we keep the same precision parameter δ = 10^{-5} up to the neighbourhood of the third bifurcation, and then increase this parameter to δ = 10^{-2}. Consequently, two solution branches are clearly distinguished in Fig. 2.17.
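In code, this branch-selection trick amounts to nothing more than switching the accuracy parameter δ that enters the step length of Eq. (2.49); the following sketch, with illustrative values of our own choosing, shows the idea:

```python
DELTA_TIGHT, DELTA_LOOSE = 1e-5, 1e-2

def choose_delta(lam, lam_bif_estimate, window=0.05):
    """Relax delta in a small window around the estimated bifurcation load,
    so the continuation can step over the singularity and stay on the
    fundamental branch (window size is an illustrative assumption)."""
    near_bif = abs(lam - lam_bif_estimate) < window*lam_bif_estimate
    return DELTA_LOOSE if near_bif else DELTA_TIGHT

# inside the continuation loop (pseudo-usage):
#   delta = choose_delta(lam0, lam_third_bif)
#   a_max = (delta*norm(u1)/norm(un))**(1/(n - 1))
```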
We checked that the post-bifurcation branch remains accurate despite the larger value of the tolerance δ (residual lower than 10^{-3}), and that the Jacobian matrix is no longer positive along the fundamental solution, as expected theoretically. Similar bifurcation portraits could be found in all the other cases, but the practical interest of computing unstable branches is rather limited. In Figs. 2.18 and 2.19, the response shows four bifurcation points, the first two being very similar to those obtained in the previous cases, but with a shorter wavelength: the first mode is strongly influenced by the boundary conditions and the second one corresponds to a uniform amplitude in the bulk. The third and fourth bifurcation modes are slightly different from those in Fig. 2.8 (homogeneous substrate) and Fig. 2.14 (softening Young's modulus E_s1). Indeed, period-doubling is not observed here, but only a slow modulation of the wrinkle envelopes. Hence, the period-doubling mode appears as a possible event that does not occur in every case.

Anisotropic substrate

We study the effect of an orthotropic substrate on the pattern formation. Simply supported boundary conditions are applied. The elastic matrix of the substrate in Eq. (2.24) becomes
\[
[C^{sn}] =
\begin{bmatrix}
\dfrac{E_1 (1 - \nu_{23})}{1 - 2\nu_{12}\nu_{21} - \nu_{23}} & -\dfrac{E_1 \nu_{21}}{\nu_{23} + 2\nu_{12}\nu_{21} - 1} & 0 \\[2ex]
-\dfrac{E_2 \nu_{12}}{\nu_{23} + 2\nu_{12}\nu_{21} - 1} & \dfrac{E_2 (\nu_{12}\nu_{21} - 1)}{(1 + \nu_{23})(\nu_{23} + 2\nu_{12}\nu_{21} - 1)} & 0 \\[2ex]
0 & 0 & \dfrac{E_s}{2(1 + \nu_{12})}
\end{bmatrix}.
\tag{2.62}
\]
Precisely, here we choose E_1 = 10E_s, E_2 = E_s, ν_12 = ν_s, ν_21 = ν_s/10 and ν_23 = ν_s. From Fig. 2.20, one can see that the anisotropic system is slightly stiffer than the isotropic one, but the post-buckling path follows a trend similar to the isotropic case. Furthermore, the accumulations of small steps appear in almost the same loading region in both curves, which implies that the bifurcation loads are comparable. Three instability modes are detected by computing bifurcation indicators in the anisotropic case (see Fig. 2.21). The comparison between these three modes in Fig. 2.21 and the final shape in Fig. 2.22 is somewhat puzzling, since a period-doubling appears clearly in the final shape but not in the modes. Likely, this period-doubling grows continuously along the response curve.

Comments

Three types of bifurcation modes have been encountered in the four studied cases. First, a mode localized near the boundary can appear first in the case of simply supported boundary conditions. Then, the instability pattern tends to be periodic, except in the boundary region where the response is influenced by the boundary conditions. Last, this perfect periodicity can be broken by another bifurcation, leading to period-doubling, period-trebling or even period-quadrupling modes. Such a loss of periodicity had been previously discussed, for instance in [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF]. Curiously, all the computed bifurcations are supercritical, so that the post-bifurcation solutions remain stable. The responses of the system as well as the third bifurcation mode are represented in Fig. 2.23 as a function of the substrate properties: isotropic homogeneous, orthotropic homogeneous, inhomogeneous softening and stiffening, respectively.
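For reference, the orthotropic matrix of Eq. (2.62) with the parameter set used above can be assembled and checked for symmetry in a few lines (our own sketch; the substrate modulus value is the one assumed for Table 2.1):

```python
import numpy as np

def C_orthotropic(Es, nu_s):
    """Elastic matrix of Eq. (2.62) with E1 = 10*Es, E2 = Es,
    nu12 = nu_s, nu21 = nu_s/10, nu23 = nu_s."""
    E1, E2 = 10*Es, Es
    n12, n21, n23 = nu_s, nu_s/10, nu_s
    d = n23 + 2*n12*n21 - 1          # common (negative) denominator
    return np.array([
        [E1*(1 - n23)/(1 - 2*n12*n21 - n23), -E1*n21/d, 0.0],
        [-E2*n12/d, E2*(n12*n21 - 1)/((1 + n23)*d), 0.0],
        [0.0, 0.0, Es/(2*(1 + n12))]])

C = C_orthotropic(1.8, 0.48)
assert np.isclose(C[0, 1], C[1, 0])   # E1*nu21 = E2*nu12, so [C] is symmetric
```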
The bifurcation portrait seems generic at the beginning: a boundary mode is followed by a second mode that is almost periodic except in the boundary region. The third and possibly fourth bifurcations are no longer periodic, but several forms can be encountered: a clear period-doubling in the FGM softening case, a more confused situation for homogeneous isotropic substrates (see Fig. 2.8), or modulated modes without period-doubling in the FGM stiffening and orthotropic cases. The multiple bifurcations can be seen as markers of the evolution of the instability patterns, but sometimes the pattern can change continuously without an obvious bifurcation: such an evolution has been found in the anisotropic case of Section 2.5.6, where the period-doubling mode appears in the final shape (see Fig. 2.22) but not in the observed bifurcation modes (see Fig. 2.21). This is not surprising, because bifurcation is not a generic singularity, and small perturbations can replace a true bifurcation by a rapid but continuous evolution of the pattern.

Chapter conclusion

Wrinkling phenomena of stiff films bound to compliant substrates were investigated from the quantitative standpoint, with particular attention to the effect of boundary conditions, which had rarely been studied previously. A classical model was used, associating a geometrically nonlinear beam for the film with linear elasticity for the substrate. The presented results rely heavily on robust solution techniques based on the ANM, which is able to detect secondary bifurcations and to compute bifurcation modes on a nonlinear response curve. It would probably be rather difficult to detect all the bifurcations found in this chapter by conventional numerical methods. Six cases have been studied, characterized by the boundary conditions and by the material properties of the substrate (homogeneous, graded material and orthotropic). Up to four successive bifurcation points have been found, and the bifurcation portrait seems more or less generic. In the case of a simply supported film/substrate system, the first mode is a boundary mode. Then the pattern tends to be periodic in the bulk, the response near the ends being locally influenced by the boundary conditions. When the amplitude increases, the periodicity is broken and one can observe period-doubling bifurcations or the appearance of modulated patterns. The present study is limited to moderate rotations and 2D domains. It would be an interesting challenge to extend this analysis to 3D cases [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF], without too restrictive assumptions, but the computation time could increase dramatically. In this respect, an idea is to introduce reduced-order models, for instance via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF].
Nevertheless, 3D film/substrate models are essential, because the periodicity can be broken by 3D bifurcations before the occurrence of the period-doubling mode [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF][START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF].

Chapter 3
Pattern formation modeling of 3D film/substrate systems

Introduction

Surface morphological instabilities of stiff thin layers attached to soft substrates are of growing interest in a number of academic domains, including micro/nano-fabrication and metrology [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF], flexible electronics [START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF], mechanical and physical measurement of material properties [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF], and biomedical engineering [START_REF] Genzer | Soft matter with hard skin: From skin wrinkles to templating and material characterization[END_REF] as well as biomechanics [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. The pioneering work of Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] led to several theoretical and numerical works on stability, devoted to linear perturbation analysis and nonlinear buckling analysis [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Huang | Dynamics of wrinkle growth and coarsening in stressed thin films[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF]. In most of these papers, the 2D or 3D spatial problem is discretized by either a spectral method or a Fast Fourier Transform (FFT) algorithm, which is fairly inexpensive but prescribes periodic boundary conditions. In this framework, several types of wrinkling modes have been observed, including sinusoidal, checkerboard, herringbone (see Fig. 3.1) and disordered labyrinth patterns.
It was recognized early by Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] that such systems can also be studied by finite element methods, which are computationally more expensive but more flexible for describing complex geometries and more general boundary conditions, and which allow the use of commercial computer codes. Even so, 3D finite element simulations of film/substrate instabilities have been reported in only a few papers [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. Furthermore, the post-buckling evolution and mode transition of surface wrinkles in 3D film/substrate systems have rarely been studied, and only in the case of periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]; they still deserve further investigation, especially through the finite element method, which can provide an overall view of, and insight into, the formation and evolution of wrinkle patterns under more general conditions. Can one obtain the variety of 3D wrinkling modes reported in the literature by using classical finite element models? Can one describe the whole evolution path of the buckling and post-buckling of this system? Under what loading and boundary conditions, and at what bifurcation loads, can each type of pattern be observed? These questions will be addressed in this chapter.

This study aims at applying advanced numerical methods for bifurcation analysis to typical cases of film/substrate systems, and focuses on the post-buckling evolution involving multiple bifurcations and symmetry-breakings, for the first time with particular attention to the effect of boundary conditions. For this purpose, a 2D finite element (FE) model was previously developed for the multiperiodic bifurcation analysis of wrinkle formation [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. In this model, the film undergoing moderate deflections is described by the Föppl-von Kármán nonlinear elastic theory, while the substrate is considered to be a linear elastic solid. Following the same strategy, we extend this work to 3D cases by coupling shell elements representing the film with block elements describing the substrate. Therefore, large displacements and rotations in the film can be taken into account, and the spatial distribution of wrinkling modes such as sinusoidal, checkerboard and herringbone (see Fig. 3.1) can be investigated.

Surface instability of stiff layers on soft materials usually involves strong geometrical nonlinearities, large rotations, large deformations, loading path dependence, multiple symmetry-breakings and other complexities, which makes the numerical resolution quite difficult. The morphological post-buckling evolution and the mode transitions beyond the critical load are extremely complicated, especially in 3D cases, and conventional numerical methods have difficulties in detecting all the bifurcation points and associated instability modes along the evolution paths. To solve the resulting nonlinear equations, continuation techniques provide efficient numerical tools to compute these nonlinear response curves [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF].
In this chapter, we adopt the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF], which appears as a significantly efficient continuation technique without any corrector iteration. The underlying principle of the ANM is to build up the nonlinear solution branch in the form of relatively high order truncated power series. The resulting expansions are then introduced into the nonlinear problem, which transforms it into a sequence of linear problems that can be solved numerically. In this way, one gets approximations of the solution path that are very accurate inside the radius of convergence. Since few global stiffness matrix inversions are required (only one per step), the performance in terms of computing time is quite attractive. Moreover, as a result of the local polynomial approximation of the branch within each step, the algorithm is remarkably robust and fully automatic. Furthermore, unlike incremental-iterative methods, the arc-length step size in the ANM is fully adaptive, since it is determined a posteriori by the algorithm. A small radius of convergence and an accumulation of steps around a bifurcation imply its presence.

The detection of bifurcation points is a real challenge. Although much progress has been made using the Newton-Raphson method, an efficient and reliable algorithm remains difficult to establish. Indeed, considerable computing time would be spent in the bisection sequence and in the corrector iterations, because of the very small step lengths close to the bifurcation. In the ANM framework, a bifurcation indicator has been proposed to detect bifurcation points [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. It is a scalar function obtained by introducing a fictitious perturbation force into the problem, and it becomes zero exactly at the bifurcation point. Indeed, this indicator measures the intensity of the system response to perturbation forces. By evaluating it along an equilibrium branch, all the critical points existing on this branch and the associated bifurcation modes can be determined. This chapter explores the occurrence and post-bifurcation evolution of the sinusoidal, checkerboard and herringbone modes in greater depth. The work presented in this chapter, i.e. 3D finite element modeling of instabilities in thin films on soft substrates, constitutes a contribution to film/substrate instability problems and has been published in the International Journal of Solids and Structures [START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF].

3D mechanical model

We consider an elastic thin film bonded to an elastic substrate, which can buckle under compression.
Upon wrinkling, the film elastically buckles to relax the compressive stress, and the substrate concurrently deforms to maintain perfect bonding at the interface. In the following, the elastic potential energy of the system is considered in the framework of Hookean elasticity. The film/substrate system is considered to be three-dimensional and the geometry is as shown in Fig. 3.2. Let x and y be the in-plane coordinates, while z is the direction perpendicular to the mean plane of the film/substrate. The width and length of the system are denoted by L_x and L_y, respectively. The parameters h_f, h_s and h_t represent, respectively, the thickness of the film, the thickness of the substrate and the total thickness of the system. Young's modulus and Poisson's ratio of the film are denoted by E_f and ν_f, while E_s and ν_s are the corresponding material properties of the substrate.

The 3D film/substrate system is modeled in a rather classical way: the film is represented by a thin shell model allowing large rotations, while the substrate is modeled by small strain elasticity. Indeed, the considered instabilities are governed by nonlinear geometric effects in the stiff material, while these effects are much smaller in the soft material. Since the originality of this chapter lies in the numerical treatment of multiple bifurcations, we limit ourselves to this classical framework for the sake of consistency with the previous literature. The large rotation framework has been chosen for the film because of the efficiency of the associated finite element. Note that the same choice of a shell with finite rotations coupled with small strain elasticity in the substrate had been made, for numerical reasons, in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF]. The application range of this model is limited by two small parameters: the aspect ratios of the film, h_f/L_x and h_f/L_y, and the stiffness ratio E_s/E_f. For a larger ratio E_s/E_f, a finite strain model should be considered in the substrate, as in [START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF], and this would not be too difficult to implement in our framework [START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF].

Nonlinear shell formulation for the film

The challenges in the numerical modeling of such film/substrate systems come from the extremely large ratio of Young's moduli (E_f/E_s ≈ O(10^5)) as well as from the large thickness difference (h_s/h_f ≈ O(10^2)), which would require a very fine mesh if 3D block elements were used both for the film and for the substrate. Since finite rotations of the middle surface and small strains are considered in the thin film, nonlinear shell formulations are quite suitable and efficient for the modeling. Hereby, the three-dimensional shell formulation proposed by Büchter et al. [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF] is applied. It is based on a 7-parameter theory including a linearly varying thickness stretch as an extra variable, which allows applying a complete 3D constitutive law without condensation. It is thereby distinguished from classical shell models, which are usually based on degenerated constitutive relations (e.g. the Kirchhoff-Love and Reissner-Mindlin theories).
The formulation also incorporates the Enhanced Assumed Strain (EAS) concept proposed by Simo and Rifai [START_REF] Simo | A class of mixed assumed strain methods and method of incompatible modes[END_REF] to improve the element performance and to avoid locking phenomena such as Poisson thickness locking, shear locking or volume locking. This hybrid shell formulation can describe large deformation problems with hyperelasticity and has been successfully applied to nonlinear elastic thin-walled structures such as cantilever beams, square plates, cylindrical roofs and circular deep arches [START_REF] Sansour | A theory and finite element formulation of shells at finite deformations involving thickness change: Circumventing the use of a rotation tensor[END_REF][START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF].

The geometry and kinematics of the shell element are illustrated in Fig. 3.3, where the position vectors are functions of the curvilinear coordinates (θ^1, θ^2, θ^3). The geometry description relies on the middle surface (θ^1, θ^2) of the shell, while θ^3 represents the coordinate in the thickness direction. The current configuration is defined by the middle surface displacement and the relative displacement between the middle and upper surfaces. The large rotations are taken into account without any rotation matrix, since the current direction vector is obtained by adding a vector to the one of the initial configuration. In the initial undeformed configuration, the position vector x of any point in the shell can be defined as
\[
\mathbf{x}(\theta^\alpha, \theta^3) = \mathbf{r}(\theta^\alpha) + \theta^3\, \mathbf{a}_3(\theta^\alpha),
\tag{3.1}
\]
where r(θ^α) (α = 1, 2) denotes the projection of x onto the middle surface and θ^3 describes the perpendicular direction, with θ^3 ∈ [-h_f/2, h_f/2], in which h_f is the reference thickness of the shell. The normal vector of the middle surface is represented by a_3 = a_1 × a_2. Similarly, in the current deformed configuration, the position of the point x̄ is defined by the vector
\[
\bar{\mathbf{x}}(\theta^\alpha, \theta^3) = \bar{\mathbf{r}}(\theta^\alpha) + \theta^3\, \bar{\mathbf{a}}_3(\theta^\alpha),
\tag{3.2}
\]
where
\[
\bar{\mathbf{r}} = \mathbf{r} + \mathbf{v}, \qquad \bar{\mathbf{a}}_3 = \mathbf{a}_3 + \mathbf{w}.
\tag{3.3}
\]
Therefore, the displacement vector associated with an arbitrary material point of the shell, which varies linearly along the thickness direction, reads
\[
\mathbf{u}(\theta^\alpha, \theta^3) = \bar{\mathbf{x}} - \mathbf{x} = \mathbf{v}(\theta^\alpha) + \theta^3\, \mathbf{w}(\theta^\alpha).
\tag{3.4}
\]
In total, six degrees of freedom can be distinguished in Eq. (3.4) to describe the shell kinematics: three components related to the translation of the middle surface (v_1, v_2, v_3) and three further components updating the direction vector (w_1, w_2, w_3). The Green-Lagrange strain tensor is used to describe the geometrical nonlinearity and can be expressed in the covariant basis:
\[
\boldsymbol{\gamma} = \frac{1}{2}\left( \bar{g}_{ij} - g_{ij} \right) \mathbf{g}^i \otimes \mathbf{g}^j \quad \text{with } i, j = 1, 2, 3,
\tag{3.5}
\]
where g^i are the contravariant base vectors, while g_ij = g_i · g_j and ḡ_ij = ḡ_i · ḡ_j respectively represent the components of the covariant metric tensor in the initial configuration and in the deformed one [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF].
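A minimal sketch of Eq. (3.5), our own illustration for generic base vectors, shows how the covariant strain components follow directly from the two metric tensors:

```python
import numpy as np

def green_lagrange_cov(g_cov, gbar_cov):
    """Covariant Green-Lagrange components gamma_ij = (gbar_ij - g_ij)/2.
    g_cov and gbar_cov are (3, 3) arrays whose rows are the covariant
    base vectors g_i of the initial and deformed configurations."""
    g = np.einsum('ik,jk->ij', g_cov, g_cov)        # g_ij = g_i . g_j
    gbar = np.einsum('ik,jk->ij', gbar_cov, gbar_cov)
    return 0.5*(gbar - g)

# sanity check: a homogeneous stretch of 1% along the first direction
gamma = green_lagrange_cov(np.eye(3), np.diag([1.01, 1.0, 1.0]))
assert np.isclose(gamma[0, 0], 0.5*(1.01**2 - 1))   # exact E_11
```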
The hybrid shell formulation is derived from a three-field variational principle based on the Hu-Washizu functional [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF][START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF]. The stationarity condition can be written as
\[
\Pi_{EAS}(\mathbf{u}, \tilde{\boldsymbol{\gamma}}, \mathbf{S}) = \int_{\Omega} \left\{ {}^{t}\mathbf{S} : (\boldsymbol{\gamma}_u + \tilde{\boldsymbol{\gamma}}) - \frac{1}{2}\, {}^{t}\mathbf{S} : \mathbf{D}^{-1} : \mathbf{S} \right\} d\Omega - \lambda\, P_e(\mathbf{u}),
\tag{3.6}
\]
where D is the elastic stiffness tensor. The unknowns are, respectively, the displacement field u, the second Piola-Kirchhoff stress tensor S and the compatible Green-Lagrange strain γ_u. The enhanced assumed strain γ̃ satisfies the condition of orthogonality with respect to the stress field. The work of the external load is denoted by P_e(u), while λ is a scalar load parameter.

Concerning the enhanced assumed strain γ̃, classical shell kinematics requires the transversal strain field to be zero (γ_33 = 0). In reality, since 3D constitutive relations are concerned, this condition is hardly satisfied, due to the Poisson effect, especially in bending dominated cases. This phenomenon is commonly referred to as "Poisson thickness locking". To remedy this issue, an enhanced assumed strain contribution γ̃ has been introduced in the shell formulation [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF], acting across the shell thickness and supplementing the compatible strain field γ_u. It describes the linear variation of the thickness stretch or compression, and is expressed with respect to the local curvilinear coordinate θ^3:
\[
\tilde{\boldsymbol{\gamma}} = \theta^3\, \tilde{\gamma}_{33}\, \mathbf{g}^3 \otimes \mathbf{g}^3 \quad \text{with } \tilde{\gamma}_{33} = \tilde{\gamma}_{33}(\theta^\alpha),
\tag{3.7}
\]
and it satisfies the condition of orthogonality with respect to the stress field S:
\[
\int_{\Omega} {}^{t}\mathbf{S} : \tilde{\boldsymbol{\gamma}}\, d\Omega = 0.
\tag{3.8}
\]
In this way, the "spurious" transversal strains induced by the Poisson effect for bending dominated kinematics are balanced by the assumed strain γ̃, which removes the thickness locking problem. This approach is applied in this chapter, since the associated finite element is very efficient, especially for very thin shells [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. An 8-node quadrilateral element with reduced integration is used for the 7-parameter shell formulation. The enhanced assumed strain γ̃ does not require inter-element continuity, nor does it contribute to the total number of nodal degrees of freedom. Therefore, it can be eliminated by condensation at the element level, which preserves the formal structure of a 6-parameter shell theory with 48 degrees of freedom per element in total.

Linear elasticity for the substrate

Since the displacements, rotations and strains remain relatively small in the substrate, linear isotropic elasticity with updated geometry can accurately describe the substrate; the nonlinear strain-displacement behavior has essentially no influence on the results of interest [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF]. The potential energy of the substrate can be expressed as
\[
\Pi_s(\mathbf{u}_s) = \frac{1}{2} \int_{\Omega} {}^{t}\boldsymbol{\varepsilon} : \mathbf{L}_s : \boldsymbol{\varepsilon}\, d\Omega - \lambda\, P_e(\mathbf{u}_s),
\tag{3.9}
\]
where L_s is the elastic matrix of the substrate.
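The element-level elimination of the EAS parameters mentioned above is a standard static condensation; a generic sketch (our own notation for the element blocks, not the thesis code) reads:

```python
import numpy as np

def condense_eas(K_uu, K_ua, K_aa, f_u):
    """Condense the EAS parameters a out of the element system
        [K_uu K_ua][u]   [f_u]
        [K_au K_aa][a] = [ 0 ],   with K_au = K_ua^T,
    which is possible because a does not couple between elements."""
    K_aa_inv = np.linalg.inv(K_aa)            # small local matrix
    K_cond = K_uu - K_ua @ K_aa_inv @ K_ua.T  # condensed stiffness
    return K_cond, f_u                        # assembled globally as usual
```

The condensed element then carries only the 48 nodal degrees of freedom, as stated above.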
In this chapter, 8-node linear block elements with reduced integration are used to discretize the substrate, with 24 degrees of freedom per block element.

Connection between the film and the substrate

As the film is bonded to the substrate, the displacement must be continuous at the interface. However, the shell elements of the film and the 3D block elements of the substrate cannot simply be joined directly, since they are dissimilar elements. Additional constraint equations therefore have to be introduced. Lagrange multipliers are used to couple the corresponding nodal displacements of the compatible meshes of the film and the substrate (see Fig. 3.4). Note that 8-node linear block elements are used here only for coupling convenience; 20-node quadratic block elements would be another good candidate, and both follow the same coupling strategy. Consequently, the stationarity function of the film/substrate system is given in Lagrangian form:
\[
\mathcal{L}(\mathbf{u}_f, \mathbf{u}_s, \boldsymbol{\ell}) = \Pi_{EAS} + \Pi_s + \sum_{\text{node } i} \boldsymbol{\ell}_i \left[ \mathbf{u}_f^-(i) - \mathbf{u}_s(i) \right], \tag{3.10}
\]
in which
\[
\mathbf{u}_f^-(i) = \mathbf{v}(i) - \frac{h_f}{2}\, \mathbf{w}(i), \tag{3.11}
\]
where the displacements of the film and the substrate are respectively denoted $\mathbf{u}_f$ and $\mathbf{u}_s$, while the Lagrange multipliers are represented by $\boldsymbol{\ell}$. At the interface, the displacement continuity is enforced at coincident nodes and connects the bottom surface of the film ($\mathbf{u}_f^-$) and the top surface of the substrate. From Eq. (3.10), three equations are obtained according to $\delta\mathbf{u}_f$, $\delta\mathbf{u}_s$ and $\delta\boldsymbol{\ell}$:
\[
\begin{cases}
\delta\Pi_{EAS} + \displaystyle\sum_{\text{node } i} \boldsymbol{\ell}_i\, \delta\mathbf{u}_f^-(i) = 0,\\[1mm]
\delta\Pi_s - \displaystyle\sum_{\text{node } i} \boldsymbol{\ell}_i\, \delta\mathbf{u}_s(i) = 0,\\[1mm]
\displaystyle\sum_{\text{node } i} \delta\boldsymbol{\ell}_i\, \mathbf{u}_f^-(i) - \sum_{\text{node } i} \delta\boldsymbol{\ell}_i\, \mathbf{u}_s(i) = 0.
\end{cases} \tag{3.12}
\]

Figure 3.4: Sketch of coupling at the interface.

Resolution technique and bifurcation analysis

The Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is used to solve the resulting nonlinear equations. The ANM is a path-following technique based on a succession of high-order power series expansions (a perturbation technique) with respect to a well-chosen path parameter. It acts as an efficient continuation predictor without any corrector iteration. Moreover, one obtains approximations of the solution path that are very accurate inside the radius of convergence. In this chapter, the main interest of the ANM is its ability to detect bifurcation points. First, small steps are often associated with the occurrence of a bifurcation. Second, a bifurcation indicator will be defined, which allows the bifurcation load and the corresponding nonlinear mode to be detected exactly.

Path-following technique

The resulting nonlinear problem (3.12) can be rewritten as
\[
\delta\mathcal{L}(\mathbf{u}_f, \mathbf{u}_s, \boldsymbol{\ell}) = \left\langle \mathbf{R}(\mathbf{U}, \lambda),\, \delta\mathbf{U} \right\rangle = 0, \tag{3.13}
\]
in which
\[
\mathbf{R}(\mathbf{U}, \lambda) = \mathbf{L}(\mathbf{U}) + \mathbf{Q}(\mathbf{U}, \mathbf{U}) - \lambda \mathbf{F} = 0, \tag{3.14}
\]
where $\mathbf{U} = (\mathbf{u}_f, \mathbf{u}_s, \boldsymbol{\ell})$ is a mixed vector of unknowns, $\mathbf{R}$ the residual vector, $\mathbf{L}(\cdot)$ a linear operator, $\mathbf{Q}(\cdot, \cdot)$ a quadratic one and $\mathbf{F}$ the external load vector. The external load parameter is denoted by the scalar $\lambda$. The principle of the ANM continuation consists in describing the solution path by computing a succession of truncated power series expansions.
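Before detailing the series expansions, the node-wise tying (3.10)-(3.12) can be illustrated on a one-DOF toy problem: a "film" spring and a "substrate" spring glued by a single Lagrange multiplier. A minimal Python sketch, with illustrative stiffness values:

import numpy as np

# Toy saddle-point system in the shape of (3.10)-(3.12): two 1-DOF springs
# (stiffnesses k_f, k_s, illustrative) glued by a Lagrange multiplier
# enforcing u_f = u_s. Unknowns: [u_f, u_s, multiplier].
k_f, k_s, force = 2.0, 5.0, 1.0

K = np.array([[k_f, 0.0, 1.0],
              [0.0, k_s, -1.0],
              [1.0, -1.0, 0.0]])
rhs = np.array([force, 0.0, 0.0])

u_f, u_s, lam = np.linalg.solve(K, rhs)
print(u_f, u_s, lam)      # u_f == u_s; lam is the interface (gluing) force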
From a known solution point $(\mathbf{U}_0, \lambda_0)$, the solution $(\mathbf{U}, \lambda)$ is expanded into truncated power series of a perturbation parameter $a$:
\[
\mathbf{U}(a) = \mathbf{U}_0 + \sum_{p=1}^{n} a^p \mathbf{U}_p = \mathbf{U}_0 + a\mathbf{U}_1 + a^2\mathbf{U}_2 + \ldots + a^n\mathbf{U}_n, \tag{3.15}
\]
\[
\lambda(a) = \lambda_0 + \sum_{p=1}^{n} a^p \lambda_p = \lambda_0 + a\lambda_1 + a^2\lambda_2 + \ldots + a^n\lambda_n, \tag{3.16}
\]
\[
a = \left\langle \mathbf{u} - \mathbf{u}_0,\, \mathbf{u}_1 \right\rangle + (\lambda - \lambda_0)\,\lambda_1, \tag{3.17}
\]
where $n$ is the truncation order of the series. Eq. (3.17) defines the path parameter $a$, which can be identified with an arc-length parameter. By introducing Eqs. (3.15) and (3.16) into Eqs. (3.13) and (3.17), then equating the terms at the same power of $a$, one obtains a set of linear problems. The maximum value of the path parameter $a$ is automatically defined by analyzing the convergence of the power series at each step: $a_{\max}$ is chosen such that the difference between the displacement solutions at two successive orders remains smaller than a given precision parameter $\delta$, which yields the validity range
\[
a_{\max} = \left( \delta\, \frac{\lVert \mathbf{u}_1 \rVert}{\lVert \mathbf{u}_n \rVert} \right)^{1/(n-1)}, \tag{3.18}
\]
where $\lVert \cdot \rVert$ stands for the Euclidean norm. Unlike incremental-iterative methods, the arc-length step size $a_{\max}$ is adaptive, since it is determined a posteriori by the algorithm. When there is a bifurcation point on the solution path, the radius of convergence is set by the distance to the bifurcation. The step length defined in Eq. (3.18) then becomes smaller and smaller, as if the continuation process "knocked" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a singularity on the path; in this way, all bifurcations can easily be identified by the user without any special tool.

It is worth mentioning that only two parameters control the algorithm. The first one is the truncation order $n$ of the series. It has been discussed previously that the optimal truncation order is relatively large, between 15 and 20, but larger values (e.g. $n = 50$) also give good results for large-scale problems [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF]. The other important parameter is the chosen tolerance $\delta$ on the residual: very small values (e.g. $\delta = 10^{-6}$) ensure a high accuracy and a robust path-following process.

Detection of bifurcation points

Detecting exact bifurcation points is a challenge. With conventional methods it consumes much computation time, in bisection sequences and many Newton-Raphson iterations, because of the small steps required close to the bifurcation. In the framework of the ANM, a bifurcation indicator has been proposed to capture exact bifurcation points in an efficient and reliable way [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. Let $\Delta\mu\, \mathbf{f}$ be a fictitious perturbation force applied to the structure at a given deformed state $(\mathbf{U}, \lambda)$, where $\Delta\mu$ is the intensity of the force $\mathbf{f}$ and $\Delta\mathbf{U}$ is the associated response. Superposing the applied load and the perturbation, the fictitious perturbed equilibrium can be described by
\[
\mathbf{L}(\mathbf{U} + \Delta\mathbf{U}) + \mathbf{Q}(\mathbf{U} + \Delta\mathbf{U}, \mathbf{U} + \Delta\mathbf{U}) = \lambda\mathbf{F} + \Delta\mu\,\mathbf{f}. \tag{3.19}
\]
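The adaptive step length (3.18) is inexpensive to evaluate once the series terms are known. A minimal Python sketch (the series vectors are random stand-ins for illustration):

import numpy as np

# Adaptive ANM step length, Eq. (3.18): a_max = (delta * ||u_1|| / ||u_n||)^(1/(n-1)).
# The series terms below are random stand-ins, purely illustrative.
rng = np.random.default_rng(1)
n, delta = 20, 1e-6                       # truncation order and tolerance
U_series = [rng.standard_normal(1000) * (2.0 ** p) for p in range(1, n + 1)]

a_max = (delta * np.linalg.norm(U_series[0])
         / np.linalg.norm(U_series[-1])) ** (1.0 / (n - 1))
print(f"a_max = {a_max:.3e}")             # shrinks as ||u_n|| grows, i.e. the
                                          # steps accumulate near a bifurcation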
Considering the equilibrium state and neglecting the quadratic terms, one obtains the following auxiliary problem:
\[
\mathbf{L}_t(\Delta\mathbf{U}) = \Delta\mu\,\mathbf{f}, \tag{3.20}
\]
where $\mathbf{L}_t(\cdot) = \mathbf{L}(\cdot) + 2\mathbf{Q}(\mathbf{U}, \cdot)$ is the tangent operator at the equilibrium point $(\mathbf{U}, \lambda)$. If $\Delta\mu$ is imposed, the displacement tends to infinity in the vicinity of the critical points. To avoid this problem, the following displacement-based condition is imposed instead:
\[
\left\langle \mathbf{L}_t^0 (\Delta\mathbf{U} - \Delta\mathbf{U}_0),\, \Delta\mathbf{U}_0 \right\rangle = 0, \tag{3.21}
\]
where $\mathbf{L}_t^0(\cdot)$ is the tangent operator at the starting point $(\mathbf{U}_0, \lambda_0)$ and the direction $\Delta\mathbf{U}_0$ is the solution of $\mathbf{L}_t^0(\Delta\mathbf{U}_0) = \mathbf{f}$. Consequently, $\Delta\mu$ is deduced from the linear system (3.20)-(3.21):
\[
\Delta\mu = \frac{\left\langle \Delta\mathbf{U}_0, \mathbf{f} \right\rangle}{\left\langle \mathbf{L}_t^{-1}(\mathbf{f}), \mathbf{f} \right\rangle}. \tag{3.22}
\]
Since the scalar function $\Delta\mu$ is a measure of the stiffness of the structure and becomes zero at singular points, it can serve as a bifurcation indicator. It can be computed directly from Eq. (3.22), but this requires decomposing the tangent operator at each point along the solution path. For this reason, the system (3.20)-(3.21) is more efficiently solved by the ANM:

Goal: find $a_b$ such that $\mathbf{L}_t(\mathbf{U}(a_b))\, \Delta\mathbf{U} = 0$.
Method: solve $\mathbf{L}_t(\mathbf{U}(a))\, \Delta\mathbf{U} = \Delta\mu\,\mathbf{f}$ together with $\left\langle \mathbf{L}_t^0(\Delta\mathbf{U} - \Delta\mathbf{U}_0), \Delta\mathbf{U}_0 \right\rangle = 0$.
Output: when $\Delta\mu(a_b) = 0$, $\Delta\mathbf{U}(a_b)$ is the bifurcation mode.

In what follows, each field $\Delta\mathbf{U}(a_b)$ is called an instability mode or wrinkling mode. Note that a random perturbation force vector $\mathbf{f}$ is recommended [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]: the bifurcation indicator in Eq. (3.22) vanishes at singular points only if the fictitious force vector $\mathbf{f}$ is not orthogonal to the instability mode, and a random choice avoids this problem. It is worth mentioning that the fictitious perturbation force $\mathbf{f}$ influences neither the numerical solutions of the initial problem (3.13) nor the detection of bifurcation points via (3.20) and (3.21), but only the auxiliary unknown $\Delta\mathbf{U}(a)$.

Results and discussion

Three types of wrinkling patterns — sinusoidal, checkerboard and herringbone — will be investigated under different loading and boundary conditions. On the bottom surface of the substrate, the deflection $u_z$ and the tangential traction are taken to be zero. The material and geometric properties of the film/substrate system, given in Table 3.1, are similar to those in [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. The dimensional parameters and loading conditions of each case are presented in Table 3.2 and Fig. 3.5, respectively. The very large Young's modulus ratio, $E_f/E_s$, determines the critical wavelength $\lambda_c$, which remains practically unchanged as the amplitude of the wrinkles increases [START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF].
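The behavior of the indicator (3.22) can be reproduced on a toy tangent operator that loses definiteness as a load factor grows. A minimal Python sketch; the matrices and the load path are illustrative assumptions:

import numpy as np

# Bifurcation indicator, Eq. (3.22): Delta_mu = <dU0, f> / <Lt^{-1} f, f>.
# Toy tangent operator Lt(s) = K0 - s*Kg becomes singular at s = s*;
# K0, Kg and the random force f are illustrative stand-ins.
rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)   # SPD stiffness
B = rng.standard_normal((n, n)); Kg = B @ B.T                   # geometric part
f = rng.standard_normal(n)                                      # random force

s_star = np.min(np.linalg.eigvals(np.linalg.solve(Kg, K0)).real)  # first singularity
dU0 = np.linalg.solve(K0, f)                                      # Lt^0 dU0 = f
for s in np.linspace(0.0, 1.15 * s_star, 8):
    Lt = K0 - s * Kg
    d_mu = (dU0 @ f) / (np.linalg.solve(Lt, f) @ f)
    print(f"s/s* = {s / s_star:.2f}   indicator = {d_mu:+.4f}")
# The indicator decreases towards zero and changes sign when crossing s*.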
Poisson's ratio is a dimensionless measure of the degree of compressibility. Compliant substrate materials, such as elastomers, are nearly incompressible, with $\nu_s = 0.48$. A relatively thin film has been chosen, so that the response of the isotropic and homogeneous system is not parameter dependent [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. In order to trigger a transition from the fundamental branch to the bifurcated one, small perturbation forces, $f_z = 10^{-8}$, are imposed in the film. The introduction of such small perturbation forces is quite a common technique in the solution of bifurcation problems by continuation techniques [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF], even when using commercial finite element codes. This artifice could be avoided by applying a specific procedure to compute the bifurcated branch, as in [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. In this chapter, the perturbation forces $f_z$ allow the whole bifurcated branch to be computed with a single continuation algorithm. Note that these forces differ from the fictitious perturbation force of Section 3.3.2, which acts only on the bifurcation indicator. The number of elements required for a convergent solution was carefully examined.

Critical loads can be detected as bifurcation points on the load-displacement curve. Although the accumulation of small steps is a good indicator of the occurrence of a bifurcation, the exact bifurcation point may lie between two neighbouring steps and thus cannot be captured directly. Therefore, bifurcation indicators are computed to locate the exact position of the bifurcation points. By evaluating this indicator along an equilibrium branch, all the critical points on this branch and the associated bifurcation modes can be determined. In what follows, we explore in greater depth the formation and evolution of three kinds of patterns (sinusoidal, checkerboard and herringbone) in the case of a not too large wave number. In experiments, one often observes more disordered wrinkles, like a frustrated labyrinth [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Yin | Deterministic order in surface micro-topologies through sequential wrinkling[END_REF]. These more intricate patterns can be predicted by a generic finite element procedure such as the one presented in this chapter, provided that sufficient computer resources are available.

Sinusoidal patterns

First, we study the sinusoidal pattern formation and evolution via Film/Sub I. The film is uniaxially compressed along the x direction, as shown in Fig. 3.5a. The displacements $v_2$, $v_3$, $w_2$ and $w_3$ are taken to be zero on the two loading sides (see Fig. 3.5a), which are parallel to $O_y$. These sides are thus simply supported, since the rotation $w_1$ around $O_y$ is not locked. The other two sides are set free. To avoid rigid body motions, the displacement $v_1$ at the film center is locked as well. The film is meshed with 50 × 50 shell elements, to ensure at least five elements within one wavelength.
The substrate is compatibly discretized by 12500 block elements with five layers. In total, the film/substrate system contains 100827 degrees of freedom (DOF), including the Lagrange multipliers. The critical load of sinusoidal wrinkles based on a classical linearized stability analysis was presented in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF], under the Föppl-von Kármán nonlinear elastic plate assumption for the film. For a substrate of finite thickness, the critical load is expressed as
\[
F_c = \frac{1}{4}\, h_f\, \bar{E}_f \left( \frac{3\bar{E}_s}{\bar{E}_f} \right)^{2/3},
\quad \text{where} \quad \bar{E}_f = \frac{E_f}{1 - \nu_f^2} \quad \text{and} \quad \bar{E}_s = \frac{E_s}{1 - \nu_s^2}.
\]
By introducing the material and geometric parameters of Table 3.1, one obtains the analytical solution for periodic boundary conditions, $F_c = 0.048\,\text{N/mm}$, which is close to our 3D finite element results with real boundary conditions (about $0.052\,\text{N/mm}$ in Fig. 3.6).

The established 3D model based on the ANM offers a very fast computing speed and reaches the secondary bifurcations in a few steps (see Fig. 3.6). These steps are very large, except in the very small region where the load lies between 0.052 and 0.056 N/mm. In this region, one can observe two packets of small steps corresponding to two bifurcation points. Their exact locations have been captured by evaluating the bifurcation indicators along the equilibrium branch (see Fig. 3.7). The same method is also used in the following examples, but the corresponding curves will no longer be shown. The sequence of wrinkling modes $\Delta v$ corresponding to the bifurcation loads and their associated instability modes $\Delta v_3$ is illustrated in Fig. 3.8. These two instability modes are similar to classical patterns obtained, for instance, in membrane wrinkling. Their shapes are sinusoidal, with fast oscillations in the compressive direction and a spatial modulation. These oscillations are located in the center of the film for the first mode and near the loading sides for the second one, corresponding to the zones where the compressive stresses are largest. When the load increases, the pattern tends to a more or less uniform sinusoidal shape (see Fig. 3.9), with small boundary effects close to the loading sides. In other words, boundary effects are important at the first appearance of the wrinkles; then the amplitudes of the oscillations tend to become uniform. Similar evolutions have been obtained in related problems, where the solution converges to an oscillation whose envelope looks like a hyperbolic tangent, for instance for a clamped beam [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF] or a clamped membrane [START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. The tendency towards a uniform oscillation is in agreement with the predictions of the asymptotic Ginzburg-Landau equation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF].

Following the same strategy, we investigate the surface morphological instability of Film/Sub I in the case of clamped boundary conditions. More specifically, the displacement $w_1$ is now also taken to be zero on the two loading sides (see Fig. 3.5a) parallel to $O_y$. These sides are thus clamped, since the rotation $w_1$ around $O_y$ is locked. The other boundary conditions and the loading are the same as before.
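The analytical estimate $F_c$ is straightforward to evaluate. A minimal Python sketch; since Table 3.1 is only referenced here, the film and substrate properties below are illustrative assumptions (except $\nu_s = 0.48$, quoted above), chosen so that the formula reproduces the quoted order of magnitude:

# Classical critical wrinkling load:
#   F_c = (1/4) * h_f * Ef_bar * (3 * Es_bar / Ef_bar)**(2/3),
# with E_bar = E / (1 - nu**2). Values below are illustrative assumptions,
# except nu_s = 0.48, which is quoted in the text.
E_f, nu_f = 1.3e5, 0.3        # film modulus (MPa) -- assumed for illustration
E_s, nu_s = 1.8, 0.48         # substrate modulus (MPa, assumed) and Poisson ratio
h_f = 1e-3                    # film thickness (mm) -- assumed

Ef_bar = E_f / (1.0 - nu_f**2)
Es_bar = E_s / (1.0 - nu_s**2)
F_c = 0.25 * h_f * Ef_bar * (3.0 * Es_bar / Ef_bar) ** (2.0 / 3.0)
print(f"F_c = {F_c:.4f} N/mm")   # ~0.048, same order as the value quoted above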
The same mesh as in the simply supported case is used. Two bifurcation points have been found, as shown in Fig. 3.10, and the two instability modes correspond to modulated oscillations (see Fig. 3.11). The first one is similar to the simply supported case, except for the vanishing rotations at the boundary. The second one has a hyperbolic tangent envelope, apart from a small localization in the middle. The pattern then tends to a uniform hyperbolic-tangent shape when the load reaches the final step (see Fig. 3.12).

Checkerboard patterns

Checkerboard modes are explored via Film/Sub II. The square film is under equibiaxial compression in both the x and y directions (see Fig. 3.5b). The deflection along the four edges is locked to zero, which means that the film is simply supported on the whole boundary. The displacements $v_1$ and $v_2$ at the film center are also set to zero, to avoid rigid body motions. The same mesh as in the sinusoidal case, with 100827 DOF in total, is used. Four bifurcations have been captured through the computation of bifurcation indicators (see Fig. 3.13). In all cases, the main symmetries with respect to the medians are preserved. The first wrinkling load is slightly lower than in the uniaxial loading case ($\lambda = 0.048361$ instead of $0.052812$). Fig. 3.14 presents the sequence of wrinkling modes $\Delta v$ corresponding to the critical loads and their associated instability modes $\Delta v_3$. In the first mode, the pattern is not uniform and one observes a corner effect due to the stress concentration in this area. As in the two previous cases, this first bifurcation is due to local effects that would not appear under periodic boundary conditions. The uniform checkerboard mode matures in the bulk at the second bifurcation, but this growth in the center seems to occur gradually from the first bifurcation on. Boundary and corner effects grow significantly when the load reaches the third bifurcation (see Fig. 3.14e); nevertheless, the checkerboard shape is maintained in the middle bulk. The fourth mode is very similar to the third one, because of their very close critical loads; they can hardly be distinguished and the fourth is not shown here. The wrinkling pattern at the final step is depicted in Fig. 3.15 and exhibits strong boundary and corner effects. The growth of checkerboard patterns is not as stable as that of the previously observed sinusoidal patterns, since three bifurcations occur for rather small values of the deflection ($v_3/h_f \approx 0.375$) and there is a local maximum of the deflection.

Herringbone patterns

Herringbone modes are investigated via Film/Sub III with a rectangular surface ($L_x/L_y = 0.5$), so as to observe the patterns more clearly, since the wavelengths $\lambda_x$ and $\lambda_y$ are not identical. The film is under biaxial step loading, as shown in Fig. 3.5c. More precisely, the film is compressed along the x direction in the first step, where the loading and boundary conditions are the same as in the simply supported sinusoidal case of Section 3.4.1: simply supported on the two loading sides and free on the other two. Then, at the beginning of the second loading step, the displacements $v_1$, $v_3$, $w_1$ and $w_3$ along the four sides are locked, which means that the two previously free sides are simply supported in this second step. Compression along the y direction is then imposed on these two edges. The film is meshed with 26 × 50 shell elements, while the substrate is compatibly discretized by 6500 block elements with five layers.
In total, the film/substrate system contains 53235 DOF, including the Lagrange multipliers. The uniaxial compression in the first step generates the same type of sinusoidal wrinkles as in Section 3.4.1. In Fig. 3.16a, two bifurcation points have been found. The sequence of wrinkling modes $\Delta v$ corresponding to the bifurcation loads and their associated instability modes $\Delta v_3$ is illustrated in Fig. 3.17. The first mode is modulated in a sinusoidal way, while the second one corresponds to a quasi-uniformly distributed oscillation. During the second step of compression, along the y direction, two bifurcations have been captured by computing bifurcation indicators (see Fig. 3.16b). The first mode shows aperiodic wrinkles (see Figs. 3.18a and 3.18b), where the perfect periodicity of Fig. 3.17d has been broken by the new bifurcation. Such a loss of periodicity had been previously discussed in [START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF], where period-doubling or even period-quadrupling is observed. Here, the periodicity is broken by the appearance of 3D wrinkling patterns, and one may wonder in which cases the sinusoidal modes lose their stability through period-doubling and in which through 3D wrinkling modes. The herringbone mode (see Figs. 3.18c and 3.18d) appears around the second bifurcation, with an in-plane wave occurring along the y direction so as to satisfy the minimum energy states [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF]. Apparently, the wavelength $\lambda_y$ is larger than the sinusoidal wavelength $\lambda_x$, which is consistent with the experimental results in [START_REF] Yin | Deterministic order in surface micro-topologies through sequential wrinkling[END_REF]. A symmetric phase shift can clearly be seen in the final step (see Fig. 3.19), which confirms that the new in-plane wave spreads along the y direction while oscillating in the x direction. Nevertheless, the wave number in the x direction remains unchanged during the second loading step.

Chapter conclusion

Pattern formation and evolution of stiff films bound to compliant substrates were investigated, accounting for boundary conditions in 3D cases, a configuration that had rarely been studied previously. A classical model was applied, associating a geometrically nonlinear shell formulation for the film with linear elasticity for the substrate. The shell elements and block elements were then coupled by introducing Lagrange multipliers. The presented results rely heavily on robust solution techniques based on the ANM, which is able to detect secondary bifurcations and to compute bifurcation modes along a nonlinear response curve. It would probably be rather difficult to detect all the bifurcations found in this chapter by conventional numerical methods.
Notably, the occurrence and evolution of sinusoidal, checkerboard and herringbone modes have been observed in the post-buckling range. The boundary conditions lead to non-uniformly distributed modes, but these boundary effects hold only at the onset of the instability, and the further bifurcation modes correspond to more or less uniform amplitudes of oscillation. In our simulations, the appearance of sinusoidal, checkerboard or herringbone patterns is mainly related to the loading conditions. The nonlinear behavior of moderately large wrinkles has been investigated, and it seems that 1D patterns are more stable than 2D ones. The presented nonlinear 3D model can describe moderately large displacements and rotations in the film, but the computational cost increases dramatically compared to the 2D model in [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF] (100827 DOF in the 3D model instead of 1818 DOF in the 2D model). In this respect, an idea for simulating larger samples is to introduce reduced-order models, for example via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF].

Introduction

The wrinkling phenomenon is a major concern in the analysis, design and optimization of structures [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF], in material processing [2], in self-organized surface morphologies in biomechanics [START_REF] Efimenko | Nested self-similar wrinkling patterns in skins[END_REF], in pattern formation for micro/nano-fabrication [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF], etc. To analyze such phenomena, we propose the use of macroscopic models based on envelope equations, as in the field of cellular instability problems [153,[START_REF] Cross | Pattern formation out of equilibrium[END_REF][START_REF] Hoyle | Pattern formation, an introduction to methods[END_REF]].
Such macroscopic descriptions are common for Rayleigh-Bénard convection [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF], buckling of long structures [START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF][START_REF] Boucif | Experimental study of wavelength selection in the elastic buckling instability of thin plates[END_REF][START_REF] Abdelmoula | Influence of distributed and localized imperfections on the buckling of cylindrical shells[END_REF], surface wrinkling of stiff thin films resting on compliant substrates [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF], fiber microbuckling and compressive failure of composites [START_REF] Drapier | A structural approach of plastic microbuckling in long fibre composites: comparison with theoretical and experimental results[END_REF][START_REF] Kyriakides | On the compressive failure of fiber reinforced composites[END_REF][START_REF] Waas | Compressive failure of composites, part II: Experimental studies[END_REF], wrinkling of membranes [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF][START_REF] Wong | Wrinkled membranes, Part I: experiments[END_REF][START_REF] Rodriguez | Numerical study of dynamic relaxation with kinetic damping applied to inflatable fabric structures with extensions for 3D solid element and non-linear behavior[END_REF][START_REF] Lecieux | Experimental analysis on membrane wrinkling under biaxial load-Comparison with bifurcation analysis[END_REF][START_REF] Lecieux | Numerical wrinkling prediction of thin hyperelastic structures by direct energy minimization[END_REF] and many other instabilities arising in various scientific fields [153,[START_REF] Cross | Pattern formation out of equilibrium[END_REF]. The responses of such systems are often nearly periodic spatial oscillations. 
Therefore, the evolution can be described by envelope models similar to the famous Ginzburg-Landau equation [START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF][START_REF] Damil | Amplitude equations for cellular instabilities[END_REF][START_REF] Hunt | Cellular buckling in long structures[END_REF][START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. A new approach has recently been adopted by Damil and Potier-Ferry [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] to model wrinkling phenomena. The approach is based on the Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]]. In the proposed theory, the envelope equation is derived from an asymptotic double-scale analysis and the nearly periodic fields (the reduced model) are represented by Fourier series with slowly varying coefficients. This mathematical representation yields macroscopic models in the form of generalized continua, in which the macroscopic field is defined by the Fourier coefficients of the microscopic field. It has been shown recently that this approach is able to account for the coupling between local and global buckling in a computationally efficient manner [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF], and it remains valid beyond the bifurcation point [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Nevertheless, boundary conditions cannot be accounted for in a clear and reliable way, a drawback intrinsically linked to the use of any model reduction. To solve this problem, a multi-scale modeling approach has recently been proposed in order to bypass the question of boundary conditions [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]: the full model is implemented near the boundary, while the envelope model is used elsewhere, and these two models are bridged by the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF]. This idea clarifies the question of boundary conditions while keeping the advantages of both approaches: the envelope model in the bulk simplifies the response curves and limits the total number of degrees of freedom, and the fine model avoids the cumbersome problem of applying boundary conditions to the envelope equation. In this chapter, we revisit these coupling techniques between a reference model and a reduced model of Ginzburg-Landau type. Over the last decade, various numerical techniques have been developed to couple heterogeneous models, e.g.
the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF] or the bridging domain method [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF]. One can couple classical continuum and shell models [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], particle and continuum models [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]130,[START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Bauman | Adaptive multiscale modeling of polymeric materials with Arlequin coupling and Goals algorithms[END_REF][START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF], heterogeneous meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] or, more generally, heterogeneous discretizations [START_REF] Ben Dhia | On the use of XFEM within the Arlequin framework for the simulation of crack propagation[END_REF][START_REF] Biscani | Variable kinematic plate elements coupled via Arlequin method[END_REF]. For instance, local stresses near the boundary have been computed by coupling 2D elasticity near the boundary with a 1D beam model elsewhere [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF][START_REF] Hu | Multi-scale nonlinear modelling of sandwich structures using the Arlequin method[END_REF]. Basically, the Arlequin method aims at connecting two spatial approximations of an unknown field, generally a fine approximation U_f and a coarse approximation U_r. The idea is to require that these two approximations be neighbors in a weak and discrete sense and to introduce Lagrange multipliers in the corresponding differential problems. At the continuous level, a bilinear form must be chosen, which can be of L²-type, H¹-type or energy type [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. The first and most important application of the Arlequin method is the coupling of two different meshes discretizing the same continuous problem: in this case, the mediator problem should be discretized by a coarse mesh to avoid locking phenomena [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and spurious stress peaks [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF]. But the two connected problems do not always live in the same space, for instance when coupling particle and continuum problems.
In this case, a prolongation operator has to be introduced to convert the discrete displacement into a continuous one, and the connection is then performed between continuous fields [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]: this is consistent, because the continuous model can be seen as the coarser one. A similar approach has been applied to the coupling between plate and 3D models: a prolongation operator (i.e. from the coarse to the fine level) has been introduced and the integration is done in the 3D domain, but the discretization of the Lagrange multiplier corresponds to a projection on the coarser problem; in this sense, the plate/3D coupling is also achieved at the coarse level. In the same spirit, for the coupling between a fine model and an envelope model discussed in this chapter, the connection should also be done at the coarse level, i.e. between Fourier coefficients. On the contrary, in the previous paper [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF], a prolongation operator from the coarse to the fine model had been introduced and the connection had been done at the fine level. One can therefore wonder whether the imperfect connection observed in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] could be improved by introducing a coupling at the relevant level. This chapter tries to answer this question by studying again the Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF], which is a simple and illustrative example of quasi-periodic bifurcation. Very probably, the same ideas can be applied to the 2D macroscopic membrane models recently introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF]. Note that the presented new technique can be considered as nonlocal, since it connects Fourier coefficients involving integrals over a period. A similar nonlocal coupling has been introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF] for an atomic-to-continuum coupling, where the atomic model is reduced by averaging over a representative volume. The question addressed in this chapter is fairly generic when applying bridging techniques to reduced or multi-scale models. The first papers on the Arlequin method focused on the choice of a bilinear form and its discretization. But in asymptotic multiple-scale methods [136] or in computational homogenization [START_REF] Feyel | A multilevel finite element method (FE 2 ) to describe the response of highly non-linear structures using generalized continua[END_REF], one clearly distinguishes two independent spatial domains: a macroscopic domain to account for the slow variations and a microscopic domain for the rapid ones. Therefore, the connection operators between the two levels have to be clearly defined, as well as the level at which the coupling is achieved. This subject will be discussed in this chapter. The work presented in this chapter, i.e.
the new bridging technique based on a nonlocal reduction operator, is an original application of multiscale modeling; it has been published in the International Journal of Solids and Structures [START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF].

Macroscopic modeling of instability pattern formation

The numerical test considered in this chapter is the famous Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF], which corresponds to the problem of a compressed elastic beam coupled with a nonlinear foundation. It has been studied in many papers, for instance in [START_REF] Hunt | Structural localization phenomena and the dynamical phase-space analogy[END_REF][START_REF] Hunt | Cellular buckling in long structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF], because it is a very representative example in the study of cellular instabilities. From this microscopic model, a macroscopic envelope model will be presented and studied in the rest of the chapter. Among those discussed in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], it is not the most accurate, but it is the simplest one and it is able to describe the amplitude modulation of the oscillation. Let us recall that the central point of this chapter is a bridging technique used to correct a reduced model near the boundary. This technique has to be robust and it has to play its part for several levels of reduced models.

Description of the microscopic model

We consider the example of an elastic beam resting on a nonlinear elastic foundation, as shown in Fig. 4.1.

[Fig. 4.1: compressed beam (stiffnesses EI and ES, length L, normal force n(x)) on a nonlinear foundation of restoring force cv + c₃v³.]

The unknowns are the components u(x) and v(x) of the displacement vector and the normal force n(x), collected as U(x) = {u(x), v(x), n(x)}. We study the following set of differential equations:
\[
\begin{cases}
\dfrac{dn}{dx} + f = 0, & \text{(a)}\\[2mm]
\dfrac{n}{ES} = \dfrac{du}{dx} + \dfrac{1}{2}\left(\dfrac{dv}{dx}\right)^2, & \text{(b)}\\[2mm]
\dfrac{d^2}{dx^2}\left(EI \dfrac{d^2 v}{dx^2}\right) - \dfrac{d}{dx}\left(n \dfrac{dv}{dx}\right) + cv + c_3 v^3 = 0. & \text{(c)}
\end{cases} \tag{4.1}
\]
These equations will be referred to as the microscopic model; it depends on the four structural parameters EI, ES, c, c₃ and on a given axial force f(x). This system is able to describe periodic patterns. For instance, in the case without horizontal force (f = 0), with constant coefficients EI, c and a prescribed uniform compressive stress μ (n(x) = -μ), a relation between the critical load μ and the wave number q of the periodic patterns can be deduced from the linearized version of (4.1-c):
\[
\mu(q) = EI\, q^2 + \frac{c}{q^2}. \tag{4.2}
\]
The critical wave number $q = \sqrt[4]{c/EI}$ corresponds to the minimum of the neutral stability curve μ(q). Note that the solutions of the system (4.1) are stationary points of the following potential energy:
\[
\mathcal{P}(u, v) = \int_0^L \left( \frac{ES}{2}\left( u' + \frac{v'^2}{2} \right)^2 + \frac{EI}{2}\, v''^2 + \frac{c}{2}\, v^2 + \frac{c_3}{4}\, v^4 - f u \right) dx. \tag{4.3}
\]

Reduction procedure by Fourier series

We will conduct a multi-scale approach based on the concept of Fourier series with slowly varying coefficients. Let us suppose that the instability wave number q is known.
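This a priori wave number is typically the critical one minimizing (4.2). A minimal Python check of the neutral curve, using the unit parameters EI = c = 1 adopted in the numerical tests later in this chapter:

import numpy as np

# Neutral stability curve mu(q) = EI*q**2 + c/q**2, Eq. (4.2); its minimum is
# at q_c = (c/EI)**0.25 with mu_c = 2*sqrt(c*EI). EI = c = 1 as in the tests.
EI, c = 1.0, 1.0
q = np.linspace(0.5, 2.0, 1001)
mu = EI * q**2 + c / q**2

q_c = (c / EI) ** 0.25
print("numerical minimizer :", q[np.argmin(mu)])       # -> ~1.0
print("analytical q_c      :", q_c)                    # -> 1.0
print("critical load mu_c  :", 2.0 * np.sqrt(c * EI))  # -> 2.0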
In this way, all the unknowns of the model, U(x) = {u(x), v(x), n(x), ...}, can be written in the form of Fourier series whose coefficients vary more slowly than the harmonics:
\[
U(x) = \sum_{j=-\infty}^{+\infty} U_j(x)\, e^{jiqx}, \tag{4.4}
\]
where the Fourier coefficient $U_j(x)$ denotes the envelope of the j-th order harmonic and is conjugate to $U_{-j}(x)$. The macroscopic unknown fields $U_j(x)$ vary slowly over a period $\left[x,\, x + \frac{2\pi}{q}\right]$ of the oscillation. In practice, only a finite number of Fourier coefficients is considered. As shown in Fig. 1.6, at least two functions $U_0(x)$ and $U_1(x)$ are necessary to describe nearly periodic patterns: $U_0(x)$ can be identified with the mean value, while $U_1(x)$ represents the envelope or amplitude of the spatial oscillations. The mean value $U_0(x)$ is real valued, while the other envelopes are complex. Consequently, the envelope of the first harmonic can be written as $U_1(x) = r(x)\, e^{i\varphi(x)}$, where $r(x)$ represents the amplitude modulation and $\varphi(x)$ the phase modulation. If the phase varies linearly, $\varphi(x) = Qx + \varphi_0$, this type of approach is able to describe quasi-periodic responses whose wave number $q + Q$ differs slightly from the a priori chosen $q$. Hence, the method makes it possible to account for a change in wave number. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitudes $U_j(x)$. Calculation rules to manage these Fourier series with slowly varying coefficients have been introduced in [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF].

A simple macroscopic model with two real envelopes

The previous reduction procedure has been applied to the microscopic model (4.1) and (4.3) (see [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]). Several reduced models have been established, depending on the number of harmonics and on additional assumptions. The general methodology to obtain the macroscopic models is detailed in Appendix B, together with a very accurate reduced model with five harmonics, and another with one real and one complex envelope in Appendix C. In this chapter, we only recall the simplest possible model, which involves only three real functions: the mean values of the membrane unknowns $u_0(x)$, $n_0(x)$ and the first amplitude of the oscillation of the deflection $v_1(x)$. The potential energy of this simple macroscopic model is given by (see [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF])
\[
\mathcal{P}(u_0, v_1) = \int_0^L \left( \frac{ES}{2}\left( u_0' + v_1'^2 + q^2 v_1^2 \right)^2 + EI\left( 6 q^2 v_1'^2 + q^4 v_1^2 \right) + c\, v_1^2 + \frac{3 c_3}{2}\, v_1^4 - f_0 u_0 \right) dx. \tag{4.5}
\]
The differential equations (4.6) of the macroscopic model follow from the stationarity of this potential energy.

Transition operators in the framework of Fourier series with variable coefficients

The next problem studied in this chapter is the coupling between the macroscopic model (4.6) and the microscopic model (4.1). In the Arlequin framework, a bilinear coupling form has to be defined; this will be done in the following sections. In this part, we define and analyze the transition operators between the full model and the reduced one.
Prolongation and reduction operators

Let us discuss the possible ways of connecting the fine and reduced models. First, consider the transition from the envelopes $U_j(x)$ to the full model ($U_j(x) \rightarrow U(x)$). It has been introduced previously (see [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]):
\[
\mathcal{P}(U_j) = U(x) = \sum_{j=-\infty}^{+\infty} U_j(x)\, e^{jiqx}. \tag{4.7}
\]
With the simplification of Section 4.2.3, the unknowns reduce to the mean membrane displacement $u_0(x)$ and the first envelope of the deflection $v_1(x)$. Consequently, Eq. (4.7) simplifies to
\[
\mathcal{P}(u_0, v_1) = \begin{Bmatrix} u(x) \\ v(x) \end{Bmatrix} = \begin{Bmatrix} u_0(x) \\ 2 v_1(x) \cos(qx + \varphi) \end{Bmatrix}. \tag{4.8}
\]
Conversely, according to the assumption of a slowly varying envelope over a period $\left[ x - \frac{\pi}{q},\, x + \frac{\pi}{q} \right]$, the macroscopic unknowns can be deduced from the microscopic ones by the classical formula of Fourier series:
\[
U_j(x) = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x + y)\, e^{-jiq(x+y)}\, dy. \tag{4.9}
\]
Therefore, within the simplified theory of Section 4.2.3, the reduction operator reads
\[
\mathcal{R}(u, v) = \begin{Bmatrix} u_0(x) \\ v_1^R(x) \\ v_1^I(x) \end{Bmatrix} = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} \begin{Bmatrix} u(x + y) \\ v(x + y) \cos[q(x + y)] \\ -v(x + y) \sin[q(x + y)] \end{Bmatrix} dy. \tag{4.10}
\]

Numerical analysis of the reduction procedure

Theoretical remarks

Before presenting a bridging technique, we first examine the meaning of the reduction procedure (4.9)-(4.10) through the analysis of a numerical solution of the microscopic model (4.1). For a periodic function U(x), the Fourier coefficients are given by Eq. (4.9). In what follows, we compute their real part $U_j^R$ and imaginary part $U_j^I$ as
\[
U_j^R(x) = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x + y) \cos[jq(x + y)]\, dy, \tag{4.11}
\]
\[
U_j^I(x) = -\frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x + y) \sin[jq(x + y)]\, dy. \tag{4.12}
\]
From the mathematical standpoint, it is straightforward to obtain all the envelopes through Eq. (4.9). The reduction of the transversal and longitudinal displacements is conducted by considering five envelopes, j = 0, j = ±1 and j = ±2. We consider a beam of length L = 30π, with ES = 1, EI = 1, c = 1 and c₃ = 1/3, and we choose the instability wave number q = 1. The beam is subjected to an increasing global end shortening u(L) = -μL and the body force is f₀ = 0. The whole beam is divided into 120 cubic elements, so that the element length is $l_e = \pi/4$. Theoretically, to implement this nonlocal reduction, any point x can be chosen in Eqs. (4.11) and (4.12) as the center of the integral over the period $\left[ x - \frac{\pi}{q},\, x + \frac{\pi}{q} \right]$, except in the boundary regions $\left[ 0, \frac{\pi}{q} \right]$ and $\left[ L - \frac{\pi}{q},\, L \right]$. For simplicity, we choose each node of the microscopic mesh as the center of these integrals. Therefore, for each reduction point, the integration domain covers eight elements over the whole period, as shown in Fig. 4.2. The discretization of this nonlocal reduction can be written as
\[
U_j^R(x_i) = \frac{q}{2\pi} \int_{x_i - \pi/q}^{x_i + \pi/q} U(x) \cos(jqx)\, dx \approx \frac{q}{2\pi} \frac{l_e}{2} \sum_{x_n \in gp} U(x_n) \cos(jq x_n), \tag{4.13}
\]
\[
U_j^I(x_i) = -\frac{q}{2\pi} \int_{x_i - \pi/q}^{x_i + \pi/q} U(x) \sin(jqx)\, dx \approx -\frac{q}{2\pi} \frac{l_e}{2} \sum_{x_n \in gp} U(x_n) \sin(jq x_n), \tag{4.14}
\]
where the $x_i$ are the nodes of the mesh and $x_n \in gp$ are the corresponding Gauss points within the integration domain $I(x_i) = \left[ x_i - \frac{\pi}{q},\, x_i + \frac{\pi}{q} \right]$. Note that the above equations are exact only for functions that are exactly periodic with period $2\pi/q$. In general, however, it is not possible to predict precisely the period of the solutions of nonlinear equations, which changes with their amplitude.
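Formulae (4.13)-(4.14) are easy to prototype. The following Python sketch applies the sliding-window reduction to a modulated test signal (the envelope r(x) is an illustrative assumption) and recovers its local amplitude:

import numpy as np

# Sliding-window reduction (4.13)-(4.14): first Fourier coefficient of a
# modulated signal v(x) = 2 r(x) cos(qx - pi/2); the test envelope r(x) is an
# illustrative assumption, q = 1 as in the text.
q, L = 1.0, 30 * np.pi
x = np.linspace(0.0, L, 6001)
dx = x[1] - x[0]
r = 0.3 * np.sin(np.pi * x / L)              # slowly varying amplitude
v = 2.0 * r * np.cos(q * x - np.pi / 2)      # microscopic field (phi = -pi/2)

def trap(f):                                 # trapezoid rule with step dx
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

half = int(round((np.pi / q) / dx))          # half a period, in grid points
i = len(x) // 2                              # a point away from the boundaries
w = slice(i - half, i + half + 1)
v1R = q / (2 * np.pi) * trap(v[w] * np.cos(q * x[w]))
v1I = -q / (2 * np.pi) * trap(v[w] * np.sin(q * x[w]))

print(abs(v1R + 1j * v1I), r[i])             # |v1| recovers the local amplitude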
Let us consider, for instance, a harmonic function with a wave number Q that is close to, but slightly different from, the a priori given wave number q:
\[
v(x) = e^{i(Qx + \varphi)} + e^{-i(Qx + \varphi)}, \qquad Q \approx q, \quad Q \neq q. \tag{4.15}
\]
From Eqs. (4.13) and (4.14), the first-order Fourier coefficient can be written as
\[
v_1(x) = \frac{q}{2\pi} \Bigg[ e^{i\varphi} \underbrace{\int_{-\pi/q}^{\pi/q} e^{i(Q-q)(x+y)}\, dy}_{\text{slowly varying}} + e^{-i\varphi} \underbrace{\int_{-\pi/q}^{\pi/q} e^{-i(Q+q)(x+y)}\, dy}_{\text{oscillating}} \Bigg]. \tag{4.16}
\]
Therefore, the Fourier coefficient involves a slowly varying part and a rapidly varying one, this second part being disregarded by the macroscopic models of Section 4.2. The oscillating part is relatively small when the two wave numbers are close to each other.

Numerical tests for a simply supported beam

In order to analyze the practical accuracy of the reduction formulae (4.13) and (4.14), we first computed the response of the microscopic model (4.1) with simply supported boundary conditions: v(0) = v''(0) = 0, v(L) = v''(L) = 0. The reduction terms (4.13) and (4.14) were then calculated at μ = 2.21 for the orders j = 0, j = 1 and j = 2. Their spatial distribution is plotted in Fig. 4.3. It is found that only the imaginary part of the first-order envelope, $v_1^I$, is not small, the ratios $|v_1^R/v_1^I|$ and $|v_0/v_1^I|$ being of the order of $10^{-3}$ or $10^{-4}$. The second-order amplitudes $v_2^R$ and $v_2^I$ are at an even lower level than $v_0$. In other words, the response v(x) is approximately 0.3166 sin x and the complementary contributions are relatively small. This means that the effective wave number Q at μ = 2.21 corresponds precisely to the predicted quantity q = 1. We have checked that the wave number is also Q = 1 for any μ in the interval [2, 2.21].

Numerical tests for a clamped beam

With clamped boundary conditions, v(0) = v'(0) = 0, v(L) = v'(L) = 0, it is known that the effective wave number Q is not exactly the one predicted by the linear theory.

[Figures: envelopes $v_1^R$, $v_1^I$, $v_2^R$, $v_2^I$, with the reduction performed over the domain [π, 29π]; instability pattern at μ = 2.21; comparison of the reduced description ($v_0$, $v_1^R$, $v_1^I$, $v_2^R$, $v_2^I$) with the exact solution v(x).]

Bridging technique and discretization

In this section, the microscopic model (4.1) and (4.3) is implemented in a small region close to the boundary, which allows the introduction of "exact" boundary conditions into the system. The simplest envelope model (4.5) and (4.6) is applied in the bulk. These two types of models are bridged by the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF]. In the Arlequin framework, the two mechanical fields are matched in a weak sense inside a gluing zone and the potential energy is distributed between the two models.

Arlequin method in the context of prolongation or reduction coupling

The domain of the whole mechanical system is partitioned into two overlapping subzones: $\Omega_f$ (the microscopic, fine model domain) and $\Omega_r$ (the macroscopic, reduced model domain).
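The splitting (4.16) can be checked directly: for Q close to q, the windowed coefficient is dominated by its slowly varying part. A minimal Python sketch with illustrative values of q, Q and φ:

import numpy as np

# Splitting (4.16): the windowed first Fourier coefficient of v = 2 cos(Qx+phi)
# is dominated by its slowly varying part when Q is close to q.
# The values of q, Q and phi are illustrative.
q, Q, phi = 1.0, 1.05, 0.3
y = np.linspace(-np.pi / q, np.pi / q, 2001)

def trap(f):                                  # trapezoid rule on the y grid
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

def v1(x):                                    # full windowed coefficient, Eq. (4.9)
    vy = 2.0 * np.cos(Q * (x + y) + phi)
    return q / (2 * np.pi) * trap(vy * np.exp(-1j * q * (x + y)))

def slow(x):                                  # slowly varying term of Eq. (4.16)
    return q / (2 * np.pi) * np.exp(1j * phi) * trap(np.exp(1j * (Q - q) * (x + y)))

for x in (0.0, 10.0, 20.0):
    print(f"x={x:5.1f}  |v1|={abs(v1(x)):.4f}  |slow part|={abs(slow(x)):.4f}")
# Both magnitudes stay close to 1; the oscillating remainder is only a few percent.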
The resulting superposition zone $S = \Omega_f \cap \Omega_r$ contains the gluing zone $S_g$ ($S_g \subseteq S$) (see Fig. 4.11). Here the two zones $S$ and $S_g$ cannot coincide, because of the nonlocal character of the reduction operator.

Energy distribution

Setting $u_f = \{u(x), v(x),\, x \in \Omega_f\}$ and $u_r = \{u_0(x), v_1(x),\, x \in \Omega_r\}$, the contributions of the two models to the potential energy of the whole system, defined in Eqs. (4.3) and (4.5), are
\[
\begin{cases}
\mathcal{P}_f(u_f) = \displaystyle\int_{\Omega_f} \left[ \alpha_f\, \mathcal{W}(u_f) - \beta_f\, f u \right] d\Omega,\\[2mm]
\mathcal{P}_r(u_r) = \displaystyle\int_{\Omega_r} \left[ \alpha_r\, \mathcal{W}(u_r) - \beta_r\, f_0 u_0 \right] d\Omega,
\end{cases} \tag{4.17}
\]
where
\[
\begin{cases}
\mathcal{W}(u_f) = \dfrac{ES}{2}\left( u' + \dfrac{v'^2}{2} \right)^2 + \dfrac{EI}{2}\, v''^2 + \dfrac{c}{2}\, v^2 + \dfrac{c_3}{4}\, v^4,\\[2mm]
\mathcal{W}(u_r) = \dfrac{ES}{2}\left( u_0' + v_1'^2 + q^2 v_1^2 \right)^2 + EI\left( 6 q^2 v_1'^2 + q^4 v_1^2 \right) + c\, v_1^2 + \dfrac{3 c_3}{2}\, v_1^4.
\end{cases} \tag{4.18}
\]
In order to model the energy consistently in the overlapping domain, the energy associated with each domain is balanced by weight functions, denoted $\alpha_i$ for the internal work and $\beta_i$ for the external work. These weight functions are assumed to be positive and piecewise continuous in $\Omega_i$, and they satisfy
\[
\begin{cases}
\alpha_f = \beta_f = 1 & \text{in } \Omega_f \setminus S,\\
\alpha_r = \beta_r = 1 & \text{in } \Omega_r \setminus S,\\
\alpha_f + \alpha_r = \beta_f + \beta_r = 1 & \text{in } S.
\end{cases} \tag{4.19}
\]
More details on the selection of these functions can be found in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. In this chapter, we choose piecewise linear continuous weight functions in the overlapping region $S$.

Coupling alternatives

The coupling technique implies a connection between the microscopic model and the envelope model. In the Arlequin framework, a coupling based on the coarse model is generally preferred, to avoid locking phenomena. This requires defining a nonlocal reduction operator $u_f \rightarrow \mathcal{R}(u_f)$, which involves the Fourier transform. Conversely, the other way is to perform the inverse connection using a local prolongation operator $u_r \rightarrow \mathcal{P}(u_r)$, which reproduces from $u_r$ a compatible field to be coupled with $u_f$ (see Section 4.3.1). Therefore, the coupling is performed by requiring that one of the two following conditions be satisfied in a mean sense:
\[
\mathcal{R}(u_f) - u_r = 0, \quad \forall x \in S_g; \tag{4.20}
\]
\[
u_f - \mathcal{P}(u_r) = 0, \quad \forall x \in S_g. \tag{4.21}
\]
The literature on the Arlequin method recommends the first way, (4.20) [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. For simplicity, the prolongation method (4.21) was studied in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]. Here, we will test the reduction approach (4.20), which should lead to a better coupling.

Prolongation coupling approach

The prolongation operator was defined in Eq. (4.8). The reduced model is based on strong simplifications, and especially on the assumption of a constant arbitrary phase. As in [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF], we choose $\varphi = -\pi/2$ in what follows, which leads to the following form of the prolongation operator:
\[
\mathcal{P}(u_r) = \begin{bmatrix} 1 & 0 \\ 0 & 2\sin(qx) \end{bmatrix} \begin{Bmatrix} u_0 \\ v_1 \end{Bmatrix}, \quad \forall x \in S_g. \tag{4.22}
\]
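Eq. (4.22) is local and trivial to apply; composing it with the windowed transversal reduction recovers the envelope away from the boundaries, a useful consistency check of the φ = -π/2 convention. A minimal Python sketch with an illustrative envelope:

import numpy as np

# Consistency of prolongation (4.22) and transversal reduction with phi = -pi/2:
# prolongate a slowly varying envelope v1, then recover it by the windowed
# reduction. Envelope and sample points are illustrative.
q = 1.0
y = np.linspace(-np.pi / q, np.pi / q, 4001)

v1 = lambda x: 0.2 + 0.05 * np.cos(0.1 * x)        # slowly varying envelope
v  = lambda x: 2.0 * v1(x) * np.sin(q * x)         # prolongation P(u_r), Eq. (4.22)

def R_v(x):                                        # transversal reduction
    f = v(x + y) * np.sin(q * (x + y))
    return q / (2 * np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

for x in (5.0, 15.0, 25.0):
    print(f"x = {x:4.1f}   v1 = {v1(x):.4f}   R_v(P(v1)) = {R_v(x):.4f}")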
(4.22) By introducing Lagrange multipliers λ = {λ u (x), λ v (x), x ∈ S g } as a fictitious gluing force, the coupling equation (4.21) can be rewritten in a weak form as C (λ, u f -P(u r )) = 0, ∀λ ∈ M, (4.23) where M is the mediator space. Eq. (4.23) could be considered as a constraint in an optimization problem. The corresponding stationary function is given in a Lagrangian form as L (u f , u r , λ) = P f (u f ) + P r (u r ) + C (λ, u f -P(u r )) . (4.24) From Eq. (4.24), three equations are obtained according to δu f , δu r and δλ:                P f (δu f ) + C (λ, δu f ) = 0, ∀δu f ∈ K.A., P r (δu r ) -C (λ, P(δu r )) = 0, ∀δu r ∈ K.A., C (δλ, u f ) -C (δλ, P(u r )) = 0, ∀δλ ∈ M, (4.25) where K.A. stands for kinematically admissible. Finally, the coupling operator C is defined as follows: C (λ, u) = ∫ Sg ( λ • u + ℓ 2 ε(λ) : ε(u) ) dΩ. (4.26) It is an H 1 -type coupling operator. When ℓ = 0, it becomes an L 2 -type coupling operator. The choice of the length has been discussed in the literature [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Guidault | On the L 2 coupling and the H 1 couplings for an overlapping domain decomposition method using Lagrange multipliers[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] and the difficulties of L 2 coupling have been pointed out. This point will be re-discussed in Section 4.5.3. Reduction-based coupling approach In Section 4.3.2, it was shown that the longitudinal displacement can be adequately described by a single envelope u 0 since it is almost linear. Moreover, the two longitudinal displacements coincide (see Eqs. (4.22) and (4.28) below) so that their bridging procedure (4.23) and (4.27) is identical. In the transversal direction, the reduction operator is the first order Fourier coefficient that has a nonlocal character (see Eq. (4.10)). For simplicity, C (λ, u) will be the L 2 -type scalar product. The reduction operator has been defined in Eq. (4.10). With φ = -π/2, the weak form of coupling formula (4.20) reads C (λ, R(u f ) -u r ) = 0, ∀λ ∈ M, ( 4.27) where R(u f ) = { R u (x) R v (x) } =      u(x) q 2π ∫ π q -π q v(x + y) sin [q(x + y)] dy      . (4.28) The corresponding stationary function is also in a Lagrangian form: L (u f , u r , λ) = P f (u f ) + P r (u r ) + C (λ, R(u f ) -u r ) . (4.29) From Eq. (4.29), one can obtain three equations according to δu f , δu r and δλ:                P f (δu f ) + C (λ, R(δu f )) = 0, ∀δu f ∈ K.A., P r (δu r ) -C (λ, δu r ) = 0, ∀δu r ∈ K.A., C (δλ, R(u f )) -C (δλ, u r ) = 0, ∀δλ ∈ M. (4.30) Comments In works of Ben Dhia and Rateau [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF], it was established that it is better to discretize the coupling equation in a coarse manner (choice of the discrete mediator space M ). In this chapter, we will further discuss whether the coupling equation has to be defined at a fine level (4.21) and (4.23) or at a coarse level (4.20) and (4.27). 
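To fix ideas before turning to the discretization, the two transition operators can be prototyped directly. The sketch below (function names are illustrative and a crude rectangle quadrature is assumed; this is not the thesis implementation) implements the local prolongation P of Eq. (4.22) and the nonlocal reduction R of Eq. (4.28) with $\varphi = -\pi/2$, i.e. the quantities entering the fine-level constraint (4.21) and the coarse-level constraint (4.20), respectively:

```python
import numpy as np

q = 1.0

def prolongation(u0, v1, x):
    """Local operator P of Eq. (4.22), with the frozen phase phi = -pi/2:
    rebuilds a fine-scale field from the envelopes (u0, v1)."""
    return u0, 2.0 * v1 * np.sin(q * x)

def reduction(v_fine, x, n=400):
    """Nonlocal operator R_v of Eq. (4.28): first Fourier coefficient of
    the deflection, evaluated with a crude rectangle quadrature."""
    y = np.linspace(-np.pi / q, np.pi / q, n, endpoint=False)
    dy = y[1] - y[0]
    s = x[:, None] + y[None, :]
    return q / (2.0 * np.pi) * np.sum(v_fine(s) * np.sin(q * s), axis=1) * dy

# A fine deflection with a slowly modulated amplitude a(x) = 1 + 0.1 sin(0.05 x):
v_fine = lambda s: (1.0 + 0.1 * np.sin(0.05 * s)) * np.sin(s)
x_g = np.linspace(5 * np.pi, 6 * np.pi, 50)       # a gluing zone S_g

v1 = reduction(v_fine, x_g)                        # ~ a(x)/2, a coarse quantity
_, v_back = prolongation(0.0, v1, x_g)             # ~ v_fine on S_g

# Coarse-level coupling (4.20) constrains v1 against the envelope unknown;
# fine-level coupling (4.21) constrains v_back against v_fine itself.
```

The nonlocal character of R, absent from P, is precisely what distinguishes the two coupling alternatives compared in the remainder of the chapter.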
Discretization The chosen discretization of the microscopic model (4.1) is very classical with linear interpolation for the axial displacement and cubic Hermite interpolation for the deflection. C 0 elements can be chosen since the reduced energy (4.5) involves only the first derivatives. As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], we use 3-node quadratic elements. The discretization of the coupling operators is presented with more details. Discretization of the prolongation coupling The finite element method is applied to solve Eq. (4.25). The discretization of unknowns is as follows: u f = { u v } e = [ N f u N f v ] { Q f } e , ( 4.31 ) u r = { u 0 v 1 } e = [ N r u N r v ] { Q r } e , ( 4.32 ) λ = { λ u λ v } e = [ N r u N r v ] { Q λ } e , ( 4.33) where { Q f } e , {Q r } e and {Q λ } e are the elementary nodal unknowns of u f , u r and λ, respectively. The shape functions N f u and N f v are, respectively, described by Lagrange and Hermite interpolating polynomials. To avoid locking the microscopic behavior to the macroscopic behavior in the coupling zone, the discretization of λ should be conducted as u r (see more details in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF]). Finally, one can obtain the global discrete system in the generic form of a mixed problem:                [ R f (Q f ) ] + [C f ] t {Q λ } = 0, [R r (Q r )] -[C r ] t {Q λ } = 0, [C f ] { Q f } -[C r ] {Q r } = 0, (4.34) where the residuals R f (Q f ) and R r (Q r ) are detailed in Appendix D. The coupling matrix C f and C r are assembled from the elementary matrices that have the following forms: C e f = ∫ Ωe ([ N r u N r v ] [ N f u N f v ] t + ℓ 2 [ N r u ′ N r v ′ ] [ N f u ′ N f v ′ ] t ) dΩ, (4.35 ) C e r = ∫ Ωe ([ N r u N r v ] [ N r u 2N r v sin(qx) ] t + ℓ 2 [ N r u ′ N r v ′ ] [ N r u ′ 2N r v ′ sin(qx) + 2qN r v cos(qx) ] t ) dΩ, (4.36 ) in which Ω e represents the elementary integration domain. The resulting nonlinear system (4.34) is solved using the classic Newton-Raphson method. Discretization of the reduction-based coupling The discretization of unknowns u f , u r and λ is the same as in Section 4.4.2. In addition, the global discrete system (4.34) has the same form as those in the prolongation coupling but with a completely different coupling matrix C f due to the nonlocal character of the coupling operator. Now let us look at the discretization of the bilinear form C (λ, R(u f )) that follows from two integrations. The bilinear form C (•, •) is an integral in the macroscopic domain split into macroscopic elements E, each of them being associated with their Gauss points (GP (E)) in S g . This first integral is quite classical with finite element method. The reduction operator R(•) is an integral in the interval I(x i ) = [ x i -π q , x i + π q ] in the microscopic domain that is split into small elements e, each of them being associated with its Gauss points (gp(e)). Classically, these Gauss points are defined in a reference interval [-1, 1]. The bilinear form is discretized in quite a classical way: C (λ, R(u f )) = ∑ E L E 2 ∑ x i ∈GP (E) ( ⟨Q λ u ⟩ E {N r u (x i )}⟨N f u ⟩ { Q f u } e + ⟨Q λ v ⟩ E {N r v (x i )}R v (x i ) ) , (4.37) where L E is the length of the macroscopic elements. Note that two Gauss points on each macroscopic element can fully meet the accuracy requirements in this case (see Fig. 4.12). 
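The two-level integration just described can be outlined in a few lines. The following sketch assembles the $\lambda_v$-part of $C(\lambda, R(u_f))$ of Eq. (4.37) with two Gauss points per macroscopic element and a nested micro-scale quadrature for $R_v(x_i)$, which is detailed just below in Eq. (4.38). It assumes a linear interpolation of the multiplier and evaluates the fine field through a callable, rather than through the Hermite interpolation of the actual code; all names are illustrative:

```python
import numpy as np

GP, W = np.array([-1.0, 1.0]) / np.sqrt(3.0), np.array([1.0, 1.0])  # 2-point Gauss

def R_v(xi, v_fine, q, n=200):
    """Nested micro-scale quadrature of the nonlocal term R_v(x_i)."""
    s = np.linspace(xi - np.pi / q, xi + np.pi / q, n, endpoint=False)
    ds = s[1] - s[0]
    return q / (2.0 * np.pi) * np.sum(v_fine(s) * np.sin(q * s)) * ds

def coupling_vector(nodes, v_fine, q):
    """lambda_v-part of C(lambda, R(u_f)), cf. Eq. (4.37), with a linear
    interpolation N of the multiplier on each macroscopic element E."""
    c = np.zeros(len(nodes))
    for e in range(len(nodes) - 1):
        xa, xb = nodes[e], nodes[e + 1]
        LE = xb - xa
        for gp, w in zip(GP, W):
            xg = 0.5 * (xa + xb) + 0.5 * LE * gp          # macro Gauss point
            N = np.array([0.5 * (1.0 - gp), 0.5 * (1.0 + gp)])
            c[e:e + 2] += 0.5 * LE * w * N * R_v(xg, v_fine, q)
    return c

nodes = np.linspace(5 * np.pi, 6 * np.pi, 6)               # gluing zone S_g
print(coupling_vector(nodes, lambda s: np.sin(s), q=1.0))  # R_v = 1/2 here
```

For $v = \sin(qs)$ the nested quadrature returns $R_v = 1/2$ at every macroscopic Gauss point, so the assembled vector reduces to $\tfrac{1}{2}\int N\,dx$, a convenient sanity check.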
The reduction operator R v (x i ) in Eq. (4.28) is detailed from the following classical integration formula: R v (x i ) = q 2π ∫ x i + π q x i -π q v(x) sin(qx)dx ≈ q 2π ∑ e l(e) 2 ∑ x j ∈gp(e) v(x j ) sin(qx j ) = q 2π ∑ e l(e) 2 ∑ x j ∈gp(e) sin(qx j )⟨N f v (x j )⟩ { Q f v } e , ( 4.38) where the length of the interval l(e) is the intersection of the microscopic element with the integral region I(x i ) = [ x i -π q , x i + π q ] . It does not necessarily coincide with the interval of interpolation (see Fig. 4.12). The coupling matrix C r can be assembled from the elementary matrices as C e r = ∫ Ωe ([ N r u N r v ] [ N r u N r v ] t ) dΩ. (4.39) The nonlinear system is solved using the classic Newton-Raphson method. bifurcation, which can be explained via the Ginzburg-Landau equation [153,[START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF]. In what follows, we distinguish the bulk behavior and the boundary behavior, respectively in the regions [3π, 15π] and [0, 3π]. The post-buckling amplitude in the bulk is correctly captured by all the models, especially the macroscopic model (see Fig. 4.15). Indeed, the boundary conditions have little influence on the macroscopical amplitude in the bulk. In the same way, all the models correctly predict the bifurcation point µ ≈ 2. Near the boundary, the macroscopic model and prolongation coupling lead to divergent results. Only the reduction-based coupling is able to reproduce the prediction of the reference model (see Fig. 4.16). The details of the response near the boundary are illustrated in Fig. 4.19 for µ = 2.21. One can observe the coincidence between the reduction-based coupling and the reference model. As for the macroscopic model and prolongation coupling, the post-buckling instability pattern is qualitatively similar to the reference model but with a significant phase shift. It is well-known that the coupling matrices have to be defined on the coarse level to avoid locking phenomena. In [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], this coarse character is related to the discretization of the Lagrange multipliers λ. In the present case, the coupling matrices C e f and C e r depend on the discretized mediator space, but there is also another alternative: to connect the Fourier coefficients (reduction-based coupling) or the functions of the fine model (prolongation coupling). Our results clearly establish that the coupling procedure has to be done in the coarse space, i.e. in the space of Fourier coefficients. Indeed, considering the values of the functions v f (x) and v r (x) in the gluing zone for the prolongation coupling (see Fig. 4.17), one can observe that the two functions coincide (v f (x) = v r (x) in S g ) and this locking phenomenon causes undesirable behavior in the boundary region (see Fig. 4.19). On the contrary, as for the reduction-based coupling (see Fig. 4.18), v f (x) and v r (x) are not identical in the gluing zone, which leads to an accurate prediction in the boundary region (see Fig. 4.19). About convergence The definition of the numerical model depends on the three meshes (micro, macro and bridging), the reduced model and the location of gluing zone. The choice of meshes follows the same rules as with finite element technique and it is not re-discussed here. The definition of the reduced model induces some limits to the accuracy that can be expected in the macroscopic domain Ω r . 
With the choice made in this chapter, one can get a good prediction of the amplitude of the wrinkling patterns in this zone, but not of their phase. This limitation clearly appears in Figs. 4.17 and 4.18 and cannot be improved within this reduced model. Thus, we focus on the influence of the gluing zone or, equivalently, on the size of the microscopic domain $\Omega_f$. In Figs. 4.20 and 4.21, the Arlequin solution is compared with the reference solution for gluing zones located in $[3\pi, 4\pi]$ and $[8\pi, 9\pi]$, respectively. In the first case, with a small $\Omega_f$, the Arlequin solution is valid in almost all of the interval $\Omega_f = [0, 3\pi]$. This is already a good result, which was not obtained with the prolongation coupling (compare with Fig. 4.19). If the microscopic domain is extended up to $\Omega_f = [0, 8\pi]$, the Arlequin solution becomes accurate both in the microscopic and in the gluing zones, which is the best accuracy to be expected with this reduced model.

$H^1$ versus $L^2$ coupling

The choice of the coupling bilinear form is a basic question within the Arlequin method. The difficulties raised by the $L^2$ coupling are well known: it has been established that the Lagrange multiplier converges to a distribution and not to a function, for 1D elasticity with different meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and for atomistic-continuum coupling [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. We come back to this discussion in the present case of a coupling between a nonlinear beam and an envelope model, for the prolongation coupling. The same clamped beam as in Section 4.5.1 is considered. In Fig. 4.22, we have plotted the transversal displacement in the zone of the microscopic model on the left. It should be compared with Fig. 4.19, where the prolongation and reduction-based couplings were evaluated. The difference is rather weak, of the order of a few percent, whereas a significant difference was observed between the prolongation and reduction-based couplings. Nevertheless, this does not mean that the $H^1$ and $L^2$ couplings are equivalent. Fig. 4.23 evaluates the difference between the two displacements $v_f$ and $v_r$ for the $H^1$ and $L^2$ couplings, and this difference is much smaller with the $H^1$ coupling. Fig. 4.24 then depicts the spatial evolution of the normal force $n(x)$, together with the macroscopic stress $n_0(x)$, which is the mean value of $n(x)$. In this case, these two quantities should be constant because of Eqs. (4.1-a) and (4.6-a). One observes rather strong oscillations in the coupling zone, which are much more severe in the $L^2$ case (about 4.2%) than in the $H^1$ case (about 1.6%). Nevertheless, these oscillations have little influence outside the gluing zone. Note that the small oscillations in the microscopic zone on the left are due to membrane locking in the finite element approximation. Finally, Fig. 4.25 presents the variations of the Lagrange multipliers in the two coupling cases. Localized forces are observed near the end of the gluing zone, which was expected by comparison with previous results; see for instance [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF].
In the latter paper, the origin of these so-called "ghost forces" was carefully analyzed, and some corrections were proposed through an appropriate choice of weights and especially by introducing interaction forces between the coarse and fine models. Hence, there are differences between the $H^1$ and $L^2$ couplings in the present case of bridging between a macroscopic and a microscopic model, and the response is very sensitive in the gluing zone. However, the coupling can be achieved either at the micro or at the macro level. In the studied case, the best way is to couple in the macroscopic domain, and this point is at least as important as the choice of the bilinear form, as the figures above clearly show.

[Figure: relative difference $|v_f - v_r|/\max|v_f|$ in the gluing zone $S_g = [5\pi, 6\pi]$, for the $H^1$ and $L^2$ couplings.]

Chapter conclusion

In this chapter, we have discussed how to connect a fine and a coarse model within the Arlequin framework. A typical cellular instability problem has been accurately analyzed, in which the coarse model is defined by envelope equations of Ginzburg-Landau type. An Arlequin problem involves a coupling operator and two models to be connected. The efficiency of the numerical technique depends on all three, but the coupling model has to be sufficiently robust and compatible with various choices of reduced models. In the case of the envelope equations discussed here, the reduction operator leads to spurious oscillations in the Fourier coefficients, which have been smoothed by the coupling operator. The presented reduction-based coupling has permitted us to accurately describe the response of the system near the boundary, even with a rather coarse reduced model.

The Arlequin method has been applied in a multi-scale framework, which has required the accurate definition of transition operators between the two levels, i.e. a prolongation operator from the coarse to the fine level and a reduction operator in the opposite sense. These two operators play a crucial role in the coupling technique, as do the mediator bilinear form and its discretization. In the studied case, it is clearly better to use the reduction operator for a coupling at the coarse level than the prolongation operator for a coupling at the fine level. It will be interesting to discuss this question with other multi-scale models, like those obtained in computational homogenization. It will also be interesting to apply a similar bridging technique to the coupling between full shell models and 2D envelope equations, as introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF]. Clearly, the bilinear form should be of $H^1$-type, as in the nonlocal coupling introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF], and the coupling has to be performed in the space of Fourier coefficients, as established in the present chapter.

The classical models developed in the previous chapters can finely resolve the local wrinkles of the film, while the computational cost would be rather high for simulating large samples with a large wave number, which requires numerous finite elements.
In this respect, one idea for simulating larger samples is to introduce reduced-order models, for example via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. In this chapter, the macroscopic modeling methodology mentioned in Section 1.3 will be conducted both in 2D and 3D cases for film/substrate multi-scale modeling. A generalized macroscopic modeling framework will be deduced first, then it goes to specific 2D and 3D cases with simplifications and assumptions so as to be more efficient. Precisely, the 2D macroscopic film/substrate model will be based on the established classical model presented in Chapter 2, while all the mechanical fields are in the macroscopic perspective represented by Fourier coefficients. As for the 3D modeling of film/substrate, a nonlinear macroscopic membranewrinkling model that accounts for both membrane energy and bending energy will be deduced first. Then a linear macroscopic elastic model will be derived. Finally, following the same strategy as in Section 3.2.3, these two models will be coupled at the interface through Lagrange multipliers. General macroscopic modeling framework We will conduct a multi-scale approach based on the concept of Fourier series with slowly varying coefficients. Let us suppose that the instability wave number q is known. In this way, all the unknowns of model U (x) = {u i (x), w i (x)} can be written in the form of Fourier series, whose coefficients vary more slowly than the harmonics: U (x) = +∞ ∑ j=-∞ U j (x)e jiqx , ( 5.1) where the Fourier coefficient U j (x) denotes the envelope for the j th order harmonic, which is conjugated with U -j (x). The macroscopic unknown fields U j (x) slowly vary over a period [ x, x + 2π q ] of the oscillation. In practice, only a finite number of Fourier coefficients will be considered. As shown in Fig. 1.6, at least two functions U 0 (x) and U 1 (x) are necessary to describe nearly periodic patterns: U 0 (x) can be identified with the mean value while U 1 (x) represents the envelope or amplitude of the spatial oscillations. The mean value U 0 (x) is real valued, while the other envelopes are complex. Consequently, the envelope of the first harmonic U 1 (x) can be written as U 1 (x) = r(x)e iφ(x) , where r(x) represents the amplitude modulation and φ(x) is the phase modulation. If the phase varies linearly like φ(x) = Qx + φ 0 , this type of approach is able to describe quasi-periodic responses whose wave number q + Q slightly differs from the a priori chosen q. Hence, the method makes it possible to account for a change in wave number. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitude U j (x). In what follows, we will develop a generalized macroscopic modeling framework for film/substrate system. 
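As a quick illustration of the representation (5.1), the following sketch (all values are illustrative) reconstructs a quasi-periodic field from a real mean field $U_0$ and a complex first envelope $U_1 = r\,e^{i\varphi}$ with a linear phase $\varphi(x) = Qx + \varphi_0$, showing the shift of the effective wave number from $q$ to $q + Q$ mentioned above:

```python
import numpy as np

q = 1.0                                      # a priori wave number
x = np.linspace(0.0, 100 * np.pi, 8000)

# Envelopes in the sense of Eq. (5.1): a real mean field U0, a complex U1.
U0 = 0.05 * np.ones_like(x)                  # slowly varying mean value
r, Q, phi0 = 0.3, 0.04, 0.0                  # amplitude and linear phase
U1 = r * np.exp(1j * (Q * x + phi0))         # U1 = r(x) exp(i phi(x))

# Truncated reconstruction with harmonics j = -1, 0, 1:
U = (U0 + U1 * np.exp(1j * q * x) + np.conj(U1) * np.exp(-1j * q * x)).real

# Count oscillations to estimate the effective wave number:
crossings = np.where(np.diff(np.sign(U - U0)) != 0)[0].size
print(crossings * np.pi / (x[-1] - x[0]))    # ~ q + Q = 1.04
```

The linear phase modulation thus produces a pattern oscillating at the shifted wave number $q + Q$, even though the expansion is written on the a priori wave number $q$.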
Some calculation rules have been introduced in [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] to manage these Fourier series with slowly varying coefficients. Derivative operators can be calculated exactly, according to the rules presented in Appendix B. Let us apply the above methodology in general nonlinear elasticity problem. With the notations in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF], neglecting volume forces, the principle of virtual work can be expressed as ∫ Ω ⟨δγ⟩ {s} dΩ = λ ∫ ∂Ω ⟨δu⟩ {f } dΩ, (5.2) where {δu} is the virtual displacement vector and {f } is the external loading vector at the boundary with the incremental loading parameter λ. In the framework of linear constitutive laws, the second Piola-Kichhoff stress {s} and the Green-Lagrange strain vector {γ} are linked linearly, while the strain is related to the displacement gradient {θ} by a quadratic relationship:    {s} = [D] {γ}, {γ} = [H] {θ} + 1 2 [A(θ)] {θ}, ( 5.3) where the matrices [D], [H] and [A(θ)] are defined in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. Note that the matrix [A(θ)] satisfies the following symmetry property: [A(θ)] {φ} = [A(φ)] {θ}. ( 5.4) We seek nearly periodic responses that vary rapidly in one direction. This characteristic direction and the period are described by a wave vector q ∈ R 3 that is assumed as a given parameter. In practice, this vector comes from a linear stability analysis. Hence, the vector {Λ(x)}, which includes displacement vector, its gradient, strain and stress tensors, is sought in the form of Fourier series, whose coefficients {Λ j (x)} vary slowly. For simplicity, we keep harmonics up to level 2 but it may achieve to high level for complex wave oscillation: Λ(x) = +2 ∑ j=-2 Λ j (x)e jiqx . (5.5) After applying it to the constitutive law (5.3), one can obtain a macroscopic constitutive law for harmonic 0 (real value) and two constitutive laws for harmonics 1 and 2 (complex values), all these equations being coupled:          {γ 0 } = [D] -1 {s 0 } = [H] {θ 0 } + 1 2 [A(θ 0 )] {θ 0 } + [A(θ -1 )] {θ 1 } + [A(θ -2 )] {θ 2 }, {γ 1 } = [D] -1 {s 1 } = [H] {θ 1 } + [A(θ 0 )] {θ 1 } + [A(θ -1 )] {θ 2 }, {γ 2 } = [D] -1 {s 2 } = [H] {θ 2 } + [A(θ 0 )] {θ 2 } + 1 2 [A(θ 1 )] {θ 1 }. (5.6) Therefore, a generalized continuum model has been defined, which is a sort of superposition of several continua. Like each field in the model, the displacement is replaced by a generalized displacement that includes five Fourier coefficients for j ∈ [-2, 2]. To facilitate the understanding and implementation, the macroscopic constitutive law (5.6) can be unified in the same generic form as the starting law (5.3):    {S} = [D gen ] {Γ}, {Γ} = [H gen ] {Θ} + 1 2 [A gen (Θ)] {Θ}, ( 5.7) where the generalized stress {S}, the generalized strain {Γ} and the generalized displacement gradient {Θ} also include five Fourier coefficients for j ∈ [-2, 2]. The matrices that represent linear relationships are diagonal since the couplings between harmonics appear only for nonlinear terms, which leads to [D gen ] =         D 0 0 0 0 0 D/2 0 0 0 0 0 D/2 0 0 0 0 0 D/2 0 0 0 0 0 D/2         , ( 5.8 ) [H gen ] =         H 0 0 0 0 0 2H 0 0 0 0 0 2H 0 0 0 0 0 2H 0 0 0 0 0 2H         . 
( 5.9) The nonlinear aspects are taken into account by a single matrix: [A gen (Θ)] = 2         A(θ 0 )/2 A(θ R 1 ) A(θ I 1 ) A(θ R 2 ) A(θ I 2 ) 0 A(θ 0 ) 0 A(θ R 1 ) A(θ I 1 ) 0 0 A(θ 0 ) -A(θ I 1 ) A(θ R 1 ) 0 A(θ R 1 )/2 -A(θ I 1 )/2 A(θ 0 ) 0 0 A(θ I 1 )/2 A(θ R 1 )/2 0 A(θ 0 )         . ( 5.10) The size of this matrix is 30 × 45 in 3D cases and 15 × 20 in 2D cases. Note that the j th component of the displacement gradient is not the gradient of the j th component of the displacement. The rule defining the Fourier components of a gradient vector reads {∇u} j = {∇(u j )} + ji[Q]{u j }, ( 5.11) where [Q] =    {q} 0 0 0 {q} 0 0 0 {q}    . (5.12) In the same way, we will define the principle of virtual work for the extended macroscopic continuum. The previously defined technique with slowly variable Fourier coefficients can be applied to the balance equations. The weak form of those equations is then the extended principle of virtual work. Besides, this weak form can be deduced directly from the principle of virtual work of the initial problem (5.2) by using Parseval identity (B.3) (see Appendix B). The deduced principle of virtual work involves the Fourier coefficients of stress and strain: ∫ Ω +2 ∑ j=-2 ⟨ δγ -j ⟩ {s j } dΩ = λ ∫ ∂Ω +2 ∑ j=-2 ⟨δu j ⟩ {f j } dΩ, (5.13) where the left hand side involves Fourier coefficients of stress and strain, i.e. macroscopic stress and strain. One can find that the extended principle of virtual work takes the same form as the initial model (5. P gen (U ) = 1 2 ∫ Ω ⟨Γ(U )⟩ [D gen ] {Γ(U )} dΩ -λ ∫ ∂Ω ⟨U ⟩ {F } dΩ. (5.15) Since the generalized macroscopic model has the same form as the initial microscopic model, its solution can be approximated by the same shape functions. In the microscopic model, the displacement and its gradient can be related to nodal variables via two interpolation matrices [N ] and [G]: .16) After applying this discretization principle to the generalized displacement and the generalized displacement gradient, one can obtain interpolation formulae similar to (5.16). { {u(x)} e = [N ] {v} e , {∇u(x)} e = [G] {v} e . ( 5 The generalized displacement will be expressed with respect to the generalized nodal displacement via a block diagonal matrix: {U (x)} e = [N gen ] {V gen } e , ( 5.17) where [N gen ] =    [N ] [0] [0] [0] [N ] [0] [0] [0] [N ]    . ( 5.18) The other matrix interpolates the displacement gradient and it is a little complicated due to the derivative rule (5.11). The coupling between micro and macro scales appears at this level via the wave number matrix [Q]. First, let us separate the complex variables to real and imaginary parts from (5.11) by considering the matrix [Q] is real: {θ} j = {θ R j } + i{θ I j } = {∇(u j )} + ji[Q]{u j } = {∇(u R j )} -j[Q]{u I j } + i ( {∇(u I j )} + j[Q]{u R j } ) . ( .19) The relation between the generalized displacement and the generalized displacement gradient vector {Θ} reads {Θ(x)} e =                {θ 0 } {θ R 1 } {θ I 1 } {θ R 2 } {θ I 2 }                = [G gen ] {V gen } e , ( 5.20) where [G gen ] =         [G] [0] [0] [0 [0]] [0] [G] -[Q][N ] [0] [0] [0] [Q][N ] [G] [0] [0] [0] [0] [0] [G] -2[Q][N ] [0] [0] [0] 2[Q][G] [G]         . (5.21) Consequently, the discretization of the generalized strain tensor in Eq. (5.7) can be written as {Γ} = ( [H gen ] + 1 2 [A gen (Θ)] ) [G gen ] {V gen } e . 
( 5.22) The above general macroscopic model can be directly applied to the 2D or 3D discretization of film/substrate systems. While considering the intrinsic property of the thin film that has been discussed in the previous chapters, the more efficient way is to take into account some kinematics simplifications for the film or to incorporate beam/shell/plate elements that are competitive for thin-walled structure modeling. This leads to the energy separation of the film/substrate system into two parts, i.e. film part and substrate part: Π = Π f + Π s . (5.23) Thus, in what follows, we will incorporate the general macroscopic modeling framework with nonlinear beam formulation for 2D case and nonlinear Föppl-von Kármán plate theory for 3D case, while the substrate is considered to be a linear elastic foundation. A 2D macroscopic film/substrate model The general macroscopic modeling framework established in the last section can be used directly for 2D film/substrate modeling, with the discretization of domain using 2D finite elements. However, some simplifications can be introduced as conducted in Chapter 2 to reduce the computational cost. One straight way is to develop a specific 2D Fourier-related film/substrate model based on the microscopic film/substrate model (2.8)-(2.14) established in Chapter 2, where the microscopic model has shown its validated effectiveness for post-buckling analyses. Thus, considering the identities presented in the last section, the kinematics in Eqs. (2.8)-(2.10) can be transformed to macroscopic displacement fields as follows: Film          U f j = u f j - ( z - h f 2 -h s ) ( d dx + ijq ) W f j , h s ≤ z ≤ h t W f j = w f j . (5.24) 1 st sublayer          U s1 j = 1 -η 2 (u 0 ) j + 1 + η 2 (u 1 ) j , -1 ≤ η ≤ 1, (h s -h 1 ) ≤ z ≤ h s W s1 j = 1 -η 2 (w 0 ) j + 1 + η 2 (w 1 ) j . (5.25) n th sublayer          U sn j = 1 -η 2 (u n-1 ) j + 1 + η 2 (u n ) j , -1 ≤ η ≤ 1, 0 ≤ z ≤ h n W sn j = 1 -η 2 (w n-1 ) j + 1 + η 2 (w n ) j . (5.26) In the same way, the constitutive and geometric equations in (2.11) and (2.12) can also be converted to macroscopic fields:          (σ f xx ) j = E f (ϵ f xx ) j , (σ sn xx ) j = (λ s + 2G s ) (ϵ sn xx ) j + λ s (ϵ sn zz ) j , (σ sn zz ) j = (λ s + 2G s ) (ϵ sn zz ) j + λ s (ϵ sn xx ) j , (σ sn xz ) j = G s (γ sn xz ) j , (5.27)                      (ϵ f xx ) j = ( d dx + ijq ) U f j + 1 2 ∞ ∑ j 1 =-∞ ( d dx + ij 1 q ) ( d dx + i (j -j 1 ) q ) W f j 1 W f j-j 1 , (ϵ sn xx ) j = ( d dx + ijq ) U sn j , (ϵ sn zz ) j = W sn j,z , (γ sn xz ) j = U sn j,z + ( d dx + ijq ) W sn j , ( 5.28) Consequently, the internal virtual work (2.14) can be rewritten in the macroscopic form: P int (δu) = - ∫ Ω f +∞ ∑ j=-∞ (σ f xx ) j δ(ϵ f xx ) j dΩ - ∑ sn ∫ Ω sn +∞ ∑ j=-∞ [(σ sn xx ) j δ(ϵ sn xx ) j + (σ sn zz ) j δ(ϵ sn zz ) j + (σ sn xz ) j δ(γ sn xz ) j ] dΩ. (5.29) Therefore, the microscopic model (2.8)-(2.14) has been transformed into its equivalent macroscopic form (5.24)-(5.29), with the unknowns {u 0 , w 0 , u 1 , w 1 , . . . , u n , w n } of the microscopic level being converted to the Fourier coefficients {(u 0 ) j , (w 0 ) j , (u 1 ) j , (w 1 ) j , . . . , (u n ) j , (w n ) j } of the macroscopic scale. 
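The operator $(d/dx + ijq)$ appearing throughout Eqs. (5.24)-(5.28) is the one-dimensional form of rule (5.11): the $j$-th Fourier component of a derivative combines the derivative of the $j$-th envelope with the term $ijq\,u_j$. A quick finite-difference check of this rule, with an illustrative smooth envelope (a minimal sketch, not the thesis code):

```python
import numpy as np

q, j = 1.0, 1
x = np.linspace(0.0, 20 * np.pi, 4000)
uj = 0.2 * np.exp(1j * 0.03 * x)             # a slowly varying envelope u_j(x)

full = uj * np.exp(1j * j * q * x)           # the j-th harmonic u_j e^{ijqx}
lhs = np.gradient(full, x)                   # direct derivative of the harmonic
duj = np.gradient(uj, x)
rhs = (duj + 1j * j * q * uj) * np.exp(1j * j * q * x)   # rule (5.11) in 1D

print(np.max(np.abs(lhs - rhs)))             # small, up to finite-difference error
```

The two evaluations agree to discretization accuracy, which is the identity exploited when converting the microscopic strains (2.11)-(2.12) into their macroscopic counterparts (5.27)-(5.28).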
Since wrinkles usually appear on the top surface of the film/substrate system, which means local buckling plays a major role, three envelopes U 0 (x), U -1 (x) and U 1 (x), respectively representing the mean field and amplitudes of fluctuation, are sufficient to describe such surface morphological instability (see Fig. 1.6). Hence, only three terms (j = -1, 0, 1) of Fourier coefficients will be considered in the transverse displacements. Besides, as relatively small oscillations appear in the longitudinal displacement field, only the zero order term (j = 0) is reasonable to be taken into account. Similar approximations have been conducted and validated in previous works [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF][START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF]. Internal virtual work of the substrate First, let us define the unknown variables in each sublayer ⟨q sn ⟩ = ⟨u n-1 w n-1 u n w n ⟩ , (5.30) ⟨ q sn ,x ⟩ = ⟨u n-1,x w n-1,x u n,x w n,x ⟩ . (5.31) According to the kinematics (2.10), the displacement field reads { U sn W sn } = [N z ] {q sn } , ( 5.32) where [N z ] =    1 -η 2 0 1 + η 2 0 0 1 -η 2 0 1 + η 2    . (5.33) The strain vector {ε sn } and stress vector {S sn } can be respectively expressed as {ε sn } =      ϵ sn xx ϵ sn zz γ sn xz      = [B 1 ] {q sn } + [B 2 ] { q sn ,x } , ( 5.34 ) {S sn } = [C sn ] {ε sn } , ( 5.35) in which [B 1 ] =       1 -η 2 0 1 + η 2 0 0 - 1 h n 0 1 h n - 1 h n 0 1 h n 0       , ( 5.36) [B 2 ] =     0 0 0 0 0 0 0 0 0 1 -η 2 0 1 + η 2     , (5.37) [C sn ] =    λ s + 2G s λ s 0 λ s λ s + 2G s 0 0 0 G s    . ( 5.38) The internal virtual work of the substrate can be represented as the sum of all the sublayers: P s int (δu) = - ∫ L 0 ∫ hs 0 ⟨δε s ⟩ {S s } dzdx = - ∫ L 0     ∑ sn ⟨δq sn ⟩ ∫ hn 0 T [B 1 ] {S sn } dz Φ + ∑ sn ⟨ δq sn ,x ⟩ ∫ hn 0 T [B 2 ] {S sn } dz Ψ     dx. (5.39) Through considering Eqs. (5.34) and (5.35), one can obtain .41) One can also combine the above two equations in the following form: Φ = ∫ hn 0 T [B 1 ] [C sn ] [B 1 ] dz {q sn } + ∫ hn 0 T [B 1 ] [C sn ] [B 2 ] dz { q sn ,x } , ( 5.40) Ψ = ∫ hn 0 T [B 2 ] [C sn ] [B 1 ] dz {q sn } + ∫ hn 0 T [B 2 ] [C sn ] [B 2 ] dz { q sn ,x } . ( 5 { Φ Ψ } = [C s ] { q sn q sn ,x } . (5.42) The macroscopic form of internal virtual work of the substrate involving three envelopes can be written as P s int (δu) = - ∑ sn ∫ L 0 ( ⟨ δ (q sn ) 0 , δ ( q sn ,x ) 0 ⟩ [C s ] { (q sn ) 0 ( q sn ,x ) 0 } +2 ⟨ δ (q sn ) 1 , δ ( q sn ,x ) 1 ⟩ [C s ] { (q sn ) 1 ( q sn ,x ) 1 }) dx. (5.43) Now we consider the discretization of substrate along the x direction. The unknown vectors can be given as {q sn } = [N s ] {v s } , (5.44) { q sn ,x } = [ N s ,x ] {v s } , ( 5.45) where {v s } is the elementary unknown vector of the substrate and [N s ] is the shape function. Note that the longitudinal displacement u is discretized by linear Lagrange functions, while the transverse displacement w is discretized by Hermite functions. Consequently, the internal virtual work of the substrate can be written as P s int (δu) = - ∑ e ⟨δv s ⟩ ∫ le 0 ( T [N s ]Φ + T [N s ,x ]Ψ ) dx = - ∑ e ⟨δv s ⟩ ∫ le 0 ( [ T N s , T N s ,x ] [C s ] [ N s N s ,x ]) dx{v s }, (5.46) where l e is the length of 1D element. 
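For concreteness, the through-thickness integration leading to the sublayer matrix $[C_s]$ of Eq. (5.42) can be written in a few lines. A minimal sketch is given below, assuming illustrative values of $h_n$, $\lambda_s$ and $G_s$ (the names are hypothetical, not those of the thesis code); it builds the $8 \times 8$ matrix from the blocks $\int_0^{h_n} {}^T[B_i][C_{sn}][B_j]\,dz$ of Eqs. (5.40)-(5.41), using Gauss quadrature in the coordinate $\eta$:

```python
import numpy as np

def substrate_layer_matrix(h_n, lam, G, n_gauss=4):
    """Sublayer matrix [C_s] of Eq. (5.42), integrated through the
    thickness with dz = (h_n / 2) d(eta), eta in [-1, 1]."""
    C = np.array([[lam + 2 * G, lam, 0.0],          # [C_sn] of Eq. (5.38)
                  [lam, lam + 2 * G, 0.0],
                  [0.0, 0.0, G]])
    pts, wts = np.polynomial.legendre.leggauss(n_gauss)
    Cs = np.zeros((8, 8))
    for eta, w in zip(pts, wts):
        B1 = np.array([[(1 - eta) / 2, 0.0, (1 + eta) / 2, 0.0],   # Eq. (5.36)
                       [0.0, -1.0 / h_n, 0.0, 1.0 / h_n],
                       [-1.0 / h_n, 0.0, 1.0 / h_n, 0.0]])
        B2 = np.array([[0.0, 0.0, 0.0, 0.0],                       # Eq. (5.37)
                       [0.0, 0.0, 0.0, 0.0],
                       [0.0, (1 - eta) / 2, 0.0, (1 + eta) / 2]])
        B = np.hstack([B1, B2])                     # strain = B @ {q; q_x}
        Cs += w * (h_n / 2.0) * B.T @ C @ B
    return Cs

Cs = substrate_layer_matrix(h_n=0.1, lam=1.2, G=0.8)
print(Cs.shape, np.allclose(Cs, Cs.T))              # (8, 8) True
```

The elementary integral of Eq. (5.46) then contracts this matrix with the shape-function matrices $[N_s]$ and $[N_{s,x}]$ at each Gauss point along the $x$ direction.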
Internal virtual work of the film As for the thin film, the strain energy is mainly generated by normal strain ϵ f xx , the other two terms ϵ f zz and γ f xz being neglected. By considering three envelopes, the macroscopic form of the internal virtual work for the film is expressed as P f int (δu) = - ∫ Ω f σ f xx δϵ f xx dΩ = -δ ( 1 2 ∫ Ω f E f [ (ϵ f xx ) 2 0 + 2 (ϵ f xx ) 1 2 ] dΩ ) , ( 5.47) in which ( ϵ f xx ) 0 = u f 0,x - ( z - h f 2 -h s ) w f 0,xx + 1 2 (w f 0,x ) 2 + (w f 1,x ) 2 + q 2 (w f 1 ) 2 , (5.48) ( ϵ f xx ) 1 = [ - ( z - h f 2 -h s ) ( w f 1,xx -q 2 w f 1 ) + w f 0,x w f 1,x ] + i [ -2 ( z - h f 2 -h s ) qw f 1,x + qw f 0,x w f 1 ] , (5.49) where ( ϵ f xx ) 0 and ( ϵ f xx ) 1 are respectively the zero order and the first order of the strain ϵ f xx . Consequently, Eq. (5.47) can be rewritten in the following expanded form: P f int (δu) = - ∫ L 0 [ S f a ( δu f 0,x + w f 0,x δw f 0,x + 2w f 1,x δw f 1,x + 2q 2 w f 1 δw f 1 ) + S f b δ ( w f 0,x w f 1,x ) + S f c δw f 0,xx + S f d δ ( w f 0,x w f 1 ) + S f e δw f 1,x + S f f δ ( w f 1,xx -q 2 w f 1 )] dx = - ∫ L 0 ⟨ δε f ⟩ { S f } dx, (5.50) where                              S f a = E f h f ( u f 0,x + 1 2 (w f 0,x ) 2 + (w f 1,x ) 2 + q 2 (w f 1 ) 2 ) , S f b = 2Eh f w f 0,x w f 1,x , S f c = 1 12 E f h 3 f w f 0,xx , S f d = 2E f h f q 2 w f 0,x w f 1 , S f e = 2 3 E f h 3 f q 2 w f 1,x , S f f = 1 6 E f h 3 f ( w f 1,xx -q 2 w f 1 ) . (5.51) The generalized strain vector {ε f } of the film is defined as { ε f } = ( [H] + 1 2 [A(q f )] ) { q f } , ( 5.52) in which [H] =           1 0 0 1 0 1 0 1           , (5.53) [A(q f )] =           0 w f 0,x 0 2q 2 w f 1 2w f 1,x 0 0 w f 1,x 0 0 w f 0,x 0 0 0 0 0 0 0 0 w f 1 0 w f 0,x 0 0 0 0 0 0 0 0 0 0 0 0 0 0           , ( 5.54) ⟨ q f ⟩ = ⟨ u f 0,x w f 0,x w f 0,xx w f 1 w f 1,x ( w f 1,xx -q 2 w f 1 )⟩ . ( 5 .55) Since [A(q f )] and { q f } are linear functions of u f and w f , the internal virtual work of the film (5.50) is in a quadratic form with respect to the displacement and the generalized stress: P f int (δu) = - ∫ L 0 ⟨ δq f ⟩ ( T [H] + T [A(q f )] ) { S f } dx, (5.56) where the generalized stress of the film reads { S f } = [D] ( [H] + 1 2 [A(q f )] ) { q f } , ( 5.57) in which [D] = diag ( E f h f 2E f h f 1 12 E f h 3 f 2q 2 E f h f 2 3 q 2 E f h 3 f 1 6 E f h 3 f ) . (5.58) Note that the discretization of unknown variables { q f } takes the same shape function as for the substrate, i.e. Lagrange functions for the longitudinal displacement and Hermite functions for the transverse displacement. Nonlinear macroscopic membrane-wrinkling model for the film The technique of slowly variable Fourier coefficients will be applied to deduce the macroscopic membrane-wrinkling model based on the well-known Föppl-von Kármán equations for elastic isotropic plates that is considered as the reference model in this section:            D∆ 2 w -div (N • ∇w) = 0, N = L m • γ, γ = 1 2 ( ∇u + t ∇u ) + 1 2 (∇w ⊗ ∇w) , divN = 0, (5.62) where u = (u, v) ∈ R 2 represents the in-plane displacement, while w is the deflection. The membrane stress and strain are denoted by N and γ, respectively. With the vectorial notations N = t {N X , N Y , N XY } and γ = t {γ X , γ Y , γ XY }, the membrane elastic matrix can be written as L m = Eh 1 -ν 2     1 ν 0 ν 1 0 0 0 1 -ν 2     . 
( 5.63) The potential energy of the film, Π f , can be divided into a membrane part Π mem and a bending part Π ben :              Π f (u, w) = Π mem (u, w) + Π ben (w), Π mem (u, w) = 1 2 ∫ ∫ Ω t γ • L m • γdΩ = Eh 2(1 -ν 2 ) ∫ ∫ Ω ( γ 2 X + γ 2 Y + 2(1 -ν)γ 2 XY + 2νγ X γ Y ) dΩ, Π ben (w) = D 2 ∫ ∫ Ω ( (△w) 2 -2(1 -ν) ( ∂ 2 w ∂X 2 ∂ 2 w ∂Y 2 - ( ∂ 2 w ∂X∂Y ) 2 )) dΩ. (5.64) Macroscopic modeling of membrane energy We adapt the technique of Fourier series with slowly variable coefficients in 2D framework [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. For simplicity, we suppose that the instability wave number Q is known and the considered wrinkles spread only in the O x direction. The unknown field U(X, Y ) = {u(X, Y ), w(X, Y ), N(X, Y ), γ(X, Y )}, whose components are in-plane displacement, transverse displacement, membrane stress and strain, is written in the following form: U(X, Y ) = +∞ ∑ j=-∞ U j (X, Y )e jiQX , ( 5.65) where the macroscopic unknown fields U j (X, Y ) vary slowly on the period [ X, X + 2π Q ] of oscillations. It is not necessary to choose an infinite number of Fourier coefficients, so the unknown fields U(X, Y ) are expressed in terms of two harmonics: the mean field U 0 (X, Y ) and the first order harmonics U 1 (X, Y )e iQX and U 1 (X, Y )e -iQX . The second harmonic should be taken into account to recover the results of the asymptotic Ginzburg-Landau double-scale approach [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Nevertheless, the second harmonic does not contribute to the membrane energy in the present case, since the rapid one-dimensional oscillations e iQX are not extensional so that N 2 = 0, w 2 = 0. Hence the second harmonic does not influence the simplest macroscopic models. A unique direction O x for wave propagation is chosen in the whole domain. This assumption is a bit restrictive so that the current model can only describe sinusoidal pattern, which should be improved in the future. In principle, the mean field U 0 (X, Y ) is real and the envelope U 1 (X, Y ) is complex-valued. However, spatial evolutions of patterns can be reasonably accounted for with only two real coefficients in practice, even if a complex envelope can improve the treatment of boundary conditions [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF]. After extending derivation rules in two-dimensional framework, the first Fourier coefficient of the gradient and the zero order coefficient of strain (i.e. mean value on a period) can be respectively expressed as {(∇w) 1 } =      ∂w 1 ∂X + iQw 1 ∂w 1 ∂Y      , ( 5.66 ) {γ 0 } =      γ X0 γ Y 0 2γ XY 0      = { γ F K } + {γ wr } , (5.67) in which { γ F K } =                ∂u 0 ∂X + 1 2 ( ∂w 0 ∂X ) 2 ∂v 0 ∂Y + 1 2 ( ∂w 0 ∂Y ) 2 ∂u 0 ∂Y + ∂v 0 ∂X + ∂w 0 ∂X ∂w 0 ∂Y                , ( 5.68 ) {γ wr } =                ∂w 1 ∂X + iQw 1 2 ∂w 1 ∂Y 2 ( ∂w 1 ∂X + iQw 1 ) ∂w 1 ∂Y + ( ∂w 1 ∂X -iQw 1 ) ∂w 1 ∂Y                , ( 5.69) where the strain is divided into a classical part γ F K that takes the same form as the initial Föppl-von Kármán model (5.62), and a wrinkling part γ wr that depends only on the envelope of deflection w 1 . The strain-displacement law (5.67) can be simplified. 
First, the displacement field is reduced to a membrane mean displacement and to a bending wrinkling, i.e. u 1 = 0, w 0 = 0, which only considers the influence of wrinkling on a flat membrane state. Second, the deflection envelope w 1 (X, Y ) is assumed to be real, which disregards the phase modulation of the wrinkling pattern. Therefore, the envelope of the displacement has only three components u 0 = (u 0 , v 0 ) and w 1 that will be rewritten for simplicity as (u, v, w) = (u 0 , v 0 , w 1 ). Consequently, the simplified version of the strain field becomes The simplified membrane strain (5.70) is quite similar to that of the initial Föppl-von Kármán model. It is split, first in a linear part ε(u) that is the symmetric part of the mean displacement gradient corresponding to the pure membrane linear strain, second in a nonlinear part γ wr more or less equivalent to wrinkling strain. The main difference with the initial Föppl-von Kármán strain (5.62) is the extension Q 2 w 2 in the wave direction of wrinkles. This wrinkling strain is a stretching and is always positive. In the case of a compressive membrane strain, this wrinkling term leads to a decrease of the true strain. By only considering the zero order harmonic, the reduced membrane energy becomes (5.73) Π mem (u, w) = Eh 2(1 -ν 2 ) ∫ ∫ Ω  Macroscopic modeling of bending energy Through simplifying the energy by keeping only the zero order term, it provides a formulation easier to be managed for the numerical discretization. The computation of the energy is based on the fact that only the zero order harmonic φ 0 of a function φ has a non-zero mean value: ∫ ∫ Ω φdΩ = ∫ ∫ Ω φ 0 dΩ. (5.74) This identity is applied to the bending energy in the framework u 1 = (u 1 , v 1 ) = (0, 0), w 0 = 0, w 1 ∈ R: φ = (∆w) 2 -2(1 -ν) ( ∂ 2 w ∂X 2 ∂ 2 w ∂Y 2 - ( ∂ 2 w ∂X∂Y ) 2 ) = φ A -2(1 -ν)φ B . (5.75) The first terms of bending energy reads φ A 0 = (∆w) 2 = +∞ ∑ j=-∞ (∆w) j (∆w) -j = 2 (∆w) 1 (∆w) -1 = 2 |(∆w) 1 | 2 = 2 ∆w 1 -Q 2 w 1 + 2iQ ∂w 1 ∂X 2 . (5.76) Due to the assumption of a real envelope w = w 1 , the first term of the bending energy can be expressed as φ A 0 = 2 ( ∆w -Q 2 w ) 2 + 8Q 2 ( ∂w ∂X ) 2 . (5.77) In the same way, the second term of the bending energy φ B 0 reads φ B 0 = 2 ( ∂ 2 w ∂X 2 -Q 2 w ) ∂ 2 w ∂Y 2 -2 ( ∂ 2 w ∂X∂Y ) 2 -2Q 2 ( ∂w ∂Y ) 2 . (5.78) As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], the derivatives of order three and order four in above differential equations can be neglected, since theses high order derivatives may lead to spurious oscillations and the derivatives of order two are sufficient to recover the Ginzburg-Landu asymptotic approach. Consequently, this leads to the simplified macroscopic bending energy: Π ben (w) = D 2 ∫ ∫ Ω { Q 4 w 2 -2Q 2 w∆w + 4Q 2 ( ∂w ∂X ) 2 + 2(1 -ν 2 )Q 2 [ w ∂ 2 w ∂Y 2 + ( ∂w ∂Y ) 2 ]} dΩ. (5.79) Full membrane-wrinkling model The macroscopic model is deduced from the minimum of total energy that is the sum of membrane energy (5.73) and bending energy (5.79), which associates zero order harmonic for membrane quantities and real-valued first order harmonics for the deflection. In this way, the total energy is stationary at equilibrium: δΠ mem + δΠ ben = 0. 
(5.80) The condition that any virtual displacement is zero at the boundary gives ∫ ∫ After straightforward calculations, one obtains the partial differential equations of the macroscopic problems:        -6DQ 2 ∂ 2 w ∂X 2 -2DQ 2 ∂ 2 w ∂Y 2 + ( DQ 4 + N X Q 2 ) w -div (N • ∇w) = 0, N = L m : [ε(u) + γ wr (w)] , divN = 0. (5.83) The above model (5.83) couples nonlinear membrane equations with a bifurcation equation satisfied by the envelope of wrinkling pattern. More discussion on the model can be found in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. Since this model is a second order partial differential system, any classical C 0 finite element is acceptable for its discretization. In the application, 8-node quadratic quadrilateral elements (2D-Q8) with three degrees of freedom (u, v, w) per node will be used. Details on the discretization will be omitted here, since it is quite straightforward. all the equations being coupled: { {ε 0 } = [L s ] -1 {σ 0 } = [H]{θ 0 }, {ε 1 } = [L s ] -1 {σ 1 } = [H]{θ 1 }, ( 5.87) where the matrix [H] is detailed in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. Finally, the macroscopic behavior can be formulated in a general form: (5.90) Discretization of the 3D macroscopic model (5.85) takes the same shape functions as the 3D classical model (3.9), since they are quite similar, where the details are omitted. Thus, 8-node solid elements with reduced integration can be applied as well, but with totally 72 (3 × 3 × 8) degrees of freedom on each element. Connection between the film and the substrate As in Section 3.2.3, displacement continuity is satisfied at the interface. However, the macroscopic membrane-wrinkling elements for the film and 3D macroscopic block elements for the substrate cannot be simply joined directly since they belong to dissimilar elements. Therefore, additional incorporating constraint equations have to be employed. Hereby, Lagrange multipliers are applied to couple the corresponding node displacements of the same order harmonics in compatible meshes between the film and the substrate (see Fig. 3.4). Note that using 8-node linear block element here is only for coupling convenience, 20-node quadratic block element would be another good candidate, while both of them follow the same coupling strategy. Consequently, the stationary function of film/substrate system is given in a Lagrangian form: L (u f , u s , ℓ) = Π f + Π s + ∑ node i ℓ i [u f (i) -u s (i)] , (5.91) where the displacements of the film and the substrate are respectively denoted as u f and u s , while the Lagrange multipliers are represented by ℓ. At the interface, the displacement continuity is satisfied at the same nodes and connects the middle surface of the film and the top surface of the substrate. From Eq. (5.91), three equations are obtained according to δu f , δu s and δℓ:              δΠ f + ∑ node i ℓ i δu f (i) = 0, δΠ s - ∑ node i ℓ i δu s (i) = 0, ∑ node i δℓ i u f (i) - ∑ node i δℓ i u s (i) = 0. 
(5.92) Resolution technique and bifurcation analysis The same generic resolution techniques and bifurcation schemes established in Section 2.4 and 3.3 will be adapted to both 2D and 3D macroscopic film/substrate model, which includes using the ANM as a continuation technique to solve nonlinear differential equations and calculating bifurcation indicators to predict secondary bifurcations as well as the associated wrinkling modes on the post-buckling path. Since this general framework is also suitable for the current multi-scale problem and the procedure remains unchanged, the details will be omitted here. Chapter conclusion In this chapter, we revisit the film/substrate system from a multi-scale standpoint with respect to Fourier series. Firstly, a generic macroscopic modeling scheme is established, which is suitable for both 2D and 3D problems but not so efficient for very thin film in terms of computational cost. Thus, secondly, a simplified 2D macroscopic film/substrate model with three envelopes is derived based on the established microscopic model presented in Chapter 2. Development of the computer codes have been completed. It is expected to predict the primary sinusoidal wrinkles and entailing aperiodic modes observed in Chapter 2 with much fewer elements, which can save an enormous amount of computation time. Validation of the macroscopic model by comparison with 2D classical film/substrate model established in Chapter 2, should be included as part of short-term perspective. Lastly, following the same modeling scheme proposed in Chapter 3, a 3D nonlinear macroscopic film/substrate model has been developed. Precisely, this 3D model couples a macroscopic membrane-wrinkling model based on the well-known Föppl-von Kármán nonlinear plate theory and a linear macroscopic elasticity. Through introducing Lagrange multipliers at the interface, displacement continuity is satisfied in a weak form. Self-developed computer codes for both macroscopic models and coupling procedures have been accomplished. It is expected to predict at least sinusoidal patterns observed in Chapter 3 with much fewer elements, which is competitive from computational standpoint. In the following, validation of the macroscopic model by comparison with 3D classical film/substrate model established in Chapter 3, will be included as part of short-term perspective. Conclusion and perspectives In this thesis, we proposed a whole framework to study surface wrinkling of thin films bound to soft substrates in a numerical way: from 2D to 3D modeling, from classical to multi-scale perspective. Both 2D and 3D models incorporated Asymptotic Numerical Method (ANM) as a robust path-following technique and bifurcation indicators well adapted to the ANM, so as to predict a sequence of multiple bifurcations and the associated instability modes on their post-buckling evolution path as the load is increased. The tracing of post-bifurcation evolution is an important numerical problem and it is definitely non-trivial. The ANM gives interactive access to semi-analytical equilibrium branches, which offers considerable advantage of reliability compared with classical iterative algorithms. Probably, it would be rather difficult to detect all the bifurcations found in this thesis by using conventional numerical methods. 
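Although the full ANM implementation operates on the discretized film/substrate equations, its series-expansion logic can be sketched on a scalar toy residual $r(u, \lambda) = u + u^2 - \lambda$ (purely illustrative, not the thesis model). Each step expands $(u, \lambda)$ in powers of a path parameter $a$ with the standard arc-length normalization and solves one problem per order, all with the same tangent operator; a minimal sketch, assuming we are away from limit points where $\lambda_1$ would vanish:

```python
import numpy as np

def anm_step(u0, lam0, order=15):
    """Series coefficients of one ANM step for the toy residual
    r(u, lam) = u + u**2 - lam (scalar sketch, illustrative only)."""
    Kt = 1.0 + 2.0 * u0                          # tangent stiffness at (u0, lam0)
    u1 = 1.0 / np.sqrt(1.0 + Kt**2)              # order 1 with normalization
    lam1 = Kt * u1                               # u1**2 + lam1**2 = 1
    U, L = [u0, u1], [lam0, lam1]
    for k in range(2, order + 1):
        Sk = sum(U[i] * U[k - i] for i in range(1, k))   # known quadratic terms
        uk = -Sk / (Kt + u1 / lam1)              # same tangent operator each order
        U.append(uk)
        L.append(-uk * u1 / lam1)                # orthogonality u_k u1 + lam_k lam1 = 0
    return np.array(U), np.array(L)

U, L = anm_step(0.0, 0.0)
a = np.linspace(0.0, 0.5, 6)[:, None]
u = (U * a ** np.arange(len(U))).sum(axis=1)     # semi-analytical branch u(a)
lam = (L * a ** np.arange(len(L))).sum(axis=1)
print(np.max(np.abs(u + u**2 - lam)))            # small within the step's radius
```

The semi-analytical branch is valid up to the radius of convergence of the series, after which a new step is started from the end point; this is the continuation principle referred to throughout the thesis.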
To the best of our knowledge, this appears to be the first work addressing the post-bifurcation instability problems of film/substrate systems from a quantitative standpoint by applying these advanced numerical approaches (path-following techniques, bifurcation indicators, bridging techniques, multi-scale approaches, Lagrange multipliers, etc.), which we regard as a valuable contribution to the field of film/substrate post-buckling analyses.

In 2D cases of the film/substrate system, a classical finite element model was used, associating a geometrically nonlinear beam for the film with linear elasticity for the substrate. The effect of boundary conditions and of the material properties of the substrate (homogeneous, graded and orthotropic) on the bifurcation portrait is carefully studied, a question rarely explored in the literature. The evolution of wrinkling patterns and post-bifurcation modes, including period-doubling, has been detected beyond the onset of the primary sinusoidal wrinkling mode on the post-buckling evolution path (see Fig. 1).

In 3D cases, spatial pattern formation of film/substrate systems was investigated with a nonlinear 3D finite element model associating a geometrically nonlinear shell formulation for the film with linear elasticity for the substrate. Typical post-bifurcation patterns include sinusoidal, checkerboard and herringbone shapes, with possible spatial modulations, boundary effects and localizations. The post-buckling behavior often leads to intricate response curves with several secondary bifurcations, which were rarely studied and only in the case of periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. In conventional finite element analysis, the post-buckling simulation may suffer from convergence issues when the film is much stiffer than the substrate. The finite element procedure proposed in this thesis, by contrast, accurately describes these bifurcation portraits, taking the effect of boundary conditions into account, without any convergence problem. The occurrence and evolution of sinusoidal, checkerboard and herringbone patterns were highlighted in Chapter 3. This work contributes to film/substrate buckling problems by incorporating the ANM as a path-following technique to detect multiple bifurcations in the post-buckling analysis.

A very original nonlocal coupling strategy is developed in this thesis, able to bridge classical and multi-scale models concurrently, so that the strengths of each model are fully exploited while their shortcomings are overcome. More precisely, we considered a partitioned-domain multi-scale model for a 1D nonlinear elastic wrinkling problem. The coarse model, obtained from a suitable Fourier-reduction technique [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], is inaccurate near the boundary of the domain. Therefore, near the boundary, the full model is employed; it is then coupled to the coarse model in the remainder of the domain using the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] as a bridging-domain technique. Numerical results explicitly illustrate the effectiveness of this methodology.
Besides, discussion on the transition between a fine and a coarse model was provided in a general way. The present method can also be seen as a guide for coupling techniques involving other reduced-order models. This work has been considered as an original, logical and relevant application of multi-scale modeling with good motivations, explanations and interesting numerical results. The proposed bridging techniques are not merely limited to 1D case, but can also be flexibly extended to 2D or 3D cases. Thus, one direct potential perspective of application is to use the proposed bridging techniques to study 2D or 3D film/substrate buckling problems, which can be viewed as one of our future works. A macroscopic modeling framework was provided based on the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], which is in 3D situation and can be extended to any film/substrate modeling application. In particular, a 2D and a 3D macroscopic film/substrate model were derived from the classical models established in Chapter 2 and Chapter 3, respectively. Development of the computer codes for both 2D and 3D envelope models has been completed, while validation of the macroscopic models by comparison with classical film/substrate models should be included in the short-term perspective. Some parts of the works presented in this thesis have been published in International Journal of Solids and Structures [START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF][START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF] or in International Journal of Non-Linear Mechanics [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF], while the ideas and schemes generated from the present thesis are expected to lead to more works in the future. As for the short term perspective, validation of both 2D and 3D multi-scale models should be done by comparing with the results obtained in the established classical models. It can predict the patterns found in classical models with much fewer elements so as to significantly reduce the computational cost. As shown in Fig. 2, preliminary results demonstrate the evolution of sinusoidal wrinkles by using only 2 elements along the wave spreading direction, while 50 elements are required in this direction for the 3D classical model established in Chapter 3. However, due to the intrinsic limitation of any reduced-order model that clear and real boundary conditions are difficult to be taken into account, bridging techniques are really needed, which provides a flexible and efficient way to overcome this drawback. The considerable advantage of the proposed nonlocal reduction-based coupling strategy has been explicitly demonstrated in Chapter 4. Application range of the above mentioned models is constrained by a large stiffness ratio E f /E s in the range O(10 4 ), which means the substrate is much softer than the film. 
The range of application of the above-mentioned models is restricted to large stiffness ratios E_f/E_s of order O(10^4), which means that the substrate is much softer than the film. In this range, the critical strains are relatively small and a linear elastic framework is sufficient [Chen and Hutchinson, "Herringbone buckling patterns of compressed thin films on compliant substrates"]. Some recent studies consider much softer films made of polymeric materials [Brau et al.; Cao and Hutchinson, "Wrinkling phenomena in neo-Hookean film/substrate bilayers"; Yin et al.], typically with a stiffness ratio E_f/E_s of order O(10); there the critical strain is relatively large and the small-strain framework is no longer appropriate. Finite strain models, such as neo-Hookean hyperelasticity, should then be considered at least for the substrate [Hutchinson, "The role of nonlinear substrate elasticity in the wrinkling of thin films"]. By accounting for large deformations of the substrate, extremely localized instability modes such as folding [Sun et al.], creasing [Cao and Hutchinson, "From wrinkles to creases in elastomers"] or ridging [Zang et al.], which require the nonlinearity of the substrate, can be explored. Introducing nonlinear material laws in our framework is not complicated (see Appendices E and F), and the corresponding computer codes have been completed. Preliminary results show a localized surface valley in the center of the film (see Fig. 3), namely folding, during the post-buckling evolution of a film/substrate system with a stiffness ratio E_f/E_s of order O(10), where a neo-Hookean law is employed for the substrate within the ANM framework. Further investigations and discussions in this direction are part of the perspectives.

[Figure 3: Thin films on hyperelastic substrates under equi-biaxial compression. The left column shows a sequence of wrinkling patterns, while the right column presents the associated instability shapes at the line X = 0.5L_x. A localized folding mode and a checkerboard mode appear in the bulk.]

Appendix D  The residuals of the microscopic and macroscopic model

Through the variation of the potential energy in Eq. (4.17), one obtains

$$P_f(\delta u_f) = \int_{\Omega_f} \langle \delta u \;\; \delta u' \;\; \delta v \;\; \delta v' \;\; \delta v'' \rangle \left( \alpha_f \begin{Bmatrix} 0 \\ ES\,(u' + v'^2/2) \\ c\,v + c_3 v^3 \\ ES\,v'\,(u' + v'^2/2) \\ EI\,v'' \end{Bmatrix} - \beta_f \begin{Bmatrix} f \\ 0 \\ 0 \\ 0 \\ 0 \end{Bmatrix} \right) \mathrm{d}\Omega. \qquad (D.1)$$

With Eq. (4.31), we define

$$\begin{Bmatrix} \delta u \\ \delta u' \\ \delta v \\ \delta v' \\ \delta v'' \end{Bmatrix}^{e} = [G_f]\,\{\delta Q_f\}, \qquad [G_f] = \begin{bmatrix} N^f_u \\ N^f_{u'} \\ N^f_v \\ N^f_{v'} \\ N^f_{v''} \end{bmatrix}. \qquad (D.2)$$

Therefore, the elementary residual $[R_f(Q_f)]^e$ can be written as

$$[R_f(Q_f)]^e = \int_{\Omega_e} {}^t[G_f] \left( \alpha_f \begin{Bmatrix} 0 \\ ES\,(u' + v'^2/2) \\ c\,v + c_3 v^3 \\ ES\,v'\,(u' + v'^2/2) \\ EI\,v'' \end{Bmatrix} - \beta_f \begin{Bmatrix} f \\ 0 \\ 0 \\ 0 \\ 0 \end{Bmatrix} \right) \mathrm{d}\Omega. \qquad (D.3)$$
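To illustrate how (D.3) is evaluated in practice, the sketch below computes the elementary residual of one beam-on-foundation element by Gauss quadrature. It assumes linear shape functions for u and cubic Hermite functions for v, a standard choice for this kind of element (the actual shape functions of the thesis code may differ), with element unknowns Q_e = (u_1, u_2, v_1, θ_1, v_2, θ_2):

```python
import numpy as np

def element_residual(Qe, h, ES=1.0, EI=1.0, c=1.0, c3=1.0,
                     f=0.0, alpha=1.0, beta=1.0):
    """Elementary residual (D.3) of a beam on a nonlinear elastic foundation.
    Qe = (u1, u2, v1, th1, v2, th2); h is the element length."""
    # 3-point Gauss rule mapped to the reference interval [0, 1]
    g = np.array([0.5 - np.sqrt(15.0) / 10.0, 0.5, 0.5 + np.sqrt(15.0) / 10.0])
    w = np.array([5.0, 8.0, 5.0]) / 18.0
    Re = np.zeros(6)
    for xi, wi in zip(g, w):
        # linear shape functions for u and their x-derivatives
        Lu  = np.array([1 - xi, xi, 0, 0, 0, 0])
        Lup = np.array([-1 / h, 1 / h, 0, 0, 0, 0])
        # cubic Hermite shape functions for v, with first and second x-derivatives
        Nv   = np.array([0, 0, 1 - 3*xi**2 + 2*xi**3, h*(xi - 2*xi**2 + xi**3),
                         3*xi**2 - 2*xi**3, h*(-xi**2 + xi**3)])
        Nvp  = np.array([0, 0, (-6*xi + 6*xi**2)/h, 1 - 4*xi + 3*xi**2,
                         (6*xi - 6*xi**2)/h, -2*xi + 3*xi**2])
        Nvpp = np.array([0, 0, (-6 + 12*xi)/h**2, (-4 + 6*xi)/h,
                         (6 - 12*xi)/h**2, (-2 + 6*xi)/h])
        G = np.vstack([Lu, Lup, Nv, Nvp, Nvpp])          # the matrix [G_f] of (D.2)
        u, up, v, vp, vpp = G @ Qe
        s = np.array([0.0,
                      ES * (up + 0.5 * vp**2),           # membrane force
                      c * v + c3 * v**3,                 # foundation reaction
                      ES * vp * (up + 0.5 * vp**2),
                      EI * vpp])                         # bending term
        b = np.array([f, 0.0, 0.0, 0.0, 0.0])
        Re += wi * h * G.T @ (alpha * s - beta * b)      # weighted sum, cf. (D.3)
    return Re

print(element_residual(np.array([0.0, 0.1, 0.0, 0.05, 0.02, 0.0]), h=0.5))
```

Assembling such elementary vectors over all elements, with the Arlequin weights α and β distributing the energy between the overlapping models, yields the global residual of the coupled problem.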
Appendix E  Finite strain hyperelasticity

Generally, the mechanical response of hyperelastic materials is described by a strain energy potential function Ψ fitted to the particular material. This function is a continuous, scalar-valued function of the deformation gradient tensor F = ∇u + I (u being the displacement field and I the second-order identity tensor), or of some strain tensor, Ψ = Ψ(F). In this thesis, we limit the strain energy function to isotropic behavior throughout the deformation history. For isotropic hyperelastic materials, Ψ can be expressed as a function of the strain invariants of the right (symmetric) Cauchy–Green tensor C = ᵗF · F, so that the strain energy potential can be formulated as

$$\Psi = \Psi(\mathbf{C}) = \Psi(I_1, I_2, I_3),$$

with the invariants

$$I_1 = \operatorname{tr}\mathbf{C}, \qquad I_2 = \tfrac{1}{2}\big[(\operatorname{tr}\mathbf{C})^2 - \operatorname{tr}(\mathbf{C}^2)\big], \qquad I_3 = \det(\mathbf{C}) = [\det(\mathbf{F})]^2 = J^2. \qquad (E.4)$$
One can obtain the derivatives of the invariants with respect to C as follows:

$$\frac{\partial I_1}{\partial \mathbf{C}} = \mathbf{I}, \qquad \frac{\partial I_2}{\partial \mathbf{C}} = I_1\,\mathbf{I} - \mathbf{C}, \qquad \frac{\partial I_3}{\partial \mathbf{C}} = I_3\,\mathbf{C}^{-1}. \qquad (E.5\text{--}E.7)$$
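As a quick numerical sanity check of (E.7) (an illustration added for this appendix, independent of the thesis codes), Jacobi's formula ∂det(C)/∂C_ij = I_3 (C^{-1})_{ji} can be verified by finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # deformation gradient
C = F.T @ F                                          # right Cauchy-Green tensor
I3 = np.linalg.det(C)
analytic = I3 * np.linalg.inv(C)                     # (E.7); symmetric since C is

h = 1e-6
fd = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        Ch = C.copy()
        Ch[i, j] += h                                # perturb one entry at a time
        fd[j, i] = (np.linalg.det(Ch) - I3) / h      # d det / dC_ij = I3 (C^-1)_ji
print(np.max(np.abs(fd - analytic)))                 # agrees to O(h)
```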
Table 2.1: Material and geometric parameters of the film/substrate system.

    E_f (MPa)    E_s (MPa)   ν_f    ν_s    L (mm)   h_t (mm)   h_f (mm)
    1.3 × 10^5   1.8         0.3    0.48   1.5      0.1        10^-3

Table 3.1: Common characteristics of the material and geometric properties.

    E_f (MPa)    E_s (MPa)   ν_f    ν_s    h_f (mm)   h_s (mm)
    1.3 × 10^5   1.8         0.3    0.48   10^-3      0.1

Table 3.2: Different parameters of the film/substrate systems.

                   L_x (mm)   L_y (mm)   Loading
    Film/Sub I     1.5        1.5        Uniaxial
    Film/Sub II    1          1          Equi-biaxial
    Film/Sub III   0.75       1.5        Biaxial step

[Figure 4.15: Buckling of a clamped beam under uniform compression: maximal deflection in [3π, 15π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together.]

[Figure 4.16: Buckling of a clamped beam under uniform compression: maximal deflection in [0, 3π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together.]

The macroscopic model couples the 1D membrane equations (4.6-a) and (4.6-b) with a second-order differential equation (4.6-c) for the envelope v_1(x), which is a sort of Ginzburg–Landau equation. In this chapter, we only consider the simplest case of a real v_1(x), which has the drawback of fixing the phase in the bulk. A complete study of the model with a complex v_1(x) can be found in [Mhada et al., "About macroscopic models of instability pattern formation"], where better accuracy in the boundary layer has been established. One may wonder why the fourth-order derivatives of v_1 have been dropped. It was theoretically and numerically established in [Damil and Potier-Ferry, "Influence of local wrinkling on membrane behaviour"], §5.1, that these high-order derivatives can lead to spurious oscillations and to a lack of convergence for fine meshes. In fact, this is consistent with the basic assumption of a slowly varying envelope, d/dx ≪ q. Conversely, one could consider removing all the derivatives in Eq. (4.6-c), but the discussion in previous papers showed that the resulting model is very poor and not consistent with the asymptotic Ginzburg–Landau approach.

Effective range of macroscopic models

The assumption of slowly varying amplitudes is the main restriction on the application of envelope models: the length scales of the oscillations and of the envelope must be clearly separated. This limitation holds for the present Fourier approach as well as for the asymptotic one [Damil and Potier-Ferry]. A known example is the localization of buckling patterns arising in long domains with a softening nonlinearity (c_3 < 0).
For instance, Fig. 8 of [Hunt et al., "Structural localization phenomena and the dynamical phase-space analogy"] demonstrates that the assumption is only valid very close to the bifurcation point; in other words, such localized behavior cannot be represented by a slowly varying amplitude. In the same way, no envelope model is able to account perfectly for boundary effects. That is the reason why we propose a double-scale analysis, in which the macroscopic model is used in the bulk while the fine model is used near the boundary [Mhada et al.].

To verify this, we have computed the solution of the nonlinear problem (4.1) by the classical Newton–Raphson method, together with the corresponding macroscopic quantities obtained from the reduction formulae (4.13) and (4.14). From Fig. 4.4, we observe that the first envelope |v_1| is larger than the other three, but the ratio is larger than in the simply supported case, being approximately 10^-1. According to Eq. (4.16), this shows that the effective wave number is not exactly the expected value q = 1. Moreover, the envelopes obtained by Eqs. (4.13) and (4.14) are not precisely slowly variable, and a rapidly varying part is observed, as indicated by the oscillating part of Eq. (4.16). The longitudinal displacements u_0, u_1^R, u_1^I, u_2^R and u_2^I are illustrated in Fig. 4.5. As expected, the zero-order term u_0 is larger than the others and is almost linear, but oscillating parts u_1^R and u_1^I are also present. Next, we have rebuilt the microscopic deflection v(x) = P(v_j) from the five envelopes (j = 0, ±1, ±2) by Eq. (4.4), and this reconstruction describes the exact solution well, as shown in Fig. 4.6. Furthermore, only the two envelopes v_1^I and v_1^R already suffice to recover the microscopic model, as shown in Fig. 4.7, which justifies the assumptions made in Section 4.2.3. The same reconstruction is demonstrated in Figs. 4.8 and 4.9 and leads to the expected, nearly linear response u(x). However, u(x) is not perfectly linear and its derivative u′(x) fluctuates around −2.22, as shown in Fig. 4.10. This can be explained through the constitutive equation (4.1-b): since there is no horizontal force (f = 0), the normal force is constant, n(x) = −µ; the transverse displacement v(x) being wavy, the deformation u′(x) is wavy as well.

Comments

By considering the exact solution of the full problem, we have established that the macroscopic quantities defined by the reduction formulae (4.13) and (4.14) are not exactly slowly variable: they also contain an oscillating part, which is small only if the effective wave number Q is very close to the a priori chosen wave number q. In the next sections, we introduce a coupling between the two scales based on the reduction operator (4.10); a bridging technique based on the formulae (4.13) and (4.14) is presented and evaluated, and an L²-type coupling operator is used to smooth these oscillations.

Numerical evaluation and assessment

Prolongation versus reduction-based coupling

Two coupling approaches are compared and evaluated. The numerical assessment is based on the results obtained for a beam resting on a nonlinear elastic foundation. The clamped beam has the same parameters as in Section 4.3.2. The models are defined as follows:

1. Microscopic model: 120 microscopic elements for the whole beam [0, 30π].
2. Macroscopic model: 15 macroscopic elements for the whole beam [0, 30π] with a wave number q = 1.

The boundary conditions for the microscopic model are those of a classical clamping. The same boundary conditions have been applied for the bridging techniques (prolongation coupling and reduction-based coupling) at x = 0. As for the macroscopic model, it is known that at a clamped end the envelope nearly satisfies v_1(0) = v_1(L) = 0. For clarity, only the left half of the beam, [0, 15π], is presented in the following figures, although the computations have been performed on the whole domain [0, 30π]. The nonlinear problems have been solved by the Newton–Raphson procedure; small step lengths have to be chosen in the region of the bifurcation, µ ≈ 2. To ensure the transition from the fundamental branch to the bifurcated one, a transverse perturbation force g_pert is applied: g_pert = 2 × 10^-3 for the microscopic model, g_pert = 3 × 10^-5 for the macroscopic model and g_pert = 2 × 10^-3 for the bridging models. The numerical response of the structure is a spatial oscillation with a modulation near the end (see Figs. 4.13 and 4.14). Close to the bifurcation point, µ = 2.01, the envelope is nearly sinusoidal, and it looks like a hyperbolic tangent away from the boundary.

Connection between the film and the substrate

As the film is bonded to the substrate, the displacement must be continuous at the interface; this continuity is expressed by the connection condition (5.59). From Eqs. (2.8)–(2.9) and (5.59), the relations (5.60) can be obtained, and consequently the corresponding macroscopic relation (5.61) follows.

A 3D macroscopic film/substrate model

The general macroscopic modeling framework established in Section 5.2 could be used directly for 3D film/substrate modeling by discretizing the domain with 3D finite elements. Nevertheless, this is cumbersome and inefficient for very thin films, since the huge thickness ratio between the film and the substrate would require extremely refined meshes to keep an acceptable element aspect ratio. More flexible modeling schemes should therefore be developed for 3D cases. One simple way is to develop a 3D macroscopic film/substrate model based on the same modeling strategy as in Chapter 3, where the 3D classical model showed its efficiency for the post-bifurcation response of morphological instabilities. Thus, we consider the same film/substrate system as in Section 3.2, with the same geometric and material properties and the same Hookean elasticity framework (see Fig. 3.2). The 3D macroscopic model, however, is totally different from the previous one: it couples a nonlinear macroscopic membrane-wrinkling model [Damil and Potier-Ferry, "New nonlinear multi-scale models for wrinkled membranes"; "Membrane wrinkling revisited from a multiscale point of view"] to represent the film with a linear macroscopic elasticity model to describe the substrate, the two models being coupled by introducing Lagrange multipliers. It can be established that the governing equations (5.7) and (5.14) derive from the stationarity of a macroscopic potential energy in which the generalized membrane strain is split as {γ} = {γ_0} = {ε(u)} + {γ_wr}, the wrinkling part satisfying ∫ N : δγ_wr dΩ + δΠ_ben = 0 (5.82). In what follows, this modeling scheme leads to the integrated macroscopic film/substrate model.
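After discretization, coupling two sub-models by Lagrange multipliers leads to a saddle-point (KKT) system. The following minimal sketch (a generic illustration with made-up meshes, loads and moduli, not the thesis's film/substrate code) glues two independently meshed 1D bars at a common interface and recovers the interface force as the multiplier:

```python
import numpy as np

def bar_stiffness(n_el, length, EA=1.0):
    """Stiffness matrix of a 1D bar discretized with linear elements."""
    k = EA * n_el / length
    K = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

# two sub-models meeting at x = 0.5, meshed independently on purpose
Ka = bar_stiffness(8, 0.5)                 # bar A on [0, 0.5], 9 nodes
Kb = bar_stiffness(4, 0.5)                 # bar B on [0.5, 1], 5 nodes
na, nb = Ka.shape[0], Kb.shape[0]

# one constraint u_A(interface) - u_B(interface) = 0, one Lagrange multiplier
C = np.zeros((1, na + nb))
C[0, na - 1], C[0, na] = 1.0, -1.0

# saddle-point (KKT) system  [K  C^T; C  0] [u; lam] = [f; 0]
K = np.block([[Ka, np.zeros((na, nb))], [np.zeros((nb, na)), Kb]])
f = np.zeros(na + nb)
f[-1] = 1.0                                # unit end load at x = 1
A = np.block([[K, C.T], [C, np.zeros((1, 1))]])
b = np.concatenate([f, [0.0]])

keep = np.arange(1, na + nb + 1)           # clamp u = 0 at x = 0 (drop dof 0)
sol = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
print("interface force carried by the multiplier:", sol[-1])   # -> -1.0
```

In the thesis models, the constraint is enforced in a weighted, possibly nonlocal sense (H¹- or L²-type mediator spaces in the Arlequin framework, or the interface continuity (5.59) above), but the algebraic structure of the coupled system is the same.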
Linear macroscopic elasticity for the substrate

As explained in Section 3.2.2, linear isotropic elasticity is sufficient to represent the substrate, but the potential energy of the substrate (3.9) has to be reformulated in the macroscopic framework. We apply the technique of slowly variable Fourier coefficients in the 3D case [Damil and Potier-Ferry, "Influence of local wrinkling on membrane behaviour"]. To be consistent with the macroscopic membrane-wrinkling model, we assume that the instability wave number vector {q} is known and that the wave spreads along the O_x direction; in other words, {q} is parallel to the unit vector {1, 0, 0}, i.e. {q} = q{1, 0, 0}. The unknown field U(X, Y, Z) = {u(X, Y, Z), σ(X, Y, Z), ε(X, Y, Z)}, whose components are the displacement, the stress and the strain, is written in the form

$$\mathbf{U}(X, Y, Z) = \sum_{j} \mathbf{U}_j(X, Y, Z)\, e^{\,ijqX},$$

where the macroscopic unknown fields U_j(X, Y, Z) vary slowly over the period of the oscillations. As discussed before, it is not necessary to keep an infinite number of Fourier coefficients, and the unknown field U is expressed in terms of two harmonics: the mean field U_0(X, Y, Z) and the first-order harmonics U_1(X, Y, Z) e^{iqX} and its conjugate Ū_1(X, Y, Z) e^{-iqX}. Hence, the macroscopic potential energy of the substrate can be expressed in the form (5.85), where L_s is the elastic matrix of the substrate; it takes the same form as the classical potential energy (3.9). Note that each unknown of the classical model is replaced by a vector of three times larger size, since it accounts for the zero-order harmonic and for the first-order harmonic, the latter represented by one complex vector or, equivalently, by two real vectors. These generalized vectors, namely the generalized displacement {u_s}, the generalized stress {σ}, the generalized strain {ε} and the generalized displacement gradient {θ}, are defined in (5.86). For linear elasticity, one then obtains a constitutive law for the zero-order harmonic (real) and two constitutive laws for the first-order harmonic (complex), with {σ} = [L_s^gen]{ε} and {ε} = [H^gen]{θ}.

Appendix A  Justification for the exponential distribution of the transverse displacement along the z direction

Let us recall the Lamé–Navier equation (without body forces),

$$(\lambda + \mu)\,\nabla(\nabla\!\cdot\mathbf{u}) + \mu\,\Delta\mathbf{u} = \mathbf{0}, \qquad (A.1)$$

where λ and µ are Lamé's first parameter and the shear modulus, respectively. In the two-dimensional case, Eq. (A.1) can be written as

$$(\lambda + \mu)\frac{\partial e}{\partial x} + \mu\,\Delta u = 0, \qquad (\lambda + \mu)\frac{\partial e}{\partial z} + \mu\,\Delta w = 0, \qquad e = \frac{\partial u}{\partial x} + \frac{\partial w}{\partial z}. \qquad (A.2)$$

As for sinusoidal wrinkles [Huang et al., "Nonlinear analyses of wrinkles in a film bonded to a compliant substrate"], the displacement field can be assumed in the separated form

$$u(x, z) = f_2(z)\sin(kx), \qquad w(x, z) = f_1(z)\cos(kx), \qquad (A.3)$$

where f_1 and f_2 are unknown functions and k is the wave number. Introducing Eq. (A.3) into Eq. (A.2) yields two coupled ordinary differential equations,

$$\mu f_2'' - (\lambda + 2\mu)k^2 f_2 - (\lambda + \mu)k f_1' = 0, \qquad (\lambda + 2\mu) f_1'' - \mu k^2 f_1 + (\lambda + \mu)k f_2' = 0. \qquad (A.4)$$

These coupled equations can be solved once boundary conditions are introduced, although the boundary conditions are not obvious at the interface and at the bottom [Audoly and Boudaoud, "Buckling of a stiff film bound to a compliant substrate, Part I"]. Nevertheless, eliminating f_2 reduces the system to (d²/dz² − k²)² f_1 = 0, so that it can be solved up to unknown constants:

$$f_1(z) = (c_1 + c_2 kz)\,e^{kz} + (c_3 + c_4 kz)\,e^{-kz}, \qquad (A.5)$$

where c_1, c_2, c_3 and c_4 are four unknown constants that depend on the boundary conditions. Clearly, f_1 is of exponential type, so the transverse displacement w follows an exponential distribution along the thickness direction, as observed in Fig. 2.6. The most remarkable feature of the solution (A.5) is the length 1/k characterizing the exponential decay, which coincides with the wrinkling wavelength; this proves the property α = λ_x/λ_z = O(1) stated in Section 2.2.
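A short symbolic check (an illustration using sympy, not part of the thesis) confirms that the form (A.5) indeed satisfies the reduced equation (d²/dz² − k²)² f₁ = 0 obtained by eliminating f₂ from (A.4):

```python
import sympy as sp

z, k = sp.symbols('z k', positive=True)
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

# general solution (A.5) of (d^2/dz^2 - k^2)^2 f1 = 0
f1 = (c1 + c2 * k * z) * sp.exp(k * z) + (c3 + c4 * k * z) * sp.exp(-k * z)
op = lambda g: sp.diff(g, z, 2) - k**2 * g
print(sp.simplify(op(op(f1))))   # -> 0
```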
Appendix B  General methodology to obtain the macroscopic models

Within the Fourier approach, the differential equations satisfied by the amplitudes U_j = (u_j, v_j, n_j) are deduced from the microscopic model simply by identifying the Fourier coefficients in each equation. The amplitudes U_j(x) are assumed to be constant over a period (see [Damil and Potier-Ferry]). The derivative operators, however, are calculated exactly, according to the rule

$$\left(\frac{da}{dx}\right)_{\!j} = \left(\frac{d}{dx} + i\,jq\right) a_j, \qquad (B.1)$$

where a(x) is a Fourier series with slowly varying coefficients, as in Eq. (4.4). The other rules follow from the Parseval identity and from the assumption of slowly varying coefficients; in particular, for a product,

$$(ab)_j = \sum_{j_1 + j_2 = j} a_{j_1} b_{j_2},$$

where b(x) is also a Fourier series with slowly varying coefficients. The membrane constitutive law (4.1-b) then leads to a macroscopic constitutive law (B.5) for each harmonic n_j of the membrane stress. Hence, in the case of two real envelopes, U_0 = (u_0, v_0, n_0) ∈ R and U_1 = (u_1, v_1, n_1) ∈ R, the macroscopic constitutive law for the membrane stress n_0 becomes

$$n_0 = ES\left[\frac{du_0}{dx} + \frac{1}{2}\left(\frac{dv_0}{dx}\right)^{\!2} + \left(\frac{dv_1}{dx}\right)^{\!2} + q^2 v_1^2\right]. \qquad (B.6)$$

Remark that the two last terms are always positive and correspond to an increase of the tensile strain, or to a decrease of the compressive strain. This macroscopic law is therefore able to account for the membrane stress decrease due to local wrinkling, particularly via the last term of Eq. (B.6). The procedure to deduce a finite number of envelope equations is straightforward in the case of a simple nonlinear system such as Eq. (4.1) (see [Damil and Potier-Ferry]). Theoretically, the number of terms in the Fourier series can be very large (see Eq. (B.5)), but in practice it is convenient to limit the number of harmonics. For instance, in [Damil and Potier-Ferry, "Influence of local wrinkling on membrane behaviour"], a macroscopic model involving five harmonics has been presented, corresponding to the wave numbers 0, ±q, ±2q (B.7); the corresponding potential energy takes the form (B.8). The model (B.7) includes five envelopes, but it may be unnecessarily intricate, and further simplifications can be introduced in the potential energy (B.7).

Appendix C  A macroscopic model with one real and one complex envelope

Two additional simplifications are introduced in the potential energy (B.7). First, we consider only three harmonics 0, ±q, with one real envelope U_0(x) = (u_0, v_0, n_0) and one complex envelope U_1(x) = (u_1, v_1, n_1), or, equivalently, three real envelopes U_0(x), U_1^R(x) and U_1^I(x). The second restriction concerns the axial body force, which is assumed not to fluctuate on the basic cell, f(x) = f_0(x); this implies that the normal force does not fluctuate either (n_1 = 0), which makes it possible to drop the unknown u_1(x).
In this case, the potential energy depends on the mean field (u_0(x), v_0(x)) and on the envelope of the deflection v_1(x); its explicit form is the functional (C.1). Since v_1 is a complex-valued function, the model (C.1) makes it possible to predict slow phase modulations, and it is more accurate than the macroscopic model used in this thesis: in [Mhada et al.], it was proven that accounting for a complex v_1 improves the solution near the boundary. In the same way as in Appendix D, one can also obtain the elementary residual [R_r(Q_r)]^e of the reduced model.

Thus, the second Piola–Kirchhoff stress tensor can be expressed as

$$\mathbf{S} = 2\,\frac{\partial \Psi}{\partial \mathbf{C}} = 2\left[\left(\frac{\partial \Psi}{\partial I_1} + I_1\frac{\partial \Psi}{\partial I_2}\right)\mathbf{I} - \frac{\partial \Psi}{\partial I_2}\,\mathbf{C} + I_3\,\frac{\partial \Psi}{\partial I_3}\,\mathbf{C}^{-1}\right]. \qquad (E.8)$$

Several forms of the strain energy function have been proposed in the literature to represent isotropic hyperelastic material behavior; the most popular one for rubber-like materials is the Ogden model. The compressible Ogden constitutive law [127] is written in terms of the eigenvalues λ_i (i = 1, 2, 3) of the right Cauchy–Green tensor C as

$$\Psi = \sum_{i=1}^{N} \frac{\mu_i}{\alpha_i}\left(\lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3\right) + \sum_{i=1}^{N} \frac{\mu_i}{\alpha_i \beta_i}\left(J^{-\alpha_i \beta_i} - 1\right), \qquad (E.9)$$

where N is a material constant and µ_i, α_i and β_i are temperature-dependent material parameters. The Mooney–Rivlin model and the neo-Hookean form can be obtained as particular cases of the Ogden model (see [Holzapfel, "Nonlinear solid mechanics: A continuum approach for engineering"]). Setting N = 2 and α_1 = -α_2 = 2, the strain energy potential of Mooney–Rivlin for incompressible materials reads

$$\Psi = c_1(I_1 - 3) + c_2(I_2 - 3), \qquad (E.10)$$

where the material constants are c_1 = µ_1/2 and c_2 = -µ_2/2. The compressible form of the Mooney–Rivlin model reads

$$\Psi = c_1(I_1 - 3) + c_2(I_2 - 3) + c\,(J - 1)^2 - d\,\ln J, \qquad (E.11)$$

where c and d are temperature-dependent material constants. By prescribing that the reference configuration is stress-free, one obtains d = 2(c_1 + 2c_2). The second Piola–Kirchhoff stress tensor can then be deduced from Eqs. (E.8) and (E.11):

$$\mathbf{S} = 2(c_1 + c_2 I_1)\,\mathbf{I} - 2c_2\,\mathbf{C} + \left[2cJ(J - 1) - d\right]\mathbf{C}^{-1}. \qquad (E.12)$$

The neo-Hookean model can be deduced from the Mooney–Rivlin strain energy (E.11) by taking c_2 = 0. Another version of the neo-Hookean model, extended to the compressible range, derives from the strain energy

$$\Psi = \frac{\lambda_0}{2}(\ln J)^2 - \mu_0\,\ln J + \frac{\mu_0}{2}(I_1 - 3), \qquad (E.13)$$

where λ_0 and µ_0 are Lamé's material constants. The second Piola–Kirchhoff stress tensor then reads

$$\mathbf{S} = (\lambda_0\,\ln J - \mu_0)\,\mathbf{C}^{-1} + \mu_0\,\mathbf{I}. \qquad (E.14)$$

Appendix F  Automatic differentiation with the ANM

The implementation of recurrence formulae such as (2.47) is relatively simple for the Föppl–von Kármán nonlinear plate or the Navier–Stokes equations [Xu et al.], but it can be tedious in a more general constitutive framework. For example, several forms of strain energy potentials describe the hyperelastic behavior of rubber-like materials (Ogden, Mooney–Rivlin, neo-Hookean, Gent, Arruda–Boyce, etc.), and each constitutive law has to be recast in a form adapted to the ANM. This requires specific adaptations, or even regularization techniques, which can be cumbersome and is sometimes not straightforward. That is why tools based on Automatic Differentiation (AD) techniques [Griewank, "Evaluating derivatives: Principles and techniques of algorithmic differentiation"] have been proposed to compute the high-order derivatives of constitutive laws [Koutsawa et al.; Charpentier and Potier-Ferry; 111; Lejeune et al.].
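The essence of such Diamant-style tools is arithmetic on truncated Taylor series: once addition, multiplication and the elementary functions propagate Taylor coefficients order by order, any compound energy expression can be differentiated to high order automatically. The fragment below is an illustrative Python re-implementation of this idea (the thesis itself relies on a Matlab AD toolbox; the constants and the input series are made up); it propagates the coefficients of ln J through the logarithmic terms of the neo-Hookean energy (E.13) for a stand-in series J(a) = 1 + a:

```python
import numpy as np

N = 8  # truncation order of the Taylor series

class Taylor:
    """Truncated Taylor series u(a) = sum_p u_p a^p (the data structure
    behind Diamant-style higher-order automatic differentiation)."""
    def __init__(self, coeffs):
        self.c = np.zeros(N + 1)
        self.c[:len(coeffs)] = coeffs
    def __add__(self, other):
        return Taylor(self.c + other.c)
    def __sub__(self, other):
        return Taylor(self.c - other.c)
    def __mul__(self, other):   # Cauchy product, truncated at order N
        r = np.zeros(N + 1)
        for p in range(N + 1):
            r[p] = sum(self.c[i] * other.c[p - i] for i in range(p + 1))
        return Taylor(r)
    def scale(self, s):
        return Taylor(s * self.c)

def log(u):
    """y = ln(u), order by order: p*u0*y_p = p*u_p - sum_k k*y_k*u_{p-k}."""
    y = np.zeros(N + 1)
    y[0] = np.log(u.c[0])
    for p in range(1, N + 1):
        s = sum(k * y[k] * u.c[p - k] for k in range(1, p))
        y[p] = (p * u.c[p] - s) / (p * u.c[0])
    return Taylor(y)

lam0, mu0 = 1.0, 0.5                     # made-up Lame constants
J = Taylor([1.0, 1.0])                   # stand-in unknown series J(a) = 1 + a
lnJ = log(J)
psi = (lnJ * lnJ).scale(lam0 / 2.0) - lnJ.scale(mu0)   # (lam0/2)(lnJ)^2 - mu0 lnJ
print(psi.c)                                            # high-order coefficients
print(np.allclose(lnJ.c[1:4], [1.0, -0.5, 1.0 / 3.0])) # matches ln(1+a) series
```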
This work applies the Matlab AD toolbox presented in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF] to compute the Taylor series involved in the ANM algorithm; see [START_REF] Cochelin | Méthode asymptotique numérique[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF][START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF] and references cited therein. AD was first introduced in the ANM algorithm by Charpentier and Potier-Ferry [START_REF] Charpentier | Automatic differentiation of the asymptotic numerical method: the Diamant approach[END_REF] to make the algorithm more generic and easier to implement. The theoretical developments, based on the generic Faà di Bruno formula for the higher-order differentiation of compound functions, are presented in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF]. The main result reported in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF] is that, from a proper initialization of the p-th Taylor coefficients of the independent unknown variables (U_p, λ_p), the Taylor coefficients of all the intermediate variables follow automatically from the overloaded evaluation of the residual. The series computation can be outlined as follows:

- construction of L0t(U0) using AD on the residual R;
- decomposition of L0t;
- order 1: initialization of the intermediate variables;
- orders p ≥ 2: recursive evaluation of the series terms;
- end of the series computation.

Abstract

Surface wrinkles of stiff thin layers attached to soft materials have been widely observed in nature, and these phenomena have raised considerable interest over the last decade. The post-buckling evolution of surface morphological instability often involves strong effects of geometrical nonlinearity, large rotation, large displacement, large deformation, loading-path dependence and multiple symmetry-breakings. Because of this notorious difficulty, most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact analytical solutions can be obtained. This thesis proposes a complete framework to study the film/substrate buckling problem numerically: from 2D to 3D modeling, and from a classical to a multi-scale perspective. The main aim is to apply advanced numerical methods for multiple-bifurcation analyses to various film/substrate models, focusing in particular on post-buckling evolution and surface mode transition. The models incorporate the Asymptotic Numerical Method (ANM) as a robust path-following technique, together with bifurcation indicators well adapted to the ANM, to detect a sequence of multiple bifurcations and the associated instability modes along the post-buckling evolution path. The ANM gives interactive access to semi-analytical equilibrium branches, which offers a considerable reliability advantage over classical iterative algorithms.
In addition, an original nonlocal coupling strategy is developed to bridge classical models and multi-scale models concurrently, so that the strengths of each model are fully exploited while their shortcomings are overcome. The transition between different scales is discussed in a general way, which can also serve as a guide for coupling techniques involving other reduced-order models. Lastly, a general macroscopic modeling framework is developed and two specific Fourier-related models are derived from well-established classical models; these can predict pattern formation with far fewer elements, significantly reducing the computational cost. Keywords: Wrinkling; Post-buckling; Bifurcation; Thin film; Multi-scale; Path-following technique; Bridging technique; Arlequin method; Finite element method.
Reducing fuel consumption and CO2 emissions is one of the key concerns of carmakers worldwide today. The use of high-strength, high-formability steels is one of the potential solutions for lightening a car. The so-called "Medium Mn" steels, containing from 4 to 12 wt.% Mn, are good candidates for such applications. They exhibit an ultra-fine microstructure with a significant amount of retained austenite. This retained austenite transforms during mechanical loading (TRIP effect), which provides a very attractive combination of strength and ductility. Such an ultra-fine microstructure can be obtained during the intercritical annealing of a fully martensitic Medium Mn steel. In that case, the formation of austenite happens through the so-called "Austenite Reverted Transformation" (ART) mechanism. Consequently, the understanding and modeling of the phenomena taking place during ART annealing is of prime interest. In this PhD work, the evolution of both the microstructure and the tensile properties was studied as a function of holding time in the intercritical domain. First, an "alloy design" procedure, based on both computational and experimental approaches, was performed to select the chemical composition of the steel and the thermal treatment adapted to this grade. Then, the elaborated cold-rolled 0.098C-4.7Mn (wt.%) steel was subjected to ART annealing at 650°C with various holding times (from 3 min to 30 h). Two types of characterization were applied to the treated samples: analysis of the microstructure evolution and evaluation of the mechanical behavior. The microstructure evolution was studied using a combined experimental and modeling approach. The final microstructure contains phases of different natures (ferrite (annealed martensite), retained austenite and fresh martensite) and of different morphologies (lath-like and polygonal). Particular attention was paid to the kinetics of austenite formation, in connection with cementite dissolution, and to the morphology of the phases. A mechanism was proposed to describe the formation of such a microstructure. Furthermore, the importance of taking the size distribution into account in the overall transformation kinetics was evidenced through the comparison between experimental and simulated austenite growth. The critical factors controlling the thermal stability of austenite, including both chemical and size effects, were determined and discussed on the basis of the analysis of the time evolution of the retained austenite. Finally, an adapted formulation of the Ms temperature law, applicable to medium Mn steels with an ultra-fine microstructure, was proposed. The tensile properties of the steel were measured as a function of holding time, and the relation between microstructure and mechanical behavior was analyzed. An advanced analysis of the individual behavior of the three major constituents (ferrite (annealed martensite), retained austenite and fresh martensite) was performed. An important influence of the Mn content on the strength and strain hardening of fresh martensite was revealed; a specific model was therefore proposed to describe the true stress versus true strain curves of fresh martensite. The mechanical behaviors of retained austenite and ferrite, on the other hand, were described with adapted approaches already existing in the literature. Finally, a complete model for predicting the true stress versus true strain curves of medium Mn steels was proposed, based on the ISO-W mixture model.

Secondly, I'm very grateful to my family.
Of course, to my parents, especially my mother, who put all their energy, time and love into raising me, giving me a good education and teaching me to think in all circumstances. I wish to express my love to my own family, my wife and daughter, and to thank them for their patience, their understanding and their love. I would like to express my gratitude to my supervisors: Alain HAZOTTE, Mohamed GOUNÉ and Olivier BOUAZIZ. They helped me and gave me much advice throughout this work. Our many discussions contributed to the development of this work and allowed a good understanding of the phenomena at play. In particular, I would like to thank them for their useful comments and corrections during the writing of the manuscript. I am extremely thankful to Patrick BARGES for his help with the TEM characterizations, to Gerard PETITGAND for the EPMA analyses and to Frederic KEGEL for his support with various experiments. I also gratefully acknowledge the NanoSIMS characterization performed by Nathalie Valle and her detailed explanations. I would like to thank Didier HUIN, Jean-Philippe MASSE, David BARBIER, Jean-Christophe HELL and Sebastien ALLAIN for fruitful discussions on many topics, and Sabine Fogel from the Documentation department for her valuable assistance in the literature search. I am grateful to all the staff of the Automotive Products centre of the ArcelorMittal Maizières Research and Development campus and, in particular, to the engineers and technicians of the Metallurgy Prospects and Manufacturing (MPM) team. Several technicians helped me greatly with various experimental techniques, and my engineering colleagues carried an additional workload because of my part-time presence. I wish to thank my managers, Thierry IUNG and Michel BABBIT, for accepting this PhD work and for their support during these years, and to acknowledge Thierry's help in the final review of the manuscript. Of course, many thanks to the ArcelorMittal company for the financial support of this work. I am very thankful to all the members of the jury — Alexis DESCHAMPS, Philippe MAUGIS, Pascal JACQUES, Astrid PERLADE and Gregory LUDKOVSKY — for the time they spent evaluating this work, and for their interesting questions and pertinent remarks. Finally, this work is dedicated to my little daughter Anna. She helped me a lot with the typing of the manuscript.

The world around us is so multifaceted and complex that when perceiving it, we realize more and more that discovering and learning processes are as infinite as the world itself.

Sergey OLADYSHKIN

Résumé

The prospect of a lasting rise in the price of fossil energy and increasingly strict regulatory requirements on CO2 emissions make it necessary to develop lighter vehicles. The use of Very High Strength ("Très Haute Résistance", THR) steels is one possible route to meet these requirements, since they allow a significant thickness reduction without affecting the stiffness of the parts.
At present, developing a steel that combines very high strength with good formability is a central theme for steelmakers. One of the solutions envisaged is to develop a new family of THR grades, the so-called "Medium Manganese" steels, whose Mn content lies between 4 and 12%. The first published results show a clear interest in developing such grades. Indeed, for relatively low carbon contents, it is possible to stabilize a high fraction of retained austenite at room temperature thanks to the ultra-fine size of the microstructure and to the Mn enrichment. This retained austenite transforms into martensite under mechanical loading (TRIP effect), which provides a very attractive combination of strength and ductility. One route to obtain this type of microstructure is to apply an intercritical annealing to a fully martensitic steel (obtained by austenitization and quenching). During such an annealing, the formation of austenite obeys a specific mechanism known as ART — Austenite Reverted Transformation. Understanding and modeling the phenomena at play during ART annealing is of great scientific interest. The objective of this thesis work was thus to study and model the microstructural evolutions during ART annealing in relation with the mechanical properties. First, an "alloy design" study was carried out to determine the chemical composition of the steel and the heat treatment adapted to this grade, using a combined numerical and experimental approach. Then, intercritical annealings at 650°C with variable holding times (between 3 min and 30 h) were performed on the cold-rolled steel containing 0.098% C and 4.7% Mn (wt.). After each treatment, two types of characterization were carried out: microstructural analysis and evaluation of the mechanical properties. The evolution of the microstructure during annealing was studied using two complementary approaches: experimental characterization at the scale of the phases (SEM, TEM, ...) and thermodynamic modeling. The final microstructure was found to consist of phases of different natures (ferrite, retained austenite and fresh martensite) and morphologies (lath-like and polygonal). Particular attention was paid to the kinetics of carbide dissolution and austenite formation, and a complete picture of these processes was built. Moreover, an important effect of the ultra-fine grain size on the overall kinetics was demonstrated by comparing the results of numerical calculations with the experimental data. Furthermore, the mechanism of retained austenite stabilization at room temperature was studied and discussed. The two contributions to the stability of retained austenite, chemical composition and grain size, were analyzed on the basis of the time evolution of the retained austenite. Finally, an adapted formulation for calculating the Ms temperature of "Medium Manganese" steels with an ultra-fine microstructure was proposed. Tensile tests were performed to evaluate the mechanical behavior of the steel after the different ART annealings, and the links between the mechanical response of the steel and its microstructure were established.
A more detailed analysis of the behavior of each microstructure constituent (ferrite, retained austenite and fresh martensite) was carried out. It revealed that Mn has a very strong effect on the strength and on the work hardening of fresh martensite. This observation led to a specific model describing the true stress versus true strain curve of fresh martensite. In contrast, the behaviors of ferrite and retained austenite were modeled using approaches already existing in the literature. At the end of this thesis, a complete model is available to calculate the true stress versus true strain curves of a "Medium Mn" steel after ART annealing. It relies on the ISO-W approach to describe this evolving multiphase material.

INTRODUCTION

Background of this study

For more than 20 years, carmakers worldwide have been making steady progress in materials selection in order to decrease the weight of vehicles. The goal of car weight reduction is to minimize fuel consumption and CO2 emissions while preserving, or even improving, passenger safety. Vehicle lightening, especially for passenger cars, is also stipulated by European Union (EU) legislation: in 2009, mandatory emission reduction targets for new cars were approved [EC]. Nowadays, the average CO2 emissions of the EU car fleet are closely monitored and reported. The evolution of CO2 emissions and the targets for 2015 and 2020 are presented in Figure 1. The 2015 and 2020 targets are 130 grams of CO2 per kilometre (g/km) and 95 g/km, respectively; compared with the 2007 fleet average of 158.7 g/km, these targets represent reductions of 18% and 40%, respectively [MON'12]. To achieve these quite ambitious targets, carmakers are compelled to use the most innovative solutions in terms of materials and design. The car body and many other parts of a vehicle are made of different steels. Hence, there are two strategies for weight reduction: a) use continuously improved steel solutions; b) use alternative materials such as aluminum, magnesium, plastics and others. So far, steel remains a very attractive and functional material, as it combines a wide range of mechanical properties with a low production cost. This is why research and development in the field of steels remains important and has its place in the near future. For many years, steel researchers worked on the conventional Advanced High Strength Steel (AHSS) grades: Dual Phase (DP), TRansformation Induced Plasticity (TRIP), Complex Phase (CP) and Martensitic (M) steels. These steels were classified as the 1st Generation AHSS. Then, the TWinning Induced Plasticity (TWIP) steels were discovered and investigated as the 2nd Generation AHSS. The production of these steels is hindered by their high alloying content, which entails a significant cost increase and numerous problems along the process route. Finally, researchers are nowadays trying to develop the 3rd Generation AHSS. The objective of this development is to propose steels with an intermediate strength-ductility balance (better than the 1st Generation AHSS, though probably below the 2nd Generation AHSS) but with reasonable costs and production constraints. The position of this target in terms of the tensile strength/total elongation balance, together with the already developed 1st and 2nd Generations of AHSS, is illustrated in Figure 2.
There are different metallurgical concepts that can offer a steel product with the satisfactory strength-ductility balance: carbide free bainite (CFB), annealed martensite matrix (AMM), quenching and partitioning (Q&P) and, finally, so-called "Medium Mn" steel (MMS). This research is focused on the last concept. Medium Mn TRIP steel is a promising solution to get high strength steels with good formability. Recently, a lot of studies on MMS concept were done and published. The results of some of these studies [MIL '72], [FUR'89], [HUA'94], [FUR'94], [MER'07], [SUH'09], [SHI '10], [COO '10], [GIB '11], [JUN'11], [CAO '11], [ARL'11] are shown in Figure 3. This graph presents the Ultimate Tensile Stress (UTS) as a function of Total Elongation (TE) and the targeted domain of the 3 rd Generation AHSS. It can be seen that MMS has a great potential since some combinations already satisfy the target of third generation high strength steels. Aim of this study The mechanical response of MMS is very attractive. However the studies of these steels started not so long time ago. Hence, there are a certain amount of unanswered questions about microstructure formation, mechanical behavior and the link between these two properties. Therefore, it was proposed to investigate the following topics: 1. Understand the mechanisms of microstructure formation during intercritical annealing of MMS: -C and Mn distribution between phases; -Mechanisms of austenite formation and stabilization ; 2. Explain the relation between microstructure and mechanical properties obtained after thermal treatments: -Influence of Mn and ultra-fine size on work hardening; -Austenite strain induced transformation -TRIP effect; 3. Develop a model for prediction of mechanical behavior of the steel, considered as a multi-phase evolutive material. More global purpose of this research is to acquire all the necessary knowledge of this concept in order to build the tools (models) for the further optimization of steel compositions and thermal treatments. This objective is an essential step for the future product development of such type of steels. Context of this study This PhD work was done in a specific context. Before starting the PhD, the author of this work was working for 4 years and continued to work during the PhD in ArcelorMittal Maizieres Research centre, in the Automotive department and more precisely in the Metallurgy Prospects and Manufacturing (MPM) team. Therefore, this PhD work was accomplished in particular conditions with somehow following time partitioning: 50% of the authors' time was allocated to its engineer work in ArcelorMittal Maizieres Research centre and other 50% was dedicated to the PhD study. This work was done within the collaboration between ArcelorMittal Maizieres Research centre and the University of Lorraine, in particular LEM3 laboratory. It was supervised by three persons: Director -Alain HAZOTTE (professor in LEM3 laboratory, University of Lorraine); Co-director -Mohamed GOUNÉ (former research engineer in ArcelorMittal Maizieres Research centre, now professor in ICMCB laboratory, University of Bordeaux); Co-supervisor -Olivier BOUAZIZ (former research engineer in ArcelorMittal Maizieres Research centre, now professor in LEM3 laboratory, University of Lorraine). Finally, it should be highlighted that the major part of the experimental work was done in the ArcelorMittal Maizieres Research centre. Presentation of the manuscript The manuscript is divided in the following four parts. 
Chapter 1 is devoted to the literature review. The analysis of the found literature relative to microstructure formation and mechanical behavior of MMS steels is presented. In the part about the microstructure, the following major topics are treated: recrystallization, standard and reverse austenite formation, and, at last, austenite thermal stability. Concerning mechanical behavior part, the following subjects in link with this study are reviewed: global analysis of mechanical properties of MMS in relation with the available microstructure characterizations (retained austenite fraction), mechanical behavior of ultra fine ferrite, fresh martensite and retained austenite (including TRIP effect) and, finally, modeling of the multiphase steel mechanical behavior. Chapter 2 presents different experimental and numerical techniques used in this work. All the tools utilized for the elaboration of steel, for microstructure observation and analysis at different scales, for thermodynamic calculations and for final properties measurements are described. In the second part of this chapter, selection of chemical composition of steel and of the subsequent thermal treatment is given. To define temperature of the thermal treatment a combinatory approach is used. It is based on both thermodynamic simulations and particular experimental heat treatment. In Chapter 3, the observations and analysis of microstructures obtained after different thermal treatments are presented and described. Morphology, size and chemical composition of phases are analyzed. Microstructure evolution with time and different steps of austenite formation are discussed. The results of thermodynamic simulations and their comparison with experimental values are also considered in this chapter. According to these simulations, particularities of austenite transformation in MMS are argued. This microstructure analysis permits to describe microstructure evolution and in particular to explain unambiguously the stability of austenite at room temperature. The effect of austenite size on its stability is introduced in a particular manner through the variation of M s temperature. At last, the mechanisms of austenite size influence are debated. Chapter 4 is dedicated to the mechanical properties of MMS. Tensile behavior of intercritically annealed medium Mn steels with different phases (two phases and three phases microstructures) is presented and analyzed. Tensile behavior of each phase constituent (as-quenched martensite, ferrite with medium Mn content and austenite with medium C and Mn content) is described. Influence of Mn on the work hardening of the different phases and on the mechanical stability of austenite is discussed. Finally, a global mechanical model capable to predict the whole tensile curve of the intercritically annealed medium Mn steels with different phases is proposed. This model is based on the analysis of the mechanical behavior of each constituent and on the detailed experimental description of the microstructures. The performance of the model is also discussed. CHAPTER 1: LITERATURE REVIEW General Description More than 200 years ago (in 1774) manganese was elaborated in metallic state and from the end of nineteenth century manganese is widely applied as an alloying element for the steel elaboration. One of the most remarkable and important discovery in the field of steels was done by Sir Robert Abbott Hadfield. 
He searched for a hard and in the same time tough steel and developed a steel containing around 12% Mn and having outstanding combination of hardness and toughness. His invention was called in his honor Hadfield steel and since that time was produced in significant quantities [DES '76] [MAN '56]. Gradually, manganese became a common alloying element for the steel production. Its influence on different properties (hardenability, grain size, phase transformations, mechanical behavior and etc…) was widely studied. However, manganese content was often limited to ~2% because of its important hardenability [DES '76]. In the end of 60's and beginning of 70's of the twentieth century, Medium Mn steels (up to 6% Mn) were first proposed by Grange and Hribal [GRA'68] as an air-hardenable material. Fully martensitic structure was obtained for a 0.1 wt.% C -6 wt.% Mn steel with the cooling rate of 1.7°C/min. But this structure had poor toughness properties. Thus, a tempering treatment was applied and resulted in a good balance between strength and ductility. Miller and Grange [MIL'70], [MIL'72] continued to work on this material and found that this good mechanical behavior can be explained by rather high fraction of austenite retained after tempering and ultrafine size of the microstructure features. In the beginning of 90's,Furukawa et al. [FUR'89], [HUA '94], [FUR'94] also found a good balance of strength and ductility after annealing martensitic hot rolled steels with 5 wt.% Mn in their intercritical domain. One more time it was shown that ultra-fine microstructure and retained austenite have an important role in these steels. These studies also claimed that there are optimum temperature and time to get the maximum of retained austenite. Nowadays, due to the increasing demand for the development of "3rd generation high strength steels" the interest to MMS attained its maximum and the number of research and development studies is exponentially growing. Final properties of the steel product depend on its chemical composition and final structure: grain size, phase's fractions, precipitates and dislocation content. Further, the final microstructure in most of the cases is directly linked to all the steps of elaboration (thermomechanical treatments). Generally, one of the most important steps, that will condition the final properties, is annealing. Heat treatment necessary for the production of MMS is conventional annealing: heating to soaking temperature, holding and final cooling. Thus, most of the phenomena happening during annealing are already well described. However, there are some particularities in these steels. Due to the high manganese content which increases drastically the steel hardenability, in majority of the cases, the initial (before annealing) microstructure will be fully martensitic or a mixture of bainite and martensite. Also, thanks to high manganese content which decreases grain boundaries mobility, a very fine size of microstructural features is expected. These two particularities will impact the phase transformations during annealing and will result in different mechanical properties. The available in the literature information necessary for understanding of phase transformations, microstructure formation and mechanical properties of MMS will be given in this chapter. Mechanisms of phase transformations Generally, in the simple Fe-C-Mn system during heating the following phenomena can happen: recrystallization, carbide dissolution and austenite nucleation and growth. 
The microstructure at the end of heating depends strongly on the initial state of the material: the initial microstructure and/or the deformation level. In the case of an important deformation level, whatever the initial microstructure, the steel progressively evolves through recrystallization, carbide dissolution, and austenite nucleation and growth. When the initial structure is not deformed (case of a double annealing, for example), the recrystallization step is suppressed, and carbide dissolution and austenite nucleation and growth depend on the initial microstructure of the steel. The case of an intermediate deformation level combines all the difficulties: recrystallization, together with the dependence of carbide dissolution and of austenite nucleation and growth on the initial microstructure. In the work of Arlazarov et al. [ARL'12] it was shown that after intercritical annealing of MMS a complex ultra-fine mixture of ferrite, retained austenite and/or martensite can be obtained. Examples of such microstructures are presented in Figure 4 [ARL'12]. The complexity of the phase constituents and of their morphology results from the interference of the different phenomena happening during annealing. Increasing the Mn content is known to lower the temperature range of the intercritical domain; thus, to obtain a given fraction of austenite, the annealing temperature should be lower. On the other hand, at lower annealing temperatures, recrystallization and cementite dissolution have more sluggish kinetics. Therefore, an increase of Mn creates an overlap between recrystallization, cementite dissolution and austenite transformation. In addition, the development and interaction of these phenomena depend on the initial (before annealing) microstructure. Finally, austenite stabilization, or its transformation to martensite during the final cooling step, is significantly influenced by the processes occurring during heating and holding. Taking into account the complexity of microstructure formation in annealed MMS, it is necessary to introduce a description of each phenomenon. The following topics are therefore presented in this subchapter: recrystallization, austenite formation during intercritical annealing, and austenite stabilization during annealing.

Recrystallization

Deformation of steel, by rolling, forging or other processes, introduces into the material a certain number of defects (in particular dislocations) whose density depends on the strain level. The deformed metal stores a certain amount of energy; hence, during reheating and soaking, it tends to eliminate the defects and to recover its structure and properties. Depending on the heating rate, the reheating temperature and the quantity of stored energy, the following processes can happen: rearrangement and annihilation of dislocations, formation of new dislocation-free grains, and their growth. The recrystallization process can thus be divided into three steps:
o recovery - rearrangement and annihilation of dislocations, formation of a sub-structure;
o recrystallization - nucleation of new dislocation-free grains;
o grain growth - the larger grains grow by consuming the smaller ones, in order to reach a lower-energy configuration of the grain boundaries [HUM'04].
A schematic representation of the different stages of recrystallization is presented in Figure 5. As mentioned previously, in the case of a non-deformed initial microstructure there is, in the majority of cases, no recrystallization. However, when the initial structure is fully martensitic, the dislocation density is high enough to provoke martensite recrystallization (nucleation of new ferrite grains).
Some studies about martensite recovery and recrystallyzation were already done in the past. First of all in late 60's and in the beginning of 70's G. R. Speich [SPE'69], [SPE'72] investigated the effect of tempering in low carbon martensite. Early stages of carbon segregation and carbide precipitation were observed and temperature ranges of martensite recovery and recrystallization were established. Recovered martensite was characterized by the lath structure with a certain dislocation density, issued from the as-quenched lath martensite, and after recrystallization strain-free equiaxed grains were observed. Then, in the beginning of 70's, R. Caron and G. Krauss [CAR'72] made an exhaustive study of the microstructure evolution during tempering of 0.2 wt.% C steel. They found that coarsening of fine lath martensite structure with the elongated packet-lath morphology is the major microstructure transformation and only at late stages of tempering an equiaxed structure gradually appears. Figure 6 shows their microstructure observations with optical microscope (OM) and electron micrograph extraction replica of the sample tempered at 700°C for 12h. Figure 6 -Extraction replica TEM electron micrographs of lath martensite tempered at 500°C for 5h (at left) and at 700°C for 12h (at right) [CAR '72]. In spite of the equiaxed form of grains the increase in high angle boundaries expected from the formation of strain-free grains was not observed. Hence, R. Caron and G. Krauss proposed the following scenario of as-quenched martensite tempering:  first recovery takes place and significantly decreases the low angle boundaries content. At this stage the morphology of the tempered martensite is lath-like coming from the initial lath morphology of the as-quenched martensite. In the same time carbon segregation and carbide precipitation occurs at very early stages of tempering;  then recovery continues by polygonization or low angle boundary formation because the recrystallization is delayed due to the pinning of grain boundaries by carbide particles. As well process of carbide spheroidization, or Ostwald ripening, can occur;  finally the aligned coarsened lath morphology, left from the earlier stages, transforms to an equiaxed ferritic grains through the grain growth mechanism [CAR '72]. These results were supported by the work of S. Takaki et al. [TAK'92] on the commercial 0.2%C steel. No proofs of the as-quenched martensite recrystallization were found. However, recrystallization of the deformed lath martensite was observed and the influence of deformation rate on the recovery and recrystallization was studied. More recent research works of T. Tsuchiyama et al. [TSU'01], [NAT '05] and [TSU '10] showed that recrystallization of ultra-low carbon steel at high tempering temperatures and long times is possible through the specific bulge nucleation and growth (BNG) mechanism (Figure 7). Bulging of packet boundaries and prior austenite grain boundaries results in the recrystallized grains nucleus that grow afterwards by consuming martensitic structure with high dislocation density. ArcelorMittal internal experience also shows the possibility of the as-quenched martensite recrystallization in 0.4C-0.7Mn steel [TAU '13]. Martensite, obtained after austenitization at 900°C for 5min followed by water quench, was then tempered at 690°C for 60h and resulted in microstructure consisting of strain-free equiaxed ferrite grains and cementite (Figure 8). 
Austenite formation during intercritical annealing

Austenite formation in steels and alloys has been studied for more than 100 years. One of the first works on austenite was done by Arnold and McWilliams in 1905 [ARN'05]: they found that austenite forms during heating through nucleation and growth mechanisms. The same conclusion was reached by Roberts and Mehl [ROB'43], who also performed an exhaustive analysis of the kinetics of austenite formation starting from a ferrite-cementite microstructure. The development of dual-phase and TRIP steel grades from the early 80's pushed researchers to perform numerous works on austenite formation. A detailed study of austenite formation in steels with various carbon contents and 1.5 wt.% Mn was done by Garcia and DeArdo [GAR'81]. Different initial microstructures were used: spheroidized cementite in a recrystallized ferrite matrix, spheroidized cementite in a cold-rolled ferrite matrix, and ferrite plus pearlite. It was found that austenite nucleates at cementite particles located on the ferrite grain boundaries in the case of spheroidized cementite, and on either pearlite colony boundaries or boundaries separating colonies and ferrite grains in the case of pearlite. The kinetics of austenite formation at 725°C was also studied. Austenite forms slightly more rapidly from cold-rolled ferrite than from recrystallized ferrite or ferrite-pearlite structures (Figure 9), although the final amount of formed austenite is very similar in the different cases.

Figure 9 - Comparison of austenite formation kinetics at 725°C from various starting microstructures [GAR'81].

Another interesting effect observed by Garcia and DeArdo [GAR'81] was that of Mn segregation (banding) on the pearlite microstructure: pearlite in the zones with higher Mn content had a finer interlamellar spacing compared with the pearlite outside these Mn bands. A similar but very important study was done by Speich et al. [SPE'81]. A series of 1.5 pct manganese steels containing different carbon amounts, with a ferrite-pearlite starting microstructure, was investigated in the range of 740 to 900°C. According to the obtained experimental results, the kinetics of austenite formation was separated into three steps:
1) prompt nucleation of austenite at the ferrite-pearlite interface and very rapid growth of austenite into pearlite until the full dissolution of cementite (Figure 10(1));
2) after the dissolution of cementite, further growth of austenite into ferrite at a slower rate, controlled by carbon diffusion in austenite at high temperatures (850 to 900°C) and by manganese diffusion in ferrite at low intercritical temperatures (740 to 780°C) (Figure 10(2a) and (2b));
3) very sluggish final equilibration of the manganese contents of the austenite and ferrite phases, controlled by manganese diffusion in austenite (Figure 10(3)).
With the help of diffusion models and experimental results, an austenite formation diagram, graphically illustrating the amount of produced austenite and the processes controlling the kinetics at different times and temperatures, was constructed for one of the steels (Figure 11).

Figure 11 - Diagram for austenite formation and growth in 0.12C-1.5Mn (wt.%) steel [SPE'81].

On the other hand, Pussegoda et al. [PUS'84] studied a 0.06C, 2.83Mn, 0.33Si (wt.%) steel and demonstrated significant partitioning of Mn between ferrite and austenite during intercritical annealing.
It was shown that Mn can diffuse to the center of the austenite grains at 695°C within a reasonable time — of the order of hours rather than centuries (Figure 12). This result was partly attributed to the very fine-grained microstructure obtained, with a ferrite grain size of ~5 µm and an austenite grain size of ~1 µm. The diffusion rate of Mn in austenite at 695°C was estimated at ~10⁻¹⁴ cm²/s, which is considerably higher than the value obtained by extrapolating diffusion measurements from higher temperatures.

Figure 12 - Mn profiles across austenite grains after intercritical annealing; the positions of the α/α′ boundaries are indicated [PUS'84].

The effect of cold deformation on the kinetics of austenite formation in a 0.11C-1.58Mn-0.4Si (wt.%) steel was studied by El-Sesy et al. [ELS'90]. Figure 13 shows the evolution of the austenite fraction with time at 735°C for samples with different initial states: hot-rolled, 25% and 50% cold-deformed. The kinetics of austenite formation (at least until the complete dissolution of cementite) increases as the cold-rolling level rises from 0% to 50%. Two possible reasons for the influence of cold deformation were proposed:
• a higher deformation level decreases the size of the recrystallized ferrite grains, hence increasing the number of austenite nucleation sites at the intersections of ferrite grains with pearlite; a higher density of ferrite grain boundaries also promotes easier diffusion of substitutional alloying elements (for example, Mn) during the second step of austenite growth;
• an increase of cold rolling accelerates austenite formation through a higher driving force or a lower activation energy [ELS'90].

Figure 13 - Effect of cold deformation on the austenite formation kinetics in 0.11C-1.58Mn-0.4Si steel at 735°C. FGS is the ferrite grain size and tp is the time of complete pearlite dissolution [ELS'90].

Finally, the influence of manganese on austenite formation was also investigated in the past [BAI'69], [WEL'48]. The Fe-C equilibrium diagram and the effect of manganese on the form and position of the austenitic region are shown in Figure 14. Mn extends the austenitic domain to lower temperatures and shrinks the intercritical (austenite + ferrite) domain. At the same time, Mn lowers the eutectoid point "E" in both temperature and carbon content. Manganese also enhances the stability of austenite and, hence, enlarges the region of metastable austenite. The delay of austenite decomposition into ferrite plus pearlite at 525°C and into bainite at 310°C is clearly illustrated in the isothermal transformation diagrams of Figure 15 [BAI'69]. All these Mn effects are quite important for the appropriate choice of the annealing temperature.
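A quick order-of-magnitude check makes the diffusion argument of [PUS'84] above tangible: with the classical estimate x ≈ √(Dt), one can compare the homogenization times implied by the measured diffusivity and by a much lower value of the kind obtained by extrapolation from high temperatures. The sketch below is illustrative only; the extrapolated diffusivity is an assumed placeholder, not a value from the cited work.

```python
import math

# Order-of-magnitude check of Mn homogenization times in austenite,
# following [PUS'84]: diffusion distance x ~ sqrt(D*t)  =>  t ~ x^2/D.
D_measured = 1e-14      # cm^2/s, Mn in austenite at 695 C, as estimated in [PUS'84]
D_extrapolated = 1e-19  # cm^2/s, assumed illustrative high-T extrapolation

for D, label in ((D_measured, "measured"), (D_extrapolated, "extrapolated")):
    for x_um in (0.1, 0.5):          # half-width of an ultra-fine austenite grain
        x = x_um * 1e-4              # micrometers -> centimeters
        t = x ** 2 / D               # seconds
        print(f"{label:12s} D = {D:.0e} cm2/s, x = {x_um} um: t = {t/3600:.1e} h")
```

With the measured diffusivity, crossing a few tenths of a micrometer takes hours to days, whereas the low extrapolated value would require centuries — which is exactly why the ultra-fine grain size matters for Mn partitioning.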
The pioneering study in this field was done by Nehrenberg in the late 40's on different commercial C-Mn steels [NEH'50]. He investigated the dependence of austenite formation on the initial microstructure: pearlite, spheroidite (spheroidal carbides and ferrite), martensite, tempered martensite and bainite. It was found that austenite grows more or less freely from pearlite and spheroidite, yielding an equiaxed ferrite-austenite structure. On the other hand, martensite, tempered martensite or bainite result in an acicular shape of the final austenite and ferrite, and a lamellar microstructure consisting of these alternating acicular phases is obtained. Plichta and Aaronson [PLI'74] examined the effectiveness of different alloying elements in producing the acicular morphology. It was found that Mn, Ni and Cu stimulate the formation of a fine, complex, acicular network of boundaries in which the influence of the initial martensite structure is clearly reflected. The proposed explanation was that the rates of austenite nucleation are sufficiently high to significantly restrict the migration of the martensite needle boundaries. Interesting observations were also made in the study of Fe-C-Ni steels reported in [MAT'84]. The final microstructure after reverse transformation consisted of two types of austenite grains: globular and acicular. The effect of the heating rate on the formed microstructures was then examined, and it was found that the number of globular grains decreases with decreasing heating rate. Figure 16 shows two examples of microstructure and the evolution of the number of globular austenite grains as a function of the heating rate. It was also observed that cementite is more enriched in Mn at lower heating rates; hence, it was supposed that the decrease in the number of globular austenite grains is due to growth being inhibited by Mn segregation. Similar observations were made by Kinoshita and Ueda [KIN'74] for a Fe-0.21C-1Cr (wt.%) steel and by Law and Edmonds [LAW'80] for a Fe-0.2C-1V (wt.%) steel. Another researcher who contributed a lot to the development of ART annealing was Gareth Thomas [KOO'76], [KOO'77], [KIM'81], [NAK'85]. He was one of the first to clearly show the interest of ART annealing for industrial use: grain refinement, lath morphology and improvement of the mechanical behavior. He also clearly demonstrated the difference between ART, intercritical annealing, and the formation of ferrite from high-temperature austenite; this is why some researchers call the fibrous microstructure obtained by ART "Thomas fibers". Figure 17 shows the scheme of the thermal cycles that were compared and the corresponding resulting microstructures [KIM'81]. Using transmission electron microscopy (TEM), it was also observed that the ferrite regions retain some subgrain structure after ART and intercritical annealing, while the step-quenched coarse ferrite was free of such subgrains. Different studies [TOK'82], [MAT'84], [YI'85], [CAI'85] also showed that the martensite structure prior to annealing conditions austenite nucleation and, hence, the morphology of the final microstructure. These investigations point out the importance of the following parameters for the control of the austenite shape and distribution:
• prior austenite grain size;
• presence of non-dissolved carbides;
• presence of defects (deformation state);
• recrystallization state of the martensite.
In particular, Yi et al. [YI'85] observed an unambiguous difference in the morphology of the austenite obtained from two different martensitic states. A succinct representation of their results is given in Figure 18. Very rapid austenitization prevents the full dissolution of carbides in austenite and limits austenite grain growth; hence, during the subsequent intercritical annealing, new austenite nucleates mostly on the prior austenite grain boundaries and its shape is more globular. On the contrary, austenite nucleates at lath and packet boundaries and has a lath-like morphology in the case of an initial martensite with fully dissolved carbides and bigger grains. Cai et al. [CAI'85] also observed sluggish austenite kinetics and an extensive partitioning of Mn between annealed martensite (ferrite) and austenite.
This indicates that the driving force for austenite growth is rather small and that the kinetics is controlled by manganese diffusion. Two very recent studies of the ART mechanisms, [WEI'13] and [NAK'13], investigated the formation and growth of austenite from as-quenched martensite and came to almost the same conclusions. Wei et al. [WEI'13] studied a 0.1C-3Mn-1.5Si (wt.%) steel and found that, during intercritical annealing of an as-quenched martensitic structure, austenite grows from the thin austenite films retained between laths upon quenching, but also nucleates at lath and packet boundaries of the martensite and within the laths. As found by Speich et al. [SPE'81], the growth of austenite can be divided into three stages:
1. initial partition-less growth of austenite, controlled by rapid carbon diffusion in ferrite, which is gradually replaced by carbon diffusion in austenite;
2. intermediate slow growth, controlled by the diffusion of Mn and/or Si in ferrite;
3. very slow growth, controlled by the diffusion of Mn and/or Si in austenite for the final equilibration, which accompanies the shrinkage of the austenite [WEI'13].
These three stages are clearly depicted in Figure 19 [WEI'13]. Internal ArcelorMittal results [MOU'03] confirmed the observation of the fine fibrous microstructure obtained after intercritical annealing of martensite and its interest for the improvement of the mechanical behavior.

Austenite stabilization during annealing

It is known that, during cooling from the austenitic or intercritical region, austenite undergoes a transformation that depends on the cooling rate, into ferrite, pearlite, bainite or martensite. If the stability of the austenite is high enough, however, these transformations can be avoided and austenite is retained at room temperature. In medium Mn steels, the ferrite, pearlite and bainite transformations are strongly delayed by the high Mn content; hence, in the majority of cases, the only transformation that takes place during cooling is the martensitic one. Numerous studies have been devoted to the observation and understanding of the martensitic transformation, and many reviews have been published. The reviews and books used in this work as the basis for the understanding of martensite and its transformation are the following: [KUR'60], [KUR'77], [OLS'92], [KRA'99]. These authors made a considerable contribution to building the knowledge about martensite. The martensite transformation is characterized by two temperatures: Ms, the temperature at which martensite starts to form, and Mf, the temperature at which the martensite transformation is finished. Accordingly, during cooling, austenite does not transform at temperatures higher than Ms. It is therefore very important to know the Ms temperature, since it can be used as an indicator of the thermal stability of austenite. Many studies have addressed the influence of different parameters on the Ms temperature. It is commonly agreed that Ms depends on the chemical composition, the prior austenite grain size, the hydrostatic pressure and the applied stress, and that the effects of chemical composition and of a fine prior austenite grain size (less than 10 µm) are of major importance. A number of relations linking the chemical composition and the Ms temperature exist in the literature; probably the most common are those proposed by Steven and Haynes [STE'56] and Andrews [AND'65].
However, the more recent work of Van Bohemen [BOH'12] proposed a better correlation between the Ms temperature and the carbon content, using an exponential carbon dependence. A clear review of past investigations, together with an extension of the Ms relation to more alloying elements and broader content ranges, was made by Barbier [BAR'14]. Van Bohemen's formula reads

Ms = 565 − 600·[1 − exp(−0.96·wC)] − 31·wMn − 13·wSi − 10·wCr − 8·wNi − 12·wMo   (1)

where wi is the content of alloying element i in austenite, expressed in weight percent. Barbier's relation (2) keeps the same exponential carbon dependence and extends the linear corrections beyond Mn, Si, Cr, Ni and Mo to V, Co, Al, Cu, Nb, Ti and B. Both formulas give very satisfactory results, but Barbier's has a wider application field; it is therefore taken as the reference in this work.

Studies on the effect of the prior austenite grain size on austenite stability and on the Ms temperature are scarcer, probably because of experimental difficulties: the observation of prior austenite grains is quite complex. The first studies were therefore mostly theoretical. One of the best-known works is that of Fisher et al. [FIS'49], in which a nucleation and growth model was proposed to predict the kinetics of the martensite transformation. The effect of the prior austenite grain size was introduced in the model on the basis of geometrical constraints: the volume fraction of forming martensite is proportional to the volume of the prior austenite grains. Modern studies [WAN'01], [DRI'04], [JIM'07-1], [JIM'07-2], [YAN'09] confirmed that a small austenite grain size considerably decreases the Ms temperature; the last two also proposed ways to take the influence of the austenite grain size into account. Jimenez-Melero et al. [JIM'07-2] suggested the following equation:

M′s = Ms − B·Vγ^(−1/3)   (3)

where Ms is taken from the Andrews equation [AND'65], Vγ is the volume of the austenite grain, and B is a parameter that can be obtained from thermodynamic calculations. For spherical grains, the factor B was fitted to the value of 475 µm·K. Yang et al. [YAN'09] followed the same general idea, but a different formula was given in their study of Fe-C-Ni-Mn alloys:

Ms = M⁰s − (1/b)·ln[ (1/(a·Vγ))·(exp(−ln(1 − f)/m) − 1) + 1 ]   (4)

where Vγ is the volume of the austenite grain, f is the martensite fraction (f = 0.01 for the calculation of Ms), m is the plate aspect ratio (m = 0.05 from [CHR'79]), and a and b are fitting constants [YAN'09]. As can be seen, both solutions have key limitations:
• the model of Jimenez-Melero et al. predicts too strong an impact of very small grains (less than 2 µm);
• the equation of Yang et al., on the contrary, predicts too weak an impact of fine grains (less than 2 µm).
Such restrictions leave room for further investigations and new solutions. Another important topic that has been largely studied is the kinetics of the martensite transformation. In Fe-C-Mn steels, the austenite-to-martensite transformation is athermal: its kinetics depends only on the decrease of temperature, the driving force for the transformation increasing with the supercooling below the Ms temperature. The most widespread model to predict the kinetics of athermal martensite transformation is the empirical Koistinen-Marburger equation [KOI'59]:

f_RA = exp[−α·(Ms − Tq)]   (5)

where f_RA is the volume fraction of retained austenite, Tq is the temperature reached during quenching, and α = 0.011 K⁻¹ is a fitting parameter.
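To illustrate how Eqs. (1), (3) and (5) combine in practice, the short script below estimates the Ms temperature of an intercritical austenite from its composition, applies a grain-size correction of the Jimenez-Melero type, and evaluates the retained austenite fraction at room temperature with the Koistinen-Marburger law. It is a minimal sketch: the composition and grain sizes are arbitrary illustrative values, and Van Bohemen's relation (1) is used in place of Andrews' for convenience.

```python
import math

def ms_van_bohemen(wC, wMn, wSi=0.0, wCr=0.0, wNi=0.0, wMo=0.0):
    """Ms (deg C) from the austenite composition, Eq. (1) [BOH'12]."""
    return (565 - 600 * (1 - math.exp(-0.96 * wC))
            - 31 * wMn - 13 * wSi - 10 * wCr - 8 * wNi - 12 * wMo)

def ms_grain_size(ms_bulk, d_um, B=475.0):
    """Grain-size corrected Ms, Eq. (3): Ms' = Ms - B*V^(-1/3),
    d_um = equivalent spherical grain diameter in micrometers."""
    v = math.pi / 6.0 * d_um ** 3            # grain volume in um^3
    return ms_bulk - B * v ** (-1.0 / 3.0)

def retained_austenite(ms, t_quench=20.0, alpha=0.011):
    """Koistinen-Marburger, Eq. (5): f_RA = exp(-alpha*(Ms - Tq))."""
    if ms <= t_quench:
        return 1.0                            # no transformation above Ms
    return math.exp(-alpha * (ms - t_quench))

# Illustrative intercritical austenite, C- and Mn-enriched by partitioning
wC, wMn = 0.20, 5.0                           # wt.% in austenite (assumed)
for d in (5.0, 1.0, 0.3):                     # grain diameter in um
    ms = ms_grain_size(ms_van_bohemen(wC, wMn), d)
    print(f"d = {d:3.1f} um: Ms = {ms:7.1f} C, "
          f"f_RA = {retained_austenite(ms):.2f}")
```

The output reproduces the expected trend — finer grains depress Ms and raise the retained fraction — but also exhibits the overshoot of Eq. (3) for sub-micrometer grains criticized in the text, where the correction drives Ms far below room temperature.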
Van Bohemen et al. [BOH'09] and later works proposed some modifications to this equation. In [BOH'09] it was first found that the curvature of the martensite transformation kinetics curve depends on the chemical composition of the austenite. The curvature is controlled by the parameter α; an empirical dependence of α on the chemical composition of the austenite was therefore established (Eq. (6)), and subsequent work further related the pair of kinetic parameters (α, β) to the composition and to the prior austenite grain size (Eq. (7)). From the previously mentioned works, it appears that two factors have a major influence on the Ms temperature, and thus on austenite stability:
1. the chemical enrichment of austenite with alloying elements (C, Mn, Si and others);
2. the austenite grain size, which can substantially increase austenite stability when it is very small.
Considering these factors, one can design an intercritical annealing that allows considerable partitioning of C and Mn (the major austenite stabilizers) while keeping the austenite grain size small. With such an intercritical annealing, an important fraction of austenite can be retained at room temperature. One of the first works giving experimental confirmation of important Mn partitioning in medium Mn steels was the study of Kim et al. [KIM'05]. It showed that high amounts of retained austenite can be stabilized after intercritical annealing and that the stability of the retained austenite was due to both C and Mn partitioning (Figure 21). Recent studies by different authors [COO'10], [WAN'11], [LUO'11], [JUN'11], [LUO'13] confirmed the partitioning of Mn during intercritical annealing and the strong Mn enrichment of austenite. Examples of TEM observations and EDX profiles across austenite grains, taken from [COO'10], are presented in Figure 22. As can be seen from Figure 22, the obtained austenite grain size is ultra-fine (less than 1 µm), so an important contribution of the grain-size effect to the stability of austenite is expected. Already in the early works on medium Mn steels by Huang et al. [HUA'94], the ultra-fine grain size was reported and its influence on austenite stability was anticipated. Recent works [WAN'11], [LUO'11], [JUN'11] confirmed the ultra-fine austenite grain size with quantitative results; an example of such a quantification, taken from the work of Luo et al. [LUO'11], is shown in Figure 23. Based on their previous work, Lee et al. [LEE'13-1] also proposed a way to take the influence of the austenite grain size into account in the optimal selection of the annealing temperature, so as to obtain the maximum fraction of retained austenite.

Figure 23 - At left, microstructure observation using TEM; at right, apparent thickness of the austenite and ferrite laths, Mn content and austenite volume fraction, as measured by TEM, STEM-EDX and XRD, respectively [LUO'11].

Mechanical Properties of Medium Mn steels

In general, MMS are steels with an enhanced TRIP effect due to a high volume fraction of retained austenite. The matrix of these steels is an ultra-fine ferrite (in certain cases the mean size is less than 1 µm). The hard phases in this matrix are retained austenite (RA) and, if its stability is not sufficient, fresh martensite. Recently, many studies describing the mechanical behavior of MMS have been performed and published. In order to analyze the existing data and to get a clear view of the trends in the mechanical behavior of MMS, a database was built at the beginning of this work and updated throughout its duration. The entire database can be found in Annex 1.1.
In the following, only some extracted data, considered the most relevant for standard intercritically annealed MMS (no Q&P), are presented.

Overview of the mechanical properties

First of all, Figure 24 presents the classical Ultimate Tensile Strength versus Total Elongation (UTS-TE) chart as well as the more specific Yield Strength versus Uniform Elongation (YS-Uel) chart. As already stated, these two graphs show the clear potential of MMS to fulfill the requirements of 3rd generation AHSS. From the UTS-TE graph, UTS values ranging from 1000 to 1200 MPa can be obtained with TE ranging from 12 to 37%, which means that a variety of strength-ductility combinations can be produced with MMS. Even though some points reach higher UTS (up to 1400 MPa), the general trend is that ductility slightly decreases as UTS increases. For YS-Uel the tendency is less clear, but the quantity of available data is also lower; nevertheless, very attractive combinations of YS and Uel can be obtained in certain cases (~1000 MPa with more than 20% Uel). Using the available data set, relationships between some mechanical characteristics and microstructure parameters were investigated. Generally, ductility is correlated with the RA fraction. Thus, Figure 25 presents two dependencies: TE and Uel as a function of the RA fraction. The correlations are widely scattered. There is, of course, a trend of increasing elongation with increasing austenite fraction, but other parameters, such as austenite stability, probably also have an important influence. In any case, it is clear that the tensile elongation cannot be attributed to the RA fraction alone. The relation between the maximum true stress and the RA fraction is plotted in Figure 26; no correlation is observed between these two parameters. It was then supposed that the maximum true stress should depend not directly on the RA, but rather on the global fraction of hard constituents: fresh martensite plus RA. Unfortunately, in the majority of the articles only the fraction of RA is given; hence, further investigation of the links between microstructure and mechanical properties was not possible. Finally, it is known that the mechanical behavior of a multi-phase material depends on the behavior of each phase under mechanical loading. Considering that the studied steels can contain three different phases (ultra-fine ferrite, retained austenite and fresh martensite), the following sections review the mechanical behavior of each phase, together with information concerning the TRIP effect (strain-induced martensite transformation); finally, the possibilities of global modeling of multi-phase steels are discussed.

Mechanical behavior of ultra fine ferrite

Classically, the yield and flow stresses in polycrystalline materials can be described by the Hall-Petch equation:

σ = σ0 + k/√D   (8)

where σ is the yield or flow stress, D is the mean grain size, and σ0 and k are constants. This equation expresses a direct relation between the stress and the inverse square root of the mean grain size; its physical origin is that grain boundaries are obstacles to dislocation motion.
Later, the works on dislocation motion and storage by Bergström [BER'70], Mecking and Kocks [MEC'81], and Estrin [EST'96] allowed a more sophisticated description of the ferrite behavior law and of its dependence on the mean grain size. For example, in the work of Allain and Bouaziz [ALL'08], the following expression was proposed for the evolution of the ferrite flow stress σ_F as a function of its strain ε_F:

\sigma_F = \sigma_0 + \alpha M \mu \sqrt{\frac{b\left[1-\exp(-M f \varepsilon_F)\right]}{D\,f}} \qquad (9)

where α is a constant of the order of unity (α = 0.4), M is the mean Taylor factor (M = 3), μ is the shear modulus of ferrite (μ = 80 GPa), b is the Burgers vector (b ≈ 2.5×10⁻¹⁰ m), D is the ferrite mean grain size, σ_0 is the lattice friction stress, which depends on the chemical composition (elements in solid solution) and on the temperature, and f is a fitting parameter related to the intensity of the dynamic recovery processes, adjusted on experimental results.

Both laws (8) and (9) were derived and fitted on data with average grain sizes mostly larger than 2 µm. In this range of grain sizes they perform quite well and their predictions are in good agreement with experimental data. However, when the grain size is very small (below 2 or even 1 µm), such approaches appear to be inappropriate. Several works pointed this out and proposed new considerations and/or theories [CHO'89], [EMB'94], [BOU'09], [BOU'10]. For the moment, however, there is no clear vision in the literature of the strengthening mechanism of steels with ultra-fine (or nanoscale) grain size. A further difficulty lies in the production of such fine-scale structures and in the measurement of their mechanical properties. Only a few recent works have provided valuable data [OHM'04], [TSU'08]. An example of stress-strain curves obtained on a given steel with different ferrite grain sizes is shown in Figure 27. Looking at these results, it can be seen that for grain sizes below 1 µm the work hardening is close to zero. As well, the strength increase with decreasing grain size appears more pronounced for the ultra-fine structures. But even in these works the obtained ferrite grains co-existed with cementite particles, which can also influence the mechanical behavior; their presence should therefore be considered carefully.

The strong dependence of YS, UTS and Uel on grain size for ferritic steels with ultrafine grains was also shown in the more recent work of Bouaziz [BOU'09] (Figure 28). The variation of YS, UTS and Uel as a function of grain size appears to be almost linear in the domain of grain sizes between 0.1 and 1 µm. This clearly shows that for ultra-fine ferrite grains the classical Hall-Petch relation between the stress and the inverse square root of the mean grain size does not hold. Therefore, for ultra-fine grains it seems more appropriate to take the grain size influence on the stress in the form 1/D.

Figure 27 - Nominal stress-nominal strain curves of the ferrite-cementite steels obtained by tensile tests with strain rates of 3.3×10⁻⁴ s⁻¹ (a), 100 s⁻¹ (b), and 10³ s⁻¹ (c) at 296 K [TSU'08].

Up to now, only the behavior of ultra-fine ferrite with polygonal morphology has been discussed. Concerning lath-like ultra fine ferrite structures, the literature is rather poor. Naturally, a lath structure of ferrite can be found in tempered martensite and in pearlite.
In the first case (tempered martensite), the resulting ferrite, depending on the tempering temperature, contains a certain density of dislocations and, at the same time, some carbides are generally present. Thus, the understanding of the mechanical behavior of such a mixture is rather complex. In addition, data on the stress-strain behavior of tempered martensite are really scarce, especially for high tempering temperatures; in most works, only the hardness and/or the 0.2% offset yield stress are considered. In the case of pearlite, the stress-strain data are more abundant and the ferrite contains a much lower number of defects. However, the problem is how to dissociate the separate contributions of the ultra-fine lath ferrite and of the lamellar cementite. It is commonly admitted that the mechanical properties of pearlitic steels strongly depend on the interlamellar spacing. Based on experimental results, Bouaziz and Buessler [BOU'02] proposed a semi-empirical behavior law for pearlite. It was assumed that only the ferrite in between the cementite lamellae deforms plastically. Thus, the yield stress was considered to depend on solid solution hardening and on the interlamellar spacing, with a form inspired from Orowan's theory [ORO'54]. Finally, the plastic strain hardening was supposed to be isotropic and to follow a Voce-type law. Consequently, the general form of the pearlite law was as follows:

\sigma = \sigma_0 + \frac{M \mu b}{\lambda} + K\left[1 - \exp\left(-\frac{g\,\varepsilon_p}{2}\right)\right] \qquad (10)

where λ is the interlamellar spacing, and K and g are fitting parameters adjusted on the experimental tensile curves of fully pearlitic steels with different interlamellar spacings.

Mechanical behavior of fresh martensite

Martensite is one of the hardest phases in steel and also one of the most complex, due to its ultra-fine, acicular and multi-variant structure. Martensite is a metastable phase obtained in steels during sufficiently fast cooling from austenite to room temperature. As already stated, the martensitic transformation is athermal and displacive, which has several implications. First of all, there is no long-range diffusion of atoms during the transformation; as a result, the produced martensite inherits the composition of the prior austenite. Secondly, during the transformation the face centered cubic (FCC) lattice changes to the body centered tetragonal (BCT) one, which involves both a volume change and a large shear. Finally, the transformed fraction of martensite depends only on the temperature below M_s and is independent of time. Such a complicated transformation naturally produces a very complex structure. According to the study of Krauss and Marder [KRA'71], low-M_s twinned martensite is called "plate martensite", whereas high-M_s needle-like martensite is called "lath martensite". In this work only lath martensite is considered. An example of the complicated lath-martensite structure, taken from Morito et al. [MOR'05], is given in Figure 29. It is evident that such a complex nano-scale structure results in a specific mechanical behavior. An example of true stress-true strain curves of martensitic steels with different carbon contents is given in Figure 30. As can be seen, very high strength levels can be obtained, but in counterpart the ductility is rather low. However, ductility can be improved by tempering treatments, which will not be discussed here.
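For reference, the two microstructure-sensitive flow laws introduced above, equations (9) and (10), are evaluated numerically below. The parameter values (σ_0, f, K, g) are placeholders chosen for illustration, not fitted values from [ALL'08] or [BOU'02], and the equation forms are the reconstructions given above.

```python
import math

MU, B, M, ALPHA = 80e3, 2.5e-10, 3.0, 0.4   # shear modulus (MPa), Burgers vector (m),
                                            # Taylor factor, constant of eq. (9)

def ferrite_flow(eps, D, sigma0=150.0, f=4.0):
    """Eq. (9): grain-size-dependent ferrite flow stress; D in m, result in MPa."""
    return sigma0 + ALPHA * M * MU * math.sqrt(
        B * (1.0 - math.exp(-M * f * eps)) / (D * f))

def pearlite_flow(eps_p, lam, sigma0=200.0, K=350.0, g=10.0):
    """Eq. (10): Orowan-type yield term in b/lambda plus Voce-type hardening."""
    return sigma0 + M * MU * B / lam + K * (1.0 - math.exp(-g * eps_p / 2.0))

for eps in (0.02, 0.05, 0.10):
    print(f"eps = {eps:.2f}  ferrite (D = 1 um): {ferrite_flow(eps, 1e-6):5.0f} MPa"
          f"   pearlite (lam = 150 nm): {pearlite_flow(eps, 150e-9):5.0f} MPa")
```

Both laws tie a characteristic microstructural length (grain size D, interlamellar spacing λ) directly to strength, which is why refining either scale is such an effective strengthening route.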
Different sources of strengthening are discussed in the literature: grain size and substructure (packets, laths), solid solution, precipitation, internal stresses and others. But the most generally acknowledged correlation between a mechanical and a metallurgical parameter is the evolution of martensite hardness with its carbon content. This dependence is very well demonstrated in the review of Krauss [KRA'99], where hardness measurements from a number of authors are plotted as a function of the martensite carbon content over a wide range of carbon concentrations (Figure 31). There is also a significant amount of data on the influence of different alloying elements on the martensite hardness [GRA'77], [YUR'87]. This effect of alloying elements is naturally included in the prediction of the yield stress. For example, the effect of Mn on the martensite yield strength was shown in the review of Krauss [KRA'99] by combining the data from the studies of Speich et al. [SPE'68] and Norstrom [NOR'76] (Figure 32). However, very few studies discuss the influence of Mn and other substitutional elements on the strength of martensite.

Figure 32 - Martensite yield strength for Fe-C alloys [SPE'68] and for Fe-C-Mn alloys [NOR'76].

In addition, there is a lack of data on the influence of substitutional elements on the strain hardening of martensitic steels. Only one study concerning the effect of Cr on the work hardening of martensite was found [VAS'74]. Thus, it appears that the Mn effect on the strength and strain hardening of martensitic carbon steels is not clear and needs further study. The number of works in the literature on the strain-hardening mechanisms of martensitic steels is quite low, probably because of their poor uniform elongation (i.e. necking strain). The mechanical behavior of martensite is mostly either described with a phenomenological polynomial law [CHO'09] or reduced to an elastic or elastic-perfectly plastic law [DEL'07]. Recently, however, a Continuum Composite Approach (CCA) was proposed to predict the complete tensile curves of as-quenched martensitic steels [ALL'12]. The general idea of this approach is to consider martensite as a composite of elastic-perfectly plastic phases in interaction. All the phases have the same Young modulus, and the probability density f(σ) of finding a phase with a given yield strength defines the so-called "stress spectrum". Consequently, the behavior of any martensitic steel can be described using its distinctive stress spectrum. An example of such a spectrum f(σ) and of its associated cumulative function F(σ) is given in Figure 33. The principal equation of the model relates the macroscopic stress Σ at a given macroscopic strain E to the spectrum in the following way:

\Sigma = \int_{\sigma_{min}}^{\sigma_L} \sigma f(\sigma)\,d\sigma + \sigma_L \int_{\sigma_L}^{+\infty} f(\sigma)\,d\sigma \qquad (11)

where σ_min is the threshold stress at which the softest phases of the composite plasticize and σ_L(E) is the highest yield stress among the plasticized phases at a given macroscopic strain E. The first integral is the contribution of the already plasticized zones of the composite and the second one is the contribution of the zones that remain elastic under the σ_L loading. In the majority of cases, these integrals cannot be solved explicitly. However, it is possible to calculate the derivative with respect to the macroscopic strain (i.e.
the macroscopic strain-hardening rate):

\frac{d\Sigma}{dE} = \frac{Y\left[1 - F(\sigma_L)\right]}{1 + \dfrac{Y}{\beta} F(\sigma_L)} \qquad (12)

where Y is the Young modulus and β is a constant parameter describing the interactions between the zones of the composite after any plasticization:

\sigma = \Sigma - \beta\,(\varepsilon - E) \qquad (13)

where σ and ε are respectively the local stress and the local strain in each element of the composite. This model was fitted on a mixed database: the mechanical behavior of certain martensitic steels was taken from the literature, while that of others was determined experimentally. The adjustment of the stress spectra was done using a KJMA-type law with three fitting parameters, σ_min, σ_0 and n; the last two control the shape and the width of the stress spectrum. A comparison between the model results and the experimental data is shown in Figure 34. As can be seen, not only was very good agreement obtained between the modeled and experimental true stress-true strain curves, but the strain hardening was also predicted very well (Figure 34(b)). Likewise, the simulation of the elastic-plastic transition was very accurate. Finally, such a sophisticated model permits describing the tensile behavior of martensite in a univocal way and explains the low microplasticity yield stress, the very high strain hardening rate and the large Bauschinger effect. However, as in the previous studies, only the effect of the C content on the strength and strain hardening was taken into account (Figure 34(c)). The present description of martensite structure and mechanical behavior is clearly not exhaustive; only a very rapid overview of the main results was given here. More information about martensite is available in the books of Kurdjumov [KUR'60], [KUR'77] and Olson and Owen [OLS'92], but also in the review of Krauss [KRA'99] and in the more recent PhD work of Badinier [BAD'13].

Mechanical behavior of retained austenite

Austenite present at room temperature, also called retained austenite (RA), is a metastable phase. Usually, RA is retained at room temperature thanks to chemical enrichment with C and Mn and, in some cases, thanks to the ultra fine size of the austenite. The stacking fault energy (SFE) of C-Mn steels is low. Consequently, multiple simultaneous and/or sequential deformation mechanisms are possible:
 dislocation slip;
 mechanical twinning;
 transformation to α'- or ε-martensite.
Once again, the chemical composition (mostly C and Mn), the deformation temperature and, to a lesser extent, the austenite grain size have a major influence on the deformation mechanisms that govern the austenite behavior. According to Schumann's phase stability diagram (left part of Figure 35) [SCH'72], only transformation to α'-martensite occurs below 10 wt.% Mn. At the same time, the right part of Figure 35, showing the modified diagram with the experimentally determined domain of mechanical twinning (shaded area), indicates that no twinning occurs for compositions with Mn contents lower than 10 wt.% and C lower than 1.2 wt.%. As this work mostly concerns steels in this composition range, only dislocation slip and transformation to α'-martensite will be considered further as possible deformation mechanisms. It is very difficult, or even impossible, to measure experimentally the mechanical behavior (with the dislocation slip mechanism) of austenite with low C and Mn levels: firstly because it is complicated to produce a fully austenitic steel with low C-Mn levels at room temperature, and secondly because of the occurrence of the induced martensite transformation.
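To close the discussion of martensite behavior: since equation (12) is an ordinary differential equation in E once the spectrum F(σ) is fixed, the CCA scheme can be integrated numerically in a few lines. The sketch below assumes a KJMA-type cumulative spectrum and the forms of (12)-(13) as reconstructed above; all parameter values are illustrative, not the fitted spectra of [ALL'12].

```python
import math

# KJMA-type cumulative stress spectrum F(sigma) (illustrative parameters).
S_MIN, S0, N = 400.0, 1200.0, 2.0      # MPa, MPa, exponent

def F(sigma):
    if sigma <= S_MIN:
        return 0.0
    return 1.0 - math.exp(-((sigma - S_MIN) / S0) ** N)

Y, BETA = 210e3, 30e3                  # Young modulus and interaction parameter, MPa

# Explicit integration of eq. (12); the elastic-zone stress follows from
# eq. (13) with eps = sigma/Y: sigma_L = (Sigma + BETA*E) / (1 + BETA/Y).
dE, E, Sigma = 1e-4, 0.0, 0.0
for i in range(1, 401):                # integrate up to E = 4%
    sL = (Sigma + BETA * E) / (1.0 + BETA / Y)
    Sigma += dE * Y * (1.0 - F(sL)) / (1.0 + (Y / BETA) * F(sL))
    E += dE
    if i % 100 == 0:
        print(f"E = {E:.2%}  Sigma = {Sigma:7.1f} MPa")
```

With a broad spectrum, the softest zones yield almost immediately while the hardest ones stay elastic to large stresses, which reproduces the smooth elastic-plastic transition and high hardening rate typical of as-quenched martensite.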
However, inspired by the works concerning ferrite and highly alloyed austenitic steels, at least two behavior laws for austenite have been proposed. Hereafter, these two models are briefly described and compared. The first one, proposed by Perlade et al. [PER'03], assumes that austenite follows the classical relation for polycrystals between the flow stress σ_A and the total dislocation density ρ:

\sigma_A = \sigma_{0A} + \alpha M \mu b \sqrt{\rho} \qquad (14)

where α = 0.4 is a constant, M = 3 is the average Taylor factor, μ = 72 GPa is the shear modulus, b = 2.5×10⁻¹⁰ m is the Burgers vector and σ_0A is the friction stress of austenite. According to [PIC'78], this friction stress can be expressed as:

\sigma_{0A} = 68 + 354\,w_C + 20\,w_{Si} + 3.7\,w_{Cr} \qquad (15)

where w_C, w_Si and w_Cr are taken in weight percent. In this approach it was also considered that the martensite induced during deformation, which is much harder, results in a strong hardening of the retained austenite islands. In fact, the induced martensite laths reduce the mean free path of the dislocations, thus increasing the dislocation density and the strengthening of the austenite. Finally, taking into account the work of Mecking and Kocks [MEC'81] on dislocation accumulation and annihilation due to dynamic recovery, the evolution of the dislocation density with strain for dislocation glide was written as:

\frac{d\rho}{d\varepsilon} = M\left(\frac{1}{b\,\Lambda} - f\rho\right) \qquad (16)

where Λ is the dislocation mean free path and f is a constant. In this equation, the dislocation storage rate is described by the first term (1/bΛ) and the dynamic recovery by the second one (−fρ).

Another behavior law was proposed by Bouaziz et al. [BOU'11]. The modeling approach was quite similar, but it was extended to TWIP steels, thus permitting an experimental validation. A semi-phenomenological model was proposed to predict the stress-strain behavior of C-Mn TWIP steels as a function of Mn and C content. The flow stress was expressed as:

\sigma_A = \sigma_{0A} + \sigma_{1A} + \sigma_{2A} \qquad (17)

where σ_0A is the yield stress of austenite, which increases with C content and decreases with Mn content in the following manner:

\sigma_{0A} = 228 + 187\,w_C - 2\,w_{Mn} \qquad (18)

with w_C and w_Mn in weight percent. The flow stress of austenite without any TWIP effect, σ_1A, was taken in the form of the well-known Voce law:

\sigma_{1A} = \frac{K}{f}\left[1 - \exp(-f\,\varepsilon_A)\right] \qquad (19)

where K and f are material-related constants and ε_A is the strain of the austenite. Finally, σ_2A, which represents the contribution of the dynamic composite effect related to the development of backstresses (i.e. the TWIP effect), was expressed as:

\sigma_{2A} = m\,\varepsilon_A^{\,p} \qquad (20)

where m and p are material-related parameters. Using this modeling approach, Bouaziz et al. obtained rather accurate predictions of the true stress-true strain curves of different C-Mn TWIP steels. A comparison between the modeled and experimental true stress-true strain curves is presented in Figure 36. The negative effect of Mn on the yield stress is obvious from these data: for example, Figure 36(b) presents the true stress-true strain curves of two steels with almost the same C level but different Mn contents, and the steel with the higher Mn content has the lower yield strength. The comparison between the models proposed by Perlade and Bouaziz leads to the following remarks. The Bouaziz approach appears to be better for the prediction of the yield stress of austenite in medium and high-Mn steels.
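The two yield-stress expressions are easy to compare numerically. The sketch below evaluates (15) and (18) for the 1.2C-12Mn composition discussed next; note that (15) has no Mn term and (18) no Si or Cr term, so the comparison is only indicative.

```python
def sigma0_perlade(wC, wSi=0.0, wCr=0.0):
    """Eq. (15): friction stress of austenite after [PIC'78] (MPa, wt.%)."""
    return 68.0 + 354.0 * wC + 20.0 * wSi + 3.7 * wCr

def sigma0_bouaziz(wC, wMn):
    """Eq. (18): austenite yield stress after [BOU'11] (MPa, wt.%)."""
    return 228.0 + 187.0 * wC - 2.0 * wMn

wC, wMn = 1.2, 12.0
print(f"Perlade, eq. (15): {sigma0_perlade(wC):5.0f} MPa")      # ~493 MPa
print(f"Bouaziz, eq. (18): {sigma0_bouaziz(wC, wMn):5.0f} MPa") # ~428 MPa
```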
For example, for the 1.2C-12Mn steel the Bouaziz model gives YS = 428 MPa, which is close to the experimental value (~430 MPa). Perlade's model is not far off (YS = 493 MPa), but for higher Mn contents this difference increases significantly. Moreover, the Bouaziz model combines the contributions of dislocation slip and twinning, which is a clear advantage. However, for our work this is of minor interest because, as stated previously, the studied compositions lie outside the twinning domain. Finally, the evident advantage of Perlade's model is that it takes into account the strengthening of austenite due to the induced martensite transformation.

Figure 36 - Comparison between experimental and model-predicted true stress-true strain curves [BOU'11].

Mechanical stability of retained austenite and induced transformation

As said beforehand, retained austenite is a metastable phase. Consequently, during mechanical loading it can transform to α'- or ε-martensite. Such a transformation is very beneficial for the global mechanical behavior of the steel: it considerably increases the strain hardening rate and delays necking. This phenomenon is called the TRansformation Induced Plasticity (TRIP) effect. A lot of work has been done on the characterization, understanding and modeling of austenite mechanical stability and of the TRIP effect in low Mn steels and alloys [ZAC'67], [GER'70], [BHA'72], [ZAC'74], [OLS'75], [OLS'78], [TAM'82], [RAG'92], [SUG'92], [JAC'01], [JAC'07]. Nowadays, using new facilities for fine in-situ characterization, such as neutron diffraction or high energy synchrotron X-ray diffraction, the studies of the TRIP effect continue [TOM'04], [MUR'08], [JUN'10], [BLO'14]. At the same time, investigations of the huge TRIP effect in MMS, its understanding and its modeling are also multiplying [SHI'10], [CAO'11-1], [WAN'13], [CAI'13], [SUH'13], [GIB'11], [COO'13], [RYU'13]. Of course this topic is of prime interest for us, as it has a direct link with our work. Hereafter, a short summary of the observations found in the literature is given, divided in two parts: characterization of austenite stability and of the TRIP effect in MMS, and modeling of the induced austenite transformation.

Austenite stability and TRIP effect in medium Mn steels

Among the first experimental results concerning the evolution of retained austenite under mechanical loading of MMS were those published by Shi et al. [SHI'10]. The evolution of microstructure and mechanical properties as a function of holding time at the intercritical temperature was studied for hot forged steels with 0.2 wt.% C and 4.7 wt.% Mn. Figure 37 shows the evolution of the retained austenite fraction as a function of strain for 4 different holding times at 650°C, and as a function of deformation temperature for a 6h holding time. An optimum holding time (in this case 6h) was found, for which the best TRIP effect, i.e. the optimum stability of retained austenite, is achieved. As expected, the austenite transformation was accelerated at lower deformation temperatures. Microstructure observations of specimens annealed for 6h and then deformed were published later by Wang et al. [WAN'13]. It was concluded that the retained austenite transformed in the majority of cases to α'-martensite through strain induced transformation, due to the high density of microtwins and stacking faults in the austenite grains. The presence of ε-martensite was also detected, but its fraction was so low that it was considered negligible. Suh et al.
[SUH'13] studied steels with different C and Mn levels and 2 wt.% Al. These steels were cold rolled and annealed at different intercritical temperatures, and the evolution of the retained austenite fraction during tensile testing was measured. The evolution of the normalized fraction versus engineering strain is presented in Figure 38. It can be seen that the rate of strain-induced transformation increases with increasing annealing temperature. In this work it was also found that there is an optimum annealing temperature in the intercritical domain that provides the optimum stability of retained austenite and consequently the best TRIP effect. In particular, for the 0.11C-4.5Mn-0.45Si-2.2Al steel this temperature was 720°C, while it was 700°C for the 0.055C-5.6Mn-0.49Si-2.1Al steel.

Gibbs et al. [GIB'11], [COO'13] performed a less statistical but more rigorous study of the deformation behavior of retained austenite during tensile loading. A cold rolled 0.1C-7.1Mn-0.13Si (wt.%) steel was intercritically annealed at 575, 600, 625 and 650°C for 168h. Then, in-situ neutron diffraction experiments were carried out during tensile tests. This permitted plotting the evolution of retained austenite as a function of strain (Figure 39). The observed tendency is the same as in the case of Suh et al. [SUH'13]: the induced transformation rate increases with increasing holding temperature. However, further analysis of the samples annealed at 600°C and 650°C [COO'13] showed two things:
• in the sample with the high austenite stability (600°C), the austenite deformed by the glide of partial dislocations trailing stacking faults; strain induced transformation of the austenite took place only after the yield point elongation;
• on the other hand, the austenite islands of the sample annealed at 650°C contained some stacking faults and thin ε-martensite laths, which promoted the stress induced transformation of austenite at lower stress levels.

Models for austenite induced transformation

Generally, a modified Kolmogorov-Johnson-Mehl-Avrami approach [KOL'37], [JOH'39], [AVR'39] is used for the modeling of the induced austenite transformation. One of the most commonly applied formulations is the equation proposed by Olson and Cohen [OLS'75]:

F_M^{ind} = 1 - \exp\left\{-\beta\left[1 - \exp(-\alpha\,\varepsilon)\right]^n\right\} \qquad (21)

where n is a constant parameter, α depends on the stacking fault energy and defines the rate of shear band formation, and β is related to the probability that a shear band intersection forms a martensite nucleus. This probability depends on the temperature through its link with the chemical driving force. Another equation, proposed even earlier by Guimaraes [GUI'72] for a 0.06C-31Ni (wt.%) steel, was closer to the standard Kolmogorov equation:

F_M^{ind} = 1 - \exp\left(-k\,\varepsilon_p^{\,z}\right) \qquad (22)

where k and z are constant fitting parameters and ε_p is the true plastic strain. Most induced transformation models use one of these two equations with adapted fitting parameters. However, some works [ANG'54], [LUD'69], [GER'70] rather proposed a direct power law of the following type:

F_M^{ind} = A\,\varepsilon^{B}\,F_{RA} \qquad (23)

where A and B are fitting parameters and F_RA is the volume fraction of retained austenite. Finally, Perlade et al. [PER'03] proposed a model for the induced martensite transformation based on Raghavan's physical approach for the isothermal martensite transformation in Fe-Ni alloys [RAG'92]. A brief description of Raghavan's approach is as follows.
Usually, the rate of a first-order transformation at a given temperature depends on the nucleation and growth rates of the new phase. In the case of the martensite transformation, the growth of plates or laths is stopped by the prior austenite grain boundaries and/or by the neighboring units. Hence, in comparison with classical nucleation-and-growth processes (mutual impingement), there are additional obstacles to growth. The global martensite transformation rate dF_M/dt is then controlled by the nucleation rate per unit volume of parent phase at any instant, dN_V/dt, and by the transformed volume fraction per nucleation event, dF_M/dN_V:

\frac{dF_M}{dt} = \frac{dF_M}{dN_V}\cdot\frac{dN_V}{dt} \qquad (24)

Raghavan also observed that nucleation accelerates very rapidly even at the earliest stages, indicating a strong autocatalytic effect. Therefore, he suggested that the number of new nucleation sites generated by autocatalysis is proportional to the volume fraction of formed martensite:

n_f = n_i + p\,F_M - N_V \qquad (25)

where n_f is the number of sites at martensite fraction F_M, p is the so-called "autocatalytic factor" and N_V is the number of martensite plates formed up to F_M. Based on the work of Pati and Cohen [PAT'69], Raghavan used the following equation for the nucleation rate:

\frac{dN_V}{dt} = \left[n_i + \left(p - \frac{1}{\bar V}\right)F_M\right]\nu\,\exp\!\left(-\frac{\Delta W_a}{kT}\right) \qquad (26)

where \bar V = F_M/N_V is the mean plate volume at martensite fraction F_M, ν is a vibration frequency term and ΔW_a is the activation energy for nucleation. In the model of Perlade et al. [PER'03], it was considered that at temperatures above M_s the austenite to martensite transformation can take place when the applied stress level is high enough - "stress assisted transformation" (Figure 40(a)). Such a transformation can be modeled by incorporating the thermodynamic effect of the applied stress into the theory developed for the transformation upon cooling (Figure 40(b)). When the stress exceeds the yield stress of the austenite, martensite nucleation becomes strain-induced, on potent sites created by the plastic strain. This domain is bounded by the M_s^σ temperature. The effect of plastic strain on the nucleation rate was then introduced into the activation energy ΔW_a through the driving force ΔG in the following manner:

\Delta W_a = A + B\,\Delta G \qquad (27)

where A and B are two positive constants and ΔG is taken as the sum of chemical and mechanical contributions:

\Delta G = \Delta G_{\sigma=0} + \frac{\partial \Delta G}{\partial \sigma}\,\sigma \qquad (28)

Using such a physically based approach is advantageous, as it directly takes into account the effects of the chemical composition, austenite size and plastic strain on the induced martensite transformation. As can be seen in Figure 41, good predictions were obtained in comparison with the experimental volume fractions of induced martensite. However, in the work of Moulin et al. [MOU'02] it was shown that the modeled induced martensite transformation is very sensitive to the grain size of the retained austenite (Figure 42). This means that in the majority of cases the retained austenite size has to be fitted in order to obtain a good description of the experimental data.

Modeling of the multiphase steel mechanical behavior

For multiphase structures, the stress and strain levels of the global material depend on the stress and strain of each phase. The behavior laws that combine the behaviors of the constituents are the so-called "mixture rules".
One of the first works that proposed the additivity of the stress and strain tensors for a multicomponent system was the article of Hill [HIL'63]:

\sigma = f_1\,\sigma_1 + f_2\,\sigma_2 \qquad (29)

\varepsilon = f_1\,\varepsilon_1 + f_2\,\varepsilon_2 \qquad (30)

where f_1 and f_2 are the phase volume fractions (f_1 + f_2 = 1) and σ_i and ε_i are the stress and strain of each constituent. This approach implies stress and strain partitioning between the constituents of the system. These two equations were very frequently used separately (with the iso-strain or iso-stress hypothesis) or together [FIS'77], [KAR'75], [TAM'73]. Using both equations is more appropriate, as it results in less pronounced stress partitioning than the iso-strain model. Another way to improve the predictions of the iso-strain model was proposed by Gladman et al. [GLA'72], who suggested using a power law of the volume fraction of the constituents in equation (29):

\sigma = f_1^{\,n}\,\sigma_1 + \left(1 - f_1^{\,n}\right)\sigma_2 \qquad (31)

Nevertheless, this approach is still less precise than the general modeling with both equations (29) and (30), especially in the case of a multicomponent system with more than 2 constituents. Even though the combined mixture law for stress and strain gives rather good results, its disadvantage is the need for a fitting parameter in order to solve the system of two equations. Actually, equations (29) and (30) do not describe the transfer of stress and strain between the phases. Commonly, to take this stress and strain transfer into account, a parameter β describing the slope of the line AB in Figure 43 is introduced in the following way [FIS'77], [GOE'85]:

\beta = \frac{\sigma_2 - \sigma_1}{\varepsilon_1 - \varepsilon_2} \qquad (32)

where β is a fitting parameter fixed in the range between 0 and ∞, depending on the studied case.

Figure 43 - Schematic representation of the three different conditions of the mixture law (iso-stress, iso-strain, and intermediate one) with the lines EF, GH, and AB, respectively, and the true stress-true strain curves of a soft phase matrix (m), hard phase α'-martensite, and the composite [GOE'85].

In order to avoid this arbitrary fitting parameter, Bouaziz and Buessler [BOU'02] proposed another approach. For a disordered microstructure, in whatever material state, the mechanical work increment was suggested to be equal in each constituent, while the global strain increment was considered to be the sum of the strain increments in each constituent weighted by their volume fractions. In terms of expressions, this means a system of two equations:

\sigma_1\,d\varepsilon_1 = \sigma_2\,d\varepsilon_2 \qquad (33)

d\varepsilon = f_1\,d\varepsilon_1 + f_2\,d\varepsilon_2 \qquad (34)

where σ_i and ε_i are the stress and strain of each constituent and f_i their respective volume fractions. This approach, called Iso-W, was used further for the modeling of different multiphase materials. It was successfully applied to ferrite-pearlite [BOU'02] and low-Mn (standard) TRIP [PER'03] steels. Furthermore, it was recently utilized for the modeling of the mechanical behavior of a medium Mn steel containing 0.08 wt.% C, 6.15 wt.% Mn, 1.5 wt.% Si, 2.0 wt.% Al and 0.08 wt.% V, with a bimodal grain size distribution.

Yield Point Elongation

In addition, one particularity of the mechanical behavior of MMS should be stated: there are many reported cases where a yield point elongation (YPE) is observed. The yield point elongation phenomenon is a localized, heterogeneous transition from elastic to plastic deformation. An example of a stress-strain curve presenting YPE is shown in Figure 44; a numerical illustration of the Iso-W scheme (33)-(34) is given below.
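The Iso-W system (33)-(34) can be solved incrementally: at each global strain step, the strain increments of the two phases are chosen so that their work increments are equal. A minimal sketch for two Voce-type phases follows; the phase laws and parameter values are illustrative, not those of the cited studies.

```python
import math

def voce(eps, s0, K, c):
    """Illustrative Voce-type flow law for one constituent (MPa)."""
    return s0 + K * (1.0 - math.exp(-c * eps))

def iso_w_curve(f1, law1, law2, d_eps=1e-4, eps_max=0.10):
    """Incremental Iso-W mixture: sigma1*d_eps1 = sigma2*d_eps2 and
    d_eps = f1*d_eps1 + f2*d_eps2 (eqs. 33-34)."""
    e1 = e2 = e = 0.0
    curve = []
    while e < eps_max:
        s1, s2 = law1(e1), law2(e2)
        de1 = d_eps / (f1 + (1.0 - f1) * s1 / s2)  # split so that s1*de1 = s2*de2
        de2 = de1 * s1 / s2
        e1, e2 = e1 + de1, e2 + de2
        e += d_eps
        curve.append((e, f1 * s1 + (1.0 - f1) * s2))  # eq. (29) for the stress
    return curve

soft = lambda e: voce(e, 300.0, 250.0, 20.0)    # ferrite-like phase (illustrative)
hard = lambda e: voce(e, 1200.0, 400.0, 30.0)   # martensite-like phase (illustrative)
for e, s in iso_w_curve(0.7, soft, hard)[::250]:
    print(f"eps = {e:.3f}  sigma = {s:6.0f} MPa")
```

Because the soft phase carries a lower stress, it automatically accommodates a larger share of each strain increment, without any adjustable interaction parameter such as β in equation (32).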
Generally, YPE occurs in low carbon steels due to the pinning of dislocations by interstitial atoms (typically carbon and nitrogen). In order to liberate the dislocations and make them available for further motion, additional energy (stress) is required. This stress corresponds to the Upper Yield Stress (abbreviated here as YS_H). Once YS_H is reached, the dislocations are free and the stress needed for their motion becomes abruptly lower, which leads to a lower macroscopic stress on the specimen: the Lower Yield Stress (abbreviated here as YS_L). The plastic deformation of the material is then localized and heterogeneous. At this moment, a Lüders band between plastically deformed and undeformed material appears and moves at a constant cross-head velocity. During the propagation of the band from one end of the specimen to the other, the nominal stress-strain curve is flat or fluctuates around a constant stress value, due to the interaction of the moving dislocations with the interstitial atoms. Once the band has travelled through the entire specimen, the deformation becomes homogeneous and a positive strain hardening is observed. There are two major factors affecting Lüders band formation: the microstructure (grain size, ageing state and crystallographic structure) and the macroscopic geometry of the sample.

Figure 44 - Stress-strain diagram showing Yield Point Elongation (YPE) and Upper (UYS) and Lower (LYS) Yield Strengths [AST'09].

This stress localization and Lüders band propagation in MMS were studied in detail by De Cooman et al. [COO'13] and Gibbs et al. [GIB'14]. The outputs of both works were similar: the yielding behavior of a 7 wt.% Mn steel was found to be controlled by the intercritical annealing temperature, which in turn has a great influence on the final microstructure and on the stability of the retained austenite. Two possible scenarios were detected and described.
1) Yielding of the duplex ferrite-austenite microstructure obtained after low temperature intercritical annealing proceeded through a localized plastic deformation of the ferrite (Lüders band nucleation and propagation at constant stress). In the range of the yield point elongation, the retained austenite deformed by the glide of partial dislocations trailing stacking faults, and only about 6% of the whole austenite fraction transformed into martensite. The major part of the strain-induced retained austenite transformation took place after the yield point elongation.
2) On the other hand, the complex microstructure consisting of ferrite, α'-martensite, ε-martensite and a low stability retained austenite, obtained after high temperature intercritical annealing, yielded in a different manner. In fact, the stress-induced retained austenite transformation was quite rapid, thus providing a high work hardening rate and avoiding localized deformation.
These results once more underline the complexity of the mechanical behavior of MMS and the importance of controlling their microstructure, in particular the stability of the retained austenite.

In the first part of this chapter, a brief description of the machines, techniques and methods used in this work will be given. The second part will present the results of a preliminary study aimed at selecting the steel composition and an adapted heat treatment; this part can be called "alloy design".

Machines, techniques and methods

In this section, the tools and methods used for the elaboration of the samples and their characterization are presented.
In the majority of cases, the following strategy was adopted for the experiments and analysis:
1. heat treatments;
2. tensile tests;
3. microstructure observation and quantification;
4. fine characterization;
5. model simulations.
The presentation of the different tools in this section therefore follows this plan.

Heat treatments

Different heat treatments, aimed at producing samples for different analyses, were performed with one dilatometer and two furnaces:
1) a Bähr DIL 805 dilatometer, used for the study of cementite precipitation during heating and for the characterization of phase transformations;
2) an AET Gradient Batch Annealing furnace, which produces a temperature gradient along one sheet sample and was used for the rapid evaluation of mechanical properties as a function of holding temperature;
3) a NABERTHERM furnace, which allows simple homogeneous heat treatments and was used as the main tool for the elaboration of large samples (tensile tests and microstructure analysis).
The main characteristics of these three tools are presented hereafter.

Dilatometer Bähr DIL 805

Dilatometry is most often used to characterize the phase transformations (transition points and kinetics) in steels. In this work, a Bähr DIL 805 dilatometer was used to follow the cementite precipitation during heating and to study the phase transformations (austenite, ferrite and martensite) during annealing. A picture of the Bähr DIL 805 dilatometer and a schematic representation of the experimental cell are presented in Figure 45. The dilatometer follows the length variations of the sample occurring during the imposed heat treatment. The sample is heated and maintained at temperature by a high-frequency induction coil. Temperature control is done by one or several thermocouples; usually, Pt-Pt/Rh 10% (type S) thermocouples are used. The sample is held between quartz rods, one of which is mobile. Hence, when length variations occur, this rod moves and the linear displacement is captured by an LVDT (Linear Variable Differential Transformer) sensor.

Figure 45 - Global view of the Bähr DIL 805 dilatometer (left) and schematic representation of the experimental cell (right).

To avoid oxidation during the treatment, the experimental chamber is evacuated and a small amount of helium (He) is then injected. The cooling rate can be controlled, and high cooling rates can be obtained using He gas injection. Three types of samples were used:
 Ø4 mm × 10 mm cylindrical rods - hot rolled steel;
 4 mm × 4 mm × 10 mm parallelepipeds - hot rolled steel;
 1.2 mm × 4 mm × 10 mm parallelepipeds - cold rolled steel.

AET Gradient Batch Annealing furnace

The AET batch annealing (BA) furnace is presented in Figure 46. It consists of 4 zones that can be controlled independently in terms of heating and holding, which allows producing a temperature gradient along one sheet sample; this furnace is therefore named the Gradient Batch Annealing (GBA) furnace. The precise control of the temperatures in the different zones is ensured by 12 thermocouples located at different axial and transverse positions. A linear temperature gradient between 400°C and 800°C can be obtained on a sample with a length of 700 mm. It can be assumed that each 15 mm segment of the annealed sheet has a constant global mean temperature. Thus, for each temperature segment it is possible to prepare two so-called mini tensile samples, which will be described later in the text.
Usually, the heating rate is quite low and it takes hours (between 10 and 40h) to reach the target temperature, which is comparable with the industrial batch annealing process. Fast cooling is not possible; only natural cooling or controlled slow cooling can be produced. Annealing is performed under vacuum in the entire furnace chamber in order to avoid oxidation and decarburization during the treatment.

NABERTHERM furnace

The NABERTHERM furnace is shown in Figure 47. It is a 700 × 500 × 250 mm insulated chamber whose temperature is regulated by resistance heating. The furnace is first heated and stabilized at the target temperature; the maximum holding temperature is 1280°C. Then, argon or nitrogen gas is introduced into the chamber in order to protect the sample from possible oxidation and decarburization. Next, the sample is placed in the deep part of the chamber, which is characterized by a more homogeneous temperature, and held for the desired time. The heating rate depends on the sample thickness and geometry and cannot be controlled or varied in-situ; however, for the same sample thickness and geometry, the heating rate is the same or quite similar. Finally, after holding for a certain time, the sample is cooled down in one of three possible ways: water quench, oil quench or air cooling. As said previously, in the deep part of the chamber the sample has a completely homogeneous temperature (less than 5°C difference between different points of the sample).

Figure 47 - NABERTHERM furnace.

Tensile tests

After the thermal treatments in the gradient batch annealing furnace, small tensile specimens were prepared with a gauge length of 20 mm and a section of 5 × 1 mm² (Figure 48). These specimens were cut along the transverse direction of the steel sheet. For each temperature, two tensile tests were performed at room temperature with a constant strain rate of 0.13 s⁻¹. In the case of the heat treatments in the Nabertherm furnace, two specimens with a gauge length of 50 mm and a section of 12.5 × 1 mm² (ASTM E8 geometry, Figure 49) were machined, and tensile tests were performed at room temperature with a constant strain rate of 0.008 s⁻¹. These specimens were cut along the longitudinal direction of the steel sheet. The tensile tests were carried out on a Zwick 1474 machine with a macroXtens SE50 extensometer (Figure 50). This machine has a capacity of 50 kN. The crosshead rate can be varied between 0.0005 and 600 mm/min with a precision of 0.002% of the used value. The acquisition frequency of the system is 500 Hz. The electronic device for the force measurement corresponds to type I according to the ISO 7500/1 standard:
• class 0 in the range from 200 N to 50000 N;
• class 1 in the range below 200 N.
The displacement was measured using the macroXtens SE50, a high resolution extensometer (Figure 51). According to the EN ISO 9513 standard, this extensometer has a precision class of 0.5. The maximum error in the displacement measurement between two points in the range between 20 and 200 µm is ±1 µm.

Quantification of retained austenite

Two methods were used to quantify the volume fraction of retained austenite: X-Ray Diffraction (XRD) and saturation magnetization measurements (sigmametry). The results from both techniques were compared and discussed, and the most pertinent ones were retained for the global microstructure analysis in this work. Hereafter, both techniques are briefly presented and the comparison of the results is discussed.
X-Ray Diffraction (XRD)

X-ray diffraction is a powerful tool for the analysis of crystalline phase structures. This technique is widely used for the characterization of the different phases in steels; in particular, the austenite fraction and its carbon content can be evaluated from X-ray diffraction patterns. The diffraction phenomenon is effective in crystals because the wavelength λ of X-rays is typically of the same order of magnitude (1-100 angstroms) as the spacing between the crystal planes. In this work, steel samples of 15 × 15 mm² were mechanically ground to quarter thickness and polished down to 1 µm to obtain a mirror surface. The X-ray diffraction measurements were then done using a Siemens D5000 diffractometer with a cobalt tube, under 30 kV and 30 mA (Co Kα radiation with λ = 1.8 Å). The scans were done in the θ-2θ configuration. In order to avoid texture effects, the angle variations were the following: 2θ from 55° to 129° with a 0.026° step, ψ from 0° to 60° with a step of 5° and φ from 0° to 360° in continuous rotation. The Siemens D5000 diffractometer and the scheme of the goniometric configuration are shown in Figure 52. The Averbach-Cohen method [AVE'48] was used to calculate the volume fraction of retained austenite through the following equations:

\frac{f_\gamma}{f_\alpha} = \frac{I_\gamma^{hkl}/R_\gamma^{hkl}}{I_\alpha^{hkl}/R_\alpha^{hkl}} \qquad (35)

f_\alpha + f_\gamma = 1 \qquad (36)

where f_γ and f_α are the volume fractions of austenite and ferrite, I_α^{hkl} and I_γ^{hkl} are the average integrated intensities of the (220)α, (211)α, (200)α and (200)γ, (220)γ, (311)γ diffraction peaks, and R_α^{hkl} and R_γ^{hkl} are constant parameters related to the α and γ phases and to the studied hkl planes. The R_α^{hkl} and R_γ^{hkl} parameters were determined internally using a reference sample with a known retained austenite volume fraction. The average integrated intensities of the (220)α, (211)α, (200)α and (200)γ, (220)γ, (311)γ diffraction peaks are measured on the obtained XRD spectra. Then, 9 ratios between the average integrated intensities of the 3 austenite peaks and the 3 ferrite peaks are calculated. Next, using the ratio R_γ^{hkl}/R_α^{hkl} obtained on the reference sample, 9 values of the austenite fraction are determined. Finally, the average over these 9 values is taken as the volume fraction of retained austenite in the sample.

Saturation magnetization measurements

There are two major disadvantages of XRD measurements:
 the depth of analysis is only about 10-20 microns, just below the prepared sample surface;
 the sample preparation is time consuming and mechanical grinding can affect the stability of the retained austenite.
Therefore, the amount of retained austenite was also evaluated using magnetic saturation measurements (sigmametry). The advantages of this technique are that the measurement is done over the whole volume of the sample and that the sample preparation is quite simple. A small, carefully cut sample of about 5 mm width and 5.5 to 6 mm length (the length-to-width ratio has to be greater than 1) is used. This sample is placed in a device generating a magnetic field sufficient for complete magnetization, and the saturation magnetization level is measured. Most of the phases in classical steels are ferromagnetic at room temperature, except austenite and epsilon martensite, which are paramagnetic. Consequently, the magnetic saturation method makes it possible to determine the retained austenite fraction.
The saturation magnetization of a multiphase sample depends on the saturation magnetization σ_s^i of each phase:

\sigma_s = \sum_i \frac{m_i}{m}\,\sigma_s^i \qquad (37)

where m is the global mass of the sample and m_i the mass of each phase, with m = \sum_i m_i. Thus, when the sample contains a mass m_a of austenite and m_f of ferrite, the equation becomes:

\sigma_s = \sigma_s^a\,\frac{m_a}{m} + \sigma_s^f\,\frac{m_f}{m} \qquad (38)

As stated previously, austenite is a paramagnetic phase, hence its saturation magnetization can be considered negligible (approximately 0.3 to 0.7 µTm³/kg). Therefore, the saturation magnetization of the sample takes the following form:

\sigma_s = \sigma_s^f\,\frac{m_f}{m} \qquad (39)

Finally, to obtain the fraction of retained austenite it is necessary to perform two measurements:
1) the saturation magnetization σ_s of the studied sample containing retained austenite;
2) the saturation magnetization σ_s^f of a specimen without retained austenite (reference sample).
The fraction of retained austenite is then calculated as:

f_{RA} = \frac{\sigma_s^f - \sigma_s}{\sigma_s^f} \qquad (40)

Usually, the specimen without retained austenite is obtained by applying a heat treatment to the initial sample in order to destabilize the austenite. Standard TRIP steels are annealed at 500°C for 1h to produce such reference samples. In this work, due to the high Mn content (low temperature domain of austenite existence), the destabilization of the austenite was not so trivial. Therefore, three different samples were chosen as references:
 one after austenitization at 750°C for 30 min and quenching - expected to have a fully martensitic structure;
 one after annealing of an initial martensite structure (850°C - 1 min WQ) at 500°C for 1h;
 one after annealing of an initial martensite structure (750°C - 30 min WQ) at 500°C for 30h.
The microstructures after these 3 different treatments were observed using FEG-SEM (Figure 54) and were all considered to be free of retained austenite. For each thermal treatment, at least 2 samples were prepared for the saturation magnetization measurements; only the average values of the saturation magnetization or of the retained austenite fraction will be presented hereafter. Table 1 presents the mean values of σ_s^f obtained for the different reference samples. As can be seen, these values are quite close, which means that any of the reference samples can be used for the evaluation of the retained austenite fraction. However, in order to get an estimate of the possible dispersion of the calculated retained austenite fraction, all possible combinations between σ_s and σ_s^f were used and the average value was taken. The mean standard deviation of the retained austenite fraction estimation was calculated to be 1.4% and the mean confidence interval was about 5%. These rather low values attest to the robustness of the saturation magnetization method.

Comparison between XRD and sigmametry results

The retained austenite fractions measured on different samples using the XRD and sigmametry techniques are plotted in Figure 55 as a function of holding time at 650°C. It can be seen that the shape of the RA fraction evolution with holding time is almost the same for both techniques. However, there is a significant difference between the two curves: the values obtained by XRD are much lower than those from sigmametry, except for the first point at 3 min, which is rather close. For standard TRIP steels, it is known that the mechanical polishing preceding an XRD analysis can affect the results of the measurement.
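Both quantification routes reduce to a few lines of arithmetic. The sketch below implements the 9-ratio XRD averaging of equations (35)-(36) and the sigmametry relation (40); the intensity values, R-ratios and magnetization values are invented placeholders used only to show the bookkeeping, not measurements from this work.

```python
import itertools

def ra_fraction_xrd(I_gamma, I_alpha, R_ratio):
    """Eqs. (35)-(36): average the austenite fraction over the 9 peak-pair
    combinations. I_* are dicts of integrated intensities; R_ratio[(g, a)]
    is R_gamma/R_alpha for that peak pair (calibrated on a reference sample)."""
    fractions = []
    for g, a in itertools.product(I_gamma, I_alpha):
        x = (I_gamma[g] / I_alpha[a]) / R_ratio[(g, a)]  # x = f_gamma / f_alpha
        fractions.append(x / (1.0 + x))                  # with f_alpha + f_gamma = 1
    return sum(fractions) / len(fractions)

def ra_fraction_sigma(sigma_s, sigma_s_ref):
    """Eq. (40): sigmametry, reference sample assumed free of retained austenite."""
    return (sigma_s_ref - sigma_s) / sigma_s_ref

# Placeholder data:
Ig = {"(200)g": 120.0, "(220)g": 95.0, "(311)g": 80.0}
Ia = {"(200)a": 300.0, "(211)a": 520.0, "(220)a": 210.0}
Rr = {pair: 1.35 for pair in itertools.product(Ig, Ia)}  # single calibrated ratio
print(f"XRD:        f_RA = {ra_fraction_xrd(Ig, Ia, Rr):.3f}")
print(f"Sigmametry: f_RA = {ra_fraction_sigma(170.0, 205.0):.3f}")
```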
It was therefore decided, based on the work of Zaefferer et al. [ZAE'08], to perform XRD analyses using different preparation methods on two samples with different retained austenite stabilities (30 min WQ and 30h WQ). For comparison, two samples (TRIP 800 and Q&P steels) from other studies were also included in the test procedure. All the obtained results are presented in Table 2. These results clearly show that the effect of mechanical polishing is very important in the case of the MMS studied in this work, and significantly less pronounced for the TRIP 800 and Q&P steels. It can also be observed that, even though electrochemical polishing suppresses most of the hardened layer produced by mechanical polishing, the values obtained by XRD remain slightly below those from the sigmametry analysis. Such an important effect of the mechanical polishing is probably due to a lower mechanical stability of the RA. Later in this work it will be shown that part of the RA stability is controlled by the size of the austenite, and not only by the carbon content as in the case of TRIP and Q&P steels. Hence, it is supposed that the mechanical stability of RA stabilized by the size effect is lower than that coming from carbon enrichment. Nevertheless, more studies are needed to confirm this hypothesis. It is interesting to highlight that, very recently, Matsuoka et al. [MAT'13] reported that grain refinement of austenite is ineffective for the suppression of the deformation-induced martensite transformation. Indeed, in the case of the deformation-induced martensite transformation, a multivariant transformation is no longer necessary and a single-variant transformation is favored; the mechanical stability of austenite is thus claimed to be independent of the austenite size. This is in good agreement with the results obtained in this work. Finally, based on the obtained results, the sigmametry technique was considered to be more adapted for this study, and almost all the RA fractions presented in this work were obtained by this method.

Sample preparation for observations

Small samples of about 20 mm length and 5 mm width were cut in the longitudinal direction from the heads of the tensile specimens. They were then mounted in a conductive resin under heat and high pressure; the mounting cycle was shown to have no or very limited influence on the microstructure. Next, the samples were mechanically ground and polished to obtain a mirror surface. To reveal the microstructure, different etchants were used: Dino or Marshall etching, for the global microstructure observations in the optical microscope (OM) and/or the scanning electron microscope (SEM); light Metabisulfite + Dino etchings, for the observation of the microstructure in the SEM and the subsequent quantification of the fresh martensite + retained austenite fraction; Picral etching, for the observation of carbides in the SEM. Characterization by optical microscopy (OM) was systematically performed first. This type of observation provides initial information about the microstructure and permits a qualitative analysis of the phase components. It also gives a macroscopic view of the steel structure, which is necessary for the analysis of the global homogeneity of the sample and of the possible decarburization of the edges. The microstructure observations were done using a Zeiss Axiovert 200 MAT microscope (Figure 56).
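The phase-fraction quantification described in the next paragraphs relies on simple grey-level thresholding of SEM images. A minimal sketch of this kind of measurement is given below using numpy; the threshold value and the synthetic image are placeholders, not the internally developed Aphelion routines.

```python
import numpy as np

def hard_phase_fraction(image, threshold):
    """Surface fraction of pixels above a grey-level threshold.
    For BSE images etched to highlight martensite + retained austenite,
    this surface fraction is taken as the phase fraction estimate."""
    mask = image > threshold
    return mask.mean()

# Synthetic "BSE image" standing in for a real micrograph:
rng = np.random.default_rng(0)
img = rng.normal(100.0, 15.0, size=(768, 1024))   # ferrite-like background
img[200:400, 300:600] += 80.0                     # brighter hard-phase island
frac = hard_phase_fraction(img, threshold=150.0)
print(f"Hard constituents (M + RA): {frac:.1%} of the analyzed area")
```

In practice the threshold is set per image series, and averaging over several fields (here, 10 images per sample) keeps the confidence range on the mean fraction acceptable.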
In order to perform a precise quantification of the fresh martensite + retained austenite fraction, the Back Scattered Electrons (BSE) imaging mode was used. BSE images give a higher contrast between the phases while smoothing the variations inside the martensite islands, which makes the image analysis easier (Figure 58).

Quantitative analysis - Aphelion and ImageJ

The microstructural quantifications were performed on the SEM images. As said beforehand, the fresh martensite + retained austenite fractions were estimated using the Aphelion® semi-automatic image analysis software [ADC'15] with internally developed routines. BSE images of the samples etched with light Metabisulfite + Dino etchings were acquired at a magnification of 3000×. The fractions were determined using a simple threshold method (Figure 58). A preliminary comparison with standard point counting showed that this method was more effective given the small number of available images. For each sample, 10 images were analyzed, which represents a surface of about 400 000 µm². The confidence range on the mean fractions was estimated to be about 10% of the resulting value. The austenite size was evaluated using the ImageJ® software [SCH'12], [COL'07], [GIR'04] and a manual selection procedure. In that case, standard Secondary Electron images obtained by FEG-SEM at higher magnifications (×5000 and ×10000) were preferred. As will be explained in Chapter 3, two morphologies of austenite were observed. Thus, to estimate the size of the austenite, two types of measurements were considered:
1) width of the laths: the distance between the two interfaces of a given lath in the normal direction;
2) equivalent diameter of the polygons: the distance between two interfaces of a polygon in a random direction.
An example of the performed measurements is shown in Figure 59. 100 measurements were performed for each type of morphology: laths and polygons.

Fine characterization tools

Transmission Electron Microscope (TEM)

High magnification observations of the microstructure, as well as Mn content measurements in the different phases, were performed using a JEOL 2100F TEM (Figure 60) equipped with a Brüker Energy Dispersive X-ray Spectrometer (EDX). Two kinds of samples were used for the TEM observations: replicas and thin foils. The replicas were prepared using the following procedure:
• standard mechanical polishing down to 1 µm;
• etching first with 2% Nital, then Picral;
• deposition of a cellulose acetate (Biodène) film;
• 20 min drying, then peeling-off (detachment) of the film from the sample;
• carbon vapor deposition (~30-50 nm) on the film;
• cutting of the film into squares of 4 mm² and placing them on copper grids;
• dissolution of the cellulose acetate film in a mixture of solvents.
In order to prepare a thin disk-shaped foil, the steel sheet was ground mechanically to a thickness of about 80 µm, then twin-jet electropolished in a solution of 5% perchloric and 95% acetic acids at about 15°C. Convergent Beam Electron Diffraction (CBED) in STEM (Scanning TEM) mode was used to distinguish between the phases present in the microstructure. The obtained Kikuchi patterns were indexed using the "Euclid's Phantasies V1.1" software developed in the LEM3 laboratory (University of Lorraine) [FUN'03]. This methodology is very similar to the Electron Back Scattered Diffraction (EBSD) mapping used in SEM. However, the accuracy of the orientation determination on patterns generated by TEM can be better than 0.1°, which makes this tool well suited for ultra fine scale studies.
In certain cases, when indexing was not possible or doubtful, standard Selected Area Diffraction (SAD) in TEM mode was used.

Electron probe microanalyzer (EPMA)

The manganese distribution and segregation were characterized using a CAMECA SX100 electron probe microanalyzer (EPMA), shown in Figure 61. The operation of the EPMA can be explained as follows. The generated electron beam strikes the analyzed sample, and the interaction between the primary electrons and the sample atoms provokes the emission of characteristic X-rays. These emitted X-rays are analyzed in a Wavelength-Dispersive Spectrometer (WDS), in which single-crystal monochromators diffract a precise wavelength onto a detector where the photons are counted. Then, using a reference sample, the elemental concentrations can be determined. The qualitative and quantitative distribution (mapping) of the chemical elements can be obtained by scanning a small area of the sample [BEN'87]. Generally, with an electron beam of about 1 µm diameter, the analyzed volume of material at each measuring point is approximately 1 µm³. In this work the following conditions were used for acquiring the manganese maps:
- accelerating voltage: 15 keV;
- current: 2 µA;
- step size: 0.5 µm;
- time step: 0.1 s/pixel.
The quantitative Mn maps were built by combining the intensity of the distribution map with the intensity given by the quantitative analysis.

NanoSIMS analyzer

Recently, the studies of Valle et al. [VAL'06] and Drillet et al. [DRI'12] showed that the NanoSIMS technique (SIMS for Secondary Ion Mass Spectrometry) is a powerful tool for the characterization of the carbon distribution in steels. For this reason, a Cameca NanoSIMS 50 (Figure 62) was used in this work to confirm the C level of retained austenite and martensite (prior austenite) in certain samples. The specimen preparation for NanoSIMS analysis is rather simple. Moreover, the advantages of the Cameca NanoSIMS 50 are its high lateral resolution (about 50 nm) and its high sensitivity (the detection limit for carbon is 0.0063 wt.%). Secondary ion mass spectrometry is based on the analysis, with a mass spectrometer, of the secondary ions induced by an initial ion bombardment. In fact, the surface of a solid sample is sputtered by primary heavy ions of a few keV energy. On contact with the surface of the target sample, this primary beam generates a sequence of atomic collisions, followed by the ejection of atoms and atom clusters. A fraction of the emitted particles is spontaneously ionized - the "secondary ions". This secondary ion emission supplies information about the chemical composition of the emitting area. The ions of interest are isolated using the mass analyzer. Finally, the ion detection system records the magnitude of the secondary ion signal and presents these data in the form of quantitative maps for a chosen element. A schematic representation of the SIMS technique is shown in Figure 63. The major difference between the static SIMS and dynamic SIMS (NanoSIMS) modes is the depth resolution: the former analyses only the surface of the sample, mostly limited to the first monolayer, while the latter has a depth resolution ranging from sub-nm to tens of nm, therefore giving the possibility to investigate the bulk composition and the depth distribution of trace elements. In this work the NanoSIMS 50 machine was used to obtain 12C ion maps of the studied samples in order to estimate the carbon content of the austenite.
The samples were mounted in an aluminum ring using Wood's alloy and placed in a sample holder (Figure 64). Areas of 10 × 10 μm² were scanned using a focused Cs+ primary ion beam (<1 pA) and the SIMS intensities were measured. Before the measurements, the surface of the samples was pre-sputtered in order to eliminate any carbon contamination.

2.1.7 Thermo-Calc and DICTRA software

Thermodynamic simulations were performed using the Thermo-Calc® and DICTRA® software for a deeper study of the phase transformations. Thermo-Calc® was used for two major objectives:
1. to visualize the effect of Mn on the phase transformations using pseudo-binary diagrams;
2. to obtain the expected phase fractions and compositions under ortho-equilibrium conditions; this information was used for:
a. the selection of the annealing temperature;
b. comparison with the experimental results.
DICTRA®, coupled with Thermo-Calc®, was used for thermo-kinetic simulations of the phase transformations during annealing. The aims of these simulations were to assist the analysis of the experimental data and to propose a way to predict the final microstructure. Starting from the 1980s [HIL'80], [SUN'85], researchers have continuously developed the "CALPHAD" (CALculation of PHAse Diagrams) approach in order to build complete thermodynamic databases for different alloy systems. These databases allow various calculations of thermodynamic properties as functions of temperature, composition, pressure, etc., the construction of phase diagrams and the evaluation of other thermodynamic factors and parameters. Different software packages were created to use these databases. For instance, the Thermo-Calc company developed two of them: Thermo-Calc® (TCC™ and TCW™) for thermochemical equilibrium calculations based on the minimization of the Gibbs energy, and DICTRA® for kinetic simulations based on diffusion theory [AND'02]. Both packages were considered suitable for this study and were used with the TCFE5 and MOB2 databases [THE'15]. Thermo-Calc simulations were performed in Chapter 2.2 (the so-called "alloy design") in order to evaluate the effect of Mn on the phase transformations in the steel and to predict the different phase fractions under equilibrium conditions. DICTRA was used to simulate the dissolution of cementite and the formation of austenite under local equilibrium conditions. The initial conditions and the results of the simulations are reported in Chapter 3.

Selection of composition and treatment

Selection of composition and elaboration/characterization of the obtained steel

The precursor works of Matlock [MAT'06] show that a certain level of retained austenite (from 20% to 30%) is necessary to achieve the mechanical properties, especially the balance between strength and ductility, required to fulfill the specifications of 3rd generation AHSS steels. It is well known that an increase in the concentration of both carbon and manganese, which are strong austenite stabilizers, results in an improved stability of austenite at room temperature. Furthermore, a reduction in the austenite grain size is well known to increase the austenite stability by suppressing the martensite transformation. These two aspects, austenite composition and size, were considered for the development of steels with a large volume fraction of retained austenite. In particular, such a high fraction of retained austenite can be achieved in ultrafine-grained steels with 5-7 wt.% Mn content and a relatively low carbon content.
This type of microstructure provides an excellent combination of strength and ductility thanks to the enhanced TRIP effect. The composition used in this work (see Table 3) follows the aforesaid philosophy. It is worth noting that the carbon and manganese contents were limited to 0.1 wt.% and 5 wt.%, respectively, in order to prevent problems linked to the welding properties and to the resistance of spot welds. At this point, it is also important to mention that, theoretically, considering only the chemical contribution to the M_s temperature, the selected composition does not allow stabilizing 30% of retained austenite at room temperature. As will be discussed and demonstrated later in this thesis, the size effect plays a key role in the austenite stability. Vacuum induction melting was used to prepare the steel. The chemical composition of the obtained steel is shown in Table 3. The levels of C and Mn were slightly lower than the targeted ones. The ingot was then reheated to 1200°C and hot rolled with a finishing temperature around 930°C. Coiling was simulated by a slow cooling in the furnace from 625°C. The microstructure of the hot rolled steel consists of bainite and martensite (Figure 66) and its microhardness was evaluated to be around 340 HV. Afterwards, a 70% reduction was applied to reach the final cold rolled thickness of about 1.2 mm.

Table 3 - Chemical composition of the studied steel (10⁻³ wt.%).

Thermal treatment selection

As discussed in Chapter 1.1, there are globally two ways to anneal a MMS: direct annealing of the cold rolled metal, or double annealing, i.e. austenitization followed by quench and then a second annealing with the corresponding ART phenomena. As shown previously [KIM'81], one of the main advantages of double annealing, when the initial microstructure before the second annealing is martensite and/or bainite, is the very fine resulting microstructure and the particular morphology of its constituents: lath-like and/or fibers. From a mechanical point of view, this provides a better balance between strength and ductility and better damage properties [SUG'02], [SUG'00], [HEL'11], [KRE'11]. From a phase transformation point of view, a finer grain size is expected to enhance both the kinetics of austenite formation and the stability of austenite. Finally, it should be noted that double annealing with the first annealing in the fully austenitic domain makes it possible to reduce the so-called "heritage effects". Indeed, direct annealing is very dependent on the prior steps of steel processing, such as coiling and the cold rolling rate. Normally, in the case of double annealing, these "heritage effects" have a lower impact thanks to the full austenitization in the first step. For all the above reasons, it was decided to investigate this type of thermal treatment. However, the choice of the annealing temperature is of major importance since it determines both the fractions of the microstructure constituents and the stability of austenite via its composition and size. In order to better assess the annealing temperature, it was decided to proceed in two steps:
- thermodynamic calculations using the Thermo-Calc software, necessary to determine the temperature and composition ranges for the existence of the different phases;
- combinatory experiments using gradient batch annealing, in order to determine the effects of temperature on the mechanical properties.
Batch annealing was selected as the type of second intercritical treatment based on the works of Furukawa et al. [FUR'89], [FUR'94] and Huang et al. [HUA'94], which showed that the optimal retained austenite fraction and mechanical properties were obtained after 3 h of annealing.
Thermodynamic calculations

Four pseudo-binary diagrams with 1.7 wt.%, 2.7 wt.%, 3.7 wt.% and 4.7 wt.% Mn content were calculated (Figure 67) to visualize the global effect of Mn on the possible phase transformations. The analysis of these pseudo-binary diagrams brings the following outcomes:
 increasing Mn expands the austenite domain to lower temperatures;
 the two-phase domain α+γ is moved to lower temperatures and slightly expanded at low C concentrations;
 the three-phase region (α+γ+θ) is also expanded, meaning a higher stability of cementite;
 the slope of the solvus between the intercritical and austenitic domains (red line) decreases, leading to a rapid change of the austenite volume fraction with temperature variation.
Then, some ortho-equilibrium simulations were performed to obtain the temperature evolution of the phase fractions and their respective compositions for the studied steel (Fe-0.098 wt.% C-4.7 wt.% Mn). Part of the results is shown in Figure 68. Figure 68(a) shows that the complete dissolution of cementite happens at 610°C (blue line) and that a fully austenitic structure is obtained at 740°C (red line). This means that the intercritical domain exists in a rather narrow temperature range: from 610 to 740°C. In Figure 68(b), it can be seen that the evolution of the C content in austenite takes the form of a peak, with the highest amount reached at the temperature where the complete dissolution of cementite occurs. At the same time, the Mn content in austenite continuously decreases towards the steel composition (4.7 wt.%) in the fully austenitic structure. Such an evolution of the chemical composition of austenite suggests a similar peak-type evolution of the retained austenite fraction if only the chemical stability of retained austenite (no size effect) is taken into account. This means that there is an optimum annealing temperature to obtain the highest amount of retained austenite, as observed in previous studies [FUR'89]. In order to assess this optimum temperature and to obtain the evolution of the retained austenite fraction at room temperature as a function of the annealing temperature, the following calculations were performed. The M_s temperature was evaluated using the Andrews relation [AND'65]. In the case of simple C-Mn steels without other alloying elements, it takes the following form:

M_s(°C) = 539 - 423·w_C^A - 30.4·w_Mn^A    (41)

where w_C^A and w_Mn^A are the C and Mn contents in austenite in weight percent. Then, the Koistinen-Marburger empirical equation [KOI'59] was used for the estimation of the retained austenite fraction at room temperature (20°C):

f_RA = f_A · exp(-α·(M_s - T_q))    (42)

where f_RA is the volume fraction of retained austenite, f_A is the volume fraction of austenite before the martensite transformation (the austenite fraction at the end of holding, before quenching), T_q is the temperature reached during quenching (in this study equal to 20°C, room temperature) and α = 0.011 is a fitting parameter. Finally, using the data from Figure 68 (austenite fraction and its composition), the calculations were performed and the results are presented in Figure 69. It can be seen that the peak of retained austenite corresponds exactly to the peak of the C content in austenite, which in turn is related to the temperature of cementite dissolution. From a thermodynamic point of view, the optimum temperature is thus calculated to be around 610°C.
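As an illustration of how equations (41) and (42) chain together, the minimal sketch below recomputes f_RA from equilibrium inputs. It is not the thesis code: the tabulated (T, f_A, w_C, w_Mn) values are illustrative placeholders standing in for the Thermo-Calc results of Figure 68.

```python
# Minimal sketch: retained austenite fraction at room temperature from
# equilibrium data, using equations (41) and (42). The tabulated values
# below are illustrative placeholders, not the actual Thermo-Calc output.

import math

ALPHA = 0.011   # Koistinen-Marburger fitting parameter
T_Q = 20.0      # quenching (room) temperature, deg C

def ms_andrews(w_c, w_mn):
    """Chemical Ms (deg C) for a C-Mn steel, equation (41)."""
    return 539.0 - 423.0 * w_c - 30.4 * w_mn

def f_retained(f_a, w_c, w_mn, t_q=T_Q):
    """Koistinen-Marburger retained austenite fraction, equation (42)."""
    ms = ms_andrews(w_c, w_mn)
    if ms <= t_q:              # austenite fully stable: no martensite forms
        return f_a
    return f_a * math.exp(-ALPHA * (ms - t_q))

# Illustrative equilibrium data: (annealing T in deg C, f_A, wC^A, wMn^A)
equilibrium = [(600, 0.25, 0.40, 10.5),
               (610, 0.30, 0.45, 10.0),   # cementite fully dissolved: C peak
               (650, 0.36, 0.27, 9.1),
               (700, 0.60, 0.16, 6.5)]

for temp, f_a, w_c, w_mn in equilibrium:
    print(f"T = {temp} C  ->  f_RA = {f_retained(f_a, w_c, w_mn):.3f}")
```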
However, it should be noted that kinetic effects, such as cementite dissolution and austenite growth, are not taken into account in this type of calculation. As seen in the pseudo-binary diagrams (Figure 67), the stability of the formed cementite increases with increasing Mn, so cementite dissolution in MMS can be sluggish. Knowing that the carbon peak is directly related to cementite dissolution, one can expect the peak of the retained austenite fraction to be controlled by both cementite dissolution and austenite growth. Therefore, the temperature of 610°C determined by the thermodynamic calculations should clearly be considered as a lower bound. In a more pragmatic way, it is possible to approach the optimum annealing temperature by using a combinatory experiment.

Combinatory experiments

As explained previously, it was decided to perform a specific thermal treatment (batch annealing) in a furnace in which the temperature gradient can be controlled. The principle of the gradient batch annealing furnace was explained in the previous section. The furnace was programmed to obtain a linear gradient of soaking temperature between 600 and 700°C (a range defined from the aforementioned thermodynamic calculations). The scheme of the annealing cycle is presented in the left part of Figure 70. Based on the literature and preliminary dilatometry trials, it can reasonably be assumed that the slow cooling does not significantly affect the microstructure. Indeed, it was found that ferrite formation does not occur at very low cooling rates and that, even during holding in the domain with a high driving force for ferrite formation, the transformation was very sluggish. This is probably due to the high Mn content in austenite before cooling. A steel sheet 230 mm long and 150 mm wide was heat treated. The obtained temperature gradient as a function of the position in the sheet is shown in the right part of Figure 70. For each temperature, two "mini" tensile samples with a gauge length of 20 mm and a section of 5 mm × 1.2 mm were prepared. Then, tensile tests were performed. The evolution of the mechanical properties as a function of the holding temperature is presented in Figure 71. The table of all results and all tensile curves can be found in Annex 2. These results clearly show that there is an optimum temperature domain where a good balance between strength and ductility can be achieved. For the studied composition, it appears that the optimum temperature range is between 640 and 660°C. As anticipated in the previous part of the work, these temperatures are higher than those predicted by the ortho-equilibrium calculations, due to kinetic effects. XRD analysis was performed on all samples to study the evolution of the retained austenite fraction. The results are presented in Figure 72. Taking into account only the evolution of the retained austenite fraction, one could conclude that the optimal temperature is ~690°C, but this would be an erroneous conclusion. In fact, the optimum ductility (elongation) was obtained in the temperature range of 640-660°C, as already stated. This is due to one more parameter that introduces another level of complexity in the choice of the optimal treatment: the mechanical stability of retained austenite. As can be seen in Figure 72, different retained austenite fractions can provide the same level of strength-ductility balance and, conversely, the same retained austenite fraction can result in different strength-ductility balances.
This directly indicates that the strength-ductility balance is related to both parameters: the fraction and the stability of retained austenite. At this stage of the work, it was not possible to gain a better understanding of the mechanical stability of retained austenite. Based on the obtained results, it was decided to select a soaking temperature of 650°C and to investigate the impact of the soaking time on the evolution of the microstructure and mechanical properties. The decisions taken in part 2.2 can be summarized in the following manner:
1. selected composition: 0.1 wt.% C and 5 wt.% Mn;
2. selected heat treatment: double annealing with a first austenitization above Ae3, followed by intercritical batch annealing at 650°C.
As already stated, the final mechanical properties of steels are closely related to the microstructure parameters: nature, composition, volume fraction, size and morphology of the microstructure constituents. In turn, the microstructure depends on the applied thermomechanical treatments. In this chapter, the microstructure evolutions resulting from the double annealing treatment will be presented and discussed. According to the considerations described in Chapter 2.2, the heat treatment schematically presented in Figure 73 was chosen for the further investigations. It consists of two thermal cycles: first, a complete austenitization at 750°C for 30 minutes followed by a water quench; second, an intercritical annealing at 650°C with different holding times (3 min, 10 min, 30 min, 1 h, 2 h, 3 h, 7 h, 10 h, 20 h and 30 h), also followed by a water quench. All double annealing treatments were performed in a Nabertherm furnace under Ar atmosphere to avoid any decarburization. The mean heating rate in the furnace was about 5°C/s. However, at the end of the heating stage the heating rate was much lower, and the holding temperature ±5°C was finally reached after ~200 s.

Characterisation of the microstructure after austenitization

The microstructure after the first annealing cycle (austenitization followed by quench) was characterized using different techniques: optical microscopy (OM), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray diffraction (XRD). Figure 74 presents the observations (OM, SEM, TEM) of the obtained microstructure. It can be seen that the resulting microstructure is composed of lath martensite. Applying image analysis to the OM images after Dino etching, the prior austenite grain size was estimated to be around 4 µm. Furthermore, optical observation after Klemm etching revealed the presence of a small quantity of retained austenite in the martensite matrix, as highlighted in Figure 75(b). A finer characterization of the retained austenite by TEM (Figure 75(d)) confirms this observation. The evaluation of the retained austenite volume fraction from the XRD spectrum (Figure 75(c)) was not possible due to its very low value. Thus, it was considered that there was less than 3% of retained austenite. A mean Mn content of 9 wt.% in the retained austenite islands was measured by EDX, which was considered relatively high. As a consequence, the microstructure at the beginning of the second cycle consists of a fully martensitic structure containing small quantities of retained austenite. The presence of retained austenite after quenching can be due to the existence of Mn microsegregations. Therefore, the Mn distribution was analyzed using EPMA and the results are presented in Figure 76. As expected, microsegregation of Mn can be observed.
Nevertheless, the mean segregation level is only about 5.5 wt.%, which is a rather low value in comparison with the Mn content measured in the retained austenite islands by TEM-EDX (~9 wt.%). Locally, the Mn content can reach values as high as 15 wt.% (red zones on the Mn map), which is more consistent with the Mn level necessary for austenite stabilization. Finally, the obtained mean segregation level corresponds to a partition coefficient of about 1.2.

Microstructure evolution during annealing at 650°C

Before going into the core of this section (microstructure characterization), a short recall of the previously obtained results is given hereafter. First of all, the initial microstructure before intercritical annealing is fully martensitic, without any prior deformation. Next, based on the thermodynamic calculations (Chapter 2.2), the expected stable phases are ferrite + austenite at 650°C and a ferrite + cementite mixture at low temperature. The austenite formation can thus result from different phase transformations, including the formation of cementite at lower temperature and its dissolution at higher temperature. It is worth noting that cementite particles would act as preferential nucleation sites for austenite. Therefore, the objective of this section is to characterise and analyse the microstructural evolution during annealing at 650°C and its role in the stabilization of retained austenite at room temperature.

Microsegregation evolution

Generally, cold rolled high strength steels are prone to microsegregations (especially of Mn) due to the former processing conditions. These microsegregations result from the solidification process and introduce dispersions in the microstructure, which can have a significant impact on the phase transformations, the microstructure evolution and both the mechanical and damage properties. Therefore, it is important to characterize the states of dispersion and microsegregation at the different steps of the heat treatment. It was shown above that the initial microstructure (before intercritical annealing) exhibits Mn microsegregations. Hence, it was of interest to follow the evolution of the Mn microsegregation during intercritical annealing. For that purpose, four samples (3 min, 1 h, 10 h and 30 h) were subjected to EPMA analysis. The obtained quantitative images of the Mn distribution are shown in Figure 77. The complete data from the EPMA analysis are presented in Annex 3.1. An important redistribution of Mn happens during such a long intercritical annealing. These analyses suggest that there is a progressive homogenization of Mn in the segregated bands during annealing, as observed in [LAV'49], [KRE'11]. In that case, the Mn diffusion length is expected to be of the order of the initial mean distance between the bands, i.e. of the order of 10 µm. The long-range diffusion of Mn implies that Mn diffuses in a two-phase matrix of ferrite and austenite. This means that the effective diffusivity of Mn may differ from that in pure ferrite and in pure austenite. This phenomenon was already discussed in an earlier work, where it was shown that such long-range Mn homogenization was controlled by an apparent diffusivity similar to the one in ferrite. Both the small segregated band spacing and the ultra-fine size of the microstructure constituents are expected to enhance this phenomenon in the particular case of MMS.
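A rough order-of-magnitude check of this argument can be sketched as follows. The Arrhenius parameters below are literature-typical assumptions, not values measured in this work; the point is the one-to-two orders of magnitude contrast between the ferrite-like and austenite-like estimates, not the absolute numbers.

```python
# Order-of-magnitude sketch of Mn diffusion lengths at 650 C, to illustrate
# why band-scale homogenization (~10 um spacing) is only conceivable if the
# apparent diffusivity is ferrite-like. D0 and Q values are literature-typical
# assumptions, not from this thesis.

import math

R = 8.314  # J/(mol K)

def diff_length_um(d0_m2s, q_jmol, temp_c, time_s):
    """Diffusion length sqrt(D*t) in micrometres."""
    d = d0_m2s * math.exp(-q_jmol / (R * (temp_c + 273.15)))
    return math.sqrt(d * time_s) * 1e6

for hours in (1, 10, 30):
    t = hours * 3600
    l_ferrite = diff_length_um(1.5e-4, 233e3, 650, t)     # Mn in alpha-Fe (assumed)
    l_austenite = diff_length_um(1.8e-5, 264e3, 650, t)   # Mn in gamma-Fe (assumed)
    print(f"{hours:>2} h: L_alpha ~ {l_ferrite:.2f} um, L_gamma ~ {l_austenite:.3f} um")
```

With these assumed parameters, the ferrite-like length approaches the micrometre scale after tens of hours, whereas the austenite-like length stays at a few tens of nanometres, qualitatively supporting the ferrite-controlled homogenization picture.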
It is worth noting that another type of Mn partitioning may occur at a smaller scale. It is mainly linked to the interactions between Mn and the migrating α/γ interface during austenite growth. This partitioning is driven by short-range diffusion and depends on the temperature and on microstructural parameters. This topic will be discussed in more depth later, using the more precise data of the TEM hypermaps.

Evolution of the cementite precipitation state

The carbides were first analyzed during the heating process using interrupted cycles (heating to the target temperature followed by He quench). Three different temperatures were considered: 550°C, 600°C and 650°C. The observations were done on TEM replicas. The Mn content of the carbides was measured by EDX. All the obtained results can be found in Annex 3.2; Figure 78 highlights only the most significant ones. The mean Mn content of the carbides at 650°C was determined from the EDX measurements and estimated to be about 10 wt.%. Then, the time evolution of the carbides during holding at 650°C (3 min, 1 h, 2 h and 3 h) was analyzed using TEM thin foils. The main results are presented in Figure 79. In a general manner, the carbide precipitation state depends on the time spent at 650°C. For a short holding time (3 min), a high volume fraction was observed. In terms of Mn content, there were two populations of carbides: one with a high Mn content around 30 wt.% and another with a lower content of about 15 wt.%. Two types of morphology were also detected: rod-like and polygonal (more or less rectangular). After 1 h of holding, the volume fraction of carbides decreases, but a certain amount of carbides with bimodal Mn content and shape remains. After 2 h of holding, the sample can be considered as almost carbide free: only a few carbides with a high Mn content (~25 wt.%) remain in some areas. Finally, the carbides are completely dissolved after 3 h of holding. It is also important to note that it was not possible to establish any link between the carbide morphology and the Mn content from the analyzed data. Using Convergent Beam Electron Diffraction (CBED) and Selected Area Electron Diffraction (SAD), it was determined that the crystallographic structure of the observed carbides corresponds to cementite. This is coherent with the conclusions of Luo et al. [LUO'11], [LUO'13]. These results show the presence of cementite at 650°C for holding times as long as 2 hours. This contrasts with the equilibrium thermodynamic calculations carried out in Chapter 2.2, where the temperature of complete cementite dissolution was determined to be around 610°C. This can be attributed to kinetic effects and will be clarified further by the DICTRA simulations.

Time-evolution of austenite and ferrite

The samples obtained after intercritical annealing at 650°C were first characterized by OM (Figure 80). Dino etching [ARL'13] was used to reveal the microstructure. Basically, a refined microstructure is evidenced by OM even after 30 h of holding; thus, OM only gives a macroscopic view of the microstructure. For a more detailed analysis, all the microstructures were characterized using FEG SEM (Figure 81 and Figure 82). The explanation and details of the observed microstructures are given in Figure 82. The latter shows the microstructure after 10 h of holding at 650°C, consisting of at least three phases: ferrite (F), retained austenite (RA) and fresh martensite (M).
The detailed analysis of the microstructures leads to the following conclusions: 1) austenite (retained austenite and fresh martensite) and ferrite both exhibit two different morphologies, lath-like and polygonal; 2) at holding times longer than 3 h, a higher fraction of fresh martensite was observed and most of the martensite was polygonal; 3) it can be supposed that the polygonal ferrite grains result from a recrystallization process of the initial martensite structure. The first polygonal ferrite grains were already observed after 30 min of holding. The recrystallization of martensite is a complex subject and needs a dedicated study, which is outside the scope of this work. The rate of austenite transformation is very rapid in the beginning and then becomes more and more sluggish. The equilibrium fraction is reached after 7 h of holding. The observed austenite evolution is apparently consistent with the work of Speich et al. [SPE'81], in which it was shown that there are three stages of austenite growth: two rapid initial stages controlled by C and Mn diffusion in ferrite, and a third, sluggish stage controlled by Mn homogenization in austenite. The analysis of the Mn distribution between austenite and ferrite as a function of time was done using local TEM-EDX measurements and TEM-EDX hypermaps. More complete information from the overall TEM-EDX analysis can be found in Annex 3.3. Figure 84 presents the hypermaps for the samples with 1 h, 2 h, 3 h and 10 h holding times. On each hypermap, the Mn-rich zones, considered to be RA or FM, and several Mn-depleted zones (F) were selected and quantified. An example of such a selection for the samples after 2 h and 3 h of holding at 650°C is shown in Figure 85. At least 10 austenite zones and 5 ferrite zones of the hypermap were analyzed for each sample. The average values of the Mn content in both austenite and ferrite for the different samples are reported in Table 4. As calculated with the Thermo-Calc software, the equilibrium Mn contents of ferrite and austenite at 650°C are 2.3 and 9.1 wt.%, respectively. From the data in Table 4, it can be seen that the Mn content of ferrite decreases with holding time and reaches the equilibrium value after 10 h. In a general manner, the Mn content in austenite decreases with time from 10 wt.% to 8.5 wt.%. The high Mn value at the beginning of the transformation may be explained by the fact that austenite nucleates preferentially in Mn-rich regions such as carbides. After longer times, the measured Mn content in austenite is close to the equilibrium one (8.5 wt.% versus 9.1 wt.%). As a consequence, and surprisingly, the system reaches equilibrium after a relatively short time of 10 h, the equilibrium being defined by the calculated volume fractions of the phases and by their respective compositions. It was also decided to measure the carbon content of austenite for the samples with 1 h and 30 h holding times at 650°C using the NanoSIMS technique. Areas of 10 × 10 μm² were scanned using a focused Cs+ primary ion beam (<1 pA) in the NanoSIMS 50 machine, and the SIMS intensities were measured and plotted as maps (in particular, the 12C ion maps needed for the estimation of the carbon content). Image analysis was then performed on the obtained 12C ion maps and the carbon content was evaluated according to the method described in Chapter 2, which is mainly based on the comparison of the intensities between the analyzed sample and a reference sample with known carbon content.
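The quantification principle just described can be sketched as a simple intensity-ratio calibration. This is a hedged illustration: the function names and the synthetic count data below are hypothetical, not taken from the NanoSIMS processing chain actually used.

```python
# Hedged sketch of reference-based NanoSIMS quantification: the carbon content
# of a zone is taken proportional to its 12C secondary-ion intensity, scaled by
# a reference sample of known carbon content measured under the same conditions.

import numpy as np

def carbon_wt_pct(zone_counts, ref_counts, ref_carbon_wt_pct):
    """Linear intensity-ratio calibration of the 12C signal."""
    return np.mean(zone_counts) / np.mean(ref_counts) * ref_carbon_wt_pct

# Illustrative 12C count data for several austenite zones of one ion map
rng = np.random.default_rng(0)
reference = rng.normal(1.0e4, 3e2, size=50)        # reference, e.g. 0.30 wt.% C
zones = [rng.normal(1.2e4, 4e2, size=30) for _ in range(5)]

estimates = [carbon_wt_pct(z, reference, 0.30) for z in zones]
print([round(c, 3) for c in estimates], "-> mean:", round(np.mean(estimates), 3))
```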
The carbon content was estimated for each selected zone in the 12C ion maps in order to evaluate the dispersion of the C content. Two 12C ion maps and the associated distributions of measurements for the 1 h and 30 h samples are shown in Figure 86 and Figure 87. It is observed that the measured carbon contents show a relatively low scatter. The mean carbon contents measured in the samples after 1 h and 30 h of holding are close to 0.36 wt.% and 0.27 wt.%, respectively. These values correspond very well to the C content calculated from a mass balance that neglects carbides (Figure 86 and Figure 87) and thus suggest that the major part of the carbides is completely dissolved after one hour of treatment at 650°C.

Time-evolution of retained austenite and martensite

The volume fraction of retained austenite was evaluated by saturation magnetization measurements, and that of fresh martensite was deduced from image analysis. The evolutions of the retained austenite and martensite fractions with holding time at 650°C are shown in Figure 88. The time-evolution of both retained austenite and fresh martensite can be divided into three successive steps:
- at the beginning of the transformation, i.e. for times shorter than 1 h, all the prior austenite was stabilized at room temperature;
- a decrease of RA and a concomitant increase of FM is observed for times between 1 h and 10 h;
- in a final stage, both RA and FM seem to no longer evolve.
In addition, the evolution of the RA fraction with holding time has a peak-type form, which was already reported in the literature [HUA'94], [ARL'12-1] without any clear explanation. Later in this chapter, it will be shown that this temporal retained austenite evolution is a fair indicator of the mechanism of austenite stability. TEM analyses of selected samples (3 min, 1 h, 2 h, 3 h, 10 h and 30 h) were also performed in order to study the different phases more finely and to determine the mean Mn content in each phase. Figure 89, Figure 90 and Figure 91 present the TEM images with the indexed diffraction patterns in STEM mode (CBED) and the measured Mn content of certain phases for all the samples. The analysis of the sample after 3 min of holding (Figure 89(a) and (b)) shows that the ferrite (indexed as a BCC structure) contains a high density of dislocations and that some features also present a substructure. The dislocation density decreases with increasing holding time; however, even after 30 h of holding, certain grains still contain some dislocations. The presence of a substructure and the decrease of the dislocation density with time are certainly indicators of martensite recovery phenomena, as already observed in the past by Speich [SPE'69], [SPE'72]. Surprisingly, it appears that recovery is delayed to longer times. The high Mn content is thought to be the reason for such sluggish recovery kinetics. In fact, the Mn effect is thought to be twofold. First, the increase of Mn lowers the intercritical temperature range, so that the recovery of martensite occurs with a lower driving force. Second, Mn atoms can interact with dislocations or grain boundaries through a solute drag effect [GOR'62], [PET'00], [REI'06]. It can be seen in Figure 89(b) that the ferrite of the sample with 3 min holding time has a Mn content of about 4 wt.% and that the size of the formed austenite is ultra-fine. The sample with 1 h holding (Figure 89(c) and (d)) presents coarser austenite and a lower Mn content in both ferrite and retained austenite. Retained austenite was further detected in all the samples.
Its mean Mn content was estimated to be between 9 and 11 wt.%. In Figure 90(b) (2 h sample), certain austenite grains present lath-like features inside the grains. Using CBED, it was possible to distinguish some austenitic twins (FCC(Twins)) and alpha prime martensite (BCC(M)). Figure 90(b) also reveals some dislocations in the ferrite grain (outlined by the blue line) adjacent to the austenite grains; this is thought to be due to the accommodation of transformation stresses.

Geometrical and topological aspects

The analysis of both the size and the morphology of the phases is a key point for a better understanding of the microstructure formation. Using SEM images at relatively high magnification (×10000; see the example in Figure 59), it was possible to perform manual measurements of the austenite size, considering martensite plus retained austenite (FM+RA) in the final microstructure. For each holding time, 200 austenite features were measured by image analysis of the SEM images. Both the lath-like and polygonal morphologies were distinguished and their mean sizes were measured. Table 5 summarizes the size measurements and includes the volume fraction of austenite at the given times of holding at 650°C. It is clear that the low statistics and the manual operations lead to uncertainties in the size measurements. Nevertheless, the values obtained for the lath features are in rather good agreement with those found by Luo et al. [LUO'11]. In an earlier study ['80], it was observed that austenite nucleates at lath and packet boundaries and grows preferentially along the laths, consequently producing a structure with a lath-like morphology of ferrite and austenite. However, the complete sequence of carbide precipitation and austenite nucleation and growth was not entirely clear. Based on the different observations, the following mechanism of austenite nucleation and growth was proposed:
• precipitation of carbides along the laths and packet boundaries and at the triple junctions of the prior austenite grains;
• nucleation of austenite on the previously precipitated carbides (along the laths and packet boundaries and at the triple junctions);
• preferential growth of the austenite nucleated at lath and packet boundaries along these units.
A comparison of the austenite formation mechanisms from an initial fresh martensite structure (not deformed) and from the deformed structure of a MMS is schematically described in Figure 92. Figure 92(A) shows the nucleation and formation of austenite from an initial non-deformed martensite. In this case, as stated previously, austenite nucleates and grows on the lath and packet boundaries and at the triple junctions of the prior austenite grains. Consequently, the final morphology of the austenite is twofold: lath-like and polygonal features. In the case of a deformed martensite structure (Figure 92(B)), the scenario is different. Cementite precipitation is concomitant with the recovery and recrystallization of ferrite. Therefore, cementite precipitates on the prior austenite grain boundaries and on the lath or packet boundaries, but also on the boundaries of the new grains and/or recovery cells. The recrystallization before the austenite transformation is partial, as observed in an earlier study [ARL'12-2]. In this condition, austenite nucleates preferentially on the cementite located on the recrystallized ferrite boundaries. At the same time, there are also two other possible nucleation sites for austenite: the prior austenite grain boundaries and the lath or packet boundaries.
Finally, further austenite growth and ferrite recrystallization result in a polygonal microstructure of ferrite and austenite. A comparison of the experimentally obtained microstructures from the initial martensite structure (not deformed) and from the deformed structure of a MMS is shown in Figure 93. It should be noted that both microstructures have a similar fraction of austenite (martensite + retained austenite). The obtained morphologies correspond well to those described by the two mechanisms in Figure 92. However, the scheme proposed in Figure 92 cannot be generalized. Cementite precipitation, recrystallization and austenite formation depend on several parameters, such as the heating rate and path, the initial structure and the steel composition. Therefore, depending on these parameters, the morphology of the final microstructure can differ.

Overall view of the obtained experimental data

The microstructure evolution during ART annealing at 650°C was characterized using the EPMA, SEM, TEM and NanoSIMS techniques. The EPMA results revealed a long-range Mn homogenization controlled by an apparent Mn diffusivity close to that in ferrite; the small interspacing of the microsegregation bands and the ultra-fine microstructure enhance this phenomenon. The SEM characterization showed that the microstructures contained at least three phases: ferrite, retained austenite and fresh martensite. A double morphology (lath-like and polygonal) of the analyzed features was revealed and a complete mechanism for the formation of such a microstructure was proposed. From the image analysis of the SEM pictures, it was found that the kinetics of the austenite transformation was very rapid in the beginning and then became smoother. The size evolution of austenite was also evaluated. The retained austenite fraction was determined by the saturation magnetization method. It was found that an important amount of RA was stabilized at room temperature after the final cooling, and that the evolution of the RA fraction with holding time has a particular peak-type form. Finer TEM observations showed that the carbide dissolution was sluggish: a small amount of undissolved carbides, with a high Mn content, was still present after 2 h of holding. The different phases were distinguished using CBED in STEM mode. An important dislocation density was observed in the ferrite. The Mn content of the different phases was characterized using TEM-EDX measurements. Finally, the C content of two samples, 1 h and 30 h, was determined by the NanoSIMS technique; the obtained values were close to those calculated from the mass balance neglecting the presence of cementite. All the obtained quantitative data are gathered in the following table.

Discussion of main results: experimental/modelling approach

As pointed out in the literature review, the retained austenite fraction and its stability have a significant influence on the final mechanical behavior of MMS. Under mechanical loading, retained austenite transforms into martensite, thus providing additional ductility and strain hardening, known as the TRIP effect. The rate of the strain-induced martensite transformation is linked to the fraction and stability of the retained austenite, both of which are directly linked to the mechanism of austenite formation during holding. In this part, we propose to discuss in more depth the kinetics of austenite growth, the stability of austenite at room temperature and their interactions. A combined experimental/modeling approach is proposed here.
Mechanisms of austenite formation

Effects of the representative volume

It is proposed to study the mechanisms of austenite growth using the DICTRA software, which is mainly based on a mean-field approach. In this case, the choice of the equivalent representative volume of the microstructure is critical, all the more so when the studied microstructure is complex. The linear geometry was chosen mainly because the experimental observations revealed that the lath-like morphology is dominant, especially at the beginning of holding. The configuration for the DICTRA calculations is given in Figure 94. Three phases were considered: θ-cementite, α-ferrite and γ-austenite, the latter being initially set as an inactive phase at the α/θ interface. Consequently, austenite appears when the driving force for its nucleation exceeds (in absolute value) a threshold value calculated by DICTRA under local equilibrium conditions. The simulation cycle was set as follows: starting at 400°C, heating to 650°C at a heating rate of ~1°C/s, then holding at 650°C. The characteristic length L_α (see Figure 94) was taken equal to the half-distance between the martensite laths, approximately estimated to be 150 nm. The cementite region size (L_θ) was then calculated from the classical mass balance, considering that the volume fraction of cementite is 1.45%, the value corresponding to the equilibrium fraction at 400°C. The obtained cementite thickness was 2 nm (a minimal numerical check of this mass balance is sketched after the parameter list below). The Mn content of cementite was taken as 10 wt.%, as determined in section 3.2.2 using EDX in the TEM. The Mn and C contents of ferrite were then calculated according to the mass balance: 4.53 wt.% and 2×10⁻⁶ wt.%, respectively. The results in terms of the kinetics of cementite dissolution and of austenite formation are presented in Figure 95. In a general manner, the austenite kinetics is relatively well predicted, even if the calculated kinetics is abrupt between 30 min and 3 h. However, the calculated dissolution rate of cementite is rapid (complete dissolution after ~800 s). This seems to contradict the experimental observations: carbides are still present after 2 h of holding. From a theoretical point of view, there are many reasons that can explain such a result. First, the microstructural dispersions, such as the size distribution and the heterogeneity of compositions, are not taken into account. This can explain why some carbides may persist after a rather long holding time. Second, the mean-field approach used in this work requires choosing a characteristic length (L_α) supposed to be representative of the microstructure state. This distance, which is very difficult to determine because the microstructure is quite complex, predetermines the size of cementite and can strongly influence the calculated kinetics. For example, the corresponding size of cementite in the initial state is 4 nm, which is far from the measured one and would partially explain why the calculated kinetics of cementite dissolution is faster than the observed one. In order to better understand the influence of the calculation parameters and to improve the predictions, some complementary calculations were carried out with the following considerations:
o the cementite region size was varied from 1 to 10 nm (this implies a variation of the ferrite region size from 75 to 750 nm);
o the Mn content of cementite was varied from 10 to 30 wt.%.
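The planar mass balance mentioned above can be checked numerically as follows. This is a minimal sketch assuming the pure volume balance f_θ = L_θ/(L_θ + L_α) with no mass-to-volume correction, which is presumably why it returns values about 10% below the 75-750 nm range quoted in the list above.

```python
# Minimal sketch of the linear mass balance used to size the DICTRA cell:
# in the planar geometry of Figure 94, f_theta = L_theta / (L_theta + L_alpha).
# Given f_theta = 1.45 % (equilibrium at 400 C), either length fixes the other.

F_THETA = 0.0145

def l_theta_from_l_alpha(l_alpha_nm):
    return F_THETA / (1.0 - F_THETA) * l_alpha_nm

def l_alpha_from_l_theta(l_theta_nm):
    return (1.0 - F_THETA) / F_THETA * l_theta_nm

print(round(l_theta_from_l_alpha(150.0), 1))   # ~2.2 nm, the baseline size
for lt in (1.0, 10.0):                         # the parametric range above
    print(lt, "nm ->", round(l_alpha_from_l_theta(lt)), "nm")
```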
The effects of the cementite size and of its Mn content on the kinetics of cementite dissolution and austenite formation are shown in Figure 96 and Figure 97. As expected, it was found that both kinetics strongly depend on the considered cementite size (and hence on the ferrite region size) and on the Mn content in cementite. It is known [GOU'12-1] that an increase of the Mn content in cementite decreases the driving force for cementite dissolution and influences the kinetics of austenite growth. This effect is clearly illustrated in Figure 97; however, the effect of the Mn content appears less pronounced. Furthermore, it is shown that it is not possible to simultaneously describe the experimental time for the complete dissolution of cementite and the experimental kinetics of austenite growth using a given pair of cementite size and Mn content. It is worth noting that the time for the complete dissolution of cementite is not necessarily a relevant parameter in the absence of measured cementite dissolution kinetics. Indeed, the measurements of the C content in austenite (Figure 86 and Figure 87) are in very good agreement with the C content calculated from the mass balance neglecting the presence of carbides. This is clear evidence that the major part of the carbides is completely dissolved after one hour of treatment at 650°C. The dispersions existing in the microstructure, particularly in size and Mn content, may explain why some cementite particles persist even after longer holding times at 650°C. In the following analysis, and in view of the above, we will consider the first parameter set (L_α = 150 nm) as sufficiently relevant to describe the austenite growth.

Austenite growth controlled by Mn diffusion

The mechanism and the kinetics of austenite transformation and growth are strongly linked to the evolution of C and Mn at the transformation interfaces α/γ and γ/θ. In the ternary Fe-C-Mn system, the situation is more complicated than in the binary Fe-C one. Indeed, both interstitial and substitutional diffusion occur during the transformation and their diffusivities differ substantially. Thus, the C and Mn compositions at the α/γ interface that define the tie-line for austenite growth cannot be determined by the tie-line passing through the bulk alloy composition, contrary to the Fe-C system. Furthermore, the austenite formation can be successively controlled by C diffusion in austenite, Mn diffusion in ferrite and, finally, Mn diffusion in austenite [SPE'81]. This variety of possible growth modes has a strong influence on the kinetics of austenite formation. The analysis of the time-evolution of the C and Mn profiles through the system at 650°C is given in Figure 98; it provides some clarification of the mechanism of austenite growth. The time-evolution of both C and Mn at the γ/θ and α/γ interfaces defines the tie-lines for cementite dissolution and austenite growth. In a first step, the focus is put only on the cementite dissolution process; it is worth noting that during this stage austenite grows into ferrite. From a theoretical point of view, the tie-line for the transformation depends on both the kinetic and thermodynamic properties of the studied system. However, in this specific case, it is shown that the C and Mn contents at the γ/θ and α/γ interfaces do not evolve significantly with time.
The C and Mn contents are close to 0.45 wt.% and 8 wt.%, respectively, at the α/γ interface (austenite side) and close to 0.5 wt.% and 11.5 wt.%, respectively, at the γ/θ interface (austenite side). As a consequence, the tie-lines for the transformation, which are represented in Figure 99, can be considered as time-independent during the cementite dissolution process. The determination of these tie-lines is critical because they govern the austenite growth during a substantial part of the austenite formation. The austenite growth is influenced by the dissolution of cementite into austenite and by the growth of austenite into ferrite (Figure 98). Obviously, these two steps are interdependent, and their kinetics depends mainly on the difference in the chemical potentials of C and Mn between the γ/θ and α/γ interfaces on the austenite side (the small red and blue arrows in Figure 99). It is interesting to note that the carbon contents at the γ/θ and α/γ interfaces are quite close (0.45 wt.% versus 0.5 wt.%). This is the reason why the carbon profile in austenite is almost flat (Figure 98). In order to go further, the C activity through the system at the beginning of the transformation at 650°C (t = 0 s) was determined (Figure 100). The C activity is uniform through the system. As a consequence, and surprisingly, the austenite growth is not controlled by C diffusion in austenite, as suggested by many authors for other steels with lower Mn content [SPE'81]. This can be explained by two concomitant effects: the high nominal Mn content and the relatively low transformation temperature. Indeed, both lead to a high C activity at the α/γ interface (austenite side) and to a strong decrease of the C chemical potential gradient through the system.

Figure 98 - DICTRA calculations of the time-evolution of the C (a) and Mn (b) profiles at 650°C during cementite dissolution. The time t = 0 s corresponds to the beginning of the transformation at 650°C.

Figure 101 presents the evolution of the Mn activity profiles. It can be seen that the Mn activity profile in austenite is relatively flat. It can thus be concluded that the driving force for cementite dissolution is weak, even though the Mn content in cementite is relatively low. This explains why the kinetics of cementite dissolution is relatively slow and why the effect of the Mn content in cementite on the dissolution kinetics is not significant (Figure 97). Such behavior was already observed in other Fe-C-Mn steels [GOU'12-1]. On the other hand, a gradient of Mn activity is established in ferrite, and the austenite growth is thus controlled by Mn diffusion in ferrite. It is interesting to note that a substantial part of the austenite is formed before the complete dissolution of cementite. In a second step, when cementite is completely dissolved, an accumulation of Mn at the α/γ interface (austenite side) is highlighted (Figure 102). This is a direct consequence of Mn partitioning from ferrite to austenite. In that case, it is worth noting that the tie-line for austenite growth depends on time, as shown in Figure 103. Interestingly, the tie-lines for the transformation are located below the equilibrium tie-line for times shorter than approximately 1800 s, and above it for longer times. As a consequence, a decrease of the Mn content on the austenite side of the α/γ interface and a shrinkage of austenite are expected from a certain time, as clearly evidenced in Figure 102 and Figure 103.
This retraction, also visible in Figure 95(b), is relatively slight in this case because the geometrical locus of the tie-lines is relatively close to the equilibrium tie-line (see the position of the tie-line at 7200 s and the equilibrium tie-line in Figure 103). Surprisingly, this retraction is observed for a relatively short holding time (after 2 hours). This can be partially attributed to the kinetic effect resulting from the ultra-fine structure, which reduces the Mn diffusion length. As a partial conclusion, the major part of the austenite growth is controlled by Mn diffusion in ferrite and, at longer times, by Mn diffusion in austenite. Although the transformation is not controlled by carbon diffusion in austenite, the kinetics of austenite formation is relatively fast, and the shrinkage of austenite occurs, surprisingly, at relatively short times. These effects result from both thermodynamic and kinetic processes. For instance, it is clearly shown that the high nominal Mn content and the relatively low transformation temperature lead to a strong decrease of the C chemical potential gradient through the system. Furthermore, the ultra-fine microstructure seems to play a key role in the kinetics of austenite growth: being controlled by Mn diffusion, the kinetics would have been more sluggish in the case of a coarse microstructure.

Effect of the characteristic length L_α size distribution

Further, by combining the effects of the cementite size and Mn content, it was possible to compare different austenite kinetics simulated with DICTRA with the experimental values (Figure 104). This figure shows that an increase of both the cementite size and its Mn content makes the austenite formation curve smoother. However, fixed values of the cementite size and Mn content alone cannot completely describe the experimental data. Once again, it was supposed that a heterogeneous distribution of size and/or Mn content in the microstructure could explain such an evolution. In the works of Gouné et al. [GOU'10], [GOU'12-2], it was shown that the dispersions existing in the microstructure play an important role in the kinetics of phase transformations and that taking such heterogeneities into account can improve the predictions of classical approaches. Based on this, it was decided to perform calculations that account for the effect of the size distribution. For that purpose, it was assumed that the distribution of the characteristic length L_α can be described by the log-normal distribution:

F(L_α) = 1/(L_α·σ·√(2π)) · exp(-(ln L_α - µ)²/(2σ²))    (43)

where µ and σ are, respectively, the mean value and the standard deviation of the logarithm of the characteristic length L_α. It was supposed that there were no interactions between the features of different size classes. Next, DICTRA simulations were performed for each size class. The austenite fraction obtained from each simulation was then weighted according to the distribution fractions (Figure 105(b)) using the following equation:

f_A(t) = Σ_i f_Lα^i · f_A^i(t)    (44)

where f_A(t) is the average volume fraction of austenite at time t, f_A^i(t) is the volume fraction of austenite calculated using DICTRA for the size class i, and f_Lα^i is the weight of that size class given by the distribution (43). The resulting austenite kinetics is presented in Figure 106. As can be seen, a very good prediction of the austenite kinetics was achieved. The reason for this good result is the correct selection of the L_α distribution.
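A minimal sketch of this size-class weighting is given below. The per-class kinetics f_A^i(t) would come from one DICTRA run per class; here a placeholder diffusion-like curve stands in for them, and the distribution parameters are illustrative, so only the averaging of equations (43)-(44) itself should be read from the code.

```python
# Minimal sketch of the size-class weighting of equations (43)-(44).
# The per-class kinetics below is a placeholder for real DICTRA output.

import numpy as np

MU, SIGMA = np.log(200.0), 0.6   # assumed log-normal parameters, L_alpha in nm

def lognormal_pdf(l_alpha):
    """Equation (43)."""
    return (np.exp(-(np.log(l_alpha) - MU) ** 2 / (2 * SIGMA ** 2))
            / (l_alpha * SIGMA * np.sqrt(2 * np.pi)))

classes = np.array([50.0, 100.0, 200.0, 400.0, 800.0])    # nm
weights = lognormal_pdf(classes) * np.gradient(classes)   # discrete f_Lalpha^i
weights /= weights.sum()

def f_a_class(t_s, l_alpha, f_eq=0.36):
    """Placeholder for the DICTRA kinetics of one size class."""
    tau = 30.0 * (l_alpha / 100.0) ** 2   # diffusion-like time scaling (assumed)
    return f_eq * (1.0 - np.exp(-t_s / tau))

t = np.logspace(1, 5, 9)   # 10 s .. ~28 h
f_a_mean = sum(w * f_a_class(t, l) for w, l in zip(weights, classes))  # eq. (44)
print(np.round(f_a_mean, 3))
```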
Indeed, the high fraction, almost 60%, of small cells (L_α below 200 nm) provides a rapid austenite formation during the first 1000 s of holding. The austenite fraction generated by the small cells during this period represents roughly 80% of the global fraction. At the same time, as the holding time increases, the influence of the coarser cells grows, thus decreasing the austenite formation rate and making the curve smoother. Interestingly, the fractions calculated for the chosen L_α size classes with the log-normal distribution are very similar to the measured fractions of the austenite size classes for the sample with 3 min holding at 650°C (Figure 107). Actually, there is a direct link between L_α and the austenite size: a given cell size will result in a certain fraction of austenite with a certain size. However, recalculating L_α from the measured austenite size would require a huge number of DICTRA simulations and a complex fitting to the experimentally measured austenite fractions.

Figure 107 - Comparison of the size class fractions calculated from the log-normal L_α distribution (orange) and the measured distribution of austenite sizes for the sample with 3 min holding at 650°C (blue).

On the other hand, this similarity between the measured and calculated size classes is a direct proof of the correct form of the selected L_α distribution. Indeed, the selected log-normal L_α distribution should result in a log-normal distribution of the austenite sizes, which is what is observed in this study.

Factors controlling austenite stabilization at room temperature

In the previous section, it was shown that a relatively high volume fraction of retained austenite can be stabilized at room temperature, although its carbon content is much lower than in classical TRIP steels. The origin of such behavior is controversial, as evidenced by some recent lively discussions in [LEE'11], [LUO'12], [LEE'12]. In any event, most agree that the enrichment in alloying elements such as C and Mn is an important factor responsible for the room-temperature stability of retained austenite in medium Mn steels. However, the effect of the grain size on the stability of retained austenite is much more debated. In this section, it will be shown that the experimentally obtained evolution of retained austenite is a very valuable dataset for the study of the factors governing the stability of austenite in MMS. The retained austenite fraction at room temperature can be determined using the Koistinen-Marburger empirical equation [KOI'59], presented earlier in the literature review and in Chapter 2:

f_RA = f_A · exp(-α·(M_s - T_q))    (45)

where f_RA is the volume fraction of retained austenite, f_A is the volume fraction of austenite before the martensite transformation (the austenite fraction at the end of holding, before quenching), T_q is the quenching temperature (T_q = 20°C in the present work) and α = 0.011 is a fitting parameter. It is assumed that the effect of the chemical composition and other variables is captured by the change of M_s. It can also be seen that the retained austenite fraction (f_RA) depends on the initial austenite fraction at the end of holding (f_A). For the sake of simplicity, it is suggested to describe the austenite fraction during isothermal holding using the Kolmogorov-Johnson-Mehl-Avrami (KJMA) approach [KOL'37], [JOH'39], [AVR'39]. Obviously, a similar result can be obtained using DICTRA as soon as the kinetics of austenite formation is well described.
In that case, f_A can be expressed in the following manner:

f_A = f_A^eq · [1 - exp(-k·t^n)]    (46)

where n is the so-called "Avrami coefficient", k is a parameter that depends on the nucleation and isotropic growth rates, and f_A^eq is the volume fraction of austenite under ortho-equilibrium. This value, obtained with Thermo-Calc at 650°C, is 0.36. A very good agreement with the experimental data was obtained for k = 0.074 and n = 0.38 (Figure 108). Then, the time evolution of the retained austenite fraction as a function of holding time and M_s temperature is obtained by substituting equation (46) into equation (45):

f_RA = f_A^eq · [1 - exp(-k·t^n)] · exp(-α·(M_s - T_q))    (47)

Here, it is highlighted that the time evolution of the retained austenite fraction is governed by the competition between the initial volume fraction of austenite before quenching and its stability. The first derivative of equation (47) makes it possible to evaluate the magnitude of the retained austenite variation and the inflection points. The following equation is thus obtained:

df_RA/dt = f_A^eq · exp(-α·(M_s - T_q)) · { k·n·t^(n-1)·exp(-k·t^n) - α·(dM_s/dt)·[1 - exp(-k·t^n)] }    (48)

This relation clearly shows that the evolution of retained austenite depends on the temporal variation of the M_s temperature, which is linked to the austenite stability. For example, when M_s is constant, the derivative in equation (48) is always positive; the retained austenite fraction is then expected to be strictly increasing over the considered time interval. In this work, it was shown that the retained austenite evolution exhibits a clearly marked maximum (Figure 88). As stated previously, three different stages are observed in the RA evolution: increase, decrease and, finally, stagnation. First, the initial austenite volume fraction is fully stabilized at room temperature; thus f_RA = f_A, and it continuously increases up to 1 h of holding. It can easily be demonstrated from equations (45) and (47) that this condition is fulfilled for M_s temperatures strictly lower than T_q. Second, the time-evolution curve of the RA fraction goes through a maximum value and then decreases. The peak, which corresponds to the highest volume fraction of retained austenite, is thus defined by M_s = T_q. As a consequence, the austenite state (chemical composition and size) at the peak defines the critical point beyond which austenite is unstable, because M_s > T_q. From the experimental results, it appears that the peak corresponds to carbon and manganese contents in austenite of 0.33 wt.% and 9.3 wt.%, respectively. The decrease of f_RA can mainly be explained by the concomitant decrease of the austenite carbon content and the increase of the austenite size during growth. The final stagnation stage is observed simply because austenite no longer evolves, or only very slightly: the equilibrium fraction and composition are achieved and the effect of the size variation becomes minor. From these considerations, it is obvious that the shape of the f_RA curve and the position of the peak are fair indicators of the factors controlling the austenite stability.
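The sign argument above can be checked numerically with the fitted KJMA parameters. In the sketch below, the M_s(t) ramp crossing T_q at 1 h is a deliberately crude stand-in for the real composition and size evolution; it is used only to show how a constant M_s yields a monotonic f_RA(t) while an M_s rising through T_q yields a peak.

```python
# Numerical sketch of equations (46)-(47) with the fitted k = 0.074, n = 0.38.
# The logarithmic Ms(t) ramp is an assumed illustration, not a fitted law.

import numpy as np

F_EQ, K, N = 0.36, 0.074, 0.38
ALPHA, T_Q = 0.011, 20.0

def f_ra(t_s, ms):
    f_a = F_EQ * (1.0 - np.exp(-K * t_s ** N))                 # equation (46)
    return f_a * np.exp(-ALPHA * np.maximum(ms - T_Q, 0.0))    # equation (47)

t = np.array([180., 600., 1800., 3600., 7200., 36000., 108000.])  # 3 min .. 30 h
ms_const = np.full_like(t, 0.0)                  # Ms below T_q at all times
ms_ramp = 20.0 + 60.0 * np.log10(t / 3600.0)     # crosses T_q at 1 h (assumed)

print("constant Ms:", np.round(f_ra(t, ms_const), 3))   # monotonic increase
print("rising  Ms :", np.round(f_ra(t, ms_ramp), 3))    # peak near 1 h
```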
In order to go further, it is important to clarify the chemical contribution to the M_s temperature. According to the Andrews empirical relation [AND '65], in the case of ternary Fe-C-Mn steels the chemical contribution to M_s can be expressed as:

M_s^0 (°C) = 539 - 423 \, w_C^A - 30.4 \, w_{Mn}^A    (49)

where w_C^A and w_Mn^A are the C and Mn contents of austenite in weight percent. It is important to mention that Andrews collected a very large database of M_s temperatures obtained from fully austenitic structures with quite large grain sizes; the effect of austenite grain size on M_s can therefore reasonably be considered negligible in that database. With the Mn content measured in austenite at the peak (9.3 wt.%), this relation predicts that austenite is stable at T_q = 20°C when its carbon concentration exceeds 0.56 wt.%. In this study, it was clearly shown that only austenite with a carbon content lower than 0.33 wt.% is expected to transform into martensite (see the carbon content at the peak). This strongly suggests that the empirical relation given in equation (49) has a limited validity for the MMS. In addition, it appears that parameters other than chemical composition may play a significant role in the stability of the austenite. As already mentioned in the literature review, it can reasonably be supposed that the small austenite size affects its stability by lowering the M_s temperature. In other words, an extra driving force that depends on the austenite grain size would be added to the change of Gibbs free energy for the austenite to martensite transformation. It was therefore decided to propose a specific formulation for the influence of austenite size, by introducing an additional term that depends on the volume of austenite:

M_s = M_s^0 - K \, V_A^{-q}    (50)

where M_s^0 is determined from equation (49) and represents the chemical contribution, V_A is the volume of the austenite features taken as an equivalent cube (D_A^3 in µm³), and K and q are fitting parameters. As discussed beforehand, at the peak point M_s = T_q; therefore, the parameters K and q must satisfy the following first-order condition:

K = \left(M_s^{0,peak} - T_q\right) \cdot V_A^{\,q}    (51)

In this study, the peak occurs at 1 h, with carbon and manganese contents in austenite of 0.33 wt.% and 9.3 wt.%, respectively, and a mean austenite size of about 240 nm. Thus, M_s^0 was calculated, using equation (49), to be about 117°C. The fitting parameters K and q are interdependent: each value of q is associated with a unique value of K (Figure 109). Many pairs of K and q satisfy condition (51), but only a single set should best fit the time-evolution curve of retained austenite. Using the experimental data of this work, the best agreement was obtained with q = 0.039 and K = 83.3 (Figure 110). Finally, the new M_s law that accounts for the size effect in MMS with ultra-fine microstructure takes the following expression:

M_s^{new} (°C) = 539 - 423 \, w_C^A - 30.4 \, w_{Mn}^A - 83.3 \, V_A^{-0.039}    (52)

where V_A is given in µm³. All the values considered for the calculation of the RA fraction and the obtained results are summarized in Table 7. The prediction of the RA fraction evolution using the new M_s formula is in very good agreement with the experimental results. As seen experimentally, fresh martensite starts to form during cooling for holding times longer than 2 h, which means that the stability of austenite decreases for longer holding times. The proposed approach also correctly predicts the stabilization of the RA fraction at very long holding times.
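The peak condition (51) can be checked numerically with a few lines of Python. The sketch below uses only the values quoted above; owing to the rounding of M_s^0 and of the peak size, the value of K it returns (~82) is slightly below the jointly fitted K = 83.3.

```python
def Ms_chem(wC, wMn):
    """Chemical contribution to M_s, Andrews relation (eq. 49), degC."""
    return 539.0 - 423.0 * wC - 30.4 * wMn

def Ms_new(wC, wMn, D_um, K, q):
    """Size-corrected M_s (eq. 52); V_A is an equivalent cube D_A^3 (um^3)."""
    return Ms_chem(wC, wMn) - K * (D_um**3) ** (-q)

# Austenite state at the peak of the retained austenite curve (1 h at 650 degC)
wC_peak, wMn_peak, D_peak = 0.33, 9.3, 0.240   # wt.%, wt.%, um
T_q, q = 20.0, 0.039

Ms0 = Ms_chem(wC_peak, wMn_peak)               # ~117 degC
K = (Ms0 - T_q) * (D_peak**3) ** q             # first-order condition, eq. (51)
print("M_s^0 = %.0f degC, K(q=%.3f) = %.1f" % (Ms0, q, K))
print("M_s at the peak = %.1f degC (equal to T_q by construction)" %
      Ms_new(wC_peak, wMn_peak, D_peak, K, q))
```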
Figure 110 shows the comparison between the evolutions of the RA fraction calculated using only the chemical contribution to M_s (Andrews M_s) and using both the chemical and size contributions (new M_s with q = 0.039 and K = 83.3). These results confirm unambiguously the size effect on austenite stability, and its influence is quite important. It brings two things: a) a delay of the maximum peak to a longer holding time; b) a global increase of the RA fraction (maximum value and values at longer annealing times). Moreover, the particular peak-like form of the RA fraction time-evolution is the result of the concomitant change of the C content and size of austenite. Finally, using the new M_s law it is possible to estimate the critical volume or size of austenite, for a given chemical composition (in our case C and Mn), necessary for full stabilization at room temperature:

V_A^{critical} = \left( \frac{M_s^0 - 20}{K} \right)^{-1/q}    (53)

For example, austenite with 0.33 wt.% C and 9.3 wt.% Mn will be fully stabilized at room temperature when its size is lower than 280 nm. Such stabilization due to the ultra-fine size is apparently contradictory with the classical nucleation model, in which more grain boundary area should provide more chances for the nucleation of martensite embryos [GHO '94]. This dichotomy was discussed in [FUR '89], and it can reasonably be suspected that a stabilizing mechanism becomes dominant for smaller grain sizes. The results of this work are probably not sufficient to determine the mechanism of austenite stabilization by the size effect; nevertheless, two possible mechanisms are discussed hereafter. As already seen, Yang et al. [YAN '09] suggested that the dependence of M_s on the austenite grain size during the austenite to martensite transformation can be explained quantitatively on the basis of Fisher's original model [FIS '49]. The latter, based on a purely geometrical analysis, claims that the number of plates per unit volume needed to obtain a detectable fraction of martensite increases as the austenite grain volume decreases, because the volume transformed per plate is reduced. The adaptation by Yang et al. [YAN '09] provides the following expression for M_s:

M_s = M_s^0 - \frac{1}{b} \ln\left[ \frac{1}{a V_\gamma} \left( \exp\left(-\frac{\ln(1-f)}{m}\right) - 1 \right) + 1 \right]    (54)

where V_γ is the volume of the austenite grain, f is the martensite fraction (f = 0.01 for the M_s calculation), m is the plate aspect ratio (m = 0.05 from [CHR '79]) and a = 1 mm⁻³, b = 0.2689 and M_s^0 = 363.5°C are three fitting parameters. Using this M_s law, the critical size of austenite with 0.33 wt.% C and 9.3 wt.% Mn for full stabilization at room temperature should be lower than ~100 nm. This value is almost 3 times lower than the one determined previously with the M_s law proposed in this study. One possible reason is that the fitting parameters a and b were determined from grain sizes much larger than those measured in this work (see [YAN '09]). However, by resetting the fitting parameters a, b and m, and using Andrews' M_s^0, it is possible to obtain equivalent results for the retained austenite time evolution. It should be noted that the fitting parameter b and the plate aspect ratio m are not independent: their product is equal to the constant parameter α of the Koistinen-Marburger equation (α being equal to 0.011 here). The values of a and b (with m = 0.05) obtained by fitting are 4 mm⁻³ and 0.22, respectively. The obtained results are compared with the experimental data in Figure 111: the adapted "geometrical model" fits the measured retained austenite evolution well.
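Returning to the M_s law proposed in this work, the small sketch below evaluates the critical size of equation (53) and maps the predicted stability over a few illustrative sizes; all constants are those quoted above.

```python
K, q = 83.3, 0.039
Ms0 = 117.0                                    # chemical M_s for 0.33C-9.3Mn (degC)
T_room = 20.0

V_crit = ((Ms0 - T_room) / K) ** (-1.0 / q)    # critical volume (eq. 53), um^3
D_crit = V_crit ** (1.0 / 3.0)                 # equivalent cube edge, um
print("V_crit = %.4f um^3 -> D_crit = %.0f nm" % (V_crit, 1000 * D_crit))

# M_s (eq. 52) as a function of size for this composition
for D in (0.15, 0.24, 0.28, 0.40):             # austenite size, um
    Ms = Ms0 - K * (D**3) ** (-q)
    state = "stable" if Ms < T_room else "partly transforms"
    print("D = %3.0f nm : M_s = %5.1f degC (%s)" % (1000 * D, Ms, state))
```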
At the same time, Takaki et al. [TAK '04] recently suggested another theory, based on experimental results for an Fe-Cr-Ni steel. They stated that austenite can be significantly stabilized when the martensitic transformation mode changes from multi-variant to single-variant, and it can reasonably be suspected that austenite refinement contributes to this change in transformation mode. The mechanism of grain-refinement-induced austenite stabilization can then be discussed in terms of the increase of the elastic strain energy associated with the austenite to martensite transformation via a single variant. The increase of elastic strain energy due to the development of one single variant of martensite in one single grain of austenite is estimated by the following relation:

\Delta E_V = 1276.1 \left(\frac{x}{D}\right)^{2} + 562.6 \left(\frac{x}{D}\right)    (55)

where x is the thickness of the martensite plate and D is the austenite grain size. The plot of the elastic strain energy ΔE_V as a function of grain size shows the difficulty for a martensite lath to nucleate in austenite whose size is smaller than 1 µm [TAK '04]. The quantity ΔE_V can be seen as an excess driving force required for martensite nucleation; as a result, M_s is expected to depend on grain size in the same manner as ΔE_V does [GUI '07]. This relation is far from the one observed in this work (see equation 52) and suggests that such a mechanism is not involved here. As stated previously, the results of this work do not allow the mechanism of austenite stabilization by the size effect to be selected or defined with certainty, even though the geometrical model shows a good agreement. It can be imagined that both geometrical and elastic energy constraints operate together and prevent the austenite transformation to martensite. More studies are therefore necessary to discriminate between the possible mechanisms of austenite stabilization by the size effect. As a conclusion, austenite stabilization during ART annealing of MMS is provided by two mechanisms: C and Mn enrichment, and the ultra-fine size of austenite (less than 0.5 µm). The Mn content and the ultra-fine austenite size play an important role in its stabilization; however, prime importance for austenite stability is attributed to its C content. Both the C content and the ultra-fine austenite size are the key factors behind the peak-like form of the RA evolution curve. Moreover, the time-evolution of retained austenite is shown to be a fair indicator of the critical factors governing austenite stability in ultra-fine grained MMS. In particular, the peak of the curve defines the critical point delimiting stable and unstable austenite; the corresponding critical size was estimated at around 280 nm.

This chapter presents the experimental characterization of the mechanical behavior of the studied MMS, the analysis and understanding of the obtained results and, finally, a modeling approach based on the microstructure observations discussed in the previous chapter. The organization of the chapter into sections is given just above.

Mechanical properties of annealed samples

The mechanical behavior of the medium Mn steel (0.1C-4.7Mn wt.%) after intercritical annealing was obtained using standard tensile tests. The evolution of the mechanical behavior with the holding time at 650°C was the main focus of the study. Engineering and true stress-strain curves are presented in Figure 112.
Engineering and true stress-strain curves were each separated into two graphs, one for annealing times up to 3 h of holding and another for longer holding times; this was done for easier analysis of the curves. Figure 112 shows that holding time has an important impact on the mechanical behavior: a variety of curves with different strength-elongation combinations and work hardening rates were obtained. Mechanical characteristics such as yield strength (YS), yield point elongation (YPE), ultimate tensile strength (UTS), and uniform (Uel) and total (TE) elongations are summarized in Table 8. Their evolution with holding time was analyzed and is presented in Figure 113. The obtained mechanical properties are quite attractive: a good combination of strength and ductility was produced over a rather large range of holding times. A higher YS (by more than 100 MPa) with the same or slightly better elongation was obtained in comparison with the mechanical properties of a classical TRIP800. There is a clear optimum in terms of strength-elongation balance for 2 h of holding at 650°C. Increasing the holding time up to 2 h resulted in a small increase of UTS (+60 MPa), while YS decreased by about 150 MPa and elongation improved (+10% of Uel). Further increase of the holding time induced a decrease of ductility (elongation), a slight increase of UTS and a pronounced decrease of YS. Such an evolution of the mechanical properties correlates well with the microstructure evolution observed in Chapter 3. During the first hours of holding, the austenite fraction increases and the carbides are continuously dissolved. C and Mn partitioning into the austenite and its ultra-fine size during this stage are the reasons for the improved austenite stability. The retained austenite fraction of the 2 h sample is close to the maximum observed for the 1 h sample. A higher RA fraction results in a more pronounced TRIP effect, which finally gives a better elongation. Longer holding times decrease the stability of the RA, hence fresh martensite is observed in the microstructure, which contributes to the decrease of elongation. The slight increase of UTS up to 3 h of holding correlates with the increase of the global austenite fraction, meaning a higher fraction of martensite, or of martensite induced during mechanical loading. The observed continuous decrease of YS is thought to be related to the ongoing recovery and recrystallization of the initial martensite structure, which lowers the resistance of the matrix phase (ferrite) through a lower density of defects. The low YS can also be attributed to the high RA fraction, given the low YS of the obtained austenite, as will be shown further in Chapter 4.4. The strain hardening rate of the samples was also studied; its evolution with strain and stress is presented in Figure 114. All samples exhibit three domains of work hardening (WH) rate evolution: 1) a rapid decrease of WH; 2) a re-enhancement and stabilization of WH; 3) a final drop of WH. These three stages are clearly shown in Figure 115. Such three-stage behavior was reported to be related to the strain-induced transformation of austenite into martensite (TRIP effect) in MMS [SHI '10]. All the samples had a retained austenite fraction higher than 20% and thus benefited from the TRIP effect. However, the rate of strain-induced transformation was not the same, and hence the strain hardening behavior also differed from sample to sample.
The re-enhancement of WH was moderate for holding times lower than 2 h, especially for the 3 min holding, while the final drop of WH appeared more as a stagnation or a very slight decrease (particularly true for the 1 h and 2 h samples). Taking into account that the RA fraction was rather high and that the stability of RA was increasing, one can suppose that such behavior results from a continuous strain-induced transformation of austenite in small increments. It is even conceivable that some RA remains untransformed at the end of the tensile test. On the other hand, for the samples with longer annealing times (>2 h) the re-enhancement of WH was more pronounced, but the subsequent decrease of WH was also amplified, especially for the 30 h sample. Such behavior is thought to result from the lower stability of the RA: the strain-induced transformation was more rapid and the RA was consumed quickly. These aspects of RA stability will be further discussed in Chapter 4.6 in relation with the obtained experimental data. Some simple correlations between the mechanical characteristics and the fractions of the constituents were analyzed. The relations between YS_L, maximum true stress, Uel and the fractions of RA and FM+RA were supposed to be the most interesting, and they are presented in Figure 116. It can be seen that the influence of the RA fraction on YS_L and on the maximum true stress is very limited or nonexistent, whereas it has an impact on Uel, although it cannot explain the complete evolution. On the other hand, the FM+RA fraction shows no correlation with Uel but seems to be an important parameter for the strength of these steels. However, the relation between YS_L and the FM+RA fraction appears counterintuitive in this case. FM+RA represents the hard constituents in the steel, so one would expect YS to increase with the FM+RA fraction, but the opposite tendency was observed. In the particular case where YPE exists, a decrease of YS with an increase of the hard phase fraction is possible when the YPE is suppressed; but in this work YPE was found for all the samples. Hence, it was supposed that this correlation has an indirect link through the holding time at 650°C, which represents an annealing (tempering) of the initial martensite: a longer holding time results in a higher FM+RA fraction, but the degree of tempering of the initial martensite is also higher, which decreases YS. Finally, it can be concluded that, although there are some correlations between mechanical properties and microstructure, they are quite complex and their understanding is incomplete. Hence, to explain and model the global tensile behavior of the studied steel, a more advanced comprehension of the behavior of each constituent and of their interactions must be acquired. The following subsections present the approach used for modeling the mechanical behavior and the experimental trials performed to obtain the behavior law of each constituent, when possible. At the end of the chapter, the complete model is presented and discussed.

Description of the Iso-W approach

Based on the microstructure analysis, and in order to simplify the problem, it was decided to consider a microstructure consisting of three phases: ferrite, retained austenite and fresh martensite. The influence of the carbides, precipitated in the ferrite and then dissolved during annealing, was neglected. This is a strong assumption which needs to be confirmed by a complementary detailed study.
This point is discussed further in the section concerning the mechanical behavior of ferrite. To predict the behavior law of the three-phase mixture without any supplementary fitting parameter, it was decided to use the so-called "Iso-W" approach proposed by Bouaziz and Buessler [BOU '99], already presented in Chapter 1. It assumes that, in a multiphase microstructure, the increment of mechanical work is equal in each constituent throughout the mechanical loading. In terms of equations, this means that:

\sigma_F \, d\varepsilon_F = \sigma_A \, d\varepsilon_A = \sigma_{FM} \, d\varepsilon_{FM}    (56)

where σ_F, σ_A, σ_FM and ε_F, ε_A, ε_FM are the stress and strain of ferrite, retained austenite and fresh martensite, respectively. To calculate the strain increment of each phase, the following mixture law was used:

d\varepsilon = f_F \, d\varepsilon_F + f_A \, d\varepsilon_A + f_{FM} \, d\varepsilon_{FM}    (57)

where ε is the macroscopic strain of the material and f_F, f_A and f_FM are, respectively, the fractions of ferrite, retained austenite and fresh martensite. As can be seen from equations (56) and (57), predicting the global mechanical behavior of such a mixture of phases requires at least the two following inputs: 1) the mechanical behavior of each constituent: annealed and fresh martensite with medium Mn content, and retained austenite with medium C and Mn content; 2) a description of the initial microstructure: at least the fractions of all constituents. It is also necessary to take into account the dynamic evolution of the phase fractions under loading, e.g. the induced transformation of retained austenite into martensite (TRIP effect), which introduces additional strain hardening. Hereafter are described the considerations and investigations performed for each phase.
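As an illustration of how equations (56) and (57) can be used in practice, the Python sketch below partitions each macroscopic strain increment among the phases so that all of them undergo the same work increment. The three flow laws are simple placeholders (the actual constituent laws are established in the following sections) and the phase fractions are illustrative.

```python
import numpy as np

# Placeholder flow laws sigma_i(eps_i) for ferrite, austenite, fresh martensite
laws = {
    "F":  lambda e: 800.0 + 0.0 * e,                         # flat law (see section 4.5)
    "A":  lambda e: 340.0 + 725.0 * (1.0 - np.exp(-4.0 * e)),
    "FM": lambda e: 1129.0 + 1500.0 * np.sqrt(e + 1e-6),
}
fractions = {"F": 0.64, "A": 0.30, "FM": 0.06}   # illustrative phase fractions

eps = {k: 0.0 for k in laws}    # current strain in each phase
d_eps = 1e-3                    # macroscopic strain increment

for _ in range(200):
    sig = {k: laws[k](eps[k]) for k in laws}
    # Iso-W: sigma_i * d_eps_i = dW for every phase i; with eq. (57) this gives
    # dW = d_eps / sum(f_i / sigma_i), and then d_eps_i = dW / sigma_i.
    dW = d_eps / sum(fractions[k] / sig[k] for k in laws)
    for k in laws:
        eps[k] += dW / sig[k]   # softer phases accumulate more strain

macro_eps = sum(fractions[k] * eps[k] for k in laws)
macro_sig = sum(fractions[k] * laws[k](eps[k]) for k in laws)
print("macro strain %.3f -> macro stress %.0f MPa" % (macro_eps, macro_sig))
```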
Mechanical behavior of as-quenched martensite and its simulation

4.3.1 Mechanical behavior of as-quenched medium Mn martensite

Generally, as discussed in the literature review, the mechanical behavior of as-quenched martensite is considered to depend mostly on its C content. It was thus decided to obtain experimental stress-strain curves of the as-quenched martensite of the studied steel and to model them with the approach proposed by Allain et al. [ALL '12] (already described in the literature review). Two samples were used, taken after the first complete austenitization at 750°C for 30 minutes followed by a water quench, as presented in Chapter 3. Their microstructure was already investigated in Chapter 3.1; Figure 117 presents the main results of this investigation. Microstructure analysis after Dino etching confirmed the absence of ferrite, and the X-ray spectra did not reveal any austenite. It was hence concluded that the structure consists mainly of martensite with only a negligible amount of retained austenite, observed by TEM. Tensile tests were then performed; the obtained tensile curves and mechanical properties are presented in Figure 118 and Table 9, respectively. Figure 118 and Table 9 also illustrate the reproducibility of the tensile curves between the two samples. The observed strength level and strain hardening rate are quite high for a quenched 0.1C steel, whose typical UTS is around 1200 MPa. From the obtained data, it had to be concluded that the mechanical behavior of the obtained martensite depends not only on its C content but also on its Mn content. Moreover, it was not possible to predict the stress-strain curve of such a martensite (medium Mn steel) using the CCA model proposed by Allain et al. [ALL '12]. Hence, it was decided to investigate further the effect of Mn on the strength and strain hardening of martensite.

4.3.2 Influence of Mn content on the strength and strain hardening of martensite

A database of selected C-Mn martensitic steels from previously published studies and from some new experimental trials was built. The chemical compositions of these steels are shown in Table 10 ("-" means less than 0.01 wt.% for Si and Cr and less than 0.001 wt.% for Ti). Figure 119 presents the results of the experimental tensile tests performed in the present study along with the results collected from previous works [PUS '09], [ZHU '10], [ALL '12]. The true stress evolution as a function of true strain is shown in Figure 119(a), while Figure 119(b) shows the related strain hardening rate evolution as a function of true stress (the so-called Kocks-Mecking plot [KOC '03], [MEC '81]). For all the studied martensitic steels, some general trends can be stated:

• the conventional yield stress appears to be a function of the martensite carbon and manganese contents;
• a high work hardening rate is observed, which increases up to the necking strain in accordance with the carbon and manganese contents.

From Figure 119(a) it can be seen that the true stress-true strain curve of 0.1C-5Mn is almost the same as that of 0.15C, and that the curve of 0.15C-5Mn lies between those of 0.22C and 0.3C. The same conclusion can be drawn from Figure 119(b), which shows the strain hardening evolution. Solid solution hardening by Mn cannot explain this difference in behavior of the martensitic steels with high Mn content: modifying the solid solution hardening only shifts the curves to higher stress levels, whereas in the case of medium Mn steels a clear change in the strain hardening of martensite is found. According to F.B. Pickering and T. Gladman [PIC '63], the solid solution hardening of Mn can be evaluated with the relation S_Mn · w_Mn (wt.%), where S_Mn = 32: each percent of Mn increases the strength by 32 MPa, so for 4.6 wt.% Mn the strength should increase by about 150 MPa. Figure 119(c) shows the experimental curves of 0.15C and 0.15C-5Mn, together with the 0.15C curve shifted up by 150 MPa. This figure proves that such a shift is not sufficient to match 0.15C-5Mn; about 125 MPa are still missing. The strain hardening rate, depicted in Figure 119(b), is not the same either. Thus, simple additive solid solution hardening cannot explain the behavior of martensite with a medium Mn content. It can also be observed from Figures 119(a) and (b) that the influence of Mn on the stress and on the strain hardening is rather limited in the case of a small carbon content (0.01C-3Mn). All these facts suggest that the Mn content influences the martensite strength and strain hardening in synergy with the carbon content of the martensite. Based on the CCA approach [ALL '12], a simplified behavior law for martensitic steels was proposed. In order to describe the stress-strain curve of martensite as a large elasto-plastic transition, the strain hardening is expressed as the product of the Young modulus (Y) by the fraction of elastic zones (1-F):

\frac{d\sigma}{d\varepsilon} = (1 - F) \cdot Y    (58)

where σ and ε are respectively the macroscopic stress and strain of the material.
The plasticized zone fraction F is chosen as a logistic-type law:

F = 1 - \exp\left[ -\left( \frac{\sigma - \sigma_{min}}{\sigma_0} \right)^{p} \right]    (59)

where σ_min is the minimum stress necessary for plasticization to start, and p and σ_0 are parameters that control the shape of the F(σ) curve (Figure 120(a)). In the first stage of the tensile test, the macroscopic stress is lower than the elastic threshold (σ < σ_min), hence F = 0 and the material exhibits a completely elastic behavior. When the applied stress σ becomes larger than σ_min, F starts to increase, meaning that the fraction of plasticized zones grows. Model adjustment on the experimental and literature data showed that σ_min and p can be taken as constants for all the considered steels, with the following optimal values: σ_min = 450 MPa and p = 2.5. Thus, only one variable parameter, σ_0, was used to obtain the best fit between model and experiments. It was found that both C and Mn have an important influence on σ_0, and a linear dependence between σ_0 and C_eq was established:

\sigma_0 = 130 + 1997 \cdot C_{eq}    (60)

where C_eq is a parameter that accounts for the concomitant influence of C and Mn. It was proposed to take this C-Mn synergy into account in the following way:

C_{eq} = w_C \left( 1 + \frac{w_{Mn}}{K_{Mn}} \right)    (61)

where w_C and w_Mn represent the C and Mn contents of martensite (wt.%), and K_Mn is the coefficient of the Mn influence. From the collected experimental data (Table 10 and Figure 119), the optimum value was found to be K_Mn = 3.5. The evolution of σ_0 with C_eq is shown in Figure 120. The final results of the model are presented and compared to the experimental data in Figure 121. The predicted stress-strain curves were separated into two graphs (Figure 121(a) and (b)) to give a better view of the medium Mn steel curves. As can be seen, this simple model accurately predicts the whole stress-strain curves of different martensitic steels with varied C and Mn contents. Some mismatches between the simulated and experimental curves can be noticed; however, the maximum difference in terms of stress is less than 60 MPa, which represents less than 5% of the maximum flow stress. Figures 121(c) and (d) present the evolution of the strain hardening rate as a function of stress and as a function of strain, respectively. Taking into account that modeling a derivative is more demanding than modeling the function itself, the proposed model gives very satisfactory results for the strain hardening rate evolution. These figures also show that the model correctly captures the synergistic influence of C and Mn on the strain hardening rate. Even though the results are satisfactory, the proposed model is clearly a simplified version of the CCA published previously [ALL '12]; the global description of the stress-strain curves is therefore less precise, especially in the elasto-plastic transition. This can be clearly seen on the stress-strain curves of the 0.22C, 0.15C and, in particular, 0.01C-3Mn steels. The discrepancy of the model is also related to the fact that only one fitting parameter was considered. This was done deliberately in order to simplify the understanding of the observed C-Mn synergy. The model response can easily be improved by relaxing the constraint on σ_min and making it variable.
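A minimal numerical sketch of this simplified law (eqs. 58-61) is given below: the stress-strain curve is obtained by explicit integration of dσ/dε = (1-F)·Y. The constants are those quoted above; the value of the Young modulus is an assumption of the sketch.

```python
import numpy as np

Y = 210_000.0             # assumed Young modulus, MPa
sigma_min, p = 450.0, 2.5
K_Mn = 3.5

def sigma_0(wC, wMn):
    """Eqs. (60)-(61): C-Mn synergy through the equivalent carbon content."""
    C_eq = wC * (1.0 + wMn / K_Mn)
    return 130.0 + 1997.0 * C_eq

def stress_strain(wC, wMn, eps_max=0.04, d_eps=1e-5):
    """Integrate d_sigma/d_eps = (1 - F(sigma)) * Y (eqs. 58-59)."""
    s0 = sigma_0(wC, wMn)
    eps = np.arange(0.0, eps_max, d_eps)
    sig = np.zeros_like(eps)
    for i in range(1, len(eps)):
        F = 0.0
        if sig[i - 1] > sigma_min:                     # plasticized fraction, eq. (59)
            F = 1.0 - np.exp(-(((sig[i - 1] - sigma_min) / s0) ** p))
        sig[i] = sig[i - 1] + (1.0 - F) * Y * d_eps    # eq. (58)
    return eps, sig

e, s = stress_strain(wC=0.1, wMn=4.7)   # the studied medium Mn martensite
print("flow stress at 2%% strain: %.0f MPa" % s[np.searchsorted(e, 0.02)])
```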
Nevertheless, more data and studies are needed to obtain a good correlation between σ_min and some metallurgical or microstructural parameters. Future work is also necessary to understand the physical mechanism of this C-Mn synergy and its relation with the microstructure. The outputs of such further investigations will probably be very helpful for improving the model. The work on the effect of Mn on the strength and strain hardening of C-Mn martensite was published in the ISIJ International journal under the reference [ARL '13].

Mechanical behavior of austenite with medium C and Mn contents

From the microstructure investigations in Chapter 3, it was concluded that the retained austenite contains about 10 wt.% Mn and a maximum of 0.4 wt.% C. It was thus decided to cast such a composition in order to produce a fully austenitic steel and to evaluate its mechanical behavior. However, some technical problems occurred during melting, and the obtained steel contained a slightly higher C content (0.5 wt.%) as well as a significant quantity of boron. The work was nevertheless continued with this steel, as it was considered that the increased C content would not drastically change the nature of the austenite or its deformation mechanism, this composition lying in the same domain as the targeted one according to Schumann's diagram (Figure 35). The obtained 2 kg ingot had the composition given in Table 11. The ingot was then reheated to 1200°C and hot rolled with a finishing temperature around 900°C. Coiling was simulated by a slow cooling in the furnace from 550°C. The microhardness of the hot rolled band was evaluated to be around 560 HV.

Table 11 - Chemical composition of the steel elaborated for the investigation of the austenite mechanical behavior (10⁻³ wt.%).

The hot rolled microstructure was analyzed after Nital etching using optical microscopy and SEM; the obtained microstructures are shown in Figure 122. From these observations, it appears that the brown or dark areas in the optical image are pearlite islands, which are clearly seen in the SEM images. The orange areas in the optical image, dark grey in the SEM images, are supposed to be austenite. The needle-like features are supposed to be epsilon martensite, and the dense areas around the pearlite islands are presumed to be alpha prime martensite. X-ray diffraction revealed the presence of three phases: austenite, ε-martensite and α'-martensite. However, a deeper microstructure analysis of these phases (e.g. by TEM) was not done, so their morphological attribution in the optical and SEM images remains hypothetical. Cold rolling of this steel was not possible due to the high hardness and brittleness of the hot rolled sheet, so further trials aiming to produce a fully austenitic structure were continued on the hot rolled metal. The dilatometer trial presented in Figure 123(a) was performed: heating at ~5°C/s to 1000°C, holding for 10 s, then cooling at ~5°C/s. The obtained dilation curve is shown in Figure 123(b). Based on the equilibrium thermodynamic data, the structure is fully austenitic at 1000°C, and according to the dilation curve (Figure 123(b)) there is no phase transformation during slow cooling. It was therefore supposed that if the sample is brought into the austenitic domain and then quenched, it will be entirely or mostly austenitic, with perhaps some martensite (α' and/or ε). Hence, a new annealing cycle was applied to a hot rolled sample: heating at ~6°C/s to 800°C, soaking for 5 min and water quenching.
The hardness of the annealed sample was evaluated to be around 270 HV. At the same time, a low fraction of α'-martensite (less than 7%) was determined from the X-ray and magnetic measurements. It was therefore supposed that the microstructure was mostly austenitic. Finally, tensile tests were performed on the hot rolled and the annealed (800°C, 5 min) samples. The obtained stress-strain curves and mechanical properties are presented in Figure 124 and Table 12, respectively. The obtained mechanical properties are rather similar, meaning that the effect of the thermal treatment on the properties of this steel is limited. In both cases YS_0.2 is about 340 MPa, UTS is close to 600 MPa, and a rather poor elongation (less than 5%) is observed. It is known that the YS is controlled by the softest phase. Hence, since in both microstructures the soft phase is austenite, it is assumed that the obtained YS corresponds to the YS of austenite. On the other hand, the very poor ductility is thought to be due to the very hard, initially present or transformation-induced α'-martensite containing high levels of C and Mn. Mn segregations may also contribute to the low elongation values; however, this topic was not studied. Finally, considering the observed low YS and the relatively high C and Mn contents (0.4 wt.% C and at least 9 wt.% Mn) of the retained austenite in the final microstructure, it appeared appropriate to use the law proposed by Bouaziz et al. [BOU '11] for the description of the retained austenite mechanical behavior:

\sigma_A(\varepsilon_A) = \sigma_{0A} + K \cdot \frac{1 - \exp\left(-f \, \varepsilon_A\right)}{f}    (62)

where σ_A and ε_A are respectively the stress and strain of the retained austenite, K = 2900 and f = 4 are fitting parameters determined by Bouaziz et al. [BOU '11], and σ_0A is the yield stress of austenite, which increases with the C content and decreases with the Mn content as follows:

\sigma_{0A} = 228 + 187 \, w_C - 2 \, w_{Mn}    (63)

where w_C and w_Mn are, respectively, the C and Mn contents of austenite in weight percent. The comparison between the stress-strain curve given by this model and the experimental one is shown in Figure 125. The simulation clearly differs from the experimental points: the strain hardening rate of the experimental curve is higher than that of the model. Of course, the difference could be decreased by changing the fitting parameters, but a deeper analysis of the global data (tensile curves and microstructure characterization) led to the following conclusion: the initially present or rapidly transformation-induced α'-martensite, with its high C and Mn levels, contributes significantly to the strain hardening of the experimental material. Hence, the experimentally obtained curve for the sample annealed at 800°C for 5 min is not representative of the behavior of a fully austenitic structure, but of an austenite-martensite mixture. Taking these considerations into account, it was judged that the proposed model is appropriate for describing the mechanical behavior of austenite with medium C and Mn contents.
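The sketch below evaluates this austenite law for the composition measured in this work; note that the saturating form of equation (62) is used as reconstructed above.

```python
import numpy as np

K_a, f_a = 2900.0, 4.0   # fitting parameters from Bouaziz et al. [BOU'11]

def sigma_austenite(eps_A, wC, wMn):
    """Eq. (62) with the composition-dependent yield stress of eq. (63)."""
    sigma_0A = 228.0 + 187.0 * wC - 2.0 * wMn
    return sigma_0A + K_a * (1.0 - np.exp(-f_a * eps_A)) / f_a

# Retained austenite composition measured in this work (~0.33C, ~9.3Mn wt.%)
for eps in (0.0, 0.05, 0.10, 0.20):
    print("eps = %.2f -> sigma_A = %.0f MPa" % (eps, sigma_austenite(eps, 0.33, 9.3)))
```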
Mechanical behavior of ferrite with medium Mn content

As presented previously in Chapters 1 and 3, ferrite (annealed martensite) is the phase obtained from the high temperature annealing of fresh martensite. One can say that this is simply ferrite, and from a crystallographic point of view this is true, but the morphology and mechanical behavior of this ferrite (annealed martensite) are different. This was clearly shown in the works of Sugimoto [SUG '02]. In Chapter 3 of the present work, it was also demonstrated that this ferrite contains an important quantity of dislocations, which should have a non-negligible impact on the mechanical behavior. Consequently, in this work it is considered that this ferrite (annealed martensite) has properties similar to an ultra-fine lath-like ferrite containing dislocations and/or recovery cells. The question was then: how to produce a steel with a 100% lath-like ferrite structure? This question is far from trivial, and no way to produce a 100% lath-like ferrite structure was found. However, it was possible to produce a microstructure with lath-like ferrite (more than 98%) and some carbides; it was therefore decided to study such a structure and to work with this approximation. The studied 0.1C-4.7Mn (wt.%) steel was heated at ~6°C/s to 500°C, held for different times (3 min, 1 h and 30 h) and finally water quenched. Such a low temperature was chosen intentionally in order to avoid any austenite formation. However, even at this low temperature, some austenite nuclei formed after very long annealing, as revealed by the microstructure analysis presented below. The microstructure was examined by SEM after metabisulfite etching; the images are presented in Figure 126. The revealed microstructure clearly shows the ferrite in grey with some carbides (white spots); prior austenite boundaries can also easily be observed. In the sample held for 30 h, some very small austenite nuclei were observed (black spots in the bottom left corner of Figure 126(b)), but their fraction was quite limited and they can thus be neglected. The low-temperature annealed samples were then subjected to tensile tests. The resulting stress-strain curves and mechanical properties are presented in Figure 127 and Table 13, respectively; data for one sample with a fully fresh martensite structure are also given for comparison. The obtained tensile curves are quite particular: the yield strength is rather high (more than 750 MPa) and almost no strain hardening is observed (flat curves). Such mechanical behavior corresponds to the revealed microstructure: ultra-fine lath-like recovered ferrite and some fine carbides. In the work of Bouaziz [BOU '09] it was already shown that, under mechanical loading, ultra-fine ferrite is characterized by a very high strength (for example ~1000 MPa for a 0.3 µm size) and an absence of strain hardening. On this basis, it was concluded that the obtained tensile curves and microstructures are in good agreement. In addition, it was noticed that with increasing holding temperature or time the size of the annealed martensite increases, thus decreasing the strength (3 min: ~1000 MPa; 30 h: ~800 MPa). Considering that such a ferrite shows no strain hardening and that its morphology is lath-like, it was proposed to model its behavior with an elastic-perfectly-plastic law in which the stress level depends only on solid solution hardening and on the mean free path.
For that purpose, the law proposed by Bouaziz and Buessler [BOU '02] and inspired from Orowan's theory [ORO '54] was used:

\sigma_F(\varepsilon_F) = \sigma_{0F} + \frac{M \, \mu \, b}{\lambda}    (64)

where M is the Taylor factor (taken equal to 3), μ is the shear modulus (80000 MPa), b is the magnitude of the Burgers vector (0.25 nm), λ is the mean free path (lath size) and σ_0F is the internal friction stress, which accounts for solid solution hardening in the following manner:

\sigma_{0F} = 60 + 32 \, w_{Mn} + 750 \, w_P    (65)

where w_Mn and w_P are the Mn and P contents of ferrite, taken in weight percent. The parameter λ (mean free path) is a fitting parameter and can represent the influence of both the ferrite lath size and the carbide interspacing; the influence of carbides can therefore be partially included in this fitting parameter λ.
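A minimal sketch of this elastic-perfectly-plastic ferrite law is given below; the lath sizes swept over are illustrative assumptions, while the other constants are those of equations (64)-(65).

```python
M_taylor = 3.0      # Taylor factor
mu = 80_000.0       # shear modulus, MPa
b = 0.25e-3         # Burgers vector magnitude in um (0.25 nm)

def sigma_ferrite(wMn, wP, lam_um):
    """Flow stress of the lath-like ferrite, eqs. (64)-(65)."""
    sigma_0F = 60.0 + 32.0 * wMn + 750.0 * wP
    return sigma_0F + M_taylor * mu * b / lam_um

# Ferrite with ~2.3 wt.% Mn (near-equilibrium content) and negligible P
for lam in (0.08, 0.15, 0.30):   # assumed mean free path (lath size), um
    print("lambda = %.2f um -> sigma_F = %.0f MPa" % (lam, sigma_ferrite(2.3, 0.0, lam)))
```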
Retained austenite strain-induced transformation (TRIP effect)

As discussed in Chapter 1, retained austenite transforms to martensite under mechanical loading. Such a transformation considerably increases the strain hardening rate of the material, thus delaying necking and increasing elongation. It is therefore very important to know the factors that control this transformation and to predict the induced martensite fraction properly. With this aim, the evolution of the RA fraction with increasing deformation was measured via interrupted tensile tests for two treatments, the two extreme cases of 1 h and 30 h holding at 650°C. Magnetic saturation measurements were performed in the deformed zone to estimate the retained austenite fraction. The increase of induced martensite and the decrease of the RA fraction with strain are presented in Figure 128. Looking only at Figure 128(a), one could say that the induced martensite transformation is the same in both cases, but this is not really true: Figure 128(b) makes an important difference evident. At the beginning (up to a strain of ~0.08) the slope of the RA decrease seems to be the same in both cases, but afterwards a different behavior is observed. The slope of the 1 h sample diminishes slightly, so less austenite is transformed, whereas in the 30 h sample the transformation continues at the same rate. Finally, in the 1 h sample ~8% of RA is still available at the end of the tensile test, while in the 30 h sample all the austenite has been consumed. This means that, in the 1 h sample, a portion of the RA was stable enough to avoid transformation even at rather high strain levels. These observations are also in good agreement with the strain hardening rate curves presented in Figure 114: the increase of the strain hardening rate of the 1 h sample is less pronounced, but a certain level is maintained up to rather high strains, which corresponds to a high stability of RA and a continuous induced transformation. On the other hand, for the 30 h sample the increase of strain hardening is important, but its drop is also very quick; this corresponds to a more rapid induced transformation and indicates a lower stability of the RA. Different models describing the TRIP effect were presented in Chapter 1. The model proposed by Perlade et al. [PER '03] was tested, but rejected (in its current form) due to its high sensitivity to the austenite size evolution. For simplicity, combined with a good control of the transformation rate, the approach proposed by Olson and Cohen [OLS '75] was selected. This approach is based on the simple Kolmogorov-Johnson-Mehl-Avrami law [KOL '37], [JOH '39], [AVR '39], although the controlling and fitting parameters may differ from author to author. In our case, a special form of this phenomenological exponential law was proposed to simulate the evolution of the RA fraction with increasing strain:

f_{ind}^{M} = f_{ini}^{RA} \cdot \left[ 1 - \exp\left( -\left( \frac{\varepsilon_A}{\varepsilon_0} \right)^{n} \right) \right]    (66)

where f_ini^RA and f_ind^M are the fractions of initial retained austenite and induced martensite, respectively, ε_A is the strain of the retained austenite and ε_0 and n are fitting parameters. Using the experimental data from Figure 128, ε_0 and n were obtained for the two samples with different holding times at 650°C: 1 h (ε_0 = 0.12; n = 1) and 30 h (ε_0 = 0.07; n = 2). The comparison of the simulated and experimental evolutions of the induced martensite transformation is shown in Figure 129. As can be seen, the proposed model accounts well for the differences in RA stability discussed above. As a perspective for this model, the parameters that govern the induced transformation rate (ε_0 and n) could be related to physical parameters controlling the stability of retained austenite (C and Mn composition, grain size and/or others). Before describing the obtained results with the global model, some additional considerations should be discussed. During strain-induced transformation, the fraction of RA decreases and new induced martensite appears. Hence, the effect of this transformation on strain hardening, and also the difference between the flow stresses of the two phases (retained austenite and induced martensite), should be taken into account. As proposed by Embury and Bouaziz [EMB '10], the strain hardening law of such a multiphase mixture can be written in the following manner:

\frac{d\sigma}{d\varepsilon} = f_F \frac{d\sigma_F}{d\varepsilon} + f_A \frac{d\sigma_A}{d\varepsilon} + f_{FM} \frac{d\sigma_{FM}}{d\varepsilon} + \left(\sigma_M - \sigma_A\right) \frac{df_A}{d\varepsilon}    (67)

where f_F, f_A, f_FM are the fractions of ferrite, retained austenite and fresh martensite, respectively, σ_M is the flow stress of the induced martensite, and σ and ε are respectively the macroscopic stress and strain. The (σ_M - σ_A)·df_A/dε term in equation (67) demonstrates explicitly the contribution of the strain-induced transformation to the work hardening, through the difference between the flow stresses of retained austenite and induced martensite. As the measured C and Mn contents of the retained austenite were rather high (~0.3 wt.% C and at least 9 wt.% Mn), the flow stress of the induced martensite is also high; it was therefore decided to take a constant value σ_M = 2500 MPa. This value roughly represents the maximum flow stress that can be measured for a fully martensitic steel with a high C content (~0.5 wt.%); Krauss [KRA '99] reported, for example, such a value for a 0.5C (wt.%) steel tempered at 150°C. For such a high C content, a brittle fracture of martensite is expected without tempering, and for even higher C contents a large amount of retained austenite is present after quenching, so that it seems impossible to evaluate the flow stress of such martensite properly.
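The transformation law (66) can be evaluated directly with the two fitted parameter sets; in the sketch below, the initial RA fractions are illustrative assumptions introduced only to show the contrast in stability between the two treatments.

```python
import numpy as np

def f_induced(eps_A, f_RA_ini, eps_0, n):
    """Induced martensite fraction versus austenite strain, eq. (66)."""
    return f_RA_ini * (1.0 - np.exp(-((eps_A / eps_0) ** n)))

params = {"1h": (0.12, 1.0), "30h": (0.07, 2.0)}   # fitted (eps_0, n)
f_ini = {"1h": 0.30, "30h": 0.20}                  # assumed initial RA fractions

for label, (e0, n) in params.items():
    for eps in (0.05, 0.10, 0.20):
        remaining = f_ini[label] - f_induced(eps, f_ini[label], e0, n)
        print("%3s sample, eps = %.2f : remaining RA = %.3f" % (label, eps, remaining))
```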
Global Iso-W model

Once the behavior of each constituent had been determined and simulated, it became possible to predict the stress-strain curves of the samples with different holding times at 650°C and various multiphase microstructures. All the parameters used in the model for the description of each phase behavior are presented in Annex 4.1. As determined experimentally in Chapter 3, fresh martensite appears only in the samples with holding times longer than 2 h. The values of the C content of retained austenite and martensite were taken from the calculations performed in section 3.3.2 (Table 7). The C_eq calculated for martensite was close to 1 wt.%; however, as discussed above, the mechanical behavior of such a martensite cannot be determined. Thus, whenever C_eq exceeded 0.5, it was capped at the value of 0.5, which gives σ_0FM = 1129 MPa. The true stress-true strain curves (only plastic strain is considered) simulated with the model and obtained experimentally are compared in Figure 130. This figure is divided into four pairs to ease the judgment of the model performance (avoiding intermixing of the curves): (a) 3 min and 1 h samples; (b) 10 min and 3 h samples; (c) 30 min and 7 h samples; (d) 2 h and 30 h samples. The curves of the 10 h and 20 h samples are not represented, as their behavior is very close to that of the 7 h and 30 h samples, respectively. As can be seen, the model provides quite reasonable results: an accurate prediction of the whole stress-strain curves is obtained for the samples with different holding times at 650°C. This also means that a good description of both two-phase and three-phase microstructures was achieved, since the samples with holding times below 2 h contained only ferrite and retained austenite (the carbides being neglected), while the samples with longer holding times contained fresh martensite in addition. However, the beginning of certain curves (1 h and 2 h samples) is not well predicted. This is presumed to be linked to the pronounced YPE, which is not accounted for at all in the model. The description of the Lüders plateau is rather complex and requires a specific study, which was beyond the scope of the present work. The strain hardening of the different samples is correctly described, and the variation of the different mechanical parameters is well predicted. Table 14 shows the comparison between the experimental and modeled mechanical properties. The differences between the experimental and modeled values of UTS and Uel (ΔUTS and ΔUel) are rather limited, with values comparable to the experimental errors. On the other hand, the prediction of YS appeared more difficult. For the samples with a high volume fraction of RA (30 min, 1 h and 2 h holding), ΔYS is close to 100 MPa, which is rather high. Such a large error is attributed to three factors: 1. the presence of YPE; 2. the difficulty of assessing the individual YS values of each constituent; 3. the lack of information about the austenite mechanical stability at low strain levels. A more detailed study would be necessary to establish the precise reasons for these deviations. Although the absolute values of YS are not well predicted, the evolution of YS with holding time is well described. Globally, the performance of this model, with all the assumptions made and only 3 fitting parameters (λ, ε_0 and n), was considered very satisfactory: a good description of the mechanical properties and of the evolution of the tensile behavior with the holding time at 650°C is achieved. A sensitivity analysis of the model was also performed; its detailed description and results are presented in Annex 4.2. Figure 131 shows the effect of the most sensitive parameters on the true stress-true strain curve. Globally, the model properly accounts for the different influences of the input parameters and shows a certain sensitivity to all of them. Consequently, a detailed microstructural analysis of such complex microstructures is needed for a good performance of the model. As highlighted in Figure 131, the most sensitive parameters are the fractions of the constituents (especially fresh martensite) and the size of the ferrite. In contrast, the impact of the retained austenite stability, represented by ε_0 and n, appears to be of second order.
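To illustrate what such a sensitivity analysis looks like in practice, the sketch below perturbs the fresh martensite fraction and the ferrite mean free path in the Iso-W skeleton shown earlier and reports the resulting change in macroscopic stress. The phase laws and baseline values are illustrative placeholders, not the values of Annex 4.2.

```python
import numpy as np

def macro_stress(f_FM, lam_um, eps_target=0.10, d_eps=1e-3):
    """Iso-W mixture stress at a given macroscopic strain (illustrative laws)."""
    sigma_0F = 60.0 + 32.0 * 2.3 + 3.0 * 80_000.0 * 0.25e-3 / lam_um
    laws = {
        "F":  lambda e: sigma_0F + 0.0 * e,
        "A":  lambda e: 271.0 + 725.0 * (1.0 - np.exp(-4.0 * e)),
        "FM": lambda e: 1129.0 + 1500.0 * np.sqrt(e + 1e-6),
    }
    fr = {"F": 0.64 - f_FM, "A": 0.36, "FM": f_FM}   # f_FM taken from the ferrite
    eps = {k: 0.0 for k in laws}
    macro = 0.0
    while macro < eps_target:
        sig = {k: laws[k](eps[k]) for k in laws}
        dW = d_eps / sum(fr[k] / sig[k] for k in laws)
        for k in laws:
            eps[k] += dW / sig[k]
        macro += d_eps
    return sum(fr[k] * laws[k](eps[k]) for k in laws)

base = macro_stress(f_FM=0.05, lam_um=0.20)
print("baseline stress at 10%% strain: %.0f MPa" % base)
print("f_FM +0.05      : %+.0f MPa" % (macro_stress(0.10, 0.20) - base))
print("lambda -0.05 um : %+.0f MPa" % (macro_stress(0.05, 0.15) - base))
```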
CONCLUSIONS

The so-called "Medium Mn" steel (MMS) is a promising solution to obtain high strength steels with good formability. An attractive combination of strength and ductility results from the pronounced TRIP effect in these steels, which is itself obtained thanks to the high amount of retained austenite in the final microstructure at room temperature. There are several reasons for such a high retained austenite fraction, for example Mn partitioning and the ultra-fine microstructure. This type of ultra-fine microstructure can be obtained using a particular heat treatment, named ART annealing, that consists of two consecutive heat cycles: a first austenitization followed by rapid cooling to obtain a fully martensitic structure, then an intercritical annealing of this martensitic medium Mn steel. During the second treatment, the formation of austenite occurs according to the so-called "Austenite Reverted Transformation" (ART) mechanism. The main goal of this PhD was to better understand the mechanism of microstructure formation and the link between the microstructure parameters and the mechanical properties of Medium Mn steels. First of all, the chemical composition of the steel studied during this PhD work, Fe-0.1C-4.7Mn wt.%, was chosen from a large analysis of literature data assisted by thermodynamic calculations. The temperature of the second, intercritical annealing was determined from thermodynamic simulations combined with combinatory experimental heat treatments. It was found that there is an optimum temperature domain (between 640 and 660°C) where a good balance between strength and ductility can be achieved. Based on the obtained results, the selected heat treatment was a double annealing with a first austenitization at 750°C for 30 min followed by an intercritical batch annealing at 650°C. Using this type of treatment, the time evolution of the microstructure and mechanical properties during intercritical annealing at 650°C was studied.

Microstructure evolution during intercritical annealing

The initial state before intercritical annealing (after austenitization at 750°C for 30 min followed by water quench) was characterized using different experimental techniques: optical microscopy (OM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and electron probe microanalysis (EPMA). The majority of the microstructure was found to be lath martensite. Next, the microstructure evolution during intercritical annealing at 650°C was characterized using the aforementioned techniques as well as saturation magnetization measurements and NanoSIMS analysis. First, the evolution of the Mn microsegregations was investigated. It was found that a significant redistribution of Mn occurs during long intercritical annealing. Such a long-range diffusion of Mn implies that Mn diffuses in both ferrite and austenite, meaning that the effective diffusivity of Mn may differ from its value in ferrite or in austenite alone. It was found that the homogenization was controlled by a diffusivity close to that in ferrite. At the same time, both the small spacing of the segregated bands and the ultra-fine size of the microstructure constituents are believed to enhance this phenomenon.
On the other hand, another type of Mn diffusion was observed at a smaller scale. This Mn partitioning is mainly linked to the interactions between Mn and the migrating α/γ interface during austenite growth. The second step was the characterization of the evolution of the cementite precipitation state during heating and holding at 650°C. In this work, the evolution of the carbides during the heating process was not very significant, probably due to the relatively high heating rate of ~5°C/s. The most important information from this analysis was the mean Mn content of the carbides at the beginning of holding (650°C, 0 s), estimated at about 10 wt.%. The time evolution of the carbides during holding was then analyzed on four samples: 3 min, 1 h, 2 h and 3 h. At the beginning of holding (3 min), carbides with two types of morphology (rod and polygonal) and Mn content (15 and 30 wt.%) were observed. The samples after 2 h of holding still presented a small amount of carbides with a high Mn level (~25 wt.%), while the sample after 3 h of holding was completely carbide free. The existence of cementite after 2 h of holding at 650°C contrasts with the equilibrium thermodynamic calculations, which predict complete cementite dissolution at around 610°C; this indirectly highlights the role of kinetic effects. The third stage was the analysis of the time evolution of austenite and ferrite. SEM characterizations showed that the final microstructures contained at least three phases: ferrite, retained austenite and fresh martensite. A double morphology (lath-like and polygonal) was revealed for both the austenite (retained austenite and fresh martensite) and the ferrite. At longer holding times (more than 3 h), a clear presence of fresh martensite was observed, most of it polygonal. From the image analysis of the SEM pictures, it was also found that the kinetics of the austenite transformation was very rapid at the beginning and then became smoother. The Mn distribution between austenite and ferrite as a function of time was evaluated using local TEM-EDX measurements and TEM-EDX hypermaps. Unexpectedly, the system reaches equilibrium after a relatively short time of 10 h. The experimental volume fractions of the phases and their respective compositions were in agreement with those calculated with the Thermo-Calc software: 36% of austenite with 9.1 wt.% Mn and 64% of ferrite with 2.3 wt.% Mn. Finally, the carbon content in austenite for the samples with 1 h and 30 h of holding at 650°C was measured by the NanoSIMS technique. The C content of austenite was estimated by comparing, through image analysis, the SIMS intensities of the analyzed samples with those of a reference sample. This methodology was employed for the first time in this work and was considered well suited to this type of study. The measured mean carbon contents of the 1 h and 30 h samples were close to 0.36 wt.% and 0.27 wt.%, respectively. These values correspond very well to the C contents calculated from the mass balance neglecting the presence of cementite, and thus confirm that the major part of the carbides is dissolved after a one hour treatment at 650°C. The fourth step was the study of the time evolution of retained austenite and martensite. Here it is important to note that a preliminary comparison between two experimental techniques (XRD and sigmametry) for the measurement of the retained austenite fraction was performed, because of the discrepancies observed between the RA fraction values obtained with these two techniques.
It was found that the RA fraction is underestimated by the XRD measurements because of the preceding mechanical polishing. Such an important effect of mechanical polishing is particular to the MMS and probably results from the lower mechanical stability of the RA. As in standard TRIP steels, the RA stability is assumed to be controlled by the austenite composition (C and Mn in this work) and by its size. The effect of the austenite size on its thermal stability is quite important in the case of MMS; however, according to the very recent work of Matsuoka et al. [MAT '13], the size effect on the mechanical stability of RA is quite low or even absent. Based on these results, the sigmametry technique was considered the better adapted for this study. The time evolution of the volume fraction of retained austenite was then evaluated. A particular peak-type form of the RA time-evolution curve was observed, which can be divided into three stages:

 full stabilization of the growing austenite, resulting in an increase of the RA fraction up to its maximum value;
 a decrease of the RA fraction and the appearance of martensite due to the decrease of the austenite stability (lower C and Mn contents and bigger size);
 a final stage where the RA no longer seems to evolve, probably because a state close to equilibrium is achieved.

Finally, a TEM analysis of several samples with different holding times was performed. Depending on the holding time at 650°C, the four possible phases, cementite, ferrite, retained austenite and fresh martensite, were distinguished using Convergent Beam Electron Diffraction (CBED) in STEM (Scanning TEM) mode. The double morphology (lath-like and polygonal) of the observed features was confirmed. Next, the austenite sizes were measured and the observed morphology was discussed. The mean austenite size was found to vary from 0.16 µm to 0.45 µm over the studied range of times, values that correspond well to those found in the literature. Based on the overall microstructure characterization, a complete picture of the sequence of carbide precipitation and austenite nucleation and growth, in connection with the microstructure topology, was proposed. The next part of the work consisted in the analysis of the main results using a coupled experimental/modeling approach. The kinetics of phase transformation in the ultra-fine lath-like microstructure, including both cementite dissolution and austenite formation, was studied with DICTRA simulations. It was found that, in this specific case of a Medium Mn steel with ultra-fine microstructure, the major part of the austenite growth is controlled by Mn diffusion in ferrite, and by Mn diffusion in austenite at longer times. It was also clearly shown that standard DICTRA simulations based on a mean-field approach have limitations for the prediction of the observed kinetics of cementite dissolution and austenite formation. However, the prediction of the austenite fraction evolution can be improved by taking into account the dispersions, which come from the prior processing conditions and play an important role in the kinetics of phase transformations. In this work, DICTRA simulations accounting for the effect of the size distribution were performed, and a very good prediction of the austenite kinetics was achieved with this methodology. The reason for this good result is the correct selection of the size distribution, coherent with the experimental observations. Finally, the important role of the initial ultra-fine size in the overall kinetics was highlighted.
In the last section of this part, the critical factors controlling the thermal stability of austenite, including both chemical and size effects, were determined and discussed on the basis of the analysis of the retained austenite time-evolution. In an original manner, it was shown that the time-evolution of retained austenite is a good indicator of the critical factors governing austenite stability in MMS with an ultra-fine microstructure. In particular, the peak of the curve defines the critical point, in terms of composition and size, delimiting stable and unstable austenite. The effect of austenite size on its stability was clearly demonstrated and a critical size of about 280 nm was estimated. Finally, an adapted formulation of the M s temperature law applicable to medium Mn steels with an ultra-fine microstructure was proposed.
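The structure of such an adapted law can be sketched as follows. The composition part uses the classical Andrews coefficients (C and Mn terms only), while the size penalty is a hypothetical logarithmic term: the constants k and v_ref are illustrative and do not reproduce the formulation actually fitted in this work. The composition and volume data are the measured values for the 3min, 1h and 30h states.

```python
import math

def ms_chemical(c_wt, mn_wt):
    # Classical Andrews linear law, keeping only the C and Mn terms (°C)
    return 539.0 - 423.0 * c_wt - 30.4 * mn_wt

def ms_adapted(c_wt, mn_wt, v_um3, k=30.0, v_ref=0.01):
    # Hypothetical size penalty: grows logarithmically as the austenite
    # volume V shrinks. k and v_ref are illustrative constants, NOT the
    # parameters adjusted in this work.
    return ms_chemical(c_wt, mn_wt) - k * math.log(1.0 + v_ref / v_um3)

# time, C (wt.%), Mn (wt.%), austenite volume (µm^3) measured in this study
for t, c, mn, v in [("3min", 0.67, 10.0, 0.0041),
                    ("1h",   0.33,  9.3, 0.0138),
                    ("30h",  0.26,  8.7, 0.0911)]:
    print(t, round(ms_chemical(c, mn)), round(ms_adapted(c, mn, v)))
```

The sketch reproduces the qualitative trend reported above: as holding time increases, the austenite becomes leaner in C and Mn and coarser, so M s rises and part of the austenite transforms to fresh martensite on quenching.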
Characterization and modeling of mechanical properties

The tensile properties of the steel were measured as a function of holding time at 650°C and the relation between microstructure and mechanical behavior was analyzed. A continuous increase of the ultimate tensile strength (UTS) and a decrease of the yield strength (YS) were observed with increasing holding time. At the same time, the uniform (Uel) and total (TE) elongations show a hill-like time-evolution, with maximum values in the range of times between 30min and 2h. The increase of UTS is clearly related to the increase of the global austenite fraction (retained austenite plus fresh martensite). The observed continuous decrease of YS is thought to be related to the ongoing recovery and recrystallization of the initial martensite structure, which lead to a lower resistance of the matrix phase (ferrite) due to a lower density of defects. Finally, the time-evolution of ductility (Uel and TE) was seen to be linked to the RA fraction, but this relation is not straightforward, since the mechanical stability of the RA has to be taken into account; recovery and recrystallization of the initial martensite structure can affect ductility as well. Analysis of the work hardening rate (WH) curves supports the conclusion that RA has a significant influence on both ductility and strength. Indeed, all the samples exhibited a transformation-induced plasticity (TRIP) effect, but the rate of strain-induced transformation was not the same, and hence the strain hardening behavior also differed. An advanced analysis of the individual behavior of the three major constituents (ferrite (annealed martensite), fresh martensite and retained austenite) was therefore performed in order to better understand the behavior of such multiphase medium Mn steels.

Mechanical behavior of ferrite with medium Mn content

The microstructure characterizations showed that ferrite (annealed martensite) is the matrix phase of the steel. This ferrite was obtained during the annealing of the initial martensite structure and thus has a double specificity: first, the majority of the ferrite has a lath-like morphology; second, it has a high dislocation density inherited from the recovery process. No simple way to produce such a microstructure (100% lath-like dislocated ferrite) was found. It was therefore assumed that a microstructure consisting of lath-like ferrite (more than 98%) with some carbides has a similar mechanical behavior. Microstructures of this type were obtained by annealing the initial martensite at 500°C, and three different holding times were considered. The obtained tensile curves were particular, but in accordance with the literature data: the yield strength was rather high (more than 750MPa), but almost no strain hardening was observed (flat curves). Taking into account the absence of strain hardening of such a ferrite and its lath-like morphology, it was proposed to model its behavior with an elastic perfectly plastic law, in which the stress level depends only on solid solution hardening and on the mean size of the laths.

Mechanical behavior of as-quenched medium Mn martensite

Experimental stress-strain curves of the as-quenched martensite of the studied steel were obtained. The strength level and the strain hardening rate were observed to be higher than the levels classically reported in the literature for 0.1C (wt.%) steels. Moreover, it was not possible to predict the stress-strain curve of such a martensite using the Continuous Composite Approach (CCA) model [ALL'12]. It was therefore supposed that Mn has an effect on the strength and strain hardening of martensite. To investigate this effect, a database of 8 C-Mn martensitic steels, from previously published studies and from some new experimental trials, was built. The stress-strain curves and the strain hardening rate versus true stress were analyzed, and an important influence of the Mn content on the strength and strain hardening of fresh martensite was confirmed. Moreover, it was found that the Mn content influences the martensite strength and strain hardening in obvious synergy with the carbon content of the martensite. In addition, a more pragmatic behavior law, based on the CCA approach, was proposed for modeling the stress-strain curve of medium Mn martensite. The revealed synergy between carbon and manganese was taken into account in a specific way and the coefficient of the manganese influence was adjusted. The results of the adjusted model, with only one fitting parameter, showed very satisfactory agreement with the experimental data. The influence of the Mn content on the mechanical behavior of as-quenched martensite, and its synergy with the C content of the martensite, was thus clearly evidenced in this work.

Mechanical behavior of austenite with medium C and Mn content

The mechanical behavior of austenite was assessed on a steel with 0.5 wt.% C and 10 wt.% Mn, a composition chosen on the basis of the experimental characterization of the austenite. The investigations were performed on the hot rolled steel (~4 mm thick) because cold rolling was not possible. A specific annealing treatment was determined in order to avoid α'- and/or ε-martensite formation and to produce an almost fully austenitic steel. The final microstructure was mostly austenitic and contained less than 7% of α'-martensite. The obtained tensile curves were rather particular: high strain hardening, low YS and low ductility were observed. The low YS was considered intrinsic to the austenite matrix, in accordance with the literature data. On the other hand, the poor ductility and high strain hardening were attributed to transformation-induced α'-martensite with high C and Mn levels. On the basis of these results, it was proposed to describe the mechanical behavior of retained austenite using the empirical law proposed by Bouaziz et al. [BOU'11].
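The three single-phase descriptions above can be organized as simple flow laws σ(ε). The sketch below is schematic: only the elastic-perfectly-plastic structure of the ferrite law and the qualitative shape of the martensite and austenite laws are retained, with illustrative coefficients; the fitted CCA-based and Bouaziz-type laws of this work are not reproduced.

```python
import numpy as np

def sigma_ferrite(eps, mn_wt=3.0, lath_um=0.3):
    # Elastic-perfectly-plastic lath ferrite: flat flow stress set by a
    # friction stress, solid-solution (Mn) hardening and a lath-size
    # (Hall-Petch-like) term. All coefficients are illustrative.
    flow = 300.0 + 40.0 * mn_wt + 180.0 / np.sqrt(lath_um)
    return np.full_like(eps, flow)

def sigma_martensite(eps, c_wt=0.31, mn_wt=8.3):
    # Placeholder for the CCA-based law: strong initial hardening that
    # saturates; the C-Mn synergy enters via a hypothetical equivalent C.
    c_eq = c_wt + 0.03 * mn_wt
    return 800.0 + 1500.0 * c_eq * (1.0 - np.exp(-eps / 0.02))

def sigma_austenite(eps):
    # Placeholder for the Bouaziz-type empirical law: low yield stress
    # and strong strain hardening.
    return 300.0 + 1400.0 * eps ** 0.6

eps = np.linspace(1e-3, 0.2, 5)
print(sigma_ferrite(eps)[0], sigma_martensite(eps)[-1], sigma_austenite(eps)[-1])
```

Writing the constituents in this common form is what makes the mixture model of the next section straightforward to assemble.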
Retained austenite strain induced transformation (TRIP effect)

Retained austenite transforms to martensite under mechanical loading. Interrupted tensile tests were performed for two treatments (1h and 30h of holding at 650°C) in order to study the evolution of the RA fraction with increasing deformation. It was found that in the 1h sample a portion of the RA was stable enough to avoid transformation even at rather high strain levels: at the end of the tensile test, ~8% of RA was still available. On the contrary, the 30h sample showed a lower RA stability and all the austenite was consumed. These observations are also supported by the strain hardening rate curves of the studied samples. Based on the Kolmogorov-Johnson-Mehl-Avrami approach, a special form of phenomenological exponential law was proposed for modeling the evolution of the RA fraction with increasing strain. Two fitting parameters, which allow good control of the induced transformation rate, were necessary; in future studies these parameters could be related to physical factors controlling the stability of retained austenite.

Iso-W model to predict mechanical behavior of studied medium Mn steel

Finally, based on the Iso-W mixture model, a complete model for predicting the true-stress versus true-strain curves of medium Mn steels was proposed. The effect of the strain-induced RA transformation on strain hardening, and also the difference between the flow stresses of the two phases (transformed and induced), was taken into account in the way proposed by Embury and Bouaziz [EMB'10]. The performance of this model, with all the assumptions made and only 3 fitting parameters, was considered very satisfactory. A good description of the mechanical properties, of the stress-strain curves and of the evolution of the strain hardening rate with holding time at 650°C was achieved. It was found that the most sensitive parameters are the fractions of the constituents (especially fresh martensite) and the size of the ferrite; in contrast, the influence of the retained austenite stability appears to be of second order.
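A compact sketch of how the Iso-W mixture and the two-parameter induced-transformation law fit together is given below. The constituent laws are the placeholders sketched earlier, the KJMA-type expression is an assumed form using the reference values ε 0 = 0.14 and n = 1 of the sensitivity annex, and, for brevity, the induced fraction is not fed back into the phase fractions as the full model does.

```python
import numpy as np

def flow(sig0, K, n):
    # Generic Hollomon-type hardening law used as a placeholder for the
    # constituent laws sketched earlier (coefficients illustrative).
    return lambda e: sig0 + K * e ** n

phases = {  # phase: [volume fraction, flow law]
    "ferrite":    [0.70, flow(600.0, 100.0, 0.5)],
    "martensite": [0.10, flow(1100.0, 900.0, 0.4)],
    "austenite":  [0.20, flow(300.0, 1400.0, 0.6)],
}

def f_induced(eps_aust, f_ra0=0.20, eps0=0.14, n=1.0):
    # KJMA-type strain-induced martensite fraction with two fitting
    # parameters (eps0, n): an assumed form, not necessarily the exact
    # law proposed in this work.
    return f_ra0 * (1.0 - np.exp(-(eps_aust / eps0) ** n))

# Iso-work partitioning: each constituent receives the strain increment
# d_eps_i = dW / sigma_i, so the work increment is the same in all phases.
dW, eps = 0.5, {p: 1e-4 for p in phases}
for _ in range(300):
    for p, (f, law) in phases.items():
        eps[p] += dW / law(eps[p])

sigma = sum(f * law(eps[p]) for p, (f, law) in phases.items())
e_macro = sum(f * eps[p] for p, (f, law) in phases.items())
print(round(e_macro, 3), round(sigma), round(f_induced(eps["austenite"]), 3))
```

The iso-work assumption automatically sends more strain to the softest phase, which is why, as noted above, the ferrite size and the fresh martensite fraction dominate the predicted response.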
As a general conclusion, it can be highlighted that this work contributes to the understanding of the microstructure formation and of the mechanical behavior of ultra-fine medium Mn steels. The microstructure analysis undoubtedly revealed the influence of austenite size on its thermal stability and the absolute necessity of taking this into account for M s temperature predictions. On the other hand, the studies of the mechanical behavior demonstrated the important influence of the Mn content on the tensile behavior of as-quenched martensite. Moreover, a complete model for the prediction of the mechanical behavior of complex multiphase medium Mn steels was built using the inputs from the microstructure investigations. Clearly, this model is a helpful tool for the further development of medium Mn steels.

PERSPECTIVES

Although this work proposed answers to several important questions, a certain number of topics remain for further investigation and improvement. Furthermore, this work raised some new questions, so that there are even more open questions now than at the beginning of this study. Possible subjects for further studies are summarized hereafter.

Microstructure formation and evolution

In this work, it was shown that ferrite recrystallization from an initial non-deformed martensite structure can happen rather rapidly: the first ultra-fine grains of ferrite were detected after only 30min of annealing at 650°C. In the literature, there are some works on ferrite recrystallization from non-deformed martensite, but the reported times are much longer. From a scientific point of view, ferrite recrystallization from non-deformed martensite can therefore be a very interesting subject for further investigations.

As stated before, long-range diffusion of Mn takes place during long intercritical annealing. This suggests that the effective diffusivity of Mn may differ from its diffusivity in ferrite and in austenite. It would therefore be of interest to develop a method for calculating the diffusivity of Mn in a two-phase or multiphase structure.

The characterization of the cementite precipitation state and the subsequent DICTRA simulations highlighted the importance of cementite dissolution for the austenite transformation and stabilization. Besides, the stability of austenite and its fraction are both very significant parameters for the mechanical behavior of the steel. In particular, different thermal paths can produce different carbide and austenite evolutions, and thus result in various mechanical properties. Therefore, the control of the cementite precipitation state and of its influence on the austenite transformation and stabilization can be a very valuable topic for future studies.

In this study, austenite size was shown to have an important influence on its thermal stability. However, the mechanism of austenite stabilization by the size effect is still under discussion, and it would be interesting to continue the work on this topic. Moreover, some results pointed to a low or even non-existent effect of size on the mechanical stability of austenite; this issue requires deeper investigation.

Finally, it was clearly demonstrated that standard DICTRA simulations based on a mean-field approach have limitations for predicting the observed kinetics of cementite dissolution and austenite formation. Nevertheless, the prediction can be improved by taking the dispersions into account, for example in the form of a size distribution. It seems important to work on the characterization and modeling of the different dispersions (size, morphology, topology, etc.) and then to introduce them into the kinetics simulations. This aspect is of particular importance for the 3rd generation AHSS grades because of their higher sensitivity to heritage effects.

Mechanical properties and behavior

It was discussed that producing a structure of 100% lath-like dislocated ferrite is a very complex issue. It remains challenging to produce such a microstructure and to measure its mechanical response.

A significant influence of the Mn content on the tensile behavior of as-quenched martensite was revealed, and a synergy between C and Mn was emphasized. However, understanding the physical mechanism of this Mn influence and of its synergy with C requires further, deeper studies. It is also of interest to examine whether there is any effect on the microstructure of as-quenched martensite and its link with the mechanical behavior.

Finally, improving the characterization and modeling of the austenite induced transformation (TRIP effect) is considered very relevant. At least two ways of improving the modeling of the TRIP effect are possible. One is to modify the existing physical model proposed by Perlade et al. [PER'03] in order to decrease its exaggerated sensitivity to the austenite grain size. Another possibility is to introduce physical factors controlling the retained austenite stability into the empirical law used in this work.

Annex: EPMA analysis of Mn distribution

Austenitization at 750°C 30min WQ

Figure 136 presents the Mn map analysis performed using the Aphelion software: the quantitative Mn map and the Mn profiles along lines 1 and 2. An optical observation of the microstructure is also included in this figure.
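The line-profile extraction performed on these quantitative maps can be reproduced with a few lines of array code. This is a generic sketch on a synthetic banded map; no Aphelion API is implied.

```python
import numpy as np

# Generic sketch: sample a quantitative Mn map (wt.%) along a straight
# line, as done for the "Mn profiles along lines 1 and 2". The map here
# is synthetic (a banded segregation pattern around the 4.7 wt.% bulk
# content); a real map would come from the EPMA acquisition.
ny, nx = 200, 300
y = np.arange(ny)[:, None]
mn_map = 4.7 + 1.5 * np.sin(2 * np.pi * y / 40.0) * np.ones((ny, nx))

def line_profile(img, p0, p1, n=250):
    # Nearest-pixel sampling along the segment p0 -> p1 (row, col)
    r = np.linspace(p0[0], p1[0], n).round().astype(int)
    c = np.linspace(p0[1], p1[1], n).round().astype(int)
    return img[r, c]

profile = line_profile(mn_map, (10, 10), (190, 290))
print(profile.min().round(2), profile.max().round(2))
```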
ART annealing at 650°C for 3min

The microstructure obtained after 3min of ART annealing at 650°C is shown in Figure 138. Figure 139 presents the Mn map analysis performed using the Aphelion software: the quantitative Mn map and the Mn profiles along lines 1 and 2.

ART annealing at 650°C for 1h

The microstructure obtained after 1h of ART annealing at 650°C is shown in Figure 140. Figure 141 presents the Mn map analysis performed using the Aphelion software: the quantitative Mn map and the Mn profiles along lines 1 and 2.

ART annealing at 650°C for 10h

The microstructure obtained after 10h of ART annealing at 650°C is shown in Figure 142. Figure 143 presents the Mn map analysis performed using the Aphelion software: the quantitative Mn map and the Mn profiles along lines 1, 2 and 3.

ART annealing at 650°C for 30h

The microstructure obtained after 30h of ART annealing at 650°C is shown in Figure 144. Figure 145 presents the Mn map analysis performed using the Aphelion software: the quantitative Mn map and the Mn profile along line 1.
ACKNOWLEDGEMENTS

First of all, I would like to thank God for His support during my entire life. All the achievements in my life are the result of His favor to me.
The time-evolution of the austenite fraction at 650°C that corresponds to retained austenite and fresh martensite (FM+RA) after quenching was evaluated by image analysis (Aphelion software) from SEM pictures. Figure 83 presents the obtained results. The corresponding kinetics is also compared with the equilibrium fraction of austenite calculated using Thermo-Calc.

Figure 83 - Evolution of the austenite fraction with holding time at 650°C. In the final structure after quenching, austenite is represented by retained austenite and fresh martensite (FM+RA). The equilibrium fraction of austenite at 650°C calculated using Thermo-Calc is given by the dashed line.

Annex 1: Data base of mechanical properties of MMS

The following table presents the data set, collected from the literature and used for the analysis of the mechanical properties of Medium Mn Steels. It contains the following information (when available):
• composition (major alloying elements);
• process parameters (hot rolling, cold rolling, annealing);
• mechanical properties;
• microstructure description (fractions, sizes and contents of C and Mn).

The retained austenite fraction was measured by XRD for each holding temperature. The results are presented in Table 16 and in Figure 134.

Table 16 - Measured mechanical properties of samples after gradient batch annealing (different holding temperatures), with their corresponding uncertainty in brackets: high and low yield strength (YS H and YS L ± 15MPa), yield point elongation (YPE ± 0.2%), ultimate tensile strength (UTS ± 10MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively. The last column also gives the measured fraction of retained austenite (RA).

Annex 3.2: Characterization of carbides by TEM

TEM observations of the carbides formed during heating were done using replicas. The heating was interrupted at 550, 600 and 650°C using He quenching. For each sample several observations were performed, and the obtained results are presented in the following sub-sections. The mean composition of cementite estimated from 17 measurements was 6.8 wt.% Mn. However, three levels of Mn content in cementite were observed: the first around 5 wt.%, the second ~8 wt.% and the third about 13 wt.%.

650°C sample: 6.9 wt.% Mn

Annex 4.1: Global model parameters

All the parameters necessary for the description of the behavior of each phase are given in the

Annex 4.2: Sensitivity analysis of the mechanical model

The variation of the following input parameters of the model was studied: a) fresh martensite fraction; b) size of ferrite; c) retained austenite fraction; d) mechanical stability of retained austenite (both ε and n). In order to avoid the interference of different effects, several parameters were blocked at certain values depending on the performed calculations.

Influence of the Fresh Martensite fraction

The following parameters were considered with the subsequent constant values:
• λ = 0.3 (µm)
• Mn F = 3 (wt.%)
• f RA = 20 (%)
• ε = 0.14
• n = 1

C FM and Mn FM were recalculated according to the global fraction of FM+RA, but C eq was always superior to 0.5, thus with almost no influence on the stress-strain curves. Finally, the changing parameters were the f FM (0-30%) and f F (50-80%) fractions. The results of the analysis are presented in Figure 180 and Table 20. As can be seen, fresh martensite has an important influence on the mechanical behavior of such a multiphase mixture. UTS shows the biggest variation with the FM fraction evolution; on the other hand, the change of YS and Uel is less pronounced.
It can also be observed that, due to the strain partitioning (Iso-W assumption), the induced transformation of RA is slightly modified as well (Figure 180 c).

Influence of the Ferrite size

The following parameters were considered with the subsequent constant values:
• f FM = 10 (%)
• C FM = 0.31 (wt.%)
• Mn FM = 8.3 (wt.%)
• Mn F = 3 (wt.%)
• f RA = 20 (%)
• ε = 0.14
• n = 1

The size of ferrite (λ) was varied from 0.05 to 1 (µm). The results of the analysis are presented in Figure 181 and Table 21. The impact of the λ evolution appears to be very important for all mechanical characteristics (YS, UTS and Uel). Due to the strain partitioning, the ferrite size has an important effect on the strain-induced transformation (Figure 181 c).

Influence of the Retained Austenite fraction

The following parameters were considered with the subsequent constant values:
• f F = 70 (%)
• λ = 0.3 (µm)
• Mn F = 3 (wt.%)
• C FM = 0.31 (wt.%)
• Mn FM = 8.3 (wt.%)
• ε = 0.14
• n = 1

The changing parameters were the f RA (0-30%) and f FM (0-30%) fractions. The results of the analysis are shown in Figure 182 and Table 22. As can be expected, an increase of the RA fraction decreases YS and UTS and improves Uel. The effect on Uel is very significant; on the other hand, the effect on YS and UTS is less prominent.

Influence of the mechanical stability of Retained Austenite

The following parameters were considered with the subsequent constant values:
• f AM = 70 (%)
• λ = 0.3 (µm)
• Mn F = 3 (wt.%)
• C FM = 0.31 (wt.%)
• Mn FM = 8.3 (wt.%)
• f RA = 30 (%)

The changing parameters were ε 0 and n. In the case of the ε 0 variation (0.03-0.21), n was blocked at the value of 1. Figure 183 and Table 23 present the obtained results. In contrast, when n was changed (0.5-4), ε 0 = 0.14 was kept constant. The results obtained with this simulation are shown in Figure 184 and Table 24. As can be seen, both parameters ε 0 and n have a low effect on YS. At the same time, the impact of the austenite stability on UTS and Uel is quite significant, especially with the variation of ε 0. High values of ε 0 increase the stability of RA, thus decreasing the induced martensite fraction and subsequently the UTS. This effect is also accompanied by an improved Uel. Finally, it can be concluded that the sensitivity to ε 0 is more important than that to n.
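The trends above follow from how the composite model partitions strain between the phases (the Iso-W assumption) and from how the retained austenite transforms with strain. As an illustration of the second ingredient only, the sketch below assumes a two-parameter exponential decay of the RA fraction with plastic strain - a Koistinen-Marburger/Olson-Cohen-type form chosen here purely for illustration; the exact law and the full phase parameter set used by the model are those of Annex 4.1 and are not reproduced here.

```python
import math

def retained_austenite_vs_strain(eps, f_ra0=0.30, eps0=0.14, n=1.0):
    """Assumed strain-induced transformation law: the retained austenite
    fraction decays with plastic strain eps; eps0 sets the mechanical
    stability (larger eps0 -> more stable RA) and n the curve shape."""
    return f_ra0 * math.exp(-((eps / eps0) ** n))

def induced_martensite_vs_strain(eps, **kw):
    """Fresh martensite created by the strain-induced transformation."""
    f_ra0 = kw.get("f_ra0", 0.30)
    return f_ra0 - retained_austenite_vs_strain(eps, **kw)

for eps in (0.0, 0.05, 0.10, 0.20):
    print(eps, round(retained_austenite_vs_strain(eps), 3))
```

Under this assumed form, a larger ε 0 slows the decay (more stable RA, less strain-induced martensite, hence lower UTS and higher Uel), while n mainly reshapes the transient - consistent with the directions of the effects reported in Tables 23 and 24.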
297,927
[ "765970" ]
[ "178323" ]
01751800
en
[ "chim" ]
2024/03/05 22:32:07
2015
https://hal.univ-lorraine.fr/tel-01751800/file/DDOC_T_2015_0105_POLTORAK.pdf
Acknowledgements

years I could unceasingly count on their priceless scientific advice and help. I will miss our very fruitful discussions, which resulted in a series of interesting discoveries. The time they have invested in me resulted in my tremendous personal development - I am in their debt. Last but not least, I appreciate the financial support from Région Lorraine and from the Ecole Doctorale SESAMES (ED 412, Université de Lorraine) for my PhD grant.

Frequently used abbreviations (in alphabetic order):

BET - Brunauer-Emmett-Teller isotherm
BTPPA+ - Bis(triphenylphosphoranylidene)ammonium cation
CE - Counter electrode
CTA+ - Cetyltrimethylammonium cation
CV - Cyclic voltammetry
D i - Diffusion coefficient
D i' - Apparent diffusion coefficient
DCE - Dichloroethane
DecFc - Decamethylferrocene
DMFc - 1,1'-Dimethylferrocene
EtOH - Ethanol
G - Gibbs energy of transfer
ITIES - Interface between Two Immiscible Electrolyte Solutions
LOD - Limit of detection
NMR - Nuclear Magnetic Resonance
4OBSA- - 4-Octylbenzenesulfonic anion
PAMAM - Poly(amidoamine)
PH+ - Trimethylbenzhydrylammonium cation
RE - Reference electrode
S - Spacing factor, or in other words the pore center-to-center distance
SAXS - Small Angle X-ray Scattering

Abstract

This work combines electrochemistry at the interface between two immiscible electrolyte solutions (ITIES) with the Sol-Gel process of silica, leading to an interfacial modification with mesoporous silica using a soft template. In the first part of this work, the macroscopic liquid -liquid interface was employed to separate the aqueous solution of the hydrolyzed silica precursor species (tetraethoxysilane (TEOS)) from the cationic surfactant (cetyltrimethylammonium (CTA+)) dissolved in dichloroethane. The silica material deposition was controlled by the electrochemical CTA+ transfer from the organic to the aqueous phase. The template transferred to the aqueous phase catalyzed the condensation reaction and self-assembly, resulting in silica deposition at the interface. A variety of initial synthetic conditions were studied with cyclic voltammetry: influence of [CTA+]org and [TEOS]aq, polarity of the organic phase, pH of the aqueous phase, deposition time scale, etc. [CTA+]org was found to be the limiting factor of the deposition reaction. Characterization of the silica material was also performed in order to study its chemical functionalities (XPS and infra-red spectroscopy were used) and its morphology, confirming the mesostructure (SAXS and TEM were employed in this regard).

Silica deposition at the miniaturized ITIES (membranes supporting arrays of micrometer-diameter pores were used in this regard) was the second part of this work. Silica interfacial synthesis performed in situ resulted in stable deposits growing on the aqueous side of the interface. The mechanical stability of the supported silica deposits allowed further processing - the silica material was cured. Based on imaging techniques (e.g. SEM), it was found that the deposits form hemispheres at longer experimental time scales. The interfacial reaction was also followed with in situ confocal Raman spectroscopy. The molecular characteristics of the interface changed dramatically once CTA+ species were transferred to the aqueous phase. An array of microITIES modified with silica was also assessed by ion transfer voltammetry of five interfacially active species differing in size, charge and nature. The transfer of each ion was affected in the presence of mesoporous silica at the ITIES.
Finally, the local pH change at the liquid -liquid interface was induced by ion transfer and UV photolysis of trimethylbenzhydrylammonium initially dissolved in the organic phase. The local pH change was confirmed with a local pH measurement (an iridium oxide modified Pt microdisc electrode was used in this regard). Interfacial deposition triggered by the pH decrease was shown to be feasible once the TEOS precursor was dissolved in the organic phase whereas CTA+Br- was dissolved in the aqueous phase. Interfacial modification with mesoporous silica materials was shown to possess promising properties for improving selectivity at the ITIES. This particular analytical parameter can be further improved by silica functionalization, which could be a continuation of this work.

Résumé - French Version

This work combines electrochemistry at the liquid -liquid interface with the sol-gel process for interfacial modification with mesoporous silica. In the first part of this work, the macroscopic liquid -liquid interface was used to separate the aqueous solution of the hydrolyzed silica precursor species (tetraethoxysilane (TEOS)) from the cationic surfactant (cetyltrimethylammonium (CTA+)), which acted as a template and was dissolved in dichloroethane. The deposition of the silica material was triggered by the transfer of CTA+ from the organic to the aqueous phase. The CTA+ transferred to the aqueous phase catalyzed the silica condensation reaction at the liquid -liquid interface. Different initial synthesis conditions were studied by cyclic voltammetry: [CTA+]org and [TEOS]aq, polarity of the organic phase, pH of the aqueous phase, deposition time, etc. [CTA+]org is the factor limiting the deposition reaction. Characterization of the silica was also carried out. The chemical functionalities were evaluated by XPS and infra-red spectroscopy. The mesostructure of the silica was confirmed by SAXS and TEM. Silica deposition at miniaturized liquid -liquid interfaces was the second part of this work. Stable deposits on the aqueous side of the interface were synthesized in situ by the electrochemical route. The mechanical stability of the silica deposits allowed a thermal treatment of the silica. Based on imaging techniques (e.g. SEM), it was found that the deposits form hemispheres at longer times. The interfacial reaction was also followed in situ by confocal Raman spectroscopy. The molecular characteristics of the interface changed dramatically once the CTA+ species were transferred to the aqueous phase. The miniaturized and modified liquid -liquid interfaces were also evaluated by voltammetric transfer of five ions differing in size, charge and nature of the species. The transfer of each ion was affected by the presence of mesoporous silica at the liquid -liquid interface. Finally, the local pH change at the liquid -liquid interface was induced by ion transfer and UV photolysis of trimethylbenzhydrylammonium initially dissolved in the organic phase. The local pH decrease was confirmed by a local pH measurement (a Pt microdisc modified with iridium oxide was used for this purpose).
Interfacial silica deposition triggered by the local pH decrease was demonstrated once the TEOS precursor was dissolved in the organic phase while CTA+Br- was dissolved in the aqueous phase. Interfacial modification with mesoporous silica materials was shown to possess promising properties for improving the selectivity of electroanalytical sensors based on the liquid -liquid interface. This particular analytical parameter can be further improved by silica functionalization, which could be a continuation of this work.

Chapter I. Bibliographical introduction

The first chapter gives an overview of the topics introducing the subsequent work. It is divided into three main parts: (i) general information concerning the electrified liquid -liquid interface, including its structure, the charge transfer reactions, its electrochemical behavior and miniaturization; (ii) the Sol-Gel process of silica and examples of its application in electrochemistry; and (iii) the approaches that were used for liquid -liquid interface modification with metals, phospholipids, polymers, carbon materials and, finally, silica materials.

1.1. Electrified interface between two immiscible electrolyte solutions

At the very beginning, it is important to clarify the terminology developed in electrochemistry for liquid -liquid interface based systems. Terms such as the electrified, polarized or non-polarized liquid -liquid interface are used interchangeably with the interface between two immiscible electrolyte solutions (ITIES) throughout the literature of the subject, and hence they are also used in the present work. From an electroanalytical point of view, the ITIES bears plenty of high quality properties: (i) first of all, under proper conditions it can be polarized; (ii) it is self-healing in the same manner as mercury electrodes; (iii) it is free from defects down to the molecular level; (iv) the lack of preferential nucleation sites offers a unique way to study deposition processes; (v) the electrochemical theories developed for solid electrodes are applicable at the ITIES; (vi) what sets the ITIES apart is that detection is not restricted to reduction or oxidation reactions, since signals can also arise from ion transfer reactions; and (vii) as an electrochemical sensor, the ITIES has good sensitivity and reasonable limits of detection. In the following subchapters, the most significant aspects - from this work's point of view - are discussed, divided into: (i) interfacial structure, (ii) different types of interfacial charge transfer reactions, (iii) the polarized and the non-polarized interfaces, (iv) electrochemical instability phenomena occurring in the presence of surface-active molecule adsorption and (v) miniaturization at the ITIES. A more comprehensive set of information dealing with the electrochemical aspects of the ITIES is available in a number of reviews.
[START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF][START_REF] Samec | Charge-Transfer Processes at the Interface between Hydrophobic Ionic Liquid and Water[END_REF][START_REF] Dryfe | The Electrified Liquid-Liquid Interface[END_REF]4,[START_REF] Girault | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF][START_REF] Peljo | Electrochemistry at the Liquid/liquid Interface[END_REF][START_REF] Samec | Dynamic Electrochemistry at the Interface between Two Immiscible Electrolytes[END_REF][START_REF] Senda | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF][START_REF] Girault | Charge Transfer across Liquid-Liquid Interfaces[END_REF]

1.1.1. Liquid -liquid interface structure

The structure of the liquid -liquid interface was proposed for the first time in 1939, when Verwey and Niessen, based on the Gouy-Chapman theory, described the electric double layer as two back-to-back electric double layers of opposite charge separated by a continuous geometric boundary (Figure 1.1 a). [START_REF] Verwey | The Electrical Double Layer at the Interface of Two Liquids[END_REF] The first experimental report dealing with the interfacial structure was given by Gavach et al. almost 40 years later. By measuring the interfacial tension versus the concentration of different tetraalkylammonium ions, they proved the presence of specific adsorption, which was explained in terms of ion pair formation at the liquid -liquid interface. [START_REF] Gavach | The Double Layer and Ion Adsorption at the Interface between Two Non Miscible Solutions: Part I: Interfacial Tension Measurements for the Water-Nitrobenzene Tetraalkylammonium Bromide Systems[END_REF] One year later, the same group proposed what we know today as the 'modified Verwey-Niessen' model (Figure 1.1 b). The experimental approach consisted in controlling the interfacial Galvani potential difference between a sodium bromide aqueous solution and a tetraalkylammonium tetraphenylborate organic solution by addition of tetraalkylammonium bromide to the aqueous phase, followed by interfacial tension measurement - giving the electrocapillary curve. This work brought two main characteristics to the interfacial model: the first treated the interface as a 'compact layer' of oriented dipole molecules; the second assumed a very small potential drop across the interface. [START_REF] Gros | The Double Layer and Ion Adsorption at the Interface between Two Non-Miscible Solution; Part II: Electrocapillary Behaviour of Some Water-Nitrobenzene Systems[END_REF] Furthermore, the presence of a mixed solvent layer at the interface was confirmed via a surface excess study of water at the interface between organic solvents of different polarity. [START_REF] Girault | Thermodynamic Surface Excess of Water and Ionic Solvation at the Interface between Immiscible Liquids[END_REF][START_REF] Girault | Thermodynamics of a Polarised Interface between Two Immiscible Electrolyte Solutions[END_REF] The results indicated that the surface excess of water at the liquid -liquid interface was not enough to form a monolayer, which in turn suggested that the ions penetrate the interfacial region (Figure 1.1 c). An important aspect of the ITIES structure is its thickness. The existence of capillary waves at the liquid -liquid interface designates its boundaries. A quantitative parameter - the mean-square interfacial displacement - allows the estimation of the upper and lower limits of the interface. 15 Results based on molecular dynamics calculations for typical interfacial tension values between H2O and DCE indicated that the size of the interface is of the order of 10 Å. [START_REF] Benjamin | Theoretical Study of the Water/1,2-Dichloroethane Interface: Structure, Dynamics, and Conformational Equilibria at the Liquid-liquid Interface[END_REF][START_REF] Benjamin | Chemical Reactions and Solvation at Liquid Interfaces: A Microscopic Perspective[END_REF] The complex nature and unique character of the liquid -liquid interface narrow the study of its structure down to a few spectroscopic techniques: scattering of X-rays and neutrons [START_REF] Schlossman | Liquid-Liquid Interfaces: Studied by X-Ray and Neutron Scattering[END_REF] and non-linear optical methods (sum-frequency vibrational spectroscopy [START_REF] Wang | Generalized Interface Polarity Scale Based on Second Harmonic Spectroscopy[END_REF] and second harmonic generation [START_REF] Higgins | Second Harmonic Generation Studies of Adsorption at a Liquid -Liquid Electrochemical Interface[END_REF]).

1.1.2. Charge transfer reactions at the ITIES

At the ITIES, each of the immiscible phases can be characterized by its own inner Galvani potential. At open circuit potential, when no charge transfer is observed, the species in one phase, say the aqueous one, are too hydrophilic to be transferred to the organic phase, and the species from the organic phase are too hydrophobic to transfer to the aqueous phase. The system equilibrium can be disrupted by introduction of an external solute to one of the phases (partition of species driven by the interfacial Galvani potential difference leads to an interfacial ion transfer reaction until equilibrium is established) or by external interfacial polarization. The quantity of energy that has to be delivered to the system in order to transfer one of the components to the neighboring phase is given by the standard transfer Gibbs energy, whereas the partition of the species between the two immiscible phases is conditioned by the interfacial Galvani potential difference. In principle, charge transfer reactions across the ITIES can be divided into three main groups: (i) simple ion transfer, (ii) assisted/facilitated ion transfer and (iii) electron transfer between the redox couple O1/R1 in one phase and the redox couple O2/R2 in the other. The following description covers the thermodynamics of the three above-mentioned charge transfers, enriched with practical examples. Examples and characteristics of electrochemically induced liquid -liquid interfacial adsorption reactions are also briefly discussed.
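The mean-square displacement mentioned here can be estimated from classical capillary-wave theory. The sketch below uses the textbook relation ⟨z²⟩ = (k_B·T / 2πγ)·ln(λ_max/λ_min), which is not quoted in this chapter and is given only as an assumed, standard approximation; the interfacial tension value is merely a typical figure for the water|DCE interface.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def capillary_roughness(gamma, t_kelvin=298.0, lam_max=1e-6, lam_min=1e-9):
    """Root-mean-square interfacial displacement from capillary-wave theory:
    <z^2> = (k_B T / 2 pi gamma) * ln(lambda_max / lambda_min).
    gamma in N/m; the cutoffs are the largest/smallest wavelengths sampled."""
    msd = (K_B * t_kelvin) / (2.0 * math.pi * gamma) * math.log(lam_max / lam_min)
    return math.sqrt(msd)

# gamma ~ 28 mN/m is a typical value for the water|DCE interface
print(f"{capillary_roughness(0.028) * 1e10:.1f} Angstrom")  # a few Angstrom
```

With γ ≈ 28 mN/m this gives a roughness of a few Å, consistent in order of magnitude with the ~10 Å interfacial width indicated by the molecular dynamics results cited above.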
Simple ion transfer reaction

If one considers that an ion (i) is transferred from the aqueous to the organic phase via a simple ion transfer reaction, then the standard transfer Gibbs energy ($\Delta G_{(i)}^{0,aq \to org}$) is defined as the difference between the standard Gibbs energies of solvation ($\mu^{0,org}$) and hydration ($\mu^{0,aq}$):

$\Delta G_{(i)}^{0,aq \to org} = \mu^{0,org} - \mu^{0,aq}$   (1.1)

The standard transfer Gibbs energy ($\Delta G_{(i)}^{0,aq \to org}$) can be converted to the standard transfer potential ($\Delta_{org}^{aq}\Phi_{(i)}^{0}$) by introduction of the $z_{(i)}F$ factor according to equation 1.2:

$\Delta_{org}^{aq}\Phi_{(i)}^{0} = \dfrac{\Delta G_{(i)}^{0,aq \to org}}{z_{(i)}F}$   (1.2)

For the charged species (i), the ion transfer equilibrium condition at constant temperature and pressure is given by the following equality:

$\tilde{\mu}_{(i)}^{aq} = \tilde{\mu}_{(i)}^{org}$   (1.3)

where $\tilde{\mu}_{(i)}^{x}$ is the electrochemical potential of the charged species (i), expressed in the form of equation 1.4:

$\tilde{\mu}_{(i)}^{x} = \mu_{(i)}^{0,x} + RT \ln a_{(i)}^{x} + z_{(i)}F\Phi^{x}$   (1.4)

where $\mu_{(i)}^{0,x}$ is the standard chemical potential, $\Phi^{x}$ is the inner Galvani potential, $a_{(i)}^{x}$ is the activity of the species and x corresponds to the aqueous (aq) or the organic (org) phase. The equality from equation 1.3 can be developed by substitution with equation 1.4:

$\mu_{(i)}^{0,aq} + RT \ln a_{(i)}^{aq} + z_{(i)}F\Phi^{aq} = \mu_{(i)}^{0,org} + RT \ln a_{(i)}^{org} + z_{(i)}F\Phi^{org}$   (1.5)

Separation of the Galvani potential difference - $\Delta_{org}^{aq}\Phi = \Phi^{aq} - \Phi^{org}$ - on one side and of the rest of the components on the other side of the equality yields the Nernst equation for ion transfer at the ITIES:

$\Delta_{org}^{aq}\Phi = \Delta_{org}^{aq}\Phi_{(i)}^{0} + \dfrac{RT}{z_{(i)}F} \ln \dfrac{a_{(i)}^{org}}{a_{(i)}^{aq}}$   (1.6)

[START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF][START_REF] Sabela | Standard Gibbs Energies of Transfer of Univalent Ions From Water To 1,2-Dichloromethane[END_REF] where $\Delta_{org}^{aq}\Phi_{(i)}^{0}$ is the standard transfer potential or, in other words, the standard Gibbs energy expressed on the voltage scale as shown with equation 1.2, $a_{(i)}^{x}$ is the activity of the ion (i) in the aqueous or organic phase, R is the gas constant (8.31 J/(mol·K)) and T is the temperature. Equation 1.6 can be expressed with concentrations instead of activities as:

$\Delta_{org}^{aq}\Phi = \Delta_{org}^{aq}\Phi_{(i)}^{0\prime} + \dfrac{RT}{z_{(i)}F} \ln \dfrac{c_{(i)}^{org}}{c_{(i)}^{aq}}$   (1.7)

where $\Delta_{org}^{aq}\Phi_{(i)}^{0\prime}$ corresponds to the formal transfer potential. [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF]4 The simple ion transfer reaction is considered the easiest to study, and the above deliberations provide quantitative parameters (standard transfer potential or standard Gibbs energy difference) that can be measured with electrochemical techniques. Figure 1.2 includes the $\Delta_{org}^{aq}\Phi_{(i)}^{0}$ values for different cationic and anionic species measured at the water-dichloroethane interface.
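Equations 1.2 and 1.6-1.7 map directly onto a few lines of arithmetic. The sketch below - with purely illustrative numbers, not values taken from Figure 1.2 - converts a standard Gibbs energy of transfer into a standard transfer potential and evaluates the concentration form of the Nernst equation for ion transfer.

```python
import math

R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol

def std_transfer_potential(dG0_j_per_mol, z):
    """Eq. 1.2: standard transfer potential (V) from the standard Gibbs
    energy of transfer aq -> org (J/mol) for an ion of charge z."""
    return dG0_j_per_mol / (z * F)

def galvani_potential(phi0, z, c_org, c_aq, t_kelvin=298.0):
    """Eq. 1.7 (concentration form of the Nernst equation for ion transfer):
    Galvani potential difference sustaining a given partition of the ion."""
    return phi0 + (R * t_kelvin) / (z * F) * math.log(c_org / c_aq)

# illustrative values: dG0 = +30 kJ/mol for a monocation
phi0 = std_transfer_potential(30e3, z=1)                    # ~0.31 V
print(phi0, galvani_potential(phi0, 1, c_org=1e-4, c_aq=1e-2))
```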
Assisted/facilitated ion transfer

This type of interfacial transfer involves a host-guest interaction between an ion (i) in one phase and a ligand (L) dissolved in the second, immiscible phase. The Nernst-like equation for the simple ion transfer reaction can be easily adapted to describe the facilitated ion transfer. To do so, some assumptions have to be made:
- the ion is initially present in the aqueous phase, whereas the ligand able to complex the transferring ion is present in the organic phase;
- complexation takes place in the organic phase;
- the concentration of ions in the aqueous phase is in excess over their concentration in the organic phase;
- at open circuit potential, complex formation in the organic phase prevails, hence the ion concentration in the organic phase can be neglected; and
- the aqueous phase concentration of the ligand is neglected.

To simplify the considerations, we can assume that the complexation is of 1:1 stoichiometry, which can be written as:

$L_{org} + i_{aq} \rightleftharpoons L\text{-}(i)_{org}$   (1.8)

For the given reaction, the association constant is:

$K_a = \dfrac{a_{L-(i)}^{org}}{a_{L}^{org}\, a_{(i)}^{aq}}$   (1.9)

The $\Delta_{org}^{aq}\Phi$ for the facilitated ion transfer can then be written in the form of equation 1.10:

$\Delta_{org}^{aq}\Phi = \Delta_{org}^{aq}\Phi_{(i)}^{0} - \dfrac{RT}{z_{(i)}F}\ln K_a + \dfrac{RT}{z_{(i)}F}\ln \dfrac{a_{L-(i)}^{org}}{a_{(i)}^{aq}\, a_{L}^{org}}$   (1.10)

The first report concerning an assisted ion transfer reaction at the electrified liquid -liquid interface came from what was then Czechoslovakia, [START_REF] Koryta | Electrochemical Polarization Phenomena at the Interface of Two Immiscible Electrolyte Solutions[END_REF] where Koryta studied potassium transfer from the aqueous phase to a nitrobenzene solution containing the dibenzo-18-crown-6 ionophore. Beyond doubt, the complexing agent dissolved in the organic phase lowered the Gibbs energy of transfer and allowed the ions to transfer at lower interfacial potential difference values. This novelty opened the way to study species whose detection had been limited by the rather narrow potential window. The complex nature of the facilitated ion transfer reaction involves several mechanisms, divided - with respect to the complexation/dissociation location - into: ACT (aqueous complexation followed by transfer), TOC (transfer followed by complexation), TIC (transfer by interfacial complexation) and TID (transfer by interfacial dissociation). [START_REF] Shao | Assisted Ion Transfer at Micro-ITIES Supported at the Tip of Micropipettes[END_REF] All four mechanisms are illustrated in Figure 1.3. The model facilitated ion transfer reaction at the ITIES remains the historical potassium transfer facilitated by dibenzo-18-crown-6. [START_REF] Koryta | Electrochemical Polarization Phenomena at the Interface of Two Immiscible Electrolyte Solutions[END_REF] To date, the facilitated transfer of ionic compounds has allowed improvement of the still poor selectivity of the ITIES, guided by the specific interaction between the host (ionophore, ligand) and the guest (target ion). The assisted transfer of heavy metals, whose detection is of the highest importance, has also attracted a lot of experimental attention. A series of five cyclic thioether ligands were found to be proper ionophores for the assisted transfer of cadmium, lead, copper and zinc. [START_REF] Lagger | Electrochemical Extraction of Heavy Metal Ions Assisted by Cyclic Thioether Ligands[END_REF] A synthetic molecule - ETH 1062 - incorporated into a gelled polyvinylchloride-2-nitrophenylethyl ether polymer membrane was used for cadmium detection from the aqueous solution. [START_REF] Lee | Amperometric Tape Ion Sensors for cadmium(II) Ion Analysis[END_REF]
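A practical consequence of eq. 1.10 is that, with an excess of ligand, the apparent (half-wave) transfer potential shifts by roughly -(RT/zF)·ln(K_a·c_L) relative to the free ion transfer. The sketch below evaluates that shift; the association constant and ligand concentration are hypothetical values chosen for illustration, and only the functional form follows from the equations above.

```python
import math

R, F = 8.314, 96485.0

def halfwave_shift(k_a, c_ligand_org, z=1, t_kelvin=298.0):
    """Approximate shift of the half-wave transfer potential (V) caused by
    1:1 complexation in the organic phase with excess ligand:
    delta_phi ~ -(RT / zF) * ln(K_a * c_L)."""
    return -(R * t_kelvin) / (z * F) * math.log(k_a * c_ligand_org)

# hypothetical association constant and a 10 mM ligand concentration
print(f"{halfwave_shift(1e8, 1e-2) * 1e3:.0f} mV")  # ~ -355 mV
```

Shifts of a few hundred millivolts of this kind are indeed what is reported for the assisted transfers discussed in this section.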
The ion transfer voltammetry of silver ions dissolved in the aqueous phase indicated that, upon addition of 1,5-cyclooctadiene to the organic phase (1,6-dichlorohexane), the silver transfer shifted from the potential range where it was partially masked by the background ion transfer to a less positive potential, where a clear peak-like response was formed. [START_REF] Katano | Electrochemical Study of the Assisted Transfer of Silver Ion by 1 , 5-Cyclooctadiene at the 1 , 6-Dichlorohexane | Water Interface[END_REF] Work devoted to the assisted transfer of divalent copper cations by 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline ligands showed that the half-wave potential of the free ion transfer was shifted by around 400 mV to less positive potential values in the presence of the ligands in the organic phase. Among other examples, much attention was given to calix[4]arene synthetic ionophores, first applied by Zhan et al. [START_REF] Zhan | Electrochemical Recognition of Alkali Metal Ions at the Micro-Water 1,2-Dichloroethane Interface Using a calix[END_REF] towards alkali metal detection. Subsequent works from other groups - dealing with analogous ionophore derivatives - have shown that in the presence of alkali and alkaline-earth metals, selective detection at the ITIES can be slightly improved [START_REF] Kaykal | Synthesis and Electrochemical Properties of a Novel calix[4]arene Derivative for Facilitated Transfer of Alkali Metal Ions across water/1,2-Dichloroethane Micro-Interface[END_REF] or directed towards potassium (once 5,11,17,23-tetra-tert-butyl-25,27-bis(2'-amino-methylpyridine)-26,28-dihydroxy calix[4]arene was used) [START_REF] Durmaz | Voltammetric Characterization of Selective Potassium Ion Transfer across Micro-water/1,2-Dichloroethane Interface Facilitated by a Novel calix[4]arene Derivative[END_REF] or calcium cations (for 5,11,17,23-tetra-tert-butyl-25,27-diethoxycarbonylmethoxy-26,28-dimethoxy calix[4]arene). [START_REF] Bingol | Facilitated Transfer of Alkali and Alkaline-Earth Metal Ions by a Calix[4]arene Derivative Across Water/1,2-Dichloroethane Microinterface: Amperometric Detection of Ca(2+)[END_REF] O'Dwyer and Cunnane studied the facilitated transfer of silver cations with O,O''-Bis[2-(methylthio)ethyl]-tert-butyl calix[4]arene. [START_REF] O' Dwyer | Selective Transfer of Ag+ at the water|1,2-Dichloroethane Interface Facilitated by Complex Formation with a Calixarene Derivative[END_REF] Since the host-guest interactions of calix[4]arene ionophores are not limited to metal cations, it has been shown that modification with a urea group allows the detection of anions (phosphate, chloride and sulphate, with selectivity towards phosphate anions). [START_REF] Kivlehan | Study of Electrochemical Phosphate Sensing Systems: Spectrometric, Potentiometric and Voltammetric Evaluation[END_REF] An example of facilitated proton transfer was also given. Reymond et al. showed that in the presence of piroxicam derivatives in the organic phase, H+ undergoes transfer by an interfacial complexation/dissociation (TIC/TID) reaction. 33

Electron transfer across the ITIES

Two properly selected redox couples, O1/R1 and O2/R2, dissolved in the aqueous and the organic phase respectively, may under potentiostatic conditions lead to an electron transfer reaction across the ITIES.
[START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF]

$O_{1(aq)}^{z} + R_{2(org)}^{z} \rightleftharpoons O_{2(org)}^{z} + R_{1(aq)}^{z}$   (1.12)

For reaction 1.12, at constant temperature and pressure, the equilibrium of electrochemical potentials is given as:

$\tilde{\mu}_{O_1}^{0,aq} + \tilde{\mu}_{R_2}^{0,org} = \tilde{\mu}_{O_2}^{0,org} + \tilde{\mu}_{R_1}^{0,aq}$   (1.13)

Substitution of eq. 1.4 into eq. 1.13 and proper transformation yields the Nernst-like equation for the electron transfer reaction across the ITIES:

$\Delta_{org}^{aq}\Phi = \Delta_{org}^{aq}\Phi_{el}^{0} + \dfrac{RT}{F}\ln\dfrac{a_{O_2}^{org}\, a_{R_1}^{aq}}{a_{O_1}^{aq}\, a_{R_2}^{org}}$   (1.14)

where $\Delta_{org}^{aq}\Phi_{el}^{0}$ is the standard Galvani potential difference for the electron transfer from the aqueous to the organic phase, related to the standard Gibbs energy of the electron transfer reaction via equation 1.15:

$\Delta_{org}^{aq}\Phi_{el}^{0} = \dfrac{\Delta_{org}^{aq}G_{el}^{0}}{F}$   (1.15)

The electron transfer reaction at the ITIES depends on the relative reduction potentials of the two redox couples separated between the two immiscible phases, and hence can occur spontaneously (if the interfacial potential difference is high enough to trigger the redox reaction) or can be controlled potentiostatically. Studies of the electron transfer reaction have been performed with a variety of electrochemical techniques, for instance: SECM, [START_REF] Cai | Electron Transfer Kinetics at Polarized Nanoscopic Liquid/liquid Interfaces[END_REF][START_REF] Barker | Scanning Electrochemical Microscopy: Beyond the Solid/liquid Interface[END_REF] cyclic voltammetry, [START_REF] Geblewicz | Electron Transfer between Immiscible Solutions. The Hexacyanoferrate-Lutetium Biphthalocyanine System[END_REF][START_REF] Samec | Charge Transfer between Two Immiscible Electrolyte Solutions. Part IV. Electron Trnasfer between hexacyanoferrate(III) in Water and Ferrocene in Nitrobenzene Investigated by Cyclic Votammetry with Four-Electrode System[END_REF] ac impedance [START_REF] Cheng | Impedance Study of Rate Constants for Two-Phase Electron-Transfer Reactions[END_REF] etc. The interfacial electron transfer reaction was first observed by Samec et al. for the system composed of the hexacyanoferrate redox couple in the aqueous phase and ferrocene in the nitrobenzene phase: [START_REF] Samec | Detection of an Electron Transfer across the Interface between Two Immiscible Electrolyte Solutions by Cyclic Voltammetry with Four-Electrode System[END_REF]

$Fe(CN)_{6(aq)}^{3-} + Fc_{(org)} \rightleftharpoons Fe(CN)_{6(aq)}^{4-} + Fc_{(org)}^{+}$   (1.16)

Since that time, this particular type of interfacial charge transfer reaction has accompanied work focused on electrocatalysis, [START_REF] Rodgers | Particle Deposition and Catalysis at the Interface between Two Immiscible Electrolyte Solutions (ITIES): A Mini-Review[END_REF] photoinduced interfacial reactions [START_REF] Eugster | Photoinduced Electron Transfer at Liquid | Liquid Interfaces : Dynamics of the Heterogeneous Photoreduction of Quinones by Self-Assembled Porphyrin Ion Pairs[END_REF][START_REF] Fermìn | Organisation and Reactivity of Nanoparticles at Molecular Interfaces. Part II. ‡ Dye Sensitisation of TiO2 Nanoparticles Assembled at the Water/1,2-Dichloroethane Interface[END_REF] and interfacial modification with metals, polymers and metal/polymer deposits. The latter is discussed in more detail in subsections 3.1 and 3.3.

Electrochemically induced interfacial adsorption

At the ITIES, interfacial adsorption has been reported for the two following classes of species: (i) amphiphilic ions and (ii) large and multicharged species, e.g. dendrimers.
The study of amphiphilic molecule adsorption, e.g. of phospholipids, under applied potential conditions is most often performed by electrocapillary curve measurements. [START_REF] Zhang | Potential-Dependent Adsorption and Transfer of Poly(diallyldialkylammonium) Ions at the Nitrobenzene|water Interface[END_REF][START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF][START_REF] Samec | Dynamics of Phospholipid Monolayers on Polarised Liquid-Liquid Interfaces[END_REF] The adsorption of the phosphatidylcholine phospholipid from the organic phase and its complexation with different ionic species (K+, H+, Fe2+, Fe3+, $IrCl_6^{2-}$, $IrCl_6^{3-}$) from the aqueous solution was followed by cyclic voltammetry (which gave a characteristic triangular signal in the presence of adsorption) and by contact angle measurements at different interfacial potential values (the contact angle of the studied droplet tends to increase with polarization towards more positive potentials in the presence of phospholipid adsorption). [START_REF] Uyanik | Voltammetric and Visual Evidence of Adsorption Reactions at the Liquid-liquid Interfaces Supported on a Metallic Electrode[END_REF] Recently, dendrimers - repetitively branched molecules whose generation number grows with the number of molecular branches - have attracted a lot of scientific attention as drug carriers, [START_REF] Patri | Dendritic Polymer Macromolecular Carriers for Drug Delivery[END_REF] molecular gates, [START_REF] Perez | Selectively Permeable Dendrimers as Molecular Gates[END_REF] soft templates [START_REF] Hedden | Templating of Inorganic Nanoparticles by PAMAM/PEG Dendrimer -Star Polymers[END_REF] etc., whereas electrochemistry at the ITIES was shown to be a good electroanalytical tool for their determination. [START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces[END_REF][START_REF] Calderon | Electrochemical Study of a Dendritic Family at the water/1,2-Dichloroethane Interface[END_REF] It has been shown that dendrimers of growing size and charge exhibit complex behavior at the electrified liquid -liquid interface. Molecular dynamics simulations employed to study the adsorption of a model third-generation dendrimer at the liquid -liquid interface have shown that molecules possessing an amphiphilic structure have higher stability at the interface. [START_REF] Cheung | How Stable Are Amphiphilic Dendrimers at the Liquidliquid Interface?[END_REF] Interfacial adsorption rather than interfacial transfer was reported for the Poly-L-Lysine dendritic family (from generation 2 to generation 5) [START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF] and for higher generations of poly(amidoamine) and poly(propyleneimine) dendrimers. [START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces.pdf[END_REF] Interfacial adsorption was also voltammetrically observed for biomolecules such as insulin, [START_REF] Scanlon | Voltammetric Behaviour of Biological Macromolecules at Arrays of Aqueous|organogel Micro-Interfaces[END_REF] hen egg-white lysozyme [START_REF] Scanlon | Voltammetric Behaviour of Biological Macromolecules at Arrays of Aqueous|organogel Micro-Interfaces[END_REF] and hemoglobin.
[START_REF] Herzog | Electrochemical Behaviour of Haemoglobin at the Liquid/liquid Interface[END_REF] The last biomolecule, in addition to the adsorption process, was shown to facilitate the transfer of the anionic part of the organic phase supporting electrolyte, and consequently to decrease its Gibbs energy of transfer. Such species do not exhibit the characteristics of reversible transfer: (i) the peak-to-peak separation exceeds 0.059 V/n; (ii) the ratio of forward to reverse peak currents differs from 1; (iii) the current -concentration (calibration) curves deviate from linearity; and (iv) the reverse peak current is terminated with an abrupt current drop rather than a diffusion-limited tail. For surface-active molecule adsorption, (v) a so-called 'electrochemical instability', manifested by unrepeatable current spikes, was reported and is discussed in subsection 1.1.4. [START_REF] Kitazumi | Electrochemical Instability in Liquid-Liquid Two-Phase Systems[END_REF] Additionally, for hemoglobin, (vi) a thin layer film was visible at the ITIES after repetitive cycling. 56

1.1.3. Potential window and limiting current

A polarizable electrode is ideal when, in the absence of Faradaic processes, no current flows through it over the whole potential range. A current -potential characteristic of a polarizable electrode in real conditions is presented on

Similar behavior can be distinguished at the ITIES. In that case, the solid conductor is replaced with an immiscible electrolyte phase and the polarization becomes a purely ionic process. If we assume that the aqueous phase contains a highly hydrophilic salt (A+B-) of ideally zero solubility in the organic phase, and the organic phase contains a highly lipophilic salt (C+D-) of ideally zero solubility in the aqueous phase, then the ideally polarized interface can be described as an interface impermeable to charged particle transfer over the whole potential range. In other words, this means that the ions should possess infinite Gibbs energies of transfer, which of course is far from the truth. Under experimental conditions, a polarizable interface can be constructed between a highly hydrophilic salt dissolved in the aqueous phase (NaCl, LiCl etc.) and a highly lipophilic salt (tetrabutylammonium tetrakis(4-chlorophenyl)borate (TBA+TPBCl-)) dissolved in the organic phase. One or both phases can also be replaced with, respectively, a hydrophilic (1-butyl-3-H-imidazolium nitrate) [START_REF] Cousens | Electrochemistry of the Ionic Liquid|oil Interface: A New Water-Free Interface between Two Immiscible Electrolyte Solutions[END_REF] or a hydrophobic (1-octyl-3-methylimidazolium bis(perfluoroalkylsulfonyl)imide) 59 ionic liquid. A non-polarizable interface occurs when both immiscible phases contain at least one common ion, which can freely cross the interface. In the ideal case, the current originating from the common ion transfer in either direction should not induce any change in the interfacial potential difference. According to the Nernst-like equation for the ion transfer reaction (eq. 1.6), the potential distribution depends on the ionic species separated between the two immiscible phases. At a non-polarized interface, a binary electrolyte (1:1, 2:2 etc.) A+B-, or electrolytes containing a common ion - A+B-/A+C- - are distributed between the two phases, and hence the interfacial potential difference does not depend on their concentration.
[START_REF] Markin | Electrocapillary Phenomena at Polarizable and Reversible Interfaces between Two Immiscible Liquids: The Generalized Electrocapillary Equation in Hansen's Representation[END_REF][START_REF] Markin | Potentials at the Interface between Two Immiscible Electrolyte Solutions[END_REF] Systems employing the non-polarized interface are used to study the kinetics of electron transfer reactions.

1.1.4. Electrochemical instability

The original description of the electrochemical instability accounts, among other features, for the curvature of the electrocapillary curve and the location of the potential of zero charge with respect to $\Delta_{org}^{aq}\Phi^{0}$. [START_REF] Kakiuchi | Potential-Dependent Adsorption and Partitioning of Ionic Components at a Liquid|liquid Interface[END_REF][START_REF] Kakiuchi | Electrochemical Instability of the Liquid|liquid Interface in the Presence of Ionic Surfactant Adsorption[END_REF] The concepts of the electrochemical instability model did not assume the presence of specific adsorption of the ionic species, which is the case in real systems. Kakiuchi and Kitazumi improved the previous model, based on Gouy's double layer theory, by introduction of an inner layer localized between the two diffuse double layers. The new model describes the potential dependence of the interfacial capacitance, the excess charge of the ionic species in the aqueous phase and the shape of the electrocapillary curves in the presence of specific adsorption. [START_REF] Kitazumi | A Model of the Electrochemical Instability at the Liquid|liquid Interface Based on the Potential-Dependent Adsorption and Gouy's Double Layer Theory[END_REF][START_REF] Kitazumi | Electrochemical Instability in Liquid-Liquid Two-Phase Systems[END_REF] On cyclic voltammetry curves, in the presence of surface-active ions, the electrochemical instability manifests itself as irregular current spikes in the vicinity of the half-wave potential of the transferring species. These characteristics were confirmed for anionic surfactants (alkanesulfonate and alkyl sulfate salts), [START_REF] Kakiuchi | Cyclic Voltammetry of the Transfer of Anionic Surfactant across the Liquid-liquid Interface Manifests Electrochemical Instability[END_REF][START_REF] Kakiuchi | Regular Irregularity in the Transfer of Anionic Surfactant across the Liquid/liquid Interface[END_REF][START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF] a cationic surfactant (decylamine) [START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF][START_REF] Kasahara | Electrochemical Instability in the Transfer of Cationic Surfactant across the 1,2-Dichloroethane/water Interface[END_REF] and the facilitated transfer of alkaline-earth metals by the complexation agent polyoxyethylene(40)isooctylphenyl ether (Triton X-405). [START_REF] Kakiuchi | Electrochemical Instability in Facilitated Transfer of Alkaline-Earth Metal Ions across the Nitrobenzene|water Interface[END_REF] The abundance of current irregularities depends strongly on the experimental time scale and intensifies at lower scan rates. A low concentration of dodecanesulfonate (0.2 mM) [START_REF] Kakiuchi | Regular Irregularity in the Transfer of Anionic Surfactant across the Liquid/liquid Interface[END_REF] does not induce anomalies on cyclic voltammograms due to weak adsorption (fluctuations appear once the concentration reaches 0.5 mM). This confirms that the electrochemical instability is triggered once the surface coverage reaches a crucial value.
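Since the instability models above build on Gouy's diffuse-layer theory applied on both sides of the interface, a useful back-of-the-envelope quantity is the capacitance of two back-to-back Gouy-Chapman layers in series - the original Verwey-Niessen picture of section 1.1.1. The sketch below uses the standard low-potential (Debye) limit of that theory; the permittivities and concentrations are illustrative values, not parameters from the cited models.

```python
import math

EPS0, F, R = 8.854e-12, 96485.0, 8.314

def debye_capacitance(eps_r, c_mol_m3, z=1, t_kelvin=298.0):
    """Diffuse-layer capacitance per unit area (F/m^2) in the low-potential
    Gouy-Chapman limit: C = eps / L_D, with Debye length L_D."""
    eps = eps_r * EPS0
    l_debye = math.sqrt(eps * R * t_kelvin / (2.0 * (z * F) ** 2 * c_mol_m3))
    return eps / l_debye

def back_to_back(c_aq, c_org, eps_aq=78.0, eps_org=10.0):
    """Verwey-Niessen picture: two diffuse layers in series across the ITIES."""
    c1 = debye_capacitance(eps_aq, c_aq)
    c2 = debye_capacitance(eps_org, c_org)
    return c1 * c2 / (c1 + c2)

# 10 mM aqueous and organic electrolytes (mol/m^3 = mM); 1 F/m^2 = 100 uF/cm^2
print(f"{back_to_back(10.0, 10.0) * 100:.1f} uF/cm^2")  # ~6 uF/cm^2
```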
1.1.5. Miniaturization of the ITIES

In electroanalytical chemistry, miniaturization possesses two advantages over the macroscopic system. First of all, it improves the sensitivity as a result of the increased mass transfer to the solid -liquid or the liquid -liquid interface arising from the radial diffusion zone geometry. [START_REF] Scanlon | Enhanced Electroanalytical Sensitivity via Interface Miniaturisation: Ion Transfer Voltammetry at an Array of Nanometre Liquid-Liquid Interfaces[END_REF] The second characteristic lies in the interfacial surface area, which decreases as the system becomes smaller; this in turn lowers the capacitive current and improves the limits of detection. [START_REF] Collins | Ion-Transfer Voltammetric Determination of the Beta-Blocker Propranolol in a Physiological Matrix at Silicon Membrane-Based Liquid|liquid Microinterface Arrays[END_REF] In the case of the liquid -liquid interface, miniaturization additionally improves its mechanical stability, whereas developments in the field of lithographic techniques allow the design and fabrication of well-defined supports. At the ITIES, miniaturization was first performed by Taylor and Girault, who supported the liquid -liquid interface in a pulled glass tube with a 25 µm inner tip diameter. [START_REF] Taylor | Ion Transfer Reaction across a Liquid -Liquid Interface Supported on a Micropipette Tip[END_REF] Repeatable single-pore microITIES could also be prepared by using a metal wire of fixed diameter as a template, with a glass tube melted around it. Removal of the wire by etching releases the pore, which can subsequently be used to support the ITIES. [START_REF] Stockmann | Hydrophobic Alkylphosphonium Ionic Liquid for Electrochemistry at Ultramicroelectrodes and Micro Liquid|liquid Interfaces[END_REF] With the development of new technologies, the dimensions of the single ITIES could be further decreased down to the nanometer level - especially when the LASER pulling approach was employed. 81 The geometrical and voltammetric properties of the three kinds of ITIES are compiled in Table 1.2. It is worth noting that the miniaturized liquid -liquid interface can possess an asymmetrical diffusion zone profile on the two sides of the interface, as shown e.g. for the microITIES scheme in Table 1.2. In that case, mass transfer inside the pore is dominated by linear diffusion and hence the charge transfer reaction is a diffusion-limited process. At the pore ingress, mass transfer is enhanced by a hemispherical diffusion zone, which makes the charge transfer a non-diffusion-limited process. The application of arrays of geometrically regular nano- [START_REF] Scanlon | Ion-Transfer Electrochemistry at Arrays of Nanointerfaces between Immiscible Electrolyte Solutions Confined within Silicon Nitride Nanopore Membranes[END_REF] and microITIES, [START_REF] Zazpe | Ion-Transfer Voltammetry at Silicon Membrane-Based Arrays of Micro-Liquid-Liquid Interfaces[END_REF] as in the case of solid state electrochemistry, has given a better electroanalytical response, since under proper geometrical conditions the ensemble can be treated as a sum of the individual pores.

Table 1.2 - Geometrical and voltammetric properties of macro-, micro- and nanoITIES.

macroITIES: r >> δ; r in the mm-cm range; org → aq: linear diffusion; aq → org: linear diffusion.
Signal current: $I_{org \to aq} = I_{aq \to org} = 268600\, n^{3/2} A D^{1/2} C v^{1/2}$

microITIES: r < δ; r in the µm range; org → aq: linear diffusion; aq → org: radial diffusion.
Signal current: $I_{org \to aq} = 268600\, n^{3/2} A D^{1/2} C v^{1/2}$; $I_{aq \to org} = 4NnFDcr$

nanoITIES: r < δ; r in the nm range; org → aq: linear diffusion; aq → org: radial diffusion.
Signal current: $I_{org \to aq} = 4f(\Theta)nFDcr$ 81

Different characteristics of the regular arrays of microITIES are shown in
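The current expressions quoted in Table 1.2 make the behavior of miniaturized interfaces easy to estimate numerically. The sketch below evaluates the steady-state ingress current at an array of micro-interfaces, I = 4·N·n·F·D·c·r, alongside the Randles-Ševčík-type peak current of the macroscopic case; the numerical inputs are illustrative, and the array is assumed dilute enough that the individual diffusion zones do not overlap (the "sum of individual pores" condition above).

```python
import math

F = 96485.0  # Faraday constant, C/mol

def micro_array_current(n_e, d, c, r, n_pores):
    """Steady-state ingress current at an array of N inlaid micro-interfaces
    (Table 1.2): I = 4 N n F D c r. SI units: D in m^2/s, c in mol/m^3, r in m."""
    return 4.0 * n_pores * n_e * F * d * c * r

def macro_peak_current(n_e, area, d, c, scan_rate):
    """Randles-Sevcik-type peak current from Table 1.2:
    I = 2.686e5 n^(3/2) A D^(1/2) C v^(1/2)
    (A in cm^2, D in cm^2/s, C in mol/cm^3, v in V/s)."""
    return 2.686e5 * n_e ** 1.5 * area * math.sqrt(d) * c * math.sqrt(scan_rate)

# 100-pore array, 10 um pore radius, D = 1e-9 m^2/s, 1 mM analyte
print(micro_array_current(1, 1e-9, 1.0, 10e-6, 100))   # ~3.9e-7 A
# macroscopic interface: 1 cm^2, D = 1e-5 cm^2/s, 1 mM, 10 mV/s
print(macro_peak_current(1, 1.0, 1e-5, 1e-6, 0.01))    # ~8.5e-5 A
```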
Organosilicon compounds are those in which there is a covalent bond between a silicon and a carbon atom. Once two silicon atoms from two different organosilicon compounds are connected by an oxygen atom, a siloxane compound is formed. Polymers with a skeletal structure formed from siloxane units are called silicones. Siloxides are compounds with the general formula R3SiOM, where R is an organic group and M is a metal cation. Silicon alkoxides, or alkoxysilanes, are compounds of silicon and alcohols with the general formula Si(OR)4, where R is an organic substituent (for example tetramethoxysilane, tetraethoxysilane, tetrapropoxysilane etc.). [START_REF] Iler | The Chemistry of Silica: Solubility, Polymerization[END_REF] The chosen silicon containing compounds can be found on

The Sol -Gel process of silica

The Sol -Gel processing of silica and silicates is a two-stage process which involves (i) hydrolysis, with the final aim of reaching a sol (a dispersion of small particles in the liquid medium), and (ii) condensation, which results in the gel phase (a relatively rigid, polymerized and non-crystalline network possessing pores of different sizes). Apart from the hydrolysis and condensation steps, one has to take into account other reactions that might occur, as for instance silica dissolution at higher pH values. Moreover, the gel phase usually needs further post-treatment to cure the not fully cross-linked silica matrix in order to obtain the solid material. All these steps have been well studied and described, [START_REF] Lev | Sol-Gel Electrochemistry: Silica and Silicates[END_REF] hence only a brief description is given here.

Hydrolysis

The rate of hydrolysis of alkoxysilane species as a function of the pH is shown on Figure 1.9.

Condensation

The second step of the Sol -Gel process is the condensation reaction - see the red curve on Figure 1.9.

Dissolution

The sol -gel process is said to be inhibited once the dissolution rate exceeds the condensation rate. The dissolution rate versus pH is represented by the black line on Figure 1.9. Silicate or silica dissolution takes place under high pH conditions - using strong bases - or in the presence of hydrofluoric acid. The mechanism of dissolution is similar in both cases and involves a nucleophilic attack of $OH^-$ or $F^-$ on the positively charged silicon atom and subsequent cleavage of the Si-O bond. 88,87

Curing

The silica structure in the gel state is not yet rigid. The matrix is flexible and not fully cross-linked, and the reactions of hydrolysis, condensation and dissolution can still affect the final structure of the silica network. The completion of the condensation is performed with the drying process, which involves the following steps: (i) reduction of the gel volume; (ii) successive emptying of the liquid content from the pores, driven by capillary pressure; and (iii) diffusion of the residual solvent to the surface and its evaporation. The drying process is accompanied by the formation of cracks due to the high surface tension at the interface between empty pores and liquid-filled pores, which creates a large pressure difference. Crack formation can to some extent be overcome by the generation of thin materials, the use of surface active species decreasing the surface tension, drying under freeze-drying conditions or doping the silica with other, more 'elastic' materials. 88
Templates - towards surface engineering

The idea behind template technology is very straightforward and assumes the use of a geometrically well-defined 'mold', which is then used to grow the material of interest, usually on the basis of a bottom-up approach. In electrochemistry, electrode surface modification has many advantages: an increase in the electroactive surface area (once a conductive material is grown), enhancement of the mass transport via diffusion, or improvement of the catalytic efficiency owing to the increasing number of nucleation sites (especially for near-atomic ranges of roughness). The deposition of electrically insulating materials, as for instance porous silica, is of the highest interest from the analytical point of view, especially when the deposit exhibits some degree of selectivity towards the analyte in the presence of a contaminant. Depending on the pore dimensions, three main classes of porosity can be distinguished: (i) microporous materials - when the pore width is less than 2 nm; (ii) mesoporous materials - with pore widths between 2 and 50 nm; and (iii) macroporous materials - with pore widths greater than 50 nm. 89 Walcarius divided the templates into three main groups: hard templates, colloidal crystal assemblies and soft templates. [START_REF] Walcarius | Template-Directed Porous Electrodes in Electroanalysis[END_REF] Hard template is the term reserved for a solid porous substrate that is modified within the pores, the deposit being then released by removal of the surrounding pore walls (see

Silica or silicate materials can exhibit a dual role, since they have been used as templates (in the form of colloidal assemblies, for instance) [START_REF] Velev | Colloidal Crystals as Templates for Porous Materials[END_REF] and are themselves easily templated, as shown in the following subsection.

1.2.4. A soft template for a Sol-Gel process of mesoporous silica thin films

Evaporation Induced Self-Assembly (EISA) is a method which allows the formation of a silica film with controlled mesostructure and pore size by varying the chemical parameters and processing conditions. In general, in such a method, the sol solution (containing the template and the silica precursor species) is contacted with the solid support interface and the volatile components (e.g. H2O, EtOH and HCl) of the reaction medium are left to evaporate. Reducing the volume of the sol solution results in condensation of the silica precursor around the template matrix (which tends to form liquid crystal phases once its critical micelle concentration is reached). Once the solvent is evaporated, the silica film is formed. [START_REF] Grosso | Fundamentals of Mesostructuring Through Evaporation-Induced Self-Assembly[END_REF] A variety of processing methods were developed for the EISA process; examples include: dip-coating - with the substrate being immersed in the sol solution and subsequently pulled out at a known rate; [START_REF] Lu | Continuous Formation of Supported Cubic and Hexagonal Mesoporous Films by Sol -Gel Dip-Coating[END_REF] spin-coating - where a centrifugal force is used to spread the sol solution over the support; [START_REF] Etienne | Preconcentration Electroanalysis at Surfactant-Templated Thiol-Functionalized Silica Thin Films[END_REF] casting - when the sol is simply poured on the support and left for evaporation; [START_REF] Kong | Gel-Casting without de-Airing Process Using Silica Sol as a Binder[END_REF] or spraying - transferring the sol solution onto the material in the form of an aerosol.
[START_REF] Olding | Ceramic Sol-gel Composite Coatings for Electrical Insulation[END_REF] Under proper conditions, all the above methods allow the formation of ordered films with pores oriented horizontally to the support plane. Deposition of mesoporous silica at solid electrodes in a manner which does not exclude the electrical contact between the medium phase and the conductive substrate is challenging. Deposition of mesoporous silica films with high symmetry and controlled pore orientation - preferentially perpendicular to the substrate plane - is achievable by Electrochemically Assisted Self-Assembly (EASA). [START_REF] Walcarius | Electrochemically Assisted Self-Assembly of Mesoporous Silica Thin Films[END_REF][START_REF] Goux | Oriented Mesoporous Silica Films Obtained by Electro-Assisted Self-Assembly (EASA)[END_REF] In such an approach, the condensation of the silica precursor is catalyzed by OH- electrogeneration at a sufficiently low cathodic potential. Under these conditions, the cationic surfactants (cetyltrimethylammonium cations) present in the reaction medium form a hexagonally packed liquid crystal phase growing perpendicular to the electrode surface. The condensation of silica and the self-assembly of the soft template occur simultaneously. The resultant thin film, after thermal curing, shows a highly ordered silica network with the pores oriented normal to the underlying support.

Functionalized mesoporous silica films prepared by Sol-Gel processing

The introduction of chemical functionalities possessing different physico-chemical properties allows the chemical and physical properties of the material to be altered. This is especially important in analytical chemistry, since it improves the selectivity once the system is designed to favor the detection of the analyte and, in parallel, to inhibit the detection of the contaminant (e.g. based on charge, hydrophilic/hydrophobic or host -ligand interactions). The sensor becomes even more versatile when the surface nanoarchitecture is additionally adjusted. Such attributes are readily feasible for highly ordered mesoporous silica films with pores oriented normal to the surface plane prepared by the EASA method. [START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] Functionalized mesoporous silica prepared by the Sol -Gel process has two possible synthetic routes: (i) direct co-condensation with organosilanes and (ii) co-condensation and deposition followed by chemical reaction. The first method involves the hydrolysis of alkoxysilanes together with organosilanes bearing the functional group of interest. Electrochemical deposition then leads to functionalized silica film formation. Such an approach allowed the introduction of simple functionalities, for instance methyl - up to 40 mol%, 100 amine - up to 10 mol% [START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] or thiol - up to 10 mol% 101 groups, without loss of mesostructural order. The second method is composed of two steps. Initially, the organosilanes with the reactive organic group are co-condensed with the alkoxysilanes and electrodeposited at the solid electrode surface. The second step involves the reaction between the organic functionalities of the silica framework and a properly selected reagent. A pioneering example was developed by Vilá et al.
Functionalized mesoporous silica films prepared by Sol-Gel processing
The introduction of chemical functionalities possessing different physico-chemical properties allows the chemical and physical properties of the material to be altered. This is especially important in analytical chemistry, since it improves selectivity once the system is designed to favor detection of the analyte while, in parallel, inhibiting detection of a contaminant (e.g. based on charge, hydrophilic/hydrophobic or host–guest interactions). The sensor becomes even more versatile when the surface nanoarchitecture is additionally adjusted. Such attributes are readily accessible for highly ordered mesoporous silica films with pores oriented normal to the surface plane prepared by the EASA method. [START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] Functionalized mesoporous silica prepared by the Sol-Gel process has two possible synthetic routes: (i) direct co-condensation with organosilanes and (ii) co-condensation and deposition followed by chemical reaction. The first method involves hydrolysis of alkoxysilanes together with organosilanes bearing the functional group of interest; electrochemical deposition then leads to the formation of a functionalized silica film. Such an approach allowed the introduction of simple functionalities, for instance methyl (up to 40 mol%), 100 amine (up to 10 mol%) [START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] or thiol (up to 10 mol%) 101 groups, without loss of mesostructural order. The second method consists of two steps. Initially, organosilanes bearing a reactive organic group are co-condensed with alkoxysilanes and electrodeposited on the solid electrode surface. The second step involves the reaction between the organic functionalities of the silica framework and a properly selected reagent. A pioneering example was developed by Vilá et al., who electrogenerated azide-functionalized, oriented and ordered mesoporous silica films (for up to 40 mol% of azide-bearing silanes in the initial sol solution) that were further modified by alkyne-azide 'click' reaction with ethynylferrocene or ethynylpyridine. 102
Liquid – liquid interface modification
Liquid–liquid interfaces can be modified ex situ (with the material formed prior to its interfacial deposition) or in situ (when an interfacial reaction results in deposit formation). Electrochemistry at the ITIES can be the driving force for interfacial modification as well as a tool for deposit evaluation. One line of work applied the modified liquid–liquid interface to photovoltaic applications. 106 The liquid–liquid interface was modified with Au NPs or mirror-like Au films. Photo-excited meso-tetra(4-carboxyphenyl)porphyrin was located in the aqueous phase, whereas the ferrocene species were dissolved in dichloroethane. Irradiation of the modified interface at 442 nm – under total internal reflection conditions – led to an increase in photocurrent (resulting from electron transfer from ferrocene in the organic phase to the excited porphyrin in the aqueous phase – see Figure 1.11 for details), with significantly greater efficiency for the mirror-like deposit as compared with the Au NPs. 106 Other reductants for the Au precursor were also used. The interfacial reduction of AuCl4⁻, on its transfer from the organic phase containing the tri-(p-tolyl)amine (TPA) reductant to the aqueous phase, could be followed through the formation of metallic Au, confirmed with micro-focus X-ray absorption near-edge structure spectroscopy. [START_REF] Samec | Dynamic Electrochemistry at the Interface between Two Immiscible Electrolytes[END_REF] No spontaneous reaction was found to take place when AuCl4⁻ was initially dissolved in DCE together with TPA. Higher reactivity was found for the electron-transfer-induced reduction of AuCl2⁻, and hence it was used for Au NP deposition in further analysis. The effects of time and potential were examined, and the particle size distribution depended on the conditions applied: NPs as small as 3 nm were obtained at lower potentials and shorter deposition times, while the NP size increased up to 50 nm at higher potentials and longer deposition times. 110
One example from the group of Opallo emerges from an Au-modified three-phase junction system. 111 The cell was composed of an ITO electrode crossing the liquid–liquid interface formed between the organic phase containing a gold precursor salt – tetraoctylammonium tetrachloroaurate – and an aqueous solution of a hydrophilic salt – KPF6 – see Figure 1.12. The electrochemical reduction, which took place only at the three-phase junction, led to gold deposition:

AuCl4⁻(org) + 3e⁻ → Au(three-phase junction) + 4Cl⁻(org) (1.18)

which was coupled with an ion transfer reaction – transfer of the relatively hydrophobic PF6⁻ ions to the organic phase and expulsion of Cl⁻ into the aqueous phase:

Cl⁻(org) + PF6⁻(aq) → Cl⁻(aq) + PF6⁻(org) (1.19)

The NP deposition was performed by chronoamperometry. The size distribution (from 110 to 190 nm) and the shapes (mostly rounded particles) were found to be unaffected after the initial growth. Liquid–liquid interfaces modified with Au NPs have also found application as liquid mirrors. 112 The optical and electrochemical properties of the gold mirror-like films prepared from Au NPs were found to depend on the surface coverage, the NP size and the polarization of the incident irradiation as well as its wavelength. 113
It was found, for instance, that films prepared from 60 nm Au NPs at a surface coverage parameter of 1.1 (indicating the formation of one monolayer) exhibited the maximum reflectivity (for S-polarized light under green laser irradiation). Moreover, the films were found to be conductive starting from a surface coverage of 0.8 (as confirmed by SECM).
Ag deposition at the electrified ITIES
Interfacial electrodeposition of Ag films at the liquid–liquid interface in the so-called three-phase junction configuration was studied in a series of works by Efrima. 114,115,116 Deposition was performed by reduction of aqueous Ag⁺ in a conventional three-electrode set-up with a silver working electrode 'touching' the liquid–liquid interface. In this set of works the authors studied in detail the effects of the organic solvent, the concentration of the silver precursor, the silver reduction potential, the presence of surfactants, etc. on the morphology of the formed films. For instance, once the aqueous phase was contacted with low-surface-tension solvents, the deposits were thinner, shinier and more highly ramified. In contrast, the silver films electrogenerated at the interface between water and organic solvents of higher surface tension were black and required much longer deposition times. 116 Guo et al. studied the formation of Ag nanoparticles at miniaturized (nano- and micro-) ITIES. 117 Interfacial polarization allowed electron transfer from the organic phase containing BuFc to reduce Ag⁺ from the aqueous phase, with subsequent formation of Ag NPs. Single-particle deposition was shown to take place at microITIES smaller than 0.5 µm. In another approach, an Ag disc ultramicroelectrode was first oxidized above the liquid–liquid interface to give Ag⁺ ions, which were thereafter reduced to metallic Ag by decamethylferrocene dissolved in DCE. The heterogeneous electron transfer reaction can be written as:

Ag⁺(aq) + DecFC(org) → Ag(interface) + DecFC⁺(org) (1.20)

It was found that the interfacial potential difference established with the common-ion method had a weak effect on the nucleation and growth process. Ag deposition at the ITIES driven by free energy and ion transfer was reported by Schutz and Hasse. 119 The aqueous phase was a solution of AgNO3, whereas the organic phase (nitrobenzene or n-octanol) contained DecFC or ferrocene as reductant. The spontaneous transfer of NO3⁻ from the aqueous to the organic phase triggered the reduction of Ag⁺ (present in the aqueous phase) by the reductant from the organic phase, the reaction being driven by the charge neutrality of the system. For DecFC the reaction can be schematically written as:

DecFC(org) + Ag⁺(aq) + NO3⁻(aq) → DecFC⁺(org) + Ag⁰(interface) + NO3⁻(org) (1.21)

In subsequent work it was shown that dust pollutants from the air – adsorbing at the interface before the two phases are contacted – provide preferential sites for the nucleation process and led to deviations from the initially obtained whisker-like fiber morphologies. 120
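The nucleation arguments invoked here, and in the Pd/Pt subsection that follows (the existence of a critical particle radius), follow classical nucleation theory. As a textbook sketch – not a result taken from the works cited – the critical radius r* and nucleation barrier ΔG* for a spherical nucleus read:

$$ r^{*} = \frac{2\gamma V_m}{\Delta\mu}, \qquad \Delta G^{*} = \frac{16\pi\gamma^{3}V_m^{2}}{3\,\Delta\mu^{2}} $$

where γ is the nucleus–medium interfacial tension, V_m the molar volume of the deposit and Δμ the molar free-energy gain of the phase transformation; only nuclei with r > r* grow spontaneously, which is why adsorbed impurities that lower the effective barrier act as preferential nucleation sites.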
An interesting approach was also introduced for Ag NP electrodeposition at the liquid–liquid interface separating a bulk aqueous phase from a thin-layer organic phase.
Pd and Pt deposition at the electrified ITIES
Deposition of metallic Pd and Pt NPs at the ITIES was usually performed by potentiostatic interfacial reduction of a metal-chloro-complex present in the aqueous phase via heterogeneous electron transfer from an organic phase containing a reductant – for instance 1,1'-dimethylferrocene 122 or butylferrocene. 123 Electrochemically induced liquid–liquid interface modification with both metals attracted much attention with respect to mechanistic studies of the nucleation process. Johans et al. proved, on the basis of a theoretical model, that Pd NP nucleation is free from preferential nucleation sites. 124 The size of the Pd particles was also found to affect their surface activity, and quantitative thermodynamic considerations showed that only particles exceeding a critical radius can stay at the interface and grow further. When the interfacial tension was lowered by the adsorption of phospholipid molecules, the following changes were noticed: (i) the nucleation kinetics slowed down significantly (the growth kinetics was unaffected); (ii) the critical radius, and consequently the size of particles of the same surface activity, had to increase; and (iii) more energy was required to trigger the electron transfer reaction. 125 In order to eliminate the NP agglomeration found in all previous works, the ITIES was miniaturized using alumina membranes with mean pore diameters of 50 nm 126 and 100 nm. 127 Interestingly, the growth of NPs was observed only in some of the pores, which was explained by an autocatalytic effect following interfacial nucleation. Another explanation of this behavior was given elsewhere. 136,137,138
An interface modified with a phospholipid monolayer can be studied through simple ion transfer across this artificial half of a biological membrane. For instance, adsorption of phosphatidylcholines at the water–1,2-dichloroethane interface had almost no effect on the transfer of tetraethylammonium cations. 139 As discussed in the cited work, phospholipids at the ITIES form 'island-like clusters' which partially cover the interface, and the electrochemical signal is due to the transfer of electroactive species through cluster-free domains. In order to control the compactness of the adsorbed monolayer, surface pressure control – with the Langmuir trough technique – was introduced as an additional degree of freedom. 140,141 Even though the monolayer quality could be controlled by lateral compression, the large planar area gave rise to a potential distribution and became unstable owing to dissolution of the adsorbed phospholipids in the organic phase. To overcome these difficulties, the Langmuir trough used to control the surface pressure of the adsorbed phospholipid monolayer served as the aqueous half of the electrochemical cell; the second, organic phase was a specially designed PTFE cell containing gelled o-nitrophenyloctylether (o-NPOE)–poly(vinyl chloride) (PVC), which was immersed into the monolayer, resulting in a gel–liquid interface.
Direct electropolymerization at such interfaces proceeds through oxidative coupling of monomers (Mon) by an aqueous Ce⁴⁺ oxidant; oligomerization releases protons:

2Mon⁰(org) → 6Mon(org) + 2H⁺ (1.24)

Elongation of the oligomers lowers their oxidation potential, which results in the formation of an oligomeric cation radical:

Ce⁴⁺(aq) + 6Mon(org) → Ce³⁺(aq) + 6Mon•⁺(org) (1.25)

which, after coupling, precipitates at the liquid–liquid interface:
2(6Mon•⁺(org)) → 12Mon(precipitate) + 2H⁺ (1.26)

The roughness of the polymer film was found to be of the order of tens of nanometers, as measured by AFM. 154 Direct electropolymerization at the liquid–liquid interface was also reported for polythiophene. The oxidation potential of the monomer was found to be > +1.85 V and, for the studied system (the aqueous phase was a 0.1 M Ce⁴⁺/0.01 M Ce³⁺ redox couple with 0.2 M H2SO4 and 0.1 M Li2SO4, whereas the organic phase was a solution of 1 mM TPAsTPBF and 1 mM 2,2':5',2''-terthiophene), it was overlaid with a background current limiting the potential window. The presence of the growing polymer layer additionally inhibited the transfer of the organic-phase electrolyte ions (TPAs⁺ on the negative side and TPBF⁻ on the positive side of the potential window), which was attributed to the formation of a physical obstacle. 155
In a related approach, AuCl4⁻ transferred to the aqueous phase undergoes homogeneous electron transfer with the monomer species:

AuCl4⁻(aq) + 3H-Mon(aq) → Au(interface) + 3Mon•⁺(aq) + 3H⁺ + 4Cl⁻ (1.27)

Evolution of the radical cation in the aqueous phase triggers the polymerization reaction:

Mon•⁺(aq) + xMon(aq) → Dimer(aq) → Oligomer(aq) → Polymer(interface) (1.28)

The formation of a compact layer at the ITIES was confirmed by a blocking effect towards ions crossing the interface upon subsequent voltammetric cycling (attributed to growth of the polymer thickness). The growth of the film and the size distribution of the embedded Au NPs were studied under different experimental conditions, i.e. the type of monomer, the gold precursor to monomer concentration ratio, 157 the pH of the aqueous phase 157 and the interfacial Galvani potential difference. 158 The results are compiled in Table 1.5.

Table 1.5. Deposition conditions and resulting Au NP sizes for polymer/Au co-deposits at the ITIES.
Method | Conditions | pH | Au NP size | Ref.
CV | [AuCl4⁻] = 0.2 mM, [Tyramine] = 1 mM | 10 | ~15 nm | [157]
CV | [AuCl4⁻] = 1 mM, [Tyramine] = 0.5 mM | 10 | ~15 nm | [157]
CV | [AuCl4⁻] = 1 mM, [Tyramine] = 1 mM | 10 | ~15 nm | [157]
CV | [AuCl4⁻] = 0.2 mM, [Tyramine] = 1 mM | 12 | ~5 nm | [157]
CV | [AuCl4⁻] = 1 mM, [Tyramine] = 0.5 mM | 12 | ~5 nm | [157]
CV | [AuCl4⁻] = 1 mM, [Tyramine] = 1 mM | 12 | ~5 nm | [157]
IDPM, Δ(aq/org)φ = 0.123 V | [AuCl4⁻] = 0.2 mM, [Tyramine] = 1 mM | – | ~7 nm (uniform shapes) | [158]
IDPM, Δ(aq/org)φ = 0.440 V | [AuCl4⁻] = 0.2 mM, [Tyramine] = 1 mM | – | ~60 nm (presence of nanorods) | [158]
IDPM, Δ(aq/org)φ = 0.123 V | [AuCl4⁻] = 1 mM, [Resorcinol] (concentration not stated) | – | – | [158]

Carbon-based materials at the liquid – liquid interface
Upon irradiation with a light beam (454 nm), the photocurrent associated with the electron transfer reaction between a dye entrapped in a polypeptide multilayer and the electron acceptor in the organic phase was about 8 times larger than that of a planar glassy-carbon disc electrode modified in the same manner. The resulting photocurrent increase was associated with the high specific surface area of the carbon foam electrode. As mentioned by the authors, the carbon material, owing to its high absorption, could be replaced with other – transparent or reflective – electrodes to further increase the photovoltaic efficiency. A recently communicated report from Dryfe and coworkers described the catalytic effect of few-layer graphene and single-wall carbon nanotubes at the modified ITIES on the interfacial electron transfer reaction (between 1,1'-dimethylferrocene in the organic phase and ferricyanide in the aqueous phase). 163 The interfacial electron transfer reaction can be schematically written as:
DMFc(org) + Fe(CN)6³⁻(aq) ⇌ DMFc⁺(org) + Fe(CN)6⁴⁻(aq) (1.29)

The electron transfer reaction in the presence of a few-layer graphene film at the ITIES gave rise to a higher faradaic current, which was attributed to the increase in the effective interfacial area. A similar catalytic effect was found for the carbon nanotube (CNT)-modified ITIES; the increase in electron transfer current was in this case attributed to the increase in the active surface area and/or to a doping-charging effect of the CNTs. It was also shown that Pd metal nanoparticles can easily be prepared on the carbon-modified ITIES by spontaneous or potential-induced electron transfer. The size of the particles, depending on the support, oscillated around 10–20 nm for CNTs and 20–40 nm for graphene flakes. The electrified liquid–liquid interface modified with carbon-based materials was also developed for catalytic water splitting – the hydrogen evolution reaction (2H⁺ + 2e⁻ → H2↑). The two-electron transfer reaction between protons from the aqueous phase and an electron donor from the organic phase can be controlled with the interfacial Galvani potential difference. A recent article by Ge et al. described the effect of graphene and mesoporous carbon doped with MoS2 nanoparticles at the ITIES on interfacial proton reduction catalysis. 164 The authors showed that modification of the liquid–liquid interface separating tetrakis(pentafluorophenyl)borate anions in the aqueous phase (used as proton pump and potential-determining ion) and the electron donor – DecMFc – in the organic phase with graphene-supported MoS2 leads to a 40-fold increase in the hydrogen evolution rate, and to an over 170-fold increase when mesoporous carbon was used as the conductive support. Similarly, a 1000-fold increase in reaction rate was observed when the ITIES was modified with Mo2C-doped multiwalled carbon nanotubes under similar conditions. 165 The choice of carbon material as a scaffold for nanoparticle deposition is not surprising, as (i) it provides a large dispersion of the nanoparticles acting as catalytic sites for the proton reduction reaction, (ii) it possesses a high specific surface area and (iii) the carbon supplies the interfacial region with extra electrons.
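As a hedged summary of the biphasic scheme behind these rate comparisons (the stoichiometry follows the general description in the cited works, not a verbatim equation from them), the catalyzed two-electron process with DecMFc as donor can be written as:

$$ 2H^+_{(aq)} + 2\,\mathrm{DecMFc}_{(org)} \xrightarrow{\ \text{catalyst at the ITIES}\ } H_2\uparrow + 2\,\mathrm{DecMFc}^+_{(org)} $$

with the interfacial Galvani potential difference setting both the proton availability on the organic side and the driving force of the electron transfer.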
Silica modified liquid – liquid interface
The following section is devoted to examples of silica materials synthesized or deposited at, or in the close vicinity of, the liquid–liquid interface. Silica, thanks to its attractive physicochemical properties (it is biocompatible, chemically inert, easily functionalized, allows the formation of multi-scale porous materials, is an insulator, etc.), has found applications in a range of scientific directions: drug delivery systems, 166 multi-scale porous membranes, 167 supports for the immobilization of enzymes 168,169 and proteins, 170 high-specific-surface-area sorbents and gas sensors 172 – these being only 'the tip of the iceberg' among other examples. Much attention has been paid during the last few decades to air–liquid 173 and liquid–solid [START_REF] Goux | Oriented Mesoporous Silica Films Obtained by Electro-Assisted Self-Assembly (EASA)[END_REF]174 interfaces modified with mesoporous silica. Insight into the field has led to the development of a range of methods allowing morphological control and the design of well-defined silica materials.
To date, modification of the interface separating two immiscible liquids with silica materials is restricted to emulsions, microemulsions and a few dozen examples emerging from the neat liquid–liquid interface, among which only a few concern ITIES modification. The following subsections focus on the latter, covering three-phase junction systems, non-polarized planar liquid–liquid interfaces and, finally, the ITIES.
Three phase junction systems
Once the liquid–liquid interface is contacted with a solid electrode, a three-phase junction system is formed. In principle, in such systems the electrochemical reaction at the solid support is followed by ion transfer across the neighboring liquid–liquid interface. A three-phase junction with one of the phases being a solid electrode modified with silicate material was studied in the group of Opallo. 175 The first report dealt with ex situ Au electrode modification with silicate films, on the surface of which a small droplet of a hydrophobic redox liquid (t-butylferrocene) was placed and thereafter covered with the aqueous phase. The silica modification was performed in order to minimize transfer of the relatively hydrophilic t-butylferrocenium cation (generated upon electrooxidation) to the aqueous phase. The best effect was observed when the electrode was premodified with mercaptotrimethoxysilane. An electrochemically assisted Sol-Gel process employing the three-phase junction was also reported, 176 in which a local pH decrease driving silica condensation was generated by anodic sulfite oxidation:

SO3²⁻ + H2O → SO4²⁻ + 2H⁺ + 2e⁻ (1.30)

The n-octyltriethoxysilane, once hydrolyzed, transferred across the liquid–liquid interface and condensed at the ITO electrode. The resulting stripe width ranged from 10 µm to 70 µm, depending on the method used (cyclic voltammetry or chronoamperometry) and the electrodeposition time. The stripe thickness was largest (~100 nm) on the aqueous side of the interface.
Neat, non-polarized liquid – liquid interface in situ modification with silica materials
The first reports dealing with silica synthesis at the planar liquid–liquid interface involved the acid-prepared mesostructures method. 177 This approach consists of synthesizing silica at the interface between an aqueous surfactant (structure-directing agent) solution, with the pH adjusted to be considerably below the isoelectric point of the silica species, and an organic solvent – typically hexane, decane or methylene chloride – containing the silica precursor. Self-assembly of the template and the inorganic species at the oil/water interface follows the S⁺-X⁻-I⁺ mechanism (where S⁺ is the cationic surfactant, X⁻ is the halide anion and I⁺ is the positively charged inorganic species), controlled by hydrogen-bonding interactions. The silica precursor is hydrolyzed at the oil/water interface. The hydrolyzed species, being more hydrophilic than hydrophobic, transfer to the aqueous phase and undergo self-assembly with the abundant cationic surfactants. As the reaction proceeds, the film grows towards the aqueous side of the interface. The morphology of the film differs depending on the side of the interface: on the organic side the silica film consists of ball-shaped beads with diameters up to 100 µm, whereas the features on the aqueous side are much smaller and more varied. An analogous experimental approach was employed at the oil/water interface to grow mesoporous silica fibers.
178,179 The formation of the desired fiber shapes requires control of a variety of factors: the pH of the aqueous phase, the template and precursor concentrations, the nature of the organic phase and the use of a co-solvent (changing the kinetics of the hydrolyzed silica species crossing the interface). The resulting silica fibers – with lengths up to 5 cm, diameters from 1 µm to 15 µm and high mesoporous quality (hexagonal arrangement, narrow pore size distribution, pores oriented parallel to the fiber axis and a high specific surface area of 1200 m²/g) – were obtained. Regev et al. studied silica film formation at the heptane/water interface under acidic and basic conditions. 180 The aqueous phase contained CTACl or CTAB in acidic and basic media, respectively; the counter phase was always a mixture of TEOS and heptane. Different morphologies were found on the two sides of the silica film: for acidic media the roughness of the synthesized film on the aqueous side was of the order of 50 nm, whereas on the organic side it was 3 nm (as measured by AFM). For silica films prepared from basic media, both aging and curing time affected the film mesostructure: (i) aging times <6 h resulted in poor order and an undefined crystalline phase; (ii) aging times >6 h generated a cubic structure and better crystalline order; and (iii) long curing times (>15 h) led to structure collapse. Fabrication of ordered silica-based films at the liquid–liquid interface can also be achieved by silica sphere deposition; an example of such an approach was given by He et al. 181 In related studies, (i) CTA⁺-covered silica nanoparticles were dispersed in the aqueous phase and contacted with hexane, 182 and (ii) silica nanoparticles dispersed in the aqueous phase were separated from hexane containing octadecylamine (ODA). 183 Adsorption of CTA⁺ on the SiO2 nanoparticle surface tunes their properties and makes them partially hydrophobic, which facilitates their interfacial deposition. A rearrangement of the SiO2/CTA⁺ layer immediately after deposition was also observed and attributed to interactions among the alkyl chains and/or with the organic phase. The second system (ii) is different. In the presence of ODA in the organic phase, three distinct stages were proposed for increasing surfactant concentration. First, at low ODA concentrations the interface coverage was low, and electrostatic (and probably hydrogen-bonding) interactions were not sufficient to induce SiO2 adsorption. The partition of ODA between water and hexane, and its adsorption on the SiO2 surface, became visible close to saturation of the oil–water interface with the surface-active molecules: silica nanoparticles 'modified' with ODA started to adsorb at the liquid–liquid interface and changed its viscoelastic properties (higher interfacial tension values were observed as compared with the system in the absence of silica nanoparticles). Finally, at high ODA concentrations the interface was fully occupied by the SiO2-ODA spheres, which probably (no experimental evidence was given) self-assemble into a densely packed layer at the liquid–liquid interface. An approach dealing with silica film generation at the liquid–liquid interface with anisotropic properties (also referred to as Janus properties) was first reported by Kulkarini et al. 184 The interface was formed between an aqueous ammonia solution and heptane containing methyltrimethoxysilane. The basic pH of the aqueous phase led to silica precursor hydrolysis and its interfacial condensation.
Films were directly collected from the interface or supported on a porous material (suspended in the organic phase so that its bottom edge barely touched the aqueous phase) and cured in the oven. For the highest methyltrimethoxysilane/heptane molar ratio – 0.4 – the superhydrophobic properties of the organic side of the silica film (contact angle ~152°) were attributed to the high methyl-group surface coverage and to its morphology (a thick film with roughness on the organic side). The aqueous side of the film was hydrophilic, as confirmed by contact angles of ~65°. NH4OH was found to affect film growth and the wetting properties at concentrations below 0.1 M. It appears that the addition of cationic (CTAB) or anionic (SDS) surfactants at concentrations above the CMC to the aqueous phase did not change the wetting properties of the organic side of the silica film; only CTAB slightly decreased the hydrophilicity of the aqueous side (the contact angle increased from 65° in its absence to 75° in its presence). The presence of charged surfactants (especially CTAB) could additionally have driven mesostructure formation; however, no experimental effort was made in this direction. Further examples of Janus-type silica films generated at a static liquid–liquid interface were studied by Biswas and Rao. 185 Films were grown at the interface between toluene (containing the silica precursor) and water (at acidic pH, facilitating precursor hydrolysis). Three silica precursors were used: tetraethoxysilane, hexadecyltrimethoxysilane and perfluorooctyltrimethoxysilane. On the micrometer scale, the organic side of the silica film always had a rougher structure than its aqueous counterpart. Contact angle measurements showed that the hydrophilicity of the aqueous side of the silica films remained unaffected (~70°) for all three precursors, whereas the contact angle on the organic side varied with the hydrophobic character of the precursor. The results indicated slight hydrophobicity for the tetraethoxysilane-based organic side (contact angle = 92°) as compared with the very hydrophobic hexadecyltrimethoxysilane- and perfluorooctyltrimethoxysilane-based organic sides (contact angles = 140° and 146°, respectively). 185 A compilation of the different synthetic approaches leading to neat liquid–liquid interface modification with silica materials can be found in Table 1.6.
Electrified Interface between Two Immiscible Electrolyte Solutions modification with silica materials
The examples of electrified liquid–liquid interface modification with silica materials are based on ex situ and in situ approaches.
1.3.5.3.1. Ex situ modification
In recent years, Dryfe et al. published a series of papers focused on ex situ modification of the ITIES with silicalite materials. In the pioneering work, 186 for the first time, they modified the ITIES with an inorganic material – namely a non-polar zeolite (the zeolite membrane is schematically depicted in Figure 1.20 A). The ex situ modification employs material initially grown on a mercury surface, which after proper treatment was fixed to a glass tube using silicone rubber. By this interfacial modification the authors were able to slightly increase the potential window as compared with the unmodified system. The size of the pinholes within the zeolite network allowed size-selective ion transfer across the interface. Two model ions of different size were employed, namely TMA⁺ and TEA⁺.
Cyclic voltammetry showed that TMA⁺ transfer (hydrodynamic radius 0.52 nm) remained unaffected in the presence of the zeolite membrane, whereas the transfer of the larger ion (TEA⁺, hydrodynamic radius 0.62 nm) was suppressed. It has to be mentioned that the TMA⁺ backward transfer (from the organic to the aqueous phase) was accompanied by adsorption within the zeolite network, as indicated by the current drop and the partial loss of reversibility. In the second approach 187 it was shown that a zeolite-framework-modified ITIES can significantly increase the potential window. The key requirement is the use of background-limiting cations and anions with sizes exceeding the zeolite framework pores. This was achieved by employing tetrabutylammonium tetraphenylborate (TBA⁺TPB⁻) as the organic electrolyte. The standard transfer potentials of the organic electrolyte ions were lower for the cation and higher for the anion as compared with the standard transfer potentials of the aqueous salt ions, and hence the potential window – at the unmodified ITIES – was limited by the transfer of the supporting organic electrolyte ions. The diameter of TPB⁻ (0.84 nm) is comparable to the pore diameter of silicalite (0.6 nm). The pore size of the zeolite network prevented interfacial transfer of the organic electrolyte anion and, as a result, elongated the potential window, which then became limited by the transfer of the smaller inorganic aqueous electrolyte ions. The silicalite membranes were immersed for ten minutes in the aqueous phase prior to each electrochemical measurement in order to saturate the membrane with the aqueous solution. The increase in the potential window on the more positive side is in agreement with the size and charge of the organic anion (TPBCl⁻), which is unable to cross the pores. Finally, with the help of inductively coupled plasma optical emission spectrometry – used to determine the concentration of TEA⁺ in the zeolite-Y-modified ITIES – and cyclic voltammetry, the authors estimated the apparent diffusion coefficient of TEA⁺ within the membrane. The parameter was calculated from the Randles–Ševčík equation and ranged from 1.9 × 10⁻⁸ to 3.8 × 10⁻⁸ cm²/s.
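For reference, the Randles–Ševčík relation used for this estimate links the voltammetric peak current to the diffusion coefficient; in its standard form for a reversible transfer at 25 °C (a textbook expression, not quoted from the cited work):

$$ i_p = 0.4463\, z F A c \left( \frac{z F v D}{R T} \right)^{1/2} $$

where z is the charge of the transferred ion, A the interfacial area, c the bulk concentration, v the scan rate and D the diffusion coefficient, so that D follows from the slope of i_p versus v^{1/2}.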
Ion transfer voltammetry at the zeolite-Y-modified ITIES can also be used to change the chemical properties of the silicalite material; one example is proton exchange with sodium in the membrane framework immersed in an acidic solution. 189 Another example, developed in the group of Chen, 190 emerges from ex situ modification of a macroporous polyethylene terephthalate (PET) membrane with randomly distributed pores 500 nm in diameter and 5 µm in height (see Figure 1.20 B for a schematic). Silica deposition inside the pores was performed by the aspiration-induced infiltration method 191 from an acidic solution containing CTAB and TEOS as template and precursor, respectively. The as-prepared membrane was used to support the liquid–liquid interface (an ITIES separating a 0.1 M KCl aqueous solution and 0.02 M BTPPA⁺TPB⁻ in DCE). The authors claimed that the silica inside the PET macropores formed channels directed along the pores. The average pore diameter was estimated to be 3 nm (based on the N2 adsorption/desorption method), which is suitable for macromolecule sieving. Three species were employed to study the permeability of the silica-modified macropores by ion transfer voltammetry: TEA⁺, K⁺ (with a crown ether present in the organic phase) and a biomolecule, Cytochrome c. The potassium ion, in particular, did not encounter any resistance to mass transfer in the presence of the silica membrane. TEA⁺ also gave rise to a signal, which was partially hindered by TPB⁻ transfer (the use of species giving a wider potential window could improve the interpretation of the voltammetric data). No signal was detected for Cytochrome c, whose size apparently exceeded the entrance dimensions of the silica channels, so that it was retained in the aqueous phase. Similar conditions were used for folic acid detection. 192 The analytical study allowed characterization of folic acid transfer across the silica-modified PET membrane with and without CTAB species inside the silica channels. When the liquid–liquid interface was formed between 10 mM NaCl (aq) and 20 mM BTPPA⁺TPBCl⁻ (org) in dichlorohexane, the interfacial transfer of folic acid was better pronounced in the presence of CTAB within the silica network (a phenomenon attributed to an anion exchange process between the bromide of CTAB and the folic acid). The presence of alkyl chains inside the silica channels changed the polarity of the interior of the initially hydrophilic pores, which shifted the position of the interface from the pore ingress to its interior. To overcome this problem, low-molecular-weight poly(vinyl chloride) was used to gelify the organic phase. This approach first stabilized the position of the interface and, second, allowed the use of stripping techniques, which further improved the limit of detection of folic acid (from 100 µM for CV at the CTAB-doped silica-modified PET membrane at the water–dichlorohexane interface down to 80 nM for differential pulse stripping voltammetry at the CTAB-doped silica-modified PET membrane at the water–organogel interface).
1.3.5.3.2. In situ modification
In situ, electrochemically controlled silica film formation at the ITIES was reported for the first time in 2003 by Mareček and Jänchenová. 193 The method involved a Sol-Gel process with the template and the precursor separated between the organic and the aqueous phase, respectively. During silica film formation – by cyclic voltammetry – the trimethyloctadecylammonium cation (TODA⁺), used simultaneously as template and as the cationic part of the organic electrolyte, was transferred to an aqueous solution of water glass containing Na2O and SiO2. The interfacial transfer was followed by the template–precursor self-assembly reaction and finally by silica film deposition at the liquid–liquid interface, according to a reaction of the type:

4TODA⁺(o) + …

Chapter II. Experimental part
The aim of the following part is to supply the reader with all the experimental, technical and instrumental details concerning this work. At the very beginning, the full list of chemicals used in this study is given. The next section describes the electrochemical set-ups, including: (i) the cell used to study reactions taking place at the macroscopic ITIES, (ii) the cell and the membrane used for miniaturization and, finally, (iii) the system employed to probe local pH changes induced by ion transfer and UV irradiation. The compositions of the cells during the electrochemical measurements, as well as information concerning the preparation of the aqueous and the organic phases prior to interfacial silica deposition, are included in a separate section. Next comes the information concerning instrumentation.
This chapter concludes with a set of step-by-step protocols dealing with: (i) the organic synthesis of the chemicals prepared for the purpose of this work, (ii) the preparation of the organic counter electrode and (iii) the microcapillaries used to support the liquid–liquid interface.
Chemicals
Table 2.1 gives the full list of the chemicals used in this work. For each chemical, its name, abbreviation, source and function are given. The chemicals synthesized in this study were prepared according to the protocols available in sections: 1 – 2.5.1; 2 – 2.5.3; 3 – 2.5.2; 4 – Appendix II; 5 – 2.5.7 and 6 – 2.5.4.
Electrochemical set-ups
Different set-ups were used in this work; their descriptions are divided in agreement with the successive parts of the thesis, i.e. macroscopic ITIES modification with silica material, microscopic ITIES modification with silica material and, finally, local pH changes at the ITIES induced by ion transfer and UV irradiation.
(i) Electrochemical cells supporting the macroITIES
The custom-made electrochemical cells used to study electrochemical silica deposition at the macroscopic ITIES are presented in the corresponding figure. RE(org) and RE(aq) correspond to the organic and aqueous reference electrodes, respectively, whereas CE(org) and CE(aq) correspond to the organic and aqueous counter electrodes, respectively. The numbers stand for: 1 – the aqueous phase; 2 – the organic phase; 3 – the supporting aqueous phase; 4 – the liquid–liquid interface between the higher-density phase and the supporting aqueous phase; and 5 – the ITIES. The interfacial surface area was 1.13 cm² for cell A and 2.83 cm² for cell B.
(ii) Electrochemical set-up supporting the microITIES
The electrochemical cell used for the study of electrochemical interfacial silica deposition and for the electroanalytical evaluation of the silica deposits consisted of a glass vessel and a glass tube, to the end of which a silicon wafer with an array of microITIES was attached (the silicon wafer was fixed to the glass tube using silicone acetate sealant from Rubson®, resistant to DCE). The glass vessel was filled with the aqueous phase, whereas the glass tube was filled with the organic phase. A silver wire acting as both organic reference and organic counter electrode was placed directly in the organic phase. Prior to contacting the two phases, the organic phase was placed in the glass tube and left for one minute in order to impregnate the array of pores with the organic solution. Once the silver wire was placed in the organic phase, the upper hole of the glass tube was tightly closed. The silicon membrane supporting the array of microITIES (see Figure 2.2 B) was fabricated from a silicon wafer as a 4×4 mm square. The pores were patterned by UV-photolithography and pierced by a combination of wet and DRIE etches, as described elsewhere. [START_REF] Zazpe | Ion-Transfer Voltammetry at Silicon Membrane-Based Arrays of Micro-Liquid-Liquid Interfaces[END_REF] The fabrication process led to hydrophobic pore walls; the pores were therefore filled with the organic phase, presenting an array of inlaid microITIES. The characteristics of the silicon wafers used in this work are shown in Table 2.2. The pore height was constant, 100 µm for all wafers.
(iii) Electrochemical set-up used to study local pH changes
For the electrochemical study of PH⁺ ion transfer, the cell with the macroscopic ITIES was used, with the pH probe positioned in such a way that the distance between the surface of the wafer and the electrode tip was 1 µm. The UV irradiation source was directed toward the interface using a flexible optical fiber.
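For the local pH probe, the raw measured quantity is an electrode potential; a minimal conversion sketch, assuming a linear (Nernstian-type) calibration of the pH microelectrode against standard buffers, is given below. The slope, intercept and readings are hypothetical illustration values, not data from this work.

```python
def ph_from_potential(e_mv: float, slope_mv_per_ph: float, e0_mv: float) -> float:
    """Convert a measured microelectrode potential (mV) to pH using a
    linear calibration E = e0 - slope * pH, i.e. pH = (e0 - E) / slope."""
    return (e0_mv - e_mv) / slope_mv_per_ph

# Hypothetical two-point calibration (pH 4 and pH 7 buffers):
# E(pH 4) = 180 mV, E(pH 7) = 3 mV  ->  slope = 59 mV per pH unit
slope = (180.0 - 3.0) / (7.0 - 4.0)   # 59 mV/pH
e0 = 180.0 + slope * 4.0              # 416 mV
print(ph_from_potential(e_mv=62.0, slope_mv_per_ph=slope, e0_mv=e0))  # -> 6.0
```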
Composition of electrochemical cells. The aqueous and the organic phase preparation
The electrochemical set-up used for silica electrodeposition at the macroscopic ITIES can be written with the aqueous phase (5 mM NaCl containing hydrolysed TEOS) facing the organic phase (a 1,2-dichloroethane solution of BTPPA⁺TPBCl⁻ and CTA⁺TPBCl⁻). The TEOS and CTA⁺ concentrations used in the above-mentioned cells 1 and 2 were in the ranges 50 mM ≤ x ≤ 300 mM and 1.5 mM ≤ y ≤ 14 mM, respectively. In order to study local pH changes at the liquid–liquid interface induced by ion transfer and UV irradiation, the photoactive compound (PH⁺) had to be synthesized (see the protocol in the corresponding section below).
(ii) Composition of the aqueous and the organic phase during silica deposition induced by CTA⁺ ion transfer
Aqueous phase. During the interfacial silica electrodeposition, the silica precursor – TEOS – was initially dissolved in the aqueous part of the liquid–liquid cell. TEOS was first added to a 5 mM NaCl solution and the pH was adjusted to 3 in order to facilitate the hydrolysis reaction:

Si(OC2H5)4 + xH2O →(pH 3) Si(OC2H5)4-x(OH)x + xC2H5OH (2.1)

This mixture, initially biphasic, was vigorously stirred for 1 h until a clear solution was obtained. Next, the pH was increased to around 9 in order to facilitate the condensation process:

Si(OC2H5)4-x(OH)x + Si(OC2H5)4-x(OH)x →(pH 9) (C2H5O)4-x(OH)x-1Si-O-Si(OH)x-1(OC2H5)4-x + H2O (2.2)

The precursor solution prepared in this manner was then directly transferred to the liquid–liquid electrochemical cell, where it constituted the higher-density phase.
Organic phase. The organic phase used during the interfacial electrochemical silica deposition was a 1,2-dichloroethane solution of 10 mM BTPPA⁺TPBCl⁻ (prepared according to the protocol in section 2.6.1) and x mM CTA⁺TPBCl⁻ (prepared according to the protocol in section 2.6.2). The organic phase was the lower-density phase of the liquid–liquid cell.
(iii) Composition of the aqueous and the organic phase during silica deposition induced by local pH change
The composition of the two immiscible phases during silica interfacial deposition triggered by a local pH change was inverted as compared with the previous experiments: the precursor was initially dissolved in the organic phase, whereas the template species were in the aqueous phase. The organic phase composition, a 20% TEOS solution in DCE, was unchanged in all experiments. The aqueous phase was an x mM CTA⁺Br⁻ solution (for x = 0.70, 1.38 and 2.02 mM) in 5 mM NaCl.
Instrumentation
Nitrogen adsorption-desorption – isotherms were obtained at -196 °C over a wide relative pressure range from 0.01 to 0.995. The volumetric adsorption analyzer was a TRISTAR 3000 from Micromeritics. The samples were degassed under vacuum for several hours at room temperature before the nitrogen adsorption measurements. The specific surface area was calculated with the BET (Brunauer, Emmett, Teller) equation.
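The BET equation used for the specific-surface-area evaluation, in its standard linearized form (a textbook expression), reads:

$$ \frac{p}{v\,(p_0 - p)} = \frac{1}{v_m c} + \frac{c - 1}{v_m c}\,\frac{p}{p_0} $$

where v is the quantity of gas adsorbed at pressure p, p0 the saturation pressure, v_m the monolayer capacity and c the BET constant; the specific surface area follows from v_m and the cross-sectional area of the N2 molecule.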
X-ray Photoelectron Spectroscopy – analysis was performed with a KRATOS Axis Ultra X-ray spectrometer from Kratos Analytical. The apparatus was equipped with a monochromated Al Kα X-ray source (hν = 1486.6 eV) operated at 150 W.
Fourier transform Infra-red Spectroscopy – IR spectra were recorded in transmission mode using KBr pellets as the sample support. A Nicolet 8700 apparatus was used for this purpose.
Transmission Electron Microscope – TEM micrographs were recorded with a CM20 apparatus (Philips, Netherlands) at an acceleration voltage of 1 kV.
Scanning Electron Microscope – SEM micrographs were obtained using a Philips XL30 (with an acceleration voltage of 1 kV), without any sample metallization.
Confocal Raman spectroscopy – a Horiba Jobin Yvon T64000 spectrometer equipped with a nitrogen-cooled CCD detector and a green laser (wavelength 532 or 514 nm) was used to collect the Raman spectra. The irradiance, measured in air, was 100 kW cm⁻² and was estimated to have the same value in the aqueous phase. No laser heating of the aqueous phase was observed.
UV irradiation – the irradiation source was an HXP 120-UV lamp from Kubler Codix, equipped with a flexible optical fiber. The capillary was set at 4500 V; the end-plate offset was -500 V.
Local pH measurement
Protocols
An inseparable part of this work was organic synthesis, miniaturization and electrode preparation. All the step-by-step protocols describing these aspects are arranged together in the following sections.
2.5.1. Preparation procedure of BTPPA⁺TPBCl⁻
Bis(triphenylphosphoranylidene)ammonium tetrakis(4-chlorophenyl)borate (BTPPA⁺TPBCl⁻) – the organic electrolyte salt – was prepared by metathesis. The preparation involves a few stages:
1. 1.157 g of BTPPA⁺Cl⁻ and 1.000 g of K⁺TPBCl⁻ were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH, respectively. The BTPPA⁺Cl⁻ solution was added drop-wise to a vigorously stirred solution of K⁺TPBCl⁻, resulting in a white precipitate.
2. The reaction was continuously stirred at 4 °C for 48 h. After this time, the solvent was filtered off under vacuum for about 30 min. The reaction product was transferred to a dry beaker and placed in a desiccator overnight. The beaker was covered with aluminum foil to prevent light-induced decomposition of the product.
Preparation procedure of CTA⁺TPBCl⁻
1. The solution of CTA⁺Br⁻ was added drop-wise to a vigorously stirred solution of K⁺TPBCl⁻. The product of the reaction – a white precipitate – was then left under stirring in the fridge for 48 h.
2. After this time the suspension was filtered under vacuum and then dried overnight in a desiccator.
3. The dry CTA⁺TPBCl⁻ was dissolved in acetone and filtered under gravity using a paper filter. The beaker containing the CTA⁺TPBCl⁻ solution was covered with Parafilm® and placed under the fume hood; small holes were made in the Parafilm® to allow acetone evaporation.
4. Recrystallized CTA⁺TPBCl⁻ is a very dense and viscous liquid residue at the bottom of the beaker. In the last stage, the product was dissolved in dichloroethane and transferred to a volumetric flask in order to calculate its concentration (the beaker was weighed with and without the CTA⁺TPBCl⁻).
5. The solution was stored at 4 °C.
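The metathesis schemes referenced in this and the following preparation procedures (the original reaction drawings are not reproduced here) all share the same general form, with potassium tetrakis(4-chlorophenyl)borate as the TPBCl⁻ source; as a reconstruction from the protocols:

$$ X^+A^- + K^+TPBCl^- \xrightarrow{\ H_2O{:}MeOH\ (1{:}2)\ } X^+TPBCl^-\!\downarrow + K^+A^- $$

where X⁺ stands for BTPPA⁺, CTA⁺, TBA⁺ or PH⁺ and A⁻ for the corresponding halide (Cl⁻, Br⁻ or I⁻); the poorly soluble ion pair X⁺TPBCl⁻ precipitates from the water–methanol mixture, while the potassium halide remains in solution.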
Preparation procedure of TBA⁺TPBCl⁻
Tetrabutylammonium tetrakis(4-chlorophenyl)borate (TBA⁺TPBCl⁻) – the salt of the interfacially active model cation TBA⁺, soluble in the organic phase – was prepared by the same type of metathesis reaction. The preparation involves a few stages:
1. Equimolar amounts of TBA⁺Cl⁻ (0.500 g) and K⁺TPBCl⁻ (0.893 g) were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH, respectively. The TBA⁺Cl⁻ solution was added drop-wise to a vigorously stirred solution of K⁺TPBCl⁻, resulting in a white precipitate.
2. The reaction was continuously stirred in the fridge for 48 h. After this time the solvent was filtered off under vacuum for about 30 min. The reaction product was transferred to a dry beaker and placed in a desiccator overnight.
3. The next step was to dissolve the TBA⁺TPBCl⁻ in acetone and filter it under gravity using a paper filter. The vessel with the filtrate was covered with Parafilm®, in which small holes were made to allow acetone evaporation.
4. The recrystallized TBA⁺TPBCl⁻ (small, powder-like crystals) was transferred to a vessel and stored at 4 °C.
Preparation procedure of PH⁺TPBCl⁻
Trimethylbenzhydrylammonium tetrakis(4-chlorophenyl)borate (PH⁺TPBCl⁻) – the organic salt of the photoactive compound – was prepared by the same type of metathesis reaction. The preparation involves a few stages:
1. Equimolar amounts of PH⁺I⁻ (0.500 g) and K⁺TPBCl⁻ (0.740 g) were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH, respectively. The PH⁺I⁻ solution was added drop-wise to a vigorously stirred solution of K⁺TPBCl⁻, resulting in a white precipitate.
2. The reaction was continuously stirred in the fridge for 48 h. After this time the solvent was filtered off under vacuum for about 30 min.
3. In order to purify the product from KI, it was rinsed three times with distilled water (iodide detection was performed with HPLC in order to follow the effectiveness of the KI removal – see Figure 2.5). Each time, 10 ml of water was shaken together with the PH⁺TPBCl⁻ and the mixture was then centrifuged (15 minutes at 5000 rpm).
4. The KI-free reaction product was transferred to a dry beaker and placed in a desiccator overnight.
5. The next step was to dissolve the PH⁺TPBCl⁻ in acetone and filter it under gravity using a paper filter. The vessel with the filtrate was covered with Parafilm®, in which small holes were made to allow acetone evaporation.
6. The recrystallized PH⁺TPBCl⁻, being a viscous liquid, was dissolved in dichloroethane and transferred to a volumetric flask. The mass of the PH⁺TPBCl⁻ was obtained by subtraction of the beaker mass with and without the photoactive compound. The solution was stored in the fridge.
Protocol of organic counter electrode preparation
A borosilicate glass capillary, as an inert material, was used to protect the interior of the electrode from both solutions (the organic and the aqueous phase). Preparation of the organic counter electrode requires:
- a platinum mesh and a platinum wire attached to each other by spot welding (Figure 2…)
To do so, one end of the capillary was clogged with the Parafilm® and then the capillary has been placed vertically with the opened end directed upwards so that its half-length was placed in the middle of the ring made from several scrolls of the nickel/chrome 80/20 wire (Figure Protocol of preparation of trimethylbenzhydrylammonium iodide The protocol was adapted from a previous study. [START_REF] Samec | Charge-Transfer Processes at the Interface between Hydrophobic Ionic Liquid and Water[END_REF] The amide group of aminodiphenylhydrochloride is nucleophilic center, which is methylated with methyl iodide in order to form quaternary amine. The overall reaction of methylation can be schematically written as: NH 2 N + CH 3 C H 3 CH 3 I - CH 3 I Na 2 CO 3 MeOH The protocol of preparation can be divided into few stages: 1. First, the solvent -100 ml of methanol, was placed into the round-bottom two neck flask and was deoxygenated for 30 minutes by passing the nitrogen through the solution (top neck was clogged with the cap, whereas the side neck was covered with turn-over Flang stopper through which two needles were placed, first as a nitrogen inlet and second as nitrogen outlet). 2. The following reagents: 1g of aminodiphenylmethane hydrochloride, 3.85g of anhydrous sodium carbonate and 2.26 ml of iodomethane were added to the flask containing methanol. Reaction was carried out under inert gas atmosphere. N 2 inlet was supplied from top part of condenser. Gas bubbler filled with silicon oil was employed to monitor the N 2 flow. 4. After 24 hours, once the mixture cooled down to room temperature, 30 ml of 1M sodium thiosulfate was added and then mixture was stirred for 15 minutes. Insoluble part of reaction mixture was gravity filtered with paper filter. 10 grams of potassium iodide was added and the reaction mixture was stirred for another 15 minutes. 5. Further purification step was completed in separatory funnel. The reaction mixture was shaken with 50 ml of H 2 O and 50 ml of dichloromethane. Organic phase was collected. Extraction was repeated two more times with 50 ml dichloromethane each time. 6. Dichloromethane was reduced under vacuum with rotary evaporator shown on Figure 2.16. 7. Obtained product was recrystallized from acetone. Chapter III. Templated Sol -Gel process of silica at the electrified liquidliquid interface Under proper conditions the liquid -liquid interface can be polarized and used as a well understood electro chemical device (the equations developed for the electrochemistry of solid electrodes are applicable at the ITIES). From an electroanalytical point of view the electrified liquid -liquid interface brings several advantages over the solid electrodes: (i) it is self-healing, which eliminates the problems concerning electrode surface polishing; (ii) it has an interface deprived from defects up to a molecular level; (iii) the detection is not restricted to reduction and oxidation reaction but can arise from simple interfacial ion transfer reaction; (iv) it has good sensitivity and finally (v) reasonable limits of detection down to nM level for miniaturized systems. The ITIES can be used for determination of number of ionic analytes ranging from inorganic molecules [START_REF] Bingol | Facilitated Transfer of Alkali and Alkaline-Earth Metal Ions by a Calix[4]arene Derivative Across Water/1,2-Dichloroethane Microinterface: Amperometric Detection of Ca(2+)[END_REF]196 to organic compounds including the species which are biochemically important. 
197 Chiral detection has also been reported at the ITIES. 198,199 In spite of these qualities, detection at the ITIES still suffers from poor selectivity, which to date is addressed only by the use of ionophores (discussed in section 1.1.2.2) or ex situ zeolite modifiers (discussed in section 1.3.5.3.1). For solid electrodes it has been shown many times that surface modification can significantly improve selectivity. Recently, the Sol-Gel processing of silica has attracted much attention for the modification of conductive electrodes since, under proper conditions and in the presence of structure-directing agents called templates, the formation of highly ordered structures oriented normal to the electrode substrate – even on substrates with complex geometry – is easy to control. [START_REF] Dryfe | The Electrified Liquid-Liquid Interface[END_REF]4 The optimal conditions require the hydrolysis of TEOS – the silica precursor – at an acidic pH of 3 in the presence of the cationic CTA⁺ surfactant (used as template) and supporting electrolyte, for a CTA⁺/TEOS molar ratio equal to 0.32. The condensation of silica, the formation of cylindrical CTA⁺ micelles and the generation of the silica scaffold are triggered simultaneously by the application of a sufficiently low potential, at which the electrochemical reduction of H⁺ and H2O leads to OH⁻ evolution and, subsequently, a local pH increase. The first reports dealing with in situ modification of the electrified liquid–liquid interface with silica materials emerged from the group of Mareček.
This study aims to investigate the deposition of mesoporous silica at the macroscopic liquid–liquid interface. By coupling electrochemistry at the ITIES with the Sol-Gel processing of silica (in the presence of structure-directing agents), the liquid–liquid interface can easily be modified. To this end, the precursor species were hydrolyzed in the aqueous phase and separated from the organic template solution by the liquid–liquid interface. Silica formation was controlled by the electrochemical transfer of the positively charged template species from the organic to the aqueous phase. A number of experimental parameters, e.g. the initial concentrations of template and precursor, the polarity of the organic phase and the experimental time scale, were investigated. The electrochemical information extracted during the interfacial silica deposition allowed a possible deposit formation mechanism to be proposed. A detailed description of the electrogenerated silica material was obtained from a number of qualitative, structural and morphological studies. The ultimate goal of this work is to construct well-characterized molecular sieves that could be used to improve the selectivity of electroanalytical sensors employing the liquid–liquid interface.
Results and discussion
3.1.1. Electrochemical study
After electrodeposition, the ITIES was modified with a silica film in a gel state – a not fully condensed silica matrix. In order to obtain solid silica, it has to be cured. To do so, the silica deposits were collected (using a microscope cover slip immersed in the cell prior to electrolysis) and stored overnight in the oven at 130 °C. Silica deposits were prepared under different initial synthetic conditions, including the effect of [TEOS]aq and [CTA⁺]org, the polarity of the organic phase and the electrolysis method. Samples prepared in this way could then be used for further characterization. IR spectroscopy and XPS were used to study the chemical functions of the silica deposits. BET analysis allowed evaluation of the silica porosity after calcination (removal of the organic residues from inside the pores by thermal oxidation). Finally, SAXS and TEM were employed as tools to study the structural properties of the silica.
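As a hedged aside – this relation is not written out in the text, but it underlies the statement that the transferred charge scales with [CTA⁺]org – the voltammetric charge Q associated with template transfer can be converted into the amount of transferred CTA⁺ through Faraday's law:

$$ n_{CTA^+} = \frac{Q}{zF} $$

with z = 1 for CTA⁺ and F = 96485 C mol⁻¹, giving a direct estimate of the amount of template made available for the interfacial self-assembly.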
Spectroscopic analysis
A characteristic IR spectrum recorded for the silica deposit electrogenerated at the ITIES using the surfactant template is shown in Figure 3.7. The peaks attributed to vibrations of the silica network are localized at 1080 cm⁻¹ – an intense band arising from the Si-O-Si asymmetric stretching mode, 790 cm⁻¹ – arising from the symmetric stretching of Si-O-Si, and 480 cm⁻¹ – attributed to Si-O bond rocking. The small peak observed at around 1200 cm⁻¹ has been described in the literature as an overlap of asymmetric stretching modes of the Si-O-Si bond. [START_REF] Senda | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF] The broad band from about 3100 cm⁻¹ up to 3750 cm⁻¹ corresponds both to the terminal O-H stretching mode present inside the silica walls and to that of adsorbed water molecules. An additional O-H contribution is found as a peak around 1630 cm⁻¹, which arises from its bending vibration mode. 9,10 Finally, IR spectroscopy was also used to track the remaining surfactant species. The CTA⁺ molecules were followed through two characteristic peaks situated at 2925 cm⁻¹ – attributed to asymmetric CH2 stretching vibrations – and at 2850 cm⁻¹ – arising from symmetric CH2 stretching vibrations. The peak at around 1480 cm⁻¹ could also indicate the presence of alkyl chains, as it corresponds to a C-H bending vibration, although it is not clearly pronounced in the recorded spectrum. [START_REF] Senda | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF] The presence of template molecules in the silica framework was in agreement with the voltammetric results, which indicated that the interfacial transfer of CTA⁺ was irreversible and that its reverse transfer was accompanied by an adsorption process. The absorption of the aromatic rings from the cationic part of the organic electrolyte was observed as a peak of very weak intensity at around 3060 cm⁻¹ (see the inset of Figure 3.7). [START_REF] Gavach | The Double Layer and Ion Adsorption at the Interface between Two Non Miscible Solutions: Part I: Interfacial Tension Measurements for the Water-Nitrobenzene Tetraalkylammonium Bromide Systems[END_REF] The presence of the organic electrolyte cation could be due to residuals from organic phase evaporation, most probably collected together with the silica deposit from the liquid–liquid interface. However, as will be shown in one of the following chapters, this was not the case, since the organic electrolyte ions were also incorporated during silica deposit formation. XPS provided complementary compositional information. The Cl 2s signals are most probably due to remaining aqueous supporting electrolyte residuals, although some charge balancing of terminal OH⁻ inside the pores by Na⁺ cannot be excluded. Another source of Cl⁻ anions is TPBCl⁻ – the anionic part of the organic electrolyte – which was apparently also present in the sample. BTPPA⁺ traces could be followed through the P 2p signal. The explanation for the presence of these two hydrophobic ions in the sample is not straightforward, since some residuals could have been collected together with the organic phase during collection of the silica deposit.
The porous SiO2 structure can be further confirmed on the basis of BET analysis. The nitrogen adsorption-desorption isotherms for the silica deposit electrogenerated at the liquid-liquid interface are displayed in Figure 3.9. The curve marked with squares corresponds to the silica deposit that was cured in the oven at 130°C overnight. The second plot, marked with circles, was recorded after calcination at 450°C for 30 min. Clearly, after calcination, the pores filled with organic species are liberated, since the specific surface area increased from 194 m2/g up to 700 m2/g. The curve recorded after calcination took the shape of a type IV isotherm with two capillary condensation steps: the first at p/p0 = ~0.4 and the second at p/p0 = ~0.7 (the latter was also present for the sample before calcination). The first step is characteristic of mesoporous materials with relatively small pores; the pore dimensions were calculated to be around 7 nm. The step at p/p0 = ~0.7 indicated that the synthesized silica exhibits a second level of porosity, with dimensions around 1 µm.

Morphological characterization

SAXS analysis was employed to study the structural parameters of the silica deposits. The broad peak in the SAXS patterns suggested the presence of a 'worm-like' structure (especially visible for the blue curve in Figure 3.10 A). This level of order confirms the template (CTA+) transfer from the organic to the aqueous phase, where it self-assembles with TEOS species and results in a silica modified ITIES. Figure 3.10 A shows the structural properties of silica deposits prepared for different [CTA+]org (5 mM - black curve, 10 mM - red curve and 14 mM - blue curve) while [TEOS]aq was kept constant (300 mM). The peak intensity increased with increasing [CTA+]org, and the pore center-to-center distance (given by 2π/q, where q is the scattering vector at the peak center) decreased from 5.7 nm ([CTA+]org = 5 mM) to 4.8 nm ([CTA+]org = 10 mM) and down to 4.5 nm ([CTA+]org = 14 mM). It was already shown that [CTA+]org influenced the charge transfer during the voltammetric silica deposition and hence was the limiting factor during silica formation. [CTA+]org was directly related to the CTA+ concentration in the aqueous diffusion layer once CTA+ had been transferred from the organic to the aqueous phase. The critical micelle concentration (CMC) of CTA+Cl- in water is 1.4 mM. The concentration of CTA+ in the aqueous diffusion layer is more likely to be above the CMC for the higher [CTA+]org, which results in a better structuration of the silica deposit. When [CTA+]org was kept constant (14 mM) and [TEOS]aq was decreased from 300 mM down to 50 mM, a small increase in the pore center-to-center distance (from 4.5 nm to 4.9 nm) was observed. It was concluded that the impact of [TEOS]aq on the silica deposit formation is not as important as that of [CTA+]org.
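As a minimal illustration of how the pore center-to-center distances quoted above follow from the SAXS data, the short sketch below applies d = 2π/q. The q values are hypothetical, back-calculated to reproduce the distances reported in the text; only the conversion itself is taken from the discussion above.

```python
import math

def center_to_center_nm(q_nm_inv: float) -> float:
    """Pore center-to-center distance d = 2*pi/q, with q in nm^-1 and d in nm."""
    return 2 * math.pi / q_nm_inv

# Hypothetical SAXS peak positions chosen to reproduce the quoted distances
for label, q in [("[CTA+]org = 5 mM", 1.10),
                 ("[CTA+]org = 10 mM", 1.31),
                 ("[CTA+]org = 14 mM", 1.40)]:
    print(f"{label}: q = {q:.2f} nm^-1 -> d = {center_to_center_nm(q):.1f} nm")
```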
The nature and the polarity of the organic phase were found to have a great influence on the structure of the deposits. The addition of alcohol to the organic phase led to a still broad, however clearer, SAXS response, with a pore center-to-center distance of 3.8 nm (alcohols are known to affect the structure of silica prepared by the Sol-Gel process in the presence of CTA+ as structure driving agents). 15 The discrepancy between the pore center-to-center distance calculated from the SAXS pattern and from the BET analysis (where the pore diameter was found to be 7 nm) can be attributed to the broad pore size distribution of the sample. 208 The pore center-to-center distances calculated from the SAXS patterns for deposits prepared by chronoamperometry (4.6 nm) and by cyclic voltammetry (4.8 nm) show only a very small discrepancy.

Conclusion

The electrochemical modification of the macroscopic liquid-liquid interface with silica material structured with a cationic surfactant, acting as the structure driving agent, was studied. It has been shown that silica deposits can be formed only in the presence of both the silica precursor in the aqueous phase and the positively charged surfactant template in the organic phase. This indicates that the CTA+ ion transfer was facilitated by the negative charge of polynuclear TEOS species hydrolyzed in the aqueous phase. The shape of the voltammetric response suggested an interfacial adsorption process of CTA+, as an abrupt drop in current was always observed on the reverse peak. Moreover, it was concluded that the formation of silica at the ITIES is limited by [CTA+]org, as the charge under the reverse peak increased linearly with its concentration (varying [TEOS]aq did not affect the charge transfer). Silica deposits were also found to stabilize the liquid-liquid interface, as electrochemical instability was no longer observed after deposition. Electrochemical deposition was followed by the collection of the silica, its curing and finally its characterization. With spectroscopic methods, the formation of the Si-O-Si bond was confirmed. Additionally, the presence of template species could be followed by IR spectroscopy, as two vibrational modes of symmetric and asymmetric stretching of CH2 bonds were found. The organic electrolyte was also found to be present in the silica deposits; although most probably some organic phase solution was collected together with the deposit, its participation in silica formation cannot be excluded. Thermal treatment at 450°C for 30 min was enough to remove the organic residues from inside the silica pores by calcination. The BET isotherm confirmed the presence of a mesostructure. The pore arrangement was found to be of a 'worm-like' structure, as concluded from the presence of a broad peak in the SAXS patterns and from the TEM micrographs. The pore center-to-center distance, depending on the experimental conditions applied during the silica deposit synthesis, ranged from 3.8 nm up to 6.1 nm.

Silica generated at the liquid-liquid interface requires further treatment in order to support and cure the material for application in electroanalysis. This is not feasible at the macroscopic ITIES, and as a consequence the silica has to be supported in a way which allows its synthesis, processing and finally reuse in the liquid-liquid configuration. To do so, the liquid-liquid interface was miniaturized and modified with the silica material using the conditions elaborated at the macroscopic ITIES, as shown in the following chapter IV.
The results described in this chapter are also available in Electrochemistry Communications, 2013, 37, 76-79.

Chapter IV. Silica electrodeposition using cationic surfactant as a template at miniaturized ITIES

In the following chapter, the liquid-liquid interface was miniaturized through silicon chips supporting an array of pores. Electrodeposition of silica material at the microITIES was performed with the protocol developed at the macroITIES (described in chapter III). The application of a membrane supporting an array of microITIES allowed the mechanical stabilization of the silica deposits. The electrochemical deposition process, a morphological study of the silica deposits, their spectroscopic characterization, an in situ spectroscopic study of silica formation and finally the electroanalytical evaluation of the silica deposits were investigated in a series of experiments described in the following subchapters.

Electrochemical and morphological study of silica deposits at the array of microITIES

The supported microITIES allowed an easy and straightforward silica deposition. The silicon chips (membranes with an array of micrometer-scale pores prepared by lithography) additionally served as the support for the silica deposits, which kept their mechanical stability after electrogeneration, so that further characterization and reuse in electroanalysis were possible. In the following subchapters, parameters such as the concentration of template ions in the organic phase and of precursor species in the aqueous phase, the time scale of the experiment and the geometry of the array of microITIES used to support the liquid-liquid interface were evaluated with respect to their influence on silica deposit formation, based on cyclic voltammetry (CV) results. A morphological study, performed with SEM and profilometry based on shear force measurement, allowed the morphological characterization of the silica deposits. The effect of calcination was studied with Raman spectroscopy and ion transfer voltammetry.

Surfactant-template assisted Sol-Gel process of silica at the microITIES

A typical CV recorded during silica deposit formation at a microITIES supported by membrane design number 3 (Table 2.2), with the precursor in the aqueous phase and the template in the organic phase, is discussed below. During silica electrogeneration, the interface was polarized from +600 mV down to -100 mV on the forward scan (the polarization direction was chosen in agreement with the positive charge of the cationic template species, which, in order to trigger the silica condensation reaction, had to be transferred from the organic to the aqueous phase). The information extracted from the shape of the reverse peak confirms the participation of the CTA+ species in the silica material formation on the aqueous side of the liquid-liquid interface and indicates that the back transfer is not a diffusion-limited process. After one cycle, the array of microITIES is completely covered with the silica material.

Factors affecting silica deposition at the array of microITIES

In the following part, the influence on the silica material deposition of parameters such as: (i) the template and precursor initial concentrations, (ii) the time scale of the experiments, i.e. the scan rate, and (iii) the different geometries of the array of micropores supporting the liquid-liquid interface, is discussed.
Influence of [CTA+]org and [TEOS]aq

The silica deposits are formed only when CTA+ is transferred from the organic to the aqueous phase containing the silica precursor, hydrolyzed TEOS. It is therefore not surprising that the composition of both contacting phases is likely to affect the deposition process. The geometry was chosen in such a way that each pore was independent of the others: membrane design number 3 (see Table 2.2), whose 100 µm center-to-center distance was enough to avoid the overlap of radial diffusion layers, and a moderate scan rate (5 mV/s) were employed for this purpose. These findings indicate that [CTA+]org is the limiting factor for the silica deposit formation reaction and that [TEOS]aq has no influence on the reaction rate (in contrast to mesoporous silica films at solid electrodes derived from the surfactant assisted Sol-Gel process of silica). 97,98

Influence of the pore center-to-center distance

For microelectrode arrays or microITIES arrays, the shape of the current-potential curves and the limiting currents are strongly affected by the center-to-center distance (denoted as the spacing factor, S) between two microelectrodes or microITIES. A change in the spacing factor allows the control of the planar and radial diffusion contributions affecting the mass transfer of the ions. Davies and Compton have divided microdisc arrays into four main groups based on their geometrical size with respect to the size of the diffusion zones: (i) δ < r, δ < S; (ii) δ > r, δ < S/2; (iii) δ > r, δ > S/2 and (iv) δ > r, δ >> S, where r is the pore radius, δ is the diffusion zone radius and S is the pore center-to-center distance. Schemes (i) to (iv) thus correspond to the different groups of microITIES arrays divided with respect to the size of the diffusion zones. The terminology proposed by Davies and Compton can be applied to describe the results obtained during silica deposit formation at arrays of microITIES supported by different membranes (designs number 1, 2, 3 and 4 were used; see Table 2.2) with a constant pore radius (r = 5 µm) but various spacing factors (20 µm ≤ S ≤ 200 µm). A higher energy was required for the back transfer of CTA+ to the organic phase, most probably due to entrapped CTA+ species. The intensity and the shape of the CV curves were found to vary, with the negative peak evolving from a slight peak to a wave and to a bigger peak when the potential scan rate was increased. These observations can be rationalized by analyzing the diffusion layer profile for each case. A prerequisite is to know the diffusion coefficient of the transferring species (i.e., CTA+). This value is not available in the literature, but one can reasonably estimate it from the data available for the closely related octadecyltrimethylammonium chloride (ODTM) compound (only 2 more CH2 moieties in the alkyl chain with respect to CTA+). The diffusion coefficient in the aqueous phase for ODTM was found to be around 0.66×10-6 cm2 s-1, as determined by pulse gradient spin-echo NMR. 211 Taking into account the difference in alkyl chain length between ODTM and CTA+ (two carbon atoms) and the change in medium viscosity, from 1.002 cP (at 20°C) 212 for the aqueous phase to 0.85 cP (at 20°C) 212 for DCE, one would expect D_CTA+ to be slightly higher than D_ODTM. Consequently, a D_CTA+ value of around 1×10-6 cm2 s-1 is a reasonable estimate.
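With this estimate in hand, the diffusion layer thickness expected for each scan rate (via Eq. (3.1) in the next subsection) can be sketched numerically and compared with the membrane thickness and the spacing factor. The snippet below is a minimal illustration, assuming that the CTA+ transfer takes place over a 200 mV span of the scan; this span is an assumption chosen because it reproduces the thicknesses quoted in the next subsection.

```python
import math

D_CTA = 1e-6  # cm^2/s, estimated aqueous diffusion coefficient of CTA+

def diffusion_layer_um(scan_rate_mV_s: float, span_mV: float = 200.0) -> float:
    """delta = sqrt(2*D*t) with t = span / scan rate, returned in micrometers."""
    t_s = span_mV / scan_rate_mV_s
    return math.sqrt(2 * D_CTA * t_s) * 1e4  # cm -> um

D_MEMBRANE = 100.0  # um, membrane (pore channel) thickness
S = 100.0           # um, pore center-to-center distance of design number 3

for v in (0.1, 1, 5, 10):  # scan rates in mV/s
    delta = diffusion_layer_um(v)
    print(f"{v:>4} mV/s: delta ~ {delta:4.0f} um | "
          f"delta < membrane thickness: {delta < D_MEMBRANE} | "
          f"delta > S/2: {delta > S / 2}")
```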
Influence of the scan rate

From such a D_CTA+ value, one can estimate the diffusion layer thickness at various scan rates on the basis of the following equation:

δ = √(2 D_CTA+ t) (3.1)

The estimated thicknesses of the CTA+ diffusion layer for each scan rate are thus roughly: 630 µm for 0.1 mV/s, 200 µm for 1 mV/s, 90 µm for 5 mV/s, and 63 µm for 10 mV/s. The diffusion layer on the organic side of the liquid-liquid interface can be considered as:

δ = δ_l + δ_nl (3.2)

where δ_l is the linear diffusion layer inside the pores of the membrane and δ_nl corresponds to the diffusion zone at the pore ingress on the organic side. For the highest scan rate, 10 mV/s, the diffusion layer thickness δ_l is less than the membrane thickness, d = 100 µm (see scheme (i) on Figure 4.5), and hence the transfer inside the pore was limited only by linear diffusion (resulting in a peak-like response). In this case, not enough CTA+ species were transferred from the organic to the aqueous phase and no silica deposit formation was observed (verified with profilometry based on shear force measurements). Consequently, all CTA+ species transferred to the aqueous phase on the forward scan were transferred back to the organic medium on scan reversal, as supported by forward and reverse charge transfers of similar magnitude. Longer experimental times would cause an increase in δ_nl exceeding half of the pore center-to-center distance (S/2) on the basis of the estimated δ value (i.e., 200 µm in this case), leading to the overlap of radial diffusion zones, which should give a peak-like response. The fact that this did not happen here and that a sigmoidal response was obtained is attributed to the restricted transfer of CTA+ across the interfacially synthesized silica material (apparent diffusion values for the overall process have to be lower than those in solution at an unmodified liquid-liquid interface). Actually, much longer experimental times (for instance, those corresponding to a potential scan rate as low as 0.1 mV/s) were necessary to reach conditions of total diffusion overlap (scheme (iv)).

Morphological study

The morphology of the silica deposits generated at arrays of microITIES under various conditions was analyzed by SEM and profilometry (based on shear force measurement). The micrographs and profilometry images concern deposits generated using the same composition of both phases. The first observation is that the silica deposits are always formed on the aqueous side of the liquid-liquid interface. A second point is the good quality of the deposits, which formed uniformly on all pores; their crack-free appearance after surfactant removal (images were recorded after calcination at 400°C) suggests a good mechanical stability. When using the same support (microITIES number 2) and keeping the scan rate constant (5 mV/s), but increasing the number of cycles from half a scan (first column) to 3 scans (second column), one can notice an increase in both the deposit height (from 1.7 µm to 3.4 µm) and the deposit diameter (from 14 µm to 15.7 µm). There was no significant difference in the deposit diameter between samples prepared with half and with a whole potential cycle, which showed only an increase in the deposit height from 1.7 µm up to 2.7 µm, consistent with the lengthened duration of application of the more negative potentials. All these results confirmed that the electrochemically-induced CTA+ org→aq transfer was indeed at the origin of the surfactant-templated silica self-assembly process.
However, the deposit height growth was not directly proportional to the number of CV cycles, most probably due to an increased resistance to CTA+ org→aq transfer in the presence of much thicker deposits. A more effective way to deposit massive amounts of silica material at the microITIES was to slow down the potential scan rate (which also requires the use of a silicon membrane with larger pore spacing to avoid overlap of the radial diffusion profiles). This is illustrated by the comparison of: (1) deposits prepared by one linear scan voltammetry (half a CV scan) at 5 mV/s using microITIES number 2; (2) deposits prepared by three successive CV scans at 5 mV/s using microITIES number 2; and (3) deposits prepared by one CV scan at 0.1 mV/s using microITIES number 3. The polarization direction was always from anodic to cathodic potentials. Another way to control the generation of silica deposits was the use of constant-potential amperometry (i.e., chronoamperometry) instead of linear sweep or cyclic voltammetry. Too short experiment times (5-10 s) did not lead to any silica material formation. The amount of deposited material was then found, as expected, to increase with the potentiostatic step duration (for example, a 50 s deposition time led to deposits 3.7 µm in height and 14.9 µm in diameter), while extending the electrosynthesis up to 200 s enabled the formation of massive deposits, 8.4 µm in height and 18.6 µm in diameter. As for the deposits prepared by cyclic voltammetry, their shape was also found to change from rather flat to hemispherical when increasing the charge passed across the interface, thereby contributing to larger amounts of silica material deposited at the microITIES. Finally, the variation in shape of the silica deposits resembles an image of the evolution of the diffusion layer profiles (on the aqueous side of the interface): short deposition times correspond to mainly linear diffusion limitations, yet with some radial contribution at the macropore edges, while longer experiments are governed by radial diffusion control. Another, minor point from the morphological analysis is the presence of cubic crystals on some silica deposits (see, e.g., image 3a on Figure 4.7); these are sodium chloride crystals arising from the back transfer of residual background electrolyte from the organic to the aqueous phase. This phenomenon is similar to that reported by Silver et al. 214 on the basis of protein crystallization experiments at liquid-liquid interfaces. This means that the electroassisted generation of silica material could also be suitable for the entrapment of proteins in the silica matrix (sol-gel silica is indeed known to enable the encapsulation of proteins) 215 as an effective way to immobilize biomolecules at liquid-liquid interfaces. Finally, a very thin additional silica layer covering the whole silicon membrane (including the silica deposits) can also be evidenced from the SEM pictures (see micrographs 1a and 2a in Figure 4.7). This arises most probably from some evaporation induced condensation of TEOS and silica deposition, even though each electrosynthesis experiment was followed by careful rinsing with distilled water.

Spectroscopic and electrochemical characterization of deposits

Confocal Raman spectroscopy was first used to evidence the formation of a silica network at the microITIES.
This was performed after thermal treatment of the deposits (24 h storage at 130°C followed by 30 min at 400°C) in order to detect the characteristic vibrational modes of silica without being disturbed by the large signals arising from the organic matter expected to be present in the same spectral region (see Figure 4.11). In addition to the narrow and intense band at 520 cm-1 due to the vibration mode characteristic of the silicon wafer support, one can notice a broad band from 375 cm-1 to 500 cm-1 attributed to the Si-O-Si vibrational mode, and a single band at 710 cm-1 suggesting the presence of terminal Si-OH groups. This strongly supports the formation of silica. Raman spectroscopy can also be used to characterize the organic species that were incorporated into the silica material during the formation of the deposits, as well as to check the effectiveness of their removal upon calcination. This is shown on Figure 4.12 B, where the broad and intense signal located in the 2760-3020 cm-1 region, corresponding to the -CH2- stretching mode of the long alkyl chain of the CTA+ species, 216 confirms the presence of the surfactant template in the material. This signal disappears almost completely after heat treatment, indicating the successful calcination of the organic template. From Figure 4.12 B, one can also notice a broad signal of weak intensity at 3060 cm-1, which can be ascribed to the C-H stretching mode of the aromatic rings present in the organic electrolyte (BTPPA+TPBCl-), suggesting that some of these ions were co-encapsulated in the silica material in addition to the surfactant template. This is best shown on Figure 4.12 A, where the typical signature of BTPPA+TPBCl- is seen via the bands located in the 600 to 1800 cm-1 region 217 (for instance, the peak at around 1000 cm-1 is due to the vibration of the aromatic rings). 203 After calcination, all these bands disappeared, which indicates the complete removal of this organic electrolyte from the deposits. One can also notice from Figure 4.12 A two broad peaks of weak intensity in the region from 1250 to 1750 cm-1, which appeared upon calcination; these are actually the D and G bands of traces of amorphous carbon arising from the thermal decomposition of the organic molecules. 218 These results support the conclusion that the mesopore channels were liberated from most of their organic content, even if a quantitative analysis of the porosity by the classical gas adsorption method was not possible due to the too low amount of available material (too small deposits). In order to better evaluate the permeability of the µITIES modified with silica deposits before and after calcination, a model interfacially active cation was employed. The transfer of TMA+ aq→org through the microITIES modified with the silica deposits was clearly observed after removal of the surfactant template (Figure 4.13 B, red curve), confirming the good permeability of the mesoporous silica deposits, whereas negligible transfer (a possible masking contribution of the capacitive current cannot be excluded) occurred before calcination (Figure 4.13 B, black curve). The corresponding CV curve was characterized by a sigmoidal wave for the transfer from the aqueous to the organic phase and a peak-like response on scan reversal (due to the restricted amount of TMA+ available for back transfer).
The absence of significant TMA+ aq→org transfer prior to template removal also confirms the good quality of the deposits and the fact that all, or the greatest part, of the macropores of the silicon membrane were successfully modified with the silica deposits.

Conclusion

A miniaturized liquid-liquid interface supported by an array of micropores can be easily modified with mesoporous silica deposits through the surfactant assisted Sol-Gel process of silica. The experimental time scale, i.e. the scan rate, was shown to have a great effect on the voltammetric characteristics recorded during deposition, as well as on the shape of the silica deposits, which tend to grow towards the bulk of the aqueous phase, taking the shape of the diffusion layer profile of CTA+ on the aqueous side of the liquid-liquid interface. The spacing factor between two neighboring pores in the different microITIES geometrical arrays was also shown to affect the deposition process. The rate limiting properties of CTA+ found at the macroscopic ITIES were also confirmed with the miniaturized system. With TEM, the mesoporous character of the silica was confirmed. The effect of calcination, i.e. the removal of the organic species blocking the interior of the pores, was followed by confocal Raman spectroscopy and by ion transfer voltammetry of TMA+. The system optimized in this section can be used for further evaluation in electroanalysis, as will be shown in section 4.4 of this chapter. The results of this section can also be found in Langmuir, 2014, 30, 11453-11463.

In situ confocal Raman spectroscopy study of interfacial silica deposition at microITIES

In general, electrochemistry at the ITIES allows the estimation of thermodynamic, kinetic and charge transfer parameters, but quantitative information about the interfacial region has to be supported by other methods, especially when precise molecular information on the system is required. Spectroscopic methods (both linear and non-linear techniques) can be coupled to electrochemistry at the ITIES. 220 At the liquid-liquid interface, the biggest difficulty is the separation of the signal originating from the bulk from the signal originating from the interfacial region. This can be overcome by fulfilling the condition of total internal reflection. 220 The first studies of interfacial processes by spectroelectrochemistry were made using linear techniques, which include UV-visible volt- and chronoabsorptiometry, 221,222,223 volt- and chronofluorimetry 224,225 and reflectance spectroscopy. 226 In contrast, non-linear techniques (sum frequency spectroscopy and second harmonic generation) have provided information about the molecular structure of the liquid-liquid interface, as they are surface specific. A confocal Raman microspectrometer can also be used to focus the analysis at the interface and provide useful interfacial information. A Raman spectrometer with a spatial resolution of 0.5-1 µm has been used to investigate electron transfer reactions at solid microelectrode arrays. 227,228 Raman spectroscopy has also been used for the characterization of reactions occurring at the neat liquid-liquid interface. 217,229,230,231,232,233 Interfacial reactions 229 and metallic nanoparticle self-assembly 230,231,232,217 at the liquid-liquid interface have also been reported to be monitored by Raman spectroscopy.
However, reactions at the liquid-liquid interface controlled by electrochemical means have scarcely been investigated in this way. Indeed, it was only recently that surface enhanced Raman scattering was used to study the potential dependent agglomeration of silver nanoparticles at the water - 1,2-dichloroethane (DCE) interface. 217 More recently, the interfacial transfer of the ferroin ion and the heterogeneous electron transfer between dimethylferrocene (from the organic phase) and hexacyanoferrate (II/III) anions (from the aqueous phase) were followed by confocal Raman spectroscopy. 234 The electrodeposition of gold nanoparticles formed at a three-phase junction has also been reported.

Raman spectroscopy analysis of the liquid-liquid interface at open circuit potential

In reference measurements of the organic phase, the chloromethyl group of DCE gives two overlapping peaks at around 3034 cm-1 (weak intensity), assigned to the antisymmetric CH2 stretching mode, and 2987 cm-1 (very strong intensity), assigned to the corresponding symmetric stretching mode. 236 When the organic electrolyte was added to the organic phase, three additional peaks were observed, attributed to the organic electrolyte ions. The intensity of the peak at 1000 cm-1, attributed to BTPPA+, remained constant upon addition of 10 mM CTA+TPBCl- to the organic phase and then dropped for 53 mM CTA+TPBCl- in the organic phase, suggesting that its attribution to a vibrational mode of BTPPA+ was correct. The addition of CTA+TPBCl- to the organic electrolyte solution also gave rise to another peak at 349 cm-1, which could be attributed to low-frequency deformation modes of the CTA+ alkyl chains. 203 These experiments demonstrated that the organic electrolyte and surfactant ions remained in the organic phase at open circuit potential.

Ion transfer followed by Raman spectroscopy

Previous works have demonstrated that the application of a negative potential difference can cause a displacement of the interface with ingress into the aqueous phase. 213,237 The influence of the polarization potential on the position of the interface was therefore investigated by recording Raman spectra under various negative interfacial potential differences. The interfacial potential difference was held for times sufficiently long (typically 4 minutes) to collect a full Raman spectrum from 200 to 3200 cm-1. Spectra were normalized to the silicon peak at 520 cm-1 (arising from photons scattered by the silicon membrane). The interfacial potential difference was varied from -200 mV down to -800 mV. At these potentials, BTPPA+ ions were transferred from the organic phase to the aqueous phase, as demonstrated by the blank CV (Figure 4.17 D).

Figure 4.17. A - full Raman spectra and B - Raman spectra in the region from 975 cm-1 to 1100 cm-1 recorded at the microITIES between a 5 mM NaCl aqueous phase solution and a 10 mM BTPPA+TPBCl- organic phase solution under different negative polarizations. For A and B, the dashed line corresponds to the Raman spectrum recorded at open circuit potential. The dotted line in A was recorded under a negative polarization potential of -300 mV, whereas the solid one was recorded at -800 mV. The spectra in B were recorded from +200 mV down to -800 mV.
C shows the peak intensity (after normalization to 520 cm-1) as a function of the applied potential: empty squares correspond to the peak at 1002 cm-1, while filled circles correspond to the peak at 1078 cm-1 (in that case the error bars are too small to notice); the open circuit potential was 200 mV. Raman peaks were assigned to BTPPA+, DCE and TPBCl- (markers as indicated in the figure). D is the blank voltammogram recorded prior to spectra collection at a scan rate of 5 mV/s.

The bands (at 2847, 2877, 2961 and 3005 cm-1) assigned to the vibrational modes of DCE decreased in intensity, which would confirm that the organic phase did not ingress into the aqueous phase (Figure 4.17 A). This is supported by the increase of the band intensities at 1002 and 1029 cm-1 related to BTPPA+ ions (Figure 4.17 C, open squares). Indeed, more and more BTPPA+ ions were transferred as the interfacial potential difference became more negative (i.e., from 200 mV down to -800 mV). Furthermore, at such negative interfacial potential differences, the transfer of anions from the organic side of the interface (TPBCl-) was not expected; accordingly, the band intensity at 1078 cm-1 did not drop as the interfacial potential difference varied (Figure 4.17 C, filled black circles). These experiments demonstrated that the variations in the Raman spectra were due to ion transfer caused by the application of a negative interfacial potential difference. A displacement of the liquid-liquid interface was excluded.

Interfacial silica deposition followed by Raman spectroscopy

The electrochemically assisted assembly of surfactant-templated silica at the microITIES was followed by Raman spectroscopy. A Raman spectrum was first recorded at open circuit potential (ocp = +200 mV), and it showed all the characteristic vibration bands reported in Figure 4.18. After this control experiment, the interfacial potential difference was linearly swept (at 5 mV/s) from +600 mV down to -100 mV, and the potential was held at -100 mV while a second Raman spectrum was recorded. Next, the potential was swept back from -100 mV to +600 mV, where a third Raman spectrum was recorded while holding the potential at +600 mV; the same operation was repeated once more to get the 4th and 5th Raman spectra (Figure 4.18). The region from 2800 to 3100 cm-1 showed the most dramatic variations upon voltammetric cycling. This region corresponds to the C-H stretching modes, which originate from either DCE, BTPPA+, TPBCl- or CTA+ ions. The presence of CTA+ after the first scan was confirmed by the vibrational contributions from its long alkyl chains, in particular the CH2 symmetric stretching at 2851 cm-1, the CH3 symmetric stretching at 2874 cm-1 and the CH3 asymmetric stretching mode at 2935 cm-1. Additionally, BTPPA+ and TPBCl- ions gave rise to two overlapping bands from 3025 to 3080 cm-1. The band centered at 3044 cm-1 originated most probably from BTPPA+ and arises from the five aryl C-H bonds in the aromatic rings, whereas the latter at 3064 cm-1 can be assigned to monosubstituted aromatic rings, as in the case of the chlorophenyl substituent in TPBCl-. The other peaks associated with BTPPA+ at 1002 and 1025 cm-1 and with TPBCl- at 1078 and 1115 cm-1 also increased.
While the growth of the characteristic peaks assigned to CTA+ vibrational modes was expected, the increase of those related to the ions of the organic phase electrolyte was more surprising; it is nevertheless plausible, since silica is known to be an attractive adsorbent for the coadsorption of CTA+ and species containing aromatic rings. 238 Indeed, the spectra presented on Figure 4.18 showed that these ions were barely visible at open circuit potential. Furthermore, the peak intensities of the vibrational modes of BTPPA+ did not grow before a potential difference of -300 mV was reached, whereas TPBCl- did not transfer at all at such a negative interfacial potential difference (see Figure 4.17 C). A possible explanation for the increase in the intensity of the bands characteristic of these two compounds is that they can be trapped in the silica-surfactant matrix, which is formed by self-assembly between condensing TEOS and the CTA+ transferred to the aqueous phase, via favorable electrostatic interactions (i.e., between the TPBCl- anions and CTA+ cations, and between the BTPPA+ cations and the negatively charged silica surface). Such a hypothesis is notably supported by previous observations made for CTA+ based mesoporous silica materials, for which the final composition implied the presence of ionic species in addition to the surfactant and the silica. 239,240 Unfortunately, no direct in situ evidence of the interfacial silica material formation was found, since no band for the Si-O-Si vibrational mode was observed during the acquisition times used in the in situ experiments. This might be due to the fact that longer acquisition times were required to observe the Si-O-Si vibrational mode in the 450 to 500 cm-1 region 185 for silica deposits; if existing here, this signal was too weak to be visible with respect to the other ones or to the noise of the Raman spectra. Another point is that the condensation kinetics of sol-gel-derived silica are usually rather slow (i.e., a low condensation degree for the material generated in the synthesis medium, as pointed out by in situ NMR experiments), 241,242 and a subsequent heat treatment is required to achieve a high degree of crosslinking of the silica network. Actually, the observation of the silica vibration band for silica deposits electrogenerated at the microITIES was possible, but only after calcination at 400°C for 30 minutes (see Figure 4.11). Another indirect piece of evidence of silica formation can be seen by following the evolution of the Si-Si band at 520 cm-1 (see Figure 4.18 A). Indeed, mesoporous materials (e.g. mesoporous silica) can serve as a waveguide for light. 243 The amount of silica material generated at the microITIES grew with the number of scans and hence disturbed the system by guiding laser photons through the silica deposits to the silicon wafer supporting the microITIES. The waveguide phenomenon (due to elastic light scattering by the silica formed) was observed as an increase in the Si-Si band intensity. The same phenomenon could be responsible for the disappearance of all the bands originating from DCE once silica is formed at the liquid-liquid interface. The Raman shifts of the particular molecular contributions used to describe all the spectra recorded in this work are summarized in Table 4.1.
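Since every comparison in this section relies on spectra being normalized to the silicon band at 520 cm-1, a minimal sketch of such a normalization step is given below. The function name, the reference window width and the use of NumPy are illustrative assumptions, not a description of the actual acquisition software.

```python
import numpy as np

def normalize_to_si(shift_cm: np.ndarray, intensity: np.ndarray,
                    ref_cm: float = 520.0, window_cm: float = 5.0) -> np.ndarray:
    """Divide a Raman spectrum by the maximum intensity found within
    +/- window_cm of the silicon reference band at ref_cm (in cm^-1)."""
    mask = np.abs(shift_cm - ref_cm) <= window_cm
    return intensity / intensity[mask].max()

# Usage on a synthetic spectrum: after normalization the Si band peaks at 1.0
shift = np.linspace(200, 3200, 3001)
spectrum = np.exp(-((shift - 520) / 8) ** 2) * 5.0 + 0.1  # fake Si band + baseline
print(normalize_to_si(shift, spectrum).max())  # -> 1.0
```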
Conclusions

Confocal Raman spectroscopy, with a spatial resolution of around 1 µm, allowed the study of the molecular changes at the miniaturized ITIES induced by electrochemically driven ion transfer and silica deposition reactions. The transfer of the cationic part of the supporting electrolyte of the organic phase, BTPPA+, induced by negative polarization, could be followed by confocal Raman spectroscopy after a precise assignment of each typical band (based on data available in the literature and a series of reference measurements). The high spectral resolution of usual Raman spectrometers and the sharpness of the Raman peaks were of great help in this regard. Raman spectroscopy also offers the opportunity to monitor vibrational modes in the aqueous phase, which is more difficult in infrared absorption spectroscopy (though not impossible using dedicated techniques like attenuated total reflection (ATR) sampling). The confocal microscope also enabled a fine localization of the liquid-liquid interface, which was mandatory in this kind of study. Besides, compared to non-linear optical techniques like second harmonic or sum-frequency generation, the laser irradiance can be kept low enough to avoid any disturbance of the fragile interface during the formation of the silica deposits. No direct evidence was found for Si-O-Si formation during the in situ silica formation study with Raman spectroscopy (it was already shown before that such a band can be found for cured deposits). Silica formation was confirmed indirectly, as the interfacial molecular characteristics changed dramatically upon interfacial polarization in the presence of template molecules in the organic phase and silica precursor in the aqueous phase. Additionally, it was found that the organic supporting electrolyte is also involved in the silica formation mechanism, as the signals arising from its vibrational modes were also present in the recorded Raman spectra. The experimental set-up developed in this work could also be used to follow the in situ functionalization of silica deposits via the co-condensation of silanes and organosilanes. The results from this part of the work can also be found in Phys. Chem. Chem. Phys., 2014, 16, 26955-26962.

Electrochemical evaluation of microITIES modified with silica deposits

In the following section, the silica deposits electrogenerated at the liquid-liquid interface supported by membrane design number 3 (30 pores, each 5 µm in radius; for more details refer to Table 2.2) were evaluated by ion transfer voltammetry.

Blank experiment before and after modification

The potential window was determined by the transfer of the supporting electrolyte ions dissolved in each phase, which resulted in a current rise on both sides of the CV. There was no significant impact of the silica deposits on the width of the potential window. At the negative end, the potential window was limited by the transfer of Cl-, which crosses the interface at a higher potential than the BTPPA+ ions. Indeed, previous studies have shown that the Galvani transfer potential for Cl- is Δφ1/2 = -530 mV, 245 whereas the Galvani transfer potential for BTPPA+ is Δφ1/2 = -700 mV. 245 The peak observed at Δφ = -450 mV was then attributed to the back-transfer of Cl- from the organic to the aqueous phase, whose diffusion was confined inside the microITIES pores and hence was linear. At the positive end of the potential window, the transfer was limited by the transfer of the anions of the organic electrolyte salt.
Indeed, the absence of a peak on the reverse scan suggested radial diffusion in the aqueous phase, which would correspond to the back transfer of TPBCl- from the aqueous to the organic phase. Unfortunately, the standard Galvani potential for the transfer of TPBCl- has never been determined, 245 and one can only assume that it is lower than the one for Li+, which is Δφ1/2 = +580 mV. 246

Single charge ion transfer before and after modification

The electrochemical behavior in the presence of silica deposits of three tetraalkylammonium cations of different sizes (TMA+, TEA+ and TBA+), see Figure 4.20 A, and of the negatively charged 4OBSA-, see Figure 4.20 B, was first studied. The electrochemical behavior of each analyte was investigated in the absence (black curves) and in the presence (red curves) of silica deposits at the microITIES. All ions were initially present in the aqueous phase at a concentration of 56.8 µM, and the forward polarization direction was selected in agreement with the charge of the transferring species (towards positive potentials for the tetraalkylammonium cations and towards negative potentials for 4OBSA-). The cyclic voltammograms of all four species showed a sigmoidal forward signal, in agreement with a hemispherical diffusion zone on the aqueous side of the interface, and a peak-like response on the reverse scan, indicative of a linear diffusion limitation inside the microITIES pore channels filled with the organic phase. The voltammograms are shown on the Galvani potential scale, based on the transfer peak of Cl-. The presence of silica deposits has an impact on both the ion transfer potential and the current intensity. For all four ions studied here, the presence of silica deposits at the microITIES made the transfer more difficult: the ion transfer potentials were shifted negatively by 22 mV for TMA+, 26 mV for TEA+, 46 mV for TBA+ and by 50 mV for 4OBSA-. Based on the potential shift, the difference in the Gibbs energy of transfer, ΔΔG, could be estimated:

ΔΔG = ΔG(before modification) - ΔG(after modification) (3.3)

which is related to the shift in the half-wave transfer potential, Δ(Δφ1/2), through ΔΔG = |z|F Δ(Δφ1/2). The difference obtained for 4OBSA- was higher than for any of the cations tested, even larger than for TBA+, whose hydrodynamic radius is almost twice as big (rh,4OBSA- = 0.30 nm and rh,TBA+ = 0.48 nm). 247 In spite of similar hydrodynamic radii, the behaviors of TEA+ (rh = 0.28 nm 247) and 4OBSA- were quite different, with a shift of 2.5 kJ mol-1 for TEA+ and 4.8 kJ mol-1 for 4OBSA-. A similar trend was observed for the ion transfer current, with a drop caused by the interface modification of 59% for 4OBSA- and 48% for TEA+. This behavior can be attributed to the electrostatic repulsion between the negative net charge of the silica surface (arising from the presence of deprotonated terminal OH groups located on the edges of the silica walls, the point of zero charge of silica being around pH 2) 248 and the anionic 4OBSA-.
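A minimal numeric check of the Gibbs energy shifts quoted above, together with the Stokes-Einstein relation used for the hydrodynamic radii (and later in the notes to Table 4.2), is sketched below. The diffusion coefficient used in the radius example is hypothetical, chosen only to reproduce the order of magnitude of the quoted rh values.

```python
import math

F = 96485          # C/mol, Faraday constant
KB = 1.3807e-23    # J/K, Boltzmann constant
T = 293            # K
ETA = 0.89e-3      # Pa*s, aqueous phase viscosity (0.89 cP)

def gibbs_shift_kJ_mol(z: int, shift_mV: float) -> float:
    """|ddG| = |z| * F * d(dphi_1/2), converted from mV to kJ/mol."""
    return abs(z) * F * shift_mV * 1e-3 / 1000

def hydrodynamic_radius_nm(D_cm2_s: float) -> float:
    """Stokes-Einstein: r_h = kB*T / (6*pi*eta*D), D in cm^2/s, result in nm."""
    return KB * T / (6 * math.pi * ETA * D_cm2_s * 1e-4) * 1e9

for ion, z, shift in [("TMA+", 1, 22), ("TEA+", 1, 26),
                      ("TBA+", 1, 46), ("4OBSA-", -1, 50)]:
    print(f"{ion}: ddG ~ {gibbs_shift_kJ_mol(z, shift):.1f} kJ/mol")
# TEA+ gives ~2.5 kJ/mol and 4OBSA- ~4.8 kJ/mol, as quoted in the text

# Hypothetical D of 8.0e-6 cm^2/s gives r_h ~ 0.30 nm, the order quoted above
print(f"r_h ~ {hydrodynamic_radius_nm(8.0e-6):.2f} nm")
```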
Multicharged ion transfer before and after modification

Dendrimers are the second family of species studied in this work. Large multi-charged species give a complex electrochemical response at the ITIES, which varies with the dendrimer generation. They can undergo either interfacial adsorption or ion transfer. Furthermore, PAMAM dendrimers at the electrified liquid-liquid interface have been employed as encapsulating agents for smaller porphyrin molecules 249 or molecular organic dyes. 250 The complex behavior of dendrimer-guest molecular associations, studied with cyclic voltammetry coupled with spectroscopic methods, indicated that the ion transfer reaction is accompanied by an interfacial adsorption process, well evidenced by the rapid current drop on the reverse peak. 249 Prior to interfacial modification, both dendrimers (G0 and G1) underwent a simple ion transfer reaction, as is also the case at the macroscopic ITIES. Modification of the microITIES with silica led to the following results: (i) a lower forward current for both dendrimers, slightly more pronounced for PAMAM G1 (a 51% loss of the forward current, as compared with a 29% loss for PAMAM G0); (ii) a shift of the forward and reverse transfer potentials towards more negative potentials, in contrast to the series of tetraalkylammonium cations; and (iii) an unaffected shape of the back transfer peak for PAMAM G0 after modification, i.e. a diffusion-limited peak was still observed, suggesting that the transfer current is due to an interfacial transfer reaction. The characteristics of the PAMAM G1 behavior are more complex, as the peak shape was changed. These phenomena (i, ii and iii) may have two origins. First, on the forward scan, the ion transfer can be affected by the electrostatic interaction between the positively multi-charged dendritic molecules and the negative net charge of the silica (terminal OH groups inside the mesopores), which leads to charge screening and results in a lower current. Second, the peak on the reverse scan (especially visible for PAMAM G1) was probably affected by an adsorption process since, in addition to the negative peak potential shift, the current drop was more rapid and the separation between the forward and reverse signals increased from 64 mV to 114 mV (factors that are a fingerprint of macromolecules exhibiting adsorption behavior at the ITIES). In the case of PAMAM G1 adsorption, the facilitated transfer of the organic electrolyte counter ion cannot be excluded and may give some additional portion of faradaic current on the forward scan, as was shown for proteins and synthetic dendrimers studied at the ITIES.
Figure 4.24 presents the calibration data for the studied ions; the panels for 4OBSA-, PAMAM G0 and PAMAM G1 are numbered 4, 5 and 6, respectively. As already noticed, the modification caused a decrease of the faradaic current for each of the studied ions. On the forward scan, the sigmoidal wave current increased linearly with concentration for the three tetraalkylammonium cations and 4OBSA- over the whole concentration range studied. This was observed both in the presence and in the absence of silica deposits. The behavior was different for the multi-charged species: the forward transfer current reached a plateau at the higher dendrimer concentrations (see the calibration curves on Figure 4.24, panels 5c and 6c). The analytical parameters of each ion (sensitivity, limit of detection and apparent diffusion coefficient within the silica deposits), in the presence and in the absence of silica at the microITIES, were extracted from these calibration curves (Table 4.2) using the equation for the limiting current at an array of microITIES:

i_lim = 4 n r z_i F D C_i (3.5)

where n is the number of microITIES in the array (n = 30), r is the microITIES radius (r = 5 µm), z_i is the charge of the transferred species, F is the Faraday constant (96485 C mol-1), D is the diffusion coefficient and C_i is the concentration. Theoretical calibration curves in the absence of silica deposits were fitted using Eq. (3.5). There is a good correlation between the sensitivity calculated from the slope of the theoretical fit using Eq. (3.5) and the sensitivity measured experimentally before modification, S0, for TMA+ and TEA+, and to a lesser extent for TBA+ and PAMAM G0 and G1. The impact of the interface modification on the sensitivity, S, is shown for the different ions studied as a function of the product of their charge, z_i, and their diffusion coefficient, D_i (Figure 4.25). For single charge species, the impact of the interface modification on the sensitivity ratio S/S0 (S0 being the sensitivity before modification) was lower for the smaller ion TMA+ than for the larger TBA+. A similar trend was observed for the dendrimers, where the impact on the sensitivity ratio was greater for PAMAM G1 than for PAMAM G0: the sensitivity of the larger molecule (PAMAM G1) was more affected than that of the smaller one (PAMAM G0). Nevertheless, the impact on S/S0 is higher for both of these multiply-charged molecules than for the singly charged ions. This can be explained by stronger interactions between the negatively charged silica walls and the multiple charges of the PAMAM dendrimers.

Electroanalytical properties of microITIES modified with silica deposits

The limit of detection (LOD) was calculated from the linear fit as:

LOD = 3.3 SD / S (3.6)

where SD is the standard deviation of the intercept and S is the sensitivity. The LODs for the ions studied before microITIES modification are in the µM range, with the exception of TBA+, which is in the sub-µM range. The modification with mesoporous silica did not impact the LODs significantly, since their values remained in the same range (see Table 4.2).
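The two working equations of this subsection can be checked numerically with the short sketch below. The concentration, diffusion coefficient, intercept deviation and sensitivity in the example are hypothetical, chosen only to illustrate the orders of magnitude at the 30-pore array; they are not fitted values from Table 4.2.

```python
F = 96485  # C/mol, Faraday constant

def i_lim_A(n: int, r_cm: float, z: int, D_cm2_s: float, C_mol_cm3: float) -> float:
    """Limiting current at an array of microITIES, Eq. (3.5): i = 4*n*r*z*F*D*C."""
    return 4 * n * r_cm * z * F * D_cm2_s * C_mol_cm3

def lod(sd_intercept: float, sensitivity: float) -> float:
    """Limit of detection, Eq. (3.6): LOD = 3.3*SD/S (units set by the inputs)."""
    return 3.3 * sd_intercept / sensitivity

# Hypothetical singly charged ion, D = 1e-5 cm^2/s, 56.8 uM, 30 pores of 5 um radius
i = i_lim_A(n=30, r_cm=5e-4, z=1, D_cm2_s=1e-5, C_mol_cm3=56.8e-9)
print(f"i_lim ~ {i * 1e9:.1f} nA")  # ~3.3 nA

# Hypothetical SD of the intercept (A) and sensitivity (A per mol/cm^3)
print(f"LOD ~ {lod(2e-11, 0.0579):.1e} mol/cm^3")  # ~1e-9 mol/cm^3, i.e. ~1 uM
```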
According to Eq. (3.5), the difference in sensitivity before and after modification can be explained by an impeded diffusion of the charged species through the mesoporous silica, leading to apparent diffusion coefficients, D'_i, that are lower than the diffusion coefficients of the species in bulk solution. From the linear fits of the calibration curves for the modified microITIES using Eq. (3.5), D'_i was extracted for all the studied species, assuming that the microITIES radius remained unchanged in the presence of the silica deposits (the deposits are flat at the bottom and preferentially filled with the aqueous phase). The D'_i values calculated for all the ions are shown in Table 4.2: a smaller drop for TMA+ and a 50% drop for TEA+, TBA+ and 4OBSA- were observed in the presence of silica deposits. These estimations correlate with previous studies: the diffusion coefficient of TEA+ within a zeolite Y modified ITIES (with an aperture diameter of 7.4 Å) was two orders of magnitude lower than in the aqueous phase. 188 TEM images showed that the mesopore dimensions of the silica material from this work are significantly larger than the apertures of zeolite Y, and hence the effect on D'_i is considerably smaller than on the diffusion coefficient through zeolite Y.

Notes to Table 4.2: rh is the hydrodynamic radius, D_i is the aqueous diffusion coefficient, D'_i is the apparent diffusion coefficient determined experimentally, z_i is the charge, S_theor corresponds to the theoretical sensitivity based on Eq. (3.5), S0 corresponds to the sensitivity at a bare microITIES and S to the sensitivity at a modified microITIES; BM stands for before modification and AM for after modification. *Calculated from the Stokes-Einstein relationship, rh = kB T / (6πηD), where kB is the Boltzmann constant (1.3807×10-23 m2 kg s-2 K-1), T is the temperature (293 K), η is the aqueous phase viscosity (0.89 cP) and D is the diffusion coefficient (cm2 s-1). **Calculated from the calibration curve fit based on the equation describing the limiting current at an array of microITIES (Eq. (3.5)).

Conclusion

Arrays of microITIES were modified with silica deposits by the electrochemically driven, surfactant (CTA+) assisted Sol-Gel process, with TEOS as the silica precursor species. The electrodeposition and the morphological study are described in section 4.2 of this chapter. The silica deposits were characterized by ion transfer voltammetry of six analytes differing in size, nature and charge: TBA+, TEA+, TMA+, 4OBSA- and PAMAM G0 and G1. In the presence of silica deposits at the ITIES, the transfer currents observed for all species were lowered and the transfer potentials were shifted, depending on the nature of the studied analyte. An increase in the size of the tetraalkylammonium cations led to a greater current drop and potential shift (from the slightly affected TMA+ to the clearly changed TBA+). In the case of 4OBSA-, the current drop and the E1/2 shift were attributed to both size and charge effects. The electrochemical behavior of the multi-positively-charged dendrimers in the presence of silica deposits was attributed to the electrostatic interaction with the negatively charged silica network. Furthermore, the microITIES modification affects the electroanalytical parameters (sensitivity, LOD and apparent diffusion coefficient) differently depending on the size of the species transferred, suggesting the future possibility of selective ion transfer using microITIES modified with functionalized mesoporous silica. The results of this section were published in Electrochimica Acta, doi:10.1016/j.electacta.2015.01.129.

Chapter V. Local pH change at the ITIES induced by ion transfer and UV photolysis

The anodic oxidation of water allows the generation of H+, which can act as a catalyst in the Sol-Gel process of silica, leading to uniform and compact film formation at the electrode surface. 254 Silica film formation from hydrolyzed alkoxysilane species can also be conducted at high pH values, easily feasible by cathodic water reduction. 255 Such an approach can be extended to the electro-assisted deposition of TiO2, 256 SiO2-TiO2 257 or SiO2-conducting polymer 258 electrode modifiers.
The electro-assisted Sol-Gel process of silica (at cathodic potentials facilitating OH- generation), carried out in the presence of structure driving agents, has led to the formation of well-ordered mesoporous films oriented normal to the electrode surface. 239 Such properties are of paramount interest in electroanalysis, since a well-defined mesostructure can act as a molecular sieve which, in the case of silica, can additionally be modified with a number of functionalities. 102 The approach in which H+ or OH- generated at the ITIES indirectly triggers a modification of the liquid-liquid interface has never been studied before. Indeed, during interfacial modification with polymers 153 or polymer-Au NPs, 156 the generation of protons as a reaction side product was reported but not evidenced experimentally. The application of interfacially active and photoactive species (in polymer science such species are known as photobase 259 or photoacid 260 generators), whose photolysis may lead to a change in pH, could be used to trigger pH sensitive reactions such as, for instance, the Sol-Gel process of silica. In this work, the trimethylbenzhydrylammonium (PH+) cation was first synthesized 195 and then used as an interfacially active and photoactive ion whose photolysis led to a local pH decrease. First, the electrochemical behavior of PH+ was studied at macro- and microITIES. The photolysis of PH+ was performed by UV irradiation and was followed by electrochemistry, HPLC and mass spectrometry. Interestingly, the transfer and subsequent photolysis of PH+ was not the only source of pH change: a side reaction at the counter electrode was found to occur, namely the reduction of water at negative interface polarization potentials, which led to an increase in pH. A local pH measurement had to be performed in order to separate the pH increase occurring at the aqueous counter electrode from the pH change occurring at the ITIES. To do so, a Pt microdisc electrode modified with iridium oxide (as the pH electrode) 261,262 was used to probe the pH above the microITIES. Finally, the local pH change was used to trigger the interfacial silica deposition in a liquid-liquid configuration where the precursor (TEOS) was initially dissolved in DCE and the template (CTA+Br-) in the aqueous phase.

The methylation of the primary amine with methyl iodide led, under proper conditions, to the quaternary amine. The detailed preparation protocol can be found in section 2.5.7 of chapter II. The proton NMR spectra of both ADPM and PH+I- were recorded. The standard transfer potential for iodide is -340 mV, 245 and hence I- would be expected to cross the interface at the more negative side of the potential window. Unfortunately, showing the potential scale as the standard Galvani potential difference is not possible, since no internal reference was present in the system. Moreover, for the signal with E1/2 at around +550 mV, it was also difficult to study the effect of the capillary geometry on the shape of the signal itself, since it is partially masked by the background electrolyte transfer, and hence the charge of the transferring species remains unknown.
The presence of undesirable species transferring on the positive side of the available potential window required further processing; hence PH+ was precipitated from the post-synthesis mixture with TPBCl- (please refer to section 2.5.4 in chapter II for the protocol of PH+TPBCl- preparation):

PH+I- + K+TPBCl- → PH+TPBCl-↓ + KI (5.1)

After filtration, the white precipitate was rinsed four times with 50 mL of distilled water (each time, the flask containing PH+TPBCl- and 50 mL of H2O was placed in an ultrasonic bath for 10 min). After sonication each sample was centrifuged. Next, the aqueous phase was collected from above the solid PH+TPBCl- and analyzed by ion chromatography.
At the microITIES, the asymmetric shape of the voltammetric signal (peak-like on the forward scan - linear diffusion inside the pore - and wave-like on the reverse scan - radial diffusion at the pore ingress) confirms the occurrence of the PH+ interfacial transfer.
Electrochemical characterization of PH+ transfer at macroITIES
The diffusion coefficient of PH+ was estimated based on the Randles-Sevcik equation:

Ip = (2.69 × 10^5) z^(3/2) A D^(1/2) C v^(1/2) (5.2)

where z is the charge, A is the interfacial surface area (2.83 cm²), D is the diffusion coefficient, C is the concentration (50 × 10^-9 mol/cm³) and v is the scan rate. The obtained value of 6.91 • 10^-6 cm²/s is of the same order of magnitude as for interfacially active species having a similar hydrodynamic radius. [Samec, Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)] The behavior of PH+ at the polarized ITIES can be classified as 'common' for known mono-charged quaternary ammonium cations. Its interfacial transfer can easily be controlled with electrochemical methods such as cyclic voltammetry, which in parallel can be used as an analytical detection method.
The abbreviations stand for: UV for ultraviolet, Nu for nucleophile and R for the benzhydryl group. The first mechanism (photo-SN1) is expected to give a tertiary amine (:N(CH3)3) and a carbocation that reacts with the nucleophile and gives R-Nu products. Photo-SN2 leads to a tertiary amine and R-Nu through the formation of an intermediate. In the single electron transfer (SET) mechanism, an electron from the nucleophile is excited by the UV irradiation and can transfer to the aromatic system of the quaternary ammonium cation. The resulting radical then dissociates into the benzhydryl radical and the tertiary amine. When the photolysis of PH+ cations carrying I- or TPBCl- as the counter ion was studied in water or EtOH, the photo-SN1 mechanism was considered to take place (polar protic solvents favor the SN1 mechanism). In the case of the photolysis reaction taking place in DCE, the SET mechanism was very probable. 195
PH+ photolysis products were studied by HPLC. A series of peaks up to 3 min and an intense peak at 3.14 min were observed and attributed to the products of photodecomposition. A very weak peak was also recorded at around 12.95 min and originates from remaining PH+ ions. The photolysis of PH+ could also be followed by cyclic voltammetry. The photodecomposition was thus confirmed and followed with three different techniques: cyclic voltammetry, indicating the disappearance of interfacially active species in the organic phase upon UV irradiation; chromatography, which allowed the separation of the photolysis products; and finally mass spectrometry, aiming to identify the nature of the aqueous phase photolysis products.
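As a side note to the macroITIES characterization above, extracting D from Eq. (5.2) is a one-line inversion. A minimal sketch is given below; the peak current and the scan rate are illustrative assumptions (chosen to reproduce the D value quoted in the text), while z, A and C follow the text.

import math

# Randles-Sevcik (Eq. 5.2): Ip = 2.69e5 * z**1.5 * A * sqrt(D) * C * sqrt(v),
# with Ip in A, A in cm^2, D in cm^2/s, C in mol/cm^3 and v in V/s.
def diffusion_coefficient(Ip, z, A, C, v):
    """Invert Eq. (5.2) for D (cm^2/s)."""
    return (Ip / (2.69e5 * z**1.5 * A * C * math.sqrt(v))) ** 2

# z = 1, A = 2.83 cm^2 and C = 50e-9 mol/cm^3 as in the text;
# Ip = 7.1 uA and v = 5 mV/s are hypothetical values used for illustration.
D = diffusion_coefficient(Ip=7.1e-6, z=1, A=2.83, C=50e-9, v=0.005)
print(f"D = {D:.2e} cm^2/s")  # ~7e-6 cm^2/s, same order as quoted in the text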
Local pH change induced by electrochemical transfer and photodecomposition of PH+ species
To study the effect of PH+ photodecomposition on the aqueous phase pH, the PH+ species were transferred electrochemically to the aqueous phase with subsequent UV irradiation. The first pH measurements were performed with a standard pH meter. A pH increase was observed, however not due to photoelectrochemistry but as a result of a side reaction at the aqueous counter electrode - water reduction (2H2O + 2e- → H2 + 2OH-). The effect of negative polarization on the aqueous phase pH was studied further. The masking effect caused by the side reaction at the aqueous counter electrode highlighted the need to measure the pH locally; to do so, the iridium oxide modified Pt microdisc electrode was positioned above the microITIES. The exact nature of the species marked with ??? (org) in the reaction scheme was not studied, and it can only be assumed that they arise from reactions of the nitrogen radical and transfer to the organic phase. The Δ_org^aq Φ0 for H+ at the DCE-water interface is 549 mV 1 and hence, under the applied polarization conditions, the protons generated on the aqueous side are not expected to transfer to the organic phase. The pH changes observed above the liquid-liquid interface are significant enough to induce the hydrolysis and condensation reactions of TEOS, and hence the system was employed in this regard, as shown in the following section.
Silica deposition induced by local pH decrease
Silica deposition at the electrified liquid-liquid interface can be triggered by the electrochemical transfer of a cationic surfactant - the template - from the organic phase to an aqueous phase containing the already hydrolyzed silica precursor. The template then acts both as a catalyst for the silica condensation reaction and as a structure-driving agent. The neat liquid-liquid interface can also be modified with a silica deposit in the 'reversed system', where the precursor is dissolved in the organic phase contacted with an acidic or basic aqueous phase. 185,176 In order to induce porosity of the interfacial silica deposit, a template - usually a cationic surfactant - can be added to the aqueous phase. 177 A similar approach was used in this work. The only difference was the pH of the aqueous phase which, instead of being fixed (adjusted before the liquid-liquid interface was formed), was controlled by the electrochemical transfer of PH+ (org→aq) with simultaneous UV irradiation. The CVs recorded at the macroscopic ITIES in the presence of 1 mM CTA+Br- in the aqueous phase and TEOS in the organic phase showed that the surfactant destabilized the macroscopic interface, so the miniaturized ITIES was used for deposition. In order to trigger silica deposition, the local pH of the aqueous phase above the liquid-liquid interface was decreased by PH+ transfer from the organic phase (the potential of the ITIES was held at +150 mV for 60 minutes) with simultaneous UV irradiation. The photolysis of PH+ indirectly led to proton generation, which catalyzes the hydrolysis of the silica precursor from the organic phase. The deposition was carried out with simultaneous irradiation in the presence and in the absence of CTA+ in the aqueous phase. Subsequently, the cell was left for 12 hours. After this time the silica deposits were collected and cured at 130°C for 16 hours. The morphological analysis was performed by TEM imaging. 'Worm-like' structures were observed for all three samples, even without CTA+ in the aqueous phase (see TEM image A in Figure 5.15). This finding suggests that a spontaneous formation of mesopores could take place here, as was already shown to occur for aluminosilicate materials prepared via the Sol-Gel process. 268
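Assuming the photo-SN1 pathway discussed in the previous section, the overall sequence coupling the electrochemically controlled transfer to the local acidification can be summarized as follows:

\[ PH^+_{(org)} \xrightarrow{\ \Delta\phi\ } PH^+_{(aq)} \]
\[ PH^+_{(aq)} \xrightarrow{\ h\nu\ } (C_6H_5)_2CH^+ + N(CH_3)_3 \]
\[ (C_6H_5)_2CH^+ + H_2O \rightarrow (C_6H_5)_2CHOH + H^+ \]
\[ Si(OC_2H_5)_4 + 4H_2O \xrightarrow{\ H^+\ } Si(OH)_4 + 4C_2H_5OH \]

The benzhydrol formed in the third step is the photolysis product identified in the aqueous phase, and the released proton is the species that catalyzes the hydrolysis of TEOS arriving from the organic phase.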
Conclusion
In this work, an interfacially active quaternary ammonium cation (trimethylbenzhydrylammonium - PH+), sensitive to UV irradiation, was synthesized and characterized electrochemically. The photodecomposition of PH+ in protic solvents was studied, and the formation of benzhydrol in the aqueous phase, as one of the products of photolysis, was confirmed. PH+ was then employed to locally affect the pH of the aqueous phase, since its transfer can be controlled by electrochemical means whereas irradiation leads to its decomposition, followed by reactions which affect the aqueous proton concentration. Due to the water reduction taking place at the aqueous counter electrode, the pH change of the aqueous phase could not be followed with conventional pH measurements. Iridium oxide modified Pt microdisc electrodes were used to study the change of pH at the local scale, and the results have shown that PH+ transferred and photodecomposed in the aqueous phase increases the proton concentration. This experimental set-up was then used to modify the liquid-liquid interface with silica material. The results included in this chapter are planned to be published in the journal Electrochimica Acta.
General conclusions
The main focus of this thesis was the modification of the electrified liquid-liquid interface with silica material to form molecular sieves. In general, the silica deposition was performed via the Sol-Gel process in the presence of a soft template. The deposition was controlled by electrochemistry at the ITIES. Two categories of ITIES were employed for this purpose: (i) a macroscopic ITIES created in a conventional four-electrode electrochemical cell and (ii) microITIES in the form of an array of pores or a single microscopic pore capillary.
The macroscopic ITIES was first employed to study the silica deposition mechanism. The cationic surfactant CTA+ was used as a template and a catalyst for the silica formation, initially dissolved in the organic phase (10 mM BTPPA+TPBCl- solution in dichloroethane). The silica precursor, TEOS, was hydrolyzed in the aqueous phase (at pH = 3) and was thus separated from the template species. The transfer of CTA+ was controlled by interfacial polarization and was only observed in the presence of hydrolyzed silica in the aqueous phase - this type of reaction is known as facilitated ion transfer. Silica formation was triggered once CTA+ cations were transferred to the aqueous phase: the formation of spherical micelles catalyzes the condensation reaction and acts as a template which structures the silica deposits. The following general conclusions were made regarding the macroscopic ITIES modification:
1. The CTA+ transfer reaction was irreversible, as the current drop characteristic of adsorption (associated with the backward CTA+ (aq→org) transfer) was found on the cyclic voltammograms;
2. The silica deposit is formed at the liquid-liquid interface after one voltammetric cycle run at 5 mV/s;
3. The charge back-transferred to the organic phase increases for the first few cycles and then becomes constant, which indicates that the interfacial region was 'saturated' with the negative charge of condensing silica facilitating the CTA+ (aq→org) transfer;
4. The limiting factor of the silica deposit formation is [CTA+]org and its concentration in the diffusion layer on the aqueous side of the liquid-liquid interface;
5. [TEOS]aq did not affect the CTA+ transfer in the studied concentration range (from 50 mM up to 300 mM);
6.
The aqueous phase pH, governing the formation of polynuclear silanol species, was found to be optimal in the range from 9 to 10.
Moreover, the macroscopic ITIES served to generate silica deposits for further characterization. In order to cure the silica deposits collected from the liquid-liquid interface, they were stored overnight in an oven at 130°C. A set of characterization techniques was employed, indicating that:
1. The silica formation was confirmed. Further spectroscopic (infra-red) investigation indicated the presence of CTA+ within the silica network (as expected) and traces of organic electrolyte ions;
2. XPS indicated that TEOS was not totally hydrolyzed, since a C-O bond was detected. Furthermore, based on the XPS results it was assumed that some charge balancing between negatively charged OH- groups (present inside the silica pores) and Na+ can occur;
3. The mesostructure of the silica deposits was confirmed, and the pore center-to-center distance, depending on the polarity of the organic phase and the [CTA+]org concentration, was in the range from 3.7 to 7 nm;
4. The pores within the silica deposits have a 'worm-like' shape, as evidenced by the broad peak in the SAXS patterns and directly by the TEM micrographs.
Once the silica deposit electrogeneration was optimized at the macroscopic ITIES, miniaturization of the liquid-liquid interface was performed. The membrane used to support the ITIES was a silicon wafer with an array of microscopic pores (with radii ranging from 5 µm up to 10 µm) arranged in a honeycomb pattern. Miniaturization improved the electroanalytical parameters of the system (a lower limit of detection due to the lower capacitive current, and better sensitivity arising from higher mass transfer) and additionally allowed the formation of mechanically stable silica deposits, which were further characterized and evaluated. The study concerning silica electrodeposition at the array of microITIES can be summarized as follows:
1. The current-potential characteristics recorded during silica formation, affected by the asymmetric diffusion profiles on the two sides of the liquid-liquid interface, did not correspond to a simple ion transfer reaction;
2. The shape of the forward peak (arising from the CTA+ (org→aq) transfer) depended on the scan rate (diffusion layer thickness on the organic side of the liquid-liquid interface) and on the membrane design used (the pore center-to-center distance). Generally speaking, the process was diffusion limited: the transfer was governed by linear diffusion inside the pores of the silicon wafer for scan rates > 10 mV/s, and by an overlapping diffusion layer at the ingress to the pores from the organic phase for scan rates < 0.1 mV/s. Overlap of the diffusion profiles on the organic side of the liquid-liquid interface was also observed for membranes with a low pore spacing factor;
3. The silica deposits always grow towards the bulk of the aqueous phase. They are formed at the pore ingress on the aqueous side of the interface, are flat on the bottom and are filled with silica inside (which excludes any possible interface movement during deposition);
4. The shape of the silica deposits corresponds to the hemispherical diffusion layer of CTA+ in the aqueous phase: for short experimental times they were flat at the top and rounded on the sides, whereas longer experimental times led to the formation of hemispherical 'cups';
5.
The blocking effect of the organic species present inside the pores of the silica deposits was confirmed by ion transfer voltammetry;
6. Calcination allowed the removal of the organic species from inside the mesopores. The empty mesostructure was permeable for the analytes.
The dimensions of the miniaturized liquid-liquid interface required a local characterization technique. Confocal Raman spectroscopy was used for this purpose and allowed the study of the different molecular contributions. Two phenomena were followed: (i) the interfacial ion transfer reaction and (ii) the electrochemically controlled formation of silica deposits at the interface. In general, the following information was extracted from this work:
1. Negative polarization affected the molecular composition of the liquid-liquid interface, since the Raman signals corresponding to BTPPA+ increased and those of TPBCl- decreased;
2. No movement of the interface was detected during interfacial polarization: the set of Raman bands attributed to DCE remained unchanged regardless of the applied potential;
3. The molecular composition of the liquid-liquid interface changed abruptly after the first half of the voltammetric cycle during silica formation;
4. Strong signals originating from CTA+, BTPPA+ and TPBCl- were found during electrodeposition, and the signal intensity tended to increase with the number of voltammetric cycles;
5. The in situ study of silica deposition did not evidence Si-O-Si bond formation. Nevertheless, the presence of silica was confirmed after the electrogenerated material was cured thermally. A long acquisition time was employed for this purpose.
Ion transfer voltammetry of five analytes differing in charge, size and nature was finally employed to evaluate electroanalytically the array of microITIES modified with silica deposits (modified under the optimal conditions elaborated during the previous study). The following observations were made:
1. The ion transfer was affected by the presence of silica deposits for all five analytes studied;
2. The change in the Gibbs energy of transfer before and after modification for the three tetraalkylammonium cations of different size was observed to be greater for the largest, TBA+, as compared with the slightly affected TMA+. The larger, positively mono-charged analytes required more energy to transfer across the liquid-liquid interface modified with the silica deposits;
3. The transfer of negatively charged 4OBSA- was also studied, and it was found that the interfacial modification increased the Gibbs energy of transfer in the same manner as for the larger TBA+ cation. This behavior was attributed to electrostatic repulsion between the negatively charged anion and the OH groups located inside the silica pore walls;
4. The electrochemical behavior of the PAMAM dendrimers at the modified liquid-liquid interface was found to differ from that of the tetraalkylammonium cations. Firstly, the E1/2 for both dendrimers was shifted towards less positive potentials, which means that the presence of silica deposits decreased the amount of energy required to trigger the transfer. Secondly, fingerprints of adsorption for the bigger PAMAM molecule - generation 1 - were found in the presence of silica deposits. Both phenomena were attributed to electrostatic interactions between the positive charge of the dendrimers and the negative net charge of the silica framework.
These observations clearly indicate that the presence of silica deposits at the ITIES affects the electrochemical behavior of different analytes in different manners, suggesting that the ultimate goal of this work - sieving properties - is entirely achievable. Further evaluation of the liquid-liquid interface modified with silica material could be performed with larger molecules, for instance larger PAMAM dendrimers or biomolecules. Assessment of the silica deposits through the permeability coefficient (which can be extracted from information collected with SECM) would be of highest interest. Some effort was made in this direction (see section 7.1); however, further study is needed. The silica electrogeneration was performed on the aqueous side of the liquid-liquid interface and its morphology was governed by the hemispherical diffusion of CTA+. Interesting properties could be obtained if the interior of the pore, where transfer is limited only by linear diffusion, could be modified in the same manner. The confined geometry inside the miniaturized pores can affect the morphology of the silica deposits, and consequently their sieving properties, differently. The selectivity of the ITIES modified with silica deposits can also be improved by chemical functionalization.
The Sol-Gel process allows the permanent introduction of organic groups into the silica framework by way of co-condensation of organosilanes with alkoxysilanes. Some preliminary results concerning functionalized interfacial silica deposition can be found in section 7.2.
The first approach to liquid-liquid interface modification with silica material was based on the controlled electrochemical transfer of template molecules from the organic phase to the silica precursor containing aqueous phase. A local change of pH at the liquid-liquid interface is the second way of triggering silica deposition (the hydrolysis and condensation reactions are pH sensitive). The photosensitive and interfacially active cation trimethylbenzhydrylammonium (PH+) was employed to change the pH above the liquid-liquid interface on its aqueous side. The conclusions for this part of the thesis are:
1. Electrochemical transfer of PH+ (org→aq) with simultaneous UV irradiation led to the formation of a reactive carbocation (which reacts with water and forms benzhydrol) and releases a proton;
2. The overall pH of the aqueous phase was also affected by the water reduction taking place at the aqueous counter electrode;
3. The local pH measurements (performed above the liquid-liquid interface with iridium oxide modified Pt microdisc electrodes) have shown that the interfacial pH decreases once PH+ species are transferred and photodecomposed in the aqueous phase;
4. The local decrease in pH was shown to catalyze the silica hydrolysis followed by condensation (in this configuration the silica precursor, TEOS, was dissolved in the organic phase whereas the template, CTA+, was present in the aqueous phase) and resulted in silica deposit formation;
5. Miniaturization was found to mechanically stabilize the liquid-liquid interface in the presence of CTA+ species (dissolved in the aqueous phase), which had destabilized the macroscopic ITIES;
6. The mesostructure formation was poor, and worm-like structures were present even when silica was deposited in the absence of CTA+ in the aqueous phase.
Further study is needed to improve the mesostructural properties of the synthesized material. The factor which affects the structuration of the silica is the CTA+ concentration. Conducting an electrochemical study at the macroscopic ITIES in the presence of high [CTA+]aq was impossible, and hence miniaturization had to be employed (the microITIES was mechanically stable up to [CTA+]aq = 2.02 mM; higher concentrations were not investigated). Interfacial silica deposition can also be triggered by a local pH increase. In order to control this process electrochemically, an interfacially active species (initially present in the organic phase) has to be functionalized with a basic center, for instance a nitrogen atom with a lone electron pair. The electrochemically controlled transfer of the base to the aqueous phase will increase the pH, which can trigger the silica deposition (see section 7.3).
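In the same spirit as the acid-driven scheme above, the base-driven variant can be sketched as a reaction sequence; this is the intended chemistry of section 7.3 rather than an experimentally validated result:

\[ (H_2N(CH_2)_nN(CH_3)_3)^+_{(org)} \xrightarrow{\ \Delta\phi\ } (H_2N(CH_2)_nN(CH_3)_3)^+_{(aq)} \]
\[ R{-}NH_2 + H_2O \rightleftharpoons R{-}NH_3^+ + OH^- \]
\[ \equiv\!Si{-}OH + HO{-}Si\!\equiv \ \xrightarrow{\ OH^-\ }\ \equiv\!Si{-}O{-}Si\!\equiv + H_2O \]

where R denotes the quaternary-ammonium-bearing alkyl chain; the OH- released by protonation of the pendant amine locally raises the pH and accelerates the condensation step of the Sol-Gel process.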
Perspectives
Some preliminary results concerning a few of the above-mentioned ideas are given in the following subsections.
Silica deposits - SECM characterization
SECM allows the study of the electrochemical behavior of a system at the local scale. In the SECM configuration, the electrochemical interaction between the tip and the studied interface is investigated. Two working modes can be distinguished in SECM: (i) the generation/collection mode, where one side of the electrochemical cell, say the tip, electrochemically detects a probe which is generated at the second side, say the support; and (ii) the feedback mode, which can be negative (a drop in current once the tip approaches an insulating surface) or positive (an increase in current in the close vicinity of a conducting surface). In the negative feedback mode, the probe in the bulk of the liquid media gives a steady-state current which decreases above the insulating surface due to the hindered diffusion profile at the tip.
In the positive feedback mode, the species responsible for the charge transfer, and thus for the tip current in the bulk, are regenerated at the conductive support (tip approach results in a current increase in this case). SECM can be employed to probe the properties of porous membranes, 247,269 and hence it could be employed to study the microITIES modified with silica deposits, using a set-up designed for this purpose.
Silica deposits functionalization
Different functionalities can be introduced into the silica framework on the basis of co-condensation between alkoxysilane and organosilane species. The electrodeposition of such functionalized silica materials can easily be performed at the liquid-liquid interface. A preliminary investigation has been done and the results were found to be promising. The co-condensation was performed with two organosilanes: (3-mercaptopropyl)trimethoxysilane and an azide-bearing organosilane. Electrochemically driven, surfactant-assisted deposition was used as the method for silica deposit generation: the cationic surfactant, initially dissolved in the organic phase, was transferred on forward polarization (from more positive to more negative potentials) to the aqueous phase, where it catalyzed the condensation and self-assembled with the condensing silica precursor species. Interfacial silica deposition was performed for different concentrations of organosilanes in the initial sol solution (from 5% up to 25%). The resulting silica deposits were collected from the liquid-liquid interface and spectroscopic characterization (data not shown) was performed. The thiol bond evolution was followed by Raman spectroscopy, whereas the presence of the azide group was confirmed by infra-red spectroscopy. The morphological characterization was conducted with SAXS. As a continuation, other functionalities can be introduced into the silica framework (co-condensation or post-grafting can be employed). In a second step, the miniaturized ITIES could be modified with the functionalized silica deposits, which could finally be evaluated with a range of interfacially active analytes.
Interfacially active base
A local pH increase of the aqueous phase near the liquid-liquid interface could be achieved with an ion transfer reaction, provided the ion initially present in the organic phase were functionalized with a base. Such a compound could be a quaternary ammonium cation substituted, for instance, with an alkyl chain terminated with a nitrogen atom bearing a lone electron pair (i.e. an (n-aminoalkyl)trimethylammonium cation). Surprisingly, such compounds are nearly inaccessible and very expensive, despite the fact that their synthesis seems to be easy. The amination of an alkyl halide containing a quaternary ammonium cation with ethylenediamine is one way to obtain the product of interest. In the future, the electroanalytical evaluation of the synthesized molecule can be performed at the electrified liquid-liquid interface. The change in pH has to be evidenced experimentally. The system employing an interfacially active base can be used to trigger interfacial silica deposition (which can be further characterized in order to evaluate its structure).
Preparation of nanopipettes
2. Each pulled capillary was observed with an optical microscope in order to verify the effectiveness of the applied pulling parameters.
3. Nanopipettes were then filled with chlorotrimethylsilane so that a small volume of the solution (~1 µL) was placed inside the capillary in the close vicinity of the bigger entrance.
A vertical position of the nanopipette, with the tip oriented upwards, was maintained at all times.
4. The nanopipette with the solution inside was left for 20 min (in the vertical position!) under the fume hood.
5. After this time the residual solution was removed from inside the capillary, and the nanopipette was stored overnight in the oven at 130°C.
6. The nanopipette characterization was performed by ion-transfer voltammetry.
Prior to the electrochemical measurement, nanopipettes were filled with a 10 mM solution of BTPPA+TPBCl- in DCE. Frequently, during pipette filling, an air space separated the very end of the filled tip from the rest of the organic solution filling the capillary. In order to remove this empty space from inside the tip, a glass fiber (pulled out from a Pasteur pipette above a Bunsen burner) with a diameter smaller than the inner diameter of the tip filled with the air bubble could be used. The limiting current recorded at micro- and nanopipettes can be calculated with equation (I.1):

Iss = 3.35 π C F D n r (I.1)

where Iss is the limiting current, C is the concentration of the analyte in mol/cm³, D is the diffusion coefficient in cm²/s, F is the Faraday constant (96485 A•s/mol), n is the net charge of the analyte and r is the pipette tip radius in cm.
I would also like to acknowledge other people who have contributed to this work. I will especially highlight Manuel Dossot and Jérôme Grausem, who have helped me a lot with the Raman spectroscopy, Cédric Carteret for his help with infra-red spectroscopy, Mathieu Etienne for his help with SECM, Neus Vila for advice concerning organic synthesis, Christelle Despas for help with ion chromatography and Marc Hebrant for his engagement during the HPLC analysis. I am also indebted to the people who were involved in other aspects of my work. Particularly, I would like to acknowledge Marie-José Stébé, Andreea Pasc and Melanie Emo for the SAXS analysis and precious discussions, Aurélien Renard for the XPS analysis, Jaafar Ghanbaja and Sylvie Migot for the TEM imaging and Lise Salsi for the SEM imaging. A special acknowledgement goes to my office mates and good friends: Doktorka Veronika Urbanova, Ievgien Mazurenko and Daniel Alonso Gamero Quijano. Next in row are my lab mates. With Ivan Vakulko and Khaoula Hamdi we started our theses at the same time. Wissam Ghach introduced me to the laboratory organization. Lin Zhang started her thesis at the end of 2013. Mohana Afsharian first did her master training in 2013 and then, in 2014, joined the ELAN team as a PhD student. During my third year Maciek Mierzwa, Tauqir Nasir, Cheryl Maria Karman and Stephane Pinck became part of our team. Martha Collins and Maizatul Najwa Binti Jajuli, both working with the liquid-liquid interface, did their master internships in our team in 2015. Together we were a dozen or so nationalities working under one roof. It was a great pleasure to be a part of such a multicultural team. One part of my PhD was the organization of DocSciLor 2015. For great fun and a new experience I am glad to thank Fernanda Bianca Haffner, Hugo Gattuso, Ileana-Alexandra Pavel, Ivan and Maciek. I also want to acknowledge Claire Genois for taking care of the laboratory organization, which made our lives easier. I cannot omit the workshop team: Jean-Paul Moulin, also known as Monsieur Moustache, Gérard Paquot and Patrick Bombardier. I am very grateful for their technical support. My acknowledgements also go to Marie Tercier, Christelle Charbaut and Jacqueline Druon for their help with all administrative issues, and of course to all LCPME members.
Figure 1.1. Different models for the ITIES structure. Black solid lines correspond to the potential distribution across the polarized liquid-liquid interface.
Figure 1.2. Standard ion-transfer potentials for different anionic and cationic species across the dichloroethane-water interface. The abbreviations stand for: Ph4P+ - tetraphenylphosphonium; Ph4As+ - tetraphenylarsonium; Ph4B- - tetraphenylborate; and Alkyl4N+ - tetraalkylammonium compounds. Figure prepared based on refs. [Samec, Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)] and [Sabela, Standard Gibbs Energies of Transfer of Univalent Ions From Water To 1,2-Dichloromethane].
Figure 1.3. Four mechanisms of possible assisted ion transfer reactions. Designations: L - ligand, i - ionic species, ACT - aqueous complexation followed by transfer, TOC - transfer followed by complexation, TIC - transfer by interfacial complexation and TID - transfer by interfacial dissociation.
Deviations from stability are induced by reactions occurring at the electrode surface (Figure 1.4 (b)).
Figure 1.4. Current-potential characteristics for a) polarizable and b) non-polarizable electrodes.
Figure 1.5. Voltammogram recorded only in the presence of the aqueous (LiCl) and the organic (BTPPA+TPBCl-) supporting electrolytes. Regions A and C correspond to electrolyte ion transfer currents, whereas region B is the potential window, within which the interface is impermeable to all ionic species present in both phases.
Figure 1.5 illustrates a cyclic voltammogram recorded at the polarized interface between an aqueous solution of LiCl and an organic solution of bis(triphenylphosphoranylidene)ammonium tetrakis(4-chlorophenyl)borate (BTPPA+TPBCl-). The region between the two vertical dashed lines (marked as B) corresponds to the polarizable part of the interface - the potential window. In this particular part of the cyclic voltammogram, the change in inner potentials between the two immiscible phases does not induce a noticeable change in the chemical composition of the aqueous and organic media. In other words, the interface is impermeable to charge transfer, and the resulting current is only due to the charging of the double layer capacitance on both sides of the liquid-liquid interface. The potential window is limited by the transfer of the supporting electrolyte ions (parts A and C). Once the standard transfer potential of the least hydrophobic ion (in the case of ions transferring from the aqueous to the organic phase) or the least hydrophilic ion (for ions crossing the interface from the organic side) is reached, these ions start to cross the interface. The available potential window in voltammetric experiments is thus set by the supporting electrolytes. At the ITIES, the resulting current is associated with the direction of ion transfer, as shown in Figure 1.6: positive current is recorded once cations move from the aqueous to the organic phase or anions from the organic to the aqueous phase, whereas negative current flows when cations transfer from the organic to the aqueous phase or anions from the aqueous to the organic phase. With this in mind, the assignment of ion transfers to the positive and negative ends of the voltammogram from Figure 1.5 becomes simple.
The limiting current on the lower potential side of the scale (part A in Figure 1.5) originates from chloride transfer (the positive peak corresponds to Cl- (org→aq) whereas the negative current is due to Cl- (aq→org)). The positive end of the potential window is limited by TPBCl- transfer (TPBCl- (aq→org) results in the negative current peak and TPBCl- (org→aq) can be observed as a positive current increase).
Figure 1.6. Direction of ion transfer associated with the current response.
The shape of the voltammetric signal depends on the diffusion regime on each side of the interface: with linear diffusion on both sides, a tail-finished peak is observed for both the aq→org and the org→aq transfers; with radial diffusion on the aqueous side, a wave-like signal is observed for the aq→org transfer and a tail-finished peak for the org→aq transfer; with radial diffusion on the organic side, a tail-finished peak is observed for the aq→org transfer and a wave-like signal for the org→aq transfer. In the corresponding current expressions, n is the charge, F the Faraday constant, A the surface area, D the diffusion coefficient, C the concentration, v the scan rate, N the number of pores (in the case of an array), r the ITIES radius and f(Θ) a function of the tip inner angle.
Figure 1.7. Examples of silicon containing compounds.
Figure 1.8. Hydrolysis (1) and condensation (2) reactions of tetraalkoxysilane.
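In text form, the two reactions sketched in Figure 1.8 read as follows (R = alkyl group; the hydrolysis is written for completion, although partial hydrolysis and alcohol-releasing condensation also occur):

\[ Si(OR)_4 + 4H_2O \rightarrow Si(OH)_4 + 4ROH \qquad (1) \]
\[ \equiv\!Si{-}OH + HO{-}Si\!\equiv \ \rightarrow\ \equiv\!Si{-}O{-}Si\!\equiv + H_2O \qquad (2) \]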
The last group -shown on Figure 1.10 C -belongs to 'soft' mater, in this case liquid crystals formed by amphiphilic species (able to form variety of spatial arrangements: micelles, vesicles, cubic, hexagonal or lamellar liquid crystal structures), which is the most versatile method as the template extraction can be performed under mild condition without affecting deposit properties. Figure 1 . 11 . 111 Figure 1.11. Schemes representing following steps of photocurrent evolution at the Au (NPs or mirror-like film) modified ITIES. ZnTPPC is the meso-tetra(4-carboxyphenyl)porphyrin. Figure was prepared based on ref 106 . Figure 1 . 12 . 112 Figure 1.12. Three phase junction set-up used for Au NPs deposition. TOA + is the tetraoctylammonium cation. another study, Li et al. used SECM in anodic generation mode for the generation of Ag NPs (see Figure 1.13). 118 Figure 1 . 13 . 113 Figure 1.13. The scheme representing Ag NPs deposition by SECM anodic dissolution of silver UME. Adapted from ref. 118 (Figure 1 . 1 Figure 1.14 B). The authors noticed that regardless of the deposition technique employed, the 𝐶𝑙𝑂 4 (𝑎𝑞→𝑜𝑟𝑔) - Figure 1 . 14 . 114 Figure 1.14. Thin layer organic phase liquid -liquid interface approach for the Ag NPs electrodeposition approach. A -Corresponds to the open circuit potential electrodeposition and Bcorresponds to the potential controlled electrodeposition. Adapted from ref. 121 Figure 1 . 1 15).142 The effect of 1,2-dioctadecanoyl-sn-glycero-3-phosphocholine (DSPC) monolayer onto the adsorption and the kinetics of charge transfer for TEA + , porpanolol, metoprolol and tacrine has been studied by cyclic voltammetry and AC voltammetry.142 Comparison of the calculated values of admittance and apparent capacitance in the presence and absence of ion transfer through DSPC monolayer allowed concluding what follows: (i) all studied ions tend to interact with the phospholipid membrane; (ii) rate constant of TEA + , propranolol and metoprolol decrease with increasing phospholipid deposition surface pressure. No change was observed for tacrine and (iii) calculated apparent capacitance values in the presence and the absence of ion transfer indicated that charge transfer reaction of tacrine and partially the metoprolol are coupled with the adsorption process. In subsequent work, electrochemical impedance spectroscopy was used to evaluate interaction between four similar in structure therapeutics (aminacrine, tacrine, velnacrine and proflavine) and -different in compositionphospholipid monolayers adsorbed at the ITIES. The results indicated that the preferable adsorption site in the organic phase for velnacrine and proflavine is the polar head group region whereas tacrine and aminacrine prefer hydrocarbon tail domains.143 Figure 1 . 15 . 115 Figure 1.15. Simplified electrochemical cell allowing the compact phospholipid monolayer formation at the electrified organic gel -aqueous phase interface. Adapted from ref. 142 Figure 1 . 16 . 116 Figure 1.16. Structures of the monomers used for the electropolimerization at the electrified liquidliquid interface: A -1-methylpyrrole, B -1-phenylpyrole, C -4-(pyrol-1-yl)phenyloxyacetic acid, D -2,2':5',2'' terthiophene, E -tyramine and F -resorcinol. Some effort has been done with regard to planar liquid -liquid interface modification with carbon based materials with examples emerging from synthesis 159 , functionalization 160 or catalysis 161 etc. 
and only few examples emerges from the use of carbon or/and carbon based material at the polarized liquid -liquid interface. Carbon materials were also used to form 'semi modified' ITIES and such examples are also given here. Figure 1 . 17 . 117 Figure 1.17. Scheme of 3D-ITIES composed from reticulated vitreous carbon modified with 4-ABA/polypeptide multilayer impregnated with ferri/ferrocyanide ions and photosensitizer (ZnTPPS 4-) and the organic phase being the solution of electron acceptor (TCNQ) and 5 mM organic electrolyte. Abbreviations stand for: 4-ABA is the 4-aminobenzoic acid, TCNQ is 7,7',8,8'tetracyanoquiodimethane and ZnTPPS 4-is the zinc meso-tetrakis(p-sulfonatophenyl) porphyrin. Figure is adapted from ref. 162 Figure 1 . 18 .Figure 1 . 18 . 118118 Figure 1.18. Scheme of the ITIES modification with a (doped)-graphene layer. The abbreviations stand for: CVD GR -chemical vapor deposited graphene; DMFc -1,1'-dimethylferrocene, DecMFcdecamethylferrocene. Numbers from arrows correspond to different experimental synthetic approaches: (1) CVD GR deposition; (2) one step CVD GR/metal nanoparticles deposition; (3) two step CVD GR/metal nanoparticles deposition and (4) two step CVD GR/metal nanoparticles deposition under the potentiostatic control. Figure prepared on the basis of ref. 160 Figure 1 . 1 19. The ITO electrode crossing the interface between aqueous solution of sulphites ions and the nitrobenzene containing n-octyltriethoxysilane was modified with the silica positionedalmost completely -on the aqueous side of the liquid -liquid junction. The hydrolysis and condensation reaction were catalyzed by protons generated at ITO electrode according to the redox reaction: Figure 1 . 19 . 119 Figure 1.19. Set-up showing silica stripe formation at the three phase junction. Deposition place is indicated with red arrow. Protons were produced at the ITO on the aqueous side of the liquid -liquid interface with the conventional three electrode set up. OTEOS is n-octyltriethoxysilane. Scheme was adapted from ref. 176 photo-initiator. The colloidal particles self-assembled at the liquid -liquid interface were bonded into the stable film by polymerization induced by UV irradiation. Self-assembly of SiO 2 spheres has led to well-ordered, mechanically stable film of one monolayer thickness for times starting from 30 min and silica spheres concentration up to 0.013%.Whitby et al. studied the effect of SiO 2 nanoparticles/surfactant composite on the liquid -liquid interface stabilization. Macroscopic systems reveal new set of information helping to understand the interior of the pores with the aqueous solution. The hydrophobic character of silicates could enhance the organic phase penetration into zeolite framework, and hence the exact position of the interface remained unclear. To answer these doubts, Dryfe et al. studied facilitated ion transfer of alkali metals with 18-crown-6 ether and based on obtained electrochemical results they suggested that it is rather the aqueous phase which fills the silicalite pores. Ex situ zeolite modified ITIES size selective membrane can be complete with charge selectivity188 . The example given by Dryfe et al. concerns sodium zeolite-Y pressed with 10 tons of pressure and healed with tetraethoxy silane (TEOS) solution. Healing process was applied in order to eliminate inter-grain pathways between mechanically pressed zeolite crystals, which may constitute the route for analytes transfer. 
Resulting disks with 0.75 mm in thickness and 20 mm in diameter were used to support the liquid -liquid interface. Cyclic voltammetry results in the presence of zeolite-Y membrane have shown size selective exclusion of tetrabutylammonium cations whereas tetraethylammonium cations undergo reversible transfer. When 𝐵𝐹 4 -and 𝐶𝑙𝑂 4 -were studied as the transferring ions (with diameter of ions smaller then diameter of pore entrance) no voltammetric response was observed. This result univocally indicated charge selective exclusion for negatively charged ions. The supporting electrolytes used in this study were LiCl and BTPPA + TPBCl -in aqueous and organic phase respectively. Since the zeolite-Y membrane exhibit both size and charge sieving effect, the potential limits were elongated from a negative and a positive potential side. Broader potential window from less positive potential side has arisen due to size exclusion of BTPPA + transfer and charge barrier for Cl -transfer across modified ITIES. Figure 1 . 20 . 120 Figure 1.20. Schematic and simplified schemes for ex situ modified ITIES with silica materials. Acorrespond to macroscopic ITIES modified with zeolite membrane used in size-selective voltammetric study 188 and B -is the polyethylene terephthalate (PET) membrane with randomly distributed pores modified with the aspiration induced infiltration method. 190 Figure 2 . 1 . 21 The cells were custom made from glass tubes with inner diameters 12 mm (Figure2. 1 A) and 19 mm (Figure2. 1 B). The reference electrodes placed in the Luggin capillaries of each phase were Ag/AgCl wires whereas the counter electrodes were made from platinum mesh. The Ag/AgCl electrode was prepared by oxidation of silver wire in a saturated solution of FeCl 3 . The aqueous counter electrode was platinum mesh spot welded to platinum wire. The organic counter electrode was the platinum mesh spot welded to platinum wire which was isolated from both phases by glass tube (for protocol of preparation of the organic counter electrode please refers to section 2.5.5). During the measurement, the organic phase Luggin capillary was filled with the 10 mM BTPPA + Cl -and 10 mM LiCl supporting aqueous solution. The cell with the smaller interfacial radius was used to study ion transfer reactions and interfacial silica deposition mechanism. The cell with the bigger interface diameter was equipped with a removable upper Luggin capillary and hence was used for the electrogeneration of large amounts of silica (several mg per synthesis). In the second case, during the interfacial deposition, the volume of the organic phase was diminished by placing inert glass boiling stones at the bottom of the cell. Figure 2 . 1 . 21 Figure 2. 1. Four electrode electrochemical cells supporting the macroscopic liquid -liquid interface: Awith fixed Luggin capillaries and B -with removal upper Luggin capillary.The RE org and RE aq correspond to the organic and the aqueous reference electrodes respectively whereas the CE org and CE aq correspond to the organic and the aqueous counter electrodes respectively. The numbers stand for: 1 -the aqueous phase; 2 -the organic phase; 3 -supporting aqueous phase; 4 -is the liquid -liquid interface between higher density phase and the supporting aqueous phase and 5 is the ITIES. The interfacial surface area was 1.13 cm 2 for cell A and 2.83 cm 2 for cell B. Figure 2 . 2 A 22 . The cell consists of a simple glass vessel covered with a lid with three holes. 
The side places were occupied by platinum mesh counter electrode and Ag/AgCl reference electrode (prepared by Ag oxidation in the oversaturated solution of FeCl 3 ). The center hole was occupied by glass tube to the bottom Figure 2 . 2 . 22 Figure 2.2.The electrochemical cells used for A -microITIES modification and electroanalytical study and C -for in situ Raman analysis of electrochemical silica deposition. B -is the silicon membrane used to support array of microITIES. Designation stands for: 1 -aqueous phase; 2 -organic phase; 3 -microITIES (r is the pore diameter, S is the pore center-to-center distance and d is the membrane thickness -100 µm), 4 -is the objective used to focus laser beam. RE and CE stands for reference electrode and counter electrode respectively. Aq stands for aqueous whereas org for organic. covered with the ParaFilm®. The set-up used to couple electrochemical silica deposition with the confocal Raman spectroscopy is shown on Figure 2.2 C. In this configuration the glass tube filled with the organic phase was placed in the bottom of a custom made PTFE vessel filled with the aqueous phase. The top side of the glass tube was finished with the silicon wafer bearing the array of microITIES. Down part of glass tube was closed with a silicone stopper in order to avoid the organic phase leakage. During the measurement laser spot was focused at the liquid -liquid interface using objective adopted to work in the liquid media. The electrodes were the same as for the cell shown on Figure 2.2 A. Figure 2 . 2 Figure 2. 1 A) and microscopic ITIES (see Figure 2.2 A) were used. In some experiments single pore microITIES (see section 2.5.7 for protocol of preparation) was used instead of silicon wafer membrane. Figure 2 . 3 . 23 Figure 2.3. Set-up used during local pH measurement above the liquid -liquid interface supported with array of micrometer pores. The designations are as follows: 1 -PTFE cell, 2 -aqueous phase, 3 -the glassy tube filled with the organic phase and aqueous reference supporting electrolyte (4), 5 -the silicon wafer supporting array of microITIES (∅ is the pore diameter equal to 50 µm, and d is the membrane thickness -100 µm), 6 -the source of UV irradiation, 7 -the double junction Ag/AgCl reference electrode, 8 -the iridium oxide modified platinum electrode, 9 -correspond to shear-force positioning and 10 -the CCD camera used for visualization. Electrochemistry at the liquid -liquid interface was controlled with the CE aq -aqueous counter electrode (platinum mesh), RE aq -reference counter electrode (Ag/AgCl) and RE org /CE org -Ag/AgCl wire used as organic reference and counter electrode in one. Cell 1 : 1 (𝑎𝑞)𝐴𝑔 | 𝐴𝑔𝐶𝑙 | 5 𝑚𝑀 𝑁𝑎 + 𝐶𝑙 - 𝑥 𝑚𝑀 𝑇𝐸𝑂𝑆 || 10 𝑚𝑀 𝐵𝑇𝑃𝑃𝐴 + 𝑇𝑃𝐵𝐶𝑙 - 𝑦 𝑚𝑀 𝐶𝑇𝐴 + 𝑇𝑃𝐵𝐶𝑙 -𝑖𝑛 𝐷𝐶𝐸 | 10 𝑚𝑀 𝐵𝑇𝑃𝑃𝐴 + 𝐶𝑙 - 10 𝑚𝑀 𝐿𝑖 + 𝐶𝑙 -| 𝐴𝑔𝐶𝑙 |𝐴𝑔 The electrochemical set-up used to study microITIES modification with silica deposits is shown on cell 2 configuration. MicroITIES modified with silica deposits were evaluated electrocatalytically using ion transfer voltammetry (analytes were the species being different in size, charge and nature) in the cell 3 configurations. Cell 2: (𝑎𝑞)𝐴𝑔 | 𝐴𝑔𝐶𝑙 | 5 𝑚𝑀 𝑁𝑎 + 𝐶𝑙 - 𝑥 𝑚𝑀 𝑇𝐸𝑂𝑆 || 10 𝑚𝑀 𝐵𝑇𝑃𝑃𝐴 + 𝑇𝑃𝐵𝐶𝑙 - 𝑦 𝑚𝑀 𝐶𝑇𝐴 + 𝑇𝑃𝐵𝐶𝑙 -𝑖𝑛 𝐷𝐶𝐸 | 𝐴𝑔(𝑜𝑟𝑔) Cell 3: (𝑎𝑞)𝐴𝑔 | 𝐴𝑔𝐶𝑙 | 10 𝑚𝑀 𝐿𝑖 + 𝐶𝑙 - 𝑧 µ𝑀 𝑖𝑛𝑡𝑒𝑟𝑓𝑎𝑐𝑖𝑎𝑙 𝑎𝑐𝑡𝑖𝑣𝑒 𝑖𝑜𝑛 || 10 𝑚𝑀 𝐵𝑇𝑃𝑃𝐴 + 𝑇𝑃𝐵𝐶𝑙 -𝑖𝑛 𝐷𝐶𝐸 | 𝐴𝑔(𝑜𝑟𝑔) Electrochemical measurements -All electrochemical measurement from chapter III and IV were performed with a PGSTAT 302N (Metrohm, Switzerland) with the four electrode configuration. 
Potentiostat was controlled by NOVA software. In chapter V the liquid -liquid interface was polarized using PalmSens EmStat 3+ potentiostat with the PalmSens Differential Electrometer Amplifier allowing the application of four electrode configuration. Palmsens potentiostat was controlled by PSTrace software. Small Angle X-ray Scattering -SAXS measurements were performed with the SAXSee mc 2 apparatus from Anton Paar. Profilometry based on shear force measurement -shearfoce detection was done with two piezoelectric plates attached to the pulled borosilicate glass capillary and connected to a lock-in amplifier (7280 DSP LOC-IN-AMPLIFIER). -High impedance potentiometer Kethley 6430 was used in all potentiometric pH measurements. The double junction Ag/AgCl was used as a reference electrode. pH probe was the Pt microelectrode modified with iridium oxide. The pH probe calibration was conducted prior and after measurements in buffered solutions at pH 7 and 4. Ion chromatography -the analysis was performed with 882 Compact IC Plus from Metrohm equipped with IC conductivity detector. High Pressure Liquid Chromatography -chromatograms were recorded using Waters 501 HPLC pump and Waters 486 Tunable Absorbance detector. The colon was RP 18C 100-4.6 mm from Chromolith® performance. H 1 NMR -all spectra were recorded with a Bruker 200 MHz spectrometer at 298 K. UV-Vis spectroscopy -all UV-Vis spectra were recorded in quartz cuvettes with Cary 60 UV-Vis spectrometer from Agilent Technologies Mass spectroscopy -MicroTOFq spectrometer from Bruker was used. The ESI (electrospray ionization) source was in positive mode. The scan was performed between 50 up to 1200 m/z. 3 . 3 Next step was to dissolved BTPPA + TPBCl -in acetone and then filtered under gravity using paper filter. Vessel with the filtrate was covered with Parafilm® in which small holes were made to allow evaporation of acetone. 4. Depending on volume of acetone used, evaporation takes up to few days. Resulting BTPPA + TPBCl -gives rectangular crystals as it is shown on Figure 2.4. Afterwards, crystals were rinsed with 1:1 Acetone:H 2 O mixture, filtered under vacuum and stored overnight in desiccator. 5. The resulting BTPPA + TPBCl -was stored in refrigerator in the vessel covered with an aluminum foil preventing from light exposure. Figure 2 . 4 . 24 Figure 2.4. BTPPA + TPBCl -crystals after acetone evaporation. Figure 2 . 5 . 25 Figure 2.5. The ion chromatography detection of iodides after four successive reaction product (PH + TPBCl -) rinsing with distilled water. Figure 2 . 6 . 26 Figure 2.6. Components used for the organic counter electrode preparation: A -platinum mesh fixed to platinum wire, B -borosilicate capillaries, C -dual-component conductive epoxy and D -copper-tinned contact wire. Figure 2 . 7 B 27 , the platinum wire was placed inside the borosilicate capillary. The second side of the glass capillary was connected to the hose (Figure 2.7 A), which has led to the pump.2. The glass around the platinum wire was gently melted above the Bunsen Burner. Heating was slowly started from the mesh side and the capillary was progressively inside the flame. Attention had to be paid since capillary can easily collapse above the platinum wire closing the capillary. If so, step two has to be repeated. Figure 2 . 7 . 27 Figure 2.7. Trapping the platinum wire inside the glass capillary was performed with the pump (C) under the Bunsen burner (D). 
One side of capillary was attached to the pump hose (A) whereas the platinum mesh and wire occupied second side (B). 3 . 3 Once platinum mesh and wire are securely fixed to the capillary (Figure2.8 B) the conductive resin has to be placed just above the wire (Figure2.8 C). Syringe with long needle can be used for this purpose. First uptake some resin inside the needle and then presses it out to the capillary. Figure 2 . 8 . 28 Figure 2.8. Successive steps of electrode preparation. A -Platinum mesh and wire inserted into the capillary, B -glass melted around the platinum wire and C -conductive resin placed above the wire. 56 mm and inner diameter 0.75 mm) from SUTTER INSTRUMENT®; -gold wire with fixed diameter (in case of this work the diameter was 25 or 50 µm); -10 ml of aqua regia solution (mixture between concentrated hydrochloric acid and nitric acid in 3 to 1 v/v ratio respectively); -Pump, vertical capillary puller and external power supply shown in Figure 2.10 as A, B and C respectively. Figure 2 . 10 . 210 Figure 2.10. Set-up used to prepare the single pore microITIES capillaries. A -is the pump, B -is the vertical capillary puller and C -is the external power supply. Figure 2 . 11 . 211 Figure 2.11. The vertical capillaries puller was used to close the capillary in the middle and to melt the gold wire into the glass. a) shows the capillary placed in the nickel/chrome wire ring before current passage and b) after current was passed through the wire. 2 . 2 11 a). Using external power supply the current was passed through the wire ipso facto increasing the temperature in the vicinity of the capillary (Figure 2.11 b), which in turn decreased the glass velocity. Next, the hose from the pump was attached to the top of the capillary and under vacuum the walls in the heating region were collapsed, closing the capillary in the mid-length (Figure 2.12 b). 2. Next step was to place short piece of gold wire (3-5 mm, ∅ = 25 or 50 µm) just above the collapsed part of the capillary. The gold wire was melted into the glass in the same manner as the capillary was closed. The capillary was then turned up-side-down and second piece of the gold wire was melted into glass as it is shown on Figure 2.12 c. Figure 2 . 2 Figure 2.12. a) Borosilicate glass capillary before closing the mid-length, b) with collapsed walls closing the capillary in the middle c) the 50 µm gold wires melted into the glass on double side of the capillary mid-length and d) the capillary splitted into two pieces, each contained gold wire. Figure 2 . 2 Figure 2.13. a) The polishing set-up used in this protocol, b) the capillary with the excess of glass above the gold wire and c) the capillary after polishing with four different sand papers (gold wire diameter was 50 µm).4. The gold from the glass was removed by placing the capillary overnight into the beaker containing aqua regia. The gold etching has to be hold under the fume hood due to the toxic nitrogen dioxide evolution during the reaction: Figure 2 . 14 . 214 Figure 2.14. Optical microscope images recorded after gold removal. Images a) and b) correspond to the pore with 25 µm in diameter, whereas c) and d) to the pore with 50 µm in diameter. a) and c) are top view, b) and d) are the side view of the single pore capillary. Figure 2 . 15 . 215 Figure 2.15. Reflux set-up used for synthesis of trimethylbenzhydrylamonium iodide. Figure 2 . 16 . 216 Figure 2.16. Distillation under vacuum set-up. A -the pump, B -rotary evaporator and C -heating bath filled with distilled water. 
The CV recorded in the absence of CTA+(org) and TEOS(aq) is shown in Figure 3.1 with a dashed line. No ion transfer occurred in the available potential window, which is limited by the background electrolyte transfer (Cl-(org→aq) at the less positive and TPBCl-(aq→org) at the more positive potential side). The addition of the cationic surfactant salt to the organic phase - CTA+TPBCl- - (see Figure 3.1, red solid line) resulted in irreproducible current spikes, also known as electrochemical instability - a phenomenon known to occur at the electrified liquid - liquid interface in the presence of surface-active species, which is discussed in more detail in section 1.1.4.

Figure 3.1. Cyclic voltammograms recorded in Cell 1. The black solid line was recorded in the presence of TEOS (x = 50 mM) and CTA+ (y = 1.5 mM); the black dashed line was recorded in the presence of the organic and aqueous supporting electrolytes only (x and y = 0 mM); the red solid line corresponds to the voltammogram recorded in the absence of TEOS (x = 0 mM) in the aqueous phase and in the presence of CTA+ in the organic phase (y = 14 mM). Scan rate = 5 mV/s.
Figure 3.2. Schematic representation of the silica deposit formation mechanism. The forward scan is indicated with the red arrow. [CTA+]org = 1.5 mM; [TEOS]aq = 50 mM; scan rate = 5 mV/s.

The charge associated with the transfer of CTA+ was found to depend on the aqueous phase pH, as shown in Figure 3.3. Each point on the curve is the average of the five last positive peak charges (from fifteen consecutive voltammetric cycles). The charge under the reverse peak reached its maximum values at pH 9-10, at which the polynuclear species - Si4O6(OH)6(2-) and Si4O8(OH)4(4-) - predominate in the aqueous phase. At pH 11 and below pH 9, mononuclear species are dominant and, as a consequence, the net negative charge is not sufficient to facilitate the CTA+(org→aq) transfer.

Figure 3.3. The charge of the forward cyclic voltammetry scan recorded in the presence of [CTA+]org = 3 mM and [TEOS]aq = 300 mM as a function of pH. The error bars are the standard deviation of the last five values of fifteen consecutive runs.
Figure 3.4. A - Cyclic voltammograms recorded for the 1st, 2nd, 3rd, 5th, 10th and 15th cycles in the presence of [TEOS]aq = 50 mM and [CTA+]org = 1.5 mM. B - Reverse peak charge as a function of the number of cycles. Scan rate = 5 mV/s. The pH of the aqueous phase was 9.5.

The influence of repetitive cycling on the current - potential characteristics and on the charge above the reverse peak is shown in Figure 3.4 A and B respectively (the pH of the aqueous phase was 9.5).

Figure 3.5. Cyclic voltammograms showing the influence of interfacial silica deposit formation on the electrochemical instability. A - 1st scan; B - 15th scan. The red line corresponds to [CTA+]org = 3 mM, the black line to [CTA+]org = 14 mM. [TEOS]aq = 50 mM. Scan rate = 5 mV/s.
Figure 3.6. Influence of [CTA+]org (A) and [TEOS]aq (B) on the reverse peak charge for: A - 50 mM, 200 mM and 300 mM [TEOS]aq; and B - 1.5 mM, 5 mM and 14 mM [CTA+]org. The points and the error bars (standard deviations) were calculated from the last five cycles of the fifteen repetitive voltammetric runs.
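The peak charges plotted in Figures 3.3-3.6 follow from integrating the baseline-corrected current over time. As an illustrative sketch (our own, with synthetic data, not the processing code of this work), for a linear potential sweep dt = dE/v, so the charge can be computed directly from the recorded i-E trace:

```python
# Illustrative sketch: charge under a voltammetric peak as the time integral
# of the baseline-corrected current over one sweep segment (dt = dE / v).
import numpy as np

def peak_charge(E_V, i_A, scan_rate_V_per_s, baseline_A=0.0):
    """Integrate (i - baseline) over time for one linear-sweep segment."""
    t = np.abs(E_V - E_V[0]) / scan_rate_V_per_s   # time axis of the sweep
    return np.trapz(i_A - baseline_A, t)           # charge in coulombs

# Hypothetical reverse-scan segment with a synthetic Gaussian peak:
E = np.linspace(0.45, 0.15, 300)
i = 2e-6 * np.exp(-((E - 0.30) / 0.03) ** 2)
print(f"Q = {peak_charge(E, i, 0.005):.2e} C")
```

Averaging the last five of fifteen such integrated charges, as described above, smooths out cycle-to-cycle variability once the response has stabilized.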
Figure 3.7. Infra-red spectrum of the silica deposit prepared with the following template and silica precursor concentrations: [CTA+]org = 14 mM and [TEOS]aq = 300 mM. The most significant wavenumbers are indicated with arrows.

X-Ray Photoelectron Spectroscopy (XPS) measurements were performed in order to prove the formation of silica as well as to study the contributions of other interactions between the atoms. A typical XPS spectrum for the silica deposit synthesized at the ITIES (for [CTA+]org = 10 mM) confirms the presence of the Si-O bond (Figure 3.8 B) and of the Si-O-Si bond (Figure 3.8 D). Nitrogen deriving from the surfactant species was also present. The C-O bond in Figure 3.8 C suggests that the TEOS molecules were not fully hydrolyzed and that some Si-O-C may still remain in the sol solution.

Figure 3.8. XPS spectra of the silica deposit prepared for [CTA+]org = 10 mM and [TEOS]aq = 300 mM. A - spectrum over the full range; B - spectrum in the O 1s region; C - spectrum in the C 1s region and D - spectrum in the Si 2p region. The tables are correlated with the spectra.
Figure 3.9. Nitrogen adsorption-desorption isotherms of the silica deposit prepared using [CTA+]org = 14 mM and [TEOS]aq = 300 mM before (squares) and after (circles) calcination (heat treatment was performed at 450°C for 30 min).

Changing the composition of the organic phase (Figure 3.10 B) resulted in a very weak response and a pore center-to-center distance of around 6.1 nm. Interestingly, the addition of 20% ethanol to the organic phase was also investigated (blue curve, Figure 3.10 B).

Figure 3.10. Variation of the SAXS patterns recorded for silica deposits electrogenerated at the ITIES. A - silica deposits synthesized under different [CTA+]org (black - 5 mM, red - 10 mM and blue - 14 mM) and constant [TEOS]aq = 300 mM; B - different organic phase polarity (black - 30% decane / 70% DCE, red - 100% DCE and blue - 20% ethanol / 80% DCE) at [CTA+]org = 10 mM and [TEOS]aq = 300 mM. All samples were prepared by cyclic voltammetry.
Figure 3.11. TEM images of silica deposits prepared by both cyclic voltammetry and chronoamperometry. The initial composition of the two phases is indicated in the headline of each column.
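The pore center-to-center distances quoted from the SAXS patterns follow from the position of the correlation peak via Bragg's relation, d = 2π/q*. A minimal sketch (the peak position below is hypothetical):

```python
# Minimal sketch: converting a SAXS correlation peak position q* (in nm^-1)
# into the average pore center-to-center distance d = 2*pi/q*, the quantity
# discussed above for the electrogenerated silica deposits.
import math

def center_to_center_nm(q_peak_inv_nm):
    return 2.0 * math.pi / q_peak_inv_nm

# A hypothetical peak at q* = 1.03 nm^-1 gives:
print(f"d = {center_to_center_nm(1.03):.1f} nm")   # ~6.1 nm
```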
A typical voltammogram recorded during deposition is shown in Figure 4.1 with a black curve. The deposition was performed with [CTA+]org = 14 mM and [TEOS]aq = 50 mM in cell 2 at 5 mV/s. The red curve in Figure 4.1 corresponds to the blank CV recorded in cell 2 in the absence of template and precursor. Based on Figure 4.1, it was assumed that at the beginning of the polarization the liquid - liquid interface was uniformly covered with a CTA+ monolayer and the current measured was due only to interfacial double layer charging. The CTA+(org→aq) transfer started at around +100 mV. The local concentration of CTA+ in the aqueous diffusion zone should exceed its CMC (reported to be 1.4 mM for CTAB) [206] in order to form spherical, positively charged micelles. The presence of positive charge at the edge of the micellar spheres catalyzes the condensation of the TEOS precursor around the template species, resulting in silica material formation (see scheme 2 in Figure 4.1). The sigmoidal shape of the negative peak suggests that the CTA+(org→aq) transfer was not limited by linear diffusion inside the pores of the silicon wafer, which should be the case here (the shape of the negative response is discussed in the following section). The polarization was reversed at -100 mV.

Going towards more negative values would force the transfer first of Cl-(aq→org) followed by BTPPA+(org→aq). During the reverse scan the CTA+ was expected to transfer back to the organic phase. The formation of the characteristic positive peak at around +260 mV, terminated by an abrupt drop in current, suggests an adsorption process of CTA+ within the silica network (the drop in current indicates that there are no more charges to transfer), as illustrated in scheme 3 of Figure 4.1.

Figure 4.1. Typical cyclic voltammogram recorded during the formation of a silica deposit. The microITIES membrane was design number 3. The CV was recorded in cell 1 (black line) for [TEOS]aq = 50 mM and [CTA+]org = 14 mM. A blank cyclic voltammogram was recorded in cell 1 (red line), without TEOS and CTA+. The schemes on the right illustrate the different stages of the silica deposition: 1. formation of the monolayer at the beginning of polarization; 2. transfer of CTA+(org→aq) followed by micelle formation and silica condensation; 3. partial backward transfer of CTA+ to the organic phase and silica deposition.

CVs recorded during interfacial silica material formation for [TEOS]aq = 50 mM and [CTA+]org = x mM, where 1.5 mM ≤ x ≤ 14 mM, can be found in Figure 4.2 A. When [CTA+]org was kept constant (14 mM) and [TEOS]aq was increased from 50 mM up to 300 mM, the CVs shown in Figure 4.2 B were recorded. To study the influence of [CTA+]org and [TEOS]aq, the deposition conditions were varied as follows. When [TEOS]aq was kept constant and [CTA+]org was increased from 1.5 mM up to 14 mM, a linear increase of the negative peak current was observed, arising from the CTA+(org→aq) transfer (as shown in the insert of Figure 4.2 A). This reaction is the rate-determining step. The characteristics of the positive peaks are different: the faradaic current associated with the CTA+ back transfer grew up to [CTA+]org = 5 mM and leveled off for higher concentrations (see Figure 4.2 A). Such behavior can be explained in the following manner: at low [CTA+]org, the formation of surfactant-templated silica deposits involved most of the CTA+ species transferred to the aqueous phase, and hence a low current was expected on the backward transfer; higher [CTA+]org resulted in much thicker silica deposits, which act as a physical barrier for the CTA+ returning to the organic phase. When [CTA+]org was kept constant and [TEOS]aq was increased from 50 mM up to 300 mM, no significant changes in the current response or in the shape of the CVs were observed. The only difference was a shift of the CTA+(org→aq) transfer potential towards more positive values (100 mV) for higher [TEOS]aq. This was not surprising, since the concentration of the negatively charged hydrolyzed TEOS species in the aqueous phase was increased by a factor of 6, which promotes the transfer of CTA+(org→aq) at higher potential values.

Figure 4.2. A - Cyclic voltammograms recorded during silica deposit formation for [TEOS]aq = 50 mM and [CTA+]org = 3 mM (solid line), 5 mM (dashed line), 10 mM (dotted line) and 14 mM (dash-dot line). The linear dependence of the negative peak current on the CTA+ concentration at constant [TEOS]aq is shown as an insert. B - Cyclic voltammograms recorded for [CTA+]org = 14 mM and [TEOS]aq = 50 mM (black curve) and 300 mM (red curve). The microITIES membrane design was number 3. The scan rate in all cases was 5 mV/s.
Arrays of microelectrodes, and by analogy arrays of microITIES, can be classified into four diffusional categories (see Figure 4.3 for schemes with the corresponding designations for the array of microITIES). For each group, characteristic voltamperometric current responses for a charge transfer reaction were proposed: (i) linear diffusion, leading to a clear peak; (ii) radial diffusion, leading to a steady-state wave; (iii) slight overlap of the diffusion profiles, leading to a slight to clear peak; and (iv) total overlap of the diffusion profiles, giving a clear peak. Of these, only group (ii) does not follow a scan rate dependency [Davies et al., The Cyclic and Linear Sweep Voltammetry of Regular and Random Arrays of Microdisc Electrodes: Theory].

Figure 4.3. Schematic representation of microITIES arrays with different diffusion zones. Scheme A shows the geometrical parameters of the membrane supporting the microITIES: r - pore radius, S - pore center-to-center distance (spacing factor) and δ - diffusion layer thickness.

The schemes in Figure 4.3 show the diffusion layer profiles on the aqueous side of the liquid - liquid interface, but they can also be applied by analogy to the organic side of the interface. In Figure 4.4 the CV curves are presented as current density versus potential in order to take into account the fact that the number of pores within the array supporting the microITIES, and consequently their total surface area, varies for each design.

Figure 4.4. Variation of the current density as a function of the applied potential recorded during silica deposit formation at membranes supporting arrays of microITIES with different spacing factors.

Figure 4.5 A illustrates the influence of the scan rate on the current-potential curves recorded during interfacial silica deposition at membrane design number 3. Both the current response and the shape of the voltammograms were affected.

Figure 4.6. The charge for the forward (empty squares) and reverse (filled circles) processes recorded during silica deposit formation versus scan rate.

Deposits were prepared from the same solutions ([TEOS]aq = 50 mM and [CTA+]org = 14 mM), but with different supporting silicon membranes and distinct deposition conditions (potential scan rates and number of cycles, as well as the deposition method). The data are presented in Figure 4.7 in the form of side views of a single pore (row a) and of pore arrays (row b), and of profilometric mapping (row c). The thickest deposits were obtained for the modification of membrane design number 3 by one scan at 0.1 mV/s (13.1 µm in height and 33.1 µm in diameter). Since these deposits were massive compared with the other experiments, they were more likely to be affected by calcination and can suffer from some losses (see, e.g., one deposit missing at the bottom left of part 3b in Figure 4.7).

Figure 4.7. SEM micrographs and 3D profilometry mapping based on shear force measurements obtained for various microITIES membranes modified with silica deposits. The rows correspond to three different points of view: (a) side view of a single pore recorded by SEM, (b) side view of an array of modified interfaces recorded by SEM and (c) modified interface mapping made by profilometry. The columns divide the images depending on the initial synthesis conditions: (1) deposits prepared by one linear scan voltammetry (half a CV scan) at 5 mV/s using microITIES number 2; (2) deposits prepared by three successive CV scans at 5 mV/s using microITIES number 2; and (3) deposits prepared by one CV scan at 0.1 mV/s using microITIES number 3. The polarization direction was always from anodic to cathodic potentials.
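The four diffusional categories above can be turned into a rough rule of thumb by comparing the diffusion layer thickness (which grows with experiment time, i.e. with decreasing scan rate) against the pore radius and spacing. The sketch below is a hedged heuristic of our own; the threshold choices are assumptions, not the quantitative criteria of the cited theory:

```python
# Rough, assumed heuristic in the spirit of the classification above:
# compare the diffusion layer thickness (delta ~ sqrt(2*D*t)) with the pore
# radius r and the center-to-center spacing S to guess the diffusion regime.
import math

def diffusion_regime(D_cm2_s, t_s, r_um, S_um):
    delta_um = math.sqrt(2 * D_cm2_s * t_s) * 1e4   # cm -> um
    if delta_um < r_um:
        return delta_um, "(i) linear diffusion at each pore -> peak"
    if delta_um < (S_um - 2 * r_um) / 2:
        return delta_um, "(ii) independent radial diffusion -> steady-state wave"
    if delta_um < S_um:
        return delta_um, "(iii) slightly overlapping diffusion zones"
    return delta_um, "(iv) totally overlapping zones -> planar-like peak"

# Hypothetical values: D = 1e-5 cm2/s, 30 s of polarization, r = 5 um, S = 100 um
print(diffusion_regime(1e-5, 30, 5, 100))
```

This captures qualitatively why slow scans (long times, thick diffusion layers) push an array towards overlapping, planar-like behavior, while fast scans confine the response to the individual pores.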
Polarization direction was always from anodic to cathodic potential direction. Figure 4 . 8 48 where the deposit height, h, was plotted as function of the difference between deposit and pore radii, r d -r p . At the beginning of the deposition process h  r dr p , corresponding to a rather flat morphology, while subsequent growing has led to more hemispherical shape with h values likely to rise up to 2×(r d -r p ). Finally, more massive deposits were characterized by preferential lateral growth (h < 2×(r d -r p )). On the other hand, it is also possible that the liquid -liquid micro-interfaces displacement under charge transfer, 213 which could affect the deposit internal geometry, seems to be minimal here. SEM images for detached membranes (see Figure4.9 A and B) indeed suggest that the silica deposits are filled inside and rather flat at the bottom. Figure 4 . 8 . 48 Figure 4.8. A -Deposit height versus the difference between deposit and pore radii for materials prepared either by cyclic voltammetry or chronoamperometry. The schemes on the right (SEM micrographs and corresponding drawings) correspond to two 'extreme' cases indicated with the Figure 4 . 9 . 49 Figure 4.9. A -SEM image of the silica deposit turned upside down. Inset shows the zoom of the deposit with the dashed white line indicating the imprint of the pore supporting the liquid -liquid interface. B -is the SEM micrograph for the pore from which the silica deposit has been removed. In both cases the silica deposition was performed with one cyclic voltammetry cycle for [CTA + ] org = 14 mM and [TEOS] aq = 50 mM. Membrane design and the scan rate are indicated on the micrographs.The 'worm like' shape of the pores among the silica deposits generated at the macrosocpic liquid -liquid interface (confirmed by TEM and SAXS) were also found to present in the silica deposits at the microITIES as it is shown with TEM micrographs on the Figure 4.10. Figure 4 . 10 . 410 Figure 4.10. The TEM micrographs for the silica deposits electrogenerated for [CTA + ] org = 14 mM and [TEOS] aq = 50 mM. The scan rate was 5 mV/s. Deposition was performed at membrane design number 4 with one cycle. Figure 4 . 11 . 411 Figure 4.11.Raman spectra for silicon membrane (black line) and silica deposit after calcination (red line). The enlargement of the spectra in the 300 -800 cm -1 region corresponds to the frequencies of vibration of silica bonds. Spectra were normalized to the Si-Si peak. Figure 4 . 12 . 412 Figure 4.12. Two different vibrational ranges of RAMAN spectra recorded for silica deposit before (black curve) and after (red curve) calcination. Figure 4 . 4 Figure 4.13. A -blank cyclic voltammograms and B -voltammetric transfer of [TMA + ] aq = 70.9 µM before (red line) and after (black line) calcination. microITIES membrane design number was 3. Silica deposits were electrogenerated by CV (2 cycles at 5 mV s -1 , [CTA + ] = 14 mM and [TEOS] = 300mM). phase junction was also investigated by surface enhanced Raman spectroscopy.111 In the present work, an experimental set-up was developed to couple electrochemical measurements at the ITIES with Raman confocal microscopy. The association of electrochemistry and Raman techniques at microscopic liquid -liquid interfaces allows the collection of complementary data for the mechanism involved in the electrochemically assisted generation of surfactant-templated silica material at such interfaces. 
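As an aside to the geometric analysis of Figure 4.8 above, the three growth stages can be expressed as a small classifier. This is our own hedged helper (the 20% tolerance is an arbitrary assumption), simply mirroring the reading of the plot:

```python
# Hedged helper mirroring the geometric reading of Figure 4.8 (names and the
# 20% tolerance are our own): h ~ (rd - rp) suggests a flat cap,
# h ~ 2*(rd - rp) a hemispherical one, and h < 2*(rd - rp) for large deposits
# indicates preferential lateral growth.
def deposit_shape(h_um, r_deposit_um, r_pore_um):
    dr = r_deposit_um - r_pore_um
    if abs(h_um - dr) / dr < 0.2:
        return "flat (early growth)"
    if abs(h_um - 2 * dr) / (2 * dr) < 0.2:
        return "hemispherical"
    return "laterally grown (massive deposit)" if h_um < 2 * dr else "tall cap"

# Hypothetical deposit 13.1 um high and 33.1 um across, over a 20 um pore:
print(deposit_shape(13.1, 33.1 / 2, 10))   # -> hemispherical
```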
The incident laser was focused on both macroscopic and microscopic interfaces to allow the recording of Raman spectra at open circuit potential and upon application of interfacial potential differences. Finally, the changes in the interfacial molecular composition of the microITIES were studied during the formation of a silica deposit. Figure 4.14 shows the molecular composition of the two immiscible phases during silica electrogeneration.

Figure 4.14. Composition of the aqueous and the organic phase during electrochemical silica deposition at an array of microITIES.

Figure 4.15 shows the different Raman spectra recorded at the interface formed between a 5 mM NaCl aqueous solution and DCE in the absence (Figure 4.15 a) and in the presence of 10 mM BTPPA+TPBCl- (Figure 4.15 b), of 10 mM BTPPA+TPBCl- and 14 mM CTA+TPBCl- (Figure 4.15 c) and of 53 mM CTA+TPBCl- (Figure 4.15 d) in the organic phase. Prior to spectrum collection, the laser was focused on the macroscopic liquid - liquid interface. Spectrum (a) in Figure 4.15 was recorded at the liquid - liquid interface with pure DCE as the organic phase. The series of peaks obtained corresponds to the Raman spectrum of pure DCE recorded in solution in the macroscopic cell (see spectrum a in Figure 4.16). The peaks at 653 and 673 cm⁻¹ can be assigned to the C-Cl stretching modes of the gauche conformer, whereas the peak at 753 cm⁻¹ arises from the C-Cl Ag stretching mode of the trans conformer [235].

Figure 4.15. Raman spectra recorded at open-circuit potential at the macroscopic liquid - liquid interface formed between a 5 mM NaCl aqueous solution and: a) DCE, b) 10 mM BTPPA+TPBCl- in DCE, c) 10 mM BTPPA+TPBCl- and 14 mM CTA+TPBCl- in DCE and d) 53 mM CTA+TPBCl- in DCE. Raman bands are marked according to their assignment to BTPPA+, TPBCl-, DCE and CTA+ (△). All spectra were normalized to the band at 2987 cm⁻¹.

Additional bands appear in the presence of the organic electrolyte (spectrum b in Figure 4.15) at 725, 1000 and 1078 cm⁻¹. The peak at 725 cm⁻¹ is probably due to the aromatic C-Cl vibration and arises from the presence of the anion of the organic electrolyte. The origin of the peak at 1000 cm⁻¹ is ascribed to the presence of the aromatic rings of BTPPA+ (also found elsewhere [217]) and can therefore be treated as a trace of the organic electrolyte. The peak at 1078 cm⁻¹ can be assigned to the vibration of the aryl-Cl bond present only in TPBCl-.

Figure 4.16. Raman spectra recorded for a) DCE, b) BTPPA+TPBCl-, c) BTPPA+Cl- and d) K+TPBCl-.

Spectra were also recorded as a function of the applied potential (Figure 4.17 A for the full spectra and B for the spectral region of interest) to ensure that the variations of the Raman peak intensities are due to ion transfer rather than to displacement of the interface. Prior to spectrum collection, the laser spot was focused at the microITIES under open circuit potential (see the dashed curves in Figure 4.17 A and B for the recorded spectrum). The interfacial potential was then stepped to increasingly negative values (the corresponding blank voltammogram is shown in Figure 4.17 D). Three types of behaviour were observed, corresponding to the different molecules present in the solution. As the interface was polarized at more negative potentials, the bands at 653, 673 and 753 cm⁻¹, attributed to DCE, remained unchanged.
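Before comparing band intensities across potentials, the spectra are normalized to a reference band (the 2987 cm⁻¹ DCE band above, or the 520 cm⁻¹ silicon line later). A minimal sketch of such post-processing (our own illustration with synthetic data, not the processing code of this work):

```python
# Sketch (assumed post-processing): normalize a Raman spectrum to the maximum
# intensity found within a small window around a chosen reference band, so
# that peak intensities can be compared between potentials.
import numpy as np

def normalize_to_band(shift_cm1, intensity, ref_cm1, half_window=5.0):
    mask = np.abs(shift_cm1 - ref_cm1) <= half_window
    return intensity / intensity[mask].max()

# Hypothetical spectrum with a strong synthetic band at 2987 cm^-1:
shift = np.linspace(400, 3200, 2801)
spec = np.random.rand(shift.size) + 10 * np.exp(-((shift - 2987) / 4) ** 2)
print(normalize_to_band(shift, spec, 2987.0).max())
```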
Figure 4.17. A - full Raman spectra and B - Raman spectra in the region from 975 cm⁻¹ to 1100 cm⁻¹ recorded at the microITIES between a 5 mM NaCl aqueous phase solution and a 10 mM BTPPA+TPBCl- organic phase solution under different negative polarizations. For A and B, the dashed line corresponds to the Raman spectrum recorded at open circuit potential. The dotted line in A was recorded under a negative polarization potential of -300 mV, whereas the solid line was recorded at -800 mV. The spectra in B were recorded from +200 mV down to -800 mV. C - peak intensity (after normalization to 520 cm⁻¹) as a function of the applied potential: empty squares correspond to the peak at 1002 cm⁻¹ while filled circles correspond to the peak at 1078 cm⁻¹ (in that case the error bars are too small to notice); the open circuit potential was 200 mV; Raman peaks are marked according to their assignment to BTPPA+, DCE and TPBCl-. D - blank voltammogram recorded prior to spectra collection at a scan rate of 5 mV/s.

The spectra recorded during silica formation are shown in Figure 4.18 A for three distinct spectral regions (Figure 4.18 B shows the potential values at which the Raman spectra were recorded). After the first potential scan, the molecular composition of the liquid - liquid interface had changed and the spectrum obtained became much more complex than at OCP. The arrows in Figure 4.18 indicate the evolution of the particular peaks upon repetitive scans, with some of the intensities increasing with the number of scans while others drop.

Figure 4.18. A - Raman spectra recorded during in situ silica material formation at the microITIES in three different spectral regions. The arrows on the graphs indicate the direction of peak evolution, whereas the colors correspond to the order of spectra collection (1st - black, 2nd - red, 3rd - green, 4th - blue and 5th - grey). The Raman spectrum collection was performed alternately with ion-transfer linear sweep voltammetry, as shown schematically in part B of the figure. Raman bands marked with (□) were assigned to BTPPA+, with (☆) to DCE and with (△) to CTA+; the remaining marked bands were assigned to TPBCl-.

The properties of the modified interfaces were evaluated with ion transfer voltammetry. Interfacially active ions of different charge, size and nature were employed for this purpose: (i) three tetraalkylammonium cations of different size, (ii) a singly charged anion and (iii) two generations - G0 and G1 - of PAMAM dendrimers. The silica deposition was performed with one voltammetric cycle at 5 mV/s for [CTA+]org = 14 mM and [TEOS]aq = 50 mM (see Figure 4.4 C). The silica-modified silicon membranes were then calcined at 450°C for 30 min.

Figure 4.19 shows the blank CVs recorded before and after modification of the microITIES array with the silica deposits. The potential region was scanned from more negative to more positive potentials on the forward scan, as indicated in Figure 4.19 with the dotted arrow. The potential window was determined by the transfer of the supporting electrolyte ions.

Figure 4.19. CV recorded in the presence of the supporting electrolytes only (10 mM LiCl in the aqueous phase and 10 mM BTPPA+TPBCl- in the organic phase) before (black line) and after (red line) modification with silica deposits. The insert is the CV recorded in the absence of the aqueous supporting electrolyte before (black line) and after (blue line) modification. Scan rate was 10 mV/s. The dotted arrow indicates the direction of polarization on the forward scan.
Figure 4.20. CVs of four different ions crossing the interface in the absence (black curves) and in the presence (red curves) of silica deposits at the array of microITIES. A - transfer of three cations of different size (starting from the left): TBA+, TEA+ and TMA+. B - transfer of the negatively charged 4OBSA-. Arrows indicate the direction of polarization on the forward scan. The concentration of each ion was 56.8 µM and the scan rate was 10 mV/s.
Figure 4.22 shows the electrochemical behavior of PAMAM dendrimers of generation 0 (G0) (Figure 4.22 A) and generation 1 (G1) (Figure 4.22 B) before (black lines) and after (red lines) modification. The voltammograms are shown on the Galvani potential scale, based on the internal reference - the transfer peak of Cl- (although this peak is not present on the graphs shown, the correction was made with respect to the blank CV recorded prior to each experiment). Both the PAMAM dendrimers (28 µM) and the model ion TEA+ (42 µM) were initially present in the aqueous phase to facilitate the comparison. The CVs in Figure 4.22 A and B before modification had the same characteristics, independent of the generation of the PAMAM dendrimer studied. A first sigmoidal wave rose at +44 mV, corresponding to the non-diffusion-limited transfer of TEA+(aq→org). The second sigmoidal wave originated from the PAMAM(aq→org) transfer, which was partially masked by the organic electrolyte anion transfer, TPBCl-(org→aq). On the reverse scan, the back transfer of the PAMAM dendrimers resulted in a peak response followed by the diffusion limited back transfer of TEA+(org→aq).

Figure 4.22. Cyclic voltammograms illustrating the transfer of A - PAMAM generation G0 and B - PAMAM generation G1 dendrimers in the absence (black line) and the presence (red line) of silica deposits. The concentrations are [TEA+]aq = 42 µM and [PAMAM G0 or G1]aq = 28 µM. Scan rate was 10 mV s⁻¹. Black solid arrows indicate the direction of polarization during the forward scan. Inserts are the CVs after background subtraction.

Figure 4.23 and Figure 4.24 have three columns: a) CVs recorded before modification; b) CVs recorded after modification and c) the corresponding calibration curves. The rows in Figure 4.23 correspond to 1 - TMA+, 2 - TEA+ and 3 - TBA+, whereas in Figure 4.24 they correspond to 4 - 4OBSA-, 5 - PAMAM G0 and 6 - PAMAM G1. The theoretical calibration curves (see the solid lines in Figure 4.23 and Figure 4.24, column c) were calculated with the equation expressing the limiting current in the presence of a hemispherical diffusion zone (eq. 3.5), with diffusion coefficients taken from other studies (the exception was 4OBSA-, for which the diffusion coefficient was extracted from a linear fit of the experimental calibration curve; red dashed line in Figure 4.24, row 4, column c). Berduque et al. have shown that the electrochemical behavior of PAMAM dendrimers G0 and G1 is characteristic of multiply charged species [Berduque et al., Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces]. However, they demonstrated that the charge transferred is lower than the theoretical charge.

Figure 4.23. Data correspond to TMA+ (row 1), TEA+ (row 2) and TBA+ (row 3). Cyclic voltammograms for different concentrations of the interfacially active ion recorded before modification are shown in column a, whereas their voltammetric behavior after modification is depicted in column b. Column c represents the calibration curves before modification (black squares), after modification (empty circles) and the theoretical values calculated using eq. (3.5) (solid line). Error bars, where present, are smaller than the size of the points. Arrows indicate the direction of polarization during the forward scan. Scan rate was 10 mV/s.
Figure 4.24. Continuation of Figure 4.23. Data correspond to 4OBSA- (row 4), PAMAM G0 (row 5) and PAMAM G1 (row 6). The remaining description is identical to Figure 4.23.
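Eq. 3.5 itself is not reproduced in this section, so the sketch below uses a commonly quoted form for the steady-state limiting current at N independent micro-interfaces; whether the prefactor should be 4 (inlaid disc) or 2π (true hemisphere) to match eq. 3.5 is an assumption on our part, and all numerical values are hypothetical:

```python
# Hedged sketch of a theoretical calibration line like the solid lines in
# column c. For N independent micro-interfaces the steady-state limiting
# current is often written I = k*z*F*D*c*r*N with k = 4 (inlaid disc) or
# k = 2*pi (hemisphere); the choice of k here is an assumption.
F = 96485.0  # Faraday constant, C/mol

def limiting_current_A(z, D_cm2_s, c_mol_cm3, r_cm, n_pores, prefactor=4.0):
    return prefactor * z * F * D_cm2_s * c_mol_cm3 * r_cm * n_pores

# TMA+ (z = 1, D ~ 1e-5 cm2/s) at 56.8 uM (= 56.8e-9 mol/cm3) on a
# hypothetical array of 66 pores of 10 um radius:
print(f"I_lim ~ {limiting_current_A(1, 1e-5, 56.8e-9, 10e-4, 66):.1e} A")
```

The slope of such a theoretical line against concentration is what the experimental calibration curves in column c are compared to.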
Figure 4.25. Sensitivity ratio, S/S₀, as a function of zD for 4OBSA-, TBA+, TEA+, TMA+, PAMAM G0 and PAMAM G1.

Scheme 5.1. Methylation reaction of aminodiphenylmethane.

The ¹H NMR spectra of ADPM and of the reaction product PH+I- are shown in Figure 5.1 as spectra a and b respectively. Three characteristic resonances (Figure 5.1 a) were attributed to the ADPM protons. The resonance at around 9.2 ppm, marked A, corresponds to the protons of the amine group. The multiple peaks from 7.3 to 7.6 ppm, marked B, are due to the phenyl rings.

Figure 5.1. Proton NMR spectra for a) ADPM and b) PH+I-. The studied molecules were dissolved in DMSO. Spectra were recorded on a 300 MHz spectrometer.
Figure 5.3. Raman spectra in the region from 2700 to 3200 cm⁻¹. The bands correspond to the symmetric CH₃, antisymmetric CH₃ and aromatic C-H vibrational modes respectively. The black line corresponds to PH+I- whereas the red line corresponds to ADPM.

The electrochemical behavior of PH+I- (see Figure 5.4 a) was studied at a single pore glass capillary supporting the microITIES - for the preparation protocol refer to section 2.5.6 in chapter II. The pore diameter was 25 µm and the interior of the capillary was silanized prior to use.

Figure 5.4. Cyclic voltammograms recorded at the single pore microITIES. The dashed line represents the blank CV. The solid line corresponds to 0.5 mg/ml of the post-synthesis mixture containing PH+I- dissolved in the organic phase. The arrows indicate the direction of polarization on the forward scan. Scan rate was 5 mV/s. Schemes b) and c) correspond to radial diffusion from the aqueous side of the interface and linear diffusion from the organic side of the interface respectively.
Figure 5.5. Ion chromatography used for iodide detection. a) Chromatograms for known concentrations of aqueous potassium iodide solutions (the insert corresponds to a calibration curve), b) chromatograms of the water used for iodide extraction in four subsequent purification steps after the PH+TPBCl- metathesis reaction. The eluent used was an aqueous solution of 3.2 mM Na₂CO₃ and 1 mM NaHCO₃. In all cases the background was subtracted for better data presentation.

Figure 5.5 (b) shows the chromatograms for the four samples after subsequent rinsing, in the region of the iodide retention time peak. After the first rinsing the iodide concentration was equal to 3.07 mg/mL and it dropped to 0.74 mg/mL after the second rinsing. No iodides were detected after the third and fourth rinsings. Based on this result, three rinsing steps were adopted for iodide removal after the metathesis reaction.

Figure 5.6. Current density versus potential recorded for 1 mM PH+TPBCl- initially present in the organic phase at an array of microITIES (8 pores, each 50 µm in diameter). The insert is the current density versus potential for PH+I- before the metathesis reaction (recorded at a single pore microITIES with a diameter of 25 µm). The schemes represent the radial diffusion from the aqueous side of the interface and the linear diffusion inside the pore filled with the organic phase. Solid arrows indicate the direction of polarization during the forward scan. Scan rate was 10 mV/s.
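The iodide quantification above relies on the linear calibration shown in the insert of Figure 5.5 a. A minimal sketch of such a straight-line calibration (the standards and responses below are invented for illustration, not the thesis data):

```python
# Illustrative sketch: least-squares straight-line calibration, as used for
# the KI ion-chromatography standards, then inverted to read back an unknown.
import numpy as np

conc_mg_mL = np.array([0.5, 1.0, 2.0, 4.0])    # hypothetical standards
peak_area = np.array([0.9, 2.1, 4.0, 8.2])     # hypothetical responses
slope, intercept = np.polyfit(conc_mg_mL, peak_area, 1)

def area_to_conc(area):
    """Invert the calibration line to get a concentration."""
    return (area - intercept) / slope

# A 3.07 mg/mL rinse would correspond to a peak area of roughly:
print(f"area ~ {slope * 3.07 + intercept:.2f}")
```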
Figure 5.7 A shows the CVs recorded in the cell with the macroscopic liquid - liquid interface. The curve marked with the dashed line represents a blank voltammogram recorded in the presence of the supporting electrolytes only (5 mM NaCl in the aqueous phase and 10 mM BTPPA+TPBCl- in the organic phase). The curve marked with the solid line was recorded in cell 5 for x = 330 µM. The polarization direction, based on the charge of PH+, was from a more positive to a less positive potential during the forward scan. For the concentration studied, PH+ starts to transfer from the organic to the aqueous phase at around +350 mV, reaching a maximal current (-67.5 µA) for the peak at +220 mV. On the reverse scan, the back transfer peak was centered at +350 mV with a height of 64.7 µA. The half-wave transfer potential (E1/2) for PH+ is +300 mV. The CVs for different concentrations of PH+ in the organic phase (from 50 µM to 832 µM) are overlaid in Figure 5.7 B. The linear increase of the forward scan current as a function of the concentration is shown as an inset of Figure 5.7 B. The increasing peak separation at higher analyte concentrations is common at the liquid - liquid interface and might arise from the system resistivity as well as from partial interfacial adsorption of PH+.

Figure 5.7. A - Cyclic voltammograms in the absence (dotted line) and presence (solid line) of 330 µM PH+TPBCl- in the organic phase. B - Cyclic voltammograms for different concentrations of PH+TPBCl- in the organic phase: 123.4 µM, 470 µM, 1.01 mM, 2.02 mM and 2.82 mM. The insert shows the forward peak current as a function of the concentration.
Figure 5.8. A - Cyclic voltammograms recorded for the interfacial transfer of PH+ (concentration 50 µM), initially present in the organic phase, at different scan rates (from 5 to 25 mV/s in 5 mV/s steps). B - Positive and negative peak currents versus the square root of the scan rate. R² is the coefficient of determination of the linear fitting. The dashed arrow shows the direction of polarization during the forward scan.

The photostability of PH+ was evaluated by HPLC. Figure 5.9 A is the reference chromatogram recorded for EtOH. The chromatogram in Figure 5.9 B was recorded for PH+I- dissolved in EtOH without any irradiation. The peak at 0.57 min corresponds to the solvent (EtOH), whereas the peak at 12.28 min was attributed to the PH+ species. In the second approach, UV irradiation was coupled with electrochemistry at the liquid - liquid interface. Figure 5.9 C corresponds to the chromatogram recorded for the aqueous phase collected from above the organic phase after 50 min of chronoamperometric (E = +150 mV) PH+(org→aq) transfer with simultaneous UV irradiation.

Photolysis of PH+ in the aqueous solvent led to the formation of the benzhydryl carbocation. The stability of the carbocation in a nucleophilic (aqueous) solvent should be relatively weak and hence the formation of the alcohol is expected (less probable adducts are also possible, for instance benzhydryl chloride). The peak at around 3.14 min was assigned to the resulting benzhydrol (see Figure 5.9 C), which was confirmed by the injection of pure benzhydrol into the chromatographic column. A trace of undecomposed PH+ can also be seen as a very weak peak.
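The linear ip versus v^(1/2) behavior of Figure 5.8 B is what allows a diffusion coefficient to be extracted via the Randles-Sevcik relation, which applies to reversible ion transfer at the ITIES as it does to electron transfer at a solid electrode. The sketch below is hedged: the area, concentration and peak currents are invented, and a simple z = 1 reversible transfer is assumed:

```python
# Hedged sketch: extracting D from the slope of ip vs sqrt(v) using
# ip = 0.4463*z*F*A*c*sqrt(z*F*v*D/(R*T)) (reversible transfer assumed).
import numpy as np

F, R, T = 96485.0, 8.314, 298.0

def D_from_slope(slope_A_per_sqrtVs, z, area_cm2, c_mol_cm3):
    k = 0.4463 * z * F * area_cm2 * c_mol_cm3 * np.sqrt(z * F / (R * T))
    return (slope_A_per_sqrtVs / k) ** 2

v = np.array([5, 10, 15, 20, 25]) * 1e-3               # scan rates, V/s
ip = np.array([1.50, 2.12, 2.60, 3.00, 3.35]) * 1e-6   # synthetic peaks, A
slope = np.polyfit(np.sqrt(v), ip, 1)[0]
# Hypothetical: A = 0.5 cm2, c = 50 uM (= 50e-9 mol/cm3):
print(f"D ~ {D_from_slope(slope, 1, 0.5, 50e-9):.1e} cm2/s")   # ~1e-5
```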
Figure 5.9. Chromatograms recorded during the photodecomposition study of the PH+ cations: A - pure EtOH; B - 100 µM PH+I- in EtOH; C - aqueous phase collected from the liquid - liquid cell after 50 min of chronoamperometric transfer (E = 0.15 V) of PH+ from the organic to the aqueous phase with simultaneous UV irradiation and D - the aqueous phase collected from the liquid - liquid cell under the conditions applied in part A, in the absence of PH+TPBCl- in the organic phase. The mobile phase flow rate was 3 ml/min. The detector wavelength was 220 nm.

The effect of UV irradiation on the voltammetric response is shown in Figure 5.10. The liquid - liquid cell was kept at open circuit potential during irradiation and CVs were recorded before and after 40 min of irradiation. The current decrease of the forward and reverse peaks induced by continuous UV irradiation suggests degradation of the interfacially active PH+ species in the organic phase. The forward and reverse peak currents decreased from if = 129.6 µA and ir = 131.4 µA before irradiation down to if = 69.3 µA and ir = 44.5 µA after 40 min of UV irradiation. Interestingly, the available potential window was narrowed, as a large background current was recorded at the positive end of the potential scale. This increase can be attributed to proton transfer. The origin of the increased concentration of protons is discussed in the following section.

Figure 5.10. The cyclic voltammogram recorded for 1.01 mM PH+TPBCl- initially present in the organic phase is marked with the red line. The black curve was recorded after 40 minutes of UV irradiation. Solid arrows show the current decrease caused by irradiation. The direction of polarization is indicated with the dashed arrow. Scan rate was 10 mV/s.

Mass spectrometry was employed in order to study the content of the aqueous phase after the PH+(org→aq) transfer.

The bulk pH changes are summarized in Figure 5.12. For each point, the ITIES was held at the corresponding potential for 150 seconds (red circles) or 420 seconds (black squares). Apparently the pH of the aqueous phase stays unaffected up to 0 V. The pH jump due to OH- electrogeneration at the aqueous counter electrode was observed at around -100 mV and reaches a plateau for potentials < -200 mV. The pH change around the counter electrode was additionally detected with a pH indicator - phenolphthalein - added to the aqueous phase during voltammetric cycling in the presence of PH+TPBCl- in the organic phase. The fuchsia cloud - indicating the pH increase - around the aqueous counter electrode starts to be visible once +100 mV is reached on the forward scan. Polarization of the interface towards less positive potentials caused the fuchsia cloud to grow. On reverse polarization the fuchsia cloud was seen up to +300 mV, until it blurred away and disappeared. No change in color was detected in the vicinity of the liquid - liquid interface, either in the absence or in the presence of UV irradiation coupled with the PH+(org→aq) transfer.

Figure 5.12. Changes of the bulk aqueous phase pH induced by the side reaction at the platinum mesh counter electrode. The pH for each point was measured after 150 seconds (red circles) and 420 seconds (black squares) of chronoamperometric polarization at the given potential. The pH at OCP before polarization was 5.73 (red circles) and 5.83 (black squares). The photos indicate the change in color of phenolphthalein around the platinum electrode.
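A back-of-envelope estimate shows why such a pH jump is plausible. If the cathodic side reaction at the aqueous Pt counter electrode is water reduction (2 H₂O + 2 e⁻ → H₂ + 2 OH⁻), every coulomb of charge releases Q/F moles of OH⁻. The sketch below is our own estimate with invented current, time and cell volume, not the measured data:

```python
# Back-of-envelope sketch (assumed values): bulk pH after releasing Q/F moles
# of OH- into a given volume, assuming water reduction at the counter
# electrode is the only side reaction.
import math

F = 96485.0  # C/mol

def bulk_pH_after(charge_C, volume_L, pH0=5.8):
    oh = charge_C / F / volume_L + 10 ** (pH0 - 14.0)   # mol/L of OH-
    return 14.0 + math.log10(oh)

# Hypothetical: 10 uA passed for 420 s into 5 mL of aqueous phase:
print(f"pH ~ {bulk_pH_after(10e-6 * 420, 0.005):.1f}")   # ~9
```

Even microampere-level currents sustained for minutes are thus enough to shift the bulk pH of a small cell by several units, consistent with the plateau seen in Figure 5.12.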
The cell used is shown in Figure 2.3 in section 2.2. The liquid - liquid interface was supported by an array of pores, each 50 µm in diameter, and hence local pH measurements on the micrometer scale had to be performed. Several pH probes could be employed in this regard. Examples include: (i) microelectrodes modified with a neutral carrier-based ion-selective liquid membrane [263], (ii) the two-dimensional semiconductor pH probe [264], (iii) antimony-antimony oxide electrodes [265] or (iv) electrodes modified with iridium oxide [261,262]. The last example is especially worthy of note: iridium oxide modified electrodes exhibit Nernstian behavior, are stable over long time periods, have a pH operating range from 2 to 12 and are cheap compared with iridium microwires. One method of preparation is electrodeposition from an alkaline iridium(III) oxide solution. Such an approach was employed in this work to modify Pt microdisc electrodes. The modification with iridium oxide was performed with 10 subsequent voltammetric scans (see Figure 5.13 a) at a scan rate of 50 mV/s between 0 V and +1300 mV versus a silver wire reference electrode.

Figure 5.13. a) Cyclic voltammograms recorded during the electrodeposition of iridium oxide at Pt microelectrodes, b) SEM image of the corresponding modified electrode.

On the forward scan, the electrode was polarized towards anodic potentials. Three pairs of signals corresponding to different oxidation states of iridium can be distinguished: (i) E1/2 of 220 mV due to the Ir(II) ⇌ Ir(III) redox couple, (ii) E1/2 at around 0.63 V originating from the Ir(III) ⇌ Ir(IV) redox couple and (iii) the signal at the positive extreme of the potential window.

Figure 5.14. Local pH measurements as a function of experimental time. Black circles correspond to pH measurements 1 µm above the ITIES in the presence of 1 mM PH+TPBCl- in the organic phase during the PH+(org→aq) transfer with simultaneous UV irradiation (with the exception of the first point, recorded 1 µm above the ITIES in the absence of irradiation). Red squares were recorded under identical conditions in the absence of PH+TPBCl- in the organic phase. Green symbols correspond to pH measurements in the bulk aqueous phase performed before and after the local pH measurements. The insert corresponds to the calibration curves recorded before (black points) and after (red points) the experiment with PH+TPBCl- in the organic phase.

To rule out an effect of photodecomposition products of the organic electrolyte, a control experiment in the absence of PH+TPBCl- in the organic phase was performed (see the red squares in Figure 5.14). No pH change was observed over the studied experimental time once the pH probe was placed 1 µm above the liquid - liquid interface, whose potential was held at +150 mV with simultaneous UV irradiation.

Figure 5.15 shows the CVs recorded at an array of microITIES (a) and at a macroITIES (b) separating 8 mM BTPPA+TPBCl- in 20% TEOS in DCE and a 5 mM NaCl aqueous solution, in the absence and in the presence of CTA+Br- in the aqueous phase.

Figure 5.15. a) Cyclic voltammograms recorded at the array of microITIES (8 pores, 50 µm in diameter each). (A) corresponds to the blank system (8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl), whose aqueous phase was further enriched with (B) 0.70 mM, (C) 1.38 mM and (D) 2.02 mM CTA+Br-. b) Cyclic voltammograms recorded in the macroscopic cell supporting the ITIES. (E) is the blank (8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl), which can also be found in the insert, whereas (F) additionally contained 1 mM CTA+Br- in the aqueous phase.
The addition of CTA+Br- to the aqueous phase of the macroscopic cell (see Figure 5.15 (b)) led to system destabilization and only a resistive response was recorded. No silica deposit formation was observed, since under such conditions the hydrolysis of TEOS at the liquid - liquid interface is very slow. The CVs recorded at the microITIES (see Figure 5.15 (a)) showed that the interface is mechanically stable up to 2.02 mM CTA+Br- (the highest concentration studied) in the aqueous phase, in contrast to the macroscopic system. The only difference observed was a shift of the whole voltammogram towards more positive potential values and the evolution of a peak limiting the potential window on the negative side of the potential scale - probably arising from the CTA+(aq→org) transfer.

Figure 5.16. Cyclic voltammograms recorded at the macroscopic liquid - liquid interface. The red curve corresponds to the blank solution, 8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl. The remaining black curves were recorded at the ITIES whose organic phase additionally contained 500 µM PH+TPBCl-, whereas the aqueous phase was enriched with 0 mM (A), 0.5 mM (B) and 2 mM (C) CTA+Br-. The TEM micrographs correspond to the silica deposits whose deposition was conducted under the conditions indicated by the corresponding CVs.

Figure 5.16 shows the voltammograms recorded prior to the local pH change and silica deposition. As already shown in Figure 5.15 (b), the presence of cationic surfactants in the aqueous phase destabilizes the macroscopic liquid - liquid interface. In the presence of 0.5 mM CTA+Br- (curve B in Figure 5.16) in the aqueous phase, a shift of the voltammogram and some current fluctuations were observed, whereas only a resistive current was recorded for 2 mM CTA+Br- (curve C in Figure 5.16). To trigger the silica deposition, the interface was held at E = +150 mV for 60 min.

The in situ insight into the polarized ITIES provided by confocal Raman spectroscopy (the dimensions of the miniaturized liquid - liquid interface required the application of a local characterization technique) allowed the study of the different molecular contributions. Two phenomena were followed: (i) the interfacial ion transfer reaction and (ii) the electrochemically controlled interfacial silica deposit formation. In general, the information extracted during this work reveals the following:
1. The negative polarization affected the molecular composition of the liquid - liquid interface, as the Raman signals corresponding to BTPPA+ and TPBCl- increased and decreased respectively;
2. No interface movement was detected, since the set of Raman bands attributed to DCE remained unchanged independently of the potential applied;
3. The molecular composition of the liquid - liquid interface during silica formation changed dramatically after the first half of the voltammetric cycles;
4. Strong signals from CTA+, BTPPA+ and TPBCl- were found during the electrodeposition; the intensity of the signals tended to grow with the number of voltammetric cycles.

In general, the silica deposition was carried out by the sol - gel process in the presence of molecules acting as templates. The deposition was controlled by electrochemistry at the liquid - liquid interface.
For this purpose, two categories of ITIES (interface between two immiscible electrolyte solutions) were used: (i) a macroscopic liquid - liquid interface created in a four-electrode electrochemical cell and (ii) a liquid - liquid interface of microscopic dimensions, in the form of a single interface or of an array of micro-interfaces.

First, the macroscopic liquid - liquid interface was used to study the mechanism of silica deposition. A cationic surfactant - CTA+ - initially dissolved in a 10 mM BTPPA+TPBCl- solution in dichloroethane, was used both as template and as catalyst during the silica formation. A silica precursor - TEOS - was hydrolyzed in the aqueous phase (pH = 3) consisting of 5 mM NaCl. After hydrolysis, the pH of the aqueous phase was raised to 9 in order to promote the formation of polynuclear silanol species. The transfer of CTA+ from the organic to the aqueous phase was controlled by interfacial polarization and was observed in the presence of hydrolyzed silica species in the aqueous phase. This type of reaction is known as facilitated ion transfer. Silica formation was triggered once the CTA+ cations had transferred into the aqueous phase: the spherical micelles formed there catalyze the condensation reaction and act as the matrix structuring the silica deposits. The general conclusions concerning the modification of the macroscopic liquid - liquid interface are the following:
1. The characteristic drop in the cyclic voltammetric current during the back transfer (CTA+ aq→org) indicates that the transfer was irreversible and that CTA+ was trapped in the silica material;
2. The silica material is formed at the liquid - liquid interface after as little as one voltammetric cycle performed at 5 mV/s;
3. An increase of the back-transfer (CTA+ aq→org) charge was observed for the first cycles, while the charge calculated for the subsequent cycles became constant; these observations suggest that the interfacial zone becomes saturated with polynuclear silanol species;
4. The formation of silica deposits was limited by [CTA+]org and by its concentration in the diffusion layer on the aqueous side of the liquid - liquid interface;
5. The range of TEOS concentrations studied, i.e. from 50 mM to 300 mM, did not affect the CTA+ transfer;
6. The optimal pH for the formation of polynuclear silanol species was found to be between 9 and 10.

Moreover, the macroscopic liquid - liquid interface made it possible to generate silica deposits for subsequent characterization. The silica deposits collected from the liquid - liquid interface were stored overnight in an oven at 130°C. A set of characterization techniques was employed, which suggests the following:
1. The formation of the silica bond (Si-O-Si) was confirmed. Spectroscopic studies (infra-red) indicated the presence of CTA+ and of traces of organic electrolyte ions in the silica material;
2. XPS analyses indicated that TEOS was not totally hydrolyzed, since a C-O bond was detected. Moreover, based on these XPS results, it is assumed that charge balancing between negatively charged OH- functions (present inside the silica pores) and positive Na+ ions may occur;
3. The mesostructure of the silica deposits was confirmed, and the pore center-to-center distance, depending on the polarity of the organic phase and on the CTA+ concentration in that phase ([CTA+]org), was between 3.7 and 7 nm;
4. SAXS analyses and SEM imaging confirmed the presence of structures known in the literature as 'worm-like'.

Once the electrogeneration of silica deposits had been optimized at the macroscopic liquid - liquid interface, miniaturization was carried out. The microscopic ITIES was supported by a silicon wafer whose pore array (radii in the range 5-10 µm), arranged hexagonally, was prepared by lithography. Miniaturization improved some of the electroanalytical parameters: on the one hand, the limit of detection was lowered thanks to the lower capacitive current; on the other hand, higher mass transport improved the sensitivity of the system. Silica deposition at the miniaturized liquid - liquid interface was studied and the following conclusions were drawn:
1. The current - potential curves recorded during silica formation did not correspond to a simple ion transfer reaction;
2. The shape of the forward polarization peak (resulting from the CTA+ org→aq transfer) depended on the scan rate (thickness of the diffusion layer on the organic side of the liquid - liquid interface) and on the silicon membrane used (pore center-to-center distance). In general, the CTA+ org→aq transfer reaction was diffusion-limited once the transfer was governed by linear diffusion inside the pores of the silicon membrane (at scan rates > 10 mV/s) and when the diffusion layers overlapped at the pore entrances on the organic side (at scan rates < 0.1 mV/s). Overlap of the diffusion profiles on the organic side was also observed for the silicon membrane with a small pore spacing factor;
3. The silica deposits always grew towards the aqueous phase. They formed at the pore entrances on the aqueous side. The deposits are flat at the bottom and filled with silica inside (which excludes any possible movement of the interface during the deposition process);
4. The shape of the silica deposits corresponds to the hemispherical diffusion layer of the CTA+ transferring into the aqueous phase: for short experiments, the deposits were flat on top and rounded on the sides, and hemispherical for long experiments.

In the last part of the thesis, a photosensitive and interfacially active molecule was synthesized and then used to modify the pH on the aqueous side of the liquid - liquid interface. The conclusions of this part of the thesis are as follows: working at high [CTA+]aq was impossible at the macroscopic interface, so miniaturized systems had to be used (the miniaturized liquid - liquid interface was mechanically stable up to [CTA+]aq = 2.02 mM; higher concentrations were not studied). Silica deposition at the interface can also be triggered by a local pH increase.
Interfacially active species (initially present in the organic phase) functionalized with a basic center, for example a nitrogen atom with a lone electron pair, could be employed to control hydrolysis and condensation reactions electrochemically (for more details, see section 7.3 in the preliminary results part).

Certain scientific questions and improvements concerning this work still remain challenging. (i) Further effort could be directed towards the evaluation of the permeability of the silica deposits, and scanning electrochemical microscopy (SECM) is of highest interest in this regard. (ii) The selectivity of the liquid - liquid interface modified with the silica deposits could be further improved by chemical functionalization, for instance by the introduction of organosilane species into the sol. In this way, parameters such as the charge, the hydrophobicity or specific ligand-host interactions could be tuned depending on the functional group used. (iii) Mesoporous and well-ordered silica materials can also be obtained by the evaporation-induced self-assembly technique. The resulting material, deposited at a membrane supporting nano- or microITIES, could be employed as a molecular sieve. (iv) Triggering the condensation of silica by a local pH change induced by an ion transfer reaction from the organic phase could be used to control the silica deposit formation. The design of interfacially active ions functionalized with a chemical moiety affecting the concentration of protons in the aqueous medium is a requirement. (v) Interfacial silica gel electrogeneration can form a scaffold for molecules or particles undergoing interfacial adsorption. This phenomenon might be employed for the development of new silica templating methods or for the encapsulation of adsorbed species.

Some preliminary results

Figure 6.1. A - Set-up designed to study the local properties of silica deposits. The designations stand for: 1 - PTFE cell; 2 - aqueous phase, a solution of TBA+Cl-; 3 - DCE solution of TBA+TPBCl-; 4 - SECM tip filled with a DCE solution of 10 mM BTPPA+TPBCl-. The array of microITIES was modified with silica deposits. B - Schematic representation of the positive feedback recorded at the ITIES.
Figure 6.2. SECM approach curves recorded A - above the silicon wafer; B - above pure DCE; C - above the 10 mM TBA+TPBCl- solution in DCE and D - above the silica deposit modified µITIES supporting 10 mM TBA+TPBCl- in DCE.

Above the 10 mM TBA+TPBCl- solution forming the liquid - liquid interface, positive feedback was recorded (see Figure 6.2 C). An intermediate response was observed in the presence of silica deposits, as shown in Figure 6.2 D. Information such as the permeability coefficient or the kinetics of ion transfer across the silica deposits still has to be extracted from the SECM results.
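For reading approach curves such as those in Figure 6.2, the classic Kwak-Bard approximation for positive feedback at a conductive substrate (RG ~ 10) is often used; whether it applies quantitatively to an ITIES tip approaching a modified micro-interface is an assumption, so the sketch below is only an orientation aid:

```python
# Hedged aside: Kwak-Bard style approximation of the normalized tip current
# for positive feedback over a conductive substrate (RG ~ 10). Its use for
# ITIES-based SECM here is an assumption on our part.
import math

def normalized_tip_current_positive(L):
    """L = d/a, tip-substrate distance normalized by the tip radius."""
    return 0.68 + 0.78377 / L + 0.3315 * math.exp(-1.0672 / L)

for L in (0.5, 1.0, 2.0, 5.0):
    print(L, round(normalized_tip_current_positive(L), 2))
```

An 'intermediate' response like curve D, falling between the pure positive-feedback limit and the hindered-diffusion limit, is what motivates extracting a permeability coefficient from these data.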
Two organosilanes were tested: (3-mercaptopropyl)trimethoxysilane (MPTMS) and (3-azidopropyl)trimethoxysilane (AzPTMS). Cyclic voltammetry (see Figure 6.3) was used to electrogenerate the functionalized deposits.

Figure 6.3. CVs recorded during silica deposit formation at the macroscopic ITIES in the presence of A - 5% MPTMS and B - 15% AzPTMS in the initial sol solution. Experimental conditions: [CTA+]org = 14 mM; [x% organosilane + (100 - x)% TEOS]aq = 300 mM; scan rate was 1 mV/s. The inserts are TEM images of the electrogenerated silica material.
Figure 6.4. Variation of the SAXS patterns for A - thiol- and B - AzPTMS-functionalized silica deposits. The deposits were prepared under the following initial conditions: [CTA+]org = 14 mM; [x% organosilane + (100 - x)% TEOS]aq = 300 mM; scan rate was 1 mV/s. The black line corresponds to pure TEOS, red to 5%, blue to 15% and gray to 25% of organosilanes in the initial sol solution.

The SAXS patterns are compared in Figure 6.4. A similar tendency was observed for both functionalities: (i) the broadness of the peak in the SAXS pattern increased with increasing concentration of the organosilanes in the initial sol solution (indicating an increase in the average pore center-to-center distance) and (ii) the peak intensity tended to drop for the samples containing a higher amount of organic groups, suggesting a deterioration of the mesoporous properties.

Scheme 6.1. Amination reaction of the (2-bromoethyl)trimethylammonium cation with ethylenediamine.

The reaction is not selective and may lead to the formation of di-, tri- and quaternary substituted amines; hence ethylenediamine has to be used in high excess. The next step is the precipitation of the product from the reaction mixture with TPBCl- anions in order to form a molecule soluble in DCE:

Scheme 6.2.

Figure 6.5. ¹H NMR spectra for the product of the amination reaction.
Figure II.2. FT-IR spectra recorded for AzPTMOS (a few drops were put on a glass support, dried in an oven at 130°C for 4 hours, then scraped off, ground with KBr and pressed to form a pellet). A - peak corresponding to the N₃ vibrational mode; B - CH₂ and CH₃ vibrational modes.

Table 1.1. Comparison of different organic solvents, organic phase electrolytes and aqueous phase electrolytes in terms of available potential window widths.
Table 1.1. Comparison of the different organic solvents, organic phase electrolytes and aqueous phase electrolytes in terms of available potential window widths.

Organic phase solvent | [Organic phase electrolyte] | [Aqueous phase electrolyte] | Potential window width
Valeronitrile | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 100 mV
Caprylonitrile | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 200 mV
2-octanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 250 mV
2-decanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 350 mV
3-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 350 mV
o-NPOE | 1 mM TPAs+TPB- | 10 mM LiCl | 350 mV
o-NPOE | 20 mM TPA+TPB- | 10 mM LiCl | 350 mV
5-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 400 mV
5-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 400 mV
1,2-DCE | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 600 mV
o-NPOE | 5 mM TBA+TPBCl- | 10 mM LiCl | 620 mV
1,2-DCE | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 650 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM NaCl | 700 mV
1,4-DCB | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 700 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM NaCl | 700 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM LiF | 800 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM MgSO4 | 825 mV
1,6-DCH | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 850 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM NaCl | 920 mV
1,2-DCE | 10 mM BTPPA+TFPB- | 10 mM MgSO4 | 970 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM LiF | 970 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM MgSO4 | 1000 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 2000 mM MgSO4 | 1000 mV
1,2-DCE | TBA+TPFPB- | 0.5 mM HCl | 1050 mV
(1:1 v:v) 1,2-DCE:CH | 10 mM BTPPA+TFPB- | 10 mM MgSO4 | 1120 mV

The abbreviations stand for: 1,2-DCE – 1,2-dichloroethane; 1,4-DCB – 1,4-dichlorobutane; 1,6-DCH – 1,6-dichlorohexane; CH – cyclohexane; o-NPOE – 2-nitrophenyl octyl ether; BTPPA+ – bis(triphenylphosphoranylidene)ammonium cation; TPBCl- – tetrakis(4-chlorophenyl)borate anion; TFPB- – tetrakis[3,5-bis(trifluoromethyl)phenyl]borate anion; TBA+ – tetrabutylammonium cation; TPA+ – tetrapentylammonium cation; TPB- – tetraphenylborate anion; TPAs+ – tetraphenylarsonium cation; TPFPB- – tetrakis(pentafluorophenyl)borate anion; TOctA+ – tetraoctylammonium cation. [START_REF] Ding | Kinetics of Heterogeneous Electron Transfer at Liquid/Liquid Interfaces As Studied by SECM[END_REF]

1.1.4. Electrochemical instability at the electrified liquid–liquid interface in the presence of ionic surfactants

Electrochemical instability, a term proposed and explained by Kakiuchi, 70,71 describes the instability of the electrified liquid–liquid junction in the presence of ionic molecules undergoing both partitioning and adsorption. The potential-dependent adsorption of surface-active ions was shown to reach a maximum for an interfacial potential difference, $\Delta_{org}^{aq}\Phi$, around the standard ion transfer potential difference of the surface-active ion i, $\Delta_{org}^{aq}\Phi_i^0$. A consequence of maximal adsorption is a drop in interfacial tension that leads to thermodynamic instability. Further analysis of the electrocapillary curves in the presence of surface-active ion adsorption led to the conclusion that, for some values of $\Delta_{org}^{aq}\Phi$, the double layer capacitance becomes negative. Since a negative capacitance has no physicochemical meaning, real systems compensate the energy loss by emulsification, or escape from the instability through Marangoni-type movements (transfer of charged species along the liquid–liquid interface induced by surface tension gradients). The thermodynamically forbidden region can be referred to as the instability window. Its width depends on the change in interfacial tension upon surfactant adsorption, which itself depends on the surfactant concentration and on the standard Gibbs free energy of adsorption, $\Delta G_{ads,i}^0$.
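The standard ion-transfer potential that anchors the instability window also fixes the equilibrium partitioning of the ion between the two phases through a Nernst-type relation. A minimal sketch (Python); the sign convention (aqueous minus organic) and the numerical inputs are assumptions of this illustration:

```python
import numpy as np

R, F, T = 8.314, 96485.0, 298.15  # J/(mol K), C/mol, K

def galvani_potential(dphi0_V, z, a_org, a_aq):
    """Nernst-type equation for ion transfer across the ITIES:
    dphi = dphi0 + (RT / zF) * ln(a_org / a_aq)."""
    return dphi0_V + (R * T) / (z * F) * np.log(a_org / a_aq)

# Example: a monovalent cation with a standard transfer potential of
# 0.20 V, 90% of which has partitioned into the organic phase.
print(f"{galvani_potential(0.20, z=1, a_org=0.9, a_aq=0.1):.3f} V")
```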
Table 1.2. Comparison between the three size regimes of the ITIES (macroITIES, microITIES and nanoITIES). The original table compares, for each regime, the characteristic diffusion scheme and the condition relating r, the interfacial radius, to δ, the diffusion layer thickness.

Table 1.3. Voltammetric characteristics for different arrays of microITIES. [START_REF] Davies | The Cyclic and Linear Sweep Voltammetry of Regular and Random Arrays of Microdisc Electrodes: Theory[END_REF]

1.2. Sol–Gel Process of Silica Employing Template Technology

The aspects described in the following subsections are: (i) the chemistry and properties of silicon-containing compounds, (ii) the sol–gel process of silica and (iii) the template methods used for electrode structuring – each being described with regard to the content of the present work rather than as a comprehensive overview of a set of very broad subjects. The first part covers the most relevant information, including nomenclature and the chemical and physical properties of silicon and silicon-containing compounds. Next, the sol–gel processing of silica materials is discussed chronologically – from the raw material to the final product, silica. The third part describes the available template methods developed for electrode surface engineering. Next, examples of templated sol–gel processing of mesoporous silica are given, which emerge from the modification of solid-state supports. Finally, the functionalization possibilities of mesoporous silica materials are briefly described.

1.2.1. Nomenclature and physicochemical properties of silicon and silicon-containing compounds

Silicon (Si) is the second most abundant atom (after oxygen) in the earth's crust (around 28% by mass). Its physicochemical properties are shown in Table 1.4. Silica occurs in a variety of silicate minerals (quartz, tridymite or cristobalite) as well as in synthetic chemicals. In order to avoid any confusion, it is important to adopt one commonly accepted terminology for silicon-containing compounds. The term 'silica' will be used interchangeably with its IUPAC name, silicon dioxide (SiO2). SiH4 is called silane. Anionic species containing a silicon atom, for instance SiO4^4- or [SiF6]^2-, are called silicates. The term 'silanol' is used when at least one OH group is attached to a silicon atom. Silicic acid is the silanol with the general formula [SiOx(OH)4-2x]n.

Table 1.4. Physicochemical properties of the silicon atom.
Atomic number: 14 | Group: 14 | Period: 3 | Block: p
Electron configuration: [Ne] 3s2 3p2
Atomic weight: 28.085; isotopes: 28Si 92.223%, 29Si 4.685%, 30Si 3.092%
Melting point: 1410°C 86 / 1414°C 85 | Boiling point: 2355°C 86 / 3265°C 85
Most common oxidation states: -4, +4
Density: 2.33 g/cm3 (at 25°C) 85,86
Electronegativity: 1.90
Atomic radius (non-bonded): 2.10 Å 85 | Covalent radius: 1.14 Å 86
Bond enthalpies 85: H-Si 318 kJ/mol; Si-Si 222 kJ/mol; O-Si 452 kJ/mol; C-Si 301 kJ/mol

1.3.1. Metals at the electrified liquid–liquid interface

The following description of the examples emerging from electrified liquid–liquid interface modification is divided into: (i) interfacial metal deposition, (ii) phospholipids at the ITIES, (iii) interface modification with organic polymers, (iv) carbon-based materials (with some examples emerging from semi-modified ITIES, e.g. thin-layer ITIES supported on carbon-based electrodes, or three-phase-junction set-ups) and finally (v) silica materials. The last group gives an overview of recent developments in the field of liquid–liquid interface modification with silica materials.
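Tying back to the size regimes of Table 1.2: when the diffusion layer is much larger than the interfacial radius (micro/nano regime), the ion-transfer current reaches a steady state analogous to that of an inlaid microdisc electrode. A minimal sketch (Python); the inlaid-disc expression and independent, non-overlapping diffusion at each pore are assumptions of this illustration:

```python
F = 96485.0  # C/mol

def i_ss_array(z, D_cm2_s, c_mol_cm3, r_cm, n_pores=1):
    """Diffusion-limited steady-state current for an array of inlaid
    microdisc ITIES: I = 4*z*F*D*c*r per pore (no shielding)."""
    return 4.0 * abs(z) * F * D_cm2_s * c_mol_cm3 * r_cm * n_pores

# Example: 1 mM monovalent ion (1e-6 mol/cm^3), D = 1e-5 cm^2/s,
# pores of 5 um radius (5e-4 cm), 100 pores.
print(f"{i_ss_array(1, 1e-5, 1e-6, 5e-4, 100) * 1e9:.0f} nA")
```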
It covers reports dealing with the electrified ITIES as well as with modification of the neat liquid–liquid interface. Metal deposition at the liquid–liquid interface involves a metal precursor dissolved in one phase and an electron donor dissolved in the other. The deposition mechanism follows a homogeneous or heterogeneous electron transfer reaction and results in the interfacial formation of metallic films or metal nanoparticles (NPs). Historically, the first report dealing with interfacial metal electrogeneration was communicated by Guainazzi et al., who reported the formation of a metallic Cu film at the ITIES between an aqueous solution of CuSO4 and a dichloroethane solution of TBA+V(CO)6- once current was passed through the system. 103 Since then, the electrified liquid–liquid interface has attracted significant scientific interest for metal deposition. In the following sections, a few examples of Au, Ag, Pd and Pt interfacial electrodeposition are given.

1.3.1.1. Au deposition at the ITIES

Au NP and Au film deposition at the electrified liquid–liquid interface is a very recent topic which is gaining ground every year. The first report dealing with potential-controlled deposition of preformed Au NPs at the liquid–liquid interface was by Su et al., who showed that mercaptosuccinic-acid-stabilized Au NPs suspended in the aqueous phase can undergo reversible interfacial adsorption under a controlled Galvani potential difference. 104 The reversible assembly of Au NPs, taking place at the negative end of the voltammetric potential window, could not be followed from the current characteristics and was hence studied via electrocapillary curves constructed for different Au NP concentrations. Adsorption of negatively charged citrate-coated Au NPs was also observed at negative potentials by capacitance measurements. 105 Cheng and Schiffrin were the first to report in situ Au particle formation at the electrified ITIES. The deposition reaction was triggered by electron transfer between Fe(CN)6^4- in the aqueous phase and AuCl4- in DCE. 107 Interestingly, Gründer et al. did not observe any Au(0) formation under similar conditions unless preferential nucleation sites were introduced into the system (Pd NPs adsorbed at the ITIES). 108 Schaming et al. further extended Au(0) electrodeposition at the ITIES.
Beyond gold, palladium and platinum have also been deposited at the electrified liquid–liquid interface. Trojánek et al. studied the initial nucleation rates of Pt NPs and, given the wide range of values obtained (from nucleation rates approaching zero up to 207·10^-5 cm^-2 s^-1), concluded that the presence or absence of nucleus formation is dictated by probability. 122 Pd and Pt have long been known as extremely versatile catalysts, and this catalytic activity also proved feasible at the liquid–liquid interface. For instance, pre-formed Pd NPs activated electrochemically by a heterogeneous electron transfer reaction from an organic phase containing decamethylferrocene were used to catalyze the dehalogenation of an organic substrate dissolved in the aqueous phase. 128 The hydrogen evolution reaction was also catalyzed by Pd and Pt NPs (the former being slightly more efficient) electrogenerated in situ at the ITIES by metal precursor reduction with decamethylferrocene, an electron donor which also served as the reductant for the aqueous-phase protons. 129 Trojánek et al. showed the catalytic effect of the Pt-NP-modified ITIES on the oxygen reduction reaction, obtaining a rate constant one order of magnitude greater than at the unmodified liquid–liquid interface. 130

1.3.2. Phospholipids at the electrified liquid–liquid interface

Phospholipids are a class of lipids present in all biological cell membranes. With their polar phosphate head group and fatty acid tails, phospholipids are amphiphilic, and biological membranes owe their unique bilayer structure to this particular property. The nature of the functional groups attached to the phosphate group gives rise to five main types of compounds: phosphatidylcholines (PC), phosphatidylserines (PS), phosphatidylethanolamines (PE), phosphatidic acids (PA) and phosphatidylinositides (PI). Within each family, the number and saturation of the carbons in the alkyl chains may differ; in addition, for phosphatidylinositides, the inositol group can be substituted with one, two or three phosphate groups. 131 Modification with phospholipids has been successfully applied to solid/liquid, 132,133 air/liquid 134 and liquid–liquid 135 interfaces. Model phospholipid monolayers are well-defined and controllable systems representing one half of a biological membrane. The phospholipid-modified liquid–liquid interface can also provide information about pH equilibria, the adsorption–desorption of the lipids at and from the liquid–liquid interface, and the association–dissociation interactions of phospholipids with charged species from both sides of the interface, as discussed in a series of papers from Mareček. The phospholipid-monolayer-modified ITIES was also employed to study other bio-important molecules. The interaction of dextran sulfate/Ru nanoparticles, 144 dextran sulfate (DS) 145,146 and/or gramicidin (gA) 146 with monolayers composed of different phospholipids affected the rate constant of transfer of different ions across the monolayer. The presence of DS within the phospholipid monolayer decreased the rate constant for TEA+ and metoprolol, in contrast to the gA-modified monolayer, which slightly enhanced it. For monolayers coupled with DS/Ru nanoparticles, it was demonstrated that the rate constant was reduced by a factor of 2/3 for aminacrine and 1/2 for tacrine with respect to the bare phospholipid monolayer. For a broader picture of the ITIES modified with phospholipid monolayers, the reader is referred to the review publications devoted to this particular topic. 135,147,148
1.3.3. Organic polymers at the polarized liquid–liquid interface

It was mainly the group of Cunnane which introduced and developed the concept of electropolymerization of organic polymers at the liquid–liquid interface. The first report describes the electron transfer reaction between the Fe3+/Fe2+ redox couple in the aqueous phase and the 1-methylpyrrole (Figure 1.16 A) or 1-phenylpyrrole (Figure 1.16 B) monomer (Mon) in the organic phase: 149

Fe3+(aq) + Mon0(org) → Fe2+(aq) + Mon•+(org)   (1.22)

Radical cation electrogeneration was followed by oligomer formation in the organic phase without any interface modification. It was also shown that N-phenylpyrrole present in the organic phase can facilitate a semi-reversible Ag+ transfer from the aqueous phase, which results in the formation of polymer-coated silver nanoparticles in the organic phase (no direct evidence was given apart from electron absorption measurements). 150 Mareček et al. were the first to modify the ITIES with a free-standing polymer layer. 151 The interface modification with pyrrole-containing polymers was triggered by potentiostatic electron transfer between the interfacially adsorbed monomer (Figure 1.16 C – 4-(pyrrol-1-yl)phenyloxyacetic acid) and Ce4+ dissolved in the aqueous phase. Interfacial measurements allowed the polymerization reaction to be followed. Interestingly, ion transfer voltammetry of TMA+ and PF6- with simultaneous polymer deposition showed that an increasing number of voltammetric cycles led to the formation of a compact layer, which acted as a physical barrier; charge transfer was thus first reduced and eventually strongly suppressed. A potential shift of the current peaks was observed for the facilitated transfer of K+ by DB18C6 in the presence of the polymer film at the liquid–liquid interface, whereas no additional resistance to mass transfer was encountered when the facilitated transfer of H+ by DB18C6 was probed.

The electro-modification of the liquid–liquid interface with polythiophene was the subject of works by Cunnane et al. Interfacial polymerization was controlled by the heterogeneous electron transfer between Ce4+/Ce3+ in the aqueous phase and the 2,2':5',2''-terthiophene monomer (Figure 1.16 D) in the organic phase, either with the use of an external power source 152 or by employing potential-determining ions. 153 It was found that interfacial modification with a polythiophene film requires concentrations > 1 mM (lower concentrations led to oligomer formation in the organic phase only). The mechanism of interfacial deposition 153 starts with the heterogeneous electron transfer at $\Delta_{org}^{aq}\Phi_{1/2}$ = 24 mV and formation of the radical cation of the monomer, Mon•+:

Ce4+(aq) + Mon0(org) → Ce3+(aq) + Mon•+(org)   (1.23)

Formation of Mon•+ at sufficiently high potential results in the formation of sexithiophene (6M) by coupling of the radical cations.

Table 1.5. Characteristics of the polymer–Au NP composites generated at the electrified liquid–liquid interface.
Synthesis method | [AuCl4-]org | [Monomer]aq | pH(aq) | Au NP size distribution | Ref.
CV | 0.2 mM | [Tyramine] = 1 mM | 2 | Very little film generated | 157

Table 1.6. Compilation of works dealing with silica material deposition at the neat liquid–liquid interface.
Organic phase solvent | Silica source | pH of aq. phase | Template (if used) | Morphology and characteristics
Decane | TEOS | Acidic | CTAB | Planar films; hexagonal order of pores oriented perpendicularly to the film
Hexane | TEOS, TPOS, TBOS | Acidic | CTAB | Silica fibers with hexagonally ordered mesopores oriented parallel to the fiber axis; the shape of the fibers depends on the precursor used
Heptane | TEOS | Acidic | CTACl | Planar films with a rougher aqueous side and a smoother organic side 180
Heptane | TEOS | Basic | CTAB | Planar films of MCM-41 and MCM-48 type; hexagonal and cubic pore order
Heptane | MTMS | Basic | CTAB, SDS | Planar films possessing Janus properties; rough organic side and smooth aqueous side, with methyl groups oriented towards the organic side 184
Toluene | TEOS, HDTMOS, PfOTMOS | Acidic | - | Planar films possessing Janus properties; the hydrophobicity of the organic side depends on the precursor used (TEOS, HDTMOS, PfOTMOS)
Dichloromethane | Monodisperse SiO2 spheres modified with 3APTMOS | Neutral | - | Monolayer films built from monodisperse silica spheres (~300 nm in diameter)

TEOS – tetraethoxysilane, TPOS – tetrapropoxysilane, TBOS – tetrabutoxysilane, MTMS – methyltrimethoxysilane, HDTMOS – hexadecyltrimethoxysilane, PfOTMOS – pentafluorooctyltrimethoxysilane, 3APTMOS – 3-(acryloyloxy)propyltrimethoxysilane.
Table 2.1. Full list of chemicals used in this work (aqueous and organic electrolytes).
Name | Additional information | Abbreviation | CAS registry number | Molar mass (g/mol) | Source | Function
Bis(triphenylphosphoranylidene)ammonium chloride | 97% | BTPPA+Cl- | 21050-13-5 | 574.03 | Aldrich | For organic electrolyte preparation
Potassium tetrakis(4-chlorophenyl)borate | ≥98% | K+TPBCl- | 14680-77-4 | 496.11 | Fluka | For organic electrolyte preparation

Table 2.2. Characteristics of the silicon wafers used to support the array of microITIES.
Design number | Pore radius (µm) | Spacing (µm) | Number of pores | Interfacial surface area (cm2)
1 | 5 | 20 | 2900 | 2.28×10-3
2 | 5 | 50 | 460 | 3.61×10-4
3 | 5 | 100 | 110 | 8.64×10-5
4 | 5 | 200 | 30 | 2.36×10-5
5 | 10 | 100 | 120 | 3.77×10-4

The deposits were then prepared (section 2.5.7) and assessed electrochemically. Cells 4 and 5 were used in this regard:

Cell 4: (aq) Ag | AgCl | 5 mM Na+Cl- || 10 mM BTPPA+TPBCl-, x mM PH+I- in DCE | 10 mM BTPPA+TPBCl-, 10 mM Li+Cl- | AgCl | Ag (org)

Cell 5: (aq) Ag | AgCl | 5 mM Na+Cl- || 10 mM BTPPA+TPBCl-, x mM PH+TPBCl- in DCE | 10 mM BTPPA+TPBCl-, 10 mM Li+Cl- | AgCl | Ag (org)

Cell 6: (aq) Ag | AgCl | 5 mM Na+Cl-, x mM CTA+Cl- || 8 mM BTPPA+TPBCl-, 1 mM PH+TPBCl-, 20% TEOS in DCE | 10 mM BTPPA+TPBCl-, 10 mM Li+Cl- | AgCl | Ag (org)

Silica deposition triggered by local pH changes was performed in the cell 6 configuration, with [CTA+Cl-]aq = x mM, where 0.70 mM ≤ x ≤ 2.02 mM. 193,[START_REF] Peljo | Electrochemistry at the Liquid/liquid Interface[END_REF] (This is discussed in more detail in subchapter 1.3.5.3.2.)

The study, however fundamental, requires further investigation; hence further effort should be directed towards: (i) the morphological evaluation of the electrogenerated deposits – which probably exhibited some degree of mesostructure; (ii) additional experimental work supporting the silica deposit formation mechanism and (iii) experimental examples of possible applications of mesoporous material at the liquid–liquid interface. Consequently, the in situ modification of the liquid–liquid interface with silica materials is still an unexplored and poorly studied field of science.
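As a quick consistency check on Table 2.2, the interfacial surface area of each wafer design should equal the number of pores times the area of one pore. A minimal sketch (Python):

```python
import math

# (pore radius in um, number of pores) for designs 1-5 of Table 2.2
designs = [(5, 2900), (5, 460), (5, 110), (5, 30), (10, 120)]

for i, (r_um, n) in enumerate(designs, start=1):
    r_cm = r_um * 1e-4            # 1 um = 1e-4 cm
    area = n * math.pi * r_cm**2  # total interfacial area in cm^2
    print(f"design {i}: A = {area:.2e} cm^2")
```

Running this reproduces the tabulated areas (2.28e-3, 3.61e-4, 8.64e-5, 2.36e-5 and 3.77e-4 cm2), confirming the table is internally consistent.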
Table 4.1. Summary of the bands observed in the different Raman spectra. 203 (Raman shift frequencies in cm-1, with band assignments and descriptions.)

BTPPA+:
- 1465: scissoring (bending) mode
- 1305–1295: -(CH2)n- in-phase twisting mode (medium–strong intensity in Raman)
- 3100–3000: aromatic ring C-H stretching region
- 1600–1000 (notably 1500–1141): the five aryl C-H bonds (aromatic)
- 1130–1090: P=N and P-(Ph)3 modes (weak in Raman)
- 1010–990: aromatic rings (very strong in Raman)
- 1000–700: C-H bending

TPBCl-:
- 3070–3030: C-H of substituted benzenes (one strong band)
- 1620–1585 and 1590–1565: the two quadrant stretches; mono- and disubstituted benzene components give rise to two bands; when the benzene is substituted with Cl, the bands occur at the lower limit; for a para-substituted phenyl group this band is seen in the 1130–1190 cm-1 region
- 1083: aryl-Cl in chlorobenzene (medium to strong in Raman)
- 420–390: mono- and para-substituted benzenes (very weak Raman band)

DCE:
- 3030: -CH2- antisymmetric stretching 244
- 2987: -CH2- symmetric stretching 244
- 730–710: Cl-(CH2)n-Cl stretching mode of the trans conformer 244
- 657: Cl-(CH2)n-Cl stretching mode of the gauche conformer 244

CTA+:
- 3000–2840: -CH2- and -CH3 stretching (strong and characteristic bands)
- 2962 ± 10: -CH3 antisymmetric stretching (strong intensity, characteristic frequencies)
- 2926: -CH2- antisymmetric stretching (strong in Raman)
- 2872 ± 10: -CH3 symmetric stretching (strong intensity, characteristic frequencies)
- 2853: -CH2- symmetric stretching (often overlapped with CH3 in the antisymmetric region)
- 1470–1440: -CH3 antisymmetric bending (medium intensity)
- 1470–1340: CH2 and CH3 bending (not necessarily visible in Raman spectra)

Table 4.2. Structural and electroanalytical data for the six species of different size, charge and nature employed in this work.
Ion transferred | r_h (nm) | D_i ×10-6 (cm2/s) | D_i' ×10-6 (cm2/s) | z_i | S_theor (nA/mM) | S_0 (R2) | S (R2) | LOD (µM)**, BM | LOD (µM)**, AM
TMA+ | 0.22 247 | 13.8 252 | 8.5 | +1 | 81.6 | 87.4 (>0.999) | 49.0 (0.999) | 1.91 | 0.82
TEA+ | 0.26 247 | 10.0 ± 0.4 252 | 5.0 | +1 | 56.7 | 57.4 (>0.999) | 28.8 (0.998) | 0.47 | 3.82
TBA+ | 0.48 247 | 6.4 ± 0.3 252 | 3.1 | +1 | 36.6 | 45.2 (>0.999) | 17.4 (0.997) | 0.07 | 5.32
4OBSA- | 0.30* | 8.2** | 3.8 | -1 | n/a | -46.4 (0.998) | -22.1 (0.992) | 3.88 | 2.78
PAMAM G0 | 0.70 253 | 3.4* | 2.6 | +2 250 | 87.6 | 94.2 (0.989) | 74.9 (0.986) | 2.41 | 3.82
PAMAM G1 | 0.55 53 | 4.5 53 | 0.9 | +5 250 | 129 | 91.6 (0.961) | 58.6 (0.977) | 4.90 | 3.70

Modification of the interface thus produced roughly a 40% drop in D_i'.

Decreasing the potential scan rate down to 5 and 1 mV/s (see Figure 4.6) led to diffusion layer thicknesses increasing from almost equal to (i.e., 90 µm for 5 mV/s) to much larger than (i.e., 200 µm for 1 mV/s) the membrane thickness, leading to sigmoidal-shaped current responses on the forward scan (yet with an additional, very slight residual peak at 5 mV/s). The origin of such a wave lies in the extension of the diffusion layer profiles (hemispherical diffusion zones forming at the pore entrances on the organic side of the interface once the condition S/2 > δ_nl is fulfilled; see schemes (ii) in Figure 4.5). In addition, local changes in the interfacial properties resulting from the growing silica deposit also play a role in the evolution of the CV curves, as one can expect some additional resistance to mass transport through the interface in the presence of a silica deposit. This is notably the case at a 1 mV/s scan rate (see scheme (iii) in Figure 4.5).

The shift in Gibbs energy of transfer is defined as

ΔΔG = ΔG(after modification) - ΔG(before modification)   (3.3)

where ΔG(before modification) is the Gibbs energy of transfer in the absence of silica deposits and ΔG(after modification) is the Gibbs energy of transfer in their presence. ΔG is governed by the following equation:

ΔG = -zF Δ_org^aq Φ   (3.4)

In Figure 4.21 the variation of the Gibbs energy of ion transfer is plotted as a function of the ion hydrodynamic radius. For cations of the tetraalkylammonium series, the shift in Gibbs energy of transfer became larger as the hydrodynamic radius grew. This is to be expected, as the diffusion of larger cations through the mesopores of the silica deposits is likely more impeded than that of smaller cations.
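The two quantities just discussed – the diffusion-layer thickness that sets the CV shape, and the shift in Gibbs energy of transfer extracted from the voltammograms – are both one-line computations. A minimal sketch (Python); the diffusion coefficient and potential span below are chosen only so that the output lands near the 90 and 200 µm values quoted above, and the half-wave shift is a placeholder:

```python
import numpy as np

F = 96485.0  # C/mol

def diffusion_layer_um(D_cm2_s, dE_V, v_V_s):
    """Planar-diffusion estimate delta ~ sqrt(2*D*t) with t = dE/v."""
    t = dE_V / v_V_s
    return np.sqrt(2.0 * D_cm2_s * t) * 1e4   # cm -> um

def delta_delta_G_kJ_mol(z, half_wave_shift_V):
    """Shift in Gibbs energy of transfer from the half-wave potential
    shift: ddG = -z * F * d(dphi_1/2)."""
    return -z * F * half_wave_shift_V / 1000.0

# Scan over 25 mV at two rates, with D = 8e-6 cm^2/s (assumed inputs):
for v in (0.005, 0.001):
    print(f"v = {v*1000:.0f} mV/s -> delta ~ "
          f"{diffusion_layer_um(8e-6, 0.025, v):.0f} um")

# A +30 mV half-wave shift for a monovalent cation (hypothetical):
print(f"ddG = {delta_delta_G_kJ_mol(+1, 0.030):.2f} kJ/mol")
```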
The Gibbs energy of transfer for the anion 4OBSA- was also higher after modification than before, indicating that the transfer had become even more difficult. The disappearance of the primary amine group of ADPM after the methylation reaction was also confirmed by UV-Vis spectroscopy.

Appendix I. Nanopipette preparation and silanization

All pipettes were pulled with a P-2000 micropipette puller manufactured by Sutter Instrument Company (Figure 1). The protocol allows the preparation of nanopipettes with diameters ranging from 150 to 600 nm. Silanization of the inner walls prevents the ingress of the aqueous phase inside the nanopipette. The pulling parameters were:
 Heat: 810–870 (the energy supplied to the glass; it controls the nanopore diameter – higher energy gives a longer tip and a smaller pore; to obtain visible changes, the factor should be changed by at least 10);
 Filament: 3 (regulates the scanning pattern of the laser spot and controls the heat distribution within the scanning length);
 Velocity: 45 (specifies the velocity at which the glass carriage must be moving before the hard pull);
 Delay: 150 (controls the hard pull);
 Pull: 90 (the force applied when pulling). 17

Appendix II. Protocol of preparation of 3-azidopropyltrimethoxysilane

The protocol was developed in the LCPME laboratory. [START_REF] Schlossman | Liquid-Liquid Interfaces: Studied by X-Ray and Neutron Scattering[END_REF] The overall reaction is the substitution of the chloride of 3-chloropropyltrimethoxysilane by azide, in the presence of TBA+Br-:

Cl(CH2)3Si(OCH3)3 + NaN3 → N3(CH2)3Si(OCH3)3 + NaCl

The preparation can be divided into a few stages:
1. 100 ml of acetonitrile was placed in a round-bottom two-neck flask. The top neck was closed with a stopper and the side neck with a turn-over flange stopper. N2 was passed through the solvent for 30 min; to do so, two needles were inserted through the turn-over flange stopper – the first, the N2 inlet, immersed in the solution, and the second serving as the gas outlet.
2. After 30 minutes, 2.16 g of sodium azide (NaN3), 1.29 g of tetrabutylammonium bromide (TBA+Br-) and 4 g of 3-chloropropyltrimethoxysilane were added to the vigorously stirred solvent in the round-bottom flask.
3. The reaction mixture was stirred at 80°C (under reflux) for 48 h in the set-up depicted in the accompanying figure.
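As a quick check of the reagent ratios in Appendix II, the masses can be converted to moles. The molar masses below are standard values; identifying NaN3 as the excess reagent and TBA+Br- as a sub-stoichiometric (catalytic) additive follows from this arithmetic, not from an explicit statement in the protocol:

```python
# Reagent stoichiometry check for the azidation protocol (Appendix II).
masses_g = {"NaN3": 2.16, "TBABr": 1.29, "ClPTMS": 4.00}
molar_masses = {"NaN3": 65.01, "TBABr": 322.37, "ClPTMS": 198.72}

moles = {k: masses_g[k] / molar_masses[k] for k in masses_g}
for k, n in moles.items():
    print(f"{k}: {n * 1000:.1f} mmol")

# Equivalents relative to the silane substrate:
print(f"NaN3 / ClPTMS  = {moles['NaN3'] / moles['ClPTMS']:.2f} eq")
print(f"TBABr / ClPTMS = {moles['TBABr'] / moles['ClPTMS']:.2f} eq")
```

This gives roughly 1.65 eq of azide per silane and 0.20 eq of TBA+Br-, consistent with the excess and catalytic roles assumed above.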
313,652
[ "781786" ]
[ "411849" ]
01752219
en
[ "chim" ]
2024/03/05 22:32:07
2015
https://hal.univ-lorraine.fr/tel-01752219/file/DDOC_T_2015_0336_JUAN.pdf
THÈSE – Pierre-Alexandre Juan. Committee: Stéphane Berbenni, Teresa María Pérez Prado, Edgar Rauch, Dr. Delannay, Dr. Tóth.

Micromechanical and statistical studies of twinning in hexagonal metals: application to magnesium and zirconium (Études micromécaniques et statistiques du maclage dans les métaux hexagonaux : application au magnésium et zirconium)

Keywords: twinning, magnesium, zirconium, twin-twin junctions, micromechanics, EBSD

First of all, I would like to express my most sincere gratitude to my thesis advisors, Dr. Berbenni and Dr. Capolungo, for all their advice and support. I consider myself lucky to have had the opportunity to work under their supervision throughout these years.

List of figures

1.1 (a) Comparison of tension and compression stress-strain curves of rolled AZ31B Mg alloy [START_REF] Kurukuri | Rate sensitivity and tension-compression asymmetry in az31b magnesium alloy sheet[END_REF] and (b) comparison of mechanical responses of clock-rolled high-purity Zr for various loading directions and temperatures [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively.
1.2 EBSD scan example of a clock-rolled Zr specimen loaded in compression along the through-thickness direction. These scans were processed using the software presented in Chapter 3.
(Chapter 2 figures.) The two inclusions are embedded in an infinite elastic medium, with elastic modulus C0, containing an overall uniform plastic strain, Ep. The second-order tensor Ed represents the imposed macroscopic strain. — Representation of the local coordinate system (e'1, e'2, e'3) associated with {10-12} tensile twinning; the reference coordinate system (e1, e2, e3) is associated with the crystal structure, together with the crystallographic coordinate system (a1, a2, a3). — Solid lines and dashed lines refer to the present model and to Nemat-Nasser and Hori's double-inclusion scheme for homogeneous elasticity, respectively; the ellipsoid aspect ratio for the parent, R_parent, is set to 3.
2.7 Schematic representation of the elasto-plastic problem with twinning corresponding respectively to the uncoupled [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] (a) and coupled (present) (b) formulations.
The dashed line signifies that the inclusions V_t and V_g-t, with tangent moduli L_t and L_g-t, are embedded in an equivalent homogeneous medium with tangent modulus L_eff.
2.8 Computational flowchart of the DI-EPSC scheme.
(Chapter 3 figures.) Interactions between primary electrons and atoms at or near the sample surface [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].
3.5 Iron Kikuchi patterns [START_REF] Boersch | About bands in electron diffraction[END_REF][START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].
3.6 Image of a Kikuchi pattern obtained using a phosphor screen and a TV camera, illustrating the method developed by Venables [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] to locate the pattern center. The elliptical black shapes correspond to the projected shadows of the three spheres placed at the surface of the sample [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].
3.16 Example of neighboring relationships encountered in EBSD data. On the left, when measurement points form a square grid, the pixel represented by the black disc has 4 neighbors represented by the white circles. On the right, when measurement points form a hexagonal grid, each measurement has 6 neighbors.
3.17 Graph grouping measurement points of consistent orientation into connected parts. The colored circles correspond to EBSD measurement points, with the Euler angles mapped onto the RGB cube, and the white lines represent edges whose thicknesses are proportional to the weight w. Consequently, twins appear clearly as areas delineated by a black border where the edge weight becomes negligible.
3.18 Graph grouping measurement points of consistent orientation into connected parts, with the twinning mode added. Green and red edges linking border points, displayed in brown, indicate tensile and compressive twinning relations, respectively.
3.19 Three sample cases of twinning: on the left, a single twin T in the middle of its parent P; in the middle, a twin going across its parent and separating it into two consistent parts P1 and P2; on the right, a grain appearing as two consistent parts next to each other.
3.20 Automatic output for a Zr EBSD map. The sample was cut from a high-purity clock-rolled Zr plate and loaded in compression along one of the in-plane directions up to 5% strain [START_REF] Juan | A statistical analysis of the influence of microstructure and twin-twin junctions on nucleation and growth of twins in zr[END_REF]. Yellow borders mark the grain joints, brown borders the twin joints. Green edges represent the tensile 1 relation, magenta tensile 2, red compressive (one boundary satisfies a twinning misorientation but is identified as irrelevant to the twinning process).
3.23 Example of secondary and ternary twinning observed in an EBSD map of a high-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain. This is shown using three different visualization modes (see Appendix A): raw mode (left), twinning editor mode (middle) and twinning statistics mode (right).
The parent grain is surrounded in yellow, first-order twins appear in cyan, secondary twins in blue and ternary or higher-order twins in red.
4.3 Macroscopic stress-strain curves of specimens loaded in compression along the rolling direction followed by a second compression along the transverse direction.
4.4 XRD {0001}, {2-1-10} and {10-10} pole figures of specimens before compression (a) and after compression along the transverse direction up to 4% strain (b), along the rolling direction up to 1.8% strain (c), and along the rolling direction up to 1.8% strain and then along the transverse direction up to 1.3% strain (d).
(Chapter 5 figure.) The second-order tensors Ep, εp1 and εp2 denote the macroscopic plastic strain imposed on the medium and the plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and the primary and secondary tensile twins are represented by the volumes V-VA, VA-VB and VB, respectively. The second-order tensors εpa and εpb correspond to eigenstrains, modeling the twinning shears induced by primary and secondary twinning, prescribed in the inclusions VA-VB and VB, respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C.

Résumé

As a preamble to this summary, I would like to stress that the work reported in this manuscript is the fruit of various collaborations that took place at different moments of my doctorate, both in France and in the United States. Concerning the micromechanical models introduced in Chapter II, I am, with the help of my thesis advisors, Dr. Berbenni and Dr. Capolungo, at the origin of their entire development and programming. Dr. Tomé kindly shared with us the latest version of the EPSC code, developed at Los Alamos National Laboratory, from which we implemented the new localization relations corresponding to the DI-EPSC scheme. Their implementation required a deep adaptation of the initial code. The discussions with Dr. Tomé and Dr. Barnett that followed the development of this new double-inclusion micromechanical approach enriched the study. Chapter III is mainly dedicated to the description of a new visualization and analysis software for EBSD maps of hexagonal materials, capable of recognizing any type of twin and of extracting a myriad of microstructural data, recorded in an SQL database. The development of the tool is to be credited to Dr. Pradalier, professor of Computer Science at Georgia Tech Lorraine, who originated the choice of graph theory for the construction of the tool. For my part, I took charge of verifying the results obtained with it and of proposing the adaptations and improvements necessary to make the software more ergonomic and more intelligible to the members of the mechanics of materials community. I also wrote a Fortran code that generated all the results of the statistical study presented in Chapter IV, carried out from the Zr EBSD maps that Dr. McCabe kindly shared with us. The double study on low-Schmid-factor twins and on double extension twins in the AZ31 magnesium alloy was conducted with the help of Dr. Shi (post-doctoral researcher at LEM3 from 2013 to 2015 within the framework of the ANR Magtwin project).
My contribution consisted in carrying out the various monotonic and sequential compression tests allowing the observation of these twins with their particular characteristics, and in developing the theoretical part of the double-inclusion micromechanical model in heterogeneous anisotropic elasticity with eigenstrains. The work related in this manuscript thus benefited from the interaction with all these researchers, whom I warmly thank for having allowed me to broaden my spectrum of knowledge in this way.

Polycrystalline materials with a hexagonal close-packed structure have, since the 1960s, been the subject of much interest. The enthusiasm that metals such as magnesium, zirconium, rhenium, titanium, etc., have aroused and still arouse is explained by the breadth and extreme variety of their applications. Particular attention is paid, within this thesis, to magnesium and zirconium. The properties of the latter, namely an excellent resistance to corrosion, a high penetrability to slow neutrons, the conservation of its properties at high temperature, as well as a good ductility, make zirconium, and more particularly its alloyed forms, a metal widely used as fuel cladding and piping material in generation III and IV nuclear reactors. Magnesium and its alloys have, for their part, strongly interested the automotive industry which, in a concern to lighten cars, is always in search of new, lighter materials with high specific Young's moduli. Thus, for decades, cast magnesium parts have been used as structural components. On the other hand, forming magnesium, like the other hexagonal metals, implies deforming the material irreversibly. Plastic deformation, whose key characteristics are the yield strength, hardening and ductility, is strongly anisotropic for magnesium, zirconium and their respective alloys. Figure 1 shows the stress-strain curves obtained experimentally from specimens of rolled AZ31 Mg alloy and rolled Zr loaded in tension and compression along different directions and at different temperatures.

Figure 1 – (a) Comparison of the tension and compression stress-strain curves of rolled Mg AZ31B [START_REF] Kurukuri | Rate sensitivity and tension-compression asymmetry in az31b magnesium alloy sheet[END_REF] and (b) comparison of the mechanical response of rolled pure Zr for different temperatures and loading paths [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression appear in the legend as "tens" and "comp". The abbreviations "TT" and "IP" stand for "through-thickness compression" and "in-plane compression", respectively.

It appears clearly that temperature and loading direction both have a very significant effect on the mechanical response of the material, making the prediction of the latter much more complex. The difficulties encountered in forming metal sheets made of hexagonal materials appreciably limit their potential applications. Strain accommodation and stress relaxation within hexagonal materials result from the activation, simultaneous or not, of slip and twinning modes. Slip is characterized by the motion of dislocations and by the interactions between those belonging to the different slip systems and slip modes.
Twinning, by contrast, is characterized by the nucleation, within grains, of sub-volumes, called twins, whose lattice reorientation can be described by means of a mirror symmetry with respect to an invariant plane, classically called the twinning plane. Understanding the plastic deformation of hexagonal metals is therefore particularly arduous in view of the very great variety of modes. The four twinning modes observed in zirconium are (twinning plane, twinning direction and parent-twin misorientation angle δ):

Mode | Plane | Direction | δ (°)
T1 | {10-12} | <-1011> | 85.2
T2 | {11-21} | <-1-126> | 34.9
C1 | {11-22} | <11-2-3> | 64.2
C2 | {10-11} | <10-1-2> | 57.1

In general, each active deformation mode impacts not only the mechanical response of the material but also the evolution of its texture. The consequences of such a phenomenon can be observed on the macroscopic stress-strain curves presented in Figure 1. Thus, the curves corresponding to the compression of Mg AZ31 (Figure 1a) along the transverse and normal directions are similar to many stress-strain curves observed with other metals. However, the sigmoidal curve obtained in the case of a compression along the rolling direction is much more singular. The same observations apply to zirconium (Figure 1b). Figure 1b reveals, moreover, the very pronounced influence of temperature on the hardening of hexagonal metals, by presenting side by side the macroscopic stress-strain curves obtained after compression of the specimen along one of the in-plane directions at room and cryogenic temperatures. Indeed, while the curve representing the compression at room temperature is "typical" of slip-dominated deformation, the curve obtained after compression at cryogenic temperature is clearly sigmoidal. Such variations can only be explained by the activation of, and the competition between, the various slip and twinning modes.
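For later reference (Schmid-factor arguments recur throughout the statistical analyses summarized below), the resolved-shear geometry of the four modes above reduces to a few lines of vector algebra. A minimal sketch (Python); the hexagonal-to-Cartesian conversions are standard, the variant list is limited to one representative per mode, and c/a = 1.593 for Zr is an assumed input:

```python
import numpy as np

CA = 1.593  # c/a ratio of Zr (assumed here)

def plane_normal(h, k, l, ca=CA):
    """Cartesian unit normal of an (h k i l) plane, with i = -(h+k)."""
    n = np.array([h, (h + 2.0 * k) / np.sqrt(3.0), l / ca])
    return n / np.linalg.norm(n)

def direction(u, v, t, w, ca=CA):
    """Cartesian unit vector of a [u v t w] direction, with t = -(u+v)."""
    d = np.array([1.5 * u, np.sqrt(3.0) / 2.0 * (u + 2.0 * v), w * ca])
    return d / np.linalg.norm(d)

# One representative variant per twinning mode (plane, shear direction).
modes = {
    "T1 {10-12}<-1011>":  ((1, 0, -1, 2), (-1, 0, 1, 1)),
    "T2 {11-21}<-1-126>": ((1, 1, -2, 1), (-1, -1, 2, 6)),
    "C1 {11-22}<11-2-3>": ((1, 1, -2, 2), (1, 1, -2, -3)),
    "C2 {10-11}<10-1-2>": ((1, 0, -1, 1), (1, 0, -1, -2)),
}

load = np.array([0.0, 0.0, 1.0])  # uniaxial load along the c-axis

for name, ((h, k, i, l), (u, v, t, w)) in modes.items():
    n, d = plane_normal(h, k, l), direction(u, v, t, w)
    m = np.dot(n, load) * np.dot(d, load)  # signed Schmid factor
    print(f"{name}: m = {m:+.3f}")
```

For c-axis tension this yields a near-maximal positive Schmid factor for T1 and a negative one for the compression modes, as expected from the tension/compression twin classification.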
Ce premier modèle, appliqué dans un premier temps au Mg pur puis à l'alliage de Mg AZ31B, fut initialement développé pour étudier l'évolution des contraintes internes dans les phases parent et macle lors de maclage primaire et secondaire. Bien que limité au cas d'élasticité anisotrope hétérogène, les résultats suggèrent que les niveaux de contraintes internes moyennées au sein des macles sont suffisants pour induire de la plasticité dans ces dernières. Par ailleurs, en raison de la forte différence d'orientation existant entre le parent et la macle, l'élasticité hétérogène semble être ce qui impacte le plus les contraintes internes, tant au niveau de leurs valeurs que de leurs signes. L'étude révèle également, dans le cas du maclage primaire, la forte dépendance de l'état de contrainte de la macle vis à vis de la forme de la phase parent. Fort de ces résultats, un second modèle fut développé, appelé modèle élasto-plastique autocohérent à double inclusion, consistant en une extension du schéma élasto-statique précédemment introduit à l'élasto-plasticité ainsi qu'aux milieux polycristallins. A l'instar du premier modèle, le résultat original de Tanaka-Mori est utilisé pour dériver les nouvelles relations de concentration liant les champs de déformation moyens des macles et des grains maclés au champ de déformation macroscopique. Tous les grains, maclés ou non, sont considérés comme faisant partie intégrante du milieu homogène équivalent dont les propriétés mécaniques effectives sont calculées à l'aide d'une procédure auto-cohérente itérative implicite et non-linéaire, appélée modèle DI-EPSC. Contrairement aux modèles élasto-plastiques existants qui assimilent les phases parent et macle à des inclusions ellipsoïdales indépendantes, les nouvelles relations de concentration tiennent compte du couplage direct existant entre ces dernières. La comparaison des résultats obtenus avec le nouveau modèle DI-EPSC avec ceux provenant du modèle classique EPSC ainsi que ceux issus de l'expérience aboutit à trois résultats significatifs. Le premier consiste en la conclusion qu'une reproduction exacte des effets latents induits par le maclage permet de prédire l'influence de la plasticité sur le durcissement et sur le taux de durcissement se produisant dans les macles. Le second réside en l'observation que les nouvelles relations de concentration génèrent des distributions de déformation en cisaillement plus éparses. Enfin, le troisième résultat est la mise en exergue de l'influence de l'état de contrainte initiale de la macle, lors de sa nucléation, sur la réponse mécanique du matériau. Par ailleurs, il apparut que la majeure partie des instabilités numériques rencontrées, au cours de cette étude, provenait du choix de la matrice de durcissement. En effet, bien que cette dernière fût positive semi-définie, le schéma auto-cohérent se serait avéré bien plus stable si elle avait été définie strictement positive. Cependant, une telle hypothèse va à l'encontre d'une récente étude qui montra que certaines valeurs propres associées à la matrice de durcissement du Mg pur même d'identifier n'importe quel type de macle à condition que l'utilisateur renseigne dans le logiciel la valeur du ratio c/a ainsi que les désorientations théoriques correspondant aux différents systèmes de maclage potentiellement actifs. Pour l'analyse d'autres structures cristallographiques, l'utilisateur devra, en outre, modifier les paramètres de maille ainsi que les quaternions de symétrie. 
Les deux premières études statistiques furent effectuées à partir de cartographies EBSD de Mg AZ31 laminé dans le but d'expliquer l'activation de macles d'extension {10 12} à faible facteur de Schmid et la nucléation de double macles d'extension {10 12}-{10 12}. La première étude révéla que les macles d'extension {10 12} ayant un faible facteur de Schmid ne représentent que 6,8% de toutes les macles observées. S'appuyant uniquement sur des lois constitutives déterministes, les modèles polycristallins tels qu'EPSC ou EVPSC (modèle élasto-visco-plastique) ne peuvent être capables de justifier de l'activation de telles macles. En raison de leur faible apparition, l'influence de ces dernières sur la réponse mécanique du matériau demeure a fortiori très limitée. La deuxième étude montra, quant à elle, que les double macles d'extension {10 12}-{10 12} obéissent, en général, à la loi de Schmid. Elle révéla également que considérer les variations d'énergie interne induites par l'apparition de telles macles à partir d'un modèle micromécanique à double inclusion, même simplifié, permet de prédire de façon précise quelles sont les variantes susceptibles d'être activées. L'étude établit aussi que de telles macles restent extrêmement rares et ont un effet négligeable sur les propriétés mécaniques du matériau. En parallèle de ces deux études, une troisième fut menée, cette fois, sur du Zr pur afin de discuter des influences respectives (i) des jonctions macle-macle entre macles de première génération, (ii) de la taille de grain, (iii) de l'orientation cristallographique sur la nucléation et la croissance de macle. Les échantillons furent baignés dans l'azote liquide et chargés en compression selon l'une des directions du plan ainsi que le long de la normale au plan dans le but de favoriser, respectivement, la formation de macles d'extension T 1 et de compression C 1 . Les abréviations T 1 , T 2 et C 1 réfèrent aux macles de type {10 12}, {11 21} et {11 22}. Cette étude est la première à établir la pertinence statistique des jonctions macle-macle. Six types de jonctions macle-macle, à savoir Résumé (a) T 1 -T 1 type 1 (b) T 1 -T 2 type 1 (c) T 1 -C 1 type 1 (d) T 2 -T 2 type 1 (e) T 2 -C 1 type 1 (f) C 1 -C 1 type 1 T 1 -T 1 , T 1 -T 2 , T 1 -C 1 , T 2 -T 2 , T 2 -C 1 et C 1 -C 1 , furent observés. Les jonctions macle-macle se produisant entre macles de même mode, et plus particulièrement entre celles appartenant aux deux modes les plus actifs, sont très fréquentes et ne peuvent donc pas être négligées. Selon le trajet de chargement considéré, ces dernières peuvent représenter plus de la moitié de toutes les jonctions observées. La comparaison des épaisseurs de macles apparues dans des grains ne contenant qu'une seule macle avec celles des macles appartenant à des grains comprenant plusieurs macles révèle que les jonctions macle-macle entravent la croissance des macles. En outre, seules les macles appartenant au mode de maclage prédominant semble être fortement affectées par l'orientation cristallographique du grain ainsi que par la direction de chargement. Ces différences peuvent probablement être expliquées par la présence de niveaux de contraintes très localisés permettant la nucléation de n'importe quel type de macle. En accord avec de précédentes études, il est apparu également que la probabilité de nucléation de macle et le nombre moyen de macles par grain maclé augmentent avec la taille de grain. 
En termes de perspectives, la continuation logique de cette thèse consiste en 1) une intégration plus aboutie des capacités de post-traitement du logiciel EBSD et en 2) le développement et l'implémentation dans le DI-EPSC de modèles stochastiques de nucléation de macle qui prendraient en considération les données statistiques des récentes études. Par ailleurs, en multipliant les mesures EBSD sur plus d'échantillons de Mg et de Zr, chargés monotoniquement et cycliquement selon des directions variées à différentes températures et vitesses de chargement, il serait alors possible pour la modélisation micromécanique de prendre en considération une quantité précieuse de données statistiques. Chapitre 1 Introduction and state-of-the-art Polycrystalline materials with hexagonal close-packed crystal structure -hereafter h.c.p.-have been the subject of worldwide interest dating back to the late 1960s. Such is motivated by the broad range of applications of h.c.p. metals such as magnesium, zirconium, titanium, zinc, cadmium, beryllium, rhenium, cobalt, etc. Focus is placed here on the cases of pure zirconium and magnesium and some of their alloys (with emphasis on AZ31B Mg alloy). Zirconium has an ideal thermal neutron scattering cross-section, good ductility and resistance to corrosion and is therefore extensively used, in an alloyed form (e.g. Zircalloy), as cladding and piping material in generation 3 and 4 nuclear reactors [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF][START_REF] Kreuer | Proton conductivity : Materials and applications[END_REF][START_REF] Maglic | Calorimetric and transport properties of zircalloy-2, zircalloy-4 and inconel-625[END_REF][START_REF] Kreher | Residual stresses in polycrystals influenced by grain shape and texture[END_REF][START_REF] Carrillo | Nuclear characterization of zinalco and zircalloy-4[END_REF]. Magnesium and its alloys on the other hand, have great potential for lightweithing applications [START_REF] Mordike | Magnesium. properties -applications -potential[END_REF] -particularly for the automotive industry-as they exhibit high specific Young's modulus. As a result, cast magnesium alloys have, over the past decade, been increasingly used as structural components. The use of sheets of h.c.p. metals for part production necessarily relies on forming operations in which the metal will be necessarily deformed irreversibly. Plastic deformation -with key characteristics such as strength, hardening, ductility-is highly anisotropic both in pure Mg, Zr and most of their alloys. Figure 1.1 depicts experimentally measured stress-strain curves of rolled AZ31B Mg and clock-rolled high-purity Zr loaded in tension and compression along different directions and, in the case of Zr, at different temperatures. It is clearly shown here that a change in temperature or in loading direction can have a significant effect on the materials yield stress, strain hardening rate and ductility. As a result, sheet forming of h.c.p. materials remains a rather delicate task that largely limits their use in sheet forms. [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively. Motivation and Objectives In h.c.p. 
In h.c.p. metals [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF][START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF][START_REF] Crocker | The crystallography of deformation twinning in alpha-uranium[END_REF][START_REF] Taylor | Plastic strain in metals[END_REF], strain accommodation and stress relaxation may result from the activation of either slip, twinning or both. Slip is characterized by the motion and interaction of dislocations belonging to different slip systems and possibly different slip modes. Twinning, on the other hand, is a sensibly different irreversible deformation mode characterized by the nucleation of sub-volumes within grains, i.e. twin domains, exhibiting a mirror-symmetry reorientation of the lattice with respect to a specific plane, the twinning plane. Understanding plastic deformation in h.c.p. metals is particularly complex as a result of the large variety of possibly active deformation modes. For example, three active slip modes and two twinning modes can be found in pure magnesium, while four twinning modes and three slip modes have been experimentally observed in pure zirconium. In addition, plasticity can also be activated within twin domains by means of secondary slip or of double twinning. In general, the relative contribution and activity of each deformation mode will have an effect both on the mechanical response and on the texture change in the material. The consequence of this can be appreciated by observing the macroscopic stress-strain curves displayed in Figure 1.1. For example, the macroscopic stress-strain curves of a rolled AZ31B Mg alloy loaded along the transverse and normal directions, shown in Figure 1.1a, are similar to many other metal stress-strain curves. However, the sigmoidal macroscopic stress-strain curve corresponding to compression along the rolling direction is significantly different. The same observation applies to Zr (Figure 1.1b). Figure 1.1b also shows that temperature may strongly influence the hardening response of h.c.p. metals. It is observed that, when the material is compressed along one of the in-plane directions at room and liquid-nitrogen temperatures, the macroscopic stress-strain curve obtained is in one case typical of slip-dominated deformation while in the other case the curve is clearly sigmoidal. Such variations in the mechanical response can only result from the activation and the competition of many and diverse deformation mechanisms.

Figure 1.2 — EBSD scan example of a clock-rolled Zr specimen loaded in compression along the through-thickness direction. These scans were processed using the software presented in Chapter 3.
In parallel to these three key fundamental problems (slip/slip, slip/twin and twin/twin interactions), and acknowledging that slip/twin and twin/twin interactions will necessarily be largely driven by the internal stress state within twin domains -for example, the activation of secondary slip clearly depends on the internal stress within the twin- one must also gain an understanding of the nucleation and growth mechanisms associated with twinning [START_REF] Capolungo | Nucleation and stability of twins in hcp metals[END_REF][START_REF] Wang | 1)over-bar0 1 2) Twinning nucleation mechanisms in hexagonal-close-packed crystals[END_REF][START_REF] Wang | Nucleation of a twin in hexagonal close-packed crystals[END_REF]. These topics are briefly discussed in what follows so as to motivate the work presented in the upcoming chapters.

Twinning : key experimental results

Twin domains form via the nucleation and glide of twinning partial dislocations on the twinning plane [START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Yoo | Nonbasal deformation modes of HCP metals and alloys : Role of dislocation source and mobility[END_REF]. The shear induced by the propagation of those dislocations reorients the lattice in such a way that the parent and twin lattices are symmetric with respect to either the twinning plane or the plane normal to both the twinning direction and the twinning plane [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF][START_REF] Hall | Twinning[END_REF][START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF][START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF]. Following nucleation, twin thickening occurs by the subsequent nucleation and propagation of twinning dislocations along planes parallel and adjacent to the faulted plane. Because the motion and nucleation of twinning partial dislocations are sign sensitive, most constitutive models for h.c.p. materials [START_REF] Salem | Strain hardening due to deformation twinning in titanium : Constitutive relations and crystal-plasticity modeling[END_REF][START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF][START_REF] Abdolvand | Incorporation of twinning into a crystal plasticity finite element model : Evolution of lattice strains and texture in zircaloy-2[END_REF] rely on the positiveness and the magnitude of the resolved shear stress (RSS) on the twinning plane in the shear direction to determine when a twin nucleates and grows. However, recent experimental studies [START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF][START_REF] Khosravani | Twinning in magnesium alloy {AZ31B} under different strain paths at moderately elevated temperatures[END_REF] based on EBSD measurements clearly show that twins nucleate at grain boundaries, crack tips, ledges and other interface defects where stresses are highly localized. Twin nucleation can thus be considered a random event, as it strongly depends on the presence of defects within the material. These studies also suggest that the stress levels required for the nucleation and for the growth of twins are different.
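To make the sign sensitivity just mentioned concrete, the following minimal sketch evaluates the resolved shear stress τ = b·σ·n for a single, hypothetical twin-variant geometry under uniaxial tension (the 43° inclination and the 100 MPa load are purely illustrative assumptions); only the shear sense yielding τ > 0 can contribute to twin nucleation or growth:

```python
import numpy as np

def rss(sigma, n_hat, b_hat):
    """Resolved shear stress on the plane of unit normal n_hat,
    in the unit shear direction b_hat."""
    return b_hat @ sigma @ n_hat

sigma = np.diag([0.0, 0.0, 100.0])   # 100 MPa tension along z (hypothetical)

theta = np.radians(43.0)             # illustrative plane inclination
n_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
b_hat = np.array([-np.cos(theta), 0.0, np.sin(theta)])  # one shear sense

print(rss(sigma, n_hat, b_hat))    # ~ +49.9 MPa: twinning sense, may activate
print(rss(sigma, n_hat, -b_hat))   # ~ -49.9 MPa: anti-twinning sense, cannot
```

Unlike slip, for which both signs of τ drive glide, the negative value here would simply leave the variant inactive; stochastic nucleation criteria replace, or supplement, this deterministic threshold.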
Consequently, knowledge of the internal stresses in the parent phases is absolutely necessary for predicting twinning activity. Using a three-dimensional X-ray diffraction technique, Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] performed in-situ measurements of the evolution of the full average strain and stress tensors within both the twin and parent phases as the twin nucleated and grew. The authors revealed that the two nucleated twin variants were those with the highest average resolved shear stresses projected on the twinning plane along the twinning direction. Because the experimental study was limited to a single grain, the results on twin nucleation cannot be considered statistically representative, but they still prove that internal stresses and twinning incidence are related. They also showed that the stress magnitudes in the parent and twin phases are drastically different, while one could intuitively expect that, at least at the onset of twinning, both twin and parent phases have the same stress state. The initial stress state within the twin domain is particularly relevant because it influences the hardening rate of the twin and, hence, the activation of secondary slip in the twin phase. In addition, at large strains, the twinned volume becomes significant compared to the total volume, and stresses in the twin phases directly impact the macroscopic stress levels of the material. Until now, polycrystalline models based on deterministic approaches to twin nucleation and on homogenization techniques were capable of accounting for inter-granular interactions [37,38,[START_REF] Budiansky | On the elastic moduli of some heterogeneous materials[END_REF][START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF][START_REF] Hashin | A variational approach to the theory of the elastic behaviour of multiphase materials[END_REF][START_REF] Mori | Average stress in matrix and average elastic energy of materials with misfitting inclusions[END_REF][START_REF] Berveiller | An extension of the self-consistent scheme to plastically-flowing polycrystals[END_REF][START_REF] Berveiller | The problem of two plastic and heterogeneous inclusions in an anisotropic medium[END_REF][START_REF] Lipinski | Elastoplasticity of micro-inhomogeneous metals at large strains[END_REF][START_REF] Molinari | A self consistent approach of the large deformation polycrystal viscoplasticity[END_REF][START_REF] Sabar | A new class of micro-macro models for elastic-viscoplastic heterogeneous materials[END_REF][START_REF] Wang | A finite strain elastic-viscoplastic selfconsistent model for polycrystalline materials[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], but were incapable of quantifying the influence of parent-twin interactions without fitting model parameters or adding an artificial stress correction term [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF].

Slip/slip interactions : towards a comprehensive understanding

While this is not the main subject of the present thesis, slip system interactions are briefly discussed here for the sake of completeness.
Taylor [START_REF] Taylor | Plastic strain in metals[END_REF] elegantly proved that any polycrystalline material is able to undergo homogeneous deformation without producing cracks if five independent slip systems can be activated. The potentially active slip systems in h.c.p. metals are basal, prismatic and first- or second-order pyramidal [START_REF] Groves | Independent slip systems in crystals[END_REF][START_REF] Yoo | Slip, twinning and fracture in hexagonal close-packed metals[END_REF]. More recent works [START_REF] Kocks | Importance of twinning for ductility of CPH polycrystals[END_REF][START_REF] Hutchinson | Creep and plasticity of hexagonal polycrystals as related to single-crystal slip[END_REF] showed that five independent slip systems are not always necessary. For example, Hutchinson [START_REF] Hutchinson | Creep and plasticity of hexagonal polycrystals as related to single-crystal slip[END_REF] observed that inelastic deformation can result from the activation of only four linearly independent slip systems, without pyramidal slip (these independence counts are checked numerically in the short sketch below). Regardless, during slip-dominated plasticity, all slip systems necessarily interact. From the constitutive modeling standpoint, all such interactions are usually grouped into self- and latent-hardening matrices, which are expected to capture the collective behavior of dislocations as they interact with one another. Ideally, one should thus accurately render each unit process associated with each dislocation interaction event that can potentially lead to junction formation, unzipping, repulsion and crossed states. Discrete dislocation dynamics (DDD) can largely provide answers to those questions. In DDD, dislocation lines are discretized and dislocation motion is predicted by solving an overdamped equation of motion for each dislocation segment (or node, depending on the numerical strategy chosen). Junction formation can then be predicted by enforcing conservation of the Burgers vector as well as maximum dissipation. Recently, latent hardening coefficients resulting from slip/slip interactions were computed from DDD by Bertin et al. [START_REF] Bertin | On the strength of dislocation interactions and their effect on latent hardening in pure magnesium[END_REF] for pure magnesium. The authors showed that basal/pyramidal slip interactions are particularly strong. This can be of prime importance, as it suggests that such interactions could be detrimental to the material's ductility. Note, however, that in the work cited above some key fundamental aspects remain to be treated, as the nature and specifics associated with <c+a> dislocations on the second-order pyramidal plane are subject to debate [START_REF] Agnew | Connections between the basal {I1} "growth" fault and <c+a> dislocations[END_REF].

Slip/twin interactions : recent progress

Slip/twin interactions occur at the twin interface, as a slip dislocation intersects and possibly dissociates onto the twinning interface. The nature of the reaction between slip and twinning dislocations depends on the geometry of the reaction at stake (i.e. incoming dislocation Burgers vector, character, etc.). In his review of deformation modes in h.c.p. metals [START_REF] Yoo | Slip, twinning and fracture in hexagonal close-packed metals[END_REF], Yoo described in great detail the specific crystallographic reactions that can result from slip/twin interactions. While of great interest, the study was necessarily limited to geometrical and elementary energetic considerations.
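Returning briefly to the independence counts quoted at the beginning of the slip/slip discussion above, they can be checked numerically by assembling the symmetrized Schmid tensors of candidate systems and computing the rank of the set. The sketch below is a minimal check under idealized geometry (the Mg axial ratio and a single, hand-built (11-22)[11-2-3] pyramidal variant are assumptions): basal plus prismatic <a> slip yields rank 4, and adding one <c+a> system yields rank 5.

```python
import numpy as np

gamma = 1.624  # Mg axial ratio (assumed)
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
c = np.array([0.0, 0.0, gamma])

def schmid(b, n):
    """Symmetrized Schmid tensor of slip direction b on plane normal n."""
    b = b / np.linalg.norm(b)
    n = n / np.linalg.norm(n)
    return 0.5 * (np.outer(b, n) + np.outer(n, b))

def n_independent(systems):
    return np.linalg.matrix_rank(
        np.array([schmid(b, n).ravel() for b, n in systems]))

a_dirs = [a1, a2, -(a1 + a2)]                 # the three <a> directions
basal = [(b, c) for b in a_dirs]              # basal <a> slip
prism = [(b, np.cross(c, b)) for b in a_dirs] # prismatic <a> slip
print(n_independent(basal + prism))           # 4: no c-axis strain possible

# one (11-22)[11-2-3] pyramidal <c+a> system (normal from reciprocal lattice)
pyr = [((a1 + a2) - c, np.array([1.0, np.sqrt(3.0), 2.0 / gamma]))]
print(n_independent(basal + prism + pyr))     # 5
```

Symmetric traceless Schmid tensors span at most a five-dimensional space, so rank 5 is the maximum; the component missing from <a> slip alone is precisely the c-axis strain, which pyramidal slip or twinning must accommodate.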
In particular, transition states describing dislocation core effects were disregarded in these analyses. Provided that accurate atomistic pair potentials can be found or developed, these limitations can be circumvented by means of atomistic simulations such as those conducted in a series of pioneering studies by Serra and Bacon [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Serra | Dislocations in interfaces in the h.c.p. metals. defects formed by absorption of crystal dislocations[END_REF]. The authors revealed the complexity of the unit processes involved. For example, it was shown that a screw "basal" dislocation can easily cross a {10-12} twin boundary, while an edge basal dislocation will dissociate on the twin boundary and could generate surface disconnections. Such simulations are ideal to understand and describe the different mechanisms but are limited to a few dislocations. Later on, the reactions described above were rationalized in a constitutive model which suggested that slip-assisted twin growth, resulting from the continuous generation of steps by means of slip/twin interactions, was an unlikely candidate for explaining the rapid growth rate of tensile twins. In parallel, Fan et al. [START_REF] Fan | The role of twinning deformation on the hardening response of polycrystalline magnesium from discrete dislocation dynamics simulations[END_REF] used three-dimensional DDD to study the interactions between slip dislocations and {10-12} tension twin boundaries. It was found, as expected, that twins have a stronger influence on hardening (in the sense of limiting slip) than grain boundaries. However, their results cannot be directly implemented in constitutive laws. This shortcoming serves as a motivation for developing novel micromechanical models allowing researchers to capture, albeit in a coarse fashion, these effects.

Twin/twin interactions : motivating theoretical and experimental results

Twin/twin interactions and secondary twinning are also suspected to have a significant effect on microstructure evolution and internal stress development during plasticity. Regarding twin-twin interactions, the crystallography associated with the different twinning partial dislocations that can occur was discussed in general in work by Cahn [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF] on depleted uranium. In the specific case of h.c.p. materials, the literature is quite scarce. One must acknowledge recent work [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Kadiri | The effect of twin-twin interactions on the nucleation and propagation of twinning in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] based on EBSD and TEM measurements that investigated the nature of {10-12}-{10-12} twin-twin junctions and their influence on twin nucleation and twin propagation rates. It was suggested that twin-twin junctions could hinder twin growth while favoring nucleation. However, these studies are limited to one type of twin-twin junction and rely on the observation of only a few twinned grains, such that they are not necessarily statistically representative. The question remaining is that of the need for accounting for these interactions in constitutive models.
Micromechanical multi-inclusion models exist [START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF][START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF] but have never been applied or adapted to treat this type of problem. Secondary twinning, also called double twinning, is another interesting problem, as several experimental studies [START_REF] Barnett | Non-schmid behaviour during secondary twinning in a polycrystalline magnesium alloy[END_REF][START_REF] Beyerlein | Double twinning mechanisms in magnesium alloys via dissociation of lattice dislocations[END_REF] have proposed connections between the activation of such processes and fracture. An experimental study performed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF] on Mg EBSD scans revealed that the activated double twin variants were those that either required the least strain accommodation within the parent domain or obeyed Schmid's law. However, the latter study neither discussed the statistical relevance of secondary twinning nor quantified its influence on the mechanical response.

Crystallography of h.c.p. metals

In the Miller-Bravais indexing system, the hexagonal crystal structure shown in Figure 1.3 is defined with respect to four axes or vectors denoted by a1, a2, a3 and c, respectively. Vectors a1, a2 and a3 lie in the lower basal plane. The angles formed by the vector pairs (a1,a2), (a2,a3) and (a3,a1) are all equal to 2π/3 radians. The c-axis is perpendicular to the basal planes and, hence, to vectors a1, a2 and a3. The magnitude is the same for all vectors a_i with i = {1, 2, 3} but differs from the magnitude of vector c. The norm of the vectors a_i corresponds to the distance between two neighboring atoms lying in the same atomic layer. The magnitude of the vector c represents the distance between two atomic layers of the same type; in other words, it represents the distance separating the lower part from the upper part of a hexagonal cell. Atom positions, directions and planes can be expressed from vectors a1, a2, a3 and c. However, it is also possible to use only vectors a1, a2 and c, since a3 = -(a1 + a2). The hexagonal structure shown in Figure 1.3 contains 3 unit cells consisting of atom arrangements made of two tetrahedra facing upwards. For example, by representing atoms with hard spheres, the first of the 3 unit cells shown in Figure 1.3 includes atoms located at coordinates (1,0,0), (0,0,0), (1/2,1/2,0), (2/3,1/3,1/2), (1,0,1), (0,0,1) and (1/2,1/2,1). A unit cell thus involves 7 atoms. Each atom lying in layers A, i.e. the blue spheres, is shared with other unit cells, while the atom belonging to layer B is not shared with any other unit cell. As a result, an elementary hexagonal unit cell effectively contains 2 atoms. If all atoms contained in a unit cell are equidistant, then the axial ratio γ = c/a is equal to √(8/3) ≈ 1.633. This value is never reached by pure materials at room temperature and pressure, but metals such as magnesium and cobalt exhibit axial ratios which are very close, e.g. 1.623 for both of them [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. At room pressure and temperature, the axial ratios of pure h.c.p. metals lie between 1.56 (i.e. beryllium) and 1.89 (i.e.
cadmium) [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. Hexagonal intermetallic phases with a γ ratio equal to √(8/3) have also been produced [START_REF] Dorn | High-Strength Materials[END_REF]. Note that the value of γ is a function of temperature and pressure. Figure 1.4 displays important planes and directions in h.c.p. metals. They correspond to the planes and directions of the slip and twinning systems that will be discussed further on.

Crystallography of twinning in h.c.p. metals

Hall [START_REF] Hall | Twinning[END_REF] and Cahn [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF] were the first to present a detailed analysis of deformation twinning crystallography in h.c.p. materials. Shortly after, Kiho [START_REF] Kiho | The crystallographic aspect of the mechanical twinning in metals[END_REF][START_REF] Kiho | The crystallographic aspect of the mechanical twinning in Ti and alpha-U[END_REF] and Jaswon and Dove [START_REF] Jaswon | Twinning properties of lattice planes[END_REF][START_REF] Jaswon | The prediction of twinning modes in metal crystals[END_REF][START_REF] Jaswon | The crystallography of deformation twinning[END_REF] developed theories aimed at predicting the activation of twinning modes. Their theories rely on the assumption that the activated twinning systems are the ones that minimize both the twinning shear magnitude and the shuffles. In 1965, Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] reviewed and generalized the works previously mentioned. They provided the most complete analysis of shuffling processes and a very rigorous treatment of the orientation relationships and of the division of atomic displacements into shear and shuffles. To date, their theory, referred to in the following as the Bilby-Crocker theory or the classical crystallographic theory of deformation twinning, remains the reference. Bevis and Crocker [START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF] extended the theory to the case of non-classical twins. The objective of the present paragraph is to present the main aspects of deformation twinning crystallography and to introduce the principal twinning modes in h.c.p. metals. A deformation twin consists of a region of a grain that underwent a homogeneous shape deformation in such a way that the twinned domain has exactly the same crystalline structure as the parent domain but a different orientation. Consequently, deformation twinning consists in a homogeneous shape deformation but does not induce any volume variation. Because the parent and twin phases remain in contact, the deformation undergone by the twinned region must be an invariant plane shear strain. A twinning mode can be completely characterized by two planes and two directions. These planes and directions correspond to the invariant and unrotated plane, denoted by K1, the second invariant but rotated plane of the simple shear, K2, the twinning shear direction, η1, and the direction, η2, resulting from the intersection of the plane of shear P, perpendicular to both K1 and K2, with K2 (Figure 1.5). The plane K1 is called the composition or twinning plane. Even if four elements characterize a twinning mode, knowing only either K1 and η2, or K2 and η1, is sufficient to completely define a twinning mode.
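Since the Miller-Bravais indices introduced above are used throughout to define the twinning elements K1, K2, η1 and η2, a small helper converting four-index directions to Cartesian vectors is sketched below (a = 1 and a Mg axial ratio of 1.624 are assumed); it verifies the ideal axial ratio √(8/3) and the length a√(3 + γ²) of the [10-1-1] direction, which is the η1 direction of the {10-12} mode:

```python
import numpy as np

def mb_direction(u, v, t, w, a=1.0, gamma=1.624):
    """Cartesian vector of the Miller-Bravais direction [u v t w]
    (requires t = -(u + v))."""
    assert abs(u + v + t) < 1e-12
    a1 = a * np.array([1.0, 0.0, 0.0])
    a2 = a * np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
    a3 = -(a1 + a2)                    # a3 is not independent
    c = np.array([0.0, 0.0, gamma * a])
    return u * a1 + v * a2 + t * a3 + w * c

print(np.sqrt(8.0 / 3.0))              # ideal axial ratio, ~1.6330

# length of [1 0 -1 -1], the eta_1 direction of the {10-12} twinning mode
gamma = 1.624                          # Mg (assumed)
v = mb_direction(1, 0, -1, -1, gamma=gamma)
print(np.linalg.norm(v), np.sqrt(3.0 + gamma**2))  # both ~2.374
```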
The conjugate of a given twinning mode is also described by planes K′1, K′2 and directions η′1, η′2 such that K′1 = K2, K′2 = K1, η′1 = η2, η′2 = η1, and the magnitude of the twinning shear, s, is the same. K2 and η2 are then called the reciprocal twinning plane and the reciprocal twinning direction, respectively. The four orientation relations of the classical crystallographic theory of twinning are :
(1) reflection in K1,
(2) rotation of π about η1,
(3) reflection in the plane normal to η1,
(4) rotation of π about the normal to K1.
Because hexagonal structures are centro-symmetric, the lattices obtained with relations (1) and (4) are identical. The same observation can be made with relations (2) and (3). It is then natural to classify twins into three types. Type 1 and type 2 twins correspond to twins whose lattices can be obtained from relations (1)-(4) and (2)-(3), respectively. The third type of twins includes all twins for which the twin lattice can be reproduced by using any of the four classical orientation relations. Note that relations (2) and (4) are particularly convenient for expressing the Rodrigues vectors, and hence the quaternions, associated with twinning of types 2 and 1, respectively. Another way to differentiate twins of type 1 and 2 consists in looking at the rationality of the indices of K1, K2, η1 and η2. Twins of type 1 are twins for which the indices of K1 and η2 are rational, while twins of type 2 are those for which K2 and η1 are rational. In the case of compound twins, all K1, K2, η1 and η2 indices are rational. Geometrical considerations explaining the rationality or non-rationality of the Miller indices of planes K1, K2 and directions η1, η2 are detailed in Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] and recalled by Christian and Mahajan [START_REF] Christian | Deformation twinning[END_REF]. Other twinning orientation relations are theoretically possible and have been investigated by Bevis and Crocker [START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF]. Referred to as non-classical twins, they will not be studied in the present document. Denoting by u1, u2 and u3 the basis vectors associated with the hexagonal cell and using the Einstein convention and the notations of Christian et al. [START_REF] Christian | Deformation twinning[END_REF], the unit vectors parallel to the η1 direction, the η2 direction and the normal to the twinning plane can be written as l = l_i u_i, g = g_i u_i and m = m_i u_i, respectively. As a result, if K1 and η2 are known, the shear direction η1 is obtained from :
sl = 2(m - (g.m)^(-1) g) (1.1)
The magnitude of the twinning shear is given by :
s^2 = 4((g.m)^(-2) - 1) (1.2)
Consequently, the deformation gradient tensor associated with deformation twinning is expressed as
F = I + s(l ⊗ m) (1.3)
with I the identity tensor. The case of {10-12} twins is particularly interesting because the twinning shear s becomes null when γ = √3, and the shear direction reverses as γ passes through this value. As a result, {10-12} twins can be either tensile or compressive, depending on the magnitude of the axial ratio. However, the simple application of the previously described shear is not sufficient to reorient the crystal lattice in such a way that atoms belonging to the twinned region face their image in the parent phase with respect to the mirror symmetry plane.
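Before turning to these additional displacements, Eqs. (1.1)-(1.3) can be exercised numerically. The sketch below assumes the standard elements of the {10-12} mode (K1 = (10-12), η2 = [10-11]) and a Mg axial ratio of 1.624; it recovers the familiar closed form s = |3 - γ²|/(√3 γ) for the twinning shear and checks that the deformation gradient describes a volume-preserving, invariant-plane shear:

```python
import numpy as np

gamma = 1.624  # Mg (assumed)
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
c = np.array([0.0, 0.0, gamma])

def direction(u, v, t, w):
    """[u v t w] Miller-Bravais direction in Cartesian coordinates."""
    return u * a1 + v * a2 - t * (a1 + a2) + w * c

# unit normal m of K1 = (10-12), built from the reciprocal lattice
m = np.array([1.0, 1.0 / np.sqrt(3.0), 2.0 / gamma])
m /= np.linalg.norm(m)
# unit vector g along eta_2 = [10-11]
g = direction(1, 0, -1, 1)
g /= np.linalg.norm(g)

s = 2.0 * np.sqrt((g @ m) ** -2 - 1.0)      # Eq. (1.2)
l = 2.0 * (m - g / (g @ m)) / s             # Eq. (1.1): unit shear direction
F = np.eye(3) + s * np.outer(l, m)          # Eq. (1.3)

print(s, abs(3.0 - gamma**2) / (np.sqrt(3.0) * gamma))  # both ~0.129
print(l @ m, np.linalg.det(F))  # ~0 and ~1: invariant-plane, isochoric shear
```

Note that the sense of the computed l depends on the chosen sense of g, consistent with the polar (sign-sensitive) nature of twinning discussed earlier.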
Additional atomic displacements, beyond the homogeneous shear, are then necessary to produce the twin structure from the sheared parent structure. These atomic displacements, which relate the twin lattice sites to the parent lattice sites, are called lattice shuffles. Consider a primitive lattice vector w parallel to the direction η2. The magnitude of its projection along the normal to the twinning plane is equal to qd, where q is an integer denoting the number of K1 lattice planes of spacing d crossed by w. Atom displacements are repeated in each successive group of q planes. Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] showed that, in the case of twinning of type 1, lattice points lying in the planes p = q/2 and p = q are sheared directly to their final positions. For twins of type 2, an analogous parameter, q*, has to be introduced. It corresponds to the number of K2 planes crossed by a primitive lattice vector parallel to the twinning shear direction η1. It has been proven that no shuffles are required when q* = 1 and q* = 2. In short, any crystalline structure containing more than one atom per primitive unit cell will undergo lattice shuffling during twinning deformation when q ≥ 4. Examples of possible shuffling mechanisms are presented in Figure 1.6. Moreover, Table 1.1 lists most of the twinning systems observed in h.c.p. materials. Note that, except for a very rare twinning system observed in Mg by Reed-Hill [START_REF] Reed-Hill | A study of the (1011) and (1013) twinning modes in magnesium[END_REF], all twinning modes listed in Table 1.1 are compound. Very early on, Kiho [START_REF] Kiho | The crystallographic aspect of the mechanical twinning in metals[END_REF][START_REF] Kiho | The crystallographic aspect of the mechanical twinning in Ti and alpha-U[END_REF] and Jaswon et al. [START_REF] Jaswon | Twinning properties of lattice planes[END_REF][START_REF] Jaswon | The prediction of twinning modes in metal crystals[END_REF][START_REF] Jaswon | The crystallography of deformation twinning[END_REF] developed models aimed at predicting the activation of twinning systems. These models are based on the simple assumption that the activated twin systems are those inducing the least amount of shear and the smallest lattice shuffles. Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] generalized them and suggested that a newly-formed twin has the following properties :
(1) a small twinning shear magnitude,
(2) its formation requires simple lattice shuffles, i.e. shuffles with a small value of q,
(3) the lattice shuffles induced by its nucleation have a small magnitude,
(4) if large shuffles are necessary, they should be parallel to the twinning shear direction η1.
In general, criteria (1) and (2) are sufficient to predict the predominant twinning modes. Criteria (3) and (4) are particularly useful to choose between a twin mode and its conjugate. However, they cannot be used to predict the nucleation and growth of twins at the grain scale. For example, they are not capable of predicting whether a given strain will be accommodated by a large twin or by several small twins of the same mode. More recently, El Kadiri et al. [START_REF] Kadiri | The candidacy of shuffle and shear during compound twinning in hexagonal close-packed structures[END_REF] derived the analytical expressions of all possible shuffles and shears for any compound twin in h.c.p. metals.
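Criterion (1) can be illustrated by tabulating the twinning shear magnitudes of the four common h.c.p. modes as functions of the axial ratio. The closed forms used below are the ones commonly quoted in the literature and are taken here as given rather than re-derived:

```python
import numpy as np

# Standard twinning-shear magnitudes as functions of the axial ratio gamma
# (closed forms commonly quoted in the literature, assumed here as given)
shear = {
    "{10-12}": lambda g: abs(g**2 - 3.0) / (np.sqrt(3.0) * g),
    "{10-11}": lambda g: abs(4.0 * g**2 - 9.0) / (4.0 * np.sqrt(3.0) * g),
    "{11-22}": lambda g: abs(2.0 * (g**2 - 2.0)) / (3.0 * g),
    "{11-21}": lambda g: 1.0 / g,
}

for metal, gamma in (("Mg", 1.624), ("Zr", 1.593)):
    print(metal, {mode: round(f(gamma), 3) for mode, f in shear.items()})
# Mg: {10-12} ~0.129 and {10-11} ~0.138 carry the smallest shears
# Zr: {10-11} ~0.104 and {10-12} ~0.168 are smallest, {11-21} ~0.628 largest
```

For Mg, the {10-12} and {10-11} modes carry the smallest shears, consistent with their predominance. For Zr, {11-21} twinning is observed despite its large shear because, as noted later in this chapter, it requires no shuffles -a reminder that criterion (1) alone is not decisive.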
The purpose of El Kadiri et al. was to propose a generalized crystallographic framework capable of determining the twinning dislocations that can be formed for each twinning system. Their theory recovered the expressions of all the twinning dislocations previously identified from the admissible interfacial defect theory developed by Serra et al. [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Serra | Computer-simulation of the structure and mobility of twinning dislocations in hcp metals[END_REF] and discussed hereafter. Their calculations enabled the identification of all planes subject to shear exclusively and of those subject to both shear and shuffling, for any non-elementary twinning dislocation. Regarding the {11-22} and {10-12} twinning modes, the authors demonstrated that the smallest deviation from the stacking sequence is exactly repeatable, with an order equal to 7 and 2, respectively. They also revealed that a twinning disconnection with a step height equal to multiple interplanar spacings does not necessarily require shuffles within intermediate planes to operate in the twinning direction. Since some of the terms used to describe El Kadiri's results have not been defined yet, this discussion can be seen as a transition or an early introduction to the next paragraph.

Nucleation and growth of twins

Twinning dislocations and twin interfaces

A disconnection in a coherent rational twin boundary has a stress field similar to the one associated with a dislocation. This explains why the term twinning dislocation was used when the concept was first discussed by Vladimirskij [77] and by Frank and van der Merwe [START_REF] Frank | One-dimensional dislocations .1. Static theory[END_REF] in the late 1940s. The equivalent Burgers vector of a disconnection, also called a step, of height h can be expressed as follows :
b_t = hs (η1/||η1||) (1.4)
where s denotes the twinning shear magnitude and η1 the twinning shear direction. In cases where h is equal to the spacing d of the lattice planes parallel to K1, the twinning dislocation is called the elementary twinning dislocation. Moreover, since the elastic energy is proportional to the square of the Burgers vector magnitude, steps with heights equal to multiples of d tend to dissociate spontaneously into elementary twinning dislocations. However, when the parent and twin lattices do not coincide, an elementary twinning dislocation might be energetically unfavorable. As shown by Thompson and Millard [79], lattice shuffles also imply that the interface structure repeats at every q lattice planes parallel to K1 if q is odd, and at every q/2 planes if q is even. As a result, the Burgers vectors b_t,odd and b_t,even, corresponding to twinning dislocations for which q is odd and even, respectively, can be expressed as :
b_t,odd = qds (η1/||η1||) (1.5)
b_t,even = (1/2) qds (η1/||η1||) (1.6)
These twinning dislocations are referred to as zonal twinning dislocations. Regarding their nature, twinning dislocations can be of edge, screw or mixed type, and they have most of the properties of ordinary lattice dislocations. As explained by Christian and Mahajan [START_REF] Christian | Deformation twinning[END_REF], they can glide along the interface plane when a shear stress is applied. A twin can be represented as a series of twinning dislocation loops whose diameter increases with proximity to the central plane of the twin.
Then, while the expansion of existing dislocation loops engenders a diameter increase, the formation of new dislocation loops results in an increase of the twin thickness. The motion of zonal dislocations along defect-free planes parallel to K1 is responsible for twin growth or shrinkage. The dislocation is then said to be glissile. In reality, short- and long-range interactions with point, line and surface defects slow down or even block the displacement of the twinning dislocation. Such interactions can be considered as friction stresses. The lattice resistance is a kind of Peierls-Nabarro force whose magnitude strongly depends on the type of atomic bonding and hence on the structure of the dislocation core. Twinning dislocation cores were first assumed to be similar to lattice dislocation cores, i.e. quite narrow. However, experiments and simulations revealed that the size of twinning dislocation cores varies a lot with the material. For example, the twinning dislocation cores observed in zirconia are very narrow and the corresponding Peierls stress is very high, while for many metals, dislocation cores may extend over several atomic planes and may be opposed by a relatively small Peierls stress. In addition, the dissociation of zonal dislocations into elementary twinning dislocations lowers the elastic energy but increases the surface energy. The elementary dislocations, products of the dissociation, have parallel Burgers vectors. As a result, because of repulsive forces, they will tend to separate. Note that the dissociation process of zonal dislocations into elementary dislocations is very similar to the one corresponding to the dissociation of lattice dislocations into partial dislocations [START_REF] Mendelson | Fundamental Aspects of Dislocation Theory[END_REF]. Thompson and Millard [79] established that the only stable twinning dislocation for the {10-12} twinning mode is the zonal dislocation of double step height, with the following Burgers vector :
b_t = ((3 - γ^2)/(3 + γ^2)) η1 (1.7)
Its magnitude is then equal to
b_t = ((3 - γ^2)/(3 + γ^2)) a (1.8)
Atomistic simulations performed by Serra et al. [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF] with two-body potentials revealed that the width of these zonal dislocations is sensitive to the potential used. Regarding the {11-22} twinning mode, the zonal dislocation corresponding to the twinning features presented in Table 1.1 has a step height equal to three interplanar spacings of the K1 planes and the following Burgers vector :
b_t = ((γ^2 - 2)/(3(γ^2 + 1))) η1 (1.9)
whose magnitude is
b_t = ((γ^2 - 2)/(γ^2 + 1)) a (1.10)
Due to the high value of q, i.e. q = 6, many lattice shuffles are expected to occur. Serra et al. [START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF] considered three different shuffle models and observed that the energy and the width of the twinning dislocation were not really affected by the type of shuffle but were sensitive to the atomic potential used. No lattice shuffle occurs with {11-21} twinning. As a result, the twinning dislocation associated with this twinning mode is an elementary dislocation whose Burgers vector and Burgers vector magnitude are, respectively,
b_t = (1/(3(4γ^2 + 1))) η1 (1.11)
and
b_t = (1/(1 + 4γ^2)) a (1.12)
The most observed twinning dislocations in {10-11} twin interfaces in Mg and Ti are such that their step height is equal to 4d, with d the interplanar distance between K1 lattice planes.
Their corresponding Burgers vector and Burgers vector magnitude have the following expressions :
b_t = ((4γ^2 - 9)/(4γ^2 + 3)) η1 (1.13)
and
b_t = (√2 (4γ^2 - 9)/(√3 √(4γ^2 + 3))) a (1.14)
In order to minimize the interfacial energy, all the twinning dislocations mentioned above may dissociate into dislocations whose Burgers vectors are smaller in magnitude. The energetic stability of twin interfaces is a very pertinent discrimination criterion for investigating the likelihood of twinning modes. Using different two-body potentials, Serra and Bacon [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF] thoroughly investigated the main twin interface structures in pure h.c.p. materials, i.e. {10-12}, {11-21}, {10-11} and {11-22} twins. As shown in Table 1.1, all these twin modes have been observed experimentally in h.c.p. materials. Simulations revealed that classical twinning dislocations having both a relatively small b and a small h have smaller line energies in {10-12} and {11-21} interfaces than in {11-22} interfaces. The only stable equilibrium configuration found for the {10-12} twin interface is such that the parent and twin lattices are mirror images, and the interface plane results from the coalescence of two adjacent atomic planes into a corrugated {10-12} plane. Moreover, Xu et al. [START_REF] Xu | On the importance of prismatic/basal interfaces in the growth of twins in hexagonal close packed crystals[END_REF] also showed that prismatic/basal interfaces, which exhibit a low interface energy, play an important role in the growth of {10-12} twins. The relaxed structure of the {10-11} interface is very similar to the one computed for {10-12} twins, since the stable interface consists of a {10-11} plane generated after the coalescence of two separate atomic planes. Regarding the {11-22} twin interface, atomistic simulations reveal that the interface, as well as all lattice planes parallel to K1, is perfectly flat.

Mechanisms involved in nucleation and growth of twins

Twin formation can be decomposed into three steps. The first step is the nucleation, consisting of the formation of a small twin nucleus. The second step, named propagation, corresponds to the phase during which dislocation loops expand very quickly in all directions contained within the twinning plane. At the end of the second step, the newly-formed twin is flat and wide. The last step in the development of a twin, referred to as the growth step, consists of the thickening of the twin. The mechanisms involved in both nucleation and growth are detailed in the following. In theory, twins may nucleate homogeneously or heterogeneously. Homogeneous nucleation consists in the formation of a small twin in a defect-free region under the influence of an applied stress. Homogeneous nucleation implies that there exists a critical resolved shear stress (CRSS), also called the theoretical strength of the material, such that when the resolved shear stress on the twinning plane in the twinning direction reaches or exceeds this critical stress, a twin forms.
Theoretical works of Orowan [START_REF] Koehler | Dislocations in Metals[END_REF], Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] and Lee and Yoo [START_REF] Lee | Elastic strain-energy of deformation twinning in tetragonal crystals[END_REF][START_REF] Yoo | Deformation twinning in hcp metals and alloys[END_REF] show that twins may nucleate homogeneously only if the applied resolved shear stress on the twinning plane is very high and if both the surface and strain energies are very small. As a consequence, homogeneous nucleation seems very unlikely. Experimental results obtained by Bell and Cahn [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] as well as by Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] are in agreement with this conclusion. Bell and Cahn [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] observed that twins appeared at much higher stress levels in "almost" defect-free h.c.p. single crystals than they did in less perfect crystals. Carrying out in-situ measurements on specimens in a scanning electron microscope, Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] found that the stresses required to initiate twinning were an order of magnitude higher than those usually measured on "regular" macroscopic specimens. Therefore, Bell et al. [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] and Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] all agreed with the conclusion that twinning is initiated by some defect configuration. As opposed to homogeneous nucleation, heterogeneous nucleation consists of defect-assisted twin formation. It is usually modeled via the dissociation of some dislocation into a single- or multi-layered stacking fault [START_REF] Christian | Dislocations in Solids[END_REF]. The resulting stacking fault, bounded by partial dislocations belonging to the parent crystal, is then the defect responsible for twin nucleation. There are other ways for twins to form.
These are called pole or cross-slip source mechanisms, which enable a single twinning dislocation to move through successive K1 planes. Derived from the general theory developed by Bilby [START_REF] Bilby | On the mutual transformation of lattices[END_REF], the mechanisms proposed by Cottrell and Bilby [START_REF] Cottrell | A mechanism for the growth of deformation twins in crystals[END_REF] and by Thompson and Millard [79] introduced the concept of a pole mechanism in b.c.c. and h.c.p. materials, respectively. The pole mechanism was described by Bilby and Christian [START_REF] Bilby | The Mechanism of Phase Transformations in Metals[END_REF] as follows. Consider a lattice dislocation in a parent crystal with a Burgers vector b_a. After crossing the twin interface, the same dislocation has a Burgers vector b_b. The two Burgers vectors are assumed to be related by the twinning shear such that b_b = S b_a. As a result, the glide of the dislocation leaves a step in the interface, of height equal to the projection of the Burgers vector b_a along the normal to the twinning plane, i.e. h = b_a.m, with m a unit vector normal to K1. This step is a twinning dislocation whose Burgers vector is b_t = b_b - b_a. Consequently, each point crossed by the initial lattice dislocation along the twin interface is the junction of three dislocation lines. Bilby called this junction point a "generating node". The twinning dislocation associated with such a configuration is said to be a pole dislocation. Other, more elaborate illustrations of pole mechanisms, involving the dissociation of a pole dislocation into partial and sessile dislocations, have been detailed and explained by authors such as Cottrell and Bilby [START_REF] Cottrell | A mechanism for the growth of deformation twins in crystals[END_REF], Venables [START_REF] Venables | Dislocation pole models for twinning[END_REF] and Hirth and Lothe [START_REF] Hirth | Theory of Crystal Dislocations[END_REF]. Similar to nucleation, both homogeneous and heterogeneous growth mechanisms have been and are still being investigated in the literature. However, in contrast to homogeneous nucleation, which is very unlikely, homogeneous growth is possible. Homogeneous growth corresponds to the repeated homogeneous nucleation of twinning dislocations on K1 lattice planes, forming new twin layers. Twin thickening may also occur by random accumulation of nucleated faults, by heterogeneous nucleation of steps at defect loci, or by pole or cross-slip mechanisms.

Twinning in constitutive and polycrystalline modeling

Polycrystalline models accounting for twinning rely either on the use of the finite element method (FEM), as recently done in work by Izadbakhsh et al. [96, 97], or on the use of Green operator techniques and Eshelbian micromechanics [START_REF] Shiekhelsouk | Modelling the behaviour of polycrystalline austenitic steel with twinning-induced plasticity effect[END_REF]. The latter can be applied in the form of a mean-field approach (e.g. self-consistent methods) or of a full-field approach, via the use of the Fast Fourier Transform (FFT) method originally proposed by Moulinec and Suquet [START_REF] Moulinec | A numerical method for computing the overall response of nonlinear composites with complex microstructure[END_REF][START_REF] Moulinec | Intraphase strain heterogeneity in nonlinear composites : a computational approach[END_REF].
In both cases of full-field methods, based on the FEM and on the FFT, current models accounting for twin activity do not effectively reorient the crystal within the twin domain, such that second-generation twinning and secondary slip are necessarily predicted with lesser accuracy. On the contrary, mean-field self-consistent polycrystalline models, in which domain reorientation is straightforward, have been used in a large body of work to predict the effect of twinning on strain hardening and microstructure evolution [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF][START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF][START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF]. A simple way to deal with the crystallographic reorientation induced by twinning consists in (1) initially representing the polycrystal with a finite set of orientations with given volume fractions and (2) reorienting an entire crystal when the effective twin fraction is larger than a critical value. This type of approach was originally proposed in early work by Van Houtte [START_REF] Van Houtte | Simulation of the rolling and shear texture of brass by the taylor theory adapted for mechanical twinning[END_REF], in which a Monte Carlo type method was employed to allow for twin reorientation. In the same spirit, Tomé et al. [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF] developed the "Predominant Twin Reorientation" (PTR) scheme. This scheme, in which an entire grain is reoriented at once and in which solely the most active twin system is accounted for in terms of crystallographic reorientation, necessarily leads to imprecision in the prediction of texture development. To overcome these limitations, an alternative method, referred to as the "Volume Fraction Transfer" (VFT) scheme, was proposed [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF][START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF]. The polycrystal is represented as a finite set of fixed orientations, weighted by volume fractions. As deformation proceeds, the weights of the orientations evolve so as to reproduce the nucleation and growth of twins. The VFT scheme provides an accurate description of texture development in the case of twinning, but does not allow for a direct coupling or a direct connection between the parent and twin domains. More recently, Proust et al. [START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF] developed the "Composite Grain" (CG) model from the PTR scheme. In the CG model, when a critical twin volume fraction is reached, new grains are created with an orientation corresponding to that of the twin domain.
The shape of the newly formed domains is fixed by a parameter such that the multi-lamellar aspect of twinning -yielding relatively flat ellipsoidal twins- is respected. From the point of view of micromechanics, the newly formed twin domains are either treated as new grains to be embedded in the homogeneous reference medium or artificially coupled to the parent phase by imposing traction continuity across the interface. Although the coupled CG approach may appear to be a more realistic description of the geometry and traction continuity conditions associated with twinning, it is to be noted that enforcing traction continuity on the mean fields within the parent and twin domains is unlikely to be appropriate when the twin fraction is not small. When applied to the case of pure polycrystalline magnesium in an elasto-plastic self-consistent scheme, it is found that the CG model cannot accurately reproduce the evolution of internal strains concomitant with twinning activity. Typically, these approaches suffer from two limitations. First, they do not account explicitly for the direct mechanical interaction between parent and twin or between twin and neighboring grains. Second, they do not consider the stresses induced by the shear transformation inside the twin domain. These limitations essentially restrict the accuracy of the predicted stress state in the twin domain -and its correspondence with full-field methods- both at the onset of twinning and once plasticity has occurred. Most models use the average stress in the parent and resolve it on the twin plane so as to quantify the driving force associated with twin growth. More recent approaches considered both the stress states in the parent and twin domains [START_REF] Wang | A constitutive model of twinning and detwinning for hexagonal close packed polycrystals[END_REF][START_REF] Wang | A crystal plasticity model for hexagonal close packed (hcp) crystals including twinning and de-twinning mechanisms[END_REF]. However, at a fine continuum mechanics scale, the stress of interest is the one acting at the interface between the twin and the parent. Experiments [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF][START_REF] Balogh | Spatially resolved in situ strain measurements from an interior twinned grain in bulk polycrystalline {AZ31} alloy[END_REF] reveal that those resolved shears can be very different. High-energy X-ray diffraction in-situ measurements by Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] revealed that substantial backstresses develop within twin domains during the activation of {10-12} tensile twins. As such, the accuracy of predictions of secondary slip and second-generation twin activity in polycrystals is limited. The latter is typically observed in magnesium alloy AM30, where tensile twins of the {10-12} type develop within compressive twins of the {10-11} type [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF][START_REF] Barnett | Non-schmid behaviour during secondary twinning in a polycrystalline magnesium alloy[END_REF]. In recent EBSD measurements performed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF], it was shown that, out of the six possible second-generation twin variants, two are observed far more frequently.
To date, no clear explanation of the phenomenon exists, but strain accommodation within twin domains, inside both the primary twin and the parent phases, is likely to play a significant role [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF]. To remedy the first limitation, related to the direct coupling between the parent and twin domains, an alternative mean-field approach, still based on a 1-site self-consistent model for elasto-plastic polycrystals (EPSC) but including twins as new "child grains" embedded in the HEM (Homogeneous Equivalent Medium), was proposed [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. Continuity conditions were enforced across the parent/twin interface. The traction continuity constraint is appropriate at the onset of twinning at the twin/parent interface. However, as it is enforced on the mean stresses within the parent and twin phases, it is unlikely to be accurate when the twin has reached a significant volume. For the initial state of the twin domain, Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] proposed to impose an initial twin volume fraction and hence an initial plastic shear for the twin system once twinning is activated. Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] also showed that better agreement with experimental data could be obtained by introducing a back-stress term at the constitutive level, in an ad-hoc fashion, within the twin phase. The motivation there was essentially to reduce the stress within the twin domain at the onset of twinning so as to match the elastic strains measured by neutron diffraction. An alternative route has also been proposed in which, rather than first computing the plastic shear in the parent phase on the twinning plane and then reorienting a twin domain, one first creates a twin domain and then imposes an eigenstrain in that domain. In the work of Lebensohn et al. [START_REF] Lebensohn | A study of the stress state associated with twin nucleation and propagation in anisotropic materials[END_REF], this method was used in a purely elastic two-phase model. Although limited to a purely elastic accommodation, these models lead to much reduced or even null stress states in the twin domain. The eigenstrain-based approach [START_REF] Lebensohn | An elasto-viscoplastic formulation based on fast fourier transforms for the prediction of micromechanical fields in polycrystalline materials[END_REF] was also used in full-field elasto-plastic studies based on finite element and fast Fourier transform methods. Rendering such complex physical phenomena in view of performing virtual material characterization is complicated due to the local nature of nucleation events. The vast majority of polycrystalline models use deterministic criteria to predict nucleation events. However, the scale at which nucleation occurs is typically lower than the resolution scale of FFT- or FEM-based full-field simulations, such that connections with the atomistic structure of grain boundaries, in terms of degrees of freedom and defect content, are missing.
To overcome such limitations, probabilistic twinning models accounting for the statistical nature of twin nucleation have been developed [START_REF] Niezgoda | Stochastic modeling of twin nucleation in polycrystals : An application in hexagonal close-packed metals[END_REF][START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF]. Clearly, these approaches rely on the gathering of rigorous statistical data from experimental studies. In Capolungo et al. [START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF] and Beyerlein et al. [START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF], an automated twin detection tool capable of detecting the presence and geometry of twins from Electron Backscatter Diffraction (EBSD) measurements was used to that end. These works clearly delineate a path towards both directly embedding experimental data into constitutive models and generating datasets for model validation. Yet, the studies were limited to microstructures with relatively well-organized twin structures and considered only one twin mode at a time. Indeed, automatically extracting microstructural data and twinning statistics such as grain size, grain orientation, number of twins per grain or modes and variants of twins is particularly complex because of the diversity of twinning modes, the multiplicity of certain twins and the complex morphologies one can introduce following complex or arbitrary loading [START_REF] Mccabe | Quantitative analysis of deformation twinning in zirconium[END_REF][START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF].

Scope of the thesis

Focused on twinning in h.c.p. metals, the present PhD thesis is dedicated to the study of internal stress development and to the investigation and quantification of the relative contributions of parent/twin and twin/twin interactions to the mechanical behavior and the microstructure evolution during deformation twinning. Particular attention is drawn to magnesium and zirconium. The thesis is organized in the following manner: Chapter 2 introduces a new micromechanical approach based on a double-inclusion topology and the use of the Tanaka-Mori theorem. A first elasto-static model in heterogeneous elastic media with eigenstrains is developed and applied to the case of first- and second-generation twinning in a single twinned Mg grain. A second model, referred to as the double-inclusion elasto-plastic self-consistent scheme (DI-EPSC), is then derived. The DI-EPSC scheme consists of an extension of the first model to the case of elasto-plastic media and polycrystalline materials. Applied to an initially extruded Mg AZ31 alloy, its predictions will be compared to those obtained by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. Chapter 3 aims at introducing a new EBSD analysis software developed for automated twinning statistics extraction and based on graph theory and quaternion algebra. Prior to presenting this new automated twin recognition tool, the chapter will briefly describe scanning electron microscopes, give historical perspectives on the EBSD technique and review the basic concepts of electron diffraction and diffraction pattern analysis.
Chapter 4 is dedicated to the identification of statistically representative data associated with the nucleation and growth of twins from three studies carried out on Mg AZ31 alloy and pure Zr EBSD scans. The first two studies, performed on Mg AZ31 alloy, are focused on the determination and explanation of activation criteria for low Schmid factor {10 12} tensile twins and successive {10 12}-{10 12} double extension twins. The last statistical analysis, performed on Zr, discusses the statistical relevance of twin-twin junctions and their influence on the nucleation and growth of twins. Finally, Chapter 5 summarizes the main results of the presented work and presents possible further developments and studies.

Chapter 2

Study of the influence of parent-twin interactions on the mechanical behavior during twin growth

The present chapter focuses on the influence of parent/twin interactions on the mechanical behavior of polycrystalline Mg. A numerically efficient mean-field Eshelbian-based micromechanical model is proposed to address the problem. The general idea is based on the use of the Tanaka-Mori scheme [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], which is first extended to the case of heterogeneous elasticity [START_REF] Juan | Prediction of internal stresses during growth of first-and second-generation twins in mg and mg alloys[END_REF] and then further extended to the case of elasto-plasticity [START_REF] Juan | A double inclusion homogenization scheme for polycrystals with hierarchal topologies : application to twinning in mg alloys[END_REF]. Prior to that, a few key foundations of Eshelbian micromechanics are recalled [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]. The following convention is used throughout the rest of this chapter. Fourth-order tensors will be denoted with capital Latin characters, second-order tensors will be denoted with Greek characters and vectors will be denoted with lower-case Latin characters. Einstein summation convention is used for the sake of brevity. Finally, when contracted notations are used (e.g. ":" denotes a doubly contracted product), non-scalar variables will be noted in bold. The symbol "," denotes a spatial derivative.

2.1 The inclusion problem

Field equations and thermodynamics

In pioneering work, Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF] analytically determined the local strain and stress tensors in an inhomogeneous inclusion containing an eigenstrain and embedded in an infinite elastic homogeneous medium (Figure 2.1). The temperature of the medium is assumed to be constant, i.e. there is no thermal strain. Dynamic effects and body forces are neglected. Denoting r the position vector in the medium V, the equilibrium condition without body forces and acceleration is expressed through the divergence of the Cauchy stress tensor σ:

∇·σ(r) = 0 (2.1)

The compatibility equation on the total distortion, β, is given by:

β(r) = ∇u(r) (2.2)

where u is the displacement vector.
In the small deformation approximation, the total strain and rotation tensors, respectively denoted ǫ and ω, are related to the total distortion as follows:

β(r) = ǫ(r) + ω(r) (2.3)

with,

ǫ(r) = (1/2)[∇u(r) + ∇ᵗu(r)] (2.4)
ω(r) = (1/2)[∇u(r) − ∇ᵗu(r)] (2.5)

As previously mentioned, a spatially varying eigenstrain, denoted with the superscript *, is imposed in the medium. The eigenstrain is a non-elastic, stress-free strain (as per Eshelby) that can physically represent a phase transformation, a pre-strain, a thermal strain or a plastic strain [START_REF] Mura | Micromechanics of defects in solids[END_REF]. In the small perturbation hypothesis, the total strain is written as the sum of the elastic strain and of the eigenstrain:

ǫ(r) = ǫ^el(r) + ǫ*(r) (2.6)

In a linear elastic homogeneous medium, the constitutive relation is simply given by:

σ(r) = C^0 : [ǫ(r) − ǫ*(r)] (2.7)

where C^0 is the homogeneous reference elastic modulus tensor. The traction and displacement boundary conditions, on ∂V_σ and ∂V_u respectively, are the following:

u^d = (E + Ω)·r (2.8)
t^d = σ·n (2.9)

where E and Ω represent the macroscopic strain and rotation tensors imposed on the surface ∂V_u, respectively, and n is the vector normal to the traction surface ∂V_σ. Using the minor symmetry of the elastic moduli of h.c.p. materials and accounting for strain compatibility, the constitutive equation can be written as follows:

σ(r) = C^0 : [∇u(r) − ǫ*(r)] (2.10)

After introducing the constitutive law into the balance equation, one obtains the so-called Navier-type equation for the homogeneous problem considered here:

C^0 : ∇·∇u(r) + f*(r) = 0 (2.11)

where f*, representing the virtual body forces due to the incompatibility ǫ*, is given by:

f*(r) = −C^0 : ∇·ǫ*(r) (2.12)

The static elastic Green's function G^∞_ij(r − r') corresponds to the displacement at point r in direction i due to the application of a unit body force applied at r' in the j direction. Consequently, the solution of the present problem is the convolution of the static Green's function G^∞ with the virtual body force vector f*:

u(r) = ∫_{−∞}^{+∞} G^∞(r − r')·f*(r') dV_{r'} (2.13)

The static elastic Green's function satisfies the following equation:

C^0_ijkl G^∞_km,lj(r − r') + δ_im δ(r − r') = 0 (2.14)

where δ_im is the Kronecker symbol and δ(r − r') the three-dimensional Dirac delta function. Eq. 2.14 corresponds to the Navier equation after multiplying its terms by the Green's function, performing two integrations by parts and simplifying the resulting equation by consideration of the boundary conditions [START_REF] Mura | Micromechanics of defects in solids[END_REF]. The Helmholtz free energy density, denoted Φ and corresponding to the portion of the internal energy available for doing work in an isothermal process, is expressed as an integral of the volume density of elastic energy, W^el, over the volume of the medium V [START_REF] Berbenni | Intra-granular plastic slip heterogeneities : Discrete vs. mean field approaches[END_REF][START_REF] Collard | Role of discrete intra-granular slip bands on the strain-hardening of polycrystals[END_REF]:

Φ(E, ǫ*) = (1/2V) ∫_V σ(r) : ǫ^el(r) dV (2.15)

After development and use of the Gauss theorem, the surface terms appear in the expression of Φ:

Φ(E, ǫ*) = (1/2V) ∫_{∂V} σ(r)u(r)·n dS − (1/2V) ∫_V σ(r) : ǫ*(r) dV (2.16)

By considering the boundary conditions (Eqs.
2.8-2.9), Φ becomes:

Φ(E, ǫ*) = (1/2V) ∫_{∂V_σ} t^d·u dS + (1/2V) ∫_{∂V_u} σ(r)u^d·n dS − (1/2V) ∫_V σ(r) : ǫ*(r) dV (2.17)

If only the perturbation fields due to microstructural inhomogeneities are studied, the internal part of the Helmholtz free energy density reduces to:

Φ_int = −(1/2V) ∫_V σ(r) : ǫ*(r) dV (2.18)

Eshelby's solution

The exact solution to this boundary problem is given by the Lippmann-Schwinger-Dyson-type integral equations [118,[START_REF] Berveiller | The problem of two plastic and heterogeneous inclusions in an anisotropic medium[END_REF] recalled here:

ǫ(r) = E^d + ∫_{−∞}^{+∞} Γ^{∞,s}(r − r') : C^0 : ǫ*(r') dV_{r'} (2.19)

where Γ^{∞,s} corresponds to the symmetric modified Green's tensor:

Γ^{∞,s}_ijkl(r − r') = −(1/2)[G^∞_ik,jl(r − r') + G^∞_jk,il(r − r')] (2.20)

Outside the inclusion, of volume V_Ω, the eigenstrain tensor is null. Considering a uniform eigenstrain tensor in V_Ω, the integral equation becomes:

ǫ(r) = E^d + [∫_{V_Ω} Γ^{∞,s}(r − r') dV_{r'}] : C^0 : ǫ* (2.21)

To simplify notations, uniform vectors or tensors are replaced by their values, such that one writes for example ǫ*(r) = ǫ* when r ∈ V_Ω. Finally, the exact solution of this inclusion problem is given by:

ǫ(r) = E^d + P_{V_Ω}(r) : C^0 : ǫ* (2.22)

where the fourth-order tensor P_{V_Ω}(r) denotes the so-called polarized Hill tensor. It is expressed as the integral over the inclusion volume of the symmetric modified Green's tensor:

P_{V_Ω}(r) = ∫_{V_Ω} Γ^{∞,s}(r − r') dV_{r'} (2.23)

Similarly, the final expression of the rotation tensor ω is:

ω(r) = Ω^d + [∫_{V_Ω} Γ^{∞,a}(r − r') dV_{r'}] : C^0 : ǫ* (2.24)

where Γ^{∞,a} corresponds to the anti-symmetric modified Green's tensor, whose expression is:

Γ^{∞,a}_ijkl(r − r') = −(1/2)[G^∞_ik,jl(r − r') − G^∞_jk,il(r − r')] (2.25)

Eshelby's tensor, S^0, is defined as the double dot product between Hill's polarized tensor and the fourth-order elastic stiffness tensor:

S^0(r) = P_{V_Ω}(r) : C^0 (2.26)

The fourth-order tensor P_{V_Ω}(r) is uniform inside the inclusion Ω. As a result, S^0, ǫ and σ are also uniform when r ∈ V_Ω. The final expressions of the strain and stress tensors, inside and outside the inclusion, result from the combined use of the intermediary results and the newly introduced notations:

- when r ∈ V_Ω,

ǫ(r) = E^d + S^0 : ǫ* (2.27)
σ(r) = C^0 : E^d + C^0 : (S^0 − I) : ǫ* (2.28)

- when r ∉ V_Ω,

ǫ(r) = E^d + S^0(r) : ǫ* (2.29)
σ(r) = C^0 : E^d + C^0 : S^0(r) : ǫ* (2.30)

2.2 A generalized Tanaka-Mori scheme in heterogeneous elastic media with plastic incompatibilities

In this paragraph, a Lippmann-Schwinger-type equation is derived for microstructures with heterogeneous elasticity (due to twin reorientation) and plastic incompatibilities (eigenstrains due to the shearing in different types of twin domains). This is done by generalizing the original work of Tanaka and Mori [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], initially developed in the case of homogeneous elasticity. This scheme aims at predicting the development of internal stresses within twin and parent domains during the growth of first- and second-generation twins. The proposed method considers a static configuration for an elastic medium with eigenstrains.
It allows computation of the values and evolutions of internal stresses in a double inclusion of ellipsoidal shape (mimicking the geometry of a twin domain contained in either a parent phase or a primary twin), accounting for inclusion shape, relative shape and volume fraction effects.

Elasto-static Tanaka-Mori scheme

Generalized Tanaka-Mori scheme

This sub-paragraph presents both the twinning topology considered and the key steps in the derivation of a generalized Tanaka-Mori scheme. Consider two ellipsoidal inclusions V_b and V_a such that V_b ⊂ V_a, with elastic moduli C^b in V_b and C^a in the sub-domain V_a − V_b (Figure 2.2).

Figure 2.2 - Two ellipsoidal inclusions V_b and V_a (with V_b ⊂ V_a), with prescribed eigenstrains ǫ*_b in V_b and ǫ*_a in the sub-region V_a − V_b, and distinct elastic moduli C^b in V_b and C^a in the sub-region V_a − V_b. The two inclusions are embedded in an infinite elastic medium, with elastic modulus C^0, containing an overall uniform plastic strain E^p. The second-order tensor E^d represents the imposed macroscopic strain.

Such a geometry can yield a geometrical representation of both the first-generation twin (volume V_b is null and volume V_a represents the first-generation twin) and the second-generation twin (with V_a and V_b representing the first- and second-generation twins, respectively). In order to represent the shear deformation induced by twinning, two uniform eigenstrains, denoted ǫ*_b and ǫ*_a, are respectively introduced in the inclusion V_b and in the sub-region V_a − V_b. Another uniform plastic strain, denoted by the second-order tensor E^p, is introduced in the sub-domain V − V_a in order to model the macroscopic plastic strain undergone by the specimen during mechanical testing (Figure 2.2). Following the same steps as for the inclusion problem, the Navier-type equation of this heterogeneous multi-inclusion problem is:

C^0 : ∇·∇u(r) + f*(r) = 0 (2.31)

where the virtual body forces f*, resulting from both the heterogeneous elasticity and the incompatibilities (eigenstrains), are expressed as follows:

f*(r) = ∇·(δC(r) : ǫ(r) − C(r) : ǫ*(r)) (2.32)

The new Lippmann-Schwinger-Dyson-type integral equations are then:

ǫ(r) = E^d − ∫_V Γ^{∞,s}(r − r') : [δC(r') : ǫ(r') − C(r') : ǫ*(r')] dV_{r'} (2.33)
ω(r) = Ω^d − ∫_V Γ^{∞,a}(r − r') : [δC(r') : ǫ(r') − C(r') : ǫ*(r')] dV_{r'} (2.34)

θ_b(r) and θ_a(r) denote the characteristic functions associated with V_b and V_a, respectively. These functions are equal to the identity and null tensors inside and outside their corresponding volumes, respectively.
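To make the double-inclusion bookkeeping concrete, the short sketch below (an illustration only, not part of the model implementation; the grid size, semi-axes and modulus values are assumptions) evaluates the characteristic functions θ_a and θ_b on a voxel grid and assembles the piecewise-uniform fields of Eqs. 2.37-2.38 in scalar form.

```python
import numpy as np

# Sketch: characteristic functions of two nested ellipsoids V_b ⊂ V_a on a
# voxel grid, and the piecewise-uniform stiffness field of Eq. 2.37 (scalar
# stand-ins are used for the fourth-order tensors C^0, C^a, C^b).
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

def in_ellipsoid(a1, a2, a3):
    """Boolean characteristic function of an ellipsoid centered at the origin."""
    return (X / a1) ** 2 + (Y / a2) ** 2 + (Z / a3) ** 2 <= 1.0

theta_a = in_ellipsoid(0.9, 0.9, 0.3)   # outer domain V_a (flat ellipsoid, assumed)
theta_b = in_ellipsoid(0.6, 0.6, 0.1)   # inner twin domain V_b ⊂ V_a (assumed)

# Scalar stand-ins for the moduli, in GPa (arbitrary assumptions):
C0, Ca, Cb = 45.0, 50.0, 55.0
C = C0 * (~theta_a) + Ca * (theta_a & ~theta_b) + Cb * theta_b  # Eq. 2.37, scalar form

f_b_in_a = theta_b.sum() / theta_a.sum()  # relative volume fraction V_b / V_a
print(f"twin fraction inside the grain: {f_b_in_a:.3f}")
```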
The heterogeneous elastic properties and eigenstrains in the infinite body V can be expressed as spatially fluctuating fourth- and second-order tensors, C(r) and ǫ*(r), respectively:

C(r) = C^0 + δC(r) = C^0 + (C^a − C^0) θ_a(r) + (C^b − C^a) θ_b(r) (2.35)
ǫ*(r) = E^p + (ǫ*_a − E^p) θ_a(r) + (ǫ*_b − ǫ*_a) θ_b(r) (2.36)

In order to simplify the subsequent calculations, the expressions of C(r) and ǫ*(r) are rewritten in the following manner:

C(r) = C^0 (1 − θ_a(r)) + C^a (θ_a(r) − θ_b(r)) + C^b θ_b(r) (2.37)
ǫ*(r) = E^p (1 − θ_a(r)) + ǫ*_a (θ_a(r) − θ_b(r)) + ǫ*_b θ_b(r) (2.38)

Replacing the spatially varying elastic modulus and eigenstrain tensors by their expressions in the integral equation, one obtains the following expression of the strain field anywhere in the volume:

ǫ(r) = E^d + ∫_{V−V_a} Γ^{∞,s}(r − r') : C^0 : E^p dV_{r'} − ∫_{V_a−V_b} Γ^{∞,s}(r − r') : [(C^a − C^0) : ǫ(r') − C^a : ǫ*_a] dV_{r'} − ∫_{V_b} Γ^{∞,s}(r − r') : [(C^b − C^0) : ǫ(r') − C^b : ǫ*_b] dV_{r'} (2.39)

Since lim_{V→∞} ∫_V Γ^{∞,s}(r − r') dV_{r'} = 0, and after rearrangement of the terms in the integrands, the strain field expression becomes:

ǫ(r) = E^d − ∫_{V_a} Γ^{∞,s}(r − r') : [(C^a − C^0) : ǫ(r') + C^0 : E^p − C^a : ǫ*_a] dV_{r'} − ∫_{V_b} Γ^{∞,s}(r − r') : [(C^b − C^a) : ǫ(r') + C^a : ǫ*_a − C^b : ǫ*_b] dV_{r'} (2.40)

Regardless of the respective shapes of the two volumes, the strain field ǫ(r) given by Eq. 2.40 is not uniform. Exact solutions of Eq. 2.40 can be obtained via the use of the FFT method [START_REF] Moulinec | A numerical method for computing the overall response of nonlinear composites with complex microstructure[END_REF][START_REF] Moulinec | Intraphase strain heterogeneity in nonlinear composites : a computational approach[END_REF] or of high-order polynomial expansions of ǫ(r) [START_REF] Shodja | Elastic fields in double inhomogeneity by the equivalent inclusion method[END_REF]. To avoid the numerical difficulty associated with solving the integral equation exactly, and to proceed with tractable analytical derivations, the strains under the integrals are assumed to be equal to their averages over these volumes, i.e. ǭ_i = (1/V_i) ∫_{V_i} ǫ(r) dV_r (with i = {a, b}):

ǫ(r) = E^d − ∫_{V_a} Γ^{∞,s}(r − r') : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] dV_{r'} − ∫_{V_b} Γ^{∞,s}(r − r') : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] dV_{r'} (2.41)

All uniform terms can then be extracted from the integrals:

ǫ(r) = E^d − [∫_{V_a} Γ^{∞,s}(r − r') dV_{r'}] : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − [∫_{V_b} Γ^{∞,s}(r − r') dV_{r'}] : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.42)

As in other mean-field models [START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], the strain fields are assumed to be uniform inside the inclusions and equal to their average values over these ellipsoidal volumes. The average strain in the inclusion V_b is derived as follows:

ǭ_b = E^d − (1/V_b) ∫_{V_b} ∫_{V_a} Γ^{∞,s}(r − r') dV_{r'} dV_r : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − (1/V_b) ∫_{V_b} ∫_{V_b} Γ^{∞,s}(r − r') dV_{r'} dV_r : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.43)

where ∫_{V_b} Γ^{∞,s}(r − r') dV_{r'} and ∫_{V_a} Γ^{∞,s}(r − r') dV_{r'} are uniform because V_b and V_a are ellipsoidal inclusions and r ∈ V_b ⊂ V_a (following Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]).
ǭ_b = E^d − [∫_{V_a} Γ^{∞,s}(r − r') dV_{r'}] : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − [∫_{V_b} Γ^{∞,s}(r − r') dV_{r'}] : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.44)

The average strain tensor in the inclusion V_a is derived following the same procedure:

ǭ_a = E^d − (1/V_a) ∫_{V_a} ∫_{V_a} Γ^{∞,s}(r − r') dV_{r'} dV_r : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − (1/V_a) ∫_{V_a} ∫_{V_b} Γ^{∞,s}(r − r') dV_{r'} dV_r : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.45)

Since ∫_{V_a} Γ^{∞,s}(r − r') dV_r is independent of r', the order of integration in the second term of the previous equation can be changed according to the Tanaka-Mori theorem [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF]:

ǭ_a = E^d − (1/V_a) ∫_{V_a} ∫_{V_a} Γ^{∞,s}(r − r') dV_{r'} dV_r : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − (1/V_a) ∫_{V_b} ∫_{V_a} Γ^{∞,s}(r − r') dV_r dV_{r'} : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.46)

Considering Eshelby's result [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF], ∫_{V_a} Γ^{∞,s}(r − r') dV_r is uniform because V_b ⊂ V_a, so that:

ǭ_a = E^d − [∫_{V_a} Γ^{∞,s}(r − r') dV_{r'}] : [(C^a − C^0) : ǭ_a + C^0 : E^p − C^a : ǫ*_a] − (V_b/V_a) [∫_{V_a} Γ^{∞,s}(r − r') dV_r] : [(C^b − C^a) : ǭ_b + C^a : ǫ*_a − C^b : ǫ*_b] (2.47)

Similarly to the inclusion problem, the following tensors P_{V_i} are defined from the volume integral of the symmetrized modified Green's function:

P_{V_i}(r) = ∫_{V_i} Γ^{∞,s}(r − r') dV_{r'} (2.48)

with i = {a, b}. Tensors S^j(V_i), with j = {0, a, b} and i = {a, b}, are written as the double contracted product of the P_{V_i} tensors with the stiffness tensors C^j:

S^j(V_i) = P_{V_i} : C^j (2.49)

As the inclusion shapes considered here are all ellipsoidal, both P_{V_i} and S^j(V_i) are uniform when r ∈ V_i. Clearly, the different tensors S^j(V_i) are to be considered as Eshelby-type tensors. Consequently, the approximated average strain in the inclusion V_b is given by:

ǭ_b = E^d − S^0(V_a) : E^p − [S^a(V_a) − S^0(V_a)] : ǭ_a − [S^b(V_b) − S^a(V_b)] : ǭ_b + [S^a(V_a) − S^a(V_b)] : ǫ*_a + S^b(V_b) : ǫ*_b (2.50)

And the approximated average strain in the inclusion V_a is given by:

ǭ_a = E^d − S^0(V_a) : E^p − [S^a(V_a) − S^0(V_a)] : ǭ_a − (V_b/V_a) [S^b(V_a) − S^a(V_a)] : ǭ_b + ((V_a − V_b)/V_a) S^a(V_a) : ǫ*_a + (V_b/V_a) S^b(V_a) : ǫ*_b (2.51)

The evaluation of the average strain in the sub-region V_a − V_b is also of interest here. It can be obtained from Eqs. 2.50 and 2.51 through the following relationship:

ǭ_{V_a−V_b} = (V_a/(V_a − V_b)) ǭ_a − (V_b/(V_a − V_b)) ǭ_b (2.52)

Equations 2.50 and 2.51 consist of an extension of the Tanaka-Mori observation to the case of a double-inclusion problem with heterogeneous elastic properties and eigenstrains. Finally, the expressions of the two unknown averaged strains within each volume are obtained by solving the following system of equations:

ǭ_a = E^d − ∆S^{a-0}_{V_a} : ǭ_a − (V_b/V_a) ∆S^{b-a}_{V_a} : ǭ_b + R_a (2.53)
ǭ_b = E^d − ∆S^{a-0}_{V_a} : ǭ_a − ∆S^{b-a}_{V_b} : ǭ_b + R_b (2.54)

with,

∆S^{a-0}_{V_a} = S^a(V_a) − S^0(V_a) (2.55)
∆S^{b-a}_{V_a} = S^b(V_a) − S^a(V_a) (2.56)
∆S^{b-a}_{V_b} = S^b(V_b) − S^a(V_b) (2.57)
R_a = −S^0(V_a) : E^p + ((V_a − V_b)/V_a) S^a(V_a) : ǫ*_a + (V_b/V_a) S^b(V_a) : ǫ*_b (2.58)
R_b = −S^0(V_a) : E^p + [S^a(V_a) − S^a(V_b)] : ǫ*_a + S^b(V_b) : ǫ*_b (2.59)

The solutions of Eqs.
2.53 and 2.54 are given by:

ǭ_b = [I + ∆S^{b-a}_{V_b} − (V_b/V_a) ∆S^{a-0}_{V_a} : (I + ∆S^{a-0}_{V_a})^{-1} : ∆S^{b-a}_{V_a}]^{-1} : [E^d + R_b − ∆S^{a-0}_{V_a} : (I + ∆S^{a-0}_{V_a})^{-1} : (E^d + R_a)] (2.60)
ǭ_a = (I + ∆S^{a-0}_{V_a})^{-1} : [E^d − (V_b/V_a) ∆S^{b-a}_{V_a} : ǭ_b + R_a] (2.61)

Interestingly, the expressions of ∆S^{a-0}_{V_a}, ∆S^{b-a}_{V_a}, ∆S^{b-a}_{V_b}, R_a and R_b defined in Eqs. 2.55-2.59 show that the generalization of the Tanaka-Mori method introduces a coupling between the averaged strain fields in each volume. Moreover, with the five S^j(V_i) tensors introduced in Eq. 2.49, relative shape and volume fraction effects between the two inclusions can be predicted. Note that the solution of Eq. 2.50 is not trivial, as it requires inverting a general fourth-order tensor. The tensors S^j(V_i) are obtained by use of a Gauss-Lobatto integration of the Green's tensors in Fourier space. This method is similar to that used in the viscoplastic self-consistent (VPSC) [START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF] and elasto-plastic self-consistent (EPSC) [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF] schemes. Using Hooke's law, the average stresses in V_b and V_a − V_b are given by:

σ̄_b = C^b : (ǭ_b − ǫ*_b) (2.62)
σ̄_{V_a−V_b} = C^a : (ǭ_{V_a−V_b} − ǫ*_a) (2.63)

The average stress in V_a is then given by:

σ̄_a = (V_b/V_a) σ̄_b + ((V_a − V_b)/V_a) σ̄_{V_a−V_b} (2.64)

As defined in the previous paragraph, the stored elastic energy per unit volume is given by:

Φ = (1/2V) ∫_V σ(r) : ǫ^el(r) dV_r (2.65)

Applying now Hill's result for heterogeneous elasto-plastic media [START_REF] Hill | Elastic properties of reinforced solids : some theoretical principles[END_REF][START_REF] Mandel | Cours de Mécanique des Milieux Continus[END_REF] to a heterogeneous elastic matrix with plastic incompatibilities enables one to derive a new closed form of the stored elastic energy density as follows:

Φ = (1/2) Σ : E^d + Φ_int = (1/2) Σ : E^d − (1/2V) ∫_V σ(r) : ǫ*(r) dV_r (2.66)

where Σ and Φ_int denote the macroscopic stress tensor and the internal part of the Helmholtz free energy density resulting from the plastic incompatibilities, respectively. The macroscopic stress and plastic strain tensors are given by the two subsequent integral equations:

Σ = (1/V) ∫_V σ(r) dV_r (2.67)
E^p = (1/V) ∫_V B^t(r) : ǫ*(r) dV_r (2.68)

with B a fourth-order concentration tensor linking the virtual local stress fields that would have existed if the medium had remained purely elastic to the macroscopic stress [START_REF] Hill | Elastic properties of reinforced solids : some theoretical principles[END_REF][START_REF] Mandel | Cours de Mécanique des Milieux Continus[END_REF]. In the present case, the macroscopic strain tensor E^d can be computed from the homogeneous medium elastic constants and the applied macroscopic stress in the following manner:

E^d = C^{0,-1} : Σ + E^p (2.69)

Given the spatial expression of the plastic incompatibilities (Eq. 2.36) and the mean-field approximation, the internal free energy density Φ_int can be rewritten as:

Φ_int = −((V − V_a)/2V) σ̄_{V−V_a} : E^p − ((V_a − V_b)/2V) σ̄_{V_a−V_b} : ǫ*_a − (V_b/2V) σ̄_b : ǫ*_b (2.70)

Because the medium is infinite, σ̄_{V−V_a} is assumed to be equal to Σ. Consequently,

Φ_int = −((V − V_a)/2V) Σ : E^p − ((V_a − V_b)/2V) σ̄_{V_a−V_b} : ǫ*_a − (V_b/2V) σ̄_b : ǫ*_b (2.71)

Relationship with the classical Eshelby results and the Nemat-Nasser and Hori solutions

Considering the particular case of an elastically homogeneous medium such that C^b = C^a = C^0, Eq.
2.40 reduces to:

ǫ(r) = E^d − [∫_{V_a} Γ^{∞,s}(r − r') dV_{r'}] : C^0 : (E^p − ǫ*_a) − [∫_{V_b} Γ^{∞,s}(r − r') dV_{r'}] : C^0 : (ǫ*_a − ǫ*_b) (2.72)

Then, the average strains in V_a and V_b also reduce to:

ǭ_a = E^d − S^0(V_a) : E^p + ((V_a − V_b)/V_a) S^0(V_a) : ǫ*_a + (V_b/V_a) S^0(V_a) : ǫ*_b (2.73)
ǭ_b = E^d − S^0(V_a) : E^p + [S^0(V_a) − S^0(V_b)] : ǫ*_a + S^0(V_b) : ǫ*_b (2.74)

where S^0(V_i) = P_{V_i} : C^0 are the elastic tensors associated with C^0 and V_i. As expected, Eqs. 2.73 and 2.74 correspond to the first extension of the Tanaka-Mori scheme observed by Hori and Nemat-Nasser [START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF] and Nemat-Nasser and Hori [START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF]. Note that in the case of homogeneous elasticity, ∆S^{a-0}_{V_a}, ∆S^{b-a}_{V_a} and ∆S^{b-a}_{V_b} are null, so that Eqs. 2.61 and 2.60 become Eqs. 2.73 and 2.74, respectively. Similarly, if one considers the eigenstrain in V_a − V_b and the overall plastic strain E^p to be null, the average strain in V_b reduces to Eshelby's solution to the inclusion problem:

ǭ_b = E^d + S^0(V_b) : ǫ*_b (2.75)

Application to first-generation tensile twinning in magnesium

For the present application, a slightly simplified version of the elasto-static Tanaka-Mori scheme described above is considered, since the matrix does not contain an overall plastic strain incompatibility E^p. The local elastic stiffness tensors, associated with each inclusion/twin domain, are related to that of medium "0" by simple rotation operations. Note that, for the sake of consistency, all calculations need to be performed in a reference coordinate system, chosen arbitrarily as that associated with the grain. As a result, the elastic moduli of the inclusions, expressed in the reference coordinate system, are given by:

C^c_ijkl = C^0_mnpq R^c_im R^c_jn R^c_kp R^c_lq (2.76)

where c = {a, b} and R^c is the rotation matrix representing the misorientation of the inclusion "c" (Figure 2.2). Note that these cumulative rotations may transform transversely isotropic tensors into anisotropic tensors. The elastic constants of magnesium expressed in the crystal reference frame are extracted from [START_REF] Kocks | Texture and anisotropy[END_REF] (the corresponding 6 x 6 stiffness matrix, given in GPa, is omitted here). Twinning on the {10 12} planes is common to all h.c.p. materials. Because experimental measurements of the development of internal strains within both a parent and a {10 12} twin phase are available for the magnesium alloy AZ31 [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF], the generalized Tanaka-Mori method presented previously is applied to this problem. Therefore, the misorientation between V_a − V_b, representing the original parent crystal, and the "unbounded" body V is set to zero, and the eigenstrain in the sub-region V_a − V_b, ǫ*_a, is equal to the null tensor. In V_b, a non-zero eigenstrain (only the shear components along the axes e'_2 and e'_3 are non-null) is prescribed in order to restore the twinning shear, whose unit magnitude is equal to 0.131 according to [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF].
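As a hedged illustration of Eq. 2.76, the sketch below rotates a transversely isotropic stiffness tensor into a twin frame with a single einsum contraction. The hexagonal constants used are representative literature values for Mg (an assumption, as the matrix of the thesis is not reproduced above), and the rotation angle is the well-known approximate {10 12} twin misorientation.

```python
import numpy as np

# Sketch of Eq. 2.76: C^c_{ijkl} = C^0_{mnpq} R_im R_jn R_kp R_lq.
# Hexagonal elastic constants for Mg in GPa (representative literature values,
# assumed for illustration only):
C11, C12, C13, C33, C44 = 59.5, 25.9, 21.8, 61.6, 16.4
C66 = 0.5 * (C11 - C12)

voigt = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}
Cv = np.array([[C11, C12, C13, 0, 0, 0],
               [C12, C11, C13, 0, 0, 0],
               [C13, C13, C33, 0, 0, 0],
               [0, 0, 0, C44, 0, 0],
               [0, 0, 0, 0, C44, 0],
               [0, 0, 0, 0, 0, C66]])

# Expand the 6x6 Voigt matrix into the full fourth-order tensor C^0:
C0 = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                C0[i, j, k, l] = Cv[voigt[(i, j)], voigt[(k, l)]]

# Rotation of about 86.3 degrees around e1, close to the {10-12} twin
# misorientation in Mg:
phi = np.deg2rad(86.3)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(phi), -np.sin(phi)],
              [0.0, np.sin(phi),  np.cos(phi)]])

# Eq. 2.76 as a single contraction:
Ct = np.einsum("mnpq,im,jn,kp,lq->ijkl", C0, R, R, R, R)
print("rotated C'_2222 =", round(Ct[1, 1, 1, 1], 2), "GPa")
```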
Clearly, the use of a homogeneous eigenstrain within the twin domain is an approximation of the strain state within that domain, since all the twinning shear strain is effectively concentrated at the twin interface. Note that a similar approximation was made in Lebensohn et al. [START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF] to study twin nucleation in anisotropic materials. The local frame is associated with the twin domain: axis e'_1 is perpendicular to the twinning shear direction η_1 and lies in the undistorted plane K_1; axis e'_2 is parallel to the twinning shear direction and also lies in the undistorted plane K_1; the third axis e'_3, along which thickening occurs, is the cross product of the first axis with the second one.

Figure 2.3 - Local frame (e'_1, e'_2, e'_3) associated with {10 12} tensile twinning. The reference coordinate system (e_1, e_2, e_3) associated with the crystal structure and the crystallographic coordinate system (a_1, a_2, a_3, c) are also shown.

The effect of both the twin and grain shapes on the average internal stresses within each phase is studied first. In order to facilitate the understanding of relative shape effects in each inclusion (i.e. the parent grain and the twin phase), axes 1 and 2 of each ellipsoid have the same length. R denotes the aspect ratio of the ellipsoid length along axis 1 divided by that along axis 3, which corresponds to the axis of thickening. Therefore, large values of R denote flat ellipsoidal domains, while R equal to 1 describes a perfectly spherical domain. In order to isolate the effects of relative volume from the shape effects, the twin volume fractions are arbitrarily fixed here to 2.5 and 5 percent and R is varied in both the twin and parent domains. Figures 2.4a, 2.4b, 2.4c and 2.4d present the evolutions of the resolved shear stress on the twin system in the twin ((a) and (c)) (i.e. V_b) and in the parent ((b) and (d)) (i.e. V_a − V_b) domains, respectively, as a function of the ratio R_twin. Simulations are repeated for several initial grain shapes described by R_parent. Figure 2.4 suggests several interesting shape effects. First, it is found, by comparison of the values of the resolved shear stresses (RSS) in the parent and twin domains, that for some grain and twin shapes the approach can predict the stress reversal, i.e. RSS of opposite signs in the twin and parent domains. This finding is in qualitative agreement with that experimentally measured in Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. 3DXRD measurements showed that the difference in the RSS of the parent and twin domains depends on the twin volume fraction. For small twin volume fractions a stress reversal is observed. However, when the twin has reached a "critical" size, the stress reversal is no longer observed [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. Interestingly, it is found by comparison of Figures 2.4a and 2.4b that the relative shape effect between the parent and twin phases has an effect similar to that of the twin fraction discussed above. Namely, for a given parent shape, described by R_parent, an increase in R_twin (e.g. flattening of the twin phase) leads to an increase in the resolved shear stress in the parent phase.
Such an increase can affect the sign of the RSS in the parent domain. As the RSS in the twin domain remains negative, the relative shape effect shown here reveals that the occurrence of a stress reversal depends on the relative shape of the twin and parent phases. Second, as otherwise predicted by the elementary solution to the inclusion problem proposed by Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF] and shown in Figure 2.4a, it is found that the parent grain shape has no effect on the stress state within the twin phase. Furthermore, it is found that, at fixed relative volume fractions, the magnitude of the RSS in the twin domain decreases with an increase in R_twin. In other words, the internal stresses in the twin are the lowest in magnitude for a flat ellipsoidal twin and the highest for a spherical one. This is consistent with the experimentally observed twin shapes. Third, it is found that both the twin and grain shapes affect the stress state within the parent domain, but in opposite directions. Namely, while increasing R_twin leads to an increase in the stress state within the parent grain, the same effect is produced by a decrease in R_parent. Note that the stress state within the twin phase is far more affected by relative shape effects than that in the parent domain (Figure 2.4b). Note also that this dependence is affected by the relative twin volume fraction. Finally, comparing Figures 2.4a and 2.4c, corresponding to the two different twin fractions, shows that the twin volume fraction only affects the magnitude of the stress states within each phase but not the trends associated with changes in the shape of the twin and parent domains. With this, it is to be concluded that, in the range of twin fractions studied here, the orders of magnitude of the RSS in the parent and twin domains differ by a factor of about 100, while experimental measurements do not exhibit such a difference. It is to be expected that an elasto-plastic approach, as opposed to the purely elastic accommodation method proposed here, will limit the stress state within the twin domains, as the magnitude of the elastic incompatibility could then be accommodated by plastic deformation modes. However, this is out of the scope of the present work and will be the objective of a further study. As experimental measurements of back-stresses within the twin and parent domains showed that their magnitude evolves during growth [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF], it is desired here to qualitatively reproduce the growth of a twin in a parent grain by computing the evolution of the RSS in both phases for a fixed grain shape, with R_parent = 3, and a twin of increasing thickness. Initially, the twin is a flat ellipsoid with axes 1 and 2 such that the twin is spread over the entire surface available on the twin plane. Note that, as opposed to the previous case, the simultaneous effects of both the shape and the relative volume fraction are evaluated here. As shown in Figure 2.5, where symbols denote the experimentally measured RSS in the twin domains (reported from [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]), substantial changes in the back-stresses are predicted during the growth of the twin domain.
However, since a purely elastic accommodation is used here, the magnitude of the stress reversal predicted is much larger than that measured at the plastic strain levels reached in Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. On the contrary, as depicted in Figure 2.5, experimental measurements indicate that the magnitude of the back-stresses varies little with twin growth. Therefore, it is suggested from comparison with experimental measurements [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] that the elastic incompatibilities (evolving with twin shape and relative volume fraction effects) necessarily lead to an increase in plastic accommodation, via slip, during twin growth. This is likely to have a prominent role in the generation of the typically observed multi-lamellar twins, as the sequential nucleation of new twin lamellae allows for the accommodation of elastic incompatibilities during twin growth. Figure 2.5 exhibits the evolution with twin volume fraction, at a given parent geometry (R_parent = 3), of the internal stresses in the twin and parent domains. As revealed by the 3DXRD experiments and shown in Figure 2.4, the RSS in the parent phase is of opposite sign, i.e. positive, with respect to that in the twin phase when the twin volume fraction is small. In Figure 2.5, the change of sign occurs at a twin volume fraction equal to 6% and R_twin equal to 2.3. For a twin volume fraction equal to 5%, and R_twin and R_parent respectively equal to 2.3 and 3, Figure 2.4d shows that the RSS in the parent domain is negative, whereas, as shown in Figure 2.5, the RSS in the parent domain reaches zero when the twin volume fraction reaches 6%. This observation again illustrates the combined influence of the twin shape and of the twin volume fraction on the stress state in the parent domain. The specificity of the present extension of the Tanaka-Mori scheme is the consideration of heterogeneous elasticity, the effect of which is discussed here. As discussed in the previous section, Nemat-Nasser and Hori [START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF][START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF] dealt with problems in which two eigenstrains are placed in two overlapping inclusions. However, their scheme is limited to homogeneous elasticity. Hence, comparison of the two schemes allows for the investigation of the sole effect of elastic heterogeneity on the stress states in both the parent and twin domains. Let us recall that, in the case of first-generation twins, the eigenstrain in volume V_a − V_b is null. Consequently, the mean stresses in the twin obtained from the Hori and Nemat-Nasser scheme correspond to Eshelby's solution. In Figure 2.6, it is shown that elastic heterogeneity changes the trend of the stress evolution considerably. This effect is especially appreciable for the internal resolved shear stress of the twin on the twinning plane. Indeed, one observes in Figure 2.6b that the solution using homogeneous elasticity does not capture the experimentally observed decrease of the RSS in the twin (Figure 2.5). However, Figure 2.6a shows that, for small twin volume fractions, the present model and the Nemat-Nasser scheme display the same trend for the mean internal stresses in the twin, whereas, for larger twin volume fractions, the influence of heterogeneous elasticity becomes important and the two models evolve in opposite directions.
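Before moving to the elasto-plastic extension, the following sketch illustrates how the coupled system of Eqs. 2.53-2.54 can be solved numerically once the Eshelby-type tensors are known: in 6x6 matrix (Voigt/Mandel) notation, the fourth-order tensor inversion required by Eq. 2.60 reduces to a standard linear solve. All ∆S matrices and R vectors below are random placeholders standing in for actual S^j(V_i) values, and the loading is an assumed uniaxial compression.

```python
import numpy as np

# Sketch: solve Eqs. 2.53-2.54 for the average strains in V_a and V_b,
# with fourth-order tensors represented as 6x6 matrices and second-order
# tensors as 6-vectors. Entries are illustrative placeholders only.
rng = np.random.default_rng(0)
I6 = np.eye(6)

dS_a0_Va = 0.1 * rng.standard_normal((6, 6))   # stand-in for ΔS^{a-0}_{Va}
dS_ba_Va = 0.1 * rng.standard_normal((6, 6))   # stand-in for ΔS^{b-a}_{Va}
dS_ba_Vb = 0.1 * rng.standard_normal((6, 6))   # stand-in for ΔS^{b-a}_{Vb}
R_a = 0.01 * rng.standard_normal(6)            # stand-in for R_a
R_b = 0.01 * rng.standard_normal(6)            # stand-in for R_b
E_d = np.array([0.0, 0.0, -0.01, 0.0, 0.0, 0.0])  # assumed macroscopic strain
fb_over_fa = 0.05                               # V_b / V_a (assumed)

# Stack Eqs. 2.53-2.54 as one 12x12 linear system A x = b, x = (eps_a, eps_b):
A = np.block([[I6 + dS_a0_Va, fb_over_fa * dS_ba_Va],
              [dS_a0_Va,      I6 + dS_ba_Vb]])
b = np.concatenate([E_d + R_a, E_d + R_b])
eps_a, eps_b = np.split(np.linalg.solve(A, b), 2)
print("average strain in V_a:", eps_a.round(5))
print("average strain in V_b:", eps_b.round(5))
```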
2.3 The Double Inclusion Elasto-Plastic Self-Consistent (DI-EPSC) scheme

Encouraged by the promising results obtained with the approach introduced in the previous paragraph, the double-inclusion elasto-plastic self-consistent scheme was then developed. It consists of an adaptation of the EPSC model [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF][START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] to account for the direct mechanical interaction between the parent and twin phases during the development of intra-granular twins. With reference to the experimental and modeling results of Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], the role of parent/twin interactions is examined, especially regarding the predictions of internal strains and stresses within twin domains. The present section is organized as follows. In the first part, the new double-inclusion elasto-plastic self-consistent (DI-EPSC) scheme is introduced; the concentration relations for non-twinned grains, twin and parent domains are detailed, and the single crystal plasticity model is described. The second part is dedicated to the quantification of the impact of the topological coupling between the twin and parent phases on the model predictions, via the observation of latent effects induced by twinning and the comparison of results for an extruded and a randomly textured AZ31 alloy. Finally, in the third part, the influence of secondary slip on the material response is shown, with a focus on how to model the twin stress state at the onset of twinning by studying two limit initial configurations in which twins are either assumed to have the same stress state as the parent domain or to be fully relaxed.

DI-EPSC model

The idea, to be mathematically derived in the following, is to distinguish between grains not containing twins and grains containing twin domains. In the self-consistent approach, the polycrystal is represented as an ensemble of inclusions (i.e. grains containing or not containing twins) and the average strains and stresses within each domain are obtained from solving a specific inclusion problem yielding a concentration rule that relates the average local stress or strain fields to their macroscopic equivalents. Here, in the case of grains not containing twins, the inclusion is assumed to be embedded in a homogeneous equivalent medium (HEM) with properties and mechanical response corresponding to those of the polycrystal. Such concentration laws are similar to those initially derived by Hill [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF] and based on the work of Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]. In the case of grains containing twins, new concentration relations are derived based on a double-inclusion topology, referred to as DI-EPSC in the following. Figure 2.7a presents the current uncoupled approach [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] while Figure 2.7b presents the new coupled one.

General equations

The fourth-order tensors L^eff and L^i denote the linearized tangent moduli of the HEM and of a given phase "i", respectively.
The latter may correspond to a non-twinned grain "c", a twinned grain "g", a twin domain "t" or a parent domain "g-t" associated with "g". The constitutive response of the HEM is thus given by:

Σ̇ = L^eff : Ė (2.77)

where Σ̇ and Ė are respectively the macroscopic stress rate and strain rate tensors associated with the HEM. In a similar fashion, the constitutive equation of phase "i" is expressed as follows:

σ̇_i = L^i : ǫ̇_i (2.78)

where σ̇_i and ǫ̇_i denote the stress rate and strain rate tensors of phase "i", respectively. Concentration tensors then provide a link between the local and global mean fields. In the present formulation, these are formally expressed as follows:

ǫ̇_i = A^i : Ė (2.79)
σ̇_i = B^i : Σ̇ (2.80)

Here A^i and B^i denote the strain and stress concentration tensors. These differ depending on whether or not the considered grain contains a twin domain. From homogeneous boundary conditions and averaging conditions, the overall mechanical response of the material is obtained by enforcing the macro-homogeneity conditions:

Σ̇ = <σ̇_i> (2.81)
Ė = <ǫ̇_i> (2.82)

Combining Eqs. 2.81, 2.82 and 2.77, the overall tangent modulus is obtained self-consistently as follows:

L^eff = <L^i : A^i> : <A^i>^{-1} (2.83)

Integral equation

Considering spatial fluctuations, denoted δL, of the linearized tangent moduli with respect to a reference homogeneous medium, denoted L^0, the tangent modulus L decomposes as follows:

L(r) = L^0 + δL(r) (2.84)

At any point r within the volume, the local constitutive equation is given by:

σ̇(r) = L(r) : ǫ̇(r) (2.85)

Upon introducing Eq. 2.84 into Eq. 2.85 and enforcing both static mechanical equilibrium and compatibility of the total local strain, one obtains a Navier-type equation for the heterogeneous medium:

∇·(L^0 : ∇^s u(r)) + f*(r) = 0 (2.86)

where f* represents the body forces resulting from the heterogeneity within the medium and ∇^s u denotes the symmetrized displacement gradient:

f*(r) = ∇·(δL(r) : ǫ̇(r)) (2.87)

A solution to this boundary value problem is given by the Lippmann-Schwinger-Dyson-type integral equations. Following Lipinski and Berveiller [START_REF] Lipinski | Elastoplasticity of micro-inhomogeneous metals at large strains[END_REF], the strain rate field at any material point of position vector r is given by:

ǫ̇(r) = Ė − ∫_V Γ^{∞,s}(r − r') : δL(r') : ǫ̇(r') dV_{r'} (2.88)

where Γ^{∞,s}(r − r') denotes the symmetric modified Green's tensor, given by:

Γ^{∞,s}_ijkl(r − r') = −(1/2)[G^∞_ik,jl(r − r') + G^∞_jk,il(r − r')] (2.89)

Double inclusion geometry for twinned grains and mean-field approximation

In the present case, the geometry of the problem is as follows: the twinned grain, embedded in the HEM, occupies a volume V_g that contains one single twin domain of volume V_t (such that V_t ⊂ V_g). Therefore, the parent volume is given by V_g − V_t. The tangent moduli of the overall inclusion (i.e. twin and parent phases combined), of the twin domain and of the parent domain are denoted L^g, L^t and L^{g-t}, respectively. Similarly, their volume fractions are denoted f_g, f_t and f_{g-t}. Let us introduce the characteristic functions θ_g(r) and θ_t(r) associated with V_g and V_t, respectively. These functions are either equal to the identity or to the null tensor inside and outside their corresponding volumes. Consequently, the spatially varying tangent modulus δL(r) can be expressed as:

δL(r) = (L^{g-t} − L^0) θ_g(r) + (L^t − L^{g-t}) θ_t(r) (2.90)

Note here that solely the stiffness tensors L^t and L^{g-t} are accessible via the constitutive law. Introducing Eq. 2.90 into Eq.
2.88, one obtains an integral expression of the local strain rate:

ǫ̇(r) = Ė − ∫_{V_g} Γ^{∞,s}(r − r') : (L^{g-t} − L^0) : ǫ̇(r') dV_{r'} − ∫_{V_t} Γ^{∞,s}(r − r') : (L^t − L^{g-t}) : ǫ̇(r') dV_{r'} (2.91)

Such an expression can also be solved exactly via the use of fast Fourier transforms or, as proposed here, can be approximated using the average strains in the different domains, as e.g. in Berveiller et al. [START_REF] Berveiller | The problem of two plastic and heterogeneous inclusions in an anisotropic medium[END_REF]. The volume average strain rate, denoted ǫ̇_i, within a volume V_i (with i = g, g-t, t) is defined as:

ǫ̇_i = (1/V_i) ∫_{V_i} ǫ̇(r) dV_r (2.92)

Substituting the local strain rate by its expression (Eq. 2.91), the previously defined volume average strain rate becomes:

ǫ̇_i = Ė − (1/V_i) ∫_{V_i} ∫_{V_g} Γ^{∞,s}(r − r') : (L^{g-t} − L^0) : ǫ̇(r') dV_{r'} dV_r − (1/V_i) ∫_{V_i} ∫_{V_t} Γ^{∞,s}(r − r') : (L^t − L^{g-t}) : ǫ̇(r') dV_{r'} dV_r (2.93)

In the following, this last equation is applied to "i" = "g" or "t". Because V_t and V_g are ellipsoidal and V_t ⊂ V_g, the order of integration in the computation of the averaged strain rate fields can be inverted according to the Tanaka-Mori theorem [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF]. In the present double-inclusion topology, the strain rate ǫ̇(r') appearing in Eq. 2.91 takes different values in the subdomains "t" and "g-t" and is uniform neither in V_g nor in V_t, even in the case of ellipsoidal inclusions. However, the intuitive approximation used here considers the mean fields ǫ̇_g and ǫ̇_t defined by Eq. 2.92. One can introduce the tensors P_{V_i} as the integral of the symmetrized modified Green's function over the volume V_i:

P_{V_i}(r) = ∫_{V_i} Γ^{∞,s}(r − r') dV_{r'} (2.94)

Similarly, let us introduce the tensors S^j_{V_i} as the double dot product of the P_{V_i} tensors with the incremental stiffnesses L^j, with j = g, g-t, t, such that:

S^j_{V_i} = P_{V_i} : L^j (2.95)

Self-consistent solutions for twinned grains

In the case of ellipsoidal inclusions, the tensors P_{V_i} and S^j_{V_i} are uniform when r ∈ V_i. Note the similarity between the expression of the S^j_{V_i} tensors and Eshelby's tensor. Applying the self-consistent condition, i.e.
L^0 = L^eff, the strain rate solutions for the twin domains "t" and for the twinned grains "g" (which include both the twin and the parent phases) are obtained as follows:

ǫ̇_t = Ė − ∆S^{g-t}_{V_g} : ǫ̇_g − ∆S^t_{V_t} : ǫ̇_t (2.96)
ǫ̇_g = Ė − ∆S^{g-t}_{V_g} : ǫ̇_g − (f_t/f_g) ∆S^t_{V_g} : ǫ̇_t (2.97)

with,

∆S^{g-t}_{V_g} = S^{g-t}_{V_g} − S^eff_{V_g} (2.98)
∆S^t_{V_t} = S^t_{V_t} − S^{g-t}_{V_t} (2.99)
∆S^t_{V_g} = S^t_{V_g} − S^{g-t}_{V_g} (2.100)

After some algebraic manipulations of the system of equations 2.96 and 2.97, the concentration tensors in the twinned grains and in the twin domains, denoted A^g and A^t respectively, can be written as:

A^g = [I + ∆S^{g-t}_{V_g} − (f_t/f_g) ∆S^t_{V_g} : (I + ∆S^t_{V_t})^{-1} : ∆S^{g-t}_{V_g}]^{-1} : [I − (f_t/f_g) ∆S^t_{V_g} : (I + ∆S^t_{V_t})^{-1}] (2.101)
A^t = (I + ∆S^t_{V_t})^{-1} : (I − ∆S^{g-t}_{V_g} : A^g) (2.102)

The average strain rate in the parent subdomain V_g − V_t is estimated here using a simple averaging procedure:

ǫ̇_{g-t} = (1/(f_g − f_t)) (f_g A^g − f_t A^t) : Ė (2.103)

As a result, the associated concentration tensor, denoted A^{g-t}, is given by:

A^{g-t} = (1/(f_g − f_t)) (f_g A^g − f_t A^t) (2.104)

Self-consistent solutions for non-twinned grains

In the case of non-twinned grains, the tensors A^c and B^c are determined via the use of Hill's classic self-consistent interaction law [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF]:

σ̇_c − Σ̇ = −L^eff : ((S^c)^{-1} − I) : (ǫ̇_c − Ė) = −L^{*,c} : (ǫ̇_c − Ė) (2.105)

Here S^c denotes the Eshelby tensor, which depends on the grain shape and on the overall instantaneous elasto-plastic stiffness, I denotes the fourth-order identity tensor, and L^{*,c} is Hill's constraint tensor [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF]. After simplification, the expression of the strain concentration tensor is given by the following equation:

A^c = (L^c + L^{*,c})^{-1} : (L^eff + L^{*,c}) (2.106)

It is noteworthy that setting f_t = 0 in Eq. 2.101 allows us to retrieve Eq. 2.106.

Single crystal constitutive model

The crystal plasticity constitutive model adopted here is based on Schmid's law for slip activity and on an extended Voce hardening law, which are briefly summarized in the following. Given a deformation system "s" (i.e. either slip or twin) of a phase "i", which may correspond to a non-twinned grain "c", a parent domain "g-t" or a twin domain "t", and denoting m^s the Schmid tensor of system "s", the first consistency condition simply states that plasticity can occur if the resolved shear stress (RSS) on system "s" is equal to a critical resolved shear stress (CRSS) denoted τ^s, as follows:

m^s : σ_i = τ^s (2.107)

The second condition states the necessity for the system "s" to remain on the yield surface during a deformation increment:

m^s : σ̇_i = τ̇^s (2.108)

In addition, for any system "s", the plastic shear strain rate is necessarily positive:

γ̇_s > 0 (2.109)

In the case of slip, a negative shear strain rate corresponds to a positive shear strain rate in the opposite direction. Because twinning is considered as pseudo-slip, twin nucleation is controlled by the three conditions presented previously (Eqs. 2.107, 2.108 and 2.109). Moreover, a twin forms with exactly the same hardening parameters and variables (e.g. CRSS) as those of its parent domain. Then, from purely kinematic considerations, the twin volume fraction evolves as follows:

ḟ_t = γ̇_t / s (2.110)

where ḟ_t denotes the twin volume fraction increment, γ̇_t the shear strain increment on the twinning system of the parent phase and s the characteristic twinning shear (s = 0.13).
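A minimal sketch of this pseudo-slip bookkeeping (Eqs. 2.107-2.110) is given below; the twin plane normal, shear direction, stress state and CRSS are invented for illustration only and do not correspond to a calibrated system.

```python
import numpy as np

# Sketch of Eqs. 2.107-2.110: check Schmid's law on a twin system and
# update the twin volume fraction from the shear increment.
s_char = 0.13  # characteristic twinning shear

def schmid_tensor(n, b):
    """Symmetric Schmid tensor m = (n x b + b x n) / 2 for plane normal n and shear direction b."""
    n, b = n / np.linalg.norm(n), b / np.linalg.norm(b)
    return 0.5 * (np.outer(n, b) + np.outer(b, n))

# Illustrative (assumed) twin geometry and loading:
m = schmid_tensor(np.array([0.0, 0.59, 0.81]), np.array([0.0, 0.81, -0.59]))
sigma = np.diag([0.0, 0.0, -120.0])  # MPa, assumed uniaxial compression
tau_crss = 30.0                      # MPa, assumed twin CRSS

rss = np.tensordot(m, sigma)         # resolved shear stress, Eq. 2.107
if rss >= tau_crss:                  # first consistency condition
    dgamma = 1e-3                    # shear increment from the solver (assumed)
    df_twin = dgamma / s_char        # Eq. 2.110
    print(f"twin active: RSS = {rss:.1f} MPa, d(f_t) = {df_twin:.2e}")
else:
    print(f"twin inactive: RSS = {rss:.1f} MPa < CRSS")
```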
Note that the newly introduced topology does not allow twin multiplicity. The direct coupling between the parent and twin phases strongly decreases the damping and stabilizing effect of the HEM. The new topology makes the twin and parent internal states and stiffnesses dependent on each other. Therefore, fulfilling the consistency conditions for both domains in the same iteration becomes more difficult. Following the tangent linearization of the material constitutive equations [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF][START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF], the relation between the shear strain increment and the strain rate in phase "i" is given by:

γ̇_s = f^s : ǫ̇_i (2.111)

where,

f^s = Σ_{s'} (X^{-1})_{ss'} m^{s'} : C^i (2.112)

C^i is the elastic stiffness tensor and (X^{-1})_{ss'} a square matrix whose dimension equals the number of potentially active systems in phase "i". It is expressed as:

X_{ss'} = m^s : C^i : m^{s'} + V^s(Γ) h_{ss'} (2.113)

The Voce hardening law is of the form:

τ^s = τ^s_0 + (τ^s_1 + θ^s_1 Γ)(1 − exp(−θ^s_0 Γ / τ^s_1)) (2.114)

and the hardening rate is given by:

τ̇^s = Σ_{s'} V^s(Γ) h_{ss'} γ̇_{s'} (2.115)

where V^s(Γ) describes the hardening of slip system "s" with the accumulated plastic strain Γ, and h_{ss'} describes the latent interactions between the different deformation systems:

∆τ^s/∆Γ = V^s(Γ) = θ^s_1 + (θ^s_0 − θ^s_1 + θ^s_0 θ^s_1 Γ / τ^s_1) exp(−θ^s_0 Γ / τ^s_1) (2.116)

τ^s_0, τ^s_1, θ^s_0 and θ^s_1 are hardening parameters presented in the following section. Finally, the instantaneous single crystal stiffness is given by the following formula:

L^i = C^i : (I − Σ_s m^s ⊗ Σ_{s'} (X^{-1})_{ss'} m^{s'} : C^i) (2.117)

where the operator ⊗ is the tensor dyadic product.

Influence of initial twin stress state

From a modeling standpoint, the twinning transformation (inception and propagation of the twin) occurs out of equilibrium and at a very fast rate, leading to acoustic emission. It can thus be seen as an instantaneous process. From the physics standpoint, the twin/parent interaction strongly affects the internal states within the parent and twin phases. Twin growth tends to shear the parent domain in the twinning direction. To model such a complex mechanism, two distinct assumptions can be considered. In the first case, in order to respect both quasi-static stress equilibrium in the twinned grains before and after twin occurrence and the overall stress equilibrium conditions, the assumption consists of imposing the parent stress state as the initial twin stress state. Therefore, the Cauchy stress tensors of the parent and twin phases can be written, at the inception of the twin, as follows:

σ_t = σ_{g-t} (2.118)

This assumption implies that the stress in the parent is totally transmitted to the newly formed twin without accommodation. In the following, it will be referred to as the "unrelaxed initial twin stress state" estimate. However, another assumption considers that twins initially behave like cracks (i.e. total stress relaxation). In order to do so, we chose an elementary case in which the twin is taken as stress-free at inception.
Mathematically, this corresponds to a state in which the Cauchy stress tensor of the twin phase is initially equal to the null tensor:

σ_t = 0 (2.119)

This assumption will be referred to as the "relaxed initial twin stress state" estimate. At the onset of twinning, and only at that time step, such an approach violates global equilibrium but minimizes the local energy. At the time step following nucleation, stress equilibrium is naturally restored via the use of the self-consistent scheme. In both cases, the twin is assumed to behave elastically at nucleation. Therefore, Hooke's law is used to relate the initial elastic twin strain tensor to the twin domain stress tensor, which, under the assumption considered, is equal to either the parent domain stress tensor or the null tensor. The initial total strain is then expressed as the sum of the elastic twin strain and the plastic strain of the parent domain. The results of both approximations will be analyzed and discussed further in this chapter.

Computational flowchart

The computational flowchart shows how and when the tangent moduli and concentration tensors for non-twinned grains, twin and parent domains are numerically derived (see Figure 2.8). In Figure 2.8, the subscripts "n" and "n-1" indicate the "self-consistency conditions" loop iteration considered. At each time step, a macroscopic strain increment is imposed. In order to return the macroscopic stress increment and to calculate the linearized tangent modulus of the HEM, the DI-EPSC code has to compute the local tangent modulus and the stress and strain tensors in all grains. Consequently, it has to deal with both twinned and non-twinned grains. Consider first the case of a non-twinned grain. At the beginning of a given self-consistent loop iteration, denoted by "n" in the flowchart, all deformation systems fulfilling the first consistency condition (Eq. 2.107) are flagged as potentially active; the Cauchy stress tensors used in Eq. 2.107 have been computed at the previous self-consistent loop iteration, denoted by "n-1". From these potentially active systems, the instantaneous grain tangent modulus is estimated using Eq. 2.117. The latter is expressed as a function of the elastic stiffness tensor and the active slip systems. Mathematically, the individual contribution of a slip system "s" is represented by the second-order tensor f^s (Eq. 2.112), whose formula depends on the inverse of the reduced square matrix X (Eq. 2.113), and hence on the Voce hardening rate V (Eq. 2.116). The work of Hill [START_REF] Hill | Generalized constitutive relations for incremental deformation of metal crystals by multislip[END_REF] shows that, to obtain a unique stress rate corresponding to a given strain rate, it is sufficient that the matrix V^s h_{ss'} be positive semi-definite and the elastic stiffness tensor be positive definite. All latent and self-hardening coefficients h_{ss'} are equal to 1. Therefore, the hardening matrix V^s h_{ss'} is not always symmetric, but all its eigenvalues are positive or zero. However, it would have been physically incorrect to ignore the slip anisotropy. In addition, we observed numerically that the major symmetry of the local tangent moduli is almost respected: differences between symmetric non-diagonal components of tangent moduli greater than 1 MPa never exceeded 15.6%. Because the elastic stiffness tensor components are orders of magnitude larger than any hardening rate V, the symmetry of the elastic stiffness tensor dominates the tangent modulus.

Figure 2.8 - Computational flowchart of the DI-EPSC scheme.

Then, internal strains and stresses are computed using Hill's concentration relation (Eq. 2.106). The second consistency condition (Eq. 2.108), associated with the positive shear strain increment condition (Eq. 2.109), allows us to verify whether flagged systems are truly active. The tangent modulus and the local strain and stress increments are computed and checked again until the fulfillment of these two conditions. The procedure used randomly eliminates one of the potentially active systems and re-iterates the calculation, until either all the systems considered give positive shear or all have been eliminated, in which case the considered phase is assumed to be elastic. The latter is the case when simulating unloading using EPSC: in some grains the stress first 'slides back' along the yield surface, unloading plastic systems in succession, and eventually detaching from the yield surface. In other grains it detaches at the start of unloading, and goes from the plastic to the elastic state in one incremental step. Note here that, if the RSS exceeds the CRSS, the stress tensor is proportionally scaled to put it back on the yield surface. However, this situation occurs very rarely. For non-twinned grains, there is no difference compared to the EPSC algorithm developed by [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF]. Consider now the case of a twinned grain. The algorithm deals with parent and twin domains separately. The choice is made here to compute first the tangent moduli (Eq. 2.104) and stresses of the parent phases and then those of the twin phases (Eq. 2.102). As a result, in the case of parent domains, one must use the tangent moduli of the twin phases computed at the previous iteration. Such is not necessary when dealing with twin domains. It is found that this approach improves convergence. Once the computations on all grains have been performed, the overall tangent modulus, from which the self-consistency condition checks convergence, is calculated using Eq. 2.83. The major symmetry of the overall tangent modulus is enforced numerically. If the self-consistency condition is fulfilled, the DI-EPSC code returns the macroscopic stress increment (Eq. 2.77).

Application to AZ31 alloy

Material parameters and initial textures

The AZ31 alloy studied here is composed of 3 wt% Al, 1 wt% Zn, with restrictions on the transition impurities Fe, Ni and Cu. The initial textures considered are shown in Figure 2.9 [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. The hardening parameters used in the present simulations are the same as those used in the work of Clausen et al.
Application to AZ31 alloy

Material parameters and initial textures

The AZ31 alloy studied here is composed of 3 wt% Al and 1 wt% Zn, with restrictions on the transition impurities Fe, Ni and Cu. The initial textures considered are shown in Figure 2.9 [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. The hardening parameters used in the present simulations are the same as those used in the work by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] under the FIF assumption, which consists of assigning a finite volume fraction to the twins at nucleation. All values and parameters are presented in Table 2.1. In order to quantify the impact of the double inclusion topology, we did not re-fit the hardening model parameters, and we removed the "backstress" correction added by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] at twin inception. Moreover, all latent hardening constants were assumed to be equal to 1.

Latent effects induced by twinning

The DI-EPSC scheme introduces a more realistic topology for twinning, where twins are directly embedded in parent domains. Note that "latent" is meant here in the sense of capturing the effect of plasticity in the parent phase on hardening and hardening rates in the twin phase. All simulations presented in this paragraph consider the case of the extruded alloy. Only the unrelaxed initial twin stress state estimate is used in this section.

Figure 2.9 – Initial textures of (a) the extruded alloy (with the extrusion axis at the center of the pole figures) and (b) the randomly textured material

Figure 2.10a shows the macroscopic response of the material, loaded along its extrusion axis, as obtained from mechanical testing and from the EPSC and DI-EPSC schemes. The experimental stress-strain curve is characteristic of twinning-dominated compression, with a plateau in the early stages of the deformation and then a progressive stress increase as twins grow and slip occurs in the twins [START_REF] Muránsky | On the correlation between deformation twinning and l 'uders-like deformation in an extruded mg alloy : in situ neutron diffraction and epsc4 modeling[END_REF]. As the parameters were not fitted to this set of data, differences between experimental measurements and macroscopic predictions are to be expected. In spite of the different topologies, the macroscopic stress-strain curves predicted by the DI-EPSC scheme and the EPSC scheme are nearly identical. However, Figure 2.10b, which describes the evolution of the total twin volume fraction in the polycrystal, reveals that the total twin volume fraction predicted by DI-EPSC is lower than that predicted by EPSC. Predicting a similar overall mechanical response but different twin volume fractions necessarily implies significant differences in the calculation of internal stresses and in the selection of active slip systems within the twin and parent domains. Figure 2.11 presents the slip and twinning system activities in both the parent and twin domains as predicted with the current and extended EPSC schemes. In both cases twinning occurs in the early stages of compression: at 1% strain, 80% of the total number of twins have been created. Before twins appear, basal slip is the prevalent active slip system in the parent phase due to its low CRSS. Once twinning is activated, the activity of basal slip within the parent phase decreases strongly while prismatic slip, followed by pyramidal slip, is activated, consistent with the predictions of Capolungo et al. [START_REF] Capolungo | Slip-assisted twin growth in hexagonal close-packed metals[END_REF]. Regardless of the topology used (i.e. EPSC vs DI-EPSC), similar responses in the parent phases are to be expected.
Plasticity in the twin and the twin domain morphology do not significantly affect the average stress state within the parent phase. Within the twin domains, though, the two approaches yield drastically different predictions of the slip system activity. At strains lower than 1%, basal slip is the predominant deformation mode in twins. This is to be expected, as the twin orientation combined with the imposed stress state favors basal slip activation. However, as twins grow, the current model predicts that basal slip activity drops in favor of pyramidal slip. In spite of a CRSS nearly two times greater than that of prismatic slip and eight times that of basal slip, pyramidal slip is extensively activated within the parent and twin phases. Adapting the micromechanical scheme to the twinning topology directly enforces the twin/parent interaction and reveals another latent hardening effect that is generally described by single crystal plasticity models. Indeed, comparing the resolved shear stresses (RSS) projected on the twinning plane along the twinning direction within the twin and parent phases of an arbitrarily chosen twinned grain, one observes that the new DI-EPSC scheme predicts both a higher strain hardening rate and a higher RSS in the twin than the EPSC scheme (Figure 2.12). With the choice of hardening parameters associated with tensile twinning, hardening cannot occur on the twin system. Although not shown here, the effect of grain morphology on slip system activities was studied. From the pioneering work of Tanaka and Mori [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF], it is known that, for homothetic inclusions, grain and twin shape have a negligible contribution. This can be seen in Eqs. 2.92-2.93, in which the Eshelby-type tensors introduce the relative shape effects. When considering different morphologies for twins and parents, via the Eshelby-type tensors, it is found that the twin shape does not affect the stress state within the parent domains. However, it is found that ellipsoidal twin shapes tend to marginally increase the stress level within the twin domains but do not affect the selected slip systems. At the onset of twinning, twin volume fractions are too small for twin morphologies to influence internal stress and strain states. In the early stages of twin growth, volume fraction effects have a significant effect on local stress levels. Interestingly, the direct coupling between parent and twin phases leads to far more scattered accumulated plastic strain distributions. Figure 2.13 shows both the total shear strain in each single twin and parent domain and the averaged total shear strain in the twin and parent domains, as obtained from the DI-EPSC and the EPSC schemes. It appears that the standard deviation of the distribution of total accumulated shear strain in twin phases obtained from the new coupled approach is four times higher than that obtained from the previous uncoupled approach. This phenomenon is more visible in Figure 2.14, where the total accumulated shear strain distributions derived from the DI-EPSC scheme are less evenly distributed than those predicted by the EPSC scheme. The bin size was optimized using the following formula: w = 3.49 X n^{-1/3}, where w denotes the bin width, X the standard deviation associated with the total shear strains and n the number of twins. Note here that points corresponding to grains for which numerical stability cannot be guaranteed shall be disregarded.
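The bin-width rule above is Scott's normal reference rule; a minimal sketch of its use follows, assuming the per-twin shear strains are available as a NumPy array (the gamma-distributed sample is purely illustrative).

```python
import numpy as np

def scott_bin_width(x):
    """Scott's normal reference rule: w = 3.49 * std(x) * n**(-1/3)."""
    x = np.asarray(x, dtype=float)
    return 3.49 * x.std(ddof=1) * len(x) ** (-1.0 / 3.0)

# Hypothetical data: total accumulated shear strain in each twin domain.
twin_shears = np.random.default_rng(0).gamma(2.0, 0.01, size=500)
w = scott_bin_width(twin_shears)
bins = np.arange(twin_shears.min(), twin_shears.max() + w, w)
hist, edges = np.histogram(twin_shears, bins=bins)
```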
Nonetheless, the predictions seem more representative, as they directly result from the concentration relations (Eqs. 2.93-2.95). However, as shown by the black symbols, the averaged total shear strains in the twin domains seem to be insensitive to the new coupling, while the averaged total shear strain in the parent domains is slightly decreased. That is precisely why, in spite of a higher strain hardening in twins and a smaller twinned volume, the DI-EPSC scheme predicts a macroscopic stress-strain curve nearly identical to that obtained from the EPSC scheme without the backstress correction.

Influence of initial texture

To test the coupling between the parent and twin phases in a more general way, i.e. with a broader spectrum of active slip and twinning systems, the case of an initially randomly textured AZ31 alloy is investigated. In this paragraph, we only consider simulations with the unrelaxed initial twin stress state estimate. Figure 2.14 shows that the total shear strain distributions in twin phases derived from the DI-EPSC scheme are centered around a few peaks, while they are more evenly distributed when derived from the EPSC scheme. The heavy center of the DI-EPSC distributions is interpreted as resulting from the orientation of parent domains, which was initially favorable to an easy activation of twinning and to the occurrence of secondary slip. As deformation occurs and plasticity develops, parent grain configurations change, and therefore plastic strain accommodation in the twin domains is affected as a direct result of the new concentration laws. Although not shown here, the diversity of grain orientations limits the number of parent grains favorably oriented for twinning and, hence, lowers the total twin volume fraction in the case of an initially non-textured material. In addition, Figure 2.13 reveals that the initial texture has a marginal effect on the averaged total accumulated plastic strain in twin and parent domains, because secondary slip and plastic strain accommodation are controlled by criteria which are independent of the initial texture. However, Figure 2.14 shows that plastic strain accommodation in both the parent and twin phases depends on the parent-twin interaction. Moreover, even with an initially random texture, pyramidal slip remains the predominant active slip system in twins, but it is significantly less present compared to the first case with the extruded alloy (Figure 2.15).

Influence of initial twin stress state

Section 2.3.1.7 introduces the modeling challenges induced by twin inception and presents the two approximations that are used throughout this chapter. The present section focuses on analyzing and quantifying the influence of the initial twin stress state on the mechanical response of an extruded AZ31 alloy. The evolution of the RSS projected on the twinning plane along the twinning direction, in the cases of initially relaxed and unrelaxed twins, is shown in Figure 2.16. It reveals that, with initially relaxed twins, the strain hardening rate in the twin predicted by the DI-EPSC scheme is higher in the first 3% of deformation and then stabilizes at a value close to the one observed with unrelaxed twins. However, the new coupled approach predicts both a higher strain hardening rate and a higher strain hardening in the twins, regardless of the initial stress state of the twins.
In addition, comparison with the predictions obtained from the EPSC model with initially relaxed and unrelaxed twins shows that the strain hardening rate in the twin domains is not solely controlled by the twin-parent interaction in the case of the DI-EPSC model, or by the twin-HEM interaction in the case of the EPSC model. The strain hardening rate and, hence, the hardening are strongly dependent on the considered initial twin stress state. In parallel, the total twin volume fraction tends to increase with relaxed twins (Figure 2.10b). As expected, and as shown in Figures 2.10a and 2.16, initially relaxed twins lower both global and local stress levels, which become closer to the experimental ones. In addition, the averaged pyramidal slip activity in each twin is significantly lowered compared to the stress-equilibrated case (Figure 2.17). Note that imposing a null Cauchy stress tensor in the twin domain at the inception of the twin is a lower bound case. Another way to account for the stress accommodation induced by twinning consists of using an eigenstrain, representative of that due to the twinning shear, within the twin domain at the onset of twinning. However, considering both twinning as pseudo-slip in the parent phase and the twinning shear as an eigenstrain within the twin phase appears mechanically redundant. Therefore, an alternate approach would impose the eigenstrain within the twin domain and consider only slip as a deformation mechanism in the parent domain.

Conclusion

A new double inclusion micromechanical approach, generalizing the original Tanaka-Mori scheme, was introduced to study the evolution of internal stresses in both parents and twins during twin growth in h.c.p. materials. A first elasto-static scheme in heterogeneous elastic media with plastic incompatibilities was derived. The model was shown to reduce to the Nemat-Nasser and Hori scheme, as well as to the elementary inclusion problem of Eshelby, in some particular situations (e.g. homogeneous elasticity), and was first applied to the case of pure Mg to reproduce the average internal resolved shear stresses in the parent and the twinning phases. While this first study is limited to anisotropic elasticity with eigenstrains representing the twinning shears, it is suggested that the magnitude of these backstresses is sufficient to induce plastic deformation within twin domains. Moreover, a detailed analysis of the model shows that the predominant effect on the magnitude and the direction of the backstresses is due to heterogeneous elasticity, because of the large misorientations induced between the parent and the twin domains. It is also found here that the stress state within twin domains is largely affected by the shape of the parent phase. Clearly, all results shown here are limited to static configurations and neglect internal variable evolutions. Nonetheless, it is suggested that the application of the generalized Tanaka-Mori scheme to mean-field self-consistent methods shall yield more accurate predictions of the internal state within twin domains for real polycrystalline hexagonal metals like magnesium and associated alloys. Then, a second study investigated the evolution of internal stresses, strains and plasticity within twin and parent domains of Mg alloys via a new double inclusion-based elasto-plastic self-consistent scheme (DI-EPSC) that used the original Tanaka-Mori result to derive new concentration relations including the average strains in twins and twinned grains (double inclusions).
Then, twinned and non-twinned grains are embedded in an HEM whose effective behavior is determined via an implicit nonlinear iterative self-consistent procedure (the "DI-EPSC" model). Contrary to the existing EPSC scheme, which only considers single ellipsoidal inclusions, the new localization relations account for a direct coupling between parent and twin phases. Using the same hardening parameters as in [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], the comparison between EPSC, DI-EPSC and experimental data leads to three main results with respect to twinning and the associated plasticity mechanisms. First, it appears that, by introducing a new topology for twinning, the model captures the latent effects induced by twinning, i.e. the influence of plasticity in the parent phases on the hardening and hardening rates in the twin phases. Second, because twins are now directly embedded in the parent phases, the new concentration relations lead to more scattered shear strain distributions in the twin phases. Twin stress states are strongly controlled by the interaction with their associated parent domains. Third, the study clearly shows the importance of appropriately considering the initial twin stress state at twin inception.

The Electron Backscatter Diffraction (EBSD) technique is based on the collection of backscattered electrons and the indexing of Kikuchi diffraction patterns. EBSD scans provide a profusion of information regarding the texture, the presence of grain boundaries, grain morphologies, and the nature and crystallographic orientation of the different phases present in the material. The EBSD technique is particularly interesting because it enables the extraction of statistical information about the microstructure and the automatic computation of metrics that can be used, for example, to guide models. The present chapter introduces a new automated data collection numerical tool [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] that uses EBSD scans to generate statistical connections between twin features, microstructure and loading path. The first, second and third parts of this chapter describe SEM microscopes, the EBSD technique and the basic concepts of electron diffraction and EBSD pattern analysis, respectively. The final section describes the newly developed graph theory-based automated twin recognition technique for EBSD analysis.

Brief description of Scanning Electron Microscopes

In a scanning electron microscope, a beam of high energy electrons, emitted by either a thermoelectronic gun or a field emission gun (FEG), hits the sample. A system of electromagnetic lenses, also referred to as condensers, and coils enables the user to focus the beam and scan the whole surface of the sample. All incident electrons have quasi-parallel trajectories. Although it depends on the voltage chosen (between 0.1 kV and 30 kV), the diameter of the incident beam does not exceed a few nanometers. When incident electrons penetrate into the sample, they interact both elastically (i.e. with no energy loss) and inelastically with the atoms and electrons present at or near the surface.
These interactions result in the emission of secondary electrons (SE) (i.e., electrons produced by inelastic interactions of high energy electrons with valence electrons, causing the ejection of the electrons from their atomic orbitals), back-scattered electrons (BSE) (i.e., electrons produced by elastic interactions of high energy beam electrons with atomic nuclei), characteristic X-rays and photons. Elastic scattering consists of the deviation of the electron trajectory by the nucleus of an atom without loss of energy. Because of the mass difference between the electron and the nucleus, energy transfers are negligible; the energy loss induced by elastic scattering is smaller than 1 eV. The deviation angle, however, can be large. As a result, backscattered electrons are assumed to have the same energy as the beam electrons. It has also been shown that the number of backscattered electrons increases with the atomic number, Z. Depending on the input voltage and the Z number, the penetration depth varies from a few nanometers to 20-30 nanometers. In contrast, inelastic scattering induces a progressive loss of energy due to transfers between high energy electrons and valence electrons belonging to the different atomic orbitals of the specimen atoms. These high energy electrons can either be incident beam electrons, also called primary electrons, or back-scattered electrons. The excitation and the ionization of the sample atoms result in the emission of secondary electrons with low deviation angles and low energies (0-50 eV), X-rays, Auger electrons and photons. Secondary electrons can also be produced after back-scattered electrons strike pole pieces or other solid objects located in the vicinity of the sample. They are emitted isotropically from the superficial layers of the specimen; the depth of these layers can either be a few nanometers in the case of metals or 20-30 nm in the case of non-conducting materials. The secondary emission yield, defined as the ratio of the number of secondary electrons to the number of primary electrons, increases when the energy, i.e. the voltage, of the beam electrons decreases. Lower primary energies induce slower incident electrons. Except for light atoms (Z < 20), the secondary emission yield does not vary with atomic mass. However, the secondary emission yield has to be corrected, since a significant part of the detected secondary electrons consists of backscattered electrons. In addition, the ionization of atomic orbitals close to the nucleus triggers the emission of characteristic X-rays and Auger electrons, as shown in Figure 3.3. The mechanisms involved in the emission of X-rays and Auger electrons consist of electronic transitions between the ionized atomic orbital and outer orbitals, which bring the atom back toward its state of equilibrium. The energy levels associated with the different types of electrons mentioned in this paragraph are graphically represented in Figure 3.4. Any standard SEM therefore includes a secondary electron detector. It is also common to equip scanning electron microscopes with back-scattered electron detectors and X-ray spectrometers, as shown in Figure 3.2. Two types of electron guns exist, i.e. thermal electron guns and field emission guns. The former use either a heated tungsten wire or a lanthanum hexaboride crystal. Field emission guns can either be of the cold-cathode type, using tungsten single crystal emitters, or of the thermally assisted Schottky type (Figure 3.1), using emitters made of zirconium oxide.
While electrons are emitted from tungsten thermal guns with voltages between 10 kV and 30 kV, field emission guns are capable of producing incident electron beams with voltages between about 0.1 kV and 30 kV. The choice of the electron gun type has to be based on the nature of the materials studied. The resolution of a microscope can be defined as its ability to distinguish and separate very close "objects". The resolution is degraded by monochromatic and chromatic aberrations, resulting from the geometry and from the variation of the refractive index of the lenses, respectively. In addition to aberrations, the resolution of an optical microscope is limited by the diffraction of light. The image of a point through a lens is therefore not another point but a diffracted disk, called the Airy disk. The consequence of this phenomenon is that the disks corresponding to the images of two distinct points may overlap. Abbe's theory states that the resolution limit of a microscope is proportional to the incident wavelength and inversely proportional to the refractive index of the medium. Consequently, decreasing the wavelength improves the resolution limit and, hence, allows users to observe finer details. Using X-rays would then be ideal, but they cannot be focused. However, it is possible to generate electron beams whose wavelengths are of the same order of magnitude as X-rays. In both transmission and scanning electron microscopes, electrons are focused by several electromagnetic lenses.

Historical perspectives of the Electron Backscatter Diffraction Technique

The origins of the EBSD technique date back to 1928, when Shoji Nishikawa and Seishi Kikuchi pointed a beam of 50 keV electrons at a cleavage face of calcite, inclined at 6° to the vertical [START_REF] Kikuchi | Diffraction of cathode rays by mica[END_REF][START_REF] Nishikawa | The diffraction of cathode rays by calcite[END_REF]. Backscattered electrons were collected on photographic plates positioned perpendicular to the primary beam, at 6.4 cm behind and in front of the sample. The recorded patterns displayed black and white lines, as revealed by Figure 3.5. The existence of line pairs is due to multiple scattering and selective reflection. Subsequent developments include electron channeling patterns, studied by Joy [START_REF] Joy | Electron channeling patterns in the SEM[END_REF] at Oxford University, Kossel diffraction, by Biggin and Dingley [START_REF] Dingley | A general method for locating the x-ray source point in kossel diffraction[END_REF] at Bristol University, and electron backscatter patterns (EBSP), by Venables and Harland [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] at the University of Sussex. Note that EBS patterns are nothing else than Kikuchi patterns. Venables and Harland [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] obtained them with a TV camera and phosphor screens. Moreover, Venables [START_REF] Venables | Crystallographic orientation determination in the SEM using electron backscattering patterns and channel plates[END_REF] found an easy way to locate the pattern center, defined as the point of the phosphor screen closest to the impact area where the electron beam hits the sample. To do so, Venables placed three spheres on the sample surface, whose projections appear elliptical on the pattern (Figure 3.6).
The intersection point of the three major axes corresponds to the pattern center. In 1984, Dingley [START_REF] Dingley | Diffraction from sub-micron areas using electron backscattering in a scanning electron microscope[END_REF] developed and implemented an indexing algorithm capable of locating the pattern center numerically. In 1987, the first indexing software, based on Dingley's code, was released by Link Analytical, now Oxford Instruments. Dingley's model is still used by current EBSD systems. Five years later, Krieger-Lassen, Conradsen and Juul-Jensen [START_REF] Krieger Lassen | Image processing procedures for analysis of electron back-scattering patterns[END_REF] used the Hough transform, originally developed by Hough [START_REF] Hough | Method and means for recognizing complex patterns[END_REF] in 1962 to track high energy particles, to automatically detect and identify Kikuchi bands. The use of the Hough transform allows the system to transform parallel bands into collections of points. In 1993, Brent Adams [START_REF] Adams | Orientation imaging : the emergence of a new microscopy[END_REF], from Yale University, introduced the term "Orientation Imaging Microscopy" to describe the procedure that generates an orientation map. The technique consists in representing pixels of similar orientation with a unique color.

Figure 3.5 – Boersch (1937) iron Kikuchi patterns [START_REF] Boersch | About bands in electron diffraction[END_REF][START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF]

Adams and Dingley founded TexSEM Laboratory, alias TSL, to release their EBSD analysis system and chose Thermo Noran to distribute it. In 1999, EDAX purchased TSL, depriving Thermo Noran of their EBSD analysis system. Thermo Noran then turned to Robert Schwarzer [START_REF] Krieger Lassen | Automated crystal lattice orientation mapping using a computer-controlled sem[END_REF], from TU Clausthal, who had developed his own system for orientation and texture measurements, and to Joe Michael and Richard Goehner [START_REF] Michael | Advances in backscattered electron kikuchi patterns for crystallographic phase identification[END_REF], from Sandia National Laboratory. Joe Michael and Richard Goehner were amongst the first to use EBSD for phase identification. Another software package, very popular amongst geologists, was released by HKL Technologies and based on the work of Schmidt [START_REF] Schmidt | Computer aided determination of crystal orientation from electron channeling patterns in the sem[END_REF]. This software was widely used by scientists studying minerals because it included very efficient low-symmetry indexing algorithms. In April 2005, Oxford Instruments purchased HKL Technologies. Currently, both EDAX and Oxford Instruments continue to develop and sell their own EBSD analysis software.

Basic concepts of electron diffraction and diffraction pattern analysis

Regarding the technique itself, at each measurement point the electron beam hits the sample surface, which is tilted at about 70° with respect to the horizontal and polished beforehand. Primary accelerated electrons are either transmitted or reflected by the specimen atoms. The EBSD detector (Figures 3.2 and 3.8) collects backscattered electrons with low energy loss. Modern EBSD detectors are usually made of a phosphor screen, a compact lens to channel the electrons, and low and high resolution CCD camera chips for fast and slow measurements, respectively.
The impact of the electrons on the phosphor screen produces light that is converted into an electric signal by the CCD camera chips. The different Kikuchi bands and the pattern center are then identified by using an optimized Hough transform and Dingley's method, respectively. From the pattern center and the Kikuchi bands, EBSP software is capable of identifying the crystallographic structure and the orientation of the region struck by the beam. This step is called indexing and is repeated for each Kikuchi pattern recorded by the CCD chips, i.e. for each measurement point. Note that samples are usually scanned following a square or hexagonal grid.

Figure 3.6 – Image of a Kikuchi pattern obtained by using a phosphor screen and a TV camera, illustrating the method developed by Venables [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] to locate the pattern center. The elliptical black shapes correspond to the projected shadows of the three spheres placed at the surface of the sample [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].

Electron diffraction techniques rely on the principle of wave-particle duality, asserted by Louis de Broglie in 1925. According to this principle, a beam of accelerated electrons behaves as a wave whose wavelength is inversely proportional to the particle momentum. Wavelength and momentum are related by the following expression:

λ = h/p = h/√(2meV) (3.1)

where h denotes Planck's constant, i.e. 6.626×10⁻³⁴ J·s, p the momentum of the electrons, V the potential used to accelerate the beam, m the mass of an electron, i.e. 9.10938215×10⁻³¹ kg, and e the elementary electric charge of an electron, i.e. 1.6021765×10⁻¹⁹ coulombs. This equation is the fundamental relationship of wave mechanics. In 1927, Davisson and Germer [START_REF] Davisson | Diffraction of electrons by a crystal of nickel[END_REF] verified de Broglie's relationship by observing diffracted electrons after a beam of 54 eV struck a single crystal of nickel. Bragg's condition for constructive interference is written as

2 d_{hkl} sin θ = n λ (3.2)

with d_{hkl} the distance between two successive lattice planes whose family Miller indices are represented by (h, k, l), and n an integer denoting the order of reflection. As shown by Figure 3.9, the incident electron beam is diffracted at an angle 2θ. Thus θ is also the angle between the incident beam and the lattice planes.
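To make Eqs. 3.1-3.2 concrete, the following sketch computes the non-relativistic de Broglie wavelength for a given accelerating voltage and the corresponding first-order Bragg angle for a chosen inter-planar spacing. The physical constants are the standard values quoted above; the 20 kV voltage and the 0.245 nm spacing are illustrative assumptions only.

```python
import math

H = 6.626e-34         # Planck's constant (J.s)
M_E = 9.10938215e-31  # electron mass (kg)
E = 1.6021765e-19     # elementary charge (C)

def de_broglie_wavelength(voltage):
    """Non-relativistic electron wavelength (m) for an accelerating voltage (V), Eq. 3.1."""
    return H / math.sqrt(2.0 * M_E * E * voltage)

def bragg_angle(d_hkl, wavelength, n=1):
    """Bragg angle (rad) of order n from Eq. 3.2: 2 d sin(theta) = n lambda."""
    return math.asin(n * wavelength / (2.0 * d_hkl))

lam = de_broglie_wavelength(20e3)   # ~8.7e-12 m at 20 kV
theta = bragg_angle(0.245e-9, lam)  # illustrative d-spacing of 0.245 nm
print(f"lambda = {lam:.3e} m, theta = {math.degrees(theta):.3f} deg")
```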
While incident primary electrons have a narrow range of energies and directions, the inelastic scattering undergone by backscattered electrons broadens the spectrum of energies. Momentum changes caused by both elastic and inelastic scattering cause electrons satisfying Bragg's condition to scatter in all directions. Therefore, for a given plane {hkl}, electrons diffract in such a way that they form two cones located on each side of the plane. These cones are named Kossel cones and are shown in Figure 3.11. The projection of the two cones and the {hkl} plane onto the viewing screen consists of three lines. However, since more electrons are scattered forward than sideways or backward, the projected line corresponding to the cone formed by the electrons scattered forward appears brighter than the one corresponding to the second cone. The "bright" and "dark" lines are referred to as the excess and deficit lines (Figure 3.10). The spacing of the pair of Kikuchi lines, i.e. the pair formed by the excess and deficit lines, is the same as the spacing of the diffraction spots from the same plane. However, the position of the Kikuchi lines is strongly dependent on the orientation of the specimen. Although not proved here, it can also be shown that Kikuchi lines associated with planes {hkl} and {-h-k-l} are parallel. Because each diffraction band represents a lattice plane, the intersections of these bands correspond to zone axes. If more than two diffraction bands intersect at a single spot, the latter is called a pole. The width of the diffraction bands appearing on EBS patterns, denoted by w, is a function of both the electron wavelength, λ, and the inter-planar spacing, d_{hkl}, and is then expressed as

w = 2 sin⁻¹(λ/d_{hkl}) (3.3)

As shown by Figure 3.12, the width can also be determined experimentally from two position vectors, r and r′, extending from the pattern center (PC) and intersecting the sides of the band at right angles:

w = tan⁻¹(r′/z) − tan⁻¹(r/z) (3.4)

The angle between two bands can be computed directly from the phosphor screen (Figure 3.13). The unit normal vectors of the two planes are derived from position vectors extending from the pattern center to points on each band:

n₁ = (OP × OQ)/‖OP × OQ‖ (3.5)
n₂ = (OR × OS)/‖OR × OS‖ (3.6)

As a result, the inter-planar angle, γ, is equal to the arccosine of the scalar product of the first and second unit plane normal vectors previously derived:

γ = cos⁻¹(n₁ · n₂) (3.7)

The main output of any EBSD analysis software is the texture file providing the lattice orientation associated with each pixel of the micrograph. A method for computing the lattice orientation was proposed by Wright et al. [START_REF] Wright | Automatic-analysis of electron backscatter diffraction patterns[END_REF] and relies on the construction of a new set of vectors, (e₁ᵗ, e₂ᵗ, e₃ᵗ), expressed successively in the crystal and sample frames. Defined relative to the sample, their expressions are

e₁^{t,s} = n₁ (3.8)
e₂^{t,s} = (n₁ × n₂)/‖n₁ × n₂‖ (3.9)
e₃^{t,s} = e₁^{t,s} × e₂^{t,s} (3.10)

In the crystal frame, their expressions become

e₁^{t,c} = (hkl)₁/‖(hkl)₁‖ (3.11)
e₂^{t,c} = ((hkl)₁ × (hkl)₂)/‖(hkl)₁ × (hkl)₂‖ (3.12)
e₃^{t,c} = e₁^{t,c} × e₂^{t,c} (3.13)

where hᵢ, kᵢ and lᵢ, with i = {1, 2}, are the Miller indices of plane i. The direction cosines, g_{ij}, specify the lattice orientation and are written as

g_{ij} = g^c_{ij} g^s_{ij} (3.14)

with

g^c_{ij} = e^c_i · e^{t,c}_j (3.15)
g^s_{ij} = e^{t,s}_i · e^s_j (3.16)

The cosine value g_{ij} represents the rotation required to bring the coordinate frames of the sample and of the crystal lattice into coincidence. However, the computation of the direction cosines requires knowledge of the Miller indices corresponding to the different Kikuchi lines. Prior to presenting one of the possible indexing procedures, it is important and useful to introduce the reciprocal lattice. The reciprocal lattice is a fictitious lattice consisting of the Fourier transform of the direct lattice. Denote by (u, v, w) and (u*, v*, w*) the two sets of basis vectors associated with the frames of the direct and reciprocal lattices, respectively.
Therefore, any direction vector d can be expressed using Miller indices as follows:

d = hu + kv + lw (3.17)

Similarly, any direction vector passing through two nodes of the reciprocal lattice can be written as

d* = h*u* + k*v* + l*w* (3.18)

such that

‖d*‖ = 1/d_{h′k′l′} (3.19)

Vectors u, v and w are related to vectors u*, v*, w* such that

u*·v = u*·w = v*·u = v*·w = w*·u = w*·v = 0 (3.20)

and

u·u* = v·v* = w·w* = 1 (3.21)

Consequently, the metric and orientational relations existing between the direct and reciprocal lattices are listed below:
- dimensions in the reciprocal lattice are equal to their inverse in the direct lattice;
- the direction vector [hkl]* is perpendicular to the plane (hkl);
- the direction vector [mnp] is perpendicular to the plane (mnp)*;
- d_{hkl} = 1/‖n*_{hkl}‖;
- d*_{mnp} = 1/‖n_{mnp}‖.

The use of the reciprocal lattice simplifies many calculations such as, for example, the computation of the angle Φ formed by two reticular planes represented by their respective Miller indices (h₁k₁l₁) and (h₂k₂l₂), expressed as

Φ = cos⁻¹[(n*_{h₁k₁l₁} · n*_{h₂k₂l₂})/(‖n*_{h₁k₁l₁}‖ ‖n*_{h₂k₂l₂}‖)] (3.22)

The reciprocal lattice also enables an easy computation of the zone axis of two intersecting planes. Its expression is given by the following relation:

n_{mnp} = n*_{h₁k₁l₁} × n*_{h₂k₂l₂} (3.23)

Moreover, each atom located within the interaction volume scatters incident electrons. The intensity of diffracted electrons in a given direction results from the sum of destructive and non-destructive interferences, which are functions of the number of atoms, their nature and their location. This dependence on the arrangement of the different atoms composing the unit cell is expressed via a factor called the structure factor, usually denoted by F(hkl):

F(hkl) = Σ_{j=1}^{n} f_j exp[2πi(hx_j + ky_j + lz_j)] (3.24)

where f_j is the atomic diffusion factor of the j-th atom, of coordinates (x_j, y_j, z_j). The formula therefore implies that a node of the reciprocal space does not exist if its associated factor F is null. The diffraction intensity I(hkl) is strictly proportional to the structure factor and can be written as

I(hkl) = κ F(hkl) (3.25)

with κ a real constant.
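As an illustration of Eq. 3.24, the following sketch evaluates the structure factor for a small list of atoms. The two-atom h.c.p.-like basis and the unit atomic diffusion factors are illustrative assumptions, not values taken from the text.

```python
import cmath

def structure_factor(hkl, atoms):
    """Structure factor F(hkl) from Eq. 3.24.

    `atoms` is a list of (f_j, (x_j, y_j, z_j)) tuples giving the atomic
    diffusion factor and fractional coordinates of each atom in the cell.
    """
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

# Illustrative two-atom basis (ideal h.c.p. positions), unit diffusion factors.
basis = [(1.0, (0.0, 0.0, 0.0)), (1.0, (1 / 3, 2 / 3, 0.5))]
F = structure_factor((0, 0, 1), basis)
print(abs(F))  # ~0: the (001) reflection is extinct for this basis
```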
Regarding the indexing of diffraction bands and spots, different methods exist, depending on whether the material structure is known to the experimentalist or not. Assume for example a hexagonal crystal lattice. The procedure described hereafter is named the triplet indexing technique, or more simply the triplet method, because three vectors or three bands are used to obtain a unique orientation solution for both diffraction spots and bands. The first step consists of measuring the distances D_{hkl} between two diffraction spots symmetric with respect to the pattern center O. Figure 3.14 shows that the distance D_{h₁k₁l₁} corresponds to the distance between points P₁(h₁, k₁, l₁) and P′₁(-h₁, -k₁, -l₁). Distances are then sorted in increasing order. The second step is the computation of the inter-reticular distances from the previously calculated distances, using the following formula:

d_{hkl} = 2Zλ/D_{hkl} (3.26)

Since the material is known, the inter-reticular distances can be compared to values listed in standard look-up tables with corresponding Miller indices, such as ASTM tables. In the third step, we ensure that points have been indexed consistently, i.e. that the Miller indices of three diffraction spots P₁(h₁, k₁, l₁), P₂(h₂, k₂, l₂) and P₃(h₃, k₃, l₃) can be related to each other such that h₁ = h₂ + h₃, k₁ = k₂ + k₃ and l₁ = l₂ + l₃. Denoting by v*₁ = OP₁, v*₂ = OP₂ and v*₃ = OP₃ the vectors extending from the pattern center to the diffraction spots, the normalized scalar products are computed as

g*_{ij} = (v*_i · v*_j)/(‖v*_i‖ ‖v*_j‖) (3.27)

The zone axis, displayed with Miller indices uvw in Figure 3.14, can be derived from the cross product of two of the three vectors v*_i, with i = 1, 2, 3. The method used to deal with diffraction bands is exactly the same. One computes the angle between two bands. Knowing the structural data of the studied phase, the angle values are then compared to those of look-up tables of inter-planar spacings and corresponding Miller indices. Theoretically, using only one triplet of bands is enough to obtain a unique orientation solution. However, because of experimental uncertainties and the presence of rogue bands, it is better to use multiple triplets.

A graph theory based automated twin recognition technique for Electron Back-Scatter Diffraction analysis

This section introduces a new EBSD data analysis and visualization software capable of automatically identifying twins and of extracting statistical information pertaining to the presence and geometrical features of twins in relationship with the microstructure. Twin recognition is performed via the use of several graph and group structures. Twin statistics and microstructural data are then classified and saved in a relational database. All software results are accessible and can easily be corrected, if necessary, via the graphical user interface. Although initially developed to identify twins in magnesium and zirconium, the numerical tool's architecture is such that only a minimum of changes is required to analyze other materials, h.c.p. or not. The first part of the section is dedicated to presenting and describing the method. The choice and the evaluation of the features of the graphical user interface, as well as the construction of a relational database storing both microstructural information and twinning statistics, are discussed in the second part of the section.

Euler angles, quaternion rotation representations and their application to EBSD data

Euler angles and quaternion orientation and rotation representations

- Euler angles

An EBSD map can be seen as an image, i.e. a square or hexagonal array of measurement points, where each measurement point (or pixel) gives the local crystal orientation as a set of Euler angles following the Z-X-Z convention, denoted by (φ₁, Φ, φ₂). The crystal orientation can be obtained by applying the following rotation matrix to the basic crystal structure:

R(φ₁, Φ, φ₂) =
[cos φ₁  -sin φ₁  0]   [1  0       0     ]   [cos φ₂  -sin φ₂  0]
[sin φ₁   cos φ₁  0] · [0  cos Φ  -sin Φ] · [sin φ₂   cos φ₂  0]
[0        0       1]   [0  sin Φ   cos Φ]   [0        0       1]   (3.28)

From a transformation perspective, the matrix R, also more explicitly denoted by R^w_c, corresponds to the transformation from the local crystal frame to the world frame. Conversely, the transformation from the world frame to the crystal frame will be denoted by R^c_w. The rotation matrix R^w_c transforms the vector v_c, initially expressed in the local crystal frame, into v_w, expressed in the world frame, as follows: v_w = R^w_c v_c.
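A minimal sketch of Eq. 3.28, composing the three elementary rotations of the Z-X-Z convention with NumPy, is given below; the angle values are illustrative.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_zxz_to_matrix(phi1, Phi, phi2):
    """Rotation matrix of Eq. 3.28 for Euler angles in the Z-X-Z convention (radians)."""
    return rot_z(phi1) @ rot_x(Phi) @ rot_z(phi2)

R = euler_zxz_to_matrix(np.radians(30), np.radians(45), np.radians(60))
assert np.allclose(R @ R.T, np.eye(3))  # R is orthogonal, as any rotation matrix
```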
The matrix R is a representation of an element belonging to the algebraic group of 3D rotations, also called the Special Orthogonal group of dimension 3, SO(3). This space is a group in the algebraic sense, meaning that it supports a multiplication operator, defines an inverse, i.e. the transpose of a rotation matrix, and a neutral element, i.e. the identity matrix.

- Rodrigues' formalism and quaternions

Other useful and well known representations of 3D rotations are the Rodrigues vector and unit quaternions. The Rodrigues vector is a vector whose length is proportional to the amplitude of a given rotation and whose direction is the axis around which the rotation is applied. Quaternions are a compact representation of a rotation of angle θ around an axis v with four values (w, x, y, z), where w = cos(θ/2) and (x, y, z) = v sin(θ/2). By analogy with complex numbers, w is called the real part of the quaternion and v the imaginary part. When working with rotations, unit quaternions are preferred, i.e. w² + x² + y² + z² = 1. The advantage of quaternions lies in the existence of a multiplication operator allowing the preservation of the group structure while keeping the representation compact. In computer graphics, quaternions are frequently used because they allow an easy implementation of interpolated rotations between two rotations [START_REF] Dam | Quaternions, interpolation and animation[END_REF]. Note that well-known formulas exist for the conversion between rotation matrices and quaternions. In formal terms coming from differential geometry, extracting the Rodrigues representation from a quaternion or a rotation matrix is referred to as using the logarithmic map of the differential manifold [START_REF] Do Carmo | Riemannian geometry[END_REF]. Recovering the quaternion from the Rodrigues representation is the exponential map. Therefore, the relationship between a given Rodrigues vector, r, and its equivalent quaternion, q, can be written as follows:

r = log q and q = exp r (3.29)

As a result, the amplitude of the rotation q, denoted by θ, can be expressed as the norm of log q: θ = ‖log q‖.

- Metrics

The SO(3) group is not a vector space. As a consequence, the usual norm of the Euclidean space R³, i.e. √(x² + y² + z²), does not apply. A more appropriate norm consists in the amplitude of the rotation, θ. Therefore, the norm of a rotation represented by its quaternion q is defined as

‖q‖_so = ‖log q‖ (3.30)

Although the norm of a rotation is not directly used for EBSD map analysis, the resulting distance leads to a unified definition of disorientation. The distance between two rotations represented by q₁ and q₂ is thus defined as:

d(q₁, q₂) = ‖log(q₁⁻¹ · q₂)‖ = ‖q₁⁻¹ · q₂‖_so (3.31)
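A minimal sketch of Eqs. 3.29-3.31, using plain NumPy arrays (w, x, y, z) for unit quaternions, is shown below; the helper names are ours, not those of the software described here.

```python
import numpy as np

def quat_log(q):
    """Rodrigues vector r = log q of a unit quaternion q = (w, x, y, z), Eq. 3.29."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    s = np.linalg.norm(v)
    if s < 1e-12:
        return np.zeros(3)          # identity rotation
    theta = 2.0 * np.arctan2(s, w)  # rotation amplitude
    return (theta / s) * v          # unit axis scaled by the amplitude

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    """Conjugate, i.e. inverse for unit quaternions."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def so_norm(q):
    """Rotation amplitude theta = ||log q||, Eq. 3.30."""
    return np.linalg.norm(quat_log(q))

def so_distance(q1, q2):
    """d(q1, q2) = ||q1^-1 . q2||_so, Eq. 3.31."""
    return so_norm(quat_mul(quat_conj(q1), q2))
```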
Application to EBSD data

An EBSD map can be seen as a set of orientations and coordinates, since three Euler angles are associated with each single pixel. In addition, the geometrical transformation induced by twinning is described via either the reflection of the lattice with respect to a specific plane or the rotation of the lattice by π radians about a given axis. As a result, quaternions and Rodrigues' formalism are well suited to EBSD data processing, since they enable a relatively easy computation of disorientations and the recognition and classification of the different phases or domains of the map.

- Disorientation

Disorientation is defined as the orientation difference between two entities. These entities can be grains, parent and/or twin phases, or individual measurements. Consider now the case of two measurement points, represented by their quaternions q₁ = q^{c₁}_w and q₂ = q^{c₂}_w. Both quaternions correspond to a rotation from the sample frame to the local crystal frames. The amplitude of the disorientation between these two measurements can be expressed as d(q₁, q₂) = ‖q₁⁻¹ · q₂‖_so. However, the actual disorientation consists in the rotation that needs to be applied to q₁ to transform it into q₂:

δ(q₁, q₂) = q₂ · q₁⁻¹ = q^{c₂}_w · (q^{c₁}_w)⁻¹ = q^{c₂}_w · q^w_{c₁} = q^{c₂}_{c₁} (3.32)

The advantage of such a notation lies in the fact that δ includes both the amplitude and the axis of the rotation. The latter property will be particularly useful to recognize and identify the mode and system of twins. For the sake of consistency, and taking advantage of quaternion properties, what follows systematically identifies the smallest positive rotation transforming q₁ into q₂ when computing the disorientation δ(q₁, q₂). For example, if the real part of δ(q₁, q₂) is negative, it implies that |θ/2| > π/2 radians. In this case, δ(q₁, q₂) is replaced by -δ(q₁, q₂), which corresponds to a rotation around the same axis but with an angle equal to θ + π radians. Note that the disorientation angle is now smaller than π radians in absolute value. In addition, the disorientation can be made positive, since a quaternion representing a rotation of an angle θ around a vector v is equal to the quaternion corresponding to a rotation of an angle -θ around -v. These two choices lead to an unambiguous representation of quaternions that is helpful when comparing rotations to identify twins. However, the disorientation measure, δ, ignores crystal symmetries. The hexagonal crystallographic structure is invariant by rotations of kπ/3, k ∈ ℕ, around the c-axis and by rotations of π radians around any vector lying in the basal plane. The quaternions associated with the symmetries around vectors lying in the basal plane and around the c-axis are denoted by q_x(k) and q_z(k), respectively, and are expressed as follows:

q_x(k) = exp(kπ x) (3.33)
q_z(k) = exp(k(π/3) z) (3.34)

The set of possible disorientations between q₁ and q₂, denoted by Δ(q₁, q₂), is then defined as:

Δ(q₁, q₂) = {q_z(j) · q_x(i) · δ(q₁, q₂)}_{i=0...1, j=0...5} (3.35)

In the case of h.c.p. materials, Δ(q₁, q₂) contains 12 elements. The definitions of the disorientation quaternion, Diso(q₁, q₂), and of its norm, ‖Diso(q₁, q₂)‖_so, result from the definition of Δ(q₁, q₂). Their expressions are, respectively:

Diso(q₁, q₂) = arg min_{q ∈ Δ(q₁,q₂)} ‖q‖_so (3.36)
‖Diso(q₁, q₂)‖_so = min_{q ∈ Δ(q₁,q₂)} ‖q‖_so (3.37)

The symmetry around any vector lying in the basal plane makes a disorientation with an angle θ greater than π/2 radians equivalent to a disorientation with an angle equal to θ - π, smaller in magnitude. If θ - π is negative, the negative sign is removed by considering the rotation of -(θ - π) around the opposite rotation vector. The symmetries also imply that the norm of the disorientation quaternion, ‖Diso(q₁, q₂)‖_so, always lies in the range of 0 to π/2 radians.
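A sketch of Eqs. 3.33-3.37, building the 12-element set Δ and selecting its minimum-norm element, is given below; it reuses the hypothetical quaternion helpers sketched earlier (quat_mul, quat_conj, so_norm) and adds an exponential map consistent with them.

```python
import numpy as np

def quat_exp(r):
    """Unit quaternion q = exp r for a Rodrigues vector r (Eq. 3.29)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = r / theta
    return np.concatenate([[np.cos(theta / 2.0)], axis * np.sin(theta / 2.0)])

def hcp_symmetry_set(delta):
    """The set Delta(q1, q2) of Eq. 3.35: 12 symmetry-equivalent disorientations."""
    x, z = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
    out = []
    for j in range(6):                       # k pi/3 rotations about the c-axis
        for i in range(2):                   # pi rotation about a basal-plane axis
            q = quat_mul(quat_exp(j * (np.pi / 3.0) * z),
                         quat_mul(quat_exp(i * np.pi * x), delta))
            out.append(q)
    return out

def disorientation(q1, q2):
    """Diso(q1, q2) of Eq. 3.36: the minimum-amplitude symmetry-equivalent rotation."""
    delta = quat_mul(q2, quat_conj(q1))      # Eq. 3.32
    return min(hcp_symmetry_set(delta), key=so_norm)
```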
- Classification of twinning relationships

As previously mentioned, a twin system can be completely defined by the indices of either the twinning plane, K₁, and η₂, or the second invariant plane, K₂, and the twinning shear direction, η₁. The planes K₁ and K₂ and the vectors η₁ and η₂ are all invariant. Also, as per Chapter 1, the lattice reorientation induced by twinning of the first and second kinds can be described by a rotation of 180° about the normal to K₁ and about the twinning shear direction, respectively. In the case of Zr, four twinning modes are considered: two tensile modes, T1 and T2, and two compressive modes, C1 and C2. Table 3.1 lists some of their crystallographic properties, such as twinning plane, disorientation angle, etc. The parameter γ denotes the c/a ratio, equal to 1.59 and 1.62 in Zr and Mg, respectively. Figure 3.15 shows a graphical representation of one of the six possible twin systems that can be activated for each of the four above mentioned twinning modes.

Table 3.1 – Twinning modes in Zr

Twinning mode | Twinning plane K₁ | Twinning direction η₁ | Disorientation angle δ (°)
T1 | {10-12} | <-1011> | 85.2
T2 | {11-21} | <-1-126> | 34.9
C1 | {11-22} | <11-2-3> | 64.2
C2 | {10-11} | <10-1-2> | 57.1

All the twin modes considered correspond to compound twins [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF]. As a result, the lattice reorientation that they induce can either be described by a rotation of π radians around the shear direction, η₁, or by a rotation of π radians around the normal to the twinning plane. In the case of h.c.p. structures, the disorientation quaternions representative of the twinning relationships can be indexed by k and written as follows:

q(k) = exp[π η(k)] (3.38)

with

η(k) = (cos[α(k)] cos[β], sin[α(k)] cos[β], sin[β])ᵀ (3.39)

where α(k) and β are the angles defining the direction of the rotation axis η(k) for the twin variant considered. Identifying the mode and system of a twin consists in finding the object of T, defined as the set of possible twinning relationships, closest to the disorientation q^{c₂}_{c₁} existing between the parent and twin phases, within a given threshold d_max:

τ = arg min_{t ∈ T} δ(q^{c₂}_{c₁}, t) such that ‖δ(q^{c₂}_{c₁}, t)‖_so < d_max (3.40)

The set T contains 24 and 12 elements in the case of Zr and Mg, respectively. Note how easy it is for the software, and hence for the user, to switch from the analysis of a Zr EBSD map to a Mg EBSD scan. The only two differences lie in a change of the value of the c/a ratio and in a readjustment of the list of theoretical disorientation quaternions corresponding to the different potentially active twinning modes.

Identification of grains, parent and twin phases

In EBSD scans, non-twinned grains, parent and twin phases are contiguous areas with a consistent orientation. Therefore, detecting grains and twins consists of identifying and grouping contiguous and consistent areas. These operations are performed following an approach similar to the "super-pixels" technique [START_REF] Fulkerson | Class segmentation and object localization with superpixel neighborhoods[END_REF][START_REF] Achanta | SLIC Superpixels compared to state-of-the-art of Superpixel Methods[END_REF], based on graph theory and commonly used in image analysis. The whole process of twin recognition and parent phase identification relies on one tool of graph theory, used five times at different levels: the extraction of connected parts. Mathematically, a graph G is a pair of sets (V, E) containing vertices and edges, respectively. Two vertices are said to be connected when an edge links them. Vertices are usually designated by integers, so that the edge (i, j) connects vertex i to vertex j. A path between two vertices k and l corresponds to a sequence of edges and vertices reaching k from l, and reciprocally. A path is, by definition, non-directional. A subset W ⊂ V is a connected part of G if, for all pairs of vertices (i, j), i ∈ W, j ∈ W, there exists a path in G between i and j. Extracting connected components is a well known problem in graph theory, for which efficient algorithms already exist [START_REF] Bondy | Graph theory with applications[END_REF]. The complexity of the extraction of connected parts increases linearly with the number of vertices belonging to the graph.
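As an illustration, a minimal sketch of connected-component extraction via breadth-first search over an adjacency list is given below; the edge-list encoding is an assumption for illustration, not the tool's actual data structure.

```python
from collections import deque

def connected_parts(n_vertices, edges):
    """Extract the connected parts of a graph given as an edge list, by BFS.

    Returns a list of vertex sets, one per connected component; runs in time
    linear in the number of vertices and edges.
    """
    adj = [[] for _ in range(n_vertices)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)          # paths are non-directional
    seen, parts = [False] * n_vertices, []
    for start in range(n_vertices):
        if seen[start]:
            continue
        queue, part = deque([start]), {start}
        seen[start] = True
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    part.add(v)
                    queue.append(v)
        parts.append(part)
    return parts

# Example: two components, {0, 1, 2} and {3, 4}.
print(connected_parts(5, [(0, 1), (1, 2), (3, 4)]))
```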
Segmentation of the EBSD map into connected parts of consistent orientations

The first step of the EBSD analysis consists of grouping all measurement points of similar orientation into fragments. To this end, a first graph is built as follows: every measurement point is considered as a vertex, and an edge between two neighboring pixels is created if the disorientation between them is smaller than a given threshold, e.g. 5 degrees. The type of measurement grid, i.e. square or hexagonal (Figure 3.16), affects the construction of the graph but does not have a significant influence on the extraction of connected parts. In addition, assuming that edge disorientations follow a normal distribution, the thickness with which edges appear on screen is characterized by a weight, w, whose value depends on the disorientation:

w(q₁, q₂) = e^{-Diso(q₁,q₂)²/L²} (3.41)

where L is a threshold value. As a result, the smaller the disorientation between two pixels, the more strongly their mutual edge is displayed. The edge color indicates the nature of the disorientation. For example, an edge between two points of similar orientation appears in white, while, as shown in Figure 3.18, disorientations corresponding to tensile 1 and compressive 1 twinning relationships are displayed in green and red, respectively. Moreover, some measurement points located in a few small areas exhibit a very low resolution, i.e. the software cannot determine their orientation. A flood-filling algorithm [START_REF] Burtsev | An efficient flood-filling algorithm[END_REF] then interpolates the missing measurement points and associates them with their closest connected parts.

- Grouping of connected parts into grains

The second step in the analysis of EBSD data is to group connected fragments of consistent orientation into grains. In a material free of twins or precipitates, this step is trivial, since every fragment corresponds to a grain. However, when twinning occurs, different configurations have to be considered. Figure 3.19 depicts the three most typical twinning configurations observed in Zr scans. Therefore, a second graph, referred to as the twinning graph, is generated at the level of the connected fragments to group them into grains. Vertices are now connected fragments, and edges link two vertices in contact if the disorientation between them can be classified as a twinning relation. A connected component in this graph is a set of fragments all linked by known twinning relations. Hence, this set of fragments is very likely to be part of the same grain. The construction of this graph relies on the measure of the disorientation between two components of consistent orientation, i.e. connected parts. Three hypotheses were considered to compute the disorientation between components:
- Measuring the disorientation along the connected part boundary. This hypothesis was discarded on the grounds that the boundary is the hardest part to measure with the EBSD process and, as such, is the least reliable.
- Using the disorientation of the measurement at the barycenter of the connected part.
This is simple enough, but can be incorrect if the barycenter happens to be a bad measurement spot, or even a point outside of the connected part if the latter is non convex.
- Computing the average orientation across the connected part. This is computationally more expensive and sensitive to continuous changes of orientation across the grain, but can be implemented in the most generic way.

The third hypothesis is the one chosen in the present work. However, because SO(3) is not a Euclidean space, the closed-form expression of the average is incorrect. Consequently, a specific algorithm, similar to the one computing the average of quaternions, is used to determine the average orientation of connected EBSD measurements. Consider a set of n EBSD measurement points represented as quaternions qᵢ, i = 1..n. Assume that the initial average, m₀, is equal to q₁, i.e. m₀ = q₁. The average orientation of a connected part is then estimated iteratively by computing the following two equations:

e_k = (1/n) Σ_{i=1}^{n} log Diso(m_k, qᵢ) (3.42)
m_{k+1} = m_k · exp e_k (3.43)

The iteration stops when ‖e_k‖ falls below a given threshold (5×10⁻⁴ in the present case). At this step, m_k corresponds to the best estimate of the connected part orientation.
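A sketch of the iterative averaging of Eqs. 3.42-3.43 follows, reusing the hypothetical quat_log, quat_exp, quat_mul and disorientation helpers sketched earlier; the convergence threshold follows the text.

```python
import numpy as np

def average_orientation(quats, tol=5e-4, max_iter=100):
    """Iterative intrinsic mean of unit quaternions (Eqs. 3.42-3.43)."""
    m = np.asarray(quats[0], dtype=float)            # m_0 = q_1
    for _ in range(max_iter):
        # e_k = (1/n) * sum_i log Diso(m_k, q_i)
        e = np.mean([quat_log(disorientation(m, q)) for q in quats], axis=0)
        if np.linalg.norm(e) < tol:
            break
        m = quat_mul(m, quat_exp(e))                 # m_{k+1} = m_k . exp(e_k)
    return m / np.linalg.norm(m)                     # re-normalize against drift
```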
-Identification of parent phases

In twinned grains, parent phases are composed of one or several connected parts, as shown in Figure 3.19. Parent phases are then considered as sets of connected fragments of consistent orientation. To build such sets, the software generates a third graph over the EBSD map. Vertices are now connected fragments, and edges only link two connected fragments if they belong to the same grain and if their mutual disorientation is low. By construction, a connected component of this graph is a set of connected parts embedded in the same grain with consistent orientation. By default, the parent phase is identified as the set of connected parts occupying the largest part of the grain. In addition to removing incorrect links, the user also has the possibility to correct the software in cases where a twin occupies more than half of the total grain area. Figures 3.20 and 3.21 show two parts of EBSD maps before and after editing of incorrect twinning relationships, respectively. Note that, as shown in Figure 3.22, the software is still capable of recovering complex grain structures.

-Detection of higher order twins

A higher-order twin is defined as a twin embedded in another one. For example, a secondary and a tertiary twin correspond to twins that nucleated in a primary and a secondary twin, respectively. Depending on the material, the loading path and the loading history, secondary twinning may occur. For example, it has been observed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF] in Mg and appears on scans of Zr samples loaded along the through-thickness direction [START_REF] Kaschner | Role of twinning in the hardening response of zirconium during temperature reloads[END_REF]. Tertiary twinning is more unlikely and statistically irrelevant. However, the software is still capable of identifying tertiary and higher-order twins if necessary. The identification of these twins relies on the graph of connected fragments used to build the grains. In this graph, a twin of order n has an edge (i.e. an identifiable twinning relation) with an (n-1)th-order twin, or with the parent if n-1 = 0, but no identifiable relation with (n-k)th-order twins, with k > 1. An example of such a situation is seen in Figure 3.23, where the orange secondary twin (marked with a blue dot in its center) is embedded in the gray twin (marked by a yellow dot). Although the orange twin shares a border with the parent phase, its disorientation with respect to the parent domain does not correspond to any of the previously detailed twinning relations. The present definition of twinning order leads to a simple recursive implementation of higher-order twin detection. The initialization step consists of considering that all parent fragments previously identified are of twinning order 0. To find nth-order twins, all nodes, i.e. fragments, with a twinning order strictly lower than n are removed, as well as all edges with one extremity corresponding to one of these nodes. The connected components of the resulting graph are twins of order n. If such a connected component contains several fragments, these fragments can be separated into groups of consistent orientation, and the "parent" phase, i.e. the nth-order twin itself, is identified as the largest one. All fragments not identified as the "parent" fragment correspond to twins of order greater than n. The result of this process applied to the case of secondary and tertiary twinning is shown in Figure 3.23, where the grey first-order twin (highlighted in cyan) has four secondary twins (in blue), one of which has a tertiary twin (in red). An indirect benefit of detecting and tagging higher-order twins lies in the fact that, because of their decreasing likelihood, they help the user check the software results more rapidly.

-The particular case of "twin strips"

In a non-negligible number of cases, small twins appear as a strip of connected twins at the outcome of the previous algorithm. This phenomenon occurs either when a very thin twin is separated into small objects because of the low resolution of a few EBSD measurement points, or when a twin is divided into two parts by another twin. It is statistically relevant to be able to count these connected twins, also called "twin strips", as single twins. Two connected components are considered to belong to the same twin or twin strip if they meet the following five conditions (a sketch of this check is given below):

-They are in the same grain.
-They have the same orientation, or the disorientation between the average orientations of both components is small. Typically, the threshold used is the same as the one used to build the connected components.
-The twins' ellipse main orientations (see Sec. 3.4.2) are similar, for instance, less than 5 degrees apart.
-The sum of the twin half-lengths is within 20% of the distance between their centroids.
-The vector linking their centroids diverges by less than a few degrees from the twins' ellipse main orientation.

From these conditions, a further graph is generated in all grains. Vertices are again connected fragments, and edges link pairs of connected components fulfilling the previous five conditions. Consequently, the connected components with more than one vertex are twin strips. Figure 3.24 gives an example of the type of reconstruction obtained with this approach.
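The following is a minimal sketch of the five-condition check, assuming each fragment record carries its grain id, average quaternion, fitted-ellipse orientation, half-length and centroid; the record layout and the exact thresholds are illustrative, not the software's actual API.

```python
# Illustrative check of the five twin-strip conditions listed above.
import numpy as np

def same_twin_strip(a, b, diso, ori_tol=5.0, len_tol=0.2, dir_tol=5.0):
    """a, b: fragment records (dicts); diso: disorientation in degrees."""
    if a["grain"] != b["grain"]:                      # 1. same grain
        return False
    if diso(a["quat"], b["quat"]) > 5.0:              # 2. consistent orientation
        return False
    d_ori = abs(a["ellipse_deg"] - b["ellipse_deg"]) % 180.0
    if min(d_ori, 180.0 - d_ori) > ori_tol:           # 3. similar ellipse axes
        return False
    link = np.asarray(b["centroid"], float) - np.asarray(a["centroid"], float)
    dist = np.linalg.norm(link)
    if dist == 0.0:
        return True                                   # coincident centroids
    half_sum = a["half_length"] + b["half_length"]
    if abs(half_sum - dist) > len_tol * dist:         # 4. lengths match the gap
        return False
    theta = np.deg2rad(a["ellipse_deg"])
    axis = np.array([np.cos(theta), np.sin(theta)])
    dev = np.degrees(np.arccos(min(abs(float(np.dot(link / dist, axis))), 1.0)))
    return dev < dir_tol                              # 5. collinear with the axis
```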
Examples of automatically extracted metrics and statistics

The following section illustrates the capabilities of the approach discussed above in terms of exploitable metrics.

-Area and Perimeter

The area of connected parts, i.e. grains, twins or parent phases, is obtained by multiplying the number of measurement points by the area corresponding to a single pixel. The area associated with a measurement point depends on the step size and the grid type (i.e., hexagonal or square). Similarly, the grain boundary length is estimated by multiplying the number of measurement points located along the boundary by the inter-measurement length, associated, here, with the hexagonal grid.

-Grain Boundary Properties

Because the disorientations between every pair of measurement points or pair of connected fragments are identified, classified and saved, many statistics, such as, for example, grain boundary length and number of neighbors, are easily accessible and stored in the output database.

-Convexity

Visual observation of EBSD maps suggests that every grain or twin is more or less convex. The degree of convexity of grains and twins can be quantified as follows. First, the convex hull of the object of interest, i.e. a grain or a twin, is built from all its measurement points. The convex hull is the smallest convex polygon that encloses a set of 2D points. Its construction can be performed in O(n log n), and computing the area of such a polygon is a well-defined geometric process. The degree of convexity can then be defined as the ratio of the area of the object to the area of its convex hull. This ratio is at most 1 and, the farther away from 1 it is, the less convex the object is. This measure of convexity was implemented to refine the grain detection: grains with a low convexity are identified, edges in the connected fragment graph are tentatively broken, and the convexity of the resulting grains is recomputed; if the overall convexity is improved, the edge is removed. Tests showed that this method is generally successful. However, the current automatic grain extraction method combined with the graphical user interface is efficient enough that this refinement step is not required in practice.

(Figure caption: a high-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain, shown using three different visualization modes (see Appendix A): raw mode (left), twinning editor mode (middle) and twinning statistics mode (right). The parent grain is surrounded in yellow, first-order twins appear in cyan, secondary twins in blue and tertiary or higher-order twins in red.)

-Twin shape and ellipsicity

The length and the thickness of twins are computed by estimating the 2D covariance of their constituent EBSD measurement points. The eigenvectors of the covariance matrix indicate the main directions of the twin. The apparent twin length is estimated as four times the square root of the largest eigenvalue of the covariance matrix, and the apparent twin thickness as four times the square root of the smallest eigenvalue. The orientation of the twin main axis is then given by the orientation of the eigenvector associated with the largest eigenvalue, computed using the C function atan2. Both the true twin length and thickness can be computed in post-processing by multiplying them by the cosine of the angle between the twin plane and the normal to the sample [START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF][START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF].
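A minimal sketch of these covariance-based shape metrics is given below; the function name and inputs are illustrative.

```python
# Sketch of the covariance-based twin shape metrics described above: the
# fitted ellipse's axes follow from the eigen-decomposition of the 2D
# covariance of the twin's pixel coordinates.
import numpy as np

def twin_ellipse(points):
    """points: (n, 2) pixel coordinates of one twin.
    Returns (length, thickness, main-axis angle in degrees)."""
    cov = np.cov(np.asarray(points, dtype=float).T)   # 2x2 covariance matrix
    evals, evecs = np.linalg.eigh(cov)                # eigenvalues, ascending
    length = 4.0 * np.sqrt(evals[1])                  # 4*sqrt(largest eigenvalue)
    thickness = 4.0 * np.sqrt(evals[0])               # 4*sqrt(smallest eigenvalue)
    vx, vy = evecs[:, 1]                              # eigenvector of largest eigenvalue
    angle = np.degrees(np.arctan2(vy, vx))            # atan2, as in the software
    return length, thickness, angle
```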
Comparing the ellipse area with the actual twin area, computed from the number of measurement points contained in the twin, allows estimation of the ellipsicity of a twin, defined as its departure from an idealized ellipse. Figure 3.25 depicts these ellipses and illustrates how the ellipsicity criterion highlights what the software identifies as "merged" twins. These "merged" twins are very likely co-zonal Tensile 1 twin variants. Since co-zonal Tensile 1 variants have an ideal misorientation of 9.6 degrees and given the 5-degree tolerance used for twin variant recognition, the software is, in rare cases, not capable of distinguishing such twins. However, because of the level of precision of EBSD measurements, reducing the software tolerance for twin variant recognition does not improve the results.

Graphical User Interface and Data availability

Graphical User Interface

To provide quick and direct access to the large choice of metrics and statistics displayed on an EBSD map, the graphical user interface allows the user to check and, if necessary, correct the software results. Three types of situations require the intervention of the user. First, because of the random distribution of disorientations between neighboring grains, it is statistically likely that, for a few grains per map, the disorientation between adjacent grains matches a twinning relation. The user then has the choice between either using the convexity option to refine the analysis performed by the software or simply deactivating manually the edge linking the parent to the mistaken twin. Second, in highly strained sample scans, orientation gradients can be significant, and the disorientation between two connected parts may be above the threshold required to be flagged as a twinning relation. In such a case, the user can manually activate this twinning relation. Third, the parent is by default the largest connected component of the grain. However, twin phases may occupy the largest part of the grain and appear as the parent phase. Once again, this can be corrected manually by the user. This feature will be particularly useful when dealing with highly strained Mg samples. The result of such manual editing can be seen by comparing Figures 3.20 and 3.21.

Data availability

The analysis of an EBSD map generates a wealth of quantitative data about grains, twins and parent phases. However, different studies will not use the same EBSD data for the same purposes. This is why it is of primary importance to export the data in a way that preserves all relations and does not make assumptions regarding what should or should not be stored. To do so, the present software exports the data structure extracted from the EBSD map analysis to a SQL database with the structure described in Figure 3.26. In practice, this is performed using the SQLite library, which implements a server-less database stored inside a single file. The advantage of such a relational database lies in the fact that it keeps all the information in a single file and allows the user to create aggregated statistics with simple SQL requests, as shown with Requests 1 and 3 in Appendix A. For example, Request 1 generates a table containing features about twins, such as twinning modes (i.e. "twinning"), twin systems (i.e. "variant"), twin area (i.e. "area"), twin thickness (i.e. "thickness"), the quaternions corresponding to the average twin orientation (i.e. "qx", "qy", "qz", "qw"), etc. Request 3 was used to extract information about twin-twin junctions, such as the modes and systems of intersecting twins; it relies on the view created by Request 1 to generate the table containing the twin characteristics.
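As a toy illustration of the kind of aggregated statistics such a single-file database enables (the actual schema is given in Figure 3.26 and the actual Requests 1 and 3 in Appendix A, so the simplified table layout below is an assumption):

```python
# Toy illustration of querying a server-less SQLite database of twin
# records; the schema is a simplified stand-in for the actual one.
import sqlite3

con = sqlite3.connect("ebsd_analysis.db")  # single-file, server-less database
con.execute("""CREATE TABLE IF NOT EXISTS twins (
    id INTEGER PRIMARY KEY, grain INTEGER, twinning TEXT,
    variant INTEGER, area REAL, thickness REAL,
    qx REAL, qy REAL, qz REAL, qw REAL)""")

# Example aggregated statistic: average twin thickness and total twinned
# area per twinning mode, obtained with a single SQL request.
for mode, avg_t, tot_a in con.execute(
        """SELECT twinning, AVG(thickness), SUM(area)
           FROM twins GROUP BY twinning"""):
    print(mode, avg_t, tot_a)
```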
Matlab, C++ or Fortran codes can then process the tables generated by the SQL requests in order to extract the statistics of interest. For example, the statistics about the influence of microstructure and twin-twin junctions on the nucleation and growth of twins presented in the next chapter were computed from four tables only. Moreover, for the sake of keeping a record of the experimental conditions, the database also stores the constants and parameters used to construct the particular EBSD analysis.

Conclusion

In addition to a brief description of scanning electron microscopes, of the physical phenomena observed with electron diffraction and of existing EBSD pattern analysis techniques and software packages, the present chapter introduces a new software tool for automated EBSD map analysis based on graph theory and quaternion algebra. Quaternions allow easy computation of the disorientations between pixels and of areas of consistent orientation. The subsequent use of graph and group structures allows grain identification, twin recognition and statistics extraction. The newly introduced software is distinguished from pre-existing commercial or academic codes by combining visualization with automated analysis of the EBSD map. The built-in graphical user interface enables immediate and direct access to microstructural and twinning data, such as the orientation and size of twins and grains and the mode and system of twins, and also allows the user to correct or complete, if necessary, the analysis performed by the software. In addition, all raw and processed data are saved in a relational database. Consequently, all experimental parameters, microstructural data and twinning statistics are easily accessible via SQL requests. The database also enables the systematic quantification of the influence of a very large number of parameters. The construction of such a database makes a significant difference compared to other pre-existing analysis tools. Moreover, although the software was initially developed to perform statistical analyses on Mg and Zr scans, it is not limited to these two h.c.p. metals. Its algorithm is capable of identifying any twin occurring in h.c.p. materials on the condition that the user properly defines the c/a ratio and the theoretical disorientation quaternions corresponding to all potentially active twinning systems. For the analysis of other crystallographic structures, the user has to adapt the cell characteristics and modify the symmetry quaternions. For example, the authors are currently using the software for the identification of martensite, whose crystallographic structure is tetragonal, in TRIP steels.

Chapitre 4

Identification of statistically representative data associated with nucleation and growth of twins

The objective of the present chapter is to provide new statistically representative data to establish relationships between the presence and size of twins and loading conditions, texture, etc. As shown in Chapter 2, from the micromechanical standpoint, one can -at the cost of relatively lengthy mathematical derivations- solve Eshelby-type problems for relatively complex topologies. Similarly, constitutive models can be extended to reproduce the stochasticity associated with the nucleation of twins, double twins, etc.
However, prior to resorting to such developments, it is necessary to assess the statistical relevance of the phenomena to be modeled so as to distinguish between first and second-order phenomena. The objective of the present chapter is to provide such data, using the automated EBSD statistical analysis method presented in Chapter 3, in order to make these distinctions. The following three phenomena will be studied: (1) nucleation and growth of "unlikely twins", (2) double extension twinning and (3) twin-twin junctions. Here, "unlikely twins" refers to twin variants one would not expect to find in a given grain owing to its relatively poor orientation with respect to the loading conditions. In both cases (1) and (2), current mean-field models do not accurately reproduce these processes. The question at stake is that of the necessity of addressing this shortcoming. To this end, a study will be performed on an initially hot-rolled AZ31 magnesium alloy [START_REF] Shi | On the selection of extension twin variants with low schmid factors in a deformed mg alloy[END_REF][START_REF] Shi | Double extension twinning in a magnesium alloy : combined statistical and micromechanical analyses[END_REF]. Regarding (3) [START_REF] Christian | Deformation twinning[END_REF], the literature on twin/twin interactions has so far remained quite limited; the objective is simply to assess whether twin-twin junctions do affect the selection of variants and, more importantly, whether they have an effect at the macroscale on twin growth. This second study [START_REF] Juan | A statistical analysis of the influence of microstructure and twin-twin junctions on nucleation and growth of twins in zr[END_REF] will be performed on high-purity Zr, as this material readily allows for the nucleation of four twin modes. The present chapter is organized such that its first part is dedicated to the formation of "unlikely twins", with a first sub-section about low Schmid factor tensile twins and a second one about successive double extension twins, which can also be considered as another type of "unlikely twins". The two studies of "unlikely twins" include experimental, statistical and modeling results. The last part of the chapter is focused on the influence of microstructure and twin-twin junctions on the nucleation and growth of twins in Zr.

Preliminary notations and considerations

Because EBSD scans do not provide access to the local stresses before unloading and sectioning, for classification purposes the geometric Schmid factors (SF) are computed from the inner product of the symmetric Schmid tensor and the normalized macroscopic stress tensor, such that

$\|\Sigma\|^2 = \sum_{i=1}^{3} \sum_{j=1}^{3} \Sigma_{ij}^2 = 1$

In addition, similar statistics using the macroscopic stress for the computation of distributions can be produced from the use of either full-field or mean-field models. The symmetric Schmid tensor is defined as the symmetric part of the dyadic product between the Burgers vector and the normal vector to the deformation plane. For each twinned grain, the six possible twin variants of each twinning mode are classified in order of decreasing SF. Low Schmid factor twins are here divided into two categories. The first type consists of twins with a negative Schmid factor. A twin is said to be a low Schmid factor twin of the second type when its Schmid factor is positive but lower than or equal to 0.3, and when the ratio of its Schmid factor to the highest twin variant Schmid factor possible in the considered twinned grain is lower than or equal to 0.6. This ratio will from now on be referred to as the Schmid factor ratio.
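As a minimal sketch of this computation (variable names are illustrative):

```python
# Sketch of the geometric Schmid factor: SF = m : Sigma_hat, where m is
# the symmetric Schmid tensor built from the unit shear direction b and
# plane normal n, and Sigma_hat is the macroscopic stress normalized so
# that sum_ij Sigma_ij^2 = 1.
import numpy as np

def schmid_factor(b, n, sigma):
    b = np.asarray(b, float); b /= np.linalg.norm(b)
    n = np.asarray(n, float); n /= np.linalg.norm(n)
    m = 0.5 * (np.outer(b, n) + np.outer(n, b))      # symmetric Schmid tensor
    sigma = np.asarray(sigma, float)
    sigma_hat = sigma / np.sqrt((sigma**2).sum())    # normalized stress tensor
    return float(np.tensordot(m, sigma_hat))         # double contraction m : Sigma

# Example: uniaxial compression along x; a system whose shear direction
# and plane normal are at 45 degrees to the axis has the maximal SF = 0.5.
print(schmid_factor([1, 0, 1], [-1, 0, 1], np.diag([-1.0, 0.0, 0.0])))
```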
4.2 Nucleation of "unlikely twins": low Schmid factor twins and double twinning in AZ31 Mg alloy

Experimental set-up and testing conditions

The material used is an initially hot-rolled AZ31 Mg alloy with the composition shown in Table 4.1 and an initial grain size of 11.4 µm. The thick sheet was annealed at 400 °C for 2 hours after rolling. The initial pole figures of the material were measured using XRD. These are shown in Figure 4.4, where the {0001}, {2 11 0} and {10 10} pole figures were all recorded prior to and after loading. For the sake of brevity, the abbreviations RD, TD and ND will stand for the rolling, transverse and normal directions, respectively, in the remainder of this chapter. As expected, prior to compression, the basal poles are centered around RD, while the {10 10} and {2 11 0} poles are axi-symmetrically distributed about and perpendicular to RD. Using either a wheel saw or a diamond wire saw, cubic samples with a 10 mm edge length were cut from the as-received plate. The diamond wire saw is preferred here, as it yields minimal changes to the microstructure during the cutting process (in particular, it minimizes the number of twins induced by the cutting process). Two different testing conditions were defined to introduce low Schmid factor twins and secondary extension twins. In the first case, uniaxial compression was performed along the rolling direction up to 2.7% engineering strain. These tests were performed at room temperature at a strain rate of $10^{-3}$ s$^{-1}$ (Figure 4.1). Clearly, a compressive load perpendicular to the basal poles is not expected to generate extension twins. The second battery of tests aims to introduce, in a sequential fashion, {10 12}-{10 12} double extension twins. To this end, samples were subjected to more complex loading conditions. Three scenarios were considered: three cubes were compressed along the rolling direction up to 1.8% strain; three cubes were compressed along the transverse direction up to 1.8% strain; and three cubes were compressed first along the rolling direction up to 1.8% strain and then along the transverse direction up to 1.3% strain. Here too, the imposed strain rate was set to $10^{-3}$ s$^{-1}$ (Figure 4.1). Table B.2 lists all the test cases considered. It is expected that the first loading path initiates twinning and that the following compression enables the nucleation of secondary tensile twins. All tests were performed on the Instron 5985 floor testing machine shown in Figure 4.1. It has a load capacity of 250 kN and a 1430 mm vertical test space. Tests were controlled with the Bluehill III software, and two LVDT capacitive sensors were used to measure the displacement to an accuracy of 0.4 µm. In Figure 4.1b, the two cylindrical devices correspond to the LVDT sensors, placed on each side of the cubic specimen and protected by three screws. In addition, teflon tape was used to minimize the effect of surface friction. As detailed in Appendix B, the machine stiffness was systematically taken into account. Following each compression, the samples were sectioned perpendicularly to the loading direction for microstructure analysis within the bulk. The sectioned faces were ground using SiC papers with grits from 2400 to 4000 and then electrolytically polished in an electrolyte of 62.5% phosphoric acid and 37.5% ethanol, at 3 V for 30 seconds and then at 1.5 V for 2 minutes, at -15 °C. A JEOL 6500F FEG SEM equipped with Channel 5 Analysis was used for the EBSD measurements.
Samples were scanned using a square grid with a step size of 0.1 µm and 0.3 µm for the first and second statistical analyses, respectively.

Mechanical behavior and microstructure evolutions

Figures 4.2 and 4.3 show the macroscopic stress-strain curves corresponding to specimens loaded in compression along RD, TD and ND, and in compression along RD followed by a second compression along TD. For compressions along RD and TD, the yield stress was approximately equal to 70 MPa. The inflection observed in the plastic region of the curves is typical of the activation of {10 12} tensile twins. As revealed by Proust et al. [START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF], the two deformation modes active in the matrix are basal slip and tensile twinning, with twinning increasing its contribution until about 3% strain while the basal slip activity decreases. At 5% strain, the total twinned volume is expected to occupy about 70% of the material volume. This result is consistent with the initial pole figures, showing that the initial texture of the material is not optimal for twinning activation when compressed along ND. Twin boundaries also prevent the glide of dislocations: recent discrete dislocation dynamics simulations performed by Fan et al. [START_REF] Fan | The role of twinning deformation on the hardening response of polycrystalline magnesium from discrete dislocation dynamics simulations[END_REF] revealed that twin boundaries induce a stronger hardening than grain boundaries. After uniaxial compression along TD and RD, the basal poles move from the normal direction to the transverse and rolling directions, respectively. This texture change is due to the activation of tensile twinning, which induces a reorientation of the crystal lattice by 86.6°. As a result, a first compression along RD followed by a second compression along TD is expected to produce successive {10 12}-{10 12} double extension twins. Similarly, Figure 4.5 displays EBSD micrographs prior to compression, after compression along RD, and after compression along RD followed by a second compression along TD.

Low Schmid factor {10 12} tensile twins

In this first study [START_REF] Shi | On the selection of extension twin variants with low schmid factors in a deformed mg alloy[END_REF], a new type of selection criterion for low Schmid factor tensile twin variants, based on strain compatibility considerations, is proposed. A distinction is also made between tensile twins of groups 1 and 2, defined as twins intersecting the grain boundary and twins constituting a pair of cross-boundary twins, respectively. The grain-by-grain analysis of 844 grains containing 2046 twins revealed that the Schmid factors of all twins range from -0.09 to a maximum value of 0.5. Twins with a negative Schmid factor represent 0.6% of all twins. Twins with a Schmid factor lower than 0.3 represent 23.4% of the total twin population. According to the definitions presented in the preliminary section, 127 twins can be deemed low SF twins, i.e. 26.6% of the twins with a Schmid factor lower than 0.3 and 6.2% of all twins. As a result, low Schmid factor twins represent 6.8% of the total twin population. In addition, all twinned grains contain between 1 and 4 different twin variants. Twinned grains with 1, 2, and 3 or 4 variants represent 62.6%, 30.3% and 7.1% of all twinned grains, respectively.
The analysis of the twin Schmid factors revealed that the proportion of low Schmid factor twins per twinned grain increases with the number of activated twin variants. This was expected considering the second criterion defining a low positive Schmid factor. As twinning participates significantly in the plastic deformation, the relative contribution of the twinning shears to the macroscopic strain tensor, ǫ, is here estimated from the normal components of the stress-free distortion tensor of twins, E, expressed in the sample reference frame. The three coordinate axes of the sample reference frame are aligned with RD, TD and ND. The components of the macroscopic strain and distortion tensors along RD, TD and ND are denoted by $\epsilon_{RD}$, $\epsilon_{TD}$, $\epsilon_{ND}$ and $e_{RD}$, $e_{TD}$, $e_{ND}$, respectively. Since twinning does not involve any volume change, the trace of the twin distortion tensor is null. This implies that there exist six independent sign combinations for $e_{RD}$, $e_{TD}$ and $e_{ND}$. These six combinations correspond to a new definition of twin variants. They are detailed, together with their occurrence frequencies, in Table 4.3. Table 4.3 reveals that low Schmid factor twins form more frequently in grains favoring the nucleation of twins producing an extension along the rolling direction, which is opposite to the strain induced by the compressive loading. The twinning shear direction and the twinning plane normal can be used to define a set of coordinate axes associated with the twin, i.e. reference axes chosen such that the first axis is parallel to the shear direction. The distortion tensor, $E_{tw}$, corresponding to the deformation induced by twinning is then written as follows [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]:

$E_{tw} = \begin{pmatrix} 0 & 0 & s \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, with $s = 0.129$.

The expression of the distortion tensor associated with a given tensile twin variant t, expressed in the reference frame of a potentially active deformation system i, is denoted by $E^{t,i}$. As mentioned previously, two groups of low Schmid factor twins are considered here. Group 1 includes twins growing from the grain boundary. Group 2 contains twins forming a "cross-boundary" twin pair. Therefore, the deformation systems i do not correspond to systems potentially active in the twinned grain but to those in the neighboring grain with which the observed low Schmid factor twin is in contact. The index i refers to basal slip, pyramidal slip, prismatic slip, tensile and compressive twinning systems. Then, in order to compare the characteristics of low Schmid factor twins, in terms of strain accommodation, with those of other tensile twin variants, the distortion tensor for each tensile twin variant t and for each of the 24 slip and twinning systems that could be activated in the neighboring grain was computed. This implies that, for each low Schmid factor twin, 144 different distortion tensors were calculated. The amount of strain to be accommodated by a given system i is assumed to be equal to the component $e^i_{xz}$ of the distortion tensor $E^i$, expressed in the reference frame of system i. Therefore, the larger this component, the higher the ability of system i to accommodate the twinning shear. It can also be interpreted as a favorable factor for twin growth. This measure can be regarded as a means to approximate geometrical accommodation.
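The following is a minimal sketch of this accommodation measure, assuming the rotations from the sample frame to the twin frame and to the candidate-system frame are known from the EBSD orientations; the frame conventions and names are illustrative.

```python
# Sketch of the accommodation measure described above: the twin's
# distortion tensor E_tw, defined in the twin frame, is rotated into the
# frame of a candidate slip or twinning system i of the neighboring
# grain, and the component e_xz of the rotated tensor quantifies how
# much of the twinning shear that system can accommodate.
import numpy as np

def accommodation(s, R_twin, R_system):
    """s: twinning shear magnitude; R_twin, R_system: 3x3 rotation matrices
    mapping the sample frame to the twin and system frames, respectively."""
    E_tw = np.zeros((3, 3))
    E_tw[0, 2] = s                          # distortion tensor in the twin frame
    E_sample = R_twin.T @ E_tw @ R_twin     # express it in the sample frame
    E_i = R_system @ E_sample @ R_system.T  # then in the system frame
    return E_i[0, 2]                        # component e_xz

# For a system perfectly aligned with the twin (R_system == R_twin), the
# accommodated strain is simply the full twinning shear s = 0.129.
R = np.eye(3)
print(accommodation(0.129, R, R))           # -> 0.129
```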
It resulted from the data processing that group 1 low Schmid factor twins require the most accommodation through basal slip, with the lowest CRSS, and the least accommodation through pyramidal slip, with the highest CRSS. It is also found that group 2 low Schmid factor twins require the least pyramidal slip or contraction twinning accommodation, both with high CRSS, but the most accommodation through prismatic slip and tensile twinning. Note that the CRSSs associated with prismatic slip and tensile twinning are both higher than the CRSS associated with basal slip and lower than the CRSSs associated with pyramidal slip and compressive twinning.

Successive {10 12}-{10 12} double extension twins

For the study of double extension twins, the total scanned area represents 0.82 mm². In total, 4481 grains were observed, none of them in contact with the map border, as well as 11 052 primary tensile twins and 585 double extension twins. Further details are provided in Table 4.4. Note that double extension twins are 19 times less frequent than primary twins. In terms of twinned area, the difference is even more pronounced; the total area occupied by secondary twins is 66 times smaller than the total area occupied by primary twins. EBSD scans performed on samples loaded in uniaxial compression along RD and TD did not contain any double extension twins. As a result, double extension twins only appear during the second compression along TD. Considering the crystal symmetries of hexagonal crystals, 6 {10 12} tensile twin variants and hence 36 {10 12}-{10 12} double extension twin variants may be activated in Mg. However, based on the misorientation angle existing between the primary and secondary twins, these 36 variants can be grouped into 4 distinct sets, detailed in Table 4.5. The minimum angle associated with Group I twin variants is 0° because the twinning plane of the secondary twin coincides with that of the primary twin. Of the 585 double extension twins experimentally observed, 383 were clearly identified, i.e. the difference between the theoretical misorientation angle of the best-matching variant and the experimentally measured misorientation angle is smaller than 3°. Consequently, the study was limited to these 383 double extension twins. In addition, only 4.2% of the secondary twins can be qualified as "low Schmid factor" twins, as defined in the previous paragraph. The present study clearly shows that the activation of secondary tensile twins depends on both the grain and primary twin orientations as well as on the loading direction. However, the Schmid factor analysis is not able to explain why 76.0% of secondary twins belong to Group III and 24.0% to Group IV. The Schmid factors corresponding to variants of Groups III and IV are always too close, i.e. their difference is lower than 0.05 in magnitude, to be meaningful and to be used as a selection criterion.

Elasto-static micromechanical analysis

In order to explain this phenomenon, a simplified version of the elasto-static Tanaka-Mori scheme (Figure 4.7), described in the first section of Chapter 2, was developed. The simplification consists in assuming that the medium is homogeneous, isotropic and elastic.
However, in addition to the simplifications induced by the assumption of homogeneous elasticity, considering an isotropic medium implies that the Eshelby-type tensors, $S(V_A)$ and $S(V_B)$, can be expressed analytically [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF]. The purpose of the present analysis is to study the variations of the internal free energy induced by the formation of a tensile secondary twin. (Figure 4.7 caption: $E^p$, $\epsilon^p_1$ and $\epsilon^p_2$ denote the macroscopic plastic strain imposed on the medium and the plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and the primary and secondary tensile twins are represented by the volumes $V - V_A$, $V_A - V_B$ and $V_B$, respectively. The second-order tensors $\epsilon^p_a$ and $\epsilon^p_b$ correspond to the eigenstrains, modeling the twinning shears induced by primary and secondary twinning, prescribed in the inclusions $V_A - V_B$ and $V_B$, respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C.) Twins are represented by oblate inclusions embedded in an infinite homogeneous elastic medium. A uniform macroscopic plastic strain, $E^p$, is also introduced in the matrix in order to represent the deformation undergone by the specimen. Double extension twin variants inducing the smallest change of elastic energy are assumed to nucleate preferentially. The change of elastic energy induced by the formation of the secondary twin is defined as the difference of the free energies before and after secondary twinning, i.e. $\Delta\Phi = \Phi_{II} - \Phi_I$. It is plotted in Figure 4.8 and is expressed as follows:

$\Delta\Phi = -\frac{V_B}{2V}\Big[\Sigma : \epsilon^p_2 + C : \big([S(V_A) - I] : [\epsilon^p_1 : \epsilon^p_2 + \epsilon^p_2 : \epsilon^p_1 - E^p : \epsilon^p_2] + [S(V_B) - I] : \epsilon^p_2 : \epsilon^p_1\big)\Big]$ (4.1)

where the second-order tensors $\epsilon^p_1$ and $\epsilon^p_2$ correspond to the plastic shear strains, assumed to be uniform, induced by primary and secondary twinning, respectively. They differ from $\epsilon^p_a$ and $\epsilon^p_b$, defined as the uniform plastic strains present in the inclusions $V_A - V_B$ and $V_B$, respectively, which explains why $\epsilon^p_a = \epsilon^p_1$ and $\epsilon^p_b = \epsilon^p_1 + \epsilon^p_2$. Figures 4.8a and 4.8b show the evolution of the change of elastic energy, normalized by the twin volume fraction $V_B/V$, with respect to the applied stress and to the secondary twin volume fraction, respectively. In the first case, the secondary twin volume fraction remains fixed and equal to 0.03 (Figure 4.8a); in the second case, the macroscopic stress, $\Sigma_{TD}$, is set equal to 100 MPa (Figure 4.8b). Figure 4.8a reveals that the change of free energy density is minimal for Group III double extension twin variants and maximal for Group II double twin variants. Figure 4.8b shows that the free energy density change associated with Group II double twin variants is the highest and increases rapidly with the twin volume fraction. It also shows that, when the secondary twin volume fraction is greater than 0.03, the PV4-SV5 variant, belonging to Group III, exhibits the lowest normalized free energy variation and is then energetically the most preferred. Consequently, the variations of the internal free energy density explain why Group III double twin variants are the most frequently observed and Group II double twin variants are never observed.
In addition, the comparison of predictions by the simplified double inclusion model and the classical Eshelby's scheme revealed that the classical single inclusion model derived by Eshelby is not capable of reproducing the trends obtained with the double inclusion model. Enforcing the topological coupling between the primary and secondary twins then appears as essential for accurate secondary twin activation predictions. 4.3 Probing for the latent effect of twin-twin junctions : application to the case of high purity Zr Experimental set-up and testing conditions The material used comes from a high-purity crystal bar Zr (<100 ppm) which was arc-melted, cast and clock-rolled at room temperature. Cuboidal samples were machined from the rolled plate and annealed at 823K for 1 hour. In the as-annealed state, grains are free of twins, equiaxed and have an average diameter equal to 17 µm. Specimens display a strong axisymmetric texture where basal poles are aligned within approximately 30 degrees of the through-thickness direction (Figure 4.9). Samples were deformed in an equilibrium liquid nitrogen bath at 76K in order to facilitate twin nucleation and loaded in compression along one of the in-plane directions to 5% strain (IP05) and along the through-thickness direction to 3% strain (TT03). Figure 4.10 shows the macroscopic stress-strain curves of cubes compressed along the through-thickness (TT) and in-plane (IP) directions. Experimental data was collected from 10 and 4 (240 µm x 120 µm) scans at different locations on the same cross sectional area of the TT03 and IP05 samples (Figure 4.11), respectively. The section plane for TT03 analysis contains both the TT direction and IP direction, and the section plane for IP05 analysis contains the TT direction and the IP compression direction. Statistical data was obtained using the automated EBSD technique developed by Pradalier et al. [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF]. The total analyzed area for TT03 and IP05 specimens is 205 736 µm 2 and 73 122 µm 2 , respectively. Twins represent 9.1 % and 5.7 % of the total scanned area in the TT03 and IP05 samples, respectively. Incomplete grains bounded by scan edges are not considered in the statistical analyses. Computing misorientations between measurement points and relying on graph theory analysis, the twin recognition EBSD software [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] is able to identify the four twin modes present in Zr (Table 4.6). As highlighted by recent studies [START_REF] Kaschner | Role of twinning in the hardening response of zirconium during temperature reloads[END_REF][START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF], {11 22} compressive (C 1 ) twins and {10 12} tensile (T 1 ) twins are the most commonly observed twins in the TT03 and IP05 scans, with 74.4% and 81.7% respectively (Table 4.7). Table 4.7 also reveals that the second most active twinning modes are {10 12} (T 1 ) and {11 21} (T 2 ) in the TT03 and IP05 samples, respectively. In both cases, the second most active twin modes represent about 17% of the total number of twins. However, no {10 11} (C 2 ) twin was observed in the 14 scans. Grain areas are directly calculated from the number of experimental points of the same orientation with a step size equal to 0.2 µm. 
As a result of the annealing treatment, the grains are equiaxed. The grain area is computed by multiplying the number of measurement points that the grain contains by the area associated with a pixel, i.e. 0.1 µm². The grain diameter is estimated assuming a spherical grain. The software developed by Pradalier et al. [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] fits an ellipse to each twin. The measured twin thickness is defined as the minor axis of the ellipse. The true twin thickness is then estimated by multiplying the measured twin thickness by the cosine of the angle formed by the twin plane, $K_1$, and the normal to the sample surface [START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF][START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF]. When distributions are plotted with respect to the Schmid factor, the bin size was rounded down to 0.05 in order to obtain an exact integer number of subdomains between -0.5 and 0.5.

Twin-twin junctions statistics

This section is dedicated to the description of twin-twin junctions between first-generation twins occurring in Zr. As mentioned in the previous section, 4 different twin modes are reported for Zr, which allows for 10 different junction modes. However, since only 3 twinning modes have been observed, 6 different twin-twin junction modes may occur. These are listed in Table 4.8. Depending on the twinning modes involved, each twin-twin junction mode contains 3 or 4 types. The distinction between the different twin-twin junction types is based on the value of the minimum angle formed by the twin zone axes. The twin zone axis is here used to define the direction that is perpendicular to both the $K_1$ plane normal and the twinning shear direction, $\eta_1$. For example, 3 different types of T 1 -T 1, T 2 -T 2 and C 1 -C 1 junctions exist: the first one corresponds to junctions between two twins sharing the same zone axis; the second and third types correspond to junctions between twins for which the minimum angle formed by the two zone axes is equal to 2π/3 and π/3 radians, respectively. In the case of T 2 -C 1 twin-twin junctions, 4 types of junctions are considered: 2 types corresponding to junctions between twins sharing the same zone axis and 2 other types for junctions between twins with non-parallel zone axes. In the case of T 1 -T 2 and T 1 -C 1 junctions, 3 different types can be distinguished, none of which corresponds to junctions between twins with parallel zone axes; the minimum angles formed by the twin zone axes are here equal to π/6, π/2 and π/3 rad.
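A minimal sketch of this zone-axis-based classification is given below; the zone axes are taken as unit vectors, the reference angles are those quoted above, and the hexagonal symmetry operators applied in the actual software to obtain the true minimum angle are omitted, so this is illustrative only.

```python
# Illustrative classification of a twin-twin junction type from the
# angle between the two twin zone axes.
import numpy as np

def junction_type(axis_a, axis_b, tol_deg=5.0):
    a = np.asarray(axis_a, float); a /= np.linalg.norm(a)
    b = np.asarray(axis_b, float); b /= np.linalg.norm(b)
    angle = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    # Reference angles quoted in the text: 0 (shared zone axis), pi/6,
    # pi/3, pi/2 and 2*pi/3 radians.
    for ref in (0.0, 30.0, 60.0, 90.0, 120.0):
        if abs(angle - ref) < tol_deg:
            return ref       # matched reference angle, in degrees
    return None              # unclassified junction
```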
The 19 interaction modes and types observed in the TT03 and IP05 scans are graphically represented in Figure 4.13 (panels (a) to (s): T 1 -T 1 types 1-3, T 1 -T 2 types 1-3, T 1 -C 1 types 1-3, T 2 -T 2 types 1-3, T 2 -C 1 types 1-4 and C 1 -C 1 types 1-3). The total number of twin-twin junctions observed in the TT03 and IP05 scans is 833 and 96, respectively. Table 4.8 lists all possible twin-twin junction modes in Zr and, more relevant to this work, their observed occurrence frequencies. Frequencies are here defined as the ratio of the population of a given species to the overall population. Tables 4.7 and 4.8 show that, in the case of specimens loaded along the through-thickness direction, whereby C 1 twins are most frequently observed, i.e. 74.5% of all twins, C 1 twins interact mostly with other C 1 twins. Furthermore, T 2 twins tend to interact with twins of different modes regardless of the predominant mode, since T 2 -T 1, T 2 -C 1 and T 2 -T 2 twin-twin junctions represent 5.9%, 4.5% and 1.4% of all junctions appearing in TT03 maps and 52.1%, 0% and 6.2% of all junctions observed in specimens loaded along the in-plane direction (IP05), respectively. In TT03 specimens, T 1 twins represent 16.5% of all twins (Table 4.7) but are only involved in 8.8% of all twin-twin junctions; they also represent 31.4% of all twins embedded in single twinned grains (Table 4.9). Even when parent grains are suitably well oriented for T 1 twin nucleation, such as in the case of compression along the IP direction, T 1 -T 1 twin-twin junctions only represent 41.7% of all twin-twin junctions, while T 2 twins, whose population is 4.7 times smaller, are involved in 58.3% of all twin-twin junctions. For both TT and IP specimens, the ratio of the number of twins belonging to the most active twinning mode to the number of twins belonging to the second most active twinning mode is similar, i.e. 4.5 and 4.6, respectively. However, no statistical trend appears regarding the 3 types of twin-twin junctions that may occur between the first and second most active twinning modes. Figure 4.14c shows that, in the case of through-thickness compression, 80% of C 1 -C 1 twin-twin junctions correspond to the third type of junction. Figure 4.14 also reveals that junctions between T 1 -T 1 twins with parallel zone axes, studied by Yu et al. [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] in Mg single crystals, do not correspond to the predominant type of twin-twin junctions here. These results are purely qualitative, but they can be used as guidelines.

Influence of twin-twin junctions and grain-scale microstructural characteristics on twin nucleation and twin growth

The twinning process can be decomposed into three steps, starting with twin nucleation. This generally occurs at grain boundaries or local defects where internal stresses are highly concentrated. The second step corresponds to transverse propagation across the grain. Like a crack, the newly formed twin propagates very quickly until reaching another grain boundary or defect. The third and last step is twin growth, consisting of twin thickening [START_REF] Kumar | Numerical study of the stress state of a deformation twin in magnesium[END_REF]. This section is dedicated to the study of statistics related to twin nucleation and twin growth. However, prior to presenting any results, and as a complement to Table 4.7, Figures 4.15 and 4.18 present the distribution of twinning mode frequencies with respect to grain size and Schmid factor for specimens loaded along the TT and IP directions. Twinning mode frequencies are defined as the number of T 1, T 2 and C 1 twins contained in grains belonging to a given subdomain divided by the total population of twins. Notice that the larger number of twins associated with smaller grains should not be interpreted as a "reverse" Hall-Petch effect.
Rather, it is a consequence of having a large number of small grains, as shown in Figure 4.12.

Twin nucleation

Figure 4.16 shows the evolution of the fraction of twinned grains containing T 1, T 2 and C 1 twins as a function of grain area. Similar to previous statistical studies performed on Mg and Zr [START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF][START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF], the present work also finds that twin nucleation can be correlated to grain size. However, Figure 4.16 not only establishes that the overall probability of twinning incidence increases with grain area but also differentiates the 3 cases corresponding to the different twinning modes observed in samples loaded in compression along the TT and IP directions. The influence of grain size on the nucleation probability appears to be the strongest for C 1 and T 1 twins in the TT03 and IP05 scans, respectively, since 100% of grains larger than 1060 µm² contain at least one twin of the predominant twinning mode while less than 50% of grains smaller than 136 µm² are twinned. In both TT03 and IP05 specimens, the effect of grain size on T 2 twin incidence is significant. The fraction of grains containing at least one T 2 twin increases rapidly and linearly with grain area, even if, in the case of samples compressed along the TT direction, T 2 does not correspond to the second most active twinning mode. Note also that about 10% of grains smaller than 136 µm² and 30% of grains larger than 664 µm² contain at least one T 1 twin in the TT03 scans. Figures 4.17a and 4.17b, which display the average number of twins per twinned grain, clearly reveal that the average number of twins belonging to the predominant twinning mode increases with grain size. The phenomenon is more pronounced for C 1 twins in samples loaded in compression along the TT direction. However, the influence of grain size remains significant in the case of IP05 specimens: the average numbers of T 1 twins contained in grains whose areas are in the ranges [4 µm², 136 µm²] and [534 µm², 664 µm²] are equal to 1.5 and 4.9, respectively. This trend seems to change for grains larger than 928 µm². However, the interest of Figures 4.17c and 4.17d lies in the observation that small grains can contain a very large number of twins. It also shows that the number of twins per twinned grain can vary significantly from one grain to another. However, EBSD scans are images of 2D sections. As a result, it is possible that small grains are actually bigger than they seem to be. This introduces a bias into the grain size effect statistics. Nucleation of twins belonging to the predominant twinning mode is strongly controlled by grain orientation and the macroscopic stress direction. As indicated by a classic Schmid factor analysis (Figure 4.18), 91% and 76% of C 1 and T 1 twins, respectively, have a Schmid factor greater than 0.25. Figure 4.19 also shows that 21% and 42% of the C 1 and T 1 twins observed in specimens loaded along the TT and IP directions correspond to the 1st Schmid factor variant, denoted by v 1, respectively. Regarding the activation of twins belonging to the second most active twinning mode, the dependence on grain orientation and loading direction is less obvious. The phenomenon is particularly striking in the case of T 1 twins in TT03 specimens, since 55% of T 1 twins exhibit a negative Schmid factor (Table 4.10) and 26% of them correspond to either the 4th, 5th or 6th variants.
For T 2 twins observed in IP05 scans, 38% have a Schmid factor lower than 0.25, and 20% correspond to either the 4th, 5th or 6th variants. Figure 4.18 and Table 4.10 also reveal that the proportion of T 2 twins with a negative Schmid factor remains low (i.e. 11% and 8% in the case of TT and IP compressions, respectively) irrespective of the loading direction. The activation of twins with a negative SF is a result of using the macroscopic stress to define the SF. In practice, this result points to large local deviations from the macroscopic stress in the grains involved.

Twin growth

Twin growth is considered to be the last step of the twinning process, consisting in twin lamella thickening. The influence of grain orientation, grain size and twin-twin junctions is investigated via histograms presenting twin thickness distributions with respect to grain area and Schmid factor. Following the same approach as the one used for twin nucleation, Figure 4.20 shows statistics of twin true thicknesses as a function of grain size. Figures 4.20a and 4.20b are histograms displaying average twin true thicknesses sorted by twinning mode. In the case of the TT compression, the average T 1 twin thickness is always close to 0.5 µm. The average C 1 twin thickness appears first to increase until the grain size reaches 928 µm² and then to decrease; the values corresponding to the first and last bins are 0.64 and 0.71 µm, respectively. In the case of the IP compression, the average T 1 twin thickness oscillates around 0.75 µm. As a result, it is not possible to identify a correlation between twin thickness, twinning mode and twinned grain area. However, Figure 4.20 also reveals that the average thickness of twins belonging to the most active mode is always greater than the average thickness of twins belonging to the second most active mode. Figures 4.20c and 4.20d suggest that twin-twin junctions hinder twin growth. (Figure caption: variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode.) Numerical support for such effects is provided by Kumar et al. [START_REF] Kumar | Numerical study of the stress state of a deformation twin in magnesium[END_REF] and is based on shear accommodation and stress considerations. Large thickness values observed in small grains also suggest that using 2D variables to describe spatial phenomena introduces a bias into the grain size effect statistics. The same comment was made in the paragraph dealing with twin nucleation. Disregarding negative Schmid factor twins (Figure 4.18), the influence of the crystallographic orientation on the growth of twins belonging to the predominant mode is clearly shown in Figure 4.21, which presents the distribution of twin thickness sorted by twinning mode with respect to the Schmid factor. The average true thickness of C 1 and T 1 twins increases with increasing Schmid factor values in TT03 and IP05 specimens, respectively. This indicates that the macroscopic stress is the major driving force for twin growth in the case of the most active twinning modes. However, similar to the observations made about twin nucleation, the influence of grain orientation and macroscopic stress direction is reduced for twins belonging to the second and third most active twinning modes. As a result, the mechanisms involved in the growth of twins belonging to the predominant twinning mode are likely to be different from those responsible for the growth of other twins. Beyerlein et al.
[START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF] and Capolungo et al. [START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF] argue that, if the backstresses induced by neighboring grains in reaction to the localized twin shear are independent of orientation, then twins with a higher SF, and hence with a higher resolved shear stress, have an advantage in overcoming this shear reaction. Finally, to highlight the influence of twin-twin junctions on twin thickening in a statistically meaningful manner, the comparison of twin thicknesses between single twinned (also referred to as mono-twinned grains in Figures 4.22 and 4.23) and multi-twinned grains is performed for C 1 twins in TT03 specimens. A multi-twinned grain is here defined as a twinned grain containing several twins. In the TT03 scans, 102 and 1369 twins were observed in single twinned and multi-twinned grains, respectively (see Tables 4.7 and 4.9). Figures 4.22b and 4.23b show the distribution of twins embedded in single twinned and multi-twinned grains as a function of grain area and Schmid factor. They are aimed at indicating the statistical relevance of the data about twins contained in single twinned grains. Figure 4.23a presents the distribution of the average thicknesses of twins embedded in both single twinned and multi-twinned grains as a function of the Schmid factor values. Figure 4.23a clearly shows that the average thickness of twins embedded in single twinned grains is greater than or equal to the average thickness of twins contained in multi-twinned grains, irrespective of grain orientation. This phenomenon appears more clearly for high and mid-high Schmid factor values, i.e. SF > 0.25. As previously mentioned, Figure 4.23b shows that the bars associated with negative Schmid factors apply to only a few single twinned grains. Moreover, Figure 4.23a shows that, similar to multi-twinned grains, the thickness of twins contained in single twinned grains does not depend on grain area. However, the latter is generally greater than the average thickness of twins in multi-twinned grains. Figure 4.22b also shows that almost all single twinned grain areas are smaller than 664 µm². Such a result was expected due to the influence of grain size on twin nucleation.

Conclusion

The statistical analyses performed on AZ31 Mg alloy EBSD scans were aimed at determining activation criteria for two distinct types of "unlikely twins": low Schmid factor twins, defined as twins with either a negative Schmid factor or a Schmid factor smaller than 0.3 together with a ratio of the Schmid factor to the highest twin variant Schmid factor possible in the considered twinned grain lower than or equal to 0.6, and double extension twins. As implied by their designation, the nucleation of both low Schmid factor and double extension twins can be considered a rare event, since low Schmid factor and double extension twins represent 6.7% and 5.7% of all twins, respectively. Note also that double extension twins occupy less than 0.5% of the total twinned area. Relying on the value of the distortion induced by a twin that has to be accommodated by the other potential deformation modes, it was found that group 1 low Schmid factor twins, i.e. twins in contact with a grain boundary, require the most accommodation through basal slip, with the lowest CRSS, and the least accommodation through pyramidal slip, with the highest CRSS. Group 2 low Schmid factor twins, i.e.
twins forming a cross-boundary twin pair, require the least pyramidal slip or contraction twinning accommodation, both with high CRSS, but the most accommodation through prismatic slip and tensile twinning, whose CRSSs are higher than that of basal slip and lower than those of pyramidal slip and compressive twinning. The second study highlights that, contrary to primary twins, secondary tensile twins obey Schmid's law. It also showed that the use of a micromechanical double-inclusion model is an accurate way to predict the activation of the right twin variant. The fact that the activation of double extension twin variants obeys Schmid's law also implies that purely deterministic approaches, such as those used in classical polycrystalline models, should be able to predict the nucleation of such twins. The statistical study performed on a large set of EBSD scans of high-purity Zr discusses the influence of twin-twin junctions between first-generation twins, grain size and crystallographic orientation on the nucleation and growth of twins. Samples were loaded in compression at liquid nitrogen temperature along the through-thickness direction and one of the in-plane directions in order to favor C 1 and T 1 twins, respectively. This study is the first to establish the statistical relevance of twin-twin junctions by collecting and processing data about all twinning modes and all twin-twin junctions in Zr. Six different types of twin-twin junctions, i.e. T 1 -T 1, T 1 -T 2, T 1 -C 1, T 2 -T 2, T 2 -C 1 and C 1 -C 1, are observed. Twin-twin junctions occurring between twins belonging to different modes, and more particularly between twins belonging to the first and second most active twinning modes, appear very frequently and cannot be neglected. Depending on the loading configuration, they may represent more than half of all twin-twin junctions. The comparison between the average thicknesses of twins embedded in single twinned and multi-twinned grains reveals that twin-twin junctions hinder twin growth. In addition, only the nucleation and growth of twins belonging to the predominant twinning mode seem to be strongly sensitive to grain orientation and loading direction. These differences can probably be explained by the presence of localized high stress levels allowing the nucleation of any twinning mode. In agreement with previous studies, it is also found that the probability of twin nucleation and the average number of twins per twinned grain increase with grain size.

Chapitre 5

Conclusion

Three types of interactions, i.e. slip/slip, slip/twin and twin/twin interactions, are key to the mechanical response and strain hardening of h.c.p. metals. The work presented here focused on the twinning process in general and, more specifically, on understanding the development of internal stresses within the twin domain from inception to final shape and on quantifying the statistical relevance of twin/twin interactions (i.e. double/sequential twinning and twin intersections). To this end, novel micromechanical models were introduced; specimens were experimentally characterized by means of mechanical testing, XRD and EBSD techniques; and these results were analyzed in light of a new freeware developed along the course of the work to extract quantified links between twins, initial microstructure and loading directions. The key findings of each of these initiatives are presented in what follows, and guidance for future developments is proposed.
To study internal stress development during twinning, a new micromechanical approach based on a double inclusion topology and the Tanaka-Mori theorem was first adopted, in the form of an elasto-static Tanaka-Mori scheme for heterogeneous elastic media with plastic incompatibilities. This first model was introduced to study the evolution of internal stresses in both parent and twin phases during first and second-generation twinning in h.c.p. materials. The model was first applied to the case of pure Mg to reproduce the average internal resolved shear stresses in the parent and twin phases. While the study is limited to anisotropic heterogeneous elasticity with eigenstrains representing the twinning shears, it suggests that the magnitude of the back-stresses is sufficient to induce plastic deformation within the twin domains. Moreover, the predominant effect on the magnitude and direction of the back-stresses appears to be due to heterogeneous elasticity, because of the large misorientations induced between the parent and twin domains. It is also found that the stress state within twin domains is largely affected by the shape of the parent phase. Following the notation of Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF], in which any {10 12} tensile twin variant embedded in a {10 11} compressive twin is referred to as "secondary", application of the model shows that only three (i.e. A, B, D) of the six tensile twin variants have a positive resolved shear stress on the twin plane. Two of these correspond to the variants most frequently observed experimentally. In addition, variants A and D are found to exhibit the largest elastic energy decrease during secondary twin growth. Interestingly, variant A is also found to grow to larger volume fractions than variant D. Clearly, all results shown here are limited to static configurations and neglect the evolution of internal variables. Nonetheless, these first results suggest that applying the generalized Tanaka-Mori scheme to mean-field self-consistent methods will yield more accurate predictions of the internal state within twin domains for real polycrystalline hexagonal metals such as magnesium and its alloys.

Consequently, a second model, called the double inclusion elasto-plastic self-consistent (DI-EPSC) scheme and consisting of an extension of the elasto-static Tanaka-Mori scheme to elasto-plasticity and polycrystalline media, was proposed. As in the previous model, the original Tanaka-Mori result is used to derive new concentration relations involving the average strains in twins and twinned grains. Twinned and non-twinned grains are then embedded in a homogeneous equivalent medium (HEM), whose effective behavior is determined through an implicit nonlinear iterative self-consistent procedure. Contrary to the existing EPSC scheme, which only considers single ellipsoidal inclusions, the new strain concentration relations account for a direct coupling between the parent and twin phases. Using the same Voce law coefficients and hardening parameters as in Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], the comparison between EPSC predictions, DI-EPSC predictions and experimental data leads to three main results with respect to twinning and the associated plasticity mechanisms.
First, it appears that the new twinning topology captures the latent effects induced by twinning in the parent phases and thereby allows the model to predict the influence of plasticity on the hardening and hardening rates of the twin phases. Second, because twins are now directly embedded in the parent phases, the new concentration relations lead to more scattered shear strain distributions in the twin phases; twin stress states are strongly controlled by the interaction with their associated parent domains. Third, the study clearly shows the importance of appropriately considering the initial twin stress state at twin inception. During this study, it was found that most numerical instabilities can be traced back to the choice of the hardening matrix [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF]. Although this matrix was chosen to be positive semi-definite, the EPSC scheme would be more stable if all eigenvalues of the hardening matrix were strictly positive, i.e. if the hardening matrix were positive definite. However, the hardening matrix associated with Mg single crystals computed by Bertin et al. [START_REF] Bertin | On the strength of dislocation interactions and their effect on latent hardening in pure magnesium[END_REF] from dislocation dynamics is not positive semi-definite, let alone positive definite. As a result, the easiest way to overcome this issue, inherent in the EPSC algorithm, is either to use hardening laws with strictly positive definite hardening matrices or to turn to visco-plastic or elasto-visco-plastic self-consistent schemes.

In order to process EBSD scans and extract twinning statistics automatically, a new EBSD analysis software based on graph theory and quaternion algebra was developed. Quaternions allow an easy computation of disorientations between pixels and between areas of consistent orientation. The subsequent use of graph and group structures allows grain identification, twin recognition and statistics extraction. The newly introduced software is distinguished from pre-existing commercial and academic codes by combining visualization with automated analysis of the EBSD map. The built-in graphical user interface provides immediate, direct access to microstructural and twinning data, such as the orientation and size of twins and grains and the mode and system of twins; it also allows the user to correct or complete, if necessary, the analysis performed by the software. In addition, all raw and processed data are saved in a relational database, so that all experimental parameters, microstructural data and twinning statistics are easily accessible via SQL requests. The database also enables a systematic quantification of the influence of a very large number of parameters, which makes a significant difference compared with pre-existing analysis tools. Moreover, although the tool was initially developed to perform statistical analyses on Mg and Zr scans, it is not limited to these two h.c.p. metals: its algorithm is capable of identifying any twin occurring in h.c.p. materials, provided that the user specifies in the code the value of the c/a ratio and the theoretical disorientation quaternions corresponding to all potentially active twinning systems.
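To illustrate the quaternion machinery underlying the twin recognition step, the following minimal Python sketch computes the disorientation angle between two measured orientations under hexagonal symmetry and compares it with a theoretical twin disorientation. It is a simplified sketch, not the thesis code: the actual software matches full theoretical disorientation quaternions (axis and angle) for every variant, whereas this example checks the angle only; the approximately 86.3° reference value for {10 12} tensile twinning in Mg, the 5° tolerance and the Cartesian placement of the two-fold axes are illustrative assumptions.

import numpy as np

def quat_mult(p, q):
    # Hamilton product of two unit quaternions stored as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def quat_conj(q):
    # Conjugate (= inverse for unit quaternions).
    return np.array([q[0], -q[1], -q[2], -q[3]])

def axis_angle_quat(axis, angle_deg):
    # Unit quaternion for a rotation of angle_deg about axis.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

# The 12 proper rotations of the hexagonal point group 622: six rotations
# about the c-axis plus six two-fold axes lying in the basal plane.
HEX_SYM = [axis_angle_quat([0.0, 0.0, 1.0], 60.0 * k) for k in range(6)]
HEX_SYM += [quat_mult(q, axis_angle_quat([1.0, 0.0, 0.0], 180.0))
            for q in HEX_SYM[:6]]

def disorientation_angle(q1, q2):
    # Smallest rotation angle (degrees) relating orientations q1 and q2
    # once hexagonal crystal symmetry is taken into account on both sides.
    dq = quat_mult(quat_conj(q1), q2)
    best = 180.0
    for s1 in HEX_SYM:
        for s2 in HEX_SYM:
            w = abs(quat_mult(quat_mult(s1, dq), s2)[0])
            best = min(best, np.degrees(2.0 * np.arccos(min(1.0, w))))
    return best

# Example: a candidate rotated by 86.3 degrees about an in-basal-plane
# two-fold axis, the theoretical {10 12} tensile twin disorientation in Mg.
parent = np.array([1.0, 0.0, 0.0, 0.0])
candidate = axis_angle_quat([1.0, 0.0, 0.0], 86.3)
theta = disorientation_angle(parent, candidate)
print(f"{theta:.1f} deg")                  # ~86.3
is_tensile_twin = abs(theta - 86.3) < 5.0  # the 5 deg tolerance is an assumption

In the software itself, this test runs on every edge linking two neighboring fragments, and an edge is labeled with a twinning mode only when both the disorientation angle and axis match one of the theoretical relations supplied for the material's c/a ratio.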
For the analysis of other crystallographic structures, the user has to adapt the cell characteristics, add or remove the quaternions corresponding to the different twinning orientation relations and modify the symmetry quaternions.

The first two statistical studies were performed on rolled AZ31 Mg alloy EBSD scans in order to explain the activation of low Schmid factor {10 12} tensile twins and of successive {10 12}-{10 12} double extension twins. The first study revealed that low Schmid factor {10 12} tensile twins represent only 6.8 % of all twins. Because they rely on purely deterministic constitutive laws, polycrystalline schemes such as EPSC or EVPSC cannot predict the formation of low Schmid factor twins. However, the consequences for the predicted mechanical responses are expected to be small because of their low statistical relevance. The second study revealed that {10 12}-{10 12} double extension twins generally obey Schmid's law. It also showed that considering internal energy changes computed from a micromechanical double-inclusion model, even a simplified one, results in very accurate predictions of double extension twin variant activation. The study also pointed out that such double twins are extremely rare and have a negligible effect on the mechanical properties of the material.

A further statistical study was carried out on Zr EBSD scans in order to discuss the influence of twin-twin junctions between first generation twins, grain size and crystallographic orientation on the nucleation and growth of twins. Samples were loaded in compression at liquid nitrogen temperature along the through-thickness direction and one of the in-plane directions in order to favor C1 and T1 twins, respectively; the abbreviations T1, T2 and C1 stand for {10 12}, {11 21} and {11 22} twins. This study is the first to establish the statistical relevance of twin-twin junctions by collecting and processing data about all twinning modes and all twin-twin junctions in Zr. Six different types of twin-twin junctions, i.e. T1-T1, T1-T2, T1-C1, T2-T2, T2-C1 and C1-C1, are observed. Twin-twin junctions between twins belonging to different modes, and more particularly between twins belonging to the first and second most active twinning modes, appear very frequently and cannot be neglected: depending on the loading configuration, they may represent more than half of all twin-twin junctions. The comparison between the average thicknesses of twins embedded in single twinned and multi-twinned grains reveals that twin-twin junctions hinder twin growth. In addition, only the nucleation and growth of twins belonging to the predominant twinning mode seem to be strongly sensitive to grain orientation and loading direction. These differences can probably be explained by the presence of localized high stress levels allowing the nucleation of any twinning mode. In agreement with previous studies, it is also found that the probability of twin nucleation and the average number of twins per twinned grain increase with grain size.

The logical continuation of this work consists of 1) a more user-friendly integration of post-processing capabilities in the EBSD software and 2) the development and implementation of stochastic models incorporating all the new statistically meaningful data into the DI-EPSC scheme.
For example, the software would become more user-friendly if SQL requests were directly implemented in it and if it displayed and exported tables with information about twins, twin-twin junctions, twinned grains, etc. One can also imagine an interface allowing the user to select the microstructural features of interest. Regarding the development of stochastic models, Beyerlein et al. [START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF] and then Niezgoda et al. [START_REF] Niezgoda | Stochastic modeling of twin nucleation in polycrystals : An application in hexagonal close-packed metals[END_REF] proposed models dealing with the nucleation of {10 12} tensile twins in Mg. These models could be extended to predict the nucleation of any twinning mode in h.c.p. metals, and the results of the statistical study on Zr presented in Chapter 4 are a starting point. In addition, performing EBSD measurements on more Mg and Zr samples loaded either monotonically or cyclically along different directions, at different temperatures and strain rates, would provide a valuable additional source of data for the modeling community.

Annexe A

Graphical User Interface of the EBSD map analysis software and SQL requests for automated twin statistics extraction

A.0.1 Graphical User Interface

To assist the EBSD map analysis, 9 distinct visualization modes and 8 color mappings of the EBSD data are provided by the software. The color mappings comprise two different color mappings of the rotation space, three stereographic projections and, displayed in grey levels, the image quality, confidence index and fitness value reported by the acquisition software. The 9 visualization modes are the following:

1. Raw mode: displays the raw EBSD measurement points, as shown in Figure 3.23, left.

2. Twinning editor: displays the twinning relations between fragments and allows the user to enable or disable them. Fragments identified as parent phases and as first, second, and third and higher generation twins are marked with yellow, light blue, dark blue and red discs, respectively. Discs indicating high order twins, i.e. second, third and higher order, appear larger in order to be more visible; since such relations are likely to result from incorrectly enabled links, they have to be inspected first (Figure 3.23, center).

3. Grain neighbors: displays grains with their neighbors. In this mode, the user can mark a fragment as a parent or twin phase.

4. Clusters: displays phases grouped by twinning order. The colors used to indicate twinning modes are the same as in the twinning editor mode, but this mode emphasizes twinning order. For example, first generation twins are not only marked by a light blue disc, as in the twinning editor mode, but their boundaries also appear in light blue (Figure 3.23, right).

5. Twinning statistics: displays, for a selected parent-twin pair, information about these two connected parts, such as the disorientation existing between the twin and the parent phases, the twinning mode corresponding to the disorientation, etc.

6. Convex hulls: displays the convex hull of the detected grains. The polygons are drawn in green if their area is close to the enclosed grain area, and in red otherwise.

7. Ellipses: displays a fitted ellipse around every detected twin.
Twins whose shape does not fit an ellipse well are drawn in red, because they are likely to result from the merging of two (or more) twins (Figure 3.24, left). This mode gives the user access to grain and twin properties such as orientation and size.

8. Connected twins: displays the detected twin-twin junctions. The user can also manually mark undetected twin-twin junctions.

9. Twin joints: displays the identified twinning relations between measurement points located along twin boundaries. Even though measurement points along twin boundaries are not very reliable, this mode can be useful to visualize how strong the disorientation is along the boundary.

In addition to these visualization modes, options are available to highlight grain or twin boundaries, exclude grains in contact with the map edge, replace the twins of a twin strip by their union (Figure 3.24, right), display connected part ids, and zoom in, zoom out and pan. When zooming in to a level where individual measurement points can be distinguished, the local disorientation is also displayed, as shown in Figure 3.17.

A.0.2 SQL Requests

All SQL requests used for statistics extraction are listed below.

-- Request 1:
select g.id, g.area, g.qx, g.qy, g.qz, g.qw
from grains as g
where not g.map_edge
order by g.id;

-- Request 2:
drop view Twins;
create view Twins as
select C1.id as P, C2.id, C2.grain, S2.area, C2.size, S2.length,
       S2.thickness, C2.x, C2.y, S2.qx, S2.qy, S2.qz, S2.qw,
       E.twinning, E.variant
from ConnectedEdges as E
inner join Fragments as C1 on C1.id = E.i
inner join Fragments as C2 on C2.id = E.j
inner join FragmentStatistics as S1 on C1.id = S1.id
inner join FragmentStatistics as S2 on C2.id = S2.id
inner join Grains as G on G.id = C2.grain
    and C1.is_parent and E.twinning > 1 and C1.grain = C2.grain
    and not G.map_edge and C1.twinstrip <= 0 and C2.twinstrip <= 0
    and C2.twinning_order = 1
order by C2.id;

-- Request 3 (the from clause is missing in the source):
select C1.grain, C1.id, C2.id, T1.twinning, T1.variant,
       T2.twinning, T2.variant

select MAP_ID, g.id, g.area, g.border_length, avg(t.area),
       sum(t.area), count(t.id), avg(t.thickness)
from grains as g, Twins as t
where g.id = t.grain
group by g.id
order by g.id;

Annexe B

Stress-strain curve correction method and mechanical testing parameters

Measuring the evolution of the distance between the current and initial positions of the upper compression plate is an accurate way to estimate the deformation of the sample. However, during a compression test, the distance measured by the LVDT sensors evolves not only because of the deformation of the sample but also because of the deformation of the machine. Initially, the stiffness of the machine was measured experimentally by compressing square tungsten samples, and correction parameters were to be derived directly from the measured machine stiffness. Unfortunately, the tungsten samples cracked. As a result, it was decided to correct the measured strain in such a way that the Young's modulus during the loading phase is equal to 45 GPa.
To calculate the corrective parameters, a MATLAB code was written to extract the stress and strain values and to output the stress-strain curves, and another was written to calculate the corrective parameters and the Young's moduli for both the loading and unloading regimes. The reason why many Young's moduli for unloading regimes are missing in Table B.1 is that the measurement points corresponding to unloading phases were not recorded initially; the procedure was later adapted to save them. This is also why the elastic regime of the loading phase was used to determine the correction parameters.

The following equations describe the method used to correct the measured strain. The measured strain, ε_m, is corrected by a term, ε_corr, linearly proportional to the applied force and hence to the engineering stress, σ_m:

ε_c = ε_m − ε_corr    (B.1)

where ε_c denotes the corrected strain and the correction term is expressed as:

ε_corr = a σ_m + b    (B.2)

The correction parameters a and b are determined from the desired theoretical Young's modulus, E_th = 45 GPa, and two measurement points, P_1(ε_m1, σ_m1) and P_2(ε_m2, σ_m2), picked such that both σ_m1 and σ_m2 belong to the elastic regime of the loading. For all corrections, points P_1 and P_2 were chosen such that σ_m1 and σ_m2 were equal to about 30 MPa and 60 MPa, respectively.
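As a concrete illustration of this correction, the short Python sketch below computes a and b from two elastic-regime points and applies Equations (B.1)-(B.2). It is a minimal re-implementation sketch, not the original MATLAB code; in particular, it assumes that the corrected elastic line passes through the origin with slope E_th, which is one natural way to obtain the two conditions needed for the two unknowns.

import numpy as np

E_TH = 45.0e3  # target Young's modulus in MPa (45 GPa)

def correction_parameters(p1, p2, e_th=E_TH):
    # Compute (a, b) of eps_corr = a*sigma + b from two points
    # (eps_m, sigma_m) in the elastic regime of the loading phase.
    # Assumption (not stated explicitly in the text): the corrected
    # elastic line passes through the origin with slope e_th, i.e.
    # eps_c = sigma / e_th at both points.
    (em1, s1), (em2, s2) = p1, p2
    a = (em2 - em1) / (s2 - s1) - 1.0 / e_th
    b = em1 - s1 / e_th - a * s1
    return a, b

def correct_strain(eps_m, sigma_m, a, b):
    # Apply Equations (B.1)-(B.2): eps_c = eps_m - (a*sigma_m + b).
    return np.asarray(eps_m) - (a * np.asarray(sigma_m) + b)

# Example with made-up measurements at about 30 MPa and 60 MPa:
p1, p2 = (0.0012, 30.0), (0.0021, 60.0)
a, b = correction_parameters(p1, p2)
sigma = np.array([30.0, 60.0])
print(correct_strain([0.0012, 0.0021], sigma, a, b) * E_TH)
# -> [30. 60.], i.e. the corrected elastic slope equals E_th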
3.2 Historical perspectives of the Electron Backscatter Diffraction Technique . . . . . 3.3 Basic concepts of electron diffraction and diffraction pattern analysis . . . . . . . 3.4 A graph theory based automated twin recognition technique for Electron Back-Scatter Diffraction analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4.1 Euler angles, quaternion rotation representations and their application to EBSD data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4.2 Identification of grains, parent and twin phases . . . . . . . . . . . . . . . 3.4.3 Graphical User Interface and Data availability . . . . . . . . . . . . . . . . 3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapitre 4 Identification of statistically representative data associated to nucleation and growth of twins 4.1 Preliminary notations and considerations . . . . . . . . . . . . . . . . . . . . . . . 4.2 Nucleation of "unlikely twins" : low Schmid factor twins and double twinning in AZ31 Mg alloy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Experimental set-up and testing conditions . . . . . . . . . . . . . . . . . 4.2.2 Mechanical behavior and microstructure evolutions . . . . . . . . . . . . . 4.2.3 Low Schmid factor {10 12} tensile twins . . . . . . . . . . . . . . . . . . . 4.2.4 Successive {10 12}-{10 12} double extension twins . . . . . . . . . . . . . . 4.3 Probing for the latent effect of twin-twin junctions : application to the case of high purity Zr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 Experimental set-up and testing conditions . . . . . . . . . . . . . . . . . 4.3.2 Twin-twin junctions statistics . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapitre 5 Conclusion Annexes Annexe A Graphical User Interface of the EBSD map analysis software and SQL Requests for automated twin statistics extraction A.0.1 Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.0.2 SQL Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Annexe B Stress-strain curve correction method and mechanical testing parameters Bibliographie 2 2 Exemple de cartographie EBSD obtenu à partir d'un échantillon de Zr pur laminé chargé en compression dans le sens de son épaisseur. Cette cartographie fut traitée par le logiciel lequel sera plus longuement décrit chapitre 3. . . . . . . . . . . . . 3 Représentation schématique du problème élastique hétérogène avec "eigenstrains", dues aux maclages primaire et secondaire. . . . . . . . . . . . . . . . . . . . . . . 4 Représentation schématique du problème élasto-plastique telle que modélisé dans le modèle auto-cohérent élasto-plastique classique (dont l'abrviation anglaise est "EPSC") (a) et dans le nouveau modèle auto-cohérent élasto-plastique á double inclusion développé dans cette thése et appelé DI-EPSC. . . . . . . . . . . . . . . 5 Activités plastiques moyennées des systèmes de déformation principaux de glissement et de maclage au sein des phases macle et parent obtenues à partir de (a) EPSC et du (b) DI-EPSC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
6 (a) comparaison des courbes contrainte-déformation macroscopiques prédites par l'EPSC et le DI-EPSC avec celle obtenue expérimentalement et (b) évolution de la fraction volumique totale de macle au sein du polycristal. . . . . . . . . . . . . . 7 Représentation graphique d'exemples de jonctions macle-macle observées dans les cartographies EBSD de Zr. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vdes figures 1 . 5 15 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Schematic representation of a hexagonal structure containing three primitive unit cells. Atoms are represented by blue hard spheres. . . . . . . . . . . . . . . . . . . 1.4 Schematic representation of the main crystallographic planes and directions in h.c.p. structures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Table Schematic representation of the twinning plane, K 1 and its conjugate, K 2 , the twinning shear direction, η 1 , and its conjugate, η 2 , and the plane of shear, P [3]. 1.6 Schematic representation of possible lattice shuffles for double lattice structures when q = 4. (a) Parent structure ; (b) sheared parent ; (c) type 1 twin ; (d) type 2 twin ; (e) possible type 1 shuffle ; (f) possible type 2 shuffle ; (g) alternative type 1 shuffle ; (h) alternative type 2 shuffle [3]. . . . . . . . . . . . . . . . . . . . . . . . 2.1 Schematic representation of the original inclusion elastic problem containing one ellipsoidal inclusion V Ω with prescribed eigenstrain ǫ * . The dashed lines of the signify that the inclusion is embedded in an infinite elastic medium. The inclusion and the matrix have the same elastic modulus C 0 . . . . . . . . . . . . . . . . . . 2.2 Schematic representation of the heterogeneous elastic problem containing two ellipsoidal inclusions V b and V a (with V 1 ⊂ V 2 ) with prescribed eigenstrains ǫ * b in V b and ǫ * a in sub-region V a -V b and distinct elastic moduli C b in V b and C a (in sub-region V a -V b 3 , c) are also shown. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 (a and c) Mean internal stresses projected on the twinning plane in the twinning shear direction in both the twin phase and the parent phase as functions of R twin , (b and d) Influence of R parent on the mean internal stresses of the parent projected on the twinning plane. The twin volume fraction in the parent is 0.25 for (a) and (b) and 0.05 for (c) and (d). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Evolution of the mean internal stresses in both twin and parent phases projected on the twinning plane as a function of twin volume fraction. Lines refer to the model predictions while symbols denote measure data. The ellipsoid aspect ratio for the parent, R parent , is set to 3. . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Resolved shear stresses -projected on the twinning plane and twin shear directionof both the parent (a) and the twin (b) phases as functions of twin volume fraction. 2 . 9 29 Initial textures of (a) the extruded alloy (with the extrusion axis at the center of the pole figures) and (b) the randomly textured material . . . . . . . . . . . . . . 2.10 (a) comparison of macroscopic stress-strain curves from EPSC and DI-EPSC with experimental diffraction data and (b) evolution of the total twin volume fraction in the polycrystal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
2.11 Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi 2.12 RSS projected on the twinning plane in the twinning direction within the parent and twin phases. (a) compares the DI-EPSC and EPSC results ; (b) compares internal stresses for different geometrical configurations for initially unrelaxed twins. Axis length ratios for ellipsoidal shapes are a 1 /a 2 = 1 and a 1 /a 3 = 3. . . . . . . . 2.13 Spread of total shear strains within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an extruded alloy (c) EPSC and (d) DI-EPSC for an initially randomly textured alloy. Each cross represents the total shear strain for one single grain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.14 Total shear strain distributions in parent and twin domains at 4.5% deformation obtained from (a) EPSC and (b) DI-EPSC in the case of the extruded AZ31 alloy and from (c) EPSC and (d) DI-EPSC in the case of the initially randomly textured AZ31 alloy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.15 Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an initially randomly textured material . . . . . . . . . . . . . . 2.16 RSS projected on the twinning plane in the twinning direction within the parent and twin phases and obtained from the EPSC and DI-EPSC models for both initially relaxed and unrelaxed twins. . . . . . . . . . . . . . . . . . . . . . . . . . 2.17 Averaged system activities within the parent and twin phases from DI-EPSC when twins are assumed to be initially totally relaxed . . . . . . . . . . . . . . . . . . . 3.1 Philips XL 30 F Orientation Imaging Microscopy System at Los Alamos National Laboratory, MST-6 (LANL web site). . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Chamber of a Scanning Electron Microscope. . . . . . . . . . . . . . . . . . . . . 3.3 Schematic representation of signals resulting from the interaction between primary electrons and atoms at or near the sample surface ; R denotes the depth of the interaction volume [5]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Schematic representation of energy levels of electrons resulting from the interaction . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 Example of EBSD orientation map showing the undeformed microstructure of a high purity clock-rolled Zr specimen. Colors designate crystal orientations as indicated by the unit triangle in the bottom right-hand corner. The software used to generate this map is OIM Analysis [8]. . . . . . . . . . . . . . . . . . . . . . . 3.8 Schematic representation of the chamber of a SEM equipped with an EBSD detector. The abbreviation BSD stands for backscattered detector [5]. . . . . . . . . . . . . 3.9 Schematic representation of the Bragg's condition . . . . . . . . . . . . . . . . . . 3.10 Examples of Kikuchi patterns obtained from a h.c.p. material (Oxford Instrument) 3.11 Intersection of Kossel cones with the viewing screen [9] . . . . . . . . . . . . . . . 3.12 Diagram for calculation of bandwidth angle [10] . . . . . . . . . . . . . . . . . . . 3.13 Simplified and schematic representation of diffraction setup [10] . . . . . . . . . . 3.14 Schematic representation of the triplet method for diffraction spot indexing . . . 3.15 Schematic representation of twinning modes observed in Zr Mg. 
Twins are represented via their twinning planes, K 1 . . . . . . . . . . . . . . . . . . . . . . . . . . vii 1 and blue compressive 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.21 Same map as Figure 3.20, but with manual edition of 4 incorrect links. The disabled links are displayed as thin edges. . . . . . . . . . . . . . . . . . . . . . . 3.22 Zoom on the map of Figure 3.20 to illustrate complex grain structures recovered by our software. The dashed line is a disorientation relation that matches a known relation (compressive 1 4 . 2 42 3.24 Zoomed-in EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. Left : detected component with their ellipses and twin-strip links in magenta ; right : reconstructed complete twin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.25 EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up 10% strain. The top part shows the twinning relation identified. The right caption displays ellipses fitted to twins. Red ellipses correspond to low ellipsicity (below 70%). Low ellipsicity twins correspond here to merged orthogonal twins. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.26 Structure of the database used to store the EBSD analysis results. Boxes are database tables, edges with numbers indicates relations and the n-arity of these relations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 (a) Instron tensile-compression machine ; (b) Compression test at room temperature. viii Macroscopic stress-strain curves of specimens monotonically loaded in compression along RD (a), along TD (b) and along ND (c) at different temperatures and strain rates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4. . 4.5 (a) EBSD ND inverse pole figure micrograph of the specimen before compression along RD ; (b) Part of an EBSD orientation micrograph of a specimen compressed along the rolling direction up to 2.7% strain ; (c) EBSD orientation micrograph of a specimen successively loaded in compression along the rolling direction up to 1.8% strain and the transverse direction up to 1.3% strain. Black and yellow arrows indicate the presence of double extension twins. . . . . . . . . . . . . . . . 4.6 Scatter plots displaying the Schmid factor and Schmid factor ratio values of 291 Group 3 secondary twins (a) and 92 Group 4 secondary twins (b). . . . . . . . . . 4.7 Schematic representation of the simplified elasto-static Tanaka-Mori scheme. Second- 4 . 8 48 Evolution of the change of elastic energy normalized by the twin volume fraction with respect to (a) the applied stress, Σ T D , and (b) the secondary twin volume fraction, f V B = V B /V A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.9 Initial basal (0001) and prismatic (10-10) pole Figures of the clock-rolled highpurity zirconium studied in this work. The 3-axis is the through thickness direction of the plate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.10 Macroscopic stress-strain curves of high purity Zr samples loaded in compression along through-thickness (TT) and in-plane (IP) directions at 76K and 300K. . . . 4.11 Examples of EBSD scans for specimens loaded in compression along the TT (a) and along one of the IP (b) directions. . . . . . . . . . . . . . . . . . . . . . . . . 
4.12 Effective grain diameter (a,b) and grain area (c,d) distributions for TT03 (a,c) and IP05 (b,d) samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.13 Graphical representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.13 Schematic representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.14 Modes and types of twin-twin junctions observed in EBSD scans of samples loaded along the TT-direction ((a),(c),(e)) and the in-plane directions ((b),(d),(f)) . . . 4.15 Distribution of frequencies of T 1 , T 2 and C 1 twins with respect to grain size in samples loaded along the TT (a) and the IP (b) directions. . . . . . . . . . . . . 4.16 Distribution of the fraction of twinned grains containing T 1 , T 2 and C 1 twins plotted with respect to twinned grain area for samples loaded along the TT (a) and the IP (b) directions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.17 Distribution of the number of T 1 , T 2 and C 1 twins per twinned grain for TT03 (a) and IP05 (b) samples and scattergraphs displaying the number of C 1 , T 2 and C 1 twins embedded in parent phases with respect to twinned grain area for TT03 (c) and IP05 (d) samples. Each cross represents one single twin. But because twin numbers are integers, many crosses overlap. . . . . . . . . . . . . . . . . . . . . . 4.18 Distribution of SF values corresponding to twins activated in samples loaded along the TT-direction (a) and the IP-direction (b). . . . . . . . . . . . . . . . . . . . . 4.19 Distribution of variant frequencies of T 1 (a), T 2 (c) and C 1 (e) twins in TT03 scans and of T 1 (b) and T 2 (d) in IP05 scans, respectively, with respect to their Schmid factor. Variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.20 Distribution of average twin thicknesses as a function of grain size in samples loaded along the TT (a) and the IP (b) directions and scattergraphs displaying twin thickness values with respect to grain size in samples loaded along the TT (c) and the IP (d) directions. Each cross represents one twin. . . . . . . . . . . . . . 4.21 Distribution of average twin thicknesses as a function of SF values in samples loaded along the TT (a) and the IP (b) directions. . . . . . . . . . . . . . . . . . 4.22 Distribution of the twin thickness (a) and the frequency (b) of C 1 twins with respect to grain area in samples loaded along the TT direction. . . . . . . . . . . 4.23 Distribution of the twin thickness (a) and the frequency (b) of C 1 twins with respect to SF values in samples loaded along the TT direction. . . . . . . . . . . A.1 Visualization modes for a single grain . . . . . . . . . . . . . . . . . . . . . . . . . x Figure 1 - 1 Figure 1 -(a) Comparaison des courbes contrainte-déformation en traction et compression de Mg AZ31B laminé [1] et (b) comparaison de la réponse mécanique du Zr pur laminé pour différentes températures et trajets de chargement[START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Traction et compression apparaissent, respectivement, dans la légende sous la forme de "tens" et "comp". 
Les abréviations "TT" et "IP" signifient, quant à elles, "through-thickness compression" et "in-plane compression", à savoir "compression dans le sens de l' épaisseur" et "compression dans le plan". Figure 2 - 2 Figure 2 -Exemple de cartographie EBSD obtenu à partir d'un échantillon de Zr pur laminé chargé en compression dans le sens de son épaisseur. Cette cartographie fut traitée par le logiciel lequel sera plus longuement décrit chapitre 3., Figure 3 - 3 Figure 3 -Représentation schématique du problème élastique hétérogène avec "eigenstrains", dues aux maclages primaire et secondaire. Figure 4 - 4 Figure 4 -Représentation schématique du problème élasto-plastique telle que modélisé dans le modèle auto-cohérent élasto-plastique classique (dont l'abrviation anglaise est "EPSC") (a) et dans le nouveau modèle auto-cohérent élasto-plastique á double inclusion développé dans cette thése et appelé DI-EPSC. Figure 5 -Figure 6 - 56 Figure 5 -Activités plastiques moyennées des systèmes de déformation principaux de glissement et de maclage au sein des phases macle et parent obtenues à partir de (a) EPSC et du (b) DI-EPSC. Figure 7 - 7 Figure 7 -Représentation graphique d'exemples de jonctions macle-macle observées dans les cartographies EBSD de Zr. Sommaire 1 . 1 11 Motivation and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.2 Crystallography of h.c.p. metals . . . . . . . . . . . . . . . . . . . . . 13 1.3 Crystallography of twinning in h.c.p. metals . . . . . . . . . . . . . . 14 1.4 Nucleation and growth of twins . . . . . . . . . . . . . . . . . . . . . . 20 1.4.1 Twinning dislocations and twin interfaces . . . . . . . . . . . . . . . . . 20 1.4.2 Mechanisms involved in nucleation and growth of twins . . . . . . . . . 22 1.5 Twinning in constitutive and polycrystalline modeling . . . . . . . . 23 1.6 Scope of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Figure 1 1 Figure 1.1 -(a) comparison of tension and compression stress-strain curves of rolled AZ31B Mg alloy [1] and (b) comparison of mechanical responses of clock-rolled high-purity Zr for various loading directions and temperatures[START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively. Figure 1 . 3 - 13 Figure 1.3 -Schematic representation of a hexagonal structure containing three primitive unit cells. Atoms are represented by blue hard spheres. Figure 1 . 4 - 14 Figure 1.4 -Schematic representation of the main crystallographic planes and directions in h.c.p. structures. Figure 1 . 5 - 15 Figure 1.5 -Schematic representation of the twinning plane, K 1 and its conjugate, K 2 , the twinning shear direction, η 1 , and its conjugate, η 2 , and the plane of shear, P [3]. Figure 1 . 6 - 16 Figure 1.6 -Schematic representation of possible lattice shuffles for double lattice structures when q = 4. (a) Parent structure ; (b) sheared parent ; (c) type 1 twin ; (d) type 2 twin ; (e) possible type 1 shuffle ; (f) possible type 2 shuffle ; (g) alternative type 1 shuffle ; (h) alternative type 2 shuffle [3]. Figure 2 . 1 - 21 Figure 2.1 -Schematic representation of the original inclusion elastic problem containing one ellipsoidal inclusion V Ω with prescribed eigenstrain ǫ * . The dashed lines of the signify that the inclusion is embedded in an infinite elastic medium. 
The inclusion and the matrix have the same elastic modulus C 0 . 2). Both inclusions are embedded in an infinite matrix with reference elastic stiffness C 0 and subjected to traction and displacement boundary conditions. Figure 2 . 2 - 22 Figure 2.2 -Schematic representation of the heterogeneous elastic problem containing two ellipsoidal inclusions V b and V a (with V 1 ⊂ V 2 ) with prescribed eigenstrains ǫ * b in V b and ǫ * a in sub-region V a -V b and distinct elastic moduli C b in V b and C a (in sub-region V a -V b ). The two inclusions are embedded in an infinite elastic medium, with elastic modulus C 0 , containing an overall uniform plastic strain, E p . The second-order tensor E d represents the imposed macroscopic strain. Figure 2 . 3 - 23 Figure 2.3 -Representation of the local coordinate system (e ′1 , e ′ 2 , e ′ 3 ) associated with the {10 12}-tensile twinning. The reference coordinate system (e 1 , e 2 , e 3 ) associated to the crystal structure and the crystallographic coordinate system (a 1 , a 2 , a 3 , c) are also shown. Figure 2 . 4 - 24 Figure 2.4 -(a and c) Mean internal stresses projected on the twinning plane in the twinning shear direction in both the twin phase and the parent phase as functions of R twin , (b and d) Influence of R parent on the mean internal stresses of the parent projected on the twinning plane. The twin volume fraction in the parent is 0.25 for (a) and (b) and 0.05 for (c) and (d). Figure 2 . 5 - 25 Figure 2.5 -Evolution of the mean internal stresses in both twin and parent phases projected on the twinning plane as a function of twin volume fraction. Lines refer to the model predictions while symbols denote measure data. The ellipsoid aspect ratio for the parent, R parent , is set to 3. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] and 2.5. Figure 2 . 6 - 26 Figure 2.6 -Resolved shear stresses -projected on the twinning plane and twin shear directionof both the parent (a) and the twin (b) phases as functions of twin volume fraction. Solid lines and dashed lines refer to the present model and to the Nemat-Nasser and Hori's double inclusion scheme for homogeneous elasticity, respectively. The ellipsoid aspect ratio for the parent, R parent , is set to 3. Figure 2 . 7 - 27 Figure 2.7 -Schematic representation of the elasto-plastic problem with twinning corresponding respectively to the uncoupled [4] (a) and coupled (present) (b) formulations. The dashed line signifies that inclusions V t and V g-t with tangent moduli L t and L g-t are embedded in an equivalent homogeneous medium with a tangent modulus L ef f .9, both initially extruded and randomly textured materials are considered. Elastic constants, expressed in the crystal reference frame with Voigt notation, are given, in GPA, by C 11 = 59.75, C 22 = 59.74, C 33 = 61.7, C 12 = 23.24, C 13 = 21.7, C 44 = 16.39 and C 66 = 18.25. In Mg alloys, basal slip {0001} 2110 , prism slip {1010} 1210 , second-order pyramidal slip {2112} 2113 and tensile twinning {1012} 1011 are potential active systems. As shown previously, the present work uses an extended Voce law to describe hardening evolution Figure 2 . 2 Figure 2.10 -(a) comparison of macroscopic stress-strain curves from EPSC and DI-EPSC with experimental diffraction data and (b) evolution of the total twin volume fraction in the polycrystal. Figure 2 . 
11 - 211 Figure 2.11 -Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC Figure 2 . 2 Figure 2.12 -RSS projected on the twinning plane in the twinning direction within the parent and twin phases. (a) compares the DI-EPSC and EPSC results ; (b) compares internal stresses for different geometrical configurations for initially unrelaxed twins. Axis length ratios for ellipsoidal shapes are a 1 /a 2 = 1 and a 1 /a 3 = 3. Figure 2 . 13 - 213 Figure 2.13 -Spread of total shear strains within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an extruded alloy (c) EPSC and (d) DI-EPSC for an initially randomly textured alloy. Each cross represents the total shear strain for one single grain. Figure 2 . 14 - 214 Figure 2.14 -Total shear strain distributions in parent and twin domains at 4.5% deformation obtained from (a) EPSC and (b) DI-EPSC in the case of the extruded AZ31 alloy and from (c) EPSC and (d) DI-EPSC in the case of the initially randomly textured AZ31 alloy. Figure 2 . 15 - 215 Figure 2.15 -Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an initially randomly textured material Figure 2 . 16 - 216 Figure2.16 -RSS projected on the twinning plane in the twinning direction within the parent and twin phases and obtained from the EPSC and DI-EPSC models for both initially relaxed and unrelaxed twins. Figure 2 . 17 - 3 Sommaire 3 . 1 62 3. 2 64 3. 3 75 3. 4 . 1 2173316226437541 Figure2.17 -Averaged system activities within the parent and twin phases from DI-EPSC when twins are assumed to be initially totally relaxed Figure 3 . 1 - 31 Figure 3.1 -Philips XL 30 F Orientation Imaging Microscopy System at Los Alamos National Laboratory, MST-6 (LANL web site). Figure 3 . 2 - 32 Figure 3.2 -Chamber of a Scanning Electron Microscope. Figure 3 . 3 - 33 Figure 3.3 -Schematic representation of signals resulting from the interaction between primary electrons and atoms at or near the sample surface ; R denotes the depth of the interaction volume [5]. Figure 3 .Figure 3 . 4 - 334 Figure 3.4 -Schematic representation of energy levels of electrons resulting from the interaction between primary electrons and atoms at or near the sample surface [5]. Figure 3 . 7 - 37 Figure 3.7 -Example of EBSD orientation map showing the undeformed microstructure of a high purity clock-rolled Zr specimen. Colors designate crystal orientations as indicated by the unit triangle in the bottom right-hand corner. The software used to generate this map is OIM Analysis [8]. Figure 3 . 8 - 38 Figure 3.8 -Schematic representation of the chamber of a SEM equipped with an EBSD detector. The abbreviation BSD stands for backscattered detector [5]. Figure 3.9 -Schematic representation of the Bragg's condition Figure 3 . 10 - 310 Figure 3.10 -Examples of Kikuchi patterns obtained from a h.c.p. material (Oxford Instrument) 2 ( 3 . 13 )Figure 3 . 11 - 2313311 Figure 3.11 -Intersection of Kossel cones with the viewing screen[START_REF] Fultz | Transmission Electron Microscopy and Diffractometry of Materials[END_REF] Figure 3 . 12 - 312 Figure3.12 -Diagram for calculation of bandwidth angle[START_REF] Wright | Automatic-analysis of electron backscatter diffraction patterns[END_REF] Figure 3 . 13 - 313 Figure 3.13 -Simplified and schematic representation of diffraction setup [10] Figure 3 . 
14 - 314 Figure 3.14 -Schematic representation of the triplet method for diffraction spot indexing 2 and v * 3 = 3 OP 3 , the cosine directions g * ij associated with vectors v * i and v * j are expressed as Figure 3 . 15 - 315 Figure 3.15 -Schematic representation of twinning modes observed in Zr Mg. Twins are represented via their twinning planes, K 1 . ) = (k- 1 )π 3 -5π 6 and β = arctan γ √ 3 for 3 + π 6 √ 3 for 133363 Tensile 1, α(k) = (k-3)π 3 and β = arctan 2γ for Tensile 2, α(k) = kπ 3 and β = π + arctan γ for Compressive 1, α(k) = (k-1)π and β = π + arctan 2γ Compressive 2. Figure 3 . 16 - 316 Figure 3.16 -Example of neighboring relationships encountered in EBSD data. On the left, when measurement points form a square grid, the pixel represented by the black disc has 4 neighbors represented by the white circles. On the right, when measurement points form an hexagonal grid, each measurement has 6 neighbors. Figure 3 . 17 - 317 Figure 3.17 -Graph grouping measurement points of consistent orientation in connected parts. The colored circles correspond to EBSD measurement points, with the Euler angles mapped on the RGB cube and, white lines represent edges, whose thicknesses are proportional to the weight, w. Consequently, twins appear clearly as areas delineated by a black border where the edge weight becomes negligible. Figure 3 .Figure 3 . 19 - 3319 Figure 3.18 -Graph grouping measurement points of consistent orientation in connected parts with added twinning mode. Green and red edges linking border points, displayed in brown, indicate tensile and compressive twinning relations, respectively. Figure 3 . 3 Figure 3.20 -Automatic output for a Zr EBSD map. The sample was cut from a high-purity clock-rolled Zr plate and loaded in compression along one of the in-plane directions up to 5% strain [11]. Yellow borders mark the grain joints, brown borders the twin joints. Green edges represent tensile 1 relation, magenta tensile 2, red compressive 1 and blue compressive 2. Figure 3 . 3 Figure 3.21 -Same map as Figure 3.20, but with manual edition of 4 incorrect links. The disabled links are displayed as thin edges. Figure 3 . 3 Figure 3.22 -Zoom on the map of Figure 3.20 to illustrate complex grain structures recovered by our software. The dashed line is a disorientation relation that matches a known relation (compressive 1) but is identified as irrelevant to the twinning process. Figure 3 . 23 - 323 Figure3.23 -Example of secondary and ternary twinning observed in an EBSD map of a high-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain. This is shown using three different visualization modes (see appendix A) : raw mode (left), twinning editor mode (middle) and twinning statistics mode (right). The parent grain is surrounded in yellow, first order twins appear in cyan, secondary twins in blue and ternary or higher order twins in red. Figure 3 . 3 Figure 3.24 -Zoomed-in EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. Left : detected component with their ellipses and twin-strip links in magenta ; right : reconstructed complete twin. and 3.21. Figure 3 . 3 Figure 3.25 -EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up 10% strain. The top part shows the twinning relation identified. The right caption displays ellipses fitted to twins. Red ellipses correspond to low ellipsicity (below 70%). 
Low ellipsicity twins correspond here to merged orthogonal twins. Figure 3 . 26 - 326 Figure 3.26 -Structure of the database used to store the EBSD analysis results. Boxes are database tables, edges with numbers indicates relations and the n-arity of these relations. Sommaire 4. 1 1 Preliminary notations and considerations . . . . . . . . . . . . . . . . 92 4.2 Nucleation of "unlikely twins" : low Schmid factor twins and double twinning in AZ31 Mg alloy . . . . . . . . . . . . . . . . . . . . . . . . . 92 4.2.1 Experimental set-up and testing conditions . . . . . . . . . . . . . . . . 92 4.2.2 Mechanical behavior and microstructure evolutions . . . . . . . . . . . . 94 4.2.3 Low Schmid factor {10 12} tensile twins . . . . . . . . . . . . . . . . . . 96 4.2.4 Successive {10 12}-{10 12} double extension twins . . . . . . . . . . . . . 100 4.3 Probing for the latent effect of twin-twin junctions : application to the case of high purity Zr . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.3.1 Experimental set-up and testing conditions . . . . . . . . . . . . . . . . 104 4.3.2 Twin-twin junctions statistics . . . . . . . . . . . . . . . . . . . . . . . . 107 4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Figure 4 . 1 - 41 Figure 4.1 -(a) Instron tensile-compression machine ; (b) Compression test at room temperature. Figures 4. 2 2 Figures 4.2and 4.3 show the macroscopic stress-strain curves corresponding to specimens loaded in compression along RD, TD, ND and in compression along RD, followed by a second compression along TD. For compressions along RD and TD, the yield stress was approximately equal to 70 MPa. The inflection observed in the plastic region of the curves is typical of the activation of {10 12} tensile twins. As revealed by Proust et al.[START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF], the two deformation modes active in the matrix are basal slip and tensile twinning, with twinning increasing its contribution until about 3% strain while the basal slip activity decreases. At 5% strain, the total twinned volume is expected to occupy about 70% of the material volume. Figures 4.2and 4.3 show the macroscopic stress-strain curves corresponding to specimens loaded in compression along RD, TD, ND and in compression along RD, followed by a second compression along TD. For compressions along RD and TD, the yield stress was approximately equal to 70 MPa. The inflection observed in the plastic region of the curves is typical of the activation of {10 12} tensile twins. As revealed by Proust et al.[START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF], the two deformation modes active in the matrix are basal slip and tensile twinning, with twinning increasing its contribution until about 3% strain while the basal slip activity decreases. At 5% strain, the total twinned volume is expected to occupy about 70% of the material volume. Figure 4.3 reveals that after a first compression along RD the yield stress associated with compression along TD is no longer 70 MPa but approximately 115 MPa. Such a change can be explained by the presence of twin boundaries formed during the first compression, which act as barriers and /dt=0.001/s, T=348K ND3: dE/dt=0.001/s, T=348K ND5: dE/dt=0.1/s, T=298K ND6: dE/dt=0.1/s, T=298K Figure 4 . 
2 - 42 Figure 4.2 -Macroscopic stress-strain curves of specimens monotonically loaded in compression along RD (a), along TD (b) and along ND (c) at different temperatures and strain rates. Figure 4 . 3 - 43 Figure 4.3 -Macroscopic stress-strain curves of specimens loaded in compression along the rolling direction followed by a second compression along the transverse direction. Figure 4 . 4 - 44 Figure 4.4 -(a) XRD {0001}, {2 11 0} and {10 10} pole figures of specimens before compression (a)and after compression along the transverse direction up to 4% strain (b), the rolling direction up to 1.8% strain (c) and along the rolling direction up to 1.8% strain and then along the transverse direction up to 1.3% strain (d). Figure 4 . 5 - 45 Figure 4.5 -(a) EBSD ND inverse pole figure micrograph of the specimen before compression along RD ; (b) Part of an EBSD orientation micrograph of a specimen compressed along the rolling direction up to 2.7% strain ; (c) EBSD orientation micrograph of a specimen successively loaded in compression along the rolling direction up to 1.8% strain and the transverse direction up to 1.3% strain. Black and yellow arrows indicate the presence of double extension twins. Table 4 . 5 - 45 Groups of possible double extension twins Group Axis-minimum angle pair Number of variants I 0 • 6 II < 1 210 > 7.4 • 6 III < 014 141 > 60 • 12 IV < 17 80 > 60.4 • 12 All identified secondary twins have a positive Schmid factor, and 78.1% of them have a Schmid factor greater than 0.3. Moreover, 95.8% of these secondary twins have a Schmid factor ratio, introduced in the previous sub-section, greater than 0.6 (Figure 4.6). As a result, activated tensile secondary twins have a relatively high Schmid factor compared to the Schmid factor of the other 5 potentially active tensile secondary twins. Figure 4 . 6 - 46 Figure 4.6 -Scatter plots displaying the Schmid factor and Schmid factor ratio values of 291 Group 3 secondary twins (a) and 92 Group 4 secondary twins (b). Figure 4 . 7 - 47 Figure 4.7 -Schematic representation of the simplified elasto-static Tanaka-Mori scheme. Secondorder tensors E p , ǫ p1 and ǫ p 2 denote the macroscopic plastic strain imposed to the medium and plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and primary and secondary tensile twins are represented by volumes V -V A , V A -V B and V B , respectively. Second-order tensors ǫ p a and ǫ p b correspond to eigenstrains, modeling twinning shears induced by primary and secondary twinning, prescribed in inclusions V A -V B and V B , respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C. Figure 4 . 8 - 48 Figure 4.8 -Evolution of the change of elastic energy normalized by the twin volume fraction with respect to (a) the applied stress, Σ T D , and (b) the secondary twin volume fraction, f V B = V B /V A . Figure 4 . 9 - 49 Figure 4.9 -Initial basal (0001) and prismatic (10-10) pole Figures of the clock-rolled high-purity zirconium studied in this work. The 3-axis is the through thickness direction of the plate. Figure 4 . 10 - 410 Figure 4.10 -Macroscopic stress-strain curves of high purity Zr samples loaded in compression along through-thickness (TT) and in-plane (IP) directions at 76K and 300K. Figure 4 . 12 - 412 Figure 4.12 -Effective grain diameter (a,b) and grain area (c,d) distributions for TT03 (a,c) and IP05 (b,d) samples Figure 4 . 
Figure 4.14 shows twin-twin junction types for each mode whose frequency is greater than 4% in TT03 (Figures 4.14a, 4.14c, 4.14e) and IP05 (Figures 4.14b, 4.14d, 4.14f) scans. Notations are all detailed in the Appendix. Figure 4.14c shows that, in the case of through-thickness compression, 80% of C1-C1 twin-twin junctions correspond to the third type of junction. Figure 4.14 also reveals that junctions between T1-T1 twins with parallel zone axes, studied by Yu et al. [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] in Mg single crystals, do not correspond to the predominant type of twin-twin junction here. These results are purely qualitative, but they can be used as guidelines.

Figure 4.13 - Schematic representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans.

Figures 4.12, 4.15 and 4.18 also indicate that certain bars are only representative of a few twins or a few grains. Since most of the statistics presented in this section rely on average values, the authors decided not to plot bars corresponding to averages performed over fewer than 3 twins and 3 twinned grains in histograms.

Figure 4.15 - Distribution of frequencies of T1, T2 and C1 twins with respect to grain size in samples loaded along the TT (a) and the IP (b) directions.

Figure 4.17 shows the distributions of the number of T1, T2 and C1 twins per twinned grain with respect to grain size. While Figures 4.17a and 4.17b are histograms displaying averaged values, i.e. averaged numbers of twins, Figures 4.17c and 4.17d are scattergraphs displaying all values.
Figures 4.17a and 4.17b clearly reveal that the average number of twins belonging to the predominant twinning mode increases with grain size. The phenomenon is more pronounced for C1 twins in samples loaded in compression along the TT direction. However, the influence of grain size remains significant in the case of IP05 specimens: the average number of T1 twins contained by grains whose areas are in the ranges [4 um2, 136 um2] and [534 um2, 664 um2] is equal to 1.5 and 4.9, respectively. This trend seems to change for grains larger than 928 um2.
However, Figure 4.15b indicates that beyond 796 um2, averages are performed over fewer than 5 grains.

Figure 4.16 - Distribution of the fraction of twinned grains containing T1, T2 and C1 twins plotted with respect to twinned grain area for samples loaded along the TT (a) and the IP (b) directions.

The interest of Figures 4.17c and 4.17d lies in the observation that small grains can contain a very large number of twins. It also shows that the number of twins per twinned grain can vary significantly from one grain to another. However, EBSD scans are images of 2D sections. As a result, it is possible that small grains are actually bigger than they seem to be. This introduces a bias in grain size effect statistics.

Figure 4.17 - Distribution of the number of T1, T2 and C1 twins per twinned grain for TT03 (a) and IP05 (b) samples, and scattergraphs displaying the number of T1, T2 and C1 twins embedded in parent phases with respect to twinned grain area for TT03 (c) and IP05 (d) samples. Each cross represents one single twin; because twin numbers are integers, many crosses overlap.

Figure 4.18 - Distribution of SF values corresponding to twins activated in samples loaded along the TT direction (a) and the IP direction (b).

Figures 4.20c and 4.20d consist of scattergraphs that display all true thicknesses of twins observed in samples loaded along the TT and IP directions. The spread is significant and does not follow any pattern. Fluctuations may be associated with neighbor effects on twin thickness.

Figure 4.19 - Distribution of variant frequencies of T1 (a), T2 (c) and C1 (e) twins in TT03 scans and of T1 (b) and T2 (d) twins in IP05 scans, respectively, with respect to their Schmid factor. Variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode.

Figure 4.20 - Distribution of average twin thicknesses as a function of grain size in samples loaded along the TT (a) and the IP (b) directions, and scattergraphs displaying twin thickness values with respect to grain size in samples loaded along the TT (c) and the IP (d) directions. Each cross represents one twin.

Figure 4.21 - Distribution of average twin thicknesses as a function of SF values in samples loaded along the TT (a) and the IP (b) directions.

Figure 4.22 - Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to grain area in samples loaded along the TT direction.

Figure 4.23 - Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to SF values in samples loaded along the TT direction.

Figure A.1 - Visualization modes for a single grain. Figure A.1 summarizes all available modes.

Annex B - Stress-strain curve correction method and mechanical testing parameters. Consequently, parameters a and b can be expressed as follows:

b = eps_i - (sigma_i / E_th) * (1 + a * E_th)    (B.4)

with i = {1, 2}.
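Reading (B.4) as a machine-compliance correction, i.e. modeling the measured strain as eps_meas = sigma/E_th + a*sigma + b so that the corrected strain is eps = eps_meas - a*sigma - b, is our assumption about the reconstructed equation; a minimal sketch under that reading:

import numpy as np

def correct_strain(eps_meas, sigma, a, b):
    """Remove the affine compliance term from the measured strain."""
    return np.asarray(eps_meas, float) - a * np.asarray(sigma, float) - b

def b_from_point(eps_i, sigma_i, E_th, a):
    """Equation (B.4): b from one elastic-regime calibration point."""
    return eps_i - (sigma_i / E_th) * (1.0 + a * E_th)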
Table 1 - Twinning modes in Zr: twinning mode, twinning plane K1, twinning direction eta1, misorientation angle.

Table 1.1 - List of observed twinning modes in h.c.p. metals
K1      | K2      | eta1     | eta2     | s                 | q | Observed in
{10-12} | {-1012} | <-1011>  | <10-1-1> | (g^2-3)/(sqrt3 g) | 4 | Mg, Ti, Co, Zr, Zn, Be
{10-11} | {10-13} | <10-1-2> | <30-32>  | (4g^2-9)/(4sqrt3 g) | 8 | Mg, Ti
{11-21} | {0001}  | <-1-126> | <11-20>  | 1/g               | 2 | Re, Ti, Zr, Co
{11-22} | {11-24} | <11-2-3> | <22-43>  | 2(g^2-2)/(3g)     | 8 | Mg

positions, i.e. twin positions, if q is even. Regarding twinning of type 2, another parameter q*

Table 2.1 - CRSS and hardening parameters used in the Voce hardening rule (initial twin fraction: 0.03)
Deformation system | tau0 (MPa) | tau1 (MPa) | theta0 (MPa) | theta1 (MPa)
Basal              | 12         | 20         | 240          | 0
Prism              | 60         | 20         | 240          | 0
Pyramidal          | 100        | 117        | 2000         | 0
Tensile twin       | 60         | 0          | 0            | 0

Table 4.1 - Chemical composition limits of AZ31B Mg alloy in wt%
Al: 2.5-3.5 | Zn: 0.7-1.3 | Mn: 0.2 min | Si: 0.05 max | Cu: 0.05 max
Ca: 0.04 max | Fe: 0.005 max | Ni: 0.005 max | Others: 0.30 max | Mg: balance

Table 4.2 - Description of loading conditions
Sample | Loading direction | Strain rate (/s) | Temperature (deg C)
RD1    | RD         | 0.001 | 25
RD2    | RD         | 0.001 | 25
RD3    | RD         | 0.001 | 25
TD2    | TD         | 0.001 | 25
ND2    | ND         | 0.001 | 75
ND3    | ND         | 0.001 | 75
ND5    | ND         | 0.1   | 25
ND6    | ND         | 0.1   | 25
RD1TD1 | RD then TD | 0.001 | 25
RD2TD2 | RD then TD | 0.001 | 25
RD3TD3 | RD then TD | 0.001 | 25

Table 4.3 - Classification of twins with respect to the sign of the normal components of the twin distortion tensor. The number of twins corresponding to each variant type and their occurrence frequency with respect to the total twin population are also indicated.
Variant | e_RD | e_TD | e_ND | Observed twins (number, %) | Observed low SF twins (number, %)
1 | - | + | + | 835, 40.8 | 16, 1.9
2 | - | - | + | 920, 45.0 | 74, 8.0
3 | - | + | - | 152, 7.4  | 8, 5.3
4 | + | - | - | 13, 0.6   | 12, 92.3
5 | + | - | + | 42, 2.1   | 21, 51.0
6 | + | + | - | 84, 4.1   | 9, 10.7

Table 4.4 - Main characteristics of grains, primary and secondary {10-12} tensile twins observed on EBSD maps of AZ31 Mg alloy specimens loaded in compression along RD and then along TD
Phase type      | Number | Area (x10^3 um2) | Area fraction (%) | Average diameter (um)
Grains          | 4 481  | 823              | 100               | 16.34
Primary twins   | 11 052 | 230              | 27.94             | 5.24
Secondary twins | 585    | 3.47             | 0.42              | 1.37

Table 4.6 - Twinning modes in Zr ({10-11} (C2) twins were not observed)
Abbreviation | Twinning plane K1 | Twinning direction eta1 | Misorientation (deg)
T1 | {10-12} | <-1011>  | 85.2
T2 | {11-21} | <-1-126> | 34.9
C1 | {11-22} | <11-2-3> | 64.2
C2 | {10-11} | <10-1-2> | 57.1

Table 4.8 - Twin-twin junction frequencies for samples loaded in compression along the TT and IP directions
Type | Junction | TT03 number | TT03 freq. (%) | IP05 number | IP05 freq. (%)
1 | T1-T1 | 19  | 2.3  | 40 | 41.7
2 | T1-T2 | 49  | 5.9  | 50 | 52.1
3 | T1-C1 | 5   | 0.6  | 0  | 0.0
4 | T2-T2 | 12  | 1.4  | 6  | 6.2
5 | T2-C1 | 37  | 4.5  | 0  | 0.0
6 | C1-C1 | 709 | 85.5 | 0  | 0.0

Table 4.9 - Frequencies of twins contained in single twinned grains for TT03 and IP05 samples
Twin category             | TT03 number | TT03 freq. (%) | IP05 number | IP05 freq. (%)
All single twinned grains | 176 | -    | 73 | -
T1                        | 55  | 31.4 | 62 | 84.9
T2                        | 18  | 10.3 | 8  | 11.0
C1                        | 102 | 58.3 | 3  | 4.1

Table 4.10 - Twins with negative SF and their relative frequencies and twinned areas in TT03 and IP05 samples
Twin mode | TT03 number | rel. freq. (%) | rel. twinned area (%) | IP05 number | rel. freq. (%) | rel. twinned area (%)
All | 211 | 10.7 | 3.9  | 22 | 4.3  | 4.9
T1  | 180 | 55.2 | 43.7 | 14 | 3.3  | 0.6
T2  | 20  | 11.2 | 8.4  | 7  | 7.7  | 19.5
C1  | 8   | 0.5  | 0.2  | 1  | 33.3 | 68.2
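The Voce hardening rule parameterized in Table 2.1 above gives the threshold stress of each deformation system as a function of the accumulated shear Gamma. The extended form tau(Gamma) = tau0 + (tau1 + theta1*Gamma) * (1 - exp(-theta0*Gamma/tau1)) is the usual convention in self-consistent polycrystal codes, and we assume it here; a minimal sketch evaluating it with the basal-slip row:

import numpy as np

def voce_crss(gamma, tau0, tau1, theta0, theta1):
    """Threshold stress tau(Gamma) of the extended Voce law (assumed form).
    Note: systems with tau1 = 0 (constant CRSS, e.g. the tensile twin row)
    must bypass the exponential term."""
    if tau1 == 0.0:
        return tau0
    return tau0 + (tau1 + theta1 * gamma) * (1.0 - np.exp(-theta0 * gamma / tau1))

# Basal slip parameters from Table 2.1: tau0=12, tau1=20, theta0=240, theta1=0 (MPa)
for g in (0.0, 0.01, 0.05, 0.2):
    print(g, round(voce_crss(g, 12.0, 20.0, 240.0, 0.0), 2))
# hardening saturates toward tau0 + tau1 = 32 MPa as Gamma grows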
-- Query over twin-twin pairs within the same grain (the original SELECT
-- list was lost in extraction; the columns shown are a plausible choice):
select C1.grain, C1.id, C2.id
from FragmentEdges as E
    inner join Fragment as C1 on C1.id = E.i
    inner join Fragment as C2 on C2.id = E.j
    inner join Twins as T1 on C1.id = T1.id
    inner join Twins as T2 on C2.id = T2.id
where E.i > E.j                          -- each unordered pair counted once
  and not C1.is_parent and not C2.is_parent
  and C1.is_valid and C2.is_valid
  and C1.grain = C2.grain                -- junctions within a single grain
  and C1.twinstrip <= 0 and C2.twinstrip <= 0
order by C1.grain;

-- Request 4: in-grain distances between twins
select t.grain, d.i, d.j, d.dist, d.xi, d.yi, d.xj, d.yj
from twins as t, InGrainDistances as d
where (t.id = d.i or t.id = d.j)
group by d.dist * d.xi * d.xj
order by t.grain;

-- Request 5:

Table B.1 lists the measured Young's moduli and correction parameters for the compression tests whose characteristics are described in Table B.2. Abbreviations RD, TD and ND stand for rolling direction, transverse direction and normal direction, respectively.

Table B.1 - Young's moduli measured for both the loading and unloading regimes before correction, and correction parameters (for the two-step tests, the second line of each pair gives the values for the second compression)
Sample | E loading (GPa) | E unloading (GPa) | a (MPa^-1)  | b
RD1    | -    | -    | 7.8511e-05  | 5.2016e-04
RD2    | -    | -    | 3.0803e-05  | 7.7801e-04
RD3    | -    | -    | 5.2416e-05  | 6.6649e-04
TD2    | 15.6 | -    | 4.5699e-05  | 7.8436e-04
ND2    | 21.4 | 73.2 | 2.4145e-05  | -6.34e-04
ND3    | 25.9 | 69.7 | 1.6109e-05  | -7.043e-04
ND5    | 18.1 | 15.0 | 3.3402e-05  | 1.02751e-03
ND6    | 19.6 | 14.9 | 2.9008e-05  | 1.03421e-03
RD1TD1 | 19.7 | -    | 1.9934e-05  | 1.39418e-03
       | 14.1 | -    | 4.88218e-05 | 1.20736e-03
RD2TD2 | 17.9 | -    | 3.4792e-05  | 7.7543e-03
       | 16.3 | -    | 3.9626e-05  | 9.511e-04
RD3TD3 | 17.4 | -    | 3.669e-05   | 7.9274e-03
       | 13.7 | -    | 5.11435e-05 | 1.0106e-03

Table B.2 - Description of loading conditions
Sample | Loading direction | Strain rate (/s) | Temperature (deg C)
RD1    | RD         | 0.001 | 25
RD2    | RD         | 0.001 | 25
RD3    | RD         | 0.001 | 25
TD2    | TD         | 0.001 | 25
ND2    | ND         | 0.001 | 75
ND3    | ND         | 0.001 | 75
ND5    | ND         | 0.1   | 25
ND6    | ND         | 0.1   | 25
RD1TD1 | RD then TD | 0.001 | 25
RD2TD2 | RD then TD | 0.001 | 25
RD3TD3 | RD then TD | 0.001 | 25

To process EBSD maps and automatically extract twinning statistics, a new EBSD analysis software, relying on graph theory, group structures and quaternions, was developed. Quaternions make it easy to compute misorientations between pixels and between groups of pixels of the same orientation. The use of graph theory and group structures makes it possible to identify grains, recognize twin phases and extract statistics. The software stands out from existing commercial versions by combining visualization and automated analysis of the micrograph. The integrated graphical interface gives direct and immediate access to the microstructural and twinning data; it also allows the user to correct or complete, if necessary, the analysis performed by the software. All data, both raw and processed, are stored in a relational database. It is therefore possible to access all experimental parameters, microstructural data and twinning statistics through simple SQL queries. The database also enables the systematic quantification of the influence of a very large number of parameters. The construction and integration of such a database within the software itself are, along with the interactive graphical interface, features that current analysis tools do not offer. Although initially developed to analyze Zr and Mg micrographs, the capabilities of the software are not limited to these two hexagonal metals; its algorithm is generic, and the concept also extends to higher dimensions.
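As a sketch of the quaternion step described above: for unit orientation quaternions q1 and q2, the misorientation angle is 2*arccos(|w|), where w is the scalar part of q1 * conj(q2), minimized over the crystal's proper rotations. A minimal NumPy sketch for the 12 hexagonal symmetry operators (function and variable names are ours; conventions for which side the symmetry operator is applied on vary between tools, and this sketch applies it on one side only):

import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_axis_angle(axis, angle):
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

# 12 proper rotations of the hexagonal lattice: 6 about the c-axis,
# plus those 6 composed with a diad perpendicular to c.
_c_rots = [quat_from_axis_angle([0, 0, 1], k * np.pi / 3.0) for k in range(6)]
_diad = quat_from_axis_angle([1, 0, 0], np.pi)
HEX_SYMMETRY = _c_rots + [quat_mul(_diad, q) for q in _c_rots]

def disorientation_deg(q1, q2):
    """Smallest misorientation angle (degrees) between two orientations."""
    q1 = np.asarray(q1, float)
    q2_conj = np.asarray(q2, float) * np.array([1.0, -1.0, -1.0, -1.0])
    dq = quat_mul(q1, q2_conj)
    w_best = max(abs(float(quat_mul(s, dq)[0])) for s in HEX_SYMMETRY)
    return float(np.degrees(2.0 * np.arccos(min(w_best, 1.0))))

For two pixels of the same grain this returns about 0 degrees, while a value near 85.2 or 64.2 degrees would flag a T1 or C1 twin boundary, respectively (cf. Table 4.6).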
While we can only detect fully formed twins with EBSD, their presence implies the previous nucleation of such a variant. To study twin nucleation, all grains (i.e. twinned and untwinned) that are not on the edge of the map are considered. Concerning twin growth, statistics are based on twinned grains solely. Figure 4.12 shows grain diameter and area distributions of specimens loaded along the TT and IP directions. Because the notion of "grain" is questionable for cases with very small numbers of measurement points, grains smaller than 4 um2 (i.e. 23 measurement points) are disregarded. In the following, data are represented as histograms. Histograms are consistent statistical tools capable of estimating density functions, but they do not directly address the issues of bias and variance. However, it is still possible to minimize the error introduced by the histogram representation. In the present article, bin sizes are estimated from Scott's formula: w = 3.49 sigma n^(-1/3) [START_REF] Scott | On optimal and data-based histograms[END_REF], where sigma is an estimate of the standard deviation and n the number of elements considered, i.e. the total number of grains. The term n^(-1/3) results from the minimization of the integrated mean squared error function. The main advantage of this expression lies in its insensitivity to the nature of the estimated density function (Gaussian, log-normal, etc.). Means and standard deviations have been computed for area, diameter and SF distributions in TT03 and IP05 specimens. As a result, optimal bin widths for area, diameter and SF in TT03 samples are 63 um2, 2.5 um and 0.06, respectively, and 112 um2, 4.1 um and 0.06, respectively, in the case of IP05 maps. However, to be able to compare results obtained from both TT03 and IP05 maps, the same bin sizes have to be applied. In addition, we enforced the constraints that all distributions expressed with respect to diameter and area have the same number of subdomains and that every subdomain contains at least one element. Consequently, the diameter and area bin sizes used in the next histograms are 5.47 um and 132 um2, respectively. To avoid empty columns, one grain larger than 1456 um2 observed in a TT03 scan was disregarded.
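A short numerical sketch of Scott's bin-width rule as used above (the synthetic grain areas are illustrative only, not the thesis data):

import numpy as np

def scott_bin_width(x):
    """Scott's rule: w = 3.49 * sigma * n**(-1/3)."""
    x = np.asarray(x, float)
    return 3.49 * x.std(ddof=1) * len(x) ** (-1.0 / 3.0)

rng = np.random.default_rng(0)
areas = rng.lognormal(mean=4.5, sigma=0.8, size=1000)  # synthetic areas (um2)
w = scott_bin_width(areas)
bins = np.arange(areas.min(), areas.max() + w, w)
hist, edges = np.histogram(areas, bins=bins)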
Abstract. The main objective of this thesis is to investigate and quantify the influence of parent-twin and twin-twin interactions on the mechanical response of hexagonal close-packed metals. To study parent-twin interactions, a mean-field continuum mechanics approach has been developed based on a new twinning topology in which twins are embedded in twinned grains. A first model generalizing the Tanaka-Mori scheme to heterogeneous elastic media is applied to first- and second-generation twinning in magnesium. In the case of first-generation twinning, the model is capable of reproducing the trends in the development of backstresses within the twin domain as observed experimentally. Applying the methodology to the case of second-generation twinning allows the identification, in exact agreement with experimental observations, of the most likely second-generation twin variants to grow in a primary twin domain. Because the elastic behavior assumption causes internal stress magnitudes to be excessively high, the first model is extended to the case of elasto-plasticity. Using a self-consistent approximation, the model, referred to as the double-inclusion elasto-plastic self-consistent (DI-EPSC) scheme, is applied to Mg alloy polycrystals. The comparison of results obtained from the DI-EPSC and EPSC schemes reveals that deformation system activities and plastic strain distributions within twins drastically depend on the interaction with parent domains. The influence of twin-twin interactions on nucleation and growth of twins is statistically studied from zirconium and magnesium electron back-scattered diffraction scans. A new twin recognition software relying on graph theory analysis has been developed to extract all microstructural and crystallographic data. It is capable of identifying all twinning modes and all twin-twin interaction types occurring in hexagonal close-packed materials. The first results obtained from high purity Zr electron back-scattered diffraction maps reveal that twin-twin interactions hinder subsequent twin nucleation. They also show that the mechanisms involved in twin growth may differ significantly for each twinning mode. A second study performed on AZ31 Mg presents statistics about low Schmid factor {10-12} tensile twins and about {10-12}-{10-12} sequential double twins, coupled with a simplified version of the Tanaka-Mori scheme generalized to heterogeneous elasticity with plastic incompatibilities.
349,143
[ "782059" ]
[ "178323" ]
01717722
en
[ "sdv" ]
2024/03/05 22:32:10
2018
https://univ-rennes.hal.science/hal-01717722/file/Vassallo%20et%20al_How%20do%20walkers%20behave%20when%20crossing%20the%20way%20of%20a%20mobile%20robot%20that%20replicates.pdf
Christian Vassallo email: cvassall@laas.fr
Anne-Hélène Olivier email: anne-helene.olivier@univ-rennes2.fr
Philippe Souères email: soueres@laas.fr
Armel Crétual email: armel.cretual@univ-rennes2.fr
Olivier Stasse email: ostasse@laas.fr
Julien Pettré email: julien.pettre@inria.fr

How do walkers behave when crossing the way of a mobile robot that replicates human interaction rules?

Keywords: Human-robot interaction, Locomotion, Collision avoidance, Gait adaptation, Mobile robot

- A mobile robot was programmed to reproduce the interaction rules of human walkers
- We observed the behavior of human walkers crossing the way of this reactive robot
- Contrary to what occurs with a passive robot, the crossing order is preserved
- Humans behave with the reactive robot as when crossing another human walker
- Making robots move in a human-like way eases their interaction with human walkers

Wordcount is 2997 words.

Introduction

In everyday life, we walk by constantly adapting our motion to our environment. In past work, the relation between the walker and the environment was modeled as a coupled dynamical system. The trajectories result from a set of forces emitted by goals (attractors) and obstacles (repellers) [START_REF] Warren | Behavioral dynamics of visually guided locomotion, Coordination: neural, behavioral and social dynamics[END_REF]. Collision avoidance between pedestrians has also received a lot of attention, using either front-on [START_REF] Dicks | Perceptual-motor behaviour during a simulated pedestrian crossing[END_REF] or side-on approach trajectories [START_REF] Huber | Adjustments of speed and path when avoiding collisions with another pedestrian[END_REF][START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF][START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF][START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. Olivier et al. showed that walkers adapt their trajectory only if a future risk of collision exists [START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF]. This adaptation depends on the order of arrival of the pedestrians, which defines their order of passage. The first walker to arrive maintains or increases his/her advance by slightly accelerating and changing direction to move away from the other participant. The second one slows down and moves in the opposite direction to reduce the risk of collision. Huber et al. focused on how trajectories are adapted using speed and heading modifications depending on the crossing angle [START_REF] Huber | Adjustments of speed and path when avoiding collisions with another pedestrian[END_REF]. The future crossing order (who is about to give way or pass first) is quickly and accurately perceived and preserved until the end of the interaction [START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF][START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. This shows that walkers take efficiency into account, since an inversion of the crossing order would result in suboptimal adaptations of higher amplitude.
In addition, it was shown that the participant giving way contributes more to solving the collision avoidance [START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. Finally, behavior is influenced by the number of pedestrians to interact with and the potential to have social interactions with them [START_REF] Dicks | Perceptual-motor behaviour during a simulated pedestrian crossing[END_REF]. Because humans and robots will have to share the same environment in the near future [START_REF] Goodrich | Human-robot interaction: a survey[END_REF][START_REF] Kruse | Human-aware robot navigation: a survey[END_REF], recent studies have focused on tasks involving walkers and a moving robot. Vassallo et al. [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] performed an experiment in which participants had to avoid collision with a passive wheeled robot (moving straight at constant speed) crossing their direction perpendicularly. In contrast to a human-human interaction, several inversions of the crossing order were observed, even though this behavior was not optimal. Such a behavior was observed when the walker arrived ahead of the robot with a predictable future crossing distance between 0 and 0.6 m but, despite this advance, finally gave way. This result was linked to the notion of perceived danger and safety, and to the lack of experience of interacting with such a robot. Because of its design, the main limitation of the Vassallo et al. study [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] was its inability to conclude whether the modification of the walker behavior was due to the lack of adaptability of the moving obstacle or solely to its artificial nature. Nonetheless, it was shown in [START_REF] Satake | How to Approach Humans? Strategies for Social Robots to Initiate Interaction[END_REF] that the robot trajectory can be read and understood by humans in a task where a robot moves towards a human to initiate a conversation, based on an approach linked to public and social distances. Furthermore, in a face-to-face task with a moving robot, humans behave similarly whether or not they are told what the robot trajectory will be [START_REF] Carton | Measuring the effectiveness of readability for mobile robot locomotion[END_REF], showing their ability to actually read the robot motion. Given these results, the question addressed in this paper is: "How would humans behave if they have to cross the trajectory of a robot programmed to replicate the observed human-human avoidance strategy?" Would humans understand that the robot adapts its trajectory and then adapt their own strategy accordingly, or would they give way to the robot as observed in [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF]?

Materials and methods

Participants

Ten volunteers participated in the experiment (2 women and 8 men). They were 28.8 (±9.5) years old and 1.77 m (±0.12) tall. They had no known pathology that could affect their locomotion. All of them had normal or corrected sight and hearing. All participants were naïve to the studied situation. Participants gave written and informed consent before their inclusion in the study. The experiment conformed to the Declaration of Helsinki, with formal approval of the ethics evaluation committee of INSERM (IRB00003888, Opinion number 13-124), Paris, France (IORG0003254, FWA00005831).
Apparatus

The experiment took place in a 40m x 25m gymnasium. The room was separated into two areas by 2 m high occluding walls forming a gate in the middle (Figure 1). Four specific positions were defined: the participant starting position PSP, the participant target PT, and two robot starting positions RSP1 and RSP2, to generate situations where the robot approached from the right or from the left of the participants. Two virtual guidelines ra and rb, parallel to the line (RSP1, RSP2) and respectively located at a distance of 0.5 m and 1.0 m from the gate, were used as references for guiding the robot to pass behind or ahead of the participant during the avoidance phase. A specific zone between PSP and the gate was named the Motion Estimation Zone (MEZ), far enough from PSP to let the participants reach their comfort velocity before they entered the MEZ. The intersection point between the robot path and the initial path of the participant was named the Hypothetical Crossing Point (HCP), as this is the point where the participant and robot would cross if neither modified their trajectory.

------------------------------ Insert figure 1 here ------------------------------

Task

Participants were asked to walk at their preferred speed from PSP to PT, passing through the gate. They were told that a robot could be moving beyond the gate and could obstruct them, meaning that the robot could adapt its trajectory according to the participants' one. One experimental trial corresponded to one travel from PSP to PT. We defined tsee, the time at which the participant passed through the gate and saw the robot moving, and tcross, the time of closest approach, when the human-robot distance was minimal (i.e., the "distance of closest approach"). The crossing configuration and the risk of future collision were estimated using the Signed Minimal Predicted Distance, noted smpd, which gives, at each time step, the future distance of closest approach if both the robot and the participant keep a constant speed and direction [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] (a geometric sketch of this computation is given after the next paragraph). A variation of smpd means that the participant and/or the robot are performing an adaptation. The sign of this function depends on who, between the participant and the robot, is going to pass first: positive if it is the participant and negative otherwise. A change of smpd sign means a switch of the future crossing order.

Recorded data

3D kinematic data were recorded using a 16 infrared camera Vicon-MX motion capture system (120 Hz). Reconstruction was performed with Vicon-Blade and computations with Matlab (Mathworks®). The global position of participants was estimated as the centroid of the reflective markers set on a helmet they were wearing. The stepping oscillations were filtered out by applying a Butterworth low-pass filter (2nd order, dual pass, 0.5 Hz cut-off frequency).
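Under the constant-velocity assumption used to define smpd, the future distance of closest approach has a closed form. A minimal sketch (NumPy); the sign convention is implemented here via the predicted passage times at the intersection of the two straight-line paths, which is one plausible reading of the definition above:

import numpy as np

def smpd(p_h, v_h, p_r, v_r):
    """Signed minimal predicted distance under constant-velocity extrapolation.
    p_*: current 2D positions; v_*: current 2D velocities.
    Sign: positive if the human is predicted to reach the intersection of
    the two paths before the robot (assumes non-parallel paths)."""
    p_h, v_h = np.asarray(p_h, float), np.asarray(v_h, float)
    p_r, v_r = np.asarray(p_r, float), np.asarray(v_r, float)
    r, v = p_h - p_r, v_h - v_r
    t_star = max(0.0, -float(np.dot(r, v)) / float(np.dot(v, v)))  # time of closest approach
    mpd = float(np.linalg.norm(r + t_star * v))
    A = np.column_stack([v_h, -v_r])      # solve p_h + t_h*v_h = p_r + t_r*v_r
    t_h, t_r = np.linalg.solve(A, p_r - p_h)
    return mpd if t_h < t_r else -mpd

# Example: human walks +y at 1.4 m/s, robot crosses +x at 1.2 m/s
print(smpd([0.0, -3.0], [0.0, 1.4], [-4.0, 0.0], [1.2, 0.0]))  # ~ +1.09 m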
Robot behavior

We used a RobuLAB10 robot from Robosoft (dimensions: 0.45 x 0.40 x 1.42 m, weight 25 kg, maximum speed ~3 m.s-1). The robot reference point was the center of its base. The robot control sequence was the following (cf. Figure 1):
1) The robot was at rest at RSP1 or RSP2.
2) The participant crossed the MEZ; its arrival time at HCP was estimated.
3) The theoretical speed at which the robot should move to reach HCP at the same time as the participant was estimated. This speed was then further increased (respectively decreased) for the robot to arrive early (respectively late) at HCP, in order to match the expected smpd. This choice was made such that smpd values at tsee were randomly distributed in [-0.9 m; 0.9 m].
4) When the robot had to avoid the human, 2 m before reaching HCP, the robot adapted its trajectory by inserting a new way-point on its trajectory, in order to pass behind the walker along the line ra or ahead of the walker by moving along the line rb, depending on the sign of smpd at tsee.
5) When the avoidance phase was over, the robot was controlled to reach its final position.

Experimental plan

Each participant performed 30 trials. The robot starting position (50% from RSP1, 50% from RSP2) was randomized among the trials. To introduce variability, in 2 trials the robot did not move. The participants were informed neither about the initial position of the robot nor about the possibility that the robot would not move on every trial. Only the 28 trials with potential adaptations were analyzed.

Analysis

The analysis focused on the time interval during which adaptation was performed. To this end, smpd was normalized in time by resampling the function at 100 intervals between tsee (time 0%) and tcross (time 100%). The quantity of adaptation was defined as the absolute value of the difference between smpd(tsee) (i.e., the initial conditions of the interaction) and smpd(tcross) (i.e., the actual signed minimum distance between the participant and the robot). Statistics were performed using Statistica (Statsoft®). All effects were reported at p<0.05. Normality was assessed using a Kolmogorov-Smirnov test. Depending on normality, values are expressed as medians (M) or means ±SD. Wilcoxon signed-rank tests were used to determine differences between values of smpd at tsee and tcross. The influence of the crossing order evolution on the smpd values was assessed using a Kruskal-Wallis test with post hoc Mann-Whitney tests, for which a Bonferroni correction was applied: all effects are reported at a 0.016 level of significance (0.05/3). Finally, we used a Mann-Whitney test to compare the crossing distance depending on the final crossing order.

Results

We considered 279 trials (one was removed because the robot failed to start). Figure 2 depicts the evolution of smpd for all trials.

------------------------------ Insert figure 2 here ------------------------------

The sign of smpd at tcross showed that participants passed first in 53% of cases, and gave way in the other 47%. Combining this information with the data at tsee, we could evaluate whether an inversion of the crossing order occurred. The trials have been divided into 4 categories, depending on the relative signs of smpd at tsee and tcross (Pos for positive and Neg for negative): PosPos, PosNeg, NegPos, NegNeg. For example, the PosNeg category contained the trials for which smpd(tsee)>0 and smpd(tcross)<0. smpd categories were distributed among the trials in the following way: PosPos = 144 trials (52%), NegNeg = 110 trials (39%), PosNeg = 22 trials (8%), NegPos = 3 trials (1%). All participants had both PosPos and NegNeg trials, and 9 out of 10 participants had at least one PosNeg trial. In the remainder of the paper, the NegPos category will not be further considered as it contained only three trials, defined as outliers. Examples of corresponding trajectories for each of the 3 remaining categories are depicted in Figure 3. Note that in 91% of cases the crossing order was preserved.
We only observed inversions in the PosNeg trials, where participants were likely to pass first but adapted their trajectory to finally give way to the robot.

------------------------------ Insert figure 3 here ------------------------------

Figures 4a and 4b show, respectively, the average evolution of smpd and its time derivative for each category. Based on the sign of the smpd time derivative, we can separate the reaction period, during which participants perform adaptations (smpd varies), from the regulation period that follows the collision avoidance (the derivative vanishes, and its sign may even change), as defined in [START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF] and [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF]. The relative duration of the reaction phase for PosPos (55%) and NegNeg (57%) trials was almost the same, while participants took longer to adapt when they decided to give way to the robot in PosNeg (69%) trials.

------------------------------ Insert figure 4 here ------------------------------

Figure 5 shows the comparison between smpd(tsee) and smpd(tcross) for the 3 categories. For each category, the human-robot distance increased from tsee to tcross so that the risk of collision was reduced. Statistical analysis showed a significant difference of smpd between tsee and tcross for PosPos trials (M smpd(tsee)=0.71 m, M smpd(tcross)=1.08 m, Z=9.17, p<0.0001, r=0.76), for NegNeg trials (M smpd(tsee)=-0.46 m, M smpd(tcross)=-1.14 m, Z=8.98, p<0.0001, r=0.85) and for PosNeg trials (M smpd(tsee)=0.29 m, M smpd(tcross)=-0.71 m, Z=4.11, p<0.0001, r=0.88).

------------------------------ Insert figure 5 here ------------------------------

Finally, the distance of closest approach was influenced by the category (H(2,276)=29.3, p<0.0005). Post-hoc tests showed that the median distance between the robot and participants did not significantly differ between PosPos (M=1.08 m) and NegNeg (M=1.14 m) trials. However, when an inversion of the crossing order occurred in PosNeg trials, this distance was smallest (M=0.71 m).

Discussion

In the current study, results indicated that when a human crosses the trajectory of a mobile robot which is programmed to replicate the observed human avoidance strategy, strong characteristics of collision avoidance are comparable with those of human-human interactions. First, the crossing order is preserved from tsee to tcross in a majority of trials, as observed in human-human interactions [START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF][START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF]. However, in 8% of trials, the participants gave way to the robot while they were in position to pass first. Such a behavior was observed when smpd(tsee) was around 0.39 m. Above this threshold, participants preferred to preserve their role rather than giving way to the robot. This result is confirmed by the repartition into PosPos and PosNeg categories of trajectories starting from the smpd interval [0.39 m, 0.74 m], where 94% of trials belong to the PosPos group. This result is in contrast with the one previously observed with a passive robot [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF], where participants consistently preferred to give way to the robot when the risk of collision was below 0.81 m, even though this choice was not optimal.
Note that, whether or not an inversion of the crossing order occurred, the trajectories were adapted in order to increase the crossing distance between the human and the robot and thus reduce the risk of collision. Results show that humans solve the collision avoidance with anticipation, as previously demonstrated during human-human interaction [START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF]. Indeed, Figure 4 shows a plateau in smpd values before tcross, meaning that the avoidance maneuvers are over before the end of the task. As discussed in the review of Higuchi [START_REF] Higuchi | Visuomotor Control of Human Adaptive Locomotion: Understanding the Anticipatory Nature[END_REF], the anticipatory nature of adaptive locomotor strategies ensures safe navigation during the task. When the participant decides to preserve the crossing order, the task is solved earlier than when a switch of roles occurs, which requires more motion adaptation. The human-human avoidance strategy takes advantage of the configurations of both agents to limit their adaptations [START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF][START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. Assuming that both participants have similar locomotion capabilities, a role is assigned to each of them depending on their order of passage, as recalled in the introduction. This high-level strategy is not related to anthropomorphic walking; it is simply expressed in terms of the trajectory of a representative point (e.g. the waist position and heading) in the horizontal plane of motion. As such, the method can easily be transferred to a wheeled robot. The fact that the robot automatically initiates its avoidance motion by replicating the human strategy allows the human to easily fall back on the process usually applied. In this way, the human easily understands the role he/she should play, and no conflicting situation occurred in any trial. For this reason, our overall results are comparable to previous findings reported in the case of a human-human interaction. The control of our robot follows a model of shared-avoidance strategy based on human behavior [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF]. One conflicting situation might theoretically occur when both agents arrive with a zero smpd (i.e. exactly at the same time) and take the same role. Such a conflicting situation between human walkers was not reported in [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] and never occurred in our human-robot experiment. When the human and the robot were approaching the crossing point quite simultaneously, the smpd was checked twice: once at the beginning, based on the measure of the human velocity in the MEZ, and once at tsee. Based on this accurate measurement of the smpd, which is never exactly equal to zero, the robot adopts a role that helps the walker adapt his/her behavior. For this reason, we never observed any conflicting situation in which the walker would have tried to force the way (NegPos) after the robot had initiated the avoidance. However, the opposite situation (PosNeg), in which the human prefers to give way to the robot though he arrived ahead, was sometimes observed.
This cautious behavior does not constitute a conflicting situation that could block both agents. The behavioral similarity observed between human-human and human-robot interactions is in accordance with the study of Carton et al. [START_REF] Carton | Measuring the effectiveness of readability for mobile robot locomotion[END_REF], in which a walker avoids a robot that reproduces an average human trajectory to avoid a face-to-face collision. They showed that giving a human-like behavior to a moving robot gives rise to readable motions that convey intentions. This readability allows humans to minimize their planning effort and avoid the collision earlier and more smoothly. In accordance with previous studies [START_REF] Carton | Measuring the effectiveness of readability for mobile robot locomotion[END_REF][START_REF] Dragan | Generating legible motion[END_REF][START_REF] Lichtenthäler | Towards legible robot navigation-how to increase the intend expressiveness of robot navigation behavior[END_REF], our result shows that controlling robots so as to make them behave in a human-like way is a key point to ease human-robot cohabitation.

Conclusion

Our study suggests that when human walkers cross the trajectory of a mobile robot that obeys the observed human-human avoidance rules, they behave closely to how they behave when crossing the trajectory of another human walker. This result shows that, for the ease of human-robot collaboration, machines should move in a way that respects human interaction rules. In future work, as previously investigated in human-human interactions [START_REF] Dicks | Perceptual-motor behaviour during a simulated pedestrian crossing[END_REF][START_REF] Gallup | Visual attention and the acquisition of information in human crowds[END_REF][START_REF] Higuchi | Visuomotor Control of Human Adaptive Locomotion: Understanding the Anticipatory Nature[END_REF][START_REF] Passos | Information-governing dynamics of attacker-defender interactions in youth rugby union[END_REF], it would be interesting to better understand the visual anticipation processes as well as the nature of the visual information underlying such a collaboration. This can be done by using an eye-tracking system to couple the adaptations made by the human walkers with their gaze activity. Also, it would be interesting to evaluate whether the use of a humanoid robot, whose morphology is closer to that of a human than a wheeled robot, modifies the human behavior. Another direction of research would be to extend this work to the case of multiple walkers interacting with each other at the same time. Would it be possible, if some participants were replaced by robots that behave like humans, to observe the same human adaptation? Finally, the nature of human expectations and presuppositions, which can be linked to the notion of socially-aware navigation (see [START_REF] Rios-Martinez | From proxemics theory to socially-aware navigation: A survey[END_REF] for a review), should have a strong influence on the walker behavior. Indeed, in a less controlled context, participants would certainly behave differently than in the framework of a scientific experiment, where the robot is expected to behave safely. An interesting complement to this study would then be to run a similar experiment in a real-life environment to evaluate the impact of the context on the walker behavior.

Figure 1: Experimental apparatus and task.
The robot moves from RSP1 to RSP2 (or vice versa), following the lateral path r_b or r_a to pass respectively behind or ahead of the participant.

Figure 2: Evolution of smpd, normalized in time over the interaction [tsee, tcross], for all 279 trials. Gray curves represent trials where the initial crossing order was preserved, while black curves represent trials where the initial crossing order was changed.

Figure 3: Three examples of participant (P) and robot (R) trajectories during the interaction phase, for the PosPos (top), PosNeg (middle) and NegNeg (bottom) categories. The part of the trajectory between tsee (circle mark) and tcross (square mark) is represented by a bold line. Corresponding positions along time are linked by dotted lines.

Figure 4: (a) Mean evolution (±1 SD) of smpd for each category of trial. (b) Time derivative of the mean smpd. The three vertical segments correspond, for each curve (PosPos, PosNeg or NegNeg), to the time at which the time derivative of the smpd vanishes, i.e., they separate the reaction phase (on the left) from the regulation phase (on the right).

Figure 5: smpd values for the PosPos, PosNeg and NegNeg categories at tsee and tcross. A significant difference in values means that adaptations were made to the trajectory by the participant (***p < 0.001).

Acknowledgements

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 611909 (KoroiBot) and from the French National Research Agency, projects Entracte (#ANR-13-CORD-0002) and Percolation (#ANR-13-JS02-0008).

Conflict of interest

In this work, there is no conflict of interest.
26,342
[ "964372", "14006", "1510", "2715", "6663" ]
[ "388790", "388790", "528695", "491419", "388790", "525244" ]
01579651
en
[ "chim" ]
2024/03/05 22:32:10
2017
https://univ-rennes.hal.science/hal-01579651/file/Triplet%20state%20CPL%20active%20helicene-dithiolene_accepted.pdf
Thomas Biet
Thomas Cauchy
Qinchao Sun
Jie Ding
Andreas Hauser email: andreas.hauser@unige.ch
Patric Oulevey
Thomas Bürgi
Denis Jacquemin
Nicolas Vanthuyne

Triplet state CPL active helicene-dithiolene platinum bipyridine complexes

Keywords: UV-vis, ECD, photophysical measurements

Thomas Biet,a Thomas Cauchy,a Qinchao Sun,b Jie Ding,b Andreas Hauser,*b Patric Oulevey,b Thomas Bürgi,b Denis Jacquemin,c Nicolas Vanthuyne,d Jeanne Crassous e and Narcis Avarvari*a

Chiral metal dithiolene complexes represent a family of chiral precursors which can give rise to molecular materials with properties resulting from the interplay of chirality with conductivity, magnetism, and photophysics. We describe herein the first examples of chiral metal diimine dithiolene complexes, based on a platinum(II) centre coordinated by 2,2'-bipyridine and helicene-dithiolene ligands. Straightforward synthesis of the racemic and enantiopure complexes allows the preparation of luminescent Pt(bipy) [4]- and [6]helicene compounds, for which the solid-state structures were determined as well. TD-DFT calculations support the assignment of the low energy bands observed in the UV-vis absorption spectra as mixed metal-ligand-to-ligand charge transfer transitions and confirm that the emission band results from the T1 excited state. Interestingly, the enantiopure [6]helicene complexes show CPL activity at room temperature in acetonitrile solution, with anisotropy factors of 3x10-4.

Chiral metal dithiolene complexes represent an emerging family of molecular materials where chirality is expected to modulate properties such as conductivity, magnetism, luminescence, etc. 1 For example, differences in conductivity between diastereomeric pairs of anionic Ni(II) bis(dithiolene) complexes with chiral viologene-type cations have been noticed, 2 while the first chiral single-component conductors based on neutral Au(III) bis(dithiolene) complexes have recently been described. 3 Besides, stable anionic, neutral or cationic species can easily be accessed in metal bis(dithiolene) complexes thanks to the "non-innocent" character of the dithiolene ligands, so that redox modulation of the chiroptical properties can be observed. 4 Although square planar platinum diimine dithiolene complexes have been investigated for more than twenty years, especially for their emission properties in solution, 5-8 and more recently for photocatalytic water splitting, [START_REF] Zheng | Proc. Natl. Acad. Sci[END_REF] no chiral derivative of this heteroleptic family has yet been reported. For example, in Pt(II) 2,2'-bipyridine (bpy) arene-dithiolene complexes such as Pt(bpy)(tdt) (tdt = toluenedithiolate), showing room temperature luminescence in solution arising from a MMLL'CT (mixed metal-ligand-to-ligand charge transfer) transition, 5,6 chirality could in principle be introduced either on the diimine or on the benzodithiolene fragment in order to influence the photophysical properties. One of the interests of such complexes relies on the possible observation of circularly polarized luminescence (CPL), which is the differential spontaneous emission of left- and right-handed circularly polarized light. [START_REF] Riehl | [END_REF] While chiral lanthanide complexes are generally the most intense CPL emitters, 11,12 several examples of transition metal complexes have been reported as well. 13-15 Among them, those containing helicene-based ligands are particularly interesting, 16,17 as helicenes 18,19 and heterohelicenes 20 are well known non-planar conjugated molecules possessing strong chiroptical properties. Note that CPL active cationic dioxa, azaoxa and diaza [6]helicenes have recently been reported. 21 This work presents our investigations on helical dithiolene platinum 2,2'-bipyridine complexes using the hitherto unknown helicene-dithiolate (hel-dt) ligands, named by analogy with the achiral toluene-dithiolate (tdt) previously mentioned. We describe herein the synthesis and the structural characterization of Pt(bpy)([n]hel-dt) (n = 4, 6) complexes together with their chiroptical and photophysical properties supported by DFT calculations. The racemic 2,3-dithiolate-[4]- and [6]helicene ligands, generated in situ from the protected precursors (rac)-1a and (rac)-1b respectively, which have recently been used by some of us for the synthesis of TTF-helicenes, 22 were reacted with Pt(bpy)Cl2 to generate the corresponding complexes. Thus, (rac)-Pt(bpy)([4]hel-dt) 2a and (rac)-Pt(bpy)([6]hel-dt) 2b have been isolated as dark purple crystalline solids after column chromatography (Scheme 1 and ESI). As the racemization barrier of [4]helicenes is generally very low, 23 the enantiopure forms have been prepared only for the [6]helicene dithiolene complexes 2b, starting from the precursors (M)-1b and (P)-1b separated by chiral HPLC (Fig. S1-S3, ESI). The racemic complexes 2a and 2b crystallize in the centrosymmetric space groups C2/c and P-1, respectively, with both (M) and (P) enantiomers in the unit cell (Table S1, ESI). Worth noting are the helical curvatures (hc), defined by the dihedral angle between the terminal rings of the helicene skeleton, amounting to 25.40° and 53.76° for (rac)-2a (Table S2, Fig. S4-S5, ESI) and (rac)-2b (Table S3, Fig. S6-S7, ESI), which are typical values for [4]- and [6]helicenes, and the square planar coordination geometry of the platinum centres. DFT calculations in acetonitrile yield dihedral angles of 28.06° and 41.68° for 2a and 2b, suggesting that the impact of crystal packing is stronger for the latter compound. The enantiomerically pure complex (M)-2b has also been analysed by single crystal X-ray diffraction, thus allowing confirmation that it was obtained from the (-)-1b precursor. (M)-2b crystallized
[START_REF] Riehl | [END_REF] While chiral lanthanide complexes are generally the most intense CPL emitters, 11,12 several examples of transition metal complexes have been reported as well. [13][14][15] Among them those containing helicene based ligands are particularly interesting, 16,17 as helicenes 18,19 and heterohelicenes 20 are well known non-planar conjugated molecules possessing strong chiroptical properties. Note that CPL active cationic dioxa, azaoxa and diaza [6]helicenes have been recently reported. 21 This work is presenting out investigations on helical dithiolene platinum 2,2'-bipyridine complexes using the hitherto unknown helicene-dithiolate (heldt) ligands, by analogy with the achiral toluene-dithiolate (tdt) previously mentioned. We describe herein the synthesis and the structural characterization of Pt(bpy)([n]hel-dt) (n = 4, 6) complexes together with their chiroptical and photophysical properties supported by DFT calculations. The racemic 2,3-dithiolate- [4] and [6]helicene ligands, generated in situ from the protected precursors (rac)-1a and (rac)-1b respectively, which have recently been used by some of us for the synthesis of TTF-helicenes, 22 have been reacted with Pt(bpy)Cl2 to generate the corresponding complexes. Thus, (rac)-Pt(bpy)( [4]hel-dt) 2a and (rac)-Pt(bpy)( [6]hel-dt) 2b have been isolated as dark purple crystalline solids after column chromatography (Scheme 1 and ESI). As the racemization barrier for [4]helicenes is generally very low, 23 the enantiopure forms have been prepared only for the [6]helicene dithiolene complexes 2b starting from the precursors (M)-1b and (P)-1b separated by chiral HPLC (Fig. S1-S3, ESI). The racemic complexes 2a and 2b crystallize in the centrosymmetric space groups C2/c and P-1 respectively, with both enantiomers (M) and (P) in the unit cell (Table S1, ESI). Worth noting are the helical curvatures (hc) defined by the dihedral angle between the terminal rings of the helicene skeleton amounting to 25.40° and 53.76° for (rac)-2a (Table S2, Fig. S4-S5, ESI) and (rac)-2b (Table S3, Fig. S6-S7, ESI), which are typical values for [4] and [6]helicenes, and the square planar coordination geometry of the platinum centres. DFT calculations in acetonitrile yield dihedral angles 28.06° and 41.68° for 2a and 2b, suggesting that the impact of crystal packing is stronger for the latter compound. The enantiomerically pure complex (M)-2b has been also analysed by single crystal X-ray diffraction thus allowing to confirm that it was obtained from the (-)-1b precursor. (M)-2b crystallized A c c e p t e d m a n u s c r i p t in the orthorhombic system, non-centrosymmetric space group P212121, with four independent molecules in the unit cell (Fig. 1). Fig. 1 The four independent molecules of complex in the solid state structure of (M)-2b. The four molecules named as Pt1A -Pt1D slightly differ by the helical curvature values ranging from 58.58° (Pt1C) to 62.36° (Pt1D), while the Pt-S (2.24-2.25 Å) and Pt-N (2.05-2.07 Å) are in the normal range for such complexes (Table S4, ESI). 24 The packing of the molecules is very likely governed by the - stacking interactions occurring along the c direction (Fig. 1 and Fig. S8-S9, ESI). The enantiomeric (P) and (M)-2b complexes represent the first chiral members of the platinum diimine dithiolenes family. As outlined above, Pt(diimine)(dithiolate) complexes are emissive in fluid or frozen solutions. 
The enantiomeric (P)- and (M)-2b complexes represent the first chiral members of the platinum diimine dithiolene family. As outlined above, Pt(diimine)(dithiolate) complexes are emissive in fluid or frozen solutions. 5,6,25 We have therefore first set out to measure the photophysical properties of the racemic 2a and 2b. In the low energy region the complex (rac)-2a has an absorption band from 450 to 650 nm with a maximum around 550 nm (18180 cm-1) and an absorption coefficient eps ~ 6700 M-1 cm-1 (Fig. 2, top). For (rac)-2b the maximum of the corresponding band is around 562 nm and the absorption coefficient is eps ~ 3640 M-1 cm-1 (Fig. 2, bottom). This low energy absorption band is typical for Pt(diimine)(dithiolate) complexes and has been assigned to a MMLL'CT transition, as the HOMO has metal/dithiolene character while the LUMO is developed over the unsaturated diimine ligand. 6 The small redshift of the CT transition from 2a to 2b is caused by the slight change in the dithiolate ligand, with a more extended rigid pi backbone in the latter, which makes the HOMO energy slightly higher (by +0.03 eV) in 2b than in 2a. Moreover, the absorption coefficient is smaller for 2b, likely because of its more distorted structure. TD-DFT reproduces the experimental trends, with a vertical absorption at 556 nm (f = 0.21) for 2a and 562 nm (f = 0.19) for 2b. Both complexes are emissive in fluid CH2Cl2 solutions when irradiated into the CT bands, showing low energy emission bands at 720 nm (2a) and 715 nm (2b) (Fig. 2 and Fig. S10, ESI, for (rac)-2a in CH3CN). The more distorted structure of 2b might also be at the origin of the lower emission quantum yield for 2b (0.15%) than for 2a (0.19%) (Table S5, ESI). The perfect agreement between absorption and excitation spectra is proof that the luminescence indeed originates from the two compounds, despite the low emission quantum yield. For 2b a luminescence lifetime of 124 ns was measured in deoxygenated solution with pulsed excitation at 458 nm (decay curve shown in Figure S11, ESI). With the quantum efficiency of 0.15% this corresponds to a radiative lifetime of around 100 us, indicating that the emission originates from the T1 state, as is generally the case for Pt(II) complexes. 6

Fig. 2 Absorption, emission and excitation spectra of (rac)-2a (top) and (rac)-2b (bottom) in CH2Cl2. Absorption spectra were measured at concentrations of 2.2x10-5 M for (rac)-2a and 8x10-5 M for (rac)-2b. Emission spectra were measured using the same solutions degassed by nitrogen bubbling for 20 min, with excitation at 525 nm for (rac)-2a and at 560 nm for (rac)-2b. Excitation spectra were measured at emission wavelengths of 720 nm for (rac)-2a and 715 nm for (rac)-2b.

To characterize the charge transfer and emission properties, DFT calculations have been performed on 2a and 2b (Fig. S12-S19, Tables S6-S8, ESI). For both compounds, optimized as the (M) and (P) enantiomers respectively, the fully relaxed molecular geometries obtained by DFT are in line with those obtained by X-ray diffraction. In Fig. 3, we represent the electron density difference (EDD) plots corresponding to the transition to the lowest singlet excited state and the spin density of the lowest emissive triplet state. The EDD representation clearly shows that there is a strong CT from the dithiolene (donor, mostly in blue in Fig. 3) to the diimine (acceptor, mostly in red). The computed CT distance attains 4.0 Å in both 2a and 2b, which is a rather large value. Interestingly, one notices that the metal centre presents both positive and negative density contributions, indicating that it also partially plays the role of an accepting unit.
The implication of the metal in the EDD plots is also consistent with a possible intersystem crossing to the T1 state. The spin density of the lowest triplet state is, unsurprisingly, localized in exactly the same regions as the corresponding S1 state. The emission from the lowest triplet state was estimated in both the vertical and adiabatic approximations. In the former, DFT yields an emission at 802 nm and 815 nm for 2a and 2b, respectively, whereas in the latter, which takes into account the difference of vibrational energies, we obtained 759 nm and 760 nm for the two compounds. These latter values are in good correspondence with the experimental data, with an error smaller than 0.1 eV, confirming that the observed emission is indeed coming from T1. The fact that the vertical values indicate (incorrectly) a small redshift when going from 2a to 2b, whereas the adiabatic energies show essentially no shift, hints that the T1 geometrical relaxation is smaller in the latter compound than in the former. Investigation of the photophysical properties of both enantiomers of 2b has been performed in acetonitrile solutions. Absorption spectra are shown in the top panel of Fig. 4, together with the emission spectrum of (M)-2b. Compared to the spectra in CH2Cl2, the absorption maximum is shifted to 530 nm, that is, by about 30 nm to lower wavelength. The emission maximum in acetonitrile appears at 720 nm, that is, shifted to higher wavelength by 5 nm. The CD spectra of (P)- and (M)-2b in acetonitrile, mirror images of each other, are shown in the bottom panel of Fig. 4. For the CD spectra, TD-DFT indeed yields a weakly positive contribution to the rotatory strength for the transition to the lowest excited state of (P)-2b and a weakly negative contribution for the lowest excited state of (M)-2b, which is consistent with the experimental findings. CPL, representing the differential emission of left and right circularly polarized light and characterized by the anisotropy factor g_em = 2(I_L - I_R)/(I_L + I_R) at the maximum of the emission band, has been measured for solutions of (M)- and (P)-2b in acetonitrile at room temperature (Fig. 4, bottom). As hypothesized upon the introduction of the helicene backbone in the dithiolene ligand, the enantiomers of 2b show CPL activity with an anisotropy factor of ±3×10-4, which is a typical value for organic, organometallic and coordination complexes in solution, lanthanides excepted. 11,12 It should be mentioned, however, that this value of CPL anisotropy is obtained for a compound with a luminescence quantum efficiency of only 0.15%, and that 2b represents the first CPL-active metal dithiolene complex. In summary, the first helical Pt(diimine)(dithiolene) complexes have been prepared through the introduction of [4]- and [6]helicene backbones in the structure of the dithiolene ligand. The solid state structures of the racemic complexes show the presence of both (P) and (M) enantiomers, with helical curvatures typical of [4]- and [6]helicenes and packings controlled by π-π interactions. Enantiopure [6]helicene complexes have been prepared from the corresponding enantiopure precursors separated by chiral HPLC. The complexes are emissive in fluid solutions at room temperature when excited in the MMLL'CT band, the triplet state being responsible for the observed emission band centered at 715-720 nm. DFT calculations support the absorption, emission and CD properties of the enantiopure compounds.
The conformationally stable enantiopure [6]helicene complexes show CPL activity. These results underline the interest of helical dithiolene ligands as a means to access chirality-related combined properties in the derived complexes, and open the way towards the preparation of other related compounds by the use of chiral bipyridines in combination with diverse helicene-dithiolenes in order to tune their photophysical properties. Scheme 1 Synthesis of Pt(bpy)(hel-dt) complexes 2a-b. Fig. 3 Top: density difference plots between the lowest S1 and the S0 state as determined by TD-DFT on the optimal ground-state geometry. The blue (red) regions indicate regions of loss (gain) of density upon transition. Bottom: spin-density difference plots between the T1 and S0 states, considering the T1 state in its optimal geometry. (M)-2a and (P)-2b are displayed on the left- and right-hand side, respectively. See the SI for computational details. Fig. 4 Absorption spectra of (P)-2b and (M)-2b and emission spectrum of (M)-2b in acetonitrile (2.5×10-4 M) at T = 298 K, λex = 532 nm (top); CD and CPL spectra of (P)-2b and (M)-2b (bottom). This work was supported in France by the CNRS (GDR 3712 Chirafun), the University of Angers and the French Ministry of Education and Research (grant to T.B.). The investigation was supported in part by the University of Geneva and by the Swiss National Science Foundation (grant No 200020_152780). Magali Allain (University of Angers) is warmly thanked for help with the solid state structures.
15,541
[ "183495", "777292", "763966", "18938", "754026" ]
[ "213131", "213131", "154620", "154620", "154620", "154620", "59514", "186403", "194938" ]
01611001
en
[ "spi" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01611001/file/liu_19801.pdf
Y Liu V Vidal S Le Roux F Blas F Ansart P Lours Influence of isothermal and cyclic oxidation on the apparent interfacial toughness in thermal barrier coating systems In thermal barrier coatings (TBCs), the toughness of the interface lying either between the bond coat (BC) and the Thermally Grown Oxide (TGO) or between the TGO and the yttria-stabilized zirconia topcoat (TC) is a critical parameter for TBC durability. In this paper, the influence of aging conditions on the apparent interfacial toughness of Electron Beam-Physical Vapor Deposition (EB-PVD) TBCs is investigated using a specifically dedicated approach based on Interfacial Vickers Indentation (IVI), coupled with Scanning Electron Microscopy (SEM) observations, to create interfacial cracks and measure the extent of crack propagation, respectively. Introduction Thermal barrier coatings (TBCs) are typically used in key industrial components operating at elevated temperature under severe conditions, such as gas turbines or aero-engines, to effectively protect and insulate the superalloy metal parts, for instance turbine blades, against high temperature gases. Even though TBCs allow a drastic improvement of component performance and efficiency [START_REF] Padture | Thermal barrier coatings for gas-turbine engine applications[END_REF][START_REF] Goswami | Thermal barrier coating system for gas turbine application-a review[END_REF], thermal strains and stresses resulting from transient thermal gradients developed during in-service exposure limit the durability of the multi-material system. TBCs exhibit a complex structure and morphology consisting of three successive layers, deposited or formed on the superalloy substrate (Fig. 1): (i) the bond coat, standing as a mechanical bond between the substrate and the topcoat; (ii) the Thermally Grown Oxide (TGO), an Al2O3 scale that forms initially by pre-oxidation of the alumina-forming bond coat and then slowly grows upon thermal exposure to protect the substrate from further high temperature oxidation and corrosion; (iii) the ceramic topcoat (TC), made of yttria-stabilized zirconia (YSZ), the so-called thermal barrier coating itself, whose role is mainly to insulate the superalloy substrate from high temperatures. Electron Beam Physical Vapor Deposition (EB-PVD) and Air Plasma Spray (APS) are the two major coating processes implemented industrially for depositing YSZ. They generate different morphologies and microstructures and consequently different thermal and mechanical properties. The columnar structure, typical of EB-PVD deposition, shows an optimal thermo-mechanical accommodation of cyclic stress, resulting in high lateral strength. However, elongated (high aspect ratio) inter-columnar spaces roughly normal to the TBC assist thermal flux conduction and penetration through the topcoat, which detrimentally increases the thermal conductivity of the system, which can reach 1.6 W/m·K. APS TBCs are characterized by a lamellar structure, intrinsically much more efficient in terms of thermal insulation (conductivity as low as 0.8 W/m·K) but less resistant to in-plane cyclic mechanical loading. Regardless of the coating process, TBCs can suffer in-service damage as a consequence of the synergetic effects of mechanical stress, high temperature and the thermally activated growth of interfacial alumina. Failure can occur either cohesively within the topcoat for APS TBCs or adhesively at the interfaces between successive layers in EB-PVD TBCs.
Degradation of such systems usually occurs through the spallation of the topcoat, resulting from severe delamination either at the BC/TGO or the TGO/TC interface. The resistance to spallation is intimately related to the capacity of the interfaces of the complex TBC system to sustain crack initiation and propagation, which can be evaluated by measuring the interfacial toughness. Several methods have been proposed for interfacial fracture toughness measurement in various substrate/coating systems, including the "four point bending test" [START_REF] Thery | Adhesion energy of a YPSZ EB-PVD layer in two thermal barrier coating systems[END_REF], the "barb test" [START_REF] Guo | Effet of interface roughness and coating thickness on interfacial shear mechanical properties of EB-PVD Yttria-Partially Stabilized Zirconia thermal barrier coating systems[END_REF], the "buckling test" [START_REF] Faulhaber | Buckling delamination in compressed multilayers on curved substrates with accompanying ridge cracks[END_REF], the "micro-bending test" [START_REF] Eberl | In Situ Measurement of the toughness of the interface between a thermal barrier coating and a Ni alloy[END_REF] and various indentation techniques [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF][START_REF] Lesage | Effect of thermal treatments on adhesive properties of a NiCr thermal sprayed coating[END_REF][START_REF] Choulier | Contribution à l'étude de l'adhérence de revêtements projetés a la torche a plasma[END_REF][START_REF] Vasinonta | Measurement of interfacial toughness in thermal barrier coating systems by indentation[END_REF][START_REF] Mao | Evaluation of microhardness, fracture toughness and residual stress in a thermal barrier coating system: a modified Vickers indentation technique[END_REF]. This paper proposes to implement the Vickers hardness technique to estimate the interfacial toughness in EB-PVD TBCs, as well as its evolution under various isothermal and cyclic aging conditions. As a matter of fact, aging may provoke microstructural changes and residual stress development prone to enhance crack initiation and propagation. A tentative correlation between the conditions of aging, the induced microstructural changes and the concomitant evolution of toughness, necessary to understand and predict the durability of TBC systems, is detailed. Materials and testing conditions TBC systems processed by EB-PVD (150 μm thick) are provided by SNECMA-SAFRAN. Topcoats and bond coats are industrial standards, respectively made of yttria-stabilised zirconia (namely ZrO2-8 wt.% Y2O3) and β-(Ni,Pt)Al. Substrates are AM1 single crystal Ni-base superalloy disks, with a diameter of 25 mm and a thickness of 2 mm. All specimens are initially pre-oxidised to promote the growth of a thin protective Al2O3 scale. Samples are cut, polished and subsequently aged using various oxidation conditions prior to interfacial indentation. In addition to the as-deposited condition, two series of results are analyzed separately. The first series is relative to isothermal oxidation, following 100 h exposure at 1050 °C, 1100 °C and 1150 °C respectively. As the exposure time is kept constant, the influence of the oxidation temperature can be specifically analysed. The second series, performed at a given temperature (1100 °C), is dedicated to comparing isothermal and cyclic oxidation behavior.
Here again, the hot time at 1100 °C (i.e., 100 h) is the same for both tests. Fig. 1 shows the typical cross-sectional microstructure of an initial as-deposited EB-PVD TBC. Note that, after aging, a slight additional grinding is often required to thoroughly prepare the surface for interfacial indentation. Interfacial indentation test Various types of interfacial or surface indentation tests exist. They are performed either on the top surface of specimens, normal to the coating [START_REF] Davis | The fracture energy of interfaces: an elastic indentation technique[END_REF], or on cross-sections, either within the substrate close to the interface [START_REF] Colombon | Contraintes Résiduelles et Nouvelles Technologies[END_REF] or at the interface between the substrate and the coating [START_REF] Choulier | Contribution à l'étude de l'adhérence de revêtements projetés a la torche a plasma[END_REF]. The latter, further developed in [START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF], employs a pyramidal Vickers indenter and can be applied to a large range of coating thicknesses (greater than ∼100 μm). Typically, it is specifically used for investigating the adhesion of TBC systems [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF][START_REF] Lesage | Effect of thermal treatments on adhesive properties of a NiCr thermal sprayed coating[END_REF][START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF][START_REF] Wu | Microstructure parameters affecting interfacial adhesion of thermal barrier coatings by the EB-PVD method[END_REF]. The principle of interfacial indentation is to accurately align one diagonal of the Vickers pyramid with the interface between the substrate and the coating while loading the system, so as to generate local delamination of the coating. In this case, resulting from the application of a high enough indentation force, an induced crack of roughly semi-circular shape instantaneously propagates. For a given aging condition, each indentation force P greater than a critical force P_c, which must be estimated, generates a crack with radius a and an indent imprint with radius b. P_c and, correlatively, the critical crack length a_c cannot be determined by straightforward measurements but correspond graphically to the coordinates of the intercept between the apparent hardness line Ln(b)-Ln(P), showing the evolution of the imprint size versus the indentation force (master curve), and the Ln(a)-Ln(P) line, giving the evolution of the crack size versus the indentation force (Fig. 2). The apparent interfacial toughness K_ca is calculated as a function of the critical values according to the following relationship: K_ca = 0.015 (P_c / a_c^(3/2)) [ (E/H)_B^(1/2) / (1 + (H_B/H_T)^(1/2)) + (E/H)_T^(1/2) / (1 + (H_T/H_B)^(1/2)) ] (1) where B and T stand, respectively, for the bond coat and the topcoat. In standard TBC systems, the thickness of the TGO is generally low, typically ranging from 0.7 μm (after initial pre-oxidation) to 7 μm (after long-term exposure at high temperature), and is in any case much lower than the imprint of the indent resulting from the force range used for coating delamination purposes (Fig. 3).
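The following minimal Python sketch evaluates Eq. (1) numerically; it is not part of the original paper. The E and H values are the averages reported in Section 4 (only their ratios enter Eq. (1), so GPa can be used directly), P_c is of the order of the 1050 °C / 100 h case reported in Section 5, and the critical crack radius a_c is an assumed illustrative value.

# Hedged numerical sketch of Eq. (1).
def k_ca(p_c_N, a_c_m, e_b, h_b, e_t, h_t):
    """Apparent interfacial toughness of Eq. (1), in Pa*m^0.5."""
    term_b = (e_b / h_b) ** 0.5 / (1.0 + (h_b / h_t) ** 0.5)
    term_t = (e_t / h_t) ** 0.5 / (1.0 + (h_t / h_b) ** 0.5)
    return 0.015 * p_c_N / a_c_m ** 1.5 * (term_b + term_t)

E_B, H_B = 133.0, 5.15   # bond coat (GPa); only dimensionless ratios used
E_T, H_T = 70.0, 4.14    # topcoat (GPa)
P_c = 2.0                # critical load (N), order of the 1050 C case
a_c = 13e-6              # assumed critical crack radius (m), illustrative

print(f"K_ca ~ {k_ca(P_c, a_c, E_B, H_B, E_T, H_T) / 1e6:.1f} MPa*m^0.5")
# ~2.9 MPa*m^0.5, the order of magnitude reported in Fig. 5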
Because the TGO is so much thinner than the indent imprint, its influence on the mechanics of the test is deliberately neglected [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF]. However, it will be shown later that the thickness of the TGO has an influence on the location of crack initiation and subsequently on the propagation path. Determination of Young's modulus and hardness Young's modulus E and hardness H strongly depend on the chemical composition and the process-induced microstructure of materials. In multi-materials such as TBCs, these mechanical characteristics change as the composition changes throughout the entire thickness of the multi-layered system. While the nature of the single crystal substrate is essentially not affected by the overall deposition process, the morphology and microstructure of the top coat, and to a lesser extent of the bond coat, are strongly related to processing, which in turn affects the mechanical properties. Strictly speaking, the mechanical response of the system to interfacial loading should depend on the elastic and plastic properties of all materials involved, including those of the thermally grown oxide. However, a measurement of E and H of the growing oxide is not possible by means of standard micro- and nano-indentation. The model detailed in [START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF] requires the knowledge of these characteristic parameters for the substrate and the coating. Accordingly, the TGO is assumed to play the role of a (three-dimensional) interface, thickening as temperature exposure increases and promoting, when loaded, spallation along the (two-dimensional) interface it shares with either the topcoat or the bond coat. The Young's modulus of the top coat E_T and the hardness of both the bond coat H_B and the topcoat H_T are measured using the nano-indentation technique, implementing a Berkovich indenter. Details of the method can be found in [START_REF] Jang | Hardness and Young's modulus of nanoporous EB-PVD YSZ coatings by nanoindentation[END_REF]. Basically, considering the indentation force applied, hardness is evaluated by a simple and direct measurement of the indent imprint dimensions. Young's modulus is calculated by analyzing the purely elastic recovery of the plot relating the applied force to the in-depth displacement. For statistical reasons, hardness and Young's modulus have been measured at 10 different locations within the bond coat and top coat respectively. Average values, experimentally determined, are given below:
• E_B = 133 GPa
• E_T = 70 GPa
• H_B = 5.15 GPa
• H_T = 4.14 GPa
Results and discussion Typical examples of interfacial indentation results are given in Fig. 3. The indent imprint alone (Fig. 3a) or the indent imprint plus the induced crack (Fig. 3b) are shown for cases where the critical force to provoke crack formation is not reached or is exceeded, respectively. Fig. 4 gathers all data collected from experiments on as-deposited and isothermally oxidised TBCs. The linear relationship between Ln(P) and Ln(b), plotting the so-called master curve of apparent hardness, with a slope close to 0.5, is in good agreement with the general standard formula relating the Vickers hardness (HV) of bulk materials to the ratio between the applied load P and the square of the indent diagonal length b^2.
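The graphical intercept construction of Fig. 2 can also be sketched numerically. The short Python example below (not from the paper) fits the two log-log regression lines and solves for their intersection, yielding the critical pair (P_c, a_c); the load, imprint and crack data are synthetic, for illustration only.

# Sketch of the intercept method of Fig. 2 with synthetic data.
import numpy as np

P = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # indentation loads (N)
b = np.array([8.0, 11.5, 16.0, 19.5, 25.0])    # imprint half-diagonals (um)
a = np.array([12.0, 25.0, 52.0, 80.0, 135.0])  # crack radii (um)

# Least-squares fits in log-log coordinates: ln(y) = s*ln(P) + c
s_b, c_b = np.polyfit(np.log(P), np.log(b), 1)  # slope ~0.5 expected
s_a, c_a = np.polyfit(np.log(P), np.log(a), 1)  # crack line, steeper slope

# Intersection of the two lines: s_b*x + c_b = s_a*x + c_a
ln_Pc = (c_a - c_b) / (s_b - s_a)
P_c = np.exp(ln_Pc)                   # critical load
a_c = np.exp(s_a * ln_Pc + c_a)       # critical crack length
print(f"P_c ~ {P_c:.2f} N, a_c ~ {a_c:.1f} um")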
For a given oxidation temperature, the variation of the length a of the indentation-induced crack versus the applied load P also fits a single regression line on a log-log scale, which can serve (as indicated in Section 3) to evaluate the critical load P_c necessary to initiate interfacial detachment. Note that for aged specimens, the critical force P_c (corresponding to the abscissa of the intercept between the master curve and the Ln(a) vs Ln(P) plot) decreases as the oxidation temperature increases, thus indicating a thermally activated degradation of the interface. As a comparison, the as-deposited TBC can sustain a much higher load before suffering interfacial debonding. Quantitatively, the critical force is respectively 0.3 N for 100 h oxidation at 1150 °C, 0.8 N for 100 h oxidation at 1100 °C, 2 N for 100 h oxidation at 1050 °C, and about one order of magnitude higher (up to 16 N) for the as-deposited, non-oxidized TBC. Using Eq. (1) and the values of E and H reported in Section 4, the apparent interfacial toughness is calculated and detailed in Fig. 5 for all investigated cases. Of course, correlatively to the thermally activated decrease of the critical force discussed above, the toughness also decreases as the oxidation temperature increases. Besides, it was shown elsewhere [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF] that, for a given oxidation temperature, the interfacial degradation is similarly time-dependent. This unambiguously shows that the propensity of the coating to detach from the substrate results from complex solid-state diffusion processes that impair the mechanical integrity of the interface. Beyond the mechanistic approach, the fractographic analysis gives interesting information on the mechanisms of crack initiation and further propagation, which can both vary depending on the aging conditions. Indeed, the enhancement of thermal activation as the oxidation temperature is raised results in the growth of a thicker alumina scale at the interface between the bond coat and the top coat. This observation is consistent with results reported by Mumm et al. [START_REF] Mumm | On the role of imperfections in the failure of a thermal barrier coating made by electron beam deposition[END_REF], using a wedge imprint to generate delamination. It was shown that when the oxide thickness is lower than 2.9 μm, delamination extends predominantly within the TGO and TBC, whereas for thicknesses higher than 2.9 μm, degradation occurs along the interface between the bond coat and the TGO. Fig. 6 proposes a comprehensive map of cracking, which delimits, within a graph plotting oxide thickness versus oxidation temperature, the two domains of initiation and propagation, either at the BC/TGO or the TGO/TC interface. These two domains consistently extend on either side of the critical thickness value proposed by Mumm et al. as the threshold for crack propagation at the TGO/bond coat interface. Note that the diagram includes oxide thicknesses measured for as-deposited, isothermally oxidized and cyclically oxidized (100 "1 h" cycles at 1100 °C) specimens. The oxide thickness was accurately estimated using image analysis on cross-sectional SEM micrographs showing the TGO layer, by dividing the total area of the layer (expressed in μm^2) by the developed length of its median axis (expressed in μm).
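A minimal Python sketch of this area-over-median-axis thickness estimate is given below; it is an illustration, not the paper's image-analysis pipeline, and the binary mask and pixel scale are placeholder values.

# Toy implementation of the TGO thickness estimate described above.
import numpy as np

def mean_tgo_thickness(oxide_mask: np.ndarray,
                       median_axis_length_px: float,
                       um_per_px: float) -> float:
    """Mean oxide thickness (um) = area (um^2) / median-axis length (um)."""
    area_um2 = oxide_mask.sum() * um_per_px ** 2
    return area_um2 / (median_axis_length_px * um_per_px)

# Toy example: a 3-px-thick horizontal band in a 10x100 image at 1 um/px
mask = np.zeros((10, 100), dtype=bool)
mask[4:7, :] = True
print(mean_tgo_thickness(mask, median_axis_length_px=100, um_per_px=1.0))
# -> 3.0 um, as expected for a uniform 3-px band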
The oxide thickness was evaluated on thirty contiguous micrographs representing an equivalent length of 4.8 mm [START_REF] Le Roux | Quantitative assessment of the interfacial roughness in multi-layered materials using image analysis: application to oxidation in ceramic-based materials[END_REF]. The mechanism of crack initiation and propagation is strongly related to the stress level and distribution within the multi-layer system. Apart from oxide growth stress (due to volume change during oxide growth), the main cause of mechanical strain and stress is the thermal expansion mismatch between layers of different thermal expansion coefficients during cooling and thermo-cycling. It is particularly critical upon cyclic oxidation, as cumulative heating plus cooling is prone to dramatically enhance degradation. Under isothermal exposure, oxide growth stresses may be considered the predominant contribution. According to Baleix et al. [START_REF] Baleix | Oxidation and oxide spallation of heat resistant cast steels for superplastic forming dies[END_REF], spallation at the metal/oxide interface in Cr2O3-forming heat-resistant cast steels occurs through so-called oxide buckling for scale thicknesses equal to or greater than 4 μm. In the frame of the strain-energy model for spallation, it is shown in this case that the interfacial fracture energy for buckling decreases from 5 J·m-2 to 2 J·m-2 as the oxide thickens from 7 μm to 9 μm. This suggests a progressive degradation of the interface as the oxidation mechanisms are enhanced through extended exposure time. Formally, the buckling of oxide prior to detachment at the onset of spallation is very comparable to the generation of cracks at the interface between the bond coat and the TGO promoted by indentation in TBC systems. Indeed, depending on the oxide thickness and the strength of the metal/oxide interface, two different routes for spallation are generally reported. In the case of a strong interface with thin oxides, i.e., high toughness or high fracture energy, compressive shear cracking develops in the oxide. Consequently, detachment of oxide particles occurs by wedge cracking. In the case of a weak interface with thicker oxides, i.e., low toughness or fracture energy, the oxide may detach from the metal in the form of buckles. Evans et al. [START_REF] Evans | The influence of oxides on the performance of advanced gas turbines[END_REF] showed elsewhere that the energy stored within the TGO increases as the TGO thickens, and contributes only to the delamination at the TGO/bond coat interface. As a consequence, the fracture energy or toughness of the TGO/bond coat interface may decrease as the oxidation proceeds and the oxide thickens. Beyond a given oxide thickness threshold, namely around 3 μm for 150 μm thick EB-PVD TBCs, the toughness of the TGO/bond coat interface becomes lower than that, essentially unchanged, of the TGO/topcoat interface, leading to a change in delamination location. In addition, as indicated in Fig. 2, the slopes of the various lines plotted for the various aging conditions in order to determine the critical force to initiate interfacial cracking are different. This clearly indicates that, whatever the critical force is, the possibility of extending a crack requires more or less mechanical energy depending on the configuration, i.e., the specific morphology of the interface.
To address this, a straightforward image analysis methodology detailed in [START_REF] Le Roux | Quantitative assessment of the interfacial roughness in multi-layered materials using image analysis: application to oxidation in ceramic-based materials[END_REF] is applied to SEM cross-sectional micrographs to determine the roughness of the internal interfaces, between the bond coat and the TGO, and between the TGO and the top coat (Fig. 7). According to this approach, a rumpling or folding index is defined as the ratio between the developed length of the interface and its horizontal projected length, measured on 30 contiguous micrographs corresponding to an equivalent length of 4.8 mm (a minimal computation of this index is sketched after this section of text). This index is a relevant indicator of the tortuosity of the interfaces. Fig. 7 gives values for both the TGO/top coat interface (upper profile) and the bond coat/TGO interface (lower profile), as well as normalised values (obtained by dividing the values by the reference one, that of the as-deposited TBC). Both isothermal aging (100 h at 1150 °C, 1100 °C and 1050 °C) and cyclic aging (100 cycles of 1 h at 1100 °C) are investigated. Cross-sectional SEM micrographs illustrating the evolution of the thickness and morphology of the TGO for the various isothermal oxidation temperatures are also shown in Fig. 7. Note that the interfacial corrugation of the as-deposited TBC is significant, both for the upper and lower profiles, which accounts directly for the roughness of the initial substrate. However, the upper profile is slightly smoother than the lower profile, suggesting a leveling effect of the initial oxidation intrinsic to the EB-PVD deposition process. In all cases, aging results in an enhancement of the upper profile tortuosity: the higher the oxidation temperature, the more pronounced the associated folding effect. The evolution of the lower profile is more complex to analyze. Indeed, aging at 1050 °C leads to a significant decrease (about 10%) with respect to the initial value of the as-deposited TBC. The interface between the oxide and the bond coat becomes smoother as oxidation progresses, indicating a total absence of interfacial folding. This observation is not consistent with the results presented in [START_REF] Tolpygo | On the rumpling mechanism in nickel-aluminide coatings part II: characterization of surface undulations and bond coat swelling[END_REF], which report the occurrence of rumpling even under isothermal oxidation. For aging at 1100 °C and 1150 °C, L_inf/L, though remaining lower than the reference value, is higher than at 1050 °C. Note that for cyclic oxidation at 1100 °C, the tortuosity of the bond coat/TGO interface is slightly greater than that of the reference as-deposited TBC and that of the TBC aged at the same temperature over the same hot time upon isothermal oxidation (Fig. 7). Globally speaking, the normalised folding index, indicating the propensity of the multi-material system to rumpling, remains lower than 1 for the interface between the bond coat and the TGO. This clearly indicates that oxidation over short-term exposure, either isothermal or cyclic, does not provoke any corrugation of this interface. In contrast, the interface between the TGO and the top coat tends to undulate as it is exposed to high temperature, under either isothermal or cyclic conditions. It is however unusual to evaluate rumpling considering the interface between the TGO and the top coat. It is much more common to monitor the evolution of the bond coat/TGO interface.
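As promised above, here is a minimal Python sketch of the folding index: the developed length of an interface profile divided by its projected length. The profile is synthetic; the paper's values come from SEM image analysis over 4.8 mm of interface.

# Toy computation of the folding (rumpling) index described above.
import numpy as np

def folding_index(x_um: np.ndarray, y_um: np.ndarray) -> float:
    """Developed length / projected length of a profile y(x); always >= 1."""
    developed = np.hypot(np.diff(x_um), np.diff(y_um)).sum()
    projected = x_um[-1] - x_um[0]
    return developed / projected

x = np.linspace(0.0, 160.0, 2000)            # 160 um of interface
flat = np.zeros_like(x)
wavy = 2.0 * np.sin(2 * np.pi * x / 20.0)    # 2 um amplitude, 20 um period
print(folding_index(x, flat))   # 1.0 for a perfectly flat interface
print(folding_index(x, wavy))   # > 1, grows with interface tortuosity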
It can be assumed that (i) rumpling is negligible, or at least little pronounced, under isothermal oxidation, and (ii) thermal aging under 100 cycles, though prone to generate more degradation than isothermal exposure, is not sufficient to provoke significant rumpling. Though equivalent in terms of hot time, cyclic oxidation (1 h cycles) does not further degrade the interfacial toughness or the tortuosity of the interface, which, given the assumed severity and highly constraining effect of the cooling phases of the cycles, is probably due to the low number of cycles imposed on the TBC. Folding effects and the subsequent crack propagation routes can be related to the evolution of the oxide thickness. Indeed, for thin oxides, typically with thicknesses lower than 2.9 μm, i.e. for the as-deposited and 1050 °C oxidised TBCs, indentation-induced cracks initiate and further propagate at the TGO/top coat (outer) interface, which exhibits in both cases an apparent toughness higher than 3 MPa·m^0.5. This suggests a high adherence of the TGO, in good agreement with previous results commonly reported in the literature. Between the reference as-deposited TBC and the TBC oxidised at 1050 °C, a large difference in the folding index is however noted. It is almost 20% higher in the second case, as a consequence of a significant roughening of the interface upon oxidation. Though the location of crack initiation and propagation is the same for the two conditions, the evolution of the indentation force versus the size of the crack produced is highly specific to each case, as it is directly related to the tortuosity of the interface. Indeed, the increase in force is much less pronounced for the smoother interface, corresponding to the as-deposited, non-aged TBC, and conversely. For oxides thicker than 2.9 μm, in the case of the TBCs oxidised at 1100 °C and 1150 °C, indentation-induced cracks propagate at the bond coat/TGO (inner) interface. For the two cases, the apparent toughness of the involved interface is lower than 2.6 MPa·m^0.5. While thickening, the alumina scale progressively loses adhesion to the bond coat, which transfers the location of crack initiation and propagation in accordance with commonly accepted models for spallation. The folding index of this inner interface is similar in both cases and very close to that of the outer interface of the as-deposited TBC. This results in a similar tortuosity of the interfaces (either inner or outer) where cracks form and extend, and accounts for the similar evolution of the experimentally established "required force" versus "size of crack produced" plots. While thickening, the thermally grown oxide develops non-uniformly, as clearly shown by the micrographs in Fig. 7. Preferential growth of oxide can occur in zones, typically within the intercolumnar spaces of the EB-PVD TBC, where the oxidation kinetics is faster as more room is available for the oxide to develop. The interface profile generated by this inhomogeneous growth shows local excrescences, clearly visible in Fig. 7c. The occurrence of such protrusions, whose formation is thermally activated, can have various consequences, with opposite effects, on the mechanical strength of the interface. Indeed, an increase in interface tortuosity may contribute to a loss of adhesion as the result of local mechanical stresses and stress concentrations responsible for enhanced crack initiation. Once initiated, and in order to further degrade the system, cracks have to propagate.
However, it is assumed that the presence of local excrescences acting as mechanical pegs can limit the propagation, thus preventing early spallation. Conclusion The adhesion and, conversely, spallation of EB-PVD TBC systems is investigated using various approaches, including isothermal and cyclic oxidation at various temperatures and interfacial indentation of both as-deposited and oxidized systems. The latter characterization is dedicated to evaluating the apparent toughness shown by the interface between the inner bond coat (β-NiPtAl) and the outer top coat (Yttria-Stabilised Zirconia) before and after thermal aging or cycling. For the as-deposited TBC, which is only pre-oxidized for a short term to promote the formation of a dense, slowly growing alumina scale acting as a diffusion barrier, the interfacial TGO is rather thin (less than 0.5 μm). Upon aging, the TGO layer grows according to roughly parabolic kinetics. In all cases, two distinct interfaces, formed between the bond coat and the TGO (inner interface) and between the TGO and the top coat (outer interface) respectively, must be considered. Driven by the growth of the TGO layer, both interfaces undergo morphological and roughness changes as the TGO thickens. The tortuosity of the interfaces, observed by SEM in cross-sections, is quantified by a folding or rumpling index estimated using image analysis. It is shown that both the oxide thickness and the folding index of the inner and outer interfaces have a strong impact on the localization of indentation-induced crack initiation, the propagation path of a crack once initiated, and the ease or difficulty with which a crack propagates. The apparent toughness deduced from interfacial indentation decreases as the 100-h isothermal-oxidation temperature increases from 1050 °C to 1150 °C, indicating a progressive, thermally activated propensity for the degradation of TBC systems, as obviously expected. The apparent interfacial toughness, controlled by the interfacial roughness, the TGO thickness and mostly by the temperature and time of isothermal or cyclic oxidation, is a key parameter to address the mechanics and mechanisms of crack initiation and propagation prior to detrimental spallation of TBC systems. It is of course not the sole parameter entering the implementation of possible models to predict TBC lifetime. Further improving the understanding of TBC behavior under severe oxidation exposure would require considering the fine variations of microstructural details and the evolution of the stored elastic strain energy, within each individual layer (Ni-base single crystal, β-NiPtAl bond coat, Al2O3 TGO and Yttria-Stabilised Zirconia) and from one constitutive layer to another, as well as the substrate geometry, to get closer to real in-service conditions. Fig. 1. SEM micrographs in cross-section of an EB-PVD TBC specimen. Fig. 2. Schematic representation of the intercept graphical method used to determine the critical load required to generate an interfacial crack [Ln(a) and Ln(b) are, respectively, plotted versus Ln(P)]. Fig. 3. SEM micrographs of indented samples oxidised 100 h at 1150 °C: (a) indentation load 0.981 N (corresponding to 100 g); (b) indentation load 2.943 N (corresponding to 300 g). Fig. 4. Determination of the critical loads to initiate interface cracking by Vickers indentation, where a and b are plotted versus P using logarithmic scales. Fig. 5. Variation of interfacial toughness as a function of aging temperature.
For as-deposited TBCs and TBCs aged at 1050 °C (i.e., for thin Al2O3 oxide scales, respectively 0.5 μm and 1.8 μm), cracks propagate preferentially along the interface between the TGO and the top coat. Conversely, for TBCs aged at 1100 °C and 1150 °C, with thick Al2O3 oxide scales (3.4 μm and 5.3 μm, respectively), cracks propagate predominantly along the TGO/bond coat interface. Fig. 6. Map of cracking as a function of oxidation conditions, based on an oxide thickness criterion. Dot and × symbols correspond to the TGO thickness for isothermal (100 h) and cyclic (1100 °C, 100 cycles of 1 h) oxidation, respectively. Fig. 7. Cross-sectional SEM micrographs of specimens (a) as-deposited, (b) aged 100 h at 1050 °C, (c) aged 100 h at 1100 °C, (d) aged 100 h at 1150 °C; (e) variation of the interfacial folding index as a function of aging conditions. Note that cracks locate preferentially at the TGO/topcoat interface for the as-deposited specimen and the specimens aged at 1050 °C (relevant parameter L_sup/L), and at the bond coat/TGO interface for the specimens aged at 1100 °C and 1150 °C (relevant parameter L_inf/L).
32,443
[ "778280", "19543", "19631", "19595" ]
[ "110103", "580", "110103", "110103", "110103", "580", "580", "110103" ]
01753323
en
[ "shs" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01753323/file/SIND.pdf
Leonardo Pejsachowicz email: leonardo.pejsachowicz@polytechnique.edu. Stochastic Independence under Knightian Uncertainty Keywords: Bewley preferences, stochastic independence, product equivalents. JEL classification: D81. Titre en français : Indépendance stochastique et incertitude à la Knight. Mots-clés : préférences à la Bewley, indépendance stochastique, équivalent produit. Classification JEL : D81. Introduction The distinction introduced by Knight (1921) between risky events, to which a probability can be assigned, and uncertain ones, whose likelihoods are not precisely determined, cannot be captured by the standard subjective expected utility (SEU) model. This paradigm in fact posits a unique probability distribution over the states of the world, the agent's prior, that is used to assign weights to each contingency when evaluating a given course of action. Of the many models that have been proposed to accommodate Knightian uncertainty, the one pioneered in [START_REF] Bewley | Knightian Uncertainty Theory: Part I[END_REF] has recently proved very useful, both in economic applications of uncertainty [1] and as a foundational tool for the systematic analysis of non-SEU preferences. [2] [START_REF] Bewley | Knightian Uncertainty Theory: Part I[END_REF] allows the agent to hold a set of priors. Choices are then performed using the unanimity rule: an action will be preferred to an alternative one only if its expected utility under each prior of the agent is higher. Since the rank of two options might be reversed when considering different priors, under this paradigm the agent's preferences will typically be incomplete. The introduction of this type of incompleteness raises a slew of interesting modeling questions, but the one we will be concentrating on in this paper regards the characterization of stochastic independence (from now on, s-independence). Specifically, suppose a Bewley decision maker must choose between bets that depend on two different experiments, for example the tosses of two coins. When can we say, based on the observation of his choices, that he considers the two tosses independent? An answer to this question is clearly of great interest, both for applications of the Bewley model to game theory, since independence of beliefs is a central tenet of Nash equilibrium, and as a benchmark for the development of a theory of updating, which is essential in applications to dynamic environments. In the SEU model, s-independence is captured by the intuitive idea that the preferences of an agent over bets that depend only on one of the tosses should not change when he receives information about the other (see [START_REF] Blume | Lexicographic Probabilities and Choice under Uncertainty[END_REF]), a property we dub conditional invariance. As we show in an example in Section 2.2, such a requirement is unable to eliminate all of the forms of correlation between experiments that the multiplicity of priors in the Bewley model introduces. As a consequence, under conditional invariance an agent's preferences are no longer uniquely determined by his marginal beliefs, which we argue is essential for a useful definition of s-independence. To overcome this problem we introduce the idea of the product equivalent of an act, which is close in spirit to that of the certainty equivalent of a risky lottery, although adapted to the product structure of the state space.
We show that if a Bewley decision maker treats product equivalents as if they were certainty equivalents, his set of beliefs must coincide with the closed convex hull of the pairwise products of its marginals over each experiment. An important consequence of our result is that it provides a characterization of s-independence for MaxMin preferences that coincides with the definition proposed in [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF]. Such a definition has been applied in the characterization of independent beliefs for Nash equilibria under ambiguity, for example in [START_REF] Lo | Equilibrium in Beliefs under Uncertainty[END_REF] and more recently in [START_REF] Riedel | Ellsberg Games[END_REF]. Nevertheless, as [START_REF] Lo | Correlated Nash equilibrium[END_REF] complains, the behavioral implications of Gilboa and Schmeidler's definition are still poorly understood. It is our hope that the present work will provide a first step towards a clarification of these implications. [START_REF] Blume | Lexicographic Probabilities and Choice under Uncertainty[END_REF] are the first to provide a decision-theoretic axiom for s-independence in the SEU model, based on the insight of conditional invariance, of which we use a stronger version in Section 2.3. Bewley provides an early definition of s-independence for his model in the original 1986 paper, though he gives no behavioral characterization. His definition is weaker than the model we obtain. The remaining related literature is mostly concerned with other non-SEU models. [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF], as we already said, define a concept of independent product of relations for MaxMin preferences and characterize it, though their characterization uses the representation directly instead of the primitive preference. [START_REF] Klibanoff | Stochastically Independent Randomization and Uncertainty Aversion[END_REF] gives a definition of an independent randomization device, which he uses to evaluate different types of uncertainty averse preferences in the Savage setting. [START_REF] Bade | Stochastic Independence with Maxmin Expected Utilities[END_REF] explores various possible forms of s-independence for events under the MaxMin model of Gilboa and Schmeidler, providing successively stronger definitions. Bade (2011) contains a characterization of s-independence for general uncertainty averse preferences that is particularly useful for the way in which the paper introduces uncertainty in games. [START_REF] Ghirardato | On Independence for Non-Additive Measures with a Fubini Theorem[END_REF] studies products of capacities, and proposes a restriction on admissible products based on the Fubini theorem. We share with this paper the intuition of using the iterated integral property to characterize product structures outside of the standard model. Finally, [START_REF] Epstein | Symmetry of Evidence without Evidence of Symmetry[END_REF], who study alternative versions of the De Finetti theorem for MaxMin preferences, provide an axiom, dubbed orthogonal independence, which achieves a weaker form of separation in beliefs. The rest of the paper is organized as follows: Section 2 introduces the model, provides a motivating example and discusses the limits of conditional invariance as a characterization of s-independence.
In Section 3 we define product equivalents and give our main characterization result, together with some further corollaries, one of which is the aforementioned characterization of MaxMin s-independence. Section 4 concludes with a brief discussion of the quality of our main assumption. Preliminaries We consider a finite state space Ω with a product structure Ω = X × Y, endowed with an algebra of events Σ which, for simplicity of exposition, we assume throughout the paper to coincide with 2^Ω. Notice that the collections Σ_X = {A × Y | A ⊆ X} and Σ_Y = {X × B | B ⊆ Y} are proper sub-algebras of Σ under the convention ∅ × Y = X × ∅ = ∅. States, elements of Ω, are denoted ω or alternatively through their components (x, y). Elements of X and Y represent the outcomes of two separate experiments. Sets of the form {x} × Y, which we call X-states, will be indicated, with abuse of notation, by x, and a similar convention applies to Y-states. A prior p is an element of Δ(Ω), the unit simplex in R^Ω. For a subset S of Ω we let p(S) = Σ_{ω∈S} p(ω). Let p_X ∈ Δ(X) and p_Y ∈ Δ(Y) be the marginals of p over X and Y respectively. The notation p_X × p_Y indicates the product prior in Δ(Ω) uniquely identified by (p_X × p_Y)(x, y) = p_X(x)p_Y(y). Prior p has a product structure if p = p_X × p_Y. For a set P ⊆ Δ(Ω), we let P_X and P_Y stand for the sets of marginals of elements of P over each experiment. The set of pairwise products of P_X and P_Y, namely {p_X × p_Y | (p_X, p_Y) ∈ P_X × P_Y}, is denoted P_X ⊗ P_Y. We work with a streamlined version of the Anscombe-Aumann model. An act is a map from Ω to K ⊆ R, a non-trivial interval of the real line. The set of all acts is F = K^Ω, with generic elements f and g. The interpretation of f is that it represents an action that delivers (utility) value f(ω) if state ω is realized. For every λ ∈ (0, 1) and f, g ∈ F, the mixture λf + (1 - λ)g is the state-wise convex combination of the acts. [Footnote 3: The classical Anscombe-Aumann environment posits an abstract consequence space C and defines acts as maps from Ω to the set Δ_s(C) of simple lotteries over C. One then goes on to show that, under standard assumptions (in particular Risk Independence, Monotonicity and Archimedean Continuity), there exists a von Neumann-Morgenstern utility U : Δ_s(C) → R such that two acts f and g are indifferent whenever U(f(ω)) = U(g(ω)) for all ω. Thus one can think of our approach as one that considers acts already in their "utility space" representation.] For any finite set S, let 1_S be the indicator function of S. The constant act k1_Ω that delivers k ∈ K in every state of the world is denoted, with abuse of notation, k. We say that f is an X-act if f(x, y) = f(x, y') for all y, y' ∈ Y, namely if f is constant across the realizations of Y-states. The set of X-acts is F_X, with generic elements f_X, g_X. The unique value that act f_X ∈ F_X assumes over {x} × Y is indicated f_X(x). We can then see f_X as the sum Σ_{x∈X} f_X(x)1_x. Similar considerations and notation apply to Y-acts. Bewley preferences Our primitive is a reflexive and transitive binary relation ≿, a preference, on F. Through most of the paper we assume that there exists a non-empty convex and closed set P ⊆ Δ(Ω) such that f ≿ g if and only if Σ_{ω∈Ω} f(ω)p(ω) ≥ Σ_{ω∈Ω} g(ω)p(ω) for all p ∈ P. (2.1) We say then that ≿ has a (non-trivial) Bewley representation or that it is a Bewley preference. [Footnote 4: In the model of [START_REF] Bewley | Knightian Uncertainty Theory: Part I[END_REF], preferences satisfy a version of equation (2.1) in which ≿ and ≥ are replaced by their strict counterparts ≻ and >. The two models are close but distinct, and correspond to the weak and strong versions of the Pareto ranking in which different priors take the role of different agents.] An axiomatic characterization of (2.1) can be found in [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF]. [5] The set P above is uniquely determined by ≿. Because P is the only free parameter in the model, we say that P represents ≿ when it satisfies (2.1). We note that any two subsets of Δ(Ω) induce the same Bewley preference via (2.1) as long as their closed convex hulls, which we denote co(P) for a generic P, coincide. The SEU model corresponds to P = {p}, in which case ≿ is complete. Hence Bewley preferences are a generalization of SEU in which completeness is relaxed.
At the same time, any extension of a Bewley preference to a complete relation over acts that is SEU corresponds to some prior p in its representing set of priors P. We will say that ≿ is a SEU preference whenever it has a Bewley representation with a singleton set {p} of priors. A motivating example In this section we illustrate the need for a novel characterization of s-independence with a simple example. Consider an agent who is betting on the results of the tosses of two different coins. All he knows about these is that they have been produced by two separate machines, each of which turns out either a coin that comes up heads with probability α or one that comes up heads with probability β. The two machines have no connection to each other, and no information on the mechanism that sets the probability of heads in either machine is given, so that no unique probabilistic prior can be formed. Given this description, it seems agreeable that a Bewley decision maker, facing acts on the state space {H_1, T_1} × {H_2, T_2} (where H_i corresponds to the i-th coin coming up heads), would consider the set of priors P_1 = {p_α × p_α, p_β × p_α, p_α × p_β, p_β × p_β} where p_α = (α, 1-α) ∈ Δ({H_1, T_1}) and p_β = (β, 1-β) ∈ Δ({H_2, T_2}). Does P_1 reflect an intuitive notion of independence between the tosses? One way to answer this question is to ask our agent to compare a particular type of acts which we will call, for lack of a better term, conditional bets. Namely, assume we ask the agent to decide between bets f_1 and g_1, where

f_1:        H_2   T_2          g_1:        H_2   T_2
     H_1     1     0                H_1     0     1
     T_1     k     k                T_1     k     k

Both f_1 and g_1 pay the same amount k if the first coin turns up tails, and provide opposing bets on the second toss if the first turns up heads. Whichever ranking the agent provides, it stands to reason that, if he treats the tosses as independent, he should rank in the same way the acts f_2 and g_2 in which the opposing bets on the second coin are provided conditional on the first coming up tails, namely:

f_2:        H_2   T_2          g_2:        H_2   T_2
     H_1     k     k                H_1     k     k
     T_1     1     0                T_1     0     1

This form of invariance of the preferences over one experiment to information on the result of the other can be shown to characterize, for the SEU model, an agent whose unique prior p has a product structure. We notice that P_1 satisfies this invariance with regard to the pairs f_1, g_1 and f_2, g_2. In fact, if f_1 ≿ g_1 then we must have α^2 ≥ α(1-α) and βα ≥ β(1-α) ⇔ α ≥ 1-α for p_α × p_α and p_β × p_α, and also αβ ≥ α(1-β) and β^2 ≥ β(1-β) ⇔ β ≥ 1-β for p_α × p_β and p_β × p_β.
It is immediate to check that the same two conditions ensure f_2 ≿ g_2. This is in line with our intuition that the situation described above, and its related set of priors P_1, reflect a natural notion of independence between tosses. But now imagine the agent comes to learn that the two machines have a common switch. This switch is the one that decides whether the coins produced will be of the α or β variety, thus whenever the first machine produces a coin of a certain kind, so does the other. Here the natural set of priors is P_2 = {p_α × p_α, p_β × p_β}. The preferences induced by this set also satisfy the invariance we discussed between the pairs f_1, g_1 and f_2, g_2. In fact f_1 ≿ g_1 if and only if α ≥ 1-α and β ≥ 1-β, which also implies f_2 ≿ g_2. Nevertheless, we would be hard pressed to argue that this situation reflects the same degree of independence as the first. The priors in P_2 in fact contain information about a certain kind of correlation between the tosses. This correlation, which is novel, regards the mechanism that determines the probabilistic model assigned to each coin. As we will see in the next section, the conditional invariance requirement we loosely described is unable, even in its strongest form, to eliminate this sort of correlation. One can understand this failure as stemming from the lack of aggregation that is characteristic of the Bewley model. Correlations in the mechanism that selects models for each toss are reflected in the shape of the whole set of priors. On the other side, when a Bewley decision maker compares two acts, he uses priors one by one, hence only the structure of each single distribution comes into play. Conditional Invariance Here we formalize and extend the discussion of the previous section. Before we do so, we will need an additional definition: Definition 1. An event S ⊆ Ω is ≿-non-null if k > l implies k1_S + l1_{Ω\S} ≻ l for any k, l ∈ K. The definition corresponds to that of a Savage non-null set. For a Bewley preference represented by P, a set S is ≿-non-null if and only if p(S) > 0 for some p ∈ P. Now consider the following axiom: Conditional Invariance. For all acts h ∈ F, f_X, g_X ∈ F_X and any pair of ≿-non-null events R, S ∈ Σ_Y, f_X 1_R + h1_{Ω\R} ≿ g_X 1_R + h1_{Ω\R} ⟹ f_X 1_S + h1_{Ω\S} ≿ g_X 1_S + h1_{Ω\S} (2.2) and the same holds when we switch the roles of X and Y in the above statement. This is a stronger version of the Stochastic Independence Axiom (Axiom 6) of [START_REF] Blume | Lexicographic Probabilities and Choice under Uncertainty[END_REF]. [6] If we let X = Y = {H, T}, with Y representing the first coin toss and X the second, setting R = {H, T} × {H}, S = {H, T} × {T} and choosing f_X, g_X and h equal to

f_X:       H    T          g_X:       H    T          h:        H    T
     H     1    0               H     0    1               H     k    k
     T     1    0               T     0    1               T     k    k

we obtain from (2.2) the implication f_1 ≿ g_1 ⟹ f_2 ≿ g_2. Hence acts of the form f_X 1_R + h1_{Ω\R} are the general version of the conditional bets we discussed in Section 2.2 and carry the same interpretation. The next proposition highlights the limits of Conditional Invariance as a behavioral characterization of s-independence: Proposition 1. For any P ⊆ Δ(X) ⊗ Δ(Y), the Bewley preference induced by co(P) satisfies Conditional Invariance. Proof: See the Appendix. Thus the type of correlations embodied by a set such as P_2 from the previous section is in general not excluded by Conditional Invariance. More than that, the degree of non-uniqueness that the assumption allows for in the formation of priors is problematic for any definition of s-independence.
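As a quick consistency check, the following minimal Python sketch (not part of the original paper) implements the unanimity rule (2.1) for the coin example of Section 2.2 and verifies that both P_1 and P_2 rank the conditional bets invariantly; the numerical values of alpha, beta and k are arbitrary illustrative choices.

# Unanimity-rule comparison of conditional bets under P1 and P2.
from itertools import product

STATES = list(product(["H1", "T1"], ["H2", "T2"]))

def marginal_pair(pa, pb):
    """Product prior over STATES from marginals over each coin."""
    return {(s1, s2): pa[s1] * pb[s2] for s1, s2 in STATES}

def weakly_preferred(f, g, priors):
    """f is (Bewley-)preferred to g iff E_p[f] >= E_p[g] for every prior p."""
    return all(sum(p[s] * (f[s] - g[s]) for s in STATES) >= 0 for p in priors)

alpha, beta, k = 0.7, 0.6, 0.5
p_a1, p_b1 = {"H1": alpha, "T1": 1 - alpha}, {"H1": beta, "T1": 1 - beta}
p_a2, p_b2 = {"H2": alpha, "T2": 1 - alpha}, {"H2": beta, "T2": 1 - beta}

P1 = [marginal_pair(x, y) for x in (p_a1, p_b1) for y in (p_a2, p_b2)]
P2 = [marginal_pair(p_a1, p_a2), marginal_pair(p_b1, p_b2)]  # common switch

f1 = {("H1", "H2"): 1, ("H1", "T2"): 0, ("T1", "H2"): k, ("T1", "T2"): k}
g1 = {("H1", "H2"): 0, ("H1", "T2"): 1, ("T1", "H2"): k, ("T1", "T2"): k}
f2 = {("H1", "H2"): k, ("H1", "T2"): k, ("T1", "H2"): 1, ("T1", "T2"): 0}
g2 = {("H1", "H2"): k, ("H1", "T2"): k, ("T1", "H2"): 0, ("T1", "T2"): 1}

for name, P in [("P1", P1), ("P2", P2)]:
    print(name,
          weakly_preferred(f1, g1, P),   # True here since alpha, beta > 1/2
          weakly_preferred(f2, g2, P))   # invariance: same answer both times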
To see why this non-uniqueness is problematic, consider for a moment an agent with SEU preferences ≿. Suppose we elicit his information on each separate experiment by "asking him questions", i.e. proposing him comparisons of acts that depend either only on X or only on Y. His answers correspond to the restrictions ≿_X and ≿_Y of ≿ to F_X and F_Y respectively. By the SEU representation theorem, we know that these are uniquely determined by the marginals p_X and p_Y of his prior. Now if his prior p has a product structure (which in this case, as we show below, is true if and only if ≿ satisfies Conditional Invariance), the inverse is also true. Namely, because in this case p = p_X × p_Y, we can uniquely determine his preferences over F, and hence his information about the whole set of possible results in X × Y, using ≿_X and ≿_Y. Thus once we have learned about each experiment in isolation, we know all that can be known about the pair, a key aspect of the description of s-independence in a single-prior environment. Turning our attention to the Bewley case, we find that the information on each experiment is now subsumed into the sets P_X and P_Y of marginals, which uniquely determine ≿_X and ≿_Y. But now suppose we take two sets of priors P and Q inside Δ(X) ⊗ Δ(Y) whose marginals coincide with P_X and P_Y and such that co(P) ≠ co(Q) (P_1 and P_2 from Section 2.2 are one such pair, for P_X = P_Y = {p_α, p_β}). Proposition 1 ensures that the preferences induced by P and Q satisfy Conditional Invariance, and by our assumption their restrictions to F_X and F_Y coincide. But the two preferences will differ, by the uniqueness part of the Bewley representation theorem, hence the condition that characterizes s-independence under SEU does not allow us to uniquely determine the agent's global preferences from ≿_X and ≿_Y. The information we are lacking is precisely the information on correlations in the mechanism that matches priors on one experiment to priors on the other. To recover the desired degree of uniqueness, we propose in the next section a stronger requirement on preferences. Stochastic Independence via product equivalents In this section we propose a new concept, that of the product equivalent of an act, and use it to give a characterization of s-independence for Bewley preferences. In order to illustrate the logic behind product equivalents we first introduce them in the simpler SEU environment. Product equivalents under SEU Throughout this section we will consider a Bewley decision maker whose representing set of priors P is a singleton. Thus his preferences are complete and he is a subjective expected utility maximizer. In this case it can be shown [7] that every act f ∈ F has a certainty equivalent, namely that there exists some constant act k such that f ∼ k. We denote such an act ce(f). Now notice that any act f ∈ F can be seen as a collection of bets on Y delivered conditional on the outcomes in X. Namely, we can find a collection {f^x_Y}_{x∈X} of Y-acts, uniquely identified by f^x_Y(y) = f(x, y), such that f = Σ_{x∈X} f^x_Y 1_x. This particular way of seeing an act suggests the following definition, which is partly inspired by that of the certainty equivalent: Definition 2. An act f_X ∈ F_X is the X-product equivalent of f ∈ F, denoted pe_X(f), if for all x ∈ X, f_X(x) = ce(f^x_Y). Notice that pe_X(f) need not be indifferent to f. [8]
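As a toy numerical illustration of Definition 2 (not from the paper): under SEU the certainty equivalent of an act is its expected (utility) value, so pe_X(f) reduces to the row-wise expectation of f under p_Y, as made explicit in equation (3.1) below. The Python sketch checks that a product prior evaluates f and pe_X(f) identically, whereas a generic correlated prior does not; all dimensions and draws are ad hoc.

# Fubini-style check: f ~ peX(f) under a product prior, but not in general.
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 3, 4
pX = rng.dirichlet(np.ones(nx))       # marginal over X
pY = rng.dirichlet(np.ones(ny))       # marginal over Y
f = rng.uniform(0.0, 1.0, (nx, ny))   # an act on X x Y, in utility units

# Under SEU, ce(f_Y^x) is the expectation of row x under pY:
peX = f @ pY                          # peX(f)(x) = sum_y f(x,y) pY(y)

Ef_product = pX @ f @ pY              # E_{pX x pY}[f]
E_peX = pX @ peX                      # E_{pX}[peX(f)]
print(np.isclose(Ef_product, E_peX))  # True: the iterated sum factorizes

# With a correlated prior, evaluating f directly and via its marginals
# generally gives different answers:
q = rng.dirichlet(np.ones(nx * ny)).reshape(nx, ny)   # correlated prior
qX, qY = q.sum(axis=1), q.sum(axis=0)                 # its marginals
print(np.isclose((q * f).sum(), qX @ (f @ qY)))       # generally False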
This is intuitive, since evaluating pe_X(f) requires a different thought process than the one used for f: first the value of each conditional bet the act induces on Y is determined in isolation, and then these values are aggregated using the information the agent has over X. Nevertheless, it is precisely when X and Y are independent, and hence information about the aggregate value of f is completely embedded in the agent's preferences over each individual experiment, that we would expect the two approaches to lead to the same result, and consequently f ∼ pe_X(f). The next theorem vindicates such a view:

Theorem 1. Let ≽ be a SEU preference over F represented by {p}. Then the following are equivalent:
1) There are distributions p_X ∈ Δ(X) and p_Y ∈ Δ(Y) such that p = p_X × p_Y.
2) ≽ satisfies Conditional Invariance.
3) f ∼ pe_X(f) for all f ∈ F.

Proof: See the Appendix.

1) ⇔ 2) is well known and easily proved using the uniqueness properties of the SEU representation. Since the definition of product equivalent is novel, the equivalence of 1) and 3) is a new result, although it is an elementary application of separation arguments. We can better understand this part of the result in light of Fubini's celebrated theorem. The latter gives conditions under which the integral of a function through a product measure can be obtained as an iterated integral. Now notice that when ≽ is SEU it must be that

pe_X(f)(x) = ∑_{y∈Y} f(x, y) p_Y(y)   (3.1)

and hence the value of pe_X(f) is nothing but ∑_{x∈X} (∑_{y∈Y} f(x, y) p_Y(y)) p_X(x). Thus 1) ⇒ 3) is equivalent to (a very simple version of) the Fubini theorem, while 3) ⇒ 1) provides an inverse of that result.

Uniform product equivalents

Theorem 1 suggests an alternative route for the characterization of s-independence under Bewley preferences, one that goes through the extension of the definition of a product equivalent to the multiple-priors case. The first obstacle we find along this way is that in general even certainty equivalents of acts need not exist for a Bewley decision maker.⁹ In order to sidestep this issue, Ghirardato et al. (2004) consider a set of constant acts, which for each f ∈ F we will denote Ce(f), that behave as certainty equivalents do for complete preferences, namely:

Ce(f) = {k ∈ K | c ≽ f implies c ≽ k, and f ≽ d implies k ≽ d, for all c, d ∈ K}.   (3.2)

⁹ An easy geometric intuition of this fact is the following. If we look at acts in F as elements of R^Ω, we can see that the indifference curves induced by an SEU preference with prior p correspond to the restriction to K^Ω of the hyperplanes that are perpendicular to p. On the other side, the indifference curve through f of a Bewley decision maker with a set of priors P is given by the intersection of a set of hyperplanes, one for each p ∈ P. Obvious dimensionality considerations suggest then that in general the only act indifferent to f is f itself. For example, if |Ω| = 2 and ≽ is incomplete, there are at least two non-collinear priors in P, hence indifference curves are points and no two acts f ≠ g are indifferent.

They also provide the following characterization result, which illustrates the parallel between Ce(f) and ce(f) under Bewley and SEU preferences:

Proposition 2 (from Proposition 18 in Ghirardato et al. (2004)).¹⁰ For every f ∈ F,
k ∈ Ce(f) ⇔ min_{p∈P} ∑_{ω∈Ω} f(ω)p(ω) ≤ k ≤ max_{p∈P} ∑_{ω∈Ω} f(ω)p(ω).

We would then hope that substituting Ce(f^x_Y) for ce(f^x_Y) in the definition of product equivalent would lead to a generalization that retains the intuition behind pe_X(f). Nevertheless here we stumble on a second issue.
Both the parallel with Fubini's theorem and equation (3.1) suggest that for a given f, the relevant collection of X-acts in this case should be of the form

{f_X ∈ F_X | ∃ p_Y ∈ P_Y such that f_X(x) = ∑_{y∈Y} f(x, y) p_Y(y) for all x ∈ X},   (3.3)

which is the set of X-acts obtained by evaluating, for each prior model p ∈ P, the conditional bets on Y induced by f via its marginal p_Y, reflecting the information in prior p about the outcomes of Y in isolation. But since Ce(f^x_Y) is in general an interval, an X-act f_X such that f_X(x) ∈ Ce(f^x_Y) for all x ∈ X need not be of the form (3.3). In fact, for each x ∈ X, f_X(x) might correspond to an evaluation of f^x_Y performed using a different p_Y ∈ P_Y.

Thus we turn to a more indirect approach. First, notice that from any act f_X and any α ∈ Δ(X), we can obtain the "reduction"¹¹ of f_X via α by taking the mixture ∑_{x∈X} α_x f_X(x). Looking back, once again, at the SEU setting, we can see that pe_X(f) can be alternatively identified as the unique X-act f_X such that, for all elements α of Δ(X), we have ∑_{x∈X} α_x f_X(x) = ce(∑_{x∈X} α_x f^x_Y). In fact, if f_X = pe_X(f) the equality is always true, since

∑_{x∈X} α_x pe_X(f)(x) = ∑_{x∈X} α_x ∑_{y∈Y} f(x, y) p_Y(y) = ∑_{x∈X} ∑_{y∈Y} f(x, y) α_x p_Y(y) = ∑_{y∈Y} p_Y(y) ∑_{x∈X} α_x f(x, y) = ce(∑_{x∈X} α_x f^x_Y).

The converse is immediately obtained by taking the α's that correspond to degenerate distributions over X. Notice that the equalities above hold exactly because each pe_X(f)(x) is found using the same marginal p_Y, which also coincides with the marginal used to evaluate every ce(f^x_Y). This motivates the following definition:

Definition 3. An act f_X ∈ F_X is an X-uniform product equivalent of f ∈ F if

∑_{x∈X} α_x f_X(x) ∈ Ce(∑_{x∈X} α_x f^x_Y)   (3.4)

for all α ∈ Δ(X). The set of all such acts for a given f is denoted Upe_X(f).

Armed with this, we are ready to give the main result of the paper:

Theorem 2. Let ≽ be a Bewley preference over F represented by P. Then the following are equivalent:
1) P = co(P_X ⊗ P_Y).
2) For all f ∈ F and f_X ∈ Upe_X(f),
c ≽ f ⇒ c ≽ f_X and f ≽ d ⇒ f_X ≽ d for all c, d ∈ K,
while at the same time, for all c, d ∈ K,
c ≽ f_X for all f_X ∈ Upe_X(f) ⇒ c ≽ f, and f_X ≽ d for all f_X ∈ Upe_X(f) ⇒ f ≽ d.

Proof: See the Appendix.

The property in item 2) is the requirement that X-uniform product equivalents behave as elements of Ce(f), the generalized version of certainty equivalents of Ghirardato et al. (2004). As can be seen in item 1), it provides a characterization of s-independence that recovers the desired uniqueness, in the sense that ≽ is uniquely determined by ≽_X and ≽_Y. This is done by building the largest set of product priors consistent with P_X and P_Y, the set P_X ⊗ P_Y in which all possible matches of models consistent with ≽_X and ≽_Y are considered. Obviously, by Proposition 1, when either of the conditions holds, ≽ satisfies Conditional Invariance.

Remark: Given the theorem, we would expect that the set Upe_X(f) could be shown to coincide with (3.3). In fact this is not true, and Upe_X(f) is in general larger. The intuition is the following. The sets {f_X ∈ F_X | ∃ p_Y ∈ P_Y such that f_X(x) = ∑_{y∈Y} f(x, y) p_Y(y) for all x ∈ X} are clearly convex, and hence, as is well known, they can be identified with the intersection of the half-spaces that contain them. This is what we are de facto doing when we define uniform product equivalents using condition (3.4), with the α's playing the role of normals to the hyperplanes defining such half-spaces.
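The membership condition (3.4) can be tested numerically on a grid of mixtures α. The sketch below uses assumed two-element state spaces, an arbitrary act and two marginal priors on Y; it builds a candidate f_X from a single marginal p_Y ∈ P_Y, as in (3.3), and verifies that each α-reduction falls in the corresponding Ce interval computed via the min/max characterization of Proposition 2.

```python
# Checking the uniform-product-equivalent condition (3.4) on a grid of alphas.
import numpy as np

f = np.array([[1.0, 0.0], [0.25, 0.75]])            # act f(x, y), toy payoffs
PY = [np.array([0.7, 0.3]), np.array([0.4, 0.6])]   # marginal priors on Y

def Ce_bounds(act_Y):
    """Ce interval of a Y-act: [min, max] expectation over PY (Proposition 2)."""
    vals = [float(act_Y @ p) for p in PY]
    return min(vals), max(vals)

# Candidate fX built from a single pY: the 'uniform' evaluation of each f^x_Y.
pY = PY[0]
fX = f @ pY                                         # fX(x) = sum_y f(x,y) pY(y)

ok = True
for a in np.linspace(0.0, 1.0, 101):                # alpha = (a, 1-a) in Delta(X)
    alpha = np.array([a, 1.0 - a])
    red = float(alpha @ fX)                         # reduction of fX via alpha
    lo, hi = Ce_bounds(alpha @ f)                   # alpha-mixture of the f^x_Y
    ok &= (lo - 1e-12 <= red <= hi + 1e-12)
print("condition (3.4) holds on the grid:", ok)    # True, as in the proof
```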
Nevertheless we can do this only up to a point, because the α's have to be positive and normalized, a restriction that binds when trying to separate sets of acts. Hence we are left with fewer hyperplanes than those needed to "cut out" the right set, and with a larger Upe_X(f). This does not affect the result, though, because the additional acts cannot be distinguished from those in {f_X ∈ F_X | ∃ p_Y ∈ P_Y such that f_X(x) = ∑_{y∈Y} f(x, y) p_Y(y) for all x ∈ X} as long as we evaluate X-acts using a distribution in Δ(X).

Independent acts and independent events

A series of modeling questions concerning s-independence do not lend themselves immediately to representation through a state space with a product structure. Here we propose an approach that leverages Theorem 2 to answer two such questions: when are two acts f, g ∈ F independent according to a Bewley preference? When does such a preference consider two events A, B ⊆ Ω independent?

We will start as before from a finite state space Ω, though we note that as long as we restrict our attention to simple acts (which assume a finite number of values) the whole discussion can be extended to the case |Ω| = ∞. Now assume we are given a Bewley preference ≽ over K^Ω. An act f induces a finite partition Π_f over Ω given by Π_f = {f⁻¹(k) | k ∈ f[Ω]}. For any two acts f, g ∈ F, let Π_f ⊗ Π_g = {A ∩ B | A ∈ Π_f and B ∈ Π_g}, which is once again a partition of Ω. Let Σ_f be the algebra generated by Π_f and Σ_{f×g} the one generated by Π_f ⊗ Π_g. Finally, for a set P of priors in Δ(Ω), let P_f be the set of restrictions of elements of P to Σ_f, namely:

P_f = {p′ : Σ_f → [0, 1] | ∃ p ∈ P such that p′(A) = p(A) for all A ∈ Σ_f}

and define similarly P_{f×g} for the restrictions of P to Σ_{f×g}. We are now ready to give the first definition of this section:

Definition 4. Say that acts f and g are independent according to ≽ if the set P representing ≽ satisfies co(P_{f×g}) = co(P_f ⊗ P_g).

Notice that when P is a singleton {p}, this reduces to the usual condition that p(A ∩ B) = p(A) × p(B) for all sets A and B in the algebras generated by f and g respectively. A characterization of independent acts is immediately deduced from Theorem 2. Let ≽_{f×g} be the restriction of ≽ to Σ_{f×g}-measurable acts, let F_f be the set of acts that are Σ_f-measurable, and say that h_f ∈ F_f is an f-uniform product equivalent of the Σ_{f×g}-measurable act h, h_f ∈ Upe_f(h), if:

∑_{A∈Π_f} α_A h_f(A) ∈ Ce(∑_{A∈Π_f} α_A h^A_{Π_g})

for all α ∈ Δ(Π_f), where h^A_{Π_g} is the Σ_g-measurable act identified by h^A_{Π_g}(B) = h(A ∩ B) for each B ∈ Π_g. An immediate corollary of Theorem 2 is then:

Corollary 3. Let ≽ be a Bewley preference over F. Then f and g are independent according to ≽ if and only if, for all h ∈ F_{f×g} and h_f ∈ Upe_f(h),
c ≽_{f×g} h ⇒ c ≽_{f×g} h_f and h ≽_{f×g} d ⇒ h_f ≽_{f×g} d for all c, d ∈ K,
while at the same time, for all c, d ∈ K,
c ≽_{f×g} h_f for all h_f ∈ Upe_f(h) ⇒ c ≽_{f×g} h, and h_f ≽_{f×g} d for all h_f ∈ Upe_f(h) ⇒ h ≽_{f×g} d.

One can now also easily derive a definition of independent events. Fix two elements {k_1, k_2} of K such that k_1 > k_2, and let f_A = k_1 1_A + k_2 1_{Ω\A}. We will then say that A and B are independent events according to ≽ if f_A and f_B are independent acts according to ≽. It follows that two events are independent under this definition if and only if p(A ∩ B) = p(A)p(B) for all p in the set P representing ≽.

A characterization of s-independence for MaxMin preferences

In their pioneering work, Gilboa and Schmeidler (1989) provide a characterization of independent products of relations for MaxMin preferences that is strictly connected to the representation in Theorem 2.
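Since event independence reduces to p(A ∩ B) = p(A)p(B) prior by prior, it is straightforward to test. A minimal sketch, with assumed coin-bias priors reusing the two-coin setup:

```python
# A and B are independent according to a Bewley preference represented by P
# iff p(A & B) = p(A) p(B) for every prior p in P.
def indep_events(A, B, P):
    for p in P:
        pa = sum(p[w] for w in A)
        pb = sum(p[w] for w in B)
        pab = sum(p[w] for w in A & B)
        if abs(pab - pa * pb) > 1e-12:
            return False
    return True

Omega = {(x, y) for x in "HT" for y in "HT"}

def prod(ax, ay):
    """Product prior on Omega built from two assumed coin biases."""
    px = {"H": ax, "T": 1 - ax}
    py = {"H": ay, "T": 1 - ay}
    return {(x, y): px[x] * py[y] for (x, y) in Omega}

P = [prod(0.9, 0.9), prod(0.1, 0.1)]          # the 'common switch' set P2
A = {w for w in Omega if w[0] == "H"}          # second toss comes up heads
B = {w for w in Omega if w[1] == "H"}          # first toss comes up heads
print(indep_events(A, B, P))                   # True: every prior is a product
```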
Recall that a preference ≽ over F is MaxMin if there is a closed convex set of priors P ⊆ Δ(Ω) such that ≽ is represented by the concave functional V(f) = min_{p∈P} ∑_{ω∈Ω} f(ω)p(ω). Gilboa and Schmeidler (1989) define a notion of independent product of preferences which is equivalent to a MaxMin relation ≽_{X×Y} on F represented by

V(f) = min_{p∈co(P_X × P_Y)} ∑_{ω∈Ω} f(ω)p(ω)

where P_X and P_Y are the priors representing two original MaxMin preferences ≽_X and ≽_Y over K^X and K^Y respectively. The link with our representation is clear, as is the fact that the definition of Gilboa and Schmeidler also satisfies the requirement of being completely identified by its marginal preferences.

In fact more can be said on the relation between the two models. Ghirardato et al. (2004) introduce the concept of unambiguous preference. This is a sub-relation ≽* of a complete preference ≽ over acts that is identified as follows:

f ≽* g ⇔ λf + (1 − λ)h ≽ λg + (1 − λ)h for all λ ∈ (0, 1), h ∈ F.

For a large class of preferences, which includes MaxMin, ≽* can be shown to be a Bewley relation.¹² Moreover, the sets of priors representing a MaxMin preference ≽ and its unambiguous sub-relation ≽* coincide. Thus we can state the following corollary of Theorem 2:

Corollary 4. For any MaxMin preference ≽ over F and its unambiguous sub-relation ≽*, letting Upe*_X(f) stand for the X-uniform product equivalents of f under ≽*, the following are equivalent:
1) There are nonempty, closed and convex sets P_X ⊆ Δ(X) and P_Y ⊆ Δ(Y) such that ≽ is represented by the functional V : F → R given by V(f) = min_{p∈co(P_X × P_Y)} ∑_{ω∈Ω} f(ω)p(ω).
2) For all f ∈ F and f_X ∈ Upe*_X(f),
c ≽* f ⇒ c ≽* f_X and f ≽* d ⇒ f_X ≽* d for all c, d ∈ K,
while at the same time, for all c, d ∈ K,
c ≽* f_X for all f_X ∈ Upe*_X(f) ⇒ c ≽* f, and f_X ≽* d for all f_X ∈ Upe*_X(f) ⇒ f ≽* d.

This provides a characterization of Gilboa-Schmeidler independence based on the model primitives (≽ and the derived relation ≽*) instead of on elements of the representation, as in the one given in the 1989 paper (although see on this our comments in the next section).

Remark: One might be tempted to try to extend this result to other classes of preferences under ambiguity, given that Cerreia-Vioglio et al. ensure that all MBA preferences have an unambiguous sub-relation with a Bewley representation. Nevertheless, as the following two examples illustrate, this might lead to issues with existence and uniqueness. It is our opinion that Corollary 4 is a natural extension of Theorem 2 precisely because there is a deep link, at a mathematical and interpretational level, between the Bewley and MaxMin models, as illustrated for example in [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF]. Definitions and characterizations of s-independence for alternative models should be built based on their individual structure and interpretation.

Example 1. Existence: Consider Multiplier preferences, which are represented by the functional

U(f) = min_{p∈Δ(Ω)} ∑_{ω∈Ω} f(ω)p(ω) + θ r(p‖q)

where q is a reference distribution, r(p‖q) is the relative entropy of p w.r.t. q, and θ is a non-negative real number. Ghirardato and Siniscalchi (2010) show that the set of priors representing the unambiguous part ≽* of a multiplier preference must coincide, when Ω is finite, with Δ(Ω). Hence if Ω = X × Y and both X and Y contain at least two elements, no multiplier preference can satisfy condition 2) of Corollary 4, since in this case Δ(X) × Δ(Y) is strictly included in Δ(X × Y).
Thus no multiplier preference over K^Ω can reflect s-independence according to our definition.

Example 2. Uniqueness: A Choquet Expected Utility preference can be represented, assuming for simplicity K ⊆ R_+, by the functional V(f) = ∫ v({ω | f(ω) ≥ t}) dt, where v : 2^Ω → [0, 1] is a capacity, i.e. a normalized, monotone set function. As is well known, the Choquet integral can be alternatively expressed as V(f) = ∑_{ω∈Ω} f(ω) p_f(ω), where p_f is an additive probability distribution derived from the capacity by assigning to ω the probability p_f(ω) = v({ω′ | f(ω′) ≥ f(ω)}) − v({ω′ | f(ω′) > f(ω)}). There are as many such probabilities as there are orders ≥_f over the state space induced by the rule ω ≥_f ω′ if and only if f(ω) ≥ f(ω′). [START_REF] Ghirardato | Differentiating Ambiguity and Ambiguity Attitude[END_REF] show that the priors P representing the unambiguous part of a CEU preference coincide with co{p_f | f ∈ F}. Hence if we let X = {x_1, x_2} and Y = {y_1, y_2}, and consider two CEU preferences ≽_X and ≽_Y on K^X and K^Y respectively, we can conclude that P_X and P_Y have at most two extreme points each, as there are only two orders over a set of two elements. Thus P_X ⊗ P_Y has at most 2 × 2 = 4 extreme points. That means that if we wish to define the independent product of ≽_X and ≽_Y as the CEU preference over K^{X×Y} whose unambiguous preference is represented by P_X ⊗ P_Y, we will be unable to do so uniquely, as we need to assign 4 distributions to 4! = 24 different orders over X × Y. Dropping the requirement that the product be CEU would only worsen the issue, as the number of MBA preferences consistent with the priors P_X ⊗ P_Y is extremely large. Hence our definition of s-independence is unable to uniquely identify the independent product of two CEU preferences.

Some considerations on our results

We conclude with a brief comment concerning falsifiability. Decision theorists in general like, with good reason, to keep what we will call "continuity" and "behavioral" axioms separated. The distinction between the two is sometimes vague, but it can be made precise using finite falsifiability as a litmus test. By this we mean that, starting from the primitive of our model, we should always be able to obtain a violation of a behavioral assumption in a finite number of steps. In this sense the classic Independence axiom is behavioral, since it is negated in two steps, by finding three acts f, g, h and a weight λ ∈ (0, 1) such that f ≽ g but λg + (1 − λ)h ≻ λf + (1 − λ)h. On the other hand the Archimedean axiom, which asks that for any three acts f, g, h the sets {λ ∈ [0, 1] | λf + (1 − λ)h ≽ g} and {λ ∈ [0, 1] | g ≽ λf + (1 − λ)h} be closed, is typically not. In fact to violate it we must be able to check that, for example, λ*f + (1 − λ*)h ⊁ g while λ_n f + (1 − λ_n)h ≽ g for all elements {λ_n}_{n∈N} of a sequence converging to λ*, which involves verifying an infinite number of positive statements. When considering a novel representation result, we usually prefer new assumptions to be of the first kind rather than the second, since this allows for a direct test of the validity of the model. At the same time, characterizations based on continuity-type assumptions do bring a contribution, as they still allow us to identify the position of a model in the space of possible representations. The Conditional Invariance assumption falls on the behavioral side, as can be easily checked.
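The rank-dependent probability p_f of Example 2 is easy to compute explicitly. A small sketch, assuming a three-state space, a toy capacity and no payoff ties, evaluates the Choquet integral as a weighted sum under the induced additive p_f:

```python
# Choquet integral via the induced additive p_f, for Omega = {1, 2, 3}.
def choquet(f, v):
    """f: dict state -> payoff (>= 0, assumed tie-free);
    v: capacity on frozensets of states."""
    states = sorted(f, key=f.get, reverse=True)  # ranking >=_f induced by f
    pf, upper, prev = {}, set(), 0.0
    for w in states:
        upper.add(w)
        level = v[frozenset(upper)]
        pf[w] = level - prev                     # p_f(w) as defined in the text
        prev = level
    return sum(f[w] * pf[w] for w in f), pf

# A simple convex toy capacity: v(S) = (|S| / |Omega|)^2.
v = {frozenset(s): (len(s) / 3.0) ** 2
     for s in [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]}
f = {1: 5.0, 2: 1.0, 3: 3.0}
V, pf = choquet(f, v)
print(V, pf)   # V = 19/9; pf = {1: 1/9, 3: 3/9, 2: 5/9} for this ranking
```

A different ranking of payoffs would induce a different p_f, which is exactly the multiplicity that makes the independent product of two CEU preferences non-unique.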
The requirement we proposed as a characterization of s-independence for Bewley preferences, unfortunately, does not. To see this, notice that, for example, a possible violation of the axiom takes place if we can find a c ∈ K such that f ≽ c but c ≻ f_X for some f_X ∈ Upe_X(f). But showing this requires us to make sure that f_X is an X-uniform product equivalent of f, a process which implies checking that equation (3.4) is satisfied for all α ∈ Δ(X), an infinite set. For this reason we stop short of stating that we provide a full behavioral characterization of the model P = co(P_X ⊗ P_Y), and we believe that additional work is still needed to obtain it. This is the focus of ongoing research, which we hope to report in future work.

A. Proofs

Proof of Proposition 1: We prove the proposition only for X-acts conditioned on Y-events, since the argument for the inverse situation is symmetric. Assume f_X 1_R + h1_{Ω\R} ≽ g_X 1_R + h1_{Ω\R} for some h ∈ F, f_X, g_X ∈ F_X and R ∈ Σ_Y. This means that for all p ∈ P,

∑_{ω∈R} f_X(ω)p(ω) + ∑_{ω∈Ω\R} h(ω)p(ω) ≥ ∑_{ω∈R} g_X(ω)p(ω) + ∑_{ω∈Ω\R} h(ω)p(ω).   (A.1)

Subtract the common term on each side, for each prior, to get ∑_{ω∈R} f_X(ω)p(ω) ≥ ∑_{ω∈R} g_X(ω)p(ω). Because f_X and g_X do not depend on the Y coordinate and there is some B ⊆ Y such that R = X × B, we have

∑_{ω∈R} f_X(ω)p(ω) = ∑_{x∈X} f_X(x) ∑_{y∈B} p(x, y)   (A.2)

for all p ∈ P, and similarly for g_X. Since for each p ∈ P there are p_X ∈ Δ(X) and p_Y ∈ Δ(Y) such that p = p_X × p_Y, we can rewrite (A.2) as p_Y(B) ∑_{x∈X} f_X(x)p_X(x) and obtain

∑_{x∈X} f_X(x)p_X(x) ≥ ∑_{x∈X} g_X(x)p_X(x) for all p_X ∈ P_X,

where the common factor p_Y(B) can be canceled on both sides (for those priors in P for which p_Y(B) = 0 the inequalities are trivially true). Now, since S = X × B′ for some B′ ⊆ Y, we can multiply, for each p_X ∈ P_X, the inequality above by p_Y(B′), for all p_Y ∈ Δ(Y) such that p_X × p_Y ∈ P, to obtain

∑_{ω∈S} f_X(ω)p(ω) = ∑_{x∈X} f_X(x)p_X(x)p_Y(B′) ≥ ∑_{x∈X} g_X(x)p_X(x)p_Y(B′) = ∑_{ω∈S} g_X(ω)p(ω)

for all p ∈ P. Adding ∑_{ω∈Ω\S} h(ω)p(ω) to both sides of the inequality delivers the desired implication.

Proof of Theorem 1: 1) ⇒ 2) is a direct consequence of Proposition 1. The argument for 2) ⇒ 1) is well known, but we provide it here for completeness. To see that 2) ⇒ 1), consider first the case in which X × {y} is ≽-non-null for only one y ∈ Y (at least one such y must exist since otherwise p(Ω) = 0). Then it is clear that p_Y = δ_y and p = p_X × δ_y, where δ_y is the degenerate distribution at y inside Δ(Y), namely the element of Δ(Y) that is 1 at y and zero everywhere else. Now assume that at least two Y-states y, y′ ∈ Y are ≽-non-null. Denote also by F̂_X the set of maps from X to K, with generic elements f̂_X and ĝ_X. Each X-act f_X in F_X has a corresponding projection f̂_X in this set, identified by f̂_X(x) = f_X(x). For every ≽-non-null Y-state, let ≽_y be the preference over F̂_X identified by (writing 1_y for the indicator of X × {y})

f̂_X ≽_y ĝ_X ⟺ f_X 1_y + h1_{Ω\y} ≽ g_X 1_y + h1_{Ω\y}.   (A.3)

By the usual arguments this preference is independent of h. Moreover, Conditional Invariance requires it to be also independent of y. Because it is still going to be SEU over F̂_X, there is a unique distribution p̂_X ∈ Δ(X) representing each ≽_y. On the other side, we can see that for each ≽-non-null y, the second comparison in (A.3) will be satisfied if and only if

∑_{x∈X} f_X(x)p(x, y) ≥ ∑_{x∈X} g_X(x)p(x, y) ⟺ ∑_{x∈X} f_X(x)p(x|y) ≥ ∑_{x∈X} g_X(x)p(x|y), where p(x|y) = p(x, y)/p_Y(y).

The latter inequality is clearly an alternative SEU representation of ≽_y, hence by the uniqueness of the SEU representation p(·|y) = p̂_X for all ≽-non-null y ∈ Y. But then p = p̂_X × p_Y, which has the desired product form p_X × p_Y.
1) ⇒ 3) is an immediate consequence of the characterization of pe_X(f) in (3.1) and of elementary distributive properties of the sum of real numbers. To show that 3) ⇒ 1), assume by way of contradiction that 3) holds but p ≠ p_X × p_Y. Then by an elementary application of the hyperplane separation theorem there is, w.l.o.g., a vector r in R^Ω such that ∑_{ω∈Ω} r_ω p(ω) > ∑_{ω∈Ω} r_ω p_X × p_Y(ω). Because distributions are positive and normalized to 1, we can multiply both sides of the inequality by a constant and add to both a constant vector without affecting the inequality. Hence we can assume that r corresponds to some f_r in F. But clearly

∑_{(x,y)∈X×Y} f_r(x, y) p_X(x) p_Y(y) = ∑_{x∈X} (∑_{y∈Y} f_r(x, y) p_Y(y)) p_X(x)

is the value of pe_X(f_r) under the prior p, hence this implies that f_r ≻ pe_X(f_r), contradicting 3).

Proof of Theorem 2: 1) ⇒ 2). For any c ∈ K, f ≽ c if and only if

min_{p∈P} ∑_{ω∈Ω} f(ω)p(ω) ≥ c ⟺ min_{p_X∈P_X, p_Y∈P_Y} ∑_{y∈Y} ∑_{x∈X} f(x, y) p_X(x) p_Y(y) ≥ c

by the product structure of P. Let p*_X and p*_Y be the distributions that achieve the above minimum. For any f_X ∈ Upe_X(f) we must have, by taking the p_X reduction of f_X for any p_X ∈ P_X,

∑_{x∈X} f_X(x) p_X(x) ≥ min_{p_Y∈P_Y} ∑_{y∈Y} ∑_{x∈X} f(x, y) p_X(x) p_Y(y) ≥ ∑_{y∈Y} ∑_{x∈X} f(x, y) p*_X(x) p*_Y(y) ≥ c.

Hence f_X ≽ c. On the other side, Upe_X(f) contains the set {f_X ∈ F_X | ∃ p_Y ∈ P_Y such that f_X(x) = ∑_{y∈Y} f(x, y) p_Y(y) for all x ∈ X}, since for any such f_X and any α ∈ Δ(X) we have

max_{p_Y∈P_Y} ∑_{y∈Y} ∑_{x∈X} f(x, y) α_x p_Y(y) ≥ ∑_{x∈X} α_x f_X(x) ≥ min_{p_Y∈P_Y} ∑_{y∈Y} ∑_{x∈X} f(x, y) α_x p_Y(y).

Hence f_X ≽ c for all f_X ∈ Upe_X(f) implies that

min_{p_X∈P_X} ∑_{x∈X} (∑_{y∈Y} f(x, y) p_Y(y)) p_X(x) ≥ c

for all p_Y ∈ P_Y, and thus f ≽ c.

2) ⇒ 1). Suppose first that there is a p ∈ P such that p ∉ co(P_X ⊗ P_Y). Then there must be, by the usual hyperplane separating argument, an f* ∈ F and a c ∈ K such that

∑_{ω∈Ω} f*(ω)p(ω) > c ≥ ∑_{ω∈Ω} f*(ω) p_X × p_Y(ω) for all p_X × p_Y ∈ P_X ⊗ P_Y.

Now the last inequality implies that c ≽ f_X for all f_X ∈ Upe_X(f*), since otherwise we would have one member f̃_X of such set for which

max_{p_X∈P_X} ∑_{x∈X} f̃_X(x) p_X(x) > max_{p_X∈P_X, p_Y∈P_Y} ∑_{y∈Y} ∑_{x∈X} f*(x, y) p_X(x) p_Y(y),

in direct contradiction to ∑_{x∈X} f̃_X(x) p_X(x) ∈ Ce(∑_{x∈X} f*^x_Y p_X(x)) for all p_X ∈ P_X ⊆ Δ(X). But then by assumption c ≽ f*, which implies c ≥ ∑_{ω∈Ω} f*(ω)p(ω) for all p ∈ P. This proves that P ⊆ co(P_X ⊗ P_Y).

For the remaining inclusion, assume there are p_X ∈ P_X and p_Y ∈ P_Y such that p_X × p_Y ∉ P. Then we can find an f* and a c such that

∑_{ω∈Ω} f*(ω) p_X × p_Y(ω) > c ≥ ∑_{ω∈Ω} f*(ω)p(ω)   (A.4)

for all p ∈ P. Thus c ≽ f*. On the other side, we can find an f_X ∈ Upe_X(f*) such that f_X(x) = ∑_{y∈Y} f*(x, y) p_Y(y). By assumption c ≽ f_X, hence c ≥ ∑_{x∈X} ∑_{y∈Y} f*(x, y) p_Y(y) p′_X(x) for all p′_X ∈ P_X, so that the strict inequality in (A.4) cannot hold. This shows that P_X ⊗ P_Y ⊆ P and hence, since P is closed and convex, co(P_X ⊗ P_Y) ⊆ P.

See for example Rigotti and Shannon (2005), Ghirardato and Katz (2006) and Lopomo et al. (2011) for applications in finance, voting, and principal-agent models respectively.
Footnotes:
In this regard see Ghirardato et al. (2004), Gilboa et al. (2010) and Cerreia-Vioglio et al. (2011). While Gilboa et al. (2010) work in the classical Anscombe-Aumann setting, the correspondence to our environment is immediate and can be found in their Appendix B.
Ok Ortoleva Riella (2012) give a different axiomatization for a model that corresponds to (2.1) when K is compact.
6. The difference lies in the fact that in [START_REF] Blume | Lexicographic Probabilities and Choice under Uncertainty[END_REF] the conditioning events are only of the form X × {y}. While the two formulations are equivalent for SEU preferences, it can be shown that for a Bewley decision maker our version is strictly stronger.
7. Using the Archimedean Continuity, Monotonicity and Completeness properties of the SEU model.
8. Thus our definition is different from the most intuitive generalization of certainty equivalent that asks for any X-act that is indifferent to f, which one might dub the X-equivalent of the act.
10. We have adapted Prop. 18 in Ghirardato et al. (2004) to our environment and notation.
11. [START_REF] Ok | Incomplete Preferences Under Uncertainty: Indecisiveness in Beliefs versus Tastes[END_REF] use this type of reduction in the formulation of an axiom that characterizes two dual representations: Bewley's and the alternative single-prior multi-expected-utility model.
12. [START_REF] Cerreia-Vioglio | Rational Preferences under Ambiguity[END_REF] show that this is true for all Monotone Bernoullian Archimedean preferences, a class that includes Variational Preferences, Smooth Ambiguity preferences and many others.

Acknowledgments: I would like to thank Eric Danan and participants at the THEMA seminar at U. Cergy and the Ecole Polytechnique internal seminar for useful comments and discussions. I acknowledge support by a public grant overseen by the French National Research Agency (ANR) as part of the Investissements d'Avenir program (Idex Grant Agreement No. ANR-11-IDEX-0003-02 / Labex ECODEC No. ANR-11-LABEX-0047). Needless to say, all mistakes are my own.
47,286
[ "18593" ]
[ "444480" ]
01665015
en
[ "info" ]
2024/03/05 22:32:10
2017
https://theses.hal.science/tel-01665015v2/file/BARROIS_Benjamin.pdf
Acknowledgments: My first thanks go to my thesis supervisor Olivier Sentieys, who believed in me from the Master's degree onward and made this thesis possible. Backed by his experience, I was able to produce the various works presented in this document and to become part of the rich communities of the fields these works belong to. Thanks also to Karthick Parashar, Daniel Menard, Cédric Killian, Audrey, Gabriel, Karim, Nicolas, Joel, and all those who contributed to these efforts.

arithmetic provides much higher performance and important energy savings when applied to real-life applications, thanks to much smaller error and data width. The last contribution is a comparative study between the fixed-point and floating-point paradigms. For this, a new version of ApxPerf was developed, which adds an extra layer of High-Level Synthesis (HLS) achieved by Mentor Graphics Catapult C or Xilinx Vivado HLS. ApxPerf v2 uses a unique C++ source for both hardware performance and accuracy estimations of approximate operators. The framework comes with template-based synthesizable C++ libraries for integer approximate operators (apx_fixed) and for custom floating-point operators (ct_float). The second version of the framework can also evaluate complex applications, which are now synthesizable using HLS. After a comparative evaluation of our custom floating-point library against other existing libraries, the fixed-point and floating-point paradigms are first compared in terms of stand-alone performance and accuracy. Then, they are compared in the context of K-means clustering and FFT applications, where the interest of small-width floating-point is highlighted. The thesis concludes on the strong interest of reducing the bit-width of arithmetic operations, but also on the important issues brought by approximation. First, the many integer approximate operators published with promises of important energy savings at low error cost seem not to keep these promises when considered at application level. Indeed, we show that fixed-point arithmetic with smaller bit-width should be preferred to inexact operators. Finally, we emphasize the interest of small-width floating-point for approximate computing. Small floating-point is demonstrated to be very interesting in low-energy systems, compensating its overhead with its high dynamic range, its high flexibility and its ease of use for developers and system designers.

Résumé

Over the last decades, significant improvements have been made in computing performance and energy reduction, following Moore's law. However, the physical limits related to the shrinking of silicon-based transistors are about to be reached, and solving this problem is today one of the major challenges of research and industry. One way to improve energy efficiency is to use different representations of numbers, and reduced sizes for these representations. The standard representation of real numbers today is double-precision floating-point. However, it is now accepted that a large number of applications could be executed using lower-precision representations, with a minimal impact on the quality of their outputs.
This paradigm, recently termed approximate computing, appears as a promising approach and has become one of the major research areas aiming at improving computation speed and energy consumption for embedded and high-performance computing systems. Approximate computing relies on the tolerance of many systems and applications to a loss of quality or optimality in the produced result. By relaxing the requirements of extreme result accuracy or determinism, approximate computing techniques enable a considerably increased energy efficiency.

In this thesis, the performance-error trade-off obtained by relaxing the computation accuracy of basic arithmetic operators is addressed. After a study and a constructive critique of the existing ways to perform basic arithmetic operations approximately, methods and tools for the evaluation of the error cost and of the performance impact of using different arithmetic paradigms are presented. First, after a quick description of the classical floating-point and fixed-point arithmetics, a survey of the approximate-operator literature is presented. This survey highlights the main techniques for building approximate adders and multipliers, as well as the problem of the highly variable nature and amplitude of the errors induced in computations that use them. Second, a modular fixed-point error-estimation technique relying on the power spectral density is presented. This technique considers the spectral nature of the quantization noise filtered through the system, leading to a higher error-estimation accuracy than modular methods that ignore this spectral nature, and to a lower complexity than the classical propagation of the error mean and variance through the complete system. Then, the problem of the analytical estimation of the error produced by approximate operators is raised. The great behavioral and structural variety of approximate operators makes the existing techniques much more complex, which results in a high cost in terms of memory or computing power, or conversely in a poor estimation quality. With the proposed technique of positional bitwise error-rate propagation, a good compromise is found between the complexity of the estimation and its accuracy. Finally, a technique using pseudo-simulation based on approximate arithmetic operators to reproduce the effects of VOS is presented. This technique makes it possible to use high-level simulations to estimate VOS-related errors in place of extremely long and memory-hungry transistor-level SPICE simulations.

Introduction

Handling the End of Moore's Law

In the history of computers, huge improvements in calculation accuracy have been made in two ways. First, the accuracy of computations was gradually improved with the increase of the bit width allocated to number representation. This was made possible thanks to technology evolution and miniaturization, allowing a dramatic growth of the number of basic logic elements embedded in circuits. Second, accurate number representations such as floating-point were proposed.
Floating-point representation was first used in 1914 in an electro-mechanical version of Charles Babbage's computing machine, the Analytical Engine, made by Leonardo Torres y Quevedo, followed by Konrad Zuse's first programmable mechanical computer embedding 24-bit floating-point representation. Today, silicon-based general-purpose processors mostly embed 64-bit floating-point computation units, which is nowadays the standard in terms of high-accuracy computing. At the same time, important improvements of computational performance have been achieved. Miniaturization, besides allowing larger bit widths, also allowed considerable benefits in terms of power and speed, and thus energy. The CDC 6600 supercomputer created in 1964, with its 64K 60-bit words of memory, had a power consumption of approximately 150 kW for a computing capacity of approximately 500 KFLOPS, which makes 3.3 floating-point operations per watt. In comparison, 2017's best supercomputer, Sunway TaihuLight in China, with its 1.31 Pbytes of memory, announces 6.05 GFLOPS/W [5]. Therefore, in some fifty years, supercomputer energy efficiency has improved by roughly 2E9, following the needs of industry. These impressive progresses were achieved according to Moore's law, who forecast in 1965 [START_REF] Moore | Cramming more components onto integrated circuits[END_REF] an exponential growth of computation circuit complexity, depicted in Figure 1 in terms of transistor count, performance, frequency, power and number of processing cores. From the day it was stated, this law and its numerous variations have outstandingly described the evolution of computing, but also the needs of the global market, as computing is now inherent to nearly all possible domains, from weather forecasting to secured money transactions, including targeted advertising and self-driving cars. However, many specialists agree that Moore's law will end in the very near future [START_REF] Williams | What's next? [the end of moore's law[END_REF]. First of all, the gradual decrease of the size of silicon transistors is coming to an end. With a van der Waals radius of 210 pm, today's 10 nm transistors are less than 50 atoms wide, leading to new issues at the quantum-physics scale, as well as increased current leakage. Then, the 20th century saw a steady increase of the clock frequency of synchronous circuits, which represent the overwhelming majority of circuits, contributing to the important gains in performance. However, the beginning of the third millennium has seen a stagnation of this frequency, forcing performance to be sought in parallelism instead, single-core processors disappearing in favor of multi-/many-core processors. With technology pushed to its limits and no new cutting-edge physical-layer technology appearing to arrive soon, new ways have to be found for computing to follow the needs. Moreover, with a strong interest of industry in energy-critical embedded systems, energy efficiency is sought more than ever. Specialists are anticipating a future technological revolution brought by the Internet of Things (IoT), with a fast growth of the number of interconnected autonomous embedded systems [START_REF] Xu | Internet of things in industries: A survey[END_REF][START_REF] Perera | Context aware computing for the internet of things: A survey[END_REF][START_REF] Zanella | Internet of things for smart cities[END_REF].
As said above, a first performance improvement can be found in computing parallelism, thanks to multi-core/multi-thread superscalar or VLIW processors [START_REF] Sohi | Multiscalar processors[END_REF], or GPU computing. However, not all applications can be well parallelized because of data dependencies, leading to moderate speed-ups in spite of an important area and energy overhead. A second modern way to improve performance is the use of hardware accelerators. These accelerators come in mainstream processors, with hardware video encoders/decoders such as for the x264 or x265 codecs, but also in Field-Programmable Gate Arrays (FPGAs). FPGAs consist of a grid of programmable logic gates, allowing reconfigurable hardware implementation. In addition to general-purpose Look-Up Tables (LUTs), most FPGAs embed Digital Signal Processing blocks (DSPs) to accelerate signal processing computations. Hybrids embedding processors connected to an FPGA, embodied by the Xilinx Zynq family, are today a big stake for high-performance embedded systems. Nevertheless, hardware and software co-design still represents an important cost in terms of development and testing.

Approximate Computing or Playing with Accuracy for Energy Efficiency

This thesis focuses on an alternative way to improve the performance/energy ratio, which is relaxing computation accuracy to improve performance, both in terms of energy/area for low-power systems and in terms of speed for High Performance Computing (HPC). Indeed, many applications can tolerate some imprecision, for various reasons. For instance, having a high accuracy in signal processing applications can be useless if the input signals are noisy, since the least significant bits in the computation will only act on the noisy part of the signal. Also, some applications such as classification do not always have a golden output and can tolerate a set of satisfying results. Therefore, it can be useless to perform many more loop iterations once an output can be considered good enough. Various methods applied at several levels are possible to relax the accuracy and obtain performance or energy benefits. At the physical layer, voltage and frequency can be scaled beyond the circuit tolerance threshold, with potentially important energy or speed benefits, but with effects on the quality of the output that may be destructive and hard to manage [START_REF] Kuroda | Variable supply-voltage scheme for low-power high-speed cmos digital design[END_REF][START_REF] Sueur | Dynamic voltage and frequency scaling: The laws of diminishing returns[END_REF]. At the algorithmic level, several simplifications can be performed, such as loop perforation or mathematical function approximations. For instance, trigonometric functions can be approximated using COordinate Rotation DIgital Computing (CORDIC) [START_REF] Volder | The cordic trigonometric computing technique[END_REF] with a limited number of iterations, and complex functions can be approximated by interval using simple polynomials stored in small tables. The choice and management of arithmetic paradigms also allows important energy savings while relaxing accuracy. In this thesis, three main paradigms are explored:
• customizing floating-point operators,
• reducing bit-width and complexity using fixed-point arithmetic, which can be associated with quantization theory,
• and approximate integer operators, which perform arithmetic operations using inaccurate functions.
Floating-point arithmetic is often associated with high-precision computing, with important time and energy overheads compared to fixed-point. Indeed, floating-point is today the most used representation of real numbers because of its high dynamic range and high precision at any amplitude scale. However, because of operations that are more complex than for integer arithmetic and the complexity of the many particular cases handled in the IEEE 754 standard [START_REF]Ieee standard for floating-point arithmetic[END_REF], fixed-point is nearly always selected when a low energy per operation is aimed at. In this thesis, we consider simplified small-bit-width floating-point arithmetic implementations leading to better energy efficiency. Fixed-point arithmetic is the most classical paradigm when it comes to low-energy computing. In this thesis, it is used as a reference for comparisons with the other paradigms. A model for fixed-point error estimation leveraging the Power Spectral Density (PSD) is also proposed. Finally, approximate arithmetic operators using modified addition and multiplication functions are considered. Many implementations of these operators were published this past decade, but they have never been the object of a complete comparative study. After presenting the literature about floating-point and fixed-point, a study of state-of-the-art approximate operators is proposed. Then, models for error propagation for fixed-point and approximate operators are described and evaluated. Finally, comparative studies are presented between approximate operators and fixed-point on one side, and fixed-point and floating-point on the other side, leveraging classical signal processing applications. More details on the organization of this document are given in the next section.

Thesis Organization

Chapter 1 Approximate computing in general is presented, followed by a deeper study of the different existing computing arithmetics. After a presentation of the classical floating-point and fixed-point arithmetic paradigms, state-of-the-art integer approximate adders and multipliers are presented to give an overview of the many existing techniques to lower energy by introducing inaccuracy in computations.

Chapter 2 A novel technique to estimate the impact of quantization across large fixed-point systems is presented, leveraging the noise Power Spectral Density (PSD). The benefits of the method compared to others are then demonstrated on signal processing applications.

Chapter 3 A novel technique to estimate the error of integer approximate operators propagated across a system is presented in this chapter. This technique, based on the Bitwise Error-Rate (BWER), first uses training by simulation to build a model, which is then used for fast propagation. This analytical technique is fast and requires less memory space than other similar existing techniques. Then, a model for the reproduction of Voltage Over-Scaling (VOS) errors in exact integer arithmetic operators, using pseudo-simulation on approximate operators, is presented.

Chapter 4 A comparative study of fixed-point and approximate arithmetic is presented in Chapter 4. Both paradigms are first compared in their stand-alone version, and then on several signal processing applications using relevant metrics. The study is performed using the first version of our open-source operator characterization framework ApxPerf, and our approximate operator library apx_fixed.
Chapter 5 Using a second version of our framework ApxPerf embedding a High-Level Synthesis (HLS) frontend and our custom floating-point library ct_float, a comparative study of fixed-point and small-width custom floating-point arithmetic is performed. First, the hardware performance and accuracy of both operators are compared in their stand-alone version. Then, this comparison is carried out on K-means clustering and Fast Fourier Transform (FFT) applications.

Chapter 1
Trading Accuracy for Performance in Computing Systems

In this chapter, various methods to trade accuracy for performance are first listed in Section 1.1. Then, the study is centered on the different existing representations of numbers and the architectures of the arithmetic operators using them. In Section 1.2, floating-point arithmetic is developed. Then, Section 1.3 presents fixed-point arithmetic. Finally, a study of state-of-the-art approximate architectures of integer adders and multipliers is presented in Section 1.4.

Various Methods to Trade Accuracy for Performance

In this section, the main methods to trade accuracy for performance are presented. First, VOS is discussed. Then, existing algorithm-level transformations are presented. Finally, approximate arithmetic is introduced, to be further developed in Sections 1.2, 1.3, and 1.4 as the central element of this thesis.

Voltage Overscaling

The power consumption of a transistor in a synchronous circuit is linear in the frequency and proportional to the square of the applied voltage. For the same load of computations, decreasing the frequency also increases the computing time, so the energy stays the same. Thus, it is important to mostly exploit the voltage to save as much energy as possible. Nevertheless, decreasing the voltage implies more instability in the transitions of the transistors, and this is why, in the large majority of systems, the voltage is set above a certain threshold which ensures the stability of the system. Lowering the voltage under this threshold can cause the output of a transistor to be stuck at 0 or 1, compromising the integrity of the realized function. One of the main issues of VOS is process variability. Indeed, two instances A and B of the same silicon chip are not able to handle exactly the same voltage before breakdown, a given transistor in A being possibly weaker than its counterpart in B, mostly because of Random Dopant Fluctuations (RDF), which are a major issue in nanometer-scale Very Large Scale Integration (VLSI). However, given the important possible energy gains brought by VOS, mastering it is an important stake which is widely explored [START_REF] Kuroda | Variable supply-voltage scheme for low-power high-speed cmos digital design[END_REF][START_REF] Sueur | Dynamic voltage and frequency scaling: The laws of diminishing returns[END_REF]. Low-leakage technologies like Fully Depleted Silicon On Insulator (FDSOI) make RDF impact near-threshold computing variability much less. Despite technology improvements, near-threshold and sub-threshold computing needs error-correcting circuits, coming with an area, energy and delay overhead which needs to stay inferior to the savings. In [START_REF] Ernst | Razor: circuit-level correction of timing errors for low-power operation[END_REF], a method called Razor is proposed to monitor the circuit error rate at low cost, in order to tune its voltage to an acceptable failure rate.
Therefore, the main challenge with VOS is its uncertainty, the absence of a general rule which would make all instances of an electronic chip equal towards voltage scaling, which makes manufacturers generally turn their backs on VOS, preferring to keep a comfortable margin above the threshold. In the next Subsections 1.1.2 and 1.1.3, accuracy is traded for performance in a reproducible way, with results which are independent of the hardware and totally dependent on the programmer/designer's will, and thus more likely to be used in the future at industrial scale.

Algorithmic Approximations

A more secure way than VOS to save energy is achieved by algorithmic approximations. Indeed, modifying algorithm implementations to make them deliver their results in fewer cycles, or to use costly functions less intensively, potentially leads to important savings. First, the approximable parts of the code or algorithm must be identified, i.e. the parts where the gains can be maximized despite a moderate impact on the output quality. Various methods for the identification of these approximable parts are proposed in [START_REF] Roy | Asac: Automatic sensitivity analysis for approximate computing[END_REF][START_REF] Esmaeilzadeh | Architecture support for disciplined approximate programming[END_REF][START_REF] Grigorian | Dynamically adaptive and reliable approximate computing using light-weight error analysis[END_REF]. These methods are mostly perturbation-based, meaning errors are inserted in the code, and the output is simulated to evaluate the impact of the inserted error. Like all simulation-based methods, these methods may not be scalable to large algorithms and only consider a limited number of perturbation types. Once the approximable parts are identified, by an automated method or manually, different techniques can be applied depending on the kind of algorithm part to approximate (computation loops, mathematical functions, etc.).

One of the main techniques for reducing the cost of algorithm computations is loop perforation. Indeed, most signal processing algorithms consist of quite simple functions repeated a high number of times, e.g. for Monte Carlo simulations, search space enumeration or iterative refinement. In these three cases, a subset of loop iterations can simply be skipped, while still returning good enough results [START_REF] Sidiroglou-Douskos | Managing performance vs. accuracy trade-offs with loop perforation[END_REF]. Using complex mathematical functions such as exponentials, logarithms, square roots or trigonometric functions is very area-, time- and energy-costly compared to basic arithmetic operations. Indeed, accurate implementations may require large tables and long addition- and multiplication-based iterative refinement. Therefore, in applications using these mathematical functions intensively, relaxing accuracy for performance can be a source of important savings. A first classical way to approximate functions is polynomial approximation, using tabulated polynomials representing the function in different ranges. In the context of approximate computing, iterative-refinement-based mathematical approximations are also very interesting since they allow the loop perforation discussed previously. Reducing the number of iterations can be applied to CORDIC algorithms, very common for trigonometric function approximations [START_REF] Volder | The cordic trigonometric computing technique[END_REF].
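As an illustration of the iteration-count trade-off just mentioned, here is a minimal rotation-mode CORDIC sketch; it uses plain Python floats rather than the fixed-point datapath of a hardware implementation, and assumes input angles in [−π/2, π/2]. Halving the iteration count roughly halves the cost, while the error grows as about 2^−n:

```python
# Rotation-mode CORDIC for sin/cos, showing accuracy vs. iteration count.
import math

def cordic_sincos(theta, n_iter):
    """theta in [-pi/2, pi/2]; fewer iterations means cheaper but less accurate."""
    K = 1.0
    for i in range(n_iter):                  # scaling factor for n_iter steps
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # rotate toward the target angle
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return y, x                              # (sin, cos) approximations

theta = 0.6
for n in (4, 8, 16):
    s, c = cordic_sincos(theta, n)
    print(n, abs(s - math.sin(theta)), abs(c - math.cos(theta)))
```

In hardware, the shift-and-add structure of each iteration is what makes CORDIC attractive: perforating iterations removes whole adder stages, not just cycles.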
Several efficient approximate mathematical functions were proposed for specialized hardware such as FPGAs [START_REF] Pandey | An fpga-based fixedpoint architecture for binary logarithmic computation[END_REF] or Single Instruction Multiple Data (SIMD) hardware [START_REF] Cristiano | Fast exponential computation on simd architectures[END_REF].

Approximate Basic Arithmetic

The level of approximation we focus on in this thesis is the approximation of basic arithmetic operations, which are addition, subtraction and multiplication. These operations are the base of most functions and algorithms in classical computing, but also the most energy-costly compared to other basic CPU functions such as register shifts or binary logical operators, meaning that a gain in performance or energy on these operations automatically induces an important benefit for the whole application. They are also statistically intensively used in general-purpose CPU computing: in ARM processors, ADD and SUB are the most used instructions after LOAD and STORE. The approximation of these basic operators is explored along two different angles in this thesis. On the one hand, approximate representations of numbers are discussed, more precisely the approximate representation of real numbers in computer arithmetic, using floating-point and fixed-point arithmetic. On the other hand, approximate computing using modified functions for the integer arithmetic operations of addition, subtraction and multiplication is explored, used in fixed-point arithmetic and compared to existing methods. Approximations using floating-point arithmetic are discussed in Section 1.2, approximations using fixed-point arithmetic in Section 1.3, and approximate integer operators are presented in Section 1.4.

Relaxing Accuracy Using Floating-Point Arithmetic

Floating-point (FlP) representation is today the main representation for real numbers in computing, thanks to a potentially high dynamic range, totally managed by the hardware. However, this ease of use comes with relatively important area, delay and energy penalties. FlP representation is presented in Section 1.2.1. Then, FlP addition/subtraction and multiplication are described in Section 1.2.2. Ways to relax accuracy for performance in FlP arithmetic are then discussed in Section 1.2.3.

Floating-Point Representation for Real Numbers

In computer arithmetic, the representation of real numbers is a major stake. Indeed, most powerful algorithms are based on continuous mathematics, and their accuracy and stability are directly related to the accuracy of the number representation they use. However, in classical computing, an infinite accuracy is not possible since all representations are contained in a finite bit width. To address this issue, having a number representation that is as accurate for very small numbers as for very large numbers is important. Indeed, large and small numbers are dual, since multiplying (resp. dividing) a number by another large number is equivalent to dividing (resp. multiplying) it by a small number. Giving the same relative accuracy to numbers whatever their amplitude can only be achieved by giving the same impact to their most significant digit. In decimal representation, this is achieved with scientific notation, representing the significant value of the number in the range [1, 10[, weighted by 10 raised to a certain power. FlP representation in radix-2 is the counterpart of scientific notation for binary computing.
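To make the radix-2 "scientific notation" concrete before the formal description below, the following sketch decodes a generic (1, E, M)-bit word with an implicit leading 1; the bias of 2^(E−1) − 1 and the 8-bit example format are assumptions for illustration, and normal numbers only are handled (no subnormals, zeros or infinities):

```python
# Decoding a custom (sign, E-bit exponent, M-bit mantissa) floating-point word.
def decode(word, E, M):
    s = (word >> (E + M)) & 1
    e = (word >> M) & ((1 << E) - 1)
    m = word & ((1 << M) - 1)
    mantissa = 1.0 + m / (1 << M)             # implicit leading 1: value in [1, 2)
    bias = (1 << (E - 1)) - 1                 # assumed IEEE-style bias
    return (-1) ** s * mantissa * 2.0 ** (e - bias)

# Example: an 8-bit 'minifloat' with E = 4, M = 3.
for w in (0b0_0111_000, 0b0_0111_100, 0b1_1000_010):
    print(bin(w), decode(w, 4, 3))            # 1.0, 1.5, -2.5
```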
The point in the representation of the number is "floating", so that the mantissa represents a value in [1, 2[, multiplied by a power of 2. Given an M-bit mantissa, a signed integer exponent of value e, often represented in biased representation, and a sign bit s, any real number between limits defined by M and E (the number of bits allocated to the exponent e) can be represented, with a relative step depending on M, by:

(−1)^s × m_{M−1}.m_{M−2} m_{M−3} ··· m_1 m_0 × 2^e.

With this representation, any number under this format can be represented using M + E + 1 bits, as shown in Figure 1.1. A particularity of binary FlP with a mantissa represented in [1, 2[ is that its Most Significant Bit (MSB) can only be 1. Knowing that, the MSB can be left implicit, freeing space for one more Least Significant Bit (LSB) instead. Nevertheless, automatically keeping the floating point at the right position along computations requires an important hardware overhead, as discussed in Section 1.2.2. Managing subnormal numbers (numbers between 0 and the smallest positive representable normal value), as well as the values 0 and infinity, also represents an overhead. Despite this additional cost, FlP representation is today established as the standard for real number representation. Indeed, besides its high accuracy and high dynamic range, it has the huge advantage of leaving the whole management of the representation to the hardware instead of leaving it to the software designer, significantly diminishing development and testing time. This domination is sustained by the IEEE 754 standard, last revised in 2008 [START_REF]Ieee standard for floating-point arithmetic[END_REF], which sets the conventions for the possible floating-point number representations, subnormal number management and the different cases to be handled, ensuring a high portability of programs. However, such a strict normalization implies:
• an important overhead for raising flags for the many special cases, and an even more important overhead for the management of these special cases (hardware or software overhead),
• and a low flexibility in the width of the mantissa and exponent, which have to respect the rules of Table 1.1 for 32, 64 and 128-bit implementations.

As a first conclusion, the constraints imposed on FlP representation by the IEEE 754 normalization imply a high cost in terms of hardware resources, which highly counterbalances its accuracy benefits. However, as discussed in Section 1.2.3, taking liberties with FlP can significantly increase the accuracy/cost ratio. As discussed later in Section 1.3, integer addition/subtraction is the simplest arithmetic operator. However, in FlP arithmetic, it suffers from a high control overhead. Indeed, several steps are needed to perform a FlP addition:
• First, the difference of the exponents is computed.
• If the difference of the exponents is superior to the mantissa width, the biggest number is directly issued (this is the far path of the operator: one of the numbers is too small to impact the addition).
• Else, if the difference of the exponents is inferior to the mantissa width, one of the input mantissas must be shifted so that bits of the same significance face each other. This is the close path, by opposition to the far path.
• The addition of the mantissas is performed.
• Then, rounding is performed on the mantissa, depending on the dropped bits and the selected rounding mode.
• Special cases are then handled (zero, infinity, subnormal results), as well as the output sign.
FlP addition principle is illustrated in Figure 1.2, taken from [START_REF] Muller | Handbook of floating-point arithmetic[END_REF]. More control can be needed, depending on the implementation of the FlP adder and the specificities of the FlP representation. For instance, the management of the implicit 1 implies adding the implicit 1s back to the mantissas before the addition, and an important overhead can be dedicated to exception handling.

For cost comparison, Table 1.2 shows the performance of 32-bit and 64-bit FlP addition, using the ac_float type from Mentor Graphics, and 32-bit and 64-bit integer addition using the ac_int type, generated using the HLS and power estimation process of the second version of the Apx-Perf framework described in Section 4.1, targeting 28nm FDSOI with a 200 MHz clock and using 10,000 uniform input samples. FlP addition power was estimated activating the close path 50% of the time. These results clearly show the overhead of FlP addition. For the 32-bit version, FlP addition is 3.5× larger, 2.3× slower and costs 27× more energy than integer addition. For the 64-bit version, FlP addition is 3.9× larger, 1.9× slower and costs 30× more energy. The overhead relative to integer addition seems to grow roughly linearly with the size of the operator, and the number representation has a major impact on performance. However, it is shown in Chapter 5 that this large difference shrinks when the impact of accuracy is taken into account.

FlP multiplication is less complicated than addition, as only a low control overhead is necessary to perform the operation. Input mantissas are multiplied using a classical integer multiplier (see Section 1.3), while exponents are simply added. At worst, a final +1 on the exponent can be needed, depending on the result of the mantissa multiplication. The basic architecture of a FlP multiplier is described in Figure 1.3, from [START_REF] Muller | Handbook of floating-point arithmetic[END_REF]. Obviously, all the classical hardware overheads needed by FlP representation are still necessary (rounding logic, normalization, management of particular cases).

Figure 1.3 - Basic floating-point multiplication [START_REF] Muller | Handbook of floating-point arithmetic[END_REF]

Table 1.3 shows the difference between 32-bit and 64-bit floating-point multiplication using Mentor Graphics ac_float and 32-bit and 64-bit fixed-width integer multiplication using the ac_int data type, with the same experimental setup as discussed before for the addition. A first observation on the area shows that the integer multiplication is 48% larger than the FlP version for the 32-bit version, and 37% larger for the 64-bit version. This difference is due to the smaller size of the integer multiplier inside the FlP multiplier, since it is limited to the size of the mantissa (24 bits for the 32-bit version, 53 bits for the 64-bit version). Despite the management of the exponent, the overhead is not large enough to produce a larger operator. However, if the area overhead is not very large, 32-bit FlP multiplication energy is 11× higher than the integer multiplication energy, while the 64-bit version is 37× more energy-costly.
It is interesting to note that the difference in energy consumption between addition and multiplication is much more important for integer operators than for FlP. For the 32-bit version for instance, integer multiplication costs 7× more energy than integer addition, while this factor is only 1.4× for the 32-bit FlP multiplier compared to the 32-bit FlP adder. Therefore, using multiplication in FlP computing is relatively less penalizing than using integer multiplication, typically used in Fixed-Point (FxP) arithmetic.

Potential for Relaxing Accuracy in Floating-Point Arithmetic

There are several possible opportunities to relax accuracy in floating-point arithmetic to increase performance. The main one is simply to use word-lengths as small as possible for the mantissa and the exponent. With the mantissa normalized in [1, 2[, reducing the word-length corresponds to pruning the LSBs, which comes with no overhead. Optionally, rounding can be performed at a higher cost. For the exponent, the transformation is more complicated if it is represented with a bias. Indeed, if e is the exponent width, an implicit bias of $2^{e-1}$ applies to the exponent in classical exponent representation. Therefore, reducing the exponent to a width $e'$ means that a new bias must be applied. The original exponent must be added $2^{e'} - 2^e$ (< 0) before pruning MSBs, implying hardware overhead at conversion. The original exponent must represent a value in $[-2^{e'-1} + 1, 2^{e'-1}]$ to avoid overflow. In practice, it is better to keep a constant exponent width to avoid useless overhead and conversion overflows, which would have a huge impact on the quality of the computations, even if they are scarce.

A second way to improve computation at inferior cost is to play with the implicit bias of the exponent. Indeed, increasing the exponent width increases the dynamic range towards infinity, but also the accuracy towards zero. Thus, if the absolute maximum values to be represented are known, the bias can be chosen so it is just large enough to represent these values. This way, the exponent gives more accuracy to very small values, increasing accuracy. However, using a custom bias means that the arithmetic operators (addition and multiplication) must consider this bias in the computation of the resulting exponent, and the optimal bias along computation may diverge to $-\infty$. To avoid this, if the original $2^{e-1}$ exponent bias is kept, an exponent bias can be simulated by biasing the exponents of the inputs of each or some computations using shifting. For the addition, biasing both inputs by adding $2^{e_{in}}$ to the exponent implies that the output will also be represented biased by $2^{e_{in}}$. For the multiplication, the output will be biased by $2^{e_{in}+1}$. Keeping an implicit track of the bias along computations allows knowing the bias of any algorithm output, and optionally performing a final rescaling of the outputs.

Finally, accuracy can be relaxed in the integer operators composing FlP operators, i.e. the integer adder adding mantissas in the FlP addition close path, and the integer multiplier in the FlP multiplication. Indeed, they can be replaced by the approximate adders and multipliers described in Section 1.4 to improve performance by relaxing accuracy. However, as most of the performance cost lies in the control hardware rather than in the integer arithmetic part, the impact on accuracy would be strong for a very small performance benefit.
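The first technique (pruning mantissa LSBs) is easy to emulate in software: the sketch below masks out the low-order mantissa bits of a binary32 value to mimic an m-bit stored mantissa, with truncation as the (free) rounding. The helper and its masking strategy are illustrative assumptions, not part of any tool's API.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Emulate an m-bit stored mantissa (m <= 23) by zeroing the dropped LSBs.
float prune_mantissa(float f, int m) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    bits &= ~((1u << (23 - m)) - 1u);   // clear the 23-m least significant bits
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

int main() {
    float x = 3.14159265f;
    for (int m = 23; m >= 3; m -= 5)
        std::printf("m=%2d  x=%.8f\n", m, prune_mantissa(x, m));
    return 0;
}
```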
The same approximation can be applied to the exponent management, but the impact of approximate arithmetic on accuracy would be huge, and this is strongly inadvisable. More state-of-the-art work on FlP arithmetic is developed in Section 5.1, more particularly on HLS using FlP custom computing cores.

Relaxing Accuracy Using Fixed-Point Arithmetic

Aside from FlP, a classical representation for real numbers is Fixed-Point (FxP) representation. This section presents generalities about FxP representation in Section 1.3.1, then presents the classical models for quantization noise in Section 1.3.2. Finally, hardware implementations of addition and multiplication are respectively presented in Sections 1.3.3 and 1.3.4.

Fixed-Point Representation for Real Numbers

In fixed-point representation, an integer number represents a real number multiplied by a factor depending on the implicit position of the point in the representation. A real number x is represented by the FxP number $x_{FxP}$, represented on n bits with d bits of fractional part, by the following equation:

$x_{FxP} = \left| x \times 2^d \right|_r \times 2^{-d},$

where $|\cdot|_r$ is a rounding operator, which can be implemented using several functions such as the ones of Section 1.3.2. The representation of a 12-bit two's complement signed FxP number with a 4-bit integer part is depicted in Figure 1.4. A two's complement binary number $x_{bin} = \{x_i\}_{i \in [0, n-1]}$ represents the integer number $x_{int}$ in the following way:

$x_{int} = x_{n-1} \times \left(-2^{n-1}\right) + \sum_{i=0}^{n-2} x_i \times 2^i.$

Therefore, the two's complement n-bit FxP number represented by $x_{FxP}$ with a d-bit fractional part is worth:

$x_{FxP} = x_{int} \times 2^{-d} = x_{n-1} \times \left(-2^{n-d-1}\right) + \sum_{i=0}^{n-2} x_i \times 2^{i-d}.$

Quantization and Rounding

Representing a real number in FxP is equivalent to transforming a number represented on an infinity of bits to a finite word-length. This reduction is generally referred to as quantization of a continuous-amplitude signal. Using a FxP representation with a d-bit fractional part implies that the step between two representable values is equal to $q = 2^{-d}$, referred to as the quantization step. The process of quantization results in a quantization error defined by:

$e = x_q - x,$ (1.1)

where x is the original number to be quantized and $x_q$ the resulting quantized number. Quantization of a continuous-amplitude signal is performed differently depending on the rounding mode chosen. Several classical rounding modes are possible (a small sketch modeling them follows this list):

1. Rounding towards $-\infty$ (RD). The infinity of dropped bits is not taken into consideration. This results in a negative quantization error e and is equivalent to truncation.
2. Rounding towards $+\infty$ (RU). Again, the infinity of dropped bits is not taken into consideration, and the nearest superior representable value is selected. This is equivalent to adding q to the result of the truncation process.
3. Rounding towards 0 (RZ). If x is negative, the number is rounded towards $+\infty$. Else, it is rounded towards $-\infty$.
4. Rounding to Nearest (RN). The nearest representable value is selected: if the MSB of the dropped bits is 0, then $x_q$ is obtained by rounding towards $-\infty$ (truncation); else $x_q$ is obtained by rounding towards $+\infty$. The special value where the MSB of the dropped part is 1 and all the following bits are 0 can lead to rounding either up or down, depending on the implementation.
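The following sketch models the four rounding modes on a d-bit fractional grid ($q = 2^{-d}$); RN breaks ties upwards here, which is one of the two implementation choices mentioned in the last item.

```cpp
#include <cmath>
#include <cstdio>

// Quantize x with step q = 2^-d, using one of the rounding modes RD/RU/RZ/RN.
double quantize(double x, int d, char mode) {
    double s = std::ldexp(x, d);                       // x * 2^d
    double r;
    switch (mode) {
        case 'D': r = std::floor(s); break;            // RD: towards -infinity
        case 'U': r = std::ceil(s);  break;            // RU: towards +infinity
        case 'Z': r = std::trunc(s); break;            // RZ: towards zero
        default : r = std::floor(s + 0.5); break;      // RN, ties rounded up
    }
    return std::ldexp(r, -d);                          // back to the real scale
}

int main() {
    double x = -1.30078125;                            // between grid points for d = 4
    const char modes[4] = {'D', 'U', 'Z', 'N'};
    for (char m : modes)
        std::printf("R%c: %+0.6f (e = %+0.6f)\n", m,
                    quantize(x, 4, m), quantize(x, 4, m) - x);
    return 0;
}
```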
Choosing one tie direction or the other does not change the error distribution in the continuous-amplitude case; the same holds for the discrete case, as discussed below. The quantization errors produced by rounding towards $\pm\infty$ and to nearest discussed above are depicted in Figure 1.5. The RZ method has a varying quantization error distribution depending on the sign of the value to be rounded. As described in [START_REF] Widrow | A study of rough amplitude quantization by means of nyquist sampling theory[END_REF] and [START_REF] Widrow | Statistical analysis of amplitude-quantized sampled-data systems[END_REF], the additional error due to the quantization of a continuous signal is uniformly distributed between its limits ($[-q, 0]$ for truncation, $[0, q]$ for rounding towards $+\infty$ and $[-q/2, q/2]$ for rounding to nearest) and statistically independent of the quantized signal. Therefore, the mean and variance of the error are perfectly known in these cases and are listed in Table 1.4. Thanks to its independence from the signal, the quantization error can be seen as an additive uniformly distributed white noise, as depicted in Figure 1.6. This representation of quantization error is the basis of the FxP representation error analysis discussed in the next chapter.

Figure 1.6 - Representation of FxP quantization error as an additive noise

The previous paragraphs describe the properties of the quantization of a continuous signal. However, in FxP arithmetic, it is also necessary to reduce the bit width along computations to avoid a substantial growth of the necessary resources. Indeed, as discussed in Section 1.3.4, an integer multiplication needs to produce an output whose width is equal to the sum of its input widths to get a perfectly accurate computation. However, the LSBs of the result are often not significant enough to be kept, and so a reduction of the data width must be performed to save area, energy and time. Therefore, it is often necessary to reduce a FxP number $x_1$ with a $d_1$-bit fractional part to a shorter $d_2$-bit fractional part, dropping $d_b = d_1 - d_2$ bits. The resulting error distribution is still uniform for the RD, RU and RN rounding methods, but has a different bias. Moreover, for the RN method, this bias is not 0, and depends on the direction of rounding chosen when the MSB of the dropped part is 1 and all the other dropped bits are 0. This can lead to divergences when accumulating a large number of computations. To overcome this possible deviation, Convergent Rounding to Nearest (CRN) was proposed in [START_REF] Lapsley | DSP Processor Fundamentals: Architectures and Features[END_REF]. When the special case cited above is met, the rounding is performed once towards $+\infty$, and once towards $-\infty$. This way, the quantization error distribution gets centered on zero. As the highest error occurs for this special in-between case, distributing this error alternately between the RD and RU paradigms balances the error, halving the highest negative error and moving its impact to a new spike that removes the bias, as shown in Figure 1.7b. However, this compensation slightly increases the variance of the quantization error. The values of the mean $\mu_e$ and variance $\sigma_e^2$ of the RD, RU, RN and CRN rounding methods are given in Table 1.4.

Table 1.4 - Mean and variance of the quantization error for the different rounding methods (continuous signal, and discrete signal with $d_b$ dropped bits)

Method | $\mu_e$ (cont.) | $\sigma_e^2$ (cont.) | $\mu_e$ (discrete) | $\sigma_e^2$ (discrete)
RD | $-\frac{q}{2}$ | $\frac{q^2}{12}$ | $-\frac{q}{2}\left(1 - 2^{-d_b}\right)$ | $\frac{q^2}{12}\left(1 - 2^{-2d_b}\right)$
RU | $+\frac{q}{2}$ | $\frac{q^2}{12}$ | $+\frac{q}{2}\left(1 - 2^{-d_b}\right)$ | $\frac{q^2}{12}\left(1 - 2^{-2d_b}\right)$
RN | $0$ | $\frac{q^2}{12}$ | $\frac{q}{2} \cdot 2^{-d_b}$ | $\frac{q^2}{12}\left(1 - 2^{-2d_b}\right)$
CRN | $0$ | $\frac{q^2}{12}$ | $0$ | $\frac{q^2}{12}\left(1 + 2^{-2d_b+1}\right)$

In hardware, the rounding decision for dropping the $d_b$ low-order bits of $x_1$ is taken from two signals:

• the round bit, which is the value of the bit of $x_1$ indexed by $d_1 - d_2 - 1$,
• and the sticky bit, which is a logical OR applied to the bits $\{0, \ldots, d_1 - d_2 - 2\}$ of $x_1$.

The extraction of the round and sticky bits is illustrated in Figure 1.8.
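The round/sticky mechanism and the CRN alternation can be summarized by the following sketch, which drops the $d_b$ low-order bits of an integer; modeling the alternation with a static flag is an illustrative shortcut for what would be a state bit in hardware.

```cpp
#include <cstdint>
#include <cstdio>

// Drop db LSBs of v with rounding to nearest. On the exact tie
// (round = 1, sticky = 0), CRN alternates the direction to cancel the bias.
int64_t round_to_nearest(int64_t v, int db, bool crn) {
    static bool up_next = true;                           // CRN alternation state
    int64_t kept  = v >> db;
    bool round_b  = (v >> (db - 1)) & 1;                  // MSB of the dropped part
    bool sticky_b = (v & ((1ll << (db - 1)) - 1)) != 0;   // OR of the other dropped bits
    if (round_b && (sticky_b || !crn || up_next)) kept += 1;
    if (crn && round_b && !sticky_b) up_next = !up_next;  // flip on ties only
    return kept;
}

int main() {
    const int vals[4] = {37, 44, 44, 43};                 // db = 3, i.e. divide by 8
    for (int v : vals)
        std::printf("%d -> %lld\n", v,
                    (long long)round_to_nearest(v, 3, true));  // 5, 6, 5, 5
    return 0;
}
```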
The horizontal stripes in $x_1$ correspond to the round bit, and the tilted stripes to the bits involved in the computation of the sticky bit. Here, both are worth 1, and the rounding logic outputs 1, which can correspond to RU, RN or CRN. The possible functions performed by the different rounding methods which can be implemented in the rounding logic block of Figure 1.8 are listed in Table 1.5. It is important to notice that for the RD method, the value of the round and sticky bits has no influence on the rounding direction. For the RN method, if the default rounding direction is towards $+\infty$ when the round/sticky bits are 1/0, then the value of the sticky bit does not influence the rounded result; if it is towards $-\infty$, the sticky bit has to be considered. Therefore, some hardware simplifications can be performed for the RD and RN (up case) methods, by just dropping the unused bits.

Table 1.5 - Rounding direction depending on the value of the round and sticky bits ($-$ denotes rounding towards $-\infty$, $+$ towards $+\infty$)

round | sticky | RD | RU | RN | CRN
0 | 0 | $-$ | $-$ | $-$ | $-$
0 | 1 | $-$ | $+$ | $-$ | $-$
1 | 0 | $-$ | $+$ | always $-$ or always $+$ | alternately $-$ and $+$
1 | 1 | $-$ | $+$ | $+$ | $+$

Addition and Subtraction in Fixed-Point Representation

The addition/subtraction in FxP representation is much simpler than that of FlP described in Section 1.2.2. Indeed, FxP arithmetic is entirely based on integer arithmetic. Adding two FxP numbers can be performed in 3 steps:

1. Aligning the points of the two numbers, by shifting one of them (software style) or by driving their bits to the right inputs of the integer adder (hardware design style).
2. Adding the inputs using an integer adder.
3. Quantizing the output using the methods of Section 1.3.2.

In this section, we will consider the addition (respectively subtraction) of two signed FxP numbers x and y with total bit widths of resp. $n_x$ and $n_y$, fractional part widths of $d_x$ and $d_y$ and integer part widths of $m_x = n_x - d_x$ and $m_y = n_y - d_y$. In the rest of this chapter, an $n_x$-bit FxP number x with an $m_x$-bit integer part will be noted $x(n_x, m_x)$. To avoid overflows or underflows, the output $z(n_z, m_z)$ of the addition/subtraction of x and y must respect the following equation:

$m_z = \max(m_x, m_y) + 1.$ (1.2)

Moreover, an accurate addition/subtraction must also respect:

$d_z = \max(d_x, d_y).$ (1.3)

The final process for the FxP addition of x(6, 2) and y(8, 3) returning z(9, 4), then quantized to $z_q$(6, 4), is depicted in Figure 1.9 (a functional sketch of the whole process is given below). For the subtraction x − y, the classical way to operate is to compute $y' = -y$ before performing the addition $x + y'$. In two's complement representation, this is equivalent to performing $y' = \bar{y} + 1$, where $\bar{y}$ is the bitwise inverse of y. The inversion is fast and requires little circuitry, and adding 1 can be performed during the addition step of Figure 1.9 (after shifting) to avoid performing one more addition for the negation.

As a first conclusion about FxP addition/subtraction, the cost of the operation mostly depends on two parameters: the cost of shifting the input(s), and the efficiency of the integer addition. From this point, we will focus on the integer addition, which represents the majority of this cost. The integer addition can be built from the composition of 1-bit additions, each taking three inputs (the input bits $x_i$, $y_i$ and the input carry $c_i$) and returning two outputs (the output sum bit $s_i$ and the output carry $c_{i+1}$). This function is realized by the Full Adder (FA) function of Figure 1.10, whose truth table is described in Table 1.6.
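As announced, the three steps (alignment, addition, quantization) are modeled below on the running example x(6, 2) + y(8, 3), using plain integers to hold the FxP words; this is a functional illustration, not the HLS code used later in the thesis.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // x(6,2): d_x = 4, value 1.6875  = 27 * 2^-4
    // y(8,3): d_y = 5, value 2.15625 = 69 * 2^-5
    int32_t x_raw = 27; int dx = 4;
    int32_t y_raw = 69; int dy = 5;

    int dz = (dx > dy) ? dx : dy;                                 // d_z = max(d_x, d_y)
    int32_t z_raw  = (x_raw << (dz - dx)) + (y_raw << (dz - dy)); // align, then add
    int32_t zq_raw = z_raw >> 3;            // quantize z(9,4) to z_q(6,4): drop 3 LSBs (RD)

    std::printf("z   = %g\n", z_raw  / (double)(1 << dz));        // 3.84375
    std::printf("z_q = %g\n", zq_raw / (double)(1 << (dz - 3)));  // 3.75
    return 0;
}
```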
The simplest addition structure is the Ripple Carry Adder (RCA), built by the direct composition of FAs. Each FA of rank i takes the input bits and the input carry of rank i. It returns the output bit of rank i and the output carry of rank i + 1, which is connected to the following full adder, resulting in the structure of Figure 1.11. This is theoretically the smallest possible area for an addition, with a complexity of O(n). However, this small area is counterbalanced by a high delay, also in O(n), while the theoretical optimum is O(log n) [START_REF] Koren | Computer Arithmetic Algorithms[END_REF]. Therefore, the RCA is only implemented when the area budget is critical.

Figure 1.9 - Fixed-point addition process of x(6, 2) and y(8, 3) returning z(9, 4) quantized to $z_q$(6, 4)
Figure 1.10 - One-bit addition function - Full adder (or compressor 3:2)

Table 1.6 - Full adder truth table

$x_i$ | $y_i$ | $c_i$ | $c_{i+1}$ | $s_i$
0 | 0 | 0 | 0 | 0
0 | 0 | 1 | 0 | 1
0 | 1 | 0 | 0 | 1
0 | 1 | 1 | 1 | 0
1 | 0 | 0 | 0 | 1
1 | 0 | 1 | 1 | 0
1 | 1 | 0 | 1 | 0
1 | 1 | 1 | 1 | 1

A classical improvement issued from the RCA is the Carry-Select Adder (CSLA). The CSLA is composed of elements containing two parallel RCA structures, one taking 0 as input carry and the other taking 1. When the actual value of the input carry is known, the correct result is selected. This pre-computation (or prediction) of the possible output values increases speed, which can reach at best a complexity of O(√n) when the variable widths of the basic elements are optimal, while more than doubling the area compared to a classical RCA. An 8-bit version of the CSLA is depicted in Figure 1.12, with basic blocks of size 2-3-3 from LSB to MSB. The resulting critical path is 3 FAs and 3 2-to-1 multiplexers, from input carry to output carry, instead of 8 FAs for the RCA. It is important to note that the CSLA structure can be applied to any addition structure such as the ones described below, and not only the RCA, which can lead to better speed performance.

As already stated, the longest path in the adder starts from the input carry (or the input LSBs) and ends at the output carry (or output MSB). Therefore, propagating the carry across the operator as fast as possible is a major stake as long as high speed is required. It can be done by duplicating hardware as in the CSLA, but it can also be achieved by prioritizing the carry propagation. In the FA design, the output is computed together with the output carry. In the Carry-Lookahead Adder (CLA) design, carry propagation is performed by an independent circuit, so the carries at the MSB positions do not need to wait for all outputs of inferior rank to be computed to be available. For this, two particular values need to be calculated at each bit position: the generate and propagate bits $g_i$ and $p_i$, defined by:

$p_i = x_i \oplus y_i, \quad g_i = x_i \wedge y_i.$ (1.4)

Using these values, obtained with very small circuitry, the carry of rank i is extracted from the carry of the previous rank by the following relation:

$c_i = g_{i-1} \vee (c_{i-1} \wedge p_{i-1}).$ (1.5)

Then, by recurrence, any carry signal can be deduced knowing any carry signal of inferior rank and all propagate and generate bits of intermediate rank. The addition output bit $z_i$ is then simply deduced by the following relation:

$z_i = p_i \oplus c_i.$ (1.6)
For instance, knowing $c_i$, the four following carries are defined by the equations:

$c_{i+1} = g_i \vee (c_i \wedge p_i),$
$c_{i+2} = g_{i+1} \vee (g_i \wedge p_{i+1}) \vee (c_i \wedge p_i \wedge p_{i+1}),$
$c_{i+3} = g_{i+2} \vee (g_{i+1} \wedge p_{i+2}) \vee (g_i \wedge p_{i+1} \wedge p_{i+2}) \vee (c_i \wedge p_i \wedge p_{i+1} \wedge p_{i+2}),$
$c_{i+4} = g_{i+3} \vee (g_{i+2} \wedge p_{i+3}) \vee (g_{i+1} \wedge p_{i+2} \wedge p_{i+3}) \vee (g_i \wedge p_{i+1} \wedge p_{i+2} \wedge p_{i+3}) \vee (c_i \wedge p_i \wedge p_{i+1} \wedge p_{i+2} \wedge p_{i+3}).$ (1.7)

The direct translation of these equations into hardware leads to faster carry generation compared to the RCA, but also to an important area overhead. However, parallel prefix adders, which are based on the CLA paradigm, show better area performance. The main idea is to propagate the g and p bits to get equivalent couples $(p', g')$, which makes the computation of any $c_i$ in Equation 1.5 independent of $c_{i-1}$, so we finally get:

$c_i = g'_{i-1}, \quad z_i = p'_i \oplus c_i = p'_i \oplus g'_{i-1},$ (1.8)

and so all outputs can be computed in parallel. This equivalent representation is obtained thanks to a series of 4:2 compressors, extracting an equivalent couple $(p'_i, g'_i)$ from the couples $(p_i, g_i)$ and $(p_j, g_j)$, where j < i, performing:

$p'_i = p_j \wedge p_i, \quad g'_i = g_i \vee (g_j \wedge p_i).$ (1.9)

The details and mathematical proof of this method are available in [START_REF] Parhami | Computer Arithmetic[END_REF]. The use of this (p, g) 4:2 compressor has led to the creation of several parallel prefix adders, such as the Brent-Kung Adder (BKA) and the Kogge-Stone Adder (KSA) [START_REF] Kogge | A parallel algorithm for the efficient solution of a general class of recurrence equations[END_REF], respectively depicted for their 16-bit versions in Figures 1.13 and 1.14. A word-level software model of this prefix scheme is given below.
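As announced, Equations 1.4, 1.8 and 1.9 translate into the following word-level model, where the (p, g) compressors are applied in a Kogge-Stone-like doubling pattern; it is a behavioral sketch of the prefix scheme, not a gate-level netlist.

```cpp
#include <cstdint>
#include <cstdio>

// Behavioral parallel-prefix (Kogge-Stone-style) addition on 32-bit words.
uint32_t prefix_add(uint32_t x, uint32_t y) {
    uint32_t p = x ^ y;              // p_i = x_i XOR y_i  (Equation 1.4)
    uint32_t g = x & y;              // g_i = x_i AND y_i
    uint32_t P = p, G = g;
    for (int dist = 1; dist < 32; dist <<= 1) {
        // 4:2 compression of (p_i, g_i) with (p_j, g_j), j = i - dist (Equation 1.9)
        G = G | (P & (G << dist));
        P = P & (P << dist);
    }
    uint32_t c = G << 1;             // c_i = g'_{i-1}  (Equation 1.8), with c_0 = 0
    return p ^ c;                    // z_i = p_i XOR c_i
}

int main() {
    uint32_t x = 123456789u, y = 987654321u;
    std::printf("%u (exact %u)\n", prefix_add(x, y), x + y);
    return 0;
}
```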
As a conclusion about the integer adders used for FxP addition, several addition structures exist. This section only presents the main principles; many other instances based on these principles exist, several of which are described in [START_REF] Parhami | Computer Arithmetic[END_REF]. What is important to observe is that integer adders have a minimum area complexity of O(n) and a minimum time complexity of O(log n). However, both these complexities cannot be achieved by the same structure. Reaching the minimum time complexity implies parallelism and thus a larger area, whereas the smallest area implies a longer critical path and thus a higher delay. The carry-save addition method is presented in the next section, in the context of summand grid reduction in multiplication, which is why it was not handled in this section.

Multiplication in Fixed-Point Representation

As for addition, FxP multiplication is performed by integer multiplication. Unlike addition, no alignment of the inputs is necessary. Thus, for the accurate multiplication of n-bit inputs, a 2n-bit result is returned. Therefore, compared to addition where only 1 more bit is necessary, multiplication is a potential source of high resource needs downstream, which definitely justifies the necessity of quantizing numbers along computations, as presented in Section 1.3.2. Integer multiplication can be split into two phases: generation of the summand grid, and summand grid addition, leading to the scheme shown in Figure 1.15. Compared to higher bases, binary multiplication is much simpler. Indeed, only two values are possible for each summand, 0 or the value of the multiplicand, which leads to major simplifications.

Figure 1.15 - General integer multiplication principle applied on 6-bit inputs
Figure 1.16 - General visualization of a 6-bit multiplication summand grid

Indeed, the generation of each line of the summand grid of an n-bit multiplier can be performed by n 2-to-1 multiplexers selecting either the input bits $x_i$ or 0, controlled by the bit $y_j$ corresponding to the current line. Therefore, the most resource-expensive part of the multiplier is the carry-save reduction of the summand grid down to a final addition. The summand grid can be visualized as an n/2-stage triangle, as shown in Figure 1.16. The reduction of the tree is achieved by several stages of FAs and Half Adders (HA), until only two lines are left. A HA can be seen as a simplified FA (see Section 1.3.3) with only two inputs instead of three. A HA is built with only two logic gates, instead of five for a FA. FAs perform a 3:2 compression, illustrated by Figure 1.17, and HAs a 2:2 transformation, as in Figure 1.18.

Figure 1.17 - Full adder compression - requires 5 gates

Thus, the complexity of the multiplier in terms of speed and area depends on how the summand grid reduction is organized. Different classical methods to build reduction trees exist in the literature, the most famous ones being the Wallace tree [START_REF] Wallace | A suggestion for a fast multiplier[END_REF] and the Dadda tree [START_REF] Dadda | Some schemes for parallel multipliers[END_REF]. The Wallace tree reduces the partial product bits as early as possible, whereas the Dadda tree reduces them as late as possible. This leads to two different kinds of architectures, the Wallace tree being the fastest, whereas the Dadda tree implementation is smaller. Figure 1.19 shows the difference between Wallace and Dadda trees in a 5-bit multiplier context. The Wallace tree requires 9 FAs and 3 HAs (51 gates) before the final 8-bit addition, whereas the Dadda tree needs 8 FAs and 4 HAs (48 gates).

Computing multiplications based on partial product reduction such as with Wallace or Dadda trees is a good compromise between speed and area. Indeed, only a few reduction stages are necessary before the final addition (2 stages for 8-bit multiplication, 6 stages for 16-bit and 8 stages for 32-bit), which represents an acceptable overhead. Tree multipliers have a delay complexity in O(log n).

A particular sort of tree multiplier is the array multiplier. It is made with a one-sided Carry-Save Adder (CSA) reduction tree (less efficient than distributed trees), which makes it slower (O(n)) and theoretically larger than the previously discussed multipliers, and the final computation is performed by an RCA, which is the slowest possible adder, as discussed in Section 1.3.3. However, it is very interesting in VLSI design thanks to its regularity, which implies short wires ensuring a compact layout. This regularity also implies fine-grained pipelining possibilities. Figure 1.20 is a 6-bit signed array multiplier. The signed version is obtained using the modified Baugh-Wooley two's-complement technique, which consists in inverting the MSBs of all partial products except the last one, which has all its bits inverted except the MSB. In Figure 1.20, AFA (resp. AHA) corresponds to a FA (resp. HA) whose inputs $x_i$ and $y_i$ are combined by an AND cell, and NFA corresponds to a FA whose inputs $x_i$ and $y_i$ are combined by a NAND cell.

The previously discussed multipliers are fast, but their area is important. E.g., the array multiplier area complexity is $O(n^2)$, whereas it is possible to reach an O(n) complexity with a sequential multiplier such as the one presented in Figure 1.21.
In such a sequential (shift-and-add) multiplier, one partial product is generated per cycle, selected between 0 and the multiplicand by a multiplexer; the output of the multiplexer is then accumulated in the MSB half of another register, shifting one bit right at each new addition. Therefore, the whole computation is only performed by one adder, which can be one of the several presented in Section 1.3.3, chosen for either delay or area performance.

In this section, several ways to manage the computation time or area of the summand grid reduction were presented. However, improvements can be done upstream in order to reduce the initial size of the summand grid. The most classical and efficient way is to apply modified Booth encoding on y. In Radix-4 Booth encoding, y is encoded so that only $\lfloor n/2 \rfloor + 1$ lines of the summand grid are generated instead of n. However, this implies circuitry overhead for the encoding. Indeed, the n-bit y operand needs to be transformed into $\lfloor n/2 \rfloor + 1$ actions to perform on x for partial product generation. These actions consist in a multiplication of x by an element of the set {−2, −1, 0, 1, 2}, which translates into 0 or 1 left shift during partial product generation, as well as a possible negation. The corresponding action is selected by the rules of Table 1.7 (a functional sketch applying these rules is given at the end of this section).

Table 1.7 - Radix-4 Booth encoding rules

$y_i$ | $y_{i-1}$ | $y_{i-2}$ | $Y_{\lfloor i/2 \rfloor}$ | Operation
0 | 0 | 0 | 0 | +0
0 | 0 | 1 | 1 | +x
0 | 1 | 0 | 1 | +x
0 | 1 | 1 | 2 | +2x
1 | 0 | 0 | −2 | −2x
1 | 0 | 1 | −1 | −x
1 | 1 | 0 | −1 | −x
1 | 1 | 1 | 0 | +0

Higher-radix encodings such as Radix-8 Booth encoding exist, but increasing the radix leads to much higher encoding complexity and a high cost due to a larger number of shifts and denser wiring of the input bits $x_i$, which tends to cancel the benefits of the reduction of the number of partial products. However, Radix-4 or Radix-8 Booth encoding techniques tend to be faster than classical Radix-2 multiplication, especially for large bit widths, which imply a large summand grid.

In this section, several multiplication techniques and optimizations were presented. Their efficiency strongly depends on the techniques used to generate and reduce the summand grid. Generally, for an n-bit multiplication, tree multiplication is O(log n) fast, with or without Booth encoding, whereas the array multiplier is O(n) fast and the sequential multiplier O(n log n) fast. However, the sequential multiplier has an area in O(n) and the array multiplier in $O(n^2)$, whereas Wallace or Dadda tree multipliers have an intermediate area. The array multiplier, despite being the largest and not the fastest, offers a compact layout and fine-grained pipelining possibilities. The following section presents approximate operators which try to overcome the complexity limitations of addition and multiplication, often taking the previously presented operators as a working basis.
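As announced above, here is a functional model of Radix-4 Booth recoding: each 3-bit window of y selects a multiple of x from Table 1.7, producing $\lfloor n/2 \rfloor + 1$ partial products. It mirrors the encoding rules, not an optimized circuit.

```cpp
#include <cstdint>
#include <cstdio>

// Radix-4 Booth multiplication of two 16-bit signed operands (functional model).
int64_t booth_mul(int16_t x, int16_t y) {
    int64_t acc = 0;
    int32_t yext = (int32_t)y << 1;           // append the implicit y_{-1} = 0
    for (int i = 0; i <= 16; i += 2) {        // floor(n/2) + 1 partial products
        int window = (yext >> i) & 7;         // 3-bit window of y
        int sel;                              // multiple of x, from Table 1.7
        switch (window) {
            case 0: case 7: sel =  0; break;  // 000, 111 -> +0
            case 1: case 2: sel = +1; break;  // 001, 010 -> +x
            case 3:         sel = +2; break;  // 011      -> +2x
            case 4:         sel = -2; break;  // 100      -> -2x
            default:        sel = -1; break;  // 101, 110 -> -x
        }
        acc += ((int64_t)sel * x) << i;       // partial product, weight 2^i = 4^(i/2)
    }
    return acc;
}

int main() {
    std::printf("%lld (exact %lld)\n", (long long)booth_mul(-1234, 567),
                (long long)(-1234 * 567));
    return 0;
}
```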
Relaxing Accuracy Using Approximate Operators

In Section 1.3, hardware integer addition and multiplication were presented in the context of FxP arithmetic. These operators are accurate, meaning they always return a mathematically correct value. However, many applications do not need calculations to be perfectly accurate, since a degraded output can be tolerated. This is why, over the past decades, many researchers have tried to break the performance limitations of accurate arithmetic by proposing a large number of approximate operators. Section 1.4.1 presents a collection of previously published approximate adders and Section 1.4.2 presents some multipliers.

Approximate Integer Addition

While adders are the most basic arithmetic operators, they are intensively used in all applications. Moreover, they are also directly used in some multiplier architectures and in many implementations of more complex functions, such as exponentials, logarithms, functions using the CORDIC algorithm, etc. Improving their latency, area or power consumption is therefore a major stake for the whole field of arithmetic operator design. For an n-bit adder, the optimal delay complexity is O(log n) and the optimal area complexity O(n) [START_REF] Koren | Computer Arithmetic Algorithms[END_REF]. It is thus impossible to find perfectly accurate adders getting below these complexities. To break these barriers, approximate adders were created. This section presents a non-exhaustive list of them.

As stated in Section 1.3.3, the critical path of an n-bit addition is located between the input LSBs (considering the adder has no input carry) and the $z_n$ output, which can be considered as the adder's output carry. Therefore, the best source of improvements in addition is the ability to break the path of this critical carry chain. Indeed, most of the time during an addition, the whole carry chain is unused, and yet it limits the frequency of the adder and wastes energy in glitches. It is shown in [START_REF] Schilling | The longest run of heads[END_REF] that, for a series of n coin tosses, the number of outcomes in which the longest run of heads does not exceed x is given by the series $A_n(x)$, defined by:

$A_n(x) = \begin{cases} 2^n & \text{if } n \le x, \\ \sum_{0 \le j \le x} A_{n-1-j}(x) & \text{otherwise.} \end{cases}$ (1.12)

Using this series, the longest carry chain not exceeded with a probability of respectively 99% and 99.99% is given in Table 1.8. E.g., for a 256-bit adder, the longest propagation of a carry will be 20 bits, with only a 0.01% chance of being longer. It can also be noticed that the probability for the longest carry chain not to exceed a given x grows fast with x, as illustrated by Figure 1.23, which shows the probability for the longest carry chain of a 64-bit adder to be less than or equal to x, as a function of x. This explicitly shows that cautiously breaking carry chains leaves only a small chance for the result to be false. However, breaking a long carry chain is likely to cause a strongly erroneous output, since an error with the weight of the MSBs of the broken chain can be produced. A balance must therefore be found between the occurrence of errors and their amplitude. Given an adder of width n whose carry chains are limited to a length x, the probability of having a correct result is

$P(n, x) = \left(1 - \frac{1}{2^{x+2}}\right)^{n-x-1},$ (1.13)

which also grows fast with x.

Sample Adder, Almost Correct Adder and Variable Latency Speculative Adder

In [START_REF] Lu | Speeding up processing with approximation circuits[END_REF], S.-L. Lu proposes the Sample Adder, which will be denoted as Lu's Parallel Adder (LPA) from this point. Based on the previous remarks about the potential of breaking the carry chains, an LPA of width n is parameterized by k, the maximum width of the transmitted carry chain. Based on the structure of a parallel prefix adder, the LPA is made of k rows of k-bit carry-chain computing blocks (or fewer than k bits for boundary blocks) applied on propagate and generate circuits. The output carry of each block is transmitted to the corresponding rank, where the corresponding sum bit is computed. The example of a 16-bit LPA with k = 4 is represented in Figure 1.24. First, a conversion of $(x_i, y_i)$ to (p, g) is performed, and the carry values are computed in parallel. Finally, the input bits $x_i$, $y_i$ and the computed carries $c_i$ are added to get the output sum bit at each position.
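A functional model of the carry-chain limitation shared by the LPA and the ACA introduced below: each output bit recomputes its incoming carry over the k positions below it only, so any carry older than k bits is silently dropped. This sketch reproduces the function, not either paper's circuit structure.

```cpp
#include <cstdint>
#include <cstdio>

// Functional model of an n-bit LPA/ACA: the carry entering position i is
// recomputed over a window of at most k positions below i.
uint32_t limited_carry_add(uint32_t x, uint32_t y, int n, int k) {
    uint32_t z = 0;
    for (int i = 0; i < n; i++) {
        int c = 0;                                   // carry assumed dead beyond k bits
        for (int j = (i - k < 0) ? 0 : i - k; j < i; j++)
            c = (((x >> j) & 1) + ((y >> j) & 1) + c) >= 2;  // ripple inside the window
        z |= ((((x >> i) ^ (y >> i)) & 1) ^ c) << i;
    }
    return z;
}

int main() {
    uint32_t x = 0x00FF, y = 0x0001;                 // worst case: 8-bit carry chain
    const int ks[3] = {2, 4, 8};
    for (int k : ks)
        std::printf("k=%d: 0x%04X (exact 0x%04X)\n", k,
                    limited_carry_add(x, y, 16, k), x + y);
    return 0;
}
```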
The structure of the LPA makes its delay constant for a given k, independently of n. The author claims the adder to be faster and smaller than the KSA and the Han-Carlson Adder (HCA), with a constant delay complexity in O(k) and an area complexity in O(n).

A functionally similar adder, denoted as the Almost Correct Adder (ACA), is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. Indeed, the same principle of limiting the transmitted carry chain to the same k for each position is used. However, the implementation is different. To understand it, the kill bit must be added to the generate-propagate scheme, giving Equations 1.14. The kill signal is set when both inputs are 0. When this situation occurs, a hypothetical input carry cannot be transmitted.

$p_i = x_i \oplus y_i, \quad g_i = x_i \wedge y_i, \quad k_i = (\neg x_i) \wedge (\neg y_i).$ (1.14)

Figure 1.24 - 16-bit Sample Adder (LPA) with k = 4 - The red-striped square converts $(x_i, y_i)$ to (p, g) (Equation 1.4) and the green-striped square converts the sum bit $x_i \oplus y_i$ and $c_i$ to the output $z_i$.

Considering $k_i$, we get:

$c_i = \begin{cases} 0 & \text{if } k_i = 1, \\ 1 & \text{if } g_i = 1, \\ c_{i-1} & \text{otherwise } (p_i = 1), \end{cases} \qquad s_i = x_i \oplus y_i \oplus c_{i-1}.$ (1.15)

Using Equations 1.15, a matrix recursion can be found to express $c_i$ as a function of any of its carry predecessors:

$\begin{pmatrix} c_i \\ 1 \end{pmatrix} = \begin{pmatrix} p_i & g_i \\ 0 & 1 \end{pmatrix} \begin{pmatrix} c_{i-1} \\ 1 \end{pmatrix} = M_i \begin{pmatrix} c_{i-1} \\ 1 \end{pmatrix},$

and by recursion,

$\begin{pmatrix} c_i \\ 1 \end{pmatrix} = M_i M_{i-1} \cdots M_{i-k+2} M_{i-k+1} \begin{pmatrix} c_{i-k} \\ 1 \end{pmatrix}.$ (1.16)

Therefore, knowing the adder output carry $c_{n+1}$ implies propagating, generating or killing carries from the first carry $c_0$, performing n − 1 simple binary matrix products (the performed operations are only logical ORs and ANDs). The ACA proposes to reduce this chain by limiting this series of matrix products to a given number, taking into account the low probability of the existence of a long carry chain. For instance, a 32-bit operator with a maximum considered carry chain of 8 taken into account for each output bit calculation will produce incorrect results only for cases where the longest carry chain should be greater than 8, which occurs in only 2.4% of cases.

To summarize, an n-bit ACA with an x-bit restricted carry chain will have to consider n − x + 1 carry chains of size x instead of a single (n − 1)-bit carry chain. However, successive x-bit carry chains have x − 1 bits overlapping with their direct neighbor carry chains. As a consequence, an organization for a fast calculation of the carry-chain matrices $M_{i:i-x+1}$ is possible, as shown in Figure 1.25, where it is applied to a 16-bit ACA with 6-bit carry chains. $M_{i:i-x+1}$ is the matrix product $\prod_{j=0}^{x-1} M_{i-j}$. By construction, an n-bit ACA with a k-bit considered carry chain has an area complexity of O(n log k) and a time complexity of O(log k). In [START_REF] Schilling | The longest run of heads[END_REF], it is shown that the expectation of the longest chain of ones in an n-bit sequence is $\log_2 n - 2/3$, which in our case makes k proportional to log n for equal performance. Therefore, the final complexity of an n-bit ACA is O(n log log n) in space, which is near-linear even for relatively high values of n, and O(log log n) in time. Hence, the theoretical space complexity limit for an accurate adder is nearly reached, whereas the time complexity is exponentially beaten, so the ACA can be considered as a fast approximate adder.
As previously mentioned, the LPA and the ACA produce the same function with different hardware layouts. Figure 1.26 sums up the effect of approximation on the output of an 8-bit LPA or ACA with k = 2. Each colored rectangle covers the inputs considered in the computation of the corresponding output bit(s). Most approximate adders can have their function fully described by this type of figure, except for particular ones such as the ETAI described in Section 1.4.1.2 or configurable adders such as the AC2A discussed in Section 1.4.1.3.

Figure 1.27 shows the error maps of an 8-bit ACA for k = 2 and k = 4, in log scale. The error map is the amplitude of the error given all possible combinations of the inputs x (input 1) and y (input 2). The white zones correspond to the inputs leading to no error, i.e. the inputs implying a carry chain shorter than k. As expected, k = 4 leads to scarcer errors than k = 2. The error map is here represented using FxP-represented inputs with a 1-bit integer part. The theoretical highest possible error is therefore equal to $2 - q = 2 - 2^{1-n}$. For the LPA and the ACA, it is interesting to see that the error map seems to be fractal, which shows the structural difference between the nature of their error and the uniform nature of the quantization noise issued by FxP.

As pointed out, the ACA adder provides high benefits in terms of delay with a small area sacrifice compared to classical accurate adders. Moreover, it is possible to choose the depth of the approximation by selecting the length of the maximal considered carry chain for each output bit. As reducing this length is a source of error, an architecture going against the first interest of approximate computing is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF], the Variable Latency Speculative Adder (VLSA). This adder is totally accurate, but based on the ACA. The method consists in the following steps:

1. calculate the sum using an (n, k) ACA, choosing k such that a relatively low number of errors occurs,
2. detect if an error has occurred, i.e. if a carry chain is longer than k, and
3. if an error occurred, correct the sum in order to obtain an exact result.

This method, which provides a correct adder with a latency varying from one unit to two, is only interesting if the combined cost of the ACA and the error recovery system does not exceed that of an equivalent state-of-the-art adder. An error recovery system which reuses a maximum of the ACA structure to detect and correct the error is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. The error detection mechanism has to consider all chains of length k + 1 instead of k for the ACA. This leads to an O(log n) time complexity, which is higher than the (n, k) ACA time complexity, but still more efficient than a traditional adder by two thirds according to the author. The correction system is an n/k-bit Carry Look-Ahead (CLA) block, which returns the carries that were missed by the ACA because of a too long carry chain. This mechanism has about the same time-efficiency as the corresponding ACA, so the critical path will be the error detection mechanism. The schematic of the resulting architecture is given in Figure 1.28. Tests comparing the ACA, the ACA with error detection, the ACA with error detection and recovery, and a traditional fast adder provided by DesignWare are provided in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF].
The ACA adders are optimally sized for an accuracy of 99.99%, following the values of Table 1.8. Results are shown in Figure 1.29. For the ACA with no error detection and recovery, we can see a clear benefit in delay compared to the traditional adder. They are both near-linear, but the proportionality coefficient is much smaller for the ACA. In terms of area, the ACA is about 25% smaller than the traditional adder. The ACA with error recovery is nearly as fast as a traditional fast adder. However, the error correction only occurs for 0.01% of computations. The average delay of the corrected ACA is therefore close to 0.9999 times the error detection delay, which is about 2/3 of the traditional adder's according to the first graph of Figure 1.29. In terms of area, the ACA and its error recovery are together about 1.5× larger than an optimal exact adder, but the complexity remains linear.

To conclude about the ACA:

• The ACA takes advantage of the scarcity of deep carry propagation.
• It produces scarce errors, depending on its design, but with a potentially high amplitude, especially when a carry chain is incompletely considered on a bit of high significance.
• It has a near-linear delay even for high values of n, with a very low proportionality coefficient compared to a fast adder.
• It covers a 25% smaller area compared to this same fast adder.

The error detection and recovery principle of [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF] can be applied to any other approximate operator, and is a very interesting accurate structure for systems which can allow a variable number of cycles per operation. In this context, the VLSA can be seen as a 2-cycle addition which can nearly always be bypassed in 1 cycle.

Error-Tolerant Adders

Between 2009 and 2010, Zhu proposed four approximate adders:

• the Error-Tolerant Adder type I (ETAI) [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF],
• the Error-Tolerant Adder type II (ETAII) [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF],
• the Error-Tolerant Adder type II Modified (ETAIIM) [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF], and
• the Error-Tolerant Adder type IV (ETAIV) [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF].

In [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF], the first Error-Tolerant Adder (ETA) is presented, referred to as ETAI in the subsequent iterations [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF][START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]. Its principle is simple: the most significant part (MSB side) of the adder is an accurate adder, and the least significant part (LSB side) is approximated with Algorithm 1. The inputs $x_i$ and $y_i$ of the approximate part are read from their MSB. When both are equal to 1, the calculations are stopped and all bits from rank i to the LSB are set to 1. This mechanism is depicted in Figure 1.30. The ETAI is made for fast approximation. Its goal is to round up the result as fast as possible when a generate signal is met in the approximate part, i.e. when both input bits are 1. In this way, the carry which should have been generated towards the MSB is compensated at best by maximizing the lower-weight bits that have not been treated yet, that is to say all the bits from the carry generation down to the LSB.
The behavior of the approximate part is summarized by Algorithm 1.

Algorithm 1 - Approximate (LSB) part of the ETAI
i ← m − 1; two_ones ← False
while i ≥ 0 and not two_ones do
    if $x_i \wedge y_i = 1$ then
        two_ones ← True
        $z_i$ ← 1
    else
        $z_i$ ← $x_i \vee y_i$
    end if
    i ← i − 1
end while
while i ≥ 0 do
    $z_i$ ← 1
    i ← i − 1
end while

The propagation of these 1s to the right is performed by a control block which was designed to propagate the information as fast as possible when the two-1s case occurs, using a control signal. This block is composed of two types of sub-blocks:

• Control Signal Generating Cells of type I (CSGCI), which raise the control signal when both input bits are 1 and propagate the incoming control signal (see Table 1.9), and
• Control Signal Generating Cells of type II (CSGCII), which are similar to the CSGCI, but take in addition as input another control signal of rank i + k, where k is fixed at design time.

Table 1.9 - Logical equations of CSGC types I and II

Block | Output value $CTL_i$
CSGCI | $CTL_{i+1} \vee (x_i \wedge y_i)$
CSGCII | $CTL_{i+1} \vee CTL_{i+k} \vee (x_i \wedge y_i)$

The CSGCII allows the control signal to be short-circuited. This way, for an (n+m)-bit ETAI(n, m) with an n-bit accurate part and an m-bit approximate part, the critical 1s-propagation path is k + 1 < m cells, where k is the spacing between two CSGCIIs in the control block. The architecture of the control block is graphically described in Figure 1.31, for an ETAI with a 20-bit approximate part. The CSGCI and CSGCII are simple logic blocks easily described by their binary logic in Table 1.9. Once the control bits are generated/propagated for the whole approximate part, calculations are performed by a carry-free addition block, composed of a logic function called Modified XOR (MXOR) and presented at transistor-level logic in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. Studying the truth table of this new logic block reveals that it actually is a 3-input OR gate. Hence, each output $z_i$ of the approximate part is given by $z_i = \text{MXOR}(x_i, y_i, CTL_i) = x_i \vee y_i \vee CTL_i$.

The accuracy of the ETAI is studied by introducing the Minimum Acceptable Accuracy (MAA) and the Acceptance Probability (AP). The MAA is a fixed value defining the desired minimum accuracy of the result compared to an exact computation, expressed as a percentage. The AP is the probability for a given MAA to be reached by the operator. Simulation results are given in Figure 1.32. The first graph is obtained for several 16-bit ETAIs with $10^4$ simulated sets of inputs; the second one has unknown simulation size parameters, but all ETAIs are designed with an approximate part representing 75% of the adder width. For a 99% MAA, an ETAI with a short approximate part gives quite a high AP (about 99%), but when the approximate part length grows, the AP dramatically drops as the MAA increases. The second graph shows that for small operators, a difference of 1% on the MAA provokes a high drop in AP. For longer operators, the difference of AP for different MAAs gets smaller. The errors performed by an ETAI(n, m) can be high in amplitude (nearly $2^{m-1}$). The ETAI is an adder which often produces errors, mostly with a low amplitude but sometimes with a relatively high amplitude. Moreover, it has very bad performance for the addition of low-amplitude numbers.
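Algorithm 1 condenses into a few lines of software. The sketch below models the m-bit approximate part of an ETAI (the accurate MSB part would simply be a normal addition) and shows the typical error behavior on one arbitrary input pair.

```cpp
#include <cstdint>
#include <cstdio>

// Functional model of the m-bit approximate (LSB) part of an ETAI (Algorithm 1):
// z_i = x_i OR y_i from the MSB down, until a position where x_i AND y_i = 1;
// from there, all remaining bits down to the LSB are forced to 1.
uint32_t etai_lsb_part(uint32_t x, uint32_t y, int m) {
    uint32_t z = 0;
    for (int i = m - 1; i >= 0; i--) {
        uint32_t xi = (x >> i) & 1, yi = (y >> i) & 1;
        if (xi & yi)
            return z | ((1u << (i + 1)) - 1);   // saturate bits i..0 and stop
        z |= (xi | yi) << i;                    // carry-free "addition"
    }
    return z;
}

int main() {
    uint32_t x = 0b10110010, y = 0b01100110;    // arbitrary 8-bit approximate parts
    std::printf("approx = 0x%02X, exact LSBs = 0x%02X\n",
                etai_lsb_part(x, y, 8), (x + y) & 0xFF);
    return 0;
}
```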
Figure 1.32 - ETAI accuracy simulation results (AP as a function of MAA and of adder size) [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]

Simulation results for the power and delay performance of the ETAI compared to the most classical accurate adders are given in Table 1.12 [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF], for a 0.18µm CMOS process at a 100 MHz frequency. In this table, CSK refers to the Carry-Skip Adder (see [START_REF] Parhami | Computer Arithmetic[END_REF]). The details about the size of these adders are not given. 100 sets of inputs were used for simulation, which is a bit low for averaging all the different possible reactions of the ETAI. These results give the advantage to the ETAI over classical correct adders on this metric, with Power-Delay Product (PDP) savings of up to more than 80% compared to the carry-select implementation. Once again, the results must be moderated, since the experimental conditions are not very clear (including the size of the adders, the size of the approximate part and the value of the CSGCII spacing parameter k).

Another fast approximate adder, the ETAII, is presented in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF]. The structure of this adder is based on the same idea as the LPA and ACA previously presented, i.e. shortening carry propagation paths. Indeed, the carry propagation chain is cut into smaller sub-chains of equal size. But contrary to the ACA, not every output bit sees the same propagated carry chain length. In the structure of the ETAII given in Figure 1.33, it can be noticed that each sub-block of size X is calculated using an exact adder, but taking into account only the carries generated inside it and the propagated output carry of its predecessor carry generator sub-block. For carry generation, CLA blocks are implemented, and for the sum generators, classical RCAs are used to minimize area.

Well-designing the ETAII is entirely about finding the proper size for the carry propagation chains. The author studied the AP of a 32-bit ETAII given different carry generator block sizes m. The results are available in Table 1.10. Simulations were run using $10^4$ sets of inputs, which is again quite low for the adder width, so the results must be taken with caution. For a 32-bit adder, a high AP can be reached for a high MAA with quite a low carry chain length. E.g., more than 97% of the tested inputs lead to an accuracy superior to 99% compared to the accurate value.

Table 1.10 - AP as a function of MAA and carry propagation block size for a 32-bit ETAII

Contrary to the ETAI, the ETAII has no relative accuracy disparity between low-amplitude and high-amplitude input sets. Simulations in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF] showed that the AP for a given MAA is similar for the ETAII whatever the range of inputs, at least for n/m = 4. A comparison of accuracy between an ETAI (with unknown parameters) and a 32-bit ETAII with 4-bit carry propagation blocks is given by the author. For instance, for a 99% MAA and inputs in the integer range $[0, 2^8]$, the ETAII presents a 97% AP against only 52% for the ETAI. In order to improve the ETAII accuracy, a modified version is proposed in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF], the ETAIIM.
Indeed, the ETAII has a periodic structure, with the same substructure for high-significance and low-significance output bits, which makes it give as much importance to the LSB part as to the MSB part, which is generally unwise. In order to give more importance to the MSBs, the ETAIIM takes the same structure as the ETAII, but with a longer carry propagation chain for the most significant bits. Figure 1.34 represents a 32-bit ETAIIM, with 4-bit carry propagation for all LSBs and a 12-bit MSB carry propagation chain. Such a structure induces a longer critical path, corresponding to this longer carry chain and the corresponding sum generator. In this way, the previously described ETAIIM has a 99.9% AP for a 99% MAA, against a 97.0% AP for the same MAA for the corresponding ETAII. Hardware simulation results are presented in Table 1.12 [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF], comparing classical correct adders to the ETAII and ETAIIM. The only difference between the ETAII and the ETAIIM is the delay, which is 64% higher for the ETAIIM, but both operate with the same power.

The ETAIV is presented in [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]. Its principle is the same as that of the ETAII and ETAIIM, shortening the carry chain. However, the ETAIV presents longer carry chains than the ETAII and ETAIIM for an identical delay, in exchange for a higher energy cost. In the ETAIV, the carry generators are doubled in a carry-select fashion:

• the LSB carry generator is strictly identical to the one found in the ETAII and ETAIIM, and
• the MSB carry generators are composed of two parallel carry generator blocks: one taking the value 0 (GND) as input carry, the other taking 1 (VDD).

In this way, the two exhaustive possibilities for the concerned partial carry propagation are calculated. The correct one is chosen thanks to a 2-input multiplexer controlled by the LSB carry generator's output carry. The partial block diagram of the ETAIV is depicted in Figure 1.37.

The author performed simulations on $10^4$ sets of inputs, and accuracy results are given in Table 1.11, where X represents the number of bits of each sum generator (and thus of each carry generator). The ETAIV provides better results in terms of AP for a given MAA than the ETAII (considering the same block size X). The error maps for a 16-bit ETAIV with blocks of width X = 2 and X = 3 respectively are depicted in Figure 1.36. For X = 2, triangular patterns are clearly visible, corresponding to areas where the error is smaller. Generally, the error is quite high since many carries are not transmitted. For X = 3, the errors seem more homogeneous and of lower amplitude on average.

Table 1.12 - Simulation results for Error-Tolerant Adders [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]

Hardware simulation results for the ETAIV are given in Table 1.12. Because of its longer critical paths, the ETAIV needs slightly more power than the ETAII and ETAIIM and has a 21% greater delay. The authors also show by simulation that the design has a good accuracy even for low-amplitude inputs. It is a good alternative to the ETAIIM, if a little amount of area can be traded for delay reduction.

Figure 1.37 - Partial block diagram of the ETAIV

With the ETAI, the ETAII and their modified versions ETAIIM and ETAIV, four subsequent approximate adders were proposed.
The ETAI is original in its reversed carry-propagation approximation, but it is very inaccurate and has very poor performance for low-amplitude inputs. However, it is a very low-energy adder thanks to its low delay. The ETAII, ETAIIM and ETAIV have a very different nature from the ETAI, but they are very close to one another in their principle. The most accurate is the ETAIV, followed by the ETAIIM and the ETAII. However, the fastest is the ETAII, then the ETAIV and the ETAIIM. The ETAII is the most energy-efficient, the ETAIV coming second. The four operators have linear area complexity, but the ETAI is the smallest, followed by the ETAII and ETAIIM tied, not far from the ETAIV. However, the authors compare these operators to some classical adders, but not to state-of-the-art fast adders such as the KSA [START_REF] Kogge | A parallel algorithm for the efficient solution of a general class of recurrence equations[END_REF] or the Ladner-Fischer Adder (LFA) [START_REF] Ladner | Parallel prefix computation[END_REF].

Accuracy-Configurable Approximate Adder

In [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], Kahng proposes an Accuracy-Configurable Adder (AC2A). This operator is able to perform additions at different levels of accuracy on a unique implementation, using a series of error correction systems which can be activated or deactivated thanks to power gating techniques. The approximate part of an n-bit AC2A is composed of n/k − 1 overlapping k-bit exact adders, of which only the MSB part is kept for the final result, as illustrated by Figure 1.38. Hence, errors are generated only when a carry is not propagated to the input of one of these sub-adders. From a functional point of view, the AC2A resembles the ETAIV, except that the sub-blocks overlap by a half instead of 2/3, and that the accuracy of the AC2A is tunable at run time. The approximate computation part has a delay complexity of $O(\log_2 k + 1)$ and an area complexity of $O((n - 2k)(\log_2 k + 1))$. This means that the delay complexity beats the optimal delay of an accurate adder, whereas the area complexity is slightly above that of the optimal accurate adder.

Estimations of the minimal delay, area, dynamic power and pass rate (1 − error rate) as a function of the value of k, for the approximate computation part of a 16-bit AC2A compared to a conventional CLA, are given in Table 1.13. For k ≤ 6, the proposed computation part is more power-efficient than the classical CLA in terms of dynamic power, but its area is larger when k ≥ 2. It can thus be assumed that its static power is superior to that of the CLA. When k decreases, the minimal delay of the proposed operator decreases, but the error rate increases because of the larger number of unconsidered propagated carries. Therefore, a trade-off must be found between delay and pass rate. To characterize the AC2A approximate part more completely, two metrics are used in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF]:

$ACC_{amp} = 1 - \frac{|R_c - R_e|}{R_c},$ and (1.17)

$ACC_{inf} = 1 - \frac{B_e}{B_w},$ (1.18)

where $R_c$ and $R_e$ denote the correct and erroneous results, and $B_e$ and $B_w$ the number of erroneous bits and the output width. $ACC_{amp}$ measures the relative amplitude of the error, 1.0 representing a perfect accuracy, the value decreasing as the error grows. $ACC_{inf}$ represents the proportion of correct bits, 1.0 representing a fully correct output bit sequence.
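Both metrics translate directly into code; in the sketch below, rc/re are the correct and erroneous results and be/bw the erroneous bit count and output width, as in Equations 1.17 and 1.18 (the popcount builtin is a GCC/Clang convenience, an implementation choice of this sketch).

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// ACC_amp = 1 - |Rc - Re| / Rc  (Equation 1.17): relative error amplitude.
double acc_amp(uint64_t rc, uint64_t re) {
    return 1.0 - std::fabs((double)rc - (double)re) / (double)rc;
}

// ACC_inf = 1 - Be / Bw  (Equation 1.18): proportion of correct output bits.
double acc_inf(uint64_t rc, uint64_t re, int bw) {
    int be = __builtin_popcountll(rc ^ re);   // erroneous bit positions
    return 1.0 - (double)be / bw;
}

int main() {
    uint64_t rc = 0x0100, re = 0x00F8;        // exact vs approximate sum
    std::printf("ACC_amp = %.4f, ACC_inf = %.4f\n",
                acc_amp(rc, re), acc_inf(rc, re, 16));  // 0.9688, 0.6250
    return 0;
}
```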
Considering these metrics, comparisons of the 16-bit AC2A approximate part with k = 4, ETAI [START_REF] Zhu | Design of low-power highspeed truncation-error-tolerant adder and its application in digital signal processing[END_REF] and ETAIIM [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF] described in Section 1.4.1.2, LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF] described in Section 1.4.1.1 and an accurate CLA are given in Table 1.14. In this table, EDC stands for error detection and correction system. The best delay is obtained for AC2A and ETAI, but ETAI is 38% less area-costly than AC2A. However, ETAI's pass rate and $ACC_{inf}$ are very low. In terms of area, AC2A is beaten by ETAIIM, which is also more accurate considering the $ACC_{amp}$ and $ACC_{inf}$ metrics and the pass rate, but with a delay overhead of 30%. LPA is nearly as fast as AC2A and also more accurate, but as it is based on a parallel prefix structure, its area is superior by 47%. The required area overhead for error detection and correction is 75% for LPA, whereas it is only 28% for AC2A and 15% for ETAIIM. The error detection and correction method is discussed below. A first conclusion from these results is that AC2A is a fast adder with a good balance between accuracy and area. Figure 1.39 [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] gives comparative results for the AC2A adder, using the metrics $ACC_{amp}$ and $ACC_{inf}$, varying voltage from 0.6 V to 1.0 V. In these graphs, AC2A is referred to as the ACA adder (not to be confused with the Almost-Correct Adder [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]), and Lu's adder refers to LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF], both presented in Section 1.4.1.1. AC2A shows an interesting resistance to VOS. Indeed, for the $ACC_{amp}$ metric, only ETAI achieves a better resistance, but it has extremely bad results on the $ACC_{inf}$ metric, whereas AC2A beats every other tested operator, closely followed by ETAIIM. What can be concluded is that AC2A has a shorter critical path than the tested adders, and ensures a good accuracy in terms of error amplitude as well as a low Bit Error Rate (BER). As mentioned before, AC2A calculation errors occur when at least one of the sub-adders should have taken an input carry. Knowing this, detecting an error can be performed with a very small overhead. Correction can then be performed by transmitting the lost carry or carries to the concerned sub-adders. Just as for VLSA (see Section 1.4.1.1), the entire error correction could be performed with additional cycles, but with only a small area overhead since most of the design is re-used for correction. However, unlike VLSA, AC2A proposes a configurable accuracy. Indeed, the periodic structure of the error correction system allows several levels of correction, so that an erroneous result can be partially corrected. For this, a pipelined correction design is proposed, following the principle shown in Table 1.15. An example of the previously described structure is developed in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], taking a 32-bit AC2A composed of four 8-bit sub-adders.
With such a structure, four modes can be applied:
• mode-1: no power gating, the whole pipeline is active, and the produced result is exact;
• mode-2: only stage 4 is power-gated, so only the most significant sub-adder is not corrected;
• mode-3: stages 3 and 4 are power-gated, so only one sub-adder is corrected, namely the second one from the LSB;
• mode-4: stages 2, 3 and 4 are power-gated, so only the approximate calculation is performed, with no error correction.
Comparisons of the accuracy reached by these modes using the $ACC_{amp}$ and $ACC_{inf}$ metrics, as well as power and power reduction compared to a conventional pipelined adder, are given in Table 1.16. The conventional pipelined adder refers to an exact adder where the calculation is regularly performed by pipelining it over its sub-adders. Therefore, the 32-bit conventional pipelined adder used as a reference also has four pipeline stages, each stage performing one fourth of the total calculation, contrary to the proposed pipeline, where the whole approximate calculation is performed in the first pipeline stage. In mode-1, the conventional pipelined adder is more energy-efficient than AC2A, but when using approximate modes 2 to 4, energy saving rises from 12.4% to 51.6%, with a relatively small loss of accuracy. Accuracy results of the 32-bit pipelined AC2A on SPEC 2006 benchmarks [START_REF]Standard performance evaluation corporation (spec) cpu2006[END_REF], detailed in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], show generally good accuracy for every mode, even mode-4. In this mode, $ACC_{amp}$ is above 0.99 for every test, and $ACC_{inf}$ above 0.95. In order to show the advantage of configurability in terms of power, the same benchmarks are run with a dynamic configuration of the accuracy. Even if it is not specified in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], it can be assumed that the operating modes were chosen through a succession of simulations and configuration optimizations, and so there is no auto-control system. The results in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] show that the proposed 32-bit AC2A leads to an average of 30.0% power savings with the $ACC_{amp}$ objective (44.5% at best) and 35.8% average power savings with the $ACC_{inf}$ objective (47.1% at best) on the SPEC 2006 benchmarks. As a conclusion, AC2A proposes a pipelined accuracy-configurable adder with good accuracy performance and with potentially important energy savings, thanks to partial power gating applied to the error-correction system. However, the accuracy configuration must be done offline, and so the programming effort is increased. In terms of power consumption, AC2A is quite close to ETAI, which is moderately energy-efficient (see Section 1.4.1.2), but with a much better accuracy in terms of information, meaning the number of correct bits produced. However, AC2A has two main drawbacks:
• error correction is gradually performed from the least significant sub-adders, and
• changing the error correction efficiency at run time implies a modification of the number of cycles for an addition, and variable-latency instructions are generally difficult to handle in an instruction pipeline.
Setting aside these drawbacks, AC2A coupled with a good configuration can lead to important energy savings with a relatively low loss of accuracy or information.

Gracefully-Degrading Adder

This section presents a quality-configurable approximate adder denoted as Gracefully-Degrading Adder (GDA) [START_REF] Ye | On reconfiguration-oriented approximate adder design and its application[END_REF]. In comparison to AC2A [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] presented in Section 1.4.1.3, GDA is meant to be an approximate adder with a better accuracy, reached with less effort. Indeed, as shown in Table 1.15, each AC2A additive correction cycle leads to a better correction, the first correction cycle (correcting the LSBs) bringing much less accuracy improvement than the last one (correcting the MSBs). Both GDA and AC2A can reach the same maximum quality with the same effort, but the optimized operator is able to reach a very good quality with a dramatically reduced amount of effort when compared to the original one. The proposed GDA is based on a structure very similar to ETAIV [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF] described in Section 1.4.1.2. Indeed, the adder is divided into smaller chained adders, whose input carry can be switched either to the upstream sub-adder output carry or to the output of a carry-in prediction block, thanks to a multiplexer. For an n-bit GDA divided into four n/4-bit sub-adders, with $X = (X_3, X_2, X_1, X_0)$ and $Y = (Y_3, Y_2, Y_1, Y_0)$ the n/4-bit subsets of inputs, and $Z = (Z_3, Z_2, Z_1, Z_0)$ the n/4-bit subsets of outputs, the structure of the corresponding GDA is given in Figure 1.40. The configuration of GDA consists in setting up the multiplexers. If the upstream sub-adder output carry is chosen, then a bigger sub-adder is virtually used, causing a longer delay in return for a higher accuracy. In order to obtain a faster operator, it is then better to choose the carry-in prediction blocks as the sub-adders' carry inputs. To avoid an important loss of accuracy, these blocks need to predict the input carry efficiently and with the shortest possible delay, imperatively strictly shorter than that of the adder unit. The proposed carry-in prediction is based on a hierarchical scheme. Indeed, in order to have a fast and accurate prediction, many LSBs have to be considered, more precisely more than the length of the proposed GDA adder units. To achieve that, the prediction computation needs to be parallelized. As for ACA (see Section 1.4.1.1), prediction is based on the propagate and generate signals $p_i$ and $g_i$, which define a recursive formula for the calculation of the carry value $c_{i+1}$:

$p_i = a_i \oplus b_i$, $\quad g_i = a_i \cdot b_i$, $\quad c_{i+1} = g_i \vee (p_i \cdot c_i)$. (1.19)

By developing the previous equations and by assuming the longest carry chain cannot exceed t, the expression of the carry signal $c_i$ is then

$c_i = g_{i-1} \vee (p_{i-1} \cdot g_{i-2}) \vee \cdots \vee \left(\prod_{j=i-t+1}^{i-1} p_j\right) \cdot g_{i-t} \vee \left(\prod_{j=i-t}^{i-1} p_j\right) \cdot c_{i-t}$. (1.20)

Using Equation 1.12 in Section 1.4.1, we can prove that for a 32-bit adder, the probability for the longest carry chain to exceed 8 is 2.43%.
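The truncated carry chain of Equation 1.20, with the trailing $c_{i-t}$ term assumed to be 0 (which is precisely the approximation), can be sketched in Python by iterating the recursion of Equation 1.19 over the t preceding bits; names are illustrative.

```python
def predict_carry(a: int, b: int, i: int, t: int = 8) -> int:
    """Predict the carry into bit position i from the t preceding bits only."""
    carry = 0  # c_{i-t} assumed to be 0: the approximation of Equation 1.20
    for j in range(max(i - t, 0), i):
        p = ((a >> j) & 1) ^ ((b >> j) & 1)  # propagate signal p_j
        g = ((a >> j) & 1) & ((b >> j) & 1)  # generate signal g_j
        carry = g | (p & carry)              # recursion of Equation 1.19
    return carry

a, b, i = 0b1011011101101101, 0b0100101110100111, 12
exact_carry = (((a & ((1 << i) - 1)) + (b & ((1 << i) - 1))) >> i) & 1
# The prediction is wrong only when a carry chain longer than t crosses bit i
print(exact_carry, predict_carry(a, b, i))
```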
Therefore, assuming that taking the 8 preceding bits into account for carry prediction is acceptable, the expression of the carry signal becomes

$c_i = c'_i \vee \left(\prod_{j=i-4}^{i-1} p_j\right) \cdot c'_{i-4}$, with
$c'_i = g_{i-1} \vee (p_{i-1} \cdot g_{i-2}) \vee \cdots \vee \left(\prod_{j=i-3}^{i-1} p_j\right) \cdot g_{i-4}$, and
$c'_{i-4} = g_{i-5} \vee (p_{i-5} \cdot g_{i-6}) \vee \cdots \vee \left(\prod_{j=i-7}^{i-5} p_j\right) \cdot g_{i-8}$. (1.21)

Moreover, $c'_{i-4}$ will propagate to $c_i$ only if

$\prod_{j=i-4}^{i-1} p_j = 1$. (1.22)

These equations lead to the hierarchical prediction scheme of Figure 1.41, where two 4-bit groups are watched in parallel before considering the condition expressed by Equation 1.22 using AND gates. Based on this hierarchical prediction scheme, another level of configurability can be introduced. Indeed, following the scheme proposed in Figure 1.42, the number of preceding bits considered in the prediction can be set by a series of multiplexers, with a step determined by the depth of the carry-in prediction blocks. For instance, the number of carries which can be considered is either 4, 8, ..., or 4 × k, where k is the number of implemented unitary carry-in prediction blocks. This structure gives a high number of possibilities for the granularity of the reconfigurability and for the parallelism of the carry-in prediction. To conclude about its structure, GDA proposes two levels of configuration:
• the possibility to select the use of carry-in prediction or exact adding between each sub-adder, and
• the possibility to select the depth of the carry-in prediction when this mode is active.
By structure, GDA offers fine-grained control of the level of error that can be tolerated at its output, and potentially offers a good prediction of the carry, despite an important area penalty for control. In [START_REF] Ye | On reconfiguration-oriented approximate adder design and its application[END_REF], the authors give very complete comparative simulation results of a 32-bit GDA compared to exact adders (RCA and CLA) and to other approximate adders, the Variable Latency Carry Select Adder (VLCSA-1) [START_REF] Du | High performance reliable variable latency carry select addition[END_REF] and LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF] described in Section 1.4.1.1, covering worst-case error, error rate and average error. Results are given in Table 1.17. All simulations are based on one million randomly generated inputs, so they have to be taken with caution, especially the worst-case error. All adders are static, meaning AC2A and GDA are not generated using their reconfigurable versions; the configuration is set before implementation. $M_A$ corresponds to the operating mode of AC2A, as described in Table 1.16. For the test, and for each mode, AC2A was implemented in a non-pipelined version in order to get fair area and delay measurements. $M_B$ and $M_C$ are the parameters of GDA. $M_B$ indexes the number of bits of each sub-adder (respectively 4, 8, 12 or 16) and $M_C$ the number of prediction bits (respectively 4, 8, 12 or 16). From this point and for the rest of this section, AC2A$_i$ will denote AC2A in mode $M_A = i$, and GDA$_{(i,j)}$ will denote GDA in mode $(M_B, M_C) = (i, j)$. When comparing all exact adders (RCA, CLA, AC2A$_4$ and GDA$_{(4,4)}$), the last two achieve a better delay than the classical others, but with an important area and power overhead.
For instance, GDA$_{(4,4)}$ is 42.3% faster than RCA, but with a 44.5% area overhead and twice the operating power. Compared to AC2A$_4$, GDA$_{(4,4)}$ occupies 66.45% more area and consumes 7.97% more power, but is 5.93% faster. This shows that, in their exact versions, both AC2A$_4$ and GDA$_{(4,4)}$ are only efficient in terms of delay. The accuracy-configurable implementations of AC2A and GDA are compared in Table 1.18, showing the area of each reconfigurable operator, as well as the power and delay corresponding to all their operating modes. Delay* corresponds to the delay when applying voltage scaling to each operator and each mode in order to reach the same power consumption as the highest one, occurring for GDA$_{(4,4)}$. In their reconfigurable versions, it can be observed that GDA costs 19.43% less area than AC2A. In accurate mode, GDA is only 0.90% slower than AC2A, because of the higher number of multiplexers in the critical path. The results from Table 1.17 and Table 1.18 highlight the benefits of GDA compared to AC2A for an equivalent power (Delay* rows in Table 1.18) and for each of the three metrics considered. Figure 1.43 shows the benefits of GDA, selecting the optimal accuracy configuration for each of these metrics:
• AC2A worst-case error is compared to GDA$_{(1,1)}$, GDA$_{(2,1)}$, GDA$_{(3,1)}$, GDA$_{(4,1)}$ and GDA$_{(4,4)}$;
• AC2A error rate is compared to GDA$_{(1,1)}$, GDA$_{(1,2)}$, GDA$_{(1,3)}$, GDA$_{(1,4)}$ and GDA$_{(4,4)}$;
• AC2A average error is compared to GDA$_{(1,1)}$, GDA$_{(2,2)}$, GDA$_{(3,3)}$ and GDA$_{(4,4)}$.
In conclusion, the slight delay overhead in the GDA structure allows significant reductions of the error, contrary to AC2A. However, it has to be kept in mind that each of the curves of Figure 1.43 does not represent the entirety of GDA, but only the optimal points. This means, for instance, that the ideal configuration for optimizing worst-case error is not the same as for error rate or average error. Therefore, there is no optimal configuration of GDA considering all metrics at the same time, and so GDA accuracy has to be configured knowing which metric has priority.

Addition Using Approximate Full-Adder Logic

All the approximate adders previously described leverage modifications of the addition function, always cutting carry propagation in different ways, except for ETAI (see Section 1.4.1.2). In this section, the adders are built considering modifications of the Full Adder (FA) function. As developed in Section 1.3.3, the FA function is the most widely used basic cell in binary addition, since it produces a 1-bit addition. Several possible modifications of the FA cell are proposed in [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], and more particularly of the Mirror Adder (MA) circuit implementing the FA logic (see Figure 1.44a). Indeed, being able to remove transistors from a FA cell while modifying its binary logic function as little as possible can potentially induce high benefits in terms of speed, area and energy savings, because of the omnipresence of this cell in numerous adder designs. First, transistors are removed one by one from the conventional MA to find a configuration where, for as many input combinations (x, y, c_in) as possible, the correct set of outputs (z, c_out) is produced.
The best result according to the authors was obtained by removing 8 transistors, which leads to the resulting circuit of Figure 1.44b. Then, the final Approximate Mirror Adders (AMAs) are designed following a series of observations on the FA truth table:
• The Approximate Mirror Adder type 1 (AMA1) is based on the observation that $z = \overline{c_{out}}$ for 6 cases out of 8. Therefore, the z calculation part is suppressed and z is tied to the inverted carry output through a buffer, in order to limit the capacitance, for latency and/or power efficiency purposes (this behavior is modeled in the sketch below). AMA1 shows a 44.5% area gain over the traditional MA.
• The Approximate Mirror Adder type 2 (AMA2) is based on the observation that $c_{out} = x$ is verified for 6 cases out of 8 (or $c_{out} = y$ by symmetry). Therefore, an inverter is used on x to calculate $c_{out}$ from the simplified MA of Figure 1.44b, producing the transistor view of Figure 1.44d. AMA2 shows a 41.2% area gain over the traditional MA, and is nearly as good as AMA1 thanks to its shorter critical path.
• The Approximate Mirror Adder type 3 (AMA3) is obtained from AMA2. In AMA2, there are 3 errors for z in its truth table. The idea of AMA3 is to reduce the dependency between $c_{in}$ and z. A good way to do it is to force z = y. Structuring the adder this way also ensures $c_{out}$ to be correct when z is correct. AMA3 is 66.7% smaller than the traditional MA, which is much better than AMA1 and AMA2, but it comes at the cost of a much larger approximation.
The truth tables of AMA1, AMA2 and AMA3 are given in Table 1.19. They show that AMA1 globally generates fewer errors than AMA2, and AMA2 fewer than AMA3. Obviously, the cases in which each of these errors occurs matter greatly for the global accuracy. For instance, it is potentially more costly in terms of error to accidentally generate a carry on $c_{out}$ than not to propagate a carry that should have been propagated, or than to give a wrong result on z, since the error is possibly propagated to a higher-significance output bit. Because of the potentially high-amplitude errors that can be generated by chaining AMAs, AMA cells cannot be used exclusively in an approximate adder design if an acceptable minimal output accuracy is to be reached. That is why the authors only present designs where only the LSB part is made of AMAs. In [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], the MSB part is composed of conventional exact FA cell instances forming a RCA. The adders produced this way are denoted as IMPrecise Adders for low-power Approximate CompuTing (IMPACT). However, in general, it is possible to use any of the accurate adder structures presented in Section 1.3.3. The designs of the AMAs being fixed, the only way to modify accuracy is to modify the number of approximated LSBs. In [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], image processing tests are performed with different approximated LSB lengths and compared to the FxP truncation method in terms of Peak Signal-to-Noise Ratio (PSNR), power and area savings with respect to the exact method. Results for the Discrete Cosine Transform (DCT) and its inverse (IDCT) are reported in Table III of [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF]. It can be noticed that the operating voltages of all AMAs and truncated adders are very similar, so the results are quite fair.
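A behavioral sketch of an IMPACT-style adder using the AMA1 rule on its LSB cells is given below, assuming the behavior described above (carry-out kept exact as a majority function, sum tied to the inverted carry-out, which is wrong only for the input patterns 000 and 111); this is a simplified model, not the transistor-level design, and the names are illustrative.

```python
def impact_add(a: int, b: int, n: int = 16, approx_lsbs: int = 8) -> int:
    """Ripple-carry adder whose approx_lsbs low cells follow the AMA1 rule."""
    carry, result = 0, 0
    for k in range(n):
        x, y = (a >> k) & 1, (b >> k) & 1
        cout = (x & y) | (x & carry) | (y & carry)  # majority: exact carry-out
        if k < approx_lsbs:
            s = 1 - cout        # AMA1: sum forced to the inverted carry-out
        else:
            s = x ^ y ^ carry   # exact full-adder sum for the MSB part
        result |= s << k
        carry = cout
    return result

a, b = 21845, 13398
print(impact_add(a, b), (a + b) & 0xFFFF)  # approximate vs exact 16-bit sum
```

Since the carry stays exact in this variant, errors remain confined to the approximated sum bits, which is what makes an AMA-based LSB part composable with an exact MSB adder.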
(Table 1.19 – Truth tables of the accurate FA and of AMA1, AMA2 and AMA3: inputs x, y, c_in; accurate outputs z, c_out; approximate outputs $z_i$, $c_{out,i}$ for each AMA variant.)

The results of Figure 1.45 show that AMA3 is the most accurate in terms of PSNR, followed by AMA2, AMA1 and finally truncation. Truncation being the worst is not surprising, since it virtually sets all LSBs to 0; but the order of accuracy among the three AMAs would be expected to be the opposite, since AMA1 corresponds to the slightest approximation and AMA3 to the largest one. However, no other accuracy metric is used, so the nature of the performed error is unknown, though we could suspect the AMAs to produce a salt-and-pepper-like noise because of the nature of their design. Moreover, using 20-bit adders on an 8-bit image raises questions about the legitimacy of the experiment. In terms of power, truncation is obviously more efficient than the AMA-based adders, but IMPACT shows a good power efficiency, especially for a high number of approximated LSBs. The most power-efficient is AMA3, followed by AMA1 or AMA2 depending on the situation. For the area results, truncation does not appear. A good estimation of the truncation area savings can be obtained by considering that, for x truncated LSBs on a 20-bit adder, the area benefit is $\frac{x}{20} \times 100$ percent. Therefore, we can assume the area savings for 7, 8 and 9 truncated LSBs to be respectively 35%, 40% and 45%, which is about 25% higher than for AMA3, partially explaining the energy savings results. However, the results show that area savings are twice as high for AMA3 as for AMA2 and AMA1, which are quite similar. Further results implementing MPEG video compression are given in [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF] and tend to confirm the results obtained in the DCT/IDCT experiment. As a conclusion, IMPACT adders are power-efficient approximate adders, but their accuracy compared to truncation still needs to be tested in legitimate experimental settings, which is done in Chapter 4.

Approximate Adders by Probabilistic Pruning of Existing Designs

In the previous sections, all proposed adders were original designs. However, the existence of an extensive literature on integer arithmetic raises the following question: is it possible to take any adder circuit and automatically transform it into an approximate one? This question is addressed in [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF]. The idea is to iteratively prune small parts of a design in order to trade accuracy for area, power, and potentially delay. Of course, the pruned elements must not be chosen randomly, and so pruning strategies are leveraged. Two parameters are considered for design pruning: activity and significance. Indeed, power consumption is proportional to activity; therefore, high-activity gates are the most interesting to remove first. For arithmetic operators, each output bit is twice as significant as its directly less significant neighbor, and so an error occurring at its position has twice as much impact on the output error as an error on this neighbor. Assuming this, every transistor, gate, or group of gates can be assigned a cost function, which is the value of its activity multiplied by its significance. This way, they can be sorted in increasing order to generate a priority list for pruning.
The cost functions can also be modified by considering that all operator outputs have the same weight. However, we will only consider weighted outputs, since this is the most common situation in signal processing. For automated design, an error target must be given as a parameter. In [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], two error parameters are allowed in the framework:
• $\text{Error Rate} = \frac{\text{Number of erroneous computations}}{\text{Total number of computations}}$;
• $\text{Relative Error Magnitude} = \frac{1}{\nu} \sum_{k=1}^{\nu} \frac{|z_k - z'_k|}{z_k}$, where $\nu$ is the number of possible sets of inputs, $z_k$ the accurate output for a given input set k, and $z'_k$ the output for the same input set considering the pruned design.
Once the error nature is chosen and the target defined, the probabilistic pruning optimization process follows the flowchart given in Figure 1.46. In [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], the method is applied to parallel-prefix adders, which is convenient as they are defined by a grid of base cells (see Section 1.3.3); probabilistic pruning is thus easy to apply, taking each of these parallel-prefix base cells as the unitary block for the method. Figure 1.47 shows the results for a pruned 16-bit Kogge-Stone adder with a −20 dB relative error accuracy goal. For this adder, activity increases with the considered level of the parallel-prefix graph, while the significance logically increases from LSB to MSB on each level. Thus, cells are pruned from the lower right corner to the upper left corner. It can be seen that to reach the −20 dB relative error, 10 cells were pruned out of a total of 49, which represents more than a 20% area saving. The probabilistic pruning method is applied to several classical adders such as RCA, CSA, KSA, HCA and LFA in [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF]. All error estimations were obtained by functional simulation, for which the number of input sets is not reported. An analytical method was used for activity estimation, assuming each input has an activity of 0.5. Globally, the best benefits in the Energy-Delay-Area (EDA) product are 2× to 7.5× compared to the original adder, with a relative error of respectively 10% and $10^{-6}$%, these best results being obtained for KSA and HCA. Results for these adders are given in Figure 1.48. The most important gains are obtained on delay, thanks to shorter critical paths. Despite a lack of detailed results, we can assume by interpolating the points of the result curves that quite an important benefit can be observed even when trading a small amount of accuracy. As an example, for a $10^{-2}$ relative error, a factor of more than 3× is reached for the relative EDA. The main issue with this method is the use of the mean relative error as a target, which does not prevent the operator from producing high-amplitude errors. As an example, the Weighted-Pruned Kogge-Stone Adder (WPKSA) presented in Figure 1.47 can make an error on its MSB for a certain number of input sets, causing an extremely high error, which may not be tolerable depending on the considered application.
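The pruning priority list described above reduces to a sort on the activity × significance product; the sketch below uses hypothetical cell records (names, activities) purely for illustration.

```python
def pruning_priority(cells):
    """Cells sorted by cost = activity x significance; the cells whose
    removal has the least error impact are pruned first."""
    return sorted(cells, key=lambda c: c["activity"] * c["significance"])

# Hypothetical parallel-prefix cells: significance = 2**output_column
cells = [
    {"name": "cell(3,0)", "activity": 0.9, "significance": 2 ** 3},
    {"name": "cell(0,1)", "activity": 0.4, "significance": 2 ** 0},
    {"name": "cell(7,2)", "activity": 0.7, "significance": 2 ** 7},
]
for c in pruning_priority(cells):
    print(c["name"], "cost =", c["activity"] * c["significance"])
```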
The list of approximate adders presented in Section 1.4.1 is far from exhaustive; many others exist in the literature. However, the ones presented here are representative of the general trend of approximate adders. Most are derived from classical adders, with different ways to cut carry chains and accelerate carry propagation. Some designs come with error detection and correction circuits, leading to another kind of approximate adders, the configurable adders, which can take different accuracy targets at run time. Finally, an interesting automated method for approximate circuit generation stands out from the crowd [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], applied to adders in the original paper but applicable to any signal processing circuit. The next section presents a subset of the literature on approximate multipliers.

Approximate Integer Multiplication

In this section, a subset of the literature dealing with approximate multipliers is developed. As seen previously in Section 1.3.4, the most important part of the multiplication structure is the reduction of the summand grid, performed with adders (generally CSA). Therefore, a large number of different multipliers can be derived using approximate adders such as the ones presented in the previous section. In this document, we will try to highlight more original multipliers leveraging more creative ideas from which they can benefit.

Approximate Array Multipliers

In this section, three versions of approximate array multipliers are presented. As a reminder, accurate array multipliers are presented in Section 1.3.4. A 6-bit two's-complement signed array multiplier is depicted in Figure 1.20. Accurate array multipliers are neither the fastest nor the smallest multipliers. However, their periodic structure has a compact hardware layout, thanks to short wiring, and allows for efficient pipelining. This advantage makes the array multiplier one of the most used in embedded Systems on Chip (SoC). The three Approximate Array Multipliers (AAMs) of this section are Fixed-Width Multipliers (FWMs), meaning that if their inputs are of width n, their output is also of width n, instead of the 2n required for perfect accuracy: only the n MSBs are kept. In practice, most multipliers are FWMs, as data width is generally constant in processing units. The first AAM will be denoted by Approximate Array Multiplier I (AAMI) and was proposed in [START_REF] Kidambi | Area-efficient multipliers for digital signal processing applications[END_REF]. As for the two others described below, the idea is to prune the least significant cells, i.e. the ones responsible for the computation of the discarded output LSBs (a behavioral sketch of this truncation principle is given below). The resulting two's-complement signed operator is given in Figure 1.49. More than half of the base cells are pruned, and the diagonal AFAs are changed into AHAs, since they have one less input in the AAMI version. The mathematical calculation of the error bias is given in [START_REF] Kidambi | Area-efficient multipliers for digital signal processing applications[END_REF]. The authors also show that the bias and the variance of the error are linear with the size of the AAMI.
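The truncation principle behind AAMI can be sketched behaviorally as follows (an unsigned variant for readability, whereas AAMI itself is signed): partial-product cells that only feed the discarded lower half of the result are dropped, and the n-bit upper half of what remains is returned.

```python
def fixed_width_truncated_mult(a: int, b: int, n: int = 8) -> int:
    """AAMI-style sketch: keep only partial products of column weight
    i + j >= n - 1 (those reaching the kept upper half), drop the rest."""
    acc = 0
    for i in range(n):
        for j in range(n):
            if i + j >= n - 1:
                acc += (((a >> i) & 1) & ((b >> j) & 1)) << (i + j)
    return acc >> n  # n-bit fixed-width result (upper half)

a, b = 200, 180
print(fixed_width_truncated_mult(a, b), (a * b) >> 8)  # approximate vs exact
```

In the unsigned case, the dropped columns always contribute non-negatively, which is exactly the systematic bias that AAMII and AAMIII then compensate on the diagonal.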
The basic pruning proposed by AAMI was then improved in [START_REF] Jou | Design of low-error fixed-width multiplier for dsp applications[END_REF] with the Approximate Array Multiplier II (AAMII) presented below, and in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF] with the Approximate Array Multiplier III (AAMIII). AAMII adds to AAMI a correction circuit on the diagonal to reduce the bias of the error. This correction circuit is made of very few gates, since it is composed of n very simple cells on the diagonal: the first one is an AND gate, followed by n − 2 AAO cells. Table 1.20 [START_REF] Jou | Design of low-error fixed-width multiplier for dsp applications[END_REF] gives accuracy results comparing AAMI and AAMII using the maximal absolute error $AE_{max}$, and the AP for a given MAA, this metric being explained in Section 1.4.1.2. These results show that AAMII has a much lower maximal error than AAMI thanks to the bias reduction circuit, and its maximal error increases more slowly as the size of the multiplier grows. The AP is also much higher for AAMII, and the difference between both multiplier designs seems to increase with the MAA. As an example, for a 16-bit multiplication, 77.9% of the outputs will be less than 0.01% away from the expected result, against only 35.2% for AAMI. To conclude about AAMII, this second version achieves much better performance in terms of maximal error as well as in terms of acceptance probability, with a very small area and delay overhead. AAMII also has an area which is nearly half that of the classical non-fixed-width array multiplier, but slightly larger than AAMI. A third improvement of the signed version of the AAM, referred to as AAMIII, is proposed in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]. As with AAMII, the idea is to lower the bias induced by the operator truncation, but more effectively. To reach an optimal efficiency, a method is presented to build a fixed-width array multiplier with reduced maximum error, average error and error variance, with no overhead for the correction circuit compared to AAMII. The minimization of the bias is exclusively performed by feeding the array multiplier's diagonal with AND and NAND gates (instead of OR gates in AAMII) and adjusting with a constant input on the last FA line. In order to find the most effective configuration, an exhaustive search of the bias is first performed. For a given set of inputs $(a_i, b_j)$, $i, j \in [\![0, n-1]\!]$, the best error correction term with respect to the AAMI structure, $C_t$, can be written as [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]

$C_t = \sum_{i=1}^{n-2} \langle a_{n-i-1} b_i \rangle_{q_{n-i-1}} + \lfloor K \rfloor$ (1.23)

with

$\langle T \rangle_{q_k} = \begin{cases} T & \text{if } q_k = 0 \\ \overline{T} & \text{if } q_k = 1 \end{cases}$ (1.24)

where the value of K depends on the input values. The first part of the correction term given by Equation 1.23 can be achieved by inserting simple AND and NAND gates on the AAMI diagonal. Once these gates are set, the only way left to affect the multiplication result is to set the input bit of the last FA line to 0 or 1. When this value is set to a given $p \in \{0, 1\}$, two correction term cases occur depending on the inputs, given by

$C_t = \begin{cases} \sum_{i=1}^{n-2} \langle a_{n-i-1} b_i \rangle_{q_{n-i-1}} + \lfloor p \rfloor & \text{if } \theta < n \\ \sum_{i=1}^{n-2} \langle a_{n-i-1} b_i \rangle_{q_{n-i-1}} + \lfloor \overline{p} \rfloor & \text{if } \theta = n \end{cases}$ (1.25)

where

$\theta = \sum_{i=0}^{n-1} \langle a_{n-i-1} b_i \rangle_{q_{n-i-1}}$. (1.26)

Therefore, the values of $q_k$, $k \in [\![0, n-1]\!]$, need to be fixed so that the compensation offset defined by the last-line input is as close as possible to p in the first case of Equation 1.25, and to $\overline{p}$ in the second case. To determine this, an exhaustive search is performed, testing all combinations of $(q_0, \ldots, q_{n-1})$ for all input combinations, in order to determine the best values of $K_{\theta<n}$ and $K_{\theta=n}$ minimizing the bias for each combination.
Figure 1.52 [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF] shows the example of an exhaustive search of $K_{\theta<n}$ and $K_{\theta=n}$ for a 6-bit multiplier. Once this exhaustive search is performed, the case nearest to $K_{\theta<n} = 0$ or $1$ and $K_{\theta=n} = K_{\theta<n}$ is determined. This case defines which combination of $q_k$ is chosen, and what the optimal value of the last-line input bit is. In Figure 1.52, the vertical red line shows the index of the optimal values of $K_{\theta<n}$ and $K_{\theta=n}$. After exhaustive experimentation, the authors analytically show that for any n-bit multiplier the optimal configuration is always:
• $q_0 = q_{n-1} = 1$ and $\forall k \in [\![1, n-2]\!]$, $q_k = 0$, and
• last-line input = 1 for n high enough (case-by-case simulation is needed for low values).
Taking that into account, the authors give the structure of a signed 8-bit AAMIII, as shown in Figure 1.53. FA, AFA, NFA and A cells are the previously described cells. The ND-ND cell is composed of two NAND gates, arranged as in Figure 1.51c. Applying the method described above, AAMIIIs of widths 4 to 12 are generated and compared with the two previous versions in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]. The accuracy results in terms of maximum error $E_M$, average error $\mu_e$ and error variance $\sigma_e^2$ are given in Table 1.21. In this table, only the signed versions of the three AAM versions are studied. There are slight benefits with AAMIII in terms of maximal error compared to AAMII, but the relative gain decreases as the width of the operator increases. However, the benefits in terms of mean and variance of the error, and therefore in terms of error power, are very important and increase with the width of the operator. E.g., for a 12-bit operator, the power of the error is reduced by 52.0% comparing AAMIII to AAMII, which is a huge gain considering there is no area overhead, as shown in Table 1.22. For larger operators, the area ratio of all AAMs decreases towards 0.5, and the area overheads of AAMII and AAMIII over AAMI become negligible. The error maps of 4-bit, 8-bit and 16-bit AAMIII are visible in Figure 1.54. On these error maps, four squared areas are clearly visible. The lower-left square, corresponding to low-amplitude inputs, returns a low-amplitude error. The three other squares show a globally much higher error, with patterns that differ with the operator's width. We can then conclude about the AAMs that:
• AAMI brought an interesting way to reduce by a factor of 2 the area of an n-bit fixed-width parallel multiplier, removing the least significant part of the operator;
• AAMII proposed an improvement of AAMI by adding simple cells on the operator diagonal;
• AAMIII proposed an optimized modification of the diagonal cells in order to bring the approximate operator's bias to its minimum, achieving great gains in terms of accuracy with no area overhead.
As a consequence, the AAMIII design method is a very efficient way to build an approximate n-bit fixed-width array multiplier with a nearly 50% area reduction.

Error-Tolerant Multiplier

The Error-Tolerant Multiplier (ETM) [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF] is inspired by the principle of ETAI [START_REF] Zhu | Design of low-power highspeed truncation-error-tolerant adder and its application in digital signal processing[END_REF] studied in Section 1.4.1.2.
Indeed, it is composed of two parts:
• the MSB part, which is a conventional accurate multiplier, and
• the LSB part, which is designed for very fast approximation.
For the accurate MSB part, as an n × n multiplier has an $O(n^2)$ area complexity, dividing the number of digits of the accurate part by a factor k ensures a $k^2$ area saving. Therefore, the benefit of reducing the accurate multiplication part is much bigger than for adders. For the approximate LSB part, an approximation resembling ETAI, described by Algorithm 1 page 44, is used: the inputs of the approximate part are read from MSB to LSB, performing a logical OR until two 1s are met at the same position i. When this happens, all outputs from i down to 0 are set to 1. An illustration of the process is shown in Figure 1.55. ETM also embeds a system detecting whether the MSB part of the inputs has at least one bit worth 1. If there is no 1 (meaning it is a low-amplitude multiplication), then the accurate multiplier is used for the multiplication of the LSB part. In that case, the calculation is 100% accurate, with no overhead on the design area except for multiplexing the inputs. Therefore, contrary to ETAI, ETM performs well for low-amplitude inputs. The system view of ETM can be seen in Figure 1.56. For a good use of this system, the accurate and approximate parts of the multiplier need to be of equal width. The LSB approximation part is achieved the same way as for ETAI, thanks to a control block (see Figure 1.31). The only difference is that all sub-blocks are CSGCI blocks (see Section 1.4.1.2), meaning the control signal is not propagated as fast as for ETAI. This choice is probably due to the fact that the approximation part is faster than the accurate multiplication part, so there is no need to reduce the delay of the control block, which would imply an area and power overhead. The accurate multiplication is performed by an array multiplier of size n/2. The evaluation of accuracy is performed using the AP for a given MAA in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. This metric is presented in Section 1.4.1.2. Figure 1.57 presents the results for a MAA range from 90% to 99%, using ETMs of input widths from 4 to 20 bits. We can suppose these ETMs have the same lengths n/2 for their accurate and approximate parts. Results were obtained by simulating 65,000 sets of inputs for the 20-bit multiplier, and 6,500 for the others. This choice is debatable, the number of points being objectively too low to ensure all approximation cases are met. On the graph, what can first be noticed is that small-width ETMs have a very low accuracy, with less than 20% AP for a 95% MAA for the 8-bit ETM, for instance. For larger operators such as the 16-bit or 20-bit versions, the AP seems stable above 90% MAA, but the 16-bit AP curve dramatically decreases after 97%. Results for lower MAA can be found in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. To conclude about accuracy, the ETM model seems quite well adapted to large operators, small ones often generating high errors relative to their computing range. For a 99% MAA, the 20-bit ETM seems to be the minimal operator for a 90% AP. Of course, the interest of large widths in approximate computing is questionable. Answers are given in Chapter 4.
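The following Python sketch illustrates the two ETM mechanisms, the input-dependent reuse of the n/2-bit accurate multiplier and the OR-based filling of the LSB part; the exact placement of the approximate word inside the lower product half is simplified here, so this models the principle rather than the published circuit.

```python
def etm_mult(a: int, b: int, n: int = 12) -> int:
    """ETM principle: accurate h x h multiplication (h = n/2) reused for the
    MSB halves, or for the LSB halves when both MSB halves are zero."""
    h = n // 2
    a_hi, a_lo = a >> h, a & ((1 << h) - 1)
    b_hi, b_lo = b >> h, b & ((1 << h) - 1)
    if a_hi == 0 and b_hi == 0:
        return a_lo * b_lo                    # low-amplitude inputs: exact result
    approx = 0
    for k in range(h - 1, -1, -1):            # scan LSB halves from MSB to LSB
        ba, bb = (a_lo >> k) & 1, (b_lo >> k) & 1
        if ba & bb:                           # two 1s at the same position:
            approx |= (1 << (k + 1)) - 1      # saturate bits k..0 with 1s
            break
        approx |= (ba | bb) << k              # otherwise output the OR
    # Accurate MSB product; approximate word placed at the top of the low half
    # (a simplification of this sketch, not the exact published alignment)
    return ((a_hi * b_hi) << n) | (approx << h)

a, b = 2925, 1225
print(etm_mult(a, b), a * b)                  # approximate vs exact product
```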
Design simulations were performed on a 0.18 µm CMOS process at a 10 MHz frequency. The comparison was made between a conventional 12-bit array multiplier and a 12-bit ETM with a 6-bit accurate part and a 6-bit approximate part. Power and delay were reported for five sets of inputs; the PDP of the five corresponding operations is given in Figure 1.58. The detailed results are given in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. These results show that energy consumption strongly depends on the inputs. On the five tested sets of inputs, the energy consumption of ETM is 75% to 90% lower than that of the conventional 12-bit multiplier, though we could regret the very low number of performed tests. Looking at the detailed test results, we can see there is nearly no improvement in calculation delay, whereas the power improvement is quite important. In terms of area, the 12-bit ETM covers 491 µm² whereas the parallel multiplier covers 1028 µm², which is more than twice as much. Since a 12-bit array multiplier is roughly four times as large as its 6-bit version, this means that the approximation part has an area overhead nearly equivalent to the exact part's area. To conclude about this operator, ETM proposes an n-bit approximate multiplication using an n/2-bit accurate multiplier, which can be used for the MSB part or the LSB part of the computation depending on the input range. This allows for high energy and area savings, with quite good accuracy results for large-width operators. Once again, using a unique scalar metric does not give details about the nature of the error; but this operator, by nature, can produce high-amplitude errors for a small number of worst-case input sets. It can be noticed that ETM has an area comparable to that of an AAM version II or III (see Section 1.4.2.1), but ETM is an n × n → 2n multiplier whereas the AAMs are only n × n → n. However, this does not at all ensure that ETM is more accurate than the best of them (AAMIII), and more tests would need to be performed to determine its accuracy.

Approximate Multipliers using Modified Booth Encoding

As discussed in Section 1.3.4, modified Booth encoding allows the number of partial products in the summand grid to be significantly reduced. The usual Radix-4 encoding divides the number of partial products by two for an even input width. As the critical part of a multiplier is the carry-save reduction of the partial products, using modified Booth encoding offers good potential for area, delay and power savings. Therefore, in the context of approximate multipliers for low-power applications, using this technique as a starting point is a potentially efficient way to save energy. In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], the authors present a fixed-width Booth multiplier with a simple error-correction system. In an FxP context, fixed-width multipliers are obtained by truncating the LSB half of the multiplication output. A classical approximation for fixed-width multipliers consists in removing the LSB half of the partial products and compensating the induced bias with an adapted constant.
In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], error compensation is performed by keeping a few recombined cells of the most significant column of the LSB part. Therefore, the usual constant output bias is replaced by an input-dependent bias. To determine how the error-compensation cells should be recombined, the authors studied the number of carries generated at rank n − 1 as a function of the number of ones in the summand grid at rank n − 1, and statistically determined that the best choice to minimize the error is to keep these cells intact. Therefore, every rank n − 1 cell of the summand grid is kept without any recombination. The resulting summand grid is given in Figure 1.59. From this point on, the column of rank n − 1 in the summand grid will be referred to as $LP_{major}$ (Least significant Part, major position) and all the lower-significance elements as $LP_{minor}$ (Least significant Part, minor position). The most significant part, which corresponds to the n bits always kept for the fixed-width multiplier's output calculation, will be referred to as MP (Most significant Part). Table 1.23 gives the equivalence between a Radix-4 modified-Booth-encoded symbol $Y_i$ and the control bits used in partial product generation:

Table 1.23 – Equivalence between Radix-4 modified-Booth-encoded symbol $Y_i$ and control bits in partial product generation

 $Y_i$ | $X_{sel}$ | $2X_{sel}$ | NEG
 −2   |    0      |     1      |  1
 −1   |    1      |     0      |  1
  0   |    0      |     0      |  0
  1   |    1      |     0      |  0
  2   |    0      |     1      |  0

The proposed 8-bit multiplier has 46% fewer gates than its accurate equivalent. In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], accuracy results are given in terms of Signal-to-Noise Ratio (SNR): for a 16-bit multiplier, the SNR is 76.64 dB. More detailed results about this multiplier are given in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] and [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], reported in Tables 1.24, 1.28 and 1.30, where it is denoted as Fixed-width modified-Booth-encoded Multiplier version I (FBMI). Another fixed-width Booth multiplier with input-dependent error correction, denoted here as Fixed-width modified-Booth-encoded Multiplier version II (FBMII), is proposed in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF]. In FBMII, only the Booth encoder control outputs are considered for error correction. The idea is to perform a statistical approximation of the carries generated by the truncated part as a function of the coded input, in order to design a well-adapted error compensation system. The Radix-4 Booth symbols $Y_i$ are first reduced to binary flags $Y'_i$ defined as

$Y'_i = \begin{cases} 1 & \text{if } Y_i \neq 0 \\ 0 & \text{otherwise} \end{cases}$ (1.27)

Therefore, for a given chain of symbols $\{Y_i\}_{i \in LP}$, the corresponding chain of bits $\{Y'_i\}_{i \in LP}$ can be obtained with a very small area and delay overhead, performing a logical OR as

$Y'_i = X_{sel,i} \mid 2X_{sel,i}$. (1.28)

Then, statistics on the chain of bits $\{Y'_i\}_{i \in LP}$ must be gathered in order to find the best way to estimate the carries generated in the LP group. For this, $S_{LP}$, the sum of all weighted partial products of the LP group, is introduced and defined as

$S_{LP} = \sum_{0 \le 2i+j < n-1} p_{i,j} \, 2^{2+2i+j-n}$. (1.29)

Indeed, a good estimation of $S_{LP}$ for a given input allows a good correcting bias to be injected in the MP part. For this, the exact carries of $LP_{major}$ are propagated and the carries of $LP_{minor}$ are estimated. $S_{LP_{minor}}$ is defined the same way as $S_{LP}$ but restricted to $LP_{minor}$.
First, a method leveraging exhaustive simulation is proposed in [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF]. The idea is to list all occurrences of all possibilities for $\{Y'_i\}_{i \in LP_{minor}}$ and to calculate the rounded statistical mean of $S_{LP_{minor}}$ for each of these occurrences. E.g., for n = 10, there are 108 ways to obtain a given combination of $\{Y'_i\}_{i \in LP_{minor}}$. The resulting approximate carries can be generated as

$a\_carry_0 = Y'_3 \mid Y'_2 \mid Y'_1 \mid Y'_0$,
$a\_carry_1 = Y'_3 Y'_2 (Y'_1 \mid Y'_0) \mid Y'_1 Y'_0 (Y'_3 \mid Y'_2)$. (1.30)

This circuit can be implemented using 8 basic logic gates. Then, the value of the carry to propagate to $LP_{major}$ is obtained by rounding. As each $Y'_i$ can only be 0 or 1, the maximal value for the propagated carry is $2^{-1}(n/2 - 1)$ rounded, and so the number of binary approximated carries $N_{a\_carry}$ is always $\lfloor n/4 \rfloor$. A methodology for the error correction system can then be derived:
1. For an n-bit FBMII, the number of approximated carries is $N_{a\_carry} = \lfloor n/4 \rfloor$, denoted as $a\_carry_0, a\_carry_1, \ldots, a\_carry_{N_{a\_carry}-1}$.
2. $\forall i \in [\![0, N_{a\_carry} - 1]\!]$, $a\_carry_i = 1 \iff \left(\sum_{j=0}^{n/2-2} Y'_j\right) \ge 2i + 1$ (see the sketch below).
3. The approximate carry circuit is designed by defining the compensation carry logic, e.g. using a Karnaugh map.
Using this method, accuracy results are given in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] in terms of error mean and variance for n = 10 and n = 12, shown in Table 1.24. As this carry generation process, denoted ACGPI, becomes costly for wide operators, a lighter process, denoted ACGPII, is also proposed; it is built as follows:
1. The bits $Y'_i$ are gathered into groups of three.
2. Each group's bits are summed using a FA (or a HA, or a wire, for the last group).
3. At each FA output, the carry signal c is an approximate carry, and the sum signal s is summed with the ones from the other groups.
4. The process is repeated until only one sum signal is left.
5. Finally, 1 is added to the last adder.
With ACGPII, the original ACGPI is slightly modified by a new, higher-level approximation, but the sum of the generated carries remains the same, as shown for n = 8 in Table 1.26. The design for n = 32 is given in Figure 1.62. With only 7 FAs and 1 HA, ACGPII represents a very low overhead in terms of area. Moreover, with only three levels, it only adds a small delay to the critical path. A comparison of delays and areas using ACGPI and ACGPII for different operator sizes is provided in Table 1.27. For n ≤ 10, ACGPI is better than ACGPII in both domains, in addition to being more accurate by construction in the estimation of the carries. When the operator size grows, ACGPII gets much more efficient in delay and area. Indeed, for a 32-bit FBMII, ACGPII is 78% faster than ACGPI, for a 56% area benefit. Delay and area comparisons of FBMI and FBMII using ACGPII were performed in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF]. They show that their performance is nearly the same, with a slight advantage for FBMII/ACGPII on delay for all sizes, and also on area from n = 14. Using Synopsys tools, FBMI and FBMII show very similar power consumption, at least for n = 10 and n = 12, with a bit more than 60% of the power consumption of an ideal fixed-width Booth multiplier, i.e. a complete Booth multiplier with output truncation or rounding.
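The threshold rule of step 2 and the gate-level forms of Equation 1.30 can be checked against each other exhaustively with a few lines of Python (a sanity-check sketch with illustrative names, not a hardware description):

```python
import itertools

def acgpi_carries(y_flags, n_carry=2):
    """a_carry_i is raised when at least 2i+1 of the Y' flags are set."""
    total = sum(y_flags)
    return [1 if total >= 2 * i + 1 else 0 for i in range(n_carry)]

for y3, y2, y1, y0 in itertools.product((0, 1), repeat=4):
    a0 = y3 | y2 | y1 | y0                               # Equation 1.30
    a1 = (y3 & y2 & (y1 | y0)) | (y1 & y0 & (y3 | y2))   # Equation 1.30
    assert [a0, a1] == acgpi_carries([y3, y2, y1, y0])
print("Equation 1.30 matches the threshold rule for all 16 flag patterns")
```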
Accuracy comparisons between rounding, truncation, FBMI and FBMII using ACGPII are given for n = 16 and n = 20 in Table 1.28. They show that the proposed FBMII/ACGPII significantly beats both truncation and FBMI in terms of error mean and variance. To conclude about FBMII, this fixed-width Booth multiplier proposes much better performance than FBMI with no area, power or delay overhead, thanks to the improved error compensation systems ACGPI and ACGPII. The first one consists in a statistical analysis of the carries generated in the truncated part as a function of the Booth-coded value of the input, whereas the second one is an approximate version of the first, allowing a dramatic reduction of the theoretical correction circuit's size and delay.

Table 1.26 – Carries generated by ACGPI and ACGPII for n = 8

 $Y'_2\,Y'_1\,Y'_0$ | ACGPI: $a\_carry_0$ $a\_carry_1$ | ACGPII: $a\_carry_0$ $a\_carry_1$
 0 0 0 | 0 0 | 0 0
 0 0 1 | 1 0 | 0 1
 0 1 0 | 1 0 | 0 1
 0 1 1 | 1 0 | 1 0
 1 0 0 | 1 0 | 0 1
 1 0 1 | 1 0 | 1 0
 1 1 0 | 1 0 | 1 0
 1 1 1 | 1 1 | 1 1

In [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], Juang proposes a fixed-width Booth multiplier with a very low cost error correction system, based on the estimation of the $LP_{minor}$ bits of the summand grid as a function of the $LP_{major}$ bit values. For more convenience, the proposed multiplier will be denoted as Fixed-width modified-Booth-encoded Multiplier version III (FBMIII). This section presents the methodology for an 8-bit FBMIII, as well as accuracy and area comparisons of different sizes of FBMIII and the previously described multipliers. To design an 8-bit FBMIII, the $LP_{minor}$ levels need to be discriminated as shown in Figure 1.63. The four level-weighted symbol strings of $LP_{minor}$ are denoted as w, x, y and z, and can be expressed as

$w = \sum_{i=0}^{6} 2^{i-7} pp_{0,i}$, $\quad x = \sum_{i=0}^{4} 2^{i-5} pp_{1,i}$, $\quad y = \sum_{i=0}^{2} 2^{i-3} pp_{2,i}$, $\quad z = 2^{-1} pp_{3,0}$. (1.32)

As a reminder, each symbol $pp_{i,j}$ is in the set {−1, 0, 1}. The idea of the FBMIII error correction is to find the relation between the $LP_{major}$ bits and these symbolic strings. For this, the best values of $q_1$, $q_2$ and $q_3$ must be found in order to get as close as possible to

$w + x \approx q_1 \times (pp_{0,7} + pp_{1,5})$, $\quad y \approx q_2 \times pp_{2,3}$, $\quad z \approx q_3 \times pp_{3,1}$. (1.33)

Their best representative is their mathematical mean. The mean of $q_2$ can be decomposed as

$E[q_2] = \sum_{k=-1}^{1} q_{2,k} \times P(pp_{2,3} = k)$ (1.34)

where $q_{2,k}$ is the optimally correcting value of $q_2$ for a given k. $P(pp_{2,3} = k)$ can easily be computed for any value of k, assuming that each multiplier binary input is equiprobable. Table 1.29 gives the probabilities for all $pp_{i,j}$ of LP to be worth k. Given these data, the calculation of $E[y \mid pp_{2,3} = 1]$ is

$E[y \mid pp_{2,3} = 1] = E[2^{-1} pp_{2,2} + 2^{-2} pp_{2,1} + 2^{-3} pp_{2,0}] = 2^{-1} E[pp_{2,2}] + 2^{-2} E[pp_{2,1}] + 2^{-3} E[pp_{2,0}] = (2^{-1} + 2^{-2} + 2^{-3}) \times (1/2) = 0.4375$. (1.35)

Therefore, as $y = q_{2,k} \times k$, the best value for $q_{2,1}$ is 0.4375. The same computation and reasoning for k = −1 and k = 0 respectively give $q_{2,-1} = -0.4375$ and $q_{2,0} \in \mathbb{R}$. As $q_{2,0}$ can take any value, 1 is chosen in an arbitrary manner. By injecting these three $q_{2,k}$ into Equation 1.34 and rounding the result, $q_2 = 1$ is obtained as the approximated coefficient for y. The same process is applied for $q_1$ and $q_3$, and finally we get

$q_1 = 0, \quad q_2 = 1, \quad q_3 = 1$. (1.37)

The ideal compensation value can then be written

$C_{V_{ideal}} = \left\lfloor 2^{-1} (pp_{0,7} + pp_{1,5} + pp_{2,3} + pp_{3,1}) + 2^{-1} (w + x + y + z) \right\rfloor$. (1.38)
Approximating w, x, y and z using $q_1$, $q_2$ and $q_3$ leads to

$C_{V_{app}} = \left\lfloor 2^{-1}(pp_{0,7} + pp_{1,5} + pp_{2,3} + pp_{3,1}) + 2^{-1}(q_1 (pp_{0,7} + pp_{1,5}) + q_2\, pp_{2,3} + q_3\, pp_{3,1}) \right\rfloor = \left\lfloor 2^{-1}(pp_{0,7} + pp_{1,5}) \right\rfloor + pp_{2,3} + pp_{3,1}$, (1.39)

a simplification verified in the sketch at the end of this section. In spite of taking into consideration both $LP_{major}$ and $LP_{minor}$, the computation of the compensation value $C_{V_{app}}$ is not more complex than the compensation value of FBMI, which only took $LP_{major}$ into account; better compensation performance can therefore be expected. Indeed, for $C_{V_{app}}$ to be applied, adding $pp_{2,3}$ and $pp_{3,1}$ to the compensation represents only a small overhead. The accuracy of the operators is evaluated with the absolute error mean and variance, defined as

$\mu_{|e|} = \frac{1}{2^{2n}} \sum_{X=-2^{n-1}}^{2^{n-1}-1} \sum_{Y=-2^{n-1}}^{2^{n-1}-1} |X \times Y - Z_{op}|$, $\quad \sigma_e^2 = \frac{1}{2^{2n}} \sum_{X=-2^{n-1}}^{2^{n-1}-1} \sum_{Y=-2^{n-1}}^{2^{n-1}-1} |X \times Y - Z_{op}|^2$, (1.40)

where $Z_{op}$ is the result of the operation X × Y performed by the operator op. Accuracy results are given in Table 1.30. In terms of absolute error mean, it can be noticed that FBMIII achieves slightly better performance than FBMI, and it has an error variance which is twice as small. In terms of error power, an 8-bit FBMIII is 14.7% more efficient than its FBMI equivalent, and 23.9% for their 12-bit versions. The accuracy benefits of FBMIII over FBMI seem to increase with the size of the operator. However, FBMIII is slightly beaten by truncation in terms of accuracy. This is still a good performance, taking into account that FBMIII saves 44% area compared to an 8-bit truncated-output fixed-width Booth multiplier, and 49% compared to a 12-bit one. More results about area are detailed in [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], as well as further tests comparing FBMI and FBMIII on image processing, confirming FBMIII to be more accurate on several metrics such as root mean square error, SNR and PSNR. Therefore, FBMIII proposes a better global performance than FBMI on many metrics, with a smaller error correction system. In this section, three fixed-width Booth operators with error correction systems were presented:
• FBMI [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF] proposes a low-cost error-compensation system, considering only $LP_{major}$.
• FBMII [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] proposes a higher-cost error-compensation system, based only on the value of the multiplier input, but taking it entirely into account ($LP_{major}$ + $LP_{minor}$). This higher cost allows it to strongly beat FBMI in terms of accuracy, and even the basic fixed-width Booth multiplier obtained by truncation of the output. Compared to its summand grid, the FBMII error compensation system is still small, and so it represents a very interesting fixed-width Booth multiplier.
• FBMIII [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF] has the lowest-cost error-compensation system, with an even smaller cost than FBMI's $LP_{major}$ consideration. Moreover, it has better performance than FBMI, but does not beat the output truncation method as FBMII does.
To conclude, FBMII seems to be the most efficient fixed-width Booth operator in terms of accuracy, though FBMIII proposes a lower-cost error compensation. In terms of delay, the compensation system overhead of the three presented operators is nearly negligible compared to the cost of the carry-save partial product reduction. Therefore, for the best accuracy, FBMII should be given high priority, and FBMIII should be chosen only if area is the critical resource.
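The algebraic simplification in Equation 1.39 (the floor absorbing the integer terms $pp_{2,3} + pp_{3,1}$) can be verified exhaustively over the 81 possible symbol combinations; the following check is a small sketch with illustrative names.

```python
import itertools
import math

def cv_app(pp07, pp15, pp23, pp31):
    """Simplified compensation value of Equation 1.39 (q1 = 0, q2 = q3 = 1)."""
    return math.floor((pp07 + pp15) / 2) + pp23 + pp31

for pp07, pp15, pp23, pp31 in itertools.product((-1, 0, 1), repeat=4):
    unsimplified = math.floor((pp07 + pp15 + pp23 + pp31) / 2
                              + (0 * (pp07 + pp15) + pp23 + pp31) / 2)
    assert unsimplified == cv_app(pp07, pp15, pp23, pp31)
print("Equation 1.39 simplification verified for all 81 symbol combinations")
```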
Therefore, for the best accuracy, FBMII should be given high priority, and FBMIII should be chosen only if area is the critical resource.

Dynamic Range Unbiased Multiplier

In [Hashemi], a novel approximate multiplier referred to as Dynamic Range Unbiased Multiplier (DRUM) is inspired from floating-point multiplication. As a reminder, the FlP multiplication process is described in Section 1.2.2. The idea of DRUM with n-bit inputs is to use an accurate multiplier of size k < n, shifting the n-bit inputs so that the MSBs of the multiplier are fed with the leading one of each input. This way, no effort is wasted in the multiplication uselessly processing "high-significance zero × zero". To reduce the approximation, the inputs are unbiased before multiplication. As only a subset of k bits is extracted from each input, all the less significant bits are virtually set to 0, causing an error which is always in the same direction. To prevent this, the LSB of the k bits extracted for multiplication is set to 1. The unbiasing process applied to each input is depicted in Figure 1.65. Once the inputs are extracted and unbiased, the accurate k-bit multiplication is performed. Finally, the result is shifted so that the output of the k-bit multiplication is expressed with its legitimate significance. The corresponding structure of DRUM is depicted in Figure 1.66 [Hashemi]. First, leading-zero detection is performed on each input to get the position of its leading one. Then, after input unbiasing, the k-bit multiplication is performed. Finally, the result is shifted using the sum of the leading-zero detection values of the two inputs.

Designing DRUM is therefore about finding a good compromise for k. Decreasing k by 1 diminishes the size of the effective multiplier, but increases the size of the multiplexers of Figure 1.66, and the leading-zero detector needs to be able to count one more potential leading zero. Also, the maximal final shift is increased by 1. Generally, decreasing k decreases area but may increase delay. To keep the benefit of using leading-zero detection, the inputs must be unsigned. Therefore, a signed version of DRUM must unsign the inputs using a two's complement transformation beforehand, and manage the sign of the output depending on the signs of the inputs, which can be achieved with a small overhead.

Using Synopsys Design Compiler and Mentor Graphics ModelSim with 65-nm standard cell libraries, area and power for a 16-bit DRUM as a function of k are depicted in Figure 1.67 [Hashemi]. These simulations show that substantial reductions in area and power are reached. For a 16-bit DRUM with k = 3, more than 80% of area and more than 90% of power are saved with respect to a 16-bit accurate multiplier (the structure of the reference multiplier being unknown). For k = 8, nearly 50% of area and power are saved. The intermediate values of k show near-linear savings. These savings need to be put in relation with the errors being made. In [Hashemi], four error metrics are explored, all relative to the accurate result.
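Before quantifying those errors, a minimal behavioral sketch of the DRUM datapath may help fix ideas. It follows the structure of Figure 1.66 under our own assumptions (unsigned inputs, illustrative names) and is not the authors' implementation.

```cpp
#include <cstdint>

// Behavioral sketch of an unsigned DRUM with a k-bit exact core multiplier
// (after Figure 1.66). Names and conventions are ours, not from [Hashemi].
uint64_t drum(uint32_t x, uint32_t y, int k) {
    auto lod = [](uint32_t v) {               // position of the leading one
        int p = 0;
        while (v >>= 1) ++p;
        return p;
    };
    auto extract = [&](uint32_t v) -> uint32_t {
        int p = lod(v);
        if (p < k) return v;                  // short value: taken as-is
        return (v >> (p - k + 1)) | 1u;       // k MSBs, LSB forced to 1 (unbiasing)
    };
    int px = lod(x), py = lod(y);
    int shift = (px >= k ? px - k + 1 : 0)    // significance dropped on x
              + (py >= k ? py - k + 1 : 0);   // significance dropped on y
    uint64_t core = uint64_t(extract(x)) * uint64_t(extract(y)); // exact k-bit core
    return core << shift;                     // restore legitimate significance
}
```

With this datapath in mind, the error metrics can now be defined.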
The relative error RE is defined by

    RE = (Z − Ẑ) / Z,    (1.41)

where Z is the exact result of the 16-bit multiplication X × Y and Ẑ is the result of the same multiplication using DRUM. The four explored error metrics are then defined by

    Max_RE = max_{i∈S} |RE_i|,  MA_RE = (1/N_S) Σ_{i∈S} |RE_i|,  µ_RE = (1/N_S) Σ_{i∈S} RE_i,  σ_RE = sqrt((1/N_S) Σ_{i∈S} (RE_i − µ_RE)^2),    (1.42)

where Max_RE is the maximum absolute relative error, MA_RE the mean absolute relative error, µ_RE the mean relative error (or relative error bias), and σ_RE the standard deviation of the relative error. S represents the set of all possible inputs and N_S the size of this set. It is important to notice that all these metrics are relative to the accurate output, as this family of metrics is the most reliable when speaking about FlP-like error. Indeed, as the multiplier is "sliding" on the inputs, the error made is always relative to the amplitude of the inputs. Table 1.31 gives the values of these metrics for the same 16-bit versions of DRUM as those of Figure 1.67.

As expected, the error is larger when k is smaller. For k = 3, the maximum relative error reaches 56%, which is high but much smaller than for most approximate operators, whose errors can propagate up to the MSB in case of a broken carry chain. Thanks to the unbiasing method, the error bias is very low and decreases as k increases. The general amplitude of the error, represented by MA_RE and σ_RE, is generally low. As a conclusion, the DRUM operator shows interesting area and power benefits, leveraging a FlP multiplication style applied to fixed-significance data. As an example, for a 16-bit DRUM with k = 6, more than 60% of area and more than 70% of power are saved, while the error stays very tight: the error bias is nearly zero, while the mean absolute relative error is only 1.47%, with a maximum at 6.31%. DRUM is a scarce example of an approximate operator with important savings producing often-erroneous but low-amplitude errors.

In this chapter, besides the classical floating-point and fixed-point paradigms, a subset of approximate operators was described, all chosen for being different from one another and for basing their designs on different techniques of integer addition and multiplication. The list drawn up in this chapter is far from exhaustive, but it tries to cover the main stakes of approximate operators. The study of the existing literature also leads to a more bitter observation: most presented operators do not come with enough results about their impact on real-life applications. Some of them only come with stand-alone results using convenient metrics hiding high error spikes, and others are tested on applications too simple to draw definitive conclusions. In this document, we intend to provide methods, tools and conclusions about the general advantages and drawbacks of these operators, which are all different from one another. It is also important to note that the approximation techniques presented in this chapter are not exhaustive, though they are the main ones and lay the foundation for the remainder of this thesis. A more complete survey of existing approximation techniques at many levels can be found in ["A survey of techniques for approximate computing"]. The high number of existing techniques, enabled at different levels from algorithm design to the physical layer, will eventually need to be unified into a single general technique allowing to take advantage of the best of each through cross-level design.
Chapter 2

Leveraging Power Spectral Density in Fixed-Point System Refinement

The first contribution of this thesis, after the literature study and comparison of approximate operators in the previous chapter, is a novel method for system-level optimization in FxP arithmetic. This work led to a paper at the DATE'16 conference [Barrois].

Motivation for Using Fixed-Point Arithmetic in Low-Power Computing

Signal processing applications popularly use fixed-point data types for implementation. The choice of fixed-point data types is usually driven by cost constraints such as power, area and timing. The objective of fixed-point refinement during the design process is to make sure that the chosen data types are precise enough to achieve the expected quality of computation while minimizing the cost constraint. A lower quality of computation is acceptable either because error correction mechanisms are explicitly defined as a part of the system, or because user perception defines the lower bound on the quality of the output, or both. For instance, video CODECs such as H.264, popularly used for wireless transmission, allow a certain amount of errors on the channel, which can be corrected by error-resilience mechanisms [Alajel], or which remain invisible because the human eye is insensitive to some errors [Wu]. All these layers of error resilience mean that using approximations for the concerned computations can bring significant gains in area, time and/or energy.

A classical way to approximate a computation process is to use fixed-point arithmetic. Indeed, the representation of fractional numbers by integers ensures faster and more energy-efficient arithmetic computations, as discussed in the previous chapter, and the design of the operators requires significantly less area. The most important drawback of using an approximated arithmetic is the need to manage the induced computation errors. The errors with fixed-point data types are classified into two types, arising from finite precision on one hand and finite dynamic range on the other. Although the impact of errors due to violation of the finite dynamic range is more pronounced, these errors can be mitigated by range analysis techniques using affine arithmetic, interval arithmetic, or more complex statistical techniques such as [Özer]. In spite of allowing for a good dynamic range, the lack of precision causes errors that are perceived as bad quality of computation. In the case of wireless applications, this can be measured as Bit-Error-Rate (BER), in image and signal processing as Signal-to-Quantization-Noise Ratio (SQNR), and, in general, as quantization noise power. Measuring the impact of finite precision on the output quality of computation is discussed in this chapter.

Commercial tools for fixed-point accuracy evaluation include C++ fixed-point libraries (e.g. ac_fixed from Mentor Graphics used with Catapult-C, or sc_fixed from SystemC) and the Matlab fixed-point design toolbox. These tools are primarily based on facilitating FxP simulation with user-defined word-lengths, using software FxP constructs and libraries.
Although very useful, evaluation by simulation can be very time consuming. The time required for FxP evaluation grows in proportion with the number of FxP variables and with the input sample size. Using the analytical approach for accuracy evaluation, the noise power is obtained by evaluating a closed-form expression as a function of the number of bits assigned to the various signals in the system. This approach requires a one-time effort for arriving at the closed-form expression for a given system. These analytical techniques can be handy but are generally limited in applicability to linear and some types of non-linear systems (referred to as smooth operations). The analytical technique evaluates the first two moments of the quantization noise sources and propagates them through the Signal Flow Graph (SFG) from all noise sources to the system output. On relatively small systems, the evaluation of path functions can be accomplished manually. As the system complexity grows, automation support is required, and for very large systems even the automation can prove painstakingly slow. Therefore, several divide-and-conquer approaches have been proposed [Parashar][Novo] to overcome the apparent complexity of large systems, which respectively suffer from loss of information or from the need to enumerate all paths in the graph.

With the method described in this chapter, we provide an alternative analytical accuracy evaluation approach for use with hierarchical techniques applied to LTI systems. This technique captures the information associated with the frequency spread of the quantization noise power by sampling its PSD. We show how such information can be used to break the complexity of evaluating the quantization noise at the output of large signal processing systems. The contributions brought by our work are as follows:

• quantifying the accuracy of the proposed technique based on PSD propagation, and
• demonstrating its high scalability at system level, resulting from linear time complexity.

The rest of this chapter is organized as follows. Section 2.2 reviews analytical methods for algorithm-level analysis of the errors due to finite arithmetic effects in systems using fixed-point arithmetic. In Section 2.3, the proposed estimation method based on PSD is introduced and developed for general systems. Finally, in Section 2.4, two representative signal-processing benchmarks are chosen to showcase the efficiency of the proposed method.

Related work on accuracy analysis

The loss in accuracy due to the finite precision imposed by the fixed-point numbering format has been evaluated using several metrics. The most common among them are the error bounds and the Mean Square Error (MSE). While the first metric is used to determine the worst-case impact, the MSE is an average-case metric, very useful in tuning the average performance of the system under consideration in terms of its energy and timing. Although the finite precision accuracy should ideally be compared with infinite precision (or arbitrary precision) numbers, it is impossible to do so while simulating on a computer.
So, the IEEE double-precision floating-point format, whose dynamic range and precision are several orders of magnitude higher than those of typical fixed-point word-lengths, is considered as the reference for all comparison purposes and may be referred to as infinite precision in what follows. In the literature, the MSE is the mean square value of the differences between the computations of the fixed-point system and those of the reference system implementation, extracted as shown in Figure 2.1, and is also referred to as quantization noise power. This is a scalar quantity and it changes as a function of the FxP word-lengths.

Evaluation of the quantization noise power at the output of a fixed-point system is performed either by simulation-based techniques or by analytical techniques. Simulation-based techniques are universal and can be made use of as long as there are enough computational resources. By nature, simulation-based techniques take longer and are subjected to input stimulus bias. Analytical techniques, on the other hand, provide a closed-form expression for calculating the quantization noise power as a function of the FxP word-lengths. However, they are limited by their dependence upon the following properties [Widrow]:

1. Quantization noise and the signal are uncorrelated.
2. Quantization noise at its source is spectrally white.
3. A small perturbation at the input of an operation generates a linearly proportional perturbation at the output of the operation.

The first two properties pertain to the quantization noise source: under the conditions defined by the Pseudo-Quantization Noise (PQN) model, the noise and the signal are uncorrelated, and even though the signal itself may be correlated in time, the noise signal is not [Widrow]. The independent and uniform nature of this noise was already discussed in Section 1.3.2. The representation of the quantization error as an additive, uniformly distributed white noise is depicted in Figure 1.6. The third property relates to the application of "perturbation theory" [Constantinides]. It is possible to propagate quantization noise through an operation as long as the function defined by the operation can be linearized.

Consider a binary operator whose inputs are x and y and whose output is z. If the input signals are perturbed by b_x and b_y to obtain x̂ and ŷ respectively, the output is perturbed by the quantity b_z to obtain ẑ. In other words, as long as the fixed-point operator is smooth, the impact of small perturbations at the input translates into a perturbation at the output of the operator without any change in its macroscopic behavior. In the realm of perturbation theory, the output noise b_z is a linear combination of the two input noises b_x and b_y such that

    b_z = ν_1 b_x + ν_2 b_y,    (2.1)

where ν_1 and ν_2 are obtained from a first-order Taylor approximation [Constantinides] of the continuous and differentiable function f:

    ẑ = f(x̂, ŷ) ≈ f(x, y) + (∂f/∂x)(x, y) · (x̂ − x) + (∂f/∂y)(x, y) · (ŷ − y).    (2.2)

Therefore, the expressions of the terms ν_1 and ν_2 are given as

    ν_1 = (∂f/∂x)(x, y),  ν_2 = (∂f/∂y)(x, y).    (2.3)
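For instance, for a multiplication z = f(x, y) = x × y, Equation 2.3 gives ν_1 = y and ν_2 = x, so the output noise is approximated by b_z ≈ y b_x + x b_y (with time-varying coefficients); for an addition z = x + y, ν_1 = ν_2 = 1 and the input noises simply add up.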
Following the third property of quantization noise enumerated above, a further assumption for Equation 2.1 to hold true is that the noise terms b_x and b_y are uncorrelated with one another. It has to be noted here that the terms ν_1 and ν_2 can be time-varying. This method is not limited to binary operations only. In fact, it can be applied at the functional level, with any number of inputs and outputs, and to all operators on a given data path in order to propagate the quantization noise from all error sources to the output.

When the above conditions hold true, the output quantization noise power of the system is obtained by linear propagation of all quantization noise sources [Menard] as

    E[b_y^2] = Σ_{i=1}^{N_e} K_i σ_i^2 + Σ_{i=1}^{N_e} Σ_{j=1}^{N_e} L_ij µ_i µ_j,    (2.4)

where E[·] is the expectation function and b_y is the error signal associated with the corresponding system output signal y. The system under consideration consists of N_e fixed-point operations, and the i-th operation generates a quantization noise b_i with mean µ_i and standard deviation σ_i. Figure 2.2a illustrates this noise propagation. The terms K_i and L_ij are constants that depend on the path function h_i from the i-th source to the output y, and are calculated as

    K_i  = Σ_{k=−∞}^{∞} E[h_i^2(k)],    (2.5)
    L_ij = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E[h_i(k) h_j(l)].    (2.6)

Hierarchical techniques for the evaluation of quantization noise power have been proposed [Weijers][Parashar] to overcome the scalability concerns associated with fixed-point systems. In this approach, the system components are evaluated one at a time and then combined by superposition at the output (Figure 2.2b, blind propagation of µ_i, σ_i^2). If a simulation-based technique is used to evaluate the quantization noise power at the output, the hierarchical evaluation process helps parallelize the simulation of each of the components. When employing an analytical technique such as the one in Equation 2.4, the number of paths required to be evaluated is reduced dramatically. This reduction is very interesting from the design automation perspective. The paths are broken around the system component boundaries, and each component can be evaluated separately, thereby reducing the burden of semantic analysis. However, it has to be borne in mind that the application of the technique in Equation 2.4 requires that the quantization noise satisfies the three properties enumerated above and also that the noise signals are always uncorrelated, which is often false and can cause severe errors. The method in this chapter addresses this problem and suggests a technique that exploits the information hidden in the PSD of the quantization noise [Widrow][Jackson].

PSD-based accuracy evaluation

It is clear from the state of the art that there exist two types of limitations in the existing accuracy evaluation techniques.
While the analytical technique greatly reduces the simulation time, its preprocessing time can grow exponentially, requiring hierarchical techniques such as [Parashar] to be employed. However, these techniques introduce the problem of inaccuracy by approximating error quantities with just a mean and a variance. This is especially true when large systems are broken down into smaller sub-systems for analysis. For illustration, consider the system shown in Figure 2.2b. The system S consists of several sub-systems marked as Op_1...5. The noise generated at the output of each sub-system, correspondingly marked as b_1...5, is propagated (blue arrows) through several parts of the system for the calculation of the moments of the error at the system output. Supposing there are memory elements in Op_1 and Op_2, propagating the noises b_1 and b_2 (say) through Op_3 by just using the first two moments of the quantization noise (as described in the previous section) can lead to errors in the estimates at the output of Op_3, which can be further amplified by Op_5 all the way to the output O. Similarly, the path through Op_4 also influences the error of the estimate through Op_5, leading to very large error margins for O. In order to analytically arrive at the moments of the system output, additional information pertaining to the quantization noise at the points of convergence of two or more noise paths is required. We refer to the methods that do not consider PSD information (such as [Weijers]) as PSD-agnostic methods. In this section, we propose a technique which efficiently makes use of the PSD of the quantization noise for evaluating the error at the output of a system, and which is scalable both in terms of accuracy and system size.

PSD of a quantization noise

A large signal processing system can be divided into a number of sub-systems, each characterized by its transfer function. The transfer function defines the magnitude and phase relationship of the path for input signals of different frequencies. Since our interest is only the noise power, we ignore the phase spectrum and consider only the magnitude spectrum, or PSD. With the knowledge of the PSD distribution of the input and of the system PSD profile, it is possible to calculate the PSD of the output. The PSD S_xx(F) of a signal x at any normalized frequency F is defined as the Fourier transform (F{·}) of the autocorrelation function of x:

    S_xx(F) = F{x(n) · x*(n + m)},    (2.7)
    S_xx(F) = F{x} · F{x}* = |F{x}|^2.    (2.8)

With the knowledge of the PSD of x, the MSE and the mean of x are obtained by summing up the power in each frequency component:

    E[x^2] = ∫_{−1}^{1} S_xx(F) dF = µ^2 + σ^2,  S_xx(0) = µ^2.    (2.9)

The PSD of the quantization noise generated by a fixed-point data type with d fractional bits is (as discussed in Section 2.2) white, except for F = 0, which depends on the mean. Discretizing the PSD into N_PSD regular bins including the DC component, the PSD of a generated quantization noise b_x is given by:

    S_bx(F) = (1/N_PSD) σ^2 if F ≠ 0,  µ^2 if F = 0,    (2.10)

where the mean µ and variance σ^2 for both truncation and rounding modes with d bits are as given in [Widrow].
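As a small illustration, the discrete noise PSD of Equation 2.10 can be built directly. The sketch below is ours and assumes the classical uniform-noise moments: q = 2^{−d}, σ^2 = q^2/12, with µ = 0 for rounding and µ = −q/2 for truncation.

```cpp
#include <cmath>
#include <vector>

// Builds the discrete PSD of Equation 2.10 for a quantization noise source
// with d fractional bits, sampled on n_psd bins (bin 0 carries the DC term).
// Classical uniform-noise moments are assumed: q = 2^-d, sigma^2 = q^2 / 12,
// mu = 0 for rounding and mu = -q / 2 for truncation.
std::vector<double> noise_psd(int d, int n_psd, bool truncation) {
    double q = std::pow(2.0, -d);
    double mu = truncation ? -q / 2.0 : 0.0;
    double var = q * q / 12.0;
    std::vector<double> psd(n_psd, var / n_psd); // white part, spread over bins
    psd[0] = mu * mu;                            // DC bin holds the squared mean
    return psd;
}
```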
PSD propagation across a fixed-point LTI system

In the method developed in this chapter, we focus on linear time-invariant (LTI) systems, which constitute the major part of signal processing systems. An LTI system can be represented by a signal flow graph (SFG) composed of boxes corresponding to sub-systems defined by their impulse response and delimited by additive quantization noise sources such as the one described in Section 2.2. The proposed PSD evaluation method then consists of three steps:

1. Detect cycles in the SFG and break them to obtain an equivalent acyclic SFG that can be used for noise propagation, using classical SFG transformations [Ogata]. An example of SFG breaking is given in Figure 2.3: given the original cyclic SFG of Figure 2.3a, the loop created by the H_3 feedback is flattened as shown in Figure 2.3b.
2. The discrete PSD of each signal processing block and of the additive noise associated with the input signal is calculated on N_PSD points.
3. The noise PSD parameters are propagated from inputs to outputs, using Equations 2.11 and 2.14.

Let x be the input of a system of impulse response h. The output y is then obtained by the convolution (∗) of x and h as y = x ∗ h. In the Fourier transform domain, this can be written Y = X · H, where Y = F{y}. Following this, the output PSD S_yy(F) is obtained as [Jackson]

    S_yy(F) = S_xx(F) · ‖H(F)‖^2,    (2.11)

where ‖H(F)‖ is the magnitude response of the system h. In any signal processing system, the quantization noise sources from various inputs converge at either an adder or a multiplier. Considering the LTI subset, multiplications are nothing but multiplications by constants, and hence correspond to linear scaling factors for the noise powers. In the case of adders, if the sum of two quantities x and y is obtained as z = x + y, then S_zz(F) is given by

    S_zz(F) = S_xx(F) + S_yy(F) + S_xy(F) + S_yx(F),    (2.12)

where S_xy(F) is the cross-correlation spectrum of x and y, obtained as

    S_xy(F) = F{x(n) · y*(n + m)}.    (2.13)

The complexity of propagating the PSD parameters through the system essentially depends on the number of discrete points N_PSD. The total time for the evaluation of the PSD parameters can be split into two parts: first, τ_pp, corresponding to the preprocessing stage, which involves evaluating the N_PSD-point Fourier transform of the transfer function of the sub-systems, with complexity O{N log(N)}; second, the actual time required for evaluation, τ_eval, which is O{N} from Equations 2.11 and 2.14. τ_eval is the time required for evaluating the accuracy for various inputs, and the evaluation can be repeated, say N_eval times, without any preprocessing. Since the time spent on preprocessing is a one-time effort, the actual evaluation time is dominated by τ_eval, which is linear in N_PSD.

Experimental Results of the Proposed PSD Propagation Method

In this section, the proposed method is evaluated using a three-step approach. First, we show experimentally that the estimates obtained by the proposed PSD technique are close to simulation. Then, we present the impact of the number of samples N_PSD used to capture the PSD information on the accuracy, as well as on the execution times of the proposed approach. Finally, we discuss the improved accuracy of the estimation and compare it with the result obtained by the PSD-agnostic method.
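Conceptually, each benchmark evaluation below chains exactly the two rules of Equations 2.11 and 2.12. The following C++ transliteration is our own sketch (the actual experiments are implemented in Matlab); the squared magnitude response |H(F)|^2 is assumed to be precomputed on the same N_PSD bins, and the cross-spectra are passed explicitly, with a zero vector when the noises are known to be uncorrelated.

```cpp
#include <vector>

// Equation 2.11: propagate a noise PSD through an LTI block whose squared
// magnitude response |H(F)|^2 is sampled on the same N_PSD bins.
std::vector<double> through_filter(const std::vector<double>& s_xx,
                                   const std::vector<double>& h_mag2) {
    std::vector<double> s_yy(s_xx.size());
    for (std::size_t f = 0; f < s_xx.size(); ++f)
        s_yy[f] = s_xx[f] * h_mag2[f];
    return s_yy;
}

// Equation 2.12: combine two noise PSDs at an adder. The cross-spectrum term
// (S_xy + S_yx) is passed in explicitly; use a zero vector when the two
// noises are known to be uncorrelated.
std::vector<double> at_adder(const std::vector<double>& s_xx,
                             const std::vector<double>& s_yy,
                             const std::vector<double>& cross) {
    std::vector<double> s_zz(s_xx.size());
    for (std::size_t f = 0; f < s_xx.size(); ++f)
        s_zz[f] = s_xx[f] + s_yy[f] + cross[f];
    return s_zz;
}
```

The output noise power is then recovered by summing the bins, following Equation 2.9.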
All experiments are performed using Matlab R2014b. The MSE deviation E_d is chosen as the comparison metric in all these experiments. It is calculated as

    E_d = (E[err_sim^2] − E[err_est^2]) / E[err_sim^2],    (2.15)

where err_sim is the error measured by simulation and err_est the error obtained by the proposed analytical estimation. With this metric, an accuracy equivalent to less than one bit corresponds to the range E_d ∈ (−75%, 300%), which can be trivially proven by considering the error power relative to two successive word-lengths. Beyond these limits, the estimation is unmistakably not suitable for the fixed-point refinement process, as the finally selected word-length would not meet the maximum error requirements. In the following sections, we first present the experiments and then provide a discussion of the results obtained.

Experimental Setup

Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) Filters

The first experiment consists in evaluating the PSD of single Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filter blocks as described in Section 2.3. The quantized input signal is propagated through the chosen filter, and the output quantization noise power is measured by simulation and by the proposed PSD method. The error in the estimates of the noise power E_d is obtained on a total of 147 FIR and 147 IIR filters, obtained by combining different functionalities (band-pass, low-pass and high-pass) with various numbers of taps involving memory elements: between 16 and 128 taps for the FIR filters, and from 2 to 10 taps for the IIR filters. Simulation is run on 10^6 inputs and PSD estimation is performed on 1024 samples.

Frequency Domain Filtering

The system described in Figure 2.4 is a frequency-domain band-pass filter. It consists of a 16-tap high-pass FIR filter H_hp followed by a frequency-domain filter, composed of a 16-point FFT block, a multiplication by the 16 coefficients of a low-pass FIR filter H_lp, and an inverse FFT. The frequency-domain filter applies the filtering using the popular overlap-save method. Simulations are carried out on a set of 10^7 input samples.

Daubechies 9/7 Discrete Wavelet Transform

The third experiment is a Daubechies 9/7 Discrete Wavelet Transform (DWT) applied to sample images taken from standard image databases and from the Brodatz texture images [Brodatz] generally used for evaluating JPEG2000 compression algorithms. Two levels of sub-band decomposition are performed on the sample images using the hierarchical signal flow graph. For the encoder, the first level of filtering and downsampling is applied on the rows and the second one on the columns. The second-level coding is applied on the low-pass components (x_ll). Symmetrically, the decoder first performs upsampling and filtering on the columns, followed by a second upsampling and filtering on the rows. For this experiment, the fractional word-lengths d of all variables are set to the same value and are varied across 8 to 32 bits in steps of 4.

In the case of FIR filters, E_d stays within an absolute value of 0.37% in comparison with simulation. In the case of IIR filters, the E_d bounds are higher because of their recursive nature and the high filter orders tested. FIR and IIR filters result in an average absolute E_d of 0.11% and 9.44% respectively, showing a generally very accurate estimation. For both, the accuracy is in any case largely below the one-bit equivalent. Moreover, the classical flat estimation [Menard] applied to the same filters gives exactly the same results in terms of E_d, showing their strict equivalence on an elementary filtering block.
Figure 2.6 presents the results for the two other experiments when the number of fractional bits is varied between 8 and 32 bits, with a maximum deviation in error of only about 10%. The maximum error in the estimates is by far too small to have an impact on the final optimization.

Influence of the Number of PSD Samples

The proposed PSD estimation method achieves very good accuracy with a large number of PSD samples. However, as discussed in Section 2.3.2, a larger number of samples N_PSD increases the evaluation time. It is therefore of interest to find out how this choice affects the estimation accuracy. To observe this, in both examples chosen in this section, the fixed-point error is obtained both by simulation and by the proposed PSD method, with different values of N_PSD in powers of 2 ranging from 16 to 1024. In this experiment, the fractional bit-width d is uniformly set to 32 for all signals. The output error power deviation E_d for this experiment is plotted versus N_PSD in Figure 2.7. As expected, increasing the number of PSD samples improves E_d. For N_PSD = 16, E_d is slightly below −8% for the frequency filtering system, and slightly above 1% for the DWT system. Then, both curves tend to a value within ±1%. The accuracy obtained is better than the sub-one-bit objective. The accuracy of the estimates obtained using the proposed method is a function of the system complexity.

Comparison with PSD-Agnostic Methods

The deviation of the error estimates between the proposed and the PSD-agnostic method is presented in Table 2.2. The maximum error is obtained with N_PSD = 16 and the minimum error with N_PSD = 1024. In all cases, it can be observed that the PSD-agnostic method is much more erroneous than even the maximum error obtained using the proposed technique. It has to be noted that, for the DWT example, the PSD-agnostic method renders an error of 610%. The PSD-agnostic method is 4.5× worse in its estimate for frequency filtering, and 554× worse for the DWT. In the best case, these values rise to 3.5×10^3× and 6.7×10^4× respectively.

Frequency Repartition of Output Error

Another interesting feature inherent to the proposed estimation method is that it provides the frequency repartition of the errors, which is relevant for the refinement of fixed-point signal processing systems and which is not estimated by conventional methods. Indeed, the classical flat method is not able to give any clue about the frequency repartition of the error, which is capital information in signal processing. For image compression, for instance, accuracy is more likely to be relaxed in low frequencies than in high ones, human vision being less sensitive to slight variations of colors than to tiny details. The proposed PSD estimation is able to give the frequency repartition of this error in a very precise and fast way. As an illustration, the frequency repartition of the output error of the DWT benchmark was both estimated and simulated, with all data fractional parts set to 12 bits. Black-to-white values represent log-normalized low-to-high errors; the center of the image represents the low frequencies, while the borders represent the high ones. These visual representations show that the proposed method achieves a very good estimation of the frequency repartition of the output error, taking only a few milliseconds with the PSD method, whereas simulation on 72 grayscale images took several hours of computation using Matlab.
Such fast and accurate information can be used for refining the system word-lengths to reach a better output quality, basing the refinement not only on the output error intensity but also on which frequency repartition is best for the application. Using the PSD method, different versions of the application can be evaluated in terms of error frequency repartition, allowing code transformations that reduce the impact in the relevant frequency bands. The frequency repartition information can then be modified by allowing the introduction of errors in one or several given parts of the system to be identified.

Conclusions about the PSD Estimation Method

This chapter described the characterization and propagation of quantization noises in a fixed-point signal processing system using its power spectral density. This method is applied at block level, which dramatically reduces the complexity of fixed-point system evaluation when compared to the classical flat estimation method. It therefore leads to a significant speed-up for accuracy evaluation, from 3 to 5 orders of magnitude when compared to Monte Carlo simulation in the tested examples. The results demonstrate that the proposed estimation method leveraging spectral information achieves a less-than-one-bit accuracy with a large margin. They also show that complexity-equivalent PSD-agnostic techniques evaluate the accuracy with large errors. The proposed PSD technique also allows the observation of useful frequential properties of the output error that could not be obtained with conventional scalar methods. This work was published at the DATE'16 conference [Barrois].

Chapter 3

Fast Approximate Arithmetic Operator Error Modeling

In this chapter, techniques based on the propagation of Bit-Error Rate (BER) are presented. First, the bitwise-error rate propagation method is proposed and applied to approximate operators. This method allows for fast analytical propagation of approximate operator errors, with a low memory cost. The model is trained by simulation and converges fast. Then, attempts to use approximate operators for the simulation of Voltage Over-Scaling (VOS) effects are discussed.

The Problem of Analytical Methods for Approximate Arithmetic

As pointed out in Chapter 1, approximate operators come in many different flavors, each with its own error behavior. We discussed in Chapter 2 the importance of optimizing a computing system so that the operators with the lowest cost meeting the accuracy requirements are used, so that no time, area and/or energy is uselessly spent. For FxP errors, modeling the error as an additive uniform white noise allows an efficient propagation of the mean and variance of the noise [Widrow]. However, the nature of approximate operator errors is very different from FxP noise. To illustrate this specificity, Figure 3.1 shows the error maps of several different 8-bit approximate operators previously mentioned, each computed against the corresponding 8-bit exact operator. The uniform nature of the noise produced by a 4-bit FxP width reduction appears as a very regular striped pattern on error map 3.1a, whereas all the others are very different. The ACA error map 3.1b shows a fractal behavior, with nested error triangles. The AAM error map 3.1c has four areas with very different error amplitudes and patterns.
Finally, the DRUM error map 3.1d, with its floating-point-like behavior, has an error pattern which is transformed depending on the amplitude of the inputs. All these differences between Approximate (Apx) operators and FxP, and between the different Apx operators themselves, make it hard to find purely analytical models to estimate their impact on the output error of an application. The only efficient way to estimate their impact is therefore simulation. Hybrids between analytical models and simulation, referred to as pseudo-simulation, have been developed, but they are inefficient because they are often heavier and less accurate than simulation.

In [Huang], an analytical propagation of the error Probability Density Function (PDF) of approximate operators is proposed. It leverages a modified interval arithmetic, representing and propagating the PDFs of the signal, the signal error and the operator errors by sets of intervals. This method allows fast estimation compared to Monte Carlo simulation. However, it has a major limitation: the model for the propagation of the error PDF, which is specific to each operator, potentially costs a lot of memory. Indeed, given two input error PDFs with k intervals each, k^2 resulting values must be kept in memory, and to be accurate, k must be large enough to be representative of the real error PDF. For an n-bit value, a perfect representation of the corresponding error PDF requires k = 2^n. Therefore, for a 32-bit operator, perfect accuracy would require 2^{2×32} = 2^{64} values to be stored, that is, about 10^{19}. Of course, a much lower value must be chosen for k, which implies important approximations in the PDFs, and also in the propagation model. Therefore, this model is likely to either diverge quite fast along propagations or be memory-hungry. The next section proposes a lighter model to propagate the Bitwise-Error Rate (BWER) caused by approximate operators.

Bitwise-Error Rate Propagation Method

This section presents the Bitwise-Error Rate (BWER) propagation method. First, the main principle of the BWER propagation method is described. Then, the data structure used for propagation and the training algorithm are discussed. Finally, the propagation algorithm is described.

Main Principle of the BWER Propagation Method

The BWER propagation method is an analytical method which consists in estimating the output BWER of a system composed of approximate integer operators. BWER is defined as the BER associated with each bit position of a binary word. Given an n-bit binary word x = {x_i}, i ∈ ⟦0, n−1⟧, the BWER is the vector BWER(x) = {p_i}, i ∈ ⟦0, n−1⟧, composed of n real numbers in the range [0, 1] corresponding to the probability p_i for x_i to be erroneous. Given an approximate operator Op whose inputs are x and y of width n and whose output is z of width m, the BWER propagation method aims at determining analytically the output BWER vector, knowing both input BWER vectors, as depicted in Figure 3.2. Then, considering a network of approximate operators, the BWER vectors can be propagated operator by operator from the inputs, considered as accurate, to the outputs. To be time-efficient, the propagation must be simulation-free, so the model must be completely analytical.

Storage Optimization and Training of the BWER Propagation Data Structure

To propagate the BWER across an operator, a BWER transfer function must be built.
For this, the impact of an error at any bit position of both inputs on any bit of the output must be determined. Let e_{x,i} be the event "the i-th position bit of x is erroneous". Considering n-bit inputs x and y, let E_{x,y,err_id} be the vector of such events, E_{x,y,4^n−1} being for instance the event "x and y have all their bits erroneous" (Equation 3.1). In this notation, err_id is the integer value represented by the binary word in which the event e_{x,i} is represented by 1 and the opposite event ē_{x,i} is represented by 0, read left to right from MSB to LSB. E.g., if x and y are 2-bit inputs, E_{x,y,3} represents the event vector {ē_{y,1}, ē_{x,1}, e_{y,0}, e_{x,0}}, and E_{x,y,6} represents {ē_{y,1}, e_{x,1}, e_{y,0}, ē_{x,0}}. For inputs of width n, there are 4^n possible event vectors, E_{x,y,0} being the vector for which all input bits are correct and E_{x,y,4^n−1} the one for which all input bits are erroneous. From this point on, these vectors will be referred to as Error Event Vectors (EEVs). Therefore, to know the impact of an error on any input bit of n-bit inputs on an m-bit output, the following set of probabilities must be determined and stored:

    {P(e_{z,j} | E_{x,y,i})}, i ∈ ⟦0, 4^n − 1⟧, j ∈ ⟦0, m − 1⟧.    (3.2)

This set has a size of m × 4^n real numbers. The cost for storing this data in memory, considering a single-precision floating-point representation, is given in Table 3.1. It is clear that storing such an amount of data is not scalable, even for small bit-widths such as 16 bits. Moreover, the time needed to train such a volume of data would also be huge. Therefore, reductions of this data structure are necessary.

First, for arithmetic operators, the output bit of significance j only depends on the input bits of significance i ≤ j. This already allows for an important reduction of the required storage. Indeed, the number of values to be stored is now (m − n + 1/3)·4^n − 1/3 instead of m × 4^n, as the set of conditional probabilities to be stored (Equation 3.2) becomes:

    {P(e_{z,j} | E_{x,y,i})}, i ∈ ⟦0, 4^{min(j,n)} − 1⟧, j ∈ ⟦0, m − 1⟧.    (3.3)

As an example, for a 16-bit addition, 22 GB are necessary instead of the previous 292 GB. In spite of a 92% memory reduction, this is still too high to be decently implemented. This first reduction implies no approximation. However, it is possible to dramatically reduce the size of the data structure by allowing a small amount of inaccuracy. For this, the hypothesis is made that any output bit of significance j depends at most on the input bits of significance i ∈ ⟦j − k + 1, j⟧, where k is arbitrary. With this method, the set of conditional probabilities of Equation 3.3 becomes:

    {P(e_{z,j} | E_{x,y,i})}, i ∈ ⟦0, 4^{min(j,k)} − 1⟧, j ∈ ⟦0, m − 1⟧.    (3.4)

This approximation is legitimated by two facts:

1. As already stated, the probability for a carry chain to be long is very small [Schilling]. Therefore, a low-significance input bit has an impact on a much-higher-significance output bit only in a very small minority of cases.
2. A vast majority of approximate arithmetic operators are based on cutting carry propagations at a certain limit l. Therefore, choosing k > l induces no approximation at all.

Choosing an arbitrary k reduces the storage cost to (m − k + 1/3)·4^k − 1/3 real numbers. Table 3.2 gives the resulting storage costs, which fall to decent values. Moreover, knowing the parameters of each considered operator, k can be minimized so that there is no approximation in the model. E.g., for ACA_16(5), k = 6 can be chosen with no compromise, requiring only 186 kB of storage.
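As a sanity check on these figures, the accounting can be made explicit. The helper below is our own sketch, assuming one single-precision float per entry and, for the adder example, a 17-bit output for a 16-bit addition.

```cpp
#include <cstdint>
#include <iostream>

// Storage cost of the BWER conditional-probability structure for an operator
// with m output bits and window size k, following (m - k + 1/3) * 4^k - 1/3
// entries of 4 bytes each. Our sketch, not code from the thesis tool-chain.
uint64_t bwer_storage_bytes(int m, int k) {
    uint64_t p4k = 1ull << (2 * k);            // 4^k
    uint64_t entries = (p4k - 1) / 3           // bits j < k: sum of 4^j
                     + uint64_t(m - k) * p4k;  // bits j >= k: saturated window
    return 4 * entries;                        // single-precision floats
}

int main() {
    // ACA_16(5) with k = 6 and a 17-bit adder output: prints 185684 bytes,
    // matching the 186 kB figure quoted above.
    std::cout << bwer_storage_bytes(17, 6) << " bytes\n";
}
```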
Once k is selected, the model needs to be trained offline. Monte Carlo simulation with fault injection is used for this, using a functional model of the operator in C++, taken from the approximate operator library of ApxPerf 2.0 [2]. The extraction process of an observation of the EEV E_z corresponding to an observation of an input EEV E_{x,y} is depicted in Figure 3.3. The x and y inputs are randomly picked using Monte Carlo simulation, and the accurate operation Op_Acc is performed to obtain the corresponding exact output z = {z_j}, j ∈ ⟦0, m−1⟧. In parallel, random faults are injected in x and y by performing an XOR with fault injection vectors. These fault injection vectors produce the 2n-bit observation F_{x,y} of an EEV E_{x,y}. The approximate operation Op_Apx is then fed with the generated faulty inputs x̂ and ŷ, returning ẑ. Finally, the m-bit bitwise-error observation vector F_z = {f_{z,j}}, j ∈ ⟦0, m−1⟧, of an EEV E_z = {e_{z,j}}, j ∈ ⟦0, m−1⟧, is extracted from ẑ and z by an m-bit XOR.

Figure 3.3 - Extraction of binary error event vectors for BWER model training

The conditional probability of Equation 3.4 can be estimated by the ratio between the number of observations of the event e_{z,j} when the corresponding EEV E_{x,y,i} is simultaneously observed, referred to as (f_{z,j} = 1 | F_{x,y,i}), and the total number of observations of E_{x,y,i}:

    P(e_{z,j} | E_{x,y,i}) ≈ Σ_l (f_{z,j} = 1 | F_{x,y,i})_l / Σ_l (F_{x,y,i})_l.    (3.5)

The update process is illustrated on the 4-bit example of Figure 3.4, where k = 2. First, looking at the blue part, after the approximate operation, the output LSB is not faulty, therefore f_{z,0} = 0. As x̂_0 was not faulty and ŷ_0 was faulty, the corresponding observation of the input EEV is F_{x,y,2}. Thus, following Equation 3.5, the estimation of P(e_{z,0} | E_{x,y,2}) is modified by increasing the denominator by 1. Then, looking at the yellow part of Figure 3.4, the output observation is f_{z,1} = 1, meaning the output is erroneous. As k = 2, input ranks 1 and 0 are observed together. The corresponding observation (0, 1, 1, 0) is F_{x,y,6}. P(e_{z,1} | E_{x,y,6}) is modified by increasing the numerator by 1 (faulty output) and increasing the denominator by 1. The operation is repeated at significance positions 2 (green part) and 3 (red part), each time observing 2k = 4 input error vector bits. Following this method, the conditional probability data structure has m elements updated at each new training cycle. Finally, after a sufficient number of training cycles (discussed in Section 3.3.1), the model is trained and ready for propagation, which is discussed in the following section.

BWER Propagation Algorithm

The propagation of the BWER is performed the following way. Given two input BWERs B_x = {b_{x,i}}, i ∈ ⟦0, n−1⟧, and B_y = {b_{y,i}}, i ∈ ⟦0, n−1⟧, the equivalent conglomerate vector is

    B_{x,y} = {b_{y,n−1}, b_{x,n−1}, b_{y,n−2}, b_{x,n−2}, ..., b_{y,1}, b_{x,1}, b_{y,0}, b_{x,0}}.    (3.6)

The output BWER B_z = {b_{z,j}}, j ∈ ⟦0, m−1⟧, is then determined by

    b_{z,j} = Σ_{i=0}^{4^{min(j,k)}−1} α_{i,j} P(e_{z,j} | E_{x,y,i}),    (3.7)

where α_{i,j} is the probability of the event vector E_{x,y,i} to be true at significance j knowing B_{x,y}, referred to as P(E_{x,y,i} | B_{x,y}). For instance, with k = 2, if the four conglomerate BWER entries covering the window at significance 4 are 0.21, 0.14, 0.17 and 0.05, the probability of the EEV in which only the first and the fourth of these bits are erroneous is

    0.21 × (1 − 0.14) × (1 − 0.17) × 0.05 = 7.49×10^{−3}.

This operation then has to be iterated and summed over the other 15 values of i to determine b_{z,4}, and again for all j in ⟦0, m − 1⟧.

Results of the BWER Method on Approximate Adders and Multipliers

This section presents results about the BWER propagation method. These results were produced a few days before the redaction of this part of the document and are consequently not yet published.
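Before turning to these results, the core of the propagation step (Equation 3.7) can be sketched in C++. This is our illustration only: the trained table is assumed to be stored row-per-output-bit, and we scan a window of min(j+1, k) significances ending at j (the text writes 4^{min(j,k)} in Equation 3.7, but the 4-bit walkthrough above suggests the window covers significances j down to j − k + 1, which is the convention adopted here); the EEV bit interleaving follows Equation 3.6.

```cpp
#include <algorithm>
#include <vector>

// Sketch of one BWER propagation step (Equation 3.7). bx and by are the input
// BWER vectors, table[j][i] holds the trained P(e_{z,j} | E_{x,y,i}), and k is
// the window size. The interleaving of x and y bits inside the EEV index is
// our reading of Equation 3.6, not necessarily the tool's exact convention.
std::vector<double> propagate_bwer(const std::vector<double>& bx,
                                   const std::vector<double>& by,
                                   const std::vector<std::vector<double>>& table,
                                   int k) {
    const int m = static_cast<int>(table.size());
    auto prob = [](const std::vector<double>& b, int pos) {
        return pos < static_cast<int>(b.size()) ? b[pos] : 0.0; // no bit there
    };
    std::vector<double> bz(m, 0.0);
    for (int j = 0; j < m; ++j) {
        const int w = std::min(j + 1, k);        // significances j-w+1 .. j
        const int n_eev = 1 << (2 * w);          // 4^w error event vectors
        for (int i = 0; i < n_eev; ++i) {
            double alpha = 1.0;                  // alpha_{i,j} = P(E_{x,y,i})
            for (int t = 0; t < 2 * w; ++t) {
                const int pos = j - w + 1 + t / 2;
                const double p = (t % 2 == 0) ? prob(bx, pos) : prob(by, pos);
                alpha *= (i >> t & 1) ? p : 1.0 - p;
            }
            bz[j] += alpha * table[j][i];        // Equation 3.7
        }
    }
    return bz;
}
```

A network of operators is then evaluated by chaining such calls from the inputs to the outputs.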
First, the convergence speed of the trained data structure is studied in Section 3.3.1. Then, results concerning stand-alone approximate adders and multipliers, and tree structures of adders, are given considering inputs with maximal activity.

BWER Training Convergence Speed

As developed previously in Section 3.2.2, one of the main interests of the BWER propagation method is the reasonable memory cost of the trained structure. However, the method can only be suitable if the training time is not too important. In this section, we evaluate the convergence speed of BWER training. For this purpose, three approximate adders and two approximate multipliers were used. The adders are ACA, ETAIV and IMPACT, described in Section 1.4.1. The multipliers are AAMIII, denoted as AAM in the rest of this section, and DRUM, described in Section 1.4.2. They were all tested with input/output widths between 8 and 16 using multiple configurations, with values of k for the BWER method in {2, 4, 6, 8}, leading to 374 different experiment parameters. In this experiment, the reference for the final BWER trained structure is the set of values obtained after 10^8 random value draws. The experiment was performed using 374 cores of IRISA's computing grid IGRIDA, each instance of the experiment being computed on a single core for a correct time analysis. The IGRIDA computing grid embeds 1700 cores leveraging Intel Xeon CPUs. All implementations are done in C++, using the approximate library apx_fixed described in the next chapter.

A first observation of the curves shows that the convergence speed of the training clearly depends on k: the smaller k, the faster the convergence. For k = 2, the mean distance of the estimation from the reference gets under 10^{−2} between 20,000 and 50,000 training samples, which represents only 0.5 to 1.2 seconds of training. For k = 8, getting as near to the reference as 10^{−1} takes about 1,000,000 training samples, which represents a training time of 22 seconds. Indeed, when k is small, the training structure is very small, and the elements of the structure are likely to be activated by a random input with a high probability. When k is large, however, each element of the structure has very few chances to be activated by the drawn number. This is why the number of random inputs necessary to activate all elements of the data structure often enough to estimate the BWER probabilities grows very fast with the value of k. From now on, all BWER estimation structures used are trained on 10^8 input samples.

Evaluation of the Accuracy of the BWER Propagation Method

As mentioned in Section 3.2.3, the trained structure of the BWER propagation method is built to work for inputs with maximal activity at each bit position. In this section, all presented results are produced in these conditions, meaning that α_j in Equation 3.9 is always worth 0.5. Two experiments are presented in this section. First, results on single stand-alone operators are given, using the same approximate operators as those described in the previous section. Then, results on a tree of operations using the same approximate operator instance are given. Figure 3.6 shows an example of this structure, made of three stages. This tree structure is considered representative of typical data-flow graphs used in signal processing applications, and is used to see how the model propagates through such a structure.
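Assuming the propagate_bwer sketch given earlier, evaluating such a tree analytically reduces to chaining calls level by level. The snippet below illustrates this for a tree with eight exact leaf inputs (an assumption of ours); for simplicity it also reuses one trained table at every node, whereas a real evaluation would use one table per operator width, since adder outputs grow by one bit per stage.

```cpp
#include <vector>

// Declared in the earlier sketch.
std::vector<double> propagate_bwer(const std::vector<double>&,
                                   const std::vector<double>&,
                                   const std::vector<std::vector<double>>&, int);

// Chains BWER propagation over a three-stage adder tree (after Figure 3.6),
// assumed to have eight exact leaf inputs, i.e. all-zero input BWER vectors.
std::vector<double> tree_bwer(const std::vector<std::vector<double>>& table,
                              int k, int in_width) {
    std::vector<std::vector<double>> level(8, std::vector<double>(in_width, 0.0));
    while (level.size() > 1) {                  // 8 -> 4 -> 2 -> 1 signals
        std::vector<std::vector<double>> next;
        for (std::size_t i = 0; i + 1 < level.size(); i += 2)
            next.push_back(propagate_bwer(level[i], level[i + 1], table, k));
        level = next;
    }
    return level[0];                            // estimated BWER at the output
}
```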
Even if the model can be accurate for one operator alone, the interest of analytical models lies in their use for application-level error estimation, which is not considered in the models currently published in the literature for approximate operators. To evaluate the efficiency of the BWER method, the mean distance

    D_B = (1/n) Σ_{i=0}^{n−1} |B̂(i) − B(i)|    (3.8)

is used, where B(i), i ∈ ⟦0, n−1⟧, is the reference output BWER of a given operator of output width n and B̂(i), i ∈ ⟦0, n−1⟧, is the output BWER estimated by the model.

The first experiment, which does not consider the activity of the inputs, is the verification of the model on stand-alone operators. For this experiment and all the other experiments to come in this section, all simulations for the reference are run on 10^7 input samples. For the estimation time, the analytical propagation is averaged over at least 5 seconds of repeated experiments. An example of estimated and simulated output BWER vectors is shown in Figure 3.7 for a 12-bit approximate adder ACA with carry chains limited to x = 2 and a parameter k = 4 for the BWER propagation method. In this specific case, the estimation is very close to the reference simulation. No errors occur on the first 3 bits, which is normal for an ACA with x = 2. Then, the error probability increases with the bit significance, as expected. Other cases may show different results, depending on the value of k. E.g., if k < x in ACA, then at least one LSB causing an error at a higher significance is not taken into account, and the estimation is not accurate. Therefore, it is important to find the optimal value of k, minimizing the memory needed and the training time while keeping the best possible accuracy of the estimation.

Figure 3.8a shows the evolution of D_B with k for a 16-bit ACA, as a function of x. As expected, when k < x, the estimation is bad. Then, the quality of the estimation improves as k approaches x. However, when k gets larger than x, the estimation gets worse again. This is due to the number of training samples, which is the same for all k in our experiment. As shown in Section 3.3.1, the training process converges more slowly when k is larger. Therefore, when k > x, as there is no improvement in the accuracy of the model compared to k = x, the slower convergence is a source of inaccuracy. Thus, the optimal value in this case is k = x, which is the best balance between accuracy and required training samples.

Figure 3.8b shows the evolution of D_B with k for a 16-bit IMPACT, as a function of the number of MSBs computed with an exact adder, N_acc. For IMPACT, the results are different than for ACA. Indeed, in IMPACT, every output bit depends on every input bit of lower significance. Therefore, for all k < n, there is a lack of information in the model. It is interesting to see that for IMPACT, when the number of accurate MSB computations gets larger, the accuracy of the estimation gets worse, until a certain value of k for which the lack of samples used for training the model compensates the increase of the information taken into account. This tendency is quite the opposite of what could be expected and needs to be investigated in the future for better comprehension.

The AAM and DRUM multiplier results are given in Figures 3.8c and 3.8d respectively. A first observation of the error shows that, globally, the accuracy of the estimation is worse than for adders. Indeed, there is a main difference between adders and multipliers.
In adders, the output bit of significance i is strongly dependent on the input bits of significance i, and then less and less dependent on the input bits of significance i − 1, i − 2, i − 3, ..., 0. Therefore, when k is high enough, the absence of information on the inputs of rank i − k − 1, ..., 0 represents a small accuracy penalty. However, for an n-bit multiplier with n even, the output bit of significance n − 1 is as dependent on the input bits of significance n − 1 as on the input bits of significance 0. When k < n, the input bit of significance 0 is not reached by the estimation, leading to potentially bad accuracy.

In Figure 3.8c, showing the AAM results, each curve represents a bit-width of the multiplier, as AAM has no parameter. The smaller the multiplier, the better the estimation. For all AAM widths, increasing k always leads to a better accuracy, as expected, except for the 8-bit AAM with k = 8, which is worse than with k = 6. Indeed, as only the most significant half of the multiplication is taken into account in AAM, for each bit j at the output no partial product implying input significance 0 is operating at weight j. Therefore, when switching from k = 6 to k = 8, only one more significant bit instead of two is taken into account, which is not enough to counterbalance the fact that the training was less efficient for k = 8, thus leading to a lower accuracy. The same phenomenon can be observed for DRUM in Figure 3.8d when the floating multiplier is only 2 or 4 bits large.

Figure 3.9 shows the estimation accuracy obtained when the tree organization of Figure 3.6 is used. For ACA, in Figure 3.9a, for x = 6, the estimation becomes less accurate as the number of stages grows, which is what could be expected. However, for the other configurations, the opposite is observed, i.e., more stages lead to a better estimation accuracy. This is actually due to the high BWER of the approximate adder configuration, which leads to a maximal BWER at nearly all positions after a high number of stages. As the BWER propagation model also estimates a maximal BWER, not because of a high accuracy but because of a saturation effect, the estimation becomes very good. However, this is only a side effect of the bad performance induced by using several layers of ACA.

Estimation and Simulation Time

As the BWER propagation method is analytical, it is made for fast accuracy evaluation. In this section, the execution time of the method is evaluated. Table 3.4 gives the time spent by the BWER propagation method to evaluate ACA, IMPACT, AAM and DRUM in their 8-, 12- and 16-bit versions. For each bit-width, the result is obtained by averaging over several different parameters of all approximate adders and multipliers, except for AAM which takes no parameter. All simulations are run on 10^7 points. All estimations are repeated during at least 5 seconds and the total time is divided by the number of repetitions. All computations were run on the IGRIDA computing grid, composed of different models of Intel Xeon processors. As the computing grid is heterogeneous, the evaluation may vary depending on the processor used, and this must be considered in the analysis of the results of Table 3.4. The BWER propagation time roughly oscillates between 500 µs and 18 ms for addition, and between 800 µs and 42 ms for multiplication, for stand-alone operators. In a larger system, the propagation time has to be multiplied by the number of operators. In comparison, 10^7 simulations take between 26 s and 129 s.
This makes a huge difference when the operation has to be performed on complete systems and repeated many times, which is the case in an incremental system optimization process where many configurations of approximate operators must be evaluated.

Conclusion and Perspectives

In the previous sections, the BWER propagation model principle, its convergence speed and some results on operators or trees of operators were presented. The existing literature and the results show how difficult it is to build strong general error propagation models for approximate operators. First, their many different natures make the accuracy of the models very dependent on each structure, and a good estimation accuracy on ACA, for instance, could come with bad results on an adder like IMPACT (and vice versa). Second, as their errors often consist of scarce, very high amplitude peaks, it is very hard to evaluate them with analytical models that smooth out these phenomena.

Another limitation of the BWER propagation method is that it is only suitable if all inputs are uniformly distributed on their whole dynamic, which means they have a maximal activity. Indeed, BWER does not carry any information about the activity at any position. If this hypothesis does not hold, the results must be weighted by information about activity. Let $\alpha_j$ be the probability for the output bit $z_j$ of an operator to be worth 1. Then, the output BWER in these conditions can be approximated by

$$b_{z,j} = \frac{2\alpha_j}{4}\sum_{i=0}^{\min(j,k)-1}\alpha_{i,j}\,P\left(e_{z,j}\mid E_{x,y,i}\right). \qquad (3.9)$$

Indeed, if the input MSBs are mostly worth 0 instead of being equally likely 0 or 1, the output BWER on the MSBs will be proportionally lower. Thus, knowing the probability of the input bits to be 1 at each position, these probabilities need to be propagated analytically across the operators along with the BWER propagation. In practice, the simplest way is to use the analytical propagation of these probabilities across exact operators (adders and multipliers). The propagation of the probability of the output bits to be worth 1 across an adder can be trivially calculated by composing their propagation across a full adder. Their propagation across a multiplier is calculated on the composition of additions in the partial product reduction. The output probability of a bit to be worth 1 analytically obtained this way is then weighted by the initially computed BWER at the output of the operator. Indeed, as approximate operators are erroneous by nature, they can generate bit flips that can totally modify the activity value when compared to an exact operator. If $\alpha'_j$ is the probability for the output of an exact operator to have its $j$-th bit worth 1, then $\alpha_j$ (see Equation 3.9) is approximated as

$$\alpha_j = \alpha'_j \times \left(1 - b_{z,j}\right) + b_{z,j} \times \left(1 - \alpha'_j\right). \qquad (3.10)$$

The choice of BWER as the propagated quantity is also debatable. Indeed, few significant signal processing metrics can be deduced from it. However, the objective of this section is to point out the interest of basing the analytical error estimation on models trained using simulated values of approximate adders, as it is the only way to capture their many different natures. Also, the interest of finding ways to reduce the storage cost of the models while sacrificing a minimum of estimation accuracy has been highlighted by the use of $k$ in the training and the propagation method. In the future, methods derived from BWER training and propagation could be developed, leveraging other metrics for the propagation and more efficient storage compression methods.
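To make the activity correction concrete, here is a minimal C++ sketch of Equation 3.10 (illustrative names): an erroneous bit is assumed to flip, which inverts its probability of being worth 1.

#include <cstddef>
#include <vector>

// Corrects the analytically propagated exact-operator activity alpha'_j
// with the estimated BWER b_{z,j}, following Equation 3.10.
std::vector<double> correctActivity(const std::vector<double>& alphaExact,
                                    const std::vector<double>& bwer) {
    std::vector<double> alpha(alphaExact.size());
    for (std::size_t j = 0; j < alpha.size(); ++j)
        alpha[j] = alphaExact[j] * (1.0 - bwer[j])    // bit correct
                 + bwer[j] * (1.0 - alphaExact[j]);   // bit flipped
    return alpha;
}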
Indeed, as discussed in Section 3.3.2, important data are likely to be lost, especially in multipliers, when bad parameters are chosen for the model. After many unsuccessful attempts and a deep study of the literature, a more general conclusion about modeling approximate operator errors is that there seems to be no better method than Monte Carlo simulation. Indeed, simple models always suffer from a high imprecision, which prevents their use in a real system design process, while complex models giving reasonably good results always come with a high storage or computational cost approaching the cost of Monte Carlo simulation. Moreover, simulating approximate operators can be done with a potentially good computational efficiency. Indeed, most of them are essentially a composition of small exact adders or multipliers. When computing on a CPU, it is therefore possible to use the integer arithmetic units to accelerate the computations using high-level code instead of heavier gate-level descriptions. There are also good opportunities to use HLS on the approximate operator code to simulate it on FPGA, accelerating the computation using DSPs. In the next section, pseudo-simulation leveraging approximate operators is used for the reproduction of VOS effects.

In Section 1.1.1, functional approximation leveraging VOS is discussed. In this section, a method to reproduce the effects of VOS on arithmetic operators using models based on approximate operators is presented. As for the BWER method developed above, it is based on model training applied to the BER at each significance position of an operator.

Voltage scaling is a prominent technique to improve energy efficiency in digital systems, since scaling down the supply voltage yields a quadratic reduction in the energy consumption of the system. Reducing the supply voltage induces timing errors in the system, which are corrected through additional error detection and correction circuits. A class of circuit-level approximation is achieved by applying dynamic voltage and frequency scaling aggressively to an accurate operator. Due to the dynamic control of voltage and frequency, timing errors due to scaling can be controlled flexibly in terms of trade-off between accuracy and energy. This method is referred to as Voltage Over-Scaling (VOS). It has the potential to unlock higher energy efficiency by operating the transistors near or below the threshold. VOS-based approximate operators can be used when error-resilient applications are considered.

Despite its high efficiency in terms of energy savings, sub-threshold VOS has important drawbacks. First, process variability makes its effects hard to predict in a general way, since two instances of the same chip are likely to behave differently. Second, besides on-chip measurements or transistor-level simulation, there is no suitable method able to predict these effects. On-chip measurement is costly in terms of human resources, and observing the effects of VOS at wire level is impossible, because adding hardware at that level would modify the very phenomenon being observed. Transistor-level simulation (such as SPICE), on the other hand, is accurate, but the computational resources and simulation time necessary for observing a large system are prohibitive. In this section, we intend to reproduce the effects of VOS at the arithmetic operator scale, leveraging approximate operators.
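Both the fast-simulation argument above and the VOS modeling that follows rely on software models of approximate adders. As a minimal sketch of the idea, an ACA-like adder can be emulated with plain integer arithmetic, each sum bit being taken from an exact addition restricted to a window of k input bit positions (illustrative code, assuming operands of at most 32 bits and k >= 1):

#include <cstdint>

// ACA-like n-bit addition: bit i of the sum is computed from the exact
// addition of the operands restricted to bits [max(0, i-k+1), i], i.e.
// carries older than k positions are dropped.
uint32_t acaAdd(uint32_t a, uint32_t b, unsigned n, unsigned k) {
    uint32_t sum = 0;
    for (unsigned i = 0; i < n; ++i) {
        unsigned lo = (i + 1 > k) ? (i + 1 - k) : 0;           // window start
        unsigned w = i - lo + 1;                               // window width
        uint32_t mask = (w < 32) ? ((1u << w) - 1u) : 0xFFFFFFFFu;
        uint32_t partial = ((a >> lo) & mask) + ((b >> lo) & mask);
        sum |= ((partial >> (i - lo)) & 1u) << i;              // keep bit i
    }
    return sum;
}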
Compared to transistor-level simulation, this allows for a much faster and lower-resource simulation, while keeping a satisfying accuracy with respect to reality. For this, we propose a new modeling technique that is scalable to large-size operators and compliant with different arithmetic configurations. The proposed model is accurate and allows for fast simulations at the algorithm level by imitating the faulty operator with statistical parameters. We also characterize the basic arithmetic operators using different operating triads (combinations of supply voltage, body-biasing scheme and clock frequency) to generate models of approximate operators. Error-resilient applications can be mapped with the generated approximate operator models to achieve a better trade-off between energy efficiency and error margin. In our experiments using 28nm FDSOI technology, we achieve a maximum energy efficiency gain of 89% for basic operators like 8-bit and 16-bit adders, at the cost of a 20% Bit Error Rate (ratio of faulty bits over total bits), by operating them in the near-threshold regime.

Characterization of Arithmetic Operators

In this section, the characterization of arithmetic operators is discussed for voltage over-scaling based approximation. Characterizing arithmetic operators helps to understand their behaviour with respect to varying operating triads. Adders and multipliers are the most common arithmetic operators used in datapaths. In this work, different adder configurations are explored in the context of the near-threshold regime. An operating triad is defined as $(V_{dd}, V_{bb}, T_{clk})$, where $V_{dd}$ is the supply voltage, $V_{bb}$ is the body-biasing voltage, and $T_{clk}$ is the clock period. In ideal conditions, the arithmetic operator functions without any errors. Also, EDA tools introduce an additional timing margin in the datapaths during Static Timing Analysis (STA) due to clock path pessimism. This additional timing prevents timing errors due to variability effects. Due to the limited availability of design libraries for near/sub-threshold computing, it is necessary to use SPICE simulation to understand the behaviour of arithmetic operators in the different voltage regimes. By tweaking the operating triads, timing errors $e$ are invoked in the operator and can be represented as

$$e = f(V_{dd}, V_{bb}, T_{clk}) \qquad (3.11)$$

The characterization of an arithmetic operator helps to understand the point of generation and the propagation of timing errors in the operator. Among the three parameters of the triad, scaling $V_{dd}$ causes timing errors due to the dependence of the operator's propagation delay $t_p$ on $V_{dd}$:

$$t_p = \frac{V_{dd} \cdot C_{load}}{k\,(V_{dd} - V_t)^2} \qquad (3.12)$$

The body-biasing potential $V_{bb}$ is used to vary the threshold voltage $V_t$, thereby increasing the performance (decreasing $t_p$) or reducing the leakage of the circuit. Due to the dependence of $t_p$ on $V_t$, $V_{bb}$ is used alone or in tandem with $V_{dd}$ to control the timing errors. Scaling down $V_{dd}$ improves the energy efficiency of the operator due to its quadratic contribution to the total energy $E_{total} = V_{dd}^2 \cdot C_{load}$. A mere increase of $T_{clk}$ does not reduce the energy consumption, though it reduces the total power consumption of the circuit:

$$P_{total} = \alpha \cdot V_{dd}^2 \cdot \frac{1}{T_{clk}} \cdot C_{load} \qquad (3.13)$$

Therefore, $T_{clk}$ is scaled along with $V_{dd}$ and $V_{bb}$ to achieve a high energy efficiency.

Characterization of Adders

The adder is an integral part of any digital system. In this section, two adder configurations, the Ripple Carry Adder (RCA) and the Brent-Kung Adder (BKA), are characterized based on circuit-level approximations.
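As a toy illustration of Equations 3.11 and 3.12, a timing error can be predicted to first order whenever the critical-path delay under the triad exceeds the clock period. The constants k, C_load and the linear body-bias modulation of V_t below are assumptions of this sketch, not values from the characterization flow:

#include <cmath>

// First-order check: does the triad (vdd, vbb, tClk) provoke timing errors?
bool timingErrorExpected(double vdd, double vbb, double tClk,
                         double vt0, double lambda, double k, double cLoad) {
    double vt = vt0 - lambda * vbb;                        // body-biased V_t
    double tp = vdd * cLoad / (k * std::pow(vdd - vt, 2)); // Equation 3.12
    return tp > tClk;                                      // Equation 3.11
}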
The ripple carry adder is a sequence of full adders performing serial prefix-based addition. An RCA takes $n$ stages to compute an $n$-bit addition. In the worst case, the carry propagates through all the full adders, making the RCA the adder configuration with the longest carry chain. The longest carry chain corresponds to the critical path of the adder, based on which the frequency of operation is determined. In contrast, the Brent-Kung adder is a parallel prefix adder. A BKA takes $2\log_2(n) - 1$ stages to compute an $n$-bit addition. In a BKA, carry generation and propagation are segmented into smaller paths and executed in parallel.

The behaviour of an arithmetic operator in the near/sub-threshold region is different from that in the super-threshold region. In the case of an RCA, when the supply voltage is scaled down, the expected behaviour is the failure of the critical paths from the longest to the shortest with respect to the reduction of the supply voltage. Fig. 3.11 shows the effect of voltage over-scaling in an 8-bit RCA. When the supply voltage is reduced from 1V to 0.8V, the MSBs start to fail. As the voltage is further reduced to 0.7V and 0.6V, more BER is recorded in the middle-order bits than in the most significant bits. For a 0.5V $V_{dd}$, all the middle-order bits reach a BER of 50% and above. A similar behaviour is observed in the 8-bit BKA shown in Fig. 3.12 for $V_{dd}$ values of 0.6V and 0.5V. This behaviour imposes limitations on modelling approximate arithmetic operators in the near/sub-threshold regime using standard models. The behaviour of arithmetic operators under voltage over-scaling in the near/sub-threshold region can be characterized by SPICE simulations. But SPICE simulators take a long time (4 days with an 8-core CPU) to simulate the exhaustive set of input patterns needed to characterize arithmetic operators.

Modelling of VOS Arithmetic Operators

As stated previously, there is a need to develop models that can simulate the behavior of faulty arithmetic operators at the functional level. In this section, we propose a new modelling technique that is scalable to large-size operators and compliant with different arithmetic configurations. The proposed model is accurate and allows for fast simulations at the algorithm level by imitating the faulty operator with statistical parameters. As VOS provokes failures on the longest combinational datapaths first, there is clearly a link between the carry propagation path exercised by a given addition and the error issued from this addition. Figure 3.13 illustrates the needed relationship between the hardware operator controlled by operating triads and the statistical model controlled by statistical parameters $P_i$. As the knowledge of the inputs gives the necessary information about the longest carry propagation chain, the values of the inputs are used to generate the statistical parameters that control the equivalent model. These statistical parameters are obtained through an off-line optimization process: for each couple of inputs $(in_1, in_2)$, the goal is to find $C_{max}$ minimizing the distance between the output of the hardware operator and that of the equivalent modified adder. This distance can be defined by the accuracy metrics listed above. Hence, $C_{max}$ is given by:

$$C_{max}(in_1, in_2) = \underset{C \in [0, N]}{\mathrm{Argmin}}\ \left\| \hat{x}(in_1, in_2),\, x(in_1, in_2, C) \right\|$$

where $\|x, y\|$ is the chosen distance metric applied to $x$ and $y$. As the search space for characterizing $C_{max}$ for all sets of inputs is potentially very large, $C_{max}$ is characterized only in terms of its probability of appearing as a function of the theoretical maximal carry chain of the inputs, denoted as $P\left(C_{max} = k \mid C^{th}_{max} = l\right)$.
This way, the mapping space of $2^{2N}$ possibilities is reduced to $(N+1)^2/2$. Table 3.5 gives the template of the probability values needed by the equivalent modified adder to produce an output.

Table 3.5 - Carry propagation probability table of the modified 4-bit adder

C_max \ C_th_max :    0      1        2        3        4
        0             1    P(0|1)   P(0|2)   P(0|3)   P(0|4)
        1             0    P(1|1)   P(1|2)   P(1|3)   P(1|4)
        2             0      0      P(2|2)   P(2|3)   P(2|4)
        3             0      0        0      P(3|3)   P(3|4)
        4             0      0        0        0      P(4|4)

The optimization algorithm used to construct the modified adder is shown in Algorithm 2. When the inputs $(in_1, in_2)$ are in the vector of training inputs, the output $\hat{x}$ of the hardware adder configuration is computed. From the particular input pair $(in_1, in_2)$, the maximum carry chain $C^{th}_{max}$ corresponding to this pair is determined. The output $x$ of the modified adder with the three input parameters $(in_1, in_2, C)$ is computed. The distance between the hardware adder output $\hat{x}$ and the modified adder output $x$ is calculated, based on the accuracy metrics defined above, over the iterations on $C$. The flow continues over the entire set of training inputs.

Algorithm 2 - Optimization Algorithm

P(0 : N_bit_adder | 0 : N_bit_adder) ← 0
max_dist ← +∞
C_max_temp ← 0
for (in1, in2) ∈ training_inputs do
    x̂ ← add_hardware(in1, in2)
    Cth_max ← max_carry_chain(in1, in2)
    for C ∈ {Cth_max down to 0} do
        x ← add_modified(in1, in2, C)
        dist ← ‖x, x̂‖
        if dist <= max_dist then
            max_dist ← dist
            C_max_temp ← C
        end if
    end for
    P(C_max_temp | Cth_max) ← P(C_max_temp | Cth_max) + 1
end for
P(: | :) ← P(: | :) / size(training_inputs)

Once the off-line optimization process has been performed, the equivalent modified adder can be used to generate the outputs corresponding to any couple of inputs $in_1$ and $in_2$. To imitate the exact operator subjected to VOS triads, the equivalent adder is used in the following way:

1. Extract the theoretical maximal carry chain $C^{th}_{max}$ which would be produced by the exact addition of $in_1$ and $in_2$.
2. Pick a random number and, in the column of the probability table representing $C^{th}_{max}$, choose the corresponding row; assign this value to $C_{max}$.
3. Compute the sum of $in_1$ and $in_2$ with a maximal carry chain limited to $C_{max}$.

For the experiments, the equivalent modified adder used is ACA, presented in Section 1.4.1.1. As a reminder, for an $n$-bit ACA parameterized by $k$, each output sum bit is calculated considering only the $k$ previous input carries. This approximate operator is chosen because its effects represent quite well the effects of VOS, with errors occurring on the critical path, i.e. the carry chain. Therefore, the control parameter used in the optimization of the model is the value of $k$.

Figure 3.15 shows the estimation error of the model for different adders, based on the accuracy metrics defined above. SPICE simulations are carried out for 43 operating triads with 20K input patterns. Input patterns are chosen in such a way that all input bits have an equal probability of propagating a carry in the chain. Figure 3.15a plots the SNR of the 8- and 16-bit RCA and BKA adders. The MSE distance metric shows a higher mean SNR, followed by the Hamming distance and weighted Hamming distance metrics. Since MSE and weighted Hamming distance take the significance of bits into account, their resulting mean SNRs are higher than for the Hamming distance metric. Figure 3.15b shows the plot of the normalized Hamming distance of all four adders.
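The three inference steps above can be sketched as follows, assuming the table produced by Algorithm 2 is stored as cumulative distributions and reusing the ACA-style addition shown earlier (illustrative code):

#include <cstdint>
#include <random>
#include <vector>

unsigned maxCarryChain(uint32_t a, uint32_t b, unsigned n); // assumed available
uint32_t acaAdd(uint32_t a, uint32_t b, unsigned n, unsigned k);

// Imitates the VOS-stressed adder: draw C_max conditioned on the
// theoretical carry chain, then add with the carry chain limited to C_max.
uint32_t vosEquivalentAdd(uint32_t in1, uint32_t in2, unsigned n,
                          const std::vector<std::vector<double>>& cdf,
                          std::mt19937& rng) {
    unsigned cth = maxCarryChain(in1, in2, n);           // step 1
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r = uni(rng);                                 // step 2
    unsigned cmax = 0;
    while (cmax < cth && r > cdf[cth][cmax]) ++cmax;
    return acaAdd(in1, in2, n, cmax + 1);                // step 3: chain <= cmax
}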
In the plot of Figure 3.15b, the MSE and Hamming distance metrics are almost equal, with a slight advantage for the non-weighted Hamming distance, which is expected since this metric gives all bit positions the same impact. Both 8-bit adders have the same behavior in terms of the distance between the outputs of the hardware adder and of the modified adder. On the other hand, the 16-bit RCA is better in terms of SNR than its BKA counterpart. These results demonstrate the accuracy of the proposed approach to model the behavior of operators subjected to VOS in terms of approximation.

This method was presented in [3], along with error versus energy SPICE simulations under VOS. In these results, important energy savings are achieved using VOS, with no or limited errors at the output. Figure 3.16 shows the results obtained for different voltage triads on a 16-bit RCA. The link between the intensity of the error, represented here as BER, and the energy savings is clearly established, as well as the many possible tuning knobs. This emphasizes the need for models such as the one presented here, so that tuning a faulty circuit does not take days to weeks. However, the method presented in this section has the main drawback of depending on simulation results for its training. As these simulations are extremely long, only 20,000 simulated points at best could be used for training this model, which is very low and does not ensure the robustness of the generated outputs. Moreover, it is still hard to test the accuracy of this method on more complex systems, since transistor-level SPICE simulation has prohibitive computational time and memory costs.

Conclusion

As a conclusion for this chapter, approximate computing error modeling is a very complex task. Today, there is still no truly suitable method for the analytical propagation of errors at the application level, as exists for FxP arithmetic. Indeed, existing techniques always come with a major drawback: the accuracy of the estimation always has to be traded for memory or computational cost. There are two main reasons for this:

1. The output error strongly depends on the inputs. A variation of 1 bit in an input can easily switch from a perfectly accurate result to an error with the amplitude of the MSB.
2. As stated and developed in Chapter 1, the countless approximate operators of different natures make general rules hard to find.

In the BWER propagation method proposed in Section 3.2, a solution implying a reasonable memory cost with very fast error estimation after model training was proposed. However, as it is limited to BWER, only a limited number of metrics can be extracted, and the lack of accuracy in the estimation makes it not scalable to large systems.
What was finally observed, through the many attempts made during the development of this thesis and the study of the literature on integer approximate operators, is that when accuracy of error estimation is sought, Monte Carlo simulation is often the best solution. First, it is easier to extract any metric from the results, and especially metrics relative to an application. Moreover, the particularity of approximate operators is that, to be energy efficient, they have to remain quite simple and can often be represented by series of additions. Their functions can thus often be efficiently coded using the integer functions supported by all CPUs, so they execute very fast and Monte Carlo simulations remain efficient. In this chapter, we also showed that approximate operator functions can be useful to represent complicated physical phenomena such as VOS faults, combining them to estimate these faults using Monte Carlo simulation.

In the next chapter, the FxP and approximate operator paradigms are compared in terms of raw error and hardware performance, as well as in terms of error and performance on real-life signal processing applications. For this, only Monte Carlo simulation is used, so that the study does not depend on the accuracy variations of models.

Chapter 4

Approximate Operators Versus Careful Data Sizing

In Chapter 1, the fixed-point (FxP) and approximate (Apx) operator paradigms were presented. In Section 1.3, FxP arithmetic was presented, as well as quantization error. Classical techniques for FxP refinement, as well as the PSD propagation method, were presented in Chapter 2. The error management of Apx operators and its issues were discussed in Chapter 3. In this chapter, both arithmetic paradigms are compared. On the one hand, FxP arithmetic relies on the use of accurate integer operators, inaccuracy being induced by the quantization process, mostly at arithmetic operator outputs. On the other hand, approximate operators rely on inaccurate designs inducing errors by nature. Therefore, this chapter compares data sizing to functional approximation. For this, an open-source hardware performance (area, delay, power, energy) and accuracy characterization framework was developed during the work related to this thesis. This framework allows fast, accurate and user-friendly simulation-based area, delay and power estimation of approximate arithmetic operators in a general meaning: FxP, floating-point (FlP) or Apx operators such as presented in Chapter 1. It comes with built-in approximate adder and multiplier libraries, real-life application benchmarks, and comprehensive scripts for results processing. The framework is presented in Section 4.1. The raw comparison between the FxP and Apx paradigms is presented in Section 4.2. Finally, the performance of both is compared on signal processing applications in Section 4.3.

APXPERF: Hardware Performance and Accuracy Characterization Framework for Approximate Computing

This section presents ApxPerf, the open-source hardware performance and accuracy characterization framework developed during the thesis. Two versions were developed. The first one, based on Bash and Matlab scripts and taking VHDL for hardware performance evaluation and C++ for error evaluation, is presented in Section 4.1.1. The second one, written in Python3, adds an HLS layer so that a single C++ source is needed for both functional and hardware simulation.
APXPERF - First Version

The first version of ApxPerf, presented in [Barrois et al.], is a framework dedicated to approximate arithmetic operator characterization. It is composed of a hardware characterization part and an accuracy characterization part. The hardware characterization part is based on VHDL specifications of the operators. Given an approximate operator VHDL code, the code is parsed to identify the clock and reset signals, as well as the two inputs and the output. The parameters of the approximate operator, which must be resolved at compile time, are set up using generic variables. The VHDL source is then compiled, along with linked hardware libraries provided by the user, using Synopsys Design Compiler, and a gate-level design is produced. Area and delay are extracted from PrimeTime reports. Then, a VHDL benchmark is generated and automatically interfaced with the operator to be characterized. The benchmark is then run using Modelsim from Mentor Graphics. A VCD file containing all the transitions at every gate interface, with a default 1 ps granularity, is generated. Finally, the VCD and SDF files and the technology libraries are passed to Synopsys PrimeTime, which produces a very accurate time-based estimation of the dynamic and static power of the circuit. In parallel, a C source of the same operator is given to a C++ framework. The function parameters (composed of the operator inputs and output and the parameters of the operator) are parsed and a test bench is generated. Then, the simulation is run. A large number of metrics are returned by the simulation:

• Mean Square Error (MSE),
• Mean Absolute Error (MAE),
• Mean Absolute Relative Error,
• Error Rate,
• Bit Error Rate (BER),
• Bitwise Error Rate (BWER),
• Acceptance Probability (AP) given a Minimum Acceptable Accuracy (MAA),
• Minimum and Maximum Error,
• Mean of Error (or error bias),
• Power Spectral Density (PSD) of error, and
• Probability Density Function (PDF) of error.

Hardware and accuracy simulations are run taking uniformly distributed inputs on the whole range of the operator to be tested. Accuracy simulations are optimized for parallel execution with OpenMP for a high speed-up when used on a multicore machine. The overall scheme of the first version of ApxPerf, combining HDL simulation (RTL synthesis and gate-level simulation of the operator HDL description), C simulation, verification and data fusion, is depicted in Figure 4.1.

The framework comes with built-in C and VHDL versions of the following approximate operators:

• Almost-Correct Adder (ACA, see Section 1.4.1.1),
• Error-Tolerant Adder Version IV (ETAIV, see Section 1.4.1.2),
• IMPrecise Adder for low-power Approximate CompuTing (IMPACT, see Section 1.4.1.5),
• Approximate Array Multiplier III (AAMIII, see Section 1.4.2.1), and
• FxP adders and multipliers of various sizes.

The ApxPerf framework also embeds several signal processing applications, only for the accuracy evaluation part: the Fast Fourier Transform (FFT), the K-means clustering algorithm, JPEG encoding, and the motion compensation filter of High Efficiency Video Coding (HEVC). These applications are further described in Section 4.3. The first version of ApxPerf was used for all the results in this chapter. However, the second version, described in the next section, brings many improvements, mostly thanks to HLS and the use of C++ templates for the parameterization of operators and simulations.
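As an illustration of how the first bit-level metrics of this list can be extracted in the C++ part, here is a minimal sketch (illustrative names; __builtin_popcount assumes a GCC/Clang compiler):

#include <cstddef>
#include <cstdint>
#include <vector>

struct ErrorStats { double mse; double ber; };

// MSE and BER of an approximate output stream against an exact reference.
ErrorStats computeStats(const std::vector<int32_t>& exact,
                        const std::vector<int32_t>& approx, unsigned nBits) {
    double mse = 0.0;
    uint64_t flipped = 0;
    uint32_t mask = (nBits < 32) ? ((1u << nBits) - 1u) : 0xFFFFFFFFu;
    for (std::size_t i = 0; i < exact.size(); ++i) {
        double e = static_cast<double>(exact[i]) - static_cast<double>(approx[i]);
        mse += e * e;
        flipped += __builtin_popcount(
            static_cast<uint32_t>(exact[i] ^ approx[i]) & mask);
    }
    double nSamples = static_cast<double>(exact.size());
    return { mse / nSamples, static_cast<double>(flipped) / (nSamples * nBits) };
}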
The processing of generated results and the generation of figures are performed using the NumPy and Matplotlib packages of Python. An execution trace of the ApxPerf v2 characterization of a 32-bit ct_float (see Chapter 5) is given in Listing 4.1. Another important improvement of the framework is the integration of hardware performance characterization in embedded signal processing applications. At the time of writing this thesis, only K-means clustering and the FFT had been adapted from ApxPerf v1. Finally, the code of the approximate operators was replaced by a template-based synthesizable C++ library called apx_fixed, based on Mentor Graphics AC Datatypes v3.7 under the Apache 2.0 license. This library, included in ApxPerf v2, features:

• synthesizable operator overloading:
  - unary operators: unary -, !, ++, --,
  - relational operators: <, >, <=, >=, ==, !=,
  - binary operators: +, +=, -, -=, *, *=, replaced by the approximate operators selected in the template values of the apx_fixed instance declaration (see the example below), as well as <<, <<=, >>, >>=, and
  - the assignment operator from/to another instance of apx_fixed;
• non-synthesizable operator overloading:
  - the assignment operator from/to C++ native datatypes (float, double),
  - the output operator << for easy display and writing to files.

apx_fixed variables are represented by a fixed-point value whose width, integer part width, and rounding and overflow modes (inherited from ac_fixed) are parameterized in the template, as well as the name and parameters of the approximate operators to be used in additions and multiplications. A use case of the apx_fixed library is given in Listing 4.2. In this example, the result of the operation out = x × y + z is computed. x and z are 8-bit numbers with a 2-bit integer part in FxP representation, denoted as (8, 2). y has a (10, 5) representation and the output out is represented on (7, 2). In line 14, the apx_fixed variables are initialized by casting double-precision floating-point values. The nearest representable value is set for each variable. For instance, x is set from 0.13236; as it only has a 6-bit fractional part, its value is 0.125 (00.001000 in binary representation). In line 18, the operation is performed. Thanks to operator overloading, the first operation x × y is performed using the approximate multiplier given in the template of the type declaration of x (the first operand), which here is a classical fixed-point multiplication. As x and y are (8, 2) and (10, 5), the implicit result is stored in a number with representation (18, 7), according to the rules discussed in Section 1.3.4. Then the implicit result is added to z. The sum of the (18, 7) and (8, 2) representations returns an implicit (19, 8) according to Section 1.3.3. The addition is performed using an ACA(19, 3), based on the template values and the size of the inputs. Finally, the result is cast to the (7, 2) number out. For this, the bits from position 6 to 12 of the result are extracted and put in out after truncation (because of the directive AC_TRN in the type apx3_t of out). This computation is fully synthesizable.
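Listing 4.2 itself did not survive extraction. The following is a hypothetical reconstruction consistent with the description above; the template parameter order, the operator tags (FIXED, ACA) and the initialization values of y and z are assumptions, and the line numbering of the original listing is not preserved:

#include <cstdlib>
#include <iostream>
#include "apx_fixed.h"  // hypothetical header name

// Assumed template: apx_fixed<width, int_width, rounding, overflow, op, param>
typedef apx_fixed<8, 2, AC_TRN, AC_WRAP, FIXED, 0>  apx1_t; // x, z: (8, 2)
typedef apx_fixed<10, 5, AC_TRN, AC_WRAP, FIXED, 0> apx2_t; // y: (10, 5)
typedef apx_fixed<7, 2, AC_TRN, AC_WRAP, ACA, 3>    apx3_t; // out: (7, 2)

int main() {
    // Initialization by casting doubles: x becomes 0.125 (00.001000).
    apx1_t x = 0.13236, z = -0.3125;  // z value illustrative
    apx2_t y = 0.84375;               // y value illustrative
    apx3_t out;

    out = x * y + z;  // FxP multiply, then ACA addition, then cast to (7, 2)

    std::cout << out << std::endl;  // overloaded output operator
    return EXIT_SUCCESS;
}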
During hardware optimization, the paths leading to the generation of bits 13 up to the MSB 18 would be pruned, as well as the bits from the LSB up to bit position 5, because truncation does not consider them. As a matter of fact, changing the rounding and overflow modes to rounding-to-nearest and saturation, respectively, in the template declarations would increase both accuracy and hardware cost. Finally, line 25 gives an example of the overloading of the output operator << in apx_fixed. The code output reflects the successive casts and approximations.

The syntax developed in the apx_fixed library allows for fast and easy development and testing of approximate arithmetic kernels. The software flexibility of C++ and the efficiency of HLS tools allow complex circuits to be produced and simulated with benchmarks whose generation is easily automated thanks to the overloading of type casting. A custom synthesizable floating-point library embedded in ApxPerf v2, called ct_float, is described in the last chapter of this thesis.

Raw Comparison of Fixed-Point and Approximate Operators

In this section, the results of the raw comparison between FxP and Apx operators are presented. They were all obtained using the first version of ApxPerf, described in Section 4.1.1, and were published at the DATE'17 conference [2]. A first comparison of FxP and Apx operators consists in measuring the difference in their accuracy with regard to a performance metric (energy, area, delay). For this, the approximate adders ACA, ETAIV and IMPACT and the approximate multipliers AAMIII and FBMIII (respectively denoted as AAM and FBM in this section), described in Chapter 1, were compiled and tested with a number of bits varying from 2 to 32 and all possible combinations of parameters. FxP operators (i.e. classical integer adders and multipliers) were tested in the same way, with all possible combinations of input and output sizes (inputs from 2 to 32, outputs from 2 to 32 for the adders and 2 to 64 for the multipliers). All power results are given for a clock frequency of 100 MHz. With ApxPerf v1, Design Compiler (2013.03) was used for RTL synthesis with a 28nm FDSOI technology library, Modelsim (10.3c) for gate-level simulation and PrimeTime (2013.12) for power analysis. Simulation and power analysis are performed on $10^5$ random input samples. The extraction of error metrics based on the C description was computed on more than $10^7$ random inputs.

Results for the adders are presented in Figure 4.3 and provide MSE versus power, area, delay, and PDP. For the sake of clarity, results are only presented for 16-bit input operators. The adder with 16-bit output is considered as the correct adder and used as reference. Truncated and rounded FxP adder outputs vary from 15 to 2 bits (from left to right). For approximate adders, results are given by varying: the number of approximated LSBs (M) and the types of FA for IMPACT, the maximal size of the carry chain (P) for ACA, and the block size (X) for ETAIV. A first observation is that, in terms of power consumption and design area, FxP operators are better than Apx operators for the same MSE, except at very low accuracy. However, in terms of delay, most approximate operators are faster, but they cannot reach the same level of accuracy when more than 8 bits are kept for the fixed-point output. In terms of energy, the PDP of FxP adders is quite close to that of approximate ones when fewer than 8 output bits are kept.
However, ACA and IMPACT are able to spend less energy without sacrificing much accuracy. In some applications, all output bits have the same weight on the error. Therefore, results on the BER metric are presented in Figure 4.4 for the same adders as previously. Approximate operators achieve very good BER performance when compared to fixed-point operators. Considering power and area, truncated and IMPACT adders perform similarly for any fixed BER. However, for delay and energy per addition, most approximate operators perform significantly better than truncated or rounded FxP operators. When bit significance in the operands is not considered, FxP operators are penalized by the suppression of part of their output bits, implicitly forcing them to zero.

Results for the multipliers are presented in Table 4.1. The 16-to-32-bit integer multiplier is considered as the correct multiplier for the accuracy reference. As the AAM and FBM multipliers are fixed-width operators (16-bit inputs and output), comparison results are provided only for the truncated FxP multiplier with 16-bit output (MUL_t(16, 16)). The fixed-width truncated multiplier MUL_t reaches the best accuracy and consumes the least power. Although MUL_t is slower than FBM, its energy per multiplication (PDP) is 44% better than that of both approximate operators. FBM is 37% faster than MUL_t and AAM is 17% smaller. However, FBM is extremely inaccurate in terms of MSE, which is 7 orders of magnitude higher than for fixed-point. Both AAM and FBM are worse than MUL_t by about 19% on the BER metric.

As a conclusion of this operator-level performance analysis of various approximation schemes, fixed-point operators perform better when considering the MSE metric, representative of signal processing applications, while approximate adders show good BER performance. However, the importance of the output bit-width was not taken into account in these results. Indeed, when the bit-width is reduced, as in truncated or rounded operators, the amount of data to transfer to load and store operator inputs and outputs is reduced accordingly. This shortening of the bit-width has a major impact on energy consumption and must be considered for real-life applications. Thus, although inexact and truncated-or-rounded operators seem to show the same gross performance, selecting the latter decreases the energy cost by avoiding the transfer and memory storage of useless erroneous bits.

Comparison of Fixed-Point and Approximate Operators on Signal Processing Applications

In this section, the effect of fixed-point and approximate adders and multipliers is evaluated on different real-life applications, leveraging relevant and adapted metrics. The considered applications include the Fast Fourier Transform (FFT), JPEG image encoding, motion compensation filtering in the context of High-Efficiency Video Coding (HEVC) decoding, and K-means clustering.

Fast Fourier Transform

As a classical computation kernel used in many signal processing and communication applications, the FFT is relevant for this study. ApxPerf v1 provides an instrumented, tunable FFT kernel. This section presents results on a 32-sample Radix-2 FFT computed on 16-bit input/output data. In a first experiment, only the adders are considered.
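The total-energy estimation used for this experiment, shown next as Equation 4.1, is a simple accumulation of per-operation PDPs. A minimal sketch of this accumulation (illustrative names):

#include <cstddef>
#include <vector>

// PDP_FFT of Equation 4.1: the sum of the energies of all additions and
// multiplications executed by the FFT kernel.
double fftEnergy(const std::vector<double>& pdpAdd,
                 const std::vector<double>& pdpMul) {
    double total = 0.0;
    for (std::size_t i = 0; i < pdpAdd.size(); ++i) total += pdpAdd[i];
    for (std::size_t i = 0; i < pdpMul.size(); ++i) total += pdpMul[i];
    return total;
}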
The total energy to compute the FFT is estimated by:

$$PDP_{FFT} = \sum_{i=1}^{N_{add}} PDP_{add,i} + \sum_{i=1}^{N_{mul}} PDP_{mul,i} \qquad (4.1)$$

where $N_{add}$ and $N_{mul}$ are the total numbers of additions and multiplications, respectively. The exact multipliers used alongside the modified adders are optimally sized according to the adder bit-width, so they are not a source of error. For any accuracy constraint, FxP adders (with truncation or rounding) notably dominate approximate adders. This supremacy can be explained by two factors: the relative energy cost of multipliers with regard to adders, and the smaller operand size needed for the multiplier when the accuracy of the additions is reduced. The results also show the great potential for energy reduction when playing with the accuracy of the fixed-point operators. A first conclusion here is that reducing the FxP adder size yields a smaller entropy of the data processed, transported and stored than keeping the same bit-width along the computations while the data contain errors, as is the case when computations rely on approximate operators.

The same experiment is performed using 16-bit AAM and FBM multipliers and a 16-bit truncated FxP multiplier, while keeping 16-bit exact adders. Table 4.2 shows that the AAM and FxP multipliers differ only by 6 dB of accuracy. However, AAM consumes 78% more energy than its reduced-precision fixed-point equivalent. The results on the FFT confirm the conclusion of Section 4.2. Providing results with both approximate adders and multipliers in the same simulation would not lead to a different conclusion.

JPEG Encoding

The second application is a JPEG encoder, representative of the image processing domain. The main algorithm of this encoder is the Discrete Cosine Transform (DCT). The output quality is evaluated with the Mean Structural Similarity (MSSIM) metric [Wang et al.], which is representative of the human perception of image degradation. This metric results in a score in [0, 1], 1 representing perfect quality. To obtain Figure 4.6, the DCT energy consumption is compared for all presented approximate adders, as well as for the fixed-point versions. The algorithm is applied with an encoding effort of 90% on the image Lena. As observed for the FFT, the fixed-point versions of the algorithm are much more energy efficient than the approximate-operator versions, mostly thanks to the bits dropped during the computation. It is important to notice that the nature of the error generated by approximate operators (in general, a few high-amplitude errors) is very problematic in the context of image processing. Figure 4.7d is generated using 16-bit IMPACT with 8-bit exact addition and 8-bit approximate addition using approximation number 3 (see Section 1.4.1.5). The best visual result clearly occurs for the fixed-point addition. The result of the IMPACT adder is also quite good thanks to its 8-bit accurate adder on the MSB part, but as shown above, its larger output implies an important energy overhead. Moreover, some visual artefacts are present in sharp details. For ETAIV, the image is quite degraded, with horizontal and vertical lines giving a noisy appearance to the image. For ACA, strong artefacts are visible everywhere.

Motion Compensation Filter for HEVC Decoder

HEVC is the new generation of video compression standards. Efficient prediction of a block from the others requires fractional-position Motion Compensation (MC), carried out by interpolation filters.
These MC filters are modified using fixed-point and approximate operators to test their accuracy and energy efficiency. The previously described MSSIM metric is used to determine the output accuracy of the filter on a classical signal processing image.

K-means Clustering

The last experiment presented in this section is K-means clustering. Given a bidimensional point cloud, this algorithm classifies the points by finding centroids and assigning each point to the cluster defined by the nearest centroid. At the core of K-means clustering is distance computation. More details about K-means clustering are given in Section 5.3.1. For the experiment, 5 sets of $5\times10^3$ data points were generated around 10 random points with a Gaussian distribution. The accuracy metric is the success rate, from 0 to 1, representing the proportion of points classified in the correct cluster. Table 4.5 presents the success rate and the energy spent in distance computation when replacing the exact adders by fixed-point and approximate versions. For the truncated fixed-point version, the energy spent in multiplication is inherently lower than for the approximate one, thanks to the reduction of the bit-width, leading to a total energy reduction of more than half for a success rate of 99%, and even by a factor of nearly 10 for a success rate of about 86%. In spite of the theoretical competitiveness of approximate operators, their advantages are likely to be lost at the application level. Indeed, unlike fixed-point operators, their accuracy reduction is obtained by simplifying the operator structure but not by reducing the operator output bit-width. This reduces the energy of the considered operator, but does not have a positive impact on the other operators, as is the case for fixed-point.

Considerations About Arithmetic Operator-Level Approximate Computing

In this chapter, two types of hardware approximation were compared: fixed-point arithmetic, relying on truncated and rounded accurate operators with careful data sizing, and approximate operators. A direct comparison using the ApxPerf framework showed that both techniques are competitive, depending on the observed error and performance metrics. However, stepping back and observing some state-of-the-art applications showed that using approximate operators often leads to an important overhead, since the designed architecture manipulates larger data containing, on average, more erroneous, useless information bits. Approximate operator outputs show a high error entropy compared to quantized exact outputs, for which all the error is virtually contained in the dropped bits; the error generated by approximate operators may therefore have a huge impact on downstream operations using their output. Mathematically, it is always preferable to propagate many low-significance errors (symbolized by the dropped bits in the fixed-point paradigm) than scarcer high-significance errors. Indeed, in most approximate operators proposed in the literature, the probability of high-significance errors occurring is never negligible, leading to errors with amplitudes as high as the represented dynamic. More generally, it has been shown that comparing the raw performance of operators does not necessarily reflect their performance in a given application context. Hence, a major challenge for hardware approximation is now to consider it at a higher level, in order to take more parameters into account, leveraging relevant error metrics.
An effort must also be made in the search for more efficient approximate multipliers, since they are responsible for the majority of the power consumption in computation-intensive applications. However, considering approximate operators in embedded processors to replace or enhance their integer arithmetic unit might still be a good option, since the processor data size is fixed and cannot be application specific. After a conclusion in favor of fixed-point over functionally approximate arithmetic operators, the studies led in the next chapter drop approximate operators to compare the fixed-point and custom floating-point paradigms.

If the rounding modes of both inputs are different (discouraged), then the first one parsed in the C++ code is selected. The ct_float representation and arithmetic operators were created to remain simple for energy efficiency, yet minimally safe, thanks to the combination of several choices. First, the ct_float mantissa is represented in [1, 2[ with an implicit 1. This provides a 1-bit accuracy benefit in general. However, subnormal numbers are not handled, which implies that a certain range of numbers around 0 is not representable. The exponent is represented in a biased representation. The bias is not customizable for the moment and is set at the center of the exponent range, as in the IEEE 754 representation. Using a biased representation instead of two's complement results in simpler exponent value comparisons, which are omnipresent in arithmetic operators. An important choice is that no flag bits are returned. In general, such flags are returned to indicate zero or subnormal cases for further management. However, these flags imply more bits to transfer in hardware (generally two), and the pipeline may be broken during the management of the corresponding special cases, leading to extra energy consumption. In return for not raising a zero flag, the number zero is directly managed by the arithmetic operators. For this, the value 0 must be representable. To represent 0, the nearest representable negative value from 0 is used. One representable value is sacrificed, but this does not imply any change in the comparison operators, for instance. A slight overhead is then necessary in the arithmetic operators to detect the 0 value at the input. For the addition/subtraction, if one of the inputs is worth 0, the second is selected. No extra delay is implied, since a simple short path may exist in parallel to the original adder. At the addition/subtraction output, the special value representing zero must be issued when the computed output mantissa is a vector of 0s, leading to a slight control overhead. It is also ensured that only the addition of two strictly opposite values can produce 0. For the multiplication, 0 detection at the input returns the 0 value, which only implies a very small overhead. It is ensured that only a multiplication by 0 can return 0. As subnormals are not representable by ct_float, the output is always saturated to the representable value of smallest absolute magnitude with the same sign. Towards infinity, the operators do not under/overflow: the result saturates to the representable value of highest absolute magnitude with the same sign. The combination of these choices implies a slight overhead in the operators, which is not spent in external hardware management. The local management implies less data stored or transferred over long distances, which represents global energy savings. A use case of ct_float is given in Listing 5.1. In this example, an FIR filter is applied to random data.
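Listing 5.1 was lost in extraction; the following is a hypothetical reconstruction consistent with the description that follows. The template syntax ct_float<16, 7> (width, exponent width), the coefficient values and the random generation details are assumptions, and the line numbering of the original is not preserved:

#include <cstdlib>
#include <iostream>
#include "ct_float.h"  // hypothetical header name

#define N_COEF 8
#define N_SAMPLES 64

int main() {
    // Impulse response of the filter, parsed to 16-bit ct_float (e = 7).
    ct_float<16, 7> h[N_COEF] = {0.0625, 0.125, 0.25, 0.5,
                                 0.5, 0.25, 0.125, 0.0625};

    // Input and output signals.
    ct_float<16, 7> x[N_SAMPLES], y[N_SAMPLES];
    for (int i = 0; i < N_SAMPLES; ++i)            // not synthesizable
        x[i] = static_cast<double>(rand()) / RAND_MAX;

    // Synthesizable FIR kernel: + and * use the ct_float operators.
    for (int n = N_COEF - 1; n < N_SAMPLES; ++n) {
        ct_float<16, 7> acc = 0.0;
        for (int k = 0; k < N_COEF; ++k)
            acc = acc + h[k] * x[n - k];
        y[n] = acc;
        std::cout << y[n] << std::endl;            // overloaded << operator
    }
    return EXIT_SUCCESS;
}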
First, on line 9, an array of ct_float is initialized with the impulse response of the filter. The coefficients are parsed to 16-bit ct_float with e = 7 and m = 9. Then, the input and output signals x and y are declared on line 13. Random inputs are generated in double representation, parsed to (16, 7) ct_float and stored in the array x. The random generation presented in this example is not synthesizable. Then, the synthesizable FIR filter is applied. Additions and multiplications are overloaded with the operators developed in the ct_float library. Finally, the resulting FIR filter output is displayed using the overloaded output operator <<.

ApxPerf also comes with a random number generation library. This simplifies the generation of integer and floating-point values, using uniform or normal distributions with user-selected parameters. Several functions are available to guarantee that the generated random values are in the range of the representable values of the considered integer, fixed-point or floating-point operator. For custom floating-point operators such as ct_float, it is also possible to generate couples of inputs which guarantee the activation of the close path of the addition (see Section 1.2.2) with a given probability. It is important to have this path activated for a certain percentage of input couples to perform fair Monte Carlo time-based dynamic power estimation, as it is the most energy-costly computing path of the addition. However, when performing purely random Monte Carlo simulation using inputs uniformly distributed on the whole range of the representable values of a floating-point operator, the close path only has a very low probability of being activated. Therefore, it is important to consider this feature in the generation of random inputs. In the next section, the stand-alone performance of ct_float is evaluated against other custom floating-point libraries, using ApxPerf and its random number generation library.

Performance of CT_FLOAT Compared to Other Custom Floating-Point Libraries

Reconfigurable architectures like FPGAs are used in more and more domains. The most recent and impactful example is the FPGA chip found in Apple's iPhone 7, suspected to be used for artificial intelligence [Tilley]. These conditions illustrate the interest of customizable floating-point architectures. Indeed, the ease of use of the floating-point representation, combined with low-energy small data widths, makes these architectures very promising for the future of reconfigurable architectures. The past years have seen the creation of several customizable floating-point libraries. Mentor Graphics, in its synthesizable C++ libraries AC Datatypes [Mentor Graphics, AC Datatypes v3.7], proposes the custom floating-point class ac_float. Based on the fixed-point library ac_fixed, ac_float allows for light floating-point computation, thanks to simple operators. The mantissa in the representation is not normalized and has no implicit 1. This allows for easy management of subnormals, but induces a potential loss of accuracy in computations. The mantissa is represented in signed two's complement, so the sign information is contained in the mantissa instead of using an extra sign bit.
However, there is no benefit to this choice, since two's complement represents a loss of 1 bit of precision compared to an unsigned representation. The choice of a two's complement representation for the mantissa also makes the comparison operators more complex. Moreover, many cases are not handled, such as zero or infinity. ac_float also supports a custom exponent bias, but managing the exponent bias comes with an overhead. ct_float, presented above, makes different choices, summarized in Table 5.1. Other alternatives such as VFLOAT [Fang et al.] or OptiFEX [Mahzoon et al.] do exist but are not taken into account in the study led in this chapter. VFLOAT proposes IEEE 754-2008 compliant customizable computing cores for existing FPGAs. OptiFEX generates floating-point computing cores targeting FPGAs, like FloPoCo.

Table 5.1 recapitulates the different known properties of the ac_float, ct_float and FloPoCo floating-point representations. In this table, the number of additional bits in the representation is counted with reference to a representation with an implicit 1 in the mantissa and one sign bit. For an equal general accuracy, ac_float needs one more bit for the mantissa than ct_float and FloPoCo. However, with its 2-bit exception field, FloPoCo has the representation requiring the largest width, but also the highest computing reliability.

The hardware performance comparison process for ac_float, ct_float and FloPoCo is as follows. All operators are characterized for an Application-Specific Integrated Circuit (ASIC) target in 28nm FDSOI at 1.0V, 25°C. All designs are generated with a 200 MHz clock. As ac_float and ct_float are compatible with ApxPerf v2, this framework is used to perform the hardware performance characterization. For time-based power analysis, the random inputs generated for the adder/subtracter characterization ensure an activation of the close path for at least 50% of the computations. For FloPoCo, the VHDL files of the operators and test benches are generated using a Stratix IV target and disabling all possible hardware acceleration which could allocate the DSP blocks used in FPGAs. Then, the design is compiled using Design Compiler, characterized using FloPoCo's generated benchmark in ModelSim, and power is estimated using PrimeTime. However, to our knowledge, the benchmark generated by FloPoCo does not ensure any given proportion of activation of the close path, so the time-based estimated dynamic power could be underestimated. FloPoCo's benchmark top design does not consider the input and output data registers, whereas the ApxPerf-generated benchmark does. This roughly represents a 5 to 10% underestimation of the total power for FloPoCo operators, which has to be kept in mind for the analysis of the results. The energy per operation is estimated as the total energy spent before output stabilization, given by Equation 5.2. Using Equation 5.2 implies that both static energy and dynamic energy are considered. Moreover, increasing the clock period of a same circuit should give the exact same energy per operation, since increasing $T_{clk}$ will proportionally decrease the average dynamic power $P_d$ given by the tool (since it is integrated over a proportionally longer time), while modifying neither the static power nor the operator critical path.
Therefore, the proposed energy estimation metric gets rid of both issues of the classical metrics described previously. Also, taking the number of cycles into consideration, as in Equation 5.2, provides a fair comparison between pipelined operators having different numbers of cycles. Indeed, let us consider two operators $op_1$ and $op_2$, where $op_2$ is the same circuit as $op_1$ but pipelined in 2 cycles (instead of 1 for $op_1$). Flip-flops excluded, both circuits are the same. The energy overhead brought by the flip-flops in $op_2$ is to some extent compensated by the smaller fan-outs in the circuit. This means that the dynamic powers of $op_1$ and $op_2$ are very close. However, if the pipeline is efficiently placed, the critical path of $op_2$ should be half that of $op_1$. Under this hypothesis, the energy per operation of $op_1$ and $op_2$ should be the same. This is translated in Equation 5.2 by compensating the division of $T_{cp}$ with the number of cycles $N_c$. As a conclusion, with Equation 5.2, we have a measure of the energy per operation which can be considered as robust to:

• modification of the slack due to different clock periods, and
• pipelining of the operator.

With this metric, different operators, operating in slightly different conditions of frequency and pipelining, can be legitimately compared. From this point on, the total energy spent before stabilization is the metric used each time energy per operation is mentioned.

For the custom floating-point comparative study, half-precision and single-precision floating-point were tested. Half-precision corresponds to an exponent represented on 5 bits and a mantissa on 11 bits. Single-precision has an 8-bit exponent and a 24-bit mantissa. Both addition/subtraction and multiplication were tested for each of these precisions. The results of the comparative studies for 16-bit and 32-bit addition/subtraction and multiplication are given in Tables 5.2, 5.4, 5.3 and 5.5. The two last lines of the tables refer to the relative performance of ct_float with respect to ac_float (resp. FloPoCo); e.g., the ct_float area is 2.15% higher than that of ac_float. At first sight, the three custom floating-point libraries give results in the same order of magnitude. For 16-bit addition/subtraction, ct_float is 15% more energy-costly than both ac_float and FloPoCo, despite being as large as ac_float and 12% smaller than FloPoCo. The fastest 16-bit adder/subtracter is ac_float, followed by ct_float, which is 19% slower but 27% faster than FloPoCo. All performance results are slightly in favor of ac_float for 16-bit addition/subtraction.

For 16-bit multiplication, ac_float is beaten by both ct_float and FloPoCo. FloPoCo's multiplier is the smallest and has the lowest energy consumption. However, ct_float is 25% faster, though it consumes 32% more energy. It must be kept in mind that there are registers at the inputs and outputs of ct_float and ac_float which are not present for FloPoCo, so the real gap should be narrower. 32-bit addition/subtraction shows a very similar energy for ac_float, ct_float and FloPoCo. Indeed, ct_float is 9% worse than ac_float and 4% better than FloPoCo. Again, FloPoCo is the slowest operator, ct_float being 27% faster. The energy of 32-bit multiplication is strongly in favor of ct_float, which saves more than 45% of energy compared to both ac_float and FloPoCo. ct_float is 13% smaller than ac_float and 49% smaller than FloPoCo. However, ac_float is 5% faster.
In conclusion, the ac_float, ct_float and FloPoCo addition/subtraction and multiplication operators are quite competitive with one another. Though they all have different features (implicit 1 or not, management of particular cases, etc.), they are all quite close in terms of performance. Nevertheless, FloPoCo generally produces the largest and slowest operators, though not always the ones with the highest energy consumption. This can be explained by the fact that ac_float and ct_float operators are generated by a different tool than FloPoCo's, so the underlying integer arithmetic operator architectures may not be the same. Also, the input values for power estimation are generated differently for ac_float and ct_float on one side and FloPoCo on the other side, thus activating the far and close paths differently during simulations. The main conclusion of this study is that the proposed custom floating-point library ct_float competes with the other existing libraries and gives comparable performance results. In the following sections, ct_float is used as the reference for the comparison with fixed-point arithmetic, first on stand-alone operators, and then on classical signal processing applications.

Stand-Alone Comparison of Fixed-Point and Custom Floating-Point

Because of the different nature of floating-point and fixed-point errors, this section only compares them in terms of area, delay and energy. Indeed, floating-point error magnitude strongly depends on the amplitude of the represented data: low-amplitude data have a low error magnitude, while high-amplitude data have a much higher error magnitude. Floating-point error is only homogeneous when considering relative error. By contrast, fixed-point has a very homogeneous error magnitude, uniformly distributed between well-known bounds; its relative error therefore depends on the amplitude of the represented data, low for high-amplitude data and high for low-amplitude data. This duality makes these two paradigms impossible to compare atomically using the same error metric. The only meaningful error comparison is to use the two representations in the same application, which is done in Section 5.3 on FFT and K-means clustering. A small numeric illustration of this duality is given below.

The study in this section and in the rest of this chapter is performed using ApxPerf v2, as mentioned before, with the ct_float library for custom floating-point. A 100 MHz clock is set for design and performance estimation. All other parameters and targets are the same as in the previous section. Energy per operation is estimated from the detailed gate-level power results given by PrimeTime, using the metric of Equation 5.2 described in the previous section. In this section, 8-, 10-, 12-, 14- and 16-bit fixed-width operators are compared. For each of these bit-widths, several versions of the floating-point operators are estimated with different exponent widths. 25×10³ uniform couples of input samples are used for each operator characterization. The random generation embedded in ApxPerf v2 ensures that 25% of the floating-point adder inputs activate the close path of the operator, which has the highest energy by nature. Adders and multipliers are all tested in their fixed-width version, meaning their numbers of input and output bits are the same; the output is obtained by truncation of the result.
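As announced above, the following small C++ program (an illustration written for this discussion, not part of ApxPerf) makes this duality concrete by rounding the same values to a fixed-point grid with 8 fractional bits and to a floating-point format with an 8-bit mantissa:

    #include <cmath>
    #include <cstdio>

    // Fixed-point: round to a uniform grid of step 2^-d (d fractional bits).
    // The absolute error is bounded by 2^-(d+1) whatever the amplitude.
    double quantize_fxp(double x, int d) {
        const double step = std::ldexp(1.0, -d);
        return std::round(x / step) * step;
    }

    // Floating-point: keep an m-bit mantissa, whatever the exponent.
    // The relative error is bounded by 2^-m whatever the amplitude.
    double quantize_flp(double x, int m) {
        if (x == 0.0) return 0.0;
        int e;
        double frac = std::frexp(x, &e);  // x = frac * 2^e, frac in [0.5, 1)
        return std::ldexp(std::round(std::ldexp(frac, m)), e - m);
    }

    int main() {
        const double samples[] = {0.001234, 0.789, 123.456};
        for (double x : samples) {
            double fx = quantize_fxp(x, 8), fl = quantize_flp(x, 8);
            printf("x=%10.6f | FxP: abs %.2e rel %.2e | FlP: abs %.2e rel %.2e\n",
                   x, std::fabs(x - fx), std::fabs(x - fx) / x,
                      std::fabs(x - fl), std::fabs(x - fl) / x);
        }
        return 0;
    }

The smallest sample is rounded to zero in fixed-point (100% relative error) while keeping a bounded relative error in floating-point; conversely, the largest sample has a much larger absolute error in floating-point than in fixed-point.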
Figure 5.2 and Figure 5.3 show the area, delay and energy per operation of adders and multipliers, respectively, for different bit-widths, relative to the corresponding fixed-point operator. FlP_N(k) represents N-bit floating-point with a k-bit exponent. As discussed above, the floating-point adder has an important overhead compared to the fixed-point adder. For any configuration, results show that area and delay are around 3× higher for floating-point. As a consequence, the higher complexity of the floating-point adder leads to 5× to 12× more energy per operation.

Results for the multipliers are very different. Floating-point multipliers are 2-3× smaller than fixed-point ones. Indeed, the control part of the floating-point multiplier is much less complicated than for the adder and, as multiplication is applied only on the mantissa, it always operates on a smaller number of bits than the fixed-point multiplier. Timing is also slightly better for floating-point, but not as much as area, since an important number of operand shifts may be needed during computation. These shifts have an important impact on the energy per operation, especially for large mantissas, so floating-point suffers an overhead of 2× to 10× on the energy per operation.

For a good interpretation of these results, it must be kept in mind that, in a fixed-point application, data shifting is often needed at many points. The cost of shifting this data does not appear in the preliminary results presented in this section. For floating-point, on the contrary, data shifting is directly contained in the operator hardware, which is reflected in the results. Thus, the important advantage of fixed-point shown by Figures 5.2 and 5.3 must be tempered by the important impact of shifts when applied in applications.

Application-Based Comparison of Fixed-Point and Custom Floating-Point

In this section, floating-point and fixed-point operators are compared in the context of their use in applications. As stated above, they have a very different error nature and their errors could therefore not be legitimately compared in the previous stand-alone comparison. First, the two paradigms are compared using the K-means clustering algorithm in Section 5.3.1; these results were published in [4]. Then, FxP and FlP operators are compared on the FFT algorithm in Section 5.3.2. These results come from the work of Romain Mercier during an undergraduate internship at IRISA.

Algorithm 3 K-Means Clustering (1 Dimension)
Require: k ∈ ℕ, data
    err ← +∞
    cpt ← 0
    c ← init_centroids(data)
    do                                            ▷ Main loop
        old_err ← err
        err ← 0
        c_tmp[0 : k-1] ← 0
        counts[0 : k-1] ← 0
        for d ∈ {0 : N_data - 1} do
            min_distance ← +∞
            for i ∈ {0 : k-1} do                  ▷ Data labelling
                distance ← distance_comp(data[d], c[i])
                if distance < min_distance then
                    min_distance ← distance
                    labels[d] ← i
                end if
            end for
            c_tmp[labels[d]] ← c_tmp[labels[d]] + data[d]
            counts[labels[d]] ← counts[labels[d]] + 1
            err ← err + min_distance
        end for
        for i ∈ {0 : k-1} do                      ▷ Centroids position update
            if counts[i] ≠ 0 then
                c[i] ← c_tmp[i] / counts[i]
            else
                c[i] ← c_tmp[i]
            end if
        end for
        cpt ← cpt + 1
    while (|err - old_err| > acc_target) ∧ (cpt < max_iter)

The experimental setup is divided into two parts: accuracy and performance estimation. Accuracy estimation is performed on 20 data sets composed of 15×10³ bidimensional data samples. These data samples are all generated in a square delimited by the four points (±√2, ±√2), using Gaussian distributions with random covariance matrices around 15 random mean points.
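The distance_comp kernel of Algorithm 3 is the part that is synthesized by HLS and characterized. Since Equation 5.5 is not reproduced in this excerpt, the sketch below assumes it is the usual squared Euclidean distance between a bidimensional sample and a centroid (the square root being unnecessary, since only comparisons and accumulations are performed on the distances); the template parameter T stands for the arithmetic type under test, e.g. a ct_float or fixed-point instantiation:

    #include <cstdio>

    // Assumed form of the distance computation core (Equation 5.5):
    // 2 subtractions, 2 multiplications and 1 addition per call.
    template <typename T>
    T distance_comp(T x0, T x1, T c0, T c1) {
        T d0 = x0 - c0;
        T d1 = x1 - c1;
        return d0 * d0 + d1 * d1;
    }

    int main() {
        // Reference double-precision instantiation; an ApxPerf run would
        // instantiate T with the fixed-point or custom floating-point type.
        printf("%f\n", distance_comp<double>(0.5, -0.25, 0.4, -0.3));
        return 0;
    }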
Several accuracy targets are used to set the stopping condition: 10⁻², 10⁻³ and 10⁻⁴. The reference for accuracy estimation is IEEE 754 double-precision floating-point. Figure 5.4 is an example of a typical golden output for the experiment. The error metrics for the accuracy estimation are:

• the Mean Square Error of the resulting cluster Centroids (CMSE), and
• the classification Error Rate (ER) in percent, defined as the proportion of points not tagged with the right cluster identifier.

The lower the CMSE, the better the estimated position of the centroids compared to the golden output. Energy estimation is performed using the first of these 20 data sets, limited to 20×10³ iterations of distance computation for time and memory purposes. As the data sets were generated around 15 points, the number of clusters searched for is also set to 15. The whole K-means clustering experiment, from input data generation to result processing and graph generation, is fully available in the open-source ApxPerf v2 framework, which is used for the whole study.

Experimental Results on K-Means Clustering

Section 5.2 showed that fixed-point additions and multiplications consume less energy than floating-point ones for the same bit-width. However, these results do not yet consider the impact of the arithmetic on accuracy. This section details this impact on the bidimensional K-means clustering algorithm. A first qualitative study on K-means clustering showed that, to get correct results (no artifacts), floating-point data must have a minimal exponent width of 5 bits in the distance computation (smaller exponents are too inaccurate in low distance computations) and fixed-point data a minimum of 3 bits for the integer part. All the following results therefore use these two parameters.

Area, latency and energy of the distance computation of Equation 5.5 are provided. The total energy of the application is defined as

E_K-means = E_dc × (N_it + N_cycles - 1) × N_data,    (5.6)

where E_dc is the energy per distance computation calculated from the data extracted with ApxPerf, N_it the average number of iterations necessary to reach the K-means stopping condition, N_cycles the number of stages in the pipeline of the distance computation core (automatically determined by HLS), and N_data the number of processed data per iteration (a direct transcription of this formula in code is given at the end of this subsection).

Results for 8-bit and 16-bit FlP and FxP arithmetic operators are detailed in Table 5.6, with the stopping condition set to 10⁻⁴. For the 8-bit version of the algorithm, several interesting results can be highlighted. First, the custom floating-point version is twice as large as the fixed-point version, and the floating-point distance computation consumes 2.44× more energy than the fixed-point one. However, the floating-point version of K-means converges in 8.35 iterations on average against 14.9 for fixed-point, which makes the total energy of the floating-point version of the whole K-means application close to that of fixed-point. Figures 5.5a and 5.5b show the output for floating-point and fixed-point 8-bit computations, applied to the same inputs as the golden output of Figure 5.4. A very neat stair effect on data labelling is clearly visible, due to the high quantization levels of the 8-bit representation. However, in the floating-point version, the positions of the cluster centroids are very similar to the reference, which is not the case for fixed-point.

For the 16-bit version, all results are in favor of fixed-point, floating-point being twice as large and consuming 1.7× more energy. Fixed-point also provides slightly better error results (2.9% vs. 0.6% for the ER). Figures 5.5c and 5.5d show output results for 16-bit floating-point and fixed-point. Both are very similar and nearly equivalent to the reference, which reflects the high success rate of the clustering.

The competitiveness of FlP over FxP on small bit-widths and the higher efficiency of FxP on larger bit-widths is confirmed by Figure 5.6, depicting energy vs. classification error rate. Indeed, for the different accuracy targets (10⁻², 10⁻³, 10⁻⁴), only 8-bit floating-point provides higher accuracy for a comparable energy cost, while 10- to 16-bit fixed-point versions reach an accuracy equivalent to floating-point with much lower energy. The stopping condition does not seem to have a major impact on the relative performance.
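For reference, Equation 5.6 transcribes directly into code; in the sketch below, only the 8.35 average iteration count comes from the measurements reported above, and all other numeric values are illustrative placeholders:

    #include <cstdio>

    // Direct transcription of Equation 5.6.
    double kmeans_energy(double e_dc,     // energy per distance computation (J)
                         double n_it,     // average number of iterations
                         int    n_cycles, // pipeline stages of the distance core
                         long   n_data)   // distance computations per iteration
    {
        return e_dc * (n_it + n_cycles - 1) * n_data;
    }

    int main() {
        // E.g. the 8-bit floating-point version converging in 8.35 iterations
        // on average, with placeholder e_dc and pipeline depth.
        printf("E_K-means = %g J\n", kmeans_energy(2.0e-12, 8.35, 3, 15000L));
        return 0;
    }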
Comparative Results on Fast Fourier Transform Application

In the previous section, a comparative study between fixed-point and custom floating-point was performed on K-means. We showed that, contrary to what could be expected, floating-point is very competitive for small bit-widths, besides being easier to manage thanks to its high flexibility. In this section, a similar study is performed on the Fast Fourier Transform (FFT). The error study, generation and analysis of results were performed by undergraduate intern Romain Mercier; the hardware performance estimation was obtained using ApxPerf v2. The original study also included approximate integer operators, which are not discussed here since a study on approximate operators in the FFT has already been presented in Section 4.3.1.

The studied FFT is the Radix-2 Decimation-In-Time (DIT) FFT, which is the most common form of the Cooley-Tukey algorithm [START_REF] Cooley | An algorithm for the machine calculation of complex fourier series[END_REF]. For the hardware estimation, only the computation core of the FFT is considered, i.e. the butterfly

X_k = E_k + e^(-2πik/N) · O_k
X_(k+N/2) = E_k - e^(-2πik/N) · O_k.    (5.7)

This leads to the hardware implementation of 6 additions/subtractions and 4 multiplications (a code sketch of this butterfly is given at the end of this section). For each version of the FFT, all constants and variables are represented with the same parameters (same bit-width, same integer part width for FxP, same exponent width for FlP). The absence of over/underflow is ensured for the FxP version. For the FlP version, the repartition between exponent and mantissa widths is the one giving the smallest error after an exhaustive search. For hardware performance estimation, only the FFT-16 was characterized. The error metric used is the Mean Square Error (MSE) at the output, compared to a double-precision floating-point FFT.

Energy per operation related to error for the FFT-16 is depicted in Figure 5.7: the x-axis gives the energy of the FFT computing core derived from Equation 5.2 in pJ, and the y-axis the output MSE. In this experiment, the advantage is clearly in favor of fixed-point. Indeed, for any identical bit-width, fixed-point outperforms floating-point in terms of energy and accuracy. As already shown in Section 5.2, floating-point operations, additions in particular, are much more expensive than fixed-point ones, in return for an increased accuracy over a larger dynamic range. However, the FFT output quality does not depend on accuracy over a dynamic range as large as, for instance, K-means clustering does. This makes floating-point even less accurate than fixed-point at equal bit-width, because of a smaller significant part (the mantissa for floating-point, versus all bits for fixed-point). Indeed, in the experiment, the exponent takes 7 bits of the total width, which are not assigned to more accuracy on the significant part. Another interesting point is the data points presenting an energy peak, which occur for 12-, 18- and 28-bit floating-point and 22-bit fixed-point. These peaks are most probably due to differences of implementation in the HLS process: e.g., larger adder or multiplier structures may have been selected by the tool to meet delay constraints, leading to an energy overhead.
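As announced above, here is a minimal C++ sketch of the butterfly of Equation 5.7, making the count of 4 multiplications and 6 additions/subtractions explicit. double is used for illustration, whereas the actual experiment instantiates the fixed-point or ct_float type under test; the twiddle factor value in main is a placeholder:

    #include <cstdio>

    template <typename T>
    struct Complex { T re, im; };

    // Radix-2 DIT butterfly of Equation 5.7, with w = e^(-2*pi*i*k/N)
    // a precomputed constant (twiddle factor).
    template <typename T>
    void butterfly(Complex<T> e, Complex<T> o, Complex<T> w,
                   Complex<T>& xk, Complex<T>& xk2) {
        // Complex product t = w * o: 4 multiplications, 2 additions
        T tre = w.re * o.re - w.im * o.im;
        T tim = w.re * o.im + w.im * o.re;
        // X_k = E_k + t and X_(k+N/2) = E_k - t: 4 additions/subtractions
        xk  = {e.re + tre, e.im + tim};
        xk2 = {e.re - tre, e.im - tim};
    }

    int main() {
        // Illustrative inputs; w would be a precomputed twiddle factor.
        Complex<double> e{1.0, 0.0}, o{0.0, 1.0}, w{0.7071, -0.7071};
        Complex<double> a, b;
        butterfly(e, o, w, a, b);
        printf("X_k = %f%+fi, X_(k+N/2) = %f%+fi\n", a.re, a.im, b.re, b.im);
        return 0;
    }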
Conclusion and Discussion about the use of Fixed-Point and Floating-Point Arithmetic for Energy-Efficient Computing

A raw comparison of floating-point and fixed-point arithmetic operators gives an advantage in area, delay and energy efficiency to fixed-point. However, the comparison on a real application like the K-means clustering algorithm reveals interesting features of custom floating-point arithmetic. Indeed, for K-means, contrary to what would have been expected, floating-point arithmetic tends to show better results in terms of energy/accuracy trade-off for very small bit-widths (8 bits in our case). Increasing this bit-width, however, still leads to an important area, delay and energy overhead for floating-point.

The most interesting results occur for the 8-bit floating-point representation. With only 3 bits of mantissa, which corresponds to only 3-bit integer adders and multipliers, the results are better than with 8-bit fixed-point integer operators. This is obviously due to the adaptive dynamic range offered by floating-point arithmetic at operation level, whereas fixed-point has a fixed dynamic range which is disadvantageous for low-amplitude data and distance calculation. However, non-iterative algorithms should be tested to know whether small floating-point keeps its advantage.

Floating-point representation showed its limitations on the FFT experiment. Indeed, the significant bits of the mantissa sacrificed to the exponent represent a penalty in an application whose output quality does not depend on high accuracy over an enhanced dynamic range as much as K-means clustering does. The gap between fixed-point and floating-point in this context could probably be narrowed with different architectural choices, such as exponent bias and subnormal number support. Nevertheless, these features would come with inevitable area, delay and energy overheads.

From a hardware-design point of view, custom floating-point is costly compared to fixed-point arithmetic. Fixed-point benefits from free data shifting between two operators, as the outputs of one operator only need to be connected to the inputs of the following one in the datapath. However, from a software-design point of view, shifts between fixed-point computing units must be effectively performed, which leads to a non-negligible delay and energy overhead. By contrast, floating-point computing units do not suffer from this overhead, since data shifting is implemented in the operators and managed by the hardware at runtime. Thanks to this feature, floating-point exhibits another important advantage, which is its ease of use, since software development is faster and more secure. Hence, with the aim of producing general-purpose low-energy processors, small bit-width floating-point arithmetic can provide major advantages compared to the classical integer operators embedded in microcontrollers, with a better compromise between ease of programming, energy efficiency and computing accuracy.

Conclusion

To face the predicted end of Moore's Law, this thesis proposes to look into approximate architectures. Indeed, it has been shown that most applications can be computed with relaxed accuracy without affecting their output quality, or with a tolerable degradation. Several levels of architectural approximation are possible, as listed in Chapter 1. In this thesis, the opportunities to save energy using approximate arithmetic are highlighted. They concern floating-point, fixed-point, and approximate integer adders and multipliers.

In this document, four main contributions were proposed. First, to our knowledge, there was no existing critical and comparative study of approximate arithmetic operators considering an equivalent number of references, so the first chapter of this document can serve as a baseline.
Two techniques for the estimation of the output error of approximate systems were proposed, constituting the second main contribution. Concerning fixed-point systems, a fast and scalable method leveraging the spectral shape of the error was described in Chapter 2. This technique has the same accuracy as statistical propagation techniques, but with a much lower complexity in the construction of the analytical model. The development and results of this new approach were presented in [START_REF] Barrois | Leveraging power spectral density for scalable system-level accuracy evaluation[END_REF]. In the first part of Chapter 3, a technique based on bitwise-error rate propagation was proposed for approximate operators. Compared to others, this technique is a good compromise between memory cost and accuracy. However, the very different natures of existing approximate operators make it very hard to find generally good models. After exploring a high number of solutions, the conclusion is that no model approaches the accuracy of Monte Carlo simulation without equaling or exceeding its complexity. Contrary to the fixed-point paradigm, it seems that approximate arithmetic operators cannot safely be used without preliminary simulations.

A third contribution was to use the error behavior of approximate operators to estimate the effect of VOS on accurate operators. Indeed, simulating VOS requires transistor-level simulation, which is extremely long and memory-costly. Using combinations of approximate operators trained on real data allows for fast estimation of the effects of VOS in systems too large to be simulated. This contribution was published in [3].

Finally, two comparative studies of existing approximate paradigms, supported by the creation and usage of our open-source framework ApxPerf, constitute the fourth contribution. First, fixed-point and approximate operators are compared. In their stand-alone versions, both are quite competitive. As most approximate operators are based on shortening the carry chains, they are generally fast and they generate scarce but high-amplitude errors, depending on their parameters. By contrast, fixed-point always generates errors because of the quantization at the output, but this error is always small and well characterized. Therefore, they have quite similar error on average, whereas approximate operators tend to be faster thanks to a shorter critical path. However, our study showed that, when comparing both paradigms in real signal processing applications, fixed-point is much better, for two reasons. Firstly, its error entropy is minimal, lying only in the dropped bits, which are the LSBs. Therefore, the propagation of this error across the system leads to a contained amplification. On the contrary, the high-entropy error of approximate operators, potentially occurring at any bit significance, leads to drastic amplification effects, and the application output may be strongly degraded. Secondly, dropping bits at the output of fixed-point operators, instead of keeping the same bit-width throughout the computation as approximate operators do, has an effect which had never been pointed out until now: the need for smaller downstream operators and for less memory. These two reasons make quantization by far superior to operator-level approximation. The only case in which approximate operators could be considered is constant bit-width processing, as in CPUs, when computation must necessarily be faster than what fixed-point can offer. This study was published in [2].
The second study confronts fixed-point and floating-point arithmetic in the context of low-energy computing. Floating-point is generally associated with high-accuracy computing, but at an important hardware cost. In our study, we consider small-width floating-point through our custom library ct_float. After a comparison with different existing custom floating-point libraries showing that ct_float is competitive, we use this library combined with ApxPerf to evaluate its cost against fixed-point. In their stand-alone versions, floating-point addition/subtraction is, as expected, much more costly than fixed-point. For multiplication, the gap is tight, as floating-point uses a smaller integer multiplier and as the control overhead in the floating-point multiplier is reasonably small. Two applications were used to evaluate the real overhead of floating-point. In K-means clustering, small floating-point is surprisingly competitive with fixed-point. Indeed, K-means requires accurate computations for both small and large distances, which advantages floating-point thanks to its high accuracy over a larger dynamic range than fixed-point. In the context of the FFT, whose quality is less impacted by inaccuracy on small-amplitude signals, fixed-point strongly outperforms floating-point. Indeed, the large dynamic range of floating-point does not bring enough improvement in this case to compensate for its overhead. As a conclusion, the competitiveness of small-width floating-point in terms of energy-error ratio is strongly dependent on the application. Nevertheless, despite a generally higher energy consumption for floating-point compared to fixed-point, its ease of use makes it very interesting for fast development. Having a CPU embedding small bit-width floating-point processing units could be very interesting for general-purpose low-power computing. The study on the K-means clustering application was published in [4].

The work constituting this thesis comes with the following conclusions:

• The degradation caused by approximate operator errors across a system is difficult to estimate efficiently without resorting to Monte Carlo simulation. However, approximate operators can be used to reproduce the effects of physical phenomena such as VOS.

• While stand-alone approximate arithmetic operators are generally competitive, they show important limitations when used in real-life signal processing applications compared to classical fixed-point.

• Floating-point energy cost is able to compete with fixed-point when used in applications that require accuracy at different amplitude scales. In this context, using floating-point for low-energy computing has to be considered.

This thesis also opens up new perspectives. Firstly, it has been shown that, in general, using approximate operators leads to important errors. Therefore, there is a strong need for new approximate operators with better error performance. Some of them are on a good track. With the DRUM approximate multiplier for instance, no high-amplitude errors can occur, as it is based on the floating-point paradigm applied to a fixed representation. Mixing the fixed-point and floating-point paradigms is a good idea for speed, though it imposes the storage of many useless zeros in the LSBs.
Other operators like the GDA approximate adder propose runtime-configurable approximate addition, which is of interest for energy-autonomous embedded devices that may need to run at different levels of power consumption depending on the remaining stored energy and the amount of energy being harvested. Finally, the concept of exact adders based on error-corrected approximate operators, proposed by VLSA, deserves to be further investigated.

Secondly, models for the error propagation of approximate operators need to be developed, though we concluded in this study that their various natures are an impediment to the good performance of general models. In our mind, new models should consider the nature of the approximation performed by each operator. The general model would then be an aggregation of several models, among which the one that best fits a considered approximate operator would be selected when needed.

Finally, the floating-point paradigm for low-energy computing has a high potential for new research. First, the best compromise between control overhead and accuracy should be investigated in this context, for instance regarding the management of subnormals: the question is whether the accuracy benefit of supporting these low-amplitude numbers is important enough to compensate for the hardware overhead. The customization of the exponent bias also has to be investigated. With a constant bias, adding one more bit to the exponent increases the dynamic range as much towards infinity as towards zero. However, it is not interesting to have a dynamic range which is too large towards infinity, since those resources could be allocated to even better accuracy around zero. Nevertheless, introducing a fully parameterizable bias would bring an important memory and control overhead. A solution might be to have several predefined biases that could be used to operate at different amplitude scales with a moderate overhead. The interest of handling particular cases such as under/overflows also needs to be investigated. Once all these investigations are complete, the structure of a low-energy small bit-width processing unit with innovative features should be proposed. For instance, if exponent management is faster, it could be possible to use a single exponent management unit for two mantissa management units to save area. All these opportunities should be considered for future research and for the proposition of new energy-efficient architectures, which are a major stake in overcoming the predicted end of Moore's Law.
On Figure1.7a, the discrete uniform distribution of RN method has a negative bias of q 2 Chapter 1 Figure 1 . 8 - 118 Figure 1.8 -Example of quantization and rounding of a 10-bit fixed-point number to a 6-bit fixed-point number Figure 1 . 1 Figure 1.12 -8-bit RCA-based Carry-Select Adder (CSLA) with 2-3-3 basic blocks Figure 1 . 18 - 118 Figure 1.18 -Half adder transformation -requires 2 gates Figure 1 . 19 - 119 Figure 1.19 -Wallace and Dadda trees schemes applied to 5-bit multiplication summand grid reduction. The dashed rectangle corresponds to final 8-bit addition. Both require three stages, but Wallace tree takes 51 gates while Dadda tree only requires 48. Figure 1 . 1 Figure 1.20 -6-bit signed array multiplier -AFA (resp. AHA) corresponds to a FA (resp. HA) which inputs x i and y i are combined by an AND cell, NFA corresponds to a FA which inputs x i and y i are combined by a NAND cell. AHA and AFA structure are depicted in Figure 1.21. Figure 1 . 22 .Figure 1 . 21 - 122121 Figure 1.21 -Structures of AHA and AFA Figure 1 . 22 - 122 Figure 1.22 -Sequential multiplier -boxes hatched in grey are right-shift registers Figure 1 . 23 - 123 Figure 1.23 -Probability for the longest carry chain of a 64-bit adder to be inferior to x as a function of x Figure 1 . 25 - 125 Figure1.25 -Distribution of calculations for carry propagation matrix products[START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF] Figure 1 . 26 - 126 Figure 1.26 -Consideration of carries in LPA and ACA output computation for an 8-bit adder with k = 2 -Each color shows the inputs considered in the computation of the corresponding output bit. 4 Figure 1 . 27 - 4127 Figure 1.27 -Error maps of 8-bit LPA and ACA adders for different values of k -White zones correspond to accurate calculations. Figure 1 . 28 - 128 Figure1.28 -Hardware implementation of VLSA[START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF] Figure 1 . 33 - 133 Figure 1.33 -Hardware implementation of ETAII[START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF] Figure 1 . 34 - 134 Figure 1.34 -Hardware implementation of ETAIIM Figure 1 . 35 - 135 Figure 1.35 -Consideration of carries in ETAIV output computation for an 8-bit adder with X = 2 -Each color shows the inputs considered in the computation of the corresponding output bit. 3 Figure 1 . 36 - 3136 Figure 1.36 -Error maps of 16-bit ETAIV for different values of X 1 . 38 - 5 delay 1385 Figure 1.38 -Consideration of carries in AC2A output computation for an 8-bit adder with k = 2 -Each color shows the inputs considered in the computation of the corresponding output bit. 8 Figure 1 . 42 - 8142 Figure 1.42 -GDA reconfigurable prediction scheme Figure 1 . 43 - 143 Figure1.43 -Error vs delay for an identical power consumption for GDA and AC2A[START_REF] Ye | On reconfiguration-oriented approximate adder design and its application[END_REF] Figure 1 . 44 - 144 Figure 1.44 -Original and approximate MA transistor view -A and B inputs refer to x and y in the notations of this document, while Sum output refers to z. Chapter 1 Figure 1 . 11 Figure 1.45 -DCT/IDCT test results for IMPACT -The savings are relative to a 20-bit accurate RCA Figure 1 . 46 - 146 Figure1.46 -Flowchart for probabilistic pruning optimization process[START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF] Figure 1 . 
48 - 148 Figure 1.48 -Probabilistic pruning results for Kogge-Stone and Han-Carlson adders[START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF] Chapter 1 Figure 1 . 11 Figure 1.49 -Two's-complement signed 6-bit AAMI structure Figure 1 .Figure 1 . 51 - 1151 Figure 1.50 -Two's-complement signed 6-bit AAMII structure 25 )Figure 1 . 52 - 25152 Figure 1.52 -Exhaustive search of K ◊<n and K ◊=n for a 6-bit multiplier -the vertical red line shows the index of the optimal values of K ◊<n and K ◊=n Chapter 1 Figure 1 . 11 Figure 1.53 -8-bit signed AAMIII structure (a) n = 4 (b) n = 8 (c) n = 16 Figure 1 . 1 11 Figure 1.54 -Error maps of 4-bit, 8-bit and 16-bit AAMIII Figure 1 . 56 -Figure 1 . 57 - 156157 Figure 1.56 -Structure of a 12-bit ETM[START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF] Figure 1 . 58 - 158 Figure1.58 -PDP for 12-bits ETM and array multiplier[START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF] Figure 1 . 60 - 160 Figure 1.60 -Partial product generation 1 Figure 1 . 61 - 1161 Figure 1.61 -Karnaugh map representation of approximate carry for n = 10 Figure 1 . 1 Figure 1.65 -DRUM input unbiasing process. Step 1: original input. Step 2: selecting the k non-zeros MSBs. Step 3: unbiasing. Greyed cells represent the virtual value of dropped bits. Chapter 1 Figure 1 . 66 - 1166 Figure 1.66 -Structure of DRUM. L0D stands for Leading-zero-Detector, used to select the bits to be effectively multiplied and to perform the final shift. Chapter 1 error 1 outputs instead of producing scarce high-amplitude error. This is confirmed by error maps visible in Figure1.68. 6 Figure 1 . 68 - 6168 Figure 1.68 -Error maps of 16-bit DRUM with different k Figure 2 . 1 - 21 Figure 2.1 -Extraction of the mean square error of a fixed-point system Figure 2 . 2 - 22 Figure 2.2 -Comparison of noise parameters propagation using traditional flat, PSD agnostic and proposed PSD methods A 2 -Figure 2 . 4 - 224 Figure 2.4 -Band-pass frequency filtering scheme Figure 2 . 2 Figure 2.6 -E d versus fractional bit-width d Figure Figure 2.7 -E d versus number of PSD samples N PSD Figure 2 . 8 - 28 Figure 2.8 -Execution time in seconds and speed up for frequency filtering and DWT systems versus the number of PSD samples Figure 2.9 gives a visual comparison between the PSDs of output error obtained by intensive simulation and PSD method on 1024 samples for a 2-level Daubechies DWT encoding and decoding, with all Chapter 2 Figure 2 . 9 - 29 Figure 2.9 -Output frequency repartition of the fixed-point error after DWT encoding and decoding Chapter 3 ( 4 Figure 3 . 1 - 3431 Figure 3.1 -Error maps of 8-bit FxP quantization process and approximate operators. 8-to-4bit quantization illustrates the regularity of the uniform error repartition of FxP arithmetic. The error maps of the three other approximate operators illustrate that the nature of their error is far more complex. Figure 3 . 2 - 32 Figure 3.2 -Propagation of BWER across an operator . 5 )Figure 3 . 53 Figure 3.4 presents an example of how are trained the corresponding conditional probabilities after one iteration of training. The inputs and output are both 4-bit and k = 2. First, looking at the blue part, after approximate operation, the output LSB is not faulty. Therefore, f z,0 = 0. As x0 was not faulty and ŷ0 was faulty, the corresponding observation of input EEV is Fx,y,2 . 
Thus, following Equation 3.5, the estimation of P (e z,0 |E x,y,2 ) is modified by increasing the denominator by 1. Then, looking at the yellow part of Figure3.4, the output observation is f z,1 = 1, meaning the output is erroneous. As k = 2, input ranks 1 and 0 are observed together. The corresponding observation (0, 1, 1, 0) is F x,y,6 . P (e z,1 |E x,y,6) is modified increasing the numerator by 1 (faulty output) and increasing the denominator by 1. The operation is repeated at significance position 2 (green part) and 3 (red part), each time observing 2k = 2 input error vector bits.Following this method, the conditional probability data structure has m elements updated at each new training cycle. Finally, after a sufficient number of training cycles (discussed in Section 3.3.1), the model is trained and ready for propagation, which is discussed in the following section. Figure 3 . 4 - 34 Figure 3.4 -Example of Conditional Probabilities Training for n = 4, m = 4 and k = 2 j (E x,y,0 | B x,y ). Let the partial input BWER b x,3 , b y,3 , b x,4 , b y,4 given by Table 3.3. Let an approximate adder trained with k = 2. To determine b z,4 (similarly for other b The results of training convergence speed are depicted in Figure3.5. The experiments show that the training time relative to the number of random training samples is independent of k on a same processor. Therefore, a single curve for elapsed time is depicted on the figure in dotted red, which is the mean of the curves for all 374 experiments. The elapsed time is linear with the number of training samples. On this experiment, the 10 8 training samples took about 35 minutes for each experiment in average. The blue curves represent the mean distance of the training values from the reference for each k. As a reminder, the values in the data structure are probabilities and thus they are in [0, 1]. Figure 3 . 5 - 35 Figure 3.5 -Convergence of BWER Training in Function of k Chapter 3 Figure 3 . 6 - 336 Figure 3.6 -Tree Operation Structure with Three Stages Figure 3 . 7 - 37 Figure 3.7 -BWER Estimation and Simulation Results for Stand-Alone ACA with x = 2 and k = 4 Figure 3 .Figure 3 . 8 - 338 Figure 3.9 presents the value of DB metric of adders for different number of stages in the tree configurations described in Figure3.6. For each figure, the number of stages varies between two and four, and the output error refers to the output of the last stage. For each configuration of approximate operators, the value of k giving the best results in the previous stand-alone estima- Figure 3 . 9 - 39 Figure 3.9 -Evolution of estimation error D B with k for different configurations of 16-bit approximate adders with different number of stages Chapter 3 3. 4 34 Modeling the Effects of Voltage Over-Scaling (VOS) in Arithmetic Operators Figure 3 . 3 Figure 3.10 -Proposed Design Flow for Arithmetic Operator Characterization Figure 3 . 11 -Figure 3 . 12 -Figure 3 . 14 - 311312314 Figure 3.11 -Distribution of BER in output bits of 8-bit RCA under voltage scaling Figure 3 . 15 - 315 Figure 3.15 -Estimation Error of the Model for Different Adders and Distance Metrics Figure 3 . 16 - 316 Figure 3.16 -BER and Energy for Different VOS Triads Applied to 16-bit RCA Figure 4 . 
1 - 41 Figure 4.1 -First version of APXPERF // Declaration of variables 12 double 12 x_d = 0.13236, y_d = -1.54351, z_d = 0.75498; 13 double out_d; 14 apx1_t x = x_d; apx2_t y = y_d; apx1_t z = z_d; precision operation (non-synthesizable) 18 out_d = x_d * y_d + z_d; accurate:\t" << x_d << " * " << y_d << " + " << z_d << " = " << out_d << endl; 25 cout << "approximate:\t" << x << " * " << y << " + " << z << " = " << out << endl; * -1.5625 + 0.75 = 0.53125 Figure 4 4 .5 shows P DP F F T as a function of output Peak Signal-to-Noise Ratio (PSNR). PSNR is the maximal power of the output signal divided by the MSE, i.e: P SNR(x)[dB] = 10. log C max(x 2 ) MSE(x) D . Figure Figure 4.5 -Power Consumption of FFT-32 Versus Output PSNR Using 16-bit Approximate Adders shows Lena with four different approximations in the DCT encoding. On Figure4.7a, the additions are replaced by a 16-bit accurate adder with the output truncated to 10 bits. On trunc.Fixed-Point round. Figure 4 . 6 - 46 Figure 4.6 -Power Consumption of DCT in JPEG Encoding Versus Output MSSIM Using 16-bit Approximate Operators Figure 5 . 1 - 51 Figure 5.1 -Representation of the Power Spent by a Circuit in One Cycle. The area of the red polygon represents the total energy spent before stabilization. E s is the static energy and E d Figure 5 . 3 - 53 Figure 5.3 -Relative Area, Delay and Energy per Operation Comparison Between Fixed-Point and Floating-Point for Different Fixed-Width Multipliers and 5.5b show the output for floating-point and fixed-point 8bit computations, applied on the same inputs than the golden output of Figure5.4. A very neat stair-effect on data labelling is clearly visible, which is due to the high quantization levels of the 8-bit representation. However, in the floating-point version, the positions of clusters centroid is very similar to the reference, which is not the case for fixed-point.For the 16-bit version, all results are in favor of fixed-point, floating-point being twice bigger and consuming 1.7◊ more energy. Fixed-point also provides slightly better error results (2.9% for ER vs. 0.6%). Figures 5.5c and 5.5d show output results for 16-bit floating-point and fixed-point. Both are very similar and nearly equivalent to the reference, which reflects the high success rate of clustering. Figure 5 . 5 - 55 Figure 5.5 -K-Means Clustering Outputs for 8-and 16-bit floating-point and fixed-point with Accuracy Target of 10 ≠4 Figure 5 . 6 -Figure 5 . 7 - 5657 Figure 5.6 -Energy Versus Classification Error Rate for K-Means Clustering with Stopping Conditions of 10 ≠4 (Top), 10 ≠3 (Center) and 10 ≠2 (Bottom) 1 45 years of microprocessor trend data . . . . . . . . . . . . . . . . . . . . . . 1.1 12-bit floating-point number . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Dual-path floating-point adder . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Basic floating-point multiplication . . . . . . . . . . . . . . . . . . . . . . . . 1.4 12-bit fixed-point number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Distribution of continuous signal quantization error . . . . . . . . . . . . . . . (a) Quantization error distribution for rounding towards ≠OE . . . . . . . . . (b) Quantization error distribution for rounding towards +OE . . . . . . . . . (c) Quantization error distribution for rounding to nearest . . . . . . . . . . 1.6 Representation of FxP quantization error as an additive noise . . . . . . . . . . 1.7 Conventional rounding vs convergent rounding . . . . . . . . . . . . . . . . . 
(a) Conventional rounding . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Convergent rounding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Quantization and rounding of a fixed-point number . . . . . . . . . . . . . . . 1.9 Fixed-point addition process . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10 One-bit addition function -Full adder . . . . . . . . . . . . . . . . . . . . . . 1.11 8-bit Ripple Carry Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.12 8-bit Carry-Select Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.13 16-bit Brent-Kung Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.14 16-bit Kogge-Stone Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15 General integer multiplication principle . . . . . . . . . . . . . . . . . . . . . 1.16 General visualization of multiplication summand grid . . . . . . . . . . . . . . 1.17 Full adder compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.18 Half adder transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.19 Wallace and Dadda trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Wallace Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Dadda tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.20 6-bit signed array multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.21 Structures of AHA and AFA . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Structure of AHA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Structure of AFA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.22 Sequential multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.23 Probability for the longest carry chain of a 64-bit adder to be inferior to x as a function of x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 1.24 16-bit Sample Adder (LPA) with k = 4 . . . . . . . . . . . . . . . . . . . . . 39 1.25 Distribution of calculations for carry propagation matrix products [34] . . . . . 40 1.26 Consideration of carries in LPA and ACA output computation . . . . . . . . . 40 1.27 Error maps of 8-bit LPA and ACA adders for different values of k . . . . . . . 41 (a) n = 8, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 (b) n = 8, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 1.28 Hardware implementation of VLSA [34] . . . . . . . . . . . . . . . . . . . . . 42 1.29 Delay and area results for ACA with different bitwidth [34] . . . . . . . . . . . 43 1.30 Principle of ETAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 1.31 ETAI control block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 1.32 ETAI accuracy simulation results [35] . . . . . . . . . . . . . . . . . . . . . . 46 1.33 Hardware implementation of ETAII . . . . . . . . . . . . . . . . . . . . . . . 47 1.34 Hardware implementation of ETAIIM . . . . . . . . . . . . . . . . . . . . . . 48 1.35 Consideration of carries in ETAIV output computation . . . . . . . . . . . . . 48 1.36 Error maps of 16-bit ETAIV for different values of X . . . . . . . . . . . . . . 49(a) n = 16, X = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 (b) n = 16, X = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 1.37 Hardware implementation of ETAIV . . . . . . . . . . . . . . . . . . . . . . . 
49 1.38 Consideration of carries in AC2A output computation . . . . . . . . . . . . . . 51 1.39 Accuracy vs power for AC2A and other approximate adders under VOS . . . . 53 1.40 Structure of proposed n-bit GDA composed of 4 n/4-bit sub-adders . . . . . . 56 1.41 GDA hierarchical prediction scheme . . . . . . . . . . . . . . . . . . . . . . . 57 1.42 GDA reconfigurable prediction scheme . . . . . . . . . . . . . . . . . . . . . 57 1.43 Error vs delay for an identical power consumption for GDA and AC2A . . . . . 61 (a) Worst-case error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 (b) Error rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 (c) Average error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 1.44 Original and approximate MA transistor view . . . . . . . . . . . . . . . . . . 63 (a) Original MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (b) Simplified MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (c) AMA1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (d) AMA2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 1.45 DCT/IDCT test results for IMPACT . . . . . . . . . . . . . . . . . . . . . . . 64 1.46 Flowchart for probabilistic pruning optimization process [45] . . . . . . . . . . 66 1.47 16-bit Weighted-Pruned Kogge Stone Adder (WPKSA) . . . . . . . . . . . . . 66 1.48 Probabilistic pruning results for Kogge-Stone and Han-Carlson adders [45] . . 67 1.49 Two's-complement signed 6-bit AAMI structure . . . . . . . . . . . . . . . . . 68 1.50 Two's-complement signed 6-bit AAMII structure . . . . . . . . . . . . . . . . 69 1.51 AAO, AA and ND-ND cells . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 (a) AAO cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 (b) AA cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 (c) ND-ND cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 LIST OF FIGURES 1.52 Exhaustive search of K ◊<n and K ◊=n for a 6-bit multiplier . . . . . . . . . . . 1.53 8-bit signed AAMIII structure . . . . . . . . . . . . . . . . . . . . . . . . . . 1.54 Error maps of 4-bit, 8-bit and 16-bit AAMIII . . . . . . . . . . . . . . . . . . (a) n = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) n = 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (c) n = 16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.55 Example of ETM multiplication process [35] . . . . . . . . . . . . . . . . . . 1.56 Structure of a 12-bit ETM [35] . . . . . . . . . . . . . . . . . . . . . . . . . . 1.57 Accuracy evaluation by simulation for ETM [35] . . . . . . . . . . . . . . . . 1.58 PDP for 12-bits ETM and array multiplier [35] . . . . . . . . . . . . . . . . . 1.59 Summand grid for an 8-bit fixed-width Booth multiplier with LP major -based error correction [49] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.60 Partial product generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.61 Karnaugh map representation of approximate carry for n = 10 . . . . . . . . . (a) a_carry 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) a_carry 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.62 Approximate carry generation circuit using ACGPII for n = 32 . . . . . . . . . 
1.63 LP minor level discrimination for the design of FBMIII . . . . . . . . . . . . . 1.64 8-bit FBMIII schematized structure . . . . . . . . . . . . . . . . . . . . . . . . 1.65 DRUM input unbiasing process . . . . . . . . . . . . . . . . . . . . . . . . . . 1.66 Structure of DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.67 Area and power benefits of 16-bit DRUM [52] . . . . . . . . . . . . . . . . . . 1.68 Error maps of 16-bit DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) n = 8, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) n = 8, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (c) n = 8, k = 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Extraction of the mean square error of a fixed-point system . . . . . . . . . . . 2.2 Comparison of noise parameters propagation using traditional flat, PSD agnostic and proposed PSD methods . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Traditional flat method: propagation of µ i , ‡ 2 i from each noise to output . (b) PSD agnostic: blind propagation of µ i , ‡ 2 i . Proposed PSD method: propagation of µ i , ‡ 2 i , and P SD i . . . . . . . . . . . . . . . . . . . . . . . . 2.3 SFG cycle breaking process example . . . . . . . . . . . . . . . . . . . . . . . (a) Cyclic SFG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Equivalent acyclic SFG . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Band-pass frequency filtering scheme . . . . . . . . . . . . . . . . . . . . . . 2.5 1-level DWT coder and decoder . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 E d versus fractional bit-width d . . . . . . . . . . . . . . . . . . . . . . . . . 2.7 E d versus number of PSD samples N PSD . . . . . . . . . . . . . . . . . . . . 2.8 Execution time in seconds and speed up for frequency filtering and DWT systems versus the number of PSD samples . . . . . . . . . . . . . . . . . . . . . (a) Frequency filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Daubechies 9/7 DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.9 Output frequency repartition of the fixed-point error after DWT encoding and decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 (a) Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 (b) PSD estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 3.1 Error maps of 8-bit FxP quantization process and approximate operators . . . . 108 (a) 8-to-4-bit quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (b) 8-bit ACA, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (c) 8-bit AAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (d) 8-bit DRUM, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 3.2 Propagation of BWER across an operator . . . . . . . . . . . . . . . . . . . . 109 3.3 Extraction of Binary Error Event Vectors for BWER Model Training . .. . . . 112 3.4 Example of Conditional Probabilities Training for n = 4, m = 4 and k = 2 . . 113 3.5 Convergence of BWER Training in Function of k . . . . . . . . . . . . . . . . 115 3.6 Tree Operation Structure with Three Stages . . . . . . . . . . . . . . . . . . . 116 3.7 BWER Estimation and Simulation Results for Stand-Alone ACA with x = 2 and k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
117 3.8 Evolution of estimation error D B with k for different configurations of 16-bit approximate stand-alone adders and multipliers . . . . . . . . . . . . . . . . . 118 (a) ACA -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (b) IMPACT -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (c) AAM -Multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (d) DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 3.9 Evolution of estimation error D B with k for different configurations of 16-bit approximate adders with different number of stages . . . . . . . . . . . . . . . 119 (a) ACA -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 (b) IMPACT -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 3.10 Proposed Design Flow for Arithmetic Operator Characterization . . . . . . . . 123 3.11 Distribution of BER in output bits of 8-bit RCA under voltage scaling . . . . . 125 3.12 Distribution of BER in output bits of 8-bit BKA under voltage scaling . . . . . 125 3.13 Equivalence Between Faulty Hardware Operator and Equivalent Functionally Faulty Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 3.14 Design flow of modelling of VOS operators . . . . . . . . . . . . . . . . . . . 127 3.15 Estimation Error of the Model for Different Adders and Distance Metrics . . . 129 (a) Signal to Noise Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 (b) Normalized Hamming Distance . . . . . . . . . . . . . . . . . . . . . . 129 3.16 BER and Energy for Different VOS Triads Applied to 16-bit RCA . . . . . . . 130 4.1 First version of ApxPerf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.2 Second version of ApxPerf . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 4.3 Direct comparison of 16-bit-input fixed-point and approximate adders regarding MSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 (a) MSE vs power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 (b) MSE vs delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141List of Tables 1.1 IEEE 754 normalized floating-point representation . . . . . . . . . . . . . . . 1.2 Cost of FlP addition vs integer addition . . . . . . . . . . . . . . . . . . . . . 1.3 Cost of FlP multiplication vs integer multiplication . . . . . . . . . . . . . . . 1.4 Mean and variance of quantization error . . . . . . . . . . . . . . . . . . . . . 1.5 Rounding direction vs round/sticky bit . . . . . . . . . . . . . . . . . . . . . . 1.6 Full adder truth table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Radix-4 modified Booth encoding . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Bounds on the longest run of 1's with high probability . . . . . . . . . . . . . . 1.9 Logical equations of CSGC type I and II . . . . . . . . . . . . . . . . . . . . . 1.10 AP as a function of MAA and carry propagation block size for 32-bit ETAII . . 1.11 AP as a function of MAA and X for 32-bit ETAIV . . . . . . . . . . . . . . . 1.12 Simulation results for Error-Tolerant Adders [37] . . . . . . . . . . . . . . . . 1.13 Estimated parameters of the approximate computation part of a 16-bit AC2A relatively to a conventional 16-bit CLA . . . . . . . . . . . . . . . . . . . . . 1.14 Comparison of 16-bit AC2A approximate part with other adders . . . . . . . . 1.15 Error correction cycles in AC2A . . . . . . . . . . . . 
. . . . . . . . . . . . . 1.16 Accuracy and power consumption of 4-stage pipelined 32-bit AC2A as a function of the active mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.17 Comparison between 32-bit GDAs and exact and approximate static adders . .1.18 Accuracy-configurable implementation of AC2A and GDA . . . . . . . . . . . 1.19 Truth tables of accurate and approximate MA cells . . . . . . . . . . . . . . . 1.20 Accuracy comparison of AAMI and AAMII [47] . . . . . . . . . . . . . . . . 1.21 Accuracy comparison of signed AAM versions I, II and III [48] . . . . . . . . . 1.22 Area ratio comparing to original parallel adder for AAM versions I, II and III [48] 1.23 Equivalence between Radix-4 modified-Booth-encoded symbol Y i and control bits in partial product generation . . . . . . . . . . . . . . . . . . . . . . . . . 1.24 Representation of approximate carry values . . . . . . . . . . . . . . . . . . . 1.25 Accuracy comparison for FBMI and FBMII using ACGPI . . . . . . . . . . . . 1.26 Approximate carry signals generated by ACGPI and ACGPII for n = 8 . . . . 1.27 Comparison of delay and area of ACGPI and ACGPII [50] . . . . . . . . . . . 1.28 Accuracy comparison for FBMI and FBMII using ACGPII . . . . . . . . . . . 1.29 Value of P (pp i,j = k) in LP . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.30 Accuracy comparison for FBMI and FBMIII . . . . . . . . . . . . . . . . . . . Leveraging Power Spectral Density in Fixed-Point System Refine- ment 93 2 1.4.3 Final Discussion on Approximate Operators in Literature . . . . . . . . 90 .1 Motivation for Using Fixed-Point Arithmetic in Low-Power Computing . . . . 93 2.2 Related work on accuracy analysis . . . . . . . . . . . . . . . . . . . . . . . . 94 2.3 PSD-based accuracy evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 98 2.3.1 PSD of a quantization noise . . . . . . . . . . . . . . . . . . . . . . . 98 2.3.2 PSD propagation across a fixed-point Linear and Time-Invariant (LTI) system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 2.4 Experimental Results of Proposed Power Spectral Density (PSD) Propagation Method . Frequency Domain Filtering . . . . . . . . . . . . . . . . . . 101 2.4.1.3 Daubechies 9/7 Discrete Wavelet Transform . . . . . . . . . 101 2.4.2 Validation of the Approach for LTI Systems . . . . . . . . . . . . . . . 103 2.4.3 Influence of the Number of PSD Samples . . . . . . . . . . . . . . . . 103 2.4.4 Comparison with PSD-Agnostic Methods . . . . . . . . . . . . . . . . 104 2.4.5 Frequency Repartition of Output Error . . . . . . . . . . . . . . . . . . 105 2.5 Conclusions about PSD Estimation Method . . . . . . . . . . . . . . . . . . . 106 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 2.4.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 2.4.1.1 Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) Filters . . . . . . . . . . . . . . . . . . . . . . . . . . 101 2.4.1.2 Fast Approximate Arithmetic Operator Error Modeling 107 3 Storage Optimization and Training of the BWER Propagation Data Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 3.2.3 BWER Propagation Algorithm . . . . . . . . . . . . . . . . . . . . . . 112 3.3 Results of the BWER Method on Approximate Adders and Multipliers . . . . . 113 3.3.1 BWER Training Convergence Speed . . . . . . . . . . . . . . . . . . . 114 3.3.2 Evaluation of the Accuracy of BWER Propagation Method . . . . . . . 
3.3.3 Estimation and Simulation Time
3.3.4 Conclusion and Perspectives
3.4 Modeling the Effects of Voltage Over-Scaling (VOS) in Arithmetic Operators
3.4.1 Characterization of Arithmetic Operators
3.4.2 Modelling of VOS Arithmetic Operators
3.4.3 Conclusion

4 Approximate Operators Versus Careful Data Sizing
4.1 ApxPerf: Hardware Performance and Accuracy Characterization Framework for Approximate Computing

Table 1.1 - IEEE 754 normalized floating-point representation

Precision | Mantissa width | Exponent width | Max decimal exponent | Exponent bias
Single precision | 24 | 8 | 38.23 | 127
Double precision | 53 | 11 | 307.95 | 1023
Quadruple precision | 113 | 15 | 4931.77 | 16383

1.2.2 Floating-Point Addition/Subtraction and Multiplication

[Figure - Dual-path floating-point adder [Muller et al., Handbook of Floating-Point Arithmetic]]

Table 1.2 - Cost of FlP addition vs integer addition

| Area (µm²) | Total power (mW) | Critical path (ns) | Power-Delay Product (fJ)
32-bit ac_float | 653 | 4.39E-4 | 2.42 | 1.06E-3
64-bit ac_float | 1453 | 1.12E-3 | 4.02 | 4.50E-3
32-bit ac_int | 189 | 3.66E-5 | 1.06 | 3.88E-5
64-bit ac_int | 373 | 7.14E-5 | 2.10 | 1.50E-4

Table 1.3 - Cost of floating-point multiplication vs integer multiplication

Table 1.4 - Mean and variance of quantization error depending on the rounding method and type of signal

The implementation of the rounding methods discussed above depends on two parameters. Indeed, when quantizing x_1 with a d_1-bit fractional part to x_2 with a d_2-bit fractional part, where d_2 < d_1, only the following information is needed: the round bit (the most significant discarded bit) and the sticky bit (the logical OR of the remaining discarded bits), as summarized in Table 1.5.

Table 1.6 - Full adder truth table

Figure 1.11 - 8-bit Ripple Carry Adder (RCA)

The BKA and KSA are illustrated in their 16-bit versions by Figures 1.13 and 1.14. The parallel adders are the fastest adders, with a delay complexity of O(log n), but with a larger area (2n - log2(n) - 2 compressors for BKA and n log2(n) - n + 1 compressors for KSA, the latter with lower fan-out).

Figure 1.13 - 16-bit BKA. The red-striped square converts (x_i, y_i) to (p, g) (Equation 1.4) and the blue-striped square converts (p, g) to z_i (Equation 1.8). Circles represent (p, g) compressors.

Figure 1.14 - 16-bit KSA. The red-striped square converts (x_i, y_i) to (p, g) (Equation 1.4) and the blue-striped square converts (p, g) to z_i (Equation 1.8). Circles represent (p, g) compressors.

Given the multiplication of two FxP numbers x(m_x, d_x) and y(m_y, d_y), the result z(m_z, d_z) must respect, for its integer part:

m_z = m_x + m_y    (1.10)

to avoid under/overflows. Moreover, if there must be no loss of accuracy, the fractional part must also respect:

d_z = d_x + d_y.    (1.11)

A minimal sizing sketch illustrating these two rules is given after Table 1.7.

Table 1.7 - Radix-4 modified Booth encoding. Y_j is a radix-4 digit that is not represented in the encoding; only the corresponding operation is performed during partial product generation.
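To make Equations 1.10 and 1.11 concrete, here is a minimal sketch that emulates a lossless fixed-point multiplication with plain integers (the function name and the raw two's-complement encoding are illustrative assumptions, not code from the thesis sources):

#include <cstdint>

// An FxP number x(m_x, d_x) is stored as the raw integer x_raw = x * 2^d_x.
// The exact product of x(m_x, d_x) and y(m_y, d_y) is z(m_x + m_y, d_x + d_y):
// the raw values multiply, and the scale factors 2^d_x and 2^d_y add up.
int64_t fxp_mul_raw(int32_t x_raw, int32_t y_raw) {
    // Widening to 64 bits provides room for the (m_x + d_x) + (m_y + d_y) result bits.
    return static_cast<int64_t>(x_raw) * static_cast<int64_t>(y_raw);
}

A fixed-width operator then truncates this exact result back to the input width, dropping the least significant half of the output bits, which is exactly where the accuracy/cost trade-off of the approximate multipliers discussed below is played out.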
Each carry generator block is divided into two parts.

[Figure - 32-bit ETAII/ETAIV structure: carry generator blocks (CLA) feeding sum generator blocks (RCA)]

Table 1.11 - AP as a function of MAA and X for 32-bit ETAIV

Table 1.13 - Estimated parameters of the approximate computation part of a 16-bit AC2A relatively to a conventional 16-bit CLA

The approximate part has an area complexity of O((n - 2k)(log2 k + 1)).

Table 1.14 - Comparison of the 16-bit AC2A approximate part (k = 4) with CLA and other approximate adders

The amplitude and information accuracy metrics are defined as

ACC_amp = 1 - |R_c - R_e| / R_c  and  ACC_inf = 1 - B_e / B_w,    (1.18)

where R_c and R_e are the respective values of the correct and approximate results, B_e the number of erroneous bits in the approximate result and B_w the output bit-width.

During the first cycle, the approximate calculation is performed. Then, each following cycle is dedicated to the successive correction of each sub-adder, from the second one counting from the LSB. By power-gating each partial error-correction system, different levels of accuracy can then be targeted. By design, calculation and correction cycles have the same theoretical maximum delay.

Figure 1.39 - Accuracy vs power for AC2A and other approximate adders under VOS (voltage scaling 1.0 V to 0.6 V; compared adders: ACA, CLA, Lu's adder, ETAI, ETAIIM)

Table 1.15 - Error correction cycles in a 4-block AC2A. Checkmarks mean the output of block S_i is accurate after j cycles, crosses that it is inaccurate.

Table 1.16 - Accuracy and power consumption of 4-stage pipelined 32-bit AC2A as a function of the active mode, and comparison with a conventional pipelined adder

Configuration | ACC_amp (max) | ACC_inf (max) | Total power (mW) | Power reduction
mode-1 | 1.000 | 1.000 | 5.962 | -11.5%
mode-2 | 0.998 | 0.960 | 4.683 | 12.4%
mode-3 | 0.991 | 0.925 | 3.691 | 31.0%
mode-4 | 0.983 | 0.900 | 2.588 | 51.6%

The final configurations ensure that 0.99 <= ACC_amp <= 1.00 for the first series of tests and 0.95 <= ACC_inf <= 1.00 for the second.

Table 1.17 - Comparison between 32-bit GDAs and exact and approximate static adders [Ye et al.]

Table 1.18 - Accuracy-configurable implementation of AC2A and GDA [Ye et al.]

Table 1.19 - Truth tables of accurate and approximate MA cells. Shaded cells indicate design logic errors.
Table 1.20 - Accuracy comparison of AAMI and AAMII [47]

Table 1.22 - Area ratio comparing to the original parallel adder for AAM versions I, II and III [48]

Figure 1.59 - Summand grid for an 8-bit fixed-width Booth multiplier with LP_major-based error correction [49]. The grid is split into the kept part MP and the truncated part composed of LP_major and LP_minor.

The radix-4 modified Booth encoding is determined by radix-4 symbols Y_i. For the generation of a given partial product pp_i,j, the value of input bit x_j and the control bits X_sel, 2X_sel and NEG determined by Y_i from Table 1.23 are used, and the resulting generation circuit is depicted in Figure 1.60.

To get 1-bit statistics on the encoded multiplier instead of the 3 bits representing a symbol Y_i, a reduced symbol Y'_i is derived from Y_i. The relation between the chain of bits {Y'_i}, i in LP_minor, and the value E[S_LP_minor] is given by

E[S_LP_minor] = 2^(-1) * sum_{i=0}^{n/2-2} Y'_i.    (1.31)

Amongst these 108 sequences, 52 verify {E[S_LP_minor]}_r = 2 and 56 verify {E[S_LP_minor]}_r = 1, where {.}_r is the rounding operation. Therefore, the best compensation of the truncated carries generated by LP_minor is 2. As {E[S_LP_minor]}_r is always between 0 and 2, two carries must be transmitted to LP_major, following the rules of Table 1.24. For n = 10, the Karnaugh map representations of a_carry_0 and a_carry_1 as a function of {Y'_i}, i in LP_minor, are given by Figure 1.61; the logic for the carry generation can be determined from this map.

Table 1.24 - Representation of approximate carry values

Table 1.25 - Accuracy comparison for FBMI and FBMII using ACGPI

Multiplier | mu_e (n = 10) | mu_e (n = 12) | sigma²_e (n = 10) | sigma²_e (n = 12)
Rounded | 4.87E-4 | 1.22E-4 | 8.01E-8 | 4.51E-9
Truncated | 9.66E-4 | 2.43E-4 | 3.17E-7 | 1.99E-8
FBMI | 1.22E-3 | 3.21E-4 | 8.22E-6 | 5.21E-7
FBMII | 6.28E-4 | 1.63E-4 | 1.94E-7 | 1.33E-8

"Rounded" and "Truncated" respectively refer to a rounding and a truncation of the output of the original accurate multiplier. FBMII offers a better accuracy than its predecessor FBMI, and also beats truncation despite its nearly-50% area savings. For bigger multiplier sizes, however, the overhead of this approximate carry generation procedure, denoted Approximate Carry Generation Procedure version I (ACGPI), becomes too costly in terms of area and delay. To address this, the author proposes another procedure, denoted Approximate Carry Generation Procedure version II (ACGPII), in which {Y'_i}, i in LP_minor, is first divided into groups of 3, the last group being of size 1, 2 or 3 depending on the set size.
Table 1.26 - Approximate carry signals generated by ACGPI and ACGPII for n = 8

Figure 1.62 - Approximate carry generation circuit using ACGPII for n = 32 (a tree of full- and half-adder cells compressing {Y'_i} into the carries a_carry_0 to a_carry_7)

Table 1.27 - Comparison of delay and area of ACGPI and ACGPII [50]

n | Delay (ns), ACGPI | Delay (ns), ACGPII | Area (# of NAND gates), ACGPI | Area (# of NAND gates), ACGPII
10 | 4.48 | 6.24 | 10 | 11
12 | 7.21 | 6.06 | 22 | 15
14 | 7.94 | 6.21 | 31 | 20
16 | 10.25 | 7.56 | 43 | 23
18 | 10.78 | 7.56 | 55 | 27
32 | 18.23 | 10.20 | 189 | 60

Table 1.28 - Accuracy comparison for FBMI and FBMII using ACGPII

Multiplier | mu_e (n = 16) | mu_e (n = 20) | sigma²_e (n = 16) | sigma²_e (n = 20)
Rounded | 7.51E-6 | 4.63E-7 | 1.96E-11 | 8.08E-14
Truncated | 1.47E-5 | 9.02E-7 | 7.96E-11 | 3.21E-13
FBMI | 1.92E-5 | 1.15E-6 | 1.92E-10 | 8.24E-13
FBMII | 1.07E-5 | 6.35E-7 | 6.38E-11 | 2.44E-13

In terms of error mean and variance, FBMII/ACGPII has a mean square error which is 68.1% lower than FBMI's for n = 16 and 69.8% lower for n = 20.

Figure 1.63 - LP_minor level discrimination for the design of FBMIII (summand grid split into MP, LP_major and LP_minor, with LP_minor columns w, x, y, z)

Using the expression of q_1 (Equation 1.34) as well as the probability values of Table 1.29, E[q_2] is given by

E[q_2] = 0.4375 * (3/16) + 0.4375 * (3/16) + 1 * (5/8) = 0.7890625.    (1.36)

Table 1.29 - Value of P(pp_i,j = k) in LP

The global FBMIII can be mapped as in Figure 1.64, where RFA blocks are Redundant Full-Adder blocks and the C block is the partial correction block applying the two partial compensations; the correction for pp_3,1 can be performed with two AND gates. pp_k,i..j refers to the partial symbolic string from pp_k,i to pp_k,j and pp_B,i..j refers to the Booth-encoded multiplier output from rank i to rank j.

The author gives comparisons of accuracy between the truncated-output version of the fixed-width Booth multiplier and the FBMI presented above, in terms of absolute error mean and error variance.

Table 1.30 - Accuracy comparison for FBMI and FBMIII

These metrics for a given n-bit operator denoted op are defined by Equations 1.42, where Max_RE is the maximum relative error, MA_RE the mean absolute relative error, mu_RE the mean relative error and sigma_RE the relative error standard deviation.

Table 1.31 - Error results for 16-bit DRUM for k between 3 and 8. Error metrics are defined by Equations 1.42.

Metric | k = 3 | k = 4 | k = 5 | k = 6 | k = 7 | k = 8
Max RE (%) | 56.25 | 26.56 | 12.86 | 6.31 | 3.1 | 1.54
MA RE (%) | 11.90 | 5.89 | 2.94 | 1.47 | 0.73 | 0.37
mu RE (%) | 2.08 | 0.53 | -0.14 | -0.04 | 0.01 | 0.01
sigma RE (%) | 14.75 | 7.26 | 3.61 | 1.80 | 0.90 | 0.45

Figure 1.67 - Area and power benefits of 16-bit DRUM relatively to a 16-bit accurate multiplier [52]. DRUMk refers to 16-bit DRUM using a k-bit multiplier. DRUM relies on leading-one detection of the input signal to achieve very accurate estimates.

[Figure - Propagation of quantization noise sources b_i to the output O of an operator graph: (a) traditional flat method, propagating mu_i and sigma²_i from each noise source to the output; (b) PSD-agnostic method, blindly propagating mu_i and sigma²_i; (c) proposed PSD method, propagating mu_i, sigma²_i and PSD_i]
In the experimental setup, N_PSD is set to 1024.

Figure 2.5 - 1-level DWT coder and decoder (analysis low-/high-pass filters LP_c/HP_c with decimation by 2, and synthesis filters LP_d/HP_d with upsampling by 2, producing the subbands x_ll, x_lh, x_hl, x_hh)

Table 2.1 - Relative error power estimation statistics E_d

Table 2.2 - Comparison of E_d between PSD-agnostic method and proposed PSD method

Application | Proposed PSD method (max accuracy) | Proposed PSD method (min accuracy) | PSD-agnostic method
Freq. Filt. | -8.40% | -0.87% | 29.5%
DWT 9/7 | 1.10% | 0.90% | 610%

Figure 2.7 - E_d versus number of PSD samples N_PSD

Time spent on this estimation is usually another critical resource. Figure 2.8 gives the time of output error estimation using the proposed PSD method versus N_PSD. With N_PSD = 16, the proposed method requires about one millisecond for both experiments. With more PSD samples, the time taken by the frequency filtering example grows more slowly than for the Daubechies DWT example, owing to its small size. A speed-up factor of 3-5 orders of magnitude compared to simulation is obtained in both cases, even for the highest value of N_PSD.

As presented in Chapter 1, Section 1.4, many different approximate operators exist. Most adders rely on different ways to break the carry chain, like LPA, ACA and ETA versions II to IV, and most multipliers on pruning the partial products to simplify the summand grid reduction, such as AAM versions I to III and Fixed-width modified-Booth-encoded Multiplier (FBM) versions I to III. Only a few examples, such as ETAI or DRUM, use strongly different techniques. Amongst these operators, some are configurable at run time, like AC2A and GDA.

Table 3.1 - Storage Cost of BWER Propagation Full Data Structure

Operation | n | m | Storage
8-bit addition | 8 | 9 | 2.4 MB
8-bit multiplication | 8 | 16 | 4.2 MB
16-bit addition | 16 | 17 | 292 GB
16-bit multiplication | 16 | 32 | 550 GB

Table 3.2 gives the corresponding memory cost as a function of k for 16-bit addition and multiplication. For 16-bit addition, the data structure size is reduced from 292 GB to 2.4 MB with k = 8 for instance, limiting the consideration of input LSBs to a horizon of 8.

Table 3.2 - Storage Cost of BWER Propagation Data Structure for 16-bit Addition and Multiplication Depending on Reduction Method

Operator | Original | 1st Reduction | 2nd Reduction, k = 10 | k = 8 | k = 6 | k = 4 | k = 2
16-Add | 292 GB | 22 GB | 31 MB | 2.4 MB | 186 kB | 14 kB | 980 B
16-Mul | 550 GB | 280 GB | 93 MB | 6.3 MB | 431 kB | 29 kB | 1.9 kB

Table 3.3 - Partial Input BWER for the Example

Table 3.4 - BWER Propagation and Simulation Time of Stand-Alone Approximate Operators. Simulation is run on 10^7 input samples.

Table 4.1 - Direct comparison of 16-bit-input and output fixed-point and approximate multipliers

Figure 4.3 - Direct comparison of 16-bit-input fixed-point and approximate adders regarding MSE: (a) MSE vs power, (b) MSE vs delay, (c) MSE vs PDP, (d) MSE vs area. Compared designs: truncated and rounded FxP adders from (16,15) down to (16,2), ACA, ETAIV and IMPACT.
Table 4.2 - Accuracy and Energy Consumption of FFT-32 Using 16-bit Fixed-Width Multipliers

Multiplier | PSNR (dB) | PDP (pJ)
MUL_t(16, 16) | 53.88 | 59.66
AAM(16) | 55.43 | 92.49
FBM(16) | -18.14 | 93.26

Figure 4.5 - Power Consumption of FFT-32 Versus Output PSNR Using 16-bit Approximate Adders

In the approximate version of the encoder, DCT operations are computed using fixed-point or approximate operators. The quality metric used to compare the exact and the approximate versions of the JPEG encoder is the Mean Structural Similarity (MSSIM).

Table 4.3 gives the energy spent by the MC filter when replacing all its additions by adders producing an MSSIM of approximately 0.99. In their 16-bit version, ACA and ETAIV can only reach 0.96 and 0.98 respectively. In any case, and as discussed above, the multiplier overhead provokes an energy consumption which is 4.6 times higher for the approximate version than for the truncated FxP version. For multiplier replacement, Table 4.3 shows that both 16-bit AAM and FBM produce an accuracy similar to the fixed-width truncated FxP multiplier. Moreover, replacing the multipliers by FBM in the MC filter does not lead to an important energy overhead, which makes it competitive considering that its delay is 37% lower than MUL_t(16, 16)'s according to Table 4.1. However, AAM suffers from an important energy overhead.

Figure 4.7 - Lena Encoded with DCT Instrumented With Different 16-bit Approximate Operators

Comparison of Fixed-Point and Approximate Operators on Signal Processing Applications

Table 4.5 - Accuracy and Energy of Distance Computation for K-means Clustering Using 16-bit Input Adders for Different Success Rates

Adder | Success Rate | Adder Energy (pJ) | Min. Mult. Energy (pJ) | Total Energy (pJ)
ADD_t(16, 11) | 99.14% | 1.55E-2 | 9.36E-2 | 2.03E-1
ACA(16, 12) | 99.10% | 1.54E-2 | 2.49E-1 | 5.13E-1
ETAIV(16, 4) | 99.43% | 1.30E-2 | 2.49E-1 | 5.11E-1
IMPACT(16, 6, 3) | 99.67% | 1.00E-2 | 2.49E-1 | 5.08E-1
ADD_t(16, 8) | 86.00% | 1.27E-2 | 2.40E-2 | 6.06E-2
ACA(16, 8) | 86.06% | 9.85E-3 | 2.49E-1 | 5.08E-1
ETAIV(16, 2) | 63.25% | 7.00E-3 | 2.49E-1 | 5.05E-1
IMPACT(16, 10, 1) | 87.29% | 1.26E-2 | 2.49E-1 | 5.11E-1

Table 4.6 shows the K-means clustering classification success rate and the energy spent in distance computation using 16-bit input FxP and approximate multipliers. AAM achieves an accuracy similar to the fixed-width truncated accurate multiplier, with a 99% classification success rate; however, it presents an energy overhead of 75%. FBM achieves very poor performance for K-means, with only 10% success, which is equivalent to pruning 12 output bits of a FxP multiplier.

Table 4.6 - Accuracy and Energy of Distance Computation for K-means Clustering Using 16-bit Input Multipliers

Multiplier | Success Rate | Multiplier Energy (pJ) | Min. Adder Energy (pJ) | Total Energy (pJ)
MUL_t(16, 16) | 99.84% | 2.49E-1 | 1.83E-2 | 5.15E-1
AAM(16) | 99.43% | 4.42E-1 | 1.83E-2 | 9.02E-1
FBM(16) | 10.27% | 2.54E-1 | 1.83E-2 | 5.27E-1
MUL_t(16, 4) | 10.87% | 2.04E-1 | 1.24E-3 | 4.09E-1

ct_float, embedded in ApxPerf, offers a balance between computational safety and simplicity. Inspired by ac_float, it is written in C++ for HLS.
Two versions of ct_float exist: one based on the Mentor Graphics ac_int data type, made for Mentor Graphics Catapult C, and the other based on the ap_int data type from Xilinx, used in Vivado for FPGA targets.

FloPoCo (for Floating-Point Cores, but not only) is a generator of arithmetic cores [de Dinechin et al.]. Also based on C++, it has its own synthesis engine and directly returns VHDL. Beyond simple arithmetic operators, it is able to generate optimized floating-point computing cores performing complex arithmetic expressions. In this section, we are only interested in FloPoCo's custom floating-point addition and multiplication. The main difference of FloPoCo's floating-point representation is the extra 2-bit exception field transported in the data. As for ct_float, subnormals are not handled by FloPoCo. Unlike ac_float, neither ct_float nor FloPoCo supports a custom exponent bias.

Table 5.2 - Comparative Results for 16-bit Custom Floating-Point Addition/Subtraction with F_clk = 200 MHz

| Area (µm²) | Critical path (ns) | Total power (mW) | Energy per operation (pJ)
AC_FLOAT | 312 | 1.44 | 1.84E-1 | 9.07E-1
CT_FLOAT | 318 | 1.72 | 2.13E-1 | 1.05
FLOPOCO | 361 | 2.36 | 1.84E-1 | 9.06E-1
CT_FLOAT/AC_FLOAT | +2.15% | +19.4% | +15.4% | +15.7%
CT_FLOAT/FLOPOCO | -11.8% | -27.0% | +15.7% | +15.8%

Table 5.3 - Comparative Results for 16-bit Custom Floating-Point Multiplication with F_clk = 200 MHz

| Area (µm²) | Critical path (ns) | Total power (mW) | Energy per operation (pJ)
AC_FLOAT | 488 | 1.18 | 2.15E-1 | 1.05
CT_FLOAT | 389 | 1.13 | 1.76E-1 | 8.59E-1
FLOPOCO | 361 | 1.52 | 1.34E-1 | 6.50E-1
CT_FLOAT/AC_FLOAT | -20.4% | -4.24% | -18.2% | -18.2%
CT_FLOAT/FLOPOCO | +7.68% | -25.6% | +31.7% | +32.1%

Table 5.5 - Comparative Results for 32-bit Custom Floating-Point Multiplication with F_clk = 200 MHz

Table 5.6 - 8- and 16-bit Performance and Accuracy for K-Means Clustering Experiment

The floating-point version of the algorithm consumes only 1.6x more energy than fixed-point. Moreover, the floating-point version has a huge advantage in terms of accuracy: CMSE is 10x better for floating-point and ER is 1.8x better.

Acronyms (excerpt): AAM - Approximate Array Multiplier; AAMI, AAMII, AAMIII - Approximate Array Multiplier I, II and III; AC2A - Accuracy-Configurable Adder; ACA - Almost Correct Adder; ACGPI, ACGPII - Approximate Carry Generation Procedure versions I and II; AMA1, AMA2, AMA3 - Approximate Mirror Adder types 1 to 3; AP - Acceptance Probability; Apx - Approximate; ASIC - Application Specific Integrated Circuit; BER - Bit Error Rate; BKA - Brent-Kung Adder; BWER - Bitwise-Error Rate; CLA - Carry-Lookahead Adder.

An operator is considered fixed-width when its output has the same width as its inputs. In the considered multiplication case, half of the output LSBs are truncated.
Below is an example of the automated test bench generation and characterization flow of ApxPerf, as logged during an execution:

Search for custom error estimation test bench "err_testbench.cpp"... Not found.
Automated generation of missing test bench(es)...
- Detection of input and output types... Done.
  Input type: ct_float<8,24,CT_RD>
  Output type: ct_float<8,24,CT_RD>
- Instrumentation of hardware performance estimation test bench input generation... Done.
- Compilation of hardware performance estimation test bench input generation... Done.
- Generation of hardware performance estimation test bench inputs... Done.
- Instrumentation of hardware performance estimation test bench... Done.
- Instrumentation of error estimation test bench... Done.
Compilation of error estimation test bench... Done.
Execution of error estimation test bench... Done.
Copy of error estimation results to results folder... Done.
Instrumentation of Catapult C script... Done.
Execution of Catapult C script... Done.
Instrumentation of Design Compiler script... Done.
Detection of a previous compilation of technology libraries... Done.
Execution of Design Compiler script... Done.
Instrumentation of SystemC Verify makefile... Done.
Compilation of SystemC Verify flow... Done.
Preparation of technology libraries for Modelsim... Done.
Instrumentation of Modelsim script for VCD generation... Done.
Execution of SystemC Verify flow... Done.
Instrumentation of PrimeTime script... Done.
Execution of PrimeTime script... Done.
Save of compiled technology libraries for next executions... Done.
Copy of gate-level VHDL, experiment parameters, reports and logs to results directory... Done.

The model is calibrated by a process that minimizes the difference between the outputs of the hardware operator and of its equivalent statistical model, according to a certain metric. In this work, we used three accuracy metrics to calibrate the efficiency of the proposed statistical model:

- Mean Square Error (MSE): the average of the squares of the deviations between the output x_hat of the statistical model and the reference x:

  MSE = (1/N) * sum_{k=1}^{N} (x_hat_k - x_k)²    (3.14)

- Hamming distance: the number of positions with a bit flip between the output x_hat of the statistical model and the reference x:

  D_H(x_hat, x) = sum_i (x_hat_i XOR x_i)

- Weighted Hamming distance: the Hamming distance with a weight for every bit position depending on its significance:

  D_WH(x_hat, x) = sum_i w_i * (x_hat_i XOR x_i)

A minimal code sketch of these three metrics is given at the end of this subsection.

Proof of Concept: Modelling of Adders

In the rest of the section, we develop a proof of concept by applying VOS to different adder configurations. All the adder configurations are subjected to VOS and characterized using the flow described in Fig. 3.10. Fig. 3.14 shows the design flow of the modelling of VOS operators. As shown in Fig. 3.13, a rudimentary model of the hardware operator is created from the input vectors and the statistical parameters. For the given input vectors, the outputs of both the model and the hardware operator are compared based on the defined set of accuracy metrics. The comparator shown in Fig. 3.14 generates the signal-to-noise ratio (SNR) and the Hamming distance to determine the quality of the model based on the accuracy metrics. SNR and Hamming distance are fed back to the optimization algorithm to further fine-tune the model so that it represents the VOS operator. In the case of the adder, a single parameter P_i is used for the statistical model: C_max, the length of the maximum carry chain to be propagated.
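The comparator's three accuracy metrics can be sketched in a few lines of C++. This is a minimal illustration of the definitions above, not the thesis code; the 16-bit word size and the 2^i weighting of the weighted Hamming distance are assumptions:

#include <cstdint>
#include <vector>

// MSE between model outputs and reference outputs, interpreted as real values.
double mse(const std::vector<double>& x_hat, const std::vector<double>& x) {
    double acc = 0.0;
    for (size_t k = 0; k < x.size(); ++k) {
        double d = x_hat[k] - x[k];
        acc += d * d;
    }
    return acc / x.size();
}

// Hamming distance: number of flipped bits between two 16-bit words.
int hamming(uint16_t a, uint16_t b) {
    return __builtin_popcount(a ^ b);
}

// Weighted Hamming distance: each flipped bit is weighted by its significance.
double weighted_hamming(uint16_t a, uint16_t b) {
    uint16_t e = a ^ b;
    double d = 0.0;
    for (int i = 0; i < 16; ++i)
        if ((e >> i) & 1)
            d += static_cast<double>(1u << i); // weight w_i = 2^i (assumed)
    return d;
}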
ApxPerf - Second Version

The second version of ApxPerf brings an extra layer of high-level synthesis. In this version, whose framework is described in Figure 4.2, only one source, written in C++, is needed for both hardware and accuracy estimation.

The HLS and the simulations are performed by Mentor Graphics Catapult C. During HLS, the Register Transfer Level (RTL) representation of the input source is generated. Then, a second compilation pass is ensured by Design Compiler to get a gate-level representation. The gate-level representation is then passed again to Catapult C for Modelsim simulation and verification, using the integrated SystemC Verify framework. Thanks to this framework, the same C++ test bench as for accuracy estimation is used for both hardware verification and generation of the VCD files for PrimeTime power estimation. This way, the statistical distribution of the generated test bench, which can be uniform or random with tunable parameters, is ensured to be the same for hardware performance and accuracy characterizations.

The accuracy estimation part returns the same error metrics as the first version of ApxPerf described in the previous section. The main novelty is the possibility to add any error metric to the error estimation as a plugin, with no need to modify the framework kernel; a sketch of what such a plugin could look like is given below. Another main evolution is the replacement of the Bash and Matlab scripts by Python, which makes this second version more portable. Moreover, except for the hardware characterization part, which requires Mentor Graphics and Synopsys tools, the whole error estimation part, from simulation to results management, is not linked to any third-party software.
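The following sketch illustrates the idea of a plugin-style error metric. It is entirely hypothetical: the actual ApxPerf plugin API is not reproduced here. The point is that a user-defined metric only has to implement an accumulation function over (reference, approximate) sample pairs, without touching the framework kernel:

#include <algorithm>
#include <cmath>

// Hypothetical user metric: maximum absolute error over all samples.
struct MaxAbsError {
    double worst = 0.0;
    void accumulate(double ref, double apx) {
        worst = std::max(worst, std::fabs(ref - apx));
    }
    double result() const { return worst; }
};

// Hypothetical framework-side driver: iterates over reference and approximate
// outputs and feeds any metric satisfying the accumulate()/result() contract.
template <typename Metric, typename It>
double evaluate(It ref_begin, It ref_end, It apx_begin) {
    Metric m;
    for (It r = ref_begin, a = apx_begin; r != ref_end; ++r, ++a)
        m.accumulate(*r, *a);
    return m.result();
}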
Chapter 5 - Fixed-Point Versus Custom Floating-Point Representation in Low-Energy Computing

In Chapter 4, we compared classical fixed-point arithmetic with operator-level approximate computing. The general conclusion was the superiority of fixed-point arithmetic, thanks to a lower error entropy making the error more robust to deterioration during propagation. In this chapter, fixed-point arithmetic is compared to custom floating-point arithmetic. As a reminder, fixed-point arithmetic is presented in Section 1.3 and floating-point arithmetic in Section 1.2. To perform this comparison, the study was led using the second version of ApxPerf, described in Section 4.1.2. This version embeds a synthesizable custom floating-point library called ct_float, presented in Section 5.1 and developed in the context of this thesis. In that section, ct_float is compared to other custom floating-point libraries to first show its efficiency. Then, stand-alone fixed-point and floating-point paradigms are compared in Section 5.2 to appreciate their differences in terms of accuracy and hardware performance. Finally, in Section 5.3, both representations are compared on signal processing applications, K-means clustering and FFT, leveraging relevant metrics.

CT_FLOAT: a Custom Synthesizable Floating-Point Library

The second version of the ApxPerf framework was presented in Section 4.1.2. It allows fast and user-friendly hardware characterization of approximate operators written in C++ thanks to HLS, leveraging Catapult C, Design Compiler, ModelSim and PrimeTime, and error characterization thanks to C++ benchmarks. As mentioned in Section 4.1.2, ApxPerf v2 comes with built-in approximate operator libraries such as apx_fixed, containing approximate integer adders and multipliers in fixed-point representation. This section presents ct_float, the main operator library of ApxPerf v2. Two versions of ct_float are provided:

• a first version based on the Mentor Graphics ac_int datatype, made for Catapult HLS but also for stand-alone error estimation, and
• a second version based on the Xilinx Vivado HLS integer library ap_int, made for Xilinx FPGA targets using Vivado.

Both versions are provided in the same source code and activated through C++ pre-compiler directives. The implementation of ct_float, as for apx_fixed, features:

• Synthesizable operator overloading: unary operators (unary -, !, ++, --), relational operators (<, >, <=, >=, ==, !=), binary operators (+, +=, -, -=, *, *=, <<, <<=, >>, >>=), and the assignment operator from/to another instance of ct_float.
• Non-synthesizable operator overloading: the assignment operator from/to C++ native datatypes (float, double), and the output operator << for easy display and writing to files.

Other built-in functions allow easy manipulation of floating-point values, such as test functions to get information about the extreme representable values for a given floating-point representation, to know if a given value is representable, etc.

The declaration of an instance of ct_float requires three template parameters:

1. the exponent width e,
2. the mantissa width m, and
3. the rounding mode used in arithmetic operators and changes of representation.

Currently, four rounding modes are available, given by Table 1.5 in Chapter 1. The value of the mantissa width m includes the implicit 1 (see below). The representation also includes a sign bit; therefore, the total number of bits in memory is equal to e + m. As mentioned above, two synthesizable operators are available: addition and multiplication. Unlike apx_fixed, the output representation of these operators is not determined so as to prevent under/overflows.

To estimate the energy spent per operation, we also introduce a fair metric: the total energy spent before stabilization. Indeed, in the literature, the energy per operation is often estimated either:

• using the total energy per clock cycle, or
• using the Power-Delay Product (PDP), which is the product of the average dynamic power and the delay of the operation.

In the first case, a fair comparison between two different operators is strongly dependent on their difference in slack. Indeed, let us imagine two operators op_1 and op_2 which have the same size and the same static power. If op_1 stabilizes twice as fast as op_2 with the same dynamic power, then we would naturally tend to say that E(op_1) = (1/2) E(op_2). However, with the total-energy-per-clock-cycle metric, if the slack is high, op_1 and op_2 will seem to have very close energies per operation, which is false. With the PDP metric, the static power is not considered. Therefore, if op_1 and op_2 have very different static powers, which is the case if they do not have roughly equivalent areas, the energy per operation will be biased in favor of the larger operator.

To be perfectly fair, the energy per operation must consider the whole energy (static and dynamic) spent before stabilization, as depicted in Figure 5.1. Considering the average static power P_s, the average dynamic power P_d, the critical path delay T_cp, the clock period T_clk and the number of latency cycles N_c, the total energy spent before stabilization E_op is

E_op = (P_s + P_d) * ((N_c - 1) * T_clk + T_cp).    (5.2)
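As an illustration of why this metric differs from the usual ones, consider a single-cycle operator (N_c = 1) with made-up values P_s = 0.02 mW, P_d = 0.18 mW, T_clk = 5 ns and T_cp = 1.2 ns (these numbers are purely illustrative and rely on the reconstruction of Equation 5.2 given above):

E_op = (0.02 + 0.18) mW * 1.2 ns = 0.24 pJ,

whereas the total energy per clock cycle would report (0.02 + 0.18) mW * 5 ns = 1.0 pJ, and the PDP would report 0.18 mW * 1.2 ns ≈ 0.22 pJ, ignoring static power entirely.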
Comparison on K-Means Clustering Application

This section describes the K-means clustering algorithm and gives the comparative results for FxP and FlP. First, the principle of the K-means method is described. Then, the specific algorithm used in this case study is detailed.

K-Means Clustering Principle, Algorithm and Experimental Setup

K-means clustering is a well-known method for vector quantization, which is mainly used in data mining, e.g. in image classification or voice identification. It consists in organizing a multidimensional space into a given number of clusters, each being totally defined by its centroid. A given vector in the space belongs to the cluster whose centroid it is nearest to. The clustering is optimal when the sum of the distances of all points to the centroids of the clusters they belong to is minimal, which corresponds to finding the set of clusters

S = arg min_S sum_{i=1}^{k} sum_{x in S_i} ||x - mu_i||²,

where mu_i is the centroid of cluster S_i.

Finding the optimal centroid positions of a vector set is mathematically NP-hard. However, iterative algorithms such as Lloyd's algorithm allow us to find good approximations of the optimal centroids by an estimation-maximization process, with a linear complexity (linear in the number of clusters, in the number of data points to process, in the number of dimensions and in the number of iterations).

The iterative Lloyd's algorithm [Lloyd] is used in our case study. It is applied to bidimensional sets of vectors in order to allow easier display and interpretation of the results. From now on, we will only refer to the bidimensional version of the algorithm. Figure 5.4 shows the results of K-means on a random set of input vectors, obtained using double-precision floating-point computation with a very restrictive stopping condition. Results obtained this way are considered as the reference golden output in the rest of the paper.

The algorithm consists of three main steps:

1. Initialization of the centroids.
2. Data labelling.
3. Centroid position update.

Steps 2 and 3 are iterated until a stopping condition is met. In our case, the main stopping condition is when the difference between the sums of all distances from the data points to their cluster's centroid in two successive iterations is less than a given threshold. A second stopping condition is the maximum number of iterations, required to avoid the algorithm getting stuck when the arithmetic approximations performed are too strong for it to converge.

The detailed algorithm for one dimension is given by Algorithm 3. Input data are represented by the vector data of size N_data, and output centroids by the vector c of size k. The accuracy target for the stopping condition is defined by acc_target and the maximum allowed number of iterations by max_iter. In our study, we use several values for acc_target, and max_iter is set to 150, which is never reached in our experiments.

The impact of fixed-point and floating-point arithmetic on performance and accuracy is evaluated considering the distance computation function distance_comp, defined by:

d <- (x - y) * (x - y).    (5.4)

The computation is written this way instead of using a square function in order to let the HLS determine the intermediate types, thanks to the C++ native type overloading implemented in ct_float and ac_fixed, which are used for the floating-point and fixed-point implementations, respectively. A minimal sketch of such a kernel is given below. All the other parts of the computations are implemented using double-precision floating-point, and their contribution to the performance cost is not evaluated. Using a whole approximate K-means application would require these operations to be approximated the same way as the distance computation.
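The following sketch shows how such a kernel can be written so that the intermediate types are deduced from the datatype's overloaded operators during HLS. It is an illustration of the idea, not the thesis source; T stands for either the fixed-point (ac_fixed) or the floating-point (ct_float) datatype:

// 1-D distance contribution of Equation (5.4); written without a square
// function so that the types of (x - y) and of the product are deduced
// from T's overloaded operators rather than imposed by the caller.
template <typename T>
auto distance_comp(T x, T y) -> decltype((x - y) * (x - y)) {
    auto diff = x - y;   // type deduced from T's operator-
    return diff * diff;  // type deduced from operator* on the difference
}

// Example instantiation, reusing the type reported by the ApxPerf log above:
// distance_comp(ct_float<8, 24, CT_RD>(a), ct_float<8, 24, CT_RD>(b));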
However, as the distance computation is the most complex part of the algorithm and as it is the deepest operation in the inner loops, its impact on accuracy and performance is the most critical. In the 2D case, the distance computation becomes

d <- (x_1 - y_1) * (x_1 - y_1) + (x_2 - y_2) * (x_2 - y_2),

which is equivalent to 1 addition, 2 subtractions, and 2 multiplications. However, as the distance computation is cumulative over the dimensions, the hardware implementation relies only on 1 addition (accumulation), 1 subtraction, and 1 multiplication.

Abstract

The physical limits being reached in silicon-based computing, new ways have to be found to overcome the predicted end of Moore's law. Many applications can tolerate approximations in their computations at several levels without degrading the quality of their output, or degrading it in an acceptable way. This thesis focuses on approximate arithmetic architectures to seize this opportunity. Firstly, a critical study of state-of-the-art approximate adders and multipliers is presented. Then, a model for fixed-point error propagation leveraging power spectral density is proposed, followed by a model for bitwise-error rate propagation of approximate operators. Approximate operators are then used for the reproduction of voltage over-scaling effects in exact arithmetic operators. Leveraging our open-source framework ApxPerf and its synthesizable template-based C++ libraries apx_fixed for approximate operators, and ct_float for low-power floating-point arithmetic, two consecutive studies are proposed, both leveraging complex signal processing applications. Firstly, approximate operators are compared to fixed-point arithmetic, and the superiority of fixed-point is highlighted. Secondly, fixed-point is compared to small-width floating-point in equivalent conditions. Depending on the applicative conditions, floating-point shows an unexpected competitiveness compared to fixed-point. The results and discussions of this thesis give a fresh look at approximate arithmetic and suggest new directions for the future of energy-efficient architectures.